Jan 31 06:09:18 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 06:09:18 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 06:09:18 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 06:09:18 localhost kernel: BIOS-provided physical RAM map:
Jan 31 06:09:18 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 06:09:18 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 06:09:18 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 06:09:18 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 06:09:18 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 06:09:18 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 06:09:18 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 06:09:18 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 06:09:18 localhost kernel: NX (Execute Disable) protection: active
Jan 31 06:09:18 localhost kernel: APIC: Static calls initialized
Jan 31 06:09:18 localhost kernel: SMBIOS 2.8 present.
Jan 31 06:09:18 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 06:09:18 localhost kernel: Hypervisor detected: KVM
Jan 31 06:09:18 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 06:09:18 localhost kernel: kvm-clock: using sched offset of 10344652995 cycles
Jan 31 06:09:18 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 06:09:18 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 31 06:09:18 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 31 06:09:18 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 31 06:09:18 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 06:09:18 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 06:09:18 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 06:09:18 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 06:09:18 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 06:09:18 localhost kernel: Using GB pages for direct mapping
Jan 31 06:09:18 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 06:09:18 localhost kernel: ACPI: Early table checksum verification disabled
Jan 31 06:09:18 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 06:09:18 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:18 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:18 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:18 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 06:09:18 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:18 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 06:09:18 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 06:09:18 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 06:09:18 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 06:09:18 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 06:09:18 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 06:09:18 localhost kernel: No NUMA configuration found
Jan 31 06:09:18 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 06:09:18 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 31 06:09:18 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 06:09:18 localhost kernel: Zone ranges:
Jan 31 06:09:18 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 06:09:18 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 06:09:18 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 06:09:18 localhost kernel:   Device   empty
Jan 31 06:09:18 localhost kernel: Movable zone start for each node
Jan 31 06:09:18 localhost kernel: Early memory node ranges
Jan 31 06:09:18 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 06:09:18 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 06:09:18 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 06:09:18 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 06:09:18 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 06:09:18 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 06:09:18 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 06:09:18 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 06:09:18 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 06:09:18 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 06:09:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 06:09:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 06:09:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 06:09:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 06:09:18 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 06:09:18 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 06:09:18 localhost kernel: TSC deadline timer available
Jan 31 06:09:18 localhost kernel: CPU topo: Max. logical packages:   8
Jan 31 06:09:18 localhost kernel: CPU topo: Max. logical dies:       8
Jan 31 06:09:18 localhost kernel: CPU topo: Max. dies per package:   1
Jan 31 06:09:18 localhost kernel: CPU topo: Max. threads per core:   1
Jan 31 06:09:18 localhost kernel: CPU topo: Num. cores per package:     1
Jan 31 06:09:18 localhost kernel: CPU topo: Num. threads per package:   1
Jan 31 06:09:18 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 06:09:18 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 06:09:18 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 06:09:18 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 06:09:18 localhost kernel: Booting paravirtualized kernel on KVM
Jan 31 06:09:18 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 06:09:18 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 06:09:18 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 06:09:18 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 31 06:09:18 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 31 06:09:18 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 06:09:18 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 06:09:18 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 06:09:18 localhost kernel: random: crng init done
Jan 31 06:09:18 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 06:09:18 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 06:09:18 localhost kernel: Fallback order for Node 0: 0 
Jan 31 06:09:18 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 06:09:18 localhost kernel: Policy zone: Normal
Jan 31 06:09:18 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 06:09:18 localhost kernel: software IO TLB: area num 8.
Jan 31 06:09:18 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 06:09:18 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 06:09:18 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 06:09:18 localhost kernel: Dynamic Preempt: voluntary
Jan 31 06:09:18 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 06:09:18 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 31 06:09:18 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 06:09:18 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 31 06:09:18 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 31 06:09:18 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 31 06:09:18 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 06:09:18 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 06:09:18 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 06:09:18 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 06:09:18 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 06:09:18 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 06:09:18 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 06:09:18 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 06:09:18 localhost kernel: Console: colour VGA+ 80x25
Jan 31 06:09:18 localhost kernel: printk: console [ttyS0] enabled
Jan 31 06:09:18 localhost kernel: ACPI: Core revision 20230331
Jan 31 06:09:18 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 06:09:18 localhost kernel: x2apic enabled
Jan 31 06:09:18 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 06:09:18 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 06:09:18 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 31 06:09:18 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 06:09:18 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 06:09:18 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 06:09:18 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 06:09:18 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 06:09:18 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 06:09:18 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 06:09:18 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 06:09:18 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 06:09:18 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 06:09:18 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 06:09:18 localhost kernel: active return thunk: retbleed_return_thunk
Jan 31 06:09:18 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 06:09:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 06:09:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 06:09:18 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 06:09:18 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 06:09:18 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 06:09:18 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 31 06:09:18 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 31 06:09:18 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 06:09:18 localhost kernel: landlock: Up and running.
Jan 31 06:09:18 localhost kernel: Yama: becoming mindful.
Jan 31 06:09:18 localhost kernel: SELinux:  Initializing.
Jan 31 06:09:18 localhost kernel: LSM support for eBPF active
Jan 31 06:09:18 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 06:09:18 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 06:09:18 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 06:09:18 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 06:09:18 localhost kernel: ... version:                0
Jan 31 06:09:18 localhost kernel: ... bit width:              48
Jan 31 06:09:18 localhost kernel: ... generic registers:      6
Jan 31 06:09:18 localhost kernel: ... value mask:             0000ffffffffffff
Jan 31 06:09:18 localhost kernel: ... max period:             00007fffffffffff
Jan 31 06:09:18 localhost kernel: ... fixed-purpose events:   0
Jan 31 06:09:18 localhost kernel: ... event mask:             000000000000003f
Jan 31 06:09:18 localhost kernel: signal: max sigframe size: 1776
Jan 31 06:09:18 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 31 06:09:18 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 31 06:09:18 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 31 06:09:18 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 31 06:09:18 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 06:09:18 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 06:09:18 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 31 06:09:18 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 31 06:09:18 localhost kernel: Memory: 7763724K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618408K reserved, 0K cma-reserved)
Jan 31 06:09:18 localhost kernel: devtmpfs: initialized
Jan 31 06:09:18 localhost kernel: x86/mm: Memory block size: 128MB
Jan 31 06:09:18 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 06:09:18 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 06:09:18 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 06:09:18 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 06:09:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 06:09:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 06:09:18 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 06:09:18 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 31 06:09:18 localhost kernel: audit: type=2000 audit(1769839757.817:1): state=initialized audit_enabled=0 res=1
Jan 31 06:09:18 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 06:09:18 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 06:09:18 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 06:09:18 localhost kernel: cpuidle: using governor menu
Jan 31 06:09:18 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 06:09:18 localhost kernel: PCI: Using configuration type 1 for base access
Jan 31 06:09:18 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 31 06:09:18 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 06:09:18 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 06:09:18 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 06:09:18 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 06:09:18 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 06:09:18 localhost kernel: Demotion targets for Node 0: null
Jan 31 06:09:18 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 06:09:18 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 31 06:09:18 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 31 06:09:18 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 06:09:18 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 06:09:18 localhost kernel: ACPI: Interpreter enabled
Jan 31 06:09:18 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 06:09:18 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 06:09:18 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 06:09:18 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 06:09:18 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 06:09:18 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 06:09:18 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [3] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [4] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [5] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [6] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [7] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [8] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [9] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [10] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [11] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [12] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [13] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [14] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [15] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [16] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [17] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [18] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [19] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [20] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [21] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [22] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [23] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [24] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [25] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [26] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [27] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [28] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [29] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [30] registered
Jan 31 06:09:18 localhost kernel: acpiphp: Slot [31] registered
Jan 31 06:09:18 localhost kernel: PCI host bridge to bus 0000:00
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 06:09:18 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 06:09:18 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 06:09:18 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 06:09:18 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 06:09:18 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 06:09:18 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 06:09:18 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 06:09:18 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 06:09:18 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 06:09:18 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 06:09:18 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 06:09:18 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 06:09:18 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 06:09:18 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 06:09:18 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 06:09:18 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 06:09:18 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 06:09:18 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 06:09:18 localhost kernel: iommu: Default domain type: Translated
Jan 31 06:09:18 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 06:09:18 localhost kernel: SCSI subsystem initialized
Jan 31 06:09:18 localhost kernel: ACPI: bus type USB registered
Jan 31 06:09:18 localhost kernel: usbcore: registered new interface driver usbfs
Jan 31 06:09:18 localhost kernel: usbcore: registered new interface driver hub
Jan 31 06:09:18 localhost kernel: usbcore: registered new device driver usb
Jan 31 06:09:18 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 06:09:18 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 06:09:18 localhost kernel: PTP clock support registered
Jan 31 06:09:18 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 31 06:09:18 localhost kernel: NetLabel: Initializing
Jan 31 06:09:18 localhost kernel: NetLabel:  domain hash size = 128
Jan 31 06:09:18 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 06:09:18 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 06:09:18 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 31 06:09:18 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 31 06:09:18 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 31 06:09:18 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 06:09:18 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 06:09:18 localhost kernel: vgaarb: loaded
Jan 31 06:09:18 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 06:09:18 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 06:09:18 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 06:09:18 localhost kernel: pnp: PnP ACPI init
Jan 31 06:09:18 localhost kernel: pnp 00:03: [dma 2]
Jan 31 06:09:18 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 31 06:09:18 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 06:09:18 localhost kernel: NET: Registered PF_INET protocol family
Jan 31 06:09:18 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 06:09:18 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 06:09:18 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 06:09:18 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 06:09:18 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 06:09:18 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 06:09:18 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 06:09:18 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 06:09:18 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 06:09:18 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 06:09:18 localhost kernel: NET: Registered PF_XDP protocol family
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 06:09:18 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 06:09:18 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 06:09:18 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 06:09:18 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 31163 usecs
Jan 31 06:09:18 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 31 06:09:18 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 06:09:18 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 06:09:18 localhost kernel: ACPI: bus type thunderbolt registered
Jan 31 06:09:18 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 31 06:09:18 localhost kernel: Initialise system trusted keyrings
Jan 31 06:09:18 localhost kernel: Key type blacklist registered
Jan 31 06:09:18 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 06:09:18 localhost kernel: zbud: loaded
Jan 31 06:09:18 localhost kernel: integrity: Platform Keyring initialized
Jan 31 06:09:18 localhost kernel: integrity: Machine keyring initialized
Jan 31 06:09:18 localhost kernel: Freeing initrd memory: 88000K
Jan 31 06:09:18 localhost kernel: NET: Registered PF_ALG protocol family
Jan 31 06:09:18 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 31 06:09:18 localhost kernel: Key type asymmetric registered
Jan 31 06:09:18 localhost kernel: Asymmetric key parser 'x509' registered
Jan 31 06:09:18 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 06:09:18 localhost kernel: io scheduler mq-deadline registered
Jan 31 06:09:18 localhost kernel: io scheduler kyber registered
Jan 31 06:09:18 localhost kernel: io scheduler bfq registered
Jan 31 06:09:18 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 06:09:18 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 06:09:18 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 06:09:18 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 31 06:09:18 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 06:09:18 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 06:09:18 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 06:09:18 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 06:09:18 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 06:09:18 localhost kernel: Non-volatile memory driver v1.3
Jan 31 06:09:18 localhost kernel: rdac: device handler registered
Jan 31 06:09:18 localhost kernel: hp_sw: device handler registered
Jan 31 06:09:18 localhost kernel: emc: device handler registered
Jan 31 06:09:18 localhost kernel: alua: device handler registered
Jan 31 06:09:18 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 06:09:18 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 06:09:18 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 06:09:18 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 06:09:18 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 06:09:18 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 06:09:18 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 31 06:09:19 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 06:09:19 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 06:09:19 localhost kernel: hub 1-0:1.0: USB hub found
Jan 31 06:09:19 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 31 06:09:19 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 06:09:19 localhost kernel: usbserial: USB Serial support registered for generic
Jan 31 06:09:19 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 06:09:19 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 06:09:19 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 06:09:19 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 06:09:19 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 06:09:19 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 06:09:19 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T06:09:18 UTC (1769839758)
Jan 31 06:09:19 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 06:09:19 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 06:09:19 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 06:09:19 localhost kernel: usbcore: registered new interface driver usbhid
Jan 31 06:09:19 localhost kernel: usbhid: USB HID core driver
Jan 31 06:09:19 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 31 06:09:19 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 06:09:19 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 06:09:19 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 06:09:19 localhost kernel: Initializing XFRM netlink socket
Jan 31 06:09:19 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 31 06:09:19 localhost kernel: Segment Routing with IPv6
Jan 31 06:09:19 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 31 06:09:19 localhost kernel: mpls_gso: MPLS GSO support
Jan 31 06:09:19 localhost kernel: IPI shorthand broadcast: enabled
Jan 31 06:09:19 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 06:09:19 localhost kernel: AES CTR mode by8 optimization enabled
Jan 31 06:09:19 localhost kernel: sched_clock: Marking stable (1204016608, 151709778)->(1432204353, -76477967)
Jan 31 06:09:19 localhost kernel: registered taskstats version 1
Jan 31 06:09:19 localhost kernel: Loading compiled-in X.509 certificates
Jan 31 06:09:19 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 06:09:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 06:09:19 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 06:09:19 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 06:09:19 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 06:09:19 localhost kernel: Demotion targets for Node 0: null
Jan 31 06:09:19 localhost kernel: page_owner is disabled
Jan 31 06:09:19 localhost kernel: Key type .fscrypt registered
Jan 31 06:09:19 localhost kernel: Key type fscrypt-provisioning registered
Jan 31 06:09:19 localhost kernel: Key type big_key registered
Jan 31 06:09:19 localhost kernel: Key type encrypted registered
Jan 31 06:09:19 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 06:09:19 localhost kernel: Loading compiled-in module X.509 certificates
Jan 31 06:09:19 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 06:09:19 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 31 06:09:19 localhost kernel: ima: No architecture policies found
Jan 31 06:09:19 localhost kernel: evm: Initialising EVM extended attributes:
Jan 31 06:09:19 localhost kernel: evm: security.selinux
Jan 31 06:09:19 localhost kernel: evm: security.SMACK64 (disabled)
Jan 31 06:09:19 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 06:09:19 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 06:09:19 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 06:09:19 localhost kernel: evm: security.apparmor (disabled)
Jan 31 06:09:19 localhost kernel: evm: security.ima
Jan 31 06:09:19 localhost kernel: evm: security.capability
Jan 31 06:09:19 localhost kernel: evm: HMAC attrs: 0x1
Jan 31 06:09:19 localhost kernel: Running certificate verification RSA selftest
Jan 31 06:09:19 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 06:09:19 localhost kernel: Running certificate verification ECDSA selftest
Jan 31 06:09:19 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 06:09:19 localhost kernel: clk: Disabling unused clocks
Jan 31 06:09:19 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 31 06:09:19 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 06:09:19 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 31 06:09:19 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 06:09:19 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 06:09:19 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 06:09:19 localhost kernel: Run /init as init process
Jan 31 06:09:19 localhost kernel:   with arguments:
Jan 31 06:09:19 localhost kernel:     /init
Jan 31 06:09:19 localhost kernel:   with environment:
Jan 31 06:09:19 localhost kernel:     HOME=/
Jan 31 06:09:19 localhost kernel:     TERM=linux
Jan 31 06:09:19 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Jan 31 06:09:19 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 06:09:19 localhost systemd[1]: Detected virtualization kvm.
Jan 31 06:09:19 localhost systemd[1]: Detected architecture x86-64.
Jan 31 06:09:19 localhost systemd[1]: Running in initrd.
Jan 31 06:09:19 localhost systemd[1]: No hostname configured, using default hostname.
Jan 31 06:09:19 localhost systemd[1]: Hostname set to <localhost>.
Jan 31 06:09:19 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 31 06:09:19 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 31 06:09:19 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 06:09:19 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 06:09:19 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 31 06:09:19 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 06:09:19 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 06:09:19 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 06:09:19 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 31 06:09:19 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 06:09:19 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 06:09:19 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 06:09:19 localhost systemd[1]: Reached target Local File Systems.
Jan 31 06:09:19 localhost systemd[1]: Reached target Path Units.
Jan 31 06:09:19 localhost systemd[1]: Reached target Slice Units.
Jan 31 06:09:19 localhost systemd[1]: Reached target Swaps.
Jan 31 06:09:19 localhost systemd[1]: Reached target Timer Units.
Jan 31 06:09:19 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 06:09:19 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 31 06:09:19 localhost systemd[1]: Listening on Journal Socket.
Jan 31 06:09:19 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 06:09:19 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 06:09:19 localhost systemd[1]: Reached target Socket Units.
Jan 31 06:09:19 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 06:09:19 localhost systemd[1]: Starting Journal Service...
Jan 31 06:09:19 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 06:09:19 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 06:09:19 localhost systemd[1]: Starting Create System Users...
Jan 31 06:09:19 localhost systemd[1]: Starting Setup Virtual Console...
Jan 31 06:09:19 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 06:09:19 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 06:09:19 localhost systemd-journald[301]: Journal started
Jan 31 06:09:19 localhost systemd-journald[301]: Runtime Journal (/run/log/journal/8f281c2a1a4441ea82686c420f002b7f) is 8.0M, max 153.6M, 145.6M free.
Jan 31 06:09:19 localhost systemd-sysusers[305]: Creating group 'users' with GID 100.
Jan 31 06:09:19 localhost systemd[1]: Started Journal Service.
Jan 31 06:09:19 localhost systemd-sysusers[305]: Creating group 'dbus' with GID 81.
Jan 31 06:09:19 localhost systemd-sysusers[305]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 06:09:19 localhost systemd[1]: Finished Create System Users.
Jan 31 06:09:19 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 06:09:19 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 06:09:19 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 06:09:19 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 06:09:19 localhost systemd[1]: Finished Setup Virtual Console.
Jan 31 06:09:19 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 06:09:19 localhost systemd[1]: Starting dracut cmdline hook...
Jan 31 06:09:19 localhost dracut-cmdline[321]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 06:09:19 localhost dracut-cmdline[321]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 06:09:19 localhost systemd[1]: Finished dracut cmdline hook.
Jan 31 06:09:19 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 31 06:09:19 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 06:09:19 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 31 06:09:19 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 06:09:19 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 31 06:09:19 localhost kernel: RPC: Registered udp transport module.
Jan 31 06:09:19 localhost kernel: RPC: Registered tcp transport module.
Jan 31 06:09:19 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 06:09:19 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 06:09:19 localhost rpc.statd[438]: Version 2.5.4 starting
Jan 31 06:09:19 localhost rpc.statd[438]: Initializing NSM state
Jan 31 06:09:19 localhost rpc.idmapd[443]: Setting log level to 0
Jan 31 06:09:19 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 31 06:09:19 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 06:09:19 localhost systemd-udevd[456]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 06:09:19 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 06:09:19 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 31 06:09:19 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 31 06:09:19 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 06:09:19 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 06:09:19 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 31 06:09:19 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 06:09:19 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 06:09:19 localhost systemd[1]: Reached target Network.
Jan 31 06:09:19 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 06:09:19 localhost systemd[1]: Starting dracut initqueue hook...
Jan 31 06:09:19 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 06:09:19 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 06:09:19 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 06:09:19 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 06:09:19 localhost kernel:  vda: vda1
Jan 31 06:09:19 localhost systemd-udevd[484]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:09:19 localhost kernel: libata version 3.00 loaded.
Jan 31 06:09:19 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 31 06:09:19 localhost kernel: scsi host0: ata_piix
Jan 31 06:09:19 localhost kernel: scsi host1: ata_piix
Jan 31 06:09:19 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 06:09:19 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 06:09:19 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 06:09:19 localhost systemd[1]: Reached target Initrd Root Device.
Jan 31 06:09:20 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 31 06:09:20 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 31 06:09:20 localhost systemd[1]: Reached target System Initialization.
Jan 31 06:09:20 localhost systemd[1]: Reached target Basic System.
Jan 31 06:09:20 localhost kernel: ata1: found unknown device (class 0)
Jan 31 06:09:20 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 06:09:20 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 06:09:20 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 06:09:20 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 06:09:20 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 06:09:20 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 31 06:09:20 localhost systemd[1]: Finished dracut initqueue hook.
Jan 31 06:09:20 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 06:09:20 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 06:09:20 localhost systemd[1]: Reached target Remote File Systems.
Jan 31 06:09:20 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 31 06:09:20 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 31 06:09:20 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 06:09:20 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 06:09:20 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 06:09:20 localhost systemd[1]: Mounting /sysroot...
Jan 31 06:09:20 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 06:09:20 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 06:09:21 localhost kernel: XFS (vda1): Ending clean mount
Jan 31 06:09:21 localhost systemd[1]: Mounted /sysroot.
Jan 31 06:09:21 localhost systemd[1]: Reached target Initrd Root File System.
Jan 31 06:09:21 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 06:09:21 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 06:09:21 localhost systemd[1]: Reached target Initrd File Systems.
Jan 31 06:09:21 localhost systemd[1]: Reached target Initrd Default Target.
Jan 31 06:09:21 localhost systemd[1]: Starting dracut mount hook...
Jan 31 06:09:21 localhost systemd[1]: Finished dracut mount hook.
Jan 31 06:09:21 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 06:09:21 localhost rpc.idmapd[443]: exiting on signal 15
Jan 31 06:09:21 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 06:09:21 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 06:09:21 localhost systemd[1]: Stopped target Network.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Timer Units.
Jan 31 06:09:21 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 06:09:21 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Basic System.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Path Units.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Remote File Systems.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Slice Units.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Socket Units.
Jan 31 06:09:21 localhost systemd[1]: Stopped target System Initialization.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Local File Systems.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Swaps.
Jan 31 06:09:21 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped dracut mount hook.
Jan 31 06:09:21 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 31 06:09:21 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 06:09:21 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 06:09:21 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 31 06:09:21 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 31 06:09:21 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 06:09:21 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 06:09:21 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 06:09:21 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 06:09:21 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 31 06:09:21 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 06:09:21 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 06:09:21 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Closed udev Control Socket.
Jan 31 06:09:21 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Closed udev Kernel Socket.
Jan 31 06:09:21 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 31 06:09:21 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 31 06:09:21 localhost systemd[1]: Starting Cleanup udev Database...
Jan 31 06:09:21 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 06:09:21 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 06:09:21 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Stopped Create System Users.
Jan 31 06:09:21 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 06:09:21 localhost systemd[1]: Finished Cleanup udev Database.
Jan 31 06:09:21 localhost systemd[1]: Reached target Switch Root.
Jan 31 06:09:21 localhost systemd[1]: Starting Switch Root...
Jan 31 06:09:21 localhost systemd[1]: Switching root.
Jan 31 06:09:21 localhost systemd-journald[301]: Journal stopped
Jan 31 06:09:23 localhost systemd-journald[301]: Received SIGTERM from PID 1 (systemd).
Jan 31 06:09:23 localhost kernel: audit: type=1404 audit(1769839762.144:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 06:09:23 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:09:23 localhost kernel: SELinux:  policy capability open_perms=1
Jan 31 06:09:23 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:09:23 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:09:23 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:09:23 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:09:23 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:09:23 localhost kernel: audit: type=1403 audit(1769839762.348:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 06:09:23 localhost systemd[1]: Successfully loaded SELinux policy in 231.782ms.
Jan 31 06:09:23 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.405ms.
Jan 31 06:09:23 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 06:09:23 localhost systemd[1]: Detected virtualization kvm.
Jan 31 06:09:23 localhost systemd[1]: Detected architecture x86-64.
Jan 31 06:09:23 localhost systemd-rc-local-generator[634]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:09:23 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 31 06:09:23 localhost systemd[1]: Stopped Switch Root.
Jan 31 06:09:23 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 06:09:23 localhost systemd[1]: Created slice Slice /system/getty.
Jan 31 06:09:23 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 31 06:09:23 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 31 06:09:23 localhost systemd[1]: Created slice User and Session Slice.
Jan 31 06:09:23 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 06:09:23 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 31 06:09:23 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 06:09:23 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 06:09:23 localhost systemd[1]: Stopped target Switch Root.
Jan 31 06:09:23 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 31 06:09:23 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 31 06:09:23 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 31 06:09:23 localhost systemd[1]: Reached target Path Units.
Jan 31 06:09:23 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 31 06:09:23 localhost systemd[1]: Reached target Slice Units.
Jan 31 06:09:23 localhost systemd[1]: Reached target Swaps.
Jan 31 06:09:23 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 31 06:09:23 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 31 06:09:23 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 31 06:09:23 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 31 06:09:23 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 31 06:09:23 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 06:09:23 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 06:09:23 localhost systemd[1]: Mounting Huge Pages File System...
Jan 31 06:09:23 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 31 06:09:23 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 31 06:09:23 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 31 06:09:23 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 06:09:23 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 06:09:23 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 06:09:23 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 31 06:09:23 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 31 06:09:23 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 31 06:09:23 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 06:09:23 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 31 06:09:23 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 31 06:09:23 localhost systemd[1]: Stopped Journal Service.
Jan 31 06:09:23 localhost systemd[1]: Starting Journal Service...
Jan 31 06:09:23 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 06:09:23 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 31 06:09:23 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 06:09:23 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 31 06:09:23 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 06:09:23 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 06:09:23 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 06:09:23 localhost kernel: ACPI: bus type drm_connector registered
Jan 31 06:09:23 localhost kernel: fuse: init (API version 7.37)
Jan 31 06:09:23 localhost systemd[1]: Mounted Huge Pages File System.
Jan 31 06:09:23 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 06:09:23 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 31 06:09:23 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 31 06:09:23 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 06:09:23 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 06:09:23 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 06:09:23 localhost systemd-journald[675]: Journal started
Jan 31 06:09:23 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 06:09:23 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 31 06:09:23 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 06:09:23 localhost systemd[1]: Started Journal Service.
Jan 31 06:09:23 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 06:09:23 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 31 06:09:23 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 06:09:23 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 06:09:23 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 06:09:23 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 06:09:23 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 31 06:09:23 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 06:09:23 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 06:09:23 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 06:09:23 localhost systemd[1]: Mounting FUSE Control File System...
Jan 31 06:09:23 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 06:09:23 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 31 06:09:23 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 06:09:23 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 06:09:23 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 06:09:23 localhost systemd[1]: Starting Create System Users...
Jan 31 06:09:23 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 06:09:23 localhost systemd[1]: Mounted FUSE Control File System.
Jan 31 06:09:23 localhost systemd-journald[675]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 06:09:23 localhost systemd-journald[675]: Received client request to flush runtime journal.
Jan 31 06:09:23 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 06:09:23 localhost systemd[1]: Finished Create System Users.
Jan 31 06:09:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 06:09:23 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 06:09:23 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 06:09:23 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 06:09:23 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 06:09:23 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 06:09:23 localhost systemd[1]: Reached target Local File Systems.
Jan 31 06:09:23 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 06:09:23 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 06:09:23 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 06:09:23 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 06:09:23 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 06:09:23 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 06:09:23 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 06:09:23 localhost bootctl[693]: Couldn't find EFI system partition, skipping.
Jan 31 06:09:23 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 06:09:23 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 06:09:23 localhost systemd[1]: Starting Security Auditing Service...
Jan 31 06:09:23 localhost systemd[1]: Starting RPC Bind...
Jan 31 06:09:23 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 06:09:23 localhost auditd[699]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 06:09:23 localhost auditd[699]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 06:09:23 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 06:09:23 localhost systemd[1]: Started RPC Bind.
Jan 31 06:09:23 localhost augenrules[704]: /sbin/augenrules: No change
Jan 31 06:09:24 localhost augenrules[719]: No rules
Jan 31 06:09:24 localhost augenrules[719]: enabled 1
Jan 31 06:09:24 localhost augenrules[719]: failure 1
Jan 31 06:09:24 localhost augenrules[719]: pid 699
Jan 31 06:09:24 localhost augenrules[719]: rate_limit 0
Jan 31 06:09:24 localhost augenrules[719]: backlog_limit 8192
Jan 31 06:09:24 localhost augenrules[719]: lost 0
Jan 31 06:09:24 localhost augenrules[719]: backlog 0
Jan 31 06:09:24 localhost augenrules[719]: backlog_wait_time 60000
Jan 31 06:09:24 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 31 06:09:24 localhost augenrules[719]: enabled 1
Jan 31 06:09:24 localhost augenrules[719]: failure 1
Jan 31 06:09:24 localhost augenrules[719]: pid 699
Jan 31 06:09:24 localhost augenrules[719]: rate_limit 0
Jan 31 06:09:24 localhost augenrules[719]: backlog_limit 8192
Jan 31 06:09:24 localhost augenrules[719]: lost 0
Jan 31 06:09:24 localhost augenrules[719]: backlog 3
Jan 31 06:09:24 localhost augenrules[719]: backlog_wait_time 60000
Jan 31 06:09:24 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 31 06:09:24 localhost augenrules[719]: enabled 1
Jan 31 06:09:24 localhost augenrules[719]: failure 1
Jan 31 06:09:24 localhost augenrules[719]: pid 699
Jan 31 06:09:24 localhost augenrules[719]: rate_limit 0
Jan 31 06:09:24 localhost augenrules[719]: backlog_limit 8192
Jan 31 06:09:24 localhost augenrules[719]: lost 0
Jan 31 06:09:24 localhost augenrules[719]: backlog 0
Jan 31 06:09:24 localhost augenrules[719]: backlog_wait_time 60000
Jan 31 06:09:24 localhost augenrules[719]: backlog_wait_time_actual 0
Jan 31 06:09:24 localhost systemd[1]: Started Security Auditing Service.
Jan 31 06:09:24 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 06:09:24 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 06:09:24 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 31 06:09:24 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 06:09:24 localhost systemd-udevd[727]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 06:09:24 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 06:09:24 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 06:09:24 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 06:09:24 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 06:09:24 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 06:09:24 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 06:09:24 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 06:09:24 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 06:09:24 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 06:09:24 localhost systemd-udevd[743]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:09:24 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 06:09:24 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 06:09:24 localhost kernel: Console: switching to colour dummy device 80x25
Jan 31 06:09:24 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 06:09:24 localhost kernel: [drm] features: -context_init
Jan 31 06:09:24 localhost kernel: [drm] number of scanouts: 1
Jan 31 06:09:24 localhost kernel: [drm] number of cap sets: 0
Jan 31 06:09:24 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 06:09:24 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 06:09:24 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 31 06:09:24 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 06:09:24 localhost kernel: kvm_amd: TSC scaling supported
Jan 31 06:09:24 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 31 06:09:24 localhost kernel: kvm_amd: Nested Paging enabled
Jan 31 06:09:24 localhost kernel: kvm_amd: LBR virtualization supported
Jan 31 06:09:24 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 06:09:24 localhost systemd[1]: Starting Update is Completed...
Jan 31 06:09:24 localhost systemd[1]: Finished Update is Completed.
Jan 31 06:09:25 localhost systemd[1]: Reached target System Initialization.
Jan 31 06:09:25 localhost systemd[1]: Started dnf makecache --timer.
Jan 31 06:09:25 localhost systemd[1]: Started Daily rotation of log files.
Jan 31 06:09:25 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 06:09:25 localhost systemd[1]: Reached target Timer Units.
Jan 31 06:09:25 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 06:09:25 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 06:09:25 localhost systemd[1]: Reached target Socket Units.
Jan 31 06:09:25 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 31 06:09:25 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 06:09:25 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 31 06:09:25 localhost systemd[1]: Reached target Basic System.
Jan 31 06:09:25 localhost dbus-broker-lau[809]: Ready
Jan 31 06:09:25 localhost systemd[1]: Starting NTP client/server...
Jan 31 06:09:25 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 06:09:25 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 06:09:25 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 06:09:25 localhost systemd[1]: Started irqbalance daemon.
Jan 31 06:09:25 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 06:09:25 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:09:25 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:09:25 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 06:09:25 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 31 06:09:25 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 06:09:25 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 31 06:09:25 localhost systemd[1]: Starting User Login Management...
Jan 31 06:09:25 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 06:09:25 localhost systemd-logind[816]: New seat seat0.
Jan 31 06:09:25 localhost systemd-logind[816]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 06:09:25 localhost systemd-logind[816]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 06:09:25 localhost systemd[1]: Started User Login Management.
Jan 31 06:09:25 localhost chronyd[828]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 06:09:25 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 06:09:25 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 06:09:25 localhost chronyd[828]: Loaded 0 symmetric keys
Jan 31 06:09:25 localhost chronyd[828]: Using right/UTC timezone to obtain leap second data
Jan 31 06:09:25 localhost chronyd[828]: Loaded seccomp filter (level 2)
Jan 31 06:09:25 localhost systemd[1]: Started NTP client/server.
Jan 31 06:09:25 localhost iptables.init[814]: iptables: Applying firewall rules: [  OK  ]
Jan 31 06:09:25 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 06:09:26 localhost cloud-init[837]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 06:09:26 +0000. Up 9.15 seconds.
Jan 31 06:09:26 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 31 06:09:26 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 31 06:09:26 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpj4b7iewc.mount: Deactivated successfully.
Jan 31 06:09:26 localhost systemd[1]: Starting Hostname Service...
Jan 31 06:09:26 localhost systemd[1]: Started Hostname Service.
Jan 31 06:09:26 np0005603608.novalocal systemd-hostnamed[851]: Hostname set to <np0005603608.novalocal> (static)
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Reached target Preparation for Network.
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Starting Network Manager...
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.3665] NetworkManager (version 1.54.3-2.el9) is starting... (boot:c2831af4-3642-4ec1-ade3-1145c1dbedc8)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.3669] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.3936] manager[0x55b930b38000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4119] hostname: hostname: using hostnamed
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4119] hostname: static hostname changed from (none) to "np0005603608.novalocal"
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4127] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4351] manager[0x55b930b38000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4353] manager[0x55b930b38000]: rfkill: WWAN hardware radio set enabled
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4509] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4510] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4511] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4512] manager: Networking is enabled by state file
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4515] settings: Loaded settings plugin: keyfile (internal)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4582] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4611] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4629] dhcp: init: Using DHCP client 'internal'
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4634] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4651] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4665] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4678] device (lo): Activation: starting connection 'lo' (0c594273-030a-44be-9be5-1e39156ec0e8)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4688] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4692] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4718] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4724] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4727] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4729] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4732] device (eth0): carrier: link connected
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4736] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4744] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4748] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4753] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4754] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4757] manager: NetworkManager state is now CONNECTING
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4759] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4767] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4771] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4817] dhcp4 (eth0): state changed new lease, address=38.129.56.152
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4824] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Started Network Manager.
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Reached target Network.
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.4889] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5139] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5143] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5145] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5168] device (lo): Activation: successful, device activated.
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5178] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5185] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5188] device (eth0): Activation: successful, device activated.
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5196] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 06:09:27 np0005603608.novalocal NetworkManager[855]: <info>  [1769839767.5199] manager: startup complete
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Reached target NFS client services.
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: Reached target Remote File Systems.
Jan 31 06:09:27 np0005603608.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 06:09:27 +0000. Up 10.50 seconds.
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |  eth0  | True |        38.129.56.152        | 255.255.255.0 | global | fa:16:3e:67:05:98 |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe67:598/64 |       .       |  link  | fa:16:3e:67:05:98 |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 06:09:27 np0005603608.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 06:09:32 np0005603608.novalocal chronyd[828]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 31 06:09:32 np0005603608.novalocal chronyd[828]: System clock TAI offset set to 37 seconds
Jan 31 06:09:33 np0005603608.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Jan 31 06:09:33 np0005603608.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 31 06:09:33 np0005603608.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Jan 31 06:09:33 np0005603608.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Jan 31 06:09:33 np0005603608.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Jan 31 06:09:33 np0005603608.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Generating public/private rsa key pair.
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: The key fingerprint is:
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: SHA256:sqsz7Lhvmu2W18CD6Im4da3VXd1cK4mGH+rEJGTaiDQ root@np0005603608.novalocal
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: The key's randomart image is:
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: +---[RSA 3072]----+
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |                 |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |                 |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |   E   o        .|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |  . o *   . ...oo|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |   o =.oSo +.o..o|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |  . ..+++.+.. .  |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |.o.o..++.+..     |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |o.o=Bo..+        |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |..=OB=.  .       |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: The key fingerprint is:
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: SHA256:e5N9drk3X5SwzMzL5fKxTip2dbxXI/qhDwmEku98D98 root@np0005603608.novalocal
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: The key's randomart image is:
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: +---[ECDSA 256]---+
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |                 |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |      . .        |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |     o . .   .   |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |      o .   = o .|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |       .S.   * +.|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |      o  ..oo.++=|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |       o.o+oo===B|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |        ..+=++*+B|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |          .+=E.=*|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: The key fingerprint is:
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: SHA256:ZeSUSnbZtWludQiTHbmVg26cq0v97MO1P7+4FrqepnQ root@np0005603608.novalocal
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: The key's randomart image is:
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: +--[ED25519 256]--+
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |          o+o++o.|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |        o++ o+o*.|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |       o o+ o *o+|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |        .o   B...|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |        S   . +  |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |            .+  .|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |         . Eo.o o|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |        . o+..+= |
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: |         .+=+oo=O|
Jan 31 06:09:34 np0005603608.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Reached target Network is Online.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Starting System Logging Service...
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Starting Permit User Sessions...
Jan 31 06:09:34 np0005603608.novalocal sm-notify[1004]: Version 2.5.4 starting
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Finished Permit User Sessions.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Started Command Scheduler.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Started Getty on tty1.
Jan 31 06:09:34 np0005603608.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Jan 31 06:09:34 np0005603608.novalocal sshd[1006]: Server listening on :: port 22.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 31 06:09:34 np0005603608.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Jan 31 06:09:34 np0005603608.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 31 06:09:34 np0005603608.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 26% if used.)
Jan 31 06:09:34 np0005603608.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Reached target Login Prompts.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 31 06:09:34 np0005603608.novalocal sshd-session[1012]: Connection reset by 38.102.83.114 port 33862 [preauth]
Jan 31 06:09:34 np0005603608.novalocal sshd-session[1020]: Unable to negotiate with 38.102.83.114 port 33866: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 31 06:09:34 np0005603608.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Jan 31 06:09:34 np0005603608.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Started System Logging Service.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Reached target Multi-User System.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 06:09:34 np0005603608.novalocal sshd-session[1036]: Unable to negotiate with 38.102.83.114 port 33882: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 06:09:34 np0005603608.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 06:09:35 np0005603608.novalocal sshd-session[1045]: Unable to negotiate with 38.102.83.114 port 33888: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 31 06:09:35 np0005603608.novalocal sshd-session[1053]: Connection reset by 38.102.83.114 port 33896 [preauth]
Jan 31 06:09:35 np0005603608.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:09:35 np0005603608.novalocal sshd-session[1073]: Unable to negotiate with 38.102.83.114 port 33900: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 31 06:09:35 np0005603608.novalocal sshd-session[1076]: Unable to negotiate with 38.102.83.114 port 33910: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 31 06:09:35 np0005603608.novalocal sshd-session[1022]: Connection closed by 38.102.83.114 port 33870 [preauth]
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1083]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 06:09:35 +0000. Up 17.74 seconds.
Jan 31 06:09:35 np0005603608.novalocal sshd-session[1064]: Connection closed by 38.102.83.114 port 33898 [preauth]
Jan 31 06:09:35 np0005603608.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Jan 31 06:09:35 np0005603608.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 06:09:35 np0005603608.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 06:09:35 np0005603608.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: IRQ 25 affinity is now unmanaged
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: IRQ 31 affinity is now unmanaged
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: IRQ 28 affinity is now unmanaged
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: IRQ 32 affinity is now unmanaged
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: IRQ 30 affinity is now unmanaged
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 06:09:35 np0005603608.novalocal irqbalance[815]: IRQ 29 affinity is now unmanaged
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1279]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 06:09:35 +0000. Up 18.19 seconds.
Jan 31 06:09:35 np0005603608.novalocal dracut[1287]: dracut-057-102.git20250818.el9
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1304]: #############################################################
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1305]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1307]: 256 SHA256:e5N9drk3X5SwzMzL5fKxTip2dbxXI/qhDwmEku98D98 root@np0005603608.novalocal (ECDSA)
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1309]: 256 SHA256:ZeSUSnbZtWludQiTHbmVg26cq0v97MO1P7+4FrqepnQ root@np0005603608.novalocal (ED25519)
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1311]: 3072 SHA256:sqsz7Lhvmu2W18CD6Im4da3VXd1cK4mGH+rEJGTaiDQ root@np0005603608.novalocal (RSA)
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1312]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1313]: #############################################################
Jan 31 06:09:35 np0005603608.novalocal cloud-init[1279]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 06:09:35 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 18.39 seconds
Jan 31 06:09:35 np0005603608.novalocal dracut[1289]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 06:09:35 np0005603608.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 06:09:35 np0005603608.novalocal systemd[1]: Reached target Cloud-init target.
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 06:09:36 np0005603608.novalocal dracut[1289]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: memstrack is not available
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: memstrack is not available
Jan 31 06:09:37 np0005603608.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 06:09:37 np0005603608.novalocal dracut[1289]: *** Including module: systemd ***
Jan 31 06:09:38 np0005603608.novalocal dracut[1289]: *** Including module: fips ***
Jan 31 06:09:38 np0005603608.novalocal dracut[1289]: *** Including module: systemd-initrd ***
Jan 31 06:09:38 np0005603608.novalocal dracut[1289]: *** Including module: i18n ***
Jan 31 06:09:38 np0005603608.novalocal dracut[1289]: *** Including module: drm ***
Jan 31 06:09:38 np0005603608.novalocal dracut[1289]: *** Including module: prefixdevname ***
Jan 31 06:09:38 np0005603608.novalocal dracut[1289]: *** Including module: kernel-modules ***
Jan 31 06:09:39 np0005603608.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]: *** Including module: kernel-modules-extra ***
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]: *** Including module: qemu ***
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]: *** Including module: fstab-sys ***
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]: *** Including module: rootfs-block ***
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]: *** Including module: terminfo ***
Jan 31 06:09:39 np0005603608.novalocal dracut[1289]: *** Including module: udev-rules ***
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: Skipping udev rule: 91-permissions.rules
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: *** Including module: virtiofs ***
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: *** Including module: dracut-systemd ***
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: *** Including module: usrmount ***
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: *** Including module: base ***
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: *** Including module: fs-lib ***
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: *** Including module: kdumpbase ***
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]:   microcode_ctl module: mangling fw_dir
Jan 31 06:09:40 np0005603608.novalocal dracut[1289]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]: *** Including module: openssl ***
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]: *** Including module: shutdown ***
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]: *** Including module: squash ***
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]: *** Including modules done ***
Jan 31 06:09:41 np0005603608.novalocal dracut[1289]: *** Installing kernel module dependencies ***
Jan 31 06:09:42 np0005603608.novalocal dracut[1289]: *** Installing kernel module dependencies done ***
Jan 31 06:09:42 np0005603608.novalocal dracut[1289]: *** Resolving executable dependencies ***
Jan 31 06:09:44 np0005603608.novalocal dracut[1289]: *** Resolving executable dependencies done ***
Jan 31 06:09:44 np0005603608.novalocal dracut[1289]: *** Generating early-microcode cpio image ***
Jan 31 06:09:44 np0005603608.novalocal dracut[1289]: *** Store current command line parameters ***
Jan 31 06:09:44 np0005603608.novalocal dracut[1289]: Stored kernel commandline:
Jan 31 06:09:44 np0005603608.novalocal dracut[1289]: No dracut internal kernel commandline stored in the initramfs
Jan 31 06:09:45 np0005603608.novalocal dracut[1289]: *** Install squash loader ***
Jan 31 06:09:46 np0005603608.novalocal dracut[1289]: *** Squashing the files inside the initramfs ***
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: *** Squashing the files inside the initramfs done ***
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: *** Hardlinking files ***
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: Mode:           real
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: Files:          50
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: Linked:         0 files
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: Compared:       0 xattrs
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: Compared:       0 files
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: Saved:          0 B
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: Duration:       0.000291 seconds
Jan 31 06:09:47 np0005603608.novalocal dracut[1289]: *** Hardlinking files done ***
Jan 31 06:09:48 np0005603608.novalocal dracut[1289]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 06:09:49 np0005603608.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Jan 31 06:09:49 np0005603608.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Jan 31 06:09:49 np0005603608.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 31 06:09:49 np0005603608.novalocal systemd[1]: Startup finished in 1.422s (kernel) + 3.362s (initrd) + 27.326s (userspace) = 32.110s.
Jan 31 06:09:53 np0005603608.novalocal sshd-session[4304]: Accepted publickey for zuul from 38.102.83.114 port 52370 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 31 06:09:53 np0005603608.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 31 06:09:53 np0005603608.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 06:09:53 np0005603608.novalocal systemd-logind[816]: New session 1 of user zuul.
Jan 31 06:09:53 np0005603608.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 06:09:53 np0005603608.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Queued start job for default target Main User Target.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Created slice User Application Slice.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Reached target Paths.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Reached target Timers.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Starting D-Bus User Message Bus Socket...
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Starting Create User's Volatile Files and Directories...
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Finished Create User's Volatile Files and Directories.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Listening on D-Bus User Message Bus Socket.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Reached target Sockets.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Reached target Basic System.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Reached target Main User Target.
Jan 31 06:09:53 np0005603608.novalocal systemd[4308]: Startup finished in 126ms.
Jan 31 06:09:53 np0005603608.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 31 06:09:53 np0005603608.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 31 06:09:53 np0005603608.novalocal sshd-session[4304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:09:54 np0005603608.novalocal python3[4390]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:09:57 np0005603608.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:09:59 np0005603608.novalocal python3[4420]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:10:15 np0005603608.novalocal python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:10:16 np0005603608.novalocal python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 06:10:18 np0005603608.novalocal python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDchShM99lH6I0ER4M5bdTKJqBTEwI+oB9SwUKCFnfSFe+YXdwGln/ZQz1oTQoc7uHsosGAjxkBLnzurBq9QuoyCLJfHlIRMt33udq87cbS+4TPUzX86YzbvCdjL2JcQ7HQdT/t4eiTsq/T6rUG6NN8sZSab/kVk1sT3I1DEnUGPGWr5xAUZ/TMosNE9wHhXQsHXN13G6YeYDfG/h+84mm6kTISBC+8M8Ne+jGn4udnhGcj24MjbKqS4l405WKsvB7IHwjnkEFFSQ0MXxcPMC+W1PqE0JQeoE6StfGL1kcrrAyVCz+t2vX8dRWY/nDCcOyEiXPb/tEW8ddykqk/ZgDlBYlNimaPvgLPoGTr6XZHfSGRjrYhiPwQI9xa+AHOZOXJsMdEBoZk1VMty2FQTwJgfV/t7gi5q5lagFfQ44wTy8HBcOiaQ08p2URDYYWoqWLaBV2TDvVmjvuCHKZJiWKb9vRE0G1BBNIVjIvkTPeu+7RayIoYTevDiRJsPJ0pJi0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:18 np0005603608.novalocal python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:19 np0005603608.novalocal python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:19 np0005603608.novalocal python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839819.2283046-251-87089234838268/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ef956508a7f94c3dbffb4bb08b3ee84f_id_rsa follow=False checksum=38004d5aef71e7771fda9a333b3cee8e58c249ca backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:20 np0005603608.novalocal python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:20 np0005603608.novalocal python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839820.2012732-306-161370411103172/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ef956508a7f94c3dbffb4bb08b3ee84f_id_rsa.pub follow=False checksum=aabe17515195de3c7c885977f74a37a021ba1e01 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:22 np0005603608.novalocal python3[4980]: ansible-ping Invoked with data=pong
Jan 31 06:10:23 np0005603608.novalocal python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:10:25 np0005603608.novalocal python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 06:10:26 np0005603608.novalocal python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:26 np0005603608.novalocal python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:27 np0005603608.novalocal python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:27 np0005603608.novalocal python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:28 np0005603608.novalocal python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:28 np0005603608.novalocal python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:29 np0005603608.novalocal sudo[5238]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wczkibhmmzlukebpbkmeauipqtqzbadh ; /usr/bin/python3'
Jan 31 06:10:29 np0005603608.novalocal sudo[5238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:30 np0005603608.novalocal python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:30 np0005603608.novalocal sudo[5238]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:30 np0005603608.novalocal sudo[5316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brnntcwhfeczhngcgouccqkiecopmzfm ; /usr/bin/python3'
Jan 31 06:10:30 np0005603608.novalocal sudo[5316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:30 np0005603608.novalocal python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:30 np0005603608.novalocal sudo[5316]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:30 np0005603608.novalocal sudo[5389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwqtslarcwkmlyqwufklfltxlrrdhews ; /usr/bin/python3'
Jan 31 06:10:30 np0005603608.novalocal sudo[5389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:31 np0005603608.novalocal python3[5391]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839830.2382808-31-158225293740359/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:31 np0005603608.novalocal sudo[5389]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:31 np0005603608.novalocal python3[5439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:32 np0005603608.novalocal python3[5463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:32 np0005603608.novalocal python3[5487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:32 np0005603608.novalocal python3[5511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:32 np0005603608.novalocal python3[5535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:33 np0005603608.novalocal python3[5559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:33 np0005603608.novalocal python3[5583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:34 np0005603608.novalocal python3[5607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:35 np0005603608.novalocal python3[5631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:35 np0005603608.novalocal python3[5655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:35 np0005603608.novalocal python3[5679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:36 np0005603608.novalocal python3[5703]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:36 np0005603608.novalocal python3[5727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:36 np0005603608.novalocal python3[5751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:37 np0005603608.novalocal python3[5775]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:37 np0005603608.novalocal python3[5799]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:37 np0005603608.novalocal python3[5823]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:37 np0005603608.novalocal chronyd[828]: Selected source 209.227.173.244 (2.centos.pool.ntp.org)
Jan 31 06:10:38 np0005603608.novalocal python3[5847]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:38 np0005603608.novalocal python3[5871]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:38 np0005603608.novalocal python3[5895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:39 np0005603608.novalocal python3[5919]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:39 np0005603608.novalocal python3[5943]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:39 np0005603608.novalocal python3[5967]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:39 np0005603608.novalocal python3[5991]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:40 np0005603608.novalocal python3[6015]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:40 np0005603608.novalocal python3[6039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:10:41 np0005603608.novalocal sudo[6063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlpefnnmokeunptowptmglpuowabzcxv ; /usr/bin/python3'
Jan 31 06:10:41 np0005603608.novalocal sudo[6063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:41 np0005603608.novalocal python3[6065]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 06:10:41 np0005603608.novalocal systemd[1]: Starting Time & Date Service...
Jan 31 06:10:41 np0005603608.novalocal systemd[1]: Started Time & Date Service.
Jan 31 06:10:41 np0005603608.novalocal systemd-timedated[6067]: Changed time zone to 'UTC' (UTC).
Jan 31 06:10:41 np0005603608.novalocal sudo[6063]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:41 np0005603608.novalocal sudo[6094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-savsdgvpvtkuopsgixpzrakgnldpljwh ; /usr/bin/python3'
Jan 31 06:10:41 np0005603608.novalocal sudo[6094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:41 np0005603608.novalocal python3[6096]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:41 np0005603608.novalocal sudo[6094]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:42 np0005603608.novalocal python3[6172]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:42 np0005603608.novalocal python3[6243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769839842.1915808-251-123671871371651/source _original_basename=tmpc6v6yjc0 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:43 np0005603608.novalocal python3[6343]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:43 np0005603608.novalocal python3[6414]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839843.0964112-301-93302681672312/source _original_basename=tmpkjvf9pen follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:45 np0005603608.novalocal sudo[6514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijpwhmvgdiuwoqaxjcoivfcnkqhkmfke ; /usr/bin/python3'
Jan 31 06:10:45 np0005603608.novalocal sudo[6514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:45 np0005603608.novalocal python3[6516]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:45 np0005603608.novalocal sudo[6514]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:45 np0005603608.novalocal sudo[6587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuektulvrjuedjjwmdgldozezlyvikjt ; /usr/bin/python3'
Jan 31 06:10:45 np0005603608.novalocal sudo[6587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:45 np0005603608.novalocal python3[6589]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769839845.182583-381-274219539310749/source _original_basename=tmpr5pckzi3 follow=False checksum=ba2fada3a1d57aea19e9ba610581bd6c67f83744 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:45 np0005603608.novalocal sudo[6587]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:46 np0005603608.novalocal python3[6637]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:46 np0005603608.novalocal python3[6663]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:47 np0005603608.novalocal sudo[6741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbpgegtxkcynmppxyxcfthbvgsbvigdb ; /usr/bin/python3'
Jan 31 06:10:47 np0005603608.novalocal sudo[6741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:47 np0005603608.novalocal python3[6743]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:10:47 np0005603608.novalocal sudo[6741]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:47 np0005603608.novalocal sudo[6814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ismfvlpgtuudvuqpzpdznktrfehpqjog ; /usr/bin/python3'
Jan 31 06:10:47 np0005603608.novalocal sudo[6814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:47 np0005603608.novalocal python3[6816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769839847.2544062-451-249436042209477/source _original_basename=tmpzmyeh_4d follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:10:47 np0005603608.novalocal sudo[6814]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:48 np0005603608.novalocal sudo[6865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slsgohbyoakuejpnihhwnnztbjcajzcn ; /usr/bin/python3'
Jan 31 06:10:48 np0005603608.novalocal sudo[6865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:10:48 np0005603608.novalocal python3[6867]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-a598-46a5-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:10:48 np0005603608.novalocal sudo[6865]: pam_unix(sudo:session): session closed for user root
Jan 31 06:10:49 np0005603608.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-a598-46a5-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 06:10:50 np0005603608.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:11 np0005603608.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 06:11:12 np0005603608.novalocal sudo[6949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtyhuefqvfedgxjoxblnlvdicsshthba ; /usr/bin/python3'
Jan 31 06:11:12 np0005603608.novalocal sudo[6949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:11:12 np0005603608.novalocal python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:11:12 np0005603608.novalocal sudo[6949]: pam_unix(sudo:session): session closed for user root
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 06:11:56 np0005603608.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 06:11:56 np0005603608.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4508] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 06:11:56 np0005603608.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4680] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4715] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4719] device (eth1): carrier: link connected
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4723] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4731] policy: auto-activating connection 'Wired connection 1' (8273820e-2eee-3fa4-9427-36949f2c7c4b)
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4738] device (eth1): Activation: starting connection 'Wired connection 1' (8273820e-2eee-3fa4-9427-36949f2c7c4b)
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4739] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4744] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4751] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:11:56 np0005603608.novalocal NetworkManager[855]: <info>  [1769839916.4758] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:11:56 np0005603608.novalocal systemd[4308]: Starting Mark boot as successful...
Jan 31 06:11:56 np0005603608.novalocal systemd[4308]: Finished Mark boot as successful.
Jan 31 06:11:57 np0005603608.novalocal python3[6980]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-8b99-369b-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:12:07 np0005603608.novalocal sudo[7058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwouubwowbrlhiiekwgqgosfomzbsycp ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:12:07 np0005603608.novalocal sudo[7058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:07 np0005603608.novalocal python3[7060]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:12:07 np0005603608.novalocal sudo[7058]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:07 np0005603608.novalocal sudo[7131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkuqvgoewoqwadzocemhntppdcutwlmz ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:12:07 np0005603608.novalocal sudo[7131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:07 np0005603608.novalocal python3[7133]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769839926.9646504-104-231875471216184/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=36a5d03fbeb50142f9ad00722ddfc7b68cf493f9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:12:07 np0005603608.novalocal sudo[7131]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:08 np0005603608.novalocal sudo[7181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzbnxvqqjlvdhhrrqqsfnorbbwwtwvh ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:12:08 np0005603608.novalocal sudo[7181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:12:08 np0005603608.novalocal python3[7183]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[855]: <info>  [1769839928.3461] caught SIGTERM, shutting down normally.
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Stopping Network Manager...
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[855]: <info>  [1769839928.3468] dhcp4 (eth0): canceled DHCP transaction
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[855]: <info>  [1769839928.3469] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[855]: <info>  [1769839928.3469] dhcp4 (eth0): state changed no lease
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[855]: <info>  [1769839928.3472] manager: NetworkManager state is now CONNECTING
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[855]: <info>  [1769839928.3611] dhcp4 (eth1): canceled DHCP transaction
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[855]: <info>  [1769839928.3611] dhcp4 (eth1): state changed no lease
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[855]: <info>  [1769839928.4425] exiting (success)
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Stopped Network Manager.
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: NetworkManager.service: Consumed 1.520s CPU time, 10.0M memory peak.
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Starting Network Manager...
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.4950] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:c2831af4-3642-4ec1-ade3-1145c1dbedc8)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.4952] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5001] manager[0x555eace58000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Starting Hostname Service...
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Started Hostname Service.
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5698] hostname: hostname: using hostnamed
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5702] hostname: static hostname changed from (none) to "np0005603608.novalocal"
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5709] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5716] manager[0x555eace58000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5717] manager[0x555eace58000]: rfkill: WWAN hardware radio set enabled
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5759] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5760] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5761] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5762] manager: Networking is enabled by state file
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5765] settings: Loaded settings plugin: keyfile (internal)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5771] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5815] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5829] dhcp: init: Using DHCP client 'internal'
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5834] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5842] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5848] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5860] device (lo): Activation: starting connection 'lo' (0c594273-030a-44be-9be5-1e39156ec0e8)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5868] device (eth0): carrier: link connected
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5874] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5880] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5881] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5889] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5899] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5907] device (eth1): carrier: link connected
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5912] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5918] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (8273820e-2eee-3fa4-9427-36949f2c7c4b) (indicated)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5919] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5925] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5934] device (eth1): Activation: starting connection 'Wired connection 1' (8273820e-2eee-3fa4-9427-36949f2c7c4b)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5941] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Started Network Manager.
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5946] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5949] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5951] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5954] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5957] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5960] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5963] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5967] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5975] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5979] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5988] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.5992] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.6013] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.6016] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 06:12:08 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839928.6026] device (lo): Activation: successful, device activated.
Jan 31 06:12:08 np0005603608.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 06:12:08 np0005603608.novalocal sudo[7181]: pam_unix(sudo:session): session closed for user root
Jan 31 06:12:08 np0005603608.novalocal python3[7249]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-8b99-369b-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:12:10 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839930.0766] dhcp4 (eth0): state changed new lease, address=38.129.56.152
Jan 31 06:12:10 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839930.0781] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 06:12:10 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839930.3465] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 06:12:10 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839930.3499] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 06:12:10 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839930.3502] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 06:12:10 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839930.3508] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 06:12:10 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839930.3513] device (eth0): Activation: successful, device activated.
Jan 31 06:12:10 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839930.3521] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 06:12:20 np0005603608.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:12:38 np0005603608.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:12:47 np0005603608.novalocal chronyd[828]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3288] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 06:12:54 np0005603608.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:12:54 np0005603608.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3572] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3577] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3593] device (eth1): Activation: successful, device activated.
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3607] manager: startup complete
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3612] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <warn>  [1769839974.3625] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3636] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 06:12:54 np0005603608.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3731] dhcp4 (eth1): canceled DHCP transaction
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3732] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3732] dhcp4 (eth1): state changed no lease
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3756] policy: auto-activating connection 'ci-private-network' (0c8dc28c-09cf-5c4d-82d0-76839923fad3)
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3764] device (eth1): Activation: starting connection 'ci-private-network' (0c8dc28c-09cf-5c4d-82d0-76839923fad3)
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3766] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3773] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3785] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.3800] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.4187] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.4189] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 06:12:54 np0005603608.novalocal NetworkManager[7200]: <info>  [1769839974.4196] device (eth1): Activation: successful, device activated.
Jan 31 06:13:04 np0005603608.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:13:08 np0005603608.novalocal sshd-session[4317]: Received disconnect from 38.102.83.114 port 52370:11: disconnected by user
Jan 31 06:13:08 np0005603608.novalocal sshd-session[4317]: Disconnected from user zuul 38.102.83.114 port 52370
Jan 31 06:13:08 np0005603608.novalocal sshd-session[4304]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:13:08 np0005603608.novalocal systemd-logind[816]: Session 1 logged out. Waiting for processes to exit.
Jan 31 06:14:11 np0005603608.novalocal sshd-session[7296]: Accepted publickey for zuul from 38.102.83.114 port 60710 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:14:11 np0005603608.novalocal systemd-logind[816]: New session 3 of user zuul.
Jan 31 06:14:11 np0005603608.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 31 06:14:11 np0005603608.novalocal sshd-session[7296]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:14:11 np0005603608.novalocal sudo[7375]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzefbhyvklzwjqfxbidqjqgumrxnnlfm ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:14:11 np0005603608.novalocal sudo[7375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:14:12 np0005603608.novalocal python3[7377]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:14:12 np0005603608.novalocal sudo[7375]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:12 np0005603608.novalocal sudo[7448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czfqdtcpohoatitoqtitsknscagduxrd ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 06:14:12 np0005603608.novalocal sudo[7448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:14:12 np0005603608.novalocal python3[7450]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840051.7649617-373-7721513694942/source _original_basename=tmp00g1ol1j follow=False checksum=9316f16cb173efcd9fb25ed4519cb4ede33986d5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:14:12 np0005603608.novalocal sudo[7448]: pam_unix(sudo:session): session closed for user root
Jan 31 06:14:16 np0005603608.novalocal sshd-session[7299]: Connection closed by 38.102.83.114 port 60710
Jan 31 06:14:16 np0005603608.novalocal sshd-session[7296]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:14:16 np0005603608.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 06:14:16 np0005603608.novalocal systemd-logind[816]: Session 3 logged out. Waiting for processes to exit.
Jan 31 06:14:16 np0005603608.novalocal systemd-logind[816]: Removed session 3.
Jan 31 06:15:48 np0005603608.novalocal systemd[4308]: Created slice User Background Tasks Slice.
Jan 31 06:15:48 np0005603608.novalocal systemd[4308]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 06:15:48 np0005603608.novalocal systemd[4308]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 06:23:37 np0005603608.novalocal sshd-session[7479]: Connection closed by 45.148.10.240 port 48630
Jan 31 06:24:48 np0005603608.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 06:24:48 np0005603608.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 06:24:48 np0005603608.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 06:24:48 np0005603608.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 06:26:20 np0005603608.novalocal sshd-session[7485]: Invalid user sol from 45.148.10.240 port 38884
Jan 31 06:26:20 np0005603608.novalocal sshd-session[7485]: Connection closed by invalid user sol 45.148.10.240 port 38884 [preauth]
Jan 31 06:27:24 np0005603608.novalocal sshd-session[7489]: Accepted publickey for zuul from 38.102.83.114 port 48946 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:27:24 np0005603608.novalocal systemd-logind[816]: New session 4 of user zuul.
Jan 31 06:27:24 np0005603608.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 31 06:27:24 np0005603608.novalocal sshd-session[7489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:27:24 np0005603608.novalocal sudo[7516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrdjhelaemvdgiougeymjafouxpotgml ; /usr/bin/python3'
Jan 31 06:27:24 np0005603608.novalocal sudo[7516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:24 np0005603608.novalocal python3[7518]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ef9-e89a-879a-7837-000000000cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:24 np0005603608.novalocal sudo[7516]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:25 np0005603608.novalocal sudo[7545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymvnfnbhzocenmfihnculptskpqayrtn ; /usr/bin/python3'
Jan 31 06:27:25 np0005603608.novalocal sudo[7545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:25 np0005603608.novalocal python3[7547]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:25 np0005603608.novalocal sudo[7545]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:25 np0005603608.novalocal sudo[7571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqtdwtunrewpjwdquiyeqneskevhllzc ; /usr/bin/python3'
Jan 31 06:27:25 np0005603608.novalocal sudo[7571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:25 np0005603608.novalocal python3[7573]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:25 np0005603608.novalocal sudo[7571]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:25 np0005603608.novalocal sudo[7597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flqptrqwknkljesyeectgtpfvipeoryv ; /usr/bin/python3'
Jan 31 06:27:25 np0005603608.novalocal sudo[7597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:25 np0005603608.novalocal python3[7599]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:25 np0005603608.novalocal sudo[7597]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:26 np0005603608.novalocal sudo[7623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdnsjunuayiaqrvmykcxdyqedcuhemwm ; /usr/bin/python3'
Jan 31 06:27:26 np0005603608.novalocal sudo[7623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:26 np0005603608.novalocal python3[7625]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:26 np0005603608.novalocal sudo[7623]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:26 np0005603608.novalocal sudo[7649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfgdrcodqzjodljixfskvlxifbavkycy ; /usr/bin/python3'
Jan 31 06:27:26 np0005603608.novalocal sudo[7649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:26 np0005603608.novalocal python3[7651]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:26 np0005603608.novalocal sudo[7649]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:27 np0005603608.novalocal sudo[7727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzpywvsouieyvduknqtkjvdudnjdpznp ; /usr/bin/python3'
Jan 31 06:27:27 np0005603608.novalocal sudo[7727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:27 np0005603608.novalocal python3[7729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:27:27 np0005603608.novalocal sudo[7727]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:27 np0005603608.novalocal sudo[7800]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmvjxwquuptdtlnfutzgupbnyawmuwnx ; /usr/bin/python3'
Jan 31 06:27:27 np0005603608.novalocal sudo[7800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:27 np0005603608.novalocal python3[7802]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840847.0284336-393-96111488821647/source _original_basename=tmp0k6q27qg follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:27:27 np0005603608.novalocal sudo[7800]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:28 np0005603608.novalocal sudo[7850]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uylqkekzkbnjphftnnkosxtgttrkfeyl ; /usr/bin/python3'
Jan 31 06:27:28 np0005603608.novalocal sudo[7850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:29 np0005603608.novalocal python3[7852]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 06:27:29 np0005603608.novalocal systemd[1]: Reloading.
Jan 31 06:27:29 np0005603608.novalocal systemd-rc-local-generator[7871]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:27:29 np0005603608.novalocal sudo[7850]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:30 np0005603608.novalocal sudo[7906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuieafcsyqtociggitwnmhkwvgliuswu ; /usr/bin/python3'
Jan 31 06:27:30 np0005603608.novalocal sudo[7906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:30 np0005603608.novalocal python3[7908]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 06:27:30 np0005603608.novalocal sudo[7906]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:30 np0005603608.novalocal sudo[7932]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlvyzzrliklqxkxcxfhcntyyaeyswyoi ; /usr/bin/python3'
Jan 31 06:27:30 np0005603608.novalocal sudo[7932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:31 np0005603608.novalocal python3[7934]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:31 np0005603608.novalocal sudo[7932]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:31 np0005603608.novalocal sudo[7960]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyfpsgiduslpqcohskegdgljbprnmbyc ; /usr/bin/python3'
Jan 31 06:27:31 np0005603608.novalocal sudo[7960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:31 np0005603608.novalocal python3[7962]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:31 np0005603608.novalocal sudo[7960]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:31 np0005603608.novalocal sudo[7988]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaebzgrierpozxsdpochtebknrcuchzm ; /usr/bin/python3'
Jan 31 06:27:31 np0005603608.novalocal sudo[7988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:31 np0005603608.novalocal python3[7990]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:31 np0005603608.novalocal sudo[7988]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:31 np0005603608.novalocal sudo[8016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krosuqdiuumpmqqdttydkqvxuqunuwku ; /usr/bin/python3'
Jan 31 06:27:31 np0005603608.novalocal sudo[8016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:31 np0005603608.novalocal python3[8018]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:31 np0005603608.novalocal sudo[8016]: pam_unix(sudo:session): session closed for user root
Jan 31 06:27:32 np0005603608.novalocal python3[8045]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ef9-e89a-879a-7837-000000000cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:27:33 np0005603608.novalocal python3[8075]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 06:27:35 np0005603608.novalocal sshd-session[7492]: Connection closed by 38.102.83.114 port 48946
Jan 31 06:27:35 np0005603608.novalocal sshd-session[7489]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:27:35 np0005603608.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 06:27:35 np0005603608.novalocal systemd[1]: session-4.scope: Consumed 3.676s CPU time.
Jan 31 06:27:35 np0005603608.novalocal systemd-logind[816]: Session 4 logged out. Waiting for processes to exit.
Jan 31 06:27:35 np0005603608.novalocal systemd-logind[816]: Removed session 4.
Jan 31 06:27:37 np0005603608.novalocal sshd-session[8083]: Accepted publickey for zuul from 38.102.83.114 port 56158 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:27:37 np0005603608.novalocal systemd-logind[816]: New session 5 of user zuul.
Jan 31 06:27:37 np0005603608.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 31 06:27:37 np0005603608.novalocal sshd-session[8083]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:27:37 np0005603608.novalocal sudo[8110]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaazmowkccbebccbtinrxubdqzczdlqk ; /usr/bin/python3'
Jan 31 06:27:37 np0005603608.novalocal sudo[8110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:27:38 np0005603608.novalocal python3[8112]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 06:27:49 np0005603608.novalocal setsebool[8155]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 06:27:49 np0005603608.novalocal setsebool[8155]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 06:28:10 np0005603608.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 31 06:28:10 np0005603608.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:28:10 np0005603608.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 06:28:10 np0005603608.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:28:10 np0005603608.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:28:10 np0005603608.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:28:10 np0005603608.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:28:10 np0005603608.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:28:24 np0005603608.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 31 06:28:24 np0005603608.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:28:24 np0005603608.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 06:28:24 np0005603608.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:28:24 np0005603608.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:28:24 np0005603608.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:28:24 np0005603608.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:28:24 np0005603608.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:28:38 np0005603608.novalocal sshd-session[8886]: Invalid user solana from 45.148.10.240 port 50278
Jan 31 06:28:38 np0005603608.novalocal sshd-session[8886]: Connection closed by invalid user solana 45.148.10.240 port 50278 [preauth]
Jan 31 06:28:47 np0005603608.novalocal dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 06:28:47 np0005603608.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:28:47 np0005603608.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:28:47 np0005603608.novalocal systemd[1]: Reloading.
Jan 31 06:28:47 np0005603608.novalocal systemd-rc-local-generator[8924]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:28:47 np0005603608.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:28:51 np0005603608.novalocal sudo[8110]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:52 np0005603608.novalocal python3[11450]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ef9-e89a-ca6c-e883-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:28:53 np0005603608.novalocal kernel: evm: overlay not supported
Jan 31 06:28:53 np0005603608.novalocal systemd[4308]: Starting D-Bus User Message Bus...
Jan 31 06:28:53 np0005603608.novalocal dbus-broker-launch[12265]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 06:28:53 np0005603608.novalocal dbus-broker-launch[12265]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 06:28:53 np0005603608.novalocal systemd[4308]: Started D-Bus User Message Bus.
Jan 31 06:28:53 np0005603608.novalocal dbus-broker-lau[12265]: Ready
Jan 31 06:28:53 np0005603608.novalocal systemd[4308]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 06:28:53 np0005603608.novalocal systemd[4308]: Created slice Slice /user.
Jan 31 06:28:53 np0005603608.novalocal systemd[4308]: podman-12073.scope: unit configures an IP firewall, but not running as root.
Jan 31 06:28:53 np0005603608.novalocal systemd[4308]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 06:28:53 np0005603608.novalocal systemd[4308]: Started podman-12073.scope.
Jan 31 06:28:54 np0005603608.novalocal systemd[4308]: Started podman-pause-03c69239.scope.
Jan 31 06:28:55 np0005603608.novalocal irqbalance[815]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 31 06:28:55 np0005603608.novalocal irqbalance[815]: IRQ 27 affinity is now unmanaged
Jan 31 06:28:56 np0005603608.novalocal sudo[13821]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjnyipjzorrsbfjsjxiekjjxkidwqjip ; /usr/bin/python3'
Jan 31 06:28:56 np0005603608.novalocal sudo[13821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:28:56 np0005603608.novalocal python3[13834]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.80:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.80:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:28:56 np0005603608.novalocal python3[13834]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 06:28:56 np0005603608.novalocal sudo[13821]: pam_unix(sudo:session): session closed for user root
Jan 31 06:28:56 np0005603608.novalocal sshd-session[8086]: Connection closed by 38.102.83.114 port 56158
Jan 31 06:28:56 np0005603608.novalocal sshd-session[8083]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:28:56 np0005603608.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 06:28:56 np0005603608.novalocal systemd[1]: session-5.scope: Consumed 42.857s CPU time.
Jan 31 06:28:56 np0005603608.novalocal systemd-logind[816]: Session 5 logged out. Waiting for processes to exit.
Jan 31 06:28:56 np0005603608.novalocal systemd-logind[816]: Removed session 5.
Jan 31 06:29:31 np0005603608.novalocal sshd-session[26945]: Connection closed by 38.129.56.250 port 58572 [preauth]
Jan 31 06:29:31 np0005603608.novalocal sshd-session[26948]: Connection closed by 38.129.56.250 port 58568 [preauth]
Jan 31 06:29:31 np0005603608.novalocal sshd-session[26950]: Unable to negotiate with 38.129.56.250 port 58588: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 06:29:31 np0005603608.novalocal sshd-session[26951]: Unable to negotiate with 38.129.56.250 port 58598: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 06:29:31 np0005603608.novalocal sshd-session[26952]: Unable to negotiate with 38.129.56.250 port 58610: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 06:29:36 np0005603608.novalocal sshd-session[28783]: Accepted publickey for zuul from 38.102.83.114 port 58956 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:29:36 np0005603608.novalocal systemd-logind[816]: New session 6 of user zuul.
Jan 31 06:29:36 np0005603608.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 31 06:29:36 np0005603608.novalocal sshd-session[28783]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:29:36 np0005603608.novalocal python3[28843]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:29:38 np0005603608.novalocal sudo[29539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpdtpdfpxauapjlsgtmsvidyvggzsml ; /usr/bin/python3'
Jan 31 06:29:38 np0005603608.novalocal sudo[29539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:38 np0005603608.novalocal python3[29553]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:29:38 np0005603608.novalocal sudo[29539]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:38 np0005603608.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:29:38 np0005603608.novalocal systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:29:38 np0005603608.novalocal systemd[1]: man-db-cache-update.service: Consumed 39.195s CPU time.
Jan 31 06:29:38 np0005603608.novalocal systemd[1]: run-r8cf51bc980e74a1492cf6a5aaece6287.service: Deactivated successfully.
Jan 31 06:29:39 np0005603608.novalocal sudo[29746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brmznmouimajejbhzbiyjmqkuojziyae ; /usr/bin/python3'
Jan 31 06:29:39 np0005603608.novalocal sudo[29746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:39 np0005603608.novalocal python3[29748]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603608.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 06:29:39 np0005603608.novalocal useradd[29750]: new group: name=cloud-admin, GID=1002
Jan 31 06:29:39 np0005603608.novalocal useradd[29750]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 31 06:29:39 np0005603608.novalocal sudo[29746]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:40 np0005603608.novalocal sudo[29780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djfimtngshwmeacatdunneswoqmenuqp ; /usr/bin/python3'
Jan 31 06:29:40 np0005603608.novalocal sudo[29780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:40 np0005603608.novalocal python3[29782]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL0FMDZV22TCmyOw/BdKuDUy2cGp4BfiRzOwx/JLqMraff8LZ9BOqpkfzg5VkWRHTAeVSl/Uyb0smrpcknokWxg= zuul@np0005603607.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 06:29:40 np0005603608.novalocal sudo[29780]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:41 np0005603608.novalocal sudo[29858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfjfqmzudtutqycddjizotskatvzlfly ; /usr/bin/python3'
Jan 31 06:29:41 np0005603608.novalocal sudo[29858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:41 np0005603608.novalocal python3[29860]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:29:41 np0005603608.novalocal sudo[29858]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:42 np0005603608.novalocal sudo[29931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkmwwvmoleukqygchewrrulddpkjdejj ; /usr/bin/python3'
Jan 31 06:29:42 np0005603608.novalocal sudo[29931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:42 np0005603608.novalocal python3[29933]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769840981.5551345-167-140172233229728/source _original_basename=tmpszr9xjli follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:29:42 np0005603608.novalocal sudo[29931]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:42 np0005603608.novalocal sudo[29981]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysutsfppbnfkzakpqptoaohrjipferhn ; /usr/bin/python3'
Jan 31 06:29:42 np0005603608.novalocal sudo[29981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:29:43 np0005603608.novalocal python3[29983]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 06:29:43 np0005603608.novalocal systemd[1]: Starting Hostname Service...
Jan 31 06:29:43 np0005603608.novalocal systemd[1]: Started Hostname Service.
Jan 31 06:29:43 np0005603608.novalocal systemd-hostnamed[29987]: Changed pretty hostname to 'compute-0'
Jan 31 06:29:43 compute-0 systemd-hostnamed[29987]: Hostname set to <compute-0> (static)
Jan 31 06:29:43 compute-0 NetworkManager[7200]: <info>  [1769840983.2275] hostname: static hostname changed from "np0005603608.novalocal" to "compute-0"
Jan 31 06:29:43 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 06:29:43 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 06:29:43 compute-0 sudo[29981]: pam_unix(sudo:session): session closed for user root
Jan 31 06:29:43 compute-0 sshd-session[28793]: Connection closed by 38.102.83.114 port 58956
Jan 31 06:29:43 compute-0 sshd-session[28783]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:29:43 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 06:29:43 compute-0 systemd[1]: session-6.scope: Consumed 1.968s CPU time.
Jan 31 06:29:43 compute-0 systemd-logind[816]: Session 6 logged out. Waiting for processes to exit.
Jan 31 06:29:43 compute-0 systemd-logind[816]: Removed session 6.
Jan 31 06:29:53 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 06:30:13 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 06:30:43 compute-0 sshd-session[30003]: Invalid user sol from 45.148.10.240 port 35576
Jan 31 06:30:43 compute-0 sshd-session[30003]: Connection closed by invalid user sol 45.148.10.240 port 35576 [preauth]
Jan 31 06:32:53 compute-0 sshd-session[30008]: Invalid user ubuntu from 45.148.10.240 port 53078
Jan 31 06:32:53 compute-0 sshd-session[30008]: Connection closed by invalid user ubuntu 45.148.10.240 port 53078 [preauth]
Jan 31 06:34:58 compute-0 sshd-session[30010]: Accepted publickey for zuul from 38.129.56.250 port 51356 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 06:34:58 compute-0 systemd-logind[816]: New session 7 of user zuul.
Jan 31 06:34:58 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 31 06:34:58 compute-0 sshd-session[30010]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:34:58 compute-0 python3[30086]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:35:00 compute-0 sudo[30200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcssyspuhkinrcspppngabhtssaetvav ; /usr/bin/python3'
Jan 31 06:35:00 compute-0 sudo[30200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:00 compute-0 python3[30202]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:00 compute-0 sudo[30200]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:00 compute-0 sudo[30273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqklcqsmxxwwbefqomgitpoabrfgwfmv ; /usr/bin/python3'
Jan 31 06:35:00 compute-0 sudo[30273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:00 compute-0 python3[30275]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2705853-34134-156532267563350/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:01 compute-0 sudo[30273]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:01 compute-0 sudo[30299]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzpfwiaqdlwlwkwtvcptzdmkccokjgf ; /usr/bin/python3'
Jan 31 06:35:01 compute-0 sudo[30299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:01 compute-0 python3[30301]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:01 compute-0 sudo[30299]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:01 compute-0 sudo[30372]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niiksbluwazpdjoboeejqsxvdmfffspm ; /usr/bin/python3'
Jan 31 06:35:01 compute-0 sudo[30372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:01 compute-0 python3[30374]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2705853-34134-156532267563350/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:01 compute-0 sudo[30372]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:01 compute-0 sudo[30398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mucwvbixxurjdasljoucshgfzcmozrbz ; /usr/bin/python3'
Jan 31 06:35:01 compute-0 sudo[30398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:01 compute-0 python3[30400]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:01 compute-0 sudo[30398]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:01 compute-0 sudo[30471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjfnanubfenokwddrzvdeswfuftbbat ; /usr/bin/python3'
Jan 31 06:35:01 compute-0 sudo[30471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:02 compute-0 python3[30473]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2705853-34134-156532267563350/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:02 compute-0 sudo[30471]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-0 sudo[30497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iudiwlxyciamvnammjjyysgzrdhfyfzs ; /usr/bin/python3'
Jan 31 06:35:02 compute-0 sudo[30497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:02 compute-0 python3[30499]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:02 compute-0 sudo[30497]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-0 sudo[30570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvicmjqgqrpyqhxndlybtbmbfrrtfes ; /usr/bin/python3'
Jan 31 06:35:02 compute-0 sudo[30570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:02 compute-0 python3[30572]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2705853-34134-156532267563350/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:02 compute-0 sudo[30570]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-0 sudo[30596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnhrqwpytamblfrizzszapapzjyivdqz ; /usr/bin/python3'
Jan 31 06:35:02 compute-0 sudo[30596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:02 compute-0 python3[30598]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:02 compute-0 sudo[30596]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:02 compute-0 sudo[30669]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzxusovimuwxurvyqniejiqxqqfqjiff ; /usr/bin/python3'
Jan 31 06:35:02 compute-0 sudo[30669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:03 compute-0 python3[30671]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2705853-34134-156532267563350/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:03 compute-0 sudo[30669]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:03 compute-0 sudo[30695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruhqbnvwinbswvztxvrdkcnorqkiywrc ; /usr/bin/python3'
Jan 31 06:35:03 compute-0 sudo[30695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:03 compute-0 python3[30697]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:03 compute-0 sudo[30695]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:03 compute-0 sudo[30768]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqfhyigjtrydoohbywjhtkqajpngbpmc ; /usr/bin/python3'
Jan 31 06:35:03 compute-0 sudo[30768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:03 compute-0 python3[30770]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2705853-34134-156532267563350/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:03 compute-0 sudo[30768]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:03 compute-0 sudo[30794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qspvaqcmokafbfwozooyqahldpxltvxc ; /usr/bin/python3'
Jan 31 06:35:03 compute-0 sudo[30794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:03 compute-0 python3[30796]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 06:35:03 compute-0 sudo[30794]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:03 compute-0 sudo[30867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phmgxaydtthosrjuznrlhuvotupayflp ; /usr/bin/python3'
Jan 31 06:35:03 compute-0 sudo[30867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:35:04 compute-0 python3[30869]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769841300.2705853-34134-156532267563350/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:35:04 compute-0 sudo[30867]: pam_unix(sudo:session): session closed for user root
Jan 31 06:35:06 compute-0 sshd-session[30894]: Unable to negotiate with 192.168.122.11 port 44272: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 06:35:06 compute-0 sshd-session[30895]: Unable to negotiate with 192.168.122.11 port 44274: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 06:35:06 compute-0 sshd-session[30896]: Connection closed by 192.168.122.11 port 44256 [preauth]
Jan 31 06:35:06 compute-0 sshd-session[30897]: Connection closed by 192.168.122.11 port 44260 [preauth]
Jan 31 06:35:06 compute-0 sshd-session[30898]: Unable to negotiate with 192.168.122.11 port 44264: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 06:35:08 compute-0 sshd-session[30904]: Invalid user ubuntu from 45.148.10.240 port 37900
Jan 31 06:35:09 compute-0 sshd-session[30904]: Connection closed by invalid user ubuntu 45.148.10.240 port 37900 [preauth]
Jan 31 06:35:15 compute-0 python3[30929]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:37:24 compute-0 systemd[1]: Starting dnf makecache...
Jan 31 06:37:24 compute-0 sshd-session[30933]: Invalid user sol from 45.148.10.240 port 41462
Jan 31 06:37:24 compute-0 sshd-session[30933]: Connection closed by invalid user sol 45.148.10.240 port 41462 [preauth]
Jan 31 06:37:24 compute-0 dnf[30935]: Failed determining last makecache time.
Jan 31 06:37:25 compute-0 dnf[30935]: delorean-openstack-barbican-42b4c41831408a8e323  73 kB/s |  13 kB     00:00
Jan 31 06:37:25 compute-0 dnf[30935]: delorean-python-glean-642fffe0203a8ffcc2443db52 560 kB/s |  65 kB     00:00
Jan 31 06:37:25 compute-0 dnf[30935]: delorean-openstack-cinder-1c00d6490d88e436f26ef  69 kB/s |  32 kB     00:00
Jan 31 06:37:25 compute-0 dnf[30935]: delorean-python-stevedore-c4acc5639fd2329372142 442 kB/s | 131 kB     00:00
Jan 31 06:37:26 compute-0 dnf[30935]: delorean-python-cloudkitty-tests-tempest-783703 367 kB/s |  32 kB     00:00
Jan 31 06:37:26 compute-0 dnf[30935]: delorean-diskimage-builder-61b717cc45660834fe9a 1.7 MB/s | 349 kB     00:00
Jan 31 06:37:26 compute-0 dnf[30935]: delorean-openstack-nova-eaa65f0b85123a4ee343246 295 kB/s |  42 kB     00:00
Jan 31 06:37:26 compute-0 dnf[30935]: delorean-python-designate-tests-tempest-347fdbc 107 kB/s |  18 kB     00:00
Jan 31 06:37:26 compute-0 dnf[30935]: delorean-openstack-glance-1fd12c29b339f30fe823e 106 kB/s |  18 kB     00:00
Jan 31 06:37:27 compute-0 dnf[30935]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 211 kB/s |  29 kB     00:00
Jan 31 06:37:27 compute-0 dnf[30935]: delorean-openstack-manila-d783d10e75495b73866db 120 kB/s |  25 kB     00:00
Jan 31 06:37:27 compute-0 dnf[30935]: delorean-openstack-neutron-95cadbd379667c8520c8 1.1 MB/s | 154 kB     00:00
Jan 31 06:37:27 compute-0 dnf[30935]: delorean-openstack-octavia-5975097dd4b021385178 183 kB/s |  26 kB     00:00
Jan 31 06:37:27 compute-0 dnf[30935]: delorean-openstack-watcher-c014f81a8647287f6dcc 187 kB/s |  16 kB     00:00
Jan 31 06:37:27 compute-0 dnf[30935]: delorean-python-tcib-78032d201b02cee27e8e644c61  39 kB/s | 7.4 kB     00:00
Jan 31 06:37:28 compute-0 dnf[30935]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 2.4 MB/s | 144 kB     00:00
Jan 31 06:37:28 compute-0 dnf[30935]: delorean-openstack-swift-dc98a8463506ac520c469a 641 kB/s |  14 kB     00:00
Jan 31 06:37:28 compute-0 dnf[30935]: delorean-python-tempestconf-8515371b7cceebd4282 2.0 MB/s |  53 kB     00:00
Jan 31 06:37:28 compute-0 dnf[30935]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.9 MB/s |  96 kB     00:00
Jan 31 06:37:28 compute-0 dnf[30935]: CentOS Stream 9 - BaseOS                         25 kB/s | 6.1 kB     00:00
Jan 31 06:37:28 compute-0 dnf[30935]: CentOS Stream 9 - AppStream                      53 kB/s | 6.5 kB     00:00
Jan 31 06:37:28 compute-0 dnf[30935]: CentOS Stream 9 - CRB                            55 kB/s | 6.0 kB     00:00
Jan 31 06:37:29 compute-0 dnf[30935]: CentOS Stream 9 - Extras packages                71 kB/s | 7.3 kB     00:00
Jan 31 06:37:29 compute-0 dnf[30935]: dlrn-antelope-testing                            17 MB/s | 1.1 MB     00:00
Jan 31 06:37:29 compute-0 dnf[30935]: dlrn-antelope-build-deps                        9.4 MB/s | 461 kB     00:00
Jan 31 06:37:29 compute-0 dnf[30935]: centos9-rabbitmq                                8.4 MB/s | 123 kB     00:00
Jan 31 06:37:29 compute-0 dnf[30935]: centos9-storage                                  17 MB/s | 415 kB     00:00
Jan 31 06:37:30 compute-0 dnf[30935]: centos9-opstools                                3.0 MB/s |  51 kB     00:00
Jan 31 06:37:30 compute-0 dnf[30935]: NFV SIG OpenvSwitch                              21 MB/s | 461 kB     00:00
Jan 31 06:37:30 compute-0 dnf[30935]: repo-setup-centos-appstream                      76 MB/s |  26 MB     00:00
Jan 31 06:37:36 compute-0 dnf[30935]: repo-setup-centos-baseos                         30 MB/s | 8.9 MB     00:00
Jan 31 06:37:38 compute-0 dnf[30935]: repo-setup-centos-highavailability               13 MB/s | 744 kB     00:00
Jan 31 06:37:38 compute-0 dnf[30935]: repo-setup-centos-powertools                     54 MB/s | 7.6 MB     00:00
Jan 31 06:37:41 compute-0 dnf[30935]: Extra Packages for Enterprise Linux 9 - x86_64   12 MB/s |  20 MB     00:01
Jan 31 06:37:54 compute-0 dnf[30935]: Metadata cache created.
Jan 31 06:37:54 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 06:37:54 compute-0 systemd[1]: Finished dnf makecache.
Jan 31 06:37:54 compute-0 systemd[1]: dnf-makecache.service: Consumed 24.628s CPU time.
Jan 31 06:39:05 compute-0 sshd-session[31038]: Connection closed by 198.235.24.39 port 55981
Jan 31 06:39:39 compute-0 sshd-session[31039]: Invalid user solana from 45.148.10.240 port 44436
Jan 31 06:39:39 compute-0 sshd-session[31039]: Connection closed by invalid user solana 45.148.10.240 port 44436 [preauth]
Jan 31 06:40:14 compute-0 sshd-session[30013]: Received disconnect from 38.129.56.250 port 51356:11: disconnected by user
Jan 31 06:40:14 compute-0 sshd-session[30013]: Disconnected from user zuul 38.129.56.250 port 51356
Jan 31 06:40:14 compute-0 sshd-session[30010]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:40:14 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 06:40:14 compute-0 systemd[1]: session-7.scope: Consumed 4.196s CPU time.
Jan 31 06:40:14 compute-0 systemd-logind[816]: Session 7 logged out. Waiting for processes to exit.
Jan 31 06:40:14 compute-0 systemd-logind[816]: Removed session 7.
Jan 31 06:41:57 compute-0 sshd-session[31042]: Invalid user solana from 45.148.10.240 port 60270
Jan 31 06:41:58 compute-0 sshd-session[31042]: Connection closed by invalid user solana 45.148.10.240 port 60270 [preauth]
Jan 31 06:44:17 compute-0 sshd-session[31045]: Invalid user sol from 45.148.10.240 port 48232
Jan 31 06:44:17 compute-0 sshd-session[31045]: Connection closed by invalid user sol 45.148.10.240 port 48232 [preauth]
Jan 31 06:46:35 compute-0 sshd-session[31047]: Invalid user sol from 45.148.10.240 port 49950
Jan 31 06:46:35 compute-0 sshd-session[31047]: Connection closed by invalid user sol 45.148.10.240 port 49950 [preauth]
Jan 31 06:48:51 compute-0 sshd-session[31051]: Invalid user solana from 45.148.10.240 port 40090
Jan 31 06:48:51 compute-0 sshd-session[31051]: Connection closed by invalid user solana 45.148.10.240 port 40090 [preauth]
Jan 31 06:51:06 compute-0 sshd-session[31054]: Invalid user solana from 45.148.10.240 port 32818
Jan 31 06:51:06 compute-0 sshd-session[31054]: Connection closed by invalid user solana 45.148.10.240 port 32818 [preauth]
Jan 31 06:53:21 compute-0 sshd-session[31057]: Invalid user solana from 45.148.10.240 port 42296
Jan 31 06:53:21 compute-0 sshd-session[31057]: Connection closed by invalid user solana 45.148.10.240 port 42296 [preauth]
Jan 31 06:55:38 compute-0 sshd-session[31060]: Invalid user sol from 45.148.10.240 port 49894
Jan 31 06:55:38 compute-0 sshd-session[31060]: Connection closed by invalid user sol 45.148.10.240 port 49894 [preauth]
Jan 31 06:56:05 compute-0 sshd-session[31062]: Accepted publickey for zuul from 192.168.122.30 port 46048 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 06:56:05 compute-0 systemd-logind[816]: New session 8 of user zuul.
Jan 31 06:56:05 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 31 06:56:05 compute-0 sshd-session[31062]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:56:07 compute-0 python3.9[31215]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:07 compute-0 sudo[31394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizodtkxiscdcrdvbwncvrcbcvbekfce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842567.5131023-56-59115780501650/AnsiballZ_command.py'
Jan 31 06:56:07 compute-0 sudo[31394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:08 compute-0 python3.9[31396]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:56:19 compute-0 sudo[31394]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:20 compute-0 sshd-session[31065]: Connection closed by 192.168.122.30 port 46048
Jan 31 06:56:20 compute-0 sshd-session[31062]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:56:20 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 06:56:20 compute-0 systemd[1]: session-8.scope: Consumed 7.847s CPU time.
Jan 31 06:56:20 compute-0 systemd-logind[816]: Session 8 logged out. Waiting for processes to exit.
Jan 31 06:56:20 compute-0 systemd-logind[816]: Removed session 8.
Jan 31 06:56:40 compute-0 sshd-session[31456]: Accepted publickey for zuul from 192.168.122.30 port 54964 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 06:56:40 compute-0 systemd-logind[816]: New session 9 of user zuul.
Jan 31 06:56:40 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 31 06:56:40 compute-0 sshd-session[31456]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:56:41 compute-0 python3.9[31609]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 06:56:42 compute-0 python3.9[31783]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:43 compute-0 sudo[31933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhicoylnkdhsenfyrzzymlhlywnerwev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842602.8166013-93-221038366456203/AnsiballZ_command.py'
Jan 31 06:56:43 compute-0 sudo[31933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:43 compute-0 python3.9[31935]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:56:43 compute-0 sudo[31933]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:44 compute-0 sudo[32086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwetsgrqiovparajbospnkxrnqukcnsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842603.8033319-129-200787482476061/AnsiballZ_stat.py'
Jan 31 06:56:44 compute-0 sudo[32086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:44 compute-0 python3.9[32088]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:56:44 compute-0 sudo[32086]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:44 compute-0 sudo[32238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqowfqcuhtyhomfcybwwtmajwapfjznk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842604.5410786-153-280041244590593/AnsiballZ_file.py'
Jan 31 06:56:44 compute-0 sudo[32238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:45 compute-0 python3.9[32240]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:45 compute-0 sudo[32238]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:45 compute-0 sudo[32390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctjvclsvrmlkwrpcwyuxdroxbdrgpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842605.3013878-177-92511841254676/AnsiballZ_stat.py'
Jan 31 06:56:45 compute-0 sudo[32390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:45 compute-0 python3.9[32392]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:56:45 compute-0 sudo[32390]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:46 compute-0 sudo[32513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjchtwgdzqnigzmrouvehazbyaghwhob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842605.3013878-177-92511841254676/AnsiballZ_copy.py'
Jan 31 06:56:46 compute-0 sudo[32513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:46 compute-0 python3.9[32515]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842605.3013878-177-92511841254676/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:46 compute-0 sudo[32513]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:46 compute-0 sudo[32665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmsbjmubrakkyqzlwcftuhlqefrmypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842606.6489646-222-132982261885762/AnsiballZ_setup.py'
Jan 31 06:56:46 compute-0 sudo[32665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:47 compute-0 python3.9[32667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:47 compute-0 sudo[32665]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:47 compute-0 sudo[32821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otronrtqjpsduesxdwfzsfgfcbxmirlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842607.5410795-246-194960504893349/AnsiballZ_file.py'
Jan 31 06:56:47 compute-0 sudo[32821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:47 compute-0 python3.9[32823]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:56:47 compute-0 sudo[32821]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:48 compute-0 sudo[32973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifeilfyynhafhcfyjyubegndogsqarlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842608.1451-273-132683007714277/AnsiballZ_file.py'
Jan 31 06:56:48 compute-0 sudo[32973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:48 compute-0 python3.9[32975]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:56:48 compute-0 sudo[32973]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:49 compute-0 python3.9[33125]: ansible-ansible.builtin.service_facts Invoked
Jan 31 06:56:52 compute-0 python3.9[33378]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:56:53 compute-0 python3.9[33528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:54 compute-0 python3.9[33682]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:56:55 compute-0 sudo[33838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itsrklclprkzaqvzwpgottrghodnugre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842615.3816938-417-46161254545222/AnsiballZ_setup.py'
Jan 31 06:56:55 compute-0 sudo[33838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:55 compute-0 python3.9[33840]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 06:56:56 compute-0 sudo[33838]: pam_unix(sudo:session): session closed for user root
Jan 31 06:56:56 compute-0 sudo[33922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hakfcsbbzdhjalddngvcxzixvchoivja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842615.3816938-417-46161254545222/AnsiballZ_dnf.py'
Jan 31 06:56:56 compute-0 sudo[33922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:56:56 compute-0 python3.9[33924]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:57:25 compute-0 irqbalance[815]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 31 06:57:25 compute-0 irqbalance[815]: IRQ 26 affinity is now unmanaged
Jan 31 06:57:49 compute-0 systemd[1]: Reloading.
Jan 31 06:57:49 compute-0 systemd-rc-local-generator[34124]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:57:50 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 06:57:50 compute-0 systemd[1]: Reloading.
Jan 31 06:57:50 compute-0 systemd-rc-local-generator[34164]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:57:50 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 06:57:50 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 06:57:50 compute-0 systemd[1]: Reloading.
Jan 31 06:57:50 compute-0 systemd-rc-local-generator[34202]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:57:50 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 06:57:50 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Jan 31 06:57:50 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Jan 31 06:57:50 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Jan 31 06:57:56 compute-0 sshd-session[34242]: Invalid user sol from 45.148.10.240 port 51700
Jan 31 06:57:56 compute-0 sshd-session[34242]: Connection closed by invalid user sol 45.148.10.240 port 51700 [preauth]
Jan 31 06:58:48 compute-0 kernel: SELinux:  Converting 2728 SID table entries...
Jan 31 06:58:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 06:58:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 06:58:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 06:58:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 06:58:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 06:58:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 06:58:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 06:58:48 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 06:58:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:58:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:58:48 compute-0 systemd[1]: Reloading.
Jan 31 06:58:48 compute-0 systemd-rc-local-generator[34517]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:58:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:58:49 compute-0 sudo[33922]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:58:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:58:49 compute-0 systemd[1]: run-rf9192c9509e24f1d9327a594f7873eb6.service: Deactivated successfully.
Jan 31 06:58:51 compute-0 sudo[35440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irdzhhdmcizatfleimfihtbquvtpnsav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842731.2508078-453-259574321297038/AnsiballZ_command.py'
Jan 31 06:58:51 compute-0 sudo[35440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:51 compute-0 python3.9[35442]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:58:53 compute-0 sudo[35440]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:54 compute-0 sudo[35721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdlpjttujisbutswumoftgsbgpipublb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842733.4816413-477-20408662431202/AnsiballZ_selinux.py'
Jan 31 06:58:54 compute-0 sudo[35721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:54 compute-0 python3.9[35723]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 06:58:54 compute-0 sudo[35721]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:55 compute-0 sudo[35873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpmeodncrrzdkqrohlufjemprbhldkxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842735.0015163-510-235164431471622/AnsiballZ_command.py'
Jan 31 06:58:55 compute-0 sudo[35873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:55 compute-0 python3.9[35875]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 06:58:56 compute-0 sudo[35873]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:58 compute-0 sudo[36026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcxmejjhhvjhojatmtvfdtsunkfghxxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842738.0952954-534-245505279139072/AnsiballZ_file.py'
Jan 31 06:58:58 compute-0 sudo[36026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:58 compute-0 python3.9[36028]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:58:58 compute-0 sudo[36026]: pam_unix(sudo:session): session closed for user root
Jan 31 06:58:59 compute-0 sudo[36178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymukjmdyvosggkpezzftvayxleuamasg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842739.272183-558-61527695259067/AnsiballZ_mount.py'
Jan 31 06:58:59 compute-0 sudo[36178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:58:59 compute-0 python3.9[36180]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 06:58:59 compute-0 sudo[36178]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:01 compute-0 sudo[36330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acuuwjjcahtoenmxdctaxcxzfeqgaogk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842741.4389257-642-207793750196552/AnsiballZ_file.py'
Jan 31 06:59:01 compute-0 sudo[36330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:01 compute-0 python3.9[36332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:01 compute-0 sudo[36330]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:06 compute-0 sudo[36482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvrjbymftknjvckljtznjrhqbkhdujct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842746.2129843-666-37501077560976/AnsiballZ_stat.py'
Jan 31 06:59:06 compute-0 sudo[36482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:07 compute-0 python3.9[36484]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:07 compute-0 sudo[36482]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:08 compute-0 sudo[36605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nymevsvwfbnohucssxevamcduxjjzfso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842746.2129843-666-37501077560976/AnsiballZ_copy.py'
Jan 31 06:59:08 compute-0 sudo[36605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:08 compute-0 python3.9[36607]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842746.2129843-666-37501077560976/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:08 compute-0 sudo[36605]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:09 compute-0 sudo[36757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffxdnsgshillteeabvurikuckkpvdxex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842749.5518765-738-269896638718689/AnsiballZ_stat.py'
Jan 31 06:59:09 compute-0 sudo[36757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:10 compute-0 python3.9[36759]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:59:10 compute-0 sudo[36757]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:10 compute-0 sudo[36909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rozcedpcxbjhrxmdfpeifqpicvpgoies ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842750.305678-762-199623790146422/AnsiballZ_command.py'
Jan 31 06:59:10 compute-0 sudo[36909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:10 compute-0 python3.9[36911]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:10 compute-0 sudo[36909]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:11 compute-0 sudo[37062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umrpeizdpcawkriqdiefyynjwbkbntlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842751.3860223-786-52156513645610/AnsiballZ_file.py'
Jan 31 06:59:11 compute-0 sudo[37062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:11 compute-0 python3.9[37064]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 06:59:11 compute-0 sudo[37062]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:12 compute-0 sudo[37214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjjsmqubhnsqvuzbzrpyngthsugorvfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842752.4651823-819-127042431079539/AnsiballZ_getent.py'
Jan 31 06:59:12 compute-0 sudo[37214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:13 compute-0 python3.9[37216]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 06:59:13 compute-0 sudo[37214]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:13 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 06:59:13 compute-0 sudo[37368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wosakvnqregvhslmtiejkejkfqhjdajk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842753.358374-843-128511011505683/AnsiballZ_group.py'
Jan 31 06:59:13 compute-0 sudo[37368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:13 compute-0 python3.9[37370]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:59:14 compute-0 groupadd[37371]: group added to /etc/group: name=qemu, GID=107
Jan 31 06:59:14 compute-0 groupadd[37371]: group added to /etc/gshadow: name=qemu
Jan 31 06:59:14 compute-0 groupadd[37371]: new group: name=qemu, GID=107
Jan 31 06:59:14 compute-0 sudo[37368]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:14 compute-0 sudo[37526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbsiexwlpjsfwrukfbcbzccuhiaeidqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842754.2808146-867-22042282289071/AnsiballZ_user.py'
Jan 31 06:59:14 compute-0 sudo[37526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:15 compute-0 python3.9[37528]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 06:59:15 compute-0 useradd[37530]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 06:59:15 compute-0 sudo[37526]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:15 compute-0 sudo[37686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqfufuizckhtastmafuncljlsnbxdlmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842755.4143248-891-13693203273882/AnsiballZ_getent.py'
Jan 31 06:59:15 compute-0 sudo[37686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:15 compute-0 python3.9[37688]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 06:59:15 compute-0 sudo[37686]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:16 compute-0 sudo[37839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkupgnjpvfhrljvcvwenibvrgwukppnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842756.0600924-915-269311555852067/AnsiballZ_group.py'
Jan 31 06:59:16 compute-0 sudo[37839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:16 compute-0 python3.9[37841]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:59:16 compute-0 groupadd[37842]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 31 06:59:16 compute-0 groupadd[37842]: group added to /etc/gshadow: name=hugetlbfs
Jan 31 06:59:16 compute-0 groupadd[37842]: new group: name=hugetlbfs, GID=42477
Jan 31 06:59:16 compute-0 sudo[37839]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:17 compute-0 sudo[37997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrvrtqynfyfnikgoftbskulrdnhluaug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842756.8445704-942-257656657647733/AnsiballZ_file.py'
Jan 31 06:59:17 compute-0 sudo[37997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:17 compute-0 python3.9[37999]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 06:59:17 compute-0 sudo[37997]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:18 compute-0 sudo[38149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltpnyambvsqkcaazicgxsxgktrspvsay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842757.837047-975-81953194894012/AnsiballZ_dnf.py'
Jan 31 06:59:18 compute-0 sudo[38149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:18 compute-0 python3.9[38151]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:59:19 compute-0 sudo[38149]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:21 compute-0 sudo[38302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itagcvekcrkzdzcwjrqjtdlmvzumfrrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842761.0625613-999-27020186944862/AnsiballZ_file.py'
Jan 31 06:59:21 compute-0 sudo[38302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:21 compute-0 python3.9[38304]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:21 compute-0 sudo[38302]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:22 compute-0 sudo[38454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulalkjzgzbcoblmnogppmmhmfklzvufs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842761.7302847-1023-133711947766840/AnsiballZ_stat.py'
Jan 31 06:59:22 compute-0 sudo[38454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:22 compute-0 python3.9[38456]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:22 compute-0 sudo[38454]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:22 compute-0 sudo[38577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcmvhndeneukhsnqoqgoytfzvuwhwtmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842761.7302847-1023-133711947766840/AnsiballZ_copy.py'
Jan 31 06:59:22 compute-0 sudo[38577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:22 compute-0 python3.9[38579]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842761.7302847-1023-133711947766840/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:22 compute-0 sudo[38577]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:23 compute-0 sudo[38729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unzemdyrtqagwctbjojankqrwqjluwbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842762.945393-1068-205276413715924/AnsiballZ_systemd.py'
Jan 31 06:59:23 compute-0 sudo[38729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:23 compute-0 python3.9[38731]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:59:23 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 06:59:23 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 06:59:23 compute-0 kernel: Bridge firewalling registered
Jan 31 06:59:23 compute-0 systemd-modules-load[38735]: Inserted module 'br_netfilter'
Jan 31 06:59:23 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 06:59:23 compute-0 sudo[38729]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:24 compute-0 sudo[38889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigmejhbnamcoczlxpnwaebvwzytllfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842764.1764913-1092-258536543895706/AnsiballZ_stat.py'
Jan 31 06:59:24 compute-0 sudo[38889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:24 compute-0 python3.9[38891]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 06:59:24 compute-0 sudo[38889]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:24 compute-0 sudo[39012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkgczinkvqrfmsfljxgylgbbahpabwrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842764.1764913-1092-258536543895706/AnsiballZ_copy.py'
Jan 31 06:59:24 compute-0 sudo[39012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:25 compute-0 python3.9[39014]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842764.1764913-1092-258536543895706/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 06:59:25 compute-0 sudo[39012]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:25 compute-0 sudo[39164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgduklxmiostbdaqlfmljigdckwbezcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842765.6328545-1146-171250380518869/AnsiballZ_dnf.py'
Jan 31 06:59:25 compute-0 sudo[39164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:26 compute-0 python3.9[39166]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 06:59:29 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Jan 31 06:59:29 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Jan 31 06:59:29 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 06:59:29 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 06:59:29 compute-0 systemd[1]: Reloading.
Jan 31 06:59:29 compute-0 systemd-rc-local-generator[39226]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:59:29 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 06:59:30 compute-0 sudo[39164]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:31 compute-0 python3.9[41654]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:59:32 compute-0 python3.9[42918]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 06:59:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 06:59:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 06:59:32 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.452s CPU time.
Jan 31 06:59:32 compute-0 systemd[1]: run-r42e00ff4b1ed4d38846f7b3a86a728d3.service: Deactivated successfully.
Jan 31 06:59:32 compute-0 python3.9[43218]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 06:59:33 compute-0 sudo[43368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wecwdvbcjcjdyoppfjjdmdxsgbmlojti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842773.5593364-1263-64369229453167/AnsiballZ_command.py'
Jan 31 06:59:33 compute-0 sudo[43368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:33 compute-0 python3.9[43370]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:34 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 06:59:34 compute-0 systemd[1]: Starting Authorization Manager...
Jan 31 06:59:34 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 06:59:34 compute-0 polkitd[43587]: Started polkitd version 0.117
Jan 31 06:59:34 compute-0 polkitd[43587]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 06:59:34 compute-0 polkitd[43587]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 06:59:34 compute-0 polkitd[43587]: Finished loading, compiling and executing 2 rules
Jan 31 06:59:34 compute-0 systemd[1]: Started Authorization Manager.
Jan 31 06:59:34 compute-0 polkitd[43587]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 31 06:59:34 compute-0 sudo[43368]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:35 compute-0 sudo[43755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdtampeyvgubvnbynavzvljqdfppclas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842774.9683535-1290-273774178131343/AnsiballZ_systemd.py'
Jan 31 06:59:35 compute-0 sudo[43755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:35 compute-0 python3.9[43757]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:59:35 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 06:59:35 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 06:59:35 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 06:59:35 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 06:59:35 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 06:59:35 compute-0 sudo[43755]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:36 compute-0 python3.9[43919]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 06:59:40 compute-0 sudo[44069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvcdzguecwxilunzkgiqndxynpsqpotu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842780.4659846-1461-38136870314297/AnsiballZ_systemd.py'
Jan 31 06:59:40 compute-0 sudo[44069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:41 compute-0 python3.9[44071]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:59:41 compute-0 systemd[1]: Reloading.
Jan 31 06:59:41 compute-0 systemd-rc-local-generator[44096]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:59:41 compute-0 sudo[44069]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:41 compute-0 sudo[44257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjtssjmrinvpyukrgvoshhbzukihziha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842781.4184542-1461-204873983587903/AnsiballZ_systemd.py'
Jan 31 06:59:41 compute-0 sudo[44257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:42 compute-0 python3.9[44259]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 06:59:42 compute-0 systemd[1]: Reloading.
Jan 31 06:59:42 compute-0 systemd-rc-local-generator[44282]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 06:59:42 compute-0 sudo[44257]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:43 compute-0 sudo[44446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rggsrwmkarxqclicgqnxmnyamtxmnbya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842782.7193944-1509-103261017577595/AnsiballZ_command.py'
Jan 31 06:59:43 compute-0 sudo[44446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:43 compute-0 python3.9[44448]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:43 compute-0 sudo[44446]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:43 compute-0 sudo[44599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhqdxgepjcedgvkddruwbhxhtltdrzha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842783.418906-1533-119222926161274/AnsiballZ_command.py'
Jan 31 06:59:43 compute-0 sudo[44599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:43 compute-0 python3.9[44601]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:43 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 06:59:43 compute-0 sudo[44599]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:44 compute-0 sudo[44752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chxojuwrpvuzfskgvdazmggrdlfslfpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842784.088163-1557-275178761064239/AnsiballZ_command.py'
Jan 31 06:59:44 compute-0 sudo[44752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:44 compute-0 python3.9[44754]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:46 compute-0 sudo[44752]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:46 compute-0 sudo[44914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpysmhdxbhlpkudvkmpaamekrpbdowcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842786.2713091-1581-59317868378970/AnsiballZ_command.py'
Jan 31 06:59:46 compute-0 sudo[44914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:46 compute-0 python3.9[44916]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 06:59:46 compute-0 sudo[44914]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:47 compute-0 sudo[45067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dunypkxkaorzbaceshlxqfpidpjhwkqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842786.9206884-1605-77029243152879/AnsiballZ_systemd.py'
Jan 31 06:59:47 compute-0 sudo[45067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:47 compute-0 python3.9[45069]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 06:59:47 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 06:59:47 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 06:59:47 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 06:59:47 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 31 06:59:47 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 06:59:47 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 31 06:59:47 compute-0 sudo[45067]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:48 compute-0 sshd-session[31459]: Connection closed by 192.168.122.30 port 54964
Jan 31 06:59:48 compute-0 sshd-session[31456]: pam_unix(sshd:session): session closed for user zuul
Jan 31 06:59:48 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 06:59:48 compute-0 systemd[1]: session-9.scope: Consumed 2min 2.367s CPU time.
Jan 31 06:59:48 compute-0 systemd-logind[816]: Session 9 logged out. Waiting for processes to exit.
Jan 31 06:59:48 compute-0 systemd-logind[816]: Removed session 9.
Jan 31 06:59:56 compute-0 sshd-session[45100]: Accepted publickey for zuul from 192.168.122.30 port 49722 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 06:59:56 compute-0 systemd-logind[816]: New session 10 of user zuul.
Jan 31 06:59:56 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 31 06:59:56 compute-0 sshd-session[45100]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 06:59:57 compute-0 python3.9[45253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 06:59:58 compute-0 sudo[45407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgidhkhgnoxdowdtqguefhwwwdvrlpmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842797.959051-68-101698609901126/AnsiballZ_getent.py'
Jan 31 06:59:58 compute-0 sudo[45407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:58 compute-0 python3.9[45409]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 06:59:58 compute-0 sudo[45407]: pam_unix(sudo:session): session closed for user root
Jan 31 06:59:59 compute-0 sudo[45560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkwwwscisgvwaxictexmveypjcmgltxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842798.7316983-92-186603639846294/AnsiballZ_group.py'
Jan 31 06:59:59 compute-0 sudo[45560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 06:59:59 compute-0 python3.9[45562]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 06:59:59 compute-0 groupadd[45563]: group added to /etc/group: name=openvswitch, GID=42476
Jan 31 06:59:59 compute-0 groupadd[45563]: group added to /etc/gshadow: name=openvswitch
Jan 31 06:59:59 compute-0 groupadd[45563]: new group: name=openvswitch, GID=42476
Jan 31 06:59:59 compute-0 sudo[45560]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:00 compute-0 sudo[45718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngextskhhzylmivcvzvbfzhgczjhybfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842799.6017346-116-219283474259329/AnsiballZ_user.py'
Jan 31 07:00:00 compute-0 sudo[45718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:00 compute-0 python3.9[45720]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:00:00 compute-0 useradd[45722]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 07:00:00 compute-0 useradd[45722]: add 'openvswitch' to group 'hugetlbfs'
Jan 31 07:00:00 compute-0 useradd[45722]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 31 07:00:00 compute-0 sudo[45718]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:01 compute-0 sudo[45878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nslvhpryhrykurlkdoxrdvjqulqfyopb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842800.8017273-146-199707659775227/AnsiballZ_setup.py'
Jan 31 07:00:01 compute-0 sudo[45878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:01 compute-0 python3.9[45880]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:00:01 compute-0 sudo[45878]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:01 compute-0 sudo[45962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmghteuseoezyinagptvitcidzdusud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842800.8017273-146-199707659775227/AnsiballZ_dnf.py'
Jan 31 07:00:01 compute-0 sudo[45962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:02 compute-0 python3.9[45964]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:00:04 compute-0 sudo[45962]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:05 compute-0 sudo[46126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdhvlkyjtggvlleeloiflpntovgomyjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842804.868724-188-245510453086730/AnsiballZ_dnf.py'
Jan 31 07:00:05 compute-0 sudo[46126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:05 compute-0 python3.9[46128]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:00:13 compute-0 sshd-session[46143]: Invalid user sol from 45.148.10.240 port 60826
Jan 31 07:00:14 compute-0 sshd-session[46143]: Connection closed by invalid user sol 45.148.10.240 port 60826 [preauth]
Jan 31 07:00:16 compute-0 kernel: SELinux:  Converting 2740 SID table entries...
Jan 31 07:00:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:00:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:00:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:00:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:00:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:00:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:00:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:00:16 compute-0 groupadd[46153]: group added to /etc/group: name=unbound, GID=994
Jan 31 07:00:16 compute-0 groupadd[46153]: group added to /etc/gshadow: name=unbound
Jan 31 07:00:16 compute-0 groupadd[46153]: new group: name=unbound, GID=994
Jan 31 07:00:16 compute-0 useradd[46160]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 31 07:00:16 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 07:00:16 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 07:00:17 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:00:17 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:00:17 compute-0 systemd[1]: Reloading.
Jan 31 07:00:17 compute-0 systemd-sysv-generator[46656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:17 compute-0 systemd-rc-local-generator[46650]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:17 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:00:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:00:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:00:18 compute-0 systemd[1]: run-r4bf9e025b52d43d9966c019bd582212a.service: Deactivated successfully.
Jan 31 07:00:18 compute-0 sudo[46126]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:22 compute-0 sudo[47226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdlploxxcpvmosgyqjfkzwutobsqkagv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842822.070589-212-78245055754188/AnsiballZ_systemd.py'
Jan 31 07:00:22 compute-0 sudo[47226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:22 compute-0 python3.9[47228]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:00:22 compute-0 systemd[1]: Reloading.
Jan 31 07:00:23 compute-0 systemd-rc-local-generator[47258]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:23 compute-0 systemd-sysv-generator[47262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:23 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 07:00:23 compute-0 chown[47270]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 07:00:23 compute-0 ovs-ctl[47275]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 07:00:23 compute-0 ovs-ctl[47275]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 07:00:23 compute-0 ovs-ctl[47275]: Starting ovsdb-server [  OK  ]
Jan 31 07:00:23 compute-0 ovs-vsctl[47324]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 07:00:23 compute-0 ovs-vsctl[47340]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"5c307474-e9ec-4d19-9f52-463eb0ff26d1\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 07:00:23 compute-0 ovs-ctl[47275]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 07:00:23 compute-0 ovs-vsctl[47350]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 07:00:23 compute-0 ovs-ctl[47275]: Enabling remote OVSDB managers [  OK  ]
Jan 31 07:00:23 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 07:00:23 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 07:00:23 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 07:00:23 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 07:00:23 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 07:00:23 compute-0 ovs-ctl[47394]: Inserting openvswitch module [  OK  ]
Jan 31 07:00:23 compute-0 ovs-ctl[47363]: Starting ovs-vswitchd [  OK  ]
Jan 31 07:00:23 compute-0 ovs-ctl[47363]: Enabling remote OVSDB managers [  OK  ]
Jan 31 07:00:23 compute-0 ovs-vsctl[47412]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 07:00:23 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 07:00:23 compute-0 systemd[1]: Starting Open vSwitch...
Jan 31 07:00:23 compute-0 systemd[1]: Finished Open vSwitch.
Jan 31 07:00:23 compute-0 sudo[47226]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:24 compute-0 python3.9[47563]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:00:25 compute-0 sudo[47713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isvvyrvfjgqvaajhyhdnmkdoatvmfyra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842825.1301854-266-22072780570362/AnsiballZ_sefcontext.py'
Jan 31 07:00:25 compute-0 sudo[47713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:25 compute-0 python3.9[47715]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 07:00:26 compute-0 kernel: SELinux:  Converting 2754 SID table entries...
Jan 31 07:00:26 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:00:26 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:00:26 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:00:26 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:00:26 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:00:26 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:00:26 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:00:27 compute-0 sudo[47713]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:28 compute-0 python3.9[47870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:00:28 compute-0 sudo[48026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkbgdrtesbtykqvhevngxyvzepnpnyah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842828.5073805-320-201736750837457/AnsiballZ_dnf.py'
Jan 31 07:00:28 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 07:00:28 compute-0 sudo[48026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:28 compute-0 python3.9[48028]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:00:30 compute-0 sudo[48026]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:31 compute-0 sudo[48179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwhpihphdbvtkydrkccyacdajmscdfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842830.6525552-344-188730195354894/AnsiballZ_command.py'
Jan 31 07:00:31 compute-0 sudo[48179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:31 compute-0 python3.9[48181]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:00:31 compute-0 sudo[48179]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:32 compute-0 sudo[48466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miwkkuabbrmvahtcxwvvemhgdydkxoqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842832.1017063-368-48781193091719/AnsiballZ_file.py'
Jan 31 07:00:32 compute-0 sudo[48466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:32 compute-0 python3.9[48468]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 07:00:32 compute-0 sudo[48466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:33 compute-0 python3.9[48618]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:00:33 compute-0 sudo[48770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jspahdbttnujesbdkchnblzlslmqvrzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842833.676091-416-23605455218155/AnsiballZ_dnf.py'
Jan 31 07:00:33 compute-0 sudo[48770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:34 compute-0 python3.9[48772]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:00:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:00:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:00:35 compute-0 systemd[1]: Reloading.
Jan 31 07:00:35 compute-0 systemd-sysv-generator[48812]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:35 compute-0 systemd-rc-local-generator[48807]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:35 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:00:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:00:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:00:36 compute-0 sudo[48770]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:36 compute-0 systemd[1]: run-r26123b1a98594e4bb9aa4e1c84e39d40.service: Deactivated successfully.
Jan 31 07:00:37 compute-0 sudo[49088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwnyozivfycqxlglkjodakurlyqnfnlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842837.1525297-440-188482944741252/AnsiballZ_systemd.py'
Jan 31 07:00:37 compute-0 sudo[49088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:37 compute-0 python3.9[49090]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:00:37 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 07:00:37 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 07:00:37 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 07:00:37 compute-0 systemd[1]: Stopping Network Manager...
Jan 31 07:00:37 compute-0 NetworkManager[7200]: <info>  [1769842837.7022] caught SIGTERM, shutting down normally.
Jan 31 07:00:37 compute-0 NetworkManager[7200]: <info>  [1769842837.7035] dhcp4 (eth0): canceled DHCP transaction
Jan 31 07:00:37 compute-0 NetworkManager[7200]: <info>  [1769842837.7036] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:00:37 compute-0 NetworkManager[7200]: <info>  [1769842837.7036] dhcp4 (eth0): state changed no lease
Jan 31 07:00:37 compute-0 NetworkManager[7200]: <info>  [1769842837.7040] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:00:37 compute-0 NetworkManager[7200]: <info>  [1769842837.7105] exiting (success)
Jan 31 07:00:37 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:00:37 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:00:37 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 07:00:37 compute-0 systemd[1]: Stopped Network Manager.
Jan 31 07:00:37 compute-0 systemd[1]: NetworkManager.service: Consumed 25.809s CPU time, 4.1M memory peak, read 0B from disk, written 33.0K to disk.
Jan 31 07:00:37 compute-0 systemd[1]: Starting Network Manager...
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.7868] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:c2831af4-3642-4ec1-ade3-1145c1dbedc8)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.7870] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.7911] manager[0x561c70936000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 07:00:37 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 07:00:37 compute-0 systemd[1]: Started Hostname Service.
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8622] hostname: hostname: using hostnamed
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8624] hostname: static hostname changed from (none) to "compute-0"
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8628] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8631] manager[0x561c70936000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8632] manager[0x561c70936000]: rfkill: WWAN hardware radio set enabled
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8653] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8663] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8663] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8664] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8664] manager: Networking is enabled by state file
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8667] settings: Loaded settings plugin: keyfile (internal)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8670] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8694] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8702] dhcp: init: Using DHCP client 'internal'
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8704] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8709] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8714] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8722] device (lo): Activation: starting connection 'lo' (0c594273-030a-44be-9be5-1e39156ec0e8)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8727] device (eth0): carrier: link connected
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8731] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8736] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8736] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8744] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8751] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8756] device (eth1): carrier: link connected
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8760] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8770] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (0c8dc28c-09cf-5c4d-82d0-76839923fad3) (indicated)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8771] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8776] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8783] device (eth1): Activation: starting connection 'ci-private-network' (0c8dc28c-09cf-5c4d-82d0-76839923fad3)
Jan 31 07:00:37 compute-0 systemd[1]: Started Network Manager.
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8789] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8805] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8808] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8809] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8811] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8814] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8815] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8817] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8820] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8825] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8828] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8835] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8844] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8855] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8857] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8860] device (lo): Activation: successful, device activated.
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8867] dhcp4 (eth0): state changed new lease, address=38.129.56.152
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8872] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 07:00:37 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8937] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8941] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8945] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8948] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8950] device (eth1): Activation: successful, device activated.
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8957] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8959] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8961] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8963] device (eth0): Activation: successful, device activated.
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8967] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 07:00:37 compute-0 NetworkManager[49108]: <info>  [1769842837.8970] manager: startup complete
Jan 31 07:00:37 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 31 07:00:37 compute-0 sudo[49088]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:38 compute-0 sudo[49315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzqnwasojmomoscozojfunmretujurvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842838.1232889-464-274260335739526/AnsiballZ_dnf.py'
Jan 31 07:00:38 compute-0 sudo[49315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:38 compute-0 python3.9[49317]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:00:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:00:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:00:43 compute-0 systemd[1]: Reloading.
Jan 31 07:00:43 compute-0 systemd-rc-local-generator[49364]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:00:43 compute-0 systemd-sysv-generator[49367]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:00:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:00:44 compute-0 sudo[49315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:00:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:00:44 compute-0 systemd[1]: run-r31894c6764574d4c8e2a81258bed9c03.service: Deactivated successfully.
Jan 31 07:00:46 compute-0 sudo[49777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnggqehfzddsqjajrbrkingawvuobssh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842845.9634469-500-228347152591579/AnsiballZ_stat.py'
Jan 31 07:00:46 compute-0 sudo[49777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:46 compute-0 python3.9[49779]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:00:46 compute-0 sudo[49777]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:47 compute-0 sudo[49929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrryxpbvzycbknkkzdgeyvouxtjnxtbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842846.6235533-527-34050246994026/AnsiballZ_ini_file.py'
Jan 31 07:00:47 compute-0 sudo[49929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:47 compute-0 python3.9[49931]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:47 compute-0 sudo[49929]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:47 compute-0 sudo[50083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ednggzovllobvvxjjymjkgdcvvzxfrzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842847.5347326-557-169669437397473/AnsiballZ_ini_file.py'
Jan 31 07:00:47 compute-0 sudo[50083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:47 compute-0 python3.9[50085]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:47 compute-0 sudo[50083]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:47 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:00:48 compute-0 sudo[50235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drezmawqmzyrvasufvfepbpwrppjvqlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842848.1222568-557-17178282925433/AnsiballZ_ini_file.py'
Jan 31 07:00:48 compute-0 sudo[50235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:48 compute-0 python3.9[50237]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:48 compute-0 sudo[50235]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:48 compute-0 sudo[50387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reqmvekiwsatuqkmkobsgbaigkxwucjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842848.7241135-602-78413311569047/AnsiballZ_ini_file.py'
Jan 31 07:00:48 compute-0 sudo[50387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:49 compute-0 python3.9[50389]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:49 compute-0 sudo[50387]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:49 compute-0 sudo[50539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbppcfsnnpxebvbrnieynbsrwoewuask ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842849.2808106-602-12110298971427/AnsiballZ_ini_file.py'
Jan 31 07:00:49 compute-0 sudo[50539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:49 compute-0 python3.9[50541]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:49 compute-0 sudo[50539]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:50 compute-0 sudo[50691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdakynfmkndualtlwtcywqcgmxqvhprk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842849.914825-647-236624977659963/AnsiballZ_stat.py'
Jan 31 07:00:50 compute-0 sudo[50691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:50 compute-0 python3.9[50693]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:00:50 compute-0 sudo[50691]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:50 compute-0 sudo[50814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djgpqskpljbozmtyrlmkpnzuloaeahfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842849.914825-647-236624977659963/AnsiballZ_copy.py'
Jan 31 07:00:50 compute-0 sudo[50814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:51 compute-0 python3.9[50816]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842849.914825-647-236624977659963/.source _original_basename=.2fjza27_ follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:51 compute-0 sudo[50814]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:51 compute-0 sudo[50966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgupayropcwszdgzkvctlwzvgbssibli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842851.2317917-692-271374179924776/AnsiballZ_file.py'
Jan 31 07:00:51 compute-0 sudo[50966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:51 compute-0 python3.9[50968]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:51 compute-0 sudo[50966]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:52 compute-0 sudo[51118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dizsvubwhxsdlligrktfbkllbfpdfped ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842851.9110215-716-268968659766343/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 31 07:00:52 compute-0 sudo[51118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:52 compute-0 python3.9[51120]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 07:00:52 compute-0 sudo[51118]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:52 compute-0 sudo[51270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckhqkddxsaertgmhbnlxsyjqkgxrcze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842852.698089-743-229327593667360/AnsiballZ_file.py'
Jan 31 07:00:52 compute-0 sudo[51270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:53 compute-0 python3.9[51272]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:00:53 compute-0 sudo[51270]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:53 compute-0 sudo[51422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xysxkxzokmzcokchaaupnlxpfiigfunb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842853.4781895-773-22173407200879/AnsiballZ_stat.py'
Jan 31 07:00:53 compute-0 sudo[51422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:53 compute-0 sudo[51422]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:54 compute-0 sudo[51545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvmfibqlortfwvnueqwofwgzljcnccxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842853.4781895-773-22173407200879/AnsiballZ_copy.py'
Jan 31 07:00:54 compute-0 sudo[51545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:54 compute-0 sudo[51545]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:55 compute-0 sudo[51697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yroavfnwqnwcpwizjwlsezguifqrzuuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842854.7236416-818-231859157987455/AnsiballZ_slurp.py'
Jan 31 07:00:55 compute-0 sudo[51697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:55 compute-0 python3.9[51699]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 07:00:55 compute-0 sudo[51697]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:56 compute-0 sudo[51872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvugiuhxdqrvuxnfaiezbzcjwzfvxrhh ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842855.5543883-845-16501796219260/async_wrapper.py j703236905127 300 /home/zuul/.ansible/tmp/ansible-tmp-1769842855.5543883-845-16501796219260/AnsiballZ_edpm_os_net_config.py _'
Jan 31 07:00:56 compute-0 sudo[51872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:00:56 compute-0 ansible-async_wrapper.py[51874]: Invoked with j703236905127 300 /home/zuul/.ansible/tmp/ansible-tmp-1769842855.5543883-845-16501796219260/AnsiballZ_edpm_os_net_config.py _
Jan 31 07:00:56 compute-0 ansible-async_wrapper.py[51877]: Starting module and watcher
Jan 31 07:00:56 compute-0 ansible-async_wrapper.py[51877]: Start watching 51878 (300)
Jan 31 07:00:56 compute-0 ansible-async_wrapper.py[51878]: Start module (51878)
Jan 31 07:00:56 compute-0 ansible-async_wrapper.py[51874]: Return async_wrapper task started.
Jan 31 07:00:56 compute-0 sudo[51872]: pam_unix(sudo:session): session closed for user root
Jan 31 07:00:56 compute-0 python3.9[51879]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 07:00:57 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 07:00:57 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 07:00:57 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 07:00:57 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 07:00:57 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.4586] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.4597] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.4985] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.4986] audit: op="connection-add" uuid="92c2de8c-9391-4179-8086-688f36a89ff2" name="br-ex-br" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.4999] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.4999] audit: op="connection-add" uuid="fa95e63b-7ee4-41c6-be61-00476cd170ed" name="br-ex-port" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5008] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5009] audit: op="connection-add" uuid="1b04aaaa-10c3-4379-9ca2-a32cd0b9390f" name="eth1-port" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5018] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5019] audit: op="connection-add" uuid="f0ac0f64-28b1-4cca-a7c1-66d46f0e19e9" name="vlan20-port" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5027] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5028] audit: op="connection-add" uuid="7560ebdb-c71f-4aa6-a11a-a92dca008ad9" name="vlan21-port" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5037] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5038] audit: op="connection-add" uuid="4822e3a5-1886-4d39-92b3-8f0edb5b0ba5" name="vlan22-port" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5048] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5050] audit: op="connection-add" uuid="8d465afd-0172-479a-991b-f3b9067d1e90" name="vlan23-port" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5066] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5078] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5079] audit: op="connection-add" uuid="e8a690fd-f7f8-4368-94b7-0efb12854bed" name="br-ex-if" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5118] audit: op="connection-update" uuid="0c8dc28c-09cf-5c4d-82d0-76839923fad3" name="ci-private-network" args="connection.controller,connection.master,connection.slave-type,connection.port-type,connection.timestamp,ipv4.never-default,ipv4.routes,ipv4.method,ipv4.addresses,ipv4.dns,ipv4.routing-rules,ipv6.dns,ipv6.routes,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ipv6.routing-rules,ovs-external-ids.data,ovs-interface.type" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5130] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5132] audit: op="connection-add" uuid="f3df2045-414a-45ac-bf28-d6df6a37d4f0" name="vlan20-if" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5144] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5145] audit: op="connection-add" uuid="6cd34376-d1d7-426d-abfa-9bb96eb64ba7" name="vlan21-if" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5157] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5158] audit: op="connection-add" uuid="937cf22b-2e5a-4b67-8109-c401527b972a" name="vlan22-if" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5171] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5172] audit: op="connection-add" uuid="1e87c4a9-7c84-4618-9162-2e43b5df08a3" name="vlan23-if" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5179] audit: op="connection-delete" uuid="8273820e-2eee-3fa4-9427-36949f2c7c4b" name="Wired connection 1" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5187] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5189] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5193] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5196] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (92c2de8c-9391-4179-8086-688f36a89ff2)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5197] audit: op="connection-activate" uuid="92c2de8c-9391-4179-8086-688f36a89ff2" name="br-ex-br" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5198] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5199] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5203] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5206] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (fa95e63b-7ee4-41c6-be61-00476cd170ed)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5207] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5208] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5211] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5214] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (1b04aaaa-10c3-4379-9ca2-a32cd0b9390f)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5216] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5216] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5220] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5223] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f0ac0f64-28b1-4cca-a7c1-66d46f0e19e9)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5224] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5225] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5229] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5232] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7560ebdb-c71f-4aa6-a11a-a92dca008ad9)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5233] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5234] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5238] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5241] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4822e3a5-1886-4d39-92b3-8f0edb5b0ba5)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5243] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5243] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5247] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5250] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8d465afd-0172-479a-991b-f3b9067d1e90)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5250] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5252] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5254] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5258] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5259] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5261] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5264] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e8a690fd-f7f8-4368-94b7-0efb12854bed)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5264] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5267] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5268] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5269] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5270] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5278] device (eth1): disconnecting for new activation request.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5278] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5281] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5282] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5283] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5285] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5286] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5289] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5292] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (f3df2045-414a-45ac-bf28-d6df6a37d4f0)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5292] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5294] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5296] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5297] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5299] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5300] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5302] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5305] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (6cd34376-d1d7-426d-abfa-9bb96eb64ba7)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5306] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5308] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5309] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5311] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5313] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5313] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5316] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5319] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (937cf22b-2e5a-4b67-8109-c401527b972a)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5320] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5322] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5324] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5325] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5326] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <warn>  [1769842858.5327] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5329] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5332] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (1e87c4a9-7c84-4618-9162-2e43b5df08a3)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5332] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5334] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5336] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5336] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5337] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5346] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5347] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5349] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5350] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5355] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5357] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5360] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5382] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5384] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5388] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5392] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5396] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5397] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 kernel: Timeout policy base is empty
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5402] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5405] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5409] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5411] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5415] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5419] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5421] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5423] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5427] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5431] dhcp4 (eth0): canceled DHCP transaction
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5431] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5431] dhcp4 (eth0): state changed no lease
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5433] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 07:00:58 compute-0 systemd-udevd[51886]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5441] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5444] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51880 uid=0 result="fail" reason="Device is not activated"
Jan 31 07:00:58 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5556] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5562] dhcp4 (eth0): state changed new lease, address=38.129.56.152
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5617] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5625] device (eth1): disconnecting for new activation request.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5625] audit: op="connection-activate" uuid="0c8dc28c-09cf-5c4d-82d0-76839923fad3" name="ci-private-network" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 kernel: br-ex: entered promiscuous mode
Jan 31 07:00:58 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5641] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5651] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5674] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51880 uid=0 result="success"
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5694] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 07:00:58 compute-0 kernel: vlan22: entered promiscuous mode
Jan 31 07:00:58 compute-0 systemd-udevd[51884]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:00:58 compute-0 kernel: vlan23: entered promiscuous mode
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5768] device (eth1): Activation: starting connection 'ci-private-network' (0c8dc28c-09cf-5c4d-82d0-76839923fad3)
Jan 31 07:00:58 compute-0 systemd-udevd[51885]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5789] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5793] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5805] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5816] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5817] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5819] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5820] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5821] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5823] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5824] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5829] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5841] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5844] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5849] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 kernel: vlan20: entered promiscuous mode
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5852] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5856] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5859] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5862] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5865] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5869] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5872] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5877] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5880] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5884] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5888] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5895] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5901] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 kernel: vlan21: entered promiscuous mode
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5912] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-0 systemd-udevd[51976]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:00:58 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5969] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5969] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5976] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5991] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.5996] device (eth1): Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6004] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6016] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6021] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6033] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6038] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6048] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6052] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6065] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6070] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6079] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6084] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6098] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6099] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6104] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6111] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6115] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6122] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6126] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6135] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6137] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:00:58 compute-0 NetworkManager[49108]: <info>  [1769842858.6141] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 07:00:59 compute-0 NetworkManager[49108]: <info>  [1769842859.7373] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51880 uid=0 result="success"
Jan 31 07:00:59 compute-0 NetworkManager[49108]: <info>  [1769842859.9236] checkpoint[0x561c7090a950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 07:00:59 compute-0 NetworkManager[49108]: <info>  [1769842859.9238] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51880 uid=0 result="success"
Jan 31 07:00:59 compute-0 sudo[52236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfaqjjlgoqwsvsvixccgzqbefcdmvcbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842859.527041-845-200471638086174/AnsiballZ_async_status.py'
Jan 31 07:00:59 compute-0 sudo[52236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:00 compute-0 python3.9[52239]: ansible-ansible.legacy.async_status Invoked with jid=j703236905127.51874 mode=status _async_dir=/root/.ansible_async
Jan 31 07:01:00 compute-0 sudo[52236]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:00 compute-0 NetworkManager[49108]: <info>  [1769842860.2568] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51880 uid=0 result="success"
Jan 31 07:01:00 compute-0 NetworkManager[49108]: <info>  [1769842860.2579] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51880 uid=0 result="success"
Jan 31 07:01:00 compute-0 NetworkManager[49108]: <info>  [1769842860.4775] audit: op="networking-control" arg="global-dns-configuration" pid=51880 uid=0 result="success"
Jan 31 07:01:00 compute-0 NetworkManager[49108]: <info>  [1769842860.4818] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 07:01:00 compute-0 NetworkManager[49108]: <info>  [1769842860.4861] audit: op="networking-control" arg="global-dns-configuration" pid=51880 uid=0 result="success"
Jan 31 07:01:00 compute-0 NetworkManager[49108]: <info>  [1769842860.4887] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51880 uid=0 result="success"
Jan 31 07:01:00 compute-0 NetworkManager[49108]: <info>  [1769842860.6411] checkpoint[0x561c7090aa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 07:01:00 compute-0 NetworkManager[49108]: <info>  [1769842860.6418] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51880 uid=0 result="success"
Jan 31 07:01:00 compute-0 ansible-async_wrapper.py[51878]: Module complete (51878)
Jan 31 07:01:01 compute-0 ansible-async_wrapper.py[51877]: Done in kid B.
Jan 31 07:01:01 compute-0 CROND[52247]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 07:01:01 compute-0 run-parts[52250]: (/etc/cron.hourly) starting 0anacron
Jan 31 07:01:01 compute-0 anacron[52258]: Anacron started on 2026-01-31
Jan 31 07:01:01 compute-0 anacron[52258]: Will run job `cron.daily' in 8 min.
Jan 31 07:01:01 compute-0 anacron[52258]: Will run job `cron.weekly' in 28 min.
Jan 31 07:01:01 compute-0 anacron[52258]: Will run job `cron.monthly' in 48 min.
Jan 31 07:01:01 compute-0 anacron[52258]: Jobs will be executed sequentially
Jan 31 07:01:01 compute-0 run-parts[52260]: (/etc/cron.hourly) finished 0anacron
Jan 31 07:01:01 compute-0 CROND[52246]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 07:01:03 compute-0 sudo[52357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrmhhwdfmyjifpdesoxyyyfdqsaqssvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842859.527041-845-200471638086174/AnsiballZ_async_status.py'
Jan 31 07:01:03 compute-0 sudo[52357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:03 compute-0 python3.9[52359]: ansible-ansible.legacy.async_status Invoked with jid=j703236905127.51874 mode=status _async_dir=/root/.ansible_async
Jan 31 07:01:03 compute-0 sudo[52357]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:03 compute-0 sudo[52457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aujuohajzutbpuntjorpwxtzkhwwtarn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842859.527041-845-200471638086174/AnsiballZ_async_status.py'
Jan 31 07:01:03 compute-0 sudo[52457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:04 compute-0 python3.9[52459]: ansible-ansible.legacy.async_status Invoked with jid=j703236905127.51874 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 07:01:04 compute-0 sudo[52457]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:04 compute-0 sudo[52609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqiytpmobqoxyprbuwtjrnpyaiaplifn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842864.3344557-926-106953305366804/AnsiballZ_stat.py'
Jan 31 07:01:04 compute-0 sudo[52609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:04 compute-0 python3.9[52611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:04 compute-0 sudo[52609]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:05 compute-0 sudo[52732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwinyvlfilqgibodloyrratqzjzucvug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842864.3344557-926-106953305366804/AnsiballZ_copy.py'
Jan 31 07:01:05 compute-0 sudo[52732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:05 compute-0 python3.9[52734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842864.3344557-926-106953305366804/.source.returncode _original_basename=.yyx_am60 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:05 compute-0 sudo[52732]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:05 compute-0 sudo[52884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usvgxpaflrssxbcoeytlmvgcasyeykgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842865.5444105-974-33030003056400/AnsiballZ_stat.py'
Jan 31 07:01:05 compute-0 sudo[52884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:06 compute-0 python3.9[52886]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:06 compute-0 sudo[52884]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:06 compute-0 sudo[53007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weegoiqdxdrffcdkljcxgpsuslqrxzsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842865.5444105-974-33030003056400/AnsiballZ_copy.py'
Jan 31 07:01:06 compute-0 sudo[53007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:06 compute-0 python3.9[53009]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842865.5444105-974-33030003056400/.source.cfg _original_basename=.t9vx7pij follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:06 compute-0 sudo[53007]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:07 compute-0 sudo[53160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwycwkogmzndaxncejuygxtxbmuwgraj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842866.7039359-1019-79645939949600/AnsiballZ_systemd.py'
Jan 31 07:01:07 compute-0 sudo[53160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:07 compute-0 python3.9[53162]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:01:07 compute-0 systemd[1]: Reloading Network Manager...
Jan 31 07:01:07 compute-0 NetworkManager[49108]: <info>  [1769842867.5179] audit: op="reload" arg="0" pid=53166 uid=0 result="success"
Jan 31 07:01:07 compute-0 NetworkManager[49108]: <info>  [1769842867.5191] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 07:01:07 compute-0 systemd[1]: Reloaded Network Manager.
Jan 31 07:01:07 compute-0 sudo[53160]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:07 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:01:07 compute-0 sshd-session[45103]: Connection closed by 192.168.122.30 port 49722
Jan 31 07:01:07 compute-0 sshd-session[45100]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:01:07 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 07:01:07 compute-0 systemd[1]: session-10.scope: Consumed 43.797s CPU time.
Jan 31 07:01:07 compute-0 systemd-logind[816]: Session 10 logged out. Waiting for processes to exit.
Jan 31 07:01:07 compute-0 systemd-logind[816]: Removed session 10.
Jan 31 07:01:13 compute-0 sshd-session[53199]: Accepted publickey for zuul from 192.168.122.30 port 58986 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:01:13 compute-0 systemd-logind[816]: New session 11 of user zuul.
Jan 31 07:01:13 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 31 07:01:13 compute-0 sshd-session[53199]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:01:14 compute-0 python3.9[53352]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:01:15 compute-0 python3.9[53506]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:01:17 compute-0 python3.9[53700]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:01:17 compute-0 sshd-session[53202]: Connection closed by 192.168.122.30 port 58986
Jan 31 07:01:17 compute-0 sshd-session[53199]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:01:17 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 07:01:17 compute-0 systemd[1]: session-11.scope: Consumed 2.111s CPU time.
Jan 31 07:01:17 compute-0 systemd-logind[816]: Session 11 logged out. Waiting for processes to exit.
Jan 31 07:01:17 compute-0 systemd-logind[816]: Removed session 11.
Jan 31 07:01:17 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:01:24 compute-0 sshd-session[53729]: Accepted publickey for zuul from 192.168.122.30 port 45178 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:01:24 compute-0 systemd-logind[816]: New session 12 of user zuul.
Jan 31 07:01:24 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 31 07:01:24 compute-0 sshd-session[53729]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:01:25 compute-0 python3.9[53882]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:01:26 compute-0 python3.9[54036]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:01:26 compute-0 sudo[54191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfnpgygcexilsqukrmwsbbkxbkeawodc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842886.597129-80-92799562575715/AnsiballZ_setup.py'
Jan 31 07:01:26 compute-0 sudo[54191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:27 compute-0 python3.9[54193]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:01:27 compute-0 sudo[54191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:27 compute-0 sudo[54275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brqdivllhxibtbedhqrsfxxgjxobtqyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842886.597129-80-92799562575715/AnsiballZ_dnf.py'
Jan 31 07:01:27 compute-0 sudo[54275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:27 compute-0 python3.9[54277]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:01:29 compute-0 sudo[54275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:29 compute-0 sudo[54429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znxgttfbfzigxaikaevkwcfqqeoewdbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842889.5411599-116-242464390427823/AnsiballZ_setup.py'
Jan 31 07:01:29 compute-0 sudo[54429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:30 compute-0 python3.9[54431]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:01:30 compute-0 sudo[54429]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:31 compute-0 sudo[54624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqozfdncrekxjxkuxqykhpvuuerdnbgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842890.7023797-149-268365694424406/AnsiballZ_file.py'
Jan 31 07:01:31 compute-0 sudo[54624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:31 compute-0 python3.9[54626]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:31 compute-0 sudo[54624]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:31 compute-0 sudo[54776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjqrfjniwgxaovovnkdrwilqcfagueh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842891.5860987-173-54591193627612/AnsiballZ_command.py'
Jan 31 07:01:31 compute-0 sudo[54776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:32 compute-0 python3.9[54778]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:01:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4082291723-merged.mount: Deactivated successfully.
Jan 31 07:01:32 compute-0 podman[54779]: 2026-01-31 07:01:32.238940209 +0000 UTC m=+0.077236689 system refresh
Jan 31 07:01:32 compute-0 sudo[54776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:32 compute-0 sudo[54939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgzzyiuvhjeyvsaumugbnupommwnqsak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842892.4607844-197-188420144894365/AnsiballZ_stat.py'
Jan 31 07:01:32 compute-0 sudo[54939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:33 compute-0 python3.9[54941]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:33 compute-0 sudo[54939]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:01:33 compute-0 sudo[55062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewbpnjycrtyixtddimsjefvnopocnkdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842892.4607844-197-188420144894365/AnsiballZ_copy.py'
Jan 31 07:01:33 compute-0 sudo[55062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:33 compute-0 python3.9[55064]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842892.4607844-197-188420144894365/.source.json follow=False _original_basename=podman_network_config.j2 checksum=bc70a8f50f8191d7a496f87a9f16754a9fb1d04f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:33 compute-0 sudo[55062]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:34 compute-0 sudo[55214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llefdodtfjdybwjywqhbqxlqpbudcxky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842894.1013598-242-277053612414063/AnsiballZ_stat.py'
Jan 31 07:01:34 compute-0 sudo[55214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:34 compute-0 python3.9[55216]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:34 compute-0 sudo[55214]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:34 compute-0 sudo[55337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pupcdaqtfjlehmwduhhqeuwbqtqbevcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842894.1013598-242-277053612414063/AnsiballZ_copy.py'
Jan 31 07:01:34 compute-0 sudo[55337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:35 compute-0 python3.9[55339]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842894.1013598-242-277053612414063/.source.conf follow=False _original_basename=registries.conf.j2 checksum=a4fd3ca7d18166099562a65af8d6da655db34efc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:35 compute-0 sudo[55337]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:35 compute-0 sudo[55489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skjfvysqdqzeclnrfwjrwiagjowmlyic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842895.3802078-290-172799750763150/AnsiballZ_ini_file.py'
Jan 31 07:01:35 compute-0 sudo[55489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:35 compute-0 python3.9[55491]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:35 compute-0 sudo[55489]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:36 compute-0 sudo[55641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oooxbjykmgvexdnsgyssrxxmedjygilg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842896.0593367-290-61917551668947/AnsiballZ_ini_file.py'
Jan 31 07:01:36 compute-0 sudo[55641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:36 compute-0 python3.9[55643]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:36 compute-0 sudo[55641]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:36 compute-0 sudo[55793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqklbqgleeckfpqpketrtpjebvvzxnlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842896.6294048-290-119943394939260/AnsiballZ_ini_file.py'
Jan 31 07:01:36 compute-0 sudo[55793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:37 compute-0 python3.9[55795]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:37 compute-0 sudo[55793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:37 compute-0 sudo[55945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwafvmtqkfegmtsovsfwbvqkxclnziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842897.1933815-290-207184483102873/AnsiballZ_ini_file.py'
Jan 31 07:01:37 compute-0 sudo[55945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:37 compute-0 python3.9[55947]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:01:37 compute-0 sudo[55945]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:38 compute-0 sudo[56097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjcrpksivqscidopsnbzfkihwdciwpio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842897.951451-383-236959695243447/AnsiballZ_dnf.py'
Jan 31 07:01:38 compute-0 sudo[56097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:38 compute-0 python3.9[56099]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:01:39 compute-0 sudo[56097]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:40 compute-0 sudo[56250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohmjbczlaexcpdnmczgweeqchimobgtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842900.491271-416-155369406000548/AnsiballZ_setup.py'
Jan 31 07:01:40 compute-0 sudo[56250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:40 compute-0 python3.9[56252]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:01:41 compute-0 sudo[56250]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:41 compute-0 sudo[56404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blszaavaommpddblfujnlxxpttkzdste ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842901.2233193-440-110670737009348/AnsiballZ_stat.py'
Jan 31 07:01:41 compute-0 sudo[56404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:41 compute-0 python3.9[56406]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:01:41 compute-0 sudo[56404]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:42 compute-0 sudo[56556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eycqljhbjxqloytooddyyhjrcslaxqzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842901.9521224-467-47966854215194/AnsiballZ_stat.py'
Jan 31 07:01:42 compute-0 sudo[56556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:42 compute-0 python3.9[56558]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:01:42 compute-0 sudo[56556]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:42 compute-0 sudo[56708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldvsbxdzlbnnpknvdakdmqirbhgngiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842902.715257-497-100783704396707/AnsiballZ_command.py'
Jan 31 07:01:42 compute-0 sudo[56708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:43 compute-0 python3.9[56710]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:01:43 compute-0 sudo[56708]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:43 compute-0 sudo[56861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sovaoqujqvtdbkkkvrpyxqroxzsfagep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842903.4498744-527-241177866251605/AnsiballZ_service_facts.py'
Jan 31 07:01:43 compute-0 sudo[56861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:44 compute-0 python3.9[56863]: ansible-service_facts Invoked
Jan 31 07:01:44 compute-0 network[56880]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:01:44 compute-0 network[56881]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:01:44 compute-0 network[56882]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:01:46 compute-0 sudo[56861]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:47 compute-0 sudo[57165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrvuoazmryluvouxlosbbcgokxsyhkfk ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769842907.3690755-572-6922051701949/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769842907.3690755-572-6922051701949/args'
Jan 31 07:01:47 compute-0 sudo[57165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:47 compute-0 sudo[57165]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:48 compute-0 sudo[57332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgmrneebcafwbhxdhijlqbrawgesnfzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842908.022184-605-1624458319810/AnsiballZ_dnf.py'
Jan 31 07:01:48 compute-0 sudo[57332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:48 compute-0 python3.9[57334]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:01:49 compute-0 sudo[57332]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:51 compute-0 sudo[57485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhbmlmbbmivryctkwczbyqugjtygnfsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842910.479245-644-75273881245546/AnsiballZ_package_facts.py'
Jan 31 07:01:51 compute-0 sudo[57485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:51 compute-0 python3.9[57487]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 07:01:51 compute-0 sudo[57485]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:52 compute-0 sudo[57637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nckjypokrjatxlrbpydtxvdsjaejrgpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842912.3480918-674-181055211736152/AnsiballZ_stat.py'
Jan 31 07:01:52 compute-0 sudo[57637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:52 compute-0 python3.9[57639]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:52 compute-0 sudo[57637]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:53 compute-0 sudo[57762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idjdpymtktwmsbcrdrxjhlhaapctamyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842912.3480918-674-181055211736152/AnsiballZ_copy.py'
Jan 31 07:01:53 compute-0 sudo[57762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:53 compute-0 python3.9[57764]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842912.3480918-674-181055211736152/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:53 compute-0 sudo[57762]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:53 compute-0 sudo[57916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikypmzupajsbfvhtgnlljhkxddqjhzwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842913.5510418-719-85665099107252/AnsiballZ_stat.py'
Jan 31 07:01:53 compute-0 sudo[57916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:54 compute-0 python3.9[57918]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:01:54 compute-0 sudo[57916]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:54 compute-0 sudo[58041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syywxhptfreiuzneixiohappimtvagfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842913.5510418-719-85665099107252/AnsiballZ_copy.py'
Jan 31 07:01:54 compute-0 sudo[58041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:54 compute-0 python3.9[58043]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842913.5510418-719-85665099107252/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:54 compute-0 sudo[58041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:56 compute-0 sudo[58195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdgduvwaefzmtzrffccwyxumwfixxcas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842915.6900506-782-51147659521508/AnsiballZ_lineinfile.py'
Jan 31 07:01:56 compute-0 sudo[58195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:56 compute-0 python3.9[58197]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:01:56 compute-0 sudo[58195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:57 compute-0 sudo[58349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvahpxnjhdmuqolvpslblyteogkrgkxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842917.4459743-827-222778398404727/AnsiballZ_setup.py'
Jan 31 07:01:57 compute-0 sudo[58349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:58 compute-0 python3.9[58351]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:01:58 compute-0 sudo[58349]: pam_unix(sudo:session): session closed for user root
Jan 31 07:01:58 compute-0 sudo[58433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlehrhxlbsuqdhpetcwrnrsdrwtabwuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842917.4459743-827-222778398404727/AnsiballZ_systemd.py'
Jan 31 07:01:58 compute-0 sudo[58433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:01:59 compute-0 python3.9[58435]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:01:59 compute-0 sudo[58433]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:00 compute-0 sudo[58587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dveuevbrjvntackaftlilmyxfevideor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842920.063385-875-61901228340573/AnsiballZ_setup.py'
Jan 31 07:02:00 compute-0 sudo[58587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:00 compute-0 python3.9[58589]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:02:00 compute-0 sudo[58587]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:01 compute-0 sudo[58671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yongrbohqhqbgeigqxcqgswsupusvqog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842920.063385-875-61901228340573/AnsiballZ_systemd.py'
Jan 31 07:02:01 compute-0 sudo[58671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:01 compute-0 python3.9[58673]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:02:01 compute-0 chronyd[828]: chronyd exiting
Jan 31 07:02:01 compute-0 systemd[1]: Stopping NTP client/server...
Jan 31 07:02:01 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 07:02:01 compute-0 systemd[1]: Stopped NTP client/server.
Jan 31 07:02:01 compute-0 systemd[1]: Starting NTP client/server...
Jan 31 07:02:01 compute-0 chronyd[58682]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 07:02:01 compute-0 chronyd[58682]: Frequency -23.865 +/- 0.194 ppm read from /var/lib/chrony/drift
Jan 31 07:02:01 compute-0 chronyd[58682]: Loaded seccomp filter (level 2)
Jan 31 07:02:01 compute-0 systemd[1]: Started NTP client/server.
Jan 31 07:02:01 compute-0 sudo[58671]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:01 compute-0 sshd-session[53732]: Connection closed by 192.168.122.30 port 45178
Jan 31 07:02:01 compute-0 sshd-session[53729]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:02:01 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 07:02:01 compute-0 systemd[1]: session-12.scope: Consumed 22.376s CPU time.
Jan 31 07:02:01 compute-0 systemd-logind[816]: Session 12 logged out. Waiting for processes to exit.
Jan 31 07:02:01 compute-0 systemd-logind[816]: Removed session 12.
Jan 31 07:02:07 compute-0 sshd-session[58708]: Accepted publickey for zuul from 192.168.122.30 port 53756 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:02:07 compute-0 systemd-logind[816]: New session 13 of user zuul.
Jan 31 07:02:07 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 31 07:02:07 compute-0 sshd-session[58708]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:02:08 compute-0 sudo[58861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkglicsugzafpmeyvhftkfvnpuvncawj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842927.786328-26-113948414303274/AnsiballZ_file.py'
Jan 31 07:02:08 compute-0 sudo[58861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:08 compute-0 python3.9[58863]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:08 compute-0 sudo[58861]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:08 compute-0 sudo[59013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufiywgicgzlvqozzmyymktahkziojitb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842928.5525281-62-203782118232832/AnsiballZ_stat.py'
Jan 31 07:02:08 compute-0 sudo[59013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:09 compute-0 python3.9[59015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:09 compute-0 sudo[59013]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:09 compute-0 sudo[59136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggtlqlpqdmoxjwrhgeerbrldkztuzgri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842928.5525281-62-203782118232832/AnsiballZ_copy.py'
Jan 31 07:02:09 compute-0 sudo[59136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:09 compute-0 python3.9[59138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842928.5525281-62-203782118232832/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:09 compute-0 sudo[59136]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:10 compute-0 sshd-session[58711]: Connection closed by 192.168.122.30 port 53756
Jan 31 07:02:10 compute-0 sshd-session[58708]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:02:10 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 07:02:10 compute-0 systemd[1]: session-13.scope: Consumed 1.521s CPU time.
Jan 31 07:02:10 compute-0 systemd-logind[816]: Session 13 logged out. Waiting for processes to exit.
Jan 31 07:02:10 compute-0 systemd-logind[816]: Removed session 13.
Jan 31 07:02:15 compute-0 sshd-session[59163]: Accepted publickey for zuul from 192.168.122.30 port 36674 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:02:15 compute-0 systemd-logind[816]: New session 14 of user zuul.
Jan 31 07:02:15 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 31 07:02:15 compute-0 sshd-session[59163]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:02:16 compute-0 python3.9[59316]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:02:17 compute-0 sudo[59470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypzzqlknougyegzdsvdgvcqipzqtzxuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842937.1553726-59-181674806268591/AnsiballZ_file.py'
Jan 31 07:02:17 compute-0 sudo[59470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:17 compute-0 python3.9[59472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:17 compute-0 sudo[59470]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:18 compute-0 sudo[59645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ardipfunoqgqytbxsgqndbigyxblhdct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842937.9480944-83-215664544816436/AnsiballZ_stat.py'
Jan 31 07:02:18 compute-0 sudo[59645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:18 compute-0 python3.9[59647]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:18 compute-0 sudo[59645]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:19 compute-0 sudo[59768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzsoygkwmphkosrekurufucovpiujurw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842937.9480944-83-215664544816436/AnsiballZ_copy.py'
Jan 31 07:02:19 compute-0 sudo[59768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:19 compute-0 python3.9[59770]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769842937.9480944-83-215664544816436/.source.json _original_basename=.qigyr8hj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:19 compute-0 sudo[59768]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:19 compute-0 sudo[59920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wswtiulvthkjxywghqwllvmiocbemzgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842939.7734723-152-219170688829944/AnsiballZ_stat.py'
Jan 31 07:02:19 compute-0 sudo[59920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:20 compute-0 python3.9[59922]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:20 compute-0 sudo[59920]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:20 compute-0 sudo[60043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpgdkilvvobdintbzqoaqmflfxqqpybu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842939.7734723-152-219170688829944/AnsiballZ_copy.py'
Jan 31 07:02:20 compute-0 sudo[60043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:20 compute-0 python3.9[60045]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842939.7734723-152-219170688829944/.source _original_basename=.btkxww01 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:20 compute-0 sudo[60043]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:21 compute-0 sudo[60197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkfegakekrfdmtnvutdhlaiyujokutvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842941.3659022-200-113048483752257/AnsiballZ_file.py'
Jan 31 07:02:21 compute-0 sudo[60197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:21 compute-0 python3.9[60199]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:02:21 compute-0 sudo[60197]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:22 compute-0 sudo[60349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkcttasnruskxrslsbinxrgcwnbcppun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842942.0247738-224-244443076280272/AnsiballZ_stat.py'
Jan 31 07:02:22 compute-0 sudo[60349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:22 compute-0 python3.9[60351]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:22 compute-0 sudo[60349]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:22 compute-0 sudo[60472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpohsiiehzpmeduappqgazrjnaelclmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842942.0247738-224-244443076280272/AnsiballZ_copy.py'
Jan 31 07:02:22 compute-0 sudo[60472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:22 compute-0 python3.9[60474]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842942.0247738-224-244443076280272/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:02:22 compute-0 sudo[60472]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:23 compute-0 sudo[60624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcycenvcoutbofhdmpdnlhgcwsuhxgfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842943.077818-224-79678685722601/AnsiballZ_stat.py'
Jan 31 07:02:23 compute-0 sudo[60624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:23 compute-0 python3.9[60626]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:23 compute-0 sudo[60624]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:23 compute-0 sudo[60747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqlfnhsekmzcfhovnmyrwinrqimsmfzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842943.077818-224-79678685722601/AnsiballZ_copy.py'
Jan 31 07:02:23 compute-0 sudo[60747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:24 compute-0 python3.9[60749]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769842943.077818-224-79678685722601/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:02:24 compute-0 sudo[60747]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:24 compute-0 sudo[60899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcbfsmzwhptlgvprxrdaqkhiurswmlmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842944.2523232-311-87620664557445/AnsiballZ_file.py'
Jan 31 07:02:24 compute-0 sudo[60899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:24 compute-0 python3.9[60901]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:24 compute-0 sudo[60899]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:25 compute-0 sudo[61051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smnfttujykwjlsaogdbnmziqjttnntti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842944.8703573-335-121775266178745/AnsiballZ_stat.py'
Jan 31 07:02:25 compute-0 sudo[61051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:25 compute-0 python3.9[61053]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:25 compute-0 sudo[61051]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:25 compute-0 sudo[61174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-debqbamrjpzhpahitotntedtuylkgqju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842944.8703573-335-121775266178745/AnsiballZ_copy.py'
Jan 31 07:02:25 compute-0 sudo[61174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:26 compute-0 python3.9[61176]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842944.8703573-335-121775266178745/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:26 compute-0 sudo[61174]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:26 compute-0 sudo[61326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zucmukobwpbtgzzzvnhvycmkzyvkkppt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842946.1981733-380-143991271307419/AnsiballZ_stat.py'
Jan 31 07:02:26 compute-0 sudo[61326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:26 compute-0 python3.9[61328]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:26 compute-0 sudo[61326]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:26 compute-0 sudo[61449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbdfhnshzueucxwonylfxgvpwtimyjsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842946.1981733-380-143991271307419/AnsiballZ_copy.py'
Jan 31 07:02:26 compute-0 sudo[61449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:27 compute-0 python3.9[61451]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842946.1981733-380-143991271307419/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:27 compute-0 sudo[61449]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:27 compute-0 sudo[61601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjzilpxxxnmnjucdjxwyjpmfynqeinud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842947.3747318-425-154869041448511/AnsiballZ_systemd.py'
Jan 31 07:02:27 compute-0 sudo[61601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:28 compute-0 python3.9[61603]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:28 compute-0 systemd[1]: Reloading.
Jan 31 07:02:28 compute-0 systemd-rc-local-generator[61629]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:28 compute-0 systemd-sysv-generator[61633]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:28 compute-0 systemd[1]: Reloading.
Jan 31 07:02:28 compute-0 systemd-sysv-generator[61668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:28 compute-0 systemd-rc-local-generator[61665]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:28 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 07:02:28 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 07:02:28 compute-0 sudo[61601]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:29 compute-0 sudo[61830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjrqthjjmdonvrcnjcyxnpikhiqyzbog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842948.8888507-449-220922298587066/AnsiballZ_stat.py'
Jan 31 07:02:29 compute-0 sudo[61830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:29 compute-0 python3.9[61832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:29 compute-0 sudo[61830]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:29 compute-0 sudo[61953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgtjitfabdkpqlsnjkdeylshrcxmurxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842948.8888507-449-220922298587066/AnsiballZ_copy.py'
Jan 31 07:02:29 compute-0 sudo[61953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:29 compute-0 python3.9[61955]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842948.8888507-449-220922298587066/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:29 compute-0 sudo[61953]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:30 compute-0 sudo[62105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icwvuouopqddlptvpdgxyqmwyfompfkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842950.1793177-494-186577060727583/AnsiballZ_stat.py'
Jan 31 07:02:30 compute-0 sudo[62105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:30 compute-0 python3.9[62107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:30 compute-0 sudo[62105]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:30 compute-0 sudo[62228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkattyouywvamrzjjybqredgdeclwafg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842950.1793177-494-186577060727583/AnsiballZ_copy.py'
Jan 31 07:02:30 compute-0 sudo[62228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:31 compute-0 python3.9[62230]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842950.1793177-494-186577060727583/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:31 compute-0 sudo[62228]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:31 compute-0 sudo[62380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gatqkfgdxbzjhggzytbehruuaonbgluu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842951.3478277-539-185479656340999/AnsiballZ_systemd.py'
Jan 31 07:02:31 compute-0 sudo[62380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:31 compute-0 python3.9[62382]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:31 compute-0 systemd[1]: Reloading.
Jan 31 07:02:31 compute-0 systemd-rc-local-generator[62409]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:31 compute-0 systemd-sysv-generator[62413]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:32 compute-0 systemd[1]: Reloading.
Jan 31 07:02:32 compute-0 systemd-rc-local-generator[62445]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:32 compute-0 systemd-sysv-generator[62449]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:32 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 07:02:32 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:02:32 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:02:32 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 07:02:32 compute-0 sudo[62380]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:33 compute-0 python3.9[62607]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:02:33 compute-0 network[62624]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:02:33 compute-0 network[62625]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:02:33 compute-0 network[62626]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:02:35 compute-0 sudo[62888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvrmzqelvkjesgawoxwbrehzakqfvtcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842955.650099-587-23299539882095/AnsiballZ_systemd.py'
Jan 31 07:02:35 compute-0 sudo[62888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:36 compute-0 sshd-session[62836]: Invalid user sol from 45.148.10.240 port 40414
Jan 31 07:02:36 compute-0 python3.9[62890]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:36 compute-0 sshd-session[62836]: Connection closed by invalid user sol 45.148.10.240 port 40414 [preauth]
Jan 31 07:02:36 compute-0 systemd[1]: Reloading.
Jan 31 07:02:36 compute-0 systemd-rc-local-generator[62921]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:36 compute-0 systemd-sysv-generator[62924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:36 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 07:02:36 compute-0 iptables.init[62931]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 07:02:36 compute-0 sshd-session[60180]: Connection closed by 199.45.155.76 port 57874 [preauth]
Jan 31 07:02:36 compute-0 iptables.init[62931]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 07:02:36 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 07:02:36 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 07:02:36 compute-0 sudo[62888]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:37 compute-0 sudo[63126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdcgjfoxilpmeefqusbnotvcrhnqxndg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842957.5854409-587-232971906352909/AnsiballZ_systemd.py'
Jan 31 07:02:37 compute-0 sudo[63126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:38 compute-0 python3.9[63128]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:38 compute-0 sudo[63126]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:38 compute-0 sudo[63280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwaifurczhqwafpstbivwmjecvaloxlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842958.4639652-635-101081701558149/AnsiballZ_systemd.py'
Jan 31 07:02:38 compute-0 sudo[63280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:38 compute-0 python3.9[63282]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:02:39 compute-0 systemd[1]: Reloading.
Jan 31 07:02:39 compute-0 systemd-sysv-generator[63312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:02:39 compute-0 systemd-rc-local-generator[63307]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:02:39 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 31 07:02:39 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 31 07:02:39 compute-0 sudo[63280]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:39 compute-0 sudo[63471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wilwffjckpzibxqdqefsumgehzqmwcpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842959.5347898-659-39420578076302/AnsiballZ_command.py'
Jan 31 07:02:39 compute-0 sudo[63471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:40 compute-0 python3.9[63473]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:02:40 compute-0 sudo[63471]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:40 compute-0 sudo[63624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drkftelzzfcpltjqpdkalzsewghszupk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842960.69386-701-7020527009205/AnsiballZ_stat.py'
Jan 31 07:02:40 compute-0 sudo[63624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:41 compute-0 python3.9[63626]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:41 compute-0 sudo[63624]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:41 compute-0 sudo[63749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icmwfuzzawaeasutbwslhjxijlxqqxpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842960.69386-701-7020527009205/AnsiballZ_copy.py'
Jan 31 07:02:41 compute-0 sudo[63749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:41 compute-0 python3.9[63751]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842960.69386-701-7020527009205/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:41 compute-0 sudo[63749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:42 compute-0 sudo[63902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpfcpdhbtjmyxrcbxxhkomuzryiwxtkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842961.8820484-746-228453513384527/AnsiballZ_systemd.py'
Jan 31 07:02:42 compute-0 sudo[63902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:42 compute-0 python3.9[63904]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:02:42 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 07:02:42 compute-0 sshd[1006]: Received SIGHUP; restarting.
Jan 31 07:02:42 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 07:02:42 compute-0 sshd[1006]: Server listening on 0.0.0.0 port 22.
Jan 31 07:02:42 compute-0 sshd[1006]: Server listening on :: port 22.
Jan 31 07:02:42 compute-0 sudo[63902]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:42 compute-0 sudo[64058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrljdhyletlkoqeeuxkjrwjmzvkkluhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842962.6609268-770-55737263676837/AnsiballZ_file.py'
Jan 31 07:02:42 compute-0 sudo[64058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:43 compute-0 python3.9[64060]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:43 compute-0 sudo[64058]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:43 compute-0 sudo[64210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfilhkqsgttvhbnwhujbdaxauughcirl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842963.3455408-794-66664208113848/AnsiballZ_stat.py'
Jan 31 07:02:43 compute-0 sudo[64210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:43 compute-0 python3.9[64212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:43 compute-0 sudo[64210]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:44 compute-0 sudo[64333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuobjijekpiyamzhnttihyxosfhkyuou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842963.3455408-794-66664208113848/AnsiballZ_copy.py'
Jan 31 07:02:44 compute-0 sudo[64333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:44 compute-0 python3.9[64335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842963.3455408-794-66664208113848/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:44 compute-0 sudo[64333]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:45 compute-0 sudo[64485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpfsmcklrkpqpvgckoxuspuwgwwthdbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842964.7777035-848-66852045689541/AnsiballZ_timezone.py'
Jan 31 07:02:45 compute-0 sudo[64485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:45 compute-0 python3.9[64487]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 07:02:45 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 07:02:45 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 07:02:45 compute-0 sudo[64485]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:46 compute-0 sudo[64641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uctoxtarwmcnkgizutdamyvavjaytgso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842965.8114526-875-221564428438136/AnsiballZ_file.py'
Jan 31 07:02:46 compute-0 sudo[64641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:46 compute-0 python3.9[64643]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:46 compute-0 sudo[64641]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:46 compute-0 sudo[64793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouiyjudwioqdlqborlephyiwbejkkzma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842966.4229202-899-188900866948876/AnsiballZ_stat.py'
Jan 31 07:02:46 compute-0 sudo[64793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:46 compute-0 python3.9[64795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:46 compute-0 sudo[64793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:47 compute-0 sudo[64916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogexfyvvrdlpjciohgnikavnsbkvihbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842966.4229202-899-188900866948876/AnsiballZ_copy.py'
Jan 31 07:02:47 compute-0 sudo[64916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:47 compute-0 python3.9[64918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842966.4229202-899-188900866948876/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:47 compute-0 sudo[64916]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:47 compute-0 sudo[65068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tikxyrtsmhiaickqeotrcyytxrmhsqof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842967.615328-944-46653159595881/AnsiballZ_stat.py'
Jan 31 07:02:47 compute-0 sudo[65068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:48 compute-0 python3.9[65070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:48 compute-0 sudo[65068]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:48 compute-0 sudo[65191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfytivczztuymmogrlqfjstyxqfhpbqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842967.615328-944-46653159595881/AnsiballZ_copy.py'
Jan 31 07:02:48 compute-0 sudo[65191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:48 compute-0 python3.9[65193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842967.615328-944-46653159595881/.source.yaml _original_basename=.zy_ysav6 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:48 compute-0 sudo[65191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:49 compute-0 sudo[65343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdpeqapbqjgcnnhldtmojweayzulbiix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842968.8618119-989-239366045436446/AnsiballZ_stat.py'
Jan 31 07:02:49 compute-0 sudo[65343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:49 compute-0 python3.9[65345]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:49 compute-0 sudo[65343]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:49 compute-0 sudo[65466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkymtjsqcniwqioupyroxstcmatuorkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842968.8618119-989-239366045436446/AnsiballZ_copy.py'
Jan 31 07:02:49 compute-0 sudo[65466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:49 compute-0 python3.9[65468]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842968.8618119-989-239366045436446/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:49 compute-0 sudo[65466]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:50 compute-0 sudo[65618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yblybhobkpzcncywqqlkabwekeqmtxbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842970.055334-1034-153983311620561/AnsiballZ_command.py'
Jan 31 07:02:50 compute-0 sudo[65618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:50 compute-0 python3.9[65620]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:02:50 compute-0 sudo[65618]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:51 compute-0 sudo[65771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzhghbgnjlkvossmztpwrdsgbctiukvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842970.7952359-1058-231720555423/AnsiballZ_command.py'
Jan 31 07:02:51 compute-0 sudo[65771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:51 compute-0 python3.9[65773]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:02:51 compute-0 sudo[65771]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:51 compute-0 sudo[65924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qglkgtseawjflrgsaylumpuwzoqgccld ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769842971.4535215-1082-238592114536199/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:02:51 compute-0 sudo[65924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:52 compute-0 python3[65926]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:02:52 compute-0 sudo[65924]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:52 compute-0 sudo[66076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maecikgkaztgrpsxsvyngvkvaizubiiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842972.1939752-1106-171798455911539/AnsiballZ_stat.py'
Jan 31 07:02:52 compute-0 sudo[66076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:52 compute-0 python3.9[66078]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:52 compute-0 sudo[66076]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:52 compute-0 sudo[66199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmohxeupvouyzobvxlihdqdhioqjksxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842972.1939752-1106-171798455911539/AnsiballZ_copy.py'
Jan 31 07:02:52 compute-0 sudo[66199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:53 compute-0 python3.9[66201]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842972.1939752-1106-171798455911539/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:53 compute-0 sudo[66199]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:53 compute-0 sudo[66351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvdxrsdlqfsjtzfgzqrlvbqkdktkamhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842973.6458561-1151-38422667173822/AnsiballZ_stat.py'
Jan 31 07:02:53 compute-0 sudo[66351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:54 compute-0 python3.9[66353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:54 compute-0 sudo[66351]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:54 compute-0 sudo[66474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tetdzwlmihovaqdyonyblivrcrzmunps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842973.6458561-1151-38422667173822/AnsiballZ_copy.py'
Jan 31 07:02:54 compute-0 sudo[66474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:54 compute-0 python3.9[66476]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842973.6458561-1151-38422667173822/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:54 compute-0 sudo[66474]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:55 compute-0 sudo[66626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnrchfwwnvfifjrhgqbimtyzfjdbzbhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842974.82723-1196-163645749730151/AnsiballZ_stat.py'
Jan 31 07:02:55 compute-0 sudo[66626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:55 compute-0 python3.9[66628]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:55 compute-0 sudo[66626]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:55 compute-0 sudo[66749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqgnaisqhwzyjuikitdopfxunbiwaztx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842974.82723-1196-163645749730151/AnsiballZ_copy.py'
Jan 31 07:02:55 compute-0 sudo[66749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:55 compute-0 python3.9[66751]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842974.82723-1196-163645749730151/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:55 compute-0 sudo[66749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:56 compute-0 sudo[66901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nufunetjtuefuhectygwoyjhnlytrqgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842975.9829504-1241-136737549500960/AnsiballZ_stat.py'
Jan 31 07:02:56 compute-0 sudo[66901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:56 compute-0 python3.9[66903]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:56 compute-0 sudo[66901]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:56 compute-0 sudo[67024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdbfohtsfhzphsisstofsclquzsgqtwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842975.9829504-1241-136737549500960/AnsiballZ_copy.py'
Jan 31 07:02:56 compute-0 sudo[67024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:56 compute-0 python3.9[67026]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842975.9829504-1241-136737549500960/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:56 compute-0 sudo[67024]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:57 compute-0 sudo[67176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etzkeweakixtjgyuliyblkerbhuzwitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842977.170772-1286-50770591672091/AnsiballZ_stat.py'
Jan 31 07:02:57 compute-0 sudo[67176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:57 compute-0 python3.9[67178]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:02:57 compute-0 sudo[67176]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:57 compute-0 sudo[67299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfwzelfidwmhgwoxzqwbvbezageospkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842977.170772-1286-50770591672091/AnsiballZ_copy.py'
Jan 31 07:02:57 compute-0 sudo[67299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:58 compute-0 python3.9[67301]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769842977.170772-1286-50770591672091/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:58 compute-0 sudo[67299]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:58 compute-0 sudo[67451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofmpradtugdxcnqjvuhuhyzcxsiboedk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842978.4636326-1331-215287018652052/AnsiballZ_file.py'
Jan 31 07:02:58 compute-0 sudo[67451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:58 compute-0 python3.9[67453]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:02:58 compute-0 sudo[67451]: pam_unix(sudo:session): session closed for user root
Jan 31 07:02:59 compute-0 sudo[67603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkaecxqnvilbpiafomxwsvimmfjprsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842979.1538942-1355-61778992063100/AnsiballZ_command.py'
Jan 31 07:02:59 compute-0 sudo[67603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:02:59 compute-0 python3.9[67605]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:02:59 compute-0 sudo[67603]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:00 compute-0 sudo[67762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqvpvosmdrycqttctqtirkaqbgadlexf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842979.8329828-1379-90906799322049/AnsiballZ_blockinfile.py'
Jan 31 07:03:00 compute-0 sudo[67762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:00 compute-0 python3.9[67764]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:00 compute-0 sudo[67762]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:00 compute-0 sudo[67915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzwgpizjsaphipjxvefziqlxcfytpqhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842980.702772-1406-86796781521735/AnsiballZ_file.py'
Jan 31 07:03:00 compute-0 sudo[67915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:01 compute-0 python3.9[67917]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:01 compute-0 sudo[67915]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:01 compute-0 sudo[68067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srwdzruczjrwnzrvwusazgrommtejuww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842981.324821-1406-266308919390533/AnsiballZ_file.py'
Jan 31 07:03:01 compute-0 sudo[68067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:01 compute-0 python3.9[68069]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:01 compute-0 sudo[68067]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:02 compute-0 sudo[68219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxpnbpocdqmnwjuyfxbbduwcxovcmmbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842981.9731052-1451-27944599884119/AnsiballZ_mount.py'
Jan 31 07:03:02 compute-0 sudo[68219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:02 compute-0 python3.9[68221]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:03:02 compute-0 sudo[68219]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:02 compute-0 sudo[68372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjjpztujyofvoyklxcsuvycooiumrinf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842982.7348366-1451-210234245013386/AnsiballZ_mount.py'
Jan 31 07:03:02 compute-0 sudo[68372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:03 compute-0 python3.9[68374]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:03:03 compute-0 sudo[68372]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:03 compute-0 sshd-session[59166]: Connection closed by 192.168.122.30 port 36674
Jan 31 07:03:03 compute-0 sshd-session[59163]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:03:03 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 07:03:03 compute-0 systemd[1]: session-14.scope: Consumed 29.796s CPU time.
Jan 31 07:03:03 compute-0 systemd-logind[816]: Session 14 logged out. Waiting for processes to exit.
Jan 31 07:03:03 compute-0 systemd-logind[816]: Removed session 14.
Jan 31 07:03:08 compute-0 sshd-session[68400]: Accepted publickey for zuul from 192.168.122.30 port 37274 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:03:08 compute-0 systemd-logind[816]: New session 15 of user zuul.
Jan 31 07:03:08 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 31 07:03:08 compute-0 sshd-session[68400]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:03:09 compute-0 sudo[68553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grbrotdpnjrgmfilkjmufenphzxdfxjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842988.917569-23-40216196078968/AnsiballZ_tempfile.py'
Jan 31 07:03:09 compute-0 sudo[68553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:09 compute-0 python3.9[68555]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 07:03:09 compute-0 sudo[68553]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:10 compute-0 sudo[68705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adbbvjvuldpscetmzvrbrgdwudtvkjvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842989.7378838-59-137416494910995/AnsiballZ_stat.py'
Jan 31 07:03:10 compute-0 sudo[68705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:10 compute-0 python3.9[68707]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:03:10 compute-0 sudo[68705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:11 compute-0 sudo[68857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqctgstzwqiscvhixoduelxnyloliszk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842990.576256-89-139149926375907/AnsiballZ_setup.py'
Jan 31 07:03:11 compute-0 sudo[68857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:11 compute-0 python3.9[68859]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:11 compute-0 sudo[68857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:12 compute-0 sudo[69009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqogfjoiwzwngwobihcdiqiiucfrvikd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842991.652787-114-125449390622899/AnsiballZ_blockinfile.py'
Jan 31 07:03:12 compute-0 sudo[69009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:12 compute-0 python3.9[69011]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlvTGYGifalEmozttYlZ79wRHZPo6p3FfxUn+H8fCt//gLYJvHB9ygqCWO8F06xZhwaSJlU3R5k49AFtcq6rCaf4D9FuDYpYU5B1qGxpqY2S/6r/PmC9TmJJe6DJfuIf95os5YrDLR82BbT8dLFvu76PfZiMt0+kvm9gj1Q6XCUTgIsIvY9pyPySu0V4JDeT8EBgROR7WA5Fev80wO2/RlFXH9xVIupO8rswjwWPuIXoua1w44d35HWWHBdMAFXeZZMopWHWwY+fIlyz4B8y/TWDow7KZxG9GHKZ04e73/RA972Gub2LC0SlBFsBqaSnub8ooOcA3jZ3R2bjHAVkZvLgCK9UFSgwvvfyOWxtkJgj5KalAy9vZeGQ02ndAPNkQ6B1GnnRHaR5yGPG78q9Nd8RDmzhTr1iwnYLHhup04nAUnUDw5ubZFyF9bW1KQWvDv+4cfFeT8mhARMCxu7Imzne5FDq9OZAA9VLfnA26YFT0MpGjGl332cx20iz3Z4IU=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBM/OyT9HQGjLM76vSXpTFer+lkr//u0v4BsUk+Rcai
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPttGgqMF5HnqNXeajmhgAAhQFj1yReXfFmUGT6cv24PcfDX+VeASpBgDGWJKvbu1EgrSPUu2R8sDzajVI5+ETk=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwK9tbwI1sVhVFn3RGaEAgpi2689y9VdIyBp+cw+RWFupGnK46xr4HB/N67Aw+A+3FJtEl1Zq1cnt3Gy8PYb6XnLd4xH/NFtUI3ukhekrtKvSmysEjpRGIamjt1BkH4Lxh79PNkk13AVMQN92Wo271/fHEvcV7HaC0Q5VypZMd+77ZvI9NuEG1nofpvI8+32YECZBLpoC5KQK7EibqD9MUR2OmapGZhV+5B5jdb0ZvNb966Q0kwAGV8E+xgHSVnh5eCWC8oxgWkycmQd2co9E79fiIHEioABE9aDUGKw0+nsZ7HrvjG/ENeg5C6fjdJE4MsPq3FNHAiTCQPZ7QZgv/CSudt7WYyLTztGL9ksWqaTUeDocKVKPlJlzGrn/TXgMoix8+qbFzxVixIROb2nqElyEy6mo0Xxt2b4aisil9ZQhWVMQY0hGX5vtVv0E6+svzjSTfkyZolbjyRsolJF4pH7+klLEmlWGDlgSoCDZeK/XEi7xq3yaCymuWtX2fAX8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICzJ5+1VSPloOqHhejNen2lHjfV4Hvj7nbRbNJjS6dtd
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECc0+1u2G3haTNDUnwK7F3+bqZqLNjR6ayEsOJcH6U6RkqhSd2eAlivxlw9dfPuir2TFrYzGTtSXuJ8iauDAtQ=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDir5Ux7IUuKTsqwrpZRFpieFX7Hi9Bsaw7N3jCiMd+vuHlEKHLX54HbyTIVnox1XbNjeYynLRRz7VKBfder8IEerGmST/uWuX5FOdve7vDdY++9J6qYkj1Gf6v6BGp8BT97bbPdvaQdLP6YS2jFEfOz4s0oJkgr8dsHjPU70e1P0b7vKxqo3z/E/XCe2BUGEv5j/z9GTl2oQ9/KoTvahfr6qfonnQK9E0gsJKDB9S1UPNFkJUxvVPfKfEao207dmT8EmQL2ZdwDwecA2Mg0SneGaNmEFWDW4CWQjdbHuikc3vsZ1do7kzq2+tz+WLEXqdb4Ig4S0OfV/MAcaC/C1DRfZHxZN3vSayrm99nFc8oPaLnRtT8Jz1dVonMOpwLm3xMm6nAeGNTzM0ImTrJTusVmKNRQI3x6VPiEcWdKNvN5sVcrN9uyINDMuzpXIxc1LmpmR/338EfP4HYhfsTqdM0worzzewvh2XhAVxQAiNYRRUbLvR4/EE5SjXTjSA4ID0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ8oYZpZvdB1n917+wvTxetgtueloCox+7yBQBW8LHZX
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEH62xmPSqzu7EFth8e8ITel7fLvoU9FKlxQN/eSXzUuR/7sZGPhcgLzjrJmEcn4Za0K2VNu6+z559d/AEJY2U=
                                             create=True mode=0644 path=/tmp/ansible.yjhirf4d state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:12 compute-0 sudo[69009]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:12 compute-0 sudo[69161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-majdbzaahxcflbmkxfqaxwoyyiptadpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842992.3500273-138-264198606638798/AnsiballZ_command.py'
Jan 31 07:03:12 compute-0 sudo[69161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:12 compute-0 python3.9[69163]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.yjhirf4d' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:12 compute-0 sudo[69161]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:13 compute-0 sudo[69315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqcxtcocxvzuycjwepubychkwtqxzcwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769842993.1389573-162-168909322032684/AnsiballZ_file.py'
Jan 31 07:03:13 compute-0 sudo[69315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:13 compute-0 python3.9[69317]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.yjhirf4d state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:13 compute-0 sudo[69315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:14 compute-0 sshd-session[68403]: Connection closed by 192.168.122.30 port 37274
Jan 31 07:03:14 compute-0 sshd-session[68400]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:03:14 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 07:03:14 compute-0 systemd[1]: session-15.scope: Consumed 2.912s CPU time.
Jan 31 07:03:14 compute-0 systemd-logind[816]: Session 15 logged out. Waiting for processes to exit.
Jan 31 07:03:14 compute-0 systemd-logind[816]: Removed session 15.
Jan 31 07:03:15 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 07:03:19 compute-0 sshd-session[69344]: Accepted publickey for zuul from 192.168.122.30 port 45648 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:03:19 compute-0 systemd-logind[816]: New session 16 of user zuul.
Jan 31 07:03:19 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 31 07:03:19 compute-0 sshd-session[69344]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:03:20 compute-0 python3.9[69497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:21 compute-0 sudo[69651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxrdsygtqsqvzlccpetoyngxgoqnmskx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843000.9838905-56-184337150655700/AnsiballZ_systemd.py'
Jan 31 07:03:21 compute-0 sudo[69651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:21 compute-0 python3.9[69653]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 07:03:21 compute-0 sudo[69651]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:22 compute-0 sudo[69805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywzqgexgbvavamyhpjlpmbkpizpdxgfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843002.02816-80-196797025181765/AnsiballZ_systemd.py'
Jan 31 07:03:22 compute-0 sudo[69805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:22 compute-0 python3.9[69807]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:03:22 compute-0 sudo[69805]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:23 compute-0 sudo[69958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krbihhwgbvmjudryejayelyfzdalwrfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843002.8914773-107-131664616152253/AnsiballZ_command.py'
Jan 31 07:03:23 compute-0 sudo[69958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:23 compute-0 python3.9[69960]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:23 compute-0 sudo[69958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:24 compute-0 sudo[70111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovrtpfpwgfgqcwbdntvunjkcbmrsmhqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843003.7116542-131-15342450363839/AnsiballZ_stat.py'
Jan 31 07:03:24 compute-0 sudo[70111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:24 compute-0 python3.9[70113]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:03:24 compute-0 sudo[70111]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:24 compute-0 sudo[70265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szuwghdsehjwvgcxfxagnfhebqizpdwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843004.4761806-155-170861261861925/AnsiballZ_command.py'
Jan 31 07:03:24 compute-0 sudo[70265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:24 compute-0 python3.9[70267]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:24 compute-0 sudo[70265]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:25 compute-0 sudo[70420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ociwpkoxsfihaflvqvzzcrrkzolpryfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843005.1462905-179-197376657729062/AnsiballZ_file.py'
Jan 31 07:03:25 compute-0 sudo[70420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:25 compute-0 python3.9[70422]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:25 compute-0 sudo[70420]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:26 compute-0 sshd-session[69347]: Connection closed by 192.168.122.30 port 45648
Jan 31 07:03:26 compute-0 sshd-session[69344]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:03:26 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 07:03:26 compute-0 systemd[1]: session-16.scope: Consumed 3.801s CPU time.
Jan 31 07:03:26 compute-0 systemd-logind[816]: Session 16 logged out. Waiting for processes to exit.
Jan 31 07:03:26 compute-0 systemd-logind[816]: Removed session 16.
Jan 31 07:03:31 compute-0 sshd-session[70447]: Accepted publickey for zuul from 192.168.122.30 port 57504 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:03:31 compute-0 systemd-logind[816]: New session 17 of user zuul.
Jan 31 07:03:31 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 31 07:03:31 compute-0 sshd-session[70447]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:03:32 compute-0 python3.9[70600]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:33 compute-0 sudo[70754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvvifkwmtyjvourzhwlximkqnwouhemi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843013.2391858-62-16630957841949/AnsiballZ_setup.py'
Jan 31 07:03:33 compute-0 sudo[70754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:33 compute-0 python3.9[70756]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:03:33 compute-0 sudo[70754]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:34 compute-0 sudo[70838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrfessjzscfwwtkansfkrmcwrzueaecs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843013.2391858-62-16630957841949/AnsiballZ_dnf.py'
Jan 31 07:03:34 compute-0 sudo[70838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:34 compute-0 python3.9[70840]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:03:35 compute-0 sudo[70838]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:36 compute-0 python3.9[70991]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:37 compute-0 python3.9[71142]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:03:38 compute-0 python3.9[71292]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:03:38 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:03:39 compute-0 python3.9[71443]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:03:39 compute-0 sshd-session[70450]: Connection closed by 192.168.122.30 port 57504
Jan 31 07:03:39 compute-0 sshd-session[70447]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:03:39 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 07:03:39 compute-0 systemd[1]: session-17.scope: Consumed 5.003s CPU time.
Jan 31 07:03:39 compute-0 systemd-logind[816]: Session 17 logged out. Waiting for processes to exit.
Jan 31 07:03:39 compute-0 systemd-logind[816]: Removed session 17.
Jan 31 07:03:48 compute-0 sshd-session[71468]: Accepted publickey for zuul from 38.129.56.250 port 47718 ssh2: RSA SHA256:XB4IpasupLQgCusHkIQqr06rUeufQJTktnyEJKRsUAs
Jan 31 07:03:48 compute-0 systemd-logind[816]: New session 18 of user zuul.
Jan 31 07:03:48 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 31 07:03:48 compute-0 sshd-session[71468]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:03:48 compute-0 sudo[71544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfhefoiifghfvcempcjxdrnwawauhijc ; /usr/bin/python3'
Jan 31 07:03:48 compute-0 sudo[71544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:48 compute-0 useradd[71548]: new group: name=ceph-admin, GID=42478
Jan 31 07:03:48 compute-0 useradd[71548]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 31 07:03:48 compute-0 sudo[71544]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:48 compute-0 sudo[71630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubppnpvyjevdrcuyqjissrmlzepqojkl ; /usr/bin/python3'
Jan 31 07:03:48 compute-0 sudo[71630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:48 compute-0 sudo[71630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:49 compute-0 sudo[71703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htvxmehwzmwnqmrejngjankxkpfhiriy ; /usr/bin/python3'
Jan 31 07:03:49 compute-0 sudo[71703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:49 compute-0 sudo[71703]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:49 compute-0 sudo[71753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyvalmlizndnazpiftopsczrkvxsslik ; /usr/bin/python3'
Jan 31 07:03:49 compute-0 sudo[71753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:49 compute-0 sudo[71753]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:50 compute-0 sudo[71779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptahpxypocjgfsmctrhdgaxnnvxvitau ; /usr/bin/python3'
Jan 31 07:03:50 compute-0 sudo[71779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:50 compute-0 sudo[71779]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:50 compute-0 sudo[71805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahuwqtyxpjbklvlychwjxdqznghjumqi ; /usr/bin/python3'
Jan 31 07:03:50 compute-0 sudo[71805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:50 compute-0 sudo[71805]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:50 compute-0 sudo[71831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrzxfsvxzgngkiyyqstzwgctxpqlwdhw ; /usr/bin/python3'
Jan 31 07:03:50 compute-0 sudo[71831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:51 compute-0 sudo[71831]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:51 compute-0 sudo[71909]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjlxlycorktorsdmcwuafqoxzwjbqjtc ; /usr/bin/python3'
Jan 31 07:03:51 compute-0 sudo[71909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:51 compute-0 sudo[71909]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:51 compute-0 sudo[71982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rokibfwcmkhfrcarvwhghuvrhzllhwxq ; /usr/bin/python3'
Jan 31 07:03:51 compute-0 sudo[71982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:51 compute-0 sudo[71982]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:52 compute-0 sudo[72084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfkcksxdchdtnxrvnaakhpcxhmkcwmox ; /usr/bin/python3'
Jan 31 07:03:52 compute-0 sudo[72084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:52 compute-0 sudo[72084]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:52 compute-0 sudo[72157]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udxxdxpoxyiiynernbtfdszrrearofqh ; /usr/bin/python3'
Jan 31 07:03:52 compute-0 sudo[72157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:52 compute-0 sudo[72157]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:53 compute-0 sudo[72207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbowtgqfhulyocmhrgknfyoosudmgdv ; /usr/bin/python3'
Jan 31 07:03:53 compute-0 sudo[72207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:53 compute-0 python3[72209]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:03:54 compute-0 sudo[72207]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:55 compute-0 sudo[72302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdugjfyxjnrmylxfogvkmvmvfzfmvmbx ; /usr/bin/python3'
Jan 31 07:03:55 compute-0 sudo[72302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:55 compute-0 python3[72304]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:03:56 compute-0 sudo[72302]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:56 compute-0 sudo[72329]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvaufqcdomvijhorwfgukfrhbcradof ; /usr/bin/python3'
Jan 31 07:03:56 compute-0 sudo[72329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:57 compute-0 python3[72331]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:03:57 compute-0 sudo[72329]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:57 compute-0 sudo[72355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfazjcnhdwovmnjyfipxydhguoxrfxnc ; /usr/bin/python3'
Jan 31 07:03:57 compute-0 sudo[72355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:57 compute-0 python3[72357]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:57 compute-0 kernel: loop: module loaded
Jan 31 07:03:57 compute-0 kernel: loop3: detected capacity change from 0 to 14680064
Jan 31 07:03:57 compute-0 sudo[72355]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:57 compute-0 sudo[72390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqouklmltwxywrltbgazoxxcvzxlxpct ; /usr/bin/python3'
Jan 31 07:03:57 compute-0 sudo[72390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:57 compute-0 python3[72392]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:03:58 compute-0 lvm[72395]: PV /dev/loop3 not used.
Jan 31 07:03:58 compute-0 lvm[72397]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:03:58 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 07:03:58 compute-0 lvm[72403]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 07:03:58 compute-0 lvm[72407]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:03:58 compute-0 lvm[72407]: VG ceph_vg0 finished
Jan 31 07:03:58 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 07:03:58 compute-0 sudo[72390]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:58 compute-0 sudo[72483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipxeayeymwdbojulepmpicluoqxmjtrv ; /usr/bin/python3'
Jan 31 07:03:58 compute-0 sudo[72483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:58 compute-0 python3[72485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:03:58 compute-0 sudo[72483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:58 compute-0 sudo[72556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcxknmihfrovphysevcnylscyjrrcezv ; /usr/bin/python3'
Jan 31 07:03:58 compute-0 sudo[72556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:58 compute-0 python3[72558]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843038.3172119-37137-35467073858596/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:03:58 compute-0 sudo[72556]: pam_unix(sudo:session): session closed for user root
Jan 31 07:03:59 compute-0 sudo[72606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvoshoomcyygyqjmofmlrfaorjbygrb ; /usr/bin/python3'
Jan 31 07:03:59 compute-0 sudo[72606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:03:59 compute-0 python3[72608]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:04:00 compute-0 systemd[1]: Reloading.
Jan 31 07:04:00 compute-0 systemd-rc-local-generator[72636]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:00 compute-0 systemd-sysv-generator[72641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:00 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 07:04:00 compute-0 bash[72649]: /dev/loop3: [64513]:4355757 (/var/lib/ceph-osd-0.img)
Jan 31 07:04:00 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 07:04:00 compute-0 lvm[72650]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:04:00 compute-0 lvm[72650]: VG ceph_vg0 finished
Jan 31 07:04:00 compute-0 sudo[72606]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:02 compute-0 python3[72674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:04:04 compute-0 sudo[72765]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvbhpnzvaxsnxqkrcpndnwictzangqrt ; /usr/bin/python3'
Jan 31 07:04:04 compute-0 sudo[72765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:05 compute-0 python3[72767]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:04:06 compute-0 groupadd[72773]: group added to /etc/group: name=cephadm, GID=993
Jan 31 07:04:06 compute-0 groupadd[72773]: group added to /etc/gshadow: name=cephadm
Jan 31 07:04:06 compute-0 groupadd[72773]: new group: name=cephadm, GID=993
Jan 31 07:04:06 compute-0 useradd[72780]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 31 07:04:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:04:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:04:07 compute-0 sudo[72765]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:07 compute-0 sudo[72875]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtllhjnrairaxnnwvdgqkjcxqqboxgi ; /usr/bin/python3'
Jan 31 07:04:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:04:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:04:07 compute-0 systemd[1]: run-ra446e2d95037423aa1c46b7e3924288a.service: Deactivated successfully.
Jan 31 07:04:07 compute-0 sudo[72875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:07 compute-0 python3[72878]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:04:08 compute-0 sudo[72875]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:08 compute-0 sudo[72904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agpmqtscfdrtzwdkzsapvwsjcigvjqsm ; /usr/bin/python3'
Jan 31 07:04:08 compute-0 sudo[72904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:08 compute-0 python3[72906]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:08 compute-0 sudo[72904]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:08 compute-0 sudo[72966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfoqseytaclmjghojtrhvcdzkbwzjcry ; /usr/bin/python3'
Jan 31 07:04:08 compute-0 sudo[72966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:09 compute-0 python3[72968]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:09 compute-0 sudo[72966]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:09 compute-0 sudo[72992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktcyajytpsnvidczcsrvyogiqwtwevnv ; /usr/bin/python3'
Jan 31 07:04:09 compute-0 sudo[72992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:09 compute-0 python3[72994]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:09 compute-0 sudo[72992]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:09 compute-0 sudo[73070]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeekksidwtmqqqatgiibdjsvsufxudyc ; /usr/bin/python3'
Jan 31 07:04:09 compute-0 sudo[73070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:09 compute-0 python3[73072]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:04:09 compute-0 sudo[73070]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:11 compute-0 chronyd[58682]: Selected source 23.133.168.244 (pool.ntp.org)
Jan 31 07:04:11 compute-0 sudo[73143]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiegndawolbhicfpepsrsezvicideiaq ; /usr/bin/python3'
Jan 31 07:04:11 compute-0 sudo[73143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:11 compute-0 python3[73145]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843049.703094-37328-5224089368649/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:11 compute-0 sudo[73143]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:12 compute-0 sudo[73245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgpvnogbadlotjetxgprohqksaybyiiv ; /usr/bin/python3'
Jan 31 07:04:12 compute-0 sudo[73245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:12 compute-0 python3[73247]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:04:12 compute-0 sudo[73245]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:12 compute-0 sudo[73318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzdoulfzxdixvchxplbrhivodqtscrcq ; /usr/bin/python3'
Jan 31 07:04:12 compute-0 sudo[73318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:12 compute-0 python3[73320]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843052.0250847-37346-219885001856011/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:04:12 compute-0 sudo[73318]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:12 compute-0 sudo[73368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zohrdlccpganoakeirsxmhdxjdvsyjsy ; /usr/bin/python3'
Jan 31 07:04:12 compute-0 sudo[73368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:13 compute-0 python3[73370]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:04:13 compute-0 sudo[73368]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:13 compute-0 sudo[73396]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oavvhguelntkugkdlnnkhdhftbixfbyh ; /usr/bin/python3'
Jan 31 07:04:13 compute-0 sudo[73396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:13 compute-0 python3[73398]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:04:13 compute-0 sudo[73396]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:13 compute-0 sudo[73424]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlbylteocbclrulwyvvwzskqjivyfqqo ; /usr/bin/python3'
Jan 31 07:04:13 compute-0 sudo[73424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:13 compute-0 python3[73426]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:04:13 compute-0 sudo[73424]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:14 compute-0 python3[73452]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:04:14 compute-0 sudo[73476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjrracvmeoxafoarhdxxgojffkmjznhw ; /usr/bin/python3'
Jan 31 07:04:14 compute-0 sudo[73476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:04:14 compute-0 python3[73478]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:14 compute-0 sshd-session[73494]: Accepted publickey for ceph-admin from 192.168.122.100 port 32930 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:04:14 compute-0 systemd-logind[816]: New session 19 of user ceph-admin.
Jan 31 07:04:14 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 07:04:14 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 07:04:14 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 07:04:14 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 31 07:04:14 compute-0 systemd[73498]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:04:14 compute-0 systemd[73498]: Queued start job for default target Main User Target.
Jan 31 07:04:14 compute-0 systemd[73498]: Created slice User Application Slice.
Jan 31 07:04:14 compute-0 systemd[73498]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:04:14 compute-0 systemd[73498]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:04:14 compute-0 systemd[73498]: Reached target Paths.
Jan 31 07:04:14 compute-0 systemd[73498]: Reached target Timers.
Jan 31 07:04:14 compute-0 systemd[73498]: Starting D-Bus User Message Bus Socket...
Jan 31 07:04:14 compute-0 systemd[73498]: Starting Create User's Volatile Files and Directories...
Jan 31 07:04:14 compute-0 systemd[73498]: Finished Create User's Volatile Files and Directories.
Jan 31 07:04:14 compute-0 systemd[73498]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:04:14 compute-0 systemd[73498]: Reached target Sockets.
Jan 31 07:04:14 compute-0 systemd[73498]: Reached target Basic System.
Jan 31 07:04:14 compute-0 systemd[73498]: Reached target Main User Target.
Jan 31 07:04:14 compute-0 systemd[73498]: Startup finished in 136ms.
Jan 31 07:04:14 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 31 07:04:14 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 31 07:04:14 compute-0 sshd-session[73494]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:04:14 compute-0 sudo[73514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 31 07:04:14 compute-0 sudo[73514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:04:14 compute-0 sudo[73514]: pam_unix(sudo:session): session closed for user root
Jan 31 07:04:14 compute-0 sshd-session[73513]: Received disconnect from 192.168.122.100 port 32930:11: disconnected by user
Jan 31 07:04:14 compute-0 sshd-session[73513]: Disconnected from user ceph-admin 192.168.122.100 port 32930
Jan 31 07:04:14 compute-0 sshd-session[73494]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 31 07:04:14 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 07:04:14 compute-0 systemd-logind[816]: Session 19 logged out. Waiting for processes to exit.
Jan 31 07:04:14 compute-0 systemd-logind[816]: Removed session 19.
Jan 31 07:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3811818945-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 07:04:25 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 31 07:04:25 compute-0 systemd[73498]: Activating special unit Exit the Session...
Jan 31 07:04:25 compute-0 systemd[73498]: Stopped target Main User Target.
Jan 31 07:04:25 compute-0 systemd[73498]: Stopped target Basic System.
Jan 31 07:04:25 compute-0 systemd[73498]: Stopped target Paths.
Jan 31 07:04:25 compute-0 systemd[73498]: Stopped target Sockets.
Jan 31 07:04:25 compute-0 systemd[73498]: Stopped target Timers.
Jan 31 07:04:25 compute-0 systemd[73498]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:04:25 compute-0 systemd[73498]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 07:04:25 compute-0 systemd[73498]: Closed D-Bus User Message Bus Socket.
Jan 31 07:04:25 compute-0 systemd[73498]: Stopped Create User's Volatile Files and Directories.
Jan 31 07:04:25 compute-0 systemd[73498]: Removed slice User Application Slice.
Jan 31 07:04:25 compute-0 systemd[73498]: Reached target Shutdown.
Jan 31 07:04:25 compute-0 systemd[73498]: Finished Exit the Session.
Jan 31 07:04:25 compute-0 systemd[73498]: Reached target Exit the Session.
Jan 31 07:04:25 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 31 07:04:25 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 31 07:04:25 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 31 07:04:25 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 31 07:04:25 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 31 07:04:25 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 31 07:04:25 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 31 07:04:28 compute-0 podman[73551]: 2026-01-31 07:04:28.370481919 +0000 UTC m=+13.467249458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:28 compute-0 podman[73616]: 2026-01-31 07:04:28.435466917 +0000 UTC m=+0.048573506 container create 8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc (image=quay.io/ceph/ceph:v18, name=eloquent_austin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:28 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 07:04:28 compute-0 systemd[1]: Started libpod-conmon-8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc.scope.
Jan 31 07:04:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:28 compute-0 podman[73616]: 2026-01-31 07:04:28.41120984 +0000 UTC m=+0.024316499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:28 compute-0 podman[73616]: 2026-01-31 07:04:28.53003665 +0000 UTC m=+0.143143289 container init 8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc (image=quay.io/ceph/ceph:v18, name=eloquent_austin, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:28 compute-0 podman[73616]: 2026-01-31 07:04:28.538463968 +0000 UTC m=+0.151570587 container start 8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc (image=quay.io/ceph/ceph:v18, name=eloquent_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:04:28 compute-0 podman[73616]: 2026-01-31 07:04:28.542182749 +0000 UTC m=+0.155289368 container attach 8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc (image=quay.io/ceph/ceph:v18, name=eloquent_austin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:04:28 compute-0 eloquent_austin[73632]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 31 07:04:28 compute-0 systemd[1]: libpod-8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc.scope: Deactivated successfully.
Jan 31 07:04:28 compute-0 podman[73616]: 2026-01-31 07:04:28.820736076 +0000 UTC m=+0.433842705 container died 8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc (image=quay.io/ceph/ceph:v18, name=eloquent_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-182fd7e09c235b8a37f89ca396d7ecd5e2cbf460894a0280fa9c5aa90d02c082-merged.mount: Deactivated successfully.
Jan 31 07:04:28 compute-0 podman[73616]: 2026-01-31 07:04:28.859414587 +0000 UTC m=+0.472521186 container remove 8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc (image=quay.io/ceph/ceph:v18, name=eloquent_austin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:04:28 compute-0 systemd[1]: libpod-conmon-8d842e7c5b4e0f0e6a35b5af96a29588953503b3389c4c207e91346eba67edfc.scope: Deactivated successfully.
Jan 31 07:04:28 compute-0 podman[73650]: 2026-01-31 07:04:28.920675382 +0000 UTC m=+0.043818728 container create 816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185 (image=quay.io/ceph/ceph:v18, name=nervous_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:28 compute-0 systemd[1]: Started libpod-conmon-816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185.scope.
Jan 31 07:04:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:28 compute-0 podman[73650]: 2026-01-31 07:04:28.985219198 +0000 UTC m=+0.108362634 container init 816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185 (image=quay.io/ceph/ceph:v18, name=nervous_greider, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:28 compute-0 podman[73650]: 2026-01-31 07:04:28.992619481 +0000 UTC m=+0.115762857 container start 816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185 (image=quay.io/ceph/ceph:v18, name=nervous_greider, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:04:28 compute-0 nervous_greider[73667]: 167 167
Jan 31 07:04:28 compute-0 podman[73650]: 2026-01-31 07:04:28.996348662 +0000 UTC m=+0.119492028 container attach 816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185 (image=quay.io/ceph/ceph:v18, name=nervous_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:28 compute-0 systemd[1]: libpod-816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185.scope: Deactivated successfully.
Jan 31 07:04:28 compute-0 podman[73650]: 2026-01-31 07:04:28.996814404 +0000 UTC m=+0.119957780 container died 816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185 (image=quay.io/ceph/ceph:v18, name=nervous_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:29 compute-0 podman[73650]: 2026-01-31 07:04:28.906146695 +0000 UTC m=+0.029290051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:29 compute-0 podman[73650]: 2026-01-31 07:04:29.035148896 +0000 UTC m=+0.158292272 container remove 816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185 (image=quay.io/ceph/ceph:v18, name=nervous_greider, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:29 compute-0 systemd[1]: libpod-conmon-816cf6ca05a4fc560cc21a3a2d861d3b8d71869d4e2586661b53c643c5267185.scope: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73684]: 2026-01-31 07:04:29.112503837 +0000 UTC m=+0.057371211 container create a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a (image=quay.io/ceph/ceph:v18, name=hopeful_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:04:29 compute-0 systemd[1]: Started libpod-conmon-a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a.scope.
Jan 31 07:04:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:29 compute-0 podman[73684]: 2026-01-31 07:04:29.180332125 +0000 UTC m=+0.125199539 container init a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a (image=quay.io/ceph/ceph:v18, name=hopeful_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:04:29 compute-0 podman[73684]: 2026-01-31 07:04:29.087021721 +0000 UTC m=+0.031889175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:29 compute-0 podman[73684]: 2026-01-31 07:04:29.187203163 +0000 UTC m=+0.132070577 container start a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a (image=quay.io/ceph/ceph:v18, name=hopeful_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 31 07:04:29 compute-0 podman[73684]: 2026-01-31 07:04:29.200765076 +0000 UTC m=+0.145632490 container attach a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a (image=quay.io/ceph/ceph:v18, name=hopeful_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:29 compute-0 hopeful_curie[73701]: AQB9qX1pEnMEDRAA+ht/5WPnfRXTfL5SCJa/Gg==
Jan 31 07:04:29 compute-0 systemd[1]: libpod-a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a.scope: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73684]: 2026-01-31 07:04:29.221614479 +0000 UTC m=+0.166481943 container died a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a (image=quay.io/ceph/ceph:v18, name=hopeful_curie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:29 compute-0 podman[73684]: 2026-01-31 07:04:29.34124199 +0000 UTC m=+0.286109384 container remove a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a (image=quay.io/ceph/ceph:v18, name=hopeful_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:29 compute-0 systemd[1]: libpod-conmon-a26bdbc0d7aa84a8f1ed6fffef125bf586d53d94e8feadb36942f40e49eb4d8a.scope: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73720]: 2026-01-31 07:04:29.407041097 +0000 UTC m=+0.048202076 container create 2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59 (image=quay.io/ceph/ceph:v18, name=strange_goldstine, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:04:29 compute-0 systemd[1]: Started libpod-conmon-2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59.scope.
Jan 31 07:04:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:29 compute-0 podman[73720]: 2026-01-31 07:04:29.472769522 +0000 UTC m=+0.113930501 container init 2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59 (image=quay.io/ceph/ceph:v18, name=strange_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:29 compute-0 podman[73720]: 2026-01-31 07:04:29.478696078 +0000 UTC m=+0.119857037 container start 2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59 (image=quay.io/ceph/ceph:v18, name=strange_goldstine, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:29 compute-0 podman[73720]: 2026-01-31 07:04:29.389555687 +0000 UTC m=+0.030716646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:29 compute-0 podman[73720]: 2026-01-31 07:04:29.482627675 +0000 UTC m=+0.123788624 container attach 2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59 (image=quay.io/ceph/ceph:v18, name=strange_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:04:29 compute-0 strange_goldstine[73737]: AQB9qX1pXwxqHhAANb9+yOsIoAPe17D54kVd1Q==
Jan 31 07:04:29 compute-0 systemd[1]: libpod-2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59.scope: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73720]: 2026-01-31 07:04:29.514038206 +0000 UTC m=+0.155199185 container died 2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59 (image=quay.io/ceph/ceph:v18, name=strange_goldstine, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dad3f10ededb2358ff79d929faad9530425cffc6a8fd490e92f68d3f149d69d1-merged.mount: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73720]: 2026-01-31 07:04:29.553366723 +0000 UTC m=+0.194527662 container remove 2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59 (image=quay.io/ceph/ceph:v18, name=strange_goldstine, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:29 compute-0 systemd[1]: libpod-conmon-2809e0650dcfa82449afaf2abd78718e2d33b07817a07d9223c5832b38b09b59.scope: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73754]: 2026-01-31 07:04:29.622237365 +0000 UTC m=+0.048697327 container create a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4 (image=quay.io/ceph/ceph:v18, name=priceless_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:29 compute-0 systemd[1]: Started libpod-conmon-a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4.scope.
Jan 31 07:04:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:29 compute-0 podman[73754]: 2026-01-31 07:04:29.697590958 +0000 UTC m=+0.124050950 container init a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4 (image=quay.io/ceph/ceph:v18, name=priceless_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:29 compute-0 podman[73754]: 2026-01-31 07:04:29.606368226 +0000 UTC m=+0.032828228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:29 compute-0 podman[73754]: 2026-01-31 07:04:29.705539813 +0000 UTC m=+0.131999775 container start a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4 (image=quay.io/ceph/ceph:v18, name=priceless_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:04:29 compute-0 podman[73754]: 2026-01-31 07:04:29.709343667 +0000 UTC m=+0.135803689 container attach a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4 (image=quay.io/ceph/ceph:v18, name=priceless_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:04:29 compute-0 priceless_mahavira[73771]: AQB9qX1p1FnUKxAA/35aFxUZQow6HrnLfxUL3g==
Jan 31 07:04:29 compute-0 systemd[1]: libpod-a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4.scope: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73754]: 2026-01-31 07:04:29.738534184 +0000 UTC m=+0.164994136 container died a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4 (image=quay.io/ceph/ceph:v18, name=priceless_mahavira, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:04:29 compute-0 podman[73754]: 2026-01-31 07:04:29.773580055 +0000 UTC m=+0.200040017 container remove a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4 (image=quay.io/ceph/ceph:v18, name=priceless_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:04:29 compute-0 systemd[1]: libpod-conmon-a8689dc3bba16377ab71257c6fd9ab32678b5e9cf3cff1d2f53de3857ae793d4.scope: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73789]: 2026-01-31 07:04:29.840563582 +0000 UTC m=+0.046870633 container create 49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79 (image=quay.io/ceph/ceph:v18, name=wonderful_bartik, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:04:29 compute-0 systemd[1]: Started libpod-conmon-49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79.scope.
Jan 31 07:04:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0611435e88c54eff160c926af88b27b97034c21095da715b9ce3cb436f28529/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:29 compute-0 podman[73789]: 2026-01-31 07:04:29.896578698 +0000 UTC m=+0.102885759 container init 49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79 (image=quay.io/ceph/ceph:v18, name=wonderful_bartik, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:04:29 compute-0 podman[73789]: 2026-01-31 07:04:29.904023252 +0000 UTC m=+0.110330303 container start 49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79 (image=quay.io/ceph/ceph:v18, name=wonderful_bartik, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:29 compute-0 podman[73789]: 2026-01-31 07:04:29.907378724 +0000 UTC m=+0.113685795 container attach 49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79 (image=quay.io/ceph/ceph:v18, name=wonderful_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:04:29 compute-0 podman[73789]: 2026-01-31 07:04:29.825180144 +0000 UTC m=+0.031487215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:29 compute-0 wonderful_bartik[73806]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 31 07:04:29 compute-0 wonderful_bartik[73806]: setting min_mon_release = pacific
Jan 31 07:04:29 compute-0 wonderful_bartik[73806]: /usr/bin/monmaptool: set fsid to f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:04:29 compute-0 wonderful_bartik[73806]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 31 07:04:29 compute-0 systemd[1]: libpod-49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79.scope: Deactivated successfully.
Jan 31 07:04:29 compute-0 podman[73789]: 2026-01-31 07:04:29.926104494 +0000 UTC m=+0.132411585 container died 49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79 (image=quay.io/ceph/ceph:v18, name=wonderful_bartik, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:29 compute-0 podman[73789]: 2026-01-31 07:04:29.964533529 +0000 UTC m=+0.170840590 container remove 49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79 (image=quay.io/ceph/ceph:v18, name=wonderful_bartik, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:29 compute-0 systemd[1]: libpod-conmon-49b89077d692d7465a0377dd021ea8b0548453d669413e9c8ed3d84d4c580f79.scope: Deactivated successfully.
Jan 31 07:04:30 compute-0 podman[73825]: 2026-01-31 07:04:30.03496521 +0000 UTC m=+0.050100793 container create a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053 (image=quay.io/ceph/ceph:v18, name=angry_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:30 compute-0 systemd[1]: Started libpod-conmon-a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053.scope.
Jan 31 07:04:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63f15eab96b168f66c5ec74f498af016cb73b52419c4ad4f9123c71a2a877f0/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63f15eab96b168f66c5ec74f498af016cb73b52419c4ad4f9123c71a2a877f0/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63f15eab96b168f66c5ec74f498af016cb73b52419c4ad4f9123c71a2a877f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e63f15eab96b168f66c5ec74f498af016cb73b52419c4ad4f9123c71a2a877f0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:30 compute-0 podman[73825]: 2026-01-31 07:04:30.1078166 +0000 UTC m=+0.122952214 container init a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053 (image=quay.io/ceph/ceph:v18, name=angry_chatelet, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:04:30 compute-0 podman[73825]: 2026-01-31 07:04:30.013357009 +0000 UTC m=+0.028492632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:30 compute-0 podman[73825]: 2026-01-31 07:04:30.120124113 +0000 UTC m=+0.135259696 container start a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053 (image=quay.io/ceph/ceph:v18, name=angry_chatelet, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:04:30 compute-0 podman[73825]: 2026-01-31 07:04:30.125226609 +0000 UTC m=+0.140362202 container attach a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053 (image=quay.io/ceph/ceph:v18, name=angry_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:30 compute-0 systemd[1]: libpod-a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053.scope: Deactivated successfully.
Jan 31 07:04:30 compute-0 podman[73825]: 2026-01-31 07:04:30.210826203 +0000 UTC m=+0.225961786 container died a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053 (image=quay.io/ceph/ceph:v18, name=angry_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:04:30 compute-0 podman[73825]: 2026-01-31 07:04:30.247475373 +0000 UTC m=+0.262610956 container remove a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053 (image=quay.io/ceph/ceph:v18, name=angry_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:04:30 compute-0 systemd[1]: libpod-conmon-a5c0cf505fafdd0ebe7947f0e405ea4b07103815df6e45b5915bf699c7562053.scope: Deactivated successfully.
Jan 31 07:04:30 compute-0 systemd[1]: Reloading.
Jan 31 07:04:30 compute-0 systemd-rc-local-generator[73905]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:30 compute-0 systemd-sysv-generator[73909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:30 compute-0 systemd[1]: Reloading.
Jan 31 07:04:30 compute-0 systemd-sysv-generator[73943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:30 compute-0 systemd-rc-local-generator[73940]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:30 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 07:04:30 compute-0 systemd[1]: Reloading.
Jan 31 07:04:30 compute-0 systemd-sysv-generator[73982]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:30 compute-0 systemd-rc-local-generator[73979]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:30 compute-0 systemd[1]: Reached target Ceph cluster f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:04:30 compute-0 systemd[1]: Reloading.
Jan 31 07:04:30 compute-0 systemd-rc-local-generator[74022]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:30 compute-0 systemd-sysv-generator[74025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:31 compute-0 systemd[1]: Reloading.
Jan 31 07:04:31 compute-0 systemd-rc-local-generator[74057]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:31 compute-0 systemd-sysv-generator[74061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:31 compute-0 systemd[1]: Created slice Slice /system/ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:04:31 compute-0 systemd[1]: Reached target System Time Set.
Jan 31 07:04:31 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 31 07:04:31 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:31 compute-0 podman[74122]: 2026-01-31 07:04:31.525453865 +0000 UTC m=+0.110409255 container create 1f7681fe14b7901884499b71cf82c267534ecb5d9c37f27077e98bd65ed806dc (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:04:31 compute-0 podman[74122]: 2026-01-31 07:04:31.431844164 +0000 UTC m=+0.016799534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf6f7703520e372d0a25057b29e795b34860d12f8912ec4764a8a26b67e68d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf6f7703520e372d0a25057b29e795b34860d12f8912ec4764a8a26b67e68d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf6f7703520e372d0a25057b29e795b34860d12f8912ec4764a8a26b67e68d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf6f7703520e372d0a25057b29e795b34860d12f8912ec4764a8a26b67e68d6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:31 compute-0 podman[74122]: 2026-01-31 07:04:31.737225979 +0000 UTC m=+0.322181369 container init 1f7681fe14b7901884499b71cf82c267534ecb5d9c37f27077e98bd65ed806dc (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:31 compute-0 podman[74122]: 2026-01-31 07:04:31.751015279 +0000 UTC m=+0.335970629 container start 1f7681fe14b7901884499b71cf82c267534ecb5d9c37f27077e98bd65ed806dc (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:31 compute-0 bash[74122]: 1f7681fe14b7901884499b71cf82c267534ecb5d9c37f27077e98bd65ed806dc
Jan 31 07:04:31 compute-0 systemd[1]: Started Ceph mon.compute-0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:04:31 compute-0 ceph-mon[74141]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:04:31 compute-0 ceph-mon[74141]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: pidfile_write: ignore empty --pid-file
Jan 31 07:04:31 compute-0 ceph-mon[74141]: load: jerasure load: lrc 
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Git sha 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: DB SUMMARY
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: DB Session ID:  M9F10GSP4RUGOA2UG6UR
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                                     Options.env: 0x55d2bd384c40
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                                Options.info_log: 0x55d2bf33aec0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                                 Options.wal_dir: 
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                    Options.write_buffer_manager: 0x55d2bf34ab40
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                               Options.row_cache: None
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                              Options.wal_filter: None
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.wal_compression: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.max_background_jobs: 2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Compression algorithms supported:
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         kZSTD supported: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:           Options.merge_operator: 
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:        Options.compaction_filter: None
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d2bf33aaa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d2bf3331f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:          Options.compression: NoCompression
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.num_levels: 7
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 68bfdccf-b92f-403c-86dd-82f34f44773c
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843071817505, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843071819679, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "M9F10GSP4RUGOA2UG6UR", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843071819810, "job": 1, "event": "recovery_finished"}
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d2bf35ce00
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: DB pointer 0x55d2bf3e6000
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:04:31 compute-0 ceph-mon[74141]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d2bf3331f0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:04:31 compute-0 ceph-mon[74141]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@-1(???) e0 preinit fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 31 07:04:31 compute-0 ceph-mon[74141]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:04:31 compute-0 ceph-mon[74141]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 07:04:31 compute-0 podman[74142]: 2026-01-31 07:04:31.846357392 +0000 UTC m=+0.060451157 container create eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b (image=quay.io/ceph/ceph:v18, name=intelligent_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 07:04:31 compute-0 ceph-mon[74141]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:04:31 compute-0 ceph-mon[74141]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:04:31 compute-0 ceph-mon[74141]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T07:04:30.171047Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,os=Linux}
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).mds e1 new map
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 07:04:31 compute-0 ceph-mon[74141]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mkfs f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 31 07:04:31 compute-0 ceph-mon[74141]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 07:04:31 compute-0 ceph-mon[74141]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 07:04:31 compute-0 ceph-mon[74141]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:04:31 compute-0 systemd[1]: Started libpod-conmon-eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b.scope.
Jan 31 07:04:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8befb5ff6958a2cbcc62f50f55852fdf3af4059ea00661b5b5c92cbe094f61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8befb5ff6958a2cbcc62f50f55852fdf3af4059ea00661b5b5c92cbe094f61/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8befb5ff6958a2cbcc62f50f55852fdf3af4059ea00661b5b5c92cbe094f61/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:31 compute-0 podman[74142]: 2026-01-31 07:04:31.823570132 +0000 UTC m=+0.037663937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:31 compute-0 podman[74142]: 2026-01-31 07:04:31.927877675 +0000 UTC m=+0.141971450 container init eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b (image=quay.io/ceph/ceph:v18, name=intelligent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:04:31 compute-0 podman[74142]: 2026-01-31 07:04:31.93215325 +0000 UTC m=+0.146247005 container start eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b (image=quay.io/ceph/ceph:v18, name=intelligent_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:04:31 compute-0 podman[74142]: 2026-01-31 07:04:31.934898218 +0000 UTC m=+0.148991973 container attach eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b (image=quay.io/ceph/ceph:v18, name=intelligent_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:04:32 compute-0 ceph-mon[74141]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 07:04:32 compute-0 ceph-mon[74141]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918199613' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:   cluster:
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     id:     f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     health: HEALTH_OK
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:  
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:   services:
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     mon: 1 daemons, quorum compute-0 (age 0.475837s)
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     mgr: no daemons active
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     osd: 0 osds: 0 up, 0 in
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:  
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:   data:
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     pools:   0 pools, 0 pgs
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     objects: 0 objects, 0 B
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     usage:   0 B used, 0 B / 0 B avail
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:     pgs:     
Jan 31 07:04:32 compute-0 intelligent_lovelace[74196]:  
Jan 31 07:04:32 compute-0 systemd[1]: libpod-eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b.scope: Deactivated successfully.
Jan 31 07:04:32 compute-0 podman[74142]: 2026-01-31 07:04:32.345009918 +0000 UTC m=+0.559103673 container died eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b (image=quay.io/ceph/ceph:v18, name=intelligent_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b8befb5ff6958a2cbcc62f50f55852fdf3af4059ea00661b5b5c92cbe094f61-merged.mount: Deactivated successfully.
Jan 31 07:04:32 compute-0 podman[74142]: 2026-01-31 07:04:32.37966601 +0000 UTC m=+0.593759765 container remove eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b (image=quay.io/ceph/ceph:v18, name=intelligent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:04:32 compute-0 systemd[1]: libpod-conmon-eaac74649bc93417006bed26c41f2c3c6279697ba01cec9ed2b593215dad085b.scope: Deactivated successfully.
Jan 31 07:04:32 compute-0 podman[74233]: 2026-01-31 07:04:32.430503709 +0000 UTC m=+0.038327072 container create 28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44 (image=quay.io/ceph/ceph:v18, name=cool_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:04:32 compute-0 systemd[1]: Started libpod-conmon-28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44.scope.
Jan 31 07:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f201df587adf884fa84503dfe6396cad1fa1f3e28c5d09464c9a8075de80fab/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f201df587adf884fa84503dfe6396cad1fa1f3e28c5d09464c9a8075de80fab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f201df587adf884fa84503dfe6396cad1fa1f3e28c5d09464c9a8075de80fab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f201df587adf884fa84503dfe6396cad1fa1f3e28c5d09464c9a8075de80fab/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:32 compute-0 podman[74233]: 2026-01-31 07:04:32.498348257 +0000 UTC m=+0.106171610 container init 28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44 (image=quay.io/ceph/ceph:v18, name=cool_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:04:32 compute-0 podman[74233]: 2026-01-31 07:04:32.412363043 +0000 UTC m=+0.020186406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:32 compute-0 podman[74233]: 2026-01-31 07:04:32.512339301 +0000 UTC m=+0.120162634 container start 28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44 (image=quay.io/ceph/ceph:v18, name=cool_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:32 compute-0 podman[74233]: 2026-01-31 07:04:32.516110153 +0000 UTC m=+0.123933516 container attach 28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44 (image=quay.io/ceph/ceph:v18, name=cool_khayyam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:32 compute-0 ceph-mon[74141]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 07:04:32 compute-0 ceph-mon[74141]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2084936057' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 07:04:32 compute-0 ceph-mon[74141]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2084936057' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 07:04:32 compute-0 cool_khayyam[74249]: 
Jan 31 07:04:32 compute-0 cool_khayyam[74249]: [global]
Jan 31 07:04:32 compute-0 cool_khayyam[74249]:         fsid = f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:04:32 compute-0 cool_khayyam[74249]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 07:04:32 compute-0 ceph-mon[74141]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:04:32 compute-0 ceph-mon[74141]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:04:32 compute-0 ceph-mon[74141]: fsmap 
Jan 31 07:04:32 compute-0 ceph-mon[74141]: osdmap e1: 0 total, 0 up, 0 in
Jan 31 07:04:32 compute-0 ceph-mon[74141]: mgrmap e1: no daemons active
Jan 31 07:04:32 compute-0 ceph-mon[74141]: from='client.? 192.168.122.100:0/3918199613' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 07:04:32 compute-0 ceph-mon[74141]: from='client.? 192.168.122.100:0/2084936057' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 07:04:32 compute-0 systemd[1]: libpod-28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44.scope: Deactivated successfully.
Jan 31 07:04:32 compute-0 conmon[74249]: conmon 28de1637217e7f22a52f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44.scope/container/memory.events
Jan 31 07:04:32 compute-0 podman[74233]: 2026-01-31 07:04:32.908453476 +0000 UTC m=+0.516276809 container died 28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44 (image=quay.io/ceph/ceph:v18, name=cool_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f201df587adf884fa84503dfe6396cad1fa1f3e28c5d09464c9a8075de80fab-merged.mount: Deactivated successfully.
Jan 31 07:04:32 compute-0 podman[74233]: 2026-01-31 07:04:32.944891552 +0000 UTC m=+0.552714885 container remove 28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44 (image=quay.io/ceph/ceph:v18, name=cool_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:04:32 compute-0 systemd[1]: libpod-conmon-28de1637217e7f22a52f8f135323370752b20426361b3d2d7f17b10829cd7f44.scope: Deactivated successfully.
Jan 31 07:04:32 compute-0 podman[74289]: 2026-01-31 07:04:32.99277681 +0000 UTC m=+0.035713810 container create 0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:33 compute-0 systemd[1]: Started libpod-conmon-0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881.scope.
Jan 31 07:04:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b023c5ada93be12f86de0ad12adbee9a16c02a14f544e1887aa2dffdf53ed6f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b023c5ada93be12f86de0ad12adbee9a16c02a14f544e1887aa2dffdf53ed6f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b023c5ada93be12f86de0ad12adbee9a16c02a14f544e1887aa2dffdf53ed6f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b023c5ada93be12f86de0ad12adbee9a16c02a14f544e1887aa2dffdf53ed6f8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:33 compute-0 podman[74289]: 2026-01-31 07:04:33.057850758 +0000 UTC m=+0.100787788 container init 0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:04:33 compute-0 podman[74289]: 2026-01-31 07:04:33.061523099 +0000 UTC m=+0.104460099 container start 0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:33 compute-0 podman[74289]: 2026-01-31 07:04:33.067030035 +0000 UTC m=+0.109967045 container attach 0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:04:33 compute-0 podman[74289]: 2026-01-31 07:04:32.975992647 +0000 UTC m=+0.018929647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:33 compute-0 ceph-mon[74141]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:04:33 compute-0 ceph-mon[74141]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/659253829' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:04:33 compute-0 systemd[1]: libpod-0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881.scope: Deactivated successfully.
Jan 31 07:04:33 compute-0 podman[74289]: 2026-01-31 07:04:33.449635928 +0000 UTC m=+0.492572928 container died 0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b023c5ada93be12f86de0ad12adbee9a16c02a14f544e1887aa2dffdf53ed6f8-merged.mount: Deactivated successfully.
Jan 31 07:04:33 compute-0 podman[74289]: 2026-01-31 07:04:33.489265142 +0000 UTC m=+0.532202152 container remove 0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:04:33 compute-0 systemd[1]: libpod-conmon-0978632ca67d573cc5d6ef4c1a51ebdb2e3eb079d67db6c899549ea752bee881.scope: Deactivated successfully.
Jan 31 07:04:33 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:04:33 compute-0 ceph-mon[74141]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 07:04:33 compute-0 ceph-mon[74141]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 07:04:33 compute-0 ceph-mon[74141]: mon.compute-0@0(leader) e1 shutdown
Jan 31 07:04:33 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0[74137]: 2026-01-31T07:04:33.843+0000 7f0d05ea3640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 07:04:33 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0[74137]: 2026-01-31T07:04:33.843+0000 7f0d05ea3640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 07:04:33 compute-0 ceph-mon[74141]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 07:04:33 compute-0 ceph-mon[74141]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 07:04:33 compute-0 podman[74373]: 2026-01-31 07:04:33.893936288 +0000 UTC m=+0.248260202 container died 1f7681fe14b7901884499b71cf82c267534ecb5d9c37f27077e98bd65ed806dc (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf6f7703520e372d0a25057b29e795b34860d12f8912ec4764a8a26b67e68d6-merged.mount: Deactivated successfully.
Jan 31 07:04:33 compute-0 podman[74373]: 2026-01-31 07:04:33.928844546 +0000 UTC m=+0.283168460 container remove 1f7681fe14b7901884499b71cf82c267534ecb5d9c37f27077e98bd65ed806dc (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:33 compute-0 bash[74373]: ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0
Jan 31 07:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 07:04:34 compute-0 systemd[1]: ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@mon.compute-0.service: Deactivated successfully.
Jan 31 07:04:34 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:04:34 compute-0 systemd[1]: Starting Ceph mon.compute-0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:04:34 compute-0 podman[74476]: 2026-01-31 07:04:34.229869935 +0000 UTC m=+0.041142382 container create c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb06755269fdf102ad8b13f3da1e2498f213b5f64725c7540fb71d05d5b8b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb06755269fdf102ad8b13f3da1e2498f213b5f64725c7540fb71d05d5b8b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb06755269fdf102ad8b13f3da1e2498f213b5f64725c7540fb71d05d5b8b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb06755269fdf102ad8b13f3da1e2498f213b5f64725c7540fb71d05d5b8b4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:34 compute-0 podman[74476]: 2026-01-31 07:04:34.284620821 +0000 UTC m=+0.095893288 container init c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:04:34 compute-0 podman[74476]: 2026-01-31 07:04:34.289601014 +0000 UTC m=+0.100873461 container start c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:34 compute-0 bash[74476]: c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7
Jan 31 07:04:34 compute-0 podman[74476]: 2026-01-31 07:04:34.21212246 +0000 UTC m=+0.023394947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:34 compute-0 systemd[1]: Started Ceph mon.compute-0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:04:34 compute-0 ceph-mon[74496]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:04:34 compute-0 ceph-mon[74496]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 07:04:34 compute-0 ceph-mon[74496]: pidfile_write: ignore empty --pid-file
Jan 31 07:04:34 compute-0 ceph-mon[74496]: load: jerasure load: lrc 
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Git sha 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: DB SUMMARY
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: DB Session ID:  DMSZC7CZYI433L98KJVN
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 52700 ; 
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                                     Options.env: 0x5635b2dbac40
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                                Options.info_log: 0x5635b3ba5040
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                                 Options.wal_dir: 
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                    Options.write_buffer_manager: 0x5635b3bb4b40
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                               Options.row_cache: None
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                              Options.wal_filter: None
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.wal_compression: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.max_background_jobs: 2
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Compression algorithms supported:
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         kZSTD supported: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:04:34 compute-0 podman[74497]: 2026-01-31 07:04:34.352427538 +0000 UTC m=+0.041052231 container create 3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423 (image=quay.io/ceph/ceph:v18, name=amazing_jemison, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:           Options.merge_operator: 
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:        Options.compaction_filter: None
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5635b3ba4c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5635b3b9d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:          Options.compression: NoCompression
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.num_levels: 7
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 68bfdccf-b92f-403c-86dd-82f34f44773c
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843074347399, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843074352188, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 52464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 128, "table_properties": {"data_size": 51017, "index_size": 153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2850, "raw_average_key_size": 30, "raw_value_size": 48736, "raw_average_value_size": 513, "num_data_blocks": 7, "num_entries": 95, "num_filter_entries": 95, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843074, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843074352255, "job": 1, "event": "recovery_finished"}
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5635b3bc6e00
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: DB pointer 0x5635b3cce000
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   53.13 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0   53.13 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.23 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.23 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:04:34 compute-0 ceph-mon[74496]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(???) e1 preinit fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(???).mds e1 new map
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 07:04:34 compute-0 ceph-mon[74496]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:04:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:04:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:04:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 07:04:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 07:04:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 07:04:34 compute-0 systemd[1]: Started libpod-conmon-3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423.scope.
Jan 31 07:04:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d380d61e675f8c3c35007fdaea698fdc0eb445c9e580a0a0e20a62b44d92a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d380d61e675f8c3c35007fdaea698fdc0eb445c9e580a0a0e20a62b44d92a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d380d61e675f8c3c35007fdaea698fdc0eb445c9e580a0a0e20a62b44d92a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:34 compute-0 podman[74497]: 2026-01-31 07:04:34.32972789 +0000 UTC m=+0.018352603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 07:04:34 compute-0 ceph-mon[74496]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:04:34 compute-0 ceph-mon[74496]: fsmap 
Jan 31 07:04:34 compute-0 ceph-mon[74496]: osdmap e1: 0 total, 0 up, 0 in
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mgrmap e1: no daemons active
Jan 31 07:04:34 compute-0 podman[74497]: 2026-01-31 07:04:34.434834023 +0000 UTC m=+0.123458796 container init 3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423 (image=quay.io/ceph/ceph:v18, name=amazing_jemison, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:04:34 compute-0 podman[74497]: 2026-01-31 07:04:34.439888787 +0000 UTC m=+0.128513520 container start 3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423 (image=quay.io/ceph/ceph:v18, name=amazing_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 31 07:04:34 compute-0 podman[74497]: 2026-01-31 07:04:34.443682061 +0000 UTC m=+0.132306844 container attach 3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423 (image=quay.io/ceph/ceph:v18, name=amazing_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:04:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 31 07:04:34 compute-0 systemd[1]: libpod-3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423.scope: Deactivated successfully.
Jan 31 07:04:34 compute-0 podman[74497]: 2026-01-31 07:04:34.840464133 +0000 UTC m=+0.529088866 container died 3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423 (image=quay.io/ceph/ceph:v18, name=amazing_jemison, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:34 compute-0 podman[74497]: 2026-01-31 07:04:34.891261952 +0000 UTC m=+0.579886685 container remove 3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423 (image=quay.io/ceph/ceph:v18, name=amazing_jemison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:34 compute-0 systemd[1]: libpod-conmon-3cfbb01560069a275fae129a8d2f5d73507d0d4517d99e48f60da56819ab2423.scope: Deactivated successfully.
Jan 31 07:04:34 compute-0 podman[74591]: 2026-01-31 07:04:34.964098152 +0000 UTC m=+0.050396290 container create 872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225 (image=quay.io/ceph/ceph:v18, name=peaceful_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:04:35 compute-0 systemd[1]: Started libpod-conmon-872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225.scope.
Jan 31 07:04:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e36bdf7b4bd3ebe7f6b94d5d695542fc526349a004240b268a30ab1286431f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e36bdf7b4bd3ebe7f6b94d5d695542fc526349a004240b268a30ab1286431f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e36bdf7b4bd3ebe7f6b94d5d695542fc526349a004240b268a30ab1286431f7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:35 compute-0 podman[74591]: 2026-01-31 07:04:34.947011601 +0000 UTC m=+0.033309749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:35 compute-0 podman[74591]: 2026-01-31 07:04:35.054715789 +0000 UTC m=+0.141013967 container init 872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225 (image=quay.io/ceph/ceph:v18, name=peaceful_shirley, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:04:35 compute-0 podman[74591]: 2026-01-31 07:04:35.061038574 +0000 UTC m=+0.147336712 container start 872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225 (image=quay.io/ceph/ceph:v18, name=peaceful_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:35 compute-0 podman[74591]: 2026-01-31 07:04:35.065035833 +0000 UTC m=+0.151333961 container attach 872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225 (image=quay.io/ceph/ceph:v18, name=peaceful_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 31 07:04:35 compute-0 systemd[1]: libpod-872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225.scope: Deactivated successfully.
Jan 31 07:04:35 compute-0 podman[74633]: 2026-01-31 07:04:35.519877122 +0000 UTC m=+0.025367535 container died 872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225 (image=quay.io/ceph/ceph:v18, name=peaceful_shirley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:04:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e36bdf7b4bd3ebe7f6b94d5d695542fc526349a004240b268a30ab1286431f7-merged.mount: Deactivated successfully.
Jan 31 07:04:35 compute-0 podman[74633]: 2026-01-31 07:04:35.561657568 +0000 UTC m=+0.067147951 container remove 872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225 (image=quay.io/ceph/ceph:v18, name=peaceful_shirley, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:35 compute-0 systemd[1]: libpod-conmon-872a12c3fee0ba34a4bbb00a0351b21130c1f0895b63c3eb17fb6591f1b01225.scope: Deactivated successfully.
Jan 31 07:04:35 compute-0 systemd[1]: Reloading.
Jan 31 07:04:35 compute-0 systemd-rc-local-generator[74668]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:35 compute-0 systemd-sysv-generator[74673]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:35 compute-0 systemd[1]: Reloading.
Jan 31 07:04:35 compute-0 systemd-rc-local-generator[74711]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:04:35 compute-0 systemd-sysv-generator[74714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:04:36 compute-0 systemd[1]: Starting Ceph mgr.compute-0.hhuoua for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:04:36 compute-0 podman[74772]: 2026-01-31 07:04:36.255745459 +0000 UTC m=+0.051314403 container create 80eff2094e986ce79ca2f5db33c4e20adbe7c8c35152ee84bfca5e742fce5e26 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4e373ef552ae75c0a78a600062a692ffa0b2f36295b2534f62a67928c16d97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4e373ef552ae75c0a78a600062a692ffa0b2f36295b2534f62a67928c16d97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4e373ef552ae75c0a78a600062a692ffa0b2f36295b2534f62a67928c16d97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a4e373ef552ae75c0a78a600062a692ffa0b2f36295b2534f62a67928c16d97/merged/var/lib/ceph/mgr/ceph-compute-0.hhuoua supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:36 compute-0 podman[74772]: 2026-01-31 07:04:36.301460613 +0000 UTC m=+0.097029567 container init 80eff2094e986ce79ca2f5db33c4e20adbe7c8c35152ee84bfca5e742fce5e26 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:04:36 compute-0 podman[74772]: 2026-01-31 07:04:36.304923627 +0000 UTC m=+0.100492561 container start 80eff2094e986ce79ca2f5db33c4e20adbe7c8c35152ee84bfca5e742fce5e26 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:36 compute-0 bash[74772]: 80eff2094e986ce79ca2f5db33c4e20adbe7c8c35152ee84bfca5e742fce5e26
Jan 31 07:04:36 compute-0 podman[74772]: 2026-01-31 07:04:36.232737724 +0000 UTC m=+0.028306748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:36 compute-0 systemd[1]: Started Ceph mgr.compute-0.hhuoua for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:04:36 compute-0 ceph-mgr[74791]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:04:36 compute-0 ceph-mgr[74791]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 07:04:36 compute-0 ceph-mgr[74791]: pidfile_write: ignore empty --pid-file
Jan 31 07:04:36 compute-0 podman[74792]: 2026-01-31 07:04:36.37175972 +0000 UTC m=+0.042799603 container create 9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869 (image=quay.io/ceph/ceph:v18, name=jolly_driscoll, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:36 compute-0 systemd[1]: Started libpod-conmon-9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869.scope.
Jan 31 07:04:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:36 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'alerts'
Jan 31 07:04:36 compute-0 podman[74792]: 2026-01-31 07:04:36.347500414 +0000 UTC m=+0.018540327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acdfc5cd92b172236e3d7525b5c9548cb05f75bf0e029a1a9c16da1afcf7e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acdfc5cd92b172236e3d7525b5c9548cb05f75bf0e029a1a9c16da1afcf7e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63acdfc5cd92b172236e3d7525b5c9548cb05f75bf0e029a1a9c16da1afcf7e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:36 compute-0 podman[74792]: 2026-01-31 07:04:36.487949586 +0000 UTC m=+0.158989549 container init 9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869 (image=quay.io/ceph/ceph:v18, name=jolly_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:04:36 compute-0 podman[74792]: 2026-01-31 07:04:36.498839754 +0000 UTC m=+0.169879647 container start 9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869 (image=quay.io/ceph/ceph:v18, name=jolly_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:36 compute-0 podman[74792]: 2026-01-31 07:04:36.540137958 +0000 UTC m=+0.211177882 container attach 9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869 (image=quay.io/ceph/ceph:v18, name=jolly_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:04:36 compute-0 ceph-mgr[74791]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 07:04:36 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'balancer'
Jan 31 07:04:36 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:36.729+0000 7fe90d394140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 07:04:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3993663397' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]: 
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]: {
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "health": {
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "status": "HEALTH_OK",
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "checks": {},
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "mutes": []
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     },
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "election_epoch": 5,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "quorum": [
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         0
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     ],
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "quorum_names": [
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "compute-0"
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     ],
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "quorum_age": 2,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "monmap": {
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "epoch": 1,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "min_mon_release_name": "reef",
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_mons": 1
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     },
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "osdmap": {
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "epoch": 1,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_osds": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_up_osds": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "osd_up_since": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_in_osds": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "osd_in_since": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_remapped_pgs": 0
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     },
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "pgmap": {
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "pgs_by_state": [],
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_pgs": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_pools": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_objects": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "data_bytes": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "bytes_used": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "bytes_avail": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "bytes_total": 0
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     },
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "fsmap": {
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "epoch": 1,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "by_rank": [],
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "up:standby": 0
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     },
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "mgrmap": {
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "available": false,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "num_standbys": 0,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "modules": [
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:             "iostat",
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:             "nfs",
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:             "restful"
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         ],
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "services": {}
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     },
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "servicemap": {
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "epoch": 1,
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:         "services": {}
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     },
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]:     "progress_events": {}
Jan 31 07:04:36 compute-0 jolly_driscoll[74833]: }
Jan 31 07:04:36 compute-0 systemd[1]: libpod-9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869.scope: Deactivated successfully.
Jan 31 07:04:36 compute-0 podman[74792]: 2026-01-31 07:04:36.90921808 +0000 UTC m=+0.580258003 container died 9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869 (image=quay.io/ceph/ceph:v18, name=jolly_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:04:36 compute-0 ceph-mgr[74791]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 07:04:36 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'cephadm'
Jan 31 07:04:36 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:36.983+0000 7fe90d394140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 07:04:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3993663397' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-63acdfc5cd92b172236e3d7525b5c9548cb05f75bf0e029a1a9c16da1afcf7e5-merged.mount: Deactivated successfully.
Jan 31 07:04:37 compute-0 podman[74792]: 2026-01-31 07:04:37.130571201 +0000 UTC m=+0.801611094 container remove 9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869 (image=quay.io/ceph/ceph:v18, name=jolly_driscoll, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:37 compute-0 systemd[1]: libpod-conmon-9c62dbbfc991141f10cf91d9a8f4234aa154e9c741ac2167c9ef361dcadd4869.scope: Deactivated successfully.
Jan 31 07:04:38 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'crash'
Jan 31 07:04:39 compute-0 ceph-mgr[74791]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 07:04:39 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'dashboard'
Jan 31 07:04:39 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:39.144+0000 7fe90d394140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 07:04:39 compute-0 podman[74884]: 2026-01-31 07:04:39.189935747 +0000 UTC m=+0.038751763 container create d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49 (image=quay.io/ceph/ceph:v18, name=laughing_kalam, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:39 compute-0 systemd[1]: Started libpod-conmon-d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49.scope.
Jan 31 07:04:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c1a870f1507ff06877586636a69e7cac632cfafa30678112a98f2a2075bb62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c1a870f1507ff06877586636a69e7cac632cfafa30678112a98f2a2075bb62/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1c1a870f1507ff06877586636a69e7cac632cfafa30678112a98f2a2075bb62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:39 compute-0 podman[74884]: 2026-01-31 07:04:39.264115881 +0000 UTC m=+0.112931897 container init d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49 (image=quay.io/ceph/ceph:v18, name=laughing_kalam, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:04:39 compute-0 podman[74884]: 2026-01-31 07:04:39.17135417 +0000 UTC m=+0.020170186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:39 compute-0 podman[74884]: 2026-01-31 07:04:39.270466126 +0000 UTC m=+0.119282112 container start d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49 (image=quay.io/ceph/ceph:v18, name=laughing_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:04:39 compute-0 podman[74884]: 2026-01-31 07:04:39.278786951 +0000 UTC m=+0.127602967 container attach d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49 (image=quay.io/ceph/ceph:v18, name=laughing_kalam, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439820752' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:39 compute-0 laughing_kalam[74900]: 
Jan 31 07:04:39 compute-0 laughing_kalam[74900]: {
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "health": {
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "status": "HEALTH_OK",
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "checks": {},
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "mutes": []
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     },
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "election_epoch": 5,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "quorum": [
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         0
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     ],
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "quorum_names": [
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "compute-0"
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     ],
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "quorum_age": 5,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "monmap": {
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "epoch": 1,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "min_mon_release_name": "reef",
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_mons": 1
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     },
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "osdmap": {
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "epoch": 1,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_osds": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_up_osds": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "osd_up_since": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_in_osds": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "osd_in_since": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_remapped_pgs": 0
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     },
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "pgmap": {
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "pgs_by_state": [],
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_pgs": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_pools": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_objects": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "data_bytes": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "bytes_used": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "bytes_avail": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "bytes_total": 0
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     },
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "fsmap": {
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "epoch": 1,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "by_rank": [],
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "up:standby": 0
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     },
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "mgrmap": {
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "available": false,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "num_standbys": 0,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "modules": [
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:             "iostat",
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:             "nfs",
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:             "restful"
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         ],
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "services": {}
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     },
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "servicemap": {
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "epoch": 1,
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:         "services": {}
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     },
Jan 31 07:04:39 compute-0 laughing_kalam[74900]:     "progress_events": {}
Jan 31 07:04:39 compute-0 laughing_kalam[74900]: }
Jan 31 07:04:39 compute-0 systemd[1]: libpod-d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49.scope: Deactivated successfully.
Jan 31 07:04:39 compute-0 podman[74926]: 2026-01-31 07:04:39.694588481 +0000 UTC m=+0.021383437 container died d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49 (image=quay.io/ceph/ceph:v18, name=laughing_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:04:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/439820752' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1c1a870f1507ff06877586636a69e7cac632cfafa30678112a98f2a2075bb62-merged.mount: Deactivated successfully.
Jan 31 07:04:39 compute-0 podman[74926]: 2026-01-31 07:04:39.739797982 +0000 UTC m=+0.066592928 container remove d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49 (image=quay.io/ceph/ceph:v18, name=laughing_kalam, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:39 compute-0 systemd[1]: libpod-conmon-d7f5dab1eea040d3d7329861fb2d0330d5f4417b9da1c63634b548d889bb6e49.scope: Deactivated successfully.
Jan 31 07:04:40 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'devicehealth'
Jan 31 07:04:40 compute-0 ceph-mgr[74791]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 07:04:40 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:40.884+0000 7fe90d394140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 07:04:40 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 07:04:41 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 07:04:41 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 07:04:41 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   from numpy import show_config as show_numpy_config
Jan 31 07:04:41 compute-0 ceph-mgr[74791]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 07:04:41 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:41.396+0000 7fe90d394140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 07:04:41 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'influx'
Jan 31 07:04:41 compute-0 ceph-mgr[74791]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 07:04:41 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'insights'
Jan 31 07:04:41 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:41.640+0000 7fe90d394140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 07:04:41 compute-0 podman[74942]: 2026-01-31 07:04:41.820272977 +0000 UTC m=+0.051772073 container create b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66 (image=quay.io/ceph/ceph:v18, name=tender_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:41 compute-0 systemd[1]: Started libpod-conmon-b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66.scope.
Jan 31 07:04:41 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'iostat'
Jan 31 07:04:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd7d20be80798c939dafa4e3da43cb1637c57650a77ac525adb67b3da9b3c95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd7d20be80798c939dafa4e3da43cb1637c57650a77ac525adb67b3da9b3c95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd7d20be80798c939dafa4e3da43cb1637c57650a77ac525adb67b3da9b3c95/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:41 compute-0 podman[74942]: 2026-01-31 07:04:41.798188205 +0000 UTC m=+0.029687291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:41 compute-0 podman[74942]: 2026-01-31 07:04:41.906445266 +0000 UTC m=+0.137944402 container init b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66 (image=quay.io/ceph/ceph:v18, name=tender_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:04:41 compute-0 podman[74942]: 2026-01-31 07:04:41.913800966 +0000 UTC m=+0.145300022 container start b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66 (image=quay.io/ceph/ceph:v18, name=tender_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:41 compute-0 podman[74942]: 2026-01-31 07:04:41.917402525 +0000 UTC m=+0.148901681 container attach b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66 (image=quay.io/ceph/ceph:v18, name=tender_cohen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:42 compute-0 ceph-mgr[74791]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 07:04:42 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:42.104+0000 7fe90d394140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 07:04:42 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'k8sevents'
Jan 31 07:04:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313579577' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:42 compute-0 tender_cohen[74959]: 
Jan 31 07:04:42 compute-0 tender_cohen[74959]: {
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "health": {
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "status": "HEALTH_OK",
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "checks": {},
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "mutes": []
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     },
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "election_epoch": 5,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "quorum": [
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         0
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     ],
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "quorum_names": [
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "compute-0"
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     ],
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "quorum_age": 7,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "monmap": {
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "epoch": 1,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "min_mon_release_name": "reef",
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_mons": 1
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     },
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "osdmap": {
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "epoch": 1,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_osds": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_up_osds": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "osd_up_since": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_in_osds": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "osd_in_since": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_remapped_pgs": 0
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     },
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "pgmap": {
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "pgs_by_state": [],
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_pgs": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_pools": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_objects": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "data_bytes": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "bytes_used": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "bytes_avail": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "bytes_total": 0
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     },
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "fsmap": {
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "epoch": 1,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "by_rank": [],
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "up:standby": 0
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     },
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "mgrmap": {
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "available": false,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "num_standbys": 0,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "modules": [
Jan 31 07:04:42 compute-0 tender_cohen[74959]:             "iostat",
Jan 31 07:04:42 compute-0 tender_cohen[74959]:             "nfs",
Jan 31 07:04:42 compute-0 tender_cohen[74959]:             "restful"
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         ],
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "services": {}
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     },
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "servicemap": {
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "epoch": 1,
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:42 compute-0 tender_cohen[74959]:         "services": {}
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     },
Jan 31 07:04:42 compute-0 tender_cohen[74959]:     "progress_events": {}
Jan 31 07:04:42 compute-0 tender_cohen[74959]: }
Jan 31 07:04:42 compute-0 systemd[1]: libpod-b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66.scope: Deactivated successfully.
Jan 31 07:04:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3313579577' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:42 compute-0 podman[74985]: 2026-01-31 07:04:42.356142978 +0000 UTC m=+0.031839833 container died b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66 (image=quay.io/ceph/ceph:v18, name=tender_cohen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:04:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfd7d20be80798c939dafa4e3da43cb1637c57650a77ac525adb67b3da9b3c95-merged.mount: Deactivated successfully.
Jan 31 07:04:42 compute-0 podman[74985]: 2026-01-31 07:04:42.392919012 +0000 UTC m=+0.068615847 container remove b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66 (image=quay.io/ceph/ceph:v18, name=tender_cohen, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:04:42 compute-0 systemd[1]: libpod-conmon-b89d0d505dc66955822606d9d08237bfe3c0bd4d6bc5cea3895cd4da67c40a66.scope: Deactivated successfully.
Jan 31 07:04:43 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'localpool'
Jan 31 07:04:44 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 07:04:44 compute-0 podman[75000]: 2026-01-31 07:04:44.455146099 +0000 UTC m=+0.037833421 container create 37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:04:44 compute-0 systemd[1]: Started libpod-conmon-37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd.scope.
Jan 31 07:04:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552039cd7a60fe112fd5c2c773ef2249f15d689084b8c1b4319c64ea94c9a4d3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552039cd7a60fe112fd5c2c773ef2249f15d689084b8c1b4319c64ea94c9a4d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552039cd7a60fe112fd5c2c773ef2249f15d689084b8c1b4319c64ea94c9a4d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:44 compute-0 podman[75000]: 2026-01-31 07:04:44.43852395 +0000 UTC m=+0.021211242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:44 compute-0 podman[75000]: 2026-01-31 07:04:44.536236912 +0000 UTC m=+0.118924204 container init 37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:44 compute-0 podman[75000]: 2026-01-31 07:04:44.541217475 +0000 UTC m=+0.123904767 container start 37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:04:44 compute-0 podman[75000]: 2026-01-31 07:04:44.545008788 +0000 UTC m=+0.127696080 container attach 37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:04:44 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'mirroring'
Jan 31 07:04:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327285530' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]: 
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]: {
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "health": {
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "status": "HEALTH_OK",
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "checks": {},
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "mutes": []
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     },
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "election_epoch": 5,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "quorum": [
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         0
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     ],
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "quorum_names": [
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "compute-0"
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     ],
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "quorum_age": 10,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "monmap": {
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "epoch": 1,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "min_mon_release_name": "reef",
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_mons": 1
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     },
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "osdmap": {
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "epoch": 1,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_osds": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_up_osds": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "osd_up_since": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_in_osds": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "osd_in_since": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_remapped_pgs": 0
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     },
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "pgmap": {
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "pgs_by_state": [],
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_pgs": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_pools": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_objects": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "data_bytes": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "bytes_used": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "bytes_avail": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "bytes_total": 0
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     },
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "fsmap": {
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "epoch": 1,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "by_rank": [],
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "up:standby": 0
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     },
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "mgrmap": {
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "available": false,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "num_standbys": 0,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "modules": [
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:             "iostat",
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:             "nfs",
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:             "restful"
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         ],
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "services": {}
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     },
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "servicemap": {
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "epoch": 1,
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:         "services": {}
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     },
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]:     "progress_events": {}
Jan 31 07:04:44 compute-0 crazy_matsumoto[75017]: }
Jan 31 07:04:44 compute-0 systemd[1]: libpod-37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd.scope: Deactivated successfully.
Jan 31 07:04:44 compute-0 podman[75000]: 2026-01-31 07:04:44.921408799 +0000 UTC m=+0.504096141 container died 37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:04:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-552039cd7a60fe112fd5c2c773ef2249f15d689084b8c1b4319c64ea94c9a4d3-merged.mount: Deactivated successfully.
Jan 31 07:04:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2327285530' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:44 compute-0 podman[75000]: 2026-01-31 07:04:44.973627143 +0000 UTC m=+0.556314465 container remove 37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd (image=quay.io/ceph/ceph:v18, name=crazy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:04:44 compute-0 systemd[1]: libpod-conmon-37f8abeb04e3624c5893afc524c553e5f8b57cbb61a6784f63eb2a67a871f7dd.scope: Deactivated successfully.
Jan 31 07:04:45 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'nfs'
Jan 31 07:04:45 compute-0 ceph-mgr[74791]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 07:04:45 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'orchestrator'
Jan 31 07:04:45 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:45.727+0000 7fe90d394140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 07:04:46 compute-0 ceph-mgr[74791]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 07:04:46 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:46.417+0000 7fe90d394140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 07:04:46 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 07:04:46 compute-0 ceph-mgr[74791]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 07:04:46 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:46.696+0000 7fe90d394140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 07:04:46 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'osd_support'
Jan 31 07:04:46 compute-0 ceph-mgr[74791]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 07:04:46 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:46.940+0000 7fe90d394140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 07:04:46 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 07:04:47 compute-0 podman[75057]: 2026-01-31 07:04:47.038013113 +0000 UTC m=+0.045384106 container create b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e (image=quay.io/ceph/ceph:v18, name=trusting_driscoll, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 07:04:47 compute-0 systemd[1]: Started libpod-conmon-b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e.scope.
Jan 31 07:04:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ade466b9c41526688d7fc10c06db57ccb19657b7a8114c37037af1febed2ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ade466b9c41526688d7fc10c06db57ccb19657b7a8114c37037af1febed2ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40ade466b9c41526688d7fc10c06db57ccb19657b7a8114c37037af1febed2ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:47 compute-0 podman[75057]: 2026-01-31 07:04:47.014855143 +0000 UTC m=+0.022226206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:47 compute-0 podman[75057]: 2026-01-31 07:04:47.124394996 +0000 UTC m=+0.131766069 container init b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e (image=quay.io/ceph/ceph:v18, name=trusting_driscoll, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:47 compute-0 podman[75057]: 2026-01-31 07:04:47.130585448 +0000 UTC m=+0.137956471 container start b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e (image=quay.io/ceph/ceph:v18, name=trusting_driscoll, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:04:47 compute-0 podman[75057]: 2026-01-31 07:04:47.134963836 +0000 UTC m=+0.142334909 container attach b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e (image=quay.io/ceph/ceph:v18, name=trusting_driscoll, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:04:47 compute-0 ceph-mgr[74791]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 07:04:47 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'progress'
Jan 31 07:04:47 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:47.212+0000 7fe90d394140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 07:04:47 compute-0 ceph-mgr[74791]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 07:04:47 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'prometheus'
Jan 31 07:04:47 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:47.437+0000 7fe90d394140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 07:04:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97720984' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]: 
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]: {
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "health": {
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "status": "HEALTH_OK",
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "checks": {},
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "mutes": []
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     },
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "election_epoch": 5,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "quorum": [
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         0
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     ],
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "quorum_names": [
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "compute-0"
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     ],
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "quorum_age": 13,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "monmap": {
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "epoch": 1,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "min_mon_release_name": "reef",
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_mons": 1
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     },
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "osdmap": {
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "epoch": 1,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_osds": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_up_osds": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "osd_up_since": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_in_osds": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "osd_in_since": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_remapped_pgs": 0
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     },
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "pgmap": {
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "pgs_by_state": [],
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_pgs": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_pools": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_objects": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "data_bytes": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "bytes_used": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "bytes_avail": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "bytes_total": 0
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     },
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "fsmap": {
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "epoch": 1,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "by_rank": [],
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "up:standby": 0
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     },
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "mgrmap": {
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "available": false,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "num_standbys": 0,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "modules": [
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:             "iostat",
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:             "nfs",
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:             "restful"
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         ],
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "services": {}
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     },
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "servicemap": {
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "epoch": 1,
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:         "services": {}
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     },
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]:     "progress_events": {}
Jan 31 07:04:47 compute-0 trusting_driscoll[75074]: }
Jan 31 07:04:47 compute-0 systemd[1]: libpod-b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e.scope: Deactivated successfully.
Jan 31 07:04:47 compute-0 podman[75057]: 2026-01-31 07:04:47.517704423 +0000 UTC m=+0.525075416 container died b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e (image=quay.io/ceph/ceph:v18, name=trusting_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:04:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-40ade466b9c41526688d7fc10c06db57ccb19657b7a8114c37037af1febed2ac-merged.mount: Deactivated successfully.
Jan 31 07:04:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/97720984' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:47 compute-0 podman[75057]: 2026-01-31 07:04:47.565465997 +0000 UTC m=+0.572837030 container remove b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e (image=quay.io/ceph/ceph:v18, name=trusting_driscoll, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 07:04:47 compute-0 systemd[1]: libpod-conmon-b81973118ccd84d4dccc069f19ad3af11aa2f89c026758587c0cb8c87be0ef1e.scope: Deactivated successfully.
Jan 31 07:04:48 compute-0 ceph-mgr[74791]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 07:04:48 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:48.518+0000 7fe90d394140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 07:04:48 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'rbd_support'
Jan 31 07:04:48 compute-0 ceph-mgr[74791]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 07:04:48 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:48.825+0000 7fe90d394140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 07:04:48 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'restful'
Jan 31 07:04:49 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'rgw'
Jan 31 07:04:49 compute-0 podman[75112]: 2026-01-31 07:04:49.636357217 +0000 UTC m=+0.055707810 container create 10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc (image=quay.io/ceph/ceph:v18, name=pensive_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:04:49 compute-0 systemd[1]: Started libpod-conmon-10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc.scope.
Jan 31 07:04:49 compute-0 podman[75112]: 2026-01-31 07:04:49.59824215 +0000 UTC m=+0.017592723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3b191d54948565bbc1b00b56bddc2ff65ad3b2a3887553fb425eab24922a1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3b191d54948565bbc1b00b56bddc2ff65ad3b2a3887553fb425eab24922a1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d3b191d54948565bbc1b00b56bddc2ff65ad3b2a3887553fb425eab24922a1a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:49 compute-0 podman[75112]: 2026-01-31 07:04:49.729353062 +0000 UTC m=+0.148703675 container init 10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc (image=quay.io/ceph/ceph:v18, name=pensive_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:04:49 compute-0 podman[75112]: 2026-01-31 07:04:49.734577061 +0000 UTC m=+0.153927624 container start 10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc (image=quay.io/ceph/ceph:v18, name=pensive_perlman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:04:49 compute-0 podman[75112]: 2026-01-31 07:04:49.73900567 +0000 UTC m=+0.158356243 container attach 10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc (image=quay.io/ceph/ceph:v18, name=pensive_perlman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:04:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307578861' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:50 compute-0 pensive_perlman[75128]: 
Jan 31 07:04:50 compute-0 pensive_perlman[75128]: {
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "health": {
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "status": "HEALTH_OK",
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "checks": {},
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "mutes": []
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     },
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "election_epoch": 5,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "quorum": [
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         0
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     ],
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "quorum_names": [
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "compute-0"
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     ],
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "quorum_age": 15,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "monmap": {
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "epoch": 1,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "min_mon_release_name": "reef",
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_mons": 1
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     },
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "osdmap": {
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "epoch": 1,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_osds": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_up_osds": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "osd_up_since": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_in_osds": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "osd_in_since": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_remapped_pgs": 0
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     },
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "pgmap": {
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "pgs_by_state": [],
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_pgs": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_pools": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_objects": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "data_bytes": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "bytes_used": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "bytes_avail": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "bytes_total": 0
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     },
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "fsmap": {
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "epoch": 1,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "by_rank": [],
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "up:standby": 0
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     },
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "mgrmap": {
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "available": false,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "num_standbys": 0,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "modules": [
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:             "iostat",
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:             "nfs",
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:             "restful"
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         ],
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "services": {}
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     },
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "servicemap": {
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "epoch": 1,
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:         "services": {}
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     },
Jan 31 07:04:50 compute-0 pensive_perlman[75128]:     "progress_events": {}
Jan 31 07:04:50 compute-0 pensive_perlman[75128]: }
Jan 31 07:04:50 compute-0 systemd[1]: libpod-10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc.scope: Deactivated successfully.
Jan 31 07:04:50 compute-0 podman[75112]: 2026-01-31 07:04:50.158311266 +0000 UTC m=+0.577661819 container died 10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc (image=quay.io/ceph/ceph:v18, name=pensive_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d3b191d54948565bbc1b00b56bddc2ff65ad3b2a3887553fb425eab24922a1a-merged.mount: Deactivated successfully.
Jan 31 07:04:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/307578861' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:50 compute-0 podman[75112]: 2026-01-31 07:04:50.200480382 +0000 UTC m=+0.619830935 container remove 10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc (image=quay.io/ceph/ceph:v18, name=pensive_perlman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 07:04:50 compute-0 systemd[1]: libpod-conmon-10c9bf1d5f00fcfd488f19b44da353edad3fc72e78040c9f30db4693756471cc.scope: Deactivated successfully.
Jan 31 07:04:50 compute-0 ceph-mgr[74791]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 07:04:50 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:50.225+0000 7fe90d394140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 07:04:50 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'rook'
Jan 31 07:04:51 compute-0 sshd-session[75165]: Invalid user sol from 45.148.10.240 port 47512
Jan 31 07:04:51 compute-0 sshd-session[75165]: Connection closed by invalid user sol 45.148.10.240 port 47512 [preauth]
Jan 31 07:04:52 compute-0 ceph-mgr[74791]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 07:04:52 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:52.126+0000 7fe90d394140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 07:04:52 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'selftest'
Jan 31 07:04:52 compute-0 podman[75167]: 2026-01-31 07:04:52.274195262 +0000 UTC m=+0.049200681 container create 61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf (image=quay.io/ceph/ceph:v18, name=elegant_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:04:52 compute-0 systemd[1]: Started libpod-conmon-61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf.scope.
Jan 31 07:04:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1469f3bd26b84bea70f4f76274a4325740faed163091923109bad624fab25c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1469f3bd26b84bea70f4f76274a4325740faed163091923109bad624fab25c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1469f3bd26b84bea70f4f76274a4325740faed163091923109bad624fab25c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:52 compute-0 podman[75167]: 2026-01-31 07:04:52.255889872 +0000 UTC m=+0.030895301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:52 compute-0 podman[75167]: 2026-01-31 07:04:52.368450738 +0000 UTC m=+0.143456227 container init 61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf (image=quay.io/ceph/ceph:v18, name=elegant_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:52 compute-0 podman[75167]: 2026-01-31 07:04:52.375142723 +0000 UTC m=+0.150148172 container start 61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf (image=quay.io/ceph/ceph:v18, name=elegant_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:04:52 compute-0 podman[75167]: 2026-01-31 07:04:52.379057859 +0000 UTC m=+0.154063298 container attach 61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf (image=quay.io/ceph/ceph:v18, name=elegant_beaver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:04:52 compute-0 ceph-mgr[74791]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 07:04:52 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'snap_schedule'
Jan 31 07:04:52 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:52.379+0000 7fe90d394140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 07:04:52 compute-0 ceph-mgr[74791]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 07:04:52 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'stats'
Jan 31 07:04:52 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:52.646+0000 7fe90d394140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 07:04:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113239367' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:52 compute-0 elegant_beaver[75183]: 
Jan 31 07:04:52 compute-0 elegant_beaver[75183]: {
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "health": {
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "status": "HEALTH_OK",
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "checks": {},
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "mutes": []
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     },
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "election_epoch": 5,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "quorum": [
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         0
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     ],
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "quorum_names": [
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "compute-0"
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     ],
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "quorum_age": 18,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "monmap": {
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "epoch": 1,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "min_mon_release_name": "reef",
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_mons": 1
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     },
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "osdmap": {
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "epoch": 1,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_osds": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_up_osds": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "osd_up_since": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_in_osds": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "osd_in_since": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_remapped_pgs": 0
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     },
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "pgmap": {
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "pgs_by_state": [],
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_pgs": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_pools": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_objects": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "data_bytes": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "bytes_used": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "bytes_avail": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "bytes_total": 0
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     },
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "fsmap": {
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "epoch": 1,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "by_rank": [],
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "up:standby": 0
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     },
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "mgrmap": {
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "available": false,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "num_standbys": 0,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "modules": [
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:             "iostat",
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:             "nfs",
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:             "restful"
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         ],
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "services": {}
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     },
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "servicemap": {
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "epoch": 1,
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:         "services": {}
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     },
Jan 31 07:04:52 compute-0 elegant_beaver[75183]:     "progress_events": {}
Jan 31 07:04:52 compute-0 elegant_beaver[75183]: }
Jan 31 07:04:52 compute-0 systemd[1]: libpod-61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf.scope: Deactivated successfully.
Jan 31 07:04:52 compute-0 podman[75167]: 2026-01-31 07:04:52.762906743 +0000 UTC m=+0.537912242 container died 61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf (image=quay.io/ceph/ceph:v18, name=elegant_beaver, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd1469f3bd26b84bea70f4f76274a4325740faed163091923109bad624fab25c-merged.mount: Deactivated successfully.
Jan 31 07:04:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4113239367' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:52 compute-0 podman[75167]: 2026-01-31 07:04:52.808716579 +0000 UTC m=+0.583721988 container remove 61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf (image=quay.io/ceph/ceph:v18, name=elegant_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:04:52 compute-0 systemd[1]: libpod-conmon-61ff7b900ede83508adab7b400d3f78cc92c741fbd14ee96bdb52441ad03a8cf.scope: Deactivated successfully.
Jan 31 07:04:52 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'status'
Jan 31 07:04:53 compute-0 ceph-mgr[74791]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 07:04:53 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'telegraf'
Jan 31 07:04:53 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:53.176+0000 7fe90d394140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 07:04:53 compute-0 ceph-mgr[74791]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 07:04:53 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:53.398+0000 7fe90d394140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 07:04:53 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'telemetry'
Jan 31 07:04:53 compute-0 ceph-mgr[74791]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 07:04:53 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 07:04:53 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:53.980+0000 7fe90d394140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 07:04:54 compute-0 ceph-mgr[74791]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 07:04:54 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'volumes'
Jan 31 07:04:54 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:54.657+0000 7fe90d394140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 07:04:54 compute-0 podman[75221]: 2026-01-31 07:04:54.882388528 +0000 UTC m=+0.046329231 container create ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648 (image=quay.io/ceph/ceph:v18, name=objective_carson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:54 compute-0 systemd[1]: Started libpod-conmon-ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648.scope.
Jan 31 07:04:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6807ee809ada582c3e2a3bd09b6aa22d3789f7bbaae84d2d041a88dc132f40e3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6807ee809ada582c3e2a3bd09b6aa22d3789f7bbaae84d2d041a88dc132f40e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6807ee809ada582c3e2a3bd09b6aa22d3789f7bbaae84d2d041a88dc132f40e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:54 compute-0 podman[75221]: 2026-01-31 07:04:54.866447765 +0000 UTC m=+0.030388508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:54 compute-0 podman[75221]: 2026-01-31 07:04:54.973366643 +0000 UTC m=+0.137307436 container init ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648 (image=quay.io/ceph/ceph:v18, name=objective_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:54 compute-0 podman[75221]: 2026-01-31 07:04:54.978462119 +0000 UTC m=+0.142402832 container start ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648 (image=quay.io/ceph/ceph:v18, name=objective_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:04:54 compute-0 podman[75221]: 2026-01-31 07:04:54.982903908 +0000 UTC m=+0.146844641 container attach ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648 (image=quay.io/ceph/ceph:v18, name=objective_carson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/564252192' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:55 compute-0 objective_carson[75238]: 
Jan 31 07:04:55 compute-0 objective_carson[75238]: {
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "health": {
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "status": "HEALTH_OK",
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "checks": {},
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "mutes": []
Jan 31 07:04:55 compute-0 objective_carson[75238]:     },
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "election_epoch": 5,
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "quorum": [
Jan 31 07:04:55 compute-0 objective_carson[75238]:         0
Jan 31 07:04:55 compute-0 objective_carson[75238]:     ],
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "quorum_names": [
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "compute-0"
Jan 31 07:04:55 compute-0 objective_carson[75238]:     ],
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "quorum_age": 20,
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "monmap": {
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "epoch": 1,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "min_mon_release_name": "reef",
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_mons": 1
Jan 31 07:04:55 compute-0 objective_carson[75238]:     },
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "osdmap": {
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "epoch": 1,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_osds": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_up_osds": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "osd_up_since": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_in_osds": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "osd_in_since": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_remapped_pgs": 0
Jan 31 07:04:55 compute-0 objective_carson[75238]:     },
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "pgmap": {
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "pgs_by_state": [],
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_pgs": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_pools": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_objects": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "data_bytes": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "bytes_used": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "bytes_avail": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "bytes_total": 0
Jan 31 07:04:55 compute-0 objective_carson[75238]:     },
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "fsmap": {
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "epoch": 1,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "by_rank": [],
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "up:standby": 0
Jan 31 07:04:55 compute-0 objective_carson[75238]:     },
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "mgrmap": {
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "available": false,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "num_standbys": 0,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "modules": [
Jan 31 07:04:55 compute-0 objective_carson[75238]:             "iostat",
Jan 31 07:04:55 compute-0 objective_carson[75238]:             "nfs",
Jan 31 07:04:55 compute-0 objective_carson[75238]:             "restful"
Jan 31 07:04:55 compute-0 objective_carson[75238]:         ],
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "services": {}
Jan 31 07:04:55 compute-0 objective_carson[75238]:     },
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "servicemap": {
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "epoch": 1,
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:55 compute-0 objective_carson[75238]:         "services": {}
Jan 31 07:04:55 compute-0 objective_carson[75238]:     },
Jan 31 07:04:55 compute-0 objective_carson[75238]:     "progress_events": {}
Jan 31 07:04:55 compute-0 objective_carson[75238]: }
Jan 31 07:04:55 compute-0 systemd[1]: libpod-ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648.scope: Deactivated successfully.
Jan 31 07:04:55 compute-0 podman[75221]: 2026-01-31 07:04:55.346142976 +0000 UTC m=+0.510083689 container died ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648 (image=quay.io/ceph/ceph:v18, name=objective_carson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6807ee809ada582c3e2a3bd09b6aa22d3789f7bbaae84d2d041a88dc132f40e3-merged.mount: Deactivated successfully.
Jan 31 07:04:55 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:55.373+0000 7fe90d394140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'zabbix'
Jan 31 07:04:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/564252192' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:55 compute-0 podman[75221]: 2026-01-31 07:04:55.395207912 +0000 UTC m=+0.559148625 container remove ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648 (image=quay.io/ceph/ceph:v18, name=objective_carson, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:04:55 compute-0 systemd[1]: libpod-conmon-ce99743198a8ab200b44ac955ae1e07177eb8116e910a1188325057a39081648.scope: Deactivated successfully.
Jan 31 07:04:55 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:04:55.588+0000 7fe90d394140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: ms_deliver_dispatch: unhandled message 0x55e5fbc38f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hhuoua
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr handle_mgr_map Activating!
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.hhuoua(active, starting, since 0.00906718s)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr handle_mgr_map I am now activating
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hhuoua", "id": "compute-0.hhuoua"} v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hhuoua", "id": "compute-0.hhuoua"}]: dispatch
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Manager daemon compute-0.hhuoua is now available
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: balancer
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [balancer INFO root] Starting
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: crash
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:04:55
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [balancer INFO root] No pools available
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: devicehealth
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: iostat
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Starting
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: nfs
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: orchestrator
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: pg_autoscaler
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: progress
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [progress INFO root] Loading...
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [progress INFO root] No stored events to load
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [progress INFO root] Loaded [] historic events
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] recovery thread starting
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] starting setup
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: rbd_support
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: restful
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/mirror_snapshot_schedule"} v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/mirror_snapshot_schedule"}]: dispatch
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [restful INFO root] server_addr: :: server_port: 8003
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [restful WARNING root] server not running: no certificate configured
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: status
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: telemetry
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] PerfHandler: starting
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TaskHandler: starting
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/trash_purge_schedule"} v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/trash_purge_schedule"}]: dispatch
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' 
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: [rbd_support INFO root] setup complete
Jan 31 07:04:55 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: volumes
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' 
Jan 31 07:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 31 07:04:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' 
Jan 31 07:04:56 compute-0 ceph-mon[74496]: Activating manager daemon compute-0.hhuoua
Jan 31 07:04:56 compute-0 ceph-mon[74496]: mgrmap e2: compute-0.hhuoua(active, starting, since 0.00906718s)
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hhuoua", "id": "compute-0.hhuoua"}]: dispatch
Jan 31 07:04:56 compute-0 ceph-mon[74496]: Manager daemon compute-0.hhuoua is now available
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/mirror_snapshot_schedule"}]: dispatch
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/trash_purge_schedule"}]: dispatch
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' 
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' 
Jan 31 07:04:56 compute-0 ceph-mon[74496]: from='mgr.14102 192.168.122.100:0/2875278805' entity='mgr.compute-0.hhuoua' 
Jan 31 07:04:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.hhuoua(active, since 1.03265s)
Jan 31 07:04:57 compute-0 podman[75357]: 2026-01-31 07:04:57.468946862 +0000 UTC m=+0.050676826 container create 6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4 (image=quay.io/ceph/ceph:v18, name=magical_jennings, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Jan 31 07:04:57 compute-0 systemd[1]: Started libpod-conmon-6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4.scope.
Jan 31 07:04:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc805210580dda0c869912917466e3f7d5ad271e9ba9c7fc4c572cd30bafc3a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc805210580dda0c869912917466e3f7d5ad271e9ba9c7fc4c572cd30bafc3a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc805210580dda0c869912917466e3f7d5ad271e9ba9c7fc4c572cd30bafc3a5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:57 compute-0 podman[75357]: 2026-01-31 07:04:57.449303769 +0000 UTC m=+0.031033743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:57 compute-0 podman[75357]: 2026-01-31 07:04:57.560937943 +0000 UTC m=+0.142667887 container init 6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4 (image=quay.io/ceph/ceph:v18, name=magical_jennings, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:04:57 compute-0 podman[75357]: 2026-01-31 07:04:57.56492858 +0000 UTC m=+0.146658504 container start 6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4 (image=quay.io/ceph/ceph:v18, name=magical_jennings, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:04:57 compute-0 podman[75357]: 2026-01-31 07:04:57.568348385 +0000 UTC m=+0.150078349 container attach 6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4 (image=quay.io/ceph/ceph:v18, name=magical_jennings, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:04:57 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:04:57 compute-0 ceph-mon[74496]: mgrmap e3: compute-0.hhuoua(active, since 1.03265s)
Jan 31 07:04:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.hhuoua(active, since 2s)
Jan 31 07:04:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 07:04:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207135271' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:58 compute-0 magical_jennings[75373]: 
Jan 31 07:04:58 compute-0 magical_jennings[75373]: {
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "health": {
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "status": "HEALTH_OK",
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "checks": {},
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "mutes": []
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     },
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "election_epoch": 5,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "quorum": [
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         0
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     ],
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "quorum_names": [
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "compute-0"
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     ],
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "quorum_age": 23,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "monmap": {
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "epoch": 1,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "min_mon_release_name": "reef",
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_mons": 1
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     },
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "osdmap": {
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "epoch": 1,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_osds": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_up_osds": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "osd_up_since": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_in_osds": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "osd_in_since": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_remapped_pgs": 0
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     },
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "pgmap": {
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "pgs_by_state": [],
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_pgs": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_pools": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_objects": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "data_bytes": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "bytes_used": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "bytes_avail": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "bytes_total": 0
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     },
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "fsmap": {
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "epoch": 1,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "by_rank": [],
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "up:standby": 0
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     },
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "mgrmap": {
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "available": true,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "num_standbys": 0,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "modules": [
Jan 31 07:04:58 compute-0 magical_jennings[75373]:             "iostat",
Jan 31 07:04:58 compute-0 magical_jennings[75373]:             "nfs",
Jan 31 07:04:58 compute-0 magical_jennings[75373]:             "restful"
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         ],
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "services": {}
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     },
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "servicemap": {
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "epoch": 1,
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "modified": "2026-01-31T07:04:31.861311+0000",
Jan 31 07:04:58 compute-0 magical_jennings[75373]:         "services": {}
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     },
Jan 31 07:04:58 compute-0 magical_jennings[75373]:     "progress_events": {}
Jan 31 07:04:58 compute-0 magical_jennings[75373]: }
Jan 31 07:04:58 compute-0 systemd[1]: libpod-6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4.scope: Deactivated successfully.
Jan 31 07:04:58 compute-0 podman[75399]: 2026-01-31 07:04:58.290682339 +0000 UTC m=+0.034139891 container died 6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4 (image=quay.io/ceph/ceph:v18, name=magical_jennings, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc805210580dda0c869912917466e3f7d5ad271e9ba9c7fc4c572cd30bafc3a5-merged.mount: Deactivated successfully.
Jan 31 07:04:58 compute-0 podman[75399]: 2026-01-31 07:04:58.336225968 +0000 UTC m=+0.079683480 container remove 6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4 (image=quay.io/ceph/ceph:v18, name=magical_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:58 compute-0 systemd[1]: libpod-conmon-6fd4ce93953c4a2875359a7a32b679dccb34d23ad1950a9cc04d727651afd3d4.scope: Deactivated successfully.
Jan 31 07:04:58 compute-0 podman[75414]: 2026-01-31 07:04:58.423424752 +0000 UTC m=+0.056761187 container create 48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880 (image=quay.io/ceph/ceph:v18, name=keen_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:04:58 compute-0 systemd[1]: Started libpod-conmon-48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880.scope.
Jan 31 07:04:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18494f8acc65d43357ef74ac1294ecb5b66a109acb94ac12a0868f890f0da195/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18494f8acc65d43357ef74ac1294ecb5b66a109acb94ac12a0868f890f0da195/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18494f8acc65d43357ef74ac1294ecb5b66a109acb94ac12a0868f890f0da195/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18494f8acc65d43357ef74ac1294ecb5b66a109acb94ac12a0868f890f0da195/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:58 compute-0 podman[75414]: 2026-01-31 07:04:58.405617074 +0000 UTC m=+0.038953539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:58 compute-0 podman[75414]: 2026-01-31 07:04:58.502821133 +0000 UTC m=+0.136157588 container init 48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880 (image=quay.io/ceph/ceph:v18, name=keen_haslett, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:04:58 compute-0 podman[75414]: 2026-01-31 07:04:58.507609911 +0000 UTC m=+0.140946346 container start 48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880 (image=quay.io/ceph/ceph:v18, name=keen_haslett, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:04:58 compute-0 podman[75414]: 2026-01-31 07:04:58.511177738 +0000 UTC m=+0.144514193 container attach 48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880 (image=quay.io/ceph/ceph:v18, name=keen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:04:58 compute-0 ceph-mon[74496]: mgrmap e4: compute-0.hhuoua(active, since 2s)
Jan 31 07:04:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/207135271' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 07:04:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 07:04:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2020428441' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 07:04:59 compute-0 systemd[1]: libpod-48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880.scope: Deactivated successfully.
Jan 31 07:04:59 compute-0 podman[75414]: 2026-01-31 07:04:59.084883949 +0000 UTC m=+0.718220424 container died 48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880 (image=quay.io/ceph/ceph:v18, name=keen_haslett, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:04:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-18494f8acc65d43357ef74ac1294ecb5b66a109acb94ac12a0868f890f0da195-merged.mount: Deactivated successfully.
Jan 31 07:04:59 compute-0 podman[75414]: 2026-01-31 07:04:59.172539014 +0000 UTC m=+0.805875449 container remove 48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880 (image=quay.io/ceph/ceph:v18, name=keen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:04:59 compute-0 systemd[1]: libpod-conmon-48a544cef4973f62036449844c182d6e691d088736c7486ec9e2002181b41880.scope: Deactivated successfully.
Jan 31 07:04:59 compute-0 podman[75471]: 2026-01-31 07:04:59.237790017 +0000 UTC m=+0.044581857 container create 763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63 (image=quay.io/ceph/ceph:v18, name=quizzical_blackwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:04:59 compute-0 systemd[1]: Started libpod-conmon-763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63.scope.
Jan 31 07:04:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaad54c60ae6a7cd0cca36d24668389ddb30ae66bd11b9780665446ba6e30a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaad54c60ae6a7cd0cca36d24668389ddb30ae66bd11b9780665446ba6e30a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaad54c60ae6a7cd0cca36d24668389ddb30ae66bd11b9780665446ba6e30a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:04:59 compute-0 podman[75471]: 2026-01-31 07:04:59.315062806 +0000 UTC m=+0.121854676 container init 763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63 (image=quay.io/ceph/ceph:v18, name=quizzical_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:04:59 compute-0 podman[75471]: 2026-01-31 07:04:59.21875669 +0000 UTC m=+0.025548550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:04:59 compute-0 podman[75471]: 2026-01-31 07:04:59.319145247 +0000 UTC m=+0.125937077 container start 763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63 (image=quay.io/ceph/ceph:v18, name=quizzical_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:04:59 compute-0 podman[75471]: 2026-01-31 07:04:59.322776506 +0000 UTC m=+0.129568426 container attach 763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63 (image=quay.io/ceph/ceph:v18, name=quizzical_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 07:04:59 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:04:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2020428441' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 07:04:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 31 07:04:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/582439267' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 31 07:05:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/582439267' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 31 07:05:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/582439267' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  1: '-n'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  2: 'mgr.compute-0.hhuoua'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  3: '-f'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  4: '--setuser'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  5: 'ceph'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  6: '--setgroup'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  7: 'ceph'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  8: '--default-log-to-file=false'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  9: '--default-log-to-journald=true'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr respawn  exe_path /proc/self/exe
Jan 31 07:05:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.hhuoua(active, since 5s)
Jan 31 07:05:00 compute-0 systemd[1]: libpod-763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63.scope: Deactivated successfully.
Jan 31 07:05:00 compute-0 podman[75514]: 2026-01-31 07:05:00.729766158 +0000 UTC m=+0.033347131 container died 763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63 (image=quay.io/ceph/ceph:v18, name=quizzical_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:00 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: ignoring --setuser ceph since I am not root
Jan 31 07:05:00 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: ignoring --setgroup ceph since I am not root
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: pidfile_write: ignore empty --pid-file
Jan 31 07:05:00 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'alerts'
Jan 31 07:05:01 compute-0 ceph-mgr[74791]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 07:05:01 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'balancer'
Jan 31 07:05:01 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:01.139+0000 7fb66730c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 07:05:01 compute-0 ceph-mgr[74791]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 07:05:01 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'cephadm'
Jan 31 07:05:01 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:01.392+0000 7fb66730c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 07:05:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/582439267' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 07:05:01 compute-0 ceph-mon[74496]: mgrmap e5: compute-0.hhuoua(active, since 5s)
Jan 31 07:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcaad54c60ae6a7cd0cca36d24668389ddb30ae66bd11b9780665446ba6e30a2-merged.mount: Deactivated successfully.
Jan 31 07:05:01 compute-0 podman[75514]: 2026-01-31 07:05:01.727537551 +0000 UTC m=+1.031118524 container remove 763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63 (image=quay.io/ceph/ceph:v18, name=quizzical_blackwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:01 compute-0 systemd[1]: libpod-conmon-763e8e5fe272fbfe873dfdb341977a923e3419a2b1f47b8354203368f47dfe63.scope: Deactivated successfully.
Jan 31 07:05:01 compute-0 podman[75553]: 2026-01-31 07:05:01.778577227 +0000 UTC m=+0.034615043 container create b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c (image=quay.io/ceph/ceph:v18, name=heuristic_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:01 compute-0 systemd[1]: Started libpod-conmon-b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c.scope.
Jan 31 07:05:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d9ff1d3ad8c5e0cc127f6aa16a3463976feb6d3c838badb3d8a51e427de24b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d9ff1d3ad8c5e0cc127f6aa16a3463976feb6d3c838badb3d8a51e427de24b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d9ff1d3ad8c5e0cc127f6aa16a3463976feb6d3c838badb3d8a51e427de24b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:01 compute-0 podman[75553]: 2026-01-31 07:05:01.859624249 +0000 UTC m=+0.115662095 container init b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c (image=quay.io/ceph/ceph:v18, name=heuristic_wilson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:01 compute-0 podman[75553]: 2026-01-31 07:05:01.764617443 +0000 UTC m=+0.020655289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:01 compute-0 podman[75553]: 2026-01-31 07:05:01.864144279 +0000 UTC m=+0.120182105 container start b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c (image=quay.io/ceph/ceph:v18, name=heuristic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:01 compute-0 podman[75553]: 2026-01-31 07:05:01.867569893 +0000 UTC m=+0.123607719 container attach b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c (image=quay.io/ceph/ceph:v18, name=heuristic_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 07:05:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2486160446' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 07:05:02 compute-0 heuristic_wilson[75569]: {
Jan 31 07:05:02 compute-0 heuristic_wilson[75569]:     "epoch": 5,
Jan 31 07:05:02 compute-0 heuristic_wilson[75569]:     "available": true,
Jan 31 07:05:02 compute-0 heuristic_wilson[75569]:     "active_name": "compute-0.hhuoua",
Jan 31 07:05:02 compute-0 heuristic_wilson[75569]:     "num_standby": 0
Jan 31 07:05:02 compute-0 heuristic_wilson[75569]: }
Jan 31 07:05:02 compute-0 systemd[1]: libpod-b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c.scope: Deactivated successfully.
Jan 31 07:05:02 compute-0 podman[75553]: 2026-01-31 07:05:02.422226597 +0000 UTC m=+0.678264423 container died b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c (image=quay.io/ceph/ceph:v18, name=heuristic_wilson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:05:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d9ff1d3ad8c5e0cc127f6aa16a3463976feb6d3c838badb3d8a51e427de24b-merged.mount: Deactivated successfully.
Jan 31 07:05:02 compute-0 podman[75553]: 2026-01-31 07:05:02.457369771 +0000 UTC m=+0.713407597 container remove b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c (image=quay.io/ceph/ceph:v18, name=heuristic_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:05:02 compute-0 systemd[1]: libpod-conmon-b71dfc38c4608710e1e13bc43e103f5d61c6ab60e0b4ed5bb4f25081b8b97d4c.scope: Deactivated successfully.
Jan 31 07:05:02 compute-0 podman[75608]: 2026-01-31 07:05:02.521959408 +0000 UTC m=+0.045548571 container create 749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92 (image=quay.io/ceph/ceph:v18, name=boring_chebyshev, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:05:02 compute-0 systemd[1]: Started libpod-conmon-749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92.scope.
Jan 31 07:05:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d84697ddb18c2b613e5f6950c6afa45679d79dbc80c346cf96c83c08debc58/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d84697ddb18c2b613e5f6950c6afa45679d79dbc80c346cf96c83c08debc58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d84697ddb18c2b613e5f6950c6afa45679d79dbc80c346cf96c83c08debc58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:02 compute-0 podman[75608]: 2026-01-31 07:05:02.501101895 +0000 UTC m=+0.024691108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:02 compute-0 podman[75608]: 2026-01-31 07:05:02.601214835 +0000 UTC m=+0.124804028 container init 749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92 (image=quay.io/ceph/ceph:v18, name=boring_chebyshev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:02 compute-0 podman[75608]: 2026-01-31 07:05:02.605248655 +0000 UTC m=+0.128837818 container start 749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92 (image=quay.io/ceph/ceph:v18, name=boring_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:05:02 compute-0 podman[75608]: 2026-01-31 07:05:02.608246558 +0000 UTC m=+0.131835751 container attach 749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92 (image=quay.io/ceph/ceph:v18, name=boring_chebyshev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:05:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2486160446' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 07:05:03 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'crash'
Jan 31 07:05:03 compute-0 ceph-mgr[74791]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 07:05:03 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'dashboard'
Jan 31 07:05:03 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:03.627+0000 7fb66730c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 07:05:05 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'devicehealth'
Jan 31 07:05:05 compute-0 ceph-mgr[74791]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 07:05:05 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 07:05:05 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:05.325+0000 7fb66730c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 07:05:05 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 07:05:05 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 07:05:05 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   from numpy import show_config as show_numpy_config
Jan 31 07:05:05 compute-0 ceph-mgr[74791]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 07:05:05 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:05.816+0000 7fb66730c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 07:05:05 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'influx'
Jan 31 07:05:06 compute-0 ceph-mgr[74791]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 07:05:06 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'insights'
Jan 31 07:05:06 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:06.046+0000 7fb66730c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 07:05:06 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'iostat'
Jan 31 07:05:06 compute-0 ceph-mgr[74791]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 07:05:06 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'k8sevents'
Jan 31 07:05:06 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:06.523+0000 7fb66730c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 07:05:08 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'localpool'
Jan 31 07:05:08 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 07:05:09 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'mirroring'
Jan 31 07:05:09 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'nfs'
Jan 31 07:05:10 compute-0 ceph-mgr[74791]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 07:05:10 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'orchestrator'
Jan 31 07:05:10 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:10.152+0000 7fb66730c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 07:05:10 compute-0 ceph-mgr[74791]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 07:05:10 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 07:05:10 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:10.781+0000 7fb66730c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 07:05:11 compute-0 ceph-mgr[74791]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 07:05:11 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'osd_support'
Jan 31 07:05:11 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:11.023+0000 7fb66730c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 07:05:11 compute-0 ceph-mgr[74791]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 07:05:11 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 07:05:11 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:11.255+0000 7fb66730c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 07:05:11 compute-0 ceph-mgr[74791]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 07:05:11 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'progress'
Jan 31 07:05:11 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:11.536+0000 7fb66730c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 07:05:11 compute-0 ceph-mgr[74791]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 07:05:11 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'prometheus'
Jan 31 07:05:11 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:11.800+0000 7fb66730c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 07:05:12 compute-0 ceph-mgr[74791]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 07:05:12 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'rbd_support'
Jan 31 07:05:12 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:12.845+0000 7fb66730c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 07:05:13 compute-0 ceph-mgr[74791]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 07:05:13 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'restful'
Jan 31 07:05:13 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:13.141+0000 7fb66730c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 07:05:13 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'rgw'
Jan 31 07:05:14 compute-0 ceph-mgr[74791]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 07:05:14 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'rook'
Jan 31 07:05:14 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:14.496+0000 7fb66730c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 07:05:16 compute-0 ceph-mgr[74791]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 07:05:16 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'selftest'
Jan 31 07:05:16 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:16.506+0000 7fb66730c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 07:05:16 compute-0 ceph-mgr[74791]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 07:05:16 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'snap_schedule'
Jan 31 07:05:16 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:16.759+0000 7fb66730c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 07:05:16 compute-0 ceph-mgr[74791]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 07:05:16 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'stats'
Jan 31 07:05:16 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:16.993+0000 7fb66730c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 07:05:17 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'status'
Jan 31 07:05:17 compute-0 ceph-mgr[74791]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 07:05:17 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'telegraf'
Jan 31 07:05:17 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:17.467+0000 7fb66730c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 07:05:17 compute-0 ceph-mgr[74791]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 07:05:17 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'telemetry'
Jan 31 07:05:17 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:17.684+0000 7fb66730c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 07:05:18 compute-0 ceph-mgr[74791]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 07:05:18 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 07:05:18 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:18.253+0000 7fb66730c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 07:05:18 compute-0 ceph-mgr[74791]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 07:05:18 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'volumes'
Jan 31 07:05:18 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:18.878+0000 7fb66730c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr[py] Loading python module 'zabbix'
Jan 31 07:05:19 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:19.592+0000 7fb66730c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 07:05:19 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:05:19.842+0000 7fb66730c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Active manager daemon compute-0.hhuoua restarted
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: ms_deliver_dispatch: unhandled message 0x562bc9fec420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.hhuoua
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr handle_mgr_map Activating!
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr handle_mgr_map I am now activating
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.hhuoua(active, starting, since 0.0246805s)
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.hhuoua", "id": "compute-0.hhuoua"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hhuoua", "id": "compute-0.hhuoua"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: balancer
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Starting
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Manager daemon compute-0.hhuoua is now available
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:05:19
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [balancer INFO root] No pools available
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: Active manager daemon compute-0.hhuoua restarted
Jan 31 07:05:19 compute-0 ceph-mon[74496]: Activating manager daemon compute-0.hhuoua
Jan 31 07:05:19 compute-0 ceph-mon[74496]: osdmap e2: 0 total, 0 up, 0 in
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mgrmap e6: compute-0.hhuoua(active, starting, since 0.0246805s)
Jan 31 07:05:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-0.hhuoua", "id": "compute-0.hhuoua"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mon[74496]: Manager daemon compute-0.hhuoua is now available
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: cephadm
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: crash
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: devicehealth
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: iostat
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Starting
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: nfs
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: orchestrator
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: pg_autoscaler
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: progress
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [progress INFO root] Loading...
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [progress INFO root] No stored events to load
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [progress INFO root] Loaded [] historic events
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] recovery thread starting
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] starting setup
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: rbd_support
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: restful
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/mirror_snapshot_schedule"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/mirror_snapshot_schedule"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: status
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [restful INFO root] server_addr: :: server_port: 8003
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: telemetry
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [restful WARNING root] server not running: no certificate configured
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] PerfHandler: starting
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TaskHandler: starting
Jan 31 07:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/trash_purge_schedule"} v 0) v1
Jan 31 07:05:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/trash_purge_schedule"}]: dispatch
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] setup complete
Jan 31 07:05:19 compute-0 ceph-mgr[74791]: mgr load Constructed class from module: volumes
Jan 31 07:05:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 31 07:05:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 31 07:05:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:20 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.hhuoua(active, since 1.07152s)
Jan 31 07:05:20 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 07:05:20 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 07:05:20 compute-0 boring_chebyshev[75624]: {
Jan 31 07:05:20 compute-0 boring_chebyshev[75624]:     "mgrmap_epoch": 7,
Jan 31 07:05:20 compute-0 boring_chebyshev[75624]:     "initialized": true
Jan 31 07:05:20 compute-0 boring_chebyshev[75624]: }
Jan 31 07:05:20 compute-0 ceph-mon[74496]: Found migration_current of "None". Setting to last migration.
Jan 31 07:05:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/mirror_snapshot_schedule"}]: dispatch
Jan 31 07:05:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.hhuoua/trash_purge_schedule"}]: dispatch
Jan 31 07:05:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:20 compute-0 ceph-mon[74496]: mgrmap e7: compute-0.hhuoua(active, since 1.07152s)
Jan 31 07:05:20 compute-0 systemd[1]: libpod-749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92.scope: Deactivated successfully.
Jan 31 07:05:20 compute-0 podman[75608]: 2026-01-31 07:05:20.948189206 +0000 UTC m=+18.471778369 container died 749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92 (image=quay.io/ceph/ceph:v18, name=boring_chebyshev, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-06d84697ddb18c2b613e5f6950c6afa45679d79dbc80c346cf96c83c08debc58-merged.mount: Deactivated successfully.
Jan 31 07:05:21 compute-0 podman[75608]: 2026-01-31 07:05:21.074235293 +0000 UTC m=+18.597824456 container remove 749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92 (image=quay.io/ceph/ceph:v18, name=boring_chebyshev, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:21 compute-0 systemd[1]: libpod-conmon-749753069b838881e969eb6969df09dd25b05dc43bcf0fea393047b569ea7a92.scope: Deactivated successfully.
Jan 31 07:05:21 compute-0 podman[75786]: 2026-01-31 07:05:21.164865322 +0000 UTC m=+0.069812091 container create c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e (image=quay.io/ceph/ceph:v18, name=pensive_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:21 compute-0 systemd[1]: Started libpod-conmon-c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e.scope.
Jan 31 07:05:21 compute-0 podman[75786]: 2026-01-31 07:05:21.126961494 +0000 UTC m=+0.031908283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e0540ee66f03104cbafd4140d02f76d4fd640737be0d4a09f851d3e7c11b21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e0540ee66f03104cbafd4140d02f76d4fd640737be0d4a09f851d3e7c11b21/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5e0540ee66f03104cbafd4140d02f76d4fd640737be0d4a09f851d3e7c11b21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:21 compute-0 podman[75786]: 2026-01-31 07:05:21.267603877 +0000 UTC m=+0.172550656 container init c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e (image=quay.io/ceph/ceph:v18, name=pensive_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:21 compute-0 podman[75786]: 2026-01-31 07:05:21.275572673 +0000 UTC m=+0.180519492 container start c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e (image=quay.io/ceph/ceph:v18, name=pensive_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:21 compute-0 podman[75786]: 2026-01-31 07:05:21.301541118 +0000 UTC m=+0.206487937 container attach c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e (image=quay.io/ceph/ceph:v18, name=pensive_ganguly, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:05:21] ENGINE Bus STARTING
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:05:21] ENGINE Bus STARTING
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:05:21] ENGINE Serving on https://192.168.122.100:7150
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:05:21] ENGINE Serving on https://192.168.122.100:7150
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:05:21] ENGINE Client ('192.168.122.100', 33900) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:05:21] ENGINE Client ('192.168.122.100', 33900) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:05:21] ENGINE Serving on http://192.168.122.100:8765
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:05:21] ENGINE Serving on http://192.168.122.100:8765
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:05:21] ENGINE Bus STARTED
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:05:21] ENGINE Bus STARTED
Jan 31 07:05:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 07:05:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 31 07:05:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 07:05:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:21 compute-0 systemd[1]: libpod-c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e.scope: Deactivated successfully.
Jan 31 07:05:21 compute-0 conmon[75802]: conmon c11ae47e2248dcaf1b0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e.scope/container/memory.events
Jan 31 07:05:21 compute-0 podman[75786]: 2026-01-31 07:05:21.818315942 +0000 UTC m=+0.723262711 container died c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e (image=quay.io/ceph/ceph:v18, name=pensive_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5e0540ee66f03104cbafd4140d02f76d4fd640737be0d4a09f851d3e7c11b21-merged.mount: Deactivated successfully.
Jan 31 07:05:21 compute-0 podman[75786]: 2026-01-31 07:05:21.869267019 +0000 UTC m=+0.774213788 container remove c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e (image=quay.io/ceph/ceph:v18, name=pensive_ganguly, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:21 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:21 compute-0 systemd[1]: libpod-conmon-c11ae47e2248dcaf1b0e9ec7f0dd674435057229c132fee3624c61dce73dac2e.scope: Deactivated successfully.
Jan 31 07:05:21 compute-0 podman[75863]: 2026-01-31 07:05:21.921323763 +0000 UTC m=+0.040414550 container create 31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d (image=quay.io/ceph/ceph:v18, name=vibrant_euclid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:05:21 compute-0 ceph-mon[74496]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 07:05:21 compute-0 ceph-mon[74496]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 07:05:21 compute-0 ceph-mon[74496]: [31/Jan/2026:07:05:21] ENGINE Bus STARTING
Jan 31 07:05:21 compute-0 ceph-mon[74496]: [31/Jan/2026:07:05:21] ENGINE Serving on https://192.168.122.100:7150
Jan 31 07:05:21 compute-0 ceph-mon[74496]: [31/Jan/2026:07:05:21] ENGINE Client ('192.168.122.100', 33900) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 07:05:21 compute-0 ceph-mon[74496]: [31/Jan/2026:07:05:21] ENGINE Serving on http://192.168.122.100:8765
Jan 31 07:05:21 compute-0 ceph-mon[74496]: [31/Jan/2026:07:05:21] ENGINE Bus STARTED
Jan 31 07:05:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:21 compute-0 systemd[1]: Started libpod-conmon-31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d.scope.
Jan 31 07:05:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d0dc4ecf1900aa9b582f04ad68745d7b168fb57d915953f5f7e707a571dd4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d0dc4ecf1900aa9b582f04ad68745d7b168fb57d915953f5f7e707a571dd4d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d0dc4ecf1900aa9b582f04ad68745d7b168fb57d915953f5f7e707a571dd4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:21 compute-0 podman[75863]: 2026-01-31 07:05:21.898995167 +0000 UTC m=+0.018085994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:22 compute-0 podman[75863]: 2026-01-31 07:05:22.017592881 +0000 UTC m=+0.136683688 container init 31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d (image=quay.io/ceph/ceph:v18, name=vibrant_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:22 compute-0 podman[75863]: 2026-01-31 07:05:22.023574377 +0000 UTC m=+0.142665164 container start 31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d (image=quay.io/ceph/ceph:v18, name=vibrant_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 31 07:05:22 compute-0 podman[75863]: 2026-01-31 07:05:22.027869962 +0000 UTC m=+0.146960839 container attach 31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d (image=quay.io/ceph/ceph:v18, name=vibrant_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:05:22 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 31 07:05:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:22 compute-0 ceph-mgr[74791]: [cephadm INFO root] Set ssh ssh_user
Jan 31 07:05:22 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 31 07:05:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 31 07:05:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:22 compute-0 ceph-mgr[74791]: [cephadm INFO root] Set ssh ssh_config
Jan 31 07:05:22 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 31 07:05:22 compute-0 ceph-mgr[74791]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 31 07:05:22 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 31 07:05:22 compute-0 vibrant_euclid[75879]: ssh user set to ceph-admin. sudo will be used
Jan 31 07:05:22 compute-0 systemd[1]: libpod-31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d.scope: Deactivated successfully.
Jan 31 07:05:22 compute-0 podman[75863]: 2026-01-31 07:05:22.578351691 +0000 UTC m=+0.697442518 container died 31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d (image=quay.io/ceph/ceph:v18, name=vibrant_euclid, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-64d0dc4ecf1900aa9b582f04ad68745d7b168fb57d915953f5f7e707a571dd4d-merged.mount: Deactivated successfully.
Jan 31 07:05:22 compute-0 podman[75863]: 2026-01-31 07:05:22.641995849 +0000 UTC m=+0.761086636 container remove 31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d (image=quay.io/ceph/ceph:v18, name=vibrant_euclid, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 07:05:22 compute-0 systemd[1]: libpod-conmon-31f23a0202eab4939e889b91b8381ef9de20c209634beb900cdc74d88977215d.scope: Deactivated successfully.
Jan 31 07:05:22 compute-0 podman[75919]: 2026-01-31 07:05:22.706274373 +0000 UTC m=+0.046563722 container create 774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32 (image=quay.io/ceph/ceph:v18, name=inspiring_chaum, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:05:22 compute-0 systemd[1]: Started libpod-conmon-774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32.scope.
Jan 31 07:05:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80834433c7028b402c42242dd73bab345ee7aa84b66891371bc1e5137e2144dd/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80834433c7028b402c42242dd73bab345ee7aa84b66891371bc1e5137e2144dd/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80834433c7028b402c42242dd73bab345ee7aa84b66891371bc1e5137e2144dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80834433c7028b402c42242dd73bab345ee7aa84b66891371bc1e5137e2144dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80834433c7028b402c42242dd73bab345ee7aa84b66891371bc1e5137e2144dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:22 compute-0 podman[75919]: 2026-01-31 07:05:22.774463872 +0000 UTC m=+0.114753251 container init 774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32 (image=quay.io/ceph/ceph:v18, name=inspiring_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:05:22 compute-0 podman[75919]: 2026-01-31 07:05:22.688497598 +0000 UTC m=+0.028786957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:22 compute-0 podman[75919]: 2026-01-31 07:05:22.785052532 +0000 UTC m=+0.125341871 container start 774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32 (image=quay.io/ceph/ceph:v18, name=inspiring_chaum, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:05:22 compute-0 podman[75919]: 2026-01-31 07:05:22.790629368 +0000 UTC m=+0.130918747 container attach 774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32 (image=quay.io/ceph/ceph:v18, name=inspiring_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:22 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.hhuoua(active, since 2s)
Jan 31 07:05:23 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 31 07:05:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:23 compute-0 ceph-mgr[74791]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 31 07:05:23 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 31 07:05:23 compute-0 ceph-mgr[74791]: [cephadm INFO root] Set ssh private key
Jan 31 07:05:23 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 31 07:05:23 compute-0 systemd[1]: libpod-774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32.scope: Deactivated successfully.
Jan 31 07:05:23 compute-0 podman[75919]: 2026-01-31 07:05:23.388788604 +0000 UTC m=+0.729077973 container died 774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32 (image=quay.io/ceph/ceph:v18, name=inspiring_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-80834433c7028b402c42242dd73bab345ee7aa84b66891371bc1e5137e2144dd-merged.mount: Deactivated successfully.
Jan 31 07:05:23 compute-0 podman[75919]: 2026-01-31 07:05:23.490693339 +0000 UTC m=+0.830982668 container remove 774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32 (image=quay.io/ceph/ceph:v18, name=inspiring_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 31 07:05:23 compute-0 systemd[1]: libpod-conmon-774c5c5269be6add923fa1e6ce6cdd5cc60581c127fa289c4c9d10a1e566ac32.scope: Deactivated successfully.
Jan 31 07:05:23 compute-0 podman[75974]: 2026-01-31 07:05:23.542014576 +0000 UTC m=+0.037393716 container create e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b (image=quay.io/ceph/ceph:v18, name=suspicious_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:05:23 compute-0 ceph-mon[74496]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:23 compute-0 ceph-mon[74496]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:23 compute-0 ceph-mon[74496]: Set ssh ssh_user
Jan 31 07:05:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:23 compute-0 ceph-mon[74496]: Set ssh ssh_config
Jan 31 07:05:23 compute-0 ceph-mon[74496]: ssh user set to ceph-admin. sudo will be used
Jan 31 07:05:23 compute-0 ceph-mon[74496]: mgrmap e8: compute-0.hhuoua(active, since 2s)
Jan 31 07:05:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:23 compute-0 systemd[1]: Started libpod-conmon-e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b.scope.
Jan 31 07:05:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554cf3a0c372bdd0675b79a0c3c83443723b07e3eb6c090d690b1fccf1a9f421/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554cf3a0c372bdd0675b79a0c3c83443723b07e3eb6c090d690b1fccf1a9f421/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554cf3a0c372bdd0675b79a0c3c83443723b07e3eb6c090d690b1fccf1a9f421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554cf3a0c372bdd0675b79a0c3c83443723b07e3eb6c090d690b1fccf1a9f421/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554cf3a0c372bdd0675b79a0c3c83443723b07e3eb6c090d690b1fccf1a9f421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:23 compute-0 podman[75974]: 2026-01-31 07:05:23.615569897 +0000 UTC m=+0.110949087 container init e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b (image=quay.io/ceph/ceph:v18, name=suspicious_joliot, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:05:23 compute-0 podman[75974]: 2026-01-31 07:05:23.523669397 +0000 UTC m=+0.019048557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:23 compute-0 podman[75974]: 2026-01-31 07:05:23.620121818 +0000 UTC m=+0.115500968 container start e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b (image=quay.io/ceph/ceph:v18, name=suspicious_joliot, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:23 compute-0 podman[75974]: 2026-01-31 07:05:23.623556312 +0000 UTC m=+0.118935502 container attach e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b (image=quay.io/ceph/ceph:v18, name=suspicious_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:05:23 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:24 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 31 07:05:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:24 compute-0 ceph-mgr[74791]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 31 07:05:24 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 31 07:05:24 compute-0 systemd[1]: libpod-e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b.scope: Deactivated successfully.
Jan 31 07:05:24 compute-0 podman[76017]: 2026-01-31 07:05:24.166854175 +0000 UTC m=+0.019408486 container died e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b (image=quay.io/ceph/ceph:v18, name=suspicious_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:05:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-554cf3a0c372bdd0675b79a0c3c83443723b07e3eb6c090d690b1fccf1a9f421-merged.mount: Deactivated successfully.
Jan 31 07:05:24 compute-0 podman[76017]: 2026-01-31 07:05:24.224547077 +0000 UTC m=+0.077101358 container remove e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b (image=quay.io/ceph/ceph:v18, name=suspicious_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:05:24 compute-0 systemd[1]: libpod-conmon-e67725d64bbb1f525be0bc70b05201f81dee4408a038b99a92eaf91581f8ff9b.scope: Deactivated successfully.
Jan 31 07:05:24 compute-0 podman[76031]: 2026-01-31 07:05:24.281445581 +0000 UTC m=+0.037971301 container create 41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40 (image=quay.io/ceph/ceph:v18, name=agitated_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:05:24 compute-0 systemd[1]: Started libpod-conmon-41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40.scope.
Jan 31 07:05:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a0372a2a4baf0b6b385198335695ae5413a19fbbaed1645a9bdea7741f28df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a0372a2a4baf0b6b385198335695ae5413a19fbbaed1645a9bdea7741f28df/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a0372a2a4baf0b6b385198335695ae5413a19fbbaed1645a9bdea7741f28df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:24 compute-0 podman[76031]: 2026-01-31 07:05:24.349987329 +0000 UTC m=+0.106513089 container init 41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40 (image=quay.io/ceph/ceph:v18, name=agitated_haibt, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:05:24 compute-0 podman[76031]: 2026-01-31 07:05:24.353749031 +0000 UTC m=+0.110274751 container start 41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40 (image=quay.io/ceph/ceph:v18, name=agitated_haibt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:05:24 compute-0 podman[76031]: 2026-01-31 07:05:24.357466642 +0000 UTC m=+0.113992412 container attach 41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40 (image=quay.io/ceph/ceph:v18, name=agitated_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:05:24 compute-0 podman[76031]: 2026-01-31 07:05:24.263885921 +0000 UTC m=+0.020411671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920396 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:24 compute-0 ceph-mon[74496]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:24 compute-0 ceph-mon[74496]: Set ssh ssh_identity_key
Jan 31 07:05:24 compute-0 ceph-mon[74496]: Set ssh private key
Jan 31 07:05:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:24 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:24 compute-0 agitated_haibt[76048]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo1gQdKMYGowpo1uGvNIu6YC5WKyBNMj4sOvGmuT8BHqQ0leCpTT5r2tFzHcRkhu9fqP/1J+kuP35WczOajp0q2P0LBu9GWiFxzERwqRW4KOTcQNobCEGBpR6m8EiQMAKPnPy3UOzfqctBEORmdg8mu8P/b9WXwEKwokfcIXDRtW6fDZ5y0STQzWrNXjdBr62Yd2nd7aRW5FRq4A6wnSzT9akgQnbS2DTEOMiNqk+Qv3Hn5nyy1P4/6OBZFtz+L8J+Ob0dEf3W/0LgxYTskCuGl5PaV7mBQIsxzW6D5KLrEZfKHo5laj7HMSSSidL+V4VFCBA1sjDeEp04fbCWkPbH93C9v0tNEIs3Kax30Ci1qxIlWAfVwiGsgK0sJIwXS0QgdYIYBkV7UzmBYDuPPlbdE2klu3ApxJVvl/8sNJooRhKuxucRFQ0YqhWlHHXkujEVn30WaaMN7O8659PvpaXn2lhnPcIVkM21eyGeXVpFzM81XWxAGKURRAVkLiq+Pg8= zuul@controller
Jan 31 07:05:24 compute-0 systemd[1]: libpod-41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40.scope: Deactivated successfully.
Jan 31 07:05:24 compute-0 podman[76031]: 2026-01-31 07:05:24.873681502 +0000 UTC m=+0.630207262 container died 41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40 (image=quay.io/ceph/ceph:v18, name=agitated_haibt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-42a0372a2a4baf0b6b385198335695ae5413a19fbbaed1645a9bdea7741f28df-merged.mount: Deactivated successfully.
Jan 31 07:05:24 compute-0 podman[76031]: 2026-01-31 07:05:24.909680312 +0000 UTC m=+0.666206042 container remove 41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40 (image=quay.io/ceph/ceph:v18, name=agitated_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:05:24 compute-0 systemd[1]: libpod-conmon-41da6a28d34a717a4d57f57a031483fa14b59fafa6b91e22d32241497bf98d40.scope: Deactivated successfully.
Jan 31 07:05:24 compute-0 podman[76084]: 2026-01-31 07:05:24.956386757 +0000 UTC m=+0.033847200 container create 02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749 (image=quay.io/ceph/ceph:v18, name=competent_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:24 compute-0 systemd[1]: Started libpod-conmon-02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749.scope.
Jan 31 07:05:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbfc2de977e2d5f2a5474f4e9ee44ea41446c41f3098485783be36c56974c05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbfc2de977e2d5f2a5474f4e9ee44ea41446c41f3098485783be36c56974c05/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbfc2de977e2d5f2a5474f4e9ee44ea41446c41f3098485783be36c56974c05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:25 compute-0 podman[76084]: 2026-01-31 07:05:25.018367464 +0000 UTC m=+0.095827947 container init 02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749 (image=quay.io/ceph/ceph:v18, name=competent_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:05:25 compute-0 podman[76084]: 2026-01-31 07:05:25.024847833 +0000 UTC m=+0.102308276 container start 02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749 (image=quay.io/ceph/ceph:v18, name=competent_matsumoto, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:25 compute-0 podman[76084]: 2026-01-31 07:05:25.028057601 +0000 UTC m=+0.105518084 container attach 02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749 (image=quay.io/ceph/ceph:v18, name=competent_matsumoto, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:25 compute-0 podman[76084]: 2026-01-31 07:05:24.940255531 +0000 UTC m=+0.017715994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:25 compute-0 ceph-mon[74496]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:25 compute-0 ceph-mon[74496]: Set ssh ssh_identity_pub
Jan 31 07:05:25 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:25 compute-0 sshd-session[76126]: Accepted publickey for ceph-admin from 192.168.122.100 port 52892 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:25 compute-0 systemd-logind[816]: New session 21 of user ceph-admin.
Jan 31 07:05:25 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 07:05:25 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 07:05:25 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 07:05:25 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 31 07:05:25 compute-0 systemd[76130]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:25 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:25 compute-0 systemd[76130]: Queued start job for default target Main User Target.
Jan 31 07:05:25 compute-0 systemd[76130]: Created slice User Application Slice.
Jan 31 07:05:25 compute-0 systemd[76130]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:05:25 compute-0 systemd[76130]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:05:25 compute-0 systemd[76130]: Reached target Paths.
Jan 31 07:05:25 compute-0 systemd[76130]: Reached target Timers.
Jan 31 07:05:25 compute-0 systemd[76130]: Starting D-Bus User Message Bus Socket...
Jan 31 07:05:25 compute-0 systemd[76130]: Starting Create User's Volatile Files and Directories...
Jan 31 07:05:25 compute-0 systemd[76130]: Finished Create User's Volatile Files and Directories.
Jan 31 07:05:25 compute-0 sshd-session[76143]: Accepted publickey for ceph-admin from 192.168.122.100 port 52904 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:25 compute-0 systemd[76130]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:05:25 compute-0 systemd[76130]: Reached target Sockets.
Jan 31 07:05:25 compute-0 systemd[76130]: Reached target Basic System.
Jan 31 07:05:25 compute-0 systemd[76130]: Reached target Main User Target.
Jan 31 07:05:25 compute-0 systemd[76130]: Startup finished in 111ms.
Jan 31 07:05:25 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 31 07:05:25 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 31 07:05:25 compute-0 sshd-session[76126]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:25 compute-0 systemd-logind[816]: New session 23 of user ceph-admin.
Jan 31 07:05:25 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 07:05:25 compute-0 sshd-session[76143]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:26 compute-0 sudo[76150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:26 compute-0 sudo[76150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:26 compute-0 sudo[76150]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:26 compute-0 sudo[76175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:26 compute-0 sudo[76175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:26 compute-0 sudo[76175]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:26 compute-0 sshd-session[76200]: Accepted publickey for ceph-admin from 192.168.122.100 port 52920 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:26 compute-0 systemd-logind[816]: New session 24 of user ceph-admin.
Jan 31 07:05:26 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 07:05:26 compute-0 sshd-session[76200]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:26 compute-0 sudo[76204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:26 compute-0 sudo[76204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:26 compute-0 sudo[76204]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:26 compute-0 sudo[76229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 31 07:05:26 compute-0 sudo[76229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:26 compute-0 sudo[76229]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:26 compute-0 ceph-mon[74496]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:26 compute-0 ceph-mon[74496]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:26 compute-0 sshd-session[76254]: Accepted publickey for ceph-admin from 192.168.122.100 port 52926 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:26 compute-0 systemd-logind[816]: New session 25 of user ceph-admin.
Jan 31 07:05:26 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 07:05:26 compute-0 sshd-session[76254]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:26 compute-0 sudo[76258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:26 compute-0 sudo[76258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:26 compute-0 sudo[76258]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:26 compute-0 sudo[76283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 31 07:05:26 compute-0 sudo[76283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:26 compute-0 sudo[76283]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:26 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 31 07:05:26 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 31 07:05:27 compute-0 sshd-session[76308]: Accepted publickey for ceph-admin from 192.168.122.100 port 52934 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:27 compute-0 systemd-logind[816]: New session 26 of user ceph-admin.
Jan 31 07:05:27 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 07:05:27 compute-0 sshd-session[76308]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:27 compute-0 sudo[76312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:27 compute-0 sudo[76312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:27 compute-0 sudo[76312]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:27 compute-0 sudo[76337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:27 compute-0 sudo[76337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:27 compute-0 sudo[76337]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:27 compute-0 sshd-session[76362]: Accepted publickey for ceph-admin from 192.168.122.100 port 52942 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:27 compute-0 systemd-logind[816]: New session 27 of user ceph-admin.
Jan 31 07:05:27 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 07:05:27 compute-0 sshd-session[76362]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:27 compute-0 sudo[76366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:27 compute-0 sudo[76366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:27 compute-0 sudo[76366]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:27 compute-0 sudo[76391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:27 compute-0 sudo[76391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:27 compute-0 sudo[76391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:27 compute-0 ceph-mon[74496]: Deploying cephadm binary to compute-0
Jan 31 07:05:27 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:27 compute-0 sshd-session[76416]: Accepted publickey for ceph-admin from 192.168.122.100 port 52948 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:27 compute-0 systemd-logind[816]: New session 28 of user ceph-admin.
Jan 31 07:05:27 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 07:05:27 compute-0 sshd-session[76416]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:28 compute-0 sudo[76420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:28 compute-0 sudo[76420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:28 compute-0 sudo[76420]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:28 compute-0 sudo[76445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 31 07:05:28 compute-0 sudo[76445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:28 compute-0 sudo[76445]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:28 compute-0 sshd-session[76470]: Accepted publickey for ceph-admin from 192.168.122.100 port 52962 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:28 compute-0 systemd-logind[816]: New session 29 of user ceph-admin.
Jan 31 07:05:28 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 07:05:28 compute-0 sshd-session[76470]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:28 compute-0 sudo[76474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:28 compute-0 sudo[76474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:28 compute-0 sudo[76474]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:28 compute-0 sudo[76499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:28 compute-0 sudo[76499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:28 compute-0 sudo[76499]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:28 compute-0 sshd-session[76524]: Accepted publickey for ceph-admin from 192.168.122.100 port 52970 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:28 compute-0 systemd-logind[816]: New session 30 of user ceph-admin.
Jan 31 07:05:28 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 07:05:28 compute-0 sshd-session[76524]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:28 compute-0 sudo[76528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:28 compute-0 sudo[76528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:28 compute-0 sudo[76528]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:28 compute-0 sudo[76553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Jan 31 07:05:28 compute-0 sudo[76553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:28 compute-0 sudo[76553]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:29 compute-0 sshd-session[76578]: Accepted publickey for ceph-admin from 192.168.122.100 port 52980 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:29 compute-0 systemd-logind[816]: New session 31 of user ceph-admin.
Jan 31 07:05:29 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 07:05:29 compute-0 sshd-session[76578]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053007 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:29 compute-0 sshd-session[76605]: Accepted publickey for ceph-admin from 192.168.122.100 port 52994 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:29 compute-0 systemd-logind[816]: New session 32 of user ceph-admin.
Jan 31 07:05:29 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 07:05:29 compute-0 sshd-session[76605]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:29 compute-0 sudo[76609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:29 compute-0 sudo[76609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:29 compute-0 sudo[76609]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:29 compute-0 sudo[76634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Jan 31 07:05:29 compute-0 sudo[76634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:29 compute-0 sudo[76634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:29 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:30 compute-0 sshd-session[76659]: Accepted publickey for ceph-admin from 192.168.122.100 port 53010 ssh2: RSA SHA256:TuH35lNNH2Qzo+bS1OQ3cKDwd7uVGJr8RxC0AbQLNUg
Jan 31 07:05:30 compute-0 systemd-logind[816]: New session 33 of user ceph-admin.
Jan 31 07:05:30 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 31 07:05:30 compute-0 sshd-session[76659]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 07:05:30 compute-0 sudo[76663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:30 compute-0 sudo[76663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:30 compute-0 sudo[76663]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:30 compute-0 sudo[76688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 31 07:05:30 compute-0 sudo[76688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:30 compute-0 sudo[76688]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 07:05:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:30 compute-0 ceph-mgr[74791]: [cephadm INFO root] Added host compute-0
Jan 31 07:05:30 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 07:05:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 07:05:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:30 compute-0 competent_matsumoto[76100]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 07:05:30 compute-0 systemd[1]: libpod-02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749.scope: Deactivated successfully.
Jan 31 07:05:30 compute-0 podman[76084]: 2026-01-31 07:05:30.441705204 +0000 UTC m=+5.519165647 container died 02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749 (image=quay.io/ceph/ceph:v18, name=competent_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dbfc2de977e2d5f2a5474f4e9ee44ea41446c41f3098485783be36c56974c05-merged.mount: Deactivated successfully.
Jan 31 07:05:30 compute-0 sudo[76734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:30 compute-0 sudo[76734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:30 compute-0 sudo[76734]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:30 compute-0 podman[76084]: 2026-01-31 07:05:30.482964954 +0000 UTC m=+5.560425397 container remove 02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749 (image=quay.io/ceph/ceph:v18, name=competent_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:05:30 compute-0 systemd[1]: libpod-conmon-02e3454d9df936068a02b63069955da89ad2375af3745b902c6176c451f6f749.scope: Deactivated successfully.
Jan 31 07:05:30 compute-0 sudo[76773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:30 compute-0 sudo[76773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:30 compute-0 sudo[76773]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:30 compute-0 podman[76778]: 2026-01-31 07:05:30.529651457 +0000 UTC m=+0.034250139 container create 4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115 (image=quay.io/ceph/ceph:v18, name=serene_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:05:30 compute-0 systemd[1]: Started libpod-conmon-4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115.scope.
Jan 31 07:05:30 compute-0 sudo[76812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:30 compute-0 sudo[76812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:30 compute-0 sudo[76812]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e066b21f143fdd9020aeb5dd5e32e53343a5b57c110fc3581b86f2d76a0d3b02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e066b21f143fdd9020aeb5dd5e32e53343a5b57c110fc3581b86f2d76a0d3b02/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e066b21f143fdd9020aeb5dd5e32e53343a5b57c110fc3581b86f2d76a0d3b02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:30 compute-0 podman[76778]: 2026-01-31 07:05:30.585927725 +0000 UTC m=+0.090526447 container init 4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115 (image=quay.io/ceph/ceph:v18, name=serene_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:30 compute-0 podman[76778]: 2026-01-31 07:05:30.591992274 +0000 UTC m=+0.096590966 container start 4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115 (image=quay.io/ceph/ceph:v18, name=serene_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:30 compute-0 podman[76778]: 2026-01-31 07:05:30.595720435 +0000 UTC m=+0.100319177 container attach 4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115 (image=quay.io/ceph/ceph:v18, name=serene_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:05:30 compute-0 podman[76778]: 2026-01-31 07:05:30.51340397 +0000 UTC m=+0.018002672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:30 compute-0 sudo[76842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Jan 31 07:05:30 compute-0 sudo[76842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:30 compute-0 podman[76897]: 2026-01-31 07:05:30.837491935 +0000 UTC m=+0.037189421 container create eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe (image=quay.io/ceph/ceph:v18, name=laughing_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:05:30 compute-0 systemd[1]: Started libpod-conmon-eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe.scope.
Jan 31 07:05:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:30 compute-0 podman[76897]: 2026-01-31 07:05:30.908807331 +0000 UTC m=+0.108504837 container init eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe (image=quay.io/ceph/ceph:v18, name=laughing_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:05:30 compute-0 podman[76897]: 2026-01-31 07:05:30.91325029 +0000 UTC m=+0.112947796 container start eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe (image=quay.io/ceph/ceph:v18, name=laughing_northcutt, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:05:30 compute-0 podman[76897]: 2026-01-31 07:05:30.917811342 +0000 UTC m=+0.117508828 container attach eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe (image=quay.io/ceph/ceph:v18, name=laughing_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:30 compute-0 podman[76897]: 2026-01-31 07:05:30.823599535 +0000 UTC m=+0.023297041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:31 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:31 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 31 07:05:31 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 31 07:05:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 07:05:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:31 compute-0 serene_varahamihira[76838]: Scheduled mon update...
Jan 31 07:05:31 compute-0 systemd[1]: libpod-4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115.scope: Deactivated successfully.
Jan 31 07:05:31 compute-0 podman[76778]: 2026-01-31 07:05:31.150340795 +0000 UTC m=+0.654939487 container died 4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115 (image=quay.io/ceph/ceph:v18, name=serene_varahamihira, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e066b21f143fdd9020aeb5dd5e32e53343a5b57c110fc3581b86f2d76a0d3b02-merged.mount: Deactivated successfully.
Jan 31 07:05:31 compute-0 podman[76778]: 2026-01-31 07:05:31.185569108 +0000 UTC m=+0.690167790 container remove 4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115 (image=quay.io/ceph/ceph:v18, name=serene_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:05:31 compute-0 systemd[1]: libpod-conmon-4a7f81500d679ce30f5778dc44b31da61fd71bcef9acc9d94623b127ab895115.scope: Deactivated successfully.
Jan 31 07:05:31 compute-0 laughing_northcutt[76914]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 31 07:05:31 compute-0 systemd[1]: libpod-eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe.scope: Deactivated successfully.
Jan 31 07:05:31 compute-0 podman[76954]: 2026-01-31 07:05:31.248130399 +0000 UTC m=+0.044496430 container create 0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c (image=quay.io/ceph/ceph:v18, name=affectionate_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:31 compute-0 podman[76897]: 2026-01-31 07:05:31.253182323 +0000 UTC m=+0.452879809 container died eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe (image=quay.io/ceph/ceph:v18, name=laughing_northcutt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:31 compute-0 systemd[1]: Started libpod-conmon-0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c.scope.
Jan 31 07:05:31 compute-0 podman[76897]: 2026-01-31 07:05:31.290679831 +0000 UTC m=+0.490377337 container remove eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe (image=quay.io/ceph/ceph:v18, name=laughing_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:31 compute-0 systemd[1]: libpod-conmon-eb15f67f3ffbe79da680e1ecaee40abbb362677efa98b2e54dd4d3c079e0acbe.scope: Deactivated successfully.
Jan 31 07:05:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eedb0677bf7aa364d0d411d91f961180aca417740174d631abe2d8a3d8c6e718/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eedb0677bf7aa364d0d411d91f961180aca417740174d631abe2d8a3d8c6e718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eedb0677bf7aa364d0d411d91f961180aca417740174d631abe2d8a3d8c6e718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:31 compute-0 sudo[76842]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-0 podman[76954]: 2026-01-31 07:05:31.2289384 +0000 UTC m=+0.025304471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 31 07:05:31 compute-0 podman[76954]: 2026-01-31 07:05:31.329930922 +0000 UTC m=+0.126296993 container init 0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c (image=quay.io/ceph/ceph:v18, name=affectionate_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:05:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:31 compute-0 podman[76954]: 2026-01-31 07:05:31.336278427 +0000 UTC m=+0.132644458 container start 0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c (image=quay.io/ceph/ceph:v18, name=affectionate_jennings, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:31 compute-0 podman[76954]: 2026-01-31 07:05:31.340870661 +0000 UTC m=+0.137236742 container attach 0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c (image=quay.io/ceph/ceph:v18, name=affectionate_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:05:31 compute-0 sudo[76989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:31 compute-0 sudo[76989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:31 compute-0 sudo[76989]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-0 sudo[77014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:31 compute-0 sudo[77014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:31 compute-0 sudo[77014]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-0 ceph-mon[74496]: Added host compute-0
Jan 31 07:05:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:05:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:31 compute-0 sudo[77039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:31 compute-0 sudo[77039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a141ad04c65df5a2c337ab834a9698cab7abd90cc16f2171dcde6097464e2a2d-merged.mount: Deactivated successfully.
Jan 31 07:05:31 compute-0 sudo[77039]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-0 sudo[77064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:05:31 compute-0 sudo[77064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:31 compute-0 sudo[77064]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:31 compute-0 sudo[77128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:31 compute-0 sudo[77128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:31 compute-0 sudo[77128]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:31 compute-0 sudo[77153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:31 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 31 07:05:31 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 31 07:05:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 07:05:31 compute-0 sudo[77153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:31 compute-0 sudo[77153]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:31 compute-0 affectionate_jennings[76983]: Scheduled mgr update...
Jan 31 07:05:31 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:31 compute-0 systemd[1]: libpod-0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c.scope: Deactivated successfully.
Jan 31 07:05:31 compute-0 podman[76954]: 2026-01-31 07:05:31.882349608 +0000 UTC m=+0.678715669 container died 0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c (image=quay.io/ceph/ceph:v18, name=affectionate_jennings, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-eedb0677bf7aa364d0d411d91f961180aca417740174d631abe2d8a3d8c6e718-merged.mount: Deactivated successfully.
Jan 31 07:05:31 compute-0 sudo[77180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:31 compute-0 sudo[77180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:31 compute-0 sudo[77180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:31 compute-0 podman[76954]: 2026-01-31 07:05:31.923153948 +0000 UTC m=+0.719519989 container remove 0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c (image=quay.io/ceph/ceph:v18, name=affectionate_jennings, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:05:31 compute-0 systemd[1]: libpod-conmon-0f7d8840b8c1f11e2369c33fd170d7888c33dffde2331c885c64e250ea6f2b2c.scope: Deactivated successfully.
Jan 31 07:05:31 compute-0 sudo[77217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:05:31 compute-0 sudo[77217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:31 compute-0 podman[77218]: 2026-01-31 07:05:31.981114337 +0000 UTC m=+0.044625244 container create 42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa (image=quay.io/ceph/ceph:v18, name=agitated_albattani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:05:32 compute-0 systemd[1]: Started libpod-conmon-42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa.scope.
Jan 31 07:05:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341efdd94c461828e5162b93d1c82e5198c40f6153edd9b730e982f170c87bc8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341efdd94c461828e5162b93d1c82e5198c40f6153edd9b730e982f170c87bc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341efdd94c461828e5162b93d1c82e5198c40f6153edd9b730e982f170c87bc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:32 compute-0 podman[77218]: 2026-01-31 07:05:31.958415531 +0000 UTC m=+0.021926468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:32 compute-0 podman[77218]: 2026-01-31 07:05:32.069274905 +0000 UTC m=+0.132785852 container init 42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa (image=quay.io/ceph/ceph:v18, name=agitated_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:32 compute-0 podman[77218]: 2026-01-31 07:05:32.076885701 +0000 UTC m=+0.140396618 container start 42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa (image=quay.io/ceph/ceph:v18, name=agitated_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:32 compute-0 podman[77218]: 2026-01-31 07:05:32.092871903 +0000 UTC m=+0.156382830 container attach 42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa (image=quay.io/ceph/ceph:v18, name=agitated_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:05:32 compute-0 podman[77338]: 2026-01-31 07:05:32.385227142 +0000 UTC m=+0.059059678 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:05:32 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:32 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service crash spec with placement *
Jan 31 07:05:32 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 31 07:05:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 07:05:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:32 compute-0 agitated_albattani[77259]: Scheduled crash update...
Jan 31 07:05:32 compute-0 systemd[1]: libpod-42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa.scope: Deactivated successfully.
Jan 31 07:05:32 compute-0 podman[77218]: 2026-01-31 07:05:32.644845628 +0000 UTC m=+0.708356575 container died 42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa (image=quay.io/ceph/ceph:v18, name=agitated_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-341efdd94c461828e5162b93d1c82e5198c40f6153edd9b730e982f170c87bc8-merged.mount: Deactivated successfully.
Jan 31 07:05:32 compute-0 podman[77218]: 2026-01-31 07:05:32.687880942 +0000 UTC m=+0.751391879 container remove 42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa (image=quay.io/ceph/ceph:v18, name=agitated_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:05:32 compute-0 systemd[1]: libpod-conmon-42ab47cadbd1d9ae0230c6198f7d29384bc4a38610ff7c667c583626df9125fa.scope: Deactivated successfully.
Jan 31 07:05:32 compute-0 ceph-mon[74496]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:32 compute-0 ceph-mon[74496]: Saving service mon spec with placement count:5
Jan 31 07:05:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:32 compute-0 podman[77392]: 2026-01-31 07:05:32.755228281 +0000 UTC m=+0.046703765 container create 71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609 (image=quay.io/ceph/ceph:v18, name=hungry_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:32 compute-0 podman[77399]: 2026-01-31 07:05:32.775312732 +0000 UTC m=+0.054116035 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:05:32 compute-0 podman[77338]: 2026-01-31 07:05:32.781646668 +0000 UTC m=+0.455479184 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:32 compute-0 systemd[1]: Started libpod-conmon-71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609.scope.
Jan 31 07:05:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/328c2af0bd5cab101aa908a6d26a7b79cd69e85db8790b68e0b6277921168d12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/328c2af0bd5cab101aa908a6d26a7b79cd69e85db8790b68e0b6277921168d12/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/328c2af0bd5cab101aa908a6d26a7b79cd69e85db8790b68e0b6277921168d12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:32 compute-0 podman[77392]: 2026-01-31 07:05:32.742191102 +0000 UTC m=+0.033666606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:32 compute-0 podman[77392]: 2026-01-31 07:05:32.841269107 +0000 UTC m=+0.132744631 container init 71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609 (image=quay.io/ceph/ceph:v18, name=hungry_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:05:32 compute-0 podman[77392]: 2026-01-31 07:05:32.846112176 +0000 UTC m=+0.137587680 container start 71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609 (image=quay.io/ceph/ceph:v18, name=hungry_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:05:32 compute-0 podman[77392]: 2026-01-31 07:05:32.849584231 +0000 UTC m=+0.141059735 container attach 71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609 (image=quay.io/ceph/ceph:v18, name=hungry_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 31 07:05:32 compute-0 sudo[77217]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:32 compute-0 sudo[77443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:32 compute-0 sudo[77443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:32 compute-0 sudo[77443]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:32 compute-0 sudo[77468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:32 compute-0 sudo[77468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:32 compute-0 sudo[77468]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:33 compute-0 sudo[77493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:33 compute-0 sudo[77493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:33 compute-0 sudo[77493]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:33 compute-0 sudo[77518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:05:33 compute-0 sudo[77518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:33 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77573 (sysctl)
Jan 31 07:05:33 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 07:05:33 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 07:05:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 31 07:05:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2179474423' entity='client.admin' 
Jan 31 07:05:33 compute-0 systemd[1]: libpod-71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609.scope: Deactivated successfully.
Jan 31 07:05:33 compute-0 podman[77392]: 2026-01-31 07:05:33.410954526 +0000 UTC m=+0.702430030 container died 71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609 (image=quay.io/ceph/ceph:v18, name=hungry_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:05:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-328c2af0bd5cab101aa908a6d26a7b79cd69e85db8790b68e0b6277921168d12-merged.mount: Deactivated successfully.
Jan 31 07:05:33 compute-0 podman[77392]: 2026-01-31 07:05:33.459944406 +0000 UTC m=+0.751419930 container remove 71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609 (image=quay.io/ceph/ceph:v18, name=hungry_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:33 compute-0 systemd[1]: libpod-conmon-71384e82bcaa5e367a6196384f806f784ffba7ebd3773bc5833ae4f2c7f10609.scope: Deactivated successfully.
Jan 31 07:05:33 compute-0 sudo[77518]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:33 compute-0 podman[77610]: 2026-01-31 07:05:33.514371068 +0000 UTC m=+0.039805325 container create 156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8 (image=quay.io/ceph/ceph:v18, name=zealous_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:05:33 compute-0 systemd[1]: Started libpod-conmon-156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8.scope.
Jan 31 07:05:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041ecfa8565a989f45828e45654480f0e0a9f3e2b3168dfd3aea53452f2b2424/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041ecfa8565a989f45828e45654480f0e0a9f3e2b3168dfd3aea53452f2b2424/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041ecfa8565a989f45828e45654480f0e0a9f3e2b3168dfd3aea53452f2b2424/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:33 compute-0 sudo[77626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:33 compute-0 sudo[77626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:33 compute-0 sudo[77626]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:33 compute-0 podman[77610]: 2026-01-31 07:05:33.499779831 +0000 UTC m=+0.025214128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:33 compute-0 podman[77610]: 2026-01-31 07:05:33.597383961 +0000 UTC m=+0.122818298 container init 156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8 (image=quay.io/ceph/ceph:v18, name=zealous_goldwasser, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:33 compute-0 podman[77610]: 2026-01-31 07:05:33.602101087 +0000 UTC m=+0.127535344 container start 156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8 (image=quay.io/ceph/ceph:v18, name=zealous_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:33 compute-0 podman[77610]: 2026-01-31 07:05:33.606420212 +0000 UTC m=+0.131854479 container attach 156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8 (image=quay.io/ceph/ceph:v18, name=zealous_goldwasser, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:33 compute-0 sudo[77654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:33 compute-0 sudo[77654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:33 compute-0 sudo[77654]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:33 compute-0 sudo[77681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:33 compute-0 sudo[77681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:33 compute-0 sudo[77681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:33 compute-0 sudo[77706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 07:05:33 compute-0 sudo[77706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:33 compute-0 ceph-mon[74496]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:33 compute-0 ceph-mon[74496]: Saving service mgr spec with placement count:2
Jan 31 07:05:33 compute-0 ceph-mon[74496]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:33 compute-0 ceph-mon[74496]: Saving service crash spec with placement *
Jan 31 07:05:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2179474423' entity='client.admin' 
Jan 31 07:05:33 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:33 compute-0 sudo[77706]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:33 compute-0 sudo[77769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:33 compute-0 sudo[77769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:33 compute-0 sudo[77769]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:33 compute-0 sudo[77794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:33 compute-0 sudo[77794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:33 compute-0 sudo[77794]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:34 compute-0 sudo[77819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:34 compute-0 sudo[77819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:34 compute-0 sudo[77819]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:34 compute-0 sudo[77844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 07:05:34 compute-0 sudo[77844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:34 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 31 07:05:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:34 compute-0 systemd[1]: libpod-156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8.scope: Deactivated successfully.
Jan 31 07:05:34 compute-0 podman[77871]: 2026-01-31 07:05:34.154388228 +0000 UTC m=+0.026400837 container died 156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8 (image=quay.io/ceph/ceph:v18, name=zealous_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-041ecfa8565a989f45828e45654480f0e0a9f3e2b3168dfd3aea53452f2b2424-merged.mount: Deactivated successfully.
Jan 31 07:05:34 compute-0 podman[77871]: 2026-01-31 07:05:34.370926001 +0000 UTC m=+0.242938580 container remove 156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8 (image=quay.io/ceph/ceph:v18, name=zealous_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:05:34 compute-0 systemd[1]: libpod-conmon-156c8e847538e10e9f4fcef003dcd0acdee97fc6b76615a59a47863a0474f7a8.scope: Deactivated successfully.
Jan 31 07:05:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:34 compute-0 podman[77899]: 2026-01-31 07:05:34.422207396 +0000 UTC m=+0.035830518 container create f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb (image=quay.io/ceph/ceph:v18, name=angry_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:34 compute-0 systemd[1]: Started libpod-conmon-f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb.scope.
Jan 31 07:05:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c0f13c1967188d2b11670550b5c5265560cd16a5874c83c19f471ef7868ece/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c0f13c1967188d2b11670550b5c5265560cd16a5874c83c19f471ef7868ece/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c0f13c1967188d2b11670550b5c5265560cd16a5874c83c19f471ef7868ece/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:34 compute-0 podman[77899]: 2026-01-31 07:05:34.486770097 +0000 UTC m=+0.100393259 container init f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb (image=quay.io/ceph/ceph:v18, name=angry_ride, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:05:34 compute-0 podman[77899]: 2026-01-31 07:05:34.491351349 +0000 UTC m=+0.104974511 container start f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb (image=quay.io/ceph/ceph:v18, name=angry_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:34 compute-0 podman[77899]: 2026-01-31 07:05:34.49468643 +0000 UTC m=+0.108309612 container attach f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb (image=quay.io/ceph/ceph:v18, name=angry_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:34 compute-0 podman[77899]: 2026-01-31 07:05:34.404770459 +0000 UTC m=+0.018393611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:34 compute-0 podman[77946]: 2026-01-31 07:05:34.589887611 +0000 UTC m=+0.035726115 container create 12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:05:34 compute-0 systemd[1]: Started libpod-conmon-12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b.scope.
Jan 31 07:05:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:34 compute-0 podman[77946]: 2026-01-31 07:05:34.660177992 +0000 UTC m=+0.106016496 container init 12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 07:05:34 compute-0 podman[77946]: 2026-01-31 07:05:34.665961884 +0000 UTC m=+0.111800398 container start 12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:05:34 compute-0 friendly_rosalind[77962]: 167 167
Jan 31 07:05:34 compute-0 systemd[1]: libpod-12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b.scope: Deactivated successfully.
Jan 31 07:05:34 compute-0 podman[77946]: 2026-01-31 07:05:34.573593443 +0000 UTC m=+0.019431957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:05:34 compute-0 podman[77946]: 2026-01-31 07:05:34.670894325 +0000 UTC m=+0.116732839 container attach 12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:34 compute-0 podman[77946]: 2026-01-31 07:05:34.671377266 +0000 UTC m=+0.117215760 container died 12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e44722b3aa040b2cc7f3a7fe92ccaa67e97755fc8add5f00d5c530b17ae3ed67-merged.mount: Deactivated successfully.
Jan 31 07:05:34 compute-0 podman[77946]: 2026-01-31 07:05:34.708173188 +0000 UTC m=+0.154011682 container remove 12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:34 compute-0 systemd[1]: libpod-conmon-12cc6ef34fe5793231a6f692c9df441c4a585c63adca13608639f3523708858b.scope: Deactivated successfully.
Jan 31 07:05:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:34 compute-0 ceph-mon[74496]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:35 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 07:05:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:35 compute-0 ceph-mgr[74791]: [cephadm INFO root] Added label _admin to host compute-0
Jan 31 07:05:35 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 31 07:05:35 compute-0 angry_ride[77927]: Added label _admin to host compute-0
Jan 31 07:05:35 compute-0 systemd[1]: libpod-f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb.scope: Deactivated successfully.
Jan 31 07:05:35 compute-0 podman[77899]: 2026-01-31 07:05:35.030024199 +0000 UTC m=+0.643647331 container died f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb (image=quay.io/ceph/ceph:v18, name=angry_ride, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-81c0f13c1967188d2b11670550b5c5265560cd16a5874c83c19f471ef7868ece-merged.mount: Deactivated successfully.
Jan 31 07:05:35 compute-0 podman[77899]: 2026-01-31 07:05:35.062925024 +0000 UTC m=+0.676548156 container remove f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb (image=quay.io/ceph/ceph:v18, name=angry_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:05:35 compute-0 systemd[1]: libpod-conmon-f9e746de32c90f1b52f954ca2db20ad1b9b32ac9b6510ad8220feeed465e9ddb.scope: Deactivated successfully.
Jan 31 07:05:35 compute-0 podman[78014]: 2026-01-31 07:05:35.127347791 +0000 UTC m=+0.049859152 container create dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468 (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:35 compute-0 systemd[1]: Started libpod-conmon-dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468.scope.
Jan 31 07:05:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780b51b498fd5f5d293ff318823b7068c08bae40adcdf05943b010d4d5fcf870/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780b51b498fd5f5d293ff318823b7068c08bae40adcdf05943b010d4d5fcf870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/780b51b498fd5f5d293ff318823b7068c08bae40adcdf05943b010d4d5fcf870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:35 compute-0 podman[78014]: 2026-01-31 07:05:35.105744682 +0000 UTC m=+0.028256033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:35 compute-0 podman[78014]: 2026-01-31 07:05:35.210378164 +0000 UTC m=+0.132889505 container init dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468 (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:05:35 compute-0 podman[78014]: 2026-01-31 07:05:35.214404652 +0000 UTC m=+0.136915973 container start dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468 (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:05:35 compute-0 podman[78014]: 2026-01-31 07:05:35.218294078 +0000 UTC m=+0.140805399 container attach dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468 (image=quay.io/ceph/ceph:v18, name=busy_greider, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 31 07:05:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4254008443' entity='client.admin' 
Jan 31 07:05:35 compute-0 systemd[1]: libpod-dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468.scope: Deactivated successfully.
Jan 31 07:05:35 compute-0 conmon[78030]: conmon dd62797a1a1c9ebd7b8b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468.scope/container/memory.events
Jan 31 07:05:35 compute-0 podman[78014]: 2026-01-31 07:05:35.84122626 +0000 UTC m=+0.763737611 container died dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468 (image=quay.io/ceph/ceph:v18, name=busy_greider, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-780b51b498fd5f5d293ff318823b7068c08bae40adcdf05943b010d4d5fcf870-merged.mount: Deactivated successfully.
Jan 31 07:05:35 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:35 compute-0 podman[78014]: 2026-01-31 07:05:35.889476401 +0000 UTC m=+0.811987732 container remove dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468 (image=quay.io/ceph/ceph:v18, name=busy_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:05:35 compute-0 systemd[1]: libpod-conmon-dd62797a1a1c9ebd7b8b7380d4c2354165eb1230915decd6954c938a173ea468.scope: Deactivated successfully.
Jan 31 07:05:35 compute-0 podman[78069]: 2026-01-31 07:05:35.939991938 +0000 UTC m=+0.036921485 container create bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959 (image=quay.io/ceph/ceph:v18, name=frosty_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:35 compute-0 systemd[1]: Started libpod-conmon-bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959.scope.
Jan 31 07:05:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf23a604e067252725e46ab3cfc2c081f5401c80c29469b02d1ab24812aba27a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf23a604e067252725e46ab3cfc2c081f5401c80c29469b02d1ab24812aba27a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf23a604e067252725e46ab3cfc2c081f5401c80c29469b02d1ab24812aba27a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:36 compute-0 ceph-mon[74496]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:36 compute-0 ceph-mon[74496]: Added label _admin to host compute-0
Jan 31 07:05:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4254008443' entity='client.admin' 
Jan 31 07:05:36 compute-0 podman[78069]: 2026-01-31 07:05:36.018196853 +0000 UTC m=+0.115126450 container init bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959 (image=quay.io/ceph/ceph:v18, name=frosty_carver, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:05:36 compute-0 podman[78069]: 2026-01-31 07:05:35.923204587 +0000 UTC m=+0.020134154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:36 compute-0 podman[78069]: 2026-01-31 07:05:36.021816792 +0000 UTC m=+0.118746339 container start bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959 (image=quay.io/ceph/ceph:v18, name=frosty_carver, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:05:36 compute-0 podman[78069]: 2026-01-31 07:05:36.025392239 +0000 UTC m=+0.122321846 container attach bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959 (image=quay.io/ceph/ceph:v18, name=frosty_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 31 07:05:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3241561397' entity='client.admin' 
Jan 31 07:05:36 compute-0 frosty_carver[78085]: set mgr/dashboard/cluster/status
Jan 31 07:05:36 compute-0 systemd[1]: libpod-bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959.scope: Deactivated successfully.
Jan 31 07:05:36 compute-0 podman[78069]: 2026-01-31 07:05:36.611643624 +0000 UTC m=+0.708573171 container died bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959 (image=quay.io/ceph/ceph:v18, name=frosty_carver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:05:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf23a604e067252725e46ab3cfc2c081f5401c80c29469b02d1ab24812aba27a-merged.mount: Deactivated successfully.
Jan 31 07:05:36 compute-0 podman[78069]: 2026-01-31 07:05:36.657698301 +0000 UTC m=+0.754627868 container remove bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959 (image=quay.io/ceph/ceph:v18, name=frosty_carver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 07:05:36 compute-0 systemd[1]: libpod-conmon-bc3b10a18919a58aeb0743ba15cabcba36abdc984e1c9f1c802804c7b5eb8959.scope: Deactivated successfully.
Jan 31 07:05:36 compute-0 sudo[73476]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:36 compute-0 podman[78132]: 2026-01-31 07:05:36.798005867 +0000 UTC m=+0.046019128 container create ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:36 compute-0 systemd[1]: Started libpod-conmon-ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f.scope.
Jan 31 07:05:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38c76693bb410810adab979f0b5bc3f9f6ca58bb9b6308984ce9394c83756f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38c76693bb410810adab979f0b5bc3f9f6ca58bb9b6308984ce9394c83756f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38c76693bb410810adab979f0b5bc3f9f6ca58bb9b6308984ce9394c83756f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b38c76693bb410810adab979f0b5bc3f9f6ca58bb9b6308984ce9394c83756f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:36 compute-0 podman[78132]: 2026-01-31 07:05:36.774594604 +0000 UTC m=+0.022607935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:05:36 compute-0 podman[78132]: 2026-01-31 07:05:36.878244942 +0000 UTC m=+0.126258193 container init ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:36 compute-0 podman[78132]: 2026-01-31 07:05:36.887225732 +0000 UTC m=+0.135238963 container start ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:05:36 compute-0 podman[78132]: 2026-01-31 07:05:36.890651995 +0000 UTC m=+0.138665236 container attach ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:05:37 compute-0 sudo[78177]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdqjokuvrmdxsaysoadjihomkglmredw ; /usr/bin/python3'
Jan 31 07:05:37 compute-0 sudo[78177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:37 compute-0 python3[78179]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:37 compute-0 podman[78180]: 2026-01-31 07:05:37.289655635 +0000 UTC m=+0.047522365 container create 7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:05:37 compute-0 systemd[1]: Started libpod-conmon-7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a.scope.
Jan 31 07:05:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42fe36b0d5f1e7daf725c307d17d9b3af53d9f03389dfc5f3c8a6597fcd9819/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e42fe36b0d5f1e7daf725c307d17d9b3af53d9f03389dfc5f3c8a6597fcd9819/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:37 compute-0 podman[78180]: 2026-01-31 07:05:37.347574263 +0000 UTC m=+0.105441043 container init 7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:37 compute-0 podman[78180]: 2026-01-31 07:05:37.352918514 +0000 UTC m=+0.110785244 container start 7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:05:37 compute-0 podman[78180]: 2026-01-31 07:05:37.356562903 +0000 UTC m=+0.114429643 container attach 7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:05:37 compute-0 podman[78180]: 2026-01-31 07:05:37.271552011 +0000 UTC m=+0.029418771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3241561397' entity='client.admin' 
Jan 31 07:05:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 31 07:05:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/647641130' entity='client.admin' 
Jan 31 07:05:37 compute-0 systemd[1]: libpod-7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a.scope: Deactivated successfully.
Jan 31 07:05:37 compute-0 podman[78180]: 2026-01-31 07:05:37.872860105 +0000 UTC m=+0.630726835 container died 7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:05:37 compute-0 ceph-mgr[74791]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 07:05:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e42fe36b0d5f1e7daf725c307d17d9b3af53d9f03389dfc5f3c8a6597fcd9819-merged.mount: Deactivated successfully.
Jan 31 07:05:37 compute-0 podman[78180]: 2026-01-31 07:05:37.908020876 +0000 UTC m=+0.665887606 container remove 7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:37 compute-0 systemd[1]: libpod-conmon-7e0e0d2bbbffbf01e2df2c4978e98ee77a89c60095292e00c107c5e63c9b7a9a.scope: Deactivated successfully.
Jan 31 07:05:37 compute-0 vibrant_banach[78149]: [
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:     {
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         "available": false,
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         "ceph_device": false,
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         "lsm_data": {},
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         "lvs": [],
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         "path": "/dev/sr0",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         "rejected_reasons": [
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "Insufficient space (<5GB)",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "Has a FileSystem"
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         ],
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         "sys_api": {
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "actuators": null,
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "device_nodes": "sr0",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "devname": "sr0",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "human_readable_size": "482.00 KB",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "id_bus": "ata",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "model": "QEMU DVD-ROM",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "nr_requests": "2",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "parent": "/dev/sr0",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "partitions": {},
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "path": "/dev/sr0",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "removable": "1",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "rev": "2.5+",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "ro": "0",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "rotational": "1",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "sas_address": "",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "sas_device_handle": "",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "scheduler_mode": "mq-deadline",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "sectors": 0,
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "sectorsize": "2048",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "size": 493568.0,
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "support_discard": "2048",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "type": "disk",
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:             "vendor": "QEMU"
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:         }
Jan 31 07:05:37 compute-0 vibrant_banach[78149]:     }
Jan 31 07:05:37 compute-0 vibrant_banach[78149]: ]
Jan 31 07:05:37 compute-0 sudo[78177]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:37 compute-0 systemd[1]: libpod-ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f.scope: Deactivated successfully.
Jan 31 07:05:37 compute-0 systemd[1]: libpod-ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f.scope: Consumed 1.059s CPU time.
Jan 31 07:05:37 compute-0 podman[78132]: 2026-01-31 07:05:37.962666403 +0000 UTC m=+1.210679664 container died ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b38c76693bb410810adab979f0b5bc3f9f6ca58bb9b6308984ce9394c83756f2-merged.mount: Deactivated successfully.
Jan 31 07:05:38 compute-0 podman[78132]: 2026-01-31 07:05:38.009164762 +0000 UTC m=+1.257177993 container remove ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:05:38 compute-0 systemd[1]: libpod-conmon-ba4a1b18fc1bedef006dc2ce949173f4faa6abcc4af9184aa91e275d5d10d59f.scope: Deactivated successfully.
Jan 31 07:05:38 compute-0 sudo[77844]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:05:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:05:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 07:05:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:05:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:05:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:05:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:38 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:05:38 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:05:38 compute-0 sudo[79177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79177]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 07:05:38 compute-0 sudo[79202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79202]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79227]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph
Jan 31 07:05:38 compute-0 sudo[79252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79252]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79305]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:05:38 compute-0 sudo[79354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79354]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79402]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:38 compute-0 sudo[79427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79427]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79452]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:05:38 compute-0 sudo[79477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79477]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79546]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:05:38 compute-0 sudo[79593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-katigyfghfyvpomgntzeaemahljwrcat ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843138.2358484-37395-226430551084021/async_wrapper.py j531759718154 30 /home/zuul/.ansible/tmp/ansible-tmp-1769843138.2358484-37395-226430551084021/AnsiballZ_command.py _'
Jan 31 07:05:38 compute-0 sudo[79666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:38 compute-0 sudo[79631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79631]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:05:38 compute-0 sudo[79675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79700]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 07:05:38 compute-0 ansible-async_wrapper.py[79672]: Invoked with j531759718154 30 /home/zuul/.ansible/tmp/ansible-tmp-1769843138.2358484-37395-226430551084021/AnsiballZ_command.py _
Jan 31 07:05:38 compute-0 sudo[79725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 ansible-async_wrapper.py[79752]: Starting module and watcher
Jan 31 07:05:38 compute-0 ansible-async_wrapper.py[79752]: Start watching 79753 (30)
Jan 31 07:05:38 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:05:38 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:05:38 compute-0 ansible-async_wrapper.py[79753]: Start module (79753)
Jan 31 07:05:38 compute-0 ansible-async_wrapper.py[79672]: Return async_wrapper task started.
Jan 31 07:05:38 compute-0 sudo[79666]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79754]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 sudo[79780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:05:38 compute-0 sudo[79780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79780]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/647641130' entity='client.admin' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:05:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:38 compute-0 ceph-mon[74496]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:05:38 compute-0 ceph-mon[74496]: Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:05:38 compute-0 python3[79755]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:38 compute-0 sudo[79805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 sudo[79805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79805]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 podman[79828]: 2026-01-31 07:05:38.926136734 +0000 UTC m=+0.034019824 container create ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49 (image=quay.io/ceph/ceph:v18, name=upbeat_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:05:38 compute-0 sudo[79836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:05:38 compute-0 sudo[79836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79836]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:38 compute-0 systemd[1]: Started libpod-conmon-ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49.scope.
Jan 31 07:05:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903c8e9379bc23f61a1823fe61b8de121e1a81a8423f280bdb9ce261ee0d3e67/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903c8e9379bc23f61a1823fe61b8de121e1a81a8423f280bdb9ce261ee0d3e67/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:38 compute-0 podman[79828]: 2026-01-31 07:05:38.985686903 +0000 UTC m=+0.093569983 container init ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49 (image=quay.io/ceph/ceph:v18, name=upbeat_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:38 compute-0 podman[79828]: 2026-01-31 07:05:38.990556381 +0000 UTC m=+0.098439471 container start ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49 (image=quay.io/ceph/ceph:v18, name=upbeat_wozniak, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:38 compute-0 sudo[79870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:38 compute-0 podman[79828]: 2026-01-31 07:05:38.994950269 +0000 UTC m=+0.102833389 container attach ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49 (image=quay.io/ceph/ceph:v18, name=upbeat_wozniak, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:05:38 compute-0 sudo[79870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:38 compute-0 sudo[79870]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 podman[79828]: 2026-01-31 07:05:38.913345131 +0000 UTC m=+0.021228261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:39 compute-0 sudo[79899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:05:39 compute-0 sudo[79899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[79899]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[79924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[79924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[79924]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[79949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:39 compute-0 sudo[79949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[79949]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[79974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[79974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[79974]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[79999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:05:39 compute-0 sudo[79999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[79999]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[80047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80047]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:05:39 compute-0 sudo[80082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80082]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:39 compute-0 sudo[80116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[80116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80116]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:05:39 compute-0 sudo[80141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80141]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[80166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80166]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:05:39 compute-0 sudo[80191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:05:39 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:05:39 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:05:39 compute-0 upbeat_wozniak[79877]: 
Jan 31 07:05:39 compute-0 upbeat_wozniak[79877]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 07:05:39 compute-0 systemd[1]: libpod-ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49.scope: Deactivated successfully.
Jan 31 07:05:39 compute-0 podman[79828]: 2026-01-31 07:05:39.597979304 +0000 UTC m=+0.705862424 container died ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49 (image=quay.io/ceph/ceph:v18, name=upbeat_wozniak, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:05:39 compute-0 sudo[80216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[80216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-903c8e9379bc23f61a1823fe61b8de121e1a81a8423f280bdb9ce261ee0d3e67-merged.mount: Deactivated successfully.
Jan 31 07:05:39 compute-0 sudo[80216]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 podman[79828]: 2026-01-31 07:05:39.646579255 +0000 UTC m=+0.754462375 container remove ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49 (image=quay.io/ceph/ceph:v18, name=upbeat_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:05:39 compute-0 systemd[1]: libpod-conmon-ab964b11e0fbe03eebea3e5ed269047ac19d80263d7d76b9fdfc9e9d2f9a2b49.scope: Deactivated successfully.
Jan 31 07:05:39 compute-0 ansible-async_wrapper.py[79753]: Module complete (79753)
Jan 31 07:05:39 compute-0 sudo[80255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 07:05:39 compute-0 sudo[80255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80255]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[80282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80282]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph
Jan 31 07:05:39 compute-0 sudo[80307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80307]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[80332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80332]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:05:39 compute-0 sudo[80380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80380]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 ceph-mon[74496]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:05:39 compute-0 ceph-mon[74496]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:05:39 compute-0 ceph-mgr[74791]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 31 07:05:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:39 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 07:05:39 compute-0 sudo[80405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[80405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80405]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:39 compute-0 sudo[80430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80430]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:39 compute-0 sudo[80455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:39 compute-0 sudo[80455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:39 compute-0 sudo[80455]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:05:40 compute-0 sudo[80480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80480]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipqrvlktrjqbippbgczldnwllroxplhf ; /usr/bin/python3'
Jan 31 07:05:40 compute-0 sudo[80527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:40 compute-0 sudo[80554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:40 compute-0 sudo[80554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80554]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:05:40 compute-0 python3[80530]: ansible-ansible.legacy.async_status Invoked with jid=j531759718154.79672 mode=status _async_dir=/root/.ansible_async
Jan 31 07:05:40 compute-0 sudo[80579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80579]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80527]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:40 compute-0 sudo[80604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80604]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new
Jan 31 07:05:40 compute-0 sudo[80652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80652]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrrycypnthxpphiyuxkvouuruqwuxlph ; /usr/bin/python3'
Jan 31 07:05:40 compute-0 sudo[80701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:40 compute-0 sudo[80700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:40 compute-0 sudo[80700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80700]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 31 07:05:40 compute-0 sudo[80728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80728]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:05:40 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:05:40 compute-0 python3[80719]: ansible-ansible.legacy.async_status Invoked with jid=j531759718154.79672 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 07:05:40 compute-0 sudo[80753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:40 compute-0 sudo[80753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80753]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80701]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:05:40 compute-0 sudo[80778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80778]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:40 compute-0 sudo[80803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80803]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:05:40 compute-0 sudo[80828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80828]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:40 compute-0 sudo[80853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80853]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new
Jan 31 07:05:40 compute-0 sudo[80878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80878]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnzauwshnijzopiyrvvrexgwkshuioyt ; /usr/bin/python3'
Jan 31 07:05:40 compute-0 sudo[80906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:40 compute-0 sudo[80906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:40 compute-0 sudo[80906]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:40 compute-0 sudo[80954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80954]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:40 compute-0 sudo[80979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[80979]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 ceph-mon[74496]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:40 compute-0 ceph-mon[74496]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 07:05:40 compute-0 ceph-mon[74496]: Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:05:40 compute-0 python3[80953]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:05:40 compute-0 sudo[81004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new
Jan 31 07:05:40 compute-0 sudo[81004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:40 compute-0 sudo[81004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:40 compute-0 sudo[80949]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 sudo[81054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:41 compute-0 sudo[81054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 sudo[81079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new
Jan 31 07:05:41 compute-0 sudo[81079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81079]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 sudo[81104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:41 compute-0 sudo[81104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81104]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 sudo[81129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new
Jan 31 07:05:41 compute-0 sudo[81129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81129]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 sudo[81154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:41 compute-0 sudo[81154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81154]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 sudo[81200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imhxmpriyyjtfrnvjycgpmqjrhgybcls ; /usr/bin/python3'
Jan 31 07:05:41 compute-0 sudo[81200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:41 compute-0 sudo[81204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring.new /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:05:41 compute-0 sudo[81204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81204]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:05:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:05:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:41 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev c6d5d5d2-ffed-447f-9591-e706268e1d93 (Updating crash deployment (+1 -> 1))
Jan 31 07:05:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 07:05:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:05:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:05:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:05:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:41 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 31 07:05:41 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 31 07:05:41 compute-0 sudo[81230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:41 compute-0 sudo[81230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81230]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 python3[81206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:41 compute-0 sudo[81256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:41 compute-0 sudo[81256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81256]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 podman[81255]: 2026-01-31 07:05:41.426657709 +0000 UTC m=+0.042392559 container create e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997 (image=quay.io/ceph/ceph:v18, name=charming_feistel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:41 compute-0 systemd[1]: Started libpod-conmon-e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997.scope.
Jan 31 07:05:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:41 compute-0 sudo[81294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7df8f755391eae3e1d8a45995f1e526e555193687170c8f7ad1baa907f8c5d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7df8f755391eae3e1d8a45995f1e526e555193687170c8f7ad1baa907f8c5d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac7df8f755391eae3e1d8a45995f1e526e555193687170c8f7ad1baa907f8c5d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:41 compute-0 sudo[81294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 sudo[81294]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:41 compute-0 podman[81255]: 2026-01-31 07:05:41.405057861 +0000 UTC m=+0.020792691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:41 compute-0 podman[81255]: 2026-01-31 07:05:41.509752233 +0000 UTC m=+0.125487083 container init e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997 (image=quay.io/ceph/ceph:v18, name=charming_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:05:41 compute-0 podman[81255]: 2026-01-31 07:05:41.517447702 +0000 UTC m=+0.133182512 container start e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997 (image=quay.io/ceph/ceph:v18, name=charming_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:41 compute-0 podman[81255]: 2026-01-31 07:05:41.520365514 +0000 UTC m=+0.136100314 container attach e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997 (image=quay.io/ceph/ceph:v18, name=charming_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:05:41 compute-0 sudo[81324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:41 compute-0 sudo[81324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:41 compute-0 podman[81411]: 2026-01-31 07:05:41.855063389 +0000 UTC m=+0.034617749 container create bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moser, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:41 compute-0 systemd[1]: Started libpod-conmon-bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce.scope.
Jan 31 07:05:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:41 compute-0 podman[81411]: 2026-01-31 07:05:41.93147818 +0000 UTC m=+0.111032550 container init bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:41 compute-0 podman[81411]: 2026-01-31 07:05:41.836173967 +0000 UTC m=+0.015728357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:05:41 compute-0 podman[81411]: 2026-01-31 07:05:41.938242685 +0000 UTC m=+0.117797045 container start bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moser, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:05:41 compute-0 beautiful_moser[81428]: 167 167
Jan 31 07:05:41 compute-0 systemd[1]: libpod-bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce.scope: Deactivated successfully.
Jan 31 07:05:41 compute-0 podman[81411]: 2026-01-31 07:05:41.945728009 +0000 UTC m=+0.125282399 container attach bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:41 compute-0 podman[81411]: 2026-01-31 07:05:41.946350144 +0000 UTC m=+0.125904494 container died bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moser, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:05:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e1d541b4c4a218abee7ee35dc16a7a3837c54d739d8187aa29d6d03deb42e6e-merged.mount: Deactivated successfully.
Jan 31 07:05:42 compute-0 podman[81411]: 2026-01-31 07:05:42.003673787 +0000 UTC m=+0.183228157 container remove bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:05:42 compute-0 systemd[1]: libpod-conmon-bd97544a64e40d36e68534a6f7105108224cff7580e851d2db2ad68d491d45ce.scope: Deactivated successfully.
Jan 31 07:05:42 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:05:42 compute-0 charming_feistel[81319]: 
Jan 31 07:05:42 compute-0 charming_feistel[81319]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 07:05:42 compute-0 systemd[1]: libpod-e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997.scope: Deactivated successfully.
Jan 31 07:05:42 compute-0 podman[81255]: 2026-01-31 07:05:42.060670893 +0000 UTC m=+0.676405743 container died e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997 (image=quay.io/ceph/ceph:v18, name=charming_feistel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:05:42 compute-0 systemd[1]: Reloading.
Jan 31 07:05:42 compute-0 podman[81255]: 2026-01-31 07:05:42.102128679 +0000 UTC m=+0.717863489 container remove e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997 (image=quay.io/ceph/ceph:v18, name=charming_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:05:42 compute-0 sudo[81200]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:42 compute-0 systemd-rc-local-generator[81480]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:05:42 compute-0 systemd-sysv-generator[81484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac7df8f755391eae3e1d8a45995f1e526e555193687170c8f7ad1baa907f8c5d-merged.mount: Deactivated successfully.
Jan 31 07:05:42 compute-0 systemd[1]: libpod-conmon-e88df440d7825a46e09f9cf137b134428646b7988a96f0e5d9fdc51900cc9997.scope: Deactivated successfully.
Jan 31 07:05:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:05:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:05:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:42 compute-0 ceph-mon[74496]: Deploying daemon crash.compute-0 on compute-0
Jan 31 07:05:42 compute-0 systemd[1]: Reloading.
Jan 31 07:05:42 compute-0 systemd-rc-local-generator[81554]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:05:42 compute-0 systemd-sysv-generator[81557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:05:42 compute-0 sudo[81526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iswmmltvtldzellnjfiulztlskmrvujy ; /usr/bin/python3'
Jan 31 07:05:42 compute-0 sudo[81526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:42 compute-0 systemd[1]: Starting Ceph crash.compute-0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:05:42 compute-0 python3[81565]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:42 compute-0 podman[81600]: 2026-01-31 07:05:42.786188827 +0000 UTC m=+0.042251315 container create 5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10 (image=quay.io/ceph/ceph:v18, name=vigilant_meninsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:05:42 compute-0 systemd[1]: Started libpod-conmon-5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10.scope.
Jan 31 07:05:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:42 compute-0 podman[81623]: 2026-01-31 07:05:42.849921868 +0000 UTC m=+0.066349396 container create 1da6676927c3fbc16088c8bd8068265d57380b59a05a52c32a3be4739c9d9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539d9e08c486243408bbccc357a17a9c218c9711463c0c3f5acf6ca76adb18d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539d9e08c486243408bbccc357a17a9c218c9711463c0c3f5acf6ca76adb18d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539d9e08c486243408bbccc357a17a9c218c9711463c0c3f5acf6ca76adb18d1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:42 compute-0 podman[81600]: 2026-01-31 07:05:42.767476179 +0000 UTC m=+0.023538697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:42 compute-0 podman[81600]: 2026-01-31 07:05:42.883282814 +0000 UTC m=+0.139345322 container init 5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10 (image=quay.io/ceph/ceph:v18, name=vigilant_meninsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:05:42 compute-0 podman[81600]: 2026-01-31 07:05:42.891576057 +0000 UTC m=+0.147638525 container start 5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10 (image=quay.io/ceph/ceph:v18, name=vigilant_meninsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:42 compute-0 podman[81600]: 2026-01-31 07:05:42.894764466 +0000 UTC m=+0.150827024 container attach 5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10 (image=quay.io/ceph/ceph:v18, name=vigilant_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 31 07:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4657c69bf1f7017ddb82052b8e192728196553527cd2185ee2575a7f53c015/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4657c69bf1f7017ddb82052b8e192728196553527cd2185ee2575a7f53c015/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4657c69bf1f7017ddb82052b8e192728196553527cd2185ee2575a7f53c015/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4657c69bf1f7017ddb82052b8e192728196553527cd2185ee2575a7f53c015/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:42 compute-0 podman[81623]: 2026-01-31 07:05:42.821994444 +0000 UTC m=+0.038421962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:05:42 compute-0 podman[81623]: 2026-01-31 07:05:42.920774743 +0000 UTC m=+0.137202321 container init 1da6676927c3fbc16088c8bd8068265d57380b59a05a52c32a3be4739c9d9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:05:42 compute-0 podman[81623]: 2026-01-31 07:05:42.927008286 +0000 UTC m=+0.143435814 container start 1da6676927c3fbc16088c8bd8068265d57380b59a05a52c32a3be4739c9d9976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:05:42 compute-0 bash[81623]: 1da6676927c3fbc16088c8bd8068265d57380b59a05a52c32a3be4739c9d9976
Jan 31 07:05:42 compute-0 systemd[1]: Started Ceph crash.compute-0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:05:42 compute-0 sudo[81324]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:05:42 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 07:05:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev c6d5d5d2-ffed-447f-9591-e706268e1d93 (Updating crash deployment (+1 -> 1))
Jan 31 07:05:43 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event c6d5d5d2-ffed-447f-9591-e706268e1d93 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 31 07:05:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 07:05:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 95edb100-6f3d-4b95-be63-60df4c122898 does not exist
Jan 31 07:05:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 07:05:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 15db8962-9af4-4fb2-ae00-6496145dc7c5 does not exist
Jan 31 07:05:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 07:05:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 sudo[81651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:43 compute-0 sudo[81651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-0 sudo[81651]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 07:05:43 compute-0 sudo[81676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:05:43 compute-0 sudo[81676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-0 sudo[81676]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-0 sudo[81704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:43 compute-0 sudo[81704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-0 sudo[81704]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-0 sudo[81747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:43 compute-0 sudo[81747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-0 sudo[81747]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-0 sudo[81772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:43 compute-0 sudo[81772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-0 sudo[81772]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-0 ceph-mon[74496]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:43 compute-0 ceph-mon[74496]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:05:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: 2026-01-31T07:05:43.346+0000 7f2ee78c6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: 2026-01-31T07:05:43.346+0000 7f2ee78c6640 -1 AuthRegistry(0x7f2ee0066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: 2026-01-31T07:05:43.347+0000 7f2ee78c6640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: 2026-01-31T07:05:43.347+0000 7f2ee78c6640 -1 AuthRegistry(0x7f2ee78c5000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: 2026-01-31T07:05:43.348+0000 7f2ee563b640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: 2026-01-31T07:05:43.348+0000 7f2ee78c6640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 07:05:43 compute-0 sudo[81797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 07:05:43 compute-0 sudo[81797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 31 07:05:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1847991108' entity='client.admin' 
Jan 31 07:05:43 compute-0 systemd[1]: libpod-5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10.scope: Deactivated successfully.
Jan 31 07:05:43 compute-0 podman[81600]: 2026-01-31 07:05:43.46750516 +0000 UTC m=+0.723567638 container died 5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10 (image=quay.io/ceph/ceph:v18, name=vigilant_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-539d9e08c486243408bbccc357a17a9c218c9711463c0c3f5acf6ca76adb18d1-merged.mount: Deactivated successfully.
Jan 31 07:05:43 compute-0 podman[81600]: 2026-01-31 07:05:43.511429525 +0000 UTC m=+0.767491993 container remove 5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10 (image=quay.io/ceph/ceph:v18, name=vigilant_meninsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:05:43 compute-0 systemd[1]: libpod-conmon-5f9e5ace86ab02302c7c795e6f152a8709c1ba8e88c40159a06176035f9e6f10.scope: Deactivated successfully.
Jan 31 07:05:43 compute-0 sudo[81526]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:43 compute-0 sudo[81921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oswbiqwfmlmvjurnpmvqxvmjsmtvnkjc ; /usr/bin/python3'
Jan 31 07:05:43 compute-0 sudo[81921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:43 compute-0 ansible-async_wrapper.py[79752]: Done in kid B.
Jan 31 07:05:43 compute-0 podman[81945]: 2026-01-31 07:05:43.788068319 +0000 UTC m=+0.068480468 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:43 compute-0 python3[81924]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:43 compute-0 podman[81966]: 2026-01-31 07:05:43.872172378 +0000 UTC m=+0.045055424 container create 67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d (image=quay.io/ceph/ceph:v18, name=nice_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:05:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:43 compute-0 podman[81945]: 2026-01-31 07:05:43.889808339 +0000 UTC m=+0.170220398 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:05:43 compute-0 systemd[1]: Started libpod-conmon-67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d.scope.
Jan 31 07:05:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e01406bac7b0974e4cf17734dd3caa0d24a3aa472b75bc51f5301cd4130064/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e01406bac7b0974e4cf17734dd3caa0d24a3aa472b75bc51f5301cd4130064/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41e01406bac7b0974e4cf17734dd3caa0d24a3aa472b75bc51f5301cd4130064/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:43 compute-0 podman[81966]: 2026-01-31 07:05:43.849325628 +0000 UTC m=+0.022208684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:43 compute-0 podman[81966]: 2026-01-31 07:05:43.958765968 +0000 UTC m=+0.131649044 container init 67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d (image=quay.io/ceph/ceph:v18, name=nice_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:43 compute-0 podman[81966]: 2026-01-31 07:05:43.966275742 +0000 UTC m=+0.139158788 container start 67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d (image=quay.io/ceph/ceph:v18, name=nice_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:43 compute-0 podman[81966]: 2026-01-31 07:05:43.972044443 +0000 UTC m=+0.144927819 container attach 67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d (image=quay.io/ceph/ceph:v18, name=nice_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:44 compute-0 sudo[81797]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9c5a7d09-7080-422b-bafc-65ccdb3e42e6 does not exist
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c190f0f5-2d9b-41e1-a39a-e8cc4bbca010 does not exist
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4870aa0b-4e4e-443e-9add-0756daa82244 does not exist
Jan 31 07:05:44 compute-0 sudo[82029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:44 compute-0 sudo[82029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:44 compute-0 sudo[82029]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 sudo[82054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:05:44 compute-0 sudo[82054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:44 compute-0 sudo[82054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:05:44 compute-0 sudo[82098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:44 compute-0 sudo[82098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:44 compute-0 sudo[82098]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:44 compute-0 sudo[82123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:44 compute-0 sudo[82123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:44 compute-0 sudo[82123]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 sudo[82148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:44 compute-0 sudo[82148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:44 compute-0 sudo[82148]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1847991108' entity='client.admin' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:44 compute-0 sudo[82173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:44 compute-0 sudo[82173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/273218485' entity='client.admin' 
Jan 31 07:05:44 compute-0 systemd[1]: libpod-67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d.scope: Deactivated successfully.
Jan 31 07:05:44 compute-0 podman[81966]: 2026-01-31 07:05:44.511482891 +0000 UTC m=+0.684365957 container died 67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d (image=quay.io/ceph/ceph:v18, name=nice_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-41e01406bac7b0974e4cf17734dd3caa0d24a3aa472b75bc51f5301cd4130064-merged.mount: Deactivated successfully.
Jan 31 07:05:44 compute-0 podman[81966]: 2026-01-31 07:05:44.552244979 +0000 UTC m=+0.725128045 container remove 67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d (image=quay.io/ceph/ceph:v18, name=nice_heyrovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:44 compute-0 systemd[1]: libpod-conmon-67ba2e784a4815902d90396770195173ea3a75bf306bb55fa0cd65c9db58c86d.scope: Deactivated successfully.
Jan 31 07:05:44 compute-0 sudo[81921]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 sudo[82264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahdapwybewpkjoriephmzasyaxwkgwjg ; /usr/bin/python3'
Jan 31 07:05:44 compute-0 podman[82228]: 2026-01-31 07:05:44.746617398 +0000 UTC m=+0.051664936 container create 77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 31 07:05:44 compute-0 sudo[82264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:44 compute-0 systemd[1]: Started libpod-conmon-77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119.scope.
Jan 31 07:05:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:44 compute-0 podman[82228]: 2026-01-31 07:05:44.809871056 +0000 UTC m=+0.114918614 container init 77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_visvesvaraya, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:05:44 compute-0 podman[82228]: 2026-01-31 07:05:44.717969207 +0000 UTC m=+0.023016765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:05:44 compute-0 podman[82228]: 2026-01-31 07:05:44.813844124 +0000 UTC m=+0.118891662 container start 77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_visvesvaraya, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:05:44 compute-0 reverent_visvesvaraya[82270]: 167 167
Jan 31 07:05:44 compute-0 systemd[1]: libpod-77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119.scope: Deactivated successfully.
Jan 31 07:05:44 compute-0 podman[82228]: 2026-01-31 07:05:44.818639082 +0000 UTC m=+0.123686660 container attach 77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_visvesvaraya, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:05:44 compute-0 podman[82228]: 2026-01-31 07:05:44.819029851 +0000 UTC m=+0.124077399 container died 77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-99fd59cc3ba1833d6b0ac9ce65c00f113d63eb253b5699d3a948491661d20e0b-merged.mount: Deactivated successfully.
Jan 31 07:05:44 compute-0 podman[82228]: 2026-01-31 07:05:44.848593875 +0000 UTC m=+0.153641413 container remove 77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:05:44 compute-0 systemd[1]: libpod-conmon-77bfdd253ed7145ef26836a70a862ff193cf17df2666c72316da8f2905200119.scope: Deactivated successfully.
Jan 31 07:05:44 compute-0 sudo[82173]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:05:44 compute-0 python3[82266]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.hhuoua (unknown last config time)...
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.hhuoua (unknown last config time)...
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhuoua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhuoua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.hhuoua on compute-0
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.hhuoua on compute-0
Jan 31 07:05:44 compute-0 podman[82289]: 2026-01-31 07:05:44.940729001 +0000 UTC m=+0.038161305 container create 76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8 (image=quay.io/ceph/ceph:v18, name=quizzical_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:44 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 1 completed events
Jan 31 07:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:05:44 compute-0 sudo[82298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:44 compute-0 sudo[82298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:44 compute-0 sudo[82298]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:44 compute-0 systemd[1]: Started libpod-conmon-76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8.scope.
Jan 31 07:05:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fb8fe1ca61f28b82e0a90ca15f070efcfa5ee1cd384f91acb49a8eb9877d7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fb8fe1ca61f28b82e0a90ca15f070efcfa5ee1cd384f91acb49a8eb9877d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395fb8fe1ca61f28b82e0a90ca15f070efcfa5ee1cd384f91acb49a8eb9877d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:45 compute-0 podman[82289]: 2026-01-31 07:05:45.00477643 +0000 UTC m=+0.102208734 container init 76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8 (image=quay.io/ceph/ceph:v18, name=quizzical_feynman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:45 compute-0 sudo[82329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:45 compute-0 sudo[82329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:45 compute-0 podman[82289]: 2026-01-31 07:05:45.011542504 +0000 UTC m=+0.108974788 container start 76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8 (image=quay.io/ceph/ceph:v18, name=quizzical_feynman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:45 compute-0 sudo[82329]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:45 compute-0 podman[82289]: 2026-01-31 07:05:45.015535042 +0000 UTC m=+0.112967346 container attach 76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8 (image=quay.io/ceph/ceph:v18, name=quizzical_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:45 compute-0 podman[82289]: 2026-01-31 07:05:44.923744035 +0000 UTC m=+0.021176379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:45 compute-0 sudo[82359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:45 compute-0 sudo[82359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:45 compute-0 sudo[82359]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:45 compute-0 sudo[82384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:05:45 compute-0 sudo[82384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:45 compute-0 podman[82425]: 2026-01-31 07:05:45.338954181 +0000 UTC m=+0.048265462 container create 6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:05:45 compute-0 systemd[1]: Started libpod-conmon-6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a.scope.
Jan 31 07:05:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:45 compute-0 podman[82425]: 2026-01-31 07:05:45.395620219 +0000 UTC m=+0.104931530 container init 6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_davinci, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:05:45 compute-0 podman[82425]: 2026-01-31 07:05:45.399996196 +0000 UTC m=+0.109307507 container start 6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:45 compute-0 zen_davinci[82460]: 167 167
Jan 31 07:05:45 compute-0 systemd[1]: libpod-6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a.scope: Deactivated successfully.
Jan 31 07:05:45 compute-0 podman[82425]: 2026-01-31 07:05:45.403916392 +0000 UTC m=+0.113227673 container attach 6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_davinci, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:45 compute-0 podman[82425]: 2026-01-31 07:05:45.404316402 +0000 UTC m=+0.113627683 container died 6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:45 compute-0 podman[82425]: 2026-01-31 07:05:45.317871885 +0000 UTC m=+0.027183266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c670d0b63426042e880dd31b60371e9fbc389329b8fe39f699a879b5053cdf2-merged.mount: Deactivated successfully.
Jan 31 07:05:45 compute-0 podman[82425]: 2026-01-31 07:05:45.440220571 +0000 UTC m=+0.149531852 container remove 6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:45 compute-0 systemd[1]: libpod-conmon-6a273f90eb6c000152b26f9870b582aa44436c8029e34c60e6ccba9edad7025a.scope: Deactivated successfully.
Jan 31 07:05:45 compute-0 sudo[82384]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:05:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:05:45 compute-0 ceph-mon[74496]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:45 compute-0 ceph-mon[74496]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 07:05:45 compute-0 ceph-mon[74496]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:05:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/273218485' entity='client.admin' 
Jan 31 07:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhuoua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:05:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:05:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:05:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5770daba-1c16-47e1-b5b1-bcae6697c382 does not exist
Jan 31 07:05:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 21d1ea2e-1ea9-4967-9c96-b8291acaf982 does not exist
Jan 31 07:05:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 27c66068-14a5-4ca7-a30b-fe1f7baa9795 does not exist
Jan 31 07:05:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 31 07:05:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1054193188' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 31 07:05:45 compute-0 sudo[82480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:45 compute-0 sudo[82480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:45 compute-0 sudo[82480]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:45 compute-0 sudo[82505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:05:45 compute-0 sudo[82505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:45 compute-0 sudo[82505]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:46 compute-0 ceph-mon[74496]: Reconfiguring mgr.compute-0.hhuoua (unknown last config time)...
Jan 31 07:05:46 compute-0 ceph-mon[74496]: Reconfiguring daemon mgr.compute-0.hhuoua on compute-0
Jan 31 07:05:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1054193188' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 31 07:05:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 31 07:05:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:05:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1054193188' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 07:05:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 07:05:46 compute-0 quizzical_feynman[82333]: set require_min_compat_client to mimic
Jan 31 07:05:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 31 07:05:46 compute-0 systemd[1]: libpod-76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8.scope: Deactivated successfully.
Jan 31 07:05:46 compute-0 conmon[82333]: conmon 76d08394749b7518e216 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8.scope/container/memory.events
Jan 31 07:05:46 compute-0 podman[82289]: 2026-01-31 07:05:46.564367846 +0000 UTC m=+1.661800160 container died 76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8 (image=quay.io/ceph/ceph:v18, name=quizzical_feynman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-395fb8fe1ca61f28b82e0a90ca15f070efcfa5ee1cd384f91acb49a8eb9877d7-merged.mount: Deactivated successfully.
Jan 31 07:05:46 compute-0 podman[82289]: 2026-01-31 07:05:46.606168858 +0000 UTC m=+1.703601172 container remove 76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8 (image=quay.io/ceph/ceph:v18, name=quizzical_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:05:46 compute-0 systemd[1]: libpod-conmon-76d08394749b7518e2166ff8bfa90460a988ee352c46b6847206f28c465cc0c8.scope: Deactivated successfully.
Jan 31 07:05:46 compute-0 sudo[82264]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:47 compute-0 sudo[82568]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deszeevbrzjjwfdymjcnosbkgfaurzfe ; /usr/bin/python3'
Jan 31 07:05:47 compute-0 sudo[82568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:47 compute-0 python3[82570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:47 compute-0 podman[82571]: 2026-01-31 07:05:47.261454333 +0000 UTC m=+0.070320823 container create a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3 (image=quay.io/ceph/ceph:v18, name=vibrant_margulis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:47 compute-0 podman[82571]: 2026-01-31 07:05:47.213740715 +0000 UTC m=+0.022607285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:47 compute-0 systemd[1]: Started libpod-conmon-a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3.scope.
Jan 31 07:05:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b97f8ac0667598d28118dcb7e3bb54d6689992d855a74eee814f22bfafc61f6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b97f8ac0667598d28118dcb7e3bb54d6689992d855a74eee814f22bfafc61f6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b97f8ac0667598d28118dcb7e3bb54d6689992d855a74eee814f22bfafc61f6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:47 compute-0 podman[82571]: 2026-01-31 07:05:47.380700744 +0000 UTC m=+0.189567284 container init a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3 (image=quay.io/ceph/ceph:v18, name=vibrant_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:05:47 compute-0 podman[82571]: 2026-01-31 07:05:47.388217257 +0000 UTC m=+0.197083737 container start a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3 (image=quay.io/ceph/ceph:v18, name=vibrant_margulis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:05:47 compute-0 podman[82571]: 2026-01-31 07:05:47.416685984 +0000 UTC m=+0.225552584 container attach a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3 (image=quay.io/ceph/ceph:v18, name=vibrant_margulis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:05:47 compute-0 ceph-mon[74496]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1054193188' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 07:05:47 compute-0 ceph-mon[74496]: osdmap e3: 0 total, 0 up, 0 in
Jan 31 07:05:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:47 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:48 compute-0 sudo[82610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:48 compute-0 sudo[82610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:48 compute-0 sudo[82610]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:48 compute-0 sudo[82635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:05:48 compute-0 sudo[82635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:48 compute-0 sudo[82635]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:48 compute-0 sudo[82660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:48 compute-0 sudo[82660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:48 compute-0 sudo[82660]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:48 compute-0 sudo[82685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Jan 31 07:05:48 compute-0 sudo[82685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:48 compute-0 sudo[82685]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 07:05:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 07:05:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 07:05:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 07:05:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:48 compute-0 ceph-mgr[74791]: [cephadm INFO root] Added host compute-0
Jan 31 07:05:48 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 07:05:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:05:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:05:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:05:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 90466208-9d7b-4071-a382-3534ec332b1f does not exist
Jan 31 07:05:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4daf3479-c283-4211-85f7-6a526cffcf3a does not exist
Jan 31 07:05:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0568cd95-85e9-43c1-9deb-08815ab05ae9 does not exist
Jan 31 07:05:48 compute-0 sudo[82729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:05:48 compute-0 sudo[82729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:48 compute-0 sudo[82729]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:48 compute-0 sudo[82754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:05:48 compute-0 sudo[82754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:05:48 compute-0 sudo[82754]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:49 compute-0 ceph-mon[74496]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:49 compute-0 ceph-mon[74496]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:05:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:49 compute-0 ceph-mon[74496]: Added host compute-0
Jan 31 07:05:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:05:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:05:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:05:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:05:51 compute-0 ceph-mon[74496]: Deploying cephadm binary to compute-1
Jan 31 07:05:51 compute-0 ceph-mon[74496]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 07:05:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:53 compute-0 ceph-mgr[74791]: [cephadm INFO root] Added host compute-1
Jan 31 07:05:53 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 31 07:05:53 compute-0 ceph-mon[74496]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:05:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:05:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:54 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 31 07:05:54 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 31 07:05:54 compute-0 ceph-mon[74496]: Added host compute-1
Jan 31 07:05:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:05:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:55 compute-0 ceph-mon[74496]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:55 compute-0 ceph-mon[74496]: Deploying cephadm binary to compute-2
Jan 31 07:05:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 07:05:57 compute-0 ceph-mon[74496]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: [cephadm INFO root] Added host compute-2
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 07:05:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 07:05:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 31 07:05:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:57 compute-0 vibrant_margulis[82586]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 07:05:57 compute-0 vibrant_margulis[82586]: Added host 'compute-1' with addr '192.168.122.101'
Jan 31 07:05:57 compute-0 vibrant_margulis[82586]: Added host 'compute-2' with addr '192.168.122.102'
Jan 31 07:05:57 compute-0 vibrant_margulis[82586]: Scheduled mon update...
Jan 31 07:05:57 compute-0 vibrant_margulis[82586]: Scheduled mgr update...
Jan 31 07:05:57 compute-0 vibrant_margulis[82586]: Scheduled osd.default_drive_group update...
Jan 31 07:05:57 compute-0 systemd[1]: libpod-a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3.scope: Deactivated successfully.
Jan 31 07:05:57 compute-0 podman[82571]: 2026-01-31 07:05:57.949772147 +0000 UTC m=+10.758638637 container died a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3 (image=quay.io/ceph/ceph:v18, name=vibrant_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b97f8ac0667598d28118dcb7e3bb54d6689992d855a74eee814f22bfafc61f6-merged.mount: Deactivated successfully.
Jan 31 07:05:58 compute-0 podman[82571]: 2026-01-31 07:05:58.006507916 +0000 UTC m=+10.815374396 container remove a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3 (image=quay.io/ceph/ceph:v18, name=vibrant_margulis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:05:58 compute-0 systemd[1]: libpod-conmon-a64651fdedfa38e81b5466e31dc4b400f7b7ba17ae0588174d0fec432301e8a3.scope: Deactivated successfully.
Jan 31 07:05:58 compute-0 sudo[82568]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:58 compute-0 sudo[82817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twyutesntbrqkjysnwutirqzxvogdnjd ; /usr/bin/python3'
Jan 31 07:05:58 compute-0 sudo[82817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:05:58 compute-0 python3[82819]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:05:58 compute-0 podman[82821]: 2026-01-31 07:05:58.454179637 +0000 UTC m=+0.047292198 container create 7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528 (image=quay.io/ceph/ceph:v18, name=hungry_boyd, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:05:58 compute-0 systemd[1]: Started libpod-conmon-7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528.scope.
Jan 31 07:05:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b938e6ee6bef8649d64c83421f30dfbd1df33164f96be1fad41b64f4bdc8bd04/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b938e6ee6bef8649d64c83421f30dfbd1df33164f96be1fad41b64f4bdc8bd04/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b938e6ee6bef8649d64c83421f30dfbd1df33164f96be1fad41b64f4bdc8bd04/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:05:58 compute-0 podman[82821]: 2026-01-31 07:05:58.519953527 +0000 UTC m=+0.113065938 container init 7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528 (image=quay.io/ceph/ceph:v18, name=hungry_boyd, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 07:05:58 compute-0 podman[82821]: 2026-01-31 07:05:58.431618854 +0000 UTC m=+0.024731425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:05:58 compute-0 podman[82821]: 2026-01-31 07:05:58.525110214 +0000 UTC m=+0.118222615 container start 7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528 (image=quay.io/ceph/ceph:v18, name=hungry_boyd, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:05:58 compute-0 podman[82821]: 2026-01-31 07:05:58.53146419 +0000 UTC m=+0.124576591 container attach 7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528 (image=quay.io/ceph/ceph:v18, name=hungry_boyd, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:05:58 compute-0 ceph-mon[74496]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:58 compute-0 ceph-mon[74496]: Added host compute-2
Jan 31 07:05:58 compute-0 ceph-mon[74496]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:58 compute-0 ceph-mon[74496]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:58 compute-0 ceph-mon[74496]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 07:05:58 compute-0 ceph-mon[74496]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 07:05:58 compute-0 ceph-mon[74496]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 07:05:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:05:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 07:05:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150708696' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:05:59 compute-0 hungry_boyd[82837]: 
Jan 31 07:05:59 compute-0 hungry_boyd[82837]: {"fsid":"f70fcd2a-dcb4-5f89-a4ba-79a09959083b","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":84,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T07:04:31.861311+0000","services":{}},"progress_events":{}}
Jan 31 07:05:59 compute-0 systemd[1]: libpod-7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528.scope: Deactivated successfully.
Jan 31 07:05:59 compute-0 podman[82862]: 2026-01-31 07:05:59.177640111 +0000 UTC m=+0.024956882 container died 7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528 (image=quay.io/ceph/ceph:v18, name=hungry_boyd, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b938e6ee6bef8649d64c83421f30dfbd1df33164f96be1fad41b64f4bdc8bd04-merged.mount: Deactivated successfully.
Jan 31 07:05:59 compute-0 podman[82862]: 2026-01-31 07:05:59.218186464 +0000 UTC m=+0.065503205 container remove 7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528 (image=quay.io/ceph/ceph:v18, name=hungry_boyd, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 07:05:59 compute-0 systemd[1]: libpod-conmon-7b3b55ec6dc96c5c0f72629443098cb6663642bcf3babb0924dec425de239528.scope: Deactivated successfully.
Jan 31 07:05:59 compute-0 sudo[82817]: pam_unix(sudo:session): session closed for user root
Jan 31 07:05:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:05:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:05:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1150708696' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:06:00 compute-0 ceph-mon[74496]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:02 compute-0 ceph-mon[74496]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:05 compute-0 ceph-mon[74496]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:07 compute-0 ceph-mon[74496]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:09 compute-0 ceph-mon[74496]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:11 compute-0 ceph-mon[74496]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 07:06:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:06:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:06:11 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 31 07:06:11 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 31 07:06:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:12 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:12 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:06:12 compute-0 ceph-mon[74496]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 07:06:13 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:13 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:13 compute-0 ceph-mon[74496]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:13 compute-0 ceph-mon[74496]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:14 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:14 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:14 compute-0 ceph-mon[74496]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:06:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 664173e5-04c6-4e07-a033-5ed24b82b0b5 (Updating crash deployment (+1 -> 2))
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:06:15.281+0000 7fb5f6518640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: service_name: mon
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: placement:
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   hosts:
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   - compute-0
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   - compute-1
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   - compute-2
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:06:15.282+0000 7fb5f6518640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: service_name: mgr
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: placement:
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   hosts:
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   - compute-0
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   - compute-1
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]:   - compute-2
Jan 31 07:06:15 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 07:06:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 07:06:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:06:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:06:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 31 07:06:15 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 31 07:06:15 compute-0 ceph-mon[74496]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:15 compute-0 ceph-mon[74496]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:06:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:06:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 31 07:06:16 compute-0 ceph-mon[74496]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 07:06:16 compute-0 ceph-mon[74496]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:16 compute-0 ceph-mon[74496]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 07:06:16 compute-0 ceph-mon[74496]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:16 compute-0 ceph-mon[74496]: Deploying daemon crash.compute-1 on compute-1
Jan 31 07:06:16 compute-0 ceph-mon[74496]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 31 07:06:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:17 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 664173e5-04c6-4e07-a033-5ed24b82b0b5 (Updating crash deployment (+1 -> 2))
Jan 31 07:06:17 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 664173e5-04c6-4e07-a033-5ed24b82b0b5 (Updating crash deployment (+1 -> 2)) in 2 seconds
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:06:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:17 compute-0 sudo[82878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:17 compute-0 sudo[82878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:17 compute-0 sudo[82878]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:17 compute-0 sudo[82903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:17 compute-0 sudo[82903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:17 compute-0 sudo[82903]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:17 compute-0 sudo[82928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:17 compute-0 sudo[82928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:17 compute-0 sudo[82928]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:17 compute-0 sudo[82953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:06:17 compute-0 sudo[82953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:17 compute-0 podman[83017]: 2026-01-31 07:06:17.850031603 +0000 UTC m=+0.041932658 container create f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 31 07:06:17 compute-0 systemd[1]: Started libpod-conmon-f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f.scope.
Jan 31 07:06:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:17 compute-0 podman[83017]: 2026-01-31 07:06:17.827946542 +0000 UTC m=+0.019847607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:17 compute-0 podman[83017]: 2026-01-31 07:06:17.929196151 +0000 UTC m=+0.121097246 container init f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:17 compute-0 podman[83017]: 2026-01-31 07:06:17.935112046 +0000 UTC m=+0.127013071 container start f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:06:17 compute-0 quirky_blackburn[83033]: 167 167
Jan 31 07:06:17 compute-0 systemd[1]: libpod-f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f.scope: Deactivated successfully.
Jan 31 07:06:17 compute-0 podman[83017]: 2026-01-31 07:06:17.941757059 +0000 UTC m=+0.133658174 container attach f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:06:17 compute-0 podman[83017]: 2026-01-31 07:06:17.942116648 +0000 UTC m=+0.134017713 container died f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-210dadd27b9e2e73dc02da79a45275b8b6d055967b0a2a3b7f0e535f81176e9f-merged.mount: Deactivated successfully.
Jan 31 07:06:17 compute-0 podman[83017]: 2026-01-31 07:06:17.986383632 +0000 UTC m=+0.178284697 container remove f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_blackburn, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:17 compute-0 systemd[1]: libpod-conmon-f8a7c65d79cb522ff19c91272805304de21c0b76c444754248992bbd170a0b5f.scope: Deactivated successfully.
Jan 31 07:06:18 compute-0 podman[83057]: 2026-01-31 07:06:18.126065871 +0000 UTC m=+0.054849664 container create 9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:06:18 compute-0 systemd[1]: Started libpod-conmon-9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98.scope.
Jan 31 07:06:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8d2497cdd9fe094149d1babab5d92738657d2346031501d95f0cb94d436ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8d2497cdd9fe094149d1babab5d92738657d2346031501d95f0cb94d436ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8d2497cdd9fe094149d1babab5d92738657d2346031501d95f0cb94d436ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8d2497cdd9fe094149d1babab5d92738657d2346031501d95f0cb94d436ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f8d2497cdd9fe094149d1babab5d92738657d2346031501d95f0cb94d436ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:18 compute-0 podman[83057]: 2026-01-31 07:06:18.099953352 +0000 UTC m=+0.028737155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:18 compute-0 podman[83057]: 2026-01-31 07:06:18.208409157 +0000 UTC m=+0.137192920 container init 9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:06:18 compute-0 podman[83057]: 2026-01-31 07:06:18.222770119 +0000 UTC m=+0.151553872 container start 9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:06:18 compute-0 podman[83057]: 2026-01-31 07:06:18.226939361 +0000 UTC m=+0.155723134 container attach 9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:06:18 compute-0 ceph-mon[74496]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:06:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:18 compute-0 strange_curie[83073]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:06:18 compute-0 strange_curie[83073]: --> relative data size: 1.0
Jan 31 07:06:18 compute-0 strange_curie[83073]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:06:18 compute-0 strange_curie[83073]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d19aa227-e399-4341-9824-b20a6ddbc903
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"} v 0) v1
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2157664835' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"}]: dispatch
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2157664835' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"}]': finished
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 07:06:19 compute-0 lvm[83120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:06:19 compute-0 lvm[83120]: VG ceph_vg0 finished
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"} v 0) v1
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2276134975' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"}]: dispatch
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2276134975' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"}]': finished
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:06:19
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [balancer INFO root] No pools available
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504108929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:06:19 compute-0 strange_curie[83073]:  stderr: got monmap epoch 1
Jan 31 07:06:19 compute-0 strange_curie[83073]: --> Creating keyring file for osd.0
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 2 completed events
Jan 31 07:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:06:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:06:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:06:19 compute-0 strange_curie[83073]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid d19aa227-e399-4341-9824-b20a6ddbc903 --setuser ceph --setgroup ceph
Jan 31 07:06:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 31 07:06:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/779569343' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:06:20 compute-0 ceph-mon[74496]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2157664835' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"}]: dispatch
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2157664835' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d19aa227-e399-4341-9824-b20a6ddbc903"}]': finished
Jan 31 07:06:20 compute-0 ceph-mon[74496]: osdmap e4: 1 total, 0 up, 1 in
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2276134975' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"}]: dispatch
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2276134975' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea64e94f-e08a-40f8-8a79-f0fb7a401afe"}]': finished
Jan 31 07:06:20 compute-0 ceph-mon[74496]: osdmap e5: 2 total, 0 up, 2 in
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2504108929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/779569343' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:06:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 07:06:21 compute-0 ceph-mon[74496]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 07:06:22 compute-0 ceph-mon[74496]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:22 compute-0 strange_curie[83073]:  stderr: 2026-01-31T07:06:20.021+0000 7fd48d5f8740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 07:06:22 compute-0 strange_curie[83073]:  stderr: 2026-01-31T07:06:20.021+0000 7fd48d5f8740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 07:06:22 compute-0 strange_curie[83073]:  stderr: 2026-01-31T07:06:20.021+0000 7fd48d5f8740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 07:06:22 compute-0 strange_curie[83073]:  stderr: 2026-01-31T07:06:20.021+0000 7fd48d5f8740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 31 07:06:22 compute-0 strange_curie[83073]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 07:06:22 compute-0 strange_curie[83073]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:06:22 compute-0 strange_curie[83073]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 07:06:22 compute-0 strange_curie[83073]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:22 compute-0 strange_curie[83073]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:22 compute-0 strange_curie[83073]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:06:22 compute-0 strange_curie[83073]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:06:22 compute-0 strange_curie[83073]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 07:06:22 compute-0 strange_curie[83073]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 07:06:22 compute-0 systemd[1]: libpod-9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98.scope: Deactivated successfully.
Jan 31 07:06:22 compute-0 systemd[1]: libpod-9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98.scope: Consumed 2.222s CPU time.
Jan 31 07:06:22 compute-0 podman[83057]: 2026-01-31 07:06:22.593779559 +0000 UTC m=+4.522563342 container died 9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:06:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-19f8d2497cdd9fe094149d1babab5d92738657d2346031501d95f0cb94d436ec-merged.mount: Deactivated successfully.
Jan 31 07:06:22 compute-0 podman[83057]: 2026-01-31 07:06:22.663405902 +0000 UTC m=+4.592189665 container remove 9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:06:22 compute-0 systemd[1]: libpod-conmon-9091af2bb8674444137c3a4524f7e3b6e0ba9f537807269b944be7e19a170c98.scope: Deactivated successfully.
Jan 31 07:06:22 compute-0 sudo[82953]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:22 compute-0 sudo[84036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:22 compute-0 sudo[84036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:22 compute-0 sudo[84036]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:22 compute-0 sudo[84061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:22 compute-0 sudo[84061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:22 compute-0 sudo[84061]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:22 compute-0 sudo[84086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:22 compute-0 sudo[84086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:22 compute-0 sudo[84086]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:22 compute-0 sudo[84111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:06:22 compute-0 sudo[84111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:23 compute-0 podman[84177]: 2026-01-31 07:06:23.13618707 +0000 UTC m=+0.030873861 container create ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:23 compute-0 systemd[1]: Started libpod-conmon-ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b.scope.
Jan 31 07:06:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:23 compute-0 podman[84177]: 2026-01-31 07:06:23.208778482 +0000 UTC m=+0.103465323 container init ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:06:23 compute-0 podman[84177]: 2026-01-31 07:06:23.216171641 +0000 UTC m=+0.110858452 container start ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:06:23 compute-0 podman[84177]: 2026-01-31 07:06:23.121367331 +0000 UTC m=+0.016054132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:23 compute-0 confident_lumiere[84194]: 167 167
Jan 31 07:06:23 compute-0 systemd[1]: libpod-ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b.scope: Deactivated successfully.
Jan 31 07:06:23 compute-0 conmon[84194]: conmon ea643b765c1c101b7a4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b.scope/container/memory.events
Jan 31 07:06:23 compute-0 podman[84177]: 2026-01-31 07:06:23.221984388 +0000 UTC m=+0.116671189 container attach ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:06:23 compute-0 podman[84177]: 2026-01-31 07:06:23.223352014 +0000 UTC m=+0.118038825 container died ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-50f6b531183a1327e6f144fde5434dd10684b51a92951b3a3a81d9ffd5cd2d9e-merged.mount: Deactivated successfully.
Jan 31 07:06:23 compute-0 podman[84177]: 2026-01-31 07:06:23.256805704 +0000 UTC m=+0.151492485 container remove ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:06:23 compute-0 systemd[1]: libpod-conmon-ea643b765c1c101b7a4c679715fda00660903192f8277b18654a6e6a4a724e9b.scope: Deactivated successfully.
Jan 31 07:06:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:23 compute-0 podman[84218]: 2026-01-31 07:06:23.388647191 +0000 UTC m=+0.039344280 container create 08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:23 compute-0 systemd[1]: Started libpod-conmon-08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf.scope.
Jan 31 07:06:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86f6eb83224bc9856469e08cec3ff575015cb9a96692793fac3547cd271679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86f6eb83224bc9856469e08cec3ff575015cb9a96692793fac3547cd271679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86f6eb83224bc9856469e08cec3ff575015cb9a96692793fac3547cd271679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f86f6eb83224bc9856469e08cec3ff575015cb9a96692793fac3547cd271679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:23 compute-0 podman[84218]: 2026-01-31 07:06:23.367290396 +0000 UTC m=+0.017987475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:23 compute-0 podman[84218]: 2026-01-31 07:06:23.475973089 +0000 UTC m=+0.126670178 container init 08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:06:23 compute-0 podman[84218]: 2026-01-31 07:06:23.48604162 +0000 UTC m=+0.136738689 container start 08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:06:23 compute-0 podman[84218]: 2026-01-31 07:06:23.489394361 +0000 UTC m=+0.140091450 container attach 08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:06:24 compute-0 goofy_beaver[84235]: {
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:     "0": [
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:         {
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "devices": [
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "/dev/loop3"
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             ],
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "lv_name": "ceph_lv0",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "lv_size": "7511998464",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "name": "ceph_lv0",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "tags": {
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.cluster_name": "ceph",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.crush_device_class": "",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.encrypted": "0",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.osd_id": "0",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.type": "block",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:                 "ceph.vdo": "0"
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             },
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "type": "block",
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:             "vg_name": "ceph_vg0"
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:         }
Jan 31 07:06:24 compute-0 goofy_beaver[84235]:     ]
Jan 31 07:06:24 compute-0 goofy_beaver[84235]: }
Jan 31 07:06:24 compute-0 systemd[1]: libpod-08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf.scope: Deactivated successfully.
Jan 31 07:06:24 compute-0 podman[84218]: 2026-01-31 07:06:24.251827829 +0000 UTC m=+0.902524908 container died 08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_beaver, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f86f6eb83224bc9856469e08cec3ff575015cb9a96692793fac3547cd271679-merged.mount: Deactivated successfully.
Jan 31 07:06:24 compute-0 podman[84218]: 2026-01-31 07:06:24.299719178 +0000 UTC m=+0.950416247 container remove 08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_beaver, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 31 07:06:24 compute-0 systemd[1]: libpod-conmon-08db928dcd547b201dd099ef8d6eb3a42f8b06a98ea1182722ebe3b84ab38abf.scope: Deactivated successfully.
Jan 31 07:06:24 compute-0 sudo[84111]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 31 07:06:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 07:06:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:24 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 31 07:06:24 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 31 07:06:24 compute-0 sudo[84258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:24 compute-0 sudo[84258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:24 compute-0 sudo[84258]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:24 compute-0 ceph-mon[74496]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 07:06:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:24 compute-0 sudo[84283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:24 compute-0 sudo[84283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:24 compute-0 sudo[84283]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:24 compute-0 sudo[84308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:24 compute-0 sudo[84308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:24 compute-0 sudo[84308]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:24 compute-0 sudo[84333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:06:24 compute-0 sudo[84333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 31 07:06:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 07:06:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:24 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 31 07:06:24 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 31 07:06:24 compute-0 podman[84398]: 2026-01-31 07:06:24.823955469 +0000 UTC m=+0.037514450 container create a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:06:24 compute-0 systemd[1]: Started libpod-conmon-a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34.scope.
Jan 31 07:06:24 compute-0 podman[84398]: 2026-01-31 07:06:24.807578598 +0000 UTC m=+0.021137579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:24 compute-0 podman[84398]: 2026-01-31 07:06:24.919846439 +0000 UTC m=+0.133405500 container init a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:06:24 compute-0 podman[84398]: 2026-01-31 07:06:24.929115068 +0000 UTC m=+0.142674029 container start a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:06:24 compute-0 podman[84398]: 2026-01-31 07:06:24.933237229 +0000 UTC m=+0.146796280 container attach a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:06:24 compute-0 silly_raman[84414]: 167 167
Jan 31 07:06:24 compute-0 systemd[1]: libpod-a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34.scope: Deactivated successfully.
Jan 31 07:06:24 compute-0 conmon[84414]: conmon a0594b2648002e6d1da5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34.scope/container/memory.events
Jan 31 07:06:24 compute-0 podman[84398]: 2026-01-31 07:06:24.937596986 +0000 UTC m=+0.151155957 container died a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:06:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-38cbca773c831d52fea60f83002c138c349627a2e7444d36bac654b3f742144e-merged.mount: Deactivated successfully.
Jan 31 07:06:24 compute-0 podman[84398]: 2026-01-31 07:06:24.984858428 +0000 UTC m=+0.198417389 container remove a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:06:24 compute-0 systemd[1]: libpod-conmon-a0594b2648002e6d1da5e1f25ee4255d7bcd4fd4840a08c2979ef912cae7ea34.scope: Deactivated successfully.
Jan 31 07:06:25 compute-0 podman[84446]: 2026-01-31 07:06:25.219189611 +0000 UTC m=+0.043906202 container create 5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:06:25 compute-0 systemd[1]: Started libpod-conmon-5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d.scope.
Jan 31 07:06:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78baa4d2aa3ceb948e9ebfb7350e390fdd5a08a01774abe3fda112018aad7fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78baa4d2aa3ceb948e9ebfb7350e390fdd5a08a01774abe3fda112018aad7fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78baa4d2aa3ceb948e9ebfb7350e390fdd5a08a01774abe3fda112018aad7fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78baa4d2aa3ceb948e9ebfb7350e390fdd5a08a01774abe3fda112018aad7fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78baa4d2aa3ceb948e9ebfb7350e390fdd5a08a01774abe3fda112018aad7fe/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:25 compute-0 podman[84446]: 2026-01-31 07:06:25.199380228 +0000 UTC m=+0.024096899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:25 compute-0 podman[84446]: 2026-01-31 07:06:25.305215405 +0000 UTC m=+0.129932006 container init 5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:06:25 compute-0 podman[84446]: 2026-01-31 07:06:25.313724714 +0000 UTC m=+0.138441295 container start 5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:06:25 compute-0 podman[84446]: 2026-01-31 07:06:25.316666823 +0000 UTC m=+0.141383404 container attach 5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:06:25 compute-0 ceph-mon[74496]: Deploying daemon osd.0 on compute-0
Jan 31 07:06:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 07:06:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:25 compute-0 ceph-mon[74496]: Deploying daemon osd.1 on compute-1
Jan 31 07:06:25 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test[84462]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 31 07:06:25 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test[84462]:                             [--no-systemd] [--no-tmpfs]
Jan 31 07:06:25 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test[84462]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 07:06:25 compute-0 systemd[1]: libpod-5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d.scope: Deactivated successfully.
Jan 31 07:06:25 compute-0 podman[84446]: 2026-01-31 07:06:25.954351616 +0000 UTC m=+0.779068217 container died 5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c78baa4d2aa3ceb948e9ebfb7350e390fdd5a08a01774abe3fda112018aad7fe-merged.mount: Deactivated successfully.
Jan 31 07:06:26 compute-0 podman[84446]: 2026-01-31 07:06:26.036615699 +0000 UTC m=+0.861332290 container remove 5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:06:26 compute-0 systemd[1]: libpod-conmon-5a756e09b9b241c1a1bad0dc6644d00487cf6c522c6e5a8d391bac895dac3f2d.scope: Deactivated successfully.
Jan 31 07:06:26 compute-0 systemd[1]: Reloading.
Jan 31 07:06:26 compute-0 systemd-sysv-generator[84522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:06:26 compute-0 systemd-rc-local-generator[84519]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:06:26 compute-0 ceph-mon[74496]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:26 compute-0 systemd[1]: Reloading.
Jan 31 07:06:26 compute-0 systemd-rc-local-generator[84562]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:06:26 compute-0 systemd-sysv-generator[84565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:06:26 compute-0 systemd[1]: Starting Ceph osd.0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:06:26 compute-0 podman[84620]: 2026-01-31 07:06:26.874984121 +0000 UTC m=+0.054236761 container create cd57f404b67d1eef9a5d1901b4d45398290d9811a193d99776087973d639b1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab10e58eeeec7bb7c62b04df5dcbb82e8d5770ea2a9c9af34515d39c398d38e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab10e58eeeec7bb7c62b04df5dcbb82e8d5770ea2a9c9af34515d39c398d38e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab10e58eeeec7bb7c62b04df5dcbb82e8d5770ea2a9c9af34515d39c398d38e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab10e58eeeec7bb7c62b04df5dcbb82e8d5770ea2a9c9af34515d39c398d38e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab10e58eeeec7bb7c62b04df5dcbb82e8d5770ea2a9c9af34515d39c398d38e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:26 compute-0 podman[84620]: 2026-01-31 07:06:26.852583038 +0000 UTC m=+0.031835738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:26 compute-0 podman[84620]: 2026-01-31 07:06:26.974533738 +0000 UTC m=+0.153786388 container init cd57f404b67d1eef9a5d1901b4d45398290d9811a193d99776087973d639b1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:06:26 compute-0 podman[84620]: 2026-01-31 07:06:26.98168051 +0000 UTC m=+0.160933140 container start cd57f404b67d1eef9a5d1901b4d45398290d9811a193d99776087973d639b1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:26 compute-0 podman[84620]: 2026-01-31 07:06:26.986189482 +0000 UTC m=+0.165442112 container attach cd57f404b67d1eef9a5d1901b4d45398290d9811a193d99776087973d639b1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:27 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate[84635]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:06:27 compute-0 bash[84620]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:06:27 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate[84635]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 07:06:27 compute-0 bash[84620]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 07:06:27 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate[84635]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 07:06:27 compute-0 bash[84620]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 07:06:27 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate[84635]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:06:27 compute-0 bash[84620]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 07:06:27 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate[84635]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:27 compute-0 bash[84620]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:27 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate[84635]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:06:27 compute-0 bash[84620]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 07:06:27 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate[84635]: --> ceph-volume raw activate successful for osd ID: 0
Jan 31 07:06:27 compute-0 bash[84620]: --> ceph-volume raw activate successful for osd ID: 0
Jan 31 07:06:27 compute-0 systemd[1]: libpod-cd57f404b67d1eef9a5d1901b4d45398290d9811a193d99776087973d639b1ad.scope: Deactivated successfully.
Jan 31 07:06:27 compute-0 podman[84620]: 2026-01-31 07:06:27.824863151 +0000 UTC m=+1.004115781 container died cd57f404b67d1eef9a5d1901b4d45398290d9811a193d99776087973d639b1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:06:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dab10e58eeeec7bb7c62b04df5dcbb82e8d5770ea2a9c9af34515d39c398d38e-merged.mount: Deactivated successfully.
Jan 31 07:06:27 compute-0 podman[84620]: 2026-01-31 07:06:27.876791348 +0000 UTC m=+1.056043988 container remove cd57f404b67d1eef9a5d1901b4d45398290d9811a193d99776087973d639b1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:06:28 compute-0 podman[84797]: 2026-01-31 07:06:28.06275532 +0000 UTC m=+0.044901818 container create 7fd99238d214935b3dacb5b0284648e15db53a9954031165abce49a65cde0630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adc12f2687c18b3cdb07e162f1a88b08c133ab5d12031270cb3d36f3d463d6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adc12f2687c18b3cdb07e162f1a88b08c133ab5d12031270cb3d36f3d463d6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adc12f2687c18b3cdb07e162f1a88b08c133ab5d12031270cb3d36f3d463d6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adc12f2687c18b3cdb07e162f1a88b08c133ab5d12031270cb3d36f3d463d6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adc12f2687c18b3cdb07e162f1a88b08c133ab5d12031270cb3d36f3d463d6e/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:28 compute-0 podman[84797]: 2026-01-31 07:06:28.122159299 +0000 UTC m=+0.104305777 container init 7fd99238d214935b3dacb5b0284648e15db53a9954031165abce49a65cde0630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:06:28 compute-0 podman[84797]: 2026-01-31 07:06:28.041318474 +0000 UTC m=+0.023465012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:28 compute-0 podman[84797]: 2026-01-31 07:06:28.135525498 +0000 UTC m=+0.117671956 container start 7fd99238d214935b3dacb5b0284648e15db53a9954031165abce49a65cde0630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:28 compute-0 bash[84797]: 7fd99238d214935b3dacb5b0284648e15db53a9954031165abce49a65cde0630
Jan 31 07:06:28 compute-0 systemd[1]: Started Ceph osd.0 for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:06:28 compute-0 sudo[84333]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:06:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:06:28 compute-0 ceph-osd[84816]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:06:28 compute-0 ceph-osd[84816]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 31 07:06:28 compute-0 ceph-osd[84816]: pidfile_write: ignore empty --pid-file
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d82af800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d82af800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d82af800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d82af800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d90e7800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d90e7800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d90e7800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d90e7800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d90e7800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d82af800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:06:28 compute-0 sudo[84829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:28 compute-0 sudo[84829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:28 compute-0 sudo[84829]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:28 compute-0 sudo[84857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:28 compute-0 sudo[84857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:28 compute-0 sudo[84857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:28 compute-0 sudo[84882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:28 compute-0 sudo[84882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:28 compute-0 sudo[84882]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:28 compute-0 sudo[84907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:06:28 compute-0 ceph-mon[74496]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:28 compute-0 sudo[84907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:28 compute-0 ceph-osd[84816]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 31 07:06:28 compute-0 ceph-osd[84816]: load: jerasure load: lrc 
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:06:28 compute-0 podman[84979]: 2026-01-31 07:06:28.706673941 +0000 UTC m=+0.030713057 container create a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:28 compute-0 systemd[1]: Started libpod-conmon-a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5.scope.
Jan 31 07:06:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:06:28 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:06:28 compute-0 podman[84979]: 2026-01-31 07:06:28.755638549 +0000 UTC m=+0.079677775 container init a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:06:28 compute-0 podman[84979]: 2026-01-31 07:06:28.763952772 +0000 UTC m=+0.087991928 container start a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:28 compute-0 podman[84979]: 2026-01-31 07:06:28.768975168 +0000 UTC m=+0.093014334 container attach a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 07:06:28 compute-0 sweet_burnell[84996]: 167 167
Jan 31 07:06:28 compute-0 systemd[1]: libpod-a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5.scope: Deactivated successfully.
Jan 31 07:06:28 compute-0 podman[84979]: 2026-01-31 07:06:28.772555764 +0000 UTC m=+0.096594940 container died a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 07:06:28 compute-0 podman[84979]: 2026-01-31 07:06:28.692348466 +0000 UTC m=+0.016387612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c235fbb7b8add140ee79ad58e0a7c4119d903faeba02746feadbc81fd57e579a-merged.mount: Deactivated successfully.
Jan 31 07:06:28 compute-0 podman[84979]: 2026-01-31 07:06:28.812741265 +0000 UTC m=+0.136780391 container remove a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:06:28 compute-0 systemd[1]: libpod-conmon-a462953a1f169bc8bc1692ca95bc7604b124e914bcfe13de45d5f5f869bf2de5.scope: Deactivated successfully.
Jan 31 07:06:28 compute-0 podman[85025]: 2026-01-31 07:06:28.944565121 +0000 UTC m=+0.046499512 container create de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:28 compute-0 systemd[1]: Started libpod-conmon-de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872.scope.
Jan 31 07:06:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c5fdd34bfe551a5194c2a1d1b2a4204b773be72427d442db5674701544d198/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c5fdd34bfe551a5194c2a1d1b2a4204b773be72427d442db5674701544d198/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c5fdd34bfe551a5194c2a1d1b2a4204b773be72427d442db5674701544d198/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23c5fdd34bfe551a5194c2a1d1b2a4204b773be72427d442db5674701544d198/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:29 compute-0 podman[85025]: 2026-01-31 07:06:29.015265873 +0000 UTC m=+0.117200354 container init de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:29 compute-0 podman[85025]: 2026-01-31 07:06:28.923510724 +0000 UTC m=+0.025445205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:29 compute-0 podman[85025]: 2026-01-31 07:06:29.027291276 +0000 UTC m=+0.129225697 container start de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:06:29 compute-0 podman[85025]: 2026-01-31 07:06:29.030910204 +0000 UTC m=+0.132844595 container attach de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 07:06:29 compute-0 ceph-osd[84816]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9168c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs mount
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs mount shared_bdev_used = 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Git sha 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: DB SUMMARY
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: DB Session ID:  7PL9XJIG88Q2HXDZMB0J
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                                     Options.env: 0x55f9d9139c70
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                                Options.info_log: 0x55f9d832cba0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.write_buffer_manager: 0x55f9d9242460
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.row_cache: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                              Options.wal_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.wal_compression: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Compression algorithms supported:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kZSTD supported: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832c5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8322430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f6eeab8b-ebc9-457e-8ff7-019d5cfd8fb7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189081578, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189081751, "job": 1, "event": "recovery_finished"}
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: freelist init
Jan 31 07:06:29 compute-0 ceph-osd[84816]: freelist _read_cfg
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs umount
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 07:06:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bdev(0x55f9d9169400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs mount
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluefs mount shared_bdev_used = 4718592
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: RocksDB version: 7.9.2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Git sha 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: DB SUMMARY
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: DB Session ID:  7PL9XJIG88Q2HXDZMB0I
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: CURRENT file:  CURRENT
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                         Options.error_if_exists: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.create_if_missing: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                                     Options.env: 0x55f9d836e310
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                                Options.info_log: 0x55f9d832dcc0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                              Options.statistics: (nil)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.use_fsync: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                              Options.db_log_dir: 
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.write_buffer_manager: 0x55f9d9242460
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.unordered_write: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.row_cache: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                              Options.wal_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.two_write_queues: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.wal_compression: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.atomic_flush: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.max_background_jobs: 4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.max_background_compactions: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.max_subcompactions: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.max_open_files: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Compression algorithms supported:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kZSTD supported: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kXpressCompression supported: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kBZip2Compression supported: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kLZ4Compression supported: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kZlibCompression supported: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         kSnappyCompression supported: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832d860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 sudo[85265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxdcaabbpfflqwbhcyemkfdsdwspoxgb ; /usr/bin/python3'
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832d860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 sudo[85265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832d860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832d860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832d860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832d860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d832d860)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d85fcfe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d85fcfe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:           Options.merge_operator: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f9d85fcfe0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55f9d8323770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.compression: LZ4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.num_levels: 7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.bloom_locality: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                               Options.ttl: 2592000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                       Options.enable_blob_files: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                           Options.min_blob_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f6eeab8b-ebc9-457e-8ff7-019d5cfd8fb7
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189366305, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189378209, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843189, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f6eeab8b-ebc9-457e-8ff7-019d5cfd8fb7", "db_session_id": "7PL9XJIG88Q2HXDZMB0I", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189385133, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843189, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f6eeab8b-ebc9-457e-8ff7-019d5cfd8fb7", "db_session_id": "7PL9XJIG88Q2HXDZMB0I", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189388837, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843189, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f6eeab8b-ebc9-457e-8ff7-019d5cfd8fb7", "db_session_id": "7PL9XJIG88Q2HXDZMB0I", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843189390167, "job": 1, "event": "recovery_finished"}
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f9d83f5c00
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: DB pointer 0x55f9d922ba00
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 31 07:06:29 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:06:29 compute-0 ceph-osd[84816]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 07:06:29 compute-0 ceph-osd[84816]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 07:06:29 compute-0 ceph-osd[84816]: _get_class not permitted to load lua
Jan 31 07:06:29 compute-0 ceph-osd[84816]: _get_class not permitted to load sdk
Jan 31 07:06:29 compute-0 ceph-osd[84816]: _get_class not permitted to load test_remote_reads
Jan 31 07:06:29 compute-0 ceph-osd[84816]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 07:06:29 compute-0 ceph-osd[84816]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 07:06:29 compute-0 ceph-osd[84816]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 07:06:29 compute-0 ceph-osd[84816]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 07:06:29 compute-0 ceph-osd[84816]: osd.0 0 load_pgs
Jan 31 07:06:29 compute-0 ceph-osd[84816]: osd.0 0 load_pgs opened 0 pgs
Jan 31 07:06:29 compute-0 ceph-osd[84816]: osd.0 0 log_to_monitors true
Jan 31 07:06:29 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0[84812]: 2026-01-31T07:06:29.416+0000 7fd700983740 -1 osd.0 0 log_to_monitors true
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 31 07:06:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:29 compute-0 ceph-mon[74496]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:29 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:29 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:29 compute-0 python3[85301]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:06:29 compute-0 podman[85482]: 2026-01-31 07:06:29.552667288 +0000 UTC m=+0.042638918 container create fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9 (image=quay.io/ceph/ceph:v18, name=recursing_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:06:29 compute-0 systemd[1]: Started libpod-conmon-fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9.scope.
Jan 31 07:06:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac1cbfa15c10e2ce9e10ee859b3aa24c3a9febb75f9bbfe9df9286fb75445ee/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac1cbfa15c10e2ce9e10ee859b3aa24c3a9febb75f9bbfe9df9286fb75445ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac1cbfa15c10e2ce9e10ee859b3aa24c3a9febb75f9bbfe9df9286fb75445ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:29 compute-0 podman[85482]: 2026-01-31 07:06:29.618681084 +0000 UTC m=+0.108652734 container init fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9 (image=quay.io/ceph/ceph:v18, name=recursing_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:06:29 compute-0 podman[85482]: 2026-01-31 07:06:29.623673658 +0000 UTC m=+0.113645288 container start fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9 (image=quay.io/ceph/ceph:v18, name=recursing_easley, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:29 compute-0 podman[85482]: 2026-01-31 07:06:29.628128498 +0000 UTC m=+0.118100138 container attach fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9 (image=quay.io/ceph/ceph:v18, name=recursing_easley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:29 compute-0 podman[85482]: 2026-01-31 07:06:29.534263643 +0000 UTC m=+0.024235303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]: {
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]:         "osd_id": 0,
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]:         "type": "bluestore"
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]:     }
Jan 31 07:06:29 compute-0 compassionate_dijkstra[85041]: }
Jan 31 07:06:29 compute-0 systemd[1]: libpod-de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872.scope: Deactivated successfully.
Jan 31 07:06:29 compute-0 podman[85518]: 2026-01-31 07:06:29.834764777 +0000 UTC m=+0.025736874 container died de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:06:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-23c5fdd34bfe551a5194c2a1d1b2a4204b773be72427d442db5674701544d198-merged.mount: Deactivated successfully.
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 31 07:06:29 compute-0 podman[85518]: 2026-01-31 07:06:29.891699578 +0000 UTC m=+0.082671695 container remove de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:29 compute-0 systemd[1]: libpod-conmon-de51063d409952230b2d4e26407dd0e7979b38219dfbf807585c2586a5759872.scope: Deactivated successfully.
Jan 31 07:06:29 compute-0 sudo[84907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156760717' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:06:30 compute-0 recursing_easley[85498]: 
Jan 31 07:06:30 compute-0 recursing_easley[85498]: {"fsid":"f70fcd2a-dcb4-5f89-a4ba-79a09959083b","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":115,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":6,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769843179,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T07:06:21.284282+0000","services":{}},"progress_events":{}}
Jan 31 07:06:30 compute-0 systemd[1]: libpod-fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9.scope: Deactivated successfully.
Jan 31 07:06:30 compute-0 podman[85482]: 2026-01-31 07:06:30.280431365 +0000 UTC m=+0.770403035 container died fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9 (image=quay.io/ceph/ceph:v18, name=recursing_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ac1cbfa15c10e2ce9e10ee859b3aa24c3a9febb75f9bbfe9df9286fb75445ee-merged.mount: Deactivated successfully.
Jan 31 07:06:30 compute-0 podman[85482]: 2026-01-31 07:06:30.337164561 +0000 UTC m=+0.827136191 container remove fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9 (image=quay.io/ceph/ceph:v18, name=recursing_easley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:06:30 compute-0 systemd[1]: libpod-conmon-fe83b738c46efd3a2e8703775d83a6294bf9ab2702b57022e449562f413d3fe9.scope: Deactivated successfully.
Jan 31 07:06:30 compute-0 sudo[85265]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:30 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 07:06:30 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 31 07:06:30 compute-0 ceph-osd[84816]: osd.0 0 done with init, starting boot process
Jan 31 07:06:30 compute-0 ceph-osd[84816]: osd.0 0 start_boot
Jan 31 07:06:30 compute-0 ceph-osd[84816]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 07:06:30 compute-0 ceph-osd[84816]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 07:06:30 compute-0 ceph-osd[84816]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 07:06:30 compute-0 ceph-osd[84816]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 07:06:30 compute-0 ceph-osd[84816]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:30 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:30 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3651846438; not ready for session (expect reconnect)
Jan 31 07:06:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:30 compute-0 ceph-mon[74496]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 07:06:30 compute-0 ceph-mon[74496]: osdmap e6: 2 total, 0 up, 2 in
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4156760717' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:30 compute-0 sudo[85570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:30 compute-0 sudo[85570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-0 sudo[85570]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-0 sudo[85595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:06:30 compute-0 sudo[85595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-0 sudo[85595]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-0 sudo[85621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:30 compute-0 sudo[85621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-0 sudo[85621]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-0 sudo[85646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:30 compute-0 sudo[85646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-0 sudo[85646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-0 sudo[85671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:30 compute-0 sudo[85671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:30 compute-0 sudo[85671]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:30 compute-0 sudo[85696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:06:30 compute-0 sudo[85696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:06:31 compute-0 podman[85793]: 2026-01-31 07:06:31.474153785 +0000 UTC m=+0.116438083 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:06:31 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3651846438; not ready for session (expect reconnect)
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:31 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:31 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:31 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:06:31 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 07:06:31 compute-0 ceph-mon[74496]: osdmap e7: 2 total, 0 up, 2 in
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:31 compute-0 ceph-mon[74496]: from='osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 31 07:06:31 compute-0 ceph-mon[74496]: osdmap e8: 2 total, 0 up, 2 in
Jan 31 07:06:31 compute-0 podman[85793]: 2026-01-31 07:06:31.58363783 +0000 UTC m=+0.225922118 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:31 compute-0 sudo[85696]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:06:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:06:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:32 compute-0 sudo[85880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:32 compute-0 sudo[85880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-0 sudo[85880]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-0 sudo[85905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:06:32 compute-0 sudo[85905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-0 sudo[85905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-0 sudo[85930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:06:32 compute-0 sudo[85930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-0 sudo[85930]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:32 compute-0 sudo[85955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 07:06:32 compute-0 sudo[85955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:06:32 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3651846438; not ready for session (expect reconnect)
Jan 31 07:06:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:32 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:32 compute-0 podman[86019]: 2026-01-31 07:06:32.509138455 +0000 UTC m=+0.071340670 container create 603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:06:32 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:32 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:32 compute-0 podman[86019]: 2026-01-31 07:06:32.455869452 +0000 UTC m=+0.018071697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:32 compute-0 systemd[1]: Started libpod-conmon-603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d.scope.
Jan 31 07:06:32 compute-0 ceph-mon[74496]: purged_snaps scrub starts
Jan 31 07:06:32 compute-0 ceph-mon[74496]: purged_snaps scrub ok
Jan 31 07:06:32 compute-0 ceph-mon[74496]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:06:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:32 compute-0 podman[86019]: 2026-01-31 07:06:32.653879988 +0000 UTC m=+0.216082213 container init 603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:06:32 compute-0 podman[86019]: 2026-01-31 07:06:32.659070778 +0000 UTC m=+0.221272993 container start 603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:06:32 compute-0 eager_beaver[86035]: 167 167
Jan 31 07:06:32 compute-0 systemd[1]: libpod-603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d.scope: Deactivated successfully.
Jan 31 07:06:32 compute-0 podman[86019]: 2026-01-31 07:06:32.700872752 +0000 UTC m=+0.263074967 container attach 603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:32 compute-0 podman[86019]: 2026-01-31 07:06:32.704900611 +0000 UTC m=+0.267102866 container died 603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1af310a966e22f58c73ab2dfcd7c276bb9306b0c425497631bf745086546af5f-merged.mount: Deactivated successfully.
Jan 31 07:06:32 compute-0 podman[86019]: 2026-01-31 07:06:32.900396579 +0000 UTC m=+0.462598834 container remove 603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:06:32 compute-0 systemd[1]: libpod-conmon-603c211303ab701160a33e887d0b5716f4371fde8202aaa23d1db8456a97841d.scope: Deactivated successfully.
Jan 31 07:06:33 compute-0 podman[86059]: 2026-01-31 07:06:33.076735692 +0000 UTC m=+0.062877432 container create 9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shaw, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:06:33 compute-0 podman[86059]: 2026-01-31 07:06:33.040739475 +0000 UTC m=+0.026881315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:06:33 compute-0 systemd[1]: Started libpod-conmon-9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f.scope.
Jan 31 07:06:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fce0e07bf37f433853e1b3ea202d8bd3cf939942da9cae0eea3c6cea08d52c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fce0e07bf37f433853e1b3ea202d8bd3cf939942da9cae0eea3c6cea08d52c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fce0e07bf37f433853e1b3ea202d8bd3cf939942da9cae0eea3c6cea08d52c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fce0e07bf37f433853e1b3ea202d8bd3cf939942da9cae0eea3c6cea08d52c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:06:33 compute-0 podman[86059]: 2026-01-31 07:06:33.241472604 +0000 UTC m=+0.227614344 container init 9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:06:33 compute-0 podman[86059]: 2026-01-31 07:06:33.247347142 +0000 UTC m=+0.233488882 container start 9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:06:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:33 compute-0 podman[86059]: 2026-01-31 07:06:33.287121372 +0000 UTC m=+0.273263112 container attach 9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:06:33 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3651846438; not ready for session (expect reconnect)
Jan 31 07:06:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:33 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:33 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:33 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:33 compute-0 ceph-mon[74496]: purged_snaps scrub starts
Jan 31 07:06:33 compute-0 ceph-mon[74496]: purged_snaps scrub ok
Jan 31 07:06:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:34 compute-0 objective_shaw[86075]: [
Jan 31 07:06:34 compute-0 objective_shaw[86075]:     {
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         "available": false,
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         "ceph_device": false,
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         "lsm_data": {},
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         "lvs": [],
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         "path": "/dev/sr0",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         "rejected_reasons": [
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "Insufficient space (<5GB)",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "Has a FileSystem"
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         ],
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         "sys_api": {
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "actuators": null,
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "device_nodes": "sr0",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "devname": "sr0",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "human_readable_size": "482.00 KB",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "id_bus": "ata",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "model": "QEMU DVD-ROM",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "nr_requests": "2",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "parent": "/dev/sr0",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "partitions": {},
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "path": "/dev/sr0",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "removable": "1",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "rev": "2.5+",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "ro": "0",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "rotational": "1",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "sas_address": "",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "sas_device_handle": "",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "scheduler_mode": "mq-deadline",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "sectors": 0,
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "sectorsize": "2048",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "size": 493568.0,
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "support_discard": "2048",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "type": "disk",
Jan 31 07:06:34 compute-0 objective_shaw[86075]:             "vendor": "QEMU"
Jan 31 07:06:34 compute-0 objective_shaw[86075]:         }
Jan 31 07:06:34 compute-0 objective_shaw[86075]:     }
Jan 31 07:06:34 compute-0 objective_shaw[86075]: ]
Jan 31 07:06:34 compute-0 systemd[1]: libpod-9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f.scope: Deactivated successfully.
Jan 31 07:06:34 compute-0 systemd[1]: libpod-9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f.scope: Consumed 1.113s CPU time.
Jan 31 07:06:34 compute-0 podman[86059]: 2026-01-31 07:06:34.365912001 +0000 UTC m=+1.352053741 container died 9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shaw, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3fce0e07bf37f433853e1b3ea202d8bd3cf939942da9cae0eea3c6cea08d52c-merged.mount: Deactivated successfully.
Jan 31 07:06:34 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3651846438; not ready for session (expect reconnect)
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:34 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:34 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:34 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:34 compute-0 podman[86059]: 2026-01-31 07:06:34.589718641 +0000 UTC m=+1.575860371 container remove 9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_shaw, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:06:34 compute-0 systemd[1]: libpod-conmon-9648b172e3964eb70047ddbfd86274ec3e3e24b370479f0f0cf913cfcb06478f.scope: Deactivated successfully.
Jan 31 07:06:34 compute-0 sudo[85955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:06:34 compute-0 ceph-mon[74496]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 31 07:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:34 compute-0 ceph-mgr[74791]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 07:06:34 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 07:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 07:06:34 compute-0 ceph-mgr[74791]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Jan 31 07:06:34 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Jan 31 07:06:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:35 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3651846438; not ready for session (expect reconnect)
Jan 31 07:06:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:35 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:35 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:35 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:35 compute-0 ceph-mon[74496]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 07:06:35 compute-0 ceph-mon[74496]: Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Jan 31 07:06:35 compute-0 ceph-mon[74496]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 7.344 iops: 1880.086 elapsed_sec: 1.596
Jan 31 07:06:36 compute-0 ceph-osd[84816]: log_channel(cluster) log [WRN] : OSD bench result of 1880.086077 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 0 waiting for initial osdmap
Jan 31 07:06:36 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0[84812]: 2026-01-31T07:06:36.014+0000 7fd6fc903640 -1 osd.0 0 waiting for initial osdmap
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Jan 31 07:06:36 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-osd-0[84812]: 2026-01-31T07:06:36.041+0000 7fd6f7f2b640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 8 set_numa_affinity not setting numa affinity
Jan 31 07:06:36 compute-0 ceph-osd[84816]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 31 07:06:36 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3651846438; not ready for session (expect reconnect)
Jan 31 07:06:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:36 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 07:06:36 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:36 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 31 07:06:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:06:37 compute-0 ceph-osd[84816]: osd.0 8 tick checking mon for new map
Jan 31 07:06:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Jan 31 07:06:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438] boot
Jan 31 07:06:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Jan 31 07:06:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 07:06:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:37 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:37 compute-0 ceph-mon[74496]: OSD bench result of 1880.086077 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:06:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:37 compute-0 ceph-osd[84816]: osd.0 9 state: booting -> active
Jan 31 07:06:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:37 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:37 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:37 compute-0 ceph-mgr[74791]: [devicehealth INFO root] creating mgr pool
Jan 31 07:06:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 31 07:06:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 07:06:38 compute-0 ceph-mon[74496]: osd.0 [v2:192.168.122.100:6802/3651846438,v1:192.168.122.100:6803/3651846438] boot
Jan 31 07:06:38 compute-0 ceph-mon[74496]: osdmap e9: 2 total, 1 up, 2 in
Jan 31 07:06:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 07:06:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:38 compute-0 ceph-mon[74496]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 07:06:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:38 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 31 07:06:38 compute-0 ceph-osd[84816]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 07:06:38 compute-0 ceph-osd[84816]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 31 07:06:38 compute-0 ceph-osd[84816]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:38 compute-0 ceph-mgr[74791]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 07:06:38 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:38 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:38 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 31 07:06:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 07:06:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Jan 31 07:06:39 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Jan 31 07:06:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:39 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 07:06:39 compute-0 ceph-mon[74496]: osdmap e10: 2 total, 1 up, 2 in
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:39 compute-0 ceph-mon[74496]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 07:06:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:39 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2099563795; not ready for session (expect reconnect)
Jan 31 07:06:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:39 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 07:06:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 31 07:06:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 31 07:06:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795] boot
Jan 31 07:06:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 31 07:06:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 07:06:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 07:06:40 compute-0 ceph-mon[74496]: osdmap e11: 2 total, 1 up, 2 in
Jan 31 07:06:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:40 compute-0 ceph-mon[74496]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 07:06:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:40 compute-0 ceph-mon[74496]: OSD bench result of 8461.932853 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:06:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 31 07:06:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 31 07:06:41 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 31 07:06:41 compute-0 ceph-mon[74496]: osd.1 [v2:192.168.122.101:6800/2099563795,v1:192.168.122.101:6801/2099563795] boot
Jan 31 07:06:41 compute-0 ceph-mon[74496]: osdmap e12: 2 total, 2 up, 2 in
Jan 31 07:06:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 07:06:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] creating main.db for devicehealth
Jan 31 07:06:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 07:06:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 07:06:41 compute-0 sudo[87268]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 31 07:06:41 compute-0 sudo[87268]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 07:06:41 compute-0 sudo[87268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 31 07:06:41 compute-0 sudo[87268]: pam_unix(sudo:session): session closed for user root
Jan 31 07:06:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 07:06:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 07:06:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:06:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 31 07:06:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 31 07:06:42 compute-0 ceph-mon[74496]: osdmap e13: 2 total, 2 up, 2 in
Jan 31 07:06:42 compute-0 ceph-mon[74496]: pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:42 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 07:06:42 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 07:06:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:06:42 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hhuoua(active, since 82s)
Jan 31 07:06:42 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:06:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:43 compute-0 ceph-mon[74496]: mgrmap e9: compute-0.hhuoua(active, since 82s)
Jan 31 07:06:43 compute-0 ceph-mon[74496]: osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:06:44 compute-0 ceph-mon[74496]: pgmap v48: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:46 compute-0 ceph-mon[74496]: pgmap v49: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 07:06:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:48 compute-0 ceph-mon[74496]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:06:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:06:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:06:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:06:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:06:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:06:50 compute-0 ceph-mon[74496]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:52 compute-0 ceph-mon[74496]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:06:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:06:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:06:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:06:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 07:06:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:06:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:06:54 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 31 07:06:54 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 31 07:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:54 compute-0 ceph-mon[74496]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:06:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:06:55 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:55 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:55 compute-0 ceph-mon[74496]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 07:06:55 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:55 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:56 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:56 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:56 compute-0 ceph-mon[74496]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:06:56 compute-0 ceph-mon[74496]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:56 compute-0 ceph-mon[74496]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 07:06:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:06:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:06:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:06:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:57 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 16ea8726-5b14-494d-9d68-f2987962b8a6 (Updating mon deployment (+2 -> 3))
Jan 31 07:06:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 07:06:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:06:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 07:06:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:06:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:57 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 31 07:06:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 31 07:06:57 compute-0 ceph-mon[74496]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.client.admin.keyring
Jan 31 07:06:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:06:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:06:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 07:06:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 07:06:58 compute-0 ceph-mon[74496]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:58 compute-0 ceph-mon[74496]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:58 compute-0 ceph-mon[74496]: Deploying daemon mon.compute-2 on compute-2
Jan 31 07:06:58 compute-0 ceph-mon[74496]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 07:06:58 compute-0 ceph-mon[74496]: Cluster is now healthy
Jan 31 07:06:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:06:59 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 31 07:06:59 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 31 07:06:59 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/599054767; not ready for session (expect reconnect)
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:06:59 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 07:06:59 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:06:59 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:06:59 compute-0 ceph-mon[74496]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 31 07:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:00 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/599054767; not ready for session (expect reconnect)
Jan 31 07:07:00 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 07:07:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:00 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 07:07:00 compute-0 sudo[87294]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yovltxhqpwkiszukgfahxwwuhixsfxff ; /usr/bin/python3'
Jan 31 07:07:00 compute-0 sudo[87294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:00 compute-0 python3[87296]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:00 compute-0 podman[87298]: 2026-01-31 07:07:00.724454034 +0000 UTC m=+0.060900000 container create 15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e (image=quay.io/ceph/ceph:v18, name=elated_payne, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:07:00 compute-0 systemd[1]: Started libpod-conmon-15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e.scope.
Jan 31 07:07:00 compute-0 podman[87298]: 2026-01-31 07:07:00.697061366 +0000 UTC m=+0.033507342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c5d5e960c111b8805f3b128483e40d8c9369e1d37825fbdcb19f05273bfa74/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c5d5e960c111b8805f3b128483e40d8c9369e1d37825fbdcb19f05273bfa74/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c5d5e960c111b8805f3b128483e40d8c9369e1d37825fbdcb19f05273bfa74/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:00 compute-0 podman[87298]: 2026-01-31 07:07:00.845802547 +0000 UTC m=+0.182248593 container init 15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e (image=quay.io/ceph/ceph:v18, name=elated_payne, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:07:00 compute-0 podman[87298]: 2026-01-31 07:07:00.852018575 +0000 UTC m=+0.188464531 container start 15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e (image=quay.io/ceph/ceph:v18, name=elated_payne, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:00 compute-0 podman[87298]: 2026-01-31 07:07:00.862463165 +0000 UTC m=+0.198909141 container attach 15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e (image=quay.io/ceph/ceph:v18, name=elated_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:07:01 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 07:07:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:01 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 07:07:01 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/599054767; not ready for session (expect reconnect)
Jan 31 07:07:01 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 07:07:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:01 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 07:07:01 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 07:07:02 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/599054767; not ready for session (expect reconnect)
Jan 31 07:07:02 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 07:07:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:02 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 07:07:02 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:07:02 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 07:07:02 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 07:07:02 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 07:07:03 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:03 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 07:07:03 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:03 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 07:07:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:03 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/599054767; not ready for session (expect reconnect)
Jan 31 07:07:03 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 07:07:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:03 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/599054767; not ready for session (expect reconnect)
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 07:07:04 compute-0 ceph-mon[74496]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hhuoua(active, since 104s)
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 16ea8726-5b14-494d-9d68-f2987962b8a6 (Updating mon deployment (+2 -> 3))
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 16ea8726-5b14-494d-9d68-f2987962b8a6 (Updating mon deployment (+2 -> 3)) in 7 seconds
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev faa4286c-86ba-446b-897f-a0d669975479 (Updating mgr deployment (+2 -> 3))
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.wmgest", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wmgest", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:04 compute-0 ceph-mon[74496]: Deploying daemon mon.compute-1 on compute-1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 07:07:04 compute-0 ceph-mon[74496]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:07:04 compute-0 ceph-mon[74496]: fsmap 
Jan 31 07:07:04 compute-0 ceph-mon[74496]: osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mgrmap e9: compute-0.hhuoua(active, since 104s)
Jan 31 07:07:04 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wmgest", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.wmgest on compute-2
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.wmgest on compute-2
Jan 31 07:07:04 compute-0 sshd-session[87329]: Invalid user sol from 45.148.10.240 port 54462
Jan 31 07:07:04 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 3 completed events
Jan 31 07:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:07:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 07:07:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 31 07:07:05 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:05 compute-0 sshd-session[87329]: Connection closed by invalid user sol 45.148.10.240 port 54462 [preauth]
Jan 31 07:07:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 07:07:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527175672' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:07:05 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 07:07:05 compute-0 elated_payne[87315]: 
Jan 31 07:07:05 compute-0 elated_payne[87315]: {"fsid":"f70fcd2a-dcb4-5f89-a4ba-79a09959083b","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":0,"monmap":{"epoch":2,"min_mon_release_name":"reef","num_mons":2},"osdmap":{"epoch":14,"num_osds":2,"num_up_osds":2,"osd_up_since":1769843200,"num_in_osds":2,"osd_in_since":1769843179,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475234304,"bytes_avail":14548762624,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T07:06:21.284282+0000","services":{}},"progress_events":{"16ea8726-5b14-494d-9d68-f2987962b8a6":{"message":"Updating mon deployment (+2 -> 3) (2s)\n      [==============..............] (remaining: 2s)","progress":0.5,"add_to_ceph_s":true}}}
Jan 31 07:07:05 compute-0 systemd[1]: libpod-15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e.scope: Deactivated successfully.
Jan 31 07:07:05 compute-0 podman[87298]: 2026-01-31 07:07:05.066000287 +0000 UTC m=+4.402446283 container died 15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e (image=quay.io/ceph/ceph:v18, name=elated_payne, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-05c5d5e960c111b8805f3b128483e40d8c9369e1d37825fbdcb19f05273bfa74-merged.mount: Deactivated successfully.
Jan 31 07:07:05 compute-0 podman[87298]: 2026-01-31 07:07:05.133558435 +0000 UTC m=+4.470004421 container remove 15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e (image=quay.io/ceph/ceph:v18, name=elated_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 31 07:07:05 compute-0 ceph-mon[74496]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 07:07:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:07:05 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:07:05 compute-0 ceph-mon[74496]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 31 07:07:05 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:05 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:05 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 07:07:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:05 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 07:07:05 compute-0 systemd[1]: libpod-conmon-15e4a2df3af965c784c226cb17dcebedb85b778128d5375cbd76bc2ea338cd4e.scope: Deactivated successfully.
Jan 31 07:07:05 compute-0 sudo[87294]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:05 compute-0 sudo[87377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kogiucxqizpbasiahclwnoddvgdlokhl ; /usr/bin/python3'
Jan 31 07:07:05 compute-0 sudo[87377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:05 compute-0 ceph-mgr[74791]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 31 07:07:05 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:07:05.464+0000 7fb604d35640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 31 07:07:05 compute-0 python3[87379]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:05 compute-0 podman[87380]: 2026-01-31 07:07:05.695767728 +0000 UTC m=+0.063827308 container create 89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364 (image=quay.io/ceph/ceph:v18, name=nifty_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:07:05 compute-0 systemd[1]: Started libpod-conmon-89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364.scope.
Jan 31 07:07:05 compute-0 podman[87380]: 2026-01-31 07:07:05.660409277 +0000 UTC m=+0.028468927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6aae3f343bc9d5d57356b79b8d975ead9a2c9a8c11971478db8d5d158964db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb6aae3f343bc9d5d57356b79b8d975ead9a2c9a8c11971478db8d5d158964db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:05 compute-0 podman[87380]: 2026-01-31 07:07:05.78694382 +0000 UTC m=+0.155003420 container init 89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364 (image=quay.io/ceph/ceph:v18, name=nifty_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:07:05 compute-0 podman[87380]: 2026-01-31 07:07:05.796496197 +0000 UTC m=+0.164555737 container start 89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364 (image=quay.io/ceph/ceph:v18, name=nifty_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:07:05 compute-0 podman[87380]: 2026-01-31 07:07:05.803140706 +0000 UTC m=+0.171200306 container attach 89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364 (image=quay.io/ceph/ceph:v18, name=nifty_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:07:06 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:06 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:06 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 07:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:07 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:07 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:07 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 07:07:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:07 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:07 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:08 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:08 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:08 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 07:07:09 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:09 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 07:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:07:10 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 07:07:10 compute-0 ceph-mon[74496]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.hhuoua(active, since 110s)
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 07:07:10 compute-0 ceph-mon[74496]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:07:10 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:07:10 compute-0 ceph-mon[74496]: fsmap 
Jan 31 07:07:10 compute-0 ceph-mon[74496]: osdmap e14: 2 total, 2 up, 2 in
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mgrmap e9: compute-0.hhuoua(active, since 110s)
Jan 31 07:07:10 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1344602823' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:10 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.hodsiu on compute-1
Jan 31 07:07:10 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.hodsiu on compute-1
Jan 31 07:07:11 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/405239720; not ready for session (expect reconnect)
Jan 31 07:07:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 07:07:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 31 07:07:11 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1344602823' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hodsiu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 07:07:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1344602823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 31 07:07:12 compute-0 nifty_mclaren[87396]: pool 'vms' created
Jan 31 07:07:12 compute-0 systemd[1]: libpod-89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364.scope: Deactivated successfully.
Jan 31 07:07:12 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 31 07:07:12 compute-0 podman[87380]: 2026-01-31 07:07:12.026853339 +0000 UTC m=+6.394912889 container died 89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364 (image=quay.io/ceph/ceph:v18, name=nifty_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:12 compute-0 ceph-mgr[74791]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 31 07:07:12 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T07:07:12.045+0000 7fb604d35640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 31 07:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb6aae3f343bc9d5d57356b79b8d975ead9a2c9a8c11971478db8d5d158964db-merged.mount: Deactivated successfully.
Jan 31 07:07:12 compute-0 podman[87380]: 2026-01-31 07:07:12.354764829 +0000 UTC m=+6.722824369 container remove 89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364 (image=quay.io/ceph/ceph:v18, name=nifty_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:07:12 compute-0 sudo[87377]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:12 compute-0 systemd[1]: libpod-conmon-89f228133c09bdc294d54a7200f7234879d5d90b7f07220ad6f4824239d2a364.scope: Deactivated successfully.
Jan 31 07:07:12 compute-0 sudo[87458]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkqpypeiigwaonggweujohwamsddfnhr ; /usr/bin/python3'
Jan 31 07:07:12 compute-0 sudo[87458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:12 compute-0 python3[87460]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:12 compute-0 podman[87461]: 2026-01-31 07:07:12.723127328 +0000 UTC m=+0.060174160 container create 6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2 (image=quay.io/ceph/ceph:v18, name=nostalgic_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:12 compute-0 systemd[1]: Started libpod-conmon-6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2.scope.
Jan 31 07:07:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/442dc4940641113d50cfe9a2ccf819b4316916a2fdccde48f716323f6a8270cf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/442dc4940641113d50cfe9a2ccf819b4316916a2fdccde48f716323f6a8270cf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:12 compute-0 podman[87461]: 2026-01-31 07:07:12.696376818 +0000 UTC m=+0.033423670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:12 compute-0 podman[87461]: 2026-01-31 07:07:12.804453965 +0000 UTC m=+0.141500817 container init 6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2 (image=quay.io/ceph/ceph:v18, name=nostalgic_noyce, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:12 compute-0 podman[87461]: 2026-01-31 07:07:12.809255055 +0000 UTC m=+0.146301887 container start 6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2 (image=quay.io/ceph/ceph:v18, name=nostalgic_noyce, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 31 07:07:12 compute-0 podman[87461]: 2026-01-31 07:07:12.815980446 +0000 UTC m=+0.153027378 container attach 6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2 (image=quay.io/ceph/ceph:v18, name=nostalgic_noyce, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:07:12 compute-0 ceph-mon[74496]: Deploying daemon mgr.compute-1.hodsiu on compute-1
Jan 31 07:07:12 compute-0 ceph-mon[74496]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1344602823' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:12 compute-0 ceph-mon[74496]: osdmap e15: 2 total, 2 up, 2 in
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:07:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v65: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4258413547' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:13 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev faa4286c-86ba-446b-897f-a0d669975479 (Updating mgr deployment (+2 -> 3))
Jan 31 07:07:13 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event faa4286c-86ba-446b-897f-a0d669975479 (Updating mgr deployment (+2 -> 3)) in 9 seconds
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:13 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 162dd4f8-f8a1-4f43-b436-29b9fc9502dc (Updating crash deployment (+1 -> 3))
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:13 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 31 07:07:13 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4258413547' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 31 07:07:13 compute-0 nostalgic_noyce[87476]: pool 'volumes' created
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 31 07:07:13 compute-0 systemd[1]: libpod-6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2.scope: Deactivated successfully.
Jan 31 07:07:13 compute-0 podman[87461]: 2026-01-31 07:07:13.726604481 +0000 UTC m=+1.063651313 container died 6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2 (image=quay.io/ceph/ceph:v18, name=nostalgic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:07:13 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-442dc4940641113d50cfe9a2ccf819b4316916a2fdccde48f716323f6a8270cf-merged.mount: Deactivated successfully.
Jan 31 07:07:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:13 compute-0 podman[87461]: 2026-01-31 07:07:13.898890965 +0000 UTC m=+1.235937797 container remove 6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2 (image=quay.io/ceph/ceph:v18, name=nostalgic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:13 compute-0 sudo[87458]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:13 compute-0 systemd[1]: libpod-conmon-6b4209d956c6766757fca890ab6e2e03492bb2af5e9395c4565a8dce44cbdff2.scope: Deactivated successfully.
Jan 31 07:07:14 compute-0 sudo[87537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pysgpwjqztxyeylieiwpuyjcmpunxymy ; /usr/bin/python3'
Jan 31 07:07:14 compute-0 sudo[87537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:14 compute-0 python3[87539]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:14 compute-0 podman[87540]: 2026-01-31 07:07:14.269284467 +0000 UTC m=+0.096914787 container create 716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04 (image=quay.io/ceph/ceph:v18, name=zealous_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:14 compute-0 podman[87540]: 2026-01-31 07:07:14.202767849 +0000 UTC m=+0.030398249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:14 compute-0 systemd[1]: Started libpod-conmon-716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04.scope.
Jan 31 07:07:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50e27bf1b166fb5bc42ae173bf8695ca4098e0d0650182aeadaa8fa870d0e95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50e27bf1b166fb5bc42ae173bf8695ca4098e0d0650182aeadaa8fa870d0e95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:14 compute-0 podman[87540]: 2026-01-31 07:07:14.350868053 +0000 UTC m=+0.178498393 container init 716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04 (image=quay.io/ceph/ceph:v18, name=zealous_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:14 compute-0 podman[87540]: 2026-01-31 07:07:14.355444376 +0000 UTC m=+0.183074696 container start 716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04 (image=quay.io/ceph/ceph:v18, name=zealous_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:07:14 compute-0 podman[87540]: 2026-01-31 07:07:14.364830099 +0000 UTC m=+0.192460419 container attach 716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04 (image=quay.io/ceph/ceph:v18, name=zealous_khorana, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:14 compute-0 ceph-mon[74496]: pgmap v65: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4258413547' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mon[74496]: Deploying daemon crash.compute-2 on compute-2
Jan 31 07:07:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4258413547' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:14 compute-0 ceph-mon[74496]: osdmap e16: 2 total, 2 up, 2 in
Jan 31 07:07:14 compute-0 ceph-mon[74496]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2911166990' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 162dd4f8-f8a1-4f43-b436-29b9fc9502dc (Updating crash deployment (+1 -> 3))
Jan 31 07:07:14 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 162dd4f8-f8a1-4f43-b436-29b9fc9502dc (Updating crash deployment (+1 -> 3)) in 2 seconds
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:14 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 5 completed events
Jan 31 07:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:07:15 compute-0 sudo[87582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:15 compute-0 sudo[87582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:15 compute-0 sudo[87582]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-0 sudo[87607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:15 compute-0 sudo[87607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:15 compute-0 sudo[87607]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:15 compute-0 sudo[87632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:15 compute-0 sudo[87632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:15 compute-0 sudo[87632]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:15 compute-0 sudo[87657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:07:15 compute-0 sudo[87657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v68: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:15 compute-0 podman[87722]: 2026-01-31 07:07:15.443571436 +0000 UTC m=+0.042857244 container create 07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:15 compute-0 systemd[1]: Started libpod-conmon-07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119.scope.
Jan 31 07:07:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:15 compute-0 podman[87722]: 2026-01-31 07:07:15.420588678 +0000 UTC m=+0.019874616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:15 compute-0 podman[87722]: 2026-01-31 07:07:15.529475107 +0000 UTC m=+0.128760985 container init 07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:15 compute-0 podman[87722]: 2026-01-31 07:07:15.534619415 +0000 UTC m=+0.133905253 container start 07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:15 compute-0 jovial_hopper[87738]: 167 167
Jan 31 07:07:15 compute-0 systemd[1]: libpod-07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119.scope: Deactivated successfully.
Jan 31 07:07:15 compute-0 podman[87722]: 2026-01-31 07:07:15.54336501 +0000 UTC m=+0.142650908 container attach 07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:15 compute-0 podman[87722]: 2026-01-31 07:07:15.544242844 +0000 UTC m=+0.143528682 container died 07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e44cacc3316f81327403c624e3a6a972476eaf20fefd3d809ecf040cd855da71-merged.mount: Deactivated successfully.
Jan 31 07:07:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 31 07:07:15 compute-0 podman[87722]: 2026-01-31 07:07:15.868312961 +0000 UTC m=+0.467598799 container remove 07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:07:15 compute-0 systemd[1]: libpod-conmon-07260c8ccf0ea49c24ef2829e637986e86db8e22229a7a9fdfad21d622166119.scope: Deactivated successfully.
Jan 31 07:07:15 compute-0 ceph-mon[74496]: osdmap e17: 2 total, 2 up, 2 in
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2911166990' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:16 compute-0 podman[87764]: 2026-01-31 07:07:15.965352651 +0000 UTC m=+0.017573494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:16 compute-0 podman[87764]: 2026-01-31 07:07:16.110005702 +0000 UTC m=+0.162226525 container create e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:07:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2911166990' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 31 07:07:16 compute-0 zealous_khorana[87556]: pool 'backups' created
Jan 31 07:07:16 compute-0 podman[87540]: 2026-01-31 07:07:16.237384128 +0000 UTC m=+2.065014488 container died 716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04 (image=quay.io/ceph/ceph:v18, name=zealous_khorana, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:16 compute-0 systemd[1]: Started libpod-conmon-e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51.scope.
Jan 31 07:07:16 compute-0 systemd[1]: libpod-716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04.scope: Deactivated successfully.
Jan 31 07:07:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 31 07:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c225c01b2cb491ae1faa9da2203693b4671f1861698bf4ed5a1f4599c398321/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c225c01b2cb491ae1faa9da2203693b4671f1861698bf4ed5a1f4599c398321/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c225c01b2cb491ae1faa9da2203693b4671f1861698bf4ed5a1f4599c398321/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c225c01b2cb491ae1faa9da2203693b4671f1861698bf4ed5a1f4599c398321/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c225c01b2cb491ae1faa9da2203693b4671f1861698bf4ed5a1f4599c398321/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-0 podman[87764]: 2026-01-31 07:07:16.397418863 +0000 UTC m=+0.449639706 container init e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:07:16 compute-0 podman[87764]: 2026-01-31 07:07:16.406933799 +0000 UTC m=+0.459154662 container start e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e50e27bf1b166fb5bc42ae173bf8695ca4098e0d0650182aeadaa8fa870d0e95-merged.mount: Deactivated successfully.
Jan 31 07:07:16 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:16 compute-0 podman[87764]: 2026-01-31 07:07:16.45159002 +0000 UTC m=+0.503810883 container attach e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:16 compute-0 podman[87540]: 2026-01-31 07:07:16.486648153 +0000 UTC m=+2.314278503 container remove 716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04 (image=quay.io/ceph/ceph:v18, name=zealous_khorana, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:16 compute-0 sudo[87537]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:16 compute-0 systemd[1]: libpod-conmon-716fa018e9f782f439dff50b3b9a4df1db5ad3dc3ed89ece1c946797e662bc04.scope: Deactivated successfully.
Jan 31 07:07:16 compute-0 sudo[87821]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgdnkmtxkwmbcoghbiekqxpchwsxjvve ; /usr/bin/python3'
Jan 31 07:07:16 compute-0 sudo[87821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:16 compute-0 python3[87823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:16 compute-0 podman[87824]: 2026-01-31 07:07:16.840591814 +0000 UTC m=+0.056275534 container create 1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6 (image=quay.io/ceph/ceph:v18, name=ecstatic_burnell, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:16 compute-0 systemd[1]: Started libpod-conmon-1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6.scope.
Jan 31 07:07:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4f27d053df3e83bc5c1fe50117d6aa43862cf89fa3da1e69e93079fb3248a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b4f27d053df3e83bc5c1fe50117d6aa43862cf89fa3da1e69e93079fb3248a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:16 compute-0 podman[87824]: 2026-01-31 07:07:16.810885526 +0000 UTC m=+0.026569266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:16 compute-0 podman[87824]: 2026-01-31 07:07:16.942614229 +0000 UTC m=+0.158297969 container init 1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6 (image=quay.io/ceph/ceph:v18, name=ecstatic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:07:16 compute-0 podman[87824]: 2026-01-31 07:07:16.946561515 +0000 UTC m=+0.162245235 container start 1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6 (image=quay.io/ceph/ceph:v18, name=ecstatic_burnell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:07:16 compute-0 ceph-mon[74496]: pgmap v68: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2911166990' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:16 compute-0 ceph-mon[74496]: osdmap e18: 2 total, 2 up, 2 in
Jan 31 07:07:16 compute-0 podman[87824]: 2026-01-31 07:07:16.967029185 +0000 UTC m=+0.182712925 container attach 1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6 (image=quay.io/ceph/ceph:v18, name=ecstatic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"} v 0) v1
Jan 31 07:07:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]: dispatch
Jan 31 07:07:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 31 07:07:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]': finished
Jan 31 07:07:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Jan 31 07:07:17 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Jan 31 07:07:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:17 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:17 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:17 compute-0 clever_robinson[87787]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:07:17 compute-0 clever_robinson[87787]: --> relative data size: 1.0
Jan 31 07:07:17 compute-0 clever_robinson[87787]: --> All data devices are unavailable
Jan 31 07:07:17 compute-0 systemd[1]: libpod-e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51.scope: Deactivated successfully.
Jan 31 07:07:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v71: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:17 compute-0 podman[87764]: 2026-01-31 07:07:17.307776832 +0000 UTC m=+1.359997655 container died e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:07:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 07:07:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/331184130' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c225c01b2cb491ae1faa9da2203693b4671f1861698bf4ed5a1f4599c398321-merged.mount: Deactivated successfully.
Jan 31 07:07:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 31 07:07:18 compute-0 podman[87764]: 2026-01-31 07:07:18.021579322 +0000 UTC m=+2.073800145 container remove e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_robinson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:18 compute-0 systemd[1]: libpod-conmon-e82c1448e7a612c2d4ed02e98a3426a632b15fa2fca42adba630ccb267f3bc51.scope: Deactivated successfully.
Jan 31 07:07:18 compute-0 sudo[87657]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2266493088' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]: dispatch
Jan 31 07:07:18 compute-0 ceph-mon[74496]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]: dispatch
Jan 31 07:07:18 compute-0 ceph-mon[74496]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d108d581-3dd3-4742-941a-f201ff187649"}]': finished
Jan 31 07:07:18 compute-0 ceph-mon[74496]: osdmap e19: 3 total, 2 up, 3 in
Jan 31 07:07:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:18 compute-0 ceph-mon[74496]: pgmap v71: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/331184130' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3536322800' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 07:07:18 compute-0 sudo[87890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:18 compute-0 sudo[87890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:18 compute-0 sudo[87890]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:18 compute-0 sudo[87915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:18 compute-0 sudo[87915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:18 compute-0 sudo[87915]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:18 compute-0 sudo[87940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:18 compute-0 sudo[87940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:18 compute-0 sudo[87940]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:18 compute-0 sudo[87965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:07:18 compute-0 sudo[87965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:18 compute-0 podman[88030]: 2026-01-31 07:07:18.521563962 +0000 UTC m=+0.023895995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/331184130' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Jan 31 07:07:18 compute-0 ecstatic_burnell[87840]: pool 'images' created
Jan 31 07:07:18 compute-0 systemd[1]: libpod-1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6.scope: Deactivated successfully.
Jan 31 07:07:18 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Jan 31 07:07:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:18 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:18 compute-0 podman[88030]: 2026-01-31 07:07:18.865297948 +0000 UTC m=+0.367629941 container create 55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:07:18 compute-0 systemd[1]: Started libpod-conmon-55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4.scope.
Jan 31 07:07:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v73: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:19 compute-0 podman[88030]: 2026-01-31 07:07:19.335315351 +0000 UTC m=+0.837647354 container init 55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:07:19 compute-0 podman[88030]: 2026-01-31 07:07:19.342056143 +0000 UTC m=+0.844388146 container start 55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:19 compute-0 competent_mestorf[88059]: 167 167
Jan 31 07:07:19 compute-0 systemd[1]: libpod-55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4.scope: Deactivated successfully.
Jan 31 07:07:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:19 compute-0 podman[88030]: 2026-01-31 07:07:19.427886641 +0000 UTC m=+0.930218644 container attach 55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:07:19 compute-0 podman[88030]: 2026-01-31 07:07:19.42969319 +0000 UTC m=+0.932025173 container died 55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4a8921b8ff5e8eb98b7bb4d7778c388c5ce1bd8110e21a5cf790a45a96b8290-merged.mount: Deactivated successfully.
Jan 31 07:07:19 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:19 compute-0 podman[88030]: 2026-01-31 07:07:19.582453378 +0000 UTC m=+1.084785361 container remove 55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:19 compute-0 podman[87824]: 2026-01-31 07:07:19.597702219 +0000 UTC m=+2.813385939 container died 1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6 (image=quay.io/ceph/ceph:v18, name=ecstatic_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 31 07:07:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/331184130' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:19 compute-0 ceph-mon[74496]: osdmap e20: 3 total, 2 up, 3 in
Jan 31 07:07:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:19 compute-0 ceph-mon[74496]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b4f27d053df3e83bc5c1fe50117d6aa43862cf89fa3da1e69e93079fb3248a2-merged.mount: Deactivated successfully.
Jan 31 07:07:19 compute-0 podman[88083]: 2026-01-31 07:07:19.698644244 +0000 UTC m=+0.026820492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:07:19
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Some PGs (0.200000) are unknown; try again later
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:07:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:07:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:07:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:20 compute-0 podman[87824]: 2026-01-31 07:07:20.224314105 +0000 UTC m=+3.439997855 container remove 1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6 (image=quay.io/ceph/ceph:v18, name=ecstatic_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:07:20 compute-0 sudo[87821]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:20 compute-0 sudo[88120]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkpgrxxbqifgvwznvfmgeuwmuwikhwj ; /usr/bin/python3'
Jan 31 07:07:20 compute-0 sudo[88120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:20 compute-0 python3[88122]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:20 compute-0 podman[88083]: 2026-01-31 07:07:20.875585323 +0000 UTC m=+1.203761561 container create d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Jan 31 07:07:20 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Jan 31 07:07:20 compute-0 systemd[1]: libpod-conmon-55c05eb10c418113caead3c7da38aee9219cf46000bc5016c30cb1576d2c06a4.scope: Deactivated successfully.
Jan 31 07:07:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:20 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:20 compute-0 systemd[1]: Started libpod-conmon-d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03.scope.
Jan 31 07:07:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4e8ba6ca5a5671744ed28fefcc2a39c007205c2b1297e20cdd1f5c94da2f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4e8ba6ca5a5671744ed28fefcc2a39c007205c2b1297e20cdd1f5c94da2f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4e8ba6ca5a5671744ed28fefcc2a39c007205c2b1297e20cdd1f5c94da2f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4e8ba6ca5a5671744ed28fefcc2a39c007205c2b1297e20cdd1f5c94da2f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:20 compute-0 podman[88123]: 2026-01-31 07:07:20.978650135 +0000 UTC m=+0.442584126 container create 7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82 (image=quay.io/ceph/ceph:v18, name=crazy_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:20 compute-0 podman[88123]: 2026-01-31 07:07:20.886365822 +0000 UTC m=+0.350299833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:20 compute-0 systemd[1]: libpod-conmon-1057cb3f0510e7c4e2d9128ac71d124b2d6b484680c143888368740980510cd6.scope: Deactivated successfully.
Jan 31 07:07:21 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:21 compute-0 systemd[1]: Started libpod-conmon-7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82.scope.
Jan 31 07:07:21 compute-0 ceph-mon[74496]: pgmap v73: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:21 compute-0 podman[88083]: 2026-01-31 07:07:21.160825255 +0000 UTC m=+1.489001553 container init d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614eae927189fd212a94c51f1366e6cd5a2ec2aed3a01c72a9928f152b56eea5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614eae927189fd212a94c51f1366e6cd5a2ec2aed3a01c72a9928f152b56eea5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:21 compute-0 podman[88083]: 2026-01-31 07:07:21.173180327 +0000 UTC m=+1.501356555 container start d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:07:21 compute-0 podman[88123]: 2026-01-31 07:07:21.199921927 +0000 UTC m=+0.663855928 container init 7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82 (image=quay.io/ceph/ceph:v18, name=crazy_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:21 compute-0 podman[88123]: 2026-01-31 07:07:21.204628584 +0000 UTC m=+0.668562575 container start 7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82 (image=quay.io/ceph/ceph:v18, name=crazy_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 07:07:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v75: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:21 compute-0 podman[88123]: 2026-01-31 07:07:21.334908148 +0000 UTC m=+0.798842149 container attach 7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82 (image=quay.io/ceph/ceph:v18, name=crazy_chatelet, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:07:21 compute-0 podman[88083]: 2026-01-31 07:07:21.371285817 +0000 UTC m=+1.699462075 container attach d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 07:07:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4225669901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 31 07:07:21 compute-0 romantic_solomon[88138]: {
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:     "0": [
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:         {
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "devices": [
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "/dev/loop3"
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             ],
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "lv_name": "ceph_lv0",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "lv_size": "7511998464",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "name": "ceph_lv0",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "tags": {
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.cluster_name": "ceph",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.crush_device_class": "",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.encrypted": "0",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.osd_id": "0",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.type": "block",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:                 "ceph.vdo": "0"
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             },
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "type": "block",
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:             "vg_name": "ceph_vg0"
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:         }
Jan 31 07:07:21 compute-0 romantic_solomon[88138]:     ]
Jan 31 07:07:21 compute-0 romantic_solomon[88138]: }
Jan 31 07:07:21 compute-0 systemd[1]: libpod-d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03.scope: Deactivated successfully.
Jan 31 07:07:21 compute-0 podman[88083]: 2026-01-31 07:07:21.946758447 +0000 UTC m=+2.274934705 container died d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bd4e8ba6ca5a5671744ed28fefcc2a39c007205c2b1297e20cdd1f5c94da2f5-merged.mount: Deactivated successfully.
Jan 31 07:07:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4225669901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Jan 31 07:07:22 compute-0 crazy_chatelet[88143]: pool 'cephfs.cephfs.meta' created
Jan 31 07:07:22 compute-0 systemd[1]: libpod-7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82.scope: Deactivated successfully.
Jan 31 07:07:22 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Jan 31 07:07:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:22 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:22 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 9bc0525b-8d5d-4e3f-bca6-8c3c4a9b7825 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 07:07:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:07:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:22 compute-0 podman[88083]: 2026-01-31 07:07:22.493303718 +0000 UTC m=+2.821479936 container remove d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:07:22 compute-0 systemd[1]: libpod-conmon-d449888e6568ff45d33dc61a002c8378d0ccb1564c7176da9ce296cfc639db03.scope: Deactivated successfully.
Jan 31 07:07:22 compute-0 podman[88123]: 2026-01-31 07:07:22.502257089 +0000 UTC m=+1.966191090 container died 7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82 (image=quay.io/ceph/ceph:v18, name=crazy_chatelet, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:07:22 compute-0 sudo[87965]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:22 compute-0 ceph-mon[74496]: osdmap e21: 3 total, 2 up, 3 in
Jan 31 07:07:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:22 compute-0 ceph-mon[74496]: pgmap v75: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4225669901' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:22 compute-0 sudo[88203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:22 compute-0 sudo[88203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:22 compute-0 sudo[88203]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:22 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:22 compute-0 sudo[88228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:22 compute-0 sudo[88228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:22 compute-0 sudo[88228]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:22 compute-0 sudo[88253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:22 compute-0 sudo[88253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:22 compute-0 sudo[88253]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:22 compute-0 sudo[88280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:07:22 compute-0 sudo[88280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 31 07:07:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 07:07:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:22 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 31 07:07:22 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 31 07:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-614eae927189fd212a94c51f1366e6cd5a2ec2aed3a01c72a9928f152b56eea5-merged.mount: Deactivated successfully.
Jan 31 07:07:23 compute-0 podman[88190]: 2026-01-31 07:07:23.214660471 +0000 UTC m=+0.753657173 container remove 7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82 (image=quay.io/ceph/ceph:v18, name=crazy_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:07:23 compute-0 systemd[1]: libpod-conmon-7c7dca3dca7776dbfd13c22eb879d8507a2c955c562ec4ef54a5bfb4492d9c82.scope: Deactivated successfully.
Jan 31 07:07:23 compute-0 sudo[88120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v77: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:23 compute-0 sudo[88369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbewkyiazriooawiayufjvpsyhcumln ; /usr/bin/python3'
Jan 31 07:07:23 compute-0 sudo[88369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 31 07:07:23 compute-0 podman[88371]: 2026-01-31 07:07:23.402900476 +0000 UTC m=+0.027131202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:23 compute-0 python3[88373]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:23 compute-0 podman[88371]: 2026-01-31 07:07:23.752632854 +0000 UTC m=+0.376863570 container create 5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Jan 31 07:07:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Jan 31 07:07:24 compute-0 systemd[1]: Started libpod-conmon-5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb.scope.
Jan 31 07:07:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:24 compute-0 podman[88386]: 2026-01-31 07:07:24.106709257 +0000 UTC m=+0.551360541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:24 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:24 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 3ce0b7f1-0673-4771-aa89-6879d8152c16 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 07:07:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:07:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:24 compute-0 podman[88386]: 2026-01-31 07:07:24.594167501 +0000 UTC m=+1.038818755 container create 9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50 (image=quay.io/ceph/ceph:v18, name=pedantic_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:07:24 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4225669901' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:24 compute-0 ceph-mon[74496]: osdmap e22: 3 total, 2 up, 3 in
Jan 31 07:07:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 07:07:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:24 compute-0 systemd[1]: Started libpod-conmon-9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50.scope.
Jan 31 07:07:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a264337b9d882a11f6ca7b8c69f469bb7cfee9b4e59bda53f0530a4ec8af1ab4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a264337b9d882a11f6ca7b8c69f469bb7cfee9b4e59bda53f0530a4ec8af1ab4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:24 compute-0 podman[88371]: 2026-01-31 07:07:24.977626454 +0000 UTC m=+1.601857220 container init 5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_taussig, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:07:24 compute-0 podman[88371]: 2026-01-31 07:07:24.985986809 +0000 UTC m=+1.610217545 container start 5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:24 compute-0 naughty_taussig[88400]: 167 167
Jan 31 07:07:24 compute-0 systemd[1]: libpod-5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb.scope: Deactivated successfully.
Jan 31 07:07:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 31 07:07:25 compute-0 ceph-mgr[74791]: [progress WARNING root] Starting Global Recovery Event,32 pgs not in active + clean state
Jan 31 07:07:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v79: 37 pgs: 32 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:25 compute-0 podman[88371]: 2026-01-31 07:07:25.348723178 +0000 UTC m=+1.972953904 container attach 5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_taussig, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:25 compute-0 podman[88371]: 2026-01-31 07:07:25.349700313 +0000 UTC m=+1.973931039 container died 5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 31 07:07:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Jan 31 07:07:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Jan 31 07:07:26 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:26 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev a9ebc9b9-37b9-466a-b93f-d65f8883f207 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 07:07:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:07:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:26 compute-0 ceph-mon[74496]: Deploying daemon osd.2 on compute-2
Jan 31 07:07:26 compute-0 ceph-mon[74496]: pgmap v77: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:26 compute-0 ceph-mon[74496]: osdmap e23: 3 total, 2 up, 3 in
Jan 31 07:07:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:26 compute-0 ceph-mon[74496]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:26 compute-0 ceph-mon[74496]: pgmap v79: 37 pgs: 32 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:26 compute-0 ceph-mon[74496]: osdmap e24: 3 total, 2 up, 3 in
Jan 31 07:07:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2ce8b6a5fa39ca5c9630052097ed461a0845a698d9c3a0ccbf4c77c544a618a-merged.mount: Deactivated successfully.
Jan 31 07:07:26 compute-0 systemd[76130]: Starting Mark boot as successful...
Jan 31 07:07:26 compute-0 systemd[76130]: Finished Mark boot as successful.
Jan 31 07:07:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 31 07:07:27 compute-0 podman[88371]: 2026-01-31 07:07:27.221756228 +0000 UTC m=+3.845986924 container remove 5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_taussig, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:07:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v81: 37 pgs: 1 peering, 32 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:27 compute-0 podman[88386]: 2026-01-31 07:07:27.522489099 +0000 UTC m=+3.967140353 container init 9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50 (image=quay.io/ceph/ceph:v18, name=pedantic_hofstadter, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:07:27 compute-0 podman[88386]: 2026-01-31 07:07:27.531308643 +0000 UTC m=+3.975959937 container start 9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50 (image=quay.io/ceph/ceph:v18, name=pedantic_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:07:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Jan 31 07:07:27 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Jan 31 07:07:27 compute-0 podman[88386]: 2026-01-31 07:07:27.888357003 +0000 UTC m=+4.333008327 container attach 9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50 (image=quay.io/ceph/ceph:v18, name=pedantic_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:28 compute-0 podman[88430]: 2026-01-31 07:07:27.95545301 +0000 UTC m=+0.651987814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 319d8031-0ea0-4aa2-a726-1d20a907397c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 9bc0525b-8d5d-4e3f-bca6-8c3c4a9b7825 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 9bc0525b-8d5d-4e3f-bca6-8c3c4a9b7825 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 3ce0b7f1-0673-4771-aa89-6879d8152c16 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 3ce0b7f1-0673-4771-aa89-6879d8152c16 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev a9ebc9b9-37b9-466a-b93f-d65f8883f207 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event a9ebc9b9-37b9-466a-b93f-d65f8883f207 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 2 seconds
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 319d8031-0ea0-4aa2-a726-1d20a907397c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 07:07:28 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 319d8031-0ea0-4aa2-a726-1d20a907397c (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Jan 31 07:07:28 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 25 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25 pruub=10.561912537s) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active pruub 69.369171143s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:28 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 25 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25 pruub=10.561912537s) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown pruub 69.369171143s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:28 compute-0 podman[88430]: 2026-01-31 07:07:28.232771695 +0000 UTC m=+0.929306389 container create ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:07:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:07:28 compute-0 ceph-mon[74496]: 2.1 scrub starts
Jan 31 07:07:28 compute-0 ceph-mon[74496]: 2.1 scrub ok
Jan 31 07:07:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:28 compute-0 systemd[1]: libpod-conmon-5b1a2aaa4a40c1936b1d7fb4c1139035e7a1198d2895ce2dffc968f789cd54eb.scope: Deactivated successfully.
Jan 31 07:07:28 compute-0 systemd[1]: Started libpod-conmon-ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9.scope.
Jan 31 07:07:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd81098080e4c09473fe3ed2966ec25f543004f4302edabc0ac798fbbacf9025/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd81098080e4c09473fe3ed2966ec25f543004f4302edabc0ac798fbbacf9025/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd81098080e4c09473fe3ed2966ec25f543004f4302edabc0ac798fbbacf9025/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd81098080e4c09473fe3ed2966ec25f543004f4302edabc0ac798fbbacf9025/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 07:07:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/386957885' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 31 07:07:28 compute-0 podman[88430]: 2026-01-31 07:07:28.761452675 +0000 UTC m=+1.457987459 container init ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:28 compute-0 podman[88430]: 2026-01-31 07:07:28.76804029 +0000 UTC m=+1.464574994 container start ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:28 compute-0 podman[88430]: 2026-01-31 07:07:28.860666839 +0000 UTC m=+1.557201533 container attach ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 31 07:07:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 07:07:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/386957885' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Jan 31 07:07:29 compute-0 pedantic_hofstadter[88406]: pool 'cephfs.cephfs.data' created
Jan 31 07:07:29 compute-0 systemd[1]: libpod-9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50.scope: Deactivated successfully.
Jan 31 07:07:29 compute-0 podman[88386]: 2026-01-31 07:07:29.118653668 +0000 UTC m=+5.563304932 container died 9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50 (image=quay.io/ceph/ceph:v18, name=pedantic_hofstadter, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:07:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Jan 31 07:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:29 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v84: 100 pgs: 1 peering, 94 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.18( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.17( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.19( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.16( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.14( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.13( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.12( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.10( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.15( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.11( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.f( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.e( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.d( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.c( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.a( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.b( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26 pruub=11.631585121s) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active pruub 71.636085510s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.6( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.7( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.2( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.5( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.4( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.8( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.9( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1a( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.3( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1b( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1c( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1e( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1f( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1d( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[4.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26 pruub=11.631585121s) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown pruub 71.636085510s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:29 compute-0 ceph-mon[74496]: pgmap v81: 37 pgs: 1 peering, 32 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:07:29 compute-0 ceph-mon[74496]: osdmap e25: 3 total, 2 up, 3 in
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/386957885' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/386957885' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:29 compute-0 ceph-mon[74496]: osdmap e26: 3 total, 2 up, 3 in
Jan 31 07:07:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]: {
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]:         "osd_id": 0,
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]:         "type": "bluestore"
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]:     }
Jan 31 07:07:29 compute-0 dreamy_heisenberg[88468]: }
Jan 31 07:07:29 compute-0 systemd[1]: libpod-ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9.scope: Deactivated successfully.
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.19( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a264337b9d882a11f6ca7b8c69f469bb7cfee9b4e59bda53f0530a4ec8af1ab4-merged.mount: Deactivated successfully.
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.0( empty local-lis/les=25/26 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.4( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.2( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 26 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [0] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:29 compute-0 podman[88386]: 2026-01-31 07:07:29.938352135 +0000 UTC m=+6.383003409 container remove 9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50 (image=quay.io/ceph/ceph:v18, name=pedantic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:29 compute-0 sudo[88369]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:29 compute-0 podman[88430]: 2026-01-31 07:07:29.981799591 +0000 UTC m=+2.678334275 container died ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:30 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 9 completed events
Jan 31 07:07:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:07:30 compute-0 sudo[88539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrkudluvlcmcfgdynsxomxmgyxtgjrbp ; /usr/bin/python3'
Jan 31 07:07:30 compute-0 sudo[88539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 31 07:07:30 compute-0 python3[88541]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:30 compute-0 podman[88542]: 2026-01-31 07:07:30.379142858 +0000 UTC m=+0.029633033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd81098080e4c09473fe3ed2966ec25f543004f4302edabc0ac798fbbacf9025-merged.mount: Deactivated successfully.
Jan 31 07:07:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 07:07:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Jan 31 07:07:30 compute-0 ceph-mon[74496]: pgmap v84: 100 pgs: 1 peering, 94 unknown, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:30 compute-0 ceph-mon[74496]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1e( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1f( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.10( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.11( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.12( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.13( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.14( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.15( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.16( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.17( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.8( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.9( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.a( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.b( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.c( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.d( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.7( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=27 pruub=13.993068695s) [0] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 active pruub 75.639915466s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.2( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.6( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.5( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.4( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.3( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.f( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.e( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1d( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1c( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1b( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1a( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.19( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.18( empty local-lis/les=18/19 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=27 pruub=13.993068695s) [0] r=0 lpr=27 pi=[20,27)/1 crt=0'0 mlcod 0'0 unknown pruub 75.639915466s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:31 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:31 compute-0 podman[88430]: 2026-01-31 07:07:31.137489914 +0000 UTC m=+3.834024648 container remove ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e27 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Jan 31 07:07:31 compute-0 sudo[88280]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.10( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 systemd[1]: libpod-conmon-9de5a7e249f9b6c4d3bbf159a46e010e4b4ae670f024d24afd6ca7bd0d813b50.scope: Deactivated successfully.
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.11( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1e( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.15( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.12( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.16( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1f( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.17( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.b( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.c( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.0( empty local-lis/les=26/27 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.7( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.4( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.f( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.6( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1d( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.19( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 27 pg[4.3( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=18/18 les/c/f=19/19/0 sis=26) [0] r=0 lpr=26 pi=[18,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:31 compute-0 podman[88542]: 2026-01-31 07:07:31.275546045 +0000 UTC m=+0.926036220 container create 83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25 (image=quay.io/ceph/ceph:v18, name=loving_perlman, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v86: 131 pgs: 1 creating+peering, 93 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:07:31 compute-0 systemd[1]: Started libpod-conmon-83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25.scope.
Jan 31 07:07:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78b3f451d093ef0aa526dbda44c9c4e4639f5a2f8665356f95a823fa6ccb3717/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78b3f451d093ef0aa526dbda44c9c4e4639f5a2f8665356f95a823fa6ccb3717/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:07:31 compute-0 podman[88542]: 2026-01-31 07:07:31.502428279 +0000 UTC m=+1.152918544 container init 83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25 (image=quay.io/ceph/ceph:v18, name=loving_perlman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:31 compute-0 podman[88542]: 2026-01-31 07:07:31.510415824 +0000 UTC m=+1.160906029 container start 83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25 (image=quay.io/ceph/ceph:v18, name=loving_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:31 compute-0 podman[88542]: 2026-01-31 07:07:31.583979314 +0000 UTC m=+1.234469489 container attach 83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25 (image=quay.io/ceph/ceph:v18, name=loving_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 systemd[1]: libpod-conmon-ca6a495a74c4e736829205a1a5e59d694cb30f504e77cdacf951e99951e8e4d9.scope: Deactivated successfully.
Jan 31 07:07:31 compute-0 sudo[88562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:31 compute-0 sudo[88562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:31 compute-0 sudo[88562]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:31 compute-0 sudo[88587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:07:31 compute-0 sudo[88587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:31 compute-0 sudo[88587]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:31 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.19( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.682032585s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.171165466s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.11( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.19( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.682032585s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171165466s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.682155609s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.171333313s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.682155609s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171333313s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.11( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.10( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.10( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.13( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.13( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.12( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932046890s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421424866s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932431221s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421859741s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.12( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932046890s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421424866s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931964874s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421455383s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932431221s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421859741s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931964874s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421455383s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.15( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932118416s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421707153s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.15( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932118416s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421707153s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.14( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932090759s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421737671s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.14( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932090759s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421737671s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.17( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932027817s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421768188s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.17( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.16( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.16( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.932027817s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421768188s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931864738s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421676636s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931864738s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421676636s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.9( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.9( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.8( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.681884766s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.171188354s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.8( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931818008s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421806335s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931820869s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421775818s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.681884766s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171188354s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931820869s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421775818s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931818008s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421806335s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931715012s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421813965s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931713104s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421852112s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931715012s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421813965s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931713104s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421852112s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.6( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931663513s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421905518s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.6( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931661606s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421920776s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931663513s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421905518s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=25/26 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931588173s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421867371s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.0( empty local-lis/les=25/26 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931588173s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421867371s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931661606s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421920776s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931588173s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421943665s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931588173s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421943665s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.3( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931518555s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421936035s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.3( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931518555s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421936035s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=13.149483681s) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown pruub 75.639915466s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=28 pruub=13.149483681s) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.639915466s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931557655s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422050476s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.7( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931441307s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.421989441s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.7( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931557655s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422050476s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.4( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931441307s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421989441s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.4( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.5( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.2( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931391716s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422035217s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.5( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.2( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.2( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931391716s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422035217s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.2( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.4( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931299210s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422042847s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.4( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931299210s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422042847s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931277275s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422088623s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931206703s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422035217s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931277275s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422088623s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931206703s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422035217s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931151390s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422065735s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931151390s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422065735s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931085587s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422103882s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931077957s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422126770s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931085587s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422103882s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.931077957s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422126770s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.930975914s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422111511s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.1b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.930975914s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422111511s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.18( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.18( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.930915833s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422157288s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.930904388s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422157288s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.930904388s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.930915833s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.19( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.930854797s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 76.422157288s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[5.19( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=28) [] r=-1 lpr=28 pi=[20,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 28 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=28 pruub=13.930854797s) [] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:31 compute-0 ceph-mon[74496]: purged_snaps scrub starts
Jan 31 07:07:31 compute-0 ceph-mon[74496]: purged_snaps scrub ok
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 ceph-mon[74496]: 2.2 scrub starts
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 07:07:31 compute-0 ceph-mon[74496]: 2.2 scrub ok
Jan 31 07:07:31 compute-0 ceph-mon[74496]: osdmap e27: 3 total, 2 up, 3 in
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 ceph-mon[74496]: pgmap v86: 131 pgs: 1 creating+peering, 93 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:31 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2867694174; not ready for session (expect reconnect)
Jan 31 07:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:31 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:32 compute-0 sudo[88631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:32 compute-0 sudo[88631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:32 compute-0 sudo[88631]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 31 07:07:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3259085705' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 07:07:32 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.wmgest started
Jan 31 07:07:32 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mgr.compute-2.wmgest 192.168.122.102:0/1492640040; not ready for session (expect reconnect)
Jan 31 07:07:32 compute-0 sudo[88657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:32 compute-0 sudo[88657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:32 compute-0 sudo[88657]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:32 compute-0 sudo[88682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:32 compute-0 sudo[88682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:32 compute-0 sudo[88682]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:32 compute-0 sudo[88707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:07:32 compute-0 sudo[88707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:32 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 31 07:07:32 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 31 07:07:32 compute-0 sudo[88707]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 31 07:07:32 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2867694174; not ready for session (expect reconnect)
Jan 31 07:07:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:32 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:32 compute-0 ceph-mon[74496]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 07:07:32 compute-0 ceph-mon[74496]: osdmap e28: 3 total, 2 up, 3 in
Jan 31 07:07:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3259085705' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 07:07:32 compute-0 ceph-mon[74496]: Standby manager daemon compute-2.wmgest started
Jan 31 07:07:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3259085705' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 07:07:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Jan 31 07:07:33 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mgr.compute-2.wmgest 192.168.122.102:0/1492640040; not ready for session (expect reconnect)
Jan 31 07:07:33 compute-0 loving_perlman[88558]: enabled application 'rbd' on pool 'vms'
Jan 31 07:07:33 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Jan 31 07:07:33 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.hhuoua(active, since 2m), standbys: compute-2.wmgest
Jan 31 07:07:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.wmgest", "id": "compute-2.wmgest"} v 0) v1
Jan 31 07:07:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wmgest", "id": "compute-2.wmgest"}]: dispatch
Jan 31 07:07:33 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:33 compute-0 systemd[1]: libpod-83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25.scope: Deactivated successfully.
Jan 31 07:07:33 compute-0 podman[88542]: 2026-01-31 07:07:33.185949842 +0000 UTC m=+2.836440017 container died 83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25 (image=quay.io/ceph/ceph:v18, name=loving_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:07:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-78b3f451d093ef0aa526dbda44c9c4e4639f5a2f8665356f95a823fa6ccb3717-merged.mount: Deactivated successfully.
Jan 31 07:07:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v89: 131 pgs: 1 creating+peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:07:33 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 31 07:07:33 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 31 07:07:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:33 compute-0 podman[88542]: 2026-01-31 07:07:33.7926962 +0000 UTC m=+3.443186385 container remove 83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25 (image=quay.io/ceph/ceph:v18, name=loving_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:33 compute-0 systemd[1]: libpod-conmon-83be867acf8c80e4eb0c46bbd3be222f785d1b5bc084b6df5f75df653dc3da25.scope: Deactivated successfully.
Jan 31 07:07:33 compute-0 sudo[88539]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:33 compute-0 sudo[88801]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdyuvrcavawmppraeplivaotjvqqxzoi ; /usr/bin/python3'
Jan 31 07:07:33 compute-0 sudo[88801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:33 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2867694174; not ready for session (expect reconnect)
Jan 31 07:07:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:33 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:34 compute-0 python3[88803]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:34 compute-0 ceph-mon[74496]: 4.1 scrub starts
Jan 31 07:07:34 compute-0 ceph-mon[74496]: 4.1 scrub ok
Jan 31 07:07:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3259085705' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 07:07:34 compute-0 ceph-mon[74496]: osdmap e29: 3 total, 2 up, 3 in
Jan 31 07:07:34 compute-0 ceph-mon[74496]: mgrmap e10: compute-0.hhuoua(active, since 2m), standbys: compute-2.wmgest
Jan 31 07:07:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-2.wmgest", "id": "compute-2.wmgest"}]: dispatch
Jan 31 07:07:34 compute-0 ceph-mon[74496]: pgmap v89: 131 pgs: 1 creating+peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:34 compute-0 ceph-mon[74496]: 2.3 scrub starts
Jan 31 07:07:34 compute-0 ceph-mon[74496]: 2.3 scrub ok
Jan 31 07:07:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:34 compute-0 podman[88804]: 2026-01-31 07:07:34.219872464 +0000 UTC m=+0.093412617 container create 2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11 (image=quay.io/ceph/ceph:v18, name=infallible_cohen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:34 compute-0 podman[88804]: 2026-01-31 07:07:34.153481803 +0000 UTC m=+0.027022036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:34 compute-0 systemd[1]: Started libpod-conmon-2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11.scope.
Jan 31 07:07:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1b7a6f6db6697f8113e2ddc2053f2a68b5818a6b0fc412822e33fc894c0785/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1b7a6f6db6697f8113e2ddc2053f2a68b5818a6b0fc412822e33fc894c0785/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:34 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mgr.compute-1.hodsiu 192.168.122.101:0/3438022061; not ready for session (expect reconnect)
Jan 31 07:07:34 compute-0 podman[88804]: 2026-01-31 07:07:34.770966557 +0000 UTC m=+0.644506720 container init 2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11 (image=quay.io/ceph/ceph:v18, name=infallible_cohen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:34 compute-0 podman[88804]: 2026-01-31 07:07:34.779109657 +0000 UTC m=+0.652649800 container start 2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11 (image=quay.io/ceph/ceph:v18, name=infallible_cohen, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:07:34 compute-0 podman[88804]: 2026-01-31 07:07:34.895802455 +0000 UTC m=+0.769342588 container attach 2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11 (image=quay.io/ceph/ceph:v18, name=infallible_cohen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.hodsiu started
Jan 31 07:07:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:34 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2867694174; not ready for session (expect reconnect)
Jan 31 07:07:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:34 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 1 creating+peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 31 07:07:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/501340508' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 07:07:35 compute-0 ceph-mon[74496]: 4.2 scrub starts
Jan 31 07:07:35 compute-0 ceph-mon[74496]: 4.2 scrub ok
Jan 31 07:07:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:35 compute-0 ceph-mon[74496]: Standby manager daemon compute-1.hodsiu started
Jan 31 07:07:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:35 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from mgr.compute-1.hodsiu 192.168.122.101:0/3438022061; not ready for session (expect reconnect)
Jan 31 07:07:35 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2867694174; not ready for session (expect reconnect)
Jan 31 07:07:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 31 07:07:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:36 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 2m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:07:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.hodsiu", "id": "compute-1.hodsiu"} v 0) v1
Jan 31 07:07:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-1.hodsiu", "id": "compute-1.hodsiu"}]: dispatch
Jan 31 07:07:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/501340508' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 07:07:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Jan 31 07:07:36 compute-0 infallible_cohen[88819]: enabled application 'rbd' on pool 'volumes'
Jan 31 07:07:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Jan 31 07:07:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:36 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:36 compute-0 systemd[1]: libpod-2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11.scope: Deactivated successfully.
Jan 31 07:07:36 compute-0 podman[88804]: 2026-01-31 07:07:36.131702994 +0000 UTC m=+2.005243147 container died 2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11 (image=quay.io/ceph/ceph:v18, name=infallible_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:07:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb1b7a6f6db6697f8113e2ddc2053f2a68b5818a6b0fc412822e33fc894c0785-merged.mount: Deactivated successfully.
Jan 31 07:07:36 compute-0 podman[88804]: 2026-01-31 07:07:36.215996239 +0000 UTC m=+2.089536362 container remove 2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11 (image=quay.io/ceph/ceph:v18, name=infallible_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:07:36 compute-0 systemd[1]: libpod-conmon-2a2da8ad5b083f15674b3bb82bec8bfbf9c4c8d404622563264e2677db8d0c11.scope: Deactivated successfully.
Jan 31 07:07:36 compute-0 sudo[88801]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:36 compute-0 sudo[88879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxgqmwrfkiprllxiknnokttjxghnuohj ; /usr/bin/python3'
Jan 31 07:07:36 compute-0 sudo[88879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:36 compute-0 python3[88881]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:36 compute-0 ceph-mon[74496]: pgmap v90: 131 pgs: 1 creating+peering, 62 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/501340508' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 07:07:36 compute-0 ceph-mon[74496]: 2.4 deep-scrub starts
Jan 31 07:07:36 compute-0 ceph-mon[74496]: 2.4 deep-scrub ok
Jan 31 07:07:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:36 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 2m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:07:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr metadata", "who": "compute-1.hodsiu", "id": "compute-1.hodsiu"}]: dispatch
Jan 31 07:07:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/501340508' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 07:07:36 compute-0 ceph-mon[74496]: osdmap e30: 3 total, 2 up, 3 in
Jan 31 07:07:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:36 compute-0 podman[88882]: 2026-01-31 07:07:36.57763685 +0000 UTC m=+0.066121206 container create e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2 (image=quay.io/ceph/ceph:v18, name=priceless_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:07:36 compute-0 systemd[1]: Started libpod-conmon-e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2.scope.
Jan 31 07:07:36 compute-0 podman[88882]: 2026-01-31 07:07:36.537015056 +0000 UTC m=+0.025499452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2e1be9bdc39b5ce7621ebb5fa4b60de743e88f68b728096a0ba6d25dbefafd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2e1be9bdc39b5ce7621ebb5fa4b60de743e88f68b728096a0ba6d25dbefafd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:36 compute-0 podman[88882]: 2026-01-31 07:07:36.683040152 +0000 UTC m=+0.171524488 container init e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2 (image=quay.io/ceph/ceph:v18, name=priceless_swartz, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 07:07:36 compute-0 podman[88882]: 2026-01-31 07:07:36.688174084 +0000 UTC m=+0.176658450 container start e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2 (image=quay.io/ceph/ceph:v18, name=priceless_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:36 compute-0 podman[88882]: 2026-01-31 07:07:36.69798025 +0000 UTC m=+0.186464606 container attach e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2 (image=quay.io/ceph/ceph:v18, name=priceless_swartz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:37 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2867694174; not ready for session (expect reconnect)
Jan 31 07:07:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:37 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 31 07:07:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1530384074' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 07:07:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 31 07:07:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1530384074' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 07:07:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Jan 31 07:07:37 compute-0 priceless_swartz[88897]: enabled application 'rbd' on pool 'backups'
Jan 31 07:07:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1530384074' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 07:07:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Jan 31 07:07:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:37 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:37 compute-0 systemd[1]: libpod-e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2.scope: Deactivated successfully.
Jan 31 07:07:37 compute-0 conmon[88897]: conmon e836f8dc8ed9b2723dbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2.scope/container/memory.events
Jan 31 07:07:37 compute-0 podman[88882]: 2026-01-31 07:07:37.665892439 +0000 UTC m=+1.154376765 container died e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2 (image=quay.io/ceph/ceph:v18, name=priceless_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d2e1be9bdc39b5ce7621ebb5fa4b60de743e88f68b728096a0ba6d25dbefafd-merged.mount: Deactivated successfully.
Jan 31 07:07:37 compute-0 podman[88882]: 2026-01-31 07:07:37.75405249 +0000 UTC m=+1.242536796 container remove e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2 (image=quay.io/ceph/ceph:v18, name=priceless_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:37 compute-0 systemd[1]: libpod-conmon-e836f8dc8ed9b2723dbb6ad8597966112e843ef3b11c6e53bff639cc01608ad2.scope: Deactivated successfully.
Jan 31 07:07:37 compute-0 sudo[88879]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:37 compute-0 sudo[88957]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azxrgnkvmahmpgmbkjrjdcvlyrecgari ; /usr/bin/python3'
Jan 31 07:07:37 compute-0 sudo[88957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:37 compute-0 ceph-mgr[74791]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2867694174; not ready for session (expect reconnect)
Jan 31 07:07:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:37 compute-0 ceph-mgr[74791]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 07:07:38 compute-0 python3[88959]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:38 compute-0 podman[88960]: 2026-01-31 07:07:38.092693534 +0000 UTC m=+0.051068575 container create 7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133 (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:38 compute-0 systemd[1]: Started libpod-conmon-7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133.scope.
Jan 31 07:07:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ae80d40f8c81fa5c0315dbe4c9651348c80b280e871a9132317398ff9acbe7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ae80d40f8c81fa5c0315dbe4c9651348c80b280e871a9132317398ff9acbe7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:38 compute-0 podman[88960]: 2026-01-31 07:07:38.0652463 +0000 UTC m=+0.023621321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:38 compute-0 podman[88960]: 2026-01-31 07:07:38.186756196 +0000 UTC m=+0.145131237 container init 7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133 (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:38 compute-0 podman[88960]: 2026-01-31 07:07:38.192775728 +0000 UTC m=+0.151150749 container start 7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133 (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:07:38 compute-0 podman[88960]: 2026-01-31 07:07:38.204784493 +0000 UTC m=+0.163159544 container attach 7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133 (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 31 07:07:38 compute-0 ceph-mon[74496]: pgmap v92: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 07:07:38 compute-0 ceph-mon[74496]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1530384074' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 07:07:38 compute-0 ceph-mon[74496]: osdmap e31: 3 total, 2 up, 3 in
Jan 31 07:07:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:38 compute-0 ceph-mon[74496]: OSD bench result of 6665.721838 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 07:07:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 31 07:07:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2153731720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 31 07:07:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 07:07:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174] boot
Jan 31 07:07:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 31 07:07:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 07:07:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v95: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Jan 31 07:07:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:07:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 31 07:07:39 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 31 07:07:39 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 31 07:07:39 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 31 07:07:39 compute-0 sudo[89000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-0 sudo[89000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89000]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:39 compute-0 sudo[89025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 07:07:39 compute-0 sudo[89025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89025]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 sudo[89050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-0 sudo[89050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89050]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2153731720' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mon[74496]: osd.2 [v2:192.168.122.102:6800/2867694174,v1:192.168.122.102:6801/2867694174] boot
Jan 31 07:07:39 compute-0 ceph-mon[74496]: osdmap e32: 3 total, 3 up, 3 in
Jan 31 07:07:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mon[74496]: 2.5 deep-scrub starts
Jan 31 07:07:39 compute-0 ceph-mon[74496]: 2.5 deep-scrub ok
Jan 31 07:07:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 31 07:07:39 compute-0 sudo[89075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph
Jan 31 07:07:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2153731720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 07:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 07:07:39 compute-0 brave_aryabhata[88975]: enabled application 'rbd' on pool 'images'
Jan 31 07:07:39 compute-0 sudo[89075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89075]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 31 07:07:39 compute-0 systemd[1]: libpod-7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133.scope: Deactivated successfully.
Jan 31 07:07:39 compute-0 podman[88960]: 2026-01-31 07:07:39.735820979 +0000 UTC m=+1.694196010 container died 7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133 (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.1e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=5.843027115s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171333313s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.11( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.11( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.18( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=5.842964172s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171333313s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.1f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.1f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.19( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=5.842694759s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171165466s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.19( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=5.842669010s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171165466s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=5.842631340s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171188354s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.17( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=5.842613697s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.171188354s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.10( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092739105s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421424866s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.1e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.13( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.16( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092720509s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421424866s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.10( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.13( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092975140s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421859741s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092951775s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421859741s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092453003s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421455383s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.14( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092438698s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421455383s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.15( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092626572s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421707153s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.15( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.13( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092610836s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421707153s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.14( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.14( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092557430s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421737671s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.12( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092538834s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421737671s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.17( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.17( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092475414s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421768188s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092459202s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421768188s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.16( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092332840s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421676636s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.16( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.9( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.9( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.10( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092307568s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421676636s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092304230s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421775818s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092282295s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421775818s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.8( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.8( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092265129s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421806335s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092231750s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421806335s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092205048s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421813965s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092188358s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421813965s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092141151s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421852112s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092105865s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421852112s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092071533s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421905518s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.092053890s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421905518s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091988564s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421920776s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.6( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.6( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091945171s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421920776s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=25/26 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091852665s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421867371s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=25/26 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091831207s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421867371s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.1( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.1( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091820717s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421943665s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=5.309773922s) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.639915466s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091783524s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421936035s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.7( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091794014s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421943665s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=32 pruub=5.309749603s) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 75.639915466s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.6( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091763020s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421936035s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.3( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.3( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091721058s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422050476s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.5( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091700554s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422050476s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.12( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.12( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.7( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.7( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.2( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091344357s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422035217s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091245174s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421989441s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091333866s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422088623s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.2( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091288090s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422035217s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.3( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091307640s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422088623s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.2( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.4( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091199875s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422042847s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.5( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.2( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.4( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091165066s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422042847s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.5( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.1( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090991974s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.421989441s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.091009617s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422035217s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.f( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.8( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090982914s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422035217s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090961456s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422065735s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090942860s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422065735s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090929985s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422103882s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.1d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090915203s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422103882s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.1d( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.1c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090595722s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422111511s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.1c( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.1b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.1c( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090563297s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422111511s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.1b( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090557575s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422126770s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090540409s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422126770s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090455055s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.18( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090415478s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[5.19( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.18( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[5.19( empty local-lis/les=20/21 n=0 ec=27/20 lis/c=20/20 les/c/f=21/21/0 sis=32) [2] r=-1 lpr=32 pi=[20,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.1e( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090381145s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 32 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090289116s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.1f( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090269566s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=25/26 n=0 ec=25/16 lis/c=25/25 les/c/f=26/26/0 sis=32 pruub=6.090229988s) [2] r=-1 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.422157288s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-62ae80d40f8c81fa5c0315dbe4c9651348c80b280e871a9132317398ff9acbe7-merged.mount: Deactivated successfully.
Jan 31 07:07:39 compute-0 sudo[89101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-0 sudo[89101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89101]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 podman[88960]: 2026-01-31 07:07:39.7990409 +0000 UTC m=+1.757415921 container remove 7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133 (image=quay.io/ceph/ceph:v18, name=brave_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:07:39 compute-0 systemd[1]: libpod-conmon-7c40e3069aca18ac8f6ca47e94aa7d34b5dc4a3c1136dddd29bca1a41569b133.scope: Deactivated successfully.
Jan 31 07:07:39 compute-0 sudo[88957]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 sudo[89139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:07:39 compute-0 sudo[89139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89139]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 sudo[89164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-0 sudo[89164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89164]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 sudo[89189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:39 compute-0 sudo[89189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89189]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 sudo[89240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvncfyxieoszyrwfyjsnorjsmjqcedko ; /usr/bin/python3'
Jan 31 07:07:39 compute-0 sudo[89240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:39 compute-0 sudo[89237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:39 compute-0 sudo[89237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:39 compute-0 sudo[89265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:07:39 compute-0 sudo[89265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:39 compute-0 sudo[89265]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 python3[89259]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:40 compute-0 sudo[89313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89313]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 podman[89314]: 2026-01-31 07:07:40.107018941 +0000 UTC m=+0.042585249 container create 60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a (image=quay.io/ceph/ceph:v18, name=quirky_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:07:40 compute-0 systemd[1]: Started libpod-conmon-60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a.scope.
Jan 31 07:07:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:40 compute-0 sudo[89351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93136d0a9b19bbb8fd4b38bcfa93caaa43a6629e4a465bfc2470ba0ba97890f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93136d0a9b19bbb8fd4b38bcfa93caaa43a6629e4a465bfc2470ba0ba97890f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:40 compute-0 sudo[89351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89351]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 podman[89314]: 2026-01-31 07:07:40.087734186 +0000 UTC m=+0.023300514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:40 compute-0 podman[89314]: 2026-01-31 07:07:40.184947807 +0000 UTC m=+0.120514115 container init 60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a (image=quay.io/ceph/ceph:v18, name=quirky_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:40 compute-0 podman[89314]: 2026-01-31 07:07:40.19056333 +0000 UTC m=+0.126129638 container start 60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a (image=quay.io/ceph/ceph:v18, name=quirky_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:07:40 compute-0 podman[89314]: 2026-01-31 07:07:40.199516987 +0000 UTC m=+0.135083305 container attach 60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a (image=quay.io/ceph/ceph:v18, name=quirky_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:07:40 compute-0 sudo[89382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89382]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new
Jan 31 07:07:40 compute-0 sudo[89408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89408]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:40 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:40 compute-0 sudo[89433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89433]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 07:07:40 compute-0 sudo[89458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89458]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:40 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:40 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:40 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:40 compute-0 sudo[89483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:07:40 compute-0 sudo[89508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89508]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89533]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 31 07:07:40 compute-0 sudo[89560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config
Jan 31 07:07:40 compute-0 sudo[89560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89560]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 31 07:07:40 compute-0 sudo[89602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89602]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:07:40 compute-0 sudo[89627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89627]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89652]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:40 compute-0 sudo[89677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 ceph-mon[74496]: pgmap v95: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:40 compute-0 ceph-mon[74496]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 07:07:40 compute-0 ceph-mon[74496]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 07:07:40 compute-0 ceph-mon[74496]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 07:07:40 compute-0 ceph-mon[74496]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 07:07:40 compute-0 ceph-mon[74496]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 07:07:40 compute-0 ceph-mon[74496]: 4.3 scrub starts
Jan 31 07:07:40 compute-0 ceph-mon[74496]: 4.3 scrub ok
Jan 31 07:07:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2153731720' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 07:07:40 compute-0 ceph-mon[74496]: osdmap e33: 3 total, 3 up, 3 in
Jan 31 07:07:40 compute-0 sudo[89677]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 31 07:07:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 31 07:07:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4176655533' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 07:07:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 07:07:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 31 07:07:40 compute-0 sudo[89702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89702]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:07:40 compute-0 sudo[89728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89728]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:40 compute-0 sudo[89776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:40 compute-0 sudo[89776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:40 compute-0 sudo[89776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 sudo[89801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:07:41 compute-0 sudo[89801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 sudo[89801]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:41 compute-0 sudo[89826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:41 compute-0 sudo[89826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 sudo[89826]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 sudo[89851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new
Jan 31 07:07:41 compute-0 sudo[89851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 sudo[89851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 sudo[89876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:41 compute-0 sudo[89876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 sudo[89876]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 sudo[89901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-f70fcd2a-dcb4-5f89-a4ba-79a09959083b/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf.new /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:41 compute-0 sudo[89901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 sudo[89901]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:07:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v98: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a201b9b4-d0d3-4796-9cf1-d4e182879e92 does not exist
Jan 31 07:07:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e4cba558-48c3-484e-92d8-0d373f4c4cb9 does not exist
Jan 31 07:07:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b68621b7-bb6d-407b-ab0d-69bb3029486f does not exist
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:41 compute-0 sudo[89926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:41 compute-0 sudo[89926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 sudo[89926]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 31 07:07:41 compute-0 sudo[89951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:41 compute-0 sudo[89951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 sudo[89951]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 31 07:07:41 compute-0 sudo[89976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:41 compute-0 sudo[89976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 sudo[89976]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 sudo[90001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:07:41 compute-0 sudo[90001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:41 compute-0 ceph-mon[74496]: Updating compute-2:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:41 compute-0 ceph-mon[74496]: Updating compute-0:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:41 compute-0 ceph-mon[74496]: Updating compute-1:/var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/config/ceph.conf
Jan 31 07:07:41 compute-0 ceph-mon[74496]: 4.4 scrub starts
Jan 31 07:07:41 compute-0 ceph-mon[74496]: 4.4 scrub ok
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4176655533' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 07:07:41 compute-0 ceph-mon[74496]: osdmap e34: 3 total, 3 up, 3 in
Jan 31 07:07:41 compute-0 ceph-mon[74496]: 3.18 scrub starts
Jan 31 07:07:41 compute-0 ceph-mon[74496]: 3.18 scrub ok
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:07:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4176655533' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 07:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 07:07:41 compute-0 quirky_brahmagupta[89377]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 31 07:07:41 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 31 07:07:41 compute-0 systemd[1]: libpod-60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a.scope: Deactivated successfully.
Jan 31 07:07:41 compute-0 podman[89314]: 2026-01-31 07:07:41.797903716 +0000 UTC m=+1.733470024 container died 60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a (image=quay.io/ceph/ceph:v18, name=quirky_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c93136d0a9b19bbb8fd4b38bcfa93caaa43a6629e4a465bfc2470ba0ba97890f-merged.mount: Deactivated successfully.
Jan 31 07:07:41 compute-0 podman[89314]: 2026-01-31 07:07:41.870608597 +0000 UTC m=+1.806174915 container remove 60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a (image=quay.io/ceph/ceph:v18, name=quirky_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:07:41 compute-0 systemd[1]: libpod-conmon-60ba30e2b361b8f0442a48d9d3b6a133cd6a5539867ba4703e57cb1ecb02ad5a.scope: Deactivated successfully.
Jan 31 07:07:41 compute-0 sudo[89240]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:41 compute-0 podman[90076]: 2026-01-31 07:07:41.978058192 +0000 UTC m=+0.057838234 container create 26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:42 compute-0 sudo[90113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anpqppbzobhwfgrbjutvnaqtrgoqvqor ; /usr/bin/python3'
Jan 31 07:07:42 compute-0 sudo[90113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:42 compute-0 systemd[1]: Started libpod-conmon-26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c.scope.
Jan 31 07:07:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:42 compute-0 podman[90076]: 2026-01-31 07:07:41.954218717 +0000 UTC m=+0.033998759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:42 compute-0 podman[90076]: 2026-01-31 07:07:42.061025919 +0000 UTC m=+0.140805931 container init 26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:07:42 compute-0 podman[90076]: 2026-01-31 07:07:42.068120345 +0000 UTC m=+0.147900367 container start 26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:42 compute-0 brave_perlman[90118]: 167 167
Jan 31 07:07:42 compute-0 systemd[1]: libpod-26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c.scope: Deactivated successfully.
Jan 31 07:07:42 compute-0 podman[90076]: 2026-01-31 07:07:42.075942137 +0000 UTC m=+0.155722139 container attach 26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:07:42 compute-0 podman[90076]: 2026-01-31 07:07:42.076280855 +0000 UTC m=+0.156060857 container died 26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:07:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c241d6be56e8c50265f3cfa83eba7e71762968c090b0f9d81b7a898253b6d3e-merged.mount: Deactivated successfully.
Jan 31 07:07:42 compute-0 podman[90076]: 2026-01-31 07:07:42.15101665 +0000 UTC m=+0.230796652 container remove 26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:42 compute-0 python3[90117]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:42 compute-0 systemd[1]: libpod-conmon-26790ea13b4ae683df5c7e369fa751877f3f7d402662cd49615f38a3800e561c.scope: Deactivated successfully.
Jan 31 07:07:42 compute-0 podman[90137]: 2026-01-31 07:07:42.230185833 +0000 UTC m=+0.064626114 container create be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42 (image=quay.io/ceph/ceph:v18, name=crazy_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:42 compute-0 systemd[1]: Started libpod-conmon-be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42.scope.
Jan 31 07:07:42 compute-0 podman[90137]: 2026-01-31 07:07:42.190888227 +0000 UTC m=+0.025328518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbef23b435865db9dc5039141c02d40eab1157c4ec519e87fde5baab46833a7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbef23b435865db9dc5039141c02d40eab1157c4ec519e87fde5baab46833a7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:42 compute-0 podman[90157]: 2026-01-31 07:07:42.306195396 +0000 UTC m=+0.065497073 container create 5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tesla, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:42 compute-0 podman[90137]: 2026-01-31 07:07:42.314200233 +0000 UTC m=+0.148640534 container init be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42 (image=quay.io/ceph/ceph:v18, name=crazy_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:42 compute-0 podman[90137]: 2026-01-31 07:07:42.322823623 +0000 UTC m=+0.157263914 container start be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42 (image=quay.io/ceph/ceph:v18, name=crazy_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:07:42 compute-0 podman[90137]: 2026-01-31 07:07:42.332852153 +0000 UTC m=+0.167292434 container attach be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42 (image=quay.io/ceph/ceph:v18, name=crazy_herschel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:42 compute-0 systemd[1]: Started libpod-conmon-5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b.scope.
Jan 31 07:07:42 compute-0 podman[90157]: 2026-01-31 07:07:42.279738644 +0000 UTC m=+0.039040401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed0535018c3856fb884dd039f225cd11328171acab728c7650984708d370cee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed0535018c3856fb884dd039f225cd11328171acab728c7650984708d370cee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed0535018c3856fb884dd039f225cd11328171acab728c7650984708d370cee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed0535018c3856fb884dd039f225cd11328171acab728c7650984708d370cee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ed0535018c3856fb884dd039f225cd11328171acab728c7650984708d370cee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:42 compute-0 podman[90157]: 2026-01-31 07:07:42.41632154 +0000 UTC m=+0.175623237 container init 5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tesla, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:42 compute-0 podman[90157]: 2026-01-31 07:07:42.425891931 +0000 UTC m=+0.185193638 container start 5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tesla, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:07:42 compute-0 podman[90157]: 2026-01-31 07:07:42.432309303 +0000 UTC m=+0.191611010 container attach 5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tesla, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 07:07:42 compute-0 ceph-mon[74496]: pgmap v98: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:42 compute-0 ceph-mon[74496]: 4.5 scrub starts
Jan 31 07:07:42 compute-0 ceph-mon[74496]: 4.5 scrub ok
Jan 31 07:07:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4176655533' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 07:07:42 compute-0 ceph-mon[74496]: 2.6 scrub starts
Jan 31 07:07:42 compute-0 ceph-mon[74496]: osdmap e35: 3 total, 3 up, 3 in
Jan 31 07:07:42 compute-0 ceph-mon[74496]: 2.6 scrub ok
Jan 31 07:07:42 compute-0 ceph-mon[74496]: 5.11 scrub starts
Jan 31 07:07:42 compute-0 ceph-mon[74496]: 5.11 scrub ok
Jan 31 07:07:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 31 07:07:42 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1645037556' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 07:07:43 compute-0 confident_tesla[90179]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:07:43 compute-0 confident_tesla[90179]: --> relative data size: 1.0
Jan 31 07:07:43 compute-0 confident_tesla[90179]: --> All data devices are unavailable
Jan 31 07:07:43 compute-0 systemd[1]: libpod-5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b.scope: Deactivated successfully.
Jan 31 07:07:43 compute-0 podman[90157]: 2026-01-31 07:07:43.265780321 +0000 UTC m=+1.025081998 container died 5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:07:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ed0535018c3856fb884dd039f225cd11328171acab728c7650984708d370cee-merged.mount: Deactivated successfully.
Jan 31 07:07:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v100: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:43 compute-0 podman[90157]: 2026-01-31 07:07:43.331972948 +0000 UTC m=+1.091274625 container remove 5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:07:43 compute-0 systemd[1]: libpod-conmon-5e4a5d0855f85d6df9357da8cf22dfcdb2a88b4fa365fc58a9bfb201fc67499b.scope: Deactivated successfully.
Jan 31 07:07:43 compute-0 sudo[90001]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:43 compute-0 sudo[90228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:43 compute-0 sudo[90228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:43 compute-0 sudo[90228]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:43 compute-0 sudo[90253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:43 compute-0 sudo[90253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:43 compute-0 sudo[90253]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:43 compute-0 sudo[90278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:43 compute-0 sudo[90278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:43 compute-0 sudo[90278]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:43 compute-0 sudo[90303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:07:43 compute-0 sudo[90303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 31 07:07:43 compute-0 podman[90367]: 2026-01-31 07:07:43.812008117 +0000 UTC m=+0.020597784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:44 compute-0 podman[90367]: 2026-01-31 07:07:44.061777226 +0000 UTC m=+0.270366863 container create 275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:07:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1645037556' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 07:07:44 compute-0 systemd[1]: Started libpod-conmon-275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f.scope.
Jan 31 07:07:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1645037556' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 07:07:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 07:07:44 compute-0 crazy_herschel[90171]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 31 07:07:44 compute-0 systemd[1]: libpod-be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42.scope: Deactivated successfully.
Jan 31 07:07:44 compute-0 podman[90367]: 2026-01-31 07:07:44.435614806 +0000 UTC m=+0.644204473 container init 275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:44 compute-0 podman[90137]: 2026-01-31 07:07:44.436566236 +0000 UTC m=+2.271006517 container died be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42 (image=quay.io/ceph/ceph:v18, name=crazy_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:07:44 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 31 07:07:44 compute-0 podman[90367]: 2026-01-31 07:07:44.444189935 +0000 UTC m=+0.652779602 container start 275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:44 compute-0 hopeful_beaver[90383]: 167 167
Jan 31 07:07:44 compute-0 systemd[1]: libpod-275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f.scope: Deactivated successfully.
Jan 31 07:07:44 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 31 07:07:44 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 31 07:07:44 compute-0 podman[90367]: 2026-01-31 07:07:44.602180013 +0000 UTC m=+0.810769730 container attach 275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:07:44 compute-0 podman[90367]: 2026-01-31 07:07:44.602657373 +0000 UTC m=+0.811247040 container died 275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:07:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-22274c9c16f315740bef6c458f9199b2c789906103ea22ae3138572e365176df-merged.mount: Deactivated successfully.
Jan 31 07:07:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v102: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:07:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 07:07:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 07:07:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 31 07:07:45 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event e88483f0-dae4-4afc-b525-62593a21ce81 (Global Recovery Event) in 21 seconds
Jan 31 07:07:45 compute-0 ceph-mon[74496]: pgmap v100: 131 pgs: 62 unknown, 69 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:45 compute-0 ceph-mon[74496]: 2.7 scrub starts
Jan 31 07:07:45 compute-0 ceph-mon[74496]: 2.7 scrub ok
Jan 31 07:07:45 compute-0 ceph-mon[74496]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:07:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1645037556' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 07:07:45 compute-0 ceph-mon[74496]: osdmap e36: 3 total, 3 up, 3 in
Jan 31 07:07:45 compute-0 ceph-mon[74496]: 3.17 scrub starts
Jan 31 07:07:45 compute-0 ceph-mon[74496]: 3.17 scrub ok
Jan 31 07:07:45 compute-0 podman[90395]: 2026-01-31 07:07:45.709888549 +0000 UTC m=+1.244442057 container remove 275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_beaver, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:45 compute-0 systemd[1]: libpod-conmon-275b41b9eca6a14dd1b6286b4a88a07cd109687b9283339306e99844d462ed0f.scope: Deactivated successfully.
Jan 31 07:07:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 07:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbef23b435865db9dc5039141c02d40eab1157c4ec519e87fde5baab46833a7e-merged.mount: Deactivated successfully.
Jan 31 07:07:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.19( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.1f( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.1e( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.1d( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.4( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.5( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.2( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.1( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.3( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.6( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.7( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.6( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.c( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.b( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.a( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.17( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.12( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.14( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.17( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[5.1e( empty local-lis/les=0/0 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.18( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[3.19( empty local-lis/les=0/0 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184813499s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836410522s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184780121s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836410522s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.185380936s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837127686s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.185363770s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837127686s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184702873s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836631775s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184571266s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836494446s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184684753s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836631775s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184483528s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836494446s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184591293s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836662292s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184571266s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836662292s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184303284s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836563110s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184268951s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836563110s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184422493s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836791992s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184402466s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836791992s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184267998s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836769104s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184233665s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836769104s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184261322s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837135315s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184123039s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837005615s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184227943s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837135315s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184085846s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837005615s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184286118s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836898804s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184000015s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837005615s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183897972s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836898804s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183959961s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837005615s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184455872s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837554932s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.184429169s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837554932s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183897972s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837158203s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183839798s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837120056s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183875084s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837158203s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183860779s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837188721s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183807373s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837120056s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183838844s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837188721s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183739662s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837203979s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183718681s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837203979s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183684349s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837219238s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183015823s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.836555481s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183927536s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837486267s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.182974815s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.836555481s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183645248s) [2] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837219238s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183902740s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837486267s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183435440s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 active pruub 85.837188721s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=26/27 n=0 ec=26/18 lis/c=26/26 les/c/f=27/27/0 sis=37 pruub=9.183411598s) [1] r=-1 lpr=37 pi=[26,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.837188721s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[2.1e( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[2.1f( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[2.9( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[2.6( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[2.4( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[2.1( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[2.e( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 37 pg[2.19( empty local-lis/les=0/0 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:07:46 compute-0 podman[90137]: 2026-01-31 07:07:46.079563747 +0000 UTC m=+3.914004028 container remove be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42 (image=quay.io/ceph/ceph:v18, name=crazy_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:07:46 compute-0 sudo[90113]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:46 compute-0 systemd[1]: libpod-conmon-be0f7c6251af205bcab013efb37892c1025f80cfd7601343b1f5a1ba91e9af42.scope: Deactivated successfully.
Jan 31 07:07:46 compute-0 podman[90424]: 2026-01-31 07:07:46.195346827 +0000 UTC m=+0.382809910 container create 0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:07:46 compute-0 systemd[1]: Started libpod-conmon-0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125.scope.
Jan 31 07:07:46 compute-0 podman[90424]: 2026-01-31 07:07:46.165327635 +0000 UTC m=+0.352790798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1b741ce29039372ac0ed729f20d651ea25a0b2f5062a35b29490ba88e1dd483/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1b741ce29039372ac0ed729f20d651ea25a0b2f5062a35b29490ba88e1dd483/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1b741ce29039372ac0ed729f20d651ea25a0b2f5062a35b29490ba88e1dd483/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1b741ce29039372ac0ed729f20d651ea25a0b2f5062a35b29490ba88e1dd483/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:46 compute-0 podman[90424]: 2026-01-31 07:07:46.310269437 +0000 UTC m=+0.497732540 container init 0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 31 07:07:46 compute-0 podman[90424]: 2026-01-31 07:07:46.316678427 +0000 UTC m=+0.504141510 container start 0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:07:46 compute-0 podman[90424]: 2026-01-31 07:07:46.330118603 +0000 UTC m=+0.517581736 container attach 0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:46 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 31 07:07:46 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 31 07:07:46 compute-0 ceph-mon[74496]: 4.6 scrub starts
Jan 31 07:07:46 compute-0 ceph-mon[74496]: 4.6 scrub ok
Jan 31 07:07:46 compute-0 ceph-mon[74496]: pgmap v102: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:07:46 compute-0 ceph-mon[74496]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 07:07:46 compute-0 ceph-mon[74496]: Cluster is now healthy
Jan 31 07:07:46 compute-0 ceph-mon[74496]: 5.1e deep-scrub starts
Jan 31 07:07:46 compute-0 ceph-mon[74496]: 5.1e deep-scrub ok
Jan 31 07:07:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:07:46 compute-0 ceph-mon[74496]: osdmap e37: 3 total, 3 up, 3 in
Jan 31 07:07:46 compute-0 python3[90521]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:07:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]: {
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:     "0": [
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:         {
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "devices": [
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "/dev/loop3"
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             ],
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "lv_name": "ceph_lv0",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "lv_size": "7511998464",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "name": "ceph_lv0",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "tags": {
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.cluster_name": "ceph",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.crush_device_class": "",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.encrypted": "0",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.osd_id": "0",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.type": "block",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:                 "ceph.vdo": "0"
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             },
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "type": "block",
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:             "vg_name": "ceph_vg0"
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:         }
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]:     ]
Jan 31 07:07:47 compute-0 interesting_mccarthy[90441]: }
Jan 31 07:07:47 compute-0 systemd[1]: libpod-0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125.scope: Deactivated successfully.
Jan 31 07:07:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 07:07:47 compute-0 podman[90567]: 2026-01-31 07:07:47.113454028 +0000 UTC m=+0.029952750 container died 0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:07:47 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 31 07:07:47 compute-0 python3[90609]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843266.6970322-37510-117742431741904/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:07:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.19( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[2.1f( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.1d( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[2.1e( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[2.9( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.5( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.3( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[2.4( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.6( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[2.6( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[2.1( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.c( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.a( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.17( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[2.e( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.14( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[5.1e( empty local-lis/les=37/38 n=0 ec=27/20 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=25/16 lis/c=32/32 les/c/f=34/34/0 sis=37) [0] r=0 lpr=37 pi=[32,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 38 pg[2.19( empty local-lis/les=37/38 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=37) [0] r=0 lpr=37 pi=[23,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1b741ce29039372ac0ed729f20d651ea25a0b2f5062a35b29490ba88e1dd483-merged.mount: Deactivated successfully.
Jan 31 07:07:47 compute-0 ceph-mon[74496]: 4.7 scrub starts
Jan 31 07:07:47 compute-0 ceph-mon[74496]: 4.7 scrub ok
Jan 31 07:07:47 compute-0 ceph-mon[74496]: 2.8 scrub starts
Jan 31 07:07:47 compute-0 ceph-mon[74496]: 2.8 scrub ok
Jan 31 07:07:47 compute-0 ceph-mon[74496]: osdmap e38: 3 total, 3 up, 3 in
Jan 31 07:07:47 compute-0 sudo[90711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxuudyybhxqlpgqrebipavntyypqupuf ; /usr/bin/python3'
Jan 31 07:07:47 compute-0 sudo[90711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:48 compute-0 python3[90713]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:07:48 compute-0 sudo[90711]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:48 compute-0 sudo[90786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgnayyvmcdnujvrzoyukwobssekuqtqp ; /usr/bin/python3'
Jan 31 07:07:48 compute-0 sudo[90786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:48 compute-0 python3[90788]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843267.6148493-37524-165511096176576/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=9578ca43d703ba54a199f3ccb3eff6f743698f3b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:07:48 compute-0 sudo[90786]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:48 compute-0 podman[90567]: 2026-01-31 07:07:48.417337414 +0000 UTC m=+1.333836116 container remove 0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:48 compute-0 systemd[1]: libpod-conmon-0ee84d0df9a04d3e586bed87889c8b2e604dd48bae2d3e2dc856e4c95af8e125.scope: Deactivated successfully.
Jan 31 07:07:48 compute-0 sudo[90303]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:48 compute-0 sudo[90812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:48 compute-0 sudo[90812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:48 compute-0 sudo[90812]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:48 compute-0 sudo[90838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:48 compute-0 sudo[90838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:48 compute-0 sudo[90838]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:48 compute-0 sudo[90885]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdwvrewkqatklpxuhuyomuhdlccitpgu ; /usr/bin/python3'
Jan 31 07:07:48 compute-0 sudo[90885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:48 compute-0 sudo[90889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:48 compute-0 sudo[90889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:48 compute-0 sudo[90889]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:48 compute-0 sudo[90914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:07:48 compute-0 sudo[90914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:48 compute-0 ceph-mon[74496]: pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:48 compute-0 python3[90888]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:48 compute-0 podman[90939]: 2026-01-31 07:07:48.850939169 +0000 UTC m=+0.042753202 container create bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d (image=quay.io/ceph/ceph:v18, name=thirsty_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:07:48 compute-0 systemd[1]: Started libpod-conmon-bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d.scope.
Jan 31 07:07:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df57019374e31b342f963aa93de0d4447ac6fa095838f34cdfde1cd1b9ddb25/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df57019374e31b342f963aa93de0d4447ac6fa095838f34cdfde1cd1b9ddb25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df57019374e31b342f963aa93de0d4447ac6fa095838f34cdfde1cd1b9ddb25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:48 compute-0 podman[90939]: 2026-01-31 07:07:48.831144743 +0000 UTC m=+0.022958796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:48 compute-0 podman[90939]: 2026-01-31 07:07:48.933326843 +0000 UTC m=+0.125140976 container init bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d (image=quay.io/ceph/ceph:v18, name=thirsty_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:07:48 compute-0 podman[90939]: 2026-01-31 07:07:48.939184902 +0000 UTC m=+0.130998935 container start bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d (image=quay.io/ceph/ceph:v18, name=thirsty_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:48 compute-0 podman[90939]: 2026-01-31 07:07:48.946545804 +0000 UTC m=+0.138359927 container attach bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d (image=quay.io/ceph/ceph:v18, name=thirsty_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:07:49 compute-0 podman[90998]: 2026-01-31 07:07:49.066677759 +0000 UTC m=+0.052088788 container create 3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_khorana, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:49 compute-0 systemd[1]: Started libpod-conmon-3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f.scope.
Jan 31 07:07:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:49 compute-0 podman[90998]: 2026-01-31 07:07:49.043322955 +0000 UTC m=+0.028734074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:49 compute-0 podman[90998]: 2026-01-31 07:07:49.139185435 +0000 UTC m=+0.124596474 container init 3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:07:49 compute-0 podman[90998]: 2026-01-31 07:07:49.146855684 +0000 UTC m=+0.132266713 container start 3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_khorana, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:07:49 compute-0 elegant_khorana[91014]: 167 167
Jan 31 07:07:49 compute-0 systemd[1]: libpod-3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f.scope: Deactivated successfully.
Jan 31 07:07:49 compute-0 conmon[91014]: conmon 3d7e8c59545bd20ebf8d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f.scope/container/memory.events
Jan 31 07:07:49 compute-0 podman[90998]: 2026-01-31 07:07:49.152058658 +0000 UTC m=+0.137469737 container attach 3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:07:49 compute-0 podman[90998]: 2026-01-31 07:07:49.153462299 +0000 UTC m=+0.138873338 container died 3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_khorana, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-70365244c452c0152a86fd262c6022cf9085c4d15299a0d1338b6a84a1fe4e44-merged.mount: Deactivated successfully.
Jan 31 07:07:49 compute-0 podman[90998]: 2026-01-31 07:07:49.208367168 +0000 UTC m=+0.193778237 container remove 3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_khorana, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:07:49 compute-0 systemd[1]: libpod-conmon-3d7e8c59545bd20ebf8dfd5b8dfac9e8c757231911ecb9c78652f35243cccf8f.scope: Deactivated successfully.
Jan 31 07:07:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v106: 131 pgs: 30 activating, 101 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:49 compute-0 podman[91057]: 2026-01-31 07:07:49.363500633 +0000 UTC m=+0.050062442 container create b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:49 compute-0 systemd[1]: Started libpod-conmon-b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292.scope.
Jan 31 07:07:49 compute-0 podman[91057]: 2026-01-31 07:07:49.343504933 +0000 UTC m=+0.030066752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5534db440bbd9d1441c5fef7c7a2081e643508c59620b9ad1729a30061af537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5534db440bbd9d1441c5fef7c7a2081e643508c59620b9ad1729a30061af537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5534db440bbd9d1441c5fef7c7a2081e643508c59620b9ad1729a30061af537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5534db440bbd9d1441c5fef7c7a2081e643508c59620b9ad1729a30061af537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:49 compute-0 podman[91057]: 2026-01-31 07:07:49.479313093 +0000 UTC m=+0.165874902 container init b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:07:49 compute-0 podman[91057]: 2026-01-31 07:07:49.485727455 +0000 UTC m=+0.172289254 container start b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:49 compute-0 podman[91057]: 2026-01-31 07:07:49.492261858 +0000 UTC m=+0.178823687 container attach b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 07:07:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 07:07:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3678408126' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 07:07:49 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 31 07:07:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3678408126' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 07:07:49 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 31 07:07:49 compute-0 thirsty_mahavira[90980]: 
Jan 31 07:07:49 compute-0 thirsty_mahavira[90980]: [global]
Jan 31 07:07:49 compute-0 thirsty_mahavira[90980]:         fsid = f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:49 compute-0 thirsty_mahavira[90980]:         mon_host = 192.168.122.100
Jan 31 07:07:49 compute-0 systemd[1]: libpod-bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d.scope: Deactivated successfully.
Jan 31 07:07:49 compute-0 podman[90939]: 2026-01-31 07:07:49.545815137 +0000 UTC m=+0.737629170 container died bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d (image=quay.io/ceph/ceph:v18, name=thirsty_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0df57019374e31b342f963aa93de0d4447ac6fa095838f34cdfde1cd1b9ddb25-merged.mount: Deactivated successfully.
Jan 31 07:07:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:49 compute-0 podman[90939]: 2026-01-31 07:07:49.607802832 +0000 UTC m=+0.799616885 container remove bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d (image=quay.io/ceph/ceph:v18, name=thirsty_mahavira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:49 compute-0 systemd[1]: libpod-conmon-bc9c17ffea1e33df0a950ebd7c153d0d8daac218dd2064aecdd8abb128b2893d.scope: Deactivated successfully.
Jan 31 07:07:49 compute-0 sudo[90885]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:49 compute-0 sudo[91117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxrgbvohwlnxxqgjklosmtgitucxqtpw ; /usr/bin/python3'
Jan 31 07:07:49 compute-0 sudo[91117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:49 compute-0 ceph-mon[74496]: 2.11 scrub starts
Jan 31 07:07:49 compute-0 ceph-mon[74496]: 2.11 scrub ok
Jan 31 07:07:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3678408126' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 07:07:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3678408126' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 07:07:49 compute-0 python3[91119]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:07:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:07:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:07:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:07:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:07:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:07:50 compute-0 podman[91120]: 2026-01-31 07:07:50.010569628 +0000 UTC m=+0.075327938 container create a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d (image=quay.io/ceph/ceph:v18, name=gallant_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:50 compute-0 systemd[1]: Started libpod-conmon-a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d.scope.
Jan 31 07:07:50 compute-0 podman[91120]: 2026-01-31 07:07:49.991541 +0000 UTC m=+0.056299330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2bf1a1fc55f5558e664dc5e8abf31c8c187ad76ed13ba5d11540416215d3e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2bf1a1fc55f5558e664dc5e8abf31c8c187ad76ed13ba5d11540416215d3e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a2bf1a1fc55f5558e664dc5e8abf31c8c187ad76ed13ba5d11540416215d3e9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:50 compute-0 podman[91120]: 2026-01-31 07:07:50.111246485 +0000 UTC m=+0.176004845 container init a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d (image=quay.io/ceph/ceph:v18, name=gallant_bouman, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:50 compute-0 podman[91120]: 2026-01-31 07:07:50.116733706 +0000 UTC m=+0.181492016 container start a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d (image=quay.io/ceph/ceph:v18, name=gallant_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:07:50 compute-0 podman[91120]: 2026-01-31 07:07:50.123131517 +0000 UTC m=+0.187889907 container attach a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d (image=quay.io/ceph/ceph:v18, name=gallant_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]: {
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]:         "osd_id": 0,
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]:         "type": "bluestore"
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]:     }
Jan 31 07:07:50 compute-0 wizardly_shtern[91073]: }
Jan 31 07:07:50 compute-0 systemd[1]: libpod-b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292.scope: Deactivated successfully.
Jan 31 07:07:50 compute-0 podman[91155]: 2026-01-31 07:07:50.380925822 +0000 UTC m=+0.035364529 container died b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5534db440bbd9d1441c5fef7c7a2081e643508c59620b9ad1729a30061af537-merged.mount: Deactivated successfully.
Jan 31 07:07:50 compute-0 podman[91155]: 2026-01-31 07:07:50.462430507 +0000 UTC m=+0.116869174 container remove b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:07:50 compute-0 systemd[1]: libpod-conmon-b364c4bac359ad2a0496156b0bf630c4434691fc898faea17df41351dea36292.scope: Deactivated successfully.
Jan 31 07:07:50 compute-0 sudo[90914]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 10 completed events
Jan 31 07:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-0 sudo[91189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:50 compute-0 sudo[91189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:50 compute-0 sudo[91189]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:50 compute-0 sudo[91214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:07:50 compute-0 sudo[91214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:50 compute-0 sudo[91214]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 31 07:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3677494513' entity='client.admin' 
Jan 31 07:07:50 compute-0 gallant_bouman[91135]: set ssl_option
Jan 31 07:07:50 compute-0 systemd[1]: libpod-a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d.scope: Deactivated successfully.
Jan 31 07:07:50 compute-0 podman[91120]: 2026-01-31 07:07:50.803925995 +0000 UTC m=+0.868684375 container died a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d (image=quay.io/ceph/ceph:v18, name=gallant_bouman, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:07:50 compute-0 ceph-mon[74496]: pgmap v106: 131 pgs: 30 activating, 101 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:50 compute-0 ceph-mon[74496]: 4.b scrub starts
Jan 31 07:07:50 compute-0 ceph-mon[74496]: 4.b scrub ok
Jan 31 07:07:50 compute-0 ceph-mon[74496]: 2.14 deep-scrub starts
Jan 31 07:07:50 compute-0 ceph-mon[74496]: 2.14 deep-scrub ok
Jan 31 07:07:50 compute-0 ceph-mon[74496]: 3.15 scrub starts
Jan 31 07:07:50 compute-0 ceph-mon[74496]: 3.15 scrub ok
Jan 31 07:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3677494513' entity='client.admin' 
Jan 31 07:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a2bf1a1fc55f5558e664dc5e8abf31c8c187ad76ed13ba5d11540416215d3e9-merged.mount: Deactivated successfully.
Jan 31 07:07:50 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 07:07:50 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 07:07:50 compute-0 podman[91120]: 2026-01-31 07:07:50.876778499 +0000 UTC m=+0.941536809 container remove a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d (image=quay.io/ceph/ceph:v18, name=gallant_bouman, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 07:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 07:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:50 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:07:50 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:07:50 compute-0 systemd[1]: libpod-conmon-a71c1b2483d347230af362dbdf36c6b142783f9866fd66bd8eccadd56d7bf51d.scope: Deactivated successfully.
Jan 31 07:07:50 compute-0 sudo[91117]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:50 compute-0 sudo[91255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:50 compute-0 sudo[91255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:50 compute-0 sudo[91255]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:50 compute-0 sudo[91280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:50 compute-0 sudo[91280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:50 compute-0 sudo[91280]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:51 compute-0 sudo[91334]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xritunamgxbwbixzxlxetdzeyhvzumwr ; /usr/bin/python3'
Jan 31 07:07:51 compute-0 sudo[91334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:51 compute-0 sudo[91325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:51 compute-0 sudo[91325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:51 compute-0 sudo[91325]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:51 compute-0 sudo[91356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:51 compute-0 sudo[91356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:51 compute-0 python3[91353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:51 compute-0 podman[91381]: 2026-01-31 07:07:51.209762539 +0000 UTC m=+0.049017960 container create b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7 (image=quay.io/ceph/ceph:v18, name=focused_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:07:51 compute-0 systemd[1]: Started libpod-conmon-b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7.scope.
Jan 31 07:07:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea42cde4b2c249ecfdbeb20f41a8382eea1674124cd1c25956831f07d43a0b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea42cde4b2c249ecfdbeb20f41a8382eea1674124cd1c25956831f07d43a0b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea42cde4b2c249ecfdbeb20f41a8382eea1674124cd1c25956831f07d43a0b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:51 compute-0 podman[91381]: 2026-01-31 07:07:51.187680023 +0000 UTC m=+0.026935424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:51 compute-0 podman[91381]: 2026-01-31 07:07:51.293973163 +0000 UTC m=+0.133228564 container init b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7 (image=quay.io/ceph/ceph:v18, name=focused_gould, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:51 compute-0 podman[91381]: 2026-01-31 07:07:51.298518563 +0000 UTC m=+0.137773944 container start b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7 (image=quay.io/ceph/ceph:v18, name=focused_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:51 compute-0 podman[91416]: 2026-01-31 07:07:51.298842581 +0000 UTC m=+0.029034361 container create 3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_einstein, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:51 compute-0 podman[91381]: 2026-01-31 07:07:51.306890717 +0000 UTC m=+0.146146098 container attach b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7 (image=quay.io/ceph/ceph:v18, name=focused_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v107: 131 pgs: 30 activating, 101 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:51 compute-0 systemd[1]: Started libpod-conmon-3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d.scope.
Jan 31 07:07:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:51 compute-0 podman[91416]: 2026-01-31 07:07:51.377266567 +0000 UTC m=+0.107458377 container init 3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:51 compute-0 podman[91416]: 2026-01-31 07:07:51.382046412 +0000 UTC m=+0.112238192 container start 3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:51 compute-0 podman[91416]: 2026-01-31 07:07:51.285679521 +0000 UTC m=+0.015871321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:51 compute-0 suspicious_einstein[91433]: 167 167
Jan 31 07:07:51 compute-0 systemd[1]: libpod-3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d.scope: Deactivated successfully.
Jan 31 07:07:51 compute-0 podman[91416]: 2026-01-31 07:07:51.387256257 +0000 UTC m=+0.117448107 container attach 3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:51 compute-0 podman[91416]: 2026-01-31 07:07:51.387550104 +0000 UTC m=+0.117741914 container died 3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc9b7d9877655ce71ec6d4dd26336ed2d955a0c6f0e7b816bb46f8b0dd7bacf-merged.mount: Deactivated successfully.
Jan 31 07:07:51 compute-0 podman[91416]: 2026-01-31 07:07:51.448615077 +0000 UTC m=+0.178806867 container remove 3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:51 compute-0 systemd[1]: libpod-conmon-3d09b4686cdfb986bc153616c172be501b93af967f3b504b264c439af411d23d.scope: Deactivated successfully.
Jan 31 07:07:51 compute-0 sudo[91356]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:07:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:07:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.hhuoua (monmap changed)...
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.hhuoua (monmap changed)...
Jan 31 07:07:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhuoua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 07:07:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhuoua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 07:07:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.hhuoua on compute-0
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.hhuoua on compute-0
Jan 31 07:07:51 compute-0 sudo[91451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:51 compute-0 sudo[91451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:51 compute-0 sudo[91451]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:51 compute-0 sudo[91495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:51 compute-0 sudo[91495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:51 compute-0 sudo[91495]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:51 compute-0 sudo[91520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:51 compute-0 sudo[91520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:51 compute-0 sudo[91520]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:51 compute-0 sudo[91545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:51 compute-0 sudo[91545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14310 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 07:07:51 compute-0 ceph-mon[74496]: 5.13 scrub starts
Jan 31 07:07:51 compute-0 ceph-mon[74496]: 5.13 scrub ok
Jan 31 07:07:51 compute-0 ceph-mon[74496]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 07:07:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mon[74496]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 07:07:51 compute-0 ceph-mon[74496]: pgmap v107: 131 pgs: 30 activating, 101 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:51 compute-0 ceph-mon[74496]: Reconfiguring mgr.compute-0.hhuoua (monmap changed)...
Jan 31 07:07:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhuoua", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:51 compute-0 ceph-mon[74496]: Reconfiguring daemon mgr.compute-0.hhuoua on compute-0
Jan 31 07:07:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 31 07:07:51 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 31 07:07:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 07:07:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:51 compute-0 focused_gould[91413]: Scheduled rgw.rgw update...
Jan 31 07:07:51 compute-0 focused_gould[91413]: Scheduled ingress.rgw.default update...
Jan 31 07:07:51 compute-0 systemd[1]: libpod-b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7.scope: Deactivated successfully.
Jan 31 07:07:51 compute-0 podman[91381]: 2026-01-31 07:07:51.920899355 +0000 UTC m=+0.760154776 container died b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7 (image=quay.io/ceph/ceph:v18, name=focused_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea42cde4b2c249ecfdbeb20f41a8382eea1674124cd1c25956831f07d43a0b5-merged.mount: Deactivated successfully.
Jan 31 07:07:51 compute-0 podman[91381]: 2026-01-31 07:07:51.989943895 +0000 UTC m=+0.829199286 container remove b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7 (image=quay.io/ceph/ceph:v18, name=focused_gould, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:07:52 compute-0 systemd[1]: libpod-conmon-b20a1e07228e779def939495c0d90a479f62a221b32e59ca455306328f5716c7.scope: Deactivated successfully.
Jan 31 07:07:52 compute-0 sudo[91334]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:52 compute-0 podman[91600]: 2026-01-31 07:07:52.081515132 +0000 UTC m=+0.053169352 container create 71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:52 compute-0 systemd[1]: Started libpod-conmon-71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9.scope.
Jan 31 07:07:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:52 compute-0 podman[91600]: 2026-01-31 07:07:52.149490338 +0000 UTC m=+0.121144588 container init 71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:52 compute-0 podman[91600]: 2026-01-31 07:07:52.05738587 +0000 UTC m=+0.029040110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:52 compute-0 podman[91600]: 2026-01-31 07:07:52.154912477 +0000 UTC m=+0.126566727 container start 71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:52 compute-0 inspiring_bassi[91616]: 167 167
Jan 31 07:07:52 compute-0 systemd[1]: libpod-71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9.scope: Deactivated successfully.
Jan 31 07:07:52 compute-0 podman[91600]: 2026-01-31 07:07:52.160663174 +0000 UTC m=+0.132317444 container attach 71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:52 compute-0 podman[91600]: 2026-01-31 07:07:52.161198275 +0000 UTC m=+0.132852545 container died 71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:52 compute-0 podman[91600]: 2026-01-31 07:07:52.20770114 +0000 UTC m=+0.179355360 container remove 71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1570d462a93417657821703b0f660023560717760ad94dfc69eadbcc1d517ac5-merged.mount: Deactivated successfully.
Jan 31 07:07:52 compute-0 systemd[1]: libpod-conmon-71bff8aa06544a047714a75f9504ef3c0a2f1e615874b3ccc2f640a5423bbbb9.scope: Deactivated successfully.
Jan 31 07:07:52 compute-0 sudo[91545]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:07:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:07:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 07:07:52 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 07:07:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 07:07:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:52 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 07:07:52 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 07:07:52 compute-0 sudo[91635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:52 compute-0 sudo[91635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:52 compute-0 sudo[91635]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:52 compute-0 sudo[91660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:52 compute-0 sudo[91660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:52 compute-0 sudo[91660]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:52 compute-0 sudo[91685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:52 compute-0 sudo[91685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:52 compute-0 sudo[91685]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:52 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 31 07:07:52 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 31 07:07:52 compute-0 sudo[91710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:52 compute-0 sudo[91710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:52 compute-0 podman[91774]: 2026-01-31 07:07:52.773901094 +0000 UTC m=+0.035990423 container create c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:07:52 compute-0 systemd[1]: Started libpod-conmon-c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e.scope.
Jan 31 07:07:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:52 compute-0 podman[91774]: 2026-01-31 07:07:52.844017828 +0000 UTC m=+0.106107187 container init c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:07:52 compute-0 podman[91774]: 2026-01-31 07:07:52.851104954 +0000 UTC m=+0.113194273 container start c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:07:52 compute-0 podman[91774]: 2026-01-31 07:07:52.756471421 +0000 UTC m=+0.018560770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:52 compute-0 awesome_mestorf[91844]: 167 167
Jan 31 07:07:52 compute-0 systemd[1]: libpod-c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e.scope: Deactivated successfully.
Jan 31 07:07:52 compute-0 podman[91774]: 2026-01-31 07:07:52.866789809 +0000 UTC m=+0.128879158 container attach c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:07:52 compute-0 podman[91774]: 2026-01-31 07:07:52.867192388 +0000 UTC m=+0.129281727 container died c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:07:52 compute-0 ceph-mon[74496]: 3.11 scrub starts
Jan 31 07:07:52 compute-0 ceph-mon[74496]: 3.11 scrub ok
Jan 31 07:07:52 compute-0 ceph-mon[74496]: from='client.14310 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:52 compute-0 ceph-mon[74496]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-0 ceph-mon[74496]: Saving service ingress.rgw.default spec with placement count:2
Jan 31 07:07:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:52 compute-0 ceph-mon[74496]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 07:07:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:52 compute-0 ceph-mon[74496]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 07:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74138c7e5c7108a30fe012660b9f208887896298b29ac159b9868312b3f584b-merged.mount: Deactivated successfully.
Jan 31 07:07:52 compute-0 podman[91774]: 2026-01-31 07:07:52.960377989 +0000 UTC m=+0.222467318 container remove c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:07:52 compute-0 systemd[1]: libpod-conmon-c8b31c4791e35c2545dcacec802006b98188e6fd05f8c597086f5ec8b57d4f0e.scope: Deactivated successfully.
Jan 31 07:07:52 compute-0 python3[91843]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:07:53 compute-0 sudo[91710]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:07:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:07:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:53 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 31 07:07:53 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 31 07:07:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 31 07:07:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 07:07:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:53 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 31 07:07:53 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 31 07:07:53 compute-0 sudo[91892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:53 compute-0 sudo[91892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:53 compute-0 sudo[91892]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:53 compute-0 sudo[91942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:07:53 compute-0 sudo[91942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:53 compute-0 sudo[91942]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:53 compute-0 sudo[91986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:07:53 compute-0 sudo[91986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:53 compute-0 sudo[91986]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:53 compute-0 sudo[92011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:07:53 compute-0 sudo[92011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:07:53 compute-0 python3[91981]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843272.721852-37565-237412394232894/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:07:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v108: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:53 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 31 07:07:53 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 31 07:07:53 compute-0 podman[92077]: 2026-01-31 07:07:53.509047669 +0000 UTC m=+0.081390633 container create e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lehmann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:07:53 compute-0 podman[92077]: 2026-01-31 07:07:53.458728931 +0000 UTC m=+0.031071895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:07:53 compute-0 sudo[92115]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjzmpnabdaiqwcszjacpokenepvkkmy ; /usr/bin/python3'
Jan 31 07:07:53 compute-0 sudo[92115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:53 compute-0 systemd[1]: Started libpod-conmon-e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e.scope.
Jan 31 07:07:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:53 compute-0 python3[92117]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:53 compute-0 podman[92077]: 2026-01-31 07:07:53.794041653 +0000 UTC m=+0.366384677 container init e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:53 compute-0 podman[92077]: 2026-01-31 07:07:53.799535164 +0000 UTC m=+0.371878098 container start e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:07:53 compute-0 adoring_lehmann[92120]: 167 167
Jan 31 07:07:53 compute-0 systemd[1]: libpod-e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e.scope: Deactivated successfully.
Jan 31 07:07:53 compute-0 podman[92123]: 2026-01-31 07:07:53.806392244 +0000 UTC m=+0.032569997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:53 compute-0 podman[92077]: 2026-01-31 07:07:53.938191957 +0000 UTC m=+0.510534941 container attach e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:07:53 compute-0 podman[92077]: 2026-01-31 07:07:53.938619386 +0000 UTC m=+0.510962400 container died e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe2ddd5875d78c5a8ca6c876ac2ed38cdd940157f986e2dc90f3c2ebae99dbec-merged.mount: Deactivated successfully.
Jan 31 07:07:54 compute-0 ceph-mon[74496]: 4.f scrub starts
Jan 31 07:07:54 compute-0 ceph-mon[74496]: 4.f scrub ok
Jan 31 07:07:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:54 compute-0 ceph-mon[74496]: Reconfiguring osd.0 (monmap changed)...
Jan 31 07:07:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 07:07:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:54 compute-0 ceph-mon[74496]: Reconfiguring daemon osd.0 on compute-0
Jan 31 07:07:54 compute-0 ceph-mon[74496]: pgmap v108: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:54 compute-0 podman[92077]: 2026-01-31 07:07:54.216326969 +0000 UTC m=+0.788669903 container remove e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_lehmann, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:07:54 compute-0 podman[92123]: 2026-01-31 07:07:54.24768854 +0000 UTC m=+0.473866193 container create 4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2 (image=quay.io/ceph/ceph:v18, name=vibrant_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:54 compute-0 systemd[1]: libpod-conmon-e9438b7afea204c7a1a82df70b5bf9b52d646842fa2d1f38643e891c66b7ea9e.scope: Deactivated successfully.
Jan 31 07:07:54 compute-0 systemd[1]: Started libpod-conmon-4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2.scope.
Jan 31 07:07:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef38223b6a11b43f880343bde03511afbbaa5e1b46fcc39e51bfe97b506e5508/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef38223b6a11b43f880343bde03511afbbaa5e1b46fcc39e51bfe97b506e5508/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef38223b6a11b43f880343bde03511afbbaa5e1b46fcc39e51bfe97b506e5508/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:54 compute-0 podman[92123]: 2026-01-31 07:07:54.447375526 +0000 UTC m=+0.673553209 container init 4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2 (image=quay.io/ceph/ceph:v18, name=vibrant_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:07:54 compute-0 podman[92123]: 2026-01-31 07:07:54.454996204 +0000 UTC m=+0.681173867 container start 4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2 (image=quay.io/ceph/ceph:v18, name=vibrant_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:07:54 compute-0 podman[92123]: 2026-01-31 07:07:54.470476105 +0000 UTC m=+0.696653758 container attach 4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2 (image=quay.io/ceph/ceph:v18, name=vibrant_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:07:54 compute-0 sudo[92011]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:07:54 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Jan 31 07:07:54 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Jan 31 07:07:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:07:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:54 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 07:07:54 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 07:07:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 07:07:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:54 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 07:07:54 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 07:07:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mgr[74791]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 07:07:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 31 07:07:55 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0[74492]: 2026-01-31T07:07:55.105+0000 7f123e254640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 07:07:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e2 new map
Jan 31 07:07:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:07:55.106335+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Jan 31 07:07:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 07:07:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v109: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 31 07:07:55 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:55 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 07:07:55 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 31 07:07:55 compute-0 ceph-mon[74496]: 4.10 scrub starts
Jan 31 07:07:55 compute-0 ceph-mon[74496]: 4.10 scrub ok
Jan 31 07:07:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:55 compute-0 ceph-mon[74496]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 07:07:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mon[74496]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 07:07:55 compute-0 ceph-mon[74496]: 3.e scrub starts
Jan 31 07:07:55 compute-0 ceph-mon[74496]: 3.e scrub ok
Jan 31 07:07:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 07:07:55 compute-0 ceph-mon[74496]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 07:07:55 compute-0 ceph-mon[74496]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 07:07:55 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 31 07:07:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:55 compute-0 ceph-mgr[74791]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 07:07:55 compute-0 systemd[1]: libpod-4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2.scope: Deactivated successfully.
Jan 31 07:07:55 compute-0 podman[92123]: 2026-01-31 07:07:55.805855303 +0000 UTC m=+2.032032986 container died 4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2 (image=quay.io/ceph/ceph:v18, name=vibrant_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef38223b6a11b43f880343bde03511afbbaa5e1b46fcc39e51bfe97b506e5508-merged.mount: Deactivated successfully.
Jan 31 07:07:56 compute-0 podman[92123]: 2026-01-31 07:07:56.022301818 +0000 UTC m=+2.248479491 container remove 4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2 (image=quay.io/ceph/ceph:v18, name=vibrant_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:56 compute-0 systemd[1]: libpod-conmon-4561c9b16d27fbe7c2d2c868cc04968804cb7127a3d08581d0cd18ec10ad1cf2.scope: Deactivated successfully.
Jan 31 07:07:56 compute-0 sudo[92115]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:07:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:07:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:56 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 31 07:07:56 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 31 07:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 31 07:07:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 07:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:56 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 31 07:07:56 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 31 07:07:56 compute-0 sudo[92222]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cryeudbeyymsgpgsmiwjusvzwitfksus ; /usr/bin/python3'
Jan 31 07:07:56 compute-0 sudo[92222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:56 compute-0 python3[92224]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:56 compute-0 podman[92225]: 2026-01-31 07:07:56.397734623 +0000 UTC m=+0.060855430 container create 533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6 (image=quay.io/ceph/ceph:v18, name=stoic_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:07:56 compute-0 systemd[1]: Started libpod-conmon-533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6.scope.
Jan 31 07:07:56 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 31 07:07:56 compute-0 podman[92225]: 2026-01-31 07:07:56.369206396 +0000 UTC m=+0.032327283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/195611944c3525c0f1854a2e2fbe20ba691a0b5d411babf0c8138696e207e5c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/195611944c3525c0f1854a2e2fbe20ba691a0b5d411babf0c8138696e207e5c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/195611944c3525c0f1854a2e2fbe20ba691a0b5d411babf0c8138696e207e5c6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:56 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 31 07:07:56 compute-0 podman[92225]: 2026-01-31 07:07:56.495662079 +0000 UTC m=+0.158782906 container init 533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6 (image=quay.io/ceph/ceph:v18, name=stoic_lichterman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:07:56 compute-0 podman[92225]: 2026-01-31 07:07:56.500807903 +0000 UTC m=+0.163928710 container start 533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6 (image=quay.io/ceph/ceph:v18, name=stoic_lichterman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:07:56 compute-0 podman[92225]: 2026-01-31 07:07:56.506679822 +0000 UTC m=+0.169800629 container attach 533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6 (image=quay.io/ceph/ceph:v18, name=stoic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:07:56 compute-0 ceph-mon[74496]: 4.11 deep-scrub starts
Jan 31 07:07:56 compute-0 ceph-mon[74496]: 4.11 deep-scrub ok
Jan 31 07:07:56 compute-0 ceph-mon[74496]: from='client.14316 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 07:07:56 compute-0 ceph-mon[74496]: pgmap v109: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:56 compute-0 ceph-mon[74496]: osdmap e39: 3 total, 3 up, 3 in
Jan 31 07:07:56 compute-0 ceph-mon[74496]: fsmap cephfs:0
Jan 31 07:07:56 compute-0 ceph-mon[74496]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:56 compute-0 ceph-mon[74496]: 4.12 scrub starts
Jan 31 07:07:56 compute-0 ceph-mon[74496]: 4.12 scrub ok
Jan 31 07:07:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:56 compute-0 ceph-mon[74496]: 5.b scrub starts
Jan 31 07:07:56 compute-0 ceph-mon[74496]: 5.b scrub ok
Jan 31 07:07:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 07:07:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-0 stoic_lichterman[92240]: Scheduled mds.cephfs update...
Jan 31 07:07:57 compute-0 systemd[1]: libpod-533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6.scope: Deactivated successfully.
Jan 31 07:07:57 compute-0 conmon[92240]: conmon 533be7e49ba86198a50e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6.scope/container/memory.events
Jan 31 07:07:57 compute-0 podman[92225]: 2026-01-31 07:07:57.12117219 +0000 UTC m=+0.784293007 container died 533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6 (image=quay.io/ceph/ceph:v18, name=stoic_lichterman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 07:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-195611944c3525c0f1854a2e2fbe20ba691a0b5d411babf0c8138696e207e5c6-merged.mount: Deactivated successfully.
Jan 31 07:07:57 compute-0 podman[92225]: 2026-01-31 07:07:57.220277562 +0000 UTC m=+0.883398409 container remove 533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6 (image=quay.io/ceph/ceph:v18, name=stoic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:07:57 compute-0 systemd[1]: libpod-conmon-533be7e49ba86198a50eaf8af4d6d304868cee169ced6b099f77d93afb08ffa6.scope: Deactivated successfully.
Jan 31 07:07:57 compute-0 sudo[92222]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v111: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:57 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 31 07:07:57 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 31 07:07:57 compute-0 ceph-mon[74496]: 2.16 deep-scrub starts
Jan 31 07:07:57 compute-0 ceph-mon[74496]: 2.16 deep-scrub ok
Jan 31 07:07:57 compute-0 ceph-mon[74496]: Reconfiguring osd.1 (monmap changed)...
Jan 31 07:07:57 compute-0 ceph-mon[74496]: Reconfiguring daemon osd.1 on compute-1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: 4.16 scrub starts
Jan 31 07:07:57 compute-0 ceph-mon[74496]: 4.16 scrub ok
Jan 31 07:07:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 07:07:57 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 07:07:58 compute-0 sudo[92350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujulcaldrkenxetzcymbgbmwzibdezfr ; /usr/bin/python3'
Jan 31 07:07:58 compute-0 sudo[92350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:58 compute-0 python3[92352]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:07:58 compute-0 sudo[92350]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:58 compute-0 sudo[92423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipvuuxeyqlqbntqeguvxyztsgolskbhp ; /usr/bin/python3'
Jan 31 07:07:58 compute-0 sudo[92423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.wmgest (monmap changed)...
Jan 31 07:07:58 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.wmgest (monmap changed)...
Jan 31 07:07:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.wmgest", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 07:07:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wmgest", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 07:07:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:07:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.wmgest on compute-2
Jan 31 07:07:58 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.wmgest on compute-2
Jan 31 07:07:58 compute-0 python3[92425]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843277.972655-37613-252725769745663/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=d07c30b1acab71467a05fb02d206fcd55de2512c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:07:58 compute-0 sudo[92423]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mon[74496]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 07:07:58 compute-0 ceph-mon[74496]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 07:07:58 compute-0 ceph-mon[74496]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 07:07:58 compute-0 ceph-mon[74496]: pgmap v111: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:58 compute-0 ceph-mon[74496]: 4.17 scrub starts
Jan 31 07:07:58 compute-0 ceph-mon[74496]: 4.17 scrub ok
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.wmgest", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 07:07:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:07:58 compute-0 sudo[92473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslhhicrufqzpkuaaxeyjscakzpoatlb ; /usr/bin/python3'
Jan 31 07:07:58 compute-0 sudo[92473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:07:58 compute-0 python3[92475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:07:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:59 compute-0 podman[92476]: 2026-01-31 07:07:59.04574932 +0000 UTC m=+0.045339159 container create 6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d (image=quay.io/ceph/ceph:v18, name=awesome_diffie, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:07:59 compute-0 systemd[1]: Started libpod-conmon-6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d.scope.
Jan 31 07:07:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e36e9efc723ece01b76c28f02652c1221917b0da3e20870ddaf48af2ba7e5e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e36e9efc723ece01b76c28f02652c1221917b0da3e20870ddaf48af2ba7e5e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:07:59 compute-0 podman[92476]: 2026-01-31 07:07:59.114275479 +0000 UTC m=+0.113865388 container init 6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d (image=quay.io/ceph/ceph:v18, name=awesome_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:59 compute-0 podman[92476]: 2026-01-31 07:07:59.118597265 +0000 UTC m=+0.118187104 container start 6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d (image=quay.io/ceph/ceph:v18, name=awesome_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:07:59 compute-0 podman[92476]: 2026-01-31 07:07:59.023612633 +0000 UTC m=+0.023202502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:07:59 compute-0 podman[92476]: 2026-01-31 07:07:59.12430155 +0000 UTC m=+0.123891409 container attach 6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d (image=quay.io/ceph/ceph:v18, name=awesome_diffie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:07:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v112: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:07:59 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 31 07:07:59 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 31 07:07:59 compute-0 ceph-mon[74496]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 07:07:59 compute-0 ceph-mon[74496]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 07:07:59 compute-0 ceph-mon[74496]: 2.17 scrub starts
Jan 31 07:07:59 compute-0 ceph-mon[74496]: 2.17 scrub ok
Jan 31 07:07:59 compute-0 ceph-mon[74496]: Reconfiguring mgr.compute-2.wmgest (monmap changed)...
Jan 31 07:07:59 compute-0 ceph-mon[74496]: Reconfiguring daemon mgr.compute-2.wmgest on compute-2
Jan 31 07:07:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 31 07:07:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1696344557' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 07:07:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1696344557' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 07:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:07:59 compute-0 systemd[1]: libpod-6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d.scope: Deactivated successfully.
Jan 31 07:07:59 compute-0 podman[92476]: 2026-01-31 07:07:59.709068823 +0000 UTC m=+0.708658702 container died 6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d (image=quay.io/ceph/ceph:v18, name=awesome_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-67e36e9efc723ece01b76c28f02652c1221917b0da3e20870ddaf48af2ba7e5e-merged.mount: Deactivated successfully.
Jan 31 07:07:59 compute-0 podman[92476]: 2026-01-31 07:07:59.776846966 +0000 UTC m=+0.776436835 container remove 6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d (image=quay.io/ceph/ceph:v18, name=awesome_diffie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Jan 31 07:07:59 compute-0 systemd[1]: libpod-conmon-6dc73a0f023fe0f6d82698c423a6523cbb752edb1dff6e1b142b81850ab0140d.scope: Deactivated successfully.
Jan 31 07:07:59 compute-0 sudo[92473]: pam_unix(sudo:session): session closed for user root
Jan 31 07:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:07:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:07:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d0c3d38c-c07e-4016-8a1d-a62db88f0f0c does not exist
Jan 31 07:08:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8f9a99f9-6539-41c0-ac18-6252ef061573 does not exist
Jan 31 07:08:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6958ad6c-cba1-48fd-947a-8faa138b6108 does not exist
Jan 31 07:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:00 compute-0 sudo[92530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:00 compute-0 sudo[92530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:00 compute-0 sudo[92530]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:00 compute-0 sudo[92555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:00 compute-0 sudo[92555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:00 compute-0 sudo[92555]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:00 compute-0 sudo[92610]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njrxqtmgchvjwqtelwixewtaqbmxgkjq ; /usr/bin/python3'
Jan 31 07:08:00 compute-0 sudo[92610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:00 compute-0 sudo[92600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:00 compute-0 sudo[92600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:00 compute-0 sudo[92600]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:00 compute-0 sudo[92631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:08:00 compute-0 sudo[92631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:00 compute-0 python3[92628]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:00 compute-0 podman[92659]: 2026-01-31 07:08:00.573114315 +0000 UTC m=+0.045061443 container create aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3 (image=quay.io/ceph/ceph:v18, name=mystifying_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:08:00 compute-0 systemd[1]: Started libpod-conmon-aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3.scope.
Jan 31 07:08:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20243efbc555fbe5da336699a37b04897083aedaa77fb796389af7d9fa70296/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20243efbc555fbe5da336699a37b04897083aedaa77fb796389af7d9fa70296/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:00 compute-0 ceph-mon[74496]: pgmap v112: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:00 compute-0 ceph-mon[74496]: 4.1e scrub starts
Jan 31 07:08:00 compute-0 ceph-mon[74496]: 4.1e scrub ok
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1696344557' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1696344557' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:00 compute-0 podman[92659]: 2026-01-31 07:08:00.646237206 +0000 UTC m=+0.118184354 container init aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3 (image=quay.io/ceph/ceph:v18, name=mystifying_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:08:00 compute-0 podman[92659]: 2026-01-31 07:08:00.551273165 +0000 UTC m=+0.023220293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:00 compute-0 podman[92659]: 2026-01-31 07:08:00.650430268 +0000 UTC m=+0.122377416 container start aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3 (image=quay.io/ceph/ceph:v18, name=mystifying_keller, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:00 compute-0 podman[92659]: 2026-01-31 07:08:00.656700296 +0000 UTC m=+0.128647444 container attach aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3 (image=quay.io/ceph/ceph:v18, name=mystifying_keller, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:08:00 compute-0 podman[92715]: 2026-01-31 07:08:00.763357244 +0000 UTC m=+0.045767758 container create 4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_franklin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:08:00 compute-0 systemd[1]: Started libpod-conmon-4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd.scope.
Jan 31 07:08:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:00 compute-0 podman[92715]: 2026-01-31 07:08:00.739402686 +0000 UTC m=+0.021813280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:00 compute-0 podman[92715]: 2026-01-31 07:08:00.84085178 +0000 UTC m=+0.123262294 container init 4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_franklin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:08:00 compute-0 podman[92715]: 2026-01-31 07:08:00.848367305 +0000 UTC m=+0.130777849 container start 4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_franklin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:08:00 compute-0 angry_franklin[92732]: 167 167
Jan 31 07:08:00 compute-0 systemd[1]: libpod-4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd.scope: Deactivated successfully.
Jan 31 07:08:00 compute-0 conmon[92732]: conmon 4dd42d8f176e4c631f36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd.scope/container/memory.events
Jan 31 07:08:00 compute-0 podman[92715]: 2026-01-31 07:08:00.855617875 +0000 UTC m=+0.138028379 container attach 4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:08:00 compute-0 podman[92715]: 2026-01-31 07:08:00.856042334 +0000 UTC m=+0.138452848 container died 4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_franklin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd665c3b10160ce284608569efb5730daaed44688a57ad93b585f59ac9ecf975-merged.mount: Deactivated successfully.
Jan 31 07:08:00 compute-0 podman[92715]: 2026-01-31 07:08:00.900117545 +0000 UTC m=+0.182528089 container remove 4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:08:00 compute-0 systemd[1]: libpod-conmon-4dd42d8f176e4c631f3619616ba516726ec9ba4004f35fe11bccd9328aaf99bd.scope: Deactivated successfully.
Jan 31 07:08:01 compute-0 podman[92776]: 2026-01-31 07:08:01.038507672 +0000 UTC m=+0.051972795 container create 696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:08:01 compute-0 systemd[1]: Started libpod-conmon-696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2.scope.
Jan 31 07:08:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:01 compute-0 podman[92776]: 2026-01-31 07:08:01.010870183 +0000 UTC m=+0.024335396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d7d163d8b09e7bb1378291e8c2b4bc65d62398f1ecdf6af9fdf708ecd0b96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d7d163d8b09e7bb1378291e8c2b4bc65d62398f1ecdf6af9fdf708ecd0b96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d7d163d8b09e7bb1378291e8c2b4bc65d62398f1ecdf6af9fdf708ecd0b96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d7d163d8b09e7bb1378291e8c2b4bc65d62398f1ecdf6af9fdf708ecd0b96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e90d7d163d8b09e7bb1378291e8c2b4bc65d62398f1ecdf6af9fdf708ecd0b96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:01 compute-0 podman[92776]: 2026-01-31 07:08:01.13198651 +0000 UTC m=+0.145451633 container init 696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_napier, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:08:01 compute-0 podman[92776]: 2026-01-31 07:08:01.137239395 +0000 UTC m=+0.150704528 container start 696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_napier, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:01 compute-0 podman[92776]: 2026-01-31 07:08:01.14334675 +0000 UTC m=+0.156811893 container attach 696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:08:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 07:08:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3578243099' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:08:01 compute-0 mystifying_keller[92696]: 
Jan 31 07:08:01 compute-0 mystifying_keller[92696]: {"fsid":"f70fcd2a-dcb4-5f89-a4ba-79a09959083b","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":51,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":39,"num_osds":3,"num_up_osds":3,"osd_up_since":1769843258,"num_in_osds":3,"osd_in_since":1769843236,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":131}],"num_pgs":131,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84066304,"bytes_avail":22451929088,"bytes_total":22535995392},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-31T07:07:41.313979+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.hodsiu":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.wmgest":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 31 07:08:01 compute-0 systemd[1]: libpod-aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3.scope: Deactivated successfully.
Jan 31 07:08:01 compute-0 podman[92659]: 2026-01-31 07:08:01.248948194 +0000 UTC m=+0.720895322 container died aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3 (image=quay.io/ceph/ceph:v18, name=mystifying_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 31 07:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e20243efbc555fbe5da336699a37b04897083aedaa77fb796389af7d9fa70296-merged.mount: Deactivated successfully.
Jan 31 07:08:01 compute-0 podman[92659]: 2026-01-31 07:08:01.316008381 +0000 UTC m=+0.787955549 container remove aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3 (image=quay.io/ceph/ceph:v18, name=mystifying_keller, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v113: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:01 compute-0 systemd[1]: libpod-conmon-aef702b1d3a7a45630781e927f815d3ac9a8c5f0f0e79d60301db622fda219a3.scope: Deactivated successfully.
Jan 31 07:08:01 compute-0 sudo[92610]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:01 compute-0 sudo[92834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctrygjogbybposeypcartikrxpevwemg ; /usr/bin/python3'
Jan 31 07:08:01 compute-0 sudo[92834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:01 compute-0 python3[92836]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:01 compute-0 ceph-mon[74496]: 2.1a deep-scrub starts
Jan 31 07:08:01 compute-0 ceph-mon[74496]: 2.1a deep-scrub ok
Jan 31 07:08:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3578243099' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:08:01 compute-0 podman[92837]: 2026-01-31 07:08:01.659393 +0000 UTC m=+0.050113404 container create 1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a (image=quay.io/ceph/ceph:v18, name=peaceful_curran, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:08:01 compute-0 systemd[1]: Started libpod-conmon-1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a.scope.
Jan 31 07:08:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45cd631645b67e2f7964fa0b3e97ff338bf0dc8e2da10b032dcf06c6c962b4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f45cd631645b67e2f7964fa0b3e97ff338bf0dc8e2da10b032dcf06c6c962b4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:01 compute-0 podman[92837]: 2026-01-31 07:08:01.635802501 +0000 UTC m=+0.026522915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:01 compute-0 podman[92837]: 2026-01-31 07:08:01.741845006 +0000 UTC m=+0.132565410 container init 1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a (image=quay.io/ceph/ceph:v18, name=peaceful_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:01 compute-0 podman[92837]: 2026-01-31 07:08:01.751154321 +0000 UTC m=+0.141874685 container start 1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a (image=quay.io/ceph/ceph:v18, name=peaceful_curran, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:08:01 compute-0 podman[92837]: 2026-01-31 07:08:01.761668372 +0000 UTC m=+0.152388746 container attach 1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a (image=quay.io/ceph/ceph:v18, name=peaceful_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:08:01 compute-0 sweet_napier[92793]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:08:01 compute-0 sweet_napier[92793]: --> relative data size: 1.0
Jan 31 07:08:01 compute-0 sweet_napier[92793]: --> All data devices are unavailable
Jan 31 07:08:01 compute-0 systemd[1]: libpod-696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2.scope: Deactivated successfully.
Jan 31 07:08:01 compute-0 podman[92776]: 2026-01-31 07:08:01.945201332 +0000 UTC m=+0.958666475 container died 696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90d7d163d8b09e7bb1378291e8c2b4bc65d62398f1ecdf6af9fdf708ecd0b96-merged.mount: Deactivated successfully.
Jan 31 07:08:02 compute-0 podman[92776]: 2026-01-31 07:08:02.01731522 +0000 UTC m=+1.030780343 container remove 696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_napier, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:08:02 compute-0 systemd[1]: libpod-conmon-696575ab3e6916fc06b969ea0e9e49074184144f7d2fa08b7df676aa1232d2b2.scope: Deactivated successfully.
Jan 31 07:08:02 compute-0 sudo[92631]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:02 compute-0 sudo[92889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:02 compute-0 sudo[92889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:02 compute-0 sudo[92889]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:02 compute-0 sudo[92923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:02 compute-0 sudo[92923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:02 compute-0 sudo[92923]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:02 compute-0 sudo[92948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:02 compute-0 sudo[92948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:02 compute-0 sudo[92948]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:02 compute-0 sudo[92973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:08:02 compute-0 sudo[92973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:08:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320746925' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:08:02 compute-0 peaceful_curran[92853]: 
Jan 31 07:08:02 compute-0 peaceful_curran[92853]: {"epoch":3,"fsid":"f70fcd2a-dcb4-5f89-a4ba-79a09959083b","modified":"2026-01-31T07:07:05.044825Z","created":"2026-01-31T07:04:29.923749Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 31 07:08:02 compute-0 peaceful_curran[92853]: dumped monmap epoch 3
Jan 31 07:08:02 compute-0 systemd[1]: libpod-1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a.scope: Deactivated successfully.
Jan 31 07:08:02 compute-0 podman[92837]: 2026-01-31 07:08:02.388684146 +0000 UTC m=+0.779404520 container died 1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a (image=quay.io/ceph/ceph:v18, name=peaceful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f45cd631645b67e2f7964fa0b3e97ff338bf0dc8e2da10b032dcf06c6c962b4c-merged.mount: Deactivated successfully.
Jan 31 07:08:02 compute-0 podman[92837]: 2026-01-31 07:08:02.489937995 +0000 UTC m=+0.880658359 container remove 1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a (image=quay.io/ceph/ceph:v18, name=peaceful_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:02 compute-0 systemd[1]: libpod-conmon-1ac13eb2786708d6cb89097f352ec0a343d2bf3cb75c630ad8d9d6ddcedd203a.scope: Deactivated successfully.
Jan 31 07:08:02 compute-0 sudo[92834]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:02 compute-0 podman[93054]: 2026-01-31 07:08:02.621422239 +0000 UTC m=+0.043754424 container create 0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:08:02 compute-0 systemd[1]: Started libpod-conmon-0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9.scope.
Jan 31 07:08:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:02 compute-0 podman[93054]: 2026-01-31 07:08:02.679814205 +0000 UTC m=+0.102146410 container init 0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:08:02 compute-0 podman[93054]: 2026-01-31 07:08:02.685093242 +0000 UTC m=+0.107425437 container start 0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:02 compute-0 boring_jennings[93071]: 167 167
Jan 31 07:08:02 compute-0 systemd[1]: libpod-0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9.scope: Deactivated successfully.
Jan 31 07:08:02 compute-0 podman[93054]: 2026-01-31 07:08:02.688447905 +0000 UTC m=+0.110780100 container attach 0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:02 compute-0 podman[93054]: 2026-01-31 07:08:02.689354926 +0000 UTC m=+0.111687111 container died 0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:08:02 compute-0 ceph-mon[74496]: pgmap v113: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2320746925' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:08:02 compute-0 podman[93054]: 2026-01-31 07:08:02.602451952 +0000 UTC m=+0.024784167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-03b5c9b341553c739638e322bd5b1202c77e88d864e527293a5b377751eb4fb6-merged.mount: Deactivated successfully.
Jan 31 07:08:02 compute-0 podman[93054]: 2026-01-31 07:08:02.752492995 +0000 UTC m=+0.174825180 container remove 0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jennings, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:02 compute-0 systemd[1]: libpod-conmon-0c0cb29d80ddcca3c20b00860620bcba35f3a20e787ed95807e798ec99b462f9.scope: Deactivated successfully.
Jan 31 07:08:02 compute-0 podman[93095]: 2026-01-31 07:08:02.909378839 +0000 UTC m=+0.063528299 container create bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:08:02 compute-0 sudo[93132]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blhdfkzdojcfsrxwkyxnfvnbieirqgsh ; /usr/bin/python3'
Jan 31 07:08:02 compute-0 sudo[93132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:02 compute-0 systemd[1]: Started libpod-conmon-bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6.scope.
Jan 31 07:08:02 compute-0 podman[93095]: 2026-01-31 07:08:02.870320079 +0000 UTC m=+0.024469569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30fb72017a9fe0131aa42582c90531de906eaf918904f96d27331fb01c93ad9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30fb72017a9fe0131aa42582c90531de906eaf918904f96d27331fb01c93ad9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30fb72017a9fe0131aa42582c90531de906eaf918904f96d27331fb01c93ad9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30fb72017a9fe0131aa42582c90531de906eaf918904f96d27331fb01c93ad9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:03 compute-0 podman[93095]: 2026-01-31 07:08:03.012002608 +0000 UTC m=+0.166152078 container init bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:08:03 compute-0 podman[93095]: 2026-01-31 07:08:03.019984894 +0000 UTC m=+0.174134364 container start bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 07:08:03 compute-0 podman[93095]: 2026-01-31 07:08:03.028253056 +0000 UTC m=+0.182402546 container attach bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:08:03 compute-0 python3[93134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:03 compute-0 podman[93142]: 2026-01-31 07:08:03.128073894 +0000 UTC m=+0.042817253 container create 575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca (image=quay.io/ceph/ceph:v18, name=dazzling_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:03 compute-0 systemd[1]: Started libpod-conmon-575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca.scope.
Jan 31 07:08:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60834f8a0f2516f499d3a767021a0cba751176e438f0fe4597acbe0149a56c4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60834f8a0f2516f499d3a767021a0cba751176e438f0fe4597acbe0149a56c4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:03 compute-0 podman[93142]: 2026-01-31 07:08:03.196881648 +0000 UTC m=+0.111625037 container init 575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca (image=quay.io/ceph/ceph:v18, name=dazzling_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:08:03 compute-0 podman[93142]: 2026-01-31 07:08:03.201507551 +0000 UTC m=+0.116250940 container start 575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca (image=quay.io/ceph/ceph:v18, name=dazzling_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:08:03 compute-0 podman[93142]: 2026-01-31 07:08:03.109695949 +0000 UTC m=+0.024439328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:03 compute-0 podman[93142]: 2026-01-31 07:08:03.208269669 +0000 UTC m=+0.123013028 container attach 575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca (image=quay.io/ceph/ceph:v18, name=dazzling_spence, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v114: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:03 compute-0 great_bose[93137]: {
Jan 31 07:08:03 compute-0 great_bose[93137]:     "0": [
Jan 31 07:08:03 compute-0 great_bose[93137]:         {
Jan 31 07:08:03 compute-0 great_bose[93137]:             "devices": [
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "/dev/loop3"
Jan 31 07:08:03 compute-0 great_bose[93137]:             ],
Jan 31 07:08:03 compute-0 great_bose[93137]:             "lv_name": "ceph_lv0",
Jan 31 07:08:03 compute-0 great_bose[93137]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:03 compute-0 great_bose[93137]:             "lv_size": "7511998464",
Jan 31 07:08:03 compute-0 great_bose[93137]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:08:03 compute-0 great_bose[93137]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:08:03 compute-0 great_bose[93137]:             "name": "ceph_lv0",
Jan 31 07:08:03 compute-0 great_bose[93137]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:03 compute-0 great_bose[93137]:             "tags": {
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.cluster_name": "ceph",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.crush_device_class": "",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.encrypted": "0",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.osd_id": "0",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.type": "block",
Jan 31 07:08:03 compute-0 great_bose[93137]:                 "ceph.vdo": "0"
Jan 31 07:08:03 compute-0 great_bose[93137]:             },
Jan 31 07:08:03 compute-0 great_bose[93137]:             "type": "block",
Jan 31 07:08:03 compute-0 great_bose[93137]:             "vg_name": "ceph_vg0"
Jan 31 07:08:03 compute-0 great_bose[93137]:         }
Jan 31 07:08:03 compute-0 great_bose[93137]:     ]
Jan 31 07:08:03 compute-0 great_bose[93137]: }
Jan 31 07:08:03 compute-0 systemd[1]: libpod-bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6.scope: Deactivated successfully.
Jan 31 07:08:03 compute-0 podman[93095]: 2026-01-31 07:08:03.814292871 +0000 UTC m=+0.968442381 container died bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bose, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:08:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 31 07:08:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1362729985' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 07:08:03 compute-0 dazzling_spence[93158]: [client.openstack]
Jan 31 07:08:03 compute-0 dazzling_spence[93158]:         key = AQBjqX1pAAAAABAAZUDJ8pReeykI0ZmVlnkCdQ==
Jan 31 07:08:03 compute-0 dazzling_spence[93158]:         caps mgr = "allow *"
Jan 31 07:08:03 compute-0 dazzling_spence[93158]:         caps mon = "profile rbd"
Jan 31 07:08:03 compute-0 dazzling_spence[93158]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 31 07:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-30fb72017a9fe0131aa42582c90531de906eaf918904f96d27331fb01c93ad9d-merged.mount: Deactivated successfully.
Jan 31 07:08:03 compute-0 systemd[1]: libpod-575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca.scope: Deactivated successfully.
Jan 31 07:08:03 compute-0 podman[93095]: 2026-01-31 07:08:03.884330153 +0000 UTC m=+1.038479623 container remove bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bose, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:03 compute-0 podman[93142]: 2026-01-31 07:08:03.888151157 +0000 UTC m=+0.802894516 container died 575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca (image=quay.io/ceph/ceph:v18, name=dazzling_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:08:03 compute-0 systemd[1]: libpod-conmon-bb88c370f042e39e368b2aff433c5dadc1f705752e54f54ce48eb1ab195013b6.scope: Deactivated successfully.
Jan 31 07:08:03 compute-0 sudo[92973]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-60834f8a0f2516f499d3a767021a0cba751176e438f0fe4597acbe0149a56c4e-merged.mount: Deactivated successfully.
Jan 31 07:08:03 compute-0 podman[93142]: 2026-01-31 07:08:03.9573481 +0000 UTC m=+0.872091459 container remove 575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca (image=quay.io/ceph/ceph:v18, name=dazzling_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:03 compute-0 systemd[1]: libpod-conmon-575acf52e8cbcf6921b475c25811be7ec50bc1a2d25f9689072ae863967309ca.scope: Deactivated successfully.
Jan 31 07:08:03 compute-0 sudo[93212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:03 compute-0 sudo[93132]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:03 compute-0 sudo[93212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:03 compute-0 sudo[93212]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:04 compute-0 sudo[93237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:04 compute-0 sudo[93237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:04 compute-0 sudo[93237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:04 compute-0 sudo[93262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:04 compute-0 sudo[93262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:04 compute-0 sudo[93262]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:04 compute-0 sudo[93287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:08:04 compute-0 sudo[93287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:04 compute-0 podman[93353]: 2026-01-31 07:08:04.443257227 +0000 UTC m=+0.046594646 container create ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:04 compute-0 systemd[1]: Started libpod-conmon-ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9.scope.
Jan 31 07:08:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:04 compute-0 podman[93353]: 2026-01-31 07:08:04.420515517 +0000 UTC m=+0.023852956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:04 compute-0 podman[93353]: 2026-01-31 07:08:04.527882171 +0000 UTC m=+0.131219670 container init ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:08:04 compute-0 podman[93353]: 2026-01-31 07:08:04.535487008 +0000 UTC m=+0.138824457 container start ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:08:04 compute-0 focused_heyrovsky[93369]: 167 167
Jan 31 07:08:04 compute-0 podman[93353]: 2026-01-31 07:08:04.540902138 +0000 UTC m=+0.144239637 container attach ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:08:04 compute-0 systemd[1]: libpod-ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9.scope: Deactivated successfully.
Jan 31 07:08:04 compute-0 podman[93353]: 2026-01-31 07:08:04.542069493 +0000 UTC m=+0.145406912 container died ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a5cc3a44096108a7a8b1c430e4b46fb3598530dd8e9eea8bc738a7757a2811c-merged.mount: Deactivated successfully.
Jan 31 07:08:04 compute-0 podman[93353]: 2026-01-31 07:08:04.615637853 +0000 UTC m=+0.218975302 container remove ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:08:04 compute-0 systemd[1]: libpod-conmon-ee65cc613ea6d58d37a05a13abb5c6efb26479368b083534e6e1d811037473d9.scope: Deactivated successfully.
Jan 31 07:08:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:04 compute-0 ceph-mon[74496]: pgmap v114: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1362729985' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 07:08:04 compute-0 podman[93393]: 2026-01-31 07:08:04.778936368 +0000 UTC m=+0.059715455 container create c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_proskuriakova, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:08:04 compute-0 systemd[1]: Started libpod-conmon-c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5.scope.
Jan 31 07:08:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:04 compute-0 podman[93393]: 2026-01-31 07:08:04.756977415 +0000 UTC m=+0.037756552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0839f3545d643c6bcf9e2e59bddcb13f274ca75a80e8953071c544bda93609a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0839f3545d643c6bcf9e2e59bddcb13f274ca75a80e8953071c544bda93609a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0839f3545d643c6bcf9e2e59bddcb13f274ca75a80e8953071c544bda93609a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0839f3545d643c6bcf9e2e59bddcb13f274ca75a80e8953071c544bda93609a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:04 compute-0 podman[93393]: 2026-01-31 07:08:04.87804454 +0000 UTC m=+0.158823647 container init c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_proskuriakova, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:04 compute-0 podman[93393]: 2026-01-31 07:08:04.885936384 +0000 UTC m=+0.166715471 container start c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:04 compute-0 podman[93393]: 2026-01-31 07:08:04.893752925 +0000 UTC m=+0.174532032 container attach c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:08:05 compute-0 sudo[93561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvmwwcgvpysfuelekixtsfiitpfnlfqc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843284.9148066-37685-117380074800825/async_wrapper.py j144994536259 30 /home/zuul/.ansible/tmp/ansible-tmp-1769843284.9148066-37685-117380074800825/AnsiballZ_command.py _'
Jan 31 07:08:05 compute-0 sudo[93561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v115: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:05 compute-0 ansible-async_wrapper.py[93563]: Invoked with j144994536259 30 /home/zuul/.ansible/tmp/ansible-tmp-1769843284.9148066-37685-117380074800825/AnsiballZ_command.py _
Jan 31 07:08:05 compute-0 ansible-async_wrapper.py[93566]: Starting module and watcher
Jan 31 07:08:05 compute-0 ansible-async_wrapper.py[93566]: Start watching 93567 (30)
Jan 31 07:08:05 compute-0 ansible-async_wrapper.py[93567]: Start module (93567)
Jan 31 07:08:05 compute-0 ansible-async_wrapper.py[93563]: Return async_wrapper task started.
Jan 31 07:08:05 compute-0 sudo[93561]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:05 compute-0 python3[93568]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:05 compute-0 podman[93569]: 2026-01-31 07:08:05.565927924 +0000 UTC m=+0.059547042 container create ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d (image=quay.io/ceph/ceph:v18, name=silly_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:08:05 compute-0 systemd[1]: Started libpod-conmon-ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d.scope.
Jan 31 07:08:05 compute-0 podman[93569]: 2026-01-31 07:08:05.533640733 +0000 UTC m=+0.027259881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b719ec183408e4b48971919def78e784e434817843bceea57d20b985007af9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67b719ec183408e4b48971919def78e784e434817843bceea57d20b985007af9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:05 compute-0 podman[93569]: 2026-01-31 07:08:05.648517652 +0000 UTC m=+0.142136830 container init ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d (image=quay.io/ceph/ceph:v18, name=silly_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:08:05 compute-0 podman[93569]: 2026-01-31 07:08:05.655925346 +0000 UTC m=+0.149544504 container start ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d (image=quay.io/ceph/ceph:v18, name=silly_buck, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:08:05 compute-0 podman[93569]: 2026-01-31 07:08:05.661923797 +0000 UTC m=+0.155542965 container attach ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d (image=quay.io/ceph/ceph:v18, name=silly_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]: {
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]:         "osd_id": 0,
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]:         "type": "bluestore"
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]:     }
Jan 31 07:08:05 compute-0 epic_proskuriakova[93409]: }
Jan 31 07:08:05 compute-0 ceph-mon[74496]: 5.1c scrub starts
Jan 31 07:08:05 compute-0 ceph-mon[74496]: 5.1c scrub ok
Jan 31 07:08:05 compute-0 systemd[1]: libpod-c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5.scope: Deactivated successfully.
Jan 31 07:08:05 compute-0 podman[93393]: 2026-01-31 07:08:05.77376693 +0000 UTC m=+1.054546017 container died c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_proskuriakova, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0839f3545d643c6bcf9e2e59bddcb13f274ca75a80e8953071c544bda93609a1-merged.mount: Deactivated successfully.
Jan 31 07:08:05 compute-0 podman[93393]: 2026-01-31 07:08:05.830247133 +0000 UTC m=+1.111026220 container remove c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:08:05 compute-0 systemd[1]: libpod-conmon-c196ae531c7e47ab6bde3e30c1a4f4b9b06a834e9eb8383597b97e081647dbe5.scope: Deactivated successfully.
Jan 31 07:08:05 compute-0 sudo[93287]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:08:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:08:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:05 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev a19fd92b-2b7a-4e3f-a5a8-272db4c2532a (Updating rgw.rgw deployment (+3 -> 3))
Jan 31 07:08:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 07:08:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 07:08:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:05 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.kddbks on compute-2
Jan 31 07:08:05 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.kddbks on compute-2
Jan 31 07:08:06 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14352 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:06 compute-0 silly_buck[93586]: 
Jan 31 07:08:06 compute-0 silly_buck[93586]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 07:08:06 compute-0 systemd[1]: libpod-ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d.scope: Deactivated successfully.
Jan 31 07:08:06 compute-0 podman[93639]: 2026-01-31 07:08:06.335732801 +0000 UTC m=+0.028173741 container died ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d (image=quay.io/ceph/ceph:v18, name=silly_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-67b719ec183408e4b48971919def78e784e434817843bceea57d20b985007af9-merged.mount: Deactivated successfully.
Jan 31 07:08:06 compute-0 podman[93639]: 2026-01-31 07:08:06.387341417 +0000 UTC m=+0.079782337 container remove ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d (image=quay.io/ceph/ceph:v18, name=silly_buck, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:08:06 compute-0 systemd[1]: libpod-conmon-ddb14de71bbe22e8a54ce0b2828e6d91bea93c77a73817643757833af727f14d.scope: Deactivated successfully.
Jan 31 07:08:06 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 31 07:08:06 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 31 07:08:06 compute-0 ansible-async_wrapper.py[93567]: Module complete (93567)
Jan 31 07:08:06 compute-0 sudo[93701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iskrmjooerexkrhelvdmigikkkqklnty ; /usr/bin/python3'
Jan 31 07:08:06 compute-0 sudo[93701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:06 compute-0 python3[93703]: ansible-ansible.legacy.async_status Invoked with jid=j144994536259.93563 mode=status _async_dir=/root/.ansible_async
Jan 31 07:08:06 compute-0 sudo[93701]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:06 compute-0 sudo[93750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-komfraxremharunrgezfeycsvtjsfmxw ; /usr/bin/python3'
Jan 31 07:08:06 compute-0 sudo[93750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:06 compute-0 ceph-mon[74496]: pgmap v115: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.kddbks", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:06 compute-0 ceph-mon[74496]: 5.19 scrub starts
Jan 31 07:08:06 compute-0 ceph-mon[74496]: 5.19 scrub ok
Jan 31 07:08:06 compute-0 python3[93752]: ansible-ansible.legacy.async_status Invoked with jid=j144994536259.93563 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 07:08:06 compute-0 sudo[93750]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:07 compute-0 sudo[93776]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adenyzhogciledvgntrozxjxqkxcunzx ; /usr/bin/python3'
Jan 31 07:08:07 compute-0 sudo[93776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v116: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:08:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:08:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 07:08:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 07:08:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:07 compute-0 python3[93778]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 07:08:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:07 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.zjvjex on compute-1
Jan 31 07:08:07 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.zjvjex on compute-1
Jan 31 07:08:07 compute-0 podman[93781]: 2026-01-31 07:08:07.529907972 +0000 UTC m=+0.051402073 container create 6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847 (image=quay.io/ceph/ceph:v18, name=affectionate_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:08:07 compute-0 systemd[1]: Started libpod-conmon-6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847.scope.
Jan 31 07:08:07 compute-0 podman[93781]: 2026-01-31 07:08:07.508632083 +0000 UTC m=+0.030126204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffe2dc4adfcab7b892344089ed0f29dce7aeb0e0fa3e203ce9ba1baa0ff26e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffe2dc4adfcab7b892344089ed0f29dce7aeb0e0fa3e203ce9ba1baa0ff26e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:07 compute-0 podman[93781]: 2026-01-31 07:08:07.626553539 +0000 UTC m=+0.148047660 container init 6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847 (image=quay.io/ceph/ceph:v18, name=affectionate_bose, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:08:07 compute-0 podman[93781]: 2026-01-31 07:08:07.6320317 +0000 UTC m=+0.153525791 container start 6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847 (image=quay.io/ceph/ceph:v18, name=affectionate_bose, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:07 compute-0 podman[93781]: 2026-01-31 07:08:07.638871061 +0000 UTC m=+0.160365162 container attach 6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847 (image=quay.io/ceph/ceph:v18, name=affectionate_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:07 compute-0 ceph-mon[74496]: Deploying daemon rgw.rgw.compute-2.kddbks on compute-2
Jan 31 07:08:07 compute-0 ceph-mon[74496]: from='client.14352 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zjvjex", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:08 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14361 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:08 compute-0 affectionate_bose[93797]: 
Jan 31 07:08:08 compute-0 affectionate_bose[93797]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 07:08:08 compute-0 systemd[1]: libpod-6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847.scope: Deactivated successfully.
Jan 31 07:08:08 compute-0 podman[93781]: 2026-01-31 07:08:08.20392741 +0000 UTC m=+0.725421501 container died 6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847 (image=quay.io/ceph/ceph:v18, name=affectionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ffe2dc4adfcab7b892344089ed0f29dce7aeb0e0fa3e203ce9ba1baa0ff26e7-merged.mount: Deactivated successfully.
Jan 31 07:08:08 compute-0 podman[93781]: 2026-01-31 07:08:08.251925307 +0000 UTC m=+0.773419398 container remove 6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847 (image=quay.io/ceph/ceph:v18, name=affectionate_bose, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 07:08:08 compute-0 systemd[1]: libpod-conmon-6ac2c5a3a70806470bdcb361ec70c5119d8db075f6fd8a5de0f163323e9b7847.scope: Deactivated successfully.
Jan 31 07:08:08 compute-0 sudo[93776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 31 07:08:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 07:08:08 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 31 07:08:08 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 40 pg[8.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 31 07:08:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 07:08:08 compute-0 ceph-mon[74496]: pgmap v116: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:08 compute-0 ceph-mon[74496]: Deploying daemon rgw.rgw.compute-1.zjvjex on compute-1
Jan 31 07:08:08 compute-0 ceph-mon[74496]: 5.8 scrub starts
Jan 31 07:08:08 compute-0 ceph-mon[74496]: 5.8 scrub ok
Jan 31 07:08:08 compute-0 ceph-mon[74496]: from='client.14361 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:08 compute-0 ceph-mon[74496]: osdmap e40: 3 total, 3 up, 3 in
Jan 31 07:08:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 07:08:08 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 07:08:08 compute-0 sudo[93858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kihbspfmeavgpwnoskwvaqyajkrkpgrh ; /usr/bin/python3'
Jan 31 07:08:08 compute-0 sudo[93858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 07:08:09 compute-0 python3[93860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:09 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.njduba on compute-0
Jan 31 07:08:09 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.njduba on compute-0
Jan 31 07:08:09 compute-0 podman[93861]: 2026-01-31 07:08:09.136385948 +0000 UTC m=+0.048774594 container create 125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:09 compute-0 systemd[1]: Started libpod-conmon-125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8.scope.
Jan 31 07:08:09 compute-0 sudo[93874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:09 compute-0 sudo[93874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:09 compute-0 sudo[93874]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e867ce0722a93da3faef7c7f82247042cffbe764dea1f02408b43320be5de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e9e867ce0722a93da3faef7c7f82247042cffbe764dea1f02408b43320be5de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:09 compute-0 podman[93861]: 2026-01-31 07:08:09.114810083 +0000 UTC m=+0.027198799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:09 compute-0 podman[93861]: 2026-01-31 07:08:09.216635775 +0000 UTC m=+0.129024441 container init 125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:08:09 compute-0 podman[93861]: 2026-01-31 07:08:09.222050894 +0000 UTC m=+0.134439550 container start 125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:08:09 compute-0 podman[93861]: 2026-01-31 07:08:09.227827132 +0000 UTC m=+0.140215798 container attach 125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:09 compute-0 sudo[93904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:09 compute-0 sudo[93904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:09 compute-0 sudo[93904]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:09 compute-0 sudo[93930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:09 compute-0 sudo[93930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:09 compute-0 sudo[93930]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v118: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:09 compute-0 sudo[93955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:08:09 compute-0 sudo[93955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 07:08:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 31 07:08:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 41 pg[8.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:09 compute-0 podman[94042]: 2026-01-31 07:08:09.65701096 +0000 UTC m=+0.041822162 container create 2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:09 compute-0 systemd[1]: Started libpod-conmon-2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e.scope.
Jan 31 07:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:09 compute-0 podman[94042]: 2026-01-31 07:08:09.72378504 +0000 UTC m=+0.108596282 container init 2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hamilton, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:08:09 compute-0 podman[94042]: 2026-01-31 07:08:09.728454372 +0000 UTC m=+0.113265604 container start 2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hamilton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:09 compute-0 gallant_hamilton[94058]: 167 167
Jan 31 07:08:09 compute-0 podman[94042]: 2026-01-31 07:08:09.63518499 +0000 UTC m=+0.019996282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:09 compute-0 systemd[1]: libpod-2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e.scope: Deactivated successfully.
Jan 31 07:08:09 compute-0 podman[94042]: 2026-01-31 07:08:09.733433952 +0000 UTC m=+0.118245184 container attach 2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:08:09 compute-0 podman[94042]: 2026-01-31 07:08:09.73379491 +0000 UTC m=+0.118606142 container died 2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-af2c75f898a56838a89a969711dbb25df6b0cd197bd0eb1717b15b222f49e06b-merged.mount: Deactivated successfully.
Jan 31 07:08:09 compute-0 podman[94042]: 2026-01-31 07:08:09.784836574 +0000 UTC m=+0.169647816 container remove 2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hamilton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:08:09 compute-0 systemd[1]: libpod-conmon-2d82b84be946775e21508d2d0006d72593058a964010e80492aacd4715d0ac0e.scope: Deactivated successfully.
Jan 31 07:08:09 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:09 compute-0 pensive_chaum[93899]: 
Jan 31 07:08:09 compute-0 pensive_chaum[93899]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 31 07:08:09 compute-0 systemd[1]: Reloading.
Jan 31 07:08:09 compute-0 podman[93861]: 2026-01-31 07:08:09.857315539 +0000 UTC m=+0.769704225 container died 125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:09 compute-0 systemd-rc-local-generator[94114]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:09 compute-0 systemd-sysv-generator[94117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:10 compute-0 systemd[1]: libpod-125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8.scope: Deactivated successfully.
Jan 31 07:08:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 07:08:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.njduba", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 07:08:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:10 compute-0 ceph-mon[74496]: Deploying daemon rgw.rgw.compute-0.njduba on compute-0
Jan 31 07:08:10 compute-0 ceph-mon[74496]: pgmap v118: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:10 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 07:08:10 compute-0 ceph-mon[74496]: osdmap e41: 3 total, 3 up, 3 in
Jan 31 07:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e9e867ce0722a93da3faef7c7f82247042cffbe764dea1f02408b43320be5de-merged.mount: Deactivated successfully.
Jan 31 07:08:10 compute-0 podman[93861]: 2026-01-31 07:08:10.102374754 +0000 UTC m=+1.014763440 container remove 125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8 (image=quay.io/ceph/ceph:v18, name=pensive_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 31 07:08:10 compute-0 systemd[1]: libpod-conmon-125620b6d5bc1a9d5656e6a589f5e49703c984d65ed38a90306740063c233af8.scope: Deactivated successfully.
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:08:10 compute-0 sudo[93858]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:10 compute-0 systemd[1]: Reloading.
Jan 31 07:08:10 compute-0 systemd-rc-local-generator[94158]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:10 compute-0 systemd-sysv-generator[94163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:10 compute-0 ansible-async_wrapper.py[93566]: Done in kid B.
Jan 31 07:08:10 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.njduba for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 31 07:08:10 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 42 pg[9.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [0] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:10 compute-0 podman[94219]: 2026-01-31 07:08:10.552933674 +0000 UTC m=+0.046481625 container create 7b0621a2e40c1742efa8191945f46ebb87559261384b35471c2d6d71eb9be595 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-0-njduba, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:10 compute-0 ceph-mgr[74791]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 31 07:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cd00e4c1a40e0446e7d0b8c98b4d2645414e2347ad534128305aa1e29eacd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cd00e4c1a40e0446e7d0b8c98b4d2645414e2347ad534128305aa1e29eacd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cd00e4c1a40e0446e7d0b8c98b4d2645414e2347ad534128305aa1e29eacd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cd00e4c1a40e0446e7d0b8c98b4d2645414e2347ad534128305aa1e29eacd9/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.njduba supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:10 compute-0 podman[94219]: 2026-01-31 07:08:10.616225117 +0000 UTC m=+0.109773088 container init 7b0621a2e40c1742efa8191945f46ebb87559261384b35471c2d6d71eb9be595 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-0-njduba, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:10 compute-0 podman[94219]: 2026-01-31 07:08:10.61950663 +0000 UTC m=+0.113054591 container start 7b0621a2e40c1742efa8191945f46ebb87559261384b35471c2d6d71eb9be595 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-0-njduba, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:10 compute-0 bash[94219]: 7b0621a2e40c1742efa8191945f46ebb87559261384b35471c2d6d71eb9be595
Jan 31 07:08:10 compute-0 podman[94219]: 2026-01-31 07:08:10.534542959 +0000 UTC m=+0.028090930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:10 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.njduba for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:08:10 compute-0 sudo[93955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:08:10 compute-0 radosgw[94239]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:08:10 compute-0 radosgw[94239]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 31 07:08:10 compute-0 radosgw[94239]: framework: beast
Jan 31 07:08:10 compute-0 radosgw[94239]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 31 07:08:10 compute-0 radosgw[94239]: init_numa not setting numa affinity
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev a19fd92b-2b7a-4e3f-a5a8-272db4c2532a (Updating rgw.rgw deployment (+3 -> 3))
Jan 31 07:08:10 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event a19fd92b-2b7a-4e3f-a5a8-272db4c2532a (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Jan 31 07:08:10 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 07:08:10 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:10 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 77106dd6-8219-4586-97bf-62115dd8abb1 (Updating mds.cephfs deployment (+3 -> 3))
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:10 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.ihffma on compute-2
Jan 31 07:08:10 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.ihffma on compute-2
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:11 compute-0 ceph-mon[74496]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 07:08:11 compute-0 ceph-mon[74496]: osdmap e42: 3 total, 3 up, 3 in
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1217523012' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-0 ceph-mon[74496]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.ihffma", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v121: 133 pgs: 2 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 31 07:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 07:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 07:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 07:08:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 31 07:08:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 43 pg[9.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [0] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:11 compute-0 sudo[94328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmnhdkkdsogxvbuzxmcwresbnktdtjmg ; /usr/bin/python3'
Jan 31 07:08:11 compute-0 sudo[94328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:11 compute-0 python3[94330]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:11 compute-0 podman[94331]: 2026-01-31 07:08:11.91499728 +0000 UTC m=+0.055587895 container create 3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed (image=quay.io/ceph/ceph:v18, name=distracted_perlman, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:08:11 compute-0 systemd[1]: Started libpod-conmon-3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed.scope.
Jan 31 07:08:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:11 compute-0 podman[94331]: 2026-01-31 07:08:11.887858433 +0000 UTC m=+0.028449108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29c9f50cac35dd3ae407a3de8e095342e9140456c80dec7cd6a52271040ce95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b29c9f50cac35dd3ae407a3de8e095342e9140456c80dec7cd6a52271040ce95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:12 compute-0 podman[94331]: 2026-01-31 07:08:12.012571859 +0000 UTC m=+0.153162534 container init 3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed (image=quay.io/ceph/ceph:v18, name=distracted_perlman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:08:12 compute-0 podman[94331]: 2026-01-31 07:08:12.018619461 +0000 UTC m=+0.159210056 container start 3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed (image=quay.io/ceph/ceph:v18, name=distracted_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:08:12 compute-0 podman[94331]: 2026-01-31 07:08:12.02358426 +0000 UTC m=+0.164174855 container attach 3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed (image=quay.io/ceph/ceph:v18, name=distracted_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:12 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.voybui on compute-0
Jan 31 07:08:12 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.voybui on compute-0
Jan 31 07:08:12 compute-0 sudo[94370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:12 compute-0 sudo[94370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:12 compute-0 sudo[94370]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:12 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 31 07:08:12 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 31 07:08:12 compute-0 ceph-mon[74496]: Deploying daemon mds.cephfs.compute-2.ihffma on compute-2
Jan 31 07:08:12 compute-0 ceph-mon[74496]: pgmap v121: 133 pgs: 2 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:12 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 07:08:12 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 07:08:12 compute-0 ceph-mon[74496]: osdmap e43: 3 total, 3 up, 3 in
Jan 31 07:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.voybui", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:12 compute-0 ceph-mon[74496]: 3.1f scrub starts
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1455749916' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:12 compute-0 sudo[94395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:12 compute-0 sudo[94395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:12 compute-0 sudo[94395]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 07:08:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:12 compute-0 sudo[94420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:12 compute-0 sudo[94420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:12 compute-0 sudo[94420]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:12 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.14388 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:12 compute-0 distracted_perlman[94346]: 
Jan 31 07:08:12 compute-0 distracted_perlman[94346]: [{"container_id": "1da6676927c3", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.87%", "created": "2026-01-31T07:05:42.940299Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T07:05:43.006918Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\"", "2026-01-31T07:07:53.042444Z daemon:crash.compute-0 [INFO] \"Reconfigured crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:06:31.966518Z", "memory_usage": 11628707, "ports": [], "service_name": "crash", "started": "2026-01-31T07:05:42.826826Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@crash.compute-0", "version": "18.2.7"}, {"container_id": "92426b64c091", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.73%", "created": "2026-01-31T07:06:17.273973Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-31T07:06:17.313935Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\"", "2026-01-31T07:07:56.128693Z 
daemon:crash.compute-1 [INFO] \"Reconfigured crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-31T07:08:00.144658Z", "memory_usage": 11712593, "ports": [], "service_name": "crash", "started": "2026-01-31T07:06:17.200137Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@crash.compute-1", "version": "18.2.7"}, {"container_id": "7d6c90b37bdf", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.92%", "created": "2026-01-31T07:07:14.702391Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-31T07:07:14.861535Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-31T07:07:59.952406Z", "memory_usage": 11660165, "ports": [], "service_name": "crash", "started": "2026-01-31T07:07:14.604263Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@crash.compute-2", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-2.ihffma", "daemon_name": "mds.cephfs.compute-2.ihffma", "daemon_type": "mds", "events": ["2026-01-31T07:08:12.337658Z daemon:mds.cephfs.compute-2.ihffma [INFO] \"Deployed mds.cephfs.compute-2.ihffma on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "80eff2094e98", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "43.40%", "created": "2026-01-31T07:04:36.313712Z", "daemon_id": "compute-0.hhuoua", "daemon_name": "mgr.compute-0.hhuoua", "daemon_type": "mgr", "events": ["2026-01-31T07:07:52.284234Z daemon:mgr.compute-0.hhuoua [INFO] \"Reconfigured mgr.compute-0.hhuoua on host 'compute-0'\""], "hostname": "compute-0", "is_active": true, "last_refresh": "2026-01-31T07:06:31.966385Z", "memory_usage": 545993523, "ports": [9283, 8765, 8765], "service_name": "mgr", "started": "2026-01-31T07:04:36.241326Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@mgr.compute-0.hhuoua", "version": "18.2.7"}, {"container_id": "479824a71b73", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "46.50%", "created": "2026-01-31T07:07:13.261245Z", "daemon_id": "compute-1.hodsiu", "daemon_name": "mgr.compute-1.hodsiu", "daemon_type": "mgr", "events": ["2026-01-31T07:07:13.341999Z daemon:mgr.compute-1.hodsiu [INFO] \"Deployed mgr.compute-1.hodsiu on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-31T07:08:00.145046Z", "memory_usage": 514011955, "ports": [8765], "service_name": "mgr", "started": "2026-01-31T07:07:13.139288Z", "status": 1, "status_desc": "running", "systemd_unit": 
"ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@mgr.compute-1.hodsiu", "version": "18.2.7"}, {"container_id": "3f61f2a28d52", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "40.49%", "created": "2026-01-31T07:07:06.070161Z", "daemon_id": "compute-2.wmgest", "daemon_name": "mgr.compute-2.wmgest", "daemon_type": "mgr", "events": ["2026-01-31T07:07:10.864442Z daemon:mgr.compute-2.wmgest [INFO] \"Deployed mgr.compute-2.wmgest on host 'compute-2'\"", "2026-01-31T07:07:59.041202Z daemon:mgr.compute-2.wmgest [INFO] \"Reconfigured mgr.compute-2.wmgest on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-31T07:07:59.952272Z", "memory_usage": 515165388, "ports": [8765], "service_name": "mgr", "started": "2026-01-31T07:07:05.988241Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@mgr.compute-2.wmgest", "version": "18.2.7"}, {"container_id": "c6500841c07b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.66%", "created": "2026-01-31T07:04:31.763603Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T07:07:51.550702Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", 
"is_active": false, "last_refresh": "2026-01-31T07:06:31.966234Z", "memory_request": 2147483648, "memory_usage": 32694599, "ports": [], "service_name": "mon", "started": "2026-01-31T07:04:34.216897Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@mon.compute-0", "version": "18.2.7"}, {"container_id": "26cb4dc9845a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.93%", "created": "2026-01-31T07:07:02.311998Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2026-01-31T07:07:04.557312Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\"", "2026-01-31T07:07:57.868140Z daemon:mon.compute-1 [INFO] \"Reconfigured mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-31T07:08:00.144927Z", "memory_request": 2147483648, "memory_usage": 32463912, "ports": [], "service_name": "mon", "started": "2026-01-31T07:07:01.913586Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@mon.compute-1", "version": "18.2.7"}, {"container_id": "630bbce25a07", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "2.55%", "created": "2026-01-31T07:06:59.349981Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2026-01-31T07:06:59.413175Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\"", "2026-01-31T07:07:58.455580Z daemon:mon.compute-2 [INFO] \"Reconfigured mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-31T07:07:59.952036Z", "memory_request": 2147483648, "memory_usage": 31950110, "ports": [], "service_name": "mon", "started": "2026-01-31T07:06:59.270186Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@mon.compute-2", "version": "18.2.7"}, {"container_id": "7fd99238d214", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "9.27%", "created": "2026-01-31T07:06:28.146392Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-31T07:06:28.195051Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\"", "2026-01-31T07:07:54.767784Z daemon:osd.0 [INFO] \"Reconfigured osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:06:31.966645Z", "memory_request": 4294967296, "memory_usage": 33145487, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T07:06:28.047147Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@osd.0", "version": "18.2.7"}, {"container_id": "286672a875d6", "container_image_digests": 
["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.61%", "created": "2026-01-31T07:06:28.421234Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-31T07:06:28.458503Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-1'\"", "2026-01-31T07:07:57.130748Z daemon:osd.1 [INFO] \"Reconfigured osd.1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-31T07:08:00.144805Z", "memory_request": 5502906777, "memory_usage": 64508395, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T07:06:28.314553Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@osd.1", "version": "18.2.7"}, {"container_id": "3047b440b723", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "5.18%", "created": "2026-01-31T07:07:27.327729Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-31T07:07:29.226776Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-31T07:07:59.952527Z", "memory_request": 4294967296, "memory_usage": 66154659, "ports": [], "service_name": "osd.default_drive_group", 
"started": "2026-01-31T07:07:27.237274Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.njduba", "daemon_name": "rgw.rgw.compute-0.njduba", "daemon_type": "rgw", "events": ["2026-01-31T07:08:10.726964Z daemon:rgw.rgw.compute-0.njduba [INFO] \"Deployed rgw.rgw.compute-0.njduba on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"daemon_id": "rgw.compute-1.zjvjex", "daemon_name": "rgw.rgw.compute-1.zjvjex", "daemon_type": "rgw", "events": ["2026-01-31T07:08:09.063798Z daemon:rgw.rgw.compute-1.zjvjex [INFO] \"Deployed rgw.rgw.compute-1.zjvjex on host 'compute-1'\""], "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}, {"daemon_id": "rgw.compute-2.kddbks", "daemon_name": "rgw.rgw.compute-2.kddbks", "daemon_type": "rgw", "events": ["2026-01-31T07:08:07.440200Z daemon:rgw.rgw.compute-2.kddbks [INFO] \"Deployed rgw.rgw.compute-2.kddbks on host 'compute-2'\""], "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Jan 31 07:08:12 compute-0 sudo[94445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:08:12 compute-0 sudo[94445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:12 compute-0 systemd[1]: libpod-3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed.scope: Deactivated successfully.
Jan 31 07:08:12 compute-0 podman[94331]: 2026-01-31 07:08:12.642407124 +0000 UTC m=+0.782997749 container died 3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed (image=quay.io/ceph/ceph:v18, name=distracted_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b29c9f50cac35dd3ae407a3de8e095342e9140456c80dec7cd6a52271040ce95-merged.mount: Deactivated successfully.
Jan 31 07:08:12 compute-0 podman[94331]: 2026-01-31 07:08:12.723308845 +0000 UTC m=+0.863899440 container remove 3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed (image=quay.io/ceph/ceph:v18, name=distracted_perlman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:12 compute-0 sudo[94328]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:12 compute-0 systemd[1]: libpod-conmon-3cfe3f10af31df8d57e42742e5b756d7c88733b55caa846a115a38fb94cf99ed.scope: Deactivated successfully.
Jan 31 07:08:12 compute-0 rsyslogd[1005]: message too long (15148) with configured size 8096, begin of message is: [{"container_id": "1da6676927c3", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 07:08:12 compute-0 podman[94526]: 2026-01-31 07:08:12.913750558 +0000 UTC m=+0.036609987 container create 86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_keller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 07:08:12 compute-0 systemd[1]: Started libpod-conmon-86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163.scope.
Jan 31 07:08:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:12 compute-0 podman[94526]: 2026-01-31 07:08:12.97427561 +0000 UTC m=+0.097135119 container init 86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_keller, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:12 compute-0 podman[94526]: 2026-01-31 07:08:12.981641813 +0000 UTC m=+0.104501282 container start 86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_keller, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:12 compute-0 laughing_keller[94542]: 167 167
Jan 31 07:08:12 compute-0 systemd[1]: libpod-86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163.scope: Deactivated successfully.
Jan 31 07:08:12 compute-0 podman[94526]: 2026-01-31 07:08:12.986701914 +0000 UTC m=+0.109561363 container attach 86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 31 07:08:12 compute-0 podman[94526]: 2026-01-31 07:08:12.987200515 +0000 UTC m=+0.110059944 container died 86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:12 compute-0 podman[94526]: 2026-01-31 07:08:12.892824587 +0000 UTC m=+0.015684036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-956a376cda1cf6c95dacbebe510e92b1214b9280a9e1f1f5718a2eeef53369ec-merged.mount: Deactivated successfully.
Jan 31 07:08:13 compute-0 podman[94526]: 2026-01-31 07:08:13.048183697 +0000 UTC m=+0.171043126 container remove 86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:08:13 compute-0 systemd[1]: libpod-conmon-86690f7a8398805aa5db70ef925af43c8ebc875f9ce29b550221b54c63e8d163.scope: Deactivated successfully.
Jan 31 07:08:13 compute-0 systemd[1]: Reloading.
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e3 new map
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:07:55.106335+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.ihffma{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:boot
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] as mds.0
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.ihffma assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.ihffma"} v 0) v1
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.ihffma"}]: dispatch
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e3 all = 0
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e4 new map
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:13.111386+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:creating}
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.ihffma is now active in filesystem cephfs as rank 0
Jan 31 07:08:13 compute-0 systemd-rc-local-generator[94587]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:13 compute-0 systemd-sysv-generator[94591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v124: 134 pgs: 1 unknown, 133 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 07:08:13 compute-0 systemd[1]: Reloading.
Jan 31 07:08:13 compute-0 systemd-sysv-generator[94632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:13 compute-0 systemd-rc-local-generator[94626]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:13 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Jan 31 07:08:13 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1455749916' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 07:08:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 31 07:08:13 compute-0 ceph-mon[74496]: Deploying daemon mds.cephfs.compute-0.voybui on compute-0
Jan 31 07:08:13 compute-0 ceph-mon[74496]: 3.1f scrub ok
Jan 31 07:08:13 compute-0 ceph-mon[74496]: osdmap e44: 3 total, 3 up, 3 in
Jan 31 07:08:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1455749916' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2780650631' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1217523012' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 07:08:13 compute-0 ceph-mon[74496]: 5.d scrub starts
Jan 31 07:08:13 compute-0 ceph-mon[74496]: 5.d scrub ok
Jan 31 07:08:13 compute-0 ceph-mon[74496]: from='client.14388 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:boot
Jan 31 07:08:13 compute-0 ceph-mon[74496]: daemon mds.cephfs.compute-2.ihffma assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 07:08:13 compute-0 ceph-mon[74496]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 07:08:13 compute-0 ceph-mon[74496]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 07:08:13 compute-0 ceph-mon[74496]: fsmap cephfs:0 1 up:standby
Jan 31 07:08:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.ihffma"}]: dispatch
Jan 31 07:08:13 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:creating}
Jan 31 07:08:13 compute-0 ceph-mon[74496]: daemon mds.cephfs.compute-2.ihffma is now active in filesystem cephfs as rank 0
Jan 31 07:08:13 compute-0 ceph-mon[74496]: 5.1d deep-scrub starts
Jan 31 07:08:13 compute-0 ceph-mon[74496]: 5.1d deep-scrub ok
Jan 31 07:08:13 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.voybui for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:08:13 compute-0 sudo[94665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsyfzanzpdlkdcapvktenisqnrjqnebh ; /usr/bin/python3'
Jan 31 07:08:13 compute-0 sudo[94665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:13 compute-0 python3[94673]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:13 compute-0 podman[94728]: 2026-01-31 07:08:13.81911516 +0000 UTC m=+0.037384544 container create 4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9 (image=quay.io/ceph/ceph:v18, name=nice_turing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:08:13 compute-0 podman[94729]: 2026-01-31 07:08:13.829568439 +0000 UTC m=+0.052254781 container create b8961a2bb89ef92ece9fe756e2217169f987d51ab5ed9d70855c2e200dbe66dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-0-voybui, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7aa80fa79fb9bf6ae5b0db9455669739fa62501a6080b365e71c3549ee89c80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7aa80fa79fb9bf6ae5b0db9455669739fa62501a6080b365e71c3549ee89c80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7aa80fa79fb9bf6ae5b0db9455669739fa62501a6080b365e71c3549ee89c80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7aa80fa79fb9bf6ae5b0db9455669739fa62501a6080b365e71c3549ee89c80/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.voybui supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:13 compute-0 systemd[1]: Started libpod-conmon-4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9.scope.
Jan 31 07:08:13 compute-0 podman[94729]: 2026-01-31 07:08:13.802655207 +0000 UTC m=+0.025341579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:13 compute-0 podman[94728]: 2026-01-31 07:08:13.803222129 +0000 UTC m=+0.021491533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:13 compute-0 podman[94729]: 2026-01-31 07:08:13.912745561 +0000 UTC m=+0.135431923 container init b8961a2bb89ef92ece9fe756e2217169f987d51ab5ed9d70855c2e200dbe66dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-0-voybui, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecca3066f8488923441e32e8f6fae80424f7f4d43f2648e8a4d2d1747e5ef62b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecca3066f8488923441e32e8f6fae80424f7f4d43f2648e8a4d2d1747e5ef62b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:13 compute-0 podman[94729]: 2026-01-31 07:08:13.93042392 +0000 UTC m=+0.153110262 container start b8961a2bb89ef92ece9fe756e2217169f987d51ab5ed9d70855c2e200dbe66dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-0-voybui, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:08:13 compute-0 podman[94728]: 2026-01-31 07:08:13.936019713 +0000 UTC m=+0.154289117 container init 4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9 (image=quay.io/ceph/ceph:v18, name=nice_turing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:13 compute-0 podman[94728]: 2026-01-31 07:08:13.942498256 +0000 UTC m=+0.160767640 container start 4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9 (image=quay.io/ceph/ceph:v18, name=nice_turing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:13 compute-0 bash[94729]: b8961a2bb89ef92ece9fe756e2217169f987d51ab5ed9d70855c2e200dbe66dd
Jan 31 07:08:13 compute-0 podman[94728]: 2026-01-31 07:08:13.956147947 +0000 UTC m=+0.174417331 container attach 4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9 (image=quay.io/ceph/ceph:v18, name=nice_turing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 31 07:08:13 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.voybui for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:08:13 compute-0 ceph-mds[94769]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:08:13 compute-0 ceph-mds[94769]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 31 07:08:13 compute-0 ceph-mds[94769]: main not setting numa affinity
Jan 31 07:08:13 compute-0 ceph-mds[94769]: pidfile_write: ignore empty --pid-file
Jan 31 07:08:13 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mds-cephfs-compute-0-voybui[94760]: starting mds.cephfs.compute-0.voybui at 
Jan 31 07:08:13 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Updating MDS map to version 4 from mon.0
Jan 31 07:08:13 compute-0 sudo[94445]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:14 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.dqeaqy on compute-1
Jan 31 07:08:14 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.dqeaqy on compute-1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e5 new map
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:14.127061+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:14 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Updating MDS map to version 5 from mon.0
Jan 31 07:08:14 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Monitors have assigned me to become a standby.
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:active
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] up:boot
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 1 up:standby
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.voybui"} v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.voybui"}]: dispatch
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e5 all = 0
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e6 new map
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:14.127061+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 1 up:standby
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419499773' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:08:14 compute-0 nice_turing[94765]: 
Jan 31 07:08:14 compute-0 nice_turing[94765]: {"fsid":"f70fcd2a-dcb4-5f89-a4ba-79a09959083b","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":64,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":45,"num_osds":3,"num_up_osds":3,"osd_up_since":1769843258,"num_in_osds":3,"osd_in_since":1769843236,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":133},{"state_name":"unknown","count":1}],"num_pgs":134,"num_pools":10,"num_objects":8,"data_bytes":461960,"bytes_used":84201472,"bytes_avail":22451793920,"bytes_total":22535995392,"unknown_pgs_ratio":0.0074626863934099674,"read_bytes_sec":4094,"write_bytes_sec":1535,"read_op_per_sec":3,"write_op_per_sec":1},"fsmap":{"epoch":6,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.ihffma","status":"up:active","gid":24157}],"up:standby":1},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":6,"modified":"2026-01-31T07:08:13.325303+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-2.ihffma":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mgr":{"daemons":{"summary":"","compute-1.hodsiu":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.wmgest":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"77106dd6-8219-4586-97bf-62115dd8abb1":{"message":"Updating mds.cephfs deployment (+3 -> 3) (1s)\n      [=========...................] (remaining: 3s)","progress":0.3333333432674408,"add_to_ceph_s":true},"c045c0ec-a7b1-41f3-94e7-0d61220a8226":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 31 07:08:14 compute-0 systemd[1]: libpod-4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9.scope: Deactivated successfully.
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 31 07:08:14 compute-0 podman[94811]: 2026-01-31 07:08:14.578196061 +0000 UTC m=+0.020583204 container died 4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9 (image=quay.io/ceph/ceph:v18, name=nice_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 46 pg[11.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [0] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecca3066f8488923441e32e8f6fae80424f7f4d43f2648e8a4d2d1747e5ef62b-merged.mount: Deactivated successfully.
Jan 31 07:08:14 compute-0 ceph-mon[74496]: pgmap v124: 134 pgs: 1 unknown, 133 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1455749916' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 07:08:14 compute-0 ceph-mon[74496]: osdmap e45: 3 total, 3 up, 3 in
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dqeaqy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:active
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mds.? [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] up:boot
Jan 31 07:08:14 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 1 up:standby
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.voybui"}]: dispatch
Jan 31 07:08:14 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 1 up:standby
Jan 31 07:08:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/419499773' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 07:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 07:08:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:14 compute-0 podman[94811]: 2026-01-31 07:08:14.859868252 +0000 UTC m=+0.302255385 container remove 4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9 (image=quay.io/ceph/ceph:v18, name=nice_turing, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:14 compute-0 systemd[1]: libpod-conmon-4aae08bd139c69091d1a0d85d0f5c76c17f9b7e196d9337eb4daaa96588c5eb9.scope: Deactivated successfully.
Jan 31 07:08:14 compute-0 sudo[94665]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v127: 135 pgs: 2 unknown, 133 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 07:08:15 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 31 07:08:15 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 31 07:08:15 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 11 completed events
Jan 31 07:08:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:08:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:15 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event c045c0ec-a7b1-41f3-94e7-0d61220a8226 (Global Recovery Event) in 5 seconds
Jan 31 07:08:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 31 07:08:15 compute-0 sudo[94848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eajabqjhmhlibqltxwmcjidpuhewootm ; /usr/bin/python3'
Jan 31 07:08:15 compute-0 sudo[94848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 07:08:15 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 31 07:08:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 07:08:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 07:08:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 07:08:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 47 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [0] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:15 compute-0 python3[94850]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:15 compute-0 ceph-mon[74496]: 5.1f scrub starts
Jan 31 07:08:15 compute-0 ceph-mon[74496]: 5.1f scrub ok
Jan 31 07:08:15 compute-0 ceph-mon[74496]: Deploying daemon mds.cephfs.compute-1.dqeaqy on compute-1
Jan 31 07:08:15 compute-0 ceph-mon[74496]: osdmap e46: 3 total, 3 up, 3 in
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/616184540' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1299930726' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: 3.1e scrub starts
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 07:08:15 compute-0 ceph-mon[74496]: osdmap e47: 3 total, 3 up, 3 in
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/616184540' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1299930726' entity='client.rgw.rgw.compute-1.zjvjex' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 07:08:15 compute-0 podman[94851]: 2026-01-31 07:08:15.961279731 +0000 UTC m=+0.084973552 container create ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2 (image=quay.io/ceph/ceph:v18, name=sad_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:16 compute-0 systemd[1]: Started libpod-conmon-ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2.scope.
Jan 31 07:08:16 compute-0 podman[94851]: 2026-01-31 07:08:15.914724096 +0000 UTC m=+0.038417997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17a7b8df6cef93a471484e2b61794ddd775dabb960379189a11e322f53f29e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17a7b8df6cef93a471484e2b61794ddd775dabb960379189a11e322f53f29e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:16 compute-0 podman[94851]: 2026-01-31 07:08:16.074140845 +0000 UTC m=+0.197834756 container init ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2 (image=quay.io/ceph/ceph:v18, name=sad_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:08:16 compute-0 podman[94851]: 2026-01-31 07:08:16.079115695 +0000 UTC m=+0.202809546 container start ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2 (image=quay.io/ceph/ceph:v18, name=sad_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:08:16 compute-0 podman[94851]: 2026-01-31 07:08:16.114283119 +0000 UTC m=+0.237977020 container attach ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2 (image=quay.io/ceph/ceph:v18, name=sad_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513498969' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:08:16 compute-0 sad_shockley[94867]: 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 77106dd6-8219-4586-97bf-62115dd8abb1 (Updating mds.cephfs deployment (+3 -> 3))
Jan 31 07:08:16 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 77106dd6-8219-4586-97bf-62115dd8abb1 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 31 07:08:16 compute-0 systemd[1]: libpod-ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2.scope: Deactivated successfully.
Jan 31 07:08:16 compute-0 conmon[94867]: conmon ae1eb9a97274815516f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2.scope/container/memory.events
Jan 31 07:08:16 compute-0 sad_shockley[94867]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502906777","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.njduba","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.zjvjex","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.kddbks","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 31 07:08:16 compute-0 podman[94851]: 2026-01-31 07:08:16.623813286 +0000 UTC m=+0.747507127 container died ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2 (image=quay.io/ceph/ceph:v18, name=sad_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev ad35783d-1bb4-4dd4-b9eb-ab3e54dd4a05 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 31 07:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a17a7b8df6cef93a471484e2b61794ddd775dabb960379189a11e322f53f29e7-merged.mount: Deactivated successfully.
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 31 07:08:16 compute-0 podman[94851]: 2026-01-31 07:08:16.779830311 +0000 UTC m=+0.903524122 container remove ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2 (image=quay.io/ceph/ceph:v18, name=sad_shockley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 systemd[1]: libpod-conmon-ae1eb9a97274815516f58094b6c446b76ae198f2f17af1f4e0336ed27faf0dd2.scope: Deactivated successfully.
Jan 31 07:08:16 compute-0 sudo[94848]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:16 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.cwtxbj on compute-0
Jan 31 07:08:16 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.cwtxbj on compute-0
Jan 31 07:08:16 compute-0 sudo[94904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:16 compute-0 sudo[94904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:16 compute-0 sudo[94904]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:16 compute-0 ceph-mon[74496]: pgmap v127: 135 pgs: 2 unknown, 133 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 07:08:16 compute-0 ceph-mon[74496]: 3.1e scrub ok
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1513498969' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-2.kddbks' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1494637509' entity='client.rgw.rgw.compute-0.njduba' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='client.? ' entity='client.rgw.rgw.compute-1.zjvjex' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 07:08:16 compute-0 ceph-mon[74496]: osdmap e48: 3 total, 3 up, 3 in
Jan 31 07:08:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:16 compute-0 ceph-mon[74496]: Deploying daemon haproxy.rgw.default.compute-0.cwtxbj on compute-0
Jan 31 07:08:16 compute-0 sudo[94929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:16 compute-0 sudo[94929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:16 compute-0 sudo[94929]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:16 compute-0 sudo[94954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:16 compute-0 sudo[94954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:16 compute-0 sudo[94954]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e7 new map
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:14.127061+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] up:boot
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.dqeaqy"} v 0) v1
Jan 31 07:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.dqeaqy"}]: dispatch
Jan 31 07:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e7 all = 0
Jan 31 07:08:17 compute-0 sudo[94979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:08:17 compute-0 sudo[94979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:17 compute-0 radosgw[94239]: LDAP not started since no server URIs were provided in the configuration.
Jan 31 07:08:17 compute-0 radosgw[94239]: framework: beast
Jan 31 07:08:17 compute-0 radosgw[94239]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 07:08:17 compute-0 radosgw[94239]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 07:08:17 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-rgw-rgw-compute-0-njduba[94235]: 2026-01-31T07:08:17.211+0000 7fdd24d7d940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 31 07:08:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-0 radosgw[94239]: starting handler: beast
Jan 31 07:08:17 compute-0 radosgw[94239]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 07:08:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-0 radosgw[94239]: mgrc service_daemon_register rgw.14394 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.njduba,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=12135664-ebbb-4151-b9c5-06f6a22e158a,zone_name=default,zonegroup_id=50d9470b-8f69-45ea-ac36-a7be21625514,zonegroup_name=default}
Jan 31 07:08:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v130: 135 pgs: 1 unknown, 134 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.4 deep-scrub starts
Jan 31 07:08:17 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.4 deep-scrub ok
Jan 31 07:08:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 31 07:08:17 compute-0 sudo[95620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjqeegkixcanxshkkctqafvacnushejf ; /usr/bin/python3'
Jan 31 07:08:17 compute-0 sudo[95620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:17 compute-0 python3[95622]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:17 compute-0 podman[95623]: 2026-01-31 07:08:17.792661679 +0000 UTC m=+0.048856536 container create 5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f (image=quay.io/ceph/ceph:v18, name=jovial_murdock, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:17 compute-0 systemd[1]: Started libpod-conmon-5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f.scope.
Jan 31 07:08:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4b1e7e723d078b4435d59da9901f0f4db60cfbc72c3d0e647763565fe5b5dd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4b1e7e723d078b4435d59da9901f0f4db60cfbc72c3d0e647763565fe5b5dd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:17 compute-0 podman[95623]: 2026-01-31 07:08:17.768739253 +0000 UTC m=+0.024934110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:17 compute-0 podman[95623]: 2026-01-31 07:08:17.865486682 +0000 UTC m=+0.121681519 container init 5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:08:17 compute-0 podman[95623]: 2026-01-31 07:08:17.870775419 +0000 UTC m=+0.126970236 container start 5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:17 compute-0 podman[95623]: 2026-01-31 07:08:17.876342022 +0000 UTC m=+0.132536839 container attach 5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:17 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 07:08:17 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 07:08:18 compute-0 ceph-mon[74496]: 5.10 scrub starts
Jan 31 07:08:18 compute-0 ceph-mon[74496]: 5.10 scrub ok
Jan 31 07:08:18 compute-0 ceph-mon[74496]: mds.? [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] up:boot
Jan 31 07:08:18 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.dqeaqy"}]: dispatch
Jan 31 07:08:18 compute-0 ceph-mon[74496]: pgmap v130: 135 pgs: 1 unknown, 134 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:18 compute-0 ceph-mon[74496]: 3.4 deep-scrub starts
Jan 31 07:08:18 compute-0 ceph-mon[74496]: 3.4 deep-scrub ok
Jan 31 07:08:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e8 new map
Jan 31 07:08:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:17.997329+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:18 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Updating MDS map to version 8 from mon.0
Jan 31 07:08:18 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:active
Jan 31 07:08:18 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] up:standby
Jan 31 07:08:18 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:18 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 31 07:08:18 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 31 07:08:18 compute-0 jovial_murdock[95638]: mimic
Jan 31 07:08:18 compute-0 systemd[1]: libpod-5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f.scope: Deactivated successfully.
Jan 31 07:08:18 compute-0 podman[95623]: 2026-01-31 07:08:18.509787987 +0000 UTC m=+0.765982824 container died 5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f (image=quay.io/ceph/ceph:v18, name=jovial_murdock, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:19 compute-0 ceph-mon[74496]: 3.14 scrub starts
Jan 31 07:08:19 compute-0 ceph-mon[74496]: 3.14 scrub ok
Jan 31 07:08:19 compute-0 ceph-mon[74496]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 07:08:19 compute-0 ceph-mon[74496]: Cluster is now healthy
Jan 31 07:08:19 compute-0 ceph-mon[74496]: mds.? [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] up:active
Jan 31 07:08:19 compute-0 ceph-mon[74496]: mds.? [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] up:standby
Jan 31 07:08:19 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:19 compute-0 ceph-mon[74496]: 5.5 scrub starts
Jan 31 07:08:19 compute-0 ceph-mon[74496]: 5.5 scrub ok
Jan 31 07:08:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2513510612' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v131: 135 pgs: 135 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 8.3 KiB/s wr, 130 op/s
Jan 31 07:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c4b1e7e723d078b4435d59da9901f0f4db60cfbc72c3d0e647763565fe5b5dd-merged.mount: Deactivated successfully.
Jan 31 07:08:19 compute-0 podman[95623]: 2026-01-31 07:08:19.612769729 +0000 UTC m=+1.868964536 container remove 5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f (image=quay.io/ceph/ceph:v18, name=jovial_murdock, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:08:19 compute-0 systemd[1]: libpod-conmon-5948debef7586f19c1a661f70953f3a623dc656486b1bba61feb30b8cadd856f.scope: Deactivated successfully.
Jan 31 07:08:19 compute-0 sudo[95620]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:19 compute-0 podman[95583]: 2026-01-31 07:08:19.682486904 +0000 UTC m=+2.364541637 container create 5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163 (image=quay.io/ceph/haproxy:2.3, name=frosty_clarke)
Jan 31 07:08:19 compute-0 systemd[1]: Started libpod-conmon-5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163.scope.
Jan 31 07:08:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:19 compute-0 podman[95583]: 2026-01-31 07:08:19.669284813 +0000 UTC m=+2.351339566 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 31 07:08:19 compute-0 podman[95583]: 2026-01-31 07:08:19.748070687 +0000 UTC m=+2.430125430 container init 5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163 (image=quay.io/ceph/haproxy:2.3, name=frosty_clarke)
Jan 31 07:08:19 compute-0 podman[95583]: 2026-01-31 07:08:19.755588184 +0000 UTC m=+2.437642947 container start 5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163 (image=quay.io/ceph/haproxy:2.3, name=frosty_clarke)
Jan 31 07:08:19 compute-0 podman[95583]: 2026-01-31 07:08:19.759168582 +0000 UTC m=+2.441223345 container attach 5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163 (image=quay.io/ceph/haproxy:2.3, name=frosty_clarke)
Jan 31 07:08:19 compute-0 frosty_clarke[95778]: 0 0
Jan 31 07:08:19 compute-0 systemd[1]: libpod-5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163.scope: Deactivated successfully.
Jan 31 07:08:19 compute-0 podman[95583]: 2026-01-31 07:08:19.761057774 +0000 UTC m=+2.443112517 container died 5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163 (image=quay.io/ceph/haproxy:2.3, name=frosty_clarke)
Jan 31 07:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e16173b8d3195330d449a6732cc5e3add5bc9cb839f047ed3d3461ac2e9b99f-merged.mount: Deactivated successfully.
Jan 31 07:08:19 compute-0 podman[95583]: 2026-01-31 07:08:19.801022283 +0000 UTC m=+2.483077036 container remove 5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163 (image=quay.io/ceph/haproxy:2.3, name=frosty_clarke)
Jan 31 07:08:19 compute-0 systemd[1]: libpod-conmon-5cc2a0cf90a2220bb4f75f22ec2b206332551194ca3d39b8bbb318ee659b7163.scope: Deactivated successfully.
Jan 31 07:08:19 compute-0 systemd[1]: Reloading.
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:08:19
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images']
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:08:19 compute-0 systemd-rc-local-generator[95821]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:19 compute-0 systemd-sysv-generator[95824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:08:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:08:20 compute-0 systemd[1]: Reloading.
Jan 31 07:08:20 compute-0 systemd-rc-local-generator[95865]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:20 compute-0 systemd-sysv-generator[95868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:20 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.cwtxbj for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:08:20 compute-0 sudo[95900]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pghfehzbqhbzmwmtdrlvruuxxnfgmnko ; /usr/bin/python3'
Jan 31 07:08:20 compute-0 sudo[95900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:08:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e9 new map
Jan 31 07:08:20 compute-0 ceph-mon[74496]: pgmap v131: 135 pgs: 135 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 8.3 KiB/s wr, 130 op/s
Jan 31 07:08:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T07:07:55.106295+0000
                                           modified        2026-01-31T07:08:17.997329+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.ihffma{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/3303115004,v1:192.168.122.102:6805/3303115004] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.voybui{-1:14406} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2569432247,v1:192.168.122.100:6807/2569432247] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.dqeaqy{-1:24170} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 07:08:20 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] up:standby
Jan 31 07:08:20 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:20 compute-0 python3[95907]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:08:20 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 13 completed events
Jan 31 07:08:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:08:20 compute-0 podman[95947]: 2026-01-31 07:08:20.641851735 +0000 UTC m=+0.061639268 container create b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97 (image=quay.io/ceph/ceph:v18, name=admiring_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:08:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:20 compute-0 podman[95948]: 2026-01-31 07:08:20.710378454 +0000 UTC m=+0.121575978 container create b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:08:20 compute-0 podman[95947]: 2026-01-31 07:08:20.620967815 +0000 UTC m=+0.040755358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:08:20 compute-0 podman[95948]: 2026-01-31 07:08:20.624452641 +0000 UTC m=+0.035650195 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 31 07:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4a79a3d665d7d86e7ff3b378ff3c712ce81c902e96f3b9551300c184da98f4/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:20 compute-0 systemd[1]: Started libpod-conmon-b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97.scope.
Jan 31 07:08:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d2ecd80faa810e66106ab5df36f030d7b474ca310c2633bc03b8edd4e733c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6d2ecd80faa810e66106ab5df36f030d7b474ca310c2633bc03b8edd4e733c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:20 compute-0 podman[95948]: 2026-01-31 07:08:20.837747368 +0000 UTC m=+0.248944922 container init b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:08:20 compute-0 podman[95948]: 2026-01-31 07:08:20.846194904 +0000 UTC m=+0.257392478 container start b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:08:20 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj[95975]: [NOTICE] 030/070820 (2) : New worker #1 (4) forked
Jan 31 07:08:20 compute-0 podman[95947]: 2026-01-31 07:08:20.864478156 +0000 UTC m=+0.284265699 container init b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97 (image=quay.io/ceph/ceph:v18, name=admiring_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000042s ======
Jan 31 07:08:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:20.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000042s
Jan 31 07:08:20 compute-0 podman[95947]: 2026-01-31 07:08:20.873845002 +0000 UTC m=+0.293632525 container start b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97 (image=quay.io/ceph/ceph:v18, name=admiring_hodgkin, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:08:21 compute-0 bash[95948]: b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3
Jan 31 07:08:21 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.cwtxbj for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:08:21 compute-0 podman[95947]: 2026-01-31 07:08:21.064446039 +0000 UTC m=+0.484233562 container attach b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97 (image=quay.io/ceph/ceph:v18, name=admiring_hodgkin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:21 compute-0 sudo[94979]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:08:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:08:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 07:08:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.envbir on compute-2
Jan 31 07:08:21 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.envbir on compute-2
Jan 31 07:08:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v132: 135 pgs: 135 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 7.3 KiB/s wr, 114 op/s
Jan 31 07:08:21 compute-0 ceph-mon[74496]: mds.? [v2:192.168.122.101:6804/858174765,v1:192.168.122.101:6805/858174765] up:standby
Jan 31 07:08:21 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:08:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:21 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.2 deep-scrub starts
Jan 31 07:08:21 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.2 deep-scrub ok
Jan 31 07:08:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 31 07:08:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3900091297' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 31 07:08:21 compute-0 admiring_hodgkin[95981]: 
Jan 31 07:08:21 compute-0 admiring_hodgkin[95981]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":15}}
Jan 31 07:08:21 compute-0 systemd[1]: libpod-b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97.scope: Deactivated successfully.
Jan 31 07:08:21 compute-0 podman[96017]: 2026-01-31 07:08:21.762747971 +0000 UTC m=+0.021846871 container died b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97 (image=quay.io/ceph/ceph:v18, name=admiring_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6d2ecd80faa810e66106ab5df36f030d7b474ca310c2633bc03b8edd4e733c4-merged.mount: Deactivated successfully.
Jan 31 07:08:21 compute-0 podman[96017]: 2026-01-31 07:08:21.808815805 +0000 UTC m=+0.067914725 container remove b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97 (image=quay.io/ceph/ceph:v18, name=admiring_hodgkin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:08:21 compute-0 systemd[1]: libpod-conmon-b8cced7b7f924d5efcd442423a0e7dc280b34f3c0010d0c81fba27f1a6da8a97.scope: Deactivated successfully.
Jan 31 07:08:21 compute-0 sudo[95900]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:22 compute-0 ceph-mon[74496]: 3.13 scrub starts
Jan 31 07:08:22 compute-0 ceph-mon[74496]: 3.13 scrub ok
Jan 31 07:08:22 compute-0 ceph-mon[74496]: Deploying daemon haproxy.rgw.default.compute-2.envbir on compute-2
Jan 31 07:08:22 compute-0 ceph-mon[74496]: pgmap v132: 135 pgs: 135 active+clean; 457 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 7.3 KiB/s wr, 114 op/s
Jan 31 07:08:22 compute-0 ceph-mon[74496]: 3.2 deep-scrub starts
Jan 31 07:08:22 compute-0 ceph-mon[74496]: 3.2 deep-scrub ok
Jan 31 07:08:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3900091297' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 31 07:08:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:08:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:22.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:08:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v133: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 120 KiB/s rd, 6.0 KiB/s wr, 233 op/s
Jan 31 07:08:23 compute-0 ceph-mon[74496]: 3.16 scrub starts
Jan 31 07:08:23 compute-0 ceph-mon[74496]: 3.16 scrub ok
Jan 31 07:08:23 compute-0 ceph-mon[74496]: 3.0 scrub starts
Jan 31 07:08:23 compute-0 ceph-mon[74496]: 3.0 scrub ok
Jan 31 07:08:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:24 compute-0 ceph-mon[74496]: pgmap v133: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 120 KiB/s rd, 6.0 KiB/s wr, 233 op/s
Jan 31 07:08:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v134: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 5.0 KiB/s wr, 194 op/s
Jan 31 07:08:25 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 31 07:08:25 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 31 07:08:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:26.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:08:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:08:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 07:08:26 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 31 07:08:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 31 07:08:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 31 07:08:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 07:08:26 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 07:08:26 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 07:08:26 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 07:08:26 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.faavbs on compute-2
Jan 31 07:08:26 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.faavbs on compute-2
Jan 31 07:08:26 compute-0 ceph-mon[74496]: pgmap v134: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 5.0 KiB/s wr, 194 op/s
Jan 31 07:08:26 compute-0 ceph-mon[74496]: 3.6 scrub starts
Jan 31 07:08:26 compute-0 ceph-mon[74496]: 3.6 scrub ok
Jan 31 07:08:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v135: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 4.5 KiB/s wr, 176 op/s
Jan 31 07:08:27 compute-0 ceph-mon[74496]: 5.16 scrub starts
Jan 31 07:08:27 compute-0 ceph-mon[74496]: 5.16 scrub ok
Jan 31 07:08:27 compute-0 ceph-mon[74496]: 5.3 scrub starts
Jan 31 07:08:27 compute-0 ceph-mon[74496]: 5.3 scrub ok
Jan 31 07:08:27 compute-0 ceph-mon[74496]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 07:08:27 compute-0 ceph-mon[74496]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 07:08:27 compute-0 ceph-mon[74496]: Deploying daemon keepalived.rgw.default.compute-2.faavbs on compute-2
Jan 31 07:08:27 compute-0 ceph-mon[74496]: 5.12 scrub starts
Jan 31 07:08:27 compute-0 ceph-mon[74496]: 5.12 scrub ok
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 1)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 1)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 07:08:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Jan 31 07:08:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 31 07:08:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:08:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:28.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:08:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 31 07:08:28 compute-0 ceph-mon[74496]: pgmap v135: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 4.5 KiB/s wr, 176 op/s
Jan 31 07:08:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 31 07:08:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 07:08:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 07:08:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 31 07:08:28 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev f6ceb8a4-da63-42b3-91ce-0b1d7d8d2817 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 07:08:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:08:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:08:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:08:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v137: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 111 op/s
Jan 31 07:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Jan 31 07:08:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 31 07:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 31 07:08:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 07:08:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 07:08:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 07:08:30 compute-0 ceph-mon[74496]: osdmap e49: 3 total, 3 up, 3 in
Jan 31 07:08:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:30 compute-0 ceph-mon[74496]: 5.15 scrub starts
Jan 31 07:08:30 compute-0 ceph-mon[74496]: 5.15 scrub ok
Jan 31 07:08:30 compute-0 ceph-mon[74496]: pgmap v137: 135 pgs: 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 111 op/s
Jan 31 07:08:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 31 07:08:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 31 07:08:30 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev d7cfe08e-593e-47d6-be07-328766b12274 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 07:08:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:08:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:30 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 50 pg[6.0( v 48'39 (0'0,48'39] local-lis/les=22/23 n=22 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=50 pruub=14.375216484s) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 44'38 mlcod 44'38 active pruub 135.303100586s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:30 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 50 pg[6.0( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=50 pruub=14.375216484s) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 44'38 mlcod 0'0 unknown pruub 135.303100586s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:30 compute-0 ceph-mgr[74791]: [progress WARNING root] Starting Global Recovery Event,15 pgs not in active + clean state
Jan 31 07:08:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 31 07:08:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 07:08:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 31 07:08:31 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 601a6da4-2ecf-40eb-8393-afb702d15a67 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 07:08:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:08:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.b( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.8( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.c( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.a( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.9( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.e( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.f( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.5( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.2( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.3( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.7( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.4( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.6( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=22/23 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.d( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=22/23 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.c( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.0( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 44'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.4( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 51 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=22/22 les/c/f=23/23/0 sis=50) [0] r=0 lpr=50 pi=[22,50)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:31 compute-0 ceph-mon[74496]: 5.0 deep-scrub starts
Jan 31 07:08:31 compute-0 ceph-mon[74496]: 5.0 deep-scrub ok
Jan 31 07:08:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 07:08:31 compute-0 ceph-mon[74496]: osdmap e50: 3 total, 3 up, 3 in
Jan 31 07:08:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v140: 150 pgs: 15 unknown, 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 31 07:08:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 07:08:32 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 31 07:08:32 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev b53fe66b-6dca-431d-be2c-00366ad2ea9a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 07:08:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:08:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:32 compute-0 ceph-mon[74496]: osdmap e51: 3 total, 3 up, 3 in
Jan 31 07:08:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:32 compute-0 ceph-mon[74496]: pgmap v140: 150 pgs: 15 unknown, 135 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:32 compute-0 ceph-mon[74496]: 3.10 scrub starts
Jan 31 07:08:32 compute-0 ceph-mon[74496]: 3.10 scrub ok
Jan 31 07:08:32 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 52 pg[8.0( v 41'8 (0'0,41'8] local-lis/les=40/41 n=6 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=52 pruub=9.171376228s) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 lcod 41'7 mlcod 41'7 active pruub 132.062881470s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:32 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 52 pg[8.0( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=52 pruub=9.171376228s) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 lcod 41'7 mlcod 0'0 unknown pruub 132.062881470s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 31 07:08:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:32.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 31 07:08:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:32.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:08:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:08:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 07:08:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:33 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 07:08:33 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 07:08:33 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 07:08:33 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 07:08:33 compute-0 ceph-mgr[74791]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.rwjfwq on compute-0
Jan 31 07:08:33 compute-0 ceph-mgr[74791]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.rwjfwq on compute-0
Jan 31 07:08:33 compute-0 sudo[96032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:33 compute-0 sudo[96032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:33 compute-0 sudo[96032]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:33 compute-0 sudo[96057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:33 compute-0 sudo[96057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:33 compute-0 sudo[96057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 31 07:08:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 07:08:33 compute-0 sudo[96082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:33 compute-0 sudo[96082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:33 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 31 07:08:33 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 296c4842-b411-4ff5-abd5-82a96f582fa7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 07:08:33 compute-0 sudo[96082]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.18( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.17( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.16( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.11( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.2( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1f( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.5( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.12( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.6( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.13( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1c( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1e( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1d( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.19( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1a( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.4( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1b( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.7( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1( v 41'8 (0'0,41'8] local-lis/les=40/41 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.b( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.c( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.d( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.e( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.a( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.9( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.8( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.f( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.3( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.10( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.15( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.14( v 41'8 lc 0'0 (0'0,41'8] local-lis/les=40/41 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.17( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:33 compute-0 ceph-mon[74496]: osdmap e52: 3 total, 3 up, 3 in
Jan 31 07:08:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:33 compute-0 ceph-mon[74496]: osdmap e53: 3 total, 3 up, 3 in
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.18( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.11( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.2( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.16( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.5( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1f( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.6( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.12( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.13( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1c( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1e( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1d( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1a( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.4( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1b( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.0( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 41'7 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.7( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.19( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.1( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.b( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.c( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.d( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.a( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.9( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.8( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.e( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.f( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.3( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.15( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.10( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 53 pg[8.14( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=40/40 les/c/f=41/41/0 sis=52) [0] r=0 lpr=52 pi=[40,52)/1 crt=41'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:33 compute-0 sudo[96107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b
Jan 31 07:08:33 compute-0 sudo[96107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v143: 212 pgs: 1 peering, 62 unknown, 149 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 31 07:08:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 07:08:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] update: starting ev 51109a79-ff55-497d-bfce-107eda499a61 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev f6ceb8a4-da63-42b3-91ce-0b1d7d8d2817 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event f6ceb8a4-da63-42b3-91ce-0b1d7d8d2817 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev d7cfe08e-593e-47d6-be07-328766b12274 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event d7cfe08e-593e-47d6-be07-328766b12274 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 601a6da4-2ecf-40eb-8393-afb702d15a67 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 601a6da4-2ecf-40eb-8393-afb702d15a67 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev b53fe66b-6dca-431d-be2c-00366ad2ea9a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event b53fe66b-6dca-431d-be2c-00366ad2ea9a (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 296c4842-b411-4ff5-abd5-82a96f582fa7 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 296c4842-b411-4ff5-abd5-82a96f582fa7 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev 51109a79-ff55-497d-bfce-107eda499a61 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 07:08:34 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event 51109a79-ff55-497d-bfce-107eda499a61 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 31 07:08:34 compute-0 ceph-mon[74496]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 07:08:34 compute-0 ceph-mon[74496]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 07:08:34 compute-0 ceph-mon[74496]: Deploying daemon keepalived.rgw.default.compute-0.rwjfwq on compute-0
Jan 31 07:08:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 07:08:34 compute-0 ceph-mon[74496]: pgmap v143: 212 pgs: 1 peering, 62 unknown, 149 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 07:08:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:34 compute-0 ceph-mon[74496]: osdmap e54: 3 total, 3 up, 3 in
Jan 31 07:08:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 54 pg[9.0( v 48'908 (0'0,48'908] local-lis/les=42/43 n=177 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=9.066829681s) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 48'907 mlcod 48'907 active pruub 134.097564697s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 54 pg[9.0( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=54 pruub=9.066829681s) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 48'907 mlcod 0'0 unknown pruub 134.097564697s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 31 07:08:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 31 07:08:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 31 07:08:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 07:08:35 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1e( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.19( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.16( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.10( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.17( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.3( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.13( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.4( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.7( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.12( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1d( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1c( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1f( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.18( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1b( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1a( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.6( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.5( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.a( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.d( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.c( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.f( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.b( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.8( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.e( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.2( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.11( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.9( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.14( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.15( v 48'908 lc 0'0 (0'0,48'908] local-lis/les=42/43 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:35 compute-0 ceph-mon[74496]: 5.4 scrub starts
Jan 31 07:08:35 compute-0 ceph-mon[74496]: 5.4 scrub ok
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.4( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1c( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.1( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.0( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 48'907 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.c( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.2( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 55 pg[9.14( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=42/42 les/c/f=43/43/0 sis=54) [0] r=0 lpr=54 pi=[42,54)/1 crt=48'908 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v146: 274 pgs: 1 peering, 124 unknown, 149 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:35 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 31 07:08:35 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 31 07:08:35 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 19 completed events
Jan 31 07:08:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:08:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:36 compute-0 podman[96172]: 2026-01-31 07:08:36.161550239 +0000 UTC m=+2.647662933 container create 720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492 (image=quay.io/ceph/keepalived:2.2.4, name=focused_cannon, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, version=2.2.4, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-type=git, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Jan 31 07:08:36 compute-0 systemd[1]: Started libpod-conmon-720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492.scope.
Jan 31 07:08:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:36 compute-0 podman[96172]: 2026-01-31 07:08:36.138061243 +0000 UTC m=+2.624173947 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 31 07:08:36 compute-0 podman[96172]: 2026-01-31 07:08:36.24169018 +0000 UTC m=+2.727802864 container init 720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492 (image=quay.io/ceph/keepalived:2.2.4, name=focused_cannon, vendor=Red Hat, Inc., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, version=2.2.4, distribution-scope=public)
Jan 31 07:08:36 compute-0 podman[96172]: 2026-01-31 07:08:36.247662464 +0000 UTC m=+2.733775128 container start 720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492 (image=quay.io/ceph/keepalived:2.2.4, name=focused_cannon, version=2.2.4, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, distribution-scope=public, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived)
Jan 31 07:08:36 compute-0 focused_cannon[96268]: 0 0
Jan 31 07:08:36 compute-0 systemd[1]: libpod-720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492.scope: Deactivated successfully.
Jan 31 07:08:36 compute-0 conmon[96268]: conmon 720ff67be81b0b892d2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492.scope/container/memory.events
Jan 31 07:08:36 compute-0 podman[96172]: 2026-01-31 07:08:36.251896476 +0000 UTC m=+2.738009130 container attach 720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492 (image=quay.io/ceph/keepalived:2.2.4, name=focused_cannon, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, architecture=x86_64, vcs-type=git, name=keepalived, build-date=2023-02-22T09:23:20, release=1793, io.openshift.expose-services=, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 07:08:36 compute-0 podman[96172]: 2026-01-31 07:08:36.252807472 +0000 UTC m=+2.738920126 container died 720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492 (image=quay.io/ceph/keepalived:2.2.4, name=focused_cannon, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 31 07:08:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 31 07:08:36 compute-0 ceph-mon[74496]: osdmap e55: 3 total, 3 up, 3 in
Jan 31 07:08:36 compute-0 ceph-mon[74496]: pgmap v146: 274 pgs: 1 peering, 124 unknown, 149 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:36 compute-0 ceph-mon[74496]: 5.e deep-scrub starts
Jan 31 07:08:36 compute-0 ceph-mon[74496]: 5.e deep-scrub ok
Jan 31 07:08:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cad83b4257f44769ae0e171f88cdbd572e8be3ceaae73092d7baf9bea0440642-merged.mount: Deactivated successfully.
Jan 31 07:08:36 compute-0 podman[96172]: 2026-01-31 07:08:36.292803131 +0000 UTC m=+2.778915785 container remove 720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492 (image=quay.io/ceph/keepalived:2.2.4, name=focused_cannon, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, release=1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=keepalived, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 07:08:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 07:08:36 compute-0 systemd[1]: libpod-conmon-720ff67be81b0b892d2d1c04235a2ee4973af5e4841905992907da8b7e0dd492.scope: Deactivated successfully.
Jan 31 07:08:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 31 07:08:36 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=11.462250710s) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active pruub 138.367019653s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:36 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=56 pruub=11.462250710s) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown pruub 138.367019653s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:36 compute-0 systemd[1]: Reloading.
Jan 31 07:08:36 compute-0 systemd-rc-local-generator[96314]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:36 compute-0 systemd-sysv-generator[96320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:36.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:36 compute-0 systemd[1]: Reloading.
Jan 31 07:08:36 compute-0 systemd-rc-local-generator[96358]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:08:36 compute-0 systemd-sysv-generator[96361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:08:36 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.rwjfwq for f70fcd2a-dcb4-5f89-a4ba-79a09959083b...
Jan 31 07:08:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:36.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:37 compute-0 podman[96416]: 2026-01-31 07:08:37.056339414 +0000 UTC m=+0.041856503 container create 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 07:08:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/759648a91cc9b8b1430b048d0f09c22cd75b2ceaa0b020a031640a866cc4ac27/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:37 compute-0 podman[96416]: 2026-01-31 07:08:37.108435001 +0000 UTC m=+0.093952070 container init 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Jan 31 07:08:37 compute-0 podman[96416]: 2026-01-31 07:08:37.116248466 +0000 UTC m=+0.101765525 container start 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, architecture=x86_64)
Jan 31 07:08:37 compute-0 bash[96416]: 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51
Jan 31 07:08:37 compute-0 podman[96416]: 2026-01-31 07:08:37.038323483 +0000 UTC m=+0.023840552 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 31 07:08:37 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.rwjfwq for f70fcd2a-dcb4-5f89-a4ba-79a09959083b.
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: Running on Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 (built for Linux 5.14.0)
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: Starting VRRP child process, pid=4
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: Startup complete
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: (VI_0) Entering BACKUP STATE (init)
Jan 31 07:08:37 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:37 2026: VRRP_Script(check_backend) succeeded
Jan 31 07:08:37 compute-0 sudo[96107]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:08:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:08:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 07:08:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-0 ceph-mgr[74791]: [progress INFO root] complete: finished ev ad35783d-1bb4-4dd4-b9eb-ab3e54dd4a05 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 31 07:08:37 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event ad35783d-1bb4-4dd4-b9eb-ab3e54dd4a05 (Updating ingress.rgw.default deployment (+4 -> 4)) in 21 seconds
Jan 31 07:08:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 07:08:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 31 07:08:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 07:08:37 compute-0 ceph-mon[74496]: 3.7 scrub starts
Jan 31 07:08:37 compute-0 ceph-mon[74496]: 3.7 scrub ok
Jan 31 07:08:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 07:08:37 compute-0 ceph-mon[74496]: osdmap e56: 3 total, 3 up, 3 in
Jan 31 07:08:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 1 peering, 93 unknown, 211 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.16( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.13( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.c( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.b( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.a( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.9( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.d( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.8( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.2( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.3( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.18( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1f( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.11( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.15( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=46/47 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.c( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.b( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.d( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.9( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.0( empty local-lis/les=56/57 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.2( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.11( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.15( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.1f( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 57 pg[11.18( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=46/46 les/c/f=47/47/0 sis=56) [0] r=0 lpr=56 pi=[46,56)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:37 compute-0 sudo[96441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-0 sudo[96441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-0 sudo[96440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-0 sudo[96441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-0 sudo[96440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-0 sudo[96440]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-0 sudo[96490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-0 sudo[96490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-0 sudo[96493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:08:37 compute-0 sudo[96490]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-0 sudo[96493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-0 sudo[96493]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-0 sudo[96540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-0 sudo[96540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-0 sudo[96540]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-0 sudo[96565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:37 compute-0 sudo[96565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-0 sudo[96565]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-0 sudo[96590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:37 compute-0 sudo[96590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:37 compute-0 sudo[96590]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:37 compute-0 sudo[96615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:08:37 compute-0 sudo[96615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:38 compute-0 podman[96712]: 2026-01-31 07:08:38.360309773 +0000 UTC m=+0.075485832 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:08:38 compute-0 podman[96712]: 2026-01-31 07:08:38.476372563 +0000 UTC m=+0.191548642 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:38.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:38 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 31 07:08:38 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 31 07:08:38 compute-0 ceph-mon[74496]: 3.f scrub starts
Jan 31 07:08:38 compute-0 ceph-mon[74496]: 3.f scrub ok
Jan 31 07:08:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:38 compute-0 ceph-mon[74496]: pgmap v149: 305 pgs: 1 peering, 93 unknown, 211 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:38 compute-0 ceph-mon[74496]: osdmap e57: 3 total, 3 up, 3 in
Jan 31 07:08:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000017s ======
Jan 31 07:08:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 31 07:08:39 compute-0 podman[96871]: 2026-01-31 07:08:39.000431548 +0000 UTC m=+0.056983734 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:08:39 compute-0 podman[96894]: 2026-01-31 07:08:39.063376373 +0000 UTC m=+0.051813204 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:08:39 compute-0 podman[96871]: 2026-01-31 07:08:39.068558311 +0000 UTC m=+0.125110527 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:08:39 compute-0 podman[96939]: 2026-01-31 07:08:39.241188188 +0000 UTC m=+0.045214461 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 07:08:39 compute-0 podman[96939]: 2026-01-31 07:08:39.259433253 +0000 UTC m=+0.063459496 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20)
Jan 31 07:08:39 compute-0 sudo[96615]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:08:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 31 07:08:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 31 07:08:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:39 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 31 07:08:39 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 31 07:08:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:40.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:40 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 20 completed events
Jan 31 07:08:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:08:40 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 07:08:40 2026: (VI_0) Entering MASTER STATE
Jan 31 07:08:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000017s ======
Jan 31 07:08:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:40.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 31 07:08:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 31 07:08:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 31 07:08:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:08:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:41 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 31 07:08:41 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 31 07:08:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:08:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:42.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 31 07:08:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:42.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 07:08:43 compute-0 ceph-mon[74496]: 3.1 scrub starts
Jan 31 07:08:43 compute-0 ceph-mon[74496]: 5.9 scrub starts
Jan 31 07:08:43 compute-0 ceph-mon[74496]: 3.1 scrub ok
Jan 31 07:08:43 compute-0 ceph-mon[74496]: 5.9 scrub ok
Jan 31 07:08:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075749397s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.933242798s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.14( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.949059486s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806594849s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.17( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075696945s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.933242798s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.14( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.949004173s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806594849s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075373650s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.933166504s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.16( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075348854s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.933166504s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.15( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.948728561s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806579590s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.15( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.948695183s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806579590s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.10( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.948561668s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806579590s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.10( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.948536873s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806579590s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075138092s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.933212280s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.3( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.948369980s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806533813s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.856096268s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.714233398s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.13( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075003624s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.933212280s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.3( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.948325157s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806533813s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.856000900s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.714248657s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.856020927s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.714233398s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.855980873s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.714248657s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.f( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.948106766s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806518555s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.f( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.948079109s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806518555s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.855614662s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.714233398s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.078432083s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937057495s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.855594635s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.714233398s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.8( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947859764s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806503296s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.9( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947722435s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806427002s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.8( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947802544s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806503296s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.a( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.078395844s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937057495s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.9( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947700500s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806427002s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.078474998s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937271118s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.e( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.078434944s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937271118s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.078376770s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937316895s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.a( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947346687s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806335449s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.d( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947298050s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806320190s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.f( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.078299522s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937316895s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.855176926s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.714218140s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.d( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947267532s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806320190s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.c( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947195053s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806320190s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.078175545s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937362671s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.c( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947151184s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806320190s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.855137825s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.714218140s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.8( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.078148842s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937362671s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.a( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.947284698s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806335449s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.b( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.946967125s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806304932s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.854627609s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.713989258s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.854660988s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.714035034s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.854588509s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.713989258s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.854620934s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.714035034s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.b( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.946902275s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806304932s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.077900887s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937438965s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.3( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.077875137s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937438965s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.077348709s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937454224s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.7( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.077315331s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937454224s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.077327728s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937484741s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.4( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.945925713s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806091309s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.853682518s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.713867188s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.4( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.077292442s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937484741s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.4( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.945898056s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806091309s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.077057838s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937500000s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.19( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.945735931s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806243896s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076998711s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937530518s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.853632927s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.713867188s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.19( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.945570946s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806243896s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.19( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076843262s) [2] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937500000s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.1b( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.945486069s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806198120s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.1b( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.945446014s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806198120s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076912880s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937744141s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1a( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076710701s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937530518s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076493263s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937545776s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1e( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076432228s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937545776s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.1c( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944915771s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.806121826s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.1c( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944885254s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.806121826s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076347351s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937805176s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1d( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076255798s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937744141s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.5( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076277733s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937805176s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.6( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944409370s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.805938721s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.12( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944335938s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.805923462s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.5( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944263458s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.805892944s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.5( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944238663s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.805892944s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.12( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944270134s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.805923462s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.849373817s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.710922241s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.2( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944146156s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.805862427s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.6( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944355011s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.805938721s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076044083s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937866211s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.2( v 41'8 (0'0,41'8] local-lis/les=52/53 n=1 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.944116592s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.805862427s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076023102s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937866211s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076299667s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.938171387s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=11.849054337s) [2] r=-1 lpr=58 pi=[50,58)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.710922241s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.11( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.943929672s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.805877686s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.12( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076241493s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.938171387s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.11( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.943895340s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.805877686s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.16( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.943829536s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.805877686s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076041222s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.938140869s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.16( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.943794250s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.805877686s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.14( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.076022148s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.938140869s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.17( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.936564445s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.798751831s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075737000s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.937927246s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.18( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.943590164s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.805831909s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.17( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.936530113s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.798751831s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.18( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.943566322s) [1] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.805831909s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1b( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075686455s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.937927246s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075713158s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 active pruub 143.938049316s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.1f( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.943533897s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 active pruub 147.805908203s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[8.1f( v 41'8 (0'0,41'8] local-lis/les=52/53 n=0 ec=52/40 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=13.943501472s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=41'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 147.805908203s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[11.1c( empty local-lis/les=56/57 n=0 ec=56/46 lis/c=56/56 les/c/f=57/57/0 sis=58 pruub=10.075644493s) [1] r=-1 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 143.938049316s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.18( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.1b( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.5( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.8( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.2( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.14( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.1b( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.18( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.e( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.f( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.2( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.3( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.8( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.b( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.9( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.6( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.10( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.4( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.1e( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 58 pg[7.13( empty local-lis/les=0/0 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 07:08:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:43 compute-0 ceph-mgr[74791]: [progress INFO root] Completed event ed92812c-633d-4d92-be87-ee4260d4fef3 (Global Recovery Event) in 13 seconds
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:43 compute-0 sudo[96977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:43 compute-0 sudo[96977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:43 compute-0 sudo[96977]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:43 compute-0 sudo[97002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:43 compute-0 sudo[97002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:43 compute-0 sudo[97002]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:43 compute-0 sudo[97027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:43 compute-0 sudo[97027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:43 compute-0 sudo[97027]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:43 compute-0 sudo[97052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:08:43 compute-0 sudo[97052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 3.9 scrub starts
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 3.9 scrub ok
Jan 31 07:08:44 compute-0 ceph-mon[74496]: pgmap v150: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 5.6 scrub starts
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 5.6 scrub ok
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 3.1a scrub starts
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 3.1a scrub ok
Jan 31 07:08:44 compute-0 ceph-mon[74496]: pgmap v151: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 3.b scrub starts
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 3.b scrub ok
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 5.1a scrub starts
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 3.1b scrub starts
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 3.1b scrub ok
Jan 31 07:08:44 compute-0 ceph-mon[74496]: 5.1a scrub ok
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-0 ceph-mon[74496]: osdmap e58: 3 total, 3 up, 3 in
Jan 31 07:08:44 compute-0 ceph-mon[74496]: pgmap v153: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-0 sudo[97052]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 07:08:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=10.712057114s) [1] r=-1 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.714263916s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=10.712000847s) [1] r=-1 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.714263916s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=10.711598396s) [1] r=-1 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.714218140s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=50/51 n=2 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=10.711552620s) [1] r=-1 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.714218140s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=10.711185455s) [1] r=-1 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.714233398s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=10.710688591s) [1] r=-1 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 145.713806152s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=10.711125374s) [1] r=-1 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.714233398s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=10.710652351s) [1] r=-1 lpr=59 pi=[50,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 145.713806152s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.10( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.18( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.13( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.15( v 57'51 lc 45'19 (0'0,57'51] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=57'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.13( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.9( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.18( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.19( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.14( v 57'51 lc 45'43 (0'0,57'51] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=57'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.1b( v 45'48 (0'0,45'48] local-lis/les=58/59 n=0 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.b( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.e( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.4( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.1e( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.f( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.3( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.2( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.6( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.8( v 45'48 (0'0,45'48] local-lis/les=58/59 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.1b( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.5( v 45'48 (0'0,45'48] local-lis/les=58/59 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[7.8( empty local-lis/les=58/59 n=0 ec=52/26 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 59 pg[10.2( v 45'48 (0'0,45'48] local-lis/les=58/59 n=1 ec=54/44 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000017s ======
Jan 31 07:08:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:44.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 31 07:08:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:08:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 31 07:08:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:44.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 31 07:08:44 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 31 07:08:44 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 31 07:08:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 46 peering, 259 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 31 07:08:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:08:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 07:08:45 compute-0 ceph-mon[74496]: osdmap e59: 3 total, 3 up, 3 in
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 07:08:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 31 07:08:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 217c5737-3b53-442f-986e-001a92080705 does not exist
Jan 31 07:08:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 31dc0f46-86f5-4aed-9019-906fb62d90ce does not exist
Jan 31 07:08:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c27e41cc-34a7-4aba-902f-7907a9887393 does not exist
Jan 31 07:08:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:08:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:08:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:45 compute-0 sudo[97108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:45 compute-0 sudo[97108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:45 compute-0 sudo[97108]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:45 compute-0 sudo[97133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:45 compute-0 sudo[97133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:45 compute-0 sudo[97133]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:45 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 31 07:08:45 compute-0 sudo[97158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:45 compute-0 sudo[97158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:45 compute-0 sudo[97158]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:45 compute-0 sudo[97183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:08:45 compute-0 sudo[97183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:45 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 31 07:08:46 compute-0 podman[97246]: 2026-01-31 07:08:46.14142707 +0000 UTC m=+0.042436653 container create 0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:46 compute-0 systemd[1]: Started libpod-conmon-0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb.scope.
Jan 31 07:08:46 compute-0 podman[97246]: 2026-01-31 07:08:46.121317173 +0000 UTC m=+0.022326776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:46 compute-0 podman[97246]: 2026-01-31 07:08:46.237349694 +0000 UTC m=+0.138359327 container init 0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:08:46 compute-0 podman[97246]: 2026-01-31 07:08:46.243702053 +0000 UTC m=+0.144711656 container start 0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:46 compute-0 eloquent_ride[97263]: 167 167
Jan 31 07:08:46 compute-0 systemd[1]: libpod-0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb.scope: Deactivated successfully.
Jan 31 07:08:46 compute-0 conmon[97263]: conmon 0f44fdbfbac4afa24037 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb.scope/container/memory.events
Jan 31 07:08:46 compute-0 podman[97246]: 2026-01-31 07:08:46.253011814 +0000 UTC m=+0.154021457 container attach 0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:08:46 compute-0 podman[97246]: 2026-01-31 07:08:46.254307516 +0000 UTC m=+0.155317199 container died 0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd04155c377c438a392a8139ff68e89620ec0f7520fc3822cc556f93b11b7f9a-merged.mount: Deactivated successfully.
Jan 31 07:08:46 compute-0 podman[97246]: 2026-01-31 07:08:46.312492668 +0000 UTC m=+0.213502261 container remove 0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:08:46 compute-0 systemd[1]: libpod-conmon-0f44fdbfbac4afa24037c38ec4a6906b6a438810022d1fe7fc1879ad2dbca2eb.scope: Deactivated successfully.
Jan 31 07:08:46 compute-0 podman[97288]: 2026-01-31 07:08:46.444703718 +0000 UTC m=+0.046315289 container create 371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:46 compute-0 systemd[1]: Started libpod-conmon-371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9.scope.
Jan 31 07:08:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd8a9141a959579caff3fe84938690884ed7e0950d4cd6fa9b5bb8cac08e1333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:46 compute-0 ceph-mon[74496]: 5.a scrub starts
Jan 31 07:08:46 compute-0 ceph-mon[74496]: 5.a scrub ok
Jan 31 07:08:46 compute-0 ceph-mon[74496]: pgmap v155: 305 pgs: 46 peering, 259 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:08:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:46 compute-0 ceph-mon[74496]: osdmap e60: 3 total, 3 up, 3 in
Jan 31 07:08:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd8a9141a959579caff3fe84938690884ed7e0950d4cd6fa9b5bb8cac08e1333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd8a9141a959579caff3fe84938690884ed7e0950d4cd6fa9b5bb8cac08e1333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd8a9141a959579caff3fe84938690884ed7e0950d4cd6fa9b5bb8cac08e1333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd8a9141a959579caff3fe84938690884ed7e0950d4cd6fa9b5bb8cac08e1333/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:46 compute-0 ceph-mon[74496]: 5.17 scrub starts
Jan 31 07:08:46 compute-0 ceph-mon[74496]: 5.17 scrub ok
Jan 31 07:08:46 compute-0 podman[97288]: 2026-01-31 07:08:46.423009574 +0000 UTC m=+0.024621125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:46 compute-0 podman[97288]: 2026-01-31 07:08:46.540812704 +0000 UTC m=+0.142424265 container init 371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:08:46 compute-0 podman[97288]: 2026-01-31 07:08:46.548994835 +0000 UTC m=+0.150606396 container start 371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:46 compute-0 podman[97288]: 2026-01-31 07:08:46.557635604 +0000 UTC m=+0.159247135 container attach 371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:46 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 31 07:08:46 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 31 07:08:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:46.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:47 compute-0 ecstatic_goldberg[97304]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:08:47 compute-0 ecstatic_goldberg[97304]: --> relative data size: 1.0
Jan 31 07:08:47 compute-0 ecstatic_goldberg[97304]: --> All data devices are unavailable
Jan 31 07:08:47 compute-0 systemd[1]: libpod-371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9.scope: Deactivated successfully.
Jan 31 07:08:47 compute-0 podman[97288]: 2026-01-31 07:08:47.306443843 +0000 UTC m=+0.908055424 container died 371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldberg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 50 peering, 255 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 69 B/s, 0 objects/s recovering
Jan 31 07:08:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd8a9141a959579caff3fe84938690884ed7e0950d4cd6fa9b5bb8cac08e1333-merged.mount: Deactivated successfully.
Jan 31 07:08:47 compute-0 podman[97288]: 2026-01-31 07:08:47.399305453 +0000 UTC m=+1.000917014 container remove 371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:08:47 compute-0 systemd[1]: libpod-conmon-371177533cccfd767555c6ee360563c874d8e972950e6bfe88c08f8b50f7b8a9.scope: Deactivated successfully.
Jan 31 07:08:47 compute-0 sudo[97183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:47 compute-0 sudo[97334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:47 compute-0 sudo[97334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:47 compute-0 sudo[97334]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:47 compute-0 ceph-mon[74496]: 5.c scrub starts
Jan 31 07:08:47 compute-0 ceph-mon[74496]: 5.c scrub ok
Jan 31 07:08:47 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 31 07:08:47 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 31 07:08:47 compute-0 sudo[97359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:47 compute-0 sudo[97359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:47 compute-0 sudo[97359]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:47 compute-0 sudo[97384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:47 compute-0 sudo[97384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:47 compute-0 sudo[97384]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:47 compute-0 sudo[97409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:08:47 compute-0 sudo[97409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:48 compute-0 podman[97474]: 2026-01-31 07:08:48.001303511 +0000 UTC m=+0.051178332 container create 2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kirch, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:08:48 compute-0 systemd[1]: Started libpod-conmon-2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1.scope.
Jan 31 07:08:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:48 compute-0 podman[97474]: 2026-01-31 07:08:47.977326188 +0000 UTC m=+0.027201059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:48 compute-0 podman[97474]: 2026-01-31 07:08:48.080748951 +0000 UTC m=+0.130623872 container init 2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kirch, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:08:48 compute-0 podman[97474]: 2026-01-31 07:08:48.08651218 +0000 UTC m=+0.136387051 container start 2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:48 compute-0 zen_kirch[97490]: 167 167
Jan 31 07:08:48 compute-0 systemd[1]: libpod-2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1.scope: Deactivated successfully.
Jan 31 07:08:48 compute-0 podman[97474]: 2026-01-31 07:08:48.092967982 +0000 UTC m=+0.142842853 container attach 2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:48 compute-0 podman[97474]: 2026-01-31 07:08:48.094704772 +0000 UTC m=+0.144579633 container died 2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f35c9b95475f458ac3958783733bf19d40bc66c929336db91aa99421b8335d1-merged.mount: Deactivated successfully.
Jan 31 07:08:48 compute-0 podman[97474]: 2026-01-31 07:08:48.150682627 +0000 UTC m=+0.200557488 container remove 2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:08:48 compute-0 systemd[1]: libpod-conmon-2d1cdf481ef8dd72d4d069e6f8e35ed9cd578045b2459eb76ef641e040b797f1.scope: Deactivated successfully.
Jan 31 07:08:48 compute-0 podman[97515]: 2026-01-31 07:08:48.263648404 +0000 UTC m=+0.024261180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:48 compute-0 podman[97515]: 2026-01-31 07:08:48.445743883 +0000 UTC m=+0.206356589 container create ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:48 compute-0 systemd[1]: Started libpod-conmon-ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b.scope.
Jan 31 07:08:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 31 07:08:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 31 07:08:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7eafc0332ffbfc3e7993f0457f67aeea3d0534bd6197b211925177c400c9a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7eafc0332ffbfc3e7993f0457f67aeea3d0534bd6197b211925177c400c9a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7eafc0332ffbfc3e7993f0457f67aeea3d0534bd6197b211925177c400c9a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7eafc0332ffbfc3e7993f0457f67aeea3d0534bd6197b211925177c400c9a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:48 compute-0 ceph-mon[74496]: pgmap v157: 305 pgs: 50 peering, 255 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 69 B/s, 0 objects/s recovering
Jan 31 07:08:48 compute-0 ceph-mon[74496]: 5.14 scrub starts
Jan 31 07:08:48 compute-0 ceph-mon[74496]: 5.14 scrub ok
Jan 31 07:08:48 compute-0 podman[97515]: 2026-01-31 07:08:48.69206124 +0000 UTC m=+0.452673986 container init ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:48 compute-0 podman[97515]: 2026-01-31 07:08:48.701066034 +0000 UTC m=+0.461678730 container start ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:48 compute-0 podman[97515]: 2026-01-31 07:08:48.723102615 +0000 UTC m=+0.483715321 container attach ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:48 compute-0 ceph-mgr[74791]: [progress INFO root] Writing back 21 completed events
Jan 31 07:08:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 07:08:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:48.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 284 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:49 compute-0 festive_clarke[97531]: {
Jan 31 07:08:49 compute-0 festive_clarke[97531]:     "0": [
Jan 31 07:08:49 compute-0 festive_clarke[97531]:         {
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "devices": [
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "/dev/loop3"
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             ],
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "lv_name": "ceph_lv0",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "lv_size": "7511998464",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "name": "ceph_lv0",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "tags": {
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.cluster_name": "ceph",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.crush_device_class": "",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.encrypted": "0",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.osd_id": "0",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.type": "block",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:                 "ceph.vdo": "0"
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             },
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "type": "block",
Jan 31 07:08:49 compute-0 festive_clarke[97531]:             "vg_name": "ceph_vg0"
Jan 31 07:08:49 compute-0 festive_clarke[97531]:         }
Jan 31 07:08:49 compute-0 festive_clarke[97531]:     ]
Jan 31 07:08:49 compute-0 festive_clarke[97531]: }
Jan 31 07:08:49 compute-0 systemd[1]: libpod-ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b.scope: Deactivated successfully.
Jan 31 07:08:49 compute-0 podman[97515]: 2026-01-31 07:08:49.430073402 +0000 UTC m=+1.190686108 container died ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-da7eafc0332ffbfc3e7993f0457f67aeea3d0534bd6197b211925177c400c9a4-merged.mount: Deactivated successfully.
Jan 31 07:08:49 compute-0 podman[97515]: 2026-01-31 07:08:49.525159581 +0000 UTC m=+1.285772287 container remove ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:49 compute-0 systemd[1]: libpod-conmon-ecece695d139e3abc917426e9ef97db652b94627a71bdcdf154fa2ea2a4f978b.scope: Deactivated successfully.
Jan 31 07:08:49 compute-0 sudo[97409]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:49 compute-0 sudo[97553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:49 compute-0 sudo[97553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:49 compute-0 sudo[97553]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:49 compute-0 sudo[97578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:49 compute-0 sudo[97578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:49 compute-0 sudo[97578]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:49 compute-0 sudo[97603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:49 compute-0 sudo[97603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:49 compute-0 sudo[97603]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:49 compute-0 sudo[97628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:08:49 compute-0 sudo[97628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:49 compute-0 ceph-mon[74496]: pgmap v158: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 284 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:08:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:08:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:08:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:08:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:08:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:08:50 compute-0 podman[97694]: 2026-01-31 07:08:50.083268542 +0000 UTC m=+0.037826813 container create 90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:08:50 compute-0 systemd[1]: Started libpod-conmon-90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6.scope.
Jan 31 07:08:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:50 compute-0 podman[97694]: 2026-01-31 07:08:50.155263883 +0000 UTC m=+0.109822194 container init 90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:50 compute-0 podman[97694]: 2026-01-31 07:08:50.161584392 +0000 UTC m=+0.116142673 container start 90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:50 compute-0 podman[97694]: 2026-01-31 07:08:50.067112934 +0000 UTC m=+0.021671235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:50 compute-0 podman[97694]: 2026-01-31 07:08:50.165160074 +0000 UTC m=+0.119718385 container attach 90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:50 compute-0 vigilant_sanderson[97711]: 167 167
Jan 31 07:08:50 compute-0 systemd[1]: libpod-90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6.scope: Deactivated successfully.
Jan 31 07:08:50 compute-0 podman[97694]: 2026-01-31 07:08:50.167152738 +0000 UTC m=+0.121711029 container died 90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:08:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c76ebb110a44335ea91edb9f9345a53f6afb5c6cf8d3fd19844d69f20cd1e1a1-merged.mount: Deactivated successfully.
Jan 31 07:08:50 compute-0 podman[97694]: 2026-01-31 07:08:50.201647973 +0000 UTC m=+0.156206254 container remove 90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:50 compute-0 systemd[1]: libpod-conmon-90607f6c1182c9ed8e35453cfe5829fe585c2f17edf261119e4b515f6d977cc6.scope: Deactivated successfully.
Jan 31 07:08:50 compute-0 podman[97737]: 2026-01-31 07:08:50.381288809 +0000 UTC m=+0.055333574 container create ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_allen, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:08:50 compute-0 systemd[1]: Started libpod-conmon-ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d.scope.
Jan 31 07:08:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b2f74628f67711224aaaa027b7b394c19e169da047042898c3bcc042709cb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b2f74628f67711224aaaa027b7b394c19e169da047042898c3bcc042709cb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b2f74628f67711224aaaa027b7b394c19e169da047042898c3bcc042709cb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b2f74628f67711224aaaa027b7b394c19e169da047042898c3bcc042709cb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:50 compute-0 podman[97737]: 2026-01-31 07:08:50.361135022 +0000 UTC m=+0.035179827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:50 compute-0 podman[97737]: 2026-01-31 07:08:50.47467721 +0000 UTC m=+0.148721995 container init ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:08:50 compute-0 podman[97737]: 2026-01-31 07:08:50.48108689 +0000 UTC m=+0.155131675 container start ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:50 compute-0 podman[97737]: 2026-01-31 07:08:50.484656301 +0000 UTC m=+0.158701076 container attach ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_allen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:08:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:50.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:50.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:51 compute-0 ceph-mon[74496]: 3.1d scrub starts
Jan 31 07:08:51 compute-0 ceph-mon[74496]: 3.1d scrub ok
Jan 31 07:08:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 216 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:51 compute-0 gifted_allen[97754]: {
Jan 31 07:08:51 compute-0 gifted_allen[97754]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:08:51 compute-0 gifted_allen[97754]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:08:51 compute-0 gifted_allen[97754]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:08:51 compute-0 gifted_allen[97754]:         "osd_id": 0,
Jan 31 07:08:51 compute-0 gifted_allen[97754]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:08:51 compute-0 gifted_allen[97754]:         "type": "bluestore"
Jan 31 07:08:51 compute-0 gifted_allen[97754]:     }
Jan 31 07:08:51 compute-0 gifted_allen[97754]: }
Jan 31 07:08:51 compute-0 systemd[1]: libpod-ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d.scope: Deactivated successfully.
Jan 31 07:08:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 31 07:08:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 07:08:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 31 07:08:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 07:08:51 compute-0 podman[97777]: 2026-01-31 07:08:51.458389137 +0000 UTC m=+0.028935809 container died ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_allen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-27b2f74628f67711224aaaa027b7b394c19e169da047042898c3bcc042709cb7-merged.mount: Deactivated successfully.
Jan 31 07:08:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 31 07:08:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000017s ======
Jan 31 07:08:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:52.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 31 07:08:52 compute-0 podman[97777]: 2026-01-31 07:08:52.805742504 +0000 UTC m=+1.376289156 container remove ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:52 compute-0 systemd[1]: libpod-conmon-ee4e2c50c49ecd72fc3f5c44c8c1a790be10a5d5e06584eba6d950717867ab4d.scope: Deactivated successfully.
Jan 31 07:08:52 compute-0 sudo[97628]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:08:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000017s ======
Jan 31 07:08:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:52.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 31 07:08:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 07:08:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 07:08:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 07:08:52 compute-0 ceph-mon[74496]: pgmap v159: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 216 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 07:08:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 07:08:52 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 31 07:08:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:08:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:52 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c8309e87-1232-4fe0-a515-e9989f87122f does not exist
Jan 31 07:08:53 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 16b3b4c2-8467-4af9-984b-cebd9c77bc87 does not exist
Jan 31 07:08:53 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3166f6cb-bf4c-44b6-ab16-1656a224631a does not exist
Jan 31 07:08:53 compute-0 sudo[97794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:53 compute-0 sudo[97794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-0 sudo[97794]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-0 sudo[97819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:08:53 compute-0 sudo[97819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-0 sudo[97819]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.139379501s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.870178223s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.139225960s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.870010376s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.139307976s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.870178223s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.139151573s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.870010376s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.138419151s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869613647s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.138394356s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869613647s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.138144493s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869461060s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.138123512s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869461060s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.138607025s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.870101929s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.137809753s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869308472s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.137764931s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869308472s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.138573647s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.870101929s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.137413979s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869049072s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.137386322s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869049072s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.137386322s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869079590s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 61 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=14.137306213s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869079590s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 216 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 31 07:08:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 07:08:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 31 07:08:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 07:08:53 compute-0 sudo[97845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:53 compute-0 sudo[97845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-0 sudo[97845]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-0 sudo[97870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:53 compute-0 sudo[97870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-0 sudo[97870]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-0 sudo[97895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:53 compute-0 sudo[97895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-0 sudo[97895]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:53 compute-0 sudo[97920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:08:53 compute-0 sudo[97920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:53 compute-0 podman[98019]: 2026-01-31 07:08:53.906365667 +0000 UTC m=+0.067691928 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:08:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 31 07:08:53 compute-0 ceph-mon[74496]: 3.c scrub starts
Jan 31 07:08:53 compute-0 ceph-mon[74496]: 3.c scrub ok
Jan 31 07:08:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 07:08:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 07:08:53 compute-0 ceph-mon[74496]: osdmap e61: 3 total, 3 up, 3 in
Jan 31 07:08:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:53 compute-0 ceph-mon[74496]: pgmap v161: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 216 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 07:08:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 07:08:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 07:08:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 07:08:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 07:08:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 07:08:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:53 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 62 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:54 compute-0 podman[98019]: 2026-01-31 07:08:54.048643859 +0000 UTC m=+0.209970120 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:08:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:08:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:54.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:08:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-0 podman[98175]: 2026-01-31 07:08:54.682954545 +0000 UTC m=+0.083985989 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:08:54 compute-0 podman[98175]: 2026-01-31 07:08:54.722437075 +0000 UTC m=+0.123468429 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:08:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:08:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:08:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:08:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:54.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:54 compute-0 podman[98243]: 2026-01-31 07:08:54.956804105 +0000 UTC m=+0.050455361 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=2.2.4, architecture=x86_64, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2)
Jan 31 07:08:54 compute-0 podman[98243]: 2026-01-31 07:08:54.972436284 +0000 UTC m=+0.066087570 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, name=keepalived, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 07:08:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 31 07:08:54 compute-0 ceph-mon[74496]: 3.8 scrub starts
Jan 31 07:08:54 compute-0 ceph-mon[74496]: 3.8 scrub ok
Jan 31 07:08:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 07:08:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 07:08:54 compute-0 ceph-mon[74496]: osdmap e62: 3 total, 3 up, 3 in
Jan 31 07:08:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 07:08:55 compute-0 sudo[97920]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:08:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 63 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 63 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 63 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 63 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 63 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 63 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 63 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 63 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[54,62)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a6904041-c42a-487e-a333-c474a306afb9 does not exist
Jan 31 07:08:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0fd3619d-1d80-45af-9848-e9772bdaaef4 does not exist
Jan 31 07:08:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 85637741-c669-458f-90e0-10f6d9dcab4d does not exist
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:55 compute-0 sudo[98272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:55 compute-0 sudo[98272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:55 compute-0 sudo[98272]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:55 compute-0 sudo[98297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:55 compute-0 sudo[98297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:55 compute-0 sudo[98297]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:55 compute-0 sudo[98323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:55 compute-0 sudo[98323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:55 compute-0 sudo[98323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:55 compute-0 sudo[98348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:08:55 compute-0 sudo[98348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 07:08:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 31 07:08:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 07:08:55 compute-0 podman[98413]: 2026-01-31 07:08:55.505659187 +0000 UTC m=+0.034041508 container create d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:55 compute-0 systemd[1]: Started libpod-conmon-d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f.scope.
Jan 31 07:08:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:55 compute-0 podman[98413]: 2026-01-31 07:08:55.557795355 +0000 UTC m=+0.086177696 container init d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:55 compute-0 podman[98413]: 2026-01-31 07:08:55.565389457 +0000 UTC m=+0.093771808 container start d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:55 compute-0 podman[98413]: 2026-01-31 07:08:55.569473317 +0000 UTC m=+0.097855638 container attach d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:08:55 compute-0 gallant_wu[98429]: 167 167
Jan 31 07:08:55 compute-0 systemd[1]: libpod-d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f.scope: Deactivated successfully.
Jan 31 07:08:55 compute-0 podman[98413]: 2026-01-31 07:08:55.571256358 +0000 UTC m=+0.099638749 container died d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:08:55 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 31 07:08:55 compute-0 podman[98413]: 2026-01-31 07:08:55.490695298 +0000 UTC m=+0.019077719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:55 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 31 07:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f989e41e9323f4623408ca3ad4e46fb4323705d1548c1f4e4c0b254e799070f7-merged.mount: Deactivated successfully.
Jan 31 07:08:55 compute-0 podman[98413]: 2026-01-31 07:08:55.612997177 +0000 UTC m=+0.141379538 container remove d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:08:55 compute-0 systemd[1]: libpod-conmon-d72deee38b1bec0260863b32f5b9374bb0483c1b2d6cf968c1599b5271abc06f.scope: Deactivated successfully.
Jan 31 07:08:55 compute-0 podman[98455]: 2026-01-31 07:08:55.789832275 +0000 UTC m=+0.059499916 container create 8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:08:55 compute-0 systemd[1]: Started libpod-conmon-8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980.scope.
Jan 31 07:08:55 compute-0 podman[98455]: 2026-01-31 07:08:55.763460381 +0000 UTC m=+0.033128102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ec9db3fac260b0bcccf10d25f66537e11fbe86d11b555326762973705360bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ec9db3fac260b0bcccf10d25f66537e11fbe86d11b555326762973705360bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ec9db3fac260b0bcccf10d25f66537e11fbe86d11b555326762973705360bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ec9db3fac260b0bcccf10d25f66537e11fbe86d11b555326762973705360bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ec9db3fac260b0bcccf10d25f66537e11fbe86d11b555326762973705360bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:55 compute-0 podman[98455]: 2026-01-31 07:08:55.885336312 +0000 UTC m=+0.155003993 container init 8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:55 compute-0 podman[98455]: 2026-01-31 07:08:55.898402198 +0000 UTC m=+0.168069839 container start 8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:55 compute-0 podman[98455]: 2026-01-31 07:08:55.902117891 +0000 UTC m=+0.171785622 container attach 8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_montalcini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:08:56 compute-0 ceph-mon[74496]: 4.8 deep-scrub starts
Jan 31 07:08:56 compute-0 ceph-mon[74496]: 4.8 deep-scrub ok
Jan 31 07:08:56 compute-0 ceph-mon[74496]: osdmap e63: 3 total, 3 up, 3 in
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:08:56 compute-0 ceph-mon[74496]: pgmap v164: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 07:08:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 07:08:56 compute-0 ceph-mon[74496]: 3.d scrub starts
Jan 31 07:08:56 compute-0 ceph-mon[74496]: 3.d scrub ok
Jan 31 07:08:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 31 07:08:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 07:08:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 07:08:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 07:08:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=11.226256371s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.870376587s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=11.226172447s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.870376587s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.980553627s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 161.625244141s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=11.224949837s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869903564s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.b( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.980441093s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.625244141s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=11.224858284s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869903564s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=11.224452019s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869812012s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.979789734s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 161.625091553s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=11.224395752s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869812012s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.979507446s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 161.625061035s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=11.223616600s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869323730s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.979417801s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.625061035s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=64 pruub=11.223569870s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869323730s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.979237556s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 161.625076294s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.f( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.979248047s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.625091553s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.13( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.979008675s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.625076294s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.978956223s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 161.625122070s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.978876114s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 161.625045776s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.3( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.978887558s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.625122070s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.971168518s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 161.617492676s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.971058846s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.617492676s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.978858948s) [2] async=[2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 161.625305176s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.7( v 48'908 (0'0,48'908] local-lis/les=62/63 n=6 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.978811264s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.625045776s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 64 pg[9.17( v 48'908 (0'0,48'908] local-lis/les=62/63 n=5 ec=54/42 lis/c=62/54 les/c/f=63/55/0 sis=64 pruub=14.978719711s) [2] r=-1 lpr=64 pi=[54,64)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.625305176s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:56.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:56 compute-0 silly_montalcini[98471]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:08:56 compute-0 silly_montalcini[98471]: --> relative data size: 1.0
Jan 31 07:08:56 compute-0 silly_montalcini[98471]: --> All data devices are unavailable
Jan 31 07:08:56 compute-0 systemd[1]: libpod-8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980.scope: Deactivated successfully.
Jan 31 07:08:56 compute-0 podman[98455]: 2026-01-31 07:08:56.661896349 +0000 UTC m=+0.931564000 container died 8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9ec9db3fac260b0bcccf10d25f66537e11fbe86d11b555326762973705360bc-merged.mount: Deactivated successfully.
Jan 31 07:08:56 compute-0 podman[98455]: 2026-01-31 07:08:56.720026221 +0000 UTC m=+0.989693862 container remove 8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:56 compute-0 systemd[1]: libpod-conmon-8e7c53ef273e6da5557e80e004f0a14f62f954e0da112ccf7c7ebf78b431b980.scope: Deactivated successfully.
Jan 31 07:08:56 compute-0 sudo[98348]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:56 compute-0 sudo[98497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:56 compute-0 sudo[98497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:56 compute-0 sudo[98497]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:56 compute-0 sudo[98522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:56 compute-0 sudo[98522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:56 compute-0 sudo[98522]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:56.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:56 compute-0 sudo[98547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:56 compute-0 sudo[98547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:56 compute-0 sudo[98547]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:56 compute-0 sudo[98572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:08:56 compute-0 sudo[98572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 31 07:08:57 compute-0 ceph-mon[74496]: 3.19 scrub starts
Jan 31 07:08:57 compute-0 ceph-mon[74496]: 3.19 scrub ok
Jan 31 07:08:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 07:08:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 07:08:57 compute-0 ceph-mon[74496]: osdmap e64: 3 total, 3 up, 3 in
Jan 31 07:08:57 compute-0 ceph-mon[74496]: 4.14 scrub starts
Jan 31 07:08:57 compute-0 ceph-mon[74496]: 4.14 scrub ok
Jan 31 07:08:57 compute-0 ceph-mon[74496]: 3.3 scrub starts
Jan 31 07:08:57 compute-0 ceph-mon[74496]: 3.3 scrub ok
Jan 31 07:08:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 07:08:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 31 07:08:57 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 65 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:57 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 65 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:57 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 65 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:57 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 65 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:57 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 65 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:57 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 65 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:57 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 65 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:57 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 65 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:57 compute-0 podman[98638]: 2026-01-31 07:08:57.275277063 +0000 UTC m=+0.033461468 container create 7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:08:57 compute-0 systemd[1]: Started libpod-conmon-7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055.scope.
Jan 31 07:08:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:57 compute-0 podman[98638]: 2026-01-31 07:08:57.333968585 +0000 UTC m=+0.092153030 container init 7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cori, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:57 compute-0 podman[98638]: 2026-01-31 07:08:57.338626245 +0000 UTC m=+0.096810650 container start 7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cori, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:08:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Jan 31 07:08:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 31 07:08:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 07:08:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 31 07:08:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 07:08:57 compute-0 wizardly_cori[98654]: 167 167
Jan 31 07:08:57 compute-0 systemd[1]: libpod-7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055.scope: Deactivated successfully.
Jan 31 07:08:57 compute-0 podman[98638]: 2026-01-31 07:08:57.343705643 +0000 UTC m=+0.101890088 container attach 7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:08:57 compute-0 podman[98638]: 2026-01-31 07:08:57.344306132 +0000 UTC m=+0.102490547 container died 7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cori, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:57 compute-0 podman[98638]: 2026-01-31 07:08:57.260891665 +0000 UTC m=+0.019076090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-adda45c61ff7d6e457e84eba89a78d357ec550ccfe0dd6ab104c5778ea5c8250-merged.mount: Deactivated successfully.
Jan 31 07:08:57 compute-0 podman[98638]: 2026-01-31 07:08:57.381182739 +0000 UTC m=+0.139367144 container remove 7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:08:57 compute-0 systemd[1]: libpod-conmon-7821364a07a939f0fde3186ea51bb3708cb01244866894fc4304c3893845d055.scope: Deactivated successfully.
Jan 31 07:08:57 compute-0 sudo[98672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:57 compute-0 sudo[98672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:57 compute-0 sudo[98672]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:57 compute-0 podman[98696]: 2026-01-31 07:08:57.500442984 +0000 UTC m=+0.040621632 container create 0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swirles, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:08:57 compute-0 systemd[1]: Started libpod-conmon-0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401.scope.
Jan 31 07:08:57 compute-0 sudo[98716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:57 compute-0 sudo[98716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:57 compute-0 sudo[98716]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:57 compute-0 podman[98696]: 2026-01-31 07:08:57.483839948 +0000 UTC m=+0.024018635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59037bbf454446f93e1255faf2998bf591136468d2da924ff5eb1d7293db241/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59037bbf454446f93e1255faf2998bf591136468d2da924ff5eb1d7293db241/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59037bbf454446f93e1255faf2998bf591136468d2da924ff5eb1d7293db241/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f59037bbf454446f93e1255faf2998bf591136468d2da924ff5eb1d7293db241/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:57 compute-0 podman[98696]: 2026-01-31 07:08:57.60749556 +0000 UTC m=+0.147674297 container init 0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swirles, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:08:57 compute-0 podman[98696]: 2026-01-31 07:08:57.622807284 +0000 UTC m=+0.162985971 container start 0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:08:57 compute-0 podman[98696]: 2026-01-31 07:08:57.626458667 +0000 UTC m=+0.166637374 container attach 0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:08:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 31 07:08:58 compute-0 ceph-mon[74496]: osdmap e65: 3 total, 3 up, 3 in
Jan 31 07:08:58 compute-0 ceph-mon[74496]: pgmap v167: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Jan 31 07:08:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 07:08:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 07:08:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 07:08:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 07:08:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 07:08:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=9.104284286s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.870376587s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=9.104213715s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.870376587s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=9.102955818s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.869766235s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=9.102911949s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.869766235s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=9.101414680s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.868850708s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=9.089417458s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 157.856918335s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=9.101372719s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.868850708s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=66 pruub=9.089356422s) [1] r=-1 lpr=66 pi=[54,66)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 157.856918335s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=65/66 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=65/66 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=65/66 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=65/66 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[6.e( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=59/59 les/c/f=60/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 66 pg[6.6( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=59/59 les/c/f=60/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:58 compute-0 sad_swirles[98742]: {
Jan 31 07:08:58 compute-0 sad_swirles[98742]:     "0": [
Jan 31 07:08:58 compute-0 sad_swirles[98742]:         {
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "devices": [
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "/dev/loop3"
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             ],
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "lv_name": "ceph_lv0",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "lv_size": "7511998464",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "name": "ceph_lv0",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "tags": {
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.cluster_name": "ceph",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.crush_device_class": "",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.encrypted": "0",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.osd_id": "0",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.type": "block",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:                 "ceph.vdo": "0"
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             },
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "type": "block",
Jan 31 07:08:58 compute-0 sad_swirles[98742]:             "vg_name": "ceph_vg0"
Jan 31 07:08:58 compute-0 sad_swirles[98742]:         }
Jan 31 07:08:58 compute-0 sad_swirles[98742]:     ]
Jan 31 07:08:58 compute-0 sad_swirles[98742]: }
Jan 31 07:08:58 compute-0 systemd[1]: libpod-0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401.scope: Deactivated successfully.
Jan 31 07:08:58 compute-0 podman[98696]: 2026-01-31 07:08:58.378616363 +0000 UTC m=+0.918795050 container died 0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swirles, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f59037bbf454446f93e1255faf2998bf591136468d2da924ff5eb1d7293db241-merged.mount: Deactivated successfully.
Jan 31 07:08:58 compute-0 podman[98696]: 2026-01-31 07:08:58.470870063 +0000 UTC m=+1.011048720 container remove 0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:08:58 compute-0 systemd[1]: libpod-conmon-0e8e1de19573aebd4188c062df78fc8cf300d42933e45fc4e47bd7c0fdced401.scope: Deactivated successfully.
Jan 31 07:08:58 compute-0 sudo[98572]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 31 07:08:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:08:58.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 31 07:08:58 compute-0 sudo[98765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:58 compute-0 sudo[98765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:58 compute-0 sudo[98765]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:58 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 31 07:08:58 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 31 07:08:58 compute-0 sudo[98790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:08:58 compute-0 sudo[98790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:58 compute-0 sudo[98790]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:58 compute-0 sudo[98815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:08:58 compute-0 sudo[98815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:58 compute-0 sudo[98815]: pam_unix(sudo:session): session closed for user root
Jan 31 07:08:58 compute-0 sudo[98840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:08:58 compute-0 sudo[98840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:08:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:08:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:08:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:08:58.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:08:58 compute-0 podman[98906]: 2026-01-31 07:08:58.995059339 +0000 UTC m=+0.049908920 container create d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:08:59 compute-0 systemd[1]: Started libpod-conmon-d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c.scope.
Jan 31 07:08:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:59 compute-0 podman[98906]: 2026-01-31 07:08:58.969253195 +0000 UTC m=+0.024102796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:59 compute-0 podman[98906]: 2026-01-31 07:08:59.06647988 +0000 UTC m=+0.121329481 container init d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:59 compute-0 podman[98906]: 2026-01-31 07:08:59.07228823 +0000 UTC m=+0.127137781 container start d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:08:59 compute-0 cool_kapitsa[98922]: 167 167
Jan 31 07:08:59 compute-0 systemd[1]: libpod-d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c.scope: Deactivated successfully.
Jan 31 07:08:59 compute-0 podman[98906]: 2026-01-31 07:08:59.077220126 +0000 UTC m=+0.132069737 container attach d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:59 compute-0 podman[98906]: 2026-01-31 07:08:59.077670523 +0000 UTC m=+0.132520134 container died d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c00e0322dbd5c545aa036dc87134d8994589bc6fd1c35d8d7cdedb0ae38556bf-merged.mount: Deactivated successfully.
Jan 31 07:08:59 compute-0 podman[98906]: 2026-01-31 07:08:59.124310688 +0000 UTC m=+0.179160259 container remove d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:08:59 compute-0 systemd[1]: libpod-conmon-d072c065e6177f6a3ceb982c104ede7a59ece7191d01abf56bcbb1db57e7632c.scope: Deactivated successfully.
Jan 31 07:08:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 31 07:08:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 07:08:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 07:08:59 compute-0 ceph-mon[74496]: osdmap e66: 3 total, 3 up, 3 in
Jan 31 07:08:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 07:08:59 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=65/66 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.989504814s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 164.775100708s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=65/66 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.988770485s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 164.775054932s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.d( v 48'908 (0'0,48'908] local-lis/les=65/66 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.988634109s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.775054932s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=65/66 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.988469124s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 164.775268555s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.15( v 48'908 (0'0,48'908] local-lis/les=65/66 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.988214493s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.775100708s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.5( v 48'908 (0'0,48'908] local-lis/les=65/66 n=6 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.988159180s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.775268555s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=65/66 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.982922554s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 164.770187378s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.1d( v 48'908 (0'0,48'908] local-lis/les=65/66 n=5 ec=54/42 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.982811928s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.770187378s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[6.e( v 48'39 lc 44'19 (0'0,48'39] local-lis/les=66/67 n=1 ec=50/22 lis/c=59/59 les/c/f=60/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 67 pg[6.6( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=66/67 n=2 ec=50/22 lis/c=59/59 les/c/f=60/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:08:59 compute-0 podman[98948]: 2026-01-31 07:08:59.302476879 +0000 UTC m=+0.051968377 container create 134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:08:59 compute-0 systemd[1]: Started libpod-conmon-134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00.scope.
Jan 31 07:08:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 461 B/s, 2 keys/s, 11 objects/s recovering
Jan 31 07:08:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bee65ba81ec383aad461c1a7f8712ab6cc99df87b5938b5ae23d81ef852238f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bee65ba81ec383aad461c1a7f8712ab6cc99df87b5938b5ae23d81ef852238f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bee65ba81ec383aad461c1a7f8712ab6cc99df87b5938b5ae23d81ef852238f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bee65ba81ec383aad461c1a7f8712ab6cc99df87b5938b5ae23d81ef852238f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:08:59 compute-0 podman[98948]: 2026-01-31 07:08:59.283432051 +0000 UTC m=+0.032923569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:08:59 compute-0 podman[98948]: 2026-01-31 07:08:59.3814535 +0000 UTC m=+0.130944968 container init 134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:08:59 compute-0 podman[98948]: 2026-01-31 07:08:59.397394376 +0000 UTC m=+0.146885874 container start 134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:08:59 compute-0 podman[98948]: 2026-01-31 07:08:59.401416965 +0000 UTC m=+0.150908533 container attach 134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:08:59 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 07:08:59 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 07:08:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 31 07:09:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 07:09:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 31 07:09:00 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 68 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:00 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 68 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:00 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 68 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:00 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 68 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[54,67)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]: {
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]:         "osd_id": 0,
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]:         "type": "bluestore"
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]:     }
Jan 31 07:09:00 compute-0 priceless_dewdney[98964]: }
Jan 31 07:09:00 compute-0 systemd[1]: libpod-134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00.scope: Deactivated successfully.
Jan 31 07:09:00 compute-0 podman[98948]: 2026-01-31 07:09:00.380542444 +0000 UTC m=+1.130033902 container died 134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bee65ba81ec383aad461c1a7f8712ab6cc99df87b5938b5ae23d81ef852238f-merged.mount: Deactivated successfully.
Jan 31 07:09:00 compute-0 podman[98948]: 2026-01-31 07:09:00.428527471 +0000 UTC m=+1.178018929 container remove 134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:09:00 compute-0 systemd[1]: libpod-conmon-134aa4f693e65fcc4981d6e4af99c7745478cac336fbb59dec3e44384a6d3a00.scope: Deactivated successfully.
Jan 31 07:09:00 compute-0 sudo[98840]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:09:00 compute-0 ceph-mon[74496]: 3.12 scrub starts
Jan 31 07:09:00 compute-0 ceph-mon[74496]: 3.12 scrub ok
Jan 31 07:09:00 compute-0 ceph-mon[74496]: osdmap e67: 3 total, 3 up, 3 in
Jan 31 07:09:00 compute-0 ceph-mon[74496]: pgmap v170: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 461 B/s, 2 keys/s, 11 objects/s recovering
Jan 31 07:09:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:00.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:09:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:09:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:09:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5df1ff57-3c9a-45fd-a4b7-49fe247d94a3 does not exist
Jan 31 07:09:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 411bc5ca-8042-44b7-a1c5-e2f710c33754 does not exist
Jan 31 07:09:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9daa30bc-e286-4a4b-92ea-cfc0714adb8b does not exist
Jan 31 07:09:00 compute-0 sudo[98999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:00 compute-0 sudo[98999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:00 compute-0 sudo[98999]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:00 compute-0 sudo[99024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:09:00 compute-0 sudo[99024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:00 compute-0 sudo[99024]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:00.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 31 07:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 07:09:01 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 31 07:09:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 69 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=14.976828575s) [1] async=[1] r=-1 lpr=69 pi=[54,69)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 166.887130737s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 69 pg[9.e( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=14.976743698s) [1] r=-1 lpr=69 pi=[54,69)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.887130737s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 69 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=14.976493835s) [1] async=[1] r=-1 lpr=69 pi=[54,69)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 166.887100220s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 69 pg[9.6( v 48'908 (0'0,48'908] local-lis/les=67/68 n=6 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=14.976418495s) [1] r=-1 lpr=69 pi=[54,69)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.887100220s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 69 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=14.976243019s) [1] async=[1] r=-1 lpr=69 pi=[54,69)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 166.887176514s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 69 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=14.972501755s) [1] async=[1] r=-1 lpr=69 pi=[54,69)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 166.883438110s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 69 pg[9.16( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=14.976173401s) [1] r=-1 lpr=69 pi=[54,69)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.887176514s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 69 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=67/68 n=5 ec=54/42 lis/c=67/54 les/c/f=68/55/0 sis=69 pruub=14.972399712s) [1] r=-1 lpr=69 pi=[54,69)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.883438110s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 342 B/s, 10 objects/s recovering
Jan 31 07:09:01 compute-0 ceph-mon[74496]: 2.1f scrub starts
Jan 31 07:09:01 compute-0 ceph-mon[74496]: 2.1f scrub ok
Jan 31 07:09:01 compute-0 ceph-mon[74496]: osdmap e68: 3 total, 3 up, 3 in
Jan 31 07:09:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:09:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:09:01 compute-0 ceph-mon[74496]: osdmap e69: 3 total, 3 up, 3 in
Jan 31 07:09:01 compute-0 anacron[52258]: Job `cron.daily' started
Jan 31 07:09:01 compute-0 anacron[52258]: Job `cron.daily' terminated
Jan 31 07:09:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 31 07:09:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 07:09:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 31 07:09:02 compute-0 ceph-mon[74496]: pgmap v173: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 342 B/s, 10 objects/s recovering
Jan 31 07:09:02 compute-0 ceph-mon[74496]: 4.15 deep-scrub starts
Jan 31 07:09:02 compute-0 ceph-mon[74496]: 4.15 deep-scrub ok
Jan 31 07:09:02 compute-0 ceph-mon[74496]: osdmap e70: 3 total, 3 up, 3 in
Jan 31 07:09:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:02.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 38 B/s, 1 objects/s recovering
Jan 31 07:09:03 compute-0 ceph-mon[74496]: 4.1c scrub starts
Jan 31 07:09:03 compute-0 ceph-mon[74496]: 4.1c scrub ok
Jan 31 07:09:03 compute-0 ceph-mon[74496]: 5.7 scrub starts
Jan 31 07:09:03 compute-0 ceph-mon[74496]: 5.7 scrub ok
Jan 31 07:09:03 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 07:09:03 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 07:09:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:04.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:04 compute-0 ceph-mon[74496]: pgmap v175: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 38 B/s, 1 objects/s recovering
Jan 31 07:09:04 compute-0 ceph-mon[74496]: 2.9 scrub starts
Jan 31 07:09:04 compute-0 ceph-mon[74496]: 4.1d scrub starts
Jan 31 07:09:04 compute-0 ceph-mon[74496]: 2.9 scrub ok
Jan 31 07:09:04 compute-0 ceph-mon[74496]: 4.1d scrub ok
Jan 31 07:09:04 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 07:09:04 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 07:09:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 125 B/s, 4 objects/s recovering
Jan 31 07:09:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 31 07:09:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 07:09:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 31 07:09:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 07:09:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 31 07:09:05 compute-0 ceph-mon[74496]: 2.1e scrub starts
Jan 31 07:09:05 compute-0 ceph-mon[74496]: 2.1e scrub ok
Jan 31 07:09:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 07:09:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 07:09:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 07:09:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 07:09:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 07:09:05 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 31 07:09:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:06.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:06 compute-0 ceph-mon[74496]: pgmap v176: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 125 B/s, 4 objects/s recovering
Jan 31 07:09:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 07:09:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 07:09:06 compute-0 ceph-mon[74496]: osdmap e71: 3 total, 3 up, 3 in
Jan 31 07:09:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:06.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 134 B/s, 7 objects/s recovering
Jan 31 07:09:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 31 07:09:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 07:09:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 31 07:09:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 07:09:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 31 07:09:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 07:09:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 07:09:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 07:09:07 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 31 07:09:07 compute-0 ceph-mon[74496]: 4.1f scrub starts
Jan 31 07:09:07 compute-0 ceph-mon[74496]: 4.1f scrub ok
Jan 31 07:09:07 compute-0 ceph-mon[74496]: 3.5 scrub starts
Jan 31 07:09:07 compute-0 ceph-mon[74496]: 3.5 scrub ok
Jan 31 07:09:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 07:09:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 07:09:07 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 72 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=15.428533554s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 173.870361328s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:07 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 72 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=15.428444862s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.870361328s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:07 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 72 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=15.427467346s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 173.869842529s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:07 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 72 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=72 pruub=15.427439690s) [2] r=-1 lpr=72 pi=[54,72)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.869842529s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:07 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 72 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=72 pruub=11.268939972s) [1] r=-1 lpr=72 pi=[50,72)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 169.711791992s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:07 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 72 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=50/51 n=1 ec=50/22 lis/c=50/50 les/c/f=51/51/0 sis=72 pruub=11.268862724s) [1] r=-1 lpr=72 pi=[50,72)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.711791992s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:08 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 07:09:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 31 07:09:08 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 07:09:08 compute-0 ceph-mon[74496]: pgmap v178: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 134 B/s, 7 objects/s recovering
Jan 31 07:09:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 07:09:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 07:09:08 compute-0 ceph-mon[74496]: osdmap e72: 3 total, 3 up, 3 in
Jan 31 07:09:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 07:09:08 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 73 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=0 lpr=73 pi=[54,73)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:08 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 73 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=0 lpr=73 pi=[54,73)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:08 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 73 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=0 lpr=73 pi=[54,73)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:08 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 73 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] r=0 lpr=73 pi=[54,73)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:08 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 31 07:09:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:08.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 135 B/s, 7 objects/s recovering
Jan 31 07:09:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 31 07:09:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 07:09:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 31 07:09:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 07:09:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 31 07:09:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 07:09:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 07:09:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 07:09:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 74 pg[6.9( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=74) [0] r=0 lpr=74 pi=[58,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 74 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.594676971s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 173.870712280s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 74 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.594615936s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.870712280s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 74 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.592281342s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 173.869308472s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 74 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=13.592247009s) [2] r=-1 lpr=74 pi=[54,74)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.869308472s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:09 compute-0 ceph-mon[74496]: 2.e scrub starts
Jan 31 07:09:09 compute-0 ceph-mon[74496]: 2.e scrub ok
Jan 31 07:09:09 compute-0 ceph-mon[74496]: osdmap e73: 3 total, 3 up, 3 in
Jan 31 07:09:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 07:09:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 74 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=73/74 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[54,73)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 74 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=73/74 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[54,73)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 31 07:09:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 07:09:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 75 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] r=0 lpr=75 pi=[54,75)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 75 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] r=0 lpr=75 pi=[54,75)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 75 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] r=0 lpr=75 pi=[54,75)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 75 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] r=0 lpr=75 pi=[54,75)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:09 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 75 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=74/75 n=1 ec=50/22 lis/c=58/58 les/c/f=59/59/0 sis=74) [0] r=0 lpr=74 pi=[58,74)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000017s ======
Jan 31 07:09:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:10.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000017s
Jan 31 07:09:10 compute-0 ceph-mon[74496]: pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 135 B/s, 7 objects/s recovering
Jan 31 07:09:10 compute-0 ceph-mon[74496]: 4.9 scrub starts
Jan 31 07:09:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 07:09:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 07:09:10 compute-0 ceph-mon[74496]: osdmap e74: 3 total, 3 up, 3 in
Jan 31 07:09:10 compute-0 ceph-mon[74496]: 4.9 scrub ok
Jan 31 07:09:10 compute-0 ceph-mon[74496]: osdmap e75: 3 total, 3 up, 3 in
Jan 31 07:09:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 31 07:09:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 07:09:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 31 07:09:10 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 76 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76 pruub=14.905313492s) [2] async=[2] r=-1 lpr=76 pi=[54,76)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 176.287902832s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:10 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 76 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=73/74 n=6 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76 pruub=14.905292511s) [2] async=[2] r=-1 lpr=76 pi=[54,76)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 176.287918091s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:10 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 76 pg[9.8( v 48'908 (0'0,48'908] local-lis/les=73/74 n=6 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76 pruub=14.905074120s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 176.287918091s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:10 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 76 pg[9.18( v 48'908 (0'0,48'908] local-lis/les=73/74 n=5 ec=54/42 lis/c=73/54 les/c/f=74/55/0 sis=76 pruub=14.904256821s) [2] r=-1 lpr=76 pi=[54,76)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 176.287902832s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:10 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 76 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=75/76 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[54,75)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:10 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 76 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=75/76 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[54,75)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:10.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 31 07:09:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 07:09:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 31 07:09:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 07:09:11 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 07:09:11 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 07:09:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 31 07:09:11 compute-0 ceph-mon[74496]: osdmap e76: 3 total, 3 up, 3 in
Jan 31 07:09:11 compute-0 ceph-mon[74496]: 5.f scrub starts
Jan 31 07:09:11 compute-0 ceph-mon[74496]: 5.f scrub ok
Jan 31 07:09:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 07:09:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 07:09:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 07:09:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 07:09:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 07:09:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 31 07:09:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 77 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=75/76 n=6 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=14.978872299s) [2] async=[2] r=-1 lpr=77 pi=[54,77)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 177.388107300s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 77 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=11.460717201s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 173.870239258s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 77 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=11.460647583s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.870239258s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 77 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=11.459980011s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 173.870040894s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 77 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=11.459921837s) [1] r=-1 lpr=77 pi=[54,77)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.870040894s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 77 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=75/76 n=5 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=14.982235909s) [2] async=[2] r=-1 lpr=77 pi=[54,77)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 177.392547607s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 77 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=75/76 n=5 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=14.982169151s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.392547607s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 77 pg[9.9( v 48'908 (0'0,48'908] local-lis/les=75/76 n=6 ec=54/42 lis/c=75/54 les/c/f=76/55/0 sis=77 pruub=14.977895737s) [2] r=-1 lpr=77 pi=[54,77)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 177.388107300s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:12 compute-0 sudo[99080]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egdbmqwvrqotpbzynewzigrtoplgadch ; /usr/bin/python3'
Jan 31 07:09:12 compute-0 sudo[99080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:09:12 compute-0 python3[99082]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:09:12 compute-0 podman[99083]: 2026-01-31 07:09:12.77855335 +0000 UTC m=+0.047717533 container create c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46 (image=quay.io/ceph/ceph:v18, name=confident_franklin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:09:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 31 07:09:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 07:09:12 compute-0 systemd[1]: Started libpod-conmon-c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46.scope.
Jan 31 07:09:12 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 31 07:09:12 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 78 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:12 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 78 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:12 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 78 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:12 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 78 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] r=0 lpr=78 pi=[54,78)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:12 compute-0 ceph-mon[74496]: pgmap v185: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:12 compute-0 ceph-mon[74496]: 4.19 scrub starts
Jan 31 07:09:12 compute-0 ceph-mon[74496]: 4.19 scrub ok
Jan 31 07:09:12 compute-0 ceph-mon[74496]: 2.19 scrub starts
Jan 31 07:09:12 compute-0 ceph-mon[74496]: 2.19 scrub ok
Jan 31 07:09:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 07:09:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 07:09:12 compute-0 ceph-mon[74496]: osdmap e77: 3 total, 3 up, 3 in
Jan 31 07:09:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2a737f33f744d1df7b59a1fb74d2202a5cf8c5daab4c7e27e3b6c5073f682b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2a737f33f744d1df7b59a1fb74d2202a5cf8c5daab4c7e27e3b6c5073f682b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:09:12 compute-0 podman[99083]: 2026-01-31 07:09:12.753355896 +0000 UTC m=+0.022520119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:09:12 compute-0 podman[99083]: 2026-01-31 07:09:12.862293704 +0000 UTC m=+0.131457917 container init c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46 (image=quay.io/ceph/ceph:v18, name=confident_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 31 07:09:12 compute-0 podman[99083]: 2026-01-31 07:09:12.866937704 +0000 UTC m=+0.136101927 container start c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46 (image=quay.io/ceph/ceph:v18, name=confident_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:09:12 compute-0 podman[99083]: 2026-01-31 07:09:12.871473772 +0000 UTC m=+0.140638025 container attach c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46 (image=quay.io/ceph/ceph:v18, name=confident_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:09:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:12.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:13 compute-0 confident_franklin[99098]: could not fetch user info: no user info saved
Jan 31 07:09:13 compute-0 systemd[1]: libpod-c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46.scope: Deactivated successfully.
Jan 31 07:09:13 compute-0 podman[99183]: 2026-01-31 07:09:13.154048713 +0000 UTC m=+0.025696133 container died c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46 (image=quay.io/ceph/ceph:v18, name=confident_franklin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b2a737f33f744d1df7b59a1fb74d2202a5cf8c5daab4c7e27e3b6c5073f682b-merged.mount: Deactivated successfully.
Jan 31 07:09:13 compute-0 podman[99183]: 2026-01-31 07:09:13.227816805 +0000 UTC m=+0.099464205 container remove c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46 (image=quay.io/ceph/ceph:v18, name=confident_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:09:13 compute-0 systemd[1]: libpod-conmon-c11da31ffa01932a659e9ed9548a1b47aac32870a2cad9b38ddfcae385f77c46.scope: Deactivated successfully.
Jan 31 07:09:13 compute-0 sudo[99080]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 31 07:09:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 07:09:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 31 07:09:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 07:09:13 compute-0 sudo[99221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiwvippvyrxuhkiwqjeyrgbxdtcmvlgg ; /usr/bin/python3'
Jan 31 07:09:13 compute-0 sudo[99221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:09:13 compute-0 python3[99223]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:09:13 compute-0 podman[99224]: 2026-01-31 07:09:13.623214111 +0000 UTC m=+0.053712127 container create 8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209 (image=quay.io/ceph/ceph:v18, name=pensive_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:09:13 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 07:09:13 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 07:09:13 compute-0 systemd[1]: Started libpod-conmon-8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209.scope.
Jan 31 07:09:13 compute-0 podman[99224]: 2026-01-31 07:09:13.606155447 +0000 UTC m=+0.036653473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 07:09:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17c51cfa2df8f83494ca896944216066b609ac12e721a797a028530b95314ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:09:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17c51cfa2df8f83494ca896944216066b609ac12e721a797a028530b95314ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:09:13 compute-0 podman[99224]: 2026-01-31 07:09:13.714379023 +0000 UTC m=+0.144877059 container init 8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209 (image=quay.io/ceph/ceph:v18, name=pensive_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:09:13 compute-0 podman[99224]: 2026-01-31 07:09:13.721708199 +0000 UTC m=+0.152206235 container start 8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209 (image=quay.io/ceph/ceph:v18, name=pensive_lumiere, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:09:13 compute-0 podman[99224]: 2026-01-31 07:09:13.725630446 +0000 UTC m=+0.156128462 container attach 8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209 (image=quay.io/ceph/ceph:v18, name=pensive_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:09:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 31 07:09:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 07:09:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 07:09:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 07:09:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 31 07:09:13 compute-0 ceph-mon[74496]: 2.1d deep-scrub starts
Jan 31 07:09:13 compute-0 ceph-mon[74496]: 2.1d deep-scrub ok
Jan 31 07:09:13 compute-0 ceph-mon[74496]: osdmap e78: 3 total, 3 up, 3 in
Jan 31 07:09:13 compute-0 ceph-mon[74496]: pgmap v188: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 07:09:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 07:09:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 07:09:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 07:09:13 compute-0 ceph-mon[74496]: osdmap e79: 3 total, 3 up, 3 in
Jan 31 07:09:13 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 79 pg[6.b( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=61/61 les/c/f=62/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:13 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 79 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=78/79 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[54,78)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:13 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 79 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=78/79 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[54,78)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]: {
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "user_id": "openstack",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "display_name": "openstack",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "email": "",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "suspended": 0,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "max_buckets": 1000,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "subusers": [],
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "keys": [
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         {
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:             "user": "openstack",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:             "access_key": "U6X9T793U0GJP847NDXL",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:             "secret_key": "w9DAIJWsPcUHbHzBsBAstnjjPanlpp18SCSUT6pu"
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         }
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     ],
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "swift_keys": [],
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "caps": [],
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "op_mask": "read, write, delete",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "default_placement": "",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "default_storage_class": "",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "placement_tags": [],
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "bucket_quota": {
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "enabled": false,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "check_on_raw": false,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "max_size": -1,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "max_size_kb": 0,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "max_objects": -1
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     },
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "user_quota": {
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "enabled": false,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "check_on_raw": false,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "max_size": -1,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "max_size_kb": 0,
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:         "max_objects": -1
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     },
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "temp_url_keys": [],
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "type": "rgw",
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]:     "mfa_ids": []
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]: }
Jan 31 07:09:13 compute-0 pensive_lumiere[99239]: 
Jan 31 07:09:14 compute-0 systemd[1]: libpod-8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209.scope: Deactivated successfully.
Jan 31 07:09:14 compute-0 podman[99224]: 2026-01-31 07:09:14.024276865 +0000 UTC m=+0.454774881 container died 8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209 (image=quay.io/ceph/ceph:v18, name=pensive_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:09:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17c51cfa2df8f83494ca896944216066b609ac12e721a797a028530b95314ad-merged.mount: Deactivated successfully.
Jan 31 07:09:14 compute-0 podman[99224]: 2026-01-31 07:09:14.064866324 +0000 UTC m=+0.495364330 container remove 8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209 (image=quay.io/ceph/ceph:v18, name=pensive_lumiere, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:09:14 compute-0 systemd[1]: libpod-conmon-8572d326ed6741d1c1ed8d0affb57a2f40e4be44d934dfe8e8f5d8144353f209.scope: Deactivated successfully.
Jan 31 07:09:14 compute-0 sudo[99221]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 31 07:09:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:14.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 31 07:09:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 31 07:09:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 07:09:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 31 07:09:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 80 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=78/79 n=5 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=15.098196030s) [1] async=[1] r=-1 lpr=80 pi=[54,80)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 180.466125488s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 80 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=78/79 n=6 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=15.098286629s) [1] async=[1] r=-1 lpr=80 pi=[54,80)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 180.466262817s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 80 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=78/79 n=5 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=15.098081589s) [1] r=-1 lpr=80 pi=[54,80)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.466125488s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 80 pg[9.a( v 48'908 (0'0,48'908] local-lis/les=78/79 n=6 ec=54/42 lis/c=78/54 les/c/f=79/55/0 sis=80 pruub=15.098204613s) [1] r=-1 lpr=80 pi=[54,80)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.466262817s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 80 pg[6.b( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=79/80 n=1 ec=50/22 lis/c=61/61 les/c/f=62/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:14 compute-0 ceph-mon[74496]: 2.1c deep-scrub starts
Jan 31 07:09:14 compute-0 ceph-mon[74496]: 6.4 scrub starts
Jan 31 07:09:14 compute-0 ceph-mon[74496]: 2.1c deep-scrub ok
Jan 31 07:09:14 compute-0 ceph-mon[74496]: 6.4 scrub ok
Jan 31 07:09:14 compute-0 ceph-mon[74496]: osdmap e80: 3 total, 3 up, 3 in
Jan 31 07:09:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:14.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 0 op/s; 137 B/s, 5 objects/s recovering
Jan 31 07:09:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 31 07:09:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 07:09:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 31 07:09:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 07:09:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 31 07:09:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 07:09:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 07:09:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 07:09:15 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 31 07:09:15 compute-0 ceph-mon[74496]: pgmap v191: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 0 op/s; 137 B/s, 5 objects/s recovering
Jan 31 07:09:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 07:09:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 07:09:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 07:09:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 07:09:15 compute-0 ceph-mon[74496]: osdmap e81: 3 total, 3 up, 3 in
Jan 31 07:09:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:16.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:16.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:16 compute-0 ceph-mon[74496]: 5.2 scrub starts
Jan 31 07:09:16 compute-0 ceph-mon[74496]: 5.2 scrub ok
Jan 31 07:09:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 226 B/s wr, 1 op/s; 170 B/s, 7 objects/s recovering
Jan 31 07:09:17 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Jan 31 07:09:17 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Jan 31 07:09:17 compute-0 sudo[99337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:17 compute-0 sudo[99337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-0 sudo[99337]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-0 sudo[99362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:17 compute-0 sudo[99362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:17 compute-0 sudo[99362]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:17 compute-0 ceph-mon[74496]: pgmap v193: 305 pgs: 2 peering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 226 B/s wr, 1 op/s; 170 B/s, 7 objects/s recovering
Jan 31 07:09:17 compute-0 ceph-mon[74496]: 5.1b scrub starts
Jan 31 07:09:17 compute-0 ceph-mon[74496]: 5.1b scrub ok
Jan 31 07:09:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:18.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:19 compute-0 ceph-mon[74496]: 6.c deep-scrub starts
Jan 31 07:09:19 compute-0 ceph-mon[74496]: 6.c deep-scrub ok
Jan 31 07:09:19 compute-0 ceph-mon[74496]: 2.b deep-scrub starts
Jan 31 07:09:19 compute-0 ceph-mon[74496]: 2.b deep-scrub ok
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 2 op/s; 128 B/s, 6 objects/s recovering
Jan 31 07:09:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:09:19
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Some PGs (0.006557) are inactive; try again later
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:09:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:09:20 compute-0 ceph-mon[74496]: 2.a deep-scrub starts
Jan 31 07:09:20 compute-0 ceph-mon[74496]: 2.a deep-scrub ok
Jan 31 07:09:20 compute-0 ceph-mon[74496]: pgmap v194: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 2 op/s; 128 B/s, 6 objects/s recovering
Jan 31 07:09:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000033s ======
Jan 31 07:09:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:20.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000033s
Jan 31 07:09:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 1.9 KiB/s rd, 1 op/s; 102 B/s, 4 objects/s recovering
Jan 31 07:09:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 31 07:09:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 07:09:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 31 07:09:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 07:09:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 31 07:09:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 07:09:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 07:09:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 07:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 07:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 07:09:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 31 07:09:21 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 07:09:21 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 07:09:22 compute-0 sshd-session[99389]: Invalid user sol from 45.148.10.240 port 39550
Jan 31 07:09:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:22.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:22 compute-0 sshd-session[99389]: Connection closed by invalid user sol 45.148.10.240 port 39550 [preauth]
Jan 31 07:09:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 31 07:09:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 07:09:22 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 31 07:09:22 compute-0 ceph-mon[74496]: pgmap v195: 305 pgs: 305 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 1.9 KiB/s rd, 1 op/s; 102 B/s, 4 objects/s recovering
Jan 31 07:09:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 07:09:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 07:09:22 compute-0 ceph-mon[74496]: osdmap e82: 3 total, 3 up, 3 in
Jan 31 07:09:22 compute-0 ceph-mon[74496]: 8.1 scrub starts
Jan 31 07:09:22 compute-0 ceph-mon[74496]: 8.1 scrub ok
Jan 31 07:09:22 compute-0 ceph-mon[74496]: 2.c deep-scrub starts
Jan 31 07:09:22 compute-0 ceph-mon[74496]: 2.c deep-scrub ok
Jan 31 07:09:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:22.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 305 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 946 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 31 07:09:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 31 07:09:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 07:09:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 31 07:09:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 07:09:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 31 07:09:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 07:09:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 07:09:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 07:09:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 31 07:09:23 compute-0 ceph-mon[74496]: osdmap e83: 3 total, 3 up, 3 in
Jan 31 07:09:23 compute-0 ceph-mon[74496]: 2.d deep-scrub starts
Jan 31 07:09:23 compute-0 ceph-mon[74496]: 2.d deep-scrub ok
Jan 31 07:09:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 07:09:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 07:09:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 07:09:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 07:09:23 compute-0 ceph-mon[74496]: osdmap e84: 3 total, 3 up, 3 in
Jan 31 07:09:23 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 31 07:09:23 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 31 07:09:24 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 84 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=66/67 n=1 ec=50/22 lis/c=66/66 les/c/f=67/67/0 sis=84 pruub=15.048298836s) [1] r=-1 lpr=84 pi=[66,84)/1 crt=48'39 mlcod 48'39 active pruub 189.793807983s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:24 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 84 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=66/67 n=1 ec=50/22 lis/c=66/66 les/c/f=67/67/0 sis=84 pruub=15.048077583s) [1] r=-1 lpr=84 pi=[66,84)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 189.793807983s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:24.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 31 07:09:24 compute-0 ceph-mon[74496]: pgmap v198: 305 pgs: 305 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 946 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Jan 31 07:09:24 compute-0 ceph-mon[74496]: 8.7 scrub starts
Jan 31 07:09:24 compute-0 ceph-mon[74496]: 8.7 scrub ok
Jan 31 07:09:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 07:09:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 31 07:09:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:24.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 2 active+remapped, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 07:09:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 31 07:09:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 07:09:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 31 07:09:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 07:09:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 31 07:09:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 07:09:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 07:09:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 07:09:25 compute-0 ceph-mon[74496]: osdmap e85: 3 total, 3 up, 3 in
Jan 31 07:09:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 07:09:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 07:09:25 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 31 07:09:25 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 86 pg[6.f( empty local-lis/les=0/0 n=0 ec=50/22 lis/c=61/61 les/c/f=62/62/0 sis=86) [0] r=0 lpr=86 pi=[61,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:26.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 31 07:09:26 compute-0 ceph-mon[74496]: pgmap v201: 305 pgs: 2 active+remapped, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 07:09:26 compute-0 ceph-mon[74496]: 5.18 scrub starts
Jan 31 07:09:26 compute-0 ceph-mon[74496]: 5.18 scrub ok
Jan 31 07:09:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 07:09:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 07:09:26 compute-0 ceph-mon[74496]: osdmap e86: 3 total, 3 up, 3 in
Jan 31 07:09:26 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 31 07:09:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 07:09:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 31 07:09:26 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 31 07:09:26 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 87 pg[6.f( v 48'39 lc 44'1 (0'0,48'39] local-lis/les=86/87 n=1 ec=50/22 lis/c=61/61 les/c/f=62/62/0 sis=86) [0] r=0 lpr=86 pi=[61,86)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:26.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 07:09:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 31 07:09:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 07:09:27 compute-0 ceph-mon[74496]: 8.e scrub starts
Jan 31 07:09:27 compute-0 ceph-mon[74496]: osdmap e87: 3 total, 3 up, 3 in
Jan 31 07:09:27 compute-0 ceph-mon[74496]: 8.e scrub ok
Jan 31 07:09:27 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 31 07:09:27 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 07:09:27 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 07:09:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:28.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 31 07:09:28 compute-0 ceph-mon[74496]: pgmap v204: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 07:09:28 compute-0 ceph-mon[74496]: osdmap e88: 3 total, 3 up, 3 in
Jan 31 07:09:28 compute-0 ceph-mon[74496]: 8.13 scrub starts
Jan 31 07:09:28 compute-0 ceph-mon[74496]: 8.13 scrub ok
Jan 31 07:09:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 07:09:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 31 07:09:28 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 07:09:28 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 07:09:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:28.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 2 peering, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Jan 31 07:09:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 31 07:09:29 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 07:09:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 07:09:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 31 07:09:29 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 07:09:29 compute-0 ceph-mon[74496]: osdmap e89: 3 total, 3 up, 3 in
Jan 31 07:09:29 compute-0 ceph-mon[74496]: 8.1a scrub starts
Jan 31 07:09:29 compute-0 ceph-mon[74496]: 8.1a scrub ok
Jan 31 07:09:29 compute-0 ceph-mon[74496]: pgmap v207: 305 pgs: 2 peering, 303 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Jan 31 07:09:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:09:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:30.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:09:30 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 07:09:30 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 07:09:30 compute-0 ceph-mon[74496]: 8.1d scrub starts
Jan 31 07:09:30 compute-0 ceph-mon[74496]: osdmap e90: 3 total, 3 up, 3 in
Jan 31 07:09:30 compute-0 ceph-mon[74496]: 8.1d scrub ok
Jan 31 07:09:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 180 B/s, 3 objects/s recovering
Jan 31 07:09:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 31 07:09:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 07:09:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 31 07:09:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 07:09:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 07:09:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 91 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=91 pruub=15.372633934s) [1] r=-1 lpr=91 pi=[54,91)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 197.870071411s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:31 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 91 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=91 pruub=15.372591972s) [1] r=-1 lpr=91 pi=[54,91)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.870071411s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 31 07:09:31 compute-0 ceph-mon[74496]: 3.1c scrub starts
Jan 31 07:09:31 compute-0 ceph-mon[74496]: 3.1c scrub ok
Jan 31 07:09:31 compute-0 ceph-mon[74496]: 8.1e scrub starts
Jan 31 07:09:31 compute-0 ceph-mon[74496]: 8.1e scrub ok
Jan 31 07:09:31 compute-0 ceph-mon[74496]: pgmap v209: 305 pgs: 305 active+clean; 458 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 180 B/s, 3 objects/s recovering
Jan 31 07:09:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 07:09:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:09:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:09:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 31 07:09:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 07:09:32 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 31 07:09:32 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 92 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=92) [1]/[0] r=0 lpr=92 pi=[54,92)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:32 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 92 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=92) [1]/[0] r=0 lpr=92 pi=[54,92)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 07:09:32 compute-0 ceph-mon[74496]: osdmap e91: 3 total, 3 up, 3 in
Jan 31 07:09:32 compute-0 ceph-mon[74496]: osdmap e92: 3 total, 3 up, 3 in
Jan 31 07:09:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:32.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 460 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 2 objects/s recovering
Jan 31 07:09:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 31 07:09:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 07:09:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 31 07:09:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 07:09:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 07:09:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 31 07:09:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 93 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=93 pruub=13.185388565s) [1] r=-1 lpr=93 pi=[54,93)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 197.871124268s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 93 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=93 pruub=13.185195923s) [1] r=-1 lpr=93 pi=[54,93)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.871124268s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:34 compute-0 ceph-mon[74496]: 4.13 scrub starts
Jan 31 07:09:34 compute-0 ceph-mon[74496]: 4.13 scrub ok
Jan 31 07:09:34 compute-0 ceph-mon[74496]: 2.10 scrub starts
Jan 31 07:09:34 compute-0 ceph-mon[74496]: 2.10 scrub ok
Jan 31 07:09:34 compute-0 ceph-mon[74496]: pgmap v212: 305 pgs: 305 active+clean; 460 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 2 objects/s recovering
Jan 31 07:09:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 07:09:34 compute-0 ceph-mon[74496]: 4.c scrub starts
Jan 31 07:09:34 compute-0 ceph-mon[74496]: 4.c scrub ok
Jan 31 07:09:34 compute-0 ceph-mon[74496]: 2.15 scrub starts
Jan 31 07:09:34 compute-0 ceph-mon[74496]: 2.15 scrub ok
Jan 31 07:09:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 93 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=92/93 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=92) [1]/[0] async=[1] r=0 lpr=92 pi=[54,92)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:09:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:09:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:34 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 31 07:09:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 31 07:09:34 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 31 07:09:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 07:09:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 31 07:09:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 94 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 94 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=54/55 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] r=0 lpr=94 pi=[54,94)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 94 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=92/93 n=6 ec=54/42 lis/c=92/54 les/c/f=93/55/0 sis=94 pruub=15.330225945s) [1] async=[1] r=-1 lpr=94 pi=[54,94)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 200.708007812s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:34 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 94 pg[9.10( v 48'908 (0'0,48'908] local-lis/les=92/93 n=6 ec=54/42 lis/c=92/54 les/c/f=93/55/0 sis=94 pruub=15.330159187s) [1] r=-1 lpr=94 pi=[54,94)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.708007812s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 07:09:35 compute-0 ceph-mon[74496]: osdmap e93: 3 total, 3 up, 3 in
Jan 31 07:09:35 compute-0 ceph-mon[74496]: osdmap e94: 3 total, 3 up, 3 in
Jan 31 07:09:35 compute-0 sshd-session[99397]: Accepted publickey for zuul from 192.168.122.30 port 48444 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:09:35 compute-0 systemd-logind[816]: New session 34 of user zuul.
Jan 31 07:09:35 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 31 07:09:35 compute-0 sshd-session[99397]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:09:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 31 07:09:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 07:09:35 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 31 07:09:35 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 31 07:09:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 31 07:09:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 07:09:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 07:09:35 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 31 07:09:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 95 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=95 pruub=11.481463432s) [1] r=-1 lpr=95 pi=[54,95)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 197.870178223s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 95 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=95 pruub=11.481300354s) [1] r=-1 lpr=95 pi=[54,95)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.870178223s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:35 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 95 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=94/95 n=6 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=94) [1]/[0] async=[1] r=0 lpr=94 pi=[54,94)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:36 compute-0 ceph-mon[74496]: 9.1 scrub starts
Jan 31 07:09:36 compute-0 ceph-mon[74496]: 9.1 scrub ok
Jan 31 07:09:36 compute-0 ceph-mon[74496]: pgmap v215: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 07:09:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 07:09:36 compute-0 ceph-mon[74496]: osdmap e95: 3 total, 3 up, 3 in
Jan 31 07:09:36 compute-0 python3.9[99551]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:09:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:36.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 31 07:09:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 07:09:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 31 07:09:36 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 96 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=94/95 n=6 ec=54/42 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=14.995935440s) [1] async=[1] r=-1 lpr=96 pi=[54,96)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 202.396743774s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:36 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 96 pg[9.11( v 48'908 (0'0,48'908] local-lis/les=94/95 n=6 ec=54/42 lis/c=94/54 les/c/f=95/55/0 sis=96 pruub=14.995021820s) [1] r=-1 lpr=96 pi=[54,96)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.396743774s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:36 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 96 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=0 lpr=96 pi=[54,96)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:36 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 96 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=54/55 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] r=0 lpr=96 pi=[54,96)/1 crt=48'908 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:09:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:09:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:09:37 compute-0 ceph-mon[74496]: 9.2 scrub starts
Jan 31 07:09:37 compute-0 ceph-mon[74496]: 9.2 scrub ok
Jan 31 07:09:37 compute-0 ceph-mon[74496]: 4.d scrub starts
Jan 31 07:09:37 compute-0 ceph-mon[74496]: 4.d scrub ok
Jan 31 07:09:37 compute-0 ceph-mon[74496]: osdmap e96: 3 total, 3 up, 3 in
Jan 31 07:09:37 compute-0 sudo[99738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:37 compute-0 sudo[99738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:37 compute-0 sudo[99738]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 31 07:09:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 07:09:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 31 07:09:37 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 97 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=96/97 n=5 ec=54/42 lis/c=54/54 les/c/f=55/55/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[54,96)/1 crt=48'908 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:37 compute-0 sudo[99771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:37 compute-0 sudo[99810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvxnsqknlrzjdxsfuuxwbwmzqsdeguve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843377.3791933-56-214994776740879/AnsiballZ_command.py'
Jan 31 07:09:37 compute-0 sudo[99771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:37 compute-0 sudo[99810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:09:37 compute-0 sudo[99771]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:38 compute-0 python3.9[99816]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:09:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:38.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:38 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 07:09:38 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 07:09:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 31 07:09:38 compute-0 ceph-mon[74496]: pgmap v218: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:09:38 compute-0 ceph-mon[74496]: osdmap e97: 3 total, 3 up, 3 in
Jan 31 07:09:38 compute-0 ceph-mon[74496]: 2.12 scrub starts
Jan 31 07:09:38 compute-0 ceph-mon[74496]: 2.12 scrub ok
Jan 31 07:09:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 07:09:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 31 07:09:38 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 98 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=96/97 n=5 ec=54/42 lis/c=96/54 les/c/f=97/55/0 sis=98 pruub=14.944844246s) [1] async=[1] r=-1 lpr=98 pi=[54,98)/1 crt=48'908 lcod 0'0 mlcod 0'0 active pruub 204.433059692s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:38 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 98 pg[9.12( v 48'908 (0'0,48'908] local-lis/les=96/97 n=5 ec=54/42 lis/c=96/54 les/c/f=97/55/0 sis=98 pruub=14.944708824s) [1] r=-1 lpr=98 pi=[54,98)/1 crt=48'908 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 204.433059692s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:09:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 31 07:09:39 compute-0 ceph-mon[74496]: 9.4 scrub starts
Jan 31 07:09:39 compute-0 ceph-mon[74496]: 9.4 scrub ok
Jan 31 07:09:39 compute-0 ceph-mon[74496]: 2.18 scrub starts
Jan 31 07:09:39 compute-0 ceph-mon[74496]: 2.18 scrub ok
Jan 31 07:09:39 compute-0 ceph-mon[74496]: osdmap e98: 3 total, 3 up, 3 in
Jan 31 07:09:39 compute-0 ceph-mon[74496]: pgmap v221: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 07:09:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 07:09:39 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 31 07:09:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:40.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:40 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 07:09:40 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 07:09:40 compute-0 ceph-mon[74496]: osdmap e99: 3 total, 3 up, 3 in
Jan 31 07:09:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:40.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 1 objects/s recovering
Jan 31 07:09:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 31 07:09:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 07:09:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 31 07:09:41 compute-0 ceph-mon[74496]: 9.c scrub starts
Jan 31 07:09:41 compute-0 ceph-mon[74496]: 9.c scrub ok
Jan 31 07:09:41 compute-0 ceph-mon[74496]: pgmap v223: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 1 objects/s recovering
Jan 31 07:09:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 07:09:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 07:09:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 07:09:41 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 31 07:09:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:42.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:42.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:42 compute-0 ceph-mon[74496]: 4.a scrub starts
Jan 31 07:09:42 compute-0 ceph-mon[74496]: 4.a scrub ok
Jan 31 07:09:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 07:09:42 compute-0 ceph-mon[74496]: osdmap e100: 3 total, 3 up, 3 in
Jan 31 07:09:42 compute-0 ceph-mon[74496]: 4.e scrub starts
Jan 31 07:09:42 compute-0 ceph-mon[74496]: 4.e scrub ok
Jan 31 07:09:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 1 objects/s recovering
Jan 31 07:09:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 31 07:09:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 07:09:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 31 07:09:43 compute-0 ceph-mon[74496]: pgmap v225: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 1 objects/s recovering
Jan 31 07:09:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 07:09:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 07:09:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 07:09:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 31 07:09:44 compute-0 sudo[99810]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:44.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:44.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:45 compute-0 ceph-mon[74496]: 2.13 scrub starts
Jan 31 07:09:45 compute-0 ceph-mon[74496]: 2.13 scrub ok
Jan 31 07:09:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 07:09:45 compute-0 ceph-mon[74496]: osdmap e101: 3 total, 3 up, 3 in
Jan 31 07:09:45 compute-0 sshd-session[99401]: Connection closed by 192.168.122.30 port 48444
Jan 31 07:09:45 compute-0 sshd-session[99397]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:09:45 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 07:09:45 compute-0 systemd[1]: session-34.scope: Consumed 7.666s CPU time.
Jan 31 07:09:45 compute-0 systemd-logind[816]: Session 34 logged out. Waiting for processes to exit.
Jan 31 07:09:45 compute-0 systemd-logind[816]: Removed session 34.
Jan 31 07:09:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 07:09:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 31 07:09:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 07:09:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 31 07:09:46 compute-0 ceph-mon[74496]: pgmap v227: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 07:09:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 07:09:46 compute-0 ceph-mon[74496]: 2.1b deep-scrub starts
Jan 31 07:09:46 compute-0 ceph-mon[74496]: 2.1b deep-scrub ok
Jan 31 07:09:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 07:09:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 07:09:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 31 07:09:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:09:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:46.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:09:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:46.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 31 07:09:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 07:09:47 compute-0 ceph-mon[74496]: osdmap e102: 3 total, 3 up, 3 in
Jan 31 07:09:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 07:09:47 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 31 07:09:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 31 07:09:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 07:09:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 31 07:09:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 07:09:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 07:09:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 31 07:09:48 compute-0 ceph-mon[74496]: osdmap e103: 3 total, 3 up, 3 in
Jan 31 07:09:48 compute-0 ceph-mon[74496]: pgmap v230: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 07:09:48 compute-0 ceph-mon[74496]: 4.1a scrub starts
Jan 31 07:09:48 compute-0 ceph-mon[74496]: 4.1a scrub ok
Jan 31 07:09:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:09:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:48.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:09:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:48.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 31 07:09:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 07:09:49 compute-0 ceph-mon[74496]: osdmap e104: 3 total, 3 up, 3 in
Jan 31 07:09:49 compute-0 ceph-mon[74496]: 2.f scrub starts
Jan 31 07:09:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 07:09:49 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 31 07:09:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 460 KiB data, 140 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:49 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.14 deep-scrub starts
Jan 31 07:09:49 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.14 deep-scrub ok
Jan 31 07:09:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:09:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:09:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:09:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:09:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:09:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:09:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 31 07:09:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 07:09:50 compute-0 ceph-mon[74496]: 2.f scrub ok
Jan 31 07:09:50 compute-0 ceph-mon[74496]: osdmap e105: 3 total, 3 up, 3 in
Jan 31 07:09:50 compute-0 ceph-mon[74496]: pgmap v233: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 460 KiB data, 140 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:50 compute-0 ceph-mon[74496]: 8.16 scrub starts
Jan 31 07:09:50 compute-0 ceph-mon[74496]: 8.16 scrub ok
Jan 31 07:09:50 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 31 07:09:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:50.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 31 07:09:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 07:09:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 31 07:09:51 compute-0 ceph-mon[74496]: 9.14 deep-scrub starts
Jan 31 07:09:51 compute-0 ceph-mon[74496]: 9.14 deep-scrub ok
Jan 31 07:09:51 compute-0 ceph-mon[74496]: osdmap e106: 3 total, 3 up, 3 in
Jan 31 07:09:51 compute-0 ceph-mon[74496]: 11.13 deep-scrub starts
Jan 31 07:09:51 compute-0 ceph-mon[74496]: 11.13 deep-scrub ok
Jan 31 07:09:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/223 objects misplaced (1.794%); 27 B/s, 0 objects/s recovering
Jan 31 07:09:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 31 07:09:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 07:09:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 31 07:09:52 compute-0 ceph-mon[74496]: osdmap e107: 3 total, 3 up, 3 in
Jan 31 07:09:52 compute-0 ceph-mon[74496]: pgmap v236: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/223 objects misplaced (1.794%); 27 B/s, 0 objects/s recovering
Jan 31 07:09:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 07:09:52 compute-0 ceph-mon[74496]: 4.18 scrub starts
Jan 31 07:09:52 compute-0 ceph-mon[74496]: 4.18 scrub ok
Jan 31 07:09:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 07:09:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 07:09:52 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 31 07:09:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:52.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:52 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1c deep-scrub starts
Jan 31 07:09:52 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1c deep-scrub ok
Jan 31 07:09:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:52.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 07:09:53 compute-0 ceph-mon[74496]: osdmap e108: 3 total, 3 up, 3 in
Jan 31 07:09:53 compute-0 ceph-mon[74496]: 4.1b scrub starts
Jan 31 07:09:53 compute-0 ceph-mon[74496]: 4.1b scrub ok
Jan 31 07:09:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/223 objects misplaced (1.794%); 26 B/s, 0 objects/s recovering
Jan 31 07:09:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 31 07:09:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 07:09:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 31 07:09:54 compute-0 ceph-mon[74496]: 9.1c deep-scrub starts
Jan 31 07:09:54 compute-0 ceph-mon[74496]: 9.1c deep-scrub ok
Jan 31 07:09:54 compute-0 ceph-mon[74496]: pgmap v238: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/223 objects misplaced (1.794%); 26 B/s, 0 objects/s recovering
Jan 31 07:09:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 07:09:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 07:09:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 07:09:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 31 07:09:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:09:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:54.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:09:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:09:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:54.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:09:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 42 B/s, 1 objects/s recovering
Jan 31 07:09:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 31 07:09:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 07:09:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 31 07:09:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 07:09:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 07:09:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 31 07:09:55 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=77/77 les/c/f=78/78/0 sis=110) [0] r=0 lpr=110 pi=[77,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 07:09:55 compute-0 ceph-mon[74496]: osdmap e109: 3 total, 3 up, 3 in
Jan 31 07:09:55 compute-0 ceph-mon[74496]: 7.1f scrub starts
Jan 31 07:09:55 compute-0 ceph-mon[74496]: 7.1f scrub ok
Jan 31 07:09:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 07:09:55 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 07:09:55 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 07:09:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 31 07:09:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 07:09:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 31 07:09:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 111 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=77/77 les/c/f=78/78/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[77,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:56 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 111 pg[9.19( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=77/77 les/c/f=78/78/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[77,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:56 compute-0 ceph-mon[74496]: pgmap v240: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 42 B/s, 1 objects/s recovering
Jan 31 07:09:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 07:09:56 compute-0 ceph-mon[74496]: osdmap e110: 3 total, 3 up, 3 in
Jan 31 07:09:56 compute-0 ceph-mon[74496]: 10.12 scrub starts
Jan 31 07:09:56 compute-0 ceph-mon[74496]: 10.12 scrub ok
Jan 31 07:09:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:09:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:56.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:09:56 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 07:09:56 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 07:09:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:56.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 0 objects/s recovering
Jan 31 07:09:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 31 07:09:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 07:09:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 31 07:09:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 07:09:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 07:09:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 31 07:09:57 compute-0 ceph-mon[74496]: 11.2 scrub starts
Jan 31 07:09:57 compute-0 ceph-mon[74496]: 11.2 scrub ok
Jan 31 07:09:57 compute-0 ceph-mon[74496]: 3.a scrub starts
Jan 31 07:09:57 compute-0 ceph-mon[74496]: 3.a scrub ok
Jan 31 07:09:57 compute-0 ceph-mon[74496]: osdmap e111: 3 total, 3 up, 3 in
Jan 31 07:09:57 compute-0 ceph-mon[74496]: 11.6 scrub starts
Jan 31 07:09:57 compute-0 ceph-mon[74496]: 11.6 scrub ok
Jan 31 07:09:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 07:09:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 07:09:57 compute-0 ceph-mon[74496]: osdmap e112: 3 total, 3 up, 3 in
Jan 31 07:09:57 compute-0 sudo[99883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:57 compute-0 sudo[99883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:57 compute-0 sudo[99883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:57 compute-0 sudo[99908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:09:57 compute-0 sudo[99908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:09:57 compute-0 sudo[99908]: pam_unix(sudo:session): session closed for user root
Jan 31 07:09:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 112 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=80/80 les/c/f=81/81/0 sis=112) [0] r=0 lpr=112 pi=[80,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 31 07:09:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 07:09:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 31 07:09:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 113 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=111/77 les/c/f=112/78/0 sis=113) [0] r=0 lpr=113 pi=[77,113)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 113 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=111/77 les/c/f=112/78/0 sis=113) [0] r=0 lpr=113 pi=[77,113)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 113 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] r=-1 lpr=113 pi=[80,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:58 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 113 pg[9.1a( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=80/80 les/c/f=81/81/0 sis=113) [0]/[1] r=-1 lpr=113 pi=[80,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:09:58 compute-0 ceph-mon[74496]: pgmap v243: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 0 objects/s recovering
Jan 31 07:09:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:09:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:09:58.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:09:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:09:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:09:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:09:58.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:09:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:09:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 31 07:09:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 07:09:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 31 07:09:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 07:09:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 114 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=114) [0] r=0 lpr=114 pi=[64,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:59 compute-0 ceph-mon[74496]: osdmap e113: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 07:09:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 07:09:59 compute-0 ceph-mon[74496]: osdmap e114: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 114 pg[9.19( v 48'908 (0'0,48'908] local-lis/les=113/114 n=5 ec=54/42 lis/c=111/77 les/c/f=112/78/0 sis=113) [0] r=0 lpr=113 pi=[77,113)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:09:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:09:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 31 07:09:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 31 07:09:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 115 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=113/80 les/c/f=114/81/0 sis=115) [0] r=0 lpr=115 pi=[80,115)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 115 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=113/80 les/c/f=114/81/0 sis=115) [0] r=0 lpr=115 pi=[80,115)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:09:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 115 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[64,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:09:59 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 115 pg[9.1b( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=64/64 les/c/f=65/65/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[64,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:10:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:10:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:00.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:00 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 07:10:00 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 07:10:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 31 07:10:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 07:10:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 31 07:10:00 compute-0 ceph-mon[74496]: pgmap v246: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:00 compute-0 ceph-mon[74496]: osdmap e115: 3 total, 3 up, 3 in
Jan 31 07:10:00 compute-0 ceph-mon[74496]: 8.11 scrub starts
Jan 31 07:10:00 compute-0 ceph-mon[74496]: 8.11 scrub ok
Jan 31 07:10:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:10:00 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 116 pg[9.1a( v 48'908 (0'0,48'908] local-lis/les=115/116 n=5 ec=54/42 lis/c=113/80 les/c/f=114/81/0 sis=115) [0] r=0 lpr=115 pi=[80,115)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:10:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:00.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:01 compute-0 sudo[99934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:01 compute-0 sudo[99934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:01 compute-0 sudo[99934]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:01 compute-0 sudo[99961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:10:01 compute-0 sudo[99961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:01 compute-0 sudo[99961]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:01 compute-0 sshd-session[99952]: Accepted publickey for zuul from 192.168.122.30 port 54848 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:10:01 compute-0 systemd-logind[816]: New session 35 of user zuul.
Jan 31 07:10:01 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 31 07:10:01 compute-0 sshd-session[99952]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:10:01 compute-0 sudo[99986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:01 compute-0 sudo[99986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:01 compute-0 sudo[99986]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:01 compute-0 sudo[100014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:10:01 compute-0 sudo[100014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 31 07:10:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 07:10:01 compute-0 podman[100211]: 2026-01-31 07:10:01.652925586 +0000 UTC m=+0.063295101 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:10:01 compute-0 podman[100211]: 2026-01-31 07:10:01.746594202 +0000 UTC m=+0.156963677 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:10:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 31 07:10:01 compute-0 ceph-mon[74496]: 11.9 scrub starts
Jan 31 07:10:01 compute-0 ceph-mon[74496]: 11.9 scrub ok
Jan 31 07:10:01 compute-0 ceph-mon[74496]: osdmap e116: 3 total, 3 up, 3 in
Jan 31 07:10:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 07:10:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 07:10:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 07:10:01 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 31 07:10:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 117 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=115/64 les/c/f=116/65/0 sis=117) [0] r=0 lpr=117 pi=[64,117)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:10:01 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 117 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=115/64 les/c/f=116/65/0 sis=117) [0] r=0 lpr=117 pi=[64,117)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:10:01 compute-0 python3.9[100280]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 07:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:10:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:10:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 podman[100489]: 2026-01-31 07:10:02.250806263 +0000 UTC m=+0.069292141 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:10:02 compute-0 podman[100489]: 2026-01-31 07:10:02.284506413 +0000 UTC m=+0.102992281 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:10:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:10:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 podman[100556]: 2026-01-31 07:10:02.471538979 +0000 UTC m=+0.051685161 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, release=1793, name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 07:10:02 compute-0 podman[100556]: 2026-01-31 07:10:02.484280517 +0000 UTC m=+0.064426719 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2)
Jan 31 07:10:02 compute-0 sudo[100014]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:10:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:10:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 sudo[100635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:02 compute-0 sudo[100635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:02 compute-0 sudo[100635]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:02.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:02 compute-0 sudo[100681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:10:02 compute-0 sudo[100681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:02 compute-0 sudo[100681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:02 compute-0 sudo[100726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:02 compute-0 sudo[100726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:02 compute-0 sudo[100726]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:02 compute-0 sudo[100761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:10:02 compute-0 sudo[100761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 31 07:10:02 compute-0 ceph-mon[74496]: 5.1 scrub starts
Jan 31 07:10:02 compute-0 ceph-mon[74496]: 5.1 scrub ok
Jan 31 07:10:02 compute-0 ceph-mon[74496]: pgmap v250: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 07:10:02 compute-0 ceph-mon[74496]: osdmap e117: 3 total, 3 up, 3 in
Jan 31 07:10:02 compute-0 ceph-mon[74496]: 8.2 scrub starts
Jan 31 07:10:02 compute-0 ceph-mon[74496]: 8.2 scrub ok
Jan 31 07:10:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 07:10:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 07:10:02 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 118 pg[9.1b( v 48'908 (0'0,48'908] local-lis/les=117/118 n=5 ec=54/42 lis/c=115/64 les/c/f=116/65/0 sis=117) [0] r=0 lpr=117 pi=[64,117)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:10:02 compute-0 python3.9[100742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:10:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:02.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:03 compute-0 sudo[100761]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:10:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:10:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:10:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b041d0c8-e3f0-4897-9781-6b042de32288 does not exist
Jan 31 07:10:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3dbdea21-8694-435c-92d3-2e4b30faa502 does not exist
Jan 31 07:10:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1c0c40c9-951f-4fc1-af7e-bbae7b0c50a7 does not exist
Jan 31 07:10:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:10:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:10:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:10:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:10:03 compute-0 sudo[100820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:03 compute-0 sudo[100820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:03 compute-0 sudo[100820]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:03 compute-0 sudo[100846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:10:03 compute-0 sudo[100846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:03 compute-0 sudo[100846]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:03 compute-0 sudo[100894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:03 compute-0 sudo[100894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:03 compute-0 sudo[100894]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:03 compute-0 sudo[100919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:10:03 compute-0 sudo[100919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 86 B/s, 3 objects/s recovering
Jan 31 07:10:03 compute-0 podman[100984]: 2026-01-31 07:10:03.660839972 +0000 UTC m=+0.056399669 container create 9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:10:03 compute-0 systemd[1]: Started libpod-conmon-9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6.scope.
Jan 31 07:10:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:10:03 compute-0 podman[100984]: 2026-01-31 07:10:03.626539026 +0000 UTC m=+0.022098773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:10:03 compute-0 podman[100984]: 2026-01-31 07:10:03.732673834 +0000 UTC m=+0.128233551 container init 9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:10:03 compute-0 podman[100984]: 2026-01-31 07:10:03.74052993 +0000 UTC m=+0.136089617 container start 9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:10:03 compute-0 podman[100984]: 2026-01-31 07:10:03.74457378 +0000 UTC m=+0.140133477 container attach 9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:10:03 compute-0 fervent_clarke[101043]: 167 167
Jan 31 07:10:03 compute-0 systemd[1]: libpod-9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6.scope: Deactivated successfully.
Jan 31 07:10:03 compute-0 podman[100984]: 2026-01-31 07:10:03.745526304 +0000 UTC m=+0.141085981 container died 9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-a05b48f8a1195c90f11ed803f1d2a393c3a7b69c0feac2b7cfafbad27b4fc322-merged.mount: Deactivated successfully.
Jan 31 07:10:03 compute-0 podman[100984]: 2026-01-31 07:10:03.816950256 +0000 UTC m=+0.212509943 container remove 9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_clarke, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:10:03 compute-0 systemd[1]: libpod-conmon-9ecec40f897c405c104db8c9cc606ba5e2e5c6000ba09e8c42b15a47eafcaed6.scope: Deactivated successfully.
Jan 31 07:10:03 compute-0 ceph-mon[74496]: 7.1 scrub starts
Jan 31 07:10:03 compute-0 ceph-mon[74496]: 7.1 scrub ok
Jan 31 07:10:03 compute-0 ceph-mon[74496]: osdmap e118: 3 total, 3 up, 3 in
Jan 31 07:10:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:10:03 compute-0 ceph-mon[74496]: pgmap v253: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 86 B/s, 3 objects/s recovering
Jan 31 07:10:04 compute-0 podman[101078]: 2026-01-31 07:10:04.064150824 +0000 UTC m=+0.122233521 container create 4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:10:04 compute-0 podman[101078]: 2026-01-31 07:10:03.973675377 +0000 UTC m=+0.031758154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:10:04 compute-0 sudo[101164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krzgaiyeargsietngjxprisxkcluropk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843403.6628802-93-12584640851226/AnsiballZ_command.py'
Jan 31 07:10:04 compute-0 sudo[101164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:04 compute-0 systemd[1]: Started libpod-conmon-4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021.scope.
Jan 31 07:10:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6754a32a2187763066cdcf1046ca1dc5bc063f4060d62d9234994701818b2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6754a32a2187763066cdcf1046ca1dc5bc063f4060d62d9234994701818b2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6754a32a2187763066cdcf1046ca1dc5bc063f4060d62d9234994701818b2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6754a32a2187763066cdcf1046ca1dc5bc063f4060d62d9234994701818b2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b6754a32a2187763066cdcf1046ca1dc5bc063f4060d62d9234994701818b2a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:04 compute-0 podman[101078]: 2026-01-31 07:10:04.224009653 +0000 UTC m=+0.282092370 container init 4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:10:04 compute-0 podman[101078]: 2026-01-31 07:10:04.233886839 +0000 UTC m=+0.291969536 container start 4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:10:04 compute-0 podman[101078]: 2026-01-31 07:10:04.23954883 +0000 UTC m=+0.297631527 container attach 4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:10:04 compute-0 python3.9[101166]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:10:04 compute-0 sudo[101164]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:04.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:04 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 07:10:04 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 07:10:04 compute-0 pensive_satoshi[101169]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:10:04 compute-0 pensive_satoshi[101169]: --> relative data size: 1.0
Jan 31 07:10:04 compute-0 pensive_satoshi[101169]: --> All data devices are unavailable
Jan 31 07:10:04 compute-0 ceph-mon[74496]: 7.7 scrub starts
Jan 31 07:10:04 compute-0 ceph-mon[74496]: 7.7 scrub ok
Jan 31 07:10:04 compute-0 systemd[1]: libpod-4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021.scope: Deactivated successfully.
Jan 31 07:10:04 compute-0 podman[101078]: 2026-01-31 07:10:04.961745699 +0000 UTC m=+1.019828406 container died 4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:10:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:04.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b6754a32a2187763066cdcf1046ca1dc5bc063f4060d62d9234994701818b2a-merged.mount: Deactivated successfully.
Jan 31 07:10:05 compute-0 podman[101078]: 2026-01-31 07:10:05.031996241 +0000 UTC m=+1.090078938 container remove 4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_satoshi, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:10:05 compute-0 systemd[1]: libpod-conmon-4f95352bb822b1397240469ca4a15cb89a89fa3a9bb8632e156e4aa0b2352021.scope: Deactivated successfully.
Jan 31 07:10:05 compute-0 sudo[100919]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:05 compute-0 sudo[101321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:05 compute-0 sudo[101321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:05 compute-0 sudo[101321]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:05 compute-0 sudo[101373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoghgalcojjjmtuxfddxadmvlkdtizfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843404.750749-129-214657890418746/AnsiballZ_stat.py'
Jan 31 07:10:05 compute-0 sudo[101373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:05 compute-0 sudo[101374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:10:05 compute-0 sudo[101374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:05 compute-0 sudo[101374]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:05 compute-0 sudo[101402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:05 compute-0 sudo[101402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:05 compute-0 sudo[101402]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:05 compute-0 sudo[101427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:10:05 compute-0 sudo[101427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:05 compute-0 python3.9[101384]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:10:05 compute-0 sudo[101373]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 59 B/s, 2 objects/s recovering
Jan 31 07:10:05 compute-0 podman[101517]: 2026-01-31 07:10:05.471054256 +0000 UTC m=+0.041070396 container create f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:10:05 compute-0 systemd[1]: Started libpod-conmon-f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3.scope.
Jan 31 07:10:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:10:05 compute-0 podman[101517]: 2026-01-31 07:10:05.542959639 +0000 UTC m=+0.112975809 container init f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:10:05 compute-0 podman[101517]: 2026-01-31 07:10:05.548385725 +0000 UTC m=+0.118401865 container start f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:10:05 compute-0 podman[101517]: 2026-01-31 07:10:05.452884253 +0000 UTC m=+0.022900413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:10:05 compute-0 peaceful_panini[101533]: 167 167
Jan 31 07:10:05 compute-0 systemd[1]: libpod-f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3.scope: Deactivated successfully.
Jan 31 07:10:05 compute-0 conmon[101533]: conmon f909439293745694b44d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3.scope/container/memory.events
Jan 31 07:10:05 compute-0 podman[101517]: 2026-01-31 07:10:05.56184577 +0000 UTC m=+0.131861940 container attach f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:10:05 compute-0 podman[101517]: 2026-01-31 07:10:05.562943528 +0000 UTC m=+0.132959668 container died f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc504cdca42d63525355017e21c0ce74ae14d21161280ef231eb430b1b8edba1-merged.mount: Deactivated successfully.
Jan 31 07:10:05 compute-0 podman[101517]: 2026-01-31 07:10:05.663030025 +0000 UTC m=+0.233046165 container remove f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_panini, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:10:05 compute-0 systemd[1]: libpod-conmon-f909439293745694b44db92158f88fedddc8e9d2afab2fc90284810db42fbcb3.scope: Deactivated successfully.
Jan 31 07:10:05 compute-0 podman[101559]: 2026-01-31 07:10:05.818003371 +0000 UTC m=+0.047331052 container create 836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 31 07:10:05 compute-0 systemd[1]: Started libpod-conmon-836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee.scope.
Jan 31 07:10:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5920e70f39ab504ee2fa4a7efcf0f50989435d13dbb4619a92a7abf732e1051a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5920e70f39ab504ee2fa4a7efcf0f50989435d13dbb4619a92a7abf732e1051a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5920e70f39ab504ee2fa4a7efcf0f50989435d13dbb4619a92a7abf732e1051a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5920e70f39ab504ee2fa4a7efcf0f50989435d13dbb4619a92a7abf732e1051a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:05 compute-0 podman[101559]: 2026-01-31 07:10:05.892714636 +0000 UTC m=+0.122042337 container init 836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:10:05 compute-0 podman[101559]: 2026-01-31 07:10:05.797647354 +0000 UTC m=+0.026975075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:10:05 compute-0 podman[101559]: 2026-01-31 07:10:05.898381027 +0000 UTC m=+0.127708728 container start 836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:10:05 compute-0 podman[101559]: 2026-01-31 07:10:05.902553421 +0000 UTC m=+0.131881102 container attach 836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:10:05 compute-0 ceph-mon[74496]: 7.c scrub starts
Jan 31 07:10:05 compute-0 ceph-mon[74496]: 7.c scrub ok
Jan 31 07:10:05 compute-0 ceph-mon[74496]: 11.b scrub starts
Jan 31 07:10:05 compute-0 ceph-mon[74496]: 11.b scrub ok
Jan 31 07:10:05 compute-0 ceph-mon[74496]: pgmap v254: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 59 B/s, 2 objects/s recovering
Jan 31 07:10:06 compute-0 sudo[101705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vryaxzolnmmjjkbvqcotjevdlvdxbeqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843405.8730874-162-89323578221011/AnsiballZ_file.py'
Jan 31 07:10:06 compute-0 sudo[101705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:06 compute-0 eager_cartwright[101598]: {
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:     "0": [
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:         {
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "devices": [
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "/dev/loop3"
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             ],
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "lv_name": "ceph_lv0",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "lv_size": "7511998464",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "name": "ceph_lv0",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "tags": {
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.cluster_name": "ceph",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.crush_device_class": "",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.encrypted": "0",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.osd_id": "0",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.type": "block",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:                 "ceph.vdo": "0"
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             },
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "type": "block",
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:             "vg_name": "ceph_vg0"
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:         }
Jan 31 07:10:06 compute-0 eager_cartwright[101598]:     ]
Jan 31 07:10:06 compute-0 eager_cartwright[101598]: }
Jan 31 07:10:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:06.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:06 compute-0 systemd[1]: libpod-836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee.scope: Deactivated successfully.
Jan 31 07:10:06 compute-0 podman[101559]: 2026-01-31 07:10:06.663566957 +0000 UTC m=+0.892894638 container died 836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5920e70f39ab504ee2fa4a7efcf0f50989435d13dbb4619a92a7abf732e1051a-merged.mount: Deactivated successfully.
Jan 31 07:10:06 compute-0 podman[101559]: 2026-01-31 07:10:06.723312458 +0000 UTC m=+0.952640139 container remove 836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:10:06 compute-0 systemd[1]: libpod-conmon-836e0165eb973e61b9955512023004ac7396de2256ecc2943eba43dee99db7ee.scope: Deactivated successfully.
Jan 31 07:10:06 compute-0 python3.9[101707]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:10:06 compute-0 sudo[101427]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:06 compute-0 sudo[101705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:06 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 07:10:06 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 07:10:06 compute-0 sudo[101727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:06 compute-0 sudo[101727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:06 compute-0 sudo[101727]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:06 compute-0 sudo[101776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:10:06 compute-0 sudo[101776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:06 compute-0 sudo[101776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:06 compute-0 sudo[101801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:06 compute-0 sudo[101801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:06 compute-0 sudo[101801]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:06 compute-0 sudo[101826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:10:06 compute-0 sudo[101826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:06.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:07 compute-0 ceph-mon[74496]: 10.f deep-scrub starts
Jan 31 07:10:07 compute-0 ceph-mon[74496]: 10.f deep-scrub ok
Jan 31 07:10:07 compute-0 podman[101991]: 2026-01-31 07:10:07.196003531 +0000 UTC m=+0.036853990 container create 08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:10:07 compute-0 sudo[102032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inalkqpkdstunqksnpdxagfcpreeqqwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843407.0068665-189-1491710720805/AnsiballZ_file.py'
Jan 31 07:10:07 compute-0 sudo[102032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:07 compute-0 podman[101991]: 2026-01-31 07:10:07.179222653 +0000 UTC m=+0.020073112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:10:07 compute-0 systemd[1]: Started libpod-conmon-08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799.scope.
Jan 31 07:10:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:10:07 compute-0 podman[101991]: 2026-01-31 07:10:07.363192212 +0000 UTC m=+0.204042691 container init 08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:10:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 1 objects/s recovering
Jan 31 07:10:07 compute-0 podman[101991]: 2026-01-31 07:10:07.369071729 +0000 UTC m=+0.209922158 container start 08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:10:07 compute-0 podman[101991]: 2026-01-31 07:10:07.372042493 +0000 UTC m=+0.212892942 container attach 08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:10:07 compute-0 lucid_sammet[102037]: 167 167
Jan 31 07:10:07 compute-0 systemd[1]: libpod-08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799.scope: Deactivated successfully.
Jan 31 07:10:07 compute-0 podman[101991]: 2026-01-31 07:10:07.373655253 +0000 UTC m=+0.214505702 container died 08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:10:07 compute-0 python3.9[102034]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd450844eeb6d563fa112ff73113f4b8c0083462b39b5498a15dd769100e6a02-merged.mount: Deactivated successfully.
Jan 31 07:10:07 compute-0 sudo[102032]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:07 compute-0 podman[101991]: 2026-01-31 07:10:07.501517294 +0000 UTC m=+0.342367763 container remove 08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:10:07 compute-0 systemd[1]: libpod-conmon-08ba0d8ad37db9c76a646aa508ff457b6a60aee91dd9c8e5848e7f72c4ee3799.scope: Deactivated successfully.
Jan 31 07:10:07 compute-0 podman[102108]: 2026-01-31 07:10:07.66891868 +0000 UTC m=+0.053611418 container create 5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:10:07 compute-0 systemd[1]: Started libpod-conmon-5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5.scope.
Jan 31 07:10:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f132200ddb496364b1470fc9ff413fb1e9abc4182b2e9254f1cbc505027d71fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f132200ddb496364b1470fc9ff413fb1e9abc4182b2e9254f1cbc505027d71fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f132200ddb496364b1470fc9ff413fb1e9abc4182b2e9254f1cbc505027d71fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f132200ddb496364b1470fc9ff413fb1e9abc4182b2e9254f1cbc505027d71fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:10:07 compute-0 podman[102108]: 2026-01-31 07:10:07.637623629 +0000 UTC m=+0.022316417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:10:07 compute-0 podman[102108]: 2026-01-31 07:10:07.75428359 +0000 UTC m=+0.138976358 container init 5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:10:07 compute-0 podman[102108]: 2026-01-31 07:10:07.76068144 +0000 UTC m=+0.145374178 container start 5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:10:07 compute-0 podman[102108]: 2026-01-31 07:10:07.766396862 +0000 UTC m=+0.151089630 container attach 5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:10:08 compute-0 ceph-mon[74496]: 11.c scrub starts
Jan 31 07:10:08 compute-0 ceph-mon[74496]: 11.c scrub ok
Jan 31 07:10:08 compute-0 ceph-mon[74496]: pgmap v255: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 1 objects/s recovering
Jan 31 07:10:08 compute-0 python3.9[102231]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:10:08 compute-0 network[102248]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:10:08 compute-0 network[102249]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:10:08 compute-0 network[102250]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:10:08 compute-0 festive_morse[102153]: {
Jan 31 07:10:08 compute-0 festive_morse[102153]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:10:08 compute-0 festive_morse[102153]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:10:08 compute-0 festive_morse[102153]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:10:08 compute-0 festive_morse[102153]:         "osd_id": 0,
Jan 31 07:10:08 compute-0 festive_morse[102153]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:10:08 compute-0 festive_morse[102153]:         "type": "bluestore"
Jan 31 07:10:08 compute-0 festive_morse[102153]:     }
Jan 31 07:10:08 compute-0 festive_morse[102153]: }
Jan 31 07:10:08 compute-0 podman[102108]: 2026-01-31 07:10:08.583613011 +0000 UTC m=+0.968305749 container died 5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:10:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:10:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:08.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:10:08 compute-0 systemd[1]: libpod-5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5.scope: Deactivated successfully.
Jan 31 07:10:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f132200ddb496364b1470fc9ff413fb1e9abc4182b2e9254f1cbc505027d71fb-merged.mount: Deactivated successfully.
Jan 31 07:10:08 compute-0 podman[102108]: 2026-01-31 07:10:08.714277971 +0000 UTC m=+1.098970719 container remove 5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_morse, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:10:08 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 07:10:08 compute-0 systemd[1]: libpod-conmon-5a9a0675189b056dba9e4f2ea6cb30c588469dd6aa47f22d7ca352b799c731a5.scope: Deactivated successfully.
Jan 31 07:10:08 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 07:10:08 compute-0 sudo[101826]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:10:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:10:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 53b9ede4-ad98-4098-aa36-2ffa427b267f does not exist
Jan 31 07:10:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fa31d287-a6e6-4450-8140-cb6711dc0588 does not exist
Jan 31 07:10:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5a9dfbc4-10f4-4ab9-855d-6e37eef34842 does not exist
Jan 31 07:10:08 compute-0 sudo[102293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:08 compute-0 sudo[102293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:08 compute-0 sudo[102293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:08 compute-0 sudo[102323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:10:08 compute-0 sudo[102323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:08 compute-0 sudo[102323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 31 07:10:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 31 07:10:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 07:10:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:09 compute-0 ceph-mon[74496]: 7.d scrub starts
Jan 31 07:10:09 compute-0 ceph-mon[74496]: 7.d scrub ok
Jan 31 07:10:09 compute-0 ceph-mon[74496]: 11.d scrub starts
Jan 31 07:10:09 compute-0 ceph-mon[74496]: 11.d scrub ok
Jan 31 07:10:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:10:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 07:10:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 31 07:10:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 07:10:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 07:10:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 31 07:10:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:10.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:10 compute-0 ceph-mon[74496]: pgmap v256: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 31 07:10:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 07:10:10 compute-0 ceph-mon[74496]: osdmap e119: 3 total, 3 up, 3 in
Jan 31 07:10:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 31 07:10:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 07:10:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 31 07:10:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:10:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:10.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:10:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 31 07:10:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 07:10:11 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 07:10:11 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 07:10:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 31 07:10:11 compute-0 ceph-mon[74496]: osdmap e120: 3 total, 3 up, 3 in
Jan 31 07:10:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 07:10:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 07:10:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 07:10:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 31 07:10:11 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 121 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=121) [0] r=0 lpr=121 pi=[69,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:10:12 compute-0 python3.9[102592]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:10:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:10:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:12.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:10:12 compute-0 ceph-mon[74496]: 7.12 scrub starts
Jan 31 07:10:12 compute-0 ceph-mon[74496]: 7.12 scrub ok
Jan 31 07:10:12 compute-0 ceph-mon[74496]: pgmap v259: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:12 compute-0 ceph-mon[74496]: 11.10 scrub starts
Jan 31 07:10:12 compute-0 ceph-mon[74496]: 11.10 scrub ok
Jan 31 07:10:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 07:10:12 compute-0 ceph-mon[74496]: osdmap e121: 3 total, 3 up, 3 in
Jan 31 07:10:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 31 07:10:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 07:10:12 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 31 07:10:12 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=122) [0]/[1] r=-1 lpr=122 pi=[69,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:10:12 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=69/69 les/c/f=70/70/0 sis=122) [0]/[1] r=-1 lpr=122 pi=[69,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:10:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:12.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:13 compute-0 python3.9[102742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:10:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 31 07:10:13 compute-0 ceph-mon[74496]: osdmap e122: 3 total, 3 up, 3 in
Jan 31 07:10:13 compute-0 ceph-mon[74496]: pgmap v262: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 07:10:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 31 07:10:14 compute-0 python3.9[102897]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:10:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:10:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:14.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:10:14 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 31 07:10:14 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 31 07:10:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 31 07:10:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 07:10:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 31 07:10:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 124 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=122/69 les/c/f=123/70/0 sis=124) [0] r=0 lpr=124 pi=[69,124)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:10:14 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 124 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=122/69 les/c/f=123/70/0 sis=124) [0] r=0 lpr=124 pi=[69,124)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:10:14 compute-0 ceph-mon[74496]: 7.15 scrub starts
Jan 31 07:10:14 compute-0 ceph-mon[74496]: 7.15 scrub ok
Jan 31 07:10:14 compute-0 ceph-mon[74496]: osdmap e123: 3 total, 3 up, 3 in
Jan 31 07:10:14 compute-0 ceph-mon[74496]: osdmap e124: 3 total, 3 up, 3 in
Jan 31 07:10:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:14.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:15 compute-0 sudo[103054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmbneltpwzvutbknibzzlgpqxgvkjsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843414.9318883-333-220343416954731/AnsiballZ_setup.py'
Jan 31 07:10:15 compute-0 sudo[103054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 0 objects/s recovering
Jan 31 07:10:15 compute-0 python3.9[103056]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:10:15 compute-0 sudo[103054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:15 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 31 07:10:15 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 31 07:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 31 07:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 07:10:15 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 31 07:10:15 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 125 pg[9.1e( v 48'908 (0'0,48'908] local-lis/les=124/125 n=5 ec=54/42 lis/c=122/69 les/c/f=123/70/0 sis=124) [0] r=0 lpr=124 pi=[69,124)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:10:15 compute-0 ceph-mon[74496]: 7.17 scrub starts
Jan 31 07:10:15 compute-0 ceph-mon[74496]: 7.17 scrub ok
Jan 31 07:10:15 compute-0 ceph-mon[74496]: 11.11 scrub starts
Jan 31 07:10:15 compute-0 ceph-mon[74496]: 11.11 scrub ok
Jan 31 07:10:15 compute-0 ceph-mon[74496]: pgmap v265: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 0 objects/s recovering
Jan 31 07:10:15 compute-0 ceph-mon[74496]: osdmap e125: 3 total, 3 up, 3 in
Jan 31 07:10:15 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 31 07:10:15 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:15.947389) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:10:15 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 31 07:10:15 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843415947486, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7522, "num_deletes": 251, "total_data_size": 9671995, "memory_usage": 9922656, "flush_reason": "Manual Compaction"}
Jan 31 07:10:15 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416016006, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7824395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 137, "largest_seqno": 7650, "table_properties": {"data_size": 7796321, "index_size": 18487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 79189, "raw_average_key_size": 23, "raw_value_size": 7730544, "raw_average_value_size": 2282, "num_data_blocks": 817, "num_entries": 3387, "num_filter_entries": 3387, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843074, "oldest_key_time": 1769843074, "file_creation_time": 1769843415, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 68705 microseconds, and 20583 cpu microseconds.
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.016100) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7824395 bytes OK
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.016125) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.017531) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.017557) EVENT_LOG_v1 {"time_micros": 1769843416017549, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.017589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9639070, prev total WAL file size 9639070, number of live WAL files 2.
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.019438) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7641KB) 13(51KB) 8(1944B)]
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416019528, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7878803, "oldest_snapshot_seqno": -1}
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3196 keys, 7835553 bytes, temperature: kUnknown
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416067239, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7835553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7807939, "index_size": 18544, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 77026, "raw_average_key_size": 24, "raw_value_size": 7743914, "raw_average_value_size": 2423, "num_data_blocks": 822, "num_entries": 3196, "num_filter_entries": 3196, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769843416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.067424) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7835553 bytes
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.068808) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.9 rd, 164.0 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3487, records dropped: 291 output_compression: NoCompression
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.068825) EVENT_LOG_v1 {"time_micros": 1769843416068817, "job": 4, "event": "compaction_finished", "compaction_time_micros": 47771, "compaction_time_cpu_micros": 12174, "output_level": 6, "num_output_files": 1, "total_output_size": 7835553, "num_input_records": 3487, "num_output_records": 3196, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416069664, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416069699, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843416069725, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 07:10:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:10:16.019310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:10:16 compute-0 sudo[103139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuwbwxauugznfeoslyiuqxrxxbxrdbbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843414.9318883-333-220343416954731/AnsiballZ_dnf.py'
Jan 31 07:10:16 compute-0 sudo[103139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:10:16 compute-0 python3.9[103141]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:10:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:16.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:16 compute-0 ceph-mon[74496]: 7.19 scrub starts
Jan 31 07:10:16 compute-0 ceph-mon[74496]: 7.19 scrub ok
Jan 31 07:10:16 compute-0 ceph-mon[74496]: 11.15 scrub starts
Jan 31 07:10:16 compute-0 ceph-mon[74496]: 11.15 scrub ok
Jan 31 07:10:16 compute-0 ceph-mon[74496]: 6.1 deep-scrub starts
Jan 31 07:10:16 compute-0 ceph-mon[74496]: 6.1 deep-scrub ok
Jan 31 07:10:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:16.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 1 objects/s recovering
Jan 31 07:10:17 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 07:10:17 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 07:10:17 compute-0 ceph-mon[74496]: pgmap v267: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 1 objects/s recovering
Jan 31 07:10:18 compute-0 sudo[103189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:18 compute-0 sudo[103189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:18 compute-0 sudo[103189]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:18 compute-0 sudo[103217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:18 compute-0 sudo[103217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:18 compute-0 sudo[103217]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:18.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:18 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Jan 31 07:10:18 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Jan 31 07:10:18 compute-0 ceph-mon[74496]: 11.18 scrub starts
Jan 31 07:10:18 compute-0 ceph-mon[74496]: 11.18 scrub ok
Jan 31 07:10:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:18.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s; 36 B/s, 2 objects/s recovering
Jan 31 07:10:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 07:10:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:10:19 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Jan 31 07:10:19 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Jan 31 07:10:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:10:19
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups', '.rgw.root']
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:10:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:10:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:10:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:10:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 07:10:20 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 31 07:10:20 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 126 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=89/89 les/c/f=90/90/0 sis=126) [0] r=0 lpr=126 pi=[89,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:10:20 compute-0 ceph-mon[74496]: 11.1f deep-scrub starts
Jan 31 07:10:20 compute-0 ceph-mon[74496]: 11.1f deep-scrub ok
Jan 31 07:10:20 compute-0 ceph-mon[74496]: 7.16 scrub starts
Jan 31 07:10:20 compute-0 ceph-mon[74496]: 7.16 scrub ok
Jan 31 07:10:20 compute-0 ceph-mon[74496]: pgmap v268: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s; 36 B/s, 2 objects/s recovering
Jan 31 07:10:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 07:10:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:20.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:20.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 31 07:10:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 07:10:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 31 07:10:21 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=89/89 les/c/f=90/90/0 sis=127) [0]/[1] r=-1 lpr=127 pi=[89,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:10:21 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=54/42 lis/c=89/89 les/c/f=90/90/0 sis=127) [0]/[1] r=-1 lpr=127 pi=[89,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 07:10:21 compute-0 ceph-mon[74496]: 10.13 deep-scrub starts
Jan 31 07:10:21 compute-0 ceph-mon[74496]: 10.13 deep-scrub ok
Jan 31 07:10:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 07:10:21 compute-0 ceph-mon[74496]: osdmap e126: 3 total, 3 up, 3 in
Jan 31 07:10:21 compute-0 ceph-mon[74496]: 7.1a scrub starts
Jan 31 07:10:21 compute-0 ceph-mon[74496]: 7.1a scrub ok
Jan 31 07:10:21 compute-0 ceph-mon[74496]: osdmap e127: 3 total, 3 up, 3 in
Jan 31 07:10:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s; 0 B/s, 1 objects/s recovering
Jan 31 07:10:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 31 07:10:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 07:10:22 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 31 07:10:22 compute-0 ceph-mon[74496]: 7.1c scrub starts
Jan 31 07:10:22 compute-0 ceph-mon[74496]: 7.1c scrub ok
Jan 31 07:10:22 compute-0 ceph-mon[74496]: pgmap v271: 305 pgs: 305 active+clean; 459 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s; 0 B/s, 1 objects/s recovering
Jan 31 07:10:22 compute-0 ceph-mon[74496]: osdmap e128: 3 total, 3 up, 3 in
Jan 31 07:10:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:22.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:22 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Jan 31 07:10:22 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Jan 31 07:10:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:10:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:10:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 31 07:10:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 07:10:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 31 07:10:23 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 129 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=127/89 les/c/f=128/90/0 sis=129) [0] r=0 lpr=129 pi=[89,129)/1 luod=0'0 crt=48'908 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 07:10:23 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 129 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=0/0 n=5 ec=54/42 lis/c=127/89 les/c/f=128/90/0 sis=129) [0] r=0 lpr=129 pi=[89,129)/1 crt=48'908 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 07:10:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 31 07:10:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 07:10:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 31 07:10:24 compute-0 ceph-mon[74496]: 10.1b deep-scrub starts
Jan 31 07:10:24 compute-0 ceph-mon[74496]: 10.1b deep-scrub ok
Jan 31 07:10:24 compute-0 ceph-mon[74496]: osdmap e129: 3 total, 3 up, 3 in
Jan 31 07:10:24 compute-0 ceph-mon[74496]: pgmap v274: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:24 compute-0 ceph-osd[84816]: osd.0 pg_epoch: 130 pg[9.1f( v 48'908 (0'0,48'908] local-lis/les=129/130 n=5 ec=54/42 lis/c=127/89 les/c/f=128/90/0 sis=129) [0] r=0 lpr=129 pi=[89,129)/1 crt=48'908 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 07:10:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:24.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:25.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:25 compute-0 ceph-mon[74496]: osdmap e130: 3 total, 3 up, 3 in
Jan 31 07:10:25 compute-0 ceph-mon[74496]: 10.6 scrub starts
Jan 31 07:10:25 compute-0 ceph-mon[74496]: 10.6 scrub ok
Jan 31 07:10:25 compute-0 ceph-mon[74496]: 7.11 deep-scrub starts
Jan 31 07:10:25 compute-0 ceph-mon[74496]: 7.11 deep-scrub ok
Jan 31 07:10:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:25 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 07:10:25 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 07:10:26 compute-0 ceph-mon[74496]: 10.7 scrub starts
Jan 31 07:10:26 compute-0 ceph-mon[74496]: 10.7 scrub ok
Jan 31 07:10:26 compute-0 ceph-mon[74496]: pgmap v276: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:26.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:27.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:27 compute-0 ceph-mon[74496]: 10.19 scrub starts
Jan 31 07:10:27 compute-0 ceph-mon[74496]: 10.19 scrub ok
Jan 31 07:10:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:28 compute-0 ceph-mon[74496]: pgmap v277: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:28 compute-0 ceph-mon[74496]: 7.5 scrub starts
Jan 31 07:10:28 compute-0 ceph-mon[74496]: 7.5 scrub ok
Jan 31 07:10:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:28.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:29.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Jan 31 07:10:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:29 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.18 deep-scrub starts
Jan 31 07:10:29 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.18 deep-scrub ok
Jan 31 07:10:30 compute-0 ceph-mon[74496]: pgmap v278: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Jan 31 07:10:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:10:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:30.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:10:30 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 31 07:10:30 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 31 07:10:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:31.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 07:10:31 compute-0 ceph-mon[74496]: 10.18 deep-scrub starts
Jan 31 07:10:31 compute-0 ceph-mon[74496]: 10.18 deep-scrub ok
Jan 31 07:10:32 compute-0 ceph-mon[74496]: 10.8 scrub starts
Jan 31 07:10:32 compute-0 ceph-mon[74496]: 10.8 scrub ok
Jan 31 07:10:32 compute-0 ceph-mon[74496]: pgmap v279: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 07:10:32 compute-0 ceph-mon[74496]: 8.f scrub starts
Jan 31 07:10:32 compute-0 ceph-mon[74496]: 8.f scrub ok
Jan 31 07:10:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:32.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:32 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 07:10:32 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 07:10:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:33.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 07:10:33 compute-0 ceph-mon[74496]: 10.5 scrub starts
Jan 31 07:10:33 compute-0 ceph-mon[74496]: 10.5 scrub ok
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:10:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:10:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:34.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:34 compute-0 ceph-mon[74496]: 10.9 scrub starts
Jan 31 07:10:34 compute-0 ceph-mon[74496]: 10.9 scrub ok
Jan 31 07:10:34 compute-0 ceph-mon[74496]: pgmap v280: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 07:10:34 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 07:10:34 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 07:10:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:10:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:35.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:10:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:35 compute-0 ceph-mon[74496]: 10.2 scrub starts
Jan 31 07:10:35 compute-0 ceph-mon[74496]: 10.2 scrub ok
Jan 31 07:10:35 compute-0 ceph-mon[74496]: 8.9 scrub starts
Jan 31 07:10:35 compute-0 ceph-mon[74496]: 8.9 scrub ok
Jan 31 07:10:35 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 31 07:10:35 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 31 07:10:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:36.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:36 compute-0 ceph-mon[74496]: pgmap v281: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:36 compute-0 ceph-mon[74496]: 7.1b scrub starts
Jan 31 07:10:36 compute-0 ceph-mon[74496]: 7.1b scrub ok
Jan 31 07:10:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:10:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:37.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:10:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:38 compute-0 sudo[103318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:38 compute-0 sudo[103318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:38 compute-0 sudo[103318]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:38 compute-0 sudo[103343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:38 compute-0 sudo[103343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:38 compute-0 sudo[103343]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:38.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:38 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 31 07:10:38 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 31 07:10:38 compute-0 ceph-mon[74496]: 10.a scrub starts
Jan 31 07:10:38 compute-0 ceph-mon[74496]: 10.a scrub ok
Jan 31 07:10:38 compute-0 ceph-mon[74496]: pgmap v282: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:39.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:39 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 07:10:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:39 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 07:10:39 compute-0 ceph-mon[74496]: 10.b deep-scrub starts
Jan 31 07:10:39 compute-0 ceph-mon[74496]: 10.b deep-scrub ok
Jan 31 07:10:39 compute-0 ceph-mon[74496]: 10.14 scrub starts
Jan 31 07:10:39 compute-0 ceph-mon[74496]: 10.14 scrub ok
Jan 31 07:10:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:40.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:40 compute-0 ceph-mon[74496]: pgmap v283: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1 B/s, 0 objects/s recovering
Jan 31 07:10:40 compute-0 ceph-mon[74496]: 7.e scrub starts
Jan 31 07:10:40 compute-0 ceph-mon[74496]: 7.e scrub ok
Jan 31 07:10:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:41.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:41 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 31 07:10:41 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 31 07:10:41 compute-0 ceph-mon[74496]: pgmap v284: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:42.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:42 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 07:10:42 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 07:10:43 compute-0 ceph-mon[74496]: 10.c scrub starts
Jan 31 07:10:43 compute-0 ceph-mon[74496]: 10.c scrub ok
Jan 31 07:10:43 compute-0 ceph-mon[74496]: 7.f scrub starts
Jan 31 07:10:43 compute-0 ceph-mon[74496]: 7.f scrub ok
Jan 31 07:10:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:43.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:44 compute-0 ceph-mon[74496]: 10.15 scrub starts
Jan 31 07:10:44 compute-0 ceph-mon[74496]: 10.15 scrub ok
Jan 31 07:10:44 compute-0 ceph-mon[74496]: pgmap v285: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:44.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:45.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:45 compute-0 ceph-mon[74496]: 10.1 scrub starts
Jan 31 07:10:45 compute-0 ceph-mon[74496]: 10.1 scrub ok
Jan 31 07:10:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:46 compute-0 ceph-mon[74496]: pgmap v286: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:46.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:47.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:47 compute-0 ceph-mon[74496]: 10.d deep-scrub starts
Jan 31 07:10:47 compute-0 ceph-mon[74496]: 10.d deep-scrub ok
Jan 31 07:10:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:48 compute-0 ceph-mon[74496]: 10.e scrub starts
Jan 31 07:10:48 compute-0 ceph-mon[74496]: 10.e scrub ok
Jan 31 07:10:48 compute-0 ceph-mon[74496]: pgmap v287: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:48 compute-0 ceph-mon[74496]: 8.3 scrub starts
Jan 31 07:10:48 compute-0 ceph-mon[74496]: 8.3 scrub ok
Jan 31 07:10:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:48.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:48 compute-0 systemd[76130]: Created slice User Background Tasks Slice.
Jan 31 07:10:48 compute-0 systemd[76130]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 07:10:48 compute-0 systemd[76130]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 07:10:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:49.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:10:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:10:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:10:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:10:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:10:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:10:50 compute-0 ceph-mon[74496]: pgmap v288: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:50.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:50 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 31 07:10:50 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 31 07:10:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:51.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:51 compute-0 ceph-mon[74496]: 10.16 deep-scrub starts
Jan 31 07:10:51 compute-0 ceph-mon[74496]: 10.16 deep-scrub ok
Jan 31 07:10:51 compute-0 ceph-mon[74496]: 7.2 scrub starts
Jan 31 07:10:51 compute-0 ceph-mon[74496]: 7.2 scrub ok
Jan 31 07:10:52 compute-0 ceph-mon[74496]: pgmap v289: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:52.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:52 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 31 07:10:52 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 31 07:10:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:53.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:53 compute-0 ceph-mon[74496]: 7.18 scrub starts
Jan 31 07:10:53 compute-0 ceph-mon[74496]: 7.18 scrub ok
Jan 31 07:10:54 compute-0 ceph-mon[74496]: pgmap v290: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:54 compute-0 ceph-mon[74496]: 7.1d scrub starts
Jan 31 07:10:54 compute-0 ceph-mon[74496]: 7.1d scrub ok
Jan 31 07:10:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:54.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:10:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:10:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:55.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:10:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:55 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 31 07:10:55 compute-0 ceph-mon[74496]: 11.a scrub starts
Jan 31 07:10:55 compute-0 ceph-mon[74496]: 11.a scrub ok
Jan 31 07:10:55 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 31 07:10:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:56.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:56 compute-0 ceph-mon[74496]: pgmap v291: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:56 compute-0 ceph-mon[74496]: 7.8 scrub starts
Jan 31 07:10:56 compute-0 ceph-mon[74496]: 7.8 scrub ok
Jan 31 07:10:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:57.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:58 compute-0 ceph-mon[74496]: 10.17 scrub starts
Jan 31 07:10:58 compute-0 ceph-mon[74496]: 10.17 scrub ok
Jan 31 07:10:58 compute-0 ceph-mon[74496]: pgmap v292: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:58 compute-0 sudo[103408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:58 compute-0 sudo[103408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:58 compute-0 sudo[103408]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:58 compute-0 sudo[103139]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:58 compute-0 sudo[103433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:10:58 compute-0 sudo[103433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:10:58 compute-0 sudo[103433]: pam_unix(sudo:session): session closed for user root
Jan 31 07:10:58 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 31 07:10:58 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 31 07:10:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:10:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:10:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:10:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:10:59.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:10:59 compute-0 ceph-mon[74496]: 11.e scrub starts
Jan 31 07:10:59 compute-0 ceph-mon[74496]: 11.e scrub ok
Jan 31 07:10:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:10:59 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 07:10:59 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 07:10:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:00 compute-0 ceph-mon[74496]: 7.3 scrub starts
Jan 31 07:11:00 compute-0 ceph-mon[74496]: 7.3 scrub ok
Jan 31 07:11:00 compute-0 ceph-mon[74496]: 10.1a scrub starts
Jan 31 07:11:00 compute-0 ceph-mon[74496]: 10.1a scrub ok
Jan 31 07:11:00 compute-0 ceph-mon[74496]: pgmap v293: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:00 compute-0 ceph-mon[74496]: 10.11 scrub starts
Jan 31 07:11:00 compute-0 ceph-mon[74496]: 10.11 scrub ok
Jan 31 07:11:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:01 compute-0 ceph-mon[74496]: 7.b scrub starts
Jan 31 07:11:01 compute-0 ceph-mon[74496]: 7.b scrub ok
Jan 31 07:11:01 compute-0 ceph-mon[74496]: 10.1c scrub starts
Jan 31 07:11:01 compute-0 ceph-mon[74496]: 10.1c scrub ok
Jan 31 07:11:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:02 compute-0 ceph-mon[74496]: 10.1d scrub starts
Jan 31 07:11:02 compute-0 ceph-mon[74496]: 10.1d scrub ok
Jan 31 07:11:02 compute-0 ceph-mon[74496]: pgmap v294: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:02 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.6 deep-scrub starts
Jan 31 07:11:02 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.6 deep-scrub ok
Jan 31 07:11:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:02.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:03.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:03 compute-0 ceph-mon[74496]: 8.a scrub starts
Jan 31 07:11:03 compute-0 ceph-mon[74496]: 8.a scrub ok
Jan 31 07:11:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:03 compute-0 sudo[103610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfufijuwvpyucvihbjaqsiartqomeefl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843463.5358777-369-212206770364406/AnsiballZ_command.py'
Jan 31 07:11:03 compute-0 sudo[103610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:04 compute-0 python3.9[103612]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:11:04 compute-0 ceph-mon[74496]: 7.6 deep-scrub starts
Jan 31 07:11:04 compute-0 ceph-mon[74496]: 7.6 deep-scrub ok
Jan 31 07:11:04 compute-0 ceph-mon[74496]: pgmap v295: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:04 compute-0 ceph-mon[74496]: 8.d scrub starts
Jan 31 07:11:04 compute-0 ceph-mon[74496]: 8.d scrub ok
Jan 31 07:11:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:04.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:04 compute-0 sudo[103610]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:05.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:05 compute-0 ceph-mon[74496]: 10.1f scrub starts
Jan 31 07:11:05 compute-0 ceph-mon[74496]: 10.1f scrub ok
Jan 31 07:11:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:05 compute-0 sudo[103898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpibsojvgyrkbgwsynaidnyavenldgbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843464.945094-393-245916415974107/AnsiballZ_selinux.py'
Jan 31 07:11:05 compute-0 sudo[103898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:06 compute-0 python3.9[103900]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 07:11:06 compute-0 sudo[103898]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:06 compute-0 ceph-mon[74496]: pgmap v296: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:06 compute-0 ceph-mon[74496]: 11.8 deep-scrub starts
Jan 31 07:11:06 compute-0 ceph-mon[74496]: 11.8 deep-scrub ok
Jan 31 07:11:06 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 07:11:06 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 07:11:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:06.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:07 compute-0 sudo[104050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpgcjsmmyvsbkjlhfmzlhnhugndyegdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843466.7346308-426-33859209102079/AnsiballZ_command.py'
Jan 31 07:11:07 compute-0 sudo[104050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:07.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:07 compute-0 python3.9[104052]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 07:11:07 compute-0 sudo[104050]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:07 compute-0 sudo[104203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzwspgmiqsfphqxbqdwrrvoobcwioqbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843467.3353808-450-168555839665075/AnsiballZ_file.py'
Jan 31 07:11:07 compute-0 sudo[104203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:07 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.10 deep-scrub starts
Jan 31 07:11:07 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.10 deep-scrub ok
Jan 31 07:11:07 compute-0 python3.9[104205]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:11:07 compute-0 sudo[104203]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:08 compute-0 sudo[104355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvkgqmqfmawqnicjaiydyficjxbmhpqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843468.0618167-474-26322446780912/AnsiballZ_mount.py'
Jan 31 07:11:08 compute-0 sudo[104355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:08 compute-0 ceph-mon[74496]: 7.9 scrub starts
Jan 31 07:11:08 compute-0 ceph-mon[74496]: 7.9 scrub ok
Jan 31 07:11:08 compute-0 ceph-mon[74496]: 11.1a scrub starts
Jan 31 07:11:08 compute-0 ceph-mon[74496]: 11.1a scrub ok
Jan 31 07:11:08 compute-0 ceph-mon[74496]: pgmap v297: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:08 compute-0 ceph-mon[74496]: 7.a scrub starts
Jan 31 07:11:08 compute-0 ceph-mon[74496]: 7.a scrub ok
Jan 31 07:11:08 compute-0 python3.9[104357]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 07:11:08 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 31 07:11:08 compute-0 sudo[104355]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:08 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 31 07:11:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:08.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:09.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:09 compute-0 sudo[104382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:09 compute-0 sudo[104382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:09 compute-0 sudo[104382]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:09 compute-0 sudo[104408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:11:09 compute-0 sudo[104408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:09 compute-0 sudo[104408]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:09 compute-0 sudo[104433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:09 compute-0 sudo[104433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:09 compute-0 sudo[104433]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:09 compute-0 sudo[104458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:11:09 compute-0 sudo[104458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:09 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Jan 31 07:11:09 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Jan 31 07:11:09 compute-0 ceph-mon[74496]: 7.10 deep-scrub starts
Jan 31 07:11:09 compute-0 ceph-mon[74496]: 7.10 deep-scrub ok
Jan 31 07:11:09 compute-0 ceph-mon[74496]: 8.19 scrub starts
Jan 31 07:11:09 compute-0 ceph-mon[74496]: 8.19 scrub ok
Jan 31 07:11:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:10 compute-0 podman[104553]: 2026-01-31 07:11:10.141061356 +0000 UTC m=+0.425043855 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:11:10 compute-0 podman[104553]: 2026-01-31 07:11:10.282446215 +0000 UTC m=+0.566428724 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:11:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:11:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:11:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:10.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:10 compute-0 ceph-mon[74496]: 7.4 scrub starts
Jan 31 07:11:10 compute-0 ceph-mon[74496]: 7.4 scrub ok
Jan 31 07:11:10 compute-0 ceph-mon[74496]: pgmap v298: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:10 compute-0 ceph-mon[74496]: 10.10 scrub starts
Jan 31 07:11:10 compute-0 ceph-mon[74496]: 10.10 scrub ok
Jan 31 07:11:10 compute-0 ceph-mon[74496]: 7.1e deep-scrub starts
Jan 31 07:11:10 compute-0 ceph-mon[74496]: 7.1e deep-scrub ok
Jan 31 07:11:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:10 compute-0 podman[104705]: 2026-01-31 07:11:10.913351881 +0000 UTC m=+0.100570916 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:11:11 compute-0 podman[104705]: 2026-01-31 07:11:11.03351257 +0000 UTC m=+0.220731535 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:11:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:11.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:11 compute-0 sudo[104909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-harskbeemsjvvilfppgkilcjvwybyhpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843471.2990723-558-204887471599346/AnsiballZ_file.py'
Jan 31 07:11:11 compute-0 sudo[104909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:11 compute-0 podman[104845]: 2026-01-31 07:11:11.700874558 +0000 UTC m=+0.251618119 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, name=keepalived)
Jan 31 07:11:11 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Jan 31 07:11:11 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Jan 31 07:11:11 compute-0 podman[104845]: 2026-01-31 07:11:11.747453346 +0000 UTC m=+0.298196947 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, release=1793, io.openshift.tags=Ceph keepalived, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 07:11:11 compute-0 python3.9[104911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:11:11 compute-0 sudo[104458]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:11 compute-0 sudo[104909]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:11:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:11:12 compute-0 ceph-mon[74496]: pgmap v299: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:12 compute-0 sudo[105011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:12 compute-0 sudo[105011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:12 compute-0 sudo[105011]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:12 compute-0 sudo[105054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:11:12 compute-0 sudo[105054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:12 compute-0 sudo[105054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:12 compute-0 sudo[105079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:12 compute-0 sudo[105079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:12 compute-0 sudo[105079]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:12 compute-0 sudo[105104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:11:12 compute-0 sudo[105104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:12 compute-0 sudo[105180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auvymzwfnuufwwzelvabnqzgoyxqallr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843472.1081662-582-219261544163952/AnsiballZ_stat.py'
Jan 31 07:11:12 compute-0 sudo[105180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:12.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:12 compute-0 python3.9[105188]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:11:12 compute-0 sudo[105180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:12 compute-0 sudo[105104]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:11:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:11:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:11:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:11:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:11:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e3528d46-7a76-4c10-ae7d-609e96cf8c10 does not exist
Jan 31 07:11:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c84a748a-1c90-4f76-9e9c-e453a26af5b7 does not exist
Jan 31 07:11:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4e5a3d1e-3d39-40b3-a765-ea24247fd131 does not exist
Jan 31 07:11:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:11:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:11:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:11:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:11:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:11:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:11:12 compute-0 sudo[105216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:12 compute-0 sudo[105216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:12 compute-0 sudo[105216]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:12 compute-0 sudo[105262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:11:12 compute-0 sudo[105262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:12 compute-0 sudo[105262]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:13 compute-0 sudo[105293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:13 compute-0 sudo[105293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:13 compute-0 sudo[105293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:13 compute-0 sudo[105381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfzlfqoslrbmmspxwwhsmslcwnfzbicc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843472.1081662-582-219261544163952/AnsiballZ_file.py'
Jan 31 07:11:13 compute-0 sudo[105381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:13 compute-0 sudo[105348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:11:13 compute-0 sudo[105348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:13.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:13 compute-0 ceph-mon[74496]: 7.13 deep-scrub starts
Jan 31 07:11:13 compute-0 ceph-mon[74496]: 7.13 deep-scrub ok
Jan 31 07:11:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:13 compute-0 ceph-mon[74496]: 11.12 scrub starts
Jan 31 07:11:13 compute-0 ceph-mon[74496]: 11.12 scrub ok
Jan 31 07:11:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:11:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:11:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:11:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:11:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:11:13 compute-0 python3.9[105389]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:11:13 compute-0 sudo[105381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:13 compute-0 podman[105432]: 2026-01-31 07:11:13.3275069 +0000 UTC m=+0.056659653 container create cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:11:13 compute-0 systemd[1]: Started libpod-conmon-cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a.scope.
Jan 31 07:11:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:11:13 compute-0 podman[105432]: 2026-01-31 07:11:13.288415384 +0000 UTC m=+0.017568167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:11:13 compute-0 podman[105432]: 2026-01-31 07:11:13.391788789 +0000 UTC m=+0.120941562 container init cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:11:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:13 compute-0 podman[105432]: 2026-01-31 07:11:13.396825902 +0000 UTC m=+0.125978655 container start cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:11:13 compute-0 quirky_mccarthy[105472]: 167 167
Jan 31 07:11:13 compute-0 systemd[1]: libpod-cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a.scope: Deactivated successfully.
Jan 31 07:11:13 compute-0 podman[105432]: 2026-01-31 07:11:13.403118389 +0000 UTC m=+0.132271202 container attach cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:11:13 compute-0 podman[105432]: 2026-01-31 07:11:13.404535199 +0000 UTC m=+0.133687952 container died cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d65bf4f5d9ecb2f2a95ff42304debf347c9b910ce5186a9fbf89eca453af7a-merged.mount: Deactivated successfully.
Jan 31 07:11:13 compute-0 podman[105432]: 2026-01-31 07:11:13.447010711 +0000 UTC m=+0.176163484 container remove cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:11:13 compute-0 systemd[1]: libpod-conmon-cff3795c330336258dad3fc843b8f2d706780b0653c541164c52ab64b52c016a.scope: Deactivated successfully.
Jan 31 07:11:13 compute-0 podman[105496]: 2026-01-31 07:11:13.553852753 +0000 UTC m=+0.039879798 container create a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:11:13 compute-0 systemd[1]: Started libpod-conmon-a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70.scope.
Jan 31 07:11:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b24958090b5e5f27f77753040049cb6a8529abf00a17ac364dc5e01cb7a953/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b24958090b5e5f27f77753040049cb6a8529abf00a17ac364dc5e01cb7a953/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b24958090b5e5f27f77753040049cb6a8529abf00a17ac364dc5e01cb7a953/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b24958090b5e5f27f77753040049cb6a8529abf00a17ac364dc5e01cb7a953/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b24958090b5e5f27f77753040049cb6a8529abf00a17ac364dc5e01cb7a953/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:13 compute-0 podman[105496]: 2026-01-31 07:11:13.62866131 +0000 UTC m=+0.114688375 container init a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:11:13 compute-0 podman[105496]: 2026-01-31 07:11:13.537869372 +0000 UTC m=+0.023896477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:11:13 compute-0 podman[105496]: 2026-01-31 07:11:13.639953139 +0000 UTC m=+0.125980184 container start a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ganguly, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:11:13 compute-0 podman[105496]: 2026-01-31 07:11:13.643188961 +0000 UTC m=+0.129216006 container attach a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ganguly, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:11:14 compute-0 ceph-mon[74496]: 8.12 scrub starts
Jan 31 07:11:14 compute-0 ceph-mon[74496]: 8.12 scrub ok
Jan 31 07:11:14 compute-0 ceph-mon[74496]: pgmap v300: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:14 compute-0 cool_ganguly[105512]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:11:14 compute-0 cool_ganguly[105512]: --> relative data size: 1.0
Jan 31 07:11:14 compute-0 cool_ganguly[105512]: --> All data devices are unavailable
Jan 31 07:11:14 compute-0 systemd[1]: libpod-a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70.scope: Deactivated successfully.
Jan 31 07:11:14 compute-0 podman[105496]: 2026-01-31 07:11:14.386144757 +0000 UTC m=+0.872171812 container died a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b24958090b5e5f27f77753040049cb6a8529abf00a17ac364dc5e01cb7a953-merged.mount: Deactivated successfully.
Jan 31 07:11:14 compute-0 podman[105496]: 2026-01-31 07:11:14.440905886 +0000 UTC m=+0.926932941 container remove a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:11:14 compute-0 systemd[1]: libpod-conmon-a2207741641181d59c1de6647250c55bd5f9d2d7d17babe152886755b1d1ea70.scope: Deactivated successfully.
Jan 31 07:11:14 compute-0 sudo[105348]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:14 compute-0 sudo[105573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:14 compute-0 sudo[105573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:14 compute-0 sudo[105573]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:14 compute-0 sudo[105621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:11:14 compute-0 sudo[105621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:14 compute-0 sudo[105621]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:14 compute-0 sudo[105663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:14 compute-0 sudo[105663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:14 compute-0 sudo[105663]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:14 compute-0 sudo[105760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwdzdyddtqhctshytgcrrodclejfyxhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843474.4558136-645-35652417178756/AnsiballZ_stat.py'
Jan 31 07:11:14 compute-0 sudo[105760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:14 compute-0 sudo[105718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:11:14 compute-0 sudo[105718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:14.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:14 compute-0 python3.9[105764]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:11:14 compute-0 sudo[105760]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:14 compute-0 podman[105809]: 2026-01-31 07:11:14.956469599 +0000 UTC m=+0.041758282 container create 6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:11:14 compute-0 systemd[1]: Started libpod-conmon-6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091.scope.
Jan 31 07:11:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:11:15 compute-0 podman[105809]: 2026-01-31 07:11:14.934358564 +0000 UTC m=+0.019647337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:11:15 compute-0 podman[105809]: 2026-01-31 07:11:15.03002662 +0000 UTC m=+0.115315353 container init 6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:11:15 compute-0 podman[105809]: 2026-01-31 07:11:15.035796553 +0000 UTC m=+0.121085246 container start 6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:11:15 compute-0 podman[105809]: 2026-01-31 07:11:15.039313993 +0000 UTC m=+0.124602726 container attach 6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:11:15 compute-0 zealous_leakey[105850]: 167 167
Jan 31 07:11:15 compute-0 systemd[1]: libpod-6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091.scope: Deactivated successfully.
Jan 31 07:11:15 compute-0 podman[105809]: 2026-01-31 07:11:15.040568838 +0000 UTC m=+0.125857531 container died 6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:11:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:15.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-dda3f9584d3172228d724d04b9231abe67a2b5d9713cc2e66fd396f6e644999d-merged.mount: Deactivated successfully.
Jan 31 07:11:15 compute-0 podman[105809]: 2026-01-31 07:11:15.081445474 +0000 UTC m=+0.166734177 container remove 6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:11:15 compute-0 systemd[1]: libpod-conmon-6af1642dff5c136072ccc036a1d074dd77bacb743420158129c571c7294bc091.scope: Deactivated successfully.
Jan 31 07:11:15 compute-0 podman[105875]: 2026-01-31 07:11:15.190450658 +0000 UTC m=+0.040860717 container create c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:11:15 compute-0 ceph-mon[74496]: 10.4 scrub starts
Jan 31 07:11:15 compute-0 ceph-mon[74496]: 10.4 scrub ok
Jan 31 07:11:15 compute-0 systemd[1]: Started libpod-conmon-c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82.scope.
Jan 31 07:11:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9efa3bc849182c52806113453fb8b576077b949042293c6b148325fd8e059019/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9efa3bc849182c52806113453fb8b576077b949042293c6b148325fd8e059019/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9efa3bc849182c52806113453fb8b576077b949042293c6b148325fd8e059019/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9efa3bc849182c52806113453fb8b576077b949042293c6b148325fd8e059019/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:15 compute-0 podman[105875]: 2026-01-31 07:11:15.26403534 +0000 UTC m=+0.114445389 container init c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 31 07:11:15 compute-0 podman[105875]: 2026-01-31 07:11:15.171574984 +0000 UTC m=+0.021985033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:11:15 compute-0 podman[105875]: 2026-01-31 07:11:15.273732573 +0000 UTC m=+0.124142602 container start c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shirley, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:11:15 compute-0 podman[105875]: 2026-01-31 07:11:15.277358346 +0000 UTC m=+0.127768445 container attach c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:11:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]: {
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:     "0": [
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:         {
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "devices": [
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "/dev/loop3"
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             ],
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "lv_name": "ceph_lv0",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "lv_size": "7511998464",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "name": "ceph_lv0",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "tags": {
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.cluster_name": "ceph",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.crush_device_class": "",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.encrypted": "0",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.osd_id": "0",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.type": "block",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:                 "ceph.vdo": "0"
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             },
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "type": "block",
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:             "vg_name": "ceph_vg0"
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:         }
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]:     ]
Jan 31 07:11:15 compute-0 quizzical_shirley[105892]: }
Jan 31 07:11:15 compute-0 systemd[1]: libpod-c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82.scope: Deactivated successfully.
Jan 31 07:11:15 compute-0 podman[105875]: 2026-01-31 07:11:15.971505252 +0000 UTC m=+0.821915321 container died c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:11:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9efa3bc849182c52806113453fb8b576077b949042293c6b148325fd8e059019-merged.mount: Deactivated successfully.
Jan 31 07:11:16 compute-0 podman[105875]: 2026-01-31 07:11:16.024404208 +0000 UTC m=+0.874814237 container remove c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shirley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:11:16 compute-0 systemd[1]: libpod-conmon-c3de4fba66731d3cfd91f045a9a78e85fd65f348e40c22b4019e24f3755d9e82.scope: Deactivated successfully.
Jan 31 07:11:16 compute-0 sudo[105718]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:16 compute-0 sudo[105986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:16 compute-0 sudo[105986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:16 compute-0 sudo[105986]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:16 compute-0 sudo[106017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:11:16 compute-0 sudo[106017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:16 compute-0 sudo[106017]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:16 compute-0 sudo[106063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:16 compute-0 sudo[106109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpvibwbjysbdqaxsaorgdoxgrgdxeexs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843475.6702266-684-168977922551873/AnsiballZ_getent.py'
Jan 31 07:11:16 compute-0 sudo[106063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:16 compute-0 sudo[106109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:16 compute-0 sudo[106063]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:16 compute-0 ceph-mon[74496]: pgmap v301: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:16 compute-0 ceph-mon[74496]: 8.b scrub starts
Jan 31 07:11:16 compute-0 ceph-mon[74496]: 8.b scrub ok
Jan 31 07:11:16 compute-0 sudo[106114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:11:16 compute-0 sudo[106114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:16 compute-0 python3.9[106113]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 07:11:16 compute-0 sudo[106109]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:16 compute-0 podman[106205]: 2026-01-31 07:11:16.497381717 +0000 UTC m=+0.042202215 container create 1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:11:16 compute-0 systemd[1]: Started libpod-conmon-1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3.scope.
Jan 31 07:11:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:11:16 compute-0 podman[106205]: 2026-01-31 07:11:16.564918328 +0000 UTC m=+0.109738866 container init 1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_panini, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:11:16 compute-0 podman[106205]: 2026-01-31 07:11:16.570682671 +0000 UTC m=+0.115503179 container start 1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_panini, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:11:16 compute-0 podman[106205]: 2026-01-31 07:11:16.574781177 +0000 UTC m=+0.119601715 container attach 1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:11:16 compute-0 friendly_panini[106221]: 167 167
Jan 31 07:11:16 compute-0 systemd[1]: libpod-1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3.scope: Deactivated successfully.
Jan 31 07:11:16 compute-0 podman[106205]: 2026-01-31 07:11:16.481630091 +0000 UTC m=+0.026450619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:11:16 compute-0 podman[106205]: 2026-01-31 07:11:16.577607137 +0000 UTC m=+0.122427645 container died 1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:11:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-469862376e88278ea4b427a7eed4d7bc71df18feb30eebeafef5fafb2e2b8551-merged.mount: Deactivated successfully.
Jan 31 07:11:16 compute-0 podman[106205]: 2026-01-31 07:11:16.628069894 +0000 UTC m=+0.172890372 container remove 1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:11:16 compute-0 systemd[1]: libpod-conmon-1d57f1dea9d38d5e690304289d03cf4f61a03307a40dc091d883ab6fc3f4def3.scope: Deactivated successfully.
Jan 31 07:11:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:11:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:16.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:11:16 compute-0 podman[106282]: 2026-01-31 07:11:16.777069299 +0000 UTC m=+0.052612499 container create 63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_heisenberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:11:16 compute-0 systemd[1]: Started libpod-conmon-63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e.scope.
Jan 31 07:11:16 compute-0 podman[106282]: 2026-01-31 07:11:16.757404702 +0000 UTC m=+0.032947882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:11:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3739e928b816aec6b32d99be35c7748c54c4d3b412cd300d4e9f2e1dba474047/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3739e928b816aec6b32d99be35c7748c54c4d3b412cd300d4e9f2e1dba474047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3739e928b816aec6b32d99be35c7748c54c4d3b412cd300d4e9f2e1dba474047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3739e928b816aec6b32d99be35c7748c54c4d3b412cd300d4e9f2e1dba474047/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:11:16 compute-0 podman[106282]: 2026-01-31 07:11:16.870135862 +0000 UTC m=+0.145679062 container init 63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:11:16 compute-0 podman[106282]: 2026-01-31 07:11:16.881035289 +0000 UTC m=+0.156578489 container start 63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_heisenberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 07:11:16 compute-0 podman[106282]: 2026-01-31 07:11:16.885540957 +0000 UTC m=+0.161084157 container attach 63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:11:16 compute-0 sudo[106393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfcvzzfelxwasaevvlwihmvrucgphoxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843476.6801856-714-224457568984961/AnsiballZ_getent.py'
Jan 31 07:11:16 compute-0 sudo[106393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:17.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:17 compute-0 python3.9[106395]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 07:11:17 compute-0 sudo[106393]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]: {
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]:         "osd_id": 0,
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]:         "type": "bluestore"
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]:     }
Jan 31 07:11:17 compute-0 intelligent_heisenberg[106338]: }
Jan 31 07:11:17 compute-0 systemd[1]: libpod-63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e.scope: Deactivated successfully.
Jan 31 07:11:17 compute-0 podman[106282]: 2026-01-31 07:11:17.682399198 +0000 UTC m=+0.957942368 container died 63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_heisenberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:11:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3739e928b816aec6b32d99be35c7748c54c4d3b412cd300d4e9f2e1dba474047-merged.mount: Deactivated successfully.
Jan 31 07:11:17 compute-0 podman[106282]: 2026-01-31 07:11:17.750804923 +0000 UTC m=+1.026348083 container remove 63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:11:17 compute-0 systemd[1]: libpod-conmon-63f998e0a78ad4fcd45f63cfe1f85511d47060e146146d424c50844a727f049e.scope: Deactivated successfully.
Jan 31 07:11:17 compute-0 sudo[106114]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:17 compute-0 sudo[106575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhxkngxjaoxclqleipktafvjfxmcakex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843477.3373902-738-42828793971961/AnsiballZ_group.py'
Jan 31 07:11:17 compute-0 sudo[106575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:11:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:11:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2215c1af-57da-466f-8e56-26e1219963fc does not exist
Jan 31 07:11:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b1f22ba1-7402-4521-a72b-206bcc014c89 does not exist
Jan 31 07:11:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 37cf0a83-7934-4c30-ae6d-73f07c710a36 does not exist
Jan 31 07:11:17 compute-0 sudo[106578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:17 compute-0 sudo[106578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:17 compute-0 sudo[106578]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:17 compute-0 sudo[106603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:11:17 compute-0 sudo[106603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:17 compute-0 sudo[106603]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:17 compute-0 python3.9[106577]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:11:17 compute-0 sudo[106575]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-0 sudo[106696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:18 compute-0 sudo[106696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:18 compute-0 sudo[106696]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-0 sudo[106730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:18 compute-0 sudo[106730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:18 compute-0 sudo[106730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-0 sudo[106827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojtmfstnhylcdzbscixkchhcdchyblol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843478.2590756-765-63152209387086/AnsiballZ_file.py'
Jan 31 07:11:18 compute-0 sudo[106827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:18 compute-0 python3.9[106829]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 07:11:18 compute-0 sudo[106827]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:18 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 31 07:11:18 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 31 07:11:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:18.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:18 compute-0 ceph-mon[74496]: pgmap v302: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:11:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:19.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:19 compute-0 sudo[106980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apbxzffeibxxivqoospzgowjoggyjvbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843479.1116235-798-44654893270768/AnsiballZ_dnf.py'
Jan 31 07:11:19 compute-0 sudo[106980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:19 compute-0 python3.9[106982]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:11:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:19 compute-0 ceph-mon[74496]: 8.1f deep-scrub starts
Jan 31 07:11:19 compute-0 ceph-mon[74496]: 8.1f deep-scrub ok
Jan 31 07:11:19 compute-0 ceph-mon[74496]: 6.6 scrub starts
Jan 31 07:11:19 compute-0 ceph-mon[74496]: 6.6 scrub ok
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:11:19
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', 'default.rgw.meta', '.mgr', 'backups']
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:11:19 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:11:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:20.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:20 compute-0 ceph-mon[74496]: 8.10 scrub starts
Jan 31 07:11:20 compute-0 ceph-mon[74496]: 8.10 scrub ok
Jan 31 07:11:20 compute-0 ceph-mon[74496]: pgmap v303: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:20 compute-0 sudo[106980]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:21.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:21 compute-0 sudo[107134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocapzrikzfbbcworuclhuvbqkohmjhjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843481.1678069-822-17103913517734/AnsiballZ_file.py'
Jan 31 07:11:21 compute-0 sudo[107134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:21 compute-0 python3.9[107136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:11:21 compute-0 sudo[107134]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:21 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Jan 31 07:11:21 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Jan 31 07:11:21 compute-0 ceph-mon[74496]: 10.3 scrub starts
Jan 31 07:11:21 compute-0 ceph-mon[74496]: 10.3 scrub ok
Jan 31 07:11:22 compute-0 sudo[107286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdadimumububsqnmndycossqakqcihkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843481.8124602-846-120961571040729/AnsiballZ_stat.py'
Jan 31 07:11:22 compute-0 sudo[107286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:22 compute-0 python3.9[107288]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:11:22 compute-0 sudo[107286]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:22 compute-0 sudo[107364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqplvxwifvksjojqoxwemibxyuqivbhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843481.8124602-846-120961571040729/AnsiballZ_file.py'
Jan 31 07:11:22 compute-0 sudo[107364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:22 compute-0 python3.9[107366]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:11:22 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Jan 31 07:11:22 compute-0 sudo[107364]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:22 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Jan 31 07:11:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:22.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:22 compute-0 ceph-mon[74496]: pgmap v304: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:22 compute-0 ceph-mon[74496]: 11.19 scrub starts
Jan 31 07:11:22 compute-0 ceph-mon[74496]: 6.9 deep-scrub starts
Jan 31 07:11:22 compute-0 ceph-mon[74496]: 11.19 scrub ok
Jan 31 07:11:22 compute-0 ceph-mon[74496]: 6.9 deep-scrub ok
Jan 31 07:11:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:23.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:23 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 31 07:11:23 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 31 07:11:23 compute-0 ceph-mon[74496]: 6.b deep-scrub starts
Jan 31 07:11:23 compute-0 ceph-mon[74496]: 6.b deep-scrub ok
Jan 31 07:11:24 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 07:11:24 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 07:11:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:24.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:24 compute-0 ceph-mon[74496]: pgmap v305: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:24 compute-0 ceph-mon[74496]: 6.f scrub starts
Jan 31 07:11:24 compute-0 ceph-mon[74496]: 6.f scrub ok
Jan 31 07:11:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:25.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:25 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 31 07:11:25 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 31 07:11:25 compute-0 ceph-mon[74496]: 9.19 scrub starts
Jan 31 07:11:25 compute-0 ceph-mon[74496]: 9.19 scrub ok
Jan 31 07:11:25 compute-0 ceph-mon[74496]: pgmap v306: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:26 compute-0 sudo[107518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjdqjfjhbkhglqmdfcpailpslpbwylgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843485.7463071-885-141749338932436/AnsiballZ_stat.py'
Jan 31 07:11:26 compute-0 sudo[107518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:26 compute-0 python3.9[107520]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:11:26 compute-0 sudo[107518]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:26 compute-0 sudo[107596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwsoxpwjxrviijzjbhpufiphyqslaqbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843485.7463071-885-141749338932436/AnsiballZ_file.py'
Jan 31 07:11:26 compute-0 sudo[107596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:26 compute-0 python3.9[107598]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:11:26 compute-0 sudo[107596]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:26 compute-0 ceph-mon[74496]: 11.1d scrub starts
Jan 31 07:11:26 compute-0 ceph-mon[74496]: 11.1d scrub ok
Jan 31 07:11:26 compute-0 ceph-mon[74496]: 9.1a scrub starts
Jan 31 07:11:26 compute-0 ceph-mon[74496]: 9.1a scrub ok
Jan 31 07:11:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:27.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:27 compute-0 sudo[107749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgrsbvwvhexjdomnhcgtlfcozwueacoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843487.1394176-930-245374197308515/AnsiballZ_dnf.py'
Jan 31 07:11:27 compute-0 sudo[107749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:27 compute-0 python3.9[107751]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:11:27 compute-0 ceph-mon[74496]: pgmap v307: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:28 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 07:11:28 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 07:11:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:28.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:28 compute-0 sudo[107749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:28 compute-0 ceph-mon[74496]: 8.18 deep-scrub starts
Jan 31 07:11:28 compute-0 ceph-mon[74496]: 8.18 deep-scrub ok
Jan 31 07:11:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:29.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:29 compute-0 python3.9[107903]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:11:29 compute-0 ceph-mon[74496]: 8.5 scrub starts
Jan 31 07:11:29 compute-0 ceph-mon[74496]: 8.5 scrub ok
Jan 31 07:11:29 compute-0 ceph-mon[74496]: 9.1b scrub starts
Jan 31 07:11:29 compute-0 ceph-mon[74496]: 9.1b scrub ok
Jan 31 07:11:29 compute-0 ceph-mon[74496]: pgmap v308: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:30.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:30 compute-0 python3.9[108055]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 07:11:31 compute-0 ceph-mon[74496]: 11.1c scrub starts
Jan 31 07:11:31 compute-0 ceph-mon[74496]: 11.1c scrub ok
Jan 31 07:11:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:31 compute-0 python3.9[108206]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:11:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:32 compute-0 ceph-mon[74496]: 11.1e deep-scrub starts
Jan 31 07:11:32 compute-0 ceph-mon[74496]: 11.1e deep-scrub ok
Jan 31 07:11:32 compute-0 ceph-mon[74496]: pgmap v309: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:32 compute-0 sudo[108356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoprxfjajeppggovtvnmplidirrrsdgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843491.8849304-1053-135141932520341/AnsiballZ_systemd.py'
Jan 31 07:11:32 compute-0 sudo[108356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:32 compute-0 python3.9[108358]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:11:32 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 07:11:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:32.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:32 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 07:11:32 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 07:11:32 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 07:11:33 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 07:11:33 compute-0 sudo[108356]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:33 compute-0 ceph-mon[74496]: 7.14 scrub starts
Jan 31 07:11:33 compute-0 ceph-mon[74496]: 7.14 scrub ok
Jan 31 07:11:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:33.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:11:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:11:34 compute-0 ceph-mon[74496]: 11.1b scrub starts
Jan 31 07:11:34 compute-0 ceph-mon[74496]: 11.1b scrub ok
Jan 31 07:11:34 compute-0 ceph-mon[74496]: pgmap v310: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:34 compute-0 python3.9[108522]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 07:11:34 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 31 07:11:34 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 31 07:11:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:34.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:35.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:36 compute-0 ceph-mon[74496]: 9.1e scrub starts
Jan 31 07:11:36 compute-0 ceph-mon[74496]: 9.1e scrub ok
Jan 31 07:11:36 compute-0 ceph-mon[74496]: pgmap v311: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:36 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 07:11:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:36 compute-0 ceph-osd[84816]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 07:11:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:37.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:37 compute-0 ceph-mon[74496]: 8.1b scrub starts
Jan 31 07:11:37 compute-0 ceph-mon[74496]: 8.1b scrub ok
Jan 31 07:11:37 compute-0 ceph-mon[74496]: 8.6 scrub starts
Jan 31 07:11:37 compute-0 ceph-mon[74496]: 8.6 scrub ok
Jan 31 07:11:37 compute-0 sudo[108674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utkmtmptsqwadlztknjsohjlhwacztio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843497.421193-1224-256564591740004/AnsiballZ_systemd.py'
Jan 31 07:11:37 compute-0 sudo[108674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:38 compute-0 python3.9[108676]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:11:38 compute-0 sudo[108674]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:38 compute-0 sshd-session[108677]: Invalid user sol from 45.148.10.240 port 44574
Jan 31 07:11:38 compute-0 sshd-session[108677]: Connection closed by invalid user sol 45.148.10.240 port 44574 [preauth]
Jan 31 07:11:38 compute-0 sudo[108853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vavyfiragupkqnkzhulcnrojvbdbuhsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843498.1757374-1224-259547442456411/AnsiballZ_systemd.py'
Jan 31 07:11:38 compute-0 sudo[108807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:38 compute-0 sudo[108807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:38 compute-0 sudo[108853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:38 compute-0 sudo[108807]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:38 compute-0 sudo[108858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:38 compute-0 sudo[108858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:38 compute-0 ceph-mon[74496]: 9.1f scrub starts
Jan 31 07:11:38 compute-0 ceph-mon[74496]: 9.1f scrub ok
Jan 31 07:11:38 compute-0 ceph-mon[74496]: pgmap v312: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:38 compute-0 sudo[108858]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:38 compute-0 python3.9[108857]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:11:38 compute-0 sudo[108853]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:39.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:39 compute-0 sshd-session[100008]: Connection closed by 192.168.122.30 port 54848
Jan 31 07:11:39 compute-0 sshd-session[99952]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:11:39 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 07:11:39 compute-0 systemd[1]: session-35.scope: Consumed 1min 1.291s CPU time.
Jan 31 07:11:39 compute-0 systemd-logind[816]: Session 35 logged out. Waiting for processes to exit.
Jan 31 07:11:39 compute-0 systemd-logind[816]: Removed session 35.
Jan 31 07:11:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:40 compute-0 ceph-mon[74496]: 8.4 scrub starts
Jan 31 07:11:40 compute-0 ceph-mon[74496]: 8.4 scrub ok
Jan 31 07:11:40 compute-0 ceph-mon[74496]: 11.16 scrub starts
Jan 31 07:11:40 compute-0 ceph-mon[74496]: 11.16 scrub ok
Jan 31 07:11:40 compute-0 ceph-mon[74496]: pgmap v313: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:40.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 31 07:11:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:41.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 31 07:11:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:42 compute-0 ceph-mon[74496]: pgmap v314: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:43.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:44 compute-0 ceph-mon[74496]: pgmap v315: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:44.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:44 compute-0 sshd-session[108912]: Accepted publickey for zuul from 192.168.122.30 port 51574 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:11:44 compute-0 systemd-logind[816]: New session 36 of user zuul.
Jan 31 07:11:44 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 31 07:11:44 compute-0 sshd-session[108912]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:11:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:45.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:45 compute-0 python3.9[109066]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:11:46 compute-0 ceph-mon[74496]: 8.15 scrub starts
Jan 31 07:11:46 compute-0 ceph-mon[74496]: 8.15 scrub ok
Jan 31 07:11:46 compute-0 ceph-mon[74496]: pgmap v316: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:46.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:46 compute-0 sudo[109220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxqwsainyohwodyaaekvplblhjyvxrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843506.5723348-68-48556545038451/AnsiballZ_getent.py'
Jan 31 07:11:46 compute-0 sudo[109220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:47.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:47 compute-0 python3.9[109222]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 07:11:47 compute-0 sudo[109220]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:47 compute-0 ceph-mon[74496]: 11.5 scrub starts
Jan 31 07:11:47 compute-0 ceph-mon[74496]: 11.5 scrub ok
Jan 31 07:11:47 compute-0 sudo[109374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxbnmgtxtxtzymsadiwndczwattpryyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843507.5549502-104-95536390227913/AnsiballZ_setup.py'
Jan 31 07:11:47 compute-0 sudo[109374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:48 compute-0 python3.9[109376]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:11:48 compute-0 sudo[109374]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:48 compute-0 sudo[109458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jngpvpmisknuwxhjksgpcnzigaxvnboe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843507.5549502-104-95536390227913/AnsiballZ_dnf.py'
Jan 31 07:11:48 compute-0 sudo[109458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:48 compute-0 ceph-mon[74496]: pgmap v317: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:48.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:48 compute-0 python3.9[109460]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:11:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:11:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:49.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:11:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:11:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:11:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:11:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:11:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:11:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:11:50 compute-0 sudo[109458]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:50 compute-0 sudo[109612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcqmdngqrjmhbqeafdxplesdmmnxfaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843510.5609303-146-77481209842893/AnsiballZ_dnf.py'
Jan 31 07:11:50 compute-0 sudo[109612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:50.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:50 compute-0 ceph-mon[74496]: 11.4 scrub starts
Jan 31 07:11:50 compute-0 ceph-mon[74496]: 11.4 scrub ok
Jan 31 07:11:50 compute-0 ceph-mon[74496]: pgmap v318: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:51 compute-0 python3.9[109614]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:11:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:11:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:11:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:51 compute-0 ceph-mon[74496]: 11.7 scrub starts
Jan 31 07:11:51 compute-0 ceph-mon[74496]: 11.7 scrub ok
Jan 31 07:11:51 compute-0 ceph-mon[74496]: 8.c scrub starts
Jan 31 07:11:51 compute-0 ceph-mon[74496]: 8.c scrub ok
Jan 31 07:11:52 compute-0 sudo[109612]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:52.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:52 compute-0 ceph-mon[74496]: pgmap v319: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:53.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:53 compute-0 sudo[109767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqyhwdbzpgkeeojhpywlcfrefjlefssj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843512.6739097-170-93219340216542/AnsiballZ_systemd.py'
Jan 31 07:11:53 compute-0 sudo[109767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:53 compute-0 python3.9[109769]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:11:53 compute-0 sudo[109767]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:54 compute-0 python3.9[109922]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:11:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:11:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:54.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:54 compute-0 ceph-mon[74496]: pgmap v320: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:55 compute-0 sudo[110072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypbmsttngxxmgnrdplxkecrebunaspxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843514.6912637-224-68420960090549/AnsiballZ_sefcontext.py'
Jan 31 07:11:55 compute-0 sudo[110072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:55.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:55 compute-0 python3.9[110074]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 07:11:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:55 compute-0 sudo[110072]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:55 compute-0 ceph-mon[74496]: pgmap v321: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:56 compute-0 python3.9[110225]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:11:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:56.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:56 compute-0 sudo[110381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczjvgrztnfcvhfppbxhgbdkmbwzfefv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843516.7599616-278-165235103487190/AnsiballZ_dnf.py'
Jan 31 07:11:56 compute-0 sudo[110381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:11:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:57.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:11:57 compute-0 python3.9[110383]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:11:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:58 compute-0 sudo[110381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:58 compute-0 ceph-mon[74496]: 11.1 scrub starts
Jan 31 07:11:58 compute-0 ceph-mon[74496]: 11.1 scrub ok
Jan 31 07:11:58 compute-0 ceph-mon[74496]: pgmap v322: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:58 compute-0 sudo[110410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:58 compute-0 sudo[110410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:58 compute-0 sudo[110410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:58 compute-0 sudo[110435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:11:58 compute-0 sudo[110435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:11:58 compute-0 sudo[110435]: pam_unix(sudo:session): session closed for user root
Jan 31 07:11:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:11:58.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:11:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:11:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:11:59.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:11:59 compute-0 sudo[110586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qogdjddhnmdmshhloyogyvlxaazrvjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843518.8140187-302-26744436980249/AnsiballZ_command.py'
Jan 31 07:11:59 compute-0 sudo[110586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:11:59 compute-0 python3.9[110588]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:11:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:11:59 compute-0 ceph-mon[74496]: 11.f scrub starts
Jan 31 07:11:59 compute-0 ceph-mon[74496]: 11.f scrub ok
Jan 31 07:11:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:00 compute-0 sudo[110586]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:00 compute-0 ceph-mon[74496]: 8.8 scrub starts
Jan 31 07:12:00 compute-0 ceph-mon[74496]: 8.8 scrub ok
Jan 31 07:12:00 compute-0 ceph-mon[74496]: pgmap v323: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:00 compute-0 sudo[110873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-japcmaitnlbrwtqjgvrvqwnzxjbjmyks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843520.3016512-326-169970995777115/AnsiballZ_file.py'
Jan 31 07:12:00 compute-0 sudo[110873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:00.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:00 compute-0 python3.9[110875]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 07:12:00 compute-0 sudo[110873]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:01.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:01 compute-0 python3.9[111026]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:12:02 compute-0 sudo[111178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plskhzhydwqejqpqdbmqgcrlyyptuciy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843521.9018211-374-156064245909642/AnsiballZ_dnf.py'
Jan 31 07:12:02 compute-0 sudo[111178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:02 compute-0 python3.9[111180]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:02 compute-0 ceph-mon[74496]: 8.17 scrub starts
Jan 31 07:12:02 compute-0 ceph-mon[74496]: 8.17 scrub ok
Jan 31 07:12:02 compute-0 ceph-mon[74496]: pgmap v324: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:02.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:03.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:03 compute-0 sudo[111178]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:04 compute-0 sudo[111332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhlahsnvjunsidpxdsfmtxgbdhyhnwha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843524.0667229-401-223091691011042/AnsiballZ_dnf.py'
Jan 31 07:12:04 compute-0 sudo[111332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:04 compute-0 python3.9[111334]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:04 compute-0 ceph-mon[74496]: 11.14 scrub starts
Jan 31 07:12:04 compute-0 ceph-mon[74496]: 11.14 scrub ok
Jan 31 07:12:04 compute-0 ceph-mon[74496]: pgmap v325: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:04.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:05.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:05 compute-0 ceph-mon[74496]: 11.3 scrub starts
Jan 31 07:12:05 compute-0 ceph-mon[74496]: 11.3 scrub ok
Jan 31 07:12:05 compute-0 sudo[111332]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:06 compute-0 ceph-mon[74496]: pgmap v326: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:06 compute-0 sudo[111486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tugqfxdsmkvhdptnxpanxygqpdzlhnua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843526.5648365-437-136550814004399/AnsiballZ_stat.py'
Jan 31 07:12:06 compute-0 sudo[111486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:06.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:06 compute-0 python3.9[111488]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:12:06 compute-0 sudo[111486]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:07.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:07 compute-0 sudo[111641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvxbwwmzqyfqmhmntrigbytmoopyuelw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843527.144771-461-100007437619849/AnsiballZ_slurp.py'
Jan 31 07:12:07 compute-0 sudo[111641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:07 compute-0 python3.9[111643]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 07:12:07 compute-0 sudo[111641]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:08 compute-0 ceph-mon[74496]: 8.14 scrub starts
Jan 31 07:12:08 compute-0 ceph-mon[74496]: 8.14 scrub ok
Jan 31 07:12:08 compute-0 ceph-mon[74496]: pgmap v327: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:08 compute-0 ceph-mon[74496]: 11.17 scrub starts
Jan 31 07:12:08 compute-0 ceph-mon[74496]: 11.17 scrub ok
Jan 31 07:12:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:08.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:08 compute-0 sshd-session[108915]: Connection closed by 192.168.122.30 port 51574
Jan 31 07:12:08 compute-0 sshd-session[108912]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:12:08 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 07:12:08 compute-0 systemd[1]: session-36.scope: Consumed 16.019s CPU time.
Jan 31 07:12:08 compute-0 systemd-logind[816]: Session 36 logged out. Waiting for processes to exit.
Jan 31 07:12:08 compute-0 systemd-logind[816]: Removed session 36.
Jan 31 07:12:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:09.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:09 compute-0 ceph-mon[74496]: 6.2 scrub starts
Jan 31 07:12:09 compute-0 ceph-mon[74496]: 6.2 scrub ok
Jan 31 07:12:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:10 compute-0 ceph-mon[74496]: 6.a scrub starts
Jan 31 07:12:10 compute-0 ceph-mon[74496]: 6.a scrub ok
Jan 31 07:12:10 compute-0 ceph-mon[74496]: pgmap v328: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:10 compute-0 ceph-mon[74496]: 8.1c scrub starts
Jan 31 07:12:10 compute-0 ceph-mon[74496]: 8.1c scrub ok
Jan 31 07:12:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:10.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:11.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:11 compute-0 ceph-mon[74496]: 10.1e scrub starts
Jan 31 07:12:11 compute-0 ceph-mon[74496]: 10.1e scrub ok
Jan 31 07:12:12 compute-0 ceph-mon[74496]: pgmap v329: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:13.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:13 compute-0 sshd-session[111671]: Accepted publickey for zuul from 192.168.122.30 port 45328 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:12:13 compute-0 systemd-logind[816]: New session 37 of user zuul.
Jan 31 07:12:13 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 31 07:12:13 compute-0 sshd-session[111671]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:12:13 compute-0 ceph-mon[74496]: 6.3 scrub starts
Jan 31 07:12:13 compute-0 ceph-mon[74496]: 6.3 scrub ok
Jan 31 07:12:14 compute-0 python3.9[111824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:12:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:14.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:14 compute-0 ceph-mon[74496]: pgmap v330: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:14 compute-0 ceph-mon[74496]: 9.b scrub starts
Jan 31 07:12:14 compute-0 ceph-mon[74496]: 9.b scrub ok
Jan 31 07:12:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:15 compute-0 python3.9[111979]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:12:15 compute-0 ceph-mon[74496]: 9.7 scrub starts
Jan 31 07:12:15 compute-0 ceph-mon[74496]: 9.7 scrub ok
Jan 31 07:12:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:16.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:16 compute-0 ceph-mon[74496]: 6.7 scrub starts
Jan 31 07:12:16 compute-0 ceph-mon[74496]: 6.7 scrub ok
Jan 31 07:12:16 compute-0 ceph-mon[74496]: pgmap v331: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:16 compute-0 python3.9[112172]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:12:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:17.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:17 compute-0 sshd-session[111674]: Connection closed by 192.168.122.30 port 45328
Jan 31 07:12:17 compute-0 sshd-session[111671]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:12:17 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 07:12:17 compute-0 systemd[1]: session-37.scope: Consumed 2.079s CPU time.
Jan 31 07:12:17 compute-0 systemd-logind[816]: Session 37 logged out. Waiting for processes to exit.
Jan 31 07:12:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:17 compute-0 systemd-logind[816]: Removed session 37.
Jan 31 07:12:17 compute-0 ceph-mon[74496]: 9.13 scrub starts
Jan 31 07:12:17 compute-0 ceph-mon[74496]: 9.13 scrub ok
Jan 31 07:12:18 compute-0 sudo[112200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:18 compute-0 sudo[112200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-0 sudo[112200]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-0 sudo[112225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:12:18 compute-0 sudo[112225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-0 sudo[112225]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-0 sudo[112250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:18 compute-0 sudo[112250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-0 sudo[112250]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-0 sudo[112275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:12:18 compute-0 sudo[112275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-0 sudo[112318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:18 compute-0 sudo[112318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-0 sudo[112318]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-0 sudo[112275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-0 sudo[112351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:18 compute-0 sudo[112351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:18 compute-0 sudo[112351]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:18.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:18 compute-0 ceph-mon[74496]: 6.5 scrub starts
Jan 31 07:12:18 compute-0 ceph-mon[74496]: 6.5 scrub ok
Jan 31 07:12:18 compute-0 ceph-mon[74496]: pgmap v332: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:19.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:12:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:12:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:19 compute-0 ceph-mon[74496]: 6.d scrub starts
Jan 31 07:12:19 compute-0 ceph-mon[74496]: 6.d scrub ok
Jan 31 07:12:19 compute-0 ceph-mon[74496]: pgmap v333: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:12:19
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:12:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:12:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:12:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:12:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:12:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:12:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:12:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d9ffbc53-0921-4e17-afcd-b24c47bcb841 does not exist
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b9ec7939-a76c-483e-a37a-62b105aa6c9e does not exist
Jan 31 07:12:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e8cb52ad-79f7-40a8-813b-649290482fac does not exist
Jan 31 07:12:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:12:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:12:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:12:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:12:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:12:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:12:20 compute-0 sudo[112381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:20 compute-0 sudo[112381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:20 compute-0 sudo[112381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:20 compute-0 sudo[112406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:12:20 compute-0 sudo[112406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:20 compute-0 sudo[112406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:20 compute-0 sudo[112431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:20 compute-0 sudo[112431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:20 compute-0 sudo[112431]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:20 compute-0 sudo[112456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:12:20 compute-0 sudo[112456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:20 compute-0 podman[112520]: 2026-01-31 07:12:20.583724985 +0000 UTC m=+0.055141093 container create f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:12:20 compute-0 systemd[1]: Started libpod-conmon-f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522.scope.
Jan 31 07:12:20 compute-0 podman[112520]: 2026-01-31 07:12:20.560133188 +0000 UTC m=+0.031549276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:12:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:12:20 compute-0 podman[112520]: 2026-01-31 07:12:20.680974472 +0000 UTC m=+0.152390590 container init f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heyrovsky, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:12:20 compute-0 podman[112520]: 2026-01-31 07:12:20.689482503 +0000 UTC m=+0.160898571 container start f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:12:20 compute-0 systemd[1]: libpod-f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522.scope: Deactivated successfully.
Jan 31 07:12:20 compute-0 vigorous_heyrovsky[112537]: 167 167
Jan 31 07:12:20 compute-0 podman[112520]: 2026-01-31 07:12:20.697175025 +0000 UTC m=+0.168591143 container attach f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:12:20 compute-0 conmon[112537]: conmon f535bb90cf27a4f3d2b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522.scope/container/memory.events
Jan 31 07:12:20 compute-0 podman[112520]: 2026-01-31 07:12:20.697564325 +0000 UTC m=+0.168980393 container died f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7756d0859941b02c784d162b872f7e720fdf985b01197be000ca1636b447fed5-merged.mount: Deactivated successfully.
Jan 31 07:12:20 compute-0 podman[112520]: 2026-01-31 07:12:20.757675059 +0000 UTC m=+0.229091137 container remove f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_heyrovsky, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:12:20 compute-0 systemd[1]: libpod-conmon-f535bb90cf27a4f3d2b66f8704a7193c5bc555b944f3c244061ed68b6c44f522.scope: Deactivated successfully.
Jan 31 07:12:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:20 compute-0 podman[112563]: 2026-01-31 07:12:20.893066345 +0000 UTC m=+0.048674091 container create 49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:12:20 compute-0 ceph-mon[74496]: 9.17 scrub starts
Jan 31 07:12:20 compute-0 ceph-mon[74496]: 9.17 scrub ok
Jan 31 07:12:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:12:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:12:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:12:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:12:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:12:20 compute-0 systemd[1]: Started libpod-conmon-49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5.scope.
Jan 31 07:12:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:12:20 compute-0 podman[112563]: 2026-01-31 07:12:20.870410901 +0000 UTC m=+0.026018637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515f3d637f153a5444de480459e531e0344fd5c79975a6837a3795e38a25729/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515f3d637f153a5444de480459e531e0344fd5c79975a6837a3795e38a25729/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515f3d637f153a5444de480459e531e0344fd5c79975a6837a3795e38a25729/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515f3d637f153a5444de480459e531e0344fd5c79975a6837a3795e38a25729/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d515f3d637f153a5444de480459e531e0344fd5c79975a6837a3795e38a25729/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:20 compute-0 podman[112563]: 2026-01-31 07:12:20.988661672 +0000 UTC m=+0.144269488 container init 49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:12:21 compute-0 podman[112563]: 2026-01-31 07:12:21.000744041 +0000 UTC m=+0.156351797 container start 49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:12:21 compute-0 podman[112563]: 2026-01-31 07:12:21.004889775 +0000 UTC m=+0.160497591 container attach 49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:12:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:21.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:21 compute-0 priceless_mayer[112579]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:12:21 compute-0 priceless_mayer[112579]: --> relative data size: 1.0
Jan 31 07:12:21 compute-0 priceless_mayer[112579]: --> All data devices are unavailable
Jan 31 07:12:21 compute-0 systemd[1]: libpod-49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5.scope: Deactivated successfully.
Jan 31 07:12:21 compute-0 podman[112563]: 2026-01-31 07:12:21.735394526 +0000 UTC m=+0.891002282 container died 49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d515f3d637f153a5444de480459e531e0344fd5c79975a6837a3795e38a25729-merged.mount: Deactivated successfully.
Jan 31 07:12:21 compute-0 podman[112563]: 2026-01-31 07:12:21.795389998 +0000 UTC m=+0.950997744 container remove 49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:12:21 compute-0 systemd[1]: libpod-conmon-49835f92ed2773d62c867a44f13e96f1dc0c812169af0f9b269db6a18e2457c5.scope: Deactivated successfully.
Jan 31 07:12:21 compute-0 sudo[112456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:21 compute-0 sudo[112606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:21 compute-0 sudo[112606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:21 compute-0 sudo[112606]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:21 compute-0 ceph-mon[74496]: pgmap v334: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:21 compute-0 sudo[112631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:12:21 compute-0 sudo[112631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:21 compute-0 sudo[112631]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:21 compute-0 sudo[112656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:21 compute-0 sudo[112656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:21 compute-0 sudo[112656]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:22 compute-0 sudo[112681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:12:22 compute-0 sudo[112681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:22 compute-0 podman[112746]: 2026-01-31 07:12:22.367878341 +0000 UTC m=+0.051768908 container create 969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:12:22 compute-0 systemd[1]: Started libpod-conmon-969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe.scope.
Jan 31 07:12:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:12:22 compute-0 podman[112746]: 2026-01-31 07:12:22.346404077 +0000 UTC m=+0.030294714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:12:22 compute-0 podman[112746]: 2026-01-31 07:12:22.452187036 +0000 UTC m=+0.136077593 container init 969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:12:22 compute-0 podman[112746]: 2026-01-31 07:12:22.457428397 +0000 UTC m=+0.141318924 container start 969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:12:22 compute-0 podman[112746]: 2026-01-31 07:12:22.460971785 +0000 UTC m=+0.144862342 container attach 969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:12:22 compute-0 systemd[1]: libpod-969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe.scope: Deactivated successfully.
Jan 31 07:12:22 compute-0 elated_pascal[112762]: 167 167
Jan 31 07:12:22 compute-0 conmon[112762]: conmon 969800e911776eea6366 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe.scope/container/memory.events
Jan 31 07:12:22 compute-0 podman[112746]: 2026-01-31 07:12:22.465782725 +0000 UTC m=+0.149673282 container died 969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 07:12:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0dccd062d02a97bfb7d1b02883eec17f3dd27131038ca9d0907747a8c6b73bd-merged.mount: Deactivated successfully.
Jan 31 07:12:22 compute-0 podman[112746]: 2026-01-31 07:12:22.512158378 +0000 UTC m=+0.196048915 container remove 969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:12:22 compute-0 systemd[1]: libpod-conmon-969800e911776eea6366c75bdd5abe3b1f8c4db0c8172b607da9a05d9a5721fe.scope: Deactivated successfully.
Jan 31 07:12:22 compute-0 podman[112786]: 2026-01-31 07:12:22.631256848 +0000 UTC m=+0.038191920 container create aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sutherland, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:12:22 compute-0 systemd[1]: Started libpod-conmon-aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9.scope.
Jan 31 07:12:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8035b263453e09207f8bf857ad379e4dbf33223b869d299e87a0d510af44683/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8035b263453e09207f8bf857ad379e4dbf33223b869d299e87a0d510af44683/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8035b263453e09207f8bf857ad379e4dbf33223b869d299e87a0d510af44683/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8035b263453e09207f8bf857ad379e4dbf33223b869d299e87a0d510af44683/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:22 compute-0 podman[112786]: 2026-01-31 07:12:22.616908802 +0000 UTC m=+0.023843884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:12:22 compute-0 podman[112786]: 2026-01-31 07:12:22.713222787 +0000 UTC m=+0.120157869 container init aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:12:22 compute-0 podman[112786]: 2026-01-31 07:12:22.718374904 +0000 UTC m=+0.125309966 container start aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:12:22 compute-0 podman[112786]: 2026-01-31 07:12:22.722955269 +0000 UTC m=+0.129890381 container attach aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sutherland, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:12:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:22.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:22 compute-0 ceph-mon[74496]: 9.3 scrub starts
Jan 31 07:12:22 compute-0 ceph-mon[74496]: 9.3 scrub ok
Jan 31 07:12:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:23.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:23 compute-0 sshd-session[112807]: Accepted publickey for zuul from 192.168.122.30 port 50296 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:12:23 compute-0 systemd-logind[816]: New session 38 of user zuul.
Jan 31 07:12:23 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 31 07:12:23 compute-0 sshd-session[112807]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:12:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:23 compute-0 boring_sutherland[112802]: {
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:     "0": [
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:         {
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "devices": [
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "/dev/loop3"
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             ],
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "lv_name": "ceph_lv0",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "lv_size": "7511998464",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "name": "ceph_lv0",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "tags": {
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.cluster_name": "ceph",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.crush_device_class": "",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.encrypted": "0",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.osd_id": "0",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.type": "block",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:                 "ceph.vdo": "0"
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             },
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "type": "block",
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:             "vg_name": "ceph_vg0"
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:         }
Jan 31 07:12:23 compute-0 boring_sutherland[112802]:     ]
Jan 31 07:12:23 compute-0 boring_sutherland[112802]: }
Jan 31 07:12:23 compute-0 systemd[1]: libpod-aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9.scope: Deactivated successfully.
Jan 31 07:12:23 compute-0 podman[112786]: 2026-01-31 07:12:23.466300558 +0000 UTC m=+0.873235640 container died aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8035b263453e09207f8bf857ad379e4dbf33223b869d299e87a0d510af44683-merged.mount: Deactivated successfully.
Jan 31 07:12:23 compute-0 podman[112786]: 2026-01-31 07:12:23.521195274 +0000 UTC m=+0.928130346 container remove aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:12:23 compute-0 systemd[1]: libpod-conmon-aae2d7ba01c587f41ec8bc32b21b5a1d5b51a51a2ef1f5c25628e6c60b4694d9.scope: Deactivated successfully.
Jan 31 07:12:23 compute-0 sudo[112681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:23 compute-0 sudo[112882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:23 compute-0 sudo[112882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:23 compute-0 sudo[112882]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:23 compute-0 sudo[112907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:12:23 compute-0 sudo[112907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:23 compute-0 sudo[112907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:23 compute-0 sudo[112958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:23 compute-0 sudo[112958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:23 compute-0 sudo[112958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:23 compute-0 sudo[113004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:12:23 compute-0 sudo[113004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:23 compute-0 ceph-mon[74496]: 9.6 scrub starts
Jan 31 07:12:23 compute-0 ceph-mon[74496]: 9.6 scrub ok
Jan 31 07:12:23 compute-0 ceph-mon[74496]: pgmap v335: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:24 compute-0 podman[113120]: 2026-01-31 07:12:24.049133919 +0000 UTC m=+0.038155920 container create 1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:12:24 compute-0 systemd[1]: Started libpod-conmon-1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab.scope.
Jan 31 07:12:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:12:24 compute-0 podman[113120]: 2026-01-31 07:12:24.115371435 +0000 UTC m=+0.104393546 container init 1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:12:24 compute-0 podman[113120]: 2026-01-31 07:12:24.121921668 +0000 UTC m=+0.110943689 container start 1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 07:12:24 compute-0 heuristic_germain[113136]: 167 167
Jan 31 07:12:24 compute-0 systemd[1]: libpod-1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab.scope: Deactivated successfully.
Jan 31 07:12:24 compute-0 podman[113120]: 2026-01-31 07:12:24.126141563 +0000 UTC m=+0.115163574 container attach 1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:12:24 compute-0 podman[113120]: 2026-01-31 07:12:24.030878105 +0000 UTC m=+0.019900136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:12:24 compute-0 podman[113120]: 2026-01-31 07:12:24.129955067 +0000 UTC m=+0.118977088 container died 1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:12:24 compute-0 python3.9[113079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-658558c1f2b34088fc78ce8a4b4bad199da875e153a7305eec52caa06b018347-merged.mount: Deactivated successfully.
Jan 31 07:12:24 compute-0 podman[113120]: 2026-01-31 07:12:24.173735427 +0000 UTC m=+0.162757438 container remove 1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_germain, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:12:24 compute-0 systemd[1]: libpod-conmon-1be1aa1edca6c440ec1b698ffaa1abb8c93d1fcd042409b5c3deaf3e6fce29ab.scope: Deactivated successfully.
Jan 31 07:12:24 compute-0 podman[113164]: 2026-01-31 07:12:24.322394742 +0000 UTC m=+0.043603555 container create b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:12:24 compute-0 systemd[1]: Started libpod-conmon-b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4.scope.
Jan 31 07:12:24 compute-0 podman[113164]: 2026-01-31 07:12:24.305153354 +0000 UTC m=+0.026362197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:12:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e2336e7705bd02321d82dcd861308d36782b45cc289ba25858e59ef3964643/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e2336e7705bd02321d82dcd861308d36782b45cc289ba25858e59ef3964643/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e2336e7705bd02321d82dcd861308d36782b45cc289ba25858e59ef3964643/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e2336e7705bd02321d82dcd861308d36782b45cc289ba25858e59ef3964643/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:12:24 compute-0 podman[113164]: 2026-01-31 07:12:24.420600994 +0000 UTC m=+0.141809807 container init b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:12:24 compute-0 podman[113164]: 2026-01-31 07:12:24.426449289 +0000 UTC m=+0.147658102 container start b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:12:24 compute-0 podman[113164]: 2026-01-31 07:12:24.430124491 +0000 UTC m=+0.151333304 container attach b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:12:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:24.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:24 compute-0 python3.9[113335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:12:24 compute-0 ceph-mon[74496]: 9.e scrub starts
Jan 31 07:12:24 compute-0 ceph-mon[74496]: 9.e scrub ok
Jan 31 07:12:24 compute-0 ceph-mon[74496]: 9.5 scrub starts
Jan 31 07:12:24 compute-0 ceph-mon[74496]: 9.5 scrub ok
Jan 31 07:12:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:25 compute-0 kind_elgamal[113181]: {
Jan 31 07:12:25 compute-0 kind_elgamal[113181]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:12:25 compute-0 kind_elgamal[113181]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:12:25 compute-0 kind_elgamal[113181]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:12:25 compute-0 kind_elgamal[113181]:         "osd_id": 0,
Jan 31 07:12:25 compute-0 kind_elgamal[113181]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:12:25 compute-0 kind_elgamal[113181]:         "type": "bluestore"
Jan 31 07:12:25 compute-0 kind_elgamal[113181]:     }
Jan 31 07:12:25 compute-0 kind_elgamal[113181]: }
Jan 31 07:12:25 compute-0 systemd[1]: libpod-b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4.scope: Deactivated successfully.
Jan 31 07:12:25 compute-0 podman[113164]: 2026-01-31 07:12:25.27803721 +0000 UTC m=+0.999246063 container died b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_elgamal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-01e2336e7705bd02321d82dcd861308d36782b45cc289ba25858e59ef3964643-merged.mount: Deactivated successfully.
Jan 31 07:12:25 compute-0 podman[113164]: 2026-01-31 07:12:25.337056628 +0000 UTC m=+1.058265431 container remove b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_elgamal, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:12:25 compute-0 systemd[1]: libpod-conmon-b77b668e4833ded7eb260847ae4e79b275f26ced70ae18c47f0edbc16ccd84e4.scope: Deactivated successfully.
Jan 31 07:12:25 compute-0 sudo[113004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:12:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:12:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev dffdc2c6-7833-4543-96be-dbb26f84381d does not exist
Jan 31 07:12:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cfb59deb-5794-49cc-8d4f-42b533eba694 does not exist
Jan 31 07:12:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 747e437f-4109-4fce-9771-62a662501813 does not exist
Jan 31 07:12:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:25 compute-0 sudo[113393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:25 compute-0 sudo[113393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:25 compute-0 sudo[113393]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:25 compute-0 sudo[113441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:12:25 compute-0 sudo[113441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:25 compute-0 sudo[113441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:25 compute-0 sudo[113568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgxvzjmrgdfkcfesccqmyjrvdexejzfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843545.4595258-80-135134372506645/AnsiballZ_setup.py'
Jan 31 07:12:25 compute-0 sudo[113568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:25 compute-0 python3.9[113570]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:12:26 compute-0 sudo[113568]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:12:26 compute-0 ceph-mon[74496]: pgmap v336: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:26 compute-0 sudo[113652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytfbhwtgrkqwrwlqrqymrevslvilyues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843545.4595258-80-135134372506645/AnsiballZ_dnf.py'
Jan 31 07:12:26 compute-0 sudo[113652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:26 compute-0 python3.9[113654]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:26.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:27.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:28 compute-0 sudo[113652]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:28 compute-0 ceph-mon[74496]: pgmap v337: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:28 compute-0 sudo[113806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuoghkfitbjigkpaefueifljjhugkxxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843548.4144034-116-102513992732484/AnsiballZ_setup.py'
Jan 31 07:12:28 compute-0 sudo[113806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:28.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:28 compute-0 python3.9[113808]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:12:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:29.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:29 compute-0 sudo[113806]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:29 compute-0 sudo[114002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnpakllysekdkjbwxbgzbegwcidahejl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843549.5801513-149-4560404922967/AnsiballZ_file.py'
Jan 31 07:12:29 compute-0 sudo[114002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:30 compute-0 python3.9[114004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:30 compute-0 sudo[114002]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:30 compute-0 ceph-mon[74496]: 6.8 scrub starts
Jan 31 07:12:30 compute-0 ceph-mon[74496]: 6.8 scrub ok
Jan 31 07:12:30 compute-0 ceph-mon[74496]: pgmap v338: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:30 compute-0 ceph-mon[74496]: 9.18 scrub starts
Jan 31 07:12:30 compute-0 ceph-mon[74496]: 9.18 scrub ok
Jan 31 07:12:30 compute-0 sudo[114154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inxdphizuarjgxhpugoobtdvbvwldsev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843550.366445-173-82657003214120/AnsiballZ_command.py'
Jan 31 07:12:30 compute-0 sudo[114154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:30.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:30 compute-0 python3.9[114156]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:12:31 compute-0 sudo[114154]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:31 compute-0 sudo[114320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzycqcwklwmmaobpqynflqztekxjvezr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843551.2708297-197-274397060481986/AnsiballZ_stat.py'
Jan 31 07:12:31 compute-0 sudo[114320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:31 compute-0 python3.9[114322]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:12:31 compute-0 sudo[114320]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:32 compute-0 sudo[114398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itysqezxcdvwjixnohymcnxwxlzsezpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843551.2708297-197-274397060481986/AnsiballZ_file.py'
Jan 31 07:12:32 compute-0 sudo[114398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:32 compute-0 python3.9[114400]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:32 compute-0 sudo[114398]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:32 compute-0 ceph-mon[74496]: 9.a scrub starts
Jan 31 07:12:32 compute-0 ceph-mon[74496]: pgmap v339: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:32 compute-0 ceph-mon[74496]: 9.a scrub ok
Jan 31 07:12:32 compute-0 sudo[114550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwygnxctyguuauggyjwfbenwlgavcsoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843552.4903004-233-276443482855917/AnsiballZ_stat.py'
Jan 31 07:12:32 compute-0 sudo[114550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:32.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:32 compute-0 python3.9[114552]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:12:32 compute-0 sudo[114550]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:33 compute-0 sudo[114628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sakugxixfoadhqxdqsccnbxrfogtzpvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843552.4903004-233-276443482855917/AnsiballZ_file.py'
Jan 31 07:12:33 compute-0 sudo[114628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:33.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:33 compute-0 python3.9[114630]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:33 compute-0 sudo[114628]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:34 compute-0 sudo[114781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjzszosncyaouigmcxyjewuijvcncte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843553.6012638-272-91598903465849/AnsiballZ_ini_file.py'
Jan 31 07:12:34 compute-0 sudo[114781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:34 compute-0 python3.9[114783]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:34 compute-0 sudo[114781]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:12:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:12:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:34.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:34 compute-0 ceph-mon[74496]: pgmap v340: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:35 compute-0 sudo[114934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zduvgvlhucnrgiqsfvyxbtghmqxuozmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843554.991355-272-114471288528721/AnsiballZ_ini_file.py'
Jan 31 07:12:35 compute-0 sudo[114934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:35 compute-0 python3.9[114936]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:35 compute-0 sudo[114934]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:35 compute-0 ceph-mon[74496]: 9.8 scrub starts
Jan 31 07:12:35 compute-0 ceph-mon[74496]: 9.8 scrub ok
Jan 31 07:12:35 compute-0 ceph-mon[74496]: pgmap v341: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:35 compute-0 sudo[115086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqzwaynozlbaiiusgtuuidnwkiqgyttz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843555.6739395-272-209813925807357/AnsiballZ_ini_file.py'
Jan 31 07:12:35 compute-0 sudo[115086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:36 compute-0 python3.9[115088]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:36 compute-0 sudo[115086]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:36 compute-0 sudo[115238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceoyyxcpjxqccttvtthjteyicnxvzeas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843556.3003738-272-224385471995703/AnsiballZ_ini_file.py'
Jan 31 07:12:36 compute-0 sudo[115238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:36 compute-0 python3.9[115240]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:12:36 compute-0 sudo[115238]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:36.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:37 compute-0 ceph-mon[74496]: 6.e scrub starts
Jan 31 07:12:37 compute-0 ceph-mon[74496]: 6.e scrub ok
Jan 31 07:12:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:37.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:37 compute-0 sudo[115391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trmkrujsmybshdueamgrkqvlmjqpznpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843557.0148644-365-267324808100525/AnsiballZ_dnf.py'
Jan 31 07:12:37 compute-0 sudo[115391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:37 compute-0 python3.9[115393]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:38 compute-0 ceph-mon[74496]: pgmap v342: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:38 compute-0 sudo[115391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:38 compute-0 sudo[115419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:38 compute-0 sudo[115419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:38 compute-0 sudo[115419]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:38 compute-0 sudo[115444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:38 compute-0 sudo[115444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:38 compute-0 sudo[115444]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:38.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:39.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:40 compute-0 ceph-mon[74496]: pgmap v343: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:40 compute-0 sudo[115595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mujyicglimivtrxxnlrpgclewgjihaqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843560.4968255-398-263228072765447/AnsiballZ_setup.py'
Jan 31 07:12:40 compute-0 sudo[115595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:41 compute-0 python3.9[115597]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:12:41 compute-0 sudo[115595]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:12:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:12:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:41 compute-0 sudo[115750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uorwtzyxqhmrqxljwpjbpledpqchwmda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843561.2527964-422-258389144202834/AnsiballZ_stat.py'
Jan 31 07:12:41 compute-0 sudo[115750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:41 compute-0 ceph-mon[74496]: 9.d scrub starts
Jan 31 07:12:41 compute-0 ceph-mon[74496]: 9.d scrub ok
Jan 31 07:12:41 compute-0 python3.9[115752]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:12:41 compute-0 sudo[115750]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:42 compute-0 sudo[115902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hodiszolqdxjvvfwthaladqqpxallcva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843561.9371545-449-193819232770703/AnsiballZ_stat.py'
Jan 31 07:12:42 compute-0 sudo[115902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:42 compute-0 python3.9[115904]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:12:42 compute-0 sudo[115902]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:42 compute-0 ceph-mon[74496]: 9.f scrub starts
Jan 31 07:12:42 compute-0 ceph-mon[74496]: 9.f scrub ok
Jan 31 07:12:42 compute-0 ceph-mon[74496]: pgmap v344: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:42 compute-0 sudo[116054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odvxyyijyxgazorglfpxgdxainkjarab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843562.6727753-479-118175928150517/AnsiballZ_command.py'
Jan 31 07:12:42 compute-0 sudo[116054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:42.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:43 compute-0 python3.9[116056]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:12:43 compute-0 sudo[116054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:43.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:43 compute-0 ceph-mon[74496]: 9.10 scrub starts
Jan 31 07:12:43 compute-0 ceph-mon[74496]: 9.10 scrub ok
Jan 31 07:12:43 compute-0 ceph-mon[74496]: 9.9 scrub starts
Jan 31 07:12:43 compute-0 ceph-mon[74496]: 9.9 scrub ok
Jan 31 07:12:43 compute-0 sudo[116208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-botvvbxmatjdfhrwfuovnlglsmfeikui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843563.4485555-509-8509154056783/AnsiballZ_service_facts.py'
Jan 31 07:12:43 compute-0 sudo[116208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:44 compute-0 python3.9[116210]: ansible-service_facts Invoked
Jan 31 07:12:44 compute-0 network[116227]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:12:44 compute-0 network[116228]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:12:44 compute-0 network[116229]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:12:44 compute-0 ceph-mon[74496]: pgmap v345: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:12:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:44.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:12:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:45.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:45 compute-0 ceph-mon[74496]: 9.11 scrub starts
Jan 31 07:12:45 compute-0 ceph-mon[74496]: 9.11 scrub ok
Jan 31 07:12:46 compute-0 ceph-mon[74496]: pgmap v346: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:46.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:47.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:47 compute-0 sudo[116208]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:47 compute-0 ceph-mon[74496]: 9.16 scrub starts
Jan 31 07:12:47 compute-0 ceph-mon[74496]: 9.16 scrub ok
Jan 31 07:12:47 compute-0 ceph-mon[74496]: 9.12 scrub starts
Jan 31 07:12:48 compute-0 sudo[116514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frvaufphbzcclsnyqpmgqhoejqxafrfy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769843568.1911416-554-3055388751054/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769843568.1911416-554-3055388751054/args'
Jan 31 07:12:48 compute-0 sudo[116514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:48 compute-0 sudo[116514]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:48 compute-0 ceph-mon[74496]: 9.12 scrub ok
Jan 31 07:12:48 compute-0 ceph-mon[74496]: pgmap v347: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:48.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:12:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:49.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:12:49 compute-0 sudo[116682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iesraglmqjmanliuspjyiymwnqwsiyfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843569.094468-587-125509182605738/AnsiballZ_dnf.py'
Jan 31 07:12:49 compute-0 sudo[116682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:49 compute-0 ceph-mon[74496]: 9.1d scrub starts
Jan 31 07:12:49 compute-0 ceph-mon[74496]: 9.1d scrub ok
Jan 31 07:12:49 compute-0 python3.9[116684]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:12:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:12:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:12:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:12:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:12:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:12:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:12:50 compute-0 ceph-mon[74496]: pgmap v348: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:50 compute-0 sudo[116682]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:50.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:51.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:51 compute-0 ceph-mon[74496]: 9.15 scrub starts
Jan 31 07:12:51 compute-0 ceph-mon[74496]: 9.15 scrub ok
Jan 31 07:12:52 compute-0 sudo[116836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkllvommgcmauzqivkziykqjkzzwnvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843571.4774299-626-36757869088105/AnsiballZ_package_facts.py'
Jan 31 07:12:52 compute-0 sudo[116836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:52 compute-0 python3.9[116838]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 07:12:52 compute-0 sudo[116836]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:52 compute-0 ceph-mon[74496]: pgmap v349: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:53.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:53 compute-0 sudo[116989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtcqxacemguszmlfmeomjjpuldwsrguy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843573.3172674-656-147571995303640/AnsiballZ_stat.py'
Jan 31 07:12:53 compute-0 sudo[116989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:53 compute-0 python3.9[116991]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:12:53 compute-0 sudo[116989]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:54 compute-0 sudo[117067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozqyimnmppyaduzkceafffpenwrpjbtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843573.3172674-656-147571995303640/AnsiballZ_file.py'
Jan 31 07:12:54 compute-0 sudo[117067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:54 compute-0 python3.9[117069]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:54 compute-0 sudo[117067]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:54 compute-0 sudo[117219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtedsawsttbokaijkjbzgiuohxmcdwfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843574.516746-692-33436517297249/AnsiballZ_stat.py'
Jan 31 07:12:54 compute-0 sudo[117219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:54 compute-0 ceph-mon[74496]: pgmap v350: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:12:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:12:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:54.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:12:55 compute-0 python3.9[117221]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:12:55 compute-0 sudo[117219]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:12:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:55.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:12:55 compute-0 sudo[117298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzubkgefnfulzthdbvrgxazpktumcgip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843574.516746-692-33436517297249/AnsiballZ_file.py'
Jan 31 07:12:55 compute-0 sudo[117298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:55 compute-0 python3.9[117300]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:55 compute-0 sudo[117298]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:56 compute-0 ceph-mon[74496]: pgmap v351: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:56.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:57 compute-0 sudo[117450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwpvdledagmkcuecgeluolevhicnujxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843576.573522-746-23173872346833/AnsiballZ_lineinfile.py'
Jan 31 07:12:57 compute-0 sudo[117450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:57.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:57 compute-0 python3.9[117452]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:12:57 compute-0 sudo[117450]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:58 compute-0 sudo[117603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vikddfmjwqfazndodpmxgrlwddrfxcaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843578.4319818-791-40492506323647/AnsiballZ_setup.py'
Jan 31 07:12:58 compute-0 sudo[117603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:58 compute-0 ceph-mon[74496]: pgmap v352: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:12:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:12:58.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:12:58 compute-0 sudo[117606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:58 compute-0 sudo[117606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:58 compute-0 sudo[117606]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:58 compute-0 python3.9[117605]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:12:59 compute-0 sudo[117631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:12:59 compute-0 sudo[117631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:12:59 compute-0 sudo[117631]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:12:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:12:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:12:59.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:12:59 compute-0 sudo[117603]: pam_unix(sudo:session): session closed for user root
Jan 31 07:12:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:12:59 compute-0 sudo[117738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjtletswvojiyrnepysvwivlrcqvcedx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843578.4319818-791-40492506323647/AnsiballZ_systemd.py'
Jan 31 07:12:59 compute-0 sudo[117738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:12:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:00 compute-0 python3.9[117740]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:13:00 compute-0 sudo[117738]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:00 compute-0 ceph-mon[74496]: pgmap v353: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:01 compute-0 sshd-session[112811]: Connection closed by 192.168.122.30 port 50296
Jan 31 07:13:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:01 compute-0 sshd-session[112807]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:13:01 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 07:13:01 compute-0 systemd[1]: session-38.scope: Consumed 20.896s CPU time.
Jan 31 07:13:01 compute-0 systemd-logind[816]: Session 38 logged out. Waiting for processes to exit.
Jan 31 07:13:01 compute-0 systemd-logind[816]: Removed session 38.
Jan 31 07:13:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:01 compute-0 ceph-mon[74496]: pgmap v354: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:02.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:04 compute-0 ceph-mon[74496]: pgmap v355: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:05.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:06 compute-0 ceph-mon[74496]: pgmap v356: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:06.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:07.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:07 compute-0 sshd-session[117771]: Accepted publickey for zuul from 192.168.122.30 port 45960 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:13:07 compute-0 systemd-logind[816]: New session 39 of user zuul.
Jan 31 07:13:07 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 31 07:13:07 compute-0 sshd-session[117771]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:13:08 compute-0 sudo[117924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnjzforgcvlqzoslmjgmnljfanhaffbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843587.8996756-26-277179237021980/AnsiballZ_file.py'
Jan 31 07:13:08 compute-0 sudo[117924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:08 compute-0 python3.9[117926]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:08 compute-0 ceph-mon[74496]: pgmap v357: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:08 compute-0 sudo[117924]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:08.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:09 compute-0 sudo[118076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmsdwthqaubyqferazozkmrwzlphieti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843588.7089741-62-74186548009724/AnsiballZ_stat.py'
Jan 31 07:13:09 compute-0 sudo[118076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:09.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:09 compute-0 python3.9[118078]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:09 compute-0 sudo[118076]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:09 compute-0 sudo[118155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juqvzdhrghhvotiydcvkwiwqrduskfcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843588.7089741-62-74186548009724/AnsiballZ_file.py'
Jan 31 07:13:09 compute-0 sudo[118155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:09 compute-0 python3.9[118157]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:09 compute-0 sudo[118155]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:10 compute-0 sshd-session[117774]: Connection closed by 192.168.122.30 port 45960
Jan 31 07:13:10 compute-0 sshd-session[117771]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:13:10 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 07:13:10 compute-0 systemd[1]: session-39.scope: Consumed 1.383s CPU time.
Jan 31 07:13:10 compute-0 systemd-logind[816]: Session 39 logged out. Waiting for processes to exit.
Jan 31 07:13:10 compute-0 systemd-logind[816]: Removed session 39.
Jan 31 07:13:10 compute-0 ceph-mon[74496]: pgmap v358: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:10.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 31 07:13:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 31 07:13:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:12 compute-0 ceph-mon[74496]: pgmap v359: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:12.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:14 compute-0 ceph-mon[74496]: pgmap v360: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:14.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:15.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:15 compute-0 sshd-session[118185]: Accepted publickey for zuul from 192.168.122.30 port 45974 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:13:15 compute-0 systemd-logind[816]: New session 40 of user zuul.
Jan 31 07:13:15 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 31 07:13:15 compute-0 sshd-session[118185]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:13:16 compute-0 ceph-mon[74496]: pgmap v361: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:16 compute-0 python3.9[118338]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:13:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:16.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:17.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:17 compute-0 sudo[118493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soswbsjcczechcoolyghwlfahkujioar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843597.3108912-59-134320426793454/AnsiballZ_file.py'
Jan 31 07:13:17 compute-0 sudo[118493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:17 compute-0 python3.9[118495]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:17 compute-0 sudo[118493]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:18 compute-0 sudo[118668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-temlkfvwbdohoykgayyyylzefggrnwjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843598.1003547-83-26439087838864/AnsiballZ_stat.py'
Jan 31 07:13:18 compute-0 sudo[118668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:18 compute-0 ceph-mon[74496]: pgmap v362: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:18 compute-0 python3.9[118670]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:18 compute-0 sudo[118668]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:18.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:19 compute-0 sudo[118752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsugtsocafkmeavaghjaoucqwstzprfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843598.1003547-83-26439087838864/AnsiballZ_file.py'
Jan 31 07:13:19 compute-0 sudo[118752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:19 compute-0 sudo[118743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:19 compute-0 sudo[118743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:19 compute-0 sudo[118743]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:19 compute-0 sudo[118774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:19 compute-0 sudo[118774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:19 compute-0 sudo[118774]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:19.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:19 compute-0 python3.9[118771]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.q7ir0rts recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:19 compute-0 sudo[118752]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:13:19
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', 'volumes', 'default.rgw.control']
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:13:19 compute-0 sudo[118949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktnfyqkjptqdbntmefzpqbhcirzeejom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843599.7373502-143-34907156684942/AnsiballZ_stat.py'
Jan 31 07:13:19 compute-0 sudo[118949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:13:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:13:20 compute-0 python3.9[118951]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:20 compute-0 sudo[118949]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:20 compute-0 sudo[119027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mybdjaqvkrrexdjqpyhvkemiljijwabj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843599.7373502-143-34907156684942/AnsiballZ_file.py'
Jan 31 07:13:20 compute-0 sudo[119027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:20 compute-0 python3.9[119029]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.mmrpfxtx recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:20 compute-0 sudo[119027]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:20 compute-0 ceph-mon[74496]: pgmap v363: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:20.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:21 compute-0 sudo[119179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdhcnjsoprkvaptewwxnxmlvfajfecb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843600.910227-182-183110962331854/AnsiballZ_file.py'
Jan 31 07:13:21 compute-0 sudo[119179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:21.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:21 compute-0 python3.9[119181]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:13:21 compute-0 sudo[119179]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:21 compute-0 sudo[119332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbrfycgsteocrqcfygjehnspzsupouqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843601.4998507-206-49949785252780/AnsiballZ_stat.py'
Jan 31 07:13:21 compute-0 sudo[119332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:21 compute-0 python3.9[119334]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:21 compute-0 sudo[119332]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:22 compute-0 sudo[119410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbkqmlczbcbvycdoymfgdpnswwfankzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843601.4998507-206-49949785252780/AnsiballZ_file.py'
Jan 31 07:13:22 compute-0 sudo[119410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:22 compute-0 python3.9[119412]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:13:22 compute-0 sudo[119410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:22 compute-0 ceph-mon[74496]: pgmap v364: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.717345) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602717406, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2314, "num_deletes": 251, "total_data_size": 3517995, "memory_usage": 3569120, "flush_reason": "Manual Compaction"}
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 31 07:13:22 compute-0 sudo[119562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnvnjwvzmrazdjxlakvnjxjqkxivlima ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843602.4795856-206-22898049738816/AnsiballZ_stat.py'
Jan 31 07:13:22 compute-0 sudo[119562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602755796, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3439665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7651, "largest_seqno": 9964, "table_properties": {"data_size": 3430016, "index_size": 5631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 24839, "raw_average_key_size": 21, "raw_value_size": 3408362, "raw_average_value_size": 2918, "num_data_blocks": 252, "num_entries": 1168, "num_filter_entries": 1168, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843416, "oldest_key_time": 1769843416, "file_creation_time": 1769843602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 38484 microseconds, and 6666 cpu microseconds.
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.755843) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3439665 bytes OK
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.755859) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.758173) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.758189) EVENT_LOG_v1 {"time_micros": 1769843602758183, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.758208) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3508074, prev total WAL file size 3508074, number of live WAL files 2.
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.759039) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3359KB)], [20(7651KB)]
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602759155, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11275218, "oldest_snapshot_seqno": -1}
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3843 keys, 9678273 bytes, temperature: kUnknown
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602860567, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9678273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9646661, "index_size": 20878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 92689, "raw_average_key_size": 24, "raw_value_size": 9571515, "raw_average_value_size": 2490, "num_data_blocks": 913, "num_entries": 3843, "num_filter_entries": 3843, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769843602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.860947) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9678273 bytes
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.864905) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.1 rd, 95.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4364, records dropped: 521 output_compression: NoCompression
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.864945) EVENT_LOG_v1 {"time_micros": 1769843602864928, "job": 6, "event": "compaction_finished", "compaction_time_micros": 101527, "compaction_time_cpu_micros": 21380, "output_level": 6, "num_output_files": 1, "total_output_size": 9678273, "num_input_records": 4364, "num_output_records": 3843, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602865507, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843602866173, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.758906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.866222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.866228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.866229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.866231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:13:22.866233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:13:22 compute-0 python3.9[119564]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:22 compute-0 sudo[119562]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:22.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:23 compute-0 sudo[119640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxucxxzzjkglzwhotgsjihkptmegfmwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843602.4795856-206-22898049738816/AnsiballZ_file.py'
Jan 31 07:13:23 compute-0 sudo[119640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:23.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:23 compute-0 python3.9[119642]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:13:23 compute-0 sudo[119640]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:23 compute-0 sudo[119793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-karnbbxzxviuojbuxdyzmkxpojzardjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843603.6209393-275-27224366932057/AnsiballZ_file.py'
Jan 31 07:13:23 compute-0 sudo[119793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:24 compute-0 python3.9[119795]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:24 compute-0 sudo[119793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:24 compute-0 sudo[119945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dypbrzfvwsytocvggqcoteqdhdivqbyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843604.192235-299-233604625573387/AnsiballZ_stat.py'
Jan 31 07:13:24 compute-0 sudo[119945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:24 compute-0 python3.9[119947]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:24 compute-0 sudo[119945]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:24 compute-0 ceph-mon[74496]: pgmap v365: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:24 compute-0 sudo[120023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgqimppqubsjxgzxutjsghhonuhiaxjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843604.192235-299-233604625573387/AnsiballZ_file.py'
Jan 31 07:13:24 compute-0 sudo[120023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:24.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:25 compute-0 python3.9[120025]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:25 compute-0 sudo[120023]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:25.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:25 compute-0 sudo[120176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acabzmhvjzjatozjrkxcwincfxmlnhyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843605.2620761-335-50165648150242/AnsiballZ_stat.py'
Jan 31 07:13:25 compute-0 sudo[120176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:25 compute-0 python3.9[120178]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:25 compute-0 sudo[120176]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-0 sudo[120198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:25 compute-0 sudo[120198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:25 compute-0 sudo[120198]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-0 sudo[120229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:13:25 compute-0 sudo[120229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:25 compute-0 sudo[120229]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-0 sudo[120277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:25 compute-0 sudo[120277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:25 compute-0 sudo[120277]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:25 compute-0 sudo[120312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:13:25 compute-0 sudo[120352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnoexhbsdidezixsaaghqtonwjxqmvql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843605.2620761-335-50165648150242/AnsiballZ_file.py'
Jan 31 07:13:25 compute-0 sudo[120352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:25 compute-0 sudo[120312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:26 compute-0 python3.9[120355]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:26 compute-0 sudo[120352]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:26 compute-0 sudo[120312]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:26 compute-0 ceph-mon[74496]: pgmap v366: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:26.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:27 compute-0 sudo[120537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkhjtzzaflxejclusglmmgbhojocwrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843606.3420286-371-152805997625932/AnsiballZ_systemd.py'
Jan 31 07:13:27 compute-0 sudo[120537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:13:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:13:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:27.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:27 compute-0 python3.9[120539]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:13:27 compute-0 systemd[1]: Reloading.
Jan 31 07:13:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:27 compute-0 systemd-rc-local-generator[120561]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:13:27 compute-0 systemd-sysv-generator[120566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:13:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:13:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:13:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:13:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:13:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:13:27 compute-0 sudo[120537]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 75a44ac7-21ef-4ad5-9bee-eeadfe72e9f6 does not exist
Jan 31 07:13:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 42a60be5-a184-4b7d-9fe7-504b80403b45 does not exist
Jan 31 07:13:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 29be02d7-3b65-420b-bb53-2bcbb3a220d7 does not exist
Jan 31 07:13:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:13:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:13:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:13:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:13:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:13:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:13:27 compute-0 sudo[120578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:27 compute-0 sudo[120578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:27 compute-0 sudo[120578]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-0 sudo[120627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:13:27 compute-0 sudo[120627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:27 compute-0 sudo[120627]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-0 sudo[120652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:27 compute-0 sudo[120652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:27 compute-0 sudo[120652]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:27 compute-0 sudo[120700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:13:27 compute-0 sudo[120700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:28 compute-0 sudo[120853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agcpimorkwgmowqfdboyhctrsbhxzrtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843607.8280675-395-129724446569291/AnsiballZ_stat.py'
Jan 31 07:13:28 compute-0 sudo[120853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:28 compute-0 ceph-mon[74496]: pgmap v367: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:13:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:13:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:13:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:13:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:13:28 compute-0 podman[120870]: 2026-01-31 07:13:28.14518311 +0000 UTC m=+0.052306731 container create 78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:13:28 compute-0 systemd[1]: Started libpod-conmon-78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555.scope.
Jan 31 07:13:28 compute-0 podman[120870]: 2026-01-31 07:13:28.112917386 +0000 UTC m=+0.020041057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:13:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:13:28 compute-0 podman[120870]: 2026-01-31 07:13:28.235707986 +0000 UTC m=+0.142831587 container init 78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:13:28 compute-0 python3.9[120857]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:28 compute-0 podman[120870]: 2026-01-31 07:13:28.243698679 +0000 UTC m=+0.150822260 container start 78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:13:28 compute-0 systemd[1]: libpod-78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555.scope: Deactivated successfully.
Jan 31 07:13:28 compute-0 stupefied_wiles[120887]: 167 167
Jan 31 07:13:28 compute-0 conmon[120887]: conmon 78a3a02c3b1627b421b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555.scope/container/memory.events
Jan 31 07:13:28 compute-0 podman[120870]: 2026-01-31 07:13:28.25343692 +0000 UTC m=+0.160560511 container attach 78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:13:28 compute-0 podman[120870]: 2026-01-31 07:13:28.25416067 +0000 UTC m=+0.161284281 container died 78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:13:28 compute-0 sudo[120853]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bcca48e590a8c2f55b7c713645e32a3dc8b5cf880273836520e82441b204e93-merged.mount: Deactivated successfully.
Jan 31 07:13:28 compute-0 podman[120870]: 2026-01-31 07:13:28.298703923 +0000 UTC m=+0.205827514 container remove 78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:13:28 compute-0 systemd[1]: libpod-conmon-78a3a02c3b1627b421b32068f20f7ab7468d35a5db23def6e46ded7083aa1555.scope: Deactivated successfully.
Jan 31 07:13:28 compute-0 podman[120939]: 2026-01-31 07:13:28.44491462 +0000 UTC m=+0.053657428 container create 6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:13:28 compute-0 systemd[1]: Started libpod-conmon-6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b.scope.
Jan 31 07:13:28 compute-0 sudo[121004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eguyuoxanbigkgsrmldroafvuxmsxymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843607.8280675-395-129724446569291/AnsiballZ_file.py'
Jan 31 07:13:28 compute-0 sudo[121004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:13:28 compute-0 podman[120939]: 2026-01-31 07:13:28.41843898 +0000 UTC m=+0.027181878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39492cf881c2b38dc0649ffc61dc25c78036a0a66abded7348057784027ad17b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39492cf881c2b38dc0649ffc61dc25c78036a0a66abded7348057784027ad17b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39492cf881c2b38dc0649ffc61dc25c78036a0a66abded7348057784027ad17b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39492cf881c2b38dc0649ffc61dc25c78036a0a66abded7348057784027ad17b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39492cf881c2b38dc0649ffc61dc25c78036a0a66abded7348057784027ad17b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:28 compute-0 podman[120939]: 2026-01-31 07:13:28.543879411 +0000 UTC m=+0.152622249 container init 6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:13:28 compute-0 podman[120939]: 2026-01-31 07:13:28.550331623 +0000 UTC m=+0.159074431 container start 6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:13:28 compute-0 podman[120939]: 2026-01-31 07:13:28.571249674 +0000 UTC m=+0.179992502 container attach 6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:13:28 compute-0 python3.9[121010]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:28 compute-0 sudo[121004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:28.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:29 compute-0 sudo[121163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptqtxqfdxgfhctapfliqhgvsenhipwaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843608.8752859-431-212798383696881/AnsiballZ_stat.py'
Jan 31 07:13:29 compute-0 sudo[121163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:29.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:29 compute-0 python3.9[121165]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:29 compute-0 pensive_golick[121008]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:13:29 compute-0 pensive_golick[121008]: --> relative data size: 1.0
Jan 31 07:13:29 compute-0 pensive_golick[121008]: --> All data devices are unavailable
Jan 31 07:13:29 compute-0 systemd[1]: libpod-6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b.scope: Deactivated successfully.
Jan 31 07:13:29 compute-0 podman[120939]: 2026-01-31 07:13:29.335134877 +0000 UTC m=+0.943877715 container died 6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:13:29 compute-0 sudo[121163]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-39492cf881c2b38dc0649ffc61dc25c78036a0a66abded7348057784027ad17b-merged.mount: Deactivated successfully.
Jan 31 07:13:29 compute-0 podman[120939]: 2026-01-31 07:13:29.394667231 +0000 UTC m=+1.003410039 container remove 6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:13:29 compute-0 systemd[1]: libpod-conmon-6fe44ab03bfb38ec68527e704b39e3b30ac0e4389488fc399a81e471b2d5521b.scope: Deactivated successfully.
Jan 31 07:13:29 compute-0 sudo[120700]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:29 compute-0 sudo[121213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:29 compute-0 sudo[121213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:29 compute-0 sudo[121213]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:29 compute-0 sudo[121261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:13:29 compute-0 sudo[121261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:29 compute-0 sudo[121261]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:29 compute-0 sudo[121320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bemvowgezolutmyimgclghsekxdplxzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843608.8752859-431-212798383696881/AnsiballZ_file.py'
Jan 31 07:13:29 compute-0 sudo[121320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:29 compute-0 sudo[121310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:29 compute-0 sudo[121310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:29 compute-0 sudo[121310]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:29 compute-0 sudo[121341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:13:29 compute-0 sudo[121341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:29 compute-0 python3.9[121338]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:29 compute-0 sudo[121320]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:29 compute-0 podman[121430]: 2026-01-31 07:13:29.916837659 +0000 UTC m=+0.042612062 container create f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:13:29 compute-0 systemd[1]: Started libpod-conmon-f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7.scope.
Jan 31 07:13:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:13:29 compute-0 podman[121430]: 2026-01-31 07:13:29.894878621 +0000 UTC m=+0.020653024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:13:29 compute-0 podman[121430]: 2026-01-31 07:13:29.99002804 +0000 UTC m=+0.115802443 container init f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:13:29 compute-0 podman[121430]: 2026-01-31 07:13:29.999676098 +0000 UTC m=+0.125450491 container start f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:13:30 compute-0 bold_lamport[121497]: 167 167
Jan 31 07:13:30 compute-0 systemd[1]: libpod-f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7.scope: Deactivated successfully.
Jan 31 07:13:30 compute-0 conmon[121497]: conmon f50e1dfd8a51b2400c3b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7.scope/container/memory.events
Jan 31 07:13:30 compute-0 podman[121430]: 2026-01-31 07:13:30.00909295 +0000 UTC m=+0.134867383 container attach f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:13:30 compute-0 podman[121430]: 2026-01-31 07:13:30.009538242 +0000 UTC m=+0.135312645 container died f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-646691865cfb97a0fba845fd7aa6e3954cb3ef6bee45d2801c35101e19d2feb2-merged.mount: Deactivated successfully.
Jan 31 07:13:30 compute-0 podman[121430]: 2026-01-31 07:13:30.050818048 +0000 UTC m=+0.176592461 container remove f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:13:30 compute-0 systemd[1]: libpod-conmon-f50e1dfd8a51b2400c3ba1591bfc2019cec94eb98c1c14ec678b2e2e6e09bbf7.scope: Deactivated successfully.
Jan 31 07:13:30 compute-0 sudo[121604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slrhrcyqxqgbtvjywhjapyohfslvzyda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843609.9113464-467-169000305329704/AnsiballZ_systemd.py'
Jan 31 07:13:30 compute-0 sudo[121604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:30 compute-0 podman[121592]: 2026-01-31 07:13:30.156475008 +0000 UTC m=+0.037597127 container create 4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:13:30 compute-0 systemd[1]: Started libpod-conmon-4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5.scope.
Jan 31 07:13:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4353e29e73dce7631dab38866d0267623f8e800719bb810f9f8851b947e317/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4353e29e73dce7631dab38866d0267623f8e800719bb810f9f8851b947e317/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4353e29e73dce7631dab38866d0267623f8e800719bb810f9f8851b947e317/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4353e29e73dce7631dab38866d0267623f8e800719bb810f9f8851b947e317/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:30 compute-0 podman[121592]: 2026-01-31 07:13:30.240322214 +0000 UTC m=+0.121444363 container init 4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_blackburn, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:13:30 compute-0 podman[121592]: 2026-01-31 07:13:30.139398251 +0000 UTC m=+0.020520430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:13:30 compute-0 podman[121592]: 2026-01-31 07:13:30.252231434 +0000 UTC m=+0.133353563 container start 4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:13:30 compute-0 podman[121592]: 2026-01-31 07:13:30.259445127 +0000 UTC m=+0.140567296 container attach 4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_blackburn, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:13:30 compute-0 python3.9[121613]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:13:30 compute-0 systemd[1]: Reloading.
Jan 31 07:13:30 compute-0 systemd-rc-local-generator[121642]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:13:30 compute-0 systemd-sysv-generator[121645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:13:30 compute-0 ceph-mon[74496]: pgmap v368: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:30 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 07:13:30 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:13:30 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:13:30 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 07:13:30 compute-0 sudo[121604]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]: {
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:     "0": [
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:         {
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "devices": [
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "/dev/loop3"
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             ],
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "lv_name": "ceph_lv0",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "lv_size": "7511998464",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "name": "ceph_lv0",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "tags": {
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.cluster_name": "ceph",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.crush_device_class": "",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.encrypted": "0",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.osd_id": "0",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.type": "block",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:                 "ceph.vdo": "0"
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             },
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "type": "block",
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:             "vg_name": "ceph_vg0"
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:         }
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]:     ]
Jan 31 07:13:30 compute-0 zealous_blackburn[121616]: }
Jan 31 07:13:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:30.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:31 compute-0 systemd[1]: libpod-4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5.scope: Deactivated successfully.
Jan 31 07:13:31 compute-0 podman[121592]: 2026-01-31 07:13:31.003758496 +0000 UTC m=+0.884880625 container died 4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba4353e29e73dce7631dab38866d0267623f8e800719bb810f9f8851b947e317-merged.mount: Deactivated successfully.
Jan 31 07:13:31 compute-0 podman[121592]: 2026-01-31 07:13:31.068341286 +0000 UTC m=+0.949463435 container remove 4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:13:31 compute-0 systemd[1]: libpod-conmon-4b965b033ff514a78d8813a40b903f0011d784e1f615e74450f76ea8b19109e5.scope: Deactivated successfully.
Jan 31 07:13:31 compute-0 sudo[121341]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:31 compute-0 sudo[121705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:31 compute-0 sudo[121705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:31 compute-0 sudo[121705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:31.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:31 compute-0 sudo[121731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:13:31 compute-0 sudo[121731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:31 compute-0 sudo[121731]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:31 compute-0 sudo[121756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:31 compute-0 sudo[121756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:31 compute-0 sudo[121756]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:31 compute-0 sudo[121781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:13:31 compute-0 sudo[121781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:31 compute-0 podman[121846]: 2026-01-31 07:13:31.655177996 +0000 UTC m=+0.048761118 container create 7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:13:31 compute-0 systemd[1]: Started libpod-conmon-7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f.scope.
Jan 31 07:13:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:13:31 compute-0 podman[121846]: 2026-01-31 07:13:31.636593248 +0000 UTC m=+0.030176360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:13:31 compute-0 podman[121846]: 2026-01-31 07:13:31.74119927 +0000 UTC m=+0.134782442 container init 7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:13:31 compute-0 podman[121846]: 2026-01-31 07:13:31.748193117 +0000 UTC m=+0.141776209 container start 7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:13:31 compute-0 elated_sutherland[121863]: 167 167
Jan 31 07:13:31 compute-0 systemd[1]: libpod-7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f.scope: Deactivated successfully.
Jan 31 07:13:31 compute-0 podman[121846]: 2026-01-31 07:13:31.75351533 +0000 UTC m=+0.147098412 container attach 7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:13:31 compute-0 podman[121846]: 2026-01-31 07:13:31.754152676 +0000 UTC m=+0.147735778 container died 7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-373826a281933e88f156882351cd7ae1d0be175bef759af9b38a22913f2cbaaa-merged.mount: Deactivated successfully.
Jan 31 07:13:31 compute-0 podman[121846]: 2026-01-31 07:13:31.801400672 +0000 UTC m=+0.194983794 container remove 7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 31 07:13:31 compute-0 systemd[1]: libpod-conmon-7dc092c10a166cbea40b1877a2c63e39eee5550b164caa9f42c52a0ffaae399f.scope: Deactivated successfully.
Jan 31 07:13:31 compute-0 podman[121937]: 2026-01-31 07:13:31.93160025 +0000 UTC m=+0.039739285 container create c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:13:31 compute-0 systemd[1]: Started libpod-conmon-c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377.scope.
Jan 31 07:13:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff711d18f5d3c3fd9cb82e02fa485a9ad28ffdd4f722e1625a2d9ba042238c34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff711d18f5d3c3fd9cb82e02fa485a9ad28ffdd4f722e1625a2d9ba042238c34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff711d18f5d3c3fd9cb82e02fa485a9ad28ffdd4f722e1625a2d9ba042238c34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff711d18f5d3c3fd9cb82e02fa485a9ad28ffdd4f722e1625a2d9ba042238c34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:13:32 compute-0 podman[121937]: 2026-01-31 07:13:31.913788383 +0000 UTC m=+0.021927448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:13:32 compute-0 podman[121937]: 2026-01-31 07:13:32.027650023 +0000 UTC m=+0.135789168 container init c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:13:32 compute-0 podman[121937]: 2026-01-31 07:13:32.036739696 +0000 UTC m=+0.144878751 container start c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:13:32 compute-0 podman[121937]: 2026-01-31 07:13:32.045526832 +0000 UTC m=+0.153665907 container attach c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:13:32 compute-0 python3.9[122033]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:13:32 compute-0 network[122050]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:13:32 compute-0 network[122051]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:13:32 compute-0 network[122052]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:13:32 compute-0 ceph-mon[74496]: pgmap v369: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:32 compute-0 priceless_williams[121955]: {
Jan 31 07:13:32 compute-0 priceless_williams[121955]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:13:32 compute-0 priceless_williams[121955]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:13:32 compute-0 priceless_williams[121955]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:13:32 compute-0 priceless_williams[121955]:         "osd_id": 0,
Jan 31 07:13:32 compute-0 priceless_williams[121955]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:13:32 compute-0 priceless_williams[121955]:         "type": "bluestore"
Jan 31 07:13:32 compute-0 priceless_williams[121955]:     }
Jan 31 07:13:32 compute-0 priceless_williams[121955]: }
Jan 31 07:13:32 compute-0 podman[121937]: 2026-01-31 07:13:32.764464541 +0000 UTC m=+0.872603576 container died c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:13:32 compute-0 systemd[1]: libpod-c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377.scope: Deactivated successfully.
Jan 31 07:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff711d18f5d3c3fd9cb82e02fa485a9ad28ffdd4f722e1625a2d9ba042238c34-merged.mount: Deactivated successfully.
Jan 31 07:13:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:32.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:33 compute-0 podman[121937]: 2026-01-31 07:13:33.024701872 +0000 UTC m=+1.132840907 container remove c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:13:33 compute-0 systemd[1]: libpod-conmon-c1ba001fb65621a3f11133e213b4ba1d27d4d8495ed641ad148ac2fa16b99377.scope: Deactivated successfully.
Jan 31 07:13:33 compute-0 sudo[121781]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:13:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:13:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 01d73f9b-ef24-409f-81a6-6dbececbda74 does not exist
Jan 31 07:13:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e38bb5e9-f9e0-43f2-b19f-9c5d08716f2e does not exist
Jan 31 07:13:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 97a03b90-bed5-4611-b7be-2f700557759e does not exist
Jan 31 07:13:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:33.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:33 compute-0 sudo[122095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:33 compute-0 sudo[122095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:33 compute-0 sudo[122095]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:33 compute-0 sudo[122120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:13:33 compute-0 sudo[122120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:33 compute-0 sudo[122120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:13:34 compute-0 ceph-mon[74496]: pgmap v370: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:13:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:13:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:35.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:35.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:36 compute-0 ceph-mon[74496]: pgmap v371: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:37.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:37.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:37 compute-0 sudo[122396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stbiopfqugqwckfjlgyhdzicbogtzqkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843617.5251722-545-8050724164224/AnsiballZ_stat.py'
Jan 31 07:13:37 compute-0 sudo[122396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:37 compute-0 python3.9[122398]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:37 compute-0 sudo[122396]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:38 compute-0 sudo[122474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yujpulnzkidlgzbbdnjwhsgwpndfkwsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843617.5251722-545-8050724164224/AnsiballZ_file.py'
Jan 31 07:13:38 compute-0 sudo[122474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:38 compute-0 python3.9[122476]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:38 compute-0 sudo[122474]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:38 compute-0 ceph-mon[74496]: pgmap v372: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:39 compute-0 sudo[122649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbhskwckvfhgglxcjumnzxeuqnsbwegp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843618.8911936-584-227679818512772/AnsiballZ_file.py'
Jan 31 07:13:39 compute-0 sudo[122649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:39 compute-0 sudo[122607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:39 compute-0 sudo[122607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:39 compute-0 sudo[122607]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:39 compute-0 sudo[122655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:39.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:39 compute-0 sudo[122655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:39 compute-0 sudo[122655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:39 compute-0 python3.9[122652]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:39.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:39 compute-0 sudo[122649]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:39 compute-0 sudo[122829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceyupyhksvukwesukrmzhwedsfvscrik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843619.4929142-608-253827735230990/AnsiballZ_stat.py'
Jan 31 07:13:39 compute-0 sudo[122829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:39 compute-0 python3.9[122831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:39 compute-0 sudo[122829]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:40 compute-0 sudo[122907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dexwiyghciuaahxscccvcsvmeeavrvet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843619.4929142-608-253827735230990/AnsiballZ_file.py'
Jan 31 07:13:40 compute-0 sudo[122907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:40 compute-0 python3.9[122909]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:40 compute-0 sudo[122907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:40 compute-0 ceph-mon[74496]: pgmap v373: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:41.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:41 compute-0 sudo[123060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwadmglrbdctgosgibfvgwfstqzobqcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843620.7967927-653-205920241454413/AnsiballZ_timezone.py'
Jan 31 07:13:41 compute-0 sudo[123060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:41 compute-0 python3.9[123062]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 07:13:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:41 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 07:13:41 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 07:13:41 compute-0 sudo[123060]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:42 compute-0 sudo[123216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvwdpmgugeqdflynpxtzrertulekgrwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843621.8420777-680-86529121873051/AnsiballZ_file.py'
Jan 31 07:13:42 compute-0 sudo[123216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:42 compute-0 python3.9[123218]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:42 compute-0 sudo[123216]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:42 compute-0 ceph-mon[74496]: pgmap v374: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:42 compute-0 sudo[123368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuhbgbhcrqeqyfyzlhklthritunpwhiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843622.6473336-704-228590760674754/AnsiballZ_stat.py'
Jan 31 07:13:42 compute-0 sudo[123368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:43.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:43 compute-0 python3.9[123370]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:43 compute-0 sudo[123368]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:43 compute-0 sudo[123447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyypsnqchbxzmiykzqvintfjewiowzqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843622.6473336-704-228590760674754/AnsiballZ_file.py'
Jan 31 07:13:43 compute-0 sudo[123447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:43 compute-0 python3.9[123449]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:43 compute-0 sudo[123447]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:44 compute-0 ceph-mon[74496]: pgmap v375: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:44 compute-0 sudo[123599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkravksewzlmhmvktfpcdtjurshangmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843623.9407382-740-190649637184283/AnsiballZ_stat.py'
Jan 31 07:13:44 compute-0 sudo[123599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:44 compute-0 python3.9[123601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:44 compute-0 sudo[123599]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:44 compute-0 sudo[123677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezfyqsfgtnksylkfmlrevbwqverjdclt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843623.9407382-740-190649637184283/AnsiballZ_file.py'
Jan 31 07:13:44 compute-0 sudo[123677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:44 compute-0 python3.9[123679]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qaj1_vzn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:44 compute-0 sudo[123677]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:45.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:45 compute-0 sudo[123830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yphzvvuwabjcclrvaklxpndxuvnratcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843625.1141136-776-51506978186780/AnsiballZ_stat.py'
Jan 31 07:13:45 compute-0 sudo[123830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:45 compute-0 python3.9[123832]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:45 compute-0 sudo[123830]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:45 compute-0 sudo[123908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uebowhlrgbaykalfhykwdvxjpfrkpakr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843625.1141136-776-51506978186780/AnsiballZ_file.py'
Jan 31 07:13:45 compute-0 sudo[123908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:45 compute-0 python3.9[123910]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:45 compute-0 sudo[123908]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:46 compute-0 ceph-mon[74496]: pgmap v376: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:46 compute-0 sudo[124060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alzgpmsxcpsgzjfccmrlbclgvpaplfwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843626.275414-815-13111742334964/AnsiballZ_command.py'
Jan 31 07:13:46 compute-0 sudo[124060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:46 compute-0 python3.9[124062]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:13:46 compute-0 sudo[124060]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:47.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:47.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:47 compute-0 sudo[124214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffnsluvraydumkkmyyzcobwfqmfhzwnc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843627.1461146-839-56880485636893/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:13:47 compute-0 sudo[124214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:47 compute-0 python3[124216]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:13:47 compute-0 sudo[124214]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:48 compute-0 sudo[124366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anhlruogqdkbnjvkmjmxetexewklhglg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843628.030066-863-181057310890602/AnsiballZ_stat.py'
Jan 31 07:13:48 compute-0 sudo[124366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:48 compute-0 python3.9[124368]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:48 compute-0 sudo[124366]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:48 compute-0 ceph-mon[74496]: pgmap v377: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:48 compute-0 sudo[124444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rflgzltijkkmzwtjxisgkwdfiwnuonlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843628.030066-863-181057310890602/AnsiballZ_file.py'
Jan 31 07:13:48 compute-0 sudo[124444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:49 compute-0 python3.9[124446]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:49 compute-0 sudo[124444]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:49.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:49.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:49 compute-0 sudo[124597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nufdmejtvgxxcosrtisxggpjamkyozsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843629.1496985-899-216843832953163/AnsiballZ_stat.py'
Jan 31 07:13:49 compute-0 sudo[124597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:49 compute-0 python3.9[124599]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:49 compute-0 sudo[124597]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:13:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:13:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:13:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:13:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:13:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:13:50 compute-0 sudo[124722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmkfdnmzurdkxbtmforwvlfohkmeaxpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843629.1496985-899-216843832953163/AnsiballZ_copy.py'
Jan 31 07:13:50 compute-0 sudo[124722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:50 compute-0 python3.9[124724]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843629.1496985-899-216843832953163/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:50 compute-0 sudo[124722]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:50 compute-0 ceph-mon[74496]: pgmap v378: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:50 compute-0 sudo[124874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voenznapasjksgtrqligqfmqlpgdgeud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843630.5255835-944-27085603606721/AnsiballZ_stat.py'
Jan 31 07:13:50 compute-0 sudo[124874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:50 compute-0 python3.9[124876]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:51 compute-0 sudo[124874]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:51 compute-0 sudo[124953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldvqkkmofhtukmltcsqhchvwftvpbmuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843630.5255835-944-27085603606721/AnsiballZ_file.py'
Jan 31 07:13:51 compute-0 sudo[124953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:51.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:51 compute-0 python3.9[124955]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:51 compute-0 sudo[124953]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:51 compute-0 sudo[125105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgupkjwyrrjrmbzskhdxlapllxwostti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843631.5979543-980-216135131803554/AnsiballZ_stat.py'
Jan 31 07:13:51 compute-0 sudo[125105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:52 compute-0 python3.9[125107]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:52 compute-0 sudo[125105]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:52 compute-0 sudo[125183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufrituzvqsasnyfytyweyjbiboicncbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843631.5979543-980-216135131803554/AnsiballZ_file.py'
Jan 31 07:13:52 compute-0 sudo[125183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:52 compute-0 python3.9[125185]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:52 compute-0 sudo[125183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:52 compute-0 ceph-mon[74496]: pgmap v379: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:53 compute-0 sudo[125335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzikooffudrrqhkbybdcdjlpxupzhtab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843632.7853887-1016-189825639449332/AnsiballZ_stat.py'
Jan 31 07:13:53 compute-0 sudo[125335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:13:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:53.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:13:53 compute-0 python3.9[125337]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:13:53 compute-0 sudo[125335]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:53.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:53 compute-0 sudo[125414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvtjqrtknkvlhgfdfvrzpozqagppkoln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843632.7853887-1016-189825639449332/AnsiballZ_file.py'
Jan 31 07:13:53 compute-0 sudo[125414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:53 compute-0 python3.9[125416]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:53 compute-0 sudo[125414]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:54 compute-0 sudo[125566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shaakajvyjwtftgqhxbyyujuszhspote ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843634.064688-1055-276385541454765/AnsiballZ_command.py'
Jan 31 07:13:54 compute-0 sudo[125566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:54 compute-0 python3.9[125568]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:13:54 compute-0 sudo[125566]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:54 compute-0 ceph-mon[74496]: pgmap v380: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:13:55 compute-0 sudo[125723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnrhsutexaycdxcgscopnewxvikatycx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843634.7676215-1079-66752500156895/AnsiballZ_blockinfile.py'
Jan 31 07:13:55 compute-0 sudo[125723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:55 compute-0 sshd-session[125648]: Invalid user funded from 45.148.10.240 port 50384
Jan 31 07:13:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:13:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:55.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:13:55 compute-0 sshd-session[125648]: Connection closed by invalid user funded 45.148.10.240 port 50384 [preauth]
Jan 31 07:13:55 compute-0 python3.9[125725]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:55 compute-0 sudo[125723]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:55.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:55 compute-0 sudo[125876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwabwshnminilutoemlfijvwqphadqne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843635.6376958-1106-122678526190549/AnsiballZ_file.py'
Jan 31 07:13:55 compute-0 sudo[125876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:56 compute-0 python3.9[125878]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:56 compute-0 sudo[125876]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:56 compute-0 sudo[126028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljiyqtykiclzycpznrlcvyyuzsmnvmdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843636.228168-1106-204021863175126/AnsiballZ_file.py'
Jan 31 07:13:56 compute-0 sudo[126028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:56 compute-0 python3.9[126030]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:13:56 compute-0 sudo[126028]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:56 compute-0 ceph-mon[74496]: pgmap v381: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:13:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:57.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:13:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:57.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:57 compute-0 sudo[126181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlzxgkxffofwokyykouhaxorqaxziwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843637.029362-1151-80538191921299/AnsiballZ_mount.py'
Jan 31 07:13:57 compute-0 sudo[126181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:57 compute-0 python3.9[126183]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:13:57 compute-0 sudo[126181]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:58 compute-0 sudo[126333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzudszcituclezapmobvsiftyimnjeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843637.8476486-1151-74325010426953/AnsiballZ_mount.py'
Jan 31 07:13:58 compute-0 sudo[126333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:13:58 compute-0 python3.9[126335]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 07:13:58 compute-0 sudo[126333]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:58 compute-0 ceph-mon[74496]: pgmap v382: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:58 compute-0 sshd-session[118188]: Connection closed by 192.168.122.30 port 45974
Jan 31 07:13:58 compute-0 sshd-session[118185]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:13:58 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 07:13:58 compute-0 systemd[1]: session-40.scope: Consumed 25.565s CPU time.
Jan 31 07:13:58 compute-0 systemd-logind[816]: Session 40 logged out. Waiting for processes to exit.
Jan 31 07:13:58 compute-0 systemd-logind[816]: Removed session 40.
Jan 31 07:13:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:13:59.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:59 compute-0 sudo[126361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:59 compute-0 sudo[126361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:59 compute-0 sudo[126361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:59 compute-0 sudo[126386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:13:59 compute-0 sudo[126386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:13:59 compute-0 sudo[126386]: pam_unix(sudo:session): session closed for user root
Jan 31 07:13:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:13:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:13:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:13:59.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:13:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:13:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:00 compute-0 ceph-mon[74496]: pgmap v383: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:01.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:01.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:02 compute-0 ceph-mon[74496]: pgmap v384: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:14:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:03.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:14:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:04 compute-0 sshd-session[126413]: Accepted publickey for zuul from 192.168.122.30 port 33428 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:14:04 compute-0 systemd-logind[816]: New session 41 of user zuul.
Jan 31 07:14:04 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 31 07:14:04 compute-0 sshd-session[126413]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:14:04 compute-0 sudo[126566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxupouotitnlrlejoavhjzdqydtgobcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843644.1350267-23-20796171570444/AnsiballZ_tempfile.py'
Jan 31 07:14:04 compute-0 sudo[126566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:04 compute-0 python3.9[126568]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 07:14:04 compute-0 sudo[126566]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:04 compute-0 ceph-mon[74496]: pgmap v385: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:05.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:05 compute-0 sudo[126719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-relhttvzpzfgcpspsdzsaosrpbpoupft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843644.8767638-59-32636911517444/AnsiballZ_stat.py'
Jan 31 07:14:05 compute-0 sudo[126719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:05.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:05 compute-0 python3.9[126721]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:14:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:05 compute-0 sudo[126719]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:06 compute-0 sudo[126873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sarpnznkabpnpswksihqnpwmdiebazjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843645.7044256-83-68080129612861/AnsiballZ_slurp.py'
Jan 31 07:14:06 compute-0 sudo[126873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:06 compute-0 python3.9[126875]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 07:14:06 compute-0 sudo[126873]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:06 compute-0 sudo[127025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhzlplhsfeodcpidsldwxmkyhuxmgsax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843646.567429-107-244952100565198/AnsiballZ_stat.py'
Jan 31 07:14:06 compute-0 sudo[127025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:06 compute-0 ceph-mon[74496]: pgmap v386: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:07 compute-0 python3.9[127027]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.68o_ood4 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:07 compute-0 sudo[127025]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:07.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:07 compute-0 sudo[127151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebfxswjazqsylwgfjsshrratzltjsev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843646.567429-107-244952100565198/AnsiballZ_copy.py'
Jan 31 07:14:07 compute-0 sudo[127151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:07 compute-0 python3.9[127153]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.68o_ood4 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843646.567429-107-244952100565198/.source.68o_ood4 _original_basename=.ki1s_2gg follow=False checksum=a4502e4e8f59847dd2b7c5f9ecd52d55f7558ce1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:07 compute-0 sudo[127151]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:08 compute-0 ceph-mon[74496]: pgmap v387: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:08 compute-0 sudo[127303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cshllybnqgnvkmngdbjcfcajdvhavhfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843647.9596026-152-254381487637415/AnsiballZ_setup.py'
Jan 31 07:14:08 compute-0 sudo[127303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:08 compute-0 python3.9[127305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:14:08 compute-0 sudo[127303]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:09.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:09.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:09 compute-0 sudo[127456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjyecrtijbiqzrfsupeppjfeisolxlcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843649.341277-177-140879370080506/AnsiballZ_blockinfile.py'
Jan 31 07:14:09 compute-0 sudo[127456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:09 compute-0 python3.9[127458]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwK9tbwI1sVhVFn3RGaEAgpi2689y9VdIyBp+cw+RWFupGnK46xr4HB/N67Aw+A+3FJtEl1Zq1cnt3Gy8PYb6XnLd4xH/NFtUI3ukhekrtKvSmysEjpRGIamjt1BkH4Lxh79PNkk13AVMQN92Wo271/fHEvcV7HaC0Q5VypZMd+77ZvI9NuEG1nofpvI8+32YECZBLpoC5KQK7EibqD9MUR2OmapGZhV+5B5jdb0ZvNb966Q0kwAGV8E+xgHSVnh5eCWC8oxgWkycmQd2co9E79fiIHEioABE9aDUGKw0+nsZ7HrvjG/ENeg5C6fjdJE4MsPq3FNHAiTCQPZ7QZgv/CSudt7WYyLTztGL9ksWqaTUeDocKVKPlJlzGrn/TXgMoix8+qbFzxVixIROb2nqElyEy6mo0Xxt2b4aisil9ZQhWVMQY0hGX5vtVv0E6+svzjSTfkyZolbjyRsolJF4pH7+klLEmlWGDlgSoCDZeK/XEi7xq3yaCymuWtX2fAX8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICzJ5+1VSPloOqHhejNen2lHjfV4Hvj7nbRbNJjS6dtd
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBECc0+1u2G3haTNDUnwK7F3+bqZqLNjR6ayEsOJcH6U6RkqhSd2eAlivxlw9dfPuir2TFrYzGTtSXuJ8iauDAtQ=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlvTGYGifalEmozttYlZ79wRHZPo6p3FfxUn+H8fCt//gLYJvHB9ygqCWO8F06xZhwaSJlU3R5k49AFtcq6rCaf4D9FuDYpYU5B1qGxpqY2S/6r/PmC9TmJJe6DJfuIf95os5YrDLR82BbT8dLFvu76PfZiMt0+kvm9gj1Q6XCUTgIsIvY9pyPySu0V4JDeT8EBgROR7WA5Fev80wO2/RlFXH9xVIupO8rswjwWPuIXoua1w44d35HWWHBdMAFXeZZMopWHWwY+fIlyz4B8y/TWDow7KZxG9GHKZ04e73/RA972Gub2LC0SlBFsBqaSnub8ooOcA3jZ3R2bjHAVkZvLgCK9UFSgwvvfyOWxtkJgj5KalAy9vZeGQ02ndAPNkQ6B1GnnRHaR5yGPG78q9Nd8RDmzhTr1iwnYLHhup04nAUnUDw5ubZFyF9bW1KQWvDv+4cfFeT8mhARMCxu7Imzne5FDq9OZAA9VLfnA26YFT0MpGjGl332cx20iz3Z4IU=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDBM/OyT9HQGjLM76vSXpTFer+lkr//u0v4BsUk+Rcai
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPttGgqMF5HnqNXeajmhgAAhQFj1yReXfFmUGT6cv24PcfDX+VeASpBgDGWJKvbu1EgrSPUu2R8sDzajVI5+ETk=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDir5Ux7IUuKTsqwrpZRFpieFX7Hi9Bsaw7N3jCiMd+vuHlEKHLX54HbyTIVnox1XbNjeYynLRRz7VKBfder8IEerGmST/uWuX5FOdve7vDdY++9J6qYkj1Gf6v6BGp8BT97bbPdvaQdLP6YS2jFEfOz4s0oJkgr8dsHjPU70e1P0b7vKxqo3z/E/XCe2BUGEv5j/z9GTl2oQ9/KoTvahfr6qfonnQK9E0gsJKDB9S1UPNFkJUxvVPfKfEao207dmT8EmQL2ZdwDwecA2Mg0SneGaNmEFWDW4CWQjdbHuikc3vsZ1do7kzq2+tz+WLEXqdb4Ig4S0OfV/MAcaC/C1DRfZHxZN3vSayrm99nFc8oPaLnRtT8Jz1dVonMOpwLm3xMm6nAeGNTzM0ImTrJTusVmKNRQI3x6VPiEcWdKNvN5sVcrN9uyINDMuzpXIxc1LmpmR/338EfP4HYhfsTqdM0worzzewvh2XhAVxQAiNYRRUbLvR4/EE5SjXTjSA4ID0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ8oYZpZvdB1n917+wvTxetgtueloCox+7yBQBW8LHZX
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCEH62xmPSqzu7EFth8e8ITel7fLvoU9FKlxQN/eSXzUuR/7sZGPhcgLzjrJmEcn4Za0K2VNu6+z559d/AEJY2U=
                                              create=True mode=0644 path=/tmp/ansible.68o_ood4 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:09 compute-0 sudo[127456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:10 compute-0 sudo[127608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbiucamhlrfgmirrbxpugitvmvksbrsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843650.1300306-201-87285776528681/AnsiballZ_command.py'
Jan 31 07:14:10 compute-0 sudo[127608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:10 compute-0 ceph-mon[74496]: pgmap v388: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:10 compute-0 python3.9[127610]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.68o_ood4' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:14:10 compute-0 sudo[127608]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:11.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:11 compute-0 sudo[127763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogenutujecemwflcxhoqiepkckojokoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843650.888215-225-228223455457594/AnsiballZ_file.py'
Jan 31 07:14:11 compute-0 sudo[127763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:11.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:11 compute-0 python3.9[127765]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.68o_ood4 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:11 compute-0 sudo[127763]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:11 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 07:14:11 compute-0 sshd-session[126416]: Connection closed by 192.168.122.30 port 33428
Jan 31 07:14:11 compute-0 sshd-session[126413]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:14:11 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 07:14:11 compute-0 systemd[1]: session-41.scope: Consumed 4.296s CPU time.
Jan 31 07:14:11 compute-0 systemd-logind[816]: Session 41 logged out. Waiting for processes to exit.
Jan 31 07:14:11 compute-0 systemd-logind[816]: Removed session 41.
Jan 31 07:14:12 compute-0 ceph-mon[74496]: pgmap v389: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:13.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:13.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:13 compute-0 sshd-session[71471]: Received disconnect from 38.129.56.250 port 47718:11: disconnected by user
Jan 31 07:14:13 compute-0 sshd-session[71471]: Disconnected from user zuul 38.129.56.250 port 47718
Jan 31 07:14:13 compute-0 sshd-session[71468]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:14:13 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 07:14:13 compute-0 systemd[1]: session-18.scope: Consumed 1min 11.275s CPU time.
Jan 31 07:14:13 compute-0 systemd-logind[816]: Session 18 logged out. Waiting for processes to exit.
Jan 31 07:14:13 compute-0 systemd-logind[816]: Removed session 18.
Jan 31 07:14:14 compute-0 ceph-mon[74496]: pgmap v390: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:15.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:15.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:16 compute-0 ceph-mon[74496]: pgmap v391: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:17.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:17 compute-0 sshd-session[127795]: Accepted publickey for zuul from 192.168.122.30 port 40956 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:14:17 compute-0 systemd-logind[816]: New session 42 of user zuul.
Jan 31 07:14:17 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 31 07:14:18 compute-0 sshd-session[127795]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:14:18 compute-0 ceph-mon[74496]: pgmap v392: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:19 compute-0 python3.9[127948]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:14:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:19.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:19 compute-0 sudo[127954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:19 compute-0 sudo[127954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:19 compute-0 sudo[127954]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:19 compute-0 sudo[127979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:19 compute-0 sudo[127979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:19 compute-0 sudo[127979]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:14:19
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'vms']
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:14:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:14:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:14:20 compute-0 sudo[128153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iquxvjlrvazbcnamyncbejakqltmldox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843659.6805193-56-93859978993393/AnsiballZ_systemd.py'
Jan 31 07:14:20 compute-0 sudo[128153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:21 compute-0 python3.9[128155]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 07:14:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:21.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:21 compute-0 sudo[128153]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:21.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:21 compute-0 sudo[128308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njiivrxmlbbjiyrdglmkaktxzwnajiuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843661.5513117-80-152449423115867/AnsiballZ_systemd.py'
Jan 31 07:14:21 compute-0 sudo[128308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:22 compute-0 python3.9[128310]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:14:22 compute-0 sudo[128308]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:23 compute-0 sudo[128461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coepbajeuxcvnzxqczzryxgvpgrdrspx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843662.6177456-107-128875904506247/AnsiballZ_command.py'
Jan 31 07:14:23 compute-0 sudo[128461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:23.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:23 compute-0 python3.9[128463]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:14:23 compute-0 sudo[128461]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:23 compute-0 ceph-mon[74496]: pgmap v393: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:24 compute-0 sudo[128616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aookvgbklykwhlfgbwelxsrzpkyegepe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843663.7920532-131-51240624034376/AnsiballZ_stat.py'
Jan 31 07:14:24 compute-0 sudo[128616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:24 compute-0 python3.9[128618]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:14:24 compute-0 sudo[128616]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:25.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:25 compute-0 ceph-mon[74496]: pgmap v394: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:25 compute-0 ceph-mon[74496]: pgmap v395: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:25 compute-0 sudo[128769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bonytkoflpjvzgkzincrlawpdwcsinhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843664.9344757-158-91543522717416/AnsiballZ_file.py'
Jan 31 07:14:25 compute-0 sudo[128769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:25.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:25 compute-0 python3.9[128771]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:25 compute-0 sudo[128769]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:26 compute-0 sshd-session[127798]: Connection closed by 192.168.122.30 port 40956
Jan 31 07:14:26 compute-0 sshd-session[127795]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:14:26 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 07:14:26 compute-0 systemd[1]: session-42.scope: Consumed 3.508s CPU time.
Jan 31 07:14:26 compute-0 systemd-logind[816]: Session 42 logged out. Waiting for processes to exit.
Jan 31 07:14:26 compute-0 systemd-logind[816]: Removed session 42.
Jan 31 07:14:26 compute-0 ceph-mon[74496]: pgmap v396: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:27.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:27.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:28 compute-0 ceph-mon[74496]: pgmap v397: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:29.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:29.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:29.932165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669932207, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 777, "num_deletes": 255, "total_data_size": 1173977, "memory_usage": 1197064, "flush_reason": "Manual Compaction"}
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669941180, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 749363, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9965, "largest_seqno": 10741, "table_properties": {"data_size": 746005, "index_size": 1202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8495, "raw_average_key_size": 19, "raw_value_size": 738934, "raw_average_value_size": 1726, "num_data_blocks": 53, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843603, "oldest_key_time": 1769843603, "file_creation_time": 1769843669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9067 microseconds, and 2857 cpu microseconds.
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:29.941230) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 749363 bytes OK
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:29.941252) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:29.946888) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:29.946965) EVENT_LOG_v1 {"time_micros": 1769843669946922, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:29.946995) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1170155, prev total WAL file size 1170155, number of live WAL files 2.
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:29.947867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323536' seq:0, type:0; will stop at (end)
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(731KB)], [23(9451KB)]
Jan 31 07:14:29 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843669947932, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10427636, "oldest_snapshot_seqno": -1}
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3773 keys, 7723184 bytes, temperature: kUnknown
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670005716, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7723184, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7695068, "index_size": 17574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 91687, "raw_average_key_size": 24, "raw_value_size": 7624036, "raw_average_value_size": 2020, "num_data_blocks": 769, "num_entries": 3773, "num_filter_entries": 3773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769843669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:30.005938) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7723184 bytes
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:30.007467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.2 rd, 133.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(24.2) write-amplify(10.3) OK, records in: 4271, records dropped: 498 output_compression: NoCompression
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:30.007515) EVENT_LOG_v1 {"time_micros": 1769843670007505, "job": 8, "event": "compaction_finished", "compaction_time_micros": 57857, "compaction_time_cpu_micros": 14401, "output_level": 6, "num_output_files": 1, "total_output_size": 7723184, "num_input_records": 4271, "num_output_records": 3773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670007701, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843670008479, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:29.947741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:30.008597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:30.008603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:30.008604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:30.008606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:14:30.008608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:14:30 compute-0 ceph-mon[74496]: pgmap v398: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:31.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:31.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:31 compute-0 sshd-session[128799]: Accepted publickey for zuul from 192.168.122.30 port 42022 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:14:31 compute-0 systemd-logind[816]: New session 43 of user zuul.
Jan 31 07:14:31 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 31 07:14:31 compute-0 sshd-session[128799]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:14:32 compute-0 ceph-mon[74496]: pgmap v399: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:32 compute-0 python3.9[128952]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:14:33 compute-0 sudo[129107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypskusjmcpxmbfcpibbryzyifbboncfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843672.9549024-62-162056368218225/AnsiballZ_setup.py'
Jan 31 07:14:33 compute-0 sudo[129107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:33.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:33 compute-0 sudo[129110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:33 compute-0 sudo[129110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:33 compute-0 sudo[129110]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:33 compute-0 sudo[129135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:14:33 compute-0 python3.9[129109]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:14:33 compute-0 sudo[129135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:33 compute-0 sudo[129135]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:33 compute-0 sudo[129162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:33 compute-0 sudo[129162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:33 compute-0 sudo[129162]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:33 compute-0 sudo[129192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:14:33 compute-0 sudo[129192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:33 compute-0 sudo[129107]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:34 compute-0 sudo[129192]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:34 compute-0 sudo[129322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnlgnlauvkvvqxthmouprekjcldtwwsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843672.9549024-62-162056368218225/AnsiballZ_dnf.py'
Jan 31 07:14:34 compute-0 sudo[129322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:14:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:14:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:14:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:14:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:14:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d0419700-8017-4b56-ab3f-44e45b35c88d does not exist
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b240ebb9-5802-419d-b56c-5e8d5237881a does not exist
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev df6d81fd-a3d8-4d27-a33b-dcddd4b1e6c2 does not exist
Jan 31 07:14:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:14:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:14:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:14:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:14:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:14:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:14:34 compute-0 sudo[129325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:34 compute-0 sudo[129325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:34 compute-0 sudo[129325]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:34 compute-0 sudo[129350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:14:34 compute-0 sudo[129350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:34 compute-0 sudo[129350]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:34 compute-0 sudo[129375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:34 compute-0 sudo[129375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:34 compute-0 sudo[129375]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:34 compute-0 sudo[129400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:14:34 compute-0 python3.9[129324]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 07:14:34 compute-0 sudo[129400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:14:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2376 writes, 10K keys, 2375 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2376 writes, 2375 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2376 writes, 10K keys, 2375 commit groups, 1.0 writes per commit group, ingest: 13.72 MB, 0.02 MB/s
                                           Interval WAL: 2376 writes, 2375 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     95.3      0.12              0.03         4    0.030       0      0       0.0       0.0
                                             L6      1/0    7.37 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    136.2    116.2      0.21              0.05         3    0.069     12K   1310       0.0       0.0
                                            Sum      1/0    7.37 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     86.0    108.5      0.33              0.08         7    0.047     12K   1310       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     87.2    109.8      0.32              0.08         6    0.054     12K   1310       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    136.2    116.2      0.21              0.05         3    0.069     12K   1310       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     98.5      0.12              0.03         3    0.039       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 308.00 MB usage: 1.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(53,911.67 KB,0.28906%) FilterBlock(8,41.73 KB,0.0132325%) IndexBlock(8,92.06 KB,0.0291899%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:14:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:14:34 compute-0 ceph-mon[74496]: pgmap v400: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:14:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:14:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:14:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:14:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:14:34 compute-0 podman[129465]: 2026-01-31 07:14:34.645970439 +0000 UTC m=+0.051589951 container create b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elgamal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:14:34 compute-0 systemd[1]: Started libpod-conmon-b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce.scope.
Jan 31 07:14:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:14:34 compute-0 podman[129465]: 2026-01-31 07:14:34.625990582 +0000 UTC m=+0.031610124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:14:34 compute-0 podman[129465]: 2026-01-31 07:14:34.727693795 +0000 UTC m=+0.133313327 container init b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elgamal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:14:34 compute-0 podman[129465]: 2026-01-31 07:14:34.736057733 +0000 UTC m=+0.141677245 container start b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elgamal, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:14:34 compute-0 podman[129465]: 2026-01-31 07:14:34.740508653 +0000 UTC m=+0.146128205 container attach b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:14:34 compute-0 cool_elgamal[129481]: 167 167
Jan 31 07:14:34 compute-0 systemd[1]: libpod-b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce.scope: Deactivated successfully.
Jan 31 07:14:34 compute-0 podman[129465]: 2026-01-31 07:14:34.744321938 +0000 UTC m=+0.149941460 container died b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 31 07:14:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-11a3b548ebaca13713fec5d73b9ae842e865188abd3dc3b2326232729c02d3f3-merged.mount: Deactivated successfully.
Jan 31 07:14:34 compute-0 podman[129465]: 2026-01-31 07:14:34.786557866 +0000 UTC m=+0.192177378 container remove b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:14:34 compute-0 systemd[1]: libpod-conmon-b65ac0ca87615998adbb7e39d04e9946bb31abde0c29224f70f74f421c816fce.scope: Deactivated successfully.
Jan 31 07:14:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:34 compute-0 podman[129504]: 2026-01-31 07:14:34.93270009 +0000 UTC m=+0.047560081 container create df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chandrasekhar, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:14:34 compute-0 systemd[1]: Started libpod-conmon-df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa.scope.
Jan 31 07:14:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85308ec30717d9fe81a7aaae3a3e7e085c52e7cd648f03077f015948d46c0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85308ec30717d9fe81a7aaae3a3e7e085c52e7cd648f03077f015948d46c0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85308ec30717d9fe81a7aaae3a3e7e085c52e7cd648f03077f015948d46c0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85308ec30717d9fe81a7aaae3a3e7e085c52e7cd648f03077f015948d46c0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85308ec30717d9fe81a7aaae3a3e7e085c52e7cd648f03077f015948d46c0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:35 compute-0 podman[129504]: 2026-01-31 07:14:34.91454363 +0000 UTC m=+0.029403641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:14:35 compute-0 podman[129504]: 2026-01-31 07:14:35.016579321 +0000 UTC m=+0.131439322 container init df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chandrasekhar, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:14:35 compute-0 podman[129504]: 2026-01-31 07:14:35.023559424 +0000 UTC m=+0.138419415 container start df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chandrasekhar, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:14:35 compute-0 podman[129504]: 2026-01-31 07:14:35.027442021 +0000 UTC m=+0.142302032 container attach df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:14:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:35.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:35 compute-0 compassionate_chandrasekhar[129520]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:14:35 compute-0 compassionate_chandrasekhar[129520]: --> relative data size: 1.0
Jan 31 07:14:35 compute-0 compassionate_chandrasekhar[129520]: --> All data devices are unavailable
Jan 31 07:14:35 compute-0 sudo[129322]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:35 compute-0 systemd[1]: libpod-df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa.scope: Deactivated successfully.
Jan 31 07:14:35 compute-0 podman[129504]: 2026-01-31 07:14:35.898824045 +0000 UTC m=+1.013684046 container died df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a85308ec30717d9fe81a7aaae3a3e7e085c52e7cd648f03077f015948d46c0a-merged.mount: Deactivated successfully.
Jan 31 07:14:35 compute-0 podman[129504]: 2026-01-31 07:14:35.946013926 +0000 UTC m=+1.060873927 container remove df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:14:35 compute-0 systemd[1]: libpod-conmon-df8fa897a04ef361e1d5605cbb74b0653356ae9d35f1abea6f91da8974dcb6fa.scope: Deactivated successfully.
Jan 31 07:14:35 compute-0 sudo[129400]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:36 compute-0 sudo[129586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:36 compute-0 sudo[129586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:36 compute-0 sudo[129586]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:36 compute-0 sudo[129643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:14:36 compute-0 sudo[129643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:36 compute-0 sudo[129643]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:36 compute-0 sudo[129675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:36 compute-0 sudo[129675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:36 compute-0 sudo[129675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:36 compute-0 sudo[129700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:14:36 compute-0 sudo[129700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:36 compute-0 podman[129839]: 2026-01-31 07:14:36.477080819 +0000 UTC m=+0.051425936 container create 58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shirley, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:14:36 compute-0 systemd[1]: Started libpod-conmon-58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830.scope.
Jan 31 07:14:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:14:36 compute-0 podman[129839]: 2026-01-31 07:14:36.458322874 +0000 UTC m=+0.032668021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:14:36 compute-0 podman[129839]: 2026-01-31 07:14:36.56337182 +0000 UTC m=+0.137716967 container init 58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:14:36 compute-0 podman[129839]: 2026-01-31 07:14:36.573231675 +0000 UTC m=+0.147576782 container start 58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:14:36 compute-0 compassionate_shirley[129856]: 167 167
Jan 31 07:14:36 compute-0 systemd[1]: libpod-58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830.scope: Deactivated successfully.
Jan 31 07:14:36 compute-0 python3.9[129838]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:14:36 compute-0 podman[129839]: 2026-01-31 07:14:36.584850033 +0000 UTC m=+0.159195150 container attach 58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:14:36 compute-0 podman[129839]: 2026-01-31 07:14:36.586330839 +0000 UTC m=+0.160676026 container died 58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shirley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7decbbf47c8b4e841795b3f97a3d41361b92926d239acf53f5175a69aafa1523-merged.mount: Deactivated successfully.
Jan 31 07:14:36 compute-0 podman[129839]: 2026-01-31 07:14:36.641759395 +0000 UTC m=+0.216104502 container remove 58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:14:36 compute-0 systemd[1]: libpod-conmon-58d10f909849db87e8ef78fc99e804c44066b6950c035c977fdb310493371830.scope: Deactivated successfully.
Jan 31 07:14:36 compute-0 podman[129882]: 2026-01-31 07:14:36.784724881 +0000 UTC m=+0.038865885 container create 6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:14:36 compute-0 ceph-mon[74496]: pgmap v401: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:36 compute-0 systemd[1]: Started libpod-conmon-6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d.scope.
Jan 31 07:14:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22afe153d57b8fec0f24af97a38a5ae7d3a709230e3c14bcd67a56918b6be4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22afe153d57b8fec0f24af97a38a5ae7d3a709230e3c14bcd67a56918b6be4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22afe153d57b8fec0f24af97a38a5ae7d3a709230e3c14bcd67a56918b6be4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d22afe153d57b8fec0f24af97a38a5ae7d3a709230e3c14bcd67a56918b6be4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:36 compute-0 podman[129882]: 2026-01-31 07:14:36.76773147 +0000 UTC m=+0.021872494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:14:36 compute-0 podman[129882]: 2026-01-31 07:14:36.879907092 +0000 UTC m=+0.134048106 container init 6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:14:36 compute-0 podman[129882]: 2026-01-31 07:14:36.887011608 +0000 UTC m=+0.141152612 container start 6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pascal, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:14:36 compute-0 podman[129882]: 2026-01-31 07:14:36.891003387 +0000 UTC m=+0.145144391 container attach 6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pascal, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:14:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:37.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:37.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]: {
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:     "0": [
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:         {
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "devices": [
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "/dev/loop3"
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             ],
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "lv_name": "ceph_lv0",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "lv_size": "7511998464",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "name": "ceph_lv0",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "tags": {
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.cluster_name": "ceph",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.crush_device_class": "",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.encrypted": "0",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.osd_id": "0",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.type": "block",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:                 "ceph.vdo": "0"
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             },
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "type": "block",
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:             "vg_name": "ceph_vg0"
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:         }
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]:     ]
Jan 31 07:14:37 compute-0 unruffled_pascal[129898]: }
Jan 31 07:14:37 compute-0 podman[129882]: 2026-01-31 07:14:37.628401109 +0000 UTC m=+0.882542113 container died 6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:14:37 compute-0 systemd[1]: libpod-6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d.scope: Deactivated successfully.
Jan 31 07:14:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d22afe153d57b8fec0f24af97a38a5ae7d3a709230e3c14bcd67a56918b6be4b-merged.mount: Deactivated successfully.
Jan 31 07:14:37 compute-0 podman[129882]: 2026-01-31 07:14:37.692319534 +0000 UTC m=+0.946460538 container remove 6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:14:37 compute-0 systemd[1]: libpod-conmon-6ffecc6e8e3240a169ef9adb5badd4908909a3f0195b9824a342b8e006023c4d.scope: Deactivated successfully.
Jan 31 07:14:37 compute-0 sudo[129700]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:37 compute-0 sudo[130020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:37 compute-0 sudo[130020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:37 compute-0 sudo[130020]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:37 compute-0 sudo[130045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:14:37 compute-0 sudo[130045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:37 compute-0 sudo[130045]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:37 compute-0 sudo[130094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:37 compute-0 sudo[130094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:37 compute-0 sudo[130094]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:37 compute-0 sudo[130146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:14:37 compute-0 sudo[130146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:38 compute-0 python3.9[130145]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:14:38 compute-0 podman[130232]: 2026-01-31 07:14:38.19000426 +0000 UTC m=+0.035808870 container create 33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:14:38 compute-0 systemd[1]: Started libpod-conmon-33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86.scope.
Jan 31 07:14:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:14:38 compute-0 podman[130232]: 2026-01-31 07:14:38.173659364 +0000 UTC m=+0.019463994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:14:38 compute-0 podman[130232]: 2026-01-31 07:14:38.272596849 +0000 UTC m=+0.118401459 container init 33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:14:38 compute-0 podman[130232]: 2026-01-31 07:14:38.281581111 +0000 UTC m=+0.127385711 container start 33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:14:38 compute-0 podman[130232]: 2026-01-31 07:14:38.285469798 +0000 UTC m=+0.131274428 container attach 33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gates, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 31 07:14:38 compute-0 infallible_gates[130268]: 167 167
Jan 31 07:14:38 compute-0 systemd[1]: libpod-33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86.scope: Deactivated successfully.
Jan 31 07:14:38 compute-0 podman[130232]: 2026-01-31 07:14:38.287580061 +0000 UTC m=+0.133384671 container died 33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gates, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-27f81956780f6153765c47ec917683ee4f57105490c15894c5a57116e0e16f92-merged.mount: Deactivated successfully.
Jan 31 07:14:38 compute-0 podman[130232]: 2026-01-31 07:14:38.323245425 +0000 UTC m=+0.169050035 container remove 33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gates, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:14:38 compute-0 systemd[1]: libpod-conmon-33a9f8028690f50595395fa6be52bded4a59211919510fc28d5a8b90a6c19a86.scope: Deactivated successfully.
Jan 31 07:14:38 compute-0 podman[130326]: 2026-01-31 07:14:38.48431201 +0000 UTC m=+0.054777410 container create d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:14:38 compute-0 systemd[1]: Started libpod-conmon-d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2.scope.
Jan 31 07:14:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed4142feaec6e6f0e8009ff9c60d5a5cb19dcb313488ea1d5f94fda2f32595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed4142feaec6e6f0e8009ff9c60d5a5cb19dcb313488ea1d5f94fda2f32595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed4142feaec6e6f0e8009ff9c60d5a5cb19dcb313488ea1d5f94fda2f32595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49ed4142feaec6e6f0e8009ff9c60d5a5cb19dcb313488ea1d5f94fda2f32595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:14:38 compute-0 podman[130326]: 2026-01-31 07:14:38.467593095 +0000 UTC m=+0.038058485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:14:38 compute-0 podman[130326]: 2026-01-31 07:14:38.580885475 +0000 UTC m=+0.151350895 container init d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:14:38 compute-0 podman[130326]: 2026-01-31 07:14:38.587244704 +0000 UTC m=+0.157710084 container start d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:14:38 compute-0 podman[130326]: 2026-01-31 07:14:38.593810876 +0000 UTC m=+0.164276286 container attach d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:14:38 compute-0 ceph-mon[74496]: pgmap v402: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:38 compute-0 python3.9[130421]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:14:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:39.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:39 compute-0 quirky_ride[130343]: {
Jan 31 07:14:39 compute-0 quirky_ride[130343]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:14:39 compute-0 quirky_ride[130343]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:14:39 compute-0 quirky_ride[130343]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:14:39 compute-0 quirky_ride[130343]:         "osd_id": 0,
Jan 31 07:14:39 compute-0 quirky_ride[130343]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:14:39 compute-0 quirky_ride[130343]:         "type": "bluestore"
Jan 31 07:14:39 compute-0 quirky_ride[130343]:     }
Jan 31 07:14:39 compute-0 quirky_ride[130343]: }
Jan 31 07:14:39 compute-0 systemd[1]: libpod-d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2.scope: Deactivated successfully.
Jan 31 07:14:39 compute-0 podman[130326]: 2026-01-31 07:14:39.371906748 +0000 UTC m=+0.942372138 container died d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 31 07:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-49ed4142feaec6e6f0e8009ff9c60d5a5cb19dcb313488ea1d5f94fda2f32595-merged.mount: Deactivated successfully.
Jan 31 07:14:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:39.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:39 compute-0 podman[130326]: 2026-01-31 07:14:39.428668095 +0000 UTC m=+0.999133465 container remove d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:14:39 compute-0 systemd[1]: libpod-conmon-d11c7d3f314536a4e14ebe54548292afd0f11d5fe490555138509ac11a6368c2.scope: Deactivated successfully.
Jan 31 07:14:39 compute-0 sudo[130601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:39 compute-0 sudo[130146]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:39 compute-0 sudo[130601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:14:39 compute-0 sudo[130601]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:14:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:39 compute-0 python3.9[130583]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:14:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ced53393-7483-4bc0-92ac-427c523210d8 does not exist
Jan 31 07:14:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 08389940-1c0c-4bc3-9fb0-80d037bdf958 does not exist
Jan 31 07:14:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2be12c87-2206-4e47-8320-2db7588f29d3 does not exist
Jan 31 07:14:39 compute-0 sudo[130630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:39 compute-0 sudo[130630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:39 compute-0 sudo[130630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:39 compute-0 sudo[130629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:39 compute-0 sudo[130629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:39 compute-0 sudo[130629]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:39 compute-0 sudo[130679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:14:39 compute-0 sudo[130679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:39 compute-0 sudo[130679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:40 compute-0 sshd-session[128802]: Connection closed by 192.168.122.30 port 42022
Jan 31 07:14:40 compute-0 sshd-session[128799]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:14:40 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 07:14:40 compute-0 systemd[1]: session-43.scope: Consumed 5.407s CPU time.
Jan 31 07:14:40 compute-0 systemd-logind[816]: Session 43 logged out. Waiting for processes to exit.
Jan 31 07:14:40 compute-0 systemd-logind[816]: Removed session 43.
Jan 31 07:14:40 compute-0 ceph-mon[74496]: pgmap v403: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:14:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:41.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:42 compute-0 ceph-mon[74496]: pgmap v404: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:43.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:44 compute-0 ceph-mon[74496]: pgmap v405: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:45 compute-0 sshd-session[130731]: Accepted publickey for zuul from 192.168.122.30 port 50866 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:14:45 compute-0 systemd-logind[816]: New session 44 of user zuul.
Jan 31 07:14:45 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 31 07:14:45 compute-0 sshd-session[130731]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:14:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:45.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:46 compute-0 python3.9[130884]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:14:46 compute-0 ceph-mon[74496]: pgmap v406: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:47.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:47.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:48 compute-0 sudo[131039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lelnpbqgwjxqdzjrnolvwfeoeekjvmgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843687.6102757-110-223084459108014/AnsiballZ_file.py'
Jan 31 07:14:48 compute-0 sudo[131039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:48 compute-0 python3.9[131041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:48 compute-0 sudo[131039]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:48 compute-0 ceph-mon[74496]: pgmap v407: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:48 compute-0 sudo[131191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzvczbxxlwaqpsvukopmxwlrpykxtosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843688.3820882-110-162595115693508/AnsiballZ_file.py'
Jan 31 07:14:48 compute-0 sudo[131191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:48 compute-0 python3.9[131193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:48 compute-0 sudo[131191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:49.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:49 compute-0 sudo[131344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thoauosmihhdfkuzwzuufjqyojzyiiae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843689.081094-158-60776238111359/AnsiballZ_stat.py'
Jan 31 07:14:49 compute-0 sudo[131344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:14:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:49.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:14:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:49 compute-0 python3.9[131346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:49 compute-0 sudo[131344]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:14:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:14:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:14:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:14:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:14:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:14:50 compute-0 sudo[131467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvoxzxquiennthtkvsntmvspcjkjhumm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843689.081094-158-60776238111359/AnsiballZ_copy.py'
Jan 31 07:14:50 compute-0 sudo[131467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:50 compute-0 python3.9[131469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843689.081094-158-60776238111359/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e519a7b993367072017cbccb6735ddaf471a194e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:50 compute-0 sudo[131467]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:50 compute-0 ceph-mon[74496]: pgmap v408: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:50 compute-0 sudo[131619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynehacmkxazgkavqsjrlkgqdysgfckvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843690.4630392-158-100453952906085/AnsiballZ_stat.py'
Jan 31 07:14:50 compute-0 sudo[131619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:50 compute-0 python3.9[131621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:50 compute-0 sudo[131619]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:51.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:51 compute-0 sudo[131743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qclktwdinkswydfsxbajrixegknuweyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843690.4630392-158-100453952906085/AnsiballZ_copy.py'
Jan 31 07:14:51 compute-0 sudo[131743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:51.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:51 compute-0 python3.9[131745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843690.4630392-158-100453952906085/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4d19f6ebd16a505bd4f1bae6f0d06a9a74ad0f67 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:51 compute-0 sudo[131743]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:51 compute-0 sudo[131895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klknbgqmonchayfgvpwwwfgcbygwhjnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843691.6708853-158-240423233606304/AnsiballZ_stat.py'
Jan 31 07:14:51 compute-0 sudo[131895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:52 compute-0 python3.9[131897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:52 compute-0 sudo[131895]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:52 compute-0 sudo[132018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vahbcqnqqqlaowyrziflbogorsikcasg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843691.6708853-158-240423233606304/AnsiballZ_copy.py'
Jan 31 07:14:52 compute-0 sudo[132018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:52 compute-0 ceph-mon[74496]: pgmap v409: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:52 compute-0 python3.9[132020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843691.6708853-158-240423233606304/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d82ede924899d13a640d0adecb3c0c4abbe68d4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:52 compute-0 sudo[132018]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:53 compute-0 sudo[132170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pexuoophraxtffagrzarxhwnajbuiynp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843692.8129573-295-280913051593164/AnsiballZ_file.py'
Jan 31 07:14:53 compute-0 sudo[132170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:53 compute-0 python3.9[132172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:53 compute-0 sudo[132170]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:53.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:53 compute-0 sudo[132323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiojmcfotbxrujpnqeldvjzoxsjxtpbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843693.470048-295-249517794022906/AnsiballZ_file.py'
Jan 31 07:14:53 compute-0 sudo[132323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:53 compute-0 python3.9[132325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:53 compute-0 sudo[132323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:54 compute-0 sudo[132475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sixiueukfibgtxikotpzshemfvoogbwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843694.1622205-342-131866436661613/AnsiballZ_stat.py'
Jan 31 07:14:54 compute-0 sudo[132475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:54 compute-0 python3.9[132477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:54 compute-0 ceph-mon[74496]: pgmap v410: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:54 compute-0 sudo[132475]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:14:54 compute-0 sudo[132598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qceliquveuxrfxblxgbmfhharyhenagv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843694.1622205-342-131866436661613/AnsiballZ_copy.py'
Jan 31 07:14:54 compute-0 sudo[132598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:55 compute-0 python3.9[132600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843694.1622205-342-131866436661613/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=46bfa25e60f8f6ecc27294486ee71da1b6c1d315 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:55 compute-0 sudo[132598]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:55.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:55.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:55 compute-0 sudo[132751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xetxmuwnyqjexplcknlvfkpjgektigzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843695.2747867-342-7012600955343/AnsiballZ_stat.py'
Jan 31 07:14:55 compute-0 sudo[132751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:55 compute-0 python3.9[132753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:55 compute-0 sudo[132751]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:56 compute-0 sudo[132874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmuxpahubwyubpgicbeyeugfvnsufuuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843695.2747867-342-7012600955343/AnsiballZ_copy.py'
Jan 31 07:14:56 compute-0 sudo[132874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:56 compute-0 python3.9[132876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843695.2747867-342-7012600955343/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=005e44589b03f310b2e01f05c09d39b290e9f9f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:56 compute-0 sudo[132874]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:56 compute-0 ceph-mon[74496]: pgmap v411: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:56 compute-0 sudo[133026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veimnixnqytmsuvgzgzdigjwbeynkkkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843696.6714716-342-250118413520322/AnsiballZ_stat.py'
Jan 31 07:14:56 compute-0 sudo[133026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:57 compute-0 python3.9[133028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:57 compute-0 sudo[133026]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:57.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:14:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:57.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:14:57 compute-0 sudo[133150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwvvletcurowrlojhqjqqomdydwzbkoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843696.6714716-342-250118413520322/AnsiballZ_copy.py'
Jan 31 07:14:57 compute-0 sudo[133150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:57 compute-0 python3.9[133152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843696.6714716-342-250118413520322/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f07499da87bc8b746fea450bd604d2b0da154758 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:14:57 compute-0 sudo[133150]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:58 compute-0 sudo[133302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemrajfxvcagrojdwbbesjwyainjwsls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843697.9047482-476-172182229281879/AnsiballZ_file.py'
Jan 31 07:14:58 compute-0 sudo[133302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:58 compute-0 python3.9[133304]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:58 compute-0 sudo[133302]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:58 compute-0 ceph-mon[74496]: pgmap v412: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:58 compute-0 sudo[133454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apykfthknmdkxmztbfdyaughuzlrmbdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843698.5345685-476-162642502807124/AnsiballZ_file.py'
Jan 31 07:14:58 compute-0 sudo[133454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:58 compute-0 python3.9[133456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:14:59 compute-0 sudo[133454]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:14:59.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:59 compute-0 sudo[133607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvabnfwlylryswnsvebfzbennwdkwnee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843699.1730292-522-264384523654212/AnsiballZ_stat.py'
Jan 31 07:14:59 compute-0 sudo[133607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:14:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:14:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:14:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:14:59.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:14:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:14:59 compute-0 sudo[133610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:59 compute-0 sudo[133610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:59 compute-0 sudo[133610]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:59 compute-0 python3.9[133609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:14:59 compute-0 sudo[133607]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:59 compute-0 sudo[133635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:14:59 compute-0 sudo[133635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:14:59 compute-0 sudo[133635]: pam_unix(sudo:session): session closed for user root
Jan 31 07:14:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:00 compute-0 sudo[133780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsyxpxkuxidmooqkkynapiewdjoecmip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843699.1730292-522-264384523654212/AnsiballZ_copy.py'
Jan 31 07:15:00 compute-0 sudo[133780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:00 compute-0 python3.9[133782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843699.1730292-522-264384523654212/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=02cd07580ea3fd8cfe1ebf744b9c5266e28c2646 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:00 compute-0 sudo[133780]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:00 compute-0 ceph-mon[74496]: pgmap v413: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:00 compute-0 sudo[133932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvtldbrrxvkwsukdgvrpfowvjatkrnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843700.4810138-522-133361533196469/AnsiballZ_stat.py'
Jan 31 07:15:00 compute-0 sudo[133932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:00 compute-0 python3.9[133934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:01 compute-0 sudo[133932]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:01.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:01 compute-0 sudo[134056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdsynbveypdxijxtkeaklupqsleubfvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843700.4810138-522-133361533196469/AnsiballZ_copy.py'
Jan 31 07:15:01 compute-0 sudo[134056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:01.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:01 compute-0 python3.9[134058]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843700.4810138-522-133361533196469/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=005e44589b03f310b2e01f05c09d39b290e9f9f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:01 compute-0 sudo[134056]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:02 compute-0 sudo[134208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dumdrwrvmngumgbizclcdmekwxdgrubc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843701.7709-522-190999437203398/AnsiballZ_stat.py'
Jan 31 07:15:02 compute-0 sudo[134208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:02 compute-0 python3.9[134210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:02 compute-0 sudo[134208]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:02 compute-0 sudo[134331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llawzehvrgcnwjzxqksimanoygcnngso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843701.7709-522-190999437203398/AnsiballZ_copy.py'
Jan 31 07:15:02 compute-0 sudo[134331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:02 compute-0 ceph-mon[74496]: pgmap v414: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:02 compute-0 python3.9[134333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843701.7709-522-190999437203398/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e31054b760c36a84cc4dcb489f7008af8f71b537 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:02 compute-0 sudo[134331]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:03.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:03.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:04 compute-0 sudo[134484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmzqyogvxcnqqdspqayaweiqokqnaouu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843704.0718923-708-159669538971507/AnsiballZ_file.py'
Jan 31 07:15:04 compute-0 sudo[134484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:04 compute-0 python3.9[134486]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:04 compute-0 sudo[134484]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:04 compute-0 ceph-mon[74496]: pgmap v415: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:04 compute-0 sudo[134636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axknbssxlsqwkjclbxqkxjoeovvljyvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843704.669576-735-213956172529717/AnsiballZ_stat.py'
Jan 31 07:15:04 compute-0 sudo[134636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:05 compute-0 python3.9[134638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:05 compute-0 sudo[134636]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:05.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:05 compute-0 sudo[134760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srmvkykqyvtwagjunjazejglworindea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843704.669576-735-213956172529717/AnsiballZ_copy.py'
Jan 31 07:15:05 compute-0 sudo[134760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:05.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:05 compute-0 python3.9[134762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843704.669576-735-213956172529717/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:05 compute-0 sudo[134760]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:06 compute-0 sudo[134912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmvarylxnxmziskircqoxtnvfvqgymdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843705.856042-781-237135196191296/AnsiballZ_file.py'
Jan 31 07:15:06 compute-0 sudo[134912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:06 compute-0 python3.9[134914]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:06 compute-0 sudo[134912]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:06 compute-0 ceph-mon[74496]: pgmap v416: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:06 compute-0 sudo[135064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bonjnslfwacwugjuxwzcwbfiucazamfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843706.4962409-807-88397031315610/AnsiballZ_stat.py'
Jan 31 07:15:06 compute-0 sudo[135064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:06 compute-0 python3.9[135066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:07 compute-0 sudo[135064]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:07.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:07 compute-0 sudo[135188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctbviolicnjszrvzlhpruqzugaemwrfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843706.4962409-807-88397031315610/AnsiballZ_copy.py'
Jan 31 07:15:07 compute-0 sudo[135188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:07.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:07 compute-0 python3.9[135190]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843706.4962409-807-88397031315610/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:07 compute-0 sudo[135188]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:08 compute-0 sudo[135340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owlqksyssxvhufhgtmfcpwqyzxpsikfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843707.9482064-856-215246431633216/AnsiballZ_file.py'
Jan 31 07:15:08 compute-0 sudo[135340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:08 compute-0 python3.9[135342]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:08 compute-0 sudo[135340]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:08 compute-0 ceph-mon[74496]: pgmap v417: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:08 compute-0 sudo[135492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtsjcilbpibiofyhruijrnyqtytyxdxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843708.637961-880-217508936306210/AnsiballZ_stat.py'
Jan 31 07:15:08 compute-0 sudo[135492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:09 compute-0 python3.9[135494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:09 compute-0 sudo[135492]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:09.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:09.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:09 compute-0 sudo[135616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjvrtidzexttergpufwhrmdnzjvqaxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843708.637961-880-217508936306210/AnsiballZ_copy.py'
Jan 31 07:15:09 compute-0 sudo[135616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:09 compute-0 python3.9[135618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843708.637961-880-217508936306210/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:09 compute-0 sudo[135616]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:10 compute-0 sudo[135768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfhsmjaexdgkbwvbqhozufcilalcxbzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843710.0066206-936-152824974524261/AnsiballZ_file.py'
Jan 31 07:15:10 compute-0 sudo[135768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:10 compute-0 python3.9[135770]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:10 compute-0 sudo[135768]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:10 compute-0 ceph-mon[74496]: pgmap v418: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:10 compute-0 sudo[135920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfijkrrihcgihrpvzmuksywsrogljdyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843710.6703336-961-230713125753154/AnsiballZ_stat.py'
Jan 31 07:15:10 compute-0 sudo[135920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:11 compute-0 python3.9[135922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:11 compute-0 sudo[135920]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:11.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:11 compute-0 sudo[136044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjvdwhiaypzmnvynctjebejhwfttmdpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843710.6703336-961-230713125753154/AnsiballZ_copy.py'
Jan 31 07:15:11 compute-0 sudo[136044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:11.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:11 compute-0 python3.9[136046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843710.6703336-961-230713125753154/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:11 compute-0 sudo[136044]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:12 compute-0 sudo[136196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iffxrptxsnowisurkpaduypvuxlflnvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843712.013182-1008-79771081845508/AnsiballZ_file.py'
Jan 31 07:15:12 compute-0 sudo[136196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:12 compute-0 python3.9[136198]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:12 compute-0 sudo[136196]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:12 compute-0 ceph-mon[74496]: pgmap v419: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:12 compute-0 sudo[136348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqxwioxuvypbomeltjdinlajnnnutxgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843712.6781244-1032-105456657051273/AnsiballZ_stat.py'
Jan 31 07:15:12 compute-0 sudo[136348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:13 compute-0 python3.9[136350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:13 compute-0 sudo[136348]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:13.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:13.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:13 compute-0 sudo[136472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juhynoeasuooqkyfxwzjuzqmgukvnfzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843712.6781244-1032-105456657051273/AnsiballZ_copy.py'
Jan 31 07:15:13 compute-0 sudo[136472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:13 compute-0 python3.9[136474]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843712.6781244-1032-105456657051273/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:13 compute-0 sudo[136472]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:14 compute-0 sudo[136624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmetnpvtvmhmxbsyzpucuryuxoepwkiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843713.8527524-1071-123850019896150/AnsiballZ_file.py'
Jan 31 07:15:14 compute-0 sudo[136624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:14 compute-0 python3.9[136626]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:14 compute-0 sudo[136624]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:14 compute-0 sudo[136776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icsugmdyljuwhsjirlrntcqkvngnzpem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843714.4460733-1087-146732019963020/AnsiballZ_stat.py'
Jan 31 07:15:14 compute-0 sudo[136776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:14 compute-0 ceph-mon[74496]: pgmap v420: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:14 compute-0 python3.9[136778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:14 compute-0 sudo[136776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:15 compute-0 sudo[136900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zynmkzwfvvihhrwtnrjdcdfqwksyqgcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843714.4460733-1087-146732019963020/AnsiballZ_copy.py'
Jan 31 07:15:15 compute-0 sudo[136900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:15.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:15.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:15 compute-0 python3.9[136902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843714.4460733-1087-146732019963020/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=823ddfb9481e8da2761411a2055d0fb6b98e0ac2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:15 compute-0 sudo[136900]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:16 compute-0 sshd-session[130734]: Connection closed by 192.168.122.30 port 50866
Jan 31 07:15:16 compute-0 sshd-session[130731]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:15:16 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 07:15:16 compute-0 systemd[1]: session-44.scope: Consumed 19.933s CPU time.
Jan 31 07:15:16 compute-0 systemd-logind[816]: Session 44 logged out. Waiting for processes to exit.
Jan 31 07:15:16 compute-0 systemd-logind[816]: Removed session 44.
Jan 31 07:15:16 compute-0 ceph-mon[74496]: pgmap v421: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:17.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:17 compute-0 ceph-mon[74496]: pgmap v422: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:19 compute-0 sudo[136929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:19 compute-0 sudo[136929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:19 compute-0 sudo[136929]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:19 compute-0 sudo[136954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:19 compute-0 sudo[136954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:19 compute-0 sudo[136954]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:15:19
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes']
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:15:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:15:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:15:20 compute-0 ceph-mon[74496]: pgmap v423: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:15:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:21.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:15:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:21.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:21 compute-0 sshd-session[136980]: Accepted publickey for zuul from 192.168.122.30 port 55928 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:15:21 compute-0 systemd-logind[816]: New session 45 of user zuul.
Jan 31 07:15:21 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 31 07:15:21 compute-0 sshd-session[136980]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:15:22 compute-0 sudo[137133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyldsaavcbzpieavxwwwxcodsnudopfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843721.984106-26-115046524486575/AnsiballZ_file.py'
Jan 31 07:15:22 compute-0 sudo[137133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:22 compute-0 python3.9[137135]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:22 compute-0 sudo[137133]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:22 compute-0 ceph-mon[74496]: pgmap v424: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:23 compute-0 sudo[137286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eujymshvijrxmpxbbiuhubpimtgeqgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843722.8042102-62-81030905013470/AnsiballZ_stat.py'
Jan 31 07:15:23 compute-0 sudo[137286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:23.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:23 compute-0 python3.9[137288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:23 compute-0 sudo[137286]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:23 compute-0 sudo[137409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zodsjhbxyxglognhnavhmzjlppdopcvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843722.8042102-62-81030905013470/AnsiballZ_copy.py'
Jan 31 07:15:23 compute-0 sudo[137409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:24 compute-0 python3.9[137411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843722.8042102-62-81030905013470/.source.conf _original_basename=ceph.conf follow=False checksum=d315a9cac0e1e65728b0668f9e154f01a66e4c1a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:24 compute-0 sudo[137409]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:24 compute-0 sudo[137561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhiltoyxgqrawmvpgcvrkzckmvvybcet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843724.2103841-62-271932292632908/AnsiballZ_stat.py'
Jan 31 07:15:24 compute-0 sudo[137561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:24 compute-0 python3.9[137563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:24 compute-0 sudo[137561]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:24 compute-0 ceph-mon[74496]: pgmap v425: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:24 compute-0 sudo[137684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwzfzvesjedepnhcxdzzkxyowasynnrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843724.2103841-62-271932292632908/AnsiballZ_copy.py'
Jan 31 07:15:24 compute-0 sudo[137684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:25 compute-0 python3.9[137686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843724.2103841-62-271932292632908/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=d07c30b1acab71467a05fb02d206fcd55de2512c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:25 compute-0 sudo[137684]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:25.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:25.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:25 compute-0 sshd-session[136983]: Connection closed by 192.168.122.30 port 55928
Jan 31 07:15:25 compute-0 sshd-session[136980]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:15:25 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 07:15:25 compute-0 systemd[1]: session-45.scope: Consumed 2.417s CPU time.
Jan 31 07:15:25 compute-0 systemd-logind[816]: Session 45 logged out. Waiting for processes to exit.
Jan 31 07:15:25 compute-0 systemd-logind[816]: Removed session 45.
Jan 31 07:15:26 compute-0 ceph-mon[74496]: pgmap v426: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:27.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:27.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:28 compute-0 ceph-mon[74496]: pgmap v427: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:29.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:29.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:30 compute-0 ceph-mon[74496]: pgmap v428: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:30 compute-0 sshd-session[137714]: Accepted publickey for zuul from 192.168.122.30 port 60472 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:15:30 compute-0 systemd-logind[816]: New session 46 of user zuul.
Jan 31 07:15:30 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 31 07:15:30 compute-0 sshd-session[137714]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:15:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:31.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:31.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:31 compute-0 python3.9[137868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:15:32 compute-0 sudo[138022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzcwbjpdslljcfhivjcyoclzhjnhkciz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843732.4292033-62-20063160569829/AnsiballZ_file.py'
Jan 31 07:15:32 compute-0 sudo[138022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:32 compute-0 ceph-mon[74496]: pgmap v429: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:33 compute-0 python3.9[138024]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:33 compute-0 sudo[138022]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:33.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:33 compute-0 sudo[138175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdlyiywxnmtszuiqftfvqsqjesmulzuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843733.1605768-62-144614605124304/AnsiballZ_file.py'
Jan 31 07:15:33 compute-0 sudo[138175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:33.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:33 compute-0 python3.9[138177]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:15:33 compute-0 sudo[138175]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:34 compute-0 python3.9[138327]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:15:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:15:34 compute-0 sudo[138477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqxdzxdzgygerywxkzwqcethinezuhjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843734.4359965-131-76264842249621/AnsiballZ_seboolean.py'
Jan 31 07:15:34 compute-0 sudo[138477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:34 compute-0 ceph-mon[74496]: pgmap v430: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:35 compute-0 python3.9[138479]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 07:15:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:35.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:35.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:36 compute-0 ceph-mon[74496]: pgmap v431: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:36 compute-0 sudo[138477]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:37 compute-0 sudo[138634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gglcxckqyxrpmpmalccieijingxotxbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843736.7450502-161-86011766309122/AnsiballZ_setup.py'
Jan 31 07:15:37 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 07:15:37 compute-0 sudo[138634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:37 compute-0 python3.9[138636]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:15:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:37.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:37.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:37 compute-0 sudo[138634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:37 compute-0 sudo[138719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fredwpacsqvrqrxvhnciewxayunfpmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843736.7450502-161-86011766309122/AnsiballZ_dnf.py'
Jan 31 07:15:37 compute-0 sudo[138719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:38 compute-0 python3.9[138721]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:15:38 compute-0 ceph-mon[74496]: pgmap v432: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:39 compute-0 sudo[138719]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:39.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:39.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:39 compute-0 sudo[138748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:39 compute-0 sudo[138750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:39 compute-0 sudo[138748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-0 sudo[138750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-0 sudo[138748]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-0 sudo[138750]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-0 sudo[138821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:39 compute-0 sudo[138821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-0 sudo[138821]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-0 sudo[138825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:15:39 compute-0 sudo[138825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-0 sudo[138825]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-0 sudo[138900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:39 compute-0 sudo[138900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:39 compute-0 sudo[138900]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:39 compute-0 sudo[138925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:15:39 compute-0 sudo[138925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:40 compute-0 sudo[138925]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:40 compute-0 sudo[139043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suxqwdkdcvcdnxguczdhszipwujyfgpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843739.8110902-197-121798258735996/AnsiballZ_systemd.py'
Jan 31 07:15:40 compute-0 sudo[139043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:40 compute-0 sudo[139046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:40 compute-0 sudo[139046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:40 compute-0 sudo[139046]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:40 compute-0 sudo[139071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:15:40 compute-0 sudo[139071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:40 compute-0 sudo[139071]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:40 compute-0 sudo[139096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:40 compute-0 sudo[139096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:40 compute-0 sudo[139096]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:40 compute-0 sudo[139121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:15:40 compute-0 sudo[139121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:40 compute-0 python3.9[139045]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:15:40 compute-0 sudo[139043]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:40 compute-0 ceph-mon[74496]: pgmap v433: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:40 compute-0 sudo[139121]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 72a58198-8e1b-4565-a708-4ec8fd16d534 does not exist
Jan 31 07:15:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a487410e-d165-4a3b-92d4-de07689b43f9 does not exist
Jan 31 07:15:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f1f94f92-acca-4e4d-a90c-9583ed424878 does not exist
Jan 31 07:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:15:41 compute-0 sudo[139256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:41 compute-0 sudo[139256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:41 compute-0 sudo[139256]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:41 compute-0 sudo[139281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:15:41 compute-0 sudo[139281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:41 compute-0 sudo[139281]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:41 compute-0 sudo[139306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:41 compute-0 sudo[139306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:41 compute-0 sudo[139306]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:41 compute-0 sudo[139331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:15:41 compute-0 sudo[139331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:41 compute-0 sudo[139432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnibgtlidmttxnhnaxurabcjitdheujs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843740.930505-221-71640460984552/AnsiballZ_edpm_nftables_snippet.py'
Jan 31 07:15:41 compute-0 sudo[139432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:41.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:41.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:41 compute-0 python3[139444]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 07:15:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:41 compute-0 podman[139473]: 2026-01-31 07:15:41.518228548 +0000 UTC m=+0.091292919 container create f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_yonath, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:15:41 compute-0 sudo[139432]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:41 compute-0 podman[139473]: 2026-01-31 07:15:41.443605823 +0000 UTC m=+0.016670194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:15:41 compute-0 systemd[1]: Started libpod-conmon-f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b.scope.
Jan 31 07:15:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:15:41 compute-0 podman[139473]: 2026-01-31 07:15:41.707538777 +0000 UTC m=+0.280603238 container init f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_yonath, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:15:41 compute-0 podman[139473]: 2026-01-31 07:15:41.716699774 +0000 UTC m=+0.289764185 container start f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_yonath, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:15:41 compute-0 upbeat_yonath[139514]: 167 167
Jan 31 07:15:41 compute-0 systemd[1]: libpod-f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b.scope: Deactivated successfully.
Jan 31 07:15:41 compute-0 conmon[139514]: conmon f0c018d243b2860dc2c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b.scope/container/memory.events
Jan 31 07:15:41 compute-0 podman[139473]: 2026-01-31 07:15:41.749527955 +0000 UTC m=+0.322592356 container attach f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:15:41 compute-0 podman[139473]: 2026-01-31 07:15:41.750964462 +0000 UTC m=+0.324028863 container died f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_yonath, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Jan 31 07:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-483925fac55802531eddaef91e233e5eb32442e93a38ffb4ad1a1baaa611e787-merged.mount: Deactivated successfully.
Jan 31 07:15:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:15:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:15:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:15:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:15:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:15:42 compute-0 podman[139473]: 2026-01-31 07:15:42.008731756 +0000 UTC m=+0.581796157 container remove f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:15:42 compute-0 sudo[139658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rveingbiszhyirdamecpeohdwyapuztl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843741.8035932-248-10053179062967/AnsiballZ_file.py'
Jan 31 07:15:42 compute-0 sudo[139658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:42 compute-0 systemd[1]: libpod-conmon-f0c018d243b2860dc2c1b59f4a4507b5a72d0cdfc94fbfdce1974adce10a627b.scope: Deactivated successfully.
Jan 31 07:15:42 compute-0 podman[139665]: 2026-01-31 07:15:42.13888105 +0000 UTC m=+0.060084229 container create 3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brattain, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:15:42 compute-0 systemd[1]: Started libpod-conmon-3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178.scope.
Jan 31 07:15:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f727a0824f1d7e5bf5132b71851f4adcdeed0a557eca8484be21908943de84f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:42 compute-0 podman[139665]: 2026-01-31 07:15:42.107390713 +0000 UTC m=+0.028593922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f727a0824f1d7e5bf5132b71851f4adcdeed0a557eca8484be21908943de84f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f727a0824f1d7e5bf5132b71851f4adcdeed0a557eca8484be21908943de84f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f727a0824f1d7e5bf5132b71851f4adcdeed0a557eca8484be21908943de84f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f727a0824f1d7e5bf5132b71851f4adcdeed0a557eca8484be21908943de84f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:42 compute-0 podman[139665]: 2026-01-31 07:15:42.227749764 +0000 UTC m=+0.148952943 container init 3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:15:42 compute-0 podman[139665]: 2026-01-31 07:15:42.232609351 +0000 UTC m=+0.153812530 container start 3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brattain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:15:42 compute-0 podman[139665]: 2026-01-31 07:15:42.247530068 +0000 UTC m=+0.168733327 container attach 3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brattain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:15:42 compute-0 python3.9[139667]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:42 compute-0 sudo[139658]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:42 compute-0 sudo[139839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpuivfilskpmyyejbiqpnqimbmeranzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843742.431858-272-166045997120016/AnsiballZ_stat.py'
Jan 31 07:15:42 compute-0 sudo[139839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:42 compute-0 affectionate_brattain[139683]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:15:42 compute-0 affectionate_brattain[139683]: --> relative data size: 1.0
Jan 31 07:15:42 compute-0 affectionate_brattain[139683]: --> All data devices are unavailable
Jan 31 07:15:43 compute-0 systemd[1]: libpod-3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178.scope: Deactivated successfully.
Jan 31 07:15:43 compute-0 podman[139665]: 2026-01-31 07:15:43.003692053 +0000 UTC m=+0.924895272 container died 3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brattain, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:15:43 compute-0 python3.9[139843]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:43 compute-0 sudo[139839]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:43 compute-0 ceph-mon[74496]: pgmap v434: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:43.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f727a0824f1d7e5bf5132b71851f4adcdeed0a557eca8484be21908943de84f5-merged.mount: Deactivated successfully.
Jan 31 07:15:43 compute-0 podman[139665]: 2026-01-31 07:15:43.397071703 +0000 UTC m=+1.318274882 container remove 3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:15:43 compute-0 sudo[139938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwqmgictbzdwfcoishkhaeohkmcacqwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843742.431858-272-166045997120016/AnsiballZ_file.py'
Jan 31 07:15:43 compute-0 sudo[139938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:43 compute-0 sudo[139331]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:15:43 compute-0 systemd[1]: libpod-conmon-3f84279700147e767cd702f218d241db0dd670fd1c36ff3fcd9f15bc6c753178.scope: Deactivated successfully.
Jan 31 07:15:43 compute-0 sudo[139941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:43 compute-0 sudo[139941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:43 compute-0 sudo[139941]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:43.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:43 compute-0 sudo[139966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:15:43 compute-0 sudo[139966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:43 compute-0 sudo[139966]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:43 compute-0 sudo[139991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:43 compute-0 sudo[139991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:43 compute-0 sudo[139991]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:43 compute-0 python3.9[139940]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:43 compute-0 sudo[140016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:15:43 compute-0 sudo[140016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:43 compute-0 sudo[139938]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:43 compute-0 podman[140156]: 2026-01-31 07:15:43.922349442 +0000 UTC m=+0.045521842 container create a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:15:43 compute-0 podman[140156]: 2026-01-31 07:15:43.894038427 +0000 UTC m=+0.017210837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:15:43 compute-0 systemd[1]: Started libpod-conmon-a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e.scope.
Jan 31 07:15:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:15:44 compute-0 sudo[140249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skmqqqgkzznmtojdplvmfbuvdsxrpeyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843743.812806-308-137371560813271/AnsiballZ_stat.py'
Jan 31 07:15:44 compute-0 sudo[140249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:44 compute-0 podman[140156]: 2026-01-31 07:15:44.093838008 +0000 UTC m=+0.217010478 container init a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:15:44 compute-0 podman[140156]: 2026-01-31 07:15:44.10161224 +0000 UTC m=+0.224784670 container start a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:15:44 compute-0 nervous_noyce[140220]: 167 167
Jan 31 07:15:44 compute-0 systemd[1]: libpod-a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e.scope: Deactivated successfully.
Jan 31 07:15:44 compute-0 podman[140156]: 2026-01-31 07:15:44.122056989 +0000 UTC m=+0.245229419 container attach a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:15:44 compute-0 podman[140156]: 2026-01-31 07:15:44.122694556 +0000 UTC m=+0.245867006 container died a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e1ef8e47bdc46c0c9cf13df6dc223c06ff6b5e767dbd9cac3ac52e6eea80b81-merged.mount: Deactivated successfully.
Jan 31 07:15:44 compute-0 python3.9[140251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:44 compute-0 podman[140156]: 2026-01-31 07:15:44.24317171 +0000 UTC m=+0.366344140 container remove a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:15:44 compute-0 systemd[1]: libpod-conmon-a8434b6f5e8294fed48208600e2cef28ca772b31544fdff1d0ea6860f951bb2e.scope: Deactivated successfully.
Jan 31 07:15:44 compute-0 ceph-mon[74496]: pgmap v435: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:44 compute-0 sudo[140249]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:44 compute-0 podman[140277]: 2026-01-31 07:15:44.369850914 +0000 UTC m=+0.046033035 container create ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 07:15:44 compute-0 systemd[1]: Started libpod-conmon-ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320.scope.
Jan 31 07:15:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88cd11193d675da30efbfcf0d05428199360cf407568ff29f1b4d51557682a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88cd11193d675da30efbfcf0d05428199360cf407568ff29f1b4d51557682a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88cd11193d675da30efbfcf0d05428199360cf407568ff29f1b4d51557682a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88cd11193d675da30efbfcf0d05428199360cf407568ff29f1b4d51557682a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:44 compute-0 podman[140277]: 2026-01-31 07:15:44.345559465 +0000 UTC m=+0.021741606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:15:44 compute-0 podman[140277]: 2026-01-31 07:15:44.451564463 +0000 UTC m=+0.127746584 container init ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:15:44 compute-0 podman[140277]: 2026-01-31 07:15:44.458293067 +0000 UTC m=+0.134475218 container start ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:15:44 compute-0 podman[140277]: 2026-01-31 07:15:44.462856806 +0000 UTC m=+0.139038947 container attach ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_allen, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:15:44 compute-0 sudo[140371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxegkuwxipgfvjkacdfipcbalwhqmfvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843743.812806-308-137371560813271/AnsiballZ_file.py'
Jan 31 07:15:44 compute-0 sudo[140371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:44 compute-0 python3.9[140373]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8bjnokqz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:44 compute-0 sudo[140371]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:45 compute-0 sudo[140523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuxhiikbxotqndozuhououvejlgkwcdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843744.8489733-344-33148320634857/AnsiballZ_stat.py'
Jan 31 07:15:45 compute-0 sudo[140523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:45 compute-0 objective_allen[140339]: {
Jan 31 07:15:45 compute-0 objective_allen[140339]:     "0": [
Jan 31 07:15:45 compute-0 objective_allen[140339]:         {
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "devices": [
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "/dev/loop3"
Jan 31 07:15:45 compute-0 objective_allen[140339]:             ],
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "lv_name": "ceph_lv0",
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "lv_size": "7511998464",
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "name": "ceph_lv0",
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "tags": {
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.cluster_name": "ceph",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.crush_device_class": "",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.encrypted": "0",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.osd_id": "0",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.type": "block",
Jan 31 07:15:45 compute-0 objective_allen[140339]:                 "ceph.vdo": "0"
Jan 31 07:15:45 compute-0 objective_allen[140339]:             },
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "type": "block",
Jan 31 07:15:45 compute-0 objective_allen[140339]:             "vg_name": "ceph_vg0"
Jan 31 07:15:45 compute-0 objective_allen[140339]:         }
Jan 31 07:15:45 compute-0 objective_allen[140339]:     ]
Jan 31 07:15:45 compute-0 objective_allen[140339]: }
Jan 31 07:15:45 compute-0 systemd[1]: libpod-ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320.scope: Deactivated successfully.
Jan 31 07:15:45 compute-0 podman[140277]: 2026-01-31 07:15:45.202878293 +0000 UTC m=+0.879060414 container died ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:15:45 compute-0 python3.9[140525]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b88cd11193d675da30efbfcf0d05428199360cf407568ff29f1b4d51557682a3-merged.mount: Deactivated successfully.
Jan 31 07:15:45 compute-0 podman[140277]: 2026-01-31 07:15:45.250715833 +0000 UTC m=+0.926897954 container remove ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_allen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:15:45 compute-0 systemd[1]: libpod-conmon-ab453765445215c1370f0f4a839195e590fa38d5745efab12354342fe78dc320.scope: Deactivated successfully.
Jan 31 07:15:45 compute-0 sudo[140523]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:45 compute-0 sudo[140016]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:45 compute-0 sudo[140545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:45 compute-0 sudo[140545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:45 compute-0 sudo[140545]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:45.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:45 compute-0 sudo[140593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:15:45 compute-0 sudo[140593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:45 compute-0 sudo[140593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:45 compute-0 sudo[140625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:45 compute-0 sudo[140625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:45 compute-0 sudo[140625]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:45 compute-0 sudo[140668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:15:45 compute-0 sudo[140668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:45 compute-0 sudo[140716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-subdsjhjqcuhclqjgmvmhjcwqjtfxdfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843744.8489733-344-33148320634857/AnsiballZ_file.py'
Jan 31 07:15:45 compute-0 sudo[140716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:45 compute-0 python3.9[140720]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:45 compute-0 sudo[140716]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:45 compute-0 podman[140763]: 2026-01-31 07:15:45.72305495 +0000 UTC m=+0.041239431 container create 4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shaw, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:15:45 compute-0 systemd[1]: Started libpod-conmon-4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9.scope.
Jan 31 07:15:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:15:45 compute-0 podman[140763]: 2026-01-31 07:15:45.797300295 +0000 UTC m=+0.115484796 container init 4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:15:45 compute-0 podman[140763]: 2026-01-31 07:15:45.70260258 +0000 UTC m=+0.020787101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:15:45 compute-0 podman[140763]: 2026-01-31 07:15:45.804177194 +0000 UTC m=+0.122361675 container start 4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:15:45 compute-0 podman[140763]: 2026-01-31 07:15:45.807608932 +0000 UTC m=+0.125793453 container attach 4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shaw, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:15:45 compute-0 laughing_shaw[140804]: 167 167
Jan 31 07:15:45 compute-0 systemd[1]: libpod-4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9.scope: Deactivated successfully.
Jan 31 07:15:45 compute-0 conmon[140804]: conmon 4d689533d1dee41f11f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9.scope/container/memory.events
Jan 31 07:15:45 compute-0 podman[140763]: 2026-01-31 07:15:45.809152292 +0000 UTC m=+0.127336773 container died 4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shaw, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:15:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eb33c59e0023b4e4c4b956096aa87b6afefa44fea249a224aefb9db3918ee3f-merged.mount: Deactivated successfully.
Jan 31 07:15:45 compute-0 podman[140763]: 2026-01-31 07:15:45.841997014 +0000 UTC m=+0.160181495 container remove 4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:15:45 compute-0 systemd[1]: libpod-conmon-4d689533d1dee41f11f4e0bfb232e00956600e0aec2ad46965f154c0d1b478d9.scope: Deactivated successfully.
Jan 31 07:15:45 compute-0 podman[140837]: 2026-01-31 07:15:45.945763205 +0000 UTC m=+0.036084457 container create 30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:15:45 compute-0 systemd[1]: Started libpod-conmon-30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6.scope.
Jan 31 07:15:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d956e6be57bcb4bc21676bb93fa0a84a98a6df9d1b8ba3df8cc2a60bfe73eedb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d956e6be57bcb4bc21676bb93fa0a84a98a6df9d1b8ba3df8cc2a60bfe73eedb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d956e6be57bcb4bc21676bb93fa0a84a98a6df9d1b8ba3df8cc2a60bfe73eedb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d956e6be57bcb4bc21676bb93fa0a84a98a6df9d1b8ba3df8cc2a60bfe73eedb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:15:46 compute-0 podman[140837]: 2026-01-31 07:15:45.931409393 +0000 UTC m=+0.021730665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:15:46 compute-0 podman[140837]: 2026-01-31 07:15:46.033054287 +0000 UTC m=+0.123375579 container init 30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:15:46 compute-0 podman[140837]: 2026-01-31 07:15:46.037489762 +0000 UTC m=+0.127811024 container start 30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:15:46 compute-0 podman[140837]: 2026-01-31 07:15:46.041321462 +0000 UTC m=+0.131642724 container attach 30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:15:46 compute-0 sudo[140976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmvxgdqpiqmeqntkkyjterhmzekitqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843745.9201436-383-199937002840973/AnsiballZ_command.py'
Jan 31 07:15:46 compute-0 sudo[140976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:46 compute-0 python3.9[140978]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:15:46 compute-0 sudo[140976]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:46 compute-0 ceph-mon[74496]: pgmap v436: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:46 compute-0 happy_chatelet[140898]: {
Jan 31 07:15:46 compute-0 happy_chatelet[140898]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:15:46 compute-0 happy_chatelet[140898]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:15:46 compute-0 happy_chatelet[140898]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:15:46 compute-0 happy_chatelet[140898]:         "osd_id": 0,
Jan 31 07:15:46 compute-0 happy_chatelet[140898]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:15:46 compute-0 happy_chatelet[140898]:         "type": "bluestore"
Jan 31 07:15:46 compute-0 happy_chatelet[140898]:     }
Jan 31 07:15:46 compute-0 happy_chatelet[140898]: }
Jan 31 07:15:46 compute-0 systemd[1]: libpod-30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6.scope: Deactivated successfully.
Jan 31 07:15:46 compute-0 conmon[140898]: conmon 30997db0491d31659c10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6.scope/container/memory.events
Jan 31 07:15:46 compute-0 podman[140837]: 2026-01-31 07:15:46.864905815 +0000 UTC m=+0.955227067 container died 30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d956e6be57bcb4bc21676bb93fa0a84a98a6df9d1b8ba3df8cc2a60bfe73eedb-merged.mount: Deactivated successfully.
Jan 31 07:15:46 compute-0 podman[140837]: 2026-01-31 07:15:46.917370906 +0000 UTC m=+1.007692158 container remove 30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:15:46 compute-0 systemd[1]: libpod-conmon-30997db0491d31659c10d86c364a11f818113128b141e831c4d5db10aae015f6.scope: Deactivated successfully.
Jan 31 07:15:46 compute-0 sudo[140668]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:15:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:15:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev af9b470c-02fe-424b-a526-31bfbbf2a7b1 does not exist
Jan 31 07:15:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 31144715-125d-4646-a74d-b12938592040 does not exist
Jan 31 07:15:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 505bb22d-217f-41d1-913e-e8d0b92f5cc2 does not exist
Jan 31 07:15:47 compute-0 sudo[141132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:47 compute-0 sudo[141132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:47 compute-0 sudo[141132]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:47 compute-0 sudo[141185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjbdxhitlagkyfmhkveosjfzhmrqwort ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843746.6891904-407-163327358622153/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:15:47 compute-0 sudo[141185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:47 compute-0 sudo[141183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:15:47 compute-0 sudo[141183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:47 compute-0 sudo[141183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:47 compute-0 python3[141203]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:15:47 compute-0 sudo[141185]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:47.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:47.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:47 compute-0 sudo[141361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsmvgisyownsnxyrveuashpgkwrcapxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843747.4714417-431-167509845580067/AnsiballZ_stat.py'
Jan 31 07:15:47 compute-0 sudo[141361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:15:47 compute-0 ceph-mon[74496]: pgmap v437: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:47 compute-0 python3.9[141363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:48 compute-0 sudo[141361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:48 compute-0 sudo[141486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogqolhzmuhfdmobdyewzskcgcfgjzcwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843747.4714417-431-167509845580067/AnsiballZ_copy.py'
Jan 31 07:15:48 compute-0 sudo[141486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:48 compute-0 python3.9[141488]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843747.4714417-431-167509845580067/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:48 compute-0 sudo[141486]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:49 compute-0 sudo[141638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emllacfxdkbodpwlglqpvennrkfzvvsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843748.8110437-476-174887220755162/AnsiballZ_stat.py'
Jan 31 07:15:49 compute-0 sudo[141638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:49 compute-0 python3.9[141640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:49 compute-0 sudo[141638]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:15:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:15:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:49.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:49 compute-0 sudo[141764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mloaicbvndmldxjalnutyeyzaaivkuhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843748.8110437-476-174887220755162/AnsiballZ_copy.py'
Jan 31 07:15:49 compute-0 sudo[141764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:49 compute-0 python3.9[141766]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843748.8110437-476-174887220755162/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:49 compute-0 sudo[141764]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:15:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:15:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:15:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:15:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:15:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:15:50 compute-0 sudo[141916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anuywxtbylgebcgpvcubstxqskapusaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843750.004359-521-53657362412697/AnsiballZ_stat.py'
Jan 31 07:15:50 compute-0 sudo[141916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:50 compute-0 python3.9[141918]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:50 compute-0 sudo[141916]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:50 compute-0 ceph-mon[74496]: pgmap v438: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:50 compute-0 sudo[142041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhjypymvgbnrqxpnyqszzsndroiepojw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843750.004359-521-53657362412697/AnsiballZ_copy.py'
Jan 31 07:15:50 compute-0 sudo[142041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:51 compute-0 python3.9[142043]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843750.004359-521-53657362412697/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:51 compute-0 sudo[142041]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:51 compute-0 sudo[142194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vodbkrvbjsgtplqjqzfqsbhdnwkative ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843751.246784-566-112808936169482/AnsiballZ_stat.py'
Jan 31 07:15:51 compute-0 sudo[142194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:51 compute-0 python3.9[142196]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:51 compute-0 sudo[142194]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:51 compute-0 sudo[142319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmnhosmjimjmpfyjhndnxeonupjplzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843751.246784-566-112808936169482/AnsiballZ_copy.py'
Jan 31 07:15:51 compute-0 sudo[142319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:52 compute-0 python3.9[142321]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843751.246784-566-112808936169482/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:52 compute-0 sudo[142319]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:52 compute-0 ceph-mon[74496]: pgmap v439: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:52 compute-0 sudo[142471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvlogoeazjbqaujaxqypedlusdsekwts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843752.4389906-611-273793191193499/AnsiballZ_stat.py'
Jan 31 07:15:52 compute-0 sudo[142471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:52 compute-0 python3.9[142473]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:15:52 compute-0 sudo[142471]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:53 compute-0 sudo[142597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzwxsbfrasbmxadviwzbaxclfsqoybs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843752.4389906-611-273793191193499/AnsiballZ_copy.py'
Jan 31 07:15:53 compute-0 sudo[142597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:53.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:53 compute-0 python3.9[142599]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843752.4389906-611-273793191193499/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:53 compute-0 sudo[142597]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:53 compute-0 sudo[142749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxyoonceqgeymudjjwvzgsexqtuuplp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843753.6541858-656-233789346615334/AnsiballZ_file.py'
Jan 31 07:15:53 compute-0 sudo[142749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:54 compute-0 python3.9[142751]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:54 compute-0 sudo[142749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:54 compute-0 sudo[142901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cppntspsedguolfwrsyfxjaulgrzsvbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843754.2700284-680-271421507894404/AnsiballZ_command.py'
Jan 31 07:15:54 compute-0 sudo[142901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:54 compute-0 ceph-mon[74496]: pgmap v440: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:54 compute-0 python3.9[142903]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:15:54 compute-0 sudo[142901]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:15:55 compute-0 sudo[143057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczkmslnldhjywwdbrmjasyrxzwvcfxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843754.9213717-704-264112118729325/AnsiballZ_blockinfile.py'
Jan 31 07:15:55 compute-0 sudo[143057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:55.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:55 compute-0 python3.9[143059]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:55 compute-0 sudo[143057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:56 compute-0 sudo[143209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnvzcvvykootouqinassmniaibiwdjgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843755.7541578-731-150133267464101/AnsiballZ_command.py'
Jan 31 07:15:56 compute-0 sudo[143209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:56 compute-0 python3.9[143211]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:15:56 compute-0 sudo[143209]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:56 compute-0 sudo[143362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gleicoxazrbbikyisplcznmiqocwmhfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843756.4129767-755-154024044587630/AnsiballZ_stat.py'
Jan 31 07:15:56 compute-0 sudo[143362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:56 compute-0 ceph-mon[74496]: pgmap v441: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:56 compute-0 python3.9[143364]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:15:56 compute-0 sudo[143362]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:57 compute-0 sudo[143517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkcrwvndkgsshyprvdbzxfzxontxdmto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843757.0583186-779-86840431216105/AnsiballZ_command.py'
Jan 31 07:15:57 compute-0 sudo[143517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:57 compute-0 python3.9[143519]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:15:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:57 compute-0 sudo[143517]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:57 compute-0 sudo[143672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wokzauckaxvfxaltjfhsnqvrlgxednry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843757.701883-803-84536323530229/AnsiballZ_file.py'
Jan 31 07:15:57 compute-0 sudo[143672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:15:58 compute-0 python3.9[143674]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:15:58 compute-0 sudo[143672]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:58 compute-0 ceph-mon[74496]: pgmap v442: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:15:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:15:59.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:15:59 compute-0 python3.9[143824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:15:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:15:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:15:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:15:59.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:15:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:15:59 compute-0 sudo[143851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:59 compute-0 sudo[143851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:59 compute-0 sudo[143851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:59 compute-0 sudo[143876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:15:59 compute-0 sudo[143876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:15:59 compute-0 sudo[143876]: pam_unix(sudo:session): session closed for user root
Jan 31 07:15:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:00 compute-0 sudo[144026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzslgqvtiuyqdgqqjdtevwlkbgchumyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843760.245087-923-4212573105523/AnsiballZ_command.py'
Jan 31 07:16:00 compute-0 sudo[144026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:00 compute-0 python3.9[144028]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:00 compute-0 ovs-vsctl[144029]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 07:16:00 compute-0 sudo[144026]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:00 compute-0 ceph-mon[74496]: pgmap v443: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:01 compute-0 sudo[144180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tucfgzfceyomzpbvtmouvfrmqrzayoxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843760.9986212-950-249375847809519/AnsiballZ_command.py'
Jan 31 07:16:01 compute-0 sudo[144180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:01.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:01 compute-0 python3.9[144182]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:01 compute-0 sudo[144180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:01.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:01 compute-0 sudo[144335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iilktvjmsgbypffoqlpjvjsydiviiyoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843761.6133013-974-241313972699652/AnsiballZ_command.py'
Jan 31 07:16:01 compute-0 sudo[144335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:02 compute-0 python3.9[144337]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:02 compute-0 ovs-vsctl[144338]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 07:16:02 compute-0 sudo[144335]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:02 compute-0 python3.9[144488]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:16:02 compute-0 ceph-mon[74496]: pgmap v444: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:03 compute-0 sudo[144640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acmdkaudjbmklsvjpwgtkazwzbhsbddn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843762.8752122-1025-186464607005960/AnsiballZ_file.py'
Jan 31 07:16:03 compute-0 sudo[144640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:03 compute-0 python3.9[144642]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:03 compute-0 sudo[144640]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:03.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:03.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:03 compute-0 sudo[144793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfguczlfsqnsschhzayarrtkfxaurrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843763.4901335-1049-163634803921243/AnsiballZ_stat.py'
Jan 31 07:16:03 compute-0 sudo[144793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:03 compute-0 python3.9[144795]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:03 compute-0 sudo[144793]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:04 compute-0 sudo[144871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewxutsncyipsdhxuyklmbygkitpeaibq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843763.4901335-1049-163634803921243/AnsiballZ_file.py'
Jan 31 07:16:04 compute-0 sudo[144871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:04 compute-0 python3.9[144873]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:04 compute-0 sudo[144871]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:04 compute-0 sudo[145023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtzliirvdnwxzetbadpzettqxqxsplti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843764.4629154-1049-175864944369628/AnsiballZ_stat.py'
Jan 31 07:16:04 compute-0 sudo[145023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:04 compute-0 python3.9[145025]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:04 compute-0 sudo[145023]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:05 compute-0 ceph-mon[74496]: pgmap v445: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:05 compute-0 sudo[145101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upzbmliidorjkehreamlpsgpmreqotmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843764.4629154-1049-175864944369628/AnsiballZ_file.py'
Jan 31 07:16:05 compute-0 sudo[145101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:05 compute-0 python3.9[145103]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:05 compute-0 sudo[145101]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:05.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:05.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:05 compute-0 sudo[145254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obccggkeeghfckfkbwjuwfmoyvwmpjez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843765.5571008-1118-246398460309843/AnsiballZ_file.py'
Jan 31 07:16:05 compute-0 sudo[145254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:05 compute-0 python3.9[145256]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:05 compute-0 sudo[145254]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:06 compute-0 ceph-mon[74496]: pgmap v446: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:06 compute-0 sudo[145406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxcynrrmhmcrzghvzlgoztihzklbthjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843766.1946018-1142-177379403490650/AnsiballZ_stat.py'
Jan 31 07:16:06 compute-0 sudo[145406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:06 compute-0 python3.9[145408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:06 compute-0 sudo[145406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:06 compute-0 sudo[145484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljuiaearzvywmevjwcedqqqxsuveqbxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843766.1946018-1142-177379403490650/AnsiballZ_file.py'
Jan 31 07:16:06 compute-0 sudo[145484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:07 compute-0 python3.9[145486]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:07 compute-0 sudo[145484]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:07.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:07 compute-0 sudo[145637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnjfvrncmsvjgmenhftqrakgxseiasg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843767.2055888-1178-221897301735228/AnsiballZ_stat.py'
Jan 31 07:16:07 compute-0 sudo[145637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:07.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:07 compute-0 python3.9[145639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:07 compute-0 sudo[145637]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:07 compute-0 sudo[145715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhksjhdrrrgszpymxcwlhyrywlehkjah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843767.2055888-1178-221897301735228/AnsiballZ_file.py'
Jan 31 07:16:07 compute-0 sudo[145715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:08 compute-0 python3.9[145717]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:08 compute-0 sudo[145715]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:08 compute-0 sudo[145867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyvixmdqhippxhsffqbmunqglvmhfxnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843768.2952602-1214-51715515615334/AnsiballZ_systemd.py'
Jan 31 07:16:08 compute-0 sudo[145867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:08 compute-0 ceph-mon[74496]: pgmap v447: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:08 compute-0 python3.9[145869]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:16:08 compute-0 systemd[1]: Reloading.
Jan 31 07:16:08 compute-0 systemd-rc-local-generator[145888]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:08 compute-0 systemd-sysv-generator[145891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:09 compute-0 sudo[145867]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:16:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:09.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:16:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:09.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:09 compute-0 sudo[146058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yonyeuzycyifutkfmtrvbqyghosjlkyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843769.4390385-1238-13482312038779/AnsiballZ_stat.py'
Jan 31 07:16:09 compute-0 sudo[146058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:09 compute-0 python3.9[146060]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:09 compute-0 sudo[146058]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:10 compute-0 sudo[146136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxpelbfymdpongjmphcrpkyclfxysczg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843769.4390385-1238-13482312038779/AnsiballZ_file.py'
Jan 31 07:16:10 compute-0 sudo[146136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:10 compute-0 python3.9[146138]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:10 compute-0 sudo[146136]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:10 compute-0 ceph-mon[74496]: pgmap v448: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:10 compute-0 sudo[146288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhwqvvkkwjspmbhqotprgtnqgvcbgkrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843770.684773-1274-273977094342676/AnsiballZ_stat.py'
Jan 31 07:16:10 compute-0 sudo[146288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:11 compute-0 python3.9[146290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:11 compute-0 sudo[146288]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:11.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:11 compute-0 sudo[146367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcndbmkoztxpsoximblasbyiwboivzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843770.684773-1274-273977094342676/AnsiballZ_file.py'
Jan 31 07:16:11 compute-0 sudo[146367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:16:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:11.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:16:11 compute-0 python3.9[146369]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:11 compute-0 sudo[146367]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:11 compute-0 sudo[146519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snoxpawscjikmcktebpontspquuxmnhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843771.7219977-1310-121267352148336/AnsiballZ_systemd.py'
Jan 31 07:16:11 compute-0 sudo[146519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:12 compute-0 python3.9[146521]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:16:12 compute-0 systemd[1]: Reloading.
Jan 31 07:16:12 compute-0 systemd-rc-local-generator[146549]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:12 compute-0 systemd-sysv-generator[146553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:12 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 07:16:12 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:16:12 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:16:12 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 07:16:12 compute-0 ceph-mon[74496]: pgmap v449: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:12 compute-0 sudo[146519]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:13 compute-0 sudo[146715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owfknbgecfehtidxyjahwdnnriyaefbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843772.9510984-1340-58714453887042/AnsiballZ_file.py'
Jan 31 07:16:13 compute-0 sudo[146715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:13 compute-0 python3.9[146717]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:13 compute-0 sudo[146715]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:13.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:13 compute-0 sudo[146867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvvipkhngmrzipycuvtdrlitgaqqfslj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843773.5865278-1364-89037210009789/AnsiballZ_stat.py'
Jan 31 07:16:13 compute-0 sudo[146867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:14 compute-0 python3.9[146869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:14 compute-0 sudo[146867]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:14 compute-0 sudo[146990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqlektatuvtczbckiahqpxdisiqnlumv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843773.5865278-1364-89037210009789/AnsiballZ_copy.py'
Jan 31 07:16:14 compute-0 sudo[146990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:14 compute-0 python3.9[146992]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843773.5865278-1364-89037210009789/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:14 compute-0 sudo[146990]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:14 compute-0 ceph-mon[74496]: pgmap v450: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:15 compute-0 sudo[147143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afotgdwqszwadudldibznlshxkzrykdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843775.024883-1415-111894591942906/AnsiballZ_file.py'
Jan 31 07:16:15 compute-0 sudo[147143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:15 compute-0 python3.9[147145]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:15 compute-0 sudo[147143]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:16 compute-0 sshd-session[147146]: Invalid user sol from 45.148.10.240 port 38634
Jan 31 07:16:16 compute-0 ceph-mon[74496]: pgmap v451: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:16 compute-0 sudo[147297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeceeoljoymultwcmbrxxtcgrkzhmmxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843775.8874507-1439-43950386123425/AnsiballZ_file.py'
Jan 31 07:16:16 compute-0 sudo[147297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:16 compute-0 python3.9[147299]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:16 compute-0 sudo[147297]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:16 compute-0 sshd-session[147146]: Connection closed by invalid user sol 45.148.10.240 port 38634 [preauth]
Jan 31 07:16:16 compute-0 sudo[147449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyapvvojbcscdeucttvazjujtefabwmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843776.5720189-1463-11129369796152/AnsiballZ_stat.py'
Jan 31 07:16:16 compute-0 sudo[147449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:17 compute-0 python3.9[147451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:17 compute-0 sudo[147449]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:17 compute-0 sudo[147573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mimmgkautcsyetfgpwbqvqzczhsvdzdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843776.5720189-1463-11129369796152/AnsiballZ_copy.py'
Jan 31 07:16:17 compute-0 sudo[147573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:17.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:17 compute-0 python3.9[147575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843776.5720189-1463-11129369796152/.source.json _original_basename=.qww_3skg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:17 compute-0 sudo[147573]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:17.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:18 compute-0 python3.9[147725]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:18 compute-0 ceph-mon[74496]: pgmap v452: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:19.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:16:19
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', 'volumes', 'backups', 'vms', 'default.rgw.meta', '.rgw.root']
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:16:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:16:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:16:20 compute-0 sudo[148097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:20 compute-0 sudo[148097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:20 compute-0 sudo[148097]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:20 compute-0 sudo[148146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:20 compute-0 sudo[148146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:20 compute-0 sudo[148146]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:16:20 compute-0 sudo[148195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knkwkjsjkmssknnqrcraavxekrtecprg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843779.6604877-1583-129105268466170/AnsiballZ_container_config_data.py'
Jan 31 07:16:20 compute-0 sudo[148195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:16:20 compute-0 python3.9[148199]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 07:16:20 compute-0 sudo[148195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:20 compute-0 ceph-mon[74496]: pgmap v453: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:21 compute-0 sudo[148349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllqebizogioccelovicmpqykshvylaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843780.73209-1616-229230770697341/AnsiballZ_container_config_hash.py'
Jan 31 07:16:21 compute-0 sudo[148349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:21 compute-0 python3.9[148351]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:16:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:21 compute-0 sudo[148349]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:22 compute-0 sudo[148502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwbosnwyvzrlvpygqzhpwgogxyseatwd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843781.6818838-1646-209797198161461/AnsiballZ_edpm_container_manage.py'
Jan 31 07:16:22 compute-0 sudo[148502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:22 compute-0 python3[148504]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:16:22 compute-0 ceph-mon[74496]: pgmap v454: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:23.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:23.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:24 compute-0 ceph-mon[74496]: pgmap v455: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:16:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:25.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:16:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:25.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:27 compute-0 ceph-mon[74496]: pgmap v456: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:27 compute-0 podman[148517]: 2026-01-31 07:16:27.31952469 +0000 UTC m=+4.782750605 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 07:16:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:27 compute-0 podman[148639]: 2026-01-31 07:16:27.451889784 +0000 UTC m=+0.072538302 container create 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 07:16:27 compute-0 podman[148639]: 2026-01-31 07:16:27.395590873 +0000 UTC m=+0.016239441 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 07:16:27 compute-0 python3[148504]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 07:16:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:27 compute-0 sudo[148502]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:28 compute-0 sudo[148827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvcuweptrcjjtjrjataewevbodfousp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843787.86024-1670-226846563822357/AnsiballZ_stat.py'
Jan 31 07:16:28 compute-0 sudo[148827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:28 compute-0 ceph-mon[74496]: pgmap v457: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:28 compute-0 python3.9[148829]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:16:28 compute-0 sudo[148827]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:28 compute-0 sudo[148981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jekwhkraaqpgwhvzrajbnkgpaztaokfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843788.5967915-1697-22677708955258/AnsiballZ_file.py'
Jan 31 07:16:28 compute-0 sudo[148981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:29 compute-0 python3.9[148983]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:29 compute-0 sudo[148981]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:29 compute-0 sudo[149058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htpjhzhlsispprlulnvzngnwtnhqxxrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843788.5967915-1697-22677708955258/AnsiballZ_stat.py'
Jan 31 07:16:29 compute-0 sudo[149058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:29.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:16:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 7729 writes, 32K keys, 7729 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 7729 writes, 1386 syncs, 5.58 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7729 writes, 32K keys, 7729 commit groups, 1.0 writes per commit group, ingest: 20.60 MB, 0.03 MB/s
                                           Interval WAL: 7729 writes, 1386 syncs, 5.58 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:16:29 compute-0 python3.9[149060]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:16:29 compute-0 sudo[149058]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:29 compute-0 sudo[149209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqojzoasafughgnjkcikhvboyekmqoza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843789.5131462-1697-178303533997814/AnsiballZ_copy.py'
Jan 31 07:16:29 compute-0 sudo[149209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:30 compute-0 python3.9[149211]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843789.5131462-1697-178303533997814/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:30 compute-0 sudo[149209]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:30 compute-0 sudo[149285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhfueksjxafalkpohhsushomyzjhyvkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843789.5131462-1697-178303533997814/AnsiballZ_systemd.py'
Jan 31 07:16:30 compute-0 sudo[149285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:30 compute-0 ceph-mon[74496]: pgmap v458: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:30 compute-0 python3.9[149287]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:16:30 compute-0 systemd[1]: Reloading.
Jan 31 07:16:30 compute-0 systemd-rc-local-generator[149306]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:30 compute-0 systemd-sysv-generator[149309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:30 compute-0 sudo[149285]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:31 compute-0 sudo[149396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtskeigknkrldlhdlyiobrruqpphzupj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843789.5131462-1697-178303533997814/AnsiballZ_systemd.py'
Jan 31 07:16:31 compute-0 sudo[149396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:31.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:31 compute-0 python3.9[149398]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:16:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:31 compute-0 systemd[1]: Reloading.
Jan 31 07:16:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:31 compute-0 systemd-rc-local-generator[149427]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:31 compute-0 systemd-sysv-generator[149432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:31 compute-0 systemd[1]: Starting ovn_controller container...
Jan 31 07:16:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:16:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc320a1049d8f05fd2fcd9b4a54d6d483d4695cc2e0a0929a7bc54470f0fb8a8/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 07:16:32 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238.
Jan 31 07:16:32 compute-0 podman[149441]: 2026-01-31 07:16:32.065351325 +0000 UTC m=+0.221921795 container init 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + sudo -E kolla_set_configs
Jan 31 07:16:32 compute-0 podman[149441]: 2026-01-31 07:16:32.093752207 +0000 UTC m=+0.250322657 container start 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:16:32 compute-0 edpm-start-podman-container[149441]: ovn_controller
Jan 31 07:16:32 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 31 07:16:32 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 07:16:32 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 07:16:32 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 31 07:16:32 compute-0 systemd[149491]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 31 07:16:32 compute-0 edpm-start-podman-container[149440]: Creating additional drop-in dependency for "ovn_controller" (1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238)
Jan 31 07:16:32 compute-0 podman[149464]: 2026-01-31 07:16:32.166462002 +0000 UTC m=+0.064050076 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:16:32 compute-0 systemd[1]: 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238-518afe6e7402b380.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 07:16:32 compute-0 systemd[1]: 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238-518afe6e7402b380.service: Failed with result 'exit-code'.
Jan 31 07:16:32 compute-0 systemd[1]: Reloading.
Jan 31 07:16:32 compute-0 systemd-sysv-generator[149543]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:16:32 compute-0 systemd-rc-local-generator[149539]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:16:32 compute-0 systemd[149491]: Queued start job for default target Main User Target.
Jan 31 07:16:32 compute-0 systemd[149491]: Created slice User Application Slice.
Jan 31 07:16:32 compute-0 systemd[149491]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 07:16:32 compute-0 systemd[149491]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:16:32 compute-0 systemd[149491]: Reached target Paths.
Jan 31 07:16:32 compute-0 systemd[149491]: Reached target Timers.
Jan 31 07:16:32 compute-0 systemd[149491]: Starting D-Bus User Message Bus Socket...
Jan 31 07:16:32 compute-0 systemd[149491]: Starting Create User's Volatile Files and Directories...
Jan 31 07:16:32 compute-0 systemd[149491]: Finished Create User's Volatile Files and Directories.
Jan 31 07:16:32 compute-0 systemd[149491]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:16:32 compute-0 systemd[149491]: Reached target Sockets.
Jan 31 07:16:32 compute-0 systemd[149491]: Reached target Basic System.
Jan 31 07:16:32 compute-0 systemd[149491]: Reached target Main User Target.
Jan 31 07:16:32 compute-0 systemd[149491]: Startup finished in 137ms.
Jan 31 07:16:32 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 31 07:16:32 compute-0 systemd[1]: Started ovn_controller container.
Jan 31 07:16:32 compute-0 systemd[1]: Started Session c1 of User root.
Jan 31 07:16:32 compute-0 sudo[149396]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:32 compute-0 ovn_controller[149457]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:16:32 compute-0 ovn_controller[149457]: INFO:__main__:Validating config file
Jan 31 07:16:32 compute-0 ovn_controller[149457]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:16:32 compute-0 ovn_controller[149457]: INFO:__main__:Writing out command to execute
Jan 31 07:16:32 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 07:16:32 compute-0 ovn_controller[149457]: ++ cat /run_command
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + ARGS=
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + sudo kolla_copy_cacerts
Jan 31 07:16:32 compute-0 systemd[1]: Started Session c2 of User root.
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + [[ ! -n '' ]]
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + . kolla_extend_start
Jan 31 07:16:32 compute-0 ovn_controller[149457]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + umask 0022
Jan 31 07:16:32 compute-0 ovn_controller[149457]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 07:16:32 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.5801] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.5811] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <warn>  [1769843792.5814] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.5821] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.5825] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.5828] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 07:16:32 compute-0 kernel: br-int: entered promiscuous mode
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 07:16:32 compute-0 systemd-udevd[149588]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 07:16:32 compute-0 ovn_controller[149457]: 2026-01-31T07:16:32Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.6548] manager: (ovn-a84029-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.6557] manager: (ovn-c06836-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.6567] manager: (ovn-f59cdb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 31 07:16:32 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.6754] device (genev_sys_6081): carrier: link connected
Jan 31 07:16:32 compute-0 NetworkManager[49108]: <info>  [1769843792.6757] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Jan 31 07:16:32 compute-0 ceph-mon[74496]: pgmap v459: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:33 compute-0 python3.9[149720]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 07:16:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:16:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:33.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:16:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:34 compute-0 sudo[149870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llpsupnadiiwlbhupbuthmgmkirjbiuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843793.9287865-1832-130394229762063/AnsiballZ_stat.py'
Jan 31 07:16:34 compute-0 sudo[149870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:34 compute-0 python3.9[149872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:16:34 compute-0 sudo[149870]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:16:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:16:34 compute-0 sudo[149993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaoinrcwvjgmfzobtqatgtgpiqfwhhon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843793.9287865-1832-130394229762063/AnsiballZ_copy.py'
Jan 31 07:16:34 compute-0 sudo[149993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:34 compute-0 ceph-mon[74496]: pgmap v460: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:35 compute-0 python3.9[149995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843793.9287865-1832-130394229762063/.source.yaml _original_basename=.43sj2hkc follow=False checksum=f6b75a149047666d3f825893ea1fa078d9873798 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:16:35 compute-0 sudo[149993]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:35.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:35.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:35 compute-0 sudo[150146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azzyvyninwwavomdbtqolagjlibzynbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843795.3457453-1877-195719757968444/AnsiballZ_command.py'
Jan 31 07:16:35 compute-0 sudo[150146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:35 compute-0 python3.9[150148]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:35 compute-0 ovs-vsctl[150149]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 07:16:35 compute-0 sudo[150146]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:36 compute-0 sudo[150299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dksraxeqxnhvmpaxqevhbhhwialvrffn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843796.004946-1901-86404890766020/AnsiballZ_command.py'
Jan 31 07:16:36 compute-0 sudo[150299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:36 compute-0 ceph-mon[74496]: pgmap v461: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:36 compute-0 python3.9[150301]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:36 compute-0 ovs-vsctl[150303]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 07:16:36 compute-0 sudo[150299]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:37 compute-0 sudo[150455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcrxxsmibhriheksltzwmxyhgedpgwkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843797.0128107-1943-273328980291590/AnsiballZ_command.py'
Jan 31 07:16:37 compute-0 sudo[150455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:16:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:37.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:16:37 compute-0 python3.9[150457]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:16:37 compute-0 ovs-vsctl[150458]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 07:16:37 compute-0 sudo[150455]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:37 compute-0 sshd-session[137717]: Connection closed by 192.168.122.30 port 60472
Jan 31 07:16:37 compute-0 sshd-session[137714]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:16:37 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 31 07:16:37 compute-0 systemd[1]: session-46.scope: Consumed 50.997s CPU time.
Jan 31 07:16:37 compute-0 systemd-logind[816]: Session 46 logged out. Waiting for processes to exit.
Jan 31 07:16:37 compute-0 systemd-logind[816]: Removed session 46.
Jan 31 07:16:38 compute-0 ceph-mon[74496]: pgmap v462: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:39.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:40 compute-0 sudo[150484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:40 compute-0 sudo[150484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:40 compute-0 sudo[150484]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:40 compute-0 sudo[150509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:40 compute-0 sudo[150509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:40 compute-0 sudo[150509]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:40 compute-0 ceph-mon[74496]: pgmap v463: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:41.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 07:16:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:42 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 31 07:16:42 compute-0 systemd[149491]: Activating special unit Exit the Session...
Jan 31 07:16:42 compute-0 systemd[149491]: Stopped target Main User Target.
Jan 31 07:16:42 compute-0 systemd[149491]: Stopped target Basic System.
Jan 31 07:16:42 compute-0 systemd[149491]: Stopped target Paths.
Jan 31 07:16:42 compute-0 systemd[149491]: Stopped target Sockets.
Jan 31 07:16:42 compute-0 systemd[149491]: Stopped target Timers.
Jan 31 07:16:42 compute-0 systemd[149491]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 07:16:42 compute-0 systemd[149491]: Closed D-Bus User Message Bus Socket.
Jan 31 07:16:42 compute-0 systemd[149491]: Stopped Create User's Volatile Files and Directories.
Jan 31 07:16:42 compute-0 systemd[149491]: Removed slice User Application Slice.
Jan 31 07:16:42 compute-0 systemd[149491]: Reached target Shutdown.
Jan 31 07:16:42 compute-0 systemd[149491]: Finished Exit the Session.
Jan 31 07:16:42 compute-0 systemd[149491]: Reached target Exit the Session.
Jan 31 07:16:42 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 07:16:42 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 31 07:16:42 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 07:16:42 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 07:16:42 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 07:16:42 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 07:16:42 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 07:16:42 compute-0 ceph-mon[74496]: pgmap v464: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:43.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:43 compute-0 sshd-session[150538]: Accepted publickey for zuul from 192.168.122.30 port 46310 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:16:43 compute-0 systemd-logind[816]: New session 48 of user zuul.
Jan 31 07:16:43 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 31 07:16:43 compute-0 sshd-session[150538]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:16:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:45.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:47 compute-0 sudo[150706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:47 compute-0 sudo[150706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:47 compute-0 sudo[150706]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:47 compute-0 sudo[150731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:16:47 compute-0 sudo[150731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:47 compute-0 sudo[150731]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:47.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:47 compute-0 sudo[150756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:16:47 compute-0 sudo[150756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:47 compute-0 sudo[150756]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:47 compute-0 sudo[150781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:16:47 compute-0 sudo[150781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:16:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:47 compute-0 sudo[150781]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:16:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:16:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 07:16:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:16:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:49.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:49.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:16:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:16:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:16:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:16:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:16:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:16:50 compute-0 ceph-mon[74496]: pgmap v465: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:51.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:16:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:51.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:16:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:53.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:16:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:16:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:53.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:53 compute-0 python3.9[150703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:16:54 compute-0 sudo[150994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igpbeckbofqhgfoeqwrppouagkkjaiif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843814.24341-61-8490629378020/AnsiballZ_file.py'
Jan 31 07:16:54 compute-0 sudo[150994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:54 compute-0 python3.9[150996]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:55 compute-0 sudo[150994]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:55 compute-0 ceph-mon[74496]: pgmap v466: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:55 compute-0 ceph-mon[74496]: pgmap v467: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:16:55 compute-0 ceph-mon[74496]: pgmap v468: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:16:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:16:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:55.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:55 compute-0 sudo[151147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nawvgurflfvcujpylwriohvxngyippcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843815.1976707-61-75353659959195/AnsiballZ_file.py'
Jan 31 07:16:55 compute-0 sudo[151147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:16:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:16:55 compute-0 python3.9[151149]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:55 compute-0 sudo[151147]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:56 compute-0 sudo[151299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upuigiqndefkjsadbhezustdetbgyamf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843815.8051705-61-216034005092388/AnsiballZ_file.py'
Jan 31 07:16:56 compute-0 sudo[151299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:56 compute-0 python3.9[151301]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:56 compute-0 sudo[151299]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:56 compute-0 sudo[151451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phdhpyshzocdfqcwcjljvytignbdxprc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843816.4681547-61-46986895334082/AnsiballZ_file.py'
Jan 31 07:16:56 compute-0 sudo[151451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:56 compute-0 python3.9[151453]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:56 compute-0 sudo[151451]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:16:57 compute-0 sudo[151604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqwopihvochiffitxmgjrqrcwujtkxew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843817.1067078-61-176353552582196/AnsiballZ_file.py'
Jan 31 07:16:57 compute-0 sudo[151604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:16:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:57.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:57 compute-0 python3.9[151606]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:16:57 compute-0 sudo[151604]: pam_unix(sudo:session): session closed for user root
Jan 31 07:16:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:57.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:16:59 compute-0 ceph-mon[74496]: pgmap v469: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:59 compute-0 ceph-mon[74496]: pgmap v470: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:16:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:16:59 compute-0 ceph-mon[74496]: pgmap v471: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:16:59 compute-0 python3.9[151756]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:16:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:16:59.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 07:16:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:16:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:16:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:16:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:16:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:16:59.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:16:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 07:16:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:16:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:16:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:16:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:16:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:16:59 compute-0 sudo[151907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hchczloahadmklgmhpxyczsibkefhhqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843819.3586419-193-78293508896942/AnsiballZ_seboolean.py'
Jan 31 07:16:59 compute-0 sudo[151907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:00 compute-0 python3.9[151909]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 07:17:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9e504758-75f7-452a-ad66-5e766b25bf41 does not exist
Jan 31 07:17:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4f05f725-f472-4229-8082-ad255c78bd81 does not exist
Jan 31 07:17:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 37b8de8e-9884-4855-92de-9448e62f6b40 does not exist
Jan 31 07:17:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:17:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:17:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:17:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:17:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:17:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:17:00 compute-0 sudo[151910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:00 compute-0 sudo[151910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:00 compute-0 sudo[151910]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:00 compute-0 sudo[151922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:00 compute-0 sudo[151922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:00 compute-0 sudo[151922]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:00 compute-0 sudo[151959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:00 compute-0 sudo[151959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:00 compute-0 sudo[151959]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:00 compute-0 sudo[151965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:17:00 compute-0 sudo[151965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:00 compute-0 sudo[151965]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:00 compute-0 ceph-mon[74496]: pgmap v472: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:17:00 compute-0 ceph-mon[74496]: pgmap v473: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 07:17:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:17:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:17:00 compute-0 sudo[152010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:00 compute-0 sudo[152010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:00 compute-0 sudo[152010]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:00 compute-0 sudo[152035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:17:00 compute-0 sudo[152035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:00 compute-0 sudo[151907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:00 compute-0 podman[152101]: 2026-01-31 07:17:00.696996011 +0000 UTC m=+0.021358356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:17:00 compute-0 podman[152101]: 2026-01-31 07:17:00.95151823 +0000 UTC m=+0.275880565 container create e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:17:01 compute-0 systemd[1]: Started libpod-conmon-e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6.scope.
Jan 31 07:17:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:17:01 compute-0 podman[152101]: 2026-01-31 07:17:01.229753316 +0000 UTC m=+0.554115751 container init e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:17:01 compute-0 podman[152101]: 2026-01-31 07:17:01.237441219 +0000 UTC m=+0.561803564 container start e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:17:01 compute-0 sleepy_zhukovsky[152193]: 167 167
Jan 31 07:17:01 compute-0 systemd[1]: libpod-e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6.scope: Deactivated successfully.
Jan 31 07:17:01 compute-0 conmon[152193]: conmon e29dfe4f772c9258f65b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6.scope/container/memory.events
Jan 31 07:17:01 compute-0 podman[152101]: 2026-01-31 07:17:01.379562762 +0000 UTC m=+0.703925137 container attach e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:17:01 compute-0 podman[152101]: 2026-01-31 07:17:01.380418144 +0000 UTC m=+0.704780479 container died e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:17:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:17:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:17:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:17:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:17:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:17:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:17:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:01.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:17:01 compute-0 python3.9[152283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-de87b47ece98bf7194dee5bd68044191b4511d58227c0474e5c4df75681d6e1a-merged.mount: Deactivated successfully.
Jan 31 07:17:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:02 compute-0 podman[152101]: 2026-01-31 07:17:02.16900636 +0000 UTC m=+1.493368685 container remove e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:17:02 compute-0 systemd[1]: libpod-conmon-e29dfe4f772c9258f65b15b464d75b10ae73f54807d07a4dd25cf05293548eb6.scope: Deactivated successfully.
Jan 31 07:17:02 compute-0 ovn_controller[149457]: 2026-01-31T07:17:02Z|00025|memory|INFO|17280 kB peak resident set size after 29.7 seconds
Jan 31 07:17:02 compute-0 ovn_controller[149457]: 2026-01-31T07:17:02Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 31 07:17:02 compute-0 python3.9[152406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843820.9442809-217-276327958577961/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:02 compute-0 podman[152407]: 2026-01-31 07:17:02.282981358 +0000 UTC m=+0.088732591 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:17:02 compute-0 podman[152431]: 2026-01-31 07:17:02.291662167 +0000 UTC m=+0.020890784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:17:02 compute-0 podman[152431]: 2026-01-31 07:17:02.49350727 +0000 UTC m=+0.222735877 container create a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:17:02 compute-0 systemd[1]: Started libpod-conmon-a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e.scope.
Jan 31 07:17:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9004c58823c4b2c7eaed7977e89ce4d63e5f98b9a2ae8ee20dc8b4086488ba0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9004c58823c4b2c7eaed7977e89ce4d63e5f98b9a2ae8ee20dc8b4086488ba0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9004c58823c4b2c7eaed7977e89ce4d63e5f98b9a2ae8ee20dc8b4086488ba0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9004c58823c4b2c7eaed7977e89ce4d63e5f98b9a2ae8ee20dc8b4086488ba0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9004c58823c4b2c7eaed7977e89ce4d63e5f98b9a2ae8ee20dc8b4086488ba0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:02 compute-0 ceph-mon[74496]: pgmap v474: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:03 compute-0 podman[152431]: 2026-01-31 07:17:03.062943146 +0000 UTC m=+0.792171833 container init a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:17:03 compute-0 podman[152431]: 2026-01-31 07:17:03.074526852 +0000 UTC m=+0.803755499 container start a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:17:03 compute-0 podman[152431]: 2026-01-31 07:17:03.119386819 +0000 UTC m=+0.848615476 container attach a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:17:03 compute-0 python3.9[152606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:17:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:17:03 compute-0 python3.9[152730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843822.4638402-262-198608301007287/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:03 compute-0 gifted_golick[152553]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:17:03 compute-0 gifted_golick[152553]: --> relative data size: 1.0
Jan 31 07:17:03 compute-0 gifted_golick[152553]: --> All data devices are unavailable
Jan 31 07:17:03 compute-0 systemd[1]: libpod-a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e.scope: Deactivated successfully.
Jan 31 07:17:03 compute-0 podman[152765]: 2026-01-31 07:17:03.952931367 +0000 UTC m=+0.028765213 container died a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:17:04 compute-0 ceph-mon[74496]: pgmap v475: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9004c58823c4b2c7eaed7977e89ce4d63e5f98b9a2ae8ee20dc8b4086488ba0-merged.mount: Deactivated successfully.
Jan 31 07:17:04 compute-0 sudo[152905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcoughcjveawotdrvxlsywwmhmzgnyhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843824.0882945-313-125099324630899/AnsiballZ_setup.py'
Jan 31 07:17:04 compute-0 sudo[152905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:04 compute-0 podman[152765]: 2026-01-31 07:17:04.347477801 +0000 UTC m=+0.423311607 container remove a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:17:04 compute-0 systemd[1]: libpod-conmon-a69e26a785212d3de2f1e685229dc6733840de2a279aed7be43b7f6e9ed8430e.scope: Deactivated successfully.
Jan 31 07:17:04 compute-0 sudo[152035]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:04 compute-0 sudo[152908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:04 compute-0 sudo[152908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:04 compute-0 sudo[152908]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:04 compute-0 sudo[152933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:17:04 compute-0 sudo[152933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:04 compute-0 sudo[152933]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:04 compute-0 sudo[152958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:04 compute-0 sudo[152958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:04 compute-0 sudo[152958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:04 compute-0 sudo[152983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:17:04 compute-0 sudo[152983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:04 compute-0 python3.9[152907]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:17:04 compute-0 sudo[152905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:04 compute-0 podman[153054]: 2026-01-31 07:17:04.947324111 +0000 UTC m=+0.091579276 container create 13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:17:04 compute-0 podman[153054]: 2026-01-31 07:17:04.879868135 +0000 UTC m=+0.024123350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:17:04 compute-0 systemd[1]: Started libpod-conmon-13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21.scope.
Jan 31 07:17:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:17:05 compute-0 podman[153054]: 2026-01-31 07:17:05.126235808 +0000 UTC m=+0.270490973 container init 13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:17:05 compute-0 podman[153054]: 2026-01-31 07:17:05.132161544 +0000 UTC m=+0.276416679 container start 13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:17:05 compute-0 infallible_agnesi[153073]: 167 167
Jan 31 07:17:05 compute-0 systemd[1]: libpod-13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21.scope: Deactivated successfully.
Jan 31 07:17:05 compute-0 sudo[153164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxmwejrcnpqmfoptahxrfxvhhaoeyftd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843824.0882945-313-125099324630899/AnsiballZ_dnf.py'
Jan 31 07:17:05 compute-0 sudo[153164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:05 compute-0 podman[153054]: 2026-01-31 07:17:05.373482212 +0000 UTC m=+0.517737347 container attach 13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:17:05 compute-0 podman[153054]: 2026-01-31 07:17:05.374328925 +0000 UTC m=+0.518584070 container died 13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:17:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:05.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:05 compute-0 python3.9[153166]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:17:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:05.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c13370f68bc3a8f406fca9c06b8315aa3927934c1112ab5935b8538da1e498c-merged.mount: Deactivated successfully.
Jan 31 07:17:06 compute-0 podman[153054]: 2026-01-31 07:17:06.28375572 +0000 UTC m=+1.428010895 container remove 13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:17:06 compute-0 systemd[1]: libpod-conmon-13d2857f1d7fde2d5067c87a3c5b8ad6d97a30c5699de08c545273b57a09ab21.scope: Deactivated successfully.
Jan 31 07:17:06 compute-0 podman[153176]: 2026-01-31 07:17:06.402197766 +0000 UTC m=+0.021616734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:17:06 compute-0 podman[153176]: 2026-01-31 07:17:06.521567186 +0000 UTC m=+0.140986124 container create 6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:17:06 compute-0 systemd[1]: Started libpod-conmon-6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c.scope.
Jan 31 07:17:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15753a9397cbb9f8e5fc9f494389c3b0186df4a77dd47eff3d5007dab61638bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15753a9397cbb9f8e5fc9f494389c3b0186df4a77dd47eff3d5007dab61638bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15753a9397cbb9f8e5fc9f494389c3b0186df4a77dd47eff3d5007dab61638bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15753a9397cbb9f8e5fc9f494389c3b0186df4a77dd47eff3d5007dab61638bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:07 compute-0 ceph-mon[74496]: pgmap v476: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:07 compute-0 podman[153176]: 2026-01-31 07:17:07.228160701 +0000 UTC m=+0.847579629 container init 6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:17:07 compute-0 podman[153176]: 2026-01-31 07:17:07.238645329 +0000 UTC m=+0.858064257 container start 6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meitner, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:17:07 compute-0 sudo[153164]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:07 compute-0 podman[153176]: 2026-01-31 07:17:07.3395564 +0000 UTC m=+0.958975348 container attach 6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meitner, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:17:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:17:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:07.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:17:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 31 07:17:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:07.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 31 07:17:07 compute-0 sudo[153352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvvprfleuenozquiqwcujncqpugiqjex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843827.4207728-349-138975414276162/AnsiballZ_systemd.py'
Jan 31 07:17:07 compute-0 sudo[153352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]: {
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:     "0": [
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:         {
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "devices": [
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "/dev/loop3"
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             ],
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "lv_name": "ceph_lv0",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "lv_size": "7511998464",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "name": "ceph_lv0",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "tags": {
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.cluster_name": "ceph",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.crush_device_class": "",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.encrypted": "0",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.osd_id": "0",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.type": "block",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:                 "ceph.vdo": "0"
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             },
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "type": "block",
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:             "vg_name": "ceph_vg0"
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:         }
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]:     ]
Jan 31 07:17:07 compute-0 beautiful_meitner[153193]: }
Jan 31 07:17:08 compute-0 systemd[1]: libpod-6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c.scope: Deactivated successfully.
Jan 31 07:17:08 compute-0 conmon[153193]: conmon 6ca3ec4800e80065ce97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c.scope/container/memory.events
Jan 31 07:17:08 compute-0 podman[153176]: 2026-01-31 07:17:08.01076625 +0000 UTC m=+1.630185178 container died 6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:17:08 compute-0 python3.9[153354]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:17:08 compute-0 sudo[153352]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:08 compute-0 ceph-mon[74496]: pgmap v477: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-15753a9397cbb9f8e5fc9f494389c3b0186df4a77dd47eff3d5007dab61638bc-merged.mount: Deactivated successfully.
Jan 31 07:17:09 compute-0 python3.9[153520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:09.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:09 compute-0 podman[153176]: 2026-01-31 07:17:09.57739099 +0000 UTC m=+3.196809958 container remove 6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:17:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:09.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:09 compute-0 sudo[152983]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:09 compute-0 systemd[1]: libpod-conmon-6ca3ec4800e80065ce97ac7baea511774d4d5f6e9a05907e12fec00822c2869c.scope: Deactivated successfully.
Jan 31 07:17:09 compute-0 sudo[153627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:09 compute-0 sudo[153627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:09 compute-0 sudo[153627]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:09 compute-0 sudo[153668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:17:09 compute-0 sudo[153668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:09 compute-0 sudo[153668]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:09 compute-0 sudo[153693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:09 compute-0 sudo[153693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:09 compute-0 sudo[153693]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:09 compute-0 python3.9[153657]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843828.4991903-373-54545845383789/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:09 compute-0 sudo[153718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:17:09 compute-0 sudo[153718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:10 compute-0 podman[153865]: 2026-01-31 07:17:10.180425844 +0000 UTC m=+0.036252742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:17:10 compute-0 podman[153865]: 2026-01-31 07:17:10.45893562 +0000 UTC m=+0.314762538 container create fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:17:10 compute-0 python3.9[153946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:10 compute-0 systemd[1]: Started libpod-conmon-fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828.scope.
Jan 31 07:17:10 compute-0 ceph-mon[74496]: pgmap v478: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:17:10 compute-0 podman[153865]: 2026-01-31 07:17:10.852723832 +0000 UTC m=+0.708550770 container init fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ellis, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:17:10 compute-0 podman[153865]: 2026-01-31 07:17:10.862512325 +0000 UTC m=+0.718339213 container start fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:17:10 compute-0 systemd[1]: libpod-fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828.scope: Deactivated successfully.
Jan 31 07:17:10 compute-0 nostalgic_ellis[153973]: 167 167
Jan 31 07:17:10 compute-0 conmon[153973]: conmon fe7ce79d63a6a10b3e5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828.scope/container/memory.events
Jan 31 07:17:11 compute-0 python3.9[154073]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843830.0325308-373-214730456096520/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:11 compute-0 podman[153865]: 2026-01-31 07:17:11.262636235 +0000 UTC m=+1.118463123 container attach fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ellis, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:17:11 compute-0 podman[153865]: 2026-01-31 07:17:11.263771723 +0000 UTC m=+1.119598611 container died fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ellis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:17:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:11.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:11.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc1da0c2f7b9fe60068c2eb237048b0240c84e47931db5850994f69aa872f24f-merged.mount: Deactivated successfully.
Jan 31 07:17:11 compute-0 podman[153865]: 2026-01-31 07:17:11.757426199 +0000 UTC m=+1.613253077 container remove fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:17:11 compute-0 systemd[1]: libpod-conmon-fe7ce79d63a6a10b3e5fdcb3384117001d8cee5c33ff8fbd7493380ec4229828.scope: Deactivated successfully.
Jan 31 07:17:11 compute-0 podman[154145]: 2026-01-31 07:17:11.862621455 +0000 UTC m=+0.022132671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:17:11 compute-0 podman[154145]: 2026-01-31 07:17:11.973273547 +0000 UTC m=+0.132784713 container create 946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pasteur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:17:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:12 compute-0 systemd[1]: Started libpod-conmon-946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337.scope.
Jan 31 07:17:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b8b18f54172ebf4d69f2105a2784c2f5607a620ffe9baba261d63c4ab9c0d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b8b18f54172ebf4d69f2105a2784c2f5607a620ffe9baba261d63c4ab9c0d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b8b18f54172ebf4d69f2105a2784c2f5607a620ffe9baba261d63c4ab9c0d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b8b18f54172ebf4d69f2105a2784c2f5607a620ffe9baba261d63c4ab9c0d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:17:12 compute-0 podman[154145]: 2026-01-31 07:17:12.197489582 +0000 UTC m=+0.357000768 container init 946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:17:12 compute-0 podman[154145]: 2026-01-31 07:17:12.205186273 +0000 UTC m=+0.364697499 container start 946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:17:12 compute-0 podman[154145]: 2026-01-31 07:17:12.216398421 +0000 UTC m=+0.375909587 container attach 946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:17:12 compute-0 python3.9[154267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:12 compute-0 ceph-mon[74496]: pgmap v479: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:12 compute-0 python3.9[154390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843831.8334086-505-25969679370688/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]: {
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]:         "osd_id": 0,
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]:         "type": "bluestore"
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]:     }
Jan 31 07:17:13 compute-0 naughty_pasteur[154249]: }
Jan 31 07:17:13 compute-0 systemd[1]: libpod-946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337.scope: Deactivated successfully.
Jan 31 07:17:13 compute-0 podman[154145]: 2026-01-31 07:17:13.127347564 +0000 UTC m=+1.286858770 container died 946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pasteur, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:17:13 compute-0 python3.9[154567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:13.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b8b18f54172ebf4d69f2105a2784c2f5607a620ffe9baba261d63c4ab9c0d3-merged.mount: Deactivated successfully.
Jan 31 07:17:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:13.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:14 compute-0 python3.9[154690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843832.9578454-505-14592174721993/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:14 compute-0 podman[154145]: 2026-01-31 07:17:14.157127401 +0000 UTC m=+2.316638567 container remove 946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pasteur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:17:14 compute-0 ceph-mon[74496]: pgmap v480: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:14 compute-0 sudo[153718]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:14 compute-0 systemd[1]: libpod-conmon-946d904d464672cf000976f3d99c159aad8073be7689e47e7ae23c01cce29337.scope: Deactivated successfully.
Jan 31 07:17:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:17:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:17:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9709900a-669e-4ca6-8370-c68e5765160c does not exist
Jan 31 07:17:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 620ce75f-c9fe-45b6-b26e-78b9fc396c10 does not exist
Jan 31 07:17:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8cb17d76-bdb0-41bf-8d65-24412276195e does not exist
Jan 31 07:17:14 compute-0 sudo[154841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:14 compute-0 sudo[154841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:14 compute-0 sudo[154841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:14 compute-0 sudo[154866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:17:14 compute-0 sudo[154866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:14 compute-0 sudo[154866]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:14 compute-0 python3.9[154840]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:17:15 compute-0 sudo[155042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjyuauluepikanwxhxnpwlzbdjpwfxtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843834.9471002-619-54775236040176/AnsiballZ_file.py'
Jan 31 07:17:15 compute-0 sudo[155042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:15 compute-0 python3.9[155045]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:15 compute-0 sudo[155042]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:17:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:15.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:15.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:15 compute-0 sudo[155195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usdabdzebuhxtllcpiwizruabfhnuhfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843835.651219-643-217316083160097/AnsiballZ_stat.py'
Jan 31 07:17:15 compute-0 sudo[155195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:16 compute-0 python3.9[155197]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:16 compute-0 sudo[155195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:16 compute-0 sudo[155273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chjtjwkhckkqmntqchclhjxmxuljbaad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843835.651219-643-217316083160097/AnsiballZ_file.py'
Jan 31 07:17:16 compute-0 sudo[155273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:16 compute-0 python3.9[155275]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:16 compute-0 sudo[155273]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:16 compute-0 ceph-mon[74496]: pgmap v481: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:16 compute-0 sudo[155425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnycbmswkytgllottmupqlgunbcswglw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843836.6972299-643-33311186401379/AnsiballZ_stat.py'
Jan 31 07:17:16 compute-0 sudo[155425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:17 compute-0 python3.9[155427]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:17 compute-0 sudo[155425]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:17 compute-0 sudo[155504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucvkfzcbxemgpjnhprzzoylsknnlyhib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843836.6972299-643-33311186401379/AnsiballZ_file.py'
Jan 31 07:17:17 compute-0 sudo[155504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:17.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:17 compute-0 python3.9[155506]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:17 compute-0 sudo[155504]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:17.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:17 compute-0 sudo[155656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqqympdkfwbrohmeybmkxvzygbjbxzih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843837.741338-712-212584784196014/AnsiballZ_file.py'
Jan 31 07:17:17 compute-0 sudo[155656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:18 compute-0 ceph-mon[74496]: pgmap v482: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:18 compute-0 python3.9[155658]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:18 compute-0 sudo[155656]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:18 compute-0 sudo[155808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikjzxdigellwcbrmikrfvrirkdgscirr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843838.3952634-736-166310935044402/AnsiballZ_stat.py'
Jan 31 07:17:18 compute-0 sudo[155808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:18 compute-0 python3.9[155810]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:18 compute-0 sudo[155808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:19 compute-0 sudo[155886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwsownwmqqmovfdossbyrabelmpnwbsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843838.3952634-736-166310935044402/AnsiballZ_file.py'
Jan 31 07:17:19 compute-0 sudo[155886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:19 compute-0 python3.9[155888]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:19 compute-0 sudo[155886]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:19.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:19.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:19 compute-0 sudo[156039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiozvrtnmrrdhnsnmtazpenbahrbruvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843839.436358-772-258422483994095/AnsiballZ_stat.py'
Jan 31 07:17:19 compute-0 sudo[156039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:19 compute-0 python3.9[156041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:19 compute-0 sudo[156039]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:17:19
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.control']
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:17:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:17:20 compute-0 sudo[156117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-impcfekxzadbsfervnsepzhfjderjnlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843839.436358-772-258422483994095/AnsiballZ_file.py'
Jan 31 07:17:20 compute-0 sudo[156117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:17:20 compute-0 python3.9[156119]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:20 compute-0 sudo[156117]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:20 compute-0 sudo[156144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:20 compute-0 sudo[156144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:20 compute-0 sudo[156144]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:20 compute-0 sudo[156169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:20 compute-0 sudo[156169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:20 compute-0 sudo[156169]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:20 compute-0 sudo[156319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swivjntiqdxdxpflbufzlfqugmsumywh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843840.5138342-808-3125943623456/AnsiballZ_systemd.py'
Jan 31 07:17:20 compute-0 sudo[156319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:20 compute-0 ceph-mon[74496]: pgmap v483: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:21 compute-0 python3.9[156321]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:17:21 compute-0 systemd[1]: Reloading.
Jan 31 07:17:21 compute-0 systemd-rc-local-generator[156346]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:17:21 compute-0 systemd-sysv-generator[156352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:17:21 compute-0 sudo[156319]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:21.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:21.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:21 compute-0 sudo[156509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofmkvyyghwqqhqfjefumifuzmstlrcph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843841.6158576-832-10056572059623/AnsiballZ_stat.py'
Jan 31 07:17:21 compute-0 sudo[156509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:22 compute-0 python3.9[156511]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:22 compute-0 sudo[156509]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:22 compute-0 sudo[156587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmrjlsqzpllcwezztolsfcrxyduyxqyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843841.6158576-832-10056572059623/AnsiballZ_file.py'
Jan 31 07:17:22 compute-0 sudo[156587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:22 compute-0 python3.9[156589]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:22 compute-0 sudo[156587]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:23 compute-0 sudo[156739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjirairttsuehyvbpyfsryrveimzvyrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843842.7851934-868-25154096536812/AnsiballZ_stat.py'
Jan 31 07:17:23 compute-0 sudo[156739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:23 compute-0 ceph-mon[74496]: pgmap v484: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:23 compute-0 python3.9[156741]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:23 compute-0 sudo[156739]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:23.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:23 compute-0 sudo[156818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsekdlerfgvuvfphbnkgcudbeurrfvbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843842.7851934-868-25154096536812/AnsiballZ_file.py'
Jan 31 07:17:23 compute-0 sudo[156818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:23 compute-0 python3.9[156820]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:23 compute-0 sudo[156818]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:24 compute-0 sudo[156970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyvhtebilebdenqcjalczvkqnqxnqlmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843843.8217764-904-233756438602698/AnsiballZ_systemd.py'
Jan 31 07:17:24 compute-0 sudo[156970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:24 compute-0 python3.9[156972]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:17:24 compute-0 systemd[1]: Reloading.
Jan 31 07:17:24 compute-0 systemd-rc-local-generator[156999]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:17:24 compute-0 systemd-sysv-generator[157002]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:17:24 compute-0 ceph-mon[74496]: pgmap v485: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:24.577802) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844577858, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1601, "num_deletes": 252, "total_data_size": 2947291, "memory_usage": 2994256, "flush_reason": "Manual Compaction"}
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 31 07:17:24 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844641633, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2882773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10742, "largest_seqno": 12342, "table_properties": {"data_size": 2875389, "index_size": 4391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15006, "raw_average_key_size": 19, "raw_value_size": 2860522, "raw_average_value_size": 3768, "num_data_blocks": 197, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843670, "oldest_key_time": 1769843670, "file_creation_time": 1769843844, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 63890 microseconds, and 8779 cpu microseconds.
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:17:24 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 07:17:24 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 07:17:24 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:24.641694) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2882773 bytes OK
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:24.641718) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:24.684055) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:24.684147) EVENT_LOG_v1 {"time_micros": 1769843844684138, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:24.684176) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2940572, prev total WAL file size 2940572, number of live WAL files 2.
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:24.685034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2815KB)], [26(7542KB)]
Jan 31 07:17:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843844685127, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10605957, "oldest_snapshot_seqno": -1}
Jan 31 07:17:25 compute-0 sudo[156970]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4010 keys, 8423770 bytes, temperature: kUnknown
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843845439409, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8423770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8394278, "index_size": 18374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97356, "raw_average_key_size": 24, "raw_value_size": 8319211, "raw_average_value_size": 2074, "num_data_blocks": 794, "num_entries": 4010, "num_filter_entries": 4010, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769843844, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:17:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:25.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:25.439909) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8423770 bytes
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:25.468184) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 14.1 rd, 11.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.6) write-amplify(2.9) OK, records in: 4532, records dropped: 522 output_compression: NoCompression
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:25.468238) EVENT_LOG_v1 {"time_micros": 1769843845468219, "job": 10, "event": "compaction_finished", "compaction_time_micros": 754372, "compaction_time_cpu_micros": 25779, "output_level": 6, "num_output_files": 1, "total_output_size": 8423770, "num_input_records": 4532, "num_output_records": 4010, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843845469844, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843845471145, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:24.684896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:25.471234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:25.471239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:25.471241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:25.471244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:25 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:17:25.471246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:17:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:25 compute-0 sudo[157164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmimvcwkhcowvjiccdtspqjcbiiuzqtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843845.4070632-934-69058049491685/AnsiballZ_file.py'
Jan 31 07:17:25 compute-0 sudo[157164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:26 compute-0 python3.9[157166]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:26 compute-0 sudo[157164]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:26 compute-0 sudo[157316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipaeerpzaosnalufrtxslpmvtbckpwjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843846.3310666-958-97121056331494/AnsiballZ_stat.py'
Jan 31 07:17:26 compute-0 sudo[157316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:26 compute-0 python3.9[157318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:26 compute-0 ceph-mon[74496]: pgmap v486: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:26 compute-0 sudo[157316]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:27 compute-0 sudo[157439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwrxdppzsiwhzfdbhmgijwxhlfsmyvbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843846.3310666-958-97121056331494/AnsiballZ_copy.py'
Jan 31 07:17:27 compute-0 sudo[157439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:27 compute-0 python3.9[157441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843846.3310666-958-97121056331494/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:27 compute-0 sudo[157439]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:27.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:27.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:27 compute-0 sudo[157592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nselooejtaxqlukufjjhuqoquzmpicdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843847.760327-1009-248340484172169/AnsiballZ_file.py'
Jan 31 07:17:27 compute-0 sudo[157592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:28 compute-0 python3.9[157594]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:28 compute-0 sudo[157592]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:28 compute-0 sudo[157744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbxcydyafiynfrnptfqgtvedsvfoaerh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843848.4278488-1033-244901458016095/AnsiballZ_file.py'
Jan 31 07:17:28 compute-0 sudo[157744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:28 compute-0 python3.9[157746]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:17:28 compute-0 sudo[157744]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:28 compute-0 ceph-mon[74496]: pgmap v487: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:29 compute-0 sudo[157897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yceocsusyoxrjmakwgvoxqvjzybdrtud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843849.1129565-1057-180795230916787/AnsiballZ_stat.py'
Jan 31 07:17:29 compute-0 sudo[157897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:29.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:29 compute-0 python3.9[157899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:17:29 compute-0 sudo[157897]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:29.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:29 compute-0 sudo[158020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eewgoimgcxmsexcdmaenrdcmwgyojlab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843849.1129565-1057-180795230916787/AnsiballZ_copy.py'
Jan 31 07:17:29 compute-0 sudo[158020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:30 compute-0 ceph-mon[74496]: pgmap v488: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:30 compute-0 python3.9[158022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843849.1129565-1057-180795230916787/.source.json _original_basename=.ts1_eawa follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:30 compute-0 sudo[158020]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:30 compute-0 python3.9[158172]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:17:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:31.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:31.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:32 compute-0 sudo[158605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbrkxpgnpzrvbwowoasyucvmicqkvudj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843852.1673934-1177-16877569232615/AnsiballZ_container_config_data.py'
Jan 31 07:17:32 compute-0 sudo[158605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:32 compute-0 podman[158568]: 2026-01-31 07:17:32.618718673 +0000 UTC m=+0.076468283 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 07:17:32 compute-0 python3.9[158613]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 07:17:32 compute-0 sudo[158605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:32 compute-0 ceph-mon[74496]: pgmap v489: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 07:17:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:33 compute-0 sudo[158774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgcadyknwhcibfjtjhtpavrvxhzgqxgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843853.193588-1210-15466702868773/AnsiballZ_container_config_hash.py'
Jan 31 07:17:33 compute-0 sudo[158774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:34 compute-0 python3.9[158776]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:17:34 compute-0 sudo[158774]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:17:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:17:34 compute-0 sudo[158926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rabjexpsgrmvskmepnokxppuifwwqspp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769843854.385292-1240-261653224136717/AnsiballZ_edpm_container_manage.py'
Jan 31 07:17:34 compute-0 sudo[158926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:17:35 compute-0 python3[158928]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:17:35 compute-0 ceph-mon[74496]: pgmap v490: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:35.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:35.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:36 compute-0 ceph-mon[74496]: pgmap v491: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:37.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:37.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:38 compute-0 ceph-mon[74496]: pgmap v492: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:39.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:39.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:40 compute-0 sudo[159010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:40 compute-0 sudo[159010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:40 compute-0 sudo[159010]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:40 compute-0 sudo[159035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:17:40 compute-0 sudo[159035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:17:40 compute-0 sudo[159035]: pam_unix(sudo:session): session closed for user root
Jan 31 07:17:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:41.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:41.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 07:17:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:43.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:43.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:44 compute-0 ceph-mon[74496]: pgmap v493: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:45.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:45.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:47.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:47.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:49.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:49.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:17:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:17:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:17:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:17:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:17:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:17:50 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 07:17:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:51.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:17:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:55.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:17:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:55.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:17:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 31 07:17:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:57.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 31 07:17:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:17:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:17:59.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:17:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:17:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:17:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:17:59.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:00 compute-0 ceph-mon[74496]: pgmap v494: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:00 compute-0 ceph-mon[74496]: pgmap v495: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:00 compute-0 podman[158941]: 2026-01-31 07:18:00.157716139 +0000 UTC m=+25.030335776 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:18:00 compute-0 podman[159125]: 2026-01-31 07:18:00.356548163 +0000 UTC m=+0.107848283 container create 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 07:18:00 compute-0 podman[159125]: 2026-01-31 07:18:00.276347868 +0000 UTC m=+0.027648068 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:18:00 compute-0 python3[158928]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:18:00 compute-0 sudo[158926]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:00 compute-0 sudo[159186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:00 compute-0 sudo[159186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:00 compute-0 sudo[159186]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:00 compute-0 sudo[159234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:00 compute-0 sudo[159234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:00 compute-0 sudo[159234]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:00 compute-0 sudo[159361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojyzbnsyjadqodiopiagmtryzffxzbdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843880.6856897-1264-128198489808381/AnsiballZ_stat.py'
Jan 31 07:18:00 compute-0 sudo[159361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:01 compute-0 python3.9[159363]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:18:01 compute-0 sudo[159361]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:01 compute-0 ceph-mon[74496]: pgmap v496: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 ceph-mon[74496]: pgmap v497: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 ceph-mon[74496]: pgmap v498: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 ceph-mon[74496]: pgmap v499: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 ceph-mon[74496]: pgmap v500: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 ceph-mon[74496]: pgmap v501: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 ceph-mon[74496]: pgmap v502: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 ceph-mon[74496]: pgmap v503: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:01.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:01.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:01 compute-0 sudo[159516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gleooibmckkywbcuyptbesprzunjkudq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843881.529112-1291-86408782656425/AnsiballZ_file.py'
Jan 31 07:18:01 compute-0 sudo[159516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:02 compute-0 python3.9[159518]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:02 compute-0 sudo[159516]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:02 compute-0 sudo[159592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glsztqifqqfylhjkqhaupxcundfqcisa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843881.529112-1291-86408782656425/AnsiballZ_stat.py'
Jan 31 07:18:02 compute-0 sudo[159592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:02 compute-0 python3.9[159594]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:18:02 compute-0 sudo[159592]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:02 compute-0 ceph-mon[74496]: pgmap v504: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:02 compute-0 sudo[159762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxvrhuazhvdqydiuaepekbmzhjlniblx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843882.5227096-1291-94614707627400/AnsiballZ_copy.py'
Jan 31 07:18:02 compute-0 sudo[159762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:02 compute-0 podman[159707]: 2026-01-31 07:18:02.929500673 +0000 UTC m=+0.096695446 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:18:03 compute-0 python3.9[159768]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843882.5227096-1291-94614707627400/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:03 compute-0 sudo[159762]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:03 compute-0 sudo[159846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkeahkeqeivgqpxwszlgzwmrzhoihvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843882.5227096-1291-94614707627400/AnsiballZ_systemd.py'
Jan 31 07:18:03 compute-0 sudo[159846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:03 compute-0 python3.9[159848]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:18:03 compute-0 systemd[1]: Reloading.
Jan 31 07:18:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:03.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:18:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:18:03 compute-0 systemd-rc-local-generator[159876]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:03 compute-0 systemd-sysv-generator[159880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:03 compute-0 sudo[159846]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:04 compute-0 sudo[159958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jujlqmvpwodvonwjjihkqyhwxekumslx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843882.5227096-1291-94614707627400/AnsiballZ_systemd.py'
Jan 31 07:18:04 compute-0 sudo[159958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:05 compute-0 python3.9[159960]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:05 compute-0 systemd[1]: Reloading.
Jan 31 07:18:05 compute-0 systemd-rc-local-generator[159989]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:05 compute-0 systemd-sysv-generator[159994]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:05.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:18:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:06 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 07:18:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:07.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:07.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:08 compute-0 ceph-mon[74496]: pgmap v505: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6075ef3a0a3c02c5632d39f6e4caa32c65341d00ba4c61213303c3bd86e03446/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6075ef3a0a3c02c5632d39f6e4caa32c65341d00ba4c61213303c3bd86e03446/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1.
Jan 31 07:18:09 compute-0 podman[160003]: 2026-01-31 07:18:09.112367968 +0000 UTC m=+2.749446709 container init 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + sudo -E kolla_set_configs
Jan 31 07:18:09 compute-0 podman[160003]: 2026-01-31 07:18:09.134363646 +0000 UTC m=+2.771442357 container start 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Validating config file
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Copying service configuration files
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Writing out command to execute
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: ++ cat /run_command
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + CMD=neutron-ovn-metadata-agent
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + ARGS=
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + sudo kolla_copy_cacerts
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + [[ ! -n '' ]]
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + . kolla_extend_start
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + umask 0022
Jan 31 07:18:09 compute-0 ovn_metadata_agent[160021]: + exec neutron-ovn-metadata-agent
Jan 31 07:18:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:09.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:09.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:09 compute-0 edpm-start-podman-container[160003]: ovn_metadata_agent
Jan 31 07:18:09 compute-0 podman[160030]: 2026-01-31 07:18:09.829775189 +0000 UTC m=+0.689811615 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:18:09 compute-0 edpm-start-podman-container[160002]: Creating additional drop-in dependency for "ovn_metadata_agent" (4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1)
Jan 31 07:18:09 compute-0 systemd[1]: Reloading.
Jan 31 07:18:09 compute-0 ceph-mon[74496]: pgmap v506: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:09 compute-0 ceph-mon[74496]: pgmap v507: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:09 compute-0 systemd-rc-local-generator[160102]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:09 compute-0 systemd-sysv-generator[160105]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:10 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 31 07:18:10 compute-0 sudo[159958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.089 160028 INFO neutron.common.config [-] Logging enabled!
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.089 160028 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.089 160028 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.090 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.090 160028 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.090 160028 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.090 160028 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.090 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.090 160028 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.090 160028 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.090 160028 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.091 160028 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.091 160028 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.091 160028 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.091 160028 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.091 160028 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.091 160028 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.091 160028 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.091 160028 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.092 160028 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.092 160028 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.092 160028 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.092 160028 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.092 160028 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.092 160028 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.092 160028 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.092 160028 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.093 160028 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.093 160028 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.093 160028 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.093 160028 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.093 160028 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.093 160028 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.093 160028 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.093 160028 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.094 160028 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.094 160028 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.094 160028 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.094 160028 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.094 160028 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.094 160028 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.094 160028 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.094 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.095 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.095 160028 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.095 160028 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.095 160028 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.095 160028 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.095 160028 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.095 160028 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.095 160028 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 python3.9[160265]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.096 160028 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.097 160028 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.097 160028 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.097 160028 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.097 160028 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.097 160028 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.097 160028 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.097 160028 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.098 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.099 160028 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.100 160028 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.101 160028 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.102 160028 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.102 160028 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.102 160028 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.102 160028 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.102 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.102 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.102 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.102 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.103 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.103 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.103 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.103 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.103 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.103 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.104 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.104 160028 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.104 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.104 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.104 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.104 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.104 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.104 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.105 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.106 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.106 160028 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.106 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.106 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.106 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.106 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.107 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.107 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.107 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.107 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.107 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.107 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.107 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.108 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.109 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.110 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.111 160028 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.112 160028 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.112 160028 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.112 160028 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.112 160028 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.112 160028 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.112 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.112 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.112 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.113 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.114 160028 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.115 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.116 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.117 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.118 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.119 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.120 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.121 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.122 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.123 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.124 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.124 160028 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.124 160028 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.132 160028 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.132 160028 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.132 160028 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.132 160028 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.133 160028 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.145 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 5c307474-e9ec-4d19-9f52-463eb0ff26d1 (UUID: 5c307474-e9ec-4d19-9f52-463eb0ff26d1) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.165 160028 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.165 160028 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.165 160028 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.165 160028 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.168 160028 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.175 160028 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.180 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '5c307474-e9ec-4d19-9f52-463eb0ff26d1'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], external_ids={}, name=5c307474-e9ec-4d19-9f52-463eb0ff26d1, nb_cfg_timestamp=1769843800594, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.181 160028 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb99a235f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.182 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.182 160028 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.183 160028 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.183 160028 INFO oslo_service.service [-] Starting 1 workers
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.186 160028 DEBUG oslo_service.service [-] Started child 160292 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.188 160028 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmprrz9sb_2/privsep.sock']
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.188 160292 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-426207'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.206 160292 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.207 160292 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.207 160292 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.210 160292 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.216 160292 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.223 160292 INFO eventlet.wsgi.server [-] (160292) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 31 07:18:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:11 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 07:18:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:11.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:18:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:11.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.810 160028 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.811 160028 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprrz9sb_2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.695 160297 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.701 160297 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.704 160297 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.704 160297 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160297
Jan 31 07:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:11.814 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9f40d7-37fe-4bf9-ad4b-af55096e516f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:18:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:12 compute-0 sudo[160427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfzoomlvihidsdqhtszqierzafcpluse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843891.7778294-1426-233873790968600/AnsiballZ_stat.py'
Jan 31 07:18:12 compute-0 sudo[160427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.280 160297 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.280 160297 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.281 160297 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.754 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc3ff1b-6195-481d-8a9e-bf2ce9c68db8]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.756 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, column=external_ids, values=({'neutron:ovn-metadata-id': '89eae091-691b-537c-ad20-92edad53c135'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.767 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.778 160028 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.778 160028 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.778 160028 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.778 160028 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.778 160028 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.779 160028 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.779 160028 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.779 160028 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.779 160028 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.779 160028 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.779 160028 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.780 160028 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.780 160028 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.780 160028 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.780 160028 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.780 160028 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.780 160028 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.781 160028 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.781 160028 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.781 160028 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.781 160028 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.781 160028 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.781 160028 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.781 160028 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.782 160028 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.782 160028 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.782 160028 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.782 160028 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.782 160028 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.782 160028 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.783 160028 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.783 160028 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.783 160028 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.783 160028 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.783 160028 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.783 160028 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.783 160028 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.784 160028 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.784 160028 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.784 160028 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.784 160028 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.784 160028 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.784 160028 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.785 160028 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.785 160028 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.785 160028 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.785 160028 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.785 160028 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.785 160028 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.785 160028 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.786 160028 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.786 160028 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.786 160028 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.786 160028 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.786 160028 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.786 160028 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.786 160028 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.786 160028 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.787 160028 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.787 160028 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.787 160028 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.787 160028 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.787 160028 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.787 160028 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.788 160028 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.788 160028 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.788 160028 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.788 160028 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.788 160028 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.788 160028 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.788 160028 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.789 160028 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.789 160028 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.789 160028 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.789 160028 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.789 160028 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.789 160028 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.789 160028 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.790 160028 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.790 160028 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.790 160028 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.790 160028 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.790 160028 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.790 160028 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.790 160028 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.791 160028 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.791 160028 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.791 160028 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.791 160028 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.791 160028 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.791 160028 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.791 160028 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.791 160028 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.792 160028 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.792 160028 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.792 160028 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.792 160028 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.792 160028 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.792 160028 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.792 160028 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.793 160028 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.793 160028 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.793 160028 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.793 160028 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.793 160028 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.793 160028 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.793 160028 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.794 160028 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.794 160028 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.794 160028 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.794 160028 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.794 160028 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.794 160028 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.795 160028 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.795 160028 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.795 160028 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.795 160028 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.795 160028 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.795 160028 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.795 160028 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.796 160028 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.796 160028 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.796 160028 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.796 160028 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.796 160028 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.796 160028 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.797 160028 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.797 160028 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.797 160028 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.797 160028 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.797 160028 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.797 160028 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.797 160028 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.798 160028 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.798 160028 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.798 160028 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.798 160028 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.798 160028 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.798 160028 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.798 160028 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.799 160028 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.799 160028 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.799 160028 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.799 160028 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.799 160028 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.799 160028 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.799 160028 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.800 160028 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.800 160028 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.800 160028 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.800 160028 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.800 160028 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.800 160028 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.800 160028 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.801 160028 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.801 160028 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.801 160028 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.801 160028 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.801 160028 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.801 160028 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.801 160028 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.801 160028 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.802 160028 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.802 160028 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.802 160028 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.802 160028 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.802 160028 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.802 160028 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.802 160028 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.803 160028 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.803 160028 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.803 160028 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.803 160028 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.803 160028 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.803 160028 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.803 160028 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.804 160028 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.804 160028 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.804 160028 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.804 160028 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.804 160028 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.804 160028 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.804 160028 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.805 160028 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.805 160028 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.805 160028 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.805 160028 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.805 160028 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.805 160028 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.805 160028 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.806 160028 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.806 160028 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.806 160028 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.806 160028 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.806 160028 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.806 160028 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.806 160028 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.807 160028 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.807 160028 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.807 160028 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.807 160028 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.807 160028 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.807 160028 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.807 160028 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.808 160028 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.808 160028 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.808 160028 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.808 160028 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.808 160028 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.808 160028 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.808 160028 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.809 160028 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.809 160028 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.809 160028 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.809 160028 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.809 160028 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.809 160028 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.809 160028 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.809 160028 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.810 160028 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.810 160028 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.810 160028 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.810 160028 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.810 160028 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.810 160028 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.810 160028 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.811 160028 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.811 160028 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.811 160028 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.811 160028 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.811 160028 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.811 160028 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.811 160028 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.812 160028 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.812 160028 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.812 160028 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.812 160028 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.812 160028 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.812 160028 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.812 160028 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.813 160028 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.813 160028 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.813 160028 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.813 160028 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.813 160028 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.813 160028 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.813 160028 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.814 160028 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.814 160028 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.814 160028 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.814 160028 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.814 160028 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.814 160028 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.814 160028 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.815 160028 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.815 160028 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.815 160028 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.815 160028 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.815 160028 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.815 160028 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.815 160028 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.816 160028 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.816 160028 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.816 160028 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.816 160028 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.816 160028 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.816 160028 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.816 160028 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.817 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.817 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.817 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.817 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.817 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.817 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.817 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.818 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.818 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.818 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.818 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.818 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.818 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.818 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.819 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.819 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.819 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.819 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.819 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.819 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.819 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.820 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.820 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.820 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.820 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.820 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.820 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.820 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.821 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.821 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.821 160028 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.821 160028 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.821 160028 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.821 160028 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.822 160028 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:18:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:18:12.822 160028 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:18:13 compute-0 ceph-mon[74496]: pgmap v508: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:13.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:13.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:14 compute-0 sudo[160432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:14 compute-0 sudo[160432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:14 compute-0 sudo[160432]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:14 compute-0 sudo[160457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:18:14 compute-0 sudo[160457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:14 compute-0 sudo[160457]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:15 compute-0 sudo[160482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:15 compute-0 sudo[160482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:15 compute-0 sudo[160482]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:15 compute-0 sudo[160507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:18:15 compute-0 sudo[160507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:15 compute-0 sudo[160507]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:15 compute-0 ceph-mon[74496]: pgmap v509: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:15 compute-0 ceph-mon[74496]: pgmap v510: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 07:18:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:18:15 compute-0 python3.9[160429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:18:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:18:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:18:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:18:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:18:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:18:15 compute-0 sudo[160427]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:15.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:15.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:15 compute-0 sudo[160687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jehjbtkvruvwsfngwdfoqtvmlsjtfmtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843891.7778294-1426-233873790968600/AnsiballZ_copy.py'
Jan 31 07:18:15 compute-0 sudo[160687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:16 compute-0 python3.9[160689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843891.7778294-1426-233873790968600/.source.yaml _original_basename=.qwaptby6 follow=False checksum=d444960821d3e2834cd73828669d050a1a289a05 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:16 compute-0 sudo[160687]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:16 compute-0 sshd-session[150541]: Connection closed by 192.168.122.30 port 46310
Jan 31 07:18:16 compute-0 sshd-session[150538]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:18:16 compute-0 systemd-logind[816]: Session 48 logged out. Waiting for processes to exit.
Jan 31 07:18:16 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 07:18:16 compute-0 systemd[1]: session-48.scope: Consumed 48.957s CPU time.
Jan 31 07:18:16 compute-0 systemd-logind[816]: Removed session 48.
Jan 31 07:18:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 31 07:18:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:17.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:17.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:17 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 07:18:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:18:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:18:18 compute-0 ceph-mon[74496]: pgmap v511: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:18:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a024c891-277d-435d-a886-19b61c5b6e94 does not exist
Jan 31 07:18:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev de2e544c-7dba-471d-a4e5-4321b6ad417a does not exist
Jan 31 07:18:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 11a9f560-7a80-454e-bc53-31c93f1b33e5 does not exist
Jan 31 07:18:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:18:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:18:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:18:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:18:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:18:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:18:18 compute-0 sudo[160715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:18 compute-0 sudo[160715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:18 compute-0 sudo[160715]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:18 compute-0 sudo[160740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:18:18 compute-0 sudo[160740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:18 compute-0 sudo[160740]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:18 compute-0 sudo[160765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:18 compute-0 sudo[160765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:18 compute-0 sudo[160765]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:18 compute-0 sudo[160790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:18:18 compute-0 sudo[160790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:19 compute-0 podman[160857]: 2026-01-31 07:18:19.32639877 +0000 UTC m=+0.077141042 container create 9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dijkstra, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:18:19 compute-0 podman[160857]: 2026-01-31 07:18:19.275873632 +0000 UTC m=+0.026615884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:18:19 compute-0 systemd[1]: Started libpod-conmon-9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb.scope.
Jan 31 07:18:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:18:19 compute-0 podman[160857]: 2026-01-31 07:18:19.465995438 +0000 UTC m=+0.216737710 container init 9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:18:19 compute-0 podman[160857]: 2026-01-31 07:18:19.473174697 +0000 UTC m=+0.223916929 container start 9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:18:19 compute-0 reverent_dijkstra[160873]: 167 167
Jan 31 07:18:19 compute-0 systemd[1]: libpod-9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb.scope: Deactivated successfully.
Jan 31 07:18:19 compute-0 podman[160857]: 2026-01-31 07:18:19.508386404 +0000 UTC m=+0.259128666 container attach 9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dijkstra, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:18:19 compute-0 podman[160857]: 2026-01-31 07:18:19.509309757 +0000 UTC m=+0.260051999 container died 9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 07:18:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0752bd520b8bc7caf2c5f913dbefb9da8b508786980c335e5f8867351502f263-merged.mount: Deactivated successfully.
Jan 31 07:18:19 compute-0 podman[160857]: 2026-01-31 07:18:19.699219038 +0000 UTC m=+0.449961270 container remove 9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:18:19 compute-0 systemd[1]: libpod-conmon-9894b3105137ad67bd77bc53404b1c90807f49e87f4fbd5262f6b3a535b5dfcb.scope: Deactivated successfully.
Jan 31 07:18:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:19.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:19.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:19 compute-0 podman[160897]: 2026-01-31 07:18:19.840797604 +0000 UTC m=+0.055160365 container create b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mcnulty, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:18:19 compute-0 systemd[1]: Started libpod-conmon-b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c.scope.
Jan 31 07:18:19 compute-0 podman[160897]: 2026-01-31 07:18:19.809014973 +0000 UTC m=+0.023377754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:18:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f57169435aa52507c1dd4c37f7ac200e6f419262dd3448cd6cf593b6feda38a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f57169435aa52507c1dd4c37f7ac200e6f419262dd3448cd6cf593b6feda38a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f57169435aa52507c1dd4c37f7ac200e6f419262dd3448cd6cf593b6feda38a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f57169435aa52507c1dd4c37f7ac200e6f419262dd3448cd6cf593b6feda38a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f57169435aa52507c1dd4c37f7ac200e6f419262dd3448cd6cf593b6feda38a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:18:19
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.control', 'vms', 'images']
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:18:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:18:20 compute-0 podman[160897]: 2026-01-31 07:18:20.004062691 +0000 UTC m=+0.218425482 container init b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:18:20 compute-0 podman[160897]: 2026-01-31 07:18:20.013721762 +0000 UTC m=+0.228084523 container start b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mcnulty, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:18:20 compute-0 podman[160897]: 2026-01-31 07:18:20.026805798 +0000 UTC m=+0.241168579 container attach b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:18:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 07:18:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 31 07:18:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:18:20 compute-0 sudo[160924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:20 compute-0 sudo[160924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:20 compute-0 sudo[160924]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:20 compute-0 agitated_mcnulty[160913]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:18:20 compute-0 agitated_mcnulty[160913]: --> relative data size: 1.0
Jan 31 07:18:20 compute-0 agitated_mcnulty[160913]: --> All data devices are unavailable
Jan 31 07:18:20 compute-0 sudo[160953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:20 compute-0 sudo[160953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:20 compute-0 systemd[1]: libpod-b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c.scope: Deactivated successfully.
Jan 31 07:18:20 compute-0 sudo[160953]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:20 compute-0 podman[160897]: 2026-01-31 07:18:20.841297416 +0000 UTC m=+1.055660197 container died b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mcnulty, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:18:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f57169435aa52507c1dd4c37f7ac200e6f419262dd3448cd6cf593b6feda38a-merged.mount: Deactivated successfully.
Jan 31 07:18:21 compute-0 podman[160897]: 2026-01-31 07:18:21.045555175 +0000 UTC m=+1.259917936 container remove b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mcnulty, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:18:21 compute-0 systemd[1]: libpod-conmon-b48c8e068eb6d2933a9948485ee17bb682db2683cea7552aab0236cd36d4ba4c.scope: Deactivated successfully.
Jan 31 07:18:21 compute-0 sudo[160790]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:21 compute-0 sudo[160992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:21 compute-0 sudo[160992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:21 compute-0 sudo[160992]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:21 compute-0 sudo[161017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:18:21 compute-0 sudo[161017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:21 compute-0 sudo[161017]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:21 compute-0 sudo[161043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:21 compute-0 sudo[161043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:21 compute-0 sudo[161043]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:21 compute-0 sudo[161068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:18:21 compute-0 sudo[161068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 07:18:21 compute-0 podman[161133]: 2026-01-31 07:18:21.538452193 +0000 UTC m=+0.022672826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:18:21 compute-0 podman[161133]: 2026-01-31 07:18:21.653206581 +0000 UTC m=+0.137427154 container create a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:18:21 compute-0 ceph-mon[74496]: pgmap v512: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:18:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:18:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:18:21 compute-0 systemd[1]: Started libpod-conmon-a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2.scope.
Jan 31 07:18:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:18:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:21.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:21.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:21 compute-0 podman[161133]: 2026-01-31 07:18:21.758578036 +0000 UTC m=+0.242798599 container init a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:18:21 compute-0 podman[161133]: 2026-01-31 07:18:21.766968705 +0000 UTC m=+0.251189278 container start a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:18:21 compute-0 thirsty_galois[161149]: 167 167
Jan 31 07:18:21 compute-0 systemd[1]: libpod-a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2.scope: Deactivated successfully.
Jan 31 07:18:21 compute-0 podman[161133]: 2026-01-31 07:18:21.781830105 +0000 UTC m=+0.266050658 container attach a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galois, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:18:21 compute-0 podman[161133]: 2026-01-31 07:18:21.78278492 +0000 UTC m=+0.267005473 container died a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galois, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-023ccd9c3813b7c9a5c34799ce98245e9a29bc5fba4fd06591269cafc014b650-merged.mount: Deactivated successfully.
Jan 31 07:18:21 compute-0 podman[161133]: 2026-01-31 07:18:21.870426472 +0000 UTC m=+0.354647015 container remove a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galois, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:18:21 compute-0 systemd[1]: libpod-conmon-a126e0e611bb13ba51a9b449b2dcca75ce5a00fa685796ef40bd4b42244e10d2.scope: Deactivated successfully.
Jan 31 07:18:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:22 compute-0 podman[161174]: 2026-01-31 07:18:22.027650019 +0000 UTC m=+0.034918691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:18:22 compute-0 podman[161174]: 2026-01-31 07:18:22.33599502 +0000 UTC m=+0.343263602 container create e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_herschel, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:18:22 compute-0 sshd-session[161188]: Accepted publickey for zuul from 192.168.122.30 port 53284 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:18:22 compute-0 systemd-logind[816]: New session 49 of user zuul.
Jan 31 07:18:22 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 31 07:18:22 compute-0 sshd-session[161188]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:18:22 compute-0 systemd[1]: Started libpod-conmon-e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687.scope.
Jan 31 07:18:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236bad375d71c8fabc29f69a8dfaf99735bb57cc162b59f25df74eb5e928a4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236bad375d71c8fabc29f69a8dfaf99735bb57cc162b59f25df74eb5e928a4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236bad375d71c8fabc29f69a8dfaf99735bb57cc162b59f25df74eb5e928a4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d236bad375d71c8fabc29f69a8dfaf99735bb57cc162b59f25df74eb5e928a4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:22 compute-0 podman[161174]: 2026-01-31 07:18:22.619427529 +0000 UTC m=+0.626696131 container init e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_herschel, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:18:22 compute-0 podman[161174]: 2026-01-31 07:18:22.624717001 +0000 UTC m=+0.631985573 container start e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_herschel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:18:22 compute-0 podman[161174]: 2026-01-31 07:18:22.651045118 +0000 UTC m=+0.658313720 container attach e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:18:22 compute-0 ceph-mon[74496]: pgmap v513: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 07:18:22 compute-0 ceph-mon[74496]: pgmap v514: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]: {
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:     "0": [
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:         {
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "devices": [
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "/dev/loop3"
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             ],
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "lv_name": "ceph_lv0",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "lv_size": "7511998464",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "name": "ceph_lv0",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "tags": {
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.cluster_name": "ceph",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.crush_device_class": "",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.encrypted": "0",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.osd_id": "0",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.type": "block",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:                 "ceph.vdo": "0"
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             },
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "type": "block",
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:             "vg_name": "ceph_vg0"
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:         }
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]:     ]
Jan 31 07:18:23 compute-0 dazzling_herschel[161195]: }
Jan 31 07:18:23 compute-0 systemd[1]: libpod-e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687.scope: Deactivated successfully.
Jan 31 07:18:23 compute-0 podman[161174]: 2026-01-31 07:18:23.413735637 +0000 UTC m=+1.421004219 container died e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_herschel, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d236bad375d71c8fabc29f69a8dfaf99735bb57cc162b59f25df74eb5e928a4e-merged.mount: Deactivated successfully.
Jan 31 07:18:23 compute-0 python3.9[161349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:18:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 07:18:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:23.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:23 compute-0 podman[161174]: 2026-01-31 07:18:23.877798166 +0000 UTC m=+1.885066758 container remove e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_herschel, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:18:23 compute-0 systemd[1]: libpod-conmon-e46d49f8bec633de36b2c28ddce77bdab04001e27a84959232f13a001d962687.scope: Deactivated successfully.
Jan 31 07:18:23 compute-0 sudo[161068]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:23 compute-0 sudo[161397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:23 compute-0 sudo[161397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:23 compute-0 sudo[161397]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:24 compute-0 sudo[161427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:18:24 compute-0 sudo[161427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:24 compute-0 sudo[161427]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:24 compute-0 sudo[161482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:24 compute-0 sudo[161482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:24 compute-0 sudo[161482]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:24 compute-0 sudo[161524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:18:24 compute-0 sudo[161524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:24 compute-0 sudo[161677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tantgufbvbupcvpaydkxbxjxevxsbybm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843903.9969966-62-67821904562260/AnsiballZ_command.py'
Jan 31 07:18:24 compute-0 sudo[161677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:24 compute-0 podman[161637]: 2026-01-31 07:18:24.411578702 +0000 UTC m=+0.021623489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:18:24 compute-0 python3.9[161679]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:25 compute-0 podman[161637]: 2026-01-31 07:18:25.090462693 +0000 UTC m=+0.700507490 container create 63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:18:25 compute-0 systemd[1]: Started libpod-conmon-63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb.scope.
Jan 31 07:18:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:18:25 compute-0 sudo[161677]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:25 compute-0 podman[161637]: 2026-01-31 07:18:25.373165506 +0000 UTC m=+0.983210303 container init 63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:18:25 compute-0 podman[161637]: 2026-01-31 07:18:25.383001651 +0000 UTC m=+0.993046418 container start 63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:18:25 compute-0 gifted_haibt[161695]: 167 167
Jan 31 07:18:25 compute-0 systemd[1]: libpod-63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb.scope: Deactivated successfully.
Jan 31 07:18:25 compute-0 ceph-mon[74496]: pgmap v515: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Jan 31 07:18:25 compute-0 podman[161637]: 2026-01-31 07:18:25.532799482 +0000 UTC m=+1.142844309 container attach 63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:18:25 compute-0 podman[161637]: 2026-01-31 07:18:25.534035063 +0000 UTC m=+1.144079850 container died 63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:18:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Jan 31 07:18:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:25.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:25.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-117daf9283286640639b9671171d04cc7dc430160db20c1ec566b662eef4181e-merged.mount: Deactivated successfully.
Jan 31 07:18:26 compute-0 sudo[161863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjfeuxngqmouvezssuksxwzkzdicvvxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843905.6621628-95-182119362577199/AnsiballZ_systemd_service.py'
Jan 31 07:18:26 compute-0 sudo[161863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:26 compute-0 podman[161637]: 2026-01-31 07:18:26.494340154 +0000 UTC m=+2.104384921 container remove 63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_haibt, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:18:26 compute-0 systemd[1]: libpod-conmon-63970256f5d402554da4dfa9372e9fe7906b1bb2fb35c527f045915c28aac4cb.scope: Deactivated successfully.
Jan 31 07:18:26 compute-0 ceph-mon[74496]: pgmap v516: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Jan 31 07:18:26 compute-0 python3.9[161865]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:18:26 compute-0 systemd[1]: Reloading.
Jan 31 07:18:26 compute-0 systemd-rc-local-generator[161914]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:26 compute-0 systemd-sysv-generator[161917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:26 compute-0 podman[161873]: 2026-01-31 07:18:26.699846504 +0000 UTC m=+0.110640928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:18:26 compute-0 podman[161873]: 2026-01-31 07:18:26.814276724 +0000 UTC m=+0.225071158 container create a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:18:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:27 compute-0 systemd[1]: Started libpod-conmon-a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477.scope.
Jan 31 07:18:27 compute-0 sudo[161863]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c7f8a0128b9d575fdcad89ddd6b204a73648a880b20d61b27070c9fd4a68e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c7f8a0128b9d575fdcad89ddd6b204a73648a880b20d61b27070c9fd4a68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c7f8a0128b9d575fdcad89ddd6b204a73648a880b20d61b27070c9fd4a68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77c7f8a0128b9d575fdcad89ddd6b204a73648a880b20d61b27070c9fd4a68e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:18:27 compute-0 podman[161873]: 2026-01-31 07:18:27.073358817 +0000 UTC m=+0.484153311 container init a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:18:27 compute-0 podman[161873]: 2026-01-31 07:18:27.08228332 +0000 UTC m=+0.493077734 container start a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:18:27 compute-0 podman[161873]: 2026-01-31 07:18:27.095742035 +0000 UTC m=+0.506536469 container attach a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 07:18:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Jan 31 07:18:27 compute-0 python3.9[162081]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:18:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:27.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:27 compute-0 network[162099]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:18:27 compute-0 network[162100]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:18:27 compute-0 network[162101]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]: {
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]:         "osd_id": 0,
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]:         "type": "bluestore"
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]:     }
Jan 31 07:18:27 compute-0 hardcore_bardeen[161926]: }
Jan 31 07:18:27 compute-0 podman[161873]: 2026-01-31 07:18:27.94942449 +0000 UTC m=+1.360218914 container died a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:18:28 compute-0 systemd[1]: libpod-a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477.scope: Deactivated successfully.
Jan 31 07:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a77c7f8a0128b9d575fdcad89ddd6b204a73648a880b20d61b27070c9fd4a68e-merged.mount: Deactivated successfully.
Jan 31 07:18:28 compute-0 podman[161873]: 2026-01-31 07:18:28.313638983 +0000 UTC m=+1.724433387 container remove a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bardeen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:18:28 compute-0 systemd[1]: libpod-conmon-a3b10bb3e03268db79e203f64468111283d986b913d51146bcb22e1435afd477.scope: Deactivated successfully.
Jan 31 07:18:28 compute-0 sudo[161524]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:18:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:18:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 757ee94a-ca17-411d-8ba7-bbfa81f87cce does not exist
Jan 31 07:18:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f572bb9d-45bc-42b9-9d73-ce69c80a18ef does not exist
Jan 31 07:18:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c7594792-643d-40da-a69a-c16e46021a7a does not exist
Jan 31 07:18:28 compute-0 sudo[162150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:28 compute-0 sudo[162150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:28 compute-0 sudo[162150]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:28 compute-0 sudo[162178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:18:28 compute-0 sudo[162178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:28 compute-0 sudo[162178]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:28 compute-0 ceph-mon[74496]: pgmap v517: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Jan 31 07:18:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:18:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Jan 31 07:18:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:29.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:30 compute-0 ceph-mon[74496]: pgmap v518: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Jan 31 07:18:30 compute-0 sudo[162441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlkwdphfnsxhjrfmxyslxkqmnebsjhpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843910.3282475-152-14917403782571/AnsiballZ_systemd_service.py'
Jan 31 07:18:30 compute-0 sudo[162441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:31 compute-0 python3.9[162443]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:31 compute-0 sudo[162441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:31 compute-0 sudo[162595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-samjvdgltdbdchaijachxvvufsaufdam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843911.2152076-152-162868907897538/AnsiballZ_systemd_service.py'
Jan 31 07:18:31 compute-0 sudo[162595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 137 op/s
Jan 31 07:18:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:31.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:31.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:31 compute-0 python3.9[162597]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:31 compute-0 sudo[162595]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:32 compute-0 sudo[162748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phaloavmvtglbvwnpfjefxvxnunugmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843911.9471028-152-184477306864132/AnsiballZ_systemd_service.py'
Jan 31 07:18:32 compute-0 sudo[162748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:32 compute-0 python3.9[162750]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:32 compute-0 sudo[162748]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:32 compute-0 ceph-mon[74496]: pgmap v519: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 82 KiB/s rd, 0 B/s wr, 137 op/s
Jan 31 07:18:32 compute-0 sudo[162901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snchaptreqrdxcappumbayhmmgpuyliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843912.6429973-152-258577175800441/AnsiballZ_systemd_service.py'
Jan 31 07:18:32 compute-0 sudo[162901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:33 compute-0 python3.9[162903]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:33 compute-0 podman[162905]: 2026-01-31 07:18:33.280530119 +0000 UTC m=+0.079844810 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 07:18:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 163 op/s
Jan 31 07:18:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:33.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:34 compute-0 ceph-mon[74496]: pgmap v520: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 163 op/s
Jan 31 07:18:34 compute-0 sudo[162901]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:18:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:18:34 compute-0 sudo[163081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkqzmjcafblcjdwshudzpytubbxgrhzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843914.331846-152-13630296784654/AnsiballZ_systemd_service.py'
Jan 31 07:18:34 compute-0 sudo[163081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:34 compute-0 python3.9[163083]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:34 compute-0 sudo[163081]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:35 compute-0 sshd-session[163084]: Invalid user sol from 45.148.10.240 port 57850
Jan 31 07:18:35 compute-0 sshd-session[163084]: Connection closed by invalid user sol 45.148.10.240 port 57850 [preauth]
Jan 31 07:18:35 compute-0 sudo[163237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyxphawbjnylwxzedloymelmnmwbbked ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843915.0725265-152-81389708321813/AnsiballZ_systemd_service.py'
Jan 31 07:18:35 compute-0 sudo[163237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 31 07:18:35 compute-0 python3.9[163239]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:35 compute-0 sudo[163237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:35.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:18:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:35.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:36 compute-0 sudo[163390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlstlshoekjxvlonmkopqnljdtecoglq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843915.8468027-152-187074848962503/AnsiballZ_systemd_service.py'
Jan 31 07:18:36 compute-0 sudo[163390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:36 compute-0 python3.9[163392]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:18:36 compute-0 sudo[163390]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:36 compute-0 ceph-mon[74496]: pgmap v521: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 31 07:18:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:37 compute-0 sudo[163544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxpsxfmmzkqfnzugtmwucwbagqqyoyrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843916.991238-308-241538398392590/AnsiballZ_file.py'
Jan 31 07:18:37 compute-0 sudo[163544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:37 compute-0 python3.9[163546]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:37 compute-0 sudo[163544]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Jan 31 07:18:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:37.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:18:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:37.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:37 compute-0 sudo[163696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oytgwjkrrsavnifbbricbmwjerurecth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843917.7084439-308-169496946438743/AnsiballZ_file.py'
Jan 31 07:18:37 compute-0 sudo[163696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:38 compute-0 python3.9[163698]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:38 compute-0 sudo[163696]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:38 compute-0 sudo[163848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tppffrkznhsxetcxikfhiibqfisgwagd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843918.2521267-308-75139972780790/AnsiballZ_file.py'
Jan 31 07:18:38 compute-0 sudo[163848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:38 compute-0 python3.9[163850]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:38 compute-0 sudo[163848]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:39 compute-0 ceph-mon[74496]: pgmap v522: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Jan 31 07:18:39 compute-0 sudo[164000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hirhpjfklrkbhmhccmkzaxtyolapxwan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843918.8929965-308-119974383970930/AnsiballZ_file.py'
Jan 31 07:18:39 compute-0 sudo[164000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:39 compute-0 python3.9[164002]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:39 compute-0 sudo[164000]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 76 KiB/s rd, 0 B/s wr, 126 op/s
Jan 31 07:18:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:39.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:39 compute-0 sudo[164153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jquvujfsdovwsdzjrymvyvtcbswusmvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843919.5130582-308-206840439542348/AnsiballZ_file.py'
Jan 31 07:18:39 compute-0 sudo[164153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:39 compute-0 python3.9[164155]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:40 compute-0 sudo[164153]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:40 compute-0 ceph-mon[74496]: pgmap v523: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 76 KiB/s rd, 0 B/s wr, 126 op/s
Jan 31 07:18:40 compute-0 sudo[164321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syrurvahunqeavaiitjzguhquevqilzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843920.1257184-308-18997137999382/AnsiballZ_file.py'
Jan 31 07:18:40 compute-0 sudo[164321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:40 compute-0 podman[164279]: 2026-01-31 07:18:40.525218813 +0000 UTC m=+0.050832888 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:18:40 compute-0 python3.9[164326]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:40 compute-0 sudo[164321]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:40 compute-0 sudo[164378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:40 compute-0 sudo[164378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:40 compute-0 sudo[164378]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:40 compute-0 sudo[164429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:18:40 compute-0 sudo[164429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:18:40 compute-0 sudo[164429]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:41 compute-0 sudo[164526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cruyvxznorvqqayxdrcwavwofoblvknf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843920.847831-308-145511364250594/AnsiballZ_file.py'
Jan 31 07:18:41 compute-0 sudo[164526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:41 compute-0 python3.9[164528]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:41 compute-0 sudo[164526]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 64 op/s
Jan 31 07:18:41 compute-0 sudo[164679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjvdktfufucnvbbjaafywhigopofzfeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843921.4372091-458-16390844327470/AnsiballZ_file.py'
Jan 31 07:18:41 compute-0 sudo[164679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:41.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:41 compute-0 python3.9[164681]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:41 compute-0 sudo[164679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:42 compute-0 sudo[164831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djfkhvqdqohooebhkgghulomydkmshpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843922.082555-458-99139778798978/AnsiballZ_file.py'
Jan 31 07:18:42 compute-0 sudo[164831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:42 compute-0 python3.9[164833]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:42 compute-0 sudo[164831]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:42 compute-0 ceph-mon[74496]: pgmap v524: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 64 op/s
Jan 31 07:18:42 compute-0 sudo[164983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svxzajwzniocjxgdcodfeoacyjepocjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843922.6820998-458-28620226125934/AnsiballZ_file.py'
Jan 31 07:18:42 compute-0 sudo[164983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:43 compute-0 python3.9[164985]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:43 compute-0 sudo[164983]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:43 compute-0 sudo[165136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-surgpkpsptewotwfvuhdgazfaqceluhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843923.2671652-458-118311495067894/AnsiballZ_file.py'
Jan 31 07:18:43 compute-0 sudo[165136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Jan 31 07:18:43 compute-0 python3.9[165138]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:43 compute-0 sudo[165136]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:43.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:44 compute-0 sudo[165288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baroyausregwynmbcfkyfplojnvfzxem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843923.8619204-458-266415469891368/AnsiballZ_file.py'
Jan 31 07:18:44 compute-0 sudo[165288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:44 compute-0 python3.9[165290]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:44 compute-0 sudo[165288]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:44 compute-0 sudo[165440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqrllrksunmtfrwsbunwcjsbseejaweh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843924.4773805-458-186326045736061/AnsiballZ_file.py'
Jan 31 07:18:44 compute-0 sudo[165440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:44 compute-0 python3.9[165442]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:45 compute-0 sudo[165440]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:45 compute-0 ceph-mon[74496]: pgmap v525: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Jan 31 07:18:45 compute-0 sudo[165593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylvpojqmmdblkghuyijrrjynzvizfcxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843925.1249762-458-265147502755532/AnsiballZ_file.py'
Jan 31 07:18:45 compute-0 sudo[165593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:45 compute-0 python3.9[165595]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:18:45 compute-0 sudo[165593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 07:18:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:45.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:46 compute-0 sudo[165745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bumbwbdrcbilswijcarcpfdtjfrnbdcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843925.8552449-611-171526492216595/AnsiballZ_command.py'
Jan 31 07:18:46 compute-0 sudo[165745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:46 compute-0 python3.9[165747]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:46 compute-0 sudo[165745]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:46 compute-0 ceph-mon[74496]: pgmap v526: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 07:18:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:47 compute-0 python3.9[165899]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:18:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 31 07:18:47 compute-0 sudo[166050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxdcplregbevutzkqnfkqeiczddkrwmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843927.4891188-665-138430380740569/AnsiballZ_systemd_service.py'
Jan 31 07:18:47 compute-0 sudo[166050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:47.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:18:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:47.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:48 compute-0 python3.9[166052]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:18:48 compute-0 systemd[1]: Reloading.
Jan 31 07:18:48 compute-0 systemd-rc-local-generator[166080]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:18:48 compute-0 systemd-sysv-generator[166084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:18:48 compute-0 sudo[166050]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:48 compute-0 sudo[166237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkckthczmidwalyjpojjtzyuqkucoyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843928.5045476-689-168736807896303/AnsiballZ_command.py'
Jan 31 07:18:48 compute-0 sudo[166237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:48 compute-0 ceph-mon[74496]: pgmap v527: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 31 07:18:48 compute-0 python3.9[166239]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:48 compute-0 sudo[166237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:49 compute-0 sudo[166391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cumtehsjmhtgnvososldnbewgkxhhshn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843929.1298077-689-259638359183767/AnsiballZ_command.py'
Jan 31 07:18:49 compute-0 sudo[166391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:49 compute-0 python3.9[166393]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:49 compute-0 sudo[166391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:49.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:18:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:49.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:18:49 compute-0 sudo[166544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iypbwgcscfkjkxlqwyjindumpimbndsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843929.7250526-689-129321624227743/AnsiballZ_command.py'
Jan 31 07:18:49 compute-0 sudo[166544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:18:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:18:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:18:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:18:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:18:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:18:50 compute-0 python3.9[166546]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:50 compute-0 sudo[166544]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:50 compute-0 sudo[166697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydbcxdhshkurbnqrifkeqpkgulkgtmiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843930.3678577-689-103900612924330/AnsiballZ_command.py'
Jan 31 07:18:50 compute-0 sudo[166697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:50 compute-0 python3.9[166699]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:50 compute-0 sudo[166697]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:50 compute-0 ceph-mon[74496]: pgmap v528: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:51 compute-0 sudo[166851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcgqanwahwswdpkrnrrwysdiuhmsrmhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843930.9840832-689-71654956626408/AnsiballZ_command.py'
Jan 31 07:18:51 compute-0 sudo[166851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:51 compute-0 python3.9[166853]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:51 compute-0 sudo[166851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:18:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:51.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:18:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:51.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:51 compute-0 sudo[167004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blynofiarynccvgqefeesjqsuacxjltq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843931.5842865-689-273330887453902/AnsiballZ_command.py'
Jan 31 07:18:51 compute-0 sudo[167004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:52 compute-0 python3.9[167006]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:52 compute-0 sudo[167004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:52 compute-0 sudo[167157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdnviiewimmskjhdjigcwyxiutwcjgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843932.177732-689-26753955048520/AnsiballZ_command.py'
Jan 31 07:18:52 compute-0 sudo[167157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:52 compute-0 python3.9[167159]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:18:52 compute-0 sudo[167157]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:53 compute-0 ceph-mon[74496]: pgmap v529: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:53 compute-0 sudo[167311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouotwheyculvqdmswfzfcztmyhuvwgow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843933.2527616-851-108857276992937/AnsiballZ_getent.py'
Jan 31 07:18:53 compute-0 sudo[167311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:53.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:18:53 compute-0 python3.9[167313]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 07:18:53 compute-0 sudo[167311]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:54 compute-0 ceph-mon[74496]: pgmap v530: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:54 compute-0 sudo[167464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftowtegebtsdfdeqmonzmldjcbwxkium ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843934.0311034-875-82364487979446/AnsiballZ_group.py'
Jan 31 07:18:54 compute-0 sudo[167464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:54 compute-0 python3.9[167466]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:18:55 compute-0 groupadd[167467]: group added to /etc/group: name=libvirt, GID=42473
Jan 31 07:18:55 compute-0 groupadd[167467]: group added to /etc/gshadow: name=libvirt
Jan 31 07:18:55 compute-0 groupadd[167467]: new group: name=libvirt, GID=42473
Jan 31 07:18:55 compute-0 sudo[167464]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:55 compute-0 sudo[167623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bisaznurbbvnrljdtbcosgazwdlyyyxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843935.4641676-899-199379349933913/AnsiballZ_user.py'
Jan 31 07:18:55 compute-0 sudo[167623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:56 compute-0 python3.9[167625]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:18:56 compute-0 useradd[167627]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 07:18:56 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:18:56 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:18:56 compute-0 sudo[167623]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:56 compute-0 ceph-mon[74496]: pgmap v531: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:18:57 compute-0 sudo[167784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gexzfjtuxlmiffjkxepuhynagmlonxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843936.808986-932-134217783615404/AnsiballZ_setup.py'
Jan 31 07:18:57 compute-0 sudo[167784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:57 compute-0 python3.9[167786]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:18:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:57 compute-0 sudo[167784]: pam_unix(sudo:session): session closed for user root
Jan 31 07:18:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:18:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:18:57 compute-0 sudo[167869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvskcthgrtvoojabpcwtwkltmjwvryng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769843936.808986-932-134217783615404/AnsiballZ_dnf.py'
Jan 31 07:18:57 compute-0 sudo[167869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:18:58 compute-0 python3.9[167871]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:18:58 compute-0 ceph-mon[74496]: pgmap v532: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:18:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:18:59.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:18:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:18:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:18:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:18:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:00 compute-0 ceph-mon[74496]: pgmap v533: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:01 compute-0 sudo[167882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:01 compute-0 sudo[167882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:01 compute-0 sudo[167882]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:01 compute-0 sudo[167907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:01 compute-0 sudo[167907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:01 compute-0 sudo[167907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:01.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:03 compute-0 ceph-mon[74496]: pgmap v534: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:03.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:03 compute-0 podman[167934]: 2026-01-31 07:19:03.906779438 +0000 UTC m=+0.078828364 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:19:04 compute-0 ceph-mon[74496]: pgmap v535: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:05.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:05.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:07 compute-0 ceph-mon[74496]: pgmap v536: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:07.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:07.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:08 compute-0 ceph-mon[74496]: pgmap v537: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:09.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:19:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:09.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:19:10 compute-0 ceph-mon[74496]: pgmap v538: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:10 compute-0 podman[167964]: 2026-01-31 07:19:10.877065937 +0000 UTC m=+0.052442586 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 07:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:19:11.126 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:19:11.127 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:19:11.127 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:19:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:11.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:11.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:12 compute-0 ceph-mon[74496]: pgmap v539: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:19:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:13.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:19:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:14 compute-0 ceph-mon[74496]: pgmap v540: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:19:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:19:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:15.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:16 compute-0 ceph-mon[74496]: pgmap v541: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:17.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:17.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:18 compute-0 ceph-mon[74496]: pgmap v542: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:19.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:19.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:19:19
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'vms', 'volumes']
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:19:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:19:21 compute-0 ceph-mon[74496]: pgmap v543: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:21 compute-0 sudo[168162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:21 compute-0 sudo[168162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:21 compute-0 sudo[168162]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:21 compute-0 sudo[168187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:21 compute-0 sudo[168187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:21 compute-0 sudo[168187]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 07:19:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:21.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 07:19:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:22 compute-0 ceph-mon[74496]: pgmap v544: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:23.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:23.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:24 compute-0 ceph-mon[74496]: pgmap v545: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:25.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:25.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:26 compute-0 ceph-mon[74496]: pgmap v546: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:27.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:27.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:28 compute-0 ceph-mon[74496]: pgmap v547: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:28 compute-0 sudo[168222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:28 compute-0 sudo[168222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:28 compute-0 sudo[168222]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:28 compute-0 sudo[168247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:19:28 compute-0 sudo[168247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:28 compute-0 sudo[168247]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:28 compute-0 sudo[168272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:28 compute-0 sudo[168272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:28 compute-0 sudo[168272]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:28 compute-0 sudo[168297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:19:28 compute-0 sudo[168297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:29 compute-0 sudo[168297]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:19:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:19:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:19:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:19:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:19:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bd583c81-03d0-4964-92e0-b0b4d65f4885 does not exist
Jan 31 07:19:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fb9e5275-ce23-4cc6-9d59-7684eeceacd5 does not exist
Jan 31 07:19:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4ec3aa4b-36c4-41ee-aa46-51c4df3ec557 does not exist
Jan 31 07:19:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:19:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:19:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:19:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:19:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:19:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:19:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:19:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:19:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:29 compute-0 sudo[168353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:29 compute-0 sudo[168353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:29 compute-0 sudo[168353]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:29 compute-0 sudo[168378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:19:29 compute-0 sudo[168378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:29 compute-0 sudo[168378]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:29 compute-0 sudo[168403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:29 compute-0 sudo[168403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:29 compute-0 sudo[168403]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:29 compute-0 sudo[168428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:19:29 compute-0 sudo[168428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:29.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:29.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:30 compute-0 podman[168493]: 2026-01-31 07:19:30.087116085 +0000 UTC m=+0.049982564 container create 122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:19:30 compute-0 systemd[1]: Started libpod-conmon-122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe.scope.
Jan 31 07:19:30 compute-0 podman[168493]: 2026-01-31 07:19:30.056185781 +0000 UTC m=+0.019052270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:19:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:19:30 compute-0 podman[168493]: 2026-01-31 07:19:30.181281018 +0000 UTC m=+0.144147517 container init 122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:19:30 compute-0 podman[168493]: 2026-01-31 07:19:30.189623174 +0000 UTC m=+0.152489663 container start 122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:19:30 compute-0 brave_carson[168509]: 167 167
Jan 31 07:19:30 compute-0 systemd[1]: libpod-122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe.scope: Deactivated successfully.
Jan 31 07:19:30 compute-0 podman[168493]: 2026-01-31 07:19:30.19597945 +0000 UTC m=+0.158845949 container attach 122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:19:30 compute-0 podman[168493]: 2026-01-31 07:19:30.1984261 +0000 UTC m=+0.161292609 container died 122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carson, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:19:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f268e3e93b5ace3419d6d3ae95bd4505c34db00189b5f91a480e261e1fa5ff2e-merged.mount: Deactivated successfully.
Jan 31 07:19:30 compute-0 podman[168493]: 2026-01-31 07:19:30.258160034 +0000 UTC m=+0.221026543 container remove 122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_carson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:19:30 compute-0 systemd[1]: libpod-conmon-122e2bcc2ac2290fe9329dc839732503eff8f14049eaeabbbd41f0a369c2ecfe.scope: Deactivated successfully.
Jan 31 07:19:30 compute-0 podman[168531]: 2026-01-31 07:19:30.410757189 +0000 UTC m=+0.046378755 container create f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:19:30 compute-0 systemd[1]: Started libpod-conmon-f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd.scope.
Jan 31 07:19:30 compute-0 podman[168531]: 2026-01-31 07:19:30.388279815 +0000 UTC m=+0.023901361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:19:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:19:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79be35bf85c233a50bf153281070340739b27714eceaf621ae9a638577a660b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79be35bf85c233a50bf153281070340739b27714eceaf621ae9a638577a660b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79be35bf85c233a50bf153281070340739b27714eceaf621ae9a638577a660b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79be35bf85c233a50bf153281070340739b27714eceaf621ae9a638577a660b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d79be35bf85c233a50bf153281070340739b27714eceaf621ae9a638577a660b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:30 compute-0 podman[168531]: 2026-01-31 07:19:30.531799995 +0000 UTC m=+0.167421551 container init f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:19:30 compute-0 podman[168531]: 2026-01-31 07:19:30.537601498 +0000 UTC m=+0.173223024 container start f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:19:30 compute-0 podman[168531]: 2026-01-31 07:19:30.543777381 +0000 UTC m=+0.179398917 container attach f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:19:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:19:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:19:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:19:30 compute-0 ceph-mon[74496]: pgmap v548: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:31 compute-0 modest_jackson[168548]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:19:31 compute-0 modest_jackson[168548]: --> relative data size: 1.0
Jan 31 07:19:31 compute-0 modest_jackson[168548]: --> All data devices are unavailable
Jan 31 07:19:31 compute-0 systemd[1]: libpod-f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd.scope: Deactivated successfully.
Jan 31 07:19:31 compute-0 podman[168531]: 2026-01-31 07:19:31.297165737 +0000 UTC m=+0.932787283 container died f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:19:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d79be35bf85c233a50bf153281070340739b27714eceaf621ae9a638577a660b-merged.mount: Deactivated successfully.
Jan 31 07:19:31 compute-0 podman[168531]: 2026-01-31 07:19:31.604435048 +0000 UTC m=+1.240056574 container remove f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jackson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:19:31 compute-0 systemd[1]: libpod-conmon-f8e71bea7390361d3597571cf80b028a709b8ba37f803f0a425692bcf09879cd.scope: Deactivated successfully.
Jan 31 07:19:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:31 compute-0 sudo[168428]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:31 compute-0 sudo[168578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:31 compute-0 sudo[168578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:31 compute-0 sudo[168578]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:31 compute-0 sudo[168603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:19:31 compute-0 sudo[168603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:31 compute-0 sudo[168603]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:31 compute-0 sudo[168628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:31 compute-0 sudo[168628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:31 compute-0 sudo[168628]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:31 compute-0 sudo[168653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:19:31 compute-0 sudo[168653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:32 compute-0 podman[168718]: 2026-01-31 07:19:32.219786539 +0000 UTC m=+0.054219769 container create 12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:19:32 compute-0 systemd[1]: Started libpod-conmon-12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016.scope.
Jan 31 07:19:32 compute-0 podman[168718]: 2026-01-31 07:19:32.19794153 +0000 UTC m=+0.032374800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:19:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:19:32 compute-0 podman[168718]: 2026-01-31 07:19:32.322291678 +0000 UTC m=+0.156724928 container init 12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:19:32 compute-0 podman[168718]: 2026-01-31 07:19:32.330325676 +0000 UTC m=+0.164758906 container start 12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:19:32 compute-0 compassionate_jennings[168734]: 167 167
Jan 31 07:19:32 compute-0 systemd[1]: libpod-12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016.scope: Deactivated successfully.
Jan 31 07:19:32 compute-0 podman[168718]: 2026-01-31 07:19:32.337333219 +0000 UTC m=+0.171766459 container attach 12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:19:32 compute-0 podman[168718]: 2026-01-31 07:19:32.337697298 +0000 UTC m=+0.172130528 container died 12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:19:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1798bbe35aaba45d4870a933db135bd665ca7180003b47d40efcbeef45de2416-merged.mount: Deactivated successfully.
Jan 31 07:19:32 compute-0 podman[168718]: 2026-01-31 07:19:32.468465984 +0000 UTC m=+0.302899254 container remove 12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:19:32 compute-0 systemd[1]: libpod-conmon-12d9ad07e7def762210d90d482878bb44af30c2ab3e6692a764aac176aa43016.scope: Deactivated successfully.
Jan 31 07:19:32 compute-0 podman[168760]: 2026-01-31 07:19:32.601509716 +0000 UTC m=+0.038769607 container create 7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_moser, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:19:32 compute-0 systemd[1]: Started libpod-conmon-7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15.scope.
Jan 31 07:19:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af7faff742c40d48382c781c887fb6b87cd5119fe000399b96651d0face9f19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:32 compute-0 podman[168760]: 2026-01-31 07:19:32.580528048 +0000 UTC m=+0.017787939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af7faff742c40d48382c781c887fb6b87cd5119fe000399b96651d0face9f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af7faff742c40d48382c781c887fb6b87cd5119fe000399b96651d0face9f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af7faff742c40d48382c781c887fb6b87cd5119fe000399b96651d0face9f19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:32 compute-0 podman[168760]: 2026-01-31 07:19:32.691764923 +0000 UTC m=+0.129024834 container init 7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_moser, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:19:32 compute-0 podman[168760]: 2026-01-31 07:19:32.698096699 +0000 UTC m=+0.135356590 container start 7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:19:32 compute-0 podman[168760]: 2026-01-31 07:19:32.703620755 +0000 UTC m=+0.140880676 container attach 7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_moser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:19:32 compute-0 ceph-mon[74496]: pgmap v549: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:33 compute-0 elated_moser[168776]: {
Jan 31 07:19:33 compute-0 elated_moser[168776]:     "0": [
Jan 31 07:19:33 compute-0 elated_moser[168776]:         {
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "devices": [
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "/dev/loop3"
Jan 31 07:19:33 compute-0 elated_moser[168776]:             ],
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "lv_name": "ceph_lv0",
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "lv_size": "7511998464",
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "name": "ceph_lv0",
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "tags": {
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.cluster_name": "ceph",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.crush_device_class": "",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.encrypted": "0",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.osd_id": "0",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.type": "block",
Jan 31 07:19:33 compute-0 elated_moser[168776]:                 "ceph.vdo": "0"
Jan 31 07:19:33 compute-0 elated_moser[168776]:             },
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "type": "block",
Jan 31 07:19:33 compute-0 elated_moser[168776]:             "vg_name": "ceph_vg0"
Jan 31 07:19:33 compute-0 elated_moser[168776]:         }
Jan 31 07:19:33 compute-0 elated_moser[168776]:     ]
Jan 31 07:19:33 compute-0 elated_moser[168776]: }
Jan 31 07:19:33 compute-0 systemd[1]: libpod-7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15.scope: Deactivated successfully.
Jan 31 07:19:33 compute-0 podman[168790]: 2026-01-31 07:19:33.613967934 +0000 UTC m=+0.032313838 container died 7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_moser, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:19:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:33.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:33.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9af7faff742c40d48382c781c887fb6b87cd5119fe000399b96651d0face9f19-merged.mount: Deactivated successfully.
Jan 31 07:19:34 compute-0 ceph-mon[74496]: pgmap v550: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:34 compute-0 podman[168790]: 2026-01-31 07:19:34.451439456 +0000 UTC m=+0.869785320 container remove 7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:19:34 compute-0 systemd[1]: libpod-conmon-7252987c379bd70fcd156b236405a36fa7b9210160ee528e1ffb303057048a15.scope: Deactivated successfully.
Jan 31 07:19:34 compute-0 sudo[168653]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:19:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:19:34 compute-0 kernel: SELinux:  Converting 2780 SID table entries...
Jan 31 07:19:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:19:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:19:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:19:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:19:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:19:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:19:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:19:34 compute-0 sudo[168828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:34 compute-0 auditd[699]: Audit daemon rotating log files
Jan 31 07:19:34 compute-0 podman[168805]: 2026-01-31 07:19:34.58901472 +0000 UTC m=+0.186582094 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 31 07:19:34 compute-0 sudo[168828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:34 compute-0 sudo[168828]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:34 compute-0 sudo[168859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:19:34 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 07:19:34 compute-0 sudo[168859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:34 compute-0 sudo[168859]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:34 compute-0 sudo[168885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:34 compute-0 sudo[168885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:34 compute-0 sudo[168885]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:34 compute-0 sudo[168910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:19:34 compute-0 sudo[168910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:35 compute-0 podman[168976]: 2026-01-31 07:19:35.254572819 +0000 UTC m=+0.055524240 container create c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:19:35 compute-0 systemd[1]: Started libpod-conmon-c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382.scope.
Jan 31 07:19:35 compute-0 podman[168976]: 2026-01-31 07:19:35.230948836 +0000 UTC m=+0.031900337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:19:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:19:35 compute-0 podman[168976]: 2026-01-31 07:19:35.348503116 +0000 UTC m=+0.149454567 container init c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:19:35 compute-0 podman[168976]: 2026-01-31 07:19:35.358973505 +0000 UTC m=+0.159924916 container start c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:19:35 compute-0 podman[168976]: 2026-01-31 07:19:35.363540187 +0000 UTC m=+0.164491658 container attach c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:19:35 compute-0 fervent_wilson[168992]: 167 167
Jan 31 07:19:35 compute-0 systemd[1]: libpod-c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382.scope: Deactivated successfully.
Jan 31 07:19:35 compute-0 podman[168976]: 2026-01-31 07:19:35.366627463 +0000 UTC m=+0.167578894 container died c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:19:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f386d4dff5794287b39fae52166b443292a6405dde99c2677f019f59be11997f-merged.mount: Deactivated successfully.
Jan 31 07:19:35 compute-0 podman[168976]: 2026-01-31 07:19:35.417674013 +0000 UTC m=+0.218625434 container remove c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:19:35 compute-0 systemd[1]: libpod-conmon-c185d04352512038505976478b87c4bbe13a07c5bb73e996abdc12a24bfe2382.scope: Deactivated successfully.
Jan 31 07:19:35 compute-0 podman[169017]: 2026-01-31 07:19:35.59264912 +0000 UTC m=+0.073099295 container create 64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:19:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:35 compute-0 systemd[1]: Started libpod-conmon-64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394.scope.
Jan 31 07:19:35 compute-0 podman[169017]: 2026-01-31 07:19:35.556623001 +0000 UTC m=+0.037073196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:19:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b7fbb3d4ec695377c106ad482229481263109804ba373f11d38fb4559d48e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b7fbb3d4ec695377c106ad482229481263109804ba373f11d38fb4559d48e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b7fbb3d4ec695377c106ad482229481263109804ba373f11d38fb4559d48e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905b7fbb3d4ec695377c106ad482229481263109804ba373f11d38fb4559d48e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:19:35 compute-0 podman[169017]: 2026-01-31 07:19:35.699918406 +0000 UTC m=+0.180368591 container init 64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:19:35 compute-0 podman[169017]: 2026-01-31 07:19:35.711882852 +0000 UTC m=+0.192333037 container start 64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:19:35 compute-0 podman[169017]: 2026-01-31 07:19:35.717772787 +0000 UTC m=+0.198223042 container attach 64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:19:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:35.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]: {
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]:         "osd_id": 0,
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]:         "type": "bluestore"
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]:     }
Jan 31 07:19:36 compute-0 jovial_chebyshev[169034]: }
Jan 31 07:19:36 compute-0 systemd[1]: libpod-64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394.scope: Deactivated successfully.
Jan 31 07:19:36 compute-0 podman[169017]: 2026-01-31 07:19:36.525061033 +0000 UTC m=+1.005511218 container died 64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:19:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-905b7fbb3d4ec695377c106ad482229481263109804ba373f11d38fb4559d48e-merged.mount: Deactivated successfully.
Jan 31 07:19:36 compute-0 podman[169017]: 2026-01-31 07:19:36.660970366 +0000 UTC m=+1.141420511 container remove 64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_chebyshev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:19:36 compute-0 systemd[1]: libpod-conmon-64b790f844ad7e0f1e789c30b737196af93a0ea8d0cdd3b68c7f668c13821394.scope: Deactivated successfully.
Jan 31 07:19:36 compute-0 sudo[168910]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:19:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:19:36 compute-0 ceph-mon[74496]: pgmap v551: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b4cac61f-ff9e-418c-b612-763d598140a6 does not exist
Jan 31 07:19:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fbce134b-a4b5-4328-8876-a1f913f7b5db does not exist
Jan 31 07:19:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6f05249b-820c-450b-a8c6-6d6cebc48020 does not exist
Jan 31 07:19:36 compute-0 sudo[169069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:36 compute-0 sudo[169069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:36 compute-0 sudo[169069]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:36 compute-0 sudo[169094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:19:36 compute-0 sudo[169094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:36 compute-0 sudo[169094]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:37.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:19:39 compute-0 ceph-mon[74496]: pgmap v552: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 07:19:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:39.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 07:19:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:39.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:40 compute-0 ceph-mon[74496]: pgmap v553: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:41 compute-0 sudo[169122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:41 compute-0 sudo[169122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:41 compute-0 sudo[169122]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:41 compute-0 sudo[169153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:19:41 compute-0 sudo[169153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:19:41 compute-0 sudo[169153]: pam_unix(sudo:session): session closed for user root
Jan 31 07:19:41 compute-0 podman[169146]: 2026-01-31 07:19:41.389513363 +0000 UTC m=+0.104896909 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 07:19:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:41.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:41.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:42 compute-0 ceph-mon[74496]: pgmap v554: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:43.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:43.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:44 compute-0 kernel: SELinux:  Converting 2780 SID table entries...
Jan 31 07:19:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:19:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:19:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:19:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:19:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:19:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:19:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:19:44 compute-0 ceph-mon[74496]: pgmap v555: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:45.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:45.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:47 compute-0 ceph-mon[74496]: pgmap v556: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:47.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:47.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:48 compute-0 ceph-mon[74496]: pgmap v557: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:49.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:49.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:19:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:19:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:19:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:19:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:19:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:19:50 compute-0 ceph-mon[74496]: pgmap v558: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:51.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:51.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:53 compute-0 ceph-mon[74496]: pgmap v559: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:53.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:53.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:54 compute-0 ceph-mon[74496]: pgmap v560: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:55.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:55.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:56 compute-0 ceph-mon[74496]: pgmap v561: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:19:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:57.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:57.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:19:58 compute-0 ceph-mon[74496]: pgmap v562: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:19:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:19:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:19:59.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:19:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:19:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:19:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:19:59.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:20:00 compute-0 ceph-mon[74496]: pgmap v563: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:20:01 compute-0 sudo[173408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:01 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 07:20:01 compute-0 sudo[173408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:01 compute-0 sudo[173408]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:01 compute-0 sudo[173482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:01 compute-0 sudo[173482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:01 compute-0 sudo[173482]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:01.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:01.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:02 compute-0 ceph-mon[74496]: pgmap v564: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:03.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:04 compute-0 ceph-mon[74496]: pgmap v565: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:04 compute-0 podman[176175]: 2026-01-31 07:20:04.997266472 +0000 UTC m=+0.150535625 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 07:20:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:05.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:05.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:06 compute-0 ceph-mon[74496]: pgmap v566: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:07.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:08 compute-0 ceph-mon[74496]: pgmap v567: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:09.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:11 compute-0 ceph-mon[74496]: pgmap v568: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:20:11.127 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:20:11.128 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:20:11.128 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:20:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:11 compute-0 podman[181390]: 2026-01-31 07:20:11.876354454 +0000 UTC m=+0.052052695 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:20:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:11.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:12 compute-0 ceph-mon[74496]: pgmap v569: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:13.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:14 compute-0 ceph-mon[74496]: pgmap v570: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:15.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:15.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:16 compute-0 ceph-mon[74496]: pgmap v571: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:17.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:17.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:18 compute-0 ceph-mon[74496]: pgmap v572: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:19.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:19.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:20:19
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'images']
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:20:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:20:21 compute-0 ceph-mon[74496]: pgmap v573: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:21 compute-0 sudo[186182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:21 compute-0 sudo[186182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:21 compute-0 sudo[186182]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:21 compute-0 sudo[186207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:21 compute-0 sudo[186207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:21 compute-0 sudo[186207]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:21.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:21.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:22 compute-0 ceph-mon[74496]: pgmap v574: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:23.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:23.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:24 compute-0 ceph-mon[74496]: pgmap v575: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:25.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:26 compute-0 ceph-mon[74496]: pgmap v576: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:27.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:27.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:28 compute-0 ceph-mon[74496]: pgmap v577: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:29.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:29.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:30 compute-0 ceph-mon[74496]: pgmap v578: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:30 compute-0 kernel: SELinux:  Converting 2781 SID table entries...
Jan 31 07:20:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:20:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 07:20:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:20:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:20:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:20:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:20:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:20:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:31.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:31.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:32 compute-0 ceph-mon[74496]: pgmap v579: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:33 compute-0 groupadd[186253]: group added to /etc/group: name=dnsmasq, GID=992
Jan 31 07:20:33 compute-0 groupadd[186253]: group added to /etc/gshadow: name=dnsmasq
Jan 31 07:20:33 compute-0 groupadd[186253]: new group: name=dnsmasq, GID=992
Jan 31 07:20:33 compute-0 useradd[186260]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 31 07:20:33 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Jan 31 07:20:33 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 07:20:33 compute-0 dbus-broker-launch[809]: Noticed file-system modification, trigger reload.
Jan 31 07:20:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:33.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:34 compute-0 groupadd[186273]: group added to /etc/group: name=clevis, GID=991
Jan 31 07:20:34 compute-0 groupadd[186273]: group added to /etc/gshadow: name=clevis
Jan 31 07:20:34 compute-0 groupadd[186273]: new group: name=clevis, GID=991
Jan 31 07:20:34 compute-0 useradd[186280]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 31 07:20:34 compute-0 usermod[186290]: add 'clevis' to group 'tss'
Jan 31 07:20:34 compute-0 usermod[186290]: add 'clevis' to shadow group 'tss'
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:20:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:20:34 compute-0 ceph-mon[74496]: pgmap v580: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:35 compute-0 podman[186308]: 2026-01-31 07:20:35.472980815 +0000 UTC m=+0.101279429 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:20:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:35.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:35.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:37 compute-0 sudo[186342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:37 compute-0 sudo[186342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:37 compute-0 sudo[186342]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:37 compute-0 sudo[186367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:20:37 compute-0 sudo[186367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:37 compute-0 sudo[186367]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:37 compute-0 sudo[186392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:37 compute-0 sudo[186392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:37 compute-0 sudo[186392]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:37 compute-0 sudo[186417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:20:37 compute-0 sudo[186417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:37 compute-0 polkitd[43587]: Reloading rules
Jan 31 07:20:37 compute-0 polkitd[43587]: Collecting garbage unconditionally...
Jan 31 07:20:37 compute-0 polkitd[43587]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 07:20:37 compute-0 polkitd[43587]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 07:20:37 compute-0 polkitd[43587]: Finished loading, compiling and executing 3 rules
Jan 31 07:20:37 compute-0 polkitd[43587]: Reloading rules
Jan 31 07:20:37 compute-0 polkitd[43587]: Collecting garbage unconditionally...
Jan 31 07:20:37 compute-0 polkitd[43587]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 07:20:37 compute-0 polkitd[43587]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 07:20:37 compute-0 polkitd[43587]: Finished loading, compiling and executing 3 rules
Jan 31 07:20:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:37 compute-0 sudo[186417]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:37.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:37.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:38 compute-0 ceph-mon[74496]: pgmap v581: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:38 compute-0 groupadd[186640]: group added to /etc/group: name=ceph, GID=167
Jan 31 07:20:38 compute-0 groupadd[186640]: group added to /etc/gshadow: name=ceph
Jan 31 07:20:38 compute-0 groupadd[186640]: new group: name=ceph, GID=167
Jan 31 07:20:38 compute-0 useradd[186646]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 31 07:20:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:20:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:20:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:39 compute-0 ceph-mon[74496]: pgmap v582: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:39.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:39.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:20:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:20:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:20:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:20:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:20:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 31294e77-1405-4df1-b461-20c46455a020 does not exist
Jan 31 07:20:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b58e6bd7-76ed-4bb3-bdcd-9bafec20ad1d does not exist
Jan 31 07:20:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 57df9b11-c980-477f-a9e4-24ff124d56ac does not exist
Jan 31 07:20:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:20:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:20:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:20:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:20:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:20:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:20:40 compute-0 sudo[186654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:40 compute-0 sudo[186654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:40 compute-0 sudo[186654]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:40 compute-0 sudo[186679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:20:40 compute-0 sudo[186679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:40 compute-0 sudo[186679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:40 compute-0 sudo[186704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:40 compute-0 sudo[186704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:40 compute-0 sudo[186704]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:40 compute-0 sudo[186729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:20:40 compute-0 sudo[186729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:40 compute-0 podman[186791]: 2026-01-31 07:20:40.675114366 +0000 UTC m=+0.047068336 container create 2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:20:40 compute-0 ceph-mon[74496]: pgmap v583: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:20:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:20:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:20:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:20:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:20:40 compute-0 systemd[1]: Started libpod-conmon-2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8.scope.
Jan 31 07:20:40 compute-0 podman[186791]: 2026-01-31 07:20:40.648378584 +0000 UTC m=+0.020332604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:20:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:20:40 compute-0 podman[186791]: 2026-01-31 07:20:40.763459124 +0000 UTC m=+0.135413144 container init 2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:20:40 compute-0 podman[186791]: 2026-01-31 07:20:40.772044837 +0000 UTC m=+0.143998827 container start 2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:20:40 compute-0 podman[186791]: 2026-01-31 07:20:40.776480857 +0000 UTC m=+0.148434837 container attach 2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:20:40 compute-0 systemd[1]: libpod-2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8.scope: Deactivated successfully.
Jan 31 07:20:40 compute-0 quirky_benz[186808]: 167 167
Jan 31 07:20:40 compute-0 conmon[186808]: conmon 2e1b2a8ec0844ef97987 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8.scope/container/memory.events
Jan 31 07:20:40 compute-0 podman[186791]: 2026-01-31 07:20:40.782396613 +0000 UTC m=+0.154350583 container died 2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:20:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f9dcae2565e5dd1c6c0b0634ac9a5388ab0aa42f098e517dcc2bcf67c1e716e-merged.mount: Deactivated successfully.
Jan 31 07:20:40 compute-0 podman[186791]: 2026-01-31 07:20:40.828244809 +0000 UTC m=+0.200198819 container remove 2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_benz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:20:40 compute-0 systemd[1]: libpod-conmon-2e1b2a8ec0844ef97987d3f11b4fe66f6ea398125f65beca9818f7600424e7f8.scope: Deactivated successfully.
Jan 31 07:20:40 compute-0 podman[186933]: 2026-01-31 07:20:40.982264633 +0000 UTC m=+0.044527724 container create 49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:20:41 compute-0 systemd[1]: Started libpod-conmon-49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad.scope.
Jan 31 07:20:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f0e756dedd17d878aa6a87fabcf57d12d9515a3812cb1c9600535e13c5907d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f0e756dedd17d878aa6a87fabcf57d12d9515a3812cb1c9600535e13c5907d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f0e756dedd17d878aa6a87fabcf57d12d9515a3812cb1c9600535e13c5907d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f0e756dedd17d878aa6a87fabcf57d12d9515a3812cb1c9600535e13c5907d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f0e756dedd17d878aa6a87fabcf57d12d9515a3812cb1c9600535e13c5907d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:41 compute-0 podman[186933]: 2026-01-31 07:20:40.964479342 +0000 UTC m=+0.026742433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:20:41 compute-0 podman[186933]: 2026-01-31 07:20:41.07503868 +0000 UTC m=+0.137301801 container init 49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:20:41 compute-0 podman[186933]: 2026-01-31 07:20:41.084950636 +0000 UTC m=+0.147213707 container start 49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:20:41 compute-0 podman[186933]: 2026-01-31 07:20:41.088796551 +0000 UTC m=+0.151059622 container attach 49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:20:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:41 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 07:20:41 compute-0 sshd[1006]: Received signal 15; terminating.
Jan 31 07:20:41 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 07:20:41 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 07:20:41 compute-0 systemd[1]: sshd.service: Consumed 2.804s CPU time, read 32.0K from disk, written 88.0K to disk.
Jan 31 07:20:41 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 07:20:41 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 31 07:20:41 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:20:41 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:20:41 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:20:41 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 31 07:20:41 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 31 07:20:41 compute-0 sudo[187478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:41 compute-0 sudo[187478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:41 compute-0 sshd[187503]: Server listening on 0.0.0.0 port 22.
Jan 31 07:20:41 compute-0 sshd[187503]: Server listening on :: port 22.
Jan 31 07:20:41 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 31 07:20:41 compute-0 sudo[187478]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:41 compute-0 sudo[187506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:41 compute-0 sudo[187506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:41 compute-0 sudo[187506]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:41 compute-0 objective_thompson[187016]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:20:41 compute-0 objective_thompson[187016]: --> relative data size: 1.0
Jan 31 07:20:41 compute-0 objective_thompson[187016]: --> All data devices are unavailable
Jan 31 07:20:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:41.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:41.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:41 compute-0 systemd[1]: libpod-49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad.scope: Deactivated successfully.
Jan 31 07:20:41 compute-0 podman[186933]: 2026-01-31 07:20:41.934404033 +0000 UTC m=+0.996667094 container died 49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:20:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0f0e756dedd17d878aa6a87fabcf57d12d9515a3812cb1c9600535e13c5907d-merged.mount: Deactivated successfully.
Jan 31 07:20:41 compute-0 podman[186933]: 2026-01-31 07:20:41.995738382 +0000 UTC m=+1.058001453 container remove 49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:20:41 compute-0 podman[187564]: 2026-01-31 07:20:41.998401778 +0000 UTC m=+0.083160491 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:20:42 compute-0 systemd[1]: libpod-conmon-49298d3b02e6d4ce0384d9f1429d262e1ab927084f046051b65286915de858ad.scope: Deactivated successfully.
Jan 31 07:20:42 compute-0 sudo[186729]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:42 compute-0 sudo[187613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:42 compute-0 sudo[187613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:42 compute-0 sudo[187613]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:42 compute-0 sudo[187648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:20:42 compute-0 sudo[187648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:42 compute-0 sudo[187648]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:42 compute-0 sudo[187684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:42 compute-0 sudo[187684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:42 compute-0 sudo[187684]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:42 compute-0 sudo[187718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:20:42 compute-0 sudo[187718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:42 compute-0 podman[187831]: 2026-01-31 07:20:42.570379182 +0000 UTC m=+0.041713134 container create b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:20:42 compute-0 systemd[1]: Started libpod-conmon-b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009.scope.
Jan 31 07:20:42 compute-0 podman[187831]: 2026-01-31 07:20:42.555164845 +0000 UTC m=+0.026498797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:20:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:20:42 compute-0 podman[187831]: 2026-01-31 07:20:42.668769729 +0000 UTC m=+0.140103751 container init b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:20:42 compute-0 podman[187831]: 2026-01-31 07:20:42.67688482 +0000 UTC m=+0.148218772 container start b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:20:42 compute-0 podman[187831]: 2026-01-31 07:20:42.68050341 +0000 UTC m=+0.151837462 container attach b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:20:42 compute-0 nervous_rubin[187861]: 167 167
Jan 31 07:20:42 compute-0 systemd[1]: libpod-b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009.scope: Deactivated successfully.
Jan 31 07:20:42 compute-0 podman[187831]: 2026-01-31 07:20:42.683439453 +0000 UTC m=+0.154773405 container died b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c7256120db3eac225536a6bf9edbab22670df0867ce41b327efed16e422ea7e-merged.mount: Deactivated successfully.
Jan 31 07:20:42 compute-0 podman[187831]: 2026-01-31 07:20:42.727830662 +0000 UTC m=+0.199164624 container remove b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rubin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:20:42 compute-0 systemd[1]: libpod-conmon-b43fc3d5ecebcb8e61722c861d773da2fff6d7b46fdcdbb2ac297a6d383c1009.scope: Deactivated successfully.
Jan 31 07:20:42 compute-0 ceph-mon[74496]: pgmap v584: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:42 compute-0 podman[187915]: 2026-01-31 07:20:42.885458645 +0000 UTC m=+0.052835799 container create 61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:20:42 compute-0 systemd[1]: Started libpod-conmon-61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de.scope.
Jan 31 07:20:42 compute-0 podman[187915]: 2026-01-31 07:20:42.860509727 +0000 UTC m=+0.027886971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:20:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf516574939eb21a863d1307ce206c7d34578857a500bbe24c1b14be3d088ad6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf516574939eb21a863d1307ce206c7d34578857a500bbe24c1b14be3d088ad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf516574939eb21a863d1307ce206c7d34578857a500bbe24c1b14be3d088ad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf516574939eb21a863d1307ce206c7d34578857a500bbe24c1b14be3d088ad6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:42 compute-0 podman[187915]: 2026-01-31 07:20:42.976863809 +0000 UTC m=+0.144241003 container init 61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wing, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:20:42 compute-0 podman[187915]: 2026-01-31 07:20:42.982974951 +0000 UTC m=+0.150352095 container start 61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:20:42 compute-0 podman[187915]: 2026-01-31 07:20:42.990723492 +0000 UTC m=+0.158100636 container attach 61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:20:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:20:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:20:43 compute-0 systemd[1]: Reloading.
Jan 31 07:20:43 compute-0 systemd-sysv-generator[188027]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:43 compute-0 systemd-rc-local-generator[188023]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:43 compute-0 zealous_wing[187946]: {
Jan 31 07:20:43 compute-0 zealous_wing[187946]:     "0": [
Jan 31 07:20:43 compute-0 zealous_wing[187946]:         {
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "devices": [
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "/dev/loop3"
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             ],
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "lv_name": "ceph_lv0",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "lv_size": "7511998464",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "name": "ceph_lv0",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "tags": {
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.cluster_name": "ceph",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.crush_device_class": "",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.encrypted": "0",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.osd_id": "0",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.type": "block",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:                 "ceph.vdo": "0"
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             },
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "type": "block",
Jan 31 07:20:43 compute-0 zealous_wing[187946]:             "vg_name": "ceph_vg0"
Jan 31 07:20:43 compute-0 zealous_wing[187946]:         }
Jan 31 07:20:43 compute-0 zealous_wing[187946]:     ]
Jan 31 07:20:43 compute-0 zealous_wing[187946]: }
Jan 31 07:20:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:20:43 compute-0 systemd[1]: libpod-61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de.scope: Deactivated successfully.
Jan 31 07:20:43 compute-0 conmon[187946]: conmon 61db681a721ac61787ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de.scope/container/memory.events
Jan 31 07:20:43 compute-0 podman[187915]: 2026-01-31 07:20:43.794553929 +0000 UTC m=+0.961931093 container died 61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 07:20:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf516574939eb21a863d1307ce206c7d34578857a500bbe24c1b14be3d088ad6-merged.mount: Deactivated successfully.
Jan 31 07:20:43 compute-0 podman[187915]: 2026-01-31 07:20:43.861175239 +0000 UTC m=+1.028552383 container remove 61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:20:43 compute-0 systemd[1]: libpod-conmon-61db681a721ac61787ce2c1d25bde4a79ceecef2f46c089468de5885a70301de.scope: Deactivated successfully.
Jan 31 07:20:43 compute-0 sudo[187718]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 07:20:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:43.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 07:20:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:43.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:43 compute-0 sudo[188523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:43 compute-0 sudo[188523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:43 compute-0 sudo[188523]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:44 compute-0 sudo[188679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:20:44 compute-0 sudo[188679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:44 compute-0 sudo[188679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:44 compute-0 sudo[188779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:44 compute-0 sudo[188779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:44 compute-0 sudo[188779]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:44 compute-0 sudo[188889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:20:44 compute-0 sudo[188889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:44 compute-0 podman[189356]: 2026-01-31 07:20:44.439512671 +0000 UTC m=+0.025254626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:20:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:45 compute-0 podman[189356]: 2026-01-31 07:20:45.86688104 +0000 UTC m=+1.452622895 container create 626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:20:45 compute-0 ceph-mon[74496]: pgmap v585: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:45 compute-0 systemd[1]: Started libpod-conmon-626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e.scope.
Jan 31 07:20:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:20:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:45.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:45 compute-0 podman[189356]: 2026-01-31 07:20:45.939113769 +0000 UTC m=+1.524855644 container init 626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:20:45 compute-0 podman[189356]: 2026-01-31 07:20:45.946851011 +0000 UTC m=+1.532592866 container start 626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:20:45 compute-0 great_mclaren[191666]: 167 167
Jan 31 07:20:45 compute-0 podman[189356]: 2026-01-31 07:20:45.951016194 +0000 UTC m=+1.536758069 container attach 626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:20:45 compute-0 systemd[1]: libpod-626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e.scope: Deactivated successfully.
Jan 31 07:20:45 compute-0 podman[189356]: 2026-01-31 07:20:45.951736021 +0000 UTC m=+1.537477886 container died 626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccb8870c11eaf7e198be4a2c5ea865a7b07d6d570ebe39e9fb29178104b59dba-merged.mount: Deactivated successfully.
Jan 31 07:20:45 compute-0 podman[189356]: 2026-01-31 07:20:45.9912705 +0000 UTC m=+1.577012355 container remove 626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:20:46 compute-0 systemd[1]: libpod-conmon-626e89a45f04fc3d2f80bcc1afa3929c03b520babfad274d580099f21bd6167e.scope: Deactivated successfully.
Jan 31 07:20:46 compute-0 podman[191906]: 2026-01-31 07:20:46.136666381 +0000 UTC m=+0.049396184 container create 353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:20:46 compute-0 systemd[1]: Started libpod-conmon-353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787.scope.
Jan 31 07:20:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8c7a6fd9647ef59fef37974c0239abb077bf8c289c1c53f0359d21d69fa76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8c7a6fd9647ef59fef37974c0239abb077bf8c289c1c53f0359d21d69fa76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8c7a6fd9647ef59fef37974c0239abb077bf8c289c1c53f0359d21d69fa76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba8c7a6fd9647ef59fef37974c0239abb077bf8c289c1c53f0359d21d69fa76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:20:46 compute-0 podman[191906]: 2026-01-31 07:20:46.116015889 +0000 UTC m=+0.028745712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:20:46 compute-0 podman[191906]: 2026-01-31 07:20:46.218636081 +0000 UTC m=+0.131365914 container init 353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:20:46 compute-0 podman[191906]: 2026-01-31 07:20:46.224397524 +0000 UTC m=+0.137127317 container start 353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:20:46 compute-0 podman[191906]: 2026-01-31 07:20:46.228786972 +0000 UTC m=+0.141516805 container attach 353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:20:46 compute-0 sudo[167869]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:46 compute-0 ceph-mon[74496]: pgmap v586: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:46 compute-0 boring_mahavira[192095]: {
Jan 31 07:20:46 compute-0 boring_mahavira[192095]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:20:46 compute-0 boring_mahavira[192095]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:20:46 compute-0 boring_mahavira[192095]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:20:46 compute-0 boring_mahavira[192095]:         "osd_id": 0,
Jan 31 07:20:46 compute-0 boring_mahavira[192095]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:20:46 compute-0 boring_mahavira[192095]:         "type": "bluestore"
Jan 31 07:20:46 compute-0 boring_mahavira[192095]:     }
Jan 31 07:20:46 compute-0 boring_mahavira[192095]: }
Jan 31 07:20:47 compute-0 systemd[1]: libpod-353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787.scope: Deactivated successfully.
Jan 31 07:20:47 compute-0 conmon[192095]: conmon 353db36f8c623841629c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787.scope/container/memory.events
Jan 31 07:20:47 compute-0 podman[191906]: 2026-01-31 07:20:47.028019596 +0000 UTC m=+0.940749449 container died 353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:20:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ba8c7a6fd9647ef59fef37974c0239abb077bf8c289c1c53f0359d21d69fa76-merged.mount: Deactivated successfully.
Jan 31 07:20:47 compute-0 podman[191906]: 2026-01-31 07:20:47.097119586 +0000 UTC m=+1.009849399 container remove 353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:20:47 compute-0 systemd[1]: libpod-conmon-353db36f8c623841629c62292018958d4f102dadcf5c8cca3e5c2a43ce740787.scope: Deactivated successfully.
Jan 31 07:20:47 compute-0 sudo[188889]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:20:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:20:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 557883ed-a1e3-47f4-a1e8-119c3de80d1a does not exist
Jan 31 07:20:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6ffee8be-9409-4f99-996e-7b2c8d5d0f85 does not exist
Jan 31 07:20:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e59237e7-6b6f-4be0-a777-cbb495503836 does not exist
Jan 31 07:20:47 compute-0 sudo[193559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:20:47 compute-0 sudo[193559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:47 compute-0 sudo[193559]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:47 compute-0 sudo[193652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:20:47 compute-0 sudo[193652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:20:47 compute-0 sudo[193652]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:47 compute-0 sudo[194128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkffctpshikqakntfdlstugcpmeuapzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844046.9376-968-225341412007412/AnsiballZ_systemd.py'
Jan 31 07:20:47 compute-0 sudo[194128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:47 compute-0 python3.9[194156]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:47 compute-0 systemd[1]: Reloading.
Jan 31 07:20:47 compute-0 systemd-sysv-generator[194765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:47 compute-0 systemd-rc-local-generator[194761]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:47.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:20:48 compute-0 ceph-mon[74496]: pgmap v587: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:48 compute-0 sudo[194128]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:48 compute-0 sudo[195801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslslqoqrytnoaurkdnzglgcfwowvfzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844048.2982135-968-249278525134056/AnsiballZ_systemd.py'
Jan 31 07:20:48 compute-0 sudo[195801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:48 compute-0 python3.9[195823]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:48 compute-0 systemd[1]: Reloading.
Jan 31 07:20:49 compute-0 systemd-sysv-generator[196427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:49 compute-0 systemd-rc-local-generator[196420]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:49 compute-0 sudo[195801]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:49 compute-0 sudo[197217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtrxzorrzyxusyhcxgbqmjwgnrrliqxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844049.326688-968-112630993415701/AnsiballZ_systemd.py'
Jan 31 07:20:49 compute-0 sudo[197217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:20:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:20:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.863s CPU time.
Jan 31 07:20:49 compute-0 systemd[1]: run-r53452b2163424bad9a6567a80698c6dd.service: Deactivated successfully.
Jan 31 07:20:49 compute-0 python3.9[197241]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:49.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:49 compute-0 systemd[1]: Reloading.
Jan 31 07:20:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:20:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:20:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:20:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:20:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:20:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:20:50 compute-0 systemd-sysv-generator[197273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:50 compute-0 systemd-rc-local-generator[197269]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:50 compute-0 sudo[197217]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:50 compute-0 sudo[197431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtztrsnlsdauendxhbzhlgsugbgtartu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844050.4205077-968-272704445289064/AnsiballZ_systemd.py'
Jan 31 07:20:50 compute-0 sudo[197431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:50 compute-0 ceph-mon[74496]: pgmap v588: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:50 compute-0 python3.9[197433]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:50 compute-0 systemd[1]: Reloading.
Jan 31 07:20:51 compute-0 systemd-rc-local-generator[197463]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:51 compute-0 systemd-sysv-generator[197466]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:51 compute-0 sudo[197431]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:51 compute-0 sudo[197622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fagjzlkqfdvaqeqlhmtzynkmlieetsmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844051.4382699-1055-9488556362466/AnsiballZ_systemd.py'
Jan 31 07:20:51 compute-0 sudo[197622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:51.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:51.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:52 compute-0 python3.9[197624]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.078169) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052078206, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1732, "num_deletes": 251, "total_data_size": 3269301, "memory_usage": 3321520, "flush_reason": "Manual Compaction"}
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 31 07:20:52 compute-0 systemd[1]: Reloading.
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052101063, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 3212469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12343, "largest_seqno": 14074, "table_properties": {"data_size": 3204452, "index_size": 4896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14766, "raw_average_key_size": 18, "raw_value_size": 3188732, "raw_average_value_size": 3995, "num_data_blocks": 220, "num_entries": 798, "num_filter_entries": 798, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843845, "oldest_key_time": 1769843845, "file_creation_time": 1769844052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 22966 microseconds, and 4173 cpu microseconds.
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.101131) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 3212469 bytes OK
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.101152) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.125220) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.125272) EVENT_LOG_v1 {"time_micros": 1769844052125262, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.125297) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3262218, prev total WAL file size 3262218, number of live WAL files 2.
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.126526) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(3137KB)], [29(8226KB)]
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052126555, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 11636239, "oldest_snapshot_seqno": -1}
Jan 31 07:20:52 compute-0 systemd-sysv-generator[197657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:52 compute-0 systemd-rc-local-generator[197652]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4291 keys, 11135880 bytes, temperature: kUnknown
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052233113, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11135880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11101967, "index_size": 22091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 104779, "raw_average_key_size": 24, "raw_value_size": 11019342, "raw_average_value_size": 2568, "num_data_blocks": 941, "num_entries": 4291, "num_filter_entries": 4291, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.233367) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11135880 bytes
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.250328) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.1 rd, 104.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 4808, records dropped: 517 output_compression: NoCompression
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.250381) EVENT_LOG_v1 {"time_micros": 1769844052250357, "job": 12, "event": "compaction_finished", "compaction_time_micros": 106642, "compaction_time_cpu_micros": 21425, "output_level": 6, "num_output_files": 1, "total_output_size": 11135880, "num_input_records": 4808, "num_output_records": 4291, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052250810, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844052251775, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.126493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.251815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.251820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.251821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.251823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:20:52.251824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:20:52 compute-0 sudo[197622]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:52 compute-0 sudo[197812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owhsyikpchbwrdlsalamefboxpxkifpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844052.518181-1055-279546616982863/AnsiballZ_systemd.py'
Jan 31 07:20:52 compute-0 sudo[197812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:52 compute-0 ceph-mon[74496]: pgmap v589: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:53 compute-0 python3.9[197814]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:53 compute-0 systemd[1]: Reloading.
Jan 31 07:20:53 compute-0 systemd-rc-local-generator[197843]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:53 compute-0 systemd-sysv-generator[197848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:53 compute-0 sudo[197812]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:53 compute-0 sudo[198003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqepxtxhsagmsqywviqjwxlkqwxihwel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844053.5897057-1055-256288754234846/AnsiballZ_systemd.py'
Jan 31 07:20:53 compute-0 sudo[198003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:53.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:53.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:54 compute-0 python3.9[198005]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:54 compute-0 systemd[1]: Reloading.
Jan 31 07:20:54 compute-0 systemd-sysv-generator[198036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:54 compute-0 systemd-rc-local-generator[198033]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:54 compute-0 sudo[198003]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:54 compute-0 sshd-session[198010]: Invalid user sol from 45.148.10.240 port 40912
Jan 31 07:20:54 compute-0 sshd-session[198010]: Connection closed by invalid user sol 45.148.10.240 port 40912 [preauth]
Jan 31 07:20:54 compute-0 ceph-mon[74496]: pgmap v590: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:54 compute-0 sudo[198195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esucepoddiizsqttrftkltydvebsthda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844054.620491-1055-173301032990640/AnsiballZ_systemd.py'
Jan 31 07:20:54 compute-0 sudo[198195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:55 compute-0 python3.9[198197]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:55 compute-0 sudo[198195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:55 compute-0 sudo[198351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjitnqxfecyfoshwqzqogdnstfdbnbiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844055.345019-1055-221216368707761/AnsiballZ_systemd.py'
Jan 31 07:20:55 compute-0 sudo[198351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:55.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:20:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:20:55 compute-0 python3.9[198353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:56 compute-0 systemd[1]: Reloading.
Jan 31 07:20:56 compute-0 systemd-rc-local-generator[198386]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:56 compute-0 systemd-sysv-generator[198390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:56 compute-0 sudo[198351]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:56 compute-0 ceph-mon[74496]: pgmap v591: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:20:57 compute-0 sudo[198543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-konmuaawlxtpschusoyfaqqpbophhric ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844057.3496487-1163-46624247407089/AnsiballZ_systemd.py'
Jan 31 07:20:57 compute-0 sudo[198543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:57.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:57.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:57 compute-0 python3.9[198545]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 07:20:58 compute-0 systemd[1]: Reloading.
Jan 31 07:20:58 compute-0 systemd-sysv-generator[198577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:20:58 compute-0 systemd-rc-local-generator[198572]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:20:58 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 07:20:58 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 07:20:58 compute-0 sudo[198543]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:58 compute-0 sudo[198736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihzeprmuoqjgygkmztmcfavhnzoblvll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844058.5990086-1187-229493773057281/AnsiballZ_systemd.py'
Jan 31 07:20:58 compute-0 sudo[198736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:59 compute-0 python3.9[198738]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:20:59 compute-0 sudo[198736]: pam_unix(sudo:session): session closed for user root
Jan 31 07:20:59 compute-0 ceph-mon[74496]: pgmap v592: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:20:59 compute-0 sudo[198892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbaolosxjkpsxahdhnhfbiyemtogchky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844059.426088-1187-76244233911376/AnsiballZ_systemd.py'
Jan 31 07:20:59 compute-0 sudo[198892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:20:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:20:59.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:20:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:20:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:20:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:20:59.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:00 compute-0 python3.9[198894]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:00 compute-0 sudo[198892]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:00 compute-0 sudo[199047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbhccofaypuelvprijdgncmhoboxfagz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844060.2424262-1187-46003404120534/AnsiballZ_systemd.py'
Jan 31 07:21:00 compute-0 sudo[199047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:00 compute-0 python3.9[199049]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:00 compute-0 sudo[199047]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:01 compute-0 sudo[199203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brciykqharzybscsyvvpvtzemsmbndej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844060.998634-1187-11507963004214/AnsiballZ_systemd.py'
Jan 31 07:21:01 compute-0 sudo[199203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:01 compute-0 ceph-mon[74496]: pgmap v593: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:01 compute-0 python3.9[199205]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:01 compute-0 sudo[199203]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:01 compute-0 sudo[199290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:01 compute-0 sudo[199290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:01 compute-0 sudo[199290]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:01 compute-0 sudo[199336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:01 compute-0 sudo[199336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:01 compute-0 sudo[199336]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:02 compute-0 sudo[199408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibizldoubgyuxauhsubgpqooqcadtzls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844061.7656975-1187-1553654581498/AnsiballZ_systemd.py'
Jan 31 07:21:02 compute-0 sudo[199408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:02 compute-0 python3.9[199410]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:02 compute-0 sudo[199408]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:02 compute-0 ceph-mon[74496]: pgmap v594: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:02 compute-0 sudo[199563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckqankxrjtgjucqnokduluvifckqnptd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844062.4883916-1187-234692501640000/AnsiballZ_systemd.py'
Jan 31 07:21:02 compute-0 sudo[199563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:03 compute-0 python3.9[199565]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:03 compute-0 sudo[199563]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:03 compute-0 sudo[199719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgaommqrzsrgzmgzgiwayavpkveppyyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844063.2143595-1187-105790706588149/AnsiballZ_systemd.py'
Jan 31 07:21:03 compute-0 sudo[199719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:03 compute-0 python3.9[199721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:03 compute-0 sudo[199719]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:03.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:04 compute-0 sudo[199874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eepofrkdulcglhldsrpesnuamxigrmiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844063.904373-1187-248451612352018/AnsiballZ_systemd.py'
Jan 31 07:21:04 compute-0 sudo[199874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:04 compute-0 python3.9[199876]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:04 compute-0 sudo[199874]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:04 compute-0 ceph-mon[74496]: pgmap v595: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:04 compute-0 sudo[200029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfxaqttmqtkkndqfhjrkxuhcvaxusft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844064.6688263-1187-252351286514717/AnsiballZ_systemd.py'
Jan 31 07:21:04 compute-0 sudo[200029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:05 compute-0 python3.9[200031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:05 compute-0 sudo[200029]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:05 compute-0 sudo[200197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fowtslgppkujoxzgujzbpzoryzdbiqqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844065.4269614-1187-48810620656962/AnsiballZ_systemd.py'
Jan 31 07:21:05 compute-0 sudo[200197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:05 compute-0 podman[200159]: 2026-01-31 07:21:05.811528598 +0000 UTC m=+0.160292580 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 07:21:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:05.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:05.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:06 compute-0 python3.9[200206]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:06 compute-0 sudo[200197]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:06 compute-0 sudo[200366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bridiwneirmgbpoevqmgainkyywctlkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844066.259883-1187-268732719465544/AnsiballZ_systemd.py'
Jan 31 07:21:06 compute-0 sudo[200366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:06 compute-0 ceph-mon[74496]: pgmap v596: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:06 compute-0 python3.9[200368]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:06 compute-0 sudo[200366]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:07 compute-0 sudo[200522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccajiqxiwhuowgkkvovshlxwbmthwycr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844067.0433707-1187-119380905006440/AnsiballZ_systemd.py'
Jan 31 07:21:07 compute-0 sudo[200522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:07 compute-0 python3.9[200524]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:07 compute-0 sudo[200522]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:07.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:08 compute-0 sudo[200677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzbryakrcxjspqjglgvmfppljssqcphm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844067.7850168-1187-58149279125994/AnsiballZ_systemd.py'
Jan 31 07:21:08 compute-0 sudo[200677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:08 compute-0 python3.9[200679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:08 compute-0 sudo[200677]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:08 compute-0 ceph-mon[74496]: pgmap v597: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:08 compute-0 sudo[200832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edpcpahheoyiewgxkehjptulcirbpkfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844068.5225759-1187-25607557437182/AnsiballZ_systemd.py'
Jan 31 07:21:08 compute-0 sudo[200832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:09 compute-0 python3.9[200834]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 07:21:09 compute-0 sudo[200832]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:09.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:09.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:10 compute-0 ceph-mon[74496]: pgmap v598: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:21:11.127 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:21:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:21:11.128 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:21:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:21:11.128 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:21:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:11.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:11.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:12 compute-0 sudo[201004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eebviiuyiciewtglskoaojwjcijeozym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844071.727315-1493-111092267023509/AnsiballZ_file.py'
Jan 31 07:21:12 compute-0 podman[200963]: 2026-01-31 07:21:12.227650833 +0000 UTC m=+0.056345737 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:21:12 compute-0 sudo[201004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:12 compute-0 python3.9[201010]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:12 compute-0 sudo[201004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:12 compute-0 ceph-mon[74496]: pgmap v599: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:12 compute-0 sudo[201160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irabghahogdhwbqtbjhwyupsyqtrhsmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844072.6722412-1493-144294905255048/AnsiballZ_file.py'
Jan 31 07:21:12 compute-0 sudo[201160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:13 compute-0 python3.9[201162]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:13 compute-0 sudo[201160]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:13 compute-0 sudo[201313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnepaazpbwtcswaksydbvcwagnrjxsrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844073.3078158-1493-129733337529209/AnsiballZ_file.py'
Jan 31 07:21:13 compute-0 sudo[201313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:13 compute-0 python3.9[201315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:13 compute-0 sudo[201313]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:13.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:21:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:13.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:21:14 compute-0 sudo[201465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nanmkwwptddgebcxbeswlshhzakaabyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844073.91083-1493-122547002838034/AnsiballZ_file.py'
Jan 31 07:21:14 compute-0 sudo[201465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:14 compute-0 python3.9[201467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:14 compute-0 sudo[201465]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:14 compute-0 ceph-mon[74496]: pgmap v600: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:14 compute-0 sudo[201617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcbfgdhwpjpmmimohgownkzcdgdnuyom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844074.6033814-1493-276696025598451/AnsiballZ_file.py'
Jan 31 07:21:14 compute-0 sudo[201617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:15 compute-0 python3.9[201619]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:15 compute-0 sudo[201617]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:15 compute-0 sudo[201770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbmqvoyijcrvindlakgsckbkzxuwdqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844075.3080232-1493-40909029794432/AnsiballZ_file.py'
Jan 31 07:21:15 compute-0 sudo[201770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:15 compute-0 python3.9[201772]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:21:15 compute-0 sudo[201770]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:15.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:15.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:16 compute-0 python3.9[201922]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:21:16 compute-0 ceph-mon[74496]: pgmap v601: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:17 compute-0 sudo[202072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshmqgqihcgbtnhezhahbntjftqrjfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844076.659938-1646-227198019018828/AnsiballZ_stat.py'
Jan 31 07:21:17 compute-0 sudo[202072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:17 compute-0 python3.9[202074]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:17 compute-0 sudo[202072]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:17 compute-0 sudo[202198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoezwsusjbhczitpeyvbcyixrkfuveju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844076.659938-1646-227198019018828/AnsiballZ_copy.py'
Jan 31 07:21:17 compute-0 sudo[202198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:17.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:17.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:17 compute-0 python3.9[202200]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844076.659938-1646-227198019018828/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:18 compute-0 sudo[202198]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:18 compute-0 sudo[202350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifbpqanhgmaomouehvonsfsaukqwgdpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844078.168303-1646-85432423047318/AnsiballZ_stat.py'
Jan 31 07:21:18 compute-0 sudo[202350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:18 compute-0 python3.9[202352]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:18 compute-0 sudo[202350]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:18 compute-0 sudo[202475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkftakgaqnztvvnyivrnqouyegbqzgzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844078.168303-1646-85432423047318/AnsiballZ_copy.py'
Jan 31 07:21:18 compute-0 sudo[202475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:19 compute-0 ceph-mon[74496]: pgmap v602: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:19 compute-0 python3.9[202477]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844078.168303-1646-85432423047318/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:19 compute-0 sudo[202475]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:19 compute-0 sudo[202628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbthcbhbyoxryikyxluwakwkythydnky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844079.3298428-1646-178468642834404/AnsiballZ_stat.py'
Jan 31 07:21:19 compute-0 sudo[202628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:19 compute-0 python3.9[202630]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:19 compute-0 sudo[202628]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:21:19
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'images']
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:21:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:19.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:19.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:21:19 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:21:20 compute-0 sudo[202753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvkpcylkhpmgvzsgshupgudfnlxwunak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844079.3298428-1646-178468642834404/AnsiballZ_copy.py'
Jan 31 07:21:20 compute-0 sudo[202753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:21:20 compute-0 python3.9[202755]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844079.3298428-1646-178468642834404/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:20 compute-0 sudo[202753]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:20 compute-0 sudo[202905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahwtxgvotrcorgjzxmkjcqqhihashjsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844080.4940982-1646-75699959242902/AnsiballZ_stat.py'
Jan 31 07:21:20 compute-0 sudo[202905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:20 compute-0 python3.9[202907]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:20 compute-0 sudo[202905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:21 compute-0 ceph-mon[74496]: pgmap v603: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:21 compute-0 sudo[203031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzywazmdtdzenamddcjcsjvnmxyjdotk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844080.4940982-1646-75699959242902/AnsiballZ_copy.py'
Jan 31 07:21:21 compute-0 sudo[203031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:21 compute-0 python3.9[203033]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844080.4940982-1646-75699959242902/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:21 compute-0 sudo[203031]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:21.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:21.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:22 compute-0 sudo[203191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijkufdkztuosvnnpxggzkuhajtpzkimq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844081.7183106-1646-3555987138212/AnsiballZ_stat.py'
Jan 31 07:21:22 compute-0 sudo[203191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:22 compute-0 sudo[203174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:22 compute-0 sudo[203174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:22 compute-0 sudo[203174]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:22 compute-0 sudo[203211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:22 compute-0 sudo[203211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:22 compute-0 sudo[203211]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:22 compute-0 python3.9[203208]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:22 compute-0 sudo[203191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:22 compute-0 sudo[203358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftfueoghwwteuodnzrzvdlrqgiosncs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844081.7183106-1646-3555987138212/AnsiballZ_copy.py'
Jan 31 07:21:22 compute-0 sudo[203358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:22 compute-0 python3.9[203360]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844081.7183106-1646-3555987138212/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:22 compute-0 sudo[203358]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:23 compute-0 ceph-mon[74496]: pgmap v604: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:23 compute-0 sudo[203510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkqhmelmbovigkdrdksxhmgevetikyub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844082.9179912-1646-670969545471/AnsiballZ_stat.py'
Jan 31 07:21:23 compute-0 sudo[203510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:23 compute-0 python3.9[203512]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:23 compute-0 sudo[203510]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:23 compute-0 sudo[203636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvgjcykesgcnsngpdgaahwodyonenepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844082.9179912-1646-670969545471/AnsiballZ_copy.py'
Jan 31 07:21:23 compute-0 sudo[203636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:23.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:24 compute-0 python3.9[203638]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844082.9179912-1646-670969545471/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:24 compute-0 sudo[203636]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:24 compute-0 ceph-mon[74496]: pgmap v605: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:24 compute-0 sudo[203788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgtbqcrxvjwbomciigozqrmsjtlnyovf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844084.18635-1646-77197243507238/AnsiballZ_stat.py'
Jan 31 07:21:24 compute-0 sudo[203788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:24 compute-0 python3.9[203790]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:24 compute-0 sudo[203788]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:25 compute-0 sudo[203911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obxxwhmhipvfprjijjiatikwfznoijwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844084.18635-1646-77197243507238/AnsiballZ_copy.py'
Jan 31 07:21:25 compute-0 sudo[203911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:25 compute-0 python3.9[203913]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844084.18635-1646-77197243507238/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:25 compute-0 sudo[203911]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:25 compute-0 sudo[204064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxisvmkqyedrqnktfccfjqwyxmswgtvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844085.395986-1646-6500887089405/AnsiballZ_stat.py'
Jan 31 07:21:25 compute-0 sudo[204064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:25 compute-0 python3.9[204066]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:25 compute-0 sudo[204064]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:25.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:25.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:26 compute-0 sudo[204189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xihznngzinbkyzkzvpvnbstkrpudjhdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844085.395986-1646-6500887089405/AnsiballZ_copy.py'
Jan 31 07:21:26 compute-0 sudo[204189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:26 compute-0 python3.9[204191]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844085.395986-1646-6500887089405/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:26 compute-0 sudo[204189]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:26 compute-0 ceph-mon[74496]: pgmap v606: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:26 compute-0 sudo[204341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utqaudlkelkvuginwidjmfumauerqema ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844086.6952817-1985-79030303457171/AnsiballZ_command.py'
Jan 31 07:21:26 compute-0 sudo[204341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:27 compute-0 python3.9[204343]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 07:21:27 compute-0 sudo[204341]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:27 compute-0 sudo[204495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firwifcogybywqygzhzaufkypcupxirp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844087.5466297-2012-255980087450789/AnsiballZ_file.py'
Jan 31 07:21:27 compute-0 sudo[204495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:27.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:28 compute-0 python3.9[204497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:28 compute-0 sudo[204495]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:28 compute-0 sudo[204647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjlrsmieqdpfwtgjpefdqhhqzguodayy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844088.2699568-2012-279846858747277/AnsiballZ_file.py'
Jan 31 07:21:28 compute-0 sudo[204647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:28 compute-0 python3.9[204649]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:28 compute-0 sudo[204647]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:28 compute-0 ceph-mon[74496]: pgmap v607: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:29 compute-0 sudo[204799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kifwagzaczxowadvpsygucgrpcggiqmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844088.9429076-2012-206991790052495/AnsiballZ_file.py'
Jan 31 07:21:29 compute-0 sudo[204799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:29 compute-0 python3.9[204802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:29 compute-0 sudo[204799]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:29 compute-0 sudo[204952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmfievecddxyfwsgzuswwwgvlybgssij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844089.5627842-2012-112357210410539/AnsiballZ_file.py'
Jan 31 07:21:29 compute-0 sudo[204952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:29.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:30 compute-0 python3.9[204954]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:30 compute-0 sudo[204952]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:30 compute-0 sudo[205104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elprozpsayaymsbussyadwilsdyvbstt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844090.2632005-2012-150732465167711/AnsiballZ_file.py'
Jan 31 07:21:30 compute-0 sudo[205104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:30 compute-0 python3.9[205106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:30 compute-0 sudo[205104]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:30 compute-0 ceph-mon[74496]: pgmap v608: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:31 compute-0 sudo[205256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwvqfnickfaryiumihyrvizrxfitrmfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844090.931779-2012-174802989115783/AnsiballZ_file.py'
Jan 31 07:21:31 compute-0 sudo[205256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:31 compute-0 python3.9[205259]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:31 compute-0 sudo[205256]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:31 compute-0 sudo[205409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drxoxhoskubzxgpbkadpppttzcgpmyku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844091.5970907-2012-46507574543350/AnsiballZ_file.py'
Jan 31 07:21:31 compute-0 sudo[205409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:31.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 07:21:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 07:21:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:32 compute-0 python3.9[205411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:32 compute-0 sudo[205409]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:32 compute-0 sudo[205561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnwzusgkshbcojzlgmoijgslwlervlyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844092.2930255-2012-47502305099372/AnsiballZ_file.py'
Jan 31 07:21:32 compute-0 sudo[205561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:32 compute-0 python3.9[205563]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:32 compute-0 sudo[205561]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:32 compute-0 ceph-mon[74496]: pgmap v609: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:33 compute-0 sudo[205714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejtthyozpvfuxjwuoktvlaqodreicaio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844092.9843452-2012-251559504663475/AnsiballZ_file.py'
Jan 31 07:21:33 compute-0 sudo[205714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:33 compute-0 python3.9[205716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:33 compute-0 sudo[205714]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:33.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:34 compute-0 sudo[205866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afhtnewjusuxznewtpuhfxxbaaypjhdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844093.644962-2012-21618902522944/AnsiballZ_file.py'
Jan 31 07:21:34 compute-0 sudo[205866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:34 compute-0 python3.9[205868]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:34 compute-0 sudo[205866]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:21:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:21:34 compute-0 sudo[206018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsbgzvzaagzhjmkjztqrwvmfcgwrowdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844094.4334316-2012-143380064576331/AnsiballZ_file.py'
Jan 31 07:21:34 compute-0 sudo[206018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:34 compute-0 python3.9[206020]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:34 compute-0 ceph-mon[74496]: pgmap v610: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:34 compute-0 sudo[206018]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:35.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:35.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:36 compute-0 sudo[206183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvxqrgrvidwzgkhzimaqxskirghaynbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844095.0922215-2012-10710812192707/AnsiballZ_file.py'
Jan 31 07:21:36 compute-0 sudo[206183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:36 compute-0 podman[206145]: 2026-01-31 07:21:36.337328136 +0000 UTC m=+0.131596801 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 07:21:36 compute-0 ceph-mon[74496]: pgmap v611: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:36 compute-0 python3.9[206187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:36 compute-0 sudo[206183]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:37 compute-0 sudo[206348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yabxtsmtrzbucuugubyxevpgqdqxusjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844096.7160816-2012-260603352370465/AnsiballZ_file.py'
Jan 31 07:21:37 compute-0 sudo[206348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:37 compute-0 python3.9[206350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:37 compute-0 sudo[206348]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:37 compute-0 sudo[206501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivxnmcrgxfkivfintzrootpdwkpixiot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844097.4931772-2012-99459542966355/AnsiballZ_file.py'
Jan 31 07:21:37 compute-0 sudo[206501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:37 compute-0 python3.9[206503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:37 compute-0 sudo[206501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:37.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:38 compute-0 sudo[206653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyncpvkxrytedeipaaqcbjzeoyxdwaev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844098.168309-2309-14802488686384/AnsiballZ_stat.py'
Jan 31 07:21:38 compute-0 sudo[206653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:38 compute-0 ceph-mon[74496]: pgmap v612: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:38 compute-0 python3.9[206655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:38 compute-0 sudo[206653]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:39 compute-0 sudo[206777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylqudzckigcrwpictsjkwbnxzkogaekw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844098.168309-2309-14802488686384/AnsiballZ_copy.py'
Jan 31 07:21:39 compute-0 sudo[206777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:39 compute-0 python3.9[206779]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844098.168309-2309-14802488686384/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:39 compute-0 sudo[206777]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:39 compute-0 sudo[206929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhsjgfqrmqfmjgeojulmnyruagtapcev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844099.6772387-2309-151365807724375/AnsiballZ_stat.py'
Jan 31 07:21:39 compute-0 sudo[206929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:39.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:40 compute-0 python3.9[206931]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:40 compute-0 sudo[206929]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:40 compute-0 sudo[207052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdtphlsbcypmhdscifuayrphzeqmjitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844099.6772387-2309-151365807724375/AnsiballZ_copy.py'
Jan 31 07:21:40 compute-0 sudo[207052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:40 compute-0 python3.9[207054]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844099.6772387-2309-151365807724375/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:40 compute-0 sudo[207052]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:40 compute-0 ceph-mon[74496]: pgmap v613: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:41 compute-0 sudo[207204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-illrrpqtyhttdcrtroxusugqdfivcuda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844100.8733-2309-94596041074549/AnsiballZ_stat.py'
Jan 31 07:21:41 compute-0 sudo[207204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:41 compute-0 python3.9[207206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:41 compute-0 sudo[207204]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:41 compute-0 sudo[207328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poehidliahidqlbnpsnftsqicbmxmnxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844100.8733-2309-94596041074549/AnsiballZ_copy.py'
Jan 31 07:21:41 compute-0 sudo[207328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:41 compute-0 python3.9[207330]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844100.8733-2309-94596041074549/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:41 compute-0 sudo[207328]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:41.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:42 compute-0 sudo[207331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:42 compute-0 sudo[207331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:42 compute-0 sudo[207331]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:42 compute-0 sudo[207356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:42 compute-0 sudo[207356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:42 compute-0 sudo[207356]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:42 compute-0 ceph-mon[74496]: pgmap v614: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:42 compute-0 podman[207483]: 2026-01-31 07:21:42.879241199 +0000 UTC m=+0.050900082 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 07:21:42 compute-0 sudo[207551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxjitkboobnjcnoxwnpcufxhjvtaavyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844102.6496313-2309-169938630364640/AnsiballZ_stat.py'
Jan 31 07:21:42 compute-0 sudo[207551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:43 compute-0 python3.9[207553]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:43 compute-0 sudo[207551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:43 compute-0 sudo[207675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrseshvhzviejwzvivuohyyicsljbuxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844102.6496313-2309-169938630364640/AnsiballZ_copy.py'
Jan 31 07:21:43 compute-0 sudo[207675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:43 compute-0 python3.9[207677]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844102.6496313-2309-169938630364640/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:43 compute-0 sudo[207675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:43.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:44 compute-0 sudo[207827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhmypghfgauylldjxugcwfrdelltodol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844103.8196445-2309-269379633220931/AnsiballZ_stat.py'
Jan 31 07:21:44 compute-0 sudo[207827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:44 compute-0 python3.9[207829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:44 compute-0 sudo[207827]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:44 compute-0 sudo[207950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmysuzetfffyilnmamarsxvhdaibdewh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844103.8196445-2309-269379633220931/AnsiballZ_copy.py'
Jan 31 07:21:44 compute-0 sudo[207950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:44 compute-0 ceph-mon[74496]: pgmap v615: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:44 compute-0 python3.9[207952]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844103.8196445-2309-269379633220931/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:44 compute-0 sudo[207950]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:45 compute-0 sudo[208103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmywhlkbsasrbhlsvaesxanbhlxepmhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844105.0908136-2309-214203337606542/AnsiballZ_stat.py'
Jan 31 07:21:45 compute-0 sudo[208103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:45 compute-0 python3.9[208105]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:45 compute-0 sudo[208103]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:45 compute-0 sudo[208226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijtzyhirpokqpsxkrttcuugunruorkgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844105.0908136-2309-214203337606542/AnsiballZ_copy.py'
Jan 31 07:21:45 compute-0 sudo[208226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:46.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:46.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:46 compute-0 python3.9[208228]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844105.0908136-2309-214203337606542/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:46 compute-0 sudo[208226]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:46 compute-0 sudo[208378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdyaimfwzuuttprtephrjsskhgmmgjdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844106.2458324-2309-103211367842882/AnsiballZ_stat.py'
Jan 31 07:21:46 compute-0 sudo[208378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:46 compute-0 python3.9[208380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:46 compute-0 sudo[208378]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:46 compute-0 ceph-mon[74496]: pgmap v616: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:47 compute-0 sudo[208501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssgwtqawoxabwtgppvmxwxdjbhxreaci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844106.2458324-2309-103211367842882/AnsiballZ_copy.py'
Jan 31 07:21:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:47 compute-0 sudo[208501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:47 compute-0 python3.9[208503]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844106.2458324-2309-103211367842882/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:47 compute-0 sudo[208501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-0 sudo[208552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:47 compute-0 sudo[208552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:47 compute-0 sudo[208552]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-0 sudo[208605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:21:47 compute-0 sudo[208605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:47 compute-0 sudo[208605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-0 sudo[208631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:47 compute-0 sudo[208631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:47 compute-0 sudo[208631]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:47 compute-0 sudo[208679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:21:47 compute-0 sudo[208679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:47 compute-0 sudo[208754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piclrgwzqmmunstxjjmwqwdjelclmtjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844107.4730985-2309-166836023562966/AnsiballZ_stat.py'
Jan 31 07:21:47 compute-0 sudo[208754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:48 compute-0 python3.9[208756]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:48.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:48.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:48 compute-0 sudo[208754]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:48 compute-0 podman[208867]: 2026-01-31 07:21:48.213446109 +0000 UTC m=+0.074482706 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:21:48 compute-0 podman[208867]: 2026-01-31 07:21:48.328654403 +0000 UTC m=+0.189691050 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:21:48 compute-0 sudo[209001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miqnwdiquginifzsppbowqnwehhoxgbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844107.4730985-2309-166836023562966/AnsiballZ_copy.py'
Jan 31 07:21:48 compute-0 sudo[209001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:48 compute-0 python3.9[209005]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844107.4730985-2309-166836023562966/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:48 compute-0 sudo[209001]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:49 compute-0 ceph-mon[74496]: pgmap v617: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:49 compute-0 sudo[209265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdkkoakyzwhcyphdzswhfpxgmewkzdzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844108.8455765-2309-131785374927093/AnsiballZ_stat.py'
Jan 31 07:21:49 compute-0 sudo[209265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:49 compute-0 podman[209203]: 2026-01-31 07:21:49.272557112 +0000 UTC m=+0.274864949 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:21:49 compute-0 podman[209203]: 2026-01-31 07:21:49.294561597 +0000 UTC m=+0.296869434 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:21:49 compute-0 python3.9[209267]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:49 compute-0 sudo[209265]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:49 compute-0 podman[209348]: 2026-01-31 07:21:49.574160952 +0000 UTC m=+0.099195428 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, name=keepalived, vcs-type=git, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Jan 31 07:21:49 compute-0 podman[209416]: 2026-01-31 07:21:49.648254717 +0000 UTC m=+0.054364067 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., io.buildah.version=1.28.2)
Jan 31 07:21:49 compute-0 podman[209348]: 2026-01-31 07:21:49.671249246 +0000 UTC m=+0.196283682 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., name=keepalived, io.openshift.expose-services=, description=keepalived for Ceph, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 07:21:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:49 compute-0 sudo[208679]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:21:49 compute-0 sudo[209476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljobxakmppoiwqigzeusukwjhrmubkcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844108.8455765-2309-131785374927093/AnsiballZ_copy.py'
Jan 31 07:21:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:21:49 compute-0 sudo[209476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:49 compute-0 sudo[209479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:49 compute-0 sudo[209479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:49 compute-0 sudo[209479]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:49 compute-0 sudo[209504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:21:49 compute-0 sudo[209504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:49 compute-0 sudo[209504]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:49 compute-0 python3.9[209478]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844108.8455765-2309-131785374927093/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:21:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:21:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:21:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:21:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:21:49 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:21:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:50.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:50 compute-0 sudo[209529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:50 compute-0 sudo[209476]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-0 sudo[209529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:50.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:50 compute-0 sudo[209529]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-0 sudo[209554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:21:50 compute-0 sudo[209554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:50 compute-0 sudo[209747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cirgftdufxzeedvowgeutxpfghhiablf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844110.1516843-2309-39853939521076/AnsiballZ_stat.py'
Jan 31 07:21:50 compute-0 sudo[209747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:50 compute-0 sudo[209554]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:21:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:21:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:21:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:21:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:21:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 493d5f9e-03d8-4d61-9e7f-4c910d080b67 does not exist
Jan 31 07:21:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f1bb7684-c4ed-43c6-8b53-59e5a9a87cda does not exist
Jan 31 07:21:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 47498bcb-50b9-4fb8-9305-e862f3c2095e does not exist
Jan 31 07:21:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:21:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:21:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:21:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:21:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:21:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:21:50 compute-0 sudo[209762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:50 compute-0 sudo[209762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:50 compute-0 sudo[209762]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-0 python3.9[209749]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:50 compute-0 sudo[209747]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-0 sudo[209787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:21:50 compute-0 sudo[209787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:50 compute-0 sudo[209787]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-0 sudo[209830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:50 compute-0 sudo[209830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:50 compute-0 sudo[209830]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:50 compute-0 sudo[209864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:21:50 compute-0 sudo[209864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:50 compute-0 ceph-mon[74496]: pgmap v618: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:21:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:21:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:21:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:21:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:21:50 compute-0 sudo[210005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-canbvysslfsjiyiasdzqupbycgdmppaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844110.1516843-2309-39853939521076/AnsiballZ_copy.py'
Jan 31 07:21:50 compute-0 sudo[210005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:51 compute-0 podman[210022]: 2026-01-31 07:21:51.119006065 +0000 UTC m=+0.065439001 container create b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:21:51 compute-0 podman[210022]: 2026-01-31 07:21:51.078385449 +0000 UTC m=+0.024818415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:21:51 compute-0 systemd[1]: Started libpod-conmon-b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914.scope.
Jan 31 07:21:51 compute-0 python3.9[210007]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844110.1516843-2309-39853939521076/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:21:51 compute-0 sudo[210005]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:51 compute-0 podman[210022]: 2026-01-31 07:21:51.245852317 +0000 UTC m=+0.192285333 container init b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haibt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:21:51 compute-0 podman[210022]: 2026-01-31 07:21:51.253844926 +0000 UTC m=+0.200277902 container start b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:21:51 compute-0 nervous_haibt[210038]: 167 167
Jan 31 07:21:51 compute-0 systemd[1]: libpod-b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914.scope: Deactivated successfully.
Jan 31 07:21:51 compute-0 podman[210022]: 2026-01-31 07:21:51.263882434 +0000 UTC m=+0.210315380 container attach b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:21:51 compute-0 podman[210022]: 2026-01-31 07:21:51.264348915 +0000 UTC m=+0.210781871 container died b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a97491083b1aa075a50f7411ccfb4f00245458b9bb67afeac236a5c02dd2e8-merged.mount: Deactivated successfully.
Jan 31 07:21:51 compute-0 podman[210022]: 2026-01-31 07:21:51.332820051 +0000 UTC m=+0.279253007 container remove b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:21:51 compute-0 systemd[1]: libpod-conmon-b63a1c62c89f52ddb2ffdac6b52eaf12a427582939047ca39fd43e504713c914.scope: Deactivated successfully.
Jan 31 07:21:51 compute-0 podman[210139]: 2026-01-31 07:21:51.479018103 +0000 UTC m=+0.045388456 container create 54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:21:51 compute-0 systemd[1]: Started libpod-conmon-54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601.scope.
Jan 31 07:21:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bea5afc404281df47b080a37c48b45b3cab743770080fe01026a7cea7dc8c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bea5afc404281df47b080a37c48b45b3cab743770080fe01026a7cea7dc8c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bea5afc404281df47b080a37c48b45b3cab743770080fe01026a7cea7dc8c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bea5afc404281df47b080a37c48b45b3cab743770080fe01026a7cea7dc8c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37bea5afc404281df47b080a37c48b45b3cab743770080fe01026a7cea7dc8c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:51 compute-0 podman[210139]: 2026-01-31 07:21:51.552718168 +0000 UTC m=+0.119088521 container init 54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:21:51 compute-0 podman[210139]: 2026-01-31 07:21:51.458105285 +0000 UTC m=+0.024475668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:21:51 compute-0 podman[210139]: 2026-01-31 07:21:51.608658493 +0000 UTC m=+0.175028826 container start 54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:21:51 compute-0 podman[210139]: 2026-01-31 07:21:51.612395916 +0000 UTC m=+0.178766289 container attach 54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:21:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:51 compute-0 sudo[210233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agbkqjsyznlkbusujdsgxjpmdyyjoblf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844111.3635244-2309-274842486073457/AnsiballZ_stat.py'
Jan 31 07:21:51 compute-0 sudo[210233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:51 compute-0 python3.9[210235]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:51 compute-0 sudo[210233]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:52.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:52.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:52 compute-0 sudo[210356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ascqyeqrwygkxyaosjgbyaubwnhdmuel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844111.3635244-2309-274842486073457/AnsiballZ_copy.py'
Jan 31 07:21:52 compute-0 sudo[210356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:52 compute-0 frosty_chebyshev[210178]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:21:52 compute-0 frosty_chebyshev[210178]: --> relative data size: 1.0
Jan 31 07:21:52 compute-0 frosty_chebyshev[210178]: --> All data devices are unavailable
Jan 31 07:21:52 compute-0 systemd[1]: libpod-54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601.scope: Deactivated successfully.
Jan 31 07:21:52 compute-0 podman[210139]: 2026-01-31 07:21:52.42934295 +0000 UTC m=+0.995713303 container died 54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:21:52 compute-0 python3.9[210358]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844111.3635244-2309-274842486073457/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-37bea5afc404281df47b080a37c48b45b3cab743770080fe01026a7cea7dc8c8-merged.mount: Deactivated successfully.
Jan 31 07:21:52 compute-0 sudo[210356]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:52 compute-0 podman[210139]: 2026-01-31 07:21:52.487760188 +0000 UTC m=+1.054130521 container remove 54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:21:52 compute-0 systemd[1]: libpod-conmon-54c3ed420f61246dfe118b3c4a0705dad25ae94ecc50dbddd0c4444c67e76601.scope: Deactivated successfully.
Jan 31 07:21:52 compute-0 sudo[209864]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:52 compute-0 sudo[210391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:52 compute-0 sudo[210391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:52 compute-0 sudo[210391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:52 compute-0 sudo[210432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:21:52 compute-0 sudo[210432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:52 compute-0 sudo[210432]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:52 compute-0 sudo[210480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:52 compute-0 sudo[210480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:52 compute-0 sudo[210480]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:52 compute-0 sudo[210528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:21:52 compute-0 sudo[210528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:52 compute-0 sudo[210632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcxeuoeuflrxamnadmzeopqxuwoxvvef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844112.5980134-2309-26949085628129/AnsiballZ_stat.py'
Jan 31 07:21:52 compute-0 sudo[210632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:52 compute-0 ceph-mon[74496]: pgmap v619: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:52 compute-0 python3.9[210636]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:52 compute-0 podman[210675]: 2026-01-31 07:21:52.985190698 +0000 UTC m=+0.032048605 container create b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:21:52 compute-0 sudo[210632]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:53 compute-0 systemd[1]: Started libpod-conmon-b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36.scope.
Jan 31 07:21:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:21:53 compute-0 podman[210675]: 2026-01-31 07:21:53.057578381 +0000 UTC m=+0.104436308 container init b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:21:53 compute-0 podman[210675]: 2026-01-31 07:21:53.066746609 +0000 UTC m=+0.113604516 container start b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:21:53 compute-0 podman[210675]: 2026-01-31 07:21:52.97149521 +0000 UTC m=+0.018353137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:21:53 compute-0 podman[210675]: 2026-01-31 07:21:53.070275696 +0000 UTC m=+0.117133623 container attach b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lumiere, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:21:53 compute-0 magical_lumiere[210691]: 167 167
Jan 31 07:21:53 compute-0 systemd[1]: libpod-b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36.scope: Deactivated successfully.
Jan 31 07:21:53 compute-0 podman[210675]: 2026-01-31 07:21:53.072387168 +0000 UTC m=+0.119245115 container died b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-30e38487066b42cacf1ba509cfa762b0e41820cef6a41b5e05c72d14e75f1559-merged.mount: Deactivated successfully.
Jan 31 07:21:53 compute-0 podman[210675]: 2026-01-31 07:21:53.118332666 +0000 UTC m=+0.165190583 container remove b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:21:53 compute-0 systemd[1]: libpod-conmon-b3010e41af60c8fb23d12e00e1256ed87cf45a16bc4639a5019403fdeb577d36.scope: Deactivated successfully.
Jan 31 07:21:53 compute-0 podman[210787]: 2026-01-31 07:21:53.262613249 +0000 UTC m=+0.047803025 container create 991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 07:21:53 compute-0 systemd[1]: Started libpod-conmon-991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b.scope.
Jan 31 07:21:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a12417359983a494511a05cf3ec356cd272116b83400c6554eb59303d8132f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a12417359983a494511a05cf3ec356cd272116b83400c6554eb59303d8132f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a12417359983a494511a05cf3ec356cd272116b83400c6554eb59303d8132f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a12417359983a494511a05cf3ec356cd272116b83400c6554eb59303d8132f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:53 compute-0 podman[210787]: 2026-01-31 07:21:53.243484276 +0000 UTC m=+0.028674102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:21:53 compute-0 sudo[210857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkmaoukflzbpyjcuqzxhewychuqctkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844112.5980134-2309-26949085628129/AnsiballZ_copy.py'
Jan 31 07:21:53 compute-0 podman[210787]: 2026-01-31 07:21:53.355716916 +0000 UTC m=+0.140906702 container init 991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hoover, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:21:53 compute-0 sudo[210857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:53 compute-0 podman[210787]: 2026-01-31 07:21:53.362618377 +0000 UTC m=+0.147808143 container start 991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:21:53 compute-0 podman[210787]: 2026-01-31 07:21:53.366181625 +0000 UTC m=+0.151371421 container attach 991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:21:53 compute-0 python3.9[210860]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844112.5980134-2309-26949085628129/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:53 compute-0 sudo[210857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:54.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:21:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:54.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:21:54 compute-0 sudo[211012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytnhxitbgnrszhakeniimnuhgnuswgiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844113.7320135-2309-180152922028244/AnsiballZ_stat.py'
Jan 31 07:21:54 compute-0 sudo[211012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:54 compute-0 bold_hoover[210828]: {
Jan 31 07:21:54 compute-0 bold_hoover[210828]:     "0": [
Jan 31 07:21:54 compute-0 bold_hoover[210828]:         {
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "devices": [
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "/dev/loop3"
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             ],
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "lv_name": "ceph_lv0",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "lv_size": "7511998464",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "name": "ceph_lv0",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "tags": {
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.cluster_name": "ceph",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.crush_device_class": "",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.encrypted": "0",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.osd_id": "0",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.type": "block",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:                 "ceph.vdo": "0"
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             },
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "type": "block",
Jan 31 07:21:54 compute-0 bold_hoover[210828]:             "vg_name": "ceph_vg0"
Jan 31 07:21:54 compute-0 bold_hoover[210828]:         }
Jan 31 07:21:54 compute-0 bold_hoover[210828]:     ]
Jan 31 07:21:54 compute-0 bold_hoover[210828]: }
Jan 31 07:21:54 compute-0 systemd[1]: libpod-991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b.scope: Deactivated successfully.
Jan 31 07:21:54 compute-0 podman[210787]: 2026-01-31 07:21:54.180342771 +0000 UTC m=+0.965532547 container died 991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-91a12417359983a494511a05cf3ec356cd272116b83400c6554eb59303d8132f-merged.mount: Deactivated successfully.
Jan 31 07:21:54 compute-0 podman[210787]: 2026-01-31 07:21:54.23522438 +0000 UTC m=+1.020414156 container remove 991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:21:54 compute-0 systemd[1]: libpod-conmon-991c9f691cb252eb9f36092aaf96419632c20d1855cc13864016f3345671278b.scope: Deactivated successfully.
Jan 31 07:21:54 compute-0 sudo[210528]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:54 compute-0 python3.9[211015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:54 compute-0 sudo[211012]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:54 compute-0 sudo[211032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:54 compute-0 sudo[211032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:54 compute-0 sudo[211032]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:54 compute-0 sudo[211057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:21:54 compute-0 sudo[211057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:54 compute-0 sudo[211057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:54 compute-0 sudo[211105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:54 compute-0 sudo[211105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:54 compute-0 sudo[211105]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:54 compute-0 sudo[211154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:21:54 compute-0 sudo[211154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:54 compute-0 sudo[211266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhdpaihcqdowfxyxgszogbmvewbianxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844113.7320135-2309-180152922028244/AnsiballZ_copy.py'
Jan 31 07:21:54 compute-0 sudo[211266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:54 compute-0 podman[211297]: 2026-01-31 07:21:54.805835473 +0000 UTC m=+0.039243593 container create 987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:21:54 compute-0 systemd[1]: Started libpod-conmon-987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22.scope.
Jan 31 07:21:54 compute-0 python3.9[211278]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844113.7320135-2309-180152922028244/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:54 compute-0 podman[211297]: 2026-01-31 07:21:54.789189881 +0000 UTC m=+0.022598011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:21:54 compute-0 sudo[211266]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:21:54 compute-0 podman[211297]: 2026-01-31 07:21:54.902694092 +0000 UTC m=+0.136102302 container init 987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:21:54 compute-0 podman[211297]: 2026-01-31 07:21:54.91027473 +0000 UTC m=+0.143682830 container start 987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_black, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:21:54 compute-0 podman[211297]: 2026-01-31 07:21:54.914891215 +0000 UTC m=+0.148299395 container attach 987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:21:54 compute-0 thirsty_black[211314]: 167 167
Jan 31 07:21:54 compute-0 systemd[1]: libpod-987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22.scope: Deactivated successfully.
Jan 31 07:21:54 compute-0 podman[211297]: 2026-01-31 07:21:54.916089073 +0000 UTC m=+0.149497173 container died 987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:21:54 compute-0 ceph-mon[74496]: pgmap v620: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-28e97b95c7a981b1ab8efa16ff6474cdc3566d68c0223317884a15dd81fe0ef4-merged.mount: Deactivated successfully.
Jan 31 07:21:54 compute-0 podman[211297]: 2026-01-31 07:21:54.960192176 +0000 UTC m=+0.193600286 container remove 987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:21:54 compute-0 systemd[1]: libpod-conmon-987777f3587faa3b0e763d3bac9ac4343fdc01a0ea45a6989055aa6b262a2c22.scope: Deactivated successfully.
Jan 31 07:21:55 compute-0 podman[211414]: 2026-01-31 07:21:55.143530877 +0000 UTC m=+0.057143206 container create eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:21:55 compute-0 systemd[1]: Started libpod-conmon-eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218.scope.
Jan 31 07:21:55 compute-0 podman[211414]: 2026-01-31 07:21:55.117885562 +0000 UTC m=+0.031497941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:21:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d4f893bdd9a37e2e6084de55955c608778cff3fd37b4a75966b1ca68968925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d4f893bdd9a37e2e6084de55955c608778cff3fd37b4a75966b1ca68968925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d4f893bdd9a37e2e6084de55955c608778cff3fd37b4a75966b1ca68968925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0d4f893bdd9a37e2e6084de55955c608778cff3fd37b4a75966b1ca68968925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:21:55 compute-0 podman[211414]: 2026-01-31 07:21:55.259550651 +0000 UTC m=+0.173162970 container init eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:21:55 compute-0 podman[211414]: 2026-01-31 07:21:55.269782274 +0000 UTC m=+0.183394573 container start eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:21:55 compute-0 podman[211414]: 2026-01-31 07:21:55.281294059 +0000 UTC m=+0.194906388 container attach eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:21:55 compute-0 sudo[211509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyinyrstbvpfoigjlzqwssntqobdtzpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844115.019014-2309-211288199531811/AnsiballZ_stat.py'
Jan 31 07:21:55 compute-0 sudo[211509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:55 compute-0 python3.9[211511]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:21:55 compute-0 sudo[211509]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:55 compute-0 sudo[211633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftagfvwpucpfkbqeznxvcyqxmboqvxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844115.019014-2309-211288199531811/AnsiballZ_copy.py'
Jan 31 07:21:55 compute-0 sudo[211633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:56.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:56.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:56 compute-0 festive_colden[211457]: {
Jan 31 07:21:56 compute-0 festive_colden[211457]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:21:56 compute-0 festive_colden[211457]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:21:56 compute-0 festive_colden[211457]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:21:56 compute-0 festive_colden[211457]:         "osd_id": 0,
Jan 31 07:21:56 compute-0 festive_colden[211457]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:21:56 compute-0 festive_colden[211457]:         "type": "bluestore"
Jan 31 07:21:56 compute-0 festive_colden[211457]:     }
Jan 31 07:21:56 compute-0 festive_colden[211457]: }
Jan 31 07:21:56 compute-0 systemd[1]: libpod-eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218.scope: Deactivated successfully.
Jan 31 07:21:56 compute-0 podman[211414]: 2026-01-31 07:21:56.133936838 +0000 UTC m=+1.047549167 container died eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:21:56 compute-0 python3.9[211636]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844115.019014-2309-211288199531811/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0d4f893bdd9a37e2e6084de55955c608778cff3fd37b4a75966b1ca68968925-merged.mount: Deactivated successfully.
Jan 31 07:21:56 compute-0 sudo[211633]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:56 compute-0 podman[211414]: 2026-01-31 07:21:56.181785313 +0000 UTC m=+1.095397612 container remove eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:21:56 compute-0 systemd[1]: libpod-conmon-eb326807153c4d362c686c00b9c09804be06a8a97d00dc9c4105ee4d8cda0218.scope: Deactivated successfully.
Jan 31 07:21:56 compute-0 sudo[211154]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:21:56 compute-0 python3.9[211810]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:21:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:21:56 compute-0 ceph-mon[74496]: pgmap v621: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cbc8e9ca-5eaa-4300-9f7b-45d26afb09c2 does not exist
Jan 31 07:21:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 961b398c-b005-4cae-a95b-2691aed37485 does not exist
Jan 31 07:21:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3abef406-e67a-4f0c-a01c-300e554bf506 does not exist
Jan 31 07:21:57 compute-0 sudo[211838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:21:57 compute-0 sudo[211838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:21:57 compute-0 sudo[211838]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:57 compute-0 sudo[211863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:21:57 compute-0 sudo[211863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:21:57 compute-0 sudo[211863]: pam_unix(sudo:session): session closed for user root
Jan 31 07:21:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:21:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:21:58.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:21:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:21:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:21:58.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:21:58 compute-0 sudo[212014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhuxzouegfnifzogdzubmhtnxxvvpie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844117.7515569-2927-46467392484076/AnsiballZ_seboolean.py'
Jan 31 07:21:58 compute-0 sudo[212014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:21:59 compute-0 ceph-mon[74496]: pgmap v622: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:21:59 compute-0 python3.9[212016]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 07:21:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:00.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:00.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:00 compute-0 ceph-mon[74496]: pgmap v623: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:00 compute-0 sudo[212014]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:00 compute-0 sudo[212171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rodcwqgzardmjulpemfpvtkcumzslzcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844120.4479241-2951-56761103263390/AnsiballZ_copy.py'
Jan 31 07:22:00 compute-0 dbus-broker-launch[810]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 07:22:00 compute-0 sudo[212171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:01 compute-0 python3.9[212173]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:01 compute-0 sudo[212171]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:01 compute-0 sudo[212324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztbvgbmluapnfevqnxopewkzkebdeuuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844121.22407-2951-103887824448865/AnsiballZ_copy.py'
Jan 31 07:22:01 compute-0 sudo[212324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:01 compute-0 python3.9[212326]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:01 compute-0 sudo[212324]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:02.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:02 compute-0 sudo[212476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vanybmknqsrxgugbzpblysmgapehqjte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844121.8542657-2951-215885598733506/AnsiballZ_copy.py'
Jan 31 07:22:02 compute-0 sudo[212476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:02 compute-0 sudo[212479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:02 compute-0 sudo[212479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:02 compute-0 sudo[212479]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:02 compute-0 python3.9[212478]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:02 compute-0 sudo[212504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:02 compute-0 sudo[212504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:02 compute-0 sudo[212504]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:02 compute-0 sudo[212476]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:02 compute-0 sudo[212678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmcfycfwnroizvjyuljlyklymlsuyyiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844122.4468577-2951-281964189329/AnsiballZ_copy.py'
Jan 31 07:22:02 compute-0 sudo[212678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:02 compute-0 ceph-mon[74496]: pgmap v624: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:02 compute-0 python3.9[212680]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:02 compute-0 sudo[212678]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:03 compute-0 sudo[212831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giuxaceavokkjbutqbqqmdqoxgkopuyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844123.0668092-2951-29976923439962/AnsiballZ_copy.py'
Jan 31 07:22:03 compute-0 sudo[212831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:03 compute-0 python3.9[212833]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:03 compute-0 sudo[212831]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:04 compute-0 sudo[212983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlnzsdahonbitppkzwtgugqhrcenwapd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844123.7279449-3059-132219745249453/AnsiballZ_copy.py'
Jan 31 07:22:04 compute-0 sudo[212983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:04.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:04.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:04 compute-0 python3.9[212985]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:04 compute-0 sudo[212983]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:04 compute-0 sudo[213135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmgaxzcpdhqdizkiljnogkecytapmcox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844124.3748848-3059-58431977648326/AnsiballZ_copy.py'
Jan 31 07:22:04 compute-0 sudo[213135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:04 compute-0 ceph-mon[74496]: pgmap v625: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:04 compute-0 python3.9[213137]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:04 compute-0 sudo[213135]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:05 compute-0 sudo[213288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfcqinwwqfjzcaaqkswsvnktgofwprvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844125.0371783-3059-217391686118076/AnsiballZ_copy.py'
Jan 31 07:22:05 compute-0 sudo[213288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:05 compute-0 python3.9[213290]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:05 compute-0 sudo[213288]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:06 compute-0 sudo[213440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vllkitoqnhqmsuoqreqsepoazekurugq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844125.728619-3059-275639702497163/AnsiballZ_copy.py'
Jan 31 07:22:06 compute-0 sudo[213440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:06.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:06.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:06 compute-0 python3.9[213442]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:06 compute-0 sudo[213440]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:06 compute-0 sudo[213603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmywandverqwprlepbjevbtwcpslqcgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844126.4159117-3059-172303275747053/AnsiballZ_copy.py'
Jan 31 07:22:06 compute-0 sudo[213603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:06 compute-0 ceph-mon[74496]: pgmap v626: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:06 compute-0 podman[213566]: 2026-01-31 07:22:06.881152681 +0000 UTC m=+0.128621767 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:22:07 compute-0 python3.9[213613]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:07 compute-0 sudo[213603]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:07 compute-0 sudo[213771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxukltybszffuziecozzwkjlytafgnux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844127.2465286-3167-21983155416626/AnsiballZ_systemd.py'
Jan 31 07:22:07 compute-0 sudo[213771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:07 compute-0 python3.9[213773]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:07 compute-0 systemd[1]: Reloading.
Jan 31 07:22:08 compute-0 systemd-rc-local-generator[213801]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:08.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:08 compute-0 systemd-sysv-generator[213805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:08.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:08 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 07:22:08 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 07:22:08 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 07:22:08 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 07:22:08 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 31 07:22:08 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 31 07:22:08 compute-0 sudo[213771]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:08 compute-0 sudo[213965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjfqtavgisyjyczevmuoayzbgewphjmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844128.5676365-3167-233128594610004/AnsiballZ_systemd.py'
Jan 31 07:22:08 compute-0 sudo[213965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:08 compute-0 ceph-mon[74496]: pgmap v627: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:09 compute-0 python3.9[213967]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:09 compute-0 systemd[1]: Reloading.
Jan 31 07:22:09 compute-0 systemd-rc-local-generator[213997]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:09 compute-0 systemd-sysv-generator[214000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:09 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 07:22:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 07:22:09 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 07:22:09 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 07:22:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 07:22:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 07:22:09 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 07:22:09 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 07:22:09 compute-0 sudo[213965]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:09 compute-0 sudo[214184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkcfhzyaovccxszzkbpuvhouchlsnoul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844129.6045463-3167-225461401363080/AnsiballZ_systemd.py'
Jan 31 07:22:09 compute-0 sudo[214184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:22:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:10.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:22:10 compute-0 python3.9[214186]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:10 compute-0 systemd[1]: Reloading.
Jan 31 07:22:10 compute-0 systemd-rc-local-generator[214214]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:10 compute-0 systemd-sysv-generator[214218]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:10 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 07:22:10 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 07:22:10 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 07:22:10 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 07:22:10 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 07:22:10 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 07:22:10 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 07:22:10 compute-0 sudo[214184]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:10 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 07:22:10 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 07:22:10 compute-0 ceph-mon[74496]: pgmap v628: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:10 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 07:22:10 compute-0 sudo[214403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqivwgkgbchbbpibjodqzxccbxtzgchr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844130.7094219-3167-110357483383568/AnsiballZ_systemd.py'
Jan 31 07:22:10 compute-0 sudo[214403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:22:11.128 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:22:11.129 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:22:11.129 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:22:11 compute-0 python3.9[214405]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:11 compute-0 systemd[1]: Reloading.
Jan 31 07:22:11 compute-0 systemd-rc-local-generator[214430]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:11 compute-0 systemd-sysv-generator[214433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:11 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 07:22:11 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 07:22:11 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 07:22:11 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 07:22:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 07:22:11 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 07:22:11 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 07:22:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 07:22:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 07:22:11 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 07:22:11 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 07:22:11 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 07:22:11 compute-0 sudo[214403]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:11 compute-0 setroubleshoot[214223]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 587efebe-c925-484c-a5b6-3098ae250a6c
Jan 31 07:22:11 compute-0 setroubleshoot[214223]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 07:22:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:12.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:12 compute-0 sudo[214622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnpqiszuowjyjwisbvskwqdycfdqprkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844131.9141917-3167-277988731094/AnsiballZ_systemd.py'
Jan 31 07:22:12 compute-0 sudo[214622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:12 compute-0 python3.9[214624]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:22:12 compute-0 systemd[1]: Reloading.
Jan 31 07:22:12 compute-0 systemd-sysv-generator[214653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:12 compute-0 systemd-rc-local-generator[214650]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:12 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 07:22:12 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 07:22:12 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 07:22:12 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 07:22:12 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 07:22:12 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 07:22:12 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 31 07:22:12 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 31 07:22:12 compute-0 sudo[214622]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:12 compute-0 ceph-mon[74496]: pgmap v629: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:13 compute-0 sudo[214844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huosnaybkpavflmfwtlcmazmyfpfowec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844133.2980678-3278-223672919263971/AnsiballZ_file.py'
Jan 31 07:22:13 compute-0 sudo[214844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:13 compute-0 podman[214808]: 2026-01-31 07:22:13.618179178 +0000 UTC m=+0.083692594 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:22:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:13 compute-0 python3.9[214853]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:13 compute-0 sudo[214844]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:14.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:14 compute-0 sudo[215005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-makrzfjjrenvjsczqojfpivbptjrsmqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844134.4578142-3302-94699540537129/AnsiballZ_find.py'
Jan 31 07:22:14 compute-0 sudo[215005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:14 compute-0 python3.9[215007]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:22:14 compute-0 sudo[215005]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:14 compute-0 ceph-mon[74496]: pgmap v630: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:15 compute-0 sudo[215158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqmxmwbgdsseqqkafogjeevdlojvjinn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844135.1538625-3326-30880652300530/AnsiballZ_command.py'
Jan 31 07:22:15 compute-0 sudo[215158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:15 compute-0 python3.9[215160]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:15 compute-0 sudo[215158]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:16.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:16.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:16 compute-0 python3.9[215314]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:22:16 compute-0 ceph-mon[74496]: pgmap v631: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:17 compute-0 python3.9[215465]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:18.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:18.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:18 compute-0 python3.9[215586]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844137.1937342-3383-171644425409467/.source.xml follow=False _original_basename=secret.xml.j2 checksum=e49dd15d2c7191e2dea7492d81017d486826e706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:18 compute-0 sudo[215736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxjstmswqtvwrfhtbeyuscvvpbdsray ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844138.3684726-3428-118602000425197/AnsiballZ_command.py'
Jan 31 07:22:18 compute-0 sudo[215736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:18 compute-0 python3.9[215738]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine f70fcd2a-dcb4-5f89-a4ba-79a09959083b
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:18 compute-0 polkitd[43587]: Registered Authentication Agent for unix-process:215740:438150 (system bus name :1.2915 [pkttyagent --process 215740 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:22:18 compute-0 polkitd[43587]: Unregistered Authentication Agent for unix-process:215740:438150 (system bus name :1.2915, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:22:18 compute-0 polkitd[43587]: Registered Authentication Agent for unix-process:215739:438149 (system bus name :1.2916 [pkttyagent --process 215739 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:22:18 compute-0 polkitd[43587]: Unregistered Authentication Agent for unix-process:215739:438149 (system bus name :1.2916, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:22:18 compute-0 sudo[215736]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:19 compute-0 ceph-mon[74496]: pgmap v632: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:22:19
Jan 31 07:22:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:22:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:22:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.rgw.root', '.mgr']
Jan 31 07:22:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:22:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:20.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:20.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:22:20 compute-0 python3.9[215901]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:21 compute-0 sudo[216051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehsvxbwibnxtedoxhinavnftnyepvmnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844140.8174686-3476-65254171775226/AnsiballZ_command.py'
Jan 31 07:22:21 compute-0 sudo[216051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:21 compute-0 ceph-mon[74496]: pgmap v633: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:21 compute-0 sudo[216051]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:21 compute-0 sudo[216205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yukhxareepdiczynlefmnrjgxcvlecyk ; FSID=f70fcd2a-dcb4-5f89-a4ba-79a09959083b KEY=AQBjqX1pAAAAABAAZUDJ8pReeykI0ZmVlnkCdQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844141.5130742-3500-106754315343796/AnsiballZ_command.py'
Jan 31 07:22:21 compute-0 sudo[216205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:21 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 07:22:21 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 07:22:21 compute-0 polkitd[43587]: Registered Authentication Agent for unix-process:216208:438463 (system bus name :1.2919 [pkttyagent --process 216208 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 07:22:21 compute-0 polkitd[43587]: Unregistered Authentication Agent for unix-process:216208:438463 (system bus name :1.2919, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 07:22:22 compute-0 sudo[216205]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:22.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:22 compute-0 ceph-mon[74496]: pgmap v634: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:22 compute-0 sudo[216254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:22 compute-0 sudo[216254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:22 compute-0 sudo[216254]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:22 compute-0 sudo[216312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:22 compute-0 sudo[216312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:22 compute-0 sudo[216312]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:22 compute-0 sudo[216413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auqnrkgtgvupfigyrtantenimgiabhvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844142.3643322-3524-182963662401141/AnsiballZ_copy.py'
Jan 31 07:22:22 compute-0 sudo[216413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:22 compute-0 python3.9[216415]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:22 compute-0 sudo[216413]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:23 compute-0 sudo[216566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egunnkbeiffvimhsjnheimkswhhvnqwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844143.1011457-3548-6234844198168/AnsiballZ_stat.py'
Jan 31 07:22:23 compute-0 sudo[216566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:23 compute-0 python3.9[216568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:23 compute-0 sudo[216566]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:24 compute-0 sudo[216689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjlngaepcwpradugkxyggktetdzfkkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844143.1011457-3548-6234844198168/AnsiballZ_copy.py'
Jan 31 07:22:24 compute-0 sudo[216689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:24.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:24 compute-0 python3.9[216691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844143.1011457-3548-6234844198168/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:24 compute-0 sudo[216689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:24 compute-0 ceph-mon[74496]: pgmap v635: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:24 compute-0 sudo[216841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcsqkzacvfupocwrjxdbkzvpaoyezixh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844144.6715882-3596-194643593625736/AnsiballZ_file.py'
Jan 31 07:22:24 compute-0 sudo[216841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:25 compute-0 python3.9[216843]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:25 compute-0 sudo[216841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:25 compute-0 sudo[216994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfmucohqhtmlzjtpbemjvdmpujsoefex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844145.3781588-3620-186725079008070/AnsiballZ_stat.py'
Jan 31 07:22:25 compute-0 sudo[216994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:25 compute-0 python3.9[216996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:25 compute-0 sudo[216994]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:26 compute-0 sudo[217072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daqyegcaxlbqxcoosqakjpeysrjnschk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844145.3781588-3620-186725079008070/AnsiballZ_file.py'
Jan 31 07:22:26 compute-0 sudo[217072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:26.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:26 compute-0 python3.9[217074]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:26 compute-0 sudo[217072]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:26 compute-0 ceph-mon[74496]: pgmap v636: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:26 compute-0 sudo[217224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayjjrdwoszthxowayxrpeyclsayhbrhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844146.5730288-3656-182257466161443/AnsiballZ_stat.py'
Jan 31 07:22:26 compute-0 sudo[217224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:27 compute-0 python3.9[217226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:27 compute-0 sudo[217224]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:27 compute-0 sudo[217303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tigpbnizuyzufeicnrwjkxpifqmppmwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844146.5730288-3656-182257466161443/AnsiballZ_file.py'
Jan 31 07:22:27 compute-0 sudo[217303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:27 compute-0 python3.9[217305]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ax90zvvk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:27 compute-0 sudo[217303]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:28 compute-0 sudo[217455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szdzgvuxhrbkkcawygczqeeyqyfqairt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844147.7897606-3692-44803471858933/AnsiballZ_stat.py'
Jan 31 07:22:28 compute-0 sudo[217455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:28.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:28.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:28 compute-0 python3.9[217457]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:28 compute-0 sudo[217455]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:28 compute-0 sudo[217533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmcaxvuxipzijpitxdxbtycduknzpnkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844147.7897606-3692-44803471858933/AnsiballZ_file.py'
Jan 31 07:22:28 compute-0 sudo[217533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:28 compute-0 python3.9[217535]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:28 compute-0 sudo[217533]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:28 compute-0 ceph-mon[74496]: pgmap v637: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:29 compute-0 sudo[217686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xepfbyzjunvkdrdosllagyishkcxzhvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844148.9747093-3731-253908158088722/AnsiballZ_command.py'
Jan 31 07:22:29 compute-0 sudo[217686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:29 compute-0 python3.9[217688]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:29 compute-0 sudo[217686]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:30.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:30 compute-0 sudo[217839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myyhucfoktmpexrxzacdvhwnylvakzev ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769844149.6594176-3755-120962373929549/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 07:22:30 compute-0 sudo[217839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:30 compute-0 python3[217841]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 07:22:30 compute-0 sudo[217839]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:30 compute-0 ceph-mon[74496]: pgmap v638: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:31 compute-0 sudo[217991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jajzfwrpqjzxnvyydwinzdsmksedpqor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844150.7487404-3779-201749060823287/AnsiballZ_stat.py'
Jan 31 07:22:31 compute-0 sudo[217991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:31 compute-0 python3.9[217993]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:31 compute-0 sudo[217991]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:31 compute-0 sudo[218070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esmtqnvervfcrlktpbtthfgslwgvtuxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844150.7487404-3779-201749060823287/AnsiballZ_file.py'
Jan 31 07:22:31 compute-0 sudo[218070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:31 compute-0 python3.9[218072]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:31 compute-0 sudo[218070]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:32.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:32.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:32 compute-0 sudo[218222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwmbuzrstaibsirkcayjqicjjwxysxrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844151.9954376-3815-59759895030610/AnsiballZ_stat.py'
Jan 31 07:22:32 compute-0 sudo[218222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:32 compute-0 python3.9[218224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:32 compute-0 sudo[218222]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:32 compute-0 ceph-mon[74496]: pgmap v639: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:33 compute-0 sudo[218347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwqhryfxhjuniwdvskkutdvwvcdtjczj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844151.9954376-3815-59759895030610/AnsiballZ_copy.py'
Jan 31 07:22:33 compute-0 sudo[218347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:33 compute-0 python3.9[218349]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844151.9954376-3815-59759895030610/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:33 compute-0 sudo[218347]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:33 compute-0 sudo[218500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydsrvsdfheihnnwtuhpsxlrlsatzeqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844153.5986493-3860-248868074669060/AnsiballZ_stat.py'
Jan 31 07:22:33 compute-0 sudo[218500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:34.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:34 compute-0 python3.9[218502]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:34.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:34 compute-0 sudo[218500]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:34 compute-0 sudo[218578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqtpulsucjtoszbiscerzrrcrizukcpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844153.5986493-3860-248868074669060/AnsiballZ_file.py'
Jan 31 07:22:34 compute-0 sudo[218578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:34 compute-0 python3.9[218580]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:22:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:22:34 compute-0 sudo[218578]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:34 compute-0 ceph-mon[74496]: pgmap v640: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:35 compute-0 sudo[218730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtgfrlljswaasmkmscqafgwaksiofpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844154.7945032-3896-246200575879634/AnsiballZ_stat.py'
Jan 31 07:22:35 compute-0 sudo[218730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:35 compute-0 python3.9[218732]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:35 compute-0 sudo[218730]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:35 compute-0 sudo[218809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyjkuxesoqsydjfwjbmojisfvxgpupiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844154.7945032-3896-246200575879634/AnsiballZ_file.py'
Jan 31 07:22:35 compute-0 sudo[218809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:35 compute-0 python3.9[218811]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:35 compute-0 sudo[218809]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:36.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:36.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:36 compute-0 sudo[218961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnshtpwpscqmuyuiojslbzyblmmxprgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844156.0110688-3932-11971103571670/AnsiballZ_stat.py'
Jan 31 07:22:36 compute-0 sudo[218961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:36 compute-0 python3.9[218963]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:36 compute-0 sudo[218961]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:36 compute-0 ceph-mon[74496]: pgmap v641: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:37 compute-0 sudo[219096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccmsumesugmrcrclxojprgqhqmsweamo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844156.0110688-3932-11971103571670/AnsiballZ_copy.py'
Jan 31 07:22:37 compute-0 sudo[219096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:37 compute-0 podman[219060]: 2026-01-31 07:22:37.096879019 +0000 UTC m=+0.126674988 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:22:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:37 compute-0 python3.9[219101]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844156.0110688-3932-11971103571670/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:37 compute-0 sudo[219096]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:37 compute-0 sudo[219265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmlgfavfcqpryisfpnucrwwrwyvyzcae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844157.4137938-3977-104515933436112/AnsiballZ_file.py'
Jan 31 07:22:37 compute-0 sudo[219265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:37 compute-0 python3.9[219267]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:37 compute-0 sudo[219265]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:38.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:38.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:38 compute-0 sudo[219417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suwxdskpzdhqosdbzeixxpldwfylsfjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844158.2371867-4001-94947292658872/AnsiballZ_command.py'
Jan 31 07:22:38 compute-0 sudo[219417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:38 compute-0 python3.9[219419]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:38 compute-0 sudo[219417]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:39 compute-0 ceph-mon[74496]: pgmap v642: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:39 compute-0 sudo[219573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzjxrsdthpkidjeairawyaoqogktazd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844159.032406-4025-11847788333629/AnsiballZ_blockinfile.py'
Jan 31 07:22:39 compute-0 sudo[219573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:39 compute-0 python3.9[219575]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:39 compute-0 sudo[219573]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:40.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:40.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:40 compute-0 sudo[219725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khvftowanrcswhvmvsmnuwquhmdqepcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844160.0770314-4052-83729453787816/AnsiballZ_command.py'
Jan 31 07:22:40 compute-0 sudo[219725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:40 compute-0 python3.9[219727]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:40 compute-0 sudo[219725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:41 compute-0 ceph-mon[74496]: pgmap v643: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:41 compute-0 sudo[219878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcxvqtzpsvblrtkacjkwkntowedmahmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844160.8517525-4076-208849781641345/AnsiballZ_stat.py'
Jan 31 07:22:41 compute-0 sudo[219878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:41 compute-0 python3.9[219880]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:22:41 compute-0 sudo[219878]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:42 compute-0 sudo[220033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnxaqufuwrhwdoiorwpwfehetoakffta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844161.563208-4100-165855631001162/AnsiballZ_command.py'
Jan 31 07:22:42 compute-0 sudo[220033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:42.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:42.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:42 compute-0 python3.9[220035]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:22:42 compute-0 sudo[220033]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:42 compute-0 sudo[220063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:42 compute-0 sudo[220063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:42 compute-0 sudo[220063]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:42 compute-0 sudo[220111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:42 compute-0 sudo[220111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:42 compute-0 sudo[220111]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:42 compute-0 sudo[220238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwrpdojbfklcpgxhgjwyxqqjlnlmquqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844162.5646799-4124-93120207414609/AnsiballZ_file.py'
Jan 31 07:22:42 compute-0 sudo[220238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:43 compute-0 python3.9[220240]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:43 compute-0 sudo[220238]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:43 compute-0 ceph-mon[74496]: pgmap v644: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:43 compute-0 sudo[220391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oetrbrnpwyvqatmtkiybkwwbhwmjyedx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844163.2595742-4148-143224404979186/AnsiballZ_stat.py'
Jan 31 07:22:43 compute-0 sudo[220391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:43 compute-0 python3.9[220393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:43 compute-0 sudo[220391]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:43 compute-0 podman[220417]: 2026-01-31 07:22:43.880272553 +0000 UTC m=+0.050889841 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:22:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:44.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:44 compute-0 sudo[220532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yudiwdszurltksymgdahxaqrecdqlwym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844163.2595742-4148-143224404979186/AnsiballZ_copy.py'
Jan 31 07:22:44 compute-0 sudo[220532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:44.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:44 compute-0 python3.9[220534]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844163.2595742-4148-143224404979186/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:44 compute-0 sudo[220532]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:44 compute-0 sudo[220684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncrdxnrzifgghyctqnkttkalniusxwjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844164.579006-4193-3455954014493/AnsiballZ_stat.py'
Jan 31 07:22:44 compute-0 sudo[220684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:45 compute-0 python3.9[220686]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:45 compute-0 sudo[220684]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:45 compute-0 ceph-mon[74496]: pgmap v645: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:45 compute-0 sudo[220808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhvqoiydeoajjokadxjkkufgbkdkywpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844164.579006-4193-3455954014493/AnsiballZ_copy.py'
Jan 31 07:22:45 compute-0 sudo[220808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:45 compute-0 python3.9[220810]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844164.579006-4193-3455954014493/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:45 compute-0 sudo[220808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:46.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:46 compute-0 ceph-mon[74496]: pgmap v646: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:46.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:46 compute-0 sudo[220960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkvauhulpmmfkhawqimfwcvlkswitqjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844165.9281201-4238-235753767404993/AnsiballZ_stat.py'
Jan 31 07:22:46 compute-0 sudo[220960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:46 compute-0 python3.9[220962]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:22:46 compute-0 sudo[220960]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:46 compute-0 sudo[221083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojxtncsbnpbevzrfwgghbufsnretffgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844165.9281201-4238-235753767404993/AnsiballZ_copy.py'
Jan 31 07:22:46 compute-0 sudo[221083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:46 compute-0 python3.9[221085]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844165.9281201-4238-235753767404993/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:22:46 compute-0 sudo[221083]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:47 compute-0 sudo[221236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abrkztvhzpquxyazrgweqzvfvllefrgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844167.2156239-4283-143425157358835/AnsiballZ_systemd.py'
Jan 31 07:22:47 compute-0 sudo[221236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:47 compute-0 python3.9[221238]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:22:47 compute-0 systemd[1]: Reloading.
Jan 31 07:22:47 compute-0 systemd-rc-local-generator[221263]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:47 compute-0 systemd-sysv-generator[221268]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:48.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:48 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 07:22:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:48 compute-0 sudo[221236]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:48 compute-0 sudo[221427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijybwkdohtrjltcatzksqvuvihetpqgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844168.3637037-4307-233857913015863/AnsiballZ_systemd.py'
Jan 31 07:22:48 compute-0 sudo[221427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:22:48 compute-0 ceph-mon[74496]: pgmap v647: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:49 compute-0 python3.9[221429]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 07:22:49 compute-0 systemd[1]: Reloading.
Jan 31 07:22:49 compute-0 systemd-rc-local-generator[221453]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:49 compute-0 systemd-sysv-generator[221458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:49 compute-0 systemd[1]: Reloading.
Jan 31 07:22:49 compute-0 systemd-rc-local-generator[221491]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:22:49 compute-0 systemd-sysv-generator[221495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:22:49 compute-0 sudo[221427]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:22:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:50.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:50 compute-0 sshd-session[161194]: Connection closed by 192.168.122.30 port 53284
Jan 31 07:22:50 compute-0 sshd-session[161188]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:22:50 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 07:22:50 compute-0 systemd[1]: session-49.scope: Consumed 3min 12.793s CPU time.
Jan 31 07:22:50 compute-0 systemd-logind[816]: Session 49 logged out. Waiting for processes to exit.
Jan 31 07:22:50 compute-0 systemd-logind[816]: Removed session 49.
Jan 31 07:22:50 compute-0 ceph-mon[74496]: pgmap v648: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:52.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:52.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:52 compute-0 ceph-mon[74496]: pgmap v649: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:54.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:54.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:54 compute-0 ceph-mon[74496]: pgmap v650: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:56.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:22:57 compute-0 sshd-session[221532]: Accepted publickey for zuul from 192.168.122.30 port 47926 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:22:57 compute-0 systemd-logind[816]: New session 50 of user zuul.
Jan 31 07:22:57 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 31 07:22:57 compute-0 sshd-session[221532]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:22:57 compute-0 ceph-mon[74496]: pgmap v651: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:57 compute-0 sudo[221571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:57 compute-0 sudo[221571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:57 compute-0 sudo[221571]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:57 compute-0 sudo[221613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:22:57 compute-0 sudo[221613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:57 compute-0 sudo[221613]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:57 compute-0 sudo[221638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:57 compute-0 sudo[221638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:57 compute-0 sudo[221638]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:57 compute-0 sudo[221663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:22:57 compute-0 sudo[221663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:58 compute-0 sudo[221663]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:22:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:22:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:22:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:22:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:22:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:22:58.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:22:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:22:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:22:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:22:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:22:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:22:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:22:58 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e3785f54-82f5-407c-bf26-f350c2351058 does not exist
Jan 31 07:22:58 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 34b20c4e-be7f-46d0-a904-7f1a308d73b4 does not exist
Jan 31 07:22:58 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f56a2eed-8398-4dec-bd24-46e2fc144026 does not exist
Jan 31 07:22:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:22:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:22:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:22:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:22:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:22:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:22:58 compute-0 sudo[221818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:58 compute-0 sudo[221818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:58 compute-0 sudo[221818]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:58 compute-0 sudo[221843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:22:58 compute-0 sudo[221843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:58 compute-0 sudo[221843]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:58 compute-0 python3.9[221817]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:22:58 compute-0 ceph-mon[74496]: pgmap v652: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:22:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:22:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:22:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:22:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:22:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:22:58 compute-0 sudo[221872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:22:58 compute-0 sudo[221872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:58 compute-0 sudo[221872]: pam_unix(sudo:session): session closed for user root
Jan 31 07:22:58 compute-0 sudo[221897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:22:58 compute-0 sudo[221897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:22:58 compute-0 podman[221976]: 2026-01-31 07:22:58.781476583 +0000 UTC m=+0.053960818 container create 9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:22:58 compute-0 systemd[1]: Started libpod-conmon-9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037.scope.
Jan 31 07:22:58 compute-0 podman[221976]: 2026-01-31 07:22:58.754542727 +0000 UTC m=+0.027027052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:22:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:22:58 compute-0 podman[221976]: 2026-01-31 07:22:58.880838174 +0000 UTC m=+0.153322499 container init 9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:22:58 compute-0 podman[221976]: 2026-01-31 07:22:58.890461793 +0000 UTC m=+0.162946068 container start 9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:22:58 compute-0 podman[221976]: 2026-01-31 07:22:58.895920498 +0000 UTC m=+0.168404733 container attach 9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:22:58 compute-0 systemd[1]: libpod-9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037.scope: Deactivated successfully.
Jan 31 07:22:58 compute-0 compassionate_jepsen[222003]: 167 167
Jan 31 07:22:58 compute-0 conmon[222003]: conmon 9c36cbf2d0e933fc9d39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037.scope/container/memory.events
Jan 31 07:22:58 compute-0 podman[221976]: 2026-01-31 07:22:58.901194718 +0000 UTC m=+0.173678973 container died 9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aaa569189671b3d84c882c51375d6bcd8fb288f8b9bb881c1fe62e03eedf0c9-merged.mount: Deactivated successfully.
Jan 31 07:22:58 compute-0 podman[221976]: 2026-01-31 07:22:58.941056056 +0000 UTC m=+0.213540331 container remove 9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:22:58 compute-0 systemd[1]: libpod-conmon-9c36cbf2d0e933fc9d39bcc6eb932673ac07866d114f24f33641536f73f2b037.scope: Deactivated successfully.
Jan 31 07:22:59 compute-0 podman[222028]: 2026-01-31 07:22:59.118210473 +0000 UTC m=+0.053569697 container create b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rubin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:22:59 compute-0 systemd[1]: Started libpod-conmon-b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5.scope.
Jan 31 07:22:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6b1e54fab127d689845b9e91da6c4f40e05dc2fa67c9d03b0e3c19b89cfbd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6b1e54fab127d689845b9e91da6c4f40e05dc2fa67c9d03b0e3c19b89cfbd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6b1e54fab127d689845b9e91da6c4f40e05dc2fa67c9d03b0e3c19b89cfbd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6b1e54fab127d689845b9e91da6c4f40e05dc2fa67c9d03b0e3c19b89cfbd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf6b1e54fab127d689845b9e91da6c4f40e05dc2fa67c9d03b0e3c19b89cfbd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:22:59 compute-0 podman[222028]: 2026-01-31 07:22:59.089954184 +0000 UTC m=+0.025313388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:22:59 compute-0 podman[222028]: 2026-01-31 07:22:59.200227295 +0000 UTC m=+0.135586509 container init b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rubin, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:22:59 compute-0 podman[222028]: 2026-01-31 07:22:59.205525157 +0000 UTC m=+0.140884341 container start b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rubin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:22:59 compute-0 podman[222028]: 2026-01-31 07:22:59.209768222 +0000 UTC m=+0.145127416 container attach b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:22:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:22:59 compute-0 python3.9[222174]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:22:59 compute-0 network[222191]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:22:59 compute-0 network[222192]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:22:59 compute-0 network[222193]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:22:59 compute-0 affectionate_rubin[222095]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:22:59 compute-0 affectionate_rubin[222095]: --> relative data size: 1.0
Jan 31 07:22:59 compute-0 affectionate_rubin[222095]: --> All data devices are unavailable
Jan 31 07:22:59 compute-0 podman[222028]: 2026-01-31 07:22:59.944574652 +0000 UTC m=+0.879933886 container died b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:23:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:00.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:00 compute-0 systemd[1]: libpod-b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5.scope: Deactivated successfully.
Jan 31 07:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf6b1e54fab127d689845b9e91da6c4f40e05dc2fa67c9d03b0e3c19b89cfbd4-merged.mount: Deactivated successfully.
Jan 31 07:23:00 compute-0 podman[222028]: 2026-01-31 07:23:00.464209563 +0000 UTC m=+1.399568757 container remove b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:23:00 compute-0 systemd[1]: libpod-conmon-b3620b0f3a09c92911d9aa8b00819ad26574344e5ad2122a72bba596abb4cca5.scope: Deactivated successfully.
Jan 31 07:23:00 compute-0 sudo[221897]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:00 compute-0 sudo[222229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:00 compute-0 sudo[222229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:00 compute-0 sudo[222229]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:00 compute-0 sudo[222258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:23:00 compute-0 sudo[222258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:00 compute-0 sudo[222258]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:00 compute-0 sudo[222288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:00 compute-0 sudo[222288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:00 compute-0 sudo[222288]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:00 compute-0 sudo[222315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:23:00 compute-0 sudo[222315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:00 compute-0 ceph-mon[74496]: pgmap v653: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:01 compute-0 podman[222398]: 2026-01-31 07:23:01.020518731 +0000 UTC m=+0.037753746 container create 59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:23:01 compute-0 systemd[1]: Started libpod-conmon-59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b.scope.
Jan 31 07:23:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:23:01 compute-0 podman[222398]: 2026-01-31 07:23:01.0979929 +0000 UTC m=+0.115227915 container init 59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:23:01 compute-0 podman[222398]: 2026-01-31 07:23:01.004745901 +0000 UTC m=+0.021980946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:23:01 compute-0 podman[222398]: 2026-01-31 07:23:01.105206589 +0000 UTC m=+0.122441594 container start 59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mcnulty, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:23:01 compute-0 peaceful_mcnulty[222420]: 167 167
Jan 31 07:23:01 compute-0 podman[222398]: 2026-01-31 07:23:01.110633283 +0000 UTC m=+0.127868358 container attach 59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:23:01 compute-0 podman[222398]: 2026-01-31 07:23:01.11130492 +0000 UTC m=+0.128539925 container died 59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:23:01 compute-0 systemd[1]: libpod-59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b.scope: Deactivated successfully.
Jan 31 07:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-123c816437f5b4ebbd3c9749324858fd598f6e8a34d92d05deed485bc792af8b-merged.mount: Deactivated successfully.
Jan 31 07:23:01 compute-0 podman[222398]: 2026-01-31 07:23:01.149725441 +0000 UTC m=+0.166960446 container remove 59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mcnulty, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:23:01 compute-0 systemd[1]: libpod-conmon-59be39c31e17a82dad9126abf5b047572e22306351fd7a31eb457e0c0301510b.scope: Deactivated successfully.
Jan 31 07:23:01 compute-0 podman[222458]: 2026-01-31 07:23:01.27600017 +0000 UTC m=+0.038416793 container create adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:23:01 compute-0 systemd[1]: Started libpod-conmon-adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3.scope.
Jan 31 07:23:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c9b756be095cbafec8b0ac65e8e68cd09ee525ff859420d73028dff6d38a472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:23:01 compute-0 podman[222458]: 2026-01-31 07:23:01.257442089 +0000 UTC m=+0.019858742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c9b756be095cbafec8b0ac65e8e68cd09ee525ff859420d73028dff6d38a472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c9b756be095cbafec8b0ac65e8e68cd09ee525ff859420d73028dff6d38a472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c9b756be095cbafec8b0ac65e8e68cd09ee525ff859420d73028dff6d38a472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:23:01 compute-0 podman[222458]: 2026-01-31 07:23:01.378019736 +0000 UTC m=+0.140436389 container init adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:23:01 compute-0 podman[222458]: 2026-01-31 07:23:01.38867739 +0000 UTC m=+0.151094013 container start adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_almeida, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:23:01 compute-0 podman[222458]: 2026-01-31 07:23:01.392830083 +0000 UTC m=+0.155246816 container attach adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:23:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:02.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:02 compute-0 laughing_almeida[222479]: {
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:     "0": [
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:         {
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "devices": [
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "/dev/loop3"
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             ],
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "lv_name": "ceph_lv0",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "lv_size": "7511998464",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "name": "ceph_lv0",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "tags": {
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.cluster_name": "ceph",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.crush_device_class": "",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.encrypted": "0",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.osd_id": "0",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.type": "block",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:                 "ceph.vdo": "0"
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             },
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "type": "block",
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:             "vg_name": "ceph_vg0"
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:         }
Jan 31 07:23:02 compute-0 laughing_almeida[222479]:     ]
Jan 31 07:23:02 compute-0 laughing_almeida[222479]: }
Jan 31 07:23:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:02 compute-0 systemd[1]: libpod-adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3.scope: Deactivated successfully.
Jan 31 07:23:02 compute-0 podman[222458]: 2026-01-31 07:23:02.132503203 +0000 UTC m=+0.894919846 container died adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_almeida, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c9b756be095cbafec8b0ac65e8e68cd09ee525ff859420d73028dff6d38a472-merged.mount: Deactivated successfully.
Jan 31 07:23:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:02.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:02 compute-0 podman[222458]: 2026-01-31 07:23:02.203450791 +0000 UTC m=+0.965867414 container remove adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_almeida, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:23:02 compute-0 systemd[1]: libpod-conmon-adcf4c6c778f37b758aead329189fde0c374bbdf954b81d4cfdcd57498838dd3.scope: Deactivated successfully.
Jan 31 07:23:02 compute-0 sudo[222315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:02 compute-0 sudo[222522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:02 compute-0 sudo[222522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:02 compute-0 sudo[222522]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:02 compute-0 sudo[222547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:23:02 compute-0 sudo[222547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:02 compute-0 sudo[222547]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:02 compute-0 sudo[222572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:02 compute-0 sudo[222572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:02 compute-0 sudo[222572]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:02 compute-0 sudo[222597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:23:02 compute-0 sudo[222597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:02 compute-0 sudo[222648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:02 compute-0 sudo[222648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:02 compute-0 sudo[222648]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:02 compute-0 podman[222683]: 2026-01-31 07:23:02.762372314 +0000 UTC m=+0.053643649 container create 35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:23:02 compute-0 sudo[222690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:02 compute-0 sudo[222690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:02 compute-0 sudo[222690]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:02 compute-0 systemd[1]: Started libpod-conmon-35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d.scope.
Jan 31 07:23:02 compute-0 podman[222683]: 2026-01-31 07:23:02.732659909 +0000 UTC m=+0.023931334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:23:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:23:02 compute-0 ceph-mon[74496]: pgmap v654: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:02 compute-0 podman[222683]: 2026-01-31 07:23:02.847849121 +0000 UTC m=+0.139120536 container init 35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:23:02 compute-0 podman[222683]: 2026-01-31 07:23:02.858181728 +0000 UTC m=+0.149453053 container start 35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:23:02 compute-0 podman[222683]: 2026-01-31 07:23:02.862171656 +0000 UTC m=+0.153443071 container attach 35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:23:02 compute-0 amazing_galois[222729]: 167 167
Jan 31 07:23:02 compute-0 systemd[1]: libpod-35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d.scope: Deactivated successfully.
Jan 31 07:23:02 compute-0 conmon[222729]: conmon 35ce8a0afcf330ab9483 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d.scope/container/memory.events
Jan 31 07:23:02 compute-0 podman[222683]: 2026-01-31 07:23:02.865696884 +0000 UTC m=+0.156968239 container died 35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa03d0a0998e5b19e579f7986af670ded85e65f09f64b9f4e3c6c9dd5e92de88-merged.mount: Deactivated successfully.
Jan 31 07:23:02 compute-0 podman[222683]: 2026-01-31 07:23:02.913417336 +0000 UTC m=+0.204688671 container remove 35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:23:02 compute-0 systemd[1]: libpod-conmon-35ce8a0afcf330ab948320b3ff06d0b00ce6d22ea95d8b2db5c49f1beed21d6d.scope: Deactivated successfully.
Jan 31 07:23:03 compute-0 podman[222766]: 2026-01-31 07:23:03.055822313 +0000 UTC m=+0.045125530 container create f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:23:03 compute-0 systemd[1]: Started libpod-conmon-f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff.scope.
Jan 31 07:23:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:23:03 compute-0 podman[222766]: 2026-01-31 07:23:03.034290049 +0000 UTC m=+0.023593246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca2f28629ede5b90d4a0907e18649f7d92b3b1510efd3e407e815a39018a04d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca2f28629ede5b90d4a0907e18649f7d92b3b1510efd3e407e815a39018a04d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca2f28629ede5b90d4a0907e18649f7d92b3b1510efd3e407e815a39018a04d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:23:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca2f28629ede5b90d4a0907e18649f7d92b3b1510efd3e407e815a39018a04d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:23:03 compute-0 podman[222766]: 2026-01-31 07:23:03.161456969 +0000 UTC m=+0.150760166 container init f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:23:03 compute-0 podman[222766]: 2026-01-31 07:23:03.170405791 +0000 UTC m=+0.159708968 container start f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 07:23:03 compute-0 podman[222766]: 2026-01-31 07:23:03.179033535 +0000 UTC m=+0.168336722 container attach f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:23:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]: {
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]:         "osd_id": 0,
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]:         "type": "bluestore"
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]:     }
Jan 31 07:23:04 compute-0 vibrant_khayyam[222786]: }
Jan 31 07:23:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:04 compute-0 systemd[1]: libpod-f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff.scope: Deactivated successfully.
Jan 31 07:23:04 compute-0 podman[222766]: 2026-01-31 07:23:04.113756846 +0000 UTC m=+1.103060063 container died f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:23:04 compute-0 sudo[222979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeezsjtqezjknfiyalndtygckfnnpyyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844183.8669682-101-188431657923892/AnsiballZ_setup.py'
Jan 31 07:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ca2f28629ede5b90d4a0907e18649f7d92b3b1510efd3e407e815a39018a04d-merged.mount: Deactivated successfully.
Jan 31 07:23:04 compute-0 sudo[222979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:04.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:04 compute-0 podman[222766]: 2026-01-31 07:23:04.195735127 +0000 UTC m=+1.185038314 container remove f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:23:04 compute-0 systemd[1]: libpod-conmon-f01f06bcc72249f4dc7a7f5d987e7b861076d87d029453bc8a6942f47dc18fff.scope: Deactivated successfully.
Jan 31 07:23:04 compute-0 sudo[222597]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:23:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:23:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:23:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:23:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3e8c6d82-1e93-4775-80d5-104fdc56bd50 does not exist
Jan 31 07:23:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 82cff86d-ea67-4643-a900-94afa54d57c1 does not exist
Jan 31 07:23:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3ab661c6-da75-495d-bc75-f3711b40a57f does not exist
Jan 31 07:23:04 compute-0 sudo[222989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:04 compute-0 sudo[222989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:04 compute-0 sudo[222989]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:04 compute-0 python3.9[222988]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:23:04 compute-0 sudo[223014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:23:04 compute-0 sudo[223014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:04 compute-0 sudo[223014]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:04 compute-0 sudo[222979]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:04 compute-0 ceph-mon[74496]: pgmap v655: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:23:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:23:05 compute-0 sudo[223120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awbeslyppzqmxtohpvzdufosfjvplcix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844183.8669682-101-188431657923892/AnsiballZ_dnf.py'
Jan 31 07:23:05 compute-0 sudo[223120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:05 compute-0 python3.9[223122]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:23:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:06.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:06.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:06 compute-0 ceph-mon[74496]: pgmap v656: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:07 compute-0 podman[223126]: 2026-01-31 07:23:07.946338294 +0000 UTC m=+0.113894813 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:23:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:08.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 07:23:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:08.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 07:23:08 compute-0 ceph-mon[74496]: pgmap v657: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:10.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:10.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:10 compute-0 sudo[223120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:10 compute-0 sshd-session[223153]: Invalid user sol from 45.148.10.240 port 60238
Jan 31 07:23:10 compute-0 ceph-mon[74496]: pgmap v658: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:10 compute-0 sshd-session[223153]: Connection closed by invalid user sol 45.148.10.240 port 60238 [preauth]
Jan 31 07:23:11 compute-0 sudo[223304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aslzgitqcwobsflvhwssdyiakdyzbvhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844190.6999562-137-111169865361676/AnsiballZ_stat.py'
Jan 31 07:23:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:23:11.129 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:23:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:23:11.130 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:23:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:23:11.130 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:23:11 compute-0 sudo[223304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:11 compute-0 python3.9[223306]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:23:11 compute-0 sudo[223304]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:12.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:12 compute-0 sudo[223457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynnrqvjxqhfywahraobfgjvlcpglsntn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844191.7497687-167-101865609406343/AnsiballZ_command.py'
Jan 31 07:23:12 compute-0 sudo[223457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:12.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:12 compute-0 python3.9[223459]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:12 compute-0 sudo[223457]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:12 compute-0 ceph-mon[74496]: pgmap v659: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:12 compute-0 sudo[223610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rowjyqsnqqgkzwsssqrymqagcgrccrfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844192.7714736-197-144584634708823/AnsiballZ_stat.py'
Jan 31 07:23:12 compute-0 sudo[223610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:13 compute-0 python3.9[223612]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:23:13 compute-0 sudo[223610]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:13 compute-0 sudo[223763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbsqeembeurofiztpczmerfvmupedhsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844193.5100577-221-100102625084394/AnsiballZ_command.py'
Jan 31 07:23:13 compute-0 sudo[223763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:14 compute-0 python3.9[223765]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:14 compute-0 sudo[223763]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:14.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:14.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:14 compute-0 sudo[223927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fypnccbbbthukgvfcnrpezgfdglluqav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844194.2481291-245-52479821335525/AnsiballZ_stat.py'
Jan 31 07:23:14 compute-0 sudo[223927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:14 compute-0 podman[223890]: 2026-01-31 07:23:14.550004967 +0000 UTC m=+0.070051737 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:23:14 compute-0 python3.9[223935]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:23:14 compute-0 sudo[223927]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:14 compute-0 ceph-mon[74496]: pgmap v660: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:15 compute-0 sudo[224058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odyedivxtfhmihbgoomgrdzzwpptnnjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844194.2481291-245-52479821335525/AnsiballZ_copy.py'
Jan 31 07:23:15 compute-0 sudo[224058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:15 compute-0 python3.9[224061]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844194.2481291-245-52479821335525/.source.iscsi _original_basename=.8mw55ckn follow=False checksum=f7159aaec5c825bd9145b62039d3e852fa178c7f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:15 compute-0 sudo[224058]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:16 compute-0 sudo[224211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgkthjanmisbktfhugcqmiclqletwbsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844195.5994546-290-131729252356318/AnsiballZ_file.py'
Jan 31 07:23:16 compute-0 sudo[224211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:16.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:16.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:16 compute-0 python3.9[224213]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:16 compute-0 sudo[224211]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:16 compute-0 sudo[224363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vasiezbddiazzwpwlvsulkxqfznpyqbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844196.499277-314-210254304417065/AnsiballZ_lineinfile.py'
Jan 31 07:23:16 compute-0 sudo[224363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:17 compute-0 ceph-mon[74496]: pgmap v661: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:17 compute-0 python3.9[224365]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:17 compute-0 sudo[224363]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:18 compute-0 sudo[224516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqcosqzqlwupdczwrrwvhezupbcjiatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844197.4638612-341-233116261198805/AnsiballZ_systemd_service.py'
Jan 31 07:23:18 compute-0 sudo[224516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:18.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:18 compute-0 python3.9[224518]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:23:18 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 07:23:18 compute-0 sudo[224516]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:18 compute-0 sudo[224672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxtnhiokhnkfsfykllweomwzopbjbabq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844198.6795354-365-146875816878090/AnsiballZ_systemd_service.py'
Jan 31 07:23:18 compute-0 sudo[224672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:19 compute-0 ceph-mon[74496]: pgmap v662: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:19 compute-0 python3.9[224674]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:23:19 compute-0 systemd[1]: Reloading.
Jan 31 07:23:19 compute-0 systemd-rc-local-generator[224702]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:19 compute-0 systemd-sysv-generator[224707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:19 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 07:23:19 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 07:23:19 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 07:23:19 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 07:23:19 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 07:23:19 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 07:23:19 compute-0 sudo[224672]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:23:19
Jan 31 07:23:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:23:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:23:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'volumes', '.rgw.root']
Jan 31 07:23:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:23:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:20.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:23:20 compute-0 ceph-mon[74496]: pgmap v663: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:20.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:20 compute-0 python3.9[224874]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:23:20 compute-0 network[224891]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:23:20 compute-0 network[224892]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:23:20 compute-0 network[224893]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:23:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.003000072s ======
Jan 31 07:23:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:22.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.128740) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202128779, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1677, "num_deletes": 503, "total_data_size": 2605296, "memory_usage": 2648312, "flush_reason": "Manual Compaction"}
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202139241, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1495129, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14075, "largest_seqno": 15751, "table_properties": {"data_size": 1489630, "index_size": 2254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16560, "raw_average_key_size": 19, "raw_value_size": 1475728, "raw_average_value_size": 1709, "num_data_blocks": 104, "num_entries": 863, "num_filter_entries": 863, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844052, "oldest_key_time": 1769844052, "file_creation_time": 1769844202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10551 microseconds, and 4008 cpu microseconds.
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.139289) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1495129 bytes OK
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.139310) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.145045) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.145070) EVENT_LOG_v1 {"time_micros": 1769844202145062, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.145105) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2597278, prev total WAL file size 2597278, number of live WAL files 2.
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.145817) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323535' seq:72057594037927935, type:22 .. '6D67727374617400353038' seq:0, type:0; will stop at (end)
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1460KB)], [32(10MB)]
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202145847, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 12631009, "oldest_snapshot_seqno": -1}
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4203 keys, 8058591 bytes, temperature: kUnknown
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202194450, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8058591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8029153, "index_size": 17808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 104073, "raw_average_key_size": 24, "raw_value_size": 7951811, "raw_average_value_size": 1891, "num_data_blocks": 750, "num_entries": 4203, "num_filter_entries": 4203, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.194681) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8058591 bytes
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.205187) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.5 rd, 165.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(13.8) write-amplify(5.4) OK, records in: 5154, records dropped: 951 output_compression: NoCompression
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.205231) EVENT_LOG_v1 {"time_micros": 1769844202205216, "job": 14, "event": "compaction_finished", "compaction_time_micros": 48675, "compaction_time_cpu_micros": 13574, "output_level": 6, "num_output_files": 1, "total_output_size": 8058591, "num_input_records": 5154, "num_output_records": 4203, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202205610, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844202207557, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.145761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.207592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.207597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.207599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.207601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:23:22.207603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:23:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:22 compute-0 ceph-mon[74496]: pgmap v664: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:22 compute-0 sudo[224972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:22 compute-0 sudo[224972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:22 compute-0 sudo[224972]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:22 compute-0 sudo[225001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:22 compute-0 sudo[225001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:22 compute-0 sudo[225001]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:24.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:24 compute-0 ceph-mon[74496]: pgmap v665: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:25 compute-0 sudo[225216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdmvhthlkvxyyhgdtajcaynavgdkkodp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844205.073589-434-195374802361175/AnsiballZ_dnf.py'
Jan 31 07:23:25 compute-0 sudo[225216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:25 compute-0 python3.9[225218]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:23:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:26.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:26 compute-0 ceph-mon[74496]: pgmap v666: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:27 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:23:27 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:23:27 compute-0 systemd[1]: Reloading.
Jan 31 07:23:28 compute-0 systemd-sysv-generator[225269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:28 compute-0 systemd-rc-local-generator[225265]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:28.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:23:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:28.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:23:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:23:28 compute-0 systemd[1]: run-r3f0709bb3377425da32c08d4714e2985.service: Deactivated successfully.
Jan 31 07:23:28 compute-0 sudo[225216]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:28 compute-0 ceph-mon[74496]: pgmap v667: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:29 compute-0 sudo[225534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkdgngimlsmjrfpwqvyhpuuiugzbpyah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844209.0993285-461-73122654280703/AnsiballZ_file.py'
Jan 31 07:23:29 compute-0 sudo[225534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:29 compute-0 python3.9[225536]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 07:23:29 compute-0 sudo[225534]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:30.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:30.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:30 compute-0 sudo[225686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiaiqkhlxopzoyotyrllkvtrhuvwzoyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844209.9954014-485-215253884874685/AnsiballZ_modprobe.py'
Jan 31 07:23:30 compute-0 sudo[225686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:30 compute-0 ceph-mon[74496]: pgmap v668: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:30 compute-0 python3.9[225688]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 07:23:30 compute-0 sudo[225686]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:31 compute-0 sudo[225842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sojylpjddyoxfrpwlatzqrdqtomfoiqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844210.875276-509-208457268871964/AnsiballZ_stat.py'
Jan 31 07:23:31 compute-0 sudo[225842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:31 compute-0 python3.9[225844]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:23:31 compute-0 sudo[225842]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:31 compute-0 sudo[225966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tztnnclbtwzgocbzodaiogyawevgjibp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844210.875276-509-208457268871964/AnsiballZ_copy.py'
Jan 31 07:23:31 compute-0 sudo[225966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:31 compute-0 python3.9[225968]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844210.875276-509-208457268871964/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:31 compute-0 sudo[225966]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:32.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:32.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:32 compute-0 sudo[226118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usnuejiqiapmtalbushyghbfigbzlqbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844212.3168244-557-221654882637144/AnsiballZ_lineinfile.py'
Jan 31 07:23:32 compute-0 sudo[226118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:32 compute-0 python3.9[226120]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:32 compute-0 sudo[226118]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:33 compute-0 ceph-mon[74496]: pgmap v669: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:33 compute-0 sudo[226271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-licmducvlqrcnqphwjenftjokmjckord ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844212.9996526-581-96919689618248/AnsiballZ_systemd.py'
Jan 31 07:23:33 compute-0 sudo[226271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:33 compute-0 python3.9[226273]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:23:34 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 07:23:34 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 07:23:34 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 07:23:34 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 07:23:34 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 07:23:34 compute-0 sudo[226271]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:34.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:34 compute-0 ceph-mon[74496]: pgmap v670: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:34.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:34 compute-0 sudo[226428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlqfdlwimfuoflnpexdazoxtnbgjvyra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844214.3025148-605-79479630066319/AnsiballZ_command.py'
Jan 31 07:23:34 compute-0 sudo[226428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:23:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:23:34 compute-0 python3.9[226430]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:34 compute-0 sudo[226428]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:35 compute-0 sudo[226582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fywissvmecaabwinahumpokcqamxazqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844215.3780632-635-269476674369565/AnsiballZ_stat.py'
Jan 31 07:23:35 compute-0 sudo[226582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:35 compute-0 python3.9[226584]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:23:35 compute-0 sudo[226582]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:36.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:23:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:36.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:23:36 compute-0 sudo[226734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxcgidlfthfjwfiurocmxcwmqwhsdoil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844216.0208287-662-173775628342478/AnsiballZ_stat.py'
Jan 31 07:23:36 compute-0 sudo[226734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:36 compute-0 python3.9[226736]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:23:36 compute-0 sudo[226734]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:36 compute-0 ceph-mon[74496]: pgmap v671: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:36 compute-0 sudo[226857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jepkojixsdfjilehuhgepzfoeybpvfaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844216.0208287-662-173775628342478/AnsiballZ_copy.py'
Jan 31 07:23:36 compute-0 sudo[226857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:37 compute-0 python3.9[226859]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844216.0208287-662-173775628342478/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:37 compute-0 sudo[226857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:37 compute-0 sudo[227010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mczqobsqlepwraipyfnebodtonljhwyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844217.2823598-707-48650159728460/AnsiballZ_command.py'
Jan 31 07:23:37 compute-0 sudo[227010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:37 compute-0 python3.9[227012]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:37 compute-0 sudo[227010]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:38.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:38.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:38 compute-0 sudo[227173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phftjlhkguuezvrfipkpfywdgyhifxbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844217.9802651-731-187799204225209/AnsiballZ_lineinfile.py'
Jan 31 07:23:38 compute-0 sudo[227173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:38 compute-0 podman[227137]: 2026-01-31 07:23:38.321053768 +0000 UTC m=+0.095532429 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 07:23:38 compute-0 python3.9[227180]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:38 compute-0 sudo[227173]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:38 compute-0 ceph-mon[74496]: pgmap v672: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:39 compute-0 sudo[227342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puwktrxpbnmetnltjdmmelepqlwsjdqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844218.690006-755-59154690231114/AnsiballZ_replace.py'
Jan 31 07:23:39 compute-0 sudo[227342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:39 compute-0 python3.9[227344]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:39 compute-0 sudo[227342]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:40 compute-0 sudo[227494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iexbjlqbkletqhctfkfngezixyzbouvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844219.8209815-779-230251400259143/AnsiballZ_replace.py'
Jan 31 07:23:40 compute-0 sudo[227494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:40.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:40.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:40 compute-0 python3.9[227496]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:40 compute-0 sudo[227494]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:40 compute-0 sudo[227646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqzvzmqvlctykgiblwltewuaimnaxuzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844220.6161163-806-91181642727321/AnsiballZ_lineinfile.py'
Jan 31 07:23:40 compute-0 sudo[227646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:40 compute-0 ceph-mon[74496]: pgmap v673: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:41 compute-0 python3.9[227648]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:41 compute-0 sudo[227646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:41 compute-0 sudo[227799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxzelwdsmzosyninupnvsrybbijkveqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844221.2743409-806-113729660258920/AnsiballZ_lineinfile.py'
Jan 31 07:23:41 compute-0 sudo[227799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:41 compute-0 python3.9[227801]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:41 compute-0 sudo[227799]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:42.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:42.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:42 compute-0 sudo[227951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdhjonauzokainbfpgopcqzybuntxhve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844221.9514458-806-200057401098314/AnsiballZ_lineinfile.py'
Jan 31 07:23:42 compute-0 sudo[227951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:42 compute-0 python3.9[227953]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:42 compute-0 sudo[227951]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:42 compute-0 sudo[228103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzuldllomyvrhgkgildtjjttebhhyqkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844222.6299667-806-254740951382215/AnsiballZ_lineinfile.py'
Jan 31 07:23:42 compute-0 sudo[228103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:42 compute-0 sudo[228106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:42 compute-0 sudo[228106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:42 compute-0 sudo[228106]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:43 compute-0 sudo[228131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:23:43 compute-0 sudo[228131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:23:43 compute-0 sudo[228131]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:43 compute-0 python3.9[228105]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:43 compute-0 sudo[228103]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:43 compute-0 ceph-mon[74496]: pgmap v674: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:43 compute-0 sudo[228306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dehuevvjdwceitkgwwkmqwsbcdbiadyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844223.2517395-893-214070403061406/AnsiballZ_stat.py'
Jan 31 07:23:43 compute-0 sudo[228306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:43 compute-0 python3.9[228308]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:23:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:43 compute-0 sudo[228306]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:44.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:44 compute-0 sudo[228460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctmvnvhoconstqitohwbnzkzorvvctde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844223.925049-917-41703386024349/AnsiballZ_command.py'
Jan 31 07:23:44 compute-0 sudo[228460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:44.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:44 compute-0 python3.9[228462]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:23:44 compute-0 sudo[228460]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:44 compute-0 podman[228519]: 2026-01-31 07:23:44.902685679 +0000 UTC m=+0.068856580 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 07:23:45 compute-0 sudo[228634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxbfzeskxihrvejkizjoqxvgdqqegog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844224.7757845-944-16635550708412/AnsiballZ_systemd_service.py'
Jan 31 07:23:45 compute-0 sudo[228634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:45 compute-0 ceph-mon[74496]: pgmap v675: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:45 compute-0 python3.9[228636]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:23:45 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 31 07:23:45 compute-0 sudo[228634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:46 compute-0 sudo[228791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czctyhktkiozlcshmgzvxmtqoxoirzgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844225.744585-968-118511334265778/AnsiballZ_systemd_service.py'
Jan 31 07:23:46 compute-0 sudo[228791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:46.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:46.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:46 compute-0 python3.9[228793]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:23:46 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 07:23:46 compute-0 udevadm[228798]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 07:23:46 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 07:23:46 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 07:23:46 compute-0 ceph-mon[74496]: pgmap v676: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:46 compute-0 multipathd[228802]: --------start up--------
Jan 31 07:23:46 compute-0 multipathd[228802]: read /etc/multipath.conf
Jan 31 07:23:46 compute-0 multipathd[228802]: path checkers start up
Jan 31 07:23:46 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 07:23:46 compute-0 sudo[228791]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:47 compute-0 sudo[228960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxgtmlhdjxhfgnhcrnnclwkzwwektdjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844227.047093-1004-16341733660605/AnsiballZ_file.py'
Jan 31 07:23:47 compute-0 sudo[228960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:47 compute-0 python3.9[228962]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 07:23:47 compute-0 sudo[228960]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:48 compute-0 sudo[229112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiacfhihoxbpsmffnrjnharayhggqipu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844227.8658586-1028-254642841956673/AnsiballZ_modprobe.py'
Jan 31 07:23:48 compute-0 sudo[229112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:48.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:48 compute-0 python3.9[229114]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 07:23:48 compute-0 kernel: Key type psk registered
Jan 31 07:23:48 compute-0 sudo[229112]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:48 compute-0 ceph-mon[74496]: pgmap v677: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:48 compute-0 sudo[229276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alkvdylbkdufklkgqlutzijhkbqqfowd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844228.6890345-1052-233962643736689/AnsiballZ_stat.py'
Jan 31 07:23:48 compute-0 sudo[229276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:49 compute-0 python3.9[229278]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:23:49 compute-0 sudo[229276]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:49 compute-0 sudo[229400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjysbmwrrvzfnrrdaiiywvqdqbqihxfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844228.6890345-1052-233962643736689/AnsiballZ_copy.py'
Jan 31 07:23:49 compute-0 sudo[229400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:49 compute-0 python3.9[229402]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844228.6890345-1052-233962643736689/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:49 compute-0 sudo[229400]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:23:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:50.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:50.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:50 compute-0 sudo[229552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvmssuhtybvllsyxvjbwittqdvorvtrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844230.1292753-1100-80348421061595/AnsiballZ_lineinfile.py'
Jan 31 07:23:50 compute-0 sudo[229552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:50 compute-0 python3.9[229554]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:23:50 compute-0 sudo[229552]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:51 compute-0 ceph-mon[74496]: pgmap v678: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:51 compute-0 sudo[229704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmfztvklzukwkxowfdalbqhwyksjzlov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844230.8211122-1124-50879242637383/AnsiballZ_systemd.py'
Jan 31 07:23:51 compute-0 sudo[229704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:51 compute-0 python3.9[229706]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:23:51 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 07:23:51 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 07:23:51 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 07:23:51 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 07:23:51 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 07:23:51 compute-0 sudo[229704]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:52.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:52 compute-0 sudo[229861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akabftefzpbqgnebssjdicremwclowoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844232.1526487-1148-31227667318106/AnsiballZ_dnf.py'
Jan 31 07:23:52 compute-0 sudo[229861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:52 compute-0 python3.9[229863]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:23:53 compute-0 ceph-mon[74496]: pgmap v679: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:54.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:54.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:54 compute-0 ceph-mon[74496]: pgmap v680: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:54 compute-0 systemd[1]: Reloading.
Jan 31 07:23:54 compute-0 systemd-sysv-generator[229893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:54 compute-0 systemd-rc-local-generator[229886]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:55 compute-0 systemd[1]: Reloading.
Jan 31 07:23:55 compute-0 systemd-sysv-generator[229931]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:55 compute-0 systemd-rc-local-generator[229925]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:55 compute-0 systemd-logind[816]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 07:23:55 compute-0 systemd-logind[816]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 07:23:55 compute-0 lvm[229979]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 07:23:55 compute-0 lvm[229979]: VG ceph_vg0 finished
Jan 31 07:23:55 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:23:55 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:23:55 compute-0 systemd[1]: Reloading.
Jan 31 07:23:55 compute-0 systemd-sysv-generator[230035]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:23:55 compute-0 systemd-rc-local-generator[230031]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:23:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:55 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:23:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:56.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:56 compute-0 sudo[229861]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:56 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:23:56 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:23:56 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.200s CPU time.
Jan 31 07:23:56 compute-0 systemd[1]: run-r7770e083adc34cb4bf523cf271b857fb.service: Deactivated successfully.
Jan 31 07:23:56 compute-0 ceph-mon[74496]: pgmap v681: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:23:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:23:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:58.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:23:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:23:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:23:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:23:58 compute-0 sudo[231334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eylmhozroseidegqnveaecydhtkqsner ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844238.1961577-1172-47683715401423/AnsiballZ_systemd_service.py'
Jan 31 07:23:58 compute-0 sudo[231334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:59 compute-0 python3.9[231336]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:23:59 compute-0 iscsid[224715]: iscsid shutting down.
Jan 31 07:23:59 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 31 07:23:59 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 07:23:59 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 31 07:23:59 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 07:23:59 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 07:23:59 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 07:23:59 compute-0 sudo[231334]: pam_unix(sudo:session): session closed for user root
Jan 31 07:23:59 compute-0 ceph-mon[74496]: pgmap v682: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:59 compute-0 sudo[231492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edguxeffxyqihaerbuoqgeeotnelavbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844239.3395803-1196-250040866252470/AnsiballZ_systemd_service.py'
Jan 31 07:23:59 compute-0 sudo[231492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:23:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:23:59 compute-0 python3.9[231494]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:23:59 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 07:23:59 compute-0 multipathd[228802]: exit (signal)
Jan 31 07:23:59 compute-0 multipathd[228802]: --------shut down-------
Jan 31 07:24:00 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 07:24:00 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 07:24:00 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 07:24:00 compute-0 multipathd[231500]: --------start up--------
Jan 31 07:24:00 compute-0 multipathd[231500]: read /etc/multipath.conf
Jan 31 07:24:00 compute-0 multipathd[231500]: path checkers start up
Jan 31 07:24:00 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 07:24:00 compute-0 sudo[231492]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:00.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:00 compute-0 python3.9[231657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:24:01 compute-0 ceph-mon[74496]: pgmap v683: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:01 compute-0 sudo[231812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tripkyzccfntccvwhvpzhvmizzocomgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844241.4897144-1248-148184038486552/AnsiballZ_file.py'
Jan 31 07:24:01 compute-0 sudo[231812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:01 compute-0 python3.9[231814]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:01 compute-0 sudo[231812]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:02.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:02 compute-0 sudo[231964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxssqujeuqaheaedbpysyzccwehqiefs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844242.402479-1281-268352575186366/AnsiballZ_systemd_service.py'
Jan 31 07:24:02 compute-0 sudo[231964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:02 compute-0 python3.9[231966]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:24:02 compute-0 systemd[1]: Reloading.
Jan 31 07:24:03 compute-0 systemd-rc-local-generator[231991]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:24:03 compute-0 systemd-sysv-generator[231996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:24:03 compute-0 sudo[232001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:03 compute-0 sudo[232001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:03 compute-0 sudo[232001]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:03 compute-0 sudo[231964]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:03 compute-0 sudo[232027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:03 compute-0 sudo[232027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:03 compute-0 sudo[232027]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:03 compute-0 ceph-mon[74496]: pgmap v684: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:03 compute-0 python3.9[232201]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:24:04 compute-0 network[232218]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 07:24:04 compute-0 network[232219]: 'network-scripts' will be removed from distribution in near future.
Jan 31 07:24:04 compute-0 network[232220]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 07:24:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:04.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:04.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:04 compute-0 sudo[232231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:04 compute-0 sudo[232231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:04 compute-0 sudo[232231]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:04 compute-0 sudo[232259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:24:04 compute-0 sudo[232259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:04 compute-0 sudo[232259]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:04 compute-0 sudo[232289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:04 compute-0 sudo[232289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:04 compute-0 sudo[232289]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:04 compute-0 sudo[232316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:24:04 compute-0 sudo[232316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:05 compute-0 sudo[232316]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:05 compute-0 ceph-mon[74496]: pgmap v685: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:06.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:06.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:06 compute-0 ceph-mon[74496]: pgmap v686: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:24:06 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:24:06 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:24:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:24:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:24:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:24:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:24:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4c8d5d36-4d64-43a7-8418-6b8fca9e0182 does not exist
Jan 31 07:24:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c238614d-5ace-4f5c-b202-8e1116fda2d2 does not exist
Jan 31 07:24:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 90e0bb2b-2165-479b-84b1-da99bd7709b9 does not exist
Jan 31 07:24:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:24:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:24:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:24:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:24:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:24:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:24:07 compute-0 sudo[232497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:07 compute-0 sudo[232497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:07 compute-0 sudo[232497]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:07 compute-0 sudo[232522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:24:07 compute-0 sudo[232522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:07 compute-0 sudo[232522]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:07 compute-0 sudo[232547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:07 compute-0 sudo[232547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:07 compute-0 sudo[232547]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:07 compute-0 sudo[232572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:24:07 compute-0 sudo[232572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:24:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:24:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:24:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:24:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:24:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:08.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:08 compute-0 podman[232639]: 2026-01-31 07:24:08.26492331 +0000 UTC m=+0.051067102 container create 4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:24:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:08 compute-0 systemd[1]: Started libpod-conmon-4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22.scope.
Jan 31 07:24:08 compute-0 podman[232639]: 2026-01-31 07:24:08.237223196 +0000 UTC m=+0.023367008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:24:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:24:08 compute-0 podman[232639]: 2026-01-31 07:24:08.567853204 +0000 UTC m=+0.353997076 container init 4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:24:08 compute-0 podman[232639]: 2026-01-31 07:24:08.583290435 +0000 UTC m=+0.369434257 container start 4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:24:08 compute-0 podman[232639]: 2026-01-31 07:24:08.588929054 +0000 UTC m=+0.375072846 container attach 4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:24:08 compute-0 inspiring_bartik[232655]: 167 167
Jan 31 07:24:08 compute-0 systemd[1]: libpod-4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22.scope: Deactivated successfully.
Jan 31 07:24:08 compute-0 podman[232639]: 2026-01-31 07:24:08.593830705 +0000 UTC m=+0.379974537 container died 4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-74b47157713fbb1f94e7fc2a935304797cec0d41fca8962f149a3c5c02eda29e-merged.mount: Deactivated successfully.
Jan 31 07:24:08 compute-0 podman[232639]: 2026-01-31 07:24:08.702279181 +0000 UTC m=+0.488422973 container remove 4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_bartik, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:24:08 compute-0 systemd[1]: libpod-conmon-4151438e80ed1f1cf4801769e7d588b388e9d146f0c53134d19ed5c497f69b22.scope: Deactivated successfully.
Jan 31 07:24:08 compute-0 podman[232671]: 2026-01-31 07:24:08.719767073 +0000 UTC m=+0.356474267 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 07:24:08 compute-0 sudo[232821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoidylbmggnwgxqwpivqmspygalgccnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844248.3755968-1338-241981886086499/AnsiballZ_systemd_service.py'
Jan 31 07:24:08 compute-0 sudo[232821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:08 compute-0 podman[232832]: 2026-01-31 07:24:08.889270295 +0000 UTC m=+0.061500429 container create de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:24:08 compute-0 ceph-mon[74496]: pgmap v687: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:08 compute-0 systemd[1]: Started libpod-conmon-de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2.scope.
Jan 31 07:24:08 compute-0 podman[232832]: 2026-01-31 07:24:08.859192282 +0000 UTC m=+0.031422506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:24:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd17cf88c5c364c9fe3441d80fce4efe814e3a69e6925988ad3271b861bc5ab2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd17cf88c5c364c9fe3441d80fce4efe814e3a69e6925988ad3271b861bc5ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd17cf88c5c364c9fe3441d80fce4efe814e3a69e6925988ad3271b861bc5ab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd17cf88c5c364c9fe3441d80fce4efe814e3a69e6925988ad3271b861bc5ab2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd17cf88c5c364c9fe3441d80fce4efe814e3a69e6925988ad3271b861bc5ab2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:08 compute-0 podman[232832]: 2026-01-31 07:24:08.995437324 +0000 UTC m=+0.167667448 container init de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_taussig, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:24:09 compute-0 podman[232832]: 2026-01-31 07:24:09.003799861 +0000 UTC m=+0.176030025 container start de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_taussig, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:24:09 compute-0 podman[232832]: 2026-01-31 07:24:09.007979934 +0000 UTC m=+0.180210128 container attach de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_taussig, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:24:09 compute-0 python3.9[232826]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:09 compute-0 sudo[232821]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:09 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 07:24:09 compute-0 sudo[233006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmmcosjdmdwhbatmafdcmqrvqeweoldc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844249.2529767-1338-107000631508349/AnsiballZ_systemd_service.py'
Jan 31 07:24:09 compute-0 sudo[233006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:09 compute-0 funny_taussig[232849]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:24:09 compute-0 funny_taussig[232849]: --> relative data size: 1.0
Jan 31 07:24:09 compute-0 funny_taussig[232849]: --> All data devices are unavailable
Jan 31 07:24:09 compute-0 systemd[1]: libpod-de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2.scope: Deactivated successfully.
Jan 31 07:24:09 compute-0 podman[232832]: 2026-01-31 07:24:09.840067824 +0000 UTC m=+1.012297948 container died de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_taussig, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd17cf88c5c364c9fe3441d80fce4efe814e3a69e6925988ad3271b861bc5ab2-merged.mount: Deactivated successfully.
Jan 31 07:24:09 compute-0 podman[232832]: 2026-01-31 07:24:09.916345186 +0000 UTC m=+1.088575350 container remove de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_taussig, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:24:09 compute-0 systemd[1]: libpod-conmon-de9326e1469f43819f2365af90e9ae976e145a0f23d6b7abd787ed11f68832a2.scope: Deactivated successfully.
Jan 31 07:24:09 compute-0 python3.9[233008]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:09 compute-0 sudo[232572]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:09 compute-0 sudo[233006]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:09 compute-0 sudo[233032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:09 compute-0 sudo[233032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:09 compute-0 sudo[233032]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:10 compute-0 sudo[233065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:24:10 compute-0 sudo[233065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:10 compute-0 sudo[233065]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:10 compute-0 sudo[233111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:10 compute-0 sudo[233111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:10 compute-0 sudo[233111]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:10.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:10 compute-0 sudo[233177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:24:10 compute-0 sudo[233177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:10 compute-0 sudo[233294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijcmhwizhtaghnibcmjzhaeolqvzsica ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844250.1084538-1338-20905054352205/AnsiballZ_systemd_service.py'
Jan 31 07:24:10 compute-0 sudo[233294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:10 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 07:24:10 compute-0 podman[233326]: 2026-01-31 07:24:10.562677954 +0000 UTC m=+0.044654303 container create b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:24:10 compute-0 systemd[1]: Started libpod-conmon-b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a.scope.
Jan 31 07:24:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:24:10 compute-0 podman[233326]: 2026-01-31 07:24:10.542995788 +0000 UTC m=+0.024972157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:24:10 compute-0 podman[233326]: 2026-01-31 07:24:10.640725949 +0000 UTC m=+0.122702338 container init b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:24:10 compute-0 podman[233326]: 2026-01-31 07:24:10.652006728 +0000 UTC m=+0.133983107 container start b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:24:10 compute-0 pensive_chaplygin[233343]: 167 167
Jan 31 07:24:10 compute-0 podman[233326]: 2026-01-31 07:24:10.660683602 +0000 UTC m=+0.142659971 container attach b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:24:10 compute-0 systemd[1]: libpod-b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a.scope: Deactivated successfully.
Jan 31 07:24:10 compute-0 podman[233326]: 2026-01-31 07:24:10.662414294 +0000 UTC m=+0.144390653 container died b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-997e9ce31e494a54411bac717f1d9a3d6fa38e8df1b158eeb9c681d5948ebe33-merged.mount: Deactivated successfully.
Jan 31 07:24:10 compute-0 podman[233326]: 2026-01-31 07:24:10.709622149 +0000 UTC m=+0.191598538 container remove b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chaplygin, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:24:10 compute-0 systemd[1]: libpod-conmon-b391a3e1638baf81cc74714ea13b63c90254cc3f687c6e4d7343c1d1feec830a.scope: Deactivated successfully.
Jan 31 07:24:10 compute-0 python3.9[233304]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:10 compute-0 sudo[233294]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:10 compute-0 podman[233392]: 2026-01-31 07:24:10.899014112 +0000 UTC m=+0.052648380 container create 17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:24:10 compute-0 systemd[1]: Started libpod-conmon-17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93.scope.
Jan 31 07:24:10 compute-0 ceph-mon[74496]: pgmap v688: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:10 compute-0 podman[233392]: 2026-01-31 07:24:10.872350094 +0000 UTC m=+0.025984442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:24:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34259b705c19a3957fbd7ca27c4fad6008749ef5775a741cec369afe6cd1d5f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34259b705c19a3957fbd7ca27c4fad6008749ef5775a741cec369afe6cd1d5f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34259b705c19a3957fbd7ca27c4fad6008749ef5775a741cec369afe6cd1d5f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34259b705c19a3957fbd7ca27c4fad6008749ef5775a741cec369afe6cd1d5f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:10 compute-0 podman[233392]: 2026-01-31 07:24:10.992665183 +0000 UTC m=+0.146299491 container init 17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:24:11 compute-0 podman[233392]: 2026-01-31 07:24:11.000873715 +0000 UTC m=+0.154507993 container start 17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:24:11 compute-0 podman[233392]: 2026-01-31 07:24:11.007262974 +0000 UTC m=+0.160897262 container attach 17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:24:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:24:11.130 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:24:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:24:11.131 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:24:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:24:11.132 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:24:11 compute-0 sudo[233540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgcdflqvfvbifqduyvpustuxluzopyvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844250.9207232-1338-204335209165925/AnsiballZ_systemd_service.py'
Jan 31 07:24:11 compute-0 sudo[233540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:11 compute-0 python3.9[233542]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:11 compute-0 sudo[233540]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:11 compute-0 recursing_yonath[233438]: {
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:     "0": [
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:         {
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "devices": [
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "/dev/loop3"
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             ],
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "lv_name": "ceph_lv0",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "lv_size": "7511998464",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "name": "ceph_lv0",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "tags": {
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.cluster_name": "ceph",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.crush_device_class": "",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.encrypted": "0",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.osd_id": "0",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.type": "block",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:                 "ceph.vdo": "0"
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             },
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "type": "block",
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:             "vg_name": "ceph_vg0"
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:         }
Jan 31 07:24:11 compute-0 recursing_yonath[233438]:     ]
Jan 31 07:24:11 compute-0 recursing_yonath[233438]: }
Jan 31 07:24:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:11 compute-0 systemd[1]: libpod-17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93.scope: Deactivated successfully.
Jan 31 07:24:11 compute-0 podman[233392]: 2026-01-31 07:24:11.780876851 +0000 UTC m=+0.934511139 container died 17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-34259b705c19a3957fbd7ca27c4fad6008749ef5775a741cec369afe6cd1d5f0-merged.mount: Deactivated successfully.
Jan 31 07:24:11 compute-0 podman[233392]: 2026-01-31 07:24:11.85501867 +0000 UTC m=+1.008652928 container remove 17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:24:11 compute-0 systemd[1]: libpod-conmon-17b3d1e4340be41066cc4a63f9b04292e7194be97daf06b2dc26ed01fe5d7e93.scope: Deactivated successfully.
Jan 31 07:24:11 compute-0 sudo[233177]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:11 compute-0 sudo[233660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:11 compute-0 sudo[233660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:11 compute-0 sudo[233660]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:12 compute-0 sudo[233709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:24:12 compute-0 sudo[233709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:12 compute-0 sudo[233758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfykqxiyxnpqnajhyawjdipvqmbxdueq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844251.7315128-1338-170690551193014/AnsiballZ_systemd_service.py'
Jan 31 07:24:12 compute-0 sudo[233709]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:12 compute-0 sudo[233758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:12 compute-0 sudo[233763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:12 compute-0 sudo[233763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:12 compute-0 sudo[233763]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:12 compute-0 sudo[233788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:24:12 compute-0 sudo[233788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:12.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:12 compute-0 python3.9[233762]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:12 compute-0 sudo[233758]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:12 compute-0 podman[233854]: 2026-01-31 07:24:12.438591329 +0000 UTC m=+0.038955202 container create c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:24:12 compute-0 systemd[1]: Started libpod-conmon-c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a.scope.
Jan 31 07:24:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:24:12 compute-0 podman[233854]: 2026-01-31 07:24:12.508823492 +0000 UTC m=+0.109187385 container init c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:24:12 compute-0 podman[233854]: 2026-01-31 07:24:12.420324999 +0000 UTC m=+0.020688892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:24:12 compute-0 podman[233854]: 2026-01-31 07:24:12.519006203 +0000 UTC m=+0.119370076 container start c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:24:12 compute-0 podman[233854]: 2026-01-31 07:24:12.522924 +0000 UTC m=+0.123287873 container attach c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:24:12 compute-0 charming_gagarin[233890]: 167 167
Jan 31 07:24:12 compute-0 systemd[1]: libpod-c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a.scope: Deactivated successfully.
Jan 31 07:24:12 compute-0 podman[233854]: 2026-01-31 07:24:12.525684648 +0000 UTC m=+0.126048561 container died c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9c2bf3c3e959b8d5564f2c7191c3213a63e0271a04573429d66145cff02011a-merged.mount: Deactivated successfully.
Jan 31 07:24:12 compute-0 podman[233854]: 2026-01-31 07:24:12.567342046 +0000 UTC m=+0.167705929 container remove c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_gagarin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:24:12 compute-0 systemd[1]: libpod-conmon-c312343d6d3af89e393bd8463f0a4df9b3f912dc231d01fdf43020af649d404a.scope: Deactivated successfully.
Jan 31 07:24:12 compute-0 podman[233993]: 2026-01-31 07:24:12.708220071 +0000 UTC m=+0.037718781 container create 0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:24:12 compute-0 systemd[1]: Started libpod-conmon-0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955.scope.
Jan 31 07:24:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db687197f64241cbccaabba416d693196943c3e0b1af6fc6f58ee5813e5d45a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db687197f64241cbccaabba416d693196943c3e0b1af6fc6f58ee5813e5d45a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db687197f64241cbccaabba416d693196943c3e0b1af6fc6f58ee5813e5d45a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db687197f64241cbccaabba416d693196943c3e0b1af6fc6f58ee5813e5d45a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:24:12 compute-0 podman[233993]: 2026-01-31 07:24:12.691870469 +0000 UTC m=+0.021369199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:24:12 compute-0 podman[233993]: 2026-01-31 07:24:12.794297826 +0000 UTC m=+0.123796566 container init 0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:24:12 compute-0 podman[233993]: 2026-01-31 07:24:12.800272903 +0000 UTC m=+0.129771633 container start 0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:24:12 compute-0 podman[233993]: 2026-01-31 07:24:12.804296122 +0000 UTC m=+0.133794832 container attach 0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:24:12 compute-0 sudo[234065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebfjxjnjywthirynjidrltytwsfgpesl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844252.5524895-1338-186122589515279/AnsiballZ_systemd_service.py'
Jan 31 07:24:12 compute-0 sudo[234065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:13 compute-0 ceph-mon[74496]: pgmap v689: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:13 compute-0 python3.9[234067]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:13 compute-0 sudo[234065]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:13 compute-0 sudo[234230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rspfvtmxtkvrbmlkrkgzpgvxjmieggdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844253.3304653-1338-88251307104329/AnsiballZ_systemd_service.py'
Jan 31 07:24:13 compute-0 sudo[234230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]: {
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]:         "osd_id": 0,
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]:         "type": "bluestore"
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]:     }
Jan 31 07:24:13 compute-0 zealous_bardeen[234034]: }
Jan 31 07:24:13 compute-0 systemd[1]: libpod-0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955.scope: Deactivated successfully.
Jan 31 07:24:13 compute-0 podman[233993]: 2026-01-31 07:24:13.668601378 +0000 UTC m=+0.998100118 container died 0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2db687197f64241cbccaabba416d693196943c3e0b1af6fc6f58ee5813e5d45a-merged.mount: Deactivated successfully.
Jan 31 07:24:13 compute-0 podman[233993]: 2026-01-31 07:24:13.750347465 +0000 UTC m=+1.079846185 container remove 0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:24:13 compute-0 systemd[1]: libpod-conmon-0e6cd90b48db2d66d69371d1ef86bc368ca4fbbee93347f72f8dadb44d74a955.scope: Deactivated successfully.
Jan 31 07:24:13 compute-0 sudo[233788]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:24:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:24:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:13 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fcfd7f39-1f55-4a28-8a34-2d00d223cc33 does not exist
Jan 31 07:24:13 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cc62aaea-ad95-46c9-bf39-6a2c52f3490d does not exist
Jan 31 07:24:13 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a305284d-1829-4304-b461-4865a020ad58 does not exist
Jan 31 07:24:13 compute-0 sudo[234251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:13 compute-0 sudo[234251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:13 compute-0 sudo[234251]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:13 compute-0 sudo[234276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:24:13 compute-0 sudo[234276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:13 compute-0 sudo[234276]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:13 compute-0 python3.9[234235]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:14 compute-0 sudo[234230]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:14.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:14.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:14 compute-0 sudo[234451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aclrandrjfyvuceixaugmidnayandjvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844254.1204185-1338-997079373138/AnsiballZ_systemd_service.py'
Jan 31 07:24:14 compute-0 sudo[234451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:14 compute-0 python3.9[234453]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:24:14 compute-0 sudo[234451]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:14 compute-0 ceph-mon[74496]: pgmap v690: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:24:15 compute-0 sudo[234616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqpgysqedvsgxybcupjxwrsicikdtksn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844255.0839632-1515-56151829688107/AnsiballZ_file.py'
Jan 31 07:24:15 compute-0 sudo[234616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:15 compute-0 podman[234579]: 2026-01-31 07:24:15.394278497 +0000 UTC m=+0.072016329 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 07:24:15 compute-0 python3.9[234619]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:15 compute-0 sudo[234616]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:15 compute-0 sudo[234775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omgdffwdglosrajiyvuefmggcezfnprk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844255.6667233-1515-261770158017957/AnsiballZ_file.py'
Jan 31 07:24:15 compute-0 sudo[234775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:16 compute-0 python3.9[234777]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:16 compute-0 sudo[234775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:16.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:16.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:16 compute-0 sudo[234927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obdhgewubxrugrltearumudzwccsyqkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844256.2360594-1515-192784715592264/AnsiballZ_file.py'
Jan 31 07:24:16 compute-0 sudo[234927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:16 compute-0 python3.9[234929]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:16 compute-0 sudo[234927]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:16 compute-0 ceph-mon[74496]: pgmap v691: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:17 compute-0 sudo[235080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noyadihjfqieyubyyqgvsrcaffsmgskb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844256.8994882-1515-140665403648206/AnsiballZ_file.py'
Jan 31 07:24:17 compute-0 sudo[235080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:17 compute-0 python3.9[235082]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:17 compute-0 sudo[235080]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:17 compute-0 sudo[235232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cskdnaxeyxndoohkbpmoixigudqdrlly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844257.6411192-1515-167618835742980/AnsiballZ_file.py'
Jan 31 07:24:17 compute-0 sudo[235232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:18 compute-0 python3.9[235234]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:18 compute-0 sudo[235232]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:18.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:18.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:18 compute-0 sudo[235384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emlyrcjbeerfunywshyxkzmkcpvqxoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844258.2824097-1515-157650820657496/AnsiballZ_file.py'
Jan 31 07:24:18 compute-0 sudo[235384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:18 compute-0 python3.9[235386]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:18 compute-0 sudo[235384]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:18 compute-0 ceph-mon[74496]: pgmap v692: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:19 compute-0 sudo[235536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plsqxdxtgmxnlwitgzwaruhgtuakcqaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844258.917228-1515-36059557924185/AnsiballZ_file.py'
Jan 31 07:24:19 compute-0 sudo[235536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:19 compute-0 python3.9[235538]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:19 compute-0 sudo[235536]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:19 compute-0 sudo[235689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crqodddwrwzpqektchnmdwfzhxivcibp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844259.5109794-1515-157865662693529/AnsiballZ_file.py'
Jan 31 07:24:19 compute-0 sudo[235689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:24:19
Jan 31 07:24:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:24:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:24:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'images', '.mgr', 'cephfs.cephfs.data', 'vms']
Jan 31 07:24:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:24:19 compute-0 python3.9[235691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:19 compute-0 sudo[235689]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:24:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:20.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:20.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:20 compute-0 sudo[235841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfphiuvwytxxdumgimlrwxxshpnlmoug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844260.3398232-1686-59203838466889/AnsiballZ_file.py'
Jan 31 07:24:20 compute-0 sudo[235841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:20 compute-0 python3.9[235843]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:20 compute-0 sudo[235841]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:20 compute-0 ceph-mon[74496]: pgmap v693: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:21 compute-0 sudo[235993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whyfxfrwnhxnwpcutkyemburaxuawavo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844260.873627-1686-197240222616334/AnsiballZ_file.py'
Jan 31 07:24:21 compute-0 sudo[235993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:21 compute-0 python3.9[235995]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:21 compute-0 sudo[235993]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:21 compute-0 sudo[236146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfyuooqcbblvnikvmnsuoqmrucszobei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844261.646782-1686-30096354080194/AnsiballZ_file.py'
Jan 31 07:24:21 compute-0 sudo[236146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:22 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 07:24:22 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 07:24:22 compute-0 python3.9[236148]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:22 compute-0 sudo[236146]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:22.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:22.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:22 compute-0 sudo[236300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szdfjjsczorzpgqzgbjxulfnaucxnxmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844262.2523088-1686-126174273306968/AnsiballZ_file.py'
Jan 31 07:24:22 compute-0 sudo[236300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:22 compute-0 python3.9[236302]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:22 compute-0 sudo[236300]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:22 compute-0 ceph-mon[74496]: pgmap v694: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:23 compute-0 sudo[236452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swokahderooypzspsxmtdkaenmdoysvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844262.833874-1686-194627252174545/AnsiballZ_file.py'
Jan 31 07:24:23 compute-0 sudo[236452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:23 compute-0 python3.9[236454]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:23 compute-0 sudo[236452]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:23 compute-0 sudo[236457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:23 compute-0 sudo[236457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:23 compute-0 sudo[236457]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:23 compute-0 sudo[236505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:23 compute-0 sudo[236505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:23 compute-0 sudo[236505]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:23 compute-0 sudo[236655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zspiecwcaihcfiugqlmmqnojubqkmbre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844263.3835728-1686-214887264141768/AnsiballZ_file.py'
Jan 31 07:24:23 compute-0 sudo[236655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:23 compute-0 python3.9[236657]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:23 compute-0 sudo[236655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:24 compute-0 sudo[236807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryjvbwhmazfisprpwovfrczepkdopuus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844263.9335225-1686-250493979332858/AnsiballZ_file.py'
Jan 31 07:24:24 compute-0 sudo[236807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:24.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:24.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:24 compute-0 python3.9[236809]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:24 compute-0 sudo[236807]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:24 compute-0 sudo[236959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qanmscosobplzevwtzqezsvvopqlwwfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844264.4968436-1686-12805622965020/AnsiballZ_file.py'
Jan 31 07:24:24 compute-0 sudo[236959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:24 compute-0 python3.9[236961]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:24 compute-0 ceph-mon[74496]: pgmap v695: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:24 compute-0 sudo[236959]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:25 compute-0 sudo[237112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlffnfgwaqclsvtnmklspmkjevafurvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844265.3014114-1860-142990616645561/AnsiballZ_command.py'
Jan 31 07:24:25 compute-0 sudo[237112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:25 compute-0 python3.9[237114]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:25 compute-0 sudo[237112]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:26.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:26 compute-0 python3.9[237266]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 07:24:27 compute-0 ceph-mon[74496]: pgmap v696: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:27 compute-0 sudo[237416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkmtrfsaeiipsaydmtzmogtyooffnpgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844266.9185634-1914-45975912887554/AnsiballZ_systemd_service.py'
Jan 31 07:24:27 compute-0 sudo[237416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:27 compute-0 python3.9[237418]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:24:27 compute-0 systemd[1]: Reloading.
Jan 31 07:24:27 compute-0 systemd-sysv-generator[237450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:24:27 compute-0 systemd-rc-local-generator[237447]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:24:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:27 compute-0 sudo[237416]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:28.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:28 compute-0 sudo[237604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gahevmgfpoxklcwbphjsipoeneorgiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844268.0436895-1938-2212626008615/AnsiballZ_command.py'
Jan 31 07:24:28 compute-0 sudo[237604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:28.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:28 compute-0 python3.9[237606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:28 compute-0 sudo[237604]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:28 compute-0 sudo[237757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swphbmvyhinewsmupjmghyghmtjymzlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844268.6323006-1938-122772524985748/AnsiballZ_command.py'
Jan 31 07:24:28 compute-0 sudo[237757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:29 compute-0 python3.9[237759]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:29 compute-0 ceph-mon[74496]: pgmap v697: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:29 compute-0 sudo[237757]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:29 compute-0 sudo[237911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovdohkebynggreqtlkgusplidjfytmgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844269.2710602-1938-211404457220269/AnsiballZ_command.py'
Jan 31 07:24:29 compute-0 sudo[237911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:29 compute-0 python3.9[237913]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:29 compute-0 sudo[237911]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:30 compute-0 sudo[238064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sveqtkkjywyyiqumfcdyerarvvfrxuei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844269.8313854-1938-279872743477330/AnsiballZ_command.py'
Jan 31 07:24:30 compute-0 sudo[238064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:30.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:30 compute-0 python3.9[238066]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:30 compute-0 sudo[238064]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:30.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:30 compute-0 sudo[238217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipgcfwrkhaxdrkkgxtyyoweoddxvgyyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844270.422583-1938-192264768946051/AnsiballZ_command.py'
Jan 31 07:24:30 compute-0 sudo[238217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:30 compute-0 python3.9[238219]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:31 compute-0 ceph-mon[74496]: pgmap v698: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:31 compute-0 sudo[238217]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:32.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:32 compute-0 sudo[238371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcqgunmroqloypgweqpusdlnilepvizi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844272.0281634-1938-50616800372346/AnsiballZ_command.py'
Jan 31 07:24:32 compute-0 sudo[238371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:32.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:32 compute-0 python3.9[238373]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:32 compute-0 sudo[238371]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:32 compute-0 sudo[238524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzlnwbncjpbaxhwbiszitjwaqwhqrnnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844272.646172-1938-152867466921338/AnsiballZ_command.py'
Jan 31 07:24:32 compute-0 sudo[238524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:33 compute-0 python3.9[238526]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:33 compute-0 sudo[238524]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:33 compute-0 ceph-mon[74496]: pgmap v699: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:33 compute-0 sudo[238678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smdanbqxgyjgnbyxvsuagarkzamolahe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844273.2834365-1938-10150308245400/AnsiballZ_command.py'
Jan 31 07:24:33 compute-0 sudo[238678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:33 compute-0 python3.9[238680]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:24:33 compute-0 sudo[238678]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:34.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:34.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:24:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3724 writes, 16K keys, 3723 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3724 writes, 3723 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1348 writes, 5546 keys, 1348 commit groups, 1.0 writes per commit group, ingest: 9.55 MB, 0.02 MB/s
                                           Interval WAL: 1348 writes, 1348 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     85.9      0.22              0.05         7    0.031       0      0       0.0       0.0
                                             L6      1/0    7.69 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7     55.0     45.1      1.12              0.11         6    0.186     26K   3300       0.0       0.0
                                            Sum      1/0    7.69 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7     46.0     51.8      1.34              0.16        13    0.103     26K   3300       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     33.0     33.3      1.01              0.08         6    0.168     14K   1990       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     55.0     45.1      1.12              0.11         6    0.186     26K   3300       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     87.5      0.21              0.05         6    0.036       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.018, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 1.3 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 308.00 MB usage: 2.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000101 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(103,1.82 MB,0.590134%) FilterBlock(14,82.42 KB,0.0261332%) IndexBlock(14,168.28 KB,0.0533562%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:24:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:24:35 compute-0 ceph-mon[74496]: pgmap v700: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:35 compute-0 sudo[238832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnpqvalcrtyrziasgxlirqmgpmdiiiwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844275.142161-2145-280075173370568/AnsiballZ_file.py'
Jan 31 07:24:35 compute-0 sudo[238832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:35 compute-0 python3.9[238834]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:35 compute-0 sudo[238832]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:35 compute-0 sudo[238984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aamdrvcahofbttftuvwaychyauqdlnmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844275.7465417-2145-82124348073714/AnsiballZ_file.py'
Jan 31 07:24:35 compute-0 sudo[238984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:36 compute-0 python3.9[238986]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:36.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:36 compute-0 sudo[238984]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:24:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:36.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:24:36 compute-0 sudo[239136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onriegaieplqcjqmmwoqyzhdawegcydn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844276.3430367-2145-246400349836030/AnsiballZ_file.py'
Jan 31 07:24:36 compute-0 sudo[239136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:36 compute-0 python3.9[239138]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:36 compute-0 sudo[239136]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:37 compute-0 ceph-mon[74496]: pgmap v701: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:37 compute-0 sudo[239289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbigulbwmfrfptoxexntfuigryyucwzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844277.2613041-2211-75788576876176/AnsiballZ_file.py'
Jan 31 07:24:37 compute-0 sudo[239289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:37 compute-0 python3.9[239291]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:37 compute-0 sudo[239289]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:38 compute-0 sudo[239441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnkeypeosldprepjjqomwadhatfelovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844277.8549562-2211-132664531057513/AnsiballZ_file.py'
Jan 31 07:24:38 compute-0 sudo[239441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:38.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:38.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:38 compute-0 python3.9[239443]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:38 compute-0 sudo[239441]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:38 compute-0 sudo[239593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiocvyjmdznxpufheuothmrcnybujaqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844278.4837632-2211-12138455992145/AnsiballZ_file.py'
Jan 31 07:24:38 compute-0 sudo[239593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:38 compute-0 podman[239595]: 2026-01-31 07:24:38.899063873 +0000 UTC m=+0.123671682 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:24:38 compute-0 python3.9[239596]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:39 compute-0 sudo[239593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:39 compute-0 ceph-mon[74496]: pgmap v702: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:39 compute-0 sudo[239775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdkwklvezntynlbgryzswuxscyxkkwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844279.1588805-2211-60905410326580/AnsiballZ_file.py'
Jan 31 07:24:39 compute-0 sudo[239775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:39 compute-0 python3.9[239777]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:39 compute-0 sudo[239775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:40 compute-0 sudo[239927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyqcuqjgfopvdyigzyznryncqjlelwyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844279.770127-2211-124085333220/AnsiballZ_file.py'
Jan 31 07:24:40 compute-0 sudo[239927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:40.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:40 compute-0 python3.9[239929]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:40 compute-0 sudo[239927]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:40.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:40 compute-0 sudo[240079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akbvkxucxnobedtfccjjwwntqwjzhsax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844280.4039145-2211-275002853009419/AnsiballZ_file.py'
Jan 31 07:24:40 compute-0 sudo[240079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:40 compute-0 python3.9[240081]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:40 compute-0 sudo[240079]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:41 compute-0 sudo[240232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyufkpbermvtfioohmblohqfeebxguoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844281.031341-2211-205182368394788/AnsiballZ_file.py'
Jan 31 07:24:41 compute-0 ceph-mon[74496]: pgmap v703: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:41 compute-0 sudo[240232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:41 compute-0 python3.9[240234]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:41 compute-0 sudo[240232]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:42.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:43 compute-0 ceph-mon[74496]: pgmap v704: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:43 compute-0 sudo[240260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:43 compute-0 sudo[240260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:43 compute-0 sudo[240260]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:43 compute-0 sudo[240285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:24:43 compute-0 sudo[240285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:24:43 compute-0 sudo[240285]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:44.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:44 compute-0 ceph-mon[74496]: pgmap v705: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:45 compute-0 podman[240311]: 2026-01-31 07:24:45.896304391 +0000 UTC m=+0.065480057 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 07:24:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:46.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:46 compute-0 ceph-mon[74496]: pgmap v706: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:47 compute-0 sudo[240456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqbtkglhumafthlbzdpppzjdwafdncfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844286.8660188-2536-249268854398794/AnsiballZ_getent.py'
Jan 31 07:24:47 compute-0 sudo[240456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:47 compute-0 python3.9[240458]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 07:24:47 compute-0 sudo[240456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:48 compute-0 sudo[240609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klxkinummdpamttkmarsoevfqvbybhsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844287.7274365-2560-115808223493728/AnsiballZ_group.py'
Jan 31 07:24:48 compute-0 sudo[240609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:48 compute-0 python3.9[240611]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 07:24:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:48.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:48 compute-0 groupadd[240612]: group added to /etc/group: name=nova, GID=42436
Jan 31 07:24:48 compute-0 groupadd[240612]: group added to /etc/gshadow: name=nova
Jan 31 07:24:48 compute-0 groupadd[240612]: new group: name=nova, GID=42436
Jan 31 07:24:48 compute-0 sudo[240609]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:48 compute-0 ceph-mon[74496]: pgmap v707: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:49 compute-0 sudo[240768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsigcwldycbqaxecknryvxlzoljjmfpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844288.9479735-2584-10463909344759/AnsiballZ_user.py'
Jan 31 07:24:49 compute-0 sudo[240768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:49 compute-0 python3.9[240770]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 07:24:49 compute-0 useradd[240772]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 31 07:24:49 compute-0 useradd[240772]: add 'nova' to group 'libvirt'
Jan 31 07:24:49 compute-0 useradd[240772]: add 'nova' to shadow group 'libvirt'
Jan 31 07:24:49 compute-0 sudo[240768]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:24:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:50.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:50.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:50 compute-0 ceph-mon[74496]: pgmap v708: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:51 compute-0 sshd-session[240803]: Accepted publickey for zuul from 192.168.122.30 port 47680 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 07:24:51 compute-0 systemd-logind[816]: New session 51 of user zuul.
Jan 31 07:24:51 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 31 07:24:51 compute-0 sshd-session[240803]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:24:51 compute-0 sshd-session[240806]: Received disconnect from 192.168.122.30 port 47680:11: disconnected by user
Jan 31 07:24:51 compute-0 sshd-session[240806]: Disconnected from user zuul 192.168.122.30 port 47680
Jan 31 07:24:51 compute-0 sshd-session[240803]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:24:51 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 07:24:51 compute-0 systemd-logind[816]: Session 51 logged out. Waiting for processes to exit.
Jan 31 07:24:51 compute-0 systemd-logind[816]: Removed session 51.
Jan 31 07:24:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:51 compute-0 python3.9[240957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:52.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:52.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:52 compute-0 python3.9[241078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844291.3882127-2659-262998557676581/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:52 compute-0 ceph-mon[74496]: pgmap v709: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:53 compute-0 python3.9[241228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:53 compute-0 python3.9[241305]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:54 compute-0 python3.9[241455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:54.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:54.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:54 compute-0 python3.9[241576]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844293.6961544-2659-271733721824710/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:55 compute-0 ceph-mon[74496]: pgmap v710: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:55 compute-0 python3.9[241726]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:55 compute-0 python3.9[241848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844294.78838-2659-90194715316063/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:24:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:56.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:24:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:56.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:56 compute-0 python3.9[241998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:56 compute-0 python3.9[242119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844295.9509318-2659-189554845313896/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:57 compute-0 ceph-mon[74496]: pgmap v711: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:24:57 compute-0 python3.9[242270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:24:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:58 compute-0 python3.9[242391]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844297.0588584-2659-158045989775300/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:24:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:58.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:24:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:24:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:24:58 compute-0 sudo[242541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ervakbnpnittkyjjolwfugefsiikbqjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844298.2724814-2908-191189919719543/AnsiballZ_file.py'
Jan 31 07:24:58 compute-0 sudo[242541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:58 compute-0 python3.9[242543]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:58 compute-0 sudo[242541]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:59 compute-0 ceph-mon[74496]: pgmap v712: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:59 compute-0 sudo[242694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqiqjxtnaayvlnewtmylgwrdvgjvvuzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844298.990032-2932-209161475077478/AnsiballZ_copy.py'
Jan 31 07:24:59 compute-0 sudo[242694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:24:59 compute-0 python3.9[242696]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:24:59 compute-0 sudo[242694]: pam_unix(sudo:session): session closed for user root
Jan 31 07:24:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:24:59 compute-0 sudo[242846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueygebgwofsmdrbvpghvxgtzjwwinubq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844299.7129781-2956-280300599576036/AnsiballZ_stat.py'
Jan 31 07:24:59 compute-0 sudo[242846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:00 compute-0 python3.9[242848]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:00 compute-0 sudo[242846]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:00.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:00 compute-0 sudo[242998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxezmkxojtyvwxwafdredvlpccooddke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844300.4037342-2980-203292973484982/AnsiballZ_stat.py'
Jan 31 07:25:00 compute-0 sudo[242998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:00 compute-0 python3.9[243000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:25:00 compute-0 sudo[242998]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:01 compute-0 ceph-mon[74496]: pgmap v713: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:01 compute-0 sudo[243122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqerbtetzbbixujrjymxbrzccrxupgzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844300.4037342-2980-203292973484982/AnsiballZ_copy.py'
Jan 31 07:25:01 compute-0 sudo[243122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:01 compute-0 python3.9[243124]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769844300.4037342-2980-203292973484982/.source _original_basename=.qxk4g348 follow=False checksum=e1af378f412c9faac574ffab83d7a7b2db2e70f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 07:25:01 compute-0 sudo[243122]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:02.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:02 compute-0 python3.9[243276]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:03 compute-0 python3.9[243428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:25:03 compute-0 ceph-mon[74496]: pgmap v714: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:03 compute-0 python3.9[243550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844302.580499-3058-199257268139539/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:25:03 compute-0 sudo[243551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:03 compute-0 sudo[243551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:03 compute-0 sudo[243551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:03 compute-0 sudo[243576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:03 compute-0 sudo[243576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:03 compute-0 sudo[243576]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:04 compute-0 python3.9[243750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:25:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:04.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:04.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:04 compute-0 python3.9[243871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844303.7505455-3103-58679109773091/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:25:05 compute-0 ceph-mon[74496]: pgmap v715: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:05 compute-0 sudo[244022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imyvipcxlxsalovbujjvcgrzyjfpnxdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844305.2248487-3154-31738858078051/AnsiballZ_container_config_data.py'
Jan 31 07:25:05 compute-0 sudo[244022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:05 compute-0 python3.9[244024]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 07:25:05 compute-0 sudo[244022]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:06.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:06.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:06 compute-0 sudo[244174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oabmnyiobaxonuvqupgqjzloiqvcyxpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844306.2360017-3187-114909498991750/AnsiballZ_container_config_hash.py'
Jan 31 07:25:06 compute-0 sudo[244174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:06 compute-0 python3.9[244176]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:25:06 compute-0 sudo[244174]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:07 compute-0 ceph-mon[74496]: pgmap v716: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:07 compute-0 sudo[244327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uduyomyprttfetsiwccjruozzcaqemrz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769844307.4065938-3217-63313576666229/AnsiballZ_edpm_container_manage.py'
Jan 31 07:25:07 compute-0 sudo[244327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:08.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:08 compute-0 python3[244329]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:25:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:08 compute-0 ceph-mon[74496]: pgmap v717: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:09 compute-0 podman[244366]: 2026-01-31 07:25:09.916962956 +0000 UTC m=+0.087190383 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:25:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:10.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:11 compute-0 ceph-mon[74496]: pgmap v718: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:25:11.132 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:25:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:25:11.133 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:25:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:25:11.133 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:25:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:12.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:14 compute-0 sudo[244427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:14 compute-0 sudo[244427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:14 compute-0 sudo[244427]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:14.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:14 compute-0 sudo[244453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:25:14 compute-0 sudo[244453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:14 compute-0 sudo[244453]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:14 compute-0 sudo[244478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:14 compute-0 sudo[244478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:14 compute-0 sudo[244478]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:14.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:14 compute-0 sudo[244503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:25:14 compute-0 sudo[244503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:14 compute-0 ceph-mon[74496]: pgmap v719: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:16.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:16.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:17 compute-0 ceph-mon[74496]: pgmap v720: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:18.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:18.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:25:19
Jan 31 07:25:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:25:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:25:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'vms']
Jan 31 07:25:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:25:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:20.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:20 compute-0 ceph-mon[74496]: pgmap v721: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:25:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:25:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:22.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:22 compute-0 ceph-mon[74496]: pgmap v722: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:22 compute-0 ceph-mon[74496]: pgmap v723: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:22 compute-0 podman[244548]: 2026-01-31 07:25:22.761722942 +0000 UTC m=+5.931157645 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:25:22 compute-0 podman[244342]: 2026-01-31 07:25:22.800627031 +0000 UTC m=+14.454094475 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:25:22 compute-0 sudo[244503]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:22 compute-0 podman[244597]: 2026-01-31 07:25:22.979552836 +0000 UTC m=+0.098669695 container create fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.build-date=20260127, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:25:22 compute-0 podman[244597]: 2026-01-31 07:25:22.90228063 +0000 UTC m=+0.021397499 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:25:22 compute-0 python3[244329]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 07:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5b828425-1097-43c7-a044-97557fdb1c50 does not exist
Jan 31 07:25:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9b9ee6b7-e9bd-4266-b1f9-0cf080b4cdbf does not exist
Jan 31 07:25:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 24ad010c-7df1-4a22-9644-3ecf8b0f5194 does not exist
Jan 31 07:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:25:23 compute-0 sudo[244327]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-0 sudo[244647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:23 compute-0 sudo[244647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:23 compute-0 sudo[244647]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-0 sudo[244681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:25:23 compute-0 sudo[244681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:23 compute-0 sudo[244681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-0 sudo[244722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:23 compute-0 sudo[244722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:23 compute-0 sudo[244722]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-0 sudo[244768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:25:23 compute-0 sudo[244768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:23 compute-0 sudo[244922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jygmteebxlulltajmxtxkjymiojjxvkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844323.2815259-3241-42758487877724/AnsiballZ_stat.py'
Jan 31 07:25:23 compute-0 sudo[244922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:23 compute-0 podman[244941]: 2026-01-31 07:25:23.682851439 +0000 UTC m=+0.061017766 container create 39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:25:23 compute-0 ceph-mon[74496]: pgmap v724: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:25:23 compute-0 systemd[1]: Started libpod-conmon-39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b.scope.
Jan 31 07:25:23 compute-0 sudo[244955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:23 compute-0 sudo[244955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:23 compute-0 sudo[244955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-0 podman[244941]: 2026-01-31 07:25:23.64721908 +0000 UTC m=+0.025385447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:25:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:23 compute-0 python3.9[244927]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:23 compute-0 podman[244941]: 2026-01-31 07:25:23.761344826 +0000 UTC m=+0.139511153 container init 39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:25:23 compute-0 podman[244941]: 2026-01-31 07:25:23.769606889 +0000 UTC m=+0.147773186 container start 39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:25:23 compute-0 podman[244941]: 2026-01-31 07:25:23.77568448 +0000 UTC m=+0.153850797 container attach 39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:25:23 compute-0 blissful_feynman[244980]: 167 167
Jan 31 07:25:23 compute-0 systemd[1]: libpod-39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b.scope: Deactivated successfully.
Jan 31 07:25:23 compute-0 podman[244941]: 2026-01-31 07:25:23.786527387 +0000 UTC m=+0.164693674 container died 39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:25:23 compute-0 sudo[244922]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-0 sudo[244985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:23 compute-0 sudo[244985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:23 compute-0 sudo[244985]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff2824319d0f26b9b95cf9a8b5e992f0b358a726dd355e655ab0a8d4a5261346-merged.mount: Deactivated successfully.
Jan 31 07:25:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:23 compute-0 podman[244941]: 2026-01-31 07:25:23.830151264 +0000 UTC m=+0.208317551 container remove 39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:25:23 compute-0 systemd[1]: libpod-conmon-39d2d3e42d5722163db10b5e8aecd9b2a813339dd729222e3d4e96a273c0ba6b.scope: Deactivated successfully.
Jan 31 07:25:24 compute-0 podman[245055]: 2026-01-31 07:25:24.003422028 +0000 UTC m=+0.062366859 container create b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:25:24 compute-0 systemd[1]: Started libpod-conmon-b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e.scope.
Jan 31 07:25:24 compute-0 podman[245055]: 2026-01-31 07:25:23.97916901 +0000 UTC m=+0.038113891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:25:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17b34fcf75cec08d0924db52ddab13eb0874355b00820d71ec33e070006bbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17b34fcf75cec08d0924db52ddab13eb0874355b00820d71ec33e070006bbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17b34fcf75cec08d0924db52ddab13eb0874355b00820d71ec33e070006bbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17b34fcf75cec08d0924db52ddab13eb0874355b00820d71ec33e070006bbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a17b34fcf75cec08d0924db52ddab13eb0874355b00820d71ec33e070006bbd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:24 compute-0 podman[245055]: 2026-01-31 07:25:24.105454716 +0000 UTC m=+0.164399607 container init b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:25:24 compute-0 podman[245055]: 2026-01-31 07:25:24.119338118 +0000 UTC m=+0.178282959 container start b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:25:24 compute-0 podman[245055]: 2026-01-31 07:25:24.123722336 +0000 UTC m=+0.182667227 container attach b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:25:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:24.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:24.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:24 compute-0 sudo[245201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gugwdqcjddkccftxhqweairzhyrxgpvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844324.4700563-3277-126882487875401/AnsiballZ_container_config_data.py'
Jan 31 07:25:24 compute-0 sudo[245201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:24 compute-0 ceph-mon[74496]: pgmap v725: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:24 compute-0 python3.9[245203]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 07:25:24 compute-0 sudo[245201]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:24 compute-0 amazing_bartik[245071]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:25:24 compute-0 amazing_bartik[245071]: --> relative data size: 1.0
Jan 31 07:25:24 compute-0 amazing_bartik[245071]: --> All data devices are unavailable
Jan 31 07:25:25 compute-0 systemd[1]: libpod-b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e.scope: Deactivated successfully.
Jan 31 07:25:25 compute-0 podman[245055]: 2026-01-31 07:25:25.025989219 +0000 UTC m=+1.084934040 container died b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a17b34fcf75cec08d0924db52ddab13eb0874355b00820d71ec33e070006bbd-merged.mount: Deactivated successfully.
Jan 31 07:25:25 compute-0 podman[245055]: 2026-01-31 07:25:25.073591964 +0000 UTC m=+1.132536765 container remove b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:25:25 compute-0 systemd[1]: libpod-conmon-b1072ab237db1251505f6fbb4680e2911793d559acdf318ae16ae97bf3304c3e.scope: Deactivated successfully.
Jan 31 07:25:25 compute-0 sudo[244768]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:25 compute-0 sudo[245250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:25 compute-0 sudo[245250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:25 compute-0 sudo[245250]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:25 compute-0 sudo[245275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:25:25 compute-0 sudo[245275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:25 compute-0 sudo[245275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:25 compute-0 sudo[245302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:25 compute-0 sudo[245302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:25 compute-0 sudo[245302]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:25 compute-0 sudo[245328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:25:25 compute-0 sudo[245328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:25 compute-0 sshd-session[245300]: Invalid user sol from 45.148.10.240 port 41792
Jan 31 07:25:25 compute-0 podman[245490]: 2026-01-31 07:25:25.647100034 +0000 UTC m=+0.039832784 container create 174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:25:25 compute-0 systemd[1]: Started libpod-conmon-174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50.scope.
Jan 31 07:25:25 compute-0 sudo[245531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edfkkbqocsnhduutmelyvmhwggyoimvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844325.416459-3310-103607216099652/AnsiballZ_container_config_hash.py'
Jan 31 07:25:25 compute-0 sudo[245531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:25 compute-0 sshd-session[245300]: Connection closed by invalid user sol 45.148.10.240 port 41792 [preauth]
Jan 31 07:25:25 compute-0 podman[245490]: 2026-01-31 07:25:25.628871294 +0000 UTC m=+0.021604074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:25:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:25 compute-0 podman[245490]: 2026-01-31 07:25:25.760318547 +0000 UTC m=+0.153051317 container init 174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:25:25 compute-0 podman[245490]: 2026-01-31 07:25:25.768218113 +0000 UTC m=+0.160950863 container start 174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:25:25 compute-0 podman[245490]: 2026-01-31 07:25:25.773189485 +0000 UTC m=+0.165922265 container attach 174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:25:25 compute-0 sharp_haslett[245535]: 167 167
Jan 31 07:25:25 compute-0 systemd[1]: libpod-174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50.scope: Deactivated successfully.
Jan 31 07:25:25 compute-0 podman[245490]: 2026-01-31 07:25:25.774875127 +0000 UTC m=+0.167607897 container died 174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6226d0595b8c133710d0b5f38daefdba96f6dab6bb53efce0e046147b805208-merged.mount: Deactivated successfully.
Jan 31 07:25:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:25 compute-0 podman[245490]: 2026-01-31 07:25:25.813421948 +0000 UTC m=+0.206154708 container remove 174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:25:25 compute-0 systemd[1]: libpod-conmon-174c992c5e9832ea9bee4a472beffdf63482ec3bd618ae685176bed674532b50.scope: Deactivated successfully.
Jan 31 07:25:25 compute-0 python3.9[245537]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 07:25:25 compute-0 podman[245558]: 2026-01-31 07:25:25.951757731 +0000 UTC m=+0.042007997 container create eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:25:25 compute-0 sudo[245531]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:25 compute-0 systemd[1]: Started libpod-conmon-eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc.scope.
Jan 31 07:25:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb9ccc4d9bedb6f4a6c9b5cc3ab68c5b2ac308a43949bc61b879b4719d02508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb9ccc4d9bedb6f4a6c9b5cc3ab68c5b2ac308a43949bc61b879b4719d02508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb9ccc4d9bedb6f4a6c9b5cc3ab68c5b2ac308a43949bc61b879b4719d02508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efb9ccc4d9bedb6f4a6c9b5cc3ab68c5b2ac308a43949bc61b879b4719d02508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:26 compute-0 podman[245558]: 2026-01-31 07:25:25.928795875 +0000 UTC m=+0.019046191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:25:26 compute-0 podman[245558]: 2026-01-31 07:25:26.039223939 +0000 UTC m=+0.129474295 container init eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:25:26 compute-0 podman[245558]: 2026-01-31 07:25:26.045804921 +0000 UTC m=+0.136055197 container start eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:25:26 compute-0 podman[245558]: 2026-01-31 07:25:26.049218916 +0000 UTC m=+0.139469222 container attach eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:25:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:26.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:26.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:26 compute-0 sudo[245728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxbtuexbsgcfbzkkzntljkojjsigrutp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769844326.31769-3340-149924334821357/AnsiballZ_edpm_container_manage.py'
Jan 31 07:25:26 compute-0 sudo[245728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]: {
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:     "0": [
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:         {
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "devices": [
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "/dev/loop3"
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             ],
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "lv_name": "ceph_lv0",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "lv_size": "7511998464",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "name": "ceph_lv0",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "tags": {
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.cluster_name": "ceph",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.crush_device_class": "",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.encrypted": "0",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.osd_id": "0",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.type": "block",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:                 "ceph.vdo": "0"
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             },
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "type": "block",
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:             "vg_name": "ceph_vg0"
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:         }
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]:     ]
Jan 31 07:25:26 compute-0 hopeful_jackson[245582]: }
Jan 31 07:25:26 compute-0 systemd[1]: libpod-eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc.scope: Deactivated successfully.
Jan 31 07:25:26 compute-0 podman[245558]: 2026-01-31 07:25:26.806342296 +0000 UTC m=+0.896592642 container died eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-efb9ccc4d9bedb6f4a6c9b5cc3ab68c5b2ac308a43949bc61b879b4719d02508-merged.mount: Deactivated successfully.
Jan 31 07:25:26 compute-0 python3[245730]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 07:25:26 compute-0 podman[245558]: 2026-01-31 07:25:26.878949188 +0000 UTC m=+0.969199494 container remove eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jackson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:25:26 compute-0 ceph-mon[74496]: pgmap v726: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:26 compute-0 systemd[1]: libpod-conmon-eee6655fa6cd7660f5b77435578294677bf0a325fe215135dad79c79a6cfd2fc.scope: Deactivated successfully.
Jan 31 07:25:26 compute-0 sudo[245328]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:26 compute-0 sudo[245767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:26 compute-0 sudo[245767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:26 compute-0 sudo[245767]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:27 compute-0 sudo[245812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:25:27 compute-0 sudo[245812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:27 compute-0 sudo[245812]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:27 compute-0 podman[245805]: 2026-01-31 07:25:27.081896885 +0000 UTC m=+0.077484302 container create 08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm)
Jan 31 07:25:27 compute-0 podman[245805]: 2026-01-31 07:25:27.043813936 +0000 UTC m=+0.039401413 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 07:25:27 compute-0 python3[245730]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 07:25:27 compute-0 sudo[245844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:27 compute-0 sudo[245844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:27 compute-0 sudo[245844]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:27 compute-0 sudo[245883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:25:27 compute-0 sudo[245883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:27 compute-0 sudo[245728]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:27 compute-0 podman[246036]: 2026-01-31 07:25:27.513943766 +0000 UTC m=+0.046208621 container create 77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kowalevski, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:25:27 compute-0 systemd[1]: Started libpod-conmon-77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f.scope.
Jan 31 07:25:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:27 compute-0 podman[246036]: 2026-01-31 07:25:27.487170586 +0000 UTC m=+0.019435501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:25:27 compute-0 podman[246036]: 2026-01-31 07:25:27.591245143 +0000 UTC m=+0.123510008 container init 77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:25:27 compute-0 podman[246036]: 2026-01-31 07:25:27.601917187 +0000 UTC m=+0.134182012 container start 77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kowalevski, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:25:27 compute-0 podman[246036]: 2026-01-31 07:25:27.605358861 +0000 UTC m=+0.137623696 container attach 77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:25:27 compute-0 gracious_kowalevski[246092]: 167 167
Jan 31 07:25:27 compute-0 systemd[1]: libpod-77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f.scope: Deactivated successfully.
Jan 31 07:25:27 compute-0 podman[246036]: 2026-01-31 07:25:27.608467108 +0000 UTC m=+0.140731943 container died 77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6746c9382a555084720ebfcc2f96c3c0cc2fe077f99dab074427874a52883e72-merged.mount: Deactivated successfully.
Jan 31 07:25:27 compute-0 sudo[246137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czrzuqwavoavnznezqoqtdvmumdiimon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844327.3922117-3364-256888738324671/AnsiballZ_stat.py'
Jan 31 07:25:27 compute-0 podman[246036]: 2026-01-31 07:25:27.650262899 +0000 UTC m=+0.182527764 container remove 77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_kowalevski, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:25:27 compute-0 sudo[246137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:27 compute-0 systemd[1]: libpod-conmon-77e4168bbc26846feae34415ce72cf16a1b6d9dd0db0569222409ce009624b0f.scope: Deactivated successfully.
Jan 31 07:25:27 compute-0 podman[246152]: 2026-01-31 07:25:27.813646751 +0000 UTC m=+0.054868745 container create a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:25:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:27 compute-0 systemd[1]: Started libpod-conmon-a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d.scope.
Jan 31 07:25:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1acf5c8df1b0265b15883708f256d6dc443a15fce3e79aadb94b0c9b83f86de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1acf5c8df1b0265b15883708f256d6dc443a15fce3e79aadb94b0c9b83f86de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1acf5c8df1b0265b15883708f256d6dc443a15fce3e79aadb94b0c9b83f86de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1acf5c8df1b0265b15883708f256d6dc443a15fce3e79aadb94b0c9b83f86de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:27 compute-0 podman[246152]: 2026-01-31 07:25:27.791153536 +0000 UTC m=+0.032375560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:25:27 compute-0 python3.9[246144]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:27 compute-0 podman[246152]: 2026-01-31 07:25:27.900463173 +0000 UTC m=+0.141685227 container init a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:25:27 compute-0 podman[246152]: 2026-01-31 07:25:27.911423253 +0000 UTC m=+0.152645267 container start a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_heyrovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:25:27 compute-0 podman[246152]: 2026-01-31 07:25:27.916024797 +0000 UTC m=+0.157246861 container attach a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:25:27 compute-0 sudo[246137]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:28.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:25:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:28.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:25:28 compute-0 sudo[246325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrvbfwlnxmlzrkfytiyooantdphtpgba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844328.1921625-3391-191146145473776/AnsiballZ_file.py'
Jan 31 07:25:28 compute-0 sudo[246325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:28 compute-0 python3.9[246327]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]: {
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]:         "osd_id": 0,
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]:         "type": "bluestore"
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]:     }
Jan 31 07:25:28 compute-0 heuristic_heyrovsky[246169]: }
Jan 31 07:25:28 compute-0 sudo[246325]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:28 compute-0 systemd[1]: libpod-a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d.scope: Deactivated successfully.
Jan 31 07:25:28 compute-0 podman[246344]: 2026-01-31 07:25:28.824982684 +0000 UTC m=+0.037671871 container died a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:25:29 compute-0 ceph-mon[74496]: pgmap v727: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1acf5c8df1b0265b15883708f256d6dc443a15fce3e79aadb94b0c9b83f86de-merged.mount: Deactivated successfully.
Jan 31 07:25:29 compute-0 podman[246344]: 2026-01-31 07:25:29.250915542 +0000 UTC m=+0.463604709 container remove a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:25:29 compute-0 systemd[1]: libpod-conmon-a7507d0e88a58f93e6dfd2785db63e2cc1b452283e486757be689d6f0593f34d.scope: Deactivated successfully.
Jan 31 07:25:29 compute-0 sudo[245883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:29 compute-0 sudo[246508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzbzvgyxnaaieiccomttpbfmrbijrsyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844328.8295488-3391-67848975200116/AnsiballZ_copy.py'
Jan 31 07:25:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:25:29 compute-0 sudo[246508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:25:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7637255f-e20d-4f21-a5b8-22f783c330ea does not exist
Jan 31 07:25:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 99746950-ca55-454d-b438-114c03bd1dd6 does not exist
Jan 31 07:25:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 149c1799-f941-43ff-9edb-5e8b9267873f does not exist
Jan 31 07:25:29 compute-0 sudo[246511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:29 compute-0 sudo[246511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:29 compute-0 sudo[246511]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:29 compute-0 sudo[246536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:25:29 compute-0 sudo[246536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:29 compute-0 sudo[246536]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:29 compute-0 python3.9[246510]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844328.8295488-3391-67848975200116/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:25:29 compute-0 sudo[246508]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:29 compute-0 sudo[246634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxrwwmckqtccodounqkvbhhdaalaensp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844328.8295488-3391-67848975200116/AnsiballZ_systemd.py'
Jan 31 07:25:29 compute-0 sudo[246634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:30 compute-0 python3.9[246636]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:25:30 compute-0 systemd[1]: Reloading.
Jan 31 07:25:30 compute-0 systemd-rc-local-generator[246664]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:25:30 compute-0 systemd-sysv-generator[246669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:25:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:30.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:25:30 compute-0 sudo[246634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:30 compute-0 sudo[246744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dchqflqlnyumsojodhoajovwhavdbvbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844328.8295488-3391-67848975200116/AnsiballZ_systemd.py'
Jan 31 07:25:30 compute-0 sudo[246744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:30 compute-0 python3.9[246746]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 07:25:30 compute-0 systemd[1]: Reloading.
Jan 31 07:25:31 compute-0 systemd-rc-local-generator[246776]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:25:31 compute-0 systemd-sysv-generator[246779]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 07:25:31 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 07:25:31 compute-0 ceph-mon[74496]: pgmap v728: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:31 compute-0 podman[246787]: 2026-01-31 07:25:31.406249493 +0000 UTC m=+0.103381852 container init 08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:25:31 compute-0 podman[246787]: 2026-01-31 07:25:31.418044674 +0000 UTC m=+0.115177013 container start 08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_managed=true)
Jan 31 07:25:31 compute-0 podman[246787]: nova_compute
Jan 31 07:25:31 compute-0 nova_compute[246802]: + sudo -E kolla_set_configs
Jan 31 07:25:31 compute-0 systemd[1]: Started nova_compute container.
Jan 31 07:25:31 compute-0 sudo[246744]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Validating config file
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying service configuration files
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Deleting /etc/ceph
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Creating directory /etc/ceph
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Writing out command to execute
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:31 compute-0 nova_compute[246802]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:25:31 compute-0 nova_compute[246802]: ++ cat /run_command
Jan 31 07:25:31 compute-0 nova_compute[246802]: + CMD=nova-compute
Jan 31 07:25:31 compute-0 nova_compute[246802]: + ARGS=
Jan 31 07:25:31 compute-0 nova_compute[246802]: + sudo kolla_copy_cacerts
Jan 31 07:25:31 compute-0 nova_compute[246802]: Running command: 'nova-compute'
Jan 31 07:25:31 compute-0 nova_compute[246802]: + [[ ! -n '' ]]
Jan 31 07:25:31 compute-0 nova_compute[246802]: + . kolla_extend_start
Jan 31 07:25:31 compute-0 nova_compute[246802]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 07:25:31 compute-0 nova_compute[246802]: + umask 0022
Jan 31 07:25:31 compute-0 nova_compute[246802]: + exec nova-compute
Jan 31 07:25:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:32.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:32.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:32 compute-0 ceph-mon[74496]: pgmap v729: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.500362) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332500410, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1278, "num_deletes": 256, "total_data_size": 2240590, "memory_usage": 2276272, "flush_reason": "Manual Compaction"}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332519476, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2206040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15752, "largest_seqno": 17029, "table_properties": {"data_size": 2200047, "index_size": 3320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11881, "raw_average_key_size": 18, "raw_value_size": 2188076, "raw_average_value_size": 3445, "num_data_blocks": 150, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844202, "oldest_key_time": 1769844202, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19196 microseconds, and 4673 cpu microseconds.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.519548) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2206040 bytes OK
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.519589) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.523402) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.523429) EVENT_LOG_v1 {"time_micros": 1769844332523422, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.523455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2235048, prev total WAL file size 2243272, number of live WAL files 2.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.524131) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2154KB)], [35(7869KB)]
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332524189, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10264631, "oldest_snapshot_seqno": -1}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4311 keys, 9882464 bytes, temperature: kUnknown
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332583632, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 9882464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9850815, "index_size": 19763, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 107407, "raw_average_key_size": 24, "raw_value_size": 9769993, "raw_average_value_size": 2266, "num_data_blocks": 825, "num_entries": 4311, "num_filter_entries": 4311, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.583889) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9882464 bytes
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.585930) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.5 rd, 166.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 4838, records dropped: 527 output_compression: NoCompression
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.585950) EVENT_LOG_v1 {"time_micros": 1769844332585940, "job": 16, "event": "compaction_finished", "compaction_time_micros": 59512, "compaction_time_cpu_micros": 20314, "output_level": 6, "num_output_files": 1, "total_output_size": 9882464, "num_input_records": 4838, "num_output_records": 4311, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332586423, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332587320, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.524022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.587441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.587448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.587450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.587452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.587454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.587881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332587921, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 251, "total_data_size": 13330, "memory_usage": 19336, "flush_reason": "Manual Compaction"}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332591249, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 13304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17030, "largest_seqno": 17285, "table_properties": {"data_size": 11551, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844332, "oldest_key_time": 1769844332, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 3397 microseconds, and 622 cpu microseconds.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.591283) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 13304 bytes OK
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.591299) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.596457) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.596530) EVENT_LOG_v1 {"time_micros": 1769844332596513, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.596563) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 11311, prev total WAL file size 11311, number of live WAL files 2.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.597024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(12KB)], [38(9650KB)]
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332597188, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9895768, "oldest_snapshot_seqno": -1}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4061 keys, 7831557 bytes, temperature: kUnknown
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332640369, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7831557, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7803360, "index_size": 16951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 102907, "raw_average_key_size": 25, "raw_value_size": 7728615, "raw_average_value_size": 1903, "num_data_blocks": 698, "num_entries": 4061, "num_filter_entries": 4061, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844332, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.640627) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7831557 bytes
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.642178) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.9 rd, 181.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.4 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(1332.5) write-amplify(588.7) OK, records in: 4567, records dropped: 506 output_compression: NoCompression
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.642195) EVENT_LOG_v1 {"time_micros": 1769844332642186, "job": 18, "event": "compaction_finished", "compaction_time_micros": 43236, "compaction_time_cpu_micros": 12805, "output_level": 6, "num_output_files": 1, "total_output_size": 7831557, "num_input_records": 4567, "num_output_records": 4061, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332642376, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844332643331, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.596905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.643463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.643472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.643474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.643477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:25:32.643479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:25:32 compute-0 python3.9[246964]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:33 compute-0 python3.9[247115]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:34.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:34 compute-0 python3.9[247265]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:25:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:25:34 compute-0 ceph-mon[74496]: pgmap v730: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:35 compute-0 nova_compute[246802]: 2026-01-31 07:25:35.062 246806 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:35 compute-0 nova_compute[246802]: 2026-01-31 07:25:35.063 246806 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:35 compute-0 nova_compute[246802]: 2026-01-31 07:25:35.064 246806 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:35 compute-0 nova_compute[246802]: 2026-01-31 07:25:35.065 246806 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 07:25:35 compute-0 sudo[247418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqkgsjmuytqaaofkvrwmywhsltiszglt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844334.759332-3571-256657765518037/AnsiballZ_podman_container.py'
Jan 31 07:25:35 compute-0 sudo[247418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:35 compute-0 nova_compute[246802]: 2026-01-31 07:25:35.334 246806 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:25:35 compute-0 nova_compute[246802]: 2026-01-31 07:25:35.353 246806 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:25:35 compute-0 nova_compute[246802]: 2026-01-31 07:25:35.354 246806 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:25:35 compute-0 python3.9[247420]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 07:25:35 compute-0 sudo[247418]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:35 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:25:35 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:25:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:35 compute-0 nova_compute[246802]: 2026-01-31 07:25:35.904 246806 INFO nova.virt.driver [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.121 246806 INFO nova.compute.provider_config [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 07:25:36 compute-0 sudo[247596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zymdfvmgdbxvjkcpaxftlwkoqraorhtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844335.8704839-3595-150172037640763/AnsiballZ_systemd.py'
Jan 31 07:25:36 compute-0 sudo[247596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.215 246806 DEBUG oslo_concurrency.lockutils [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.216 246806 DEBUG oslo_concurrency.lockutils [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.217 246806 DEBUG oslo_concurrency.lockutils [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.217 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.217 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.218 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.218 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.218 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.218 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.219 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.219 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.219 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.219 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.220 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.220 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.220 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.221 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.221 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.221 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.221 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.221 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.222 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.222 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.222 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.222 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.222 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.223 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.223 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.223 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.223 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.223 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.224 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.224 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.224 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.224 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.224 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.225 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.225 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.225 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.225 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.225 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.226 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.226 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.226 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.226 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.227 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.227 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.227 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.227 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.227 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.228 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.228 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.228 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.228 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.228 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.229 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.229 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.229 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.229 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.229 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.230 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.230 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.230 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.230 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.231 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.231 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.231 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.231 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.231 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.231 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.232 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.232 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.232 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.232 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.232 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.233 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.233 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.233 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.233 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.233 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.234 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.234 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.234 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.234 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.234 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.235 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.235 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.235 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.235 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.235 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.236 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.236 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.236 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.236 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.236 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.237 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.237 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.237 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.237 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.237 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.238 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.238 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.238 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.238 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.238 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.239 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.239 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.239 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.239 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.239 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.240 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.240 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.240 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.240 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.240 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.241 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.241 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.241 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.241 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.241 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.242 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.242 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.242 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.242 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.243 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.243 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.243 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.243 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.243 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.243 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.244 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.244 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.244 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.244 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.245 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.245 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.245 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.245 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.245 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.246 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.246 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.246 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.246 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.246 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.246 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.247 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.247 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.247 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.247 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.247 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.248 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.248 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.248 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.248 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.248 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.249 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.249 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.249 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.249 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.250 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.250 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.250 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.250 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.250 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.251 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.251 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.251 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.251 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.251 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.251 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.252 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.252 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.252 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.252 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.253 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.253 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.253 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.253 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.253 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.254 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.254 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.254 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.254 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.254 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.255 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.255 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.255 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.255 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.255 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.256 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.256 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.256 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.256 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.256 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.257 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.257 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.257 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.257 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.257 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.258 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.258 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.258 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.258 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.258 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.259 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.259 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.259 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.259 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.259 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.260 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.260 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.260 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.260 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.260 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.261 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.261 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.261 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.261 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.261 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.262 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.262 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.262 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.262 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.262 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.263 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.263 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.263 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.263 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.263 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.264 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.264 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.264 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.264 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.264 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.265 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.265 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.265 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.265 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.265 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.266 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.266 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.266 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.266 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.266 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.267 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.267 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.267 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.267 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.267 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:36.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.268 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.268 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.268 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.268 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.268 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.269 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.269 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.269 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.269 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.270 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.270 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.270 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.270 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.270 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.271 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.271 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.271 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.271 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.271 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.272 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.272 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.272 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.272 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.273 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.273 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.273 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.273 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.273 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.274 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.274 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.274 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.274 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.274 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.275 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.275 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.275 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.275 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.275 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.276 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.276 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.276 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.277 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.277 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.277 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.277 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.277 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.278 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.278 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.278 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.278 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.278 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.279 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.279 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.279 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.279 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.279 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.280 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.280 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.280 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.280 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.280 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.281 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.281 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.281 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.281 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.281 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.281 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.282 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.282 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.282 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.282 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.282 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.282 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.282 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.282 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.283 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.283 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.283 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.283 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.283 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.283 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.284 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.284 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.284 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.284 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.284 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.284 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.284 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.285 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.285 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.285 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.285 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.285 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.285 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.286 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.286 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.286 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.286 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.286 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.286 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.286 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.286 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.287 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.287 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.287 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.287 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.287 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.287 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.287 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.288 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.288 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.288 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.288 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.288 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.289 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.289 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.289 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.289 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.289 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.289 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.289 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.290 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.290 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.290 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.290 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.290 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.290 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.291 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.291 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.291 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.291 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.291 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.291 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.291 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.292 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.292 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.292 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.292 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.292 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.292 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.293 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.293 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.293 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.293 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.293 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.293 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.293 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.294 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.294 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.294 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.294 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.294 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.294 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.294 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.295 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.295 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.295 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.295 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.295 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.295 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.295 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.296 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.296 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.296 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.296 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.296 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.296 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.296 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.296 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.297 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.297 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.297 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.297 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.297 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.297 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.297 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.298 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.298 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.298 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.298 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.298 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.298 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.298 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.298 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.299 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.299 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.299 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.299 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.299 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.299 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.300 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.300 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.300 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.300 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.300 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.300 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.300 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.301 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.301 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.301 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.301 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.301 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.301 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.301 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.302 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.302 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.302 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.302 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.302 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.302 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.303 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.303 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.303 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.303 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.303 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.303 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.304 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.304 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.304 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.304 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.304 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.304 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.305 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.305 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.305 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.305 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.305 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.305 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.306 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.306 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.306 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.306 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.306 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.306 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.307 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.307 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.307 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.307 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.307 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.307 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.308 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.308 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.308 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.308 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.308 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.309 246806 WARNING oslo_config.cfg [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 07:25:36 compute-0 nova_compute[246802]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 07:25:36 compute-0 nova_compute[246802]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 07:25:36 compute-0 nova_compute[246802]: and ``live_migration_inbound_addr`` respectively.
Jan 31 07:25:36 compute-0 nova_compute[246802]: ).  Its value may be silently ignored in the future.
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.309 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.309 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.309 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.309 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.309 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.310 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.310 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.310 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.310 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.310 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.310 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.310 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.311 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.311 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.311 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.311 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.311 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.311 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.311 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rbd_secret_uuid        = f70fcd2a-dcb4-5f89-a4ba-79a09959083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.312 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.312 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.312 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.312 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.312 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.312 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.313 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.313 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.313 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.313 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.313 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.313 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.314 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.314 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.314 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.314 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.314 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.314 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.314 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.315 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.315 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.315 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.315 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.315 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.315 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.315 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.316 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.316 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.316 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.316 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.316 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.316 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.316 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.317 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.317 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.317 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.317 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.317 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.317 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.317 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.318 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.318 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.318 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.318 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.318 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.318 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.318 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.319 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.319 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.319 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.319 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.319 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.319 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.319 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.319 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.320 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.320 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.320 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.320 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.320 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.320 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.320 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.321 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.321 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.321 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.321 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.321 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.321 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.322 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.322 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.322 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.322 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.322 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.322 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.322 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.323 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.323 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.323 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.323 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.323 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.323 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.323 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.323 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.324 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.324 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.324 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.324 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.324 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.325 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.325 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.325 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.325 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.325 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.325 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.325 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.326 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.326 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.326 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.326 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.326 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.326 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.326 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.326 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.327 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.327 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.327 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.327 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.327 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.327 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.327 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.328 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.328 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.328 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.328 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.328 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.328 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.328 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.329 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.329 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.329 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.329 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.329 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.330 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.330 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.330 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.330 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.330 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.330 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.331 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.331 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.331 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.331 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.331 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.332 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.332 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.332 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.332 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.332 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.332 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.333 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.333 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.333 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.333 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.333 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.333 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.334 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.334 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.334 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.334 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.334 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.334 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.334 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.335 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.335 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.335 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.335 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.335 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.335 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.335 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.336 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.336 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.336 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.336 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.336 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.336 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.337 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.337 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.337 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.337 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.337 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.337 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.337 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.338 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.338 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.338 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.338 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.338 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.338 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.338 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.339 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.339 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.339 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.339 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.339 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.339 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.340 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.340 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.340 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.340 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.340 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.340 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.340 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.340 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.341 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.341 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.341 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.341 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.341 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.341 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.341 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.342 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.342 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.342 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.342 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.342 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.342 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.342 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.343 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.343 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.343 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.343 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.343 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.343 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.343 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.343 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.344 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.344 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.344 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.344 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.344 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.344 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.344 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.345 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.345 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.345 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.345 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.345 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.345 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.346 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.346 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.346 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.346 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.346 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.346 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.347 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.347 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.347 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.347 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.347 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.347 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.347 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.348 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.348 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.348 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.348 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.348 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.348 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.348 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.349 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.349 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.349 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.349 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.349 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.349 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.349 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.350 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.350 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.350 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.350 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.350 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.350 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.350 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.351 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.351 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.351 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.351 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.351 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.351 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.352 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.352 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.352 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.352 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.352 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.353 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.353 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.353 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.353 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.353 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.354 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.354 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.354 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.354 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.354 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.354 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.355 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.355 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.355 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.355 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.355 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.356 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.356 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.356 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.356 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.356 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.356 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.357 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.357 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.357 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.357 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.357 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.357 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.358 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.358 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.358 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.358 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.358 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.358 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.358 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.359 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.359 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.359 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.359 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.359 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.359 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.359 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.360 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.360 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.360 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.360 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.360 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.361 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.361 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.361 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.361 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.361 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.361 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.361 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.362 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.362 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.362 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.362 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.362 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.362 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.362 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.363 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.363 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.363 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.363 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.363 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.363 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.363 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.364 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.364 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.364 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.364 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.364 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.364 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.365 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.365 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.365 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.365 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.365 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.365 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.366 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.366 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.366 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.366 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.366 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.366 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.366 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.366 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.367 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.367 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.367 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.367 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.367 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.367 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.367 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.368 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.368 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.368 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.368 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.368 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.368 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.368 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.369 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.369 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.369 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.369 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.369 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.369 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.370 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.370 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.370 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.370 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.370 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.371 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.371 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.371 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.371 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.371 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.371 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.371 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.371 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.372 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.372 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.372 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.372 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.372 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.372 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.372 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.373 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.373 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.373 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.373 246806 DEBUG oslo_service.service [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.374 246806 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 07:25:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:36.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.406 246806 DEBUG nova.virt.libvirt.host [None req-abbf26d5-961a-45fd-91ed-db46e7f45584 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.407 246806 DEBUG nova.virt.libvirt.host [None req-abbf26d5-961a-45fd-91ed-db46e7f45584 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.407 246806 DEBUG nova.virt.libvirt.host [None req-abbf26d5-961a-45fd-91ed-db46e7f45584 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.408 246806 DEBUG nova.virt.libvirt.host [None req-abbf26d5-961a-45fd-91ed-db46e7f45584 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 07:25:36 compute-0 python3.9[247598]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:25:36 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 07:25:36 compute-0 systemd[1]: Stopping nova_compute container...
Jan 31 07:25:36 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.506 246806 DEBUG nova.virt.libvirt.host [None req-abbf26d5-961a-45fd-91ed-db46e7f45584 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f92bcd18a90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.509 246806 DEBUG oslo_concurrency.lockutils [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.509 246806 DEBUG oslo_concurrency.lockutils [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:25:36 compute-0 nova_compute[246802]: 2026-01-31 07:25:36.510 246806 DEBUG oslo_concurrency.lockutils [None req-5501001b-0963-479a-9110-e0698e7cd101 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:25:36 compute-0 virtqemud[247621]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 07:25:36 compute-0 virtqemud[247621]: hostname: compute-0
Jan 31 07:25:36 compute-0 virtqemud[247621]: End of file while reading data: Input/output error
Jan 31 07:25:36 compute-0 systemd[1]: libpod-08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f.scope: Deactivated successfully.
Jan 31 07:25:36 compute-0 systemd[1]: libpod-08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f.scope: Consumed 3.271s CPU time.
Jan 31 07:25:36 compute-0 podman[247624]: 2026-01-31 07:25:36.930441604 +0000 UTC m=+0.456564696 container died 08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Jan 31 07:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f-userdata-shm.mount: Deactivated successfully.
Jan 31 07:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834-merged.mount: Deactivated successfully.
Jan 31 07:25:37 compute-0 ceph-mon[74496]: pgmap v731: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:38.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:38.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:39 compute-0 podman[247624]: 2026-01-31 07:25:39.287155152 +0000 UTC m=+2.813278254 container cleanup 08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:25:39 compute-0 podman[247624]: nova_compute
Jan 31 07:25:39 compute-0 podman[247675]: nova_compute
Jan 31 07:25:39 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 07:25:39 compute-0 systemd[1]: Stopped nova_compute container.
Jan 31 07:25:39 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 07:25:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f652ce010712b756097ce0a41d3a318e6fd92f4f4b27a3ba59ca6964be76834/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:39 compute-0 podman[247688]: 2026-01-31 07:25:39.467373299 +0000 UTC m=+0.101463374 container init 08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm)
Jan 31 07:25:39 compute-0 podman[247688]: 2026-01-31 07:25:39.473926221 +0000 UTC m=+0.108016236 container start 08cc1288983ebbc3e029df4ba78417491faf6dcee06220d37f18f3223d9f406f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:25:39 compute-0 podman[247688]: nova_compute
Jan 31 07:25:39 compute-0 nova_compute[247704]: + sudo -E kolla_set_configs
Jan 31 07:25:39 compute-0 systemd[1]: Started nova_compute container.
Jan 31 07:25:39 compute-0 sudo[247596]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Validating config file
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying service configuration files
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /etc/ceph
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Creating directory /etc/ceph
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Writing out command to execute
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:39 compute-0 nova_compute[247704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 07:25:39 compute-0 nova_compute[247704]: ++ cat /run_command
Jan 31 07:25:39 compute-0 nova_compute[247704]: + CMD=nova-compute
Jan 31 07:25:39 compute-0 nova_compute[247704]: + ARGS=
Jan 31 07:25:39 compute-0 nova_compute[247704]: + sudo kolla_copy_cacerts
Jan 31 07:25:39 compute-0 nova_compute[247704]: + [[ ! -n '' ]]
Jan 31 07:25:39 compute-0 nova_compute[247704]: + . kolla_extend_start
Jan 31 07:25:39 compute-0 nova_compute[247704]: Running command: 'nova-compute'
Jan 31 07:25:39 compute-0 nova_compute[247704]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 07:25:39 compute-0 nova_compute[247704]: + umask 0022
Jan 31 07:25:39 compute-0 nova_compute[247704]: + exec nova-compute
Jan 31 07:25:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:40.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:40.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:40 compute-0 podman[247741]: 2026-01-31 07:25:40.937233716 +0000 UTC m=+0.111626685 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 07:25:41 compute-0 ceph-mon[74496]: pgmap v732: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:41 compute-0 nova_compute[247704]: 2026-01-31 07:25:41.368 247708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:41 compute-0 nova_compute[247704]: 2026-01-31 07:25:41.369 247708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:41 compute-0 nova_compute[247704]: 2026-01-31 07:25:41.369 247708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 07:25:41 compute-0 nova_compute[247704]: 2026-01-31 07:25:41.370 247708 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 07:25:41 compute-0 nova_compute[247704]: 2026-01-31 07:25:41.527 247708 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:25:41 compute-0 nova_compute[247704]: 2026-01-31 07:25:41.550 247708 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:25:41 compute-0 nova_compute[247704]: 2026-01-31 07:25:41.550 247708 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:25:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.105 247708 INFO nova.virt.driver [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 07:25:42 compute-0 sudo[247897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spogxdikrvmztdamxoafblpobpimrogq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769844341.8658214-3622-226872871215274/AnsiballZ_podman_container.py'
Jan 31 07:25:42 compute-0 sudo[247897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.234 247708 INFO nova.compute.provider_config [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.250 247708 DEBUG oslo_concurrency.lockutils [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.251 247708 DEBUG oslo_concurrency.lockutils [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.251 247708 DEBUG oslo_concurrency.lockutils [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.251 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.252 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.252 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.252 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.252 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.252 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.252 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.252 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.253 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.253 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.253 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.253 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.253 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.253 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.254 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.254 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.254 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.254 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.254 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.254 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.255 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.255 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.255 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.255 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.256 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.256 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.256 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.256 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.256 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.256 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.257 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.257 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.257 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.257 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.257 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.257 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.258 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.258 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.258 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.258 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.258 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.258 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.259 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.259 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.259 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.259 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.259 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.259 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.259 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.260 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.260 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.260 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.260 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.260 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.260 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.260 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.261 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.261 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.261 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.261 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.261 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.261 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.261 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.262 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.262 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.262 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.262 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.262 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.262 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.262 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.263 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.263 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.263 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.263 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.263 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.263 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.264 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.264 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.264 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.264 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.264 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.264 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.265 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.265 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.265 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.265 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.265 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.265 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.266 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.266 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.266 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.266 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.266 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.266 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.266 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.267 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.267 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.267 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.267 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.267 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.267 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.268 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.268 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.268 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.268 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.268 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.268 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.268 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.269 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.269 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.269 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.269 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.269 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.269 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.269 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.270 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.270 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.270 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.270 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.270 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.270 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.270 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.271 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.271 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.271 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.271 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.271 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.271 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.271 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.272 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.272 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.272 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.272 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.272 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.272 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.273 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.273 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.273 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.273 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:42.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.273 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.273 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.273 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.274 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.274 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.274 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.274 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.274 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.274 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.275 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.275 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.275 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.275 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.275 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.275 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.275 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.276 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.276 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.276 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.276 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.276 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.276 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.276 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.277 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.277 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.277 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.277 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.277 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.278 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.278 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.278 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.278 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.278 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.278 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.278 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.279 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.279 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.279 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.279 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.279 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.279 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.280 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.280 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.280 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.280 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.280 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.280 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.281 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.281 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.281 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.281 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.281 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.282 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.282 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.282 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.282 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.282 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.282 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.283 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.283 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.283 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.283 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.283 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.284 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.284 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.284 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.284 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.284 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.284 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.285 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.285 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.285 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.285 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.285 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.286 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.286 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.286 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.286 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.286 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.287 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.287 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.287 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.287 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.287 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.288 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.288 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.288 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.288 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.288 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.289 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.289 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.289 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.289 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.289 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.290 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.290 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.290 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.290 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.290 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.291 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.291 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.291 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.291 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.292 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.292 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.292 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.292 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.292 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.293 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.293 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.293 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.293 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.293 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.294 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.294 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.294 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.294 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.294 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.295 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.295 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.295 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.295 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.296 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.296 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.296 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.296 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.296 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.297 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.297 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.297 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.297 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.298 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.298 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.298 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.298 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.298 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.299 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.299 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.299 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.299 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.299 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.300 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.300 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.300 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.300 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.300 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.301 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.301 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.301 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.301 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.301 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.302 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.302 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.302 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.302 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.302 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.303 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.303 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.303 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.303 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.303 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.304 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.304 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.304 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.304 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.305 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.305 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.305 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.305 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.305 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.306 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.306 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.306 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.306 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.306 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.306 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.307 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.307 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.307 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.307 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.307 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.307 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.307 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.308 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.308 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.308 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.308 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.308 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.308 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.309 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.309 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.309 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.309 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.309 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.310 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.310 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.310 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.310 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.310 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.310 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.310 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.311 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.311 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.311 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.311 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.311 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.311 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.312 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.312 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.312 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.312 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.312 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.313 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.313 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.313 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.313 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.313 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.314 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.314 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.314 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.314 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.314 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.314 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.315 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.315 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.315 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.315 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.315 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.315 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.316 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.316 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.316 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.316 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.316 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.316 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.317 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.317 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.317 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.317 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.317 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.317 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.317 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.318 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.318 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.318 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.318 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.318 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.318 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.319 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.319 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.319 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.319 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.320 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.320 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.320 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.320 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.320 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.320 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.320 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.321 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.321 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.321 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.321 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.321 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.321 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.322 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.322 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.322 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.322 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.322 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.322 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.323 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.323 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.323 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.323 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.323 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.323 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.323 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.324 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.324 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.324 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.324 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.324 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.324 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.324 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.325 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.325 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.325 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.325 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.325 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.325 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.325 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.326 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.326 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.326 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.326 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.326 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.326 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.326 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.327 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.327 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.327 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.327 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.327 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.327 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.328 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.328 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.328 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.328 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.328 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.328 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.328 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.329 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.329 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.329 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.329 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.329 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.329 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.329 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.330 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.330 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.330 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.330 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.330 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.330 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.331 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.331 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.331 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.331 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.331 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.331 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.332 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.332 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.332 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.332 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.332 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.332 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.332 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.333 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.333 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.333 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.333 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.333 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.333 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.333 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.334 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.334 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.334 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.334 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.334 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.334 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.334 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.335 247708 WARNING oslo_config.cfg [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 07:25:42 compute-0 nova_compute[247704]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 07:25:42 compute-0 nova_compute[247704]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 07:25:42 compute-0 nova_compute[247704]: and ``live_migration_inbound_addr`` respectively.
Jan 31 07:25:42 compute-0 nova_compute[247704]: ).  Its value may be silently ignored in the future.
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.335 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.335 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.335 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.336 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.336 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.336 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.336 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.336 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.337 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.337 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.337 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.337 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.337 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.337 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.338 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.338 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.338 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.338 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.338 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rbd_secret_uuid        = f70fcd2a-dcb4-5f89-a4ba-79a09959083b log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.338 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.339 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.339 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.339 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.339 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.339 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.339 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.340 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.340 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.340 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.340 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.340 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.341 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.341 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.341 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.341 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.341 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.342 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.342 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.342 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.342 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.342 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.343 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.343 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.343 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.343 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.343 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.343 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.343 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.344 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.344 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.344 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.344 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.344 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.344 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.344 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.345 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.345 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.345 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.345 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.345 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.346 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.346 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.346 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.346 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.346 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.346 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.347 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.347 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.347 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.347 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.347 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.347 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.348 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.348 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.348 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.348 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.348 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.348 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.349 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.349 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.349 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.349 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.349 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.349 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.350 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.350 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.350 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.350 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.350 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.350 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.351 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.351 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.351 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.351 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.351 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.351 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.351 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.352 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.352 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.352 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.352 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.352 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.352 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.353 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.353 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 python3.9[247899]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.353 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.353 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.353 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.353 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.354 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.354 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.354 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.354 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.355 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.355 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.355 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.355 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.355 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.356 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.356 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.356 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.356 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.356 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.356 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.357 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.357 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.357 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.357 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.357 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.358 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.358 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.358 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.358 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.358 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.358 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.359 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.359 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.359 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.359 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.359 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.360 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.360 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.360 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.360 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.361 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.361 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.361 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.361 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.361 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.362 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.362 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.362 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.362 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.363 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.363 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.363 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.363 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.363 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.364 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.364 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.364 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.364 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.365 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.365 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.365 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.365 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.365 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.366 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.366 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.366 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.367 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.367 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.367 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.367 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.368 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.368 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.368 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.368 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.368 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.369 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.369 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.369 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.369 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.370 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.370 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.370 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.370 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.370 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.370 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.371 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.371 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.371 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.371 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.371 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.371 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.372 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.372 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.372 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.372 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.372 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.373 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.373 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.373 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.373 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.374 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.374 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.374 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.374 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.374 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.374 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.375 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.375 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.375 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.375 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.375 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.376 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.376 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.376 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.376 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.376 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.376 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.376 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.377 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.377 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.377 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.377 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.377 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.377 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.377 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.378 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.378 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.378 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.378 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.378 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.378 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.378 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.379 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.379 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.379 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.379 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.379 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.379 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.380 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.380 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.380 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.380 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.380 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.380 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.381 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.381 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.381 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.381 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.381 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.381 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.382 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.382 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.382 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.382 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.382 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.382 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.383 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.383 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.383 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.383 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.383 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.383 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.383 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.384 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.384 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.384 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.384 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.384 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.384 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.385 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.385 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.385 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.385 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.386 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.386 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.386 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.386 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.386 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.387 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.387 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.387 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.387 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.387 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.388 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.388 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.388 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.388 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.388 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.389 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.389 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.389 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.389 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.389 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.390 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.390 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.390 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.390 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.391 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.391 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.391 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.391 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.391 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.391 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.392 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.392 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.392 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.392 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.392 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.392 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.392 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.393 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.393 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.393 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.393 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.393 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.393 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.394 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.394 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.394 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.394 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.394 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.394 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.394 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.395 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.395 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.395 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.395 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.395 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.395 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.395 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.396 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.396 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.396 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.396 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.396 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.397 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.397 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.397 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.397 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.397 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.398 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.398 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.398 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.398 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.398 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.399 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.399 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.399 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.399 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.399 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.399 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.399 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.400 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.400 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.400 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.400 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.400 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.400 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.400 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.401 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.401 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.401 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.401 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.401 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.401 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.401 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.402 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.402 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.402 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.402 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.402 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.402 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.402 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.403 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.403 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.403 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.403 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.403 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.403 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.403 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.404 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.404 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.404 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.404 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.404 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.404 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.404 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.405 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.405 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.405 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.405 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.405 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.406 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.406 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.406 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.406 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.406 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.406 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.406 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.407 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.407 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.407 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.407 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.407 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.408 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.408 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.408 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.408 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.408 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.408 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.409 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.409 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.409 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.409 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.409 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.409 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.409 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.410 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.410 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.410 247708 DEBUG oslo_service.service [None req-2a2d67ff-b436-4e11-832a-0e4233d0c09f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.411 247708 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 07:25:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:42.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.443 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.444 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.444 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.444 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.459 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f1d5815fac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.463 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f1d5815fac0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.464 247708 INFO nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Connection event '1' reason 'None'
Jan 31 07:25:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.514 247708 WARNING nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 07:25:42 compute-0 nova_compute[247704]: 2026-01-31 07:25:42.515 247708 DEBUG nova.virt.libvirt.volume.mount [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 07:25:42 compute-0 ceph-mon[74496]: pgmap v733: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:42 compute-0 systemd[1]: Started libpod-conmon-fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4.scope.
Jan 31 07:25:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08eb072197edcbc93d9db019a7bd6039a71b0fa9dbd3b8fce88dba0c13b01040/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08eb072197edcbc93d9db019a7bd6039a71b0fa9dbd3b8fce88dba0c13b01040/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08eb072197edcbc93d9db019a7bd6039a71b0fa9dbd3b8fce88dba0c13b01040/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 07:25:42 compute-0 podman[247945]: 2026-01-31 07:25:42.664167875 +0000 UTC m=+0.171324718 container init fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:25:42 compute-0 podman[247945]: 2026-01-31 07:25:42.672382988 +0000 UTC m=+0.179539821 container start fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:25:42 compute-0 python3.9[247899]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 07:25:42 compute-0 nova_compute_init[247976]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 07:25:42 compute-0 systemd[1]: libpod-fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4.scope: Deactivated successfully.
Jan 31 07:25:42 compute-0 podman[247991]: 2026-01-31 07:25:42.775214025 +0000 UTC m=+0.032121453 container died fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4-userdata-shm.mount: Deactivated successfully.
Jan 31 07:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-08eb072197edcbc93d9db019a7bd6039a71b0fa9dbd3b8fce88dba0c13b01040-merged.mount: Deactivated successfully.
Jan 31 07:25:42 compute-0 podman[247991]: 2026-01-31 07:25:42.810366743 +0000 UTC m=+0.067274151 container cleanup fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:25:42 compute-0 systemd[1]: libpod-conmon-fa9b3aae9ea27eaaf7f98849726b81435fbdac15f5ed2faef42815ffe052cda4.scope: Deactivated successfully.
Jan 31 07:25:42 compute-0 sudo[247897]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.523 247708 INFO nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]: 
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <host>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <uuid>8f281c2a-1a44-41ea-8268-6c420f002b7f</uuid>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <arch>x86_64</arch>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model>EPYC-Rome-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <vendor>AMD</vendor>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <microcode version='16777317'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <signature family='23' model='49' stepping='0'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='x2apic'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='tsc-deadline'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='osxsave'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='hypervisor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='tsc_adjust'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='spec-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='stibp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='arch-capabilities'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='cmp_legacy'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='topoext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='virt-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='lbrv'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='tsc-scale'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='vmcb-clean'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='pause-filter'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='pfthreshold'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='svme-addr-chk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='rdctl-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='skip-l1dfl-vmentry'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='mds-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature name='pschange-mc-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <pages unit='KiB' size='4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <pages unit='KiB' size='2048'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <pages unit='KiB' size='1048576'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <power_management>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <suspend_mem/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </power_management>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <iommu support='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <migration_features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <live/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <uri_transports>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <uri_transport>tcp</uri_transport>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <uri_transport>rdma</uri_transport>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </uri_transports>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </migration_features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <topology>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <cells num='1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <cell id='0'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:           <memory unit='KiB'>7864292</memory>
Jan 31 07:25:43 compute-0 nova_compute[247704]:           <pages unit='KiB' size='4'>1966073</pages>
Jan 31 07:25:43 compute-0 nova_compute[247704]:           <pages unit='KiB' size='2048'>0</pages>
Jan 31 07:25:43 compute-0 nova_compute[247704]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 31 07:25:43 compute-0 nova_compute[247704]:           <distances>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <sibling id='0' value='10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:           </distances>
Jan 31 07:25:43 compute-0 nova_compute[247704]:           <cpus num='8'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:           </cpus>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         </cell>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </cells>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </topology>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <cache>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </cache>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <secmodel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model>selinux</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <doi>0</doi>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </secmodel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <secmodel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model>dac</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <doi>0</doi>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </secmodel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </host>
Jan 31 07:25:43 compute-0 nova_compute[247704]: 
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <guest>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <os_type>hvm</os_type>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <arch name='i686'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <wordsize>32</wordsize>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <domain type='qemu'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <domain type='kvm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </arch>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <pae/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <nonpae/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <acpi default='on' toggle='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <apic default='on' toggle='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <cpuselection/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <deviceboot/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <disksnapshot default='on' toggle='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <externalSnapshot/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </guest>
Jan 31 07:25:43 compute-0 nova_compute[247704]: 
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <guest>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <os_type>hvm</os_type>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <arch name='x86_64'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <wordsize>64</wordsize>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <domain type='qemu'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <domain type='kvm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </arch>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <acpi default='on' toggle='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <apic default='on' toggle='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <cpuselection/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <deviceboot/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <disksnapshot default='on' toggle='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <externalSnapshot/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </guest>
Jan 31 07:25:43 compute-0 nova_compute[247704]: 
Jan 31 07:25:43 compute-0 nova_compute[247704]: </capabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]: 
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.532 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 07:25:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.563 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 07:25:43 compute-0 nova_compute[247704]: <domainCapabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <domain>kvm</domain>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <arch>i686</arch>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <vcpu max='4096'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <iothreads supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <os supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <enum name='firmware'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <loader supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>rom</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pflash</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='readonly'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>yes</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>no</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='secure'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>no</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </loader>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </os>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>on</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>off</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='maximum' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='maximumMigratable'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>on</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>off</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='host-model' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <vendor>AMD</vendor>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='x2apic'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='stibp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='succor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='lbrv'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='custom' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='ClearwaterForest'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ddpd-u'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sha512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ddpd-u'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sha512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Dhyana-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Turin'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbpb'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbpb'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-128'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-256'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-128'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-256'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 ceph-mon[74496]: pgmap v734: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='KnightsMill'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512er'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512pf'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='KnightsMill-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512er'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512pf'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tbm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tbm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='athlon'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='athlon-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='core2duo'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='core2duo-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='coreduo'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='coreduo-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='n270'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='n270-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='phenom'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='phenom-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <memoryBacking supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <enum name='sourceType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>file</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>anonymous</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>memfd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </memoryBacking>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <disk supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='diskDevice'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>disk</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>cdrom</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>floppy</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>lun</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='bus'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>fdc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>scsi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>sata</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-non-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <graphics supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vnc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>egl-headless</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dbus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </graphics>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <video supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='modelType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vga</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>cirrus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>none</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>bochs</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ramfb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </video>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <hostdev supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='mode'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>subsystem</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='startupPolicy'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>default</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>mandatory</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>requisite</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>optional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='subsysType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pci</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>scsi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='capsType'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='pciBackend'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </hostdev>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <rng supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-non-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>random</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>egd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>builtin</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <filesystem supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='driverType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>path</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>handle</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtiofs</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </filesystem>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <tpm supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tpm-tis</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tpm-crb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>emulator</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>external</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendVersion'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>2.0</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </tpm>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <redirdev supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='bus'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </redirdev>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <channel supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pty</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>unix</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </channel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <crypto supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>qemu</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>builtin</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </crypto>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <interface supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>default</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>passt</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <panic supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>isa</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>hyperv</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </panic>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <console supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>null</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pty</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dev</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>file</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pipe</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>stdio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>udp</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tcp</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>unix</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>qemu-vdagent</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dbus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </console>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <gic supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <vmcoreinfo supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <genid supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <backingStoreInput supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <backup supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <async-teardown supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <s390-pv supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <ps2 supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <tdx supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <sev supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <sgx supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <hyperv supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='features'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>relaxed</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vapic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>spinlocks</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vpindex</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>runtime</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>synic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>stimer</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>reset</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vendor_id</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>frequencies</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>reenlightenment</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tlbflush</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ipi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>avic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>emsr_bitmap</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>xmm_input</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <defaults>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <spinlocks>4095</spinlocks>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <stimer_direct>on</stimer_direct>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </defaults>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </hyperv>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <launchSecurity supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </features>
Jan 31 07:25:43 compute-0 nova_compute[247704]: </domainCapabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.572 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 07:25:43 compute-0 nova_compute[247704]: <domainCapabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <domain>kvm</domain>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <arch>i686</arch>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <vcpu max='240'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <iothreads supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <os supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <enum name='firmware'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <loader supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>rom</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pflash</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='readonly'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>yes</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>no</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='secure'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>no</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </loader>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </os>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>on</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>off</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='maximum' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='maximumMigratable'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>on</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>off</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='host-model' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <vendor>AMD</vendor>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='x2apic'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='stibp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='succor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='lbrv'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='custom' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='ClearwaterForest'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ddpd-u'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sha512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ddpd-u'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sha512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Dhyana-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Turin'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbpb'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbpb'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-128'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-256'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-128'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-256'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='KnightsMill'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512er'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512pf'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='KnightsMill-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512er'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512pf'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tbm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tbm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='athlon'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='athlon-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='core2duo'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='core2duo-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='coreduo'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='coreduo-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='n270'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='n270-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='phenom'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='phenom-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <memoryBacking supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <enum name='sourceType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>file</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>anonymous</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>memfd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </memoryBacking>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <disk supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='diskDevice'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>disk</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>cdrom</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>floppy</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>lun</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='bus'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ide</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>fdc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>scsi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>sata</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-non-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <graphics supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vnc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>egl-headless</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dbus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </graphics>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <video supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='modelType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vga</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>cirrus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>none</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>bochs</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ramfb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </video>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <hostdev supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='mode'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>subsystem</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='startupPolicy'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>default</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>mandatory</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>requisite</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>optional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='subsysType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pci</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>scsi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='capsType'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='pciBackend'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </hostdev>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <rng supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-non-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>random</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>egd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>builtin</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <filesystem supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='driverType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>path</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>handle</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtiofs</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </filesystem>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <tpm supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tpm-tis</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tpm-crb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>emulator</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>external</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendVersion'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>2.0</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </tpm>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <redirdev supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='bus'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </redirdev>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <channel supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pty</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>unix</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </channel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <crypto supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>qemu</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>builtin</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </crypto>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <interface supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>default</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>passt</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <panic supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>isa</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>hyperv</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </panic>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <console supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>null</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pty</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dev</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>file</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pipe</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>stdio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>udp</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tcp</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>unix</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>qemu-vdagent</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dbus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </console>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <gic supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <vmcoreinfo supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <genid supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <backingStoreInput supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <backup supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <async-teardown supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <s390-pv supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <ps2 supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <tdx supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <sev supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <sgx supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <hyperv supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='features'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>relaxed</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vapic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>spinlocks</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vpindex</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>runtime</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>synic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>stimer</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>reset</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vendor_id</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>frequencies</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>reenlightenment</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tlbflush</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ipi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>avic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>emsr_bitmap</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>xmm_input</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <defaults>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <spinlocks>4095</spinlocks>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <stimer_direct>on</stimer_direct>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </defaults>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </hyperv>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <launchSecurity supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </features>
Jan 31 07:25:43 compute-0 nova_compute[247704]: </domainCapabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.657 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.661 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 07:25:43 compute-0 nova_compute[247704]: <domainCapabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <domain>kvm</domain>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <arch>x86_64</arch>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <vcpu max='4096'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <iothreads supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <os supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <enum name='firmware'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>efi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <loader supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>rom</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pflash</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='readonly'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>yes</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>no</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='secure'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>yes</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>no</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </loader>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </os>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>on</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>off</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='maximum' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='maximumMigratable'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>on</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>off</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='host-model' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <vendor>AMD</vendor>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='x2apic'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='stibp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='succor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='lbrv'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='custom' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='ClearwaterForest'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ddpd-u'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sha512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ddpd-u'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sha512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Dhyana-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Turin'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbpb'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbpb'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-128'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-256'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-128'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-256'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='KnightsMill'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512er'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512pf'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='KnightsMill-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512er'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512pf'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tbm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tbm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='athlon'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='athlon-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='core2duo'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='core2duo-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='coreduo'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='coreduo-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='n270'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='n270-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='phenom'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='phenom-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <memoryBacking supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <enum name='sourceType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>file</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>anonymous</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>memfd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </memoryBacking>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <disk supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='diskDevice'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>disk</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>cdrom</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>floppy</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>lun</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='bus'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>fdc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>scsi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>sata</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-non-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <graphics supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vnc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>egl-headless</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dbus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </graphics>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <video supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='modelType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vga</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>cirrus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>none</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>bochs</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ramfb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </video>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <hostdev supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='mode'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>subsystem</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='startupPolicy'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>default</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>mandatory</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>requisite</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>optional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='subsysType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pci</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>scsi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='capsType'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='pciBackend'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </hostdev>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <rng supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-non-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>random</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>egd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>builtin</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <filesystem supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='driverType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>path</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>handle</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtiofs</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </filesystem>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <tpm supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tpm-tis</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tpm-crb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>emulator</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>external</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendVersion'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>2.0</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </tpm>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <redirdev supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='bus'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </redirdev>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <channel supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pty</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>unix</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </channel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <crypto supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>qemu</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>builtin</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </crypto>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <interface supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>default</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>passt</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <panic supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>isa</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>hyperv</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </panic>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <console supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>null</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pty</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dev</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>file</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pipe</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>stdio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>udp</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tcp</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>unix</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>qemu-vdagent</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dbus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </console>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <gic supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <vmcoreinfo supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <genid supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <backingStoreInput supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <backup supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <async-teardown supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <s390-pv supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <ps2 supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <tdx supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <sev supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <sgx supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <hyperv supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='features'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>relaxed</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vapic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>spinlocks</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vpindex</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>runtime</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>synic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>stimer</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>reset</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vendor_id</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>frequencies</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>reenlightenment</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tlbflush</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ipi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>avic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>emsr_bitmap</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>xmm_input</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <defaults>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <spinlocks>4095</spinlocks>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <stimer_direct>on</stimer_direct>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </defaults>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </hyperv>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <launchSecurity supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </features>
Jan 31 07:25:43 compute-0 nova_compute[247704]: </domainCapabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.732 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 07:25:43 compute-0 nova_compute[247704]: <domainCapabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <domain>kvm</domain>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <arch>x86_64</arch>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <vcpu max='240'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <iothreads supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <os supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <enum name='firmware'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <loader supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>rom</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pflash</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='readonly'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>yes</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>no</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='secure'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>no</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </loader>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </os>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='host-passthrough' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='hostPassthroughMigratable'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>on</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>off</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='maximum' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='maximumMigratable'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>on</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>off</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='host-model' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <vendor>AMD</vendor>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='x2apic'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='hypervisor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='stibp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='overflow-recov'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='succor'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='lbrv'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='tsc-scale'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='flushbyasid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='pause-filter'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='pfthreshold'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <feature policy='disable' name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <mode name='custom' supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Broadwell-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='ClearwaterForest'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ddpd-u'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sha512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='ClearwaterForest-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ddpd-u'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sha512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm3'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sm4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Cooperlake-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Denverton-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Dhyana-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Milan-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 07:25:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Rome-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Turin'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbpb'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-Turin-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amd-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='auto-ibrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vp2intersect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fs-gs-base-ns'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibpb-brtype'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='no-nested-data-bp'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='null-sel-clr-base'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='perfmon-v2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbpb'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='srso-user-kernel-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='stibp-always-on'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='EPYC-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-128'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-256'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 sudo[248054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='GraniteRapids-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-128'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-256'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx10-512'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 sudo[248054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='prefetchiti'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 sudo[248054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Haswell-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v6'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Icelake-Server-v7'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='IvyBridge-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='KnightsMill'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512er'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512pf'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='KnightsMill-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4fmaps'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-4vnniw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512er'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512pf'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G4-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tbm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Opteron_G5-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fma4'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tbm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xop'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SapphireRapids-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='amx-tile'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-bf16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-fp16'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512-vpopcntdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bitalg'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vbmi2'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrc'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fzrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='la57'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='taa-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='tsx-ldtrk'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='SierraForest-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ifma'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-ne-convert'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx-vnni-int8'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bhi-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='bus-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cmpccxadd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fbsdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='fsrs'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ibrs-all'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='intel-psfd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ipred-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='lam'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mcdt-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pbrsb-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='psdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rrsba-ctrl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='sbdr-ssdp-no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='serialize'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vaes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='vpclmulqdq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Client-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='hle'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='rtm'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Skylake-Server-v5'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512bw'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512cd'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512dq'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512f'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='avx512vl'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='invpcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pcid'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='pku'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='mpx'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v2'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v3'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='core-capability'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='split-lock-detect'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='Snowridge-v4'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='cldemote'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='erms'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='gfni'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdir64b'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='movdiri'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='xsaves'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='athlon'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='athlon-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='core2duo'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='core2duo-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='coreduo'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='coreduo-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='n270'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='n270-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='ss'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='phenom'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <blockers model='phenom-v1'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnow'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <feature name='3dnowext'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </blockers>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </mode>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <memoryBacking supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <enum name='sourceType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>file</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>anonymous</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <value>memfd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </memoryBacking>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <disk supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='diskDevice'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>disk</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>cdrom</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>floppy</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>lun</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='bus'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ide</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>fdc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>scsi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>sata</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-non-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <graphics supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vnc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>egl-headless</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dbus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </graphics>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <video supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='modelType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vga</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>cirrus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>none</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>bochs</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ramfb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </video>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <hostdev supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='mode'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>subsystem</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='startupPolicy'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>default</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>mandatory</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>requisite</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>optional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='subsysType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pci</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>scsi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='capsType'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='pciBackend'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </hostdev>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <rng supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtio-non-transitional</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>random</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>egd</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>builtin</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <filesystem supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='driverType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>path</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>handle</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>virtiofs</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </filesystem>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <tpm supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tpm-tis</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tpm-crb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>emulator</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>external</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendVersion'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>2.0</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </tpm>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <redirdev supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='bus'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>usb</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </redirdev>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <channel supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pty</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>unix</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </channel>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <crypto supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>qemu</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendModel'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>builtin</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </crypto>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <interface supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='backendType'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>default</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>passt</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <panic supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='model'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>isa</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>hyperv</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </panic>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <console supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='type'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>null</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vc</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pty</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dev</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>file</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>pipe</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>stdio</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>udp</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tcp</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>unix</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>qemu-vdagent</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>dbus</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </console>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <features>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <gic supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <vmcoreinfo supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <genid supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <backingStoreInput supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <backup supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <async-teardown supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <s390-pv supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <ps2 supported='yes'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <tdx supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <sev supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <sgx supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <hyperv supported='yes'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <enum name='features'>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>relaxed</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vapic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>spinlocks</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vpindex</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>runtime</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>synic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>stimer</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>reset</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>vendor_id</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>frequencies</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>reenlightenment</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>tlbflush</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>ipi</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>avic</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>emsr_bitmap</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <value>xmm_input</value>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </enum>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       <defaults>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <spinlocks>4095</spinlocks>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <stimer_direct>on</stimer_direct>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 07:25:43 compute-0 sudo[248079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 07:25:43 compute-0 nova_compute[247704]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 07:25:43 compute-0 nova_compute[247704]:       </defaults>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     </hyperv>
Jan 31 07:25:43 compute-0 nova_compute[247704]:     <launchSecurity supported='no'/>
Jan 31 07:25:43 compute-0 nova_compute[247704]:   </features>
Jan 31 07:25:43 compute-0 nova_compute[247704]: </domainCapabilities>
Jan 31 07:25:43 compute-0 nova_compute[247704]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.802 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.803 247708 INFO nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Secure Boot support detected
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.804 247708 INFO nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.805 247708 INFO nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.831 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 31 07:25:43 compute-0 nova_compute[247704]:   <model>Nehalem</model>
Jan 31 07:25:43 compute-0 nova_compute[247704]: </cpu>
Jan 31 07:25:43 compute-0 nova_compute[247704]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.835 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 07:25:43 compute-0 sudo[248079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:25:43 compute-0 sudo[248079]: pam_unix(sudo:session): session closed for user root
Jan 31 07:25:43 compute-0 nova_compute[247704]: 2026-01-31 07:25:43.953 247708 INFO nova.virt.node [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Determined node identity 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from /var/lib/nova/compute_id
Jan 31 07:25:44 compute-0 sshd-session[221535]: Connection closed by 192.168.122.30 port 47926
Jan 31 07:25:44 compute-0 sshd-session[221532]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:25:44 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 07:25:44 compute-0 systemd[1]: session-50.scope: Consumed 1min 54.664s CPU time.
Jan 31 07:25:44 compute-0 systemd-logind[816]: Session 50 logged out. Waiting for processes to exit.
Jan 31 07:25:44 compute-0 systemd-logind[816]: Removed session 50.
Jan 31 07:25:44 compute-0 nova_compute[247704]: 2026-01-31 07:25:44.161 247708 WARNING nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Compute nodes ['39dae8fb-a3d6-4f01-ab04-67eb06f4b735'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 07:25:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:44.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:44.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:44 compute-0 nova_compute[247704]: 2026-01-31 07:25:44.587 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 07:25:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2169502590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:44 compute-0 ceph-mon[74496]: pgmap v735: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:45 compute-0 nova_compute[247704]: 2026-01-31 07:25:45.274 247708 WARNING nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 07:25:45 compute-0 nova_compute[247704]: 2026-01-31 07:25:45.275 247708 DEBUG oslo_concurrency.lockutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:25:45 compute-0 nova_compute[247704]: 2026-01-31 07:25:45.275 247708 DEBUG oslo_concurrency.lockutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:25:45 compute-0 nova_compute[247704]: 2026-01-31 07:25:45.276 247708 DEBUG oslo_concurrency.lockutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:25:45 compute-0 nova_compute[247704]: 2026-01-31 07:25:45.276 247708 DEBUG nova.compute.resource_tracker [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:25:45 compute-0 nova_compute[247704]: 2026-01-31 07:25:45.278 247708 DEBUG oslo_concurrency.processutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:25:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:25:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613405914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:45 compute-0 nova_compute[247704]: 2026-01-31 07:25:45.723 247708 DEBUG oslo_concurrency.processutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:25:45 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 07:25:45 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 07:25:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/613405914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:46 compute-0 nova_compute[247704]: 2026-01-31 07:25:46.002 247708 WARNING nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:25:46 compute-0 nova_compute[247704]: 2026-01-31 07:25:46.005 247708 DEBUG nova.compute.resource_tracker [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5175MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:25:46 compute-0 nova_compute[247704]: 2026-01-31 07:25:46.005 247708 DEBUG oslo_concurrency.lockutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:25:46 compute-0 nova_compute[247704]: 2026-01-31 07:25:46.005 247708 DEBUG oslo_concurrency.lockutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:25:46 compute-0 nova_compute[247704]: 2026-01-31 07:25:46.095 247708 WARNING nova.compute.resource_tracker [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] No compute node record for compute-0.ctlplane.example.com:39dae8fb-a3d6-4f01-ab04-67eb06f4b735: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 could not be found.
Jan 31 07:25:46 compute-0 nova_compute[247704]: 2026-01-31 07:25:46.254 247708 INFO nova.compute.resource_tracker [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735
Jan 31 07:25:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:46.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:46.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:46 compute-0 nova_compute[247704]: 2026-01-31 07:25:46.988 247708 DEBUG nova.compute.resource_tracker [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:25:46 compute-0 nova_compute[247704]: 2026-01-31 07:25:46.988 247708 DEBUG nova.compute.resource_tracker [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:25:47 compute-0 ceph-mon[74496]: pgmap v736: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/724673685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.355 247708 INFO nova.scheduler.client.report [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [req-02fd14ce-618e-4f95-87f9-b5008034fbce] Created resource provider record via placement API for resource provider with UUID 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 and name compute-0.ctlplane.example.com.
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.414 247708 DEBUG oslo_concurrency.processutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:25:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:25:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2119535112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.879 247708 DEBUG oslo_concurrency.processutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.884 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 07:25:47 compute-0 nova_compute[247704]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.884 247708 INFO nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] kernel doesn't support AMD SEV
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.885 247708 DEBUG nova.compute.provider_tree [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.886 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.887 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Libvirt baseline CPU <cpu>
Jan 31 07:25:47 compute-0 nova_compute[247704]:   <arch>x86_64</arch>
Jan 31 07:25:47 compute-0 nova_compute[247704]:   <model>Nehalem</model>
Jan 31 07:25:47 compute-0 nova_compute[247704]:   <vendor>AMD</vendor>
Jan 31 07:25:47 compute-0 nova_compute[247704]:   <topology sockets="8" cores="1" threads="1"/>
Jan 31 07:25:47 compute-0 nova_compute[247704]: </cpu>
Jan 31 07:25:47 compute-0 nova_compute[247704]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.995 247708 DEBUG nova.scheduler.client.report [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Updated inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.995 247708 DEBUG nova.compute.provider_tree [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Updating resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 07:25:47 compute-0 nova_compute[247704]: 2026-01-31 07:25:47.995 247708 DEBUG nova.compute.provider_tree [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:25:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2858455899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2119535112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:48 compute-0 nova_compute[247704]: 2026-01-31 07:25:48.104 247708 DEBUG nova.compute.provider_tree [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Updating resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 07:25:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:48.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:48 compute-0 nova_compute[247704]: 2026-01-31 07:25:48.310 247708 DEBUG nova.compute.resource_tracker [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:25:48 compute-0 nova_compute[247704]: 2026-01-31 07:25:48.310 247708 DEBUG oslo_concurrency.lockutils [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:25:48 compute-0 nova_compute[247704]: 2026-01-31 07:25:48.310 247708 DEBUG nova.service [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 07:25:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:48 compute-0 nova_compute[247704]: 2026-01-31 07:25:48.458 247708 DEBUG nova.service [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 07:25:48 compute-0 nova_compute[247704]: 2026-01-31 07:25:48.459 247708 DEBUG nova.servicegroup.drivers.db [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 07:25:49 compute-0 ceph-mon[74496]: pgmap v737: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3659799667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:25:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:25:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:50.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:50.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:51 compute-0 ceph-mon[74496]: pgmap v738: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:25:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:52.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:25:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:52.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:52 compute-0 podman[248175]: 2026-01-31 07:25:52.927737076 +0000 UTC m=+0.097354987 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:25:53 compute-0 ceph-mon[74496]: pgmap v739: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:54.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:54.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:54 compute-0 ceph-mon[74496]: pgmap v740: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:56.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:25:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:56.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:25:56 compute-0 ceph-mon[74496]: pgmap v741: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:25:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:25:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:58.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:25:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:25:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:25:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:58.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:25:58 compute-0 ceph-mon[74496]: pgmap v742: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:25:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:00.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:00 compute-0 ceph-mon[74496]: pgmap v743: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:02.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:02.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:02 compute-0 ceph-mon[74496]: pgmap v744: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:04 compute-0 sudo[248202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:04 compute-0 sudo[248202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:04 compute-0 sudo[248202]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:04 compute-0 sudo[248227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:04 compute-0 sudo[248227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:04 compute-0 sudo[248227]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:04 compute-0 ceph-mon[74496]: pgmap v745: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:06.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:06.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:06 compute-0 ceph-mon[74496]: pgmap v746: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:08.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:08.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:08 compute-0 ceph-mon[74496]: pgmap v747: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:10.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:11 compute-0 ceph-mon[74496]: pgmap v748: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:26:11.134 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:26:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:26:11.135 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:26:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:26:11.135 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:26:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:11 compute-0 podman[248256]: 2026-01-31 07:26:11.987116645 +0000 UTC m=+0.147756652 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 31 07:26:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:12.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:12.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:13 compute-0 ceph-mon[74496]: pgmap v749: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:14.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:15 compute-0 ceph-mon[74496]: pgmap v750: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:16.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:16 compute-0 nova_compute[247704]: 2026-01-31 07:26:16.461 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:16.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:17 compute-0 ceph-mon[74496]: pgmap v751: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:17 compute-0 nova_compute[247704]: 2026-01-31 07:26:17.928 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:18.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:18.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:19 compute-0 ceph-mon[74496]: pgmap v752: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:26:19
Jan 31 07:26:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:26:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:26:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'backups']
Jan 31 07:26:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:26:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2172451859' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:26:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2172451859' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:26:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:20.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:20.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:21 compute-0 ceph-mon[74496]: pgmap v753: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/571993948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:26:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/571993948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:26:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:26:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:22.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:26:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:22.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:23 compute-0 ceph-mon[74496]: pgmap v754: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:23 compute-0 podman[248288]: 2026-01-31 07:26:23.903559128 +0000 UTC m=+0.078159556 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:26:24 compute-0 sudo[248307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:24 compute-0 sudo[248307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:24 compute-0 sudo[248307]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:24 compute-0 sudo[248332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:24 compute-0 sudo[248332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:24 compute-0 sudo[248332]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:24.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:25 compute-0 ceph-mon[74496]: pgmap v755: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:26.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:26.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:27 compute-0 ceph-mon[74496]: pgmap v756: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:28.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:29 compute-0 ceph-mon[74496]: pgmap v757: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:26:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 8361 writes, 33K keys, 8361 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8361 writes, 1670 syncs, 5.01 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 632 writes, 1045 keys, 632 commit groups, 1.0 writes per commit group, ingest: 0.34 MB, 0.00 MB/s
                                           Interval WAL: 632 writes, 284 syncs, 2.23 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 07:26:29 compute-0 sudo[248360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:29 compute-0 sudo[248360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:29 compute-0 sudo[248360]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:29 compute-0 sudo[248385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:26:29 compute-0 sudo[248385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:29 compute-0 sudo[248385]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:29 compute-0 sudo[248410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:29 compute-0 sudo[248410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:29 compute-0 sudo[248410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:29 compute-0 sudo[248435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:26:29 compute-0 sudo[248435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:30 compute-0 sudo[248435]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:26:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:26:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:30 compute-0 sudo[248480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:30.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:30 compute-0 sudo[248480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:30 compute-0 sudo[248480]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:30 compute-0 sudo[248505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:26:30 compute-0 sudo[248505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:30 compute-0 sudo[248505]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:30 compute-0 sudo[248530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:26:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:30.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:26:30 compute-0 sudo[248530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:30 compute-0 sudo[248530]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:30 compute-0 sudo[248555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:26:30 compute-0 sudo[248555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:30 compute-0 sudo[248555]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:26:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:26:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:26:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:26:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:26:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b3079674-40de-4ab9-bc7e-57bdad8a17f4 does not exist
Jan 31 07:26:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 105d64b2-0740-4b3d-b568-842737562439 does not exist
Jan 31 07:26:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3f549239-76d0-46d2-9107-344576675c15 does not exist
Jan 31 07:26:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:26:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:26:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:26:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:26:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:26:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:26:31 compute-0 sudo[248612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:31 compute-0 sudo[248612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:31 compute-0 sudo[248612]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:31 compute-0 sudo[248637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:26:31 compute-0 sudo[248637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:31 compute-0 sudo[248637]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:31 compute-0 sudo[248662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:31 compute-0 sudo[248662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:31 compute-0 sudo[248662]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:31 compute-0 ceph-mon[74496]: pgmap v758: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:26:31 compute-0 sudo[248688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:26:31 compute-0 sudo[248688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:31 compute-0 podman[248753]: 2026-01-31 07:26:31.631809807 +0000 UTC m=+0.057044459 container create 4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:26:31 compute-0 systemd[1]: Started libpod-conmon-4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156.scope.
Jan 31 07:26:31 compute-0 podman[248753]: 2026-01-31 07:26:31.60582849 +0000 UTC m=+0.031063212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:26:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:26:31 compute-0 podman[248753]: 2026-01-31 07:26:31.723638897 +0000 UTC m=+0.148873559 container init 4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_haibt, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:26:31 compute-0 podman[248753]: 2026-01-31 07:26:31.732270328 +0000 UTC m=+0.157504980 container start 4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:26:31 compute-0 podman[248753]: 2026-01-31 07:26:31.736210215 +0000 UTC m=+0.161444877 container attach 4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:26:31 compute-0 priceless_haibt[248770]: 167 167
Jan 31 07:26:31 compute-0 systemd[1]: libpod-4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156.scope: Deactivated successfully.
Jan 31 07:26:31 compute-0 podman[248753]: 2026-01-31 07:26:31.741414623 +0000 UTC m=+0.166649305 container died 4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_haibt, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-016c8fa9cece2a524db4ae87e6e70d8b1c01ab7591f4f4663f51ff53368dec48-merged.mount: Deactivated successfully.
Jan 31 07:26:31 compute-0 podman[248753]: 2026-01-31 07:26:31.798502162 +0000 UTC m=+0.223736834 container remove 4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:26:31 compute-0 systemd[1]: libpod-conmon-4f50f5299554fd3febea1f92305266612a3265cf434103f9e5b9aca432c96156.scope: Deactivated successfully.
Jan 31 07:26:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:31 compute-0 podman[248793]: 2026-01-31 07:26:31.947019631 +0000 UTC m=+0.047476855 container create aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 31 07:26:31 compute-0 systemd[1]: Started libpod-conmon-aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b.scope.
Jan 31 07:26:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add0b08c8bb8ff6283e2e77b76346bc6b467672b10058658c49dbc45f4f6ec31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add0b08c8bb8ff6283e2e77b76346bc6b467672b10058658c49dbc45f4f6ec31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add0b08c8bb8ff6283e2e77b76346bc6b467672b10058658c49dbc45f4f6ec31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add0b08c8bb8ff6283e2e77b76346bc6b467672b10058658c49dbc45f4f6ec31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add0b08c8bb8ff6283e2e77b76346bc6b467672b10058658c49dbc45f4f6ec31/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:32 compute-0 podman[248793]: 2026-01-31 07:26:31.921458384 +0000 UTC m=+0.021915638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:26:32 compute-0 podman[248793]: 2026-01-31 07:26:32.031932551 +0000 UTC m=+0.132389835 container init aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:26:32 compute-0 podman[248793]: 2026-01-31 07:26:32.039983748 +0000 UTC m=+0.140440982 container start aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:26:32 compute-0 podman[248793]: 2026-01-31 07:26:32.043641728 +0000 UTC m=+0.144098972 container attach aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:26:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2853494607' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:26:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2853494607' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:26:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:32.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:32 compute-0 vibrant_ride[248809]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:26:32 compute-0 vibrant_ride[248809]: --> relative data size: 1.0
Jan 31 07:26:32 compute-0 vibrant_ride[248809]: --> All data devices are unavailable
Jan 31 07:26:32 compute-0 systemd[1]: libpod-aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b.scope: Deactivated successfully.
Jan 31 07:26:32 compute-0 podman[248793]: 2026-01-31 07:26:32.87301464 +0000 UTC m=+0.973471904 container died aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:26:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-add0b08c8bb8ff6283e2e77b76346bc6b467672b10058658c49dbc45f4f6ec31-merged.mount: Deactivated successfully.
Jan 31 07:26:32 compute-0 podman[248793]: 2026-01-31 07:26:32.933182415 +0000 UTC m=+1.033639639 container remove aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:26:32 compute-0 systemd[1]: libpod-conmon-aa072a72d75b87560572fdde65c6bf5a15e4e724db85b18969d2d9b81dd98e2b.scope: Deactivated successfully.
Jan 31 07:26:32 compute-0 sudo[248688]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:33 compute-0 sudo[248838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:33 compute-0 sudo[248838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:33 compute-0 sudo[248838]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:33 compute-0 sudo[248863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:26:33 compute-0 sudo[248863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:33 compute-0 sudo[248863]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:33 compute-0 sudo[248888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:33 compute-0 sudo[248888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:33 compute-0 sudo[248888]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:33 compute-0 sudo[248913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:26:33 compute-0 sudo[248913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:33 compute-0 ceph-mon[74496]: pgmap v759: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:33 compute-0 podman[248980]: 2026-01-31 07:26:33.621228984 +0000 UTC m=+0.055519561 container create 0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:26:33 compute-0 systemd[1]: Started libpod-conmon-0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5.scope.
Jan 31 07:26:33 compute-0 podman[248980]: 2026-01-31 07:26:33.592797058 +0000 UTC m=+0.027087685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:26:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:26:33 compute-0 podman[248980]: 2026-01-31 07:26:33.714398247 +0000 UTC m=+0.148688864 container init 0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:26:33 compute-0 podman[248980]: 2026-01-31 07:26:33.72470919 +0000 UTC m=+0.158999767 container start 0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:26:33 compute-0 podman[248980]: 2026-01-31 07:26:33.729323433 +0000 UTC m=+0.163614020 container attach 0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:26:33 compute-0 wonderful_grothendieck[248996]: 167 167
Jan 31 07:26:33 compute-0 systemd[1]: libpod-0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5.scope: Deactivated successfully.
Jan 31 07:26:33 compute-0 podman[248980]: 2026-01-31 07:26:33.732387558 +0000 UTC m=+0.166678115 container died 0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-86f459436ebe64c679a07fdfdb7dcab5bb8e457f1883687293dc039b011a3054-merged.mount: Deactivated successfully.
Jan 31 07:26:33 compute-0 podman[248980]: 2026-01-31 07:26:33.766068464 +0000 UTC m=+0.200359001 container remove 0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:26:33 compute-0 systemd[1]: libpod-conmon-0de58aab53abe1f3318b296244bfeb31b23b18583a661452903c648d46c249e5.scope: Deactivated successfully.
Jan 31 07:26:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:33 compute-0 podman[249020]: 2026-01-31 07:26:33.942062166 +0000 UTC m=+0.057844589 container create 5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:26:33 compute-0 systemd[1]: Started libpod-conmon-5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31.scope.
Jan 31 07:26:34 compute-0 podman[249020]: 2026-01-31 07:26:33.916856539 +0000 UTC m=+0.032638982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:26:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8383337a78bdaa8dec07851ff22d118d5818d73dbfb6f1027708479c1eec0137/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8383337a78bdaa8dec07851ff22d118d5818d73dbfb6f1027708479c1eec0137/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8383337a78bdaa8dec07851ff22d118d5818d73dbfb6f1027708479c1eec0137/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8383337a78bdaa8dec07851ff22d118d5818d73dbfb6f1027708479c1eec0137/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:34 compute-0 podman[249020]: 2026-01-31 07:26:34.041191944 +0000 UTC m=+0.156974407 container init 5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:26:34 compute-0 podman[249020]: 2026-01-31 07:26:34.047011028 +0000 UTC m=+0.162793451 container start 5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:26:34 compute-0 podman[249020]: 2026-01-31 07:26:34.052209055 +0000 UTC m=+0.167991488 container attach 5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elbakyan, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 07:26:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:34.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:34.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:26:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]: {
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:     "0": [
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:         {
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "devices": [
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "/dev/loop3"
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             ],
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "lv_name": "ceph_lv0",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "lv_size": "7511998464",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "name": "ceph_lv0",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "tags": {
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.cluster_name": "ceph",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.crush_device_class": "",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.encrypted": "0",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.osd_id": "0",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.type": "block",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:                 "ceph.vdo": "0"
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             },
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "type": "block",
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:             "vg_name": "ceph_vg0"
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:         }
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]:     ]
Jan 31 07:26:34 compute-0 vigorous_elbakyan[249037]: }
Jan 31 07:26:34 compute-0 systemd[1]: libpod-5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31.scope: Deactivated successfully.
Jan 31 07:26:34 compute-0 podman[249020]: 2026-01-31 07:26:34.862202723 +0000 UTC m=+0.977985126 container died 5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elbakyan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8383337a78bdaa8dec07851ff22d118d5818d73dbfb6f1027708479c1eec0137-merged.mount: Deactivated successfully.
Jan 31 07:26:34 compute-0 podman[249020]: 2026-01-31 07:26:34.919884776 +0000 UTC m=+1.035667169 container remove 5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:26:34 compute-0 systemd[1]: libpod-conmon-5ce27293c454b1dc653a3a00e46061217a5429f25ceeb2f2f6f6d6107c922f31.scope: Deactivated successfully.
Jan 31 07:26:34 compute-0 sudo[248913]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:35 compute-0 sudo[249056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:35 compute-0 sudo[249056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:35 compute-0 sudo[249056]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:35 compute-0 sudo[249081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:26:35 compute-0 sudo[249081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:35 compute-0 sudo[249081]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:35 compute-0 sudo[249106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:35 compute-0 sudo[249106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:35 compute-0 sudo[249106]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:35 compute-0 sudo[249131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:26:35 compute-0 sudo[249131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:35 compute-0 ceph-mon[74496]: pgmap v760: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:35 compute-0 podman[249199]: 2026-01-31 07:26:35.436820982 +0000 UTC m=+0.020960624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:26:35 compute-0 podman[249199]: 2026-01-31 07:26:35.664230675 +0000 UTC m=+0.248370277 container create 41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:26:35 compute-0 systemd[1]: Started libpod-conmon-41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac.scope.
Jan 31 07:26:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:26:35 compute-0 podman[249199]: 2026-01-31 07:26:35.74319224 +0000 UTC m=+0.327331812 container init 41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:26:35 compute-0 podman[249199]: 2026-01-31 07:26:35.750270543 +0000 UTC m=+0.334410145 container start 41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:26:35 compute-0 podman[249199]: 2026-01-31 07:26:35.754424675 +0000 UTC m=+0.338564367 container attach 41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:26:35 compute-0 systemd[1]: libpod-41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac.scope: Deactivated successfully.
Jan 31 07:26:35 compute-0 nifty_shockley[249216]: 167 167
Jan 31 07:26:35 compute-0 podman[249199]: 2026-01-31 07:26:35.756432524 +0000 UTC m=+0.340572126 container died 41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:26:35 compute-0 conmon[249216]: conmon 41a97823177c6e876db8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac.scope/container/memory.events
Jan 31 07:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-fce543dc17c434ae7c7aad80dea8e989245cb1486bb5b918478b71e8751e6973-merged.mount: Deactivated successfully.
Jan 31 07:26:35 compute-0 podman[249199]: 2026-01-31 07:26:35.796316492 +0000 UTC m=+0.380456094 container remove 41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shockley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:26:35 compute-0 systemd[1]: libpod-conmon-41a97823177c6e876db83f1ccba9de477cf820e75966f7c3f5e3e9500e857aac.scope: Deactivated successfully.
Jan 31 07:26:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:35 compute-0 podman[249239]: 2026-01-31 07:26:35.954732083 +0000 UTC m=+0.051110503 container create 1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:26:35 compute-0 systemd[1]: Started libpod-conmon-1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722.scope.
Jan 31 07:26:36 compute-0 podman[249239]: 2026-01-31 07:26:35.933236017 +0000 UTC m=+0.029614507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:26:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e7a8b739b4c79649b0994c98e8c744d8f3354a2e3462756e7c2de950ec046f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e7a8b739b4c79649b0994c98e8c744d8f3354a2e3462756e7c2de950ec046f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e7a8b739b4c79649b0994c98e8c744d8f3354a2e3462756e7c2de950ec046f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e7a8b739b4c79649b0994c98e8c744d8f3354a2e3462756e7c2de950ec046f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:26:36 compute-0 podman[249239]: 2026-01-31 07:26:36.053878393 +0000 UTC m=+0.150256883 container init 1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:26:36 compute-0 podman[249239]: 2026-01-31 07:26:36.060428834 +0000 UTC m=+0.156807284 container start 1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:26:36 compute-0 podman[249239]: 2026-01-31 07:26:36.064429601 +0000 UTC m=+0.160808051 container attach 1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:26:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:36.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:26:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:36.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:26:36 compute-0 beautiful_edison[249255]: {
Jan 31 07:26:36 compute-0 beautiful_edison[249255]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:26:36 compute-0 beautiful_edison[249255]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:26:36 compute-0 beautiful_edison[249255]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:26:36 compute-0 beautiful_edison[249255]:         "osd_id": 0,
Jan 31 07:26:36 compute-0 beautiful_edison[249255]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:26:36 compute-0 beautiful_edison[249255]:         "type": "bluestore"
Jan 31 07:26:36 compute-0 beautiful_edison[249255]:     }
Jan 31 07:26:36 compute-0 beautiful_edison[249255]: }
Jan 31 07:26:36 compute-0 systemd[1]: libpod-1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722.scope: Deactivated successfully.
Jan 31 07:26:37 compute-0 podman[249276]: 2026-01-31 07:26:37.038773926 +0000 UTC m=+0.029443633 container died 1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e7a8b739b4c79649b0994c98e8c744d8f3354a2e3462756e7c2de950ec046f8-merged.mount: Deactivated successfully.
Jan 31 07:26:37 compute-0 podman[249276]: 2026-01-31 07:26:37.086165017 +0000 UTC m=+0.076834684 container remove 1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:26:37 compute-0 systemd[1]: libpod-conmon-1f0816bcbd7826fc5d8c03cd2fef8281a4fe9a7edf5b4acda62d03de9b8b8722.scope: Deactivated successfully.
Jan 31 07:26:37 compute-0 sudo[249131]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:26:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:26:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a03f9673-2d23-470f-a8ec-4672c2833725 does not exist
Jan 31 07:26:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 166a2fe3-487d-48b9-a078-92f0a0484537 does not exist
Jan 31 07:26:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b5ef81b1-0f52-45cb-805d-bc15ec753c1a does not exist
Jan 31 07:26:37 compute-0 sudo[249291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:37 compute-0 sudo[249291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:37 compute-0 sudo[249291]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:37 compute-0 sudo[249317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:26:37 compute-0 sudo[249317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:37 compute-0 sudo[249317]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:37 compute-0 ceph-mon[74496]: pgmap v761: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:26:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:26:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:38.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:26:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:38.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:39 compute-0 ceph-mon[74496]: pgmap v762: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:40.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:26:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:40.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:26:41 compute-0 ceph-mon[74496]: pgmap v763: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.566 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.566 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.566 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.616 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.617 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.618 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.618 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.619 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.619 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.620 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.620 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.621 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.653 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.653 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.654 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.654 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:26:41 compute-0 nova_compute[247704]: 2026-01-31 07:26:41.654 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:26:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:26:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1089613681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:42 compute-0 nova_compute[247704]: 2026-01-31 07:26:42.108 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:26:42 compute-0 nova_compute[247704]: 2026-01-31 07:26:42.259 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:26:42 compute-0 nova_compute[247704]: 2026-01-31 07:26:42.260 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5234MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:26:42 compute-0 nova_compute[247704]: 2026-01-31 07:26:42.261 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:26:42 compute-0 nova_compute[247704]: 2026-01-31 07:26:42.261 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:26:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:42.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1089613681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/54853499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:42.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:42 compute-0 nova_compute[247704]: 2026-01-31 07:26:42.815 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:26:42 compute-0 nova_compute[247704]: 2026-01-31 07:26:42.815 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:26:42 compute-0 nova_compute[247704]: 2026-01-31 07:26:42.836 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:26:42 compute-0 podman[249366]: 2026-01-31 07:26:42.917782343 +0000 UTC m=+0.087032394 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 07:26:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:26:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433876801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:43 compute-0 nova_compute[247704]: 2026-01-31 07:26:43.217 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:26:43 compute-0 nova_compute[247704]: 2026-01-31 07:26:43.224 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:26:43 compute-0 nova_compute[247704]: 2026-01-31 07:26:43.237 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:26:43 compute-0 nova_compute[247704]: 2026-01-31 07:26:43.239 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:26:43 compute-0 nova_compute[247704]: 2026-01-31 07:26:43.239 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:26:43 compute-0 ceph-mon[74496]: pgmap v764: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1433876801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2851660928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:44 compute-0 sudo[249415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:44 compute-0 sudo[249415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:44 compute-0 sudo[249415]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:44.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:44 compute-0 sudo[249440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:26:44 compute-0 sudo[249440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:26:44 compute-0 sudo[249440]: pam_unix(sudo:session): session closed for user root
Jan 31 07:26:44 compute-0 ceph-mon[74496]: pgmap v765: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2103975054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:44.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/848047206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:26:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:46 compute-0 ceph-mon[74496]: pgmap v766: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:26:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:46.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:26:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:48.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:48.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:48 compute-0 ceph-mon[74496]: pgmap v767: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:26:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:50.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 07:26:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:50.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 07:26:50 compute-0 ceph-mon[74496]: pgmap v768: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:52.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:52.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:52 compute-0 ceph-mon[74496]: pgmap v769: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:26:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:54.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:26:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:54 compute-0 podman[249470]: 2026-01-31 07:26:54.921413164 +0000 UTC m=+0.083706241 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:26:55 compute-0 ceph-mon[74496]: pgmap v770: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:56.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:57 compute-0 ceph-mon[74496]: pgmap v771: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:26:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:26:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:26:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:26:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:26:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:58.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:26:59 compute-0 ceph-mon[74496]: pgmap v772: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:26:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:00.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:01 compute-0 ceph-mon[74496]: pgmap v773: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:02.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:02.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:03 compute-0 ceph-mon[74496]: pgmap v774: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:04.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:04 compute-0 sudo[249494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:04 compute-0 sudo[249494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:04 compute-0 sudo[249494]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:04 compute-0 sudo[249519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:04 compute-0 sudo[249519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:04 compute-0 sudo[249519]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:05 compute-0 ceph-mon[74496]: pgmap v775: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:06.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:06.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:07 compute-0 ceph-mon[74496]: pgmap v776: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:08.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:09 compute-0 ceph-mon[74496]: pgmap v777: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:10.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:10 compute-0 ceph-mon[74496]: pgmap v778: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:10.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:27:11.136 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:27:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:27:11.137 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:27:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:27:11.137 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:27:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:12 compute-0 ceph-mon[74496]: pgmap v779: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:13 compute-0 podman[249549]: 2026-01-31 07:27:13.91821442 +0000 UTC m=+0.091701978 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:27:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:14.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:27:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:14.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:27:14 compute-0 ceph-mon[74496]: pgmap v780: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:16.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:17 compute-0 ceph-mon[74496]: pgmap v781: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:18.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:18.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:19 compute-0 ceph-mon[74496]: pgmap v782: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:27:19
Jan 31 07:27:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:27:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:27:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Jan 31 07:27:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:27:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:27:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:20.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:27:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:27:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:20.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:27:21 compute-0 ceph-mon[74496]: pgmap v783: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:22.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:22.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:23 compute-0 ceph-mon[74496]: pgmap v784: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:24.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:24 compute-0 sudo[249580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:24 compute-0 sudo[249580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:24 compute-0 sudo[249580]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:24.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:24 compute-0 sudo[249605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:24 compute-0 sudo[249605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:24 compute-0 sudo[249605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:25 compute-0 ceph-mon[74496]: pgmap v785: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:25 compute-0 podman[249631]: 2026-01-31 07:27:25.879945142 +0000 UTC m=+0.057061728 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:27:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:26.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:27 compute-0 ceph-mon[74496]: pgmap v786: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:28.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:28.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:29 compute-0 ceph-mon[74496]: pgmap v787: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:30.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:30.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:31 compute-0 ceph-mon[74496]: pgmap v788: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:32.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:27:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:32.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:27:33 compute-0 ceph-mon[74496]: pgmap v789: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:27:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:27:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 07:27:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:35 compute-0 ceph-mon[74496]: pgmap v790: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:36 compute-0 ceph-mon[74496]: pgmap v791: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:37.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:27:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:37.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:27:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:37 compute-0 sudo[249658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:37 compute-0 sudo[249658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:37 compute-0 sudo[249658]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:37 compute-0 sudo[249683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:27:37 compute-0 sudo[249683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:37 compute-0 sudo[249683]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:37 compute-0 sudo[249708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:37 compute-0 sudo[249708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:37 compute-0 sudo[249708]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:37 compute-0 sudo[249733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:27:37 compute-0 sudo[249733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:27:38 compute-0 sudo[249733]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:27:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 07:27:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:27:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:27:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:27:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-0 ceph-mon[74496]: pgmap v792: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:27:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 07:27:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:27:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:27:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:27:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:27:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:27:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:27:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:39.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b7e6025a-9fd6-4e70-a027-5a0dcfcf76b1 does not exist
Jan 31 07:27:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fed867e3-9444-45bb-a7a2-7b165ce69bd7 does not exist
Jan 31 07:27:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e27b248e-404d-4a83-9106-fcce9ff2e1c6 does not exist
Jan 31 07:27:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:27:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:27:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:27:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:27:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:27:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:27:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:27:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:39.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:27:39 compute-0 sudo[249789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:39 compute-0 sudo[249789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:39 compute-0 sudo[249789]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:39 compute-0 sudo[249814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:27:39 compute-0 sudo[249814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:39 compute-0 sudo[249814]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:39 compute-0 sudo[249840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:39 compute-0 sudo[249840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:39 compute-0 sudo[249840]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:39 compute-0 sudo[249865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:27:39 compute-0 sudo[249865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:39 compute-0 podman[249929]: 2026-01-31 07:27:39.730863606 +0000 UTC m=+0.061238961 container create 66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:27:39 compute-0 systemd[1]: Started libpod-conmon-66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f.scope.
Jan 31 07:27:39 compute-0 podman[249929]: 2026-01-31 07:27:39.701015634 +0000 UTC m=+0.031391009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:27:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:27:39 compute-0 podman[249929]: 2026-01-31 07:27:39.818955075 +0000 UTC m=+0.149330450 container init 66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banach, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:27:39 compute-0 podman[249929]: 2026-01-31 07:27:39.825921096 +0000 UTC m=+0.156296421 container start 66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:27:39 compute-0 podman[249929]: 2026-01-31 07:27:39.82980989 +0000 UTC m=+0.160185305 container attach 66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:27:39 compute-0 optimistic_banach[249945]: 167 167
Jan 31 07:27:39 compute-0 systemd[1]: libpod-66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f.scope: Deactivated successfully.
Jan 31 07:27:39 compute-0 conmon[249945]: conmon 66d902b6dca370204902 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f.scope/container/memory.events
Jan 31 07:27:39 compute-0 podman[249929]: 2026-01-31 07:27:39.834775432 +0000 UTC m=+0.165150767 container died 66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:27:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eda8d30dfab3ddbb1ce6a1ad11475783b6a8c2d6fc24b83f35fd31b2f713a87-merged.mount: Deactivated successfully.
Jan 31 07:27:39 compute-0 podman[249929]: 2026-01-31 07:27:39.879253142 +0000 UTC m=+0.209628477 container remove 66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_banach, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:27:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:39 compute-0 systemd[1]: libpod-conmon-66d902b6dca37020490246ab4ab455377ef9b4c25aa50796664db581863a075f.scope: Deactivated successfully.
Jan 31 07:27:40 compute-0 podman[249969]: 2026-01-31 07:27:40.036584257 +0000 UTC m=+0.056592267 container create 77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:27:40 compute-0 systemd[1]: Started libpod-conmon-77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b.scope.
Jan 31 07:27:40 compute-0 podman[249969]: 2026-01-31 07:27:40.01305917 +0000 UTC m=+0.033067240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:27:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb0ec288d07c4f7f894b88b31ed92beb9945d36233b8bde60048dc83d05ba71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb0ec288d07c4f7f894b88b31ed92beb9945d36233b8bde60048dc83d05ba71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb0ec288d07c4f7f894b88b31ed92beb9945d36233b8bde60048dc83d05ba71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb0ec288d07c4f7f894b88b31ed92beb9945d36233b8bde60048dc83d05ba71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb0ec288d07c4f7f894b88b31ed92beb9945d36233b8bde60048dc83d05ba71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:40 compute-0 podman[249969]: 2026-01-31 07:27:40.144343368 +0000 UTC m=+0.164351428 container init 77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:27:40 compute-0 podman[249969]: 2026-01-31 07:27:40.154902097 +0000 UTC m=+0.174910097 container start 77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:27:40 compute-0 podman[249969]: 2026-01-31 07:27:40.160072723 +0000 UTC m=+0.180080743 container attach 77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:27:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:27:40.781 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:27:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:27:40.786 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:27:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:27:40.787 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:27:40 compute-0 musing_golick[249986]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:27:40 compute-0 musing_golick[249986]: --> relative data size: 1.0
Jan 31 07:27:40 compute-0 musing_golick[249986]: --> All data devices are unavailable
Jan 31 07:27:40 compute-0 systemd[1]: libpod-77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b.scope: Deactivated successfully.
Jan 31 07:27:40 compute-0 podman[249969]: 2026-01-31 07:27:40.992804798 +0000 UTC m=+1.012812788 container died 77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:27:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-adb0ec288d07c4f7f894b88b31ed92beb9945d36233b8bde60048dc83d05ba71-merged.mount: Deactivated successfully.
Jan 31 07:27:41 compute-0 podman[249969]: 2026-01-31 07:27:41.048722948 +0000 UTC m=+1.068730938 container remove 77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_golick, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:27:41 compute-0 systemd[1]: libpod-conmon-77e7c4597b280e25aac8c1b7c3621f47bf51c54a4ec347ae60323d5995cec47b.scope: Deactivated successfully.
Jan 31 07:27:41 compute-0 ceph-mon[74496]: pgmap v793: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:41 compute-0 sudo[249865]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:41 compute-0 sudo[250014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:41 compute-0 sudo[250014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:41.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:41 compute-0 sudo[250014]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:41 compute-0 sudo[250039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:27:41 compute-0 sudo[250039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:41 compute-0 sudo[250039]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:41 compute-0 sudo[250065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:41 compute-0 sudo[250065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:41 compute-0 sudo[250065]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:41 compute-0 sudo[250090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:27:41 compute-0 sudo[250090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:41 compute-0 podman[250155]: 2026-01-31 07:27:41.735931187 +0000 UTC m=+0.033278457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:27:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:41 compute-0 podman[250155]: 2026-01-31 07:27:41.888428623 +0000 UTC m=+0.185775893 container create 974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_allen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:27:41 compute-0 systemd[1]: Started libpod-conmon-974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b.scope.
Jan 31 07:27:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:27:42 compute-0 podman[250155]: 2026-01-31 07:27:42.034755278 +0000 UTC m=+0.332102558 container init 974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_allen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:27:42 compute-0 podman[250155]: 2026-01-31 07:27:42.044670831 +0000 UTC m=+0.342018101 container start 974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_allen, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:27:42 compute-0 podman[250155]: 2026-01-31 07:27:42.050054923 +0000 UTC m=+0.347402193 container attach 974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:27:42 compute-0 sleepy_allen[250171]: 167 167
Jan 31 07:27:42 compute-0 systemd[1]: libpod-974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b.scope: Deactivated successfully.
Jan 31 07:27:42 compute-0 podman[250155]: 2026-01-31 07:27:42.052960314 +0000 UTC m=+0.350307644 container died 974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_allen, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f88e2f24995494aeea5fcbe13d67ea5827a8be2fc8d3e64330cef390512cb3d-merged.mount: Deactivated successfully.
Jan 31 07:27:42 compute-0 podman[250155]: 2026-01-31 07:27:42.104134549 +0000 UTC m=+0.401481789 container remove 974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 31 07:27:42 compute-0 systemd[1]: libpod-conmon-974d06cc775d3d197e0455e51a23960d6541a8aeaadcf5759653d0eaaf98797b.scope: Deactivated successfully.
Jan 31 07:27:42 compute-0 podman[250195]: 2026-01-31 07:27:42.298366918 +0000 UTC m=+0.068024298 container create ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:27:42 compute-0 systemd[1]: Started libpod-conmon-ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f.scope.
Jan 31 07:27:42 compute-0 podman[250195]: 2026-01-31 07:27:42.269268205 +0000 UTC m=+0.038925655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:27:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3052a1bd030a818ff640122ce09b956518360940f0cfbab49f40a20c41c785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3052a1bd030a818ff640122ce09b956518360940f0cfbab49f40a20c41c785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3052a1bd030a818ff640122ce09b956518360940f0cfbab49f40a20c41c785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d3052a1bd030a818ff640122ce09b956518360940f0cfbab49f40a20c41c785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:42 compute-0 podman[250195]: 2026-01-31 07:27:42.396825101 +0000 UTC m=+0.166482451 container init ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 31 07:27:42 compute-0 podman[250195]: 2026-01-31 07:27:42.415054057 +0000 UTC m=+0.184711407 container start ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cohen, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:27:42 compute-0 podman[250195]: 2026-01-31 07:27:42.418352358 +0000 UTC m=+0.188009728 container attach ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cohen, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:27:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:43 compute-0 ceph-mon[74496]: pgmap v794: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:43 compute-0 sharp_cohen[250212]: {
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:     "0": [
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:         {
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "devices": [
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "/dev/loop3"
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             ],
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "lv_name": "ceph_lv0",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "lv_size": "7511998464",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "name": "ceph_lv0",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "tags": {
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.cluster_name": "ceph",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.crush_device_class": "",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.encrypted": "0",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.osd_id": "0",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.type": "block",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:                 "ceph.vdo": "0"
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             },
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "type": "block",
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:             "vg_name": "ceph_vg0"
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:         }
Jan 31 07:27:43 compute-0 sharp_cohen[250212]:     ]
Jan 31 07:27:43 compute-0 sharp_cohen[250212]: }
Jan 31 07:27:43 compute-0 systemd[1]: libpod-ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f.scope: Deactivated successfully.
Jan 31 07:27:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:43.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:43 compute-0 podman[250221]: 2026-01-31 07:27:43.169316189 +0000 UTC m=+0.030809686 container died ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cohen, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:27:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:43.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d3052a1bd030a818ff640122ce09b956518360940f0cfbab49f40a20c41c785-merged.mount: Deactivated successfully.
Jan 31 07:27:43 compute-0 podman[250221]: 2026-01-31 07:27:43.227155756 +0000 UTC m=+0.088649223 container remove ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cohen, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.228 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:43 compute-0 systemd[1]: libpod-conmon-ce00b8e873a94170f125c362018dce16e9ed8c5bc3ad258ac601bea255ca203f.scope: Deactivated successfully.
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.268 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.269 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.269 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:27:43 compute-0 sudo[250090]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.306 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.307 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.308 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.308 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.309 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.309 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.311 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:43 compute-0 sudo[250239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.356 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:27:43 compute-0 sudo[250239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.358 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.359 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.359 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.360 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:27:43 compute-0 sudo[250239]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:43 compute-0 sudo[250265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:27:43 compute-0 sudo[250265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:43 compute-0 sudo[250265]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:43 compute-0 sudo[250290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:43 compute-0 sudo[250290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:43 compute-0 sudo[250290]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:43 compute-0 sshd-session[250222]: Invalid user sol from 45.148.10.240 port 56678
Jan 31 07:27:43 compute-0 sudo[250325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:27:43 compute-0 sudo[250325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:43 compute-0 sshd-session[250222]: Connection closed by invalid user sol 45.148.10.240 port 56678 [preauth]
Jan 31 07:27:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:27:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464465284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:43 compute-0 nova_compute[247704]: 2026-01-31 07:27:43.830 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:27:43 compute-0 podman[250401]: 2026-01-31 07:27:43.880842393 +0000 UTC m=+0.050524409 container create 7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:27:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:43 compute-0 systemd[1]: Started libpod-conmon-7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270.scope.
Jan 31 07:27:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:27:43 compute-0 podman[250401]: 2026-01-31 07:27:43.85500633 +0000 UTC m=+0.024688386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:27:43 compute-0 podman[250401]: 2026-01-31 07:27:43.950167552 +0000 UTC m=+0.119849598 container init 7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:27:43 compute-0 podman[250401]: 2026-01-31 07:27:43.957785479 +0000 UTC m=+0.127467485 container start 7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:27:43 compute-0 podman[250401]: 2026-01-31 07:27:43.962835082 +0000 UTC m=+0.132517128 container attach 7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:27:43 compute-0 determined_moore[250418]: 167 167
Jan 31 07:27:43 compute-0 systemd[1]: libpod-7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270.scope: Deactivated successfully.
Jan 31 07:27:43 compute-0 podman[250401]: 2026-01-31 07:27:43.966701367 +0000 UTC m=+0.136383373 container died 7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:27:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d601cc70301181583d0b9974055708d2116f7e17f1b0b2a9cb2d1d477e7822c-merged.mount: Deactivated successfully.
Jan 31 07:27:44 compute-0 podman[250401]: 2026-01-31 07:27:44.003337134 +0000 UTC m=+0.173019140 container remove 7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_moore, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:27:44 compute-0 systemd[1]: libpod-conmon-7add7834ae857f55583588bab6504bf19175b4148e0be49a3f4dcba235508270.scope: Deactivated successfully.
Jan 31 07:27:44 compute-0 nova_compute[247704]: 2026-01-31 07:27:44.013 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:27:44 compute-0 nova_compute[247704]: 2026-01-31 07:27:44.014 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5173MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:27:44 compute-0 nova_compute[247704]: 2026-01-31 07:27:44.015 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:27:44 compute-0 nova_compute[247704]: 2026-01-31 07:27:44.015 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:27:44 compute-0 podman[250421]: 2026-01-31 07:27:44.060451674 +0000 UTC m=+0.106561282 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:27:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1464465284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4160809978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:44 compute-0 podman[250467]: 2026-01-31 07:27:44.129343412 +0000 UTC m=+0.043400104 container create 55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bassi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:27:44 compute-0 systemd[1]: Started libpod-conmon-55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9.scope.
Jan 31 07:27:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81836371564c9e15ca12d3cba205965732a90a39182af742402588d2d06f8c07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81836371564c9e15ca12d3cba205965732a90a39182af742402588d2d06f8c07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81836371564c9e15ca12d3cba205965732a90a39182af742402588d2d06f8c07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81836371564c9e15ca12d3cba205965732a90a39182af742402588d2d06f8c07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:27:44 compute-0 podman[250467]: 2026-01-31 07:27:44.111446963 +0000 UTC m=+0.025503675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:27:44 compute-0 podman[250467]: 2026-01-31 07:27:44.216009456 +0000 UTC m=+0.130066168 container init 55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:27:44 compute-0 podman[250467]: 2026-01-31 07:27:44.221128942 +0000 UTC m=+0.135185664 container start 55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bassi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:27:44 compute-0 podman[250467]: 2026-01-31 07:27:44.224368971 +0000 UTC m=+0.138425663 container attach 55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bassi, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:27:44 compute-0 sudo[250488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:44 compute-0 sudo[250488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:44 compute-0 sudo[250488]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:44 compute-0 sudo[250513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:44 compute-0 sudo[250513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:44 compute-0 sudo[250513]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:45 compute-0 silly_bassi[250483]: {
Jan 31 07:27:45 compute-0 silly_bassi[250483]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:27:45 compute-0 silly_bassi[250483]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:27:45 compute-0 silly_bassi[250483]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:27:45 compute-0 silly_bassi[250483]:         "osd_id": 0,
Jan 31 07:27:45 compute-0 silly_bassi[250483]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:27:45 compute-0 silly_bassi[250483]:         "type": "bluestore"
Jan 31 07:27:45 compute-0 silly_bassi[250483]:     }
Jan 31 07:27:45 compute-0 silly_bassi[250483]: }
Jan 31 07:27:45 compute-0 systemd[1]: libpod-55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9.scope: Deactivated successfully.
Jan 31 07:27:45 compute-0 podman[250467]: 2026-01-31 07:27:45.06544896 +0000 UTC m=+0.979505682 container died 55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bassi, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 31 07:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-81836371564c9e15ca12d3cba205965732a90a39182af742402588d2d06f8c07-merged.mount: Deactivated successfully.
Jan 31 07:27:45 compute-0 podman[250467]: 2026-01-31 07:27:45.134986284 +0000 UTC m=+1.049042986 container remove 55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:27:45 compute-0 ceph-mon[74496]: pgmap v795: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2038699296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:27:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2038699296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:27:45 compute-0 systemd[1]: libpod-conmon-55f553729775cafe828253eb1c08d4bc7b585b41a8246fa887d67f4ad9a013c9.scope: Deactivated successfully.
Jan 31 07:27:45 compute-0 sudo[250325]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:45.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:27:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:27:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:45.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 37169573-1eb7-4b35-b869-f78debb30561 does not exist
Jan 31 07:27:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 299d8a98-ef6b-4128-adc8-597cdcefab40 does not exist
Jan 31 07:27:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 82d19371-487a-441d-88a1-e54091b7c039 does not exist
Jan 31 07:27:45 compute-0 sudo[250568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:27:45 compute-0 sudo[250568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:45 compute-0 sudo[250568]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:45 compute-0 sudo[250594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:27:45 compute-0 sudo[250594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:27:45 compute-0 sudo[250594]: pam_unix(sudo:session): session closed for user root
Jan 31 07:27:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:27:46 compute-0 ceph-mon[74496]: pgmap v796: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.166 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.168 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:27:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:47.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.194 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:27:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2871053164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2323152489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:27:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229641050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.682 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.690 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.714 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.717 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.718 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:27:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.972 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.972 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:47 compute-0 nova_compute[247704]: 2026-01-31 07:27:47.973 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:27:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/229641050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:48 compute-0 ceph-mon[74496]: pgmap v797: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/60847909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:27:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:27:50 compute-0 ceph-mon[74496]: pgmap v798: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:27:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:27:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:52 compute-0 ceph-mon[74496]: pgmap v799: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:54 compute-0 ceph-mon[74496]: pgmap v800: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:55.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:55.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:56 compute-0 podman[250646]: 2026-01-31 07:27:56.903938698 +0000 UTC m=+0.081214877 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:27:57 compute-0 ceph-mon[74496]: pgmap v801: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:57.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:57.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:27:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:27:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:59.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:27:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:27:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:27:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:59.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:27:59 compute-0 ceph-mon[74496]: pgmap v802: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:27:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:00 compute-0 ceph-mon[74496]: pgmap v803: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:01.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:01.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:02 compute-0 ceph-mon[74496]: pgmap v804: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:03.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:03.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:04 compute-0 sudo[250670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:04 compute-0 sudo[250670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:04 compute-0 sudo[250670]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:04 compute-0 sudo[250695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:04 compute-0 sudo[250695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:04 compute-0 sudo[250695]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:05 compute-0 ceph-mon[74496]: pgmap v805: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:05.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:05.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:07 compute-0 ceph-mon[74496]: pgmap v806: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:07.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:08 compute-0 ceph-mon[74496]: pgmap v807: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:09.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:09.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:10 compute-0 ceph-mon[74496]: pgmap v808: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:28:11.136 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:28:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:28:11.137 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:28:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:28:11.137 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:28:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:11.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:11.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:13 compute-0 ceph-mon[74496]: pgmap v809: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:13.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:13.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:14 compute-0 ceph-mon[74496]: pgmap v810: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:14 compute-0 podman[250725]: 2026-01-31 07:28:14.914657403 +0000 UTC m=+0.084255962 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:28:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:15.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:28:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:15.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:28:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:17 compute-0 ceph-mon[74496]: pgmap v811: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:17.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:19.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:19.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:19 compute-0 ceph-mon[74496]: pgmap v812: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Jan 31 07:28:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:28:19
Jan 31 07:28:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:28:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:28:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['images', 'vms', 'backups', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 31 07:28:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:28:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:21.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:21.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:21 compute-0 ceph-mon[74496]: pgmap v813: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Jan 31 07:28:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 31 07:28:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:22 compute-0 ceph-mon[74496]: pgmap v814: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 31 07:28:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:23.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:23.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 31 07:28:25 compute-0 sudo[250758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:25 compute-0 ceph-mon[74496]: pgmap v815: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 31 07:28:25 compute-0 sudo[250758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:25 compute-0 sudo[250758]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:25 compute-0 sudo[250783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:25 compute-0 sudo[250783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:25 compute-0 sudo[250783]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:25.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 114 op/s
Jan 31 07:28:26 compute-0 ceph-mon[74496]: pgmap v816: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 114 op/s
Jan 31 07:28:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:27.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:27.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:27 compute-0 podman[250810]: 2026-01-31 07:28:27.883909383 +0000 UTC m=+0.054286754 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:28:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 31 07:28:29 compute-0 ceph-mon[74496]: pgmap v817: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 31 07:28:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:29.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:29.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Jan 31 07:28:30 compute-0 ceph-mon[74496]: pgmap v818: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Jan 31 07:28:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:31.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:31.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 164 op/s
Jan 31 07:28:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:33 compute-0 ceph-mon[74496]: pgmap v819: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 98 KiB/s rd, 0 B/s wr, 164 op/s
Jan 31 07:28:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:33.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:33.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 0 B/s wr, 154 op/s
Jan 31 07:28:34 compute-0 ceph-mon[74496]: pgmap v820: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 0 B/s wr, 154 op/s
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:28:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:28:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:35.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:35.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 124 op/s
Jan 31 07:28:36 compute-0 ceph-mon[74496]: pgmap v821: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 0 B/s wr, 124 op/s
Jan 31 07:28:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:37.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:37.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Jan 31 07:28:38 compute-0 ceph-mon[74496]: pgmap v822: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Jan 31 07:28:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:39.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:39.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 31 07:28:41 compute-0 ceph-mon[74496]: pgmap v823: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 31 07:28:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:41.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:41.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 07:28:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:42 compute-0 nova_compute[247704]: 2026-01-31 07:28:42.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:42 compute-0 nova_compute[247704]: 2026-01-31 07:28:42.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:28:42 compute-0 nova_compute[247704]: 2026-01-31 07:28:42.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:28:43 compute-0 nova_compute[247704]: 2026-01-31 07:28:43.033 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:28:43 compute-0 ceph-mon[74496]: pgmap v824: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 07:28:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:43.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:43.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:43 compute-0 nova_compute[247704]: 2026-01-31 07:28:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:43 compute-0 nova_compute[247704]: 2026-01-31 07:28:43.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:43 compute-0 nova_compute[247704]: 2026-01-31 07:28:43.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.004 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.005 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.005 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.006 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.006 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:28:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:28:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2535823876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.463 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:28:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2522133190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.671 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.672 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5234MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.672 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.673 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.810 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.810 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:28:44 compute-0 nova_compute[247704]: 2026-01-31 07:28:44.836 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:28:45 compute-0 sudo[250881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:45 compute-0 sudo[250881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-0 sudo[250881]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:28:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309724722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:45 compute-0 sudo[250912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:28:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:45.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:28:45 compute-0 sudo[250912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-0 sudo[250912]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-0 nova_compute[247704]: 2026-01-31 07:28:45.259 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:28:45 compute-0 nova_compute[247704]: 2026-01-31 07:28:45.264 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:28:45 compute-0 podman[250905]: 2026-01-31 07:28:45.270956603 +0000 UTC m=+0.089128771 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:28:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:45.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:45 compute-0 nova_compute[247704]: 2026-01-31 07:28:45.316 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:28:45 compute-0 nova_compute[247704]: 2026-01-31 07:28:45.317 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:28:45 compute-0 nova_compute[247704]: 2026-01-31 07:28:45.317 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:28:45 compute-0 sudo[250958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:45 compute-0 sudo[250958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-0 sudo[250958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-0 sudo[250983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:28:45 compute-0 sudo[250983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-0 sudo[250983]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-0 sudo[251008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:45 compute-0 sudo[251008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-0 sudo[251008]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:45 compute-0 sudo[251033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:28:45 compute-0 sudo[251033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:45 compute-0 ceph-mon[74496]: pgmap v825: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2535823876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/840305027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3089092945' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:28:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3089092945' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:28:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1309724722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:46 compute-0 sudo[251033]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:46 compute-0 nova_compute[247704]: 2026-01-31 07:28:46.312 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:46 compute-0 nova_compute[247704]: 2026-01-31 07:28:46.313 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:46 compute-0 nova_compute[247704]: 2026-01-31 07:28:46.313 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:46 compute-0 nova_compute[247704]: 2026-01-31 07:28:46.313 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:46 compute-0 nova_compute[247704]: 2026-01-31 07:28:46.314 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:28:46 compute-0 nova_compute[247704]: 2026-01-31 07:28:46.314 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:28:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 07:28:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:28:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:28:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:28:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:28:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:28:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:28:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f0125ab6-975e-4524-a75e-4a6f01998091 does not exist
Jan 31 07:28:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4e46e287-e425-442c-af17-76a9c55eda1d does not exist
Jan 31 07:28:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5b85e15b-da2c-4cc5-a5e7-ebc22f4541c0 does not exist
Jan 31 07:28:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:28:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:28:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:28:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:28:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:28:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:28:46 compute-0 sudo[251090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:46 compute-0 sudo[251090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:46 compute-0 sudo[251090]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:46 compute-0 sudo[251115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:28:46 compute-0 sudo[251115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:46 compute-0 sudo[251115]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:46 compute-0 sudo[251140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:46 compute-0 sudo[251140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:46 compute-0 sudo[251140]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:46 compute-0 sudo[251165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:28:46 compute-0 sudo[251165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:47 compute-0 podman[251233]: 2026-01-31 07:28:47.012472344 +0000 UTC m=+0.059332499 container create d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:28:47 compute-0 systemd[1]: Started libpod-conmon-d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13.scope.
Jan 31 07:28:47 compute-0 podman[251233]: 2026-01-31 07:28:46.976433899 +0000 UTC m=+0.023294144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:28:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:28:47 compute-0 podman[251233]: 2026-01-31 07:28:47.10549291 +0000 UTC m=+0.152353085 container init d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:28:47 compute-0 podman[251233]: 2026-01-31 07:28:47.112451241 +0000 UTC m=+0.159311396 container start d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:28:47 compute-0 happy_lamport[251249]: 167 167
Jan 31 07:28:47 compute-0 systemd[1]: libpod-d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13.scope: Deactivated successfully.
Jan 31 07:28:47 compute-0 podman[251233]: 2026-01-31 07:28:47.127146822 +0000 UTC m=+0.174007027 container attach d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:28:47 compute-0 podman[251233]: 2026-01-31 07:28:47.128799933 +0000 UTC m=+0.175660078 container died d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ab49e505544cbef0fbaaa4fc91417a145d8378e4664a83e9a0bb307384526d4-merged.mount: Deactivated successfully.
Jan 31 07:28:47 compute-0 ceph-mon[74496]: pgmap v826: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:28:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:28:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:28:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:28:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:28:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:28:47 compute-0 podman[251233]: 2026-01-31 07:28:47.202236948 +0000 UTC m=+0.249097103 container remove d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:28:47 compute-0 systemd[1]: libpod-conmon-d98d421aed498ddb589aef693be28a56792dd06c25532ca43c39b4f9bb7a6c13.scope: Deactivated successfully.
Jan 31 07:28:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:47 compute-0 podman[251276]: 2026-01-31 07:28:47.355469603 +0000 UTC m=+0.056752825 container create c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_saha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:28:47 compute-0 systemd[1]: Started libpod-conmon-c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1.scope.
Jan 31 07:28:47 compute-0 podman[251276]: 2026-01-31 07:28:47.32846149 +0000 UTC m=+0.029744772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:28:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:28:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298def5a0ccfda4f2546c13e10196cb41f9f2aa135eebd0af0805d474884bfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298def5a0ccfda4f2546c13e10196cb41f9f2aa135eebd0af0805d474884bfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298def5a0ccfda4f2546c13e10196cb41f9f2aa135eebd0af0805d474884bfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298def5a0ccfda4f2546c13e10196cb41f9f2aa135eebd0af0805d474884bfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298def5a0ccfda4f2546c13e10196cb41f9f2aa135eebd0af0805d474884bfd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:47 compute-0 podman[251276]: 2026-01-31 07:28:47.461311525 +0000 UTC m=+0.162594807 container init c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:28:47 compute-0 podman[251276]: 2026-01-31 07:28:47.470585642 +0000 UTC m=+0.171868874 container start c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:28:47 compute-0 podman[251276]: 2026-01-31 07:28:47.480918107 +0000 UTC m=+0.182201339 container attach c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:28:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:48 compute-0 ceph-mon[74496]: pgmap v827: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:48 compute-0 competent_saha[251292]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:28:48 compute-0 competent_saha[251292]: --> relative data size: 1.0
Jan 31 07:28:48 compute-0 competent_saha[251292]: --> All data devices are unavailable
Jan 31 07:28:48 compute-0 systemd[1]: libpod-c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1.scope: Deactivated successfully.
Jan 31 07:28:48 compute-0 podman[251276]: 2026-01-31 07:28:48.313234352 +0000 UTC m=+1.014517554 container died c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:28:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6298def5a0ccfda4f2546c13e10196cb41f9f2aa135eebd0af0805d474884bfd-merged.mount: Deactivated successfully.
Jan 31 07:28:48 compute-0 podman[251276]: 2026-01-31 07:28:48.383882059 +0000 UTC m=+1.085165271 container remove c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:28:48 compute-0 systemd[1]: libpod-conmon-c99246d16f92bb053e95abfb39884a06736021e34037d8cf8c2bbdf4375878b1.scope: Deactivated successfully.
Jan 31 07:28:48 compute-0 sudo[251165]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:48 compute-0 sudo[251320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:48 compute-0 sudo[251320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:48 compute-0 sudo[251320]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:48 compute-0 sudo[251345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:28:48 compute-0 sudo[251345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:48 compute-0 sudo[251345]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:48 compute-0 sudo[251370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:48 compute-0 sudo[251370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:48 compute-0 sudo[251370]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:48 compute-0 sudo[251395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:28:48 compute-0 sudo[251395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:49 compute-0 podman[251463]: 2026-01-31 07:28:48.966962869 +0000 UTC m=+0.024669917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:28:49 compute-0 podman[251463]: 2026-01-31 07:28:49.15823268 +0000 UTC m=+0.215939648 container create c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:28:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:49.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:49 compute-0 systemd[1]: Started libpod-conmon-c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3.scope.
Jan 31 07:28:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:28:49 compute-0 podman[251463]: 2026-01-31 07:28:49.575321291 +0000 UTC m=+0.633028339 container init c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:28:49 compute-0 podman[251463]: 2026-01-31 07:28:49.585003389 +0000 UTC m=+0.642710347 container start c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:28:49 compute-0 suspicious_cartwright[251480]: 167 167
Jan 31 07:28:49 compute-0 systemd[1]: libpod-c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3.scope: Deactivated successfully.
Jan 31 07:28:49 compute-0 podman[251463]: 2026-01-31 07:28:49.805215031 +0000 UTC m=+0.862922089 container attach c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:28:49 compute-0 podman[251463]: 2026-01-31 07:28:49.805796745 +0000 UTC m=+0.863503743 container died c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:28:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:28:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e7223b1885fa1584320f1707e05833b752b31b452d60197b3744310ce11d42-merged.mount: Deactivated successfully.
Jan 31 07:28:50 compute-0 podman[251463]: 2026-01-31 07:28:50.317769328 +0000 UTC m=+1.375476286 container remove c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:28:50 compute-0 systemd[1]: libpod-conmon-c9ed75b783c21028983c99a124a8c538f4923ccccfc30571932e7e422b15c6f3.scope: Deactivated successfully.
Jan 31 07:28:50 compute-0 podman[251504]: 2026-01-31 07:28:50.469184969 +0000 UTC m=+0.050964193 container create 4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:28:50 compute-0 systemd[1]: Started libpod-conmon-4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3.scope.
Jan 31 07:28:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:28:50 compute-0 podman[251504]: 2026-01-31 07:28:50.44522506 +0000 UTC m=+0.027004364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7593becf9986d793c09fb795be5b3262b92872b9b5bf722af1b684cceac99bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7593becf9986d793c09fb795be5b3262b92872b9b5bf722af1b684cceac99bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7593becf9986d793c09fb795be5b3262b92872b9b5bf722af1b684cceac99bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7593becf9986d793c09fb795be5b3262b92872b9b5bf722af1b684cceac99bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:50 compute-0 podman[251504]: 2026-01-31 07:28:50.559719094 +0000 UTC m=+0.141498328 container init 4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 31 07:28:50 compute-0 podman[251504]: 2026-01-31 07:28:50.565795323 +0000 UTC m=+0.147574537 container start 4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:28:50 compute-0 podman[251504]: 2026-01-31 07:28:50.574594859 +0000 UTC m=+0.156374093 container attach 4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:28:51 compute-0 ceph-mon[74496]: pgmap v828: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3985746322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:51.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:28:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]: {
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:     "0": [
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:         {
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "devices": [
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "/dev/loop3"
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             ],
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "lv_name": "ceph_lv0",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "lv_size": "7511998464",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "name": "ceph_lv0",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "tags": {
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.cluster_name": "ceph",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.crush_device_class": "",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.encrypted": "0",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.osd_id": "0",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.type": "block",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:                 "ceph.vdo": "0"
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             },
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "type": "block",
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:             "vg_name": "ceph_vg0"
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:         }
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]:     ]
Jan 31 07:28:51 compute-0 peaceful_mirzakhani[251520]: }
Jan 31 07:28:51 compute-0 systemd[1]: libpod-4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3.scope: Deactivated successfully.
Jan 31 07:28:51 compute-0 podman[251504]: 2026-01-31 07:28:51.382361312 +0000 UTC m=+0.964140516 container died 4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:28:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7593becf9986d793c09fb795be5b3262b92872b9b5bf722af1b684cceac99bc-merged.mount: Deactivated successfully.
Jan 31 07:28:51 compute-0 podman[251504]: 2026-01-31 07:28:51.483504308 +0000 UTC m=+1.065283532 container remove 4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 07:28:51 compute-0 systemd[1]: libpod-conmon-4dffc87e7a6272502426d285f43ffd0da2f8ac2c05dcb6e9ff41608d770148d3.scope: Deactivated successfully.
Jan 31 07:28:51 compute-0 sudo[251395]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:51 compute-0 sudo[251542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:51 compute-0 sudo[251542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:51 compute-0 sudo[251542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:51 compute-0 sudo[251567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:28:51 compute-0 sudo[251567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:51 compute-0 sudo[251567]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:51 compute-0 sudo[251592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:51 compute-0 sudo[251592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:51 compute-0 sudo[251592]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:51 compute-0 sudo[251617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:28:51 compute-0 sudo[251617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:52 compute-0 podman[251682]: 2026-01-31 07:28:52.022251078 +0000 UTC m=+0.025831465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:28:52 compute-0 podman[251682]: 2026-01-31 07:28:52.218604164 +0000 UTC m=+0.222184581 container create 8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_buck, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:28:52 compute-0 systemd[1]: Started libpod-conmon-8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21.scope.
Jan 31 07:28:52 compute-0 ceph-mon[74496]: pgmap v829: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:28:52 compute-0 podman[251682]: 2026-01-31 07:28:52.333634891 +0000 UTC m=+0.337215348 container init 8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_buck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:28:52 compute-0 podman[251682]: 2026-01-31 07:28:52.343888003 +0000 UTC m=+0.347468410 container start 8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_buck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:28:52 compute-0 upbeat_buck[251698]: 167 167
Jan 31 07:28:52 compute-0 systemd[1]: libpod-8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21.scope: Deactivated successfully.
Jan 31 07:28:52 compute-0 podman[251682]: 2026-01-31 07:28:52.350962226 +0000 UTC m=+0.354542693 container attach 8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:28:52 compute-0 podman[251682]: 2026-01-31 07:28:52.351671194 +0000 UTC m=+0.355251601 container died 8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e52afd416055269fe1032217e68d82f694df1f6a7b8146e49c504704fab397e8-merged.mount: Deactivated successfully.
Jan 31 07:28:52 compute-0 podman[251682]: 2026-01-31 07:28:52.401363655 +0000 UTC m=+0.404944032 container remove 8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_buck, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:28:52 compute-0 systemd[1]: libpod-conmon-8d2a6854088e145fec85fd16f60b06fa7baaf6cf870b54ab27ab069e146bbd21.scope: Deactivated successfully.
Jan 31 07:28:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:52 compute-0 podman[251721]: 2026-01-31 07:28:52.567684663 +0000 UTC m=+0.054447849 container create ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:28:52 compute-0 systemd[1]: Started libpod-conmon-ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042.scope.
Jan 31 07:28:52 compute-0 podman[251721]: 2026-01-31 07:28:52.54277058 +0000 UTC m=+0.029533806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:28:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:28:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2630192b6bb88b17621113e4b369efdd2cc70f238e9b38921bd34d09152f783e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2630192b6bb88b17621113e4b369efdd2cc70f238e9b38921bd34d09152f783e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2630192b6bb88b17621113e4b369efdd2cc70f238e9b38921bd34d09152f783e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2630192b6bb88b17621113e4b369efdd2cc70f238e9b38921bd34d09152f783e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:28:52 compute-0 podman[251721]: 2026-01-31 07:28:52.665323313 +0000 UTC m=+0.152086569 container init ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:28:52 compute-0 podman[251721]: 2026-01-31 07:28:52.672794766 +0000 UTC m=+0.159557982 container start ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:28:52 compute-0 podman[251721]: 2026-01-31 07:28:52.679285476 +0000 UTC m=+0.166048762 container attach ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:28:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:53.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:53.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:53 compute-0 tender_johnson[251738]: {
Jan 31 07:28:53 compute-0 tender_johnson[251738]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:28:53 compute-0 tender_johnson[251738]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:28:53 compute-0 tender_johnson[251738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:28:53 compute-0 tender_johnson[251738]:         "osd_id": 0,
Jan 31 07:28:53 compute-0 tender_johnson[251738]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:28:53 compute-0 tender_johnson[251738]:         "type": "bluestore"
Jan 31 07:28:53 compute-0 tender_johnson[251738]:     }
Jan 31 07:28:53 compute-0 tender_johnson[251738]: }
Jan 31 07:28:53 compute-0 systemd[1]: libpod-ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042.scope: Deactivated successfully.
Jan 31 07:28:53 compute-0 podman[251721]: 2026-01-31 07:28:53.501973414 +0000 UTC m=+0.988736630 container died ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:28:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2630192b6bb88b17621113e4b369efdd2cc70f238e9b38921bd34d09152f783e-merged.mount: Deactivated successfully.
Jan 31 07:28:53 compute-0 podman[251721]: 2026-01-31 07:28:53.737043402 +0000 UTC m=+1.223806588 container remove ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:28:53 compute-0 systemd[1]: libpod-conmon-ce3bc3ecb87e2ffd136ab42de173c37cf9469def3721d88cc2f1f13275e12042.scope: Deactivated successfully.
Jan 31 07:28:53 compute-0 sudo[251617]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:28:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:28:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:53 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6e6a50ef-0786-4e62-8d76-87613b41285d does not exist
Jan 31 07:28:53 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 14a0cbd7-c117-403b-b639-d42244c93f8d does not exist
Jan 31 07:28:53 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d156f654-ed45-4e5f-8303-2ff5fa5a4334 does not exist
Jan 31 07:28:53 compute-0 sudo[251772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:28:53 compute-0 sudo[251772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:53 compute-0 sudo[251772]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:53 compute-0 sudo[251797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:28:53 compute-0 sudo[251797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:28:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:53 compute-0 sudo[251797]: pam_unix(sudo:session): session closed for user root
Jan 31 07:28:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:28:54 compute-0 ceph-mon[74496]: pgmap v830: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/82651890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:28:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:28:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:55.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:28:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:55.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:57 compute-0 ceph-mon[74496]: pgmap v831: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:57.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:57.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:28:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:58 compute-0 podman[251824]: 2026-01-31 07:28:58.889183725 +0000 UTC m=+0.061723808 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 07:28:59 compute-0 ceph-mon[74496]: pgmap v832: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:28:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:28:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:59.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:28:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:28:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:28:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:59.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:28:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:01 compute-0 ceph-mon[74496]: pgmap v833: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:01.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:01.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:01 compute-0 anacron[52258]: Job `cron.weekly' started
Jan 31 07:29:01 compute-0 anacron[52258]: Job `cron.weekly' terminated
Jan 31 07:29:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:03 compute-0 ceph-mon[74496]: pgmap v834: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:03.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:03.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:05 compute-0 ceph-mon[74496]: pgmap v835: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:29:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:05.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:29:05 compute-0 sudo[251851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:05.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:05 compute-0 sudo[251851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:05 compute-0 sudo[251851]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:05 compute-0 sudo[251876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:05 compute-0 sudo[251876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:05 compute-0 sudo[251876]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:07 compute-0 ceph-mon[74496]: pgmap v836: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:07.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:08 compute-0 ceph-mon[74496]: pgmap v837: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:09.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:09.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:11 compute-0 ceph-mon[74496]: pgmap v838: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:29:11.138 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:29:11.139 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:29:11.139 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:29:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:11.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:11.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:13 compute-0 ceph-mon[74496]: pgmap v839: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:13.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:14 compute-0 ceph-mon[74496]: pgmap v840: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:15.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:15.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:15 compute-0 podman[251906]: 2026-01-31 07:29:15.937390094 +0000 UTC m=+0.103751980 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 07:29:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:16 compute-0 ceph-mon[74496]: pgmap v841: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:17.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.027324) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558027395, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2103, "num_deletes": 251, "total_data_size": 3976536, "memory_usage": 4037528, "flush_reason": "Manual Compaction"}
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558070223, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3902909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17286, "largest_seqno": 19388, "table_properties": {"data_size": 3893310, "index_size": 6093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19107, "raw_average_key_size": 20, "raw_value_size": 3874271, "raw_average_value_size": 4073, "num_data_blocks": 272, "num_entries": 951, "num_filter_entries": 951, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844333, "oldest_key_time": 1769844333, "file_creation_time": 1769844558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 42943 microseconds, and 8355 cpu microseconds.
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.070277) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3902909 bytes OK
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.070302) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.075204) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.075224) EVENT_LOG_v1 {"time_micros": 1769844558075218, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.075247) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3967996, prev total WAL file size 3967996, number of live WAL files 2.
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.076052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3811KB)], [41(7648KB)]
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558076162, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11734466, "oldest_snapshot_seqno": -1}
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4493 keys, 9656891 bytes, temperature: kUnknown
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558216967, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9656891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9624432, "index_size": 20125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 112336, "raw_average_key_size": 25, "raw_value_size": 9540647, "raw_average_value_size": 2123, "num_data_blocks": 836, "num_entries": 4493, "num_filter_entries": 4493, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.217404) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9656891 bytes
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.224517) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.2 rd, 68.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 5012, records dropped: 519 output_compression: NoCompression
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.224555) EVENT_LOG_v1 {"time_micros": 1769844558224537, "job": 20, "event": "compaction_finished", "compaction_time_micros": 140976, "compaction_time_cpu_micros": 20344, "output_level": 6, "num_output_files": 1, "total_output_size": 9656891, "num_input_records": 5012, "num_output_records": 4493, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558225232, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844558226246, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.075911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.226413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.226423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.226426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.226429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:18 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:29:18.226432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:29:19 compute-0 ceph-mon[74496]: pgmap v842: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:19.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:29:19
Jan 31 07:29:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:29:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:29:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.data']
Jan 31 07:29:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:29:21 compute-0 ceph-mon[74496]: pgmap v843: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:21.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:22 compute-0 ceph-mon[74496]: pgmap v844: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=404 latency=0.002000048s ======
Jan 31 07:29:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:23.111 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.002000048s
Jan 31 07:29:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - - [31/Jan/2026:07:29:23.128 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Jan 31 07:29:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:25 compute-0 ceph-mon[74496]: pgmap v845: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:25.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:25 compute-0 sudo[251938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:25 compute-0 sudo[251938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:25 compute-0 sudo[251938]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:25 compute-0 sudo[251963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:25 compute-0 sudo[251963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:25 compute-0 sudo[251963]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:27 compute-0 ceph-mon[74496]: pgmap v846: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:27.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 31 07:29:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 07:29:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 31 07:29:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:29.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 31 07:29:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:29.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:29 compute-0 ceph-mon[74496]: pgmap v847: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:29 compute-0 ceph-mon[74496]: osdmap e131: 3 total, 3 up, 3 in
Jan 31 07:29:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 07:29:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 31 07:29:29 compute-0 podman[251990]: 2026-01-31 07:29:29.889941214 +0000 UTC m=+0.064080487 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:29:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Jan 31 07:29:30 compute-0 ceph-mon[74496]: osdmap e132: 3 total, 3 up, 3 in
Jan 31 07:29:30 compute-0 ceph-mon[74496]: pgmap v850: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Jan 31 07:29:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:31.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:31.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 31 07:29:32 compute-0 ceph-mon[74496]: pgmap v851: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Jan 31 07:29:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 07:29:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 31 07:29:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 07:29:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:33.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:33 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 31 07:29:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:33.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 15 op/s
Jan 31 07:29:34 compute-0 ceph-mon[74496]: osdmap e133: 3 total, 3 up, 3 in
Jan 31 07:29:34 compute-0 ceph-mon[74496]: pgmap v853: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 15 op/s
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 32)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:29:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:29:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:35.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:35.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 19 op/s
Jan 31 07:29:36 compute-0 ceph-mon[74496]: pgmap v854: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 19 op/s
Jan 31 07:29:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:37.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:37.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 31 07:29:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 29 MiB data, 181 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 31 07:29:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 07:29:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 31 07:29:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:39 compute-0 ceph-mon[74496]: pgmap v855: 305 pgs: 305 active+clean; 29 MiB data, 181 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Jan 31 07:29:39 compute-0 ceph-mon[74496]: osdmap e134: 3 total, 3 up, 3 in
Jan 31 07:29:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:39.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 31 07:29:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 07:29:39 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 31 07:29:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 37 MiB data, 189 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 MiB/s wr, 31 op/s
Jan 31 07:29:40 compute-0 ceph-mon[74496]: osdmap e135: 3 total, 3 up, 3 in
Jan 31 07:29:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:41.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:41.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:41 compute-0 ceph-mon[74496]: pgmap v858: 305 pgs: 305 active+clean; 37 MiB data, 189 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 MiB/s wr, 31 op/s
Jan 31 07:29:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 4.9 MiB/s wr, 26 op/s
Jan 31 07:29:42 compute-0 nova_compute[247704]: 2026-01-31 07:29:42.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:42 compute-0 nova_compute[247704]: 2026-01-31 07:29:42.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:29:42 compute-0 nova_compute[247704]: 2026-01-31 07:29:42.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:29:42 compute-0 nova_compute[247704]: 2026-01-31 07:29:42.701 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:29:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:43 compute-0 nova_compute[247704]: 2026-01-31 07:29:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:43 compute-0 ceph-mon[74496]: pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 4.9 MiB/s wr, 26 op/s
Jan 31 07:29:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3500024337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 25 op/s
Jan 31 07:29:44 compute-0 nova_compute[247704]: 2026-01-31 07:29:44.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:44 compute-0 nova_compute[247704]: 2026-01-31 07:29:44.662 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/451814909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:29:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848144418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:29:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:29:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848144418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:29:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:45.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.589 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.589 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.590 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.590 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:29:45 compute-0 nova_compute[247704]: 2026-01-31 07:29:45.591 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:29:45 compute-0 sudo[252017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:45 compute-0 sudo[252017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:45 compute-0 sudo[252017]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:45 compute-0 sudo[252043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:45 compute-0 sudo[252043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:45 compute-0 sudo[252043]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:29:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2552159884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:46 compute-0 nova_compute[247704]: 2026-01-31 07:29:46.011 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:29:46 compute-0 nova_compute[247704]: 2026-01-31 07:29:46.164 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:29:46 compute-0 nova_compute[247704]: 2026-01-31 07:29:46.165 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5224MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:29:46 compute-0 nova_compute[247704]: 2026-01-31 07:29:46.165 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:29:46 compute-0 nova_compute[247704]: 2026-01-31 07:29:46.165 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:29:46 compute-0 ceph-mon[74496]: pgmap v860: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 25 op/s
Jan 31 07:29:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/848144418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:29:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/848144418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:29:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Jan 31 07:29:46 compute-0 nova_compute[247704]: 2026-01-31 07:29:46.681 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:29:46 compute-0 nova_compute[247704]: 2026-01-31 07:29:46.681 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:29:46 compute-0 nova_compute[247704]: 2026-01-31 07:29:46.698 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:29:46 compute-0 podman[252109]: 2026-01-31 07:29:46.896044387 +0000 UTC m=+0.077599608 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:29:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:29:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2138970295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:47 compute-0 nova_compute[247704]: 2026-01-31 07:29:47.111 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:29:47 compute-0 nova_compute[247704]: 2026-01-31 07:29:47.118 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:29:47 compute-0 nova_compute[247704]: 2026-01-31 07:29:47.146 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:29:47 compute-0 nova_compute[247704]: 2026-01-31 07:29:47.148 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:29:47 compute-0 nova_compute[247704]: 2026-01-31 07:29:47.148 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:29:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2552159884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:47 compute-0 ceph-mon[74496]: pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 9.6 KiB/s rd, 1.6 MiB/s wr, 14 op/s
Jan 31 07:29:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2138970295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:47.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:47.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:48 compute-0 nova_compute[247704]: 2026-01-31 07:29:48.149 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:29:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Jan 31 07:29:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:49.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:49.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:49 compute-0 ceph-mon[74496]: pgmap v862: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Jan 31 07:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:29:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.5 KiB/s rd, 420 KiB/s wr, 6 op/s
Jan 31 07:29:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:51.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:51.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:51 compute-0 ceph-mon[74496]: pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.5 KiB/s rd, 420 KiB/s wr, 6 op/s
Jan 31 07:29:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/394366881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 379 KiB/s wr, 5 op/s
Jan 31 07:29:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2708000108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:29:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:53.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:53.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:53 compute-0 ceph-mon[74496]: pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 379 KiB/s wr, 5 op/s
Jan 31 07:29:54 compute-0 sudo[252141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:54 compute-0 sudo[252141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:54 compute-0 sudo[252141]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:54 compute-0 sudo[252166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:29:54 compute-0 sudo[252166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:54 compute-0 sudo[252166]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 426 B/s wr, 5 op/s
Jan 31 07:29:54 compute-0 sudo[252191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:54 compute-0 sudo[252191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:54 compute-0 sudo[252191]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:54 compute-0 sudo[252216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:29:54 compute-0 sudo[252216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:54 compute-0 sudo[252216]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:29:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:29:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:29:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:29:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:29:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:29:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 96a54c32-2806-4f5d-8ca9-6b6df637ea2b does not exist
Jan 31 07:29:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b5e5a186-204b-4950-a305-f7bdbc4daca5 does not exist
Jan 31 07:29:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a005ecf9-16a0-4f39-826d-332c04c16734 does not exist
Jan 31 07:29:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:29:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:29:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:29:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:29:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:29:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:29:55 compute-0 sudo[252274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:55 compute-0 sudo[252274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:55 compute-0 sudo[252274]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:55 compute-0 sudo[252299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:29:55 compute-0 sudo[252299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:55 compute-0 sudo[252299]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:55 compute-0 sudo[252325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:55 compute-0 sudo[252325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:55 compute-0 sudo[252325]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:55.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:55 compute-0 sudo[252350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:29:55 compute-0 sudo[252350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:55.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:55 compute-0 podman[252415]: 2026-01-31 07:29:55.63531074 +0000 UTC m=+0.022462673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:29:55 compute-0 podman[252415]: 2026-01-31 07:29:55.792619246 +0000 UTC m=+0.179771149 container create cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:29:55 compute-0 ceph-mon[74496]: pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 426 B/s wr, 5 op/s
Jan 31 07:29:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:29:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:29:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:29:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:29:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:29:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:29:55 compute-0 systemd[1]: Started libpod-conmon-cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674.scope.
Jan 31 07:29:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:29:56 compute-0 podman[252415]: 2026-01-31 07:29:56.111885504 +0000 UTC m=+0.499037517 container init cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:29:56 compute-0 podman[252415]: 2026-01-31 07:29:56.122891704 +0000 UTC m=+0.510043647 container start cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:29:56 compute-0 stupefied_cori[252431]: 167 167
Jan 31 07:29:56 compute-0 systemd[1]: libpod-cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674.scope: Deactivated successfully.
Jan 31 07:29:56 compute-0 podman[252415]: 2026-01-31 07:29:56.228476368 +0000 UTC m=+0.615628271 container attach cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cori, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:29:56 compute-0 podman[252415]: 2026-01-31 07:29:56.230227602 +0000 UTC m=+0.617379525 container died cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:29:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 31 07:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6323bee7cc070da27e569023508f15873cd9e0b1c9b461f103fb7d1f1cc75eb-merged.mount: Deactivated successfully.
Jan 31 07:29:56 compute-0 podman[252415]: 2026-01-31 07:29:56.450749491 +0000 UTC m=+0.837901404 container remove cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:29:56 compute-0 systemd[1]: libpod-conmon-cb0f6d559a2661f7e6b08d6826e1b1c3caa0ee6754fbc1919604799b9b823674.scope: Deactivated successfully.
Jan 31 07:29:56 compute-0 podman[252455]: 2026-01-31 07:29:56.653603586 +0000 UTC m=+0.104540230 container create d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:29:56 compute-0 podman[252455]: 2026-01-31 07:29:56.574173404 +0000 UTC m=+0.025110088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:29:56 compute-0 systemd[1]: Started libpod-conmon-d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a.scope.
Jan 31 07:29:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da4ee078838c55de12c955f7214b59b29ad95194a44d2124fdac8f6f723acfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da4ee078838c55de12c955f7214b59b29ad95194a44d2124fdac8f6f723acfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da4ee078838c55de12c955f7214b59b29ad95194a44d2124fdac8f6f723acfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da4ee078838c55de12c955f7214b59b29ad95194a44d2124fdac8f6f723acfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:29:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da4ee078838c55de12c955f7214b59b29ad95194a44d2124fdac8f6f723acfd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:29:56 compute-0 podman[252455]: 2026-01-31 07:29:56.798501947 +0000 UTC m=+0.249438581 container init d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:29:56 compute-0 podman[252455]: 2026-01-31 07:29:56.804555936 +0000 UTC m=+0.255492580 container start d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:29:56 compute-0 podman[252455]: 2026-01-31 07:29:56.872970468 +0000 UTC m=+0.323907632 container attach d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:29:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:29:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:57.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:29:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:57.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:57 compute-0 nervous_mayer[252471]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:29:57 compute-0 nervous_mayer[252471]: --> relative data size: 1.0
Jan 31 07:29:57 compute-0 nervous_mayer[252471]: --> All data devices are unavailable
Jan 31 07:29:57 compute-0 systemd[1]: libpod-d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a.scope: Deactivated successfully.
Jan 31 07:29:57 compute-0 podman[252455]: 2026-01-31 07:29:57.630441694 +0000 UTC m=+1.081378368 container died d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:29:58 compute-0 ceph-mon[74496]: pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 31 07:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7da4ee078838c55de12c955f7214b59b29ad95194a44d2124fdac8f6f723acfd-merged.mount: Deactivated successfully.
Jan 31 07:29:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:29:58 compute-0 podman[252455]: 2026-01-31 07:29:58.345761003 +0000 UTC m=+1.796697637 container remove d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mayer, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:29:58 compute-0 systemd[1]: libpod-conmon-d6950f46adc0a1091c9409a0a51aea622cff753b8cc68d6594aaee5ae77e971a.scope: Deactivated successfully.
Jan 31 07:29:58 compute-0 sudo[252350]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:58 compute-0 sudo[252501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:58 compute-0 sudo[252501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:58 compute-0 sudo[252501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:58 compute-0 sudo[252526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:29:58 compute-0 sudo[252526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:58 compute-0 sudo[252526]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:58 compute-0 sudo[252551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:29:58 compute-0 sudo[252551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:58 compute-0 sudo[252551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:29:58 compute-0 sudo[252576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:29:58 compute-0 sudo[252576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:29:59 compute-0 podman[252642]: 2026-01-31 07:29:59.023663865 +0000 UTC m=+0.031254310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:29:59 compute-0 podman[252642]: 2026-01-31 07:29:59.212546857 +0000 UTC m=+0.220137282 container create f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:29:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:59.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:29:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:29:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:59.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:29:59 compute-0 ceph-mon[74496]: pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:29:59 compute-0 systemd[1]: Started libpod-conmon-f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764.scope.
Jan 31 07:29:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:29:59 compute-0 podman[252642]: 2026-01-31 07:29:59.63463199 +0000 UTC m=+0.642222485 container init f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:29:59 compute-0 podman[252642]: 2026-01-31 07:29:59.644166884 +0000 UTC m=+0.651757339 container start f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:29:59 compute-0 goofy_perlman[252660]: 167 167
Jan 31 07:29:59 compute-0 systemd[1]: libpod-f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764.scope: Deactivated successfully.
Jan 31 07:29:59 compute-0 conmon[252660]: conmon f92eb6f586ca95ebdada <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764.scope/container/memory.events
Jan 31 07:29:59 compute-0 podman[252642]: 2026-01-31 07:29:59.75016455 +0000 UTC m=+0.757755055 container attach f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:29:59 compute-0 podman[252642]: 2026-01-31 07:29:59.751645536 +0000 UTC m=+0.759235991 container died f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:30:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7930152276dd7ce5b7386c668d3c4f67b5fea16d90209df938425d002237963f-merged.mount: Deactivated successfully.
Jan 31 07:30:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:00 compute-0 podman[252642]: 2026-01-31 07:30:00.35198818 +0000 UTC m=+1.359578585 container remove f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:30:00 compute-0 systemd[1]: libpod-conmon-f92eb6f586ca95ebdada9133da86b2f517fd8f973e68630b762f3af2b6b2a764.scope: Deactivated successfully.
Jan 31 07:30:00 compute-0 podman[252678]: 2026-01-31 07:30:00.429132776 +0000 UTC m=+0.373019938 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 07:30:00 compute-0 podman[252704]: 2026-01-31 07:30:00.560430393 +0000 UTC m=+0.114983637 container create 158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:30:00 compute-0 podman[252704]: 2026-01-31 07:30:00.481964934 +0000 UTC m=+0.036518198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:30:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:30:00 compute-0 systemd[1]: Started libpod-conmon-158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b.scope.
Jan 31 07:30:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c87d3a1476282e9308ab7fb053c151b6d0ede8d1404d6da67e52b9a47acb28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c87d3a1476282e9308ab7fb053c151b6d0ede8d1404d6da67e52b9a47acb28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c87d3a1476282e9308ab7fb053c151b6d0ede8d1404d6da67e52b9a47acb28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c87d3a1476282e9308ab7fb053c151b6d0ede8d1404d6da67e52b9a47acb28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:30:00 compute-0 podman[252704]: 2026-01-31 07:30:00.722530517 +0000 UTC m=+0.277083851 container init 158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:30:00 compute-0 podman[252704]: 2026-01-31 07:30:00.73567908 +0000 UTC m=+0.290232324 container start 158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:30:00 compute-0 podman[252704]: 2026-01-31 07:30:00.748783772 +0000 UTC m=+0.303337036 container attach 158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:30:01 compute-0 sshd-session[252720]: Invalid user sol from 45.148.10.240 port 40300
Jan 31 07:30:01 compute-0 sshd-session[252720]: Connection closed by invalid user sol 45.148.10.240 port 40300 [preauth]
Jan 31 07:30:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:01.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:30:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:01.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:30:01 compute-0 great_vaughan[252721]: {
Jan 31 07:30:01 compute-0 great_vaughan[252721]:     "0": [
Jan 31 07:30:01 compute-0 great_vaughan[252721]:         {
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "devices": [
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "/dev/loop3"
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             ],
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "lv_name": "ceph_lv0",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "lv_size": "7511998464",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "name": "ceph_lv0",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "tags": {
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.cluster_name": "ceph",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.crush_device_class": "",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.encrypted": "0",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.osd_id": "0",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.type": "block",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:                 "ceph.vdo": "0"
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             },
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "type": "block",
Jan 31 07:30:01 compute-0 great_vaughan[252721]:             "vg_name": "ceph_vg0"
Jan 31 07:30:01 compute-0 great_vaughan[252721]:         }
Jan 31 07:30:01 compute-0 great_vaughan[252721]:     ]
Jan 31 07:30:01 compute-0 great_vaughan[252721]: }
Jan 31 07:30:01 compute-0 systemd[1]: libpod-158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b.scope: Deactivated successfully.
Jan 31 07:30:01 compute-0 podman[252704]: 2026-01-31 07:30:01.559384714 +0000 UTC m=+1.113938018 container died 158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:30:01 compute-0 ceph-mon[74496]: pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-90c87d3a1476282e9308ab7fb053c151b6d0ede8d1404d6da67e52b9a47acb28-merged.mount: Deactivated successfully.
Jan 31 07:30:02 compute-0 podman[252704]: 2026-01-31 07:30:02.213146772 +0000 UTC m=+1.767700016 container remove 158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:30:02 compute-0 systemd[1]: libpod-conmon-158a1fb91acc277e0c0e085870dc8bc8e7aeacb678458f37f02bb92de32ffe4b.scope: Deactivated successfully.
Jan 31 07:30:02 compute-0 sudo[252576]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:02 compute-0 sudo[252744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:02 compute-0 sudo[252744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:02 compute-0 sudo[252744]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:02 compute-0 sudo[252769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:30:02 compute-0 sudo[252769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:02 compute-0 sudo[252769]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:02 compute-0 sudo[252794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:02 compute-0 sudo[252794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:02 compute-0 sudo[252794]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:02 compute-0 sudo[252819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:30:02 compute-0 sudo[252819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:02 compute-0 podman[252885]: 2026-01-31 07:30:02.840153001 +0000 UTC m=+0.074933662 container create 435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_newton, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:30:02 compute-0 podman[252885]: 2026-01-31 07:30:02.786760998 +0000 UTC m=+0.021541659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:30:03 compute-0 systemd[1]: Started libpod-conmon-435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b.scope.
Jan 31 07:30:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:30:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:03.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:03 compute-0 podman[252885]: 2026-01-31 07:30:03.334683795 +0000 UTC m=+0.569464496 container init 435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:30:03 compute-0 podman[252885]: 2026-01-31 07:30:03.344757983 +0000 UTC m=+0.579538674 container start 435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:30:03 compute-0 blissful_newton[252902]: 167 167
Jan 31 07:30:03 compute-0 systemd[1]: libpod-435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b.scope: Deactivated successfully.
Jan 31 07:30:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:03.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:03 compute-0 podman[252885]: 2026-01-31 07:30:03.519676802 +0000 UTC m=+0.754457513 container attach 435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:30:03 compute-0 podman[252885]: 2026-01-31 07:30:03.520366568 +0000 UTC m=+0.755147269 container died 435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_newton, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:30:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:04 compute-0 ceph-mon[74496]: pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b369c1ac4536d98d5849dd16dca327a06e72967cc93aaa42fe176be5b5164894-merged.mount: Deactivated successfully.
Jan 31 07:30:04 compute-0 podman[252885]: 2026-01-31 07:30:04.434962086 +0000 UTC m=+1.669742757 container remove 435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 31 07:30:04 compute-0 systemd[1]: libpod-conmon-435406c1e04a67a8fa5d7a2d0a3777827ff633fcc775cfb6587bd0fe9f70c64b.scope: Deactivated successfully.
Jan 31 07:30:04 compute-0 podman[252926]: 2026-01-31 07:30:04.591998046 +0000 UTC m=+0.048431702 container create 4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 07:30:04 compute-0 systemd[1]: Started libpod-conmon-4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04.scope.
Jan 31 07:30:04 compute-0 podman[252926]: 2026-01-31 07:30:04.568674512 +0000 UTC m=+0.025108158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:30:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d165859197dfc89995d6a67bec37480b88a77ba97f47fefcb4a66da451639b21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d165859197dfc89995d6a67bec37480b88a77ba97f47fefcb4a66da451639b21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d165859197dfc89995d6a67bec37480b88a77ba97f47fefcb4a66da451639b21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:30:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d165859197dfc89995d6a67bec37480b88a77ba97f47fefcb4a66da451639b21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:30:04 compute-0 podman[252926]: 2026-01-31 07:30:04.687542784 +0000 UTC m=+0.143976500 container init 4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:30:04 compute-0 podman[252926]: 2026-01-31 07:30:04.693411818 +0000 UTC m=+0.149845444 container start 4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:30:04 compute-0 podman[252926]: 2026-01-31 07:30:04.703045885 +0000 UTC m=+0.159479601 container attach 4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:30:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:30:04.730 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:30:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:30:04.732 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:30:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:30:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:05.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:30:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:30:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:05.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:30:05 compute-0 bold_lehmann[252942]: {
Jan 31 07:30:05 compute-0 bold_lehmann[252942]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:30:05 compute-0 bold_lehmann[252942]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:30:05 compute-0 bold_lehmann[252942]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:30:05 compute-0 bold_lehmann[252942]:         "osd_id": 0,
Jan 31 07:30:05 compute-0 bold_lehmann[252942]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:30:05 compute-0 bold_lehmann[252942]:         "type": "bluestore"
Jan 31 07:30:05 compute-0 bold_lehmann[252942]:     }
Jan 31 07:30:05 compute-0 bold_lehmann[252942]: }
Jan 31 07:30:05 compute-0 ceph-mon[74496]: pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:05 compute-0 systemd[1]: libpod-4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04.scope: Deactivated successfully.
Jan 31 07:30:05 compute-0 podman[252926]: 2026-01-31 07:30:05.46046312 +0000 UTC m=+0.916896746 container died 4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:30:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d165859197dfc89995d6a67bec37480b88a77ba97f47fefcb4a66da451639b21-merged.mount: Deactivated successfully.
Jan 31 07:30:05 compute-0 podman[252926]: 2026-01-31 07:30:05.520760851 +0000 UTC m=+0.977194477 container remove 4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:30:05 compute-0 systemd[1]: libpod-conmon-4f74bc65bd15a3f14acfb983f9b1ab2f2bfe4332cebf56d0b56bddabff327d04.scope: Deactivated successfully.
Jan 31 07:30:05 compute-0 sudo[252819]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:30:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:30:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:30:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:30:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 001e8496-cb2c-47f4-981b-2b7e1d5c8772 does not exist
Jan 31 07:30:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 35faae5f-00f7-42fa-864f-c29798091b8d does not exist
Jan 31 07:30:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 01874245-af74-41bd-8bdf-ca78c2f41ee1 does not exist
Jan 31 07:30:05 compute-0 sudo[252978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:05 compute-0 sudo[252978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:05 compute-0 sudo[252978]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:05 compute-0 sudo[253003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:30:05 compute-0 sudo[253003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:05 compute-0 sudo[253003]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:05 compute-0 sudo[253026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:05 compute-0 sudo[253026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:05 compute-0 sudo[253026]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:05 compute-0 sudo[253053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:05 compute-0 sudo[253053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:05 compute-0 sudo[253053]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:30:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:30:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:07.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:07.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:07 compute-0 ceph-mon[74496]: pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:09.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:09.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:09 compute-0 ceph-mon[74496]: pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:30:11.140 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:30:11.140 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:30:11.140 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:11.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:11.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:11 compute-0 ceph-mon[74496]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:30:12.734 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:30:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:13.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:13.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:13 compute-0 ceph-mon[74496]: pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:15.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:15 compute-0 ceph-mon[74496]: pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:17.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:17.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:17 compute-0 podman[253084]: 2026-01-31 07:30:17.959309858 +0000 UTC m=+0.129661892 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller)
Jan 31 07:30:18 compute-0 ceph-mon[74496]: pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:19 compute-0 ceph-mon[74496]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:19.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:19.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:30:19
Jan 31 07:30:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:30:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:30:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', 'volumes', 'backups']
Jan 31 07:30:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:30:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:21.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:21.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:21 compute-0 ceph-mon[74496]: pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:23.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:24 compute-0 ceph-mon[74496]: pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:30:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:30:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:25 compute-0 ceph-mon[74496]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:25 compute-0 sudo[253116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:25 compute-0 sudo[253116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:25 compute-0 sudo[253116]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:25 compute-0 sudo[253141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:25 compute-0 sudo[253141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:25 compute-0 sudo[253141]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:27.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:27.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:27 compute-0 ceph-mon[74496]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.019 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "61b5403c-6129-422a-9f82-16daa15b7dd6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.020 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "61b5403c-6129-422a-9f82-16daa15b7dd6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.114 247708 DEBUG nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.310 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.311 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.323 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.323 247708 INFO nova.compute.claims [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:30:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:29.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:29.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.496 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:29 compute-0 ceph-mon[74496]: pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:30:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547252698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.971 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:29 compute-0 nova_compute[247704]: 2026-01-31 07:30:29.976 247708 DEBUG nova.compute.provider_tree [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.013 247708 DEBUG nova.scheduler.client.report [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.076 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.077 247708 DEBUG nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.224 247708 DEBUG nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.224 247708 DEBUG nova.network.neutron [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.268 247708 INFO nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.311 247708 DEBUG nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:30:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.452 247708 DEBUG nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.453 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.454 247708 INFO nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Creating image(s)
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.490 247708 DEBUG nova.storage.rbd_utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 61b5403c-6129-422a-9f82-16daa15b7dd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.530 247708 DEBUG nova.storage.rbd_utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 61b5403c-6129-422a-9f82-16daa15b7dd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.562 247708 DEBUG nova.storage.rbd_utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 61b5403c-6129-422a-9f82-16daa15b7dd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.567 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:30 compute-0 nova_compute[247704]: 2026-01-31 07:30:30.568 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:30 compute-0 podman[253244]: 2026-01-31 07:30:30.873033922 +0000 UTC m=+0.044726187 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:30:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/547252698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:31.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:31.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:31 compute-0 nova_compute[247704]: 2026-01-31 07:30:31.900 247708 DEBUG nova.network.neutron [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:30:31 compute-0 nova_compute[247704]: 2026-01-31 07:30:31.900 247708 DEBUG nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:30:32 compute-0 ceph-mon[74496]: pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:32 compute-0 nova_compute[247704]: 2026-01-31 07:30:32.383 247708 DEBUG nova.virt.libvirt.imagebackend [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7c23949f-bba8-4466-bb79-caf568852d38/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/7c23949f-bba8-4466-bb79-caf568852d38/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 07:30:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:33.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:33.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:34 compute-0 ceph-mon[74496]: pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 379 KiB/s rd, 0 op/s
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:30:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:30:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:35.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:35 compute-0 nova_compute[247704]: 2026-01-31 07:30:35.615 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:35 compute-0 nova_compute[247704]: 2026-01-31 07:30:35.981 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part --force-share --output=json" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:35 compute-0 nova_compute[247704]: 2026-01-31 07:30:35.982 247708 DEBUG nova.virt.images [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] 7c23949f-bba8-4466-bb79-caf568852d38 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 07:30:35 compute-0 nova_compute[247704]: 2026-01-31 07:30:35.984 247708 DEBUG nova.privsep.utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 07:30:35 compute-0 nova_compute[247704]: 2026-01-31 07:30:35.985 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:36 compute-0 ceph-mon[74496]: pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 379 KiB/s rd, 0 op/s
Jan 31 07:30:36 compute-0 nova_compute[247704]: 2026-01-31 07:30:36.248 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.part /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:36 compute-0 nova_compute[247704]: 2026-01-31 07:30:36.252 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:36 compute-0 nova_compute[247704]: 2026-01-31 07:30:36.322 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6.converted --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:36 compute-0 nova_compute[247704]: 2026-01-31 07:30:36.324 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1 op/s
Jan 31 07:30:36 compute-0 nova_compute[247704]: 2026-01-31 07:30:36.360 247708 DEBUG nova.storage.rbd_utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 61b5403c-6129-422a-9f82-16daa15b7dd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:30:36 compute-0 nova_compute[247704]: 2026-01-31 07:30:36.365 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 61b5403c-6129-422a-9f82-16daa15b7dd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 31 07:30:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 07:30:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:37.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 31 07:30:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:37.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:37 compute-0 ceph-mon[74496]: pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1 op/s
Jan 31 07:30:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Jan 31 07:30:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 31 07:30:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 07:30:38 compute-0 ceph-mon[74496]: osdmap e136: 3 total, 3 up, 3 in
Jan 31 07:30:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 31 07:30:38 compute-0 nova_compute[247704]: 2026-01-31 07:30:38.961 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 61b5403c-6129-422a-9f82-16daa15b7dd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.047 247708 DEBUG nova.storage.rbd_utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] resizing rbd image 61b5403c-6129-422a-9f82-16daa15b7dd6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.194 247708 DEBUG nova.objects.instance [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lazy-loading 'migration_context' on Instance uuid 61b5403c-6129-422a-9f82-16daa15b7dd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.219 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.219 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Ensure instance console log exists: /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.220 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.220 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.220 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.222 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.227 247708 WARNING nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.231 247708 DEBUG nova.virt.libvirt.host [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.232 247708 DEBUG nova.virt.libvirt.host [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.235 247708 DEBUG nova.virt.libvirt.host [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.235 247708 DEBUG nova.virt.libvirt.host [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.236 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.236 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.237 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.237 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.237 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.237 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.238 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.238 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.238 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.238 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.239 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.239 247708 DEBUG nova.virt.hardware [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.242 247708 DEBUG nova.privsep.utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.243 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:39.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:39.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:39 compute-0 ceph-mon[74496]: pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Jan 31 07:30:39 compute-0 ceph-mon[74496]: osdmap e137: 3 total, 3 up, 3 in
Jan 31 07:30:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:30:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/461678655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.721 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.759 247708 DEBUG nova.storage.rbd_utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 61b5403c-6129-422a-9f82-16daa15b7dd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:30:39 compute-0 nova_compute[247704]: 2026-01-31 07:30:39.765 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:30:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111554275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:30:40 compute-0 nova_compute[247704]: 2026-01-31 07:30:40.232 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:40 compute-0 nova_compute[247704]: 2026-01-31 07:30:40.235 247708 DEBUG nova.objects.instance [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lazy-loading 'pci_devices' on Instance uuid 61b5403c-6129-422a-9f82-16daa15b7dd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:30:40 compute-0 nova_compute[247704]: 2026-01-31 07:30:40.251 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <uuid>61b5403c-6129-422a-9f82-16daa15b7dd6</uuid>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <name>instance-00000001</name>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-1286235749</nova:name>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:30:39</nova:creationTime>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <nova:user uuid="28efb7ff08054b4b93c9cc461b8e8862">tempest-DeleteServersAdminTestJSON-1558339450-project-member</nova:user>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <nova:project uuid="7931d3e634664686a06dd20aa52399ae">tempest-DeleteServersAdminTestJSON-1558339450</nova:project>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <system>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <entry name="serial">61b5403c-6129-422a-9f82-16daa15b7dd6</entry>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <entry name="uuid">61b5403c-6129-422a-9f82-16daa15b7dd6</entry>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     </system>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <os>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   </os>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <features>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   </features>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/61b5403c-6129-422a-9f82-16daa15b7dd6_disk">
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       </source>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/61b5403c-6129-422a-9f82-16daa15b7dd6_disk.config">
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       </source>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:30:40 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6/console.log" append="off"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <video>
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     </video>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:30:40 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:30:40 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:30:40 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:30:40 compute-0 nova_compute[247704]: </domain>
Jan 31 07:30:40 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:30:40 compute-0 nova_compute[247704]: 2026-01-31 07:30:40.309 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:30:40 compute-0 nova_compute[247704]: 2026-01-31 07:30:40.309 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:30:40 compute-0 nova_compute[247704]: 2026-01-31 07:30:40.310 247708 INFO nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Using config drive
Jan 31 07:30:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 53 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 518 KiB/s wr, 27 op/s
Jan 31 07:30:40 compute-0 nova_compute[247704]: 2026-01-31 07:30:40.336 247708 DEBUG nova.storage.rbd_utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 61b5403c-6129-422a-9f82-16daa15b7dd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:30:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/461678655' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:30:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2111554275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.301 247708 INFO nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Creating config drive at /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6/disk.config
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.305 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpw1pa2sg0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:41.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.436 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpw1pa2sg0" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.470 247708 DEBUG nova.storage.rbd_utils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] rbd image 61b5403c-6129-422a-9f82-16daa15b7dd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.475 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6/disk.config 61b5403c-6129-422a-9f82-16daa15b7dd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.596 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.597 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.597 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:30:41 compute-0 ceph-mon[74496]: pgmap v890: 305 pgs: 305 active+clean; 53 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 518 KiB/s wr, 27 op/s
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.620 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.676 247708 DEBUG oslo_concurrency.processutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6/disk.config 61b5403c-6129-422a-9f82-16daa15b7dd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:41 compute-0 nova_compute[247704]: 2026-01-31 07:30:41.677 247708 INFO nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Deleting local config drive /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6/disk.config because it was imported into RBD.
Jan 31 07:30:41 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 31 07:30:41 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 31 07:30:41 compute-0 systemd-machined[214448]: New machine qemu-1-instance-00000001.
Jan 31 07:30:41 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 31 07:30:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 79 MiB data, 210 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 31 op/s
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.424 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844642.4239252, 61b5403c-6129-422a-9f82-16daa15b7dd6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.426 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] VM Resumed (Lifecycle Event)
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.444 247708 DEBUG nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.447 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.453 247708 INFO nova.virt.libvirt.driver [-] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Instance spawned successfully.
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.454 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.536 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.544 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.547 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.548 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.548 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.549 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.549 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.550 247708 DEBUG nova.virt.libvirt.driver [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.620 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.620 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844642.4243848, 61b5403c-6129-422a-9f82-16daa15b7dd6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.620 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] VM Started (Lifecycle Event)
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.716 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.719 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.750 247708 INFO nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Took 12.30 seconds to spawn the instance on the hypervisor.
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.751 247708 DEBUG nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:30:42 compute-0 nova_compute[247704]: 2026-01-31 07:30:42.759 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.047 247708 INFO nova.compute.manager [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Took 13.79 seconds to build instance.
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.092 247708 DEBUG oslo_concurrency.lockutils [None req-13c1f93d-b06e-49dd-82d8-58aa832dec9e 28efb7ff08054b4b93c9cc461b8e8862 7931d3e634664686a06dd20aa52399ae - - default default] Lock "61b5403c-6129-422a-9f82-16daa15b7dd6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:43.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.634 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.635 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.635 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.893 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-61b5403c-6129-422a-9f82-16daa15b7dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.893 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-61b5403c-6129-422a-9f82-16daa15b7dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.894 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:30:43 compute-0 nova_compute[247704]: 2026-01-31 07:30:43.894 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 61b5403c-6129-422a-9f82-16daa15b7dd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:30:44 compute-0 ceph-mon[74496]: pgmap v891: 305 pgs: 305 active+clean; 79 MiB data, 210 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 31 op/s
Jan 31 07:30:44 compute-0 nova_compute[247704]: 2026-01-31 07:30:44.211 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:30:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 578 KiB/s rd, 2.7 MiB/s wr, 76 op/s
Jan 31 07:30:44 compute-0 nova_compute[247704]: 2026-01-31 07:30:44.958 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Acquiring lock "61b5403c-6129-422a-9f82-16daa15b7dd6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:44 compute-0 nova_compute[247704]: 2026-01-31 07:30:44.959 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Lock "61b5403c-6129-422a-9f82-16daa15b7dd6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:44 compute-0 nova_compute[247704]: 2026-01-31 07:30:44.959 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Acquiring lock "61b5403c-6129-422a-9f82-16daa15b7dd6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:44 compute-0 nova_compute[247704]: 2026-01-31 07:30:44.960 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Lock "61b5403c-6129-422a-9f82-16daa15b7dd6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:44 compute-0 nova_compute[247704]: 2026-01-31 07:30:44.961 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Lock "61b5403c-6129-422a-9f82-16daa15b7dd6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:44 compute-0 nova_compute[247704]: 2026-01-31 07:30:44.962 247708 INFO nova.compute.manager [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Terminating instance
Jan 31 07:30:44 compute-0 nova_compute[247704]: 2026-01-31 07:30:44.964 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Acquiring lock "refresh_cache-61b5403c-6129-422a-9f82-16daa15b7dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.079 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.103 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-61b5403c-6129-422a-9f82-16daa15b7dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.104 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.104 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Acquired lock "refresh_cache-61b5403c-6129-422a-9f82-16daa15b7dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.104 247708 DEBUG nova.network.neutron [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.106 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:45.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/909324454' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:30:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/909324454' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:30:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:45.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:45 compute-0 nova_compute[247704]: 2026-01-31 07:30:45.743 247708 DEBUG nova.network.neutron [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:30:46 compute-0 sudo[253593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:46 compute-0 sudo[253593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:46 compute-0 sudo[253593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:46 compute-0 sudo[253618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:30:46 compute-0 sudo[253618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:30:46 compute-0 sudo[253618]: pam_unix(sudo:session): session closed for user root
Jan 31 07:30:46 compute-0 nova_compute[247704]: 2026-01-31 07:30:46.099 247708 DEBUG nova.network.neutron [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:30:46 compute-0 nova_compute[247704]: 2026-01-31 07:30:46.125 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Releasing lock "refresh_cache-61b5403c-6129-422a-9f82-16daa15b7dd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:30:46 compute-0 nova_compute[247704]: 2026-01-31 07:30:46.127 247708 DEBUG nova.compute.manager [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:30:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Jan 31 07:30:46 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 31 07:30:46 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 4.171s CPU time.
Jan 31 07:30:46 compute-0 systemd-machined[214448]: Machine qemu-1-instance-00000001 terminated.
Jan 31 07:30:46 compute-0 ceph-mon[74496]: pgmap v892: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 578 KiB/s rd, 2.7 MiB/s wr, 76 op/s
Jan 31 07:30:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1515124371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3350655955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:46 compute-0 nova_compute[247704]: 2026-01-31 07:30:46.543 247708 INFO nova.virt.libvirt.driver [-] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Instance destroyed successfully.
Jan 31 07:30:46 compute-0 nova_compute[247704]: 2026-01-31 07:30:46.543 247708 DEBUG nova.objects.instance [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Lazy-loading 'resources' on Instance uuid 61b5403c-6129-422a-9f82-16daa15b7dd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:30:46 compute-0 nova_compute[247704]: 2026-01-31 07:30:46.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:46 compute-0 nova_compute[247704]: 2026-01-31 07:30:46.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.277 247708 INFO nova.virt.libvirt.driver [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Deleting instance files /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6_del
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.278 247708 INFO nova.virt.libvirt.driver [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Deletion of /var/lib/nova/instances/61b5403c-6129-422a-9f82-16daa15b7dd6_del complete
Jan 31 07:30:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:47.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.412 247708 DEBUG nova.virt.libvirt.host [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.412 247708 INFO nova.virt.libvirt.host [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] UEFI support detected
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.415 247708 INFO nova.compute.manager [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Took 1.29 seconds to destroy the instance on the hypervisor.
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.416 247708 DEBUG oslo.service.loopingcall [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.416 247708 DEBUG nova.compute.manager [-] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.417 247708 DEBUG nova.network.neutron [-] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:30:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:47.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.614 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.615 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.616 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.616 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.617 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.850 247708 DEBUG nova.network.neutron [-] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.871 247708 DEBUG nova.network.neutron [-] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:30:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 31 07:30:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 07:30:47 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 31 07:30:47 compute-0 nova_compute[247704]: 2026-01-31 07:30:47.982 247708 INFO nova.compute.manager [-] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Took 0.57 seconds to deallocate network for instance.
Jan 31 07:30:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:30:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880901487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:48 compute-0 ceph-mon[74496]: pgmap v893: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 132 op/s
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.064 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.210 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.211 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:48 compute-0 podman[253689]: 2026-01-31 07:30:48.235890064 +0000 UTC m=+0.127747767 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.276 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.277 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5242MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.277 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 31 07:30:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 79 MiB data, 214 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 07:30:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 31 07:30:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.605 247708 DEBUG nova.scheduler.client.report [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.757 247708 DEBUG nova.scheduler.client.report [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.758 247708 DEBUG nova.compute.provider_tree [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.887 247708 DEBUG nova.scheduler.client.report [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.920 247708 DEBUG nova.scheduler.client.report [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:30:48 compute-0 nova_compute[247704]: 2026-01-31 07:30:48.983 247708 DEBUG oslo_concurrency.processutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:49 compute-0 ceph-mon[74496]: osdmap e138: 3 total, 3 up, 3 in
Jan 31 07:30:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1880901487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:49 compute-0 ceph-mon[74496]: osdmap e139: 3 total, 3 up, 3 in
Jan 31 07:30:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:49.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:30:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270920724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.396 247708 DEBUG oslo_concurrency.processutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.401 247708 DEBUG nova.compute.provider_tree [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.462 247708 DEBUG nova.scheduler.client.report [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Updated inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.462 247708 DEBUG nova.compute.provider_tree [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Updating resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.463 247708 DEBUG nova.compute.provider_tree [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:30:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:49.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.486 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.488 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.559 247708 INFO nova.scheduler.client.report [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Deleted allocations for instance 61b5403c-6129-422a-9f82-16daa15b7dd6
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.593 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.593 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.646 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:49 compute-0 nova_compute[247704]: 2026-01-31 07:30:49.689 247708 DEBUG oslo_concurrency.lockutils [None req-879334a6-1eb6-4080-9016-c4007b259027 48b2a696196b4d10a40ef3d5d14f5740 9ad1e816831a460e9d66902e5981a2af - - default default] Lock "61b5403c-6129-422a-9f82-16daa15b7dd6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:30:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:30:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3494218237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:50 compute-0 nova_compute[247704]: 2026-01-31 07:30:50.058 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:50 compute-0 nova_compute[247704]: 2026-01-31 07:30:50.063 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:30:50 compute-0 nova_compute[247704]: 2026-01-31 07:30:50.087 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:30:50 compute-0 nova_compute[247704]: 2026-01-31 07:30:50.119 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:30:50 compute-0 nova_compute[247704]: 2026-01-31 07:30:50.120 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:50 compute-0 ceph-mon[74496]: pgmap v895: 305 pgs: 305 active+clean; 79 MiB data, 214 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 07:30:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2270920724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4030013093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3494218237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 67 MiB data, 210 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 123 KiB/s wr, 158 op/s
Jan 31 07:30:51 compute-0 nova_compute[247704]: 2026-01-31 07:30:51.120 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:30:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:51.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:51.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:52 compute-0 ceph-mon[74496]: pgmap v897: 305 pgs: 305 active+clean; 67 MiB data, 210 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 123 KiB/s wr, 158 op/s
Jan 31 07:30:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/263640405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 KiB/s wr, 123 op/s
Jan 31 07:30:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:53.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:53.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:54 compute-0 ceph-mon[74496]: pgmap v898: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 KiB/s wr, 123 op/s
Jan 31 07:30:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2419106158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 333 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Jan 31 07:30:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:55.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:56 compute-0 ceph-mon[74496]: pgmap v899: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 333 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Jan 31 07:30:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 2.4 KiB/s wr, 27 op/s
Jan 31 07:30:57 compute-0 ceph-mon[74496]: pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 2.4 KiB/s wr, 27 op/s
Jan 31 07:30:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:30:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:57.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:30:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:30:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 31 07:30:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.0 KiB/s wr, 29 op/s
Jan 31 07:30:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 31 07:30:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 31 07:30:58 compute-0 nova_compute[247704]: 2026-01-31 07:30:58.972 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:58 compute-0 nova_compute[247704]: 2026-01-31 07:30:58.973 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.009 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.146 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.148 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.156 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.157 247708 INFO nova.compute.claims [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.283 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:30:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:59.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:30:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:30:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:59.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:30:59 compute-0 ceph-mon[74496]: pgmap v901: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.0 KiB/s wr, 29 op/s
Jan 31 07:30:59 compute-0 ceph-mon[74496]: osdmap e140: 3 total, 3 up, 3 in
Jan 31 07:30:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:30:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672843128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.802 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.806 247708 DEBUG nova.compute.provider_tree [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.825 247708 DEBUG nova.scheduler.client.report [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.851 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.852 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.906 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.907 247708 DEBUG nova.network.neutron [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.936 247708 INFO nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:30:59 compute-0 nova_compute[247704]: 2026-01-31 07:30:59.976 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.089 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.091 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.091 247708 INFO nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Creating image(s)
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.126 247708 DEBUG nova.storage.rbd_utils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] rbd image f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.164 247708 DEBUG nova.storage.rbd_utils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] rbd image f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.203 247708 DEBUG nova.storage.rbd_utils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] rbd image f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.208 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.276 247708 WARNING oslo_policy.policy [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.277 247708 WARNING oslo_policy.policy [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.280 247708 DEBUG nova.policy [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd4520c09fdbc4ac79fbc4c76f049c0fb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7269d3f50e464bf7953583453c06f6c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.283 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.283 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.284 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.284 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.318 247708 DEBUG nova.storage.rbd_utils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] rbd image f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.322 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 68 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 465 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 31 07:31:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2672843128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1000464503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/86164115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.619 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.727 247708 DEBUG nova.storage.rbd_utils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] resizing rbd image f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.889 247708 DEBUG nova.objects.instance [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lazy-loading 'migration_context' on Instance uuid f610bafa-64ee-4d38-8be4-7c17cd2b2a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.926 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.927 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Ensure instance console log exists: /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.928 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.929 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:00 compute-0 nova_compute[247704]: 2026-01-31 07:31:00.929 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:01.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:01.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:01 compute-0 nova_compute[247704]: 2026-01-31 07:31:01.542 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844646.5407455, 61b5403c-6129-422a-9f82-16daa15b7dd6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:31:01 compute-0 nova_compute[247704]: 2026-01-31 07:31:01.543 247708 INFO nova.compute.manager [-] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] VM Stopped (Lifecycle Event)
Jan 31 07:31:01 compute-0 nova_compute[247704]: 2026-01-31 07:31:01.570 247708 DEBUG nova.compute.manager [None req-2315bb3d-e065-4781-906f-e7191f685b5f - - - - - -] [instance: 61b5403c-6129-422a-9f82-16daa15b7dd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:02 compute-0 podman[253954]: 2026-01-31 07:31:02.001934725 +0000 UTC m=+0.173368213 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:31:02 compute-0 ceph-mon[74496]: pgmap v903: 305 pgs: 305 active+clean; 68 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 465 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 31 07:31:02 compute-0 nova_compute[247704]: 2026-01-31 07:31:02.211 247708 DEBUG nova.network.neutron [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Successfully created port: ec2025b5-972e-43ae-8cb4-88e60da18197 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:31:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 94 MiB data, 197 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 43 op/s
Jan 31 07:31:03 compute-0 nova_compute[247704]: 2026-01-31 07:31:03.342 247708 DEBUG nova.network.neutron [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Successfully updated port: ec2025b5-972e-43ae-8cb4-88e60da18197 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:31:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:03.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:03 compute-0 nova_compute[247704]: 2026-01-31 07:31:03.398 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:31:03 compute-0 nova_compute[247704]: 2026-01-31 07:31:03.399 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquired lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:31:03 compute-0 nova_compute[247704]: 2026-01-31 07:31:03.399 247708 DEBUG nova.network.neutron [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:31:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:03.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:03 compute-0 nova_compute[247704]: 2026-01-31 07:31:03.657 247708 DEBUG nova.network.neutron [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:31:03 compute-0 nova_compute[247704]: 2026-01-31 07:31:03.980 247708 DEBUG nova.compute.manager [req-4014b05e-7d09-4887-8bde-6306691c08da req-5cc33bd2-f3e7-421a-9593-a2e63d30c7b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received event network-changed-ec2025b5-972e-43ae-8cb4-88e60da18197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:03 compute-0 nova_compute[247704]: 2026-01-31 07:31:03.980 247708 DEBUG nova.compute.manager [req-4014b05e-7d09-4887-8bde-6306691c08da req-5cc33bd2-f3e7-421a-9593-a2e63d30c7b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Refreshing instance network info cache due to event network-changed-ec2025b5-972e-43ae-8cb4-88e60da18197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:31:03 compute-0 nova_compute[247704]: 2026-01-31 07:31:03.980 247708 DEBUG oslo_concurrency.lockutils [req-4014b05e-7d09-4887-8bde-6306691c08da req-5cc33bd2-f3e7-421a-9593-a2e63d30c7b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:31:04 compute-0 ceph-mon[74496]: pgmap v904: 305 pgs: 305 active+clean; 94 MiB data, 197 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 43 op/s
Jan 31 07:31:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 134 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 108 op/s
Jan 31 07:31:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:05.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:05.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:06 compute-0 sudo[253976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:06 compute-0 sudo[253976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-0 sudo[253976]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-0 sudo[254001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:31:06 compute-0 sudo[254001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-0 sudo[254001]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-0 sudo[254026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:06 compute-0 sudo[254026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-0 sudo[254026]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-0 sudo[254038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:06 compute-0 sudo[254038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-0 sudo[254038]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-0 sudo[254074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:31:06 compute-0 sudo[254074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-0 ceph-mon[74496]: pgmap v905: 305 pgs: 305 active+clean; 134 MiB data, 228 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 108 op/s
Jan 31 07:31:06 compute-0 sudo[254099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:06 compute-0 sudo[254099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:06 compute-0 sudo[254099]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 123 op/s
Jan 31 07:31:06 compute-0 sudo[254074]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:07 compute-0 ceph-mon[74496]: pgmap v906: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 123 op/s
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.276 247708 DEBUG nova.network.neutron [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updating instance_info_cache with network_info: [{"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.298 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Releasing lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.299 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Instance network_info: |[{"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.300 247708 DEBUG oslo_concurrency.lockutils [req-4014b05e-7d09-4887-8bde-6306691c08da req-5cc33bd2-f3e7-421a-9593-a2e63d30c7b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.300 247708 DEBUG nova.network.neutron [req-4014b05e-7d09-4887-8bde-6306691c08da req-5cc33bd2-f3e7-421a-9593-a2e63d30c7b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Refreshing network info cache for port ec2025b5-972e-43ae-8cb4-88e60da18197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:31:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.306 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Start _get_guest_xml network_info=[{"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.314 247708 WARNING nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:31:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.326 247708 DEBUG nova.virt.libvirt.host [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.327 247708 DEBUG nova.virt.libvirt.host [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.332 247708 DEBUG nova.virt.libvirt.host [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.333 247708 DEBUG nova.virt.libvirt.host [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.335 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.336 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.337 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.338 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.338 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.339 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.339 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.340 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.340 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.341 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.341 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.341 247708 DEBUG nova.virt.hardware [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.347 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:07.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:07.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:31:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388821805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.829 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.856 247708 DEBUG nova.storage.rbd_utils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] rbd image f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:07 compute-0 nova_compute[247704]: 2026-01-31 07:31:07.862 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:31:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:31:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:31:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev abccf4c1-56b4-4a7c-9cdb-42e96472dad7 does not exist
Jan 31 07:31:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0c7a5f14-d3de-40ae-ad56-ad1fd714e011 does not exist
Jan 31 07:31:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1a34fc12-93dd-498f-a183-c0e61a5ce78b does not exist
Jan 31 07:31:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:31:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:31:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:31:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:31:08 compute-0 sudo[254218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:08 compute-0 sudo[254218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:08 compute-0 sudo[254218]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:08 compute-0 sudo[254243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:31:08 compute-0 sudo[254243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:08 compute-0 sudo[254243]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:08 compute-0 sudo[254268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:08 compute-0 sudo[254268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:08 compute-0 sudo[254268]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:08 compute-0 sudo[254293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:31:08 compute-0 sudo[254293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:31:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23788351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1388821805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:31:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/23788351' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.317 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.319 247708 DEBUG nova.virt.libvirt.vif [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-138857472',display_name='tempest-VolumesAssistedSnapshotsTest-server-138857472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-138857472',id=3,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBykBcv16QQ6ttRMCuOYN3UOOsywiIkXrJA3SNGD0zaYKz3WtwcWNi/Eezi1ka9ukpZxVLE9mKIYnDGHSMsGmDcX41IyFM0mEs7A/Oc1luJV/SEL9WeshGDKnacE4ZECw==',key_name='tempest-keypair-692678977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7269d3f50e464bf7953583453c06f6c7',ramdisk_id='',reservation_id='r-69kx30hz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-378059459',owner_user_name='tempest-VolumesAssistedSnapshotsTest-378059459-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:31:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4520c09fdbc4ac79fbc4c76f049c0fb',uuid=f610bafa-64ee-4d38-8be4-7c17cd2b2a99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.320 247708 DEBUG nova.network.os_vif_util [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Converting VIF {"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.321 247708 DEBUG nova.network.os_vif_util [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:55:d2,bridge_name='br-int',has_traffic_filtering=True,id=ec2025b5-972e-43ae-8cb4-88e60da18197,network=Network(bfdb30d1-e66a-41bd-9dfb-f576cae026b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2025b5-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.324 247708 DEBUG nova.objects.instance [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid f610bafa-64ee-4d38-8be4-7c17cd2b2a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:31:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 145 op/s
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.368 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <uuid>f610bafa-64ee-4d38-8be4-7c17cd2b2a99</uuid>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <name>instance-00000003</name>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <nova:name>tempest-VolumesAssistedSnapshotsTest-server-138857472</nova:name>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:31:07</nova:creationTime>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <nova:user uuid="d4520c09fdbc4ac79fbc4c76f049c0fb">tempest-VolumesAssistedSnapshotsTest-378059459-project-member</nova:user>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <nova:project uuid="7269d3f50e464bf7953583453c06f6c7">tempest-VolumesAssistedSnapshotsTest-378059459</nova:project>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <nova:port uuid="ec2025b5-972e-43ae-8cb4-88e60da18197">
Jan 31 07:31:08 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <system>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <entry name="serial">f610bafa-64ee-4d38-8be4-7c17cd2b2a99</entry>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <entry name="uuid">f610bafa-64ee-4d38-8be4-7c17cd2b2a99</entry>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </system>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <os>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   </os>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <features>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   </features>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk">
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       </source>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk.config">
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       </source>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:31:08 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:7d:55:d2"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <target dev="tapec2025b5-97"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99/console.log" append="off"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <video>
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </video>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:31:08 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:31:08 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:31:08 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:31:08 compute-0 nova_compute[247704]: </domain>
Jan 31 07:31:08 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.370 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Preparing to wait for external event network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.370 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.370 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.370 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.371 247708 DEBUG nova.virt.libvirt.vif [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-138857472',display_name='tempest-VolumesAssistedSnapshotsTest-server-138857472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-138857472',id=3,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBykBcv16QQ6ttRMCuOYN3UOOsywiIkXrJA3SNGD0zaYKz3WtwcWNi/Eezi1ka9ukpZxVLE9mKIYnDGHSMsGmDcX41IyFM0mEs7A/Oc1luJV/SEL9WeshGDKnacE4ZECw==',key_name='tempest-keypair-692678977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7269d3f50e464bf7953583453c06f6c7',ramdisk_id='',reservation_id='r-69kx30hz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-378059459',owner_user_name='tempest-VolumesAssistedSnapshotsTest-378059459-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:31:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4520c09fdbc4ac79fbc4c76f049c0fb',uuid=f610bafa-64ee-4d38-8be4-7c17cd2b2a99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.371 247708 DEBUG nova.network.os_vif_util [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Converting VIF {"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.372 247708 DEBUG nova.network.os_vif_util [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:55:d2,bridge_name='br-int',has_traffic_filtering=True,id=ec2025b5-972e-43ae-8cb4-88e60da18197,network=Network(bfdb30d1-e66a-41bd-9dfb-f576cae026b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2025b5-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.372 247708 DEBUG os_vif [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:55:d2,bridge_name='br-int',has_traffic_filtering=True,id=ec2025b5-972e-43ae-8cb4-88e60da18197,network=Network(bfdb30d1-e66a-41bd-9dfb-f576cae026b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2025b5-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:31:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.421 247708 DEBUG ovsdbapp.backend.ovs_idl [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.422 247708 DEBUG ovsdbapp.backend.ovs_idl [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.423 247708 DEBUG ovsdbapp.backend.ovs_idl [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.423 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.424 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.424 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.425 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.429 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.450 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.450 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.450 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:31:08 compute-0 nova_compute[247704]: 2026-01-31 07:31:08.452 247708 INFO oslo.privsep.daemon [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpf8bmbhs4/privsep.sock']
Jan 31 07:31:08 compute-0 podman[254363]: 2026-01-31 07:31:08.636412319 +0000 UTC m=+0.094137793 container create ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:31:08 compute-0 podman[254363]: 2026-01-31 07:31:08.564100932 +0000 UTC m=+0.021826426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:31:08 compute-0 systemd[1]: Started libpod-conmon-ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1.scope.
Jan 31 07:31:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:31:08 compute-0 podman[254363]: 2026-01-31 07:31:08.774821466 +0000 UTC m=+0.232546970 container init ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:31:08 compute-0 podman[254363]: 2026-01-31 07:31:08.781897223 +0000 UTC m=+0.239622697 container start ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:31:08 compute-0 compassionate_jemison[254379]: 167 167
Jan 31 07:31:08 compute-0 systemd[1]: libpod-ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1.scope: Deactivated successfully.
Jan 31 07:31:08 compute-0 conmon[254379]: conmon ae9d77952042d66ab192 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1.scope/container/memory.events
Jan 31 07:31:08 compute-0 podman[254363]: 2026-01-31 07:31:08.801435294 +0000 UTC m=+0.259160808 container attach ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:31:08 compute-0 podman[254363]: 2026-01-31 07:31:08.802502159 +0000 UTC m=+0.260227673 container died ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jemison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-be4133dc96a51e180a6371a21abe9f169ecce012a7224d7b2eb61d95c7558200-merged.mount: Deactivated successfully.
Jan 31 07:31:08 compute-0 podman[254363]: 2026-01-31 07:31:08.995533825 +0000 UTC m=+0.453259299 container remove ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jemison, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:31:09 compute-0 systemd[1]: libpod-conmon-ae9d77952042d66ab1925b4d9541c6b550c1b3b649a074d4b5f4f6e9230ad7a1.scope: Deactivated successfully.
Jan 31 07:31:09 compute-0 podman[254406]: 2026-01-31 07:31:09.162758692 +0000 UTC m=+0.070406753 container create 7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:31:09 compute-0 podman[254406]: 2026-01-31 07:31:09.114308018 +0000 UTC m=+0.021956059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:31:09 compute-0 systemd[1]: Started libpod-conmon-7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317.scope.
Jan 31 07:31:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e1eaef1dc74c60dd82b5f959c9a6a3570600d57e428cc2740b9e9570310c19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e1eaef1dc74c60dd82b5f959c9a6a3570600d57e428cc2740b9e9570310c19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e1eaef1dc74c60dd82b5f959c9a6a3570600d57e428cc2740b9e9570310c19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e1eaef1dc74c60dd82b5f959c9a6a3570600d57e428cc2740b9e9570310c19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e1eaef1dc74c60dd82b5f959c9a6a3570600d57e428cc2740b9e9570310c19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:09 compute-0 podman[254406]: 2026-01-31 07:31:09.311893112 +0000 UTC m=+0.219541163 container init 7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:31:09 compute-0 podman[254406]: 2026-01-31 07:31:09.319712736 +0000 UTC m=+0.227360757 container start 7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:31:09 compute-0 podman[254406]: 2026-01-31 07:31:09.378058003 +0000 UTC m=+0.285706064 container attach 7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ardinghelli, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:31:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:31:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:09.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:31:09 compute-0 ceph-mon[74496]: pgmap v907: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 145 op/s
Jan 31 07:31:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3026629658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:09 compute-0 nova_compute[247704]: 2026-01-31 07:31:09.513 247708 INFO oslo.privsep.daemon [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Spawned new privsep daemon via rootwrap
Jan 31 07:31:09 compute-0 nova_compute[247704]: 2026-01-31 07:31:09.363 254428 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:31:09 compute-0 nova_compute[247704]: 2026-01-31 07:31:09.368 254428 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:31:09 compute-0 nova_compute[247704]: 2026-01-31 07:31:09.371 254428 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 31 07:31:09 compute-0 nova_compute[247704]: 2026-01-31 07:31:09.372 254428 INFO oslo.privsep.daemon [-] privsep daemon running as pid 254428
Jan 31 07:31:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:09.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:09 compute-0 nova_compute[247704]: 2026-01-31 07:31:09.828 247708 DEBUG nova.network.neutron [req-4014b05e-7d09-4887-8bde-6306691c08da req-5cc33bd2-f3e7-421a-9593-a2e63d30c7b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updated VIF entry in instance network info cache for port ec2025b5-972e-43ae-8cb4-88e60da18197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:31:09 compute-0 nova_compute[247704]: 2026-01-31 07:31:09.829 247708 DEBUG nova.network.neutron [req-4014b05e-7d09-4887-8bde-6306691c08da req-5cc33bd2-f3e7-421a-9593-a2e63d30c7b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updating instance_info_cache with network_info: [{"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:31:09 compute-0 nova_compute[247704]: 2026-01-31 07:31:09.854 247708 DEBUG oslo_concurrency.lockutils [req-4014b05e-7d09-4887-8bde-6306691c08da req-5cc33bd2-f3e7-421a-9593-a2e63d30c7b8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:31:10 compute-0 peaceful_ardinghelli[254422]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:31:10 compute-0 peaceful_ardinghelli[254422]: --> relative data size: 1.0
Jan 31 07:31:10 compute-0 peaceful_ardinghelli[254422]: --> All data devices are unavailable
Jan 31 07:31:10 compute-0 systemd[1]: libpod-7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317.scope: Deactivated successfully.
Jan 31 07:31:10 compute-0 podman[254406]: 2026-01-31 07:31:10.222978624 +0000 UTC m=+1.130626655 container died 7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.333 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.335 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec2025b5-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.336 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec2025b5-97, col_values=(('external_ids', {'iface-id': 'ec2025b5-972e-43ae-8cb4-88e60da18197', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:55:d2', 'vm-uuid': 'f610bafa-64ee-4d38-8be4-7c17cd2b2a99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.339 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:10 compute-0 NetworkManager[49108]: <info>  [1769844670.3408] manager: (tapec2025b5-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.344 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.346 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.347 247708 INFO os_vif [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:55:d2,bridge_name='br-int',has_traffic_filtering=True,id=ec2025b5-972e-43ae-8cb4-88e60da18197,network=Network(bfdb30d1-e66a-41bd-9dfb-f576cae026b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2025b5-97')
Jan 31 07:31:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 107 MiB data, 226 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 31 07:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e1eaef1dc74c60dd82b5f959c9a6a3570600d57e428cc2740b9e9570310c19-merged.mount: Deactivated successfully.
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.417 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.430 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.431 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.431 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] No VIF found with MAC fa:16:3e:7d:55:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.432 247708 INFO nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Using config drive
Jan 31 07:31:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2466196534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.484 247708 DEBUG nova.storage.rbd_utils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] rbd image f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:10 compute-0 podman[254406]: 2026-01-31 07:31:10.593795256 +0000 UTC m=+1.501443317 container remove 7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:31:10 compute-0 systemd[1]: libpod-conmon-7540c53d80c239caa4ec66e1b784fe91d4f51b83fa61dc08974095d183610317.scope: Deactivated successfully.
Jan 31 07:31:10 compute-0 sudo[254293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:10 compute-0 sudo[254475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:10 compute-0 sudo[254475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:10 compute-0 sudo[254475]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:10.718 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:31:10 compute-0 nova_compute[247704]: 2026-01-31 07:31:10.718 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:10.720 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:31:10 compute-0 sudo[254500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:31:10 compute-0 sudo[254500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:10 compute-0 sudo[254500]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:10 compute-0 sudo[254525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:10 compute-0 sudo[254525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:10 compute-0 sudo[254525]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:10 compute-0 sudo[254550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:31:10 compute-0 sudo[254550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:11.140 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:11.140 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:11.141 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:11 compute-0 podman[254617]: 2026-01-31 07:31:11.219443733 +0000 UTC m=+0.097217086 container create d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:31:11 compute-0 podman[254617]: 2026-01-31 07:31:11.143419009 +0000 UTC m=+0.021192382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:31:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:11.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:11 compute-0 systemd[1]: Started libpod-conmon-d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7.scope.
Jan 31 07:31:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:31:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:11.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:11 compute-0 podman[254617]: 2026-01-31 07:31:11.636376573 +0000 UTC m=+0.514149956 container init d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:31:11 compute-0 podman[254617]: 2026-01-31 07:31:11.645709113 +0000 UTC m=+0.523482466 container start d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:31:11 compute-0 gallant_varahamihira[254634]: 167 167
Jan 31 07:31:11 compute-0 systemd[1]: libpod-d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7.scope: Deactivated successfully.
Jan 31 07:31:11 compute-0 podman[254617]: 2026-01-31 07:31:11.728128948 +0000 UTC m=+0.605902311 container attach d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:31:11 compute-0 podman[254617]: 2026-01-31 07:31:11.728642791 +0000 UTC m=+0.606416144 container died d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:31:11 compute-0 ceph-mon[74496]: pgmap v908: 305 pgs: 305 active+clean; 107 MiB data, 226 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 31 07:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff4f42e5209463fa192641995a97e9efd2994ba855a634ccf246dd993e417de3-merged.mount: Deactivated successfully.
Jan 31 07:31:12 compute-0 podman[254617]: 2026-01-31 07:31:12.041602547 +0000 UTC m=+0.919375940 container remove d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:31:12 compute-0 systemd[1]: libpod-conmon-d5680eafd968324ed1b91dd7579de781ba0d8197b77b4627baf31392d4f800d7.scope: Deactivated successfully.
Jan 31 07:31:12 compute-0 podman[254658]: 2026-01-31 07:31:12.195910829 +0000 UTC m=+0.065031566 container create ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:31:12 compute-0 podman[254658]: 2026-01-31 07:31:12.153272903 +0000 UTC m=+0.022393720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:31:12 compute-0 systemd[1]: Started libpod-conmon-ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56.scope.
Jan 31 07:31:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18692056ff33ea98230cb12f99f16407c8378b56186c90a9900db2c49a6f3bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18692056ff33ea98230cb12f99f16407c8378b56186c90a9900db2c49a6f3bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18692056ff33ea98230cb12f99f16407c8378b56186c90a9900db2c49a6f3bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b18692056ff33ea98230cb12f99f16407c8378b56186c90a9900db2c49a6f3bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:12 compute-0 podman[254658]: 2026-01-31 07:31:12.350667432 +0000 UTC m=+0.219788219 container init ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:31:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 96 MiB data, 223 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.7 MiB/s wr, 140 op/s
Jan 31 07:31:12 compute-0 podman[254658]: 2026-01-31 07:31:12.357863761 +0000 UTC m=+0.226984528 container start ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:31:12 compute-0 podman[254658]: 2026-01-31 07:31:12.386497007 +0000 UTC m=+0.255617744 container attach ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:31:12 compute-0 nova_compute[247704]: 2026-01-31 07:31:12.861 247708 INFO nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Creating config drive at /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99/disk.config
Jan 31 07:31:12 compute-0 nova_compute[247704]: 2026-01-31 07:31:12.870 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdil3f0yo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:12 compute-0 nova_compute[247704]: 2026-01-31 07:31:12.992 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdil3f0yo" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:13 compute-0 nova_compute[247704]: 2026-01-31 07:31:13.027 247708 DEBUG nova.storage.rbd_utils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] rbd image f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:31:13 compute-0 nova_compute[247704]: 2026-01-31 07:31:13.031 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99/disk.config f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:13 compute-0 epic_gould[254674]: {
Jan 31 07:31:13 compute-0 epic_gould[254674]:     "0": [
Jan 31 07:31:13 compute-0 epic_gould[254674]:         {
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "devices": [
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "/dev/loop3"
Jan 31 07:31:13 compute-0 epic_gould[254674]:             ],
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "lv_name": "ceph_lv0",
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "lv_size": "7511998464",
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "name": "ceph_lv0",
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "tags": {
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.cluster_name": "ceph",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.crush_device_class": "",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.encrypted": "0",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.osd_id": "0",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.type": "block",
Jan 31 07:31:13 compute-0 epic_gould[254674]:                 "ceph.vdo": "0"
Jan 31 07:31:13 compute-0 epic_gould[254674]:             },
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "type": "block",
Jan 31 07:31:13 compute-0 epic_gould[254674]:             "vg_name": "ceph_vg0"
Jan 31 07:31:13 compute-0 epic_gould[254674]:         }
Jan 31 07:31:13 compute-0 epic_gould[254674]:     ]
Jan 31 07:31:13 compute-0 epic_gould[254674]: }
Jan 31 07:31:13 compute-0 systemd[1]: libpod-ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56.scope: Deactivated successfully.
Jan 31 07:31:13 compute-0 podman[254721]: 2026-01-31 07:31:13.266944557 +0000 UTC m=+0.036319058 container died ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b18692056ff33ea98230cb12f99f16407c8378b56186c90a9900db2c49a6f3bb-merged.mount: Deactivated successfully.
Jan 31 07:31:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:13.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:13 compute-0 podman[254721]: 2026-01-31 07:31:13.415398541 +0000 UTC m=+0.184772972 container remove ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_gould, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:31:13 compute-0 systemd[1]: libpod-conmon-ddbc40d5143789b10826a010011359054f6015247bbd89d1c89fd95fd0b88d56.scope: Deactivated successfully.
Jan 31 07:31:13 compute-0 sudo[254550]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:13 compute-0 nova_compute[247704]: 2026-01-31 07:31:13.463 247708 DEBUG oslo_concurrency.processutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99/disk.config f610bafa-64ee-4d38-8be4-7c17cd2b2a99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:13 compute-0 nova_compute[247704]: 2026-01-31 07:31:13.464 247708 INFO nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Deleting local config drive /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99/disk.config because it was imported into RBD.
Jan 31 07:31:13 compute-0 sudo[254740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:13 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 31 07:31:13 compute-0 sudo[254740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:13 compute-0 sudo[254740]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:13 compute-0 kernel: tapec2025b5-97: entered promiscuous mode
Jan 31 07:31:13 compute-0 NetworkManager[49108]: <info>  [1769844673.5382] manager: (tapec2025b5-97): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 31 07:31:13 compute-0 nova_compute[247704]: 2026-01-31 07:31:13.540 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:13 compute-0 ovn_controller[149457]: 2026-01-31T07:31:13Z|00027|binding|INFO|Claiming lport ec2025b5-972e-43ae-8cb4-88e60da18197 for this chassis.
Jan 31 07:31:13 compute-0 ovn_controller[149457]: 2026-01-31T07:31:13Z|00028|binding|INFO|ec2025b5-972e-43ae-8cb4-88e60da18197: Claiming fa:16:3e:7d:55:d2 10.100.0.10
Jan 31 07:31:13 compute-0 nova_compute[247704]: 2026-01-31 07:31:13.547 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:13.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:13.573 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:55:d2 10.100.0.10'], port_security=['fa:16:3e:7d:55:d2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f610bafa-64ee-4d38-8be4-7c17cd2b2a99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfdb30d1-e66a-41bd-9dfb-f576cae026b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7269d3f50e464bf7953583453c06f6c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd808ab9-31b9-4301-8549-d6c79ef30c77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c5e6cdb-96aa-487f-a2b0-8f46c703f10c, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=ec2025b5-972e-43ae-8cb4-88e60da18197) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:31:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:13.574 160028 INFO neutron.agent.ovn.metadata.agent [-] Port ec2025b5-972e-43ae-8cb4-88e60da18197 in datapath bfdb30d1-e66a-41bd-9dfb-f576cae026b5 bound to our chassis
Jan 31 07:31:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:13.579 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfdb30d1-e66a-41bd-9dfb-f576cae026b5
Jan 31 07:31:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:13.580 160028 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmptkbyd576/privsep.sock']
Jan 31 07:31:13 compute-0 systemd-udevd[254801]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:31:13 compute-0 systemd-machined[214448]: New machine qemu-2-instance-00000003.
Jan 31 07:31:13 compute-0 nova_compute[247704]: 2026-01-31 07:31:13.605 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:13 compute-0 NetworkManager[49108]: <info>  [1769844673.6081] device (tapec2025b5-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:31:13 compute-0 NetworkManager[49108]: <info>  [1769844673.6089] device (tapec2025b5-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:31:13 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Jan 31 07:31:13 compute-0 ovn_controller[149457]: 2026-01-31T07:31:13Z|00029|binding|INFO|Setting lport ec2025b5-972e-43ae-8cb4-88e60da18197 ovn-installed in OVS
Jan 31 07:31:13 compute-0 ovn_controller[149457]: 2026-01-31T07:31:13Z|00030|binding|INFO|Setting lport ec2025b5-972e-43ae-8cb4-88e60da18197 up in Southbound
Jan 31 07:31:13 compute-0 sudo[254776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:31:13 compute-0 nova_compute[247704]: 2026-01-31 07:31:13.616 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:13 compute-0 sudo[254776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:13 compute-0 sudo[254776]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:13 compute-0 sudo[254816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:13 compute-0 sudo[254816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:13 compute-0 sudo[254816]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:13 compute-0 sudo[254847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:31:13 compute-0 sudo[254847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:13 compute-0 ceph-mon[74496]: pgmap v909: 305 pgs: 305 active+clean; 96 MiB data, 223 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.7 MiB/s wr, 140 op/s
Jan 31 07:31:14 compute-0 podman[254914]: 2026-01-31 07:31:14.038695461 +0000 UTC m=+0.038499169 container create 9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:31:14 compute-0 systemd[1]: Started libpod-conmon-9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54.scope.
Jan 31 07:31:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:31:14 compute-0 podman[254914]: 2026-01-31 07:31:14.01954424 +0000 UTC m=+0.019347978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:31:14 compute-0 podman[254914]: 2026-01-31 07:31:14.127928428 +0000 UTC m=+0.127732186 container init 9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:31:14 compute-0 podman[254914]: 2026-01-31 07:31:14.135863855 +0000 UTC m=+0.135667583 container start 9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:31:14 compute-0 eloquent_williams[254930]: 167 167
Jan 31 07:31:14 compute-0 systemd[1]: libpod-9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54.scope: Deactivated successfully.
Jan 31 07:31:14 compute-0 podman[254914]: 2026-01-31 07:31:14.143183678 +0000 UTC m=+0.142987416 container attach 9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:31:14 compute-0 podman[254914]: 2026-01-31 07:31:14.143946216 +0000 UTC m=+0.143749944 container died 9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:31:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cdbac6aa38debaee4e69358b309836d4bdc95e24cd17885920dc6730f2e2008-merged.mount: Deactivated successfully.
Jan 31 07:31:14 compute-0 podman[254914]: 2026-01-31 07:31:14.239553472 +0000 UTC m=+0.239357190 container remove 9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:31:14 compute-0 systemd[1]: libpod-conmon-9cb4d5d6f88ae41c92bfac93edd64820549503ebe18c3cb0a28cd37b089aaa54.scope: Deactivated successfully.
Jan 31 07:31:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:14.275 160028 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 07:31:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:14.276 160028 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptkbyd576/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 07:31:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:14.141 254935 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:31:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:14.149 254935 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:31:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:14.153 254935 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 31 07:31:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:14.154 254935 INFO oslo.privsep.daemon [-] privsep daemon running as pid 254935
Jan 31 07:31:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:14.279 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4c154530-f1c7-4cf4-beff-cc10a0ec0d28]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 122 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Jan 31 07:31:14 compute-0 nova_compute[247704]: 2026-01-31 07:31:14.388 247708 DEBUG nova.compute.manager [req-cdedecb5-a6a3-47d6-9fc1-3e9d784a5914 req-2e6356c2-b025-432b-8cf5-8ec7a42935b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received event network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:14 compute-0 nova_compute[247704]: 2026-01-31 07:31:14.388 247708 DEBUG oslo_concurrency.lockutils [req-cdedecb5-a6a3-47d6-9fc1-3e9d784a5914 req-2e6356c2-b025-432b-8cf5-8ec7a42935b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:14 compute-0 nova_compute[247704]: 2026-01-31 07:31:14.388 247708 DEBUG oslo_concurrency.lockutils [req-cdedecb5-a6a3-47d6-9fc1-3e9d784a5914 req-2e6356c2-b025-432b-8cf5-8ec7a42935b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:14 compute-0 nova_compute[247704]: 2026-01-31 07:31:14.388 247708 DEBUG oslo_concurrency.lockutils [req-cdedecb5-a6a3-47d6-9fc1-3e9d784a5914 req-2e6356c2-b025-432b-8cf5-8ec7a42935b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:14 compute-0 nova_compute[247704]: 2026-01-31 07:31:14.388 247708 DEBUG nova.compute.manager [req-cdedecb5-a6a3-47d6-9fc1-3e9d784a5914 req-2e6356c2-b025-432b-8cf5-8ec7a42935b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Processing event network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:31:14 compute-0 podman[254959]: 2026-01-31 07:31:14.388953719 +0000 UTC m=+0.058009071 container create 191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:31:14 compute-0 systemd[1]: Started libpod-conmon-191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228.scope.
Jan 31 07:31:14 compute-0 podman[254959]: 2026-01-31 07:31:14.360005865 +0000 UTC m=+0.029061227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:31:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8fd133f3343f7e9d17f9146d5f719cdd9db5263106fa2562b1478d2fc36457f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8fd133f3343f7e9d17f9146d5f719cdd9db5263106fa2562b1478d2fc36457f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8fd133f3343f7e9d17f9146d5f719cdd9db5263106fa2562b1478d2fc36457f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8fd133f3343f7e9d17f9146d5f719cdd9db5263106fa2562b1478d2fc36457f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:14 compute-0 podman[254959]: 2026-01-31 07:31:14.483448049 +0000 UTC m=+0.152503411 container init 191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:31:14 compute-0 podman[254959]: 2026-01-31 07:31:14.489663466 +0000 UTC m=+0.158718798 container start 191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:31:14 compute-0 podman[254959]: 2026-01-31 07:31:14.493145388 +0000 UTC m=+0.162200720 container attach 191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.135 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.135 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844675.1346958, f610bafa-64ee-4d38-8be4-7c17cd2b2a99 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.136 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] VM Started (Lifecycle Event)
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.140 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.143 247708 INFO nova.virt.libvirt.driver [-] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Instance spawned successfully.
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.144 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:31:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:15.193 254935 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:15.193 254935 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:15.193 254935 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.203 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.207 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.208 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.208 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.209 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.209 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.210 247708 DEBUG nova.virt.libvirt.driver [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.214 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.261 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.261 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844675.1376278, f610bafa-64ee-4d38-8be4-7c17cd2b2a99 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.262 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] VM Paused (Lifecycle Event)
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.308 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.313 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844675.139839, f610bafa-64ee-4d38-8be4-7c17cd2b2a99 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.313 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] VM Resumed (Lifecycle Event)
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.340 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.355 247708 INFO nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Took 15.27 seconds to spawn the instance on the hypervisor.
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.356 247708 DEBUG nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.357 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.364 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:31:15 compute-0 magical_fermi[254976]: {
Jan 31 07:31:15 compute-0 magical_fermi[254976]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:31:15 compute-0 magical_fermi[254976]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:31:15 compute-0 magical_fermi[254976]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:31:15 compute-0 magical_fermi[254976]:         "osd_id": 0,
Jan 31 07:31:15 compute-0 magical_fermi[254976]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:31:15 compute-0 magical_fermi[254976]:         "type": "bluestore"
Jan 31 07:31:15 compute-0 magical_fermi[254976]:     }
Jan 31 07:31:15 compute-0 magical_fermi[254976]: }
Jan 31 07:31:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:15.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.408 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:31:15 compute-0 systemd[1]: libpod-191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228.scope: Deactivated successfully.
Jan 31 07:31:15 compute-0 podman[254959]: 2026-01-31 07:31:15.420417962 +0000 UTC m=+1.089473294 container died 191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.420 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.480 247708 INFO nova.compute.manager [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Took 16.38 seconds to build instance.
Jan 31 07:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8fd133f3343f7e9d17f9146d5f719cdd9db5263106fa2562b1478d2fc36457f-merged.mount: Deactivated successfully.
Jan 31 07:31:15 compute-0 nova_compute[247704]: 2026-01-31 07:31:15.530 247708 DEBUG oslo_concurrency.lockutils [None req-a65068c8-377a-4a8a-84f7-99e88f400b1b d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:15 compute-0 podman[254959]: 2026-01-31 07:31:15.531058284 +0000 UTC m=+1.200113616 container remove 191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_fermi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:31:15 compute-0 systemd[1]: libpod-conmon-191ab323af69ba38dc22f2734d52b36fc688cf3c22a0a16abfb6424b98a20228.scope: Deactivated successfully.
Jan 31 07:31:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:15.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:15 compute-0 sudo[254847]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:31:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:31:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6c0d7f55-de01-4d15-bb34-35c0422f6b05 does not exist
Jan 31 07:31:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ba73bbf2-7e97-4f86-8dab-2b2d495e32b0 does not exist
Jan 31 07:31:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 082c2aa8-bd53-4189-9dc5-13a7c0926df8 does not exist
Jan 31 07:31:15 compute-0 sudo[255054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:15 compute-0 sudo[255054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:15 compute-0 sudo[255054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:15 compute-0 sudo[255079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:31:15 compute-0 sudo[255079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:15 compute-0 sudo[255079]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:15 compute-0 ceph-mon[74496]: pgmap v910: 305 pgs: 305 active+clean; 122 MiB data, 232 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Jan 31 07:31:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:31:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 31 07:31:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:16.400 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2d2035-a6f4-4162-bc79-a326a8330176]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:16.402 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbfdb30d1-e1 in ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:31:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:16.404 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbfdb30d1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:31:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:16.404 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e64e470b-3d7e-4143-b6ef-955ea3cb2817]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:16.408 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[18263d71-7fb4-4f99-8236-c895f673d0d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:16.424 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[131c36a5-d544-4373-859a-4b398944939d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:16.446 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[07652291-1510-4e10-a48f-e006eb4fe919]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:16.450 160028 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp6f9jilk8/privsep.sock']
Jan 31 07:31:16 compute-0 nova_compute[247704]: 2026-01-31 07:31:16.522 247708 DEBUG nova.compute.manager [req-51b2bd64-58b5-4cd7-93a4-ea36835d2d5e req-9f87aa7e-3f58-46be-91d4-1b5e37075817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received event network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:16 compute-0 nova_compute[247704]: 2026-01-31 07:31:16.523 247708 DEBUG oslo_concurrency.lockutils [req-51b2bd64-58b5-4cd7-93a4-ea36835d2d5e req-9f87aa7e-3f58-46be-91d4-1b5e37075817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:16 compute-0 nova_compute[247704]: 2026-01-31 07:31:16.523 247708 DEBUG oslo_concurrency.lockutils [req-51b2bd64-58b5-4cd7-93a4-ea36835d2d5e req-9f87aa7e-3f58-46be-91d4-1b5e37075817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:16 compute-0 nova_compute[247704]: 2026-01-31 07:31:16.523 247708 DEBUG oslo_concurrency.lockutils [req-51b2bd64-58b5-4cd7-93a4-ea36835d2d5e req-9f87aa7e-3f58-46be-91d4-1b5e37075817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:16 compute-0 nova_compute[247704]: 2026-01-31 07:31:16.524 247708 DEBUG nova.compute.manager [req-51b2bd64-58b5-4cd7-93a4-ea36835d2d5e req-9f87aa7e-3f58-46be-91d4-1b5e37075817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] No waiting events found dispatching network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:31:16 compute-0 nova_compute[247704]: 2026-01-31 07:31:16.524 247708 WARNING nova.compute.manager [req-51b2bd64-58b5-4cd7-93a4-ea36835d2d5e req-9f87aa7e-3f58-46be-91d4-1b5e37075817 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received unexpected event network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 for instance with vm_state active and task_state None.
Jan 31 07:31:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2925801226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.304 160028 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.305 160028 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp6f9jilk8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.117 255113 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.122 255113 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.125 255113 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.126 255113 INFO oslo.privsep.daemon [-] privsep daemon running as pid 255113
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.335 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ffcb60-5b95-4f1b-8e14-52eeb2e1e8c5]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:17.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:17.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.960 255113 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.960 255113 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:17.960 255113 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:18 compute-0 ceph-mon[74496]: pgmap v911: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 31 07:31:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2444794305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 31 07:31:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.559 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8df09818-4e16-4e5e-b81e-06e0a38515a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 NetworkManager[49108]: <info>  [1769844678.5857] manager: (tapbfdb30d1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.585 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[68c9d072-a620-46b6-8211-0aff43e17a61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 systemd-udevd[255132]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.613 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[cce86cc4-5d03-4297-b93a-d66c87878329]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.617 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[845b9db6-bf4b-4927-9f40-7664bfeee96c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 NetworkManager[49108]: <info>  [1769844678.6442] device (tapbfdb30d1-e0): carrier: link connected
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.648 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d3c0e1-f4d4-4c21-9113-ec9e3464a717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.669 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[23e3196f-9d8e-4c0f-9dbb-6b53b963c662]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfdb30d1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:ea:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492126, 'reachable_time': 15455, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255154, 'error': None, 'target': 'ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.687 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f5761cd8-da67-4753-9e7c-41be0d064dc1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:eafc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492126, 'tstamp': 492126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255161, 'error': None, 'target': 'ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.707 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2dea19-b3a3-4fd6-b9bf-52ed9c3cff0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfdb30d1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:ea:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492126, 'reachable_time': 15455, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255165, 'error': None, 'target': 'ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 podman[255123]: 2026-01-31 07:31:18.733128768 +0000 UTC m=+0.121008477 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.734 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[36154f25-ccc3-4f84-ab04-bc5e69293c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.787 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6c74ae7c-e5e9-43f1-bfad-8a68f979aa7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.788 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfdb30d1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.789 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.789 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfdb30d1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:18 compute-0 nova_compute[247704]: 2026-01-31 07:31:18.791 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:18 compute-0 kernel: tapbfdb30d1-e0: entered promiscuous mode
Jan 31 07:31:18 compute-0 NetworkManager[49108]: <info>  [1769844678.7928] manager: (tapbfdb30d1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 31 07:31:18 compute-0 nova_compute[247704]: 2026-01-31 07:31:18.795 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.798 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfdb30d1-e0, col_values=(('external_ids', {'iface-id': '0f14fafb-cb5a-4ac8-93e2-c2ffa1d2c9be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:18 compute-0 nova_compute[247704]: 2026-01-31 07:31:18.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:18 compute-0 ovn_controller[149457]: 2026-01-31T07:31:18Z|00031|binding|INFO|Releasing lport 0f14fafb-cb5a-4ac8-93e2-c2ffa1d2c9be from this chassis (sb_readonly=0)
Jan 31 07:31:18 compute-0 nova_compute[247704]: 2026-01-31 07:31:18.800 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.800 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bfdb30d1-e66a-41bd-9dfb-f576cae026b5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bfdb30d1-e66a-41bd-9dfb-f576cae026b5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.801 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8e07a04f-3d1f-418e-8d85-5574ca54364a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.802 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-bfdb30d1-e66a-41bd-9dfb-f576cae026b5
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/bfdb30d1-e66a-41bd-9dfb-f576cae026b5.pid.haproxy
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID bfdb30d1-e66a-41bd-9dfb-f576cae026b5
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:31:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:18.802 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5', 'env', 'PROCESS_TAG=haproxy-bfdb30d1-e66a-41bd-9dfb-f576cae026b5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bfdb30d1-e66a-41bd-9dfb-f576cae026b5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:31:18 compute-0 nova_compute[247704]: 2026-01-31 07:31:18.805 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:19 compute-0 podman[255205]: 2026-01-31 07:31:19.179615746 +0000 UTC m=+0.066613133 container create 71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:31:19 compute-0 systemd[1]: Started libpod-conmon-71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e.scope.
Jan 31 07:31:19 compute-0 podman[255205]: 2026-01-31 07:31:19.1357381 +0000 UTC m=+0.022735507 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:31:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60009bf34f43ef27df29a1c3236ce1e3ea231e13430b7c90ad6decd8a5ccc3fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:19 compute-0 ceph-mon[74496]: pgmap v912: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 31 07:31:19 compute-0 podman[255205]: 2026-01-31 07:31:19.25857172 +0000 UTC m=+0.145569127 container init 71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:31:19 compute-0 podman[255205]: 2026-01-31 07:31:19.263768152 +0000 UTC m=+0.150765549 container start 71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 07:31:19 compute-0 neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5[255220]: [NOTICE]   (255225) : New worker (255227) forked
Jan 31 07:31:19 compute-0 neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5[255220]: [NOTICE]   (255225) : Loading success.
Jan 31 07:31:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:19.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:19.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:31:19
Jan 31 07:31:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:31:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:31:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 31 07:31:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:31:20 compute-0 nova_compute[247704]: 2026-01-31 07:31:20.163 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <info>  [1769844680.1648] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <info>  [1769844680.1655] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <warn>  [1769844680.1658] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <info>  [1769844680.1669] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <info>  [1769844680.1674] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <warn>  [1769844680.1675] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <info>  [1769844680.1685] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <info>  [1769844680.1693] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <info>  [1769844680.1709] device (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 07:31:20 compute-0 NetworkManager[49108]: <info>  [1769844680.1715] device (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:31:20 compute-0 nova_compute[247704]: 2026-01-31 07:31:20.237 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:20 compute-0 ovn_controller[149457]: 2026-01-31T07:31:20Z|00032|binding|INFO|Releasing lport 0f14fafb-cb5a-4ac8-93e2-c2ffa1d2c9be from this chassis (sb_readonly=0)
Jan 31 07:31:20 compute-0 nova_compute[247704]: 2026-01-31 07:31:20.265 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:20 compute-0 nova_compute[247704]: 2026-01-31 07:31:20.342 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 07:31:20 compute-0 nova_compute[247704]: 2026-01-31 07:31:20.422 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:31:20.723 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:31:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:21.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:21 compute-0 ceph-mon[74496]: pgmap v913: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 07:31:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:21.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Jan 31 07:31:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:23 compute-0 nova_compute[247704]: 2026-01-31 07:31:23.388 247708 DEBUG nova.compute.manager [req-46f95cac-a8b8-4901-9154-86c88c105a87 req-45d247b7-d5f3-4c92-8f1d-b325f7aa0d83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received event network-changed-ec2025b5-972e-43ae-8cb4-88e60da18197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:31:23 compute-0 nova_compute[247704]: 2026-01-31 07:31:23.388 247708 DEBUG nova.compute.manager [req-46f95cac-a8b8-4901-9154-86c88c105a87 req-45d247b7-d5f3-4c92-8f1d-b325f7aa0d83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Refreshing instance network info cache due to event network-changed-ec2025b5-972e-43ae-8cb4-88e60da18197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:31:23 compute-0 nova_compute[247704]: 2026-01-31 07:31:23.389 247708 DEBUG oslo_concurrency.lockutils [req-46f95cac-a8b8-4901-9154-86c88c105a87 req-45d247b7-d5f3-4c92-8f1d-b325f7aa0d83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:31:23 compute-0 nova_compute[247704]: 2026-01-31 07:31:23.389 247708 DEBUG oslo_concurrency.lockutils [req-46f95cac-a8b8-4901-9154-86c88c105a87 req-45d247b7-d5f3-4c92-8f1d-b325f7aa0d83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:31:23 compute-0 nova_compute[247704]: 2026-01-31 07:31:23.389 247708 DEBUG nova.network.neutron [req-46f95cac-a8b8-4901-9154-86c88c105a87 req-45d247b7-d5f3-4c92-8f1d-b325f7aa0d83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Refreshing network info cache for port ec2025b5-972e-43ae-8cb4-88e60da18197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:31:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:23.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:23.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:23 compute-0 ceph-mon[74496]: pgmap v914: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Jan 31 07:31:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 31 07:31:25 compute-0 nova_compute[247704]: 2026-01-31 07:31:25.224 247708 DEBUG nova.network.neutron [req-46f95cac-a8b8-4901-9154-86c88c105a87 req-45d247b7-d5f3-4c92-8f1d-b325f7aa0d83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updated VIF entry in instance network info cache for port ec2025b5-972e-43ae-8cb4-88e60da18197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:31:25 compute-0 nova_compute[247704]: 2026-01-31 07:31:25.225 247708 DEBUG nova.network.neutron [req-46f95cac-a8b8-4901-9154-86c88c105a87 req-45d247b7-d5f3-4c92-8f1d-b325f7aa0d83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updating instance_info_cache with network_info: [{"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:31:25 compute-0 nova_compute[247704]: 2026-01-31 07:31:25.260 247708 DEBUG oslo_concurrency.lockutils [req-46f95cac-a8b8-4901-9154-86c88c105a87 req-45d247b7-d5f3-4c92-8f1d-b325f7aa0d83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:31:25 compute-0 nova_compute[247704]: 2026-01-31 07:31:25.346 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:25.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:25 compute-0 nova_compute[247704]: 2026-01-31 07:31:25.424 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:25.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:26 compute-0 ceph-mon[74496]: pgmap v915: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 31 07:31:26 compute-0 sudo[255241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 441 KiB/s wr, 117 op/s
Jan 31 07:31:26 compute-0 sudo[255241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:26 compute-0 sudo[255241]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:26 compute-0 sudo[255266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:26 compute-0 sudo[255266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:26 compute-0 sudo[255266]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:27.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:31:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:27.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:31:28 compute-0 ceph-mon[74496]: pgmap v916: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 441 KiB/s wr, 117 op/s
Jan 31 07:31:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/817307480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 15 KiB/s wr, 107 op/s
Jan 31 07:31:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:29 compute-0 ovn_controller[149457]: 2026-01-31T07:31:29Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:55:d2 10.100.0.10
Jan 31 07:31:29 compute-0 ovn_controller[149457]: 2026-01-31T07:31:29Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:55:d2 10.100.0.10
Jan 31 07:31:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:29.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:29.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:30 compute-0 ceph-mon[74496]: pgmap v917: 305 pgs: 305 active+clean; 134 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 15 KiB/s wr, 107 op/s
Jan 31 07:31:30 compute-0 nova_compute[247704]: 2026-01-31 07:31:30.349 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 171 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 187 op/s
Jan 31 07:31:30 compute-0 nova_compute[247704]: 2026-01-31 07:31:30.427 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:31.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:32 compute-0 ceph-mon[74496]: pgmap v918: 305 pgs: 305 active+clean; 171 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 187 op/s
Jan 31 07:31:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 189 MiB data, 304 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 168 op/s
Jan 31 07:31:32 compute-0 podman[255295]: 2026-01-31 07:31:32.877820346 +0000 UTC m=+0.052293376 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:31:33 compute-0 ceph-mon[74496]: pgmap v919: 305 pgs: 305 active+clean; 189 MiB data, 304 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 168 op/s
Jan 31 07:31:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:33.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:33.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/443679632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1935318245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0041496181718779085 of space, bias 1.0, pg target 1.2448854515633725 quantized to 32 (current 32)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:31:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 07:31:35 compute-0 nova_compute[247704]: 2026-01-31 07:31:35.357 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:35 compute-0 ceph-mon[74496]: pgmap v920: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 07:31:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:35.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:35 compute-0 nova_compute[247704]: 2026-01-31 07:31:35.429 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:35 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 07:31:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:35.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 217 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 154 op/s
Jan 31 07:31:37 compute-0 ceph-mon[74496]: pgmap v921: 305 pgs: 305 active+clean; 217 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 154 op/s
Jan 31 07:31:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:37.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:37.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 227 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.9 MiB/s wr, 147 op/s
Jan 31 07:31:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:39.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:39 compute-0 ceph-mon[74496]: pgmap v922: 305 pgs: 305 active+clean; 227 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.9 MiB/s wr, 147 op/s
Jan 31 07:31:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:39 compute-0 nova_compute[247704]: 2026-01-31 07:31:39.786 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:39 compute-0 nova_compute[247704]: 2026-01-31 07:31:39.786 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:39 compute-0 nova_compute[247704]: 2026-01-31 07:31:39.853 247708 DEBUG nova.objects.instance [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lazy-loading 'flavor' on Instance uuid f610bafa-64ee-4d38-8be4-7c17cd2b2a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:31:39 compute-0 nova_compute[247704]: 2026-01-31 07:31:39.906 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:40 compute-0 nova_compute[247704]: 2026-01-31 07:31:40.272 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:40 compute-0 nova_compute[247704]: 2026-01-31 07:31:40.272 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:40 compute-0 nova_compute[247704]: 2026-01-31 07:31:40.273 247708 INFO nova.compute.manager [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Attaching volume 1bbb8596-60ef-452c-be5b-f1b3f34acf56 to /dev/vdb
Jan 31 07:31:40 compute-0 nova_compute[247704]: 2026-01-31 07:31:40.359 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 238 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 198 op/s
Jan 31 07:31:40 compute-0 nova_compute[247704]: 2026-01-31 07:31:40.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:40 compute-0 nova_compute[247704]: 2026-01-31 07:31:40.775 247708 DEBUG os_brick.utils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:31:40 compute-0 nova_compute[247704]: 2026-01-31 07:31:40.777 247708 INFO oslo.privsep.daemon [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpm6uxsdmh/privsep.sock']
Jan 31 07:31:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:41.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.457 247708 INFO oslo.privsep.daemon [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Spawned new privsep daemon via rootwrap
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.313 255323 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.317 255323 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.320 255323 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.320 255323 INFO oslo.privsep.daemon [-] privsep daemon running as pid 255323
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.461 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[effa95ef-bb75-4dd9-b6f7-f161e5d66bd4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:41 compute-0 ceph-mon[74496]: pgmap v923: 305 pgs: 305 active+clean; 238 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 198 op/s
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.560 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.580 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.581 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[1a40261d-a518-4cf3-8bb4-f5f1d097064e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.582 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.590 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.590 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[e746ce23-516b-49a1-b03b-22219bae918b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.592 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:41.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.607 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.608 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[d5727e8e-9339-45ed-91e8-27cfaf21bd11]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.610 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[026e5624-07e8-439f-9612-438fa7dfc97d]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.610 247708 DEBUG oslo_concurrency.processutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.628 247708 DEBUG oslo_concurrency.processutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.632 247708 DEBUG os_brick.initiator.connectors.lightos [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.633 247708 DEBUG os_brick.initiator.connectors.lightos [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.633 247708 DEBUG os_brick.initiator.connectors.lightos [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.633 247708 DEBUG os_brick.utils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] <== get_connector_properties: return (857ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:31:41 compute-0 nova_compute[247704]: 2026-01-31 07:31:41.634 247708 DEBUG nova.virt.block_device [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updating existing volume attachment record: 666c639f-1a54-48ac-89a9-c7b2a2580395 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:31:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 247 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 542 KiB/s rd, 4.2 MiB/s wr, 116 op/s
Jan 31 07:31:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:43 compute-0 ceph-mon[74496]: pgmap v924: 305 pgs: 305 active+clean; 247 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 542 KiB/s rd, 4.2 MiB/s wr, 116 op/s
Jan 31 07:31:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3329455590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:31:43 compute-0 nova_compute[247704]: 2026-01-31 07:31:43.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:31:43 compute-0 nova_compute[247704]: 2026-01-31 07:31:43.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:31:43 compute-0 nova_compute[247704]: 2026-01-31 07:31:43.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:31:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:43.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 160 op/s
Jan 31 07:31:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:31:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077544154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:31:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:31:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077544154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:31:45 compute-0 nova_compute[247704]: 2026-01-31 07:31:45.362 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:45 compute-0 nova_compute[247704]: 2026-01-31 07:31:45.433 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:45 compute-0 ceph-mon[74496]: pgmap v925: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 160 op/s
Jan 31 07:31:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2077544154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:31:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2077544154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:31:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:45.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.064 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.064 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.064 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.065 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f610bafa-64ee-4d38-8be4-7c17cd2b2a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.107 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.108 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.109 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.118 247708 DEBUG nova.objects.instance [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lazy-loading 'flavor' on Instance uuid f610bafa-64ee-4d38-8be4-7c17cd2b2a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.212 247708 DEBUG nova.virt.libvirt.driver [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Attempting to attach volume 1bbb8596-60ef-452c-be5b-f1b3f34acf56 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 07:31:46 compute-0 nova_compute[247704]: 2026-01-31 07:31:46.217 247708 DEBUG nova.virt.libvirt.guest [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 07:31:46 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:31:46 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-1bbb8596-60ef-452c-be5b-f1b3f34acf56">
Jan 31 07:31:46 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:31:46 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:31:46 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:31:46 compute-0 nova_compute[247704]:   </source>
Jan 31 07:31:46 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 07:31:46 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:31:46 compute-0 nova_compute[247704]:   </auth>
Jan 31 07:31:46 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:31:46 compute-0 nova_compute[247704]:   <serial>1bbb8596-60ef-452c-be5b-f1b3f34acf56</serial>
Jan 31 07:31:46 compute-0 nova_compute[247704]: </disk>
Jan 31 07:31:46 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 07:31:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 31 07:31:46 compute-0 sudo[255354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:46 compute-0 sudo[255354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:46 compute-0 sudo[255354]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:46 compute-0 sudo[255379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:31:46 compute-0 sudo[255379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:31:46 compute-0 sudo[255379]: pam_unix(sudo:session): session closed for user root
Jan 31 07:31:47 compute-0 nova_compute[247704]: 2026-01-31 07:31:47.034 247708 DEBUG nova.virt.libvirt.driver [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:31:47 compute-0 nova_compute[247704]: 2026-01-31 07:31:47.036 247708 DEBUG nova.virt.libvirt.driver [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:31:47 compute-0 nova_compute[247704]: 2026-01-31 07:31:47.036 247708 DEBUG nova.virt.libvirt.driver [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:31:47 compute-0 nova_compute[247704]: 2026-01-31 07:31:47.037 247708 DEBUG nova.virt.libvirt.driver [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] No VIF found with MAC fa:16:3e:7d:55:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:31:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:47.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:47 compute-0 ceph-mon[74496]: pgmap v926: 305 pgs: 305 active+clean; 247 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 31 07:31:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:31:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:47.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:31:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 233 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 31 07:31:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.397271) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708397320, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1635, "num_deletes": 251, "total_data_size": 2747087, "memory_usage": 2789224, "flush_reason": "Manual Compaction"}
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708406393, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1666517, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19389, "largest_seqno": 21023, "table_properties": {"data_size": 1660650, "index_size": 2942, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14975, "raw_average_key_size": 20, "raw_value_size": 1647758, "raw_average_value_size": 2282, "num_data_blocks": 131, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844559, "oldest_key_time": 1769844559, "file_creation_time": 1769844708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 9168 microseconds, and 3696 cpu microseconds.
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.406441) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1666517 bytes OK
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.406467) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.410897) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.410948) EVENT_LOG_v1 {"time_micros": 1769844708410937, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.410978) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2740119, prev total WAL file size 2740119, number of live WAL files 2.
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.411613) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353037' seq:72057594037927935, type:22 .. '6D67727374617400373539' seq:0, type:0; will stop at (end)
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1627KB)], [44(9430KB)]
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708411661, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11323408, "oldest_snapshot_seqno": -1}
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4762 keys, 8492759 bytes, temperature: kUnknown
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708469130, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8492759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8460854, "index_size": 18867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 118421, "raw_average_key_size": 24, "raw_value_size": 8374613, "raw_average_value_size": 1758, "num_data_blocks": 781, "num_entries": 4762, "num_filter_entries": 4762, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.469371) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8492759 bytes
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.482292) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.8 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.2 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.9) write-amplify(5.1) OK, records in: 5215, records dropped: 453 output_compression: NoCompression
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.482341) EVENT_LOG_v1 {"time_micros": 1769844708482322, "job": 22, "event": "compaction_finished", "compaction_time_micros": 57547, "compaction_time_cpu_micros": 15079, "output_level": 6, "num_output_files": 1, "total_output_size": 8492759, "num_input_records": 5215, "num_output_records": 4762, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708482785, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844708484237, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.411529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.484437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.484445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.484449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.484452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:31:48.484455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:31:48 compute-0 podman[255405]: 2026-01-31 07:31:48.911951786 +0000 UTC m=+0.082661291 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 07:31:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:49.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:49 compute-0 ceph-mon[74496]: pgmap v927: 305 pgs: 305 active+clean; 233 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 31 07:31:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:49.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:31:50 compute-0 nova_compute[247704]: 2026-01-31 07:31:50.366 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 191 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 150 op/s
Jan 31 07:31:50 compute-0 nova_compute[247704]: 2026-01-31 07:31:50.436 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:31:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:51.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:31:51 compute-0 ceph-mon[74496]: pgmap v928: 305 pgs: 305 active+clean; 191 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 150 op/s
Jan 31 07:31:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:51.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 167 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 102 op/s
Jan 31 07:31:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:53.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:53 compute-0 ceph-mon[74496]: pgmap v929: 305 pgs: 305 active+clean; 167 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 102 op/s
Jan 31 07:31:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 195 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 31 07:31:55 compute-0 nova_compute[247704]: 2026-01-31 07:31:55.370 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:55 compute-0 nova_compute[247704]: 2026-01-31 07:31:55.438 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:31:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:55.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:55.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:55 compute-0 ceph-mon[74496]: pgmap v930: 305 pgs: 305 active+clean; 195 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 31 07:31:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2422846504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 07:31:56 compute-0 ovn_controller[149457]: 2026-01-31T07:31:56Z|00033|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 07:31:56 compute-0 nova_compute[247704]: 2026-01-31 07:31:56.729 247708 DEBUG oslo_concurrency.lockutils [None req-350fa66c-50de-409e-8f3c-16b779a88e99 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 16.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:31:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:57.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:31:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:57.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:31:57 compute-0 ceph-mon[74496]: pgmap v931: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 07:31:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/350385416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:31:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 07:31:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:31:58 compute-0 nova_compute[247704]: 2026-01-31 07:31:58.939 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updating instance_info_cache with network_info: [{"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.114 247708 DEBUG nova.virt.libvirt.driver [None req-005cba80-5691-44bf-9826-72f81315d8af c578a6b1c1f04f7d8a1ba58a6a3efb7f eb5cd517e8854f269f4c6be46ac5a1a7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] volume_snapshot_create: create_info: {'snapshot_id': 'cfa8f7a7-38d3-4420-a909-7f8997874dd3', 'type': 'qcow2', 'new_file': 'new_file'} volume_snapshot_create /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3572
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.119 247708 ERROR nova.virt.libvirt.driver [None req-005cba80-5691-44bf-9826-72f81315d8af c578a6b1c1f04f7d8a1ba58a6a3efb7f eb5cd517e8854f269f4c6be46ac5a1a7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.119 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Traceback (most recent call last):
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.119 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.119 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99]     self._volume_snapshot_create(context, instance, guest,
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.119 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.119 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99]     raise exception.InternalError(msg)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.119 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] nova.exception.InternalError: Found no disk to snapshot.
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.119 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] 
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver [None req-005cba80-5691-44bf-9826-72f81315d8af c578a6b1c1f04f7d8a1ba58a6a3efb7f eb5cd517e8854f269f4c6be46ac5a1a7 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot cfa8f7a7-38d3-4420-a909-7f8997874dd3 could not be found.
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     self._volume_snapshot_create(context, instance, guest,
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     raise exception.InternalError(msg)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver nova.exception.InternalError: Found no disk to snapshot.
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot cfa8f7a7-38d3-4420-a909-7f8997874dd3 could not be found. (HTTP 404) (Request-ID: req-d29f0f28-353d-495d-9c93-8adab0537c9f)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot cfa8f7a7-38d3-4420-a909-7f8997874dd3 could not be found.
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.280 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server [None req-005cba80-5691-44bf-9826-72f81315d8af c578a6b1c1f04f7d8a1ba58a6a3efb7f eb5cd517e8854f269f4c6be46ac5a1a7 - - default default] Exception during message handling: nova.exception.InternalError: Found no disk to snapshot.
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4410, in volume_snapshot_create
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_create(context, instance, volume_id,
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3597, in volume_snapshot_create
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     self._volume_snapshot_create(context, instance, guest,
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server     raise exception.InternalError(msg)
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server nova.exception.InternalError: Found no disk to snapshot.
Jan 31 07:31:59 compute-0 nova_compute[247704]: 2026-01-31 07:31:59.284 247708 ERROR oslo_messaging.rpc.server 
Jan 31 07:31:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:59.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:31:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:31:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:59.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:31:59 compute-0 ceph-mon[74496]: pgmap v932: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 280 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 07:31:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1371840649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.372 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.440 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.853 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-f610bafa-64ee-4d38-8be4-7c17cd2b2a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.853 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.854 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.854 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.854 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.854 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.855 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.855 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.855 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:32:00 compute-0 nova_compute[247704]: 2026-01-31 07:32:00.855 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:01.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:01.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:01 compute-0 ceph-mon[74496]: pgmap v933: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 31 07:32:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 07:32:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:03.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:03.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:03 compute-0 podman[255439]: 2026-01-31 07:32:03.913955108 +0000 UTC m=+0.082700362 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 07:32:03 compute-0 ceph-mon[74496]: pgmap v934: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 263 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 07:32:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 07:32:05 compute-0 nova_compute[247704]: 2026-01-31 07:32:05.375 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:05 compute-0 nova_compute[247704]: 2026-01-31 07:32:05.400 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:05 compute-0 nova_compute[247704]: 2026-01-31 07:32:05.400 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:05 compute-0 nova_compute[247704]: 2026-01-31 07:32:05.401 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:05 compute-0 nova_compute[247704]: 2026-01-31 07:32:05.401 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:32:05 compute-0 nova_compute[247704]: 2026-01-31 07:32:05.401 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:05 compute-0 nova_compute[247704]: 2026-01-31 07:32:05.442 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:05.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:05.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406168050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:05 compute-0 nova_compute[247704]: 2026-01-31 07:32:05.929 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:06 compute-0 ceph-mon[74496]: pgmap v935: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 07:32:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2406168050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 81 KiB/s wr, 2 op/s
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.506 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.506 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.506 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:32:06 compute-0 sudo[255482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:06 compute-0 sudo[255482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:06 compute-0 sudo[255482]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.672 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.674 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4713MB free_disk=20.897212982177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.674 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.675 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:06 compute-0 sudo[255507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:06 compute-0 sudo[255507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:06 compute-0 sudo[255507]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.851 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance f610bafa-64ee-4d38-8be4-7c17cd2b2a99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.852 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.852 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:32:06 compute-0 nova_compute[247704]: 2026-01-31 07:32:06.953 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3877595757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035829834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:07 compute-0 nova_compute[247704]: 2026-01-31 07:32:07.439 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:07 compute-0 nova_compute[247704]: 2026-01-31 07:32:07.449 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:32:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:07.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.069 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.125 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.126 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.363 247708 DEBUG nova.virt.libvirt.driver [None req-c4b72d8c-f0d8-4e3f-a704-81af37924013 c578a6b1c1f04f7d8a1ba58a6a3efb7f eb5cd517e8854f269f4c6be46ac5a1a7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] volume_snapshot_delete: delete_info: {'volume_id': '1bbb8596-60ef-452c-be5b-f1b3f34acf56'} _volume_snapshot_delete /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3673
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.364 247708 ERROR nova.virt.libvirt.driver [None req-c4b72d8c-f0d8-4e3f-a704-81af37924013 c578a6b1c1f04f7d8a1ba58a6a3efb7f eb5cd517e8854f269f4c6be46ac5a1a7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.364 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Traceback (most recent call last):
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.364 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.364 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99]     self._volume_snapshot_delete(context, instance, volume_id,
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.364 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.364 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99]     if delete_info['type'] != 'qcow2':
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.364 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] KeyError: 'type'
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.364 247708 ERROR nova.virt.libvirt.driver [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] 
Jan 31 07:32:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 07:32:08 compute-0 ceph-mon[74496]: pgmap v936: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 81 KiB/s wr, 2 op/s
Jan 31 07:32:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3035829834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4022648652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.502 247708 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Creating tmpfile /var/lib/nova/instances/tmpb6taclt6 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver [None req-c4b72d8c-f0d8-4e3f-a704-81af37924013 c578a6b1c1f04f7d8a1ba58a6a3efb7f eb5cd517e8854f269f4c6be46ac5a1a7 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot None could not be found.
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     self._volume_snapshot_delete(context, instance, volume_id,
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     if delete_info['type'] != 'qcow2':
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver KeyError: 'type'
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot None could not be found. (HTTP 404) (Request-ID: req-abd863f1-0340-4d46-9b9b-1f01c6ba40c1)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot None could not be found.
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.676 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server [None req-c4b72d8c-f0d8-4e3f-a704-81af37924013 c578a6b1c1f04f7d8a1ba58a6a3efb7f eb5cd517e8854f269f4c6be46ac5a1a7 - - default default] Exception during message handling: KeyError: 'type'
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4422, in volume_snapshot_delete
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_delete(context, instance, volume_id,
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3853, in volume_snapshot_delete
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     self._volume_snapshot_delete(context, instance, volume_id,
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server     if delete_info['type'] != 'qcow2':
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server KeyError: 'type'
Jan 31 07:32:08 compute-0 nova_compute[247704]: 2026-01-31 07:32:08.678 247708 ERROR oslo_messaging.rpc.server 
Jan 31 07:32:09 compute-0 nova_compute[247704]: 2026-01-31 07:32:09.009 247708 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpb6taclt6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 31 07:32:09 compute-0 nova_compute[247704]: 2026-01-31 07:32:09.035 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:32:09 compute-0 nova_compute[247704]: 2026-01-31 07:32:09.036 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:32:09 compute-0 nova_compute[247704]: 2026-01-31 07:32:09.045 247708 INFO nova.compute.rpcapi [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Jan 31 07:32:09 compute-0 nova_compute[247704]: 2026-01-31 07:32:09.046 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:32:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:09.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:09 compute-0 ceph-mon[74496]: pgmap v937: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 07:32:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1074906142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:09 compute-0 nova_compute[247704]: 2026-01-31 07:32:09.978 247708 DEBUG oslo_concurrency.lockutils [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:09 compute-0 nova_compute[247704]: 2026-01-31 07:32:09.978 247708 DEBUG oslo_concurrency.lockutils [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.025 247708 INFO nova.compute.manager [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Detaching volume 1bbb8596-60ef-452c-be5b-f1b3f34acf56
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.210 247708 INFO nova.virt.block_device [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Attempting to driver detach volume 1bbb8596-60ef-452c-be5b-f1b3f34acf56 from mountpoint /dev/vdb
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.222 247708 DEBUG nova.virt.libvirt.driver [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Attempting to detach device vdb from instance f610bafa-64ee-4d38-8be4-7c17cd2b2a99 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.223 247708 DEBUG nova.virt.libvirt.guest [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-1bbb8596-60ef-452c-be5b-f1b3f34acf56">
Jan 31 07:32:10 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   </source>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <serial>1bbb8596-60ef-452c-be5b-f1b3f34acf56</serial>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]: </disk>
Jan 31 07:32:10 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.243 247708 INFO nova.virt.libvirt.driver [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Successfully detached device vdb from instance f610bafa-64ee-4d38-8be4-7c17cd2b2a99 from the persistent domain config.
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.244 247708 DEBUG nova.virt.libvirt.driver [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f610bafa-64ee-4d38-8be4-7c17cd2b2a99 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.246 247708 DEBUG nova.virt.libvirt.guest [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-1bbb8596-60ef-452c-be5b-f1b3f34acf56">
Jan 31 07:32:10 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   </source>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <serial>1bbb8596-60ef-452c-be5b-f1b3f34acf56</serial>
Jan 31 07:32:10 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 07:32:10 compute-0 nova_compute[247704]: </disk>
Jan 31 07:32:10 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.377 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.401 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769844730.4012556, f610bafa-64ee-4d38-8be4-7c17cd2b2a99 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.403 247708 DEBUG nova.virt.libvirt.driver [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f610bafa-64ee-4d38-8be4-7c17cd2b2a99 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.406 247708 INFO nova.virt.libvirt.driver [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Successfully detached device vdb from instance f610bafa-64ee-4d38-8be4-7c17cd2b2a99 from the live domain config.
Jan 31 07:32:10 compute-0 nova_compute[247704]: 2026-01-31 07:32:10.444 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1413958164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:11 compute-0 nova_compute[247704]: 2026-01-31 07:32:11.121 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:11 compute-0 nova_compute[247704]: 2026-01-31 07:32:11.122 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:11.141 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:11.142 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:11.143 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:11 compute-0 nova_compute[247704]: 2026-01-31 07:32:11.266 247708 DEBUG nova.objects.instance [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lazy-loading 'flavor' on Instance uuid f610bafa-64ee-4d38-8be4-7c17cd2b2a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:32:11 compute-0 nova_compute[247704]: 2026-01-31 07:32:11.357 247708 DEBUG oslo_concurrency.lockutils [None req-ffebe890-b763-4233-933c-f4f4a8a7913a 77ae480a4e8844c48c9dcde43f333250 c8eab1fe01c14c5aa74f8b1c00292fee - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:11.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:11.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:11.669 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:32:11 compute-0 nova_compute[247704]: 2026-01-31 07:32:11.670 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:11.671 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:32:11 compute-0 ceph-mon[74496]: pgmap v938: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 07:32:11 compute-0 nova_compute[247704]: 2026-01-31 07:32:11.981 247708 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpb6taclt6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='23c338db-50ed-434c-ac85-8190b9b5f194',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 31 07:32:12 compute-0 nova_compute[247704]: 2026-01-31 07:32:12.039 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:32:12 compute-0 nova_compute[247704]: 2026-01-31 07:32:12.040 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquired lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:32:12 compute-0 nova_compute[247704]: 2026-01-31 07:32:12.040 247708 DEBUG nova.network.neutron [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:32:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s wr, 0 op/s
Jan 31 07:32:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:13.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:13.673 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:14 compute-0 ceph-mon[74496]: pgmap v939: 305 pgs: 305 active+clean; 200 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s wr, 0 op/s
Jan 31 07:32:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 215 MiB data, 350 MiB used, 21 GiB / 21 GiB avail; 9.3 KiB/s rd, 466 KiB/s wr, 17 op/s
Jan 31 07:32:15 compute-0 ceph-mon[74496]: pgmap v940: 305 pgs: 305 active+clean; 215 MiB data, 350 MiB used, 21 GiB / 21 GiB avail; 9.3 KiB/s rd, 466 KiB/s wr, 17 op/s
Jan 31 07:32:15 compute-0 nova_compute[247704]: 2026-01-31 07:32:15.383 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:15 compute-0 nova_compute[247704]: 2026-01-31 07:32:15.446 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:15.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:15.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:16 compute-0 sudo[255561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:16 compute-0 sudo[255561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:16 compute-0 sudo[255561]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:16 compute-0 sudo[255586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:32:16 compute-0 sudo[255586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:16 compute-0 sudo[255586]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.221 247708 DEBUG nova.network.neutron [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.252 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Releasing lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.255 247708 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpb6taclt6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='23c338db-50ed-434c-ac85-8190b9b5f194',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.257 247708 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Creating instance directory: /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.258 247708 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Ensure instance console log exists: /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.258 247708 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 31 07:32:16 compute-0 sudo[255611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.261 247708 DEBUG nova.virt.libvirt.vif [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-933343300',display_name='tempest-LiveMigrationTest-server-933343300',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-933343300',id=5,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:31:39Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-kv2e0wfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:31:39Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=23c338db-50ed-434c-ac85-8190b9b5f194,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.262 247708 DEBUG nova.network.os_vif_util [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:32:16 compute-0 sudo[255611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.264 247708 DEBUG nova.network.os_vif_util [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.265 247708 DEBUG os_vif [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.266 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.267 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.267 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:32:16 compute-0 sudo[255611]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.271 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.272 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2da07bdf-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.273 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2da07bdf-31, col_values=(('external_ids', {'iface-id': '2da07bdf-313d-4a90-a81e-e531c63b3d54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:9e:1b', 'vm-uuid': '23c338db-50ed-434c-ac85-8190b9b5f194'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.274 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:16 compute-0 NetworkManager[49108]: <info>  [1769844736.2762] manager: (tap2da07bdf-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.280 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.283 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3231850267' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:32:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3231850267' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.286 247708 INFO os_vif [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31')
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.287 247708 DEBUG nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.288 247708 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpb6taclt6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='23c338db-50ed-434c-ac85-8190b9b5f194',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 31 07:32:16 compute-0 sudo[255637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:32:16 compute-0 sudo[255637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 236 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Jan 31 07:32:16 compute-0 podman[255732]: 2026-01-31 07:32:16.856357094 +0000 UTC m=+0.077535198 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.972 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.974 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.975 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.976 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.976 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.978 247708 INFO nova.compute.manager [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Terminating instance
Jan 31 07:32:16 compute-0 nova_compute[247704]: 2026-01-31 07:32:16.979 247708 DEBUG nova.compute.manager [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:32:17 compute-0 kernel: tapec2025b5-97 (unregistering): left promiscuous mode
Jan 31 07:32:17 compute-0 podman[255753]: 2026-01-31 07:32:17.029172594 +0000 UTC m=+0.051627504 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:32:17 compute-0 NetworkManager[49108]: <info>  [1769844737.0345] device (tapec2025b5-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:32:17 compute-0 podman[255732]: 2026-01-31 07:32:17.037596597 +0000 UTC m=+0.258774651 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:32:17 compute-0 ovn_controller[149457]: 2026-01-31T07:32:17Z|00034|binding|INFO|Releasing lport ec2025b5-972e-43ae-8cb4-88e60da18197 from this chassis (sb_readonly=0)
Jan 31 07:32:17 compute-0 ovn_controller[149457]: 2026-01-31T07:32:17Z|00035|binding|INFO|Setting lport ec2025b5-972e-43ae-8cb4-88e60da18197 down in Southbound
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.077 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 ovn_controller[149457]: 2026-01-31T07:32:17Z|00036|binding|INFO|Removing iface tapec2025b5-97 ovn-installed in OVS
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.080 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.084 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.088 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:55:d2 10.100.0.10'], port_security=['fa:16:3e:7d:55:d2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f610bafa-64ee-4d38-8be4-7c17cd2b2a99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfdb30d1-e66a-41bd-9dfb-f576cae026b5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7269d3f50e464bf7953583453c06f6c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd808ab9-31b9-4301-8549-d6c79ef30c77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c5e6cdb-96aa-487f-a2b0-8f46c703f10c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=ec2025b5-972e-43ae-8cb4-88e60da18197) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.090 160028 INFO neutron.agent.ovn.metadata.agent [-] Port ec2025b5-972e-43ae-8cb4-88e60da18197 in datapath bfdb30d1-e66a-41bd-9dfb-f576cae026b5 unbound from our chassis
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.092 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfdb30d1-e66a-41bd-9dfb-f576cae026b5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.094 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0c01900a-0122-4c03-b304-26ca904c590f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.095 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5 namespace which is not needed anymore
Jan 31 07:32:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 31 07:32:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 17.544s CPU time.
Jan 31 07:32:17 compute-0 systemd-machined[214448]: Machine qemu-2-instance-00000003 terminated.
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.212 247708 INFO nova.virt.libvirt.driver [-] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Instance destroyed successfully.
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.214 247708 DEBUG nova.objects.instance [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lazy-loading 'resources' on Instance uuid f610bafa-64ee-4d38-8be4-7c17cd2b2a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.256 247708 DEBUG nova.virt.libvirt.vif [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-138857472',display_name='tempest-VolumesAssistedSnapshotsTest-server-138857472',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-138857472',id=3,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNBykBcv16QQ6ttRMCuOYN3UOOsywiIkXrJA3SNGD0zaYKz3WtwcWNi/Eezi1ka9ukpZxVLE9mKIYnDGHSMsGmDcX41IyFM0mEs7A/Oc1luJV/SEL9WeshGDKnacE4ZECw==',key_name='tempest-keypair-692678977',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:31:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7269d3f50e464bf7953583453c06f6c7',ramdisk_id='',reservation_id='r-69kx30hz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAssistedSnapshotsTest-378059459',owner_user_name='tempest-VolumesAssistedSnapshotsTest-378059459-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:31:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d4520c09fdbc4ac79fbc4c76f049c0fb',uuid=f610bafa-64ee-4d38-8be4-7c17cd2b2a99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.259 247708 DEBUG nova.network.os_vif_util [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Converting VIF {"id": "ec2025b5-972e-43ae-8cb4-88e60da18197", "address": "fa:16:3e:7d:55:d2", "network": {"id": "bfdb30d1-e66a-41bd-9dfb-f576cae026b5", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-663321855-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7269d3f50e464bf7953583453c06f6c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2025b5-97", "ovs_interfaceid": "ec2025b5-972e-43ae-8cb4-88e60da18197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.261 247708 DEBUG nova.network.os_vif_util [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:55:d2,bridge_name='br-int',has_traffic_filtering=True,id=ec2025b5-972e-43ae-8cb4-88e60da18197,network=Network(bfdb30d1-e66a-41bd-9dfb-f576cae026b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2025b5-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.262 247708 DEBUG os_vif [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:55:d2,bridge_name='br-int',has_traffic_filtering=True,id=ec2025b5-972e-43ae-8cb4-88e60da18197,network=Network(bfdb30d1-e66a-41bd-9dfb-f576cae026b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2025b5-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.265 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.266 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec2025b5-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.268 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.270 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.273 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.277 247708 INFO os_vif [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:55:d2,bridge_name='br-int',has_traffic_filtering=True,id=ec2025b5-972e-43ae-8cb4-88e60da18197,network=Network(bfdb30d1-e66a-41bd-9dfb-f576cae026b5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2025b5-97')
Jan 31 07:32:17 compute-0 neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5[255220]: [NOTICE]   (255225) : haproxy version is 2.8.14-c23fe91
Jan 31 07:32:17 compute-0 neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5[255220]: [NOTICE]   (255225) : path to executable is /usr/sbin/haproxy
Jan 31 07:32:17 compute-0 neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5[255220]: [WARNING]  (255225) : Exiting Master process...
Jan 31 07:32:17 compute-0 neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5[255220]: [ALERT]    (255225) : Current worker (255227) exited with code 143 (Terminated)
Jan 31 07:32:17 compute-0 neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5[255220]: [WARNING]  (255225) : All workers exited. Exiting... (0)
Jan 31 07:32:17 compute-0 systemd[1]: libpod-71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e.scope: Deactivated successfully.
Jan 31 07:32:17 compute-0 podman[255815]: 2026-01-31 07:32:17.288972399 +0000 UTC m=+0.061983834 container died 71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 07:32:17 compute-0 ceph-mon[74496]: pgmap v941: 305 pgs: 305 active+clean; 236 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Jan 31 07:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e-userdata-shm.mount: Deactivated successfully.
Jan 31 07:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-60009bf34f43ef27df29a1c3236ce1e3ea231e13430b7c90ad6decd8a5ccc3fa-merged.mount: Deactivated successfully.
Jan 31 07:32:17 compute-0 podman[255815]: 2026-01-31 07:32:17.33970095 +0000 UTC m=+0.112712365 container cleanup 71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 07:32:17 compute-0 systemd[1]: libpod-conmon-71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e.scope: Deactivated successfully.
Jan 31 07:32:17 compute-0 podman[255890]: 2026-01-31 07:32:17.423435276 +0000 UTC m=+0.060221031 container remove 71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.429 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[583adc8e-e409-4751-b0ad-edbedeb6df0e]: (4, ('Sat Jan 31 07:32:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5 (71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e)\n71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e\nSat Jan 31 07:32:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5 (71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e)\n71e92b58498c7969dfb1a13c8f80a6b81ea90d64819ab13188c5a83113cd008e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.431 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1bbc61-683d-42d5-a7f1-4c18909a5c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.432 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfdb30d1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:17 compute-0 kernel: tapbfdb30d1-e0: left promiscuous mode
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.448 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.450 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.451 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8034cf04-1e94-4089-9aa7-96be04a04998]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.464 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[22dcc4b9-21f1-4b70-8144-3b683ec82532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:17.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.469 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2efeb0a6-44b8-4c27-91e0-4a1395c08f80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.482 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8beb7612-1b30-4df5-96ac-f2387b4f7b50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492117, 'reachable_time': 19374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255934, 'error': None, 'target': 'ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:17 compute-0 systemd[1]: run-netns-ovnmeta\x2dbfdb30d1\x2de66a\x2d41bd\x2d9dfb\x2df576cae026b5.mount: Deactivated successfully.
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.492 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bfdb30d1-e66a-41bd-9dfb-f576cae026b5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:17.493 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[4460fada-cae7-4738-bfdd-6ebd42a5fa82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.759 247708 INFO nova.virt.libvirt.driver [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Deleting instance files /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99_del
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.760 247708 INFO nova.virt.libvirt.driver [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Deletion of /var/lib/nova/instances/f610bafa-64ee-4d38-8be4-7c17cd2b2a99_del complete
Jan 31 07:32:17 compute-0 podman[255986]: 2026-01-31 07:32:17.814315766 +0000 UTC m=+0.083306717 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:32:17 compute-0 podman[255986]: 2026-01-31 07:32:17.835559917 +0000 UTC m=+0.104550868 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.890 247708 INFO nova.compute.manager [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Took 0.91 seconds to destroy the instance on the hypervisor.
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.890 247708 DEBUG oslo.service.loopingcall [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.891 247708 DEBUG nova.compute.manager [-] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:32:17 compute-0 nova_compute[247704]: 2026-01-31 07:32:17.891 247708 DEBUG nova.network.neutron [-] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:32:18 compute-0 podman[256050]: 2026-01-31 07:32:18.065596704 +0000 UTC m=+0.065842406 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 07:32:18 compute-0 podman[256050]: 2026-01-31 07:32:18.076513307 +0000 UTC m=+0.076758949 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, com.redhat.component=keepalived-container, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20)
Jan 31 07:32:18 compute-0 sudo[255637]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:32:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:32:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:18 compute-0 sudo[256083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:18 compute-0 sudo[256083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:18 compute-0 sudo[256083]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:18 compute-0 sudo[256108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:32:18 compute-0 sudo[256108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:18 compute-0 sudo[256108]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:18 compute-0 sudo[256133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:18 compute-0 sudo[256133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:18 compute-0 sudo[256133]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 230 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 31 07:32:18 compute-0 sudo[256158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:32:18 compute-0 sudo[256158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:18 compute-0 sudo[256158]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:32:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:32:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:32:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e08c4af6-b36b-43c3-91e2-e3b533b777fa does not exist
Jan 31 07:32:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev af708aaf-e5a0-43d1-8c6d-6de6ee36f0a2 does not exist
Jan 31 07:32:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ebcaf0c7-98cd-4edf-ad78-e2c1248cf2bb does not exist
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:32:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:32:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:32:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:32:19 compute-0 sudo[256213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:19 compute-0 sudo[256213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:19 compute-0 sudo[256213]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:19 compute-0 nova_compute[247704]: 2026-01-31 07:32:19.069 247708 DEBUG nova.compute.manager [req-d95c5130-2133-4335-913b-59c25f33d9f0 req-6ffd6ffe-dae3-4cd5-8443-da3c0860f94f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received event network-vif-unplugged-ec2025b5-972e-43ae-8cb4-88e60da18197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:19 compute-0 nova_compute[247704]: 2026-01-31 07:32:19.071 247708 DEBUG oslo_concurrency.lockutils [req-d95c5130-2133-4335-913b-59c25f33d9f0 req-6ffd6ffe-dae3-4cd5-8443-da3c0860f94f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:19 compute-0 nova_compute[247704]: 2026-01-31 07:32:19.072 247708 DEBUG oslo_concurrency.lockutils [req-d95c5130-2133-4335-913b-59c25f33d9f0 req-6ffd6ffe-dae3-4cd5-8443-da3c0860f94f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:19 compute-0 nova_compute[247704]: 2026-01-31 07:32:19.072 247708 DEBUG oslo_concurrency.lockutils [req-d95c5130-2133-4335-913b-59c25f33d9f0 req-6ffd6ffe-dae3-4cd5-8443-da3c0860f94f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:19 compute-0 nova_compute[247704]: 2026-01-31 07:32:19.072 247708 DEBUG nova.compute.manager [req-d95c5130-2133-4335-913b-59c25f33d9f0 req-6ffd6ffe-dae3-4cd5-8443-da3c0860f94f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] No waiting events found dispatching network-vif-unplugged-ec2025b5-972e-43ae-8cb4-88e60da18197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:32:19 compute-0 nova_compute[247704]: 2026-01-31 07:32:19.073 247708 DEBUG nova.compute.manager [req-d95c5130-2133-4335-913b-59c25f33d9f0 req-6ffd6ffe-dae3-4cd5-8443-da3c0860f94f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received event network-vif-unplugged-ec2025b5-972e-43ae-8cb4-88e60da18197 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:32:19 compute-0 sudo[256244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:32:19 compute-0 sudo[256244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:19 compute-0 sudo[256244]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:19 compute-0 podman[256237]: 2026-01-31 07:32:19.138513064 +0000 UTC m=+0.101313301 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:32:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:32:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:32:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:32:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:32:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:32:19 compute-0 sudo[256285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:19 compute-0 sudo[256285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:19 compute-0 sudo[256285]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:19 compute-0 sudo[256314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:32:19 compute-0 sudo[256314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:19 compute-0 podman[256380]: 2026-01-31 07:32:19.593388454 +0000 UTC m=+0.052013514 container create 432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meninsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:32:19 compute-0 systemd[1]: Started libpod-conmon-432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865.scope.
Jan 31 07:32:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:19.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:19 compute-0 podman[256380]: 2026-01-31 07:32:19.569241052 +0000 UTC m=+0.027866152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:32:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:32:19 compute-0 podman[256380]: 2026-01-31 07:32:19.682254293 +0000 UTC m=+0.140879383 container init 432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:32:19 compute-0 podman[256380]: 2026-01-31 07:32:19.692328385 +0000 UTC m=+0.150953435 container start 432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meninsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:32:19 compute-0 podman[256380]: 2026-01-31 07:32:19.697408588 +0000 UTC m=+0.156033658 container attach 432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:32:19 compute-0 focused_meninsky[256396]: 167 167
Jan 31 07:32:19 compute-0 systemd[1]: libpod-432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865.scope: Deactivated successfully.
Jan 31 07:32:19 compute-0 podman[256380]: 2026-01-31 07:32:19.702377257 +0000 UTC m=+0.161002347 container died 432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meninsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d8f3ca06bf580633e252aa658a14bfe32c6dffd25896088de906196a23f2e3-merged.mount: Deactivated successfully.
Jan 31 07:32:19 compute-0 podman[256380]: 2026-01-31 07:32:19.741035778 +0000 UTC m=+0.199660808 container remove 432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:32:19 compute-0 systemd[1]: libpod-conmon-432513ec25f5d1ef4b3b4ffec9c38eeafc5592e2f4c50de70714709e36431865.scope: Deactivated successfully.
Jan 31 07:32:19 compute-0 podman[256420]: 2026-01-31 07:32:19.888462157 +0000 UTC m=+0.051009369 container create 58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:32:19 compute-0 systemd[1]: Started libpod-conmon-58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35.scope.
Jan 31 07:32:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd027465cb62f60b4db747b23f3a4259626590efc7310de0629d6d5734a713be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd027465cb62f60b4db747b23f3a4259626590efc7310de0629d6d5734a713be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd027465cb62f60b4db747b23f3a4259626590efc7310de0629d6d5734a713be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd027465cb62f60b4db747b23f3a4259626590efc7310de0629d6d5734a713be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd027465cb62f60b4db747b23f3a4259626590efc7310de0629d6d5734a713be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:19 compute-0 podman[256420]: 2026-01-31 07:32:19.869147782 +0000 UTC m=+0.031694994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:32:19 compute-0 podman[256420]: 2026-01-31 07:32:19.973718149 +0000 UTC m=+0.136265421 container init 58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:32:19 compute-0 podman[256420]: 2026-01-31 07:32:19.987310867 +0000 UTC m=+0.149858089 container start 58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:32:19 compute-0 podman[256420]: 2026-01-31 07:32:19.992127582 +0000 UTC m=+0.154674764 container attach 58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:32:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:32:19
Jan 31 07:32:19 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:32:19 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:32:19 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'vms', 'volumes', 'default.rgw.log']
Jan 31 07:32:19 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:32:20 compute-0 ceph-mon[74496]: pgmap v942: 305 pgs: 305 active+clean; 230 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 31 07:32:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2462985865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:32:20 compute-0 sshd-session[256432]: Invalid user sol from 45.148.10.240 port 56392
Jan 31 07:32:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 198 MiB data, 346 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 31 07:32:20 compute-0 sshd-session[256432]: Connection closed by invalid user sol 45.148.10.240 port 56392 [preauth]
Jan 31 07:32:20 compute-0 nova_compute[247704]: 2026-01-31 07:32:20.485 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:20 compute-0 ecstatic_chaplygin[256438]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:32:20 compute-0 ecstatic_chaplygin[256438]: --> relative data size: 1.0
Jan 31 07:32:20 compute-0 ecstatic_chaplygin[256438]: --> All data devices are unavailable
Jan 31 07:32:20 compute-0 systemd[1]: libpod-58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35.scope: Deactivated successfully.
Jan 31 07:32:20 compute-0 podman[256420]: 2026-01-31 07:32:20.755052908 +0000 UTC m=+0.917600130 container died 58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd027465cb62f60b4db747b23f3a4259626590efc7310de0629d6d5734a713be-merged.mount: Deactivated successfully.
Jan 31 07:32:20 compute-0 podman[256420]: 2026-01-31 07:32:20.810490583 +0000 UTC m=+0.973037765 container remove 58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:32:20 compute-0 systemd[1]: libpod-conmon-58e2ba49d1b8038e770250e6454967b5aa7bf07eaafb4d5d636e3b2764560d35.scope: Deactivated successfully.
Jan 31 07:32:20 compute-0 sudo[256314]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:20 compute-0 sudo[256467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:20 compute-0 sudo[256467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:20 compute-0 sudo[256467]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:20 compute-0 sudo[256492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:32:20 compute-0 sudo[256492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:20 compute-0 sudo[256492]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:21 compute-0 sudo[256517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:21 compute-0 sudo[256517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:21 compute-0 sudo[256517]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:21 compute-0 sudo[256542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:32:21 compute-0 sudo[256542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/377370642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3032247754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:32:21 compute-0 podman[256608]: 2026-01-31 07:32:21.423123771 +0000 UTC m=+0.039819879 container create 93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_faraday, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:32:21 compute-0 systemd[1]: Started libpod-conmon-93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61.scope.
Jan 31 07:32:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:21.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.487 247708 DEBUG nova.network.neutron [-] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:32:21 compute-0 podman[256608]: 2026-01-31 07:32:21.404929153 +0000 UTC m=+0.021625281 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:32:21 compute-0 podman[256608]: 2026-01-31 07:32:21.510682679 +0000 UTC m=+0.127378827 container init 93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.513 247708 INFO nova.compute.manager [-] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Took 3.62 seconds to deallocate network for instance.
Jan 31 07:32:21 compute-0 podman[256608]: 2026-01-31 07:32:21.518836066 +0000 UTC m=+0.135532174 container start 93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:32:21 compute-0 amazing_faraday[256624]: 167 167
Jan 31 07:32:21 compute-0 systemd[1]: libpod-93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61.scope: Deactivated successfully.
Jan 31 07:32:21 compute-0 podman[256608]: 2026-01-31 07:32:21.524696326 +0000 UTC m=+0.141392464 container attach 93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_faraday, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:32:21 compute-0 podman[256608]: 2026-01-31 07:32:21.525616949 +0000 UTC m=+0.142313057 container died 93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-184dec4ac1d2ea1caa3a316f900173cc59d9ed828f1dbd3b22878c0d6b3f60bc-merged.mount: Deactivated successfully.
Jan 31 07:32:21 compute-0 podman[256608]: 2026-01-31 07:32:21.559258529 +0000 UTC m=+0.175954637 container remove 93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:32:21 compute-0 systemd[1]: libpod-conmon-93ea5c2a82123605e507c74f16fb7f77264140b28ece5a8085d76b090eda9e61.scope: Deactivated successfully.
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.628 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.630 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:21.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.689 247708 DEBUG nova.compute.manager [req-ebc32980-da73-48e3-8e2b-4414008ea3ea req-b9a9003f-ec1c-4130-a9cd-3dd0f00be5a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received event network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.690 247708 DEBUG oslo_concurrency.lockutils [req-ebc32980-da73-48e3-8e2b-4414008ea3ea req-b9a9003f-ec1c-4130-a9cd-3dd0f00be5a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.692 247708 DEBUG oslo_concurrency.lockutils [req-ebc32980-da73-48e3-8e2b-4414008ea3ea req-b9a9003f-ec1c-4130-a9cd-3dd0f00be5a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.693 247708 DEBUG oslo_concurrency.lockutils [req-ebc32980-da73-48e3-8e2b-4414008ea3ea req-b9a9003f-ec1c-4130-a9cd-3dd0f00be5a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.694 247708 DEBUG nova.compute.manager [req-ebc32980-da73-48e3-8e2b-4414008ea3ea req-b9a9003f-ec1c-4130-a9cd-3dd0f00be5a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] No waiting events found dispatching network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.695 247708 WARNING nova.compute.manager [req-ebc32980-da73-48e3-8e2b-4414008ea3ea req-b9a9003f-ec1c-4130-a9cd-3dd0f00be5a2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received unexpected event network-vif-plugged-ec2025b5-972e-43ae-8cb4-88e60da18197 for instance with vm_state deleted and task_state None.
Jan 31 07:32:21 compute-0 podman[256648]: 2026-01-31 07:32:21.714400283 +0000 UTC m=+0.047626438 container create 92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:32:21 compute-0 systemd[1]: Started libpod-conmon-92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09.scope.
Jan 31 07:32:21 compute-0 nova_compute[247704]: 2026-01-31 07:32:21.772 247708 DEBUG oslo_concurrency.processutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7586d6c9aab83a9a82e07110701756252e20365b696aa3e6925ce5cb6e4eaa78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:21 compute-0 podman[256648]: 2026-01-31 07:32:21.693302865 +0000 UTC m=+0.026529030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7586d6c9aab83a9a82e07110701756252e20365b696aa3e6925ce5cb6e4eaa78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7586d6c9aab83a9a82e07110701756252e20365b696aa3e6925ce5cb6e4eaa78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7586d6c9aab83a9a82e07110701756252e20365b696aa3e6925ce5cb6e4eaa78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:21 compute-0 podman[256648]: 2026-01-31 07:32:21.805425584 +0000 UTC m=+0.138651759 container init 92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:32:21 compute-0 podman[256648]: 2026-01-31 07:32:21.812927205 +0000 UTC m=+0.146153390 container start 92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:32:21 compute-0 podman[256648]: 2026-01-31 07:32:21.821275626 +0000 UTC m=+0.154501791 container attach 92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.004 247708 DEBUG nova.network.neutron [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Port 2da07bdf-313d-4a90-a81e-e531c63b3d54 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.007 247708 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpb6taclt6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='23c338db-50ed-434c-ac85-8190b9b5f194',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 31 07:32:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709580168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.171 247708 DEBUG oslo_concurrency.processutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.176 247708 DEBUG nova.compute.provider_tree [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.201 247708 DEBUG nova.scheduler.client.report [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:32:22 compute-0 ceph-mon[74496]: pgmap v943: 305 pgs: 305 active+clean; 198 MiB data, 346 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 31 07:32:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2709580168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:22 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.245 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:22 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.270 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.308 247708 INFO nova.scheduler.client.report [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Deleted allocations for instance f610bafa-64ee-4d38-8be4-7c17cd2b2a99
Jan 31 07:32:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 169 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.417 247708 DEBUG nova.compute.manager [req-57da188c-1a4f-4d44-948a-326a1551c418 req-6bec5565-5cc2-4e15-99e7-9c3c0b0aaced 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Received event network-vif-deleted-ec2025b5-972e-43ae-8cb4-88e60da18197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:32:22 compute-0 kernel: tap2da07bdf-31: entered promiscuous mode
Jan 31 07:32:22 compute-0 NetworkManager[49108]: <info>  [1769844742.4471] manager: (tap2da07bdf-31): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.447 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:22 compute-0 ovn_controller[149457]: 2026-01-31T07:32:22Z|00037|binding|INFO|Claiming lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 for this additional chassis.
Jan 31 07:32:22 compute-0 ovn_controller[149457]: 2026-01-31T07:32:22Z|00038|binding|INFO|2da07bdf-313d-4a90-a81e-e531c63b3d54: Claiming fa:16:3e:cc:9e:1b 10.100.0.5
Jan 31 07:32:22 compute-0 ovn_controller[149457]: 2026-01-31T07:32:22Z|00039|binding|INFO|Claiming lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 for this additional chassis.
Jan 31 07:32:22 compute-0 ovn_controller[149457]: 2026-01-31T07:32:22Z|00040|binding|INFO|6e847cdf-cab0-4432-ba18-1faa5270e0d7: Claiming fa:16:3e:dc:25:75 19.80.0.160
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.453 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:22 compute-0 ovn_controller[149457]: 2026-01-31T07:32:22Z|00041|binding|INFO|Setting lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 ovn-installed in OVS
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.470 247708 DEBUG oslo_concurrency.lockutils [None req-e9d66543-e38b-4a0e-9287-f2f3bcb2d846 d4520c09fdbc4ac79fbc4c76f049c0fb 7269d3f50e464bf7953583453c06f6c7 - - default default] Lock "f610bafa-64ee-4d38-8be4-7c17cd2b2a99" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:22 compute-0 nova_compute[247704]: 2026-01-31 07:32:22.472 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:22 compute-0 systemd-machined[214448]: New machine qemu-3-instance-00000005.
Jan 31 07:32:22 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000005.
Jan 31 07:32:22 compute-0 systemd-udevd[256727]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:32:22 compute-0 NetworkManager[49108]: <info>  [1769844742.5092] device (tap2da07bdf-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:32:22 compute-0 NetworkManager[49108]: <info>  [1769844742.5101] device (tap2da07bdf-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:32:22 compute-0 agitated_cerf[256663]: {
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:     "0": [
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:         {
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "devices": [
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "/dev/loop3"
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             ],
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "lv_name": "ceph_lv0",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "lv_size": "7511998464",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "name": "ceph_lv0",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "tags": {
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.cluster_name": "ceph",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.crush_device_class": "",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.encrypted": "0",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.osd_id": "0",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.type": "block",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:                 "ceph.vdo": "0"
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             },
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "type": "block",
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:             "vg_name": "ceph_vg0"
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:         }
Jan 31 07:32:22 compute-0 agitated_cerf[256663]:     ]
Jan 31 07:32:22 compute-0 agitated_cerf[256663]: }
Jan 31 07:32:22 compute-0 systemd[1]: libpod-92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09.scope: Deactivated successfully.
Jan 31 07:32:22 compute-0 conmon[256663]: conmon 92a7308416ff84c44c09 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09.scope/container/memory.events
Jan 31 07:32:22 compute-0 podman[256648]: 2026-01-31 07:32:22.558752269 +0000 UTC m=+0.891978454 container died 92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:32:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7586d6c9aab83a9a82e07110701756252e20365b696aa3e6925ce5cb6e4eaa78-merged.mount: Deactivated successfully.
Jan 31 07:32:22 compute-0 podman[256648]: 2026-01-31 07:32:22.632421523 +0000 UTC m=+0.965647708 container remove 92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:32:22 compute-0 systemd[1]: libpod-conmon-92a7308416ff84c44c0967806bfb25b96d9071f33d4fc1f14f32b24313bf8e09.scope: Deactivated successfully.
Jan 31 07:32:22 compute-0 sudo[256542]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:22 compute-0 sudo[256750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:22 compute-0 sudo[256750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:22 compute-0 sudo[256750]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:22 compute-0 sudo[256775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:32:22 compute-0 sudo[256775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:22 compute-0 sudo[256775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:22 compute-0 sudo[256800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:22 compute-0 sudo[256800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:22 compute-0 sudo[256800]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:22 compute-0 sudo[256826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:32:22 compute-0 sudo[256826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:23 compute-0 podman[256933]: 2026-01-31 07:32:23.267757908 +0000 UTC m=+0.051702546 container create 96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:32:23 compute-0 ceph-mon[74496]: pgmap v944: 305 pgs: 305 active+clean; 169 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 31 07:32:23 compute-0 systemd[1]: Started libpod-conmon-96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290.scope.
Jan 31 07:32:23 compute-0 podman[256933]: 2026-01-31 07:32:23.243329039 +0000 UTC m=+0.027273777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:32:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:32:23 compute-0 podman[256933]: 2026-01-31 07:32:23.354303481 +0000 UTC m=+0.138248129 container init 96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:32:23 compute-0 podman[256933]: 2026-01-31 07:32:23.364286992 +0000 UTC m=+0.148231640 container start 96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:32:23 compute-0 podman[256933]: 2026-01-31 07:32:23.367665662 +0000 UTC m=+0.151610310 container attach 96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:32:23 compute-0 condescending_cartwright[256950]: 167 167
Jan 31 07:32:23 compute-0 systemd[1]: libpod-96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290.scope: Deactivated successfully.
Jan 31 07:32:23 compute-0 podman[256933]: 2026-01-31 07:32:23.371767361 +0000 UTC m=+0.155712019 container died 96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:32:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-033460c882df104405c6479132a1ef890920dc3c6f7f3d097951513196cb2f1c-merged.mount: Deactivated successfully.
Jan 31 07:32:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:23 compute-0 podman[256933]: 2026-01-31 07:32:23.414059999 +0000 UTC m=+0.198004647 container remove 96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:32:23 compute-0 systemd[1]: libpod-conmon-96d3741ec81a5f0dd0361b20812c8ab44cf45b89cf29e9ffbca18542e8cb8290.scope: Deactivated successfully.
Jan 31 07:32:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:23.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:23 compute-0 podman[256973]: 2026-01-31 07:32:23.564465301 +0000 UTC m=+0.042662399 container create ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:32:23 compute-0 systemd[1]: Started libpod-conmon-ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4.scope.
Jan 31 07:32:23 compute-0 podman[256973]: 2026-01-31 07:32:23.545278548 +0000 UTC m=+0.023475676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:32:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59793ef45b149dbd6d7741bcdf2de7f8bcba3a7dff77f81598d60a9a8111a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59793ef45b149dbd6d7741bcdf2de7f8bcba3a7dff77f81598d60a9a8111a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59793ef45b149dbd6d7741bcdf2de7f8bcba3a7dff77f81598d60a9a8111a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59793ef45b149dbd6d7741bcdf2de7f8bcba3a7dff77f81598d60a9a8111a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:23 compute-0 podman[256973]: 2026-01-31 07:32:23.670422461 +0000 UTC m=+0.148619579 container init ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dewdney, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:32:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:23.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:23 compute-0 podman[256973]: 2026-01-31 07:32:23.680786031 +0000 UTC m=+0.158983139 container start ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:32:23 compute-0 podman[256973]: 2026-01-31 07:32:23.686358735 +0000 UTC m=+0.164555843 container attach ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:32:23 compute-0 nova_compute[247704]: 2026-01-31 07:32:23.713 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844743.712996, 23c338db-50ed-434c-ac85-8190b9b5f194 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:32:23 compute-0 nova_compute[247704]: 2026-01-31 07:32:23.714 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] VM Started (Lifecycle Event)
Jan 31 07:32:23 compute-0 nova_compute[247704]: 2026-01-31 07:32:23.744 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:24 compute-0 nova_compute[247704]: 2026-01-31 07:32:24.145 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844744.1453745, 23c338db-50ed-434c-ac85-8190b9b5f194 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:32:24 compute-0 nova_compute[247704]: 2026-01-31 07:32:24.146 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] VM Resumed (Lifecycle Event)
Jan 31 07:32:24 compute-0 nova_compute[247704]: 2026-01-31 07:32:24.194 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:24 compute-0 nova_compute[247704]: 2026-01-31 07:32:24.198 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:32:24 compute-0 nova_compute[247704]: 2026-01-31 07:32:24.224 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 31 07:32:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]: {
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]:         "osd_id": 0,
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]:         "type": "bluestore"
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]:     }
Jan 31 07:32:24 compute-0 compassionate_dewdney[256990]: }
Jan 31 07:32:24 compute-0 systemd[1]: libpod-ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4.scope: Deactivated successfully.
Jan 31 07:32:24 compute-0 podman[256973]: 2026-01-31 07:32:24.536979342 +0000 UTC m=+1.015176470 container died ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dewdney, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:32:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c59793ef45b149dbd6d7741bcdf2de7f8bcba3a7dff77f81598d60a9a8111a6-merged.mount: Deactivated successfully.
Jan 31 07:32:24 compute-0 podman[256973]: 2026-01-31 07:32:24.606764222 +0000 UTC m=+1.084961320 container remove ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:32:24 compute-0 systemd[1]: libpod-conmon-ac7933c78cdca2c3dfd1b29701eacb0f6aae0a496eb0e7d8e40ad4cf9e585ce4.scope: Deactivated successfully.
Jan 31 07:32:24 compute-0 sudo[256826]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:32:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:32:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fedca87c-b3f1-4ebf-a272-9c2185fc7320 does not exist
Jan 31 07:32:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d08453d0-e63d-4918-bbd3-7ce1aa5a6e50 does not exist
Jan 31 07:32:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1e77edb7-7d81-4b95-b254-fe63c0a66b11 does not exist
Jan 31 07:32:24 compute-0 sudo[257023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:24 compute-0 sudo[257023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:24 compute-0 sudo[257023]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:24 compute-0 sudo[257048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:32:24 compute-0 sudo[257048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:24 compute-0 sudo[257048]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:32:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:25.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:32:25 compute-0 nova_compute[247704]: 2026-01-31 07:32:25.485 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:25.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:25 compute-0 ceph-mon[74496]: pgmap v945: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 31 07:32:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:32:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 847 KiB/s rd, 1.3 MiB/s wr, 105 op/s
Jan 31 07:32:26 compute-0 sudo[257074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:26 compute-0 sudo[257074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:26 compute-0 sudo[257074]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:26 compute-0 sudo[257099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:26 compute-0 sudo[257099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:26 compute-0 sudo[257099]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:27 compute-0 nova_compute[247704]: 2026-01-31 07:32:27.274 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:27.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:27.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:27 compute-0 ceph-mon[74496]: pgmap v946: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 847 KiB/s rd, 1.3 MiB/s wr, 105 op/s
Jan 31 07:32:27 compute-0 ovn_controller[149457]: 2026-01-31T07:32:27Z|00042|binding|INFO|Claiming lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 for this chassis.
Jan 31 07:32:27 compute-0 ovn_controller[149457]: 2026-01-31T07:32:27Z|00043|binding|INFO|2da07bdf-313d-4a90-a81e-e531c63b3d54: Claiming fa:16:3e:cc:9e:1b 10.100.0.5
Jan 31 07:32:27 compute-0 ovn_controller[149457]: 2026-01-31T07:32:27Z|00044|binding|INFO|Claiming lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 for this chassis.
Jan 31 07:32:27 compute-0 ovn_controller[149457]: 2026-01-31T07:32:27Z|00045|binding|INFO|6e847cdf-cab0-4432-ba18-1faa5270e0d7: Claiming fa:16:3e:dc:25:75 19.80.0.160
Jan 31 07:32:27 compute-0 ovn_controller[149457]: 2026-01-31T07:32:27Z|00046|binding|INFO|Setting lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 up in Southbound
Jan 31 07:32:27 compute-0 ovn_controller[149457]: 2026-01-31T07:32:27Z|00047|binding|INFO|Setting lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 up in Southbound
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.806 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:9e:1b 10.100.0.5'], port_security=['fa:16:3e:cc:9e:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-211383396', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '23c338db-50ed-434c-ac85-8190b9b5f194', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-211383396', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=2da07bdf-313d-4a90-a81e-e531c63b3d54) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.808 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:75 19.80.0.160'], port_security=['fa:16:3e:dc:25:75 19.80.0.160'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['2da07bdf-313d-4a90-a81e-e531c63b3d54'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-70189147', 'neutron:cidrs': '19.80.0.160/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6abbefe0-4d30-4477-876e-e1412d7347f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-70189147', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=00556c7f-96d8-4b10-939b-86f9a6371447, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6e847cdf-cab0-4432-ba18-1faa5270e0d7) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.809 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 2da07bdf-313d-4a90-a81e-e531c63b3d54 in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 bound to our chassis
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.812 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.821 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dc9fd6d5-d901-4ee4-8b11-afa87c7629f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.822 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap272cbcfe-d1 in ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.823 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap272cbcfe-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.823 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[064be742-5529-4c8f-bed7-944c3ecaa455]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.824 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6dbd7d-f23a-4eb7-9e63-34cd003d76d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.832 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[d573a08e-af05-4bfd-81a8-6ab4080f6397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.840 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5e0319-bc47-4eda-850f-a361ba7d9c79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.862 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[97e3ba92-3c08-4cd4-a7dc-283b43d0f434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.869 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3befa9-945b-48a3-ace6-ce6d470ea777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 NetworkManager[49108]: <info>  [1769844747.8704] manager: (tap272cbcfe-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.897 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c9391490-eb22-47b8-95dc-814d13b04ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.902 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c972ea7f-f07d-42bf-8570-18581e295479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 systemd-udevd[257133]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:32:27 compute-0 NetworkManager[49108]: <info>  [1769844747.9264] device (tap272cbcfe-d0): carrier: link connected
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.930 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[10734986-ea9a-4265-ade1-acc798e0132a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.955 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a2364144-f2f3-462c-b79a-9ab54e80f5ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499054, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257144, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.969 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[309441b5-9323-432c-9917-c5e554c95445]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:eac2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499054, 'tstamp': 499054}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257152, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:27.983 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc99e41-9905-4703-aa97-913f4bc9f6b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499054, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257153, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.012 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[424b1c59-7717-4192-b302-d01e77f92bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 nova_compute[247704]: 2026-01-31 07:32:28.028 247708 INFO nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Post operation of migration started
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.066 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[24b45b64-28ee-4cbf-9f24-0e13868b0dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.067 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.068 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.068 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap272cbcfe-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:28 compute-0 kernel: tap272cbcfe-d0: entered promiscuous mode
Jan 31 07:32:28 compute-0 NetworkManager[49108]: <info>  [1769844748.0712] manager: (tap272cbcfe-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 31 07:32:28 compute-0 nova_compute[247704]: 2026-01-31 07:32:28.070 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.073 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap272cbcfe-d0, col_values=(('external_ids', {'iface-id': '8bd64eda-9666-44b9-9b11-431cc2aca18a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:28 compute-0 ovn_controller[149457]: 2026-01-31T07:32:28Z|00048|binding|INFO|Releasing lport 8bd64eda-9666-44b9-9b11-431cc2aca18a from this chassis (sb_readonly=0)
Jan 31 07:32:28 compute-0 nova_compute[247704]: 2026-01-31 07:32:28.076 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.076 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.077 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fcbe2e-0c6e-4c99-8530-b7cfdd375123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.077 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/272cbcfe-dc1b-4319-84a2-27d245d969a3.pid.haproxy
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.078 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'env', 'PROCESS_TAG=haproxy-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/272cbcfe-dc1b-4319-84a2-27d245d969a3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:32:28 compute-0 nova_compute[247704]: 2026-01-31 07:32:28.081 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 420 KiB/s wr, 106 op/s
Jan 31 07:32:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:28 compute-0 podman[257187]: 2026-01-31 07:32:28.423171185 +0000 UTC m=+0.025640768 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:32:28 compute-0 podman[257187]: 2026-01-31 07:32:28.549451885 +0000 UTC m=+0.151921648 container create 08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:32:28 compute-0 systemd[1]: Started libpod-conmon-08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d.scope.
Jan 31 07:32:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d055446ad0f3bfe0fb24585c20d727579f33cffb1884aaa746cefe1f9c4b736/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:28 compute-0 podman[257187]: 2026-01-31 07:32:28.644256938 +0000 UTC m=+0.246726491 container init 08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 07:32:28 compute-0 podman[257187]: 2026-01-31 07:32:28.651160604 +0000 UTC m=+0.253630137 container start 08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:32:28 compute-0 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[257202]: [NOTICE]   (257206) : New worker (257208) forked
Jan 31 07:32:28 compute-0 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[257202]: [NOTICE]   (257206) : Loading success.
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.715 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6e847cdf-cab0-4432-ba18-1faa5270e0d7 in datapath 6abbefe0-4d30-4477-876e-e1412d7347f2 bound to our chassis
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.717 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6abbefe0-4d30-4477-876e-e1412d7347f2
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.726 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[41e6ce36-76ce-444a-8735-aa45838d9b46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.727 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6abbefe0-41 in ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.729 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6abbefe0-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.729 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[276baec4-09c7-4b1c-a896-0ca0eb306a40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.730 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dcecb062-8efb-4707-b1e1-080e03711ae0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.740 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[9e472f29-7aad-4591-ad56-9b6543b452f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.752 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[58d02da3-9eef-4be1-8788-87d27bd61268]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.780 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c16774-263f-415e-b37f-1d593b5ee43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 NetworkManager[49108]: <info>  [1769844748.7879] manager: (tap6abbefe0-40): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 31 07:32:28 compute-0 systemd-udevd[257143]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.787 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba9cd97-0d11-4c86-916f-8bc85dbccb3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.805 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3ba13e-a467-439d-8ece-ceeae515dae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.808 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9e56679b-1e06-4aeb-aca6-45aad0ada704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 NetworkManager[49108]: <info>  [1769844748.8246] device (tap6abbefe0-40): carrier: link connected
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.826 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ce45fac3-1e0d-4e00-a82b-5f42e058de55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.843 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ec14df67-c057-4e07-b5e1-0f394807b42a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6abbefe0-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:8f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499144, 'reachable_time': 28644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257227, 'error': None, 'target': 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.856 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[26299f84-ede1-414f-9a7d-2faa69219478]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:8ff6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499144, 'tstamp': 499144}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257228, 'error': None, 'target': 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.872 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b1f8f5-8483-453f-8972-aff713839d4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6abbefe0-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:8f:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499144, 'reachable_time': 28644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257229, 'error': None, 'target': 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.900 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[654c10ec-bc81-4893-9cc4-8958f70605f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.955 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[835d7bf8-bfe6-4f83-aaca-769be3de8243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.956 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6abbefe0-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.957 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.957 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6abbefe0-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:28 compute-0 nova_compute[247704]: 2026-01-31 07:32:28.959 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:28 compute-0 kernel: tap6abbefe0-40: entered promiscuous mode
Jan 31 07:32:28 compute-0 NetworkManager[49108]: <info>  [1769844748.9604] manager: (tap6abbefe0-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 31 07:32:28 compute-0 nova_compute[247704]: 2026-01-31 07:32:28.962 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.965 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6abbefe0-40, col_values=(('external_ids', {'iface-id': '37e0bde4-4a51-4973-b284-d740caeb19be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:32:28 compute-0 nova_compute[247704]: 2026-01-31 07:32:28.966 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:28 compute-0 ovn_controller[149457]: 2026-01-31T07:32:28Z|00049|binding|INFO|Releasing lport 37e0bde4-4a51-4973-b284-d740caeb19be from this chassis (sb_readonly=0)
Jan 31 07:32:28 compute-0 nova_compute[247704]: 2026-01-31 07:32:28.977 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.978 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6abbefe0-4d30-4477-876e-e1412d7347f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6abbefe0-4d30-4477-876e-e1412d7347f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.980 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b8982d2b-2bff-41a4-a38b-e458d6017697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.981 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-6abbefe0-4d30-4477-876e-e1412d7347f2
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/6abbefe0-4d30-4477-876e-e1412d7347f2.pid.haproxy
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 6abbefe0-4d30-4477-876e-e1412d7347f2
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:32:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:32:28.982 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'env', 'PROCESS_TAG=haproxy-6abbefe0-4d30-4477-876e-e1412d7347f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6abbefe0-4d30-4477-876e-e1412d7347f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:32:29 compute-0 podman[257262]: 2026-01-31 07:32:29.339890024 +0000 UTC m=+0.061077552 container create 7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:32:29 compute-0 systemd[1]: Started libpod-conmon-7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d.scope.
Jan 31 07:32:29 compute-0 podman[257262]: 2026-01-31 07:32:29.308881718 +0000 UTC m=+0.030069256 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:32:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74cfcf56b04203de84d449420fd74cd9034daba32ffa3045588f3799532b3da7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:32:29 compute-0 podman[257262]: 2026-01-31 07:32:29.431665733 +0000 UTC m=+0.152853251 container init 7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 07:32:29 compute-0 podman[257262]: 2026-01-31 07:32:29.43985509 +0000 UTC m=+0.161042618 container start 7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 07:32:29 compute-0 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[257277]: [NOTICE]   (257281) : New worker (257283) forked
Jan 31 07:32:29 compute-0 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[257277]: [NOTICE]   (257281) : Loading success.
Jan 31 07:32:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:29.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:29.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:29 compute-0 ceph-mon[74496]: pgmap v947: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 420 KiB/s wr, 106 op/s
Jan 31 07:32:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 28 KiB/s wr, 96 op/s
Jan 31 07:32:30 compute-0 nova_compute[247704]: 2026-01-31 07:32:30.489 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:30 compute-0 nova_compute[247704]: 2026-01-31 07:32:30.786 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:32:30 compute-0 nova_compute[247704]: 2026-01-31 07:32:30.787 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquired lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:32:30 compute-0 nova_compute[247704]: 2026-01-31 07:32:30.787 247708 DEBUG nova.network.neutron [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:32:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:31.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:31.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:31 compute-0 ceph-mon[74496]: pgmap v948: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 28 KiB/s wr, 96 op/s
Jan 31 07:32:32 compute-0 nova_compute[247704]: 2026-01-31 07:32:32.211 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844737.2096815, f610bafa-64ee-4d38-8be4-7c17cd2b2a99 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:32:32 compute-0 nova_compute[247704]: 2026-01-31 07:32:32.213 247708 INFO nova.compute.manager [-] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] VM Stopped (Lifecycle Event)
Jan 31 07:32:32 compute-0 nova_compute[247704]: 2026-01-31 07:32:32.245 247708 DEBUG nova.compute.manager [None req-85230e63-2a5d-45eb-b76d-e1bd252fe66b - - - - - -] [instance: f610bafa-64ee-4d38-8be4-7c17cd2b2a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:32 compute-0 nova_compute[247704]: 2026-01-31 07:32:32.279 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 88 op/s
Jan 31 07:32:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:33.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:33.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:34 compute-0 ceph-mon[74496]: pgmap v949: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 88 op/s
Jan 31 07:32:34 compute-0 nova_compute[247704]: 2026-01-31 07:32:34.290 247708 DEBUG nova.network.neutron [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:34 compute-0 nova_compute[247704]: 2026-01-31 07:32:34.339 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Releasing lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:32:34 compute-0 nova_compute[247704]: 2026-01-31 07:32:34.387 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:34 compute-0 nova_compute[247704]: 2026-01-31 07:32:34.388 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:34 compute-0 nova_compute[247704]: 2026-01-31 07:32:34.388 247708 DEBUG oslo_concurrency.lockutils [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:34 compute-0 nova_compute[247704]: 2026-01-31 07:32:34.393 247708 INFO nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 31 07:32:34 compute-0 virtqemud[247621]: Domain id=3 name='instance-00000005' uuid=23c338db-50ed-434c-ac85-8190b9b5f194 is tainted: custom-monitor
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 84 op/s
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031657855248464544 of space, bias 1.0, pg target 0.9497356574539363 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:32:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:32:34 compute-0 podman[257294]: 2026-01-31 07:32:34.883819854 +0000 UTC m=+0.056169213 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:32:35 compute-0 nova_compute[247704]: 2026-01-31 07:32:35.401 247708 INFO nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 31 07:32:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:32:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:35.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:32:35 compute-0 nova_compute[247704]: 2026-01-31 07:32:35.492 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:35.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:36 compute-0 ceph-mon[74496]: pgmap v950: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 84 op/s
Jan 31 07:32:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 69 op/s
Jan 31 07:32:36 compute-0 nova_compute[247704]: 2026-01-31 07:32:36.406 247708 INFO nova.virt.libvirt.driver [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 31 07:32:36 compute-0 nova_compute[247704]: 2026-01-31 07:32:36.410 247708 DEBUG nova.compute.manager [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:32:36 compute-0 nova_compute[247704]: 2026-01-31 07:32:36.446 247708 DEBUG nova.objects.instance [None req-25d9d304-600e-4cb2-9be8-860300b9a1aa 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:32:36 compute-0 ovn_controller[149457]: 2026-01-31T07:32:36Z|00050|binding|INFO|Releasing lport 8bd64eda-9666-44b9-9b11-431cc2aca18a from this chassis (sb_readonly=0)
Jan 31 07:32:36 compute-0 ovn_controller[149457]: 2026-01-31T07:32:36Z|00051|binding|INFO|Releasing lport 37e0bde4-4a51-4973-b284-d740caeb19be from this chassis (sb_readonly=0)
Jan 31 07:32:36 compute-0 nova_compute[247704]: 2026-01-31 07:32:36.663 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:36 compute-0 ovn_controller[149457]: 2026-01-31T07:32:36Z|00052|binding|INFO|Releasing lport 8bd64eda-9666-44b9-9b11-431cc2aca18a from this chassis (sb_readonly=0)
Jan 31 07:32:36 compute-0 ovn_controller[149457]: 2026-01-31T07:32:36Z|00053|binding|INFO|Releasing lport 37e0bde4-4a51-4973-b284-d740caeb19be from this chassis (sb_readonly=0)
Jan 31 07:32:36 compute-0 nova_compute[247704]: 2026-01-31 07:32:36.771 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:37 compute-0 nova_compute[247704]: 2026-01-31 07:32:37.281 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:37.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:37.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:38 compute-0 ceph-mon[74496]: pgmap v951: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 69 op/s
Jan 31 07:32:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 173 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 271 KiB/s wr, 47 op/s
Jan 31 07:32:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2490782525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:39.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:32:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:39.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:32:40 compute-0 ceph-mon[74496]: pgmap v952: 305 pgs: 305 active+clean; 173 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 271 KiB/s wr, 47 op/s
Jan 31 07:32:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/529714511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 181 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 72 op/s
Jan 31 07:32:40 compute-0 nova_compute[247704]: 2026-01-31 07:32:40.494 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:41 compute-0 ceph-mon[74496]: pgmap v953: 305 pgs: 305 active+clean; 181 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.0 MiB/s wr, 72 op/s
Jan 31 07:32:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:41.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:42 compute-0 nova_compute[247704]: 2026-01-31 07:32:42.284 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 198 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 644 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 07:32:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:43 compute-0 ceph-mon[74496]: pgmap v954: 305 pgs: 305 active+clean; 198 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 644 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 07:32:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:43.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:32:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:32:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/852389985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:32:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:32:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/852389985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:32:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:45.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:45 compute-0 nova_compute[247704]: 2026-01-31 07:32:45.505 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:45 compute-0 nova_compute[247704]: 2026-01-31 07:32:45.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:45 compute-0 nova_compute[247704]: 2026-01-31 07:32:45.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:32:45 compute-0 nova_compute[247704]: 2026-01-31 07:32:45.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:32:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:45.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:45 compute-0 ceph-mon[74496]: pgmap v955: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:32:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/852389985' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:32:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/852389985' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.756449) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765756497, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 773, "num_deletes": 251, "total_data_size": 1071276, "memory_usage": 1087288, "flush_reason": "Manual Compaction"}
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765764858, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1049459, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21025, "largest_seqno": 21796, "table_properties": {"data_size": 1045501, "index_size": 1674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9031, "raw_average_key_size": 19, "raw_value_size": 1037552, "raw_average_value_size": 2245, "num_data_blocks": 75, "num_entries": 462, "num_filter_entries": 462, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844708, "oldest_key_time": 1769844708, "file_creation_time": 1769844765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 8474 microseconds, and 2466 cpu microseconds.
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.764930) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1049459 bytes OK
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.764954) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.771196) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.771210) EVENT_LOG_v1 {"time_micros": 1769844765771206, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.771231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1067402, prev total WAL file size 1067402, number of live WAL files 2.
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.771812) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1024KB)], [47(8293KB)]
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765771947, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9542218, "oldest_snapshot_seqno": -1}
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4709 keys, 7504509 bytes, temperature: kUnknown
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765855998, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7504509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7473956, "index_size": 17707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117990, "raw_average_key_size": 25, "raw_value_size": 7389612, "raw_average_value_size": 1569, "num_data_blocks": 726, "num_entries": 4709, "num_filter_entries": 4709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.856253) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7504509 bytes
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.859354) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.4 rd, 89.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.1 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(16.2) write-amplify(7.2) OK, records in: 5224, records dropped: 515 output_compression: NoCompression
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.859370) EVENT_LOG_v1 {"time_micros": 1769844765859362, "job": 24, "event": "compaction_finished", "compaction_time_micros": 84139, "compaction_time_cpu_micros": 21721, "output_level": 6, "num_output_files": 1, "total_output_size": 7504509, "num_input_records": 5224, "num_output_records": 4709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765859532, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844765860344, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 31 07:32:45 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.771599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.860440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.860450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.860455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.860466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:45 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:45.860471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:46 compute-0 nova_compute[247704]: 2026-01-31 07:32:46.049 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:32:46 compute-0 nova_compute[247704]: 2026-01-31 07:32:46.050 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:32:46 compute-0 nova_compute[247704]: 2026-01-31 07:32:46.050 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:32:46 compute-0 nova_compute[247704]: 2026-01-31 07:32:46.051 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23c338db-50ed-434c-ac85-8190b9b5f194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:32:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 07:32:46 compute-0 sudo[257322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:46 compute-0 sudo[257322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:46 compute-0 sudo[257322]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:46 compute-0 sudo[257347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:32:46 compute-0 sudo[257347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:32:46 compute-0 sudo[257347]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:47 compute-0 nova_compute[247704]: 2026-01-31 07:32:47.288 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:47.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:47.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:47 compute-0 ceph-mon[74496]: pgmap v956: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 07:32:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 07:32:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:49.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:49.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:49 compute-0 ceph-mon[74496]: pgmap v957: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 07:32:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1117879748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/467322244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:49 compute-0 podman[257374]: 2026-01-31 07:32:49.928993531 +0000 UTC m=+0.101518275 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 07:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:32:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 380 KiB/s rd, 1.9 MiB/s wr, 67 op/s
Jan 31 07:32:50 compute-0 nova_compute[247704]: 2026-01-31 07:32:50.507 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.315 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.347 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.347 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.348 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.348 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.348 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.349 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.349 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.349 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.379 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.380 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.380 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.380 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.380 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:51.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:51.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:51 compute-0 ceph-mon[74496]: pgmap v958: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 380 KiB/s rd, 1.9 MiB/s wr, 67 op/s
Jan 31 07:32:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198377444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.886 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.966 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:32:51 compute-0 nova_compute[247704]: 2026-01-31 07:32:51.967 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.115 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.116 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4651MB free_disk=20.897197723388672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.116 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.117 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.241 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 23c338db-50ed-434c-ac85-8190b9b5f194 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.241 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.242 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.290 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 164 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 217 KiB/s rd, 1.1 MiB/s wr, 42 op/s
Jan 31 07:32:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:32:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700459329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.691 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.696 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.711 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.742 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.742 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:32:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/198377444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2472366945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2348250006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2700459329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.955 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.956 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:52 compute-0 nova_compute[247704]: 2026-01-31 07:32:52.956 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:32:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.434504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773434610, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 323, "num_deletes": 256, "total_data_size": 147008, "memory_usage": 154984, "flush_reason": "Manual Compaction"}
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773440619, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 146562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21797, "largest_seqno": 22119, "table_properties": {"data_size": 144471, "index_size": 255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4893, "raw_average_key_size": 16, "raw_value_size": 140335, "raw_average_value_size": 483, "num_data_blocks": 12, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844766, "oldest_key_time": 1769844766, "file_creation_time": 1769844773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 6168 microseconds, and 1617 cpu microseconds.
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.440686) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 146562 bytes OK
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.440711) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.444597) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.444624) EVENT_LOG_v1 {"time_micros": 1769844773444615, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.444652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 144719, prev total WAL file size 144719, number of live WAL files 2.
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.445265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(143KB)], [50(7328KB)]
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773445325, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 7651071, "oldest_snapshot_seqno": -1}
Jan 31 07:32:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:32:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:53.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4479 keys, 7530751 bytes, temperature: kUnknown
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773645242, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7530751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7501052, "index_size": 17429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 114424, "raw_average_key_size": 25, "raw_value_size": 7420027, "raw_average_value_size": 1656, "num_data_blocks": 710, "num_entries": 4479, "num_filter_entries": 4479, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844773, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.645524) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7530751 bytes
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.682602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.3 rd, 37.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 7.2 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(103.6) write-amplify(51.4) OK, records in: 4999, records dropped: 520 output_compression: NoCompression
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.682651) EVENT_LOG_v1 {"time_micros": 1769844773682633, "job": 26, "event": "compaction_finished", "compaction_time_micros": 200002, "compaction_time_cpu_micros": 30250, "output_level": 6, "num_output_files": 1, "total_output_size": 7530751, "num_input_records": 4999, "num_output_records": 4479, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773682869, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844773683888, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.445128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.683983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.683990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.683993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.683996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:32:53.683999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:32:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:53.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:54 compute-0 ceph-mon[74496]: pgmap v959: 305 pgs: 305 active+clean; 164 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 217 KiB/s rd, 1.1 MiB/s wr, 42 op/s
Jan 31 07:32:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/399060722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:32:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 463 KiB/s rd, 27 KiB/s wr, 51 op/s
Jan 31 07:32:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:55.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:55 compute-0 nova_compute[247704]: 2026-01-31 07:32:55.509 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:55.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:56 compute-0 ceph-mon[74496]: pgmap v960: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 463 KiB/s rd, 27 KiB/s wr, 51 op/s
Jan 31 07:32:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 11 KiB/s wr, 46 op/s
Jan 31 07:32:57 compute-0 nova_compute[247704]: 2026-01-31 07:32:57.306 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:32:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:57.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:57.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:58 compute-0 ceph-mon[74496]: pgmap v961: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 11 KiB/s wr, 46 op/s
Jan 31 07:32:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:32:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:32:59 compute-0 ceph-mon[74496]: pgmap v962: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:32:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:59.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:32:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:32:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:32:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:59.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:33:00 compute-0 nova_compute[247704]: 2026-01-31 07:33:00.512 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:01.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:01 compute-0 ceph-mon[74496]: pgmap v963: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:33:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:01.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:02 compute-0 nova_compute[247704]: 2026-01-31 07:33:02.345 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:33:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:03.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:03 compute-0 ceph-mon[74496]: pgmap v964: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.9 KiB/s wr, 49 op/s
Jan 31 07:33:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:03.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.4 KiB/s wr, 37 op/s
Jan 31 07:33:05 compute-0 nova_compute[247704]: 2026-01-31 07:33:05.513 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:05.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:05.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:05 compute-0 podman[257453]: 2026-01-31 07:33:05.915253281 +0000 UTC m=+0.082505247 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:33:05 compute-0 ceph-mon[74496]: pgmap v965: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 9.4 KiB/s wr, 37 op/s
Jan 31 07:33:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1858440281' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 682 B/s wr, 5 op/s
Jan 31 07:33:07 compute-0 sudo[257472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:07 compute-0 sudo[257472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:07 compute-0 sudo[257472]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:07 compute-0 sudo[257497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:07 compute-0 sudo[257497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:07 compute-0 sudo[257497]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:07 compute-0 nova_compute[247704]: 2026-01-31 07:33:07.397 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:07.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:07.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:08 compute-0 ceph-mon[74496]: pgmap v966: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 682 B/s wr, 5 op/s
Jan 31 07:33:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/39252063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/802696960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 161 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.1 MiB/s wr, 29 op/s
Jan 31 07:33:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:09.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:09.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:10 compute-0 ceph-mon[74496]: pgmap v967: 305 pgs: 305 active+clean; 161 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.1 MiB/s wr, 29 op/s
Jan 31 07:33:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 181 MiB data, 337 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 3.0 MiB/s wr, 47 op/s
Jan 31 07:33:10 compute-0 nova_compute[247704]: 2026-01-31 07:33:10.516 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:11.142 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:11.143 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:11.144 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2606589484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:33:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:11.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:33:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:11.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:12 compute-0 ceph-mon[74496]: pgmap v968: 305 pgs: 305 active+clean; 181 MiB data, 337 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 3.0 MiB/s wr, 47 op/s
Jan 31 07:33:12 compute-0 nova_compute[247704]: 2026-01-31 07:33:12.400 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Jan 31 07:33:12 compute-0 nova_compute[247704]: 2026-01-31 07:33:12.855 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:12 compute-0 nova_compute[247704]: 2026-01-31 07:33:12.856 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:12 compute-0 nova_compute[247704]: 2026-01-31 07:33:12.888 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.010 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.010 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.015 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.015 247708 INFO nova.compute.claims [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.186 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:13 compute-0 ceph-mon[74496]: pgmap v969: 305 pgs: 305 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Jan 31 07:33:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1621728046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:13.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174104288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.674 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.681 247708 DEBUG nova.compute.provider_tree [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.703 247708 DEBUG nova.scheduler.client.report [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.732 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.733 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:33:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:13.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.811 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.812 247708 DEBUG nova.network.neutron [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.839 247708 INFO nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.870 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:33:13 compute-0 nova_compute[247704]: 2026-01-31 07:33:13.921 247708 INFO nova.virt.block_device [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Booting with volume 802e207e-cc7b-4779-89dc-d399ba68dc38 at /dev/vda
Jan 31 07:33:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:14.021 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:33:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:14.022 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.023 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.076 247708 DEBUG nova.policy [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0a59df5da6284e4e8764816e1f8dfaa3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.081 247708 DEBUG os_brick.utils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.082 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.103 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.104 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[e08301c9-89c6-403e-b117-4e4063870287]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.106 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.114 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.114 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[625c4668-d7ce-4383-b99e-b95360d1aa39]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.117 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.128 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.129 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[875775de-a1f9-4554-ae2a-e57d8cfc5e12]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.131 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[e38b4d35-ce90-46ac-92eb-ff67a61b811a]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.132 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.172 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "nvme version" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.175 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.175 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.176 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.177 247708 DEBUG os_brick.utils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:33:14 compute-0 nova_compute[247704]: 2026-01-31 07:33:14.177 247708 DEBUG nova.virt.block_device [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating existing volume attachment record: 574cf48c-3357-4054-9e8a-58b071261019 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:33:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4094781662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4174104288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 251 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.149 247708 DEBUG nova.network.neutron [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Successfully created port: cc9a4557-33da-44f5-87b4-ca945cbc819c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:33:15 compute-0 ceph-mon[74496]: pgmap v970: 305 pgs: 305 active+clean; 251 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Jan 31 07:33:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/624948120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3348589074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.519 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:15.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.540 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.541 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.542 247708 INFO nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Creating image(s)
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.542 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.542 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Ensure instance console log exists: /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.543 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.543 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:15 compute-0 nova_compute[247704]: 2026-01-31 07:33:15.543 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:15.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:16.024 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:16 compute-0 nova_compute[247704]: 2026-01-31 07:33:16.077 247708 DEBUG nova.network.neutron [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Successfully updated port: cc9a4557-33da-44f5-87b4-ca945cbc819c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:33:16 compute-0 nova_compute[247704]: 2026-01-31 07:33:16.099 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:16 compute-0 nova_compute[247704]: 2026-01-31 07:33:16.099 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquired lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:16 compute-0 nova_compute[247704]: 2026-01-31 07:33:16.100 247708 DEBUG nova.network.neutron [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:33:16 compute-0 nova_compute[247704]: 2026-01-31 07:33:16.198 247708 DEBUG nova.compute.manager [req-88d2ea47-a525-44f9-abbe-a55344a980ac req-701b6da6-ff37-4bee-a832-0669267c0197 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-changed-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:16 compute-0 nova_compute[247704]: 2026-01-31 07:33:16.198 247708 DEBUG nova.compute.manager [req-88d2ea47-a525-44f9-abbe-a55344a980ac req-701b6da6-ff37-4bee-a832-0669267c0197 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Refreshing instance network info cache due to event network-changed-cc9a4557-33da-44f5-87b4-ca945cbc819c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:33:16 compute-0 nova_compute[247704]: 2026-01-31 07:33:16.199 247708 DEBUG oslo_concurrency.lockutils [req-88d2ea47-a525-44f9-abbe-a55344a980ac req-701b6da6-ff37-4bee-a832-0669267c0197 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 260 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 166 op/s
Jan 31 07:33:16 compute-0 nova_compute[247704]: 2026-01-31 07:33:16.455 247708 DEBUG nova.network.neutron [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.380 247708 DEBUG nova.network.neutron [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.401 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Releasing lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.402 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Instance network_info: |[{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.403 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.404 247708 DEBUG oslo_concurrency.lockutils [req-88d2ea47-a525-44f9-abbe-a55344a980ac req-701b6da6-ff37-4bee-a832-0669267c0197 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.404 247708 DEBUG nova.network.neutron [req-88d2ea47-a525-44f9-abbe-a55344a980ac req-701b6da6-ff37-4bee-a832-0669267c0197 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Refreshing network info cache for port cc9a4557-33da-44f5-87b4-ca945cbc819c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.408 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Start _get_guest_xml network_info=[{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '574cf48c-3357-4054-9e8a-58b071261019', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-802e207e-cc7b-4779-89dc-d399ba68dc38', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '802e207e-cc7b-4779-89dc-d399ba68dc38', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '4444d8df-265a-48a7-a945-08eb55a365e1', 'attached_at': '', 'detached_at': '', 'volume_id': '802e207e-cc7b-4779-89dc-d399ba68dc38', 'serial': '802e207e-cc7b-4779-89dc-d399ba68dc38'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.412 247708 WARNING nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.419 247708 DEBUG nova.virt.libvirt.host [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.420 247708 DEBUG nova.virt.libvirt.host [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.423 247708 DEBUG nova.virt.libvirt.host [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.424 247708 DEBUG nova.virt.libvirt.host [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.425 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.426 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.426 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.426 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.426 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.427 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.427 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.427 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.427 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.428 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.428 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.428 247708 DEBUG nova.virt.hardware [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.465 247708 DEBUG nova.storage.rbd_utils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 4444d8df-265a-48a7-a945-08eb55a365e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.470 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:17 compute-0 ceph-mon[74496]: pgmap v971: 305 pgs: 305 active+clean; 260 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 166 op/s
Jan 31 07:33:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2707124294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2950240011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:33:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:17.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:33:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:17.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389247019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.903 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.937 247708 DEBUG nova.virt.libvirt.vif [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:33:13Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.937 247708 DEBUG nova.network.os_vif_util [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.939 247708 DEBUG nova.network.os_vif_util [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.942 247708 DEBUG nova.objects.instance [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lazy-loading 'pci_devices' on Instance uuid 4444d8df-265a-48a7-a945-08eb55a365e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.960 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <uuid>4444d8df-265a-48a7-a945-08eb55a365e1</uuid>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <name>instance-00000009</name>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <nova:name>tempest-LiveMigrationTest-server-396476796</nova:name>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:33:17</nova:creationTime>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <nova:user uuid="0a59df5da6284e4e8764816e1f8dfaa3">tempest-LiveMigrationTest-48073594-project-member</nova:user>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <nova:project uuid="dfb4d4079ac944b288d5e285ce1de95a">tempest-LiveMigrationTest-48073594</nova:project>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <nova:port uuid="cc9a4557-33da-44f5-87b4-ca945cbc819c">
Jan 31 07:33:17 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <system>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <entry name="serial">4444d8df-265a-48a7-a945-08eb55a365e1</entry>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <entry name="uuid">4444d8df-265a-48a7-a945-08eb55a365e1</entry>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </system>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <os>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   </os>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <features>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   </features>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4444d8df-265a-48a7-a945-08eb55a365e1_disk.config">
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       </source>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-802e207e-cc7b-4779-89dc-d399ba68dc38">
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       </source>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:33:17 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <serial>802e207e-cc7b-4779-89dc-d399ba68dc38</serial>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:3c:9d:19"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <target dev="tapcc9a4557-33"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/console.log" append="off"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <video>
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </video>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:33:17 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:33:17 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:33:17 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:33:17 compute-0 nova_compute[247704]: </domain>
Jan 31 07:33:17 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.961 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Preparing to wait for external event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.962 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.962 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.962 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.963 247708 DEBUG nova.virt.libvirt.vif [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:33:13Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.963 247708 DEBUG nova.network.os_vif_util [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.964 247708 DEBUG nova.network.os_vif_util [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.964 247708 DEBUG os_vif [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.965 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.965 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.966 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.969 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.969 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc9a4557-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.969 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc9a4557-33, col_values=(('external_ids', {'iface-id': 'cc9a4557-33da-44f5-87b4-ca945cbc819c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:9d:19', 'vm-uuid': '4444d8df-265a-48a7-a945-08eb55a365e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.971 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:17 compute-0 NetworkManager[49108]: <info>  [1769844797.9727] manager: (tapcc9a4557-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.975 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.980 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:17 compute-0 nova_compute[247704]: 2026-01-31 07:33:17.981 247708 INFO os_vif [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33')
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.044 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.045 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.046 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] No VIF found with MAC fa:16:3e:3c:9d:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.046 247708 INFO nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Using config drive
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.085 247708 DEBUG nova.storage.rbd_utils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 4444d8df-265a-48a7-a945-08eb55a365e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 271 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 182 op/s
Jan 31 07:33:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2389247019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.737 247708 INFO nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Creating config drive at /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/disk.config
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.741 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_gkij3zy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.861 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_gkij3zy" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.900 247708 DEBUG nova.storage.rbd_utils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] rbd image 4444d8df-265a-48a7-a945-08eb55a365e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:18 compute-0 nova_compute[247704]: 2026-01-31 07:33:18.905 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/disk.config 4444d8df-265a-48a7-a945-08eb55a365e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.046 247708 DEBUG oslo_concurrency.processutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/disk.config 4444d8df-265a-48a7-a945-08eb55a365e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.047 247708 INFO nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deleting local config drive /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/disk.config because it was imported into RBD.
Jan 31 07:33:19 compute-0 kernel: tapcc9a4557-33: entered promiscuous mode
Jan 31 07:33:19 compute-0 NetworkManager[49108]: <info>  [1769844799.0796] manager: (tapcc9a4557-33): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 31 07:33:19 compute-0 ovn_controller[149457]: 2026-01-31T07:33:19Z|00054|binding|INFO|Claiming lport cc9a4557-33da-44f5-87b4-ca945cbc819c for this chassis.
Jan 31 07:33:19 compute-0 ovn_controller[149457]: 2026-01-31T07:33:19Z|00055|binding|INFO|cc9a4557-33da-44f5-87b4-ca945cbc819c: Claiming fa:16:3e:3c:9d:19 10.100.0.11
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.081 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:19 compute-0 ovn_controller[149457]: 2026-01-31T07:33:19Z|00056|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c ovn-installed in OVS
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.089 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.091 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:19 compute-0 ovn_controller[149457]: 2026-01-31T07:33:19Z|00057|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c up in Southbound
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.097 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:9d:19 10.100.0.11'], port_security=['fa:16:3e:3c:9d:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4444d8df-265a-48a7-a945-08eb55a365e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=cc9a4557-33da-44f5-87b4-ca945cbc819c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.098 160028 INFO neutron.agent.ovn.metadata.agent [-] Port cc9a4557-33da-44f5-87b4-ca945cbc819c in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 bound to our chassis
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.100 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 07:33:19 compute-0 systemd-machined[214448]: New machine qemu-4-instance-00000009.
Jan 31 07:33:19 compute-0 systemd-udevd[257671]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.112 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b80ba99f-dba7-4220-b722-291112ff4481]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:19 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000009.
Jan 31 07:33:19 compute-0 NetworkManager[49108]: <info>  [1769844799.1213] device (tapcc9a4557-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:33:19 compute-0 NetworkManager[49108]: <info>  [1769844799.1220] device (tapcc9a4557-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.135 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[16051e55-91c7-4d13-a590-1c50888285f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.139 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2e60e444-27f3-477a-8ccd-77013c9a07b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.158 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4a173e-1449-4cba-90e5-18328f365a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.170 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[23663707-6a43-44d7-8c94-99bb825ae5a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1162, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 5, 'rx_bytes': 1162, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499054, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257683, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.180 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4b7673-c369-4166-8bfc-762d1dd67b12]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap272cbcfe-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499065, 'tstamp': 499065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257684, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap272cbcfe-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499067, 'tstamp': 499067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257684, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.182 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.184 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.185 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.185 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap272cbcfe-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.185 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.186 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap272cbcfe-d0, col_values=(('external_ids', {'iface-id': '8bd64eda-9666-44b9-9b11-431cc2aca18a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:19.186 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.223 247708 DEBUG nova.network.neutron [req-88d2ea47-a525-44f9-abbe-a55344a980ac req-701b6da6-ff37-4bee-a832-0669267c0197 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updated VIF entry in instance network info cache for port cc9a4557-33da-44f5-87b4-ca945cbc819c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.223 247708 DEBUG nova.network.neutron [req-88d2ea47-a525-44f9-abbe-a55344a980ac req-701b6da6-ff37-4bee-a832-0669267c0197 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.245 247708 DEBUG oslo_concurrency.lockutils [req-88d2ea47-a525-44f9-abbe-a55344a980ac req-701b6da6-ff37-4bee-a832-0669267c0197 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.385 247708 DEBUG nova.compute.manager [req-2474e8b5-0a83-41c3-9a4b-86c6141e4357 req-829abebd-bfb8-4764-a7f4-6fedf0717892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.386 247708 DEBUG oslo_concurrency.lockutils [req-2474e8b5-0a83-41c3-9a4b-86c6141e4357 req-829abebd-bfb8-4764-a7f4-6fedf0717892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.386 247708 DEBUG oslo_concurrency.lockutils [req-2474e8b5-0a83-41c3-9a4b-86c6141e4357 req-829abebd-bfb8-4764-a7f4-6fedf0717892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.386 247708 DEBUG oslo_concurrency.lockutils [req-2474e8b5-0a83-41c3-9a4b-86c6141e4357 req-829abebd-bfb8-4764-a7f4-6fedf0717892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.386 247708 DEBUG nova.compute.manager [req-2474e8b5-0a83-41c3-9a4b-86c6141e4357 req-829abebd-bfb8-4764-a7f4-6fedf0717892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Processing event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:33:19 compute-0 ceph-mon[74496]: pgmap v972: 305 pgs: 305 active+clean; 271 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 182 op/s
Jan 31 07:33:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:19.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.851 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844799.85054, 4444d8df-265a-48a7-a945-08eb55a365e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.851 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Started (Lifecycle Event)
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.854 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.857 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.861 247708 INFO nova.virt.libvirt.driver [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Instance spawned successfully.
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.861 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.883 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.887 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.940 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.941 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.941 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.942 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.943 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.944 247708 DEBUG nova.virt.libvirt.driver [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.951 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.952 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844799.8506768, 4444d8df-265a-48a7-a945-08eb55a365e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.952 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Paused (Lifecycle Event)
Jan 31 07:33:19 compute-0 nova_compute[247704]: 2026-01-31 07:33:19.998 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:33:19
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'volumes']
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.004 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844799.856651, 4444d8df-265a-48a7-a945-08eb55a365e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.004 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Resumed (Lifecycle Event)
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.025 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.028 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.034 247708 INFO nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Took 4.49 seconds to spawn the instance on the hypervisor.
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.034 247708 DEBUG nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.047 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.099 247708 INFO nova.compute.manager [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Took 7.14 seconds to build instance.
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.122 247708 DEBUG oslo_concurrency.lockutils [None req-8647ebe3-03ed-410e-bfeb-71b8c8f72fb3 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:33:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 295 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.4 MiB/s wr, 228 op/s
Jan 31 07:33:20 compute-0 nova_compute[247704]: 2026-01-31 07:33:20.522 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:20 compute-0 podman[257729]: 2026-01-31 07:33:20.916647605 +0000 UTC m=+0.089427254 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.131 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "425ccf69-9d5a-48da-9def-1744abd51e09" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.131 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.171 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.275 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.276 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.286 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.287 247708 INFO nova.compute.claims [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.515 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.567 247708 DEBUG nova.compute.manager [req-e62b2136-a04b-4248-bb70-fcf3770fad1e req-d4cdd7c8-f987-40b8-b295-0e67393d118f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.568 247708 DEBUG oslo_concurrency.lockutils [req-e62b2136-a04b-4248-bb70-fcf3770fad1e req-d4cdd7c8-f987-40b8-b295-0e67393d118f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.569 247708 DEBUG oslo_concurrency.lockutils [req-e62b2136-a04b-4248-bb70-fcf3770fad1e req-d4cdd7c8-f987-40b8-b295-0e67393d118f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.569 247708 DEBUG oslo_concurrency.lockutils [req-e62b2136-a04b-4248-bb70-fcf3770fad1e req-d4cdd7c8-f987-40b8-b295-0e67393d118f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.569 247708 DEBUG nova.compute.manager [req-e62b2136-a04b-4248-bb70-fcf3770fad1e req-d4cdd7c8-f987-40b8-b295-0e67393d118f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.570 247708 WARNING nova.compute.manager [req-e62b2136-a04b-4248-bb70-fcf3770fad1e req-d4cdd7c8-f987-40b8-b295-0e67393d118f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state None.
Jan 31 07:33:21 compute-0 ceph-mon[74496]: pgmap v973: 305 pgs: 305 active+clean; 295 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.4 MiB/s wr, 228 op/s
Jan 31 07:33:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1758955280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.974 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.980 247708 DEBUG nova.compute.provider_tree [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:33:21 compute-0 nova_compute[247704]: 2026-01-31 07:33:21.996 247708 DEBUG nova.scheduler.client.report [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.039 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.040 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.114 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.115 247708 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.155 247708 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.190 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.330 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.332 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.333 247708 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Creating image(s)
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.356 247708 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 425ccf69-9d5a-48da-9def-1744abd51e09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.394 247708 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 425ccf69-9d5a-48da-9def-1744abd51e09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 315 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.447 247708 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 425ccf69-9d5a-48da-9def-1744abd51e09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.451 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.474 247708 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Automatically allocating a network for project ca84ce9280d74b4588f89bf679f563fa. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.529 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.530 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.531 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.532 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.566 247708 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 425ccf69-9d5a-48da-9def-1744abd51e09_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.571 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 425ccf69-9d5a-48da-9def-1744abd51e09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2670094037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1758955280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3803425268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:22 compute-0 nova_compute[247704]: 2026-01-31 07:33:22.972 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:23 compute-0 nova_compute[247704]: 2026-01-31 07:33:23.034 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 425ccf69-9d5a-48da-9def-1744abd51e09_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:23 compute-0 nova_compute[247704]: 2026-01-31 07:33:23.493 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Check if temp file /var/lib/nova/instances/tmp603267h0 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 31 07:33:23 compute-0 nova_compute[247704]: 2026-01-31 07:33:23.494 247708 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp603267h0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 31 07:33:23 compute-0 nova_compute[247704]: 2026-01-31 07:33:23.500 247708 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] resizing rbd image 425ccf69-9d5a-48da-9def-1744abd51e09_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:33:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:23.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:23 compute-0 ceph-mon[74496]: pgmap v974: 305 pgs: 305 active+clean; 315 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Jan 31 07:33:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:24 compute-0 nova_compute[247704]: 2026-01-31 07:33:24.016 247708 DEBUG nova.objects.instance [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'migration_context' on Instance uuid 425ccf69-9d5a-48da-9def-1744abd51e09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:24 compute-0 nova_compute[247704]: 2026-01-31 07:33:24.053 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:33:24 compute-0 nova_compute[247704]: 2026-01-31 07:33:24.054 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Ensure instance console log exists: /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:33:24 compute-0 nova_compute[247704]: 2026-01-31 07:33:24.055 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:24 compute-0 nova_compute[247704]: 2026-01-31 07:33:24.056 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:24 compute-0 nova_compute[247704]: 2026-01-31 07:33:24.056 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 380 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 7.1 MiB/s wr, 404 op/s
Jan 31 07:33:25 compute-0 sudo[257947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:25 compute-0 sudo[257947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:25 compute-0 sudo[257947]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:25 compute-0 sudo[257972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:33:25 compute-0 sudo[257972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:25 compute-0 sudo[257972]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:25 compute-0 sudo[257998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:25 compute-0 sudo[257998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:25 compute-0 sudo[257998]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:25 compute-0 sudo[258023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:33:25 compute-0 sudo[258023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:25 compute-0 nova_compute[247704]: 2026-01-31 07:33:25.524 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:25.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:25 compute-0 ceph-mon[74496]: pgmap v975: 305 pgs: 305 active+clean; 380 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 7.1 MiB/s wr, 404 op/s
Jan 31 07:33:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:25.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:25 compute-0 sudo[258023]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:33:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:33:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:33:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:33:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:33:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7f1295e7-f4a6-4acb-87c9-45f550cc8758 does not exist
Jan 31 07:33:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ffd4ad02-5123-4903-bc5a-82bc0b5bc476 does not exist
Jan 31 07:33:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 66369894-91e2-4080-82d5-dd07d6285897 does not exist
Jan 31 07:33:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:33:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:33:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:33:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:33:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:33:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:33:26 compute-0 sudo[258079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1099092097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:26 compute-0 sudo[258079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:26 compute-0 sudo[258079]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:26 compute-0 sudo[258104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:33:26 compute-0 sudo[258104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:26 compute-0 sudo[258104]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:26 compute-0 sudo[258129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:26 compute-0 sudo[258129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:26 compute-0 sudo[258129]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:26 compute-0 sudo[258154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:33:26 compute-0 sudo[258154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 424 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 8.3 MiB/s wr, 374 op/s
Jan 31 07:33:26 compute-0 podman[258219]: 2026-01-31 07:33:26.558350739 +0000 UTC m=+0.088021781 container create ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:33:26 compute-0 podman[258219]: 2026-01-31 07:33:26.496427248 +0000 UTC m=+0.026098330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:33:26 compute-0 systemd[1]: Started libpod-conmon-ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270.scope.
Jan 31 07:33:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:33:26 compute-0 podman[258219]: 2026-01-31 07:33:26.707974201 +0000 UTC m=+0.237645273 container init ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:33:26 compute-0 podman[258219]: 2026-01-31 07:33:26.717154571 +0000 UTC m=+0.246825613 container start ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:33:26 compute-0 gracious_tharp[258236]: 167 167
Jan 31 07:33:26 compute-0 systemd[1]: libpod-ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270.scope: Deactivated successfully.
Jan 31 07:33:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:33:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:33:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:33:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:33:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:33:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1099092097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:26 compute-0 podman[258219]: 2026-01-31 07:33:26.81056465 +0000 UTC m=+0.340235732 container attach ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:33:26 compute-0 podman[258219]: 2026-01-31 07:33:26.811353889 +0000 UTC m=+0.341024971 container died ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5e3f047ff1fd79d910d4841838e097ae9b033060252a556fed755acd3ccff52-merged.mount: Deactivated successfully.
Jan 31 07:33:26 compute-0 podman[258219]: 2026-01-31 07:33:26.944400092 +0000 UTC m=+0.474071174 container remove ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_tharp, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:33:26 compute-0 systemd[1]: libpod-conmon-ac57a030a25cb5231d52e9d0d0623cf6178176315bc7baa36b36831eab5a7270.scope: Deactivated successfully.
Jan 31 07:33:27 compute-0 podman[258260]: 2026-01-31 07:33:27.137116111 +0000 UTC m=+0.048459217 container create de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:33:27 compute-0 systemd[1]: Started libpod-conmon-de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37.scope.
Jan 31 07:33:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:33:27 compute-0 podman[258260]: 2026-01-31 07:33:27.115717906 +0000 UTC m=+0.027061002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65368d632cba213767720a4f2c68e3acf3e8b643a54b3078d5e33148c991713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65368d632cba213767720a4f2c68e3acf3e8b643a54b3078d5e33148c991713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65368d632cba213767720a4f2c68e3acf3e8b643a54b3078d5e33148c991713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65368d632cba213767720a4f2c68e3acf3e8b643a54b3078d5e33148c991713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65368d632cba213767720a4f2c68e3acf3e8b643a54b3078d5e33148c991713/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:27 compute-0 podman[258260]: 2026-01-31 07:33:27.248263727 +0000 UTC m=+0.159606823 container init de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:33:27 compute-0 sudo[258276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:27 compute-0 podman[258260]: 2026-01-31 07:33:27.255496641 +0000 UTC m=+0.166839697 container start de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:33:27 compute-0 sudo[258276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:27 compute-0 sudo[258276]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:27 compute-0 podman[258260]: 2026-01-31 07:33:27.26127561 +0000 UTC m=+0.172618766 container attach de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_keldysh, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:33:27 compute-0 sudo[258308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:27 compute-0 sudo[258308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:27 compute-0 sudo[258308]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:27.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:27.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:27 compute-0 nova_compute[247704]: 2026-01-31 07:33:27.975 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:28 compute-0 nostalgic_keldysh[258284]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:33:28 compute-0 nostalgic_keldysh[258284]: --> relative data size: 1.0
Jan 31 07:33:28 compute-0 nostalgic_keldysh[258284]: --> All data devices are unavailable
Jan 31 07:33:28 compute-0 systemd[1]: libpod-de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37.scope: Deactivated successfully.
Jan 31 07:33:28 compute-0 podman[258260]: 2026-01-31 07:33:28.047073187 +0000 UTC m=+0.958416253 container died de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_keldysh, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:33:28 compute-0 ceph-mon[74496]: pgmap v976: 305 pgs: 305 active+clean; 424 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 8.3 MiB/s wr, 374 op/s
Jan 31 07:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65368d632cba213767720a4f2c68e3acf3e8b643a54b3078d5e33148c991713-merged.mount: Deactivated successfully.
Jan 31 07:33:28 compute-0 podman[258260]: 2026-01-31 07:33:28.1377731 +0000 UTC m=+1.049116166 container remove de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_keldysh, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:33:28 compute-0 systemd[1]: libpod-conmon-de154e75a584119d7425bff355b030affa0a01188040e6b86450a0668d7a8b37.scope: Deactivated successfully.
Jan 31 07:33:28 compute-0 sudo[258154]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:28 compute-0 sudo[258356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:28 compute-0 sudo[258356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:28 compute-0 sudo[258356]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:28 compute-0 sudo[258381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:33:28 compute-0 sudo[258381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:28 compute-0 sudo[258381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:28 compute-0 sudo[258406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:28 compute-0 sudo[258406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:28 compute-0 sudo[258406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:28 compute-0 sudo[258431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:33:28 compute-0 sudo[258431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 459 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.8 MiB/s wr, 421 op/s
Jan 31 07:33:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:28 compute-0 podman[258496]: 2026-01-31 07:33:28.772998123 +0000 UTC m=+0.089189369 container create c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:33:28 compute-0 podman[258496]: 2026-01-31 07:33:28.721999544 +0000 UTC m=+0.038190840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:33:28 compute-0 systemd[1]: Started libpod-conmon-c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3.scope.
Jan 31 07:33:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:33:28 compute-0 podman[258496]: 2026-01-31 07:33:28.895924801 +0000 UTC m=+0.212116107 container init c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:33:28 compute-0 podman[258496]: 2026-01-31 07:33:28.905603774 +0000 UTC m=+0.221795000 container start c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:33:28 compute-0 reverent_driscoll[258513]: 167 167
Jan 31 07:33:28 compute-0 systemd[1]: libpod-c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3.scope: Deactivated successfully.
Jan 31 07:33:28 compute-0 podman[258496]: 2026-01-31 07:33:28.921448036 +0000 UTC m=+0.237639342 container attach c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:33:28 compute-0 podman[258496]: 2026-01-31 07:33:28.92329204 +0000 UTC m=+0.239483256 container died c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-335a2218bb5e1136a624b500a73bd2a1225678d8da673972f8665a72e72082a2-merged.mount: Deactivated successfully.
Jan 31 07:33:29 compute-0 podman[258496]: 2026-01-31 07:33:29.255461816 +0000 UTC m=+0.571653032 container remove c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:33:29 compute-0 systemd[1]: libpod-conmon-c4d542a664cf0dc5a1c3a30a9d606913624f06b1de2a2ecd19f55af340efdaf3.scope: Deactivated successfully.
Jan 31 07:33:29 compute-0 podman[258538]: 2026-01-31 07:33:29.438301238 +0000 UTC m=+0.038583620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:33:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:29 compute-0 podman[258538]: 2026-01-31 07:33:29.551167625 +0000 UTC m=+0.151450017 container create 23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_einstein, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:33:29 compute-0 systemd[1]: Started libpod-conmon-23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253.scope.
Jan 31 07:33:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc857cf833c7780fc4b85dd20b0b0add2833f8cf5417db64603ccf6a482fdec1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc857cf833c7780fc4b85dd20b0b0add2833f8cf5417db64603ccf6a482fdec1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc857cf833c7780fc4b85dd20b0b0add2833f8cf5417db64603ccf6a482fdec1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc857cf833c7780fc4b85dd20b0b0add2833f8cf5417db64603ccf6a482fdec1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:29 compute-0 podman[258538]: 2026-01-31 07:33:29.670643052 +0000 UTC m=+0.270925444 container init 23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:33:29 compute-0 podman[258538]: 2026-01-31 07:33:29.676126614 +0000 UTC m=+0.276408966 container start 23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:33:29 compute-0 podman[258538]: 2026-01-31 07:33:29.682512457 +0000 UTC m=+0.282794839 container attach 23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:33:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:29.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:30 compute-0 ceph-mon[74496]: pgmap v977: 305 pgs: 305 active+clean; 459 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 9.8 MiB/s wr, 421 op/s
Jan 31 07:33:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3738968175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2346403018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:30 compute-0 zealous_einstein[258555]: {
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:     "0": [
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:         {
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "devices": [
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "/dev/loop3"
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             ],
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "lv_name": "ceph_lv0",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "lv_size": "7511998464",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "name": "ceph_lv0",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "tags": {
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.cluster_name": "ceph",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.crush_device_class": "",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.encrypted": "0",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.osd_id": "0",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.type": "block",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:                 "ceph.vdo": "0"
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             },
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "type": "block",
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:             "vg_name": "ceph_vg0"
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:         }
Jan 31 07:33:30 compute-0 zealous_einstein[258555]:     ]
Jan 31 07:33:30 compute-0 zealous_einstein[258555]: }
Jan 31 07:33:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 435 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 444 op/s
Jan 31 07:33:30 compute-0 systemd[1]: libpod-23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253.scope: Deactivated successfully.
Jan 31 07:33:30 compute-0 podman[258564]: 2026-01-31 07:33:30.495217762 +0000 UTC m=+0.032042012 container died 23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:33:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc857cf833c7780fc4b85dd20b0b0add2833f8cf5417db64603ccf6a482fdec1-merged.mount: Deactivated successfully.
Jan 31 07:33:30 compute-0 nova_compute[247704]: 2026-01-31 07:33:30.529 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:30 compute-0 podman[258564]: 2026-01-31 07:33:30.587188055 +0000 UTC m=+0.124012225 container remove 23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:33:30 compute-0 systemd[1]: libpod-conmon-23bee30c078e2f94b83881de16cba08b42174617107573e837194dd58f987253.scope: Deactivated successfully.
Jan 31 07:33:30 compute-0 sudo[258431]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:30 compute-0 sudo[258580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:30 compute-0 sudo[258580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:30 compute-0 sudo[258580]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:30 compute-0 sudo[258605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:33:30 compute-0 sudo[258605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:30 compute-0 sudo[258605]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:30 compute-0 sudo[258630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:30 compute-0 sudo[258630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:30 compute-0 sudo[258630]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:30 compute-0 sudo[258655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:33:30 compute-0 sudo[258655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:30 compute-0 nova_compute[247704]: 2026-01-31 07:33:30.972 247708 DEBUG nova.compute.manager [req-884541a6-003a-4b4e-a044-230f95034f7b req-58014b2f-e519-4bb1-9fb7-07670f45efa4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:30 compute-0 nova_compute[247704]: 2026-01-31 07:33:30.973 247708 DEBUG oslo_concurrency.lockutils [req-884541a6-003a-4b4e-a044-230f95034f7b req-58014b2f-e519-4bb1-9fb7-07670f45efa4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:30 compute-0 nova_compute[247704]: 2026-01-31 07:33:30.974 247708 DEBUG oslo_concurrency.lockutils [req-884541a6-003a-4b4e-a044-230f95034f7b req-58014b2f-e519-4bb1-9fb7-07670f45efa4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:30 compute-0 nova_compute[247704]: 2026-01-31 07:33:30.974 247708 DEBUG oslo_concurrency.lockutils [req-884541a6-003a-4b4e-a044-230f95034f7b req-58014b2f-e519-4bb1-9fb7-07670f45efa4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:30 compute-0 nova_compute[247704]: 2026-01-31 07:33:30.981 247708 DEBUG nova.compute.manager [req-884541a6-003a-4b4e-a044-230f95034f7b req-58014b2f-e519-4bb1-9fb7-07670f45efa4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:33:30 compute-0 nova_compute[247704]: 2026-01-31 07:33:30.982 247708 DEBUG nova.compute.manager [req-884541a6-003a-4b4e-a044-230f95034f7b req-58014b2f-e519-4bb1-9fb7-07670f45efa4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:33:31 compute-0 ceph-mon[74496]: pgmap v978: 305 pgs: 305 active+clean; 435 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 444 op/s
Jan 31 07:33:31 compute-0 podman[258721]: 2026-01-31 07:33:31.383551957 +0000 UTC m=+0.054063673 container create c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:33:31 compute-0 systemd[1]: Started libpod-conmon-c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392.scope.
Jan 31 07:33:31 compute-0 podman[258721]: 2026-01-31 07:33:31.352579191 +0000 UTC m=+0.023091007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:33:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:33:31 compute-0 podman[258721]: 2026-01-31 07:33:31.479500707 +0000 UTC m=+0.150012463 container init c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:33:31 compute-0 podman[258721]: 2026-01-31 07:33:31.488478883 +0000 UTC m=+0.158990599 container start c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:33:31 compute-0 reverent_murdock[258737]: 167 167
Jan 31 07:33:31 compute-0 systemd[1]: libpod-c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392.scope: Deactivated successfully.
Jan 31 07:33:31 compute-0 podman[258721]: 2026-01-31 07:33:31.494736434 +0000 UTC m=+0.165248160 container attach c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:33:31 compute-0 podman[258721]: 2026-01-31 07:33:31.495295237 +0000 UTC m=+0.165806963 container died c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:33:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-54df56c02bf6db78c66783637452229b5c2a7123597e0901ccefb50f7297a119-merged.mount: Deactivated successfully.
Jan 31 07:33:31 compute-0 podman[258721]: 2026-01-31 07:33:31.541583141 +0000 UTC m=+0.212094837 container remove c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_murdock, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:33:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:31.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:31 compute-0 systemd[1]: libpod-conmon-c6de0cc46dabb7400541178496191afa6f6ee3b60b0de0263bbf60c1211ba392.scope: Deactivated successfully.
Jan 31 07:33:31 compute-0 podman[258760]: 2026-01-31 07:33:31.703570491 +0000 UTC m=+0.049020411 container create ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:33:31 compute-0 systemd[1]: Started libpod-conmon-ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b.scope.
Jan 31 07:33:31 compute-0 podman[258760]: 2026-01-31 07:33:31.677468812 +0000 UTC m=+0.022918692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:33:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64fcef7ff8e991383660bad3aba2284d7641425f2641cb1166ce883ef503ed48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64fcef7ff8e991383660bad3aba2284d7641425f2641cb1166ce883ef503ed48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64fcef7ff8e991383660bad3aba2284d7641425f2641cb1166ce883ef503ed48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64fcef7ff8e991383660bad3aba2284d7641425f2641cb1166ce883ef503ed48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:31.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:31 compute-0 podman[258760]: 2026-01-31 07:33:31.813157259 +0000 UTC m=+0.158607199 container init ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dhawan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:33:31 compute-0 podman[258760]: 2026-01-31 07:33:31.825040664 +0000 UTC m=+0.170490544 container start ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dhawan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:33:31 compute-0 podman[258760]: 2026-01-31 07:33:31.831322076 +0000 UTC m=+0.176771976 container attach ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.862 247708 INFO nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Took 6.93 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.865 247708 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.889 247708 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp603267h0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(a71efeb5-7d1e-40e6-88c1-c369cdd432a6),old_vol_attachment_ids={802e207e-cc7b-4779-89dc-d399ba68dc38='574cf48c-3357-4054-9e8a-58b071261019'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.897 247708 DEBUG nova.objects.instance [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lazy-loading 'migration_context' on Instance uuid 4444d8df-265a-48a7-a945-08eb55a365e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.899 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.901 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.902 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.926 247708 DEBUG nova.virt.libvirt.migration [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Find same serial number: pos=1, serial=802e207e-cc7b-4779-89dc-d399ba68dc38 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.928 247708 DEBUG nova.virt.libvirt.vif [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:33:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user
_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:20Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.928 247708 DEBUG nova.network.os_vif_util [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.930 247708 DEBUG nova.network.os_vif_util [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.931 247708 DEBUG nova.virt.libvirt.migration [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating guest XML with vif config: <interface type="ethernet">
Jan 31 07:33:31 compute-0 nova_compute[247704]:   <mac address="fa:16:3e:3c:9d:19"/>
Jan 31 07:33:31 compute-0 nova_compute[247704]:   <model type="virtio"/>
Jan 31 07:33:31 compute-0 nova_compute[247704]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:33:31 compute-0 nova_compute[247704]:   <mtu size="1442"/>
Jan 31 07:33:31 compute-0 nova_compute[247704]:   <target dev="tapcc9a4557-33"/>
Jan 31 07:33:31 compute-0 nova_compute[247704]: </interface>
Jan 31 07:33:31 compute-0 nova_compute[247704]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Jan 31 07:33:31 compute-0 nova_compute[247704]: 2026-01-31 07:33:31.932 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Jan 31 07:33:32 compute-0 nova_compute[247704]: 2026-01-31 07:33:32.405 247708 DEBUG nova.virt.libvirt.migration [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 31 07:33:32 compute-0 nova_compute[247704]: 2026-01-31 07:33:32.405 247708 INFO nova.virt.libvirt.migration [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Increasing downtime to 50 ms after 0 sec elapsed time
Jan 31 07:33:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 409 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 10 MiB/s wr, 402 op/s
Jan 31 07:33:32 compute-0 nova_compute[247704]: 2026-01-31 07:33:32.504 247708 INFO nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]: {
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]:         "osd_id": 0,
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]:         "type": "bluestore"
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]:     }
Jan 31 07:33:32 compute-0 naughty_dhawan[258776]: }
Jan 31 07:33:32 compute-0 systemd[1]: libpod-ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b.scope: Deactivated successfully.
Jan 31 07:33:32 compute-0 podman[258760]: 2026-01-31 07:33:32.70090216 +0000 UTC m=+1.046352070 container died ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dhawan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-64fcef7ff8e991383660bad3aba2284d7641425f2641cb1166ce883ef503ed48-merged.mount: Deactivated successfully.
Jan 31 07:33:32 compute-0 nova_compute[247704]: 2026-01-31 07:33:32.979 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.070 247708 DEBUG nova.virt.libvirt.migration [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.072 247708 DEBUG nova.virt.libvirt.migration [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Jan 31 07:33:33 compute-0 podman[258760]: 2026-01-31 07:33:33.083004738 +0000 UTC m=+1.428454648 container remove ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dhawan, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:33:33 compute-0 systemd[1]: libpod-conmon-ef05ccdda912e97b1cd40eff4de8c23ffae19650feecd9c4a9dbbe24a627487b.scope: Deactivated successfully.
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.136 247708 DEBUG nova.compute.manager [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:33 compute-0 sudo[258655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.137 247708 DEBUG oslo_concurrency.lockutils [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.138 247708 DEBUG oslo_concurrency.lockutils [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.139 247708 DEBUG oslo_concurrency.lockutils [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.139 247708 DEBUG nova.compute.manager [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.140 247708 WARNING nova.compute.manager [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state migrating.
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.140 247708 DEBUG nova.compute.manager [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-changed-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.141 247708 DEBUG nova.compute.manager [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Refreshing instance network info cache due to event network-changed-cc9a4557-33da-44f5-87b4-ca945cbc819c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.141 247708 DEBUG oslo_concurrency.lockutils [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.142 247708 DEBUG oslo_concurrency.lockutils [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.142 247708 DEBUG nova.network.neutron [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Refreshing network info cache for port cc9a4557-33da-44f5-87b4-ca945cbc819c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:33:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:33:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:33:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.416 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844813.416439, 4444d8df-265a-48a7-a945-08eb55a365e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.417 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Paused (Lifecycle Event)
Jan 31 07:33:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3c210661-132f-4546-a575-ff6b2ecaf0ff does not exist
Jan 31 07:33:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 331f52dd-38b3-490f-94d2-ae002e1f57b6 does not exist
Jan 31 07:33:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a58d0ecf-218c-4e4f-90e6-b519ae1888a0 does not exist
Jan 31 07:33:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.440 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.444 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:33 compute-0 nova_compute[247704]: 2026-01-31 07:33:33.465 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] During sync_power_state the instance has a pending task (migrating). Skip.
Jan 31 07:33:33 compute-0 sudo[258815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:33 compute-0 sudo[258815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:33 compute-0 sudo[258815]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:33 compute-0 sudo[258840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:33:33 compute-0 sudo[258840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:33 compute-0 sudo[258840]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:33.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:33 compute-0 ceph-mon[74496]: pgmap v979: 305 pgs: 305 active+clean; 409 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 10 MiB/s wr, 402 op/s
Jan 31 07:33:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:33:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:33.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:34 compute-0 kernel: tapcc9a4557-33 (unregistering): left promiscuous mode
Jan 31 07:33:34 compute-0 NetworkManager[49108]: <info>  [1769844814.0154] device (tapcc9a4557-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:33:34 compute-0 ovn_controller[149457]: 2026-01-31T07:33:34Z|00058|binding|INFO|Releasing lport cc9a4557-33da-44f5-87b4-ca945cbc819c from this chassis (sb_readonly=0)
Jan 31 07:33:34 compute-0 ovn_controller[149457]: 2026-01-31T07:33:34Z|00059|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c down in Southbound
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.025 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:34 compute-0 ovn_controller[149457]: 2026-01-31T07:33:34Z|00060|binding|INFO|Removing iface tapcc9a4557-33 ovn-installed in OVS
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.029 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:34 compute-0 ovn_controller[149457]: 2026-01-31T07:33:34Z|00061|binding|INFO|Releasing lport 8bd64eda-9666-44b9-9b11-431cc2aca18a from this chassis (sb_readonly=0)
Jan 31 07:33:34 compute-0 ovn_controller[149457]: 2026-01-31T07:33:34Z|00062|binding|INFO|Releasing lport 37e0bde4-4a51-4973-b284-d740caeb19be from this chassis (sb_readonly=0)
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.036 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:9d:19 10.100.0.11'], port_security=['fa:16:3e:3c:9d:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'a8402939-fce1-46a9-9749-88c4c6334003'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4444d8df-265a-48a7-a945-08eb55a365e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=cc9a4557-33da-44f5-87b4-ca945cbc819c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.038 160028 INFO neutron.agent.ovn.metadata.agent [-] Port cc9a4557-33da-44f5-87b4-ca945cbc819c in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 unbound from our chassis
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.041 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.044 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.053 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c7089e4b-4038-4561-97dd-0f8aa7a4b25c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:34 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 31 07:33:34 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 13.503s CPU time.
Jan 31 07:33:34 compute-0 systemd-machined[214448]: Machine qemu-4-instance-00000009 terminated.
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.088 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0b840b70-fdb1-49b1-b64a-0eeff7abe351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.091 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b766ce25-6709-41b5-9152-4a0021b0cdd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.119 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b6ba73-a677-4d3b-b051-4e69e86a28c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.136 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1c0a6752-949c-4076-bbf3-7d9409bd37a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1162, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1162, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499054, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258877, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.150 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0a530de9-20c5-49ed-9809-ba19d1ed5d5a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap272cbcfe-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499065, 'tstamp': 499065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258878, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap272cbcfe-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499067, 'tstamp': 499067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258878, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.151 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.153 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.157 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.158 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap272cbcfe-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.158 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.159 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap272cbcfe-d0, col_values=(('external_ids', {'iface-id': '8bd64eda-9666-44b9-9b11-431cc2aca18a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:34.159 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:34 compute-0 virtqemud[247621]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-802e207e-cc7b-4779-89dc-d399ba68dc38: No such file or directory
Jan 31 07:33:34 compute-0 virtqemud[247621]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-802e207e-cc7b-4779-89dc-d399ba68dc38: No such file or directory
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.177 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.190 247708 DEBUG nova.virt.libvirt.guest [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.191 247708 INFO nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration operation has completed
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.191 247708 INFO nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] _post_live_migration() is started..
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.193 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.194 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Jan 31 07:33:34 compute-0 nova_compute[247704]: 2026-01-31 07:33:34.194 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 389 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 9.2 MiB/s wr, 339 op/s
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00730758828866555 of space, bias 1.0, pg target 2.192276486599665 quantized to 32 (current 32)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0013482487669830635 of space, bias 1.0, pg target 0.4017781325609529 quantized to 32 (current 32)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:33:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.531 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.580 247708 DEBUG nova.network.neutron [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updated VIF entry in instance network info cache for port cc9a4557-33da-44f5-87b4-ca945cbc819c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.581 247708 DEBUG nova.network.neutron [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.605 247708 DEBUG oslo_concurrency.lockutils [req-17e4fc7e-e6ef-4966-b79e-ce5c00ceaa27 req-f74b1ee6-1232-434a-b73a-2301991b028c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.732 247708 DEBUG nova.network.neutron [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Activated binding for port cc9a4557-33da-44f5-87b4-ca945cbc819c and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.733 247708 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.734 247708 DEBUG nova.virt.libvirt.vif [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:33:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:22Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.734 247708 DEBUG nova.network.os_vif_util [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.735 247708 DEBUG nova.network.os_vif_util [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.736 247708 DEBUG os_vif [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.739 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.740 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc9a4557-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.801 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.806 247708 INFO os_vif [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33')
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.806 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.807 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.807 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.807 247708 DEBUG nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.808 247708 INFO nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deleting instance files /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1_del
Jan 31 07:33:35 compute-0 nova_compute[247704]: 2026-01-31 07:33:35.808 247708 INFO nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deletion of /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1_del complete
Jan 31 07:33:35 compute-0 ceph-mon[74496]: pgmap v980: 305 pgs: 305 active+clean; 389 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 9.2 MiB/s wr, 339 op/s
Jan 31 07:33:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:35.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 405 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 619 KiB/s rd, 7.6 MiB/s wr, 233 op/s
Jan 31 07:33:36 compute-0 podman[258891]: 2026-01-31 07:33:36.954315773 +0000 UTC m=+0.079001324 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 07:33:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:37.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:37 compute-0 nova_compute[247704]: 2026-01-31 07:33:37.642 247708 DEBUG nova.compute.manager [req-43d92c31-5f47-4546-87e5-9793f3b45595 req-e47d3e42-2d9d-4386-8273-0064f5b655db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:37 compute-0 nova_compute[247704]: 2026-01-31 07:33:37.642 247708 DEBUG oslo_concurrency.lockutils [req-43d92c31-5f47-4546-87e5-9793f3b45595 req-e47d3e42-2d9d-4386-8273-0064f5b655db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:37 compute-0 nova_compute[247704]: 2026-01-31 07:33:37.643 247708 DEBUG oslo_concurrency.lockutils [req-43d92c31-5f47-4546-87e5-9793f3b45595 req-e47d3e42-2d9d-4386-8273-0064f5b655db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:37 compute-0 nova_compute[247704]: 2026-01-31 07:33:37.643 247708 DEBUG oslo_concurrency.lockutils [req-43d92c31-5f47-4546-87e5-9793f3b45595 req-e47d3e42-2d9d-4386-8273-0064f5b655db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:37 compute-0 nova_compute[247704]: 2026-01-31 07:33:37.643 247708 DEBUG nova.compute.manager [req-43d92c31-5f47-4546-87e5-9793f3b45595 req-e47d3e42-2d9d-4386-8273-0064f5b655db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:33:37 compute-0 nova_compute[247704]: 2026-01-31 07:33:37.643 247708 DEBUG nova.compute.manager [req-43d92c31-5f47-4546-87e5-9793f3b45595 req-e47d3e42-2d9d-4386-8273-0064f5b655db 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:33:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:37 compute-0 ceph-mon[74496]: pgmap v981: 305 pgs: 305 active+clean; 405 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 619 KiB/s rd, 7.6 MiB/s wr, 233 op/s
Jan 31 07:33:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/882411958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 417 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 667 KiB/s rd, 5.7 MiB/s wr, 204 op/s
Jan 31 07:33:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3081057397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:39.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:39.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:39 compute-0 ceph-mon[74496]: pgmap v982: 305 pgs: 305 active+clean; 417 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 667 KiB/s rd, 5.7 MiB/s wr, 204 op/s
Jan 31 07:33:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 601 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Jan 31 07:33:40 compute-0 nova_compute[247704]: 2026-01-31 07:33:40.535 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:40 compute-0 nova_compute[247704]: 2026-01-31 07:33:40.599 247708 DEBUG nova.compute.manager [req-13f0e502-864f-4370-8e3c-bf14ab053541 req-042c8329-f6e8-4509-8b72-57a5154cfdd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:40 compute-0 nova_compute[247704]: 2026-01-31 07:33:40.600 247708 DEBUG oslo_concurrency.lockutils [req-13f0e502-864f-4370-8e3c-bf14ab053541 req-042c8329-f6e8-4509-8b72-57a5154cfdd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:40 compute-0 nova_compute[247704]: 2026-01-31 07:33:40.600 247708 DEBUG oslo_concurrency.lockutils [req-13f0e502-864f-4370-8e3c-bf14ab053541 req-042c8329-f6e8-4509-8b72-57a5154cfdd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:40 compute-0 nova_compute[247704]: 2026-01-31 07:33:40.601 247708 DEBUG oslo_concurrency.lockutils [req-13f0e502-864f-4370-8e3c-bf14ab053541 req-042c8329-f6e8-4509-8b72-57a5154cfdd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:40 compute-0 nova_compute[247704]: 2026-01-31 07:33:40.601 247708 DEBUG nova.compute.manager [req-13f0e502-864f-4370-8e3c-bf14ab053541 req-042c8329-f6e8-4509-8b72-57a5154cfdd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:33:40 compute-0 nova_compute[247704]: 2026-01-31 07:33:40.601 247708 WARNING nova.compute.manager [req-13f0e502-864f-4370-8e3c-bf14ab053541 req-042c8329-f6e8-4509-8b72-57a5154cfdd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state active and task_state migrating.
Jan 31 07:33:40 compute-0 nova_compute[247704]: 2026-01-31 07:33:40.800 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:41.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:41.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:41 compute-0 ceph-mon[74496]: pgmap v983: 305 pgs: 305 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 601 KiB/s rd, 3.7 MiB/s wr, 150 op/s
Jan 31 07:33:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 525 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Jan 31 07:33:42 compute-0 nova_compute[247704]: 2026-01-31 07:33:42.939 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:42 compute-0 nova_compute[247704]: 2026-01-31 07:33:42.939 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:42 compute-0 nova_compute[247704]: 2026-01-31 07:33:42.940 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:42 compute-0 nova_compute[247704]: 2026-01-31 07:33:42.963 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:42 compute-0 nova_compute[247704]: 2026-01-31 07:33:42.963 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:42 compute-0 nova_compute[247704]: 2026-01-31 07:33:42.963 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:42 compute-0 nova_compute[247704]: 2026-01-31 07:33:42.964 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:33:42 compute-0 nova_compute[247704]: 2026-01-31 07:33:42.964 247708 DEBUG oslo_concurrency.processutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/67064187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891413516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:43 compute-0 nova_compute[247704]: 2026-01-31 07:33:43.470 247708 DEBUG oslo_concurrency.processutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:43.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:43 compute-0 nova_compute[247704]: 2026-01-31 07:33:43.712 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:33:43 compute-0 nova_compute[247704]: 2026-01-31 07:33:43.712 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:33:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:43.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:43 compute-0 nova_compute[247704]: 2026-01-31 07:33:43.889 247708 WARNING nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:33:43 compute-0 nova_compute[247704]: 2026-01-31 07:33:43.890 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4685MB free_disk=20.83490753173828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:33:43 compute-0 nova_compute[247704]: 2026-01-31 07:33:43.890 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:43 compute-0 nova_compute[247704]: 2026-01-31 07:33:43.890 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:44 compute-0 ceph-mon[74496]: pgmap v984: 305 pgs: 305 active+clean; 418 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 525 KiB/s rd, 2.4 MiB/s wr, 109 op/s
Jan 31 07:33:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2891413516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.015 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Migration for instance 4444d8df-265a-48a7-a945-08eb55a365e1 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.038 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.064 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Instance 23c338db-50ed-434c-ac85-8190b9b5f194 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.064 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Instance 425ccf69-9d5a-48da-9def-1744abd51e09 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.064 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Migration a71efeb5-7d1e-40e6-88c1-c369cdd432a6 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.065 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.065 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.164 247708 DEBUG oslo_concurrency.processutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 433 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 395 KiB/s rd, 3.0 MiB/s wr, 102 op/s
Jan 31 07:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741976636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.602 247708 DEBUG oslo_concurrency.processutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.610 247708 DEBUG nova.compute.provider_tree [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.630 247708 DEBUG nova.scheduler.client.report [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.650 247708 DEBUG nova.compute.resource_tracker [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.651 247708 DEBUG oslo_concurrency.lockutils [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.655 247708 INFO nova.compute.manager [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.734 247708 INFO nova.scheduler.client.report [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Deleted allocation for migration a71efeb5-7d1e-40e6-88c1-c369cdd432a6
Jan 31 07:33:44 compute-0 nova_compute[247704]: 2026-01-31 07:33:44.735 247708 DEBUG nova.virt.libvirt.driver [None req-e150b522-f9e8-4c1a-8006-0deff375ae31 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Jan 31 07:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3958700523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3958700523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:33:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2106868559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3741976636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3958700523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:33:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3958700523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:33:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1044616484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:45 compute-0 nova_compute[247704]: 2026-01-31 07:33:45.547 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:45.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:45 compute-0 nova_compute[247704]: 2026-01-31 07:33:45.802 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:45.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:46 compute-0 ceph-mon[74496]: pgmap v985: 305 pgs: 305 active+clean; 433 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 395 KiB/s rd, 3.0 MiB/s wr, 102 op/s
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.395 247708 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Creating tmpfile /var/lib/nova/instances/tmpxr7wz9jw to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.396 247708 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxr7wz9jw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 31 07:33:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 446 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 347 KiB/s rd, 2.8 MiB/s wr, 96 op/s
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.596 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.826 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.827 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.827 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:33:46 compute-0 nova_compute[247704]: 2026-01-31 07:33:46.827 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23c338db-50ed-434c-ac85-8190b9b5f194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:47 compute-0 nova_compute[247704]: 2026-01-31 07:33:47.151 247708 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Automatically allocated network: {'id': '3741a1e4-1d7f-4ca0-b02c-790a05701782', 'name': 'auto_allocated_network', 'tenant_id': 'ca84ce9280d74b4588f89bf679f563fa', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['023e1bb7-94fd-4d3e-985f-de3a59c69198', 'c3df2e1c-5e37-495f-93dc-b811f92707e7'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-31T07:33:22Z', 'updated_at': '2026-01-31T07:33:31Z', 'revision_number': 4, 'project_id': 'ca84ce9280d74b4588f89bf679f563fa'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Jan 31 07:33:47 compute-0 nova_compute[247704]: 2026-01-31 07:33:47.152 247708 DEBUG nova.policy [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38001f2ce5654228b098939fd9619d3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:33:47 compute-0 sudo[258962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:47 compute-0 sudo[258962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:47 compute-0 sudo[258962]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:47 compute-0 sudo[258987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:33:47 compute-0 sudo[258987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:33:47 compute-0 sudo[258987]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:47.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:47 compute-0 nova_compute[247704]: 2026-01-31 07:33:47.692 247708 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxr7wz9jw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 31 07:33:47 compute-0 nova_compute[247704]: 2026-01-31 07:33:47.734 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:47 compute-0 nova_compute[247704]: 2026-01-31 07:33:47.735 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquired lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:47 compute-0 nova_compute[247704]: 2026-01-31 07:33:47.735 247708 DEBUG nova.network.neutron [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:33:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:48 compute-0 ceph-mon[74496]: pgmap v986: 305 pgs: 305 active+clean; 446 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 347 KiB/s rd, 2.8 MiB/s wr, 96 op/s
Jan 31 07:33:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 464 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.3 MiB/s wr, 94 op/s
Jan 31 07:33:48 compute-0 nova_compute[247704]: 2026-01-31 07:33:48.948 247708 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Successfully created port: 3dda480d-405b-4a74-bbcf-da85ac266fd3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:33:49 compute-0 nova_compute[247704]: 2026-01-31 07:33:49.094 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [{"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:49 compute-0 nova_compute[247704]: 2026-01-31 07:33:49.118 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-23c338db-50ed-434c-ac85-8190b9b5f194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:49 compute-0 nova_compute[247704]: 2026-01-31 07:33:49.118 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:33:49 compute-0 nova_compute[247704]: 2026-01-31 07:33:49.119 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:49 compute-0 nova_compute[247704]: 2026-01-31 07:33:49.190 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844814.189255, 4444d8df-265a-48a7-a945-08eb55a365e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:49 compute-0 nova_compute[247704]: 2026-01-31 07:33:49.191 247708 INFO nova.compute.manager [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Stopped (Lifecycle Event)
Jan 31 07:33:49 compute-0 nova_compute[247704]: 2026-01-31 07:33:49.226 247708 DEBUG nova.compute.manager [None req-2a5e337f-e51a-43ce-a310-e62c3d320fa5 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:49.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:49.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:49 compute-0 nova_compute[247704]: 2026-01-31 07:33:49.986 247708 DEBUG nova.network.neutron [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.018 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Releasing lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.021 247708 DEBUG os_brick.utils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.022 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.043 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.044 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[debbd353-2203-417f-b09b-4b1e151fa9d4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.045 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.052 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.052 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[b7895330-1cf9-4b59-9180-66223990791f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.055 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.066 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.066 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[95c56827-e9a8-4dbe-9697-2c57d2304407]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.069 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[087dbbab-b05b-4c18-9b5d-209cd78a16f8]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.070 247708 DEBUG oslo_concurrency.processutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.092 247708 DEBUG oslo_concurrency.processutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.096 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.096 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.096 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.097 247708 DEBUG os_brick.utils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:33:50 compute-0 ceph-mon[74496]: pgmap v987: 305 pgs: 305 active+clean; 464 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.3 MiB/s wr, 94 op/s
Jan 31 07:33:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.547 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.588 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.589 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.775 247708 DEBUG nova.compute.manager [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.858 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.913 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.914 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.917 247708 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Successfully updated port: 3dda480d-405b-4a74-bbcf-da85ac266fd3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.945 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "refresh_cache-425ccf69-9d5a-48da-9def-1744abd51e09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.945 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquired lock "refresh_cache-425ccf69-9d5a-48da-9def-1744abd51e09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.946 247708 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.956 247708 DEBUG nova.objects.instance [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'pci_requests' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.991 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.991 247708 INFO nova.compute.claims [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:33:50 compute-0 nova_compute[247704]: 2026-01-31 07:33:50.992 247708 DEBUG nova.objects.instance [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'resources' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.012 247708 DEBUG nova.objects.instance [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'numa_topology' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.030 247708 DEBUG nova.objects.instance [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.041 247708 DEBUG nova.compute.manager [req-129cfa85-6a53-40c3-9715-c604c0dc5cff req-7e9027b1-33f7-442d-8ece-8b14e0529701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received event network-changed-3dda480d-405b-4a74-bbcf-da85ac266fd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.042 247708 DEBUG nova.compute.manager [req-129cfa85-6a53-40c3-9715-c604c0dc5cff req-7e9027b1-33f7-442d-8ece-8b14e0529701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Refreshing instance network info cache due to event network-changed-3dda480d-405b-4a74-bbcf-da85ac266fd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.043 247708 DEBUG oslo_concurrency.lockutils [req-129cfa85-6a53-40c3-9715-c604c0dc5cff req-7e9027b1-33f7-442d-8ece-8b14e0529701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-425ccf69-9d5a-48da-9def-1744abd51e09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.090 247708 INFO nova.compute.resource_tracker [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating resource usage from migration e3aa997c-5864-4157-ad53-1755be938003
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.090 247708 DEBUG nova.compute.resource_tracker [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Starting to track incoming migration e3aa997c-5864-4157-ad53-1755be938003 with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 07:33:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3297960115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.179 247708 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:33:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2733010016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.217 247708 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:51.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.584 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493479111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.665 247708 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.671 247708 DEBUG nova.compute.provider_tree [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.688 247708 DEBUG nova.scheduler.client.report [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.722 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.723 247708 INFO nova.compute.manager [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Migrating
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.735 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.735 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.736 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.736 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.755 247708 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxr7wz9jw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={802e207e-cc7b-4779-89dc-d399ba68dc38='632e6ca3-f050-475e-bcd6-268090116cf1'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.757 247708 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Creating instance directory: /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.758 247708 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Ensure instance console log exists: /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.759 247708 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.768 247708 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.770 247708 DEBUG nova.virt.libvirt.vif [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:33:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-480
73594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:42Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.771 247708 DEBUG nova.network.os_vif_util [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.772 247708 DEBUG nova.network.os_vif_util [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.773 247708 DEBUG os_vif [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.774 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.775 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.775 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.780 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.780 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc9a4557-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.781 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc9a4557-33, col_values=(('external_ids', {'iface-id': 'cc9a4557-33da-44f5-87b4-ca945cbc819c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:9d:19', 'vm-uuid': '4444d8df-265a-48a7-a945-08eb55a365e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.783 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:51 compute-0 NetworkManager[49108]: <info>  [1769844831.7844] manager: (tapcc9a4557-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.786 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.792 247708 INFO os_vif [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33')
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.796 247708 DEBUG nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 31 07:33:51 compute-0 nova_compute[247704]: 2026-01-31 07:33:51.796 247708 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxr7wz9jw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={802e207e-cc7b-4779-89dc-d399ba68dc38='632e6ca3-f050-475e-bcd6-268090116cf1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 31 07:33:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:51.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:51 compute-0 podman[259046]: 2026-01-31 07:33:51.901159781 +0000 UTC m=+0.078599353 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:33:52 compute-0 ceph-mon[74496]: pgmap v988: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 07:33:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2733010016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2439060104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3493479111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214933026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.228 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.325 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.325 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:33:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.505 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.506 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4677MB free_disk=20.813865661621094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.506 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.507 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.550 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Migration for instance 4444d8df-265a-48a7-a945-08eb55a365e1 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.551 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Migration for instance d7327aed-ddc6-4772-8d2e-6b8be365dd2b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.602 247708 INFO nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating resource usage from migration 4db0dc53-9175-4575-b2ef-88d2c46606ff
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.602 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Starting to track incoming migration 4db0dc53-9175-4575-b2ef-88d2c46606ff with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.603 247708 INFO nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating resource usage from migration e3aa997c-5864-4157-ad53-1755be938003
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.603 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Starting to track incoming migration e3aa997c-5864-4157-ad53-1755be938003 with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.621 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 23c338db-50ed-434c-ac85-8190b9b5f194 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.621 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 425ccf69-9d5a-48da-9def-1744abd51e09 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.638 247708 WARNING nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 4444d8df-265a-48a7-a945-08eb55a365e1 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}.
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.654 247708 WARNING nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance d7327aed-ddc6-4772-8d2e-6b8be365dd2b has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.654 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.655 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:33:52 compute-0 nova_compute[247704]: 2026-01-31 07:33:52.737 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/214933026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:33:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928453779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.289 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.297 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.312 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:33:53 compute-0 sshd-session[259113]: Accepted publickey for nova from 192.168.122.101 port 41796 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 07:33:53 compute-0 systemd-logind[816]: New session 52 of user nova.
Jan 31 07:33:53 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.347 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.347 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:53 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 07:33:53 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 07:33:53 compute-0 systemd[1]: Starting User Manager for UID 42436...
Jan 31 07:33:53 compute-0 systemd[259120]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:33:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:53 compute-0 systemd[259120]: Queued start job for default target Main User Target.
Jan 31 07:33:53 compute-0 systemd[259120]: Created slice User Application Slice.
Jan 31 07:33:53 compute-0 systemd[259120]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:33:53 compute-0 systemd[259120]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:33:53 compute-0 systemd[259120]: Reached target Paths.
Jan 31 07:33:53 compute-0 systemd[259120]: Reached target Timers.
Jan 31 07:33:53 compute-0 systemd[259120]: Starting D-Bus User Message Bus Socket...
Jan 31 07:33:53 compute-0 systemd[259120]: Starting Create User's Volatile Files and Directories...
Jan 31 07:33:53 compute-0 systemd[259120]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:33:53 compute-0 systemd[259120]: Reached target Sockets.
Jan 31 07:33:53 compute-0 systemd[259120]: Finished Create User's Volatile Files and Directories.
Jan 31 07:33:53 compute-0 systemd[259120]: Reached target Basic System.
Jan 31 07:33:53 compute-0 systemd[259120]: Reached target Main User Target.
Jan 31 07:33:53 compute-0 systemd[259120]: Startup finished in 144ms.
Jan 31 07:33:53 compute-0 systemd[1]: Started User Manager for UID 42436.
Jan 31 07:33:53 compute-0 systemd[1]: Started Session 52 of User nova.
Jan 31 07:33:53 compute-0 sshd-session[259113]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:33:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:53.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:53 compute-0 sshd-session[259135]: Received disconnect from 192.168.122.101 port 41796:11: disconnected by user
Jan 31 07:33:53 compute-0 sshd-session[259135]: Disconnected from user nova 192.168.122.101 port 41796
Jan 31 07:33:53 compute-0 sshd-session[259113]: pam_unix(sshd:session): session closed for user nova
Jan 31 07:33:53 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 31 07:33:53 compute-0 systemd-logind[816]: Session 52 logged out. Waiting for processes to exit.
Jan 31 07:33:53 compute-0 systemd-logind[816]: Removed session 52.
Jan 31 07:33:53 compute-0 sshd-session[259137]: Accepted publickey for nova from 192.168.122.101 port 41810 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 07:33:53 compute-0 systemd-logind[816]: New session 54 of user nova.
Jan 31 07:33:53 compute-0 systemd[1]: Started Session 54 of User nova.
Jan 31 07:33:53 compute-0 sshd-session[259137]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:33:53 compute-0 sshd-session[259140]: Received disconnect from 192.168.122.101 port 41810:11: disconnected by user
Jan 31 07:33:53 compute-0 sshd-session[259140]: Disconnected from user nova 192.168.122.101 port 41810
Jan 31 07:33:53 compute-0 sshd-session[259137]: pam_unix(sshd:session): session closed for user nova
Jan 31 07:33:53 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Jan 31 07:33:53 compute-0 systemd-logind[816]: Session 54 logged out. Waiting for processes to exit.
Jan 31 07:33:53 compute-0 systemd-logind[816]: Removed session 54.
Jan 31 07:33:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.859 247708 DEBUG nova.network.neutron [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Updating instance_info_cache with network_info: [{"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.886 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Releasing lock "refresh_cache-425ccf69-9d5a-48da-9def-1744abd51e09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.887 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Instance network_info: |[{"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.888 247708 DEBUG oslo_concurrency.lockutils [req-129cfa85-6a53-40c3-9715-c604c0dc5cff req-7e9027b1-33f7-442d-8ece-8b14e0529701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-425ccf69-9d5a-48da-9def-1744abd51e09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.888 247708 DEBUG nova.network.neutron [req-129cfa85-6a53-40c3-9715-c604c0dc5cff req-7e9027b1-33f7-442d-8ece-8b14e0529701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Refreshing network info cache for port 3dda480d-405b-4a74-bbcf-da85ac266fd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.891 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Start _get_guest_xml network_info=[{"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.897 247708 WARNING nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.902 247708 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.904 247708 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.906 247708 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.907 247708 DEBUG nova.virt.libvirt.host [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.909 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.909 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.910 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.910 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.911 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.911 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.912 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.912 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.913 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.913 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.914 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.914 247708 DEBUG nova.virt.hardware [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.919 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:53 compute-0 nova_compute[247704]: 2026-01-31 07:33:53.942 247708 DEBUG nova.network.neutron [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Port cc9a4557-33da-44f5-87b4-ca945cbc819c updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.269 247708 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxr7wz9jw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4444d8df-265a-48a7-a945-08eb55a365e1',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={802e207e-cc7b-4779-89dc-d399ba68dc38='632e6ca3-f050-475e-bcd6-268090116cf1'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 31 07:33:54 compute-0 ceph-mon[74496]: pgmap v989: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 31 07:33:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1928453779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030788466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.365 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.394 247708 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 425ccf69-9d5a-48da-9def-1744abd51e09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.400 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 31 07:33:54 compute-0 kernel: tapcc9a4557-33: entered promiscuous mode
Jan 31 07:33:54 compute-0 NetworkManager[49108]: <info>  [1769844834.5119] manager: (tapcc9a4557-33): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.513 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:54 compute-0 ovn_controller[149457]: 2026-01-31T07:33:54Z|00063|binding|INFO|Claiming lport cc9a4557-33da-44f5-87b4-ca945cbc819c for this additional chassis.
Jan 31 07:33:54 compute-0 ovn_controller[149457]: 2026-01-31T07:33:54Z|00064|binding|INFO|cc9a4557-33da-44f5-87b4-ca945cbc819c: Claiming fa:16:3e:3c:9d:19 10.100.0.11
Jan 31 07:33:54 compute-0 ovn_controller[149457]: 2026-01-31T07:33:54Z|00065|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c ovn-installed in OVS
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.522 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.527 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:54 compute-0 systemd-machined[214448]: New machine qemu-5-instance-00000009.
Jan 31 07:33:54 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000009.
Jan 31 07:33:54 compute-0 systemd-udevd[259215]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:33:54 compute-0 NetworkManager[49108]: <info>  [1769844834.5733] device (tapcc9a4557-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:33:54 compute-0 NetworkManager[49108]: <info>  [1769844834.5738] device (tapcc9a4557-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:33:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:33:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053054528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.845 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.848 247708 DEBUG nova.virt.libvirt.vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-2',id=12,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-member'},tag
s=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:33:22Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=425ccf69-9d5a-48da-9def-1744abd51e09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.849 247708 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.851 247708 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:f5:c8,bridge_name='br-int',has_traffic_filtering=True,id=3dda480d-405b-4a74-bbcf-da85ac266fd3,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dda480d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.854 247708 DEBUG nova.objects.instance [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 425ccf69-9d5a-48da-9def-1744abd51e09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.877 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <uuid>425ccf69-9d5a-48da-9def-1744abd51e09</uuid>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <name>instance-0000000c</name>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <nova:name>tempest-tempest.common.compute-instance-2133368505-2</nova:name>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:33:53</nova:creationTime>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <nova:user uuid="38001f2ce5654228b098939fd9619d3e">tempest-AutoAllocateNetworkTest-1036306085-project-member</nova:user>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <nova:project uuid="ca84ce9280d74b4588f89bf679f563fa">tempest-AutoAllocateNetworkTest-1036306085</nova:project>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <nova:port uuid="3dda480d-405b-4a74-bbcf-da85ac266fd3">
Jan 31 07:33:54 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.1.0.34" ipVersion="4"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="fdfe:381f:8400::b0" ipVersion="6"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <system>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <entry name="serial">425ccf69-9d5a-48da-9def-1744abd51e09</entry>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <entry name="uuid">425ccf69-9d5a-48da-9def-1744abd51e09</entry>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </system>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <os>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   </os>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <features>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   </features>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/425ccf69-9d5a-48da-9def-1744abd51e09_disk">
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       </source>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/425ccf69-9d5a-48da-9def-1744abd51e09_disk.config">
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       </source>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:33:54 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:fc:f5:c8"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <target dev="tap3dda480d-40"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09/console.log" append="off"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <video>
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </video>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:33:54 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:33:54 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:33:54 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:33:54 compute-0 nova_compute[247704]: </domain>
Jan 31 07:33:54 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.886 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Preparing to wait for external event network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.887 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.887 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.888 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.889 247708 DEBUG nova.virt.libvirt.vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-2',id=12,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:33:22Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=425ccf69-9d5a-48da-9def-1744abd51e09,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.890 247708 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.891 247708 DEBUG nova.network.os_vif_util [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:f5:c8,bridge_name='br-int',has_traffic_filtering=True,id=3dda480d-405b-4a74-bbcf-da85ac266fd3,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dda480d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.892 247708 DEBUG os_vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:f5:c8,bridge_name='br-int',has_traffic_filtering=True,id=3dda480d-405b-4a74-bbcf-da85ac266fd3,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dda480d-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.893 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.894 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.895 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.899 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.900 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3dda480d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.900 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3dda480d-40, col_values=(('external_ids', {'iface-id': '3dda480d-405b-4a74-bbcf-da85ac266fd3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:f5:c8', 'vm-uuid': '425ccf69-9d5a-48da-9def-1744abd51e09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:54 compute-0 NetworkManager[49108]: <info>  [1769844834.9042] manager: (tap3dda480d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.903 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.909 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.911 247708 INFO os_vif [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:f5:c8,bridge_name='br-int',has_traffic_filtering=True,id=3dda480d-405b-4a74-bbcf-da85ac266fd3,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dda480d-40')
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.963 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.963 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.963 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] No VIF found with MAC fa:16:3e:fc:f5:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.964 247708 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Using config drive
Jan 31 07:33:54 compute-0 nova_compute[247704]: 2026-01-31 07:33:54.991 247708 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 425ccf69-9d5a-48da-9def-1744abd51e09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/73668542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3030788466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-0 ceph-mon[74496]: pgmap v990: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 31 07:33:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2596142935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2053054528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4293098453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4007716689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.347 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.348 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.417 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844835.4166298, 4444d8df-265a-48a7-a945-08eb55a365e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.417 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Started (Lifecycle Event)
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.436 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.569 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:55.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.843 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844835.8432071, 4444d8df-265a-48a7-a945-08eb55a365e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.843 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Resumed (Lifecycle Event)
Jan 31 07:33:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:55.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.870 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.874 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:55 compute-0 nova_compute[247704]: 2026-01-31 07:33:55.940 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 31 07:33:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 963 KiB/s wr, 155 op/s
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.488 247708 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Creating config drive at /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09/disk.config
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.492 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfwmr041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.621 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfwmr041s" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.651 247708 DEBUG nova.storage.rbd_utils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] rbd image 425ccf69-9d5a-48da-9def-1744abd51e09_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.656 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09/disk.config 425ccf69-9d5a-48da-9def-1744abd51e09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.864 247708 DEBUG nova.network.neutron [req-129cfa85-6a53-40c3-9715-c604c0dc5cff req-7e9027b1-33f7-442d-8ece-8b14e0529701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Updated VIF entry in instance network info cache for port 3dda480d-405b-4a74-bbcf-da85ac266fd3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.866 247708 DEBUG nova.network.neutron [req-129cfa85-6a53-40c3-9715-c604c0dc5cff req-7e9027b1-33f7-442d-8ece-8b14e0529701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Updating instance_info_cache with network_info: [{"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.893 247708 DEBUG oslo_concurrency.lockutils [req-129cfa85-6a53-40c3-9715-c604c0dc5cff req-7e9027b1-33f7-442d-8ece-8b14e0529701 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-425ccf69-9d5a-48da-9def-1744abd51e09" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.953 247708 DEBUG oslo_concurrency.processutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09/disk.config 425ccf69-9d5a-48da-9def-1744abd51e09_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:33:56 compute-0 nova_compute[247704]: 2026-01-31 07:33:56.954 247708 INFO nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Deleting local config drive /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09/disk.config because it was imported into RBD.
Jan 31 07:33:57 compute-0 NetworkManager[49108]: <info>  [1769844837.0062] manager: (tap3dda480d-40): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 31 07:33:57 compute-0 kernel: tap3dda480d-40: entered promiscuous mode
Jan 31 07:33:57 compute-0 systemd-udevd[259217]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:33:57 compute-0 ovn_controller[149457]: 2026-01-31T07:33:57Z|00066|binding|INFO|Claiming lport 3dda480d-405b-4a74-bbcf-da85ac266fd3 for this chassis.
Jan 31 07:33:57 compute-0 ovn_controller[149457]: 2026-01-31T07:33:57Z|00067|binding|INFO|3dda480d-405b-4a74-bbcf-da85ac266fd3: Claiming fa:16:3e:fc:f5:c8 10.1.0.34 fdfe:381f:8400::b0
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.009 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.015 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-0 NetworkManager[49108]: <info>  [1769844837.0206] device (tap3dda480d-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:33:57 compute-0 NetworkManager[49108]: <info>  [1769844837.0215] device (tap3dda480d-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:33:57 compute-0 systemd-machined[214448]: New machine qemu-6-instance-0000000c.
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.043 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-0 ovn_controller[149457]: 2026-01-31T07:33:57Z|00068|binding|INFO|Setting lport 3dda480d-405b-4a74-bbcf-da85ac266fd3 ovn-installed in OVS
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.047 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-0000000c.
Jan 31 07:33:57 compute-0 ovn_controller[149457]: 2026-01-31T07:33:57Z|00069|binding|INFO|Setting lport 3dda480d-405b-4a74-bbcf-da85ac266fd3 up in Southbound
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.089 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:f5:c8 10.1.0.34 fdfe:381f:8400::b0'], port_security=['fa:16:3e:fc:f5:c8 10.1.0.34 fdfe:381f:8400::b0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.34/26 fdfe:381f:8400::b0/64', 'neutron:device_id': '425ccf69-9d5a-48da-9def-1744abd51e09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c568fd1-1fe0-466e-9918-da0b2ee34267', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5e73fc0-b2f3-4e49-a543-f424bee97362, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3dda480d-405b-4a74-bbcf-da85ac266fd3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.090 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3dda480d-405b-4a74-bbcf-da85ac266fd3 in datapath 3741a1e4-1d7f-4ca0-b02c-790a05701782 bound to our chassis
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.092 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3741a1e4-1d7f-4ca0-b02c-790a05701782
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.100 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3c8e06-5999-404b-af6d-fbee53412085]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.101 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3741a1e4-11 in ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.103 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3741a1e4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.103 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0886ee2f-cd7f-49e6-ada6-3eff3fe5e0aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.103 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e370f08e-3af6-4292-9cd1-0fa36aea1be5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.112 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[62e9badd-d01e-4859-8b79-b7bb55bee91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.124 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[399db946-ffa1-4f35-91dd-aa96eb70783d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.145 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d886a8b1-07ef-4926-8ae6-f3f06979b84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.153 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b9374f43-3072-4065-a0d4-5f5f08da8e40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 NetworkManager[49108]: <info>  [1769844837.1545] manager: (tap3741a1e4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.177 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[71698408-f303-4ccc-a5de-fd5faa43d181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.181 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b81d3f-ba2e-4a90-b645-76666da586f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 NetworkManager[49108]: <info>  [1769844837.1998] device (tap3741a1e4-10): carrier: link connected
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.204 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf853b4-6bf0-449b-9a8f-afebcfdb5cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.221 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3f6255-7b31-4f70-b26c-3bf31abcc4a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3741a1e4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:7f:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507981, 'reachable_time': 25589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259377, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.235 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b4366102-55ac-4555-adee-4ea32c9ed278]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:7f1a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507981, 'tstamp': 507981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259378, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.247 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[705bf2a0-6e2e-4a1c-8220-9a680674832c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3741a1e4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:7f:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507981, 'reachable_time': 25589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259379, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.267 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e219441e-07e8-4754-9f24-144b94bef8b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.303 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c471fe-fc08-4ac3-b491-d607d1e8ba76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.305 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3741a1e4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.306 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.307 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3741a1e4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.309 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-0 NetworkManager[49108]: <info>  [1769844837.3100] manager: (tap3741a1e4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 31 07:33:57 compute-0 kernel: tap3741a1e4-10: entered promiscuous mode
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.322 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3741a1e4-10, col_values=(('external_ids', {'iface-id': 'b790fc33-bc7c-415a-abab-507204f5d28b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:57 compute-0 ovn_controller[149457]: 2026-01-31T07:33:57Z|00070|binding|INFO|Releasing lport b790fc33-bc7c-415a-abab-507204f5d28b from this chassis (sb_readonly=0)
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.324 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.332 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.337 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.338 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1e438f68-a5d7-47d6-8628-c027ba3cd92e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.339 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-3741a1e4-1d7f-4ca0-b02c-790a05701782
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/3741a1e4-1d7f-4ca0-b02c-790a05701782.pid.haproxy
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 3741a1e4-1d7f-4ca0-b02c-790a05701782
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:57.340 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'env', 'PROCESS_TAG=haproxy-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3741a1e4-1d7f-4ca0-b02c-790a05701782.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:33:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:57.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:57 compute-0 ceph-mon[74496]: pgmap v991: 305 pgs: 305 active+clean; 465 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 963 KiB/s wr, 155 op/s
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.733 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844837.7325013, 425ccf69-9d5a-48da-9def-1744abd51e09 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.735 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] VM Started (Lifecycle Event)
Jan 31 07:33:57 compute-0 podman[259453]: 2026-01-31 07:33:57.804782071 +0000 UTC m=+0.070111299 container create 1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.808 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.813 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844837.7345328, 425ccf69-9d5a-48da-9def-1744abd51e09 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.814 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] VM Paused (Lifecycle Event)
Jan 31 07:33:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:57.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:57 compute-0 podman[259453]: 2026-01-31 07:33:57.756305884 +0000 UTC m=+0.021635112 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:33:57 compute-0 systemd[1]: Started libpod-conmon-1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0.scope.
Jan 31 07:33:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0994be95f65fa0d84c052e8e32d8724564efb2343570cb7eb879aefccc66abe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.895 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:57 compute-0 nova_compute[247704]: 2026-01-31 07:33:57.900 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:57 compute-0 podman[259453]: 2026-01-31 07:33:57.903329003 +0000 UTC m=+0.168658251 container init 1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 07:33:57 compute-0 podman[259453]: 2026-01-31 07:33:57.90733046 +0000 UTC m=+0.172659678 container start 1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:33:57 compute-0 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[259468]: [NOTICE]   (259472) : New worker (259474) forked
Jan 31 07:33:57 compute-0 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[259468]: [NOTICE]   (259472) : Loading success.
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.110 247708 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received event network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.111 247708 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.112 247708 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.113 247708 DEBUG oslo_concurrency.lockutils [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.113 247708 DEBUG nova.compute.manager [req-6ccd1524-71d3-4c27-bec5-c6ccf83de2ea req-30c27753-42e0-41e1-b3b9-8d2c34ba4733 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Processing event network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.114 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.119 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.122 247708 INFO nova.virt.libvirt.driver [-] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Instance spawned successfully.
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.123 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.129 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.130 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844838.1190252, 425ccf69-9d5a-48da-9def-1744abd51e09 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.130 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] VM Resumed (Lifecycle Event)
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.148 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.149 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.149 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.150 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.150 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.150 247708 DEBUG nova.virt.libvirt.driver [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.177 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.181 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:33:58 compute-0 ovn_controller[149457]: 2026-01-31T07:33:58Z|00071|binding|INFO|Claiming lport cc9a4557-33da-44f5-87b4-ca945cbc819c for this chassis.
Jan 31 07:33:58 compute-0 ovn_controller[149457]: 2026-01-31T07:33:58Z|00072|binding|INFO|cc9a4557-33da-44f5-87b4-ca945cbc819c: Claiming fa:16:3e:3c:9d:19 10.100.0.11
Jan 31 07:33:58 compute-0 ovn_controller[149457]: 2026-01-31T07:33:58Z|00073|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c up in Southbound
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.222 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.226 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:9d:19 10.100.0.11'], port_security=['fa:16:3e:3c:9d:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4444d8df-265a-48a7-a945-08eb55a365e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '19', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=cc9a4557-33da-44f5-87b4-ca945cbc819c) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.228 160028 INFO neutron.agent.ovn.metadata.agent [-] Port cc9a4557-33da-44f5-87b4-ca945cbc819c in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 bound to our chassis
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.230 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.240 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8c49c60e-204b-4830-945f-db42ede19ff7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.268 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2b0104-c115-493c-9846-d50f6aabc9b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.272 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7d58b6c4-02d0-43a8-84f9-afe4ea2ccdda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.273 247708 INFO nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Took 35.94 seconds to spawn the instance on the hypervisor.
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.273 247708 DEBUG nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.291 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[90ea58b1-a84d-4085-9fbc-8c5134466c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.306 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe98abe-8ed8-4a12-a041-93537d06d6cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 26, 'tx_packets': 9, 'rx_bytes': 1372, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 26, 'tx_packets': 9, 'rx_bytes': 1372, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499054, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259488, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.318 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7e317771-14ab-41c6-93f7-eaa8c6be53eb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap272cbcfe-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499065, 'tstamp': 499065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259489, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap272cbcfe-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499067, 'tstamp': 499067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259489, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.319 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.321 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.322 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap272cbcfe-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.323 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.323 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap272cbcfe-d0, col_values=(('external_ids', {'iface-id': '8bd64eda-9666-44b9-9b11-431cc2aca18a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:33:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:33:58.323 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.420 247708 INFO nova.compute.manager [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Took 37.18 seconds to build instance.
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.447 247708 DEBUG oslo_concurrency.lockutils [None req-b7158bd6-79fc-44e5-bb7e-f9d958242d71 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 37.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:33:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.449 247708 INFO nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Post operation of migration started
Jan 31 07:33:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 469 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 153 op/s
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.843 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.843 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquired lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:33:58 compute-0 nova_compute[247704]: 2026-01-31 07:33:58.844 247708 DEBUG nova.network.neutron [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:33:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:33:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:59.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:33:59 compute-0 ceph-mon[74496]: pgmap v992: 305 pgs: 305 active+clean; 469 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 153 op/s
Jan 31 07:33:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:33:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:33:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:33:59 compute-0 nova_compute[247704]: 2026-01-31 07:33:59.904 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.100 247708 DEBUG nova.network.neutron [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [{"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.143 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Releasing lock "refresh_cache-4444d8df-265a-48a7-a945-08eb55a365e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.168 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.169 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.169 247708 DEBUG oslo_concurrency.lockutils [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.173 247708 INFO nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 31 07:34:00 compute-0 virtqemud[247621]: Domain id=5 name='instance-00000009' uuid=4444d8df-265a-48a7-a945-08eb55a365e1 is tainted: custom-monitor
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.238 247708 DEBUG nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received event network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.239 247708 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.239 247708 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.239 247708 DEBUG oslo_concurrency.lockutils [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.240 247708 DEBUG nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] No waiting events found dispatching network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.240 247708 WARNING nova.compute.manager [req-ba5474cd-25cc-45e5-9e0a-994a3b5a0f5c req-e4afa335-64b7-42ab-81be-99ab0581efde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received unexpected event network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 for instance with vm_state active and task_state None.
Jan 31 07:34:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 494 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 219 op/s
Jan 31 07:34:00 compute-0 nova_compute[247704]: 2026-01-31 07:34:00.572 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:01 compute-0 nova_compute[247704]: 2026-01-31 07:34:01.181 247708 INFO nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 31 07:34:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:34:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:01.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:34:01 compute-0 ceph-mon[74496]: pgmap v993: 305 pgs: 305 active+clean; 494 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 219 op/s
Jan 31 07:34:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:01.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:02 compute-0 nova_compute[247704]: 2026-01-31 07:34:02.187 247708 INFO nova.virt.libvirt.driver [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 31 07:34:02 compute-0 nova_compute[247704]: 2026-01-31 07:34:02.191 247708 DEBUG nova.compute.manager [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:02 compute-0 nova_compute[247704]: 2026-01-31 07:34:02.295 247708 DEBUG nova.objects.instance [None req-a58f3210-c63c-461d-98d6-096259ef6397 3b647ca4f3ca4e6d93a5ea96dfda0e05 01596079b9b14e4b89c79a7f07fe77f8 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:34:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 522 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.8 MiB/s wr, 275 op/s
Jan 31 07:34:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:03.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:03 compute-0 ceph-mon[74496]: pgmap v994: 305 pgs: 305 active+clean; 522 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.8 MiB/s wr, 275 op/s
Jan 31 07:34:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:03.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:04 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 07:34:04 compute-0 systemd[259120]: Activating special unit Exit the Session...
Jan 31 07:34:04 compute-0 systemd[259120]: Stopped target Main User Target.
Jan 31 07:34:04 compute-0 systemd[259120]: Stopped target Basic System.
Jan 31 07:34:04 compute-0 systemd[259120]: Stopped target Paths.
Jan 31 07:34:04 compute-0 systemd[259120]: Stopped target Sockets.
Jan 31 07:34:04 compute-0 systemd[259120]: Stopped target Timers.
Jan 31 07:34:04 compute-0 systemd[259120]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:34:04 compute-0 systemd[259120]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 07:34:04 compute-0 systemd[259120]: Closed D-Bus User Message Bus Socket.
Jan 31 07:34:04 compute-0 systemd[259120]: Stopped Create User's Volatile Files and Directories.
Jan 31 07:34:04 compute-0 systemd[259120]: Removed slice User Application Slice.
Jan 31 07:34:04 compute-0 systemd[259120]: Reached target Shutdown.
Jan 31 07:34:04 compute-0 systemd[259120]: Finished Exit the Session.
Jan 31 07:34:04 compute-0 systemd[259120]: Reached target Exit the Session.
Jan 31 07:34:04 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 07:34:04 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 07:34:04 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 07:34:04 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 07:34:04 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 07:34:04 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 07:34:04 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 07:34:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 508 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 288 op/s
Jan 31 07:34:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2122459944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:04 compute-0 nova_compute[247704]: 2026-01-31 07:34:04.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:05.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:05 compute-0 nova_compute[247704]: 2026-01-31 07:34:05.590 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:05 compute-0 ceph-mon[74496]: pgmap v995: 305 pgs: 305 active+clean; 508 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 288 op/s
Jan 31 07:34:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4057250205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 495 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 304 op/s
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.629 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.630 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.631 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.631 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.631 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.632 247708 INFO nova.compute.manager [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Terminating instance
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.634 247708 DEBUG nova.compute.manager [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:34:06 compute-0 kernel: tapcc9a4557-33 (unregistering): left promiscuous mode
Jan 31 07:34:06 compute-0 NetworkManager[49108]: <info>  [1769844846.6952] device (tapcc9a4557-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:34:06 compute-0 ovn_controller[149457]: 2026-01-31T07:34:06Z|00074|binding|INFO|Releasing lport cc9a4557-33da-44f5-87b4-ca945cbc819c from this chassis (sb_readonly=0)
Jan 31 07:34:06 compute-0 ovn_controller[149457]: 2026-01-31T07:34:06Z|00075|binding|INFO|Setting lport cc9a4557-33da-44f5-87b4-ca945cbc819c down in Southbound
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.712 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:06 compute-0 ovn_controller[149457]: 2026-01-31T07:34:06Z|00076|binding|INFO|Removing iface tapcc9a4557-33 ovn-installed in OVS
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.714 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.726 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:9d:19 10.100.0.11'], port_security=['fa:16:3e:3c:9d:19 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4444d8df-265a-48a7-a945-08eb55a365e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '21', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=cc9a4557-33da-44f5-87b4-ca945cbc819c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.726 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.728 160028 INFO neutron.agent.ovn.metadata.agent [-] Port cc9a4557-33da-44f5-87b4-ca945cbc819c in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 unbound from our chassis
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.731 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 272cbcfe-dc1b-4319-84a2-27d245d969a3
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.742 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[90ea089d-09ba-42fd-a58e-17e2647aa56f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:06 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 31 07:34:06 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 1.644s CPU time.
Jan 31 07:34:06 compute-0 systemd-machined[214448]: Machine qemu-5-instance-00000009 terminated.
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.768 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[82fc07ac-2633-4cd8-aaa8-46a7daab4eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.771 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8400ca1a-f9ac-434c-8c58-d3889abfba39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.792 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[49c68496-fd1b-4459-8649-9284dcc05501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.804 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[14408cfd-9f1a-4790-aac4-5c8568846e8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap272cbcfe-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:ea:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 41, 'tx_packets': 11, 'rx_bytes': 2002, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 41, 'tx_packets': 11, 'rx_bytes': 2002, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499054, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259509, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.815 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2bedb1d8-97f3-43cc-8bb5-8d8f57edb4a8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap272cbcfe-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499065, 'tstamp': 499065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259510, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap272cbcfe-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 499067, 'tstamp': 499067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259510, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.816 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.822 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap272cbcfe-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.823 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.823 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap272cbcfe-d0, col_values=(('external_ids', {'iface-id': '8bd64eda-9666-44b9-9b11-431cc2aca18a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:06.823 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.865 247708 INFO nova.virt.libvirt.driver [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Instance destroyed successfully.
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.865 247708 DEBUG nova.objects.instance [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lazy-loading 'resources' on Instance uuid 4444d8df-265a-48a7-a945-08eb55a365e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.880 247708 DEBUG nova.virt.libvirt.vif [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:33:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-396476796',display_name='tempest-LiveMigrationTest-server-396476796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-396476796',id=9,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:33:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-m03yz7pt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-48073594',owner_use
r_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:34:02Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=4444d8df-265a-48a7-a945-08eb55a365e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.881 247708 DEBUG nova.network.os_vif_util [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converting VIF {"id": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "address": "fa:16:3e:3c:9d:19", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc9a4557-33", "ovs_interfaceid": "cc9a4557-33da-44f5-87b4-ca945cbc819c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.882 247708 DEBUG nova.network.os_vif_util [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.882 247708 DEBUG os_vif [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.884 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.885 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc9a4557-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.886 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.887 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.892 247708 INFO os_vif [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:9d:19,bridge_name='br-int',has_traffic_filtering=True,id=cc9a4557-33da-44f5-87b4-ca945cbc819c,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc9a4557-33')
Jan 31 07:34:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3395463091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.962 247708 DEBUG nova.compute.manager [req-5f013e09-8248-4c61-b5e2-ae3321ffbcc3 req-48102a30-f057-418a-9dce-92b650bdf48a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.962 247708 DEBUG oslo_concurrency.lockutils [req-5f013e09-8248-4c61-b5e2-ae3321ffbcc3 req-48102a30-f057-418a-9dce-92b650bdf48a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.962 247708 DEBUG oslo_concurrency.lockutils [req-5f013e09-8248-4c61-b5e2-ae3321ffbcc3 req-48102a30-f057-418a-9dce-92b650bdf48a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.963 247708 DEBUG oslo_concurrency.lockutils [req-5f013e09-8248-4c61-b5e2-ae3321ffbcc3 req-48102a30-f057-418a-9dce-92b650bdf48a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.963 247708 DEBUG nova.compute.manager [req-5f013e09-8248-4c61-b5e2-ae3321ffbcc3 req-48102a30-f057-418a-9dce-92b650bdf48a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:06 compute-0 nova_compute[247704]: 2026-01-31 07:34:06.963 247708 DEBUG nova.compute.manager [req-5f013e09-8248-4c61-b5e2-ae3321ffbcc3 req-48102a30-f057-418a-9dce-92b650bdf48a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-unplugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.216 247708 INFO nova.virt.libvirt.driver [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deleting instance files /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1_del
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.217 247708 INFO nova.virt.libvirt.driver [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deletion of /var/lib/nova/instances/4444d8df-265a-48a7-a945-08eb55a365e1_del complete
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.300 247708 INFO nova.compute.manager [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Took 0.67 seconds to destroy the instance on the hypervisor.
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.302 247708 DEBUG oslo.service.loopingcall [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.303 247708 DEBUG nova.compute.manager [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.303 247708 DEBUG nova.network.neutron [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.383 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.384 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquired lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.384 247708 DEBUG nova.network.neutron [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:34:07 compute-0 sudo[259543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:07 compute-0 sudo[259543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:07 compute-0 sudo[259543]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:07.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:07 compute-0 sudo[259574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:07 compute-0 podman[259567]: 2026-01-31 07:34:07.591498449 +0000 UTC m=+0.049624085 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 31 07:34:07 compute-0 sudo[259574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:07 compute-0 sudo[259574]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.659 247708 DEBUG nova.network.neutron [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.798 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "425ccf69-9d5a-48da-9def-1744abd51e09" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.798 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.799 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.799 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.799 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.800 247708 INFO nova.compute.manager [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Terminating instance
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.801 247708 DEBUG nova.compute.manager [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:34:07 compute-0 kernel: tap3dda480d-40 (unregistering): left promiscuous mode
Jan 31 07:34:07 compute-0 NetworkManager[49108]: <info>  [1769844847.8475] device (tap3dda480d-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:34:07 compute-0 ovn_controller[149457]: 2026-01-31T07:34:07Z|00077|binding|INFO|Releasing lport 3dda480d-405b-4a74-bbcf-da85ac266fd3 from this chassis (sb_readonly=0)
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.851 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:07 compute-0 ovn_controller[149457]: 2026-01-31T07:34:07Z|00078|binding|INFO|Setting lport 3dda480d-405b-4a74-bbcf-da85ac266fd3 down in Southbound
Jan 31 07:34:07 compute-0 ovn_controller[149457]: 2026-01-31T07:34:07Z|00079|binding|INFO|Removing iface tap3dda480d-40 ovn-installed in OVS
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.856 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:07.864 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:f5:c8 10.1.0.34 fdfe:381f:8400::b0'], port_security=['fa:16:3e:fc:f5:c8 10.1.0.34 fdfe:381f:8400::b0'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.34/26 fdfe:381f:8400::b0/64', 'neutron:device_id': '425ccf69-9d5a-48da-9def-1744abd51e09', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca84ce9280d74b4588f89bf679f563fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4c568fd1-1fe0-466e-9918-da0b2ee34267', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5e73fc0-b2f3-4e49-a543-f424bee97362, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3dda480d-405b-4a74-bbcf-da85ac266fd3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.866 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:07.868 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3dda480d-405b-4a74-bbcf-da85ac266fd3 in datapath 3741a1e4-1d7f-4ca0-b02c-790a05701782 unbound from our chassis
Jan 31 07:34:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:07.870 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3741a1e4-1d7f-4ca0-b02c-790a05701782, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:34:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:07.871 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[209a3558-1b61-495a-ab8c-dca961a4a729]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:07.872 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 namespace which is not needed anymore
Jan 31 07:34:07 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 31 07:34:07 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Consumed 10.513s CPU time.
Jan 31 07:34:07 compute-0 systemd-machined[214448]: Machine qemu-6-instance-0000000c terminated.
Jan 31 07:34:07 compute-0 ceph-mon[74496]: pgmap v996: 305 pgs: 305 active+clean; 495 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 304 op/s
Jan 31 07:34:07 compute-0 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[259468]: [NOTICE]   (259472) : haproxy version is 2.8.14-c23fe91
Jan 31 07:34:07 compute-0 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[259468]: [NOTICE]   (259472) : path to executable is /usr/sbin/haproxy
Jan 31 07:34:07 compute-0 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[259468]: [WARNING]  (259472) : Exiting Master process...
Jan 31 07:34:07 compute-0 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[259468]: [WARNING]  (259472) : Exiting Master process...
Jan 31 07:34:07 compute-0 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[259468]: [ALERT]    (259472) : Current worker (259474) exited with code 143 (Terminated)
Jan 31 07:34:07 compute-0 neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782[259468]: [WARNING]  (259472) : All workers exited. Exiting... (0)
Jan 31 07:34:07 compute-0 nova_compute[247704]: 2026-01-31 07:34:07.992 247708 DEBUG nova.network.neutron [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:07 compute-0 systemd[1]: libpod-1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0.scope: Deactivated successfully.
Jan 31 07:34:08 compute-0 podman[259634]: 2026-01-31 07:34:08.003397405 +0000 UTC m=+0.045867395 container died 1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.013 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Releasing lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.025 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.029 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.039 247708 INFO nova.virt.libvirt.driver [-] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Instance destroyed successfully.
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.040 247708 DEBUG nova.objects.instance [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lazy-loading 'resources' on Instance uuid 425ccf69-9d5a-48da-9def-1744abd51e09 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0-userdata-shm.mount: Deactivated successfully.
Jan 31 07:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0994be95f65fa0d84c052e8e32d8724564efb2343570cb7eb879aefccc66abe-merged.mount: Deactivated successfully.
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.059 247708 DEBUG nova.virt.libvirt.vif [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:33:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2133368505-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2133368505-2',id=12,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-31T07:33:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca84ce9280d74b4588f89bf679f563fa',ramdisk_id='',reservation_id='r-xsv2fd0p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-1036306085',owner_user_name='tempest-AutoAllocateNetworkTest-1036306085-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:33:58Z,user_data=None,user_id='38001f2ce5654228b098939fd9619d3e',uuid=425ccf69-9d5a-48da-9def-1744abd51e09,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.060 247708 DEBUG nova.network.os_vif_util [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converting VIF {"id": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "address": "fa:16:3e:fc:f5:c8", "network": {"id": "3741a1e4-1d7f-4ca0-b02c-790a05701782", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.34", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::b0", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca84ce9280d74b4588f89bf679f563fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dda480d-40", "ovs_interfaceid": "3dda480d-405b-4a74-bbcf-da85ac266fd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.060 247708 DEBUG nova.network.os_vif_util [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:f5:c8,bridge_name='br-int',has_traffic_filtering=True,id=3dda480d-405b-4a74-bbcf-da85ac266fd3,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dda480d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.061 247708 DEBUG os_vif [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:f5:c8,bridge_name='br-int',has_traffic_filtering=True,id=3dda480d-405b-4a74-bbcf-da85ac266fd3,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dda480d-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.063 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3dda480d-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.067 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.071 247708 INFO os_vif [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:f5:c8,bridge_name='br-int',has_traffic_filtering=True,id=3dda480d-405b-4a74-bbcf-da85ac266fd3,network=Network(3741a1e4-1d7f-4ca0-b02c-790a05701782),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dda480d-40')
Jan 31 07:34:08 compute-0 podman[259634]: 2026-01-31 07:34:08.082038468 +0000 UTC m=+0.124508488 container cleanup 1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:34:08 compute-0 systemd[1]: libpod-conmon-1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0.scope: Deactivated successfully.
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.101 247708 DEBUG nova.network.neutron [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.134 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.135 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.135 247708 INFO nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Creating image(s)
Jan 31 07:34:08 compute-0 podman[259690]: 2026-01-31 07:34:08.151379307 +0000 UTC m=+0.048455157 container remove 1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.158 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[081de4bc-72a7-47dd-a0b5-7b3b8dd840fe]: (4, ('Sat Jan 31 07:34:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 (1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0)\n1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0\nSat Jan 31 07:34:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 (1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0)\n1f26bc474205c363ac8e49ca84aa9b0df584b3d7821f168b14fc086f6ea828c0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.160 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[32f71899-82d5-4b6b-8246-2847930a69af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.161 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3741a1e4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:08 compute-0 kernel: tap3741a1e4-10: left promiscuous mode
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.174 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[080e33ec-9a9d-475c-a032-236db9524e94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.183 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.186 247708 INFO nova.compute.manager [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Took 0.88 seconds to deallocate network for instance.
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.190 247708 DEBUG nova.storage.rbd_utils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] creating snapshot(nova-resize) on rbd image(d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.189 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f70a47e1-6ac4-4e13-8107-e12e22a0ed30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.191 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b99c9400-384e-491a-86b7-33c105105dcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.211 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[78a97e05-32f8-4dd4-a080-45f804d01bc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507976, 'reachable_time': 42719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259727, 'error': None, 'target': 'ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d3741a1e4\x2d1d7f\x2d4ca0\x2db02c\x2d790a05701782.mount: Deactivated successfully.
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.216 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3741a1e4-1d7f-4ca0-b02c-790a05701782 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:34:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:08.217 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fb3942-b6bf-4408-a637-82cde47d1122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.238 247708 DEBUG nova.compute.manager [req-5cc386cf-8904-45b8-a8c3-98fb20e321e9 req-907f4698-1bf4-417d-8564-58013ea675d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-deleted-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 484 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 311 op/s
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.697 247708 INFO nova.compute.manager [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Took 0.46 seconds to detach 1 volumes for instance.
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.700 247708 DEBUG nova.compute.manager [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Deleting volume: 802e207e-cc7b-4779-89dc-d399ba68dc38 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.724 247708 INFO nova.virt.libvirt.driver [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Deleting instance files /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09_del
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.725 247708 INFO nova.virt.libvirt.driver [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Deletion of /var/lib/nova/instances/425ccf69-9d5a-48da-9def-1744abd51e09_del complete
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.788 247708 INFO nova.compute.manager [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Took 0.99 seconds to destroy the instance on the hypervisor.
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.789 247708 DEBUG oslo.service.loopingcall [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.789 247708 DEBUG nova.compute.manager [-] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.789 247708 DEBUG nova.network.neutron [-] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.959 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.960 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.965 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 31 07:34:08 compute-0 nova_compute[247704]: 2026-01-31 07:34:08.989 247708 INFO nova.scheduler.client.report [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Deleted allocations for instance 4444d8df-265a-48a7-a945-08eb55a365e1
Jan 31 07:34:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 31 07:34:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.063 247708 DEBUG nova.objects.instance [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.067 247708 DEBUG oslo_concurrency.lockutils [None req-33ab0a32-3fc9-4666-98cc-d17658f18ac2 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.194 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.194 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Ensure instance console log exists: /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.195 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.195 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.196 247708 DEBUG oslo_concurrency.lockutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.197 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.202 247708 WARNING nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.206 247708 DEBUG nova.virt.libvirt.host [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.207 247708 DEBUG nova.virt.libvirt.host [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.210 247708 DEBUG nova.virt.libvirt.host [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.210 247708 DEBUG nova.virt.libvirt.host [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.212 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.212 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.213 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.213 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.214 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.214 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.214 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.214 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.214 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.214 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.215 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.215 247708 DEBUG nova.virt.hardware [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.215 247708 DEBUG nova.objects.instance [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.220 247708 DEBUG nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.221 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.222 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.222 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4444d8df-265a-48a7-a945-08eb55a365e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.222 247708 DEBUG nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] No waiting events found dispatching network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.222 247708 WARNING nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Received unexpected event network-vif-plugged-cc9a4557-33da-44f5-87b4-ca945cbc819c for instance with vm_state deleted and task_state None.
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.223 247708 DEBUG nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received event network-vif-unplugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.223 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.223 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.224 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.224 247708 DEBUG nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] No waiting events found dispatching network-vif-unplugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.224 247708 DEBUG nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received event network-vif-unplugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.224 247708 DEBUG nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received event network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.225 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.225 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.225 247708 DEBUG oslo_concurrency.lockutils [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.225 247708 DEBUG nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] No waiting events found dispatching network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.226 247708 WARNING nova.compute.manager [req-a71849dc-8a83-4180-90b1-782b7cad8096 req-268c63fc-33df-4703-bfee-4ca964c209fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received unexpected event network-vif-plugged-3dda480d-405b-4a74-bbcf-da85ac266fd3 for instance with vm_state active and task_state deleting.
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.235 247708 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.478 247708 DEBUG nova.network.neutron [-] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.496 247708 INFO nova.compute.manager [-] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Took 0.71 seconds to deallocate network for instance.
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.562 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.563 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:09.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:34:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1687764905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.670 247708 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.698 247708 DEBUG oslo_concurrency.processutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:09 compute-0 nova_compute[247704]: 2026-01-31 07:34:09.718 247708 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:09.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:10 compute-0 ceph-mon[74496]: pgmap v997: 305 pgs: 305 active+clean; 484 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 311 op/s
Jan 31 07:34:10 compute-0 ceph-mon[74496]: osdmap e141: 3 total, 3 up, 3 in
Jan 31 07:34:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1687764905' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3166809655' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:34:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3166809655' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:34:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:34:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1560336513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.165 247708 DEBUG oslo_concurrency.processutils [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505288083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.171 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <uuid>d7327aed-ddc6-4772-8d2e-6b8be365dd2b</uuid>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <name>instance-0000000e</name>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <nova:name>tempest-MigrationsAdminTest-server-1723888188</nova:name>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:34:09</nova:creationTime>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <system>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <entry name="serial">d7327aed-ddc6-4772-8d2e-6b8be365dd2b</entry>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <entry name="uuid">d7327aed-ddc6-4772-8d2e-6b8be365dd2b</entry>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     </system>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <os>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   </os>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <features>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   </features>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk">
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       </source>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/d7327aed-ddc6-4772-8d2e-6b8be365dd2b_disk.config">
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       </source>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:34:10 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b/console.log" append="off"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <video>
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     </video>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:34:10 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:34:10 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:34:10 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:34:10 compute-0 nova_compute[247704]: </domain>
Jan 31 07:34:10 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.189 247708 DEBUG oslo_concurrency.processutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.195 247708 DEBUG nova.compute.provider_tree [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.213 247708 DEBUG nova.scheduler.client.report [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.234 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.235 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.235 247708 INFO nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Using config drive
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.276 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.292 247708 DEBUG nova.compute.manager [req-a26996b4-4665-49f1-9416-db92d37dd9b0 req-2652ee67-ae21-4b35-bf6e-20130924b442 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Received event network-vif-deleted-3dda480d-405b-4a74-bbcf-da85ac266fd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:10 compute-0 systemd-machined[214448]: New machine qemu-7-instance-0000000e.
Jan 31 07:34:10 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000e.
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.411 247708 INFO nova.scheduler.client.report [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Deleted allocations for instance 425ccf69-9d5a-48da-9def-1744abd51e09
Jan 31 07:34:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 474 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 277 op/s
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.575 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.576 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.576 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.576 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.577 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.578 247708 INFO nova.compute.manager [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Terminating instance
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.579 247708 DEBUG nova.compute.manager [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.592 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.596 247708 DEBUG oslo_concurrency.lockutils [None req-82274349-125d-4e8c-8b41-6df1a461097c 38001f2ce5654228b098939fd9619d3e ca84ce9280d74b4588f89bf679f563fa - - default default] Lock "425ccf69-9d5a-48da-9def-1744abd51e09" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:10 compute-0 kernel: tap2da07bdf-31 (unregistering): left promiscuous mode
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.638 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 NetworkManager[49108]: <info>  [1769844850.6417] device (tap2da07bdf-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.648 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 ovn_controller[149457]: 2026-01-31T07:34:10Z|00080|binding|INFO|Releasing lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 from this chassis (sb_readonly=0)
Jan 31 07:34:10 compute-0 ovn_controller[149457]: 2026-01-31T07:34:10Z|00081|binding|INFO|Setting lport 2da07bdf-313d-4a90-a81e-e531c63b3d54 down in Southbound
Jan 31 07:34:10 compute-0 ovn_controller[149457]: 2026-01-31T07:34:10Z|00082|binding|INFO|Releasing lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 from this chassis (sb_readonly=0)
Jan 31 07:34:10 compute-0 ovn_controller[149457]: 2026-01-31T07:34:10Z|00083|binding|INFO|Setting lport 6e847cdf-cab0-4432-ba18-1faa5270e0d7 down in Southbound
Jan 31 07:34:10 compute-0 ovn_controller[149457]: 2026-01-31T07:34:10Z|00084|binding|INFO|Removing iface tap2da07bdf-31 ovn-installed in OVS
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.651 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 ovn_controller[149457]: 2026-01-31T07:34:10Z|00085|binding|INFO|Releasing lport 8bd64eda-9666-44b9-9b11-431cc2aca18a from this chassis (sb_readonly=0)
Jan 31 07:34:10 compute-0 ovn_controller[149457]: 2026-01-31T07:34:10Z|00086|binding|INFO|Releasing lport 37e0bde4-4a51-4973-b284-d740caeb19be from this chassis (sb_readonly=0)
Jan 31 07:34:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:10.656 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:9e:1b 10.100.0.5'], port_security=['fa:16:3e:cc:9e:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-211383396', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '23c338db-50ed-434c-ac85-8190b9b5f194', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-211383396', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8850cb79-5a97-415d-8eee-4d7273f04968, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=2da07bdf-313d-4a90-a81e-e531c63b3d54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:34:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:10.658 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:75 19.80.0.160'], port_security=['fa:16:3e:dc:25:75 19.80.0.160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['2da07bdf-313d-4a90-a81e-e531c63b3d54'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-70189147', 'neutron:cidrs': '19.80.0.160/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6abbefe0-4d30-4477-876e-e1412d7347f2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-70189147', 'neutron:project_id': 'dfb4d4079ac944b288d5e285ce1de95a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'abfb90a3-3499-4078-8409-95077c250314', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=00556c7f-96d8-4b10-939b-86f9a6371447, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6e847cdf-cab0-4432-ba18-1faa5270e0d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:34:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:10.659 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 2da07bdf-313d-4a90-a81e-e531c63b3d54 in datapath 272cbcfe-dc1b-4319-84a2-27d245d969a3 unbound from our chassis
Jan 31 07:34:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:10.661 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 272cbcfe-dc1b-4319-84a2-27d245d969a3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.663 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:10.664 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebc37ec-c472-4e0a-aca5-511457fe03dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:10.665 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 namespace which is not needed anymore
Jan 31 07:34:10 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 31 07:34:10 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Consumed 5.936s CPU time.
Jan 31 07:34:10 compute-0 systemd-machined[214448]: Machine qemu-3-instance-00000005 terminated.
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.716 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[257202]: [NOTICE]   (257206) : haproxy version is 2.8.14-c23fe91
Jan 31 07:34:10 compute-0 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[257202]: [NOTICE]   (257206) : path to executable is /usr/sbin/haproxy
Jan 31 07:34:10 compute-0 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[257202]: [WARNING]  (257206) : Exiting Master process...
Jan 31 07:34:10 compute-0 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[257202]: [WARNING]  (257206) : Exiting Master process...
Jan 31 07:34:10 compute-0 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[257202]: [ALERT]    (257206) : Current worker (257208) exited with code 143 (Terminated)
Jan 31 07:34:10 compute-0 neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3[257202]: [WARNING]  (257206) : All workers exited. Exiting... (0)
Jan 31 07:34:10 compute-0 systemd[1]: libpod-08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d.scope: Deactivated successfully.
Jan 31 07:34:10 compute-0 podman[259925]: 2026-01-31 07:34:10.797356714 +0000 UTC m=+0.046833608 container died 08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.797 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.804 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.813 247708 INFO nova.virt.libvirt.driver [-] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Instance destroyed successfully.
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.813 247708 DEBUG nova.objects.instance [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lazy-loading 'resources' on Instance uuid 23c338db-50ed-434c-ac85-8190b9b5f194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.838 247708 DEBUG nova.virt.libvirt.vif [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:31:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-933343300',display_name='tempest-LiveMigrationTest-server-933343300',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-933343300',id=5,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:31:39Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dfb4d4079ac944b288d5e285ce1de95a',ramdisk_id='',reservation_id='r-kv2e0wfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-48073594',owner_user_name='tempest-LiveMigrationTest-48073594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:32:36Z,user_data=None,user_id='0a59df5da6284e4e8764816e1f8dfaa3',uuid=23c338db-50ed-434c-ac85-8190b9b5f194,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.839 247708 DEBUG nova.network.os_vif_util [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converting VIF {"id": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "address": "fa:16:3e:cc:9e:1b", "network": {"id": "272cbcfe-dc1b-4319-84a2-27d245d969a3", "bridge": "br-int", "label": "tempest-LiveMigrationTest-397081098-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dfb4d4079ac944b288d5e285ce1de95a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2da07bdf-31", "ovs_interfaceid": "2da07bdf-313d-4a90-a81e-e531c63b3d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.840 247708 DEBUG nova.network.os_vif_util [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.841 247708 DEBUG os_vif [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.842 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.842 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2da07bdf-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.844 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.846 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.848 247708 INFO os_vif [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:9e:1b,bridge_name='br-int',has_traffic_filtering=True,id=2da07bdf-313d-4a90-a81e-e531c63b3d54,network=Network(272cbcfe-dc1b-4319-84a2-27d245d969a3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2da07bdf-31')
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.889 247708 DEBUG nova.compute.manager [req-5557477e-e2d7-42d6-aa94-05a3638847e3 req-f9f8d558-67c1-4d47-a595-917dcd332e5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.889 247708 DEBUG oslo_concurrency.lockutils [req-5557477e-e2d7-42d6-aa94-05a3638847e3 req-f9f8d558-67c1-4d47-a595-917dcd332e5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.889 247708 DEBUG oslo_concurrency.lockutils [req-5557477e-e2d7-42d6-aa94-05a3638847e3 req-f9f8d558-67c1-4d47-a595-917dcd332e5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.890 247708 DEBUG oslo_concurrency.lockutils [req-5557477e-e2d7-42d6-aa94-05a3638847e3 req-f9f8d558-67c1-4d47-a595-917dcd332e5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.890 247708 DEBUG nova.compute.manager [req-5557477e-e2d7-42d6-aa94-05a3638847e3 req-f9f8d558-67c1-4d47-a595-917dcd332e5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:10 compute-0 nova_compute[247704]: 2026-01-31 07:34:10.890 247708 DEBUG nova.compute.manager [req-5557477e-e2d7-42d6-aa94-05a3638847e3 req-f9f8d558-67c1-4d47-a595-917dcd332e5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-unplugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d-userdata-shm.mount: Deactivated successfully.
Jan 31 07:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d055446ad0f3bfe0fb24585c20d727579f33cffb1884aaa746cefe1f9c4b736-merged.mount: Deactivated successfully.
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.143 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.145 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.146 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:11 compute-0 podman[259925]: 2026-01-31 07:34:11.245496702 +0000 UTC m=+0.494973586 container cleanup 08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:34:11 compute-0 systemd[1]: libpod-conmon-08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d.scope: Deactivated successfully.
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.278 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844851.2781696, d7327aed-ddc6-4772-8d2e-6b8be365dd2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.278 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] VM Resumed (Lifecycle Event)
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.281 247708 DEBUG nova.compute.manager [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.283 247708 INFO nova.virt.libvirt.driver [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance running successfully.
Jan 31 07:34:11 compute-0 virtqemud[247621]: argument unsupported: QEMU guest agent is not configured
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.287 247708 DEBUG nova.virt.libvirt.guest [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.287 247708 DEBUG nova.virt.libvirt.driver [None req-064a9eab-792d-4849-b2df-dc3e13a9a712 ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.295 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.298 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.343 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.343 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844851.2806928, d7327aed-ddc6-4772-8d2e-6b8be365dd2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.343 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] VM Started (Lifecycle Event)
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.387 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.390 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:34:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1560336513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1505288083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:11.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:11 compute-0 podman[260018]: 2026-01-31 07:34:11.75217895 +0000 UTC m=+0.482306402 container remove 08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.758 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb0686c-3f21-45f2-8a41-834b6b152a59]: (4, ('Sat Jan 31 07:34:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 (08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d)\n08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d\nSat Jan 31 07:34:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 (08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d)\n08a18533690a89ba89257a96e32557148ed14d9da649dbc51ddfbf651f6ec90d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.760 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1e7e04-8d46-4e61-87b9-4951bc2dcade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.762 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap272cbcfe-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.764 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:11 compute-0 kernel: tap272cbcfe-d0: left promiscuous mode
Jan 31 07:34:11 compute-0 nova_compute[247704]: 2026-01-31 07:34:11.770 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.773 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a5f8f9-fe28-4b24-aeb4-8fdc277bde78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.800 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[32e593b1-c9c5-4424-a662-7752366f64fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.802 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[afdbe585-8baf-45aa-9cf8-ce59c5dace63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.821 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[48daff79-4b14-4415-a250-6c3e50244630]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499047, 'reachable_time': 40461, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260035, 'error': None, 'target': 'ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d272cbcfe\x2ddc1b\x2d4319\x2d84a2\x2d27d245d969a3.mount: Deactivated successfully.
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.823 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-272cbcfe-dc1b-4319-84a2-27d245d969a3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.823 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3b12a6-9a49-41df-8880-9c500e3d00eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.825 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6e847cdf-cab0-4432-ba18-1faa5270e0d7 in datapath 6abbefe0-4d30-4477-876e-e1412d7347f2 unbound from our chassis
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.826 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6abbefe0-4d30-4477-876e-e1412d7347f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.826 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbbe38f-2145-4968-9514-80e255b240fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:11.827 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 namespace which is not needed anymore
Jan 31 07:34:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:11 compute-0 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[257277]: [NOTICE]   (257281) : haproxy version is 2.8.14-c23fe91
Jan 31 07:34:11 compute-0 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[257277]: [NOTICE]   (257281) : path to executable is /usr/sbin/haproxy
Jan 31 07:34:11 compute-0 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[257277]: [WARNING]  (257281) : Exiting Master process...
Jan 31 07:34:11 compute-0 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[257277]: [ALERT]    (257281) : Current worker (257283) exited with code 143 (Terminated)
Jan 31 07:34:11 compute-0 neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2[257277]: [WARNING]  (257281) : All workers exited. Exiting... (0)
Jan 31 07:34:11 compute-0 systemd[1]: libpod-7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d.scope: Deactivated successfully.
Jan 31 07:34:11 compute-0 podman[260053]: 2026-01-31 07:34:11.95698769 +0000 UTC m=+0.059738768 container died 7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d-userdata-shm.mount: Deactivated successfully.
Jan 31 07:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-74cfcf56b04203de84d449420fd74cd9034daba32ffa3045588f3799532b3da7-merged.mount: Deactivated successfully.
Jan 31 07:34:12 compute-0 podman[260053]: 2026-01-31 07:34:12.155756325 +0000 UTC m=+0.258507413 container cleanup 7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:34:12 compute-0 systemd[1]: libpod-conmon-7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d.scope: Deactivated successfully.
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.190 247708 INFO nova.virt.libvirt.driver [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Deleting instance files /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194_del
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.190 247708 INFO nova.virt.libvirt.driver [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Deletion of /var/lib/nova/instances/23c338db-50ed-434c-ac85-8190b9b5f194_del complete
Jan 31 07:34:12 compute-0 podman[260085]: 2026-01-31 07:34:12.263944699 +0000 UTC m=+0.083899570 container remove 7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.271 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dc302a76-cb56-4460-ad9a-88bfeed453f1]: (4, ('Sat Jan 31 07:34:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 (7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d)\n7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d\nSat Jan 31 07:34:12 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 (7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d)\n7ae9ad975185432d3361ec21bc1cdd2103c0bd3ae0971b4c82706b0312abdb2d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.273 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e0771aa0-9032-4c7e-a989-03980b8a9dce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.274 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6abbefe0-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:12 compute-0 kernel: tap6abbefe0-40: left promiscuous mode
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.294 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.301 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.305 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f369c83d-4e0e-47ec-b6bb-dc3275ea6459]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.305 247708 INFO nova.compute.manager [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Took 1.73 seconds to destroy the instance on the hypervisor.
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.306 247708 DEBUG oslo.service.loopingcall [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.306 247708 DEBUG nova.compute.manager [-] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.306 247708 DEBUG nova.network.neutron [-] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.325 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a304b7d3-82ae-4483-8a59-39dd7761129d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.327 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7a9149-7076-4e7d-ad97-4942122f7916]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.340 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd33164-7ee4-41c5-9191-b51076797891]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 499139, 'reachable_time': 43983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260100, 'error': None, 'target': 'ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d6abbefe0\x2d4d30\x2d4477\x2d876e\x2de1412d7347f2.mount: Deactivated successfully.
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.342 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6abbefe0-4d30-4477-876e-e1412d7347f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:34:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:12.342 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[74474466-d2d5-41f4-9f24-ab6d63fc70d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:34:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 428 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 685 KiB/s wr, 185 op/s
Jan 31 07:34:12 compute-0 ceph-mon[74496]: pgmap v999: 305 pgs: 305 active+clean; 474 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 277 op/s
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.990 247708 DEBUG nova.compute.manager [req-277f5b72-1683-4e2e-9fb6-b8a0074cc129 req-83ae72de-c37d-4ff6-a845-c7a9823c5808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.991 247708 DEBUG oslo_concurrency.lockutils [req-277f5b72-1683-4e2e-9fb6-b8a0074cc129 req-83ae72de-c37d-4ff6-a845-c7a9823c5808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.991 247708 DEBUG oslo_concurrency.lockutils [req-277f5b72-1683-4e2e-9fb6-b8a0074cc129 req-83ae72de-c37d-4ff6-a845-c7a9823c5808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.992 247708 DEBUG oslo_concurrency.lockutils [req-277f5b72-1683-4e2e-9fb6-b8a0074cc129 req-83ae72de-c37d-4ff6-a845-c7a9823c5808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.992 247708 DEBUG nova.compute.manager [req-277f5b72-1683-4e2e-9fb6-b8a0074cc129 req-83ae72de-c37d-4ff6-a845-c7a9823c5808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] No waiting events found dispatching network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:34:12 compute-0 nova_compute[247704]: 2026-01-31 07:34:12.993 247708 WARNING nova.compute.manager [req-277f5b72-1683-4e2e-9fb6-b8a0074cc129 req-83ae72de-c37d-4ff6-a845-c7a9823c5808 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Received unexpected event network-vif-plugged-2da07bdf-313d-4a90-a81e-e531c63b3d54 for instance with vm_state active and task_state deleting.
Jan 31 07:34:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:13.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 31 07:34:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 31 07:34:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 31 07:34:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:14 compute-0 ceph-mon[74496]: pgmap v1000: 305 pgs: 305 active+clean; 428 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 685 KiB/s wr, 185 op/s
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.236 247708 DEBUG nova.network.neutron [-] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.255 247708 INFO nova.compute.manager [-] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Took 1.95 seconds to deallocate network for instance.
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.298 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.299 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.392 247708 DEBUG oslo_concurrency.processutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 259 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 262 op/s
Jan 31 07:34:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040447222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.821 247708 DEBUG oslo_concurrency.processutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.827 247708 DEBUG nova.compute.provider_tree [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.858 247708 DEBUG nova.scheduler.client.report [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.884 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:14 compute-0 nova_compute[247704]: 2026-01-31 07:34:14.923 247708 INFO nova.scheduler.client.report [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Deleted allocations for instance 23c338db-50ed-434c-ac85-8190b9b5f194
Jan 31 07:34:15 compute-0 nova_compute[247704]: 2026-01-31 07:34:15.038 247708 DEBUG oslo_concurrency.lockutils [None req-7b04b519-b2fa-44ee-8d0c-deca6333e001 0a59df5da6284e4e8764816e1f8dfaa3 dfb4d4079ac944b288d5e285ce1de95a - - default default] Lock "23c338db-50ed-434c-ac85-8190b9b5f194" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:15 compute-0 ceph-mon[74496]: osdmap e142: 3 total, 3 up, 3 in
Jan 31 07:34:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1040447222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2027725969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:15 compute-0 nova_compute[247704]: 2026-01-31 07:34:15.637 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:15 compute-0 nova_compute[247704]: 2026-01-31 07:34:15.844 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:15.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:16 compute-0 ceph-mon[74496]: pgmap v1002: 305 pgs: 305 active+clean; 259 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 262 op/s
Jan 31 07:34:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 217 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.1 KiB/s wr, 300 op/s
Jan 31 07:34:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:16.720 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:34:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:16.721 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:34:16 compute-0 nova_compute[247704]: 2026-01-31 07:34:16.768 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3529144254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:17.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:17.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:18 compute-0 ceph-mon[74496]: pgmap v1003: 305 pgs: 305 active+clean; 217 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.1 KiB/s wr, 300 op/s
Jan 31 07:34:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 200 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.5 KiB/s wr, 257 op/s
Jan 31 07:34:19 compute-0 ceph-mon[74496]: pgmap v1004: 305 pgs: 305 active+clean; 200 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.5 KiB/s wr, 257 op/s
Jan 31 07:34:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:19.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:19.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:34:20
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', '.mgr', 'volumes']
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:34:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 200 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.8 KiB/s wr, 209 op/s
Jan 31 07:34:20 compute-0 nova_compute[247704]: 2026-01-31 07:34:20.638 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:20 compute-0 nova_compute[247704]: 2026-01-31 07:34:20.846 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:21.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:21 compute-0 ceph-mon[74496]: pgmap v1005: 305 pgs: 305 active+clean; 200 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.8 KiB/s wr, 209 op/s
Jan 31 07:34:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2465457816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:21 compute-0 nova_compute[247704]: 2026-01-31 07:34:21.862 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844846.8615427, 4444d8df-265a-48a7-a945-08eb55a365e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:21 compute-0 nova_compute[247704]: 2026-01-31 07:34:21.863 247708 INFO nova.compute.manager [-] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] VM Stopped (Lifecycle Event)
Jan 31 07:34:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:21.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:21 compute-0 nova_compute[247704]: 2026-01-31 07:34:21.939 247708 DEBUG nova.compute.manager [None req-4d2ad5ad-c70d-443e-b851-01ccf69a79f0 - - - - - -] [instance: 4444d8df-265a-48a7-a945-08eb55a365e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 200 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.7 KiB/s wr, 179 op/s
Jan 31 07:34:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:34:22.723 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:34:22 compute-0 podman[260128]: 2026-01-31 07:34:22.968263608 +0000 UTC m=+0.141954049 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:34:23 compute-0 nova_compute[247704]: 2026-01-31 07:34:23.037 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844848.0357654, 425ccf69-9d5a-48da-9def-1744abd51e09 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:23 compute-0 nova_compute[247704]: 2026-01-31 07:34:23.037 247708 INFO nova.compute.manager [-] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] VM Stopped (Lifecycle Event)
Jan 31 07:34:23 compute-0 nova_compute[247704]: 2026-01-31 07:34:23.064 247708 DEBUG nova.compute.manager [None req-e6ea78c7-a5dd-4153-a464-5480aacf0561 - - - - - -] [instance: 425ccf69-9d5a-48da-9def-1744abd51e09] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 31 07:34:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 31 07:34:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 31 07:34:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:23.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:23.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:24 compute-0 ceph-mon[74496]: pgmap v1006: 305 pgs: 305 active+clean; 200 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.7 KiB/s wr, 179 op/s
Jan 31 07:34:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1668458985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:24 compute-0 ceph-mon[74496]: osdmap e143: 3 total, 3 up, 3 in
Jan 31 07:34:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3704704504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 239 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 870 KiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 31 07:34:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:25.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:25 compute-0 nova_compute[247704]: 2026-01-31 07:34:25.641 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:25 compute-0 nova_compute[247704]: 2026-01-31 07:34:25.807 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844850.8062022, 23c338db-50ed-434c-ac85-8190b9b5f194 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:25 compute-0 nova_compute[247704]: 2026-01-31 07:34:25.807 247708 INFO nova.compute.manager [-] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] VM Stopped (Lifecycle Event)
Jan 31 07:34:25 compute-0 nova_compute[247704]: 2026-01-31 07:34:25.847 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:25.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:26 compute-0 nova_compute[247704]: 2026-01-31 07:34:26.001 247708 DEBUG nova.compute.manager [None req-a1e6fd3a-0cc0-4c97-8534-92bd77214860 - - - - - -] [instance: 23c338db-50ed-434c-ac85-8190b9b5f194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:26 compute-0 ceph-mon[74496]: pgmap v1008: 305 pgs: 305 active+clean; 239 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 870 KiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 31 07:34:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 536 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Jan 31 07:34:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:27.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:27 compute-0 sudo[260159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:27 compute-0 sudo[260159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:27 compute-0 sudo[260159]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:27 compute-0 sudo[260184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:27 compute-0 sudo[260184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:27 compute-0 ceph-mon[74496]: pgmap v1009: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 536 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Jan 31 07:34:27 compute-0 sudo[260184]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:34:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:27.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:34:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 234 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 07:34:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:29.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:29 compute-0 nova_compute[247704]: 2026-01-31 07:34:29.956 247708 DEBUG nova.compute.manager [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.055 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.056 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.089 247708 DEBUG nova.objects.instance [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'pci_requests' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.106 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.107 247708 INFO nova.compute.claims [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.107 247708 DEBUG nova.objects.instance [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'resources' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.125 247708 DEBUG nova.objects.instance [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'pci_devices' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.179 247708 INFO nova.compute.resource_tracker [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating resource usage from migration d2cd5cab-5814-4f5a-8fff-f5e2478f3078
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.179 247708 DEBUG nova.compute.resource_tracker [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Starting to track incoming migration d2cd5cab-5814-4f5a-8fff-f5e2478f3078 with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.261 247708 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 189 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 189 op/s
Jan 31 07:34:30 compute-0 ceph-mon[74496]: pgmap v1010: 305 pgs: 305 active+clean; 234 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.642 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600340645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.723 247708 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.732 247708 DEBUG nova.compute.provider_tree [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.753 247708 DEBUG nova.scheduler.client.report [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.789 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.790 247708 INFO nova.compute.manager [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Migrating
Jan 31 07:34:30 compute-0 nova_compute[247704]: 2026-01-31 07:34:30.888 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:31 compute-0 sshd-session[260233]: Accepted publickey for nova from 192.168.122.102 port 52858 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 07:34:31 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 07:34:31 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 07:34:31 compute-0 systemd-logind[816]: New session 55 of user nova.
Jan 31 07:34:31 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 07:34:31 compute-0 systemd[1]: Starting User Manager for UID 42436...
Jan 31 07:34:31 compute-0 systemd[260237]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:34:31 compute-0 ceph-mon[74496]: pgmap v1011: 305 pgs: 305 active+clean; 189 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 189 op/s
Jan 31 07:34:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1600340645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:31 compute-0 systemd[260237]: Queued start job for default target Main User Target.
Jan 31 07:34:31 compute-0 systemd[260237]: Created slice User Application Slice.
Jan 31 07:34:31 compute-0 systemd[260237]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:34:31 compute-0 systemd[260237]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:34:31 compute-0 systemd[260237]: Reached target Paths.
Jan 31 07:34:31 compute-0 systemd[260237]: Reached target Timers.
Jan 31 07:34:31 compute-0 systemd[260237]: Starting D-Bus User Message Bus Socket...
Jan 31 07:34:31 compute-0 systemd[260237]: Starting Create User's Volatile Files and Directories...
Jan 31 07:34:31 compute-0 systemd[260237]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:34:31 compute-0 systemd[260237]: Reached target Sockets.
Jan 31 07:34:31 compute-0 systemd[260237]: Finished Create User's Volatile Files and Directories.
Jan 31 07:34:31 compute-0 systemd[260237]: Reached target Basic System.
Jan 31 07:34:31 compute-0 systemd[260237]: Reached target Main User Target.
Jan 31 07:34:31 compute-0 systemd[260237]: Startup finished in 134ms.
Jan 31 07:34:31 compute-0 systemd[1]: Started User Manager for UID 42436.
Jan 31 07:34:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:31.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:31 compute-0 systemd[1]: Started Session 55 of User nova.
Jan 31 07:34:31 compute-0 sshd-session[260233]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:34:32 compute-0 sshd-session[260252]: Received disconnect from 192.168.122.102 port 52858:11: disconnected by user
Jan 31 07:34:32 compute-0 sshd-session[260252]: Disconnected from user nova 192.168.122.102 port 52858
Jan 31 07:34:32 compute-0 sshd-session[260233]: pam_unix(sshd:session): session closed for user nova
Jan 31 07:34:32 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Jan 31 07:34:32 compute-0 systemd-logind[816]: Session 55 logged out. Waiting for processes to exit.
Jan 31 07:34:32 compute-0 systemd-logind[816]: Removed session 55.
Jan 31 07:34:32 compute-0 sshd-session[260254]: Accepted publickey for nova from 192.168.122.102 port 52870 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 07:34:32 compute-0 systemd-logind[816]: New session 57 of user nova.
Jan 31 07:34:32 compute-0 systemd[1]: Started Session 57 of User nova.
Jan 31 07:34:32 compute-0 sshd-session[260254]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 07:34:32 compute-0 sshd-session[260257]: Received disconnect from 192.168.122.102 port 52870:11: disconnected by user
Jan 31 07:34:32 compute-0 sshd-session[260257]: Disconnected from user nova 192.168.122.102 port 52870
Jan 31 07:34:32 compute-0 sshd-session[260254]: pam_unix(sshd:session): session closed for user nova
Jan 31 07:34:32 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Jan 31 07:34:32 compute-0 systemd-logind[816]: Session 57 logged out. Waiting for processes to exit.
Jan 31 07:34:32 compute-0 systemd-logind[816]: Removed session 57.
Jan 31 07:34:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 175 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Jan 31 07:34:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:33 compute-0 sudo[260260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:33 compute-0 sudo[260260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:33 compute-0 sudo[260260]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:33 compute-0 sudo[260285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:34:33 compute-0 sudo[260285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:33 compute-0 sudo[260285]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:33.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:33 compute-0 sudo[260310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:33 compute-0 sudo[260310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:33 compute-0 sudo[260310]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:34 compute-0 sudo[260335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:34:34 compute-0 sudo[260335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:34:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5220 writes, 22K keys, 5219 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 5220 writes, 5219 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1496 writes, 6728 keys, 1496 commit groups, 1.0 writes per commit group, ingest: 10.18 MB, 0.02 MB/s
                                           Interval WAL: 1496 writes, 1496 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     88.8      0.31              0.07        13    0.024       0      0       0.0       0.0
                                             L6      1/0    7.18 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6     70.0     58.1      1.70              0.23        12    0.142     56K   6340       0.0       0.0
                                            Sum      1/0    7.18 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6     59.3     62.8      2.01              0.30        25    0.080     56K   6340       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     85.4     84.6      0.67              0.14        12    0.056     29K   3040       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     70.0     58.1      1.70              0.23        12    0.142     56K   6340       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.0      0.30              0.07        12    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.027, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 2.0 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 9.95 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000189 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(577,9.50 MB,3.12421%) FilterBlock(26,161.86 KB,0.0519953%) IndexBlock(26,303.53 KB,0.0975057%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:34:34 compute-0 sudo[260335]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 169 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Jan 31 07:34:34 compute-0 ceph-mon[74496]: pgmap v1012: 305 pgs: 305 active+clean; 175 MiB data, 339 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Jan 31 07:34:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1570788477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00316451324911595 of space, bias 1.0, pg target 0.949353974734785 quantized to 32 (current 32)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:34:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:34:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:34:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:34:35 compute-0 nova_compute[247704]: 2026-01-31 07:34:35.670 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:35 compute-0 ceph-mon[74496]: pgmap v1013: 305 pgs: 305 active+clean; 169 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Jan 31 07:34:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:35 compute-0 nova_compute[247704]: 2026-01-31 07:34:35.890 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:35.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:34:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:34:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:34:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:34:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:34:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 169 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 148 KiB/s wr, 132 op/s
Jan 31 07:34:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ac1a2741-3f08-47ed-8dbc-3b638ee2f5b6 does not exist
Jan 31 07:34:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b2063ab1-3d46-478b-b4b8-cdce7f8562d0 does not exist
Jan 31 07:34:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a695cad8-9517-41e0-8eea-fde696bb7ea0 does not exist
Jan 31 07:34:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:34:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:34:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:34:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:34:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:34:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:34:36 compute-0 sudo[260392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:36 compute-0 sudo[260392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:36 compute-0 sudo[260392]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:36 compute-0 sudo[260417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:34:36 compute-0 sudo[260417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:36 compute-0 sudo[260417]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:36 compute-0 sudo[260442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:36 compute-0 sudo[260442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:36 compute-0 sudo[260442]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:34:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:34:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:34:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:34:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:34:36 compute-0 sudo[260467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:34:36 compute-0 sudo[260467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:37 compute-0 podman[260531]: 2026-01-31 07:34:37.218312352 +0000 UTC m=+0.066003840 container create cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:34:37 compute-0 systemd[1]: Started libpod-conmon-cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5.scope.
Jan 31 07:34:37 compute-0 podman[260531]: 2026-01-31 07:34:37.177394537 +0000 UTC m=+0.025086045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:34:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:34:37 compute-0 podman[260531]: 2026-01-31 07:34:37.356280884 +0000 UTC m=+0.203972382 container init cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 07:34:37 compute-0 podman[260531]: 2026-01-31 07:34:37.363000136 +0000 UTC m=+0.210691624 container start cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:34:37 compute-0 modest_shockley[260548]: 167 167
Jan 31 07:34:37 compute-0 systemd[1]: libpod-cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5.scope: Deactivated successfully.
Jan 31 07:34:37 compute-0 podman[260531]: 2026-01-31 07:34:37.379907063 +0000 UTC m=+0.227598661 container attach cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:34:37 compute-0 podman[260531]: 2026-01-31 07:34:37.380185829 +0000 UTC m=+0.227877287 container died cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:34:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bd4915686354ffd337848b39466fd4fbb8c86274baa176f173fe0da0bec4bf9-merged.mount: Deactivated successfully.
Jan 31 07:34:37 compute-0 podman[260531]: 2026-01-31 07:34:37.50901899 +0000 UTC m=+0.356710448 container remove cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:34:37 compute-0 systemd[1]: libpod-conmon-cc3876fe68b051b1cb003da6b67b339a53a273f57df00629b1d63a44f7ebedb5.scope: Deactivated successfully.
Jan 31 07:34:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:37 compute-0 podman[260573]: 2026-01-31 07:34:37.74745775 +0000 UTC m=+0.102270463 container create 3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:34:37 compute-0 podman[260573]: 2026-01-31 07:34:37.670501908 +0000 UTC m=+0.025314651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:34:37 compute-0 systemd[1]: Started libpod-conmon-3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23.scope.
Jan 31 07:34:37 compute-0 ceph-mon[74496]: pgmap v1014: 305 pgs: 305 active+clean; 169 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 148 KiB/s wr, 132 op/s
Jan 31 07:34:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec771364f82a7db077e65fc7e0239be4a43766909f9e213a74d353806cb64cea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec771364f82a7db077e65fc7e0239be4a43766909f9e213a74d353806cb64cea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec771364f82a7db077e65fc7e0239be4a43766909f9e213a74d353806cb64cea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec771364f82a7db077e65fc7e0239be4a43766909f9e213a74d353806cb64cea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec771364f82a7db077e65fc7e0239be4a43766909f9e213a74d353806cb64cea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:34:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:37.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:34:37 compute-0 podman[260573]: 2026-01-31 07:34:37.988061332 +0000 UTC m=+0.342874055 container init 3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:34:37 compute-0 podman[260573]: 2026-01-31 07:34:37.99876355 +0000 UTC m=+0.353576263 container start 3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:34:38 compute-0 podman[260573]: 2026-01-31 07:34:38.157133253 +0000 UTC m=+0.511945966 container attach 3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:34:38 compute-0 podman[260587]: 2026-01-31 07:34:38.258581674 +0000 UTC m=+0.462003922 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:34:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 177 MiB data, 333 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 523 KiB/s wr, 114 op/s
Jan 31 07:34:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:38 compute-0 tender_wu[260601]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:34:38 compute-0 tender_wu[260601]: --> relative data size: 1.0
Jan 31 07:34:38 compute-0 tender_wu[260601]: --> All data devices are unavailable
Jan 31 07:34:38 compute-0 podman[260573]: 2026-01-31 07:34:38.893729125 +0000 UTC m=+1.248541818 container died 3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:34:38 compute-0 systemd[1]: libpod-3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23.scope: Deactivated successfully.
Jan 31 07:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec771364f82a7db077e65fc7e0239be4a43766909f9e213a74d353806cb64cea-merged.mount: Deactivated successfully.
Jan 31 07:34:39 compute-0 podman[260573]: 2026-01-31 07:34:39.046955193 +0000 UTC m=+1.401767886 container remove 3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:34:39 compute-0 systemd[1]: libpod-conmon-3a17804b7c6042f32a2a50b5da0ffe6f7ca8c2ed8aefd3e5aa09142ff8f4ad23.scope: Deactivated successfully.
Jan 31 07:34:39 compute-0 sudo[260467]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:39 compute-0 sudo[260638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:39 compute-0 sudo[260638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:39 compute-0 sudo[260638]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:39 compute-0 sudo[260663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:34:39 compute-0 sudo[260663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:39 compute-0 sudo[260663]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:39 compute-0 sudo[260688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:39 compute-0 sudo[260688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:39 compute-0 sudo[260688]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:39 compute-0 sudo[260713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:34:39 compute-0 sudo[260713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:39 compute-0 podman[260779]: 2026-01-31 07:34:39.731160515 +0000 UTC m=+0.100532872 container create 6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galileo, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:34:39 compute-0 podman[260779]: 2026-01-31 07:34:39.653597778 +0000 UTC m=+0.022970155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:34:39 compute-0 systemd[1]: Started libpod-conmon-6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4.scope.
Jan 31 07:34:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:34:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:39.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:40 compute-0 podman[260779]: 2026-01-31 07:34:40.051948427 +0000 UTC m=+0.421320814 container init 6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:34:40 compute-0 podman[260779]: 2026-01-31 07:34:40.061952078 +0000 UTC m=+0.431324435 container start 6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galileo, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:34:40 compute-0 laughing_galileo[260795]: 167 167
Jan 31 07:34:40 compute-0 systemd[1]: libpod-6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4.scope: Deactivated successfully.
Jan 31 07:34:40 compute-0 podman[260779]: 2026-01-31 07:34:40.171952876 +0000 UTC m=+0.541325323 container attach 6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 07:34:40 compute-0 podman[260779]: 2026-01-31 07:34:40.172742325 +0000 UTC m=+0.542114782 container died 6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galileo, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:34:40 compute-0 ceph-mon[74496]: pgmap v1015: 305 pgs: 305 active+clean; 177 MiB data, 333 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 523 KiB/s wr, 114 op/s
Jan 31 07:34:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 184 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Jan 31 07:34:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b888b77d6a966cb3a517c132b125ba1832ef20247af11bfb5b07247a4251b10-merged.mount: Deactivated successfully.
Jan 31 07:34:40 compute-0 nova_compute[247704]: 2026-01-31 07:34:40.672 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:40 compute-0 nova_compute[247704]: 2026-01-31 07:34:40.891 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:41 compute-0 podman[260779]: 2026-01-31 07:34:41.041455798 +0000 UTC m=+1.410828195 container remove 6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:34:41 compute-0 systemd[1]: libpod-conmon-6d631a674dfc280271b62a2bb5f0384a672c406a450f81bd531e667f8f17f6e4.scope: Deactivated successfully.
Jan 31 07:34:41 compute-0 podman[260820]: 2026-01-31 07:34:41.199190084 +0000 UTC m=+0.034844309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:34:41 compute-0 podman[260820]: 2026-01-31 07:34:41.559205062 +0000 UTC m=+0.394859237 container create ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:34:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:41 compute-0 ceph-mon[74496]: pgmap v1016: 305 pgs: 305 active+clean; 184 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Jan 31 07:34:41 compute-0 systemd[1]: Started libpod-conmon-ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477.scope.
Jan 31 07:34:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f42b989a62f195e584a37eb774908cfdb8d202eb67d65d771ba33e49ac560a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f42b989a62f195e584a37eb774908cfdb8d202eb67d65d771ba33e49ac560a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f42b989a62f195e584a37eb774908cfdb8d202eb67d65d771ba33e49ac560a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f42b989a62f195e584a37eb774908cfdb8d202eb67d65d771ba33e49ac560a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:41 compute-0 podman[260820]: 2026-01-31 07:34:41.916414801 +0000 UTC m=+0.752069056 container init ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:34:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:41 compute-0 podman[260820]: 2026-01-31 07:34:41.924132446 +0000 UTC m=+0.759786621 container start ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:34:42 compute-0 podman[260820]: 2026-01-31 07:34:42.016295755 +0000 UTC m=+0.851950060 container attach ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:34:42 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 07:34:42 compute-0 systemd[260237]: Activating special unit Exit the Session...
Jan 31 07:34:42 compute-0 systemd[260237]: Stopped target Main User Target.
Jan 31 07:34:42 compute-0 systemd[260237]: Stopped target Basic System.
Jan 31 07:34:42 compute-0 systemd[260237]: Stopped target Paths.
Jan 31 07:34:42 compute-0 systemd[260237]: Stopped target Sockets.
Jan 31 07:34:42 compute-0 systemd[260237]: Stopped target Timers.
Jan 31 07:34:42 compute-0 systemd[260237]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:34:42 compute-0 systemd[260237]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 07:34:42 compute-0 systemd[260237]: Closed D-Bus User Message Bus Socket.
Jan 31 07:34:42 compute-0 systemd[260237]: Stopped Create User's Volatile Files and Directories.
Jan 31 07:34:42 compute-0 systemd[260237]: Removed slice User Application Slice.
Jan 31 07:34:42 compute-0 systemd[260237]: Reached target Shutdown.
Jan 31 07:34:42 compute-0 systemd[260237]: Finished Exit the Session.
Jan 31 07:34:42 compute-0 systemd[260237]: Reached target Exit the Session.
Jan 31 07:34:42 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 07:34:42 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 07:34:42 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 07:34:42 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 07:34:42 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 07:34:42 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 07:34:42 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 07:34:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 198 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 07:34:42 compute-0 compassionate_panini[260837]: {
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:     "0": [
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:         {
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "devices": [
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "/dev/loop3"
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             ],
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "lv_name": "ceph_lv0",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "lv_size": "7511998464",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "name": "ceph_lv0",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "tags": {
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.cluster_name": "ceph",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.crush_device_class": "",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.encrypted": "0",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.osd_id": "0",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.type": "block",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:                 "ceph.vdo": "0"
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             },
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "type": "block",
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:             "vg_name": "ceph_vg0"
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:         }
Jan 31 07:34:42 compute-0 compassionate_panini[260837]:     ]
Jan 31 07:34:42 compute-0 compassionate_panini[260837]: }
Jan 31 07:34:42 compute-0 systemd[1]: libpod-ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477.scope: Deactivated successfully.
Jan 31 07:34:42 compute-0 conmon[260837]: conmon ff6d3caf0e76f1624853 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477.scope/container/memory.events
Jan 31 07:34:42 compute-0 podman[260848]: 2026-01-31 07:34:42.779865696 +0000 UTC m=+0.021010187 container died ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f42b989a62f195e584a37eb774908cfdb8d202eb67d65d771ba33e49ac560a1-merged.mount: Deactivated successfully.
Jan 31 07:34:42 compute-0 podman[260848]: 2026-01-31 07:34:42.90257393 +0000 UTC m=+0.143718431 container remove ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:34:42 compute-0 systemd[1]: libpod-conmon-ff6d3caf0e76f1624853dcdc0993d2d32cdef4ca6a82cae2091583618e57a477.scope: Deactivated successfully.
Jan 31 07:34:42 compute-0 sudo[260713]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:43 compute-0 sudo[260862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:43 compute-0 sudo[260862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:43 compute-0 sudo[260862]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:43 compute-0 sudo[260887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:34:43 compute-0 sudo[260887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:43 compute-0 sudo[260887]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:43 compute-0 sudo[260912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:43 compute-0 sudo[260912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:43 compute-0 sudo[260912]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:43 compute-0 sudo[260939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:34:43 compute-0 sudo[260939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:43 compute-0 sshd-session[260921]: Invalid user sol from 45.148.10.240 port 39430
Jan 31 07:34:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:43 compute-0 sshd-session[260921]: Connection closed by invalid user sol 45.148.10.240 port 39430 [preauth]
Jan 31 07:34:43 compute-0 podman[261004]: 2026-01-31 07:34:43.508798754 +0000 UTC m=+0.020427683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:34:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:43.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:43 compute-0 podman[261004]: 2026-01-31 07:34:43.729127888 +0000 UTC m=+0.240756717 container create cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:34:43 compute-0 systemd[1]: Started libpod-conmon-cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc.scope.
Jan 31 07:34:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:34:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:43.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:44 compute-0 ceph-mon[74496]: pgmap v1017: 305 pgs: 305 active+clean; 198 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 07:34:44 compute-0 podman[261004]: 2026-01-31 07:34:44.146508936 +0000 UTC m=+0.658137865 container init cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:34:44 compute-0 podman[261004]: 2026-01-31 07:34:44.152200324 +0000 UTC m=+0.663829183 container start cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:34:44 compute-0 silly_dhawan[261020]: 167 167
Jan 31 07:34:44 compute-0 systemd[1]: libpod-cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc.scope: Deactivated successfully.
Jan 31 07:34:44 compute-0 podman[261004]: 2026-01-31 07:34:44.263949263 +0000 UTC m=+0.775578092 container attach cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:34:44 compute-0 podman[261004]: 2026-01-31 07:34:44.264871385 +0000 UTC m=+0.776500244 container died cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:34:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 230 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab34c09cb07c5aecfd4e7ef1d2a684a0b8e033a893e0d5dff39e39c838d8d187-merged.mount: Deactivated successfully.
Jan 31 07:34:44 compute-0 podman[261004]: 2026-01-31 07:34:44.74276883 +0000 UTC m=+1.254397699 container remove cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:34:44 compute-0 systemd[1]: libpod-conmon-cb499a618ea0e0845ac363569d96921b16b6074319a99d6f35687b0cf04d8afc.scope: Deactivated successfully.
Jan 31 07:34:44 compute-0 podman[261044]: 2026-01-31 07:34:44.903062389 +0000 UTC m=+0.032095804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:34:45 compute-0 podman[261044]: 2026-01-31 07:34:45.017473293 +0000 UTC m=+0.146506648 container create 2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:34:45 compute-0 systemd[1]: Started libpod-conmon-2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2.scope.
Jan 31 07:34:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a28a7958dc11e5764690719fcbeb64aac29f9e19335172421387a503c506e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a28a7958dc11e5764690719fcbeb64aac29f9e19335172421387a503c506e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a28a7958dc11e5764690719fcbeb64aac29f9e19335172421387a503c506e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83a28a7958dc11e5764690719fcbeb64aac29f9e19335172421387a503c506e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:34:45 compute-0 podman[261044]: 2026-01-31 07:34:45.295807093 +0000 UTC m=+0.424840558 container init 2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:34:45 compute-0 podman[261044]: 2026-01-31 07:34:45.301355867 +0000 UTC m=+0.430389202 container start 2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:34:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2215771126' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:34:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2215771126' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:34:45 compute-0 podman[261044]: 2026-01-31 07:34:45.366885124 +0000 UTC m=+0.495918529 container attach 2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:34:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:45.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:45 compute-0 nova_compute[247704]: 2026-01-31 07:34:45.676 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:45 compute-0 nova_compute[247704]: 2026-01-31 07:34:45.788 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:34:45 compute-0 nova_compute[247704]: 2026-01-31 07:34:45.789 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:34:45 compute-0 nova_compute[247704]: 2026-01-31 07:34:45.789 247708 DEBUG nova.network.neutron [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:34:45 compute-0 nova_compute[247704]: 2026-01-31 07:34:45.893 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:34:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]: {
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]:         "osd_id": 0,
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]:         "type": "bluestore"
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]:     }
Jan 31 07:34:46 compute-0 exciting_cartwright[261060]: }
Jan 31 07:34:46 compute-0 systemd[1]: libpod-2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2.scope: Deactivated successfully.
Jan 31 07:34:46 compute-0 podman[261044]: 2026-01-31 07:34:46.098740793 +0000 UTC m=+1.227774148 container died 2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.119 247708 DEBUG nova.network.neutron [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-83a28a7958dc11e5764690719fcbeb64aac29f9e19335172421387a503c506e1-merged.mount: Deactivated successfully.
Jan 31 07:34:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 230 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:34:46 compute-0 ceph-mon[74496]: pgmap v1018: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 230 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.517 247708 DEBUG nova.network.neutron [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.540 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.662 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.665 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.665 247708 INFO nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Creating image(s)
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.878 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.879 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.879 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.880 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:46 compute-0 nova_compute[247704]: 2026-01-31 07:34:46.886 247708 DEBUG nova.storage.rbd_utils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] creating snapshot(nova-resize) on rbd image(34693a4b-4cec-41ed-a872-facd378ad627_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:34:47 compute-0 podman[261044]: 2026-01-31 07:34:47.139363854 +0000 UTC m=+2.268397209 container remove 2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:34:47 compute-0 systemd[1]: libpod-conmon-2de13f1e8e4472c456aca7e4f17308af823448de0eb75fbede7bec5169baeef2.scope: Deactivated successfully.
Jan 31 07:34:47 compute-0 sudo[260939]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:34:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:34:47 compute-0 nova_compute[247704]: 2026-01-31 07:34:47.316 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:34:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a9468169-45be-4be7-8ab5-9d60d486e036 does not exist
Jan 31 07:34:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7bd26ce1-354f-4bdf-804d-ad63c521e9c6 does not exist
Jan 31 07:34:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2806227e-2b13-4ba5-833a-17357c675a32 does not exist
Jan 31 07:34:47 compute-0 sudo[261132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:47 compute-0 sudo[261132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:47 compute-0 sudo[261132]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:47 compute-0 sudo[261157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:34:47 compute-0 sudo[261157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:47 compute-0 sudo[261157]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:47.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:47 compute-0 sudo[261182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:47 compute-0 sudo[261182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:47 compute-0 sudo[261182]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:47 compute-0 sudo[261207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:34:47 compute-0 sudo[261207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:34:47 compute-0 sudo[261207]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:47.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 31 07:34:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 31 07:34:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 31 07:34:48 compute-0 nova_compute[247704]: 2026-01-31 07:34:48.339 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:34:48 compute-0 nova_compute[247704]: 2026-01-31 07:34:48.361 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:34:48 compute-0 nova_compute[247704]: 2026-01-31 07:34:48.362 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:34:48 compute-0 ceph-mon[74496]: pgmap v1019: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 230 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:34:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:34:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 251 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Jan 31 07:34:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:48 compute-0 nova_compute[247704]: 2026-01-31 07:34:48.763 247708 DEBUG nova.objects.instance [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'trusted_certs' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.056 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.057 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Ensure instance console log exists: /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.057 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.058 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.058 247708 DEBUG oslo_concurrency.lockutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.061 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.069 247708 WARNING nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.078 247708 DEBUG nova.virt.libvirt.host [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.080 247708 DEBUG nova.virt.libvirt.host [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.085 247708 DEBUG nova.virt.libvirt.host [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.087 247708 DEBUG nova.virt.libvirt.host [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.089 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.090 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.091 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.091 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.092 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.092 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.093 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.093 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.094 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.094 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.094 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.095 247708 DEBUG nova.virt.hardware [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.095 247708 DEBUG nova.objects.instance [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.132 247708 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:34:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806352133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:49.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.637 247708 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:49 compute-0 nova_compute[247704]: 2026-01-31 07:34:49.684 247708 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:49 compute-0 ceph-mon[74496]: osdmap e144: 3 total, 3 up, 3 in
Jan 31 07:34:49 compute-0 ceph-mon[74496]: pgmap v1021: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 251 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Jan 31 07:34:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:49.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:34:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:34:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794457443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:50 compute-0 nova_compute[247704]: 2026-01-31 07:34:50.143 247708 DEBUG oslo_concurrency.processutils [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:50 compute-0 nova_compute[247704]: 2026-01-31 07:34:50.152 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <uuid>34693a4b-4cec-41ed-a872-facd378ad627</uuid>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <name>instance-0000000f</name>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <memory>196608</memory>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <nova:name>tempest-MigrationsAdminTest-server-919055966</nova:name>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:34:49</nova:creationTime>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <nova:flavor name="m1.micro">
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <nova:memory>192</nova:memory>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <system>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <entry name="serial">34693a4b-4cec-41ed-a872-facd378ad627</entry>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <entry name="uuid">34693a4b-4cec-41ed-a872-facd378ad627</entry>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     </system>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <os>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   </os>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <features>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   </features>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/34693a4b-4cec-41ed-a872-facd378ad627_disk">
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       </source>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/34693a4b-4cec-41ed-a872-facd378ad627_disk.config">
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       </source>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:34:50 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627/console.log" append="off"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <video>
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     </video>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:34:50 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:34:50 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:34:50 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:34:50 compute-0 nova_compute[247704]: </domain>
Jan 31 07:34:50 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:34:50 compute-0 nova_compute[247704]: 2026-01-31 07:34:50.208 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:34:50 compute-0 nova_compute[247704]: 2026-01-31 07:34:50.209 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:34:50 compute-0 nova_compute[247704]: 2026-01-31 07:34:50.209 247708 INFO nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Using config drive
Jan 31 07:34:50 compute-0 systemd-machined[214448]: New machine qemu-8-instance-0000000f.
Jan 31 07:34:50 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-0000000f.
Jan 31 07:34:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 916 KiB/s wr, 33 op/s
Jan 31 07:34:50 compute-0 nova_compute[247704]: 2026-01-31 07:34:50.678 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3806352133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2794457443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:34:50 compute-0 nova_compute[247704]: 2026-01-31 07:34:50.895 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.423 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844891.422969, 34693a4b-4cec-41ed-a872-facd378ad627 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.424 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] VM Resumed (Lifecycle Event)
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.426 247708 DEBUG nova.compute.manager [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.431 247708 INFO nova.virt.libvirt.driver [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance running successfully.
Jan 31 07:34:51 compute-0 virtqemud[247621]: argument unsupported: QEMU guest agent is not configured
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.435 247708 DEBUG nova.virt.libvirt.guest [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.436 247708 DEBUG nova.virt.libvirt.driver [None req-dd4d8f59-a498-4c57-bc64-d3a012d26447 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.456 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.464 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.548 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.549 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844891.4255276, 34693a4b-4cec-41ed-a872-facd378ad627 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.549 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] VM Started (Lifecycle Event)
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.572 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.577 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.630 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.631 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.631 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:51.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.632 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:34:51 compute-0 nova_compute[247704]: 2026-01-31 07:34:51.634 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:51 compute-0 ceph-mon[74496]: pgmap v1022: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 92 KiB/s rd, 916 KiB/s wr, 33 op/s
Jan 31 07:34:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1232797084' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:51.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433445899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.075 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.224 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.225 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.230 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.230 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.391 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.392 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4572MB free_disk=20.897266387939453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.392 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.392 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.463 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Applying migration context for instance 34693a4b-4cec-41ed-a872-facd378ad627 as it has an incoming, in-progress migration d2cd5cab-5814-4f5a-8fff-f5e2478f3078. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.464 247708 INFO nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating resource usage from migration d2cd5cab-5814-4f5a-8fff-f5e2478f3078
Jan 31 07:34:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 105 KiB/s wr, 30 op/s
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.697 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance d7327aed-ddc6-4772-8d2e-6b8be365dd2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.698 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 34693a4b-4cec-41ed-a872-facd378ad627 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.698 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.699 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:34:52 compute-0 nova_compute[247704]: 2026-01-31 07:34:52.793 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:34:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2232137789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1433445899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:34:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2380325277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:53 compute-0 nova_compute[247704]: 2026-01-31 07:34:53.444 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:34:53 compute-0 nova_compute[247704]: 2026-01-31 07:34:53.450 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:34:53 compute-0 nova_compute[247704]: 2026-01-31 07:34:53.465 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:34:53 compute-0 nova_compute[247704]: 2026-01-31 07:34:53.501 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:34:53 compute-0 nova_compute[247704]: 2026-01-31 07:34:53.502 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:34:53 compute-0 podman[261453]: 2026-01-31 07:34:53.516846651 +0000 UTC m=+0.091443403 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 07:34:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:53.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:53.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:54 compute-0 ceph-mon[74496]: pgmap v1023: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 105 KiB/s wr, 30 op/s
Jan 31 07:34:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2380325277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1992311293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 921 B/s wr, 86 op/s
Jan 31 07:34:54 compute-0 nova_compute[247704]: 2026-01-31 07:34:54.501 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:54 compute-0 nova_compute[247704]: 2026-01-31 07:34:54.502 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:54 compute-0 nova_compute[247704]: 2026-01-31 07:34:54.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1768258591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:55 compute-0 nova_compute[247704]: 2026-01-31 07:34:55.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:34:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:55.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:55 compute-0 nova_compute[247704]: 2026-01-31 07:34:55.680 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:55 compute-0 nova_compute[247704]: 2026-01-31 07:34:55.897 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:34:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:55.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 31 07:34:56 compute-0 ceph-mon[74496]: pgmap v1024: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 921 B/s wr, 86 op/s
Jan 31 07:34:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 31 07:34:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 31 07:34:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 KiB/s wr, 107 op/s
Jan 31 07:34:57 compute-0 ceph-mon[74496]: osdmap e145: 3 total, 3 up, 3 in
Jan 31 07:34:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2027388659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:57.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:34:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:57.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:34:57 compute-0 ovn_controller[149457]: 2026-01-31T07:34:57Z|00087|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 07:34:58 compute-0 ceph-mon[74496]: pgmap v1026: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 KiB/s wr, 107 op/s
Jan 31 07:34:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/586764486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:34:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 KiB/s wr, 107 op/s
Jan 31 07:34:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:34:59 compute-0 ceph-mon[74496]: pgmap v1027: 305 pgs: 305 active+clean; 202 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 KiB/s wr, 107 op/s
Jan 31 07:34:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:34:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:34:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:34:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:59.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 217 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 804 KiB/s wr, 121 op/s
Jan 31 07:35:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4086487062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:00 compute-0 nova_compute[247704]: 2026-01-31 07:35:00.682 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:00 compute-0 nova_compute[247704]: 2026-01-31 07:35:00.946 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:01 compute-0 ceph-mon[74496]: pgmap v1028: 305 pgs: 305 active+clean; 217 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 804 KiB/s wr, 121 op/s
Jan 31 07:35:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2055595108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:01.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:01.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 225 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 130 op/s
Jan 31 07:35:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 31 07:35:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 31 07:35:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 31 07:35:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:03.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:03 compute-0 ceph-mon[74496]: pgmap v1029: 305 pgs: 305 active+clean; 225 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 130 op/s
Jan 31 07:35:03 compute-0 ceph-mon[74496]: osdmap e146: 3 total, 3 up, 3 in
Jan 31 07:35:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Jan 31 07:35:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:05.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:05 compute-0 nova_compute[247704]: 2026-01-31 07:35:05.684 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:05 compute-0 ceph-mon[74496]: pgmap v1031: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Jan 31 07:35:05 compute-0 nova_compute[247704]: 2026-01-31 07:35:05.948 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:05.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 31 07:35:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1665908925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:07.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:07.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:07 compute-0 sudo[261488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:07 compute-0 sudo[261488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:07 compute-0 sudo[261488]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:08 compute-0 sudo[261513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:08 compute-0 sudo[261513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:08 compute-0 sudo[261513]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:08 compute-0 ceph-mon[74496]: pgmap v1032: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 31 07:35:08 compute-0 nova_compute[247704]: 2026-01-31 07:35:08.266 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquiring lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:08 compute-0 nova_compute[247704]: 2026-01-31 07:35:08.267 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:08 compute-0 nova_compute[247704]: 2026-01-31 07:35:08.285 247708 DEBUG nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:35:08 compute-0 nova_compute[247704]: 2026-01-31 07:35:08.401 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:08 compute-0 nova_compute[247704]: 2026-01-31 07:35:08.402 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:08 compute-0 nova_compute[247704]: 2026-01-31 07:35:08.416 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:35:08 compute-0 nova_compute[247704]: 2026-01-31 07:35:08.416 247708 INFO nova.compute.claims [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:35:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 264 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 182 op/s
Jan 31 07:35:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:08 compute-0 nova_compute[247704]: 2026-01-31 07:35:08.629 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:08 compute-0 podman[261558]: 2026-01-31 07:35:08.87912414 +0000 UTC m=+0.057163097 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 07:35:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369829920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.091 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.096 247708 DEBUG nova.compute.provider_tree [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.115 247708 DEBUG nova.scheduler.client.report [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.138 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.139 247708 DEBUG nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.186 247708 DEBUG nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.187 247708 DEBUG nova.network.neutron [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.205 247708 INFO nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.226 247708 DEBUG nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:35:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1369829920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.327 247708 DEBUG nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.328 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.329 247708 INFO nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Creating image(s)
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.359 247708 DEBUG nova.storage.rbd_utils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] rbd image fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.389 247708 DEBUG nova.storage.rbd_utils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] rbd image fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.415 247708 DEBUG nova.storage.rbd_utils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] rbd image fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.420 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.483 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.484 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.485 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.485 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.512 247708 DEBUG nova.storage.rbd_utils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] rbd image fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:09 compute-0 nova_compute[247704]: 2026-01-31 07:35:09.517 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:09.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:09.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:10 compute-0 ceph-mon[74496]: pgmap v1033: 305 pgs: 305 active+clean; 264 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 182 op/s
Jan 31 07:35:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1004872550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2775679440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.273 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.360 247708 DEBUG nova.storage.rbd_utils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] resizing rbd image fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:35:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 281 MiB data, 390 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 170 op/s
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.686 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.775 247708 DEBUG nova.objects.instance [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lazy-loading 'migration_context' on Instance uuid fff94f3d-9997-4e6e-b9f3-89ae621d60f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.803 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.803 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Ensure instance console log exists: /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.803 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.804 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.804 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:10 compute-0 nova_compute[247704]: 2026-01-31 07:35:10.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.102 247708 DEBUG nova.network.neutron [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.102 247708 DEBUG nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.103 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.107 247708 WARNING nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.113 247708 DEBUG nova.virt.libvirt.host [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.113 247708 DEBUG nova.virt.libvirt.host [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.117 247708 DEBUG nova.virt.libvirt.host [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.117 247708 DEBUG nova.virt.libvirt.host [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.118 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.118 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.118 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.118 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.119 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.119 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.119 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.119 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.119 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.119 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.120 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.120 247708 DEBUG nova.virt.hardware [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.122 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:11.144 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:11.145 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:11.145 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:11 compute-0 ceph-mon[74496]: pgmap v1034: 305 pgs: 305 active+clean; 281 MiB data, 390 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 170 op/s
Jan 31 07:35:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2981617375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.596 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.632 247708 DEBUG nova.storage.rbd_utils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] rbd image fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:11 compute-0 nova_compute[247704]: 2026-01-31 07:35:11.640 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2042362911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.090 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.092 247708 DEBUG nova.objects.instance [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lazy-loading 'pci_devices' on Instance uuid fff94f3d-9997-4e6e-b9f3-89ae621d60f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.169 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <uuid>fff94f3d-9997-4e6e-b9f3-89ae621d60f3</uuid>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <name>instance-00000012</name>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerDiagnosticsTest-server-1762220058</nova:name>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:35:11</nova:creationTime>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <nova:user uuid="2c8010fa6d8d439396c0b91d810beefe">tempest-ServerDiagnosticsTest-1204744019-project-member</nova:user>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <nova:project uuid="f3a12130126d47a989bb0dad09717f61">tempest-ServerDiagnosticsTest-1204744019</nova:project>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <system>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <entry name="serial">fff94f3d-9997-4e6e-b9f3-89ae621d60f3</entry>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <entry name="uuid">fff94f3d-9997-4e6e-b9f3-89ae621d60f3</entry>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     </system>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <os>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   </os>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <features>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   </features>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk">
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       </source>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk.config">
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       </source>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:35:12 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3/console.log" append="off"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <video>
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     </video>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:35:12 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:35:12 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:35:12 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:35:12 compute-0 nova_compute[247704]: </domain>
Jan 31 07:35:12 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.412 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.413 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.413 247708 INFO nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Using config drive
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.443 247708 DEBUG nova.storage.rbd_utils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] rbd image fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2981617375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2042362911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 311 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.769 247708 INFO nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Creating config drive at /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3/disk.config
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.775 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp41f4zc7u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.897 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp41f4zc7u" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.926 247708 DEBUG nova.storage.rbd_utils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] rbd image fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:12 compute-0 nova_compute[247704]: 2026-01-31 07:35:12.930 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3/disk.config fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:13 compute-0 nova_compute[247704]: 2026-01-31 07:35:13.430 247708 DEBUG oslo_concurrency.processutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3/disk.config fff94f3d-9997-4e6e-b9f3-89ae621d60f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:13 compute-0 nova_compute[247704]: 2026-01-31 07:35:13.431 247708 INFO nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Deleting local config drive /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3/disk.config because it was imported into RBD.
Jan 31 07:35:13 compute-0 ceph-mon[74496]: pgmap v1035: 305 pgs: 305 active+clean; 311 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Jan 31 07:35:13 compute-0 systemd-machined[214448]: New machine qemu-9-instance-00000012.
Jan 31 07:35:13 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000012.
Jan 31 07:35:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:13.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.328 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844914.3277588, fff94f3d-9997-4e6e-b9f3-89ae621d60f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.329 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] VM Resumed (Lifecycle Event)
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.333 247708 DEBUG nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.334 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.340 247708 INFO nova.virt.libvirt.driver [-] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Instance spawned successfully.
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.341 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.351 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.355 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.365 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.366 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.367 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.367 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.368 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.369 247708 DEBUG nova.virt.libvirt.driver [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.374 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.375 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844914.3281732, fff94f3d-9997-4e6e-b9f3-89ae621d60f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.375 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] VM Started (Lifecycle Event)
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.418 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.422 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 343 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.1 MiB/s wr, 233 op/s
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.690 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.724 247708 INFO nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Took 5.40 seconds to spawn the instance on the hypervisor.
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.725 247708 DEBUG nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.818 247708 INFO nova.compute.manager [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Took 6.45 seconds to build instance.
Jan 31 07:35:14 compute-0 nova_compute[247704]: 2026-01-31 07:35:14.870 247708 DEBUG oslo_concurrency.lockutils [None req-5f64106b-58bf-4194-9d32-0a84d0f54a44 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:15 compute-0 nova_compute[247704]: 2026-01-31 07:35:15.687 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:15 compute-0 nova_compute[247704]: 2026-01-31 07:35:15.909 247708 DEBUG nova.compute.manager [None req-be3dbc25-fa12-4e92-a427-873ed9bfa34c 145560009db54310b119cdfe4b20ff88 2e4a081706bc46858cd0e13edf15d312 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:15 compute-0 nova_compute[247704]: 2026-01-31 07:35:15.914 247708 INFO nova.compute.manager [None req-be3dbc25-fa12-4e92-a427-873ed9bfa34c 145560009db54310b119cdfe4b20ff88 2e4a081706bc46858cd0e13edf15d312 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Retrieving diagnostics
Jan 31 07:35:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:15.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:15 compute-0 ceph-mon[74496]: pgmap v1036: 305 pgs: 305 active+clean; 343 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.1 MiB/s wr, 233 op/s
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.027 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 357 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 227 op/s
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.565 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquiring lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.567 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.567 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquiring lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.568 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.568 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.570 247708 INFO nova.compute.manager [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Terminating instance
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.571 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquiring lock "refresh_cache-fff94f3d-9997-4e6e-b9f3-89ae621d60f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.572 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquired lock "refresh_cache-fff94f3d-9997-4e6e-b9f3-89ae621d60f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:16 compute-0 nova_compute[247704]: 2026-01-31 07:35:16.572 247708 DEBUG nova.network.neutron [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:35:17 compute-0 nova_compute[247704]: 2026-01-31 07:35:17.172 247708 DEBUG nova.network.neutron [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:17 compute-0 nova_compute[247704]: 2026-01-31 07:35:17.567 247708 DEBUG nova.network.neutron [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:17.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:17 compute-0 nova_compute[247704]: 2026-01-31 07:35:17.946 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Releasing lock "refresh_cache-fff94f3d-9997-4e6e-b9f3-89ae621d60f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:17 compute-0 nova_compute[247704]: 2026-01-31 07:35:17.947 247708 DEBUG nova.compute.manager [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:35:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:17.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:18 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 31 07:35:18 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Consumed 4.343s CPU time.
Jan 31 07:35:18 compute-0 systemd-machined[214448]: Machine qemu-9-instance-00000012 terminated.
Jan 31 07:35:18 compute-0 nova_compute[247704]: 2026-01-31 07:35:18.165 247708 INFO nova.virt.libvirt.driver [-] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Instance destroyed successfully.
Jan 31 07:35:18 compute-0 nova_compute[247704]: 2026-01-31 07:35:18.166 247708 DEBUG nova.objects.instance [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lazy-loading 'resources' on Instance uuid fff94f3d-9997-4e6e-b9f3-89ae621d60f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:18 compute-0 ceph-mon[74496]: pgmap v1037: 305 pgs: 305 active+clean; 357 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 227 op/s
Jan 31 07:35:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 361 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 253 op/s
Jan 31 07:35:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:19 compute-0 ceph-mon[74496]: pgmap v1038: 305 pgs: 305 active+clean; 361 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 253 op/s
Jan 31 07:35:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:19.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:19.808 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:35:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:19.808 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:35:19 compute-0 nova_compute[247704]: 2026-01-31 07:35:19.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:19.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:35:20
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'volumes', 'images']
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 345 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.2 MiB/s wr, 252 op/s
Jan 31 07:35:20 compute-0 nova_compute[247704]: 2026-01-31 07:35:20.689 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1939639995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:20 compute-0 ceph-mgr[74791]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.029 247708 INFO nova.virt.libvirt.driver [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Deleting instance files /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3_del
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.030 247708 INFO nova.virt.libvirt.driver [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Deletion of /var/lib/nova/instances/fff94f3d-9997-4e6e-b9f3-89ae621d60f3_del complete
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.074 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.292 247708 INFO nova.compute.manager [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Took 3.35 seconds to destroy the instance on the hypervisor.
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.293 247708 DEBUG oslo.service.loopingcall [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.293 247708 DEBUG nova.compute.manager [-] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.293 247708 DEBUG nova.network.neutron [-] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.528 247708 DEBUG nova.network.neutron [-] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:21.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.676 247708 DEBUG nova.network.neutron [-] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:21 compute-0 ceph-mon[74496]: pgmap v1039: 305 pgs: 305 active+clean; 345 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.2 MiB/s wr, 252 op/s
Jan 31 07:35:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:21.811 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:21 compute-0 nova_compute[247704]: 2026-01-31 07:35:21.878 247708 INFO nova.compute.manager [-] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Took 0.59 seconds to deallocate network for instance.
Jan 31 07:35:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:21.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 309 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 282 op/s
Jan 31 07:35:22 compute-0 nova_compute[247704]: 2026-01-31 07:35:22.507 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:22 compute-0 nova_compute[247704]: 2026-01-31 07:35:22.508 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:22 compute-0 nova_compute[247704]: 2026-01-31 07:35:22.583 247708 DEBUG oslo_concurrency.processutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3973763842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/577500255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:23 compute-0 nova_compute[247704]: 2026-01-31 07:35:23.051 247708 DEBUG oslo_concurrency.processutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:23 compute-0 nova_compute[247704]: 2026-01-31 07:35:23.058 247708 DEBUG nova.compute.provider_tree [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:23 compute-0 nova_compute[247704]: 2026-01-31 07:35:23.358 247708 DEBUG nova.scheduler.client.report [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:23 compute-0 nova_compute[247704]: 2026-01-31 07:35:23.622 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:23.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:23 compute-0 nova_compute[247704]: 2026-01-31 07:35:23.725 247708 INFO nova.scheduler.client.report [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Deleted allocations for instance fff94f3d-9997-4e6e-b9f3-89ae621d60f3
Jan 31 07:35:23 compute-0 ceph-mon[74496]: pgmap v1040: 305 pgs: 305 active+clean; 309 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 282 op/s
Jan 31 07:35:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/577500255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:23 compute-0 podman[261974]: 2026-01-31 07:35:23.952036064 +0000 UTC m=+0.111640179 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 07:35:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:23.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:24 compute-0 nova_compute[247704]: 2026-01-31 07:35:24.006 247708 DEBUG oslo_concurrency.lockutils [None req-716f2528-2202-4825-bcba-12e335d91f36 2c8010fa6d8d439396c0b91d810beefe f3a12130126d47a989bb0dad09717f61 - - default default] Lock "fff94f3d-9997-4e6e-b9f3-89ae621d60f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 260 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 270 op/s
Jan 31 07:35:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:25.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:25 compute-0 nova_compute[247704]: 2026-01-31 07:35:25.691 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:25 compute-0 ceph-mon[74496]: pgmap v1041: 305 pgs: 305 active+clean; 260 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 270 op/s
Jan 31 07:35:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:25.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:26 compute-0 nova_compute[247704]: 2026-01-31 07:35:26.075 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 264 MiB data, 386 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 238 op/s
Jan 31 07:35:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:27.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:27 compute-0 ceph-mon[74496]: pgmap v1042: 305 pgs: 305 active+clean; 264 MiB data, 386 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.3 MiB/s wr, 238 op/s
Jan 31 07:35:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:27.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:28 compute-0 sudo[262003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:28 compute-0 sudo[262003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:28 compute-0 sudo[262003]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:28 compute-0 sudo[262028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:28 compute-0 sudo[262028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:28 compute-0 sudo[262028]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 276 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 200 op/s
Jan 31 07:35:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:29.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:29 compute-0 ceph-mon[74496]: pgmap v1043: 305 pgs: 305 active+clean; 276 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 200 op/s
Jan 31 07:35:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3479424062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:30.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 280 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.4 MiB/s wr, 157 op/s
Jan 31 07:35:30 compute-0 nova_compute[247704]: 2026-01-31 07:35:30.693 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:31 compute-0 nova_compute[247704]: 2026-01-31 07:35:31.077 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:31.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:31 compute-0 ceph-mon[74496]: pgmap v1044: 305 pgs: 305 active+clean; 280 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.4 MiB/s wr, 157 op/s
Jan 31 07:35:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:32.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 301 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 377 KiB/s rd, 2.8 MiB/s wr, 117 op/s
Jan 31 07:35:33 compute-0 nova_compute[247704]: 2026-01-31 07:35:33.164 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844918.1635563, fff94f3d-9997-4e6e-b9f3-89ae621d60f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:33 compute-0 nova_compute[247704]: 2026-01-31 07:35:33.165 247708 INFO nova.compute.manager [-] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] VM Stopped (Lifecycle Event)
Jan 31 07:35:33 compute-0 nova_compute[247704]: 2026-01-31 07:35:33.187 247708 DEBUG nova.compute.manager [None req-e2ca7571-94e3-4545-830f-8110b5cafd03 - - - - - -] [instance: fff94f3d-9997-4e6e-b9f3-89ae621d60f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:33.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:34.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:34 compute-0 ceph-mon[74496]: pgmap v1045: 305 pgs: 305 active+clean; 301 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 377 KiB/s rd, 2.8 MiB/s wr, 117 op/s
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007495339835752839 of space, bias 1.0, pg target 2.2486019507258517 quantized to 32 (current 32)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:35:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 07:35:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3916158877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:35 compute-0 nova_compute[247704]: 2026-01-31 07:35:35.697 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:35.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:36.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.123 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.132 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquiring lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.132 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.178 247708 DEBUG nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.266 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.267 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.277 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.278 247708 INFO nova.compute.claims [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:35:36 compute-0 ceph-mon[74496]: pgmap v1046: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 349 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Jan 31 07:35:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/749401607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.435 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 318 KiB/s rd, 3.0 MiB/s wr, 86 op/s
Jan 31 07:35:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3628664984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.858 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.865 247708 DEBUG nova.compute.provider_tree [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:36 compute-0 nova_compute[247704]: 2026-01-31 07:35:36.958 247708 DEBUG nova.scheduler.client.report [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.187 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.187 247708 DEBUG nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.232 247708 DEBUG nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.232 247708 DEBUG nova.network.neutron [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.259 247708 INFO nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.278 247708 DEBUG nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.383 247708 DEBUG nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.385 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.386 247708 INFO nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Creating image(s)
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.424 247708 DEBUG nova.storage.rbd_utils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] rbd image cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 31 07:35:37 compute-0 ceph-mon[74496]: pgmap v1047: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 318 KiB/s rd, 3.0 MiB/s wr, 86 op/s
Jan 31 07:35:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3628664984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.476 247708 DEBUG nova.storage.rbd_utils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] rbd image cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.507 247708 DEBUG nova.storage.rbd_utils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] rbd image cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.510 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.572 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.573 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.573 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.573 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.598 247708 DEBUG nova.storage.rbd_utils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] rbd image cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.602 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.662 247708 DEBUG nova.network.neutron [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:35:37 compute-0 nova_compute[247704]: 2026-01-31 07:35:37.663 247708 DEBUG nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:35:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:37.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.004 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:38.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.100 247708 DEBUG nova.storage.rbd_utils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] resizing rbd image cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.246 247708 DEBUG nova.objects.instance [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lazy-loading 'migration_context' on Instance uuid cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.336 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.337 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Ensure instance console log exists: /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.337 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.337 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.338 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.339 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.345 247708 WARNING nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.354 247708 DEBUG nova.virt.libvirt.host [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.355 247708 DEBUG nova.virt.libvirt.host [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.362 247708 DEBUG nova.virt.libvirt.host [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.363 247708 DEBUG nova.virt.libvirt.host [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.365 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.365 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.365 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.365 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.366 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.366 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.366 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.366 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.367 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.367 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.367 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.367 247708 DEBUG nova.virt.hardware [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.370 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:38 compute-0 ceph-mon[74496]: osdmap e147: 3 total, 3 up, 3 in
Jan 31 07:35:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1976072540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 07:35:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628810261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.837 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.865 247708 DEBUG nova.storage.rbd_utils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] rbd image cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:38 compute-0 nova_compute[247704]: 2026-01-31 07:35:38.870 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120825983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.350 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.352 247708 DEBUG nova.objects.instance [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lazy-loading 'pci_devices' on Instance uuid cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.370 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <uuid>cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80</uuid>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <name>instance-00000014</name>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-1183273749</nova:name>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:35:38</nova:creationTime>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <nova:user uuid="0ef1c858b0934fe997a48574deb2c816">tempest-ServerDiagnosticsNegativeTest-245507066-project-member</nova:user>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <nova:project uuid="7aa7e38311db4342a351e18eeb31f79c">tempest-ServerDiagnosticsNegativeTest-245507066</nova:project>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <system>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <entry name="serial">cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80</entry>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <entry name="uuid">cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80</entry>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     </system>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <os>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   </os>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <features>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   </features>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk">
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       </source>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk.config">
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       </source>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:35:39 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80/console.log" append="off"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <video>
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     </video>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:35:39 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:35:39 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:35:39 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:35:39 compute-0 nova_compute[247704]: </domain>
Jan 31 07:35:39 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:35:39 compute-0 podman[262310]: 2026-01-31 07:35:39.451012635 +0000 UTC m=+0.049355010 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.460 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.460 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.461 247708 INFO nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Using config drive
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.485 247708 DEBUG nova.storage.rbd_utils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] rbd image cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:39 compute-0 ceph-mon[74496]: pgmap v1049: 305 pgs: 305 active+clean; 329 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 07:35:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/628810261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3834992716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/120825983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2133135503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:39.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.771 247708 INFO nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Creating config drive at /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80/disk.config
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.775 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbju4kidf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.896 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbju4kidf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.925 247708 DEBUG nova.storage.rbd_utils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] rbd image cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:39 compute-0 nova_compute[247704]: 2026-01-31 07:35:39.929 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80/disk.config cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:40.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.079 247708 DEBUG oslo_concurrency.processutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80/disk.config cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.080 247708 INFO nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Deleting local config drive /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80/disk.config because it was imported into RBD.
Jan 31 07:35:40 compute-0 systemd-machined[214448]: New machine qemu-10-instance-00000014.
Jan 31 07:35:40 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000014.
Jan 31 07:35:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 349 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 650 KiB/s rd, 3.5 MiB/s wr, 110 op/s
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.697 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.734 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844940.734443, cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.735 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] VM Resumed (Lifecycle Event)
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.737 247708 DEBUG nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.738 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.741 247708 INFO nova.virt.libvirt.driver [-] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Instance spawned successfully.
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.741 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.767 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.771 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.771 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.771 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.772 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.772 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.772 247708 DEBUG nova.virt.libvirt.driver [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.777 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.813 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.813 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844940.7374854, cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.813 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] VM Started (Lifecycle Event)
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.842 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.845 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.908 247708 INFO nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Took 3.52 seconds to spawn the instance on the hypervisor.
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.908 247708 DEBUG nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.913 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:40 compute-0 nova_compute[247704]: 2026-01-31 07:35:40.992 247708 INFO nova.compute.manager [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Took 4.77 seconds to build instance.
Jan 31 07:35:41 compute-0 nova_compute[247704]: 2026-01-31 07:35:41.015 247708 DEBUG oslo_concurrency.lockutils [None req-20796d7d-cf66-47d6-9548-67789d8f9b07 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:41 compute-0 nova_compute[247704]: 2026-01-31 07:35:41.124 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:41 compute-0 ceph-mon[74496]: pgmap v1050: 305 pgs: 305 active+clean; 349 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 650 KiB/s rd, 3.5 MiB/s wr, 110 op/s
Jan 31 07:35:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/935936433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4081352400' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:41.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:41 compute-0 nova_compute[247704]: 2026-01-31 07:35:41.998 247708 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Creating tmpfile /var/lib/nova/instances/tmp1qs86hm9 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:41.999 247708 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp1qs86hm9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 31 07:35:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:42.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.026 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquiring lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.027 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.027 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquiring lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.027 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.027 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.028 247708 INFO nova.compute.manager [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Terminating instance
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.029 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquiring lock "refresh_cache-cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.029 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquired lock "refresh_cache-cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.029 247708 DEBUG nova.network.neutron [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.244 247708 DEBUG nova.network.neutron [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.455 247708 DEBUG nova.network.neutron [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.476 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Releasing lock "refresh_cache-cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.477 247708 DEBUG nova.compute.manager [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:35:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 378 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.7 MiB/s wr, 153 op/s
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.578 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:35:42 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000014.scope: Deactivated successfully.
Jan 31 07:35:42 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000014.scope: Consumed 2.470s CPU time.
Jan 31 07:35:42 compute-0 systemd-machined[214448]: Machine qemu-10-instance-00000014 terminated.
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.900 247708 INFO nova.virt.libvirt.driver [-] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Instance destroyed successfully.
Jan 31 07:35:42 compute-0 nova_compute[247704]: 2026-01-31 07:35:42.901 247708 DEBUG nova.objects.instance [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lazy-loading 'resources' on Instance uuid cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:43 compute-0 nova_compute[247704]: 2026-01-31 07:35:43.147 247708 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp1qs86hm9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b9a76fba-cff2-455a-9aa6-7b839819e78b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 31 07:35:43 compute-0 nova_compute[247704]: 2026-01-31 07:35:43.197 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:43 compute-0 nova_compute[247704]: 2026-01-31 07:35:43.198 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquired lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:43 compute-0 nova_compute[247704]: 2026-01-31 07:35:43.199 247708 DEBUG nova.network.neutron [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:35:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:35:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:43.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:43 compute-0 ceph-mon[74496]: pgmap v1051: 305 pgs: 305 active+clean; 378 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.7 MiB/s wr, 153 op/s
Jan 31 07:35:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2520892971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:44.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 402 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.3 MiB/s wr, 330 op/s
Jan 31 07:35:44 compute-0 nova_compute[247704]: 2026-01-31 07:35:44.974 247708 INFO nova.virt.libvirt.driver [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Deleting instance files /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_del
Jan 31 07:35:44 compute-0 nova_compute[247704]: 2026-01-31 07:35:44.975 247708 INFO nova.virt.libvirt.driver [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Deletion of /var/lib/nova/instances/cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80_del complete
Jan 31 07:35:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.022 247708 INFO nova.compute.manager [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Took 2.54 seconds to destroy the instance on the hypervisor.
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.023 247708 DEBUG oslo.service.loopingcall [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.023 247708 DEBUG nova.compute.manager [-] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.024 247708 DEBUG nova.network.neutron [-] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:35:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3352251804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:35:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3352251804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:35:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 31 07:35:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.258 247708 DEBUG nova.network.neutron [-] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.273 247708 DEBUG nova.network.neutron [-] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.295 247708 INFO nova.compute.manager [-] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Took 0.27 seconds to deallocate network for instance.
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.310 247708 DEBUG nova.network.neutron [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating instance_info_cache with network_info: [{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.338 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Releasing lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.339 247708 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp1qs86hm9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b9a76fba-cff2-455a-9aa6-7b839819e78b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.340 247708 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Creating instance directory: /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.340 247708 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Ensure instance console log exists: /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.340 247708 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.341 247708 DEBUG nova.virt.libvirt.vif [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-879430066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-879430066',id=19,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:35:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-9uooh3ao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:35:38Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=b9a76fba-cff2-455a-9aa6-7b839819e78b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.341 247708 DEBUG nova.network.os_vif_util [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converting VIF {"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.342 247708 DEBUG nova.network.os_vif_util [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.342 247708 DEBUG os_vif [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.342 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.343 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.343 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.346 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.346 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap596408d2-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.346 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap596408d2-96, col_values=(('external_ids', {'iface-id': '596408d2-9689-472f-b2fb-a85a75df2923', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:f1:d6', 'vm-uuid': 'b9a76fba-cff2-455a-9aa6-7b839819e78b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:45 compute-0 NetworkManager[49108]: <info>  [1769844945.3494] manager: (tap596408d2-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.351 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.354 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.356 247708 INFO os_vif [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96')
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.356 247708 DEBUG nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.356 247708 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp1qs86hm9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b9a76fba-cff2-455a-9aa6-7b839819e78b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.359 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.360 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.458 247708 DEBUG oslo_concurrency.processutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.698 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:45.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105849663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.943 247708 DEBUG oslo_concurrency.processutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.948 247708 DEBUG nova.compute.provider_tree [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.965 247708 DEBUG nova.scheduler.client.report [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:45 compute-0 nova_compute[247704]: 2026-01-31 07:35:45.984 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:46 compute-0 nova_compute[247704]: 2026-01-31 07:35:46.006 247708 INFO nova.scheduler.client.report [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Deleted allocations for instance cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80
Jan 31 07:35:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:46.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:46 compute-0 nova_compute[247704]: 2026-01-31 07:35:46.077 247708 DEBUG oslo_concurrency.lockutils [None req-44098cac-3633-43c0-97ee-045b33cd7431 0ef1c858b0934fe997a48574deb2c816 7aa7e38311db4342a351e18eeb31f79c - - default default] Lock "cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:46 compute-0 ceph-mon[74496]: pgmap v1052: 305 pgs: 305 active+clean; 402 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.3 MiB/s wr, 330 op/s
Jan 31 07:35:46 compute-0 ceph-mon[74496]: osdmap e148: 3 total, 3 up, 3 in
Jan 31 07:35:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/128629596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4105849663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 388 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 8.6 MiB/s rd, 4.7 MiB/s wr, 452 op/s
Jan 31 07:35:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1401127398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:47 compute-0 nova_compute[247704]: 2026-01-31 07:35:47.488 247708 DEBUG nova.network.neutron [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Port 596408d2-9689-472f-b2fb-a85a75df2923 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 31 07:35:47 compute-0 nova_compute[247704]: 2026-01-31 07:35:47.490 247708 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp1qs86hm9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b9a76fba-cff2-455a-9aa6-7b839819e78b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 31 07:35:47 compute-0 kernel: tap596408d2-96: entered promiscuous mode
Jan 31 07:35:47 compute-0 NetworkManager[49108]: <info>  [1769844947.6964] manager: (tap596408d2-96): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 31 07:35:47 compute-0 ovn_controller[149457]: 2026-01-31T07:35:47Z|00088|binding|INFO|Claiming lport 596408d2-9689-472f-b2fb-a85a75df2923 for this additional chassis.
Jan 31 07:35:47 compute-0 ovn_controller[149457]: 2026-01-31T07:35:47Z|00089|binding|INFO|596408d2-9689-472f-b2fb-a85a75df2923: Claiming fa:16:3e:45:f1:d6 10.100.0.14
Jan 31 07:35:47 compute-0 ovn_controller[149457]: 2026-01-31T07:35:47Z|00090|binding|INFO|Claiming lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 for this additional chassis.
Jan 31 07:35:47 compute-0 ovn_controller[149457]: 2026-01-31T07:35:47Z|00091|binding|INFO|b2e77a7e-2125-46bd-8c49-dd619c7caf36: Claiming fa:16:3e:cc:5d:23 19.80.0.215
Jan 31 07:35:47 compute-0 nova_compute[247704]: 2026-01-31 07:35:47.697 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:47 compute-0 nova_compute[247704]: 2026-01-31 07:35:47.701 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:47 compute-0 nova_compute[247704]: 2026-01-31 07:35:47.704 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:47 compute-0 nova_compute[247704]: 2026-01-31 07:35:47.707 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:47.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:47 compute-0 systemd-udevd[262506]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:35:47 compute-0 systemd-machined[214448]: New machine qemu-11-instance-00000013.
Jan 31 07:35:47 compute-0 NetworkManager[49108]: <info>  [1769844947.7308] device (tap596408d2-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:35:47 compute-0 NetworkManager[49108]: <info>  [1769844947.7314] device (tap596408d2-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:35:47 compute-0 ovn_controller[149457]: 2026-01-31T07:35:47Z|00092|binding|INFO|Setting lport 596408d2-9689-472f-b2fb-a85a75df2923 ovn-installed in OVS
Jan 31 07:35:47 compute-0 nova_compute[247704]: 2026-01-31 07:35:47.752 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:47 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000013.
Jan 31 07:35:47 compute-0 sudo[262516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:47 compute-0 sudo[262516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:47 compute-0 sudo[262516]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:47 compute-0 sudo[262541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:35:47 compute-0 sudo[262541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:47 compute-0 sudo[262541]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:47 compute-0 sudo[262566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:47 compute-0 sudo[262566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:47 compute-0 sudo[262566]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-0 sudo[262591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:35:48 compute-0 sudo[262591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:48 compute-0 ceph-mon[74496]: pgmap v1054: 305 pgs: 305 active+clean; 388 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 8.6 MiB/s rd, 4.7 MiB/s wr, 452 op/s
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.263 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844948.2630439, b9a76fba-cff2-455a-9aa6-7b839819e78b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.264 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] VM Started (Lifecycle Event)
Jan 31 07:35:48 compute-0 sudo[262671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:48 compute-0 sudo[262671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-0 sudo[262671]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.283 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:48 compute-0 sudo[262697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:48 compute-0 sudo[262697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-0 sudo[262697]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-0 sudo[262591]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 369 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 8.4 MiB/s rd, 4.3 MiB/s wr, 424 op/s
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:35:48 compute-0 sudo[262738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:48 compute-0 sudo[262738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-0 sudo[262738]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.628 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844948.6281574, b9a76fba-cff2-455a-9aa6-7b839819e78b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.629 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] VM Resumed (Lifecycle Event)
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.652 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:48 compute-0 sudo[262763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:35:48 compute-0 sudo[262763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.658 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:48 compute-0 sudo[262763]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.678 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 31 07:35:48 compute-0 sudo[262788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:48 compute-0 sudo[262788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-0 sudo[262788]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-0 sudo[262813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 07:35:48 compute-0 sudo[262813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.904 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.905 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:48 compute-0 nova_compute[247704]: 2026-01-31 07:35:48.905 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:35:48 compute-0 sudo[262813]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:35:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:35:49 compute-0 nova_compute[247704]: 2026-01-31 07:35:49.080 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:35:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:35:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:35:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:35:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:35:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:35:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:49 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 441bdb5f-57ed-4267-9fbb-8a0ee37b365c does not exist
Jan 31 07:35:49 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 06bbd40a-21ee-43cc-a3fd-93ca3051dca0 does not exist
Jan 31 07:35:49 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 512fe617-2493-48c9-b9fd-b3bd1f37816a does not exist
Jan 31 07:35:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:35:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:35:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:35:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:35:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:35:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:35:49 compute-0 sudo[262857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:49 compute-0 sudo[262857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:49 compute-0 sudo[262857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:49 compute-0 sudo[262883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:35:49 compute-0 sudo[262883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:49 compute-0 sudo[262883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:49 compute-0 sudo[262908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:49 compute-0 sudo[262908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:49 compute-0 sudo[262908]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1412638885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:35:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:35:49 compute-0 sudo[262933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:35:49 compute-0 sudo[262933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:49 compute-0 nova_compute[247704]: 2026-01-31 07:35:49.471 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:49 compute-0 nova_compute[247704]: 2026-01-31 07:35:49.492 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:49 compute-0 nova_compute[247704]: 2026-01-31 07:35:49.493 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:35:49 compute-0 nova_compute[247704]: 2026-01-31 07:35:49.494 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:49.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:49 compute-0 podman[262998]: 2026-01-31 07:35:49.748060288 +0000 UTC m=+0.045729972 container create 685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:35:49 compute-0 systemd[1]: Started libpod-conmon-685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d.scope.
Jan 31 07:35:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:35:49 compute-0 podman[262998]: 2026-01-31 07:35:49.725067555 +0000 UTC m=+0.022737269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:35:49 compute-0 podman[262998]: 2026-01-31 07:35:49.823405892 +0000 UTC m=+0.121075586 container init 685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:35:49 compute-0 podman[262998]: 2026-01-31 07:35:49.827824999 +0000 UTC m=+0.125494683 container start 685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_faraday, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:35:49 compute-0 podman[262998]: 2026-01-31 07:35:49.831391475 +0000 UTC m=+0.129061159 container attach 685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:35:49 compute-0 jovial_faraday[263014]: 167 167
Jan 31 07:35:49 compute-0 systemd[1]: libpod-685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d.scope: Deactivated successfully.
Jan 31 07:35:49 compute-0 conmon[263014]: conmon 685ca3e3181dee0fcbb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d.scope/container/memory.events
Jan 31 07:35:49 compute-0 podman[262998]: 2026-01-31 07:35:49.832994893 +0000 UTC m=+0.130664607 container died 685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_faraday, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-266119120eb7ca23b9fd62661bed02d622e2618d559ec4a5a69eab20928a58ae-merged.mount: Deactivated successfully.
Jan 31 07:35:49 compute-0 podman[262998]: 2026-01-31 07:35:49.879339469 +0000 UTC m=+0.177009153 container remove 685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_faraday, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:35:49 compute-0 systemd[1]: libpod-conmon-685ca3e3181dee0fcbb102bb713119bfdf118cc28d70eb8e2bac0bee8f2ddb5d.scope: Deactivated successfully.
Jan 31 07:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:35:50 compute-0 podman[263038]: 2026-01-31 07:35:50.044226268 +0000 UTC m=+0.044711307 container create 8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:35:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:50.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:50 compute-0 systemd[1]: Started libpod-conmon-8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba.scope.
Jan 31 07:35:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b4f83160c0db657e49964ef6de6f0cfd0ce707d4178d1dad3c32c0afcfcef3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b4f83160c0db657e49964ef6de6f0cfd0ce707d4178d1dad3c32c0afcfcef3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b4f83160c0db657e49964ef6de6f0cfd0ce707d4178d1dad3c32c0afcfcef3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b4f83160c0db657e49964ef6de6f0cfd0ce707d4178d1dad3c32c0afcfcef3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94b4f83160c0db657e49964ef6de6f0cfd0ce707d4178d1dad3c32c0afcfcef3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:50 compute-0 podman[263038]: 2026-01-31 07:35:50.023648833 +0000 UTC m=+0.024133892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:35:50 compute-0 podman[263038]: 2026-01-31 07:35:50.127508453 +0000 UTC m=+0.127993502 container init 8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:35:50 compute-0 podman[263038]: 2026-01-31 07:35:50.135483815 +0000 UTC m=+0.135968834 container start 8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_northcutt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:35:50 compute-0 podman[263038]: 2026-01-31 07:35:50.138626551 +0000 UTC m=+0.139111610 container attach 8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:35:50 compute-0 nova_compute[247704]: 2026-01-31 07:35:50.348 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:50 compute-0 ceph-mon[74496]: pgmap v1055: 305 pgs: 305 active+clean; 369 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 8.4 MiB/s rd, 4.3 MiB/s wr, 424 op/s
Jan 31 07:35:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:35:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:35:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:35:50 compute-0 ovn_controller[149457]: 2026-01-31T07:35:50Z|00093|binding|INFO|Claiming lport 596408d2-9689-472f-b2fb-a85a75df2923 for this chassis.
Jan 31 07:35:50 compute-0 ovn_controller[149457]: 2026-01-31T07:35:50Z|00094|binding|INFO|596408d2-9689-472f-b2fb-a85a75df2923: Claiming fa:16:3e:45:f1:d6 10.100.0.14
Jan 31 07:35:50 compute-0 ovn_controller[149457]: 2026-01-31T07:35:50Z|00095|binding|INFO|Claiming lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 for this chassis.
Jan 31 07:35:50 compute-0 ovn_controller[149457]: 2026-01-31T07:35:50Z|00096|binding|INFO|b2e77a7e-2125-46bd-8c49-dd619c7caf36: Claiming fa:16:3e:cc:5d:23 19.80.0.215
Jan 31 07:35:50 compute-0 ovn_controller[149457]: 2026-01-31T07:35:50Z|00097|binding|INFO|Setting lport 596408d2-9689-472f-b2fb-a85a75df2923 up in Southbound
Jan 31 07:35:50 compute-0 ovn_controller[149457]: 2026-01-31T07:35:50Z|00098|binding|INFO|Setting lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 up in Southbound
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.423 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:5d:23 19.80.0.215'], port_security=['fa:16:3e:cc:5d:23 19.80.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['596408d2-9689-472f-b2fb-a85a75df2923'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1541952823', 'neutron:cidrs': '19.80.0.215/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-994f8d42-6738-4c92-b80e-8dbb63919128', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1541952823', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=67507551-f2d4-4c0b-ab9d-4c732fbaf469, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b2e77a7e-2125-46bd-8c49-dd619c7caf36) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.425 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:f1:d6 10.100.0.14'], port_security=['fa:16:3e:45:f1:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-334582475', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b9a76fba-cff2-455a-9aa6-7b839819e78b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-334582475', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=596408d2-9689-472f-b2fb-a85a75df2923) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.427 160028 INFO neutron.agent.ovn.metadata.agent [-] Port b2e77a7e-2125-46bd-8c49-dd619c7caf36 in datapath 994f8d42-6738-4c92-b80e-8dbb63919128 bound to our chassis
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.429 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 994f8d42-6738-4c92-b80e-8dbb63919128
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.443 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d70ec759-d4e5-4c48-8f10-07f9d743ef8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.444 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap994f8d42-61 in ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.446 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap994f8d42-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.447 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9565a50f-fbc6-43a7-a15c-8a0aa62f924a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.447 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[da060f8a-7301-4987-be5e-ecb47217b608]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.457 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[669124c2-81a4-4617-8da1-9e2aa2556b7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.472 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[732d8ab5-75c6-490c-ae79-cbc2c83dd0bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 nova_compute[247704]: 2026-01-31 07:35:50.499 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.500 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[20e1f167-80d9-418d-8195-371ba654d01b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 NetworkManager[49108]: <info>  [1769844950.5120] manager: (tap994f8d42-60): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 31 07:35:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 344 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 9.5 MiB/s rd, 3.0 MiB/s wr, 481 op/s
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.511 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9eee61fd-3b7c-4f85-b43a-95d2c09ba2ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.550 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b5f77b-a216-44a6-974d-4248c22f60bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.554 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3b27bf72-2935-44d8-8de2-7c5660982de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 NetworkManager[49108]: <info>  [1769844950.5723] device (tap994f8d42-60): carrier: link connected
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.578 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[241e8687-6fba-46b3-8f8f-6695a73cebce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.595 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[45bd5a65-0de9-41a9-b89d-a06317213ecf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap994f8d42-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:ce:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519319, 'reachable_time': 32051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263084, 'error': None, 'target': 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.610 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[22f9f590-c16e-4f01-8267-4b9e1f37ac9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:ceb2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519319, 'tstamp': 519319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263085, 'error': None, 'target': 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.626 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[569798f7-b882-4059-b7ac-c92a57c5f0e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap994f8d42-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:ce:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519319, 'reachable_time': 32051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263086, 'error': None, 'target': 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 nova_compute[247704]: 2026-01-31 07:35:50.651 247708 INFO nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Post operation of migration started
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.652 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d387fe3b-4706-4112-b5e3-115a1a1f08e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.692 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fec4b36d-c004-40c4-b90f-ff4b96cb28b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.693 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap994f8d42-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.693 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.694 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap994f8d42-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:50 compute-0 nova_compute[247704]: 2026-01-31 07:35:50.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:50 compute-0 kernel: tap994f8d42-60: entered promiscuous mode
Jan 31 07:35:50 compute-0 NetworkManager[49108]: <info>  [1769844950.7392] manager: (tap994f8d42-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.741 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap994f8d42-60, col_values=(('external_ids', {'iface-id': 'a50841ea-6849-4032-a8d6-6ba9e6fd3a95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:50 compute-0 ovn_controller[149457]: 2026-01-31T07:35:50Z|00099|binding|INFO|Releasing lport a50841ea-6849-4032-a8d6-6ba9e6fd3a95 from this chassis (sb_readonly=0)
Jan 31 07:35:50 compute-0 nova_compute[247704]: 2026-01-31 07:35:50.742 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:50 compute-0 nova_compute[247704]: 2026-01-31 07:35:50.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.748 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/994f8d42-6738-4c92-b80e-8dbb63919128.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/994f8d42-6738-4c92-b80e-8dbb63919128.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.749 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8b341c6a-42a5-44bc-ba03-4773fe5433b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.750 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-994f8d42-6738-4c92-b80e-8dbb63919128
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/994f8d42-6738-4c92-b80e-8dbb63919128.pid.haproxy
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 994f8d42-6738-4c92-b80e-8dbb63919128
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:35:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:50.750 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'env', 'PROCESS_TAG=haproxy-994f8d42-6738-4c92-b80e-8dbb63919128', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/994f8d42-6738-4c92-b80e-8dbb63919128.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:35:50 compute-0 pensive_northcutt[263055]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:35:50 compute-0 pensive_northcutt[263055]: --> relative data size: 1.0
Jan 31 07:35:50 compute-0 pensive_northcutt[263055]: --> All data devices are unavailable
Jan 31 07:35:50 compute-0 systemd[1]: libpod-8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba.scope: Deactivated successfully.
Jan 31 07:35:50 compute-0 podman[263038]: 2026-01-31 07:35:50.972901394 +0000 UTC m=+0.973386423 container died 8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-94b4f83160c0db657e49964ef6de6f0cfd0ce707d4178d1dad3c32c0afcfcef3-merged.mount: Deactivated successfully.
Jan 31 07:35:51 compute-0 podman[263038]: 2026-01-31 07:35:51.021447702 +0000 UTC m=+1.021932731 container remove 8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_northcutt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:35:51 compute-0 systemd[1]: libpod-conmon-8b0ec285dae0d0bdc1458dc25d5e3ef5d392b16e1b9f3c04526a148cef4549ba.scope: Deactivated successfully.
Jan 31 07:35:51 compute-0 sudo[262933]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:51 compute-0 podman[263143]: 2026-01-31 07:35:51.107003423 +0000 UTC m=+0.044974025 container create c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:35:51 compute-0 sudo[263144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:51 compute-0 sudo[263144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:51 compute-0 sudo[263144]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:51 compute-0 systemd[1]: Started libpod-conmon-c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9.scope.
Jan 31 07:35:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:35:51 compute-0 sudo[263182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:35:51 compute-0 sudo[263182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/029d9617648300bceb919187bf826b0f2f3a82cfc95eb5dacf64ecdbaf496358/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:51 compute-0 podman[263143]: 2026-01-31 07:35:51.083444756 +0000 UTC m=+0.021415368 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:35:51 compute-0 sudo[263182]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:51 compute-0 podman[263143]: 2026-01-31 07:35:51.187224864 +0000 UTC m=+0.125195476 container init c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:35:51 compute-0 podman[263143]: 2026-01-31 07:35:51.192228714 +0000 UTC m=+0.130199306 container start c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:35:51 compute-0 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[263199]: [NOTICE]   (263217) : New worker (263236) forked
Jan 31 07:35:51 compute-0 nova_compute[247704]: 2026-01-31 07:35:51.210 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:35:51 compute-0 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[263199]: [NOTICE]   (263217) : Loading success.
Jan 31 07:35:51 compute-0 nova_compute[247704]: 2026-01-31 07:35:51.210 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquired lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:35:51 compute-0 nova_compute[247704]: 2026-01-31 07:35:51.211 247708 DEBUG nova.network.neutron [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:35:51 compute-0 sudo[263210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:51 compute-0 sudo[263210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:51 compute-0 sudo[263210]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.245 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 596408d2-9689-472f-b2fb-a85a75df2923 in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.246 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.254 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c4f1fc-ec9b-4445-b648-d26951ed4488]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.254 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d1862b-21 in ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.256 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d1862b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.257 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a82fe8-0fd9-406a-8073-2dcc0bb27f44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.258 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8d62dd-edb1-4fed-9acf-6b8e4bd7aee8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 sudo[263247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:35:51 compute-0 sudo[263247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.272 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[332575eb-5454-44a2-8dac-f997c46b2196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.294 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b74ae29e-07e2-4c3f-b511-1143c12bfcde]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.314 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2f7a6a-6e3a-4c89-be73-9119cf756ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 NetworkManager[49108]: <info>  [1769844951.3201] manager: (tape4d1862b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.318 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cd932d38-340a-4141-a046-0455eaa08ed8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 systemd-udevd[263280]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.349 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[40531e3c-38e7-4b33-9c4d-399f9b359b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.351 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb594ca-2eb7-4830-97b3-52c149721efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 NetworkManager[49108]: <info>  [1769844951.3713] device (tape4d1862b-20): carrier: link connected
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.374 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea412da-4108-4cd9-a39c-e6b0c1c8a221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.390 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1d28a2a1-cebf-444a-8a44-b8f8c01c281b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519398, 'reachable_time': 33162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263299, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ceph-mon[74496]: pgmap v1056: 305 pgs: 305 active+clean; 344 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 9.5 MiB/s rd, 3.0 MiB/s wr, 481 op/s
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.405 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bec73673-c8d0-4784-b9bd-af6c7b9ff50c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:3856'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519398, 'tstamp': 519398}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263302, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.416 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a10d44ba-5b19-46ff-be87-2f5d18d9c642]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d1862b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:38:56'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519398, 'reachable_time': 33162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263308, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.436 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fabcafde-e91a-418b-95e3-153861363e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.474 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[68403f1e-9027-4aba-b564-c99165d2fbc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.475 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.476 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.476 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d1862b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:51 compute-0 nova_compute[247704]: 2026-01-31 07:35:51.477 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:51 compute-0 NetworkManager[49108]: <info>  [1769844951.4785] manager: (tape4d1862b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 31 07:35:51 compute-0 kernel: tape4d1862b-20: entered promiscuous mode
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.479 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d1862b-20, col_values=(('external_ids', {'iface-id': '632f26c5-40a9-4337-84da-ea4b4bbdf89c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:35:51 compute-0 ovn_controller[149457]: 2026-01-31T07:35:51Z|00100|binding|INFO|Releasing lport 632f26c5-40a9-4337-84da-ea4b4bbdf89c from this chassis (sb_readonly=0)
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.481 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.482 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6b5198-50d3-40d2-a3bc-f1314a17024c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.483 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/e4d1862b-2abc-4d60-bc48-19a5318038f4.pid.haproxy
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID e4d1862b-2abc-4d60-bc48-19a5318038f4
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:35:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:35:51.483 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'env', 'PROCESS_TAG=haproxy-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d1862b-2abc-4d60-bc48-19a5318038f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:35:51 compute-0 nova_compute[247704]: 2026-01-31 07:35:51.486 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:51 compute-0 nova_compute[247704]: 2026-01-31 07:35:51.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:51 compute-0 podman[263352]: 2026-01-31 07:35:51.596477406 +0000 UTC m=+0.040253111 container create 260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:35:51 compute-0 systemd[1]: Started libpod-conmon-260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14.scope.
Jan 31 07:35:51 compute-0 nova_compute[247704]: 2026-01-31 07:35:51.649 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:35:51 compute-0 podman[263352]: 2026-01-31 07:35:51.669861493 +0000 UTC m=+0.113637218 container init 260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:35:51 compute-0 podman[263352]: 2026-01-31 07:35:51.675025996 +0000 UTC m=+0.118801701 container start 260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:35:51 compute-0 podman[263352]: 2026-01-31 07:35:51.57920652 +0000 UTC m=+0.022982245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:35:51 compute-0 podman[263352]: 2026-01-31 07:35:51.678940021 +0000 UTC m=+0.122715726 container attach 260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:35:51 compute-0 adoring_panini[263369]: 167 167
Jan 31 07:35:51 compute-0 systemd[1]: libpod-260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14.scope: Deactivated successfully.
Jan 31 07:35:51 compute-0 conmon[263369]: conmon 260b0931c08f96b52238 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14.scope/container/memory.events
Jan 31 07:35:51 compute-0 podman[263352]: 2026-01-31 07:35:51.681515353 +0000 UTC m=+0.125291078 container died 260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:35:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:51.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:51 compute-0 podman[263352]: 2026-01-31 07:35:51.716250019 +0000 UTC m=+0.160025724 container remove 260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:35:51 compute-0 systemd[1]: libpod-conmon-260b0931c08f96b522382c2f18c2a78d7451c90b7585b33bb74f9258e0954d14.scope: Deactivated successfully.
Jan 31 07:35:51 compute-0 podman[263409]: 2026-01-31 07:35:51.836539925 +0000 UTC m=+0.066133254 container create d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 07:35:51 compute-0 podman[263425]: 2026-01-31 07:35:51.847527859 +0000 UTC m=+0.043811076 container create 5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:35:51 compute-0 systemd[1]: Started libpod-conmon-d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0.scope.
Jan 31 07:35:51 compute-0 systemd[1]: Started libpod-conmon-5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077.scope.
Jan 31 07:35:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:35:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd0638396a3d6268c4f7574db7a14ca03fe4e553fb8c43ff6fd1f91e6da858b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd0638396a3d6268c4f7574db7a14ca03fe4e553fb8c43ff6fd1f91e6da858b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd0638396a3d6268c4f7574db7a14ca03fe4e553fb8c43ff6fd1f91e6da858b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d33c528a68c5aa889c02d03566f361be9f576229ba5939b5fe39cac7740466/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd0638396a3d6268c4f7574db7a14ca03fe4e553fb8c43ff6fd1f91e6da858b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:51 compute-0 podman[263409]: 2026-01-31 07:35:51.792621917 +0000 UTC m=+0.022215266 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:35:51 compute-0 podman[263409]: 2026-01-31 07:35:51.906317494 +0000 UTC m=+0.135910833 container init d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:35:51 compute-0 podman[263425]: 2026-01-31 07:35:51.910776592 +0000 UTC m=+0.107059869 container init 5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:35:51 compute-0 podman[263409]: 2026-01-31 07:35:51.912873772 +0000 UTC m=+0.142467101 container start d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 07:35:51 compute-0 podman[263425]: 2026-01-31 07:35:51.919355729 +0000 UTC m=+0.115638946 container start 5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:35:51 compute-0 podman[263425]: 2026-01-31 07:35:51.922855663 +0000 UTC m=+0.119138910 container attach 5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:35:51 compute-0 podman[263425]: 2026-01-31 07:35:51.83136234 +0000 UTC m=+0.027645567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:35:51 compute-0 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[263445]: [NOTICE]   (263453) : New worker (263456) forked
Jan 31 07:35:51 compute-0 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[263445]: [NOTICE]   (263453) : Loading success.
Jan 31 07:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3df3c6116af37c971d8f9c565a920425b073e27306211d444e165dc711a1c71-merged.mount: Deactivated successfully.
Jan 31 07:35:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:52 compute-0 ovn_controller[149457]: 2026-01-31T07:35:52Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:f1:d6 10.100.0.14
Jan 31 07:35:52 compute-0 ovn_controller[149457]: 2026-01-31T07:35:52Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:f1:d6 10.100.0.14
Jan 31 07:35:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 334 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 2.7 MiB/s wr, 471 op/s
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.587 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.587 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.588 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.588 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:35:52 compute-0 nova_compute[247704]: 2026-01-31 07:35:52.588 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]: {
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:     "0": [
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:         {
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "devices": [
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "/dev/loop3"
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             ],
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "lv_name": "ceph_lv0",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "lv_size": "7511998464",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "name": "ceph_lv0",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "tags": {
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.cluster_name": "ceph",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.crush_device_class": "",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.encrypted": "0",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.osd_id": "0",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.type": "block",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:                 "ceph.vdo": "0"
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             },
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "type": "block",
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:             "vg_name": "ceph_vg0"
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:         }
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]:     ]
Jan 31 07:35:52 compute-0 sharp_visvesvaraya[263447]: }
Jan 31 07:35:52 compute-0 systemd[1]: libpod-5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077.scope: Deactivated successfully.
Jan 31 07:35:52 compute-0 conmon[263447]: conmon 5c6d026b93cd745606e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077.scope/container/memory.events
Jan 31 07:35:52 compute-0 podman[263425]: 2026-01-31 07:35:52.661872983 +0000 UTC m=+0.858156200 container died 5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dd0638396a3d6268c4f7574db7a14ca03fe4e553fb8c43ff6fd1f91e6da858b-merged.mount: Deactivated successfully.
Jan 31 07:35:52 compute-0 podman[263425]: 2026-01-31 07:35:52.733144979 +0000 UTC m=+0.929428196 container remove 5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:35:52 compute-0 systemd[1]: libpod-conmon-5c6d026b93cd745606e1d6cf295cb617627b3bb8f62213ba4da2e9cb0d1f5077.scope: Deactivated successfully.
Jan 31 07:35:52 compute-0 sudo[263247]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:52 compute-0 sudo[263500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:52 compute-0 sudo[263500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:52 compute-0 sudo[263500]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:52 compute-0 sudo[263525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:35:52 compute-0 sudo[263525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:52 compute-0 sudo[263525]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:52 compute-0 sudo[263550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:52 compute-0 sudo[263550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:52 compute-0 sudo[263550]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:53 compute-0 sudo[263575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:35:53 compute-0 sudo[263575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651880508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.049 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.145 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.146 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.150 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.150 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.154 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.155 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.318 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.320 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4069MB free_disk=20.823078155517578GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.320 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.320 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:53 compute-0 podman[263643]: 2026-01-31 07:35:53.369512358 +0000 UTC m=+0.037355210 container create 6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.400 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Migration for instance b9a76fba-cff2-455a-9aa6-7b839819e78b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 31 07:35:53 compute-0 systemd[1]: Started libpod-conmon-6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681.scope.
Jan 31 07:35:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.422 247708 INFO nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating resource usage from migration 72833866-0b19-4638-bbd6-0ea0fd606531
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.423 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Starting to track incoming migration 72833866-0b19-4638-bbd6-0ea0fd606531 with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 07:35:53 compute-0 podman[263643]: 2026-01-31 07:35:53.432599538 +0000 UTC m=+0.100442430 container init 6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:35:53 compute-0 podman[263643]: 2026-01-31 07:35:53.438154271 +0000 UTC m=+0.105997133 container start 6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:35:53 compute-0 focused_matsumoto[263659]: 167 167
Jan 31 07:35:53 compute-0 systemd[1]: libpod-6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681.scope: Deactivated successfully.
Jan 31 07:35:53 compute-0 podman[263643]: 2026-01-31 07:35:53.443306015 +0000 UTC m=+0.111148887 container attach 6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:35:53 compute-0 podman[263643]: 2026-01-31 07:35:53.443625862 +0000 UTC m=+0.111468724 container died 6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:35:53 compute-0 podman[263643]: 2026-01-31 07:35:53.353029722 +0000 UTC m=+0.020872614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e883b697fd664fee201c5b19cc3e006814c61676c3fea2b618d3641274fdce6-merged.mount: Deactivated successfully.
Jan 31 07:35:53 compute-0 podman[263643]: 2026-01-31 07:35:53.48134426 +0000 UTC m=+0.149187122 container remove 6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:35:53 compute-0 systemd[1]: libpod-conmon-6e2f825f13f553499ffd0a34db5764ffb06157ad74c2ac0d45468ed101ade681.scope: Deactivated successfully.
Jan 31 07:35:53 compute-0 ceph-mon[74496]: pgmap v1057: 305 pgs: 305 active+clean; 334 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 2.7 MiB/s wr, 471 op/s
Jan 31 07:35:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1651880508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 31 07:35:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 31 07:35:53 compute-0 podman[263683]: 2026-01-31 07:35:53.624782034 +0000 UTC m=+0.045580449 container create 4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:35:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 31 07:35:53 compute-0 systemd[1]: Started libpod-conmon-4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57.scope.
Jan 31 07:35:53 compute-0 podman[263683]: 2026-01-31 07:35:53.60095121 +0000 UTC m=+0.021749645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:35:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/babafdb448dd556e7468aaaa24fdca2ca2c56e537692a3ec96d4c3c468f81126/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/babafdb448dd556e7468aaaa24fdca2ca2c56e537692a3ec96d4c3c468f81126/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/babafdb448dd556e7468aaaa24fdca2ca2c56e537692a3ec96d4c3c468f81126/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/babafdb448dd556e7468aaaa24fdca2ca2c56e537692a3ec96d4c3c468f81126/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:35:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:53.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.727 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance d7327aed-ddc6-4772-8d2e-6b8be365dd2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.727 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 34693a4b-4cec-41ed-a872-facd378ad627 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:35:53 compute-0 podman[263683]: 2026-01-31 07:35:53.736171425 +0000 UTC m=+0.156969840 container init 4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:35:53 compute-0 podman[263683]: 2026-01-31 07:35:53.742932998 +0000 UTC m=+0.163731433 container start 4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 07:35:53 compute-0 podman[263683]: 2026-01-31 07:35:53.747610261 +0000 UTC m=+0.168408676 container attach 4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.754 247708 WARNING nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance b9a76fba-cff2-455a-9aa6-7b839819e78b has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}.
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.754 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.754 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.985 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.987 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:53 compute-0 nova_compute[247704]: 2026-01-31 07:35:53.987 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.008 247708 DEBUG nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.011 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.011 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:35:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:35:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:54.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.080 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.090 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.103 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.262 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.282 247708 DEBUG nova.network.neutron [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating instance_info_cache with network_info: [{"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.301 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Releasing lock "refresh_cache-b9a76fba-cff2-455a-9aa6-7b839819e78b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.314 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 355 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.7 MiB/s wr, 352 op/s
Jan 31 07:35:54 compute-0 elegant_panini[263700]: {
Jan 31 07:35:54 compute-0 elegant_panini[263700]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:35:54 compute-0 elegant_panini[263700]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:35:54 compute-0 elegant_panini[263700]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:35:54 compute-0 elegant_panini[263700]:         "osd_id": 0,
Jan 31 07:35:54 compute-0 elegant_panini[263700]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:35:54 compute-0 elegant_panini[263700]:         "type": "bluestore"
Jan 31 07:35:54 compute-0 elegant_panini[263700]:     }
Jan 31 07:35:54 compute-0 elegant_panini[263700]: }
Jan 31 07:35:54 compute-0 systemd[1]: libpod-4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57.scope: Deactivated successfully.
Jan 31 07:35:54 compute-0 podman[263683]: 2026-01-31 07:35:54.563844309 +0000 UTC m=+0.984642724 container died 4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:35:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-babafdb448dd556e7468aaaa24fdca2ca2c56e537692a3ec96d4c3c468f81126-merged.mount: Deactivated successfully.
Jan 31 07:35:54 compute-0 podman[263683]: 2026-01-31 07:35:54.634830419 +0000 UTC m=+1.055628864 container remove 4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:35:54 compute-0 ceph-mon[74496]: osdmap e149: 3 total, 3 up, 3 in
Jan 31 07:35:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/209638890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:54 compute-0 systemd[1]: libpod-conmon-4c4fe6219cbcc1378b9b6ede56efa4719edf8f5d0ee15a8f17faea7afecddc57.scope: Deactivated successfully.
Jan 31 07:35:54 compute-0 sudo[263575]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:35:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:35:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736452277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e24b1a99-c595-4a3b-abca-e11053bfbd12 does not exist
Jan 31 07:35:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 37541c68-79a5-41b3-81bb-edf5b64de911 does not exist
Jan 31 07:35:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3a2856a9-27ac-4e8f-902c-0a00cf4b4261 does not exist
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.706 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.712 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:54 compute-0 podman[263742]: 2026-01-31 07:35:54.719057526 +0000 UTC m=+0.126107677 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.735 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:54 compute-0 sudo[263781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:35:54 compute-0 sudo[263781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:54 compute-0 sudo[263781]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.772 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.773 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.773 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.784 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.785 247708 INFO nova.compute.claims [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:35:54 compute-0 sudo[263809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:35:54 compute-0 sudo[263809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:35:54 compute-0 sudo[263809]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:54 compute-0 nova_compute[247704]: 2026-01-31 07:35:54.936 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.352 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:35:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3262193131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.390 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.395 247708 DEBUG nova.compute.provider_tree [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.412 247708 DEBUG nova.scheduler.client.report [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.435 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.436 247708 DEBUG nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.440 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.440 247708 DEBUG oslo_concurrency.lockutils [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.448 247708 INFO nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 31 07:35:55 compute-0 virtqemud[247621]: Domain id=11 name='instance-00000013' uuid=b9a76fba-cff2-455a-9aa6-7b839819e78b is tainted: custom-monitor
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.495 247708 DEBUG nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.496 247708 DEBUG nova.network.neutron [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.532 247708 INFO nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.556 247708 DEBUG nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.656 247708 DEBUG nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.658 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.659 247708 INFO nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Creating image(s)
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.707 247708 DEBUG nova.storage.rbd_utils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:55.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.750 247708 DEBUG nova.storage.rbd_utils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:55 compute-0 ceph-mon[74496]: pgmap v1059: 305 pgs: 305 active+clean; 355 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.7 MiB/s wr, 352 op/s
Jan 31 07:35:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2736452277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:35:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/694674579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3262193131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.788 247708 DEBUG nova.storage.rbd_utils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.792 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.811 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.815 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.816 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.854 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.855 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.856 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.857 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.898 247708 DEBUG nova.storage.rbd_utils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:55 compute-0 nova_compute[247704]: 2026-01-31 07:35:55.901 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.053 247708 DEBUG nova.network.neutron [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.054 247708 DEBUG nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:35:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:56.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.177 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.252 247708 DEBUG nova.storage.rbd_utils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] resizing rbd image 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.376 247708 DEBUG nova.objects.instance [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'migration_context' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.392 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.393 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Ensure instance console log exists: /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.394 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.395 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.395 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.398 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.403 247708 WARNING nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.408 247708 DEBUG nova.virt.libvirt.host [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.409 247708 DEBUG nova.virt.libvirt.host [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.413 247708 DEBUG nova.virt.libvirt.host [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.414 247708 DEBUG nova.virt.libvirt.host [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.416 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.416 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.417 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.417 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.418 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.418 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.418 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.419 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.419 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.420 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.420 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.421 247708 DEBUG nova.virt.hardware [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.425 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.459 247708 INFO nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 31 07:35:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 362 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 272 op/s
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:35:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/781511916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150089042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.876 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.903 247708 DEBUG nova.storage.rbd_utils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:56 compute-0 nova_compute[247704]: 2026-01-31 07:35:56.908 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:35:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250225815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.366 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.369 247708 DEBUG nova.objects.instance [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.394 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <uuid>6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</uuid>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <name>instance-00000016</name>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <nova:name>tempest-MigrationsAdminTest-server-1458295136</nova:name>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:35:56</nova:creationTime>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <system>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <entry name="serial">6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</entry>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <entry name="uuid">6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</entry>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     </system>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <os>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   </os>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <features>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   </features>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk">
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       </source>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk.config">
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       </source>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:35:57 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/console.log" append="off"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <video>
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     </video>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:35:57 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:35:57 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:35:57 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:35:57 compute-0 nova_compute[247704]: </domain>
Jan 31 07:35:57 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.458 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.458 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.459 247708 INFO nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Using config drive
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.493 247708 DEBUG nova.storage.rbd_utils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.499 247708 INFO nova.virt.libvirt.driver [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.504 247708 DEBUG nova.compute.manager [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.528 247708 DEBUG nova.objects.instance [None req-e3481e81-6360-4dc5-b17b-508a41cc41fd 12bc5e483ed94041a17decb8f76639cf 1a708ef799b84fb89a8e1e8b49b5407c - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.714 247708 INFO nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Creating config drive at /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/disk.config
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.720 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7tkoz40m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:35:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:57.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:35:57 compute-0 ceph-mon[74496]: pgmap v1060: 305 pgs: 305 active+clean; 362 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 272 op/s
Jan 31 07:35:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1150089042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3645022635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2250225815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1877029634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.843 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7tkoz40m" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.880 247708 DEBUG nova.storage.rbd_utils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rbd image 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.884 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/disk.config 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.901 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844942.898061, cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.901 247708 INFO nova.compute.manager [-] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] VM Stopped (Lifecycle Event)
Jan 31 07:35:57 compute-0 nova_compute[247704]: 2026-01-31 07:35:57.919 247708 DEBUG nova.compute.manager [None req-fd2f38da-0ed9-4a1c-a4d0-6b384edf7f50 - - - - - -] [instance: cbbe787b-8b0f-4e43-a9c2-c8b78c01fc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.066 247708 DEBUG oslo_concurrency.processutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/disk.config 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.067 247708 INFO nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Deleting local config drive /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/disk.config because it was imported into RBD.
Jan 31 07:35:58 compute-0 systemd-machined[214448]: New machine qemu-12-instance-00000016.
Jan 31 07:35:58 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000016.
Jan 31 07:35:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 371 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 250 op/s
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.555 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844958.5550094, 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.555 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] VM Resumed (Lifecycle Event)
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.557 247708 DEBUG nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.558 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.562 247708 INFO nova.virt.libvirt.driver [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance spawned successfully.
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.562 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.578 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.583 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.588 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.588 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.589 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.589 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.589 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.590 247708 DEBUG nova.virt.libvirt.driver [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:35:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.611 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.611 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844958.5577314, 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.611 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] VM Started (Lifecycle Event)
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.641 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.645 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.650 247708 INFO nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Took 2.99 seconds to spawn the instance on the hypervisor.
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.651 247708 DEBUG nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.665 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.727 247708 INFO nova.compute.manager [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Took 4.67 seconds to build instance.
Jan 31 07:35:58 compute-0 nova_compute[247704]: 2026-01-31 07:35:58.805 247708 DEBUG oslo_concurrency.lockutils [None req-15180f00-8a0e-4002-9a4c-b0091b0d37ab 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:35:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1301746315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3732196518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:35:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:35:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:59.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:35:59 compute-0 ceph-mon[74496]: pgmap v1061: 305 pgs: 305 active+clean; 371 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 250 op/s
Jan 31 07:35:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1681356475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:35:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1061276305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:35:59.877527) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959877594, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2138, "num_deletes": 254, "total_data_size": 3554213, "memory_usage": 3605280, "flush_reason": "Manual Compaction"}
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959913277, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3479806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22120, "largest_seqno": 24257, "table_properties": {"data_size": 3470339, "index_size": 5833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21135, "raw_average_key_size": 20, "raw_value_size": 3450801, "raw_average_value_size": 3393, "num_data_blocks": 257, "num_entries": 1017, "num_filter_entries": 1017, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844773, "oldest_key_time": 1769844773, "file_creation_time": 1769844959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 35776 microseconds, and 6507 cpu microseconds.
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:35:59.913323) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3479806 bytes OK
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:35:59.913339) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:35:59.914835) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:35:59.914850) EVENT_LOG_v1 {"time_micros": 1769844959914844, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:35:59.914866) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3545245, prev total WAL file size 3545245, number of live WAL files 2.
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:35:59.915575) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3398KB)], [53(7354KB)]
Jan 31 07:35:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844959915607, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11010557, "oldest_snapshot_seqno": -1}
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4971 keys, 8995982 bytes, temperature: kUnknown
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844960012943, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 8995982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8961938, "index_size": 20494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125693, "raw_average_key_size": 25, "raw_value_size": 8871415, "raw_average_value_size": 1784, "num_data_blocks": 837, "num_entries": 4971, "num_filter_entries": 4971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769844959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:36:00.013471) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 8995982 bytes
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:36:00.016245) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.9 rd, 92.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 5496, records dropped: 525 output_compression: NoCompression
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:36:00.016284) EVENT_LOG_v1 {"time_micros": 1769844960016266, "job": 28, "event": "compaction_finished", "compaction_time_micros": 97531, "compaction_time_cpu_micros": 13410, "output_level": 6, "num_output_files": 1, "total_output_size": 8995982, "num_input_records": 5496, "num_output_records": 4971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844960017045, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844960018329, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:35:59.915486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:36:00.018425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:36:00.018432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:36:00.018433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:36:00.018435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:36:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:36:00.018437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:36:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:00.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:00 compute-0 nova_compute[247704]: 2026-01-31 07:36:00.355 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 417 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 217 op/s
Jan 31 07:36:00 compute-0 nova_compute[247704]: 2026-01-31 07:36:00.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1477228523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:01 compute-0 nova_compute[247704]: 2026-01-31 07:36:01.703 247708 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:01 compute-0 nova_compute[247704]: 2026-01-31 07:36:01.703 247708 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquired lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:01 compute-0 nova_compute[247704]: 2026-01-31 07:36:01.704 247708 DEBUG nova.network.neutron [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:01.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:01 compute-0 ceph-mon[74496]: pgmap v1062: 305 pgs: 305 active+clean; 417 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 217 op/s
Jan 31 07:36:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/200743085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:02 compute-0 nova_compute[247704]: 2026-01-31 07:36:02.306 247708 DEBUG nova.network.neutron [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 451 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 232 op/s
Jan 31 07:36:02 compute-0 nova_compute[247704]: 2026-01-31 07:36:02.588 247708 DEBUG nova.network.neutron [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:02 compute-0 nova_compute[247704]: 2026-01-31 07:36:02.703 247708 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Releasing lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3216309909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2003429390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.340 247708 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.341 247708 DEBUG nova.virt.libvirt.volume.remotefs [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Creating file /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/d19041e00f2448419da87987ceaa4e9f.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.342 247708 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/d19041e00f2448419da87987ceaa4e9f.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:03.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.766 247708 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/d19041e00f2448419da87987ceaa4e9f.tmp" returned: 1 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.768 247708 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/d19041e00f2448419da87987ceaa4e9f.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.769 247708 DEBUG nova.virt.libvirt.volume.remotefs [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Creating directory /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.770 247708 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:03 compute-0 ceph-mon[74496]: pgmap v1063: 305 pgs: 305 active+clean; 451 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 232 op/s
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.978 247708 DEBUG oslo_concurrency.processutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:03 compute-0 nova_compute[247704]: 2026-01-31 07:36:03.983 247708 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:36:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 329 op/s
Jan 31 07:36:05 compute-0 nova_compute[247704]: 2026-01-31 07:36:05.358 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:05.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:05 compute-0 nova_compute[247704]: 2026-01-31 07:36:05.798 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:05 compute-0 ceph-mon[74496]: pgmap v1064: 305 pgs: 305 active+clean; 484 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.7 MiB/s wr, 329 op/s
Jan 31 07:36:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1924817307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:06.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 501 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 293 op/s
Jan 31 07:36:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2434256068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:07.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:08 compute-0 ceph-mon[74496]: pgmap v1065: 305 pgs: 305 active+clean; 501 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 293 op/s
Jan 31 07:36:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2234691212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:08.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:08 compute-0 sudo[264209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:08 compute-0 sudo[264209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:08 compute-0 sudo[264209]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:08 compute-0 sudo[264234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:08 compute-0 sudo[264234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:08 compute-0 sudo[264234]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 510 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.6 MiB/s wr, 314 op/s
Jan 31 07:36:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:09.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:09 compute-0 podman[264260]: 2026-01-31 07:36:09.90989265 +0000 UTC m=+0.063326895 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:36:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:36:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:10.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:36:10 compute-0 ceph-mon[74496]: pgmap v1066: 305 pgs: 305 active+clean; 510 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.6 MiB/s wr, 314 op/s
Jan 31 07:36:10 compute-0 nova_compute[247704]: 2026-01-31 07:36:10.360 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 548 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.6 MiB/s wr, 347 op/s
Jan 31 07:36:10 compute-0 nova_compute[247704]: 2026-01-31 07:36:10.800 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:36:11.145 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:36:11.146 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:36:11.148 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:11.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:12.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:12 compute-0 ceph-mon[74496]: pgmap v1067: 305 pgs: 305 active+clean; 548 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.6 MiB/s wr, 347 op/s
Jan 31 07:36:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 579 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 6.2 MiB/s wr, 360 op/s
Jan 31 07:36:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:13.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:14 compute-0 nova_compute[247704]: 2026-01-31 07:36:14.028 247708 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:36:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:36:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:14.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:36:14 compute-0 ceph-mon[74496]: pgmap v1068: 305 pgs: 305 active+clean; 579 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 6.2 MiB/s wr, 360 op/s
Jan 31 07:36:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 640 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.8 MiB/s wr, 425 op/s
Jan 31 07:36:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/125616842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:15 compute-0 nova_compute[247704]: 2026-01-31 07:36:15.364 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:15.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:15 compute-0 nova_compute[247704]: 2026-01-31 07:36:15.802 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:16.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:16 compute-0 ceph-mon[74496]: pgmap v1069: 305 pgs: 305 active+clean; 640 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.8 MiB/s wr, 425 op/s
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.464 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.500 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.501 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 34693a4b-4cec-41ed-a872-facd378ad627 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.501 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.501 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid b9a76fba-cff2-455a-9aa6-7b839819e78b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.502 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.503 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.503 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "34693a4b-4cec-41ed-a872-facd378ad627" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.504 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "34693a4b-4cec-41ed-a872-facd378ad627" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.504 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.505 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.505 247708 INFO nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] During sync_power_state the instance has a pending task (resize_migrating). Skip.
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.505 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.506 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.506 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 659 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.8 MiB/s wr, 327 op/s
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.562 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.564 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "34693a4b-4cec-41ed-a872-facd378ad627" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:16 compute-0 nova_compute[247704]: 2026-01-31 07:36:16.601 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:16 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 31 07:36:16 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000016.scope: Consumed 14.338s CPU time.
Jan 31 07:36:16 compute-0 systemd-machined[214448]: Machine qemu-12-instance-00000016 terminated.
Jan 31 07:36:17 compute-0 nova_compute[247704]: 2026-01-31 07:36:17.044 247708 INFO nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance shutdown successfully after 13 seconds.
Jan 31 07:36:17 compute-0 nova_compute[247704]: 2026-01-31 07:36:17.050 247708 INFO nova.virt.libvirt.driver [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance destroyed successfully.
Jan 31 07:36:17 compute-0 nova_compute[247704]: 2026-01-31 07:36:17.053 247708 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:17 compute-0 nova_compute[247704]: 2026-01-31 07:36:17.053 247708 DEBUG nova.virt.libvirt.driver [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:17 compute-0 nova_compute[247704]: 2026-01-31 07:36:17.161 247708 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Acquiring lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:17 compute-0 nova_compute[247704]: 2026-01-31 07:36:17.161 247708 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:17 compute-0 nova_compute[247704]: 2026-01-31 07:36:17.162 247708 DEBUG oslo_concurrency.lockutils [None req-0c961d4f-8fe6-4b04-88db-bbb1bebd86fa ff4d577316214792ba020f6b5cbfdc61 b7005ab5fee841f097fa31ad33b90ee5 - - default default] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:17 compute-0 ceph-mon[74496]: pgmap v1070: 305 pgs: 305 active+clean; 659 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.8 MiB/s wr, 327 op/s
Jan 31 07:36:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:17.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1923683185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1305993749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/54137756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 657 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 8.1 MiB/s wr, 321 op/s
Jan 31 07:36:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 31 07:36:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 31 07:36:19 compute-0 ceph-mon[74496]: pgmap v1071: 305 pgs: 305 active+clean; 657 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 8.1 MiB/s wr, 321 op/s
Jan 31 07:36:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 31 07:36:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:19.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:36:20
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'backups', 'cephfs.cephfs.data']
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:36:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:36:20.045 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:36:20 compute-0 nova_compute[247704]: 2026-01-31 07:36:20.045 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:36:20.046 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:36:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:36:20.047 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:36:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:36:20 compute-0 nova_compute[247704]: 2026-01-31 07:36:20.366 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 651 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 9.1 MiB/s wr, 346 op/s
Jan 31 07:36:20 compute-0 ceph-mon[74496]: osdmap e150: 3 total, 3 up, 3 in
Jan 31 07:36:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1679574900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:20 compute-0 nova_compute[247704]: 2026-01-31 07:36:20.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:21 compute-0 ceph-mon[74496]: pgmap v1073: 305 pgs: 305 active+clean; 651 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 9.1 MiB/s wr, 346 op/s
Jan 31 07:36:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4154236657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:21.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:22.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 624 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 8.3 MiB/s wr, 323 op/s
Jan 31 07:36:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3505978811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:23 compute-0 ceph-mon[74496]: pgmap v1074: 305 pgs: 305 active+clean; 624 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 8.3 MiB/s wr, 323 op/s
Jan 31 07:36:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:23.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:24.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 516 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 358 op/s
Jan 31 07:36:24 compute-0 podman[264290]: 2026-01-31 07:36:24.916241071 +0000 UTC m=+0.086789850 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:36:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1987441858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:25 compute-0 nova_compute[247704]: 2026-01-31 07:36:25.278 247708 INFO nova.compute.manager [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Swapping old allocation on dict_keys(['39dae8fb-a3d6-4f01-ab04-67eb06f4b735']) held by migration ea0c264e-0692-46a6-bf07-d98d6a516761 for instance
Jan 31 07:36:25 compute-0 nova_compute[247704]: 2026-01-31 07:36:25.331 247708 DEBUG nova.scheduler.client.report [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Overwriting current allocation {'allocations': {'09a2f316-8f9d-47b2-922f-864a1d14c517': {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}, 'generation': 28}}, 'project_id': '7c1ddd67115f4f7bab056dbb2f270ccc', 'user_id': '71f887fd92fb486a959e5ca100cb1e10', 'consumer_generation': 1} on consumer 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 31 07:36:25 compute-0 nova_compute[247704]: 2026-01-31 07:36:25.368 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:25 compute-0 nova_compute[247704]: 2026-01-31 07:36:25.622 247708 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:25 compute-0 nova_compute[247704]: 2026-01-31 07:36:25.623 247708 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:25 compute-0 nova_compute[247704]: 2026-01-31 07:36:25.623 247708 DEBUG nova.network.neutron [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:25 compute-0 ceph-mon[74496]: pgmap v1075: 305 pgs: 305 active+clean; 516 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 358 op/s
Jan 31 07:36:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1987441858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:25.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:25 compute-0 nova_compute[247704]: 2026-01-31 07:36:25.863 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:26.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:26 compute-0 nova_compute[247704]: 2026-01-31 07:36:26.297 247708 DEBUG nova.network.neutron [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 498 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.1 MiB/s wr, 399 op/s
Jan 31 07:36:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2460872254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3819849537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:27 compute-0 nova_compute[247704]: 2026-01-31 07:36:27.297 247708 DEBUG nova.network.neutron [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:27 compute-0 nova_compute[247704]: 2026-01-31 07:36:27.332 247708 DEBUG oslo_concurrency.lockutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:27 compute-0 nova_compute[247704]: 2026-01-31 07:36:27.334 247708 DEBUG nova.virt.libvirt.driver [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 31 07:36:27 compute-0 nova_compute[247704]: 2026-01-31 07:36:27.449 247708 DEBUG nova.storage.rbd_utils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] rolling back rbd image(6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 31 07:36:27 compute-0 nova_compute[247704]: 2026-01-31 07:36:27.582 247708 DEBUG nova.storage.rbd_utils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] removing snapshot(nova-resize) on rbd image(6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:36:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:27.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 31 07:36:27 compute-0 ceph-mon[74496]: pgmap v1076: 305 pgs: 305 active+clean; 498 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.1 MiB/s wr, 399 op/s
Jan 31 07:36:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 31 07:36:27 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 31 07:36:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:36:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.293 247708 DEBUG nova.virt.libvirt.driver [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.296 247708 WARNING nova.virt.libvirt.driver [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.306 247708 DEBUG nova.virt.libvirt.host [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.307 247708 DEBUG nova.virt.libvirt.host [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.324 247708 DEBUG nova.virt.libvirt.host [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.324 247708 DEBUG nova.virt.libvirt.host [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.326 247708 DEBUG nova.virt.libvirt.driver [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.326 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.327 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.327 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.327 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.328 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.328 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.328 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.328 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.329 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.329 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.329 247708 DEBUG nova.virt.hardware [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.329 247708 DEBUG nova.objects.instance [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:28 compute-0 sudo[264372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:28 compute-0 sudo[264372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:28 compute-0 sudo[264372]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.517 247708 DEBUG oslo_concurrency.processutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 510 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.4 MiB/s wr, 380 op/s
Jan 31 07:36:28 compute-0 sudo[264398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:28 compute-0 sudo[264398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:28 compute-0 sudo[264398]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:28 compute-0 ceph-mon[74496]: osdmap e151: 3 total, 3 up, 3 in
Jan 31 07:36:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2498875659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:28 compute-0 nova_compute[247704]: 2026-01-31 07:36:28.961 247708 DEBUG oslo_concurrency.processutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:29 compute-0 nova_compute[247704]: 2026-01-31 07:36:29.004 247708 DEBUG oslo_concurrency.processutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:36:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 50K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3494 syncs, 3.67 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4452 writes, 16K keys, 4452 commit groups, 1.0 writes per commit group, ingest: 15.90 MB, 0.03 MB/s
                                           Interval WAL: 4452 writes, 1824 syncs, 2.44 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 07:36:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4127953444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:29 compute-0 nova_compute[247704]: 2026-01-31 07:36:29.473 247708 DEBUG oslo_concurrency.processutils [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:29 compute-0 nova_compute[247704]: 2026-01-31 07:36:29.476 247708 DEBUG nova.virt.libvirt.driver [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <uuid>6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</uuid>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <name>instance-00000016</name>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <nova:name>tempest-MigrationsAdminTest-server-1458295136</nova:name>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:36:28</nova:creationTime>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <nova:user uuid="71f887fd92fb486a959e5ca100cb1e10">tempest-MigrationsAdminTest-137263588-project-member</nova:user>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <nova:project uuid="7c1ddd67115f4f7bab056dbb2f270ccc">tempest-MigrationsAdminTest-137263588</nova:project>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <system>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <entry name="serial">6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</entry>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <entry name="uuid">6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31</entry>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     </system>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <os>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   </os>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <features>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   </features>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk">
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       </source>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_disk.config">
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       </source>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:36:29 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31/console.log" append="off"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <video>
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     </video>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <input type="keyboard" bus="usb"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:36:29 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:36:29 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:36:29 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:36:29 compute-0 nova_compute[247704]: </domain>
Jan 31 07:36:29 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:36:29 compute-0 systemd-machined[214448]: New machine qemu-13-instance-00000016.
Jan 31 07:36:29 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-00000016.
Jan 31 07:36:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:29.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:29 compute-0 ceph-mon[74496]: pgmap v1078: 305 pgs: 305 active+clean; 510 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.4 MiB/s wr, 380 op/s
Jan 31 07:36:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2498875659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4127953444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:30.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.256 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.258 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844990.2562602, 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.258 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] VM Resumed (Lifecycle Event)
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.261 247708 DEBUG nova.compute.manager [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.265 247708 INFO nova.virt.libvirt.driver [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance running successfully.
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.266 247708 DEBUG nova.virt.libvirt.driver [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.313 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.316 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.365 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.366 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769844990.257532, 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.366 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] VM Started (Lifecycle Event)
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.370 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.392 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.397 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.408 247708 INFO nova.compute.manager [None req-4853600f-8f65-46c5-bf09-a52289e848fd 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Updating instance to original state: 'active'
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.416 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 07:36:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 536 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 3.2 MiB/s wr, 382 op/s
Jan 31 07:36:30 compute-0 nova_compute[247704]: 2026-01-31 07:36:30.914 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:31.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:32 compute-0 ceph-mon[74496]: pgmap v1079: 305 pgs: 305 active+clean; 536 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 3.2 MiB/s wr, 382 op/s
Jan 31 07:36:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3923018290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 536 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 2.2 MiB/s wr, 340 op/s
Jan 31 07:36:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:34 compute-0 ceph-mon[74496]: pgmap v1080: 305 pgs: 305 active+clean; 536 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 2.2 MiB/s wr, 340 op/s
Jan 31 07:36:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 544 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.4 MiB/s wr, 296 op/s
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.746 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.748 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.749 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.749 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.750 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.751 247708 INFO nova.compute.manager [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Terminating instance
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.753 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.753 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:34 compute-0 nova_compute[247704]: 2026-01-31 07:36:34.753 247708 DEBUG nova.network.neutron [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011847431602456729 of space, bias 1.0, pg target 3.5542294807370185 quantized to 32 (current 32)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0015525398985669085 of space, bias 1.0, pg target 0.4611043498743718 quantized to 32 (current 32)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:36:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 07:36:35 compute-0 nova_compute[247704]: 2026-01-31 07:36:35.037 247708 DEBUG nova.network.neutron [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:35 compute-0 nova_compute[247704]: 2026-01-31 07:36:35.373 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:35 compute-0 nova_compute[247704]: 2026-01-31 07:36:35.964 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:36:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:36.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:36:36 compute-0 ceph-mon[74496]: pgmap v1081: 305 pgs: 305 active+clean; 544 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.4 MiB/s wr, 296 op/s
Jan 31 07:36:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/267814869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:36 compute-0 nova_compute[247704]: 2026-01-31 07:36:36.293 247708 DEBUG nova.network.neutron [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:36 compute-0 nova_compute[247704]: 2026-01-31 07:36:36.320 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:36 compute-0 nova_compute[247704]: 2026-01-31 07:36:36.322 247708 DEBUG nova.compute.manager [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:36:36 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 31 07:36:36 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000016.scope: Consumed 7.026s CPU time.
Jan 31 07:36:36 compute-0 systemd-machined[214448]: Machine qemu-13-instance-00000016 terminated.
Jan 31 07:36:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 566 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.2 MiB/s wr, 275 op/s
Jan 31 07:36:36 compute-0 nova_compute[247704]: 2026-01-31 07:36:36.551 247708 INFO nova.virt.libvirt.driver [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance destroyed successfully.
Jan 31 07:36:36 compute-0 nova_compute[247704]: 2026-01-31 07:36:36.552 247708 DEBUG nova.objects.instance [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'resources' on Instance uuid 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:37 compute-0 ceph-mon[74496]: pgmap v1082: 305 pgs: 305 active+clean; 566 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.2 MiB/s wr, 275 op/s
Jan 31 07:36:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:37.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:38.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:38 compute-0 nova_compute[247704]: 2026-01-31 07:36:38.361 247708 INFO nova.virt.libvirt.driver [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Deleting instance files /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_del
Jan 31 07:36:38 compute-0 nova_compute[247704]: 2026-01-31 07:36:38.362 247708 INFO nova.virt.libvirt.driver [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Deletion of /var/lib/nova/instances/6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31_del complete
Jan 31 07:36:38 compute-0 nova_compute[247704]: 2026-01-31 07:36:38.447 247708 INFO nova.compute.manager [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Took 2.12 seconds to destroy the instance on the hypervisor.
Jan 31 07:36:38 compute-0 nova_compute[247704]: 2026-01-31 07:36:38.448 247708 DEBUG oslo.service.loopingcall [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:36:38 compute-0 nova_compute[247704]: 2026-01-31 07:36:38.449 247708 DEBUG nova.compute.manager [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:36:38 compute-0 nova_compute[247704]: 2026-01-31 07:36:38.449 247708 DEBUG nova.network.neutron [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:36:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 569 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.4 MiB/s wr, 255 op/s
Jan 31 07:36:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 31 07:36:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 31 07:36:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 31 07:36:39 compute-0 nova_compute[247704]: 2026-01-31 07:36:39.312 247708 DEBUG nova.network.neutron [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:39 compute-0 nova_compute[247704]: 2026-01-31 07:36:39.328 247708 DEBUG nova.network.neutron [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:39 compute-0 nova_compute[247704]: 2026-01-31 07:36:39.344 247708 INFO nova.compute.manager [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Took 0.89 seconds to deallocate network for instance.
Jan 31 07:36:39 compute-0 nova_compute[247704]: 2026-01-31 07:36:39.394 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:39 compute-0 nova_compute[247704]: 2026-01-31 07:36:39.395 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:39 compute-0 nova_compute[247704]: 2026-01-31 07:36:39.559 247708 DEBUG oslo_concurrency.processutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:39 compute-0 ceph-mon[74496]: pgmap v1083: 305 pgs: 305 active+clean; 569 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.4 MiB/s wr, 255 op/s
Jan 31 07:36:39 compute-0 ceph-mon[74496]: osdmap e152: 3 total, 3 up, 3 in
Jan 31 07:36:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2898010128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1723160795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:36:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:39.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:36:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358593603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:40 compute-0 nova_compute[247704]: 2026-01-31 07:36:40.020 247708 DEBUG oslo_concurrency.processutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:40 compute-0 nova_compute[247704]: 2026-01-31 07:36:40.027 247708 DEBUG nova.compute.provider_tree [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:40 compute-0 nova_compute[247704]: 2026-01-31 07:36:40.049 247708 DEBUG nova.scheduler.client.report [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:40 compute-0 nova_compute[247704]: 2026-01-31 07:36:40.071 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:40 compute-0 nova_compute[247704]: 2026-01-31 07:36:40.105 247708 INFO nova.scheduler.client.report [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Deleted allocations for instance 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31
Jan 31 07:36:40 compute-0 nova_compute[247704]: 2026-01-31 07:36:40.174 247708 DEBUG oslo_concurrency.lockutils [None req-6d9f5809-f035-4208-bcc9-9a129c3c8512 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:40 compute-0 nova_compute[247704]: 2026-01-31 07:36:40.377 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 544 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 283 op/s
Jan 31 07:36:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1358593603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:40 compute-0 podman[264592]: 2026-01-31 07:36:40.893112493 +0000 UTC m=+0.056763546 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 07:36:40 compute-0 nova_compute[247704]: 2026-01-31 07:36:40.968 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 07:36:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:41.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:41 compute-0 ceph-mon[74496]: pgmap v1085: 305 pgs: 305 active+clean; 544 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 283 op/s
Jan 31 07:36:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:42.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 540 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.2 MiB/s wr, 331 op/s
Jan 31 07:36:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:36:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:36:43 compute-0 ceph-mon[74496]: pgmap v1086: 305 pgs: 305 active+clean; 540 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.2 MiB/s wr, 331 op/s
Jan 31 07:36:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:44.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 521 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.3 MiB/s wr, 292 op/s
Jan 31 07:36:44 compute-0 nova_compute[247704]: 2026-01-31 07:36:44.894 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "aed3a1a9-7791-4982-b2ac-a714c6efd240" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:44 compute-0 nova_compute[247704]: 2026-01-31 07:36:44.895 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "aed3a1a9-7791-4982-b2ac-a714c6efd240" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:44 compute-0 nova_compute[247704]: 2026-01-31 07:36:44.922 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:36:44 compute-0 nova_compute[247704]: 2026-01-31 07:36:44.947 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "f2805127-2822-400e-9a60-aaf797f11954" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:44 compute-0 nova_compute[247704]: 2026-01-31 07:36:44.947 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "f2805127-2822-400e-9a60-aaf797f11954" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:44 compute-0 nova_compute[247704]: 2026-01-31 07:36:44.984 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:36:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2658974630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1356891215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:36:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1356891215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.022 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.023 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.031 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.032 247708 INFO nova.compute.claims [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.059 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.235 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.380 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146995631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.714 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.721 247708 DEBUG nova.compute.provider_tree [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.738 247708 DEBUG nova.scheduler.client.report [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:45.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.771 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.772 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.779 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.780 247708 INFO nova.compute.claims [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.836 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "debc8dfc-30a3-4db5-ab7e-6bade87e94eb" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.836 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "debc8dfc-30a3-4db5-ab7e-6bade87e94eb" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.878 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "debc8dfc-30a3-4db5-ab7e-6bade87e94eb" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.879 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.948 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.948 247708 DEBUG nova.network.neutron [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:36:45 compute-0 nova_compute[247704]: 2026-01-31 07:36:45.969 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.003 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.022 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.048 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:46 compute-0 ovn_controller[149457]: 2026-01-31T07:36:46Z|00101|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 07:36:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:46.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.111 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.113 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.113 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Creating image(s)
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.140 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image aed3a1a9-7791-4982-b2ac-a714c6efd240_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.169 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image aed3a1a9-7791-4982-b2ac-a714c6efd240_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.205 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image aed3a1a9-7791-4982-b2ac-a714c6efd240_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.211 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:46 compute-0 ceph-mon[74496]: pgmap v1087: 305 pgs: 305 active+clean; 521 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.3 MiB/s wr, 292 op/s
Jan 31 07:36:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3469651264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3146995631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.286 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.287 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.287 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.287 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.315 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image aed3a1a9-7791-4982-b2ac-a714c6efd240_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.318 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 aed3a1a9-7791-4982-b2ac-a714c6efd240_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771992844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.504 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 516 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 274 op/s
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.542 247708 DEBUG nova.compute.provider_tree [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.562 247708 DEBUG nova.scheduler.client.report [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.580 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.585 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 aed3a1a9-7791-4982-b2ac-a714c6efd240_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.620 247708 DEBUG nova.network.neutron [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.621 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.622 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "debc8dfc-30a3-4db5-ab7e-6bade87e94eb" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.622 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "debc8dfc-30a3-4db5-ab7e-6bade87e94eb" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.662 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "debc8dfc-30a3-4db5-ab7e-6bade87e94eb" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.663 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.671 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] resizing rbd image aed3a1a9-7791-4982-b2ac-a714c6efd240_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.720 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.720 247708 DEBUG nova.network.neutron [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.745 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.793 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.801 247708 DEBUG nova.objects.instance [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'migration_context' on Instance uuid aed3a1a9-7791-4982-b2ac-a714c6efd240 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.827 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.828 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Ensure instance console log exists: /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.828 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.828 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.829 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.830 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.835 247708 WARNING nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.843 247708 DEBUG nova.virt.libvirt.host [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.843 247708 DEBUG nova.virt.libvirt.host [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.851 247708 DEBUG nova.virt.libvirt.host [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.852 247708 DEBUG nova.virt.libvirt.host [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.854 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.854 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.854 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.854 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.855 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.855 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.855 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.855 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.855 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.855 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.856 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.856 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.859 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.925 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.926 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.927 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Creating image(s)
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.958 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image f2805127-2822-400e-9a60-aaf797f11954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:46 compute-0 nova_compute[247704]: 2026-01-31 07:36:46.992 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image f2805127-2822-400e-9a60-aaf797f11954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.034 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image f2805127-2822-400e-9a60-aaf797f11954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.040 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.060 247708 DEBUG nova.network.neutron [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.061 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.093 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.094 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.095 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.096 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.132 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image f2805127-2822-400e-9a60-aaf797f11954_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.136 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f2805127-2822-400e-9a60-aaf797f11954_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.184 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "34693a4b-4cec-41ed-a872-facd378ad627" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.185 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.185 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "34693a4b-4cec-41ed-a872-facd378ad627-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.185 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.185 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.187 247708 INFO nova.compute.manager [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Terminating instance
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.188 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.189 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.189 247708 DEBUG nova.network.neutron [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120442856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.323 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.364 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image aed3a1a9-7791-4982-b2ac-a714c6efd240_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.370 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1771992844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/567785149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.493 247708 DEBUG nova.network.neutron [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:47.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233105137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.862 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.863 247708 DEBUG nova.objects.instance [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'pci_devices' on Instance uuid aed3a1a9-7791-4982-b2ac-a714c6efd240 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.877 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <uuid>aed3a1a9-7791-4982-b2ac-a714c6efd240</uuid>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <name>instance-0000001c</name>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersOnMultiNodesTest-server-1760433654-1</nova:name>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:36:46</nova:creationTime>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <nova:user uuid="741e8133b32342e083b6dd5f0e316abf">tempest-ServersOnMultiNodesTest-1893229170-project-member</nova:user>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <nova:project uuid="b2c9f3f1d94b49ae835ac14aae70bd73">tempest-ServersOnMultiNodesTest-1893229170</nova:project>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <system>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <entry name="serial">aed3a1a9-7791-4982-b2ac-a714c6efd240</entry>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <entry name="uuid">aed3a1a9-7791-4982-b2ac-a714c6efd240</entry>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     </system>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <os>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   </os>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <features>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   </features>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/aed3a1a9-7791-4982-b2ac-a714c6efd240_disk">
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       </source>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/aed3a1a9-7791-4982-b2ac-a714c6efd240_disk.config">
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       </source>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:36:47 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240/console.log" append="off"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <video>
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     </video>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:36:47 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:36:47 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:36:47 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:36:47 compute-0 nova_compute[247704]: </domain>
Jan 31 07:36:47 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.925 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.926 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.927 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Using config drive
Jan 31 07:36:47 compute-0 nova_compute[247704]: 2026-01-31 07:36:47.966 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image aed3a1a9-7791-4982-b2ac-a714c6efd240_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.101 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f2805127-2822-400e-9a60-aaf797f11954_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.964s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:48.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.189 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] resizing rbd image f2805127-2822-400e-9a60-aaf797f11954_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.308 247708 DEBUG nova.objects.instance [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'migration_context' on Instance uuid f2805127-2822-400e-9a60-aaf797f11954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.320 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.320 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Ensure instance console log exists: /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.320 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.321 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.321 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.322 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.326 247708 WARNING nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.330 247708 DEBUG nova.virt.libvirt.host [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.331 247708 DEBUG nova.virt.libvirt.host [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.337 247708 DEBUG nova.virt.libvirt.host [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.337 247708 DEBUG nova.virt.libvirt.host [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.338 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.339 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.339 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.340 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.340 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.340 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.340 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.341 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.341 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.341 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.341 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.342 247708 DEBUG nova.virt.hardware [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.344 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.376 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Creating config drive at /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240/disk.config
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.383 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9ok9jmqs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:48 compute-0 ceph-mon[74496]: pgmap v1088: 305 pgs: 305 active+clean; 516 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 274 op/s
Jan 31 07:36:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4120442856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2233105137' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.487 247708 DEBUG nova.network.neutron [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.514 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-34693a4b-4cec-41ed-a872-facd378ad627" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.514 247708 DEBUG nova.compute.manager [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.515 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9ok9jmqs" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 479 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 276 op/s
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.546 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image aed3a1a9-7791-4982-b2ac-a714c6efd240_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.550 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240/disk.config aed3a1a9-7791-4982-b2ac-a714c6efd240_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:48 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 31 07:36:48 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000f.scope: Consumed 17.930s CPU time.
Jan 31 07:36:48 compute-0 systemd-machined[214448]: Machine qemu-8-instance-0000000f terminated.
Jan 31 07:36:48 compute-0 sudo[265121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:48 compute-0 sudo[265121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:48 compute-0 sudo[265121]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:48 compute-0 sudo[265156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:48 compute-0 sudo[265156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:48 compute-0 sudo[265156]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.731 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240/disk.config aed3a1a9-7791-4982-b2ac-a714c6efd240_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.733 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Deleting local config drive /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240/disk.config because it was imported into RBD.
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.794 247708 INFO nova.virt.libvirt.driver [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance destroyed successfully.
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.795 247708 DEBUG nova.objects.instance [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'resources' on Instance uuid 34693a4b-4cec-41ed-a872-facd378ad627 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:48 compute-0 systemd-machined[214448]: New machine qemu-14-instance-0000001c.
Jan 31 07:36:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3986344607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:48 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000001c.
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.838 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.864 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image f2805127-2822-400e-9a60-aaf797f11954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:48 compute-0 nova_compute[247704]: 2026-01-31 07:36:48.868 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:36:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/569068459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.339 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.342 247708 DEBUG nova.objects.instance [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'pci_devices' on Instance uuid f2805127-2822-400e-9a60-aaf797f11954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.358 247708 INFO nova.virt.libvirt.driver [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Deleting instance files /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627_del
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.359 247708 INFO nova.virt.libvirt.driver [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Deletion of /var/lib/nova/instances/34693a4b-4cec-41ed-a872-facd378ad627_del complete
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.366 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <uuid>f2805127-2822-400e-9a60-aaf797f11954</uuid>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <name>instance-0000001d</name>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersOnMultiNodesTest-server-1760433654-2</nova:name>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:36:48</nova:creationTime>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <nova:user uuid="741e8133b32342e083b6dd5f0e316abf">tempest-ServersOnMultiNodesTest-1893229170-project-member</nova:user>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <nova:project uuid="b2c9f3f1d94b49ae835ac14aae70bd73">tempest-ServersOnMultiNodesTest-1893229170</nova:project>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <system>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <entry name="serial">f2805127-2822-400e-9a60-aaf797f11954</entry>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <entry name="uuid">f2805127-2822-400e-9a60-aaf797f11954</entry>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     </system>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <os>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   </os>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <features>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   </features>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/f2805127-2822-400e-9a60-aaf797f11954_disk">
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       </source>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/f2805127-2822-400e-9a60-aaf797f11954_disk.config">
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       </source>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:36:49 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954/console.log" append="off"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <video>
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     </video>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:36:49 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:36:49 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:36:49 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:36:49 compute-0 nova_compute[247704]: </domain>
Jan 31 07:36:49 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:36:49 compute-0 ceph-mon[74496]: pgmap v1089: 305 pgs: 305 active+clean; 479 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 276 op/s
Jan 31 07:36:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3986344607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/569068459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.437 247708 INFO nova.compute.manager [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Took 0.92 seconds to destroy the instance on the hypervisor.
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.438 247708 DEBUG oslo.service.loopingcall [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.439 247708 DEBUG nova.compute.manager [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.439 247708 DEBUG nova.network.neutron [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.444 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.445 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.445 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Using config drive
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.480 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image f2805127-2822-400e-9a60-aaf797f11954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.604 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.605 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.605 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.650 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.650 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.651 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.739 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845009.7391436, aed3a1a9-7791-4982-b2ac-a714c6efd240 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.739 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] VM Resumed (Lifecycle Event)
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.741 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.741 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.744 247708 INFO nova.virt.libvirt.driver [-] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Instance spawned successfully.
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.744 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.767 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:36:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:49.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.775 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.778 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.779 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.779 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.780 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.780 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.781 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.808 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.809 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845009.7411294, aed3a1a9-7791-4982-b2ac-a714c6efd240 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.809 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] VM Started (Lifecycle Event)
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.849 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.856 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.861 247708 INFO nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Took 3.75 seconds to spawn the instance on the hypervisor.
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.862 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.890 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.936 247708 INFO nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Took 4.94 seconds to build instance.
Jan 31 07:36:49 compute-0 nova_compute[247704]: 2026-01-31 07:36:49.964 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "aed3a1a9-7791-4982-b2ac-a714c6efd240" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:36:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:36:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:50.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.369 247708 DEBUG nova.network.neutron [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.372 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Creating config drive at /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954/disk.config
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.377 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpefppsi9s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.398 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.401 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.401 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.401 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.401 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.404 247708 DEBUG nova.network.neutron [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/926484333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.432 247708 INFO nova.compute.manager [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Took 0.99 seconds to deallocate network for instance.
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.510 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpefppsi9s" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 463 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.1 MiB/s wr, 351 op/s
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.557 247708 DEBUG nova.storage.rbd_utils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] rbd image f2805127-2822-400e-9a60-aaf797f11954_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.563 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954/disk.config f2805127-2822-400e-9a60-aaf797f11954_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.586 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.587 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.711 247708 DEBUG oslo_concurrency.processutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.764 247708 DEBUG oslo_concurrency.processutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954/disk.config f2805127-2822-400e-9a60-aaf797f11954_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.765 247708 INFO nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Deleting local config drive /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954/disk.config because it was imported into RBD.
Jan 31 07:36:50 compute-0 systemd-machined[214448]: New machine qemu-15-instance-0000001d.
Jan 31 07:36:50 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000001d.
Jan 31 07:36:50 compute-0 nova_compute[247704]: 2026-01-31 07:36:50.972 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.185 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.186 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.187 247708 DEBUG oslo_concurrency.processutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.188 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845011.1870413, f2805127-2822-400e-9a60-aaf797f11954 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.188 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] VM Resumed (Lifecycle Event)
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.192 247708 INFO nova.virt.libvirt.driver [-] [instance: f2805127-2822-400e-9a60-aaf797f11954] Instance spawned successfully.
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.193 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.196 247708 DEBUG nova.compute.provider_tree [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.225 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.229 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.234 247708 DEBUG nova.scheduler.client.report [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.241 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.241 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.242 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.242 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.242 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.242 247708 DEBUG nova.virt.libvirt.driver [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.267 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.267 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845011.1873043, f2805127-2822-400e-9a60-aaf797f11954 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.267 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] VM Started (Lifecycle Event)
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.270 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.301 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.304 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.307 247708 INFO nova.scheduler.client.report [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Deleted allocations for instance 34693a4b-4cec-41ed-a872-facd378ad627
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.313 247708 INFO nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Took 4.39 seconds to spawn the instance on the hypervisor.
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.313 247708 DEBUG nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.328 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3894641582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:51 compute-0 ceph-mon[74496]: pgmap v1090: 305 pgs: 305 active+clean; 463 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.1 MiB/s wr, 351 op/s
Jan 31 07:36:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/339851413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.527 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.548 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769844996.5469947, 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.549 247708 INFO nova.compute.manager [-] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] VM Stopped (Lifecycle Event)
Jan 31 07:36:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:51.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.780 247708 DEBUG nova.compute.manager [None req-ddb9da02-0398-4b05-a659-315bd5dafab0 - - - - - -] [instance: 6ee0dd1c-cafb-45e0-9a2d-954afd9f6c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.788 247708 INFO nova.compute.manager [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Took 6.75 seconds to build instance.
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.822 247708 DEBUG oslo_concurrency.lockutils [None req-39dcda9e-e9f9-4682-b8a8-745094e8fbef 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "f2805127-2822-400e-9a60-aaf797f11954" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:51 compute-0 nova_compute[247704]: 2026-01-31 07:36:51.860 247708 DEBUG oslo_concurrency.lockutils [None req-239f90c6-4756-4287-afa8-6ea8042933b2 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "34693a4b-4cec-41ed-a872-facd378ad627" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:52.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.311 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.326 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.327 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.328 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 466 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.9 MiB/s wr, 355 op/s
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.677 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.678 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.679 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.679 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.679 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.681 247708 INFO nova.compute.manager [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Terminating instance
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.682 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.683 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquired lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:36:52 compute-0 nova_compute[247704]: 2026-01-31 07:36:52.683 247708 DEBUG nova.network.neutron [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:36:53 compute-0 nova_compute[247704]: 2026-01-31 07:36:53.301 247708 DEBUG nova.network.neutron [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:53 compute-0 ceph-mon[74496]: pgmap v1091: 305 pgs: 305 active+clean; 466 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.9 MiB/s wr, 355 op/s
Jan 31 07:36:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3600492494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:53 compute-0 nova_compute[247704]: 2026-01-31 07:36:53.709 247708 DEBUG nova.network.neutron [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:53 compute-0 nova_compute[247704]: 2026-01-31 07:36:53.732 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Releasing lock "refresh_cache-d7327aed-ddc6-4772-8d2e-6b8be365dd2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:36:53 compute-0 nova_compute[247704]: 2026-01-31 07:36:53.733 247708 DEBUG nova.compute.manager [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:36:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:53.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:53 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 31 07:36:53 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000e.scope: Consumed 18.589s CPU time.
Jan 31 07:36:53 compute-0 systemd-machined[214448]: Machine qemu-7-instance-0000000e terminated.
Jan 31 07:36:53 compute-0 nova_compute[247704]: 2026-01-31 07:36:53.954 247708 INFO nova.virt.libvirt.driver [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance destroyed successfully.
Jan 31 07:36:53 compute-0 nova_compute[247704]: 2026-01-31 07:36:53.954 247708 DEBUG nova.objects.instance [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lazy-loading 'resources' on Instance uuid d7327aed-ddc6-4772-8d2e-6b8be365dd2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:36:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:54.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.279 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.446 247708 INFO nova.virt.libvirt.driver [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Deleting instance files /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b_del
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.448 247708 INFO nova.virt.libvirt.driver [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Deletion of /var/lib/nova/instances/d7327aed-ddc6-4772-8d2e-6b8be365dd2b_del complete
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.536 247708 INFO nova.compute.manager [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Took 0.80 seconds to destroy the instance on the hypervisor.
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.537 247708 DEBUG oslo.service.loopingcall [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.539 247708 DEBUG nova.compute.manager [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.539 247708 DEBUG nova.network.neutron [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:36:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 454 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 8.0 MiB/s wr, 439 op/s
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.591 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.593 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.593 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.594 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.594 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/483490000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:36:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/654419686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.685 247708 DEBUG nova.network.neutron [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.702 247708 DEBUG nova.network.neutron [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.721 247708 INFO nova.compute.manager [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Took 0.18 seconds to deallocate network for instance.
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.769 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.772 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:54 compute-0 nova_compute[247704]: 2026-01-31 07:36:54.906 247708 DEBUG oslo_concurrency.processutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667545813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.083 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:55 compute-0 sudo[265505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:55 compute-0 sudo[265505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-0 sudo[265505]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-0 sudo[265538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:36:55 compute-0 sudo[265538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-0 sudo[265538]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.192 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.192 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.223 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.224 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.231 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.231 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:36:55 compute-0 podman[265531]: 2026-01-31 07:36:55.242205805 +0000 UTC m=+0.118812900 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 31 07:36:55 compute-0 sudo[265573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:55 compute-0 sudo[265573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-0 sudo[265573]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-0 sudo[265608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:36:55 compute-0 sudo[265608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.399 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360608350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.433 247708 DEBUG oslo_concurrency.processutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.436 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.437 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4216MB free_disk=20.820262908935547GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.437 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.439 247708 DEBUG nova.compute.provider_tree [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.464 247708 DEBUG nova.scheduler.client.report [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.509 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.512 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:36:55 compute-0 sudo[265608]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:36:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.558 247708 INFO nova.scheduler.client.report [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Deleted allocations for instance d7327aed-ddc6-4772-8d2e-6b8be365dd2b
Jan 31 07:36:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.600 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance b9a76fba-cff2-455a-9aa6-7b839819e78b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.601 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance aed3a1a9-7791-4982-b2ac-a714c6efd240 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.601 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance f2805127-2822-400e-9a60-aaf797f11954 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.601 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.601 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.613 247708 DEBUG oslo_concurrency.lockutils [None req-9e666fa8-6a72-450d-a4f9-c2b792fc3554 71f887fd92fb486a959e5ca100cb1e10 7c1ddd67115f4f7bab056dbb2f270ccc - - default default] Lock "d7327aed-ddc6-4772-8d2e-6b8be365dd2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:55 compute-0 ceph-mon[74496]: pgmap v1092: 305 pgs: 305 active+clean; 454 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 8.0 MiB/s wr, 439 op/s
Jan 31 07:36:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1667545813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/360608350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:55 compute-0 nova_compute[247704]: 2026-01-31 07:36:55.681 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:36:55 compute-0 sudo[265655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:55 compute-0 sudo[265655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-0 sudo[265655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-0 sudo[265681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:36:55 compute-0 sudo[265681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-0 sudo[265681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:55.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:55 compute-0 sudo[265725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:55 compute-0 sudo[265725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:55 compute-0 sudo[265725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:55 compute-0 sudo[265750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:36:55 compute-0 sudo[265750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:56 compute-0 nova_compute[247704]: 2026-01-31 07:36:56.018 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:36:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:36:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607464021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:56 compute-0 nova_compute[247704]: 2026-01-31 07:36:56.093 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:36:56 compute-0 nova_compute[247704]: 2026-01-31 07:36:56.098 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:36:56 compute-0 nova_compute[247704]: 2026-01-31 07:36:56.123 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:36:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:36:56 compute-0 nova_compute[247704]: 2026-01-31 07:36:56.148 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:36:56 compute-0 nova_compute[247704]: 2026-01-31 07:36:56.148 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:36:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:36:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-0 sudo[265750]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:36:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:36:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-0 sudo[265807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:56 compute-0 sudo[265807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:56 compute-0 sudo[265807]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:56 compute-0 sudo[265832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:36:56 compute-0 sudo[265832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:56 compute-0 sudo[265832]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:56 compute-0 sudo[265857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:56 compute-0 sudo[265857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:56 compute-0 sudo[265857]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:56 compute-0 sudo[265882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 07:36:56 compute-0 sudo[265882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 409 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.3 MiB/s wr, 456 op/s
Jan 31 07:36:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2607464021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1402203378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:56 compute-0 podman[265945]: 2026-01-31 07:36:56.826874847 +0000 UTC m=+0.075249207 container create 20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:36:56 compute-0 podman[265945]: 2026-01-31 07:36:56.771512256 +0000 UTC m=+0.019886636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:36:56 compute-0 systemd[1]: Started libpod-conmon-20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4.scope.
Jan 31 07:36:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:36:56 compute-0 podman[265945]: 2026-01-31 07:36:56.951647021 +0000 UTC m=+0.200021391 container init 20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:36:56 compute-0 podman[265945]: 2026-01-31 07:36:56.959957084 +0000 UTC m=+0.208331424 container start 20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_maxwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:36:56 compute-0 hardcore_maxwell[265961]: 167 167
Jan 31 07:36:56 compute-0 systemd[1]: libpod-20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4.scope: Deactivated successfully.
Jan 31 07:36:56 compute-0 conmon[265961]: conmon 20afab56c33390ea9969 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4.scope/container/memory.events
Jan 31 07:36:56 compute-0 podman[265945]: 2026-01-31 07:36:56.971413203 +0000 UTC m=+0.219787573 container attach 20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 07:36:56 compute-0 podman[265945]: 2026-01-31 07:36:56.972291334 +0000 UTC m=+0.220665734 container died 20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-824ce915ea47fcddd3e3e9646de4c0885cf8afdbbd106eca901800ea0c5cf5a9-merged.mount: Deactivated successfully.
Jan 31 07:36:57 compute-0 podman[265945]: 2026-01-31 07:36:57.046262179 +0000 UTC m=+0.294636519 container remove 20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:36:57 compute-0 systemd[1]: libpod-conmon-20afab56c33390ea99691dd51e7388cb88e6fe48420368ad799f0cb3b264f9f4.scope: Deactivated successfully.
Jan 31 07:36:57 compute-0 nova_compute[247704]: 2026-01-31 07:36:57.149 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:57 compute-0 nova_compute[247704]: 2026-01-31 07:36:57.150 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:36:57 compute-0 podman[265984]: 2026-01-31 07:36:57.221996317 +0000 UTC m=+0.071471395 container create 7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:36:57 compute-0 podman[265984]: 2026-01-31 07:36:57.174931998 +0000 UTC m=+0.024407116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:36:57 compute-0 systemd[1]: Started libpod-conmon-7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223.scope.
Jan 31 07:36:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c345d279876468846c223e6b8191584e9139fcc34cc476118cb2cac2cb1796/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c345d279876468846c223e6b8191584e9139fcc34cc476118cb2cac2cb1796/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c345d279876468846c223e6b8191584e9139fcc34cc476118cb2cac2cb1796/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:36:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7c345d279876468846c223e6b8191584e9139fcc34cc476118cb2cac2cb1796/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:36:57 compute-0 podman[265984]: 2026-01-31 07:36:57.329562462 +0000 UTC m=+0.179037570 container init 7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gates, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:36:57 compute-0 podman[265984]: 2026-01-31 07:36:57.336576503 +0000 UTC m=+0.186051581 container start 7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gates, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:36:57 compute-0 podman[265984]: 2026-01-31 07:36:57.340586461 +0000 UTC m=+0.190061539 container attach 7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gates, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:36:57 compute-0 ceph-mon[74496]: pgmap v1093: 305 pgs: 305 active+clean; 409 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.3 MiB/s wr, 456 op/s
Jan 31 07:36:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1944305336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:57.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:58.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 388 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 476 op/s
Jan 31 07:36:58 compute-0 vigilant_gates[266001]: [
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:     {
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         "available": false,
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         "ceph_device": false,
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         "lsm_data": {},
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         "lvs": [],
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         "path": "/dev/sr0",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         "rejected_reasons": [
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "Has a FileSystem",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "Insufficient space (<5GB)"
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         ],
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         "sys_api": {
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "actuators": null,
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "device_nodes": "sr0",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "devname": "sr0",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "human_readable_size": "482.00 KB",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "id_bus": "ata",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "model": "QEMU DVD-ROM",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "nr_requests": "2",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "parent": "/dev/sr0",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "partitions": {},
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "path": "/dev/sr0",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "removable": "1",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "rev": "2.5+",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "ro": "0",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "rotational": "1",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "sas_address": "",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "sas_device_handle": "",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "scheduler_mode": "mq-deadline",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "sectors": 0,
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "sectorsize": "2048",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "size": 493568.0,
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "support_discard": "2048",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "type": "disk",
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:             "vendor": "QEMU"
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:         }
Jan 31 07:36:58 compute-0 vigilant_gates[266001]:     }
Jan 31 07:36:58 compute-0 vigilant_gates[266001]: ]
Jan 31 07:36:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:36:58 compute-0 systemd[1]: libpod-7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223.scope: Deactivated successfully.
Jan 31 07:36:58 compute-0 systemd[1]: libpod-7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223.scope: Consumed 1.229s CPU time.
Jan 31 07:36:58 compute-0 podman[265984]: 2026-01-31 07:36:58.64735093 +0000 UTC m=+1.496826028 container died 7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gates, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:36:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3246726691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:36:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7c345d279876468846c223e6b8191584e9139fcc34cc476118cb2cac2cb1796-merged.mount: Deactivated successfully.
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:36:59 compute-0 podman[265984]: 2026-01-31 07:36:59.510532769 +0000 UTC m=+2.360007887 container remove 7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:36:59 compute-0 systemd[1]: libpod-conmon-7eb46d08a33ad2d81fae3b95c59907f1688a48cf454965e70b5767da76f5f223.scope: Deactivated successfully.
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 sudo[265882]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ffae93b0-6799-4037-9b79-4b67b2ab5999 does not exist
Jan 31 07:36:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fc1615fa-ee10-456f-8cbc-065456135b77 does not exist
Jan 31 07:36:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f5042d00-672f-4a2a-98dc-d48b0559a5be does not exist
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:36:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:36:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:36:59 compute-0 sudo[267274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:59 compute-0 sudo[267274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:59 compute-0 sudo[267274]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:59 compute-0 sudo[267299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:36:59 compute-0 sudo[267299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:59 compute-0 sudo[267299]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:36:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:36:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:36:59 compute-0 sudo[267324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:36:59 compute-0 sudo[267324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:36:59 compute-0 sudo[267324]: pam_unix(sudo:session): session closed for user root
Jan 31 07:36:59 compute-0 ceph-mon[74496]: pgmap v1094: 305 pgs: 305 active+clean; 388 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.5 MiB/s wr, 476 op/s
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:36:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:36:59 compute-0 sudo[267349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:36:59 compute-0 sudo[267349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:00.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:00 compute-0 podman[267416]: 2026-01-31 07:37:00.185001175 +0000 UTC m=+0.032257818 container create c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:37:00 compute-0 systemd[1]: Started libpod-conmon-c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015.scope.
Jan 31 07:37:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:37:00 compute-0 podman[267416]: 2026-01-31 07:37:00.170140572 +0000 UTC m=+0.017397225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:37:00 compute-0 podman[267416]: 2026-01-31 07:37:00.273322459 +0000 UTC m=+0.120579142 container init c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:37:00 compute-0 podman[267416]: 2026-01-31 07:37:00.279849149 +0000 UTC m=+0.127105792 container start c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:37:00 compute-0 podman[267416]: 2026-01-31 07:37:00.284367409 +0000 UTC m=+0.131624072 container attach c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:37:00 compute-0 loving_proskuriakova[267432]: 167 167
Jan 31 07:37:00 compute-0 systemd[1]: libpod-c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015.scope: Deactivated successfully.
Jan 31 07:37:00 compute-0 podman[267416]: 2026-01-31 07:37:00.287929276 +0000 UTC m=+0.135185909 container died c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:37:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fac5b1ac23dc314f799cde406f9814b542177fd54df6a2dd0a3cae5026d1985-merged.mount: Deactivated successfully.
Jan 31 07:37:00 compute-0 podman[267416]: 2026-01-31 07:37:00.331711494 +0000 UTC m=+0.178968127 container remove c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:37:00 compute-0 systemd[1]: libpod-conmon-c308cb8e892f60b670682dcc41d97f3e425e24114db02955ace1341c6b9d4015.scope: Deactivated successfully.
Jan 31 07:37:00 compute-0 nova_compute[247704]: 2026-01-31 07:37:00.402 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:00 compute-0 podman[267456]: 2026-01-31 07:37:00.468738647 +0000 UTC m=+0.037773802 container create 5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:37:00 compute-0 systemd[1]: Started libpod-conmon-5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4.scope.
Jan 31 07:37:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35981daf65a916bad8630731f34b8d4fa08ca514e5468a69b29de05665bc585/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35981daf65a916bad8630731f34b8d4fa08ca514e5468a69b29de05665bc585/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35981daf65a916bad8630731f34b8d4fa08ca514e5468a69b29de05665bc585/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35981daf65a916bad8630731f34b8d4fa08ca514e5468a69b29de05665bc585/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35981daf65a916bad8630731f34b8d4fa08ca514e5468a69b29de05665bc585/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 372 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.1 MiB/s wr, 472 op/s
Jan 31 07:37:00 compute-0 podman[267456]: 2026-01-31 07:37:00.54633606 +0000 UTC m=+0.115371245 container init 5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:37:00 compute-0 podman[267456]: 2026-01-31 07:37:00.450569664 +0000 UTC m=+0.019604839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:37:00 compute-0 podman[267456]: 2026-01-31 07:37:00.551453435 +0000 UTC m=+0.120488570 container start 5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 31 07:37:00 compute-0 podman[267456]: 2026-01-31 07:37:00.555838002 +0000 UTC m=+0.124873167 container attach 5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:37:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1256924332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/351743600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:01 compute-0 nova_compute[247704]: 2026-01-31 07:37:01.020 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:01 compute-0 ecstatic_engelbart[267472]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:37:01 compute-0 ecstatic_engelbart[267472]: --> relative data size: 1.0
Jan 31 07:37:01 compute-0 ecstatic_engelbart[267472]: --> All data devices are unavailable
Jan 31 07:37:01 compute-0 systemd[1]: libpod-5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4.scope: Deactivated successfully.
Jan 31 07:37:01 compute-0 podman[267456]: 2026-01-31 07:37:01.320504968 +0000 UTC m=+0.889540113 container died 5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 07:37:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d35981daf65a916bad8630731f34b8d4fa08ca514e5468a69b29de05665bc585-merged.mount: Deactivated successfully.
Jan 31 07:37:01 compute-0 podman[267456]: 2026-01-31 07:37:01.36813465 +0000 UTC m=+0.937169795 container remove 5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:37:01 compute-0 systemd[1]: libpod-conmon-5b76ee3b099816bc09183526d354bb3e86f1d68e46e796344b835a917ea099a4.scope: Deactivated successfully.
Jan 31 07:37:01 compute-0 sudo[267349]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:01 compute-0 sudo[267500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:01 compute-0 sudo[267500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:01 compute-0 sudo[267500]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:01 compute-0 sudo[267526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:37:01 compute-0 sudo[267526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:01 compute-0 sudo[267526]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:01 compute-0 sudo[267551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:01 compute-0 sudo[267551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:01 compute-0 sudo[267551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:01 compute-0 sudo[267576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:37:01 compute-0 sudo[267576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:01 compute-0 sshd-session[267499]: Invalid user sol from 45.148.10.240 port 37346
Jan 31 07:37:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:01.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:01 compute-0 sshd-session[267499]: Connection closed by invalid user sol 45.148.10.240 port 37346 [preauth]
Jan 31 07:37:01 compute-0 podman[267642]: 2026-01-31 07:37:01.860925953 +0000 UTC m=+0.030879894 container create 0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:37:01 compute-0 systemd[1]: Started libpod-conmon-0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c.scope.
Jan 31 07:37:01 compute-0 ceph-mon[74496]: pgmap v1095: 305 pgs: 305 active+clean; 372 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.1 MiB/s wr, 472 op/s
Jan 31 07:37:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3581421991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4239516949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:37:01 compute-0 podman[267642]: 2026-01-31 07:37:01.847540796 +0000 UTC m=+0.017494757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:37:02 compute-0 podman[267642]: 2026-01-31 07:37:02.028998024 +0000 UTC m=+0.198951985 container init 0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:37:02 compute-0 podman[267642]: 2026-01-31 07:37:02.037539732 +0000 UTC m=+0.207493673 container start 0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:37:02 compute-0 wonderful_easley[267659]: 167 167
Jan 31 07:37:02 compute-0 systemd[1]: libpod-0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c.scope: Deactivated successfully.
Jan 31 07:37:02 compute-0 podman[267642]: 2026-01-31 07:37:02.121313886 +0000 UTC m=+0.291267857 container attach 0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:37:02 compute-0 podman[267642]: 2026-01-31 07:37:02.121821288 +0000 UTC m=+0.291775229 container died 0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:37:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-eccdf45aaaeb68e5802d47a75a81f25ee5fad2d1ea3746537c6d85fe660776d2-merged.mount: Deactivated successfully.
Jan 31 07:37:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 419 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.1 MiB/s wr, 390 op/s
Jan 31 07:37:02 compute-0 podman[267642]: 2026-01-31 07:37:02.607276832 +0000 UTC m=+0.777230773 container remove 0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:37:02 compute-0 systemd[1]: libpod-conmon-0b55673d94b73860b5156e60917ce03025fb58f42203011e87d7c61f46d51d0c.scope: Deactivated successfully.
Jan 31 07:37:02 compute-0 podman[267683]: 2026-01-31 07:37:02.734261011 +0000 UTC m=+0.022348447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:37:02 compute-0 podman[267683]: 2026-01-31 07:37:02.863554005 +0000 UTC m=+0.151641441 container create 76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_knuth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:37:02 compute-0 systemd[1]: Started libpod-conmon-76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f.scope.
Jan 31 07:37:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb97b3e3b0982220d7f19407de6375427dc487bec120d34a36897ae826d635cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb97b3e3b0982220d7f19407de6375427dc487bec120d34a36897ae826d635cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb97b3e3b0982220d7f19407de6375427dc487bec120d34a36897ae826d635cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb97b3e3b0982220d7f19407de6375427dc487bec120d34a36897ae826d635cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:03 compute-0 podman[267683]: 2026-01-31 07:37:03.16791734 +0000 UTC m=+0.456004806 container init 76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:37:03 compute-0 podman[267683]: 2026-01-31 07:37:03.175900985 +0000 UTC m=+0.463988431 container start 76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_knuth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:37:03 compute-0 podman[267683]: 2026-01-31 07:37:03.193538346 +0000 UTC m=+0.481625802 container attach 76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:37:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2162880359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3213225161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:03.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:03 compute-0 nova_compute[247704]: 2026-01-31 07:37:03.811 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845008.787731, 34693a4b-4cec-41ed-a872-facd378ad627 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:03 compute-0 nova_compute[247704]: 2026-01-31 07:37:03.812 247708 INFO nova.compute.manager [-] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] VM Stopped (Lifecycle Event)
Jan 31 07:37:03 compute-0 epic_knuth[267702]: {
Jan 31 07:37:03 compute-0 epic_knuth[267702]:     "0": [
Jan 31 07:37:03 compute-0 epic_knuth[267702]:         {
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "devices": [
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "/dev/loop3"
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             ],
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "lv_name": "ceph_lv0",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "lv_size": "7511998464",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "name": "ceph_lv0",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "tags": {
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.cluster_name": "ceph",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.crush_device_class": "",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.encrypted": "0",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.osd_id": "0",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.type": "block",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:                 "ceph.vdo": "0"
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             },
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "type": "block",
Jan 31 07:37:03 compute-0 epic_knuth[267702]:             "vg_name": "ceph_vg0"
Jan 31 07:37:03 compute-0 epic_knuth[267702]:         }
Jan 31 07:37:03 compute-0 epic_knuth[267702]:     ]
Jan 31 07:37:03 compute-0 epic_knuth[267702]: }
Jan 31 07:37:03 compute-0 systemd[1]: libpod-76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f.scope: Deactivated successfully.
Jan 31 07:37:03 compute-0 podman[267683]: 2026-01-31 07:37:03.964915066 +0000 UTC m=+1.253002522 container died 76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 07:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb97b3e3b0982220d7f19407de6375427dc487bec120d34a36897ae826d635cf-merged.mount: Deactivated successfully.
Jan 31 07:37:04 compute-0 podman[267683]: 2026-01-31 07:37:04.029734727 +0000 UTC m=+1.317822173 container remove 76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_knuth, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:37:04 compute-0 systemd[1]: libpod-conmon-76f6c2da8fc376d5e7d62ed01faf4d577103382004ad4ff1de7060cc5d8de92f.scope: Deactivated successfully.
Jan 31 07:37:04 compute-0 sudo[267576]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:04 compute-0 sudo[267725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:04 compute-0 sudo[267725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:04 compute-0 sudo[267725]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:37:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:37:04 compute-0 sudo[267750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:37:04 compute-0 sudo[267750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:04 compute-0 sudo[267750]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:04 compute-0 sudo[267775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:04 compute-0 sudo[267775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:04 compute-0 sudo[267775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:04 compute-0 ceph-mon[74496]: pgmap v1096: 305 pgs: 305 active+clean; 419 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.1 MiB/s wr, 390 op/s
Jan 31 07:37:04 compute-0 sudo[267800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:37:04 compute-0 sudo[267800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:04 compute-0 nova_compute[247704]: 2026-01-31 07:37:04.502 247708 DEBUG nova.compute.manager [None req-e04ccf87-ddc2-40c1-9972-99026fefe60a - - - - - -] [instance: 34693a4b-4cec-41ed-a872-facd378ad627] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 473 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.6 MiB/s wr, 405 op/s
Jan 31 07:37:04 compute-0 podman[267862]: 2026-01-31 07:37:04.598773669 +0000 UTC m=+0.035735482 container create 13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:37:04 compute-0 systemd[1]: Started libpod-conmon-13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46.scope.
Jan 31 07:37:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:37:04 compute-0 podman[267862]: 2026-01-31 07:37:04.67379181 +0000 UTC m=+0.110753643 container init 13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:37:04 compute-0 podman[267862]: 2026-01-31 07:37:04.583048786 +0000 UTC m=+0.020010629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:37:04 compute-0 podman[267862]: 2026-01-31 07:37:04.678854984 +0000 UTC m=+0.115816797 container start 13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:37:04 compute-0 podman[267862]: 2026-01-31 07:37:04.682710687 +0000 UTC m=+0.119672520 container attach 13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:37:04 compute-0 amazing_nash[267878]: 167 167
Jan 31 07:37:04 compute-0 systemd[1]: libpod-13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46.scope: Deactivated successfully.
Jan 31 07:37:04 compute-0 podman[267862]: 2026-01-31 07:37:04.685524976 +0000 UTC m=+0.122486789 container died 13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:37:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e548adeaba760068530f766b03d7e6cf449c5e8c7d2efc437abd5bcb9d28a5-merged.mount: Deactivated successfully.
Jan 31 07:37:04 compute-0 podman[267862]: 2026-01-31 07:37:04.730890903 +0000 UTC m=+0.167852726 container remove 13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_nash, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:37:04 compute-0 systemd[1]: libpod-conmon-13ac6fd1aea8e517ae1c4398237ec88b77ef8f042a58ed55dc2d926fb8b08f46.scope: Deactivated successfully.
Jan 31 07:37:04 compute-0 podman[267902]: 2026-01-31 07:37:04.893750466 +0000 UTC m=+0.042289602 container create 1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:37:04 compute-0 systemd[1]: Started libpod-conmon-1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2.scope.
Jan 31 07:37:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42711b26373479662f4c0dc73df6aec8be02714f4de8ea37d616ea50bb167e6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42711b26373479662f4c0dc73df6aec8be02714f4de8ea37d616ea50bb167e6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42711b26373479662f4c0dc73df6aec8be02714f4de8ea37d616ea50bb167e6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42711b26373479662f4c0dc73df6aec8be02714f4de8ea37d616ea50bb167e6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:37:04 compute-0 podman[267902]: 2026-01-31 07:37:04.96850993 +0000 UTC m=+0.117049116 container init 1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:37:04 compute-0 podman[267902]: 2026-01-31 07:37:04.874761714 +0000 UTC m=+0.023300870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:37:04 compute-0 podman[267902]: 2026-01-31 07:37:04.977533031 +0000 UTC m=+0.126072167 container start 1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:37:04 compute-0 podman[267902]: 2026-01-31 07:37:04.980431271 +0000 UTC m=+0.128970467 container attach 1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:37:05 compute-0 nova_compute[247704]: 2026-01-31 07:37:05.406 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]: {
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]:         "osd_id": 0,
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]:         "type": "bluestore"
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]:     }
Jan 31 07:37:05 compute-0 gallant_driscoll[267919]: }
Jan 31 07:37:05 compute-0 systemd[1]: libpod-1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2.scope: Deactivated successfully.
Jan 31 07:37:05 compute-0 podman[267902]: 2026-01-31 07:37:05.78332649 +0000 UTC m=+0.931865646 container died 1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:37:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-42711b26373479662f4c0dc73df6aec8be02714f4de8ea37d616ea50bb167e6f-merged.mount: Deactivated successfully.
Jan 31 07:37:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:05.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:05 compute-0 podman[267902]: 2026-01-31 07:37:05.842964155 +0000 UTC m=+0.991503291 container remove 1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:37:05 compute-0 systemd[1]: libpod-conmon-1097ec9296aaf932d62e961dfab23412eca9ee9fd8b57c1a61576398f37392f2.scope: Deactivated successfully.
Jan 31 07:37:05 compute-0 sudo[267800]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:37:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:37:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:37:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:37:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 497269ef-4764-41e9-928d-920d8dc71bd7 does not exist
Jan 31 07:37:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 58c256a9-107e-4343-8dd6-8aa42e14e3c5 does not exist
Jan 31 07:37:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 58d6179d-d6bb-4a20-a6b9-3366478eba56 does not exist
Jan 31 07:37:05 compute-0 sudo[267955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:05 compute-0 sudo[267955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:05 compute-0 sudo[267955]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:06 compute-0 nova_compute[247704]: 2026-01-31 07:37:06.021 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:06 compute-0 sudo[267980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:37:06 compute-0 sudo[267980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:06 compute-0 sudo[267980]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:06.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:06 compute-0 ceph-mon[74496]: pgmap v1097: 305 pgs: 305 active+clean; 473 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.6 MiB/s wr, 405 op/s
Jan 31 07:37:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:37:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:37:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 509 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 8.1 MiB/s wr, 349 op/s
Jan 31 07:37:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3541215565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:07.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:08.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:08 compute-0 ceph-mon[74496]: pgmap v1098: 305 pgs: 305 active+clean; 509 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 8.1 MiB/s wr, 349 op/s
Jan 31 07:37:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/741240861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 524 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 8.7 MiB/s wr, 350 op/s
Jan 31 07:37:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:08 compute-0 sudo[268006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:08 compute-0 sudo[268006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:08 compute-0 sudo[268006]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:08 compute-0 sudo[268031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:08 compute-0 sudo[268031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:08 compute-0 sudo[268031]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:08 compute-0 nova_compute[247704]: 2026-01-31 07:37:08.952 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845013.9515383, d7327aed-ddc6-4772-8d2e-6b8be365dd2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:08 compute-0 nova_compute[247704]: 2026-01-31 07:37:08.953 247708 INFO nova.compute.manager [-] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] VM Stopped (Lifecycle Event)
Jan 31 07:37:08 compute-0 nova_compute[247704]: 2026-01-31 07:37:08.979 247708 DEBUG nova.compute.manager [None req-3ae6dbe7-6a79-43e6-81ce-d95d6523ec14 - - - - - -] [instance: d7327aed-ddc6-4772-8d2e-6b8be365dd2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:09.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:10.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:10 compute-0 ceph-mon[74496]: pgmap v1099: 305 pgs: 305 active+clean; 524 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 8.7 MiB/s wr, 350 op/s
Jan 31 07:37:10 compute-0 nova_compute[247704]: 2026-01-31 07:37:10.409 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 496 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.8 MiB/s wr, 366 op/s
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.024 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:11.146 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:11.147 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:11.148 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2091659878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1151136867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:11.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:11 compute-0 podman[268058]: 2026-01-31 07:37:11.893910871 +0000 UTC m=+0.065356286 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.995 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "aed3a1a9-7791-4982-b2ac-a714c6efd240" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.996 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "aed3a1a9-7791-4982-b2ac-a714c6efd240" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.996 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "aed3a1a9-7791-4982-b2ac-a714c6efd240-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.996 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "aed3a1a9-7791-4982-b2ac-a714c6efd240-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.996 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "aed3a1a9-7791-4982-b2ac-a714c6efd240-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.998 247708 INFO nova.compute.manager [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Terminating instance
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.999 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "refresh_cache-aed3a1a9-7791-4982-b2ac-a714c6efd240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.999 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquired lock "refresh_cache-aed3a1a9-7791-4982-b2ac-a714c6efd240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:37:11 compute-0 nova_compute[247704]: 2026-01-31 07:37:11.999 247708 DEBUG nova.network.neutron [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:37:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:12.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.185 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "f2805127-2822-400e-9a60-aaf797f11954" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.186 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "f2805127-2822-400e-9a60-aaf797f11954" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.186 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "f2805127-2822-400e-9a60-aaf797f11954-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.186 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "f2805127-2822-400e-9a60-aaf797f11954-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.187 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "f2805127-2822-400e-9a60-aaf797f11954-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.189 247708 INFO nova.compute.manager [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Terminating instance
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.190 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "refresh_cache-f2805127-2822-400e-9a60-aaf797f11954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.190 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquired lock "refresh_cache-f2805127-2822-400e-9a60-aaf797f11954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.191 247708 DEBUG nova.network.neutron [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.193 247708 DEBUG nova.network.neutron [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:12 compute-0 ceph-mon[74496]: pgmap v1100: 305 pgs: 305 active+clean; 496 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.8 MiB/s wr, 366 op/s
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.526 247708 DEBUG nova.network.neutron [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 437 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.8 MiB/s wr, 390 op/s
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.591 247708 DEBUG nova.network.neutron [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.822 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Releasing lock "refresh_cache-aed3a1a9-7791-4982-b2ac-a714c6efd240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.823 247708 DEBUG nova.compute.manager [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:37:12 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Jan 31 07:37:12 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001c.scope: Consumed 13.202s CPU time.
Jan 31 07:37:12 compute-0 systemd-machined[214448]: Machine qemu-14-instance-0000001c terminated.
Jan 31 07:37:12 compute-0 nova_compute[247704]: 2026-01-31 07:37:12.937 247708 DEBUG nova.network.neutron [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.024 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Releasing lock "refresh_cache-f2805127-2822-400e-9a60-aaf797f11954" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.025 247708 DEBUG nova.compute.manager [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.040 247708 INFO nova.virt.libvirt.driver [-] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Instance destroyed successfully.
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.041 247708 DEBUG nova.objects.instance [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'resources' on Instance uuid aed3a1a9-7791-4982-b2ac-a714c6efd240 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:13 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 31 07:37:13 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001d.scope: Consumed 13.656s CPU time.
Jan 31 07:37:13 compute-0 systemd-machined[214448]: Machine qemu-15-instance-0000001d terminated.
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.249 247708 INFO nova.virt.libvirt.driver [-] [instance: f2805127-2822-400e-9a60-aaf797f11954] Instance destroyed successfully.
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.249 247708 DEBUG nova.objects.instance [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lazy-loading 'resources' on Instance uuid f2805127-2822-400e-9a60-aaf797f11954 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:13 compute-0 ceph-mon[74496]: pgmap v1101: 305 pgs: 305 active+clean; 437 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.8 MiB/s wr, 390 op/s
Jan 31 07:37:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2138114906' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:37:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2138114906' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.422 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.422 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.423 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.423 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.423 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.424 247708 INFO nova.compute.manager [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Terminating instance
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.425 247708 DEBUG nova.compute.manager [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:37:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:13 compute-0 kernel: tap596408d2-96 (unregistering): left promiscuous mode
Jan 31 07:37:13 compute-0 NetworkManager[49108]: <info>  [1769845033.6727] device (tap596408d2-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:37:13 compute-0 ovn_controller[149457]: 2026-01-31T07:37:13Z|00102|binding|INFO|Releasing lport 596408d2-9689-472f-b2fb-a85a75df2923 from this chassis (sb_readonly=0)
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.683 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 ovn_controller[149457]: 2026-01-31T07:37:13Z|00103|binding|INFO|Setting lport 596408d2-9689-472f-b2fb-a85a75df2923 down in Southbound
Jan 31 07:37:13 compute-0 ovn_controller[149457]: 2026-01-31T07:37:13Z|00104|binding|INFO|Releasing lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 from this chassis (sb_readonly=0)
Jan 31 07:37:13 compute-0 ovn_controller[149457]: 2026-01-31T07:37:13Z|00105|binding|INFO|Setting lport b2e77a7e-2125-46bd-8c49-dd619c7caf36 down in Southbound
Jan 31 07:37:13 compute-0 ovn_controller[149457]: 2026-01-31T07:37:13Z|00106|binding|INFO|Removing iface tap596408d2-96 ovn-installed in OVS
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.686 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 ovn_controller[149457]: 2026-01-31T07:37:13Z|00107|binding|INFO|Releasing lport 632f26c5-40a9-4337-84da-ea4b4bbdf89c from this chassis (sb_readonly=0)
Jan 31 07:37:13 compute-0 ovn_controller[149457]: 2026-01-31T07:37:13Z|00108|binding|INFO|Releasing lport a50841ea-6849-4032-a8d6-6ba9e6fd3a95 from this chassis (sb_readonly=0)
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.691 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:5d:23 19.80.0.215'], port_security=['fa:16:3e:cc:5d:23 19.80.0.215'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['596408d2-9689-472f-b2fb-a85a75df2923'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1541952823', 'neutron:cidrs': '19.80.0.215/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-994f8d42-6738-4c92-b80e-8dbb63919128', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1541952823', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=67507551-f2d4-4c0b-ab9d-4c732fbaf469, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b2e77a7e-2125-46bd-8c49-dd619c7caf36) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.693 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:f1:d6 10.100.0.14'], port_security=['fa:16:3e:45:f1:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-334582475', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b9a76fba-cff2-455a-9aa6-7b839819e78b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-334582475', 'neutron:project_id': '29d136be5e384689a95acd607131dfd0', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'd466f767-252b-4335-8dfa-0f3f94d2209b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6f42f93-f94d-4170-883d-d45cddf5fdad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=596408d2-9689-472f-b2fb-a85a75df2923) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.695 160028 INFO neutron.agent.ovn.metadata.agent [-] Port b2e77a7e-2125-46bd-8c49-dd619c7caf36 in datapath 994f8d42-6738-4c92-b80e-8dbb63919128 unbound from our chassis
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.698 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 994f8d42-6738-4c92-b80e-8dbb63919128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.699 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e79f69cf-83c4-4d7e-941f-bd3b83baaef5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.700 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 namespace which is not needed anymore
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.711 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 31 07:37:13 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000013.scope: Consumed 6.910s CPU time.
Jan 31 07:37:13 compute-0 systemd-machined[214448]: Machine qemu-11-instance-00000013 terminated.
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[263199]: [NOTICE]   (263217) : haproxy version is 2.8.14-c23fe91
Jan 31 07:37:13 compute-0 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[263199]: [NOTICE]   (263217) : path to executable is /usr/sbin/haproxy
Jan 31 07:37:13 compute-0 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[263199]: [WARNING]  (263217) : Exiting Master process...
Jan 31 07:37:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:13 compute-0 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[263199]: [ALERT]    (263217) : Current worker (263236) exited with code 143 (Terminated)
Jan 31 07:37:13 compute-0 neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128[263199]: [WARNING]  (263217) : All workers exited. Exiting... (0)
Jan 31 07:37:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:13 compute-0 systemd[1]: libpod-c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9.scope: Deactivated successfully.
Jan 31 07:37:13 compute-0 podman[268144]: 2026-01-31 07:37:13.835012819 +0000 UTC m=+0.049736734 container died c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:37:13 compute-0 NetworkManager[49108]: <info>  [1769845033.8395] manager: (tap596408d2-96): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.840 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.844 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.860 247708 INFO nova.virt.libvirt.driver [-] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Instance destroyed successfully.
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.860 247708 DEBUG nova.objects.instance [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lazy-loading 'resources' on Instance uuid b9a76fba-cff2-455a-9aa6-7b839819e78b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9-userdata-shm.mount: Deactivated successfully.
Jan 31 07:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-029d9617648300bceb919187bf826b0f2f3a82cfc95eb5dacf64ecdbaf496358-merged.mount: Deactivated successfully.
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.878 247708 DEBUG nova.virt.libvirt.vif [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:35:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-879430066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-879430066',id=19,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:35:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='29d136be5e384689a95acd607131dfd0',ramdisk_id='',reservation_id='r-9uooh3ao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1421195096',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1421195096-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:35:57Z,user_data=None,user_id='ea44c45fe7df4f36b5c722fbfc214f2e',uuid=b9a76fba-cff2-455a-9aa6-7b839819e78b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.878 247708 DEBUG nova.network.os_vif_util [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converting VIF {"id": "596408d2-9689-472f-b2fb-a85a75df2923", "address": "fa:16:3e:45:f1:d6", "network": {"id": "e4d1862b-2abc-4d60-bc48-19a5318038f4", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-681970246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "29d136be5e384689a95acd607131dfd0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap596408d2-96", "ovs_interfaceid": "596408d2-9689-472f-b2fb-a85a75df2923", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.879 247708 DEBUG nova.network.os_vif_util [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.879 247708 DEBUG os_vif [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.881 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.881 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap596408d2-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.882 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.884 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 podman[268144]: 2026-01-31 07:37:13.886252759 +0000 UTC m=+0.100976684 container cleanup c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.887 247708 INFO os_vif [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:f1:d6,bridge_name='br-int',has_traffic_filtering=True,id=596408d2-9689-472f-b2fb-a85a75df2923,network=Network(e4d1862b-2abc-4d60-bc48-19a5318038f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap596408d2-96')
Jan 31 07:37:13 compute-0 systemd[1]: libpod-conmon-c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9.scope: Deactivated successfully.
Jan 31 07:37:13 compute-0 podman[268187]: 2026-01-31 07:37:13.942719517 +0000 UTC m=+0.039237369 container remove c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.946 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[024beee1-2aa0-4964-82d3-572138bb7b87]: (4, ('Sat Jan 31 07:37:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 (c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9)\nc0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9\nSat Jan 31 07:37:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 (c0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9)\nc0c09688d261e049f2c92d76d5c12409b6f471c190bb9d53cea3bc193c3fb3c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.947 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[39c11f98-683a-4029-ae35-d71e8b63faaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.948 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap994f8d42-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.950 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 kernel: tap994f8d42-60: left promiscuous mode
Jan 31 07:37:13 compute-0 nova_compute[247704]: 2026-01-31 07:37:13.956 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.958 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[08207311-1d10-46a2-9a32-2e4f0716ac21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.968 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[543c312b-416d-4969-a32f-c2fb8b512066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.969 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f092568a-cdde-4faf-b42d-794b4b0fc6c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.977 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[50f5d9ff-9dda-476a-949f-2eda7596ecbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519311, 'reachable_time': 39009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268220, 'error': None, 'target': 'ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.980 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-994f8d42-6738-4c92-b80e-8dbb63919128 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:37:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d994f8d42\x2d6738\x2d4c92\x2db80e\x2d8dbb63919128.mount: Deactivated successfully.
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.980 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2cc0f7-ab8e-45c0-8491-bce457b2c618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.981 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 596408d2-9689-472f-b2fb-a85a75df2923 in datapath e4d1862b-2abc-4d60-bc48-19a5318038f4 unbound from our chassis
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.982 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d1862b-2abc-4d60-bc48-19a5318038f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.983 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1223bb5a-c487-406c-9f1f-4aa7744b6f71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:13.983 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 namespace which is not needed anymore
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.022 247708 DEBUG nova.compute.manager [req-17031be2-30bf-4019-9da4-71f17d72ab14 req-f2302541-02d3-4a72-abb9-baa35a162117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.022 247708 DEBUG oslo_concurrency.lockutils [req-17031be2-30bf-4019-9da4-71f17d72ab14 req-f2302541-02d3-4a72-abb9-baa35a162117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.023 247708 DEBUG oslo_concurrency.lockutils [req-17031be2-30bf-4019-9da4-71f17d72ab14 req-f2302541-02d3-4a72-abb9-baa35a162117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.023 247708 DEBUG oslo_concurrency.lockutils [req-17031be2-30bf-4019-9da4-71f17d72ab14 req-f2302541-02d3-4a72-abb9-baa35a162117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.023 247708 DEBUG nova.compute.manager [req-17031be2-30bf-4019-9da4-71f17d72ab14 req-f2302541-02d3-4a72-abb9-baa35a162117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.023 247708 DEBUG nova.compute.manager [req-17031be2-30bf-4019-9da4-71f17d72ab14 req-f2302541-02d3-4a72-abb9-baa35a162117 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-unplugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.050 247708 INFO nova.virt.libvirt.driver [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Deleting instance files /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240_del
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.051 247708 INFO nova.virt.libvirt.driver [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Deletion of /var/lib/nova/instances/aed3a1a9-7791-4982-b2ac-a714c6efd240_del complete
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.086 247708 INFO nova.virt.libvirt.driver [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Deleting instance files /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954_del
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.087 247708 INFO nova.virt.libvirt.driver [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Deletion of /var/lib/nova/instances/f2805127-2822-400e-9a60-aaf797f11954_del complete
Jan 31 07:37:14 compute-0 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[263445]: [NOTICE]   (263453) : haproxy version is 2.8.14-c23fe91
Jan 31 07:37:14 compute-0 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[263445]: [NOTICE]   (263453) : path to executable is /usr/sbin/haproxy
Jan 31 07:37:14 compute-0 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[263445]: [WARNING]  (263453) : Exiting Master process...
Jan 31 07:37:14 compute-0 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[263445]: [ALERT]    (263453) : Current worker (263456) exited with code 143 (Terminated)
Jan 31 07:37:14 compute-0 neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4[263445]: [WARNING]  (263453) : All workers exited. Exiting... (0)
Jan 31 07:37:14 compute-0 systemd[1]: libpod-d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0.scope: Deactivated successfully.
Jan 31 07:37:14 compute-0 conmon[263445]: conmon d8874cefc8cb7ceafa90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0.scope/container/memory.events
Jan 31 07:37:14 compute-0 podman[268238]: 2026-01-31 07:37:14.103324125 +0000 UTC m=+0.048276639 container died d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0-userdata-shm.mount: Deactivated successfully.
Jan 31 07:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d33c528a68c5aa889c02d03566f361be9f576229ba5939b5fe39cac7740466-merged.mount: Deactivated successfully.
Jan 31 07:37:14 compute-0 podman[268238]: 2026-01-31 07:37:14.137285584 +0000 UTC m=+0.082238098 container cleanup d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:37:14 compute-0 systemd[1]: libpod-conmon-d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0.scope: Deactivated successfully.
Jan 31 07:37:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:37:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:14.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.173 247708 INFO nova.compute.manager [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Took 1.35 seconds to destroy the instance on the hypervisor.
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.174 247708 DEBUG oslo.service.loopingcall [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.174 247708 DEBUG nova.compute.manager [-] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.174 247708 DEBUG nova.network.neutron [-] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.187 247708 INFO nova.compute.manager [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] [instance: f2805127-2822-400e-9a60-aaf797f11954] Took 1.16 seconds to destroy the instance on the hypervisor.
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.188 247708 DEBUG oslo.service.loopingcall [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.188 247708 DEBUG nova.compute.manager [-] [instance: f2805127-2822-400e-9a60-aaf797f11954] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.188 247708 DEBUG nova.network.neutron [-] [instance: f2805127-2822-400e-9a60-aaf797f11954] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:37:14 compute-0 podman[268269]: 2026-01-31 07:37:14.194991861 +0000 UTC m=+0.042502417 container remove d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.199 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[71a1f4e1-47e3-4415-b9ff-b404fa9c77c5]: (4, ('Sat Jan 31 07:37:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0)\nd8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0\nSat Jan 31 07:37:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 (d8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0)\nd8874cefc8cb7ceafa90461c1bf8337e53364e54e584f4650b76fe243c6da0f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.200 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[112342ec-196b-4efc-bbd9-4eed1b01fdb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.201 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d1862b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.203 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:14 compute-0 kernel: tape4d1862b-20: left promiscuous mode
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.215 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.217 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[59ba3108-4d60-44ad-bcdf-d2f641eed20f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.239 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b660a6-68ec-4f2e-a50f-86ac5c8125bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.241 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c0e911-2afd-47f8-b845-f770557bf7f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.254 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ededc141-3fae-48ae-82f7-0f8f194f5112]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519393, 'reachable_time': 38827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268284, 'error': None, 'target': 'ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.256 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d1862b-2abc-4d60-bc48-19a5318038f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:37:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:14.256 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[d537681c-cdb7-43b1-8fef-dffe7bffa20f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.287 247708 INFO nova.virt.libvirt.driver [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Deleting instance files /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b_del
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.288 247708 INFO nova.virt.libvirt.driver [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Deletion of /var/lib/nova/instances/b9a76fba-cff2-455a-9aa6-7b839819e78b_del complete
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.352 247708 DEBUG nova.network.neutron [-] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.356 247708 INFO nova.compute.manager [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Took 0.93 seconds to destroy the instance on the hypervisor.
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.357 247708 DEBUG oslo.service.loopingcall [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.357 247708 DEBUG nova.compute.manager [-] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.357 247708 DEBUG nova.network.neutron [-] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.364 247708 DEBUG nova.network.neutron [-] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.379 247708 INFO nova.compute.manager [-] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Took 0.20 seconds to deallocate network for instance.
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.385 247708 DEBUG nova.network.neutron [-] [instance: f2805127-2822-400e-9a60-aaf797f11954] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.404 247708 DEBUG nova.network.neutron [-] [instance: f2805127-2822-400e-9a60-aaf797f11954] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.425 247708 INFO nova.compute.manager [-] [instance: f2805127-2822-400e-9a60-aaf797f11954] Took 0.24 seconds to deallocate network for instance.
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.435 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.436 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.507 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 292 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.1 MiB/s wr, 411 op/s
Jan 31 07:37:14 compute-0 nova_compute[247704]: 2026-01-31 07:37:14.604 247708 DEBUG oslo_concurrency.processutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:14 compute-0 systemd[1]: run-netns-ovnmeta\x2de4d1862b\x2d2abc\x2d4d60\x2dbc48\x2d19a5318038f4.mount: Deactivated successfully.
Jan 31 07:37:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545881058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.029 247708 DEBUG oslo_concurrency.processutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.035 247708 DEBUG nova.compute.provider_tree [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.056 247708 DEBUG nova.scheduler.client.report [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.078 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.083 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.119 247708 INFO nova.scheduler.client.report [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Deleted allocations for instance aed3a1a9-7791-4982-b2ac-a714c6efd240
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.182 247708 DEBUG oslo_concurrency.processutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.206 247708 DEBUG oslo_concurrency.lockutils [None req-155b4a2b-c533-4926-9596-f704d9b3b03a 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "aed3a1a9-7791-4982-b2ac-a714c6efd240" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4023877950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.654 247708 DEBUG oslo_concurrency.processutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.658 247708 DEBUG nova.compute.provider_tree [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.675 247708 DEBUG nova.scheduler.client.report [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.697 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:15 compute-0 ceph-mon[74496]: pgmap v1102: 305 pgs: 305 active+clean; 292 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.1 MiB/s wr, 411 op/s
Jan 31 07:37:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3545881058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4023877950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.742 247708 INFO nova.scheduler.client.report [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Deleted allocations for instance f2805127-2822-400e-9a60-aaf797f11954
Jan 31 07:37:15 compute-0 nova_compute[247704]: 2026-01-31 07:37:15.805 247708 DEBUG oslo_concurrency.lockutils [None req-3de0aaac-7081-4a64-b0a1-3be880b5e651 741e8133b32342e083b6dd5f0e316abf b2c9f3f1d94b49ae835ac14aae70bd73 - - default default] Lock "f2805127-2822-400e-9a60-aaf797f11954" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:15.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.026 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.119 247708 DEBUG nova.compute.manager [req-675c1642-db58-4c2a-bed9-b8ee2c29634e req-0ca9b125-3154-45c0-b94f-ad28c4e67d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.120 247708 DEBUG oslo_concurrency.lockutils [req-675c1642-db58-4c2a-bed9-b8ee2c29634e req-0ca9b125-3154-45c0-b94f-ad28c4e67d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.121 247708 DEBUG oslo_concurrency.lockutils [req-675c1642-db58-4c2a-bed9-b8ee2c29634e req-0ca9b125-3154-45c0-b94f-ad28c4e67d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.121 247708 DEBUG oslo_concurrency.lockutils [req-675c1642-db58-4c2a-bed9-b8ee2c29634e req-0ca9b125-3154-45c0-b94f-ad28c4e67d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.122 247708 DEBUG nova.compute.manager [req-675c1642-db58-4c2a-bed9-b8ee2c29634e req-0ca9b125-3154-45c0-b94f-ad28c4e67d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] No waiting events found dispatching network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.122 247708 WARNING nova.compute.manager [req-675c1642-db58-4c2a-bed9-b8ee2c29634e req-0ca9b125-3154-45c0-b94f-ad28c4e67d2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Received unexpected event network-vif-plugged-596408d2-9689-472f-b2fb-a85a75df2923 for instance with vm_state active and task_state deleting.
Jan 31 07:37:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:37:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:16.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:37:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 181 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 377 op/s
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.678 247708 DEBUG nova.network.neutron [-] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.697 247708 INFO nova.compute.manager [-] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Took 2.34 seconds to deallocate network for instance.
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.739 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.740 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:16 compute-0 nova_compute[247704]: 2026-01-31 07:37:16.797 247708 DEBUG oslo_concurrency.processutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2312642852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:17 compute-0 nova_compute[247704]: 2026-01-31 07:37:17.245 247708 DEBUG oslo_concurrency.processutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:17 compute-0 nova_compute[247704]: 2026-01-31 07:37:17.251 247708 DEBUG nova.compute.provider_tree [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:17 compute-0 nova_compute[247704]: 2026-01-31 07:37:17.268 247708 DEBUG nova.scheduler.client.report [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:17 compute-0 nova_compute[247704]: 2026-01-31 07:37:17.305 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:17 compute-0 nova_compute[247704]: 2026-01-31 07:37:17.328 247708 INFO nova.scheduler.client.report [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Deleted allocations for instance b9a76fba-cff2-455a-9aa6-7b839819e78b
Jan 31 07:37:17 compute-0 nova_compute[247704]: 2026-01-31 07:37:17.413 247708 DEBUG oslo_concurrency.lockutils [None req-e52879d3-249f-49b1-8424-0e4f9b3f3fbc ea44c45fe7df4f36b5c722fbfc214f2e 29d136be5e384689a95acd607131dfd0 - - default default] Lock "b9a76fba-cff2-455a-9aa6-7b839819e78b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:17 compute-0 ceph-mon[74496]: pgmap v1103: 305 pgs: 305 active+clean; 181 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 377 op/s
Jan 31 07:37:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2312642852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:17.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:18.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 306 op/s
Jan 31 07:37:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:18 compute-0 nova_compute[247704]: 2026-01-31 07:37:18.894 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:19.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 31 07:37:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 31 07:37:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 31 07:37:19 compute-0 ceph-mon[74496]: pgmap v1104: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 306 op/s
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:37:20
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr']
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:37:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:20.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:37:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 67 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 21 KiB/s wr, 180 op/s
Jan 31 07:37:20 compute-0 ceph-mon[74496]: osdmap e153: 3 total, 3 up, 3 in
Jan 31 07:37:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3163764212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:21 compute-0 nova_compute[247704]: 2026-01-31 07:37:21.029 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:21.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:22 compute-0 ceph-mon[74496]: pgmap v1106: 305 pgs: 305 active+clean; 67 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 123 KiB/s rd, 21 KiB/s wr, 180 op/s
Jan 31 07:37:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:37:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:22.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:37:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:22.244 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:37:22 compute-0 nova_compute[247704]: 2026-01-31 07:37:22.245 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:22.246 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:37:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 20 KiB/s wr, 158 op/s
Jan 31 07:37:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 31 07:37:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 31 07:37:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 31 07:37:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:23 compute-0 nova_compute[247704]: 2026-01-31 07:37:23.897 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:24.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:24 compute-0 ceph-mon[74496]: pgmap v1107: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 20 KiB/s wr, 158 op/s
Jan 31 07:37:24 compute-0 ceph-mon[74496]: osdmap e154: 3 total, 3 up, 3 in
Jan 31 07:37:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 5.7 KiB/s wr, 105 op/s
Jan 31 07:37:25 compute-0 ceph-mon[74496]: pgmap v1109: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 5.7 KiB/s wr, 105 op/s
Jan 31 07:37:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:25.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:25 compute-0 podman[268357]: 2026-01-31 07:37:25.912896709 +0000 UTC m=+0.087222099 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 31 07:37:26 compute-0 nova_compute[247704]: 2026-01-31 07:37:26.030 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 4.7 KiB/s wr, 83 op/s
Jan 31 07:37:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:37:27.249 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:37:27 compute-0 ceph-mon[74496]: pgmap v1110: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 4.7 KiB/s wr, 83 op/s
Jan 31 07:37:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:37:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:27.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.040 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845033.0391161, aed3a1a9-7791-4982-b2ac-a714c6efd240 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.041 247708 INFO nova.compute.manager [-] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] VM Stopped (Lifecycle Event)
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.068 247708 DEBUG nova.compute.manager [None req-d08c1782-5b26-4083-b80f-0c5d92b00b6f - - - - - -] [instance: aed3a1a9-7791-4982-b2ac-a714c6efd240] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:37:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:28.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.248 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845033.2461588, f2805127-2822-400e-9a60-aaf797f11954 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.248 247708 INFO nova.compute.manager [-] [instance: f2805127-2822-400e-9a60-aaf797f11954] VM Stopped (Lifecycle Event)
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.274 247708 DEBUG nova.compute.manager [None req-c1045db0-25e5-45cf-9048-8b26a24c3b30 - - - - - -] [instance: f2805127-2822-400e-9a60-aaf797f11954] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 4.4 KiB/s wr, 74 op/s
Jan 31 07:37:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.858 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845033.8567803, b9a76fba-cff2-455a-9aa6-7b839819e78b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.859 247708 INFO nova.compute.manager [-] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] VM Stopped (Lifecycle Event)
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.885 247708 DEBUG nova.compute.manager [None req-4b6dae2d-a1cf-43a1-9165-5110b4799ed6 - - - - - -] [instance: b9a76fba-cff2-455a-9aa6-7b839819e78b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:28 compute-0 nova_compute[247704]: 2026-01-31 07:37:28.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:29 compute-0 sudo[268385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:29 compute-0 sudo[268385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:29 compute-0 sudo[268385]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:29 compute-0 sudo[268410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:29 compute-0 sudo[268410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:29 compute-0 sudo[268410]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:29 compute-0 ceph-mon[74496]: pgmap v1111: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 4.4 KiB/s wr, 74 op/s
Jan 31 07:37:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:29.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:30.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 62 op/s
Jan 31 07:37:31 compute-0 nova_compute[247704]: 2026-01-31 07:37:31.031 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:31 compute-0 ceph-mon[74496]: pgmap v1112: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 62 op/s
Jan 31 07:37:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:31.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:32.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 2.4 KiB/s wr, 38 op/s
Jan 31 07:37:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 31 07:37:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 31 07:37:33 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 31 07:37:33 compute-0 ceph-mon[74496]: pgmap v1113: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 2.4 KiB/s wr, 38 op/s
Jan 31 07:37:33 compute-0 ceph-mon[74496]: osdmap e155: 3 total, 3 up, 3 in
Jan 31 07:37:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:33.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:33 compute-0 nova_compute[247704]: 2026-01-31 07:37:33.944 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:34.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 716 B/s wr, 7 op/s
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:37:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:37:35 compute-0 ceph-mon[74496]: pgmap v1115: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 716 B/s wr, 7 op/s
Jan 31 07:37:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:35.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:36 compute-0 nova_compute[247704]: 2026-01-31 07:37:36.061 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:36.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Jan 31 07:37:37 compute-0 ceph-mon[74496]: pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 716 B/s wr, 6 op/s
Jan 31 07:37:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:37.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 409 B/s wr, 5 op/s
Jan 31 07:37:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:38 compute-0 nova_compute[247704]: 2026-01-31 07:37:38.949 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:39 compute-0 ceph-mon[74496]: pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 409 B/s wr, 5 op/s
Jan 31 07:37:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:37:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:39.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:37:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:41 compute-0 nova_compute[247704]: 2026-01-31 07:37:41.066 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:41.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:41 compute-0 ceph-mon[74496]: pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:42.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:42 compute-0 podman[268442]: 2026-01-31 07:37:42.901794326 +0000 UTC m=+0.066069973 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:37:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:43.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:43 compute-0 ceph-mon[74496]: pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:43 compute-0 nova_compute[247704]: 2026-01-31 07:37:43.952 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:37:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:44.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:37:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:37:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3203814439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:37:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:37:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3203814439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:37:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3203814439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:37:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3203814439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:37:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:45.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:45 compute-0 ceph-mon[74496]: pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.100 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:46.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.397 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquiring lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.398 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.424 247708 DEBUG nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.563 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.564 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.571 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.571 247708 INFO nova.compute.claims [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:37:46 compute-0 nova_compute[247704]: 2026-01-31 07:37:46.713 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2405675773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.143 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.148 247708 DEBUG nova.compute.provider_tree [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.164 247708 DEBUG nova.scheduler.client.report [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.186 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.186 247708 DEBUG nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.255 247708 DEBUG nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.256 247708 DEBUG nova.network.neutron [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.275 247708 INFO nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.296 247708 DEBUG nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.410 247708 DEBUG nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.412 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.413 247708 INFO nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Creating image(s)
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.448 247708 DEBUG nova.storage.rbd_utils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] rbd image ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.485 247708 DEBUG nova.storage.rbd_utils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] rbd image ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.512 247708 DEBUG nova.storage.rbd_utils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] rbd image ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.516 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.605 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.606 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.607 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.607 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.634 247708 DEBUG nova.storage.rbd_utils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] rbd image ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:47 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.639 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:47.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:47 compute-0 ceph-mon[74496]: pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2405675773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:47.999 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.069 247708 DEBUG nova.storage.rbd_utils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] resizing rbd image ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.166 247708 DEBUG nova.network.neutron [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.166 247708 DEBUG nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.169 247708 DEBUG nova.objects.instance [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lazy-loading 'migration_context' on Instance uuid ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:48.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.206 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.206 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Ensure instance console log exists: /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.207 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.207 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.208 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.210 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.214 247708 WARNING nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.218 247708 DEBUG nova.virt.libvirt.host [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.218 247708 DEBUG nova.virt.libvirt.host [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.221 247708 DEBUG nova.virt.libvirt.host [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.222 247708 DEBUG nova.virt.libvirt.host [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.223 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.223 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.224 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.224 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.225 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.225 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.225 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.226 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.226 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.226 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.227 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.227 247708 DEBUG nova.virt.hardware [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.230 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:37:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4058046253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.698 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.736 247708 DEBUG nova.storage.rbd_utils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] rbd image ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.741 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:48 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 07:37:48 compute-0 nova_compute[247704]: 2026-01-31 07:37:48.955 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2122428780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4058046253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:49 compute-0 sudo[268713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:49 compute-0 sudo[268713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:49 compute-0 sudo[268713]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:37:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3753196070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:49 compute-0 sudo[268738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:37:49 compute-0 sudo[268738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:37:49 compute-0 sudo[268738]: pam_unix(sudo:session): session closed for user root
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.224 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.226 247708 DEBUG nova.objects.instance [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lazy-loading 'pci_devices' on Instance uuid ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.242 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <uuid>ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7</uuid>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <name>instance-00000020</name>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <nova:name>tempest-TenantUsagesTestJSON-server-1891261661</nova:name>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:37:48</nova:creationTime>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <nova:user uuid="e119f4e1eb6947e59dddb00c375c95f9">tempest-TenantUsagesTestJSON-1295249587-project-member</nova:user>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <nova:project uuid="16da4a5c9f77452fb3a1ae8fb23c3636">tempest-TenantUsagesTestJSON-1295249587</nova:project>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <nova:ports/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <system>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <entry name="serial">ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7</entry>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <entry name="uuid">ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7</entry>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     </system>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <os>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   </os>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <features>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   </features>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk">
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       </source>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk.config">
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       </source>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:37:49 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7/console.log" append="off"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <video>
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     </video>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:37:49 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:37:49 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:37:49 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:37:49 compute-0 nova_compute[247704]: </domain>
Jan 31 07:37:49 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.297 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.297 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.298 247708 INFO nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Using config drive
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.319 247708 DEBUG nova.storage.rbd_utils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] rbd image ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.712 247708 INFO nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Creating config drive at /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7/disk.config
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.715 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmph3f42iuh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.835 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmph3f42iuh" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:49.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.867 247708 DEBUG nova.storage.rbd_utils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] rbd image ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:49 compute-0 nova_compute[247704]: 2026-01-31 07:37:49.871 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7/disk.config ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:37:50 compute-0 ceph-mon[74496]: pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:37:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3753196070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.040 247708 DEBUG oslo_concurrency.processutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7/disk.config ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.041 247708 INFO nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Deleting local config drive /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7/disk.config because it was imported into RBD.
Jan 31 07:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:37:50 compute-0 systemd-machined[214448]: New machine qemu-16-instance-00000020.
Jan 31 07:37:50 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000020.
Jan 31 07:37:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:50.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.522 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845070.5216856, ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.523 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] VM Resumed (Lifecycle Event)
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.527 247708 DEBUG nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.528 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.532 247708 INFO nova.virt.libvirt.driver [-] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Instance spawned successfully.
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.532 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:37:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 104 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 2.2 MiB/s wr, 28 op/s
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.580 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.589 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.595 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.596 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.597 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.598 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.599 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.600 247708 DEBUG nova.virt.libvirt.driver [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.711 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.711 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845070.522931, ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.712 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] VM Started (Lifecycle Event)
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.744 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.747 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.798 247708 INFO nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Took 3.39 seconds to spawn the instance on the hypervisor.
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.798 247708 DEBUG nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.801 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.898 247708 INFO nova.compute.manager [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Took 4.38 seconds to build instance.
Jan 31 07:37:50 compute-0 nova_compute[247704]: 2026-01-31 07:37:50.932 247708 DEBUG oslo_concurrency.lockutils [None req-ef91b072-f788-4f40-9e81-7577ad8d0ce5 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:51 compute-0 nova_compute[247704]: 2026-01-31 07:37:51.103 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:51 compute-0 nova_compute[247704]: 2026-01-31 07:37:51.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:51 compute-0 nova_compute[247704]: 2026-01-31 07:37:51.589 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:51 compute-0 nova_compute[247704]: 2026-01-31 07:37:51.589 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:37:51 compute-0 nova_compute[247704]: 2026-01-31 07:37:51.631 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:37:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:51.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:52 compute-0 ceph-mon[74496]: pgmap v1123: 305 pgs: 305 active+clean; 104 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 2.2 MiB/s wr, 28 op/s
Jan 31 07:37:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1392450364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:52.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.312 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquiring lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.313 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.313 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquiring lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.313 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.314 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.315 247708 INFO nova.compute.manager [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Terminating instance
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.316 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquiring lock "refresh_cache-ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.316 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquired lock "refresh_cache-ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.316 247708 DEBUG nova.network.neutron [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 143 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 449 KiB/s rd, 4.0 MiB/s wr, 69 op/s
Jan 31 07:37:52 compute-0 nova_compute[247704]: 2026-01-31 07:37:52.671 247708 DEBUG nova.network.neutron [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2146859592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2987293693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1404436523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:53 compute-0 nova_compute[247704]: 2026-01-31 07:37:53.106 247708 DEBUG nova.network.neutron [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:53 compute-0 nova_compute[247704]: 2026-01-31 07:37:53.120 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Releasing lock "refresh_cache-ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:37:53 compute-0 nova_compute[247704]: 2026-01-31 07:37:53.121 247708 DEBUG nova.compute.manager [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:37:53 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000020.scope: Deactivated successfully.
Jan 31 07:37:53 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000020.scope: Consumed 3.181s CPU time.
Jan 31 07:37:53 compute-0 systemd-machined[214448]: Machine qemu-16-instance-00000020 terminated.
Jan 31 07:37:53 compute-0 nova_compute[247704]: 2026-01-31 07:37:53.343 247708 INFO nova.virt.libvirt.driver [-] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Instance destroyed successfully.
Jan 31 07:37:53 compute-0 nova_compute[247704]: 2026-01-31 07:37:53.344 247708 DEBUG nova.objects.instance [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lazy-loading 'resources' on Instance uuid ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:53 compute-0 nova_compute[247704]: 2026-01-31 07:37:53.557 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:53 compute-0 ovn_controller[149457]: 2026-01-31T07:37:53Z|00109|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 07:37:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:53.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:53 compute-0 nova_compute[247704]: 2026-01-31 07:37:53.957 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:54.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:54 compute-0 nova_compute[247704]: 2026-01-31 07:37:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 133 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.9 MiB/s wr, 147 op/s
Jan 31 07:37:54 compute-0 ceph-mon[74496]: pgmap v1124: 305 pgs: 305 active+clean; 143 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 449 KiB/s rd, 4.0 MiB/s wr, 69 op/s
Jan 31 07:37:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/343125461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:54.998 247708 INFO nova.virt.libvirt.driver [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Deleting instance files /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_del
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:54.999 247708 INFO nova.virt.libvirt.driver [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Deletion of /var/lib/nova/instances/ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7_del complete
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.060 247708 INFO nova.compute.manager [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Took 1.94 seconds to destroy the instance on the hypervisor.
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.060 247708 DEBUG oslo.service.loopingcall [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.061 247708 DEBUG nova.compute.manager [-] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.061 247708 DEBUG nova.network.neutron [-] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.461 247708 DEBUG nova.network.neutron [-] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.494 247708 DEBUG nova.network.neutron [-] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.515 247708 INFO nova.compute.manager [-] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Took 0.45 seconds to deallocate network for instance.
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.632 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.633 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:55 compute-0 ceph-mon[74496]: pgmap v1125: 305 pgs: 305 active+clean; 133 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.9 MiB/s wr, 147 op/s
Jan 31 07:37:55 compute-0 nova_compute[247704]: 2026-01-31 07:37:55.683 247708 DEBUG oslo_concurrency.processutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:55.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.105 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241698449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.138 247708 DEBUG oslo_concurrency.processutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.144 247708 DEBUG nova.compute.provider_tree [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.180 247708 DEBUG nova.scheduler.client.report [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:56.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.225 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.257 247708 INFO nova.scheduler.client.report [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Deleted allocations for instance ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.344 247708 DEBUG oslo_concurrency.lockutils [None req-c8a6892b-00ce-432b-b55b-ccf9a5efc272 e119f4e1eb6947e59dddb00c375c95f9 16da4a5c9f77452fb3a1ae8fb23c3636 - - default default] Lock "ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 142 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 167 op/s
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.586 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.586 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.586 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.586 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:37:56 compute-0 nova_compute[247704]: 2026-01-31 07:37:56.587 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/348254901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4241698449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/567436037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:37:56 compute-0 podman[268948]: 2026-01-31 07:37:56.920824714 +0000 UTC m=+0.090013767 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:37:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084864772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.076 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.230 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.232 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4696MB free_disk=20.942790985107422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.232 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.232 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.291 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.291 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.309 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.631 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "c9aecf4d-ed14-40d4-9751-902d85959701" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.632 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.672 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:37:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3113696239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:57 compute-0 ceph-mon[74496]: pgmap v1126: 305 pgs: 305 active+clean; 142 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 167 op/s
Jan 31 07:37:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3136175005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4084864772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.755 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.760 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.775 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.780 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:37:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:57.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.916 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.916 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.916 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.922 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:37:57 compute-0 nova_compute[247704]: 2026-01-31 07:37:57.922 247708 INFO nova.compute.claims [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.083 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:58.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:37:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:37:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287800186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.542 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.551 247708 DEBUG nova.compute.provider_tree [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:37:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 212 op/s
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.585 247708 DEBUG nova.scheduler.client.report [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.610 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.611 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.653 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.653 247708 DEBUG nova.network.neutron [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:37:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.678 247708 INFO nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.707 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:37:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3113696239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1574188447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3287800186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.817 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.818 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.819 247708 INFO nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Creating image(s)
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.850 247708 DEBUG nova.storage.rbd_utils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c9aecf4d-ed14-40d4-9751-902d85959701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.881 247708 DEBUG nova.storage.rbd_utils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c9aecf4d-ed14-40d4-9751-902d85959701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.911 247708 DEBUG nova.storage.rbd_utils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c9aecf4d-ed14-40d4-9751-902d85959701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.915 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.934 247708 DEBUG nova.policy [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '533eaca1e9c4430dabe2b0a39039ca65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.937 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.960 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.968 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.969 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.970 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.970 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:58 compute-0 nova_compute[247704]: 2026-01-31 07:37:58.998 247708 DEBUG nova.storage.rbd_utils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c9aecf4d-ed14-40d4-9751-902d85959701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.003 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c9aecf4d-ed14-40d4-9751-902d85959701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.246 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c9aecf4d-ed14-40d4-9751-902d85959701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.326 247708 DEBUG nova.storage.rbd_utils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] resizing rbd image c9aecf4d-ed14-40d4-9751-902d85959701_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.445 247708 DEBUG nova.objects.instance [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'migration_context' on Instance uuid c9aecf4d-ed14-40d4-9751-902d85959701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.494 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.495 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Ensure instance console log exists: /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.495 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.496 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:37:59 compute-0 nova_compute[247704]: 2026-01-31 07:37:59.497 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:37:59 compute-0 ceph-mon[74496]: pgmap v1127: 305 pgs: 305 active+clean; 134 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 212 op/s
Jan 31 07:37:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:37:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:37:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:59.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:00.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:00 compute-0 nova_compute[247704]: 2026-01-31 07:38:00.429 247708 DEBUG nova.network.neutron [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Successfully created port: 3dae7286-8488-4698-a4e7-232e763fe00e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:38:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 148 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.8 MiB/s wr, 289 op/s
Jan 31 07:38:01 compute-0 nova_compute[247704]: 2026-01-31 07:38:01.106 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:01 compute-0 nova_compute[247704]: 2026-01-31 07:38:01.675 247708 DEBUG nova.network.neutron [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Successfully updated port: 3dae7286-8488-4698-a4e7-232e763fe00e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:38:01 compute-0 nova_compute[247704]: 2026-01-31 07:38:01.690 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "refresh_cache-c9aecf4d-ed14-40d4-9751-902d85959701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:01 compute-0 nova_compute[247704]: 2026-01-31 07:38:01.691 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquired lock "refresh_cache-c9aecf4d-ed14-40d4-9751-902d85959701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:01 compute-0 nova_compute[247704]: 2026-01-31 07:38:01.691 247708 DEBUG nova.network.neutron [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:38:01 compute-0 ceph-mon[74496]: pgmap v1128: 305 pgs: 305 active+clean; 148 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.8 MiB/s wr, 289 op/s
Jan 31 07:38:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:01.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:01 compute-0 nova_compute[247704]: 2026-01-31 07:38:01.990 247708 DEBUG nova.compute.manager [req-e26ba986-4eab-4ac6-83fe-cf2e6a44c1bc req-89ab1f82-5b81-4e55-a5f9-b794ef7d96a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received event network-changed-3dae7286-8488-4698-a4e7-232e763fe00e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:01 compute-0 nova_compute[247704]: 2026-01-31 07:38:01.991 247708 DEBUG nova.compute.manager [req-e26ba986-4eab-4ac6-83fe-cf2e6a44c1bc req-89ab1f82-5b81-4e55-a5f9-b794ef7d96a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Refreshing instance network info cache due to event network-changed-3dae7286-8488-4698-a4e7-232e763fe00e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:38:01 compute-0 nova_compute[247704]: 2026-01-31 07:38:01.992 247708 DEBUG oslo_concurrency.lockutils [req-e26ba986-4eab-4ac6-83fe-cf2e6a44c1bc req-89ab1f82-5b81-4e55-a5f9-b794ef7d96a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c9aecf4d-ed14-40d4-9751-902d85959701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:02 compute-0 nova_compute[247704]: 2026-01-31 07:38:02.182 247708 DEBUG nova.network.neutron [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:38:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:02.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 157 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.403 247708 DEBUG nova.network.neutron [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Updating instance_info_cache with network_info: [{"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.426 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Releasing lock "refresh_cache-c9aecf4d-ed14-40d4-9751-902d85959701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.427 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Instance network_info: |[{"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.427 247708 DEBUG oslo_concurrency.lockutils [req-e26ba986-4eab-4ac6-83fe-cf2e6a44c1bc req-89ab1f82-5b81-4e55-a5f9-b794ef7d96a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c9aecf4d-ed14-40d4-9751-902d85959701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.427 247708 DEBUG nova.network.neutron [req-e26ba986-4eab-4ac6-83fe-cf2e6a44c1bc req-89ab1f82-5b81-4e55-a5f9-b794ef7d96a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Refreshing network info cache for port 3dae7286-8488-4698-a4e7-232e763fe00e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.432 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Start _get_guest_xml network_info=[{"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.438 247708 WARNING nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.443 247708 DEBUG nova.virt.libvirt.host [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.443 247708 DEBUG nova.virt.libvirt.host [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.451 247708 DEBUG nova.virt.libvirt.host [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.452 247708 DEBUG nova.virt.libvirt.host [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.454 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.454 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.455 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.455 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.456 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.456 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.457 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.457 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.458 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.458 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.458 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.459 247708 DEBUG nova.virt.hardware [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.463 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:03 compute-0 ceph-mon[74496]: pgmap v1129: 305 pgs: 305 active+clean; 157 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Jan 31 07:38:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:03.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:03 compute-0 nova_compute[247704]: 2026-01-31 07:38:03.964 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:38:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3435878554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.002 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.023 247708 DEBUG nova.storage.rbd_utils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c9aecf4d-ed14-40d4-9751-902d85959701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.026 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:04.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:38:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1164294150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.508 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.510 247708 DEBUG nova.virt.libvirt.vif [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:37:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1578568736',display_name='tempest-ImagesTestJSON-server-1578568736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1578568736',id=35,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-bo6pnm9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:37:58Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=c9aecf4d-ed14-40d4-9751-902d85959701,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.511 247708 DEBUG nova.network.os_vif_util [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.512 247708 DEBUG nova.network.os_vif_util [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:0e:a1,bridge_name='br-int',has_traffic_filtering=True,id=3dae7286-8488-4698-a4e7-232e763fe00e,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dae7286-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.514 247708 DEBUG nova.objects.instance [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'pci_devices' on Instance uuid c9aecf4d-ed14-40d4-9751-902d85959701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.535 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <uuid>c9aecf4d-ed14-40d4-9751-902d85959701</uuid>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <name>instance-00000023</name>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <nova:name>tempest-ImagesTestJSON-server-1578568736</nova:name>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:38:03</nova:creationTime>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <nova:user uuid="533eaca1e9c4430dabe2b0a39039ca65">tempest-ImagesTestJSON-533495031-project-member</nova:user>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <nova:project uuid="b3e3e6f216d24c1f9f68777cfb63dbf8">tempest-ImagesTestJSON-533495031</nova:project>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <nova:port uuid="3dae7286-8488-4698-a4e7-232e763fe00e">
Jan 31 07:38:04 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <system>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <entry name="serial">c9aecf4d-ed14-40d4-9751-902d85959701</entry>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <entry name="uuid">c9aecf4d-ed14-40d4-9751-902d85959701</entry>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </system>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <os>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   </os>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <features>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   </features>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c9aecf4d-ed14-40d4-9751-902d85959701_disk">
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       </source>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c9aecf4d-ed14-40d4-9751-902d85959701_disk.config">
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       </source>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:38:04 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:2f:0e:a1"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <target dev="tap3dae7286-84"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701/console.log" append="off"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <video>
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </video>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:38:04 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:38:04 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:38:04 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:38:04 compute-0 nova_compute[247704]: </domain>
Jan 31 07:38:04 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.537 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Preparing to wait for external event network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.537 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.538 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.538 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.539 247708 DEBUG nova.virt.libvirt.vif [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:37:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1578568736',display_name='tempest-ImagesTestJSON-server-1578568736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1578568736',id=35,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-bo6pnm9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:37:58Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=c9aecf4d-ed14-40d4-9751-902d85959701,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.540 247708 DEBUG nova.network.os_vif_util [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.541 247708 DEBUG nova.network.os_vif_util [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:0e:a1,bridge_name='br-int',has_traffic_filtering=True,id=3dae7286-8488-4698-a4e7-232e763fe00e,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dae7286-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.542 247708 DEBUG os_vif [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:0e:a1,bridge_name='br-int',has_traffic_filtering=True,id=3dae7286-8488-4698-a4e7-232e763fe00e,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dae7286-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.543 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.543 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.544 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.548 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.548 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3dae7286-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.549 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3dae7286-84, col_values=(('external_ids', {'iface-id': '3dae7286-8488-4698-a4e7-232e763fe00e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:0e:a1', 'vm-uuid': 'c9aecf4d-ed14-40d4-9751-902d85959701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.551 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:04 compute-0 NetworkManager[49108]: <info>  [1769845084.5522] manager: (tap3dae7286-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.553 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.558 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.559 247708 INFO os_vif [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:0e:a1,bridge_name='br-int',has_traffic_filtering=True,id=3dae7286-8488-4698-a4e7-232e763fe00e,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dae7286-84')
Jan 31 07:38:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 181 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 285 op/s
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.605 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.605 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.605 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No VIF found with MAC fa:16:3e:2f:0e:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.606 247708 INFO nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Using config drive
Jan 31 07:38:04 compute-0 nova_compute[247704]: 2026-01-31 07:38:04.640 247708 DEBUG nova.storage.rbd_utils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c9aecf4d-ed14-40d4-9751-902d85959701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3435878554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1164294150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.313 247708 INFO nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Creating config drive at /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701/disk.config
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.317 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpc_2i5xy6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.443 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpc_2i5xy6" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.470 247708 DEBUG nova.storage.rbd_utils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c9aecf4d-ed14-40d4-9751-902d85959701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.473 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701/disk.config c9aecf4d-ed14-40d4-9751-902d85959701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.563 247708 DEBUG nova.network.neutron [req-e26ba986-4eab-4ac6-83fe-cf2e6a44c1bc req-89ab1f82-5b81-4e55-a5f9-b794ef7d96a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Updated VIF entry in instance network info cache for port 3dae7286-8488-4698-a4e7-232e763fe00e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.564 247708 DEBUG nova.network.neutron [req-e26ba986-4eab-4ac6-83fe-cf2e6a44c1bc req-89ab1f82-5b81-4e55-a5f9-b794ef7d96a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Updating instance_info_cache with network_info: [{"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.620 247708 DEBUG oslo_concurrency.lockutils [req-e26ba986-4eab-4ac6-83fe-cf2e6a44c1bc req-89ab1f82-5b81-4e55-a5f9-b794ef7d96a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c9aecf4d-ed14-40d4-9751-902d85959701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.655 247708 DEBUG oslo_concurrency.processutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701/disk.config c9aecf4d-ed14-40d4-9751-902d85959701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.655 247708 INFO nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Deleting local config drive /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701/disk.config because it was imported into RBD.
Jan 31 07:38:05 compute-0 kernel: tap3dae7286-84: entered promiscuous mode
Jan 31 07:38:05 compute-0 ovn_controller[149457]: 2026-01-31T07:38:05Z|00110|binding|INFO|Claiming lport 3dae7286-8488-4698-a4e7-232e763fe00e for this chassis.
Jan 31 07:38:05 compute-0 NetworkManager[49108]: <info>  [1769845085.7007] manager: (tap3dae7286-84): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.700 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:05 compute-0 ovn_controller[149457]: 2026-01-31T07:38:05Z|00111|binding|INFO|3dae7286-8488-4698-a4e7-232e763fe00e: Claiming fa:16:3e:2f:0e:a1 10.100.0.14
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.703 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.709 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:05 compute-0 systemd-udevd[269329]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:38:05 compute-0 systemd-machined[214448]: New machine qemu-17-instance-00000023.
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.742 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:0e:a1 10.100.0.14'], port_security=['fa:16:3e:2f:0e:a1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c9aecf4d-ed14-40d4-9751-902d85959701', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3dae7286-8488-4698-a4e7-232e763fe00e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.743 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3dae7286-8488-4698-a4e7-232e763fe00e in datapath cffffabd-62a6-4362-9315-bd726adce623 bound to our chassis
Jan 31 07:38:05 compute-0 ovn_controller[149457]: 2026-01-31T07:38:05Z|00112|binding|INFO|Setting lport 3dae7286-8488-4698-a4e7-232e763fe00e ovn-installed in OVS
Jan 31 07:38:05 compute-0 ovn_controller[149457]: 2026-01-31T07:38:05Z|00113|binding|INFO|Setting lport 3dae7286-8488-4698-a4e7-232e763fe00e up in Southbound
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.746 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.746 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:05 compute-0 NetworkManager[49108]: <info>  [1769845085.7480] device (tap3dae7286-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:38:05 compute-0 NetworkManager[49108]: <info>  [1769845085.7487] device (tap3dae7286-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:38:05 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000023.
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.758 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[249c1f87-7656-41f9-b159-070889071765]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.761 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcffffabd-61 in ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.763 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcffffabd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.763 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c82c0f-5ecb-43cf-abfe-04d5d9d5df14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.766 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[42083f03-65aa-4be8-840b-9cf7bfe3d091]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.778 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[b712af1d-ba33-4612-a0ff-a92baa4475f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.788 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2819fa46-e304-470c-8285-7565f56c5e19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.806 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[61a3246b-9d64-4cf1-b1ba-a092722ab4e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 NetworkManager[49108]: <info>  [1769845085.8134] manager: (tapcffffabd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.812 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7c88f128-b4c5-4ef5-8738-7de5a33e429f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 systemd-udevd[269331]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.834 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5366e4-d793-461f-9fe9-567fcb1bf837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.837 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8f783066-e533-4241-add0-21451162ddd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 NetworkManager[49108]: <info>  [1769845085.8558] device (tapcffffabd-60): carrier: link connected
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.861 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f32ab25a-dd23-4b11-a057-2415e0b0fdd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.875 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[24a53c87-05a7-475c-ae07-e6a487de216e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532847, 'reachable_time': 33625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269362, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:05.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:05 compute-0 ceph-mon[74496]: pgmap v1130: 305 pgs: 305 active+clean; 181 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 285 op/s
Jan 31 07:38:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4156463602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.890 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7be8e495-dc02-4fa9-876e-972075479408]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:96c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532847, 'tstamp': 532847}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269363, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.903 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[59d15171-3274-4876-a1b1-5bd13ddc19eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532847, 'reachable_time': 33625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269364, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.930 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[17fc48c8-d5ce-4d64-95d2-154a86e9cd48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.984 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[09d85903-08d2-429a-843f-4d336b12eeb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.986 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.987 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.988 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcffffabd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:05 compute-0 NetworkManager[49108]: <info>  [1769845085.9913] manager: (tapcffffabd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 31 07:38:05 compute-0 nova_compute[247704]: 2026-01-31 07:38:05.990 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:05 compute-0 kernel: tapcffffabd-60: entered promiscuous mode
Jan 31 07:38:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:05.994 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcffffabd-60, col_values=(('external_ids', {'iface-id': '549e70cf-ed02-45f9-9021-3a04088f580f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:05 compute-0 ovn_controller[149457]: 2026-01-31T07:38:05Z|00114|binding|INFO|Releasing lport 549e70cf-ed02-45f9-9021-3a04088f580f from this chassis (sb_readonly=0)
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.003 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:06.004 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:06.005 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa30d8e-c328-4b83-af74-e21c73a6f803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:06.007 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:38:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:06.009 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'env', 'PROCESS_TAG=haproxy-cffffabd-62a6-4362-9315-bd726adce623', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cffffabd-62a6-4362-9315-bd726adce623.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.108 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.113 247708 DEBUG nova.compute.manager [req-11c7a2a8-6d0e-4bfe-b4b0-e270e8b12ce8 req-44e484fe-3052-4ced-b22e-6a7a00d769dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received event network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.114 247708 DEBUG oslo_concurrency.lockutils [req-11c7a2a8-6d0e-4bfe-b4b0-e270e8b12ce8 req-44e484fe-3052-4ced-b22e-6a7a00d769dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.114 247708 DEBUG oslo_concurrency.lockutils [req-11c7a2a8-6d0e-4bfe-b4b0-e270e8b12ce8 req-44e484fe-3052-4ced-b22e-6a7a00d769dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.114 247708 DEBUG oslo_concurrency.lockutils [req-11c7a2a8-6d0e-4bfe-b4b0-e270e8b12ce8 req-44e484fe-3052-4ced-b22e-6a7a00d769dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.114 247708 DEBUG nova.compute.manager [req-11c7a2a8-6d0e-4bfe-b4b0-e270e8b12ce8 req-44e484fe-3052-4ced-b22e-6a7a00d769dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Processing event network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:38:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:38:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:06.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.298 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845086.2982905, c9aecf4d-ed14-40d4-9751-902d85959701 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.299 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] VM Started (Lifecycle Event)
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.301 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.304 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.307 247708 INFO nova.virt.libvirt.driver [-] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Instance spawned successfully.
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.307 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.320 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.326 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.330 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.330 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.331 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.331 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.331 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.332 247708 DEBUG nova.virt.libvirt.driver [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.362 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.363 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845086.2997272, c9aecf4d-ed14-40d4-9751-902d85959701 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.363 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] VM Paused (Lifecycle Event)
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.389 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:06 compute-0 sudo[269444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.393 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845086.3041775, c9aecf4d-ed14-40d4-9751-902d85959701 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.393 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] VM Resumed (Lifecycle Event)
Jan 31 07:38:06 compute-0 sudo[269444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:06 compute-0 sudo[269444]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:06 compute-0 podman[269438]: 2026-01-31 07:38:06.403967161 +0000 UTC m=+0.075572195 container create 2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.417 247708 INFO nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Took 7.60 seconds to spawn the instance on the hypervisor.
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.418 247708 DEBUG nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.421 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.427 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:38:06 compute-0 sudo[269474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:38:06 compute-0 sudo[269474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:06 compute-0 sudo[269474]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:06 compute-0 systemd[1]: Started libpod-conmon-2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395.scope.
Jan 31 07:38:06 compute-0 podman[269438]: 2026-01-31 07:38:06.363717168 +0000 UTC m=+0.035322292 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.458 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:38:06 compute-0 sudo[269501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:06 compute-0 sudo[269501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:06 compute-0 sudo[269501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4770c7e1b4586ed850bdbd241e3d604853ef8c7cf211da0c3441a47f781a9d55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:06 compute-0 podman[269438]: 2026-01-31 07:38:06.519297294 +0000 UTC m=+0.190902408 container init 2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 07:38:06 compute-0 podman[269438]: 2026-01-31 07:38:06.524191043 +0000 UTC m=+0.195796097 container start 2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:38:06 compute-0 sudo[269529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:38:06 compute-0 sudo[269529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.546 247708 INFO nova.compute.manager [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Took 8.80 seconds to build instance.
Jan 31 07:38:06 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[269505]: [NOTICE]   (269553) : New worker (269557) forked
Jan 31 07:38:06 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[269505]: [NOTICE]   (269553) : Loading success.
Jan 31 07:38:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 181 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Jan 31 07:38:06 compute-0 nova_compute[247704]: 2026-01-31 07:38:06.580 247708 DEBUG oslo_concurrency.lockutils [None req-96caf86a-b6c1-4865-a729-783632d3224b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:38:06 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:38:06 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:06 compute-0 sudo[269529]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:38:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 07:38:06 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:38:06 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:38:06 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 07:38:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:38:07 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 31 07:38:07 compute-0 ceph-mon[74496]: pgmap v1131: 305 pgs: 305 active+clean; 181 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Jan 31 07:38:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:38:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:38:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:07.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:08.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.340 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845073.3399382, ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.341 247708 INFO nova.compute.manager [-] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] VM Stopped (Lifecycle Event)
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.391 247708 DEBUG nova.compute.manager [req-7ea44b63-20ca-44a6-85a7-63f59c585467 req-58a704e2-266e-427c-b6f5-6a9007391460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received event network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.391 247708 DEBUG oslo_concurrency.lockutils [req-7ea44b63-20ca-44a6-85a7-63f59c585467 req-58a704e2-266e-427c-b6f5-6a9007391460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.392 247708 DEBUG oslo_concurrency.lockutils [req-7ea44b63-20ca-44a6-85a7-63f59c585467 req-58a704e2-266e-427c-b6f5-6a9007391460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.392 247708 DEBUG oslo_concurrency.lockutils [req-7ea44b63-20ca-44a6-85a7-63f59c585467 req-58a704e2-266e-427c-b6f5-6a9007391460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.392 247708 DEBUG nova.compute.manager [req-7ea44b63-20ca-44a6-85a7-63f59c585467 req-58a704e2-266e-427c-b6f5-6a9007391460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] No waiting events found dispatching network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.392 247708 WARNING nova.compute.manager [req-7ea44b63-20ca-44a6-85a7-63f59c585467 req-58a704e2-266e-427c-b6f5-6a9007391460 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received unexpected event network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e for instance with vm_state active and task_state pausing.
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.438 247708 INFO nova.compute.manager [None req-8b5446ea-393f-48f9-99c6-aaa788e13d74 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Pausing
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.438 247708 DEBUG nova.objects.instance [None req-8b5446ea-393f-48f9-99c6-aaa788e13d74 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'flavor' on Instance uuid c9aecf4d-ed14-40d4-9751-902d85959701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 186 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Jan 31 07:38:08 compute-0 nova_compute[247704]: 2026-01-31 07:38:08.655 247708 DEBUG nova.compute.manager [None req-a23bac90-225e-451d-8c87-fe1ea6b9cd0f - - - - - -] [instance: ad17cb8f-7050-4cf0-a100-0c5d71ba7dd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:09 compute-0 sudo[269598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:09 compute-0 sudo[269598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:09 compute-0 sudo[269598]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:09 compute-0 sudo[269624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:09 compute-0 sudo[269624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:09 compute-0 sudo[269624]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:09 compute-0 nova_compute[247704]: 2026-01-31 07:38:09.552 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:09 compute-0 nova_compute[247704]: 2026-01-31 07:38:09.656 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845089.65605, c9aecf4d-ed14-40d4-9751-902d85959701 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:09 compute-0 nova_compute[247704]: 2026-01-31 07:38:09.656 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] VM Paused (Lifecycle Event)
Jan 31 07:38:09 compute-0 nova_compute[247704]: 2026-01-31 07:38:09.658 247708 DEBUG nova.compute.manager [None req-8b5446ea-393f-48f9-99c6-aaa788e13d74 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:09 compute-0 ceph-mon[74496]: pgmap v1132: 305 pgs: 305 active+clean; 186 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Jan 31 07:38:09 compute-0 nova_compute[247704]: 2026-01-31 07:38:09.847 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:09 compute-0 nova_compute[247704]: 2026-01-31 07:38:09.852 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:38:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:38:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:09.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:38:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:10.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:38:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:38:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:38:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:38:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:38:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:38:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:38:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bc621f23-f955-43a8-aa3c-b658f7ca2537 does not exist
Jan 31 07:38:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ad135963-24cd-4f04-a773-e08b7717705c does not exist
Jan 31 07:38:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d4e5cd93-94b7-49c9-89b7-44b5020059b6 does not exist
Jan 31 07:38:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:38:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:38:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:38:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:38:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:38:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:38:10 compute-0 sudo[269649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:10 compute-0 sudo[269649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:10 compute-0 sudo[269649]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:10 compute-0 sudo[269674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:38:10 compute-0 sudo[269674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:10 compute-0 sudo[269674]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:10 compute-0 sudo[269699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:10 compute-0 sudo[269699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:10 compute-0 sudo[269699]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:10 compute-0 sudo[269724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:38:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 259 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 290 op/s
Jan 31 07:38:10 compute-0 sudo[269724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:10 compute-0 podman[269790]: 2026-01-31 07:38:10.922223825 +0000 UTC m=+0.045925072 container create 17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:38:10 compute-0 systemd[1]: Started libpod-conmon-17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae.scope.
Jan 31 07:38:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:10 compute-0 podman[269790]: 2026-01-31 07:38:10.901816457 +0000 UTC m=+0.025517794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:38:11 compute-0 podman[269790]: 2026-01-31 07:38:11.006354957 +0000 UTC m=+0.130056244 container init 17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:38:11 compute-0 podman[269790]: 2026-01-31 07:38:11.015809657 +0000 UTC m=+0.139510934 container start 17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:38:11 compute-0 recursing_greider[269807]: 167 167
Jan 31 07:38:11 compute-0 systemd[1]: libpod-17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae.scope: Deactivated successfully.
Jan 31 07:38:11 compute-0 conmon[269807]: conmon 17edfb071ba593f730b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae.scope/container/memory.events
Jan 31 07:38:11 compute-0 podman[269790]: 2026-01-31 07:38:11.020749378 +0000 UTC m=+0.144450725 container attach 17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:38:11 compute-0 podman[269790]: 2026-01-31 07:38:11.021724272 +0000 UTC m=+0.145425549 container died 17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:38:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-db7c409fd0f756b226661e064770d3928ba14e7e099037183bb718115930e0c1-merged.mount: Deactivated successfully.
Jan 31 07:38:11 compute-0 podman[269790]: 2026-01-31 07:38:11.064150757 +0000 UTC m=+0.187852044 container remove 17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:38:11 compute-0 systemd[1]: libpod-conmon-17edfb071ba593f730b4c931adeb62d8971e4e72e542952c6b39807720d8c9ae.scope: Deactivated successfully.
Jan 31 07:38:11 compute-0 nova_compute[247704]: 2026-01-31 07:38:11.110 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:11.147 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:11.148 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:11.150 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:11 compute-0 podman[269830]: 2026-01-31 07:38:11.227271446 +0000 UTC m=+0.060222650 container create f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:38:11 compute-0 systemd[1]: Started libpod-conmon-f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a.scope.
Jan 31 07:38:11 compute-0 podman[269830]: 2026-01-31 07:38:11.202707988 +0000 UTC m=+0.035659252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:38:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b061cc998798077cc2d8b2988130185694048bce6ea4eacab0370208f56e7d73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b061cc998798077cc2d8b2988130185694048bce6ea4eacab0370208f56e7d73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b061cc998798077cc2d8b2988130185694048bce6ea4eacab0370208f56e7d73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b061cc998798077cc2d8b2988130185694048bce6ea4eacab0370208f56e7d73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b061cc998798077cc2d8b2988130185694048bce6ea4eacab0370208f56e7d73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:11 compute-0 podman[269830]: 2026-01-31 07:38:11.336370078 +0000 UTC m=+0.169321282 container init f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_perlman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:38:11 compute-0 podman[269830]: 2026-01-31 07:38:11.342218802 +0000 UTC m=+0.175170006 container start f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_perlman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:38:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:38:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:38:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:38:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:38:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:38:11 compute-0 ceph-mon[74496]: pgmap v1133: 305 pgs: 305 active+clean; 259 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 290 op/s
Jan 31 07:38:11 compute-0 podman[269830]: 2026-01-31 07:38:11.346533737 +0000 UTC m=+0.179485001 container attach f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:38:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:11.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:12 compute-0 elegant_perlman[269848]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:38:12 compute-0 elegant_perlman[269848]: --> relative data size: 1.0
Jan 31 07:38:12 compute-0 elegant_perlman[269848]: --> All data devices are unavailable
Jan 31 07:38:12 compute-0 systemd[1]: libpod-f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a.scope: Deactivated successfully.
Jan 31 07:38:12 compute-0 podman[269830]: 2026-01-31 07:38:12.087812672 +0000 UTC m=+0.920763896 container died f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:38:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b061cc998798077cc2d8b2988130185694048bce6ea4eacab0370208f56e7d73-merged.mount: Deactivated successfully.
Jan 31 07:38:12 compute-0 podman[269830]: 2026-01-31 07:38:12.155555714 +0000 UTC m=+0.988506918 container remove f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:38:12 compute-0 systemd[1]: libpod-conmon-f11fa85828846f3ffcf96f7e20d1ba2314a874d564d11fe6d9655d9547dab18a.scope: Deactivated successfully.
Jan 31 07:38:12 compute-0 sudo[269724]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:38:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:12.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:38:12 compute-0 sudo[269877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:12 compute-0 sudo[269877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:12 compute-0 sudo[269877]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:12 compute-0 sudo[269902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:38:12 compute-0 sudo[269902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:12 compute-0 sudo[269902]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:12 compute-0 sudo[269927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:12 compute-0 sudo[269927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:12 compute-0 sudo[269927]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:12 compute-0 sudo[269952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:38:12 compute-0 sudo[269952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 277 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.4 MiB/s wr, 260 op/s
Jan 31 07:38:12 compute-0 podman[270018]: 2026-01-31 07:38:12.860192656 +0000 UTC m=+0.050695548 container create 5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:38:12 compute-0 systemd[1]: Started libpod-conmon-5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73.scope.
Jan 31 07:38:12 compute-0 podman[270018]: 2026-01-31 07:38:12.835123665 +0000 UTC m=+0.025626597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:38:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:12 compute-0 podman[270018]: 2026-01-31 07:38:12.948763637 +0000 UTC m=+0.139266649 container init 5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:38:12 compute-0 podman[270018]: 2026-01-31 07:38:12.959010037 +0000 UTC m=+0.149512889 container start 5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:38:12 compute-0 podman[270018]: 2026-01-31 07:38:12.9632319 +0000 UTC m=+0.153734852 container attach 5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:38:12 compute-0 systemd[1]: libpod-5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73.scope: Deactivated successfully.
Jan 31 07:38:12 compute-0 lucid_euler[270035]: 167 167
Jan 31 07:38:12 compute-0 podman[270018]: 2026-01-31 07:38:12.966279154 +0000 UTC m=+0.156782006 container died 5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:38:12 compute-0 conmon[270035]: conmon 5e3c7fa13c22a2293e15 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73.scope/container/memory.events
Jan 31 07:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-87b3b704bc61152c454305a76392eb37d09702beba588e2f65cd9baa03f54108-merged.mount: Deactivated successfully.
Jan 31 07:38:13 compute-0 podman[270018]: 2026-01-31 07:38:13.012998904 +0000 UTC m=+0.203501776 container remove 5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:38:13 compute-0 systemd[1]: libpod-conmon-5e3c7fa13c22a2293e15525f72332c0fa8b150e1b783e49e72339356576c3f73.scope: Deactivated successfully.
Jan 31 07:38:13 compute-0 podman[270038]: 2026-01-31 07:38:13.037978614 +0000 UTC m=+0.086158744 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 07:38:13 compute-0 podman[270079]: 2026-01-31 07:38:13.187139253 +0000 UTC m=+0.052825351 container create 61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:38:13 compute-0 systemd[1]: Started libpod-conmon-61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f.scope.
Jan 31 07:38:13 compute-0 podman[270079]: 2026-01-31 07:38:13.165121895 +0000 UTC m=+0.030808003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:38:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9702fc245c0d425445ccb8d5ed7da2c9aa45124e36ac8252ddab96ede0d40757/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9702fc245c0d425445ccb8d5ed7da2c9aa45124e36ac8252ddab96ede0d40757/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9702fc245c0d425445ccb8d5ed7da2c9aa45124e36ac8252ddab96ede0d40757/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9702fc245c0d425445ccb8d5ed7da2c9aa45124e36ac8252ddab96ede0d40757/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:13 compute-0 podman[270079]: 2026-01-31 07:38:13.28906797 +0000 UTC m=+0.154754158 container init 61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:38:13 compute-0 podman[270079]: 2026-01-31 07:38:13.296139442 +0000 UTC m=+0.161825530 container start 61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:38:13 compute-0 podman[270079]: 2026-01-31 07:38:13.299812302 +0000 UTC m=+0.165498430 container attach 61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:38:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:13 compute-0 ceph-mon[74496]: pgmap v1134: 305 pgs: 305 active+clean; 277 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.4 MiB/s wr, 260 op/s
Jan 31 07:38:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:13.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]: {
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:     "0": [
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:         {
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "devices": [
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "/dev/loop3"
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             ],
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "lv_name": "ceph_lv0",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "lv_size": "7511998464",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "name": "ceph_lv0",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "tags": {
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.cluster_name": "ceph",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.crush_device_class": "",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.encrypted": "0",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.osd_id": "0",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.type": "block",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:                 "ceph.vdo": "0"
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             },
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "type": "block",
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:             "vg_name": "ceph_vg0"
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:         }
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]:     ]
Jan 31 07:38:14 compute-0 hungry_goldwasser[270096]: }
Jan 31 07:38:14 compute-0 systemd[1]: libpod-61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f.scope: Deactivated successfully.
Jan 31 07:38:14 compute-0 podman[270079]: 2026-01-31 07:38:14.139923409 +0000 UTC m=+1.005609497 container died 61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9702fc245c0d425445ccb8d5ed7da2c9aa45124e36ac8252ddab96ede0d40757-merged.mount: Deactivated successfully.
Jan 31 07:38:14 compute-0 podman[270079]: 2026-01-31 07:38:14.207377584 +0000 UTC m=+1.073063672 container remove 61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:38:14 compute-0 systemd[1]: libpod-conmon-61277442da015db56f45cb1eaaf2c81b14de3aef122153f36a581057117d019f.scope: Deactivated successfully.
Jan 31 07:38:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:14.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:14 compute-0 sudo[269952]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:14 compute-0 sudo[270120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:14 compute-0 sudo[270120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:14 compute-0 sudo[270120]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:14 compute-0 sudo[270145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:38:14 compute-0 sudo[270145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:14 compute-0 sudo[270145]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:14 compute-0 sudo[270170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:14 compute-0 sudo[270170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:14 compute-0 sudo[270170]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:14 compute-0 sudo[270195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:38:14 compute-0 sudo[270195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:14 compute-0 nova_compute[247704]: 2026-01-31 07:38:14.554 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 291 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.9 MiB/s wr, 267 op/s
Jan 31 07:38:14 compute-0 podman[270263]: 2026-01-31 07:38:14.86587718 +0000 UTC m=+0.043228596 container create 5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:38:14 compute-0 systemd[1]: Started libpod-conmon-5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180.scope.
Jan 31 07:38:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:14 compute-0 podman[270263]: 2026-01-31 07:38:14.846934258 +0000 UTC m=+0.024285714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:38:14 compute-0 podman[270263]: 2026-01-31 07:38:14.948001624 +0000 UTC m=+0.125353130 container init 5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:38:14 compute-0 podman[270263]: 2026-01-31 07:38:14.953332404 +0000 UTC m=+0.130683820 container start 5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heyrovsky, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:38:14 compute-0 podman[270263]: 2026-01-31 07:38:14.957285929 +0000 UTC m=+0.134637415 container attach 5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heyrovsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:38:14 compute-0 tender_heyrovsky[270280]: 167 167
Jan 31 07:38:14 compute-0 systemd[1]: libpod-5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180.scope: Deactivated successfully.
Jan 31 07:38:14 compute-0 podman[270263]: 2026-01-31 07:38:14.960781835 +0000 UTC m=+0.138133281 container died 5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heyrovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ce932f3dee0b8bbb7246f051765506103be102ab0b9d6acc25ddee6a606f447-merged.mount: Deactivated successfully.
Jan 31 07:38:15 compute-0 podman[270263]: 2026-01-31 07:38:15.0040209 +0000 UTC m=+0.181372356 container remove 5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:38:15 compute-0 systemd[1]: libpod-conmon-5ef82d2d362aeffe308f1f9356464091b7f5afd67d39a5dee59aa003d7cf3180.scope: Deactivated successfully.
Jan 31 07:38:15 compute-0 podman[270303]: 2026-01-31 07:38:15.188353127 +0000 UTC m=+0.049396656 container create dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:38:15 compute-0 systemd[1]: Started libpod-conmon-dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac.scope.
Jan 31 07:38:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db5df4faeadc2b60cddd230c31a9b8ddf90ed2ee7b4da5c37b06014d58f2259/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db5df4faeadc2b60cddd230c31a9b8ddf90ed2ee7b4da5c37b06014d58f2259/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db5df4faeadc2b60cddd230c31a9b8ddf90ed2ee7b4da5c37b06014d58f2259/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db5df4faeadc2b60cddd230c31a9b8ddf90ed2ee7b4da5c37b06014d58f2259/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:15 compute-0 podman[270303]: 2026-01-31 07:38:15.172359067 +0000 UTC m=+0.033402616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:38:15 compute-0 podman[270303]: 2026-01-31 07:38:15.279532492 +0000 UTC m=+0.140576051 container init dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:38:15 compute-0 podman[270303]: 2026-01-31 07:38:15.286257166 +0000 UTC m=+0.147300685 container start dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:38:15 compute-0 podman[270303]: 2026-01-31 07:38:15.289555637 +0000 UTC m=+0.150599166 container attach dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:38:15 compute-0 nova_compute[247704]: 2026-01-31 07:38:15.505 247708 DEBUG nova.compute.manager [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:15 compute-0 nova_compute[247704]: 2026-01-31 07:38:15.594 247708 INFO nova.compute.manager [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] instance snapshotting
Jan 31 07:38:15 compute-0 nova_compute[247704]: 2026-01-31 07:38:15.596 247708 WARNING nova.compute.manager [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] trying to snapshot a non-running instance: (state: 3 expected: 1)
Jan 31 07:38:15 compute-0 ceph-mon[74496]: pgmap v1135: 305 pgs: 305 active+clean; 291 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.9 MiB/s wr, 267 op/s
Jan 31 07:38:15 compute-0 nova_compute[247704]: 2026-01-31 07:38:15.879 247708 INFO nova.virt.libvirt.driver [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Beginning live snapshot process
Jan 31 07:38:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:15.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:16 compute-0 nova_compute[247704]: 2026-01-31 07:38:16.025 247708 DEBUG nova.virt.libvirt.imagebackend [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:38:16 compute-0 nova_compute[247704]: 2026-01-31 07:38:16.112 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:16 compute-0 agitated_tharp[270320]: {
Jan 31 07:38:16 compute-0 agitated_tharp[270320]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:38:16 compute-0 agitated_tharp[270320]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:38:16 compute-0 agitated_tharp[270320]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:38:16 compute-0 agitated_tharp[270320]:         "osd_id": 0,
Jan 31 07:38:16 compute-0 agitated_tharp[270320]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:38:16 compute-0 agitated_tharp[270320]:         "type": "bluestore"
Jan 31 07:38:16 compute-0 agitated_tharp[270320]:     }
Jan 31 07:38:16 compute-0 agitated_tharp[270320]: }
Jan 31 07:38:16 compute-0 systemd[1]: libpod-dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac.scope: Deactivated successfully.
Jan 31 07:38:16 compute-0 podman[270303]: 2026-01-31 07:38:16.159634574 +0000 UTC m=+1.020678133 container died dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db5df4faeadc2b60cddd230c31a9b8ddf90ed2ee7b4da5c37b06014d58f2259-merged.mount: Deactivated successfully.
Jan 31 07:38:16 compute-0 podman[270303]: 2026-01-31 07:38:16.213467108 +0000 UTC m=+1.074510677 container remove dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:38:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:16 compute-0 systemd[1]: libpod-conmon-dd5f8dcb658451b64f44c35a9ad56adc5e2c1e1c40032e03bea66f3c4c15e4ac.scope: Deactivated successfully.
Jan 31 07:38:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:16.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:16 compute-0 sudo[270195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:38:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:38:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3f94fda0-2d22-41b2-861a-7e2f7713af1f does not exist
Jan 31 07:38:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5d70f141-ae35-4d8e-bc07-a85aebc992e1 does not exist
Jan 31 07:38:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d51b134c-abc1-4adb-b8c3-20a8b54c0bee does not exist
Jan 31 07:38:16 compute-0 nova_compute[247704]: 2026-01-31 07:38:16.282 247708 DEBUG nova.storage.rbd_utils [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(5a20416861a04f7c96e8e785c97da4db) on rbd image(c9aecf4d-ed14-40d4-9751-902d85959701_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:38:16 compute-0 sudo[270386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:16 compute-0 sudo[270386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:16 compute-0 sudo[270386]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:16 compute-0 sudo[270429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:38:16 compute-0 sudo[270429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:16 compute-0 sudo[270429]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 293 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 227 op/s
Jan 31 07:38:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3568645214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:38:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4036725515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 31 07:38:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 31 07:38:17 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 31 07:38:17 compute-0 nova_compute[247704]: 2026-01-31 07:38:17.353 247708 DEBUG nova.storage.rbd_utils [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] cloning vms/c9aecf4d-ed14-40d4-9751-902d85959701_disk@5a20416861a04f7c96e8e785c97da4db to images/e939cee4-d39b-49b2-a849-509ee7169ea6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:38:17 compute-0 nova_compute[247704]: 2026-01-31 07:38:17.472 247708 DEBUG nova.storage.rbd_utils [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] flattening images/e939cee4-d39b-49b2-a849-509ee7169ea6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:38:17 compute-0 nova_compute[247704]: 2026-01-31 07:38:17.764 247708 DEBUG nova.storage.rbd_utils [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] removing snapshot(5a20416861a04f7c96e8e785c97da4db) on rbd image(c9aecf4d-ed14-40d4-9751-902d85959701_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:38:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:17.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:18.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 31 07:38:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 31 07:38:18 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 31 07:38:18 compute-0 ceph-mon[74496]: pgmap v1136: 305 pgs: 305 active+clean; 293 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 227 op/s
Jan 31 07:38:18 compute-0 ceph-mon[74496]: osdmap e156: 3 total, 3 up, 3 in
Jan 31 07:38:18 compute-0 nova_compute[247704]: 2026-01-31 07:38:18.566 247708 DEBUG nova.storage.rbd_utils [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(snap) on rbd image(e939cee4-d39b-49b2-a849-509ee7169ea6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:38:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 293 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Jan 31 07:38:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 31 07:38:19 compute-0 nova_compute[247704]: 2026-01-31 07:38:19.557 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:19 compute-0 ceph-mon[74496]: osdmap e157: 3 total, 3 up, 3 in
Jan 31 07:38:19 compute-0 ceph-mon[74496]: pgmap v1139: 305 pgs: 305 active+clean; 293 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Jan 31 07:38:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 31 07:38:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 31 07:38:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:19.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:38:20
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms', 'images']
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:38:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:20.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:38:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 322 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.8 MiB/s wr, 344 op/s
Jan 31 07:38:20 compute-0 ceph-mon[74496]: osdmap e158: 3 total, 3 up, 3 in
Jan 31 07:38:21 compute-0 nova_compute[247704]: 2026-01-31 07:38:21.147 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:21 compute-0 nova_compute[247704]: 2026-01-31 07:38:21.303 247708 INFO nova.virt.libvirt.driver [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Snapshot image upload complete
Jan 31 07:38:21 compute-0 nova_compute[247704]: 2026-01-31 07:38:21.304 247708 INFO nova.compute.manager [None req-c32a440a-446c-402c-b0e6-69517f1fa6f5 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Took 5.71 seconds to snapshot the instance on the hypervisor.
Jan 31 07:38:21 compute-0 ceph-mon[74496]: pgmap v1141: 305 pgs: 305 active+clean; 322 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.8 MiB/s wr, 344 op/s
Jan 31 07:38:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:21.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.005000118s ======
Jan 31 07:38:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:22.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000118s
Jan 31 07:38:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 339 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.6 MiB/s wr, 474 op/s
Jan 31 07:38:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 31 07:38:23 compute-0 ceph-mon[74496]: pgmap v1142: 305 pgs: 305 active+clean; 339 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.6 MiB/s wr, 474 op/s
Jan 31 07:38:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 31 07:38:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 31 07:38:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:23.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.045 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "944d6dce-4c82-4846-a5eb-57f141812e21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.046 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.078 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.182 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.182 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.193 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.193 247708 INFO nova.compute.claims [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:38:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:24.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.349 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.430 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "c9aecf4d-ed14-40d4-9751-902d85959701" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.431 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.431 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.432 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.432 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.435 247708 INFO nova.compute.manager [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Terminating instance
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.437 247708 DEBUG nova.compute.manager [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:38:24 compute-0 kernel: tap3dae7286-84 (unregistering): left promiscuous mode
Jan 31 07:38:24 compute-0 NetworkManager[49108]: <info>  [1769845104.4812] device (tap3dae7286-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:38:24 compute-0 ovn_controller[149457]: 2026-01-31T07:38:24Z|00115|binding|INFO|Releasing lport 3dae7286-8488-4698-a4e7-232e763fe00e from this chassis (sb_readonly=0)
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.493 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 ovn_controller[149457]: 2026-01-31T07:38:24Z|00116|binding|INFO|Setting lport 3dae7286-8488-4698-a4e7-232e763fe00e down in Southbound
Jan 31 07:38:24 compute-0 ovn_controller[149457]: 2026-01-31T07:38:24Z|00117|binding|INFO|Removing iface tap3dae7286-84 ovn-installed in OVS
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.498 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.503 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:0e:a1 10.100.0.14'], port_security=['fa:16:3e:2f:0e:a1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c9aecf4d-ed14-40d4-9751-902d85959701', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3dae7286-8488-4698-a4e7-232e763fe00e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.505 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3dae7286-8488-4698-a4e7-232e763fe00e in datapath cffffabd-62a6-4362-9315-bd726adce623 unbound from our chassis
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.508 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cffffabd-62a6-4362-9315-bd726adce623, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.509 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[162a9159-e929-4ab7-b824-f81ce2980698]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.510 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace which is not needed anymore
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.520 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000023.scope: Deactivated successfully.
Jan 31 07:38:24 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000023.scope: Consumed 4.015s CPU time.
Jan 31 07:38:24 compute-0 systemd-machined[214448]: Machine qemu-17-instance-00000023 terminated.
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.559 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 332 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 3.5 MiB/s wr, 545 op/s
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.656 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.662 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.669 247708 INFO nova.virt.libvirt.driver [-] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Instance destroyed successfully.
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.670 247708 DEBUG nova.objects.instance [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'resources' on Instance uuid c9aecf4d-ed14-40d4-9751-902d85959701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:24 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[269505]: [NOTICE]   (269553) : haproxy version is 2.8.14-c23fe91
Jan 31 07:38:24 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[269505]: [NOTICE]   (269553) : path to executable is /usr/sbin/haproxy
Jan 31 07:38:24 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[269505]: [WARNING]  (269553) : Exiting Master process...
Jan 31 07:38:24 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[269505]: [ALERT]    (269553) : Current worker (269557) exited with code 143 (Terminated)
Jan 31 07:38:24 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[269505]: [WARNING]  (269553) : All workers exited. Exiting... (0)
Jan 31 07:38:24 compute-0 systemd[1]: libpod-2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395.scope: Deactivated successfully.
Jan 31 07:38:24 compute-0 podman[270591]: 2026-01-31 07:38:24.687699742 +0000 UTC m=+0.069245130 container died 2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.689 247708 DEBUG nova.virt.libvirt.vif [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:37:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1578568736',display_name='tempest-ImagesTestJSON-server-1578568736',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1578568736',id=35,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:38:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-bo6pnm9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:38:21Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=c9aecf4d-ed14-40d4-9751-902d85959701,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.690 247708 DEBUG nova.network.os_vif_util [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "3dae7286-8488-4698-a4e7-232e763fe00e", "address": "fa:16:3e:2f:0e:a1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3dae7286-84", "ovs_interfaceid": "3dae7286-8488-4698-a4e7-232e763fe00e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.692 247708 DEBUG nova.network.os_vif_util [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:0e:a1,bridge_name='br-int',has_traffic_filtering=True,id=3dae7286-8488-4698-a4e7-232e763fe00e,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dae7286-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:38:24 compute-0 ceph-mon[74496]: osdmap e159: 3 total, 3 up, 3 in
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.692 247708 DEBUG os_vif [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:0e:a1,bridge_name='br-int',has_traffic_filtering=True,id=3dae7286-8488-4698-a4e7-232e763fe00e,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dae7286-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.695 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.696 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3dae7286-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.699 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.700 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.704 247708 INFO os_vif [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:0e:a1,bridge_name='br-int',has_traffic_filtering=True,id=3dae7286-8488-4698-a4e7-232e763fe00e,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3dae7286-84')
Jan 31 07:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395-userdata-shm.mount: Deactivated successfully.
Jan 31 07:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4770c7e1b4586ed850bdbd241e3d604853ef8c7cf211da0c3441a47f781a9d55-merged.mount: Deactivated successfully.
Jan 31 07:38:24 compute-0 podman[270591]: 2026-01-31 07:38:24.751799676 +0000 UTC m=+0.133345044 container cleanup 2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:38:24 compute-0 systemd[1]: libpod-conmon-2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395.scope: Deactivated successfully.
Jan 31 07:38:24 compute-0 podman[270647]: 2026-01-31 07:38:24.807545667 +0000 UTC m=+0.040174352 container remove 2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.812 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1818890d-79ca-4db3-a38e-6be885ab4b66]: (4, ('Sat Jan 31 07:38:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395)\n2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395\nSat Jan 31 07:38:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395)\n2ecf6846ef85a4cb7fc91f99a70033f37d723abeaf70436e019df217c36d8395\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.814 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6016df7b-9cb0-4b6b-82fe-1215b64d641c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.815 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430571829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 kernel: tapcffffabd-60: left promiscuous mode
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.823 247708 DEBUG nova.compute.manager [req-2296bc73-3a3b-4555-b8e5-06752651b429 req-1f7aec03-f768-41b7-b421-c0e4fb23658f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received event network-vif-unplugged-3dae7286-8488-4698-a4e7-232e763fe00e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.823 247708 DEBUG oslo_concurrency.lockutils [req-2296bc73-3a3b-4555-b8e5-06752651b429 req-1f7aec03-f768-41b7-b421-c0e4fb23658f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.824 247708 DEBUG oslo_concurrency.lockutils [req-2296bc73-3a3b-4555-b8e5-06752651b429 req-1f7aec03-f768-41b7-b421-c0e4fb23658f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.824 247708 DEBUG oslo_concurrency.lockutils [req-2296bc73-3a3b-4555-b8e5-06752651b429 req-1f7aec03-f768-41b7-b421-c0e4fb23658f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.825 247708 DEBUG nova.compute.manager [req-2296bc73-3a3b-4555-b8e5-06752651b429 req-1f7aec03-f768-41b7-b421-c0e4fb23658f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] No waiting events found dispatching network-vif-unplugged-3dae7286-8488-4698-a4e7-232e763fe00e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.825 247708 DEBUG nova.compute.manager [req-2296bc73-3a3b-4555-b8e5-06752651b429 req-1f7aec03-f768-41b7-b421-c0e4fb23658f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received event network-vif-unplugged-3dae7286-8488-4698-a4e7-232e763fe00e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.827 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.829 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdc28e6-f328-46e3-883b-69c8bf0a5fe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.834 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.842 247708 DEBUG nova.compute.provider_tree [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.842 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[24d0d06a-691d-4004-a619-e0f1b72fbaf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.843 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5e3a9f-c426-4044-9f87-20d313296f7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.855 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[71255522-6014-4dd9-b3d5-f9d2a17965c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532842, 'reachable_time': 15230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270667, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:24 compute-0 systemd[1]: run-netns-ovnmeta\x2dcffffabd\x2d62a6\x2d4362\x2d9315\x2dbd726adce623.mount: Deactivated successfully.
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.858 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.858 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[b482f04e-0750-45f8-a6c2-b48d7d292fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.860 247708 DEBUG nova.scheduler.client.report [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.884 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.885 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.943 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.944 247708 DEBUG nova.network.neutron [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.956 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.957 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:24.957 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.965 247708 INFO nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:38:24 compute-0 nova_compute[247704]: 2026-01-31 07:38:24.994 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.101 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.103 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.103 247708 INFO nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Creating image(s)
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.137 247708 DEBUG nova.storage.rbd_utils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 944d6dce-4c82-4846-a5eb-57f141812e21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.168 247708 DEBUG nova.storage.rbd_utils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 944d6dce-4c82-4846-a5eb-57f141812e21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.196 247708 DEBUG nova.storage.rbd_utils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 944d6dce-4c82-4846-a5eb-57f141812e21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.199 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.215 247708 DEBUG nova.policy [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a44db09acbd4aeb990147dc979f0bfd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b0554655ad0a48c8bf0551298dd31919', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.224 247708 INFO nova.virt.libvirt.driver [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Deleting instance files /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701_del
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.225 247708 INFO nova.virt.libvirt.driver [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Deletion of /var/lib/nova/instances/c9aecf4d-ed14-40d4-9751-902d85959701_del complete
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.243 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.245 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.245 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.246 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.278 247708 DEBUG nova.storage.rbd_utils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 944d6dce-4c82-4846-a5eb-57f141812e21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.282 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 944d6dce-4c82-4846-a5eb-57f141812e21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.303 247708 INFO nova.compute.manager [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Took 0.87 seconds to destroy the instance on the hypervisor.
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.304 247708 DEBUG oslo.service.loopingcall [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.304 247708 DEBUG nova.compute.manager [-] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.304 247708 DEBUG nova.network.neutron [-] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.516 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 944d6dce-4c82-4846-a5eb-57f141812e21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.605 247708 DEBUG nova.storage.rbd_utils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] resizing rbd image 944d6dce-4c82-4846-a5eb-57f141812e21_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.722 247708 DEBUG nova.objects.instance [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'migration_context' on Instance uuid 944d6dce-4c82-4846-a5eb-57f141812e21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:25 compute-0 ceph-mon[74496]: pgmap v1144: 305 pgs: 305 active+clean; 332 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 3.5 MiB/s wr, 545 op/s
Jan 31 07:38:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3430571829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.749 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.750 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Ensure instance console log exists: /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.750 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.751 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:25 compute-0 nova_compute[247704]: 2026-01-31 07:38:25.751 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.150 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:26.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 325 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.0 MiB/s wr, 497 op/s
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.822 247708 DEBUG nova.network.neutron [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Successfully created port: edf86251-2fc9-49e5-81f5-bec6102d9c52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.955 247708 DEBUG nova.network.neutron [-] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.962 247708 DEBUG nova.compute.manager [req-9e772a54-5a81-4606-92c2-91e5ccbdbf4a req-6e66aac7-7e83-43cf-a4d3-72c550ad29b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received event network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.962 247708 DEBUG oslo_concurrency.lockutils [req-9e772a54-5a81-4606-92c2-91e5ccbdbf4a req-6e66aac7-7e83-43cf-a4d3-72c550ad29b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.963 247708 DEBUG oslo_concurrency.lockutils [req-9e772a54-5a81-4606-92c2-91e5ccbdbf4a req-6e66aac7-7e83-43cf-a4d3-72c550ad29b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.963 247708 DEBUG oslo_concurrency.lockutils [req-9e772a54-5a81-4606-92c2-91e5ccbdbf4a req-6e66aac7-7e83-43cf-a4d3-72c550ad29b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.963 247708 DEBUG nova.compute.manager [req-9e772a54-5a81-4606-92c2-91e5ccbdbf4a req-6e66aac7-7e83-43cf-a4d3-72c550ad29b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] No waiting events found dispatching network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.964 247708 WARNING nova.compute.manager [req-9e772a54-5a81-4606-92c2-91e5ccbdbf4a req-6e66aac7-7e83-43cf-a4d3-72c550ad29b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received unexpected event network-vif-plugged-3dae7286-8488-4698-a4e7-232e763fe00e for instance with vm_state paused and task_state deleting.
Jan 31 07:38:26 compute-0 nova_compute[247704]: 2026-01-31 07:38:26.988 247708 INFO nova.compute.manager [-] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Took 1.68 seconds to deallocate network for instance.
Jan 31 07:38:27 compute-0 nova_compute[247704]: 2026-01-31 07:38:27.080 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:27 compute-0 nova_compute[247704]: 2026-01-31 07:38:27.080 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:27 compute-0 nova_compute[247704]: 2026-01-31 07:38:27.083 247708 DEBUG nova.compute.manager [req-175056f0-4b30-4857-8041-21824fb35f97 req-47805d3b-1868-4713-b68e-2fe957a10257 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Received event network-vif-deleted-3dae7286-8488-4698-a4e7-232e763fe00e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:27 compute-0 nova_compute[247704]: 2026-01-31 07:38:27.244 247708 DEBUG oslo_concurrency.processutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:38:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091500420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:27 compute-0 nova_compute[247704]: 2026-01-31 07:38:27.678 247708 DEBUG oslo_concurrency.processutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:27 compute-0 nova_compute[247704]: 2026-01-31 07:38:27.684 247708 DEBUG nova.compute.provider_tree [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:38:27 compute-0 nova_compute[247704]: 2026-01-31 07:38:27.724 247708 DEBUG nova.scheduler.client.report [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:38:27 compute-0 ceph-mon[74496]: pgmap v1145: 305 pgs: 305 active+clean; 325 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.0 MiB/s wr, 497 op/s
Jan 31 07:38:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4091500420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:27.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:27 compute-0 podman[270859]: 2026-01-31 07:38:27.942372307 +0000 UTC m=+0.114204618 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:38:27 compute-0 nova_compute[247704]: 2026-01-31 07:38:27.955 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:28 compute-0 nova_compute[247704]: 2026-01-31 07:38:28.086 247708 INFO nova.scheduler.client.report [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Deleted allocations for instance c9aecf4d-ed14-40d4-9751-902d85959701
Jan 31 07:38:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:28.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:28 compute-0 nova_compute[247704]: 2026-01-31 07:38:28.355 247708 DEBUG oslo_concurrency.lockutils [None req-3be47483-96eb-4844-9f09-31ccd8080176 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c9aecf4d-ed14-40d4-9751-902d85959701" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 302 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 426 op/s
Jan 31 07:38:28 compute-0 nova_compute[247704]: 2026-01-31 07:38:28.613 247708 DEBUG nova.network.neutron [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Successfully updated port: edf86251-2fc9-49e5-81f5-bec6102d9c52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:38:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 31 07:38:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 31 07:38:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 31 07:38:28 compute-0 nova_compute[247704]: 2026-01-31 07:38:28.738 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:28 compute-0 nova_compute[247704]: 2026-01-31 07:38:28.738 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquired lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:28 compute-0 nova_compute[247704]: 2026-01-31 07:38:28.739 247708 DEBUG nova.network.neutron [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:38:28 compute-0 nova_compute[247704]: 2026-01-31 07:38:28.818 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:28 compute-0 nova_compute[247704]: 2026-01-31 07:38:28.818 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.006 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.268 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.269 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.278 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.278 247708 INFO nova.compute.claims [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.316 247708 DEBUG nova.compute.manager [req-39b3f390-7cb4-4434-b33b-4d2bb78bcedb req-e0badc27-bdd5-4a54-b69c-11bee1acb210 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received event network-changed-edf86251-2fc9-49e5-81f5-bec6102d9c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.316 247708 DEBUG nova.compute.manager [req-39b3f390-7cb4-4434-b33b-4d2bb78bcedb req-e0badc27-bdd5-4a54-b69c-11bee1acb210 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Refreshing instance network info cache due to event network-changed-edf86251-2fc9-49e5-81f5-bec6102d9c52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.316 247708 DEBUG oslo_concurrency.lockutils [req-39b3f390-7cb4-4434-b33b-4d2bb78bcedb req-e0badc27-bdd5-4a54-b69c-11bee1acb210 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.425 247708 DEBUG nova.network.neutron [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:38:29 compute-0 sudo[270886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:29 compute-0 sudo[270886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:29 compute-0 sudo[270886]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:29 compute-0 sudo[270911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:29 compute-0 sudo[270911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:29 compute-0 sudo[270911]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:29 compute-0 ceph-mon[74496]: pgmap v1146: 305 pgs: 305 active+clean; 302 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 426 op/s
Jan 31 07:38:29 compute-0 ceph-mon[74496]: osdmap e160: 3 total, 3 up, 3 in
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.699 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:29 compute-0 nova_compute[247704]: 2026-01-31 07:38:29.900 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:29.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:38:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2789639205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.330 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.337 247708 DEBUG nova.compute.provider_tree [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.414 247708 DEBUG nova.scheduler.client.report [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.585 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.586 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:38:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 307 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 231 op/s
Jan 31 07:38:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2789639205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.741 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.742 247708 DEBUG nova.network.neutron [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.800 247708 DEBUG nova.network.neutron [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Updating instance_info_cache with network_info: [{"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.809 247708 INFO nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.860 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Releasing lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.861 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Instance network_info: |[{"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.862 247708 DEBUG oslo_concurrency.lockutils [req-39b3f390-7cb4-4434-b33b-4d2bb78bcedb req-e0badc27-bdd5-4a54-b69c-11bee1acb210 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.863 247708 DEBUG nova.network.neutron [req-39b3f390-7cb4-4434-b33b-4d2bb78bcedb req-e0badc27-bdd5-4a54-b69c-11bee1acb210 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Refreshing network info cache for port edf86251-2fc9-49e5-81f5-bec6102d9c52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.867 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Start _get_guest_xml network_info=[{"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.870 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.878 247708 WARNING nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.891 247708 DEBUG nova.virt.libvirt.host [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.892 247708 DEBUG nova.virt.libvirt.host [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.899 247708 DEBUG nova.virt.libvirt.host [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.899 247708 DEBUG nova.virt.libvirt.host [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.901 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.901 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.902 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.902 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.902 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.902 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.902 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.903 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.903 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.903 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.903 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.904 247708 DEBUG nova.virt.hardware [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:38:30 compute-0 nova_compute[247704]: 2026-01-31 07:38:30.907 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.098 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.100 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.101 247708 INFO nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Creating image(s)
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.141 247708 DEBUG nova.storage.rbd_utils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.178 247708 DEBUG nova.storage.rbd_utils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.206 247708 DEBUG nova.storage.rbd_utils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.210 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.237 247708 DEBUG nova.policy [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '533eaca1e9c4430dabe2b0a39039ca65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.239 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.286 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.287 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.287 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.287 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.310 247708 DEBUG nova.storage.rbd_utils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.313 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:38:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2176314634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.346 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.375 247708 DEBUG nova.storage.rbd_utils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 944d6dce-4c82-4846-a5eb-57f141812e21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.379 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.607 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.703 247708 DEBUG nova.storage.rbd_utils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] resizing rbd image ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:38:31 compute-0 ceph-mon[74496]: pgmap v1148: 305 pgs: 305 active+clean; 307 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 231 op/s
Jan 31 07:38:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2176314634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:38:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392069509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.853 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.856 247708 DEBUG nova.virt.libvirt.vif [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:38:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1344131571',display_name='tempest-ServersAdminTestJSON-server-1344131571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1344131571',id=37,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-fcqt5btw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-11566
07975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:25Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=944d6dce-4c82-4846-a5eb-57f141812e21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.856 247708 DEBUG nova.network.os_vif_util [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.858 247708 DEBUG nova.network.os_vif_util [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:e4:9b,bridge_name='br-int',has_traffic_filtering=True,id=edf86251-2fc9-49e5-81f5-bec6102d9c52,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedf86251-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.860 247708 DEBUG nova.objects.instance [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'pci_devices' on Instance uuid 944d6dce-4c82-4846-a5eb-57f141812e21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.867 247708 DEBUG nova.objects.instance [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'migration_context' on Instance uuid ec81b99d-17fb-4dd1-aa76-67b3780fce15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.904 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <uuid>944d6dce-4c82-4846-a5eb-57f141812e21</uuid>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <name>instance-00000025</name>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersAdminTestJSON-server-1344131571</nova:name>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:38:30</nova:creationTime>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <nova:user uuid="8a44db09acbd4aeb990147dc979f0bfd">tempest-ServersAdminTestJSON-1156607975-project-member</nova:user>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <nova:project uuid="b0554655ad0a48c8bf0551298dd31919">tempest-ServersAdminTestJSON-1156607975</nova:project>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <nova:port uuid="edf86251-2fc9-49e5-81f5-bec6102d9c52">
Jan 31 07:38:31 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <system>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <entry name="serial">944d6dce-4c82-4846-a5eb-57f141812e21</entry>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <entry name="uuid">944d6dce-4c82-4846-a5eb-57f141812e21</entry>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </system>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <os>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   </os>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <features>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   </features>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/944d6dce-4c82-4846-a5eb-57f141812e21_disk">
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       </source>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/944d6dce-4c82-4846-a5eb-57f141812e21_disk.config">
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       </source>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:38:31 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:5e:e4:9b"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <target dev="tapedf86251-2f"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21/console.log" append="off"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <video>
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </video>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:38:31 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:38:31 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:38:31 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:38:31 compute-0 nova_compute[247704]: </domain>
Jan 31 07:38:31 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.906 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Preparing to wait for external event network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.907 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.907 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.908 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:31.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.909 247708 DEBUG nova.virt.libvirt.vif [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:38:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1344131571',display_name='tempest-ServersAdminTestJSON-server-1344131571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1344131571',id=37,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-fcqt5btw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTest
JSON-1156607975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:25Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=944d6dce-4c82-4846-a5eb-57f141812e21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.909 247708 DEBUG nova.network.os_vif_util [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.910 247708 DEBUG nova.network.os_vif_util [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:e4:9b,bridge_name='br-int',has_traffic_filtering=True,id=edf86251-2fc9-49e5-81f5-bec6102d9c52,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedf86251-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.910 247708 DEBUG os_vif [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:e4:9b,bridge_name='br-int',has_traffic_filtering=True,id=edf86251-2fc9-49e5-81f5-bec6102d9c52,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedf86251-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.911 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.912 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.912 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.915 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedf86251-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.916 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedf86251-2f, col_values=(('external_ids', {'iface-id': 'edf86251-2fc9-49e5-81f5-bec6102d9c52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:e4:9b', 'vm-uuid': '944d6dce-4c82-4846-a5eb-57f141812e21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.918 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:31 compute-0 NetworkManager[49108]: <info>  [1769845111.9194] manager: (tapedf86251-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.921 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.924 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.925 247708 INFO os_vif [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:e4:9b,bridge_name='br-int',has_traffic_filtering=True,id=edf86251-2fc9-49e5-81f5-bec6102d9c52,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedf86251-2f')
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.957 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.958 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Ensure instance console log exists: /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.958 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.959 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:31 compute-0 nova_compute[247704]: 2026-01-31 07:38:31.959 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.037 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.038 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.038 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] No VIF found with MAC fa:16:3e:5e:e4:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.039 247708 INFO nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Using config drive
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.067 247708 DEBUG nova.storage.rbd_utils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 944d6dce-4c82-4846-a5eb-57f141812e21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:38:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:32.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:38:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 323 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 230 op/s
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.707 247708 DEBUG nova.network.neutron [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Successfully created port: 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:38:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/392069509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.771 247708 INFO nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Creating config drive at /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21/disk.config
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.778 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9nwzsqco execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.862 247708 DEBUG nova.network.neutron [req-39b3f390-7cb4-4434-b33b-4d2bb78bcedb req-e0badc27-bdd5-4a54-b69c-11bee1acb210 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Updated VIF entry in instance network info cache for port edf86251-2fc9-49e5-81f5-bec6102d9c52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.863 247708 DEBUG nova.network.neutron [req-39b3f390-7cb4-4434-b33b-4d2bb78bcedb req-e0badc27-bdd5-4a54-b69c-11bee1acb210 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Updating instance_info_cache with network_info: [{"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.896 247708 DEBUG oslo_concurrency.lockutils [req-39b3f390-7cb4-4434-b33b-4d2bb78bcedb req-e0badc27-bdd5-4a54-b69c-11bee1acb210 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.916 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9nwzsqco" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.950 247708 DEBUG nova.storage.rbd_utils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] rbd image 944d6dce-4c82-4846-a5eb-57f141812e21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:32 compute-0 nova_compute[247704]: 2026-01-31 07:38:32.954 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21/disk.config 944d6dce-4c82-4846-a5eb-57f141812e21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.221 247708 DEBUG oslo_concurrency.processutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21/disk.config 944d6dce-4c82-4846-a5eb-57f141812e21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.222 247708 INFO nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Deleting local config drive /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21/disk.config because it was imported into RBD.
Jan 31 07:38:33 compute-0 kernel: tapedf86251-2f: entered promiscuous mode
Jan 31 07:38:33 compute-0 ovn_controller[149457]: 2026-01-31T07:38:33Z|00118|binding|INFO|Claiming lport edf86251-2fc9-49e5-81f5-bec6102d9c52 for this chassis.
Jan 31 07:38:33 compute-0 ovn_controller[149457]: 2026-01-31T07:38:33Z|00119|binding|INFO|edf86251-2fc9-49e5-81f5-bec6102d9c52: Claiming fa:16:3e:5e:e4:9b 10.100.0.12
Jan 31 07:38:33 compute-0 NetworkManager[49108]: <info>  [1769845113.2697] manager: (tapedf86251-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.268 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.271 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.289 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:e4:9b 10.100.0.12'], port_security=['fa:16:3e:5e:e4:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '944d6dce-4c82-4846-a5eb-57f141812e21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=edf86251-2fc9-49e5-81f5-bec6102d9c52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.290 160028 INFO neutron.agent.ovn.metadata.agent [-] Port edf86251-2fc9-49e5-81f5-bec6102d9c52 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 bound to our chassis
Jan 31 07:38:33 compute-0 systemd-udevd[271262]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.291 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53
Jan 31 07:38:33 compute-0 systemd-machined[214448]: New machine qemu-18-instance-00000025.
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.297 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000025.
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.298 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[88bca48c-55f6-45a6-a2cc-a6d70f96fdd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.299 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c92e27e-f1 in ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.301 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.300 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c92e27e-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.301 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a6a1ce-1f9f-408f-886d-3f0d5cdd7cb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.302 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[78923365-746d-4261-9909-be195b5ce07b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_controller[149457]: 2026-01-31T07:38:33Z|00120|binding|INFO|Setting lport edf86251-2fc9-49e5-81f5-bec6102d9c52 ovn-installed in OVS
Jan 31 07:38:33 compute-0 ovn_controller[149457]: 2026-01-31T07:38:33Z|00121|binding|INFO|Setting lport edf86251-2fc9-49e5-81f5-bec6102d9c52 up in Southbound
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 NetworkManager[49108]: <info>  [1769845113.3065] device (tapedf86251-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:38:33 compute-0 NetworkManager[49108]: <info>  [1769845113.3070] device (tapedf86251-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.311 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[50687dfe-5e23-473d-baa7-f7d0de0d7dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.321 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1aaa8f38-a3f2-4156-916e-3d9b3335d415]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.341 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b26ca645-a44d-4902-978c-49c7955ca692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.346 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[db5c3331-1fae-49ed-aa33-c902dd7f0ed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 NetworkManager[49108]: <info>  [1769845113.3470] manager: (tap8c92e27e-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.371 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[25ffd07f-329b-44ea-bb9c-3b4a6b7f3900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.374 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3067bb30-ad62-4503-855b-40d6b51b9266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 NetworkManager[49108]: <info>  [1769845113.3879] device (tap8c92e27e-f0): carrier: link connected
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.391 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[82c2542e-3132-4aff-805e-01f1dd201f5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.403 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4748553d-0467-4ff4-aa89-9cd40fc73e80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535600, 'reachable_time': 15615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271295, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.413 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d872e4-0149-40a3-8f1a-b562bb52ca35]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:629b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535600, 'tstamp': 535600}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271296, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.426 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2c177b15-c8fd-4408-b68f-57d14f4552ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c92e27e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:62:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535600, 'reachable_time': 15615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271297, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.447 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4476c1c6-4e88-4b91-ab7e-c6da58565fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.491 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[15559bd2-b260-4351-8e11-4a2232a7aedc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.493 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.493 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.494 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c92e27e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.496 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 NetworkManager[49108]: <info>  [1769845113.4974] manager: (tap8c92e27e-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 31 07:38:33 compute-0 kernel: tap8c92e27e-f0: entered promiscuous mode
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.499 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.501 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c92e27e-f0, col_values=(('external_ids', {'iface-id': 'b682c189-93d2-4c14-8b2a-bafbda6df8a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.502 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 ovn_controller[149457]: 2026-01-31T07:38:33Z|00122|binding|INFO|Releasing lport b682c189-93d2-4c14-8b2a-bafbda6df8a4 from this chassis (sb_readonly=0)
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.510 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.511 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.512 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.513 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[276c98cf-5739-45db-8aa2-8215351b5d6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.514 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-8c92e27e-f16c-4df2-a299-60ef2ca44f53
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/8c92e27e-f16c-4df2-a299-60ef2ca44f53.pid.haproxy
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 8c92e27e-f16c-4df2-a299-60ef2ca44f53
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.516 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'env', 'PROCESS_TAG=haproxy-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c92e27e-f16c-4df2-a299-60ef2ca44f53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:38:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 31 07:38:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 31 07:38:33 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.771 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845113.7707546, 944d6dce-4c82-4846-a5eb-57f141812e21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.772 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] VM Started (Lifecycle Event)
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.828 247708 DEBUG nova.compute.manager [req-d0839631-c5ee-4a61-bacb-07fb0a82e966 req-d1ad8ce8-b40c-4b38-bd8d-39f91fede901 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received event network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.828 247708 DEBUG oslo_concurrency.lockutils [req-d0839631-c5ee-4a61-bacb-07fb0a82e966 req-d1ad8ce8-b40c-4b38-bd8d-39f91fede901 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.829 247708 DEBUG oslo_concurrency.lockutils [req-d0839631-c5ee-4a61-bacb-07fb0a82e966 req-d1ad8ce8-b40c-4b38-bd8d-39f91fede901 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.829 247708 DEBUG oslo_concurrency.lockutils [req-d0839631-c5ee-4a61-bacb-07fb0a82e966 req-d1ad8ce8-b40c-4b38-bd8d-39f91fede901 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.829 247708 DEBUG nova.compute.manager [req-d0839631-c5ee-4a61-bacb-07fb0a82e966 req-d1ad8ce8-b40c-4b38-bd8d-39f91fede901 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Processing event network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.830 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.835 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.839 247708 INFO nova.virt.libvirt.driver [-] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Instance spawned successfully.
Jan 31 07:38:33 compute-0 ceph-mon[74496]: pgmap v1149: 305 pgs: 305 active+clean; 323 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 230 op/s
Jan 31 07:38:33 compute-0 ceph-mon[74496]: osdmap e161: 3 total, 3 up, 3 in
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.840 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:38:33 compute-0 podman[271371]: 2026-01-31 07:38:33.880815089 +0000 UTC m=+0.068797989 container create eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 07:38:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:33.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:33 compute-0 systemd[1]: Started libpod-conmon-eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527.scope.
Jan 31 07:38:33 compute-0 podman[271371]: 2026-01-31 07:38:33.839490901 +0000 UTC m=+0.027473881 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.940 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:33 compute-0 nova_compute[247704]: 2026-01-31 07:38:33.947 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:38:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:33.960 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906139511d6ddad5518f488cd515ff4557896db59b44b917e0b49bd767436da7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:33 compute-0 podman[271371]: 2026-01-31 07:38:33.979130138 +0000 UTC m=+0.167113078 container init eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:38:33 compute-0 podman[271371]: 2026-01-31 07:38:33.986225221 +0000 UTC m=+0.174208121 container start eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:38:34 compute-0 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[271386]: [NOTICE]   (271390) : New worker (271392) forked
Jan 31 07:38:34 compute-0 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[271386]: [NOTICE]   (271390) : Loading success.
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.087 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.088 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.089 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.089 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.090 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.091 247708 DEBUG nova.virt.libvirt.driver [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.161 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.161 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845113.7709343, 944d6dce-4c82-4846-a5eb-57f141812e21 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.162 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] VM Paused (Lifecycle Event)
Jan 31 07:38:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.305 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.309 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845113.836013, 944d6dce-4c82-4846-a5eb-57f141812e21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.310 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] VM Resumed (Lifecycle Event)
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.355 247708 INFO nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Took 9.25 seconds to spawn the instance on the hypervisor.
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.355 247708 DEBUG nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.410 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.414 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 352 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 6.4 MiB/s wr, 183 op/s
Jan 31 07:38:34 compute-0 nova_compute[247704]: 2026-01-31 07:38:34.727 247708 INFO nova.compute.manager [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Took 10.59 seconds to build instance.
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008160921796482413 of space, bias 1.0, pg target 2.448276538944724 quantized to 32 (current 32)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:38:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 07:38:35 compute-0 nova_compute[247704]: 2026-01-31 07:38:35.040 247708 DEBUG oslo_concurrency.lockutils [None req-73facdec-f44f-4d6f-8807-e26184a5d534 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:35 compute-0 ceph-mon[74496]: pgmap v1151: 305 pgs: 305 active+clean; 352 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 6.4 MiB/s wr, 183 op/s
Jan 31 07:38:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:35.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:36 compute-0 nova_compute[247704]: 2026-01-31 07:38:36.197 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:36.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 372 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.2 MiB/s wr, 226 op/s
Jan 31 07:38:36 compute-0 nova_compute[247704]: 2026-01-31 07:38:36.918 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.578 247708 DEBUG nova.network.neutron [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Successfully updated port: 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.649 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "refresh_cache-ec81b99d-17fb-4dd1-aa76-67b3780fce15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.649 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquired lock "refresh_cache-ec81b99d-17fb-4dd1-aa76-67b3780fce15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.649 247708 DEBUG nova.network.neutron [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:38:37 compute-0 ceph-mon[74496]: pgmap v1152: 305 pgs: 305 active+clean; 372 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.2 MiB/s wr, 226 op/s
Jan 31 07:38:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:37.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.928 247708 DEBUG nova.compute.manager [req-7f7ef0d0-06ba-4303-b6cc-c74eed45b347 req-583b0a37-5cfd-41ec-935c-57aaa9b437a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received event network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.929 247708 DEBUG oslo_concurrency.lockutils [req-7f7ef0d0-06ba-4303-b6cc-c74eed45b347 req-583b0a37-5cfd-41ec-935c-57aaa9b437a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.929 247708 DEBUG oslo_concurrency.lockutils [req-7f7ef0d0-06ba-4303-b6cc-c74eed45b347 req-583b0a37-5cfd-41ec-935c-57aaa9b437a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.930 247708 DEBUG oslo_concurrency.lockutils [req-7f7ef0d0-06ba-4303-b6cc-c74eed45b347 req-583b0a37-5cfd-41ec-935c-57aaa9b437a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.930 247708 DEBUG nova.compute.manager [req-7f7ef0d0-06ba-4303-b6cc-c74eed45b347 req-583b0a37-5cfd-41ec-935c-57aaa9b437a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] No waiting events found dispatching network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.930 247708 WARNING nova.compute.manager [req-7f7ef0d0-06ba-4303-b6cc-c74eed45b347 req-583b0a37-5cfd-41ec-935c-57aaa9b437a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received unexpected event network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 for instance with vm_state active and task_state None.
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.931 247708 DEBUG nova.compute.manager [req-f8e2adcc-0dca-405f-a315-9c63bba62c27 req-b0bd13d9-a020-43f3-a61d-27de87105d64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received event network-changed-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.932 247708 DEBUG nova.compute.manager [req-f8e2adcc-0dca-405f-a315-9c63bba62c27 req-b0bd13d9-a020-43f3-a61d-27de87105d64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Refreshing instance network info cache due to event network-changed-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:38:37 compute-0 nova_compute[247704]: 2026-01-31 07:38:37.932 247708 DEBUG oslo_concurrency.lockutils [req-f8e2adcc-0dca-405f-a315-9c63bba62c27 req-b0bd13d9-a020-43f3-a61d-27de87105d64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ec81b99d-17fb-4dd1-aa76-67b3780fce15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:38 compute-0 nova_compute[247704]: 2026-01-31 07:38:38.223 247708 DEBUG oslo_concurrency.lockutils [None req-d0552713-d450-4442-b79a-0b7685475626 23ba3308a0674023959c49bf22c45dc5 1507f1405efb469b92c95f52cdf32332 - - default default] Acquiring lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:38 compute-0 nova_compute[247704]: 2026-01-31 07:38:38.224 247708 DEBUG oslo_concurrency.lockutils [None req-d0552713-d450-4442-b79a-0b7685475626 23ba3308a0674023959c49bf22c45dc5 1507f1405efb469b92c95f52cdf32332 - - default default] Acquired lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:38 compute-0 nova_compute[247704]: 2026-01-31 07:38:38.224 247708 DEBUG nova.network.neutron [None req-d0552713-d450-4442-b79a-0b7685475626 23ba3308a0674023959c49bf22c45dc5 1507f1405efb469b92c95f52cdf32332 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:38:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:38.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:38 compute-0 nova_compute[247704]: 2026-01-31 07:38:38.422 247708 DEBUG nova.network.neutron [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:38:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 372 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.8 MiB/s wr, 187 op/s
Jan 31 07:38:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:39 compute-0 nova_compute[247704]: 2026-01-31 07:38:39.669 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845104.6673763, c9aecf4d-ed14-40d4-9751-902d85959701 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:39 compute-0 nova_compute[247704]: 2026-01-31 07:38:39.669 247708 INFO nova.compute.manager [-] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] VM Stopped (Lifecycle Event)
Jan 31 07:38:39 compute-0 nova_compute[247704]: 2026-01-31 07:38:39.718 247708 DEBUG nova.compute.manager [None req-b1c93e0b-de1c-4ea4-ace8-2a1e49c2acf9 - - - - - -] [instance: c9aecf4d-ed14-40d4-9751-902d85959701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:39.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:39 compute-0 ceph-mon[74496]: pgmap v1153: 305 pgs: 305 active+clean; 372 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.8 MiB/s wr, 187 op/s
Jan 31 07:38:39 compute-0 nova_compute[247704]: 2026-01-31 07:38:39.975 247708 DEBUG nova.network.neutron [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Updating instance_info_cache with network_info: [{"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.085 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Releasing lock "refresh_cache-ec81b99d-17fb-4dd1-aa76-67b3780fce15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.086 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Instance network_info: |[{"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.087 247708 DEBUG oslo_concurrency.lockutils [req-f8e2adcc-0dca-405f-a315-9c63bba62c27 req-b0bd13d9-a020-43f3-a61d-27de87105d64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ec81b99d-17fb-4dd1-aa76-67b3780fce15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.087 247708 DEBUG nova.network.neutron [req-f8e2adcc-0dca-405f-a315-9c63bba62c27 req-b0bd13d9-a020-43f3-a61d-27de87105d64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Refreshing network info cache for port 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.092 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Start _get_guest_xml network_info=[{"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.097 247708 WARNING nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.102 247708 DEBUG nova.virt.libvirt.host [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.103 247708 DEBUG nova.virt.libvirt.host [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.108 247708 DEBUG nova.virt.libvirt.host [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.109 247708 DEBUG nova.virt.libvirt.host [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.111 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.112 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.112 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.113 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.113 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.114 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.114 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.114 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.115 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.115 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.116 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.116 247708 DEBUG nova.virt.hardware [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.121 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.317 247708 DEBUG nova.network.neutron [None req-d0552713-d450-4442-b79a-0b7685475626 23ba3308a0674023959c49bf22c45dc5 1507f1405efb469b92c95f52cdf32332 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Updating instance_info_cache with network_info: [{"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.402 247708 DEBUG oslo_concurrency.lockutils [None req-d0552713-d450-4442-b79a-0b7685475626 23ba3308a0674023959c49bf22c45dc5 1507f1405efb469b92c95f52cdf32332 - - default default] Releasing lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.403 247708 DEBUG nova.compute.manager [None req-d0552713-d450-4442-b79a-0b7685475626 23ba3308a0674023959c49bf22c45dc5 1507f1405efb469b92c95f52cdf32332 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.404 247708 DEBUG nova.compute.manager [None req-d0552713-d450-4442-b79a-0b7685475626 23ba3308a0674023959c49bf22c45dc5 1507f1405efb469b92c95f52cdf32332 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] network_info to inject: |[{"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.553 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.586 247708 DEBUG nova.storage.rbd_utils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:40 compute-0 nova_compute[247704]: 2026-01-31 07:38:40.591 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.4 MiB/s wr, 170 op/s
Jan 31 07:38:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:38:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154855559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.017 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.020 247708 DEBUG nova.virt.libvirt.vif [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:38:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-528624837',display_name='tempest-ImagesTestJSON-server-528624837',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-528624837',id=38,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-dk4ltwv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:30Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=ec81b99d-17fb-4dd1-aa76-67b3780fce15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.020 247708 DEBUG nova.network.os_vif_util [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.022 247708 DEBUG nova.network.os_vif_util [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:4f:e1,bridge_name='br-int',has_traffic_filtering=True,id=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7bea49-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.024 247708 DEBUG nova.objects.instance [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec81b99d-17fb-4dd1-aa76-67b3780fce15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3320545784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.064 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <uuid>ec81b99d-17fb-4dd1-aa76-67b3780fce15</uuid>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <name>instance-00000026</name>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <nova:name>tempest-ImagesTestJSON-server-528624837</nova:name>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:38:40</nova:creationTime>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <nova:user uuid="533eaca1e9c4430dabe2b0a39039ca65">tempest-ImagesTestJSON-533495031-project-member</nova:user>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <nova:project uuid="b3e3e6f216d24c1f9f68777cfb63dbf8">tempest-ImagesTestJSON-533495031</nova:project>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <nova:port uuid="6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61">
Jan 31 07:38:41 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <system>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <entry name="serial">ec81b99d-17fb-4dd1-aa76-67b3780fce15</entry>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <entry name="uuid">ec81b99d-17fb-4dd1-aa76-67b3780fce15</entry>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </system>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <os>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   </os>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <features>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   </features>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk">
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       </source>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk.config">
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       </source>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:38:41 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:4c:4f:e1"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <target dev="tap6c7bea49-10"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15/console.log" append="off"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <video>
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </video>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:38:41 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:38:41 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:38:41 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:38:41 compute-0 nova_compute[247704]: </domain>
Jan 31 07:38:41 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.066 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Preparing to wait for external event network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.067 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.067 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.068 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.069 247708 DEBUG nova.virt.libvirt.vif [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:38:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-528624837',display_name='tempest-ImagesTestJSON-server-528624837',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-528624837',id=38,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-dk4ltwv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:38:30Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=ec81b99d-17fb-4dd1-aa76-67b3780fce15,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.069 247708 DEBUG nova.network.os_vif_util [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.070 247708 DEBUG nova.network.os_vif_util [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:4f:e1,bridge_name='br-int',has_traffic_filtering=True,id=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7bea49-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.071 247708 DEBUG os_vif [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:4f:e1,bridge_name='br-int',has_traffic_filtering=True,id=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7bea49-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.072 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.073 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.077 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.078 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c7bea49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.079 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c7bea49-10, col_values=(('external_ids', {'iface-id': '6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:4f:e1', 'vm-uuid': 'ec81b99d-17fb-4dd1-aa76-67b3780fce15'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.121 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:41 compute-0 NetworkManager[49108]: <info>  [1769845121.1223] manager: (tap6c7bea49-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.125 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.129 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.130 247708 INFO os_vif [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:4f:e1,bridge_name='br-int',has_traffic_filtering=True,id=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7bea49-10')
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.200 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.367 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.368 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.368 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No VIF found with MAC fa:16:3e:4c:4f:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.369 247708 INFO nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Using config drive
Jan 31 07:38:41 compute-0 nova_compute[247704]: 2026-01-31 07:38:41.404 247708 DEBUG nova.storage.rbd_utils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:41.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.009 247708 INFO nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Creating config drive at /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15/disk.config
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.015 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxixi8cx7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:42 compute-0 ceph-mon[74496]: pgmap v1154: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.4 MiB/s wr, 170 op/s
Jan 31 07:38:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2154855559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.156 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxixi8cx7" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.199 247708 DEBUG nova.storage.rbd_utils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.205 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15/disk.config ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:42.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.392 247708 DEBUG nova.network.neutron [req-f8e2adcc-0dca-405f-a315-9c63bba62c27 req-b0bd13d9-a020-43f3-a61d-27de87105d64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Updated VIF entry in instance network info cache for port 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.394 247708 DEBUG nova.network.neutron [req-f8e2adcc-0dca-405f-a315-9c63bba62c27 req-b0bd13d9-a020-43f3-a61d-27de87105d64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Updating instance_info_cache with network_info: [{"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.447 247708 DEBUG oslo_concurrency.lockutils [req-f8e2adcc-0dca-405f-a315-9c63bba62c27 req-b0bd13d9-a020-43f3-a61d-27de87105d64 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ec81b99d-17fb-4dd1-aa76-67b3780fce15" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.482 247708 DEBUG oslo_concurrency.processutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15/disk.config ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.484 247708 INFO nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Deleting local config drive /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15/disk.config because it was imported into RBD.
Jan 31 07:38:42 compute-0 kernel: tap6c7bea49-10: entered promiscuous mode
Jan 31 07:38:42 compute-0 NetworkManager[49108]: <info>  [1769845122.5471] manager: (tap6c7bea49-10): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Jan 31 07:38:42 compute-0 ovn_controller[149457]: 2026-01-31T07:38:42Z|00123|binding|INFO|Claiming lport 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 for this chassis.
Jan 31 07:38:42 compute-0 ovn_controller[149457]: 2026-01-31T07:38:42Z|00124|binding|INFO|6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61: Claiming fa:16:3e:4c:4f:e1 10.100.0.4
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.549 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:42 compute-0 ovn_controller[149457]: 2026-01-31T07:38:42Z|00125|binding|INFO|Setting lport 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 ovn-installed in OVS
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.559 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.561 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.565 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:42 compute-0 systemd-udevd[271540]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:38:42 compute-0 systemd-machined[214448]: New machine qemu-19-instance-00000026.
Jan 31 07:38:42 compute-0 ovn_controller[149457]: 2026-01-31T07:38:42Z|00126|binding|INFO|Setting lport 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 up in Southbound
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.593 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:4f:e1 10.100.0.4'], port_security=['fa:16:3e:4c:4f:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ec81b99d-17fb-4dd1-aa76-67b3780fce15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.596 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 in datapath cffffabd-62a6-4362-9315-bd726adce623 bound to our chassis
Jan 31 07:38:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 150 op/s
Jan 31 07:38:42 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000026.
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.601 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:38:42 compute-0 NetworkManager[49108]: <info>  [1769845122.6081] device (tap6c7bea49-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:38:42 compute-0 NetworkManager[49108]: <info>  [1769845122.6093] device (tap6c7bea49-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.615 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2eed372e-bf6a-4fc3-ad7b-cbd36c6f9889]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.616 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcffffabd-61 in ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.618 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcffffabd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.618 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[475f21df-38ea-48de-a98b-7850f3310e22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.619 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[78796556-15ba-4c9c-a0d5-10892269d576]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.632 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[550ae034-7821-414e-a68f-ad4f6e1c16a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.655 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2b88a07e-34a1-4251-8850-4e59334e3790]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.682 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[37063af2-e2d1-4f76-bbd9-aa9fc3419b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.687 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2528e898-39ef-487a-b660-3eca8c2a6cf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 systemd-udevd[271543]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:38:42 compute-0 NetworkManager[49108]: <info>  [1769845122.6884] manager: (tapcffffabd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.710 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[12641a82-5e30-4e50-ae64-c3cf356b5394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.712 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[adee154d-6472-459e-b118-061d99b46e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 NetworkManager[49108]: <info>  [1769845122.7352] device (tapcffffabd-60): carrier: link connected
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.738 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[901dfa5c-f427-475b-8e82-abcd0780a17d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.751 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4338b857-cdbe-4cd4-b521-3230119544e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536535, 'reachable_time': 33911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271574, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.764 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0b809d66-5451-4287-b448-d451c4d4425c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:96c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 536535, 'tstamp': 536535}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271575, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.777 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1265bf28-246b-4c93-8870-370d4093a84d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536535, 'reachable_time': 33911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271576, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.800 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[86dee83d-9d07-49de-85db-b9280cbcf006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.874 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f71fbd-9673-450d-9c4a-a08637a3d8f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.875 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.876 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.877 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcffffabd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:42 compute-0 kernel: tapcffffabd-60: entered promiscuous mode
Jan 31 07:38:42 compute-0 NetworkManager[49108]: <info>  [1769845122.8813] manager: (tapcffffabd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.879 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.882 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcffffabd-60, col_values=(('external_ids', {'iface-id': '549e70cf-ed02-45f9-9021-3a04088f580f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.882 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:42 compute-0 ovn_controller[149457]: 2026-01-31T07:38:42Z|00127|binding|INFO|Releasing lport 549e70cf-ed02-45f9-9021-3a04088f580f from this chassis (sb_readonly=0)
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.883 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:42 compute-0 nova_compute[247704]: 2026-01-31 07:38:42.894 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.896 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.897 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[60904504-e271-47f7-b3aa-b7f2b545044a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.899 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:38:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:38:42.900 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'env', 'PROCESS_TAG=haproxy-cffffabd-62a6-4362-9315-bd726adce623', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cffffabd-62a6-4362-9315-bd726adce623.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:38:43 compute-0 podman[271608]: 2026-01-31 07:38:43.25560152 +0000 UTC m=+0.036744077 container create 677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:38:43 compute-0 systemd[1]: Started libpod-conmon-677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22.scope.
Jan 31 07:38:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2ba8bc7c4a33f693a8492826be85891efadae34e76607b08ecdddc17562926/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:38:43 compute-0 podman[271608]: 2026-01-31 07:38:43.330829666 +0000 UTC m=+0.111972243 container init 677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:38:43 compute-0 podman[271608]: 2026-01-31 07:38:43.234992558 +0000 UTC m=+0.016135155 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.335 247708 DEBUG nova.compute.manager [req-58c5dcb4-44bf-485a-80c9-e27611878c56 req-be54ad83-bec3-43f3-aaf0-ac21530e5ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received event network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.337 247708 DEBUG oslo_concurrency.lockutils [req-58c5dcb4-44bf-485a-80c9-e27611878c56 req-be54ad83-bec3-43f3-aaf0-ac21530e5ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.337 247708 DEBUG oslo_concurrency.lockutils [req-58c5dcb4-44bf-485a-80c9-e27611878c56 req-be54ad83-bec3-43f3-aaf0-ac21530e5ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.338 247708 DEBUG oslo_concurrency.lockutils [req-58c5dcb4-44bf-485a-80c9-e27611878c56 req-be54ad83-bec3-43f3-aaf0-ac21530e5ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.338 247708 DEBUG nova.compute.manager [req-58c5dcb4-44bf-485a-80c9-e27611878c56 req-be54ad83-bec3-43f3-aaf0-ac21530e5ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Processing event network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:38:43 compute-0 podman[271608]: 2026-01-31 07:38:43.339331143 +0000 UTC m=+0.120473710 container start 677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:38:43 compute-0 podman[271621]: 2026-01-31 07:38:43.34615687 +0000 UTC m=+0.057042033 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 07:38:43 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[271630]: [NOTICE]   (271644) : New worker (271646) forked
Jan 31 07:38:43 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[271630]: [NOTICE]   (271644) : Loading success.
Jan 31 07:38:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.713 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845123.7128952, ec81b99d-17fb-4dd1-aa76-67b3780fce15 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.714 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] VM Started (Lifecycle Event)
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.717 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.736 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.741 247708 INFO nova.virt.libvirt.driver [-] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Instance spawned successfully.
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.741 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.868 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.870 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.884 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.885 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.885 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.886 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.887 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:43 compute-0 nova_compute[247704]: 2026-01-31 07:38:43.887 247708 DEBUG nova.virt.libvirt.driver [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:38:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:43.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:44 compute-0 ceph-mon[74496]: pgmap v1155: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 150 op/s
Jan 31 07:38:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:44.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.410 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.412 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845123.713415, ec81b99d-17fb-4dd1-aa76-67b3780fce15 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.413 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] VM Paused (Lifecycle Event)
Jan 31 07:38:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.707 247708 INFO nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Took 13.61 seconds to spawn the instance on the hypervisor.
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.708 247708 DEBUG nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.829 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.834 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845123.7206256, ec81b99d-17fb-4dd1-aa76-67b3780fce15 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.834 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] VM Resumed (Lifecycle Event)
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.987 247708 INFO nova.compute.manager [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Took 15.74 seconds to build instance.
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.990 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:44 compute-0 nova_compute[247704]: 2026-01-31 07:38:44.994 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:38:45 compute-0 nova_compute[247704]: 2026-01-31 07:38:45.148 247708 DEBUG oslo_concurrency.lockutils [None req-d127ec83-9ceb-4a7c-b02a-a51e0aaccf8b 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3533633623' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:38:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3533633623' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:38:45 compute-0 nova_compute[247704]: 2026-01-31 07:38:45.583 247708 DEBUG nova.compute.manager [req-d4fc273f-dda7-43b0-b90f-bcaf3ad1caab req-557c9bc2-737d-4e54-bb45-03f11d9b428b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received event network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:38:45 compute-0 nova_compute[247704]: 2026-01-31 07:38:45.584 247708 DEBUG oslo_concurrency.lockutils [req-d4fc273f-dda7-43b0-b90f-bcaf3ad1caab req-557c9bc2-737d-4e54-bb45-03f11d9b428b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:45 compute-0 nova_compute[247704]: 2026-01-31 07:38:45.584 247708 DEBUG oslo_concurrency.lockutils [req-d4fc273f-dda7-43b0-b90f-bcaf3ad1caab req-557c9bc2-737d-4e54-bb45-03f11d9b428b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:45 compute-0 nova_compute[247704]: 2026-01-31 07:38:45.585 247708 DEBUG oslo_concurrency.lockutils [req-d4fc273f-dda7-43b0-b90f-bcaf3ad1caab req-557c9bc2-737d-4e54-bb45-03f11d9b428b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:45 compute-0 nova_compute[247704]: 2026-01-31 07:38:45.585 247708 DEBUG nova.compute.manager [req-d4fc273f-dda7-43b0-b90f-bcaf3ad1caab req-557c9bc2-737d-4e54-bb45-03f11d9b428b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] No waiting events found dispatching network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:38:45 compute-0 nova_compute[247704]: 2026-01-31 07:38:45.586 247708 WARNING nova.compute.manager [req-d4fc273f-dda7-43b0-b90f-bcaf3ad1caab req-557c9bc2-737d-4e54-bb45-03f11d9b428b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received unexpected event network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 for instance with vm_state active and task_state None.
Jan 31 07:38:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:45.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:46 compute-0 nova_compute[247704]: 2026-01-31 07:38:46.122 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:46 compute-0 nova_compute[247704]: 2026-01-31 07:38:46.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:46 compute-0 ceph-mon[74496]: pgmap v1156: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Jan 31 07:38:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 642 KiB/s wr, 109 op/s
Jan 31 07:38:47 compute-0 nova_compute[247704]: 2026-01-31 07:38:47.312 247708 DEBUG oslo_concurrency.lockutils [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:47 compute-0 nova_compute[247704]: 2026-01-31 07:38:47.313 247708 DEBUG oslo_concurrency.lockutils [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:47 compute-0 nova_compute[247704]: 2026-01-31 07:38:47.313 247708 DEBUG nova.compute.manager [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:38:47 compute-0 nova_compute[247704]: 2026-01-31 07:38:47.318 247708 DEBUG nova.compute.manager [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 31 07:38:47 compute-0 nova_compute[247704]: 2026-01-31 07:38:47.320 247708 DEBUG nova.objects.instance [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'flavor' on Instance uuid ec81b99d-17fb-4dd1-aa76-67b3780fce15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:47 compute-0 nova_compute[247704]: 2026-01-31 07:38:47.362 247708 DEBUG nova.virt.libvirt.driver [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:38:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:47.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:47 compute-0 ovn_controller[149457]: 2026-01-31T07:38:47Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:e4:9b 10.100.0.12
Jan 31 07:38:47 compute-0 ovn_controller[149457]: 2026-01-31T07:38:47Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:e4:9b 10.100.0.12
Jan 31 07:38:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:48.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:48 compute-0 ceph-mon[74496]: pgmap v1157: 305 pgs: 305 active+clean; 372 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 642 KiB/s wr, 109 op/s
Jan 31 07:38:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 374 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 126 KiB/s wr, 93 op/s
Jan 31 07:38:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 31 07:38:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 31 07:38:49 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 31 07:38:49 compute-0 ceph-mon[74496]: pgmap v1158: 305 pgs: 305 active+clean; 374 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 126 KiB/s wr, 93 op/s
Jan 31 07:38:49 compute-0 sudo[271701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:49 compute-0 sudo[271701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:49 compute-0 sudo[271701]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:49 compute-0 sudo[271726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:38:49 compute-0 sudo[271726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:38:49 compute-0 sudo[271726]: pam_unix(sudo:session): session closed for user root
Jan 31 07:38:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:49.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:38:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:50 compute-0 ceph-mon[74496]: osdmap e162: 3 total, 3 up, 3 in
Jan 31 07:38:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 399 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.8 MiB/s wr, 199 op/s
Jan 31 07:38:51 compute-0 nova_compute[247704]: 2026-01-31 07:38:51.157 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:51 compute-0 nova_compute[247704]: 2026-01-31 07:38:51.204 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:51 compute-0 nova_compute[247704]: 2026-01-31 07:38:51.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:51 compute-0 nova_compute[247704]: 2026-01-31 07:38:51.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:38:51 compute-0 nova_compute[247704]: 2026-01-31 07:38:51.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:38:51 compute-0 ceph-mon[74496]: pgmap v1160: 305 pgs: 305 active+clean; 399 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.8 MiB/s wr, 199 op/s
Jan 31 07:38:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:51.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:52 compute-0 nova_compute[247704]: 2026-01-31 07:38:52.170 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:38:52 compute-0 nova_compute[247704]: 2026-01-31 07:38:52.171 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:38:52 compute-0 nova_compute[247704]: 2026-01-31 07:38:52.171 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:38:52 compute-0 nova_compute[247704]: 2026-01-31 07:38:52.171 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 944d6dce-4c82-4846-a5eb-57f141812e21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:38:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 375 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Jan 31 07:38:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3921444299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.693705) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133693802, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 259, "total_data_size": 3642278, "memory_usage": 3706160, "flush_reason": "Manual Compaction"}
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133711492, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3527382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24258, "largest_seqno": 26406, "table_properties": {"data_size": 3517726, "index_size": 6022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20633, "raw_average_key_size": 20, "raw_value_size": 3497941, "raw_average_value_size": 3439, "num_data_blocks": 265, "num_entries": 1017, "num_filter_entries": 1017, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844960, "oldest_key_time": 1769844960, "file_creation_time": 1769845133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17827 microseconds, and 9062 cpu microseconds.
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.711544) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3527382 bytes OK
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.711572) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.731905) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.731964) EVENT_LOG_v1 {"time_micros": 1769845133731952, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.731996) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3633252, prev total WAL file size 3633252, number of live WAL files 2.
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.733176) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3444KB)], [56(8785KB)]
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133733221, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12523364, "oldest_snapshot_seqno": -1}
Jan 31 07:38:53 compute-0 ceph-mon[74496]: pgmap v1161: 305 pgs: 305 active+clean; 375 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5451 keys, 12404056 bytes, temperature: kUnknown
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133873307, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 12404056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12363332, "index_size": 26022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 137147, "raw_average_key_size": 25, "raw_value_size": 12261081, "raw_average_value_size": 2249, "num_data_blocks": 1075, "num_entries": 5451, "num_filter_entries": 5451, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769845133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.874451) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12404056 bytes
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.892667) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.3 rd, 88.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.6 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 5988, records dropped: 537 output_compression: NoCompression
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.892713) EVENT_LOG_v1 {"time_micros": 1769845133892692, "job": 30, "event": "compaction_finished", "compaction_time_micros": 140254, "compaction_time_cpu_micros": 39288, "output_level": 6, "num_output_files": 1, "total_output_size": 12404056, "num_input_records": 5988, "num_output_records": 5451, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133893793, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845133895347, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.732973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.895485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.895492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.895494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.895496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:53 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:38:53.895498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:38:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:53.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:38:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:54.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:38:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 346 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Jan 31 07:38:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1990709756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:55 compute-0 nova_compute[247704]: 2026-01-31 07:38:55.439 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Updating instance_info_cache with network_info: [{"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:38:55 compute-0 nova_compute[247704]: 2026-01-31 07:38:55.596 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-944d6dce-4c82-4846-a5eb-57f141812e21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:38:55 compute-0 nova_compute[247704]: 2026-01-31 07:38:55.596 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:38:55 compute-0 nova_compute[247704]: 2026-01-31 07:38:55.596 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:55.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:56 compute-0 ceph-mon[74496]: pgmap v1162: 305 pgs: 305 active+clean; 346 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Jan 31 07:38:56 compute-0 nova_compute[247704]: 2026-01-31 07:38:56.160 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:56 compute-0 nova_compute[247704]: 2026-01-31 07:38:56.206 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:38:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:56 compute-0 nova_compute[247704]: 2026-01-31 07:38:56.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:56 compute-0 nova_compute[247704]: 2026-01-31 07:38:56.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:56 compute-0 nova_compute[247704]: 2026-01-31 07:38:56.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 359 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.5 MiB/s wr, 204 op/s
Jan 31 07:38:56 compute-0 ovn_controller[149457]: 2026-01-31T07:38:56Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:4f:e1 10.100.0.4
Jan 31 07:38:56 compute-0 ovn_controller[149457]: 2026-01-31T07:38:56Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:4f:e1 10.100.0.4
Jan 31 07:38:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/78430908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3481407038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/880117336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.414 247708 DEBUG nova.virt.libvirt.driver [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.627 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.628 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.628 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.628 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:38:57 compute-0 nova_compute[247704]: 2026-01-31 07:38:57.629 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:57.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:38:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784302991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.096 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:58 compute-0 ceph-mon[74496]: pgmap v1163: 305 pgs: 305 active+clean; 359 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.5 MiB/s wr, 204 op/s
Jan 31 07:38:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4008658442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:38:58 compute-0 podman[271779]: 2026-01-31 07:38:58.226714728 +0000 UTC m=+0.081355615 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.236 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.237 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.240 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.240 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:38:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:38:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:58.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.389 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.390 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4279MB free_disk=20.821880340576172GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.390 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.391 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.544 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 944d6dce-4c82-4846-a5eb-57f141812e21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.545 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance ec81b99d-17fb-4dd1-aa76-67b3780fce15 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.545 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.546 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:38:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 397 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.4 MiB/s wr, 200 op/s
Jan 31 07:38:58 compute-0 nova_compute[247704]: 2026-01-31 07:38:58.615 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:38:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:38:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:38:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833713787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:59 compute-0 nova_compute[247704]: 2026-01-31 07:38:59.083 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:38:59 compute-0 nova_compute[247704]: 2026-01-31 07:38:59.128 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:38:59 compute-0 nova_compute[247704]: 2026-01-31 07:38:59.151 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:38:59 compute-0 nova_compute[247704]: 2026-01-31 07:38:59.187 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:38:59 compute-0 nova_compute[247704]: 2026-01-31 07:38:59.188 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:38:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2784302991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2250944479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/833713787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:38:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:38:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:38:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:59.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:38:59 compute-0 nova_compute[247704]: 2026-01-31 07:38:59.947 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:38:59 compute-0 nova_compute[247704]: 2026-01-31 07:38:59.948 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:38:59 compute-0 nova_compute[247704]: 2026-01-31 07:38:59.985 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.005534) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140005601, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 321, "num_deletes": 251, "total_data_size": 121152, "memory_usage": 127576, "flush_reason": "Manual Compaction"}
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140009028, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 120120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26407, "largest_seqno": 26727, "table_properties": {"data_size": 118076, "index_size": 208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5360, "raw_average_key_size": 18, "raw_value_size": 114012, "raw_average_value_size": 395, "num_data_blocks": 9, "num_entries": 288, "num_filter_entries": 288, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845133, "oldest_key_time": 1769845133, "file_creation_time": 1769845140, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 3562 microseconds, and 1042 cpu microseconds.
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.009106) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 120120 bytes OK
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.009126) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.010284) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.010304) EVENT_LOG_v1 {"time_micros": 1769845140010297, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.010323) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 118882, prev total WAL file size 118882, number of live WAL files 2.
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.010812) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(117KB)], [59(11MB)]
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140010904, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 12524176, "oldest_snapshot_seqno": -1}
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.084 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.085 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.091 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5229 keys, 10608813 bytes, temperature: kUnknown
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140092395, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 10608813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10571183, "index_size": 23440, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 133253, "raw_average_key_size": 25, "raw_value_size": 10474376, "raw_average_value_size": 2003, "num_data_blocks": 960, "num_entries": 5229, "num_filter_entries": 5229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769845140, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.092 247708 INFO nova.compute.claims [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.092744) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10608813 bytes
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.100007) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.5 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(192.6) write-amplify(88.3) OK, records in: 5739, records dropped: 510 output_compression: NoCompression
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.100069) EVENT_LOG_v1 {"time_micros": 1769845140100048, "job": 32, "event": "compaction_finished", "compaction_time_micros": 81586, "compaction_time_cpu_micros": 23845, "output_level": 6, "num_output_files": 1, "total_output_size": 10608813, "num_input_records": 5739, "num_output_records": 5229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140100394, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845140101635, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.010683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.101714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.101723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.101726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.101729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:39:00.101731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:39:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.308 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 448 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 194 op/s
Jan 31 07:39:00 compute-0 ceph-mon[74496]: pgmap v1164: 305 pgs: 305 active+clean; 397 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.4 MiB/s wr, 200 op/s
Jan 31 07:39:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116205950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.727 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.731 247708 DEBUG nova.compute.provider_tree [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.768 247708 DEBUG nova.scheduler.client.report [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.914 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:00 compute-0 nova_compute[247704]: 2026-01-31 07:39:00.915 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.007 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.007 247708 DEBUG nova.network.neutron [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.164 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.187 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.192 247708 DEBUG nova.policy [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7c95bb3f52804685a0ba62164a02b535', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '72635784cc4840bba682e8305945e795', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.202 247708 INFO nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.208 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.303 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.475 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.477 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.478 247708 INFO nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Creating image(s)
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.516 247708 DEBUG nova.storage.rbd_utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] rbd image 30b63014-8760-428b-a66d-201587534734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.564 247708 DEBUG nova.storage.rbd_utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] rbd image 30b63014-8760-428b-a66d-201587534734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.587 247708 DEBUG nova.storage.rbd_utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] rbd image 30b63014-8760-428b-a66d-201587534734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.590 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "b9f2679fd6c29acf70c52ec6988a633671574c3f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.590 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "b9f2679fd6c29acf70c52ec6988a633671574c3f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:01 compute-0 ceph-mon[74496]: pgmap v1165: 305 pgs: 305 active+clean; 448 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 194 op/s
Jan 31 07:39:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2116205950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:01 compute-0 nova_compute[247704]: 2026-01-31 07:39:01.885 247708 DEBUG nova.virt.libvirt.imagebackend [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/6e711352-ba7c-4c7d-858b-ee38dcbc90e8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/6e711352-ba7c-4c7d-858b-ee38dcbc90e8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 07:39:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:39:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:01.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:39:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:39:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:02.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:39:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 461 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 180 op/s
Jan 31 07:39:02 compute-0 nova_compute[247704]: 2026-01-31 07:39:02.837 247708 DEBUG nova.network.neutron [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Successfully created port: a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:02.999 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.069 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f.part --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.070 247708 DEBUG nova.virt.images [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] 6e711352-ba7c-4c7d-858b-ee38dcbc90e8 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.072 247708 DEBUG nova.privsep.utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.073 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f.part /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.290 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f.part /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f.converted" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.295 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:03 compute-0 kernel: tap6c7bea49-10 (unregistering): left promiscuous mode
Jan 31 07:39:03 compute-0 NetworkManager[49108]: <info>  [1769845143.3038] device (tap6c7bea49-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:39:03 compute-0 ovn_controller[149457]: 2026-01-31T07:39:03Z|00128|binding|INFO|Releasing lport 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 from this chassis (sb_readonly=0)
Jan 31 07:39:03 compute-0 ovn_controller[149457]: 2026-01-31T07:39:03Z|00129|binding|INFO|Setting lport 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 down in Southbound
Jan 31 07:39:03 compute-0 ovn_controller[149457]: 2026-01-31T07:39:03Z|00130|binding|INFO|Removing iface tap6c7bea49-10 ovn-installed in OVS
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.320 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.334 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:4f:e1 10.100.0.4'], port_security=['fa:16:3e:4c:4f:e1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ec81b99d-17fb-4dd1-aa76-67b3780fce15', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.337 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.340 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 in datapath cffffabd-62a6-4362-9315-bd726adce623 unbound from our chassis
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.346 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cffffabd-62a6-4362-9315-bd726adce623, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.348 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[739f93a0-cc9a-4e5a-8976-82da9ab72dfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.349 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace which is not needed anymore
Jan 31 07:39:03 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000026.scope: Deactivated successfully.
Jan 31 07:39:03 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000026.scope: Consumed 13.802s CPU time.
Jan 31 07:39:03 compute-0 systemd-machined[214448]: Machine qemu-19-instance-00000026 terminated.
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.385 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f.converted --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.386 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "b9f2679fd6c29acf70c52ec6988a633671574c3f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.427 247708 DEBUG nova.storage.rbd_utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] rbd image 30b63014-8760-428b-a66d-201587534734_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.432 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f 30b63014-8760-428b-a66d-201587534734_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:03 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[271630]: [NOTICE]   (271644) : haproxy version is 2.8.14-c23fe91
Jan 31 07:39:03 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[271630]: [NOTICE]   (271644) : path to executable is /usr/sbin/haproxy
Jan 31 07:39:03 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[271630]: [WARNING]  (271644) : Exiting Master process...
Jan 31 07:39:03 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[271630]: [ALERT]    (271644) : Current worker (271646) exited with code 143 (Terminated)
Jan 31 07:39:03 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[271630]: [WARNING]  (271644) : All workers exited. Exiting... (0)
Jan 31 07:39:03 compute-0 systemd[1]: libpod-677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22.scope: Deactivated successfully.
Jan 31 07:39:03 compute-0 podman[271963]: 2026-01-31 07:39:03.513712278 +0000 UTC m=+0.058743484 container died 677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.567 247708 INFO nova.virt.libvirt.driver [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Instance shutdown successfully after 16 seconds.
Jan 31 07:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22-userdata-shm.mount: Deactivated successfully.
Jan 31 07:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f2ba8bc7c4a33f693a8492826be85891efadae34e76607b08ecdddc17562926-merged.mount: Deactivated successfully.
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.583 247708 INFO nova.virt.libvirt.driver [-] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Instance destroyed successfully.
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.584 247708 DEBUG nova.objects.instance [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'numa_topology' on Instance uuid ec81b99d-17fb-4dd1-aa76-67b3780fce15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:03 compute-0 podman[271963]: 2026-01-31 07:39:03.594233433 +0000 UTC m=+0.139264639 container cleanup 677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:39:03 compute-0 systemd[1]: libpod-conmon-677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22.scope: Deactivated successfully.
Jan 31 07:39:03 compute-0 podman[272017]: 2026-01-31 07:39:03.656684676 +0000 UTC m=+0.044993778 container remove 677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.662 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[30e8179c-eaca-47ea-9aa7-5e8432f06236]: (4, ('Sat Jan 31 07:39:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22)\n677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22\nSat Jan 31 07:39:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22)\n677ec940f24033759ae12f234037d2e8ca9cd3abd5410ec8092f3bf43bbf5f22\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.663 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5e16501f-949f-497f-ad5d-42916cb7d509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.664 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:03 compute-0 kernel: tapcffffabd-60: left promiscuous mode
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.667 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.684 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.687 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7fcdb5-72d1-4e8f-bc75-f1626ea7b56b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.700 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bab22e6b-369f-4ee1-bab9-163557722df0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.701 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[412e3106-84dc-405e-b08a-04313a58f14e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.719 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8cde4e6d-e5f8-46d3-8987-2acb8ec2a75a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536529, 'reachable_time': 42985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272035, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.722 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:39:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:03.723 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[13e0fa85-7582-49a0-bf94-a5d11b87f03a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:03 compute-0 systemd[1]: run-netns-ovnmeta\x2dcffffabd\x2d62a6\x2d4362\x2d9315\x2dbd726adce623.mount: Deactivated successfully.
Jan 31 07:39:03 compute-0 ceph-mon[74496]: pgmap v1166: 305 pgs: 305 active+clean; 461 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 180 op/s
Jan 31 07:39:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2014041756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1255292434' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:03 compute-0 nova_compute[247704]: 2026-01-31 07:39:03.803 247708 DEBUG nova.compute.manager [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:03.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.183 247708 DEBUG nova.compute.manager [req-dfeb2589-4395-44f7-93e1-93936ca45a41 req-61d6a4d4-bf3e-433b-a869-cf158af593b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received event network-vif-unplugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.184 247708 DEBUG oslo_concurrency.lockutils [req-dfeb2589-4395-44f7-93e1-93936ca45a41 req-61d6a4d4-bf3e-433b-a869-cf158af593b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.184 247708 DEBUG oslo_concurrency.lockutils [req-dfeb2589-4395-44f7-93e1-93936ca45a41 req-61d6a4d4-bf3e-433b-a869-cf158af593b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.184 247708 DEBUG oslo_concurrency.lockutils [req-dfeb2589-4395-44f7-93e1-93936ca45a41 req-61d6a4d4-bf3e-433b-a869-cf158af593b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.184 247708 DEBUG nova.compute.manager [req-dfeb2589-4395-44f7-93e1-93936ca45a41 req-61d6a4d4-bf3e-433b-a869-cf158af593b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] No waiting events found dispatching network-vif-unplugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.185 247708 WARNING nova.compute.manager [req-dfeb2589-4395-44f7-93e1-93936ca45a41 req-61d6a4d4-bf3e-433b-a869-cf158af593b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received unexpected event network-vif-unplugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 for instance with vm_state active and task_state powering-off.
Jan 31 07:39:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:04.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.429 247708 DEBUG oslo_concurrency.lockutils [None req-5db2469e-cc85-49f7-a91c-3bfcb32e31aa 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 17.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 461 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.2 MiB/s wr, 170 op/s
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.677 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f 30b63014-8760-428b-a66d-201587534734_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:04 compute-0 nova_compute[247704]: 2026-01-31 07:39:04.791 247708 DEBUG nova.storage.rbd_utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] resizing rbd image 30b63014-8760-428b-a66d-201587534734_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.106 247708 DEBUG nova.network.neutron [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Successfully updated port: a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.135 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.136 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquired lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.136 247708 DEBUG nova.network.neutron [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.303 247708 DEBUG nova.network.neutron [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.530 247708 DEBUG nova.objects.instance [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lazy-loading 'migration_context' on Instance uuid 30b63014-8760-428b-a66d-201587534734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.547 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.547 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Ensure instance console log exists: /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.548 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.548 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:05 compute-0 nova_compute[247704]: 2026-01-31 07:39:05.549 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:05 compute-0 ceph-mon[74496]: pgmap v1167: 305 pgs: 305 active+clean; 461 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.2 MiB/s wr, 170 op/s
Jan 31 07:39:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:39:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:05.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:39:06 compute-0 nova_compute[247704]: 2026-01-31 07:39:06.166 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:06 compute-0 nova_compute[247704]: 2026-01-31 07:39:06.211 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:06.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 496 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 229 op/s
Jan 31 07:39:07 compute-0 ceph-mon[74496]: pgmap v1168: 305 pgs: 305 active+clean; 496 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.3 MiB/s wr, 229 op/s
Jan 31 07:39:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:07.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.020 247708 DEBUG nova.compute.manager [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received event network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.020 247708 DEBUG oslo_concurrency.lockutils [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.021 247708 DEBUG oslo_concurrency.lockutils [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.021 247708 DEBUG oslo_concurrency.lockutils [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.022 247708 DEBUG nova.compute.manager [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] No waiting events found dispatching network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.022 247708 WARNING nova.compute.manager [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received unexpected event network-vif-plugged-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 for instance with vm_state stopped and task_state image_snapshot_pending.
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.023 247708 DEBUG nova.compute.manager [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received event network-changed-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.023 247708 DEBUG nova.compute.manager [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Refreshing instance network info cache due to event network-changed-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.023 247708 DEBUG oslo_concurrency.lockutils [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.282 247708 DEBUG nova.network.neutron [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Updating instance_info_cache with network_info: [{"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:08.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.429 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Releasing lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.430 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Instance network_info: |[{"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.431 247708 DEBUG oslo_concurrency.lockutils [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.432 247708 DEBUG nova.network.neutron [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Refreshing network info cache for port a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.437 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Start _get_guest_xml network_info=[{"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'scsi', 'cdrom_bus': 'scsi', 'mapping': {'root': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'scsi', 'dev': 'sdb', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:38:46Z,direct_url=<?>,disk_format='qcow2',id=6e711352-ba7c-4c7d-858b-ee38dcbc90e8,min_disk=0,min_ram=0,name='',owner='a066230603dc4a20a9c1449125db0ac6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:38:50Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/sda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'scsi', 'boot_index': 0, 'device_name': '/dev/sda', 'encrypted': False, 'image_id': '6e711352-ba7c-4c7d-858b-ee38dcbc90e8'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.443 247708 WARNING nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.452 247708 DEBUG nova.virt.libvirt.host [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.453 247708 DEBUG nova.virt.libvirt.host [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.458 247708 DEBUG nova.virt.libvirt.host [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.459 247708 DEBUG nova.virt.libvirt.host [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.461 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.462 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:38:46Z,direct_url=<?>,disk_format='qcow2',id=6e711352-ba7c-4c7d-858b-ee38dcbc90e8,min_disk=0,min_ram=0,name='',owner='a066230603dc4a20a9c1449125db0ac6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:38:50Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.462 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.463 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.463 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.463 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.464 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.464 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.465 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.465 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.465 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.466 247708 DEBUG nova.virt.hardware [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.470 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.532 247708 DEBUG nova.compute.manager [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 518 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 223 op/s
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.621 247708 INFO nova.compute.manager [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] instance snapshotting
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.622 247708 WARNING nova.compute.manager [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] trying to snapshot a non-running instance: (state: 4 expected: 1)
Jan 31 07:39:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.861 247708 INFO nova.virt.libvirt.driver [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Beginning cold snapshot process
Jan 31 07:39:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:39:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330843390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.918 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.955 247708 DEBUG nova.storage.rbd_utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] rbd image 30b63014-8760-428b-a66d-201587534734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:08 compute-0 nova_compute[247704]: 2026-01-31 07:39:08.960 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3330843390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.281 247708 DEBUG nova.virt.libvirt.imagebackend [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:39:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:39:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640104396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.367 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.369 247708 DEBUG nova.virt.libvirt.vif [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-566270408',display_name='tempest-AttachSCSIVolumeTestJSON-server-566270408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-566270408',id=40,image_ref='6e711352-ba7c-4c7d-858b-ee38dcbc90e8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ8vBVWoL3Kx71hGbZmgL+RMyK5pg4RSZWIQfJEgplPiT0Fp1pGPD1OLdKdjcCnYq1EimTVe6rq8ALChl/h7Yg9BhByf9wCDvRf6vvgfRnT8vgLocvPXjafhyrcrgxpSeA==',key_name='tempest-keypair-2023988493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72635784cc4840bba682e8305945e795',ramdisk_id='',reservation_id='r-0g6dsac9',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6e711352-ba7c-4c7d-858b-ee38dcbc90e8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1695700312',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1695700312-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:39:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c95bb3f52804685a0ba62164a02b535',uuid=30b63014-8760-428b-a66d-201587534734,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.369 247708 DEBUG nova.network.os_vif_util [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Converting VIF {"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.370 247708 DEBUG nova.network.os_vif_util [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:a3:77,bridge_name='br-int',has_traffic_filtering=True,id=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7,network=Network(aa5ee378-6843-499e-8557-33660028a787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9ae1f79-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.372 247708 DEBUG nova.objects.instance [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30b63014-8760-428b-a66d-201587534734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.458 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <uuid>30b63014-8760-428b-a66d-201587534734</uuid>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <name>instance-00000028</name>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <nova:name>tempest-AttachSCSIVolumeTestJSON-server-566270408</nova:name>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:39:08</nova:creationTime>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <nova:user uuid="7c95bb3f52804685a0ba62164a02b535">tempest-AttachSCSIVolumeTestJSON-1695700312-project-member</nova:user>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <nova:project uuid="72635784cc4840bba682e8305945e795">tempest-AttachSCSIVolumeTestJSON-1695700312</nova:project>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="6e711352-ba7c-4c7d-858b-ee38dcbc90e8"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <nova:port uuid="a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7">
Jan 31 07:39:09 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <system>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <entry name="serial">30b63014-8760-428b-a66d-201587534734</entry>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <entry name="uuid">30b63014-8760-428b-a66d-201587534734</entry>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </system>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <os>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   </os>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <features>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   </features>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/30b63014-8760-428b-a66d-201587534734_disk">
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       </source>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <target dev="sda" bus="scsi"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <address type="drive" controller="0" unit="0"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/30b63014-8760-428b-a66d-201587534734_disk.config">
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       </source>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:39:09 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <target dev="sdb" bus="scsi"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <address type="drive" controller="0" unit="1"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="scsi" index="0" model="virtio-scsi"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:7d:a3:77"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <target dev="tapa9ae1f79-8a"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734/console.log" append="off"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <video>
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </video>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:39:09 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:39:09 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:39:09 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:39:09 compute-0 nova_compute[247704]: </domain>
Jan 31 07:39:09 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.458 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Preparing to wait for external event network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.458 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.459 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.459 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.459 247708 DEBUG nova.virt.libvirt.vif [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-566270408',display_name='tempest-AttachSCSIVolumeTestJSON-server-566270408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-566270408',id=40,image_ref='6e711352-ba7c-4c7d-858b-ee38dcbc90e8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ8vBVWoL3Kx71hGbZmgL+RMyK5pg4RSZWIQfJEgplPiT0Fp1pGPD1OLdKdjcCnYq1EimTVe6rq8ALChl/h7Yg9BhByf9wCDvRf6vvgfRnT8vgLocvPXjafhyrcrgxpSeA==',key_name='tempest-keypair-2023988493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72635784cc4840bba682e8305945e795',ramdisk_id='',reservation_id='r-0g6dsac9',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6e711352-ba7c-4c7d-858b-ee38dcbc90e8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1695700312',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1695700312-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:39:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c95bb3f52804685a0ba62164a02b535',uuid=30b63014-8760-428b-a66d-201587534734,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": 
"gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.460 247708 DEBUG nova.network.os_vif_util [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Converting VIF {"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.460 247708 DEBUG nova.network.os_vif_util [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:a3:77,bridge_name='br-int',has_traffic_filtering=True,id=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7,network=Network(aa5ee378-6843-499e-8557-33660028a787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9ae1f79-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.460 247708 DEBUG os_vif [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:a3:77,bridge_name='br-int',has_traffic_filtering=True,id=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7,network=Network(aa5ee378-6843-499e-8557-33660028a787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9ae1f79-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.461 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.461 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.462 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.464 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.465 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9ae1f79-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.465 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa9ae1f79-8a, col_values=(('external_ids', {'iface-id': 'a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:a3:77', 'vm-uuid': '30b63014-8760-428b-a66d-201587534734'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.466 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:09 compute-0 NetworkManager[49108]: <info>  [1769845149.4669] manager: (tapa9ae1f79-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.468 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.473 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.474 247708 INFO os_vif [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:a3:77,bridge_name='br-int',has_traffic_filtering=True,id=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7,network=Network(aa5ee378-6843-499e-8557-33660028a787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9ae1f79-8a')
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.494 247708 DEBUG nova.storage.rbd_utils [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(bc17f523f79e4a64a95c2b37d10f810f) on rbd image(ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.712 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.713 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.713 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] No VIF found with MAC fa:16:3e:7d:a3:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.714 247708 INFO nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Using config drive
Jan 31 07:39:09 compute-0 nova_compute[247704]: 2026-01-31 07:39:09.749 247708 DEBUG nova.storage.rbd_utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] rbd image 30b63014-8760-428b-a66d-201587534734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:09 compute-0 sudo[272238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:09 compute-0 sudo[272238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:09 compute-0 sudo[272238]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:09 compute-0 sudo[272274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:09 compute-0 sudo[272274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:09 compute-0 sudo[272274]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 31 07:39:10 compute-0 ceph-mon[74496]: pgmap v1169: 305 pgs: 305 active+clean; 518 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 223 op/s
Jan 31 07:39:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/640104396' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 518 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.1 MiB/s wr, 268 op/s
Jan 31 07:39:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 31 07:39:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.079 247708 INFO nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Creating config drive at /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734/disk.config
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.086 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuz0sjwdg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.147 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.148 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.149 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.212 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.221 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpuz0sjwdg" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.367 247708 DEBUG nova.storage.rbd_utils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] rbd image 30b63014-8760-428b-a66d-201587534734_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.372 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734/disk.config 30b63014-8760-428b-a66d-201587534734_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.395 247708 DEBUG nova.network.neutron [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Updated VIF entry in instance network info cache for port a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.396 247708 DEBUG nova.network.neutron [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Updating instance_info_cache with network_info: [{"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.425 247708 DEBUG oslo_concurrency.lockutils [req-71da01cd-6a75-42df-ae2d-79970ecd2e87 req-79f90a99-0d10-4ad3-9277-4e074df7400d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.727 247708 DEBUG oslo_concurrency.processutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734/disk.config 30b63014-8760-428b-a66d-201587534734_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.729 247708 INFO nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Deleting local config drive /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734/disk.config because it was imported into RBD.
Jan 31 07:39:11 compute-0 kernel: tapa9ae1f79-8a: entered promiscuous mode
Jan 31 07:39:11 compute-0 NetworkManager[49108]: <info>  [1769845151.8032] manager: (tapa9ae1f79-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Jan 31 07:39:11 compute-0 ovn_controller[149457]: 2026-01-31T07:39:11Z|00131|binding|INFO|Claiming lport a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 for this chassis.
Jan 31 07:39:11 compute-0 ovn_controller[149457]: 2026-01-31T07:39:11Z|00132|binding|INFO|a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7: Claiming fa:16:3e:7d:a3:77 10.100.0.7
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.809 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.825 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:a3:77 10.100.0.7'], port_security=['fa:16:3e:7d:a3:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '30b63014-8760-428b-a66d-201587534734', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa5ee378-6843-499e-8557-33660028a787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72635784cc4840bba682e8305945e795', 'neutron:revision_number': '2', 'neutron:security_group_ids': '23fa9621-004a-48f7-bc75-80280285867b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b38c7904-e395-4a73-b096-305f428a7f2a, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.828 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 in datapath aa5ee378-6843-499e-8557-33660028a787 bound to our chassis
Jan 31 07:39:11 compute-0 systemd-udevd[272352]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.832 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa5ee378-6843-499e-8557-33660028a787
Jan 31 07:39:11 compute-0 NetworkManager[49108]: <info>  [1769845151.8445] device (tapa9ae1f79-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:39:11 compute-0 NetworkManager[49108]: <info>  [1769845151.8463] device (tapa9ae1f79-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:39:11 compute-0 systemd-machined[214448]: New machine qemu-20-instance-00000028.
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.853 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[54fc9d8b-b1d4-43ed-8fd4-1519313f2591]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.855 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa5ee378-61 in ovnmeta-aa5ee378-6843-499e-8557-33660028a787 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.857 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa5ee378-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.857 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8f87c415-f8c9-4c6a-9f9a-f0ed72993d40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.859 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[56ff0a0a-381a-4deb-badd-3f0d8aa1fccd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:11 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000028.
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.873 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[a88b20f9-cc44-49fd-86aa-1b413adf91c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:11 compute-0 ovn_controller[149457]: 2026-01-31T07:39:11Z|00133|binding|INFO|Setting lport a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 ovn-installed in OVS
Jan 31 07:39:11 compute-0 ovn_controller[149457]: 2026-01-31T07:39:11Z|00134|binding|INFO|Setting lport a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 up in Southbound
Jan 31 07:39:11 compute-0 nova_compute[247704]: 2026-01-31 07:39:11.886 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.890 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[65f64d08-95e8-47b1-934f-ff11534c171f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.925 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b7fa35-5548-4305-8ed0-00f22fd4f429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.932 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9943b85e-8e00-49bf-8c0e-4e40ac0b72c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:11 compute-0 NetworkManager[49108]: <info>  [1769845151.9347] manager: (tapaa5ee378-60): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Jan 31 07:39:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.971 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1d2b7b-4b45-4466-91e4-85fe3e7f67fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:11.975 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd0c80e-9a90-4cc5-8da7-dec02f8e77d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:12 compute-0 NetworkManager[49108]: <info>  [1769845152.0012] device (tapaa5ee378-60): carrier: link connected
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.005 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2d241d7f-f825-422a-82f1-bf6cd5351fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.021 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a3807dab-46db-4fa9-b3ea-271c46e23404]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa5ee378-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:e3:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539461, 'reachable_time': 28986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272392, 'error': None, 'target': 'ovnmeta-aa5ee378-6843-499e-8557-33660028a787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.037 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[53220266-5e88-4afe-9b55-a1b5bc32c315]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:e3e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539461, 'tstamp': 539461}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272393, 'error': None, 'target': 'ovnmeta-aa5ee378-6843-499e-8557-33660028a787', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.055 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9b678586-e891-4a7d-889a-4716c663fb8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa5ee378-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:e3:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539461, 'reachable_time': 28986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272394, 'error': None, 'target': 'ovnmeta-aa5ee378-6843-499e-8557-33660028a787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.090 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9c9aef-7692-402e-84a2-8db4ab900d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.158 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a07d78-4275-461d-a2ec-9d3e7095f44b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.159 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa5ee378-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.160 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.160 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa5ee378-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:12 compute-0 kernel: tapaa5ee378-60: entered promiscuous mode
Jan 31 07:39:12 compute-0 NetworkManager[49108]: <info>  [1769845152.1622] manager: (tapaa5ee378-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.164 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.168 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.170 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa5ee378-60, col_values=(('external_ids', {'iface-id': '5326ae6b-6b24-4b0e-ac79-ec2b44f4ff3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.171 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:12 compute-0 ovn_controller[149457]: 2026-01-31T07:39:12Z|00135|binding|INFO|Releasing lport 5326ae6b-6b24-4b0e-ac79-ec2b44f4ff3e from this chassis (sb_readonly=0)
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.187 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.188 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa5ee378-6843-499e-8557-33660028a787.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa5ee378-6843-499e-8557-33660028a787.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.189 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[96b24823-c3bc-4a3d-9d43-467e9723141e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.190 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-aa5ee378-6843-499e-8557-33660028a787
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/aa5ee378-6843-499e-8557-33660028a787.pid.haproxy
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID aa5ee378-6843-499e-8557-33660028a787
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:39:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:12.190 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa5ee378-6843-499e-8557-33660028a787', 'env', 'PROCESS_TAG=haproxy-aa5ee378-6843-499e-8557-33660028a787', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa5ee378-6843-499e-8557-33660028a787.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:39:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:12.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.373 247708 DEBUG nova.compute.manager [req-9f78ecc5-8eda-407d-ab42-37cfc89b97f9 req-0c28c066-43ed-4507-86e6-b50569ec9948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received event network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.374 247708 DEBUG oslo_concurrency.lockutils [req-9f78ecc5-8eda-407d-ab42-37cfc89b97f9 req-0c28c066-43ed-4507-86e6-b50569ec9948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.374 247708 DEBUG oslo_concurrency.lockutils [req-9f78ecc5-8eda-407d-ab42-37cfc89b97f9 req-0c28c066-43ed-4507-86e6-b50569ec9948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.374 247708 DEBUG oslo_concurrency.lockutils [req-9f78ecc5-8eda-407d-ab42-37cfc89b97f9 req-0c28c066-43ed-4507-86e6-b50569ec9948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.375 247708 DEBUG nova.compute.manager [req-9f78ecc5-8eda-407d-ab42-37cfc89b97f9 req-0c28c066-43ed-4507-86e6-b50569ec9948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Processing event network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:39:12 compute-0 ceph-mon[74496]: pgmap v1170: 305 pgs: 305 active+clean; 518 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.1 MiB/s wr, 268 op/s
Jan 31 07:39:12 compute-0 ceph-mon[74496]: osdmap e163: 3 total, 3 up, 3 in
Jan 31 07:39:12 compute-0 podman[272429]: 2026-01-31 07:39:12.56559196 +0000 UTC m=+0.057938825 container create a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 07:39:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 526 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 214 op/s
Jan 31 07:39:12 compute-0 systemd[1]: Started libpod-conmon-a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f.scope.
Jan 31 07:39:12 compute-0 podman[272429]: 2026-01-31 07:39:12.530062403 +0000 UTC m=+0.022409238 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:39:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:39:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f1b1eb3185f96cd5a14c9549aee704c02b38b3727b18d216789a038bf52e0be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:12 compute-0 podman[272429]: 2026-01-31 07:39:12.66888757 +0000 UTC m=+0.161234475 container init a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:39:12 compute-0 podman[272429]: 2026-01-31 07:39:12.675250835 +0000 UTC m=+0.167597700 container start a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:39:12 compute-0 neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787[272463]: [NOTICE]   (272467) : New worker (272469) forked
Jan 31 07:39:12 compute-0 neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787[272463]: [NOTICE]   (272467) : Loading success.
Jan 31 07:39:12 compute-0 nova_compute[247704]: 2026-01-31 07:39:12.920 247708 DEBUG nova.storage.rbd_utils [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] cloning vms/ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk@bc17f523f79e4a64a95c2b37d10f810f to images/627417b9-a501-49db-939e-bae37ed6ee98 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.014 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845153.0134206, 30b63014-8760-428b-a66d-201587534734 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.014 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] VM Started (Lifecycle Event)
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.018 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.021 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.026 247708 INFO nova.virt.libvirt.driver [-] [instance: 30b63014-8760-428b-a66d-201587534734] Instance spawned successfully.
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.027 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Attempting to register defaults for the following image properties: ['hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.033 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.034 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.035 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.037 247708 DEBUG nova.virt.libvirt.driver [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.046 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.052 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.100 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.101 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845153.0135615, 30b63014-8760-428b-a66d-201587534734 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.102 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] VM Paused (Lifecycle Event)
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.116 247708 INFO nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Took 11.64 seconds to spawn the instance on the hypervisor.
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.117 247708 DEBUG nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.128 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.132 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845153.0213852, 30b63014-8760-428b-a66d-201587534734 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.133 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] VM Resumed (Lifecycle Event)
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.165 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.171 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.190 247708 INFO nova.compute.manager [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Took 13.13 seconds to build instance.
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.206 247708 DEBUG oslo_concurrency.lockutils [None req-7494f19d-628d-4be8-bf44-d968cba2c9e1 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:13 compute-0 nova_compute[247704]: 2026-01-31 07:39:13.238 247708 DEBUG nova.storage.rbd_utils [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] flattening images/627417b9-a501-49db-939e-bae37ed6ee98 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:39:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 31 07:39:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 31 07:39:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 31 07:39:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:13 compute-0 podman[272559]: 2026-01-31 07:39:13.892396771 +0000 UTC m=+0.068171045 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:39:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:13.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:14 compute-0 ceph-mon[74496]: pgmap v1172: 305 pgs: 305 active+clean; 526 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 214 op/s
Jan 31 07:39:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:39:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:39:14 compute-0 nova_compute[247704]: 2026-01-31 07:39:14.467 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 526 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 164 op/s
Jan 31 07:39:14 compute-0 nova_compute[247704]: 2026-01-31 07:39:14.802 247708 DEBUG nova.compute.manager [req-90e3ed0d-6791-4a84-b024-0d1c45fb6c58 req-48e1d68d-8451-4235-81d7-cc1a5426c532 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received event network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:14 compute-0 nova_compute[247704]: 2026-01-31 07:39:14.803 247708 DEBUG oslo_concurrency.lockutils [req-90e3ed0d-6791-4a84-b024-0d1c45fb6c58 req-48e1d68d-8451-4235-81d7-cc1a5426c532 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:14 compute-0 nova_compute[247704]: 2026-01-31 07:39:14.803 247708 DEBUG oslo_concurrency.lockutils [req-90e3ed0d-6791-4a84-b024-0d1c45fb6c58 req-48e1d68d-8451-4235-81d7-cc1a5426c532 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:14 compute-0 nova_compute[247704]: 2026-01-31 07:39:14.803 247708 DEBUG oslo_concurrency.lockutils [req-90e3ed0d-6791-4a84-b024-0d1c45fb6c58 req-48e1d68d-8451-4235-81d7-cc1a5426c532 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:14 compute-0 nova_compute[247704]: 2026-01-31 07:39:14.804 247708 DEBUG nova.compute.manager [req-90e3ed0d-6791-4a84-b024-0d1c45fb6c58 req-48e1d68d-8451-4235-81d7-cc1a5426c532 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] No waiting events found dispatching network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:14 compute-0 nova_compute[247704]: 2026-01-31 07:39:14.804 247708 WARNING nova.compute.manager [req-90e3ed0d-6791-4a84-b024-0d1c45fb6c58 req-48e1d68d-8451-4235-81d7-cc1a5426c532 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received unexpected event network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 for instance with vm_state active and task_state None.
Jan 31 07:39:14 compute-0 nova_compute[247704]: 2026-01-31 07:39:14.980 247708 DEBUG nova.storage.rbd_utils [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] removing snapshot(bc17f523f79e4a64a95c2b37d10f810f) on rbd image(ec81b99d-17fb-4dd1-aa76-67b3780fce15_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:39:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 31 07:39:15 compute-0 ceph-mon[74496]: osdmap e164: 3 total, 3 up, 3 in
Jan 31 07:39:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 31 07:39:15 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 31 07:39:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:15.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:16 compute-0 nova_compute[247704]: 2026-01-31 07:39:16.252 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:16 compute-0 ceph-mon[74496]: pgmap v1174: 305 pgs: 305 active+clean; 526 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 164 op/s
Jan 31 07:39:16 compute-0 ceph-mon[74496]: osdmap e165: 3 total, 3 up, 3 in
Jan 31 07:39:16 compute-0 nova_compute[247704]: 2026-01-31 07:39:16.483 247708 DEBUG nova.storage.rbd_utils [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(snap) on rbd image(627417b9-a501-49db-939e-bae37ed6ee98) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:39:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 633 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 12 MiB/s wr, 263 op/s
Jan 31 07:39:16 compute-0 sudo[272616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:16 compute-0 sudo[272616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:16 compute-0 sudo[272616]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:16 compute-0 sudo[272641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:39:16 compute-0 sudo[272641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:16 compute-0 sudo[272641]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:16 compute-0 sudo[272666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:16 compute-0 sudo[272666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:16 compute-0 sudo[272666]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:16 compute-0 sudo[272691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:39:16 compute-0 sudo[272691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:17 compute-0 sudo[272691]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 07:39:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:39:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:39:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 31 07:39:17 compute-0 ceph-mon[74496]: pgmap v1176: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 633 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 12 MiB/s wr, 263 op/s
Jan 31 07:39:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:17 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 31 07:39:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev baa36c11-357f-4ea3-bd73-d3ac3aeb8349 does not exist
Jan 31 07:39:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 375a8a18-488b-4537-92ef-f11df5bf2a50 does not exist
Jan 31 07:39:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7b260d67-d56c-4b65-a252-7c6b1e9422c1 does not exist
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:39:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:39:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:39:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:39:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:39:17 compute-0 sudo[272749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:17 compute-0 sudo[272749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:17 compute-0 sudo[272749]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:17 compute-0 sudo[272774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:39:17 compute-0 sudo[272774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:17 compute-0 sudo[272774]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:17 compute-0 nova_compute[247704]: 2026-01-31 07:39:17.960 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:17 compute-0 NetworkManager[49108]: <info>  [1769845157.9632] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 31 07:39:17 compute-0 NetworkManager[49108]: <info>  [1769845157.9646] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Jan 31 07:39:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:17.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:18 compute-0 sudo[272799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:18 compute-0 sudo[272799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:18 compute-0 sudo[272799]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.026 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:18 compute-0 ovn_controller[149457]: 2026-01-31T07:39:18Z|00136|binding|INFO|Releasing lport 5326ae6b-6b24-4b0e-ac79-ec2b44f4ff3e from this chassis (sb_readonly=0)
Jan 31 07:39:18 compute-0 ovn_controller[149457]: 2026-01-31T07:39:18Z|00137|binding|INFO|Releasing lport b682c189-93d2-4c14-8b2a-bafbda6df8a4 from this chassis (sb_readonly=0)
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.054 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:18 compute-0 sudo[272824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:39:18 compute-0 sudo[272824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:18.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:18 compute-0 podman[272892]: 2026-01-31 07:39:18.491684692 +0000 UTC m=+0.070820919 container create cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:39:18 compute-0 systemd[1]: Started libpod-conmon-cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be.scope.
Jan 31 07:39:18 compute-0 podman[272892]: 2026-01-31 07:39:18.454621597 +0000 UTC m=+0.033757914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.568 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845143.567056, ec81b99d-17fb-4dd1-aa76-67b3780fce15 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.568 247708 INFO nova.compute.manager [-] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] VM Stopped (Lifecycle Event)
Jan 31 07:39:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:39:18 compute-0 podman[272892]: 2026-01-31 07:39:18.603819048 +0000 UTC m=+0.182955355 container init cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:39:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 660 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 388 op/s
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.609 247708 DEBUG nova.compute.manager [None req-44c7ad8a-b281-45ec-a47d-e3f196c12b3b - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:18 compute-0 podman[272892]: 2026-01-31 07:39:18.612844748 +0000 UTC m=+0.191981005 container start cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.613 247708 DEBUG nova.compute.manager [None req-44c7ad8a-b281-45ec-a47d-e3f196c12b3b - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: image_uploading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:39:18 compute-0 podman[272892]: 2026-01-31 07:39:18.620034663 +0000 UTC m=+0.199170910 container attach cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ganguly, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:39:18 compute-0 quizzical_ganguly[272910]: 167 167
Jan 31 07:39:18 compute-0 systemd[1]: libpod-cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be.scope: Deactivated successfully.
Jan 31 07:39:18 compute-0 podman[272892]: 2026-01-31 07:39:18.623804105 +0000 UTC m=+0.202940352 container died cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ganguly, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.642 247708 INFO nova.compute.manager [None req-44c7ad8a-b281-45ec-a47d-e3f196c12b3b - - - - - -] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] During sync_power_state the instance has a pending task (image_uploading). Skip.
Jan 31 07:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb5980617b60ee1b8f7ed5095f6d48eea2b3ffecdd92b3ee985b58385aa64426-merged.mount: Deactivated successfully.
Jan 31 07:39:18 compute-0 podman[272892]: 2026-01-31 07:39:18.682829885 +0000 UTC m=+0.261966122 container remove cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:39:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:18 compute-0 systemd[1]: libpod-conmon-cb2d0726a107504b50bfb20e518156582d93ddecf710c20e630d086dc38ed2be.scope: Deactivated successfully.
Jan 31 07:39:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.860 247708 DEBUG nova.compute.manager [req-6afc0eac-b3e3-493a-98e4-7b61638ebf0a req-0129b130-12e1-4ffe-a0ec-34f6f16b448c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received event network-changed-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.863 247708 DEBUG nova.compute.manager [req-6afc0eac-b3e3-493a-98e4-7b61638ebf0a req-0129b130-12e1-4ffe-a0ec-34f6f16b448c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Refreshing instance network info cache due to event network-changed-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.864 247708 DEBUG oslo_concurrency.lockutils [req-6afc0eac-b3e3-493a-98e4-7b61638ebf0a req-0129b130-12e1-4ffe-a0ec-34f6f16b448c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.864 247708 DEBUG oslo_concurrency.lockutils [req-6afc0eac-b3e3-493a-98e4-7b61638ebf0a req-0129b130-12e1-4ffe-a0ec-34f6f16b448c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:39:18 compute-0 nova_compute[247704]: 2026-01-31 07:39:18.865 247708 DEBUG nova.network.neutron [req-6afc0eac-b3e3-493a-98e4-7b61638ebf0a req-0129b130-12e1-4ffe-a0ec-34f6f16b448c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Refreshing network info cache for port a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:39:18 compute-0 podman[272934]: 2026-01-31 07:39:18.915872711 +0000 UTC m=+0.074003016 container create dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mayer, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:39:18 compute-0 systemd[1]: Started libpod-conmon-dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874.scope.
Jan 31 07:39:18 compute-0 podman[272934]: 2026-01-31 07:39:18.884911595 +0000 UTC m=+0.043041900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:39:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:39:18 compute-0 sshd-session[272906]: Invalid user sol from 45.148.10.240 port 57500
Jan 31 07:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142d67c75f20d3ed971c3b4622a3d2d1efa014c42d49071df5f717b4135788fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142d67c75f20d3ed971c3b4622a3d2d1efa014c42d49071df5f717b4135788fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142d67c75f20d3ed971c3b4622a3d2d1efa014c42d49071df5f717b4135788fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142d67c75f20d3ed971c3b4622a3d2d1efa014c42d49071df5f717b4135788fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/142d67c75f20d3ed971c3b4622a3d2d1efa014c42d49071df5f717b4135788fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:19 compute-0 podman[272934]: 2026-01-31 07:39:19.019809497 +0000 UTC m=+0.177939772 container init dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mayer, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:39:19 compute-0 podman[272934]: 2026-01-31 07:39:19.026068899 +0000 UTC m=+0.184199164 container start dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:39:19 compute-0 podman[272934]: 2026-01-31 07:39:19.029596916 +0000 UTC m=+0.187727181 container attach dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mayer, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:39:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:39:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:39:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:39:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:19 compute-0 ceph-mon[74496]: osdmap e166: 3 total, 3 up, 3 in
Jan 31 07:39:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:39:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:39:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:39:19 compute-0 sshd-session[272906]: Connection closed by invalid user sol 45.148.10.240 port 57500 [preauth]
Jan 31 07:39:19 compute-0 nova_compute[247704]: 2026-01-31 07:39:19.469 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 31 07:39:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 31 07:39:19 compute-0 heuristic_mayer[272950]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:39:19 compute-0 heuristic_mayer[272950]: --> relative data size: 1.0
Jan 31 07:39:19 compute-0 heuristic_mayer[272950]: --> All data devices are unavailable
Jan 31 07:39:19 compute-0 systemd[1]: libpod-dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874.scope: Deactivated successfully.
Jan 31 07:39:19 compute-0 conmon[272950]: conmon dc14e2e9bd3171b3acc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874.scope/container/memory.events
Jan 31 07:39:19 compute-0 podman[272934]: 2026-01-31 07:39:19.876323384 +0000 UTC m=+1.034453709 container died dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-142d67c75f20d3ed971c3b4622a3d2d1efa014c42d49071df5f717b4135788fe-merged.mount: Deactivated successfully.
Jan 31 07:39:19 compute-0 podman[272934]: 2026-01-31 07:39:19.939237268 +0000 UTC m=+1.097367573 container remove dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mayer, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:39:19 compute-0 systemd[1]: libpod-conmon-dc14e2e9bd3171b3acc802588312d03230d7817d3f93d1c7c8031d39dd63a874.scope: Deactivated successfully.
Jan 31 07:39:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:19.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:19 compute-0 sudo[272824]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:39:20
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.data']
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:39:20 compute-0 sudo[272978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:20 compute-0 sudo[272978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:20 compute-0 sudo[272978]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:20 compute-0 sudo[273003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:39:20 compute-0 sudo[273003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:20 compute-0 sudo[273003]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:20 compute-0 nova_compute[247704]: 2026-01-31 07:39:20.144 247708 DEBUG nova.network.neutron [req-6afc0eac-b3e3-493a-98e4-7b61638ebf0a req-0129b130-12e1-4ffe-a0ec-34f6f16b448c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Updated VIF entry in instance network info cache for port a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:39:20 compute-0 nova_compute[247704]: 2026-01-31 07:39:20.145 247708 DEBUG nova.network.neutron [req-6afc0eac-b3e3-493a-98e4-7b61638ebf0a req-0129b130-12e1-4ffe-a0ec-34f6f16b448c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Updating instance_info_cache with network_info: [{"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:20 compute-0 nova_compute[247704]: 2026-01-31 07:39:20.173 247708 DEBUG oslo_concurrency.lockutils [req-6afc0eac-b3e3-493a-98e4-7b61638ebf0a req-0129b130-12e1-4ffe-a0ec-34f6f16b448c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-30b63014-8760-428b-a66d-201587534734" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:39:20 compute-0 ceph-mon[74496]: pgmap v1178: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 660 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 388 op/s
Jan 31 07:39:20 compute-0 ceph-mon[74496]: osdmap e167: 3 total, 3 up, 3 in
Jan 31 07:39:20 compute-0 sudo[273028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:20 compute-0 sudo[273028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:20 compute-0 sudo[273028]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:20 compute-0 sudo[273053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:39:20 compute-0 sudo[273053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:20 compute-0 nova_compute[247704]: 2026-01-31 07:39:20.288 247708 INFO nova.virt.libvirt.driver [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Snapshot image upload complete
Jan 31 07:39:20 compute-0 nova_compute[247704]: 2026-01-31 07:39:20.290 247708 INFO nova.compute.manager [None req-d82ad3ba-20e5-493e-a44a-6cf924b9ae26 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Took 11.67 seconds to snapshot the instance on the hypervisor.
Jan 31 07:39:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:20.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:39:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 16 MiB/s rd, 14 MiB/s wr, 500 op/s
Jan 31 07:39:20 compute-0 podman[273118]: 2026-01-31 07:39:20.663393547 +0000 UTC m=+0.055349312 container create 33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:39:20 compute-0 systemd[1]: Started libpod-conmon-33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c.scope.
Jan 31 07:39:20 compute-0 podman[273118]: 2026-01-31 07:39:20.641758928 +0000 UTC m=+0.033714693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:39:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:39:20 compute-0 podman[273118]: 2026-01-31 07:39:20.756792444 +0000 UTC m=+0.148748169 container init 33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:39:20 compute-0 podman[273118]: 2026-01-31 07:39:20.765804645 +0000 UTC m=+0.157760400 container start 33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:39:20 compute-0 podman[273118]: 2026-01-31 07:39:20.770326035 +0000 UTC m=+0.162281760 container attach 33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:39:20 compute-0 busy_haibt[273134]: 167 167
Jan 31 07:39:20 compute-0 systemd[1]: libpod-33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c.scope: Deactivated successfully.
Jan 31 07:39:20 compute-0 podman[273118]: 2026-01-31 07:39:20.773100583 +0000 UTC m=+0.165056308 container died 33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf29a17f07eebccf5f20c424e66241ca53c0734bc9dbd5663ac70ae74d775403-merged.mount: Deactivated successfully.
Jan 31 07:39:20 compute-0 podman[273118]: 2026-01-31 07:39:20.819777961 +0000 UTC m=+0.211733726 container remove 33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:39:20 compute-0 systemd[1]: libpod-conmon-33ac6c1d1c1a1da74e8f4b7ef7c5204dbec75638a19420481cf5b97ff5f8787c.scope: Deactivated successfully.
Jan 31 07:39:21 compute-0 podman[273158]: 2026-01-31 07:39:21.018769956 +0000 UTC m=+0.058620920 container create fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:39:21 compute-0 systemd[1]: Started libpod-conmon-fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3.scope.
Jan 31 07:39:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:39:21 compute-0 podman[273158]: 2026-01-31 07:39:20.993670035 +0000 UTC m=+0.033521039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf509047d166fc9a07ddddfe69dac8276e66fb554f0012fd3706e4cdf820059/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf509047d166fc9a07ddddfe69dac8276e66fb554f0012fd3706e4cdf820059/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf509047d166fc9a07ddddfe69dac8276e66fb554f0012fd3706e4cdf820059/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf509047d166fc9a07ddddfe69dac8276e66fb554f0012fd3706e4cdf820059/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:21 compute-0 podman[273158]: 2026-01-31 07:39:21.118374627 +0000 UTC m=+0.158225621 container init fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:39:21 compute-0 podman[273158]: 2026-01-31 07:39:21.127766935 +0000 UTC m=+0.167617929 container start fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:39:21 compute-0 podman[273158]: 2026-01-31 07:39:21.132270796 +0000 UTC m=+0.172121790 container attach fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:39:21 compute-0 nova_compute[247704]: 2026-01-31 07:39:21.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:21 compute-0 youthful_galois[273174]: {
Jan 31 07:39:21 compute-0 youthful_galois[273174]:     "0": [
Jan 31 07:39:21 compute-0 youthful_galois[273174]:         {
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "devices": [
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "/dev/loop3"
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             ],
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "lv_name": "ceph_lv0",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "lv_size": "7511998464",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "name": "ceph_lv0",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "tags": {
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.cluster_name": "ceph",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.crush_device_class": "",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.encrypted": "0",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.osd_id": "0",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.type": "block",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:                 "ceph.vdo": "0"
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             },
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "type": "block",
Jan 31 07:39:21 compute-0 youthful_galois[273174]:             "vg_name": "ceph_vg0"
Jan 31 07:39:21 compute-0 youthful_galois[273174]:         }
Jan 31 07:39:21 compute-0 youthful_galois[273174]:     ]
Jan 31 07:39:21 compute-0 youthful_galois[273174]: }
Jan 31 07:39:21 compute-0 systemd[1]: libpod-fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3.scope: Deactivated successfully.
Jan 31 07:39:21 compute-0 podman[273158]: 2026-01-31 07:39:21.806950346 +0000 UTC m=+0.846801350 container died fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cf509047d166fc9a07ddddfe69dac8276e66fb554f0012fd3706e4cdf820059-merged.mount: Deactivated successfully.
Jan 31 07:39:21 compute-0 podman[273158]: 2026-01-31 07:39:21.863874915 +0000 UTC m=+0.903725879 container remove fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_galois, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:39:21 compute-0 systemd[1]: libpod-conmon-fb2effec89b632c96b0ddc62ba8cc7dd8d694466a133f291b34bb6705c1abce3.scope: Deactivated successfully.
Jan 31 07:39:21 compute-0 sudo[273053]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:21 compute-0 sudo[273195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:21 compute-0 sudo[273195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:21 compute-0 sudo[273195]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:21.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:22 compute-0 sudo[273220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:39:22 compute-0 sudo[273220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:22 compute-0 sudo[273220]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:22 compute-0 sudo[273245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:22 compute-0 sudo[273245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:22 compute-0 sudo[273245]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:22 compute-0 sudo[273270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:39:22 compute-0 sudo[273270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:22.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:22 compute-0 podman[273336]: 2026-01-31 07:39:22.566504488 +0000 UTC m=+0.060568459 container create d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:39:22 compute-0 systemd[1]: Started libpod-conmon-d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e.scope.
Jan 31 07:39:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 8.1 MiB/s wr, 322 op/s
Jan 31 07:39:22 compute-0 podman[273336]: 2026-01-31 07:39:22.539323614 +0000 UTC m=+0.033387635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:39:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:39:22 compute-0 podman[273336]: 2026-01-31 07:39:22.66212875 +0000 UTC m=+0.156192781 container init d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:39:22 compute-0 podman[273336]: 2026-01-31 07:39:22.670781201 +0000 UTC m=+0.164845172 container start d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:39:22 compute-0 podman[273336]: 2026-01-31 07:39:22.674704267 +0000 UTC m=+0.168768238 container attach d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:39:22 compute-0 adoring_turing[273353]: 167 167
Jan 31 07:39:22 compute-0 systemd[1]: libpod-d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e.scope: Deactivated successfully.
Jan 31 07:39:22 compute-0 podman[273336]: 2026-01-31 07:39:22.678800267 +0000 UTC m=+0.172864238 container died d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-55a432e7f0f2da932a4f571b0106758542d8977ccf0854f76043c2d4b814ba18-merged.mount: Deactivated successfully.
Jan 31 07:39:22 compute-0 podman[273336]: 2026-01-31 07:39:22.725203969 +0000 UTC m=+0.219267900 container remove d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:39:22 compute-0 systemd[1]: libpod-conmon-d182c8a9a1a780c0ea4b50c1f4c15a7e0117f4baefedd1268aa8fa90019b960e.scope: Deactivated successfully.
Jan 31 07:39:22 compute-0 podman[273377]: 2026-01-31 07:39:22.883999304 +0000 UTC m=+0.038287336 container create 54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ganguly, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 07:39:22 compute-0 systemd[1]: Started libpod-conmon-54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0.scope.
Jan 31 07:39:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a592c01889f5adb8a36b6adb81fe24409798aa40d858ee6093170fca512533/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a592c01889f5adb8a36b6adb81fe24409798aa40d858ee6093170fca512533/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a592c01889f5adb8a36b6adb81fe24409798aa40d858ee6093170fca512533/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a592c01889f5adb8a36b6adb81fe24409798aa40d858ee6093170fca512533/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:39:22 compute-0 podman[273377]: 2026-01-31 07:39:22.86623279 +0000 UTC m=+0.020520852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:39:22 compute-0 podman[273377]: 2026-01-31 07:39:22.970822722 +0000 UTC m=+0.125110824 container init 54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ganguly, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:39:22 compute-0 podman[273377]: 2026-01-31 07:39:22.980710993 +0000 UTC m=+0.134999025 container start 54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ganguly, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:39:22 compute-0 podman[273377]: 2026-01-31 07:39:22.989141999 +0000 UTC m=+0.143430101 container attach 54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ganguly, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:39:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]: {
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]:         "osd_id": 0,
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]:         "type": "bluestore"
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]:     }
Jan 31 07:39:23 compute-0 hungry_ganguly[273393]: }
Jan 31 07:39:23 compute-0 systemd[1]: libpod-54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0.scope: Deactivated successfully.
Jan 31 07:39:23 compute-0 podman[273377]: 2026-01-31 07:39:23.828411665 +0000 UTC m=+0.982699737 container died 54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ganguly, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-48a592c01889f5adb8a36b6adb81fe24409798aa40d858ee6093170fca512533-merged.mount: Deactivated successfully.
Jan 31 07:39:23 compute-0 podman[273377]: 2026-01-31 07:39:23.892697923 +0000 UTC m=+1.046985975 container remove 54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:39:23 compute-0 systemd[1]: libpod-conmon-54873c59d9cd2690abb147eaff35b9c2330b86826c473846545a306323782bf0.scope: Deactivated successfully.
Jan 31 07:39:23 compute-0 sudo[273270]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:39:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:39:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:23.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:39:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:24.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:24 compute-0 nova_compute[247704]: 2026-01-31 07:39:24.471 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.7 MiB/s wr, 230 op/s
Jan 31 07:39:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:25.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:26.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:26 compute-0 nova_compute[247704]: 2026-01-31 07:39:26.348 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 99 op/s
Jan 31 07:39:27 compute-0 ceph-mon[74496]: pgmap v1180: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 16 MiB/s rd, 14 MiB/s wr, 500 op/s
Jan 31 07:39:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:39:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ca053517-7186-49d2-86f0-31b20cc8c475 does not exist
Jan 31 07:39:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b0d1c89b-206f-4de3-ac48-e8fb91ebe193 does not exist
Jan 31 07:39:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ab3de2c4-57a2-473d-bf06-010e03da72bc does not exist
Jan 31 07:39:27 compute-0 sudo[273432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:27 compute-0 sudo[273432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:27 compute-0 sudo[273432]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:27 compute-0 sudo[273457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:39:27 compute-0 sudo[273457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:27 compute-0 sudo[273457]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:27.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:39:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:39:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 929 KiB/s wr, 88 op/s
Jan 31 07:39:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 31 07:39:28 compute-0 ceph-mon[74496]: pgmap v1181: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 8.1 MiB/s wr, 322 op/s
Jan 31 07:39:28 compute-0 ceph-mon[74496]: pgmap v1182: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 2.7 MiB/s wr, 230 op/s
Jan 31 07:39:28 compute-0 ceph-mon[74496]: pgmap v1183: 305 pgs: 305 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.0 MiB/s wr, 99 op/s
Jan 31 07:39:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:39:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 31 07:39:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 31 07:39:29 compute-0 podman[273482]: 2026-01-31 07:39:29.012989444 +0000 UTC m=+0.175643726 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.480 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.502 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.502 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.503 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.504 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.504 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.506 247708 INFO nova.compute.manager [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Terminating instance
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.507 247708 DEBUG nova.compute.manager [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.518 247708 INFO nova.virt.libvirt.driver [-] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Instance destroyed successfully.
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.519 247708 DEBUG nova.objects.instance [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'resources' on Instance uuid ec81b99d-17fb-4dd1-aa76-67b3780fce15 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:29 compute-0 ovn_controller[149457]: 2026-01-31T07:39:29Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:a3:77 10.100.0.7
Jan 31 07:39:29 compute-0 ovn_controller[149457]: 2026-01-31T07:39:29Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:a3:77 10.100.0.7
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.539 247708 DEBUG nova.virt.libvirt.vif [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:38:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-528624837',display_name='tempest-ImagesTestJSON-server-528624837',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-528624837',id=38,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:38:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-dk4ltwv1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:39:20Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=ec81b99d-17fb-4dd1-aa76-67b3780fce15,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.540 247708 DEBUG nova.network.os_vif_util [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "address": "fa:16:3e:4c:4f:e1", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7bea49-10", "ovs_interfaceid": "6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.542 247708 DEBUG nova.network.os_vif_util [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:4f:e1,bridge_name='br-int',has_traffic_filtering=True,id=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7bea49-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.542 247708 DEBUG os_vif [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:4f:e1,bridge_name='br-int',has_traffic_filtering=True,id=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7bea49-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.545 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.546 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c7bea49-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.552 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:39:29 compute-0 nova_compute[247704]: 2026-01-31 07:39:29.555 247708 INFO os_vif [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:4f:e1,bridge_name='br-int',has_traffic_filtering=True,id=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7bea49-10')
Jan 31 07:39:29 compute-0 ceph-mon[74496]: pgmap v1184: 305 pgs: 305 active+clean; 676 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 929 KiB/s wr, 88 op/s
Jan 31 07:39:29 compute-0 ceph-mon[74496]: osdmap e168: 3 total, 3 up, 3 in
Jan 31 07:39:29 compute-0 sudo[273527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:29 compute-0 sudo[273527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:29 compute-0 sudo[273527]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:29.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:30 compute-0 sudo[273552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:30 compute-0 sudo[273552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:30 compute-0 sudo[273552]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 614 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 591 KiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 07:39:31 compute-0 nova_compute[247704]: 2026-01-31 07:39:31.378 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:31.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:32.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:32 compute-0 ceph-mon[74496]: pgmap v1186: 305 pgs: 305 active+clean; 614 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 591 KiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 07:39:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 578 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 5.5 MiB/s wr, 205 op/s
Jan 31 07:39:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 31 07:39:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:33.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 31 07:39:34 compute-0 ceph-mon[74496]: pgmap v1187: 305 pgs: 305 active+clean; 578 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 5.5 MiB/s wr, 205 op/s
Jan 31 07:39:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 31 07:39:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:34.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:34 compute-0 nova_compute[247704]: 2026-01-31 07:39:34.548 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 578 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 935 KiB/s rd, 6.9 MiB/s wr, 236 op/s
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013192045295924064 of space, bias 1.0, pg target 3.9576135887772192 quantized to 32 (current 32)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003843544981853713 of space, bias 1.0, pg target 1.1415328596105527 quantized to 32 (current 32)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:39:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 07:39:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1659632397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1211290531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:35 compute-0 ceph-mon[74496]: osdmap e169: 3 total, 3 up, 3 in
Jan 31 07:39:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2681325381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:39:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:35.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:39:36 compute-0 nova_compute[247704]: 2026-01-31 07:39:36.274 247708 INFO nova.virt.libvirt.driver [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Deleting instance files /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15_del
Jan 31 07:39:36 compute-0 nova_compute[247704]: 2026-01-31 07:39:36.274 247708 INFO nova.virt.libvirt.driver [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Deletion of /var/lib/nova/instances/ec81b99d-17fb-4dd1-aa76-67b3780fce15_del complete
Jan 31 07:39:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:36.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:36 compute-0 ceph-mon[74496]: pgmap v1189: 305 pgs: 305 active+clean; 578 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 935 KiB/s rd, 6.9 MiB/s wr, 236 op/s
Jan 31 07:39:36 compute-0 nova_compute[247704]: 2026-01-31 07:39:36.439 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:36 compute-0 nova_compute[247704]: 2026-01-31 07:39:36.444 247708 INFO nova.compute.manager [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Took 6.94 seconds to destroy the instance on the hypervisor.
Jan 31 07:39:36 compute-0 nova_compute[247704]: 2026-01-31 07:39:36.444 247708 DEBUG oslo.service.loopingcall [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:39:36 compute-0 nova_compute[247704]: 2026-01-31 07:39:36.444 247708 DEBUG nova.compute.manager [-] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:39:36 compute-0 nova_compute[247704]: 2026-01-31 07:39:36.445 247708 DEBUG nova.network.neutron [-] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:39:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 545 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 9.5 MiB/s wr, 367 op/s
Jan 31 07:39:37 compute-0 ceph-mon[74496]: pgmap v1190: 305 pgs: 305 active+clean; 545 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 9.5 MiB/s wr, 367 op/s
Jan 31 07:39:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 519 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 936 KiB/s rd, 8.4 MiB/s wr, 315 op/s
Jan 31 07:39:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 31 07:39:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 31 07:39:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 31 07:39:39 compute-0 nova_compute[247704]: 2026-01-31 07:39:39.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 31 07:39:39 compute-0 ceph-mon[74496]: pgmap v1191: 305 pgs: 305 active+clean; 519 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 936 KiB/s rd, 8.4 MiB/s wr, 315 op/s
Jan 31 07:39:39 compute-0 ceph-mon[74496]: osdmap e170: 3 total, 3 up, 3 in
Jan 31 07:39:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 31 07:39:39 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 31 07:39:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:40.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:40.550 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.551 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:40.553 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:39:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 551 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 363 op/s
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.650 247708 DEBUG nova.network.neutron [-] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.683 247708 DEBUG nova.compute.manager [req-46d1a0a6-12c1-4c4a-8cba-c5d0856e7e71 req-c8058546-ae7a-4826-b2e8-72be32395b21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Received event network-vif-deleted-6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.684 247708 INFO nova.compute.manager [req-46d1a0a6-12c1-4c4a-8cba-c5d0856e7e71 req-c8058546-ae7a-4826-b2e8-72be32395b21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Neutron deleted interface 6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61; detaching it from the instance and deleting it from the info cache
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.684 247708 DEBUG nova.network.neutron [req-46d1a0a6-12c1-4c4a-8cba-c5d0856e7e71 req-c8058546-ae7a-4826-b2e8-72be32395b21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.811 247708 INFO nova.compute.manager [-] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Took 4.37 seconds to deallocate network for instance.
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.823 247708 DEBUG nova.compute.manager [req-46d1a0a6-12c1-4c4a-8cba-c5d0856e7e71 req-c8058546-ae7a-4826-b2e8-72be32395b21 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ec81b99d-17fb-4dd1-aa76-67b3780fce15] Detach interface failed, port_id=6c7bea49-10b1-4a9e-bdb3-2c7c5f15bb61, reason: Instance ec81b99d-17fb-4dd1-aa76-67b3780fce15 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:39:40 compute-0 ceph-mon[74496]: osdmap e171: 3 total, 3 up, 3 in
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.965 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:40 compute-0 nova_compute[247704]: 2026-01-31 07:39:40.966 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:41 compute-0 nova_compute[247704]: 2026-01-31 07:39:41.044 247708 DEBUG oslo_concurrency.processutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:41 compute-0 nova_compute[247704]: 2026-01-31 07:39:41.441 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3874652538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:41 compute-0 nova_compute[247704]: 2026-01-31 07:39:41.504 247708 DEBUG oslo_concurrency.processutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:41 compute-0 nova_compute[247704]: 2026-01-31 07:39:41.512 247708 DEBUG nova.compute.provider_tree [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:39:41 compute-0 nova_compute[247704]: 2026-01-31 07:39:41.563 247708 DEBUG nova.scheduler.client.report [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:39:41 compute-0 nova_compute[247704]: 2026-01-31 07:39:41.781 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:41 compute-0 nova_compute[247704]: 2026-01-31 07:39:41.876 247708 INFO nova.scheduler.client.report [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Deleted allocations for instance ec81b99d-17fb-4dd1-aa76-67b3780fce15
Jan 31 07:39:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:41.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.049 247708 DEBUG oslo_concurrency.lockutils [None req-a4dafebc-3b72-4939-b3bd-06551142b99a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ec81b99d-17fb-4dd1-aa76-67b3780fce15" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:42 compute-0 ceph-mon[74496]: pgmap v1194: 305 pgs: 305 active+clean; 551 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.0 MiB/s wr, 363 op/s
Jan 31 07:39:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3874652538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:42.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 581 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.3 MiB/s wr, 358 op/s
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.709 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "944d6dce-4c82-4846-a5eb-57f141812e21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.709 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.710 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.710 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.711 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.712 247708 INFO nova.compute.manager [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Terminating instance
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.714 247708 DEBUG nova.compute.manager [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:39:42 compute-0 kernel: tapedf86251-2f (unregistering): left promiscuous mode
Jan 31 07:39:42 compute-0 NetworkManager[49108]: <info>  [1769845182.9325] device (tapedf86251-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:39:42 compute-0 ovn_controller[149457]: 2026-01-31T07:39:42Z|00138|binding|INFO|Releasing lport edf86251-2fc9-49e5-81f5-bec6102d9c52 from this chassis (sb_readonly=0)
Jan 31 07:39:42 compute-0 ovn_controller[149457]: 2026-01-31T07:39:42Z|00139|binding|INFO|Setting lport edf86251-2fc9-49e5-81f5-bec6102d9c52 down in Southbound
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.946 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:42 compute-0 ovn_controller[149457]: 2026-01-31T07:39:42Z|00140|binding|INFO|Removing iface tapedf86251-2f ovn-installed in OVS
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.948 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:42 compute-0 nova_compute[247704]: 2026-01-31 07:39:42.962 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:42.969 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:e4:9b 10.100.0.12'], port_security=['fa:16:3e:5e:e4:9b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '944d6dce-4c82-4846-a5eb-57f141812e21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b0554655ad0a48c8bf0551298dd31919', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b25af7c2-ecfe-428a-9b4f-51874d47219e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3f23c6f-f389-4487-9d19-0cf4a6c28cbc, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=edf86251-2fc9-49e5-81f5-bec6102d9c52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:42.970 160028 INFO neutron.agent.ovn.metadata.agent [-] Port edf86251-2fc9-49e5-81f5-bec6102d9c52 in datapath 8c92e27e-f16c-4df2-a299-60ef2ca44f53 unbound from our chassis
Jan 31 07:39:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:42.973 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c92e27e-f16c-4df2-a299-60ef2ca44f53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:39:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:42.974 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a01ba28f-0bf5-4572-9585-4d70634bbbfd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:42.975 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 namespace which is not needed anymore
Jan 31 07:39:42 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000025.scope: Deactivated successfully.
Jan 31 07:39:42 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000025.scope: Consumed 15.958s CPU time.
Jan 31 07:39:42 compute-0 systemd-machined[214448]: Machine qemu-18-instance-00000025 terminated.
Jan 31 07:39:43 compute-0 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[271386]: [NOTICE]   (271390) : haproxy version is 2.8.14-c23fe91
Jan 31 07:39:43 compute-0 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[271386]: [NOTICE]   (271390) : path to executable is /usr/sbin/haproxy
Jan 31 07:39:43 compute-0 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[271386]: [WARNING]  (271390) : Exiting Master process...
Jan 31 07:39:43 compute-0 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[271386]: [ALERT]    (271390) : Current worker (271392) exited with code 143 (Terminated)
Jan 31 07:39:43 compute-0 neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53[271386]: [WARNING]  (271390) : All workers exited. Exiting... (0)
Jan 31 07:39:43 compute-0 systemd[1]: libpod-eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527.scope: Deactivated successfully.
Jan 31 07:39:43 compute-0 podman[273631]: 2026-01-31 07:39:43.121933726 +0000 UTC m=+0.051660912 container died eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.139 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.145 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.156 247708 INFO nova.virt.libvirt.driver [-] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Instance destroyed successfully.
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.157 247708 DEBUG nova.objects.instance [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lazy-loading 'resources' on Instance uuid 944d6dce-4c82-4846-a5eb-57f141812e21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527-userdata-shm.mount: Deactivated successfully.
Jan 31 07:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-906139511d6ddad5518f488cd515ff4557896db59b44b917e0b49bd767436da7-merged.mount: Deactivated successfully.
Jan 31 07:39:43 compute-0 podman[273631]: 2026-01-31 07:39:43.182024632 +0000 UTC m=+0.111751858 container cleanup eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:39:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 31 07:39:43 compute-0 systemd[1]: libpod-conmon-eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527.scope: Deactivated successfully.
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.220 247708 DEBUG nova.virt.libvirt.vif [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:38:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1344131571',display_name='tempest-ServersAdminTestJSON-server-1344131571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1344131571',id=37,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:38:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b0554655ad0a48c8bf0551298dd31919',ramdisk_id='',reservation_id='r-fcqt5btw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1156607975',owner_user_name='tempest-ServersAdminTestJSON-1156607975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:38:34Z,user_data=None,user_id='8a44db09acbd4aeb990147dc979f0bfd',uuid=944d6dce-4c82-4846-a5eb-57f141812e21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.221 247708 DEBUG nova.network.os_vif_util [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converting VIF {"id": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "address": "fa:16:3e:5e:e4:9b", "network": {"id": "8c92e27e-f16c-4df2-a299-60ef2ca44f53", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1693867911-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b0554655ad0a48c8bf0551298dd31919", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedf86251-2f", "ovs_interfaceid": "edf86251-2fc9-49e5-81f5-bec6102d9c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.223 247708 DEBUG nova.network.os_vif_util [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5e:e4:9b,bridge_name='br-int',has_traffic_filtering=True,id=edf86251-2fc9-49e5-81f5-bec6102d9c52,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedf86251-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.224 247708 DEBUG os_vif [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:e4:9b,bridge_name='br-int',has_traffic_filtering=True,id=edf86251-2fc9-49e5-81f5-bec6102d9c52,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedf86251-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.227 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.228 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedf86251-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.231 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.233 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.236 247708 INFO os_vif [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5e:e4:9b,bridge_name='br-int',has_traffic_filtering=True,id=edf86251-2fc9-49e5-81f5-bec6102d9c52,network=Network(8c92e27e-f16c-4df2-a299-60ef2ca44f53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedf86251-2f')
Jan 31 07:39:43 compute-0 podman[273668]: 2026-01-31 07:39:43.254141731 +0000 UTC m=+0.049505668 container remove eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.261 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d72a80ac-b19e-4d1c-a966-a678d73e0500]: (4, ('Sat Jan 31 07:39:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 (eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527)\neb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527\nSat Jan 31 07:39:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 (eb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527)\neb703718936e6e950ba92767dbd94bc52ed34eeb219fefefd323acd8dbc95527\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.263 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fd068ac1-4030-4567-9aa4-616f565b8d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.264 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c92e27e-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.267 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-0 kernel: tap8c92e27e-f0: left promiscuous mode
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.275 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.278 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2348bb16-2880-40b5-9264-62daa1d80494]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 31 07:39:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2104040955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1549460415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.298 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a9df664e-18fd-408b-8634-fb29ed1bc544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.300 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8097b63f-32e9-458b-99bd-33a8307e6f9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.316 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[046a9532-c13e-4732-977e-b66b90ad2bf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535595, 'reachable_time': 32563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273699, 'error': None, 'target': 'ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.320 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c92e27e-f16c-4df2-a299-60ef2ca44f53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:39:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:43.320 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[fe374de1-d82b-4e8f-bbc2-1e0ac3d75947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d8c92e27e\x2df16c\x2d4df2\x2da299\x2d60ef2ca44f53.mount: Deactivated successfully.
Jan 31 07:39:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.723 247708 DEBUG nova.compute.manager [req-1d192313-c09f-44ac-b079-4a0ecd481a2a req-56525552-b891-4eeb-9767-3df696c4c749 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received event network-vif-unplugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.724 247708 DEBUG oslo_concurrency.lockutils [req-1d192313-c09f-44ac-b079-4a0ecd481a2a req-56525552-b891-4eeb-9767-3df696c4c749 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.724 247708 DEBUG oslo_concurrency.lockutils [req-1d192313-c09f-44ac-b079-4a0ecd481a2a req-56525552-b891-4eeb-9767-3df696c4c749 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.724 247708 DEBUG oslo_concurrency.lockutils [req-1d192313-c09f-44ac-b079-4a0ecd481a2a req-56525552-b891-4eeb-9767-3df696c4c749 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.725 247708 DEBUG nova.compute.manager [req-1d192313-c09f-44ac-b079-4a0ecd481a2a req-56525552-b891-4eeb-9767-3df696c4c749 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] No waiting events found dispatching network-vif-unplugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:43 compute-0 nova_compute[247704]: 2026-01-31 07:39:43.725 247708 DEBUG nova.compute.manager [req-1d192313-c09f-44ac-b079-4a0ecd481a2a req-56525552-b891-4eeb-9767-3df696c4c749 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received event network-vif-unplugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:39:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:43.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 31 07:39:44 compute-0 ceph-mon[74496]: pgmap v1195: 305 pgs: 305 active+clean; 581 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.3 MiB/s wr, 358 op/s
Jan 31 07:39:44 compute-0 ceph-mon[74496]: osdmap e172: 3 total, 3 up, 3 in
Jan 31 07:39:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 31 07:39:44 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 31 07:39:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:44.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 581 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.3 MiB/s wr, 282 op/s
Jan 31 07:39:44 compute-0 nova_compute[247704]: 2026-01-31 07:39:44.698 247708 INFO nova.virt.libvirt.driver [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Deleting instance files /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21_del
Jan 31 07:39:44 compute-0 nova_compute[247704]: 2026-01-31 07:39:44.699 247708 INFO nova.virt.libvirt.driver [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Deletion of /var/lib/nova/instances/944d6dce-4c82-4846-a5eb-57f141812e21_del complete
Jan 31 07:39:44 compute-0 nova_compute[247704]: 2026-01-31 07:39:44.861 247708 INFO nova.compute.manager [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Took 2.15 seconds to destroy the instance on the hypervisor.
Jan 31 07:39:44 compute-0 nova_compute[247704]: 2026-01-31 07:39:44.862 247708 DEBUG oslo.service.loopingcall [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:39:44 compute-0 nova_compute[247704]: 2026-01-31 07:39:44.862 247708 DEBUG nova.compute.manager [-] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:39:44 compute-0 nova_compute[247704]: 2026-01-31 07:39:44.863 247708 DEBUG nova.network.neutron [-] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:39:44 compute-0 podman[273701]: 2026-01-31 07:39:44.916241732 +0000 UTC m=+0.079264374 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 07:39:45 compute-0 ceph-mon[74496]: osdmap e173: 3 total, 3 up, 3 in
Jan 31 07:39:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1935598199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:39:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1935598199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:39:45 compute-0 nova_compute[247704]: 2026-01-31 07:39:45.882 247708 DEBUG nova.compute.manager [req-4f432c27-350a-4b61-a1ac-b731d890caf9 req-f5d74127-503b-400f-96fb-3e2d8c05018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received event network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:45 compute-0 nova_compute[247704]: 2026-01-31 07:39:45.882 247708 DEBUG oslo_concurrency.lockutils [req-4f432c27-350a-4b61-a1ac-b731d890caf9 req-f5d74127-503b-400f-96fb-3e2d8c05018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:45 compute-0 nova_compute[247704]: 2026-01-31 07:39:45.883 247708 DEBUG oslo_concurrency.lockutils [req-4f432c27-350a-4b61-a1ac-b731d890caf9 req-f5d74127-503b-400f-96fb-3e2d8c05018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:45 compute-0 nova_compute[247704]: 2026-01-31 07:39:45.883 247708 DEBUG oslo_concurrency.lockutils [req-4f432c27-350a-4b61-a1ac-b731d890caf9 req-f5d74127-503b-400f-96fb-3e2d8c05018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:45 compute-0 nova_compute[247704]: 2026-01-31 07:39:45.884 247708 DEBUG nova.compute.manager [req-4f432c27-350a-4b61-a1ac-b731d890caf9 req-f5d74127-503b-400f-96fb-3e2d8c05018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] No waiting events found dispatching network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:45 compute-0 nova_compute[247704]: 2026-01-31 07:39:45.884 247708 WARNING nova.compute.manager [req-4f432c27-350a-4b61-a1ac-b731d890caf9 req-f5d74127-503b-400f-96fb-3e2d8c05018a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received unexpected event network-vif-plugged-edf86251-2fc9-49e5-81f5-bec6102d9c52 for instance with vm_state active and task_state deleting.
Jan 31 07:39:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:45.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:46 compute-0 ceph-mon[74496]: pgmap v1198: 305 pgs: 305 active+clean; 581 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.3 MiB/s wr, 282 op/s
Jan 31 07:39:46 compute-0 nova_compute[247704]: 2026-01-31 07:39:46.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:46.556 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 560 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 6.9 MiB/s wr, 266 op/s
Jan 31 07:39:47 compute-0 ceph-mon[74496]: pgmap v1199: 305 pgs: 305 active+clean; 560 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 6.9 MiB/s wr, 266 op/s
Jan 31 07:39:47 compute-0 nova_compute[247704]: 2026-01-31 07:39:47.763 247708 DEBUG nova.network.neutron [-] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:39:47 compute-0 nova_compute[247704]: 2026-01-31 07:39:47.843 247708 INFO nova.compute.manager [-] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Took 2.98 seconds to deallocate network for instance.
Jan 31 07:39:47 compute-0 nova_compute[247704]: 2026-01-31 07:39:47.991 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:47 compute-0 nova_compute[247704]: 2026-01-31 07:39:47.992 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:48 compute-0 nova_compute[247704]: 2026-01-31 07:39:48.025 247708 DEBUG nova.compute.manager [req-9a94312e-8d35-4a92-8aad-12406d9224d4 req-32556bc6-4334-450c-ba2a-5b40c73ca756 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Received event network-vif-deleted-edf86251-2fc9-49e5-81f5-bec6102d9c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:48 compute-0 nova_compute[247704]: 2026-01-31 07:39:48.079 247708 DEBUG oslo_concurrency.processutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:48 compute-0 nova_compute[247704]: 2026-01-31 07:39:48.232 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:48.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1664142076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:48 compute-0 nova_compute[247704]: 2026-01-31 07:39:48.511 247708 DEBUG oslo_concurrency.processutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:48 compute-0 nova_compute[247704]: 2026-01-31 07:39:48.516 247708 DEBUG nova.compute.provider_tree [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:39:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1664142076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:48 compute-0 nova_compute[247704]: 2026-01-31 07:39:48.596 247708 DEBUG nova.scheduler.client.report [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:39:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 551 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.8 MiB/s wr, 191 op/s
Jan 31 07:39:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:48 compute-0 nova_compute[247704]: 2026-01-31 07:39:48.726 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:48 compute-0 nova_compute[247704]: 2026-01-31 07:39:48.828 247708 INFO nova.scheduler.client.report [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Deleted allocations for instance 944d6dce-4c82-4846-a5eb-57f141812e21
Jan 31 07:39:49 compute-0 nova_compute[247704]: 2026-01-31 07:39:49.136 247708 DEBUG oslo_concurrency.lockutils [None req-1fa71bf3-3239-483b-9fe0-7c1aa22575fc 8a44db09acbd4aeb990147dc979f0bfd b0554655ad0a48c8bf0551298dd31919 - - default default] Lock "944d6dce-4c82-4846-a5eb-57f141812e21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 31 07:39:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 31 07:39:49 compute-0 ceph-mon[74496]: pgmap v1200: 305 pgs: 305 active+clean; 551 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.8 MiB/s wr, 191 op/s
Jan 31 07:39:49 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 31 07:39:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:50.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:39:50 compute-0 sudo[273745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:50 compute-0 sudo[273745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:50 compute-0 sudo[273745]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:50 compute-0 sudo[273770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:39:50 compute-0 sudo[273770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:39:50 compute-0 sudo[273770]: pam_unix(sudo:session): session closed for user root
Jan 31 07:39:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:50.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:50 compute-0 ceph-mon[74496]: osdmap e174: 3 total, 3 up, 3 in
Jan 31 07:39:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 545 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 6.6 MiB/s wr, 330 op/s
Jan 31 07:39:51 compute-0 nova_compute[247704]: 2026-01-31 07:39:51.445 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:51 compute-0 nova_compute[247704]: 2026-01-31 07:39:51.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:51 compute-0 nova_compute[247704]: 2026-01-31 07:39:51.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:39:51 compute-0 ceph-mon[74496]: pgmap v1202: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 545 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 6.6 MiB/s wr, 330 op/s
Jan 31 07:39:51 compute-0 nova_compute[247704]: 2026-01-31 07:39:51.789 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:39:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:52.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:52.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 514 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.4 MiB/s wr, 348 op/s
Jan 31 07:39:52 compute-0 nova_compute[247704]: 2026-01-31 07:39:52.927 247708 DEBUG oslo_concurrency.lockutils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:52 compute-0 nova_compute[247704]: 2026-01-31 07:39:52.927 247708 DEBUG oslo_concurrency.lockutils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:52 compute-0 nova_compute[247704]: 2026-01-31 07:39:52.956 247708 DEBUG nova.objects.instance [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lazy-loading 'flavor' on Instance uuid 30b63014-8760-428b-a66d-201587534734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.027 247708 DEBUG oslo_concurrency.lockutils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.235 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.338 247708 DEBUG oslo_concurrency.lockutils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.339 247708 DEBUG oslo_concurrency.lockutils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.339 247708 INFO nova.compute.manager [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Attaching volume 0de5a428-04b5-4163-84c5-f4e760bbda69 to /dev/sdc
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.553 247708 DEBUG os_brick.utils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.555 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.568 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.569 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[5058c178-83b5-4272-a9c1-21d4e00eee17]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.571 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.579 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.580 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[b65b2547-23bc-477a-b1a9-685462609d73]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.582 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.590 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.590 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d6f561-767c-400b-8d34-4ade6e2e6a3a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.591 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d0c118-dca5-41cd-8ea3-2ee50fde35ac]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.592 247708 DEBUG oslo_concurrency.processutils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.619 247708 DEBUG oslo_concurrency.processutils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.621 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.621 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.621 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.622 247708 DEBUG os_brick.utils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:39:53 compute-0 nova_compute[247704]: 2026-01-31 07:39:53.622 247708 DEBUG nova.virt.block_device [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Updating existing volume attachment record: 727ce9e7-c88c-41c6-a7bc-1248fde884b8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:39:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 31 07:39:54 compute-0 ceph-mon[74496]: pgmap v1203: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 514 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.4 MiB/s wr, 348 op/s
Jan 31 07:39:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2608596935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:54.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 31 07:39:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 31 07:39:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:39:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:54.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.554 247708 DEBUG nova.objects.instance [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lazy-loading 'flavor' on Instance uuid 30b63014-8760-428b-a66d-201587534734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.557 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.597 247708 DEBUG nova.virt.libvirt.guest [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 07:39:54 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:39:54 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-0de5a428-04b5-4163-84c5-f4e760bbda69">
Jan 31 07:39:54 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:39:54 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:39:54 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:39:54 compute-0 nova_compute[247704]:   </source>
Jan 31 07:39:54 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 07:39:54 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:39:54 compute-0 nova_compute[247704]:   </auth>
Jan 31 07:39:54 compute-0 nova_compute[247704]:   <target dev="sdc" bus="scsi"/>
Jan 31 07:39:54 compute-0 nova_compute[247704]:   <serial>0de5a428-04b5-4163-84c5-f4e760bbda69</serial>
Jan 31 07:39:54 compute-0 nova_compute[247704]:   <address type="drive" controller="0" unit="2"/>
Jan 31 07:39:54 compute-0 nova_compute[247704]: </disk>
Jan 31 07:39:54 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 07:39:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 514 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 253 op/s
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.733 247708 DEBUG nova.virt.libvirt.driver [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.734 247708 DEBUG nova.virt.libvirt.driver [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.734 247708 DEBUG nova.virt.libvirt.driver [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] No BDM found with device name sdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.734 247708 DEBUG nova.virt.libvirt.driver [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] No VIF found with MAC fa:16:3e:7d:a3:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:39:54 compute-0 nova_compute[247704]: 2026-01-31 07:39:54.944 247708 DEBUG oslo_concurrency.lockutils [None req-a9d60c5e-f379-4ed5-b633-353d29f6f17c 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2782953610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:55 compute-0 ceph-mon[74496]: osdmap e175: 3 total, 3 up, 3 in
Jan 31 07:39:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1480405234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:39:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/579142581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:56.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 31 07:39:56 compute-0 ceph-mon[74496]: pgmap v1205: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 514 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 253 op/s
Jan 31 07:39:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3209765064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 31 07:39:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 31 07:39:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:56.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.447 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.595 247708 DEBUG oslo_concurrency.lockutils [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.596 247708 DEBUG oslo_concurrency.lockutils [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.618 247708 INFO nova.compute.manager [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Detaching volume 0de5a428-04b5-4163-84c5-f4e760bbda69
Jan 31 07:39:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 335 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 363 op/s
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.751 247708 INFO nova.virt.block_device [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Attempting to driver detach volume 0de5a428-04b5-4163-84c5-f4e760bbda69 from mountpoint /dev/sdc
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.759 247708 DEBUG nova.virt.libvirt.driver [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Attempting to detach device sdc from instance 30b63014-8760-428b-a66d-201587534734 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.760 247708 DEBUG nova.virt.libvirt.guest [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-0de5a428-04b5-4163-84c5-f4e760bbda69">
Jan 31 07:39:56 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   </source>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <target dev="sdc" bus="scsi"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <serial>0de5a428-04b5-4163-84c5-f4e760bbda69</serial>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]: </disk>
Jan 31 07:39:56 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.767 247708 INFO nova.virt.libvirt.driver [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Successfully detached device sdc from instance 30b63014-8760-428b-a66d-201587534734 from the persistent domain config.
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.767 247708 DEBUG nova.virt.libvirt.driver [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] (1/8): Attempting to detach device sdc with device alias scsi0-0-0-2 from instance 30b63014-8760-428b-a66d-201587534734 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.767 247708 DEBUG nova.virt.libvirt.guest [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-0de5a428-04b5-4163-84c5-f4e760bbda69">
Jan 31 07:39:56 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   </source>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <target dev="sdc" bus="scsi"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <serial>0de5a428-04b5-4163-84c5-f4e760bbda69</serial>
Jan 31 07:39:56 compute-0 nova_compute[247704]:   <address type="drive" controller="0" bus="0" target="0" unit="2"/>
Jan 31 07:39:56 compute-0 nova_compute[247704]: </disk>
Jan 31 07:39:56 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.839 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769845196.8393877, 30b63014-8760-428b-a66d-201587534734 => scsi0-0-0-2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.841 247708 DEBUG nova.virt.libvirt.driver [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Start waiting for the detach event from libvirt for device sdc with device alias scsi0-0-0-2 for instance 30b63014-8760-428b-a66d-201587534734 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 07:39:56 compute-0 nova_compute[247704]: 2026-01-31 07:39:56.843 247708 INFO nova.virt.libvirt.driver [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Successfully detached device sdc from instance 30b63014-8760-428b-a66d-201587534734 from the live domain config.
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.056 247708 DEBUG nova.objects.instance [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lazy-loading 'flavor' on Instance uuid 30b63014-8760-428b-a66d-201587534734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 31 07:39:57 compute-0 ceph-mon[74496]: osdmap e176: 3 total, 3 up, 3 in
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.113 247708 DEBUG oslo_concurrency.lockutils [None req-02a2f9c8-0dc1-48e6-b6f5-5b98d3540f43 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 31 07:39:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.815 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.815 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.815 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.816 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:39:57 compute-0 nova_compute[247704]: 2026-01-31 07:39:57.816 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:58.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.152 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845183.1512117, 944d6dce-4c82-4846-a5eb-57f141812e21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.153 247708 INFO nova.compute.manager [-] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] VM Stopped (Lifecycle Event)
Jan 31 07:39:58 compute-0 ceph-mon[74496]: pgmap v1207: 305 pgs: 305 active+clean; 335 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 363 op/s
Jan 31 07:39:58 compute-0 ceph-mon[74496]: osdmap e177: 3 total, 3 up, 3 in
Jan 31 07:39:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/92165256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.238 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244405028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.294 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 31 07:39:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:39:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:39:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:58.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.513 247708 DEBUG nova.compute.manager [None req-132d098b-003f-4764-bf46-ea85c0f85083 - - - - - -] [instance: 944d6dce-4c82-4846-a5eb-57f141812e21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:39:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 310 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 504 KiB/s wr, 216 op/s
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.631 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.632 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.693 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.694 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.694 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.694 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.695 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.696 247708 INFO nova.compute.manager [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Terminating instance
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.697 247708 DEBUG nova.compute.manager [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:39:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:39:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 31 07:39:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 31 07:39:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 31 07:39:58 compute-0 kernel: tapa9ae1f79-8a (unregistering): left promiscuous mode
Jan 31 07:39:58 compute-0 NetworkManager[49108]: <info>  [1769845198.8302] device (tapa9ae1f79-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.841 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:58 compute-0 ovn_controller[149457]: 2026-01-31T07:39:58Z|00141|binding|INFO|Releasing lport a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 from this chassis (sb_readonly=0)
Jan 31 07:39:58 compute-0 ovn_controller[149457]: 2026-01-31T07:39:58Z|00142|binding|INFO|Setting lport a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 down in Southbound
Jan 31 07:39:58 compute-0 ovn_controller[149457]: 2026-01-31T07:39:58Z|00143|binding|INFO|Removing iface tapa9ae1f79-8a ovn-installed in OVS
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.844 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.862 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.864 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4467MB free_disk=20.83910369873047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.864 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.865 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.868 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:58 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000028.scope: Deactivated successfully.
Jan 31 07:39:58 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000028.scope: Consumed 15.930s CPU time.
Jan 31 07:39:58 compute-0 systemd-machined[214448]: Machine qemu-20-instance-00000028 terminated.
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.937 247708 INFO nova.virt.libvirt.driver [-] [instance: 30b63014-8760-428b-a66d-201587534734] Instance destroyed successfully.
Jan 31 07:39:58 compute-0 nova_compute[247704]: 2026-01-31 07:39:58.938 247708 DEBUG nova.objects.instance [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lazy-loading 'resources' on Instance uuid 30b63014-8760-428b-a66d-201587534734 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:39:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:58.946 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:a3:77 10.100.0.7'], port_security=['fa:16:3e:7d:a3:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '30b63014-8760-428b-a66d-201587534734', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa5ee378-6843-499e-8557-33660028a787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72635784cc4840bba682e8305945e795', 'neutron:revision_number': '4', 'neutron:security_group_ids': '23fa9621-004a-48f7-bc75-80280285867b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b38c7904-e395-4a73-b096-305f428a7f2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:39:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:58.948 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 in datapath aa5ee378-6843-499e-8557-33660028a787 unbound from our chassis
Jan 31 07:39:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:58.952 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa5ee378-6843-499e-8557-33660028a787, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:39:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:58.953 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6deb04b0-e8cf-436e-b6b4-901a0efbb11e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:58.954 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa5ee378-6843-499e-8557-33660028a787 namespace which is not needed anymore
Jan 31 07:39:59 compute-0 neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787[272463]: [NOTICE]   (272467) : haproxy version is 2.8.14-c23fe91
Jan 31 07:39:59 compute-0 neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787[272463]: [NOTICE]   (272467) : path to executable is /usr/sbin/haproxy
Jan 31 07:39:59 compute-0 neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787[272463]: [WARNING]  (272467) : Exiting Master process...
Jan 31 07:39:59 compute-0 neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787[272463]: [ALERT]    (272467) : Current worker (272469) exited with code 143 (Terminated)
Jan 31 07:39:59 compute-0 neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787[272463]: [WARNING]  (272467) : All workers exited. Exiting... (0)
Jan 31 07:39:59 compute-0 systemd[1]: libpod-a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f.scope: Deactivated successfully.
Jan 31 07:39:59 compute-0 conmon[272463]: conmon a974a3c4bec571bc3882 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f.scope/container/memory.events
Jan 31 07:39:59 compute-0 podman[273888]: 2026-01-31 07:39:59.116383849 +0000 UTC m=+0.050928693 container died a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f-userdata-shm.mount: Deactivated successfully.
Jan 31 07:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f1b1eb3185f96cd5a14c9549aee704c02b38b3727b18d216789a038bf52e0be-merged.mount: Deactivated successfully.
Jan 31 07:39:59 compute-0 podman[273888]: 2026-01-31 07:39:59.156207641 +0000 UTC m=+0.090752485 container cleanup a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:39:59 compute-0 systemd[1]: libpod-conmon-a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f.scope: Deactivated successfully.
Jan 31 07:39:59 compute-0 podman[273930]: 2026-01-31 07:39:59.225331457 +0000 UTC m=+0.048695189 container remove a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:39:59 compute-0 podman[273904]: 2026-01-31 07:39:59.229354165 +0000 UTC m=+0.091570984 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.228 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3246ef-b8c5-4b52-81fc-776f446f2ed2]: (4, ('Sat Jan 31 07:39:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787 (a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f)\na974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f\nSat Jan 31 07:39:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aa5ee378-6843-499e-8557-33660028a787 (a974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f)\na974a3c4bec571bc38821eed283e693b6760a0050e2bc65a0a82b14345c4ea1f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.231 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[18f42437-ca3c-402e-8316-5e4f23f38005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.233 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa5ee378-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.235 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:59 compute-0 kernel: tapaa5ee378-60: left promiscuous mode
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.244 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2244405028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:59 compute-0 ceph-mon[74496]: osdmap e178: 3 total, 3 up, 3 in
Jan 31 07:39:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2259836008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:59 compute-0 ceph-mon[74496]: osdmap e179: 3 total, 3 up, 3 in
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.247 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9711dab6-6a8a-4106-a3ba-bde9ef27548b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.261 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9b457d5e-d1b6-426e-9bfa-cbc08c96bfb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.263 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae3020e-2ddf-4d18-8cf1-f7c208c35cab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.279 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[89e087f2-8826-4fe7-81dd-55cf13e3e5e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539453, 'reachable_time': 42022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273963, 'error': None, 'target': 'ovnmeta-aa5ee378-6843-499e-8557-33660028a787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.281 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa5ee378-6843-499e-8557-33660028a787 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:39:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:39:59.281 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4f4905-c5fd-4e3d-aca7-49bc7319f81c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:39:59 compute-0 systemd[1]: run-netns-ovnmeta\x2daa5ee378\x2d6843\x2d499e\x2d8557\x2d33660028a787.mount: Deactivated successfully.
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.341 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 30b63014-8760-428b-a66d-201587534734 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.341 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.341 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.365 247708 DEBUG nova.virt.libvirt.vif [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-566270408',display_name='tempest-AttachSCSIVolumeTestJSON-server-566270408',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-566270408',id=40,image_ref='6e711352-ba7c-4c7d-858b-ee38dcbc90e8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ8vBVWoL3Kx71hGbZmgL+RMyK5pg4RSZWIQfJEgplPiT0Fp1pGPD1OLdKdjcCnYq1EimTVe6rq8ALChl/h7Yg9BhByf9wCDvRf6vvgfRnT8vgLocvPXjafhyrcrgxpSeA==',key_name='tempest-keypair-2023988493',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:39:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='72635784cc4840bba682e8305945e795',ramdisk_id='',reservation_id='r-0g6dsac9',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6e711352-ba7c-4c7d-858b-ee38dcbc90e8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_scsi_model='virtio-scsi',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachSCSIVolumeTestJSON-1695700312',owner_user_name='tempest-AttachSCSIVolumeTestJSON-1695700312-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:39:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c95bb3f52804685a0ba62164a02b535',uuid=30b63014-8760-428b-a66d-201587534734,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.366 247708 DEBUG nova.network.os_vif_util [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Converting VIF {"id": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "address": "fa:16:3e:7d:a3:77", "network": {"id": "aa5ee378-6843-499e-8557-33660028a787", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-1279273932-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72635784cc4840bba682e8305945e795", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9ae1f79-8a", "ovs_interfaceid": "a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.367 247708 DEBUG nova.network.os_vif_util [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:a3:77,bridge_name='br-int',has_traffic_filtering=True,id=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7,network=Network(aa5ee378-6843-499e-8557-33660028a787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9ae1f79-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.367 247708 DEBUG os_vif [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:a3:77,bridge_name='br-int',has_traffic_filtering=True,id=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7,network=Network(aa5ee378-6843-499e-8557-33660028a787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9ae1f79-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.369 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9ae1f79-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.370 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.373 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.375 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.398 247708 INFO os_vif [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:a3:77,bridge_name='br-int',has_traffic_filtering=True,id=a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7,network=Network(aa5ee378-6843-499e-8557-33660028a787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9ae1f79-8a')
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.457 247708 DEBUG nova.compute.manager [req-6189ba03-6357-461a-a8b4-1a24784f6b9e req-8256ca6c-b9a7-4321-a959-6ca74bea5db6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received event network-vif-unplugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.458 247708 DEBUG oslo_concurrency.lockutils [req-6189ba03-6357-461a-a8b4-1a24784f6b9e req-8256ca6c-b9a7-4321-a959-6ca74bea5db6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.458 247708 DEBUG oslo_concurrency.lockutils [req-6189ba03-6357-461a-a8b4-1a24784f6b9e req-8256ca6c-b9a7-4321-a959-6ca74bea5db6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.458 247708 DEBUG oslo_concurrency.lockutils [req-6189ba03-6357-461a-a8b4-1a24784f6b9e req-8256ca6c-b9a7-4321-a959-6ca74bea5db6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.458 247708 DEBUG nova.compute.manager [req-6189ba03-6357-461a-a8b4-1a24784f6b9e req-8256ca6c-b9a7-4321-a959-6ca74bea5db6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] No waiting events found dispatching network-vif-unplugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.459 247708 DEBUG nova.compute.manager [req-6189ba03-6357-461a-a8b4-1a24784f6b9e req-8256ca6c-b9a7-4321-a959-6ca74bea5db6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received event network-vif-unplugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.719 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:39:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:39:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501468041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.839 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.844 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.882 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.951 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:39:59 compute-0 nova_compute[247704]: 2026-01-31 07:39:59.952 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:40:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:00.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:00.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:00 compute-0 ceph-mon[74496]: pgmap v1210: 305 pgs: 305 active+clean; 310 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 504 KiB/s wr, 216 op/s
Jan 31 07:40:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2270609077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1501468041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:40:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 293 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 271 op/s
Jan 31 07:40:00 compute-0 nova_compute[247704]: 2026-01-31 07:40:00.954 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:00 compute-0 nova_compute[247704]: 2026-01-31 07:40:00.955 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.449 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.569 247708 DEBUG nova.compute.manager [req-7c3c3a1d-109d-4664-bc83-18825b24edb0 req-e7f93ec0-8a97-4a1d-9a13-9ba80ddc3585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received event network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.569 247708 DEBUG oslo_concurrency.lockutils [req-7c3c3a1d-109d-4664-bc83-18825b24edb0 req-e7f93ec0-8a97-4a1d-9a13-9ba80ddc3585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "30b63014-8760-428b-a66d-201587534734-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.569 247708 DEBUG oslo_concurrency.lockutils [req-7c3c3a1d-109d-4664-bc83-18825b24edb0 req-e7f93ec0-8a97-4a1d-9a13-9ba80ddc3585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.569 247708 DEBUG oslo_concurrency.lockutils [req-7c3c3a1d-109d-4664-bc83-18825b24edb0 req-e7f93ec0-8a97-4a1d-9a13-9ba80ddc3585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30b63014-8760-428b-a66d-201587534734-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.570 247708 DEBUG nova.compute.manager [req-7c3c3a1d-109d-4664-bc83-18825b24edb0 req-e7f93ec0-8a97-4a1d-9a13-9ba80ddc3585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] No waiting events found dispatching network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.570 247708 WARNING nova.compute.manager [req-7c3c3a1d-109d-4664-bc83-18825b24edb0 req-e7f93ec0-8a97-4a1d-9a13-9ba80ddc3585 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received unexpected event network-vif-plugged-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 for instance with vm_state active and task_state deleting.
Jan 31 07:40:01 compute-0 ceph-mon[74496]: pgmap v1212: 305 pgs: 305 active+clean; 293 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 271 op/s
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.826 247708 INFO nova.virt.libvirt.driver [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Deleting instance files /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734_del
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.827 247708 INFO nova.virt.libvirt.driver [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Deletion of /var/lib/nova/instances/30b63014-8760-428b-a66d-201587534734_del complete
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.944 247708 INFO nova.compute.manager [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Took 3.25 seconds to destroy the instance on the hypervisor.
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.945 247708 DEBUG oslo.service.loopingcall [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.946 247708 DEBUG nova.compute.manager [-] [instance: 30b63014-8760-428b-a66d-201587534734] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:40:01 compute-0 nova_compute[247704]: 2026-01-31 07:40:01.946 247708 DEBUG nova.network.neutron [-] [instance: 30b63014-8760-428b-a66d-201587534734] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:40:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:02.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 256 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Jan 31 07:40:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 31 07:40:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 31 07:40:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.146 247708 DEBUG nova.network.neutron [-] [instance: 30b63014-8760-428b-a66d-201587534734] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.215 247708 INFO nova.compute.manager [-] [instance: 30b63014-8760-428b-a66d-201587534734] Took 1.27 seconds to deallocate network for instance.
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.394 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.395 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.439 247708 DEBUG oslo_concurrency.processutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 31 07:40:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-0 ceph-mon[74496]: pgmap v1213: 305 pgs: 305 active+clean; 256 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 239 op/s
Jan 31 07:40:03 compute-0 ceph-mon[74496]: osdmap e180: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-0 ceph-mon[74496]: osdmap e181: 3 total, 3 up, 3 in
Jan 31 07:40:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:40:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/287017321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.884 247708 DEBUG oslo_concurrency.processutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.890 247708 DEBUG nova.compute.provider_tree [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.904 247708 DEBUG nova.scheduler.client.report [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.931 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:03 compute-0 nova_compute[247704]: 2026-01-31 07:40:03.961 247708 INFO nova.scheduler.client.report [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Deleted allocations for instance 30b63014-8760-428b-a66d-201587534734
Jan 31 07:40:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:04.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:04 compute-0 nova_compute[247704]: 2026-01-31 07:40:04.030 247708 DEBUG oslo_concurrency.lockutils [None req-057478a9-acaa-46ff-8654-b90479262298 7c95bb3f52804685a0ba62164a02b535 72635784cc4840bba682e8305945e795 - - default default] Lock "30b63014-8760-428b-a66d-201587534734" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:04 compute-0 nova_compute[247704]: 2026-01-31 07:40:04.371 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 256 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.2 MiB/s wr, 191 op/s
Jan 31 07:40:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/287017321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:04 compute-0 nova_compute[247704]: 2026-01-31 07:40:04.871 247708 DEBUG nova.compute.manager [req-f36843e3-ab0a-44ce-9cfc-86bbf0af5350 req-8a39ad25-b464-4a7d-af95-6745488ffce8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30b63014-8760-428b-a66d-201587534734] Received event network-vif-deleted-a9ae1f79-8ae0-4df4-9c50-2f7aaf0c87c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:05 compute-0 ceph-mon[74496]: pgmap v1216: 305 pgs: 305 active+clean; 256 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.2 MiB/s wr, 191 op/s
Jan 31 07:40:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1676011270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:06.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:06.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.451 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.560 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "314f0738-9cae-4fe8-8b90-3d18f72488ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.560 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.605 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:40:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 80 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 264 op/s
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.693 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.694 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.699 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.700 247708 INFO nova.compute.claims [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:40:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 31 07:40:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/979781824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 31 07:40:06 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 31 07:40:06 compute-0 nova_compute[247704]: 2026-01-31 07:40:06.913 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:40:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1190711260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:07 compute-0 nova_compute[247704]: 2026-01-31 07:40:07.393 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:07 compute-0 nova_compute[247704]: 2026-01-31 07:40:07.398 247708 DEBUG nova.compute.provider_tree [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:40:07 compute-0 nova_compute[247704]: 2026-01-31 07:40:07.817 247708 DEBUG nova.scheduler.client.report [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:40:07 compute-0 ceph-mon[74496]: pgmap v1217: 305 pgs: 305 active+clean; 80 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 264 op/s
Jan 31 07:40:07 compute-0 ceph-mon[74496]: osdmap e182: 3 total, 3 up, 3 in
Jan 31 07:40:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1190711260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:08.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:08 compute-0 nova_compute[247704]: 2026-01-31 07:40:08.136 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:08 compute-0 nova_compute[247704]: 2026-01-31 07:40:08.138 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:40:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:40:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:08.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:40:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 54 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 9.3 KiB/s wr, 167 op/s
Jan 31 07:40:08 compute-0 nova_compute[247704]: 2026-01-31 07:40:08.699 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:40:08 compute-0 nova_compute[247704]: 2026-01-31 07:40:08.700 247708 DEBUG nova.network.neutron [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:40:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.037 247708 INFO nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.222 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.373 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.496 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.498 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.498 247708 INFO nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Creating image(s)
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.529 247708 DEBUG nova.storage.rbd_utils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.566 247708 DEBUG nova.storage.rbd_utils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.603 247708 DEBUG nova.storage.rbd_utils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.608 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.686 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.687 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.688 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.688 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.718 247708 DEBUG nova.storage.rbd_utils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:09 compute-0 nova_compute[247704]: 2026-01-31 07:40:09.723 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:09 compute-0 ceph-mon[74496]: pgmap v1219: 305 pgs: 305 active+clean; 54 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 9.3 KiB/s wr, 167 op/s
Jan 31 07:40:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:40:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:10.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.059 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.173 247708 DEBUG nova.storage.rbd_utils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] resizing rbd image 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.229 247708 DEBUG nova.policy [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '533eaca1e9c4430dabe2b0a39039ca65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:40:10 compute-0 sudo[274192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:10 compute-0 sudo[274192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:10 compute-0 sudo[274192]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:10 compute-0 sudo[274228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:10 compute-0 sudo[274228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:10 compute-0 sudo[274228]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.351 247708 DEBUG nova.objects.instance [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'migration_context' on Instance uuid 314f0738-9cae-4fe8-8b90-3d18f72488ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:40:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:10.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.377 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.378 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Ensure instance console log exists: /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.378 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.379 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:10 compute-0 nova_compute[247704]: 2026-01-31 07:40:10.379 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 53 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 111 KiB/s rd, 202 KiB/s wr, 163 op/s
Jan 31 07:40:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/694355676' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:40:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/694355676' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:40:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:11.149 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:11.150 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:11.151 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:11 compute-0 nova_compute[247704]: 2026-01-31 07:40:11.453 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:11 compute-0 nova_compute[247704]: 2026-01-31 07:40:11.603 247708 DEBUG nova.network.neutron [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Successfully created port: cf7d03dc-440f-4773-a9d7-9519fc850bf4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:40:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:12.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:12 compute-0 ceph-mon[74496]: pgmap v1220: 305 pgs: 305 active+clean; 53 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 111 KiB/s rd, 202 KiB/s wr, 163 op/s
Jan 31 07:40:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:12.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 68 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 968 KiB/s wr, 162 op/s
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.219 247708 DEBUG nova.network.neutron [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Successfully updated port: cf7d03dc-440f-4773-a9d7-9519fc850bf4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.265 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.265 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquired lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.265 247708 DEBUG nova.network.neutron [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.324 247708 DEBUG nova.compute.manager [req-c5e59a6c-0435-44d0-9518-2637d2895e83 req-8bcfa1b5-ef88-4557-90aa-c6361beb1b04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received event network-changed-cf7d03dc-440f-4773-a9d7-9519fc850bf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.324 247708 DEBUG nova.compute.manager [req-c5e59a6c-0435-44d0-9518-2637d2895e83 req-8bcfa1b5-ef88-4557-90aa-c6361beb1b04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Refreshing instance network info cache due to event network-changed-cf7d03dc-440f-4773-a9d7-9519fc850bf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.325 247708 DEBUG oslo_concurrency.lockutils [req-c5e59a6c-0435-44d0-9518-2637d2895e83 req-8bcfa1b5-ef88-4557-90aa-c6361beb1b04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.495 247708 DEBUG nova.network.neutron [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:40:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 31 07:40:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 31 07:40:13 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.936 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845198.9350162, 30b63014-8760-428b-a66d-201587534734 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:13 compute-0 nova_compute[247704]: 2026-01-31 07:40:13.936 247708 INFO nova.compute.manager [-] [instance: 30b63014-8760-428b-a66d-201587534734] VM Stopped (Lifecycle Event)
Jan 31 07:40:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:14.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:14 compute-0 ceph-mon[74496]: pgmap v1221: 305 pgs: 305 active+clean; 68 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 968 KiB/s wr, 162 op/s
Jan 31 07:40:14 compute-0 ceph-mon[74496]: osdmap e183: 3 total, 3 up, 3 in
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.340 247708 DEBUG nova.compute.manager [None req-825739a8-79a7-4a36-b208-a89c2c190207 - - - - - -] [instance: 30b63014-8760-428b-a66d-201587534734] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.375 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:14.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.424 247708 DEBUG nova.network.neutron [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Updating instance_info_cache with network_info: [{"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.446 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Releasing lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.446 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Instance network_info: |[{"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.447 247708 DEBUG oslo_concurrency.lockutils [req-c5e59a6c-0435-44d0-9518-2637d2895e83 req-8bcfa1b5-ef88-4557-90aa-c6361beb1b04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.448 247708 DEBUG nova.network.neutron [req-c5e59a6c-0435-44d0-9518-2637d2895e83 req-8bcfa1b5-ef88-4557-90aa-c6361beb1b04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Refreshing network info cache for port cf7d03dc-440f-4773-a9d7-9519fc850bf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.453 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Start _get_guest_xml network_info=[{"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.458 247708 WARNING nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.466 247708 DEBUG nova.virt.libvirt.host [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.466 247708 DEBUG nova.virt.libvirt.host [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.473 247708 DEBUG nova.virt.libvirt.host [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.473 247708 DEBUG nova.virt.libvirt.host [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.475 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.475 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.475 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.476 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.476 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.476 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.476 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.477 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.477 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.477 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.478 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.478 247708 DEBUG nova.virt.hardware [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.481 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 68 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 1.0 MiB/s wr, 62 op/s
Jan 31 07:40:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:40:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735678213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.926 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.964 247708 DEBUG nova.storage.rbd_utils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:14 compute-0 nova_compute[247704]: 2026-01-31 07:40:14.971 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/735678213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:40:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4167689424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.461 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.464 247708 DEBUG nova.virt.libvirt.vif [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-480511753',display_name='tempest-ImagesTestJSON-server-480511753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-480511753',id=42,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-c0v7dnna',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:40:09Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=314f0738-9cae-4fe8-8b90-3d18f72488ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.465 247708 DEBUG nova.network.os_vif_util [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.466 247708 DEBUG nova.network.os_vif_util [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:2b:4e,bridge_name='br-int',has_traffic_filtering=True,id=cf7d03dc-440f-4773-a9d7-9519fc850bf4,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf7d03dc-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.468 247708 DEBUG nova.objects.instance [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 314f0738-9cae-4fe8-8b90-3d18f72488ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.595 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <uuid>314f0738-9cae-4fe8-8b90-3d18f72488ef</uuid>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <name>instance-0000002a</name>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <nova:name>tempest-ImagesTestJSON-server-480511753</nova:name>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:40:14</nova:creationTime>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <nova:user uuid="533eaca1e9c4430dabe2b0a39039ca65">tempest-ImagesTestJSON-533495031-project-member</nova:user>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <nova:project uuid="b3e3e6f216d24c1f9f68777cfb63dbf8">tempest-ImagesTestJSON-533495031</nova:project>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <nova:port uuid="cf7d03dc-440f-4773-a9d7-9519fc850bf4">
Jan 31 07:40:15 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <system>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <entry name="serial">314f0738-9cae-4fe8-8b90-3d18f72488ef</entry>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <entry name="uuid">314f0738-9cae-4fe8-8b90-3d18f72488ef</entry>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </system>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <os>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   </os>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <features>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   </features>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/314f0738-9cae-4fe8-8b90-3d18f72488ef_disk">
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       </source>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/314f0738-9cae-4fe8-8b90-3d18f72488ef_disk.config">
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       </source>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:40:15 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:48:2b:4e"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <target dev="tapcf7d03dc-44"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef/console.log" append="off"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <video>
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </video>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:40:15 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:40:15 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:40:15 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:40:15 compute-0 nova_compute[247704]: </domain>
Jan 31 07:40:15 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.597 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Preparing to wait for external event network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.597 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.597 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.598 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.598 247708 DEBUG nova.virt.libvirt.vif [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-480511753',display_name='tempest-ImagesTestJSON-server-480511753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-480511753',id=42,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-c0v7dnna',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:40:09Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=314f0738-9cae-4fe8-8b90-3d18f72488ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.599 247708 DEBUG nova.network.os_vif_util [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.599 247708 DEBUG nova.network.os_vif_util [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:2b:4e,bridge_name='br-int',has_traffic_filtering=True,id=cf7d03dc-440f-4773-a9d7-9519fc850bf4,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf7d03dc-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.600 247708 DEBUG os_vif [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:2b:4e,bridge_name='br-int',has_traffic_filtering=True,id=cf7d03dc-440f-4773-a9d7-9519fc850bf4,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf7d03dc-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.600 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.601 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.601 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.606 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.606 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcf7d03dc-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.606 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcf7d03dc-44, col_values=(('external_ids', {'iface-id': 'cf7d03dc-440f-4773-a9d7-9519fc850bf4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:2b:4e', 'vm-uuid': '314f0738-9cae-4fe8-8b90-3d18f72488ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.608 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:15 compute-0 NetworkManager[49108]: <info>  [1769845215.6093] manager: (tapcf7d03dc-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.610 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.614 247708 INFO os_vif [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:2b:4e,bridge_name='br-int',has_traffic_filtering=True,id=cf7d03dc-440f-4773-a9d7-9519fc850bf4,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf7d03dc-44')
Jan 31 07:40:15 compute-0 systemd[1]: Starting dnf makecache...
Jan 31 07:40:15 compute-0 podman[274339]: 2026-01-31 07:40:15.753200702 +0000 UTC m=+0.090743605 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.823 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.823 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.824 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No VIF found with MAC fa:16:3e:48:2b:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.824 247708 INFO nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Using config drive
Jan 31 07:40:15 compute-0 nova_compute[247704]: 2026-01-31 07:40:15.848 247708 DEBUG nova.storage.rbd_utils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:15 compute-0 dnf[274340]: Metadata cache refreshed recently.
Jan 31 07:40:15 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 07:40:15 compute-0 systemd[1]: Finished dnf makecache.
Jan 31 07:40:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:16.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:16.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:16 compute-0 ceph-mon[74496]: pgmap v1223: 305 pgs: 305 active+clean; 68 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 1.0 MiB/s wr, 62 op/s
Jan 31 07:40:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4167689424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.397 247708 INFO nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Creating config drive at /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef/disk.config
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.402 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqwsrvgvp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.456 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.540 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqwsrvgvp" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.577 247708 DEBUG nova.storage.rbd_utils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.582 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef/disk.config 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.854 247708 DEBUG oslo_concurrency.processutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef/disk.config 314f0738-9cae-4fe8-8b90-3d18f72488ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.855 247708 INFO nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Deleting local config drive /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef/disk.config because it was imported into RBD.
Jan 31 07:40:16 compute-0 kernel: tapcf7d03dc-44: entered promiscuous mode
Jan 31 07:40:16 compute-0 NetworkManager[49108]: <info>  [1769845216.9082] manager: (tapcf7d03dc-44): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.910 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:16 compute-0 ovn_controller[149457]: 2026-01-31T07:40:16Z|00144|binding|INFO|Claiming lport cf7d03dc-440f-4773-a9d7-9519fc850bf4 for this chassis.
Jan 31 07:40:16 compute-0 ovn_controller[149457]: 2026-01-31T07:40:16Z|00145|binding|INFO|cf7d03dc-440f-4773-a9d7-9519fc850bf4: Claiming fa:16:3e:48:2b:4e 10.100.0.6
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:16 compute-0 systemd-udevd[274431]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.939 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:16 compute-0 systemd-machined[214448]: New machine qemu-21-instance-0000002a.
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.944 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.944 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:2b:4e 10.100.0.6'], port_security=['fa:16:3e:48:2b:4e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '314f0738-9cae-4fe8-8b90-3d18f72488ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=cf7d03dc-440f-4773-a9d7-9519fc850bf4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.945 160028 INFO neutron.agent.ovn.metadata.agent [-] Port cf7d03dc-440f-4773-a9d7-9519fc850bf4 in datapath cffffabd-62a6-4362-9315-bd726adce623 bound to our chassis
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.946 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:40:16 compute-0 ovn_controller[149457]: 2026-01-31T07:40:16Z|00146|binding|INFO|Setting lport cf7d03dc-440f-4773-a9d7-9519fc850bf4 ovn-installed in OVS
Jan 31 07:40:16 compute-0 ovn_controller[149457]: 2026-01-31T07:40:16Z|00147|binding|INFO|Setting lport cf7d03dc-440f-4773-a9d7-9519fc850bf4 up in Southbound
Jan 31 07:40:16 compute-0 nova_compute[247704]: 2026-01-31 07:40:16.948 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:16 compute-0 NetworkManager[49108]: <info>  [1769845216.9510] device (tapcf7d03dc-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:40:16 compute-0 NetworkManager[49108]: <info>  [1769845216.9515] device (tapcf7d03dc-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:40:16 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-0000002a.
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.960 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc22d7e-8d8e-4225-bb31-37ea4077866d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.961 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcffffabd-61 in ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.963 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcffffabd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.963 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b9dfa2f8-d618-49da-82d6-91d42ef3b1c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.964 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[23074d64-19f8-4879-8c93-3102ced1f1e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.975 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[7b18616e-fa40-4926-87ef-573ecc23380e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:16.988 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[655abd18-7498-49dc-b7c3-645841733980]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.015 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[285ac572-cea3-4cb0-9bea-7781cd7c5524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.020 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[82e768c1-12dd-4399-9ec6-009bcd51519b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 NetworkManager[49108]: <info>  [1769845217.0213] manager: (tapcffffabd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Jan 31 07:40:17 compute-0 systemd-udevd[274433]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.046 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f7df9c3f-3b14-4357-80e3-63d8bf85f067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.049 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[fd97ec85-2b0b-4819-8d8a-1eb430b91321]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 NetworkManager[49108]: <info>  [1769845217.0702] device (tapcffffabd-60): carrier: link connected
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.072 247708 DEBUG nova.network.neutron [req-c5e59a6c-0435-44d0-9518-2637d2895e83 req-8bcfa1b5-ef88-4557-90aa-c6361beb1b04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Updated VIF entry in instance network info cache for port cf7d03dc-440f-4773-a9d7-9519fc850bf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.072 247708 DEBUG nova.network.neutron [req-c5e59a6c-0435-44d0-9518-2637d2895e83 req-8bcfa1b5-ef88-4557-90aa-c6361beb1b04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Updating instance_info_cache with network_info: [{"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.074 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6291f4-8f88-45c7-a6e0-a39c04215e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.092 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ae539fee-3a03-4d68-945f-27a11b5d0b0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545968, 'reachable_time': 21032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274464, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.097 247708 DEBUG oslo_concurrency.lockutils [req-c5e59a6c-0435-44d0-9518-2637d2895e83 req-8bcfa1b5-ef88-4557-90aa-c6361beb1b04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.111 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9baa96-edae-4403-911b-56d057198cb0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:96c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545968, 'tstamp': 545968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274465, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.133 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0fae9c83-3c76-4dc8-95ad-e423fd899207]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545968, 'reachable_time': 21032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274466, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.169 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[247c46a9-7056-4fdf-b90b-66a076b0ce85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.238 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[481d1002-6f3a-4ce5-b8d9-67b5141636b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.240 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.240 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.241 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcffffabd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:17 compute-0 NetworkManager[49108]: <info>  [1769845217.2456] manager: (tapcffffabd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 31 07:40:17 compute-0 kernel: tapcffffabd-60: entered promiscuous mode
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.248 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.249 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcffffabd-60, col_values=(('external_ids', {'iface-id': '549e70cf-ed02-45f9-9021-3a04088f580f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:17 compute-0 ovn_controller[149457]: 2026-01-31T07:40:17Z|00148|binding|INFO|Releasing lport 549e70cf-ed02-45f9-9021-3a04088f580f from this chassis (sb_readonly=0)
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.261 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.263 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.265 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[803e81c4-4690-465d-b3d0-fb34e25843af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.266 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.267 247708 DEBUG nova.compute.manager [req-da2d08b6-a305-4d7a-ba9a-e29b0768f63b req-fb04a323-6287-413b-b9dc-1003153765f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received event network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.268 247708 DEBUG oslo_concurrency.lockutils [req-da2d08b6-a305-4d7a-ba9a-e29b0768f63b req-fb04a323-6287-413b-b9dc-1003153765f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.268 247708 DEBUG oslo_concurrency.lockutils [req-da2d08b6-a305-4d7a-ba9a-e29b0768f63b req-fb04a323-6287-413b-b9dc-1003153765f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.269 247708 DEBUG oslo_concurrency.lockutils [req-da2d08b6-a305-4d7a-ba9a-e29b0768f63b req-fb04a323-6287-413b-b9dc-1003153765f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.269 247708 DEBUG nova.compute.manager [req-da2d08b6-a305-4d7a-ba9a-e29b0768f63b req-fb04a323-6287-413b-b9dc-1003153765f9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Processing event network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:40:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:17.271 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'env', 'PROCESS_TAG=haproxy-cffffabd-62a6-4362-9315-bd726adce623', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cffffabd-62a6-4362-9315-bd726adce623.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:40:17 compute-0 ceph-mon[74496]: pgmap v1224: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 2.2 MiB/s wr, 80 op/s
Jan 31 07:40:17 compute-0 podman[274517]: 2026-01-31 07:40:17.642008395 +0000 UTC m=+0.064521026 container create f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:40:17 compute-0 systemd[1]: Started libpod-conmon-f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325.scope.
Jan 31 07:40:17 compute-0 podman[274517]: 2026-01-31 07:40:17.60370942 +0000 UTC m=+0.026222031 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:40:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462000c524e7d1630158b2e838e7ec03d46758f6639f3e36b3566feee7304b2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:17 compute-0 podman[274517]: 2026-01-31 07:40:17.73442315 +0000 UTC m=+0.156935781 container init f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 07:40:17 compute-0 podman[274517]: 2026-01-31 07:40:17.742137777 +0000 UTC m=+0.164650378 container start f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:40:17 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[274550]: [NOTICE]   (274559) : New worker (274562) forked
Jan 31 07:40:17 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[274550]: [NOTICE]   (274559) : Loading success.
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.798 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845217.797663, 314f0738-9cae-4fe8-8b90-3d18f72488ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.799 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] VM Started (Lifecycle Event)
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.803 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.807 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.812 247708 INFO nova.virt.libvirt.driver [-] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Instance spawned successfully.
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.812 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.881 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.882 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.883 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.883 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.884 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.885 247708 DEBUG nova.virt.libvirt.driver [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.892 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.897 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.988 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.989 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845217.7977798, 314f0738-9cae-4fe8-8b90-3d18f72488ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:17 compute-0 nova_compute[247704]: 2026-01-31 07:40:17.989 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] VM Paused (Lifecycle Event)
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.016 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.021 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845217.805673, 314f0738-9cae-4fe8-8b90-3d18f72488ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.022 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] VM Resumed (Lifecycle Event)
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.032 247708 INFO nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Took 8.54 seconds to spawn the instance on the hypervisor.
Jan 31 07:40:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.033 247708 DEBUG nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:18.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.044 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.048 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.075 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.106 247708 INFO nova.compute.manager [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Took 11.43 seconds to build instance.
Jan 31 07:40:18 compute-0 nova_compute[247704]: 2026-01-31 07:40:18.126 247708 DEBUG oslo_concurrency.lockutils [None req-41c4b9a0-679b-4e9a-ac0c-7cc6987d80a2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:40:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:19 compute-0 nova_compute[247704]: 2026-01-31 07:40:19.382 247708 DEBUG nova.compute.manager [req-03a6e35f-cc58-445d-b2b7-b72db8f3cf79 req-9f928a0a-c5a1-43ab-8489-e11b0902e226 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received event network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:40:19 compute-0 nova_compute[247704]: 2026-01-31 07:40:19.384 247708 DEBUG oslo_concurrency.lockutils [req-03a6e35f-cc58-445d-b2b7-b72db8f3cf79 req-9f928a0a-c5a1-43ab-8489-e11b0902e226 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:19 compute-0 nova_compute[247704]: 2026-01-31 07:40:19.384 247708 DEBUG oslo_concurrency.lockutils [req-03a6e35f-cc58-445d-b2b7-b72db8f3cf79 req-9f928a0a-c5a1-43ab-8489-e11b0902e226 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:19 compute-0 nova_compute[247704]: 2026-01-31 07:40:19.385 247708 DEBUG oslo_concurrency.lockutils [req-03a6e35f-cc58-445d-b2b7-b72db8f3cf79 req-9f928a0a-c5a1-43ab-8489-e11b0902e226 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:19 compute-0 nova_compute[247704]: 2026-01-31 07:40:19.385 247708 DEBUG nova.compute.manager [req-03a6e35f-cc58-445d-b2b7-b72db8f3cf79 req-9f928a0a-c5a1-43ab-8489-e11b0902e226 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] No waiting events found dispatching network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:40:19 compute-0 nova_compute[247704]: 2026-01-31 07:40:19.386 247708 WARNING nova.compute.manager [req-03a6e35f-cc58-445d-b2b7-b72db8f3cf79 req-9f928a0a-c5a1-43ab-8489-e11b0902e226 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received unexpected event network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 for instance with vm_state active and task_state None.
Jan 31 07:40:19 compute-0 ceph-mon[74496]: pgmap v1225: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:40:20
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.control']
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:40:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:20.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:40:20 compute-0 nova_compute[247704]: 2026-01-31 07:40:20.293 247708 DEBUG nova.compute.manager [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:40:20 compute-0 nova_compute[247704]: 2026-01-31 07:40:20.366 247708 INFO nova.compute.manager [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] instance snapshotting
Jan 31 07:40:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:20 compute-0 nova_compute[247704]: 2026-01-31 07:40:20.609 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 109 op/s
Jan 31 07:40:21 compute-0 nova_compute[247704]: 2026-01-31 07:40:21.071 247708 INFO nova.virt.libvirt.driver [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Beginning live snapshot process
Jan 31 07:40:21 compute-0 nova_compute[247704]: 2026-01-31 07:40:21.281 247708 DEBUG nova.virt.libvirt.imagebackend [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:40:21 compute-0 nova_compute[247704]: 2026-01-31 07:40:21.458 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:21 compute-0 nova_compute[247704]: 2026-01-31 07:40:21.579 247708 DEBUG nova.storage.rbd_utils [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(ebd79bfc3299422097429f8d391b92c8) on rbd image(314f0738-9cae-4fe8-8b90-3d18f72488ef_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:40:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 31 07:40:21 compute-0 ceph-mon[74496]: pgmap v1226: 305 pgs: 305 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 109 op/s
Jan 31 07:40:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 31 07:40:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 31 07:40:21 compute-0 nova_compute[247704]: 2026-01-31 07:40:21.927 247708 DEBUG nova.storage.rbd_utils [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] cloning vms/314f0738-9cae-4fe8-8b90-3d18f72488ef_disk@ebd79bfc3299422097429f8d391b92c8 to images/1c1b7167-294e-43fd-b811-31bea4078f3d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:40:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:22.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:22 compute-0 nova_compute[247704]: 2026-01-31 07:40:22.078 247708 DEBUG nova.storage.rbd_utils [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] flattening images/1c1b7167-294e-43fd-b811-31bea4078f3d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:40:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:22.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:22 compute-0 nova_compute[247704]: 2026-01-31 07:40:22.424 247708 DEBUG nova.storage.rbd_utils [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] removing snapshot(ebd79bfc3299422097429f8d391b92c8) on rbd image(314f0738-9cae-4fe8-8b90-3d18f72488ef_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:40:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 107 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 31 07:40:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 31 07:40:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 31 07:40:22 compute-0 ceph-mon[74496]: osdmap e184: 3 total, 3 up, 3 in
Jan 31 07:40:22 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 31 07:40:23 compute-0 nova_compute[247704]: 2026-01-31 07:40:23.120 247708 DEBUG nova.storage.rbd_utils [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(snap) on rbd image(1c1b7167-294e-43fd-b811-31bea4078f3d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:40:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 31 07:40:23 compute-0 ceph-mon[74496]: pgmap v1228: 305 pgs: 305 active+clean; 107 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 31 07:40:23 compute-0 ceph-mon[74496]: osdmap e185: 3 total, 3 up, 3 in
Jan 31 07:40:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 31 07:40:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 31 07:40:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:24.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:24.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 107 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.1 MiB/s wr, 163 op/s
Jan 31 07:40:25 compute-0 ceph-mon[74496]: osdmap e186: 3 total, 3 up, 3 in
Jan 31 07:40:25 compute-0 nova_compute[247704]: 2026-01-31 07:40:25.612 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:25 compute-0 nova_compute[247704]: 2026-01-31 07:40:25.961 247708 INFO nova.virt.libvirt.driver [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Snapshot image upload complete
Jan 31 07:40:25 compute-0 nova_compute[247704]: 2026-01-31 07:40:25.961 247708 INFO nova.compute.manager [None req-67128c3a-f7a2-4d4c-8a67-c33d553eb4d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Took 5.59 seconds to snapshot the instance on the hypervisor.
Jan 31 07:40:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:26.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:26 compute-0 ceph-mon[74496]: pgmap v1231: 305 pgs: 305 active+clean; 107 MiB data, 453 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.1 MiB/s wr, 163 op/s
Jan 31 07:40:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3020357619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:26.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:26 compute-0 nova_compute[247704]: 2026-01-31 07:40:26.495 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 134 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.5 MiB/s wr, 142 op/s
Jan 31 07:40:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:28.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:28 compute-0 ceph-mon[74496]: pgmap v1232: 305 pgs: 305 active+clean; 134 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.5 MiB/s wr, 142 op/s
Jan 31 07:40:28 compute-0 sudo[274717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:28 compute-0 sudo[274717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:28 compute-0 sudo[274717]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:28 compute-0 sudo[274742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:40:28 compute-0 sudo[274742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:28 compute-0 sudo[274742]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:28 compute-0 sudo[274767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:28 compute-0 sudo[274767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:28 compute-0 sudo[274767]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:28 compute-0 sudo[274792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:40:28 compute-0 sudo[274792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 138 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 149 op/s
Jan 31 07:40:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:28 compute-0 sudo[274792]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:40:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:40:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:40:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:40:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:40:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:40:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b040fb38-d5c7-4d9c-8b9d-72c1e2749c97 does not exist
Jan 31 07:40:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0770eea3-b665-4b3f-88c2-cdf6a36ae655 does not exist
Jan 31 07:40:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5b4f75b8-8ca4-431a-9062-aaae249df92b does not exist
Jan 31 07:40:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:40:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:40:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:40:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:40:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:40:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:40:28 compute-0 sudo[274848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:29 compute-0 sudo[274848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:29 compute-0 sudo[274848]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:29 compute-0 sudo[274873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:40:29 compute-0 sudo[274873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:29 compute-0 sudo[274873]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:29 compute-0 sudo[274898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:29 compute-0 sudo[274898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:29 compute-0 sudo[274898]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:40:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:40:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:40:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:40:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:40:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:40:29 compute-0 sudo[274923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:40:29 compute-0 sudo[274923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:29 compute-0 podman[274988]: 2026-01-31 07:40:29.502407247 +0000 UTC m=+0.036606104 container create e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:40:29 compute-0 systemd[1]: Started libpod-conmon-e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669.scope.
Jan 31 07:40:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:40:29 compute-0 podman[274988]: 2026-01-31 07:40:29.581411045 +0000 UTC m=+0.115609912 container init e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:40:29 compute-0 podman[274988]: 2026-01-31 07:40:29.484235553 +0000 UTC m=+0.018434430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:40:29 compute-0 podman[274988]: 2026-01-31 07:40:29.58736193 +0000 UTC m=+0.121560797 container start e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:40:29 compute-0 podman[274988]: 2026-01-31 07:40:29.590795754 +0000 UTC m=+0.124994621 container attach e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:40:29 compute-0 fervent_kapitsa[275006]: 167 167
Jan 31 07:40:29 compute-0 systemd[1]: libpod-e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669.scope: Deactivated successfully.
Jan 31 07:40:29 compute-0 podman[274988]: 2026-01-31 07:40:29.592898855 +0000 UTC m=+0.127097712 container died e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a2d995231a6f9744d1f69704377e2ba4869498312ac8ac6a5ae7bece7ecffd7-merged.mount: Deactivated successfully.
Jan 31 07:40:29 compute-0 podman[275003]: 2026-01-31 07:40:29.629777714 +0000 UTC m=+0.083710922 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 07:40:29 compute-0 podman[274988]: 2026-01-31 07:40:29.637208026 +0000 UTC m=+0.171406883 container remove e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_kapitsa, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:40:29 compute-0 systemd[1]: libpod-conmon-e49b37677e4cb00bb18ff117a7e58005f99cc2fb00f7552077fec6ce1c60b669.scope: Deactivated successfully.
Jan 31 07:40:29 compute-0 podman[275054]: 2026-01-31 07:40:29.789598224 +0000 UTC m=+0.044205100 container create b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:40:29 compute-0 systemd[1]: Started libpod-conmon-b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75.scope.
Jan 31 07:40:29 compute-0 podman[275054]: 2026-01-31 07:40:29.771238686 +0000 UTC m=+0.025845622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:40:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0153f30db074f08fe9d0e81e63b300f98bb7da725924e9b8d74e42fbb7ccb5c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0153f30db074f08fe9d0e81e63b300f98bb7da725924e9b8d74e42fbb7ccb5c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0153f30db074f08fe9d0e81e63b300f98bb7da725924e9b8d74e42fbb7ccb5c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0153f30db074f08fe9d0e81e63b300f98bb7da725924e9b8d74e42fbb7ccb5c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0153f30db074f08fe9d0e81e63b300f98bb7da725924e9b8d74e42fbb7ccb5c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:29 compute-0 podman[275054]: 2026-01-31 07:40:29.920378605 +0000 UTC m=+0.174985471 container init b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:40:29 compute-0 podman[275054]: 2026-01-31 07:40:29.929935448 +0000 UTC m=+0.184542284 container start b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:40:29 compute-0 podman[275054]: 2026-01-31 07:40:29.936682193 +0000 UTC m=+0.191289029 container attach b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 07:40:30 compute-0 ovn_controller[149457]: 2026-01-31T07:40:30Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:2b:4e 10.100.0.6
Jan 31 07:40:30 compute-0 ovn_controller[149457]: 2026-01-31T07:40:30Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:2b:4e 10.100.0.6
Jan 31 07:40:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:30.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:30 compute-0 sudo[275076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:30 compute-0 sudo[275076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:30 compute-0 sudo[275076]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:30 compute-0 sudo[275101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:30 compute-0 sudo[275101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:30 compute-0 sudo[275101]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:30 compute-0 nova_compute[247704]: 2026-01-31 07:40:30.613 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 197 MiB data, 508 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.9 MiB/s wr, 187 op/s
Jan 31 07:40:30 compute-0 charming_chatelet[275070]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:40:30 compute-0 charming_chatelet[275070]: --> relative data size: 1.0
Jan 31 07:40:30 compute-0 charming_chatelet[275070]: --> All data devices are unavailable
Jan 31 07:40:30 compute-0 systemd[1]: libpod-b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75.scope: Deactivated successfully.
Jan 31 07:40:30 compute-0 podman[275054]: 2026-01-31 07:40:30.773829657 +0000 UTC m=+1.028436573 container died b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:40:31 compute-0 nova_compute[247704]: 2026-01-31 07:40:31.537 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:32.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:32.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 206 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.2 MiB/s wr, 161 op/s
Jan 31 07:40:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 31 07:40:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:34.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:34.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 206 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 147 op/s
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031208923669272287 of space, bias 1.0, pg target 0.9362677100781687 quantized to 32 (current 32)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8676193467336684 quantized to 32 (current 32)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:40:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:40:35 compute-0 nova_compute[247704]: 2026-01-31 07:40:35.615 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:36 compute-0 nova_compute[247704]: 2026-01-31 07:40:36.595 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 206 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 136 op/s
Jan 31 07:40:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:38.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:38.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 206 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.8 MiB/s wr, 99 op/s
Jan 31 07:40:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:40.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:40 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_flush, latency = 7.832721710s
Jan 31 07:40:40 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 8.070281982s
Jan 31 07:40:40 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.905389786s, txc = 0x55f9d9b09800
Jan 31 07:40:40 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.905065536s, txc = 0x55f9d9ae9200
Jan 31 07:40:40 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.904620171s, txc = 0x55f9d9b9a900
Jan 31 07:40:40 compute-0 ceph-mon[74496]: pgmap v1233: 305 pgs: 305 active+clean; 138 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 149 op/s
Jan 31 07:40:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0153f30db074f08fe9d0e81e63b300f98bb7da725924e9b8d74e42fbb7ccb5c1-merged.mount: Deactivated successfully.
Jan 31 07:40:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:40:40 compute-0 ceph-mon[74496]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Jan 31 07:40:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:40.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:40 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:40 compute-0 podman[275054]: 2026-01-31 07:40:40.552544493 +0000 UTC m=+10.807151359 container remove b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:40:40 compute-0 sudo[274923]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:40 compute-0 nova_compute[247704]: 2026-01-31 07:40:40.618 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 206 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.8 MiB/s wr, 86 op/s
Jan 31 07:40:40 compute-0 systemd[1]: libpod-conmon-b2de871addb2832be5049b8ff42b6e1e92abeb21e8212569fdd0717bc17eef75.scope: Deactivated successfully.
Jan 31 07:40:40 compute-0 sudo[275156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:40 compute-0 sudo[275156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:40 compute-0 sudo[275156]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:40 compute-0 sudo[275181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:40:40 compute-0 sudo[275181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:40 compute-0 sudo[275181]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:40 compute-0 sudo[275206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:40 compute-0 sudo[275206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:40 compute-0 sudo[275206]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:40 compute-0 sudo[275231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:40:40 compute-0 sudo[275231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:40.869 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:40:40 compute-0 nova_compute[247704]: 2026-01-31 07:40:40.869 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:40.870 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:40:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:40:40.871 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:40:41 compute-0 podman[275299]: 2026-01-31 07:40:41.211284044 +0000 UTC m=+0.045452349 container create a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tesla, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:40:41 compute-0 systemd[1]: Started libpod-conmon-a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b.scope.
Jan 31 07:40:41 compute-0 podman[275299]: 2026-01-31 07:40:41.188318654 +0000 UTC m=+0.022486979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:40:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:40:41 compute-0 podman[275299]: 2026-01-31 07:40:41.360235558 +0000 UTC m=+0.194403903 container init a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tesla, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:40:41 compute-0 podman[275299]: 2026-01-31 07:40:41.367168947 +0000 UTC m=+0.201337302 container start a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:40:41 compute-0 keen_tesla[275316]: 167 167
Jan 31 07:40:41 compute-0 systemd[1]: libpod-a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b.scope: Deactivated successfully.
Jan 31 07:40:41 compute-0 podman[275299]: 2026-01-31 07:40:41.400128061 +0000 UTC m=+0.234296376 container attach a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tesla, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:40:41 compute-0 podman[275299]: 2026-01-31 07:40:41.401141617 +0000 UTC m=+0.235309932 container died a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tesla, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-441063425bf6bb2b1f01cce0debc66e4477df6bce218bdb7a40f93f910cd1c74-merged.mount: Deactivated successfully.
Jan 31 07:40:41 compute-0 podman[275299]: 2026-01-31 07:40:41.516774087 +0000 UTC m=+0.350942392 container remove a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:40:41 compute-0 systemd[1]: libpod-conmon-a865fb7470ef6114eaa117ee22018a8e6cb49965828fc74108f98ac64f61221b.scope: Deactivated successfully.
Jan 31 07:40:41 compute-0 nova_compute[247704]: 2026-01-31 07:40:41.597 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:41 compute-0 podman[275342]: 2026-01-31 07:40:41.678000251 +0000 UTC m=+0.059225866 container create f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:40:41 compute-0 systemd[1]: Started libpod-conmon-f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115.scope.
Jan 31 07:40:41 compute-0 podman[275342]: 2026-01-31 07:40:41.646890122 +0000 UTC m=+0.028115797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:40:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae1e3c0d16d678be96d0d687a28b2e0e44d274180691596b6fa6fba7894603/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae1e3c0d16d678be96d0d687a28b2e0e44d274180691596b6fa6fba7894603/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae1e3c0d16d678be96d0d687a28b2e0e44d274180691596b6fa6fba7894603/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae1e3c0d16d678be96d0d687a28b2e0e44d274180691596b6fa6fba7894603/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:41 compute-0 podman[275342]: 2026-01-31 07:40:41.775539471 +0000 UTC m=+0.156765096 container init f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:40:41 compute-0 podman[275342]: 2026-01-31 07:40:41.785945625 +0000 UTC m=+0.167171210 container start f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:40:41 compute-0 podman[275342]: 2026-01-31 07:40:41.790400484 +0000 UTC m=+0.171626139 container attach f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 31 07:40:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:42.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:42 compute-0 friendly_bouman[275358]: {
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:     "0": [
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:         {
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "devices": [
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "/dev/loop3"
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             ],
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "lv_name": "ceph_lv0",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "lv_size": "7511998464",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "name": "ceph_lv0",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "tags": {
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.cluster_name": "ceph",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.crush_device_class": "",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.encrypted": "0",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.osd_id": "0",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.type": "block",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:                 "ceph.vdo": "0"
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             },
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "type": "block",
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:             "vg_name": "ceph_vg0"
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:         }
Jan 31 07:40:42 compute-0 friendly_bouman[275358]:     ]
Jan 31 07:40:42 compute-0 friendly_bouman[275358]: }
Jan 31 07:40:42 compute-0 systemd[1]: libpod-f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115.scope: Deactivated successfully.
Jan 31 07:40:42 compute-0 podman[275342]: 2026-01-31 07:40:42.61903184 +0000 UTC m=+1.000257455 container died f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:40:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 206 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 500 KiB/s wr, 17 op/s
Jan 31 07:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-90ae1e3c0d16d678be96d0d687a28b2e0e44d274180691596b6fa6fba7894603-merged.mount: Deactivated successfully.
Jan 31 07:40:42 compute-0 podman[275342]: 2026-01-31 07:40:42.686062965 +0000 UTC m=+1.067288560 container remove f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bouman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:40:42 compute-0 systemd[1]: libpod-conmon-f9e8b246bedd3ce571363ce1d482857489d9e0e8ec515ee26877e044a3e95115.scope: Deactivated successfully.
Jan 31 07:40:42 compute-0 sudo[275231]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:42 compute-0 sudo[275381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:42 compute-0 sudo[275381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:42 compute-0 sudo[275381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:42 compute-0 sudo[275406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:40:42 compute-0 sudo[275406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:42 compute-0 sudo[275406]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:42 compute-0 sudo[275431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:42 compute-0 sudo[275431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:42 compute-0 sudo[275431]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:42 compute-0 sudo[275456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:40:42 compute-0 sudo[275456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:43 compute-0 podman[275521]: 2026-01-31 07:40:43.322533114 +0000 UTC m=+0.060034546 container create 34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:40:43 compute-0 systemd[1]: Started libpod-conmon-34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5.scope.
Jan 31 07:40:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:40:43 compute-0 podman[275521]: 2026-01-31 07:40:43.29901871 +0000 UTC m=+0.036520132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:40:43 compute-0 podman[275521]: 2026-01-31 07:40:43.399669375 +0000 UTC m=+0.137170797 container init 34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:40:43 compute-0 podman[275521]: 2026-01-31 07:40:43.403594771 +0000 UTC m=+0.141096203 container start 34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:40:43 compute-0 clever_brahmagupta[275539]: 167 167
Jan 31 07:40:43 compute-0 systemd[1]: libpod-34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5.scope: Deactivated successfully.
Jan 31 07:40:43 compute-0 podman[275521]: 2026-01-31 07:40:43.41048547 +0000 UTC m=+0.147986882 container attach 34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brahmagupta, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:40:43 compute-0 podman[275521]: 2026-01-31 07:40:43.410950201 +0000 UTC m=+0.148451593 container died 34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brahmagupta, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-54b7869dd146343b7c5d5b4166199ac299cae15cfc7ced8be9b037ae28ad61a3-merged.mount: Deactivated successfully.
Jan 31 07:40:43 compute-0 podman[275521]: 2026-01-31 07:40:43.45725573 +0000 UTC m=+0.194757132 container remove 34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:40:43 compute-0 systemd[1]: libpod-conmon-34d9eba52af0c5d58850c572a6d52ea7e232b3edc3dc0a2e94379fd6538d24a5.scope: Deactivated successfully.
Jan 31 07:40:43 compute-0 nova_compute[247704]: 2026-01-31 07:40:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:43 compute-0 nova_compute[247704]: 2026-01-31 07:40:43.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:40:43 compute-0 nova_compute[247704]: 2026-01-31 07:40:43.579 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:40:43 compute-0 podman[275562]: 2026-01-31 07:40:43.62241814 +0000 UTC m=+0.059163074 container create 9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:40:43 compute-0 systemd[1]: Started libpod-conmon-9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08.scope.
Jan 31 07:40:43 compute-0 podman[275562]: 2026-01-31 07:40:43.601181412 +0000 UTC m=+0.037926426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:40:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:40:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d923250a1a8fa5292ca40a209e16c4f0971032d1b4ea2a72435a31bfba5f5209/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d923250a1a8fa5292ca40a209e16c4f0971032d1b4ea2a72435a31bfba5f5209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d923250a1a8fa5292ca40a209e16c4f0971032d1b4ea2a72435a31bfba5f5209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d923250a1a8fa5292ca40a209e16c4f0971032d1b4ea2a72435a31bfba5f5209/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:40:43 compute-0 podman[275562]: 2026-01-31 07:40:43.755894306 +0000 UTC m=+0.192639270 container init 9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 31 07:40:43 compute-0 podman[275562]: 2026-01-31 07:40:43.768880843 +0000 UTC m=+0.205625817 container start 9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:40:43 compute-0 podman[275562]: 2026-01-31 07:40:43.773701351 +0000 UTC m=+0.210446305 container attach 9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:40:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:44.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:44.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]: {
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]:         "osd_id": 0,
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]:         "type": "bluestore"
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]:     }
Jan 31 07:40:44 compute-0 amazing_roentgen[275578]: }
Jan 31 07:40:44 compute-0 systemd[1]: libpod-9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08.scope: Deactivated successfully.
Jan 31 07:40:44 compute-0 podman[275600]: 2026-01-31 07:40:44.64957835 +0000 UTC m=+0.028135227 container died 9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:40:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 206 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 10 op/s
Jan 31 07:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d923250a1a8fa5292ca40a209e16c4f0971032d1b4ea2a72435a31bfba5f5209-merged.mount: Deactivated successfully.
Jan 31 07:40:44 compute-0 podman[275600]: 2026-01-31 07:40:44.726239571 +0000 UTC m=+0.104796448 container remove 9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_roentgen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:40:44 compute-0 systemd[1]: libpod-conmon-9556636f231c61303d0f84a45d588045e027c53b45482f208d2be85e9797eb08.scope: Deactivated successfully.
Jan 31 07:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:44 compute-0 sudo[275456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 31 07:40:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:40:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 35m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] :     mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:40:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:40:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:40:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9e7ceeda-690e-468c-94c1-bea41ee8be1a does not exist
Jan 31 07:40:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 291e2e0b-d664-4bef-8fce-6e03746ea1c9 does not exist
Jan 31 07:40:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fd39c783-14d2-4862-8687-34b6134040f8 does not exist
Jan 31 07:40:45 compute-0 nova_compute[247704]: 2026-01-31 07:40:45.621 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:45 compute-0 sudo[275614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:45 compute-0 sudo[275614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:45 compute-0 sudo[275614]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:45 compute-0 sudo[275639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:40:45 compute-0 sudo[275639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:45 compute-0 sudo[275639]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:45 compute-0 podman[275664]: 2026-01-31 07:40:45.900308525 +0000 UTC m=+0.068295598 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 07:40:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:46.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:40:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:46.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:40:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1789897216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:46 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 07:40:46 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:40:46 compute-0 ceph-mon[74496]: pgmap v1239: 305 pgs: 305 active+clean; 206 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.8 MiB/s wr, 86 op/s
Jan 31 07:40:46 compute-0 ceph-mon[74496]: pgmap v1240: 305 pgs: 305 active+clean; 206 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 500 KiB/s wr, 17 op/s
Jan 31 07:40:46 compute-0 ceph-mon[74496]: pgmap v1241: 305 pgs: 305 active+clean; 206 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 10 op/s
Jan 31 07:40:46 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 31 07:40:46 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:40:46 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:40:46 compute-0 ceph-mon[74496]: osdmap e187: 3 total, 3 up, 3 in
Jan 31 07:40:46 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 35m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:40:46 compute-0 ceph-mon[74496]: Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 07:40:46 compute-0 ceph-mon[74496]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 31 07:40:46 compute-0 ceph-mon[74496]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 31 07:40:46 compute-0 ceph-mon[74496]:     mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 31 07:40:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:40:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:40:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1486128835' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:40:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1486128835' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:40:46 compute-0 nova_compute[247704]: 2026-01-31 07:40:46.598 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 243 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 1.5 MiB/s wr, 40 op/s
Jan 31 07:40:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2139412687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:47 compute-0 ceph-mon[74496]: pgmap v1243: 305 pgs: 305 active+clean; 243 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 1.5 MiB/s wr, 40 op/s
Jan 31 07:40:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1559322790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3158256233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2664734762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:48.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:48.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2581716136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:40:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 259 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 2.2 MiB/s wr, 61 op/s
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:40:48 compute-0 ceph-mon[74496]: paxos.0).electionLogic(18) init, last seen epoch 18
Jan 31 07:40:48 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:48 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:40:48 compute-0 ceph-mon[74496]: paxos.0).electionLogic(21) init, last seen epoch 21, mid-election, bumping
Jan 31 07:40:48 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:40:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 35m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 07:40:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:40:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2010659746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/24656606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:49 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 07:40:49 compute-0 ceph-mon[74496]: pgmap v1244: 305 pgs: 305 active+clean; 259 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 2.2 MiB/s wr, 61 op/s
Jan 31 07:40:49 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:40:49 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 07:40:49 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 07:40:49 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:40:49 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:40:49 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:40:49 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:40:49 compute-0 ceph-mon[74496]: osdmap e187: 3 total, 3 up, 3 in
Jan 31 07:40:49 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 35m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:40:49 compute-0 ceph-mon[74496]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 07:40:49 compute-0 ceph-mon[74496]: Cluster is now healthy
Jan 31 07:40:49 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:40:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:50.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:40:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:50.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:40:50 compute-0 sudo[275686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:50 compute-0 sudo[275686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:50 compute-0 sudo[275686]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:50 compute-0 sudo[275711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:40:50 compute-0 sudo[275711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:40:50 compute-0 sudo[275711]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:50 compute-0 nova_compute[247704]: 2026-01-31 07:40:50.623 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 2.3 MiB/s wr, 90 op/s
Jan 31 07:40:51 compute-0 nova_compute[247704]: 2026-01-31 07:40:51.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:51 compute-0 nova_compute[247704]: 2026-01-31 07:40:51.600 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:51 compute-0 ceph-mon[74496]: pgmap v1245: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 109 KiB/s rd, 2.3 MiB/s wr, 90 op/s
Jan 31 07:40:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:40:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:52.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:40:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:52.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 136 op/s
Jan 31 07:40:53 compute-0 nova_compute[247704]: 2026-01-31 07:40:53.711 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:53 compute-0 nova_compute[247704]: 2026-01-31 07:40:53.711 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:40:53 compute-0 nova_compute[247704]: 2026-01-31 07:40:53.711 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:40:53 compute-0 ceph-mon[74496]: pgmap v1246: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 136 op/s
Jan 31 07:40:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:40:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:54.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:40:54 compute-0 nova_compute[247704]: 2026-01-31 07:40:54.209 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:40:54 compute-0 nova_compute[247704]: 2026-01-31 07:40:54.210 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:40:54 compute-0 nova_compute[247704]: 2026-01-31 07:40:54.211 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:40:54 compute-0 nova_compute[247704]: 2026-01-31 07:40:54.211 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 314f0738-9cae-4fe8-8b90-3d18f72488ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:40:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:54.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 136 op/s
Jan 31 07:40:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1081832384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:40:55 compute-0 nova_compute[247704]: 2026-01-31 07:40:55.626 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:56 compute-0 ceph-mon[74496]: pgmap v1247: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 136 op/s
Jan 31 07:40:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3409420697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:56.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:56 compute-0 nova_compute[247704]: 2026-01-31 07:40:56.602 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:40:56 compute-0 nova_compute[247704]: 2026-01-31 07:40:56.626 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Updating instance_info_cache with network_info: [{"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:40:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 225 op/s
Jan 31 07:40:56 compute-0 nova_compute[247704]: 2026-01-31 07:40:56.728 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-314f0738-9cae-4fe8-8b90-3d18f72488ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:40:56 compute-0 nova_compute[247704]: 2026-01-31 07:40:56.728 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:40:56 compute-0 nova_compute[247704]: 2026-01-31 07:40:56.728 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:56 compute-0 nova_compute[247704]: 2026-01-31 07:40:56.729 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:56 compute-0 nova_compute[247704]: 2026-01-31 07:40:56.729 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:40:58 compute-0 ceph-mon[74496]: pgmap v1248: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.4 MiB/s wr, 225 op/s
Jan 31 07:40:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:40:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:58.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:40:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:40:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:40:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:58.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:40:58 compute-0 nova_compute[247704]: 2026-01-31 07:40:58.581 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:58 compute-0 nova_compute[247704]: 2026-01-31 07:40:58.582 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:40:58 compute-0 nova_compute[247704]: 2026-01-31 07:40:58.608 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:58 compute-0 nova_compute[247704]: 2026-01-31 07:40:58.609 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:58 compute-0 nova_compute[247704]: 2026-01-31 07:40:58.610 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:40:58 compute-0 nova_compute[247704]: 2026-01-31 07:40:58.610 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:40:58 compute-0 nova_compute[247704]: 2026-01-31 07:40:58.610 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:40:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 648 KiB/s wr, 216 op/s
Jan 31 07:40:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:40:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550577645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1164369848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.086 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.193 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.194 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.397 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.399 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4471MB free_disk=20.90084457397461GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.399 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.399 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.630 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 314f0738-9cae-4fe8-8b90-3d18f72488ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.630 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.630 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.728 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.851 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.852 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.886 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.908 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:40:59 compute-0 podman[275765]: 2026-01-31 07:40:59.93786124 +0000 UTC m=+0.102435770 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:40:59 compute-0 nova_compute[247704]: 2026-01-31 07:40:59.962 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:00 compute-0 ceph-mon[74496]: pgmap v1249: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 648 KiB/s wr, 216 op/s
Jan 31 07:41:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3550577645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3007936790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:41:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2575247536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:00 compute-0 nova_compute[247704]: 2026-01-31 07:41:00.401 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:00 compute-0 nova_compute[247704]: 2026-01-31 07:41:00.410 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:00 compute-0 nova_compute[247704]: 2026-01-31 07:41:00.436 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:00.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:00 compute-0 nova_compute[247704]: 2026-01-31 07:41:00.497 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:41:00 compute-0 nova_compute[247704]: 2026-01-31 07:41:00.497 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:00 compute-0 nova_compute[247704]: 2026-01-31 07:41:00.630 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 50 KiB/s wr, 247 op/s
Jan 31 07:41:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2575247536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1330953625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:01 compute-0 nova_compute[247704]: 2026-01-31 07:41:01.480 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-0 nova_compute[247704]: 2026-01-31 07:41:01.481 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-0 nova_compute[247704]: 2026-01-31 07:41:01.482 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-0 nova_compute[247704]: 2026-01-31 07:41:01.482 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:41:01 compute-0 nova_compute[247704]: 2026-01-31 07:41:01.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-0 nova_compute[247704]: 2026-01-31 07:41:01.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:01 compute-0 nova_compute[247704]: 2026-01-31 07:41:01.605 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:02.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 31 07:41:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 31 07:41:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 31 07:41:02 compute-0 ceph-mon[74496]: pgmap v1250: 305 pgs: 305 active+clean; 260 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 50 KiB/s wr, 247 op/s
Jan 31 07:41:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:02.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 255 op/s
Jan 31 07:41:03 compute-0 ceph-mon[74496]: osdmap e188: 3 total, 3 up, 3 in
Jan 31 07:41:03 compute-0 ceph-mon[74496]: pgmap v1252: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 255 op/s
Jan 31 07:41:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:04.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:04.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 255 op/s
Jan 31 07:41:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.531 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "314f0738-9cae-4fe8-8b90-3d18f72488ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.532 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.532 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.532 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.532 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.534 247708 INFO nova.compute.manager [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Terminating instance
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.535 247708 DEBUG nova.compute.manager [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:41:05 compute-0 kernel: tapcf7d03dc-44 (unregistering): left promiscuous mode
Jan 31 07:41:05 compute-0 NetworkManager[49108]: <info>  [1769845265.6089] device (tapcf7d03dc-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:41:05 compute-0 ovn_controller[149457]: 2026-01-31T07:41:05Z|00149|binding|INFO|Releasing lport cf7d03dc-440f-4773-a9d7-9519fc850bf4 from this chassis (sb_readonly=0)
Jan 31 07:41:05 compute-0 ovn_controller[149457]: 2026-01-31T07:41:05Z|00150|binding|INFO|Setting lport cf7d03dc-440f-4773-a9d7-9519fc850bf4 down in Southbound
Jan 31 07:41:05 compute-0 ovn_controller[149457]: 2026-01-31T07:41:05Z|00151|binding|INFO|Removing iface tapcf7d03dc-44 ovn-installed in OVS
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.642 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.645 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.649 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.649 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:2b:4e 10.100.0.6'], port_security=['fa:16:3e:48:2b:4e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '314f0738-9cae-4fe8-8b90-3d18f72488ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=cf7d03dc-440f-4773-a9d7-9519fc850bf4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.650 160028 INFO neutron.agent.ovn.metadata.agent [-] Port cf7d03dc-440f-4773-a9d7-9519fc850bf4 in datapath cffffabd-62a6-4362-9315-bd726adce623 unbound from our chassis
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.652 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cffffabd-62a6-4362-9315-bd726adce623, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.654 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7be32028-5735-4cc3-9f2a-6c304495600b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.655 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace which is not needed anymore
Jan 31 07:41:05 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Jan 31 07:41:05 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002a.scope: Consumed 14.493s CPU time.
Jan 31 07:41:05 compute-0 systemd-machined[214448]: Machine qemu-21-instance-0000002a terminated.
Jan 31 07:41:05 compute-0 ceph-mon[74496]: pgmap v1253: 305 pgs: 305 active+clean; 279 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 255 op/s
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.763 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.770 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.779 247708 INFO nova.virt.libvirt.driver [-] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Instance destroyed successfully.
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.780 247708 DEBUG nova.objects.instance [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'resources' on Instance uuid 314f0738-9cae-4fe8-8b90-3d18f72488ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:05 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[274550]: [NOTICE]   (274559) : haproxy version is 2.8.14-c23fe91
Jan 31 07:41:05 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[274550]: [NOTICE]   (274559) : path to executable is /usr/sbin/haproxy
Jan 31 07:41:05 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[274550]: [WARNING]  (274559) : Exiting Master process...
Jan 31 07:41:05 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[274550]: [WARNING]  (274559) : Exiting Master process...
Jan 31 07:41:05 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[274550]: [ALERT]    (274559) : Current worker (274562) exited with code 143 (Terminated)
Jan 31 07:41:05 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[274550]: [WARNING]  (274559) : All workers exited. Exiting... (0)
Jan 31 07:41:05 compute-0 systemd[1]: libpod-f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325.scope: Deactivated successfully.
Jan 31 07:41:05 compute-0 conmon[274550]: conmon f34ccdef0628dace5504 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325.scope/container/memory.events
Jan 31 07:41:05 compute-0 podman[275840]: 2026-01-31 07:41:05.802642405 +0000 UTC m=+0.064850550 container died f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.809 247708 DEBUG nova.virt.libvirt.vif [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-480511753',display_name='tempest-ImagesTestJSON-server-480511753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-480511753',id=42,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:40:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-c0v7dnna',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:40:26Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=314f0738-9cae-4fe8-8b90-3d18f72488ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.810 247708 DEBUG nova.network.os_vif_util [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "address": "fa:16:3e:48:2b:4e", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcf7d03dc-44", "ovs_interfaceid": "cf7d03dc-440f-4773-a9d7-9519fc850bf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.811 247708 DEBUG nova.network.os_vif_util [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:2b:4e,bridge_name='br-int',has_traffic_filtering=True,id=cf7d03dc-440f-4773-a9d7-9519fc850bf4,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf7d03dc-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.811 247708 DEBUG os_vif [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:2b:4e,bridge_name='br-int',has_traffic_filtering=True,id=cf7d03dc-440f-4773-a9d7-9519fc850bf4,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf7d03dc-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.813 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.813 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcf7d03dc-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.815 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.820 247708 INFO os_vif [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:2b:4e,bridge_name='br-int',has_traffic_filtering=True,id=cf7d03dc-440f-4773-a9d7-9519fc850bf4,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcf7d03dc-44')
Jan 31 07:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-462000c524e7d1630158b2e838e7ec03d46758f6639f3e36b3566feee7304b2c-merged.mount: Deactivated successfully.
Jan 31 07:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325-userdata-shm.mount: Deactivated successfully.
Jan 31 07:41:05 compute-0 podman[275840]: 2026-01-31 07:41:05.853124551 +0000 UTC m=+0.115332696 container cleanup f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:41:05 compute-0 systemd[1]: libpod-conmon-f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325.scope: Deactivated successfully.
Jan 31 07:41:05 compute-0 podman[275893]: 2026-01-31 07:41:05.939584437 +0000 UTC m=+0.062286116 container remove f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.946 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ed69b6dd-23f0-4fef-bbff-627275113eb9]: (4, ('Sat Jan 31 07:41:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325)\nf34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325\nSat Jan 31 07:41:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (f34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325)\nf34ccdef0628dace550422a2d06e34ecb531ca271ad374379fb2a34a797d4325\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.949 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0bc9f1-d29d-4057-87aa-9cb566688190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.950 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.952 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:05 compute-0 kernel: tapcffffabd-60: left promiscuous mode
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.962 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.966 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca0f923-c110-4239-9d1e-649c3b52870d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.970 247708 DEBUG nova.compute.manager [req-12cff691-fb7d-4e6e-8cb6-4f259eff0e67 req-f00265f3-d4aa-4ceb-8ccb-8a420ca8e752 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received event network-vif-unplugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.971 247708 DEBUG oslo_concurrency.lockutils [req-12cff691-fb7d-4e6e-8cb6-4f259eff0e67 req-f00265f3-d4aa-4ceb-8ccb-8a420ca8e752 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.971 247708 DEBUG oslo_concurrency.lockutils [req-12cff691-fb7d-4e6e-8cb6-4f259eff0e67 req-f00265f3-d4aa-4ceb-8ccb-8a420ca8e752 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.972 247708 DEBUG oslo_concurrency.lockutils [req-12cff691-fb7d-4e6e-8cb6-4f259eff0e67 req-f00265f3-d4aa-4ceb-8ccb-8a420ca8e752 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.972 247708 DEBUG nova.compute.manager [req-12cff691-fb7d-4e6e-8cb6-4f259eff0e67 req-f00265f3-d4aa-4ceb-8ccb-8a420ca8e752 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] No waiting events found dispatching network-vif-unplugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:41:05 compute-0 nova_compute[247704]: 2026-01-31 07:41:05.972 247708 DEBUG nova.compute.manager [req-12cff691-fb7d-4e6e-8cb6-4f259eff0e67 req-f00265f3-d4aa-4ceb-8ccb-8a420ca8e752 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received event network-vif-unplugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.980 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bb18bac8-19ed-436a-b566-d984fecd9694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:05.984 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bf917bcf-c45a-4ace-a228-4db5a8e533d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:06.004 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[39888711-8370-48d5-b72b-725d572fe756]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545962, 'reachable_time': 30494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275910, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:06.007 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:41:06 compute-0 systemd[1]: run-netns-ovnmeta\x2dcffffabd\x2d62a6\x2d4362\x2d9315\x2dbd726adce623.mount: Deactivated successfully.
Jan 31 07:41:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:06.007 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[614eb9c6-e638-4743-b429-a7e1b855326e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:06.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:06.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:06 compute-0 nova_compute[247704]: 2026-01-31 07:41:06.608 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 296 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Jan 31 07:41:07 compute-0 nova_compute[247704]: 2026-01-31 07:41:07.317 247708 INFO nova.virt.libvirt.driver [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Deleting instance files /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef_del
Jan 31 07:41:07 compute-0 nova_compute[247704]: 2026-01-31 07:41:07.318 247708 INFO nova.virt.libvirt.driver [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Deletion of /var/lib/nova/instances/314f0738-9cae-4fe8-8b90-3d18f72488ef_del complete
Jan 31 07:41:07 compute-0 nova_compute[247704]: 2026-01-31 07:41:07.375 247708 INFO nova.compute.manager [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Took 1.84 seconds to destroy the instance on the hypervisor.
Jan 31 07:41:07 compute-0 nova_compute[247704]: 2026-01-31 07:41:07.376 247708 DEBUG oslo.service.loopingcall [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:41:07 compute-0 nova_compute[247704]: 2026-01-31 07:41:07.377 247708 DEBUG nova.compute.manager [-] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:41:07 compute-0 nova_compute[247704]: 2026-01-31 07:41:07.377 247708 DEBUG nova.network.neutron [-] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:41:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:08.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 31 07:41:08 compute-0 ceph-mon[74496]: pgmap v1254: 305 pgs: 305 active+clean; 296 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Jan 31 07:41:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3200938819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 31 07:41:08 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 31 07:41:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:41:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:08.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:41:08 compute-0 nova_compute[247704]: 2026-01-31 07:41:08.621 247708 DEBUG nova.compute.manager [req-3997caaa-9d33-40c3-bff4-f3b6ca93891b req-1debaa33-0da8-4833-bc2f-aa0e67f256dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received event network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:08 compute-0 nova_compute[247704]: 2026-01-31 07:41:08.622 247708 DEBUG oslo_concurrency.lockutils [req-3997caaa-9d33-40c3-bff4-f3b6ca93891b req-1debaa33-0da8-4833-bc2f-aa0e67f256dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:08 compute-0 nova_compute[247704]: 2026-01-31 07:41:08.622 247708 DEBUG oslo_concurrency.lockutils [req-3997caaa-9d33-40c3-bff4-f3b6ca93891b req-1debaa33-0da8-4833-bc2f-aa0e67f256dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:08 compute-0 nova_compute[247704]: 2026-01-31 07:41:08.622 247708 DEBUG oslo_concurrency.lockutils [req-3997caaa-9d33-40c3-bff4-f3b6ca93891b req-1debaa33-0da8-4833-bc2f-aa0e67f256dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:08 compute-0 nova_compute[247704]: 2026-01-31 07:41:08.623 247708 DEBUG nova.compute.manager [req-3997caaa-9d33-40c3-bff4-f3b6ca93891b req-1debaa33-0da8-4833-bc2f-aa0e67f256dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] No waiting events found dispatching network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:41:08 compute-0 nova_compute[247704]: 2026-01-31 07:41:08.623 247708 WARNING nova.compute.manager [req-3997caaa-9d33-40c3-bff4-f3b6ca93891b req-1debaa33-0da8-4833-bc2f-aa0e67f256dc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received unexpected event network-vif-plugged-cf7d03dc-440f-4773-a9d7-9519fc850bf4 for instance with vm_state active and task_state deleting.
Jan 31 07:41:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 255 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 6.4 MiB/s wr, 224 op/s
Jan 31 07:41:09 compute-0 nova_compute[247704]: 2026-01-31 07:41:09.098 247708 DEBUG nova.network.neutron [-] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:09 compute-0 nova_compute[247704]: 2026-01-31 07:41:09.151 247708 DEBUG nova.compute.manager [req-2546b8e9-90ff-4428-9a93-b49c7cda5955 req-fae45083-3103-46b9-b96d-b21eb68aa354 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Received event network-vif-deleted-cf7d03dc-440f-4773-a9d7-9519fc850bf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:09 compute-0 nova_compute[247704]: 2026-01-31 07:41:09.152 247708 INFO nova.compute.manager [req-2546b8e9-90ff-4428-9a93-b49c7cda5955 req-fae45083-3103-46b9-b96d-b21eb68aa354 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Neutron deleted interface cf7d03dc-440f-4773-a9d7-9519fc850bf4; detaching it from the instance and deleting it from the info cache
Jan 31 07:41:09 compute-0 nova_compute[247704]: 2026-01-31 07:41:09.152 247708 DEBUG nova.network.neutron [req-2546b8e9-90ff-4428-9a93-b49c7cda5955 req-fae45083-3103-46b9-b96d-b21eb68aa354 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:09 compute-0 ceph-mon[74496]: osdmap e189: 3 total, 3 up, 3 in
Jan 31 07:41:09 compute-0 nova_compute[247704]: 2026-01-31 07:41:09.942 247708 INFO nova.compute.manager [-] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Took 2.57 seconds to deallocate network for instance.
Jan 31 07:41:09 compute-0 nova_compute[247704]: 2026-01-31 07:41:09.955 247708 DEBUG nova.compute.manager [req-2546b8e9-90ff-4428-9a93-b49c7cda5955 req-fae45083-3103-46b9-b96d-b21eb68aa354 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Detach interface failed, port_id=cf7d03dc-440f-4773-a9d7-9519fc850bf4, reason: Instance 314f0738-9cae-4fe8-8b90-3d18f72488ef could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:41:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:10.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 31 07:41:10 compute-0 nova_compute[247704]: 2026-01-31 07:41:10.430 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:10 compute-0 nova_compute[247704]: 2026-01-31 07:41:10.431 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:10.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:10 compute-0 nova_compute[247704]: 2026-01-31 07:41:10.475 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:10 compute-0 nova_compute[247704]: 2026-01-31 07:41:10.476 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 258 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 326 op/s
Jan 31 07:41:10 compute-0 sudo[275914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:10 compute-0 sudo[275914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:10 compute-0 sudo[275914]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:10 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:41:10 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:41:10 compute-0 sudo[275940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:10 compute-0 sudo[275940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:10 compute-0 sudo[275940]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 31 07:41:10 compute-0 nova_compute[247704]: 2026-01-31 07:41:10.816 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 31 07:41:10 compute-0 nova_compute[247704]: 2026-01-31 07:41:10.869 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:41:10 compute-0 ceph-mon[74496]: pgmap v1256: 305 pgs: 305 active+clean; 255 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 6.4 MiB/s wr, 224 op/s
Jan 31 07:41:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:11.150 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:11.151 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:11.151 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:11 compute-0 nova_compute[247704]: 2026-01-31 07:41:11.610 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 31 07:41:12 compute-0 ceph-mon[74496]: pgmap v1257: 305 pgs: 305 active+clean; 258 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 326 op/s
Jan 31 07:41:12 compute-0 ceph-mon[74496]: osdmap e190: 3 total, 3 up, 3 in
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.074 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:12.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.172 247708 DEBUG oslo_concurrency.processutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 31 07:41:12 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 31 07:41:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:12.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:41:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069176226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.616 247708 DEBUG oslo_concurrency.processutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.622 247708 DEBUG nova.compute.provider_tree [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.641 247708 DEBUG nova.scheduler.client.report [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 302 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 9.5 MiB/s wr, 257 op/s
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.665 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.668 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.675 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.676 247708 INFO nova.compute.claims [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.694 247708 INFO nova.scheduler.client.report [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Deleted allocations for instance 314f0738-9cae-4fe8-8b90-3d18f72488ef
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.777 247708 DEBUG oslo_concurrency.lockutils [None req-88735fc7-03c1-4241-974c-009b699c7a03 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "314f0738-9cae-4fe8-8b90-3d18f72488ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:12 compute-0 nova_compute[247704]: 2026-01-31 07:41:12.786 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:41:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2175813159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.226 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.236 247708 DEBUG nova.compute.provider_tree [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.255 247708 DEBUG nova.scheduler.client.report [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.285 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.287 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.338 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.339 247708 DEBUG nova.network.neutron [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.363 247708 INFO nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.382 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:41:13 compute-0 ceph-mon[74496]: osdmap e191: 3 total, 3 up, 3 in
Jan 31 07:41:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2069176226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2175813159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.720 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.723 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.723 247708 INFO nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Creating image(s)
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.765 247708 DEBUG nova.storage.rbd_utils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.805 247708 DEBUG nova.storage.rbd_utils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.840 247708 DEBUG nova.storage.rbd_utils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.843 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.864 247708 DEBUG nova.policy [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '533eaca1e9c4430dabe2b0a39039ca65', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.902 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.903 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.903 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.904 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.932 247708 DEBUG nova.storage.rbd_utils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:13 compute-0 nova_compute[247704]: 2026-01-31 07:41:13.937 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:14.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:14 compute-0 ceph-mon[74496]: pgmap v1260: 305 pgs: 305 active+clean; 302 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 9.5 MiB/s wr, 257 op/s
Jan 31 07:41:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 302 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 8.7 MiB/s wr, 207 op/s
Jan 31 07:41:14 compute-0 nova_compute[247704]: 2026-01-31 07:41:14.693 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.756s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:14 compute-0 nova_compute[247704]: 2026-01-31 07:41:14.743 247708 DEBUG nova.network.neutron [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Successfully created port: 97a6ff46-4d74-473f-b3ea-55c79a77ab31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:41:14 compute-0 nova_compute[247704]: 2026-01-31 07:41:14.808 247708 DEBUG nova.storage.rbd_utils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] resizing rbd image c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.174 247708 DEBUG nova.objects.instance [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'migration_context' on Instance uuid c15e2982-5d12-4b02-9e21-c5ae81e3f478 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.206 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.206 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Ensure instance console log exists: /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.207 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.207 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.207 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:15 compute-0 ceph-mon[74496]: pgmap v1261: 305 pgs: 305 active+clean; 302 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 8.7 MiB/s wr, 207 op/s
Jan 31 07:41:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3783863931' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.707 247708 DEBUG nova.network.neutron [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Successfully updated port: 97a6ff46-4d74-473f-b3ea-55c79a77ab31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:41:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.819 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.823 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "refresh_cache-c15e2982-5d12-4b02-9e21-c5ae81e3f478" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.824 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquired lock "refresh_cache-c15e2982-5d12-4b02-9e21-c5ae81e3f478" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.824 247708 DEBUG nova.network.neutron [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.889 247708 DEBUG nova.compute.manager [req-bf1948e9-838c-4f80-9fe8-8173a1bb9684 req-f00b15dd-8560-482e-807c-8813ec93a0bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received event network-changed-97a6ff46-4d74-473f-b3ea-55c79a77ab31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.889 247708 DEBUG nova.compute.manager [req-bf1948e9-838c-4f80-9fe8-8173a1bb9684 req-f00b15dd-8560-482e-807c-8813ec93a0bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Refreshing instance network info cache due to event network-changed-97a6ff46-4d74-473f-b3ea-55c79a77ab31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:41:15 compute-0 nova_compute[247704]: 2026-01-31 07:41:15.890 247708 DEBUG oslo_concurrency.lockutils [req-bf1948e9-838c-4f80-9fe8-8173a1bb9684 req-f00b15dd-8560-482e-807c-8813ec93a0bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c15e2982-5d12-4b02-9e21-c5ae81e3f478" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:41:16 compute-0 nova_compute[247704]: 2026-01-31 07:41:16.011 247708 DEBUG nova.network.neutron [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:41:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:16.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:16 compute-0 nova_compute[247704]: 2026-01-31 07:41:16.612 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 368 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 241 op/s
Jan 31 07:41:16 compute-0 nova_compute[247704]: 2026-01-31 07:41:16.749 247708 DEBUG nova.network.neutron [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Updating instance_info_cache with network_info: [{"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1684801822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:16 compute-0 podman[276178]: 2026-01-31 07:41:16.885248294 +0000 UTC m=+0.055340677 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:41:16 compute-0 nova_compute[247704]: 2026-01-31 07:41:16.988 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Releasing lock "refresh_cache-c15e2982-5d12-4b02-9e21-c5ae81e3f478" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:41:16 compute-0 nova_compute[247704]: 2026-01-31 07:41:16.989 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Instance network_info: |[{"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:41:16 compute-0 nova_compute[247704]: 2026-01-31 07:41:16.989 247708 DEBUG oslo_concurrency.lockutils [req-bf1948e9-838c-4f80-9fe8-8173a1bb9684 req-f00b15dd-8560-482e-807c-8813ec93a0bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c15e2982-5d12-4b02-9e21-c5ae81e3f478" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:41:16 compute-0 nova_compute[247704]: 2026-01-31 07:41:16.990 247708 DEBUG nova.network.neutron [req-bf1948e9-838c-4f80-9fe8-8173a1bb9684 req-f00b15dd-8560-482e-807c-8813ec93a0bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Refreshing network info cache for port 97a6ff46-4d74-473f-b3ea-55c79a77ab31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:41:16 compute-0 nova_compute[247704]: 2026-01-31 07:41:16.997 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Start _get_guest_xml network_info=[{"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.004 247708 WARNING nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.010 247708 DEBUG nova.virt.libvirt.host [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.011 247708 DEBUG nova.virt.libvirt.host [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.016 247708 DEBUG nova.virt.libvirt.host [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.017 247708 DEBUG nova.virt.libvirt.host [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.019 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.019 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.020 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.020 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.021 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.021 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.022 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.022 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.023 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.023 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.024 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.024 247708 DEBUG nova.virt.hardware [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.029 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:41:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673872824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.508 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.546 247708 DEBUG nova.storage.rbd_utils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.551 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:17 compute-0 ceph-mon[74496]: pgmap v1262: 305 pgs: 305 active+clean; 368 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 241 op/s
Jan 31 07:41:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2673872824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:41:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454434337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.993 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.996 247708 DEBUG nova.virt.libvirt.vif [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:41:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1676067128',display_name='tempest-ImagesTestJSON-server-1676067128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1676067128',id=47,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-1fy00414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:41:13Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=c15e2982-5d12-4b02-9e21-c5ae81e3f478,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.997 247708 DEBUG nova.network.os_vif_util [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:41:17 compute-0 nova_compute[247704]: 2026-01-31 07:41:17.998 247708 DEBUG nova.network.os_vif_util [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:0e:6b,bridge_name='br-int',has_traffic_filtering=True,id=97a6ff46-4d74-473f-b3ea-55c79a77ab31,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97a6ff46-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.000 247708 DEBUG nova.objects.instance [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'pci_devices' on Instance uuid c15e2982-5d12-4b02-9e21-c5ae81e3f478 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.051 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <uuid>c15e2982-5d12-4b02-9e21-c5ae81e3f478</uuid>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <name>instance-0000002f</name>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <nova:name>tempest-ImagesTestJSON-server-1676067128</nova:name>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:41:17</nova:creationTime>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <nova:user uuid="533eaca1e9c4430dabe2b0a39039ca65">tempest-ImagesTestJSON-533495031-project-member</nova:user>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <nova:project uuid="b3e3e6f216d24c1f9f68777cfb63dbf8">tempest-ImagesTestJSON-533495031</nova:project>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <nova:port uuid="97a6ff46-4d74-473f-b3ea-55c79a77ab31">
Jan 31 07:41:18 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <system>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <entry name="serial">c15e2982-5d12-4b02-9e21-c5ae81e3f478</entry>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <entry name="uuid">c15e2982-5d12-4b02-9e21-c5ae81e3f478</entry>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </system>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <os>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   </os>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <features>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   </features>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk">
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       </source>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk.config">
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       </source>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:41:18 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:7f:0e:6b"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <target dev="tap97a6ff46-4d"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478/console.log" append="off"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <video>
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </video>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:41:18 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:41:18 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:41:18 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:41:18 compute-0 nova_compute[247704]: </domain>
Jan 31 07:41:18 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.053 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Preparing to wait for external event network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.054 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.054 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.055 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.056 247708 DEBUG nova.virt.libvirt.vif [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:41:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1676067128',display_name='tempest-ImagesTestJSON-server-1676067128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1676067128',id=47,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-1fy00414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:41:13Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=c15e2982-5d12-4b02-9e21-c5ae81e3f478,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.056 247708 DEBUG nova.network.os_vif_util [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.058 247708 DEBUG nova.network.os_vif_util [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:0e:6b,bridge_name='br-int',has_traffic_filtering=True,id=97a6ff46-4d74-473f-b3ea-55c79a77ab31,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97a6ff46-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.058 247708 DEBUG os_vif [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:0e:6b,bridge_name='br-int',has_traffic_filtering=True,id=97a6ff46-4d74-473f-b3ea-55c79a77ab31,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97a6ff46-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.059 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.060 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.061 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.065 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.066 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97a6ff46-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.066 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97a6ff46-4d, col_values=(('external_ids', {'iface-id': '97a6ff46-4d74-473f-b3ea-55c79a77ab31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:0e:6b', 'vm-uuid': 'c15e2982-5d12-4b02-9e21-c5ae81e3f478'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.069 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:18 compute-0 NetworkManager[49108]: <info>  [1769845278.0711] manager: (tap97a6ff46-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.081 247708 INFO os_vif [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:0e:6b,bridge_name='br-int',has_traffic_filtering=True,id=97a6ff46-4d74-473f-b3ea-55c79a77ab31,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97a6ff46-4d')
Jan 31 07:41:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:18.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.238 247708 DEBUG nova.network.neutron [req-bf1948e9-838c-4f80-9fe8-8173a1bb9684 req-f00b15dd-8560-482e-807c-8813ec93a0bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Updated VIF entry in instance network info cache for port 97a6ff46-4d74-473f-b3ea-55c79a77ab31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.240 247708 DEBUG nova.network.neutron [req-bf1948e9-838c-4f80-9fe8-8173a1bb9684 req-f00b15dd-8560-482e-807c-8813ec93a0bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Updating instance_info_cache with network_info: [{"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.291 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.292 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.292 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No VIF found with MAC fa:16:3e:7f:0e:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.292 247708 INFO nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Using config drive
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.324 247708 DEBUG nova.storage.rbd_utils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.332 247708 DEBUG oslo_concurrency.lockutils [req-bf1948e9-838c-4f80-9fe8-8173a1bb9684 req-f00b15dd-8560-482e-807c-8813ec93a0bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c15e2982-5d12-4b02-9e21-c5ae81e3f478" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:41:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:18.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 330 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 7.3 MiB/s wr, 147 op/s
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.817 247708 INFO nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Creating config drive at /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478/disk.config
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.822 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpanrl1aij execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3454434337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2857570762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.962 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpanrl1aij" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:18 compute-0 nova_compute[247704]: 2026-01-31 07:41:18.998 247708 DEBUG nova.storage.rbd_utils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] rbd image c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.003 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478/disk.config c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.599 247708 DEBUG oslo_concurrency.processutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478/disk.config c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.600 247708 INFO nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Deleting local config drive /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478/disk.config because it was imported into RBD.
Jan 31 07:41:19 compute-0 kernel: tap97a6ff46-4d: entered promiscuous mode
Jan 31 07:41:19 compute-0 NetworkManager[49108]: <info>  [1769845279.6448] manager: (tap97a6ff46-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Jan 31 07:41:19 compute-0 ovn_controller[149457]: 2026-01-31T07:41:19Z|00152|binding|INFO|Claiming lport 97a6ff46-4d74-473f-b3ea-55c79a77ab31 for this chassis.
Jan 31 07:41:19 compute-0 ovn_controller[149457]: 2026-01-31T07:41:19Z|00153|binding|INFO|97a6ff46-4d74-473f-b3ea-55c79a77ab31: Claiming fa:16:3e:7f:0e:6b 10.100.0.7
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.644 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-0 ovn_controller[149457]: 2026-01-31T07:41:19Z|00154|binding|INFO|Setting lport 97a6ff46-4d74-473f-b3ea-55c79a77ab31 ovn-installed in OVS
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.655 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.656 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-0 systemd-machined[214448]: New machine qemu-22-instance-0000002f.
Jan 31 07:41:19 compute-0 systemd-udevd[276334]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:41:19 compute-0 NetworkManager[49108]: <info>  [1769845279.6822] device (tap97a6ff46-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:41:19 compute-0 NetworkManager[49108]: <info>  [1769845279.6829] device (tap97a6ff46-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:41:19 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-0000002f.
Jan 31 07:41:19 compute-0 ovn_controller[149457]: 2026-01-31T07:41:19Z|00155|binding|INFO|Setting lport 97a6ff46-4d74-473f-b3ea-55c79a77ab31 up in Southbound
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.691 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:0e:6b 10.100.0.7'], port_security=['fa:16:3e:7f:0e:6b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c15e2982-5d12-4b02-9e21-c5ae81e3f478', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=97a6ff46-4d74-473f-b3ea-55c79a77ab31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.693 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 97a6ff46-4d74-473f-b3ea-55c79a77ab31 in datapath cffffabd-62a6-4362-9315-bd726adce623 bound to our chassis
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.696 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.708 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2a71f442-99fe-4a92-83b3-5789b220cdde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.709 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcffffabd-61 in ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.711 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcffffabd-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.711 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[01d48ad3-fb4a-4c28-834f-558d1176209c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.712 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c2351e99-a767-470a-a6f0-1f97679acaac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.726 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[fadcc833-8e5f-4139-9626-1c47afd599de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.739 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4e209a91-8ceb-4b75-844b-7ee2c3b30b39]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.762 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[30e07562-151c-456d-902a-0d62adaeb969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.767 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b99741ea-24cc-4c31-b151-b722a3ac2993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 NetworkManager[49108]: <info>  [1769845279.7685] manager: (tapcffffabd-60): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Jan 31 07:41:19 compute-0 systemd-udevd[276336]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.791 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[33168ea0-3c3f-4c59-8e04-5752fbc99813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.794 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bcda45ff-d39c-44b7-b176-9f321efbf478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 NetworkManager[49108]: <info>  [1769845279.8114] device (tapcffffabd-60): carrier: link connected
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.816 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[254682cb-8574-4cf3-8056-1a2c69796bd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.834 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5c675707-b594-4e1b-bc3b-7748eea31719]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552242, 'reachable_time': 34351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276367, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.850 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4f60609f-f257-412a-91c8-90d801e76086]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:96c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 552242, 'tstamp': 552242}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276368, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.867 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ba675876-6799-4c38-be78-820bcfbba1b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcffffabd-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:96:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552242, 'reachable_time': 34351, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276369, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.895 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2e6c97-477a-4c15-b0c4-3b4b17caa325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ceph-mon[74496]: pgmap v1263: 305 pgs: 305 active+clean; 330 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 7.3 MiB/s wr, 147 op/s
Jan 31 07:41:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3533032308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.954 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[98de42d3-5684-4c17-91c8-6d932429b6a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.956 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.956 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.957 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcffffabd-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.959 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-0 NetworkManager[49108]: <info>  [1769845279.9602] manager: (tapcffffabd-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Jan 31 07:41:19 compute-0 kernel: tapcffffabd-60: entered promiscuous mode
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.964 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.966 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcffffabd-60, col_values=(('external_ids', {'iface-id': '549e70cf-ed02-45f9-9021-3a04088f580f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.967 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-0 ovn_controller[149457]: 2026-01-31T07:41:19Z|00156|binding|INFO|Releasing lport 549e70cf-ed02-45f9-9021-3a04088f580f from this chassis (sb_readonly=0)
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.979 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-0 nova_compute[247704]: 2026-01-31 07:41:19.982 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.982 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.984 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f9915e79-671d-41fe-af70-2a19dc14f98a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.985 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/cffffabd-62a6-4362-9315-bd726adce623.pid.haproxy
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID cffffabd-62a6-4362-9315-bd726adce623
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:41:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:19.986 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'env', 'PROCESS_TAG=haproxy-cffffabd-62a6-4362-9315-bd726adce623', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cffffabd-62a6-4362-9315-bd726adce623.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:41:20
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'images']
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:41:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:20.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:41:20 compute-0 podman[276401]: 2026-01-31 07:41:20.337022654 +0000 UTC m=+0.036200548 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:41:20 compute-0 podman[276401]: 2026-01-31 07:41:20.488036002 +0000 UTC m=+0.187213816 container create 5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 07:41:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:20.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:20 compute-0 systemd[1]: Started libpod-conmon-5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0.scope.
Jan 31 07:41:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:41:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c1c9fdd8fcbe61104c5eebe3d4bf6a2f35b702b14691438aece11a06333648/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 242 op/s
Jan 31 07:41:20 compute-0 podman[276401]: 2026-01-31 07:41:20.691842423 +0000 UTC m=+0.391020307 container init 5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:41:20 compute-0 podman[276401]: 2026-01-31 07:41:20.699848729 +0000 UTC m=+0.399026573 container start 5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:41:20 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[276416]: [NOTICE]   (276420) : New worker (276422) forked
Jan 31 07:41:20 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[276416]: [NOTICE]   (276420) : Loading success.
Jan 31 07:41:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:20 compute-0 nova_compute[247704]: 2026-01-31 07:41:20.778 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845265.7766407, 314f0738-9cae-4fe8-8b90-3d18f72488ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 31 07:41:20 compute-0 nova_compute[247704]: 2026-01-31 07:41:20.779 247708 INFO nova.compute.manager [-] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] VM Stopped (Lifecycle Event)
Jan 31 07:41:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 31 07:41:20 compute-0 nova_compute[247704]: 2026-01-31 07:41:20.896 247708 DEBUG nova.compute.manager [None req-ff340f41-51c2-44d4-afca-2ad547de3f73 - - - - - -] [instance: 314f0738-9cae-4fe8-8b90-3d18f72488ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.268 247708 DEBUG nova.compute.manager [req-5c01eb41-3cfe-4af2-98fe-670b6627d56d req-2df8f7cf-22ad-4e93-adf1-14d4def15db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received event network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.268 247708 DEBUG oslo_concurrency.lockutils [req-5c01eb41-3cfe-4af2-98fe-670b6627d56d req-2df8f7cf-22ad-4e93-adf1-14d4def15db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.269 247708 DEBUG oslo_concurrency.lockutils [req-5c01eb41-3cfe-4af2-98fe-670b6627d56d req-2df8f7cf-22ad-4e93-adf1-14d4def15db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.270 247708 DEBUG oslo_concurrency.lockutils [req-5c01eb41-3cfe-4af2-98fe-670b6627d56d req-2df8f7cf-22ad-4e93-adf1-14d4def15db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.270 247708 DEBUG nova.compute.manager [req-5c01eb41-3cfe-4af2-98fe-670b6627d56d req-2df8f7cf-22ad-4e93-adf1-14d4def15db5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Processing event network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.298 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845281.2975278, c15e2982-5d12-4b02-9e21-c5ae81e3f478 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.298 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] VM Started (Lifecycle Event)
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.301 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.306 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.310 247708 INFO nova.virt.libvirt.driver [-] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Instance spawned successfully.
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.310 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.439 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.445 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.446 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.447 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.448 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.448 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.449 247708 DEBUG nova.virt.libvirt.driver [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.455 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.714 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.715 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845281.2978606, c15e2982-5d12-4b02-9e21-c5ae81e3f478 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.715 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] VM Paused (Lifecycle Event)
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.908 247708 INFO nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Took 8.19 seconds to spawn the instance on the hypervisor.
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.909 247708 DEBUG nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.911 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.922 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845281.3048277, c15e2982-5d12-4b02-9e21-c5ae81e3f478 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:21 compute-0 nova_compute[247704]: 2026-01-31 07:41:21.922 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] VM Resumed (Lifecycle Event)
Jan 31 07:41:22 compute-0 nova_compute[247704]: 2026-01-31 07:41:22.021 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:22 compute-0 nova_compute[247704]: 2026-01-31 07:41:22.026 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:41:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:22.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:22 compute-0 nova_compute[247704]: 2026-01-31 07:41:22.206 247708 INFO nova.compute.manager [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Took 10.96 seconds to build instance.
Jan 31 07:41:22 compute-0 nova_compute[247704]: 2026-01-31 07:41:22.347 247708 DEBUG oslo_concurrency.lockutils [None req-e710d60b-1c28-4ab6-869d-8ee1c973f6d2 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:22.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:22 compute-0 ceph-mon[74496]: pgmap v1264: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 242 op/s
Jan 31 07:41:22 compute-0 ceph-mon[74496]: osdmap e192: 3 total, 3 up, 3 in
Jan 31 07:41:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Jan 31 07:41:23 compute-0 nova_compute[247704]: 2026-01-31 07:41:23.071 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:23 compute-0 ceph-mon[74496]: pgmap v1266: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Jan 31 07:41:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3141007097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:24.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Jan 31 07:41:24 compute-0 nova_compute[247704]: 2026-01-31 07:41:24.792 247708 DEBUG nova.compute.manager [req-4cfad7ce-0ba1-457d-85f1-7bee0337f656 req-e49e6f9f-e127-4b4c-b475-77c365430966 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received event network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:24 compute-0 nova_compute[247704]: 2026-01-31 07:41:24.792 247708 DEBUG oslo_concurrency.lockutils [req-4cfad7ce-0ba1-457d-85f1-7bee0337f656 req-e49e6f9f-e127-4b4c-b475-77c365430966 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:24 compute-0 nova_compute[247704]: 2026-01-31 07:41:24.793 247708 DEBUG oslo_concurrency.lockutils [req-4cfad7ce-0ba1-457d-85f1-7bee0337f656 req-e49e6f9f-e127-4b4c-b475-77c365430966 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:24 compute-0 nova_compute[247704]: 2026-01-31 07:41:24.793 247708 DEBUG oslo_concurrency.lockutils [req-4cfad7ce-0ba1-457d-85f1-7bee0337f656 req-e49e6f9f-e127-4b4c-b475-77c365430966 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:24 compute-0 nova_compute[247704]: 2026-01-31 07:41:24.794 247708 DEBUG nova.compute.manager [req-4cfad7ce-0ba1-457d-85f1-7bee0337f656 req-e49e6f9f-e127-4b4c-b475-77c365430966 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] No waiting events found dispatching network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:41:24 compute-0 nova_compute[247704]: 2026-01-31 07:41:24.794 247708 WARNING nova.compute.manager [req-4cfad7ce-0ba1-457d-85f1-7bee0337f656 req-e49e6f9f-e127-4b4c-b475-77c365430966 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received unexpected event network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 for instance with vm_state active and task_state image_snapshot_pending.
Jan 31 07:41:25 compute-0 nova_compute[247704]: 2026-01-31 07:41:25.414 247708 DEBUG nova.compute.manager [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:25 compute-0 nova_compute[247704]: 2026-01-31 07:41:25.607 247708 INFO nova.compute.manager [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] instance snapshotting
Jan 31 07:41:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:25 compute-0 ceph-mon[74496]: pgmap v1267: 305 pgs: 305 active+clean; 213 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Jan 31 07:41:25 compute-0 nova_compute[247704]: 2026-01-31 07:41:25.943 247708 INFO nova.virt.libvirt.driver [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Beginning live snapshot process
Jan 31 07:41:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:26.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:26 compute-0 nova_compute[247704]: 2026-01-31 07:41:26.614 247708 DEBUG nova.virt.libvirt.imagebackend [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:41:26 compute-0 nova_compute[247704]: 2026-01-31 07:41:26.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 213 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 86 KiB/s wr, 251 op/s
Jan 31 07:41:26 compute-0 nova_compute[247704]: 2026-01-31 07:41:26.830 247708 DEBUG nova.storage.rbd_utils [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(9c56ae9aae594e55b89a882a01100b9d) on rbd image(c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:41:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 31 07:41:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 31 07:41:27 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 31 07:41:27 compute-0 ceph-mon[74496]: pgmap v1268: 305 pgs: 305 active+clean; 213 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 86 KiB/s wr, 251 op/s
Jan 31 07:41:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2654768386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:27 compute-0 nova_compute[247704]: 2026-01-31 07:41:27.971 247708 DEBUG nova.storage.rbd_utils [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] cloning vms/c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk@9c56ae9aae594e55b89a882a01100b9d to images/18697fed-07f1-495d-b1ed-b20c2cf25fce clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:41:28 compute-0 nova_compute[247704]: 2026-01-31 07:41:28.073 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:41:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:28.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:41:28 compute-0 nova_compute[247704]: 2026-01-31 07:41:28.177 247708 DEBUG nova.storage.rbd_utils [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] flattening images/18697fed-07f1-495d-b1ed-b20c2cf25fce flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:41:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:41:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:28.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:41:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 243 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 31 07:41:28 compute-0 nova_compute[247704]: 2026-01-31 07:41:28.893 247708 DEBUG nova.storage.rbd_utils [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] removing snapshot(9c56ae9aae594e55b89a882a01100b9d) on rbd image(c15e2982-5d12-4b02-9e21-c5ae81e3f478_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:41:28 compute-0 ceph-mon[74496]: osdmap e193: 3 total, 3 up, 3 in
Jan 31 07:41:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/628568958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 31 07:41:30 compute-0 ceph-mon[74496]: pgmap v1270: 305 pgs: 305 active+clean; 243 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 31 07:41:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 31 07:41:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 31 07:41:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:30.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:30 compute-0 nova_compute[247704]: 2026-01-31 07:41:30.156 247708 DEBUG nova.storage.rbd_utils [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] creating snapshot(snap) on rbd image(18697fed-07f1-495d-b1ed-b20c2cf25fce) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:41:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:30.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 325 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 325 op/s
Jan 31 07:41:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:30 compute-0 sudo[276619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:30 compute-0 sudo[276619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:30 compute-0 sudo[276619]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:30 compute-0 podman[276627]: 2026-01-31 07:41:30.943320459 +0000 UTC m=+0.116864772 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:41:30 compute-0 sudo[276660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:30 compute-0 sudo[276660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:30 compute-0 sudo[276660]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 31 07:41:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 31 07:41:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 31 07:41:31 compute-0 ceph-mon[74496]: osdmap e194: 3 total, 3 up, 3 in
Jan 31 07:41:31 compute-0 nova_compute[247704]: 2026-01-31 07:41:31.662 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 18697fed-07f1-495d-b1ed-b20c2cf25fce could not be found.
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 18697fed-07f1-495d-b1ed-b20c2cf25fce
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 18697fed-07f1-495d-b1ed-b20c2cf25fce could not be found.
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.061 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:41:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:32.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:32 compute-0 nova_compute[247704]: 2026-01-31 07:41:32.151 247708 DEBUG nova.storage.rbd_utils [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] removing snapshot(snap) on rbd image(18697fed-07f1-495d-b1ed-b20c2cf25fce) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:41:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:32.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 31 07:41:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 324 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 14 MiB/s wr, 385 op/s
Jan 31 07:41:32 compute-0 ceph-mon[74496]: pgmap v1272: 305 pgs: 305 active+clean; 325 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 325 op/s
Jan 31 07:41:32 compute-0 ceph-mon[74496]: osdmap e195: 3 total, 3 up, 3 in
Jan 31 07:41:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 31 07:41:32 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 31 07:41:33 compute-0 nova_compute[247704]: 2026-01-31 07:41:33.076 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:33 compute-0 nova_compute[247704]: 2026-01-31 07:41:33.386 247708 WARNING nova.compute.manager [None req-250d875a-f554-4a23-adc4-7111124c82f7 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Image not found during snapshot: nova.exception.ImageNotFound: Image 18697fed-07f1-495d-b1ed-b20c2cf25fce could not be found.
Jan 31 07:41:33 compute-0 ceph-mon[74496]: pgmap v1274: 305 pgs: 305 active+clean; 324 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 14 MiB/s wr, 385 op/s
Jan 31 07:41:33 compute-0 ceph-mon[74496]: osdmap e196: 3 total, 3 up, 3 in
Jan 31 07:41:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:34.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 324 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 10 MiB/s wr, 372 op/s
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004858094000093058 of space, bias 1.0, pg target 1.4574282000279173 quantized to 32 (current 32)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004288296226502885 of space, bias 1.0, pg target 1.2822005717243625 quantized to 32 (current 32)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:41:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 07:41:35 compute-0 nova_compute[247704]: 2026-01-31 07:41:35.537 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:35 compute-0 nova_compute[247704]: 2026-01-31 07:41:35.538 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:35 compute-0 nova_compute[247704]: 2026-01-31 07:41:35.538 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:35 compute-0 nova_compute[247704]: 2026-01-31 07:41:35.539 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:35 compute-0 nova_compute[247704]: 2026-01-31 07:41:35.539 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:35 compute-0 nova_compute[247704]: 2026-01-31 07:41:35.541 247708 INFO nova.compute.manager [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Terminating instance
Jan 31 07:41:35 compute-0 nova_compute[247704]: 2026-01-31 07:41:35.543 247708 DEBUG nova.compute.manager [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:41:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:35 compute-0 sshd-session[276733]: Invalid user sol from 45.148.10.240 port 54220
Jan 31 07:41:35 compute-0 sshd-session[276733]: Connection closed by invalid user sol 45.148.10.240 port 54220 [preauth]
Jan 31 07:41:36 compute-0 kernel: tap97a6ff46-4d (unregistering): left promiscuous mode
Jan 31 07:41:36 compute-0 NetworkManager[49108]: <info>  [1769845296.0068] device (tap97a6ff46-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.061 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 ovn_controller[149457]: 2026-01-31T07:41:36Z|00157|binding|INFO|Releasing lport 97a6ff46-4d74-473f-b3ea-55c79a77ab31 from this chassis (sb_readonly=0)
Jan 31 07:41:36 compute-0 ovn_controller[149457]: 2026-01-31T07:41:36Z|00158|binding|INFO|Setting lport 97a6ff46-4d74-473f-b3ea-55c79a77ab31 down in Southbound
Jan 31 07:41:36 compute-0 ovn_controller[149457]: 2026-01-31T07:41:36Z|00159|binding|INFO|Removing iface tap97a6ff46-4d ovn-installed in OVS
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.064 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.069 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Jan 31 07:41:36 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002f.scope: Consumed 13.813s CPU time.
Jan 31 07:41:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:36 compute-0 systemd-machined[214448]: Machine qemu-22-instance-0000002f terminated.
Jan 31 07:41:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:36.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.188 247708 INFO nova.virt.libvirt.driver [-] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Instance destroyed successfully.
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.188 247708 DEBUG nova.objects.instance [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lazy-loading 'resources' on Instance uuid c15e2982-5d12-4b02-9e21-c5ae81e3f478 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:41:36 compute-0 ceph-mon[74496]: pgmap v1276: 305 pgs: 305 active+clean; 324 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 10 MiB/s wr, 372 op/s
Jan 31 07:41:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:36.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.528 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:0e:6b 10.100.0.7'], port_security=['fa:16:3e:7f:0e:6b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c15e2982-5d12-4b02-9e21-c5ae81e3f478', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cffffabd-62a6-4362-9315-bd726adce623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3e3e6f216d24c1f9f68777cfb63dbf8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd60d680e-d6aa-48ac-a8a2-519ea9a8ff01', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9a503d6-c9cb-4329-87a2-a939359a3572, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=97a6ff46-4d74-473f-b3ea-55c79a77ab31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.531 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 97a6ff46-4d74-473f-b3ea-55c79a77ab31 in datapath cffffabd-62a6-4362-9315-bd726adce623 unbound from our chassis
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.533 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cffffabd-62a6-4362-9315-bd726adce623, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.535 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[92ea9099-282d-4b3c-bcba-d795d8f28241]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.535 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 namespace which is not needed anymore
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.612 247708 DEBUG nova.virt.libvirt.vif [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:41:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1676067128',display_name='tempest-ImagesTestJSON-server-1676067128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1676067128',id=47,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:41:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3e3e6f216d24c1f9f68777cfb63dbf8',ramdisk_id='',reservation_id='r-1fy00414',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-533495031',owner_user_name='tempest-ImagesTestJSON-533495031-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:41:33Z,user_data=None,user_id='533eaca1e9c4430dabe2b0a39039ca65',uuid=c15e2982-5d12-4b02-9e21-c5ae81e3f478,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.613 247708 DEBUG nova.network.os_vif_util [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converting VIF {"id": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "address": "fa:16:3e:7f:0e:6b", "network": {"id": "cffffabd-62a6-4362-9315-bd726adce623", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1696843136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3e3e6f216d24c1f9f68777cfb63dbf8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97a6ff46-4d", "ovs_interfaceid": "97a6ff46-4d74-473f-b3ea-55c79a77ab31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.614 247708 DEBUG nova.network.os_vif_util [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:0e:6b,bridge_name='br-int',has_traffic_filtering=True,id=97a6ff46-4d74-473f-b3ea-55c79a77ab31,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97a6ff46-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.614 247708 DEBUG os_vif [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:0e:6b,bridge_name='br-int',has_traffic_filtering=True,id=97a6ff46-4d74-473f-b3ea-55c79a77ab31,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97a6ff46-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.616 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.616 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97a6ff46-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.618 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.620 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.622 247708 INFO os_vif [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:0e:6b,bridge_name='br-int',has_traffic_filtering=True,id=97a6ff46-4d74-473f-b3ea-55c79a77ab31,network=Network(cffffabd-62a6-4362-9315-bd726adce623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97a6ff46-4d')
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.664 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 268 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 9.4 MiB/s wr, 467 op/s
Jan 31 07:41:36 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[276416]: [NOTICE]   (276420) : haproxy version is 2.8.14-c23fe91
Jan 31 07:41:36 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[276416]: [NOTICE]   (276420) : path to executable is /usr/sbin/haproxy
Jan 31 07:41:36 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[276416]: [WARNING]  (276420) : Exiting Master process...
Jan 31 07:41:36 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[276416]: [WARNING]  (276420) : Exiting Master process...
Jan 31 07:41:36 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[276416]: [ALERT]    (276420) : Current worker (276422) exited with code 143 (Terminated)
Jan 31 07:41:36 compute-0 neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623[276416]: [WARNING]  (276420) : All workers exited. Exiting... (0)
Jan 31 07:41:36 compute-0 systemd[1]: libpod-5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0.scope: Deactivated successfully.
Jan 31 07:41:36 compute-0 podman[276770]: 2026-01-31 07:41:36.805833955 +0000 UTC m=+0.167985246 container died 5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 07:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0-userdata-shm.mount: Deactivated successfully.
Jan 31 07:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-24c1c9fdd8fcbe61104c5eebe3d4bf6a2f35b702b14691438aece11a06333648-merged.mount: Deactivated successfully.
Jan 31 07:41:36 compute-0 podman[276770]: 2026-01-31 07:41:36.873309348 +0000 UTC m=+0.235460629 container cleanup 5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:41:36 compute-0 systemd[1]: libpod-conmon-5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0.scope: Deactivated successfully.
Jan 31 07:41:36 compute-0 podman[276818]: 2026-01-31 07:41:36.934295611 +0000 UTC m=+0.042994173 container remove 5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.938 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bef475fd-3845-47d4-861b-ada21d3ca2d5]: (4, ('Sat Jan 31 07:41:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0)\n5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0\nSat Jan 31 07:41:36 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 (5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0)\n5b29895db36ed7dbb6850544d5be5a8b9708e1a17fe876a844f6aea7073d8ce0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.940 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d5eb343a-8bac-4e40-813c-824035466e64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.940 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcffffabd-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.942 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 kernel: tapcffffabd-60: left promiscuous mode
Jan 31 07:41:36 compute-0 nova_compute[247704]: 2026-01-31 07:41:36.950 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.952 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[72520b4f-ca31-4c74-9b51-82da46220798]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.967 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfc0064-f313-4a9e-b997-0518c34cd0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.968 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[37809f48-df00-43f0-92c3-006c3294667a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.983 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad7dfb0-1139-4c1d-83ca-209b55858beb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 552237, 'reachable_time': 30959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276832, 'error': None, 'target': 'ovnmeta-cffffabd-62a6-4362-9315-bd726adce623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.986 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cffffabd-62a6-4362-9315-bd726adce623 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:41:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:36.986 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[cac43aa3-0e8c-4fa9-af4a-248390cd7ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:41:36 compute-0 systemd[1]: run-netns-ovnmeta\x2dcffffabd\x2d62a6\x2d4362\x2d9315\x2dbd726adce623.mount: Deactivated successfully.
Jan 31 07:41:37 compute-0 ceph-mon[74496]: pgmap v1277: 305 pgs: 305 active+clean; 268 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 9.4 MiB/s wr, 467 op/s
Jan 31 07:41:37 compute-0 nova_compute[247704]: 2026-01-31 07:41:37.814 247708 DEBUG nova.compute.manager [req-341b84dd-864e-4677-bc34-77f879c7380b req-7f71589a-c8f7-4d07-b2d0-fda3777a18f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received event network-vif-unplugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:37 compute-0 nova_compute[247704]: 2026-01-31 07:41:37.814 247708 DEBUG oslo_concurrency.lockutils [req-341b84dd-864e-4677-bc34-77f879c7380b req-7f71589a-c8f7-4d07-b2d0-fda3777a18f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:37 compute-0 nova_compute[247704]: 2026-01-31 07:41:37.815 247708 DEBUG oslo_concurrency.lockutils [req-341b84dd-864e-4677-bc34-77f879c7380b req-7f71589a-c8f7-4d07-b2d0-fda3777a18f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:37 compute-0 nova_compute[247704]: 2026-01-31 07:41:37.815 247708 DEBUG oslo_concurrency.lockutils [req-341b84dd-864e-4677-bc34-77f879c7380b req-7f71589a-c8f7-4d07-b2d0-fda3777a18f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:37 compute-0 nova_compute[247704]: 2026-01-31 07:41:37.815 247708 DEBUG nova.compute.manager [req-341b84dd-864e-4677-bc34-77f879c7380b req-7f71589a-c8f7-4d07-b2d0-fda3777a18f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] No waiting events found dispatching network-vif-unplugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:41:37 compute-0 nova_compute[247704]: 2026-01-31 07:41:37.816 247708 DEBUG nova.compute.manager [req-341b84dd-864e-4677-bc34-77f879c7380b req-7f71589a-c8f7-4d07-b2d0-fda3777a18f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received event network-vif-unplugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:41:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:38.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 261 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.5 MiB/s wr, 269 op/s
Jan 31 07:41:39 compute-0 nova_compute[247704]: 2026-01-31 07:41:39.100 247708 INFO nova.virt.libvirt.driver [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Deleting instance files /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478_del
Jan 31 07:41:39 compute-0 nova_compute[247704]: 2026-01-31 07:41:39.101 247708 INFO nova.virt.libvirt.driver [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Deletion of /var/lib/nova/instances/c15e2982-5d12-4b02-9e21-c5ae81e3f478_del complete
Jan 31 07:41:39 compute-0 nova_compute[247704]: 2026-01-31 07:41:39.334 247708 INFO nova.compute.manager [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Took 3.79 seconds to destroy the instance on the hypervisor.
Jan 31 07:41:39 compute-0 nova_compute[247704]: 2026-01-31 07:41:39.335 247708 DEBUG oslo.service.loopingcall [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:41:39 compute-0 nova_compute[247704]: 2026-01-31 07:41:39.336 247708 DEBUG nova.compute.manager [-] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:41:39 compute-0 nova_compute[247704]: 2026-01-31 07:41:39.336 247708 DEBUG nova.network.neutron [-] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:41:39 compute-0 ceph-mon[74496]: pgmap v1278: 305 pgs: 305 active+clean; 261 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.5 MiB/s wr, 269 op/s
Jan 31 07:41:40 compute-0 nova_compute[247704]: 2026-01-31 07:41:40.008 247708 DEBUG nova.compute.manager [req-f0b146d0-56b9-4f01-9a51-2b1c9b2614c0 req-e09b7725-7da7-4d0f-b79c-4f1991645f97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received event network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:40 compute-0 nova_compute[247704]: 2026-01-31 07:41:40.008 247708 DEBUG oslo_concurrency.lockutils [req-f0b146d0-56b9-4f01-9a51-2b1c9b2614c0 req-e09b7725-7da7-4d0f-b79c-4f1991645f97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:40 compute-0 nova_compute[247704]: 2026-01-31 07:41:40.009 247708 DEBUG oslo_concurrency.lockutils [req-f0b146d0-56b9-4f01-9a51-2b1c9b2614c0 req-e09b7725-7da7-4d0f-b79c-4f1991645f97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:40 compute-0 nova_compute[247704]: 2026-01-31 07:41:40.009 247708 DEBUG oslo_concurrency.lockutils [req-f0b146d0-56b9-4f01-9a51-2b1c9b2614c0 req-e09b7725-7da7-4d0f-b79c-4f1991645f97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:40 compute-0 nova_compute[247704]: 2026-01-31 07:41:40.009 247708 DEBUG nova.compute.manager [req-f0b146d0-56b9-4f01-9a51-2b1c9b2614c0 req-e09b7725-7da7-4d0f-b79c-4f1991645f97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] No waiting events found dispatching network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:41:40 compute-0 nova_compute[247704]: 2026-01-31 07:41:40.009 247708 WARNING nova.compute.manager [req-f0b146d0-56b9-4f01-9a51-2b1c9b2614c0 req-e09b7725-7da7-4d0f-b79c-4f1991645f97 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received unexpected event network-vif-plugged-97a6ff46-4d74-473f-b3ea-55c79a77ab31 for instance with vm_state active and task_state deleting.
Jan 31 07:41:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:40.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 211 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.8 MiB/s wr, 271 op/s
Jan 31 07:41:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.106 247708 DEBUG nova.network.neutron [-] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.181 247708 INFO nova.compute.manager [-] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Took 1.85 seconds to deallocate network for instance.
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.203 247708 DEBUG nova.compute.manager [req-db30be59-c1ce-48a8-abf5-0870deace002 req-b2a42c4a-74b7-4dfc-a945-0246e433402a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Received event network-vif-deleted-97a6ff46-4d74-473f-b3ea-55c79a77ab31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.204 247708 INFO nova.compute.manager [req-db30be59-c1ce-48a8-abf5-0870deace002 req-b2a42c4a-74b7-4dfc-a945-0246e433402a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Neutron deleted interface 97a6ff46-4d74-473f-b3ea-55c79a77ab31; detaching it from the instance and deleting it from the info cache
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.205 247708 DEBUG nova.network.neutron [req-db30be59-c1ce-48a8-abf5-0870deace002 req-b2a42c4a-74b7-4dfc-a945-0246e433402a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:41:41 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.280 247708 DEBUG nova.compute.manager [req-db30be59-c1ce-48a8-abf5-0870deace002 req-b2a42c4a-74b7-4dfc-a945-0246e433402a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Detach interface failed, port_id=97a6ff46-4d74-473f-b3ea-55c79a77ab31, reason: Instance c15e2982-5d12-4b02-9e21-c5ae81e3f478 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.335 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.336 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.403 247708 DEBUG oslo_concurrency.processutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.620 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.665 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:41:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739341122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.857 247708 DEBUG oslo_concurrency.processutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.866 247708 DEBUG nova.compute.provider_tree [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:41:41 compute-0 nova_compute[247704]: 2026-01-31 07:41:41.912 247708 DEBUG nova.scheduler.client.report [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:41:42 compute-0 nova_compute[247704]: 2026-01-31 07:41:42.047 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:42 compute-0 nova_compute[247704]: 2026-01-31 07:41:42.109 247708 INFO nova.scheduler.client.report [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Deleted allocations for instance c15e2982-5d12-4b02-9e21-c5ae81e3f478
Jan 31 07:41:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:42.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:42 compute-0 ceph-mon[74496]: pgmap v1279: 305 pgs: 305 active+clean; 211 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.8 MiB/s wr, 271 op/s
Jan 31 07:41:42 compute-0 ceph-mon[74496]: osdmap e197: 3 total, 3 up, 3 in
Jan 31 07:41:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2739341122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 222 op/s
Jan 31 07:41:42 compute-0 nova_compute[247704]: 2026-01-31 07:41:42.707 247708 DEBUG oslo_concurrency.lockutils [None req-39966c76-779f-4d5c-9fb4-c6cd7902d33a 533eaca1e9c4430dabe2b0a39039ca65 b3e3e6f216d24c1f9f68777cfb63dbf8 - - default default] Lock "c15e2982-5d12-4b02-9e21-c5ae81e3f478" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:41:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:43.223 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:41:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:43.224 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:41:43 compute-0 nova_compute[247704]: 2026-01-31 07:41:43.224 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:43 compute-0 ceph-mon[74496]: pgmap v1281: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 222 op/s
Jan 31 07:41:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:44.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:41:44.227 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:41:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 218 op/s
Jan 31 07:41:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:41:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1538759407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:41:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:41:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1538759407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:41:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:45 compute-0 ceph-mon[74496]: pgmap v1282: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 218 op/s
Jan 31 07:41:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1538759407' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:41:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1538759407' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:41:46 compute-0 sudo[276861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:46 compute-0 sudo[276861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:46 compute-0 sudo[276861]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:46 compute-0 sudo[276886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:41:46 compute-0 sudo[276886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:46 compute-0 sudo[276886]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:46.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:46 compute-0 sudo[276911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:46 compute-0 sudo[276911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:46 compute-0 sudo[276911]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:46 compute-0 sudo[276936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:41:46 compute-0 sudo[276936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:46 compute-0 nova_compute[247704]: 2026-01-31 07:41:46.624 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:46 compute-0 sudo[276936]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:46 compute-0 nova_compute[247704]: 2026-01-31 07:41:46.667 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 592 KiB/s rd, 1.0 MiB/s wr, 97 op/s
Jan 31 07:41:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/851585430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:41:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:41:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:41:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:47 compute-0 podman[276993]: 2026-01-31 07:41:47.889029978 +0000 UTC m=+0.068290403 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:41:48 compute-0 ceph-mon[74496]: pgmap v1283: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 592 KiB/s rd, 1.0 MiB/s wr, 97 op/s
Jan 31 07:41:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:41:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:41:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:41:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:41:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:41:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 87bc0738-890e-4c82-9211-909424d79e96 does not exist
Jan 31 07:41:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c576aa42-b603-42ef-90c3-25c17f8e7946 does not exist
Jan 31 07:41:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e56dfe84-1ab6-42ba-9e49-0ff2720405df does not exist
Jan 31 07:41:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:41:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:41:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:41:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:41:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:41:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:41:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 695 KiB/s rd, 272 KiB/s wr, 102 op/s
Jan 31 07:41:48 compute-0 sudo[277012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:48 compute-0 sudo[277012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:48 compute-0 sudo[277012]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:48 compute-0 sudo[277037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:41:48 compute-0 sudo[277037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:48 compute-0 sudo[277037]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:48 compute-0 sudo[277062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:48 compute-0 sudo[277062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:48 compute-0 sudo[277062]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:48 compute-0 sudo[277087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:41:48 compute-0 sudo[277087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:41:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:41:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:41:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:41:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:41:49 compute-0 podman[277152]: 2026-01-31 07:41:49.27990655 +0000 UTC m=+0.030572680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:41:49 compute-0 podman[277152]: 2026-01-31 07:41:49.417757215 +0000 UTC m=+0.168423365 container create 243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_nobel, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:41:49 compute-0 systemd[1]: Started libpod-conmon-243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4.scope.
Jan 31 07:41:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:41:49 compute-0 podman[277152]: 2026-01-31 07:41:49.864219288 +0000 UTC m=+0.614885488 container init 243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:41:49 compute-0 podman[277152]: 2026-01-31 07:41:49.873401384 +0000 UTC m=+0.624067524 container start 243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:41:49 compute-0 awesome_nobel[277171]: 167 167
Jan 31 07:41:49 compute-0 systemd[1]: libpod-243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4.scope: Deactivated successfully.
Jan 31 07:41:49 compute-0 podman[277152]: 2026-01-31 07:41:49.895264688 +0000 UTC m=+0.645930838 container attach 243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_nobel, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:41:49 compute-0 podman[277152]: 2026-01-31 07:41:49.896716654 +0000 UTC m=+0.647382794 container died 243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_nobel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-04d96e0e4606117817781eea78b4ff23decd5355e7c7ab39f978eb3d41fa537c-merged.mount: Deactivated successfully.
Jan 31 07:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:41:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:50 compute-0 podman[277152]: 2026-01-31 07:41:50.176654469 +0000 UTC m=+0.927320619 container remove 243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:41:50 compute-0 systemd[1]: libpod-conmon-243d0dc04fa5d75f94f431946140d9a7c4c1a3b0c1080258cbe4df17d8efb5e4.scope: Deactivated successfully.
Jan 31 07:41:50 compute-0 podman[277197]: 2026-01-31 07:41:50.328040867 +0000 UTC m=+0.019658073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:41:50 compute-0 podman[277197]: 2026-01-31 07:41:50.441456834 +0000 UTC m=+0.133074020 container create 5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:41:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:50 compute-0 ceph-mon[74496]: pgmap v1284: 305 pgs: 305 active+clean; 200 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 695 KiB/s rd, 272 KiB/s wr, 102 op/s
Jan 31 07:41:50 compute-0 systemd[1]: Started libpod-conmon-5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de.scope.
Jan 31 07:41:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc46d1b2eeece0bc093fc4c5420e343a11ef167129736ffaed6f9eea4543a0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc46d1b2eeece0bc093fc4c5420e343a11ef167129736ffaed6f9eea4543a0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc46d1b2eeece0bc093fc4c5420e343a11ef167129736ffaed6f9eea4543a0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc46d1b2eeece0bc093fc4c5420e343a11ef167129736ffaed6f9eea4543a0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc46d1b2eeece0bc093fc4c5420e343a11ef167129736ffaed6f9eea4543a0d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 46 KiB/s wr, 64 op/s
Jan 31 07:41:50 compute-0 podman[277197]: 2026-01-31 07:41:50.775981676 +0000 UTC m=+0.467598882 container init 5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shirley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:41:50 compute-0 podman[277197]: 2026-01-31 07:41:50.784771931 +0000 UTC m=+0.476389107 container start 5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:41:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:50 compute-0 podman[277197]: 2026-01-31 07:41:50.8819062 +0000 UTC m=+0.573523466 container attach 5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:41:51 compute-0 sudo[277219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:51 compute-0 sudo[277219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:51 compute-0 sudo[277219]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:51 compute-0 sudo[277244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:51 compute-0 sudo[277244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:51 compute-0 sudo[277244]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:51 compute-0 nova_compute[247704]: 2026-01-31 07:41:51.186 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845296.185311, c15e2982-5d12-4b02-9e21-c5ae81e3f478 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:41:51 compute-0 nova_compute[247704]: 2026-01-31 07:41:51.188 247708 INFO nova.compute.manager [-] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] VM Stopped (Lifecycle Event)
Jan 31 07:41:51 compute-0 nova_compute[247704]: 2026-01-31 07:41:51.308 247708 DEBUG nova.compute.manager [None req-c0e340ea-0a28-4436-9d5f-dc2b091afaf6 - - - - - -] [instance: c15e2982-5d12-4b02-9e21-c5ae81e3f478] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:41:51 compute-0 flamboyant_shirley[277214]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:41:51 compute-0 flamboyant_shirley[277214]: --> relative data size: 1.0
Jan 31 07:41:51 compute-0 flamboyant_shirley[277214]: --> All data devices are unavailable
Jan 31 07:41:51 compute-0 systemd[1]: libpod-5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de.scope: Deactivated successfully.
Jan 31 07:41:51 compute-0 podman[277197]: 2026-01-31 07:41:51.602506146 +0000 UTC m=+1.294123412 container died 5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shirley, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:41:51 compute-0 nova_compute[247704]: 2026-01-31 07:41:51.629 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:51 compute-0 nova_compute[247704]: 2026-01-31 07:41:51.712 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fc46d1b2eeece0bc093fc4c5420e343a11ef167129736ffaed6f9eea4543a0d-merged.mount: Deactivated successfully.
Jan 31 07:41:51 compute-0 ceph-mon[74496]: pgmap v1285: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 46 KiB/s wr, 64 op/s
Jan 31 07:41:52 compute-0 podman[277197]: 2026-01-31 07:41:52.073185443 +0000 UTC m=+1.764802619 container remove 5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:41:52 compute-0 systemd[1]: libpod-conmon-5b2f8a118d3bb4b50cbaca16cf3b6ceff8836192e7bb1875c1a7e1461afc17de.scope: Deactivated successfully.
Jan 31 07:41:52 compute-0 sudo[277087]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 07:41:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:52.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 07:41:52 compute-0 sudo[277293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:52 compute-0 sudo[277293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:52 compute-0 sudo[277293]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:52 compute-0 sudo[277318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:41:52 compute-0 sudo[277318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:52 compute-0 sudo[277318]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:52 compute-0 sudo[277343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:52 compute-0 sudo[277343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:52 compute-0 sudo[277343]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:52 compute-0 sudo[277368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:41:52 compute-0 sudo[277368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:52.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 46 KiB/s wr, 57 op/s
Jan 31 07:41:52 compute-0 podman[277436]: 2026-01-31 07:41:52.836217289 +0000 UTC m=+0.087766130 container create 7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:41:52 compute-0 podman[277436]: 2026-01-31 07:41:52.781885869 +0000 UTC m=+0.033434760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:41:52 compute-0 systemd[1]: Started libpod-conmon-7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091.scope.
Jan 31 07:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:41:53 compute-0 podman[277436]: 2026-01-31 07:41:53.067661607 +0000 UTC m=+0.319210458 container init 7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:41:53 compute-0 podman[277436]: 2026-01-31 07:41:53.074655338 +0000 UTC m=+0.326204169 container start 7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:41:53 compute-0 eager_keller[277453]: 167 167
Jan 31 07:41:53 compute-0 systemd[1]: libpod-7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091.scope: Deactivated successfully.
Jan 31 07:41:53 compute-0 podman[277436]: 2026-01-31 07:41:53.104989241 +0000 UTC m=+0.356538142 container attach 7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:41:53 compute-0 podman[277436]: 2026-01-31 07:41:53.106305432 +0000 UTC m=+0.357854273 container died 7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-38a2ecedf9284713d13fb5974bec67c55bf78ee7c4f8e25d3bda7f90d7949e48-merged.mount: Deactivated successfully.
Jan 31 07:41:53 compute-0 podman[277436]: 2026-01-31 07:41:53.265366228 +0000 UTC m=+0.516915029 container remove 7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_keller, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:41:53 compute-0 systemd[1]: libpod-conmon-7a011557db776f796d5118b054de8fcf67fd1f0302624a585a414c925a50d091.scope: Deactivated successfully.
Jan 31 07:41:53 compute-0 podman[277478]: 2026-01-31 07:41:53.480745753 +0000 UTC m=+0.084045479 container create 819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:41:53 compute-0 podman[277478]: 2026-01-31 07:41:53.435145795 +0000 UTC m=+0.038445571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:41:53 compute-0 systemd[1]: Started libpod-conmon-819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66.scope.
Jan 31 07:41:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f0cfcbed8bcf0d296c4341c0f8cd6a477730c76504253f109bbeca02128d55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f0cfcbed8bcf0d296c4341c0f8cd6a477730c76504253f109bbeca02128d55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f0cfcbed8bcf0d296c4341c0f8cd6a477730c76504253f109bbeca02128d55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77f0cfcbed8bcf0d296c4341c0f8cd6a477730c76504253f109bbeca02128d55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:53 compute-0 podman[277478]: 2026-01-31 07:41:53.600612099 +0000 UTC m=+0.203911795 container init 819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:41:53 compute-0 podman[277478]: 2026-01-31 07:41:53.612258434 +0000 UTC m=+0.215558160 container start 819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:41:53 compute-0 podman[277478]: 2026-01-31 07:41:53.640590068 +0000 UTC m=+0.243889774 container attach 819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:41:53 compute-0 ceph-mon[74496]: pgmap v1286: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 46 KiB/s wr, 57 op/s
Jan 31 07:41:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:54.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:54 compute-0 nova_compute[247704]: 2026-01-31 07:41:54.217 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]: {
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:     "0": [
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:         {
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "devices": [
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "/dev/loop3"
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             ],
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "lv_name": "ceph_lv0",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "lv_size": "7511998464",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "name": "ceph_lv0",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "tags": {
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.cluster_name": "ceph",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.crush_device_class": "",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.encrypted": "0",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.osd_id": "0",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.type": "block",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:                 "ceph.vdo": "0"
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             },
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "type": "block",
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:             "vg_name": "ceph_vg0"
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:         }
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]:     ]
Jan 31 07:41:54 compute-0 inspiring_thompson[277495]: }
Jan 31 07:41:54 compute-0 systemd[1]: libpod-819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66.scope: Deactivated successfully.
Jan 31 07:41:54 compute-0 podman[277478]: 2026-01-31 07:41:54.365838128 +0000 UTC m=+0.969137814 container died 819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-77f0cfcbed8bcf0d296c4341c0f8cd6a477730c76504253f109bbeca02128d55-merged.mount: Deactivated successfully.
Jan 31 07:41:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:41:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:54.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:41:54 compute-0 podman[277478]: 2026-01-31 07:41:54.580835014 +0000 UTC m=+1.184134710 container remove 819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:41:54 compute-0 systemd[1]: libpod-conmon-819f7438248b44049ab715cf2eb9ca80fd42613a2ce1196a08473d275db68e66.scope: Deactivated successfully.
Jan 31 07:41:54 compute-0 sudo[277368]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:54 compute-0 sudo[277518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 35 KiB/s wr, 43 op/s
Jan 31 07:41:54 compute-0 sudo[277518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:54 compute-0 sudo[277518]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:54 compute-0 sudo[277543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:41:54 compute-0 sudo[277543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:54 compute-0 sudo[277543]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:54 compute-0 sudo[277568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:54 compute-0 sudo[277568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:54 compute-0 sudo[277568]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:54 compute-0 sudo[277593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:41:54 compute-0 sudo[277593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/294200435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:55 compute-0 podman[277659]: 2026-01-31 07:41:55.247863579 +0000 UTC m=+0.068962430 container create b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:41:55 compute-0 podman[277659]: 2026-01-31 07:41:55.20420631 +0000 UTC m=+0.025305161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:41:55 compute-0 systemd[1]: Started libpod-conmon-b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb.scope.
Jan 31 07:41:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:41:55 compute-0 podman[277659]: 2026-01-31 07:41:55.421700046 +0000 UTC m=+0.242798967 container init b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:41:55 compute-0 podman[277659]: 2026-01-31 07:41:55.428504452 +0000 UTC m=+0.249603293 container start b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:41:55 compute-0 nifty_babbage[277677]: 167 167
Jan 31 07:41:55 compute-0 systemd[1]: libpod-b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb.scope: Deactivated successfully.
Jan 31 07:41:55 compute-0 podman[277659]: 2026-01-31 07:41:55.45985945 +0000 UTC m=+0.280958281 container attach b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:41:55 compute-0 podman[277659]: 2026-01-31 07:41:55.46025219 +0000 UTC m=+0.281351031 container died b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:41:55 compute-0 nova_compute[247704]: 2026-01-31 07:41:55.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:55 compute-0 nova_compute[247704]: 2026-01-31 07:41:55.566 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:41:55 compute-0 nova_compute[247704]: 2026-01-31 07:41:55.567 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d79deda16689a58c1f0ad45e1a8d879e4ee51c5e4c4264e40f4be9a28019a93-merged.mount: Deactivated successfully.
Jan 31 07:41:55 compute-0 nova_compute[247704]: 2026-01-31 07:41:55.592 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:41:55 compute-0 podman[277659]: 2026-01-31 07:41:55.834450203 +0000 UTC m=+0.655549064 container remove b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:41:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:41:55 compute-0 systemd[1]: libpod-conmon-b870c95e2ec1b974e0d563ea1a7a7022103a781ce835c3e11f058163786d14cb.scope: Deactivated successfully.
Jan 31 07:41:56 compute-0 podman[277704]: 2026-01-31 07:41:56.050184416 +0000 UTC m=+0.080525802 container create 6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:41:56 compute-0 podman[277704]: 2026-01-31 07:41:55.996642515 +0000 UTC m=+0.026983961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:41:56 compute-0 systemd[1]: Started libpod-conmon-6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b.scope.
Jan 31 07:41:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f13aa007edd52b3d3a0e8aa208bb4d51cef7f12aad4330516a410a3f767687a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f13aa007edd52b3d3a0e8aa208bb4d51cef7f12aad4330516a410a3f767687a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f13aa007edd52b3d3a0e8aa208bb4d51cef7f12aad4330516a410a3f767687a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f13aa007edd52b3d3a0e8aa208bb4d51cef7f12aad4330516a410a3f767687a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:41:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:56.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:56 compute-0 podman[277704]: 2026-01-31 07:41:56.171707422 +0000 UTC m=+0.202048798 container init 6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:41:56 compute-0 podman[277704]: 2026-01-31 07:41:56.177507254 +0000 UTC m=+0.207848620 container start 6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:41:56 compute-0 podman[277704]: 2026-01-31 07:41:56.192494441 +0000 UTC m=+0.222835827 container attach 6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:41:56 compute-0 ceph-mon[74496]: pgmap v1287: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 35 KiB/s wr, 43 op/s
Jan 31 07:41:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/570766913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:41:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:56.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:56 compute-0 nova_compute[247704]: 2026-01-31 07:41:56.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:56 compute-0 nova_compute[247704]: 2026-01-31 07:41:56.633 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 31 07:41:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 35 KiB/s wr, 46 op/s
Jan 31 07:41:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 31 07:41:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 31 07:41:56 compute-0 nova_compute[247704]: 2026-01-31 07:41:56.713 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:41:57 compute-0 reverent_clarke[277720]: {
Jan 31 07:41:57 compute-0 reverent_clarke[277720]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:41:57 compute-0 reverent_clarke[277720]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:41:57 compute-0 reverent_clarke[277720]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:41:57 compute-0 reverent_clarke[277720]:         "osd_id": 0,
Jan 31 07:41:57 compute-0 reverent_clarke[277720]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:41:57 compute-0 reverent_clarke[277720]:         "type": "bluestore"
Jan 31 07:41:57 compute-0 reverent_clarke[277720]:     }
Jan 31 07:41:57 compute-0 reverent_clarke[277720]: }
Jan 31 07:41:57 compute-0 systemd[1]: libpod-6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b.scope: Deactivated successfully.
Jan 31 07:41:57 compute-0 podman[277704]: 2026-01-31 07:41:57.134637103 +0000 UTC m=+1.164978489 container died 6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f13aa007edd52b3d3a0e8aa208bb4d51cef7f12aad4330516a410a3f767687a-merged.mount: Deactivated successfully.
Jan 31 07:41:57 compute-0 podman[277704]: 2026-01-31 07:41:57.244233797 +0000 UTC m=+1.274575213 container remove 6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:41:57 compute-0 systemd[1]: libpod-conmon-6b2579c54161a1f790485f3730e17291ab69963e5e048a034f7c57ab89a3708b.scope: Deactivated successfully.
Jan 31 07:41:57 compute-0 sudo[277593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:41:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:41:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e3c3b8d3-0f2f-48f9-87cf-a65871c3a4b7 does not exist
Jan 31 07:41:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 84cebe27-c0e5-4ab9-99e0-5110a38bd4bc does not exist
Jan 31 07:41:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6e153edc-94e6-4af8-b9f6-c481d1a02cd5 does not exist
Jan 31 07:41:57 compute-0 sudo[277756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:41:57 compute-0 sudo[277756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:57 compute-0 sudo[277756]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:57 compute-0 sudo[277781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:41:57 compute-0 sudo[277781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:41:57 compute-0 sudo[277781]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:57 compute-0 ceph-mon[74496]: pgmap v1288: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 35 KiB/s wr, 46 op/s
Jan 31 07:41:57 compute-0 ceph-mon[74496]: osdmap e198: 3 total, 3 up, 3 in
Jan 31 07:41:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:41:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:41:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:41:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:41:58 compute-0 nova_compute[247704]: 2026-01-31 07:41:58.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:41:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 24 KiB/s wr, 9 op/s
Jan 31 07:41:59 compute-0 nova_compute[247704]: 2026-01-31 07:41:59.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:00.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:00 compute-0 ceph-mon[74496]: pgmap v1290: 305 pgs: 305 active+clean; 202 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 8.5 KiB/s rd, 24 KiB/s wr, 9 op/s
Jan 31 07:42:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 209 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 560 KiB/s wr, 33 op/s
Jan 31 07:42:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.244 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.245 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.303 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.304 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.304 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.305 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.305 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.635 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991141632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.769 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:01 compute-0 podman[277831]: 2026-01-31 07:42:01.90937244 +0000 UTC m=+0.092841725 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.964 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.966 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4648MB free_disk=20.89716339111328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.967 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:01 compute-0 nova_compute[247704]: 2026-01-31 07:42:01.967 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:02 compute-0 nova_compute[247704]: 2026-01-31 07:42:02.143 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:42:02 compute-0 nova_compute[247704]: 2026-01-31 07:42:02.143 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:42:02 compute-0 nova_compute[247704]: 2026-01-31 07:42:02.163 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 31 07:42:02 compute-0 ceph-mon[74496]: pgmap v1291: 305 pgs: 305 active+clean; 209 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 560 KiB/s wr, 33 op/s
Jan 31 07:42:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1991141632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2218541968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 31 07:42:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 31 07:42:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:02.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473502299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:02 compute-0 nova_compute[247704]: 2026-01-31 07:42:02.615 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:02 compute-0 nova_compute[247704]: 2026-01-31 07:42:02.623 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:42:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 235 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 57 op/s
Jan 31 07:42:02 compute-0 nova_compute[247704]: 2026-01-31 07:42:02.847 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:42:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 31 07:42:03 compute-0 nova_compute[247704]: 2026-01-31 07:42:03.512 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:42:03 compute-0 nova_compute[247704]: 2026-01-31 07:42:03.513 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 31 07:42:03 compute-0 ceph-mon[74496]: osdmap e199: 3 total, 3 up, 3 in
Jan 31 07:42:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3473502299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:03 compute-0 ceph-mon[74496]: pgmap v1293: 305 pgs: 305 active+clean; 235 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 57 op/s
Jan 31 07:42:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1371050203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 31 07:42:03 compute-0 nova_compute[247704]: 2026-01-31 07:42:03.830 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:03 compute-0 nova_compute[247704]: 2026-01-31 07:42:03.831 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:03 compute-0 nova_compute[247704]: 2026-01-31 07:42:03.831 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:03 compute-0 nova_compute[247704]: 2026-01-31 07:42:03.832 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:03 compute-0 nova_compute[247704]: 2026-01-31 07:42:03.832 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:42:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:04.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:04.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:04 compute-0 ceph-mon[74496]: osdmap e200: 3 total, 3 up, 3 in
Jan 31 07:42:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 235 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 53 op/s
Jan 31 07:42:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:05 compute-0 ceph-mon[74496]: pgmap v1295: 305 pgs: 305 active+clean; 235 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 53 op/s
Jan 31 07:42:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:06.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:06.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:06 compute-0 nova_compute[247704]: 2026-01-31 07:42:06.640 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 283 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 105 op/s
Jan 31 07:42:06 compute-0 nova_compute[247704]: 2026-01-31 07:42:06.753 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:07 compute-0 ceph-mon[74496]: pgmap v1296: 305 pgs: 305 active+clean; 283 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 105 op/s
Jan 31 07:42:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:08.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:08.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 283 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.1 MiB/s wr, 89 op/s
Jan 31 07:42:08 compute-0 nova_compute[247704]: 2026-01-31 07:42:08.821 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "dcef0d49-1223-4316-994c-4036983d5b73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:08 compute-0 nova_compute[247704]: 2026-01-31 07:42:08.822 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:08 compute-0 nova_compute[247704]: 2026-01-31 07:42:08.986 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:42:09 compute-0 nova_compute[247704]: 2026-01-31 07:42:09.231 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:09 compute-0 nova_compute[247704]: 2026-01-31 07:42:09.232 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:09 compute-0 nova_compute[247704]: 2026-01-31 07:42:09.241 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:42:09 compute-0 nova_compute[247704]: 2026-01-31 07:42:09.241 247708 INFO nova.compute.claims [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:42:09 compute-0 ceph-mon[74496]: pgmap v1297: 305 pgs: 305 active+clean; 283 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.1 MiB/s wr, 89 op/s
Jan 31 07:42:09 compute-0 nova_compute[247704]: 2026-01-31 07:42:09.787 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:42:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:10.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:42:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679202319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.199 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.204 247708 DEBUG nova.compute.provider_tree [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.320 247708 DEBUG nova.scheduler.client.report [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:42:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:10.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.572 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.574 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:42:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 265 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 103 op/s
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.771 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.772 247708 DEBUG nova.network.neutron [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.809 247708 INFO nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:42:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3679202319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 31 07:42:10 compute-0 nova_compute[247704]: 2026-01-31 07:42:10.932 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:42:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.031 247708 DEBUG nova.policy [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b873da8845e6461088fcff99c5c140b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '016f45da455049d7aad578f0a534a0f2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:42:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.137 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.139 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.139 247708 INFO nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Creating image(s)
Jan 31 07:42:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:11.151 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:11.151 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:11.152 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:11 compute-0 sudo[277907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:11 compute-0 sudo[277907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:11 compute-0 sudo[277907]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.187 247708 DEBUG nova.storage.rbd_utils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image dcef0d49-1223-4316-994c-4036983d5b73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.228 247708 DEBUG nova.storage.rbd_utils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image dcef0d49-1223-4316-994c-4036983d5b73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:11 compute-0 sudo[277950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:11 compute-0 sudo[277950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:11 compute-0 sudo[277950]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.269 247708 DEBUG nova.storage.rbd_utils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image dcef0d49-1223-4316-994c-4036983d5b73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.273 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.333 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.334 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.335 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.335 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.361 247708 DEBUG nova.storage.rbd_utils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image dcef0d49-1223-4316-994c-4036983d5b73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.366 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 dcef0d49-1223-4316-994c-4036983d5b73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.644 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:11 compute-0 nova_compute[247704]: 2026-01-31 07:42:11.755 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:12 compute-0 ceph-mon[74496]: pgmap v1298: 305 pgs: 305 active+clean; 265 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 103 op/s
Jan 31 07:42:12 compute-0 ceph-mon[74496]: osdmap e201: 3 total, 3 up, 3 in
Jan 31 07:42:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:12.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:12.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 246 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 184 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Jan 31 07:42:12 compute-0 nova_compute[247704]: 2026-01-31 07:42:12.728 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 dcef0d49-1223-4316-994c-4036983d5b73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:12 compute-0 nova_compute[247704]: 2026-01-31 07:42:12.834 247708 DEBUG nova.storage.rbd_utils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] resizing rbd image dcef0d49-1223-4316-994c-4036983d5b73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:42:13 compute-0 nova_compute[247704]: 2026-01-31 07:42:13.192 247708 DEBUG nova.network.neutron [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Successfully created port: b6a51902-7f7f-4b64-931c-f81b110d6551 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:42:13 compute-0 nova_compute[247704]: 2026-01-31 07:42:13.384 247708 DEBUG nova.objects.instance [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'migration_context' on Instance uuid dcef0d49-1223-4316-994c-4036983d5b73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:13 compute-0 nova_compute[247704]: 2026-01-31 07:42:13.577 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:42:13 compute-0 nova_compute[247704]: 2026-01-31 07:42:13.577 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Ensure instance console log exists: /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:42:13 compute-0 nova_compute[247704]: 2026-01-31 07:42:13.578 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:13 compute-0 nova_compute[247704]: 2026-01-31 07:42:13.578 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:13 compute-0 nova_compute[247704]: 2026-01-31 07:42:13.578 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:14.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:14 compute-0 ceph-mon[74496]: pgmap v1300: 305 pgs: 305 active+clean; 246 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 184 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Jan 31 07:42:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1072138478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:14.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 246 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 169 KiB/s rd, 3.2 MiB/s wr, 72 op/s
Jan 31 07:42:15 compute-0 nova_compute[247704]: 2026-01-31 07:42:15.024 247708 DEBUG nova.network.neutron [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Successfully updated port: b6a51902-7f7f-4b64-931c-f81b110d6551 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:42:15 compute-0 nova_compute[247704]: 2026-01-31 07:42:15.280 247708 DEBUG nova.compute.manager [req-c0dcf239-debe-48ec-9f96-c89980829b60 req-17598708-daa7-43b9-9e83-a2c6f5f96663 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received event network-changed-b6a51902-7f7f-4b64-931c-f81b110d6551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:15 compute-0 nova_compute[247704]: 2026-01-31 07:42:15.280 247708 DEBUG nova.compute.manager [req-c0dcf239-debe-48ec-9f96-c89980829b60 req-17598708-daa7-43b9-9e83-a2c6f5f96663 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Refreshing instance network info cache due to event network-changed-b6a51902-7f7f-4b64-931c-f81b110d6551. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:42:15 compute-0 nova_compute[247704]: 2026-01-31 07:42:15.280 247708 DEBUG oslo_concurrency.lockutils [req-c0dcf239-debe-48ec-9f96-c89980829b60 req-17598708-daa7-43b9-9e83-a2c6f5f96663 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dcef0d49-1223-4316-994c-4036983d5b73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:42:15 compute-0 nova_compute[247704]: 2026-01-31 07:42:15.281 247708 DEBUG oslo_concurrency.lockutils [req-c0dcf239-debe-48ec-9f96-c89980829b60 req-17598708-daa7-43b9-9e83-a2c6f5f96663 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dcef0d49-1223-4316-994c-4036983d5b73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:42:15 compute-0 nova_compute[247704]: 2026-01-31 07:42:15.281 247708 DEBUG nova.network.neutron [req-c0dcf239-debe-48ec-9f96-c89980829b60 req-17598708-daa7-43b9-9e83-a2c6f5f96663 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Refreshing network info cache for port b6a51902-7f7f-4b64-931c-f81b110d6551 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:42:15 compute-0 nova_compute[247704]: 2026-01-31 07:42:15.292 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "refresh_cache-dcef0d49-1223-4316-994c-4036983d5b73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:42:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1190926015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:15.631984) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335632072, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2678, "num_deletes": 517, "total_data_size": 3880285, "memory_usage": 3929840, "flush_reason": "Manual Compaction"}
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335721451, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3344969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26728, "largest_seqno": 29405, "table_properties": {"data_size": 3333974, "index_size": 6529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 28423, "raw_average_key_size": 21, "raw_value_size": 3309124, "raw_average_value_size": 2456, "num_data_blocks": 282, "num_entries": 1347, "num_filter_entries": 1347, "num_deletions": 517, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845140, "oldest_key_time": 1769845140, "file_creation_time": 1769845335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 89562 microseconds, and 6638 cpu microseconds.
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:15.721553) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3344969 bytes OK
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:15.721597) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:15.800751) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:15.800831) EVENT_LOG_v1 {"time_micros": 1769845335800815, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:15.800885) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3868054, prev total WAL file size 3868054, number of live WAL files 2.
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:15.802391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3266KB)], [62(10MB)]
Jan 31 07:42:15 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845335802486, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 13953782, "oldest_snapshot_seqno": -1}
Jan 31 07:42:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:15 compute-0 nova_compute[247704]: 2026-01-31 07:42:15.871 247708 DEBUG nova.network.neutron [req-c0dcf239-debe-48ec-9f96-c89980829b60 req-17598708-daa7-43b9-9e83-a2c6f5f96663 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5553 keys, 8660194 bytes, temperature: kUnknown
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845336104692, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8660194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8623005, "index_size": 22231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 141873, "raw_average_key_size": 25, "raw_value_size": 8522988, "raw_average_value_size": 1534, "num_data_blocks": 895, "num_entries": 5553, "num_filter_entries": 5553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769845335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:16.105232) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8660194 bytes
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:16.116110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.2 rd, 28.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.1 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.8) write-amplify(2.6) OK, records in: 6576, records dropped: 1023 output_compression: NoCompression
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:16.116172) EVENT_LOG_v1 {"time_micros": 1769845336116152, "job": 34, "event": "compaction_finished", "compaction_time_micros": 302142, "compaction_time_cpu_micros": 29284, "output_level": 6, "num_output_files": 1, "total_output_size": 8660194, "num_input_records": 6576, "num_output_records": 5553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845336116954, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845336118387, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:15.802217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:16.118529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:16.118535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:16.118537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:16.118538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:16 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:42:16.118540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:42:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:16.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:16 compute-0 ceph-mon[74496]: pgmap v1301: 305 pgs: 305 active+clean; 246 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 169 KiB/s rd, 3.2 MiB/s wr, 72 op/s
Jan 31 07:42:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:16.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:16 compute-0 nova_compute[247704]: 2026-01-31 07:42:16.648 247708 DEBUG nova.network.neutron [req-c0dcf239-debe-48ec-9f96-c89980829b60 req-17598708-daa7-43b9-9e83-a2c6f5f96663 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:16 compute-0 nova_compute[247704]: 2026-01-31 07:42:16.679 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 248 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 31 07:42:16 compute-0 nova_compute[247704]: 2026-01-31 07:42:16.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:16 compute-0 nova_compute[247704]: 2026-01-31 07:42:16.810 247708 DEBUG oslo_concurrency.lockutils [req-c0dcf239-debe-48ec-9f96-c89980829b60 req-17598708-daa7-43b9-9e83-a2c6f5f96663 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dcef0d49-1223-4316-994c-4036983d5b73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:42:16 compute-0 nova_compute[247704]: 2026-01-31 07:42:16.811 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquired lock "refresh_cache-dcef0d49-1223-4316-994c-4036983d5b73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:42:16 compute-0 nova_compute[247704]: 2026-01-31 07:42:16.811 247708 DEBUG nova.network.neutron [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:42:17 compute-0 nova_compute[247704]: 2026-01-31 07:42:17.700 247708 DEBUG nova.network.neutron [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:42:17 compute-0 ceph-mon[74496]: pgmap v1302: 305 pgs: 305 active+clean; 248 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 31 07:42:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:18.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:18.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 248 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 07:42:18 compute-0 podman[278127]: 2026-01-31 07:42:18.890639839 +0000 UTC m=+0.061045596 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.317 247708 DEBUG nova.network.neutron [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Updating instance_info_cache with network_info: [{"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.808 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Releasing lock "refresh_cache-dcef0d49-1223-4316-994c-4036983d5b73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.808 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Instance network_info: |[{"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.810 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Start _get_guest_xml network_info=[{"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.817 247708 WARNING nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.824 247708 DEBUG nova.virt.libvirt.host [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.824 247708 DEBUG nova.virt.libvirt.host [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.828 247708 DEBUG nova.virt.libvirt.host [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.828 247708 DEBUG nova.virt.libvirt.host [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.829 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.830 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.830 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.830 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.830 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.830 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.831 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.831 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.831 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.831 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.831 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.832 247708 DEBUG nova.virt.hardware [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:42:19 compute-0 nova_compute[247704]: 2026-01-31 07:42:19.834 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:19 compute-0 ceph-mon[74496]: pgmap v1303: 305 pgs: 305 active+clean; 248 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:42:20
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'vms', '.mgr', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:42:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:20.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:42:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4050484851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.276 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.317 247708 DEBUG nova.storage.rbd_utils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image dcef0d49-1223-4316-994c-4036983d5b73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.323 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:42:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 279 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Jan 31 07:42:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:42:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812285109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.815 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.816 247708 DEBUG nova.virt.libvirt.vif [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2034679151',display_name='tempest-VolumesAdminNegativeTest-server-2034679151',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2034679151',id=48,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='016f45da455049d7aad578f0a534a0f2',ramdisk_id='',reservation_id='r-9n7pms5q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1814342344',owner_user_name='tempest-VolumesAdminNegativeTest-1814342344-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:42:11Z,user_data=None,user_id='b873da8845e6461088fcff99c5c140b1',uuid=dcef0d49-1223-4316-994c-4036983d5b73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.817 247708 DEBUG nova.network.os_vif_util [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converting VIF {"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.817 247708 DEBUG nova.network.os_vif_util [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:b9:88,bridge_name='br-int',has_traffic_filtering=True,id=b6a51902-7f7f-4b64-931c-f81b110d6551,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6a51902-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.818 247708 DEBUG nova.objects.instance [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcef0d49-1223-4316-994c-4036983d5b73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.969 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <uuid>dcef0d49-1223-4316-994c-4036983d5b73</uuid>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <name>instance-00000030</name>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <nova:name>tempest-VolumesAdminNegativeTest-server-2034679151</nova:name>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:42:19</nova:creationTime>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <nova:user uuid="b873da8845e6461088fcff99c5c140b1">tempest-VolumesAdminNegativeTest-1814342344-project-member</nova:user>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <nova:project uuid="016f45da455049d7aad578f0a534a0f2">tempest-VolumesAdminNegativeTest-1814342344</nova:project>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <nova:port uuid="b6a51902-7f7f-4b64-931c-f81b110d6551">
Jan 31 07:42:20 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <system>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <entry name="serial">dcef0d49-1223-4316-994c-4036983d5b73</entry>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <entry name="uuid">dcef0d49-1223-4316-994c-4036983d5b73</entry>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </system>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <os>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   </os>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <features>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   </features>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/dcef0d49-1223-4316-994c-4036983d5b73_disk">
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       </source>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/dcef0d49-1223-4316-994c-4036983d5b73_disk.config">
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       </source>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:42:20 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:ca:b9:88"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <target dev="tapb6a51902-7f"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73/console.log" append="off"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <video>
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </video>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:42:20 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:42:20 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:42:20 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:42:20 compute-0 nova_compute[247704]: </domain>
Jan 31 07:42:20 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.970 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Preparing to wait for external event network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.971 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "dcef0d49-1223-4316-994c-4036983d5b73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.972 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.972 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.974 247708 DEBUG nova.virt.libvirt.vif [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2034679151',display_name='tempest-VolumesAdminNegativeTest-server-2034679151',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2034679151',id=48,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='016f45da455049d7aad578f0a534a0f2',ramdisk_id='',reservation_id='r-9n7pms5q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1814342344',owner_user_name='tempest-VolumesAdminNegativeTest-1814342344-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:42:11Z,user_data=None,user_id='b873da8845e6461088fcff99c5c140b1',uuid=dcef0d49-1223-4316-994c-4036983d5b73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.975 247708 DEBUG nova.network.os_vif_util [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converting VIF {"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.976 247708 DEBUG nova.network.os_vif_util [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:b9:88,bridge_name='br-int',has_traffic_filtering=True,id=b6a51902-7f7f-4b64-931c-f81b110d6551,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6a51902-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.977 247708 DEBUG os_vif [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:b9:88,bridge_name='br-int',has_traffic_filtering=True,id=b6a51902-7f7f-4b64-931c-f81b110d6551,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6a51902-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.978 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.979 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.979 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.984 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.984 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6a51902-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.985 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6a51902-7f, col_values=(('external_ids', {'iface-id': 'b6a51902-7f7f-4b64-931c-f81b110d6551', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:b9:88', 'vm-uuid': 'dcef0d49-1223-4316-994c-4036983d5b73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.987 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:20 compute-0 NetworkManager[49108]: <info>  [1769845340.9881] manager: (tapb6a51902-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.988 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.996 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:20 compute-0 nova_compute[247704]: 2026-01-31 07:42:20.997 247708 INFO os_vif [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:b9:88,bridge_name='br-int',has_traffic_filtering=True,id=b6a51902-7f7f-4b64-931c-f81b110d6551,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6a51902-7f')
Jan 31 07:42:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4050484851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3812285109' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3650789939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:21 compute-0 nova_compute[247704]: 2026-01-31 07:42:21.325 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:42:21 compute-0 nova_compute[247704]: 2026-01-31 07:42:21.325 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:42:21 compute-0 nova_compute[247704]: 2026-01-31 07:42:21.326 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] No VIF found with MAC fa:16:3e:ca:b9:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:42:21 compute-0 nova_compute[247704]: 2026-01-31 07:42:21.326 247708 INFO nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Using config drive
Jan 31 07:42:21 compute-0 nova_compute[247704]: 2026-01-31 07:42:21.355 247708 DEBUG nova.storage.rbd_utils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image dcef0d49-1223-4316-994c-4036983d5b73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:21 compute-0 nova_compute[247704]: 2026-01-31 07:42:21.760 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:22.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.224 247708 INFO nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Creating config drive at /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73/disk.config
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.228 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqzo5c5rp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:22 compute-0 ceph-mon[74496]: pgmap v1304: 305 pgs: 305 active+clean; 279 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.348 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqzo5c5rp" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.379 247708 DEBUG nova.storage.rbd_utils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] rbd image dcef0d49-1223-4316-994c-4036983d5b73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.384 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73/disk.config dcef0d49-1223-4316-994c-4036983d5b73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:22.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 294 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.7 MiB/s wr, 74 op/s
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.815 247708 DEBUG oslo_concurrency.processutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73/disk.config dcef0d49-1223-4316-994c-4036983d5b73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.817 247708 INFO nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Deleting local config drive /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73/disk.config because it was imported into RBD.
Jan 31 07:42:22 compute-0 kernel: tapb6a51902-7f: entered promiscuous mode
Jan 31 07:42:22 compute-0 NetworkManager[49108]: <info>  [1769845342.8647] manager: (tapb6a51902-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.865 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:22 compute-0 ovn_controller[149457]: 2026-01-31T07:42:22Z|00160|binding|INFO|Claiming lport b6a51902-7f7f-4b64-931c-f81b110d6551 for this chassis.
Jan 31 07:42:22 compute-0 ovn_controller[149457]: 2026-01-31T07:42:22Z|00161|binding|INFO|b6a51902-7f7f-4b64-931c-f81b110d6551: Claiming fa:16:3e:ca:b9:88 10.100.0.10
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.867 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.873 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.876 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:22 compute-0 systemd-machined[214448]: New machine qemu-23-instance-00000030.
Jan 31 07:42:22 compute-0 NetworkManager[49108]: <info>  [1769845342.8936] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.892 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:22 compute-0 NetworkManager[49108]: <info>  [1769845342.8943] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Jan 31 07:42:22 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000030.
Jan 31 07:42:22 compute-0 systemd-udevd[278289]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:42:22 compute-0 NetworkManager[49108]: <info>  [1769845342.9531] device (tapb6a51902-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:42:22 compute-0 NetworkManager[49108]: <info>  [1769845342.9540] device (tapb6a51902-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.956 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:22 compute-0 nova_compute[247704]: 2026-01-31 07:42:22.972 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.158 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:b9:88 10.100.0.10'], port_security=['fa:16:3e:ca:b9:88 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'dcef0d49-1223-4316-994c-4036983d5b73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bff39063-463a-42de-b52a-d9ff7905f368', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '016f45da455049d7aad578f0a534a0f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2987a9ee-ea68-4972-886f-e309fe168748', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64424fb0-e7eb-43b1-ba76-473962c1b131, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=b6a51902-7f7f-4b64-931c-f81b110d6551) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.160 160028 INFO neutron.agent.ovn.metadata.agent [-] Port b6a51902-7f7f-4b64-931c-f81b110d6551 in datapath bff39063-463a-42de-b52a-d9ff7905f368 bound to our chassis
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.161 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bff39063-463a-42de-b52a-d9ff7905f368
Jan 31 07:42:23 compute-0 ovn_controller[149457]: 2026-01-31T07:42:23Z|00162|binding|INFO|Setting lport b6a51902-7f7f-4b64-931c-f81b110d6551 ovn-installed in OVS
Jan 31 07:42:23 compute-0 ovn_controller[149457]: 2026-01-31T07:42:23Z|00163|binding|INFO|Setting lport b6a51902-7f7f-4b64-931c-f81b110d6551 up in Southbound
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.171 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.175 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.183 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[67a2148f-0883-46f2-bef5-957d97ef8150]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.184 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbff39063-41 in ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.186 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbff39063-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.186 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c83df811-93b5-4013-bc39-4e871a235d49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.187 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4532b3-ce7f-4747-afaf-7e1ffad6dd2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.197 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9e608f-1570-42cf-88cd-2f8f8881578f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.212 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6d033e33-cb68-4418-96a5-b45aec6b058e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.250 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[97b59ecc-8c3d-45ce-87c0-756dfb727ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 NetworkManager[49108]: <info>  [1769845343.2576] manager: (tapbff39063-40): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.257 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce569a8-31dc-4b5e-aee4-834e0dc8d762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.285 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2a228953-71cd-493b-b822-bafce0eca01a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.290 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1b39dd2e-505e-4e99-beb1-ac73ac024013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 NetworkManager[49108]: <info>  [1769845343.3126] device (tapbff39063-40): carrier: link connected
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.317 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[62b8c131-e875-4075-8255-9d0bdf670d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.328 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[44130b91-b1de-4e8a-8176-97178bd087b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbff39063-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:f4:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558593, 'reachable_time': 18951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278360, 'error': None, 'target': 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.337 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[30e73518-7646-431a-83ba-0f8d5c19714e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:f47e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558593, 'tstamp': 558593}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278361, 'error': None, 'target': 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.349 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0d50c7c5-0e23-45ad-8ed3-84e4d132ad2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbff39063-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:f4:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558593, 'reachable_time': 18951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278362, 'error': None, 'target': 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.364 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845343.3636396, dcef0d49-1223-4316-994c-4036983d5b73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.364 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] VM Started (Lifecycle Event)
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.368 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[50bee639-4d8e-43e7-a783-c2e8e4d42aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.404 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[45491fb2-fecb-4d84-b57c-95a2087d2e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.405 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbff39063-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.405 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.406 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbff39063-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:23 compute-0 kernel: tapbff39063-40: entered promiscuous mode
Jan 31 07:42:23 compute-0 NetworkManager[49108]: <info>  [1769845343.4082] manager: (tapbff39063-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.407 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.410 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbff39063-40, col_values=(('external_ids', {'iface-id': 'b96729ad-cf99-4a3d-b17d-6bdafe5723db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.412 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.412 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bff39063-463a-42de-b52a-d9ff7905f368.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bff39063-463a-42de-b52a-d9ff7905f368.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.413 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[01dbbd89-b7f2-43f0-97be-b6ab79e8ec01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.414 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-bff39063-463a-42de-b52a-d9ff7905f368
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/bff39063-463a-42de-b52a-d9ff7905f368.pid.haproxy
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID bff39063-463a-42de-b52a-d9ff7905f368
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:23.414 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'env', 'PROCESS_TAG=haproxy-bff39063-463a-42de-b52a-d9ff7905f368', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bff39063-463a-42de-b52a-d9ff7905f368.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:42:23 compute-0 ovn_controller[149457]: 2026-01-31T07:42:23Z|00164|binding|INFO|Releasing lport b96729ad-cf99-4a3d-b17d-6bdafe5723db from this chassis (sb_readonly=0)
Jan 31 07:42:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2103826521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.482 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.531 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.536 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845343.3644776, dcef0d49-1223-4316-994c-4036983d5b73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.537 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] VM Paused (Lifecycle Event)
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.581 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.585 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:42:23 compute-0 nova_compute[247704]: 2026-01-31 07:42:23.707 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:42:23 compute-0 podman[278394]: 2026-01-31 07:42:23.764199907 +0000 UTC m=+0.026619172 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:42:23 compute-0 podman[278394]: 2026-01-31 07:42:23.913320549 +0000 UTC m=+0.175739804 container create 1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:42:23 compute-0 systemd[1]: Started libpod-conmon-1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8.scope.
Jan 31 07:42:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9696ad1ff0e0f5974f516515eec409073fd46a234b34f1e93ec532ce6fa95d3c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:42:24 compute-0 podman[278394]: 2026-01-31 07:42:24.01625824 +0000 UTC m=+0.278677505 container init 1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:42:24 compute-0 podman[278394]: 2026-01-31 07:42:24.020311009 +0000 UTC m=+0.282730234 container start 1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:42:24 compute-0 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[278410]: [NOTICE]   (278414) : New worker (278416) forked
Jan 31 07:42:24 compute-0 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[278410]: [NOTICE]   (278414) : Loading success.
Jan 31 07:42:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:24.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.374 247708 DEBUG nova.compute.manager [req-336ffffe-51dc-4d23-a15c-ebb214083ae3 req-acc13ffc-b115-4f35-abae-1b7377265e01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received event network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.374 247708 DEBUG oslo_concurrency.lockutils [req-336ffffe-51dc-4d23-a15c-ebb214083ae3 req-acc13ffc-b115-4f35-abae-1b7377265e01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dcef0d49-1223-4316-994c-4036983d5b73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.375 247708 DEBUG oslo_concurrency.lockutils [req-336ffffe-51dc-4d23-a15c-ebb214083ae3 req-acc13ffc-b115-4f35-abae-1b7377265e01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.375 247708 DEBUG oslo_concurrency.lockutils [req-336ffffe-51dc-4d23-a15c-ebb214083ae3 req-acc13ffc-b115-4f35-abae-1b7377265e01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.376 247708 DEBUG nova.compute.manager [req-336ffffe-51dc-4d23-a15c-ebb214083ae3 req-acc13ffc-b115-4f35-abae-1b7377265e01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Processing event network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.377 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.383 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845344.3824701, dcef0d49-1223-4316-994c-4036983d5b73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.383 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] VM Resumed (Lifecycle Event)
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.386 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.391 247708 INFO nova.virt.libvirt.driver [-] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Instance spawned successfully.
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.391 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:42:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:24.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:24 compute-0 ceph-mon[74496]: pgmap v1305: 305 pgs: 305 active+clean; 294 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.7 MiB/s wr, 74 op/s
Jan 31 07:42:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3775045493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 294 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.749 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.758 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.763 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.764 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.765 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.766 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.767 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:24 compute-0 nova_compute[247704]: 2026-01-31 07:42:24.768 247708 DEBUG nova.virt.libvirt.driver [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:42:25 compute-0 nova_compute[247704]: 2026-01-31 07:42:25.022 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:42:25 compute-0 nova_compute[247704]: 2026-01-31 07:42:25.099 247708 INFO nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Took 13.96 seconds to spawn the instance on the hypervisor.
Jan 31 07:42:25 compute-0 nova_compute[247704]: 2026-01-31 07:42:25.100 247708 DEBUG nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:25 compute-0 nova_compute[247704]: 2026-01-31 07:42:25.375 247708 INFO nova.compute.manager [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Took 16.17 seconds to build instance.
Jan 31 07:42:25 compute-0 ceph-mon[74496]: pgmap v1306: 305 pgs: 305 active+clean; 294 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Jan 31 07:42:25 compute-0 nova_compute[247704]: 2026-01-31 07:42:25.655 247708 DEBUG oslo_concurrency.lockutils [None req-d0f0a877-d734-4fe2-bced-6eefdf94baa1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:42:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4197615563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:25 compute-0 nova_compute[247704]: 2026-01-31 07:42:25.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:26.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:26 compute-0 nova_compute[247704]: 2026-01-31 07:42:26.218 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4197615563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1043671265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 356 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 132 op/s
Jan 31 07:42:26 compute-0 nova_compute[247704]: 2026-01-31 07:42:26.707 247708 DEBUG nova.compute.manager [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received event network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:26 compute-0 nova_compute[247704]: 2026-01-31 07:42:26.708 247708 DEBUG oslo_concurrency.lockutils [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dcef0d49-1223-4316-994c-4036983d5b73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:26 compute-0 nova_compute[247704]: 2026-01-31 07:42:26.708 247708 DEBUG oslo_concurrency.lockutils [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:26 compute-0 nova_compute[247704]: 2026-01-31 07:42:26.709 247708 DEBUG oslo_concurrency.lockutils [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:26 compute-0 nova_compute[247704]: 2026-01-31 07:42:26.709 247708 DEBUG nova.compute.manager [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] No waiting events found dispatching network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:42:26 compute-0 nova_compute[247704]: 2026-01-31 07:42:26.709 247708 WARNING nova.compute.manager [req-9be07d5e-6e4c-4433-aa4a-2273bf3e396b req-be5d2481-3041-42ec-b5be-f31755470e39 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received unexpected event network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 for instance with vm_state active and task_state None.
Jan 31 07:42:26 compute-0 nova_compute[247704]: 2026-01-31 07:42:26.763 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:27 compute-0 ceph-mon[74496]: pgmap v1307: 305 pgs: 305 active+clean; 356 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.4 MiB/s wr, 132 op/s
Jan 31 07:42:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:28.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:28.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 365 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.4 MiB/s wr, 147 op/s
Jan 31 07:42:29 compute-0 ceph-mon[74496]: pgmap v1308: 305 pgs: 305 active+clean; 365 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.4 MiB/s wr, 147 op/s
Jan 31 07:42:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2678096094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:42:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2678096094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.111 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "dcef0d49-1223-4316-994c-4036983d5b73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.112 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.112 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "dcef0d49-1223-4316-994c-4036983d5b73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.113 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.113 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.115 247708 INFO nova.compute.manager [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Terminating instance
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.116 247708 DEBUG nova.compute.manager [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:42:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:30.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:30 compute-0 kernel: tapb6a51902-7f (unregistering): left promiscuous mode
Jan 31 07:42:30 compute-0 NetworkManager[49108]: <info>  [1769845350.2373] device (tapb6a51902-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:42:30 compute-0 ovn_controller[149457]: 2026-01-31T07:42:30Z|00165|binding|INFO|Releasing lport b6a51902-7f7f-4b64-931c-f81b110d6551 from this chassis (sb_readonly=0)
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.243 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:30 compute-0 ovn_controller[149457]: 2026-01-31T07:42:30Z|00166|binding|INFO|Setting lport b6a51902-7f7f-4b64-931c-f81b110d6551 down in Southbound
Jan 31 07:42:30 compute-0 ovn_controller[149457]: 2026-01-31T07:42:30Z|00167|binding|INFO|Removing iface tapb6a51902-7f ovn-installed in OVS
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.246 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.250 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:b9:88 10.100.0.10'], port_security=['fa:16:3e:ca:b9:88 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'dcef0d49-1223-4316-994c-4036983d5b73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bff39063-463a-42de-b52a-d9ff7905f368', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '016f45da455049d7aad578f0a534a0f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2987a9ee-ea68-4972-886f-e309fe168748', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64424fb0-e7eb-43b1-ba76-473962c1b131, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=b6a51902-7f7f-4b64-931c-f81b110d6551) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.252 160028 INFO neutron.agent.ovn.metadata.agent [-] Port b6a51902-7f7f-4b64-931c-f81b110d6551 in datapath bff39063-463a-42de-b52a-d9ff7905f368 unbound from our chassis
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.254 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bff39063-463a-42de-b52a-d9ff7905f368, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.255 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9a5398-b550-4b52-9576-00dad3d0a1b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.256 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 namespace which is not needed anymore
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.258 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:30 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000030.scope: Deactivated successfully.
Jan 31 07:42:30 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000030.scope: Consumed 6.175s CPU time.
Jan 31 07:42:30 compute-0 systemd-machined[214448]: Machine qemu-23-instance-00000030 terminated.
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.357 247708 INFO nova.virt.libvirt.driver [-] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Instance destroyed successfully.
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.359 247708 DEBUG nova.objects.instance [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lazy-loading 'resources' on Instance uuid dcef0d49-1223-4316-994c-4036983d5b73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.381 247708 DEBUG nova.virt.libvirt.vif [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:42:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2034679151',display_name='tempest-VolumesAdminNegativeTest-server-2034679151',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2034679151',id=48,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:42:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='016f45da455049d7aad578f0a534a0f2',ramdisk_id='',reservation_id='r-9n7pms5q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-1814342344',owner_user_name='tempest-VolumesAdminNegativeTest-1814342344-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:42:25Z,user_data=None,user_id='b873da8845e6461088fcff99c5c140b1',uuid=dcef0d49-1223-4316-994c-4036983d5b73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.383 247708 DEBUG nova.network.os_vif_util [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converting VIF {"id": "b6a51902-7f7f-4b64-931c-f81b110d6551", "address": "fa:16:3e:ca:b9:88", "network": {"id": "bff39063-463a-42de-b52a-d9ff7905f368", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-2000138722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "016f45da455049d7aad578f0a534a0f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6a51902-7f", "ovs_interfaceid": "b6a51902-7f7f-4b64-931c-f81b110d6551", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.383 247708 DEBUG nova.network.os_vif_util [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:b9:88,bridge_name='br-int',has_traffic_filtering=True,id=b6a51902-7f7f-4b64-931c-f81b110d6551,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6a51902-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.384 247708 DEBUG os_vif [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:b9:88,bridge_name='br-int',has_traffic_filtering=True,id=b6a51902-7f7f-4b64-931c-f81b110d6551,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6a51902-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.386 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.386 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6a51902-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.388 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.390 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.393 247708 INFO os_vif [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:b9:88,bridge_name='br-int',has_traffic_filtering=True,id=b6a51902-7f7f-4b64-931c-f81b110d6551,network=Network(bff39063-463a-42de-b52a-d9ff7905f368),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6a51902-7f')
Jan 31 07:42:30 compute-0 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[278410]: [NOTICE]   (278414) : haproxy version is 2.8.14-c23fe91
Jan 31 07:42:30 compute-0 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[278410]: [NOTICE]   (278414) : path to executable is /usr/sbin/haproxy
Jan 31 07:42:30 compute-0 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[278410]: [WARNING]  (278414) : Exiting Master process...
Jan 31 07:42:30 compute-0 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[278410]: [ALERT]    (278414) : Current worker (278416) exited with code 143 (Terminated)
Jan 31 07:42:30 compute-0 neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368[278410]: [WARNING]  (278414) : All workers exited. Exiting... (0)
Jan 31 07:42:30 compute-0 systemd[1]: libpod-1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8.scope: Deactivated successfully.
Jan 31 07:42:30 compute-0 podman[278458]: 2026-01-31 07:42:30.412658349 +0000 UTC m=+0.060340019 container died 1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:42:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8-userdata-shm.mount: Deactivated successfully.
Jan 31 07:42:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9696ad1ff0e0f5974f516515eec409073fd46a234b34f1e93ec532ce6fa95d3c-merged.mount: Deactivated successfully.
Jan 31 07:42:30 compute-0 podman[278458]: 2026-01-31 07:42:30.485989355 +0000 UTC m=+0.133671015 container cleanup 1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:42:30 compute-0 systemd[1]: libpod-conmon-1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8.scope: Deactivated successfully.
Jan 31 07:42:30 compute-0 podman[278513]: 2026-01-31 07:42:30.575183789 +0000 UTC m=+0.064155802 container remove 1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.580 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[26e59f1c-899b-4afa-afb3-d0de038cbfb5]: (4, ('Sat Jan 31 07:42:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 (1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8)\n1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8\nSat Jan 31 07:42:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 (1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8)\n1a4109d5dacac8249166154f9e84893db7d059a37cbff43d63dbfe6fcd1a3dd8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.582 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5a79a683-49fc-4e85-a7e7-642dc09bbadc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.583 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbff39063-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.586 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:30 compute-0 kernel: tapbff39063-40: left promiscuous mode
Jan 31 07:42:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:30 compute-0 nova_compute[247704]: 2026-01-31 07:42:30.594 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:30.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.598 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0354b6-213d-4bbf-8eda-dc98bff94841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.615 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[442f3963-4885-4488-94a4-c00eeca91eb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.617 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3a13f59a-cf79-4650-991e-4724e5e0fda5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.638 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e081a828-b8e0-4be2-8e4f-5e508e03d209]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558586, 'reachable_time': 22457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278528, 'error': None, 'target': 'ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.642 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bff39063-463a-42de-b52a-d9ff7905f368 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:42:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:30.642 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[fe83005b-32a2-4dcd-b169-998b1166e56c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:42:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dbff39063\x2d463a\x2d42de\x2db52a\x2dd9ff7905f368.mount: Deactivated successfully.
Jan 31 07:42:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 376 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.7 MiB/s wr, 247 op/s
Jan 31 07:42:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 31 07:42:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 31 07:42:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.193 247708 DEBUG nova.compute.manager [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received event network-vif-unplugged-b6a51902-7f7f-4b64-931c-f81b110d6551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.194 247708 DEBUG oslo_concurrency.lockutils [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dcef0d49-1223-4316-994c-4036983d5b73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.194 247708 DEBUG oslo_concurrency.lockutils [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.194 247708 DEBUG oslo_concurrency.lockutils [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.194 247708 DEBUG nova.compute.manager [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] No waiting events found dispatching network-vif-unplugged-b6a51902-7f7f-4b64-931c-f81b110d6551 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.194 247708 DEBUG nova.compute.manager [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received event network-vif-unplugged-b6a51902-7f7f-4b64-931c-f81b110d6551 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.195 247708 DEBUG nova.compute.manager [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received event network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.195 247708 DEBUG oslo_concurrency.lockutils [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dcef0d49-1223-4316-994c-4036983d5b73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.195 247708 DEBUG oslo_concurrency.lockutils [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.195 247708 DEBUG oslo_concurrency.lockutils [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.195 247708 DEBUG nova.compute.manager [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] No waiting events found dispatching network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.195 247708 WARNING nova.compute.manager [req-aeb25f09-8b81-4662-8cd5-71482a36f449 req-cdad90d6-7bd8-4dea-abb6-27e06abae29d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received unexpected event network-vif-plugged-b6a51902-7f7f-4b64-931c-f81b110d6551 for instance with vm_state active and task_state deleting.
Jan 31 07:42:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1929837302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:31 compute-0 sudo[278529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:31 compute-0 sudo[278529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:31 compute-0 sudo[278529]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:31 compute-0 sudo[278555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:31 compute-0 sudo[278555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:31 compute-0 sudo[278555]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:31 compute-0 nova_compute[247704]: 2026-01-31 07:42:31.764 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:32.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:32 compute-0 ceph-mon[74496]: pgmap v1309: 305 pgs: 305 active+clean; 376 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 5.7 MiB/s wr, 247 op/s
Jan 31 07:42:32 compute-0 ceph-mon[74496]: osdmap e202: 3 total, 3 up, 3 in
Jan 31 07:42:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:32.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 377 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.7 MiB/s wr, 321 op/s
Jan 31 07:42:32 compute-0 podman[278580]: 2026-01-31 07:42:32.918998176 +0000 UTC m=+0.091474931 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 07:42:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:34.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:34 compute-0 ceph-mon[74496]: pgmap v1311: 305 pgs: 305 active+clean; 377 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.7 MiB/s wr, 321 op/s
Jan 31 07:42:34 compute-0 nova_compute[247704]: 2026-01-31 07:42:34.569 247708 INFO nova.virt.libvirt.driver [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Deleting instance files /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73_del
Jan 31 07:42:34 compute-0 nova_compute[247704]: 2026-01-31 07:42:34.570 247708 INFO nova.virt.libvirt.driver [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Deletion of /var/lib/nova/instances/dcef0d49-1223-4316-994c-4036983d5b73_del complete
Jan 31 07:42:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:34.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 377 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.7 MiB/s wr, 321 op/s
Jan 31 07:42:34 compute-0 nova_compute[247704]: 2026-01-31 07:42:34.799 247708 INFO nova.compute.manager [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Took 4.68 seconds to destroy the instance on the hypervisor.
Jan 31 07:42:34 compute-0 nova_compute[247704]: 2026-01-31 07:42:34.799 247708 DEBUG oslo.service.loopingcall [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:42:34 compute-0 nova_compute[247704]: 2026-01-31 07:42:34.799 247708 DEBUG nova.compute.manager [-] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:42:34 compute-0 nova_compute[247704]: 2026-01-31 07:42:34.799 247708 DEBUG nova.network.neutron [-] [instance: dcef0d49-1223-4316-994c-4036983d5b73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006474429438860971 of space, bias 1.0, pg target 1.9423288316582914 quantized to 32 (current 32)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00406564797366462 of space, bias 1.0, pg target 1.2156287441257214 quantized to 32 (current 32)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:42:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 07:42:35 compute-0 nova_compute[247704]: 2026-01-31 07:42:35.389 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:35 compute-0 ceph-mon[74496]: pgmap v1312: 305 pgs: 305 active+clean; 377 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.7 MiB/s wr, 321 op/s
Jan 31 07:42:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:36.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:36.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 328 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 2.6 MiB/s wr, 387 op/s
Jan 31 07:42:36 compute-0 nova_compute[247704]: 2026-01-31 07:42:36.767 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:36 compute-0 nova_compute[247704]: 2026-01-31 07:42:36.810 247708 DEBUG nova.network.neutron [-] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:42:36 compute-0 nova_compute[247704]: 2026-01-31 07:42:36.890 247708 INFO nova.compute.manager [-] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Took 2.09 seconds to deallocate network for instance.
Jan 31 07:42:36 compute-0 nova_compute[247704]: 2026-01-31 07:42:36.965 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:42:36 compute-0 nova_compute[247704]: 2026-01-31 07:42:36.966 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:42:37 compute-0 nova_compute[247704]: 2026-01-31 07:42:37.081 247708 DEBUG oslo_concurrency.processutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:42:37 compute-0 nova_compute[247704]: 2026-01-31 07:42:37.117 247708 DEBUG nova.compute.manager [req-d7e3cffb-29dd-460b-9b6a-8bbe7d735f22 req-df93e7d3-225a-4bd1-9de9-fc1371bb0849 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Received event network-vif-deleted-b6a51902-7f7f-4b64-931c-f81b110d6551 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:42:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:42:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630656851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:37 compute-0 nova_compute[247704]: 2026-01-31 07:42:37.543 247708 DEBUG oslo_concurrency.processutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:42:37 compute-0 nova_compute[247704]: 2026-01-31 07:42:37.549 247708 DEBUG nova.compute.provider_tree [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:42:37 compute-0 nova_compute[247704]: 2026-01-31 07:42:37.582 247708 DEBUG nova.scheduler.client.report [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:42:37 compute-0 nova_compute[247704]: 2026-01-31 07:42:37.618 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:37 compute-0 nova_compute[247704]: 2026-01-31 07:42:37.698 247708 INFO nova.scheduler.client.report [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Deleted allocations for instance dcef0d49-1223-4316-994c-4036983d5b73
Jan 31 07:42:37 compute-0 nova_compute[247704]: 2026-01-31 07:42:37.974 247708 DEBUG oslo_concurrency.lockutils [None req-557cf2ad-990a-460c-b57e-1e788db98ea1 b873da8845e6461088fcff99c5c140b1 016f45da455049d7aad578f0a534a0f2 - - default default] Lock "dcef0d49-1223-4316-994c-4036983d5b73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:42:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:38.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:38 compute-0 ceph-mon[74496]: pgmap v1313: 305 pgs: 305 active+clean; 328 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 2.6 MiB/s wr, 387 op/s
Jan 31 07:42:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1630656851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:38.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 295 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.5 MiB/s wr, 324 op/s
Jan 31 07:42:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1164054309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4228686676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:42:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:40.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:40 compute-0 nova_compute[247704]: 2026-01-31 07:42:40.392 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:40.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 303 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Jan 31 07:42:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 31 07:42:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 31 07:42:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:42:41 compute-0 ceph-mon[74496]: pgmap v1314: 305 pgs: 305 active+clean; 295 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.5 MiB/s wr, 324 op/s
Jan 31 07:42:41 compute-0 ceph-mon[74496]: osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:42:41 compute-0 nova_compute[247704]: 2026-01-31 07:42:41.769 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:42.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:42 compute-0 ceph-mon[74496]: pgmap v1315: 305 pgs: 305 active+clean; 303 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 229 op/s
Jan 31 07:42:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:42.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 307 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 202 op/s
Jan 31 07:42:43 compute-0 ceph-mon[74496]: pgmap v1317: 305 pgs: 305 active+clean; 307 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 202 op/s
Jan 31 07:42:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:44.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:42:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:44.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:42:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 307 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 202 op/s
Jan 31 07:42:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:42:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3784314569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:42:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:42:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3784314569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:42:45 compute-0 nova_compute[247704]: 2026-01-31 07:42:45.355 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845350.3540072, dcef0d49-1223-4316-994c-4036983d5b73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:42:45 compute-0 nova_compute[247704]: 2026-01-31 07:42:45.356 247708 INFO nova.compute.manager [-] [instance: dcef0d49-1223-4316-994c-4036983d5b73] VM Stopped (Lifecycle Event)
Jan 31 07:42:45 compute-0 nova_compute[247704]: 2026-01-31 07:42:45.394 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:45 compute-0 nova_compute[247704]: 2026-01-31 07:42:45.423 247708 DEBUG nova.compute.manager [None req-cff40d7c-ce95-44c1-bb34-5897ad2e95a2 - - - - - -] [instance: dcef0d49-1223-4316-994c-4036983d5b73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:42:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3227965996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:42:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3227965996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:42:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:46.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:46.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:46 compute-0 ceph-mon[74496]: pgmap v1318: 305 pgs: 305 active+clean; 307 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 202 op/s
Jan 31 07:42:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3784314569' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:42:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3784314569' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:42:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3880658820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 239 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 2.8 MiB/s wr, 140 op/s
Jan 31 07:42:46 compute-0 nova_compute[247704]: 2026-01-31 07:42:46.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:47 compute-0 ceph-mon[74496]: pgmap v1319: 305 pgs: 305 active+clean; 239 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 2.8 MiB/s wr, 140 op/s
Jan 31 07:42:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:48.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Jan 31 07:42:49 compute-0 podman[278638]: 2026-01-31 07:42:49.881755962 +0000 UTC m=+0.056831672 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:42:50 compute-0 ceph-mon[74496]: pgmap v1320: 305 pgs: 305 active+clean; 214 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Jan 31 07:42:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:50 compute-0 nova_compute[247704]: 2026-01-31 07:42:50.396 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:50.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 126 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.7 MiB/s wr, 251 op/s
Jan 31 07:42:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:51 compute-0 sudo[278658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:51 compute-0 sudo[278658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:51 compute-0 sudo[278658]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:51 compute-0 sudo[278683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:51 compute-0 sudo[278683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:51 compute-0 sudo[278683]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:51 compute-0 nova_compute[247704]: 2026-01-31 07:42:51.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:52.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:52 compute-0 ceph-mon[74496]: pgmap v1321: 305 pgs: 305 active+clean; 126 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.7 MiB/s wr, 251 op/s
Jan 31 07:42:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:52.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 121 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 216 op/s
Jan 31 07:42:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:53.879 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:42:53 compute-0 nova_compute[247704]: 2026-01-31 07:42:53.880 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:53.880 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:42:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:42:53.881 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:42:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:54.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:54 compute-0 ceph-mon[74496]: pgmap v1322: 305 pgs: 305 active+clean; 121 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 216 op/s
Jan 31 07:42:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/969411879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1960182664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:54.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 121 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 133 KiB/s wr, 185 op/s
Jan 31 07:42:55 compute-0 nova_compute[247704]: 2026-01-31 07:42:55.441 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:55 compute-0 ceph-mon[74496]: pgmap v1323: 305 pgs: 305 active+clean; 121 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 133 KiB/s wr, 185 op/s
Jan 31 07:42:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:42:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:56.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:56 compute-0 nova_compute[247704]: 2026-01-31 07:42:56.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:56 compute-0 nova_compute[247704]: 2026-01-31 07:42:56.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:42:56 compute-0 nova_compute[247704]: 2026-01-31 07:42:56.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:42:56 compute-0 nova_compute[247704]: 2026-01-31 07:42:56.595 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:42:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2725692687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:42:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:56.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:42:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 73 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 133 KiB/s wr, 201 op/s
Jan 31 07:42:56 compute-0 nova_compute[247704]: 2026-01-31 07:42:56.777 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:42:57 compute-0 nova_compute[247704]: 2026-01-31 07:42:57.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:42:57 compute-0 sudo[278712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:57 compute-0 sudo[278712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:57 compute-0 sudo[278712]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:58 compute-0 sudo[278737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:42:58 compute-0 sudo[278737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:58 compute-0 sudo[278737]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:58 compute-0 ceph-mon[74496]: pgmap v1324: 305 pgs: 305 active+clean; 73 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 133 KiB/s wr, 201 op/s
Jan 31 07:42:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2509227913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3908076386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:42:58 compute-0 sudo[278762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:58 compute-0 sudo[278762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:58 compute-0 sudo[278762]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:58 compute-0 sudo[278787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:42:58 compute-0 sudo[278787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:58.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:42:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:42:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:58.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:42:58 compute-0 podman[278883]: 2026-01-31 07:42:58.655745156 +0000 UTC m=+0.073181723 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 07:42:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 53 KiB/s wr, 148 op/s
Jan 31 07:42:58 compute-0 podman[278883]: 2026-01-31 07:42:58.763384922 +0000 UTC m=+0.180821459 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:42:59 compute-0 podman[279040]: 2026-01-31 07:42:59.381104459 +0000 UTC m=+0.068440617 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:42:59 compute-0 podman[279040]: 2026-01-31 07:42:59.394402084 +0000 UTC m=+0.081738192 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:42:59 compute-0 podman[279107]: 2026-01-31 07:42:59.638310778 +0000 UTC m=+0.069675417 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph)
Jan 31 07:42:59 compute-0 podman[279107]: 2026-01-31 07:42:59.67559608 +0000 UTC m=+0.106960639 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 31 07:42:59 compute-0 sudo[278787]: pam_unix(sudo:session): session closed for user root
Jan 31 07:42:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:42:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:42:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:42:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:42:59 compute-0 sudo[279140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:42:59 compute-0 sudo[279140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:42:59 compute-0 sudo[279140]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:00 compute-0 sudo[279165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:43:00 compute-0 sudo[279165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:00 compute-0 sudo[279165]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:00 compute-0 sudo[279190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:00 compute-0 sudo[279190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:00 compute-0 sudo[279190]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:00 compute-0 sudo[279215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:43:00 compute-0 sudo[279215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:00.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:00 compute-0 ceph-mon[74496]: pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 53 KiB/s wr, 148 op/s
Jan 31 07:43:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.444 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:00 compute-0 sudo[279215]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.586 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:43:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:43:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:43:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:43:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:43:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:00.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.678 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.678 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.679 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.679 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.680 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:43:00 compute-0 nova_compute[247704]: 2026-01-31 07:43:00.708 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 35 KiB/s wr, 127 op/s
Jan 31 07:43:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 732ff54e-e21a-41d1-9b7a-062c470f7661 does not exist
Jan 31 07:43:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a77a04f4-ea12-44c1-806d-c9b66e440e33 does not exist
Jan 31 07:43:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cbacb517-f63d-4c72-82d9-4dc7dc4d4d00 does not exist
Jan 31 07:43:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:43:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:43:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:43:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:43:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:43:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:43:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:00 compute-0 sudo[279290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:00 compute-0 sudo[279290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:00 compute-0 sudo[279290]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:00 compute-0 sudo[279315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:43:00 compute-0 sudo[279315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:00 compute-0 sudo[279315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:01 compute-0 sudo[279340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:01 compute-0 sudo[279340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:01 compute-0 sudo[279340]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:01 compute-0 sudo[279365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:43:01 compute-0 sudo[279365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:43:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/204450215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.171 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.334 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.337 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4691MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.337 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.337 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:01 compute-0 podman[279434]: 2026-01-31 07:43:01.479599419 +0000 UTC m=+0.088125749 container create 96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:43:01 compute-0 podman[279434]: 2026-01-31 07:43:01.419115717 +0000 UTC m=+0.027642037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:43:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:43:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:43:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:43:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:43:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:43:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/204450215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:01 compute-0 systemd[1]: Started libpod-conmon-96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8.scope.
Jan 31 07:43:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:43:01 compute-0 podman[279434]: 2026-01-31 07:43:01.60134017 +0000 UTC m=+0.209866490 container init 96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 07:43:01 compute-0 podman[279434]: 2026-01-31 07:43:01.610999546 +0000 UTC m=+0.219525836 container start 96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:43:01 compute-0 brave_hodgkin[279451]: 167 167
Jan 31 07:43:01 compute-0 systemd[1]: libpod-96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8.scope: Deactivated successfully.
Jan 31 07:43:01 compute-0 podman[279434]: 2026-01-31 07:43:01.65279942 +0000 UTC m=+0.261325740 container attach 96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Jan 31 07:43:01 compute-0 podman[279434]: 2026-01-31 07:43:01.653810085 +0000 UTC m=+0.262336405 container died 96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-75d9388adcb1c0bfbc565417eda8f7ba5d28f0b1fd78d6f734c430dc7a797910-merged.mount: Deactivated successfully.
Jan 31 07:43:01 compute-0 podman[279434]: 2026-01-31 07:43:01.716288535 +0000 UTC m=+0.324814855 container remove 96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hodgkin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:43:01 compute-0 systemd[1]: libpod-conmon-96af84377f7af3bef4662e400350b90d1f9b4982c29a15dcc288e1c1c9cfa8a8.scope: Deactivated successfully.
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.778 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.835 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.835 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:43:01 compute-0 nova_compute[247704]: 2026-01-31 07:43:01.882 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:43:01 compute-0 podman[279475]: 2026-01-31 07:43:01.907886957 +0000 UTC m=+0.081141258 container create d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaplygin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:43:01 compute-0 podman[279475]: 2026-01-31 07:43:01.846207076 +0000 UTC m=+0.019461397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:43:01 compute-0 systemd[1]: Started libpod-conmon-d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142.scope.
Jan 31 07:43:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9417b42a71d451da86b32a98d09576de406ccc3750f1862a7829b8591c9df839/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9417b42a71d451da86b32a98d09576de406ccc3750f1862a7829b8591c9df839/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9417b42a71d451da86b32a98d09576de406ccc3750f1862a7829b8591c9df839/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9417b42a71d451da86b32a98d09576de406ccc3750f1862a7829b8591c9df839/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9417b42a71d451da86b32a98d09576de406ccc3750f1862a7829b8591c9df839/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:02 compute-0 podman[279475]: 2026-01-31 07:43:02.066893961 +0000 UTC m=+0.240148362 container init d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaplygin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:43:02 compute-0 podman[279475]: 2026-01-31 07:43:02.078484644 +0000 UTC m=+0.251738965 container start d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaplygin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 07:43:02 compute-0 podman[279475]: 2026-01-31 07:43:02.102868522 +0000 UTC m=+0.276122913 container attach d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:43:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:43:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316177051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:02 compute-0 nova_compute[247704]: 2026-01-31 07:43:02.375 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:43:02 compute-0 nova_compute[247704]: 2026-01-31 07:43:02.382 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:43:02 compute-0 nova_compute[247704]: 2026-01-31 07:43:02.460 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:43:02 compute-0 ceph-mon[74496]: pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 35 KiB/s wr, 127 op/s
Jan 31 07:43:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1316177051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1720823205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:02.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:02 compute-0 nova_compute[247704]: 2026-01-31 07:43:02.670 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:43:02 compute-0 nova_compute[247704]: 2026-01-31 07:43:02.671 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 07:43:02 compute-0 competent_chaplygin[279493]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:43:02 compute-0 competent_chaplygin[279493]: --> relative data size: 1.0
Jan 31 07:43:02 compute-0 competent_chaplygin[279493]: --> All data devices are unavailable
Jan 31 07:43:02 compute-0 systemd[1]: libpod-d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142.scope: Deactivated successfully.
Jan 31 07:43:02 compute-0 podman[279475]: 2026-01-31 07:43:02.832260404 +0000 UTC m=+1.005514745 container died d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaplygin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9417b42a71d451da86b32a98d09576de406ccc3750f1862a7829b8591c9df839-merged.mount: Deactivated successfully.
Jan 31 07:43:02 compute-0 podman[279475]: 2026-01-31 07:43:02.959077049 +0000 UTC m=+1.132331390 container remove d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:43:02 compute-0 systemd[1]: libpod-conmon-d269eb93ca16f9bef0de8645d00cbd2b13e0b15cc1767f6d07f2bd8a4b7a2142.scope: Deactivated successfully.
Jan 31 07:43:03 compute-0 sudo[279365]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:03 compute-0 sudo[279557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:03 compute-0 sudo[279557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:03 compute-0 sudo[279557]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:03 compute-0 podman[279544]: 2026-01-31 07:43:03.095838649 +0000 UTC m=+0.100598215 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:43:03 compute-0 sudo[279596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:43:03 compute-0 sudo[279596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:03 compute-0 sudo[279596]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:03 compute-0 sudo[279621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:03 compute-0 sudo[279621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:03 compute-0 sudo[279621]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:03 compute-0 sudo[279646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:43:03 compute-0 sudo[279646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:03 compute-0 podman[279713]: 2026-01-31 07:43:03.602973037 +0000 UTC m=+0.087600396 container create 461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_raman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:43:03 compute-0 podman[279713]: 2026-01-31 07:43:03.53898898 +0000 UTC m=+0.023616328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:43:03 compute-0 systemd[1]: Started libpod-conmon-461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334.scope.
Jan 31 07:43:03 compute-0 nova_compute[247704]: 2026-01-31 07:43:03.672 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:03 compute-0 nova_compute[247704]: 2026-01-31 07:43:03.675 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:03 compute-0 nova_compute[247704]: 2026-01-31 07:43:03.675 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:03 compute-0 nova_compute[247704]: 2026-01-31 07:43:03.675 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:03 compute-0 nova_compute[247704]: 2026-01-31 07:43:03.676 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:43:03 compute-0 ceph-mon[74496]: pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 07:43:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:43:03 compute-0 podman[279713]: 2026-01-31 07:43:03.777369679 +0000 UTC m=+0.261997087 container init 461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:43:03 compute-0 podman[279713]: 2026-01-31 07:43:03.787837995 +0000 UTC m=+0.272465333 container start 461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:43:03 compute-0 goofy_raman[279729]: 167 167
Jan 31 07:43:03 compute-0 systemd[1]: libpod-461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334.scope: Deactivated successfully.
Jan 31 07:43:03 compute-0 podman[279713]: 2026-01-31 07:43:03.914680581 +0000 UTC m=+0.399307939 container attach 461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_raman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:43:03 compute-0 podman[279713]: 2026-01-31 07:43:03.915651195 +0000 UTC m=+0.400278703 container died 461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_raman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8366f34ccc8d04f3127f9da10a55287d5f0e4370c45ae156308a937ce5d5a26e-merged.mount: Deactivated successfully.
Jan 31 07:43:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:04.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:04 compute-0 podman[279713]: 2026-01-31 07:43:04.249381757 +0000 UTC m=+0.734009085 container remove 461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:43:04 compute-0 systemd[1]: libpod-conmon-461c24d9c07a0cc5d00c04d4cd846d2ab88c3cdb3cf6e42e56747e724c527334.scope: Deactivated successfully.
Jan 31 07:43:04 compute-0 podman[279753]: 2026-01-31 07:43:04.421865222 +0000 UTC m=+0.060333239 container create 3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:43:04 compute-0 systemd[1]: Started libpod-conmon-3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0.scope.
Jan 31 07:43:04 compute-0 podman[279753]: 2026-01-31 07:43:04.3940332 +0000 UTC m=+0.032501257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:43:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a021d4a73829beb93db5006c4cf5340c5893c61cc21d8095d64be9653f1aee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a021d4a73829beb93db5006c4cf5340c5893c61cc21d8095d64be9653f1aee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a021d4a73829beb93db5006c4cf5340c5893c61cc21d8095d64be9653f1aee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a021d4a73829beb93db5006c4cf5340c5893c61cc21d8095d64be9653f1aee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:04 compute-0 podman[279753]: 2026-01-31 07:43:04.540240271 +0000 UTC m=+0.178708328 container init 3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:43:04 compute-0 podman[279753]: 2026-01-31 07:43:04.549732912 +0000 UTC m=+0.188200949 container start 3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:43:04 compute-0 podman[279753]: 2026-01-31 07:43:04.559985174 +0000 UTC m=+0.198453271 container attach 3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:43:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:04.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1847506575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]: {
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:     "0": [
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:         {
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "devices": [
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "/dev/loop3"
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             ],
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "lv_name": "ceph_lv0",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "lv_size": "7511998464",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "name": "ceph_lv0",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "tags": {
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.cluster_name": "ceph",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.crush_device_class": "",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.encrypted": "0",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.osd_id": "0",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.type": "block",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:                 "ceph.vdo": "0"
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             },
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "type": "block",
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:             "vg_name": "ceph_vg0"
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:         }
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]:     ]
Jan 31 07:43:05 compute-0 hardcore_dijkstra[279769]: }
Jan 31 07:43:05 compute-0 systemd[1]: libpod-3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0.scope: Deactivated successfully.
Jan 31 07:43:05 compute-0 podman[279753]: 2026-01-31 07:43:05.327403137 +0000 UTC m=+0.965871134 container died 3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a021d4a73829beb93db5006c4cf5340c5893c61cc21d8095d64be9653f1aee-merged.mount: Deactivated successfully.
Jan 31 07:43:05 compute-0 podman[279753]: 2026-01-31 07:43:05.416190461 +0000 UTC m=+1.054658468 container remove 3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_dijkstra, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:43:05 compute-0 systemd[1]: libpod-conmon-3de075435d275772de51d2dd9983a0bae39001df2ee1fb1ba18680e04ac616e0.scope: Deactivated successfully.
Jan 31 07:43:05 compute-0 nova_compute[247704]: 2026-01-31 07:43:05.447 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:05 compute-0 sudo[279646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:05 compute-0 sudo[279792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:05 compute-0 sudo[279792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:05 compute-0 sudo[279792]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:05 compute-0 nova_compute[247704]: 2026-01-31 07:43:05.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:05 compute-0 sudo[279817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:43:05 compute-0 sudo[279817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:05 compute-0 sudo[279817]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:05 compute-0 sudo[279842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:05 compute-0 sudo[279842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:05 compute-0 sudo[279842]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:05 compute-0 sudo[279867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:43:05 compute-0 sudo[279867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:05 compute-0 ceph-mon[74496]: pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:43:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:06 compute-0 podman[279933]: 2026-01-31 07:43:06.085509742 +0000 UTC m=+0.055194382 container create 89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:43:06 compute-0 systemd[1]: Started libpod-conmon-89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0.scope.
Jan 31 07:43:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:43:06 compute-0 podman[279933]: 2026-01-31 07:43:06.061152096 +0000 UTC m=+0.030836626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:43:06 compute-0 podman[279933]: 2026-01-31 07:43:06.161270827 +0000 UTC m=+0.130955287 container init 89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:43:06 compute-0 podman[279933]: 2026-01-31 07:43:06.166023394 +0000 UTC m=+0.135707834 container start 89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:43:06 compute-0 elastic_zhukovsky[279949]: 167 167
Jan 31 07:43:06 compute-0 systemd[1]: libpod-89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0.scope: Deactivated successfully.
Jan 31 07:43:06 compute-0 conmon[279949]: conmon 89d52e1aaf8a90ce3e6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0.scope/container/memory.events
Jan 31 07:43:06 compute-0 podman[279933]: 2026-01-31 07:43:06.17242806 +0000 UTC m=+0.142112500 container attach 89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:43:06 compute-0 podman[279933]: 2026-01-31 07:43:06.173478316 +0000 UTC m=+0.143162756 container died 89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-45b43fac9bf01b51e49dfb0c903d56cfbce0afd9106a84f726656c9c149628ca-merged.mount: Deactivated successfully.
Jan 31 07:43:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:06.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:06 compute-0 podman[279933]: 2026-01-31 07:43:06.223149623 +0000 UTC m=+0.192834063 container remove 89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:43:06 compute-0 systemd[1]: libpod-conmon-89d52e1aaf8a90ce3e6a1d6db5c4243b1a059d6d761ff5451e7cb574f38b91b0.scope: Deactivated successfully.
Jan 31 07:43:06 compute-0 podman[279974]: 2026-01-31 07:43:06.329953758 +0000 UTC m=+0.035274504 container create 763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cartwright, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:43:06 compute-0 systemd[1]: Started libpod-conmon-763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8.scope.
Jan 31 07:43:06 compute-0 podman[279974]: 2026-01-31 07:43:06.31533569 +0000 UTC m=+0.020656436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:43:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb6b9db34f45d33bf083193268563d0c3e822b0fb86b2dde7d3e84bada8fe79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb6b9db34f45d33bf083193268563d0c3e822b0fb86b2dde7d3e84bada8fe79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb6b9db34f45d33bf083193268563d0c3e822b0fb86b2dde7d3e84bada8fe79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb6b9db34f45d33bf083193268563d0c3e822b0fb86b2dde7d3e84bada8fe79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:43:06 compute-0 podman[279974]: 2026-01-31 07:43:06.54439762 +0000 UTC m=+0.249718406 container init 763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cartwright, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:43:06 compute-0 podman[279974]: 2026-01-31 07:43:06.553956534 +0000 UTC m=+0.259277320 container start 763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cartwright, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:43:06 compute-0 podman[279974]: 2026-01-31 07:43:06.561695953 +0000 UTC m=+0.267016729 container attach 763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:43:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:06.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:43:06 compute-0 nova_compute[247704]: 2026-01-31 07:43:06.780 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]: {
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]:         "osd_id": 0,
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]:         "type": "bluestore"
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]:     }
Jan 31 07:43:07 compute-0 quirky_cartwright[279990]: }
Jan 31 07:43:07 compute-0 systemd[1]: libpod-763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8.scope: Deactivated successfully.
Jan 31 07:43:07 compute-0 podman[279974]: 2026-01-31 07:43:07.418550816 +0000 UTC m=+1.123871562 container died 763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bb6b9db34f45d33bf083193268563d0c3e822b0fb86b2dde7d3e84bada8fe79-merged.mount: Deactivated successfully.
Jan 31 07:43:07 compute-0 podman[279974]: 2026-01-31 07:43:07.504894511 +0000 UTC m=+1.210215257 container remove 763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:43:07 compute-0 systemd[1]: libpod-conmon-763f4e1ccd6cbcec57472d3460b75271b41cc96a60dd50ecc36b5880e58585d8.scope: Deactivated successfully.
Jan 31 07:43:07 compute-0 sudo[279867]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:43:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:43:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c06fc52b-724a-48e8-961c-52f4d45fda26 does not exist
Jan 31 07:43:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a70a8cdc-6b94-462f-8407-1002e579335a does not exist
Jan 31 07:43:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2b1c003d-de9b-4d4a-92d0-99e0b7ac45f3 does not exist
Jan 31 07:43:07 compute-0 sudo[280026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:07 compute-0 sudo[280026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:07 compute-0 sudo[280026]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:07 compute-0 sudo[280051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:43:07 compute-0 sudo[280051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:07 compute-0 sudo[280051]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:07 compute-0 ceph-mon[74496]: pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:43:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:07 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:43:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:08.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 7.6 KiB/s rd, 682 B/s wr, 12 op/s
Jan 31 07:43:10 compute-0 ceph-mon[74496]: pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 7.6 KiB/s rd, 682 B/s wr, 12 op/s
Jan 31 07:43:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:10.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:10 compute-0 nova_compute[247704]: 2026-01-31 07:43:10.450 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:10.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:43:11.152 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:43:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:43:11.152 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:43:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:43:11.153 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:43:11 compute-0 sudo[280078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:11 compute-0 sudo[280078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:11 compute-0 sudo[280078]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:11 compute-0 sudo[280103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:11 compute-0 sudo[280103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:11 compute-0 sudo[280103]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:11 compute-0 nova_compute[247704]: 2026-01-31 07:43:11.782 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:12 compute-0 ceph-mon[74496]: pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:12.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:14 compute-0 ceph-mon[74496]: pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:14.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:15 compute-0 nova_compute[247704]: 2026-01-31 07:43:15.453 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:15 compute-0 ceph-mon[74496]: pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:16.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:16 compute-0 nova_compute[247704]: 2026-01-31 07:43:16.784 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:18.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:43:20
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control']
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:43:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:43:20 compute-0 nova_compute[247704]: 2026-01-31 07:43:20.455 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:20.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:20 compute-0 podman[280132]: 2026-01-31 07:43:20.918655314 +0000 UTC m=+0.080191175 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:43:21 compute-0 nova_compute[247704]: 2026-01-31 07:43:21.785 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:22.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:22.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:24.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:24.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:25 compute-0 nova_compute[247704]: 2026-01-31 07:43:25.458 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:25 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 07:43:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:26.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:26.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:26 compute-0 nova_compute[247704]: 2026-01-31 07:43:26.787 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:28.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:28.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.8473 seconds
Jan 31 07:43:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:43:28 compute-0 ceph-mon[74496]: paxos.0).electionLogic(25) init, last seen epoch 25, mid-election, bumping
Jan 31 07:43:28 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:43:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:43:29 compute-0 ceph-mon[74496]: pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:43:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:43:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:43:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:43:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 38m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:43:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:43:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:30 compute-0 nova_compute[247704]: 2026-01-31 07:43:30.461 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:30 compute-0 ceph-mon[74496]: pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-0 ceph-mon[74496]: pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-0 ceph-mon[74496]: pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-0 ceph-mon[74496]: pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-0 ceph-mon[74496]: pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 07:43:30 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 07:43:30 compute-0 ceph-mon[74496]: pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:30 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:43:30 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:43:30 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:43:30 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:43:30 compute-0 ceph-mon[74496]: osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:43:30 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 38m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:43:30 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:43:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:31 compute-0 ceph-mon[74496]: pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:31 compute-0 sudo[280157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:31 compute-0 sudo[280157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:31 compute-0 sudo[280157]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:31 compute-0 sudo[280182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:31 compute-0 sudo[280182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:31 compute-0 sudo[280182]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:31 compute-0 nova_compute[247704]: 2026-01-31 07:43:31.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:32.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:32.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:33 compute-0 podman[280208]: 2026-01-31 07:43:33.960144085 +0000 UTC m=+0.129583145 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Jan 31 07:43:34 compute-0 ceph-mon[74496]: pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:34.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:43:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:43:35 compute-0 nova_compute[247704]: 2026-01-31 07:43:35.499 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:36 compute-0 ceph-mon[74496]: pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:36 compute-0 nova_compute[247704]: 2026-01-31 07:43:36.792 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:37 compute-0 ceph-mon[74496]: pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:38.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:38.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:39 compute-0 ceph-mon[74496]: pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:40.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:40 compute-0 nova_compute[247704]: 2026-01-31 07:43:40.502 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:41 compute-0 nova_compute[247704]: 2026-01-31 07:43:41.794 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:42 compute-0 ceph-mon[74496]: pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:42.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:43:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:42.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:43:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:44 compute-0 ceph-mon[74496]: pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:44.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3345716201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:43:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3345716201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:43:45 compute-0 nova_compute[247704]: 2026-01-31 07:43:45.505 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:46.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:46 compute-0 ceph-mon[74496]: pgmap v1348: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:46 compute-0 nova_compute[247704]: 2026-01-31 07:43:46.796 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:47 compute-0 ovn_controller[149457]: 2026-01-31T07:43:47Z|00168|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 07:43:47 compute-0 ceph-mon[74496]: pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:48.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:48.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:49 compute-0 sshd-session[280241]: Invalid user sol from 45.148.10.240 port 43218
Jan 31 07:43:49 compute-0 sshd-session[280241]: Connection closed by invalid user sol 45.148.10.240 port 43218 [preauth]
Jan 31 07:43:49 compute-0 ceph-mon[74496]: pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Jan 31 07:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:43:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:50.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:50 compute-0 nova_compute[247704]: 2026-01-31 07:43:50.508 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:50 compute-0 sshd-session[280244]: banner exchange: Connection from 64.62.197.17 port 3320: invalid format
Jan 31 07:43:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:50.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Jan 31 07:43:51 compute-0 ceph-osd[84816]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 07:43:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2752754670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:51 compute-0 nova_compute[247704]: 2026-01-31 07:43:51.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:51 compute-0 sudo[280247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:51 compute-0 sudo[280247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:51 compute-0 sudo[280247]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:51 compute-0 podman[280246]: 2026-01-31 07:43:51.872071377 +0000 UTC m=+0.051807390 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 07:43:51 compute-0 sudo[280287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:43:51 compute-0 sudo[280287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:43:51 compute-0 sudo[280287]: pam_unix(sudo:session): session closed for user root
Jan 31 07:43:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:52.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:52 compute-0 ceph-mon[74496]: pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Jan 31 07:43:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:43:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:52.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:43:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 07:43:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:43:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:54.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:43:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 07:43:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:55 compute-0 ceph-mon[74496]: pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 07:43:55 compute-0 nova_compute[247704]: 2026-01-31 07:43:55.511 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:56 compute-0 ceph-mon[74496]: pgmap v1353: 305 pgs: 305 active+clean; 41 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 07:43:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:56.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 70 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 885 KiB/s wr, 19 op/s
Jan 31 07:43:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:56.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:56 compute-0 nova_compute[247704]: 2026-01-31 07:43:56.835 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/334924200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:43:57 compute-0 nova_compute[247704]: 2026-01-31 07:43:57.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:58.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:58 compute-0 ceph-mon[74496]: pgmap v1354: 305 pgs: 305 active+clean; 70 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 885 KiB/s wr, 19 op/s
Jan 31 07:43:58 compute-0 nova_compute[247704]: 2026-01-31 07:43:58.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:43:58 compute-0 nova_compute[247704]: 2026-01-31 07:43:58.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:43:58 compute-0 nova_compute[247704]: 2026-01-31 07:43:58.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:43:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:43:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 73 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 31 07:43:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:43:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:43:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:58.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:43:59 compute-0 nova_compute[247704]: 2026-01-31 07:43:59.213 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:43:59.213 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:43:59.214 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:43:59 compute-0 nova_compute[247704]: 2026-01-31 07:43:59.350 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:44:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:00.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:00 compute-0 nova_compute[247704]: 2026-01-31 07:44:00.514 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:00.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:01.217 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:01 compute-0 nova_compute[247704]: 2026-01-31 07:44:01.837 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:02 compute-0 nova_compute[247704]: 2026-01-31 07:44:02.343 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:02.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:02 compute-0 nova_compute[247704]: 2026-01-31 07:44:02.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:02 compute-0 nova_compute[247704]: 2026-01-31 07:44:02.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:02 compute-0 nova_compute[247704]: 2026-01-31 07:44:02.724 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:02 compute-0 nova_compute[247704]: 2026-01-31 07:44:02.725 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:02 compute-0 nova_compute[247704]: 2026-01-31 07:44:02.725 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:02 compute-0 nova_compute[247704]: 2026-01-31 07:44:02.725 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:44:02 compute-0 nova_compute[247704]: 2026-01-31 07:44:02.726 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:02.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:04.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 07:44:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:04.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:04 compute-0 podman[280330]: 2026-01-31 07:44:04.936784795 +0000 UTC m=+0.106264373 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:44:05 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 07:44:05 compute-0 nova_compute[247704]: 2026-01-31 07:44:05.517 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:06.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 07:44:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:06.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:06 compute-0 nova_compute[247704]: 2026-01-31 07:44:06.883 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:07 compute-0 sudo[280359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:07 compute-0 sudo[280359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:07 compute-0 sudo[280359]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:08 compute-0 sudo[280384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:44:08 compute-0 sudo[280384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:08 compute-0 sudo[280384]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:08 compute-0 sudo[280409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:08 compute-0 sudo[280409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:08 compute-0 sudo[280409]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:08 compute-0 sudo[280434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:44:08 compute-0 sudo[280434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:08.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:08 compute-0 sudo[280434]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 930 KiB/s wr, 16 op/s
Jan 31 07:44:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:08.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:09 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 07:44:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:10.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:10 compute-0 nova_compute[247704]: 2026-01-31 07:44:10.524 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 754 KiB/s wr, 16 op/s
Jan 31 07:44:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:10.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 12.2881 seconds
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:44:11 compute-0 ceph-mon[74496]: paxos.0).electionLogic(29) init, last seen epoch 29, mid-election, bumping
Jan 31 07:44:11 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.214677334s
Jan 31 07:44:11 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.214677334s
Jan 31 07:44:11 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.214965343s, txc = 0x55f9daa45b00
Jan 31 07:44:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:11.154 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:11.154 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:11.154 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:44:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/914263149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:11 compute-0 ceph-mon[74496]: pgmap v1355: 305 pgs: 305 active+clean; 73 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.0 MiB/s wr, 19 op/s
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 38m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:44:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:44:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792388513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:11 compute-0 nova_compute[247704]: 2026-01-31 07:44:11.768 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 9.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:11 compute-0 nova_compute[247704]: 2026-01-31 07:44:11.884 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:11 compute-0 nova_compute[247704]: 2026-01-31 07:44:11.941 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:44:11 compute-0 nova_compute[247704]: 2026-01-31 07:44:11.943 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4715MB free_disk=20.969539642333984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:44:11 compute-0 nova_compute[247704]: 2026-01-31 07:44:11.943 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:11 compute-0 nova_compute[247704]: 2026-01-31 07:44:11.943 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:11 compute-0 sudo[280502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:11 compute-0 sudo[280502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:11 compute-0 sudo[280502]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:12 compute-0 sudo[280527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:12 compute-0 sudo[280527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:12 compute-0 sudo[280527]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:12 compute-0 nova_compute[247704]: 2026-01-31 07:44:12.087 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:44:12 compute-0 nova_compute[247704]: 2026-01-31 07:44:12.088 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:44:12 compute-0 nova_compute[247704]: 2026-01-31 07:44:12.122 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:12.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:44:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2815318421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:12 compute-0 nova_compute[247704]: 2026-01-31 07:44:12.549 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:12 compute-0 nova_compute[247704]: 2026-01-31 07:44:12.556 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:44:12 compute-0 nova_compute[247704]: 2026-01-31 07:44:12.661 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:44:12 compute-0 nova_compute[247704]: 2026-01-31 07:44:12.663 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:44:12 compute-0 nova_compute[247704]: 2026-01-31 07:44:12.664 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cbe4c1a5-0bba-4932-8390-683b16ab109a does not exist
Jan 31 07:44:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7df4f5fb-5f7c-40f8-bd1d-3065901e46fc does not exist
Jan 31 07:44:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5df80aa8-07c5-4ab9-bf68-3692a18ff2ad does not exist
Jan 31 07:44:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:44:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:44:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:44:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:44:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:44:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:44:12 compute-0 sudo[280574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:12 compute-0 sudo[280574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:12 compute-0 sudo[280574]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 93 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 177 KiB/s wr, 13 op/s
Jan 31 07:44:12 compute-0 sudo[280599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:44:12 compute-0 sudo[280599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:12 compute-0 sudo[280599]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:12.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:12 compute-0 sudo[280624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:12 compute-0 sudo[280624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:12 compute-0 sudo[280624]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:12 compute-0 sudo[280649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:44:12 compute-0 sudo[280649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:13 compute-0 ceph-mon[74496]: pgmap v1356: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:13 compute-0 ceph-mon[74496]: pgmap v1357: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/85849488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/991046489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: pgmap v1358: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3402555874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: pgmap v1359: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2181903236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 07:44:13 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 07:44:13 compute-0 ceph-mon[74496]: pgmap v1360: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 930 KiB/s wr, 16 op/s
Jan 31 07:44:13 compute-0 ceph-mon[74496]: pgmap v1361: 305 pgs: 305 active+clean; 88 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 754 KiB/s wr, 16 op/s
Jan 31 07:44:13 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:44:13 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:44:13 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:44:13 compute-0 ceph-mon[74496]: osdmap e203: 3 total, 3 up, 3 in
Jan 31 07:44:13 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 38m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:44:13 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3792388513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2815318421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:44:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:44:13 compute-0 podman[280714]: 2026-01-31 07:44:13.193012049 +0000 UTC m=+0.020585444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:44:13 compute-0 podman[280714]: 2026-01-31 07:44:13.500213043 +0000 UTC m=+0.327786348 container create 7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:44:13 compute-0 systemd[1]: Started libpod-conmon-7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b.scope.
Jan 31 07:44:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:44:13 compute-0 podman[280714]: 2026-01-31 07:44:13.634977963 +0000 UTC m=+0.462551338 container init 7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:44:13 compute-0 podman[280714]: 2026-01-31 07:44:13.644196329 +0000 UTC m=+0.471769624 container start 7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:44:13 compute-0 distracted_mendeleev[280731]: 167 167
Jan 31 07:44:13 compute-0 systemd[1]: libpod-7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b.scope: Deactivated successfully.
Jan 31 07:44:13 compute-0 podman[280714]: 2026-01-31 07:44:13.658154521 +0000 UTC m=+0.485727806 container attach 7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:44:13 compute-0 podman[280714]: 2026-01-31 07:44:13.658818557 +0000 UTC m=+0.486391862 container died 7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:44:13 compute-0 nova_compute[247704]: 2026-01-31 07:44:13.664 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd22343df27202fd6da413390805bdf7f50b9dcc960abe3357824d73305cd468-merged.mount: Deactivated successfully.
Jan 31 07:44:13 compute-0 podman[280714]: 2026-01-31 07:44:13.77737998 +0000 UTC m=+0.604953285 container remove 7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:44:13 compute-0 systemd[1]: libpod-conmon-7b5089231f46e5afa3f83bc99fda528bb812556230d2149eb68e36380a64ce6b.scope: Deactivated successfully.
Jan 31 07:44:13 compute-0 nova_compute[247704]: 2026-01-31 07:44:13.852 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:13 compute-0 nova_compute[247704]: 2026-01-31 07:44:13.852 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:13 compute-0 nova_compute[247704]: 2026-01-31 07:44:13.853 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:13 compute-0 nova_compute[247704]: 2026-01-31 07:44:13.853 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:13 compute-0 nova_compute[247704]: 2026-01-31 07:44:13.853 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:44:13 compute-0 podman[280758]: 2026-01-31 07:44:13.915154954 +0000 UTC m=+0.041668572 container create 2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hugle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:44:13 compute-0 systemd[1]: Started libpod-conmon-2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0.scope.
Jan 31 07:44:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517f91dac963d19d091f22e841f126ff26aa0a924b4b826efe9dbbec3004b965/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517f91dac963d19d091f22e841f126ff26aa0a924b4b826efe9dbbec3004b965/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517f91dac963d19d091f22e841f126ff26aa0a924b4b826efe9dbbec3004b965/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:13 compute-0 podman[280758]: 2026-01-31 07:44:13.895053812 +0000 UTC m=+0.021567450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517f91dac963d19d091f22e841f126ff26aa0a924b4b826efe9dbbec3004b965/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/517f91dac963d19d091f22e841f126ff26aa0a924b4b826efe9dbbec3004b965/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:14 compute-0 podman[280758]: 2026-01-31 07:44:14.009685009 +0000 UTC m=+0.136198617 container init 2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:44:14 compute-0 podman[280758]: 2026-01-31 07:44:14.018353601 +0000 UTC m=+0.144867209 container start 2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hugle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:44:14 compute-0 podman[280758]: 2026-01-31 07:44:14.022764499 +0000 UTC m=+0.149278127 container attach 2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:44:14 compute-0 ceph-mon[74496]: pgmap v1362: 305 pgs: 305 active+clean; 93 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 177 KiB/s wr, 13 op/s
Jan 31 07:44:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1285714422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:14.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 93 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s rd, 176 KiB/s wr, 10 op/s
Jan 31 07:44:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:14.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:14 compute-0 intelligent_hugle[280774]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:44:14 compute-0 intelligent_hugle[280774]: --> relative data size: 1.0
Jan 31 07:44:14 compute-0 intelligent_hugle[280774]: --> All data devices are unavailable
Jan 31 07:44:14 compute-0 systemd[1]: libpod-2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0.scope: Deactivated successfully.
Jan 31 07:44:14 compute-0 podman[280790]: 2026-01-31 07:44:14.953358179 +0000 UTC m=+0.029037253 container died 2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hugle, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-517f91dac963d19d091f22e841f126ff26aa0a924b4b826efe9dbbec3004b965-merged.mount: Deactivated successfully.
Jan 31 07:44:15 compute-0 podman[280790]: 2026-01-31 07:44:15.040316738 +0000 UTC m=+0.115995792 container remove 2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hugle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:44:15 compute-0 systemd[1]: libpod-conmon-2f044e4ff1e2c2d02e4d961e79d754b694d57db62d1dbcc818021219d84bbaa0.scope: Deactivated successfully.
Jan 31 07:44:15 compute-0 sudo[280649]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:15 compute-0 sudo[280805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:15 compute-0 sudo[280805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:15 compute-0 sudo[280805]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:15 compute-0 sudo[280830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:44:15 compute-0 sudo[280830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:15 compute-0 sudo[280830]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:15 compute-0 sudo[280855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:15 compute-0 sudo[280855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:15 compute-0 sudo[280855]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:15 compute-0 sudo[280880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:44:15 compute-0 sudo[280880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3142601042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:15 compute-0 nova_compute[247704]: 2026-01-31 07:44:15.560 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:15 compute-0 podman[280947]: 2026-01-31 07:44:15.631596578 +0000 UTC m=+0.043394404 container create c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hypatia, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:44:15 compute-0 systemd[1]: Started libpod-conmon-c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240.scope.
Jan 31 07:44:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:44:15 compute-0 podman[280947]: 2026-01-31 07:44:15.612482259 +0000 UTC m=+0.024280105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:44:15 compute-0 podman[280947]: 2026-01-31 07:44:15.713842982 +0000 UTC m=+0.125640828 container init c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hypatia, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:44:15 compute-0 podman[280947]: 2026-01-31 07:44:15.720493715 +0000 UTC m=+0.132291541 container start c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hypatia, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:44:15 compute-0 compassionate_hypatia[280963]: 167 167
Jan 31 07:44:15 compute-0 systemd[1]: libpod-c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240.scope: Deactivated successfully.
Jan 31 07:44:15 compute-0 podman[280947]: 2026-01-31 07:44:15.728018539 +0000 UTC m=+0.139816375 container attach c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hypatia, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 07:44:15 compute-0 podman[280947]: 2026-01-31 07:44:15.728389928 +0000 UTC m=+0.140187774 container died c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hypatia, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-859c64e96cb3680db00687580bcf3691c6e77225f68f5c61c82e995b60569c1b-merged.mount: Deactivated successfully.
Jan 31 07:44:15 compute-0 podman[280947]: 2026-01-31 07:44:15.799843878 +0000 UTC m=+0.211641704 container remove c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:44:15 compute-0 systemd[1]: libpod-conmon-c4bcdd70d629349fad491c0f9f60276351947f5efc13dc555d6ea71423cc1240.scope: Deactivated successfully.
Jan 31 07:44:15 compute-0 nova_compute[247704]: 2026-01-31 07:44:15.923 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:15 compute-0 nova_compute[247704]: 2026-01-31 07:44:15.924 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:15 compute-0 podman[280986]: 2026-01-31 07:44:15.971258456 +0000 UTC m=+0.059811016 container create 6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:44:15 compute-0 nova_compute[247704]: 2026-01-31 07:44:15.986 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:44:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:16 compute-0 systemd[1]: Started libpod-conmon-6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5.scope.
Jan 31 07:44:16 compute-0 podman[280986]: 2026-01-31 07:44:15.937888128 +0000 UTC m=+0.026440658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:44:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b909f630a448ffe2abdd8819ae6f87c08106df6261d26a5e34f54570ab687900/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b909f630a448ffe2abdd8819ae6f87c08106df6261d26a5e34f54570ab687900/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b909f630a448ffe2abdd8819ae6f87c08106df6261d26a5e34f54570ab687900/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b909f630a448ffe2abdd8819ae6f87c08106df6261d26a5e34f54570ab687900/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:16 compute-0 podman[280986]: 2026-01-31 07:44:16.077449286 +0000 UTC m=+0.166001816 container init 6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 31 07:44:16 compute-0 podman[280986]: 2026-01-31 07:44:16.088429505 +0000 UTC m=+0.176981975 container start 6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:44:16 compute-0 podman[280986]: 2026-01-31 07:44:16.097064116 +0000 UTC m=+0.185616606 container attach 6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dubinsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.105 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.106 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.136 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.137 247708 INFO nova.compute.claims [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.311 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:16.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:16 compute-0 ceph-mon[74496]: pgmap v1363: 305 pgs: 305 active+clean; 93 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s rd, 176 KiB/s wr, 10 op/s
Jan 31 07:44:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1037439228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 134 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]: {
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:     "0": [
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:         {
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "devices": [
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "/dev/loop3"
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             ],
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "lv_name": "ceph_lv0",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "lv_size": "7511998464",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "name": "ceph_lv0",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "tags": {
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.cluster_name": "ceph",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.crush_device_class": "",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.encrypted": "0",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.osd_id": "0",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.type": "block",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:                 "ceph.vdo": "0"
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             },
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "type": "block",
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:             "vg_name": "ceph_vg0"
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:         }
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]:     ]
Jan 31 07:44:16 compute-0 vigorous_dubinsky[281003]: }
Jan 31 07:44:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:44:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187030413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:16 compute-0 systemd[1]: libpod-6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5.scope: Deactivated successfully.
Jan 31 07:44:16 compute-0 podman[280986]: 2026-01-31 07:44:16.799269393 +0000 UTC m=+0.887821833 container died 6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.818 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.827 247708 DEBUG nova.compute.provider_tree [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:44:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:16.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.860 247708 DEBUG nova.scheduler.client.report [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.886 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.925 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:16 compute-0 nova_compute[247704]: 2026-01-31 07:44:16.926 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.045 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.045 247708 DEBUG nova.network.neutron [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.093 247708 INFO nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:44:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b909f630a448ffe2abdd8819ae6f87c08106df6261d26a5e34f54570ab687900-merged.mount: Deactivated successfully.
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.151 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:44:17 compute-0 podman[280986]: 2026-01-31 07:44:17.27325976 +0000 UTC m=+1.361812240 container remove 6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:44:17 compute-0 sudo[280880]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.324 247708 DEBUG nova.policy [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4865905ed4e4262a2242d3f323d4314', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9ddf930129cf4e0395f8c5e70fd9eda8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.342 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.344 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.345 247708 INFO nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Creating image(s)
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.376 247708 DEBUG nova.storage.rbd_utils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image a2a97039-8813-4ebf-9ce0-488982bece16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:17 compute-0 systemd[1]: libpod-conmon-6cd13ff3cc16cbf79cca79103777006199e434089194c35a07b1ec955c42e0d5.scope: Deactivated successfully.
Jan 31 07:44:17 compute-0 sudo[281048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:17 compute-0 sudo[281048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:17 compute-0 sudo[281048]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.410 247708 DEBUG nova.storage.rbd_utils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image a2a97039-8813-4ebf-9ce0-488982bece16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.437 247708 DEBUG nova.storage.rbd_utils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image a2a97039-8813-4ebf-9ce0-488982bece16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.441 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:17 compute-0 sudo[281107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:44:17 compute-0 sudo[281107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:17 compute-0 sudo[281107]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:17 compute-0 ceph-mon[74496]: pgmap v1364: 305 pgs: 305 active+clean; 134 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:44:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2187030413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.500 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.501 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.501 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.502 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:17 compute-0 sudo[281154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:17 compute-0 sudo[281154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:17 compute-0 sudo[281154]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:17 compute-0 sudo[281188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:44:17 compute-0 sudo[281188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.636 247708 DEBUG nova.storage.rbd_utils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image a2a97039-8813-4ebf-9ce0-488982bece16_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:17 compute-0 nova_compute[247704]: 2026-01-31 07:44:17.644 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a2a97039-8813-4ebf-9ce0-488982bece16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:17 compute-0 podman[281278]: 2026-01-31 07:44:17.954325648 +0000 UTC m=+0.115370286 container create cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:44:17 compute-0 podman[281278]: 2026-01-31 07:44:17.86046925 +0000 UTC m=+0.021513898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:44:18 compute-0 systemd[1]: Started libpod-conmon-cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3.scope.
Jan 31 07:44:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:44:18 compute-0 podman[281278]: 2026-01-31 07:44:18.179596005 +0000 UTC m=+0.340640673 container init cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:44:18 compute-0 podman[281278]: 2026-01-31 07:44:18.191712531 +0000 UTC m=+0.352757199 container start cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_villani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:44:18 compute-0 busy_villani[281297]: 167 167
Jan 31 07:44:18 compute-0 systemd[1]: libpod-cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3.scope: Deactivated successfully.
Jan 31 07:44:18 compute-0 conmon[281297]: conmon cbb031fae632b35ae131 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3.scope/container/memory.events
Jan 31 07:44:18 compute-0 podman[281278]: 2026-01-31 07:44:18.234266524 +0000 UTC m=+0.395311182 container attach cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_villani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:44:18 compute-0 podman[281278]: 2026-01-31 07:44:18.235267159 +0000 UTC m=+0.396311797 container died cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_villani, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:44:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:18 compute-0 nova_compute[247704]: 2026-01-31 07:44:18.635 247708 DEBUG nova.network.neutron [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Successfully created port: a85d82a5-a910-4674-9460-0efb5cc7e0c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-591d82eec04b999bbdc458bdf2fd17bc9e3b782e4117946b6e54d48615a5e3d4-merged.mount: Deactivated successfully.
Jan 31 07:44:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 134 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:18.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:19 compute-0 podman[281278]: 2026-01-31 07:44:19.408539951 +0000 UTC m=+1.569584629 container remove cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_villani, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:44:19 compute-0 systemd[1]: libpod-conmon-cbb031fae632b35ae1319572d31251d1987fcbb461c52dbe65b62c610b44a4d3.scope: Deactivated successfully.
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.473 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 a2a97039-8813-4ebf-9ce0-488982bece16_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.829s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.552 247708 DEBUG nova.storage.rbd_utils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] resizing rbd image a2a97039-8813-4ebf-9ce0-488982bece16_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:44:19 compute-0 podman[281353]: 2026-01-31 07:44:19.597889748 +0000 UTC m=+0.049468763 container create 99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:44:19 compute-0 systemd[1]: Started libpod-conmon-99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c.scope.
Jan 31 07:44:19 compute-0 podman[281353]: 2026-01-31 07:44:19.570712962 +0000 UTC m=+0.022291997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:44:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811542c71563743b6a6710b340c803d2958aafe1b52437f4480b02bf5d2694ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811542c71563743b6a6710b340c803d2958aafe1b52437f4480b02bf5d2694ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811542c71563743b6a6710b340c803d2958aafe1b52437f4480b02bf5d2694ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/811542c71563743b6a6710b340c803d2958aafe1b52437f4480b02bf5d2694ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:19 compute-0 podman[281353]: 2026-01-31 07:44:19.70664666 +0000 UTC m=+0.158225695 container init 99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:44:19 compute-0 podman[281353]: 2026-01-31 07:44:19.713908519 +0000 UTC m=+0.165487524 container start 99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:44:19 compute-0 podman[281353]: 2026-01-31 07:44:19.720647763 +0000 UTC m=+0.172226778 container attach 99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.720 247708 DEBUG nova.network.neutron [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Successfully updated port: a85d82a5-a910-4674-9460-0efb5cc7e0c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.728 247708 DEBUG nova.objects.instance [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'migration_context' on Instance uuid a2a97039-8813-4ebf-9ce0-488982bece16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.793 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.794 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquired lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.794 247708 DEBUG nova.network.neutron [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.815 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.816 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Ensure instance console log exists: /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.817 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.818 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.818 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.842 247708 DEBUG nova.compute.manager [req-36013658-ab1b-447f-895e-b96e0d4157cf req-0037c636-3dbd-432c-8d69-12c1972ef830 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-changed-a85d82a5-a910-4674-9460-0efb5cc7e0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.843 247708 DEBUG nova.compute.manager [req-36013658-ab1b-447f-895e-b96e0d4157cf req-0037c636-3dbd-432c-8d69-12c1972ef830 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Refreshing instance network info cache due to event network-changed-a85d82a5-a910-4674-9460-0efb5cc7e0c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:44:19 compute-0 nova_compute[247704]: 2026-01-31 07:44:19.843 247708 DEBUG oslo_concurrency.lockutils [req-36013658-ab1b-447f-895e-b96e0d4157cf req-0037c636-3dbd-432c-8d69-12c1972ef830 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:44:20
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log']
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:44:20 compute-0 nova_compute[247704]: 2026-01-31 07:44:20.126 247708 DEBUG nova.network.neutron [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:44:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:20.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:44:20 compute-0 ceph-mon[74496]: pgmap v1365: 305 pgs: 305 active+clean; 134 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 07:44:20 compute-0 condescending_austin[281392]: {
Jan 31 07:44:20 compute-0 condescending_austin[281392]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:44:20 compute-0 condescending_austin[281392]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:44:20 compute-0 condescending_austin[281392]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:44:20 compute-0 condescending_austin[281392]:         "osd_id": 0,
Jan 31 07:44:20 compute-0 condescending_austin[281392]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:44:20 compute-0 condescending_austin[281392]:         "type": "bluestore"
Jan 31 07:44:20 compute-0 condescending_austin[281392]:     }
Jan 31 07:44:20 compute-0 condescending_austin[281392]: }
Jan 31 07:44:20 compute-0 systemd[1]: libpod-99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c.scope: Deactivated successfully.
Jan 31 07:44:20 compute-0 podman[281353]: 2026-01-31 07:44:20.493414288 +0000 UTC m=+0.944993303 container died 99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-811542c71563743b6a6710b340c803d2958aafe1b52437f4480b02bf5d2694ca-merged.mount: Deactivated successfully.
Jan 31 07:44:20 compute-0 podman[281353]: 2026-01-31 07:44:20.544471939 +0000 UTC m=+0.996050954 container remove 99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 31 07:44:20 compute-0 systemd[1]: libpod-conmon-99a8f3d4991d993f863ad322017a7f3d1b0ca05b61b7ac52391996e5eaa4b52c.scope: Deactivated successfully.
Jan 31 07:44:20 compute-0 nova_compute[247704]: 2026-01-31 07:44:20.564 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:20 compute-0 sudo[281188]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:44:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:44:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 862ed368-5bec-4c01-b6ef-effb0c8d01f5 does not exist
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7c621ecf-d402-48de-a46c-10613d0fd902 does not exist
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 11303a78-7623-4b6a-906a-82b979ea433d does not exist
Jan 31 07:44:20 compute-0 sudo[281445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:20 compute-0 sudo[281445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:20 compute-0 sudo[281445]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:20 compute-0 sudo[281470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:44:20 compute-0 sudo[281470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:20 compute-0 sudo[281470]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 166 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 189 op/s
Jan 31 07:44:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:20.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:44:21 compute-0 ceph-mon[74496]: pgmap v1366: 305 pgs: 305 active+clean; 166 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 189 op/s
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.700 247708 DEBUG nova.network.neutron [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updating instance_info_cache with network_info: [{"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.754 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Releasing lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.754 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Instance network_info: |[{"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.755 247708 DEBUG oslo_concurrency.lockutils [req-36013658-ab1b-447f-895e-b96e0d4157cf req-0037c636-3dbd-432c-8d69-12c1972ef830 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.755 247708 DEBUG nova.network.neutron [req-36013658-ab1b-447f-895e-b96e0d4157cf req-0037c636-3dbd-432c-8d69-12c1972ef830 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Refreshing network info cache for port a85d82a5-a910-4674-9460-0efb5cc7e0c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.758 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Start _get_guest_xml network_info=[{"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.763 247708 WARNING nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.770 247708 DEBUG nova.virt.libvirt.host [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.771 247708 DEBUG nova.virt.libvirt.host [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.777 247708 DEBUG nova.virt.libvirt.host [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.777 247708 DEBUG nova.virt.libvirt.host [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.779 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.779 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.779 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.780 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.780 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.780 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.780 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.780 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.781 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.781 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.781 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.781 247708 DEBUG nova.virt.hardware [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.784 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:21 compute-0 nova_compute[247704]: 2026-01-31 07:44:21.888 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:44:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053522155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.242 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.277 247708 DEBUG nova.storage.rbd_utils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image a2a97039-8813-4ebf-9ce0-488982bece16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.282 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:22.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:44:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050046995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2053522155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.741 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.744 247708 DEBUG nova.virt.libvirt.vif [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-77192972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-77192972',id=53,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/jDytd8Ym5FxcVX16m0xs7mZjnpulUTkIvV8si73F9lzKYe980w/3RbovGTB1QQOm/Ss45P0fTDRJRtI1toiRP5c4zSltvuzCoq9BdQDxvme5rWNAqRGyanoC79C91qw==',key_name='tempest-keypair-1249601107',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9ddf930129cf4e0395f8c5e70fd9eda8',ramdisk_id='',reservation_id='r-t5uat7rb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4865905ed4e4262a2242d3f323d4314',uuid=a2a97039-8813-4ebf-9ce0-488982bece16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.744 247708 DEBUG nova.network.os_vif_util [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converting VIF {"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.745 247708 DEBUG nova.network.os_vif_util [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:98:a0,bridge_name='br-int',has_traffic_filtering=True,id=a85d82a5-a910-4674-9460-0efb5cc7e0c4,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85d82a5-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.747 247708 DEBUG nova.objects.instance [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'pci_devices' on Instance uuid a2a97039-8813-4ebf-9ce0-488982bece16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:44:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 175 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.787 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <uuid>a2a97039-8813-4ebf-9ce0-488982bece16</uuid>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <name>instance-00000035</name>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-77192972</nova:name>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:44:21</nova:creationTime>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <nova:user uuid="b4865905ed4e4262a2242d3f323d4314">tempest-UpdateMultiattachVolumeNegativeTest-600318888-project-member</nova:user>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <nova:project uuid="9ddf930129cf4e0395f8c5e70fd9eda8">tempest-UpdateMultiattachVolumeNegativeTest-600318888</nova:project>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <nova:port uuid="a85d82a5-a910-4674-9460-0efb5cc7e0c4">
Jan 31 07:44:22 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <system>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <entry name="serial">a2a97039-8813-4ebf-9ce0-488982bece16</entry>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <entry name="uuid">a2a97039-8813-4ebf-9ce0-488982bece16</entry>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </system>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <os>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   </os>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <features>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   </features>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/a2a97039-8813-4ebf-9ce0-488982bece16_disk">
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       </source>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/a2a97039-8813-4ebf-9ce0-488982bece16_disk.config">
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       </source>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:44:22 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:a5:98:a0"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <target dev="tapa85d82a5-a9"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16/console.log" append="off"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <video>
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </video>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:44:22 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:44:22 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:44:22 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:44:22 compute-0 nova_compute[247704]: </domain>
Jan 31 07:44:22 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.789 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Preparing to wait for external event network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.789 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.789 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.789 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.790 247708 DEBUG nova.virt.libvirt.vif [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-77192972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-77192972',id=53,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/jDytd8Ym5FxcVX16m0xs7mZjnpulUTkIvV8si73F9lzKYe980w/3RbovGTB1QQOm/Ss45P0fTDRJRtI1toiRP5c4zSltvuzCoq9BdQDxvme5rWNAqRGyanoC79C91qw==',key_name='tempest-keypair-1249601107',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9ddf930129cf4e0395f8c5e70fd9eda8',ramdisk_id='',reservation_id='r-t5uat7rb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4865905ed4e4262a2242d3f323d4314',uuid=a2a97039-8813-4ebf-9ce0-488982bece16,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.791 247708 DEBUG nova.network.os_vif_util [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converting VIF {"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.792 247708 DEBUG nova.network.os_vif_util [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:98:a0,bridge_name='br-int',has_traffic_filtering=True,id=a85d82a5-a910-4674-9460-0efb5cc7e0c4,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85d82a5-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.792 247708 DEBUG os_vif [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:98:a0,bridge_name='br-int',has_traffic_filtering=True,id=a85d82a5-a910-4674-9460-0efb5cc7e0c4,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85d82a5-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.793 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.793 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.794 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.798 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.799 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa85d82a5-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.800 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa85d82a5-a9, col_values=(('external_ids', {'iface-id': 'a85d82a5-a910-4674-9460-0efb5cc7e0c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:98:a0', 'vm-uuid': 'a2a97039-8813-4ebf-9ce0-488982bece16'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.802 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:22 compute-0 NetworkManager[49108]: <info>  [1769845462.8050] manager: (tapa85d82a5-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.807 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.811 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.813 247708 INFO os_vif [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:98:a0,bridge_name='br-int',has_traffic_filtering=True,id=a85d82a5-a910-4674-9460-0efb5cc7e0c4,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85d82a5-a9')
Jan 31 07:44:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:22.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:22 compute-0 podman[281559]: 2026-01-31 07:44:22.892212331 +0000 UTC m=+0.061144658 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.982 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.983 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.983 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No VIF found with MAC fa:16:3e:a5:98:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:44:22 compute-0 nova_compute[247704]: 2026-01-31 07:44:22.984 247708 INFO nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Using config drive
Jan 31 07:44:23 compute-0 nova_compute[247704]: 2026-01-31 07:44:23.017 247708 DEBUG nova.storage.rbd_utils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image a2a97039-8813-4ebf-9ce0-488982bece16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2050046995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:23 compute-0 ceph-mon[74496]: pgmap v1367: 305 pgs: 305 active+clean; 175 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.304 247708 INFO nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Creating config drive at /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16/disk.config
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.309 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprdn9smfw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.337 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "968b433b-941d-4472-af01-c19f6ff6377b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.338 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:24.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.409 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.436 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprdn9smfw" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.505 247708 DEBUG nova.storage.rbd_utils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] rbd image a2a97039-8813-4ebf-9ce0-488982bece16_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.512 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16/disk.config a2a97039-8813-4ebf-9ce0-488982bece16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.637 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.639 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.652 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.653 247708 INFO nova.compute.claims [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.682 247708 DEBUG oslo_concurrency.processutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16/disk.config a2a97039-8813-4ebf-9ce0-488982bece16_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.683 247708 INFO nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Deleting local config drive /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16/disk.config because it was imported into RBD.
Jan 31 07:44:24 compute-0 kernel: tapa85d82a5-a9: entered promiscuous mode
Jan 31 07:44:24 compute-0 ovn_controller[149457]: 2026-01-31T07:44:24Z|00169|binding|INFO|Claiming lport a85d82a5-a910-4674-9460-0efb5cc7e0c4 for this chassis.
Jan 31 07:44:24 compute-0 ovn_controller[149457]: 2026-01-31T07:44:24Z|00170|binding|INFO|a85d82a5-a910-4674-9460-0efb5cc7e0c4: Claiming fa:16:3e:a5:98:a0 10.100.0.4
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.742 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:24 compute-0 NetworkManager[49108]: <info>  [1769845464.7440] manager: (tapa85d82a5-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.746 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.753 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 175 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 196 op/s
Jan 31 07:44:24 compute-0 systemd-udevd[281648]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:44:24 compute-0 systemd-machined[214448]: New machine qemu-24-instance-00000035.
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.780 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:24 compute-0 ovn_controller[149457]: 2026-01-31T07:44:24Z|00171|binding|INFO|Setting lport a85d82a5-a910-4674-9460-0efb5cc7e0c4 ovn-installed in OVS
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.788 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:24 compute-0 NetworkManager[49108]: <info>  [1769845464.7909] device (tapa85d82a5-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:44:24 compute-0 NetworkManager[49108]: <info>  [1769845464.7914] device (tapa85d82a5-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:44:24 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000035.
Jan 31 07:44:24 compute-0 ovn_controller[149457]: 2026-01-31T07:44:24Z|00172|binding|INFO|Setting lport a85d82a5-a910-4674-9460-0efb5cc7e0c4 up in Southbound
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.809 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:98:a0 10.100.0.4'], port_security=['fa:16:3e:a5:98:a0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a2a97039-8813-4ebf-9ce0-488982bece16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9ddf930129cf4e0395f8c5e70fd9eda8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be9fd1f2-df08-4f20-8be4-2f77d359418d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=530610e1-1646-4c1c-9b6d-a046ad77685d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a85d82a5-a910-4674-9460-0efb5cc7e0c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.810 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a85d82a5-a910-4674-9460-0efb5cc7e0c4 in datapath d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 bound to our chassis
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.811 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d52bfdcb-a5f3-4946-8fca-4e9f67091fc3
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.824 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcca060-7086-4275-a182-c5fee4dd2c4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.826 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd52bfdcb-a1 in ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.829 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd52bfdcb-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.829 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c9f076-0e9d-46be-8860-61b28c75611b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.831 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[03d4d65e-7665-40c3-8763-cb418a59364d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.846 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[80d95c1f-bb4a-49a2-b41c-d83090447ce7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:24.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.863 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[883edeb0-ae13-4e68-b36a-3adb86ae31ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.891 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5cdf7b42-e5b8-4310-bc02-7f9bba6f733c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.896 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d05fa29c-f8be-4523-9e14-d8b87f4331e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 NetworkManager[49108]: <info>  [1769845464.8984] manager: (tapd52bfdcb-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Jan 31 07:44:24 compute-0 systemd-udevd[281651]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.933 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[14af33f8-0a7b-404e-8a6b-7d44f1ba41a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.935 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2204f805-e429-44ce-8d1b-bbfe6fc6cff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 NetworkManager[49108]: <info>  [1769845464.9595] device (tapd52bfdcb-a0): carrier: link connected
Jan 31 07:44:24 compute-0 nova_compute[247704]: 2026-01-31 07:44:24.959 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.964 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0d89deef-efc8-4369-9674-37099f100d81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.978 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e2990a32-51dd-489c-9775-96678851e6c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd52bfdcb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:e6:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570757, 'reachable_time': 35118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281683, 'error': None, 'target': 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:24.994 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bd03870c-8317-49c2-823c-ec7661b3d3e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:e6a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570757, 'tstamp': 570757}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281684, 'error': None, 'target': 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.007 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[41eabf35-6ee5-42a7-966b-017ec5d61752]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd52bfdcb-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:e6:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570757, 'reachable_time': 35118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281685, 'error': None, 'target': 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.026 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b3204dec-96fd-4f4c-a37d-dd53a25dcb95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.067 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e72dc87a-01fb-4594-95da-7153934d3205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.068 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd52bfdcb-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.069 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.069 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd52bfdcb-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:25 compute-0 kernel: tapd52bfdcb-a0: entered promiscuous mode
Jan 31 07:44:25 compute-0 NetworkManager[49108]: <info>  [1769845465.0714] manager: (tapd52bfdcb-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.070 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.074 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd52bfdcb-a0, col_values=(('external_ids', {'iface-id': '81353d2a-c386-4c97-aadf-683c4d8daa27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.075 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:25 compute-0 ovn_controller[149457]: 2026-01-31T07:44:25Z|00173|binding|INFO|Releasing lport 81353d2a-c386-4c97-aadf-683c4d8daa27 from this chassis (sb_readonly=0)
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.076 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d52bfdcb-a5f3-4946-8fca-4e9f67091fc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d52bfdcb-a5f3-4946-8fca-4e9f67091fc3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.077 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[980bb61a-d74b-484d-b10a-fe45a5270474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.078 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/d52bfdcb-a5f3-4946-8fca-4e9f67091fc3.pid.haproxy
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID d52bfdcb-a5f3-4946-8fca-4e9f67091fc3
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:44:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:25.079 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'env', 'PROCESS_TAG=haproxy-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d52bfdcb-a5f3-4946-8fca-4e9f67091fc3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.081 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.164 247708 DEBUG nova.network.neutron [req-36013658-ab1b-447f-895e-b96e0d4157cf req-0037c636-3dbd-432c-8d69-12c1972ef830 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updated VIF entry in instance network info cache for port a85d82a5-a910-4674-9460-0efb5cc7e0c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.165 247708 DEBUG nova.network.neutron [req-36013658-ab1b-447f-895e-b96e0d4157cf req-0037c636-3dbd-432c-8d69-12c1972ef830 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updating instance_info_cache with network_info: [{"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.182 247708 DEBUG oslo_concurrency.lockutils [req-36013658-ab1b-447f-895e-b96e0d4157cf req-0037c636-3dbd-432c-8d69-12c1972ef830 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.259 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845465.2593756, a2a97039-8813-4ebf-9ce0-488982bece16 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.260 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] VM Started (Lifecycle Event)
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.316 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.323 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845465.26051, a2a97039-8813-4ebf-9ce0-488982bece16 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.324 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] VM Paused (Lifecycle Event)
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.351 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.355 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.385 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:44:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:44:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3591188333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:25 compute-0 podman[281779]: 2026-01-31 07:44:25.419735967 +0000 UTC m=+0.061214330 container create 5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.420 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.428 247708 DEBUG nova.compute.provider_tree [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:44:25 compute-0 systemd[1]: Started libpod-conmon-5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4.scope.
Jan 31 07:44:25 compute-0 podman[281779]: 2026-01-31 07:44:25.384702649 +0000 UTC m=+0.026181042 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:44:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.498 247708 DEBUG nova.scheduler.client.report [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:44:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd181ba3146e654a18c44a427d8ee68ae6f33844d413778e6e6e8e5329afcd84/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:25 compute-0 podman[281779]: 2026-01-31 07:44:25.514177249 +0000 UTC m=+0.155655622 container init 5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:44:25 compute-0 podman[281779]: 2026-01-31 07:44:25.51910062 +0000 UTC m=+0.160578973 container start 5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 07:44:25 compute-0 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[281796]: [NOTICE]   (281800) : New worker (281802) forked
Jan 31 07:44:25 compute-0 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[281796]: [NOTICE]   (281800) : Loading success.
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.685 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.686 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.802 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:44:25 compute-0 nova_compute[247704]: 2026-01-31 07:44:25.803 247708 DEBUG nova.network.neutron [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:44:25 compute-0 ceph-mon[74496]: pgmap v1368: 305 pgs: 305 active+clean; 175 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 196 op/s
Jan 31 07:44:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3591188333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2346548695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.017 247708 INFO nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:44:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.090 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.286 247708 DEBUG nova.compute.manager [req-f1b480cb-de37-4763-8cde-a4b04a0106a9 req-3078ec8e-773f-44db-8853-71129adf8e54 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.286 247708 DEBUG oslo_concurrency.lockutils [req-f1b480cb-de37-4763-8cde-a4b04a0106a9 req-3078ec8e-773f-44db-8853-71129adf8e54 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.287 247708 DEBUG oslo_concurrency.lockutils [req-f1b480cb-de37-4763-8cde-a4b04a0106a9 req-3078ec8e-773f-44db-8853-71129adf8e54 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.287 247708 DEBUG oslo_concurrency.lockutils [req-f1b480cb-de37-4763-8cde-a4b04a0106a9 req-3078ec8e-773f-44db-8853-71129adf8e54 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.287 247708 DEBUG nova.compute.manager [req-f1b480cb-de37-4763-8cde-a4b04a0106a9 req-3078ec8e-773f-44db-8853-71129adf8e54 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Processing event network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.288 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.292 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845466.2912724, a2a97039-8813-4ebf-9ce0-488982bece16 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.292 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] VM Resumed (Lifecycle Event)
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.293 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.298 247708 INFO nova.virt.libvirt.driver [-] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Instance spawned successfully.
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.299 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.307 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.309 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.309 247708 INFO nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Creating image(s)
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.339 247708 DEBUG nova.storage.rbd_utils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 968b433b-941d-4472-af01-c19f6ff6377b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.369 247708 DEBUG nova.storage.rbd_utils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 968b433b-941d-4472-af01-c19f6ff6377b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:26.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.399 247708 DEBUG nova.storage.rbd_utils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 968b433b-941d-4472-af01-c19f6ff6377b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.403 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.432 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.437 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.438 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.439 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.439 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.439 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.440 247708 DEBUG nova.virt.libvirt.driver [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.445 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.470 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.471 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.472 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.472 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.497 247708 DEBUG nova.storage.rbd_utils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 968b433b-941d-4472-af01-c19f6ff6377b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.502 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 968b433b-941d-4472-af01-c19f6ff6377b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 134 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 226 op/s
Jan 31 07:44:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:26.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:26 compute-0 nova_compute[247704]: 2026-01-31 07:44:26.903 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.056 247708 DEBUG nova.policy [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a6a24b8bc028456fa6de79d3b792e79a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07ac56babd144839be6d08563340e6bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.165 247708 INFO nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Took 9.82 seconds to spawn the instance on the hypervisor.
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.166 247708 DEBUG nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.178 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.309 247708 INFO nova.compute.manager [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Took 11.23 seconds to build instance.
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.455 247708 DEBUG oslo_concurrency.lockutils [None req-e569614c-6cdb-4021-be42-6ab26613c0ae b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.803 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.834 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 968b433b-941d-4472-af01-c19f6ff6377b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:27 compute-0 nova_compute[247704]: 2026-01-31 07:44:27.930 247708 DEBUG nova.storage.rbd_utils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] resizing rbd image 968b433b-941d-4472-af01-c19f6ff6377b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:44:28 compute-0 nova_compute[247704]: 2026-01-31 07:44:28.043 247708 DEBUG nova.objects.instance [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lazy-loading 'migration_context' on Instance uuid 968b433b-941d-4472-af01-c19f6ff6377b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:44:28 compute-0 ceph-mon[74496]: pgmap v1369: 305 pgs: 305 active+clean; 134 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 226 op/s
Jan 31 07:44:28 compute-0 nova_compute[247704]: 2026-01-31 07:44:28.094 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:44:28 compute-0 nova_compute[247704]: 2026-01-31 07:44:28.095 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Ensure instance console log exists: /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:44:28 compute-0 nova_compute[247704]: 2026-01-31 07:44:28.095 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:28 compute-0 nova_compute[247704]: 2026-01-31 07:44:28.096 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:28 compute-0 nova_compute[247704]: 2026-01-31 07:44:28.096 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:28 compute-0 nova_compute[247704]: 2026-01-31 07:44:28.251 247708 DEBUG nova.network.neutron [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Successfully created port: 9e0062f0-fd6d-4587-89d5-e10017aa4e88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:44:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:28.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 145 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 248 op/s
Jan 31 07:44:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:28.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:29 compute-0 nova_compute[247704]: 2026-01-31 07:44:29.723 247708 DEBUG nova.compute.manager [req-5c5ce085-bc57-4241-9c51-ff6413b70640 req-3b19f251-1fd6-4a4c-8f2e-a01eaeba2c5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:29 compute-0 nova_compute[247704]: 2026-01-31 07:44:29.725 247708 DEBUG oslo_concurrency.lockutils [req-5c5ce085-bc57-4241-9c51-ff6413b70640 req-3b19f251-1fd6-4a4c-8f2e-a01eaeba2c5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:29 compute-0 nova_compute[247704]: 2026-01-31 07:44:29.726 247708 DEBUG oslo_concurrency.lockutils [req-5c5ce085-bc57-4241-9c51-ff6413b70640 req-3b19f251-1fd6-4a4c-8f2e-a01eaeba2c5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:29 compute-0 nova_compute[247704]: 2026-01-31 07:44:29.726 247708 DEBUG oslo_concurrency.lockutils [req-5c5ce085-bc57-4241-9c51-ff6413b70640 req-3b19f251-1fd6-4a4c-8f2e-a01eaeba2c5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:29 compute-0 nova_compute[247704]: 2026-01-31 07:44:29.727 247708 DEBUG nova.compute.manager [req-5c5ce085-bc57-4241-9c51-ff6413b70640 req-3b19f251-1fd6-4a4c-8f2e-a01eaeba2c5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] No waiting events found dispatching network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:44:29 compute-0 nova_compute[247704]: 2026-01-31 07:44:29.727 247708 WARNING nova.compute.manager [req-5c5ce085-bc57-4241-9c51-ff6413b70640 req-3b19f251-1fd6-4a4c-8f2e-a01eaeba2c5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received unexpected event network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 for instance with vm_state active and task_state None.
Jan 31 07:44:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:30 compute-0 ceph-mon[74496]: pgmap v1370: 305 pgs: 305 active+clean; 145 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 248 op/s
Jan 31 07:44:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 192 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 304 op/s
Jan 31 07:44:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:30.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:31 compute-0 ceph-mon[74496]: pgmap v1371: 305 pgs: 305 active+clean; 192 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.9 MiB/s wr, 304 op/s
Jan 31 07:44:31 compute-0 nova_compute[247704]: 2026-01-31 07:44:31.905 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:32 compute-0 sudo[281980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:32 compute-0 sudo[281980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:32 compute-0 sudo[281980]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:32 compute-0 sudo[282005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:32 compute-0 sudo[282005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:32 compute-0 sudo[282005]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:32.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 200 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 161 op/s
Jan 31 07:44:32 compute-0 nova_compute[247704]: 2026-01-31 07:44:32.805 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:32.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:33 compute-0 ceph-mon[74496]: pgmap v1372: 305 pgs: 305 active+clean; 200 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 161 op/s
Jan 31 07:44:34 compute-0 nova_compute[247704]: 2026-01-31 07:44:34.014 247708 DEBUG nova.network.neutron [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Successfully updated port: 9e0062f0-fd6d-4587-89d5-e10017aa4e88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:44:34 compute-0 nova_compute[247704]: 2026-01-31 07:44:34.185 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:44:34 compute-0 nova_compute[247704]: 2026-01-31 07:44:34.186 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquired lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:44:34 compute-0 nova_compute[247704]: 2026-01-31 07:44:34.186 247708 DEBUG nova.network.neutron [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:44:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:44:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6931 writes, 30K keys, 6930 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6931 writes, 6930 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1711 writes, 7494 keys, 1711 commit groups, 1.0 writes per commit group, ingest: 11.01 MB, 0.02 MB/s
                                           Interval WAL: 1711 writes, 1711 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     82.1      0.45              0.09        17    0.027       0      0       0.0       0.0
                                             L6      1/0    8.26 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7     71.8     59.3      2.32              0.34        16    0.145     80K   8935       0.0       0.0
                                            Sum      1/0    8.26 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     60.0     63.0      2.78              0.43        33    0.084     80K   8935       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     62.1     63.5      0.77              0.13         8    0.096     23K   2595       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     71.8     59.3      2.32              0.34        16    0.145     80K   8935       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     82.8      0.45              0.09        16    0.028       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.036, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.16 GB read, 0.07 MB/s read, 2.8 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 17.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000119 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1019,16.72 MB,5.49885%) FilterBlock(34,230.48 KB,0.0740403%) IndexBlock(34,420.86 KB,0.135196%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:44:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:34 compute-0 nova_compute[247704]: 2026-01-31 07:44:34.505 247708 DEBUG nova.compute.manager [req-48ba53c1-6346-4abd-87c3-08ffa9665ad3 req-f2f92361-1315-4b89-ae24-2e030cac931c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:34 compute-0 nova_compute[247704]: 2026-01-31 07:44:34.506 247708 DEBUG nova.compute.manager [req-48ba53c1-6346-4abd-87c3-08ffa9665ad3 req-f2f92361-1315-4b89-ae24-2e030cac931c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing instance network info cache due to event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:44:34 compute-0 nova_compute[247704]: 2026-01-31 07:44:34.507 247708 DEBUG oslo_concurrency.lockutils [req-48ba53c1-6346-4abd-87c3-08ffa9665ad3 req-f2f92361-1315-4b89-ae24-2e030cac931c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 200 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 143 op/s
Jan 31 07:44:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:34.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003950779650567653 of space, bias 1.0, pg target 1.185233895170296 quantized to 32 (current 32)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:44:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 07:44:35 compute-0 nova_compute[247704]: 2026-01-31 07:44:35.550 247708 DEBUG nova.network.neutron [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:44:35 compute-0 ceph-mon[74496]: pgmap v1373: 305 pgs: 305 active+clean; 200 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 143 op/s
Jan 31 07:44:35 compute-0 podman[282032]: 2026-01-31 07:44:35.936967851 +0000 UTC m=+0.113927511 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:44:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 213 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Jan 31 07:44:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 31 07:44:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:36.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 31 07:44:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 31 07:44:36 compute-0 nova_compute[247704]: 2026-01-31 07:44:36.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:37 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 07:44:37 compute-0 NetworkManager[49108]: <info>  [1769845477.4242] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 31 07:44:37 compute-0 NetworkManager[49108]: <info>  [1769845477.4248] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 31 07:44:37 compute-0 nova_compute[247704]: 2026-01-31 07:44:37.422 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:37 compute-0 nova_compute[247704]: 2026-01-31 07:44:37.480 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:37 compute-0 ovn_controller[149457]: 2026-01-31T07:44:37Z|00174|binding|INFO|Releasing lport 81353d2a-c386-4c97-aadf-683c4d8daa27 from this chassis (sb_readonly=0)
Jan 31 07:44:37 compute-0 nova_compute[247704]: 2026-01-31 07:44:37.505 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:37 compute-0 nova_compute[247704]: 2026-01-31 07:44:37.807 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:37 compute-0 ceph-mon[74496]: pgmap v1374: 305 pgs: 305 active+clean; 213 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Jan 31 07:44:37 compute-0 ceph-mon[74496]: osdmap e204: 3 total, 3 up, 3 in
Jan 31 07:44:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 31 07:44:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 31 07:44:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 31 07:44:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:38.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 238 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 121 op/s
Jan 31 07:44:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:38.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.103 247708 DEBUG nova.compute.manager [req-702e9fc9-2830-4927-a65a-c0748fab70fa req-07d582a5-7f45-4e52-b68a-cd7056562367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-changed-a85d82a5-a910-4674-9460-0efb5cc7e0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.103 247708 DEBUG nova.compute.manager [req-702e9fc9-2830-4927-a65a-c0748fab70fa req-07d582a5-7f45-4e52-b68a-cd7056562367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Refreshing instance network info cache due to event network-changed-a85d82a5-a910-4674-9460-0efb5cc7e0c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.103 247708 DEBUG oslo_concurrency.lockutils [req-702e9fc9-2830-4927-a65a-c0748fab70fa req-07d582a5-7f45-4e52-b68a-cd7056562367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.103 247708 DEBUG oslo_concurrency.lockutils [req-702e9fc9-2830-4927-a65a-c0748fab70fa req-07d582a5-7f45-4e52-b68a-cd7056562367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.103 247708 DEBUG nova.network.neutron [req-702e9fc9-2830-4927-a65a-c0748fab70fa req-07d582a5-7f45-4e52-b68a-cd7056562367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Refreshing network info cache for port a85d82a5-a910-4674-9460-0efb5cc7e0c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.567 247708 DEBUG nova.network.neutron [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updating instance_info_cache with network_info: [{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:44:39 compute-0 ovn_controller[149457]: 2026-01-31T07:44:39Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:98:a0 10.100.0.4
Jan 31 07:44:39 compute-0 ovn_controller[149457]: 2026-01-31T07:44:39Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:98:a0 10.100.0.4
Jan 31 07:44:39 compute-0 ceph-mon[74496]: osdmap e205: 3 total, 3 up, 3 in
Jan 31 07:44:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 31 07:44:39 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.682 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Releasing lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.683 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Instance network_info: |[{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.684 247708 DEBUG oslo_concurrency.lockutils [req-48ba53c1-6346-4abd-87c3-08ffa9665ad3 req-f2f92361-1315-4b89-ae24-2e030cac931c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.684 247708 DEBUG nova.network.neutron [req-48ba53c1-6346-4abd-87c3-08ffa9665ad3 req-f2f92361-1315-4b89-ae24-2e030cac931c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.690 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Start _get_guest_xml network_info=[{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.697 247708 WARNING nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.715 247708 DEBUG nova.virt.libvirt.host [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.717 247708 DEBUG nova.virt.libvirt.host [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.725 247708 DEBUG nova.virt.libvirt.host [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.726 247708 DEBUG nova.virt.libvirt.host [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.727 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.728 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.728 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.729 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.729 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.729 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.730 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.730 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.730 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.731 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.731 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.732 247708 DEBUG nova.virt.hardware [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:44:39 compute-0 nova_compute[247704]: 2026-01-31 07:44:39.736 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:44:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609713837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.206 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.237 247708 DEBUG nova.storage.rbd_utils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 968b433b-941d-4472-af01-c19f6ff6377b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.241 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 31 07:44:40 compute-0 ceph-mon[74496]: pgmap v1377: 305 pgs: 305 active+clean; 238 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 121 op/s
Jan 31 07:44:40 compute-0 ceph-mon[74496]: osdmap e206: 3 total, 3 up, 3 in
Jan 31 07:44:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2609713837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:44:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4087877508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.713 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.715 247708 DEBUG nova.virt.libvirt.vif [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1699557048',display_name='tempest-FloatingIPsAssociationTestJSON-server-1699557048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1699557048',id=54,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07ac56babd144839be6d08563340e6bd',ramdisk_id='',reservation_id='r-edhbitw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-656325591',owner_user_n
ame='tempest-FloatingIPsAssociationTestJSON-656325591-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:26Z,user_data=None,user_id='a6a24b8bc028456fa6de79d3b792e79a',uuid=968b433b-941d-4472-af01-c19f6ff6377b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.715 247708 DEBUG nova.network.os_vif_util [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converting VIF {"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.716 247708 DEBUG nova.network.os_vif_util [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:ef:fc,bridge_name='br-int',has_traffic_filtering=True,id=9e0062f0-fd6d-4587-89d5-e10017aa4e88,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e0062f0-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.717 247708 DEBUG nova.objects.instance [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 968b433b-941d-4472-af01-c19f6ff6377b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.742 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <uuid>968b433b-941d-4472-af01-c19f6ff6377b</uuid>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <name>instance-00000036</name>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1699557048</nova:name>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:44:39</nova:creationTime>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <nova:user uuid="a6a24b8bc028456fa6de79d3b792e79a">tempest-FloatingIPsAssociationTestJSON-656325591-project-member</nova:user>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <nova:project uuid="07ac56babd144839be6d08563340e6bd">tempest-FloatingIPsAssociationTestJSON-656325591</nova:project>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <nova:port uuid="9e0062f0-fd6d-4587-89d5-e10017aa4e88">
Jan 31 07:44:40 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <system>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <entry name="serial">968b433b-941d-4472-af01-c19f6ff6377b</entry>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <entry name="uuid">968b433b-941d-4472-af01-c19f6ff6377b</entry>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </system>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <os>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   </os>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <features>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   </features>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/968b433b-941d-4472-af01-c19f6ff6377b_disk">
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       </source>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/968b433b-941d-4472-af01-c19f6ff6377b_disk.config">
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       </source>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:44:40 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:65:ef:fc"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <target dev="tap9e0062f0-fd"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b/console.log" append="off"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <video>
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </video>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:44:40 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:44:40 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:44:40 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:44:40 compute-0 nova_compute[247704]: </domain>
Jan 31 07:44:40 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.743 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Preparing to wait for external event network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.743 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "968b433b-941d-4472-af01-c19f6ff6377b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.743 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.743 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.744 247708 DEBUG nova.virt.libvirt.vif [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1699557048',display_name='tempest-FloatingIPsAssociationTestJSON-server-1699557048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1699557048',id=54,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07ac56babd144839be6d08563340e6bd',ramdisk_id='',reservation_id='r-edhbitw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-656325591',owner_user_name='tempest-FloatingIPsAssociationTestJSON-656325591-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:26Z,user_data=None,user_id='a6a24b8bc028456fa6de79d3b792e79a',uuid=968b433b-941d-4472-af01-c19f6ff6377b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.744 247708 DEBUG nova.network.os_vif_util [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converting VIF {"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.745 247708 DEBUG nova.network.os_vif_util [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:ef:fc,bridge_name='br-int',has_traffic_filtering=True,id=9e0062f0-fd6d-4587-89d5-e10017aa4e88,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e0062f0-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.745 247708 DEBUG os_vif [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:ef:fc,bridge_name='br-int',has_traffic_filtering=True,id=9e0062f0-fd6d-4587-89d5-e10017aa4e88,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e0062f0-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.746 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.746 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.748 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e0062f0-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.748 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e0062f0-fd, col_values=(('external_ids', {'iface-id': '9e0062f0-fd6d-4587-89d5-e10017aa4e88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:ef:fc', 'vm-uuid': '968b433b-941d-4472-af01-c19f6ff6377b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.750 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:40 compute-0 NetworkManager[49108]: <info>  [1769845480.7506] manager: (tap9e0062f0-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.752 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.756 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.757 247708 INFO os_vif [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:ef:fc,bridge_name='br-int',has_traffic_filtering=True,id=9e0062f0-fd6d-4587-89d5-e10017aa4e88,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e0062f0-fd')
Jan 31 07:44:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 31 07:44:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 31 07:44:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 313 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 17 MiB/s wr, 346 op/s
Jan 31 07:44:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:40.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.888 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.888 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.889 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] No VIF found with MAC fa:16:3e:65:ef:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:44:40 compute-0 nova_compute[247704]: 2026-01-31 07:44:40.890 247708 INFO nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Using config drive
Jan 31 07:44:41 compute-0 nova_compute[247704]: 2026-01-31 07:44:41.023 247708 DEBUG nova.storage.rbd_utils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 968b433b-941d-4472-af01-c19f6ff6377b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4087877508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1775213986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:41 compute-0 ceph-mon[74496]: osdmap e207: 3 total, 3 up, 3 in
Jan 31 07:44:41 compute-0 ceph-mon[74496]: pgmap v1380: 305 pgs: 305 active+clean; 313 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 17 MiB/s wr, 346 op/s
Jan 31 07:44:41 compute-0 nova_compute[247704]: 2026-01-31 07:44:41.938 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 301 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 12 MiB/s wr, 301 op/s
Jan 31 07:44:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:42.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:43 compute-0 ceph-mon[74496]: pgmap v1381: 305 pgs: 305 active+clean; 301 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 12 MiB/s wr, 301 op/s
Jan 31 07:44:44 compute-0 nova_compute[247704]: 2026-01-31 07:44:44.075 247708 INFO nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Creating config drive at /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b/disk.config
Jan 31 07:44:44 compute-0 nova_compute[247704]: 2026-01-31 07:44:44.079 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfns401wn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:44 compute-0 nova_compute[247704]: 2026-01-31 07:44:44.200 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfns401wn" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:44 compute-0 nova_compute[247704]: 2026-01-31 07:44:44.231 247708 DEBUG nova.storage.rbd_utils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] rbd image 968b433b-941d-4472-af01-c19f6ff6377b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:44:44 compute-0 nova_compute[247704]: 2026-01-31 07:44:44.235 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b/disk.config 968b433b-941d-4472-af01-c19f6ff6377b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:44:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:44.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 301 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 8.5 MiB/s wr, 220 op/s
Jan 31 07:44:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:44.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:44:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1878130030' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:44:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:44:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1878130030' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:44:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1878130030' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:44:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1878130030' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.193 247708 DEBUG oslo_concurrency.processutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b/disk.config 968b433b-941d-4472-af01-c19f6ff6377b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.958s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.194 247708 INFO nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Deleting local config drive /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b/disk.config because it was imported into RBD.
Jan 31 07:44:45 compute-0 NetworkManager[49108]: <info>  [1769845485.2331] manager: (tap9e0062f0-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Jan 31 07:44:45 compute-0 kernel: tap9e0062f0-fd: entered promiscuous mode
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.236 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:45 compute-0 ovn_controller[149457]: 2026-01-31T07:44:45Z|00175|binding|INFO|Claiming lport 9e0062f0-fd6d-4587-89d5-e10017aa4e88 for this chassis.
Jan 31 07:44:45 compute-0 ovn_controller[149457]: 2026-01-31T07:44:45Z|00176|binding|INFO|9e0062f0-fd6d-4587-89d5-e10017aa4e88: Claiming fa:16:3e:65:ef:fc 10.100.0.9
Jan 31 07:44:45 compute-0 ovn_controller[149457]: 2026-01-31T07:44:45Z|00177|binding|INFO|Setting lport 9e0062f0-fd6d-4587-89d5-e10017aa4e88 ovn-installed in OVS
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.243 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.245 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:45 compute-0 systemd-udevd[282199]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:44:45 compute-0 NetworkManager[49108]: <info>  [1769845485.2771] device (tap9e0062f0-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:44:45 compute-0 NetworkManager[49108]: <info>  [1769845485.2782] device (tap9e0062f0-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:44:45 compute-0 systemd-machined[214448]: New machine qemu-25-instance-00000036.
Jan 31 07:44:45 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000036.
Jan 31 07:44:45 compute-0 ovn_controller[149457]: 2026-01-31T07:44:45Z|00178|binding|INFO|Setting lport 9e0062f0-fd6d-4587-89d5-e10017aa4e88 up in Southbound
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.367 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:ef:fc 10.100.0.9'], port_security=['fa:16:3e:65:ef:fc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '968b433b-941d-4472-af01-c19f6ff6377b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7077deb2-06a0-4e93-8714-7555d93557cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07ac56babd144839be6d08563340e6bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2da1fc29-dd97-45b5-a69f-a954c8d9f902', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2803db5-d904-4d93-a43e-b71357b850fe, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=9e0062f0-fd6d-4587-89d5-e10017aa4e88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.368 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 9e0062f0-fd6d-4587-89d5-e10017aa4e88 in datapath 7077deb2-06a0-4e93-8714-7555d93557cf bound to our chassis
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.371 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7077deb2-06a0-4e93-8714-7555d93557cf
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.382 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[765c2be6-0770-4d2f-9aae-2f16e9f3213a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.384 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7077deb2-01 in ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.387 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7077deb2-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.388 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6cfd37-afa4-4a17-b53a-ef9fa8725042]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.388 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[081dbba6-c064-4ff4-85fe-3f0cfc939a7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.400 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[cb627414-52bb-486f-bd2f-a43cfdb4d7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.425 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8117ebaf-f05d-41a2-af14-cfe255d91ba8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.449 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c14ebbe3-529a-4e28-9bcd-3e773f9cc410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 NetworkManager[49108]: <info>  [1769845485.4546] manager: (tap7077deb2-00): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Jan 31 07:44:45 compute-0 systemd-udevd[282203]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.453 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[22f0cd9c-e416-4ca2-8699-4dca6b2d5af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.478 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6572af27-b622-4885-8d08-22b7dfc8d658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.481 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4e03eb67-d591-42f0-98b2-ca202147b10f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 NetworkManager[49108]: <info>  [1769845485.4967] device (tap7077deb2-00): carrier: link connected
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.501 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d70c3005-0af0-4778-9b10-3c6ba0634ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.517 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[06f9194c-cac4-4844-b4b2-2222b18eb43b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7077deb2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:9c:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572811, 'reachable_time': 32952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282235, 'error': None, 'target': 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.531 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[54aa0e3b-bb14-42d6-a69c-118b42d02e43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:9c01'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572811, 'tstamp': 572811}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282236, 'error': None, 'target': 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.546 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[10464862-4d72-4af5-af66-998b649c4c20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7077deb2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:9c:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572811, 'reachable_time': 32952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282237, 'error': None, 'target': 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.574 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[82ebd795-70c2-42d1-841f-9a69c9010914]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.622 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1ece2f71-ea67-46ed-9aa0-4dc4be1ce173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.623 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7077deb2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.624 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.624 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7077deb2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.626 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:45 compute-0 NetworkManager[49108]: <info>  [1769845485.6266] manager: (tap7077deb2-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 31 07:44:45 compute-0 kernel: tap7077deb2-00: entered promiscuous mode
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.629 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7077deb2-00, col_values=(('external_ids', {'iface-id': '5e30ad3f-073b-4a38-b984-d0517ecbf784'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.630 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:45 compute-0 ovn_controller[149457]: 2026-01-31T07:44:45Z|00179|binding|INFO|Releasing lport 5e30ad3f-073b-4a38-b984-d0517ecbf784 from this chassis (sb_readonly=0)
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.631 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7077deb2-06a0-4e93-8714-7555d93557cf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7077deb2-06a0-4e93-8714-7555d93557cf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.633 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[588f5bbf-f52f-4dec-9fd4-643758761809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.633 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-7077deb2-06a0-4e93-8714-7555d93557cf
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/7077deb2-06a0-4e93-8714-7555d93557cf.pid.haproxy
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 7077deb2-06a0-4e93-8714-7555d93557cf
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:44:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:44:45.634 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'env', 'PROCESS_TAG=haproxy-7077deb2-06a0-4e93-8714-7555d93557cf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7077deb2-06a0-4e93-8714-7555d93557cf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.635 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:45 compute-0 nova_compute[247704]: 2026-01-31 07:44:45.750 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:45 compute-0 podman[282269]: 2026-01-31 07:44:45.974404944 +0000 UTC m=+0.071267146 container create 3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:44:46 compute-0 podman[282269]: 2026-01-31 07:44:45.927130847 +0000 UTC m=+0.023993069 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:44:46 compute-0 systemd[1]: Started libpod-conmon-3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93.scope.
Jan 31 07:44:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 31 07:44:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b9cf126752b95c1792a42110c6b013dcfd9521bae3c58c4ba6f038a0124b6da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:44:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 31 07:44:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 31 07:44:46 compute-0 podman[282269]: 2026-01-31 07:44:46.060466172 +0000 UTC m=+0.157328384 container init 3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:44:46 compute-0 podman[282269]: 2026-01-31 07:44:46.064661325 +0000 UTC m=+0.161523517 container start 3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:44:46 compute-0 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[282285]: [NOTICE]   (282304) : New worker (282307) forked
Jan 31 07:44:46 compute-0 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[282285]: [NOTICE]   (282304) : Loading success.
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.194 247708 DEBUG nova.network.neutron [req-702e9fc9-2830-4927-a65a-c0748fab70fa req-07d582a5-7f45-4e52-b68a-cd7056562367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updated VIF entry in instance network info cache for port a85d82a5-a910-4674-9460-0efb5cc7e0c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.195 247708 DEBUG nova.network.neutron [req-702e9fc9-2830-4927-a65a-c0748fab70fa req-07d582a5-7f45-4e52-b68a-cd7056562367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updating instance_info_cache with network_info: [{"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:44:46 compute-0 ceph-mon[74496]: pgmap v1382: 305 pgs: 305 active+clean; 301 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 8.5 MiB/s wr, 220 op/s
Jan 31 07:44:46 compute-0 ceph-mon[74496]: osdmap e208: 3 total, 3 up, 3 in
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.258 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845486.258019, 968b433b-941d-4472-af01-c19f6ff6377b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.259 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] VM Started (Lifecycle Event)
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.267 247708 DEBUG oslo_concurrency.lockutils [req-702e9fc9-2830-4927-a65a-c0748fab70fa req-07d582a5-7f45-4e52-b68a-cd7056562367 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.319 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.323 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845486.2591166, 968b433b-941d-4472-af01-c19f6ff6377b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.323 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] VM Paused (Lifecycle Event)
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.390 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.393 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:44:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.435 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:44:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 293 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 289 op/s
Jan 31 07:44:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:46.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.925 247708 DEBUG nova.compute.manager [req-1e723337-d315-42df-92dd-7fda47dada11 req-28a359e7-eb66-457c-8b38-25429ba3f810 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.925 247708 DEBUG oslo_concurrency.lockutils [req-1e723337-d315-42df-92dd-7fda47dada11 req-28a359e7-eb66-457c-8b38-25429ba3f810 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "968b433b-941d-4472-af01-c19f6ff6377b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.926 247708 DEBUG oslo_concurrency.lockutils [req-1e723337-d315-42df-92dd-7fda47dada11 req-28a359e7-eb66-457c-8b38-25429ba3f810 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.926 247708 DEBUG oslo_concurrency.lockutils [req-1e723337-d315-42df-92dd-7fda47dada11 req-28a359e7-eb66-457c-8b38-25429ba3f810 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.927 247708 DEBUG nova.compute.manager [req-1e723337-d315-42df-92dd-7fda47dada11 req-28a359e7-eb66-457c-8b38-25429ba3f810 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Processing event network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.928 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.931 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845486.9316337, 968b433b-941d-4472-af01-c19f6ff6377b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.932 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] VM Resumed (Lifecycle Event)
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.934 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.936 247708 INFO nova.virt.libvirt.driver [-] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Instance spawned successfully.
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.937 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:46 compute-0 nova_compute[247704]: 2026-01-31 07:44:46.997 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.001 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.002 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.002 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.003 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.003 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.003 247708 DEBUG nova.virt.libvirt.driver [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.007 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.085 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.220 247708 INFO nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Took 20.91 seconds to spawn the instance on the hypervisor.
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.221 247708 DEBUG nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.576 247708 INFO nova.compute.manager [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Took 23.03 seconds to build instance.
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.603 247708 DEBUG oslo_concurrency.lockutils [None req-2a4686a6-1648-464c-bb1d-16aaefa1e164 a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.659 247708 DEBUG nova.network.neutron [req-48ba53c1-6346-4abd-87c3-08ffa9665ad3 req-f2f92361-1315-4b89-ae24-2e030cac931c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updated VIF entry in instance network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.660 247708 DEBUG nova.network.neutron [req-48ba53c1-6346-4abd-87c3-08ffa9665ad3 req-f2f92361-1315-4b89-ae24-2e030cac931c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updating instance_info_cache with network_info: [{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:44:47 compute-0 nova_compute[247704]: 2026-01-31 07:44:47.685 247708 DEBUG oslo_concurrency.lockutils [req-48ba53c1-6346-4abd-87c3-08ffa9665ad3 req-f2f92361-1315-4b89-ae24-2e030cac931c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:44:48 compute-0 ceph-mon[74496]: pgmap v1384: 305 pgs: 305 active+clean; 293 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 289 op/s
Jan 31 07:44:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:48.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 293 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 430 KiB/s rd, 3.4 MiB/s wr, 133 op/s
Jan 31 07:44:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:48.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:49 compute-0 nova_compute[247704]: 2026-01-31 07:44:49.242 247708 DEBUG nova.compute.manager [req-f5e2bbc3-aaa4-49aa-8fb0-927b8015e6ff req-ca410005-61d5-4a24-9fe9-70a7a9fbc83d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:44:49 compute-0 nova_compute[247704]: 2026-01-31 07:44:49.243 247708 DEBUG oslo_concurrency.lockutils [req-f5e2bbc3-aaa4-49aa-8fb0-927b8015e6ff req-ca410005-61d5-4a24-9fe9-70a7a9fbc83d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "968b433b-941d-4472-af01-c19f6ff6377b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:44:49 compute-0 nova_compute[247704]: 2026-01-31 07:44:49.243 247708 DEBUG oslo_concurrency.lockutils [req-f5e2bbc3-aaa4-49aa-8fb0-927b8015e6ff req-ca410005-61d5-4a24-9fe9-70a7a9fbc83d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:44:49 compute-0 nova_compute[247704]: 2026-01-31 07:44:49.244 247708 DEBUG oslo_concurrency.lockutils [req-f5e2bbc3-aaa4-49aa-8fb0-927b8015e6ff req-ca410005-61d5-4a24-9fe9-70a7a9fbc83d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:44:49 compute-0 nova_compute[247704]: 2026-01-31 07:44:49.244 247708 DEBUG nova.compute.manager [req-f5e2bbc3-aaa4-49aa-8fb0-927b8015e6ff req-ca410005-61d5-4a24-9fe9-70a7a9fbc83d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] No waiting events found dispatching network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:44:49 compute-0 nova_compute[247704]: 2026-01-31 07:44:49.244 247708 WARNING nova.compute.manager [req-f5e2bbc3-aaa4-49aa-8fb0-927b8015e6ff req-ca410005-61d5-4a24-9fe9-70a7a9fbc83d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received unexpected event network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 for instance with vm_state active and task_state None.
Jan 31 07:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:44:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:50.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:50 compute-0 ceph-mon[74496]: pgmap v1385: 305 pgs: 305 active+clean; 293 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 430 KiB/s rd, 3.4 MiB/s wr, 133 op/s
Jan 31 07:44:50 compute-0 nova_compute[247704]: 2026-01-31 07:44:50.756 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 225 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 197 op/s
Jan 31 07:44:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:50.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 31 07:44:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 31 07:44:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 31 07:44:51 compute-0 nova_compute[247704]: 2026-01-31 07:44:51.942 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:52 compute-0 sudo[282345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:52 compute-0 sudo[282345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:52 compute-0 sudo[282345]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:52 compute-0 sudo[282370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:44:52 compute-0 sudo[282370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:44:52 compute-0 sudo[282370]: pam_unix(sudo:session): session closed for user root
Jan 31 07:44:52 compute-0 ceph-mon[74496]: pgmap v1386: 305 pgs: 305 active+clean; 225 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 197 op/s
Jan 31 07:44:52 compute-0 ceph-mon[74496]: osdmap e209: 3 total, 3 up, 3 in
Jan 31 07:44:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:52.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 221 op/s
Jan 31 07:44:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:52.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2855609894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:53 compute-0 podman[282396]: 2026-01-31 07:44:53.881644895 +0000 UTC m=+0.048671583 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:44:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:54.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:54 compute-0 ceph-mon[74496]: pgmap v1388: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 221 op/s
Jan 31 07:44:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 137 op/s
Jan 31 07:44:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:54.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:55 compute-0 nova_compute[247704]: 2026-01-31 07:44:55.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:44:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:44:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:56.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:44:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1208867897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/884181136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:44:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 119 op/s
Jan 31 07:44:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:44:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:56.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:44:56 compute-0 nova_compute[247704]: 2026-01-31 07:44:56.945 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:44:57 compute-0 ceph-mon[74496]: pgmap v1389: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 137 op/s
Jan 31 07:44:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1424588764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:44:57 compute-0 ceph-mon[74496]: pgmap v1390: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 119 op/s
Jan 31 07:44:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:58.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:44:58 compute-0 nova_compute[247704]: 2026-01-31 07:44:58.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:44:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 110 op/s
Jan 31 07:44:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:44:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:44:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:58.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:00.151 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.152 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:00.154 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:45:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:00.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:45:00 compute-0 ceph-mon[74496]: pgmap v1391: 305 pgs: 305 active+clean; 213 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 110 op/s
Jan 31 07:45:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2076917329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.758 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 260 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 96 op/s
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.856 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.857 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.857 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:45:00 compute-0 nova_compute[247704]: 2026-01-31 07:45:00.857 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a2a97039-8813-4ebf-9ce0-488982bece16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:00.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:01 compute-0 nova_compute[247704]: 2026-01-31 07:45:01.992 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1610713272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:02 compute-0 ceph-mon[74496]: pgmap v1392: 305 pgs: 305 active+clean; 260 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 96 op/s
Jan 31 07:45:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 276 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Jan 31 07:45:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:02.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:03 compute-0 ovn_controller[149457]: 2026-01-31T07:45:03Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:ef:fc 10.100.0.9
Jan 31 07:45:03 compute-0 ovn_controller[149457]: 2026-01-31T07:45:03Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:ef:fc 10.100.0.9
Jan 31 07:45:04 compute-0 ceph-mon[74496]: pgmap v1393: 305 pgs: 305 active+clean; 276 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Jan 31 07:45:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/7205415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:04.158 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:04.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 276 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1016 KiB/s rd, 3.5 MiB/s wr, 93 op/s
Jan 31 07:45:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:04.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:05 compute-0 nova_compute[247704]: 2026-01-31 07:45:05.763 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.029 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updating instance_info_cache with network_info: [{"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.317 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.318 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.319 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.319 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.319 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.320 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.320 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.320 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:06 compute-0 ceph-mon[74496]: pgmap v1394: 305 pgs: 305 active+clean; 276 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1016 KiB/s rd, 3.5 MiB/s wr, 93 op/s
Jan 31 07:45:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2026738879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.405 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.406 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.407 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.407 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.408 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:06.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 292 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 31 07:45:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:45:06 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/877251149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.844 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:06.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:06 compute-0 podman[282445]: 2026-01-31 07:45:06.914129891 +0000 UTC m=+0.089634896 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 07:45:06 compute-0 nova_compute[247704]: 2026-01-31 07:45:06.995 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.094 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.094 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.099 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.100 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.309 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.310 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4226MB free_disk=20.860008239746094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.311 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.311 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/877251149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3352389906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.435 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance a2a97039-8813-4ebf-9ce0-488982bece16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.436 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 968b433b-941d-4472-af01-c19f6ff6377b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.436 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.437 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:45:07 compute-0 nova_compute[247704]: 2026-01-31 07:45:07.636 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:45:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064234750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:08 compute-0 nova_compute[247704]: 2026-01-31 07:45:08.091 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:08 compute-0 nova_compute[247704]: 2026-01-31 07:45:08.098 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:45:08 compute-0 nova_compute[247704]: 2026-01-31 07:45:08.170 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:45:08 compute-0 nova_compute[247704]: 2026-01-31 07:45:08.349 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:45:08 compute-0 nova_compute[247704]: 2026-01-31 07:45:08.350 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:08.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:08 compute-0 ceph-mon[74496]: pgmap v1395: 305 pgs: 305 active+clean; 292 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 31 07:45:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4064234750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/528887485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:08 compute-0 nova_compute[247704]: 2026-01-31 07:45:08.592 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:08 compute-0 nova_compute[247704]: 2026-01-31 07:45:08.593 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 300 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 164 op/s
Jan 31 07:45:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:08.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:09 compute-0 nova_compute[247704]: 2026-01-31 07:45:09.257 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:09 compute-0 nova_compute[247704]: 2026-01-31 07:45:09.257 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:09 compute-0 nova_compute[247704]: 2026-01-31 07:45:09.348 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:45:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3176177341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:09 compute-0 ceph-mon[74496]: pgmap v1396: 305 pgs: 305 active+clean; 300 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 164 op/s
Jan 31 07:45:09 compute-0 nova_compute[247704]: 2026-01-31 07:45:09.518 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:09 compute-0 nova_compute[247704]: 2026-01-31 07:45:09.519 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:09 compute-0 nova_compute[247704]: 2026-01-31 07:45:09.546 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:45:09 compute-0 nova_compute[247704]: 2026-01-31 07:45:09.547 247708 INFO nova.compute.claims [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.212 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:10.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:45:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1027205063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.690 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.697 247708 DEBUG nova.compute.provider_tree [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.750 247708 DEBUG nova.scheduler.client.report [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.764 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 339 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.807 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.808 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:45:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1027205063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:10.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.934 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:45:10 compute-0 nova_compute[247704]: 2026-01-31 07:45:10.935 247708 DEBUG nova.network.neutron [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.009 247708 INFO nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:45:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.085 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:45:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:11.154 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:11.155 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:11.156 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.238 247708 DEBUG nova.policy [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4cdbfeb437d54df89a0fb0f6621b8fdc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.354 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.355 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.355 247708 INFO nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Creating image(s)
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.389 247708 DEBUG nova.storage.rbd_utils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.423 247708 DEBUG nova.storage.rbd_utils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.475 247708 DEBUG nova.storage.rbd_utils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.480 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.537 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.538 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.539 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.539 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.571 247708 DEBUG nova.storage.rbd_utils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.577 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:11 compute-0 nova_compute[247704]: 2026-01-31 07:45:11.970 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:12 compute-0 ceph-mon[74496]: pgmap v1397: 305 pgs: 305 active+clean; 339 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Jan 31 07:45:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3572805502' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:12 compute-0 nova_compute[247704]: 2026-01-31 07:45:12.057 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:12 compute-0 nova_compute[247704]: 2026-01-31 07:45:12.101 247708 DEBUG nova.storage.rbd_utils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] resizing rbd image 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:45:12 compute-0 nova_compute[247704]: 2026-01-31 07:45:12.232 247708 DEBUG nova.objects.instance [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'migration_context' on Instance uuid 14852aaa-992b-4b7c-9a9c-dd811ac8617a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:12 compute-0 nova_compute[247704]: 2026-01-31 07:45:12.255 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:45:12 compute-0 nova_compute[247704]: 2026-01-31 07:45:12.256 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Ensure instance console log exists: /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:45:12 compute-0 nova_compute[247704]: 2026-01-31 07:45:12.256 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:12 compute-0 nova_compute[247704]: 2026-01-31 07:45:12.256 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:12 compute-0 nova_compute[247704]: 2026-01-31 07:45:12.257 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:12 compute-0 sudo[282687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:12 compute-0 sudo[282687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:12 compute-0 sudo[282687]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:12.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:12 compute-0 sudo[282712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:12 compute-0 sudo[282712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:12 compute-0 sudo[282712]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 358 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 149 op/s
Jan 31 07:45:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:12.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:14 compute-0 ceph-mon[74496]: pgmap v1398: 305 pgs: 305 active+clean; 358 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.0 MiB/s wr, 149 op/s
Jan 31 07:45:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:14 compute-0 nova_compute[247704]: 2026-01-31 07:45:14.600 247708 DEBUG nova.network.neutron [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Successfully created port: ab718163-b906-4e5f-8ca2-f51416a1da02 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:45:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 358 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 120 op/s
Jan 31 07:45:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:14.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:15 compute-0 nova_compute[247704]: 2026-01-31 07:45:15.766 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:16 compute-0 ceph-mon[74496]: pgmap v1399: 305 pgs: 305 active+clean; 358 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 120 op/s
Jan 31 07:45:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:16.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 418 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.1 MiB/s wr, 243 op/s
Jan 31 07:45:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:16.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:17 compute-0 nova_compute[247704]: 2026-01-31 07:45:17.062 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:18 compute-0 ceph-mon[74496]: pgmap v1400: 305 pgs: 305 active+clean; 418 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.1 MiB/s wr, 243 op/s
Jan 31 07:45:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:18.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 411 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 210 op/s
Jan 31 07:45:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:18.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:45:20
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'images', 'vms']
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:45:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:20.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:20 compute-0 ceph-mon[74496]: pgmap v1401: 305 pgs: 305 active+clean; 411 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 210 op/s
Jan 31 07:45:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3042256607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1545433916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:20 compute-0 nova_compute[247704]: 2026-01-31 07:45:20.768 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Jan 31 07:45:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:20.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:21 compute-0 sudo[282741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:21 compute-0 sudo[282741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:21 compute-0 sudo[282741]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:21 compute-0 sudo[282766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:45:21 compute-0 sudo[282766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:21 compute-0 sudo[282766]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:21 compute-0 sudo[282791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:21 compute-0 sudo[282791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:21 compute-0 sudo[282791]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:21 compute-0 sudo[282816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:45:21 compute-0 sudo[282816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:21 compute-0 nova_compute[247704]: 2026-01-31 07:45:21.579 247708 DEBUG nova.network.neutron [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Successfully updated port: ab718163-b906-4e5f-8ca2-f51416a1da02 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:45:21 compute-0 nova_compute[247704]: 2026-01-31 07:45:21.647 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "refresh_cache-14852aaa-992b-4b7c-9a9c-dd811ac8617a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:21 compute-0 nova_compute[247704]: 2026-01-31 07:45:21.647 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquired lock "refresh_cache-14852aaa-992b-4b7c-9a9c-dd811ac8617a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:21 compute-0 nova_compute[247704]: 2026-01-31 07:45:21.648 247708 DEBUG nova.network.neutron [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:45:21 compute-0 sudo[282816]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:21 compute-0 ceph-mon[74496]: pgmap v1402: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Jan 31 07:45:22 compute-0 nova_compute[247704]: 2026-01-31 07:45:22.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:22 compute-0 nova_compute[247704]: 2026-01-31 07:45:22.182 247708 DEBUG nova.network.neutron [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:45:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:22.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Jan 31 07:45:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:22.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:45:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:45:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.169 247708 DEBUG nova.compute.manager [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received event network-changed-ab718163-b906-4e5f-8ca2-f51416a1da02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.170 247708 DEBUG nova.compute.manager [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Refreshing instance network info cache due to event network-changed-ab718163-b906-4e5f-8ca2-f51416a1da02. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.170 247708 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-14852aaa-992b-4b7c-9a9c-dd811ac8617a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.678 247708 DEBUG nova.network.neutron [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Updating instance_info_cache with network_info: [{"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:45:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:45:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:45:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:45:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.727 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Releasing lock "refresh_cache-14852aaa-992b-4b7c-9a9c-dd811ac8617a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.728 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Instance network_info: |[{"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.728 247708 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-14852aaa-992b-4b7c-9a9c-dd811ac8617a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.729 247708 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Refreshing network info cache for port ab718163-b906-4e5f-8ca2-f51416a1da02 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.731 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Start _get_guest_xml network_info=[{"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.737 247708 WARNING nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.742 247708 DEBUG nova.virt.libvirt.host [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.743 247708 DEBUG nova.virt.libvirt.host [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.749 247708 DEBUG nova.virt.libvirt.host [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.750 247708 DEBUG nova.virt.libvirt.host [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.751 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.751 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.752 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.752 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.752 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.752 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.752 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.753 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.753 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.753 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.753 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.753 247708 DEBUG nova.virt.hardware [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:45:23 compute-0 nova_compute[247704]: 2026-01-31 07:45:23.756 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4f903aa1-f2d4-4f76-bbc6-e9550a55db29 does not exist
Jan 31 07:45:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev afb378bd-fd35-49df-8cbb-1767e5f2e675 does not exist
Jan 31 07:45:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8a88cd2c-3ecf-4a16-b5f8-b2ed1ef88a8c does not exist
Jan 31 07:45:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:45:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:45:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:45:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:45:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:45:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:45:24 compute-0 sudo[282896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:24 compute-0 sudo[282896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:24 compute-0 sudo[282896]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:24 compute-0 sudo[282927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:45:24 compute-0 sudo[282927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:24 compute-0 sudo[282927]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:24 compute-0 podman[282920]: 2026-01-31 07:45:24.183483006 +0000 UTC m=+0.077976641 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 07:45:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2415221445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.205 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:24 compute-0 sudo[282965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:24 compute-0 sudo[282965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:24 compute-0 sudo[282965]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:24 compute-0 ceph-mon[74496]: pgmap v1403: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Jan 31 07:45:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/408421975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:45:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.248 247708 DEBUG nova.storage.rbd_utils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.255 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:24 compute-0 sudo[283006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:45:24 compute-0 sudo[283006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:24.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:45:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950759889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:24 compute-0 podman[283097]: 2026-01-31 07:45:24.617427722 +0000 UTC m=+0.028448468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.721 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.723 247708 DEBUG nova.virt.libvirt.vif [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:45:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2010764165',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2010764165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2010764165',id=58,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-z4ym6ps1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_us
er_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:45:11Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=14852aaa-992b-4b7c-9a9c-dd811ac8617a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.723 247708 DEBUG nova.network.os_vif_util [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.724 247708 DEBUG nova.network.os_vif_util [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:f1:03,bridge_name='br-int',has_traffic_filtering=True,id=ab718163-b906-4e5f-8ca2-f51416a1da02,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab718163-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.725 247708 DEBUG nova.objects.instance [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 14852aaa-992b-4b7c-9a9c-dd811ac8617a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.766 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <uuid>14852aaa-992b-4b7c-9a9c-dd811ac8617a</uuid>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <name>instance-0000003a</name>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-2010764165</nova:name>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:45:23</nova:creationTime>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <nova:user uuid="4cdbfeb437d54df89a0fb0f6621b8fdc">tempest-ImagesOneServerNegativeTestJSON-536491326-project-member</nova:user>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <nova:project uuid="9a5c5f11e8f24f898d16bceb9925aaa0">tempest-ImagesOneServerNegativeTestJSON-536491326</nova:project>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <nova:port uuid="ab718163-b906-4e5f-8ca2-f51416a1da02">
Jan 31 07:45:24 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <system>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <entry name="serial">14852aaa-992b-4b7c-9a9c-dd811ac8617a</entry>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <entry name="uuid">14852aaa-992b-4b7c-9a9c-dd811ac8617a</entry>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </system>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <os>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   </os>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <features>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   </features>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk">
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       </source>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk.config">
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       </source>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:45:24 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:19:f1:03"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <target dev="tapab718163-b9"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a/console.log" append="off"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <video>
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </video>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:45:24 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:45:24 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:45:24 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:45:24 compute-0 nova_compute[247704]: </domain>
Jan 31 07:45:24 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.767 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Preparing to wait for external event network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.767 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.767 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.767 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.768 247708 DEBUG nova.virt.libvirt.vif [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:45:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2010764165',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2010764165',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2010764165',id=58,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-z4ym6ps1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:45:11Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=14852aaa-992b-4b7c-9a9c-dd811ac8617a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.768 247708 DEBUG nova.network.os_vif_util [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.769 247708 DEBUG nova.network.os_vif_util [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:f1:03,bridge_name='br-int',has_traffic_filtering=True,id=ab718163-b906-4e5f-8ca2-f51416a1da02,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab718163-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.769 247708 DEBUG os_vif [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:f1:03,bridge_name='br-int',has_traffic_filtering=True,id=ab718163-b906-4e5f-8ca2-f51416a1da02,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab718163-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.770 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.770 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.770 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.773 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.773 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab718163-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.773 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab718163-b9, col_values=(('external_ids', {'iface-id': 'ab718163-b906-4e5f-8ca2-f51416a1da02', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:f1:03', 'vm-uuid': '14852aaa-992b-4b7c-9a9c-dd811ac8617a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:24 compute-0 NetworkManager[49108]: <info>  [1769845524.7763] manager: (tapab718163-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.777 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.782 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.782 247708 INFO os_vif [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:f1:03,bridge_name='br-int',has_traffic_filtering=True,id=ab718163-b906-4e5f-8ca2-f51416a1da02,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab718163-b9')
Jan 31 07:45:24 compute-0 podman[283097]: 2026-01-31 07:45:24.801324675 +0000 UTC m=+0.212345401 container create 8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:45:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 176 op/s
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.853 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.853 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.854 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No VIF found with MAC fa:16:3e:19:f1:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.854 247708 INFO nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Using config drive
Jan 31 07:45:24 compute-0 systemd[1]: Started libpod-conmon-8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1.scope.
Jan 31 07:45:24 compute-0 nova_compute[247704]: 2026-01-31 07:45:24.884 247708 DEBUG nova.storage.rbd_utils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:45:24 compute-0 podman[283097]: 2026-01-31 07:45:24.917134782 +0000 UTC m=+0.328155518 container init 8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 07:45:24 compute-0 podman[283097]: 2026-01-31 07:45:24.927599677 +0000 UTC m=+0.338620403 container start 8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:45:24 compute-0 podman[283097]: 2026-01-31 07:45:24.93872599 +0000 UTC m=+0.349746746 container attach 8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 07:45:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:24.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:24 compute-0 dazzling_engelbart[283125]: 167 167
Jan 31 07:45:24 compute-0 systemd[1]: libpod-8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1.scope: Deactivated successfully.
Jan 31 07:45:24 compute-0 conmon[283125]: conmon 8c5699b383a4db5ce955 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1.scope/container/memory.events
Jan 31 07:45:24 compute-0 podman[283097]: 2026-01-31 07:45:24.95503679 +0000 UTC m=+0.366057526 container died 8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:45:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb8dc1ce447b8aac327e06df0cf32900f52b6a877735cc1039c637bbc517c665-merged.mount: Deactivated successfully.
Jan 31 07:45:25 compute-0 podman[283097]: 2026-01-31 07:45:25.096045712 +0000 UTC m=+0.507066478 container remove 8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:45:25 compute-0 systemd[1]: libpod-conmon-8c5699b383a4db5ce955469e5b95a6da51af95c811db4270414fe3cc6c6dd7b1.scope: Deactivated successfully.
Jan 31 07:45:25 compute-0 podman[283159]: 2026-01-31 07:45:25.239667079 +0000 UTC m=+0.045667439 container create 9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:45:25 compute-0 systemd[1]: Started libpod-conmon-9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a.scope.
Jan 31 07:45:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70cebcb1606bd58b85a172d5cf5c4258f8d6d089f4f5a2c79b30348477987/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70cebcb1606bd58b85a172d5cf5c4258f8d6d089f4f5a2c79b30348477987/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70cebcb1606bd58b85a172d5cf5c4258f8d6d089f4f5a2c79b30348477987/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70cebcb1606bd58b85a172d5cf5c4258f8d6d089f4f5a2c79b30348477987/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60d70cebcb1606bd58b85a172d5cf5c4258f8d6d089f4f5a2c79b30348477987/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:25 compute-0 podman[283159]: 2026-01-31 07:45:25.218474241 +0000 UTC m=+0.024474611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:45:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:45:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:45:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:45:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2415221445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1950759889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:25 compute-0 podman[283159]: 2026-01-31 07:45:25.377138496 +0000 UTC m=+0.183138876 container init 9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_colden, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 07:45:25 compute-0 podman[283159]: 2026-01-31 07:45:25.382169719 +0000 UTC m=+0.188170079 container start 9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_colden, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 07:45:25 compute-0 podman[283159]: 2026-01-31 07:45:25.477169736 +0000 UTC m=+0.283170116 container attach 9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_colden, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:45:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:26 compute-0 reverent_colden[283175]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:45:26 compute-0 reverent_colden[283175]: --> relative data size: 1.0
Jan 31 07:45:26 compute-0 reverent_colden[283175]: --> All data devices are unavailable
Jan 31 07:45:26 compute-0 systemd[1]: libpod-9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a.scope: Deactivated successfully.
Jan 31 07:45:26 compute-0 podman[283159]: 2026-01-31 07:45:26.169697004 +0000 UTC m=+0.975697374 container died 9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:45:26 compute-0 nova_compute[247704]: 2026-01-31 07:45:26.211 247708 INFO nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Creating config drive at /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a/disk.config
Jan 31 07:45:26 compute-0 nova_compute[247704]: 2026-01-31 07:45:26.218 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpf_dczdj_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:26 compute-0 ceph-mon[74496]: pgmap v1404: 305 pgs: 305 active+clean; 339 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 176 op/s
Jan 31 07:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-60d70cebcb1606bd58b85a172d5cf5c4258f8d6d089f4f5a2c79b30348477987-merged.mount: Deactivated successfully.
Jan 31 07:45:26 compute-0 nova_compute[247704]: 2026-01-31 07:45:26.349 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpf_dczdj_" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:26 compute-0 podman[283159]: 2026-01-31 07:45:26.377597116 +0000 UTC m=+1.183597476 container remove 9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_colden, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:45:26 compute-0 systemd[1]: libpod-conmon-9c487b9da5c30facd4f2fbd48ead5c604c38ee6a06e6300bc42aeaeb6851378a.scope: Deactivated successfully.
Jan 31 07:45:26 compute-0 nova_compute[247704]: 2026-01-31 07:45:26.389 247708 DEBUG nova.storage.rbd_utils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] rbd image 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:45:26 compute-0 nova_compute[247704]: 2026-01-31 07:45:26.394 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a/disk.config 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:26 compute-0 sudo[283006]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:26.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:26 compute-0 sudo[283227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:26 compute-0 sudo[283227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:26 compute-0 sudo[283227]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:26 compute-0 sudo[283267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:45:26 compute-0 sudo[283267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:26 compute-0 sudo[283267]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:26 compute-0 sudo[283292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:26 compute-0 sudo[283292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:26 compute-0 sudo[283292]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:26 compute-0 sudo[283320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:45:26 compute-0 sudo[283320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 347 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.1 MiB/s wr, 248 op/s
Jan 31 07:45:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:26.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:26 compute-0 nova_compute[247704]: 2026-01-31 07:45:26.955 247708 DEBUG oslo_concurrency.processutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a/disk.config 14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:26 compute-0 nova_compute[247704]: 2026-01-31 07:45:26.956 247708 INFO nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Deleting local config drive /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a/disk.config because it was imported into RBD.
Jan 31 07:45:27 compute-0 kernel: tapab718163-b9: entered promiscuous mode
Jan 31 07:45:27 compute-0 NetworkManager[49108]: <info>  [1769845527.0290] manager: (tapab718163-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Jan 31 07:45:27 compute-0 ovn_controller[149457]: 2026-01-31T07:45:27Z|00180|binding|INFO|Claiming lport ab718163-b906-4e5f-8ca2-f51416a1da02 for this chassis.
Jan 31 07:45:27 compute-0 ovn_controller[149457]: 2026-01-31T07:45:27Z|00181|binding|INFO|ab718163-b906-4e5f-8ca2-f51416a1da02: Claiming fa:16:3e:19:f1:03 10.100.0.10
Jan 31 07:45:27 compute-0 nova_compute[247704]: 2026-01-31 07:45:27.031 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:27 compute-0 ovn_controller[149457]: 2026-01-31T07:45:27Z|00182|binding|INFO|Setting lport ab718163-b906-4e5f-8ca2-f51416a1da02 ovn-installed in OVS
Jan 31 07:45:27 compute-0 nova_compute[247704]: 2026-01-31 07:45:27.047 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:27 compute-0 nova_compute[247704]: 2026-01-31 07:45:27.049 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:27 compute-0 systemd-udevd[283410]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:45:27 compute-0 nova_compute[247704]: 2026-01-31 07:45:27.066 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:27 compute-0 podman[283385]: 2026-01-31 07:45:26.978906761 +0000 UTC m=+0.029334890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:45:27 compute-0 NetworkManager[49108]: <info>  [1769845527.0764] device (tapab718163-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:45:27 compute-0 NetworkManager[49108]: <info>  [1769845527.0774] device (tapab718163-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:45:27 compute-0 systemd-machined[214448]: New machine qemu-26-instance-0000003a.
Jan 31 07:45:27 compute-0 podman[283385]: 2026-01-31 07:45:27.085899772 +0000 UTC m=+0.136327881 container create 0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:45:27 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000003a.
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.129 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:f1:03 10.100.0.10'], port_security=['fa:16:3e:19:f1:03 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '14852aaa-992b-4b7c-9a9c-dd811ac8617a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7130ed58-0d3f-4534-9498-e2d59204c82c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d514bed-4c59-42dc-a403-a5a9a9cfa795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0203aeab-482d-423f-9cbc-afbc1fe3631d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=ab718163-b906-4e5f-8ca2-f51416a1da02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:45:27 compute-0 ovn_controller[149457]: 2026-01-31T07:45:27Z|00183|binding|INFO|Setting lport ab718163-b906-4e5f-8ca2-f51416a1da02 up in Southbound
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.131 160028 INFO neutron.agent.ovn.metadata.agent [-] Port ab718163-b906-4e5f-8ca2-f51416a1da02 in datapath 7130ed58-0d3f-4534-9498-e2d59204c82c bound to our chassis
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.133 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7130ed58-0d3f-4534-9498-e2d59204c82c
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.144 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ca466d18-8206-4620-b2f7-3eabe1b9d45b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.145 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7130ed58-01 in ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.148 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7130ed58-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.148 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[db302fc0-84ec-440b-ad8d-d5d631a26c4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.149 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9ffebf-7071-4a2d-9a1b-96b5c7118346]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.163 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[ca693fd8-f4ae-43aa-a0f9-427a850db946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.176 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c36d8926-681a-4c63-852d-f6018edc4ddb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.207 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d6751399-59cf-4720-898f-2d961f229573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.215 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[abbc5a5b-0fea-44a7-bd74-e4090d505ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 systemd[1]: Started libpod-conmon-0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72.scope.
Jan 31 07:45:27 compute-0 systemd-udevd[283414]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:45:27 compute-0 NetworkManager[49108]: <info>  [1769845527.2189] manager: (tap7130ed58-00): new Veth device (/org/freedesktop/NetworkManager/Devices/96)
Jan 31 07:45:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.252 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[48f5341a-63f8-4489-a2e2-66b1119d96d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.256 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7751c027-c20f-41ab-9445-2ab0424f65f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 NetworkManager[49108]: <info>  [1769845527.2793] device (tap7130ed58-00): carrier: link connected
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.288 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab543a2-a85a-42cc-ad51-dc9be7749234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 podman[283385]: 2026-01-31 07:45:27.309034675 +0000 UTC m=+0.359462774 container init 0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.310 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[08d5b51f-9428-43d7-ada8-1935589ab94c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7130ed58-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:8a:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576989, 'reachable_time': 20855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283451, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 podman[283385]: 2026-01-31 07:45:27.31859585 +0000 UTC m=+0.369023969 container start 0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:45:27 compute-0 nice_ptolemy[283430]: 167 167
Jan 31 07:45:27 compute-0 systemd[1]: libpod-0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72.scope: Deactivated successfully.
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.335 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[45ef144e-85f8-439a-a526-47e20c22517b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:8a47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576989, 'tstamp': 576989}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283455, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.348 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[65c99ba3-b776-42cf-9210-dc301fb3b117]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7130ed58-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:8a:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576989, 'reachable_time': 20855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283462, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 podman[283385]: 2026-01-31 07:45:27.367678452 +0000 UTC m=+0.418106571 container attach 0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:45:27 compute-0 podman[283385]: 2026-01-31 07:45:27.370972862 +0000 UTC m=+0.421400961 container died 0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.380 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5a1ae9-11c0-42c5-a366-1076d5b887e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.447 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[eecd0213-1997-41eb-a9a6-31c989ecf675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.449 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7130ed58-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.449 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.450 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7130ed58-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:27 compute-0 kernel: tap7130ed58-00: entered promiscuous mode
Jan 31 07:45:27 compute-0 nova_compute[247704]: 2026-01-31 07:45:27.452 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:27 compute-0 NetworkManager[49108]: <info>  [1769845527.4538] manager: (tap7130ed58-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.455 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7130ed58-00, col_values=(('external_ids', {'iface-id': '498a34f8-98b0-44b9-8d4d-24ad7111bb4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:27 compute-0 ovn_controller[149457]: 2026-01-31T07:45:27Z|00184|binding|INFO|Releasing lport 498a34f8-98b0-44b9-8d4d-24ad7111bb4f from this chassis (sb_readonly=0)
Jan 31 07:45:27 compute-0 nova_compute[247704]: 2026-01-31 07:45:27.459 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.459 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.460 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[026a7a1b-41d8-440f-91f8-71efe40d3b03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.461 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-7130ed58-0d3f-4534-9498-e2d59204c82c
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/7130ed58-0d3f-4534-9498-e2d59204c82c.pid.haproxy
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 7130ed58-0d3f-4534-9498-e2d59204c82c
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:45:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:27.462 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'env', 'PROCESS_TAG=haproxy-7130ed58-0d3f-4534-9498-e2d59204c82c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7130ed58-0d3f-4534-9498-e2d59204c82c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:45:27 compute-0 nova_compute[247704]: 2026-01-31 07:45:27.464 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2e7e9b2c8b92d3793c491ca23bf9a4e6fce29ff6e494d8d8990a3c7c3c0e1b4-merged.mount: Deactivated successfully.
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.068 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845528.0678983, 14852aaa-992b-4b7c-9a9c-dd811ac8617a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.068 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] VM Started (Lifecycle Event)
Jan 31 07:45:28 compute-0 podman[283385]: 2026-01-31 07:45:28.244901144 +0000 UTC m=+1.295329243 container remove 0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:45:28 compute-0 systemd[1]: libpod-conmon-0c73813c0ea61216a15cf929c370b1bb4b91e3222b70bac783e7c73be2863e72.scope: Deactivated successfully.
Jan 31 07:45:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.469 247708 DEBUG nova.compute.manager [req-149dd0aa-f9c4-41ee-9717-8c79096abf6d req-5376c244-2c80-4fe6-95b6-4bcba8bd2f1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received event network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.474 247708 DEBUG oslo_concurrency.lockutils [req-149dd0aa-f9c4-41ee-9717-8c79096abf6d req-5376c244-2c80-4fe6-95b6-4bcba8bd2f1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.475 247708 DEBUG oslo_concurrency.lockutils [req-149dd0aa-f9c4-41ee-9717-8c79096abf6d req-5376c244-2c80-4fe6-95b6-4bcba8bd2f1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.475 247708 DEBUG oslo_concurrency.lockutils [req-149dd0aa-f9c4-41ee-9717-8c79096abf6d req-5376c244-2c80-4fe6-95b6-4bcba8bd2f1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.475 247708 DEBUG nova.compute.manager [req-149dd0aa-f9c4-41ee-9717-8c79096abf6d req-5376c244-2c80-4fe6-95b6-4bcba8bd2f1f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Processing event network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.476 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:45:28 compute-0 podman[283546]: 2026-01-31 07:45:28.388710496 +0000 UTC m=+0.027701340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.481 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:45:28 compute-0 podman[283550]: 2026-01-31 07:45:28.393007701 +0000 UTC m=+0.028112969 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.489 247708 INFO nova.virt.libvirt.driver [-] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Instance spawned successfully.
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.489 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.600 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.612 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.618 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.619 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.620 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.621 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.622 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.622 247708 DEBUG nova.virt.libvirt.driver [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.684 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.685 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845528.0682328, 14852aaa-992b-4b7c-9a9c-dd811ac8617a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.685 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] VM Paused (Lifecycle Event)
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.714 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.720 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845528.4801352, 14852aaa-992b-4b7c-9a9c-dd811ac8617a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.720 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] VM Resumed (Lifecycle Event)
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.724 247708 INFO nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Took 17.37 seconds to spawn the instance on the hypervisor.
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.725 247708 DEBUG nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.761 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.766 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:45:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 353 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 159 op/s
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.831 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.834 247708 INFO nova.compute.manager [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Took 19.36 seconds to build instance.
Jan 31 07:45:28 compute-0 ceph-mon[74496]: pgmap v1405: 305 pgs: 305 active+clean; 347 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.1 MiB/s wr, 248 op/s
Jan 31 07:45:28 compute-0 nova_compute[247704]: 2026-01-31 07:45:28.858 247708 DEBUG oslo_concurrency.lockutils [None req-a4b3a085-9a8f-4ba8-9758-f72739399b87 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:28.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:29 compute-0 podman[283550]: 2026-01-31 07:45:29.066118925 +0000 UTC m=+0.701224213 container create f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.202 247708 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Updated VIF entry in instance network info cache for port ab718163-b906-4e5f-8ca2-f51416a1da02. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.204 247708 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Updating instance_info_cache with network_info: [{"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.230 247708 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-14852aaa-992b-4b7c-9a9c-dd811ac8617a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.231 247708 DEBUG nova.compute.manager [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-changed-a85d82a5-a910-4674-9460-0efb5cc7e0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.231 247708 DEBUG nova.compute.manager [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Refreshing instance network info cache due to event network-changed-a85d82a5-a910-4674-9460-0efb5cc7e0c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.231 247708 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.231 247708 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.232 247708 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Refreshing network info cache for port a85d82a5-a910-4674-9460-0efb5cc7e0c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:45:29 compute-0 podman[283546]: 2026-01-31 07:45:29.338586197 +0000 UTC m=+0.977576961 container create 37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:45:29 compute-0 systemd[1]: Started libpod-conmon-f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a.scope.
Jan 31 07:45:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:45:29 compute-0 systemd[1]: Started libpod-conmon-37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee.scope.
Jan 31 07:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206c494c1a4d2ac7dcf239635a44514c5fac3fcc80640f2d7dabe7d68928218b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7338733f6dd8d9ec3ac18eb7314f395bc83666d3cb0a46cccda22a2b127bdb9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7338733f6dd8d9ec3ac18eb7314f395bc83666d3cb0a46cccda22a2b127bdb9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7338733f6dd8d9ec3ac18eb7314f395bc83666d3cb0a46cccda22a2b127bdb9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7338733f6dd8d9ec3ac18eb7314f395bc83666d3cb0a46cccda22a2b127bdb9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:29 compute-0 podman[283550]: 2026-01-31 07:45:29.479766794 +0000 UTC m=+1.114872062 container init f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 07:45:29 compute-0 podman[283550]: 2026-01-31 07:45:29.488779565 +0000 UTC m=+1.123884813 container start f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:45:29 compute-0 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[283579]: [NOTICE]   (283588) : New worker (283590) forked
Jan 31 07:45:29 compute-0 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[283579]: [NOTICE]   (283588) : Loading success.
Jan 31 07:45:29 compute-0 nova_compute[247704]: 2026-01-31 07:45:29.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:30 compute-0 podman[283546]: 2026-01-31 07:45:30.10232005 +0000 UTC m=+1.741310854 container init 37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatelet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.106 247708 DEBUG nova.compute.manager [req-72ee63b5-f81b-407c-952f-02419e29cbcb req-de477e2a-321b-4883-b408-8bb844c0f430 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.106 247708 DEBUG nova.compute.manager [req-72ee63b5-f81b-407c-952f-02419e29cbcb req-de477e2a-321b-4883-b408-8bb844c0f430 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing instance network info cache due to event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.107 247708 DEBUG oslo_concurrency.lockutils [req-72ee63b5-f81b-407c-952f-02419e29cbcb req-de477e2a-321b-4883-b408-8bb844c0f430 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.107 247708 DEBUG oslo_concurrency.lockutils [req-72ee63b5-f81b-407c-952f-02419e29cbcb req-de477e2a-321b-4883-b408-8bb844c0f430 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.107 247708 DEBUG nova.network.neutron [req-72ee63b5-f81b-407c-952f-02419e29cbcb req-de477e2a-321b-4883-b408-8bb844c0f430 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:45:30 compute-0 podman[283546]: 2026-01-31 07:45:30.115349589 +0000 UTC m=+1.754340393 container start 37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:45:30 compute-0 podman[283546]: 2026-01-31 07:45:30.326648213 +0000 UTC m=+1.965639097 container attach 37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatelet, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:45:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:30.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:30 compute-0 ceph-mon[74496]: pgmap v1406: 305 pgs: 305 active+clean; 353 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 159 op/s
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.641 247708 DEBUG nova.compute.manager [req-de3131f0-3141-4094-9059-653dde2909c8 req-99756deb-4805-497d-b395-0d9ce797d83e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received event network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.641 247708 DEBUG oslo_concurrency.lockutils [req-de3131f0-3141-4094-9059-653dde2909c8 req-99756deb-4805-497d-b395-0d9ce797d83e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.642 247708 DEBUG oslo_concurrency.lockutils [req-de3131f0-3141-4094-9059-653dde2909c8 req-99756deb-4805-497d-b395-0d9ce797d83e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.642 247708 DEBUG oslo_concurrency.lockutils [req-de3131f0-3141-4094-9059-653dde2909c8 req-99756deb-4805-497d-b395-0d9ce797d83e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.642 247708 DEBUG nova.compute.manager [req-de3131f0-3141-4094-9059-653dde2909c8 req-99756deb-4805-497d-b395-0d9ce797d83e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] No waiting events found dispatching network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:45:30 compute-0 nova_compute[247704]: 2026-01-31 07:45:30.643 247708 WARNING nova.compute.manager [req-de3131f0-3141-4094-9059-653dde2909c8 req-99756deb-4805-497d-b395-0d9ce797d83e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received unexpected event network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 for instance with vm_state active and task_state None.
Jan 31 07:45:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 370 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 218 op/s
Jan 31 07:45:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:30.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]: {
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:     "0": [
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:         {
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "devices": [
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "/dev/loop3"
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             ],
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "lv_name": "ceph_lv0",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "lv_size": "7511998464",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "name": "ceph_lv0",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "tags": {
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.cluster_name": "ceph",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.crush_device_class": "",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.encrypted": "0",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.osd_id": "0",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.type": "block",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:                 "ceph.vdo": "0"
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             },
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "type": "block",
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:             "vg_name": "ceph_vg0"
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:         }
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]:     ]
Jan 31 07:45:31 compute-0 lucid_chatelet[283584]: }
Jan 31 07:45:31 compute-0 systemd[1]: libpod-37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee.scope: Deactivated successfully.
Jan 31 07:45:31 compute-0 podman[283546]: 2026-01-31 07:45:31.160027351 +0000 UTC m=+2.799018155 container died 37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7338733f6dd8d9ec3ac18eb7314f395bc83666d3cb0a46cccda22a2b127bdb9d-merged.mount: Deactivated successfully.
Jan 31 07:45:31 compute-0 ceph-mon[74496]: pgmap v1407: 305 pgs: 305 active+clean; 370 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 218 op/s
Jan 31 07:45:32 compute-0 nova_compute[247704]: 2026-01-31 07:45:32.068 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:32 compute-0 podman[283546]: 2026-01-31 07:45:32.145796973 +0000 UTC m=+3.784787737 container remove 37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_chatelet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:45:32 compute-0 systemd[1]: libpod-conmon-37bdba0ee502d9f35d6f929951c118fd92640d63a336dc422e8097ec6161bcee.scope: Deactivated successfully.
Jan 31 07:45:32 compute-0 sudo[283320]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:32 compute-0 sudo[283618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:32 compute-0 sudo[283618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:32 compute-0 sudo[283618]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:32 compute-0 sudo[283643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:45:32 compute-0 sudo[283643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:32 compute-0 sudo[283643]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:32 compute-0 sudo[283668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:32 compute-0 sudo[283668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:32 compute-0 sudo[283668]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:32 compute-0 sudo[283693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:45:32 compute-0 sudo[283693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:32.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:32 compute-0 sudo[283718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:32 compute-0 sudo[283718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:32 compute-0 sudo[283718]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:32 compute-0 sudo[283744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:32 compute-0 sudo[283744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:32 compute-0 sudo[283744]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:32 compute-0 nova_compute[247704]: 2026-01-31 07:45:32.656 247708 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updated VIF entry in instance network info cache for port a85d82a5-a910-4674-9460-0efb5cc7e0c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:45:32 compute-0 nova_compute[247704]: 2026-01-31 07:45:32.656 247708 DEBUG nova.network.neutron [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updating instance_info_cache with network_info: [{"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:32 compute-0 nova_compute[247704]: 2026-01-31 07:45:32.728 247708 DEBUG oslo_concurrency.lockutils [req-d09fa02e-f663-4aa4-b660-69039a106946 req-199289d5-0581-4fec-ab4f-840edc43ee4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a2a97039-8813-4ebf-9ce0-488982bece16" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 372 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Jan 31 07:45:32 compute-0 podman[283807]: 2026-01-31 07:45:32.734730204 +0000 UTC m=+0.026312355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:45:32 compute-0 podman[283807]: 2026-01-31 07:45:32.883514548 +0000 UTC m=+0.175096739 container create 6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_feynman, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:45:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:32.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:33 compute-0 systemd[1]: Started libpod-conmon-6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa.scope.
Jan 31 07:45:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:45:33 compute-0 nova_compute[247704]: 2026-01-31 07:45:33.198 247708 DEBUG nova.network.neutron [req-72ee63b5-f81b-407c-952f-02419e29cbcb req-de477e2a-321b-4883-b408-8bb844c0f430 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updated VIF entry in instance network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:45:33 compute-0 nova_compute[247704]: 2026-01-31 07:45:33.200 247708 DEBUG nova.network.neutron [req-72ee63b5-f81b-407c-952f-02419e29cbcb req-de477e2a-321b-4883-b408-8bb844c0f430 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updating instance_info_cache with network_info: [{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:33 compute-0 podman[283807]: 2026-01-31 07:45:33.211807378 +0000 UTC m=+0.503389619 container init 6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_feynman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:45:33 compute-0 podman[283807]: 2026-01-31 07:45:33.236330678 +0000 UTC m=+0.527912829 container start 6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_feynman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:45:33 compute-0 gifted_feynman[283824]: 167 167
Jan 31 07:45:33 compute-0 systemd[1]: libpod-6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa.scope: Deactivated successfully.
Jan 31 07:45:33 compute-0 nova_compute[247704]: 2026-01-31 07:45:33.292 247708 DEBUG oslo_concurrency.lockutils [req-72ee63b5-f81b-407c-952f-02419e29cbcb req-de477e2a-321b-4883-b408-8bb844c0f430 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:33 compute-0 podman[283807]: 2026-01-31 07:45:33.414932892 +0000 UTC m=+0.706515053 container attach 6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_feynman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:45:33 compute-0 podman[283807]: 2026-01-31 07:45:33.415620978 +0000 UTC m=+0.707203139 container died 6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ebd9e655707d13f32edc672cae5315aa9e1dc37e680a12410ff8f59d62480a-merged.mount: Deactivated successfully.
Jan 31 07:45:33 compute-0 podman[283807]: 2026-01-31 07:45:33.844794949 +0000 UTC m=+1.136377110 container remove 6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:45:33 compute-0 systemd[1]: libpod-conmon-6cb828af039627fd584a3c010df0798984ebcc228e6a821565d83bddba2807fa.scope: Deactivated successfully.
Jan 31 07:45:34 compute-0 podman[283850]: 2026-01-31 07:45:34.035847287 +0000 UTC m=+0.064368398 container create 512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:45:34 compute-0 systemd[1]: Started libpod-conmon-512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b.scope.
Jan 31 07:45:34 compute-0 podman[283850]: 2026-01-31 07:45:33.999794305 +0000 UTC m=+0.028315446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:45:34 compute-0 ceph-mon[74496]: pgmap v1408: 305 pgs: 305 active+clean; 372 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Jan 31 07:45:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f23f4adebb150aa4d51bc36a17d46a153f8e55bc0430d22d4d7d09899ef484a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f23f4adebb150aa4d51bc36a17d46a153f8e55bc0430d22d4d7d09899ef484a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f23f4adebb150aa4d51bc36a17d46a153f8e55bc0430d22d4d7d09899ef484a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f23f4adebb150aa4d51bc36a17d46a153f8e55bc0430d22d4d7d09899ef484a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:45:34 compute-0 podman[283850]: 2026-01-31 07:45:34.153996051 +0000 UTC m=+0.182517192 container init 512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jennings, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:45:34 compute-0 podman[283850]: 2026-01-31 07:45:34.160608823 +0000 UTC m=+0.189129924 container start 512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jennings, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:45:34 compute-0 podman[283850]: 2026-01-31 07:45:34.165369879 +0000 UTC m=+0.193891010 container attach 512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:45:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:34 compute-0 nova_compute[247704]: 2026-01-31 07:45:34.666 247708 DEBUG nova.compute.manager [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:45:34 compute-0 nova_compute[247704]: 2026-01-31 07:45:34.725 247708 INFO nova.compute.manager [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] instance snapshotting
Jan 31 07:45:34 compute-0 nova_compute[247704]: 2026-01-31 07:45:34.777 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 372 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Jan 31 07:45:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:34.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008495348559929276 of space, bias 1.0, pg target 2.5486045679787828 quantized to 32 (current 32)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:45:34 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]: {
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]:         "osd_id": 0,
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]:         "type": "bluestore"
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]:     }
Jan 31 07:45:35 compute-0 eloquent_jennings[283867]: }
Jan 31 07:45:35 compute-0 systemd[1]: libpod-512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b.scope: Deactivated successfully.
Jan 31 07:45:35 compute-0 podman[283850]: 2026-01-31 07:45:35.075978239 +0000 UTC m=+1.104499390 container died 512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:45:35 compute-0 nova_compute[247704]: 2026-01-31 07:45:35.109 247708 INFO nova.virt.libvirt.driver [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Beginning live snapshot process
Jan 31 07:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f23f4adebb150aa4d51bc36a17d46a153f8e55bc0430d22d4d7d09899ef484a-merged.mount: Deactivated successfully.
Jan 31 07:45:35 compute-0 podman[283850]: 2026-01-31 07:45:35.212886091 +0000 UTC m=+1.241407202 container remove 512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:45:35 compute-0 systemd[1]: libpod-conmon-512d0517bf53da99cb73330cca4215070dcb36999f42eaadb23dc3266d37ee2b.scope: Deactivated successfully.
Jan 31 07:45:35 compute-0 sudo[283693]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:45:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:45:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a242b731-0e36-4895-81df-69e360b70961 does not exist
Jan 31 07:45:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 39ef4893-8cd1-451b-a039-617a8f357bbd does not exist
Jan 31 07:45:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ea2aafd4-730d-4da9-b489-f06cb7bc5253 does not exist
Jan 31 07:45:35 compute-0 sudo[283914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:35 compute-0 sudo[283914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:35 compute-0 sudo[283914]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:35 compute-0 nova_compute[247704]: 2026-01-31 07:45:35.406 247708 DEBUG nova.virt.libvirt.imagebackend [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 07:45:35 compute-0 sudo[283958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:45:35 compute-0 sudo[283958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:35 compute-0 sudo[283958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:36 compute-0 nova_compute[247704]: 2026-01-31 07:45:36.040 247708 DEBUG nova.storage.rbd_utils [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] creating snapshot(82e98f35f2b24ad7b3286c3151808633) on rbd image(14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:45:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 31 07:45:36 compute-0 ceph-mon[74496]: pgmap v1409: 305 pgs: 305 active+clean; 372 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Jan 31 07:45:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:45:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 381 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.1 MiB/s wr, 237 op/s
Jan 31 07:45:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 31 07:45:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 31 07:45:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:36.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:37 compute-0 nova_compute[247704]: 2026-01-31 07:45:37.070 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:37 compute-0 nova_compute[247704]: 2026-01-31 07:45:37.161 247708 DEBUG nova.compute.manager [req-2c1a3d70-90c1-4fec-a31d-e7daf6bbe270 req-83a2fd5b-5571-49a7-b3e3-7b32253d49bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:37 compute-0 nova_compute[247704]: 2026-01-31 07:45:37.162 247708 DEBUG nova.compute.manager [req-2c1a3d70-90c1-4fec-a31d-e7daf6bbe270 req-83a2fd5b-5571-49a7-b3e3-7b32253d49bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing instance network info cache due to event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:45:37 compute-0 nova_compute[247704]: 2026-01-31 07:45:37.162 247708 DEBUG oslo_concurrency.lockutils [req-2c1a3d70-90c1-4fec-a31d-e7daf6bbe270 req-83a2fd5b-5571-49a7-b3e3-7b32253d49bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:45:37 compute-0 nova_compute[247704]: 2026-01-31 07:45:37.162 247708 DEBUG oslo_concurrency.lockutils [req-2c1a3d70-90c1-4fec-a31d-e7daf6bbe270 req-83a2fd5b-5571-49a7-b3e3-7b32253d49bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:45:37 compute-0 nova_compute[247704]: 2026-01-31 07:45:37.163 247708 DEBUG nova.network.neutron [req-2c1a3d70-90c1-4fec-a31d-e7daf6bbe270 req-83a2fd5b-5571-49a7-b3e3-7b32253d49bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:45:37 compute-0 nova_compute[247704]: 2026-01-31 07:45:37.604 247708 DEBUG nova.storage.rbd_utils [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] cloning vms/14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk@82e98f35f2b24ad7b3286c3151808633 to images/d0b187ca-45f7-4a66-bf6d-895562fb5aaf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 07:45:37 compute-0 ceph-mon[74496]: pgmap v1410: 305 pgs: 305 active+clean; 381 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.1 MiB/s wr, 237 op/s
Jan 31 07:45:37 compute-0 ceph-mon[74496]: osdmap e210: 3 total, 3 up, 3 in
Jan 31 07:45:37 compute-0 podman[284035]: 2026-01-31 07:45:37.933457045 +0000 UTC m=+0.095649603 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:45:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:38.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:38 compute-0 nova_compute[247704]: 2026-01-31 07:45:38.483 247708 DEBUG nova.storage.rbd_utils [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] flattening images/d0b187ca-45f7-4a66-bf6d-895562fb5aaf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 07:45:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 385 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 164 op/s
Jan 31 07:45:38 compute-0 nova_compute[247704]: 2026-01-31 07:45:38.908 247708 DEBUG oslo_concurrency.lockutils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:38 compute-0 nova_compute[247704]: 2026-01-31 07:45:38.910 247708 DEBUG oslo_concurrency.lockutils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:38.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:38 compute-0 nova_compute[247704]: 2026-01-31 07:45:38.964 247708 DEBUG nova.objects.instance [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'flavor' on Instance uuid a2a97039-8813-4ebf-9ce0-488982bece16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.033 247708 DEBUG oslo_concurrency.lockutils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.483 247708 DEBUG oslo_concurrency.lockutils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.485 247708 DEBUG oslo_concurrency.lockutils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.485 247708 INFO nova.compute.manager [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Attaching volume d1aca578-ec32-4cbf-a124-22c55e831394 to /dev/vdb
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.779 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.906 247708 DEBUG os_brick.utils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.908 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.935 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.936 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc0bf2a-dbd5-4be7-a5f7-9893a28ffba7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.940 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.948 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.949 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6672ff-1239-4696-a5d1-f6e9970755fa]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.951 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.960 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.961 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[cf11e66c-730e-42e4-a734-3b1f442a2c2a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.963 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[21a29370-6705-4b57-826b-b10d1308f6e9]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.964 247708 DEBUG oslo_concurrency.processutils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.990 247708 DEBUG oslo_concurrency.processutils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.993 247708 DEBUG os_brick.initiator.connectors.lightos [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.993 247708 DEBUG os_brick.initiator.connectors.lightos [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.994 247708 DEBUG os_brick.initiator.connectors.lightos [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.994 247708 DEBUG os_brick.utils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] <== get_connector_properties: return (86ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:45:39 compute-0 nova_compute[247704]: 2026-01-31 07:45:39.995 247708 DEBUG nova.virt.block_device [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updating existing volume attachment record: a0d524fc-94e0-41e3-aefa-aa210cff89f9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:45:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3713299769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:45:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.003000072s ======
Jan 31 07:45:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Jan 31 07:45:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 398 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 127 op/s
Jan 31 07:45:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:40.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:42 compute-0 nova_compute[247704]: 2026-01-31 07:45:42.074 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:45:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:42.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:45:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 427 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Jan 31 07:45:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:42.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:43 compute-0 ceph-mon[74496]: pgmap v1412: 305 pgs: 305 active+clean; 385 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 164 op/s
Jan 31 07:45:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:45:44 compute-0 nova_compute[247704]: 2026-01-31 07:45:44.281 247708 DEBUG nova.network.neutron [req-2c1a3d70-90c1-4fec-a31d-e7daf6bbe270 req-83a2fd5b-5571-49a7-b3e3-7b32253d49bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updated VIF entry in instance network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:45:44 compute-0 nova_compute[247704]: 2026-01-31 07:45:44.282 247708 DEBUG nova.network.neutron [req-2c1a3d70-90c1-4fec-a31d-e7daf6bbe270 req-83a2fd5b-5571-49a7-b3e3-7b32253d49bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updating instance_info_cache with network_info: [{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:45:44 compute-0 ceph-mon[74496]: pgmap v1413: 305 pgs: 305 active+clean; 398 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 127 op/s
Jan 31 07:45:44 compute-0 ceph-mon[74496]: pgmap v1414: 305 pgs: 305 active+clean; 427 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Jan 31 07:45:44 compute-0 nova_compute[247704]: 2026-01-31 07:45:44.306 247708 DEBUG oslo_concurrency.lockutils [req-2c1a3d70-90c1-4fec-a31d-e7daf6bbe270 req-83a2fd5b-5571-49a7-b3e3-7b32253d49bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:45:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:44.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:44 compute-0 nova_compute[247704]: 2026-01-31 07:45:44.781 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 427 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Jan 31 07:45:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:45:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3117920125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:45:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:45:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3117920125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:45:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:44.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:44 compute-0 nova_compute[247704]: 2026-01-31 07:45:44.983 247708 DEBUG nova.storage.rbd_utils [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] removing snapshot(82e98f35f2b24ad7b3286c3151808633) on rbd image(14852aaa-992b-4b7c-9a9c-dd811ac8617a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:45:45 compute-0 nova_compute[247704]: 2026-01-31 07:45:45.136 247708 DEBUG nova.objects.instance [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'flavor' on Instance uuid a2a97039-8813-4ebf-9ce0-488982bece16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:45 compute-0 nova_compute[247704]: 2026-01-31 07:45:45.180 247708 DEBUG nova.virt.libvirt.driver [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Attempting to attach volume d1aca578-ec32-4cbf-a124-22c55e831394 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 07:45:45 compute-0 nova_compute[247704]: 2026-01-31 07:45:45.184 247708 DEBUG nova.virt.libvirt.guest [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 07:45:45 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:45:45 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-d1aca578-ec32-4cbf-a124-22c55e831394">
Jan 31 07:45:45 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:45:45 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:45:45 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:45:45 compute-0 nova_compute[247704]:   </source>
Jan 31 07:45:45 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 07:45:45 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:45:45 compute-0 nova_compute[247704]:   </auth>
Jan 31 07:45:45 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:45:45 compute-0 nova_compute[247704]:   <serial>d1aca578-ec32-4cbf-a124-22c55e831394</serial>
Jan 31 07:45:45 compute-0 nova_compute[247704]:   <shareable/>
Jan 31 07:45:45 compute-0 nova_compute[247704]: </disk>
Jan 31 07:45:45 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 07:45:45 compute-0 nova_compute[247704]: 2026-01-31 07:45:45.337 247708 DEBUG nova.virt.libvirt.driver [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:45 compute-0 nova_compute[247704]: 2026-01-31 07:45:45.340 247708 DEBUG nova.virt.libvirt.driver [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:45 compute-0 nova_compute[247704]: 2026-01-31 07:45:45.341 247708 DEBUG nova.virt.libvirt.driver [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:45:45 compute-0 nova_compute[247704]: 2026-01-31 07:45:45.341 247708 DEBUG nova.virt.libvirt.driver [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] No VIF found with MAC fa:16:3e:a5:98:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:45:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2852159801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3117920125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:45:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3117920125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:45:45 compute-0 nova_compute[247704]: 2026-01-31 07:45:45.846 247708 DEBUG oslo_concurrency.lockutils [None req-e77f9f1f-e198-48ae-837f-c5e251650c93 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 6.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 31 07:45:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:46.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 31 07:45:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 31 07:45:46 compute-0 nova_compute[247704]: 2026-01-31 07:45:46.613 247708 DEBUG nova.storage.rbd_utils [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] creating snapshot(snap) on rbd image(d0b187ca-45f7-4a66-bf6d-895562fb5aaf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 07:45:46 compute-0 ceph-mon[74496]: pgmap v1415: 305 pgs: 305 active+clean; 427 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Jan 31 07:45:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 508 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.2 MiB/s wr, 170 op/s
Jan 31 07:45:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:46.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:47 compute-0 nova_compute[247704]: 2026-01-31 07:45:47.081 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 31 07:45:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 31 07:45:47 compute-0 ceph-mon[74496]: osdmap e211: 3 total, 3 up, 3 in
Jan 31 07:45:47 compute-0 ceph-mon[74496]: pgmap v1417: 305 pgs: 305 active+clean; 508 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.2 MiB/s wr, 170 op/s
Jan 31 07:45:47 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 31 07:45:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:48 compute-0 ovn_controller[149457]: 2026-01-31T07:45:48Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:f1:03 10.100.0.10
Jan 31 07:45:48 compute-0 ovn_controller[149457]: 2026-01-31T07:45:48Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:f1:03 10.100.0.10
Jan 31 07:45:48 compute-0 ceph-mon[74496]: osdmap e212: 3 total, 3 up, 3 in
Jan 31 07:45:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 525 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 8.4 MiB/s wr, 164 op/s
Jan 31 07:45:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:48.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image d0b187ca-45f7-4a66-bf6d-895562fb5aaf could not be found.
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID d0b187ca-45f7-4a66-bf6d-895562fb5aaf
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image d0b187ca-45f7-4a66-bf6d-895562fb5aaf could not be found.
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.153 247708 ERROR nova.virt.libvirt.driver 
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.472 247708 DEBUG nova.storage.rbd_utils [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] removing snapshot(snap) on rbd image(d0b187ca-45f7-4a66-bf6d-895562fb5aaf) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.620 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:45:49 compute-0 nova_compute[247704]: 2026-01-31 07:45:49.784 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:50 compute-0 ceph-mon[74496]: pgmap v1419: 305 pgs: 305 active+clean; 525 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 8.4 MiB/s wr, 164 op/s
Jan 31 07:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:45:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:50.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 528 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 552 KiB/s rd, 7.0 MiB/s wr, 184 op/s
Jan 31 07:45:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:50.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 31 07:45:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/773784101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 31 07:45:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 31 07:45:52 compute-0 nova_compute[247704]: 2026-01-31 07:45:52.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:52 compute-0 ceph-mon[74496]: pgmap v1420: 305 pgs: 305 active+clean; 528 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 552 KiB/s rd, 7.0 MiB/s wr, 184 op/s
Jan 31 07:45:52 compute-0 ceph-mon[74496]: osdmap e213: 3 total, 3 up, 3 in
Jan 31 07:45:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:52.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:52 compute-0 sudo[284188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:52 compute-0 sudo[284188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:52 compute-0 sudo[284188]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:52 compute-0 sudo[284214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:45:52 compute-0 sudo[284214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:45:52 compute-0 sudo[284214]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 506 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 524 KiB/s rd, 1.8 MiB/s wr, 153 op/s
Jan 31 07:45:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:52.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:53 compute-0 nova_compute[247704]: 2026-01-31 07:45:53.407 247708 WARNING nova.compute.manager [None req-5e939e4f-4499-4ece-8b82-963a92bc19b4 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Image not found during snapshot: nova.exception.ImageNotFound: Image d0b187ca-45f7-4a66-bf6d-895562fb5aaf could not be found.
Jan 31 07:45:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/107301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1776851338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:45:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:45:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:54.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:45:54 compute-0 ceph-mon[74496]: pgmap v1422: 305 pgs: 305 active+clean; 506 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 524 KiB/s rd, 1.8 MiB/s wr, 153 op/s
Jan 31 07:45:54 compute-0 nova_compute[247704]: 2026-01-31 07:45:54.785 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 506 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 1.4 MiB/s wr, 119 op/s
Jan 31 07:45:54 compute-0 podman[284240]: 2026-01-31 07:45:54.886174404 +0000 UTC m=+0.059199060 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:45:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:54.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:55 compute-0 ceph-mon[74496]: pgmap v1423: 305 pgs: 305 active+clean; 506 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 1.4 MiB/s wr, 119 op/s
Jan 31 07:45:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:45:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 31 07:45:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 31 07:45:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 31 07:45:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 484 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 199 KiB/s wr, 88 op/s
Jan 31 07:45:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:56.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.084 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.390291) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557390379, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2514, "num_deletes": 511, "total_data_size": 3835514, "memory_usage": 3899584, "flush_reason": "Manual Compaction"}
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 31 07:45:57 compute-0 ceph-mon[74496]: osdmap e214: 3 total, 3 up, 3 in
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557425528, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3746989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29406, "largest_seqno": 31919, "table_properties": {"data_size": 3736134, "index_size": 6453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26261, "raw_average_key_size": 20, "raw_value_size": 3712069, "raw_average_value_size": 2842, "num_data_blocks": 280, "num_entries": 1306, "num_filter_entries": 1306, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845336, "oldest_key_time": 1769845336, "file_creation_time": 1769845557, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 35256 microseconds, and 5202 cpu microseconds.
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.425561) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3746989 bytes OK
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.425578) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.432180) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.432194) EVENT_LOG_v1 {"time_micros": 1769845557432189, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.432211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3824209, prev total WAL file size 3824209, number of live WAL files 2.
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.432894) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3659KB)], [65(8457KB)]
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557432951, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12407183, "oldest_snapshot_seqno": -1}
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5816 keys, 10384905 bytes, temperature: kUnknown
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557561628, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10384905, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10343776, "index_size": 25467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 149806, "raw_average_key_size": 25, "raw_value_size": 10237003, "raw_average_value_size": 1760, "num_data_blocks": 1025, "num_entries": 5816, "num_filter_entries": 5816, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769845557, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.562173) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10384905 bytes
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.563844) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.2 rd, 80.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.3 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6859, records dropped: 1043 output_compression: NoCompression
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.563880) EVENT_LOG_v1 {"time_micros": 1769845557563867, "job": 36, "event": "compaction_finished", "compaction_time_micros": 129035, "compaction_time_cpu_micros": 17156, "output_level": 6, "num_output_files": 1, "total_output_size": 10384905, "num_input_records": 6859, "num_output_records": 5816, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557564570, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845557565345, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.432788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.565407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.565412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.565414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.565415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:45:57.565416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.653 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.653 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.654 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.654 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.654 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.655 247708 INFO nova.compute.manager [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Terminating instance
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.656 247708 DEBUG nova.compute.manager [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:45:57 compute-0 kernel: tapab718163-b9 (unregistering): left promiscuous mode
Jan 31 07:45:57 compute-0 NetworkManager[49108]: <info>  [1769845557.7242] device (tapab718163-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.728 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:57 compute-0 ovn_controller[149457]: 2026-01-31T07:45:57Z|00185|binding|INFO|Releasing lport ab718163-b906-4e5f-8ca2-f51416a1da02 from this chassis (sb_readonly=0)
Jan 31 07:45:57 compute-0 ovn_controller[149457]: 2026-01-31T07:45:57Z|00186|binding|INFO|Setting lport ab718163-b906-4e5f-8ca2-f51416a1da02 down in Southbound
Jan 31 07:45:57 compute-0 ovn_controller[149457]: 2026-01-31T07:45:57Z|00187|binding|INFO|Removing iface tapab718163-b9 ovn-installed in OVS
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.730 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.742 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:57.742 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:f1:03 10.100.0.10'], port_security=['fa:16:3e:19:f1:03 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '14852aaa-992b-4b7c-9a9c-dd811ac8617a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7130ed58-0d3f-4534-9498-e2d59204c82c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9a5c5f11e8f24f898d16bceb9925aaa0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d514bed-4c59-42dc-a403-a5a9a9cfa795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0203aeab-482d-423f-9cbc-afbc1fe3631d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=ab718163-b906-4e5f-8ca2-f51416a1da02) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:45:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:57.745 160028 INFO neutron.agent.ovn.metadata.agent [-] Port ab718163-b906-4e5f-8ca2-f51416a1da02 in datapath 7130ed58-0d3f-4534-9498-e2d59204c82c unbound from our chassis
Jan 31 07:45:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:57.748 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7130ed58-0d3f-4534-9498-e2d59204c82c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:45:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:57.750 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[91b6a825-b90a-4809-a80f-38e921e75c09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:57.751 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c namespace which is not needed anymore
Jan 31 07:45:57 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Jan 31 07:45:57 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003a.scope: Consumed 14.820s CPU time.
Jan 31 07:45:57 compute-0 systemd-machined[214448]: Machine qemu-26-instance-0000003a terminated.
Jan 31 07:45:57 compute-0 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[283579]: [NOTICE]   (283588) : haproxy version is 2.8.14-c23fe91
Jan 31 07:45:57 compute-0 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[283579]: [NOTICE]   (283588) : path to executable is /usr/sbin/haproxy
Jan 31 07:45:57 compute-0 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[283579]: [WARNING]  (283588) : Exiting Master process...
Jan 31 07:45:57 compute-0 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[283579]: [WARNING]  (283588) : Exiting Master process...
Jan 31 07:45:57 compute-0 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[283579]: [ALERT]    (283588) : Current worker (283590) exited with code 143 (Terminated)
Jan 31 07:45:57 compute-0 neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c[283579]: [WARNING]  (283588) : All workers exited. Exiting... (0)
Jan 31 07:45:57 compute-0 systemd[1]: libpod-f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a.scope: Deactivated successfully.
Jan 31 07:45:57 compute-0 podman[284286]: 2026-01-31 07:45:57.883641778 +0000 UTC m=+0.044698306 container died f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.892 247708 INFO nova.virt.libvirt.driver [-] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Instance destroyed successfully.
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.893 247708 DEBUG nova.objects.instance [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lazy-loading 'resources' on Instance uuid 14852aaa-992b-4b7c-9a9c-dd811ac8617a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.912 247708 DEBUG nova.virt.libvirt.vif [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:45:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-2010764165',display_name='tempest-ImagesOneServerNegativeTestJSON-server-2010764165',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-2010764165',id=58,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:45:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9a5c5f11e8f24f898d16bceb9925aaa0',ramdisk_id='',reservation_id='r-z4ym6ps1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-536491326',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-536491326-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:45:52Z,user_data=None,user_id='4cdbfeb437d54df89a0fb0f6621b8fdc',uuid=14852aaa-992b-4b7c-9a9c-dd811ac8617a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.912 247708 DEBUG nova.network.os_vif_util [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converting VIF {"id": "ab718163-b906-4e5f-8ca2-f51416a1da02", "address": "fa:16:3e:19:f1:03", "network": {"id": "7130ed58-0d3f-4534-9498-e2d59204c82c", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1190972624-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9a5c5f11e8f24f898d16bceb9925aaa0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab718163-b9", "ovs_interfaceid": "ab718163-b906-4e5f-8ca2-f51416a1da02", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.913 247708 DEBUG nova.network.os_vif_util [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:f1:03,bridge_name='br-int',has_traffic_filtering=True,id=ab718163-b906-4e5f-8ca2-f51416a1da02,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab718163-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.913 247708 DEBUG os_vif [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:f1:03,bridge_name='br-int',has_traffic_filtering=True,id=ab718163-b906-4e5f-8ca2-f51416a1da02,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab718163-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a-userdata-shm.mount: Deactivated successfully.
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.915 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab718163-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.917 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-206c494c1a4d2ac7dcf239635a44514c5fac3fcc80640f2d7dabe7d68928218b-merged.mount: Deactivated successfully.
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.919 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:57 compute-0 nova_compute[247704]: 2026-01-31 07:45:57.921 247708 INFO os_vif [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:f1:03,bridge_name='br-int',has_traffic_filtering=True,id=ab718163-b906-4e5f-8ca2-f51416a1da02,network=Network(7130ed58-0d3f-4534-9498-e2d59204c82c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab718163-b9')
Jan 31 07:45:57 compute-0 podman[284286]: 2026-01-31 07:45:57.929224984 +0000 UTC m=+0.090281512 container cleanup f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:45:57 compute-0 systemd[1]: libpod-conmon-f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a.scope: Deactivated successfully.
Jan 31 07:45:57 compute-0 podman[284340]: 2026-01-31 07:45:57.993924409 +0000 UTC m=+0.044445610 container remove f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.000 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8c11e1-162f-4c75-bc65-6ffac09d66cb]: (4, ('Sat Jan 31 07:45:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c (f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a)\nf76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a\nSat Jan 31 07:45:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c (f76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a)\nf76a3fde35afaf9d33205a7582f86d6d0e60dfd67fcd555192405e3bc6c3123a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.002 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3de87c44-5e0a-41e0-821c-154e1821c9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.004 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7130ed58-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:45:58 compute-0 kernel: tap7130ed58-00: left promiscuous mode
Jan 31 07:45:58 compute-0 nova_compute[247704]: 2026-01-31 07:45:58.006 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:58 compute-0 nova_compute[247704]: 2026-01-31 07:45:58.013 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.017 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1f83cb28-6860-4c9a-bf14-754f8ca582aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.032 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[684ba722-854c-4d65-b590-8ef994503466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.034 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1392de-2649-4199-8b38-33921ff076ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.049 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[95f9d452-1d89-4157-83c2-5e0e18ca3cb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576982, 'reachable_time': 26791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284358, 'error': None, 'target': 'ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.053 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7130ed58-0d3f-4534-9498-e2d59204c82c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:45:58.053 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[2cddfe4c-b07f-4441-9650-1c84fc97f88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:45:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d7130ed58\x2d0d3f\x2d4534\x2d9498\x2de2d59204c82c.mount: Deactivated successfully.
Jan 31 07:45:58 compute-0 ceph-mon[74496]: pgmap v1425: 305 pgs: 305 active+clean; 484 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 199 KiB/s wr, 88 op/s
Jan 31 07:45:58 compute-0 nova_compute[247704]: 2026-01-31 07:45:58.439 247708 INFO nova.virt.libvirt.driver [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Deleting instance files /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a_del
Jan 31 07:45:58 compute-0 nova_compute[247704]: 2026-01-31 07:45:58.440 247708 INFO nova.virt.libvirt.driver [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Deletion of /var/lib/nova/instances/14852aaa-992b-4b7c-9a9c-dd811ac8617a_del complete
Jan 31 07:45:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:58 compute-0 nova_compute[247704]: 2026-01-31 07:45:58.588 247708 INFO nova.compute.manager [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Took 0.93 seconds to destroy the instance on the hypervisor.
Jan 31 07:45:58 compute-0 nova_compute[247704]: 2026-01-31 07:45:58.589 247708 DEBUG oslo.service.loopingcall [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:45:58 compute-0 nova_compute[247704]: 2026-01-31 07:45:58.589 247708 DEBUG nova.compute.manager [-] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:45:58 compute-0 nova_compute[247704]: 2026-01-31 07:45:58.589 247708 DEBUG nova.network.neutron [-] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:45:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 484 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 30 KiB/s wr, 51 op/s
Jan 31 07:45:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:45:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:45:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:58.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:45:59 compute-0 nova_compute[247704]: 2026-01-31 07:45:59.791 247708 DEBUG nova.compute.manager [req-f3b15aa3-5f65-4f94-bf9f-306bf10a8401 req-38bd3682-4fb2-4948-a995-bed919ae9722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received event network-vif-unplugged-ab718163-b906-4e5f-8ca2-f51416a1da02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:45:59 compute-0 nova_compute[247704]: 2026-01-31 07:45:59.792 247708 DEBUG oslo_concurrency.lockutils [req-f3b15aa3-5f65-4f94-bf9f-306bf10a8401 req-38bd3682-4fb2-4948-a995-bed919ae9722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:45:59 compute-0 nova_compute[247704]: 2026-01-31 07:45:59.792 247708 DEBUG oslo_concurrency.lockutils [req-f3b15aa3-5f65-4f94-bf9f-306bf10a8401 req-38bd3682-4fb2-4948-a995-bed919ae9722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:45:59 compute-0 nova_compute[247704]: 2026-01-31 07:45:59.792 247708 DEBUG oslo_concurrency.lockutils [req-f3b15aa3-5f65-4f94-bf9f-306bf10a8401 req-38bd3682-4fb2-4948-a995-bed919ae9722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:45:59 compute-0 nova_compute[247704]: 2026-01-31 07:45:59.793 247708 DEBUG nova.compute.manager [req-f3b15aa3-5f65-4f94-bf9f-306bf10a8401 req-38bd3682-4fb2-4948-a995-bed919ae9722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] No waiting events found dispatching network-vif-unplugged-ab718163-b906-4e5f-8ca2-f51416a1da02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:45:59 compute-0 nova_compute[247704]: 2026-01-31 07:45:59.793 247708 DEBUG nova.compute.manager [req-f3b15aa3-5f65-4f94-bf9f-306bf10a8401 req-38bd3682-4fb2-4948-a995-bed919ae9722 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received event network-vif-unplugged-ab718163-b906-4e5f-8ca2-f51416a1da02 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:46:00 compute-0 ceph-mon[74496]: pgmap v1426: 305 pgs: 305 active+clean; 484 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 30 KiB/s wr, 51 op/s
Jan 31 07:46:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:00 compute-0 nova_compute[247704]: 2026-01-31 07:46:00.477 247708 DEBUG nova.network.neutron [-] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:00 compute-0 nova_compute[247704]: 2026-01-31 07:46:00.579 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:00 compute-0 nova_compute[247704]: 2026-01-31 07:46:00.766 247708 INFO nova.compute.manager [-] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Took 2.18 seconds to deallocate network for instance.
Jan 31 07:46:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 429 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 58 KiB/s wr, 147 op/s
Jan 31 07:46:00 compute-0 nova_compute[247704]: 2026-01-31 07:46:00.955 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:00 compute-0 nova_compute[247704]: 2026-01-31 07:46:00.955 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:00 compute-0 nova_compute[247704]: 2026-01-31 07:46:00.959 247708 DEBUG nova.compute.manager [req-72ee9bee-53b9-4723-9a0f-6126a4ab7c97 req-0df9f11c-04b6-481c-9a50-cade2d9bb67d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received event network-vif-deleted-ab718163-b906-4e5f-8ca2-f51416a1da02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:00.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.102 247708 DEBUG nova.scheduler.client.report [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.197 247708 DEBUG nova.scheduler.client.report [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.197 247708 DEBUG nova.compute.provider_tree [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.278 247708 DEBUG nova.scheduler.client.report [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.301 247708 DEBUG nova.scheduler.client.report [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:46:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 31 07:46:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 31 07:46:01 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.426 247708 DEBUG oslo_concurrency.processutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3982097981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:01 compute-0 ceph-mon[74496]: osdmap e215: 3 total, 3 up, 3 in
Jan 31 07:46:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332833272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.903 247708 DEBUG oslo_concurrency.processutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.910 247708 DEBUG nova.compute.provider_tree [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.947 247708 DEBUG nova.scheduler.client.report [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:46:01 compute-0 nova_compute[247704]: 2026-01-31 07:46:01.982 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.046 247708 INFO nova.scheduler.client.report [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Deleted allocations for instance 14852aaa-992b-4b7c-9a9c-dd811ac8617a
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.086 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.135 247708 DEBUG oslo_concurrency.lockutils [None req-d6bf8989-55ee-44b4-89f4-12e10e2f8e17 4cdbfeb437d54df89a0fb0f6621b8fdc 9a5c5f11e8f24f898d16bceb9925aaa0 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.220 247708 DEBUG nova.compute.manager [req-91fb351a-65bf-4e64-9934-28a681d5650b req-a0e2df70-7e54-4a0b-9d0c-582e7f47d143 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received event network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.220 247708 DEBUG oslo_concurrency.lockutils [req-91fb351a-65bf-4e64-9934-28a681d5650b req-a0e2df70-7e54-4a0b-9d0c-582e7f47d143 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.221 247708 DEBUG oslo_concurrency.lockutils [req-91fb351a-65bf-4e64-9934-28a681d5650b req-a0e2df70-7e54-4a0b-9d0c-582e7f47d143 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.221 247708 DEBUG oslo_concurrency.lockutils [req-91fb351a-65bf-4e64-9934-28a681d5650b req-a0e2df70-7e54-4a0b-9d0c-582e7f47d143 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "14852aaa-992b-4b7c-9a9c-dd811ac8617a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.221 247708 DEBUG nova.compute.manager [req-91fb351a-65bf-4e64-9934-28a681d5650b req-a0e2df70-7e54-4a0b-9d0c-582e7f47d143 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] No waiting events found dispatching network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.222 247708 WARNING nova.compute.manager [req-91fb351a-65bf-4e64-9934-28a681d5650b req-a0e2df70-7e54-4a0b-9d0c-582e7f47d143 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Received unexpected event network-vif-plugged-ab718163-b906-4e5f-8ca2-f51416a1da02 for instance with vm_state deleted and task_state None.
Jan 31 07:46:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:02.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:02 compute-0 ceph-mon[74496]: pgmap v1427: 305 pgs: 305 active+clean; 429 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 58 KiB/s wr, 147 op/s
Jan 31 07:46:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1332833272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:46:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 405 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 44 KiB/s wr, 170 op/s
Jan 31 07:46:02 compute-0 nova_compute[247704]: 2026-01-31 07:46:02.918 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:02.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.007 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.007 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.008 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.202 247708 DEBUG oslo_concurrency.lockutils [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.204 247708 DEBUG oslo_concurrency.lockutils [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.292 247708 INFO nova.compute.manager [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Detaching volume d1aca578-ec32-4cbf-a124-22c55e831394
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.602 247708 INFO nova.virt.block_device [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Attempting to driver detach volume d1aca578-ec32-4cbf-a124-22c55e831394 from mountpoint /dev/vdb
Jan 31 07:46:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2722501756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:03 compute-0 ceph-mon[74496]: pgmap v1429: 305 pgs: 305 active+clean; 405 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 44 KiB/s wr, 170 op/s
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.613 247708 DEBUG nova.virt.libvirt.driver [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Attempting to detach device vdb from instance a2a97039-8813-4ebf-9ce0-488982bece16 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.613 247708 DEBUG nova.virt.libvirt.guest [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-d1aca578-ec32-4cbf-a124-22c55e831394">
Jan 31 07:46:03 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   </source>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <serial>d1aca578-ec32-4cbf-a124-22c55e831394</serial>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <shareable/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]: </disk>
Jan 31 07:46:03 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.621 247708 INFO nova.virt.libvirt.driver [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Successfully detached device vdb from instance a2a97039-8813-4ebf-9ce0-488982bece16 from the persistent domain config.
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.621 247708 DEBUG nova.virt.libvirt.driver [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a2a97039-8813-4ebf-9ce0-488982bece16 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.622 247708 DEBUG nova.virt.libvirt.guest [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-d1aca578-ec32-4cbf-a124-22c55e831394">
Jan 31 07:46:03 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   </source>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <serial>d1aca578-ec32-4cbf-a124-22c55e831394</serial>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <shareable/>
Jan 31 07:46:03 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 07:46:03 compute-0 nova_compute[247704]: </disk>
Jan 31 07:46:03 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.821 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769845563.8211608, a2a97039-8813-4ebf-9ce0-488982bece16 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.824 247708 DEBUG nova.virt.libvirt.driver [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a2a97039-8813-4ebf-9ce0-488982bece16 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 07:46:03 compute-0 nova_compute[247704]: 2026-01-31 07:46:03.826 247708 INFO nova.virt.libvirt.driver [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Successfully detached device vdb from instance a2a97039-8813-4ebf-9ce0-488982bece16 from the live domain config.
Jan 31 07:46:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:04.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 405 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 40 KiB/s wr, 150 op/s
Jan 31 07:46:04 compute-0 nova_compute[247704]: 2026-01-31 07:46:04.902 247708 DEBUG nova.objects.instance [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'flavor' on Instance uuid a2a97039-8813-4ebf-9ce0-488982bece16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:04 compute-0 nova_compute[247704]: 2026-01-31 07:46:04.996 247708 DEBUG oslo_concurrency.lockutils [None req-d5f8c2ce-bb48-4e41-b723-09056d16ea03 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:05.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:06 compute-0 ceph-mon[74496]: pgmap v1430: 305 pgs: 305 active+clean; 405 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 40 KiB/s wr, 150 op/s
Jan 31 07:46:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:06.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:06 compute-0 sshd-session[284389]: Invalid user sol from 45.148.10.240 port 55782
Jan 31 07:46:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 339 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 35 KiB/s wr, 149 op/s
Jan 31 07:46:06 compute-0 sshd-session[284389]: Connection closed by invalid user sol 45.148.10.240 port 55782 [preauth]
Jan 31 07:46:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:07.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.089 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.374 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updating instance_info_cache with network_info: [{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.411 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.412 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.413 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.413 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.413 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.413 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.414 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.414 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.436 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.436 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.436 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.437 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.437 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.708 247708 DEBUG nova.compute.manager [req-9c004eb2-c25b-45eb-b5fa-14756cf60b29 req-e8db08e2-cfb8-472d-8aca-6b6a30dd0a89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.709 247708 DEBUG nova.compute.manager [req-9c004eb2-c25b-45eb-b5fa-14756cf60b29 req-e8db08e2-cfb8-472d-8aca-6b6a30dd0a89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing instance network info cache due to event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.709 247708 DEBUG oslo_concurrency.lockutils [req-9c004eb2-c25b-45eb-b5fa-14756cf60b29 req-e8db08e2-cfb8-472d-8aca-6b6a30dd0a89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.710 247708 DEBUG oslo_concurrency.lockutils [req-9c004eb2-c25b-45eb-b5fa-14756cf60b29 req-e8db08e2-cfb8-472d-8aca-6b6a30dd0a89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.710 247708 DEBUG nova.network.neutron [req-9c004eb2-c25b-45eb-b5fa-14756cf60b29 req-e8db08e2-cfb8-472d-8aca-6b6a30dd0a89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:46:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3296735725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.884 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:07 compute-0 nova_compute[247704]: 2026-01-31 07:46:07.920 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.004 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.005 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.010 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.011 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.195 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.197 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4197MB free_disk=20.82366943359375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.198 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.198 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:08 compute-0 ceph-mon[74496]: pgmap v1431: 305 pgs: 305 active+clean; 339 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 35 KiB/s wr, 149 op/s
Jan 31 07:46:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4285144845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3296735725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.365 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance a2a97039-8813-4ebf-9ce0-488982bece16 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.366 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 968b433b-941d-4472-af01-c19f6ff6377b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.366 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.367 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.454 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:08.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:08.669 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.669 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:08.671 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:46:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 311 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 162 op/s
Jan 31 07:46:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2543815248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:08 compute-0 podman[284435]: 2026-01-31 07:46:08.938901519 +0000 UTC m=+0.106088499 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.941 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.947 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:46:08 compute-0 nova_compute[247704]: 2026-01-31 07:46:08.980 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:46:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:09.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:09 compute-0 nova_compute[247704]: 2026-01-31 07:46:09.028 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:46:09 compute-0 nova_compute[247704]: 2026-01-31 07:46:09.028 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:09 compute-0 nova_compute[247704]: 2026-01-31 07:46:09.029 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:09 compute-0 nova_compute[247704]: 2026-01-31 07:46:09.030 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:46:09 compute-0 nova_compute[247704]: 2026-01-31 07:46:09.198 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:09 compute-0 nova_compute[247704]: 2026-01-31 07:46:09.199 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:09 compute-0 nova_compute[247704]: 2026-01-31 07:46:09.242 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2543815248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:10.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:10 compute-0 ceph-mon[74496]: pgmap v1432: 305 pgs: 305 active+clean; 311 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 162 op/s
Jan 31 07:46:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2220353050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/254862565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 217 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 5.1 KiB/s wr, 106 op/s
Jan 31 07:46:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:11.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:11.156 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:11.156 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:11.157 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:11 compute-0 nova_compute[247704]: 2026-01-31 07:46:11.827 247708 DEBUG nova.network.neutron [req-9c004eb2-c25b-45eb-b5fa-14756cf60b29 req-e8db08e2-cfb8-472d-8aca-6b6a30dd0a89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updated VIF entry in instance network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:46:11 compute-0 nova_compute[247704]: 2026-01-31 07:46:11.828 247708 DEBUG nova.network.neutron [req-9c004eb2-c25b-45eb-b5fa-14756cf60b29 req-e8db08e2-cfb8-472d-8aca-6b6a30dd0a89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updating instance_info_cache with network_info: [{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:11 compute-0 nova_compute[247704]: 2026-01-31 07:46:11.860 247708 DEBUG oslo_concurrency.lockutils [req-9c004eb2-c25b-45eb-b5fa-14756cf60b29 req-e8db08e2-cfb8-472d-8aca-6b6a30dd0a89 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:46:11 compute-0 ceph-mon[74496]: pgmap v1433: 305 pgs: 305 active+clean; 217 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 5.1 KiB/s wr, 106 op/s
Jan 31 07:46:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1753264854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:12 compute-0 nova_compute[247704]: 2026-01-31 07:46:12.091 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:12.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 200 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 4.8 KiB/s wr, 105 op/s
Jan 31 07:46:12 compute-0 sudo[284465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:12 compute-0 sudo[284465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:12 compute-0 sudo[284465]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:12 compute-0 sudo[284490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:12 compute-0 nova_compute[247704]: 2026-01-31 07:46:12.890 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845557.8889558, 14852aaa-992b-4b7c-9a9c-dd811ac8617a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:12 compute-0 nova_compute[247704]: 2026-01-31 07:46:12.890 247708 INFO nova.compute.manager [-] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] VM Stopped (Lifecycle Event)
Jan 31 07:46:12 compute-0 sudo[284490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:12 compute-0 sudo[284490]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:12 compute-0 nova_compute[247704]: 2026-01-31 07:46:12.911 247708 DEBUG nova.compute.manager [None req-b1ce3caf-caff-4106-ab07-e03e51740bc0 - - - - - -] [instance: 14852aaa-992b-4b7c-9a9c-dd811ac8617a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:12 compute-0 nova_compute[247704]: 2026-01-31 07:46:12.924 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:13.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3818271398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:14 compute-0 ceph-mon[74496]: pgmap v1434: 305 pgs: 305 active+clean; 200 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 4.8 KiB/s wr, 105 op/s
Jan 31 07:46:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 200 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.8 KiB/s wr, 78 op/s
Jan 31 07:46:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:46:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:15.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:46:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/352187108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:15 compute-0 nova_compute[247704]: 2026-01-31 07:46:15.968 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:15 compute-0 nova_compute[247704]: 2026-01-31 07:46:15.969 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:15 compute-0 nova_compute[247704]: 2026-01-31 07:46:15.969 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:15 compute-0 nova_compute[247704]: 2026-01-31 07:46:15.970 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:15 compute-0 nova_compute[247704]: 2026-01-31 07:46:15.970 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:15 compute-0 nova_compute[247704]: 2026-01-31 07:46:15.973 247708 INFO nova.compute.manager [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Terminating instance
Jan 31 07:46:15 compute-0 nova_compute[247704]: 2026-01-31 07:46:15.975 247708 DEBUG nova.compute.manager [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:46:16 compute-0 kernel: tapa85d82a5-a9 (unregistering): left promiscuous mode
Jan 31 07:46:16 compute-0 NetworkManager[49108]: <info>  [1769845576.2255] device (tapa85d82a5-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.265 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:16 compute-0 ovn_controller[149457]: 2026-01-31T07:46:16Z|00188|binding|INFO|Releasing lport a85d82a5-a910-4674-9460-0efb5cc7e0c4 from this chassis (sb_readonly=0)
Jan 31 07:46:16 compute-0 ovn_controller[149457]: 2026-01-31T07:46:16Z|00189|binding|INFO|Setting lport a85d82a5-a910-4674-9460-0efb5cc7e0c4 down in Southbound
Jan 31 07:46:16 compute-0 ovn_controller[149457]: 2026-01-31T07:46:16Z|00190|binding|INFO|Removing iface tapa85d82a5-a9 ovn-installed in OVS
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.270 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.277 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:16.284 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:98:a0 10.100.0.4'], port_security=['fa:16:3e:a5:98:a0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a2a97039-8813-4ebf-9ce0-488982bece16', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9ddf930129cf4e0395f8c5e70fd9eda8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be9fd1f2-df08-4f20-8be4-2f77d359418d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=530610e1-1646-4c1c-9b6d-a046ad77685d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a85d82a5-a910-4674-9460-0efb5cc7e0c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:46:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:16.286 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a85d82a5-a910-4674-9460-0efb5cc7e0c4 in datapath d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 unbound from our chassis
Jan 31 07:46:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:16.289 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:46:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:16.290 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3fabc2-30df-4d55-b161-daf7ba2d383a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:16.291 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 namespace which is not needed anymore
Jan 31 07:46:16 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000035.scope: Deactivated successfully.
Jan 31 07:46:16 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000035.scope: Consumed 17.413s CPU time.
Jan 31 07:46:16 compute-0 systemd-machined[214448]: Machine qemu-24-instance-00000035 terminated.
Jan 31 07:46:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.397 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.401 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.408 247708 INFO nova.virt.libvirt.driver [-] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Instance destroyed successfully.
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.408 247708 DEBUG nova.objects.instance [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lazy-loading 'resources' on Instance uuid a2a97039-8813-4ebf-9ce0-488982bece16 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.436 247708 DEBUG nova.virt.libvirt.vif [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:44:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-77192972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-77192972',id=53,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/jDytd8Ym5FxcVX16m0xs7mZjnpulUTkIvV8si73F9lzKYe980w/3RbovGTB1QQOm/Ss45P0fTDRJRtI1toiRP5c4zSltvuzCoq9BdQDxvme5rWNAqRGyanoC79C91qw==',key_name='tempest-keypair-1249601107',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9ddf930129cf4e0395f8c5e70fd9eda8',ramdisk_id='',reservation_id='r-t5uat7rb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-600318888-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:44:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4865905ed4e4262a2242d3f323d4314',uuid=a2a97039-8813-4ebf-9ce0-488982bece16,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.437 247708 DEBUG nova.network.os_vif_util [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converting VIF {"id": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "address": "fa:16:3e:a5:98:a0", "network": {"id": "d52bfdcb-a5f3-4946-8fca-4e9f67091fc3", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-952450093-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ddf930129cf4e0395f8c5e70fd9eda8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85d82a5-a9", "ovs_interfaceid": "a85d82a5-a910-4674-9460-0efb5cc7e0c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.437 247708 DEBUG nova.network.os_vif_util [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:98:a0,bridge_name='br-int',has_traffic_filtering=True,id=a85d82a5-a910-4674-9460-0efb5cc7e0c4,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85d82a5-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.438 247708 DEBUG os_vif [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:98:a0,bridge_name='br-int',has_traffic_filtering=True,id=a85d82a5-a910-4674-9460-0efb5cc7e0c4,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85d82a5-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.439 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.440 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa85d82a5-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.441 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.444 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.447 247708 INFO os_vif [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:98:a0,bridge_name='br-int',has_traffic_filtering=True,id=a85d82a5-a910-4674-9460-0efb5cc7e0c4,network=Network(d52bfdcb-a5f3-4946-8fca-4e9f67091fc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85d82a5-a9')
Jan 31 07:46:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:16.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:16 compute-0 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[281796]: [NOTICE]   (281800) : haproxy version is 2.8.14-c23fe91
Jan 31 07:46:16 compute-0 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[281796]: [NOTICE]   (281800) : path to executable is /usr/sbin/haproxy
Jan 31 07:46:16 compute-0 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[281796]: [WARNING]  (281800) : Exiting Master process...
Jan 31 07:46:16 compute-0 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[281796]: [ALERT]    (281800) : Current worker (281802) exited with code 143 (Terminated)
Jan 31 07:46:16 compute-0 neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3[281796]: [WARNING]  (281800) : All workers exited. Exiting... (0)
Jan 31 07:46:16 compute-0 systemd[1]: libpod-5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4.scope: Deactivated successfully.
Jan 31 07:46:16 compute-0 podman[284541]: 2026-01-31 07:46:16.601350914 +0000 UTC m=+0.231179042 container died 5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:46:16 compute-0 ceph-mon[74496]: pgmap v1435: 305 pgs: 305 active+clean; 200 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.8 KiB/s wr, 78 op/s
Jan 31 07:46:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 232 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 31 07:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4-userdata-shm.mount: Deactivated successfully.
Jan 31 07:46:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd181ba3146e654a18c44a427d8ee68ae6f33844d413778e6e6e8e5329afcd84-merged.mount: Deactivated successfully.
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.961 247708 DEBUG nova.compute.manager [req-3bac2f27-411c-4636-b13b-464b12984f05 req-1b36ba3b-befe-41c5-8557-573497f7e27f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-vif-unplugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.962 247708 DEBUG oslo_concurrency.lockutils [req-3bac2f27-411c-4636-b13b-464b12984f05 req-1b36ba3b-befe-41c5-8557-573497f7e27f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.963 247708 DEBUG oslo_concurrency.lockutils [req-3bac2f27-411c-4636-b13b-464b12984f05 req-1b36ba3b-befe-41c5-8557-573497f7e27f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.963 247708 DEBUG oslo_concurrency.lockutils [req-3bac2f27-411c-4636-b13b-464b12984f05 req-1b36ba3b-befe-41c5-8557-573497f7e27f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.963 247708 DEBUG nova.compute.manager [req-3bac2f27-411c-4636-b13b-464b12984f05 req-1b36ba3b-befe-41c5-8557-573497f7e27f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] No waiting events found dispatching network-vif-unplugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:16 compute-0 nova_compute[247704]: 2026-01-31 07:46:16.963 247708 DEBUG nova.compute.manager [req-3bac2f27-411c-4636-b13b-464b12984f05 req-1b36ba3b-befe-41c5-8557-573497f7e27f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-vif-unplugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:46:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:17.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:17 compute-0 nova_compute[247704]: 2026-01-31 07:46:17.094 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:17 compute-0 podman[284541]: 2026-01-31 07:46:17.267231142 +0000 UTC m=+0.897059260 container cleanup 5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:46:17 compute-0 systemd[1]: libpod-conmon-5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4.scope: Deactivated successfully.
Jan 31 07:46:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:17.674 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:18 compute-0 ceph-mon[74496]: pgmap v1436: 305 pgs: 305 active+clean; 232 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 31 07:46:18 compute-0 podman[284601]: 2026-01-31 07:46:18.111295022 +0000 UTC m=+0.814509208 container remove 5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.119 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[64249341-9a3c-4b65-ba4a-d1b39abb2b38]: (4, ('Sat Jan 31 07:46:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 (5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4)\n5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4\nSat Jan 31 07:46:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 (5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4)\n5b6a76fb5fe673b43c6ca1db39af5c5e59ca1bb8a5275ff2b19943c6d261fdc4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.121 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cab4ef92-372b-4687-b4ec-dee9f87fa246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.123 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd52bfdcb-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:18 compute-0 nova_compute[247704]: 2026-01-31 07:46:18.126 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:18 compute-0 kernel: tapd52bfdcb-a0: left promiscuous mode
Jan 31 07:46:18 compute-0 nova_compute[247704]: 2026-01-31 07:46:18.141 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.143 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[116b0f4e-5db7-412b-bc52-a60ee3fb47ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.161 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e05897-c0ca-4d86-8b38-e1ab9827fecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.163 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b5c30d-5d0d-4fa5-9a24-95664d7d4f56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.182 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[82587f01-06d5-4b7c-b317-213921337c5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570750, 'reachable_time': 18421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284618, 'error': None, 'target': 'ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.186 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d52bfdcb-a5f3-4946-8fca-4e9f67091fc3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:46:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:18.186 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[9158c8ad-bcb0-4aa7-9558-c30d54358f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:18 compute-0 systemd[1]: run-netns-ovnmeta\x2dd52bfdcb\x2da5f3\x2d4946\x2d8fca\x2d4e9f67091fc3.mount: Deactivated successfully.
Jan 31 07:46:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 07:46:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:19.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.102 247708 DEBUG nova.compute.manager [req-e537787a-f15d-4215-a738-29602b2d8df1 req-cbde52d3-224c-4d9d-b8be-40bd7725dc30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.103 247708 DEBUG nova.compute.manager [req-e537787a-f15d-4215-a738-29602b2d8df1 req-cbde52d3-224c-4d9d-b8be-40bd7725dc30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing instance network info cache due to event network-changed-9e0062f0-fd6d-4587-89d5-e10017aa4e88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.103 247708 DEBUG oslo_concurrency.lockutils [req-e537787a-f15d-4215-a738-29602b2d8df1 req-cbde52d3-224c-4d9d-b8be-40bd7725dc30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.103 247708 DEBUG oslo_concurrency.lockutils [req-e537787a-f15d-4215-a738-29602b2d8df1 req-cbde52d3-224c-4d9d-b8be-40bd7725dc30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.104 247708 DEBUG nova.network.neutron [req-e537787a-f15d-4215-a738-29602b2d8df1 req-cbde52d3-224c-4d9d-b8be-40bd7725dc30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Refreshing network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.137 247708 DEBUG nova.compute.manager [req-8d1e207a-c2b5-4892-915f-c81e51ad2735 req-52ec16ec-51a6-46e7-bb47-33f53b9f81c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.138 247708 DEBUG oslo_concurrency.lockutils [req-8d1e207a-c2b5-4892-915f-c81e51ad2735 req-52ec16ec-51a6-46e7-bb47-33f53b9f81c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.138 247708 DEBUG oslo_concurrency.lockutils [req-8d1e207a-c2b5-4892-915f-c81e51ad2735 req-52ec16ec-51a6-46e7-bb47-33f53b9f81c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.139 247708 DEBUG oslo_concurrency.lockutils [req-8d1e207a-c2b5-4892-915f-c81e51ad2735 req-52ec16ec-51a6-46e7-bb47-33f53b9f81c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.139 247708 DEBUG nova.compute.manager [req-8d1e207a-c2b5-4892-915f-c81e51ad2735 req-52ec16ec-51a6-46e7-bb47-33f53b9f81c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] No waiting events found dispatching network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:19 compute-0 nova_compute[247704]: 2026-01-31 07:46:19.140 247708 WARNING nova.compute.manager [req-8d1e207a-c2b5-4892-915f-c81e51ad2735 req-52ec16ec-51a6-46e7-bb47-33f53b9f81c4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received unexpected event network-vif-plugged-a85d82a5-a910-4674-9460-0efb5cc7e0c4 for instance with vm_state active and task_state deleting.
Jan 31 07:46:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3675761999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:46:20
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.log', 'vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:46:20 compute-0 ceph-mon[74496]: pgmap v1437: 305 pgs: 305 active+clean; 246 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:46:20 compute-0 nova_compute[247704]: 2026-01-31 07:46:20.462 247708 INFO nova.virt.libvirt.driver [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Deleting instance files /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16_del
Jan 31 07:46:20 compute-0 nova_compute[247704]: 2026-01-31 07:46:20.463 247708 INFO nova.virt.libvirt.driver [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Deletion of /var/lib/nova/instances/a2a97039-8813-4ebf-9ce0-488982bece16_del complete
Jan 31 07:46:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:20.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:20 compute-0 nova_compute[247704]: 2026-01-31 07:46:20.599 247708 INFO nova.compute.manager [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Took 4.62 seconds to destroy the instance on the hypervisor.
Jan 31 07:46:20 compute-0 nova_compute[247704]: 2026-01-31 07:46:20.599 247708 DEBUG oslo.service.loopingcall [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:46:20 compute-0 nova_compute[247704]: 2026-01-31 07:46:20.599 247708 DEBUG nova.compute.manager [-] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:46:20 compute-0 nova_compute[247704]: 2026-01-31 07:46:20.600 247708 DEBUG nova.network.neutron [-] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:46:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 231 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 3.2 MiB/s wr, 107 op/s
Jan 31 07:46:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:21.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:21 compute-0 nova_compute[247704]: 2026-01-31 07:46:21.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3836211079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.125 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.594 247708 DEBUG nova.network.neutron [-] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.632 247708 INFO nova.compute.manager [-] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Took 2.03 seconds to deallocate network for instance.
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.679 247708 DEBUG nova.network.neutron [req-e537787a-f15d-4215-a738-29602b2d8df1 req-cbde52d3-224c-4d9d-b8be-40bd7725dc30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updated VIF entry in instance network info cache for port 9e0062f0-fd6d-4587-89d5-e10017aa4e88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.680 247708 DEBUG nova.network.neutron [req-e537787a-f15d-4215-a738-29602b2d8df1 req-cbde52d3-224c-4d9d-b8be-40bd7725dc30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updating instance_info_cache with network_info: [{"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.711 247708 DEBUG nova.compute.manager [req-744f0061-189e-4ee8-88d2-bfc702aa4b62 req-76b7b7bf-4c42-40d2-89bd-665c92d7e998 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Received event network-vif-deleted-a85d82a5-a910-4674-9460-0efb5cc7e0c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.730 247708 DEBUG oslo_concurrency.lockutils [req-e537787a-f15d-4215-a738-29602b2d8df1 req-cbde52d3-224c-4d9d-b8be-40bd7725dc30 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-968b433b-941d-4472-af01-c19f6ff6377b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.748 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:22 compute-0 nova_compute[247704]: 2026-01-31 07:46:22.750 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 213 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 84 op/s
Jan 31 07:46:22 compute-0 ceph-mon[74496]: pgmap v1438: 305 pgs: 305 active+clean; 231 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 3.2 MiB/s wr, 107 op/s
Jan 31 07:46:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1517547197' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:23.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.086 247708 DEBUG oslo_concurrency.processutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870794508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.572 247708 DEBUG oslo_concurrency.processutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.578 247708 DEBUG nova.compute.provider_tree [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.595 247708 DEBUG nova.scheduler.client.report [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.619 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.697 247708 INFO nova.scheduler.client.report [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Deleted allocations for instance a2a97039-8813-4ebf-9ce0-488982bece16
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.825 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "968b433b-941d-4472-af01-c19f6ff6377b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.825 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.825 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "968b433b-941d-4472-af01-c19f6ff6377b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.826 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.826 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.827 247708 INFO nova.compute.manager [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Terminating instance
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.828 247708 DEBUG nova.compute.manager [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:46:23 compute-0 nova_compute[247704]: 2026-01-31 07:46:23.874 247708 DEBUG oslo_concurrency.lockutils [None req-2c9563a5-cf2f-4c91-ac09-7ec140f31a60 b4865905ed4e4262a2242d3f323d4314 9ddf930129cf4e0395f8c5e70fd9eda8 - - default default] Lock "a2a97039-8813-4ebf-9ce0-488982bece16" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:24 compute-0 ceph-mon[74496]: pgmap v1439: 305 pgs: 305 active+clean; 213 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 3.5 MiB/s wr, 84 op/s
Jan 31 07:46:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1870794508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:24 compute-0 kernel: tap9e0062f0-fd (unregistering): left promiscuous mode
Jan 31 07:46:24 compute-0 NetworkManager[49108]: <info>  [1769845584.4454] device (tap9e0062f0-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:46:24 compute-0 ovn_controller[149457]: 2026-01-31T07:46:24Z|00191|binding|INFO|Releasing lport 9e0062f0-fd6d-4587-89d5-e10017aa4e88 from this chassis (sb_readonly=0)
Jan 31 07:46:24 compute-0 ovn_controller[149457]: 2026-01-31T07:46:24Z|00192|binding|INFO|Setting lport 9e0062f0-fd6d-4587-89d5-e10017aa4e88 down in Southbound
Jan 31 07:46:24 compute-0 ovn_controller[149457]: 2026-01-31T07:46:24Z|00193|binding|INFO|Removing iface tap9e0062f0-fd ovn-installed in OVS
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.486 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.496 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:24.496 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:ef:fc 10.100.0.9'], port_security=['fa:16:3e:65:ef:fc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '968b433b-941d-4472-af01-c19f6ff6377b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7077deb2-06a0-4e93-8714-7555d93557cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07ac56babd144839be6d08563340e6bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2da1fc29-dd97-45b5-a69f-a954c8d9f902', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2803db5-d904-4d93-a43e-b71357b850fe, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=9e0062f0-fd6d-4587-89d5-e10017aa4e88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:46:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:24.498 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 9e0062f0-fd6d-4587-89d5-e10017aa4e88 in datapath 7077deb2-06a0-4e93-8714-7555d93557cf unbound from our chassis
Jan 31 07:46:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:24.500 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7077deb2-06a0-4e93-8714-7555d93557cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:46:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:24.501 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d4891888-f2c8-4d24-b6df-33544b7e5a26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:24.502 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf namespace which is not needed anymore
Jan 31 07:46:24 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000036.scope: Deactivated successfully.
Jan 31 07:46:24 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000036.scope: Consumed 16.575s CPU time.
Jan 31 07:46:24 compute-0 systemd-machined[214448]: Machine qemu-25-instance-00000036 terminated.
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.665 247708 INFO nova.virt.libvirt.driver [-] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Instance destroyed successfully.
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.666 247708 DEBUG nova.objects.instance [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lazy-loading 'resources' on Instance uuid 968b433b-941d-4472-af01-c19f6ff6377b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:46:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 213 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 73 op/s
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.841 247708 DEBUG nova.virt.libvirt.vif [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:44:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1699557048',display_name='tempest-FloatingIPsAssociationTestJSON-server-1699557048',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1699557048',id=54,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:44:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07ac56babd144839be6d08563340e6bd',ramdisk_id='',reservation_id='r-edhbitw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-656325591',owner_user_name='tempest-FloatingIPsAssociationTestJSON-656325591-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:44:47Z,user_data=None,user_id='a6a24b8bc028456fa6de79d3b792e79a',uuid=968b433b-941d-4472-af01-c19f6ff6377b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.841 247708 DEBUG nova.network.os_vif_util [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converting VIF {"id": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "address": "fa:16:3e:65:ef:fc", "network": {"id": "7077deb2-06a0-4e93-8714-7555d93557cf", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-627284528-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07ac56babd144839be6d08563340e6bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e0062f0-fd", "ovs_interfaceid": "9e0062f0-fd6d-4587-89d5-e10017aa4e88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.842 247708 DEBUG nova.network.os_vif_util [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:ef:fc,bridge_name='br-int',has_traffic_filtering=True,id=9e0062f0-fd6d-4587-89d5-e10017aa4e88,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e0062f0-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.843 247708 DEBUG os_vif [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:ef:fc,bridge_name='br-int',has_traffic_filtering=True,id=9e0062f0-fd6d-4587-89d5-e10017aa4e88,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e0062f0-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.845 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.845 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e0062f0-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.848 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.851 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.853 247708 INFO os_vif [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:ef:fc,bridge_name='br-int',has_traffic_filtering=True,id=9e0062f0-fd6d-4587-89d5-e10017aa4e88,network=Network(7077deb2-06a0-4e93-8714-7555d93557cf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e0062f0-fd')
Jan 31 07:46:24 compute-0 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[282285]: [NOTICE]   (282304) : haproxy version is 2.8.14-c23fe91
Jan 31 07:46:24 compute-0 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[282285]: [NOTICE]   (282304) : path to executable is /usr/sbin/haproxy
Jan 31 07:46:24 compute-0 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[282285]: [WARNING]  (282304) : Exiting Master process...
Jan 31 07:46:24 compute-0 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[282285]: [WARNING]  (282304) : Exiting Master process...
Jan 31 07:46:24 compute-0 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[282285]: [ALERT]    (282304) : Current worker (282307) exited with code 143 (Terminated)
Jan 31 07:46:24 compute-0 neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf[282285]: [WARNING]  (282304) : All workers exited. Exiting... (0)
Jan 31 07:46:24 compute-0 systemd[1]: libpod-3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93.scope: Deactivated successfully.
Jan 31 07:46:24 compute-0 podman[284669]: 2026-01-31 07:46:24.928901965 +0000 UTC m=+0.352602636 container died 3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.972 247708 DEBUG nova.compute.manager [req-4b32a25e-91fb-4fee-9089-2ccaa4c0541d req-f8f7f9fa-842a-4a89-a22f-569751d4a109 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-vif-unplugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.973 247708 DEBUG oslo_concurrency.lockutils [req-4b32a25e-91fb-4fee-9089-2ccaa4c0541d req-f8f7f9fa-842a-4a89-a22f-569751d4a109 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "968b433b-941d-4472-af01-c19f6ff6377b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.974 247708 DEBUG oslo_concurrency.lockutils [req-4b32a25e-91fb-4fee-9089-2ccaa4c0541d req-f8f7f9fa-842a-4a89-a22f-569751d4a109 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.974 247708 DEBUG oslo_concurrency.lockutils [req-4b32a25e-91fb-4fee-9089-2ccaa4c0541d req-f8f7f9fa-842a-4a89-a22f-569751d4a109 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.975 247708 DEBUG nova.compute.manager [req-4b32a25e-91fb-4fee-9089-2ccaa4c0541d req-f8f7f9fa-842a-4a89-a22f-569751d4a109 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] No waiting events found dispatching network-vif-unplugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:24 compute-0 nova_compute[247704]: 2026-01-31 07:46:24.975 247708 DEBUG nova.compute.manager [req-4b32a25e-91fb-4fee-9089-2ccaa4c0541d req-f8f7f9fa-842a-4a89-a22f-569751d4a109 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-vif-unplugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93-userdata-shm.mount: Deactivated successfully.
Jan 31 07:46:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:25.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b9cf126752b95c1792a42110c6b013dcfd9521bae3c58c4ba6f038a0124b6da-merged.mount: Deactivated successfully.
Jan 31 07:46:25 compute-0 podman[284669]: 2026-01-31 07:46:25.072678586 +0000 UTC m=+0.496379217 container cleanup 3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:46:25 compute-0 systemd[1]: libpod-conmon-3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93.scope: Deactivated successfully.
Jan 31 07:46:25 compute-0 podman[284714]: 2026-01-31 07:46:25.093737312 +0000 UTC m=+0.126144230 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:46:25 compute-0 podman[284747]: 2026-01-31 07:46:25.474732032 +0000 UTC m=+0.381051462 container remove 3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.480 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[efa4003f-e41a-4107-9750-67c98cfb7f35]: (4, ('Sat Jan 31 07:46:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf (3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93)\n3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93\nSat Jan 31 07:46:25 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf (3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93)\n3851a185a14aa5c789657abe45b28aa9c3bb7810ef4fa987d553065bed4edd93\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.483 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e77454-b957-4422-a4fa-efe5fd1b16b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.484 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7077deb2-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:46:25 compute-0 nova_compute[247704]: 2026-01-31 07:46:25.486 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:25 compute-0 kernel: tap7077deb2-00: left promiscuous mode
Jan 31 07:46:25 compute-0 nova_compute[247704]: 2026-01-31 07:46:25.492 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.495 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1d21ddf3-db64-4208-a86e-ade7e570ff2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.510 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4a2b10-9617-4877-9372-b798b738a14c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.511 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[86dc29e0-94b2-4c5d-b6cf-da26a9ee1d12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d7077deb2\x2d06a0\x2d4e93\x2d8714\x2d7555d93557cf.mount: Deactivated successfully.
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.523 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1ea932-191b-457c-a018-7d4ed0f1f864]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572806, 'reachable_time': 30069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284764, 'error': None, 'target': 'ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.531 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7077deb2-06a0-4e93-8714-7555d93557cf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:46:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:46:25.531 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[6548c528-fa17-4ab5-873c-3227386f7dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:46:25 compute-0 ceph-mon[74496]: pgmap v1440: 305 pgs: 305 active+clean; 213 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 MiB/s wr, 73 op/s
Jan 31 07:46:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/356577153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:26.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:26 compute-0 nova_compute[247704]: 2026-01-31 07:46:26.712 247708 INFO nova.virt.libvirt.driver [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Deleting instance files /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b_del
Jan 31 07:46:26 compute-0 nova_compute[247704]: 2026-01-31 07:46:26.713 247708 INFO nova.virt.libvirt.driver [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Deletion of /var/lib/nova/instances/968b433b-941d-4472-af01-c19f6ff6377b_del complete
Jan 31 07:46:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 213 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 31 07:46:26 compute-0 nova_compute[247704]: 2026-01-31 07:46:26.842 247708 INFO nova.compute.manager [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Took 3.01 seconds to destroy the instance on the hypervisor.
Jan 31 07:46:26 compute-0 nova_compute[247704]: 2026-01-31 07:46:26.842 247708 DEBUG oslo.service.loopingcall [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:46:26 compute-0 nova_compute[247704]: 2026-01-31 07:46:26.843 247708 DEBUG nova.compute.manager [-] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:46:26 compute-0 nova_compute[247704]: 2026-01-31 07:46:26.843 247708 DEBUG nova.network.neutron [-] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:46:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2705593188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:46:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:27.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:27 compute-0 nova_compute[247704]: 2026-01-31 07:46:27.127 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:27 compute-0 nova_compute[247704]: 2026-01-31 07:46:27.206 247708 DEBUG nova.compute.manager [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:27 compute-0 nova_compute[247704]: 2026-01-31 07:46:27.207 247708 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "968b433b-941d-4472-af01-c19f6ff6377b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:27 compute-0 nova_compute[247704]: 2026-01-31 07:46:27.207 247708 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:27 compute-0 nova_compute[247704]: 2026-01-31 07:46:27.208 247708 DEBUG oslo_concurrency.lockutils [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:27 compute-0 nova_compute[247704]: 2026-01-31 07:46:27.208 247708 DEBUG nova.compute.manager [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] No waiting events found dispatching network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:46:27 compute-0 nova_compute[247704]: 2026-01-31 07:46:27.208 247708 WARNING nova.compute.manager [req-24aaf67f-4dd3-4779-9903-64510e15726e req-f5d44d17-03e2-4240-8c25-4f4876a671c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received unexpected event network-vif-plugged-9e0062f0-fd6d-4587-89d5-e10017aa4e88 for instance with vm_state active and task_state deleting.
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.061 247708 DEBUG nova.network.neutron [-] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.092 247708 INFO nova.compute.manager [-] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Took 1.25 seconds to deallocate network for instance.
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.157 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.158 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.199 247708 DEBUG nova.compute.manager [req-7b4cc83a-85a8-4d0d-a30d-b635e6c15f47 req-93411101-1c8e-4b9a-bc64-2f3e1414142b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Received event network-vif-deleted-9e0062f0-fd6d-4587-89d5-e10017aa4e88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:46:28 compute-0 ceph-mon[74496]: pgmap v1441: 305 pgs: 305 active+clean; 213 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.218 247708 DEBUG oslo_concurrency.processutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:46:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:28.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:46:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055626247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.675 247708 DEBUG oslo_concurrency.processutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.682 247708 DEBUG nova.compute.provider_tree [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.723 247708 DEBUG nova.scheduler.client.report [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.757 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.823 247708 INFO nova.scheduler.client.report [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Deleted allocations for instance 968b433b-941d-4472-af01-c19f6ff6377b
Jan 31 07:46:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 195 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Jan 31 07:46:28 compute-0 nova_compute[247704]: 2026-01-31 07:46:28.947 247708 DEBUG oslo_concurrency.lockutils [None req-be0e941f-5eee-4bd7-850f-d4b8122735dc a6a24b8bc028456fa6de79d3b792e79a 07ac56babd144839be6d08563340e6bd - - default default] Lock "968b433b-941d-4472-af01-c19f6ff6377b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:46:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:29.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2055626247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:46:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 21K writes, 83K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 6883 syncs, 3.11 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8599 writes, 33K keys, 8599 commit groups, 1.0 writes per commit group, ingest: 32.92 MB, 0.05 MB/s
                                           Interval WAL: 8598 writes, 3389 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 07:46:29 compute-0 nova_compute[247704]: 2026-01-31 07:46:29.849 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:30 compute-0 ceph-mon[74496]: pgmap v1442: 305 pgs: 305 active+clean; 195 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Jan 31 07:46:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:30.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 134 MiB data, 596 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Jan 31 07:46:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:46:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:31.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:46:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:31 compute-0 nova_compute[247704]: 2026-01-31 07:46:31.408 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845576.4066267, a2a97039-8813-4ebf-9ce0-488982bece16 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:31 compute-0 nova_compute[247704]: 2026-01-31 07:46:31.408 247708 INFO nova.compute.manager [-] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] VM Stopped (Lifecycle Event)
Jan 31 07:46:31 compute-0 nova_compute[247704]: 2026-01-31 07:46:31.433 247708 DEBUG nova.compute.manager [None req-ab388881-bdce-4e9d-ad59-7c5105c420bb - - - - - -] [instance: a2a97039-8813-4ebf-9ce0-488982bece16] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/50917927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:46:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/50917927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:46:32 compute-0 nova_compute[247704]: 2026-01-31 07:46:32.129 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:32 compute-0 nova_compute[247704]: 2026-01-31 07:46:32.463 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:46:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:32.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 134 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 423 KiB/s wr, 192 op/s
Jan 31 07:46:32 compute-0 ceph-mon[74496]: pgmap v1443: 305 pgs: 305 active+clean; 134 MiB data, 596 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Jan 31 07:46:32 compute-0 sudo[284791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:32 compute-0 sudo[284791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:32 compute-0 sudo[284791]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:33.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:33 compute-0 sudo[284816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:33 compute-0 sudo[284816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:33 compute-0 sudo[284816]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:34 compute-0 ceph-mon[74496]: pgmap v1444: 305 pgs: 305 active+clean; 134 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 423 KiB/s wr, 192 op/s
Jan 31 07:46:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 134 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 26 KiB/s wr, 180 op/s
Jan 31 07:46:34 compute-0 nova_compute[247704]: 2026-01-31 07:46:34.890 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019914750255909173 of space, bias 1.0, pg target 0.5974425076772752 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:46:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:35.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:35 compute-0 sudo[284843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:35 compute-0 sudo[284843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:35 compute-0 sudo[284843]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:35 compute-0 sudo[284868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:46:35 compute-0 sudo[284868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:35 compute-0 sudo[284868]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:35 compute-0 sudo[284893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:35 compute-0 sudo[284893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:35 compute-0 sudo[284893]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:35 compute-0 nova_compute[247704]: 2026-01-31 07:46:35.888 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:35 compute-0 sudo[284918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:46:35 compute-0 sudo[284918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:36 compute-0 nova_compute[247704]: 2026-01-31 07:46:36.052 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:36 compute-0 sudo[284918]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:36 compute-0 ceph-mon[74496]: pgmap v1445: 305 pgs: 305 active+clean; 134 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 26 KiB/s wr, 180 op/s
Jan 31 07:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:46:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:46:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:46:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:36.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7a135c87-a5ab-4ecf-8e7b-f27b421582eb does not exist
Jan 31 07:46:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 774c5ec2-3a38-43c9-a776-67bad8e20c45 does not exist
Jan 31 07:46:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 01d793af-99c0-42e7-af59-0c7d33f643dd does not exist
Jan 31 07:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:46:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:46:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:46:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:46:36 compute-0 sudo[284974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:36 compute-0 sudo[284974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 102 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 216 op/s
Jan 31 07:46:36 compute-0 sudo[284974]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:36 compute-0 sudo[284999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:46:36 compute-0 sudo[284999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:36 compute-0 sudo[284999]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:36 compute-0 sudo[285024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:36 compute-0 sudo[285024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:36 compute-0 sudo[285024]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:37 compute-0 sudo[285049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:46:37 compute-0 sudo[285049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:37.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:46:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/713672979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:46:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:46:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/713672979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:46:37 compute-0 nova_compute[247704]: 2026-01-31 07:46:37.131 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:46:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:46:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:46:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:46:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:46:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/713672979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:46:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/713672979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:46:37 compute-0 podman[285113]: 2026-01-31 07:46:37.331591851 +0000 UTC m=+0.024861310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:46:37 compute-0 podman[285113]: 2026-01-31 07:46:37.432311368 +0000 UTC m=+0.125580827 container create f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:46:37 compute-0 systemd[1]: Started libpod-conmon-f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949.scope.
Jan 31 07:46:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:46:37 compute-0 podman[285113]: 2026-01-31 07:46:37.540452856 +0000 UTC m=+0.233722365 container init f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamport, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:46:37 compute-0 podman[285113]: 2026-01-31 07:46:37.549371245 +0000 UTC m=+0.242640704 container start f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:46:37 compute-0 epic_lamport[285130]: 167 167
Jan 31 07:46:37 compute-0 systemd[1]: libpod-f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949.scope: Deactivated successfully.
Jan 31 07:46:37 compute-0 podman[285113]: 2026-01-31 07:46:37.564867064 +0000 UTC m=+0.258136513 container attach f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamport, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:46:37 compute-0 podman[285113]: 2026-01-31 07:46:37.566648658 +0000 UTC m=+0.259918077 container died f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamport, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c2b17d5ebee45fbac08bd337202f0ceae1947e5fc9e03393b3cc70de0b31c0-merged.mount: Deactivated successfully.
Jan 31 07:46:37 compute-0 podman[285113]: 2026-01-31 07:46:37.660624069 +0000 UTC m=+0.353893488 container remove f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:46:37 compute-0 systemd[1]: libpod-conmon-f56ee2252477cca3a95f2256f01ee9feeb30f51b9a62fd0666cf96cec7ce2949.scope: Deactivated successfully.
Jan 31 07:46:37 compute-0 podman[285154]: 2026-01-31 07:46:37.822274908 +0000 UTC m=+0.059656542 container create 6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:46:37 compute-0 systemd[1]: Started libpod-conmon-6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74.scope.
Jan 31 07:46:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affec050254d285f2287713d307b4aafc5bd2528c96f1bde0fafa04a0585d316/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affec050254d285f2287713d307b4aafc5bd2528c96f1bde0fafa04a0585d316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affec050254d285f2287713d307b4aafc5bd2528c96f1bde0fafa04a0585d316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affec050254d285f2287713d307b4aafc5bd2528c96f1bde0fafa04a0585d316/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/affec050254d285f2287713d307b4aafc5bd2528c96f1bde0fafa04a0585d316/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:37 compute-0 podman[285154]: 2026-01-31 07:46:37.792832277 +0000 UTC m=+0.030213901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:46:37 compute-0 podman[285154]: 2026-01-31 07:46:37.908370636 +0000 UTC m=+0.145752250 container init 6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:46:37 compute-0 podman[285154]: 2026-01-31 07:46:37.916002363 +0000 UTC m=+0.153383957 container start 6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 07:46:37 compute-0 podman[285154]: 2026-01-31 07:46:37.9277053 +0000 UTC m=+0.165086884 container attach 6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:46:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:38.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:38 compute-0 ceph-mon[74496]: pgmap v1446: 305 pgs: 305 active+clean; 102 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 216 op/s
Jan 31 07:46:38 compute-0 optimistic_haslett[285171]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:46:38 compute-0 optimistic_haslett[285171]: --> relative data size: 1.0
Jan 31 07:46:38 compute-0 optimistic_haslett[285171]: --> All data devices are unavailable
Jan 31 07:46:38 compute-0 systemd[1]: libpod-6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74.scope: Deactivated successfully.
Jan 31 07:46:38 compute-0 podman[285154]: 2026-01-31 07:46:38.767798882 +0000 UTC m=+1.005180496 container died 6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:46:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 88 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 218 op/s
Jan 31 07:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-affec050254d285f2287713d307b4aafc5bd2528c96f1bde0fafa04a0585d316-merged.mount: Deactivated successfully.
Jan 31 07:46:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:39.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:39 compute-0 podman[285154]: 2026-01-31 07:46:39.26141353 +0000 UTC m=+1.498795144 container remove 6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haslett, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:46:39 compute-0 systemd[1]: libpod-conmon-6df7fa4e791fe24e4e8e530c241c9e334eef6e57223d57eb8679cf048ec11f74.scope: Deactivated successfully.
Jan 31 07:46:39 compute-0 sudo[285049]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:39 compute-0 sudo[285207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:39 compute-0 sudo[285207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:39 compute-0 sudo[285207]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:39 compute-0 podman[285200]: 2026-01-31 07:46:39.425548781 +0000 UTC m=+0.118854412 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:46:39 compute-0 sudo[285252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:46:39 compute-0 sudo[285252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:39 compute-0 sudo[285252]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:39 compute-0 sudo[285278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:39 compute-0 sudo[285278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:39 compute-0 sudo[285278]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:39 compute-0 sudo[285303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:46:39 compute-0 sudo[285303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:39 compute-0 nova_compute[247704]: 2026-01-31 07:46:39.663 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845584.6619148, 968b433b-941d-4472-af01-c19f6ff6377b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:46:39 compute-0 nova_compute[247704]: 2026-01-31 07:46:39.664 247708 INFO nova.compute.manager [-] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] VM Stopped (Lifecycle Event)
Jan 31 07:46:39 compute-0 nova_compute[247704]: 2026-01-31 07:46:39.822 247708 DEBUG nova.compute.manager [None req-d033de6b-40fa-4272-88f9-db4a26c14554 - - - - - -] [instance: 968b433b-941d-4472-af01-c19f6ff6377b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:46:39 compute-0 nova_compute[247704]: 2026-01-31 07:46:39.893 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:40 compute-0 podman[285369]: 2026-01-31 07:46:39.920512591 +0000 UTC m=+0.024281985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:46:40 compute-0 podman[285369]: 2026-01-31 07:46:40.049481829 +0000 UTC m=+0.153251213 container create 1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:46:40 compute-0 systemd[1]: Started libpod-conmon-1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f.scope.
Jan 31 07:46:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:46:40 compute-0 podman[285369]: 2026-01-31 07:46:40.322037334 +0000 UTC m=+0.425806698 container init 1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:46:40 compute-0 podman[285369]: 2026-01-31 07:46:40.33209529 +0000 UTC m=+0.435864634 container start 1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 07:46:40 compute-0 quirky_dirac[285385]: 167 167
Jan 31 07:46:40 compute-0 systemd[1]: libpod-1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f.scope: Deactivated successfully.
Jan 31 07:46:40 compute-0 podman[285369]: 2026-01-31 07:46:40.41663343 +0000 UTC m=+0.520402774 container attach 1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:46:40 compute-0 podman[285369]: 2026-01-31 07:46:40.418290101 +0000 UTC m=+0.522059445 container died 1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:46:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:40.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:40 compute-0 ceph-mon[74496]: pgmap v1447: 305 pgs: 305 active+clean; 88 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 218 op/s
Jan 31 07:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d01a8b110761c9ccede836559c9fed7b2772b39fa5fc4479774685cb71961a05-merged.mount: Deactivated successfully.
Jan 31 07:46:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 193 op/s
Jan 31 07:46:40 compute-0 podman[285369]: 2026-01-31 07:46:40.925892772 +0000 UTC m=+1.029662136 container remove 1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dirac, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:46:41 compute-0 systemd[1]: libpod-conmon-1e502c0da28a85028ef23fa94cf87132d3c5f316638de965800e40cec95ad92f.scope: Deactivated successfully.
Jan 31 07:46:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:41.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:41 compute-0 podman[285409]: 2026-01-31 07:46:41.088852513 +0000 UTC m=+0.045791683 container create ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:46:41 compute-0 systemd[1]: Started libpod-conmon-ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d.scope.
Jan 31 07:46:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332296d0bfe479e921167de56236c07fc93258eaf8a484349de2523d4dbd4f8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332296d0bfe479e921167de56236c07fc93258eaf8a484349de2523d4dbd4f8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332296d0bfe479e921167de56236c07fc93258eaf8a484349de2523d4dbd4f8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332296d0bfe479e921167de56236c07fc93258eaf8a484349de2523d4dbd4f8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:41 compute-0 podman[285409]: 2026-01-31 07:46:41.067367736 +0000 UTC m=+0.024306936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:46:41 compute-0 podman[285409]: 2026-01-31 07:46:41.183581482 +0000 UTC m=+0.140520662 container init ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:46:41 compute-0 podman[285409]: 2026-01-31 07:46:41.188946884 +0000 UTC m=+0.145886074 container start ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:46:41 compute-0 podman[285409]: 2026-01-31 07:46:41.197190886 +0000 UTC m=+0.154130116 container attach ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:46:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 07:46:41 compute-0 objective_spence[285426]: {
Jan 31 07:46:41 compute-0 objective_spence[285426]:     "0": [
Jan 31 07:46:41 compute-0 objective_spence[285426]:         {
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "devices": [
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "/dev/loop3"
Jan 31 07:46:41 compute-0 objective_spence[285426]:             ],
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "lv_name": "ceph_lv0",
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "lv_size": "7511998464",
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "name": "ceph_lv0",
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "tags": {
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.cluster_name": "ceph",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.crush_device_class": "",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.encrypted": "0",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.osd_id": "0",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.type": "block",
Jan 31 07:46:41 compute-0 objective_spence[285426]:                 "ceph.vdo": "0"
Jan 31 07:46:41 compute-0 objective_spence[285426]:             },
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "type": "block",
Jan 31 07:46:41 compute-0 objective_spence[285426]:             "vg_name": "ceph_vg0"
Jan 31 07:46:41 compute-0 objective_spence[285426]:         }
Jan 31 07:46:41 compute-0 objective_spence[285426]:     ]
Jan 31 07:46:41 compute-0 objective_spence[285426]: }
Jan 31 07:46:41 compute-0 systemd[1]: libpod-ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d.scope: Deactivated successfully.
Jan 31 07:46:41 compute-0 podman[285409]: 2026-01-31 07:46:41.905142633 +0000 UTC m=+0.862081843 container died ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:46:42 compute-0 nova_compute[247704]: 2026-01-31 07:46:42.134 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1379675515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:46:42 compute-0 ceph-mon[74496]: pgmap v1448: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 193 op/s
Jan 31 07:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-332296d0bfe479e921167de56236c07fc93258eaf8a484349de2523d4dbd4f8a-merged.mount: Deactivated successfully.
Jan 31 07:46:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:42 compute-0 podman[285409]: 2026-01-31 07:46:42.761298489 +0000 UTC m=+1.718237699 container remove ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_spence, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:46:42 compute-0 sudo[285303]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 148 KiB/s wr, 100 op/s
Jan 31 07:46:42 compute-0 sudo[285448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:42 compute-0 sudo[285448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:42 compute-0 sudo[285448]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:42 compute-0 systemd[1]: libpod-conmon-ba37840c154bece7b00a73fe0c135cb6784e9569322eab0a2902dda42946167d.scope: Deactivated successfully.
Jan 31 07:46:42 compute-0 sudo[285473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:46:42 compute-0 sudo[285473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:42 compute-0 sudo[285473]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:42 compute-0 sudo[285498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:42 compute-0 sudo[285498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:42 compute-0 sudo[285498]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:42 compute-0 sudo[285523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:46:42 compute-0 sudo[285523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:43.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:43 compute-0 podman[285587]: 2026-01-31 07:46:43.347387502 +0000 UTC m=+0.106975052 container create 9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:46:43 compute-0 podman[285587]: 2026-01-31 07:46:43.264145653 +0000 UTC m=+0.023733223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:46:43 compute-0 systemd[1]: Started libpod-conmon-9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78.scope.
Jan 31 07:46:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:46:43 compute-0 podman[285587]: 2026-01-31 07:46:43.534878084 +0000 UTC m=+0.294465654 container init 9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wescoff, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:46:43 compute-0 podman[285587]: 2026-01-31 07:46:43.54292003 +0000 UTC m=+0.302507620 container start 9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:46:43 compute-0 affectionate_wescoff[285604]: 167 167
Jan 31 07:46:43 compute-0 systemd[1]: libpod-9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78.scope: Deactivated successfully.
Jan 31 07:46:43 compute-0 podman[285587]: 2026-01-31 07:46:43.584364525 +0000 UTC m=+0.343952105 container attach 9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:46:43 compute-0 podman[285587]: 2026-01-31 07:46:43.584758284 +0000 UTC m=+0.344345834 container died 9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wescoff, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:46:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e8ffd9345fb316acc956259a08191eec50bb63c60c19f7427cb501863fd929d-merged.mount: Deactivated successfully.
Jan 31 07:46:44 compute-0 podman[285587]: 2026-01-31 07:46:44.272456806 +0000 UTC m=+1.032044366 container remove 9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:46:44 compute-0 systemd[1]: libpod-conmon-9709bb314aa3124101128a93fce590c5047ad40edc720f82acd337116e68ad78.scope: Deactivated successfully.
Jan 31 07:46:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:44.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:44 compute-0 podman[285628]: 2026-01-31 07:46:44.527180994 +0000 UTC m=+0.112890325 container create 49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:46:44 compute-0 podman[285628]: 2026-01-31 07:46:44.440262366 +0000 UTC m=+0.025971777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:46:44 compute-0 systemd[1]: Started libpod-conmon-49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7.scope.
Jan 31 07:46:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f009137b825ad696e72107ed013a3bf65e0507932d83f6d79ed4099f4c11cd0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f009137b825ad696e72107ed013a3bf65e0507932d83f6d79ed4099f4c11cd0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f009137b825ad696e72107ed013a3bf65e0507932d83f6d79ed4099f4c11cd0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f009137b825ad696e72107ed013a3bf65e0507932d83f6d79ed4099f4c11cd0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:46:44 compute-0 ceph-mon[74496]: pgmap v1449: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 148 KiB/s wr, 100 op/s
Jan 31 07:46:44 compute-0 podman[285628]: 2026-01-31 07:46:44.707681404 +0000 UTC m=+0.293390755 container init 49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_sutherland, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:46:44 compute-0 podman[285628]: 2026-01-31 07:46:44.716297595 +0000 UTC m=+0.302006936 container start 49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_sutherland, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:46:44 compute-0 podman[285628]: 2026-01-31 07:46:44.769358364 +0000 UTC m=+0.355067755 container attach 49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_sutherland, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:46:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 148 KiB/s wr, 54 op/s
Jan 31 07:46:44 compute-0 nova_compute[247704]: 2026-01-31 07:46:44.897 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:46:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861024559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:46:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:46:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861024559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:46:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:45.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]: {
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]:         "osd_id": 0,
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]:         "type": "bluestore"
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]:     }
Jan 31 07:46:45 compute-0 dreamy_sutherland[285644]: }
Jan 31 07:46:45 compute-0 podman[285628]: 2026-01-31 07:46:45.45642653 +0000 UTC m=+1.042135881 container died 49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:46:45 compute-0 systemd[1]: libpod-49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7.scope: Deactivated successfully.
Jan 31 07:46:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f009137b825ad696e72107ed013a3bf65e0507932d83f6d79ed4099f4c11cd0e-merged.mount: Deactivated successfully.
Jan 31 07:46:45 compute-0 podman[285628]: 2026-01-31 07:46:45.703035499 +0000 UTC m=+1.288744830 container remove 49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:46:45 compute-0 systemd[1]: libpod-conmon-49fc504cc5afb9062bf922fa41a07bae49916143305bcdbe694905949143bde7.scope: Deactivated successfully.
Jan 31 07:46:45 compute-0 ceph-mon[74496]: pgmap v1450: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 148 KiB/s wr, 54 op/s
Jan 31 07:46:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/861024559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:46:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/861024559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:46:45 compute-0 sudo[285523]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:46:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:46:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d212c2e6-6ed6-458d-9ef7-6f2e8966f1ea does not exist
Jan 31 07:46:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d3529b4b-c8ff-43fe-99df-f4eda9853869 does not exist
Jan 31 07:46:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev db9ae492-26d7-4b93-bf71-877cc9c70696 does not exist
Jan 31 07:46:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:46.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:46 compute-0 sudo[285680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:46 compute-0 sudo[285680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:46 compute-0 sudo[285680]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:46 compute-0 sudo[285705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:46:46 compute-0 sudo[285705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:46 compute-0 sudo[285705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 108 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 585 KiB/s rd, 1.9 MiB/s wr, 73 op/s
Jan 31 07:46:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:47.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:47 compute-0 nova_compute[247704]: 2026-01-31 07:46:47.137 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:46:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:48.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:48 compute-0 ceph-mon[74496]: pgmap v1451: 305 pgs: 305 active+clean; 108 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 585 KiB/s rd, 1.9 MiB/s wr, 73 op/s
Jan 31 07:46:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 109 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 31 07:46:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:49.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:49 compute-0 nova_compute[247704]: 2026-01-31 07:46:49.899 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:46:50 compute-0 ceph-mon[74496]: pgmap v1452: 305 pgs: 305 active+clean; 109 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 31 07:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:46:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:50.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 31 07:46:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:52 compute-0 nova_compute[247704]: 2026-01-31 07:46:52.139 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:46:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:52.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:46:52 compute-0 ceph-mon[74496]: pgmap v1453: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 31 07:46:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 07:46:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:46:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:46:53 compute-0 sudo[285733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:53 compute-0 sudo[285733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:53 compute-0 sudo[285733]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:53 compute-0 sudo[285758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:46:53 compute-0 sudo[285758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:46:53 compute-0 sudo[285758]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:53 compute-0 ceph-mon[74496]: pgmap v1454: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 07:46:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 56 op/s
Jan 31 07:46:54 compute-0 nova_compute[247704]: 2026-01-31 07:46:54.942 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:55.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:55 compute-0 podman[285785]: 2026-01-31 07:46:55.895172221 +0000 UTC m=+0.060915582 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 07:46:55 compute-0 ceph-mon[74496]: pgmap v1455: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 292 KiB/s rd, 2.0 MiB/s wr, 56 op/s
Jan 31 07:46:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:46:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:56.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Jan 31 07:46:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:57 compute-0 nova_compute[247704]: 2026-01-31 07:46:57.142 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:46:58 compute-0 ceph-mon[74496]: pgmap v1456: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Jan 31 07:46:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 211 KiB/s wr, 40 op/s
Jan 31 07:46:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:46:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:46:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:46:59 compute-0 nova_compute[247704]: 2026-01-31 07:46:59.984 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 31 07:47:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 31 07:47:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 31 07:47:00 compute-0 ceph-mon[74496]: pgmap v1457: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 211 KiB/s wr, 40 op/s
Jan 31 07:47:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:00.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 36 KiB/s wr, 13 op/s
Jan 31 07:47:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:01.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 31 07:47:01 compute-0 ceph-mon[74496]: osdmap e216: 3 total, 3 up, 3 in
Jan 31 07:47:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 31 07:47:01 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 31 07:47:01 compute-0 nova_compute[247704]: 2026-01-31 07:47:01.596 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:02 compute-0 nova_compute[247704]: 2026-01-31 07:47:02.143 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:02 compute-0 ceph-mon[74496]: pgmap v1459: 305 pgs: 305 active+clean; 121 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 36 KiB/s wr, 13 op/s
Jan 31 07:47:02 compute-0 ceph-mon[74496]: osdmap e217: 3 total, 3 up, 3 in
Jan 31 07:47:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:47:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:02.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:47:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 145 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 1.7 MiB/s wr, 29 op/s
Jan 31 07:47:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:03.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 31 07:47:03 compute-0 nova_compute[247704]: 2026-01-31 07:47:03.566 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:03 compute-0 nova_compute[247704]: 2026-01-31 07:47:03.566 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:47:03 compute-0 nova_compute[247704]: 2026-01-31 07:47:03.566 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:47:03 compute-0 nova_compute[247704]: 2026-01-31 07:47:03.568 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "7820d682-a83a-42ad-93c8-b1cc08346f3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:03 compute-0 nova_compute[247704]: 2026-01-31 07:47:03.568 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 31 07:47:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 31 07:47:03 compute-0 ceph-mon[74496]: pgmap v1461: 305 pgs: 305 active+clean; 145 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 1.7 MiB/s wr, 29 op/s
Jan 31 07:47:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/677423198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:03 compute-0 nova_compute[247704]: 2026-01-31 07:47:03.855 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:47:03 compute-0 nova_compute[247704]: 2026-01-31 07:47:03.896 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:47:04 compute-0 nova_compute[247704]: 2026-01-31 07:47:04.441 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:04 compute-0 nova_compute[247704]: 2026-01-31 07:47:04.441 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:04 compute-0 nova_compute[247704]: 2026-01-31 07:47:04.452 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:47:04 compute-0 nova_compute[247704]: 2026-01-31 07:47:04.453 247708 INFO nova.compute.claims [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:47:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:04.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 31 07:47:04 compute-0 ceph-mon[74496]: osdmap e218: 3 total, 3 up, 3 in
Jan 31 07:47:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/446125117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 31 07:47:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 31 07:47:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 145 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 41 op/s
Jan 31 07:47:04 compute-0 nova_compute[247704]: 2026-01-31 07:47:04.972 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.029 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:05.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:47:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829126141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.407 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.414 247708 DEBUG nova.compute.provider_tree [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.494 247708 DEBUG nova.scheduler.client.report [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.654 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.655 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.835 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.836 247708 DEBUG nova.network.neutron [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.899 247708 INFO nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:47:05 compute-0 ceph-mon[74496]: osdmap e219: 3 total, 3 up, 3 in
Jan 31 07:47:05 compute-0 ceph-mon[74496]: pgmap v1464: 305 pgs: 305 active+clean; 145 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 41 op/s
Jan 31 07:47:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2829126141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:05 compute-0 nova_compute[247704]: 2026-01-31 07:47:05.959 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.226 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.228 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.228 247708 INFO nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Creating image(s)
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.267 247708 DEBUG nova.storage.rbd_utils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] rbd image 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.321 247708 DEBUG nova.storage.rbd_utils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] rbd image 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.363 247708 DEBUG nova.storage.rbd_utils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] rbd image 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.368 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.402 247708 DEBUG nova.policy [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '392b30fc4156484392c30737a737bcac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0324e00066c44c39612c68322b97366', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.449 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.450 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.450 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.450 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.481 247708 DEBUG nova.storage.rbd_utils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] rbd image 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.486 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:06.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.801 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.802 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.802 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.802 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:47:06 compute-0 nova_compute[247704]: 2026-01-31 07:47:06.803 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 200 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 186 op/s
Jan 31 07:47:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3416463148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.012 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:07.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.130 247708 DEBUG nova.storage.rbd_utils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] resizing rbd image 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.173 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.256 247708 DEBUG nova.objects.instance [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lazy-loading 'migration_context' on Instance uuid 7820d682-a83a-42ad-93c8-b1cc08346f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:47:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:47:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1325665326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.336 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.489 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.491 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4658MB free_disk=20.942768096923828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.491 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.491 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.493 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.494 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Ensure instance console log exists: /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.494 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.494 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.494 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.865 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 7820d682-a83a-42ad-93c8-b1cc08346f3e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.865 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.866 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:47:07 compute-0 nova_compute[247704]: 2026-01-31 07:47:07.914 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:08 compute-0 ceph-mon[74496]: pgmap v1465: 305 pgs: 305 active+clean; 200 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 186 op/s
Jan 31 07:47:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1325665326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/542345774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:08 compute-0 nova_compute[247704]: 2026-01-31 07:47:08.359 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:08 compute-0 nova_compute[247704]: 2026-01-31 07:47:08.365 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:47:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:08 compute-0 nova_compute[247704]: 2026-01-31 07:47:08.562 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:47:08 compute-0 nova_compute[247704]: 2026-01-31 07:47:08.690 247708 DEBUG nova.network.neutron [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Successfully created port: 08928e39-ac5d-4e86-973c-11d34ee7d4d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:47:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 209 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.7 MiB/s wr, 176 op/s
Jan 31 07:47:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 31 07:47:09 compute-0 nova_compute[247704]: 2026-01-31 07:47:09.133 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:47:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 31 07:47:09 compute-0 nova_compute[247704]: 2026-01-31 07:47:09.134 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 31 07:47:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/991230648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:09 compute-0 podman[286048]: 2026-01-31 07:47:09.939317657 +0000 UTC m=+0.107492214 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:47:10 compute-0 nova_compute[247704]: 2026-01-31 07:47:10.031 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:10 compute-0 nova_compute[247704]: 2026-01-31 07:47:10.135 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:10 compute-0 nova_compute[247704]: 2026-01-31 07:47:10.136 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:10 compute-0 nova_compute[247704]: 2026-01-31 07:47:10.136 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:47:10 compute-0 nova_compute[247704]: 2026-01-31 07:47:10.137 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:47:10 compute-0 ceph-mon[74496]: pgmap v1466: 305 pgs: 305 active+clean; 209 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.7 MiB/s wr, 176 op/s
Jan 31 07:47:10 compute-0 ceph-mon[74496]: osdmap e220: 3 total, 3 up, 3 in
Jan 31 07:47:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 246 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.5 MiB/s wr, 221 op/s
Jan 31 07:47:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:11.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:11.157 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:11.158 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:11.158 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 31 07:47:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 31 07:47:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 31 07:47:11 compute-0 nova_compute[247704]: 2026-01-31 07:47:11.415 247708 DEBUG nova.network.neutron [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Successfully updated port: 08928e39-ac5d-4e86-973c-11d34ee7d4d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:47:11 compute-0 nova_compute[247704]: 2026-01-31 07:47:11.567 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:47:11 compute-0 nova_compute[247704]: 2026-01-31 07:47:11.568 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquired lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:47:11 compute-0 nova_compute[247704]: 2026-01-31 07:47:11.568 247708 DEBUG nova.network.neutron [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:47:12 compute-0 nova_compute[247704]: 2026-01-31 07:47:12.178 247708 DEBUG nova.compute.manager [req-9bbf53a1-2534-4c15-a991-4adf67a02329 req-0e1413a6-3dc0-4dcd-833d-5726696e6d19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-changed-08928e39-ac5d-4e86-973c-11d34ee7d4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:47:12 compute-0 nova_compute[247704]: 2026-01-31 07:47:12.178 247708 DEBUG nova.compute.manager [req-9bbf53a1-2534-4c15-a991-4adf67a02329 req-0e1413a6-3dc0-4dcd-833d-5726696e6d19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Refreshing instance network info cache due to event network-changed-08928e39-ac5d-4e86-973c-11d34ee7d4d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:47:12 compute-0 nova_compute[247704]: 2026-01-31 07:47:12.178 247708 DEBUG oslo_concurrency.lockutils [req-9bbf53a1-2534-4c15-a991-4adf67a02329 req-0e1413a6-3dc0-4dcd-833d-5726696e6d19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:47:12 compute-0 nova_compute[247704]: 2026-01-31 07:47:12.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:12 compute-0 ceph-mon[74496]: pgmap v1468: 305 pgs: 305 active+clean; 246 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.5 MiB/s wr, 221 op/s
Jan 31 07:47:12 compute-0 ceph-mon[74496]: osdmap e221: 3 total, 3 up, 3 in
Jan 31 07:47:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.8 MiB/s wr, 201 op/s
Jan 31 07:47:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:13 compute-0 nova_compute[247704]: 2026-01-31 07:47:13.245 247708 DEBUG nova.network.neutron [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:47:13 compute-0 sudo[286075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:13 compute-0 sudo[286075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:13 compute-0 sudo[286075]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:13 compute-0 sudo[286100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:13 compute-0 sudo[286100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:13 compute-0 sudo[286100]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:13 compute-0 ceph-mon[74496]: pgmap v1470: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.8 MiB/s wr, 201 op/s
Jan 31 07:47:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:14.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 31 07:47:14 compute-0 nova_compute[247704]: 2026-01-31 07:47:14.939 247708 DEBUG nova.network.neutron [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updating instance_info_cache with network_info: [{"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.034 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Releasing lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.035 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Instance network_info: |[{"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.035 247708 DEBUG oslo_concurrency.lockutils [req-9bbf53a1-2534-4c15-a991-4adf67a02329 req-0e1413a6-3dc0-4dcd-833d-5726696e6d19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.036 247708 DEBUG nova.network.neutron [req-9bbf53a1-2534-4c15-a991-4adf67a02329 req-0e1413a6-3dc0-4dcd-833d-5726696e6d19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Refreshing network info cache for port 08928e39-ac5d-4e86-973c-11d34ee7d4d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.039 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Start _get_guest_xml network_info=[{"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.040 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.044 247708 WARNING nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.056 247708 DEBUG nova.virt.libvirt.host [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.057 247708 DEBUG nova.virt.libvirt.host [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.062 247708 DEBUG nova.virt.libvirt.host [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.063 247708 DEBUG nova.virt.libvirt.host [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.064 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.065 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.065 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.066 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.066 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.066 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.066 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.067 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.067 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.067 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.068 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.068 247708 DEBUG nova.virt.hardware [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.071 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:15.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:47:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944526431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.537 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.571 247708 DEBUG nova.storage.rbd_utils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] rbd image 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:15 compute-0 nova_compute[247704]: 2026-01-31 07:47:15.576 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:47:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2113549329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.030 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.032 247708 DEBUG nova.virt.libvirt.vif [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1008557590',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1008557590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-100855759',id=62,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0324e00066c44c39612c68322b97366',ramdisk_id='',reservation_id='r-wtupnsuu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1518292426',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1518292426-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:47:06Z,user_data=None,user_id='392b30fc4156484392c30737a737bcac',uuid=7820d682-a83a-42ad-93c8-b1cc08346f3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.033 247708 DEBUG nova.network.os_vif_util [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Converting VIF {"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.034 247708 DEBUG nova.network.os_vif_util [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:85:bb,bridge_name='br-int',has_traffic_filtering=True,id=08928e39-ac5d-4e86-973c-11d34ee7d4d4,network=Network(281c222e-0b76-45d6-a8af-2dd38564086d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08928e39-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.035 247708 DEBUG nova.objects.instance [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7820d682-a83a-42ad-93c8-b1cc08346f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:47:16 compute-0 ceph-mon[74496]: pgmap v1471: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 31 07:47:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/944526431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.121 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <uuid>7820d682-a83a-42ad-93c8-b1cc08346f3e</uuid>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <name>instance-0000003e</name>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1008557590</nova:name>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:47:15</nova:creationTime>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <nova:user uuid="392b30fc4156484392c30737a737bcac">tempest-FloatingIPsAssociationNegativeTestJSON-1518292426-project-member</nova:user>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <nova:project uuid="f0324e00066c44c39612c68322b97366">tempest-FloatingIPsAssociationNegativeTestJSON-1518292426</nova:project>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <nova:port uuid="08928e39-ac5d-4e86-973c-11d34ee7d4d4">
Jan 31 07:47:16 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <system>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <entry name="serial">7820d682-a83a-42ad-93c8-b1cc08346f3e</entry>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <entry name="uuid">7820d682-a83a-42ad-93c8-b1cc08346f3e</entry>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </system>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <os>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   </os>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <features>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   </features>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/7820d682-a83a-42ad-93c8-b1cc08346f3e_disk">
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       </source>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/7820d682-a83a-42ad-93c8-b1cc08346f3e_disk.config">
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       </source>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:47:16 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:4a:85:bb"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <target dev="tap08928e39-ac"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e/console.log" append="off"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <video>
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </video>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:47:16 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:47:16 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:47:16 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:47:16 compute-0 nova_compute[247704]: </domain>
Jan 31 07:47:16 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.122 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Preparing to wait for external event network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.123 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.123 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.123 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.124 247708 DEBUG nova.virt.libvirt.vif [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1008557590',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1008557590',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-100855759',id=62,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0324e00066c44c39612c68322b97366',ramdisk_id='',reservation_id='r-wtupnsuu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1518292426',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1518292426-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:47:06Z,user_data=None,user_id='392b30fc4156484392c30737a737bcac',uuid=7820d682-a83a-42ad-93c8-b1cc08346f3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.124 247708 DEBUG nova.network.os_vif_util [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Converting VIF {"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.125 247708 DEBUG nova.network.os_vif_util [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:85:bb,bridge_name='br-int',has_traffic_filtering=True,id=08928e39-ac5d-4e86-973c-11d34ee7d4d4,network=Network(281c222e-0b76-45d6-a8af-2dd38564086d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08928e39-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.125 247708 DEBUG os_vif [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:85:bb,bridge_name='br-int',has_traffic_filtering=True,id=08928e39-ac5d-4e86-973c-11d34ee7d4d4,network=Network(281c222e-0b76-45d6-a8af-2dd38564086d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08928e39-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.125 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.126 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.126 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.129 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.129 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08928e39-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.130 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08928e39-ac, col_values=(('external_ids', {'iface-id': '08928e39-ac5d-4e86-973c-11d34ee7d4d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:85:bb', 'vm-uuid': '7820d682-a83a-42ad-93c8-b1cc08346f3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.131 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.133 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:47:16 compute-0 NetworkManager[49108]: <info>  [1769845636.1337] manager: (tap08928e39-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.138 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.139 247708 INFO os_vif [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:85:bb,bridge_name='br-int',has_traffic_filtering=True,id=08928e39-ac5d-4e86-973c-11d34ee7d4d4,network=Network(281c222e-0b76-45d6-a8af-2dd38564086d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08928e39-ac')
Jan 31 07:47:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.541 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.541 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.541 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] No VIF found with MAC fa:16:3e:4a:85:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.542 247708 INFO nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Using config drive
Jan 31 07:47:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:16 compute-0 nova_compute[247704]: 2026-01-31 07:47:16.570 247708 DEBUG nova.storage.rbd_utils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] rbd image 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Jan 31 07:47:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2113549329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2462762902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:17.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:17 compute-0 nova_compute[247704]: 2026-01-31 07:47:17.182 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:17 compute-0 nova_compute[247704]: 2026-01-31 07:47:17.807 247708 INFO nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Creating config drive at /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e/disk.config
Jan 31 07:47:17 compute-0 nova_compute[247704]: 2026-01-31 07:47:17.810 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyecb3il8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:17 compute-0 nova_compute[247704]: 2026-01-31 07:47:17.931 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyecb3il8" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:17 compute-0 nova_compute[247704]: 2026-01-31 07:47:17.956 247708 DEBUG nova.storage.rbd_utils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] rbd image 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:47:17 compute-0 nova_compute[247704]: 2026-01-31 07:47:17.959 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e/disk.config 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.167 247708 DEBUG nova.network.neutron [req-9bbf53a1-2534-4c15-a991-4adf67a02329 req-0e1413a6-3dc0-4dcd-833d-5726696e6d19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updated VIF entry in instance network info cache for port 08928e39-ac5d-4e86-973c-11d34ee7d4d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.169 247708 DEBUG nova.network.neutron [req-9bbf53a1-2534-4c15-a991-4adf67a02329 req-0e1413a6-3dc0-4dcd-833d-5726696e6d19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updating instance_info_cache with network_info: [{"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.204 247708 DEBUG oslo_concurrency.lockutils [req-9bbf53a1-2534-4c15-a991-4adf67a02329 req-0e1413a6-3dc0-4dcd-833d-5726696e6d19 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:47:18 compute-0 ceph-mon[74496]: pgmap v1472: 305 pgs: 305 active+clean; 246 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Jan 31 07:47:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:18.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 237 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.880 247708 DEBUG oslo_concurrency.processutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e/disk.config 7820d682-a83a-42ad-93c8-b1cc08346f3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.921s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.881 247708 INFO nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Deleting local config drive /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e/disk.config because it was imported into RBD.
Jan 31 07:47:18 compute-0 kernel: tap08928e39-ac: entered promiscuous mode
Jan 31 07:47:18 compute-0 NetworkManager[49108]: <info>  [1769845638.9540] manager: (tap08928e39-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.953 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:18 compute-0 ovn_controller[149457]: 2026-01-31T07:47:18Z|00194|binding|INFO|Claiming lport 08928e39-ac5d-4e86-973c-11d34ee7d4d4 for this chassis.
Jan 31 07:47:18 compute-0 ovn_controller[149457]: 2026-01-31T07:47:18Z|00195|binding|INFO|08928e39-ac5d-4e86-973c-11d34ee7d4d4: Claiming fa:16:3e:4a:85:bb 10.100.0.10
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.957 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.963 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.966 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:18 compute-0 systemd-udevd[286260]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:47:18 compute-0 ovn_controller[149457]: 2026-01-31T07:47:18Z|00196|binding|INFO|Setting lport 08928e39-ac5d-4e86-973c-11d34ee7d4d4 ovn-installed in OVS
Jan 31 07:47:18 compute-0 nova_compute[247704]: 2026-01-31 07:47:18.997 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:19 compute-0 NetworkManager[49108]: <info>  [1769845639.0109] device (tap08928e39-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:47:19 compute-0 NetworkManager[49108]: <info>  [1769845639.0122] device (tap08928e39-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:47:19 compute-0 systemd-machined[214448]: New machine qemu-27-instance-0000003e.
Jan 31 07:47:19 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000003e.
Jan 31 07:47:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:19.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:19 compute-0 ovn_controller[149457]: 2026-01-31T07:47:19Z|00197|binding|INFO|Setting lport 08928e39-ac5d-4e86-973c-11d34ee7d4d4 up in Southbound
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.135 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:85:bb 10.100.0.10'], port_security=['fa:16:3e:4a:85:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7820d682-a83a-42ad-93c8-b1cc08346f3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-281c222e-0b76-45d6-a8af-2dd38564086d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0324e00066c44c39612c68322b97366', 'neutron:revision_number': '2', 'neutron:security_group_ids': '572e2d2d-5b9e-4092-81c5-309971c0ef99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=213645cd-f029-4c15-8cb0-4306a402168a, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=08928e39-ac5d-4e86-973c-11d34ee7d4d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.137 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 08928e39-ac5d-4e86-973c-11d34ee7d4d4 in datapath 281c222e-0b76-45d6-a8af-2dd38564086d bound to our chassis
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.139 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 281c222e-0b76-45d6-a8af-2dd38564086d
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.148 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[611c07fc-f1d2-4681-9afd-f81a9c34e2bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.149 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap281c222e-01 in ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.151 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap281c222e-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.151 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[55ca8a98-6125-4a54-a8b8-6a626c55c634]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.152 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1be44f2a-c200-49c7-b83b-992b47955789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.162 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[354259ef-b4d6-4f3e-b0da-9ef4a5aacb8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.173 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a5db6183-6433-4371-b763-b9b6b181e2e3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.198 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b957119e-9521-4656-ba97-27fb50708658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 NetworkManager[49108]: <info>  [1769845639.2059] manager: (tap281c222e-00): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.204 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3e786a5b-589b-4d9e-89ea-4344a9de6afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.233 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5fef7f-61a5-4a2a-93c5-e6de5704fe7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.236 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bf215a40-cd1c-46f0-a130-8c5cbec7ba59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 NetworkManager[49108]: <info>  [1769845639.2541] device (tap281c222e-00): carrier: link connected
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.258 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[70826553-b2b6-4bfa-b90e-95025ee65a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.272 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4d63b6a1-6718-4bb6-8c1f-03f8fc55b18c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap281c222e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:0d:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588187, 'reachable_time': 39970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286296, 'error': None, 'target': 'ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.287 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1d604c7a-750d-460c-bd48-a78602abe339]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:d16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588187, 'tstamp': 588187}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286297, 'error': None, 'target': 'ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.305 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[567e141c-6564-4f17-a16d-cb0754488155]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap281c222e-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:0d:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588187, 'reachable_time': 39970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286298, 'error': None, 'target': 'ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.326 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[63b932e3-1214-4dca-8990-a4a7733277cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2986912803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.372 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f8dc2b-bac4-4de6-b734-4ece5fd782ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.374 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap281c222e-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.375 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.376 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap281c222e-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:19 compute-0 kernel: tap281c222e-00: entered promiscuous mode
Jan 31 07:47:19 compute-0 nova_compute[247704]: 2026-01-31 07:47:19.378 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:19 compute-0 NetworkManager[49108]: <info>  [1769845639.3791] manager: (tap281c222e-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.384 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap281c222e-00, col_values=(('external_ids', {'iface-id': '483edb51-7139-4ce1-850e-821c07a9b2d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:19 compute-0 nova_compute[247704]: 2026-01-31 07:47:19.385 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:19 compute-0 ovn_controller[149457]: 2026-01-31T07:47:19Z|00198|binding|INFO|Releasing lport 483edb51-7139-4ce1-850e-821c07a9b2d8 from this chassis (sb_readonly=0)
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.388 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/281c222e-0b76-45d6-a8af-2dd38564086d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/281c222e-0b76-45d6-a8af-2dd38564086d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.389 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a604ebac-584e-415b-821c-0eb0c414db25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.390 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-281c222e-0b76-45d6-a8af-2dd38564086d
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/281c222e-0b76-45d6-a8af-2dd38564086d.pid.haproxy
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 281c222e-0b76-45d6-a8af-2dd38564086d
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:47:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:19.391 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d', 'env', 'PROCESS_TAG=haproxy-281c222e-0b76-45d6-a8af-2dd38564086d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/281c222e-0b76-45d6-a8af-2dd38564086d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:47:19 compute-0 nova_compute[247704]: 2026-01-31 07:47:19.394 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:19 compute-0 nova_compute[247704]: 2026-01-31 07:47:19.711 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845639.7103305, 7820d682-a83a-42ad-93c8-b1cc08346f3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:47:19 compute-0 nova_compute[247704]: 2026-01-31 07:47:19.712 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] VM Started (Lifecycle Event)
Jan 31 07:47:19 compute-0 podman[286372]: 2026-01-31 07:47:19.690751588 +0000 UTC m=+0.021508239 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:47:19 compute-0 nova_compute[247704]: 2026-01-31 07:47:19.859 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:19 compute-0 nova_compute[247704]: 2026-01-31 07:47:19.864 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845639.7104874, 7820d682-a83a-42ad-93c8-b1cc08346f3e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:47:19 compute-0 nova_compute[247704]: 2026-01-31 07:47:19.864 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] VM Paused (Lifecycle Event)
Jan 31 07:47:19 compute-0 podman[286372]: 2026-01-31 07:47:19.886123512 +0000 UTC m=+0.216880143 container create 3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:47:19 compute-0 systemd[1]: Started libpod-conmon-3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014.scope.
Jan 31 07:47:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:47:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cf565faa54b8ab10c6e8185cbcad6cae361c9d8e61057fbe7cdc722199bcb0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:47:20 compute-0 podman[286372]: 2026-01-31 07:47:20.028727944 +0000 UTC m=+0.359484615 container init 3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:47:20 compute-0 podman[286372]: 2026-01-31 07:47:20.033496621 +0000 UTC m=+0.364253262 container start 3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.041 247708 DEBUG nova.compute.manager [req-2897439f-77f1-48f1-9bb3-b8392b157d93 req-fee28285-bf66-4d05-bc40-01cd56a92f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.042 247708 DEBUG oslo_concurrency.lockutils [req-2897439f-77f1-48f1-9bb3-b8392b157d93 req-fee28285-bf66-4d05-bc40-01cd56a92f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.043 247708 DEBUG oslo_concurrency.lockutils [req-2897439f-77f1-48f1-9bb3-b8392b157d93 req-fee28285-bf66-4d05-bc40-01cd56a92f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.044 247708 DEBUG oslo_concurrency.lockutils [req-2897439f-77f1-48f1-9bb3-b8392b157d93 req-fee28285-bf66-4d05-bc40-01cd56a92f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.044 247708 DEBUG nova.compute.manager [req-2897439f-77f1-48f1-9bb3-b8392b157d93 req-fee28285-bf66-4d05-bc40-01cd56a92f68 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Processing event network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.045 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:47:20
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups']
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.051 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:47:20 compute-0 neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d[286389]: [NOTICE]   (286393) : New worker (286395) forked
Jan 31 07:47:20 compute-0 neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d[286389]: [NOTICE]   (286393) : Loading success.
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.056 247708 INFO nova.virt.libvirt.driver [-] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Instance spawned successfully.
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.056 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.066 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.069 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845640.0503483, 7820d682-a83a-42ad-93c8-b1cc08346f3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.070 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] VM Resumed (Lifecycle Event)
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.191 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.196 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.199 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.200 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.200 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.201 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.201 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.201 247708 DEBUG nova.virt.libvirt.driver [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.338 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:47:20 compute-0 ceph-mon[74496]: pgmap v1473: 305 pgs: 305 active+clean; 237 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 31 07:47:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2357984663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3624245809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1947828328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.550 247708 INFO nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Took 14.32 seconds to spawn the instance on the hypervisor.
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.550 247708 DEBUG nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:47:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 209 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.862 247708 INFO nova.compute.manager [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Took 16.45 seconds to build instance.
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.917 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:20.916 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:47:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:20.918 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:47:20 compute-0 nova_compute[247704]: 2026-01-31 07:47:20.973 247708 DEBUG oslo_concurrency.lockutils [None req-e42c080c-7b37-407a-a320-663d4ef7df5a 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:21.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:21 compute-0 nova_compute[247704]: 2026-01-31 07:47:21.184 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 31 07:47:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 31 07:47:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 31 07:47:22 compute-0 nova_compute[247704]: 2026-01-31 07:47:22.179 247708 DEBUG nova.compute.manager [req-a9108e0c-b2b9-4428-b12d-36011862fae5 req-a78789e6-e75e-46a7-9ca4-a2e87fe81338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:47:22 compute-0 nova_compute[247704]: 2026-01-31 07:47:22.180 247708 DEBUG oslo_concurrency.lockutils [req-a9108e0c-b2b9-4428-b12d-36011862fae5 req-a78789e6-e75e-46a7-9ca4-a2e87fe81338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:47:22 compute-0 nova_compute[247704]: 2026-01-31 07:47:22.181 247708 DEBUG oslo_concurrency.lockutils [req-a9108e0c-b2b9-4428-b12d-36011862fae5 req-a78789e6-e75e-46a7-9ca4-a2e87fe81338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:47:22 compute-0 nova_compute[247704]: 2026-01-31 07:47:22.181 247708 DEBUG oslo_concurrency.lockutils [req-a9108e0c-b2b9-4428-b12d-36011862fae5 req-a78789e6-e75e-46a7-9ca4-a2e87fe81338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:47:22 compute-0 nova_compute[247704]: 2026-01-31 07:47:22.183 247708 DEBUG nova.compute.manager [req-a9108e0c-b2b9-4428-b12d-36011862fae5 req-a78789e6-e75e-46a7-9ca4-a2e87fe81338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] No waiting events found dispatching network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:47:22 compute-0 nova_compute[247704]: 2026-01-31 07:47:22.184 247708 WARNING nova.compute.manager [req-a9108e0c-b2b9-4428-b12d-36011862fae5 req-a78789e6-e75e-46a7-9ca4-a2e87fe81338 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received unexpected event network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 for instance with vm_state active and task_state None.
Jan 31 07:47:22 compute-0 nova_compute[247704]: 2026-01-31 07:47:22.185 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:22 compute-0 ceph-mon[74496]: pgmap v1474: 305 pgs: 305 active+clean; 209 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 31 07:47:22 compute-0 ceph-mon[74496]: osdmap e222: 3 total, 3 up, 3 in
Jan 31 07:47:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:22.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 210 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Jan 31 07:47:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:23.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:23 compute-0 ceph-mon[74496]: pgmap v1476: 305 pgs: 305 active+clean; 210 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Jan 31 07:47:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:47:23.919 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:47:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 210 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Jan 31 07:47:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3540240285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3620649895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:25.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:26 compute-0 nova_compute[247704]: 2026-01-31 07:47:26.186 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 181 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 301 op/s
Jan 31 07:47:26 compute-0 podman[286407]: 2026-01-31 07:47:26.881717585 +0000 UTC m=+0.052275281 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:47:26 compute-0 ceph-mon[74496]: pgmap v1477: 305 pgs: 305 active+clean; 210 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Jan 31 07:47:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:27.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:27 compute-0 nova_compute[247704]: 2026-01-31 07:47:27.226 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:28 compute-0 ceph-mon[74496]: pgmap v1478: 305 pgs: 305 active+clean; 181 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 301 op/s
Jan 31 07:47:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 181 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 300 op/s
Jan 31 07:47:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:29.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:29 compute-0 ceph-mon[74496]: pgmap v1479: 305 pgs: 305 active+clean; 181 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 300 op/s
Jan 31 07:47:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:30.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 181 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 269 op/s
Jan 31 07:47:30 compute-0 nova_compute[247704]: 2026-01-31 07:47:30.946 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:30 compute-0 NetworkManager[49108]: <info>  [1769845650.9481] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Jan 31 07:47:30 compute-0 NetworkManager[49108]: <info>  [1769845650.9502] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Jan 31 07:47:31 compute-0 ovn_controller[149457]: 2026-01-31T07:47:31Z|00199|binding|INFO|Releasing lport 483edb51-7139-4ce1-850e-821c07a9b2d8 from this chassis (sb_readonly=0)
Jan 31 07:47:31 compute-0 nova_compute[247704]: 2026-01-31 07:47:31.046 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:31 compute-0 ovn_controller[149457]: 2026-01-31T07:47:31Z|00200|binding|INFO|Releasing lport 483edb51-7139-4ce1-850e-821c07a9b2d8 from this chassis (sb_readonly=0)
Jan 31 07:47:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:31.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:31 compute-0 nova_compute[247704]: 2026-01-31 07:47:31.188 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 31 07:47:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 31 07:47:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 31 07:47:32 compute-0 nova_compute[247704]: 2026-01-31 07:47:32.229 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:32 compute-0 ceph-mon[74496]: pgmap v1480: 305 pgs: 305 active+clean; 181 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 269 op/s
Jan 31 07:47:32 compute-0 ceph-mon[74496]: osdmap e223: 3 total, 3 up, 3 in
Jan 31 07:47:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 183 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.1 MiB/s wr, 289 op/s
Jan 31 07:47:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:33.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:33 compute-0 sudo[286431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:33 compute-0 sudo[286431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:33 compute-0 sudo[286431]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:33 compute-0 sudo[286456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:33 compute-0 sudo[286456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:33 compute-0 sudo[286456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:33 compute-0 ceph-mon[74496]: pgmap v1482: 305 pgs: 305 active+clean; 183 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.1 MiB/s wr, 289 op/s
Jan 31 07:47:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:34 compute-0 ovn_controller[149457]: 2026-01-31T07:47:34Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:85:bb 10.100.0.10
Jan 31 07:47:34 compute-0 ovn_controller[149457]: 2026-01-31T07:47:34Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:85:bb 10.100.0.10
Jan 31 07:47:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 183 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.1 MiB/s wr, 289 op/s
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003107624348594826 of space, bias 1.0, pg target 0.9322873045784478 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019053237832681927 of space, bias 1.0, pg target 0.5715971349804578 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:47:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:35.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:35 compute-0 ceph-mon[74496]: pgmap v1483: 305 pgs: 305 active+clean; 183 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.1 MiB/s wr, 289 op/s
Jan 31 07:47:36 compute-0 nova_compute[247704]: 2026-01-31 07:47:36.190 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:36 compute-0 nova_compute[247704]: 2026-01-31 07:47:36.198 247708 DEBUG nova.compute.manager [req-390c19a9-c31b-44ad-b77a-3354fbd4907c req-1fc56cbe-bdd3-4ba3-aabe-cd2010b74198 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-changed-08928e39-ac5d-4e86-973c-11d34ee7d4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:47:36 compute-0 nova_compute[247704]: 2026-01-31 07:47:36.198 247708 DEBUG nova.compute.manager [req-390c19a9-c31b-44ad-b77a-3354fbd4907c req-1fc56cbe-bdd3-4ba3-aabe-cd2010b74198 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Refreshing instance network info cache due to event network-changed-08928e39-ac5d-4e86-973c-11d34ee7d4d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:47:36 compute-0 nova_compute[247704]: 2026-01-31 07:47:36.199 247708 DEBUG oslo_concurrency.lockutils [req-390c19a9-c31b-44ad-b77a-3354fbd4907c req-1fc56cbe-bdd3-4ba3-aabe-cd2010b74198 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:47:36 compute-0 nova_compute[247704]: 2026-01-31 07:47:36.199 247708 DEBUG oslo_concurrency.lockutils [req-390c19a9-c31b-44ad-b77a-3354fbd4907c req-1fc56cbe-bdd3-4ba3-aabe-cd2010b74198 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:47:36 compute-0 nova_compute[247704]: 2026-01-31 07:47:36.199 247708 DEBUG nova.network.neutron [req-390c19a9-c31b-44ad-b77a-3354fbd4907c req-1fc56cbe-bdd3-4ba3-aabe-cd2010b74198 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Refreshing network info cache for port 08928e39-ac5d-4e86-973c-11d34ee7d4d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:47:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:36.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 229 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 222 op/s
Jan 31 07:47:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 31 07:47:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 31 07:47:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 31 07:47:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:37.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:37 compute-0 nova_compute[247704]: 2026-01-31 07:47:37.232 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:38 compute-0 ceph-mon[74496]: pgmap v1484: 305 pgs: 305 active+clean; 229 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 222 op/s
Jan 31 07:47:38 compute-0 ceph-mon[74496]: osdmap e224: 3 total, 3 up, 3 in
Jan 31 07:47:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:38.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 246 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.4 MiB/s wr, 244 op/s
Jan 31 07:47:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:39.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:39 compute-0 nova_compute[247704]: 2026-01-31 07:47:39.617 247708 DEBUG nova.network.neutron [req-390c19a9-c31b-44ad-b77a-3354fbd4907c req-1fc56cbe-bdd3-4ba3-aabe-cd2010b74198 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updated VIF entry in instance network info cache for port 08928e39-ac5d-4e86-973c-11d34ee7d4d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:47:39 compute-0 nova_compute[247704]: 2026-01-31 07:47:39.617 247708 DEBUG nova.network.neutron [req-390c19a9-c31b-44ad-b77a-3354fbd4907c req-1fc56cbe-bdd3-4ba3-aabe-cd2010b74198 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updating instance_info_cache with network_info: [{"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:47:39 compute-0 nova_compute[247704]: 2026-01-31 07:47:39.691 247708 DEBUG oslo_concurrency.lockutils [req-390c19a9-c31b-44ad-b77a-3354fbd4907c req-1fc56cbe-bdd3-4ba3-aabe-cd2010b74198 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:47:40 compute-0 ceph-mon[74496]: pgmap v1486: 305 pgs: 305 active+clean; 246 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.4 MiB/s wr, 244 op/s
Jan 31 07:47:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 332 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 277 op/s
Jan 31 07:47:40 compute-0 podman[286484]: 2026-01-31 07:47:40.902833938 +0000 UTC m=+0.068487949 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:47:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:41.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:41 compute-0 nova_compute[247704]: 2026-01-31 07:47:41.192 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4165145417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:47:42 compute-0 nova_compute[247704]: 2026-01-31 07:47:42.235 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 31 07:47:42 compute-0 ceph-mon[74496]: pgmap v1487: 305 pgs: 305 active+clean; 332 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 277 op/s
Jan 31 07:47:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:42.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 346 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 12 MiB/s wr, 253 op/s
Jan 31 07:47:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 31 07:47:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 31 07:47:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:43.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 31 07:47:44 compute-0 ceph-mon[74496]: pgmap v1488: 305 pgs: 305 active+clean; 346 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 12 MiB/s wr, 253 op/s
Jan 31 07:47:44 compute-0 ceph-mon[74496]: osdmap e225: 3 total, 3 up, 3 in
Jan 31 07:47:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 31 07:47:44 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 31 07:47:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 346 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 10 MiB/s wr, 159 op/s
Jan 31 07:47:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:45.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:45 compute-0 ceph-mon[74496]: osdmap e226: 3 total, 3 up, 3 in
Jan 31 07:47:45 compute-0 ceph-mon[74496]: pgmap v1491: 305 pgs: 305 active+clean; 346 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 10 MiB/s wr, 159 op/s
Jan 31 07:47:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/325263316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:47:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/325263316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:47:46 compute-0 nova_compute[247704]: 2026-01-31 07:47:46.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 365 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 9.8 MiB/s wr, 226 op/s
Jan 31 07:47:46 compute-0 sudo[286513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:46 compute-0 sudo[286513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:46 compute-0 sudo[286513]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:46 compute-0 sudo[286538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:47:46 compute-0 sudo[286538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:46 compute-0 sudo[286538]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:47 compute-0 sudo[286563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:47 compute-0 sudo[286563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:47 compute-0 sudo[286563]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:47 compute-0 sudo[286588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:47:47 compute-0 sudo[286588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:47.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:47 compute-0 nova_compute[247704]: 2026-01-31 07:47:47.237 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:47 compute-0 sudo[286588]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:47:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:47:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:47 compute-0 sudo[286634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:47 compute-0 sudo[286634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:47 compute-0 sudo[286634]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:47 compute-0 sudo[286659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:47:47 compute-0 sudo[286659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:47 compute-0 sudo[286659]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:47 compute-0 sudo[286684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:47 compute-0 sudo[286684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:47 compute-0 sudo[286684]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:47 compute-0 sudo[286709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:47:47 compute-0 sudo[286709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:48 compute-0 sudo[286709]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:47:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:47:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:47:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:47:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:47:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 46d7f2d8-d99e-401b-886e-73d31a305f11 does not exist
Jan 31 07:47:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fb67f6d9-a954-4d5c-be91-db74b0adb4e7 does not exist
Jan 31 07:47:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ef10bedb-0bde-4665-8a9d-0e76a3e9b5db does not exist
Jan 31 07:47:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:47:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:47:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:47:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:47:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:47:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:47:48 compute-0 sudo[286765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:48 compute-0 sudo[286765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:48 compute-0 sudo[286765]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:48 compute-0 sudo[286790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:47:48 compute-0 sudo[286790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:48 compute-0 sudo[286790]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:48 compute-0 sudo[286815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:48 compute-0 sudo[286815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:48 compute-0 sudo[286815]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:48 compute-0 ceph-mon[74496]: pgmap v1492: 305 pgs: 305 active+clean; 365 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 9.8 MiB/s wr, 226 op/s
Jan 31 07:47:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:47:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:47:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:47:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:47:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:47:48 compute-0 sudo[286840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:47:48 compute-0 sudo[286840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:48.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:48 compute-0 podman[286906]: 2026-01-31 07:47:48.751981345 +0000 UTC m=+0.041832436 container create 4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:47:48 compute-0 systemd[1]: Started libpod-conmon-4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d.scope.
Jan 31 07:47:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:47:48 compute-0 podman[286906]: 2026-01-31 07:47:48.815875489 +0000 UTC m=+0.105726670 container init 4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:47:48 compute-0 podman[286906]: 2026-01-31 07:47:48.82288678 +0000 UTC m=+0.112737871 container start 4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:47:48 compute-0 podman[286906]: 2026-01-31 07:47:48.82693142 +0000 UTC m=+0.116782621 container attach 4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:47:48 compute-0 focused_black[286922]: 167 167
Jan 31 07:47:48 compute-0 systemd[1]: libpod-4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d.scope: Deactivated successfully.
Jan 31 07:47:48 compute-0 conmon[286922]: conmon 4d7eddc1e42142ad6ccd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d.scope/container/memory.events
Jan 31 07:47:48 compute-0 podman[286906]: 2026-01-31 07:47:48.829521443 +0000 UTC m=+0.119372544 container died 4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:47:48 compute-0 podman[286906]: 2026-01-31 07:47:48.736019953 +0000 UTC m=+0.025871044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:47:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0eddd0dbbf6df561dec70d24f5916c31b1ae8d6ff25d9d2968fff0f6ad174c02-merged.mount: Deactivated successfully.
Jan 31 07:47:48 compute-0 podman[286906]: 2026-01-31 07:47:48.862020169 +0000 UTC m=+0.151871270 container remove 4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_black, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:47:48 compute-0 systemd[1]: libpod-conmon-4d7eddc1e42142ad6ccde3c26e53f5f931a094ed8e907a1a399cf74e4ff0bc9d.scope: Deactivated successfully.
Jan 31 07:47:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 388 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Jan 31 07:47:48 compute-0 podman[286945]: 2026-01-31 07:47:48.983728479 +0000 UTC m=+0.044830469 container create 99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:47:49 compute-0 systemd[1]: Started libpod-conmon-99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a.scope.
Jan 31 07:47:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b64a8b626c9512f5669cfb5fdf7a3d8ed3a3d5bc1e07205f6d57a2b258f76a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b64a8b626c9512f5669cfb5fdf7a3d8ed3a3d5bc1e07205f6d57a2b258f76a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b64a8b626c9512f5669cfb5fdf7a3d8ed3a3d5bc1e07205f6d57a2b258f76a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b64a8b626c9512f5669cfb5fdf7a3d8ed3a3d5bc1e07205f6d57a2b258f76a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4b64a8b626c9512f5669cfb5fdf7a3d8ed3a3d5bc1e07205f6d57a2b258f76a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:49 compute-0 podman[286945]: 2026-01-31 07:47:48.963955105 +0000 UTC m=+0.025057125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:47:49 compute-0 podman[286945]: 2026-01-31 07:47:49.06664843 +0000 UTC m=+0.127750470 container init 99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:47:49 compute-0 podman[286945]: 2026-01-31 07:47:49.079738471 +0000 UTC m=+0.140840461 container start 99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:47:49 compute-0 podman[286945]: 2026-01-31 07:47:49.084000945 +0000 UTC m=+0.145102935 container attach 99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:47:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:49.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:49 compute-0 busy_brattain[286961]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:47:49 compute-0 busy_brattain[286961]: --> relative data size: 1.0
Jan 31 07:47:49 compute-0 busy_brattain[286961]: --> All data devices are unavailable
Jan 31 07:47:49 compute-0 systemd[1]: libpod-99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a.scope: Deactivated successfully.
Jan 31 07:47:49 compute-0 podman[286945]: 2026-01-31 07:47:49.8837421 +0000 UTC m=+0.944844090 container died 99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4b64a8b626c9512f5669cfb5fdf7a3d8ed3a3d5bc1e07205f6d57a2b258f76a-merged.mount: Deactivated successfully.
Jan 31 07:47:49 compute-0 podman[286945]: 2026-01-31 07:47:49.997450154 +0000 UTC m=+1.058552144 container remove 99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:47:50 compute-0 systemd[1]: libpod-conmon-99e8b6036f7f132839ebc523ed80c50a29317582ef6223c7ed2dc4570bd8788a.scope: Deactivated successfully.
Jan 31 07:47:50 compute-0 sudo[286840]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:47:50 compute-0 sudo[286987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:50 compute-0 sudo[286987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:50 compute-0 sudo[286987]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:50 compute-0 sudo[287012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:47:50 compute-0 sudo[287012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:50 compute-0 sudo[287012]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:50 compute-0 sudo[287037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:50 compute-0 sudo[287037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:50 compute-0 sudo[287037]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:50 compute-0 sudo[287062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:47:50 compute-0 sudo[287062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:50 compute-0 ceph-mon[74496]: pgmap v1493: 305 pgs: 305 active+clean; 388 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Jan 31 07:47:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:50.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:50 compute-0 podman[287127]: 2026-01-31 07:47:50.661887506 +0000 UTC m=+0.062414890 container create cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:47:50 compute-0 systemd[1]: Started libpod-conmon-cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e.scope.
Jan 31 07:47:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:47:50 compute-0 podman[287127]: 2026-01-31 07:47:50.633243634 +0000 UTC m=+0.033771068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:47:50 compute-0 podman[287127]: 2026-01-31 07:47:50.734329919 +0000 UTC m=+0.134857363 container init cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:47:50 compute-0 podman[287127]: 2026-01-31 07:47:50.742735285 +0000 UTC m=+0.143262669 container start cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:47:50 compute-0 podman[287127]: 2026-01-31 07:47:50.747164824 +0000 UTC m=+0.147692198 container attach cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:47:50 compute-0 systemd[1]: libpod-cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e.scope: Deactivated successfully.
Jan 31 07:47:50 compute-0 nostalgic_cartwright[287143]: 167 167
Jan 31 07:47:50 compute-0 conmon[287143]: conmon cf1c2fa7fc6052b0fbbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e.scope/container/memory.events
Jan 31 07:47:50 compute-0 podman[287127]: 2026-01-31 07:47:50.749307926 +0000 UTC m=+0.149835300 container died cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-35aedf6e66d71550d6b8f8c785bb2a7281c79b3d5e6a533f417d9972629c8d47-merged.mount: Deactivated successfully.
Jan 31 07:47:50 compute-0 podman[287127]: 2026-01-31 07:47:50.794196685 +0000 UTC m=+0.194724019 container remove cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:47:50 compute-0 systemd[1]: libpod-conmon-cf1c2fa7fc6052b0fbbb7f6ac337263cd462e39c803019f66dd77da2ef001b1e.scope: Deactivated successfully.
Jan 31 07:47:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.9 MiB/s wr, 136 op/s
Jan 31 07:47:50 compute-0 podman[287166]: 2026-01-31 07:47:50.970510723 +0000 UTC m=+0.040762719 container create 6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:47:51 compute-0 systemd[1]: Started libpod-conmon-6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949.scope.
Jan 31 07:47:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed4ad4a62125fda8d30ea86a5e6c3529f62e69ac847449a3029a455ad60bed2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed4ad4a62125fda8d30ea86a5e6c3529f62e69ac847449a3029a455ad60bed2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed4ad4a62125fda8d30ea86a5e6c3529f62e69ac847449a3029a455ad60bed2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed4ad4a62125fda8d30ea86a5e6c3529f62e69ac847449a3029a455ad60bed2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:51 compute-0 podman[287166]: 2026-01-31 07:47:50.94954928 +0000 UTC m=+0.019801266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:47:51 compute-0 podman[287166]: 2026-01-31 07:47:51.077988975 +0000 UTC m=+0.148240951 container init 6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:47:51 compute-0 podman[287166]: 2026-01-31 07:47:51.08676929 +0000 UTC m=+0.157021246 container start 6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:47:51 compute-0 podman[287166]: 2026-01-31 07:47:51.09291508 +0000 UTC m=+0.163167056 container attach 6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 07:47:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:51 compute-0 nova_compute[247704]: 2026-01-31 07:47:51.198 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 31 07:47:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 31 07:47:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 31 07:47:51 compute-0 interesting_rubin[287182]: {
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:     "0": [
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:         {
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "devices": [
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "/dev/loop3"
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             ],
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "lv_name": "ceph_lv0",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "lv_size": "7511998464",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "name": "ceph_lv0",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "tags": {
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.cluster_name": "ceph",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.crush_device_class": "",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.encrypted": "0",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.osd_id": "0",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.type": "block",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:                 "ceph.vdo": "0"
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             },
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "type": "block",
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:             "vg_name": "ceph_vg0"
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:         }
Jan 31 07:47:51 compute-0 interesting_rubin[287182]:     ]
Jan 31 07:47:51 compute-0 interesting_rubin[287182]: }
Jan 31 07:47:51 compute-0 systemd[1]: libpod-6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949.scope: Deactivated successfully.
Jan 31 07:47:51 compute-0 podman[287166]: 2026-01-31 07:47:51.859636317 +0000 UTC m=+0.929888303 container died 6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:47:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ed4ad4a62125fda8d30ea86a5e6c3529f62e69ac847449a3029a455ad60bed2-merged.mount: Deactivated successfully.
Jan 31 07:47:51 compute-0 podman[287166]: 2026-01-31 07:47:51.954237234 +0000 UTC m=+1.024489190 container remove 6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:47:51 compute-0 systemd[1]: libpod-conmon-6ee2033cefad35370966a2a94033b5436882a1cc50c4e173b1ce1deb1358d949.scope: Deactivated successfully.
Jan 31 07:47:51 compute-0 sudo[287062]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:52 compute-0 sudo[287206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:52 compute-0 sudo[287206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:52 compute-0 sudo[287206]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:52 compute-0 sudo[287231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:47:52 compute-0 sudo[287231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:52 compute-0 sudo[287231]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:52 compute-0 sudo[287256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:52 compute-0 sudo[287256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:52 compute-0 sudo[287256]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:52 compute-0 sudo[287281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:47:52 compute-0 sudo[287281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:52 compute-0 nova_compute[247704]: 2026-01-31 07:47:52.239 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:52 compute-0 ceph-mon[74496]: pgmap v1494: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.9 MiB/s wr, 136 op/s
Jan 31 07:47:52 compute-0 ceph-mon[74496]: osdmap e227: 3 total, 3 up, 3 in
Jan 31 07:47:52 compute-0 podman[287346]: 2026-01-31 07:47:52.544664902 +0000 UTC m=+0.040502963 container create 4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 31 07:47:52 compute-0 systemd[1]: Started libpod-conmon-4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a.scope.
Jan 31 07:47:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:47:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:52.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:47:52 compute-0 podman[287346]: 2026-01-31 07:47:52.6058163 +0000 UTC m=+0.101654381 container init 4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:47:52 compute-0 podman[287346]: 2026-01-31 07:47:52.611013137 +0000 UTC m=+0.106851188 container start 4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 07:47:52 compute-0 flamboyant_williams[287362]: 167 167
Jan 31 07:47:52 compute-0 systemd[1]: libpod-4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a.scope: Deactivated successfully.
Jan 31 07:47:52 compute-0 podman[287346]: 2026-01-31 07:47:52.614906102 +0000 UTC m=+0.110744173 container attach 4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:47:52 compute-0 conmon[287362]: conmon 4fb872b78ff4a6f9a4fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a.scope/container/memory.events
Jan 31 07:47:52 compute-0 podman[287346]: 2026-01-31 07:47:52.61563154 +0000 UTC m=+0.111469591 container died 4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:47:52 compute-0 podman[287346]: 2026-01-31 07:47:52.531008278 +0000 UTC m=+0.026846359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:47:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb8a955a1977fca694ff4e43524c82933bb6eb4184fbb1148898e272523f1be9-merged.mount: Deactivated successfully.
Jan 31 07:47:52 compute-0 podman[287346]: 2026-01-31 07:47:52.708280119 +0000 UTC m=+0.204118170 container remove 4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:47:52 compute-0 systemd[1]: libpod-conmon-4fb872b78ff4a6f9a4fe1f3264e0de6fb33470f985e48cb23840087e80a6ff1a.scope: Deactivated successfully.
Jan 31 07:47:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, 2.8 MiB/s wr, 131 op/s
Jan 31 07:47:52 compute-0 podman[287385]: 2026-01-31 07:47:52.939974894 +0000 UTC m=+0.089263278 container create 6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:47:52 compute-0 podman[287385]: 2026-01-31 07:47:52.875764731 +0000 UTC m=+0.025053155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:47:53 compute-0 systemd[1]: Started libpod-conmon-6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45.scope.
Jan 31 07:47:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a04b543c81f9ef0d4fd916af3c0be6de880511b7a3cbb47510cddc8f54e5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a04b543c81f9ef0d4fd916af3c0be6de880511b7a3cbb47510cddc8f54e5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a04b543c81f9ef0d4fd916af3c0be6de880511b7a3cbb47510cddc8f54e5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a04b543c81f9ef0d4fd916af3c0be6de880511b7a3cbb47510cddc8f54e5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:47:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:47:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:53.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:47:53 compute-0 podman[287385]: 2026-01-31 07:47:53.197562992 +0000 UTC m=+0.346851396 container init 6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:47:53 compute-0 podman[287385]: 2026-01-31 07:47:53.204386009 +0000 UTC m=+0.353674373 container start 6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:47:53 compute-0 podman[287385]: 2026-01-31 07:47:53.308508098 +0000 UTC m=+0.457796452 container attach 6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:47:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 31 07:47:53 compute-0 sudo[287407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:53 compute-0 sudo[287407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:53 compute-0 sudo[287407]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:53 compute-0 sudo[287432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:53 compute-0 sudo[287432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:53 compute-0 sudo[287432]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:53 compute-0 ovn_controller[149457]: 2026-01-31T07:47:53Z|00201|binding|INFO|Releasing lport 483edb51-7139-4ce1-850e-821c07a9b2d8 from this chassis (sb_readonly=0)
Jan 31 07:47:53 compute-0 nova_compute[247704]: 2026-01-31 07:47:53.783 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 31 07:47:53 compute-0 ceph-mon[74496]: pgmap v1496: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, 2.8 MiB/s wr, 131 op/s
Jan 31 07:47:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 31 07:47:54 compute-0 sweet_jemison[287401]: {
Jan 31 07:47:54 compute-0 sweet_jemison[287401]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:47:54 compute-0 sweet_jemison[287401]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:47:54 compute-0 sweet_jemison[287401]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:47:54 compute-0 sweet_jemison[287401]:         "osd_id": 0,
Jan 31 07:47:54 compute-0 sweet_jemison[287401]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:47:54 compute-0 sweet_jemison[287401]:         "type": "bluestore"
Jan 31 07:47:54 compute-0 sweet_jemison[287401]:     }
Jan 31 07:47:54 compute-0 sweet_jemison[287401]: }
Jan 31 07:47:54 compute-0 systemd[1]: libpod-6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45.scope: Deactivated successfully.
Jan 31 07:47:54 compute-0 podman[287385]: 2026-01-31 07:47:54.095274995 +0000 UTC m=+1.244563359 container died 6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 07:47:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa6a04b543c81f9ef0d4fd916af3c0be6de880511b7a3cbb47510cddc8f54e5d-merged.mount: Deactivated successfully.
Jan 31 07:47:54 compute-0 podman[287385]: 2026-01-31 07:47:54.147124015 +0000 UTC m=+1.296412379 container remove 6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:47:54 compute-0 systemd[1]: libpod-conmon-6703a989794d31c0dda80b99ae60b724d9db6496c9e12f90f1126708c0cdbd45.scope: Deactivated successfully.
Jan 31 07:47:54 compute-0 sudo[287281]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:47:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:47:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 664510dd-a8e4-4d82-878f-171508f94678 does not exist
Jan 31 07:47:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f8d481c6-16b9-487c-aa3c-28eacfffb36c does not exist
Jan 31 07:47:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4c005f93-f2a1-453e-9dbb-4905db1c2bb8 does not exist
Jan 31 07:47:54 compute-0 sudo[287487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:47:54 compute-0 sudo[287487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:54 compute-0 sudo[287487]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:54 compute-0 sudo[287512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:47:54 compute-0 sudo[287512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:47:54 compute-0 sudo[287512]: pam_unix(sudo:session): session closed for user root
Jan 31 07:47:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 1.9 MiB/s wr, 48 op/s
Jan 31 07:47:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/376835805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:54 compute-0 ceph-mon[74496]: osdmap e228: 3 total, 3 up, 3 in
Jan 31 07:47:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:47:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3959903741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:47:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:55.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 31 07:47:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 31 07:47:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 31 07:47:56 compute-0 nova_compute[247704]: 2026-01-31 07:47:56.209 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 31 07:47:56 compute-0 ceph-mon[74496]: pgmap v1498: 305 pgs: 305 active+clean; 405 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 1.9 MiB/s wr, 48 op/s
Jan 31 07:47:56 compute-0 ceph-mon[74496]: osdmap e229: 3 total, 3 up, 3 in
Jan 31 07:47:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 31 07:47:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 31 07:47:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:47:56 compute-0 nova_compute[247704]: 2026-01-31 07:47:56.447 247708 DEBUG nova.compute.manager [req-93b18fb8-6b9f-4087-811a-1310e9897347 req-8df6ba4e-c587-46d1-8cfe-6fab295552d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-changed-08928e39-ac5d-4e86-973c-11d34ee7d4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:47:56 compute-0 nova_compute[247704]: 2026-01-31 07:47:56.447 247708 DEBUG nova.compute.manager [req-93b18fb8-6b9f-4087-811a-1310e9897347 req-8df6ba4e-c587-46d1-8cfe-6fab295552d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Refreshing instance network info cache due to event network-changed-08928e39-ac5d-4e86-973c-11d34ee7d4d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:47:56 compute-0 nova_compute[247704]: 2026-01-31 07:47:56.447 247708 DEBUG oslo_concurrency.lockutils [req-93b18fb8-6b9f-4087-811a-1310e9897347 req-8df6ba4e-c587-46d1-8cfe-6fab295552d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:47:56 compute-0 nova_compute[247704]: 2026-01-31 07:47:56.448 247708 DEBUG oslo_concurrency.lockutils [req-93b18fb8-6b9f-4087-811a-1310e9897347 req-8df6ba4e-c587-46d1-8cfe-6fab295552d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:47:56 compute-0 nova_compute[247704]: 2026-01-31 07:47:56.448 247708 DEBUG nova.network.neutron [req-93b18fb8-6b9f-4087-811a-1310e9897347 req-8df6ba4e-c587-46d1-8cfe-6fab295552d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Refreshing network info cache for port 08928e39-ac5d-4e86-973c-11d34ee7d4d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:47:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:56.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 462 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 128 op/s
Jan 31 07:47:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:57.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:57 compute-0 nova_compute[247704]: 2026-01-31 07:47:57.282 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:47:57 compute-0 ceph-mon[74496]: osdmap e230: 3 total, 3 up, 3 in
Jan 31 07:47:57 compute-0 podman[287540]: 2026-01-31 07:47:57.9130583 +0000 UTC m=+0.077090850 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:47:58 compute-0 ceph-mon[74496]: pgmap v1501: 305 pgs: 305 active+clean; 462 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 128 op/s
Jan 31 07:47:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:47:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:47:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 484 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 151 op/s
Jan 31 07:47:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:47:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:47:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:59.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:00 compute-0 ceph-mon[74496]: pgmap v1502: 305 pgs: 305 active+clean; 484 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 151 op/s
Jan 31 07:48:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:00.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 484 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.7 MiB/s wr, 162 op/s
Jan 31 07:48:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:01.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:01 compute-0 nova_compute[247704]: 2026-01-31 07:48:01.212 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:01 compute-0 nova_compute[247704]: 2026-01-31 07:48:01.555 247708 DEBUG nova.network.neutron [req-93b18fb8-6b9f-4087-811a-1310e9897347 req-8df6ba4e-c587-46d1-8cfe-6fab295552d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updated VIF entry in instance network info cache for port 08928e39-ac5d-4e86-973c-11d34ee7d4d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:48:01 compute-0 nova_compute[247704]: 2026-01-31 07:48:01.556 247708 DEBUG nova.network.neutron [req-93b18fb8-6b9f-4087-811a-1310e9897347 req-8df6ba4e-c587-46d1-8cfe-6fab295552d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updating instance_info_cache with network_info: [{"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:48:01 compute-0 nova_compute[247704]: 2026-01-31 07:48:01.590 247708 DEBUG oslo_concurrency.lockutils [req-93b18fb8-6b9f-4087-811a-1310e9897347 req-8df6ba4e-c587-46d1-8cfe-6fab295552d4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:48:02 compute-0 nova_compute[247704]: 2026-01-31 07:48:02.284 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:02 compute-0 ceph-mon[74496]: pgmap v1503: 305 pgs: 305 active+clean; 484 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.7 MiB/s wr, 162 op/s
Jan 31 07:48:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.8 MiB/s wr, 178 op/s
Jan 31 07:48:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:03 compute-0 nova_compute[247704]: 2026-01-31 07:48:03.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 31 07:48:04 compute-0 ceph-mon[74496]: pgmap v1504: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.8 MiB/s wr, 178 op/s
Jan 31 07:48:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 31 07:48:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 31 07:48:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:04.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.0 MiB/s wr, 84 op/s
Jan 31 07:48:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:05 compute-0 nova_compute[247704]: 2026-01-31 07:48:05.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:05 compute-0 nova_compute[247704]: 2026-01-31 07:48:05.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:48:05 compute-0 nova_compute[247704]: 2026-01-31 07:48:05.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:48:05 compute-0 ceph-mon[74496]: osdmap e231: 3 total, 3 up, 3 in
Jan 31 07:48:05 compute-0 ceph-mon[74496]: pgmap v1506: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.0 MiB/s wr, 84 op/s
Jan 31 07:48:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1783155386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:06 compute-0 nova_compute[247704]: 2026-01-31 07:48:06.249 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 31 07:48:06 compute-0 nova_compute[247704]: 2026-01-31 07:48:06.468 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:48:06 compute-0 nova_compute[247704]: 2026-01-31 07:48:06.469 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:48:06 compute-0 nova_compute[247704]: 2026-01-31 07:48:06.469 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:48:06 compute-0 nova_compute[247704]: 2026-01-31 07:48:06.469 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7820d682-a83a-42ad-93c8-b1cc08346f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:48:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 31 07:48:06 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 31 07:48:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:06.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1085148854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:06 compute-0 ceph-mon[74496]: osdmap e232: 3 total, 3 up, 3 in
Jan 31 07:48:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 153 op/s
Jan 31 07:48:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:07.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.280 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "7820d682-a83a-42ad-93c8-b1cc08346f3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.281 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.281 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.282 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.282 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.284 247708 INFO nova.compute.manager [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Terminating instance
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.285 247708 DEBUG nova.compute.manager [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.287 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:07 compute-0 kernel: tap08928e39-ac (unregistering): left promiscuous mode
Jan 31 07:48:07 compute-0 NetworkManager[49108]: <info>  [1769845687.3575] device (tap08928e39-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.358 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:07 compute-0 ovn_controller[149457]: 2026-01-31T07:48:07Z|00202|binding|INFO|Releasing lport 08928e39-ac5d-4e86-973c-11d34ee7d4d4 from this chassis (sb_readonly=0)
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:07 compute-0 ovn_controller[149457]: 2026-01-31T07:48:07Z|00203|binding|INFO|Setting lport 08928e39-ac5d-4e86-973c-11d34ee7d4d4 down in Southbound
Jan 31 07:48:07 compute-0 ovn_controller[149457]: 2026-01-31T07:48:07Z|00204|binding|INFO|Removing iface tap08928e39-ac ovn-installed in OVS
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.405 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.415 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:07 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Jan 31 07:48:07 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003e.scope: Consumed 14.461s CPU time.
Jan 31 07:48:07 compute-0 systemd-machined[214448]: Machine qemu-27-instance-0000003e terminated.
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.529 247708 INFO nova.virt.libvirt.driver [-] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Instance destroyed successfully.
Jan 31 07:48:07 compute-0 nova_compute[247704]: 2026-01-31 07:48:07.531 247708 DEBUG nova.objects.instance [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lazy-loading 'resources' on Instance uuid 7820d682-a83a-42ad-93c8-b1cc08346f3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:07.707 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:85:bb 10.100.0.10'], port_security=['fa:16:3e:4a:85:bb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7820d682-a83a-42ad-93c8-b1cc08346f3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-281c222e-0b76-45d6-a8af-2dd38564086d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0324e00066c44c39612c68322b97366', 'neutron:revision_number': '4', 'neutron:security_group_ids': '572e2d2d-5b9e-4092-81c5-309971c0ef99', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=213645cd-f029-4c15-8cb0-4306a402168a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=08928e39-ac5d-4e86-973c-11d34ee7d4d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:07.709 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 08928e39-ac5d-4e86-973c-11d34ee7d4d4 in datapath 281c222e-0b76-45d6-a8af-2dd38564086d unbound from our chassis
Jan 31 07:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:07.710 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 281c222e-0b76-45d6-a8af-2dd38564086d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:07.712 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d03af60e-24de-4d48-af2c-95ef7cadf71b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:07.713 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d namespace which is not needed anymore
Jan 31 07:48:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 31 07:48:07 compute-0 neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d[286389]: [NOTICE]   (286393) : haproxy version is 2.8.14-c23fe91
Jan 31 07:48:07 compute-0 neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d[286389]: [NOTICE]   (286393) : path to executable is /usr/sbin/haproxy
Jan 31 07:48:07 compute-0 neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d[286389]: [WARNING]  (286393) : Exiting Master process...
Jan 31 07:48:07 compute-0 neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d[286389]: [ALERT]    (286393) : Current worker (286395) exited with code 143 (Terminated)
Jan 31 07:48:07 compute-0 neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d[286389]: [WARNING]  (286393) : All workers exited. Exiting... (0)
Jan 31 07:48:07 compute-0 systemd[1]: libpod-3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014.scope: Deactivated successfully.
Jan 31 07:48:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 31 07:48:07 compute-0 podman[287600]: 2026-01-31 07:48:07.903163075 +0000 UTC m=+0.062781208 container died 3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:48:07 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 31 07:48:07 compute-0 ceph-mon[74496]: pgmap v1508: 305 pgs: 305 active+clean; 484 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 153 op/s
Jan 31 07:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014-userdata-shm.mount: Deactivated successfully.
Jan 31 07:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-83cf565faa54b8ab10c6e8185cbcad6cae361c9d8e61057fbe7cdc722199bcb0-merged.mount: Deactivated successfully.
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.014 247708 DEBUG nova.virt.libvirt.vif [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1008557590',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1008557590',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-100855759',id=62,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:47:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0324e00066c44c39612c68322b97366',ramdisk_id='',reservation_id='r-wtupnsuu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1518292426',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1518292426-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:47:20Z,user_data=None,user_id='392b30fc4156484392c30737a737bcac',uuid=7820d682-a83a-42ad-93c8-b1cc08346f3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.015 247708 DEBUG nova.network.os_vif_util [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Converting VIF {"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.015 247708 DEBUG nova.network.os_vif_util [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:85:bb,bridge_name='br-int',has_traffic_filtering=True,id=08928e39-ac5d-4e86-973c-11d34ee7d4d4,network=Network(281c222e-0b76-45d6-a8af-2dd38564086d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08928e39-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.016 247708 DEBUG os_vif [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:85:bb,bridge_name='br-int',has_traffic_filtering=True,id=08928e39-ac5d-4e86-973c-11d34ee7d4d4,network=Network(281c222e-0b76-45d6-a8af-2dd38564086d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08928e39-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.019 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.019 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08928e39-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.022 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.026 247708 INFO os_vif [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:85:bb,bridge_name='br-int',has_traffic_filtering=True,id=08928e39-ac5d-4e86-973c-11d34ee7d4d4,network=Network(281c222e-0b76-45d6-a8af-2dd38564086d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08928e39-ac')
Jan 31 07:48:08 compute-0 podman[287600]: 2026-01-31 07:48:08.056187802 +0000 UTC m=+0.215805935 container cleanup 3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:48:08 compute-0 systemd[1]: libpod-conmon-3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014.scope: Deactivated successfully.
Jan 31 07:48:08 compute-0 podman[287646]: 2026-01-31 07:48:08.183007118 +0000 UTC m=+0.103980118 container remove 3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.189 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f0cc6a6a-5284-4eb4-8894-4f8175d72944]: (4, ('Sat Jan 31 07:48:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d (3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014)\n3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014\nSat Jan 31 07:48:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d (3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014)\n3b18a606d584c1b9b92d59d6fa118a3d86dec60df4d7478016d1c29c3f64a014\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.191 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8aac7ab1-f0ef-4c80-89f6-6d67b6d6efbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.191 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap281c222e-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:08 compute-0 kernel: tap281c222e-00: left promiscuous mode
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.193 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.201 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.203 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[906ad77c-e7cd-4387-9d92-c19157af37c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.226 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9c33fe-45dd-4e77-ad46-3b71832de8cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.228 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4ad710-3d3e-4726-aa0d-56ad2ac93b5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.248 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[18771472-bf96-4cb5-85a5-6810a46d619e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588181, 'reachable_time': 35882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287665, 'error': None, 'target': 'ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.252 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-281c222e-0b76-45d6-a8af-2dd38564086d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:48:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d281c222e\x2d0b76\x2d45d6\x2da8af\x2d2dd38564086d.mount: Deactivated successfully.
Jan 31 07:48:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:08.253 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[14659484-3d3f-4529-b68f-91306fc0ba03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:08.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 513 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 31 07:48:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.971 247708 INFO nova.virt.libvirt.driver [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Deleting instance files /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e_del
Jan 31 07:48:08 compute-0 nova_compute[247704]: 2026-01-31 07:48:08.975 247708 INFO nova.virt.libvirt.driver [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Deletion of /var/lib/nova/instances/7820d682-a83a-42ad-93c8-b1cc08346f3e_del complete
Jan 31 07:48:09 compute-0 ceph-mon[74496]: osdmap e233: 3 total, 3 up, 3 in
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.014 247708 DEBUG nova.compute.manager [req-83d27632-fdf0-4c26-af9d-757c8bdce13f req-16dc9fb9-ab09-462b-adda-47329504c315 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-vif-unplugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.015 247708 DEBUG oslo_concurrency.lockutils [req-83d27632-fdf0-4c26-af9d-757c8bdce13f req-16dc9fb9-ab09-462b-adda-47329504c315 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.015 247708 DEBUG oslo_concurrency.lockutils [req-83d27632-fdf0-4c26-af9d-757c8bdce13f req-16dc9fb9-ab09-462b-adda-47329504c315 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.016 247708 DEBUG oslo_concurrency.lockutils [req-83d27632-fdf0-4c26-af9d-757c8bdce13f req-16dc9fb9-ab09-462b-adda-47329504c315 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.016 247708 DEBUG nova.compute.manager [req-83d27632-fdf0-4c26-af9d-757c8bdce13f req-16dc9fb9-ab09-462b-adda-47329504c315 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] No waiting events found dispatching network-vif-unplugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.017 247708 DEBUG nova.compute.manager [req-83d27632-fdf0-4c26-af9d-757c8bdce13f req-16dc9fb9-ab09-462b-adda-47329504c315 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-vif-unplugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:48:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 31 07:48:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 31 07:48:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:09.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.191 247708 INFO nova.compute.manager [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Took 1.91 seconds to destroy the instance on the hypervisor.
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.191 247708 DEBUG oslo.service.loopingcall [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.192 247708 DEBUG nova.compute.manager [-] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:48:09 compute-0 nova_compute[247704]: 2026-01-31 07:48:09.192 247708 DEBUG nova.network.neutron [-] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:48:10 compute-0 ceph-mon[74496]: pgmap v1510: 305 pgs: 305 active+clean; 513 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 31 07:48:10 compute-0 ceph-mon[74496]: osdmap e234: 3 total, 3 up, 3 in
Jan 31 07:48:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:10.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 504 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.8 MiB/s wr, 280 op/s
Jan 31 07:48:10 compute-0 nova_compute[247704]: 2026-01-31 07:48:10.990 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updating instance_info_cache with network_info: [{"id": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "address": "fa:16:3e:4a:85:bb", "network": {"id": "281c222e-0b76-45d6-a8af-2dd38564086d", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1922705707-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0324e00066c44c39612c68322b97366", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08928e39-ac", "ovs_interfaceid": "08928e39-ac5d-4e86-973c-11d34ee7d4d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.046 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-7820d682-a83a-42ad-93c8-b1cc08346f3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.047 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.047 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.047 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.047 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.047 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.048 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.048 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.048 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.083 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.084 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.084 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.084 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.084 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:11.158 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:11.158 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:11.159 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:11.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/369527385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.211 247708 DEBUG nova.network.neutron [-] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.230 247708 DEBUG nova.compute.manager [req-99fc21c7-e4c7-449c-be15-8caa9523ae91 req-1acb8426-7127-4e7d-b16c-a89cb22467b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.230 247708 DEBUG oslo_concurrency.lockutils [req-99fc21c7-e4c7-449c-be15-8caa9523ae91 req-1acb8426-7127-4e7d-b16c-a89cb22467b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.230 247708 DEBUG oslo_concurrency.lockutils [req-99fc21c7-e4c7-449c-be15-8caa9523ae91 req-1acb8426-7127-4e7d-b16c-a89cb22467b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.230 247708 DEBUG oslo_concurrency.lockutils [req-99fc21c7-e4c7-449c-be15-8caa9523ae91 req-1acb8426-7127-4e7d-b16c-a89cb22467b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.230 247708 DEBUG nova.compute.manager [req-99fc21c7-e4c7-449c-be15-8caa9523ae91 req-1acb8426-7127-4e7d-b16c-a89cb22467b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] No waiting events found dispatching network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.231 247708 WARNING nova.compute.manager [req-99fc21c7-e4c7-449c-be15-8caa9523ae91 req-1acb8426-7127-4e7d-b16c-a89cb22467b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received unexpected event network-vif-plugged-08928e39-ac5d-4e86-973c-11d34ee7d4d4 for instance with vm_state active and task_state deleting.
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.247 247708 INFO nova.compute.manager [-] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Took 2.06 seconds to deallocate network for instance.
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.309 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.309 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.394 247708 DEBUG oslo_concurrency.processutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:48:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272989584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.578 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:11 compute-0 podman[287712]: 2026-01-31 07:48:11.710305078 +0000 UTC m=+0.096527985 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.733 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.734 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4563MB free_disk=20.86670684814453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.734 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:48:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063594915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.862 247708 DEBUG oslo_concurrency.processutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:11 compute-0 nova_compute[247704]: 2026-01-31 07:48:11.867 247708 DEBUG nova.compute.provider_tree [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.056 247708 DEBUG nova.scheduler.client.report [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.165 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.170 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.289 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.302 247708 INFO nova.scheduler.client.report [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Deleted allocations for instance 7820d682-a83a-42ad-93c8-b1cc08346f3e
Jan 31 07:48:12 compute-0 ceph-mon[74496]: pgmap v1512: 305 pgs: 305 active+clean; 504 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 7.8 MiB/s wr, 280 op/s
Jan 31 07:48:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4272989584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4063594915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3531660523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.376 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.376 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.401 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.456 247708 DEBUG oslo_concurrency.lockutils [None req-e7f1c95c-4805-40af-ae64-65eeeeb00661 392b30fc4156484392c30737a737bcac f0324e00066c44c39612c68322b97366 - - default default] Lock "7820d682-a83a-42ad-93c8-b1cc08346f3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:48:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177206803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 484 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.4 MiB/s wr, 199 op/s
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.899 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.907 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.930 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.971 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:48:12 compute-0 nova_compute[247704]: 2026-01-31 07:48:12.972 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:13 compute-0 nova_compute[247704]: 2026-01-31 07:48:13.022 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:13.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3177206803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:13 compute-0 nova_compute[247704]: 2026-01-31 07:48:13.428 247708 DEBUG nova.compute.manager [req-af26dc64-7a07-44b4-9919-c1700429e5ca req-c93a7eca-ec84-488f-a92d-05ff5aa0edca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Received event network-vif-deleted-08928e39-ac5d-4e86-973c-11d34ee7d4d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:48:13 compute-0 nova_compute[247704]: 2026-01-31 07:48:13.967 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:13 compute-0 nova_compute[247704]: 2026-01-31 07:48:13.968 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:48:14 compute-0 sudo[287766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:14 compute-0 sudo[287766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:14 compute-0 sudo[287766]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:14 compute-0 sudo[287791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:14 compute-0 sudo[287791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:14 compute-0 sudo[287791]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:14 compute-0 ceph-mon[74496]: pgmap v1513: 305 pgs: 305 active+clean; 484 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.4 MiB/s wr, 199 op/s
Jan 31 07:48:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:14.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 484 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 158 op/s
Jan 31 07:48:15 compute-0 nova_compute[247704]: 2026-01-31 07:48:15.040 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:15.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:16 compute-0 ceph-mon[74496]: pgmap v1514: 305 pgs: 305 active+clean; 484 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 158 op/s
Jan 31 07:48:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 31 07:48:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 31 07:48:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 31 07:48:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:16.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 512 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.7 MiB/s wr, 185 op/s
Jan 31 07:48:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:48:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:17.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:48:17 compute-0 nova_compute[247704]: 2026-01-31 07:48:17.290 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:17 compute-0 ceph-mon[74496]: osdmap e235: 3 total, 3 up, 3 in
Jan 31 07:48:18 compute-0 nova_compute[247704]: 2026-01-31 07:48:18.024 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:18 compute-0 ceph-mon[74496]: pgmap v1516: 305 pgs: 305 active+clean; 512 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.7 MiB/s wr, 185 op/s
Jan 31 07:48:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:18.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 516 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.6 MiB/s wr, 171 op/s
Jan 31 07:48:18 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 07:48:18 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 07:48:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:19.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:19 compute-0 ceph-mon[74496]: pgmap v1517: 305 pgs: 305 active+clean; 516 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.6 MiB/s wr, 171 op/s
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:48:20
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.rgw.root', 'volumes', 'vms']
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:48:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.6 MiB/s wr, 169 op/s
Jan 31 07:48:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:21.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:21.252 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:48:21 compute-0 nova_compute[247704]: 2026-01-31 07:48:21.253 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:21.253 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:48:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:21 compute-0 nova_compute[247704]: 2026-01-31 07:48:21.617 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:21 compute-0 nova_compute[247704]: 2026-01-31 07:48:21.697 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:21 compute-0 ceph-mon[74496]: pgmap v1518: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.6 MiB/s wr, 169 op/s
Jan 31 07:48:22 compute-0 nova_compute[247704]: 2026-01-31 07:48:22.323 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:22 compute-0 nova_compute[247704]: 2026-01-31 07:48:22.528 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845687.5265164, 7820d682-a83a-42ad-93c8-b1cc08346f3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:48:22 compute-0 nova_compute[247704]: 2026-01-31 07:48:22.529 247708 INFO nova.compute.manager [-] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] VM Stopped (Lifecycle Event)
Jan 31 07:48:22 compute-0 nova_compute[247704]: 2026-01-31 07:48:22.565 247708 DEBUG nova.compute.manager [None req-0525c6f2-501b-4b38-9806-d79f295ac570 - - - - - -] [instance: 7820d682-a83a-42ad-93c8-b1cc08346f3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:48:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 2.6 MiB/s wr, 227 op/s
Jan 31 07:48:23 compute-0 nova_compute[247704]: 2026-01-31 07:48:23.026 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:48:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:23.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:48:24 compute-0 ceph-mon[74496]: pgmap v1519: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 2.6 MiB/s wr, 227 op/s
Jan 31 07:48:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 2.6 MiB/s wr, 227 op/s
Jan 31 07:48:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:25.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 31 07:48:26 compute-0 ceph-mon[74496]: pgmap v1520: 305 pgs: 305 active+clean; 516 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 481 KiB/s rd, 2.6 MiB/s wr, 227 op/s
Jan 31 07:48:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 31 07:48:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 31 07:48:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:26.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 516 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 191 KiB/s rd, 900 KiB/s wr, 235 op/s
Jan 31 07:48:27 compute-0 ceph-mon[74496]: osdmap e236: 3 total, 3 up, 3 in
Jan 31 07:48:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:27.255 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:27 compute-0 nova_compute[247704]: 2026-01-31 07:48:27.326 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:28 compute-0 nova_compute[247704]: 2026-01-31 07:48:28.028 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 31 07:48:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 31 07:48:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 31 07:48:28 compute-0 ceph-mon[74496]: pgmap v1522: 305 pgs: 305 active+clean; 516 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 191 KiB/s rd, 900 KiB/s wr, 235 op/s
Jan 31 07:48:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 487 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 45 KiB/s wr, 232 op/s
Jan 31 07:48:28 compute-0 podman[287824]: 2026-01-31 07:48:28.932622375 +0000 UTC m=+0.097959309 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 07:48:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:48:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:29.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:48:29 compute-0 ceph-mon[74496]: osdmap e237: 3 total, 3 up, 3 in
Jan 31 07:48:29 compute-0 sshd-session[287844]: Invalid user sol from 45.148.10.240 port 47848
Jan 31 07:48:29 compute-0 sshd-session[287844]: Connection closed by invalid user sol 45.148.10.240 port 47848 [preauth]
Jan 31 07:48:30 compute-0 ceph-mon[74496]: pgmap v1524: 305 pgs: 305 active+clean; 487 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 45 KiB/s wr, 232 op/s
Jan 31 07:48:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:30.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 380 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 57 KiB/s wr, 151 op/s
Jan 31 07:48:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:31.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:31 compute-0 ceph-mon[74496]: pgmap v1525: 305 pgs: 305 active+clean; 380 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 57 KiB/s wr, 151 op/s
Jan 31 07:48:32 compute-0 nova_compute[247704]: 2026-01-31 07:48:32.329 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 31 07:48:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 31 07:48:32 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 31 07:48:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 334 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 18 KiB/s wr, 99 op/s
Jan 31 07:48:33 compute-0 nova_compute[247704]: 2026-01-31 07:48:33.030 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:33.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:33 compute-0 ceph-mon[74496]: osdmap e238: 3 total, 3 up, 3 in
Jan 31 07:48:33 compute-0 ceph-mon[74496]: pgmap v1527: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 334 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 18 KiB/s wr, 99 op/s
Jan 31 07:48:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1982302776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:34 compute-0 sudo[287849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:34 compute-0 sudo[287849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:34 compute-0 sudo[287849]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:34 compute-0 sudo[287874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:34 compute-0 sudo[287874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:34 compute-0 sudo[287874]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 334 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005847197503722315 of space, bias 1.0, pg target 1.7541592511166946 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004059104841336311 of space, bias 1.0, pg target 1.213672347559557 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 07:48:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:35.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:36 compute-0 ceph-mon[74496]: pgmap v1528: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 334 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 07:48:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 31 07:48:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 31 07:48:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 31 07:48:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:36.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 211 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 15 KiB/s wr, 100 op/s
Jan 31 07:48:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:37.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:37 compute-0 nova_compute[247704]: 2026-01-31 07:48:37.372 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:37 compute-0 ceph-mon[74496]: osdmap e239: 3 total, 3 up, 3 in
Jan 31 07:48:37 compute-0 ceph-mon[74496]: pgmap v1530: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 211 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 15 KiB/s wr, 100 op/s
Jan 31 07:48:38 compute-0 nova_compute[247704]: 2026-01-31 07:48:38.033 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:38.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 200 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 81 op/s
Jan 31 07:48:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:39.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:40 compute-0 nova_compute[247704]: 2026-01-31 07:48:40.356 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:40 compute-0 nova_compute[247704]: 2026-01-31 07:48:40.357 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:40 compute-0 ceph-mon[74496]: pgmap v1531: 305 pgs: 305 active+clean; 200 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 81 op/s
Jan 31 07:48:40 compute-0 nova_compute[247704]: 2026-01-31 07:48:40.377 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:48:40 compute-0 nova_compute[247704]: 2026-01-31 07:48:40.480 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:40 compute-0 nova_compute[247704]: 2026-01-31 07:48:40.480 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:40 compute-0 nova_compute[247704]: 2026-01-31 07:48:40.489 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:48:40 compute-0 nova_compute[247704]: 2026-01-31 07:48:40.489 247708 INFO nova.compute.claims [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:48:40 compute-0 nova_compute[247704]: 2026-01-31 07:48:40.635 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:40.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 148 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 4.3 KiB/s wr, 104 op/s
Jan 31 07:48:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:48:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633645125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.106 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.114 247708 DEBUG nova.compute.provider_tree [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.136 247708 DEBUG nova.scheduler.client.report [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.163 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.164 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:48:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:41.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.227 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.228 247708 DEBUG nova.network.neutron [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.255 247708 INFO nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.277 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.377 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.379 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.379 247708 INFO nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Creating image(s)
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.408 247708 DEBUG nova.storage.rbd_utils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.447 247708 DEBUG nova.storage.rbd_utils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:48:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.486 247708 DEBUG nova.storage.rbd_utils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.491 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.511 247708 DEBUG nova.policy [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97abab8eb79247cd89fb2ebff295b890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.546 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.547 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.549 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.550 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.593 247708 DEBUG nova.storage.rbd_utils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:48:41 compute-0 nova_compute[247704]: 2026-01-31 07:48:41.598 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3633645125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 31 07:48:41 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 31 07:48:41 compute-0 podman[288016]: 2026-01-31 07:48:41.919262394 +0000 UTC m=+0.090429975 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 07:48:42 compute-0 nova_compute[247704]: 2026-01-31 07:48:42.373 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:42 compute-0 nova_compute[247704]: 2026-01-31 07:48:42.588 247708 DEBUG nova.network.neutron [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Successfully created port: 64a7f6f3-af5d-4049-948c-1a5105761574 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:48:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:42.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:42 compute-0 ceph-mon[74496]: pgmap v1532: 305 pgs: 305 active+clean; 148 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 4.3 KiB/s wr, 104 op/s
Jan 31 07:48:42 compute-0 ceph-mon[74496]: osdmap e240: 3 total, 3 up, 3 in
Jan 31 07:48:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3212683337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 121 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 4.9 KiB/s wr, 99 op/s
Jan 31 07:48:43 compute-0 nova_compute[247704]: 2026-01-31 07:48:43.078 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:43.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:43 compute-0 nova_compute[247704]: 2026-01-31 07:48:43.716 247708 DEBUG nova.network.neutron [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Successfully updated port: 64a7f6f3-af5d-4049-948c-1a5105761574 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:48:43 compute-0 nova_compute[247704]: 2026-01-31 07:48:43.733 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "refresh_cache-4332ef04-94ba-46ae-98db-3c3f9fb470b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:48:43 compute-0 nova_compute[247704]: 2026-01-31 07:48:43.733 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquired lock "refresh_cache-4332ef04-94ba-46ae-98db-3c3f9fb470b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:48:43 compute-0 nova_compute[247704]: 2026-01-31 07:48:43.733 247708 DEBUG nova.network.neutron [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:48:43 compute-0 nova_compute[247704]: 2026-01-31 07:48:43.899 247708 DEBUG nova.compute.manager [req-452e2c01-29c8-4062-9772-7e72337fd998 req-623cd554-14ba-421b-a330-e2fa1f55f595 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received event network-changed-64a7f6f3-af5d-4049-948c-1a5105761574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:48:43 compute-0 nova_compute[247704]: 2026-01-31 07:48:43.899 247708 DEBUG nova.compute.manager [req-452e2c01-29c8-4062-9772-7e72337fd998 req-623cd554-14ba-421b-a330-e2fa1f55f595 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Refreshing instance network info cache due to event network-changed-64a7f6f3-af5d-4049-948c-1a5105761574. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:48:43 compute-0 nova_compute[247704]: 2026-01-31 07:48:43.899 247708 DEBUG oslo_concurrency.lockutils [req-452e2c01-29c8-4062-9772-7e72337fd998 req-623cd554-14ba-421b-a330-e2fa1f55f595 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4332ef04-94ba-46ae-98db-3c3f9fb470b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:48:44 compute-0 nova_compute[247704]: 2026-01-31 07:48:44.029 247708 DEBUG nova.network.neutron [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:48:44 compute-0 ceph-mon[74496]: pgmap v1534: 305 pgs: 305 active+clean; 121 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 4.9 KiB/s wr, 99 op/s
Jan 31 07:48:44 compute-0 nova_compute[247704]: 2026-01-31 07:48:44.143 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:44 compute-0 nova_compute[247704]: 2026-01-31 07:48:44.277 247708 DEBUG nova.storage.rbd_utils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] resizing rbd image 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:48:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:44.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:48:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626385488' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:48:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:48:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626385488' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:48:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 121 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.5 KiB/s wr, 41 op/s
Jan 31 07:48:45 compute-0 nova_compute[247704]: 2026-01-31 07:48:45.143 247708 DEBUG nova.objects.instance [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'migration_context' on Instance uuid 4332ef04-94ba-46ae-98db-3c3f9fb470b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:48:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:45.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:45 compute-0 nova_compute[247704]: 2026-01-31 07:48:45.417 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:48:45 compute-0 nova_compute[247704]: 2026-01-31 07:48:45.417 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Ensure instance console log exists: /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:48:45 compute-0 nova_compute[247704]: 2026-01-31 07:48:45.418 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:45 compute-0 nova_compute[247704]: 2026-01-31 07:48:45.418 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:45 compute-0 nova_compute[247704]: 2026-01-31 07:48:45.419 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/626385488' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:48:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/626385488' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.029 247708 DEBUG nova.network.neutron [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Updating instance_info_cache with network_info: [{"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.055 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Releasing lock "refresh_cache-4332ef04-94ba-46ae-98db-3c3f9fb470b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.056 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Instance network_info: |[{"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.056 247708 DEBUG oslo_concurrency.lockutils [req-452e2c01-29c8-4062-9772-7e72337fd998 req-623cd554-14ba-421b-a330-e2fa1f55f595 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4332ef04-94ba-46ae-98db-3c3f9fb470b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.057 247708 DEBUG nova.network.neutron [req-452e2c01-29c8-4062-9772-7e72337fd998 req-623cd554-14ba-421b-a330-e2fa1f55f595 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Refreshing network info cache for port 64a7f6f3-af5d-4049-948c-1a5105761574 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.062 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Start _get_guest_xml network_info=[{"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.073 247708 WARNING nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.083 247708 DEBUG nova.virt.libvirt.host [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.084 247708 DEBUG nova.virt.libvirt.host [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.088 247708 DEBUG nova.virt.libvirt.host [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.088 247708 DEBUG nova.virt.libvirt.host [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.090 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.090 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.091 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.091 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.091 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.092 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.092 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.092 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.093 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.093 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.093 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.093 247708 DEBUG nova.virt.hardware [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.097 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:48:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4148921643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.563 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.595 247708 DEBUG nova.storage.rbd_utils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:48:46 compute-0 nova_compute[247704]: 2026-01-31 07:48:46.601 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:46.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:46 compute-0 ceph-mon[74496]: pgmap v1535: 305 pgs: 305 active+clean; 121 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.5 KiB/s wr, 41 op/s
Jan 31 07:48:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4148921643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:48:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 153 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Jan 31 07:48:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:48:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3370960855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.128 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.131 247708 DEBUG nova.virt.libvirt.vif [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1565268821',display_name='tempest-DeleteServersTestJSON-server-1565268821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1565268821',id=66,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-hy2sfyxh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-
2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:48:41Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4332ef04-94ba-46ae-98db-3c3f9fb470b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.132 247708 DEBUG nova.network.os_vif_util [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.133 247708 DEBUG nova.network.os_vif_util [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:bf:d8,bridge_name='br-int',has_traffic_filtering=True,id=64a7f6f3-af5d-4049-948c-1a5105761574,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64a7f6f3-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.135 247708 DEBUG nova.objects.instance [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4332ef04-94ba-46ae-98db-3c3f9fb470b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.170 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <uuid>4332ef04-94ba-46ae-98db-3c3f9fb470b2</uuid>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <name>instance-00000042</name>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <nova:name>tempest-DeleteServersTestJSON-server-1565268821</nova:name>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:48:46</nova:creationTime>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <nova:user uuid="97abab8eb79247cd89fb2ebff295b890">tempest-DeleteServersTestJSON-2031597701-project-member</nova:user>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <nova:project uuid="f299640bb1f64e5fa12b23955e5a2127">tempest-DeleteServersTestJSON-2031597701</nova:project>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <nova:port uuid="64a7f6f3-af5d-4049-948c-1a5105761574">
Jan 31 07:48:47 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <system>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <entry name="serial">4332ef04-94ba-46ae-98db-3c3f9fb470b2</entry>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <entry name="uuid">4332ef04-94ba-46ae-98db-3c3f9fb470b2</entry>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </system>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <os>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   </os>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <features>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   </features>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk">
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       </source>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk.config">
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       </source>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:48:47 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:71:bf:d8"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <target dev="tap64a7f6f3-af"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2/console.log" append="off"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <video>
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </video>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:48:47 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:48:47 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:48:47 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:48:47 compute-0 nova_compute[247704]: </domain>
Jan 31 07:48:47 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.173 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Preparing to wait for external event network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.174 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.175 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.175 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.177 247708 DEBUG nova.virt.libvirt.vif [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1565268821',display_name='tempest-DeleteServersTestJSON-server-1565268821',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1565268821',id=66,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-hy2sfyxh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServer
sTestJSON-2031597701-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:48:41Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4332ef04-94ba-46ae-98db-3c3f9fb470b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.178 247708 DEBUG nova.network.os_vif_util [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.180 247708 DEBUG nova.network.os_vif_util [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:bf:d8,bridge_name='br-int',has_traffic_filtering=True,id=64a7f6f3-af5d-4049-948c-1a5105761574,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64a7f6f3-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.181 247708 DEBUG os_vif [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:bf:d8,bridge_name='br-int',has_traffic_filtering=True,id=64a7f6f3-af5d-4049-948c-1a5105761574,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64a7f6f3-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.182 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.183 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.183 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.190 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.191 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64a7f6f3-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.192 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64a7f6f3-af, col_values=(('external_ids', {'iface-id': '64a7f6f3-af5d-4049-948c-1a5105761574', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:bf:d8', 'vm-uuid': '4332ef04-94ba-46ae-98db-3c3f9fb470b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:47 compute-0 NetworkManager[49108]: <info>  [1769845727.1965] manager: (tap64a7f6f3-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.198 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.204 247708 INFO os_vif [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:bf:d8,bridge_name='br-int',has_traffic_filtering=True,id=64a7f6f3-af5d-4049-948c-1a5105761574,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64a7f6f3-af')
Jan 31 07:48:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:47.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.345 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.346 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.346 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] No VIF found with MAC fa:16:3e:71:bf:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.346 247708 INFO nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Using config drive
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.439 247708 DEBUG nova.storage.rbd_utils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:48:47 compute-0 nova_compute[247704]: 2026-01-31 07:48:47.445 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:48 compute-0 ceph-mon[74496]: pgmap v1536: 305 pgs: 305 active+clean; 153 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Jan 31 07:48:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3370960855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:48:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:48.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 154 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 07:48:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:49.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:49 compute-0 nova_compute[247704]: 2026-01-31 07:48:49.546 247708 INFO nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Creating config drive at /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2/disk.config
Jan 31 07:48:49 compute-0 nova_compute[247704]: 2026-01-31 07:48:49.550 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpp3la9xla execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:49 compute-0 nova_compute[247704]: 2026-01-31 07:48:49.689 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpp3la9xla" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:49 compute-0 nova_compute[247704]: 2026-01-31 07:48:49.721 247708 DEBUG nova.storage.rbd_utils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] rbd image 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:48:49 compute-0 nova_compute[247704]: 2026-01-31 07:48:49.726 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2/disk.config 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.349 247708 DEBUG oslo_concurrency.processutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2/disk.config 4332ef04-94ba-46ae-98db-3c3f9fb470b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.350 247708 INFO nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Deleting local config drive /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2/disk.config because it was imported into RBD.
Jan 31 07:48:50 compute-0 ceph-mon[74496]: pgmap v1537: 305 pgs: 305 active+clean; 154 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 07:48:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3811372512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.397 247708 DEBUG nova.network.neutron [req-452e2c01-29c8-4062-9772-7e72337fd998 req-623cd554-14ba-421b-a330-e2fa1f55f595 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Updated VIF entry in instance network info cache for port 64a7f6f3-af5d-4049-948c-1a5105761574. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.398 247708 DEBUG nova.network.neutron [req-452e2c01-29c8-4062-9772-7e72337fd998 req-623cd554-14ba-421b-a330-e2fa1f55f595 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Updating instance_info_cache with network_info: [{"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:48:50 compute-0 kernel: tap64a7f6f3-af: entered promiscuous mode
Jan 31 07:48:50 compute-0 NetworkManager[49108]: <info>  [1769845730.4089] manager: (tap64a7f6f3-af): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Jan 31 07:48:50 compute-0 ovn_controller[149457]: 2026-01-31T07:48:50Z|00205|binding|INFO|Claiming lport 64a7f6f3-af5d-4049-948c-1a5105761574 for this chassis.
Jan 31 07:48:50 compute-0 ovn_controller[149457]: 2026-01-31T07:48:50Z|00206|binding|INFO|64a7f6f3-af5d-4049-948c-1a5105761574: Claiming fa:16:3e:71:bf:d8 10.100.0.13
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.409 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.415 247708 DEBUG oslo_concurrency.lockutils [req-452e2c01-29c8-4062-9772-7e72337fd998 req-623cd554-14ba-421b-a330-e2fa1f55f595 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4332ef04-94ba-46ae-98db-3c3f9fb470b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.417 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.421 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:bf:d8 10.100.0.13'], port_security=['fa:16:3e:71:bf:d8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4332ef04-94ba-46ae-98db-3c3f9fb470b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=64a7f6f3-af5d-4049-948c-1a5105761574) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.422 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 64a7f6f3-af5d-4049-948c-1a5105761574 in datapath 60244e92-1524-47f0-8207-05d0104afa47 bound to our chassis
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.424 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:48:50 compute-0 systemd-machined[214448]: New machine qemu-28-instance-00000042.
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.434 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc3222e-3956-4321-8479-68e50cd1c863]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.435 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60244e92-11 in ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.438 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60244e92-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.438 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7f79f43e-04e8-4658-82a3-3283015cf8e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.439 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9f192097-265e-49e9-8bdd-a175ce989a32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.450 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[5e37649d-92c8-47c7-a50e-17d528a0b8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000042.
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.459 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:50 compute-0 ovn_controller[149457]: 2026-01-31T07:48:50Z|00207|binding|INFO|Setting lport 64a7f6f3-af5d-4049-948c-1a5105761574 ovn-installed in OVS
Jan 31 07:48:50 compute-0 ovn_controller[149457]: 2026-01-31T07:48:50Z|00208|binding|INFO|Setting lport 64a7f6f3-af5d-4049-948c-1a5105761574 up in Southbound
Jan 31 07:48:50 compute-0 systemd-udevd[288260]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.465 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.476 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b953a82d-9c77-4bbe-bbd0-1aaf6e145112]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 NetworkManager[49108]: <info>  [1769845730.4804] device (tap64a7f6f3-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:48:50 compute-0 NetworkManager[49108]: <info>  [1769845730.4814] device (tap64a7f6f3-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.508 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[019fd293-b5e0-441a-8851-be14b0da23a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.513 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[24156621-39d7-403f-a602-6e740d247a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 NetworkManager[49108]: <info>  [1769845730.5150] manager: (tap60244e92-10): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.538 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6d8d51-cf70-4953-9b98-776e6bacb7d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.542 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4b56cc-76a6-4e0c-87e5-a822e1b7d5c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 NetworkManager[49108]: <info>  [1769845730.5638] device (tap60244e92-10): carrier: link connected
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.568 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9854f97f-d448-4888-aca0-c558177d0c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.585 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a4177b1b-6362-401a-9b63-85954e7e20b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597318, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288290, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.603 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5bf763-7d0f-44cd-b46e-02245b6925b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:59f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597318, 'tstamp': 597318}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288291, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.623 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cc21f916-a956-4cc2-b88c-9e83531adf74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60244e92-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:59:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597318, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288292, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.656 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f507b7-5635-457b-bd87-70bc6ea6a9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.714 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9255d540-63de-442f-83b3-cd73409d2431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.716 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.717 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.717 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60244e92-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.720 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:50 compute-0 kernel: tap60244e92-10: entered promiscuous mode
Jan 31 07:48:50 compute-0 NetworkManager[49108]: <info>  [1769845730.7211] manager: (tap60244e92-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.725 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60244e92-10, col_values=(('external_ids', {'iface-id': '4e20d708-9f46-41f3-86c0-9ab3849f8392'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.726 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:50 compute-0 ovn_controller[149457]: 2026-01-31T07:48:50Z|00209|binding|INFO|Releasing lport 4e20d708-9f46-41f3-86c0-9ab3849f8392 from this chassis (sb_readonly=0)
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.729 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.730 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[13daf819-58c5-4c1f-89b2-818d62918512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.731 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/60244e92-1524-47f0-8207-05d0104afa47.pid.haproxy
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 60244e92-1524-47f0-8207-05d0104afa47
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.732 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:50.732 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'env', 'PROCESS_TAG=haproxy-60244e92-1524-47f0-8207-05d0104afa47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60244e92-1524-47f0-8207-05d0104afa47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.864 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845730.8644862, 4332ef04-94ba-46ae-98db-3c3f9fb470b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.865 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] VM Started (Lifecycle Event)
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.892 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.897 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845730.8646343, 4332ef04-94ba-46ae-98db-3c3f9fb470b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.897 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] VM Paused (Lifecycle Event)
Jan 31 07:48:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.919 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.924 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:48:50 compute-0 nova_compute[247704]: 2026-01-31 07:48:50.948 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:48:51 compute-0 podman[288365]: 2026-01-31 07:48:51.124998844 +0000 UTC m=+0.050334434 container create 79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 07:48:51 compute-0 systemd[1]: Started libpod-conmon-79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312.scope.
Jan 31 07:48:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:48:51 compute-0 podman[288365]: 2026-01-31 07:48:51.095304827 +0000 UTC m=+0.020640457 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ad23e956a8f6b2983b473e80a0c7c4ba5edf04a69a3d7290da011806d023035/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:48:51 compute-0 podman[288365]: 2026-01-31 07:48:51.205743721 +0000 UTC m=+0.131079341 container init 79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 07:48:51 compute-0 podman[288365]: 2026-01-31 07:48:51.210928629 +0000 UTC m=+0.136264219 container start 79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:48:51 compute-0 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[288380]: [NOTICE]   (288384) : New worker (288386) forked
Jan 31 07:48:51 compute-0 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[288380]: [NOTICE]   (288384) : Loading success.
Jan 31 07:48:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:51.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 31 07:48:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 31 07:48:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 31 07:48:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.851 247708 DEBUG nova.compute.manager [req-da30fad0-9ab0-421d-ad70-5167f980ff83 req-318f6bdb-7233-436f-b250-d8f21cff11c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received event network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.851 247708 DEBUG oslo_concurrency.lockutils [req-da30fad0-9ab0-421d-ad70-5167f980ff83 req-318f6bdb-7233-436f-b250-d8f21cff11c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.852 247708 DEBUG oslo_concurrency.lockutils [req-da30fad0-9ab0-421d-ad70-5167f980ff83 req-318f6bdb-7233-436f-b250-d8f21cff11c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.853 247708 DEBUG oslo_concurrency.lockutils [req-da30fad0-9ab0-421d-ad70-5167f980ff83 req-318f6bdb-7233-436f-b250-d8f21cff11c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.853 247708 DEBUG nova.compute.manager [req-da30fad0-9ab0-421d-ad70-5167f980ff83 req-318f6bdb-7233-436f-b250-d8f21cff11c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Processing event network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.854 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.860 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845731.8597515, 4332ef04-94ba-46ae-98db-3c3f9fb470b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.861 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] VM Resumed (Lifecycle Event)
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.864 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.869 247708 INFO nova.virt.libvirt.driver [-] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Instance spawned successfully.
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.871 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.890 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.899 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.902 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.903 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.903 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.903 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.904 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.904 247708 DEBUG nova.virt.libvirt.driver [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.937 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.991 247708 INFO nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Took 10.61 seconds to spawn the instance on the hypervisor.
Jan 31 07:48:51 compute-0 nova_compute[247704]: 2026-01-31 07:48:51.991 247708 DEBUG nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:48:52 compute-0 nova_compute[247704]: 2026-01-31 07:48:52.069 247708 INFO nova.compute.manager [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Took 11.63 seconds to build instance.
Jan 31 07:48:52 compute-0 nova_compute[247704]: 2026-01-31 07:48:52.091 247708 DEBUG oslo_concurrency.lockutils [None req-1ddefda4-c041-4136-8f2b-740ae3f0494e 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:52 compute-0 nova_compute[247704]: 2026-01-31 07:48:52.196 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 31 07:48:52 compute-0 nova_compute[247704]: 2026-01-31 07:48:52.436 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:52 compute-0 ceph-mon[74496]: pgmap v1538: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 31 07:48:52 compute-0 ceph-mon[74496]: osdmap e241: 3 total, 3 up, 3 in
Jan 31 07:48:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 31 07:48:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:52 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 31 07:48:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.7 MiB/s wr, 102 op/s
Jan 31 07:48:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:53.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:53 compute-0 ceph-mon[74496]: osdmap e242: 3 total, 3 up, 3 in
Jan 31 07:48:53 compute-0 ceph-mon[74496]: pgmap v1541: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 170 KiB/s rd, 2.7 MiB/s wr, 102 op/s
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.890 247708 DEBUG nova.objects.instance [None req-5d22f7a0-7e47-4c1f-af96-d3ad8112066c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4332ef04-94ba-46ae-98db-3c3f9fb470b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.914 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845733.9143255, 4332ef04-94ba-46ae-98db-3c3f9fb470b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.915 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] VM Paused (Lifecycle Event)
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.931 247708 DEBUG nova.compute.manager [req-a6adeeb3-fd7c-4401-9835-8359de9ce587 req-483d98e1-b425-45d5-853c-64df9e6db87e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received event network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.931 247708 DEBUG oslo_concurrency.lockutils [req-a6adeeb3-fd7c-4401-9835-8359de9ce587 req-483d98e1-b425-45d5-853c-64df9e6db87e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.932 247708 DEBUG oslo_concurrency.lockutils [req-a6adeeb3-fd7c-4401-9835-8359de9ce587 req-483d98e1-b425-45d5-853c-64df9e6db87e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.932 247708 DEBUG oslo_concurrency.lockutils [req-a6adeeb3-fd7c-4401-9835-8359de9ce587 req-483d98e1-b425-45d5-853c-64df9e6db87e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.932 247708 DEBUG nova.compute.manager [req-a6adeeb3-fd7c-4401-9835-8359de9ce587 req-483d98e1-b425-45d5-853c-64df9e6db87e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] No waiting events found dispatching network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.932 247708 WARNING nova.compute.manager [req-a6adeeb3-fd7c-4401-9835-8359de9ce587 req-483d98e1-b425-45d5-853c-64df9e6db87e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received unexpected event network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 for instance with vm_state active and task_state suspending.
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.934 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.938 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:48:53 compute-0 nova_compute[247704]: 2026-01-31 07:48:53.955 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 31 07:48:54 compute-0 kernel: tap64a7f6f3-af (unregistering): left promiscuous mode
Jan 31 07:48:54 compute-0 NetworkManager[49108]: <info>  [1769845734.3931] device (tap64a7f6f3-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:48:54 compute-0 ovn_controller[149457]: 2026-01-31T07:48:54Z|00210|binding|INFO|Releasing lport 64a7f6f3-af5d-4049-948c-1a5105761574 from this chassis (sb_readonly=0)
Jan 31 07:48:54 compute-0 ovn_controller[149457]: 2026-01-31T07:48:54Z|00211|binding|INFO|Setting lport 64a7f6f3-af5d-4049-948c-1a5105761574 down in Southbound
Jan 31 07:48:54 compute-0 ovn_controller[149457]: 2026-01-31T07:48:54Z|00212|binding|INFO|Removing iface tap64a7f6f3-af ovn-installed in OVS
Jan 31 07:48:54 compute-0 nova_compute[247704]: 2026-01-31 07:48:54.401 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:54.409 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:bf:d8 10.100.0.13'], port_security=['fa:16:3e:71:bf:d8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4332ef04-94ba-46ae-98db-3c3f9fb470b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60244e92-1524-47f0-8207-05d0104afa47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f299640bb1f64e5fa12b23955e5a2127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0f055dcb-9af1-4cf7-aa74-c819da93756e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70b36ac0-62cb-4e29-924b-93c5ad906bc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=64a7f6f3-af5d-4049-948c-1a5105761574) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:48:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:54.411 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 64a7f6f3-af5d-4049-948c-1a5105761574 in datapath 60244e92-1524-47f0-8207-05d0104afa47 unbound from our chassis
Jan 31 07:48:54 compute-0 nova_compute[247704]: 2026-01-31 07:48:54.412 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:54.412 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60244e92-1524-47f0-8207-05d0104afa47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:48:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:54.414 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[540f8114-1d4c-4a21-b1c6-fcbfd3b38c34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:54.414 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 namespace which is not needed anymore
Jan 31 07:48:54 compute-0 sudo[288400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:54 compute-0 sudo[288400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-0 sudo[288400]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000042.scope: Deactivated successfully.
Jan 31 07:48:54 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000042.scope: Consumed 2.655s CPU time.
Jan 31 07:48:54 compute-0 systemd-machined[214448]: Machine qemu-28-instance-00000042 terminated.
Jan 31 07:48:54 compute-0 sudo[288436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:54 compute-0 sudo[288436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-0 sudo[288436]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-0 nova_compute[247704]: 2026-01-31 07:48:54.572 247708 DEBUG nova.compute.manager [None req-5d22f7a0-7e47-4c1f-af96-d3ad8112066c 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:48:54 compute-0 sudo[288498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:54 compute-0 sudo[288498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-0 sudo[288498]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:54.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 31 07:48:54 compute-0 sudo[288523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:48:54 compute-0 sudo[288523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-0 sudo[288523]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-0 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[288380]: [NOTICE]   (288384) : haproxy version is 2.8.14-c23fe91
Jan 31 07:48:54 compute-0 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[288380]: [NOTICE]   (288384) : path to executable is /usr/sbin/haproxy
Jan 31 07:48:54 compute-0 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[288380]: [WARNING]  (288384) : Exiting Master process...
Jan 31 07:48:54 compute-0 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[288380]: [ALERT]    (288384) : Current worker (288386) exited with code 143 (Terminated)
Jan 31 07:48:54 compute-0 neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47[288380]: [WARNING]  (288384) : All workers exited. Exiting... (0)
Jan 31 07:48:54 compute-0 systemd[1]: libpod-79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312.scope: Deactivated successfully.
Jan 31 07:48:54 compute-0 podman[288473]: 2026-01-31 07:48:54.720917455 +0000 UTC m=+0.209446490 container died 79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:48:54 compute-0 sudo[288551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:54 compute-0 sudo[288551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-0 sudo[288551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:54 compute-0 sudo[288585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:48:54 compute-0 sudo[288585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 31 07:48:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 31 07:48:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 184 KiB/s rd, 4.0 KiB/s wr, 71 op/s
Jan 31 07:48:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:48:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312-userdata-shm.mount: Deactivated successfully.
Jan 31 07:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad23e956a8f6b2983b473e80a0c7c4ba5edf04a69a3d7290da011806d023035-merged.mount: Deactivated successfully.
Jan 31 07:48:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:48:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:55.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:48:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:48:55 compute-0 podman[288473]: 2026-01-31 07:48:55.293386774 +0000 UTC m=+0.781915809 container cleanup 79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:48:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:48:55 compute-0 systemd[1]: libpod-conmon-79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312.scope: Deactivated successfully.
Jan 31 07:48:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:55 compute-0 sudo[288585]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 07:48:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:48:55 compute-0 podman[288631]: 2026-01-31 07:48:55.580063044 +0000 UTC m=+0.260916710 container remove 79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.587 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad2c368-80bc-4f11-9c59-e8f1b405575f]: (4, ('Sat Jan 31 07:48:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312)\n79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312\nSat Jan 31 07:48:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 (79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312)\n79950f479b65085c70a10da5e7116cb48f3a1b746f20cbf1dc4a7dbe5aa49312\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.588 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[08b0dd02-77f7-4bb3-9a94-7b3d4b966a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.589 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60244e92-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:55 compute-0 nova_compute[247704]: 2026-01-31 07:48:55.591 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:55 compute-0 kernel: tap60244e92-10: left promiscuous mode
Jan 31 07:48:55 compute-0 nova_compute[247704]: 2026-01-31 07:48:55.602 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.605 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[678cee67-8eba-4c21-9acd-3605801eade7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.625 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[12b300c9-e3da-4e2b-9bee-02e8b520d110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.626 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb79667-f4d9-42b5-82aa-1d3b95b624b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.642 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c833d292-9471-43c7-bff0-e39cd6ab2099]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597311, 'reachable_time': 38926, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288669, 'error': None, 'target': 'ovnmeta-60244e92-1524-47f0-8207-05d0104afa47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d60244e92\x2d1524\x2d47f0\x2d8207\x2d05d0104afa47.mount: Deactivated successfully.
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.644 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60244e92-1524-47f0-8207-05d0104afa47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:48:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:55.645 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[f9846fd9-ff12-4106-a95f-b9b78ffcda9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.029 247708 DEBUG nova.compute.manager [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received event network-vif-unplugged-64a7f6f3-af5d-4049-948c-1a5105761574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.029 247708 DEBUG oslo_concurrency.lockutils [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.029 247708 DEBUG oslo_concurrency.lockutils [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.029 247708 DEBUG oslo_concurrency.lockutils [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.029 247708 DEBUG nova.compute.manager [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] No waiting events found dispatching network-vif-unplugged-64a7f6f3-af5d-4049-948c-1a5105761574 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.030 247708 WARNING nova.compute.manager [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received unexpected event network-vif-unplugged-64a7f6f3-af5d-4049-948c-1a5105761574 for instance with vm_state suspended and task_state None.
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.030 247708 DEBUG nova.compute.manager [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received event network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.030 247708 DEBUG oslo_concurrency.lockutils [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.030 247708 DEBUG oslo_concurrency.lockutils [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.030 247708 DEBUG oslo_concurrency.lockutils [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.030 247708 DEBUG nova.compute.manager [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] No waiting events found dispatching network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.031 247708 WARNING nova.compute.manager [req-229aec6b-14ec-4201-8216-b3a3f9fed58e req-8177ed76-0401-4c95-a34e-29b4150b5b4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received unexpected event network-vif-plugged-64a7f6f3-af5d-4049-948c-1a5105761574 for instance with vm_state suspended and task_state None.
Jan 31 07:48:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 07:48:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:48:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:48:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:48:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:48:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:48:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:48:56 compute-0 ceph-mon[74496]: osdmap e243: 3 total, 3 up, 3 in
Jan 31 07:48:56 compute-0 ceph-mon[74496]: pgmap v1543: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 184 KiB/s rd, 4.0 KiB/s wr, 71 op/s
Jan 31 07:48:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:48:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f2804929-62f1-4566-9ff9-a34b09a2818d does not exist
Jan 31 07:48:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3e187511-d2be-4352-9870-ca3c20c2a915 does not exist
Jan 31 07:48:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 11709d98-243a-479c-a7c6-b5acf20b2b03 does not exist
Jan 31 07:48:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:48:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:48:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:48:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:48:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:48:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:48:56 compute-0 sudo[288670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:56 compute-0 sudo[288670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:56 compute-0 sudo[288670]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.400 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.401 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.401 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.401 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.402 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.403 247708 INFO nova.compute.manager [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Terminating instance
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.404 247708 DEBUG nova.compute.manager [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:48:56 compute-0 sudo[288695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:48:56 compute-0 sudo[288695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:56 compute-0 sudo[288695]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.415 247708 INFO nova.virt.libvirt.driver [-] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Instance destroyed successfully.
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.416 247708 DEBUG nova.objects.instance [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lazy-loading 'resources' on Instance uuid 4332ef04-94ba-46ae-98db-3c3f9fb470b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.429 247708 DEBUG nova.virt.libvirt.vif [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1565268821',display_name='tempest-DeleteServersTestJSON-server-1565268821',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1565268821',id=66,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:48:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f299640bb1f64e5fa12b23955e5a2127',ramdisk_id='',reservation_id='r-hy2sfyxh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-2031597701',owner_user_name='tempest-DeleteServersTestJSON-2031597701-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:48:54Z,user_data=None,user_id='97abab8eb79247cd89fb2ebff295b890',uuid=4332ef04-94ba-46ae-98db-3c3f9fb470b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.430 247708 DEBUG nova.network.os_vif_util [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converting VIF {"id": "64a7f6f3-af5d-4049-948c-1a5105761574", "address": "fa:16:3e:71:bf:d8", "network": {"id": "60244e92-1524-47f0-8207-05d0104afa47", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1527532153-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f299640bb1f64e5fa12b23955e5a2127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64a7f6f3-af", "ovs_interfaceid": "64a7f6f3-af5d-4049-948c-1a5105761574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.430 247708 DEBUG nova.network.os_vif_util [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:bf:d8,bridge_name='br-int',has_traffic_filtering=True,id=64a7f6f3-af5d-4049-948c-1a5105761574,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64a7f6f3-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.431 247708 DEBUG os_vif [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:bf:d8,bridge_name='br-int',has_traffic_filtering=True,id=64a7f6f3-af5d-4049-948c-1a5105761574,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64a7f6f3-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.432 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64a7f6f3-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.434 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:56 compute-0 nova_compute[247704]: 2026-01-31 07:48:56.437 247708 INFO os_vif [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:bf:d8,bridge_name='br-int',has_traffic_filtering=True,id=64a7f6f3-af5d-4049-948c-1a5105761574,network=Network(60244e92-1524-47f0-8207-05d0104afa47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64a7f6f3-af')
Jan 31 07:48:56 compute-0 sudo[288720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:56 compute-0 sudo[288720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:48:56 compute-0 sudo[288720]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:56 compute-0 sudo[288761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:48:56 compute-0 sudo[288761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:56.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 28 KiB/s wr, 193 op/s
Jan 31 07:48:56 compute-0 podman[288826]: 2026-01-31 07:48:56.81515685 +0000 UTC m=+0.018201386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:48:56 compute-0 podman[288826]: 2026-01-31 07:48:56.961583537 +0000 UTC m=+0.164628043 container create aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:48:57 compute-0 systemd[1]: Started libpod-conmon-aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02.scope.
Jan 31 07:48:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:48:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:48:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:48:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:48:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:48:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:48:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:48:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:48:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:48:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:57.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:48:57 compute-0 podman[288826]: 2026-01-31 07:48:57.260485416 +0000 UTC m=+0.463529912 container init aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:48:57 compute-0 podman[288826]: 2026-01-31 07:48:57.266505654 +0000 UTC m=+0.469550190 container start aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dirac, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:48:57 compute-0 bold_dirac[288842]: 167 167
Jan 31 07:48:57 compute-0 systemd[1]: libpod-aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02.scope: Deactivated successfully.
Jan 31 07:48:57 compute-0 podman[288826]: 2026-01-31 07:48:57.401394847 +0000 UTC m=+0.604439333 container attach aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:48:57 compute-0 podman[288826]: 2026-01-31 07:48:57.402452963 +0000 UTC m=+0.605497449 container died aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:48:57 compute-0 nova_compute[247704]: 2026-01-31 07:48:57.482 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bb08a1611f5e332b19ee6a64bee127a55095523c2d3fc3849a1580063f21278-merged.mount: Deactivated successfully.
Jan 31 07:48:57 compute-0 podman[288826]: 2026-01-31 07:48:57.951411987 +0000 UTC m=+1.154456463 container remove aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dirac, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:48:58 compute-0 systemd[1]: libpod-conmon-aca67bf5c489ba614620614ed134ceb4fa67e8f6839a948c7918b7515b13be02.scope: Deactivated successfully.
Jan 31 07:48:58 compute-0 podman[288869]: 2026-01-31 07:48:58.060758194 +0000 UTC m=+0.024074290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:48:58 compute-0 podman[288869]: 2026-01-31 07:48:58.213518915 +0000 UTC m=+0.176835021 container create 8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:48:58 compute-0 systemd[1]: Started libpod-conmon-8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54.scope.
Jan 31 07:48:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b4eb63755e8178f38044663d80ecda54f85ca03e74f1cd8f81a8605967173/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b4eb63755e8178f38044663d80ecda54f85ca03e74f1cd8f81a8605967173/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b4eb63755e8178f38044663d80ecda54f85ca03e74f1cd8f81a8605967173/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b4eb63755e8178f38044663d80ecda54f85ca03e74f1cd8f81a8605967173/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:48:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3b4eb63755e8178f38044663d80ecda54f85ca03e74f1cd8f81a8605967173/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:48:58 compute-0 podman[288869]: 2026-01-31 07:48:58.513626374 +0000 UTC m=+0.476942520 container init 8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:48:58 compute-0 podman[288869]: 2026-01-31 07:48:58.521623211 +0000 UTC m=+0.484939287 container start 8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:48:58 compute-0 ceph-mon[74496]: pgmap v1544: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 28 KiB/s wr, 193 op/s
Jan 31 07:48:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:58.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:58 compute-0 podman[288869]: 2026-01-31 07:48:58.739967498 +0000 UTC m=+0.703283574 container attach 8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:48:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:58.747 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:48:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:48:58.749 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:48:58 compute-0 nova_compute[247704]: 2026-01-31 07:48:58.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:48:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 82 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 24 KiB/s wr, 203 op/s
Jan 31 07:48:59 compute-0 nova_compute[247704]: 2026-01-31 07:48:59.079 247708 INFO nova.virt.libvirt.driver [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Deleting instance files /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2_del
Jan 31 07:48:59 compute-0 nova_compute[247704]: 2026-01-31 07:48:59.080 247708 INFO nova.virt.libvirt.driver [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Deletion of /var/lib/nova/instances/4332ef04-94ba-46ae-98db-3c3f9fb470b2_del complete
Jan 31 07:48:59 compute-0 nova_compute[247704]: 2026-01-31 07:48:59.137 247708 INFO nova.compute.manager [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Took 2.73 seconds to destroy the instance on the hypervisor.
Jan 31 07:48:59 compute-0 nova_compute[247704]: 2026-01-31 07:48:59.138 247708 DEBUG oslo.service.loopingcall [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:48:59 compute-0 nova_compute[247704]: 2026-01-31 07:48:59.138 247708 DEBUG nova.compute.manager [-] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:48:59 compute-0 nova_compute[247704]: 2026-01-31 07:48:59.139 247708 DEBUG nova.network.neutron [-] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:48:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:48:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:48:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:59.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:48:59 compute-0 happy_chatelet[288886]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:48:59 compute-0 happy_chatelet[288886]: --> relative data size: 1.0
Jan 31 07:48:59 compute-0 happy_chatelet[288886]: --> All data devices are unavailable
Jan 31 07:48:59 compute-0 systemd[1]: libpod-8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54.scope: Deactivated successfully.
Jan 31 07:48:59 compute-0 podman[288869]: 2026-01-31 07:48:59.412320912 +0000 UTC m=+1.375636988 container died 8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce3b4eb63755e8178f38044663d80ecda54f85ca03e74f1cd8f81a8605967173-merged.mount: Deactivated successfully.
Jan 31 07:48:59 compute-0 podman[288869]: 2026-01-31 07:48:59.477312695 +0000 UTC m=+1.440628781 container remove 8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chatelet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:48:59 compute-0 systemd[1]: libpod-conmon-8a5eb9f57716518278fb460176da9b40084527bc58b3800365871d7f5dc7ce54.scope: Deactivated successfully.
Jan 31 07:48:59 compute-0 sudo[288761]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:59 compute-0 podman[288903]: 2026-01-31 07:48:59.532618809 +0000 UTC m=+0.075747646 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 07:48:59 compute-0 ceph-mon[74496]: pgmap v1545: 305 pgs: 305 active+clean; 82 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 24 KiB/s wr, 203 op/s
Jan 31 07:48:59 compute-0 sudo[288933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:59 compute-0 sudo[288933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:59 compute-0 sudo[288933]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:59 compute-0 sudo[288958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:48:59 compute-0 sudo[288958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:59 compute-0 sudo[288958]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:59 compute-0 sudo[288983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:48:59 compute-0 sudo[288983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:48:59 compute-0 sudo[288983]: pam_unix(sudo:session): session closed for user root
Jan 31 07:48:59 compute-0 sudo[289008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:48:59 compute-0 sudo[289008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:00 compute-0 podman[289073]: 2026-01-31 07:49:00.0480362 +0000 UTC m=+0.049390430 container create a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:49:00 compute-0 systemd[1]: Started libpod-conmon-a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c.scope.
Jan 31 07:49:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:49:00 compute-0 podman[289073]: 2026-01-31 07:49:00.028034421 +0000 UTC m=+0.029388671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:49:00 compute-0 podman[289073]: 2026-01-31 07:49:00.127866895 +0000 UTC m=+0.129221135 container init a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 07:49:00 compute-0 podman[289073]: 2026-01-31 07:49:00.136760824 +0000 UTC m=+0.138115064 container start a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:49:00 compute-0 podman[289073]: 2026-01-31 07:49:00.141259964 +0000 UTC m=+0.142614264 container attach a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:49:00 compute-0 hopeful_roentgen[289090]: 167 167
Jan 31 07:49:00 compute-0 systemd[1]: libpod-a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c.scope: Deactivated successfully.
Jan 31 07:49:00 compute-0 podman[289073]: 2026-01-31 07:49:00.144756809 +0000 UTC m=+0.146111049 container died a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:49:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1e2a9e9621a866e70ca91ec83e0eae80956d49bc34b971f3402673529a5f503-merged.mount: Deactivated successfully.
Jan 31 07:49:00 compute-0 podman[289073]: 2026-01-31 07:49:00.182655417 +0000 UTC m=+0.184009617 container remove a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:49:00 compute-0 systemd[1]: libpod-conmon-a9df49897e9473da61cf0f33401326a3e1d6a0ba1209cb2483203d159a6fd06c.scope: Deactivated successfully.
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.240 247708 DEBUG nova.network.neutron [-] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.278 247708 INFO nova.compute.manager [-] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Took 1.14 seconds to deallocate network for instance.
Jan 31 07:49:00 compute-0 podman[289114]: 2026-01-31 07:49:00.322144323 +0000 UTC m=+0.043477695 container create 32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.332 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.333 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:00 compute-0 systemd[1]: Started libpod-conmon-32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182.scope.
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.372 247708 DEBUG nova.compute.manager [req-f136a20e-ba40-440c-8a6d-1dadde0ec3b3 req-a05b37df-7c80-4df7-b561-0d1d5570d243 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Received event network-vif-deleted-64a7f6f3-af5d-4049-948c-1a5105761574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e2138f25a109e8ac5c71e124a4ae33e80e14a95c66bcf5f65f8b84fe666140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e2138f25a109e8ac5c71e124a4ae33e80e14a95c66bcf5f65f8b84fe666140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e2138f25a109e8ac5c71e124a4ae33e80e14a95c66bcf5f65f8b84fe666140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e2138f25a109e8ac5c71e124a4ae33e80e14a95c66bcf5f65f8b84fe666140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.395 247708 DEBUG oslo_concurrency.processutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:00 compute-0 podman[289114]: 2026-01-31 07:49:00.306285385 +0000 UTC m=+0.027618777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:49:00 compute-0 podman[289114]: 2026-01-31 07:49:00.403460765 +0000 UTC m=+0.124794157 container init 32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:49:00 compute-0 podman[289114]: 2026-01-31 07:49:00.40898318 +0000 UTC m=+0.130316552 container start 32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:49:00 compute-0 podman[289114]: 2026-01-31 07:49:00.417720084 +0000 UTC m=+0.139053456 container attach 32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:49:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:00.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:49:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907091529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.800 247708 DEBUG oslo_concurrency.processutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.808 247708 DEBUG nova.compute.provider_tree [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.831 247708 DEBUG nova.scheduler.client.report [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:49:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1907091529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.856928) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740857024, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2283, "num_deletes": 262, "total_data_size": 3762954, "memory_usage": 3832736, "flush_reason": "Manual Compaction"}
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.866 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740891160, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3691359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31920, "largest_seqno": 34202, "table_properties": {"data_size": 3680901, "index_size": 6760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22153, "raw_average_key_size": 21, "raw_value_size": 3659861, "raw_average_value_size": 3492, "num_data_blocks": 291, "num_entries": 1048, "num_filter_entries": 1048, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845558, "oldest_key_time": 1769845558, "file_creation_time": 1769845740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 34289 microseconds, and 8754 cpu microseconds.
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.891222) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3691359 bytes OK
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.891248) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.893745) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.893761) EVENT_LOG_v1 {"time_micros": 1769845740893756, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.893780) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3753503, prev total WAL file size 3753503, number of live WAL files 2.
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.894519) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3604KB)], [68(10141KB)]
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740894573, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 14076264, "oldest_snapshot_seqno": -1}
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.895 247708 INFO nova.scheduler.client.report [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Deleted allocations for instance 4332ef04-94ba-46ae-98db-3c3f9fb470b2
Jan 31 07:49:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 49 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 24 KiB/s wr, 223 op/s
Jan 31 07:49:00 compute-0 nova_compute[247704]: 2026-01-31 07:49:00.970 247708 DEBUG oslo_concurrency.lockutils [None req-53a5fb0f-c065-40d5-beaa-f4cd8cc75ba5 97abab8eb79247cd89fb2ebff295b890 f299640bb1f64e5fa12b23955e5a2127 - - default default] Lock "4332ef04-94ba-46ae-98db-3c3f9fb470b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6327 keys, 12109891 bytes, temperature: kUnknown
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740977855, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 12109891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12063760, "index_size": 29189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 161515, "raw_average_key_size": 25, "raw_value_size": 11946457, "raw_average_value_size": 1888, "num_data_blocks": 1174, "num_entries": 6327, "num_filter_entries": 6327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769845740, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.978173) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 12109891 bytes
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.980063) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.9 rd, 145.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 6864, records dropped: 537 output_compression: NoCompression
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.980117) EVENT_LOG_v1 {"time_micros": 1769845740980103, "job": 38, "event": "compaction_finished", "compaction_time_micros": 83363, "compaction_time_cpu_micros": 17931, "output_level": 6, "num_output_files": 1, "total_output_size": 12109891, "num_input_records": 6864, "num_output_records": 6327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740980749, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845740982111, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.894398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.982171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.982177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.982181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.982184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:00.982188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:01 compute-0 keen_curie[289129]: {
Jan 31 07:49:01 compute-0 keen_curie[289129]:     "0": [
Jan 31 07:49:01 compute-0 keen_curie[289129]:         {
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "devices": [
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "/dev/loop3"
Jan 31 07:49:01 compute-0 keen_curie[289129]:             ],
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "lv_name": "ceph_lv0",
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "lv_size": "7511998464",
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "name": "ceph_lv0",
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "tags": {
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.cluster_name": "ceph",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.crush_device_class": "",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.encrypted": "0",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.osd_id": "0",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.type": "block",
Jan 31 07:49:01 compute-0 keen_curie[289129]:                 "ceph.vdo": "0"
Jan 31 07:49:01 compute-0 keen_curie[289129]:             },
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "type": "block",
Jan 31 07:49:01 compute-0 keen_curie[289129]:             "vg_name": "ceph_vg0"
Jan 31 07:49:01 compute-0 keen_curie[289129]:         }
Jan 31 07:49:01 compute-0 keen_curie[289129]:     ]
Jan 31 07:49:01 compute-0 keen_curie[289129]: }
Jan 31 07:49:01 compute-0 systemd[1]: libpod-32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182.scope: Deactivated successfully.
Jan 31 07:49:01 compute-0 podman[289114]: 2026-01-31 07:49:01.137850929 +0000 UTC m=+0.859184331 container died 32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:49:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:01.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-09e2138f25a109e8ac5c71e124a4ae33e80e14a95c66bcf5f65f8b84fe666140-merged.mount: Deactivated successfully.
Jan 31 07:49:01 compute-0 nova_compute[247704]: 2026-01-31 07:49:01.434 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:01 compute-0 podman[289114]: 2026-01-31 07:49:01.462261003 +0000 UTC m=+1.183594405 container remove 32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:49:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 31 07:49:01 compute-0 systemd[1]: libpod-conmon-32945f2159d87bd8f6b61516b1e16638e017675e42a6fbd8ac2101d7f69ea182.scope: Deactivated successfully.
Jan 31 07:49:01 compute-0 sudo[289008]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 31 07:49:01 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 31 07:49:01 compute-0 sudo[289176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:01 compute-0 sudo[289176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:01 compute-0 sudo[289176]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:01 compute-0 sudo[289201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:49:01 compute-0 sudo[289201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:01 compute-0 sudo[289201]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:01 compute-0 sudo[289226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:01 compute-0 sudo[289226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:01 compute-0 sudo[289226]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:01 compute-0 sudo[289251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:49:01 compute-0 sudo[289251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:01 compute-0 ceph-mon[74496]: pgmap v1546: 305 pgs: 305 active+clean; 49 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 24 KiB/s wr, 223 op/s
Jan 31 07:49:01 compute-0 ceph-mon[74496]: osdmap e244: 3 total, 3 up, 3 in
Jan 31 07:49:01 compute-0 anacron[52258]: Job `cron.monthly' started
Jan 31 07:49:02 compute-0 anacron[52258]: Job `cron.monthly' terminated
Jan 31 07:49:02 compute-0 anacron[52258]: Normal exit (3 jobs run)
Jan 31 07:49:02 compute-0 podman[289317]: 2026-01-31 07:49:02.060174365 +0000 UTC m=+0.054198788 container create 3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_morse, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:49:02 compute-0 systemd[1]: Started libpod-conmon-3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83.scope.
Jan 31 07:49:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:49:02 compute-0 podman[289317]: 2026-01-31 07:49:02.039353015 +0000 UTC m=+0.033377518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:49:02 compute-0 podman[289317]: 2026-01-31 07:49:02.146574271 +0000 UTC m=+0.140598764 container init 3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:49:02 compute-0 podman[289317]: 2026-01-31 07:49:02.155726646 +0000 UTC m=+0.149751059 container start 3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:49:02 compute-0 agitated_morse[289335]: 167 167
Jan 31 07:49:02 compute-0 systemd[1]: libpod-3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83.scope: Deactivated successfully.
Jan 31 07:49:02 compute-0 conmon[289335]: conmon 3771f2f2531dbbad02e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83.scope/container/memory.events
Jan 31 07:49:02 compute-0 podman[289317]: 2026-01-31 07:49:02.162309876 +0000 UTC m=+0.156334329 container attach 3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_morse, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 07:49:02 compute-0 podman[289317]: 2026-01-31 07:49:02.162917892 +0000 UTC m=+0.156942335 container died 3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:49:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5e70f2bb226361a1e00a3c16450488ba4ef40d23c33b059932c6d0795ceb9a2-merged.mount: Deactivated successfully.
Jan 31 07:49:02 compute-0 podman[289317]: 2026-01-31 07:49:02.21634118 +0000 UTC m=+0.210365633 container remove 3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_morse, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:49:02 compute-0 systemd[1]: libpod-conmon-3771f2f2531dbbad02e9627c0f9d7024db1a05bbf715a6abfe54583fba0f6a83.scope: Deactivated successfully.
Jan 31 07:49:02 compute-0 podman[289361]: 2026-01-31 07:49:02.350400043 +0000 UTC m=+0.042334959 container create 0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:49:02 compute-0 systemd[1]: Started libpod-conmon-0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589.scope.
Jan 31 07:49:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:49:02 compute-0 podman[289361]: 2026-01-31 07:49:02.332004981 +0000 UTC m=+0.023939877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca93497333a996ba10fa7cf894efbc9e95fe326553afbb208649222e0384e97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca93497333a996ba10fa7cf894efbc9e95fe326553afbb208649222e0384e97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca93497333a996ba10fa7cf894efbc9e95fe326553afbb208649222e0384e97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eca93497333a996ba10fa7cf894efbc9e95fe326553afbb208649222e0384e97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:02 compute-0 podman[289361]: 2026-01-31 07:49:02.44547726 +0000 UTC m=+0.137412166 container init 0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:49:02 compute-0 podman[289361]: 2026-01-31 07:49:02.451858877 +0000 UTC m=+0.143793763 container start 0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:49:02 compute-0 podman[289361]: 2026-01-31 07:49:02.455568308 +0000 UTC m=+0.147503204 container attach 0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:49:02 compute-0 nova_compute[247704]: 2026-01-31 07:49:02.484 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:02.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 31 07:49:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 31 07:49:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 31 07:49:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 234 op/s
Jan 31 07:49:03 compute-0 lucid_lewin[289377]: {
Jan 31 07:49:03 compute-0 lucid_lewin[289377]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:49:03 compute-0 lucid_lewin[289377]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:49:03 compute-0 lucid_lewin[289377]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:49:03 compute-0 lucid_lewin[289377]:         "osd_id": 0,
Jan 31 07:49:03 compute-0 lucid_lewin[289377]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:49:03 compute-0 lucid_lewin[289377]:         "type": "bluestore"
Jan 31 07:49:03 compute-0 lucid_lewin[289377]:     }
Jan 31 07:49:03 compute-0 lucid_lewin[289377]: }
Jan 31 07:49:03 compute-0 systemd[1]: libpod-0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589.scope: Deactivated successfully.
Jan 31 07:49:03 compute-0 podman[289361]: 2026-01-31 07:49:03.254943814 +0000 UTC m=+0.946878690 container died 0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:49:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:03.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-eca93497333a996ba10fa7cf894efbc9e95fe326553afbb208649222e0384e97-merged.mount: Deactivated successfully.
Jan 31 07:49:03 compute-0 podman[289361]: 2026-01-31 07:49:03.305762678 +0000 UTC m=+0.997697574 container remove 0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:49:03 compute-0 systemd[1]: libpod-conmon-0d24e4ee009b0c264cff87a14fac3224f40a0ddc6b18113862a12762b28ee589.scope: Deactivated successfully.
Jan 31 07:49:03 compute-0 sudo[289251]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:49:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:49:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:49:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:49:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5c005cde-a25b-4936-a6b3-96ca3c818f84 does not exist
Jan 31 07:49:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 78456117-613e-46b8-9fc1-63636f9e2579 does not exist
Jan 31 07:49:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 22e5cad2-46f2-420d-90cd-b79809fbef55 does not exist
Jan 31 07:49:03 compute-0 sudo[289412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:03 compute-0 sudo[289412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:03 compute-0 sudo[289412]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:03 compute-0 sudo[289437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:49:03 compute-0 sudo[289437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:03 compute-0 sudo[289437]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:03 compute-0 nova_compute[247704]: 2026-01-31 07:49:03.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:03 compute-0 ceph-mon[74496]: osdmap e245: 3 total, 3 up, 3 in
Jan 31 07:49:03 compute-0 ceph-mon[74496]: pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 234 op/s
Jan 31 07:49:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:49:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:49:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:04.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 4.5 KiB/s wr, 95 op/s
Jan 31 07:49:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1096458877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:05.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:05 compute-0 nova_compute[247704]: 2026-01-31 07:49:05.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:05 compute-0 nova_compute[247704]: 2026-01-31 07:49:05.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:49:05 compute-0 nova_compute[247704]: 2026-01-31 07:49:05.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:49:05 compute-0 nova_compute[247704]: 2026-01-31 07:49:05.589 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:49:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:05.751 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:06 compute-0 ceph-mon[74496]: pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 4.5 KiB/s wr, 95 op/s
Jan 31 07:49:06 compute-0 nova_compute[247704]: 2026-01-31 07:49:06.437 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:06 compute-0 nova_compute[247704]: 2026-01-31 07:49:06.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:06.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 56 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 735 KiB/s wr, 70 op/s
Jan 31 07:49:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/968428789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:07.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:07 compute-0 nova_compute[247704]: 2026-01-31 07:49:07.486 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:07 compute-0 nova_compute[247704]: 2026-01-31 07:49:07.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:07 compute-0 nova_compute[247704]: 2026-01-31 07:49:07.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:07 compute-0 nova_compute[247704]: 2026-01-31 07:49:07.586 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:07 compute-0 nova_compute[247704]: 2026-01-31 07:49:07.586 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:07 compute-0 nova_compute[247704]: 2026-01-31 07:49:07.587 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:07 compute-0 nova_compute[247704]: 2026-01-31 07:49:07.587 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:49:07 compute-0 nova_compute[247704]: 2026-01-31 07:49:07.588 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:49:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2812782918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.034 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.217 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.219 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4553MB free_disk=20.975112915039062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.219 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.219 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:08 compute-0 ceph-mon[74496]: pgmap v1551: 305 pgs: 305 active+clean; 56 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 735 KiB/s wr, 70 op/s
Jan 31 07:49:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2089316534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2812782918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.303 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.303 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.328 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:08.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:49:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410917404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.737 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.745 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.778 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.824 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:49:08 compute-0 nova_compute[247704]: 2026-01-31 07:49:08.825 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 64 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 838 KiB/s wr, 35 op/s
Jan 31 07:49:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2410917404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2827750949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:09.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:09 compute-0 nova_compute[247704]: 2026-01-31 07:49:09.574 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845734.5718665, 4332ef04-94ba-46ae-98db-3c3f9fb470b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:49:09 compute-0 nova_compute[247704]: 2026-01-31 07:49:09.574 247708 INFO nova.compute.manager [-] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] VM Stopped (Lifecycle Event)
Jan 31 07:49:09 compute-0 nova_compute[247704]: 2026-01-31 07:49:09.597 247708 DEBUG nova.compute.manager [None req-7644cd25-b993-4cc0-9a83-49a88dc02ea7 - - - - - -] [instance: 4332ef04-94ba-46ae-98db-3c3f9fb470b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:49:09 compute-0 nova_compute[247704]: 2026-01-31 07:49:09.820 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:09 compute-0 nova_compute[247704]: 2026-01-31 07:49:09.820 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:09 compute-0 nova_compute[247704]: 2026-01-31 07:49:09.820 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:10 compute-0 ceph-mon[74496]: pgmap v1552: 305 pgs: 305 active+clean; 64 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 838 KiB/s wr, 35 op/s
Jan 31 07:49:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3542332933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2125014550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/937353425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:10 compute-0 nova_compute[247704]: 2026-01-31 07:49:10.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:49:10 compute-0 nova_compute[247704]: 2026-01-31 07:49:10.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:49:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:10.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.3 MiB/s wr, 55 op/s
Jan 31 07:49:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:11.159 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:11.159 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:11.159 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:11.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:11 compute-0 nova_compute[247704]: 2026-01-31 07:49:11.439 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.537588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751537840, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 404, "num_deletes": 252, "total_data_size": 296857, "memory_usage": 305464, "flush_reason": "Manual Compaction"}
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751549700, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 293730, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34203, "largest_seqno": 34606, "table_properties": {"data_size": 291266, "index_size": 564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6848, "raw_average_key_size": 21, "raw_value_size": 286145, "raw_average_value_size": 877, "num_data_blocks": 24, "num_entries": 326, "num_filter_entries": 326, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845741, "oldest_key_time": 1769845741, "file_creation_time": 1769845751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 12161 microseconds, and 1284 cpu microseconds.
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.549748) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 293730 bytes OK
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.549769) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.559134) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.559180) EVENT_LOG_v1 {"time_micros": 1769845751559172, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.559204) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 294281, prev total WAL file size 294281, number of live WAL files 2.
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.559779) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303038' seq:72057594037927935, type:22 .. '6D6772737461740031323539' seq:0, type:0; will stop at (end)
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(286KB)], [71(11MB)]
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751559819, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 12403621, "oldest_snapshot_seqno": -1}
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6133 keys, 8560215 bytes, temperature: kUnknown
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751714529, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8560215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8520065, "index_size": 23720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 157667, "raw_average_key_size": 25, "raw_value_size": 8410702, "raw_average_value_size": 1371, "num_data_blocks": 946, "num_entries": 6133, "num_filter_entries": 6133, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769845751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.714758) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8560215 bytes
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.716578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.1 rd, 55.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.5 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(71.4) write-amplify(29.1) OK, records in: 6653, records dropped: 520 output_compression: NoCompression
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.716604) EVENT_LOG_v1 {"time_micros": 1769845751716591, "job": 40, "event": "compaction_finished", "compaction_time_micros": 154778, "compaction_time_cpu_micros": 16295, "output_level": 6, "num_output_files": 1, "total_output_size": 8560215, "num_input_records": 6653, "num_output_records": 6133, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751716772, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845751718639, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.559690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.718847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.718858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.718863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.718867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:49:11.718871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:49:12 compute-0 ceph-mon[74496]: pgmap v1553: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.3 MiB/s wr, 55 op/s
Jan 31 07:49:12 compute-0 nova_compute[247704]: 2026-01-31 07:49:12.488 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:12.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 31 07:49:12 compute-0 podman[289511]: 2026-01-31 07:49:12.935977112 +0000 UTC m=+0.105257568 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 07:49:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:13.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:14 compute-0 sudo[289538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:14 compute-0 sudo[289538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:14 compute-0 sudo[289538]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:14 compute-0 ceph-mon[74496]: pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 31 07:49:14 compute-0 sudo[289563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:14 compute-0 sudo[289563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:14 compute-0 sudo[289563]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 31 07:49:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:15.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:15 compute-0 ceph-mon[74496]: pgmap v1555: 305 pgs: 305 active+clean; 88 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 31 07:49:16 compute-0 nova_compute[247704]: 2026-01-31 07:49:16.442 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 31 07:49:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:17.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:17 compute-0 nova_compute[247704]: 2026-01-31 07:49:17.490 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:18 compute-0 ceph-mon[74496]: pgmap v1556: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 31 07:49:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Jan 31 07:49:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:19.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:19 compute-0 ceph-mon[74496]: pgmap v1557: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 96 op/s
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:49:20
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:49:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:20.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 89 op/s
Jan 31 07:49:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:21.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:21 compute-0 nova_compute[247704]: 2026-01-31 07:49:21.445 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:21 compute-0 ceph-mon[74496]: pgmap v1558: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 89 op/s
Jan 31 07:49:22 compute-0 nova_compute[247704]: 2026-01-31 07:49:22.492 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 07:49:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2455197800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:23.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 31 07:49:24 compute-0 ceph-mon[74496]: pgmap v1559: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 07:49:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 31 07:49:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 31 07:49:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 78 op/s
Jan 31 07:49:25 compute-0 ceph-mon[74496]: osdmap e246: 3 total, 3 up, 3 in
Jan 31 07:49:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:25.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:26 compute-0 nova_compute[247704]: 2026-01-31 07:49:26.498 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:26 compute-0 ceph-mon[74496]: pgmap v1561: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 78 op/s
Jan 31 07:49:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:49:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:26.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:49:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 106 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 31 07:49:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:27.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:27 compute-0 nova_compute[247704]: 2026-01-31 07:49:27.495 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:28 compute-0 ceph-mon[74496]: pgmap v1562: 305 pgs: 305 active+clean; 106 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 1.4 MiB/s wr, 65 op/s
Jan 31 07:49:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:28.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 121 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 31 07:49:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:29.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:29 compute-0 ceph-mon[74496]: pgmap v1563: 305 pgs: 305 active+clean; 121 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 31 07:49:29 compute-0 podman[289596]: 2026-01-31 07:49:29.88378715 +0000 UTC m=+0.053621004 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 07:49:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:30.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 408 KiB/s rd, 2.6 MiB/s wr, 101 op/s
Jan 31 07:49:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:31.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:31 compute-0 nova_compute[247704]: 2026-01-31 07:49:31.500 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 31 07:49:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 31 07:49:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 31 07:49:32 compute-0 ceph-mon[74496]: pgmap v1564: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 408 KiB/s rd, 2.6 MiB/s wr, 101 op/s
Jan 31 07:49:32 compute-0 ceph-mon[74496]: osdmap e247: 3 total, 3 up, 3 in
Jan 31 07:49:32 compute-0 nova_compute[247704]: 2026-01-31 07:49:32.496 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 2.9 MiB/s wr, 111 op/s
Jan 31 07:49:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:34 compute-0 ceph-mon[74496]: pgmap v1566: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 2.9 MiB/s wr, 111 op/s
Jan 31 07:49:34 compute-0 sudo[289617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:34 compute-0 sudo[289617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:34 compute-0 sudo[289617]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:34.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:34 compute-0 sudo[289642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:34 compute-0 sudo[289642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:34 compute-0 sudo[289642]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 394 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:49:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:35.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:36 compute-0 ceph-mon[74496]: pgmap v1567: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 394 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 31 07:49:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:36 compute-0 nova_compute[247704]: 2026-01-31 07:49:36.547 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 31 07:49:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:37.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:37 compute-0 nova_compute[247704]: 2026-01-31 07:49:37.500 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:37 compute-0 ceph-mon[74496]: pgmap v1568: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 1.2 MiB/s wr, 38 op/s
Jan 31 07:49:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:38.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:38.729 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:49:38 compute-0 nova_compute[247704]: 2026-01-31 07:49:38.730 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:38.730 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:49:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 31 07:49:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 31 07:49:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 31 07:49:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 37 KiB/s wr, 3 op/s
Jan 31 07:49:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:39.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:39 compute-0 ceph-mon[74496]: osdmap e248: 3 total, 3 up, 3 in
Jan 31 07:49:39 compute-0 ceph-mon[74496]: pgmap v1570: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 37 KiB/s wr, 3 op/s
Jan 31 07:49:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3117650854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2044729548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 32 KiB/s wr, 25 op/s
Jan 31 07:49:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:41.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:41 compute-0 nova_compute[247704]: 2026-01-31 07:49:41.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:41 compute-0 ceph-mon[74496]: pgmap v1571: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 32 KiB/s wr, 25 op/s
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.503 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.638 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.639 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.659 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:49:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:42.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.810 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.811 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.819 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.819 247708 INFO nova.compute.claims [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:49:42 compute-0 nova_compute[247704]: 2026-01-31 07:49:42.927 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 409 B/s wr, 31 op/s
Jan 31 07:49:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:43.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:49:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2465379160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.397 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.406 247708 DEBUG nova.compute.provider_tree [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.425 247708 DEBUG nova.scheduler.client.report [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.443 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.443 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.525 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.526 247708 DEBUG nova.network.neutron [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.550 247708 INFO nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.574 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.669 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.670 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.671 247708 INFO nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Creating image(s)
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.698 247708 DEBUG nova.storage.rbd_utils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.729 247708 DEBUG nova.storage.rbd_utils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.761 247708 DEBUG nova.storage.rbd_utils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.765 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.784 247708 DEBUG nova.policy [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '07aa1b5aaea444449f8ef00dfe56e8eb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a52e7591d7f4e068e3f9fa0e4e288d5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.814 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.815 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.816 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.816 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.843 247708 DEBUG nova.storage.rbd_utils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:49:43 compute-0 nova_compute[247704]: 2026-01-31 07:49:43.849 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:43 compute-0 podman[289749]: 2026-01-31 07:49:43.918871488 +0000 UTC m=+0.087719049 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 31 07:49:44 compute-0 ceph-mon[74496]: pgmap v1572: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 409 B/s wr, 31 op/s
Jan 31 07:49:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2465379160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:44 compute-0 nova_compute[247704]: 2026-01-31 07:49:44.463 247708 DEBUG nova.network.neutron [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Successfully created port: 13e76a94-6f50-418a-b506-89319cf36f33 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:49:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 409 B/s wr, 31 op/s
Jan 31 07:49:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2397668109' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:49:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2397668109' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:49:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:45.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:45 compute-0 nova_compute[247704]: 2026-01-31 07:49:45.367 247708 DEBUG nova.network.neutron [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Successfully updated port: 13e76a94-6f50-418a-b506-89319cf36f33 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:49:45 compute-0 nova_compute[247704]: 2026-01-31 07:49:45.386 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:49:45 compute-0 nova_compute[247704]: 2026-01-31 07:49:45.387 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquired lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:49:45 compute-0 nova_compute[247704]: 2026-01-31 07:49:45.387 247708 DEBUG nova.network.neutron [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:49:45 compute-0 nova_compute[247704]: 2026-01-31 07:49:45.673 247708 DEBUG nova.network.neutron [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:49:45 compute-0 nova_compute[247704]: 2026-01-31 07:49:45.802 247708 DEBUG nova.compute.manager [req-bb038c76-c05d-45b7-b89c-8ad0221db845 req-61f14233-04c8-4c09-a985-46e75d479fd1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-changed-13e76a94-6f50-418a-b506-89319cf36f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:45 compute-0 nova_compute[247704]: 2026-01-31 07:49:45.803 247708 DEBUG nova.compute.manager [req-bb038c76-c05d-45b7-b89c-8ad0221db845 req-61f14233-04c8-4c09-a985-46e75d479fd1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Refreshing instance network info cache due to event network-changed-13e76a94-6f50-418a-b506-89319cf36f33. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:49:45 compute-0 nova_compute[247704]: 2026-01-31 07:49:45.803 247708 DEBUG oslo_concurrency.lockutils [req-bb038c76-c05d-45b7-b89c-8ad0221db845 req-61f14233-04c8-4c09-a985-46e75d479fd1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:49:46 compute-0 nova_compute[247704]: 2026-01-31 07:49:46.126 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:46 compute-0 nova_compute[247704]: 2026-01-31 07:49:46.228 247708 DEBUG nova.storage.rbd_utils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] resizing rbd image d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:49:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 31 07:49:46 compute-0 ceph-mon[74496]: pgmap v1573: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 409 B/s wr, 31 op/s
Jan 31 07:49:46 compute-0 nova_compute[247704]: 2026-01-31 07:49:46.553 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:46.732 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 31 07:49:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 31 07:49:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 158 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Jan 31 07:49:46 compute-0 nova_compute[247704]: 2026-01-31 07:49:46.971 247708 DEBUG nova.network.neutron [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updating instance_info_cache with network_info: [{"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.311 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Releasing lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.312 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Instance network_info: |[{"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.313 247708 DEBUG oslo_concurrency.lockutils [req-bb038c76-c05d-45b7-b89c-8ad0221db845 req-61f14233-04c8-4c09-a985-46e75d479fd1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.313 247708 DEBUG nova.network.neutron [req-bb038c76-c05d-45b7-b89c-8ad0221db845 req-61f14233-04c8-4c09-a985-46e75d479fd1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Refreshing network info cache for port 13e76a94-6f50-418a-b506-89319cf36f33 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.321 247708 DEBUG nova.objects.instance [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'migration_context' on Instance uuid d184a9c5-9bfd-4b91-b909-67ea7cf5c982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:49:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:47.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.336 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.337 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Ensure instance console log exists: /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.338 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.339 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.339 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.343 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Start _get_guest_xml network_info=[{"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.351 247708 WARNING nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.358 247708 DEBUG nova.virt.libvirt.host [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.358 247708 DEBUG nova.virt.libvirt.host [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.365 247708 DEBUG nova.virt.libvirt.host [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.365 247708 DEBUG nova.virt.libvirt.host [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.367 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.367 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.368 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.368 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.368 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.369 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.369 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.369 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.369 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.370 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.370 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.370 247708 DEBUG nova.virt.hardware [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.373 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.506 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:47 compute-0 ceph-mon[74496]: osdmap e249: 3 total, 3 up, 3 in
Jan 31 07:49:47 compute-0 ceph-mon[74496]: pgmap v1575: 305 pgs: 305 active+clean; 158 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Jan 31 07:49:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:49:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4009705974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.897 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.928 247708 DEBUG nova.storage.rbd_utils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:49:47 compute-0 nova_compute[247704]: 2026-01-31 07:49:47.933 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:49:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233301675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.365 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.368 247708 DEBUG nova.virt.libvirt.vif [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-816998154',display_name='tempest-SecurityGroupsTestJSON-server-816998154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-816998154',id=68,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-nwcm1kp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-
31421501-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:49:43Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=d184a9c5-9bfd-4b91-b909-67ea7cf5c982,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.368 247708 DEBUG nova.network.os_vif_util [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.370 247708 DEBUG nova.network.os_vif_util [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:2e,bridge_name='br-int',has_traffic_filtering=True,id=13e76a94-6f50-418a-b506-89319cf36f33,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e76a94-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.372 247708 DEBUG nova.objects.instance [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid d184a9c5-9bfd-4b91-b909-67ea7cf5c982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.393 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <uuid>d184a9c5-9bfd-4b91-b909-67ea7cf5c982</uuid>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <name>instance-00000044</name>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <nova:name>tempest-SecurityGroupsTestJSON-server-816998154</nova:name>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:49:47</nova:creationTime>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <nova:user uuid="07aa1b5aaea444449f8ef00dfe56e8eb">tempest-SecurityGroupsTestJSON-31421501-project-member</nova:user>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <nova:project uuid="2a52e7591d7f4e068e3f9fa0e4e288d5">tempest-SecurityGroupsTestJSON-31421501</nova:project>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <nova:port uuid="13e76a94-6f50-418a-b506-89319cf36f33">
Jan 31 07:49:48 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <system>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <entry name="serial">d184a9c5-9bfd-4b91-b909-67ea7cf5c982</entry>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <entry name="uuid">d184a9c5-9bfd-4b91-b909-67ea7cf5c982</entry>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </system>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <os>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   </os>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <features>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   </features>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk">
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       </source>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk.config">
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       </source>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:49:48 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:62:f2:2e"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <target dev="tap13e76a94-6f"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982/console.log" append="off"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <video>
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </video>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:49:48 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:49:48 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:49:48 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:49:48 compute-0 nova_compute[247704]: </domain>
Jan 31 07:49:48 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.394 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Preparing to wait for external event network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.394 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.395 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.395 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.397 247708 DEBUG nova.virt.libvirt.vif [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-816998154',display_name='tempest-SecurityGroupsTestJSON-server-816998154',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-816998154',id=68,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-nwcm1kp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-31421501-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:49:43Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=d184a9c5-9bfd-4b91-b909-67ea7cf5c982,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.397 247708 DEBUG nova.network.os_vif_util [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.398 247708 DEBUG nova.network.os_vif_util [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:2e,bridge_name='br-int',has_traffic_filtering=True,id=13e76a94-6f50-418a-b506-89319cf36f33,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e76a94-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.399 247708 DEBUG os_vif [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:2e,bridge_name='br-int',has_traffic_filtering=True,id=13e76a94-6f50-418a-b506-89319cf36f33,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e76a94-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.400 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.401 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.401 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.407 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.407 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13e76a94-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.408 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13e76a94-6f, col_values=(('external_ids', {'iface-id': '13e76a94-6f50-418a-b506-89319cf36f33', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:f2:2e', 'vm-uuid': 'd184a9c5-9bfd-4b91-b909-67ea7cf5c982'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.410 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:48 compute-0 NetworkManager[49108]: <info>  [1769845788.4118] manager: (tap13e76a94-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.414 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.420 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.423 247708 INFO os_vif [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:f2:2e,bridge_name='br-int',has_traffic_filtering=True,id=13e76a94-6f50-418a-b506-89319cf36f33,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e76a94-6f')
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.489 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.490 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.490 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] No VIF found with MAC fa:16:3e:62:f2:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.491 247708 INFO nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Using config drive
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.530 247708 DEBUG nova.storage.rbd_utils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:49:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 07:49:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 07:49:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4009705974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3597531801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:49:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2233301675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.929 247708 INFO nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Creating config drive at /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982/disk.config
Jan 31 07:49:48 compute-0 nova_compute[247704]: 2026-01-31 07:49:48.934 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5midd37i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 161 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Jan 31 07:49:49 compute-0 nova_compute[247704]: 2026-01-31 07:49:49.069 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5midd37i" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:49 compute-0 nova_compute[247704]: 2026-01-31 07:49:49.121 247708 DEBUG nova.storage.rbd_utils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] rbd image d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:49:49 compute-0 nova_compute[247704]: 2026-01-31 07:49:49.128 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982/disk.config d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:49:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:49.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:49 compute-0 nova_compute[247704]: 2026-01-31 07:49:49.392 247708 DEBUG nova.network.neutron [req-bb038c76-c05d-45b7-b89c-8ad0221db845 req-61f14233-04c8-4c09-a985-46e75d479fd1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updated VIF entry in instance network info cache for port 13e76a94-6f50-418a-b506-89319cf36f33. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:49:49 compute-0 nova_compute[247704]: 2026-01-31 07:49:49.393 247708 DEBUG nova.network.neutron [req-bb038c76-c05d-45b7-b89c-8ad0221db845 req-61f14233-04c8-4c09-a985-46e75d479fd1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updating instance_info_cache with network_info: [{"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:49:49 compute-0 nova_compute[247704]: 2026-01-31 07:49:49.421 247708 DEBUG oslo_concurrency.lockutils [req-bb038c76-c05d-45b7-b89c-8ad0221db845 req-61f14233-04c8-4c09-a985-46e75d479fd1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:49:50 compute-0 ceph-mon[74496]: pgmap v1576: 305 pgs: 305 active+clean; 161 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Jan 31 07:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:49:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:50.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:50 compute-0 nova_compute[247704]: 2026-01-31 07:49:50.749 247708 DEBUG oslo_concurrency.processutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982/disk.config d184a9c5-9bfd-4b91-b909-67ea7cf5c982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:49:50 compute-0 nova_compute[247704]: 2026-01-31 07:49:50.750 247708 INFO nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Deleting local config drive /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982/disk.config because it was imported into RBD.
Jan 31 07:49:50 compute-0 kernel: tap13e76a94-6f: entered promiscuous mode
Jan 31 07:49:50 compute-0 NetworkManager[49108]: <info>  [1769845790.8222] manager: (tap13e76a94-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Jan 31 07:49:50 compute-0 ovn_controller[149457]: 2026-01-31T07:49:50Z|00213|binding|INFO|Claiming lport 13e76a94-6f50-418a-b506-89319cf36f33 for this chassis.
Jan 31 07:49:50 compute-0 nova_compute[247704]: 2026-01-31 07:49:50.823 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:50 compute-0 ovn_controller[149457]: 2026-01-31T07:49:50Z|00214|binding|INFO|13e76a94-6f50-418a-b506-89319cf36f33: Claiming fa:16:3e:62:f2:2e 10.100.0.9
Jan 31 07:49:50 compute-0 nova_compute[247704]: 2026-01-31 07:49:50.829 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:50 compute-0 nova_compute[247704]: 2026-01-31 07:49:50.831 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:50 compute-0 nova_compute[247704]: 2026-01-31 07:49:50.852 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:50 compute-0 systemd-udevd[290021]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:49:50 compute-0 ovn_controller[149457]: 2026-01-31T07:49:50Z|00215|binding|INFO|Setting lport 13e76a94-6f50-418a-b506-89319cf36f33 ovn-installed in OVS
Jan 31 07:49:50 compute-0 nova_compute[247704]: 2026-01-31 07:49:50.856 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:50 compute-0 systemd-machined[214448]: New machine qemu-29-instance-00000044.
Jan 31 07:49:50 compute-0 ovn_controller[149457]: 2026-01-31T07:49:50Z|00216|binding|INFO|Setting lport 13e76a94-6f50-418a-b506-89319cf36f33 up in Southbound
Jan 31 07:49:50 compute-0 NetworkManager[49108]: <info>  [1769845790.8756] device (tap13e76a94-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:49:50 compute-0 NetworkManager[49108]: <info>  [1769845790.8765] device (tap13e76a94-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.872 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f2:2e 10.100.0.9'], port_security=['fa:16:3e:62:f2:2e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd184a9c5-9bfd-4b91-b909-67ea7cf5c982', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a52e7591d7f4e068e3f9fa0e4e288d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd6aa0181-fdb2-4d3e-a8a9-d361e28d0bfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2b3302f-04ed-4085-954c-3fd369ef549b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=13e76a94-6f50-418a-b506-89319cf36f33) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.874 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 13e76a94-6f50-418a-b506-89319cf36f33 in datapath abea69d7-4daf-4b3f-9f7f-2b06f416400d bound to our chassis
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.877 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:49:50 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000044.
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.889 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[60770e6e-f0bf-4106-8a7f-bbb7d3d26707]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.891 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabea69d7-41 in ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.894 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabea69d7-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.895 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[72ceeb3b-d907-479f-b1c8-8c5983bb2f9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.896 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[69c7aab8-05b7-42b9-9705-2806a2df4c13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.907 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[5af1b16b-81f0-4240-9fed-9967e537c769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.919 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7d506ce9-eb10-4d39-8a03-7d51a939192e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 137 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.943 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[67f74339-30cb-492c-b513-38f693da1563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:50 compute-0 NetworkManager[49108]: <info>  [1769845790.9518] manager: (tapabea69d7-40): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.950 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[42eb7058-3ac1-4104-a148-fd10903cfd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.983 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[55cd79da-3308-4717-a547-f9bdb0de017a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:50.986 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef7edd6-5d75-4fe6-a69d-4a9b24db5943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:51 compute-0 NetworkManager[49108]: <info>  [1769845791.0099] device (tapabea69d7-40): carrier: link connected
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.018 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac105d5-bd41-49d0-9705-a1613b0b9f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.038 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e1952f95-d3fb-4fd6-b743-d373b6ca8bfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabea69d7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:68:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603362, 'reachable_time': 19500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290055, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.059 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[be1afa07-e097-4737-9c92-eba1fa460d9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:68f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603362, 'tstamp': 603362}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290056, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.076 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[de05942a-2b5e-4ae6-a859-69038f1b886f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabea69d7-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:68:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603362, 'reachable_time': 19500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290057, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.107 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[492c5197-691f-41bf-b386-d3fa9f7f4569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.175 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7a0bdb-563a-40fc-9b1d-fde0587f6c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.177 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabea69d7-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.177 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.178 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabea69d7-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:51 compute-0 kernel: tapabea69d7-40: entered promiscuous mode
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.180 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:51 compute-0 NetworkManager[49108]: <info>  [1769845791.1816] manager: (tapabea69d7-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.184 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabea69d7-40, col_values=(('external_ids', {'iface-id': '38a02e8d-e8fe-4f86-adeb-b79ce221c983'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.185 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:51 compute-0 ovn_controller[149457]: 2026-01-31T07:49:51Z|00217|binding|INFO|Releasing lport 38a02e8d-e8fe-4f86-adeb-b79ce221c983 from this chassis (sb_readonly=0)
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.186 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.187 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.188 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0b4bb5-dee1-42a7-9065-a3d0395ea4d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.189 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/abea69d7-4daf-4b3f-9f7f-2b06f416400d.pid.haproxy
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID abea69d7-4daf-4b3f-9f7f-2b06f416400d
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:49:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:49:51.191 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'env', 'PROCESS_TAG=haproxy-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abea69d7-4daf-4b3f-9f7f-2b06f416400d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.192 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:51.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.346 247708 DEBUG nova.compute.manager [req-cb1c3b2f-0345-485a-b3c8-cf01344b8562 req-41726353-b38f-49e9-a62d-ec4047437d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.346 247708 DEBUG oslo_concurrency.lockutils [req-cb1c3b2f-0345-485a-b3c8-cf01344b8562 req-41726353-b38f-49e9-a62d-ec4047437d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.347 247708 DEBUG oslo_concurrency.lockutils [req-cb1c3b2f-0345-485a-b3c8-cf01344b8562 req-41726353-b38f-49e9-a62d-ec4047437d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.347 247708 DEBUG oslo_concurrency.lockutils [req-cb1c3b2f-0345-485a-b3c8-cf01344b8562 req-41726353-b38f-49e9-a62d-ec4047437d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.347 247708 DEBUG nova.compute.manager [req-cb1c3b2f-0345-485a-b3c8-cf01344b8562 req-41726353-b38f-49e9-a62d-ec4047437d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Processing event network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.440 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.442 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845791.4401155, d184a9c5-9bfd-4b91-b909-67ea7cf5c982 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.443 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] VM Started (Lifecycle Event)
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.448 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.452 247708 INFO nova.virt.libvirt.driver [-] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Instance spawned successfully.
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.453 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.471 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.477 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.481 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.482 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.482 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.482 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.483 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.483 247708 DEBUG nova.virt.libvirt.driver [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.516 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.517 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845791.441715, d184a9c5-9bfd-4b91-b909-67ea7cf5c982 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.518 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] VM Paused (Lifecycle Event)
Jan 31 07:49:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 31 07:49:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 31 07:49:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.554 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.562 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845791.4453883, d184a9c5-9bfd-4b91-b909-67ea7cf5c982 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.562 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] VM Resumed (Lifecycle Event)
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.571 247708 INFO nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Took 7.90 seconds to spawn the instance on the hypervisor.
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.571 247708 DEBUG nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.584 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.590 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:49:51 compute-0 podman[290131]: 2026-01-31 07:49:51.611101722 +0000 UTC m=+0.074660510 container create b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.626 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:49:51 compute-0 systemd[1]: Started libpod-conmon-b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9.scope.
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.662 247708 INFO nova.compute.manager [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Took 8.89 seconds to build instance.
Jan 31 07:49:51 compute-0 podman[290131]: 2026-01-31 07:49:51.579646982 +0000 UTC m=+0.043205820 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:49:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:49:51 compute-0 nova_compute[247704]: 2026-01-31 07:49:51.685 247708 DEBUG oslo_concurrency.lockutils [None req-e2724a57-5b6c-493a-af9a-9652cadc1592 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/477f97f9d59f1c8d39a5f17c90051f5c95374ce6e667dd4ff98b5297fe19ceea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:49:51 compute-0 podman[290131]: 2026-01-31 07:49:51.764866798 +0000 UTC m=+0.228425666 container init b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:49:51 compute-0 podman[290131]: 2026-01-31 07:49:51.772745331 +0000 UTC m=+0.236304149 container start b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 07:49:51 compute-0 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[290147]: [NOTICE]   (290151) : New worker (290153) forked
Jan 31 07:49:51 compute-0 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[290147]: [NOTICE]   (290151) : Loading success.
Jan 31 07:49:52 compute-0 ceph-mon[74496]: pgmap v1577: 305 pgs: 305 active+clean; 137 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 31 07:49:52 compute-0 ceph-mon[74496]: osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:49:52 compute-0 nova_compute[247704]: 2026-01-31 07:49:52.508 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:49:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:52.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:49:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 114 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Jan 31 07:49:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:53.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:53 compute-0 nova_compute[247704]: 2026-01-31 07:49:53.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:53 compute-0 nova_compute[247704]: 2026-01-31 07:49:53.692 247708 DEBUG nova.compute.manager [req-c9cdaba6-1ba9-4b1a-88ad-f29bd93ce994 req-ebaee941-7b14-4957-9c89-896446c1959b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:53 compute-0 nova_compute[247704]: 2026-01-31 07:49:53.693 247708 DEBUG oslo_concurrency.lockutils [req-c9cdaba6-1ba9-4b1a-88ad-f29bd93ce994 req-ebaee941-7b14-4957-9c89-896446c1959b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:49:53 compute-0 nova_compute[247704]: 2026-01-31 07:49:53.694 247708 DEBUG oslo_concurrency.lockutils [req-c9cdaba6-1ba9-4b1a-88ad-f29bd93ce994 req-ebaee941-7b14-4957-9c89-896446c1959b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:49:53 compute-0 nova_compute[247704]: 2026-01-31 07:49:53.694 247708 DEBUG oslo_concurrency.lockutils [req-c9cdaba6-1ba9-4b1a-88ad-f29bd93ce994 req-ebaee941-7b14-4957-9c89-896446c1959b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:49:53 compute-0 nova_compute[247704]: 2026-01-31 07:49:53.694 247708 DEBUG nova.compute.manager [req-c9cdaba6-1ba9-4b1a-88ad-f29bd93ce994 req-ebaee941-7b14-4957-9c89-896446c1959b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] No waiting events found dispatching network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:49:53 compute-0 nova_compute[247704]: 2026-01-31 07:49:53.695 247708 WARNING nova.compute.manager [req-c9cdaba6-1ba9-4b1a-88ad-f29bd93ce994 req-ebaee941-7b14-4957-9c89-896446c1959b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received unexpected event network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 for instance with vm_state active and task_state None.
Jan 31 07:49:54 compute-0 nova_compute[247704]: 2026-01-31 07:49:54.037 247708 DEBUG nova.compute.manager [req-f72e73aa-eb45-4796-b2f1-c5f94138df05 req-ce1da17a-48e9-4ca2-b5b9-d7814c172078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-changed-13e76a94-6f50-418a-b506-89319cf36f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:54 compute-0 nova_compute[247704]: 2026-01-31 07:49:54.038 247708 DEBUG nova.compute.manager [req-f72e73aa-eb45-4796-b2f1-c5f94138df05 req-ce1da17a-48e9-4ca2-b5b9-d7814c172078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Refreshing instance network info cache due to event network-changed-13e76a94-6f50-418a-b506-89319cf36f33. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:49:54 compute-0 nova_compute[247704]: 2026-01-31 07:49:54.038 247708 DEBUG oslo_concurrency.lockutils [req-f72e73aa-eb45-4796-b2f1-c5f94138df05 req-ce1da17a-48e9-4ca2-b5b9-d7814c172078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:49:54 compute-0 nova_compute[247704]: 2026-01-31 07:49:54.038 247708 DEBUG oslo_concurrency.lockutils [req-f72e73aa-eb45-4796-b2f1-c5f94138df05 req-ce1da17a-48e9-4ca2-b5b9-d7814c172078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:49:54 compute-0 nova_compute[247704]: 2026-01-31 07:49:54.038 247708 DEBUG nova.network.neutron [req-f72e73aa-eb45-4796-b2f1-c5f94138df05 req-ce1da17a-48e9-4ca2-b5b9-d7814c172078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Refreshing network info cache for port 13e76a94-6f50-418a-b506-89319cf36f33 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:49:54 compute-0 ceph-mon[74496]: pgmap v1579: 305 pgs: 305 active+clean; 114 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Jan 31 07:49:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:54 compute-0 sudo[290164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:54 compute-0 sudo[290164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:54 compute-0 sudo[290164]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:54 compute-0 sudo[290189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:49:54 compute-0 sudo[290189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:49:54 compute-0 sudo[290189]: pam_unix(sudo:session): session closed for user root
Jan 31 07:49:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 114 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 294 KiB/s wr, 47 op/s
Jan 31 07:49:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1330305516' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:49:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1330305516' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:49:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:49:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:55.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:49:55 compute-0 nova_compute[247704]: 2026-01-31 07:49:55.806 247708 DEBUG nova.compute.manager [req-ca55ff86-d502-40a1-b5c3-048aa622197a req-a4d46ae7-6771-43a5-8efd-66e60b4aee38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-changed-13e76a94-6f50-418a-b506-89319cf36f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:49:55 compute-0 nova_compute[247704]: 2026-01-31 07:49:55.806 247708 DEBUG nova.compute.manager [req-ca55ff86-d502-40a1-b5c3-048aa622197a req-a4d46ae7-6771-43a5-8efd-66e60b4aee38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Refreshing instance network info cache due to event network-changed-13e76a94-6f50-418a-b506-89319cf36f33. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:49:55 compute-0 nova_compute[247704]: 2026-01-31 07:49:55.807 247708 DEBUG oslo_concurrency.lockutils [req-ca55ff86-d502-40a1-b5c3-048aa622197a req-a4d46ae7-6771-43a5-8efd-66e60b4aee38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:49:56 compute-0 ceph-mon[74496]: pgmap v1580: 305 pgs: 305 active+clean; 114 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 294 KiB/s wr, 47 op/s
Jan 31 07:49:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:49:56 compute-0 nova_compute[247704]: 2026-01-31 07:49:56.666 247708 DEBUG nova.network.neutron [req-f72e73aa-eb45-4796-b2f1-c5f94138df05 req-ce1da17a-48e9-4ca2-b5b9-d7814c172078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updated VIF entry in instance network info cache for port 13e76a94-6f50-418a-b506-89319cf36f33. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:49:56 compute-0 nova_compute[247704]: 2026-01-31 07:49:56.667 247708 DEBUG nova.network.neutron [req-f72e73aa-eb45-4796-b2f1-c5f94138df05 req-ce1da17a-48e9-4ca2-b5b9-d7814c172078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updating instance_info_cache with network_info: [{"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:49:56 compute-0 nova_compute[247704]: 2026-01-31 07:49:56.687 247708 DEBUG oslo_concurrency.lockutils [req-f72e73aa-eb45-4796-b2f1-c5f94138df05 req-ce1da17a-48e9-4ca2-b5b9-d7814c172078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:49:56 compute-0 nova_compute[247704]: 2026-01-31 07:49:56.688 247708 DEBUG oslo_concurrency.lockutils [req-ca55ff86-d502-40a1-b5c3-048aa622197a req-a4d46ae7-6771-43a5-8efd-66e60b4aee38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:49:56 compute-0 nova_compute[247704]: 2026-01-31 07:49:56.688 247708 DEBUG nova.network.neutron [req-ca55ff86-d502-40a1-b5c3-048aa622197a req-a4d46ae7-6771-43a5-8efd-66e60b4aee38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Refreshing network info cache for port 13e76a94-6f50-418a-b506-89319cf36f33 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:49:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:49:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 256 KiB/s wr, 140 op/s
Jan 31 07:49:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:57.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:57 compute-0 nova_compute[247704]: 2026-01-31 07:49:57.511 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:58 compute-0 ceph-mon[74496]: pgmap v1581: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 256 KiB/s wr, 140 op/s
Jan 31 07:49:58 compute-0 nova_compute[247704]: 2026-01-31 07:49:58.415 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:49:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:49:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:58.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:49:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 227 KiB/s wr, 152 op/s
Jan 31 07:49:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:49:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:49:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:59.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:50:00 compute-0 nova_compute[247704]: 2026-01-31 07:50:00.210 247708 DEBUG nova.network.neutron [req-ca55ff86-d502-40a1-b5c3-048aa622197a req-a4d46ae7-6771-43a5-8efd-66e60b4aee38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updated VIF entry in instance network info cache for port 13e76a94-6f50-418a-b506-89319cf36f33. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:50:00 compute-0 nova_compute[247704]: 2026-01-31 07:50:00.211 247708 DEBUG nova.network.neutron [req-ca55ff86-d502-40a1-b5c3-048aa622197a req-a4d46ae7-6771-43a5-8efd-66e60b4aee38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updating instance_info_cache with network_info: [{"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:00 compute-0 nova_compute[247704]: 2026-01-31 07:50:00.231 247708 DEBUG oslo_concurrency.lockutils [req-ca55ff86-d502-40a1-b5c3-048aa622197a req-a4d46ae7-6771-43a5-8efd-66e60b4aee38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:00 compute-0 ceph-mon[74496]: pgmap v1582: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 227 KiB/s wr, 152 op/s
Jan 31 07:50:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:50:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:00 compute-0 podman[290218]: 2026-01-31 07:50:00.877868761 +0000 UTC m=+0.052099878 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 07:50:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 120 op/s
Jan 31 07:50:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:01.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:02 compute-0 ceph-mon[74496]: pgmap v1583: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 120 op/s
Jan 31 07:50:02 compute-0 nova_compute[247704]: 2026-01-31 07:50:02.512 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:02.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 105 op/s
Jan 31 07:50:03 compute-0 ovn_controller[149457]: 2026-01-31T07:50:03Z|00218|binding|INFO|Releasing lport 38a02e8d-e8fe-4f86-adeb-b79ce221c983 from this chassis (sb_readonly=0)
Jan 31 07:50:03 compute-0 nova_compute[247704]: 2026-01-31 07:50:03.157 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:03.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:03 compute-0 nova_compute[247704]: 2026-01-31 07:50:03.417 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:03 compute-0 nova_compute[247704]: 2026-01-31 07:50:03.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:03 compute-0 sudo[290239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:03 compute-0 sudo[290239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:03 compute-0 sudo[290239]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:03 compute-0 sudo[290264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:50:03 compute-0 sudo[290264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:03 compute-0 sudo[290264]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:03 compute-0 sudo[290289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:03 compute-0 sudo[290289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:03 compute-0 sudo[290289]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:04 compute-0 sudo[290314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:50:04 compute-0 sudo[290314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:04 compute-0 sudo[290314]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:04 compute-0 ceph-mon[74496]: pgmap v1584: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 105 op/s
Jan 31 07:50:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 07:50:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:50:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:50:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:50:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:50:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:50:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:50:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e338aeb3-48a7-4237-9e84-916e805bc84e does not exist
Jan 31 07:50:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f6254391-c28f-424c-a78c-a3f6290a4cc7 does not exist
Jan 31 07:50:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8c88a4a3-3566-4037-bd13-256fd42246c4 does not exist
Jan 31 07:50:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:50:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:50:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:50:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:50:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:50:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:50:04 compute-0 sudo[290369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:04 compute-0 sudo[290369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:04 compute-0 sudo[290369]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:04 compute-0 sudo[290394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:50:04 compute-0 sudo[290394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:04 compute-0 sudo[290394]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:04 compute-0 sudo[290419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:04 compute-0 sudo[290419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:04 compute-0 sudo[290419]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:04.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:04 compute-0 sudo[290444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:50:04 compute-0 sudo[290444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Jan 31 07:50:05 compute-0 podman[290510]: 2026-01-31 07:50:05.09746917 +0000 UTC m=+0.047915835 container create c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:50:05 compute-0 systemd[1]: Started libpod-conmon-c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a.scope.
Jan 31 07:50:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:50:05 compute-0 podman[290510]: 2026-01-31 07:50:05.074654191 +0000 UTC m=+0.025100886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:50:05 compute-0 podman[290510]: 2026-01-31 07:50:05.240708899 +0000 UTC m=+0.191155564 container init c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 07:50:05 compute-0 podman[290510]: 2026-01-31 07:50:05.247628149 +0000 UTC m=+0.198074814 container start c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:50:05 compute-0 silly_brattain[290526]: 167 167
Jan 31 07:50:05 compute-0 systemd[1]: libpod-c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a.scope: Deactivated successfully.
Jan 31 07:50:05 compute-0 podman[290510]: 2026-01-31 07:50:05.292152678 +0000 UTC m=+0.242599433 container attach c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:50:05 compute-0 podman[290510]: 2026-01-31 07:50:05.292947628 +0000 UTC m=+0.243394333 container died c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:50:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:05.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e39d8e5c40c699dab9887720a702bbd3e50752aa95254c1fb74c125438f3e7f8-merged.mount: Deactivated successfully.
Jan 31 07:50:05 compute-0 podman[290510]: 2026-01-31 07:50:05.407001671 +0000 UTC m=+0.357448336 container remove c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:50:05 compute-0 systemd[1]: libpod-conmon-c37d50eb3bbabd01e2c084fe8d14c6851884e7e33f62f4f625e89f057cc7d57a.scope: Deactivated successfully.
Jan 31 07:50:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 07:50:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:50:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:50:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:50:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:50:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:50:05 compute-0 podman[290554]: 2026-01-31 07:50:05.552959027 +0000 UTC m=+0.049991896 container create ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_buck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:50:05 compute-0 systemd[1]: Started libpod-conmon-ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a.scope.
Jan 31 07:50:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b0b71f8ef00f5d26c4ab542acafc03f6c39fb77b6c6437e95a88c332c59cdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b0b71f8ef00f5d26c4ab542acafc03f6c39fb77b6c6437e95a88c332c59cdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b0b71f8ef00f5d26c4ab542acafc03f6c39fb77b6c6437e95a88c332c59cdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b0b71f8ef00f5d26c4ab542acafc03f6c39fb77b6c6437e95a88c332c59cdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79b0b71f8ef00f5d26c4ab542acafc03f6c39fb77b6c6437e95a88c332c59cdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:05 compute-0 podman[290554]: 2026-01-31 07:50:05.531544422 +0000 UTC m=+0.028577101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:50:05 compute-0 podman[290554]: 2026-01-31 07:50:05.635958649 +0000 UTC m=+0.132991318 container init ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_buck, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:50:05 compute-0 podman[290554]: 2026-01-31 07:50:05.646317823 +0000 UTC m=+0.143350472 container start ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 07:50:05 compute-0 ovn_controller[149457]: 2026-01-31T07:50:05Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:62:f2:2e 10.100.0.9
Jan 31 07:50:05 compute-0 ovn_controller[149457]: 2026-01-31T07:50:05Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:62:f2:2e 10.100.0.9
Jan 31 07:50:05 compute-0 podman[290554]: 2026-01-31 07:50:05.650784793 +0000 UTC m=+0.147817462 container attach ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:50:06 compute-0 great_buck[290571]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:50:06 compute-0 great_buck[290571]: --> relative data size: 1.0
Jan 31 07:50:06 compute-0 great_buck[290571]: --> All data devices are unavailable
Jan 31 07:50:06 compute-0 systemd[1]: libpod-ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a.scope: Deactivated successfully.
Jan 31 07:50:06 compute-0 ceph-mon[74496]: pgmap v1585: 305 pgs: 305 active+clean; 88 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Jan 31 07:50:06 compute-0 podman[290586]: 2026-01-31 07:50:06.542911503 +0000 UTC m=+0.032041625 container died ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_buck, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 07:50:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:06.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-79b0b71f8ef00f5d26c4ab542acafc03f6c39fb77b6c6437e95a88c332c59cdd-merged.mount: Deactivated successfully.
Jan 31 07:50:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 116 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 154 op/s
Jan 31 07:50:07 compute-0 podman[290586]: 2026-01-31 07:50:07.040243505 +0000 UTC m=+0.529373607 container remove ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:50:07 compute-0 systemd[1]: libpod-conmon-ab24df778a202bc486b1a38c78d580a1bb615f41a570eab8b92105c61684644a.scope: Deactivated successfully.
Jan 31 07:50:07 compute-0 sudo[290444]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:07 compute-0 sudo[290601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:07 compute-0 sudo[290601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:07 compute-0 sudo[290601]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:07 compute-0 sudo[290626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:50:07 compute-0 sudo[290626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:07 compute-0 sudo[290626]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:07 compute-0 sudo[290651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:07 compute-0 sudo[290651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:07 compute-0 sudo[290651]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:07 compute-0 sudo[290676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:50:07 compute-0 sudo[290676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:07 compute-0 nova_compute[247704]: 2026-01-31 07:50:07.555 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:07 compute-0 nova_compute[247704]: 2026-01-31 07:50:07.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:07 compute-0 nova_compute[247704]: 2026-01-31 07:50:07.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:50:07 compute-0 nova_compute[247704]: 2026-01-31 07:50:07.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:50:07 compute-0 podman[290744]: 2026-01-31 07:50:07.660598289 +0000 UTC m=+0.048697974 container create 61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:50:07 compute-0 systemd[1]: Started libpod-conmon-61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50.scope.
Jan 31 07:50:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:50:07 compute-0 podman[290744]: 2026-01-31 07:50:07.635124545 +0000 UTC m=+0.023224330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:50:07 compute-0 podman[290744]: 2026-01-31 07:50:07.737319748 +0000 UTC m=+0.125419463 container init 61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:50:07 compute-0 podman[290744]: 2026-01-31 07:50:07.74552582 +0000 UTC m=+0.133625505 container start 61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:50:07 compute-0 cranky_goldwasser[290760]: 167 167
Jan 31 07:50:07 compute-0 systemd[1]: libpod-61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50.scope: Deactivated successfully.
Jan 31 07:50:07 compute-0 conmon[290760]: conmon 61a3195d0da895687ccc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50.scope/container/memory.events
Jan 31 07:50:07 compute-0 ceph-mon[74496]: pgmap v1586: 305 pgs: 305 active+clean; 116 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 154 op/s
Jan 31 07:50:07 compute-0 podman[290744]: 2026-01-31 07:50:07.756716274 +0000 UTC m=+0.144815979 container attach 61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldwasser, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:50:07 compute-0 podman[290744]: 2026-01-31 07:50:07.757045691 +0000 UTC m=+0.145145366 container died 61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldwasser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-69f2c904f90ff0fb57926bf4bc355a73082aefce275c2964e1e3307661f84e19-merged.mount: Deactivated successfully.
Jan 31 07:50:07 compute-0 podman[290744]: 2026-01-31 07:50:07.800182298 +0000 UTC m=+0.188281993 container remove 61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:50:07 compute-0 systemd[1]: libpod-conmon-61a3195d0da895687ccc4453ad35ce9efc1b7c3e4732ca4b4cd1660f10792e50.scope: Deactivated successfully.
Jan 31 07:50:07 compute-0 nova_compute[247704]: 2026-01-31 07:50:07.806 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:50:07 compute-0 nova_compute[247704]: 2026-01-31 07:50:07.806 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:50:07 compute-0 nova_compute[247704]: 2026-01-31 07:50:07.806 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:50:07 compute-0 nova_compute[247704]: 2026-01-31 07:50:07.807 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d184a9c5-9bfd-4b91-b909-67ea7cf5c982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:07 compute-0 podman[290787]: 2026-01-31 07:50:07.933937694 +0000 UTC m=+0.041998000 container create b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 07:50:07 compute-0 systemd[1]: Started libpod-conmon-b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137.scope.
Jan 31 07:50:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:50:08 compute-0 podman[290787]: 2026-01-31 07:50:07.912613122 +0000 UTC m=+0.020673458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a670d79d7e46a08c581afa732af71131654708adb3fe79f2635f8e06093244a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a670d79d7e46a08c581afa732af71131654708adb3fe79f2635f8e06093244a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a670d79d7e46a08c581afa732af71131654708adb3fe79f2635f8e06093244a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a670d79d7e46a08c581afa732af71131654708adb3fe79f2635f8e06093244a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:08 compute-0 podman[290787]: 2026-01-31 07:50:08.036536866 +0000 UTC m=+0.144597202 container init b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:50:08 compute-0 podman[290787]: 2026-01-31 07:50:08.054930368 +0000 UTC m=+0.162990724 container start b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:50:08 compute-0 podman[290787]: 2026-01-31 07:50:08.059413677 +0000 UTC m=+0.167474043 container attach b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:50:08 compute-0 nova_compute[247704]: 2026-01-31 07:50:08.419 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:50:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:50:08 compute-0 jolly_cerf[290803]: {
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:     "0": [
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:         {
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "devices": [
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "/dev/loop3"
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             ],
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "lv_name": "ceph_lv0",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "lv_size": "7511998464",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "name": "ceph_lv0",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "tags": {
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.cluster_name": "ceph",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.crush_device_class": "",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.encrypted": "0",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.osd_id": "0",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.type": "block",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:                 "ceph.vdo": "0"
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             },
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "type": "block",
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:             "vg_name": "ceph_vg0"
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:         }
Jan 31 07:50:08 compute-0 jolly_cerf[290803]:     ]
Jan 31 07:50:08 compute-0 jolly_cerf[290803]: }
Jan 31 07:50:08 compute-0 systemd[1]: libpod-b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137.scope: Deactivated successfully.
Jan 31 07:50:08 compute-0 podman[290787]: 2026-01-31 07:50:08.919544324 +0000 UTC m=+1.027604690 container died b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:50:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 119 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a670d79d7e46a08c581afa732af71131654708adb3fe79f2635f8e06093244a-merged.mount: Deactivated successfully.
Jan 31 07:50:08 compute-0 podman[290787]: 2026-01-31 07:50:08.979726858 +0000 UTC m=+1.087787184 container remove b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:50:08 compute-0 systemd[1]: libpod-conmon-b9386392faffc236e8ed183f12adc0ed99deb4eae055002cc1830cdc6e758137.scope: Deactivated successfully.
Jan 31 07:50:09 compute-0 sudo[290676]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:09 compute-0 sudo[290823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:09 compute-0 sudo[290823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:09 compute-0 sudo[290823]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:09 compute-0 sudo[290848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:50:09 compute-0 sudo[290848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:09 compute-0 sudo[290848]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:09 compute-0 sudo[290873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:09 compute-0 sudo[290873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:09 compute-0 sudo[290873]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:09 compute-0 sudo[290898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:50:09 compute-0 sudo[290898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:09.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:09 compute-0 podman[290964]: 2026-01-31 07:50:09.578545785 +0000 UTC m=+0.041116918 container create c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:50:09 compute-0 systemd[1]: Started libpod-conmon-c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e.scope.
Jan 31 07:50:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:50:09 compute-0 podman[290964]: 2026-01-31 07:50:09.559064558 +0000 UTC m=+0.021635721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:50:09 compute-0 podman[290964]: 2026-01-31 07:50:09.667438822 +0000 UTC m=+0.130009975 container init c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 07:50:09 compute-0 podman[290964]: 2026-01-31 07:50:09.677121839 +0000 UTC m=+0.139692972 container start c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:50:09 compute-0 frosty_kowalevski[290981]: 167 167
Jan 31 07:50:09 compute-0 systemd[1]: libpod-c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e.scope: Deactivated successfully.
Jan 31 07:50:09 compute-0 conmon[290981]: conmon c29d5bdcea1f625c7f2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e.scope/container/memory.events
Jan 31 07:50:09 compute-0 podman[290964]: 2026-01-31 07:50:09.689094613 +0000 UTC m=+0.151665746 container attach c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:50:09 compute-0 podman[290964]: 2026-01-31 07:50:09.689587135 +0000 UTC m=+0.152158268 container died c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 07:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-8898f52c8fdfb78a71d4cc74642df1ae79ed71ee1dc19829f8089825a66e031e-merged.mount: Deactivated successfully.
Jan 31 07:50:09 compute-0 podman[290964]: 2026-01-31 07:50:09.741269801 +0000 UTC m=+0.203840934 container remove c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:50:09 compute-0 systemd[1]: libpod-conmon-c29d5bdcea1f625c7f2d75632db1799ca09ca909cd79104cd2e4aefdfd694c3e.scope: Deactivated successfully.
Jan 31 07:50:09 compute-0 podman[291004]: 2026-01-31 07:50:09.926174419 +0000 UTC m=+0.044668605 container create 8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_satoshi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:50:09 compute-0 systemd[1]: Started libpod-conmon-8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec.scope.
Jan 31 07:50:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:50:09 compute-0 podman[291004]: 2026-01-31 07:50:09.9041675 +0000 UTC m=+0.022661666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd6856d19c3176c6790887e689e05319d535b15323b002e83899aa03852fcb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd6856d19c3176c6790887e689e05319d535b15323b002e83899aa03852fcb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd6856d19c3176c6790887e689e05319d535b15323b002e83899aa03852fcb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fd6856d19c3176c6790887e689e05319d535b15323b002e83899aa03852fcb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:50:10 compute-0 podman[291004]: 2026-01-31 07:50:10.024290382 +0000 UTC m=+0.142784578 container init 8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 07:50:10 compute-0 podman[291004]: 2026-01-31 07:50:10.040252074 +0000 UTC m=+0.158746260 container start 8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:50:10 compute-0 podman[291004]: 2026-01-31 07:50:10.044474357 +0000 UTC m=+0.162968543 container attach 8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:50:10 compute-0 ceph-mon[74496]: pgmap v1587: 305 pgs: 305 active+clean; 119 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 07:50:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2475019247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3085610044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:10.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]: {
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]:         "osd_id": 0,
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]:         "type": "bluestore"
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]:     }
Jan 31 07:50:10 compute-0 nervous_satoshi[291021]: }
Jan 31 07:50:10 compute-0 systemd[1]: libpod-8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec.scope: Deactivated successfully.
Jan 31 07:50:10 compute-0 podman[291004]: 2026-01-31 07:50:10.833636255 +0000 UTC m=+0.952130401 container died 8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:50:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fd6856d19c3176c6790887e689e05319d535b15323b002e83899aa03852fcb1-merged.mount: Deactivated successfully.
Jan 31 07:50:10 compute-0 podman[291004]: 2026-01-31 07:50:10.938412632 +0000 UTC m=+1.056906818 container remove 8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 07:50:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:10 compute-0 systemd[1]: libpod-conmon-8d09f3c3e90fd7194f42548bf59e399053fb74e30042e8df5c6c58ed4ed487ec.scope: Deactivated successfully.
Jan 31 07:50:10 compute-0 nova_compute[247704]: 2026-01-31 07:50:10.979 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updating instance_info_cache with network_info: [{"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:10 compute-0 sudo[290898]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.009 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-d184a9c5-9bfd-4b91-b909-67ea7cf5c982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.009 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.010 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.011 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.011 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.034 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.035 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.035 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.036 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.036 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:50:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 21738895-c539-4c8b-b244-c0a9d4359593 does not exist
Jan 31 07:50:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 212164de-fa9e-43dd-b7e2-b9492172fdde does not exist
Jan 31 07:50:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8cbbffea-29d9-4136-96fd-689069ec1c39 does not exist
Jan 31 07:50:11 compute-0 sudo[291057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:11 compute-0 sudo[291057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:11 compute-0 sudo[291057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:11.160 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:11.161 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:11.162 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/37391559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:50:11 compute-0 sudo[291083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:50:11 compute-0 sudo[291083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:11 compute-0 sudo[291083]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:11.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:50:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573730805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.521 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.590 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.591 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000044 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.727 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.728 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4327MB free_disk=20.94293212890625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.728 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.729 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.816 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance d184a9c5-9bfd-4b91-b909-67ea7cf5c982 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.817 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.817 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:50:11 compute-0 nova_compute[247704]: 2026-01-31 07:50:11.872 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:50:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1809094251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.302 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.309 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.326 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.360 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.360 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:12 compute-0 ceph-mon[74496]: pgmap v1588: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1573730805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/889192927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1809094251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.559 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:12.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.911 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.913 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.913 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.913 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:12 compute-0 nova_compute[247704]: 2026-01-31 07:50:12.914 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:50:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:13.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:13 compute-0 nova_compute[247704]: 2026-01-31 07:50:13.421 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:14 compute-0 ceph-mon[74496]: pgmap v1589: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:14 compute-0 nova_compute[247704]: 2026-01-31 07:50:14.557 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:14.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:14 compute-0 podman[291153]: 2026-01-31 07:50:14.945373924 +0000 UTC m=+0.104835848 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 31 07:50:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:14 compute-0 sudo[291173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:14 compute-0 sudo[291173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:14 compute-0 sudo[291173]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:15 compute-0 sudo[291204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:15 compute-0 sudo[291204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:15 compute-0 sudo[291204]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:15.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:16 compute-0 ceph-mon[74496]: pgmap v1590: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:16.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:17.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:17 compute-0 nova_compute[247704]: 2026-01-31 07:50:17.605 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:18 compute-0 nova_compute[247704]: 2026-01-31 07:50:18.423 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:18 compute-0 ceph-mon[74496]: pgmap v1591: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 07:50:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3679847673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:18.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 158 KiB/s wr, 5 op/s
Jan 31 07:50:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:19.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:19 compute-0 ceph-mon[74496]: pgmap v1592: 305 pgs: 305 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 158 KiB/s wr, 5 op/s
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:50:20
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'backups', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes']
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:50:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:20.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 159 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Jan 31 07:50:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:21.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:22 compute-0 ceph-mon[74496]: pgmap v1593: 305 pgs: 305 active+clean; 159 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Jan 31 07:50:22 compute-0 nova_compute[247704]: 2026-01-31 07:50:22.608 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:22.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:50:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:23.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:23 compute-0 nova_compute[247704]: 2026-01-31 07:50:23.428 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:24 compute-0 ceph-mon[74496]: pgmap v1594: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:50:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3912991515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/147065805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:24.380 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:50:24 compute-0 nova_compute[247704]: 2026-01-31 07:50:24.380 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:24.382 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:50:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:50:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:24.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:50:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:50:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:25.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:26 compute-0 ceph-mon[74496]: pgmap v1595: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:50:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:26.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 07:50:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:27.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:27 compute-0 nova_compute[247704]: 2026-01-31 07:50:27.641 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:28 compute-0 ceph-mon[74496]: pgmap v1596: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 07:50:28 compute-0 nova_compute[247704]: 2026-01-31 07:50:28.431 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:28.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 31 07:50:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:29.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:30.385 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:30 compute-0 ceph-mon[74496]: pgmap v1597: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 31 07:50:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:30.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 31 07:50:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:31.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4063991903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:31 compute-0 podman[291238]: 2026-01-31 07:50:31.8855934 +0000 UTC m=+0.048336846 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 07:50:32 compute-0 ceph-mon[74496]: pgmap v1598: 305 pgs: 305 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 31 07:50:32 compute-0 nova_compute[247704]: 2026-01-31 07:50:32.643 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:32.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 182 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 780 KiB/s wr, 75 op/s
Jan 31 07:50:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:33.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:33 compute-0 nova_compute[247704]: 2026-01-31 07:50:33.433 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:33 compute-0 ceph-mon[74496]: pgmap v1599: 305 pgs: 305 active+clean; 182 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 780 KiB/s wr, 75 op/s
Jan 31 07:50:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3424192575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 182 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 481 KiB/s wr, 74 op/s
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0034186048878652522 of space, bias 1.0, pg target 1.0255814663595757 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:50:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 07:50:35 compute-0 sudo[291256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:35 compute-0 sudo[291256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:35 compute-0 sudo[291256]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:35 compute-0 sudo[291281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:35 compute-0 sudo[291281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:35 compute-0 sudo[291281]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:35 compute-0 ceph-mon[74496]: pgmap v1600: 305 pgs: 305 active+clean; 182 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 481 KiB/s wr, 74 op/s
Jan 31 07:50:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1386995953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:36.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 213 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 31 07:50:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:37.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:37 compute-0 nova_compute[247704]: 2026-01-31 07:50:37.645 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:38 compute-0 nova_compute[247704]: 2026-01-31 07:50:38.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:38 compute-0 ceph-mon[74496]: pgmap v1601: 305 pgs: 305 active+clean; 213 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 31 07:50:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2732920842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2809114543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 213 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 31 07:50:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:39 compute-0 ceph-mon[74496]: pgmap v1602: 305 pgs: 305 active+clean; 213 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 31 07:50:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 212 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.4 MiB/s wr, 205 op/s
Jan 31 07:50:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:41.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:42 compute-0 ceph-mon[74496]: pgmap v1603: 305 pgs: 305 active+clean; 212 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.4 MiB/s wr, 205 op/s
Jan 31 07:50:42 compute-0 nova_compute[247704]: 2026-01-31 07:50:42.647 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:42.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 226 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 167 op/s
Jan 31 07:50:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3365562488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:43.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:43 compute-0 nova_compute[247704]: 2026-01-31 07:50:43.437 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:44 compute-0 ceph-mon[74496]: pgmap v1604: 305 pgs: 305 active+clean; 226 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 167 op/s
Jan 31 07:50:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:44.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 226 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Jan 31 07:50:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/145280449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2515165356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:50:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2515165356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:50:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:45.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:45 compute-0 podman[291312]: 2026-01-31 07:50:45.910871019 +0000 UTC m=+0.078397861 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:50:46 compute-0 ceph-mon[74496]: pgmap v1605: 305 pgs: 305 active+clean; 226 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Jan 31 07:50:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:46.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.1 MiB/s wr, 219 op/s
Jan 31 07:50:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:47.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:47 compute-0 nova_compute[247704]: 2026-01-31 07:50:47.650 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:48 compute-0 ceph-mon[74496]: pgmap v1606: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.1 MiB/s wr, 219 op/s
Jan 31 07:50:48 compute-0 nova_compute[247704]: 2026-01-31 07:50:48.439 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:48 compute-0 sshd-session[291340]: Invalid user user from 45.148.10.240 port 37180
Jan 31 07:50:48 compute-0 sshd-session[291340]: Connection closed by invalid user user 45.148.10.240 port 37180 [preauth]
Jan 31 07:50:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:48.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 234 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Jan 31 07:50:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:49.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.440 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.441 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.441 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.441 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.441 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.442 247708 INFO nova.compute.manager [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Terminating instance
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.443 247708 DEBUG nova.compute.manager [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:50:49 compute-0 kernel: tap13e76a94-6f (unregistering): left promiscuous mode
Jan 31 07:50:49 compute-0 NetworkManager[49108]: <info>  [1769845849.5052] device (tap13e76a94-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:50:49 compute-0 ovn_controller[149457]: 2026-01-31T07:50:49Z|00219|binding|INFO|Releasing lport 13e76a94-6f50-418a-b506-89319cf36f33 from this chassis (sb_readonly=0)
Jan 31 07:50:49 compute-0 ovn_controller[149457]: 2026-01-31T07:50:49Z|00220|binding|INFO|Setting lport 13e76a94-6f50-418a-b506-89319cf36f33 down in Southbound
Jan 31 07:50:49 compute-0 ovn_controller[149457]: 2026-01-31T07:50:49Z|00221|binding|INFO|Removing iface tap13e76a94-6f ovn-installed in OVS
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.511 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.513 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.518 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:f2:2e 10.100.0.9'], port_security=['fa:16:3e:62:f2:2e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd184a9c5-9bfd-4b91-b909-67ea7cf5c982', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a52e7591d7f4e068e3f9fa0e4e288d5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1c8d5fd1-9333-4ac8-877d-bfde32b50e92 b6553c08-37f5-4234-ad4d-6b27ea1af1b2 d6aa0181-fdb2-4d3e-a8a9-d361e28d0bfd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2b3302f-04ed-4085-954c-3fd369ef549b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=13e76a94-6f50-418a-b506-89319cf36f33) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.519 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 13e76a94-6f50-418a-b506-89319cf36f33 in datapath abea69d7-4daf-4b3f-9f7f-2b06f416400d unbound from our chassis
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.521 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abea69d7-4daf-4b3f-9f7f-2b06f416400d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.522 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.523 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2b11abaa-c05b-47b7-b699-329d8712f6f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.524 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d namespace which is not needed anymore
Jan 31 07:50:49 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000044.scope: Deactivated successfully.
Jan 31 07:50:49 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000044.scope: Consumed 15.137s CPU time.
Jan 31 07:50:49 compute-0 systemd-machined[214448]: Machine qemu-29-instance-00000044 terminated.
Jan 31 07:50:49 compute-0 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[290147]: [NOTICE]   (290151) : haproxy version is 2.8.14-c23fe91
Jan 31 07:50:49 compute-0 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[290147]: [NOTICE]   (290151) : path to executable is /usr/sbin/haproxy
Jan 31 07:50:49 compute-0 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[290147]: [WARNING]  (290151) : Exiting Master process...
Jan 31 07:50:49 compute-0 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[290147]: [ALERT]    (290151) : Current worker (290153) exited with code 143 (Terminated)
Jan 31 07:50:49 compute-0 neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d[290147]: [WARNING]  (290151) : All workers exited. Exiting... (0)
Jan 31 07:50:49 compute-0 systemd[1]: libpod-b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9.scope: Deactivated successfully.
Jan 31 07:50:49 compute-0 conmon[290147]: conmon b4a0f0979f85ec6eaf6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9.scope/container/memory.events
Jan 31 07:50:49 compute-0 podman[291367]: 2026-01-31 07:50:49.638648233 +0000 UTC m=+0.043157338 container died b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.667 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9-userdata-shm.mount: Deactivated successfully.
Jan 31 07:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-477f97f9d59f1c8d39a5f17c90051f5c95374ce6e667dd4ff98b5297fe19ceea-merged.mount: Deactivated successfully.
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.675 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 podman[291367]: 2026-01-31 07:50:49.685900801 +0000 UTC m=+0.090409906 container cleanup b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.684 247708 INFO nova.virt.libvirt.driver [-] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Instance destroyed successfully.
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.685 247708 DEBUG nova.objects.instance [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lazy-loading 'resources' on Instance uuid d184a9c5-9bfd-4b91-b909-67ea7cf5c982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:50:49 compute-0 systemd[1]: libpod-conmon-b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9.scope: Deactivated successfully.
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.705 247708 DEBUG nova.virt.libvirt.vif [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:49:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-816998154',display_name='tempest-SecurityGroupsTestJSON-server-816998154',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-816998154',id=68,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:49:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a52e7591d7f4e068e3f9fa0e4e288d5',ramdisk_id='',reservation_id='r-nwcm1kp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-31421501',owner_user_name='tempest-SecurityGroupsTestJSON-31421501-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:49:51Z,user_data=None,user_id='07aa1b5aaea444449f8ef00dfe56e8eb',uuid=d184a9c5-9bfd-4b91-b909-67ea7cf5c982,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.706 247708 DEBUG nova.network.os_vif_util [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converting VIF {"id": "13e76a94-6f50-418a-b506-89319cf36f33", "address": "fa:16:3e:62:f2:2e", "network": {"id": "abea69d7-4daf-4b3f-9f7f-2b06f416400d", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-533348328-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a52e7591d7f4e068e3f9fa0e4e288d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13e76a94-6f", "ovs_interfaceid": "13e76a94-6f50-418a-b506-89319cf36f33", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.707 247708 DEBUG nova.network.os_vif_util [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:62:f2:2e,bridge_name='br-int',has_traffic_filtering=True,id=13e76a94-6f50-418a-b506-89319cf36f33,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e76a94-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.707 247708 DEBUG os_vif [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:f2:2e,bridge_name='br-int',has_traffic_filtering=True,id=13e76a94-6f50-418a-b506-89319cf36f33,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e76a94-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.709 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.709 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13e76a94-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.711 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.712 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.715 247708 INFO os_vif [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:62:f2:2e,bridge_name='br-int',has_traffic_filtering=True,id=13e76a94-6f50-418a-b506-89319cf36f33,network=Network(abea69d7-4daf-4b3f-9f7f-2b06f416400d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13e76a94-6f')
Jan 31 07:50:49 compute-0 podman[291402]: 2026-01-31 07:50:49.758201392 +0000 UTC m=+0.045847244 container remove b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.764 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[96f22ff5-e581-4490-bc99-bb110f985f09]: (4, ('Sat Jan 31 07:50:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d (b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9)\nb4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9\nSat Jan 31 07:50:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d (b4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9)\nb4a0f0979f85ec6eaf6d5760b0c93c7cbd000473a1de20470ec6baa95adee4a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.766 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1a15efa9-0ee5-4fab-b5be-c6c3d2b5191d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.767 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabea69d7-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.810 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 kernel: tapabea69d7-40: left promiscuous mode
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.817 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.821 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d576088b-8c3d-4822-9fba-59ba322d1354]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.835 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a8c306-1032-408f-8f7d-e34d14a32036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.837 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[04f8da45-8d10-4421-abe2-c6f5359cb1e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.844 247708 DEBUG nova.compute.manager [req-3c048658-0dbf-4788-97a0-8dbadc7394f4 req-a6510ce2-ae1a-43bb-bbc7-8b1af72c0057 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-vif-unplugged-13e76a94-6f50-418a-b506-89319cf36f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.845 247708 DEBUG oslo_concurrency.lockutils [req-3c048658-0dbf-4788-97a0-8dbadc7394f4 req-a6510ce2-ae1a-43bb-bbc7-8b1af72c0057 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.845 247708 DEBUG oslo_concurrency.lockutils [req-3c048658-0dbf-4788-97a0-8dbadc7394f4 req-a6510ce2-ae1a-43bb-bbc7-8b1af72c0057 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.845 247708 DEBUG oslo_concurrency.lockutils [req-3c048658-0dbf-4788-97a0-8dbadc7394f4 req-a6510ce2-ae1a-43bb-bbc7-8b1af72c0057 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.845 247708 DEBUG nova.compute.manager [req-3c048658-0dbf-4788-97a0-8dbadc7394f4 req-a6510ce2-ae1a-43bb-bbc7-8b1af72c0057 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] No waiting events found dispatching network-vif-unplugged-13e76a94-6f50-418a-b506-89319cf36f33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:49 compute-0 nova_compute[247704]: 2026-01-31 07:50:49.846 247708 DEBUG nova.compute.manager [req-3c048658-0dbf-4788-97a0-8dbadc7394f4 req-a6510ce2-ae1a-43bb-bbc7-8b1af72c0057 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-vif-unplugged-13e76a94-6f50-418a-b506-89319cf36f33 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.859 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[99726e60-9ec4-43e9-b02f-2551f199169b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603355, 'reachable_time': 19049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291435, 'error': None, 'target': 'ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.864 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abea69d7-4daf-4b3f-9f7f-2b06f416400d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:50:49 compute-0 systemd[1]: run-netns-ovnmeta\x2dabea69d7\x2d4daf\x2d4b3f\x2d9f7f\x2d2b06f416400d.mount: Deactivated successfully.
Jan 31 07:50:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:50:49.864 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[4d202a71-2c79-4b63-b08c-5aa30342d564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:50:50 compute-0 nova_compute[247704]: 2026-01-31 07:50:50.205 247708 INFO nova.virt.libvirt.driver [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Deleting instance files /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982_del
Jan 31 07:50:50 compute-0 nova_compute[247704]: 2026-01-31 07:50:50.206 247708 INFO nova.virt.libvirt.driver [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Deletion of /var/lib/nova/instances/d184a9c5-9bfd-4b91-b909-67ea7cf5c982_del complete
Jan 31 07:50:50 compute-0 ceph-mon[74496]: pgmap v1607: 305 pgs: 305 active+clean; 234 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Jan 31 07:50:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2419056380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4286215794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:50:50 compute-0 nova_compute[247704]: 2026-01-31 07:50:50.619 247708 INFO nova.compute.manager [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Took 1.18 seconds to destroy the instance on the hypervisor.
Jan 31 07:50:50 compute-0 nova_compute[247704]: 2026-01-31 07:50:50.620 247708 DEBUG oslo.service.loopingcall [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:50:50 compute-0 nova_compute[247704]: 2026-01-31 07:50:50.620 247708 DEBUG nova.compute.manager [-] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:50:50 compute-0 nova_compute[247704]: 2026-01-31 07:50:50.620 247708 DEBUG nova.network.neutron [-] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:50:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:50.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 187 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Jan 31 07:50:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:51.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:51 compute-0 nova_compute[247704]: 2026-01-31 07:50:51.934 247708 DEBUG nova.compute.manager [req-22dd9ac1-bfea-4844-af0c-a4e2f2df4190 req-67ef65c8-cd45-4b4b-889d-4f986d7f7729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:51 compute-0 nova_compute[247704]: 2026-01-31 07:50:51.935 247708 DEBUG oslo_concurrency.lockutils [req-22dd9ac1-bfea-4844-af0c-a4e2f2df4190 req-67ef65c8-cd45-4b4b-889d-4f986d7f7729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:51 compute-0 nova_compute[247704]: 2026-01-31 07:50:51.935 247708 DEBUG oslo_concurrency.lockutils [req-22dd9ac1-bfea-4844-af0c-a4e2f2df4190 req-67ef65c8-cd45-4b4b-889d-4f986d7f7729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:51 compute-0 nova_compute[247704]: 2026-01-31 07:50:51.935 247708 DEBUG oslo_concurrency.lockutils [req-22dd9ac1-bfea-4844-af0c-a4e2f2df4190 req-67ef65c8-cd45-4b4b-889d-4f986d7f7729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:51 compute-0 nova_compute[247704]: 2026-01-31 07:50:51.936 247708 DEBUG nova.compute.manager [req-22dd9ac1-bfea-4844-af0c-a4e2f2df4190 req-67ef65c8-cd45-4b4b-889d-4f986d7f7729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] No waiting events found dispatching network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:50:51 compute-0 nova_compute[247704]: 2026-01-31 07:50:51.936 247708 WARNING nova.compute.manager [req-22dd9ac1-bfea-4844-af0c-a4e2f2df4190 req-67ef65c8-cd45-4b4b-889d-4f986d7f7729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received unexpected event network-vif-plugged-13e76a94-6f50-418a-b506-89319cf36f33 for instance with vm_state active and task_state deleting.
Jan 31 07:50:52 compute-0 ceph-mon[74496]: pgmap v1608: 305 pgs: 305 active+clean; 187 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 249 op/s
Jan 31 07:50:52 compute-0 nova_compute[247704]: 2026-01-31 07:50:52.653 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:52.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:52 compute-0 nova_compute[247704]: 2026-01-31 07:50:52.867 247708 DEBUG nova.network.neutron [-] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:50:52 compute-0 nova_compute[247704]: 2026-01-31 07:50:52.900 247708 INFO nova.compute.manager [-] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Took 2.28 seconds to deallocate network for instance.
Jan 31 07:50:52 compute-0 nova_compute[247704]: 2026-01-31 07:50:52.939 247708 DEBUG nova.compute.manager [req-e73af52a-2b23-43c4-907a-d866f708b4ce req-794d1d68-3ead-4901-86dd-3da76941d7f6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Received event network-vif-deleted-13e76a94-6f50-418a-b506-89319cf36f33 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:50:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 157 op/s
Jan 31 07:50:52 compute-0 nova_compute[247704]: 2026-01-31 07:50:52.980 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:50:52 compute-0 nova_compute[247704]: 2026-01-31 07:50:52.981 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:50:53 compute-0 nova_compute[247704]: 2026-01-31 07:50:53.035 247708 DEBUG oslo_concurrency.processutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:50:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:53.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:50:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4106909107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:53 compute-0 nova_compute[247704]: 2026-01-31 07:50:53.505 247708 DEBUG oslo_concurrency.processutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:50:53 compute-0 nova_compute[247704]: 2026-01-31 07:50:53.512 247708 DEBUG nova.compute.provider_tree [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:50:53 compute-0 nova_compute[247704]: 2026-01-31 07:50:53.617 247708 DEBUG nova.scheduler.client.report [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:50:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4106909107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:50:53 compute-0 nova_compute[247704]: 2026-01-31 07:50:53.756 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:53 compute-0 nova_compute[247704]: 2026-01-31 07:50:53.792 247708 INFO nova.scheduler.client.report [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Deleted allocations for instance d184a9c5-9bfd-4b91-b909-67ea7cf5c982
Jan 31 07:50:54 compute-0 nova_compute[247704]: 2026-01-31 07:50:54.082 247708 DEBUG oslo_concurrency.lockutils [None req-7ccaaab2-2453-4f17-ab35-414773076c2f 07aa1b5aaea444449f8ef00dfe56e8eb 2a52e7591d7f4e068e3f9fa0e4e288d5 - - default default] Lock "d184a9c5-9bfd-4b91-b909-67ea7cf5c982" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:50:54 compute-0 nova_compute[247704]: 2026-01-31 07:50:54.713 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 139 op/s
Jan 31 07:50:55 compute-0 sudo[291461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:55 compute-0 sudo[291461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:55 compute-0 sudo[291461]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:55 compute-0 sudo[291486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:50:55 compute-0 sudo[291486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:50:55 compute-0 sudo[291486]: pam_unix(sudo:session): session closed for user root
Jan 31 07:50:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:55.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:50:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:56.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 184 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 31 07:50:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:50:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:57.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:50:57 compute-0 nova_compute[247704]: 2026-01-31 07:50:57.655 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:50:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:58.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 919 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Jan 31 07:50:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:50:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:50:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:59.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:50:59 compute-0 nova_compute[247704]: 2026-01-31 07:50:59.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:50:59 compute-0 nova_compute[247704]: 2026-01-31 07:50:59.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:50:59 compute-0 nova_compute[247704]: 2026-01-31 07:50:59.607 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:50:59 compute-0 nova_compute[247704]: 2026-01-31 07:50:59.718 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:00.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 07:51:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:01.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:01 compute-0 nova_compute[247704]: 2026-01-31 07:51:01.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:02 compute-0 nova_compute[247704]: 2026-01-31 07:51:02.703 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:02.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:02 compute-0 podman[291516]: 2026-01-31 07:51:02.880036381 +0000 UTC m=+0.054576767 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 07:51:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 916 KiB/s wr, 29 op/s
Jan 31 07:51:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:03.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).paxos(paxos updating c 3013..3655) accept timeout, calling fresh election
Jan 31 07:51:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:51:03 compute-0 ceph-mon[74496]: paxos.0).electionLogic(32) init, last seen epoch 32
Jan 31 07:51:03 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:51:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:51:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:51:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:51:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:51:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:51:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 45m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:51:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:51:04 compute-0 nova_compute[247704]: 2026-01-31 07:51:04.683 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845849.6816142, d184a9c5-9bfd-4b91-b909-67ea7cf5c982 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:51:04 compute-0 nova_compute[247704]: 2026-01-31 07:51:04.684 247708 INFO nova.compute.manager [-] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] VM Stopped (Lifecycle Event)
Jan 31 07:51:04 compute-0 nova_compute[247704]: 2026-01-31 07:51:04.722 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:04.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 194 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 1001 KiB/s wr, 38 op/s
Jan 31 07:51:05 compute-0 ceph-mon[74496]: pgmap v1610: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 139 op/s
Jan 31 07:51:05 compute-0 ceph-mon[74496]: pgmap v1611: 305 pgs: 305 active+clean; 184 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 31 07:51:05 compute-0 ceph-mon[74496]: pgmap v1612: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 919 KiB/s rd, 2.7 MiB/s wr, 105 op/s
Jan 31 07:51:05 compute-0 ceph-mon[74496]: pgmap v1613: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 07:51:05 compute-0 ceph-mon[74496]: pgmap v1614: 305 pgs: 305 active+clean; 193 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 916 KiB/s wr, 29 op/s
Jan 31 07:51:05 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:51:05 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 07:51:05 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:51:05 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:51:05 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:51:05 compute-0 ceph-mon[74496]: osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:51:05 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 45m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:51:05 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:51:05 compute-0 nova_compute[247704]: 2026-01-31 07:51:05.298 247708 DEBUG nova.compute.manager [None req-901a173d-bd0a-4f08-bec2-8e4553a869fe - - - - - -] [instance: d184a9c5-9bfd-4b91-b909-67ea7cf5c982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:51:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:05.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:05 compute-0 nova_compute[247704]: 2026-01-31 07:51:05.853 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:06 compute-0 ceph-mon[74496]: pgmap v1615: 305 pgs: 305 active+clean; 194 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 1001 KiB/s wr, 38 op/s
Jan 31 07:51:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:06.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 194 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 1002 KiB/s wr, 49 op/s
Jan 31 07:51:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:07.401 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:51:07 compute-0 nova_compute[247704]: 2026-01-31 07:51:07.402 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:07.403 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:51:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:07.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:07 compute-0 nova_compute[247704]: 2026-01-31 07:51:07.706 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:08 compute-0 ceph-mon[74496]: pgmap v1616: 305 pgs: 305 active+clean; 194 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 1002 KiB/s wr, 49 op/s
Jan 31 07:51:08 compute-0 nova_compute[247704]: 2026-01-31 07:51:08.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:08 compute-0 nova_compute[247704]: 2026-01-31 07:51:08.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:51:08 compute-0 nova_compute[247704]: 2026-01-31 07:51:08.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:51:08 compute-0 nova_compute[247704]: 2026-01-31 07:51:08.683 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:51:08 compute-0 nova_compute[247704]: 2026-01-31 07:51:08.684 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:08.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 211 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Jan 31 07:51:09 compute-0 nova_compute[247704]: 2026-01-31 07:51:09.309 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:09.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:09 compute-0 nova_compute[247704]: 2026-01-31 07:51:09.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:09 compute-0 nova_compute[247704]: 2026-01-31 07:51:09.706 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:09 compute-0 nova_compute[247704]: 2026-01-31 07:51:09.706 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:09 compute-0 nova_compute[247704]: 2026-01-31 07:51:09.707 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:09 compute-0 nova_compute[247704]: 2026-01-31 07:51:09.707 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:51:09 compute-0 nova_compute[247704]: 2026-01-31 07:51:09.707 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:09 compute-0 nova_compute[247704]: 2026-01-31 07:51:09.733 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:51:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836526511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:10 compute-0 nova_compute[247704]: 2026-01-31 07:51:10.155 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:10 compute-0 nova_compute[247704]: 2026-01-31 07:51:10.346 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:51:10 compute-0 nova_compute[247704]: 2026-01-31 07:51:10.347 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4611MB free_disk=20.921955108642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:51:10 compute-0 nova_compute[247704]: 2026-01-31 07:51:10.348 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:10 compute-0 nova_compute[247704]: 2026-01-31 07:51:10.348 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:10 compute-0 ceph-mon[74496]: pgmap v1617: 305 pgs: 305 active+clean; 211 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Jan 31 07:51:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3836526511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:10.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:10 compute-0 nova_compute[247704]: 2026-01-31 07:51:10.849 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:51:10 compute-0 nova_compute[247704]: 2026-01-31 07:51:10.849 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:51:10 compute-0 nova_compute[247704]: 2026-01-31 07:51:10.935 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:51:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.015 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.015 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.035 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.056 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.081 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:11.160 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:11.161 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:11.161 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:11.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:11 compute-0 sudo[291586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:11 compute-0 sudo[291586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:11 compute-0 sudo[291586]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:51:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2883700870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.562 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:11 compute-0 sudo[291611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:51:11 compute-0 sudo[291611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.568 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:51:11 compute-0 sudo[291611]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:11 compute-0 sudo[291638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:11 compute-0 sudo[291638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:11 compute-0 sudo[291638]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.631 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:51:11 compute-0 sudo[291663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:51:11 compute-0 sudo[291663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.763 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:51:11 compute-0 nova_compute[247704]: 2026-01-31 07:51:11.764 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:11 compute-0 ceph-mon[74496]: pgmap v1618: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2883700870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:12 compute-0 sudo[291663]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:51:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:51:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:51:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:51:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:51:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bfa31f3a-6b14-4b75-9ab2-8238d0b66e21 does not exist
Jan 31 07:51:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a4dce70c-06fa-46c9-925b-2448bb046c28 does not exist
Jan 31 07:51:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e31087d9-5a7d-4862-8485-ad3e8a57450a does not exist
Jan 31 07:51:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:51:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:51:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:51:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:51:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:51:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:51:12 compute-0 sudo[291720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:12 compute-0 sudo[291720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:12 compute-0 sudo[291720]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:12 compute-0 sudo[291745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:51:12 compute-0 sudo[291745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:12 compute-0 sudo[291745]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:12 compute-0 sudo[291770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:12 compute-0 sudo[291770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:12 compute-0 sudo[291770]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:12 compute-0 sudo[291795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:51:12 compute-0 sudo[291795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:12 compute-0 nova_compute[247704]: 2026-01-31 07:51:12.709 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:12 compute-0 nova_compute[247704]: 2026-01-31 07:51:12.758 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:12 compute-0 nova_compute[247704]: 2026-01-31 07:51:12.759 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:12 compute-0 nova_compute[247704]: 2026-01-31 07:51:12.759 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:12 compute-0 nova_compute[247704]: 2026-01-31 07:51:12.759 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:51:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:12.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:12 compute-0 podman[291862]: 2026-01-31 07:51:12.965897751 +0000 UTC m=+0.041419345 container create bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:51:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:13 compute-0 systemd[1]: Started libpod-conmon-bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93.scope.
Jan 31 07:51:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:51:13 compute-0 podman[291862]: 2026-01-31 07:51:12.944500558 +0000 UTC m=+0.020022182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:51:13 compute-0 podman[291862]: 2026-01-31 07:51:13.053130178 +0000 UTC m=+0.128651792 container init bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:51:13 compute-0 podman[291862]: 2026-01-31 07:51:13.062710122 +0000 UTC m=+0.138231716 container start bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:51:13 compute-0 podman[291862]: 2026-01-31 07:51:13.066543727 +0000 UTC m=+0.142065341 container attach bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:51:13 compute-0 naughty_tu[291878]: 167 167
Jan 31 07:51:13 compute-0 systemd[1]: libpod-bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93.scope: Deactivated successfully.
Jan 31 07:51:13 compute-0 conmon[291878]: conmon bccb89208143dd3706eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93.scope/container/memory.events
Jan 31 07:51:13 compute-0 podman[291862]: 2026-01-31 07:51:13.071500637 +0000 UTC m=+0.147022241 container died bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d64c92d096e76979c573f650500c652feac3c84b753c64a7bf575448a4471fa-merged.mount: Deactivated successfully.
Jan 31 07:51:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:51:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:51:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:51:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:51:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:51:13 compute-0 podman[291862]: 2026-01-31 07:51:13.125859979 +0000 UTC m=+0.201381613 container remove bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:51:13 compute-0 systemd[1]: libpod-conmon-bccb89208143dd3706eb495d5a99035d77d485846b3f41fab016978c64000d93.scope: Deactivated successfully.
Jan 31 07:51:13 compute-0 podman[291900]: 2026-01-31 07:51:13.298787695 +0000 UTC m=+0.052361434 container create e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_villani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:51:13 compute-0 systemd[1]: Started libpod-conmon-e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51.scope.
Jan 31 07:51:13 compute-0 podman[291900]: 2026-01-31 07:51:13.276830677 +0000 UTC m=+0.030404466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:51:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5d0347e6ffd3bac1a0994874fd9d182cfa9349df4d5dc736070c6bfc09081a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5d0347e6ffd3bac1a0994874fd9d182cfa9349df4d5dc736070c6bfc09081a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5d0347e6ffd3bac1a0994874fd9d182cfa9349df4d5dc736070c6bfc09081a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5d0347e6ffd3bac1a0994874fd9d182cfa9349df4d5dc736070c6bfc09081a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de5d0347e6ffd3bac1a0994874fd9d182cfa9349df4d5dc736070c6bfc09081a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:13 compute-0 podman[291900]: 2026-01-31 07:51:13.400815854 +0000 UTC m=+0.154389623 container init e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:51:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:13.405 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:51:13 compute-0 podman[291900]: 2026-01-31 07:51:13.408315927 +0000 UTC m=+0.161889676 container start e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:51:13 compute-0 podman[291900]: 2026-01-31 07:51:13.416270873 +0000 UTC m=+0.169844642 container attach e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_villani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:51:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:13.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:13 compute-0 nova_compute[247704]: 2026-01-31 07:51:13.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:13 compute-0 nova_compute[247704]: 2026-01-31 07:51:13.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:13 compute-0 nova_compute[247704]: 2026-01-31 07:51:13.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:51:13 compute-0 nova_compute[247704]: 2026-01-31 07:51:13.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:51:14 compute-0 ceph-mon[74496]: pgmap v1619: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3780460574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:14 compute-0 quizzical_villani[291918]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:51:14 compute-0 quizzical_villani[291918]: --> relative data size: 1.0
Jan 31 07:51:14 compute-0 quizzical_villani[291918]: --> All data devices are unavailable
Jan 31 07:51:14 compute-0 systemd[1]: libpod-e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51.scope: Deactivated successfully.
Jan 31 07:51:14 compute-0 podman[291900]: 2026-01-31 07:51:14.282869648 +0000 UTC m=+1.036443427 container died e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_villani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 31 07:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-de5d0347e6ffd3bac1a0994874fd9d182cfa9349df4d5dc736070c6bfc09081a-merged.mount: Deactivated successfully.
Jan 31 07:51:14 compute-0 podman[291900]: 2026-01-31 07:51:14.347779918 +0000 UTC m=+1.101353687 container remove e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_villani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:51:14 compute-0 systemd[1]: libpod-conmon-e22aed14e8a124b24a8ebd4623d3f317cea7227ac52fa4f39f74fe2f6b988c51.scope: Deactivated successfully.
Jan 31 07:51:14 compute-0 sudo[291795]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:14 compute-0 sudo[291947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:14 compute-0 sudo[291947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:14 compute-0 sudo[291947]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:14 compute-0 sudo[291972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:51:14 compute-0 sudo[291972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:14 compute-0 sudo[291972]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:14 compute-0 sudo[291997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:14 compute-0 sudo[291997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:14 compute-0 sudo[291997]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:14 compute-0 sudo[292022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:51:14 compute-0 sudo[292022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:14 compute-0 nova_compute[247704]: 2026-01-31 07:51:14.737 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:14.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:14 compute-0 podman[292087]: 2026-01-31 07:51:14.948417829 +0000 UTC m=+0.091233526 container create eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hopper, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:51:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:14 compute-0 podman[292087]: 2026-01-31 07:51:14.893037702 +0000 UTC m=+0.035853419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:51:14 compute-0 systemd[1]: Started libpod-conmon-eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d.scope.
Jan 31 07:51:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:51:15 compute-0 podman[292087]: 2026-01-31 07:51:15.050313225 +0000 UTC m=+0.193128952 container init eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:51:15 compute-0 podman[292087]: 2026-01-31 07:51:15.061765705 +0000 UTC m=+0.204581402 container start eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:51:15 compute-0 happy_hopper[292103]: 167 167
Jan 31 07:51:15 compute-0 systemd[1]: libpod-eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d.scope: Deactivated successfully.
Jan 31 07:51:15 compute-0 podman[292087]: 2026-01-31 07:51:15.080716609 +0000 UTC m=+0.223532336 container attach eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hopper, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:51:15 compute-0 podman[292087]: 2026-01-31 07:51:15.081663612 +0000 UTC m=+0.224479309 container died eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 07:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b89d442a047ccbaed9c905a41fe89e28225f414a1bda2d07c4c6455c751b13a-merged.mount: Deactivated successfully.
Jan 31 07:51:15 compute-0 podman[292087]: 2026-01-31 07:51:15.146012058 +0000 UTC m=+0.288827755 container remove eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hopper, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 07:51:15 compute-0 systemd[1]: libpod-conmon-eab6f35ae11d4e5243a57640878a6ddc1d990a666add1e155fc9c37f438b2a7d.scope: Deactivated successfully.
Jan 31 07:51:15 compute-0 sudo[292144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:15 compute-0 sudo[292144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:15 compute-0 podman[292130]: 2026-01-31 07:51:15.272375513 +0000 UTC m=+0.025351782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:51:15 compute-0 sudo[292144]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:15 compute-0 podman[292130]: 2026-01-31 07:51:15.376249887 +0000 UTC m=+0.129226136 container create 89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:51:15 compute-0 sudo[292170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:15 compute-0 sudo[292170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:15 compute-0 sudo[292170]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:15.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:15 compute-0 systemd[1]: Started libpod-conmon-89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89.scope.
Jan 31 07:51:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269d2eb8954ed318d507e471eb70e4234b0fbb20cb90489befbb1090883276cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269d2eb8954ed318d507e471eb70e4234b0fbb20cb90489befbb1090883276cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269d2eb8954ed318d507e471eb70e4234b0fbb20cb90489befbb1090883276cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/269d2eb8954ed318d507e471eb70e4234b0fbb20cb90489befbb1090883276cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:16 compute-0 podman[292130]: 2026-01-31 07:51:16.478476734 +0000 UTC m=+1.231453033 container init 89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 07:51:16 compute-0 podman[292130]: 2026-01-31 07:51:16.485382783 +0000 UTC m=+1.238359032 container start 89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:51:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:16 compute-0 ceph-mon[74496]: pgmap v1620: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 07:51:16 compute-0 podman[292130]: 2026-01-31 07:51:16.772978827 +0000 UTC m=+1.525955106 container attach 89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:51:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:16.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:16 compute-0 podman[292202]: 2026-01-31 07:51:16.922005676 +0000 UTC m=+0.093162242 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 07:51:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 31 07:51:17 compute-0 agitated_galois[292197]: {
Jan 31 07:51:17 compute-0 agitated_galois[292197]:     "0": [
Jan 31 07:51:17 compute-0 agitated_galois[292197]:         {
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "devices": [
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "/dev/loop3"
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             ],
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "lv_name": "ceph_lv0",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "lv_size": "7511998464",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "name": "ceph_lv0",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "tags": {
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.cluster_name": "ceph",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.crush_device_class": "",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.encrypted": "0",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.osd_id": "0",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.type": "block",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:                 "ceph.vdo": "0"
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             },
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "type": "block",
Jan 31 07:51:17 compute-0 agitated_galois[292197]:             "vg_name": "ceph_vg0"
Jan 31 07:51:17 compute-0 agitated_galois[292197]:         }
Jan 31 07:51:17 compute-0 agitated_galois[292197]:     ]
Jan 31 07:51:17 compute-0 agitated_galois[292197]: }
Jan 31 07:51:17 compute-0 systemd[1]: libpod-89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89.scope: Deactivated successfully.
Jan 31 07:51:17 compute-0 podman[292235]: 2026-01-31 07:51:17.404269379 +0000 UTC m=+0.030552930 container died 89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:51:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:17.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-269d2eb8954ed318d507e471eb70e4234b0fbb20cb90489befbb1090883276cf-merged.mount: Deactivated successfully.
Jan 31 07:51:17 compute-0 podman[292235]: 2026-01-31 07:51:17.6843899 +0000 UTC m=+0.310673431 container remove 89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:51:17 compute-0 systemd[1]: libpod-conmon-89f12738eb4ba7a8b28e19b3bf7769f63bc6e80f882e2895977d9c1979e42b89.scope: Deactivated successfully.
Jan 31 07:51:17 compute-0 nova_compute[247704]: 2026-01-31 07:51:17.712 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2099143682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:17 compute-0 ceph-mon[74496]: pgmap v1621: 305 pgs: 305 active+clean; 213 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 104 op/s
Jan 31 07:51:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2162422567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:17 compute-0 sudo[292022]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:17 compute-0 sudo[292250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:17 compute-0 sudo[292250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:17 compute-0 sudo[292250]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:17 compute-0 sudo[292275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:51:17 compute-0 sudo[292275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:17 compute-0 sudo[292275]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:17 compute-0 sudo[292300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:17 compute-0 sudo[292300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:17 compute-0 sudo[292300]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:17 compute-0 sudo[292325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:51:17 compute-0 sudo[292325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:18 compute-0 podman[292390]: 2026-01-31 07:51:18.416966243 +0000 UTC m=+0.115924971 container create 28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:51:18 compute-0 podman[292390]: 2026-01-31 07:51:18.324308003 +0000 UTC m=+0.023266751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:51:18 compute-0 systemd[1]: Started libpod-conmon-28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f.scope.
Jan 31 07:51:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:51:18 compute-0 podman[292390]: 2026-01-31 07:51:18.632492312 +0000 UTC m=+0.331451060 container init 28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:51:18 compute-0 podman[292390]: 2026-01-31 07:51:18.642067396 +0000 UTC m=+0.341026164 container start 28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:51:18 compute-0 infallible_davinci[292407]: 167 167
Jan 31 07:51:18 compute-0 systemd[1]: libpod-28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f.scope: Deactivated successfully.
Jan 31 07:51:18 compute-0 podman[292390]: 2026-01-31 07:51:18.69409512 +0000 UTC m=+0.393053868 container attach 28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:51:18 compute-0 podman[292390]: 2026-01-31 07:51:18.695349381 +0000 UTC m=+0.394308209 container died 28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:51:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:18.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f7c737e7476e82ebfe1dafe7f43c67776daaddc37ec9fd65664620e3144a7e7-merged.mount: Deactivated successfully.
Jan 31 07:51:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 220 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 114 op/s
Jan 31 07:51:18 compute-0 podman[292390]: 2026-01-31 07:51:18.990333896 +0000 UTC m=+0.689292664 container remove 28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_davinci, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:51:19 compute-0 systemd[1]: libpod-conmon-28cef7fd37d0a1ac83e74b9f6c50bee69f3c18cb04e51c19373aceb101e5150f.scope: Deactivated successfully.
Jan 31 07:51:19 compute-0 podman[292433]: 2026-01-31 07:51:19.202787469 +0000 UTC m=+0.079987110 container create 15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 07:51:19 compute-0 podman[292433]: 2026-01-31 07:51:19.15297675 +0000 UTC m=+0.030176401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:51:19 compute-0 systemd[1]: Started libpod-conmon-15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4.scope.
Jan 31 07:51:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92816027fc7a0d2fa7fd15fd8690e61cc14272994c355cfd356a37745892b30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92816027fc7a0d2fa7fd15fd8690e61cc14272994c355cfd356a37745892b30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92816027fc7a0d2fa7fd15fd8690e61cc14272994c355cfd356a37745892b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92816027fc7a0d2fa7fd15fd8690e61cc14272994c355cfd356a37745892b30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:51:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:19.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:19 compute-0 podman[292433]: 2026-01-31 07:51:19.531532291 +0000 UTC m=+0.408731942 container init 15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:51:19 compute-0 podman[292433]: 2026-01-31 07:51:19.537370894 +0000 UTC m=+0.414570525 container start 15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:51:19 compute-0 podman[292433]: 2026-01-31 07:51:19.565264847 +0000 UTC m=+0.442464488 container attach 15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 07:51:19 compute-0 nova_compute[247704]: 2026-01-31 07:51:19.743 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:51:20
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups']
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:51:20 compute-0 youthful_bell[292451]: {
Jan 31 07:51:20 compute-0 youthful_bell[292451]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:51:20 compute-0 youthful_bell[292451]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:51:20 compute-0 youthful_bell[292451]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:51:20 compute-0 youthful_bell[292451]:         "osd_id": 0,
Jan 31 07:51:20 compute-0 youthful_bell[292451]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:51:20 compute-0 youthful_bell[292451]:         "type": "bluestore"
Jan 31 07:51:20 compute-0 youthful_bell[292451]:     }
Jan 31 07:51:20 compute-0 youthful_bell[292451]: }
Jan 31 07:51:20 compute-0 systemd[1]: libpod-15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4.scope: Deactivated successfully.
Jan 31 07:51:20 compute-0 podman[292433]: 2026-01-31 07:51:20.398510686 +0000 UTC m=+1.275710317 container died 15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:51:20 compute-0 ceph-mon[74496]: pgmap v1622: 305 pgs: 305 active+clean; 220 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 114 op/s
Jan 31 07:51:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:20.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 239 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Jan 31 07:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d92816027fc7a0d2fa7fd15fd8690e61cc14272994c355cfd356a37745892b30-merged.mount: Deactivated successfully.
Jan 31 07:51:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:51:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:21.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:51:21 compute-0 podman[292433]: 2026-01-31 07:51:21.477953254 +0000 UTC m=+2.355152895 container remove 15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:51:21 compute-0 sudo[292325]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:51:21 compute-0 systemd[1]: libpod-conmon-15b8af371fc56aad544c8b13b23e8727599c756d1a8834176b466399c5e5c2e4.scope: Deactivated successfully.
Jan 31 07:51:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:51:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3621442984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:21 compute-0 ceph-mon[74496]: pgmap v1623: 305 pgs: 305 active+clean; 239 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Jan 31 07:51:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b0f2081d-15eb-4d7f-bcae-6d4132c86cae does not exist
Jan 31 07:51:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c49f5041-5b6c-4c26-8d9b-a29716eba107 does not exist
Jan 31 07:51:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 290194b9-9c3c-4c0b-813c-7e49e5dadcea does not exist
Jan 31 07:51:22 compute-0 sudo[292485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:22 compute-0 sudo[292485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:22 compute-0 sudo[292485]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:22 compute-0 nova_compute[247704]: 2026-01-31 07:51:22.259 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:22 compute-0 nova_compute[247704]: 2026-01-31 07:51:22.260 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:22 compute-0 sudo[292510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:51:22 compute-0 sudo[292510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:22 compute-0 sudo[292510]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:22 compute-0 nova_compute[247704]: 2026-01-31 07:51:22.508 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:51:22 compute-0 nova_compute[247704]: 2026-01-31 07:51:22.726 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:22.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:22 compute-0 nova_compute[247704]: 2026-01-31 07:51:22.839 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:22 compute-0 nova_compute[247704]: 2026-01-31 07:51:22.840 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:22 compute-0 nova_compute[247704]: 2026-01-31 07:51:22.849 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:51:22 compute-0 nova_compute[247704]: 2026-01-31 07:51:22.850 247708 INFO nova.compute.claims [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:51:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 239 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 07:51:23 compute-0 nova_compute[247704]: 2026-01-31 07:51:23.066 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:51:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:51:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:23.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:51:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:51:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587214139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:23 compute-0 nova_compute[247704]: 2026-01-31 07:51:23.493 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:23 compute-0 nova_compute[247704]: 2026-01-31 07:51:23.499 247708 DEBUG nova.compute.provider_tree [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:51:23 compute-0 nova_compute[247704]: 2026-01-31 07:51:23.681 247708 DEBUG nova.scheduler.client.report [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:51:23 compute-0 nova_compute[247704]: 2026-01-31 07:51:23.840 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:23 compute-0 nova_compute[247704]: 2026-01-31 07:51:23.841 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:51:24 compute-0 nova_compute[247704]: 2026-01-31 07:51:24.397 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:51:24 compute-0 nova_compute[247704]: 2026-01-31 07:51:24.398 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:51:24 compute-0 ceph-mon[74496]: pgmap v1624: 305 pgs: 305 active+clean; 239 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 07:51:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1587214139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:51:24 compute-0 nova_compute[247704]: 2026-01-31 07:51:24.664 247708 INFO nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:51:24 compute-0 nova_compute[247704]: 2026-01-31 07:51:24.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 241 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 31 07:51:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:25 compute-0 nova_compute[247704]: 2026-01-31 07:51:25.564 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:51:25 compute-0 ceph-mon[74496]: pgmap v1625: 305 pgs: 305 active+clean; 241 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.000 247708 INFO nova.virt.block_device [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Booting with volume a8fc40ce-92f5-4848-b108-e10dbbb71633 at /dev/vda
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.265 247708 DEBUG os_brick.utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.267 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.280 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.281 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0c58a7-1f7b-4617-b94e-1214313896b5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.282 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.289 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.289 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[21ae7407-eb2c-4247-91a2-68003e8ed54c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.291 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.300 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.300 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[757b5f2a-9c53-43db-8e78-42fa22eb8ca0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.302 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1ed2c0-0f5c-480a-bec7-e51acf773d1a]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.302 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.329 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.332 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.332 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.332 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.333 247708 DEBUG os_brick.utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:51:26 compute-0 nova_compute[247704]: 2026-01-31 07:51:26.333 247708 DEBUG nova.virt.block_device [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating existing volume attachment record: f045fb93-8f01-48c8-8c73-a75112fb995b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:51:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:26.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 242 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 237 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 31 07:51:27 compute-0 nova_compute[247704]: 2026-01-31 07:51:27.474 247708 DEBUG nova.policy [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb0d7106e4b04ac980748beb40d5cedf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:51:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:27 compute-0 nova_compute[247704]: 2026-01-31 07:51:27.715 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:28 compute-0 ceph-mon[74496]: pgmap v1626: 305 pgs: 305 active+clean; 242 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 237 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 31 07:51:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1851526475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:51:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:28.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.256 247708 INFO nova.virt.block_device [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Booting with volume f4f1465d-4b57-44df-8e66-548b09052627 at /dev/vdb
Jan 31 07:51:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:29.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.690 247708 DEBUG os_brick.utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.691 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.698 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.699 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc6ec18-bfc0-43a3-9b26-7c3520601738]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.700 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.708 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.709 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[88d20ec1-807f-4d07-a3f2-2551a0173b4b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.711 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.720 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.721 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[a2914912-91aa-4701-a01d-26ce39d2935d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.722 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8beaba-18b7-4227-80dc-db89ddf0fd99]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.723 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.745 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.747 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.747 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.748 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.748 247708 DEBUG os_brick.utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.748 247708 DEBUG nova.virt.block_device [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating existing volume attachment record: 05e2eb3f-84f5-40b7-bd25-9e9d1b26cfd7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:51:29 compute-0 nova_compute[247704]: 2026-01-31 07:51:29.753 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:30 compute-0 ceph-mon[74496]: pgmap v1627: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 07:51:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:30.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 217 KiB/s rd, 987 KiB/s wr, 41 op/s
Jan 31 07:51:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:51:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:31.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:51:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:51:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3089878778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:51:32 compute-0 nova_compute[247704]: 2026-01-31 07:51:32.716 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:32 compute-0 ceph-mon[74496]: pgmap v1628: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 217 KiB/s rd, 987 KiB/s wr, 41 op/s
Jan 31 07:51:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3089878778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:51:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:32.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 192 KiB/s rd, 158 KiB/s wr, 35 op/s
Jan 31 07:51:33 compute-0 nova_compute[247704]: 2026-01-31 07:51:33.168 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully created port: f70ffad1-6c67-4e2f-9488-5f51be8ca30f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:51:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:33.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:33 compute-0 podman[292577]: 2026-01-31 07:51:33.886721329 +0000 UTC m=+0.057898029 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:51:33 compute-0 ceph-mon[74496]: pgmap v1629: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 192 KiB/s rd, 158 KiB/s wr, 35 op/s
Jan 31 07:51:34 compute-0 nova_compute[247704]: 2026-01-31 07:51:34.755 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:34.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:34 compute-0 nova_compute[247704]: 2026-01-31 07:51:34.982 247708 INFO nova.virt.block_device [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Booting with volume 9b441e8b-6bb5-4554-9b00-62c0672ca9cf at /dev/vdc
Jan 31 07:51:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 73 KiB/s wr, 8 op/s
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004342277068211427 of space, bias 1.0, pg target 1.3026831204634282 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2959049806323283 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:51:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.189 247708 DEBUG os_brick.utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.190 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.199 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.199 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[536c97ab-6500-4e9a-9726-ffc7a2241b0b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.200 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.206 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.206 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[0d207b39-c5eb-4eec-b13e-381c73451b82]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.208 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.214 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.215 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[049559da-449d-49b8-861c-1a495d47c897]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.216 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[f8aca31c-3415-4e0b-8653-6dd45f017605]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.216 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.236 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.239 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.239 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.239 247708 DEBUG os_brick.initiator.connectors.lightos [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.240 247708 DEBUG os_brick.utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] <== get_connector_properties: return (49ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 07:51:35 compute-0 nova_compute[247704]: 2026-01-31 07:51:35.240 247708 DEBUG nova.virt.block_device [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating existing volume attachment record: de9a0d23-e1e1-45b6-afc1-1d3caabd10dc _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 07:51:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:35.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:35 compute-0 sudo[292607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:35 compute-0 sudo[292607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:35 compute-0 sudo[292607]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:35 compute-0 sudo[292632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:35 compute-0 sudo[292632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:35 compute-0 sudo[292632]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:36 compute-0 nova_compute[247704]: 2026-01-31 07:51:36.248 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully created port: 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:51:36 compute-0 ceph-mon[74496]: pgmap v1630: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 73 KiB/s wr, 8 op/s
Jan 31 07:51:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:36.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 29 KiB/s wr, 6 op/s
Jan 31 07:51:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/132925161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:51:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:37.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:37 compute-0 nova_compute[247704]: 2026-01-31 07:51:37.719 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.498 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.499 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.500 247708 INFO nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Creating image(s)
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.500 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.500 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Ensure instance console log exists: /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.501 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.501 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.502 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:51:38 compute-0 ceph-mon[74496]: pgmap v1631: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 29 KiB/s wr, 6 op/s
Jan 31 07:51:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:51:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:38.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:51:38 compute-0 nova_compute[247704]: 2026-01-31 07:51:38.904 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully created port: efbaae5d-81ff-4cc9-8c11-21b46928ff4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:51:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 22 KiB/s wr, 5 op/s
Jan 31 07:51:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:39.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:39 compute-0 ceph-mon[74496]: pgmap v1632: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 22 KiB/s wr, 5 op/s
Jan 31 07:51:39 compute-0 nova_compute[247704]: 2026-01-31 07:51:39.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 16 KiB/s wr, 1 op/s
Jan 31 07:51:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:41.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:42 compute-0 ceph-mon[74496]: pgmap v1633: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 16 KiB/s wr, 1 op/s
Jan 31 07:51:42 compute-0 nova_compute[247704]: 2026-01-31 07:51:42.606 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully created port: b30ffaa7-e766-4963-9625-a6ed3c381b9e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:51:42 compute-0 nova_compute[247704]: 2026-01-31 07:51:42.750 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:42.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 15 KiB/s wr, 1 op/s
Jan 31 07:51:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:43.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:44 compute-0 nova_compute[247704]: 2026-01-31 07:51:44.761 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:44 compute-0 ceph-mon[74496]: pgmap v1634: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 15 KiB/s wr, 1 op/s
Jan 31 07:51:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:44.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Jan 31 07:51:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:45.285 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:51:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:45.286 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:51:45 compute-0 nova_compute[247704]: 2026-01-31 07:51:45.286 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:45.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:46 compute-0 ceph-mon[74496]: pgmap v1635: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 15 KiB/s wr, 0 op/s
Jan 31 07:51:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1397894410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:51:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1397894410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:51:46 compute-0 nova_compute[247704]: 2026-01-31 07:51:46.302 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully created port: 4b53f464-7fe8-4fdb-9628-1af032b23899 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:51:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:46.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 07:51:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:47.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:47 compute-0 nova_compute[247704]: 2026-01-31 07:51:47.752 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:47 compute-0 podman[292663]: 2026-01-31 07:51:47.932415198 +0000 UTC m=+0.105638059 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 07:51:48 compute-0 ceph-mon[74496]: pgmap v1636: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 07:51:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:48.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Jan 31 07:51:49 compute-0 nova_compute[247704]: 2026-01-31 07:51:49.438 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully updated port: f70ffad1-6c67-4e2f-9488-5f51be8ca30f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:51:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:49.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:49 compute-0 nova_compute[247704]: 2026-01-31 07:51:49.598 247708 DEBUG nova.compute.manager [req-5aa9b75f-a452-4502-b0a8-64360879e24b req-d00e948f-7294-4a53-9550-b0f30dde66d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-changed-f70ffad1-6c67-4e2f-9488-5f51be8ca30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:51:49 compute-0 nova_compute[247704]: 2026-01-31 07:51:49.598 247708 DEBUG nova.compute.manager [req-5aa9b75f-a452-4502-b0a8-64360879e24b req-d00e948f-7294-4a53-9550-b0f30dde66d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing instance network info cache due to event network-changed-f70ffad1-6c67-4e2f-9488-5f51be8ca30f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:51:49 compute-0 nova_compute[247704]: 2026-01-31 07:51:49.599 247708 DEBUG oslo_concurrency.lockutils [req-5aa9b75f-a452-4502-b0a8-64360879e24b req-d00e948f-7294-4a53-9550-b0f30dde66d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:49 compute-0 nova_compute[247704]: 2026-01-31 07:51:49.599 247708 DEBUG oslo_concurrency.lockutils [req-5aa9b75f-a452-4502-b0a8-64360879e24b req-d00e948f-7294-4a53-9550-b0f30dde66d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:49 compute-0 nova_compute[247704]: 2026-01-31 07:51:49.599 247708 DEBUG nova.network.neutron [req-5aa9b75f-a452-4502-b0a8-64360879e24b req-d00e948f-7294-4a53-9550-b0f30dde66d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing network info cache for port f70ffad1-6c67-4e2f-9488-5f51be8ca30f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:51:49 compute-0 nova_compute[247704]: 2026-01-31 07:51:49.764 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:49 compute-0 nova_compute[247704]: 2026-01-31 07:51:49.969 247708 DEBUG nova.network.neutron [req-5aa9b75f-a452-4502-b0a8-64360879e24b req-d00e948f-7294-4a53-9550-b0f30dde66d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:51:50 compute-0 nova_compute[247704]: 2026-01-31 07:51:50.538 247708 DEBUG nova.network.neutron [req-5aa9b75f-a452-4502-b0a8-64360879e24b req-d00e948f-7294-4a53-9550-b0f30dde66d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:50 compute-0 nova_compute[247704]: 2026-01-31 07:51:50.574 247708 DEBUG oslo_concurrency.lockutils [req-5aa9b75f-a452-4502-b0a8-64360879e24b req-d00e948f-7294-4a53-9550-b0f30dde66d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:50 compute-0 ceph-mon[74496]: pgmap v1637: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Jan 31 07:51:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:50.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:50 compute-0 nova_compute[247704]: 2026-01-31 07:51:50.975 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully updated port: e03f85ae-0195-4e14-9903-e7b06167a724 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:51:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 07:51:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:51.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:51 compute-0 nova_compute[247704]: 2026-01-31 07:51:51.749 247708 DEBUG nova.compute.manager [req-e802aa62-5105-4455-bad8-97957fdaf21c req-c520e66d-d7ed-4394-90bc-85bcc8c60b71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-changed-e03f85ae-0195-4e14-9903-e7b06167a724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:51:51 compute-0 nova_compute[247704]: 2026-01-31 07:51:51.750 247708 DEBUG nova.compute.manager [req-e802aa62-5105-4455-bad8-97957fdaf21c req-c520e66d-d7ed-4394-90bc-85bcc8c60b71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing instance network info cache due to event network-changed-e03f85ae-0195-4e14-9903-e7b06167a724. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:51:51 compute-0 nova_compute[247704]: 2026-01-31 07:51:51.750 247708 DEBUG oslo_concurrency.lockutils [req-e802aa62-5105-4455-bad8-97957fdaf21c req-c520e66d-d7ed-4394-90bc-85bcc8c60b71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:51 compute-0 nova_compute[247704]: 2026-01-31 07:51:51.750 247708 DEBUG oslo_concurrency.lockutils [req-e802aa62-5105-4455-bad8-97957fdaf21c req-c520e66d-d7ed-4394-90bc-85bcc8c60b71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:51 compute-0 nova_compute[247704]: 2026-01-31 07:51:51.751 247708 DEBUG nova.network.neutron [req-e802aa62-5105-4455-bad8-97957fdaf21c req-c520e66d-d7ed-4394-90bc-85bcc8c60b71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing network info cache for port e03f85ae-0195-4e14-9903-e7b06167a724 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:51:51 compute-0 ceph-mon[74496]: pgmap v1638: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 07:51:52 compute-0 nova_compute[247704]: 2026-01-31 07:51:52.033 247708 DEBUG nova.network.neutron [req-e802aa62-5105-4455-bad8-97957fdaf21c req-c520e66d-d7ed-4394-90bc-85bcc8c60b71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:51:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:51:52.288 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:51:52 compute-0 nova_compute[247704]: 2026-01-31 07:51:52.515 247708 DEBUG nova.network.neutron [req-e802aa62-5105-4455-bad8-97957fdaf21c req-c520e66d-d7ed-4394-90bc-85bcc8c60b71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:52 compute-0 nova_compute[247704]: 2026-01-31 07:51:52.546 247708 DEBUG oslo_concurrency.lockutils [req-e802aa62-5105-4455-bad8-97957fdaf21c req-c520e66d-d7ed-4394-90bc-85bcc8c60b71 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:52 compute-0 nova_compute[247704]: 2026-01-31 07:51:52.676 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully updated port: 4eb52c86-78dd-484b-8a6f-30698b76d281 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:51:52 compute-0 nova_compute[247704]: 2026-01-31 07:51:52.755 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:51:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:51:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 07:51:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:53.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.008 247708 DEBUG nova.compute.manager [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-changed-4eb52c86-78dd-484b-8a6f-30698b76d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.008 247708 DEBUG nova.compute.manager [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing instance network info cache due to event network-changed-4eb52c86-78dd-484b-8a6f-30698b76d281. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.008 247708 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.009 247708 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.009 247708 DEBUG nova.network.neutron [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing network info cache for port 4eb52c86-78dd-484b-8a6f-30698b76d281 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.025 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully updated port: 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.393 247708 DEBUG nova.network.neutron [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.768 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:54.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:54 compute-0 nova_compute[247704]: 2026-01-31 07:51:54.963 247708 DEBUG nova.network.neutron [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 210 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 2.4 KiB/s wr, 10 op/s
Jan 31 07:51:55 compute-0 ceph-mon[74496]: pgmap v1639: 305 pgs: 305 active+clean; 246 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 07:51:55 compute-0 nova_compute[247704]: 2026-01-31 07:51:55.221 247708 DEBUG oslo_concurrency.lockutils [req-92fe061f-0824-4f25-969c-d71237a2c2a7 req-5f7bc807-9231-4c54-8bda-f2eb1dce5202 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:55.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:55 compute-0 sudo[292693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:55 compute-0 sudo[292693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:55 compute-0 sudo[292693]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:55 compute-0 sudo[292718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:51:55 compute-0 sudo[292718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:51:55 compute-0 sudo[292718]: pam_unix(sudo:session): session closed for user root
Jan 31 07:51:56 compute-0 nova_compute[247704]: 2026-01-31 07:51:56.268 247708 DEBUG nova.compute.manager [req-f99dbc60-89e9-452e-9205-b31b80f05e96 req-2ef521e5-7981-40a0-979c-c05f563090f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-changed-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:51:56 compute-0 nova_compute[247704]: 2026-01-31 07:51:56.269 247708 DEBUG nova.compute.manager [req-f99dbc60-89e9-452e-9205-b31b80f05e96 req-2ef521e5-7981-40a0-979c-c05f563090f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing instance network info cache due to event network-changed-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:51:56 compute-0 nova_compute[247704]: 2026-01-31 07:51:56.269 247708 DEBUG oslo_concurrency.lockutils [req-f99dbc60-89e9-452e-9205-b31b80f05e96 req-2ef521e5-7981-40a0-979c-c05f563090f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:56 compute-0 nova_compute[247704]: 2026-01-31 07:51:56.270 247708 DEBUG oslo_concurrency.lockutils [req-f99dbc60-89e9-452e-9205-b31b80f05e96 req-2ef521e5-7981-40a0-979c-c05f563090f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:56 compute-0 nova_compute[247704]: 2026-01-31 07:51:56.270 247708 DEBUG nova.network.neutron [req-f99dbc60-89e9-452e-9205-b31b80f05e96 req-2ef521e5-7981-40a0-979c-c05f563090f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing network info cache for port 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:51:56 compute-0 ceph-mon[74496]: pgmap v1640: 305 pgs: 305 active+clean; 210 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 2.4 KiB/s wr, 10 op/s
Jan 31 07:51:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:51:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 194 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.238 247708 DEBUG nova.network.neutron [req-f99dbc60-89e9-452e-9205-b31b80f05e96 req-2ef521e5-7981-40a0-979c-c05f563090f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:51:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:57.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.715 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully updated port: efbaae5d-81ff-4cc9-8c11-21b46928ff4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.888 247708 DEBUG nova.compute.manager [req-e8d14ad2-65d0-4c0e-8810-4bae74ede952 req-b94d8f9d-679d-4d51-bb8d-c1667ebb3218 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-changed-efbaae5d-81ff-4cc9-8c11-21b46928ff4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.888 247708 DEBUG nova.compute.manager [req-e8d14ad2-65d0-4c0e-8810-4bae74ede952 req-b94d8f9d-679d-4d51-bb8d-c1667ebb3218 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing instance network info cache due to event network-changed-efbaae5d-81ff-4cc9-8c11-21b46928ff4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.889 247708 DEBUG oslo_concurrency.lockutils [req-e8d14ad2-65d0-4c0e-8810-4bae74ede952 req-b94d8f9d-679d-4d51-bb8d-c1667ebb3218 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.898 247708 DEBUG nova.network.neutron [req-f99dbc60-89e9-452e-9205-b31b80f05e96 req-2ef521e5-7981-40a0-979c-c05f563090f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.948 247708 DEBUG oslo_concurrency.lockutils [req-f99dbc60-89e9-452e-9205-b31b80f05e96 req-2ef521e5-7981-40a0-979c-c05f563090f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.949 247708 DEBUG oslo_concurrency.lockutils [req-e8d14ad2-65d0-4c0e-8810-4bae74ede952 req-b94d8f9d-679d-4d51-bb8d-c1667ebb3218 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:51:57 compute-0 nova_compute[247704]: 2026-01-31 07:51:57.950 247708 DEBUG nova.network.neutron [req-e8d14ad2-65d0-4c0e-8810-4bae74ede952 req-b94d8f9d-679d-4d51-bb8d-c1667ebb3218 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing network info cache for port efbaae5d-81ff-4cc9-8c11-21b46928ff4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:51:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:51:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:58.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:51:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 4.0 KiB/s wr, 34 op/s
Jan 31 07:51:59 compute-0 nova_compute[247704]: 2026-01-31 07:51:59.026 247708 DEBUG nova.network.neutron [req-e8d14ad2-65d0-4c0e-8810-4bae74ede952 req-b94d8f9d-679d-4d51-bb8d-c1667ebb3218 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:51:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:51:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:51:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:59.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:51:59 compute-0 nova_compute[247704]: 2026-01-31 07:51:59.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:51:59 compute-0 nova_compute[247704]: 2026-01-31 07:51:59.954 247708 DEBUG nova.network.neutron [req-e8d14ad2-65d0-4c0e-8810-4bae74ede952 req-b94d8f9d-679d-4d51-bb8d-c1667ebb3218 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:00 compute-0 nova_compute[247704]: 2026-01-31 07:52:00.191 247708 DEBUG oslo_concurrency.lockutils [req-e8d14ad2-65d0-4c0e-8810-4bae74ede952 req-b94d8f9d-679d-4d51-bb8d-c1667ebb3218 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:52:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:00.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:01 compute-0 nova_compute[247704]: 2026-01-31 07:52:01.069 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully updated port: b30ffaa7-e766-4963-9625-a6ed3c381b9e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:52:01 compute-0 nova_compute[247704]: 2026-01-31 07:52:01.201 247708 DEBUG nova.compute.manager [req-8cdf4672-9583-4757-a6aa-eefb17d22d2e req-022d1cb3-966b-4d76-b316-48efc93fdef2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-changed-b30ffaa7-e766-4963-9625-a6ed3c381b9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:01 compute-0 nova_compute[247704]: 2026-01-31 07:52:01.202 247708 DEBUG nova.compute.manager [req-8cdf4672-9583-4757-a6aa-eefb17d22d2e req-022d1cb3-966b-4d76-b316-48efc93fdef2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing instance network info cache due to event network-changed-b30ffaa7-e766-4963-9625-a6ed3c381b9e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:52:01 compute-0 nova_compute[247704]: 2026-01-31 07:52:01.202 247708 DEBUG oslo_concurrency.lockutils [req-8cdf4672-9583-4757-a6aa-eefb17d22d2e req-022d1cb3-966b-4d76-b316-48efc93fdef2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:52:01 compute-0 nova_compute[247704]: 2026-01-31 07:52:01.202 247708 DEBUG oslo_concurrency.lockutils [req-8cdf4672-9583-4757-a6aa-eefb17d22d2e req-022d1cb3-966b-4d76-b316-48efc93fdef2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:52:01 compute-0 nova_compute[247704]: 2026-01-31 07:52:01.203 247708 DEBUG nova.network.neutron [req-8cdf4672-9583-4757-a6aa-eefb17d22d2e req-022d1cb3-966b-4d76-b316-48efc93fdef2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing network info cache for port b30ffaa7-e766-4963-9625-a6ed3c381b9e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:52:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:01.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:02 compute-0 nova_compute[247704]: 2026-01-31 07:52:02.275 247708 DEBUG nova.network.neutron [req-8cdf4672-9583-4757-a6aa-eefb17d22d2e req-022d1cb3-966b-4d76-b316-48efc93fdef2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:52:02 compute-0 nova_compute[247704]: 2026-01-31 07:52:02.760 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:02.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.005 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Successfully updated port: 4b53f464-7fe8-4fdb-9628-1af032b23899 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.085 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.366 247708 DEBUG nova.compute.manager [req-58b7f8ad-4154-4517-a5af-aaae06fa8024 req-feacca6b-6237-4acb-a90c-7ab5c1aa8795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-changed-4b53f464-7fe8-4fdb-9628-1af032b23899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.366 247708 DEBUG nova.compute.manager [req-58b7f8ad-4154-4517-a5af-aaae06fa8024 req-feacca6b-6237-4acb-a90c-7ab5c1aa8795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing instance network info cache due to event network-changed-4b53f464-7fe8-4fdb-9628-1af032b23899. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.367 247708 DEBUG oslo_concurrency.lockutils [req-58b7f8ad-4154-4517-a5af-aaae06fa8024 req-feacca6b-6237-4acb-a90c-7ab5c1aa8795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:52:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:03.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.601 247708 DEBUG nova.network.neutron [req-8cdf4672-9583-4757-a6aa-eefb17d22d2e req-022d1cb3-966b-4d76-b316-48efc93fdef2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.616 247708 DEBUG oslo_concurrency.lockutils [req-8cdf4672-9583-4757-a6aa-eefb17d22d2e req-022d1cb3-966b-4d76-b316-48efc93fdef2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.617 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.617 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:52:03 compute-0 nova_compute[247704]: 2026-01-31 07:52:03.819 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:52:04 compute-0 nova_compute[247704]: 2026-01-31 07:52:04.776 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:52:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:04.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:52:04 compute-0 podman[292748]: 2026-01-31 07:52:04.91146316 +0000 UTC m=+0.076098134 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:52:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:05.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:06 compute-0 ceph-mon[74496]: pgmap v1641: 305 pgs: 305 active+clean; 194 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 3.0 KiB/s wr, 15 op/s
Jan 31 07:52:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:06.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.9 KiB/s wr, 25 op/s
Jan 31 07:52:07 compute-0 ceph-mon[74496]: pgmap v1642: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 4.0 KiB/s wr, 34 op/s
Jan 31 07:52:07 compute-0 ceph-mon[74496]: pgmap v1643: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:07 compute-0 ceph-mon[74496]: pgmap v1644: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:07 compute-0 ceph-mon[74496]: pgmap v1645: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 KiB/s wr, 35 op/s
Jan 31 07:52:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:07.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:07 compute-0 nova_compute[247704]: 2026-01-31 07:52:07.761 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:08 compute-0 nova_compute[247704]: 2026-01-31 07:52:08.022 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:08 compute-0 ceph-mon[74496]: pgmap v1646: 305 pgs: 305 active+clean; 167 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.9 KiB/s wr, 25 op/s
Jan 31 07:52:08 compute-0 nova_compute[247704]: 2026-01-31 07:52:08.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:08.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 135 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Jan 31 07:52:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:09.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:09 compute-0 nova_compute[247704]: 2026-01-31 07:52:09.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:09 compute-0 nova_compute[247704]: 2026-01-31 07:52:09.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:52:09 compute-0 nova_compute[247704]: 2026-01-31 07:52:09.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:52:09 compute-0 nova_compute[247704]: 2026-01-31 07:52:09.802 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:09 compute-0 ceph-mon[74496]: pgmap v1647: 305 pgs: 305 active+clean; 135 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 25 op/s
Jan 31 07:52:09 compute-0 nova_compute[247704]: 2026-01-31 07:52:09.920 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:52:09 compute-0 nova_compute[247704]: 2026-01-31 07:52:09.921 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:52:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:10.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:10 compute-0 nova_compute[247704]: 2026-01-31 07:52:10.915 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 113 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 07:52:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:11.161 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:11.162 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:11.162 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:11 compute-0 nova_compute[247704]: 2026-01-31 07:52:11.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:11 compute-0 nova_compute[247704]: 2026-01-31 07:52:11.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:11 compute-0 nova_compute[247704]: 2026-01-31 07:52:11.983 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:11 compute-0 nova_compute[247704]: 2026-01-31 07:52:11.984 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:11 compute-0 nova_compute[247704]: 2026-01-31 07:52:11.985 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:11 compute-0 nova_compute[247704]: 2026-01-31 07:52:11.985 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:52:11 compute-0 nova_compute[247704]: 2026-01-31 07:52:11.985 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:52:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790364297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:12 compute-0 nova_compute[247704]: 2026-01-31 07:52:12.494 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:12 compute-0 ceph-mon[74496]: pgmap v1648: 305 pgs: 305 active+clean; 113 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 07:52:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2746839228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:12 compute-0 nova_compute[247704]: 2026-01-31 07:52:12.704 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:52:12 compute-0 nova_compute[247704]: 2026-01-31 07:52:12.706 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4658MB free_disk=20.98541259765625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:52:12 compute-0 nova_compute[247704]: 2026-01-31 07:52:12.706 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:12 compute-0 nova_compute[247704]: 2026-01-31 07:52:12.707 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:12 compute-0 nova_compute[247704]: 2026-01-31 07:52:12.765 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:12.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 17 op/s
Jan 31 07:52:13 compute-0 nova_compute[247704]: 2026-01-31 07:52:13.025 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 9f390c61-950a-4c26-8733-d43d910f2430 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:52:13 compute-0 nova_compute[247704]: 2026-01-31 07:52:13.026 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:52:13 compute-0 nova_compute[247704]: 2026-01-31 07:52:13.027 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:52:13 compute-0 nova_compute[247704]: 2026-01-31 07:52:13.081 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:52:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/882165218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:13 compute-0 nova_compute[247704]: 2026-01-31 07:52:13.570 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:13 compute-0 nova_compute[247704]: 2026-01-31 07:52:13.580 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:52:13 compute-0 nova_compute[247704]: 2026-01-31 07:52:13.680 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:52:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2790364297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2589437897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/882165218' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:14 compute-0 nova_compute[247704]: 2026-01-31 07:52:14.252 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:52:14 compute-0 nova_compute[247704]: 2026-01-31 07:52:14.252 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:14 compute-0 nova_compute[247704]: 2026-01-31 07:52:14.806 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:14.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Jan 31 07:52:15 compute-0 nova_compute[247704]: 2026-01-31 07:52:15.252 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:15 compute-0 nova_compute[247704]: 2026-01-31 07:52:15.253 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:15 compute-0 nova_compute[247704]: 2026-01-31 07:52:15.254 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:52:15 compute-0 ceph-mon[74496]: pgmap v1649: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 17 op/s
Jan 31 07:52:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3392995350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4102396379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:15 compute-0 nova_compute[247704]: 2026-01-31 07:52:15.557 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:15.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:15 compute-0 sudo[292818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:15 compute-0 sudo[292818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:15 compute-0 sudo[292818]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:15 compute-0 sudo[292843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:15 compute-0 sudo[292843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:15 compute-0 sudo[292843]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:15 compute-0 nova_compute[247704]: 2026-01-31 07:52:15.926 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:52:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:16 compute-0 ceph-mon[74496]: pgmap v1650: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 20 op/s
Jan 31 07:52:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/53417800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:16.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 KiB/s wr, 20 op/s
Jan 31 07:52:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:17 compute-0 nova_compute[247704]: 2026-01-31 07:52:17.807 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.118 247708 DEBUG nova.network.neutron [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [{"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1494411292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:18 compute-0 ceph-mon[74496]: pgmap v1651: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 KiB/s wr, 20 op/s
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.463 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.465 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance network_info: |[{"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.467 247708 DEBUG oslo_concurrency.lockutils [req-58b7f8ad-4154-4517-a5af-aaae06fa8024 req-feacca6b-6237-4acb-a90c-7ab5c1aa8795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.468 247708 DEBUG nova.network.neutron [req-58b7f8ad-4154-4517-a5af-aaae06fa8024 req-feacca6b-6237-4acb-a90c-7ab5c1aa8795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing network info cache for port 4b53f464-7fe8-4fdb-9628-1af032b23899 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.479 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Start _get_guest_xml network_info=[{"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_
Jan 31 07:52:18 compute-0 nova_compute[247704]: ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'f045fb93-8f01-48c8-8c73-a75112fb995b', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a8fc40ce-92f5-4848-b108-e10dbbb71633', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a8fc40ce-92f5-4848-b108-e10dbbb71633', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9f390c61-950a-4c26-8733-d43d910f2430', 'attached_at': '', 'detached_at': '', 'volume_id': 'a8fc40ce-92f5-4848-b108-e10dbbb71633', 'serial': 'a8fc40ce-92f5-4848-b108-e10dbbb71633'}, 'mount_device': '/dev/vda', 'volume_type': None}, {'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '05e2eb3f-84f5-40b7-bd25-9e9d1b26cfd7', 'boot_index': 1, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f4f1465d-4b57-44df-8e66-548b09052627', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f4f1465d-4b57-44df-8e66-548b09052627', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 
'status': 'reserved', 'instance': '9f390c61-950a-4c26-8733-d43d910f2430', 'attached_at': '', 'detached_at': '', 'volume_id': 'f4f1465d-4b57-44df-8e66-548b09052627', 'serial': 'f4f1465d-4b57-44df-8e66-548b09052627'}, 'mount_device': '/dev/vdb', 'volume_type': None}, {'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'de9a0d23-e1e1-45b6-afc1-1d3caabd10dc', 'boot_index': 2, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9b441e8b-6bb5-4554-9b00-62c0672ca9cf', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9b441e8b-6bb5-4554-9b00-62c0672ca9cf', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9f390c61-950a-4c26-8733-d43d910f2430', 'attached_at': '', 'detached_at': '', 'volume_id': '9b441e8b-6bb5-4554-9b00-62c0672ca9cf', 'serial': '9b441e8b-6bb5-4554-9b00-62c0672ca9cf'}, 'mount_device': '/dev/vdc', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.484 247708 WARNING nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.529 247708 DEBUG nova.virt.libvirt.host [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.530 247708 DEBUG nova.virt.libvirt.host [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.534 247708 DEBUG nova.virt.libvirt.host [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.535 247708 DEBUG nova.virt.libvirt.host [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.536 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.536 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.536 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.537 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.537 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.537 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.537 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.538 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.538 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.538 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.538 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.538 247708 DEBUG nova.virt.hardware [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.570 247708 DEBUG nova.storage.rbd_utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] rbd image 9f390c61-950a-4c26-8733-d43d910f2430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:18 compute-0 nova_compute[247704]: 2026-01-31 07:52:18.577 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:18 compute-0 rsyslogd[1005]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 07:52:18.479 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 07:52:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:18 compute-0 podman[292907]: 2026-01-31 07:52:18.949503 +0000 UTC m=+0.126693374 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:52:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 07:52:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:52:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340991667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.092 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:19.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.747 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.748 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.749 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=f70ffad1-6c67-4e2f-9488-5f51be8ca30f,network=Network(a3752940-fba8-4389-b589-ddf451dddc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70ffad1-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.750 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.750 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.751 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:8e:bc,bridge_name='br-int',has_traffic_filtering=True,id=e03f85ae-0195-4e14-9903-e7b06167a724,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape03f85ae-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.752 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.752 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.753 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:1d:84,bridge_name='br-int',has_traffic_filtering=True,id=4eb52c86-78dd-484b-8a6f-30698b76d281,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4eb52c86-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.754 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.754 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.755 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:6d:e4,bridge_name='br-int',has_traffic_filtering=True,id=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7e8d6d-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.755 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.756 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.756 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:f2:bf,bridge_name='br-int',has_traffic_filtering=True,id=efbaae5d-81ff-4cc9-8c11-21b46928ff4e,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefbaae5d-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.758 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.758 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.759 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:67:df,bridge_name='br-int',has_traffic_filtering=True,id=b30ffaa7-e766-4963-9625-a6ed3c381b9e,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb30ffaa7-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.759 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.760 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.760 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:74:ed,bridge_name='br-int',has_traffic_filtering=True,id=4b53f464-7fe8-4fdb-9628-1af032b23899,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b53f464-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.761 247708 DEBUG nova.objects.instance [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f390c61-950a-4c26-8733-d43d910f2430 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.788 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <uuid>9f390c61-950a-4c26-8733-d43d910f2430</uuid>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <name>instance-00000048</name>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <nova:name>tempest-device-tagging-server-470810591</nova:name>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:52:18</nova:creationTime>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:user uuid="eb0d7106e4b04ac980748beb40d5cedf">tempest-TaggedBootDevicesTest-805673079-project-member</nova:user>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:project uuid="3264618ebdfa4478919141d5d3c4d3b3">tempest-TaggedBootDevicesTest-805673079</nova:project>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:port uuid="f70ffad1-6c67-4e2f-9488-5f51be8ca30f">
Jan 31 07:52:19 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:port uuid="e03f85ae-0195-4e14-9903-e7b06167a724">
Jan 31 07:52:19 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.1.1.58" ipVersion="4"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:port uuid="4eb52c86-78dd-484b-8a6f-30698b76d281">
Jan 31 07:52:19 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.1.1.171" ipVersion="4"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:port uuid="3c7e8d6d-d8dc-4023-8967-e09698c9d8bd">
Jan 31 07:52:19 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.1.1.123" ipVersion="4"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:port uuid="efbaae5d-81ff-4cc9-8c11-21b46928ff4e">
Jan 31 07:52:19 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.1.1.8" ipVersion="4"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:port uuid="b30ffaa7-e766-4963-9625-a6ed3c381b9e">
Jan 31 07:52:19 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <nova:port uuid="4b53f464-7fe8-4fdb-9628-1af032b23899">
Jan 31 07:52:19 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <system>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <entry name="serial">9f390c61-950a-4c26-8733-d43d910f2430</entry>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <entry name="uuid">9f390c61-950a-4c26-8733-d43d910f2430</entry>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </system>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <os>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   </os>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <features>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   </features>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/9f390c61-950a-4c26-8733-d43d910f2430_disk.config">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </source>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-a8fc40ce-92f5-4848-b108-e10dbbb71633">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </source>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <serial>a8fc40ce-92f5-4848-b108-e10dbbb71633</serial>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-f4f1465d-4b57-44df-8e66-548b09052627">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </source>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="vdb" bus="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <serial>f4f1465d-4b57-44df-8e66-548b09052627</serial>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-9b441e8b-6bb5-4554-9b00-62c0672ca9cf">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </source>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:52:19 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="vdc" bus="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <serial>9b441e8b-6bb5-4554-9b00-62c0672ca9cf</serial>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:0e:06:0e"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="tapf70ffad1-6c"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:1e:8e:bc"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="tape03f85ae-01"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:a9:1d:84"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="tap4eb52c86-78"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:91:6d:e4"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="tap3c7e8d6d-d8"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:2a:f2:bf"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="tapefbaae5d-81"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:d4:67:df"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="tapb30ffaa7-e7"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:57:74:ed"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <target dev="tap4b53f464-7f"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430/console.log" append="off"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <video>
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </video>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:52:19 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:52:19 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:52:19 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:52:19 compute-0 nova_compute[247704]: </domain>
Jan 31 07:52:19 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.790 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Preparing to wait for external event network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.791 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.792 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.792 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.793 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Preparing to wait for external event network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.793 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.793 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.794 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.794 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Preparing to wait for external event network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.795 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.795 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.795 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.796 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Preparing to wait for external event network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.796 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.797 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.797 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.798 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Preparing to wait for external event network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.798 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.798 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.799 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.799 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Preparing to wait for external event network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.800 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.800 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.800 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.801 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Preparing to wait for external event network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.801 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.801 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.802 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.803 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.803 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.804 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=f70ffad1-6c67-4e2f-9488-5f51be8ca30f,network=Network(a3752940-fba8-4389-b589-ddf451dddc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70ffad1-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.805 247708 DEBUG os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=f70ffad1-6c67-4e2f-9488-5f51be8ca30f,network=Network(a3752940-fba8-4389-b589-ddf451dddc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70ffad1-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.806 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.807 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.808 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.810 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.814 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.814 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf70ffad1-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.815 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf70ffad1-6c, col_values=(('external_ids', {'iface-id': 'f70ffad1-6c67-4e2f-9488-5f51be8ca30f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:06:0e', 'vm-uuid': '9f390c61-950a-4c26-8733-d43d910f2430'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.817 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 NetworkManager[49108]: <info>  [1769845939.8192] manager: (tapf70ffad1-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.821 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.828 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.831 247708 INFO os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=f70ffad1-6c67-4e2f-9488-5f51be8ca30f,network=Network(a3752940-fba8-4389-b589-ddf451dddc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70ffad1-6c')
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.832 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.833 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.834 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:8e:bc,bridge_name='br-int',has_traffic_filtering=True,id=e03f85ae-0195-4e14-9903-e7b06167a724,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape03f85ae-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.835 247708 DEBUG os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:8e:bc,bridge_name='br-int',has_traffic_filtering=True,id=e03f85ae-0195-4e14-9903-e7b06167a724,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape03f85ae-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.836 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.836 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.837 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.839 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.840 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03f85ae-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.840 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape03f85ae-01, col_values=(('external_ids', {'iface-id': 'e03f85ae-0195-4e14-9903-e7b06167a724', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:8e:bc', 'vm-uuid': '9f390c61-950a-4c26-8733-d43d910f2430'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.842 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 NetworkManager[49108]: <info>  [1769845939.8435] manager: (tape03f85ae-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.844 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.850 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.851 247708 INFO os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:8e:bc,bridge_name='br-int',has_traffic_filtering=True,id=e03f85ae-0195-4e14-9903-e7b06167a724,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape03f85ae-01')
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.852 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.853 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.853 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:1d:84,bridge_name='br-int',has_traffic_filtering=True,id=4eb52c86-78dd-484b-8a6f-30698b76d281,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4eb52c86-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.853 247708 DEBUG os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:1d:84,bridge_name='br-int',has_traffic_filtering=True,id=4eb52c86-78dd-484b-8a6f-30698b76d281,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4eb52c86-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.854 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.854 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.854 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.856 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.857 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eb52c86-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.858 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4eb52c86-78, col_values=(('external_ids', {'iface-id': '4eb52c86-78dd-484b-8a6f-30698b76d281', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:1d:84', 'vm-uuid': '9f390c61-950a-4c26-8733-d43d910f2430'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.860 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 NetworkManager[49108]: <info>  [1769845939.8608] manager: (tap4eb52c86-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.863 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.868 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.869 247708 INFO os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:1d:84,bridge_name='br-int',has_traffic_filtering=True,id=4eb52c86-78dd-484b-8a6f-30698b76d281,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4eb52c86-78')
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.871 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.871 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.872 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:6d:e4,bridge_name='br-int',has_traffic_filtering=True,id=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7e8d6d-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.872 247708 DEBUG os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:6d:e4,bridge_name='br-int',has_traffic_filtering=True,id=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7e8d6d-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.873 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.873 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.874 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.876 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.876 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c7e8d6d-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.877 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c7e8d6d-d8, col_values=(('external_ids', {'iface-id': '3c7e8d6d-d8dc-4023-8967-e09698c9d8bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:6d:e4', 'vm-uuid': '9f390c61-950a-4c26-8733-d43d910f2430'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.878 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 NetworkManager[49108]: <info>  [1769845939.8793] manager: (tap3c7e8d6d-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.881 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.889 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.890 247708 INFO os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:6d:e4,bridge_name='br-int',has_traffic_filtering=True,id=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7e8d6d-d8')
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.891 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.891 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.892 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:f2:bf,bridge_name='br-int',has_traffic_filtering=True,id=efbaae5d-81ff-4cc9-8c11-21b46928ff4e,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefbaae5d-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.892 247708 DEBUG os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:f2:bf,bridge_name='br-int',has_traffic_filtering=True,id=efbaae5d-81ff-4cc9-8c11-21b46928ff4e,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefbaae5d-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.893 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.893 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.894 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.896 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.896 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefbaae5d-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.897 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapefbaae5d-81, col_values=(('external_ids', {'iface-id': 'efbaae5d-81ff-4cc9-8c11-21b46928ff4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:f2:bf', 'vm-uuid': '9f390c61-950a-4c26-8733-d43d910f2430'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.898 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 NetworkManager[49108]: <info>  [1769845939.9004] manager: (tapefbaae5d-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.900 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.913 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.915 247708 INFO os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:f2:bf,bridge_name='br-int',has_traffic_filtering=True,id=efbaae5d-81ff-4cc9-8c11-21b46928ff4e,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefbaae5d-81')
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.916 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.916 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.916 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:67:df,bridge_name='br-int',has_traffic_filtering=True,id=b30ffaa7-e766-4963-9625-a6ed3c381b9e,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb30ffaa7-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.917 247708 DEBUG os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:67:df,bridge_name='br-int',has_traffic_filtering=True,id=b30ffaa7-e766-4963-9625-a6ed3c381b9e,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb30ffaa7-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.917 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.917 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.917 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.919 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.919 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb30ffaa7-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.920 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb30ffaa7-e7, col_values=(('external_ids', {'iface-id': 'b30ffaa7-e766-4963-9625-a6ed3c381b9e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:67:df', 'vm-uuid': '9f390c61-950a-4c26-8733-d43d910f2430'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.921 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 NetworkManager[49108]: <info>  [1769845939.9220] manager: (tapb30ffaa7-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.923 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.937 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.938 247708 INFO os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:67:df,bridge_name='br-int',has_traffic_filtering=True,id=b30ffaa7-e766-4963-9625-a6ed3c381b9e,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb30ffaa7-e7')
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.939 247708 DEBUG nova.virt.libvirt.vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.939 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.940 247708 DEBUG nova.network.os_vif_util [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:74:ed,bridge_name='br-int',has_traffic_filtering=True,id=4b53f464-7fe8-4fdb-9628-1af032b23899,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b53f464-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.940 247708 DEBUG os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:74:ed,bridge_name='br-int',has_traffic_filtering=True,id=4b53f464-7fe8-4fdb-9628-1af032b23899,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b53f464-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.941 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.941 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.941 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.943 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.943 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b53f464-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:19 compute-0 nova_compute[247704]: 2026-01-31 07:52:19.944 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b53f464-7f, col_values=(('external_ids', {'iface-id': '4b53f464-7fe8-4fdb-9628-1af032b23899', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:74:ed', 'vm-uuid': '9f390c61-950a-4c26-8733-d43d910f2430'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:20 compute-0 ceph-mon[74496]: pgmap v1652: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 07:52:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3340991667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.001 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:20 compute-0 NetworkManager[49108]: <info>  [1769845940.0029] manager: (tap4b53f464-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.004 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.021 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.022 247708 INFO os_vif [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:74:ed,bridge_name='br-int',has_traffic_filtering=True,id=4b53f464-7fe8-4fdb-9628-1af032b23899,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b53f464-7f')
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:52:20
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.341 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.342 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.342 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] No VIF found with MAC fa:16:3e:0e:06:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.343 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] No VIF found with MAC fa:16:3e:2a:f2:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.343 247708 INFO nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Using config drive
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.541 247708 DEBUG nova.storage.rbd_utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] rbd image 9f390c61-950a-4c26-8733-d43d910f2430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.547 247708 DEBUG nova.network.neutron [req-58b7f8ad-4154-4517-a5af-aaae06fa8024 req-feacca6b-6237-4acb-a90c-7ab5c1aa8795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updated VIF entry in instance network info cache for port 4b53f464-7fe8-4fdb-9628-1af032b23899. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.548 247708 DEBUG nova.network.neutron [req-58b7f8ad-4154-4517-a5af-aaae06fa8024 req-feacca6b-6237-4acb-a90c-7ab5c1aa8795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [{"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:20 compute-0 nova_compute[247704]: 2026-01-31 07:52:20.599 247708 DEBUG oslo_concurrency.lockutils [req-58b7f8ad-4154-4517-a5af-aaae06fa8024 req-feacca6b-6237-4acb-a90c-7ab5c1aa8795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:52:20 compute-0 ovn_controller[149457]: 2026-01-31T07:52:20Z|00222|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 31 07:52:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:20.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 22 op/s
Jan 31 07:52:21 compute-0 nova_compute[247704]: 2026-01-31 07:52:21.106 247708 INFO nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Creating config drive at /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430/disk.config
Jan 31 07:52:21 compute-0 nova_compute[247704]: 2026-01-31 07:52:21.112 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqt42v0si execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:21 compute-0 nova_compute[247704]: 2026-01-31 07:52:21.243 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpqt42v0si" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:21 compute-0 nova_compute[247704]: 2026-01-31 07:52:21.277 247708 DEBUG nova.storage.rbd_utils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] rbd image 9f390c61-950a-4c26-8733-d43d910f2430_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:52:21 compute-0 nova_compute[247704]: 2026-01-31 07:52:21.281 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430/disk.config 9f390c61-950a-4c26-8733-d43d910f2430_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:52:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:21.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:21 compute-0 ceph-mon[74496]: pgmap v1653: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 22 op/s
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.138 247708 DEBUG oslo_concurrency.processutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430/disk.config 9f390c61-950a-4c26-8733-d43d910f2430_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.857s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.139 247708 INFO nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Deleting local config drive /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430/disk.config because it was imported into RBD.
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.1936] manager: (tapf70ffad1-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Jan 31 07:52:22 compute-0 kernel: tapf70ffad1-6c: entered promiscuous mode
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2041] manager: (tape03f85ae-01): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00223|binding|INFO|Claiming lport f70ffad1-6c67-4e2f-9488-5f51be8ca30f for this chassis.
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00224|binding|INFO|f70ffad1-6c67-4e2f-9488-5f51be8ca30f: Claiming fa:16:3e:0e:06:0e 10.100.0.13
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.212 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2132] manager: (tap4eb52c86-78): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Jan 31 07:52:22 compute-0 kernel: tape03f85ae-01: entered promiscuous mode
Jan 31 07:52:22 compute-0 kernel: tap4eb52c86-78: entered promiscuous mode
Jan 31 07:52:22 compute-0 systemd-udevd[293050]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:52:22 compute-0 systemd-udevd[293049]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:52:22 compute-0 systemd-udevd[293051]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2263] manager: (tap3c7e8d6d-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Jan 31 07:52:22 compute-0 systemd-udevd[293059]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2329] device (tape03f85ae-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2337] device (tape03f85ae-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2417] manager: (tapefbaae5d-81): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2433] device (tapf70ffad1-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2442] device (tap4eb52c86-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:52:22 compute-0 systemd-udevd[293064]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2447] device (tapf70ffad1-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2450] device (tap4eb52c86-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00225|if_status|INFO|Not updating pb chassis for 4eb52c86-78dd-484b-8a6f-30698b76d281 now as sb is readonly
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.248 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2504] device (tap3c7e8d6d-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:52:22 compute-0 kernel: tap3c7e8d6d-d8: entered promiscuous mode
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2515] device (tap3c7e8d6d-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2568] manager: (tapb30ffaa7-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2578] device (tapefbaae5d-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:52:22 compute-0 kernel: tapefbaae5d-81: entered promiscuous mode
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2590] device (tapefbaae5d-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.261 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 kernel: tapb30ffaa7-e7: entered promiscuous mode
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.265 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:06:0e 10.100.0.13'], port_security=['fa:16:3e:0e:06:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3752940-fba8-4389-b589-ddf451dddc0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b43212d-4075-45e4-8769-aa7227c99173, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=f70ffad1-6c67-4e2f-9488-5f51be8ca30f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.266 160028 INFO neutron.agent.ovn.metadata.agent [-] Port f70ffad1-6c67-4e2f-9488-5f51be8ca30f in datapath a3752940-fba8-4389-b589-ddf451dddc0e bound to our chassis
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2683] device (tapb30ffaa7-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.267 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2701] manager: (tap4b53f464-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2706] device (tapb30ffaa7-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00226|binding|INFO|Claiming lport efbaae5d-81ff-4cc9-8c11-21b46928ff4e for this chassis.
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00227|binding|INFO|efbaae5d-81ff-4cc9-8c11-21b46928ff4e: Claiming fa:16:3e:2a:f2:bf 10.1.1.8
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00228|binding|INFO|Claiming lport b30ffaa7-e766-4963-9625-a6ed3c381b9e for this chassis.
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00229|binding|INFO|b30ffaa7-e766-4963-9625-a6ed3c381b9e: Claiming fa:16:3e:d4:67:df 10.2.2.100
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00230|binding|INFO|Claiming lport 4eb52c86-78dd-484b-8a6f-30698b76d281 for this chassis.
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00231|binding|INFO|4eb52c86-78dd-484b-8a6f-30698b76d281: Claiming fa:16:3e:a9:1d:84 10.1.1.171
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00232|binding|INFO|Claiming lport e03f85ae-0195-4e14-9903-e7b06167a724 for this chassis.
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00233|binding|INFO|e03f85ae-0195-4e14-9903-e7b06167a724: Claiming fa:16:3e:1e:8e:bc 10.1.1.58
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00234|binding|INFO|Claiming lport 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd for this chassis.
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00235|binding|INFO|3c7e8d6d-d8dc-4023-8967-e09698c9d8bd: Claiming fa:16:3e:91:6d:e4 10.1.1.123
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.271 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3752940-fba8-4389-b589-ddf451dddc0e
Jan 31 07:52:22 compute-0 kernel: tap4b53f464-7f: entered promiscuous mode
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.274 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00236|binding|INFO|Setting lport f70ffad1-6c67-4e2f-9488-5f51be8ca30f ovn-installed in OVS
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.278 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2813] device (tap4b53f464-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.2829] device (tap4b53f464-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.284 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4f57373d-d2cb-43f7-8fa9-adfd6fda3cc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.286 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3752940-f1 in ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.288 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3752940-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.289 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[295f3860-0906-4a71-a0a6-1a30f9fd4890]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.290 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[09704bff-047f-4b7b-95e3-854053a9efc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.307 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 systemd-machined[214448]: New machine qemu-30-instance-00000048.
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.307 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[75ebb845-3535-4a15-85a4-0941205c64db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.313 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00237|binding|INFO|Setting lport 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd ovn-installed in OVS
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00238|binding|INFO|Setting lport 4eb52c86-78dd-484b-8a6f-30698b76d281 ovn-installed in OVS
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00239|binding|INFO|Setting lport e03f85ae-0195-4e14-9903-e7b06167a724 ovn-installed in OVS
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00240|binding|INFO|Setting lport b30ffaa7-e766-4963-9625-a6ed3c381b9e ovn-installed in OVS
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00241|binding|INFO|Setting lport efbaae5d-81ff-4cc9-8c11-21b46928ff4e ovn-installed in OVS
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.317 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000048.
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.325 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[027bbec4-6579-43ee-84df-d51b4b4173b5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.353 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f316d937-e5b3-4a09-963f-b2bdfed75ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.3614] manager: (tapa3752940-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.361 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e6f7fb-dba9-4822-ac12-3fb51e507b4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.384 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[afb40709-cb99-4747-bd61-21835291d526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.388 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa9b807-c139-4447-8024-4db54f8fdfc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.4114] device (tapa3752940-f0): carrier: link connected
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00242|binding|INFO|Claiming lport 4b53f464-7fe8-4fdb-9628-1af032b23899 for this chassis.
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00243|binding|INFO|4b53f464-7fe8-4fdb-9628-1af032b23899: Claiming fa:16:3e:57:74:ed 10.2.2.200
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.416 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:8e:bc 10.1.1.58'], port_security=['fa:16:3e:1e:8e:bc 10.1.1.58'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-896460109', 'neutron:cidrs': '10.1.1.58/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-896460109', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e626db8-81a7-41ec-b76c-dee792980f20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a058b6c-41b8-40e0-ac85-467e6c702f15, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e03f85ae-0195-4e14-9903-e7b06167a724) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00244|binding|INFO|Setting lport 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd up in Southbound
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00245|binding|INFO|Setting lport f70ffad1-6c67-4e2f-9488-5f51be8ca30f up in Southbound
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00246|binding|INFO|Setting lport 4eb52c86-78dd-484b-8a6f-30698b76d281 up in Southbound
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00247|binding|INFO|Setting lport e03f85ae-0195-4e14-9903-e7b06167a724 up in Southbound
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00248|binding|INFO|Setting lport b30ffaa7-e766-4963-9625-a6ed3c381b9e up in Southbound
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00249|binding|INFO|Setting lport efbaae5d-81ff-4cc9-8c11-21b46928ff4e up in Southbound
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.418 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:f2:bf 10.1.1.8'], port_security=['fa:16:3e:2a:f2:bf 10.1.1.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.8/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a058b6c-41b8-40e0-ac85-467e6c702f15, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=efbaae5d-81ff-4cc9-8c11-21b46928ff4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00250|binding|INFO|Setting lport 4b53f464-7fe8-4fdb-9628-1af032b23899 ovn-installed in OVS
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.419 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:1d:84 10.1.1.171'], port_security=['fa:16:3e:a9:1d:84 10.1.1.171'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1177458892', 'neutron:cidrs': '10.1.1.171/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1177458892', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e626db8-81a7-41ec-b76c-dee792980f20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a058b6c-41b8-40e0-ac85-467e6c702f15, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4eb52c86-78dd-484b-8a6f-30698b76d281) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.419 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.420 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:6d:e4 10.1.1.123'], port_security=['fa:16:3e:91:6d:e4 10.1.1.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.123/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a058b6c-41b8-40e0-ac85-467e6c702f15, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.421 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:67:df 10.2.2.100'], port_security=['fa:16:3e:d4:67:df 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53c7f058-b6f8-4179-8f8a-535c8c58d9f3, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=b30ffaa7-e766-4963-9625-a6ed3c381b9e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.419 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[532a324a-8fa6-424b-9e68-9b607ea5adeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.436 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[568827c4-3437-48eb-9960-dae8cee18057]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3752940-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:d1:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618502, 'reachable_time': 22705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293108, 'error': None, 'target': 'ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.447 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[39d27b5e-aea7-4e85-b6fb-713a43dac12f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecc:d188'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618502, 'tstamp': 618502}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293109, 'error': None, 'target': 'ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.465 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fafa5d1d-ae37-4589-b2d3-f8af7dc65b99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3752940-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cc:d1:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618502, 'reachable_time': 22705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293110, 'error': None, 'target': 'ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.500 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc621d9-022e-4233-8ac8-0bd770409cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.570 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:74:ed 10.2.2.200'], port_security=['fa:16:3e:57:74:ed 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53c7f058-b6f8-4179-8f8a-535c8c58d9f3, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4b53f464-7fe8-4fdb-9628-1af032b23899) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00251|binding|INFO|Setting lport 4b53f464-7fe8-4fdb-9628-1af032b23899 up in Southbound
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.579 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[de8c9bb7-b4e9-4a42-ab4b-0f4cb954728b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.581 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3752940-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.581 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.582 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3752940-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:22 compute-0 NetworkManager[49108]: <info>  [1769845942.5841] manager: (tapa3752940-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Jan 31 07:52:22 compute-0 kernel: tapa3752940-f0: entered promiscuous mode
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.584 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.587 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3752940-f0, col_values=(('external_ids', {'iface-id': '4dd901b3-b32e-4193-b4e6-b825ed5a208e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:22 compute-0 ovn_controller[149457]: 2026-01-31T07:52:22Z|00252|binding|INFO|Releasing lport 4dd901b3-b32e-4193-b4e6-b825ed5a208e from this chassis (sb_readonly=1)
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.592 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.597 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 sudo[293115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.600 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3752940-fba8-4389-b589-ddf451dddc0e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3752940-fba8-4389-b589-ddf451dddc0e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:52:22 compute-0 sudo[293115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.601 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e975886e-693d-482b-b987-dc4e5ee1d73e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.604 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-a3752940-fba8-4389-b589-ddf451dddc0e
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:52:22 compute-0 sudo[293115]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/a3752940-fba8-4389-b589-ddf451dddc0e.pid.haproxy
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID a3752940-fba8-4389-b589-ddf451dddc0e
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:52:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:22.605 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e', 'env', 'PROCESS_TAG=haproxy-a3752940-fba8-4389-b589-ddf451dddc0e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3752940-fba8-4389-b589-ddf451dddc0e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:52:22 compute-0 sudo[293150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:52:22 compute-0 sudo[293150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:22 compute-0 sudo[293150]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:22 compute-0 sudo[293202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:22 compute-0 sudo[293202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:22 compute-0 sudo[293202]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:22 compute-0 sudo[293232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:52:22 compute-0 sudo[293232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:22 compute-0 nova_compute[247704]: 2026-01-31 07:52:22.809 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:22.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.006 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845943.006286, 9f390c61-950a-4c26-8733-d43d910f2430 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.007 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] VM Started (Lifecycle Event)
Jan 31 07:52:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 14 op/s
Jan 31 07:52:23 compute-0 podman[293315]: 2026-01-31 07:52:22.929493121 +0000 UTC m=+0.024014809 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:52:23 compute-0 podman[293315]: 2026-01-31 07:52:23.157363772 +0000 UTC m=+0.251885430 container create 94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 07:52:23 compute-0 sudo[293232]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:23 compute-0 systemd[1]: Started libpod-conmon-94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7.scope.
Jan 31 07:52:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba87f6b2da21d1d52c2a96199d3bbb668eb3c1845b9729b1282fdcf53ab76011/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:23 compute-0 podman[293315]: 2026-01-31 07:52:23.238593492 +0000 UTC m=+0.333115150 container init 94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 07:52:23 compute-0 podman[293315]: 2026-01-31 07:52:23.243397689 +0000 UTC m=+0.337919347 container start 94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 07:52:23 compute-0 neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e[293372]: [NOTICE]   (293376) : New worker (293378) forked
Jan 31 07:52:23 compute-0 neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e[293372]: [NOTICE]   (293376) : Loading success.
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.266 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.271 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845943.010444, 9f390c61-950a-4c26-8733-d43d910f2430 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.271 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] VM Paused (Lifecycle Event)
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.278 247708 DEBUG nova.compute.manager [req-19a26965-5854-4100-9698-d37dbee08e65 req-8bd10a10-96c6-4374-a2d2-460456906090 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.279 247708 DEBUG oslo_concurrency.lockutils [req-19a26965-5854-4100-9698-d37dbee08e65 req-8bd10a10-96c6-4374-a2d2-460456906090 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.279 247708 DEBUG oslo_concurrency.lockutils [req-19a26965-5854-4100-9698-d37dbee08e65 req-8bd10a10-96c6-4374-a2d2-460456906090 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.279 247708 DEBUG oslo_concurrency.lockutils [req-19a26965-5854-4100-9698-d37dbee08e65 req-8bd10a10-96c6-4374-a2d2-460456906090 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.279 247708 DEBUG nova.compute.manager [req-19a26965-5854-4100-9698-d37dbee08e65 req-8bd10a10-96c6-4374-a2d2-460456906090 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Processing event network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.280 247708 DEBUG nova.compute.manager [req-ef2d1c17-d72f-4b6f-ada7-55603a7c4f6c req-fbc8a071-a489-4205-98e5-09408f448774 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.280 247708 DEBUG oslo_concurrency.lockutils [req-ef2d1c17-d72f-4b6f-ada7-55603a7c4f6c req-fbc8a071-a489-4205-98e5-09408f448774 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.280 247708 DEBUG oslo_concurrency.lockutils [req-ef2d1c17-d72f-4b6f-ada7-55603a7c4f6c req-fbc8a071-a489-4205-98e5-09408f448774 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.281 247708 DEBUG oslo_concurrency.lockutils [req-ef2d1c17-d72f-4b6f-ada7-55603a7c4f6c req-fbc8a071-a489-4205-98e5-09408f448774 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.281 247708 DEBUG nova.compute.manager [req-ef2d1c17-d72f-4b6f-ada7-55603a7c4f6c req-fbc8a071-a489-4205-98e5-09408f448774 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Processing event network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.294 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e03f85ae-0195-4e14-9903-e7b06167a724 in datapath 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 unbound from our chassis
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.296 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.304 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[527fc60d-78d2-4605-97a5-a9222cc2dc05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.304 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0e9c0baa-f1 in ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.307 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0e9c0baa-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.307 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0bb55d-58dc-4e2f-b430-17f67ef64ab4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.308 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e602a5-dd9d-4630-a1fd-e89d91635f68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.318 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[ba13660d-3273-4dc2-8036-4b98ac1d655b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.329 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9f74fa8a-b297-499f-898e-2c663b8f03e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.359 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9871f631-dd46-44c4-8a1b-dec12fdcd02e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.365 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4290d332-cab2-405d-9c79-1b940f10f625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 NetworkManager[49108]: <info>  [1769845943.3675] manager: (tap0e9c0baa-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/128)
Jan 31 07:52:23 compute-0 systemd-udevd[293105]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.387 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2bc04f-5f61-490e-a66e-cb52483fb918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.390 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[124af788-6f0e-4160-9f5d-158263c8d9af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 NetworkManager[49108]: <info>  [1769845943.4047] device (tap0e9c0baa-f0): carrier: link connected
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.409 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[985ffee9-97a3-4bf8-ba2d-727db34cd1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.426 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1d04d4f7-206e-44a7-af6b-617d5fc906df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e9c0baa-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:9e:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618602, 'reachable_time': 43911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293400, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.441 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2de6d896-2a98-44e8-94d8-f3a8d267e2e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:9e7d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618602, 'tstamp': 618602}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293401, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.456 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4713b8-91f9-4dcc-a1b7-0f82d2140075]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e9c0baa-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:9e:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618602, 'reachable_time': 43911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293402, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.462 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.467 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.486 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d10af92b-a253-404a-9b65-e0719ac8efea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.514 247708 DEBUG nova.compute.manager [req-ae959c44-1440-4f1d-97b5-5ca9aa5c8b50 req-28a5ccf3-e611-43b2-9cee-647f0342db7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.515 247708 DEBUG oslo_concurrency.lockutils [req-ae959c44-1440-4f1d-97b5-5ca9aa5c8b50 req-28a5ccf3-e611-43b2-9cee-647f0342db7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.515 247708 DEBUG oslo_concurrency.lockutils [req-ae959c44-1440-4f1d-97b5-5ca9aa5c8b50 req-28a5ccf3-e611-43b2-9cee-647f0342db7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.515 247708 DEBUG oslo_concurrency.lockutils [req-ae959c44-1440-4f1d-97b5-5ca9aa5c8b50 req-28a5ccf3-e611-43b2-9cee-647f0342db7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.516 247708 DEBUG nova.compute.manager [req-ae959c44-1440-4f1d-97b5-5ca9aa5c8b50 req-28a5ccf3-e611-43b2-9cee-647f0342db7a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Processing event network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.535 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc60a70-1bf0-4467-a407-3a93308392b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.537 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e9c0baa-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.537 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.537 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e9c0baa-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:23 compute-0 NetworkManager[49108]: <info>  [1769845943.5404] manager: (tap0e9c0baa-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 31 07:52:23 compute-0 kernel: tap0e9c0baa-f0: entered promiscuous mode
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.540 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.544 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0e9c0baa-f0, col_values=(('external_ids', {'iface-id': 'c4b14b14-48a7-4288-bcf4-855cd8737a4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:23 compute-0 ovn_controller[149457]: 2026-01-31T07:52:23Z|00253|binding|INFO|Releasing lport c4b14b14-48a7-4288-bcf4-855cd8737a4e from this chassis (sb_readonly=0)
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.545 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.546 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.547 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7b393105-2c86-4deb-97f7-c6f20973bd06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.548 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2.pid.haproxy
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:52:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:23.549 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'env', 'PROCESS_TAG=haproxy-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.566 247708 DEBUG nova.compute.manager [req-b8068bd7-012f-41d9-91f1-75c0139651b0 req-ffa0f35e-8a0e-4e3c-a235-412148307979 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.566 247708 DEBUG oslo_concurrency.lockutils [req-b8068bd7-012f-41d9-91f1-75c0139651b0 req-ffa0f35e-8a0e-4e3c-a235-412148307979 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.567 247708 DEBUG oslo_concurrency.lockutils [req-b8068bd7-012f-41d9-91f1-75c0139651b0 req-ffa0f35e-8a0e-4e3c-a235-412148307979 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.568 247708 DEBUG oslo_concurrency.lockutils [req-b8068bd7-012f-41d9-91f1-75c0139651b0 req-ffa0f35e-8a0e-4e3c-a235-412148307979 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.569 247708 DEBUG nova.compute.manager [req-b8068bd7-012f-41d9-91f1-75c0139651b0 req-ffa0f35e-8a0e-4e3c-a235-412148307979 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Processing event network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:52:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:23.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:23 compute-0 nova_compute[247704]: 2026-01-31 07:52:23.578 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:52:23 compute-0 podman[293434]: 2026-01-31 07:52:23.886534761 +0000 UTC m=+0.030689332 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:52:24 compute-0 podman[293434]: 2026-01-31 07:52:24.015116641 +0000 UTC m=+0.159271222 container create b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:52:24 compute-0 systemd[1]: Started libpod-conmon-b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a.scope.
Jan 31 07:52:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca3d779ec397198f52295c85ce0ccce7c3ce13977e64e8866fec2fb104db368/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:52:24 compute-0 podman[293434]: 2026-01-31 07:52:24.158308899 +0000 UTC m=+0.302463480 container init b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:52:24 compute-0 podman[293434]: 2026-01-31 07:52:24.16328923 +0000 UTC m=+0.307443781 container start b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 07:52:24 compute-0 neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2[293450]: [NOTICE]   (293454) : New worker (293456) forked
Jan 31 07:52:24 compute-0 neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2[293450]: [NOTICE]   (293454) : Loading success.
Jan 31 07:52:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:52:24 compute-0 ceph-mon[74496]: pgmap v1654: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 14 op/s
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.423 160028 INFO neutron.agent.ovn.metadata.agent [-] Port efbaae5d-81ff-4cc9-8c11-21b46928ff4e in datapath 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 unbound from our chassis
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.426 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.435 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[09f9cfe1-d00c-4994-a2dd-169ccc39f08b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.459 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[37ae5fc6-2add-45d6-a7b4-693cb6223f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.462 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[086efc96-1496-4dbb-ac04-ec398f6df116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.482 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2f58f353-e8a2-45a9-880f-0aeee99c00d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.502 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8280d370-758a-4bfb-a9ab-a77d497cf82f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e9c0baa-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:9e:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618602, 'reachable_time': 43911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293470, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.524 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb7a557-c2cd-42ed-bf65-a3253d038395]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap0e9c0baa-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618612, 'tstamp': 618612}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293471, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0e9c0baa-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618615, 'tstamp': 618615}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293471, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.527 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e9c0baa-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 nova_compute[247704]: 2026-01-31 07:52:24.530 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:24 compute-0 nova_compute[247704]: 2026-01-31 07:52:24.531 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.532 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e9c0baa-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.532 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.533 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0e9c0baa-f0, col_values=(('external_ids', {'iface-id': 'c4b14b14-48a7-4288-bcf4-855cd8737a4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.533 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.534 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb52c86-78dd-484b-8a6f-30698b76d281 in datapath 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 unbound from our chassis
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.536 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.549 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0db458-df31-4705-9d9a-bcd3b3a1316c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.569 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b962e9-c502-4f1c-8817-91a14e546ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.573 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1315c561-b0ef-4a94-ac3b-cfb5241c88ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.592 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[925a888a-76c2-4b04-a6f9-03411e519a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.609 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[358909f2-3941-4334-bd40-22d1673a9001]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e9c0baa-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:9e:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618602, 'reachable_time': 43911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293477, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.624 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e537f14c-6861-421e-a413-16503cd13cd8]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap0e9c0baa-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618612, 'tstamp': 618612}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293478, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0e9c0baa-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618615, 'tstamp': 618615}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293478, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.625 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e9c0baa-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 nova_compute[247704]: 2026-01-31 07:52:24.627 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.628 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e9c0baa-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.628 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.629 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0e9c0baa-f0, col_values=(('external_ids', {'iface-id': 'c4b14b14-48a7-4288-bcf4-855cd8737a4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.629 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.630 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd in datapath 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 unbound from our chassis
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.632 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.644 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[52151096-d4d1-49db-9a9e-104d4fe17b53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.670 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[049d05ba-0271-42ce-be64-df94f7b09d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.674 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[65369465-9c7d-4ac1-a529-8bd9177954a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.692 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[039ea816-3276-4370-bea5-0b7634a0b338]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.705 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a0801000-6894-43c7-816f-7326d179d1c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0e9c0baa-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:9e:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 9, 'rx_bytes': 266, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 9, 'rx_bytes': 266, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618602, 'reachable_time': 43911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293484, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.716 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[532db628-c49f-4dd8-90c8-89a4c431da70]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap0e9c0baa-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618612, 'tstamp': 618612}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293485, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0e9c0baa-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618615, 'tstamp': 618615}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293485, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.718 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e9c0baa-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 nova_compute[247704]: 2026-01-31 07:52:24.720 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:24 compute-0 nova_compute[247704]: 2026-01-31 07:52:24.721 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.722 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e9c0baa-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.723 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.723 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0e9c0baa-f0, col_values=(('external_ids', {'iface-id': 'c4b14b14-48a7-4288-bcf4-855cd8737a4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.724 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.725 160028 INFO neutron.agent.ovn.metadata.agent [-] Port b30ffaa7-e766-4963-9625-a6ed3c381b9e in datapath 8e83be71-fd66-4d0d-84cf-6e2da4831dad unbound from our chassis
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.728 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e83be71-fd66-4d0d-84cf-6e2da4831dad
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.738 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[95f7fab9-9f61-476b-a746-5e5190b8ffa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.738 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e83be71-f1 in ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.740 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e83be71-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.740 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d69abcce-b77e-4b99-8116-08096f8a2679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.741 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ca359e-72f3-43be-b304-58a8db2fb106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.749 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[2d96db87-2b9b-444f-aaa8-167b1e28d6f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.758 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[02a21cbe-39ee-4249-ad51-703190eabfb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.779 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[13ad5c9b-10b0-496a-bce5-615fdf1c7abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.784 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[65a1b244-f1fb-4bbe-a0cf-5d869fd46042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 NetworkManager[49108]: <info>  [1769845944.7850] manager: (tap8e83be71-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.809 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[689ad4fe-7c16-41fe-9359-29a55f22f9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.813 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[81a8a7b3-ed76-4743-b3ab-6a45f888b181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 NetworkManager[49108]: <info>  [1769845944.8332] device (tap8e83be71-f0): carrier: link connected
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.838 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9ddc79-f78d-4355-ac77-3f0d45365caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.854 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[402009e4-6f7b-4e82-8339-1f23afdc3794]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e83be71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:4a:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618745, 'reachable_time': 25660, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293496, 'error': None, 'target': 'ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.867 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e664d7ae-11a4-4223-8037-e5d41c57eecc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:4a61'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618745, 'tstamp': 618745}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293497, 'error': None, 'target': 'ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.882 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3d740f78-405c-48d4-9b80-b979afc4c225]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e83be71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:4a:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618745, 'reachable_time': 25660, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293498, 'error': None, 'target': 'ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.904 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ffb60c-cfa6-4f10-b96e-83141558746a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:24.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.956 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab17054-e694-4e2d-9662-f13c77f5952f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.958 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e83be71-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.958 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.958 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e83be71-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 nova_compute[247704]: 2026-01-31 07:52:24.960 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:24 compute-0 kernel: tap8e83be71-f0: entered promiscuous mode
Jan 31 07:52:24 compute-0 NetworkManager[49108]: <info>  [1769845944.9623] manager: (tap8e83be71-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.965 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e83be71-f0, col_values=(('external_ids', {'iface-id': 'a78f7dfd-bb43-4fb8-8fa3-1dac8ecd37b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:24 compute-0 nova_compute[247704]: 2026-01-31 07:52:24.967 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:24 compute-0 ovn_controller[149457]: 2026-01-31T07:52:24Z|00254|binding|INFO|Releasing lport a78f7dfd-bb43-4fb8-8fa3-1dac8ecd37b8 from this chassis (sb_readonly=0)
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.970 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e83be71-fd66-4d0d-84cf-6e2da4831dad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e83be71-fd66-4d0d-84cf-6e2da4831dad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.971 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[58b0528d-1c7f-46f6-9b50-8089db715380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.972 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-8e83be71-fd66-4d0d-84cf-6e2da4831dad
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/8e83be71-fd66-4d0d-84cf-6e2da4831dad.pid.haproxy
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 8e83be71-fd66-4d0d-84cf-6e2da4831dad
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:52:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:24.973 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'env', 'PROCESS_TAG=haproxy-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e83be71-fd66-4d0d-84cf-6e2da4831dad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:52:24 compute-0 nova_compute[247704]: 2026-01-31 07:52:24.976 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.001 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 20 op/s
Jan 31 07:52:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:52:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:52:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:52:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:52:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:52:25 compute-0 podman[293531]: 2026-01-31 07:52:25.383554458 +0000 UTC m=+0.084937642 container create bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 07:52:25 compute-0 podman[293531]: 2026-01-31 07:52:25.324432669 +0000 UTC m=+0.025815873 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.423 247708 DEBUG nova.compute.manager [req-033e65d7-5df6-4783-8a62-c735f56d3121 req-ad435521-af02-43d0-ac4d-463f2ee3765c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.424 247708 DEBUG oslo_concurrency.lockutils [req-033e65d7-5df6-4783-8a62-c735f56d3121 req-ad435521-af02-43d0-ac4d-463f2ee3765c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.425 247708 DEBUG oslo_concurrency.lockutils [req-033e65d7-5df6-4783-8a62-c735f56d3121 req-ad435521-af02-43d0-ac4d-463f2ee3765c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.425 247708 DEBUG oslo_concurrency.lockutils [req-033e65d7-5df6-4783-8a62-c735f56d3121 req-ad435521-af02-43d0-ac4d-463f2ee3765c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.425 247708 DEBUG nova.compute.manager [req-033e65d7-5df6-4783-8a62-c735f56d3121 req-ad435521-af02-43d0-ac4d-463f2ee3765c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No event matching network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 in dict_keys([('network-vif-plugged', 'e03f85ae-0195-4e14-9903-e7b06167a724'), ('network-vif-plugged', 'efbaae5d-81ff-4cc9-8c11-21b46928ff4e'), ('network-vif-plugged', 'b30ffaa7-e766-4963-9625-a6ed3c381b9e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.426 247708 WARNING nova.compute.manager [req-033e65d7-5df6-4783-8a62-c735f56d3121 req-ad435521-af02-43d0-ac4d-463f2ee3765c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 for instance with vm_state building and task_state spawning.
Jan 31 07:52:25 compute-0 systemd[1]: Started libpod-conmon-bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af.scope.
Jan 31 07:52:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c68d507f7ad38bd9dfdbc6abde5bb8bb7682992430579072a32e414e49bdb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.467 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.467 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.468 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.468 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.468 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No event matching network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 in dict_keys([('network-vif-plugged', 'e03f85ae-0195-4e14-9903-e7b06167a724'), ('network-vif-plugged', 'efbaae5d-81ff-4cc9-8c11-21b46928ff4e'), ('network-vif-plugged', 'b30ffaa7-e766-4963-9625-a6ed3c381b9e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.469 247708 WARNING nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 for instance with vm_state building and task_state spawning.
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.469 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.469 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.469 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.470 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.470 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Processing event network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.470 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.470 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.471 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.471 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.471 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No event matching network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e in dict_keys([('network-vif-plugged', 'e03f85ae-0195-4e14-9903-e7b06167a724'), ('network-vif-plugged', 'efbaae5d-81ff-4cc9-8c11-21b46928ff4e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.471 247708 WARNING nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e for instance with vm_state building and task_state spawning.
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.472 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.472 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.472 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.472 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.473 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Processing event network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.473 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.473 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.473 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:25 compute-0 podman[293531]: 2026-01-31 07:52:25.474181867 +0000 UTC m=+0.175565091 container init bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.474 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.474 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No event matching network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e in dict_keys([('network-vif-plugged', 'e03f85ae-0195-4e14-9903-e7b06167a724')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.474 247708 WARNING nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e for instance with vm_state building and task_state spawning.
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.474 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.475 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.475 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.475 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.475 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Processing event network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.476 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.476 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.476 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.476 247708 DEBUG oslo_concurrency.lockutils [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.477 247708 DEBUG nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.477 247708 WARNING nova.compute.manager [req-33ece7ce-1541-4dc7-af33-ec9c6f425fe9 req-87ac59df-d1fe-48a4-9e98-d8a5797afbf0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 for instance with vm_state building and task_state spawning.
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.478 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:52:25 compute-0 podman[293531]: 2026-01-31 07:52:25.479473416 +0000 UTC m=+0.180856600 container start bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.483 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769845945.4833217, 9f390c61-950a-4c26-8733-d43d910f2430 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.483 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] VM Resumed (Lifecycle Event)
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.485 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.489 247708 INFO nova.virt.libvirt.driver [-] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance spawned successfully.
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.490 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:52:25 compute-0 neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad[293548]: [NOTICE]   (293552) : New worker (293554) forked
Jan 31 07:52:25 compute-0 neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad[293548]: [NOTICE]   (293552) : Loading success.
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.529 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.533 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.544 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.545 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.545 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.546 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.546 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.546 247708 DEBUG nova.virt.libvirt.driver [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:52:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:25.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.616 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4b53f464-7fe8-4fdb-9628-1af032b23899 in datapath 8e83be71-fd66-4d0d-84cf-6e2da4831dad unbound from our chassis
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.618 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e83be71-fd66-4d0d-84cf-6e2da4831dad
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.627 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2d9790-07f5-44de-9165-22207fa45087]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.648 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[43edc5e0-bfa6-4f3c-8ad6-8ac00cd0bfdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.651 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8f114a92-5907-4b94-85b8-9258e9a217db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c0fb8998-153a-4d34-8cc0-c0df5ea131e1 does not exist
Jan 31 07:52:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 699be511-7117-4c5e-8c61-942fef2fb264 does not exist
Jan 31 07:52:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0aa98173-1daa-42ca-a012-8ec1e9ba61fc does not exist
Jan 31 07:52:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:52:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:52:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:52:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:52:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:52:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.686 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf73bac-209c-4cb3-a241-052e9a50f9ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.704 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6142ae7c-07af-4e53-b731-13df9262f813]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e83be71-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:4a:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618745, 'reachable_time': 25660, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293574, 'error': None, 'target': 'ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.718 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e6af1f70-b4e4-45ec-a7be-b839ae5ece8e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8e83be71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618754, 'tstamp': 618754}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293587, 'error': None, 'target': 'ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap8e83be71-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618757, 'tstamp': 618757}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293587, 'error': None, 'target': 'ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.721 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e83be71-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.723 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.725 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e83be71-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.725 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.725 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e83be71-f0, col_values=(('external_ids', {'iface-id': 'a78f7dfd-bb43-4fb8-8fa3-1dac8ecd37b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:25.726 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:52:25 compute-0 sudo[293568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:25 compute-0 sudo[293568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:25 compute-0 sudo[293568]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:25 compute-0 sudo[293595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:52:25 compute-0 sudo[293595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:25 compute-0 sudo[293595]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:25 compute-0 sudo[293620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:52:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:52:25 compute-0 sudo[293620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:25 compute-0 sudo[293620]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.914 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:52:25 compute-0 sudo[293645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:52:25 compute-0 sudo[293645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.985 247708 INFO nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Took 47.49 seconds to spawn the instance on the hypervisor.
Jan 31 07:52:25 compute-0 nova_compute[247704]: 2026-01-31 07:52:25.990 247708 DEBUG nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.027 247708 DEBUG nova.compute.manager [req-fcaf1476-187b-4c78-8a59-6ee0e88ba9e8 req-94ba354a-701a-4a18-8a20-5a8e16ee148e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.028 247708 DEBUG oslo_concurrency.lockutils [req-fcaf1476-187b-4c78-8a59-6ee0e88ba9e8 req-94ba354a-701a-4a18-8a20-5a8e16ee148e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.029 247708 DEBUG oslo_concurrency.lockutils [req-fcaf1476-187b-4c78-8a59-6ee0e88ba9e8 req-94ba354a-701a-4a18-8a20-5a8e16ee148e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.030 247708 DEBUG oslo_concurrency.lockutils [req-fcaf1476-187b-4c78-8a59-6ee0e88ba9e8 req-94ba354a-701a-4a18-8a20-5a8e16ee148e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.031 247708 DEBUG nova.compute.manager [req-fcaf1476-187b-4c78-8a59-6ee0e88ba9e8 req-94ba354a-701a-4a18-8a20-5a8e16ee148e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.031 247708 WARNING nova.compute.manager [req-fcaf1476-187b-4c78-8a59-6ee0e88ba9e8 req-94ba354a-701a-4a18-8a20-5a8e16ee148e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f for instance with vm_state building and task_state spawning.
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.037 247708 DEBUG nova.compute.manager [req-8343317a-3a84-47f1-b6f0-fd280fecc230 req-9ea03038-88d8-4b67-9d9c-ce18d681652f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.037 247708 DEBUG oslo_concurrency.lockutils [req-8343317a-3a84-47f1-b6f0-fd280fecc230 req-9ea03038-88d8-4b67-9d9c-ce18d681652f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.038 247708 DEBUG oslo_concurrency.lockutils [req-8343317a-3a84-47f1-b6f0-fd280fecc230 req-9ea03038-88d8-4b67-9d9c-ce18d681652f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.039 247708 DEBUG oslo_concurrency.lockutils [req-8343317a-3a84-47f1-b6f0-fd280fecc230 req-9ea03038-88d8-4b67-9d9c-ce18d681652f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.039 247708 DEBUG nova.compute.manager [req-8343317a-3a84-47f1-b6f0-fd280fecc230 req-9ea03038-88d8-4b67-9d9c-ce18d681652f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.040 247708 WARNING nova.compute.manager [req-8343317a-3a84-47f1-b6f0-fd280fecc230 req-9ea03038-88d8-4b67-9d9c-ce18d681652f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd for instance with vm_state building and task_state spawning.
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.123 247708 INFO nova.compute.manager [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Took 63.32 seconds to build instance.
Jan 31 07:52:26 compute-0 nova_compute[247704]: 2026-01-31 07:52:26.147 247708 DEBUG oslo_concurrency.lockutils [None req-5a27bcaa-23e7-4658-ab8d-d7acaa922d6e eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 63.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:26 compute-0 podman[293708]: 2026-01-31 07:52:26.239826059 +0000 UTC m=+0.069640417 container create 21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:52:26 compute-0 podman[293708]: 2026-01-31 07:52:26.190241555 +0000 UTC m=+0.020055933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:52:26 compute-0 systemd[1]: Started libpod-conmon-21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52.scope.
Jan 31 07:52:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:26 compute-0 podman[293708]: 2026-01-31 07:52:26.507113506 +0000 UTC m=+0.336927874 container init 21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:52:26 compute-0 podman[293708]: 2026-01-31 07:52:26.514438405 +0000 UTC m=+0.344252773 container start 21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:52:26 compute-0 gracious_chaum[293724]: 167 167
Jan 31 07:52:26 compute-0 systemd[1]: libpod-21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52.scope: Deactivated successfully.
Jan 31 07:52:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:26 compute-0 podman[293708]: 2026-01-31 07:52:26.692374604 +0000 UTC m=+0.522188962 container attach 21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:52:26 compute-0 podman[293708]: 2026-01-31 07:52:26.692946608 +0000 UTC m=+0.522760946 container died 21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:52:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:52:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:52:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 32 op/s
Jan 31 07:52:27 compute-0 ceph-mon[74496]: pgmap v1655: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 20 op/s
Jan 31 07:52:27 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:27 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:52:27 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:52:27 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:52:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-37dcf0dc827d3f4e619513b96bdfd04a412c3e4236f7e83c0026e3a0e6859acf-merged.mount: Deactivated successfully.
Jan 31 07:52:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:27.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:27 compute-0 nova_compute[247704]: 2026-01-31 07:52:27.812 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:28 compute-0 podman[293708]: 2026-01-31 07:52:28.145950617 +0000 UTC m=+1.975764945 container remove 21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:52:28 compute-0 systemd[1]: libpod-conmon-21bf4dc30e0811a41ac4f722f6a16e2f46937042a4d9073484ae168d28fe0d52.scope: Deactivated successfully.
Jan 31 07:52:28 compute-0 podman[293750]: 2026-01-31 07:52:28.289709277 +0000 UTC m=+0.018390121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:52:28 compute-0 podman[293750]: 2026-01-31 07:52:28.480327986 +0000 UTC m=+0.209008790 container create 7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 07:52:28 compute-0 ceph-mon[74496]: pgmap v1656: 305 pgs: 305 active+clean; 88 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 32 op/s
Jan 31 07:52:28 compute-0 systemd[1]: Started libpod-conmon-7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502.scope.
Jan 31 07:52:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f5c6ed4ea5769681888113e4f4a21cc840dc33647c04edcf718a80c9129e32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f5c6ed4ea5769681888113e4f4a21cc840dc33647c04edcf718a80c9129e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f5c6ed4ea5769681888113e4f4a21cc840dc33647c04edcf718a80c9129e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f5c6ed4ea5769681888113e4f4a21cc840dc33647c04edcf718a80c9129e32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f5c6ed4ea5769681888113e4f4a21cc840dc33647c04edcf718a80c9129e32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:28.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 98 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 729 KiB/s wr, 80 op/s
Jan 31 07:52:29 compute-0 podman[293750]: 2026-01-31 07:52:29.066564244 +0000 UTC m=+0.795245058 container init 7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:52:29 compute-0 podman[293750]: 2026-01-31 07:52:29.071548067 +0000 UTC m=+0.800228871 container start 7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:52:29 compute-0 podman[293750]: 2026-01-31 07:52:29.153809781 +0000 UTC m=+0.882490595 container attach 7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:52:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:29.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:29 compute-0 elegant_keldysh[293767]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:52:29 compute-0 elegant_keldysh[293767]: --> relative data size: 1.0
Jan 31 07:52:29 compute-0 elegant_keldysh[293767]: --> All data devices are unavailable
Jan 31 07:52:29 compute-0 systemd[1]: libpod-7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502.scope: Deactivated successfully.
Jan 31 07:52:29 compute-0 podman[293750]: 2026-01-31 07:52:29.881673489 +0000 UTC m=+1.610354303 container died 7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 07:52:30 compute-0 nova_compute[247704]: 2026-01-31 07:52:30.004 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:30 compute-0 ceph-mon[74496]: pgmap v1657: 305 pgs: 305 active+clean; 98 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 729 KiB/s wr, 80 op/s
Jan 31 07:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-43f5c6ed4ea5769681888113e4f4a21cc840dc33647c04edcf718a80c9129e32-merged.mount: Deactivated successfully.
Jan 31 07:52:30 compute-0 podman[293750]: 2026-01-31 07:52:30.871619585 +0000 UTC m=+2.600300389 container remove 7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_keldysh, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:52:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:30 compute-0 sudo[293645]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:30.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:30 compute-0 systemd[1]: libpod-conmon-7f649db1be9ba1a0a884913a952a3662b93f2829519c6daba4d7fb28bc9ae502.scope: Deactivated successfully.
Jan 31 07:52:30 compute-0 sudo[293794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:30 compute-0 sudo[293794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:30 compute-0 sudo[293794]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 115 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Jan 31 07:52:31 compute-0 sudo[293819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:52:31 compute-0 sudo[293819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:31 compute-0 sudo[293819]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:31 compute-0 sudo[293844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:31 compute-0 sudo[293844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:31 compute-0 sudo[293844]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:31 compute-0 sudo[293869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:52:31 compute-0 sudo[293869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:31.236 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:31.237 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:52:31 compute-0 nova_compute[247704]: 2026-01-31 07:52:31.237 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:31.239 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:31.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:31 compute-0 podman[293935]: 2026-01-31 07:52:31.514638565 +0000 UTC m=+0.027213828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:52:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:31 compute-0 podman[293935]: 2026-01-31 07:52:31.811697411 +0000 UTC m=+0.324272644 container create b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:52:32 compute-0 systemd[1]: Started libpod-conmon-b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7.scope.
Jan 31 07:52:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:32 compute-0 ceph-mon[74496]: pgmap v1658: 305 pgs: 305 active+clean; 115 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Jan 31 07:52:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2436687015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:32 compute-0 podman[293935]: 2026-01-31 07:52:32.307028602 +0000 UTC m=+0.819603825 container init b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brattain, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 07:52:32 compute-0 podman[293935]: 2026-01-31 07:52:32.313663825 +0000 UTC m=+0.826239038 container start b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brattain, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:52:32 compute-0 strange_brattain[293951]: 167 167
Jan 31 07:52:32 compute-0 systemd[1]: libpod-b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7.scope: Deactivated successfully.
Jan 31 07:52:32 compute-0 podman[293935]: 2026-01-31 07:52:32.44860329 +0000 UTC m=+0.961178533 container attach b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:52:32 compute-0 podman[293935]: 2026-01-31 07:52:32.449172414 +0000 UTC m=+0.961747677 container died b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brattain, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:52:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e28af153718ab46c2de3d29e917932f2e6f753261431525c9fbf8f8761697178-merged.mount: Deactivated successfully.
Jan 31 07:52:32 compute-0 nova_compute[247704]: 2026-01-31 07:52:32.814 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:32.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 134 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 31 07:52:33 compute-0 podman[293935]: 2026-01-31 07:52:33.19083222 +0000 UTC m=+1.703407453 container remove b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:52:33 compute-0 systemd[1]: libpod-conmon-b3768c0f99db7c925be1f6f6b76ae2867171c6515a9854061d914196586fbdb7.scope: Deactivated successfully.
Jan 31 07:52:33 compute-0 NetworkManager[49108]: <info>  [1769845953.3636] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Jan 31 07:52:33 compute-0 NetworkManager[49108]: <info>  [1769845953.3655] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Jan 31 07:52:33 compute-0 nova_compute[247704]: 2026-01-31 07:52:33.362 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:33 compute-0 nova_compute[247704]: 2026-01-31 07:52:33.449 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:33 compute-0 ovn_controller[149457]: 2026-01-31T07:52:33Z|00255|binding|INFO|Releasing lport c4b14b14-48a7-4288-bcf4-855cd8737a4e from this chassis (sb_readonly=0)
Jan 31 07:52:33 compute-0 ovn_controller[149457]: 2026-01-31T07:52:33Z|00256|binding|INFO|Releasing lport 4dd901b3-b32e-4193-b4e6-b825ed5a208e from this chassis (sb_readonly=0)
Jan 31 07:52:33 compute-0 ovn_controller[149457]: 2026-01-31T07:52:33Z|00257|binding|INFO|Releasing lport a78f7dfd-bb43-4fb8-8fa3-1dac8ecd37b8 from this chassis (sb_readonly=0)
Jan 31 07:52:33 compute-0 nova_compute[247704]: 2026-01-31 07:52:33.474 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:33 compute-0 podman[293978]: 2026-01-31 07:52:33.424286787 +0000 UTC m=+0.037492009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:52:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:33.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:33 compute-0 podman[293978]: 2026-01-31 07:52:33.61057942 +0000 UTC m=+0.223784582 container create de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:52:33 compute-0 systemd[1]: Started libpod-conmon-de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d.scope.
Jan 31 07:52:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f439c11912fea2cb272bd1e7348125dd33aeca2b5afca06a1a1337e826a7c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f439c11912fea2cb272bd1e7348125dd33aeca2b5afca06a1a1337e826a7c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f439c11912fea2cb272bd1e7348125dd33aeca2b5afca06a1a1337e826a7c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f439c11912fea2cb272bd1e7348125dd33aeca2b5afca06a1a1337e826a7c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:33 compute-0 podman[293978]: 2026-01-31 07:52:33.944910588 +0000 UTC m=+0.558115730 container init de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_murdock, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:52:33 compute-0 nova_compute[247704]: 2026-01-31 07:52:33.949 247708 DEBUG nova.compute.manager [req-793ecc62-2768-4019-b6cd-02c36d3ffe6f req-b2b106f7-dba0-4c1f-9789-85ba9f5158b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-changed-f70ffad1-6c67-4e2f-9488-5f51be8ca30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:33 compute-0 nova_compute[247704]: 2026-01-31 07:52:33.951 247708 DEBUG nova.compute.manager [req-793ecc62-2768-4019-b6cd-02c36d3ffe6f req-b2b106f7-dba0-4c1f-9789-85ba9f5158b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing instance network info cache due to event network-changed-f70ffad1-6c67-4e2f-9488-5f51be8ca30f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:52:33 compute-0 nova_compute[247704]: 2026-01-31 07:52:33.952 247708 DEBUG oslo_concurrency.lockutils [req-793ecc62-2768-4019-b6cd-02c36d3ffe6f req-b2b106f7-dba0-4c1f-9789-85ba9f5158b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:52:33 compute-0 nova_compute[247704]: 2026-01-31 07:52:33.952 247708 DEBUG oslo_concurrency.lockutils [req-793ecc62-2768-4019-b6cd-02c36d3ffe6f req-b2b106f7-dba0-4c1f-9789-85ba9f5158b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:52:33 compute-0 nova_compute[247704]: 2026-01-31 07:52:33.953 247708 DEBUG nova.network.neutron [req-793ecc62-2768-4019-b6cd-02c36d3ffe6f req-b2b106f7-dba0-4c1f-9789-85ba9f5158b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Refreshing network info cache for port f70ffad1-6c67-4e2f-9488-5f51be8ca30f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:52:33 compute-0 podman[293978]: 2026-01-31 07:52:33.9543621 +0000 UTC m=+0.567567262 container start de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_murdock, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:52:34 compute-0 podman[293978]: 2026-01-31 07:52:34.035325933 +0000 UTC m=+0.648531105 container attach de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:52:34 compute-0 ceph-mon[74496]: pgmap v1659: 305 pgs: 305 active+clean; 134 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 31 07:52:34 compute-0 lucid_murdock[293996]: {
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:     "0": [
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:         {
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "devices": [
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "/dev/loop3"
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             ],
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "lv_name": "ceph_lv0",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "lv_size": "7511998464",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "name": "ceph_lv0",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "tags": {
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.cluster_name": "ceph",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.crush_device_class": "",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.encrypted": "0",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.osd_id": "0",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.type": "block",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:                 "ceph.vdo": "0"
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             },
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "type": "block",
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:             "vg_name": "ceph_vg0"
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:         }
Jan 31 07:52:34 compute-0 lucid_murdock[293996]:     ]
Jan 31 07:52:34 compute-0 lucid_murdock[293996]: }
Jan 31 07:52:34 compute-0 systemd[1]: libpod-de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d.scope: Deactivated successfully.
Jan 31 07:52:34 compute-0 podman[293978]: 2026-01-31 07:52:34.729407714 +0000 UTC m=+1.342612886 container died de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:52:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:34.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-90f439c11912fea2cb272bd1e7348125dd33aeca2b5afca06a1a1337e826a7c7-merged.mount: Deactivated successfully.
Jan 31 07:52:35 compute-0 podman[293978]: 2026-01-31 07:52:35.007403332 +0000 UTC m=+1.620608464 container remove de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:52:35 compute-0 nova_compute[247704]: 2026-01-31 07:52:35.009 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:35 compute-0 systemd[1]: libpod-conmon-de4ce2745dc4b6d98b6fd704c153770b947eabdeae7f7c45ddb3ec0d4eac6f4d.scope: Deactivated successfully.
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 162 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 124 op/s
Jan 31 07:52:35 compute-0 sudo[293869]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:35 compute-0 podman[294019]: 2026-01-31 07:52:35.07140412 +0000 UTC m=+0.108639202 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006841208356597804 of space, bias 1.0, pg target 0.2052362506979341 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001978388760934301 of space, bias 1.0, pg target 0.5935166282802903 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:52:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:52:35 compute-0 sudo[294036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:35 compute-0 sudo[294036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:35 compute-0 sudo[294036]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:35 compute-0 sudo[294064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:52:35 compute-0 sudo[294064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:35 compute-0 sudo[294064]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:35 compute-0 sudo[294089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:35 compute-0 sudo[294089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:35 compute-0 sudo[294089]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:35 compute-0 sudo[294114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:52:35 compute-0 sudo[294114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:35.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:35 compute-0 podman[294180]: 2026-01-31 07:52:35.612437871 +0000 UTC m=+0.032195950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:52:35 compute-0 podman[294180]: 2026-01-31 07:52:35.773328312 +0000 UTC m=+0.193086411 container create b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:52:35 compute-0 ceph-mon[74496]: pgmap v1660: 305 pgs: 305 active+clean; 162 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 124 op/s
Jan 31 07:52:35 compute-0 systemd[1]: Started libpod-conmon-b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d.scope.
Jan 31 07:52:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:35 compute-0 podman[294180]: 2026-01-31 07:52:35.934954861 +0000 UTC m=+0.354712940 container init b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:52:35 compute-0 podman[294180]: 2026-01-31 07:52:35.942308581 +0000 UTC m=+0.362066640 container start b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:52:35 compute-0 strange_margulis[294196]: 167 167
Jan 31 07:52:35 compute-0 systemd[1]: libpod-b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d.scope: Deactivated successfully.
Jan 31 07:52:35 compute-0 conmon[294196]: conmon b18c05c485d3374d0b41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d.scope/container/memory.events
Jan 31 07:52:35 compute-0 sudo[294199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:35 compute-0 sudo[294199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:35 compute-0 sudo[294199]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:36 compute-0 sudo[294232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:36 compute-0 sudo[294232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:36 compute-0 sudo[294232]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:36 compute-0 podman[294180]: 2026-01-31 07:52:36.080979567 +0000 UTC m=+0.500737626 container attach b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:52:36 compute-0 podman[294180]: 2026-01-31 07:52:36.081669213 +0000 UTC m=+0.501427282 container died b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:52:36 compute-0 nova_compute[247704]: 2026-01-31 07:52:36.096 247708 DEBUG nova.network.neutron [req-793ecc62-2768-4019-b6cd-02c36d3ffe6f req-b2b106f7-dba0-4c1f-9789-85ba9f5158b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updated VIF entry in instance network info cache for port f70ffad1-6c67-4e2f-9488-5f51be8ca30f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:52:36 compute-0 nova_compute[247704]: 2026-01-31 07:52:36.099 247708 DEBUG nova.network.neutron [req-793ecc62-2768-4019-b6cd-02c36d3ffe6f req-b2b106f7-dba0-4c1f-9789-85ba9f5158b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [{"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:52:36 compute-0 nova_compute[247704]: 2026-01-31 07:52:36.136 247708 DEBUG oslo_concurrency.lockutils [req-793ecc62-2768-4019-b6cd-02c36d3ffe6f req-b2b106f7-dba0-4c1f-9789-85ba9f5158b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9f390c61-950a-4c26-8733-d43d910f2430" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:52:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-05a1acbb0849ea0b9d6f3cb553b2c12cc2c5d83cbc1bc7639d54290f626b037b-merged.mount: Deactivated successfully.
Jan 31 07:52:36 compute-0 podman[294180]: 2026-01-31 07:52:36.208597583 +0000 UTC m=+0.628355642 container remove b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_margulis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:52:36 compute-0 systemd[1]: libpod-conmon-b18c05c485d3374d0b414df87115d37f922ebf939be06033c66348da5250038d.scope: Deactivated successfully.
Jan 31 07:52:36 compute-0 podman[294270]: 2026-01-31 07:52:36.401670692 +0000 UTC m=+0.058553826 container create c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_snyder, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:52:36 compute-0 systemd[1]: Started libpod-conmon-c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858.scope.
Jan 31 07:52:36 compute-0 podman[294270]: 2026-01-31 07:52:36.372072787 +0000 UTC m=+0.028956011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:52:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b455cce7eeab2ffa9d9f8621c93b3a529cd2501a550a53ad3e7b742770f5a2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b455cce7eeab2ffa9d9f8621c93b3a529cd2501a550a53ad3e7b742770f5a2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b455cce7eeab2ffa9d9f8621c93b3a529cd2501a550a53ad3e7b742770f5a2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b455cce7eeab2ffa9d9f8621c93b3a529cd2501a550a53ad3e7b742770f5a2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:52:36 compute-0 podman[294270]: 2026-01-31 07:52:36.498579125 +0000 UTC m=+0.155462349 container init c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:52:36 compute-0 podman[294270]: 2026-01-31 07:52:36.506972131 +0000 UTC m=+0.163855255 container start c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_snyder, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 07:52:36 compute-0 podman[294270]: 2026-01-31 07:52:36.516023283 +0000 UTC m=+0.172906507 container attach c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:52:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:36.820840) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956820956, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1965, "num_deletes": 257, "total_data_size": 3392648, "memory_usage": 3442800, "flush_reason": "Manual Compaction"}
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956877243, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3324350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34607, "largest_seqno": 36571, "table_properties": {"data_size": 3315449, "index_size": 5459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19020, "raw_average_key_size": 20, "raw_value_size": 3297377, "raw_average_value_size": 3553, "num_data_blocks": 238, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845752, "oldest_key_time": 1769845752, "file_creation_time": 1769845956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 56444 microseconds, and 6656 cpu microseconds.
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:36.877300) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3324350 bytes OK
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:36.877329) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:36.887599) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:36.887629) EVENT_LOG_v1 {"time_micros": 1769845956887620, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:36.887660) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3384552, prev total WAL file size 3384552, number of live WAL files 2.
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:36.888608) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3246KB)], [74(8359KB)]
Jan 31 07:52:36 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845956888676, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11884565, "oldest_snapshot_seqno": -1}
Jan 31 07:52:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 173 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 129 op/s
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6525 keys, 11725032 bytes, temperature: kUnknown
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845957036844, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11725032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11679115, "index_size": 28527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 167052, "raw_average_key_size": 25, "raw_value_size": 11559818, "raw_average_value_size": 1771, "num_data_blocks": 1149, "num_entries": 6525, "num_filter_entries": 6525, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769845956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:37.037059) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11725032 bytes
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:37.040555) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.2 rd, 79.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.2 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 7061, records dropped: 536 output_compression: NoCompression
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:37.040570) EVENT_LOG_v1 {"time_micros": 1769845957040562, "job": 42, "event": "compaction_finished", "compaction_time_micros": 148239, "compaction_time_cpu_micros": 21916, "output_level": 6, "num_output_files": 1, "total_output_size": 11725032, "num_input_records": 7061, "num_output_records": 6525, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845957040962, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845957041711, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:36.888446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:37.041786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:37.041791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:37.041793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:37.041794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:37 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:37.041795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:37 compute-0 jovial_snyder[294287]: {
Jan 31 07:52:37 compute-0 jovial_snyder[294287]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:52:37 compute-0 jovial_snyder[294287]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:52:37 compute-0 jovial_snyder[294287]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:52:37 compute-0 jovial_snyder[294287]:         "osd_id": 0,
Jan 31 07:52:37 compute-0 jovial_snyder[294287]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:52:37 compute-0 jovial_snyder[294287]:         "type": "bluestore"
Jan 31 07:52:37 compute-0 jovial_snyder[294287]:     }
Jan 31 07:52:37 compute-0 jovial_snyder[294287]: }
Jan 31 07:52:37 compute-0 systemd[1]: libpod-c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858.scope: Deactivated successfully.
Jan 31 07:52:37 compute-0 podman[294270]: 2026-01-31 07:52:37.310224155 +0000 UTC m=+0.967107349 container died c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:52:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b455cce7eeab2ffa9d9f8621c93b3a529cd2501a550a53ad3e7b742770f5a2f-merged.mount: Deactivated successfully.
Jan 31 07:52:37 compute-0 podman[294270]: 2026-01-31 07:52:37.387446176 +0000 UTC m=+1.044329310 container remove c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_snyder, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 07:52:37 compute-0 systemd[1]: libpod-conmon-c53360aabf14fe6cf7c90f67ac4ce0d824a83fab10606fc291559678d4704858.scope: Deactivated successfully.
Jan 31 07:52:37 compute-0 sudo[294114]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:52:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:52:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 845109eb-9c98-4566-8ca3-95f50ac23238 does not exist
Jan 31 07:52:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 33838563-4b77-4bf5-868b-71b4b3b84342 does not exist
Jan 31 07:52:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4828c7d3-49f3-438d-8d3d-aaf4e5a10abb does not exist
Jan 31 07:52:37 compute-0 sudo[294323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:37 compute-0 sudo[294323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:37 compute-0 sudo[294323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:37 compute-0 sudo[294348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:52:37 compute-0 sudo[294348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:37 compute-0 sudo[294348]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:37.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:37 compute-0 nova_compute[247704]: 2026-01-31 07:52:37.817 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:38 compute-0 ceph-mon[74496]: pgmap v1661: 305 pgs: 305 active+clean; 173 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 129 op/s
Jan 31 07:52:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:52:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1357689423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:38.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 180 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 31 07:52:39 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 31 07:52:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:39.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2918745298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2684906466' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:39 compute-0 ceph-mon[74496]: pgmap v1662: 305 pgs: 305 active+clean; 180 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 116 op/s
Jan 31 07:52:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4230265387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:39 compute-0 ovn_controller[149457]: 2026-01-31T07:52:39Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:1d:84 10.1.1.171
Jan 31 07:52:39 compute-0 ovn_controller[149457]: 2026-01-31T07:52:39Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:1d:84 10.1.1.171
Jan 31 07:52:40 compute-0 nova_compute[247704]: 2026-01-31 07:52:40.017 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:40 compute-0 ovn_controller[149457]: 2026-01-31T07:52:40Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:74:ed 10.2.2.200
Jan 31 07:52:40 compute-0 ovn_controller[149457]: 2026-01-31T07:52:40Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:74:ed 10.2.2.200
Jan 31 07:52:40 compute-0 ovn_controller[149457]: 2026-01-31T07:52:40Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:06:0e 10.100.0.13
Jan 31 07:52:40 compute-0 ovn_controller[149457]: 2026-01-31T07:52:40Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:06:0e 10.100.0.13
Jan 31 07:52:40 compute-0 ovn_controller[149457]: 2026-01-31T07:52:40Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:67:df 10.2.2.100
Jan 31 07:52:40 compute-0 ovn_controller[149457]: 2026-01-31T07:52:40Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:67:df 10.2.2.100
Jan 31 07:52:40 compute-0 ovn_controller[149457]: 2026-01-31T07:52:40Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:8e:bc 10.1.1.58
Jan 31 07:52:40 compute-0 ovn_controller[149457]: 2026-01-31T07:52:40Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:8e:bc 10.1.1.58
Jan 31 07:52:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:40.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 192 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 753 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Jan 31 07:52:41 compute-0 ovn_controller[149457]: 2026-01-31T07:52:41Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:6d:e4 10.1.1.123
Jan 31 07:52:41 compute-0 ovn_controller[149457]: 2026-01-31T07:52:41Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:6d:e4 10.1.1.123
Jan 31 07:52:41 compute-0 ovn_controller[149457]: 2026-01-31T07:52:41Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:f2:bf 10.1.1.8
Jan 31 07:52:41 compute-0 ovn_controller[149457]: 2026-01-31T07:52:41Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:f2:bf 10.1.1.8
Jan 31 07:52:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:52:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:41.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:52:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:42 compute-0 nova_compute[247704]: 2026-01-31 07:52:42.004 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:42 compute-0 ceph-mon[74496]: pgmap v1663: 305 pgs: 305 active+clean; 192 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 753 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Jan 31 07:52:42 compute-0 nova_compute[247704]: 2026-01-31 07:52:42.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:42.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 205 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 452 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.266488) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963266666, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 326, "num_deletes": 251, "total_data_size": 194239, "memory_usage": 201528, "flush_reason": "Manual Compaction"}
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963284895, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 193425, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36572, "largest_seqno": 36897, "table_properties": {"data_size": 191219, "index_size": 370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5457, "raw_average_key_size": 18, "raw_value_size": 186894, "raw_average_value_size": 644, "num_data_blocks": 16, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845957, "oldest_key_time": 1769845957, "file_creation_time": 1769845963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 18601 microseconds, and 2089 cpu microseconds.
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.285059) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 193425 bytes OK
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.285208) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.288983) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.289006) EVENT_LOG_v1 {"time_micros": 1769845963288999, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.289030) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 191958, prev total WAL file size 191958, number of live WAL files 2.
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.289723) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(188KB)], [77(11MB)]
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963289803, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11918457, "oldest_snapshot_seqno": -1}
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6302 keys, 9986583 bytes, temperature: kUnknown
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963499040, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 9986583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9943678, "index_size": 26027, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 163122, "raw_average_key_size": 25, "raw_value_size": 9829800, "raw_average_value_size": 1559, "num_data_blocks": 1037, "num_entries": 6302, "num_filter_entries": 6302, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769845963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.499527) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9986583 bytes
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.508774) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.9 rd, 47.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(113.2) write-amplify(51.6) OK, records in: 6815, records dropped: 513 output_compression: NoCompression
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.508826) EVENT_LOG_v1 {"time_micros": 1769845963508807, "job": 44, "event": "compaction_finished", "compaction_time_micros": 209378, "compaction_time_cpu_micros": 21569, "output_level": 6, "num_output_files": 1, "total_output_size": 9986583, "num_input_records": 6815, "num_output_records": 6302, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963509104, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845963510924, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.289601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.511048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.511057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.511059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.511062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:52:43.511064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:52:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:43.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:44 compute-0 ceph-mon[74496]: pgmap v1664: 305 pgs: 305 active+clean; 205 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 452 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 07:52:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:44.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:45 compute-0 nova_compute[247704]: 2026-01-31 07:52:45.021 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 31 07:52:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1330100280' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:52:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1330100280' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:52:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2159163348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:45.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:46 compute-0 ceph-mon[74496]: pgmap v1665: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 31 07:52:46 compute-0 nova_compute[247704]: 2026-01-31 07:52:46.660 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:46.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 31 07:52:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:47.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:47 compute-0 nova_compute[247704]: 2026-01-31 07:52:47.824 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:48 compute-0 ceph-mon[74496]: pgmap v1666: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 31 07:52:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:48.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 146 op/s
Jan 31 07:52:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:49.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:49 compute-0 podman[294379]: 2026-01-31 07:52:49.950058609 +0000 UTC m=+0.107532184 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 07:52:50 compute-0 nova_compute[247704]: 2026-01-31 07:52:50.022 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:52:50 compute-0 ceph-mon[74496]: pgmap v1667: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 146 op/s
Jan 31 07:52:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:50.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 31 07:52:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:51.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:51 compute-0 ceph-mon[74496]: pgmap v1668: 305 pgs: 305 active+clean; 214 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:51.892 160292 DEBUG eventlet.wsgi.server [-] (160292) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 31 07:52:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:51.894 160292 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: Accept: */*
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: Connection: close
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: Content-Type: text/plain
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: Host: 169.254.169.254
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: User-Agent: curl/7.84.0
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: X-Forwarded-For: 10.100.0.13
Jan 31 07:52:51 compute-0 ovn_metadata_agent[160021]: X-Ovn-Network-Id: a3752940-fba8-4389-b589-ddf451dddc0e __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 31 07:52:52 compute-0 nova_compute[247704]: 2026-01-31 07:52:52.828 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:52.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3956740207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:52:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 214 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 161 op/s
Jan 31 07:52:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:53.442 160292 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 31 07:52:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:53.442 160292 INFO eventlet.wsgi.server [-] 10.100.0.13,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2548 time: 1.5486321
Jan 31 07:52:53 compute-0 haproxy-metadata-proxy-a3752940-fba8-4389-b589-ddf451dddc0e[293378]: 10.100.0.13:50114 [31/Jan/2026:07:52:51.891] listener listener/metadata 0/0/0/1551/1551 200 2532 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 31 07:52:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:54 compute-0 ceph-mon[74496]: pgmap v1669: 305 pgs: 305 active+clean; 214 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 161 op/s
Jan 31 07:52:54 compute-0 nova_compute[247704]: 2026-01-31 07:52:54.555 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:54 compute-0 nova_compute[247704]: 2026-01-31 07:52:54.555 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:54 compute-0 nova_compute[247704]: 2026-01-31 07:52:54.556 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:54 compute-0 nova_compute[247704]: 2026-01-31 07:52:54.556 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:54 compute-0 nova_compute[247704]: 2026-01-31 07:52:54.557 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:54 compute-0 nova_compute[247704]: 2026-01-31 07:52:54.558 247708 INFO nova.compute.manager [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Terminating instance
Jan 31 07:52:54 compute-0 nova_compute[247704]: 2026-01-31 07:52:54.559 247708 DEBUG nova.compute.manager [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:52:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:54.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.027 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 218 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 143 op/s
Jan 31 07:52:55 compute-0 kernel: tapf70ffad1-6c (unregistering): left promiscuous mode
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.5095] device (tapf70ffad1-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00258|binding|INFO|Releasing lport f70ffad1-6c67-4e2f-9488-5f51be8ca30f from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00259|binding|INFO|Setting lport f70ffad1-6c67-4e2f-9488-5f51be8ca30f down in Southbound
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.524 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00260|binding|INFO|Removing iface tapf70ffad1-6c ovn-installed in OVS
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.527 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.532 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 kernel: tape03f85ae-01 (unregistering): left promiscuous mode
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.5409] device (tape03f85ae-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.548 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:06:0e 10.100.0.13'], port_security=['fa:16:3e:0e:06:0e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3752940-fba8-4389-b589-ddf451dddc0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b43212d-4075-45e4-8769-aa7227c99173, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=f70ffad1-6c67-4e2f-9488-5f51be8ca30f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.550 160028 INFO neutron.agent.ovn.metadata.agent [-] Port f70ffad1-6c67-4e2f-9488-5f51be8ca30f in datapath a3752940-fba8-4389-b589-ddf451dddc0e unbound from our chassis
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00261|binding|INFO|Releasing lport c4b14b14-48a7-4288-bcf4-855cd8737a4e from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00262|binding|INFO|Releasing lport 4dd901b3-b32e-4193-b4e6-b825ed5a208e from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00263|binding|INFO|Releasing lport a78f7dfd-bb43-4fb8-8fa3-1dac8ecd37b8 from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.553 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3752940-fba8-4389-b589-ddf451dddc0e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.554 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.555 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d86477a3-ad8e-43e0-8aa5-a90d0e46ee76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.555 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e namespace which is not needed anymore
Jan 31 07:52:55 compute-0 kernel: tap4eb52c86-78 (unregistering): left promiscuous mode
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.563 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00264|binding|INFO|Releasing lport e03f85ae-0195-4e14-9903-e7b06167a724 from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00265|binding|INFO|Setting lport e03f85ae-0195-4e14-9903-e7b06167a724 down in Southbound
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00266|binding|INFO|Removing iface tape03f85ae-01 ovn-installed in OVS
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.5676] device (tap4eb52c86-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.569 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:8e:bc 10.1.1.58'], port_security=['fa:16:3e:1e:8e:bc 10.1.1.58'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-896460109', 'neutron:cidrs': '10.1.1.58/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-896460109', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e626db8-81a7-41ec-b76c-dee792980f20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a058b6c-41b8-40e0-ac85-467e6c702f15, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e03f85ae-0195-4e14-9903-e7b06167a724) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:55 compute-0 kernel: tap3c7e8d6d-d8 (unregistering): left promiscuous mode
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.6023] device (tap3c7e8d6d-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:52:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:55 compute-0 kernel: tapefbaae5d-81 (unregistering): left promiscuous mode
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.6434] device (tapefbaae5d-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.648 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.660 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00267|binding|INFO|Releasing lport efbaae5d-81ff-4cc9-8c11-21b46928ff4e from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 kernel: tapb30ffaa7-e7 (unregistering): left promiscuous mode
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00268|binding|INFO|Releasing lport c4b14b14-48a7-4288-bcf4-855cd8737a4e from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00269|binding|INFO|Releasing lport 4eb52c86-78dd-484b-8a6f-30698b76d281 from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00270|binding|INFO|Releasing lport 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00271|binding|INFO|Releasing lport 4dd901b3-b32e-4193-b4e6-b825ed5a208e from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00272|binding|INFO|Releasing lport a78f7dfd-bb43-4fb8-8fa3-1dac8ecd37b8 from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.6743] device (tapb30ffaa7-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00273|binding|INFO|Removing iface tap4eb52c86-78 ovn-installed in OVS
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00274|binding|INFO|Removing iface tap3c7e8d6d-d8 ovn-installed in OVS
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00275|binding|INFO|Removing iface tapefbaae5d-81 ovn-installed in OVS
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00276|binding|INFO|Setting lport 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd down in Southbound
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00277|binding|INFO|Setting lport 4eb52c86-78dd-484b-8a6f-30698b76d281 down in Southbound
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00278|binding|INFO|Setting lport efbaae5d-81ff-4cc9-8c11-21b46928ff4e down in Southbound
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.683 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.691 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:f2:bf 10.1.1.8'], port_security=['fa:16:3e:2a:f2:bf 10.1.1.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.8/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a058b6c-41b8-40e0-ac85-467e6c702f15, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=efbaae5d-81ff-4cc9-8c11-21b46928ff4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.692 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:1d:84 10.1.1.171'], port_security=['fa:16:3e:a9:1d:84 10.1.1.171'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1177458892', 'neutron:cidrs': '10.1.1.171/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1177458892', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e626db8-81a7-41ec-b76c-dee792980f20', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a058b6c-41b8-40e0-ac85-467e6c702f15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4eb52c86-78dd-484b-8a6f-30698b76d281) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:55 compute-0 kernel: tap4b53f464-7f (unregistering): left promiscuous mode
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.694 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:6d:e4 10.1.1.123'], port_security=['fa:16:3e:91:6d:e4 10.1.1.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.123/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a058b6c-41b8-40e0-ac85-467e6c702f15, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.7031] device (tap4b53f464-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00279|binding|INFO|Releasing lport b30ffaa7-e766-4963-9625-a6ed3c381b9e from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00280|binding|INFO|Setting lport b30ffaa7-e766-4963-9625-a6ed3c381b9e down in Southbound
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00281|binding|INFO|Removing iface tapb30ffaa7-e7 ovn-installed in OVS
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.720 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.729 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:67:df 10.2.2.100'], port_security=['fa:16:3e:d4:67:df 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53c7f058-b6f8-4179-8f8a-535c8c58d9f3, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=b30ffaa7-e766-4963-9625-a6ed3c381b9e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00282|binding|INFO|Releasing lport 4b53f464-7fe8-4fdb-9628-1af032b23899 from this chassis (sb_readonly=0)
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00283|binding|INFO|Setting lport 4b53f464-7fe8-4fdb-9628-1af032b23899 down in Southbound
Jan 31 07:52:55 compute-0 ovn_controller[149457]: 2026-01-31T07:52:55Z|00284|binding|INFO|Removing iface tap4b53f464-7f ovn-installed in OVS
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.744 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.746 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:55.753 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:74:ed 10.2.2.200'], port_security=['fa:16:3e:57:74:ed 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '9f390c61-950a-4c26-8733-d43d910f2430', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3264618ebdfa4478919141d5d3c4d3b3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ef25e1c-6f2f-485e-a8ec-2412c5cec3cb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53c7f058-b6f8-4179-8f8a-535c8c58d9f3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4b53f464-7fe8-4fdb-9628-1af032b23899) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:52:55 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000048.scope: Deactivated successfully.
Jan 31 07:52:55 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000048.scope: Consumed 16.553s CPU time.
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.759 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 systemd-machined[214448]: Machine qemu-30-instance-00000048 terminated.
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.7786] manager: (tapf70ffad1-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.7922] manager: (tape03f85ae-01): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.8094] manager: (tap4eb52c86-78): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Jan 31 07:52:55 compute-0 NetworkManager[49108]: <info>  [1769845975.8312] manager: (tapefbaae5d-81): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Jan 31 07:52:55 compute-0 neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e[293372]: [NOTICE]   (293376) : haproxy version is 2.8.14-c23fe91
Jan 31 07:52:55 compute-0 neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e[293372]: [NOTICE]   (293376) : path to executable is /usr/sbin/haproxy
Jan 31 07:52:55 compute-0 neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e[293372]: [WARNING]  (293376) : Exiting Master process...
Jan 31 07:52:55 compute-0 neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e[293372]: [ALERT]    (293376) : Current worker (293378) exited with code 143 (Terminated)
Jan 31 07:52:55 compute-0 neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e[293372]: [WARNING]  (293376) : All workers exited. Exiting... (0)
Jan 31 07:52:55 compute-0 systemd[1]: libpod-94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7.scope: Deactivated successfully.
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.859 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 podman[294451]: 2026-01-31 07:52:55.867670929 +0000 UTC m=+0.202899241 container died 94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.875 247708 INFO nova.virt.libvirt.driver [-] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Instance destroyed successfully.
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.876 247708 DEBUG nova.objects.instance [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lazy-loading 'resources' on Instance uuid 9f390c61-950a-4c26-8733-d43d910f2430 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:52:55 compute-0 ceph-mon[74496]: pgmap v1670: 305 pgs: 305 active+clean; 218 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 143 op/s
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.882 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.895 247708 DEBUG nova.virt.libvirt.vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.896 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.897 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=f70ffad1-6c67-4e2f-9488-5f51be8ca30f,network=Network(a3752940-fba8-4389-b589-ddf451dddc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70ffad1-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.897 247708 DEBUG os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=f70ffad1-6c67-4e2f-9488-5f51be8ca30f,network=Network(a3752940-fba8-4389-b589-ddf451dddc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70ffad1-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.899 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.900 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf70ffad1-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.901 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.904 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.917 247708 DEBUG nova.compute.manager [req-2df6f0a2-a0b2-45c9-a35a-f5d7013cc65d req-b5d30b35-7e55-4749-a39d-ec8c9f83c759 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-e03f85ae-0195-4e14-9903-e7b06167a724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.917 247708 DEBUG oslo_concurrency.lockutils [req-2df6f0a2-a0b2-45c9-a35a-f5d7013cc65d req-b5d30b35-7e55-4749-a39d-ec8c9f83c759 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.918 247708 DEBUG oslo_concurrency.lockutils [req-2df6f0a2-a0b2-45c9-a35a-f5d7013cc65d req-b5d30b35-7e55-4749-a39d-ec8c9f83c759 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.918 247708 DEBUG oslo_concurrency.lockutils [req-2df6f0a2-a0b2-45c9-a35a-f5d7013cc65d req-b5d30b35-7e55-4749-a39d-ec8c9f83c759 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.918 247708 DEBUG nova.compute.manager [req-2df6f0a2-a0b2-45c9-a35a-f5d7013cc65d req-b5d30b35-7e55-4749-a39d-ec8c9f83c759 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-unplugged-e03f85ae-0195-4e14-9903-e7b06167a724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.918 247708 DEBUG nova.compute.manager [req-2df6f0a2-a0b2-45c9-a35a-f5d7013cc65d req-b5d30b35-7e55-4749-a39d-ec8c9f83c759 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-e03f85ae-0195-4e14-9903-e7b06167a724 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.919 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.923 247708 INFO os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=f70ffad1-6c67-4e2f-9488-5f51be8ca30f,network=Network(a3752940-fba8-4389-b589-ddf451dddc0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf70ffad1-6c')
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.924 247708 DEBUG nova.virt.libvirt.vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.925 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.926 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:8e:bc,bridge_name='br-int',has_traffic_filtering=True,id=e03f85ae-0195-4e14-9903-e7b06167a724,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape03f85ae-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.926 247708 DEBUG os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:8e:bc,bridge_name='br-int',has_traffic_filtering=True,id=e03f85ae-0195-4e14-9903-e7b06167a724,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape03f85ae-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.927 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.928 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03f85ae-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.929 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.931 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.942 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.945 247708 INFO os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:8e:bc,bridge_name='br-int',has_traffic_filtering=True,id=e03f85ae-0195-4e14-9903-e7b06167a724,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape03f85ae-01')
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.946 247708 DEBUG nova.virt.libvirt.vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.947 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.947 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:1d:84,bridge_name='br-int',has_traffic_filtering=True,id=4eb52c86-78dd-484b-8a6f-30698b76d281,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4eb52c86-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.947 247708 DEBUG os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:1d:84,bridge_name='br-int',has_traffic_filtering=True,id=4eb52c86-78dd-484b-8a6f-30698b76d281,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4eb52c86-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.949 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.949 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4eb52c86-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.950 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.952 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.960 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.962 247708 INFO os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:1d:84,bridge_name='br-int',has_traffic_filtering=True,id=4eb52c86-78dd-484b-8a6f-30698b76d281,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4eb52c86-78')
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.962 247708 DEBUG nova.virt.libvirt.vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.963 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "address": "fa:16:3e:91:6d:e4", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.123", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c7e8d6d-d8", "ovs_interfaceid": "3c7e8d6d-d8dc-4023-8967-e09698c9d8bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.963 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:6d:e4,bridge_name='br-int',has_traffic_filtering=True,id=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7e8d6d-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.964 247708 DEBUG os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:6d:e4,bridge_name='br-int',has_traffic_filtering=True,id=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7e8d6d-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.965 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.965 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c7e8d6d-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.966 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.968 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.976 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.979 247708 INFO os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:6d:e4,bridge_name='br-int',has_traffic_filtering=True,id=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c7e8d6d-d8')
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.980 247708 DEBUG nova.virt.libvirt.vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.981 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.981 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:f2:bf,bridge_name='br-int',has_traffic_filtering=True,id=efbaae5d-81ff-4cc9-8c11-21b46928ff4e,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefbaae5d-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.982 247708 DEBUG os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:f2:bf,bridge_name='br-int',has_traffic_filtering=True,id=efbaae5d-81ff-4cc9-8c11-21b46928ff4e,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefbaae5d-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.983 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.983 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefbaae5d-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.985 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.987 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.992 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.994 247708 INFO os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:f2:bf,bridge_name='br-int',has_traffic_filtering=True,id=efbaae5d-81ff-4cc9-8c11-21b46928ff4e,network=Network(0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefbaae5d-81')
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.995 247708 DEBUG nova.virt.libvirt.vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.995 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.996 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:67:df,bridge_name='br-int',has_traffic_filtering=True,id=b30ffaa7-e766-4963-9625-a6ed3c381b9e,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb30ffaa7-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.996 247708 DEBUG os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:67:df,bridge_name='br-int',has_traffic_filtering=True,id=b30ffaa7-e766-4963-9625-a6ed3c381b9e,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb30ffaa7-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.997 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.998 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb30ffaa7-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:55 compute-0 nova_compute[247704]: 2026-01-31 07:52:55.999 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.001 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.003 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.005 247708 INFO os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:67:df,bridge_name='br-int',has_traffic_filtering=True,id=b30ffaa7-e766-4963-9625-a6ed3c381b9e,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb30ffaa7-e7')
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.006 247708 DEBUG nova.virt.libvirt.vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-470810591',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-470810591',id=72,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF1dBRh/vlkliptOjW62qqwRKbvuTALNGtbDZPNOrUEq2Q9QNY3kaSNhk9Scr8FGlg2yPZjUzDmeEPD1GWyGd5EiQUqubngJAbzw6166sIpp6xIQ3SIUi1f9CctR3zGh1A==',key_name='tempest-keypair-770297055',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:52:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3264618ebdfa4478919141d5d3c4d3b3',ramdisk_id='',reservation_id='r-1jgnhk9u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-805673079',owner_user_name='tempest-TaggedBootDevicesTest-805673079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:52:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb0d7106e4b04ac980748beb40d5cedf',uuid=9f390c61-950a-4c26-8733-d43d910f2430,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.007 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converting VIF {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.007 247708 DEBUG nova.network.os_vif_util [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:74:ed,bridge_name='br-int',has_traffic_filtering=True,id=4b53f464-7fe8-4fdb-9628-1af032b23899,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b53f464-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.008 247708 DEBUG os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:74:ed,bridge_name='br-int',has_traffic_filtering=True,id=4b53f464-7fe8-4fdb-9628-1af032b23899,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b53f464-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.009 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.009 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b53f464-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.010 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.012 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.013 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.015 247708 INFO os_vif [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:74:ed,bridge_name='br-int',has_traffic_filtering=True,id=4b53f464-7fe8-4fdb-9628-1af032b23899,network=Network(8e83be71-fd66-4d0d-84cf-6e2da4831dad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b53f464-7f')
Jan 31 07:52:56 compute-0 sudo[294612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:56 compute-0 sudo[294612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:56 compute-0 sudo[294612]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:56 compute-0 sudo[294637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:52:56 compute-0 sudo[294637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:52:56 compute-0 sudo[294637]: pam_unix(sudo:session): session closed for user root
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.266 247708 DEBUG nova.compute.manager [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.267 247708 DEBUG oslo_concurrency.lockutils [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.268 247708 DEBUG oslo_concurrency.lockutils [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.268 247708 DEBUG oslo_concurrency.lockutils [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.268 247708 DEBUG nova.compute.manager [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-unplugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.268 247708 DEBUG nova.compute.manager [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.268 247708 DEBUG nova.compute.manager [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.269 247708 DEBUG oslo_concurrency.lockutils [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.269 247708 DEBUG oslo_concurrency.lockutils [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.269 247708 DEBUG oslo_concurrency.lockutils [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.269 247708 DEBUG nova.compute.manager [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:56 compute-0 nova_compute[247704]: 2026-01-31 07:52:56.269 247708 WARNING nova.compute.manager [req-8164c2a1-2f90-4c12-b2f6-2412b3fd3d5f req-aaffe331-e144-4f7b-82ad-aad048a78f60 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-f70ffad1-6c67-4e2f-9488-5f51be8ca30f for instance with vm_state active and task_state deleting.
Jan 31 07:52:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7-userdata-shm.mount: Deactivated successfully.
Jan 31 07:52:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba87f6b2da21d1d52c2a96199d3bbb668eb3c1845b9729b1282fdcf53ab76011-merged.mount: Deactivated successfully.
Jan 31 07:52:56 compute-0 podman[294451]: 2026-01-31 07:52:56.869850925 +0000 UTC m=+1.205079237 container cleanup 94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:52:56 compute-0 systemd[1]: libpod-conmon-94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7.scope: Deactivated successfully.
Jan 31 07:52:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:52:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:56.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 240 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.170 247708 DEBUG nova.compute.manager [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.172 247708 DEBUG oslo_concurrency.lockutils [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.172 247708 DEBUG oslo_concurrency.lockutils [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.172 247708 DEBUG oslo_concurrency.lockutils [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.172 247708 DEBUG nova.compute.manager [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-unplugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.172 247708 DEBUG nova.compute.manager [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.173 247708 DEBUG nova.compute.manager [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.173 247708 DEBUG oslo_concurrency.lockutils [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.173 247708 DEBUG oslo_concurrency.lockutils [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.174 247708 DEBUG oslo_concurrency.lockutils [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.174 247708 DEBUG nova.compute.manager [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.174 247708 WARNING nova.compute.manager [req-bdd20c60-408a-454e-8a90-f35910342896 req-f95b2939-9057-49e9-a0ae-07af113a48cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd for instance with vm_state active and task_state deleting.
Jan 31 07:52:57 compute-0 podman[294678]: 2026-01-31 07:52:57.522723645 +0000 UTC m=+0.628505755 container remove 94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.531 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[374d4553-234c-4d54-90fe-d86ff0d6dc9e]: (4, ('Sat Jan 31 07:52:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e (94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7)\n94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7\nSat Jan 31 07:52:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e (94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7)\n94e7c1dd8e1691828ca150c648226c6cdd1e90804ad1a60c7e875fda6f8d0cf7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.535 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[23d5baec-3192-4475-8d3c-00390ca04289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.538 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3752940-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.542 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:57 compute-0 kernel: tapa3752940-f0: left promiscuous mode
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.547 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.549 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.551 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd45219-ba09-45eb-a3fb-e3969e613e0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.570 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc77986-e548-49f1-9704-54d6e964d7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.572 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4c71e9e3-500c-4930-b6c9-d315c8fbcf1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.591 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8d500da2-9825-4867-a52c-d6b502ab38ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618496, 'reachable_time': 43835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294695, 'error': None, 'target': 'ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:57 compute-0 systemd[1]: run-netns-ovnmeta\x2da3752940\x2dfba8\x2d4389\x2db589\x2dddf451dddc0e.mount: Deactivated successfully.
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.596 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3752940-fba8-4389-b589-ddf451dddc0e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.597 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b87a2e-43a9-4fd8-8585-64b226f658b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.598 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e03f85ae-0195-4e14-9903-e7b06167a724 in datapath 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 unbound from our chassis
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.601 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.602 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5429f699-409c-4895-94eb-ce2b11e314e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:57.602 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 namespace which is not needed anymore
Jan 31 07:52:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:57.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:57 compute-0 neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2[293450]: [NOTICE]   (293454) : haproxy version is 2.8.14-c23fe91
Jan 31 07:52:57 compute-0 neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2[293450]: [NOTICE]   (293454) : path to executable is /usr/sbin/haproxy
Jan 31 07:52:57 compute-0 neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2[293450]: [WARNING]  (293454) : Exiting Master process...
Jan 31 07:52:57 compute-0 neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2[293450]: [ALERT]    (293454) : Current worker (293456) exited with code 143 (Terminated)
Jan 31 07:52:57 compute-0 neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2[293450]: [WARNING]  (293454) : All workers exited. Exiting... (0)
Jan 31 07:52:57 compute-0 systemd[1]: libpod-b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a.scope: Deactivated successfully.
Jan 31 07:52:57 compute-0 podman[294713]: 2026-01-31 07:52:57.743522233 +0000 UTC m=+0.049059492 container died b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a-userdata-shm.mount: Deactivated successfully.
Jan 31 07:52:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ca3d779ec397198f52295c85ce0ccce7c3ce13977e64e8866fec2fb104db368-merged.mount: Deactivated successfully.
Jan 31 07:52:57 compute-0 podman[294713]: 2026-01-31 07:52:57.793685191 +0000 UTC m=+0.099222410 container cleanup b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 07:52:57 compute-0 systemd[1]: libpod-conmon-b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a.scope: Deactivated successfully.
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.830 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.918 247708 INFO nova.virt.libvirt.driver [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Deleting instance files /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430_del
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.919 247708 INFO nova.virt.libvirt.driver [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Deletion of /var/lib/nova/instances/9f390c61-950a-4c26-8733-d43d910f2430_del complete
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.987 247708 INFO nova.compute.manager [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Took 3.43 seconds to destroy the instance on the hypervisor.
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.988 247708 DEBUG oslo.service.loopingcall [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.988 247708 DEBUG nova.compute.manager [-] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.989 247708 DEBUG nova.network.neutron [-] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.996 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.996 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.997 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.997 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.997 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.997 247708 WARNING nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-e03f85ae-0195-4e14-9903-e7b06167a724 for instance with vm_state active and task_state deleting.
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.997 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-4eb52c86-78dd-484b-8a6f-30698b76d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.998 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.998 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.998 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.998 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-unplugged-4eb52c86-78dd-484b-8a6f-30698b76d281 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.998 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-4eb52c86-78dd-484b-8a6f-30698b76d281 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.998 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.999 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.999 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.999 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:57 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.999 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:57.999 247708 WARNING nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-4eb52c86-78dd-484b-8a6f-30698b76d281 for instance with vm_state active and task_state deleting.
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.000 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.000 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.000 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.000 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.000 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-unplugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.001 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.001 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.001 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.001 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.001 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.002 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.002 247708 WARNING nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-efbaae5d-81ff-4cc9-8c11-21b46928ff4e for instance with vm_state active and task_state deleting.
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.002 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.002 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.002 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.002 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.003 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-unplugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.003 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.003 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.003 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.003 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.004 247708 DEBUG oslo_concurrency.lockutils [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.004 247708 DEBUG nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.004 247708 WARNING nova.compute.manager [req-8defb4ec-3cf1-49d6-9be2-c457feaebb6d req-99310fbc-dc9f-42ac-a461-6d7e309ff1c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-b30ffaa7-e766-4963-9625-a6ed3c381b9e for instance with vm_state active and task_state deleting.
Jan 31 07:52:58 compute-0 podman[294744]: 2026-01-31 07:52:58.094336625 +0000 UTC m=+0.280683665 container remove b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.101 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc936ca-6ff6-4901-8703-bf94cc0ed905]: (4, ('Sat Jan 31 07:52:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 (b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a)\nb3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a\nSat Jan 31 07:52:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 (b3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a)\nb3c0f63b660456ebf2902d5e1ba1b1cbdca137afd2565e5f9c386ebeb8ce1d6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.104 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f16514bb-4a90-4127-a810-c8cb43b3ca60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.106 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e9c0baa-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.108 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:58 compute-0 kernel: tap0e9c0baa-f0: left promiscuous mode
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.114 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.119 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0aaf50a3-2bfe-45b3-bc2f-ac5036f1f2e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.135 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[568f6985-f656-440d-9a18-fb2c3047aa11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.137 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1317bd4d-6e50-49e9-8853-e4fcdb6d044d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.152 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9c2246-3a4a-4bbc-8ccf-2f893ebce05b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618597, 'reachable_time': 38716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294760, 'error': None, 'target': 'ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.154 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.154 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[39493247-d174-4e2c-a0ec-d8b9faea843c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.155 160028 INFO neutron.agent.ovn.metadata.agent [-] Port efbaae5d-81ff-4cc9-8c11-21b46928ff4e in datapath 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 unbound from our chassis
Jan 31 07:52:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d0e9c0baa\x2df29c\x2d43a1\x2d88e8\x2d1325dbe3d9c2.mount: Deactivated successfully.
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.156 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.157 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[05512b9f-a93c-4065-91ba-dfc0189b09a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.158 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb52c86-78dd-484b-8a6f-30698b76d281 in datapath 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 unbound from our chassis
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.159 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.160 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d52be7c5-dcbc-4e42-85fe-b883efb6b415]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.160 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd in datapath 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2 unbound from our chassis
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.162 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.162 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e91af3d3-7dba-4869-9621-d378a7b49a08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.163 160028 INFO neutron.agent.ovn.metadata.agent [-] Port b30ffaa7-e766-4963-9625-a6ed3c381b9e in datapath 8e83be71-fd66-4d0d-84cf-6e2da4831dad unbound from our chassis
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.164 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e83be71-fd66-4d0d-84cf-6e2da4831dad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.164 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1cec8a-248b-4544-a785-3a4c920a5841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.165 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad namespace which is not needed anymore
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.300 247708 DEBUG nova.compute.manager [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-4b53f464-7fe8-4fdb-9628-1af032b23899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.300 247708 DEBUG oslo_concurrency.lockutils [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.301 247708 DEBUG oslo_concurrency.lockutils [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.301 247708 DEBUG oslo_concurrency.lockutils [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.301 247708 DEBUG nova.compute.manager [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-unplugged-4b53f464-7fe8-4fdb-9628-1af032b23899 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.302 247708 DEBUG nova.compute.manager [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-unplugged-4b53f464-7fe8-4fdb-9628-1af032b23899 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.302 247708 DEBUG nova.compute.manager [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.302 247708 DEBUG oslo_concurrency.lockutils [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9f390c61-950a-4c26-8733-d43d910f2430-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.302 247708 DEBUG oslo_concurrency.lockutils [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.303 247708 DEBUG oslo_concurrency.lockutils [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.303 247708 DEBUG nova.compute.manager [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] No waiting events found dispatching network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.303 247708 WARNING nova.compute.manager [req-854b1c67-afd8-49c3-91cf-b77f1377023b req-26b3dfc3-4e13-4a27-9d15-8e745c076019 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received unexpected event network-vif-plugged-4b53f464-7fe8-4fdb-9628-1af032b23899 for instance with vm_state active and task_state deleting.
Jan 31 07:52:58 compute-0 neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad[293548]: [NOTICE]   (293552) : haproxy version is 2.8.14-c23fe91
Jan 31 07:52:58 compute-0 neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad[293548]: [NOTICE]   (293552) : path to executable is /usr/sbin/haproxy
Jan 31 07:52:58 compute-0 neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad[293548]: [WARNING]  (293552) : Exiting Master process...
Jan 31 07:52:58 compute-0 neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad[293548]: [WARNING]  (293552) : Exiting Master process...
Jan 31 07:52:58 compute-0 neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad[293548]: [ALERT]    (293552) : Current worker (293554) exited with code 143 (Terminated)
Jan 31 07:52:58 compute-0 neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad[293548]: [WARNING]  (293552) : All workers exited. Exiting... (0)
Jan 31 07:52:58 compute-0 systemd[1]: libpod-bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af.scope: Deactivated successfully.
Jan 31 07:52:58 compute-0 podman[294778]: 2026-01-31 07:52:58.529276128 +0000 UTC m=+0.284446848 container died bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:52:58 compute-0 ceph-mon[74496]: pgmap v1671: 305 pgs: 305 active+clean; 240 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Jan 31 07:52:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1258146455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af-userdata-shm.mount: Deactivated successfully.
Jan 31 07:52:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-71c68d507f7ad38bd9dfdbc6abde5bb8bb7682992430579072a32e414e49bdb6-merged.mount: Deactivated successfully.
Jan 31 07:52:58 compute-0 podman[294778]: 2026-01-31 07:52:58.690432605 +0000 UTC m=+0.445603325 container cleanup bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:52:58 compute-0 systemd[1]: libpod-conmon-bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af.scope: Deactivated successfully.
Jan 31 07:52:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:52:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:58.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:52:58 compute-0 podman[294807]: 2026-01-31 07:52:58.9776743 +0000 UTC m=+0.263951896 container remove bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.983 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1cb0e4-a96e-406b-857b-ecac2da5c82d]: (4, ('Sat Jan 31 07:52:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad (bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af)\nbf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af\nSat Jan 31 07:52:58 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad (bf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af)\nbf1fa972c7898a80e844e6872e5449946293f1fb765f2b33baad1080e23953af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.986 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5e78ffbf-2c8c-4e66-adab-4e2bb4fc982c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:58.988 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e83be71-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.990 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:58 compute-0 kernel: tap8e83be71-f0: left promiscuous mode
Jan 31 07:52:58 compute-0 nova_compute[247704]: 2026-01-31 07:52:58.999 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.002 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c05bc24d-219b-4b24-8442-75a3c0a1b6f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.024 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f23332d3-7279-4415-861a-bb9b054c367a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.025 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0faf58-662f-4b55-9bba-0d900d2d39f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 286 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.043 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[38ba8d98-ee9b-45c8-87b6-063a1d70bad8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618739, 'reachable_time': 38933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294818, 'error': None, 'target': 'ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.045 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e83be71-fd66-4d0d-84cf-6e2da4831dad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.045 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[7a945d01-dc63-4dbc-a10c-95b9d741073f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.046 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4b53f464-7fe8-4fdb-9628-1af032b23899 in datapath 8e83be71-fd66-4d0d-84cf-6e2da4831dad unbound from our chassis
Jan 31 07:52:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e83be71\x2dfd66\x2d4d0d\x2d84cf\x2d6e2da4831dad.mount: Deactivated successfully.
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.048 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e83be71-fd66-4d0d-84cf-6e2da4831dad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:52:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:52:59.049 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8b795c8f-bd5f-4b38-b99f-34bf89a07236]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:52:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:52:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:52:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:59.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:52:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1360236136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:52:59 compute-0 ceph-mon[74496]: pgmap v1672: 305 pgs: 305 active+clean; 286 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Jan 31 07:53:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:00.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 292 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Jan 31 07:53:01 compute-0 nova_compute[247704]: 2026-01-31 07:53:01.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:01.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:02 compute-0 ceph-mon[74496]: pgmap v1673: 305 pgs: 305 active+clean; 292 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Jan 31 07:53:02 compute-0 nova_compute[247704]: 2026-01-31 07:53:02.833 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:02.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 293 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 31 07:53:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:03.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:04 compute-0 ceph-mon[74496]: pgmap v1674: 305 pgs: 305 active+clean; 293 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 31 07:53:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:04.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 296 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.1 MiB/s wr, 203 op/s
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.363 247708 DEBUG nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-deleted-3c7e8d6d-d8dc-4023-8967-e09698c9d8bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.363 247708 INFO nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Neutron deleted interface 3c7e8d6d-d8dc-4023-8967-e09698c9d8bd; detaching it from the instance and deleting it from the info cache
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.363 247708 DEBUG nova.network.neutron [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [{"id": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "address": "fa:16:3e:0e:06:0e", "network": {"id": "a3752940-fba8-4389-b589-ddf451dddc0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-567791171-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf70ffad1-6c", "ovs_interfaceid": "f70ffad1-6c67-4e2f-9488-5f51be8ca30f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.394 247708 DEBUG nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Detach interface failed, port_id=3c7e8d6d-d8dc-4023-8967-e09698c9d8bd, reason: Instance 9f390c61-950a-4c26-8733-d43d910f2430 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.395 247708 DEBUG nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-deleted-f70ffad1-6c67-4e2f-9488-5f51be8ca30f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.395 247708 INFO nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Neutron deleted interface f70ffad1-6c67-4e2f-9488-5f51be8ca30f; detaching it from the instance and deleting it from the info cache
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.396 247708 DEBUG nova.network.neutron [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [{"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "address": "fa:16:3e:d4:67:df", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb30ffaa7-e7", "ovs_interfaceid": "b30ffaa7-e766-4963-9625-a6ed3c381b9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.417 247708 DEBUG nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Detach interface failed, port_id=f70ffad1-6c67-4e2f-9488-5f51be8ca30f, reason: Instance 9f390c61-950a-4c26-8733-d43d910f2430 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.418 247708 DEBUG nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-deleted-b30ffaa7-e766-4963-9625-a6ed3c381b9e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.418 247708 INFO nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Neutron deleted interface b30ffaa7-e766-4963-9625-a6ed3c381b9e; detaching it from the instance and deleting it from the info cache
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.419 247708 DEBUG nova.network.neutron [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [{"id": "e03f85ae-0195-4e14-9903-e7b06167a724", "address": "fa:16:3e:1e:8e:bc", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.58", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape03f85ae-01", "ovs_interfaceid": "e03f85ae-0195-4e14-9903-e7b06167a724", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "4eb52c86-78dd-484b-8a6f-30698b76d281", "address": "fa:16:3e:a9:1d:84", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.171", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb52c86-78", "ovs_interfaceid": "4eb52c86-78dd-484b-8a6f-30698b76d281", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "address": "fa:16:3e:2a:f2:bf", "network": {"id": "0e9c0baa-f29c-43a1-88e8-1325dbe3d9c2", "bridge": "br-int", "label": "tempest-device-tagging-net1-374483330", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefbaae5d-81", "ovs_interfaceid": "efbaae5d-81ff-4cc9-8c11-21b46928ff4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4b53f464-7fe8-4fdb-9628-1af032b23899", "address": "fa:16:3e:57:74:ed", "network": {"id": "8e83be71-fd66-4d0d-84cf-6e2da4831dad", "bridge": "br-int", "label": "tempest-device-tagging-net2-323388708", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3264618ebdfa4478919141d5d3c4d3b3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b53f464-7f", "ovs_interfaceid": "4b53f464-7fe8-4fdb-9628-1af032b23899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:05 compute-0 nova_compute[247704]: 2026-01-31 07:53:05.445 247708 DEBUG nova.compute.manager [req-a6ce731f-d537-42c7-8a47-4dcbdc236cb5 req-014edec7-cd60-41a5-8173-9e98c9a4b327 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Detach interface failed, port_id=b30ffaa7-e766-4963-9625-a6ed3c381b9e, reason: Instance 9f390c61-950a-4c26-8733-d43d910f2430 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:53:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:05.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:05 compute-0 podman[294827]: 2026-01-31 07:53:05.902374296 +0000 UTC m=+0.068353075 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:53:06 compute-0 nova_compute[247704]: 2026-01-31 07:53:06.066 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:06 compute-0 ceph-mon[74496]: pgmap v1675: 305 pgs: 305 active+clean; 296 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.1 MiB/s wr, 203 op/s
Jan 31 07:53:06 compute-0 nova_compute[247704]: 2026-01-31 07:53:06.422 247708 DEBUG nova.network.neutron [-] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:53:06 compute-0 nova_compute[247704]: 2026-01-31 07:53:06.440 247708 INFO nova.compute.manager [-] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Took 8.45 seconds to deallocate network for instance.
Jan 31 07:53:06 compute-0 nova_compute[247704]: 2026-01-31 07:53:06.567 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:06.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 310 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 234 op/s
Jan 31 07:53:07 compute-0 nova_compute[247704]: 2026-01-31 07:53:07.461 247708 INFO nova.compute.manager [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Took 1.02 seconds to detach 3 volumes for instance.
Jan 31 07:53:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:07.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:07 compute-0 nova_compute[247704]: 2026-01-31 07:53:07.694 247708 DEBUG nova.compute.manager [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-deleted-4b53f464-7fe8-4fdb-9628-1af032b23899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:07 compute-0 nova_compute[247704]: 2026-01-31 07:53:07.694 247708 DEBUG nova.compute.manager [req-85af8bf4-fa23-4661-9d67-828626aca27d req-2dfa1421-679a-48ff-88b9-9b6a75f24c43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Received event network-vif-deleted-efbaae5d-81ff-4cc9-8c11-21b46928ff4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:53:07 compute-0 nova_compute[247704]: 2026-01-31 07:53:07.835 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:07 compute-0 nova_compute[247704]: 2026-01-31 07:53:07.901 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:07 compute-0 nova_compute[247704]: 2026-01-31 07:53:07.902 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:07 compute-0 nova_compute[247704]: 2026-01-31 07:53:07.965 247708 DEBUG oslo_concurrency.processutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:53:08 compute-0 sshd-session[294849]: Invalid user solv from 45.148.10.240 port 41484
Jan 31 07:53:08 compute-0 ceph-mon[74496]: pgmap v1676: 305 pgs: 305 active+clean; 310 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 234 op/s
Jan 31 07:53:08 compute-0 sshd-session[294849]: Connection closed by invalid user solv 45.148.10.240 port 41484 [preauth]
Jan 31 07:53:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:53:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758324070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:08 compute-0 nova_compute[247704]: 2026-01-31 07:53:08.464 247708 DEBUG oslo_concurrency.processutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:53:08 compute-0 nova_compute[247704]: 2026-01-31 07:53:08.474 247708 DEBUG nova.compute.provider_tree [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:53:08 compute-0 nova_compute[247704]: 2026-01-31 07:53:08.506 247708 DEBUG nova.scheduler.client.report [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:53:08 compute-0 nova_compute[247704]: 2026-01-31 07:53:08.529 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:08 compute-0 nova_compute[247704]: 2026-01-31 07:53:08.561 247708 INFO nova.scheduler.client.report [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Deleted allocations for instance 9f390c61-950a-4c26-8733-d43d910f2430
Jan 31 07:53:08 compute-0 nova_compute[247704]: 2026-01-31 07:53:08.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:08 compute-0 nova_compute[247704]: 2026-01-31 07:53:08.631 247708 DEBUG oslo_concurrency.lockutils [None req-98cbcdf7-9d53-4a47-adf5-d52570cbe6cc eb0d7106e4b04ac980748beb40d5cedf 3264618ebdfa4478919141d5d3c4d3b3 - - default default] Lock "9f390c61-950a-4c26-8733-d43d910f2430" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:08.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Jan 31 07:53:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3758324070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:09.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:10 compute-0 ceph-mon[74496]: pgmap v1677: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Jan 31 07:53:10 compute-0 nova_compute[247704]: 2026-01-31 07:53:10.870 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845975.8688974, 9f390c61-950a-4c26-8733-d43d910f2430 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:53:10 compute-0 nova_compute[247704]: 2026-01-31 07:53:10.871 247708 INFO nova.compute.manager [-] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] VM Stopped (Lifecycle Event)
Jan 31 07:53:10 compute-0 nova_compute[247704]: 2026-01-31 07:53:10.904 247708 DEBUG nova.compute.manager [None req-b877332c-fe69-41d6-bc49-61d43515a4f2 - - - - - -] [instance: 9f390c61-950a-4c26-8733-d43d910f2430] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:53:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:10.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 31 07:53:11 compute-0 nova_compute[247704]: 2026-01-31 07:53:11.068 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:53:11.162 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:53:11.163 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:53:11.163 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:11 compute-0 nova_compute[247704]: 2026-01-31 07:53:11.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:11 compute-0 nova_compute[247704]: 2026-01-31 07:53:11.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:11 compute-0 nova_compute[247704]: 2026-01-31 07:53:11.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:53:11 compute-0 nova_compute[247704]: 2026-01-31 07:53:11.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:53:11 compute-0 nova_compute[247704]: 2026-01-31 07:53:11.586 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:53:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:53:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:53:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:12 compute-0 ceph-mon[74496]: pgmap v1678: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 31 07:53:12 compute-0 nova_compute[247704]: 2026-01-31 07:53:12.837 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:12.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 31 07:53:13 compute-0 nova_compute[247704]: 2026-01-31 07:53:13.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:13 compute-0 nova_compute[247704]: 2026-01-31 07:53:13.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:13 compute-0 nova_compute[247704]: 2026-01-31 07:53:13.588 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:13 compute-0 nova_compute[247704]: 2026-01-31 07:53:13.589 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:13 compute-0 nova_compute[247704]: 2026-01-31 07:53:13.589 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:13 compute-0 nova_compute[247704]: 2026-01-31 07:53:13.589 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:53:13 compute-0 nova_compute[247704]: 2026-01-31 07:53:13.589 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:53:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:13.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:53:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817034885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.017 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.153 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.154 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4616MB free_disk=20.921649932861328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.154 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.154 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.286 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.287 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.308 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:53:14 compute-0 ceph-mon[74496]: pgmap v1679: 305 pgs: 305 active+clean; 326 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 31 07:53:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3915967884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/817034885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:53:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026578386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.769 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.773 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.800 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.844 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:53:14 compute-0 nova_compute[247704]: 2026-01-31 07:53:14.845 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:53:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:14.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 143 op/s
Jan 31 07:53:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:53:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2769371964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4026578386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2769371964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:15.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:53:15.707 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:15 compute-0 nova_compute[247704]: 2026-01-31 07:53:15.707 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:53:15.708 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:53:15 compute-0 nova_compute[247704]: 2026-01-31 07:53:15.845 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:15 compute-0 nova_compute[247704]: 2026-01-31 07:53:15.846 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:15 compute-0 nova_compute[247704]: 2026-01-31 07:53:15.846 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:53:16 compute-0 nova_compute[247704]: 2026-01-31 07:53:16.070 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:16 compute-0 sudo[294922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:16 compute-0 sudo[294922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:16 compute-0 sudo[294922]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:16 compute-0 sudo[294947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:16 compute-0 sudo[294947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:16 compute-0 sudo[294947]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:16 compute-0 nova_compute[247704]: 2026-01-31 07:53:16.467 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:16 compute-0 ceph-mon[74496]: pgmap v1680: 305 pgs: 305 active+clean; 339 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 143 op/s
Jan 31 07:53:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:16.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 326 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 135 op/s
Jan 31 07:53:17 compute-0 nova_compute[247704]: 2026-01-31 07:53:17.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:53:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:53:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:17.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:53:17 compute-0 nova_compute[247704]: 2026-01-31 07:53:17.838 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:17 compute-0 ceph-mon[74496]: pgmap v1681: 305 pgs: 305 active+clean; 326 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 135 op/s
Jan 31 07:53:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2029330616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2228712792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3412492290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3412492290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:18.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 278 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 165 KiB/s rd, 2.9 MiB/s wr, 97 op/s
Jan 31 07:53:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1793600199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3412492290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3412492290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:19.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:53:20
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.log']
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:53:20 compute-0 ceph-mon[74496]: pgmap v1682: 305 pgs: 305 active+clean; 278 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 165 KiB/s rd, 2.9 MiB/s wr, 97 op/s
Jan 31 07:53:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/556546885' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/556546885' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:53:20 compute-0 podman[294974]: 2026-01-31 07:53:20.920285386 +0000 UTC m=+0.094976667 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:53:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:20.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 248 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 74 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 31 07:53:21 compute-0 nova_compute[247704]: 2026-01-31 07:53:21.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:53:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2781719887' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:53:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2781719887' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1717920373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:53:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2781719887' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2781719887' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:21.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:22 compute-0 ceph-mon[74496]: pgmap v1683: 305 pgs: 305 active+clean; 248 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 74 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Jan 31 07:53:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2052022062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2052022062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:22 compute-0 nova_compute[247704]: 2026-01-31 07:53:22.840 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.004000096s ======
Jan 31 07:53:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:22.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000096s
Jan 31 07:53:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 182 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 31 07:53:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:23.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:53:23.712 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:53:24 compute-0 ceph-mon[74496]: pgmap v1684: 305 pgs: 305 active+clean; 182 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Jan 31 07:53:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:53:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:24.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:53:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 147 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Jan 31 07:53:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:25.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:26 compute-0 nova_compute[247704]: 2026-01-31 07:53:26.074 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:26 compute-0 ceph-mon[74496]: pgmap v1685: 305 pgs: 305 active+clean; 147 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 2.0 MiB/s wr, 114 op/s
Jan 31 07:53:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:26.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 1.1 MiB/s wr, 115 op/s
Jan 31 07:53:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:27.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:27 compute-0 nova_compute[247704]: 2026-01-31 07:53:27.842 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:27 compute-0 ceph-mon[74496]: pgmap v1686: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 1.1 MiB/s wr, 115 op/s
Jan 31 07:53:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:28.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 777 KiB/s wr, 82 op/s
Jan 31 07:53:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:29.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:30 compute-0 ceph-mon[74496]: pgmap v1687: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 777 KiB/s wr, 82 op/s
Jan 31 07:53:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:30.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:31 compute-0 nova_compute[247704]: 2026-01-31 07:53:31.076 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 9.2 KiB/s wr, 57 op/s
Jan 31 07:53:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 07:53:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:31.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 07:53:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:32 compute-0 ceph-mon[74496]: pgmap v1688: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 9.2 KiB/s wr, 57 op/s
Jan 31 07:53:32 compute-0 nova_compute[247704]: 2026-01-31 07:53:32.858 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:53:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:32.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:53:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 7.9 KiB/s wr, 43 op/s
Jan 31 07:53:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:33.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:34 compute-0 ceph-mon[74496]: pgmap v1689: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 7.9 KiB/s wr, 43 op/s
Jan 31 07:53:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:34.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 27 op/s
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021763185138656245 of space, bias 1.0, pg target 0.6528955541596874 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:53:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:53:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:35.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:36 compute-0 nova_compute[247704]: 2026-01-31 07:53:36.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:36 compute-0 sudo[295008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:36 compute-0 sudo[295008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:36 compute-0 sudo[295008]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:36 compute-0 podman[295032]: 2026-01-31 07:53:36.475926426 +0000 UTC m=+0.065984767 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 07:53:36 compute-0 sudo[295044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:36 compute-0 sudo[295044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:36 compute-0 sudo[295044]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:36.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 8.3 KiB/s wr, 13 op/s
Jan 31 07:53:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:37 compute-0 ceph-mon[74496]: pgmap v1690: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 11 KiB/s wr, 27 op/s
Jan 31 07:53:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:37.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:37 compute-0 nova_compute[247704]: 2026-01-31 07:53:37.860 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:37 compute-0 sudo[295078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:37 compute-0 sudo[295078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:37 compute-0 sudo[295078]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:37 compute-0 sudo[295103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:53:37 compute-0 sudo[295103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:37 compute-0 sudo[295103]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:37 compute-0 sudo[295128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:37 compute-0 sudo[295128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:38 compute-0 sudo[295128]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:38 compute-0 sudo[295153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 07:53:38 compute-0 sudo[295153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:38 compute-0 podman[295249]: 2026-01-31 07:53:38.540528433 +0000 UTC m=+0.072540716 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:53:38 compute-0 ceph-mon[74496]: pgmap v1691: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 8.3 KiB/s wr, 13 op/s
Jan 31 07:53:38 compute-0 podman[295249]: 2026-01-31 07:53:38.699317703 +0000 UTC m=+0.231330016 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:53:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:53:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:38.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:53:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 7.7 KiB/s wr, 1 op/s
Jan 31 07:53:39 compute-0 podman[295398]: 2026-01-31 07:53:39.363021369 +0000 UTC m=+0.065149157 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:53:39 compute-0 podman[295398]: 2026-01-31 07:53:39.376728165 +0000 UTC m=+0.078855903 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 07:53:39 compute-0 podman[295467]: 2026-01-31 07:53:39.601786786 +0000 UTC m=+0.067753210 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.expose-services=, release=1793, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Jan 31 07:53:39 compute-0 podman[295467]: 2026-01-31 07:53:39.616644931 +0000 UTC m=+0.082611295 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vendor=Red Hat, Inc.)
Jan 31 07:53:39 compute-0 sudo[295153]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:53:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:39.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:40 compute-0 ceph-mon[74496]: pgmap v1692: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 7.7 KiB/s wr, 1 op/s
Jan 31 07:53:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:53:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:40 compute-0 sudo[295501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:40 compute-0 sudo[295501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:40 compute-0 sudo[295501]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:40 compute-0 sudo[295526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:53:40 compute-0 sudo[295526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:40 compute-0 sudo[295526]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:40 compute-0 sudo[295551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:40 compute-0 sudo[295551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:40 compute-0 sudo[295551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:40 compute-0 sudo[295576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:53:40 compute-0 sudo[295576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:40.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:41 compute-0 nova_compute[247704]: 2026-01-31 07:53:41.081 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Jan 31 07:53:41 compute-0 sudo[295576]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:53:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:53:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:53:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:53:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:53:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a4e2081c-7233-460f-b521-d01d29e0d070 does not exist
Jan 31 07:53:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 89394091-d934-4b51-a762-95a099173e4e does not exist
Jan 31 07:53:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f530dcac-0a50-4e29-a545-d12f99858db1 does not exist
Jan 31 07:53:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:53:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:53:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:53:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:53:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:53:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:53:41 compute-0 sudo[295632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:41 compute-0 sudo[295632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:41 compute-0 sudo[295632]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:41 compute-0 sudo[295657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:53:41 compute-0 sudo[295657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:41 compute-0 sudo[295657]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:41 compute-0 sudo[295683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:41 compute-0 sudo[295683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:41 compute-0 sudo[295683]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:53:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:53:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:53:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:53:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:53:41 compute-0 sudo[295708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:53:41 compute-0 sudo[295708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:41.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:41 compute-0 podman[295770]: 2026-01-31 07:53:41.82353523 +0000 UTC m=+0.064357895 container create fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_murdock, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:53:41 compute-0 systemd[1]: Started libpod-conmon-fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685.scope.
Jan 31 07:53:41 compute-0 podman[295770]: 2026-01-31 07:53:41.794791089 +0000 UTC m=+0.035613734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:53:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:53:41 compute-0 podman[295770]: 2026-01-31 07:53:41.917543256 +0000 UTC m=+0.158365911 container init fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:53:41 compute-0 podman[295770]: 2026-01-31 07:53:41.927321455 +0000 UTC m=+0.168144120 container start fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:53:41 compute-0 podman[295770]: 2026-01-31 07:53:41.931846816 +0000 UTC m=+0.172669491 container attach fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:53:41 compute-0 dazzling_murdock[295786]: 167 167
Jan 31 07:53:41 compute-0 systemd[1]: libpod-fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685.scope: Deactivated successfully.
Jan 31 07:53:41 compute-0 podman[295770]: 2026-01-31 07:53:41.937606216 +0000 UTC m=+0.178428901 container died fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_murdock, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 07:53:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a04cde7e13a0323b842b37b8051c909c9f2a422e8de62eced74241fd3b587b4-merged.mount: Deactivated successfully.
Jan 31 07:53:41 compute-0 podman[295770]: 2026-01-31 07:53:41.993864466 +0000 UTC m=+0.234687101 container remove fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:53:42 compute-0 systemd[1]: libpod-conmon-fa738a017e46b6724a5b7534e6b14bf17007636f957118233bb82f0deca86685.scope: Deactivated successfully.
Jan 31 07:53:42 compute-0 podman[295811]: 2026-01-31 07:53:42.144449499 +0000 UTC m=+0.049757425 container create 4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 07:53:42 compute-0 systemd[1]: Started libpod-conmon-4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd.scope.
Jan 31 07:53:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1350c0d5ae49a3e29eb25ce15a174afd849d8d10c825f6032771b8d7460e11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1350c0d5ae49a3e29eb25ce15a174afd849d8d10c825f6032771b8d7460e11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1350c0d5ae49a3e29eb25ce15a174afd849d8d10c825f6032771b8d7460e11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1350c0d5ae49a3e29eb25ce15a174afd849d8d10c825f6032771b8d7460e11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1350c0d5ae49a3e29eb25ce15a174afd849d8d10c825f6032771b8d7460e11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:42 compute-0 podman[295811]: 2026-01-31 07:53:42.126093548 +0000 UTC m=+0.031401474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:53:42 compute-0 podman[295811]: 2026-01-31 07:53:42.235618351 +0000 UTC m=+0.140926297 container init 4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:53:42 compute-0 podman[295811]: 2026-01-31 07:53:42.241122554 +0000 UTC m=+0.146430470 container start 4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:53:42 compute-0 podman[295811]: 2026-01-31 07:53:42.244971961 +0000 UTC m=+0.150279927 container attach 4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:53:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:42 compute-0 ceph-mon[74496]: pgmap v1693: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Jan 31 07:53:42 compute-0 nova_compute[247704]: 2026-01-31 07:53:42.863 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:42 compute-0 inspiring_shtern[295827]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:53:42 compute-0 inspiring_shtern[295827]: --> relative data size: 1.0
Jan 31 07:53:42 compute-0 inspiring_shtern[295827]: --> All data devices are unavailable
Jan 31 07:53:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:42.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:43 compute-0 systemd[1]: libpod-4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd.scope: Deactivated successfully.
Jan 31 07:53:43 compute-0 podman[295811]: 2026-01-31 07:53:43.009298913 +0000 UTC m=+0.914606839 container died 4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:53:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.7 KiB/s wr, 0 op/s
Jan 31 07:53:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b1350c0d5ae49a3e29eb25ce15a174afd849d8d10c825f6032771b8d7460e11-merged.mount: Deactivated successfully.
Jan 31 07:53:43 compute-0 podman[295811]: 2026-01-31 07:53:43.148090222 +0000 UTC m=+1.053398138 container remove 4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shtern, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:53:43 compute-0 systemd[1]: libpod-conmon-4d6545712493fe6799a8c2c19f4a1b154d30f56826c193d7e3bfbf9750daf9fd.scope: Deactivated successfully.
Jan 31 07:53:43 compute-0 sudo[295708]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:43 compute-0 sudo[295855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:43 compute-0 sudo[295855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:43 compute-0 sudo[295855]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:43 compute-0 sudo[295880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:53:43 compute-0 sudo[295880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:43 compute-0 sudo[295880]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:43 compute-0 sudo[295905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:43 compute-0 sudo[295905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:43 compute-0 sudo[295905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:43 compute-0 sudo[295931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:53:43 compute-0 sudo[295931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:43.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:43 compute-0 podman[295995]: 2026-01-31 07:53:43.866429674 +0000 UTC m=+0.051847442 container create ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:53:43 compute-0 systemd[1]: Started libpod-conmon-ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279.scope.
Jan 31 07:53:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:53:43 compute-0 podman[295995]: 2026-01-31 07:53:43.849565907 +0000 UTC m=+0.034983695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:53:43 compute-0 podman[295995]: 2026-01-31 07:53:43.943626323 +0000 UTC m=+0.129044111 container init ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:53:43 compute-0 podman[295995]: 2026-01-31 07:53:43.951306266 +0000 UTC m=+0.136724044 container start ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:53:43 compute-0 podman[295995]: 2026-01-31 07:53:43.95507245 +0000 UTC m=+0.140490228 container attach ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:53:43 compute-0 naughty_agnesi[296011]: 167 167
Jan 31 07:53:43 compute-0 systemd[1]: libpod-ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279.scope: Deactivated successfully.
Jan 31 07:53:43 compute-0 podman[295995]: 2026-01-31 07:53:43.956928132 +0000 UTC m=+0.142345900 container died ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 07:53:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-638b65601c0ef405eb097ffb94e50945247285f4a88fddaad0f41c883d0c0984-merged.mount: Deactivated successfully.
Jan 31 07:53:43 compute-0 podman[295995]: 2026-01-31 07:53:43.99479701 +0000 UTC m=+0.180214778 container remove ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:53:43 compute-0 systemd[1]: libpod-conmon-ec532dffc7092534ba136bf58b35dd474f3d3ec2607a4ea193bbc17cacac6279.scope: Deactivated successfully.
Jan 31 07:53:44 compute-0 podman[296035]: 2026-01-31 07:53:44.176358788 +0000 UTC m=+0.053352307 container create b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:53:44 compute-0 systemd[1]: Started libpod-conmon-b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b.scope.
Jan 31 07:53:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c66ecaa62872c5daeb0d8ba62cac39a21f612e04657a5691326a3ba90dc1396/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:44 compute-0 podman[296035]: 2026-01-31 07:53:44.150743563 +0000 UTC m=+0.027737142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c66ecaa62872c5daeb0d8ba62cac39a21f612e04657a5691326a3ba90dc1396/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c66ecaa62872c5daeb0d8ba62cac39a21f612e04657a5691326a3ba90dc1396/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c66ecaa62872c5daeb0d8ba62cac39a21f612e04657a5691326a3ba90dc1396/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:44 compute-0 podman[296035]: 2026-01-31 07:53:44.27512584 +0000 UTC m=+0.152119449 container init b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:53:44 compute-0 podman[296035]: 2026-01-31 07:53:44.285029512 +0000 UTC m=+0.162023061 container start b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:53:44 compute-0 podman[296035]: 2026-01-31 07:53:44.28941883 +0000 UTC m=+0.166412459 container attach b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:53:44 compute-0 ceph-mon[74496]: pgmap v1694: 305 pgs: 305 active+clean; 121 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.7 KiB/s wr, 0 op/s
Jan 31 07:53:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:45.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]: {
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:     "0": [
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:         {
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "devices": [
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "/dev/loop3"
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             ],
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "lv_name": "ceph_lv0",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "lv_size": "7511998464",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "name": "ceph_lv0",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "tags": {
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.cluster_name": "ceph",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.crush_device_class": "",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.encrypted": "0",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.osd_id": "0",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.type": "block",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:                 "ceph.vdo": "0"
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             },
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "type": "block",
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:             "vg_name": "ceph_vg0"
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:         }
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]:     ]
Jan 31 07:53:45 compute-0 peaceful_leakey[296052]: }
Jan 31 07:53:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 07:53:45 compute-0 systemd[1]: libpod-b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b.scope: Deactivated successfully.
Jan 31 07:53:45 compute-0 conmon[296052]: conmon b4c0bbf56eb08201c1cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b.scope/container/memory.events
Jan 31 07:53:45 compute-0 podman[296035]: 2026-01-31 07:53:45.124106358 +0000 UTC m=+1.001099897 container died b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c66ecaa62872c5daeb0d8ba62cac39a21f612e04657a5691326a3ba90dc1396-merged.mount: Deactivated successfully.
Jan 31 07:53:45 compute-0 podman[296035]: 2026-01-31 07:53:45.201219537 +0000 UTC m=+1.078213046 container remove b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leakey, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:53:45 compute-0 systemd[1]: libpod-conmon-b4c0bbf56eb08201c1cbb17e8e77725117e242d703c8c336023b7187b423954b.scope: Deactivated successfully.
Jan 31 07:53:45 compute-0 sudo[295931]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:45 compute-0 sudo[296073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:45 compute-0 sudo[296073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:45 compute-0 sudo[296073]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:45 compute-0 sudo[296098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:53:45 compute-0 sudo[296098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:45 compute-0 sudo[296098]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 07:53:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/454254492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 07:53:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/454254492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:45 compute-0 sudo[296124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:45 compute-0 sudo[296124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:45 compute-0 sudo[296124]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:45 compute-0 sudo[296149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:53:45 compute-0 sudo[296149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/454254492' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:53:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/454254492' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:53:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:45.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:45 compute-0 podman[296214]: 2026-01-31 07:53:45.826957833 +0000 UTC m=+0.075365378 container create a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:53:45 compute-0 systemd[1]: Started libpod-conmon-a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220.scope.
Jan 31 07:53:45 compute-0 podman[296214]: 2026-01-31 07:53:45.784628475 +0000 UTC m=+0.033036010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:53:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:53:45 compute-0 podman[296214]: 2026-01-31 07:53:45.926641787 +0000 UTC m=+0.175049382 container init a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:53:45 compute-0 podman[296214]: 2026-01-31 07:53:45.934956494 +0000 UTC m=+0.183363999 container start a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_greider, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:53:45 compute-0 podman[296214]: 2026-01-31 07:53:45.938970663 +0000 UTC m=+0.187378188 container attach a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_greider, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 07:53:45 compute-0 practical_greider[296230]: 167 167
Jan 31 07:53:45 compute-0 systemd[1]: libpod-a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220.scope: Deactivated successfully.
Jan 31 07:53:45 compute-0 podman[296214]: 2026-01-31 07:53:45.943500605 +0000 UTC m=+0.191908120 container died a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e9b01cb837b344278ed1eea961843a005ce6f9f0e9157d08a3d6c0c22b4458a-merged.mount: Deactivated successfully.
Jan 31 07:53:45 compute-0 podman[296214]: 2026-01-31 07:53:45.983091982 +0000 UTC m=+0.231499487 container remove a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:53:45 compute-0 systemd[1]: libpod-conmon-a6cb114f58f3a1e101472046c454e61e529e9cb5f824086fe0c4e0af6ca43220.scope: Deactivated successfully.
Jan 31 07:53:46 compute-0 nova_compute[247704]: 2026-01-31 07:53:46.088 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:46 compute-0 podman[296252]: 2026-01-31 07:53:46.135383693 +0000 UTC m=+0.045503740 container create 467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:53:46 compute-0 systemd[1]: Started libpod-conmon-467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778.scope.
Jan 31 07:53:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:53:46 compute-0 podman[296252]: 2026-01-31 07:53:46.11338289 +0000 UTC m=+0.023502917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293938c9dd5c47b13d3910d80a4fcf8d214e6c7b2358f9720b686c79bc41ed31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293938c9dd5c47b13d3910d80a4fcf8d214e6c7b2358f9720b686c79bc41ed31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293938c9dd5c47b13d3910d80a4fcf8d214e6c7b2358f9720b686c79bc41ed31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/293938c9dd5c47b13d3910d80a4fcf8d214e6c7b2358f9720b686c79bc41ed31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:53:46 compute-0 podman[296252]: 2026-01-31 07:53:46.228206782 +0000 UTC m=+0.138326839 container init 467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 07:53:46 compute-0 podman[296252]: 2026-01-31 07:53:46.235210619 +0000 UTC m=+0.145330636 container start 467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:53:46 compute-0 podman[296252]: 2026-01-31 07:53:46.238958294 +0000 UTC m=+0.149078351 container attach 467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 07:53:46 compute-0 ceph-mon[74496]: pgmap v1695: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 07:53:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:47.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:47 compute-0 competent_khayyam[296269]: {
Jan 31 07:53:47 compute-0 competent_khayyam[296269]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:53:47 compute-0 competent_khayyam[296269]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:53:47 compute-0 competent_khayyam[296269]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:53:47 compute-0 competent_khayyam[296269]:         "osd_id": 0,
Jan 31 07:53:47 compute-0 competent_khayyam[296269]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:53:47 compute-0 competent_khayyam[296269]:         "type": "bluestore"
Jan 31 07:53:47 compute-0 competent_khayyam[296269]:     }
Jan 31 07:53:47 compute-0 competent_khayyam[296269]: }
Jan 31 07:53:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Jan 31 07:53:47 compute-0 systemd[1]: libpod-467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778.scope: Deactivated successfully.
Jan 31 07:53:47 compute-0 podman[296252]: 2026-01-31 07:53:47.113116116 +0000 UTC m=+1.023236143 container died 467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 07:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-293938c9dd5c47b13d3910d80a4fcf8d214e6c7b2358f9720b686c79bc41ed31-merged.mount: Deactivated successfully.
Jan 31 07:53:47 compute-0 podman[296252]: 2026-01-31 07:53:47.181643381 +0000 UTC m=+1.091763388 container remove 467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:53:47 compute-0 systemd[1]: libpod-conmon-467b031e3f49ed292bd258b07df740ea4a584a15f20dfac8c8d9959d8c4b1778.scope: Deactivated successfully.
Jan 31 07:53:47 compute-0 sudo[296149]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:53:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:53:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cd5b9e38-5c4b-403f-8870-c82fc6fd2d68 does not exist
Jan 31 07:53:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9126d20c-4e85-4a80-aee0-894136ca7213 does not exist
Jan 31 07:53:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6e52fc33-ae24-411d-b75e-ce3e0e7a53da does not exist
Jan 31 07:53:47 compute-0 sudo[296305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:47 compute-0 sudo[296305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:47 compute-0 sudo[296305]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:47 compute-0 sudo[296330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:53:47 compute-0 sudo[296330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:47 compute-0 sudo[296330]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:47 compute-0 nova_compute[247704]: 2026-01-31 07:53:47.864 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:48 compute-0 ceph-mon[74496]: pgmap v1696: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Jan 31 07:53:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:53:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:49.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:53:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:49.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:53:50 compute-0 ceph-mon[74496]: pgmap v1697: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:53:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:51.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:53:51 compute-0 nova_compute[247704]: 2026-01-31 07:53:51.091 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:53:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:51.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:53:51 compute-0 ceph-mon[74496]: pgmap v1698: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:51 compute-0 podman[296357]: 2026-01-31 07:53:51.978365906 +0000 UTC m=+0.137893730 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:53:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:52 compute-0 nova_compute[247704]: 2026-01-31 07:53:52.866 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:53:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:53.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:53:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:53:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:53.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:53:54 compute-0 ceph-mon[74496]: pgmap v1699: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 07:53:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:55.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s wr, 0 op/s
Jan 31 07:53:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:53:56 compute-0 nova_compute[247704]: 2026-01-31 07:53:56.095 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:56 compute-0 ceph-mon[74496]: pgmap v1700: 305 pgs: 305 active+clean; 121 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s wr, 0 op/s
Jan 31 07:53:56 compute-0 sudo[296386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:56 compute-0 sudo[296386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:56 compute-0 sudo[296386]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:56 compute-0 sudo[296411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:53:56 compute-0 sudo[296411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:53:56 compute-0 sudo[296411]: pam_unix(sudo:session): session closed for user root
Jan 31 07:53:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:53:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:57.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:53:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 94 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 3.4 KiB/s rd, 5.0 KiB/s wr, 6 op/s
Jan 31 07:53:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:53:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:53:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:53:57 compute-0 nova_compute[247704]: 2026-01-31 07:53:57.869 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:53:57.926 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:53:57 compute-0 nova_compute[247704]: 2026-01-31 07:53:57.928 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:53:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:53:57.928 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:53:58 compute-0 ceph-mon[74496]: pgmap v1701: 305 pgs: 305 active+clean; 94 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 3.4 KiB/s rd, 5.0 KiB/s wr, 6 op/s
Jan 31 07:53:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:53:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:53:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 42 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Jan 31 07:53:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:53:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:53:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:00 compute-0 ceph-mon[74496]: pgmap v1702: 305 pgs: 305 active+clean; 42 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Jan 31 07:54:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:01 compute-0 nova_compute[247704]: 2026-01-31 07:54:01.097 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:01.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:02 compute-0 ceph-mon[74496]: pgmap v1703: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:02 compute-0 nova_compute[247704]: 2026-01-31 07:54:02.871 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/320822183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:03.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:04 compute-0 ceph-mon[74496]: pgmap v1704: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:05.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:05 compute-0 ceph-mon[74496]: pgmap v1705: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 07:54:06 compute-0 nova_compute[247704]: 2026-01-31 07:54:06.099 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:06 compute-0 podman[296443]: 2026-01-31 07:54:06.912766162 +0000 UTC m=+0.085616369 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 07:54:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:54:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:07 compute-0 nova_compute[247704]: 2026-01-31 07:54:07.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:07.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:07 compute-0 nova_compute[247704]: 2026-01-31 07:54:07.874 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:54:07.932 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:54:08 compute-0 ceph-mon[74496]: pgmap v1706: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 07:54:08 compute-0 nova_compute[247704]: 2026-01-31 07:54:08.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 21 op/s
Jan 31 07:54:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:09.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:10 compute-0 ceph-mon[74496]: pgmap v1707: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 21 op/s
Jan 31 07:54:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:11.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:11 compute-0 nova_compute[247704]: 2026-01-31 07:54:11.102 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Jan 31 07:54:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:54:11.163 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:54:11.163 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:54:11.163 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:11 compute-0 nova_compute[247704]: 2026-01-31 07:54:11.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:11.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:12 compute-0 nova_compute[247704]: 2026-01-31 07:54:12.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:12 compute-0 nova_compute[247704]: 2026-01-31 07:54:12.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:54:12 compute-0 nova_compute[247704]: 2026-01-31 07:54:12.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:54:12 compute-0 ceph-mon[74496]: pgmap v1708: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Jan 31 07:54:12 compute-0 nova_compute[247704]: 2026-01-31 07:54:12.876 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:13 compute-0 nova_compute[247704]: 2026-01-31 07:54:13.277 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:54:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:13.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:14 compute-0 nova_compute[247704]: 2026-01-31 07:54:14.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:14 compute-0 ceph-mon[74496]: pgmap v1709: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/895703752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:15.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:15 compute-0 nova_compute[247704]: 2026-01-31 07:54:15.516 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:15 compute-0 nova_compute[247704]: 2026-01-31 07:54:15.516 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:15 compute-0 nova_compute[247704]: 2026-01-31 07:54:15.516 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:15 compute-0 nova_compute[247704]: 2026-01-31 07:54:15.517 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:54:15 compute-0 nova_compute[247704]: 2026-01-31 07:54:15.517 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:15.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:15 compute-0 ceph-mon[74496]: pgmap v1710: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:16 compute-0 nova_compute[247704]: 2026-01-31 07:54:16.104 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:54:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400977389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:16 compute-0 nova_compute[247704]: 2026-01-31 07:54:16.145 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:16 compute-0 nova_compute[247704]: 2026-01-31 07:54:16.297 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:54:16 compute-0 nova_compute[247704]: 2026-01-31 07:54:16.298 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4640MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:54:16 compute-0 nova_compute[247704]: 2026-01-31 07:54:16.298 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:16 compute-0 nova_compute[247704]: 2026-01-31 07:54:16.299 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:16 compute-0 sudo[296489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:16 compute-0 sudo[296489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:16 compute-0 sudo[296489]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:16 compute-0 sudo[296514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:16 compute-0 sudo[296514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:16 compute-0 sudo[296514]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2400977389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/823784031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:17.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:17 compute-0 nova_compute[247704]: 2026-01-31 07:54:17.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:18 compute-0 ceph-mon[74496]: pgmap v1711: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:19.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3230444530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:19.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:20 compute-0 nova_compute[247704]: 2026-01-31 07:54:20.033 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:54:20 compute-0 nova_compute[247704]: 2026-01-31 07:54:20.033 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:54:20
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'vms', 'volumes', 'default.rgw.control']
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:54:20 compute-0 ceph-mon[74496]: pgmap v1712: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:21.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:21 compute-0 nova_compute[247704]: 2026-01-31 07:54:21.107 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:21 compute-0 nova_compute[247704]: 2026-01-31 07:54:21.224 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:54:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039010121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:21 compute-0 nova_compute[247704]: 2026-01-31 07:54:21.677 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:21 compute-0 nova_compute[247704]: 2026-01-31 07:54:21.682 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:54:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:21.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3604665836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:21 compute-0 ceph-mon[74496]: pgmap v1713: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3039010121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:22 compute-0 nova_compute[247704]: 2026-01-31 07:54:22.164 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:54:22 compute-0 nova_compute[247704]: 2026-01-31 07:54:22.166 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:54:22 compute-0 nova_compute[247704]: 2026-01-31 07:54:22.166 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:22 compute-0 nova_compute[247704]: 2026-01-31 07:54:22.879 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:22 compute-0 podman[296564]: 2026-01-31 07:54:22.923682664 +0000 UTC m=+0.090278073 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 07:54:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:23 compute-0 nova_compute[247704]: 2026-01-31 07:54:23.167 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:23 compute-0 nova_compute[247704]: 2026-01-31 07:54:23.757 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:23 compute-0 nova_compute[247704]: 2026-01-31 07:54:23.758 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:23 compute-0 nova_compute[247704]: 2026-01-31 07:54:23.759 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:23 compute-0 nova_compute[247704]: 2026-01-31 07:54:23.759 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:54:23 compute-0 nova_compute[247704]: 2026-01-31 07:54:23.760 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:54:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:23.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:24 compute-0 ceph-mon[74496]: pgmap v1714: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:25.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:25.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:26 compute-0 nova_compute[247704]: 2026-01-31 07:54:26.109 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:26 compute-0 ceph-mon[74496]: pgmap v1715: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:27.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:27.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:27 compute-0 nova_compute[247704]: 2026-01-31 07:54:27.883 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:28 compute-0 ceph-mon[74496]: pgmap v1716: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:29.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:30 compute-0 ceph-mon[74496]: pgmap v1717: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:31.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:31 compute-0 nova_compute[247704]: 2026-01-31 07:54:31.112 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:31 compute-0 ovn_controller[149457]: 2026-01-31T07:54:31Z|00285|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 07:54:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:31.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:32 compute-0 ceph-mon[74496]: pgmap v1718: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:32 compute-0 nova_compute[247704]: 2026-01-31 07:54:32.883 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:33 compute-0 ceph-mon[74496]: pgmap v1719: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:33.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:54:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8572 writes, 37K keys, 8570 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8572 writes, 8570 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1641 writes, 7293 keys, 1640 commit groups, 1.0 writes per commit group, ingest: 10.71 MB, 0.02 MB/s
                                           Interval WAL: 1641 writes, 1640 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     78.6      0.61              0.12        22    0.028       0      0       0.0       0.0
                                             L6      1/0    9.52 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9     74.3     61.7      3.05              0.43        21    0.145    114K    12K       0.0       0.0
                                            Sum      1/0    9.52 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     61.9     64.5      3.66              0.55        43    0.085    114K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.7     67.8     69.3      0.88              0.12        10    0.088     34K   3149       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     74.3     61.7      3.05              0.43        21    0.145    114K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     79.1      0.61              0.12        21    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.047, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.08 MB/s write, 0.22 GB read, 0.08 MB/s read, 3.7 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 25.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000355 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1490,24.25 MB,7.97819%) FilterBlock(44,321.80 KB,0.103373%) IndexBlock(44,583.08 KB,0.187307%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 07:54:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:54:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:54:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:35.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:36 compute-0 nova_compute[247704]: 2026-01-31 07:54:36.115 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:36 compute-0 ceph-mon[74496]: pgmap v1720: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:36 compute-0 sudo[296598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:36 compute-0 sudo[296598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:36 compute-0 sudo[296598]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:37 compute-0 sudo[296629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:37 compute-0 sudo[296629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:37 compute-0 sudo[296629]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:37 compute-0 podman[296622]: 2026-01-31 07:54:37.047102925 +0000 UTC m=+0.073904187 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 07:54:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:38 compute-0 nova_compute[247704]: 2026-01-31 07:54:38.711 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:38.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:39 compute-0 ceph-mon[74496]: pgmap v1721: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:40 compute-0 ceph-mon[74496]: pgmap v1722: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:40.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:41 compute-0 nova_compute[247704]: 2026-01-31 07:54:41.117 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:43 compute-0 ceph-mon[74496]: pgmap v1723: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:43 compute-0 nova_compute[247704]: 2026-01-31 07:54:43.713 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:44 compute-0 ceph-mon[74496]: pgmap v1724: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:44.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:45.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2192602938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:54:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2192602938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:54:46 compute-0 nova_compute[247704]: 2026-01-31 07:54:46.121 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:46 compute-0 ceph-mon[74496]: pgmap v1725: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:47 compute-0 sudo[296676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:47 compute-0 sudo[296676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:47 compute-0 sudo[296676]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:47 compute-0 sudo[296701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:54:47 compute-0 sudo[296701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:47 compute-0 sudo[296701]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:48 compute-0 sudo[296726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:48 compute-0 sudo[296726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:48 compute-0 sudo[296726]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:48 compute-0 ceph-mon[74496]: pgmap v1726: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:48 compute-0 sudo[296751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:54:48 compute-0 sudo[296751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:48 compute-0 sudo[296751]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:54:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:54:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:54:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:54:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:54:48 compute-0 nova_compute[247704]: 2026-01-31 07:54:48.715 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:54:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e7e20421-d9ce-444b-a614-37f41ddf3e49 does not exist
Jan 31 07:54:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 63afd3f1-63e3-496a-a726-6c1f2624a25a does not exist
Jan 31 07:54:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 90288253-31f2-4464-8cc4-ffdacca65068 does not exist
Jan 31 07:54:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:54:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:54:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:54:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:54:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:54:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:54:48 compute-0 sudo[296808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:48 compute-0 sudo[296808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:49 compute-0 sudo[296808]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:49 compute-0 sudo[296833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:54:49 compute-0 sudo[296833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:49 compute-0 sudo[296833]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:49 compute-0 sudo[296858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:49 compute-0 sudo[296858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:49 compute-0 sudo[296858]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:49 compute-0 sudo[296883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:54:49 compute-0 sudo[296883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:54:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:54:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:54:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:54:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:54:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:54:49 compute-0 podman[296950]: 2026-01-31 07:54:49.604575273 +0000 UTC m=+0.031314402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:54:49 compute-0 podman[296950]: 2026-01-31 07:54:49.726230009 +0000 UTC m=+0.152969108 container create 210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_joliot, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 07:54:49 compute-0 systemd[1]: Started libpod-conmon-210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27.scope.
Jan 31 07:54:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:54:49 compute-0 podman[296950]: 2026-01-31 07:54:49.86870077 +0000 UTC m=+0.295439959 container init 210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_joliot, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:54:49 compute-0 podman[296950]: 2026-01-31 07:54:49.878687564 +0000 UTC m=+0.305426703 container start 210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_joliot, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:54:49 compute-0 festive_joliot[296966]: 167 167
Jan 31 07:54:49 compute-0 systemd[1]: libpod-210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27.scope: Deactivated successfully.
Jan 31 07:54:49 compute-0 podman[296950]: 2026-01-31 07:54:49.952225432 +0000 UTC m=+0.378964531 container attach 210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_joliot, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:54:49 compute-0 podman[296950]: 2026-01-31 07:54:49.953547761 +0000 UTC m=+0.380286870 container died 210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:54:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f3950762a83eff02ac6d98ec431f656e945c964937774836f5ff2a1ca73bfac-merged.mount: Deactivated successfully.
Jan 31 07:54:50 compute-0 ceph-mon[74496]: pgmap v1727: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:50 compute-0 podman[296950]: 2026-01-31 07:54:50.620794898 +0000 UTC m=+1.047533997 container remove 210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:54:50 compute-0 systemd[1]: libpod-conmon-210f25f2b640950e89f61d55ae5a0750ed013d711011bc97b3cc9b0458ffbf27.scope: Deactivated successfully.
Jan 31 07:54:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:54:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:50.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:54:50 compute-0 podman[296990]: 2026-01-31 07:54:50.796358292 +0000 UTC m=+0.037129014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:54:50 compute-0 podman[296990]: 2026-01-31 07:54:50.962961194 +0000 UTC m=+0.203731866 container create 39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 07:54:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:51.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:51 compute-0 systemd[1]: Started libpod-conmon-39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05.scope.
Jan 31 07:54:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:54:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cdf10bdcc6f7244fdc8af02c3729ad45656f77e8c4f2264ca0d6b8e5f7f1fb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:51 compute-0 nova_compute[247704]: 2026-01-31 07:54:51.126 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cdf10bdcc6f7244fdc8af02c3729ad45656f77e8c4f2264ca0d6b8e5f7f1fb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cdf10bdcc6f7244fdc8af02c3729ad45656f77e8c4f2264ca0d6b8e5f7f1fb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cdf10bdcc6f7244fdc8af02c3729ad45656f77e8c4f2264ca0d6b8e5f7f1fb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cdf10bdcc6f7244fdc8af02c3729ad45656f77e8c4f2264ca0d6b8e5f7f1fb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:51 compute-0 podman[296990]: 2026-01-31 07:54:51.366146436 +0000 UTC m=+0.606917188 container init 39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:54:51 compute-0 podman[296990]: 2026-01-31 07:54:51.375183998 +0000 UTC m=+0.615954670 container start 39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_archimedes, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:54:51 compute-0 podman[296990]: 2026-01-31 07:54:51.441608836 +0000 UTC m=+0.682379488 container attach 39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_archimedes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 07:54:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3774768180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:52 compute-0 peaceful_archimedes[297007]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:54:52 compute-0 peaceful_archimedes[297007]: --> relative data size: 1.0
Jan 31 07:54:52 compute-0 peaceful_archimedes[297007]: --> All data devices are unavailable
Jan 31 07:54:52 compute-0 podman[296990]: 2026-01-31 07:54:52.228039453 +0000 UTC m=+1.468810145 container died 39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_archimedes, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 07:54:52 compute-0 systemd[1]: libpod-39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05.scope: Deactivated successfully.
Jan 31 07:54:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cdf10bdcc6f7244fdc8af02c3729ad45656f77e8c4f2264ca0d6b8e5f7f1fb1-merged.mount: Deactivated successfully.
Jan 31 07:54:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:54:52.718 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:54:52 compute-0 nova_compute[247704]: 2026-01-31 07:54:52.719 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:54:52.723 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:54:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:54:52.725 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:54:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:54:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:53.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:54:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:53 compute-0 ceph-mon[74496]: pgmap v1728: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1505041532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/762709688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:53 compute-0 podman[296990]: 2026-01-31 07:54:53.442687424 +0000 UTC m=+2.683458066 container remove 39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:54:53 compute-0 systemd[1]: libpod-conmon-39e17243e34f051d20350c2b9279bb485541ade5043b571afbb52402bda4dc05.scope: Deactivated successfully.
Jan 31 07:54:53 compute-0 sudo[296883]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:53 compute-0 sudo[297045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:53 compute-0 sudo[297045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:53 compute-0 sudo[297045]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:53 compute-0 podman[297036]: 2026-01-31 07:54:53.62154664 +0000 UTC m=+0.131022475 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:54:53 compute-0 sudo[297087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:54:53 compute-0 sudo[297087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:53 compute-0 sudo[297087]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:53 compute-0 sudo[297112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:53 compute-0 sudo[297112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:53 compute-0 sudo[297112]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:53 compute-0 nova_compute[247704]: 2026-01-31 07:54:53.717 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:53 compute-0 sudo[297138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:54:53 compute-0 sudo[297138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:54 compute-0 podman[297203]: 2026-01-31 07:54:54.110724649 +0000 UTC m=+0.023898976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:54:54 compute-0 podman[297203]: 2026-01-31 07:54:54.451622885 +0000 UTC m=+0.364797222 container create 11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_agnesi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:54:54 compute-0 ceph-mon[74496]: pgmap v1729: 305 pgs: 305 active+clean; 41 MiB data, 679 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:54:54 compute-0 systemd[1]: Started libpod-conmon-11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88.scope.
Jan 31 07:54:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:54:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:54.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:55 compute-0 podman[297203]: 2026-01-31 07:54:55.015352124 +0000 UTC m=+0.928526521 container init 11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_agnesi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:54:55 compute-0 podman[297203]: 2026-01-31 07:54:55.025809418 +0000 UTC m=+0.938983725 container start 11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_agnesi, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 07:54:55 compute-0 upbeat_agnesi[297219]: 167 167
Jan 31 07:54:55 compute-0 systemd[1]: libpod-11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88.scope: Deactivated successfully.
Jan 31 07:54:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:55.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 85 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Jan 31 07:54:55 compute-0 podman[297203]: 2026-01-31 07:54:55.314145087 +0000 UTC m=+1.227319394 container attach 11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:54:55 compute-0 podman[297203]: 2026-01-31 07:54:55.314573528 +0000 UTC m=+1.227747835 container died 11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_agnesi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 07:54:56 compute-0 ceph-mon[74496]: pgmap v1730: 305 pgs: 305 active+clean; 85 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Jan 31 07:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f5fe74647906a6f68f352827c1e5b9a7835624460ac9e117f30fe8e0636dcdc-merged.mount: Deactivated successfully.
Jan 31 07:54:56 compute-0 nova_compute[247704]: 2026-01-31 07:54:56.130 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:56 compute-0 podman[297203]: 2026-01-31 07:54:56.504959674 +0000 UTC m=+2.418134021 container remove 11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:54:56 compute-0 systemd[1]: libpod-conmon-11b0aeca7b9a9376e4012bf1ba896486cdb345a175ea90f52213b1d1aae17a88.scope: Deactivated successfully.
Jan 31 07:54:56 compute-0 nova_compute[247704]: 2026-01-31 07:54:56.674 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:56 compute-0 nova_compute[247704]: 2026-01-31 07:54:56.676 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:56 compute-0 nova_compute[247704]: 2026-01-31 07:54:56.694 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:54:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:56 compute-0 podman[297244]: 2026-01-31 07:54:56.666166435 +0000 UTC m=+0.034527484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:54:56 compute-0 podman[297244]: 2026-01-31 07:54:56.883618036 +0000 UTC m=+0.251979055 container create 6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 07:54:56 compute-0 nova_compute[247704]: 2026-01-31 07:54:56.926 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:54:56 compute-0 nova_compute[247704]: 2026-01-31 07:54:56.927 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:54:56 compute-0 nova_compute[247704]: 2026-01-31 07:54:56.937 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:54:56 compute-0 nova_compute[247704]: 2026-01-31 07:54:56.938 247708 INFO nova.compute.claims [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:54:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:57 compute-0 sudo[297258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:57 compute-0 sudo[297258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:57 compute-0 sudo[297258]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 113 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.9 MiB/s wr, 44 op/s
Jan 31 07:54:57 compute-0 systemd[1]: Started libpod-conmon-6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14.scope.
Jan 31 07:54:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e2d02308b0f7c39b6a8f20384aa9dbefa5fe0f55d68aea4e00df160ef849ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e2d02308b0f7c39b6a8f20384aa9dbefa5fe0f55d68aea4e00df160ef849ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e2d02308b0f7c39b6a8f20384aa9dbefa5fe0f55d68aea4e00df160ef849ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e2d02308b0f7c39b6a8f20384aa9dbefa5fe0f55d68aea4e00df160ef849ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:54:57 compute-0 sudo[297283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:54:57 compute-0 sudo[297283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:54:57 compute-0 sudo[297283]: pam_unix(sudo:session): session closed for user root
Jan 31 07:54:57 compute-0 nova_compute[247704]: 2026-01-31 07:54:57.271 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:54:57 compute-0 podman[297244]: 2026-01-31 07:54:57.330769704 +0000 UTC m=+0.699130763 container init 6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:54:57 compute-0 podman[297244]: 2026-01-31 07:54:57.339023118 +0000 UTC m=+0.707384107 container start 6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:54:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:54:57 compute-0 podman[297244]: 2026-01-31 07:54:57.442713211 +0000 UTC m=+0.811074310 container attach 6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:54:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:54:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307736074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:54:57 compute-0 nova_compute[247704]: 2026-01-31 07:54:57.777 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:54:57 compute-0 nova_compute[247704]: 2026-01-31 07:54:57.784 247708 DEBUG nova.compute.provider_tree [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:54:57 compute-0 nova_compute[247704]: 2026-01-31 07:54:57.892 247708 DEBUG nova.scheduler.client.report [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]: {
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:     "0": [
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:         {
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "devices": [
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "/dev/loop3"
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             ],
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "lv_name": "ceph_lv0",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "lv_size": "7511998464",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "name": "ceph_lv0",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "tags": {
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.cluster_name": "ceph",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.crush_device_class": "",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.encrypted": "0",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.osd_id": "0",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.type": "block",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:                 "ceph.vdo": "0"
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             },
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "type": "block",
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:             "vg_name": "ceph_vg0"
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:         }
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]:     ]
Jan 31 07:54:58 compute-0 elated_varahamihira[297298]: }
Jan 31 07:54:58 compute-0 systemd[1]: libpod-6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14.scope: Deactivated successfully.
Jan 31 07:54:58 compute-0 podman[297244]: 2026-01-31 07:54:58.110539571 +0000 UTC m=+1.478900560 container died 6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:54:58 compute-0 nova_compute[247704]: 2026-01-31 07:54:58.115 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:54:58 compute-0 nova_compute[247704]: 2026-01-31 07:54:58.118 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:54:58 compute-0 nova_compute[247704]: 2026-01-31 07:54:58.382 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:54:58 compute-0 nova_compute[247704]: 2026-01-31 07:54:58.382 247708 DEBUG nova.network.neutron [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:54:58 compute-0 nova_compute[247704]: 2026-01-31 07:54:58.512 247708 INFO nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:54:58 compute-0 ceph-mon[74496]: pgmap v1731: 305 pgs: 305 active+clean; 113 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.9 MiB/s wr, 44 op/s
Jan 31 07:54:58 compute-0 nova_compute[247704]: 2026-01-31 07:54:58.720 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:54:58 compute-0 nova_compute[247704]: 2026-01-31 07:54:58.725 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:54:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6e2d02308b0f7c39b6a8f20384aa9dbefa5fe0f55d68aea4e00df160ef849ac-merged.mount: Deactivated successfully.
Jan 31 07:54:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:54:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:54:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:59.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:54:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 180 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 80 op/s
Jan 31 07:54:59 compute-0 nova_compute[247704]: 2026-01-31 07:54:59.565 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:54:59 compute-0 nova_compute[247704]: 2026-01-31 07:54:59.567 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:54:59 compute-0 nova_compute[247704]: 2026-01-31 07:54:59.568 247708 INFO nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Creating image(s)
Jan 31 07:54:59 compute-0 nova_compute[247704]: 2026-01-31 07:54:59.613 247708 DEBUG nova.storage.rbd_utils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] rbd image 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:59 compute-0 nova_compute[247704]: 2026-01-31 07:54:59.658 247708 DEBUG nova.storage.rbd_utils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] rbd image 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:59 compute-0 nova_compute[247704]: 2026-01-31 07:54:59.938 247708 DEBUG nova.storage.rbd_utils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] rbd image 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:54:59 compute-0 nova_compute[247704]: 2026-01-31 07:54:59.944 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:00 compute-0 podman[297244]: 2026-01-31 07:55:00.02949269 +0000 UTC m=+3.397853679 container remove 6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:55:00 compute-0 nova_compute[247704]: 2026-01-31 07:55:00.049 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:00 compute-0 systemd[1]: libpod-conmon-6b80794a03fa0097ad7b864c1919c30635d7d7f732a1d53326676cad15791d14.scope: Deactivated successfully.
Jan 31 07:55:00 compute-0 nova_compute[247704]: 2026-01-31 07:55:00.051 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:00 compute-0 nova_compute[247704]: 2026-01-31 07:55:00.052 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:00 compute-0 nova_compute[247704]: 2026-01-31 07:55:00.053 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:00 compute-0 sudo[297138]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:00 compute-0 nova_compute[247704]: 2026-01-31 07:55:00.088 247708 DEBUG nova.storage.rbd_utils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] rbd image 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:00 compute-0 nova_compute[247704]: 2026-01-31 07:55:00.092 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:00 compute-0 nova_compute[247704]: 2026-01-31 07:55:00.143 247708 DEBUG nova.policy [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '73dd06404c1f40adb74b624862a57628', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '71d60d9604414f9d88093c29f5f816cb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:55:00 compute-0 sudo[297433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:00 compute-0 sudo[297433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:00 compute-0 sudo[297433]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:00 compute-0 sudo[297473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:55:00 compute-0 sudo[297473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:00 compute-0 sudo[297473]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:00 compute-0 sudo[297498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:00 compute-0 sudo[297498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:00 compute-0 sudo[297498]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:00 compute-0 sudo[297523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:55:00 compute-0 sudo[297523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1307736074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:00 compute-0 ceph-mon[74496]: pgmap v1732: 305 pgs: 305 active+clean; 180 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 80 op/s
Jan 31 07:55:00 compute-0 podman[297587]: 2026-01-31 07:55:00.593872323 +0000 UTC m=+0.035226530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:55:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:00.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:00 compute-0 podman[297587]: 2026-01-31 07:55:00.999577421 +0000 UTC m=+0.440931638 container create 1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:55:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:01.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 180 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 31 07:55:01 compute-0 nova_compute[247704]: 2026-01-31 07:55:01.132 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:01 compute-0 systemd[1]: Started libpod-conmon-1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a.scope.
Jan 31 07:55:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:55:01 compute-0 podman[297587]: 2026-01-31 07:55:01.666061492 +0000 UTC m=+1.107415729 container init 1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:55:01 compute-0 podman[297587]: 2026-01-31 07:55:01.676539136 +0000 UTC m=+1.117893353 container start 1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:55:01 compute-0 agitated_mayer[297607]: 167 167
Jan 31 07:55:01 compute-0 systemd[1]: libpod-1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a.scope: Deactivated successfully.
Jan 31 07:55:01 compute-0 podman[297587]: 2026-01-31 07:55:01.734965895 +0000 UTC m=+1.176320082 container attach 1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 07:55:01 compute-0 podman[297587]: 2026-01-31 07:55:01.735556478 +0000 UTC m=+1.176910665 container died 1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:55:02 compute-0 nova_compute[247704]: 2026-01-31 07:55:02.141 247708 DEBUG nova.network.neutron [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Successfully created port: 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:55:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b70e7bd41c623edd6f2d8c71349758881ff1d7afaff7800a886364e5c186760-merged.mount: Deactivated successfully.
Jan 31 07:55:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:02.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:02 compute-0 podman[297587]: 2026-01-31 07:55:02.94133335 +0000 UTC m=+2.382687577 container remove 1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 07:55:02 compute-0 systemd[1]: libpod-conmon-1c9d1fe131a4248d7f4c22700be44274b92ecba50dbff8aed02685c4a279d98a.scope: Deactivated successfully.
Jan 31 07:55:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:03.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 185 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 5.6 MiB/s wr, 94 op/s
Jan 31 07:55:03 compute-0 podman[297633]: 2026-01-31 07:55:03.11944197 +0000 UTC m=+0.035610849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:55:03 compute-0 podman[297633]: 2026-01-31 07:55:03.260583792 +0000 UTC m=+0.176752631 container create 5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 07:55:03 compute-0 systemd[1]: Started libpod-conmon-5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f.scope.
Jan 31 07:55:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c89811e40888e5ca0f82d78f39816756e1b93e80a3f13e06e00df87e90d6c86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c89811e40888e5ca0f82d78f39816756e1b93e80a3f13e06e00df87e90d6c86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c89811e40888e5ca0f82d78f39816756e1b93e80a3f13e06e00df87e90d6c86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:55:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c89811e40888e5ca0f82d78f39816756e1b93e80a3f13e06e00df87e90d6c86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:55:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/263086270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:03 compute-0 ceph-mon[74496]: pgmap v1733: 305 pgs: 305 active+clean; 180 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 31 07:55:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3259012295' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:03 compute-0 podman[297633]: 2026-01-31 07:55:03.711959803 +0000 UTC m=+0.628128672 container init 5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 07:55:03 compute-0 podman[297633]: 2026-01-31 07:55:03.721753473 +0000 UTC m=+0.637922312 container start 5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:55:03 compute-0 nova_compute[247704]: 2026-01-31 07:55:03.721 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:03 compute-0 podman[297633]: 2026-01-31 07:55:03.889790807 +0000 UTC m=+0.805959746 container attach 5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:55:04 compute-0 nova_compute[247704]: 2026-01-31 07:55:04.310 247708 DEBUG nova.network.neutron [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Successfully updated port: 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:55:04 compute-0 nova_compute[247704]: 2026-01-31 07:55:04.357 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "refresh_cache-8d4f7ec7-14a4-4cd7-9061-4b73e16be807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:55:04 compute-0 nova_compute[247704]: 2026-01-31 07:55:04.357 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquired lock "refresh_cache-8d4f7ec7-14a4-4cd7-9061-4b73e16be807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:04 compute-0 nova_compute[247704]: 2026-01-31 07:55:04.358 247708 DEBUG nova.network.neutron [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]: {
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]:         "osd_id": 0,
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]:         "type": "bluestore"
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]:     }
Jan 31 07:55:04 compute-0 mystifying_liskov[297651]: }
Jan 31 07:55:04 compute-0 systemd[1]: libpod-5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f.scope: Deactivated successfully.
Jan 31 07:55:04 compute-0 podman[297633]: 2026-01-31 07:55:04.596670863 +0000 UTC m=+1.512839732 container died 5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 07:55:04 compute-0 nova_compute[247704]: 2026-01-31 07:55:04.605 247708 DEBUG nova.compute.manager [req-82586142-76b0-4b34-ac1d-2faef665c161 req-d52bb450-270f-47ba-a0e5-0348713f24aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received event network-changed-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:04 compute-0 nova_compute[247704]: 2026-01-31 07:55:04.605 247708 DEBUG nova.compute.manager [req-82586142-76b0-4b34-ac1d-2faef665c161 req-d52bb450-270f-47ba-a0e5-0348713f24aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Refreshing instance network info cache due to event network-changed-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:55:04 compute-0 nova_compute[247704]: 2026-01-31 07:55:04.606 247708 DEBUG oslo_concurrency.lockutils [req-82586142-76b0-4b34-ac1d-2faef665c161 req-d52bb450-270f-47ba-a0e5-0348713f24aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-8d4f7ec7-14a4-4cd7-9061-4b73e16be807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:55:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:05 compute-0 nova_compute[247704]: 2026-01-31 07:55:05.068 247708 DEBUG nova.network.neutron [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:55:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:05.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 202 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 6.2 MiB/s wr, 95 op/s
Jan 31 07:55:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2952688819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:05 compute-0 ceph-mon[74496]: pgmap v1734: 305 pgs: 305 active+clean; 185 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 5.6 MiB/s wr, 94 op/s
Jan 31 07:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c89811e40888e5ca0f82d78f39816756e1b93e80a3f13e06e00df87e90d6c86-merged.mount: Deactivated successfully.
Jan 31 07:55:05 compute-0 nova_compute[247704]: 2026-01-31 07:55:05.893 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:06 compute-0 nova_compute[247704]: 2026-01-31 07:55:06.097 247708 DEBUG nova.storage.rbd_utils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] resizing rbd image 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:55:06 compute-0 nova_compute[247704]: 2026-01-31 07:55:06.726 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:06 compute-0 podman[297633]: 2026-01-31 07:55:06.842717677 +0000 UTC m=+3.758886536 container remove 5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_liskov, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:55:06 compute-0 systemd[1]: libpod-conmon-5b279e6fa659766f0d6182fc66f44cbd41be6568b24682ab0158d11d04336a9f.scope: Deactivated successfully.
Jan 31 07:55:06 compute-0 sudo[297523]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:55:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1662721803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:06 compute-0 ceph-mon[74496]: pgmap v1735: 305 pgs: 305 active+clean; 202 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 6.2 MiB/s wr, 95 op/s
Jan 31 07:55:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:07.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 223 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.2 MiB/s wr, 80 op/s
Jan 31 07:55:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:55:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:55:07 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:55:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7ad26897-6ab4-43c0-940e-b6ac30537f5c does not exist
Jan 31 07:55:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4ad546ce-8473-42fe-b605-85351e156e1f does not exist
Jan 31 07:55:07 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c4262fe2-654e-40c3-af98-af4cde0e03d9 does not exist
Jan 31 07:55:07 compute-0 podman[297741]: 2026-01-31 07:55:07.896828981 +0000 UTC m=+0.064291321 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 07:55:07 compute-0 sudo[297748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:07 compute-0 sudo[297748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:07 compute-0 sudo[297748]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:07 compute-0 sudo[297785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:55:07 compute-0 sudo[297785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:07 compute-0 sudo[297785]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.310 247708 DEBUG nova.network.neutron [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Updating instance_info_cache with network_info: [{"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.410 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Releasing lock "refresh_cache-8d4f7ec7-14a4-4cd7-9061-4b73e16be807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.411 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Instance network_info: |[{"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.411 247708 DEBUG oslo_concurrency.lockutils [req-82586142-76b0-4b34-ac1d-2faef665c161 req-d52bb450-270f-47ba-a0e5-0348713f24aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-8d4f7ec7-14a4-4cd7-9061-4b73e16be807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.411 247708 DEBUG nova.network.neutron [req-82586142-76b0-4b34-ac1d-2faef665c161 req-d52bb450-270f-47ba-a0e5-0348713f24aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Refreshing network info cache for port 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.485 247708 DEBUG nova.objects.instance [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lazy-loading 'migration_context' on Instance uuid 8d4f7ec7-14a4-4cd7-9061-4b73e16be807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.558 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.559 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Ensure instance console log exists: /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.559 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.559 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.560 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.562 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Start _get_guest_xml network_info=[{"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.566 247708 WARNING nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.573 247708 DEBUG nova.virt.libvirt.host [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.574 247708 DEBUG nova.virt.libvirt.host [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.580 247708 DEBUG nova.virt.libvirt.host [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.581 247708 DEBUG nova.virt.libvirt.host [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.582 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.582 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.582 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.583 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.583 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.583 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.583 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.583 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.583 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.584 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.584 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.584 247708 DEBUG nova.virt.hardware [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.586 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:08 compute-0 nova_compute[247704]: 2026-01-31 07:55:08.723 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:08 compute-0 ceph-mon[74496]: pgmap v1736: 305 pgs: 305 active+clean; 223 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.2 MiB/s wr, 80 op/s
Jan 31 07:55:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:55:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:55:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:09.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 227 MiB data, 755 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 4.2 MiB/s wr, 66 op/s
Jan 31 07:55:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:55:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788326176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.191 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.223 247708 DEBUG nova.storage.rbd_utils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] rbd image 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.227 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.565 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.566 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:55:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1759519715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.657 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.660 247708 DEBUG nova.virt.libvirt.vif [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1487768865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1487768865',id=79,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71d60d9604414f9d88093c29f5f816cb',ramdisk_id='',reservation_id='r-onecju4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-2056113828',owner_user_name='tempest-InstanceActionsV221TestJSON-2056113828-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:58Z,user_data=None,user_id='73dd06404c1f40adb74b624862a57628',uuid=8d4f7ec7-14a4-4cd7-9061-4b73e16be807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.660 247708 DEBUG nova.network.os_vif_util [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Converting VIF {"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.661 247708 DEBUG nova.network.os_vif_util [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:2a:8d,bridge_name='br-int',has_traffic_filtering=True,id=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c,network=Network(2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9fc66-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.663 247708 DEBUG nova.objects.instance [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d4f7ec7-14a4-4cd7-9061-4b73e16be807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.745 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <uuid>8d4f7ec7-14a4-4cd7-9061-4b73e16be807</uuid>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <name>instance-0000004f</name>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-1487768865</nova:name>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:55:08</nova:creationTime>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <nova:user uuid="73dd06404c1f40adb74b624862a57628">tempest-InstanceActionsV221TestJSON-2056113828-project-member</nova:user>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <nova:project uuid="71d60d9604414f9d88093c29f5f816cb">tempest-InstanceActionsV221TestJSON-2056113828</nova:project>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <nova:port uuid="83a9fc66-6ef9-40b5-8aba-32bdb5065f5c">
Jan 31 07:55:09 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <system>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <entry name="serial">8d4f7ec7-14a4-4cd7-9061-4b73e16be807</entry>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <entry name="uuid">8d4f7ec7-14a4-4cd7-9061-4b73e16be807</entry>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </system>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <os>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   </os>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <features>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   </features>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk">
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       </source>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk.config">
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       </source>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:55:09 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:13:2a:8d"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <target dev="tap83a9fc66-6e"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807/console.log" append="off"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <video>
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </video>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:55:09 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:55:09 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:55:09 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:55:09 compute-0 nova_compute[247704]: </domain>
Jan 31 07:55:09 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.745 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Preparing to wait for external event network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.745 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.746 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.746 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.746 247708 DEBUG nova.virt.libvirt.vif [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1487768865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1487768865',id=79,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='71d60d9604414f9d88093c29f5f816cb',ramdisk_id='',reservation_id='r-onecju4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-2056113828',owner_user_name='tempest-InstanceActionsV221TestJSON-2056113828-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:58Z,user_data=None,user_id='73dd06404c1f40adb74b624862a57628',uuid=8d4f7ec7-14a4-4cd7-9061-4b73e16be807,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.747 247708 DEBUG nova.network.os_vif_util [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Converting VIF {"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.747 247708 DEBUG nova.network.os_vif_util [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:2a:8d,bridge_name='br-int',has_traffic_filtering=True,id=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c,network=Network(2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9fc66-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.747 247708 DEBUG os_vif [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:2a:8d,bridge_name='br-int',has_traffic_filtering=True,id=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c,network=Network(2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9fc66-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.752 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.752 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.752 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.756 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.756 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83a9fc66-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.756 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83a9fc66-6e, col_values=(('external_ids', {'iface-id': '83a9fc66-6ef9-40b5-8aba-32bdb5065f5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:2a:8d', 'vm-uuid': '8d4f7ec7-14a4-4cd7-9061-4b73e16be807'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.758 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:09 compute-0 NetworkManager[49108]: <info>  [1769846109.7594] manager: (tap83a9fc66-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.760 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.765 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.765 247708 INFO os_vif [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:2a:8d,bridge_name='br-int',has_traffic_filtering=True,id=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c,network=Network(2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9fc66-6e')
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.932 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.932 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.932 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] No VIF found with MAC fa:16:3e:13:2a:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.933 247708 INFO nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Using config drive
Jan 31 07:55:09 compute-0 nova_compute[247704]: 2026-01-31 07:55:09.965 247708 DEBUG nova.storage.rbd_utils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] rbd image 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:10 compute-0 ceph-mon[74496]: pgmap v1737: 305 pgs: 305 active+clean; 227 MiB data, 755 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 4.2 MiB/s wr, 66 op/s
Jan 31 07:55:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/788326176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1759519715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:10.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 227 MiB data, 755 MiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 31 07:55:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:11.163 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:11.164 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:11.164 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:11 compute-0 nova_compute[247704]: 2026-01-31 07:55:11.276 247708 INFO nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Creating config drive at /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807/disk.config
Jan 31 07:55:11 compute-0 nova_compute[247704]: 2026-01-31 07:55:11.280 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfm4ulg2m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:11 compute-0 nova_compute[247704]: 2026-01-31 07:55:11.408 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfm4ulg2m" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:11 compute-0 nova_compute[247704]: 2026-01-31 07:55:11.439 247708 DEBUG nova.storage.rbd_utils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] rbd image 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:55:11 compute-0 nova_compute[247704]: 2026-01-31 07:55:11.444 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807/disk.config 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.072 247708 DEBUG nova.network.neutron [req-82586142-76b0-4b34-ac1d-2faef665c161 req-d52bb450-270f-47ba-a0e5-0348713f24aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Updated VIF entry in instance network info cache for port 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.073 247708 DEBUG nova.network.neutron [req-82586142-76b0-4b34-ac1d-2faef665c161 req-d52bb450-270f-47ba-a0e5-0348713f24aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Updating instance_info_cache with network_info: [{"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:12 compute-0 ceph-mon[74496]: pgmap v1738: 305 pgs: 305 active+clean; 227 MiB data, 755 MiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.405 247708 DEBUG oslo_concurrency.processutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807/disk.config 8d4f7ec7-14a4-4cd7-9061-4b73e16be807_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.961s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.406 247708 INFO nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Deleting local config drive /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807/disk.config because it was imported into RBD.
Jan 31 07:55:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:12 compute-0 kernel: tap83a9fc66-6e: entered promiscuous mode
Jan 31 07:55:12 compute-0 NetworkManager[49108]: <info>  [1769846112.4603] manager: (tap83a9fc66-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Jan 31 07:55:12 compute-0 ovn_controller[149457]: 2026-01-31T07:55:12Z|00286|binding|INFO|Claiming lport 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c for this chassis.
Jan 31 07:55:12 compute-0 ovn_controller[149457]: 2026-01-31T07:55:12Z|00287|binding|INFO|83a9fc66-6ef9-40b5-8aba-32bdb5065f5c: Claiming fa:16:3e:13:2a:8d 10.100.0.13
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.461 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.463 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:12 compute-0 systemd-udevd[297964]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.489 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:12 compute-0 systemd-machined[214448]: New machine qemu-31-instance-0000004f.
Jan 31 07:55:12 compute-0 ovn_controller[149457]: 2026-01-31T07:55:12Z|00288|binding|INFO|Setting lport 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c ovn-installed in OVS
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.494 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:12 compute-0 NetworkManager[49108]: <info>  [1769846112.5025] device (tap83a9fc66-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:55:12 compute-0 NetworkManager[49108]: <info>  [1769846112.5034] device (tap83a9fc66-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:55:12 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000004f.
Jan 31 07:55:12 compute-0 ovn_controller[149457]: 2026-01-31T07:55:12Z|00289|binding|INFO|Setting lport 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c up in Southbound
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.525 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:2a:8d 10.100.0.13'], port_security=['fa:16:3e:13:2a:8d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8d4f7ec7-14a4-4cd7-9061-4b73e16be807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71d60d9604414f9d88093c29f5f816cb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f85ffe9-4998-43a4-aa20-88dbcd5f3300', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2beeb7b3-1775-4c4c-8be3-449dceb0815f, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.527 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c in datapath 2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8 bound to our chassis
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.529 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.542 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fd506512-2742-4fcb-89ee-9ab7ae4b34ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.543 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f1802f0-01 in ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.545 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f1802f0-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.546 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1669fe5b-e37e-4abd-9b05-cc477ee431fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.547 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[eefe5a10-5744-4079-8e15-22504883d85b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.559 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.560 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[7e9087ea-55cd-4863-9e28-e14ce8c88227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.572 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c6a7c1-2fc8-4a6f-99aa-54a246f732b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.599 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[29a14c49-f867-4136-afe9-1e7b1fc9b888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.605 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dda22aa6-4d16-4aa4-b590-8c6214f78b49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 NetworkManager[49108]: <info>  [1769846112.6064] manager: (tap2f1802f0-00): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.630 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[060d1952-62be-4709-908c-613d7b0fe13d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.632 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[54664f91-8756-49aa-853c-feb5b76b5627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 NetworkManager[49108]: <info>  [1769846112.6546] device (tap2f1802f0-00): carrier: link connected
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.660 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3da1d2e4-7b45-43e0-91e1-f8f32daf681c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.681 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ce211985-8ca2-4bf4-abc9-5a12e78d9912]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f1802f0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:1c:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635527, 'reachable_time': 34122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298005, 'error': None, 'target': 'ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.696 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7d07c7-5176-4265-b6d8-828877a916b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:1c81'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 635527, 'tstamp': 635527}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298015, 'error': None, 'target': 'ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.711 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[72597055-aad4-4023-9958-2db7610ebc10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f1802f0-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:1c:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635527, 'reachable_time': 34122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298018, 'error': None, 'target': 'ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.737 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[65011b84-ca52-4485-9f80-29bbe93c79ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:12.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.787 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8c975c-9a8a-439c-80f6-9dd22d709ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.788 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f1802f0-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.789 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.789 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f1802f0-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.791 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:12 compute-0 kernel: tap2f1802f0-00: entered promiscuous mode
Jan 31 07:55:12 compute-0 NetworkManager[49108]: <info>  [1769846112.7934] manager: (tap2f1802f0-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.797 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f1802f0-00, col_values=(('external_ids', {'iface-id': '96812502-d71f-4bc4-86c1-3bcdab7bb0c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.798 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:12 compute-0 ovn_controller[149457]: 2026-01-31T07:55:12Z|00290|binding|INFO|Releasing lport 96812502-d71f-4bc4-86c1-3bcdab7bb0c6 from this chassis (sb_readonly=0)
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.802 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.803 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0016be-14a9-44bb-ac59-18011724efc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.803 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8.pid.haproxy
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:55:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:12.804 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8', 'env', 'PROCESS_TAG=haproxy-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.811 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:12 compute-0 nova_compute[247704]: 2026-01-31 07:55:12.930 247708 DEBUG oslo_concurrency.lockutils [req-82586142-76b0-4b34-ac1d-2faef665c161 req-d52bb450-270f-47ba-a0e5-0348713f24aa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-8d4f7ec7-14a4-4cd7-9061-4b73e16be807" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:55:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:13 compute-0 nova_compute[247704]: 2026-01-31 07:55:13.101 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 07:55:13 compute-0 nova_compute[247704]: 2026-01-31 07:55:13.103 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:55:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 227 MiB data, 755 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 31 07:55:13 compute-0 podman[298068]: 2026-01-31 07:55:13.156374324 +0000 UTC m=+0.027308792 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:55:13 compute-0 nova_compute[247704]: 2026-01-31 07:55:13.730 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:14 compute-0 nova_compute[247704]: 2026-01-31 07:55:14.012 247708 DEBUG nova.compute.manager [req-de81f838-ce8e-4d64-850b-8bef8ba917ca req-5e9a061e-485b-4d22-a78e-7866add71580 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received event network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:14 compute-0 nova_compute[247704]: 2026-01-31 07:55:14.013 247708 DEBUG oslo_concurrency.lockutils [req-de81f838-ce8e-4d64-850b-8bef8ba917ca req-5e9a061e-485b-4d22-a78e-7866add71580 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:14 compute-0 nova_compute[247704]: 2026-01-31 07:55:14.014 247708 DEBUG oslo_concurrency.lockutils [req-de81f838-ce8e-4d64-850b-8bef8ba917ca req-5e9a061e-485b-4d22-a78e-7866add71580 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:14 compute-0 nova_compute[247704]: 2026-01-31 07:55:14.015 247708 DEBUG oslo_concurrency.lockutils [req-de81f838-ce8e-4d64-850b-8bef8ba917ca req-5e9a061e-485b-4d22-a78e-7866add71580 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:14 compute-0 nova_compute[247704]: 2026-01-31 07:55:14.016 247708 DEBUG nova.compute.manager [req-de81f838-ce8e-4d64-850b-8bef8ba917ca req-5e9a061e-485b-4d22-a78e-7866add71580 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Processing event network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:55:14 compute-0 podman[298068]: 2026-01-31 07:55:14.211581784 +0000 UTC m=+1.082516272 container create 6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 07:55:14 compute-0 nova_compute[247704]: 2026-01-31 07:55:14.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:14 compute-0 nova_compute[247704]: 2026-01-31 07:55:14.759 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:14.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:14 compute-0 systemd[1]: Started libpod-conmon-6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247.scope.
Jan 31 07:55:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56597fa7f2f0d84cca615d33eec6274b4fe0814f6b180d5915d2c6861452933/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:55:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 227 MiB data, 755 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 140 op/s
Jan 31 07:55:15 compute-0 podman[298068]: 2026-01-31 07:55:15.298563664 +0000 UTC m=+2.169498162 container init 6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:55:15 compute-0 podman[298068]: 2026-01-31 07:55:15.307306991 +0000 UTC m=+2.178241449 container start 6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 07:55:15 compute-0 neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8[298084]: [NOTICE]   (298088) : New worker (298090) forked
Jan 31 07:55:15 compute-0 neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8[298084]: [NOTICE]   (298088) : Loading success.
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.035 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.036 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.036 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.036 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.037 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.224 247708 DEBUG nova.compute.manager [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received event network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.225 247708 DEBUG oslo_concurrency.lockutils [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.225 247708 DEBUG oslo_concurrency.lockutils [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.226 247708 DEBUG oslo_concurrency.lockutils [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.226 247708 DEBUG nova.compute.manager [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] No waiting events found dispatching network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.226 247708 WARNING nova.compute.manager [req-52292b98-936a-4fd8-9020-fdc3a99765de req-f74127b5-2657-45b0-aadc-8929a6867d55 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received unexpected event network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c for instance with vm_state building and task_state spawning.
Jan 31 07:55:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:55:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249670294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.495 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.605 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.605 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:55:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:16.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.803 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.805 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4527MB free_disk=20.90493392944336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.805 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.806 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.929 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 8d4f7ec7-14a4-4cd7-9061-4b73e16be807 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.929 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:55:16 compute-0 nova_compute[247704]: 2026-01-31 07:55:16.930 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:55:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:17.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 227 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 960 KiB/s wr, 167 op/s
Jan 31 07:55:17 compute-0 nova_compute[247704]: 2026-01-31 07:55:17.250 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:17 compute-0 sudo[298122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:17 compute-0 sudo[298122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:17 compute-0 sudo[298122]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:17 compute-0 sudo[298148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:17 compute-0 sudo[298148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:17 compute-0 sudo[298148]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:55:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343447375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:17 compute-0 nova_compute[247704]: 2026-01-31 07:55:17.736 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:17 compute-0 nova_compute[247704]: 2026-01-31 07:55:17.745 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:55:17 compute-0 nova_compute[247704]: 2026-01-31 07:55:17.983 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:55:18 compute-0 nova_compute[247704]: 2026-01-31 07:55:18.128 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:55:18 compute-0 nova_compute[247704]: 2026-01-31 07:55:18.128 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:18 compute-0 nova_compute[247704]: 2026-01-31 07:55:18.735 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:18.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:19.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 227 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 66 KiB/s wr, 168 op/s
Jan 31 07:55:19 compute-0 nova_compute[247704]: 2026-01-31 07:55:19.767 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:55:20
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', '.mgr', 'vms', 'backups']
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:55:20 compute-0 nova_compute[247704]: 2026-01-31 07:55:20.129 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:20 compute-0 nova_compute[247704]: 2026-01-31 07:55:20.130 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:20 compute-0 nova_compute[247704]: 2026-01-31 07:55:20.130 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:20 compute-0 nova_compute[247704]: 2026-01-31 07:55:20.130 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:55:20 compute-0 nova_compute[247704]: 2026-01-31 07:55:20.130 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:55:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:20.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:55:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:55:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 227 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 155 op/s
Jan 31 07:55:21 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 07:55:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:23.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 227 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 13 KiB/s wr, 140 op/s
Jan 31 07:55:23 compute-0 nova_compute[247704]: 2026-01-31 07:55:23.737 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:23 compute-0 podman[298199]: 2026-01-31 07:55:23.984607889 +0000 UTC m=+0.148125069 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:55:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).paxos(paxos updating c 3264..3935) accept timeout, calling fresh election
Jan 31 07:55:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:55:24 compute-0 ceph-mon[74496]: paxos.0).electionLogic(34) init, last seen epoch 34
Jan 31 07:55:24 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:24 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:24 compute-0 nova_compute[247704]: 2026-01-31 07:55:24.760 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846124.7597737, 8d4f7ec7-14a4-4cd7-9061-4b73e16be807 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:24 compute-0 nova_compute[247704]: 2026-01-31 07:55:24.761 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] VM Started (Lifecycle Event)
Jan 31 07:55:24 compute-0 nova_compute[247704]: 2026-01-31 07:55:24.765 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:55:24 compute-0 nova_compute[247704]: 2026-01-31 07:55:24.771 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:55:24 compute-0 nova_compute[247704]: 2026-01-31 07:55:24.777 247708 INFO nova.virt.libvirt.driver [-] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Instance spawned successfully.
Jan 31 07:55:24 compute-0 nova_compute[247704]: 2026-01-31 07:55:24.777 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:55:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:24 compute-0 nova_compute[247704]: 2026-01-31 07:55:24.813 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:24 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:25.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 227 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 101 op/s
Jan 31 07:55:25 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:25 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 07:55:25 compute-0 nova_compute[247704]: 2026-01-31 07:55:25.564 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:25 compute-0 nova_compute[247704]: 2026-01-31 07:55:25.569 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:55:25 compute-0 nova_compute[247704]: 2026-01-31 07:55:25.580 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:25 compute-0 nova_compute[247704]: 2026-01-31 07:55:25.580 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:25 compute-0 nova_compute[247704]: 2026-01-31 07:55:25.581 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:25 compute-0 nova_compute[247704]: 2026-01-31 07:55:25.581 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:25 compute-0 nova_compute[247704]: 2026-01-31 07:55:25.582 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:25 compute-0 nova_compute[247704]: 2026-01-31 07:55:25.582 247708 DEBUG nova.virt.libvirt.driver [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:55:25 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:25 compute-0 sshd-session[298235]: Invalid user solv from 45.148.10.240 port 47754
Jan 31 07:55:25 compute-0 sshd-session[298235]: Connection closed by invalid user solv 45.148.10.240 port 47754 [preauth]
Jan 31 07:55:25 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:26 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:26 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:26.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:27 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:27.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 227 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 182 KiB/s wr, 49 op/s
Jan 31 07:55:27 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:27 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:28 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:28 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:28 compute-0 nova_compute[247704]: 2026-01-31 07:55:28.739 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:28 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:29 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:29.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 263 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 87 op/s
Jan 31 07:55:29 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 07:55:29 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:55:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 50m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Jan 31 07:55:29 compute-0 nova_compute[247704]: 2026-01-31 07:55:29.816 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 07:55:29 compute-0 ceph-mon[74496]: paxos.0).electionLogic(37) init, last seen epoch 37, mid-election, bumping
Jan 31 07:55:29 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:55:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 50m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 07:55:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 07:55:30 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 07:55:30 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 07:55:30 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 07:55:30 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 07:55:30 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 07:55:30 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 07:55:30 compute-0 ceph-mon[74496]: osdmap e250: 3 total, 3 up, 3 in
Jan 31 07:55:30 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 50m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 07:55:30 compute-0 ceph-mon[74496]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Jan 31 07:55:30 compute-0 ceph-mon[74496]: Cluster is now healthy
Jan 31 07:55:30 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 07:55:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2549763965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:30 compute-0 nova_compute[247704]: 2026-01-31 07:55:30.674 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:55:30 compute-0 nova_compute[247704]: 2026-01-31 07:55:30.675 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846124.7639465, 8d4f7ec7-14a4-4cd7-9061-4b73e16be807 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:30 compute-0 nova_compute[247704]: 2026-01-31 07:55:30.675 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] VM Paused (Lifecycle Event)
Jan 31 07:55:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:30.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:30 compute-0 nova_compute[247704]: 2026-01-31 07:55:30.992 247708 INFO nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Took 31.43 seconds to spawn the instance on the hypervisor.
Jan 31 07:55:30 compute-0 nova_compute[247704]: 2026-01-31 07:55:30.996 247708 DEBUG nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:30 compute-0 nova_compute[247704]: 2026-01-31 07:55:30.998 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:31 compute-0 nova_compute[247704]: 2026-01-31 07:55:31.009 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846124.770216, 8d4f7ec7-14a4-4cd7-9061-4b73e16be807 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:31 compute-0 nova_compute[247704]: 2026-01-31 07:55:31.009 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] VM Resumed (Lifecycle Event)
Jan 31 07:55:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:31.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 268 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Jan 31 07:55:31 compute-0 nova_compute[247704]: 2026-01-31 07:55:31.239 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:31 compute-0 nova_compute[247704]: 2026-01-31 07:55:31.243 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:55:31 compute-0 nova_compute[247704]: 2026-01-31 07:55:31.384 247708 INFO nova.compute.manager [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Took 34.60 seconds to build instance.
Jan 31 07:55:31 compute-0 nova_compute[247704]: 2026-01-31 07:55:31.622 247708 DEBUG oslo_concurrency.lockutils [None req-a87597b8-58f8-4219-8e22-2230a05b0fb7 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 34.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1914918059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2519778568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:55:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:32.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:55:32 compute-0 ceph-mon[74496]: pgmap v1748: 305 pgs: 305 active+clean; 268 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Jan 31 07:55:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1141855199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:33.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 274 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 149 op/s
Jan 31 07:55:33 compute-0 nova_compute[247704]: 2026-01-31 07:55:33.744 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:33 compute-0 nova_compute[247704]: 2026-01-31 07:55:33.926 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:33 compute-0 nova_compute[247704]: 2026-01-31 07:55:33.927 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:33 compute-0 nova_compute[247704]: 2026-01-31 07:55:33.928 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:33 compute-0 nova_compute[247704]: 2026-01-31 07:55:33.928 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:33 compute-0 nova_compute[247704]: 2026-01-31 07:55:33.929 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:33 compute-0 nova_compute[247704]: 2026-01-31 07:55:33.932 247708 INFO nova.compute.manager [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Terminating instance
Jan 31 07:55:33 compute-0 nova_compute[247704]: 2026-01-31 07:55:33.933 247708 DEBUG nova.compute.manager [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:55:33 compute-0 ceph-mon[74496]: pgmap v1749: 305 pgs: 305 active+clean; 274 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 149 op/s
Jan 31 07:55:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/407827562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:33 compute-0 kernel: tap83a9fc66-6e (unregistering): left promiscuous mode
Jan 31 07:55:33 compute-0 NetworkManager[49108]: <info>  [1769846133.9942] device (tap83a9fc66-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:55:34 compute-0 ovn_controller[149457]: 2026-01-31T07:55:34Z|00291|binding|INFO|Releasing lport 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c from this chassis (sb_readonly=0)
Jan 31 07:55:34 compute-0 ovn_controller[149457]: 2026-01-31T07:55:34Z|00292|binding|INFO|Setting lport 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c down in Southbound
Jan 31 07:55:34 compute-0 ovn_controller[149457]: 2026-01-31T07:55:34Z|00293|binding|INFO|Removing iface tap83a9fc66-6e ovn-installed in OVS
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.030 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.032 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.036 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Jan 31 07:55:34 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000004f.scope: Consumed 9.547s CPU time.
Jan 31 07:55:34 compute-0 systemd-machined[214448]: Machine qemu-31-instance-0000004f terminated.
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.081 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:2a:8d 10.100.0.13'], port_security=['fa:16:3e:13:2a:8d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8d4f7ec7-14a4-4cd7-9061-4b73e16be807', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '71d60d9604414f9d88093c29f5f816cb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f85ffe9-4998-43a4-aa20-88dbcd5f3300', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2beeb7b3-1775-4c4c-8be3-449dceb0815f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.083 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 83a9fc66-6ef9-40b5-8aba-32bdb5065f5c in datapath 2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8 unbound from our chassis
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.085 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.086 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[97c01ab9-4083-47e0-8b3f-f52a4b9f24e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.087 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8 namespace which is not needed anymore
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.170 247708 INFO nova.virt.libvirt.driver [-] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Instance destroyed successfully.
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.170 247708 DEBUG nova.objects.instance [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lazy-loading 'resources' on Instance uuid 8d4f7ec7-14a4-4cd7-9061-4b73e16be807 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:55:34 compute-0 neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8[298084]: [NOTICE]   (298088) : haproxy version is 2.8.14-c23fe91
Jan 31 07:55:34 compute-0 neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8[298084]: [NOTICE]   (298088) : path to executable is /usr/sbin/haproxy
Jan 31 07:55:34 compute-0 neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8[298084]: [WARNING]  (298088) : Exiting Master process...
Jan 31 07:55:34 compute-0 neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8[298084]: [ALERT]    (298088) : Current worker (298090) exited with code 143 (Terminated)
Jan 31 07:55:34 compute-0 neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8[298084]: [WARNING]  (298088) : All workers exited. Exiting... (0)
Jan 31 07:55:34 compute-0 systemd[1]: libpod-6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247.scope: Deactivated successfully.
Jan 31 07:55:34 compute-0 podman[298271]: 2026-01-31 07:55:34.224313356 +0000 UTC m=+0.050035791 container died 6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247-userdata-shm.mount: Deactivated successfully.
Jan 31 07:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c56597fa7f2f0d84cca615d33eec6274b4fe0814f6b180d5915d2c6861452933-merged.mount: Deactivated successfully.
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.278 247708 DEBUG nova.virt.libvirt.vif [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1487768865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1487768865',id=79,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='71d60d9604414f9d88093c29f5f816cb',ramdisk_id='',reservation_id='r-onecju4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner
_project_name='tempest-InstanceActionsV221TestJSON-2056113828',owner_user_name='tempest-InstanceActionsV221TestJSON-2056113828-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:55:31Z,user_data=None,user_id='73dd06404c1f40adb74b624862a57628',uuid=8d4f7ec7-14a4-4cd7-9061-4b73e16be807,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.279 247708 DEBUG nova.network.os_vif_util [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Converting VIF {"id": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "address": "fa:16:3e:13:2a:8d", "network": {"id": "2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1869353306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "71d60d9604414f9d88093c29f5f816cb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9fc66-6e", "ovs_interfaceid": "83a9fc66-6ef9-40b5-8aba-32bdb5065f5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.280 247708 DEBUG nova.network.os_vif_util [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:2a:8d,bridge_name='br-int',has_traffic_filtering=True,id=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c,network=Network(2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9fc66-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.280 247708 DEBUG os_vif [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:2a:8d,bridge_name='br-int',has_traffic_filtering=True,id=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c,network=Network(2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9fc66-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.282 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.283 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83a9fc66-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.286 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.288 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.291 247708 INFO os_vif [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:2a:8d,bridge_name='br-int',has_traffic_filtering=True,id=83a9fc66-6ef9-40b5-8aba-32bdb5065f5c,network=Network(2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9fc66-6e')
Jan 31 07:55:34 compute-0 podman[298271]: 2026-01-31 07:55:34.315414928 +0000 UTC m=+0.141137393 container cleanup 6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 07:55:34 compute-0 systemd[1]: libpod-conmon-6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247.scope: Deactivated successfully.
Jan 31 07:55:34 compute-0 podman[298321]: 2026-01-31 07:55:34.409623708 +0000 UTC m=+0.070721065 container remove 6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.414 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5a505d-7a59-4294-8df4-cd615c180feb]: (4, ('Sat Jan 31 07:55:34 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8 (6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247)\n6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247\nSat Jan 31 07:55:34 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8 (6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247)\n6f02453720c72ef08b6fe3e28d44958e04d067d4525cb70704d820f5a0f9f247\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.416 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[40c44afa-f48b-467e-a82b-855aa29a3b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.417 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f1802f0-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.419 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-0 kernel: tap2f1802f0-00: left promiscuous mode
Jan 31 07:55:34 compute-0 nova_compute[247704]: 2026-01-31 07:55:34.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.430 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[05719c4f-6b7c-45ab-94a7-017a6a0d1ce3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.451 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8419c4bd-41c3-4bb4-bd30-c6c121e6d54d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.453 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9b678e-cd99-42c2-874b-4dc439ea2617]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.470 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[de0e7930-72fc-4eab-a798-d0d10c166b67]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635521, 'reachable_time': 18627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298339, 'error': None, 'target': 'ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.472 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f1802f0-0a1e-43f9-9b6a-f3cd143df5b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:55:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d2f1802f0\x2d0a1e\x2d43f9\x2d9b6a\x2df3cd143df5b8.mount: Deactivated successfully.
Jan 31 07:55:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:34.473 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[9d524c87-1d9f-45b0-9cbb-bfe70f9633c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:55:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:55:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:35.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006271592336683417 of space, bias 1.0, pg target 1.881477701005025 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 186 op/s
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:55:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 07:55:35 compute-0 nova_compute[247704]: 2026-01-31 07:55:35.632 247708 INFO nova.virt.libvirt.driver [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Deleting instance files /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807_del
Jan 31 07:55:35 compute-0 nova_compute[247704]: 2026-01-31 07:55:35.633 247708 INFO nova.virt.libvirt.driver [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Deletion of /var/lib/nova/instances/8d4f7ec7-14a4-4cd7-9061-4b73e16be807_del complete
Jan 31 07:55:35 compute-0 nova_compute[247704]: 2026-01-31 07:55:35.733 247708 INFO nova.compute.manager [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Took 1.80 seconds to destroy the instance on the hypervisor.
Jan 31 07:55:35 compute-0 nova_compute[247704]: 2026-01-31 07:55:35.734 247708 DEBUG oslo.service.loopingcall [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:55:35 compute-0 nova_compute[247704]: 2026-01-31 07:55:35.734 247708 DEBUG nova.compute.manager [-] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:55:35 compute-0 nova_compute[247704]: 2026-01-31 07:55:35.734 247708 DEBUG nova.network.neutron [-] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:55:36 compute-0 ceph-mon[74496]: pgmap v1750: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 186 op/s
Jan 31 07:55:36 compute-0 nova_compute[247704]: 2026-01-31 07:55:36.466 247708 DEBUG nova.compute.manager [req-00463b94-6454-41f4-8cbc-04e4ba67a0ad req-fbf89812-8cf0-4cfd-835b-0fd0b374c1c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received event network-vif-unplugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:36 compute-0 nova_compute[247704]: 2026-01-31 07:55:36.467 247708 DEBUG oslo_concurrency.lockutils [req-00463b94-6454-41f4-8cbc-04e4ba67a0ad req-fbf89812-8cf0-4cfd-835b-0fd0b374c1c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:36 compute-0 nova_compute[247704]: 2026-01-31 07:55:36.467 247708 DEBUG oslo_concurrency.lockutils [req-00463b94-6454-41f4-8cbc-04e4ba67a0ad req-fbf89812-8cf0-4cfd-835b-0fd0b374c1c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:36 compute-0 nova_compute[247704]: 2026-01-31 07:55:36.467 247708 DEBUG oslo_concurrency.lockutils [req-00463b94-6454-41f4-8cbc-04e4ba67a0ad req-fbf89812-8cf0-4cfd-835b-0fd0b374c1c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:36 compute-0 nova_compute[247704]: 2026-01-31 07:55:36.468 247708 DEBUG nova.compute.manager [req-00463b94-6454-41f4-8cbc-04e4ba67a0ad req-fbf89812-8cf0-4cfd-835b-0fd0b374c1c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] No waiting events found dispatching network-vif-unplugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:36 compute-0 nova_compute[247704]: 2026-01-31 07:55:36.468 247708 DEBUG nova.compute.manager [req-00463b94-6454-41f4-8cbc-04e4ba67a0ad req-fbf89812-8cf0-4cfd-835b-0fd0b374c1c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received event network-vif-unplugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:55:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:36.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:37.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:37.138 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:55:37 compute-0 nova_compute[247704]: 2026-01-31 07:55:37.138 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:37.139 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:55:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 292 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 214 op/s
Jan 31 07:55:37 compute-0 sudo[298343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:37 compute-0 sudo[298343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:37 compute-0 sudo[298343]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:37 compute-0 sudo[298368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:37 compute-0 sudo[298368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:37 compute-0 sudo[298368]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:37 compute-0 nova_compute[247704]: 2026-01-31 07:55:37.983 247708 DEBUG nova.network.neutron [-] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.303 247708 INFO nova.compute.manager [-] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Took 2.57 seconds to deallocate network for instance.
Jan 31 07:55:38 compute-0 ceph-mon[74496]: pgmap v1751: 305 pgs: 305 active+clean; 292 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.0 MiB/s wr, 214 op/s
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.409 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.409 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.524 247708 DEBUG oslo_concurrency.processutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.634 247708 DEBUG nova.compute.manager [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received event network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.635 247708 DEBUG oslo_concurrency.lockutils [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.635 247708 DEBUG oslo_concurrency.lockutils [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.635 247708 DEBUG oslo_concurrency.lockutils [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.636 247708 DEBUG nova.compute.manager [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] No waiting events found dispatching network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.636 247708 WARNING nova.compute.manager [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received unexpected event network-vif-plugged-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c for instance with vm_state deleted and task_state None.
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.637 247708 DEBUG nova.compute.manager [req-e04192da-0e2e-465d-b6f1-fe2d8da085a6 req-07980b2b-b4a8-4b39-a2fe-41e2010fd6b3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Received event network-vif-deleted-83a9fc66-6ef9-40b5-8aba-32bdb5065f5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.744 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:38.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:38 compute-0 podman[298413]: 2026-01-31 07:55:38.881152567 +0000 UTC m=+0.050637695 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:55:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:55:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3778543200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.968 247708 DEBUG oslo_concurrency.processutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:55:38 compute-0 nova_compute[247704]: 2026-01-31 07:55:38.975 247708 DEBUG nova.compute.provider_tree [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:55:39 compute-0 nova_compute[247704]: 2026-01-31 07:55:39.034 247708 DEBUG nova.scheduler.client.report [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:55:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.9 MiB/s wr, 236 op/s
Jan 31 07:55:39 compute-0 nova_compute[247704]: 2026-01-31 07:55:39.268 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:39 compute-0 nova_compute[247704]: 2026-01-31 07:55:39.286 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:39 compute-0 nova_compute[247704]: 2026-01-31 07:55:39.331 247708 INFO nova.scheduler.client.report [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Deleted allocations for instance 8d4f7ec7-14a4-4cd7-9061-4b73e16be807
Jan 31 07:55:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3778543200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:39 compute-0 nova_compute[247704]: 2026-01-31 07:55:39.865 247708 DEBUG oslo_concurrency.lockutils [None req-495ce5cb-4123-455c-a59c-26a0ed015019 73dd06404c1f40adb74b624862a57628 71d60d9604414f9d88093c29f5f816cb - - default default] Lock "8d4f7ec7-14a4-4cd7-9061-4b73e16be807" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:55:40 compute-0 ceph-mon[74496]: pgmap v1752: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.9 MiB/s wr, 236 op/s
Jan 31 07:55:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:40.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:41.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 172 op/s
Jan 31 07:55:42 compute-0 ceph-mon[74496]: pgmap v1753: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 172 op/s
Jan 31 07:55:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:42.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:43.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 07:55:43 compute-0 nova_compute[247704]: 2026-01-31 07:55:43.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 07:55:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:55:44.142 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:55:44 compute-0 nova_compute[247704]: 2026-01-31 07:55:44.288 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:44 compute-0 ceph-mon[74496]: pgmap v1754: 305 pgs: 305 active+clean; 293 MiB data, 806 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 07:55:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1164178979' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:44.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:45.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 262 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 190 op/s
Jan 31 07:55:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4023026982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:55:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4023026982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:55:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2371626794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:55:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:46.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:46 compute-0 ceph-mon[74496]: pgmap v1755: 305 pgs: 305 active+clean; 262 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 190 op/s
Jan 31 07:55:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 258 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 31 07:55:47 compute-0 ceph-mon[74496]: pgmap v1756: 305 pgs: 305 active+clean; 258 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 31 07:55:48 compute-0 nova_compute[247704]: 2026-01-31 07:55:48.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:48.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:49.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 247 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 126 op/s
Jan 31 07:55:49 compute-0 nova_compute[247704]: 2026-01-31 07:55:49.167 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846134.166452, 8d4f7ec7-14a4-4cd7-9061-4b73e16be807 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:55:49 compute-0 nova_compute[247704]: 2026-01-31 07:55:49.168 247708 INFO nova.compute.manager [-] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] VM Stopped (Lifecycle Event)
Jan 31 07:55:49 compute-0 nova_compute[247704]: 2026-01-31 07:55:49.291 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:49 compute-0 nova_compute[247704]: 2026-01-31 07:55:49.329 247708 DEBUG nova.compute.manager [None req-5432c67e-7219-4d2c-8c78-057af0417134 - - - - - -] [instance: 8d4f7ec7-14a4-4cd7-9061-4b73e16be807] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:55:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:55:50 compute-0 ceph-mon[74496]: pgmap v1757: 305 pgs: 305 active+clean; 247 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 126 op/s
Jan 31 07:55:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:50.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 247 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 106 op/s
Jan 31 07:55:52 compute-0 ceph-mon[74496]: pgmap v1758: 305 pgs: 305 active+clean; 247 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 106 op/s
Jan 31 07:55:52 compute-0 nova_compute[247704]: 2026-01-31 07:55:52.816 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:52.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:55:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:55:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 226 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 105 op/s
Jan 31 07:55:53 compute-0 nova_compute[247704]: 2026-01-31 07:55:53.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:54 compute-0 nova_compute[247704]: 2026-01-31 07:55:54.293 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:55:54 compute-0 ceph-mon[74496]: pgmap v1759: 305 pgs: 305 active+clean; 226 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 105 op/s
Jan 31 07:55:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:54.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:54 compute-0 podman[298443]: 2026-01-31 07:55:54.952652087 +0000 UTC m=+0.113119685 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:55:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:55.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 175 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 23 KiB/s wr, 97 op/s
Jan 31 07:55:55 compute-0 ceph-mon[74496]: pgmap v1760: 305 pgs: 305 active+clean; 175 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 23 KiB/s wr, 97 op/s
Jan 31 07:55:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:56.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1145297886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:55:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:55:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:55:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 19 KiB/s wr, 67 op/s
Jan 31 07:55:57 compute-0 sudo[298471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:57 compute-0 sudo[298471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:57 compute-0 sudo[298471]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:57 compute-0 sudo[298496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:55:57 compute-0 sudo[298496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:55:57 compute-0 sudo[298496]: pam_unix(sudo:session): session closed for user root
Jan 31 07:55:58 compute-0 ceph-mon[74496]: pgmap v1761: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 19 KiB/s wr, 67 op/s
Jan 31 07:55:58 compute-0 nova_compute[247704]: 2026-01-31 07:55:58.754 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:58.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:55:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:55:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:59.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:55:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 102 op/s
Jan 31 07:55:59 compute-0 nova_compute[247704]: 2026-01-31 07:55:59.296 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:55:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:00 compute-0 ceph-mon[74496]: pgmap v1762: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 102 op/s
Jan 31 07:56:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:00.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 95 op/s
Jan 31 07:56:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2190604874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:02.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:02 compute-0 ceph-mon[74496]: pgmap v1763: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 95 op/s
Jan 31 07:56:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:03.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 86 op/s
Jan 31 07:56:03 compute-0 nova_compute[247704]: 2026-01-31 07:56:03.755 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:04 compute-0 ceph-mon[74496]: pgmap v1764: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 KiB/s wr, 86 op/s
Jan 31 07:56:04 compute-0 nova_compute[247704]: 2026-01-31 07:56:04.298 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:04.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 KiB/s wr, 86 op/s
Jan 31 07:56:06 compute-0 ceph-mon[74496]: pgmap v1765: 305 pgs: 305 active+clean; 167 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 KiB/s wr, 86 op/s
Jan 31 07:56:06 compute-0 nova_compute[247704]: 2026-01-31 07:56:06.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:06 compute-0 nova_compute[247704]: 2026-01-31 07:56:06.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 07:56:06 compute-0 nova_compute[247704]: 2026-01-31 07:56:06.861 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 07:56:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:06.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:07.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 174 MiB data, 748 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 808 KiB/s wr, 91 op/s
Jan 31 07:56:08 compute-0 sudo[298526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:08 compute-0 sudo[298526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:08 compute-0 sudo[298526]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:08 compute-0 sudo[298551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:56:08 compute-0 sudo[298551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:08 compute-0 sudo[298551]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:08 compute-0 sudo[298576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:08 compute-0 sudo[298576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:08 compute-0 sudo[298576]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:08 compute-0 sudo[298601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:56:08 compute-0 sudo[298601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:08 compute-0 ceph-mon[74496]: pgmap v1766: 305 pgs: 305 active+clean; 174 MiB data, 748 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 808 KiB/s wr, 91 op/s
Jan 31 07:56:08 compute-0 nova_compute[247704]: 2026-01-31 07:56:08.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:08 compute-0 sudo[298601]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:08.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:09.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 198 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 31 07:56:09 compute-0 nova_compute[247704]: 2026-01-31 07:56:09.300 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:09 compute-0 podman[298657]: 2026-01-31 07:56:09.933569737 +0000 UTC m=+0.098274474 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 07:56:10 compute-0 ceph-mon[74496]: pgmap v1767: 305 pgs: 305 active+clean; 198 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 31 07:56:10 compute-0 nova_compute[247704]: 2026-01-31 07:56:10.862 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:10 compute-0 nova_compute[247704]: 2026-01-31 07:56:10.863 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:10.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:11.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:11.164 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:11.165 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:11.165 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:11 compute-0 ceph-mon[74496]: pgmap v1768: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:12.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:13.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:13 compute-0 nova_compute[247704]: 2026-01-31 07:56:13.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:13 compute-0 nova_compute[247704]: 2026-01-31 07:56:13.759 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:56:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:56:14 compute-0 nova_compute[247704]: 2026-01-31 07:56:14.302 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:14.398 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:56:14 compute-0 nova_compute[247704]: 2026-01-31 07:56:14.398 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:14.400 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:56:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:14 compute-0 ceph-mon[74496]: pgmap v1769: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:14 compute-0 nova_compute[247704]: 2026-01-31 07:56:14.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:14 compute-0 nova_compute[247704]: 2026-01-31 07:56:14.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:56:14 compute-0 nova_compute[247704]: 2026-01-31 07:56:14.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:56:14 compute-0 nova_compute[247704]: 2026-01-31 07:56:14.618 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:56:14 compute-0 nova_compute[247704]: 2026-01-31 07:56:14.619 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:14.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:56:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:56:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:56:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:56:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:56:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0703de85-a126-4d09-9d4d-affd640775e9 does not exist
Jan 31 07:56:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6d07f1c8-080b-4244-96b7-1871bbfa97d8 does not exist
Jan 31 07:56:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3c9767be-1467-4173-83cd-252a36559259 does not exist
Jan 31 07:56:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:56:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:56:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:56:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:56:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:56:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.063 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.063 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.063 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.064 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.064 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:15 compute-0 sudo[298680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:15 compute-0 sudo[298680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:15 compute-0 sudo[298680]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:15 compute-0 sudo[298706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:56:15 compute-0 sudo[298706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:15 compute-0 sudo[298706]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:15.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:15 compute-0 sudo[298731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:15 compute-0 sudo[298731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:15 compute-0 sudo[298731]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:15 compute-0 sudo[298775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:56:15 compute-0 sudo[298775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:56:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375979925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.512 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:56:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:56:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:56:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:56:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:56:15 compute-0 podman[298844]: 2026-01-31 07:56:15.626053268 +0000 UTC m=+0.067339100 container create a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:56:15 compute-0 systemd[1]: Started libpod-conmon-a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae.scope.
Jan 31 07:56:15 compute-0 podman[298844]: 2026-01-31 07:56:15.60206729 +0000 UTC m=+0.043353142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.698 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:56:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.700 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4594MB free_disk=20.897254943847656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.701 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:15 compute-0 nova_compute[247704]: 2026-01-31 07:56:15.701 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:15 compute-0 podman[298844]: 2026-01-31 07:56:15.717764583 +0000 UTC m=+0.159050425 container init a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:15 compute-0 podman[298844]: 2026-01-31 07:56:15.72611256 +0000 UTC m=+0.167398362 container start a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:15 compute-0 podman[298844]: 2026-01-31 07:56:15.729537386 +0000 UTC m=+0.170823188 container attach a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 07:56:15 compute-0 nice_jennings[298860]: 167 167
Jan 31 07:56:15 compute-0 systemd[1]: libpod-a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae.scope: Deactivated successfully.
Jan 31 07:56:15 compute-0 conmon[298860]: conmon a53dd25a4bfeb5a248f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae.scope/container/memory.events
Jan 31 07:56:15 compute-0 podman[298844]: 2026-01-31 07:56:15.734683552 +0000 UTC m=+0.175969374 container died a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-06b9e977ac5f1be617e611a2c1c82f291efbb05665ae7f09f1efac9a344d3f65-merged.mount: Deactivated successfully.
Jan 31 07:56:15 compute-0 podman[298844]: 2026-01-31 07:56:15.777371237 +0000 UTC m=+0.218657019 container remove a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:15 compute-0 systemd[1]: libpod-conmon-a53dd25a4bfeb5a248f3872be92c184ec96fcf15706565160ffa34297caf01ae.scope: Deactivated successfully.
Jan 31 07:56:15 compute-0 podman[298883]: 2026-01-31 07:56:15.944914091 +0000 UTC m=+0.063349020 container create 85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mcclintock, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:16 compute-0 systemd[1]: Started libpod-conmon-85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a.scope.
Jan 31 07:56:16 compute-0 podman[298883]: 2026-01-31 07:56:15.919387919 +0000 UTC m=+0.037822948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:56:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607eafd190be0a87b0b3b2371feaabdeac9b1dffcfa57ac0fe61cc24eea2b78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607eafd190be0a87b0b3b2371feaabdeac9b1dffcfa57ac0fe61cc24eea2b78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607eafd190be0a87b0b3b2371feaabdeac9b1dffcfa57ac0fe61cc24eea2b78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607eafd190be0a87b0b3b2371feaabdeac9b1dffcfa57ac0fe61cc24eea2b78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607eafd190be0a87b0b3b2371feaabdeac9b1dffcfa57ac0fe61cc24eea2b78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:16 compute-0 podman[298883]: 2026-01-31 07:56:16.060861219 +0000 UTC m=+0.179296268 container init 85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mcclintock, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:56:16 compute-0 podman[298883]: 2026-01-31 07:56:16.074561426 +0000 UTC m=+0.192996355 container start 85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:16 compute-0 podman[298883]: 2026-01-31 07:56:16.078848562 +0000 UTC m=+0.197283581 container attach 85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.209 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.212 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.313 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.390 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.391 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.411 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.447 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.473 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:16 compute-0 ceph-mon[74496]: pgmap v1770: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/375979925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3000068858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:16.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:16 compute-0 festive_mcclintock[298899]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:56:16 compute-0 festive_mcclintock[298899]: --> relative data size: 1.0
Jan 31 07:56:16 compute-0 festive_mcclintock[298899]: --> All data devices are unavailable
Jan 31 07:56:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:56:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778733874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.943 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:16 compute-0 systemd[1]: libpod-85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a.scope: Deactivated successfully.
Jan 31 07:56:16 compute-0 conmon[298899]: conmon 85af3d895e8277357489 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a.scope/container/memory.events
Jan 31 07:56:16 compute-0 podman[298883]: 2026-01-31 07:56:16.95417979 +0000 UTC m=+1.072614749 container died 85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mcclintock, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 07:56:16 compute-0 nova_compute[247704]: 2026-01-31 07:56:16.954 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a607eafd190be0a87b0b3b2371feaabdeac9b1dffcfa57ac0fe61cc24eea2b78-merged.mount: Deactivated successfully.
Jan 31 07:56:17 compute-0 podman[298883]: 2026-01-31 07:56:17.020762822 +0000 UTC m=+1.139197771 container remove 85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:17 compute-0 systemd[1]: libpod-conmon-85af3d895e82773574892b5cd2015839a9efc74580e5564e46e5fa7b58415e3a.scope: Deactivated successfully.
Jan 31 07:56:17 compute-0 sudo[298775]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:17 compute-0 sudo[298948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:17 compute-0 sudo[298948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:17 compute-0 sudo[298948]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:17.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:17 compute-0 sudo[298973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:56:17 compute-0 sudo[298973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:17 compute-0 sudo[298973]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:17 compute-0 sudo[298998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:17 compute-0 sudo[298998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:17 compute-0 sudo[298998]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:17 compute-0 sudo[299023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:56:17 compute-0 sudo[299023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:17 compute-0 podman[299089]: 2026-01-31 07:56:17.647380769 +0000 UTC m=+0.048701162 container create bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:56:17 compute-0 nova_compute[247704]: 2026-01-31 07:56:17.658 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:56:17 compute-0 systemd[1]: Started libpod-conmon-bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35.scope.
Jan 31 07:56:17 compute-0 podman[299089]: 2026-01-31 07:56:17.621477029 +0000 UTC m=+0.022797442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:56:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:17 compute-0 sudo[299103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:17 compute-0 sudo[299103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:17 compute-0 sudo[299103]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2778733874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:17 compute-0 ceph-mon[74496]: pgmap v1771: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 07:56:17 compute-0 podman[299089]: 2026-01-31 07:56:17.757103547 +0000 UTC m=+0.158423970 container init bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 07:56:17 compute-0 podman[299089]: 2026-01-31 07:56:17.76393109 +0000 UTC m=+0.165251473 container start bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 07:56:17 compute-0 systemd[1]: libpod-bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35.scope: Deactivated successfully.
Jan 31 07:56:17 compute-0 stoic_cerf[299128]: 167 167
Jan 31 07:56:17 compute-0 conmon[299128]: conmon bac96a9668968baf9f8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35.scope/container/memory.events
Jan 31 07:56:17 compute-0 podman[299089]: 2026-01-31 07:56:17.782041016 +0000 UTC m=+0.183361399 container attach bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 07:56:17 compute-0 podman[299089]: 2026-01-31 07:56:17.782707131 +0000 UTC m=+0.184027514 container died bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 07:56:17 compute-0 sudo[299133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:17 compute-0 sudo[299133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:17 compute-0 sudo[299133]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-500403376eae9896ddb3a2eecb93cb1b3a8adfb0d6215ea7f2eba95bb8091cbf-merged.mount: Deactivated successfully.
Jan 31 07:56:17 compute-0 podman[299089]: 2026-01-31 07:56:17.980434941 +0000 UTC m=+0.381755334 container remove bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_cerf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:17 compute-0 systemd[1]: libpod-conmon-bac96a9668968baf9f8c5be58040e8b4d82e581ca77cd70036c0fd3288fa0a35.scope: Deactivated successfully.
Jan 31 07:56:18 compute-0 podman[299181]: 2026-01-31 07:56:18.149029587 +0000 UTC m=+0.048294952 container create 23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:56:18 compute-0 systemd[1]: Started libpod-conmon-23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe.scope.
Jan 31 07:56:18 compute-0 podman[299181]: 2026-01-31 07:56:18.126735147 +0000 UTC m=+0.026000612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:56:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bae2e74e51ff864a5fa0fb81ef6fbd168b9bb6965ddf8b6f3945f06d2a3dcfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bae2e74e51ff864a5fa0fb81ef6fbd168b9bb6965ddf8b6f3945f06d2a3dcfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bae2e74e51ff864a5fa0fb81ef6fbd168b9bb6965ddf8b6f3945f06d2a3dcfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bae2e74e51ff864a5fa0fb81ef6fbd168b9bb6965ddf8b6f3945f06d2a3dcfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:18 compute-0 podman[299181]: 2026-01-31 07:56:18.261360123 +0000 UTC m=+0.160625508 container init 23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:56:18 compute-0 podman[299181]: 2026-01-31 07:56:18.269070506 +0000 UTC m=+0.168335871 container start 23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:18 compute-0 podman[299181]: 2026-01-31 07:56:18.331794402 +0000 UTC m=+0.231059777 container attach 23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:18.403 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:18 compute-0 nova_compute[247704]: 2026-01-31 07:56:18.599 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:56:18 compute-0 nova_compute[247704]: 2026-01-31 07:56:18.600 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:18 compute-0 nova_compute[247704]: 2026-01-31 07:56:18.601 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:18 compute-0 nova_compute[247704]: 2026-01-31 07:56:18.605 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:18 compute-0 nova_compute[247704]: 2026-01-31 07:56:18.760 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:18.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2041874506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:19 compute-0 quizzical_cray[299197]: {
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:     "0": [
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:         {
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "devices": [
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "/dev/loop3"
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             ],
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "lv_name": "ceph_lv0",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "lv_size": "7511998464",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "name": "ceph_lv0",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "tags": {
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.cluster_name": "ceph",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.crush_device_class": "",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.encrypted": "0",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.osd_id": "0",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.type": "block",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:                 "ceph.vdo": "0"
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             },
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "type": "block",
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:             "vg_name": "ceph_vg0"
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:         }
Jan 31 07:56:19 compute-0 quizzical_cray[299197]:     ]
Jan 31 07:56:19 compute-0 quizzical_cray[299197]: }
Jan 31 07:56:19 compute-0 systemd[1]: libpod-23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe.scope: Deactivated successfully.
Jan 31 07:56:19 compute-0 podman[299181]: 2026-01-31 07:56:19.036338395 +0000 UTC m=+0.935603770 container died 23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:56:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:19.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 211 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Jan 31 07:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bae2e74e51ff864a5fa0fb81ef6fbd168b9bb6965ddf8b6f3945f06d2a3dcfc-merged.mount: Deactivated successfully.
Jan 31 07:56:19 compute-0 podman[299181]: 2026-01-31 07:56:19.299350697 +0000 UTC m=+1.198616092 container remove 23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_cray, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:56:19 compute-0 nova_compute[247704]: 2026-01-31 07:56:19.304 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:19 compute-0 systemd[1]: libpod-conmon-23ace7261b1fefe42368eef2b9842cdbe23da0e60d80a6c0901638747e8f9ebe.scope: Deactivated successfully.
Jan 31 07:56:19 compute-0 sudo[299023]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:19 compute-0 sudo[299220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:19 compute-0 sudo[299220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:19 compute-0 sudo[299220]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:19 compute-0 sudo[299246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:56:19 compute-0 sudo[299246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:19 compute-0 sudo[299246]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:19 compute-0 sudo[299271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:19 compute-0 sudo[299271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:19 compute-0 sudo[299271]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:19 compute-0 sudo[299296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:56:19 compute-0 sudo[299296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:19 compute-0 podman[299362]: 2026-01-31 07:56:19.881728143 +0000 UTC m=+0.021074793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:56:20
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root']
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:56:20 compute-0 podman[299362]: 2026-01-31 07:56:20.125675177 +0000 UTC m=+0.265021837 container create efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 07:56:20 compute-0 ceph-mon[74496]: pgmap v1772: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 211 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Jan 31 07:56:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/51827031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:20 compute-0 systemd[1]: Started libpod-conmon-efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4.scope.
Jan 31 07:56:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:20 compute-0 podman[299362]: 2026-01-31 07:56:20.312989844 +0000 UTC m=+0.452336564 container init efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:56:20 compute-0 podman[299362]: 2026-01-31 07:56:20.321875433 +0000 UTC m=+0.461222103 container start efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:20 compute-0 podman[299362]: 2026-01-31 07:56:20.326630899 +0000 UTC m=+0.465977609 container attach efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:20 compute-0 romantic_neumann[299378]: 167 167
Jan 31 07:56:20 compute-0 systemd[1]: libpod-efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4.scope: Deactivated successfully.
Jan 31 07:56:20 compute-0 podman[299362]: 2026-01-31 07:56:20.330756241 +0000 UTC m=+0.470102901 container died efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3bdec6530cbfdbf1f9e63c4823e33ccd435f13522a04f09fd96e19caeb55315-merged.mount: Deactivated successfully.
Jan 31 07:56:20 compute-0 podman[299362]: 2026-01-31 07:56:20.381323225 +0000 UTC m=+0.520669885 container remove efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:20 compute-0 systemd[1]: libpod-conmon-efba6df60f3d4377293bc4fbf7192264b4c747eca0595fb4b1343898937fe2a4.scope: Deactivated successfully.
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:56:20 compute-0 podman[299400]: 2026-01-31 07:56:20.571228559 +0000 UTC m=+0.044009127 container create 8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:56:20 compute-0 systemd[1]: Started libpod-conmon-8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b.scope.
Jan 31 07:56:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3255910a6e574a730e440ea601002c04f498baa93b9004cc6e80917bdd4e4c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3255910a6e574a730e440ea601002c04f498baa93b9004cc6e80917bdd4e4c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3255910a6e574a730e440ea601002c04f498baa93b9004cc6e80917bdd4e4c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3255910a6e574a730e440ea601002c04f498baa93b9004cc6e80917bdd4e4c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:20 compute-0 podman[299400]: 2026-01-31 07:56:20.551995438 +0000 UTC m=+0.024776036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:56:20 compute-0 podman[299400]: 2026-01-31 07:56:20.647901387 +0000 UTC m=+0.120682035 container init 8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 07:56:20 compute-0 podman[299400]: 2026-01-31 07:56:20.654714819 +0000 UTC m=+0.127495387 container start 8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:56:20 compute-0 podman[299400]: 2026-01-31 07:56:20.658257478 +0000 UTC m=+0.131038056 container attach 8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:56:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:20.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:21 compute-0 nova_compute[247704]: 2026-01-31 07:56:21.031 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:21 compute-0 nova_compute[247704]: 2026-01-31 07:56:21.033 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:21 compute-0 nova_compute[247704]: 2026-01-31 07:56:21.033 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:21 compute-0 nova_compute[247704]: 2026-01-31 07:56:21.034 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:21 compute-0 nova_compute[247704]: 2026-01-31 07:56:21.034 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:56:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:21.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 17 KiB/s wr, 11 op/s
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]: {
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]:         "osd_id": 0,
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]:         "type": "bluestore"
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]:     }
Jan 31 07:56:21 compute-0 agitated_bhaskara[299416]: }
Jan 31 07:56:21 compute-0 systemd[1]: libpod-8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b.scope: Deactivated successfully.
Jan 31 07:56:21 compute-0 podman[299400]: 2026-01-31 07:56:21.559276993 +0000 UTC m=+1.032057641 container died 8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:21 compute-0 nova_compute[247704]: 2026-01-31 07:56:21.559 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3255910a6e574a730e440ea601002c04f498baa93b9004cc6e80917bdd4e4c8-merged.mount: Deactivated successfully.
Jan 31 07:56:21 compute-0 podman[299400]: 2026-01-31 07:56:21.786379 +0000 UTC m=+1.259159598 container remove 8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_bhaskara, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:56:21 compute-0 systemd[1]: libpod-conmon-8142d6eb84e9a738f9f8c17151a34c5bcf22935099dfa76e79124eed778ad32b.scope: Deactivated successfully.
Jan 31 07:56:21 compute-0 sudo[299296]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:56:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:56:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ee35b096-da69-485f-be94-8abddd358782 does not exist
Jan 31 07:56:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4d34ce24-1dcd-499c-a325-08c7448f25dc does not exist
Jan 31 07:56:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1f909b91-5f8e-49f9-bbcf-658cabdcb938 does not exist
Jan 31 07:56:22 compute-0 sudo[299451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:22 compute-0 sudo[299451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:22 compute-0 sudo[299451]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:22 compute-0 sudo[299476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:56:22 compute-0 sudo[299476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:22 compute-0 sudo[299476]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:22 compute-0 ceph-mon[74496]: pgmap v1773: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 17 KiB/s wr, 11 op/s
Jan 31 07:56:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:56:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1461398054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:22.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:23.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Jan 31 07:56:23 compute-0 nova_compute[247704]: 2026-01-31 07:56:23.762 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:24 compute-0 nova_compute[247704]: 2026-01-31 07:56:24.308 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:24 compute-0 ceph-mon[74496]: pgmap v1774: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 0 op/s
Jan 31 07:56:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:24.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:25.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 07:56:25 compute-0 podman[299504]: 2026-01-31 07:56:25.900833662 +0000 UTC m=+0.071343159 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 07:56:26 compute-0 ceph-mon[74496]: pgmap v1775: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 07:56:26 compute-0 nova_compute[247704]: 2026-01-31 07:56:26.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:26 compute-0 nova_compute[247704]: 2026-01-31 07:56:26.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 07:56:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:26.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:27.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 07:56:27 compute-0 ceph-mon[74496]: pgmap v1776: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 07:56:28 compute-0 nova_compute[247704]: 2026-01-31 07:56:28.765 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:28.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 1 op/s
Jan 31 07:56:29 compute-0 nova_compute[247704]: 2026-01-31 07:56:29.310 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 07:56:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 26K writes, 102K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 26K writes, 8855 syncs, 3.00 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5127 writes, 18K keys, 5127 commit groups, 1.0 writes per commit group, ingest: 20.43 MB, 0.03 MB/s
                                           Interval WAL: 5128 writes, 1972 syncs, 2.60 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 07:56:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:30 compute-0 ceph-mon[74496]: pgmap v1777: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 15 KiB/s wr, 1 op/s
Jan 31 07:56:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:30.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 2.7 KiB/s wr, 0 op/s
Jan 31 07:56:32 compute-0 ceph-mon[74496]: pgmap v1778: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 2.7 KiB/s wr, 0 op/s
Jan 31 07:56:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3368109371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:32 compute-0 nova_compute[247704]: 2026-01-31 07:56:32.723 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:32 compute-0 nova_compute[247704]: 2026-01-31 07:56:32.724 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:32 compute-0 nova_compute[247704]: 2026-01-31 07:56:32.760 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:56:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:32.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:32 compute-0 nova_compute[247704]: 2026-01-31 07:56:32.964 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:32 compute-0 nova_compute[247704]: 2026-01-31 07:56:32.966 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:32 compute-0 nova_compute[247704]: 2026-01-31 07:56:32.974 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:56:32 compute-0 nova_compute[247704]: 2026-01-31 07:56:32.975 247708 INFO nova.compute.claims [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.145 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s wr, 1 op/s
Jan 31 07:56:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:33.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:56:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280707329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.635 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.640 247708 DEBUG nova.compute.provider_tree [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.659 247708 DEBUG nova.scheduler.client.report [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.715 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.716 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.768 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.838 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.838 247708 DEBUG nova.network.neutron [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.879 247708 INFO nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:56:33 compute-0 nova_compute[247704]: 2026-01-31 07:56:33.907 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.069 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.071 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.071 247708 INFO nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Creating image(s)
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.098 247708 DEBUG nova.storage.rbd_utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 4c02576e-848d-4193-88a4-239a9e86e206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.127 247708 DEBUG nova.storage.rbd_utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 4c02576e-848d-4193-88a4-239a9e86e206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.158 247708 DEBUG nova.storage.rbd_utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 4c02576e-848d-4193-88a4-239a9e86e206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.164 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.165 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.313 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:34 compute-0 ceph-mon[74496]: pgmap v1779: 305 pgs: 305 active+clean; 200 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s wr, 1 op/s
Jan 31 07:56:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4280707329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.509155) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194509311, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2163, "num_deletes": 251, "total_data_size": 3876285, "memory_usage": 3933456, "flush_reason": "Manual Compaction"}
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194569345, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3776654, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36898, "largest_seqno": 39060, "table_properties": {"data_size": 3767039, "index_size": 6045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20255, "raw_average_key_size": 20, "raw_value_size": 3747648, "raw_average_value_size": 3797, "num_data_blocks": 264, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845964, "oldest_key_time": 1769845964, "file_creation_time": 1769846194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 60217 microseconds, and 6867 cpu microseconds.
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.569388) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3776654 bytes OK
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.569407) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.572703) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.572760) EVENT_LOG_v1 {"time_micros": 1769846194572749, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.572789) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3867553, prev total WAL file size 3867553, number of live WAL files 2.
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.573993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3688KB)], [80(9752KB)]
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194574108, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13763237, "oldest_snapshot_seqno": -1}
Jan 31 07:56:34 compute-0 nova_compute[247704]: 2026-01-31 07:56:34.704 247708 DEBUG nova.policy [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f60419a58aea43b9a0b6db7d61d71246', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1cd91610847a480caeee0ae3cdabf066', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6764 keys, 11851849 bytes, temperature: kUnknown
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194775637, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11851849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11804606, "index_size": 29211, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 173609, "raw_average_key_size": 25, "raw_value_size": 11681457, "raw_average_value_size": 1727, "num_data_blocks": 1167, "num_entries": 6764, "num_filter_entries": 6764, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769846194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.775976) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11851849 bytes
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.781858) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.3 rd, 58.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.5 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 7289, records dropped: 525 output_compression: NoCompression
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.781887) EVENT_LOG_v1 {"time_micros": 1769846194781873, "job": 46, "event": "compaction_finished", "compaction_time_micros": 201629, "compaction_time_cpu_micros": 36473, "output_level": 6, "num_output_files": 1, "total_output_size": 11851849, "num_input_records": 7289, "num_output_records": 6764, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194782576, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846194783944, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.573821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.784134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.784144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.784148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.784152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:56:34.784156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:56:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:34.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004345003373348222 of space, bias 1.0, pg target 1.3035010120044666 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 07:56:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 222 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 510 KiB/s wr, 16 op/s
Jan 31 07:56:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000021s ======
Jan 31 07:56:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:35.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Jan 31 07:56:36 compute-0 nova_compute[247704]: 2026-01-31 07:56:36.216 247708 DEBUG nova.virt.libvirt.imagebackend [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/40cf2ff3-f7ff-4843-b4ab-b7dcc843006f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/40cf2ff3-f7ff-4843-b4ab-b7dcc843006f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 07:56:36 compute-0 ceph-mon[74496]: pgmap v1780: 305 pgs: 305 active+clean; 222 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 510 KiB/s wr, 16 op/s
Jan 31 07:56:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:36.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 237 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 16 op/s
Jan 31 07:56:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:37.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:37 compute-0 nova_compute[247704]: 2026-01-31 07:56:37.623 247708 DEBUG nova.network.neutron [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Successfully created port: ab92f411-7c5d-40bc-b720-f7cea0eb4596 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:56:37 compute-0 sudo[299611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:37 compute-0 sudo[299611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:37 compute-0 sudo[299611]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:37 compute-0 sudo[299636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:37 compute-0 ceph-mon[74496]: pgmap v1781: 305 pgs: 305 active+clean; 237 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 16 op/s
Jan 31 07:56:37 compute-0 sudo[299636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:37 compute-0 sudo[299636]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:38 compute-0 nova_compute[247704]: 2026-01-31 07:56:38.771 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1342499023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:38.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.133 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 702 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 07:56:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:39.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.212 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.214 247708 DEBUG nova.virt.images [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] 40cf2ff3-f7ff-4843-b4ab-b7dcc843006f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.215 247708 DEBUG nova.privsep.utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.215 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.315 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.564 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.part /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.568 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.651 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.653 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.684 247708 DEBUG nova.storage.rbd_utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 4c02576e-848d-4193-88a4-239a9e86e206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.689 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 4c02576e-848d-4193-88a4-239a9e86e206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.748 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.919 247708 WARNING nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.920 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 4c02576e-848d-4193-88a4-239a9e86e206 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 07:56:39 compute-0 nova_compute[247704]: 2026-01-31 07:56:39.921 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:39 compute-0 ceph-mon[74496]: pgmap v1782: 305 pgs: 305 active+clean; 246 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 702 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.058 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 4c02576e-848d-4193-88a4-239a9e86e206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.129 247708 DEBUG nova.storage.rbd_utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] resizing rbd image 4c02576e-848d-4193-88a4-239a9e86e206_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.243 247708 DEBUG nova.network.neutron [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Successfully updated port: ab92f411-7c5d-40bc-b720-f7cea0eb4596 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.248 247708 DEBUG nova.objects.instance [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c02576e-848d-4193-88a4-239a9e86e206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.367 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.368 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.381 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.382 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Ensure instance console log exists: /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.382 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.383 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.383 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.433 247708 DEBUG nova.compute.manager [req-f2139636-2c28-429d-8fc7-b5c433059741 req-a894f421-6771-4cbd-ae19-9a463fb65971 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received event network-changed-ab92f411-7c5d-40bc-b720-f7cea0eb4596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.434 247708 DEBUG nova.compute.manager [req-f2139636-2c28-429d-8fc7-b5c433059741 req-a894f421-6771-4cbd-ae19-9a463fb65971 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Refreshing instance network info cache due to event network-changed-ab92f411-7c5d-40bc-b720-f7cea0eb4596. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.434 247708 DEBUG oslo_concurrency.lockutils [req-f2139636-2c28-429d-8fc7-b5c433059741 req-a894f421-6771-4cbd-ae19-9a463fb65971 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.435 247708 DEBUG oslo_concurrency.lockutils [req-f2139636-2c28-429d-8fc7-b5c433059741 req-a894f421-6771-4cbd-ae19-9a463fb65971 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.435 247708 DEBUG nova.network.neutron [req-f2139636-2c28-429d-8fc7-b5c433059741 req-a894f421-6771-4cbd-ae19-9a463fb65971 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Refreshing network info cache for port ab92f411-7c5d-40bc-b720-f7cea0eb4596 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.440 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.557 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.758 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.759 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.768 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.768 247708 INFO nova.compute.claims [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:56:40 compute-0 podman[299788]: 2026-01-31 07:56:40.893715269 +0000 UTC m=+0.060999788 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 07:56:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:40.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:40 compute-0 nova_compute[247704]: 2026-01-31 07:56:40.973 247708 DEBUG nova.network.neutron [req-f2139636-2c28-429d-8fc7-b5c433059741 req-a894f421-6771-4cbd-ae19-9a463fb65971 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:56:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 271 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 36 op/s
Jan 31 07:56:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000022s ======
Jan 31 07:56:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:41.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Jan 31 07:56:41 compute-0 nova_compute[247704]: 2026-01-31 07:56:41.192 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 07:56:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:56:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177322587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:41 compute-0 nova_compute[247704]: 2026-01-31 07:56:41.613 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:41 compute-0 nova_compute[247704]: 2026-01-31 07:56:41.619 247708 DEBUG nova.compute.provider_tree [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:56:41 compute-0 nova_compute[247704]: 2026-01-31 07:56:41.842 247708 DEBUG nova.scheduler.client.report [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:56:41 compute-0 nova_compute[247704]: 2026-01-31 07:56:41.904 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:41 compute-0 nova_compute[247704]: 2026-01-31 07:56:41.905 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:56:41 compute-0 nova_compute[247704]: 2026-01-31 07:56:41.987 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:56:41 compute-0 nova_compute[247704]: 2026-01-31 07:56:41.987 247708 DEBUG nova.network.neutron [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.017 247708 INFO nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.042 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.071 247708 DEBUG nova.network.neutron [req-f2139636-2c28-429d-8fc7-b5c433059741 req-a894f421-6771-4cbd-ae19-9a463fb65971 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.099 247708 DEBUG oslo_concurrency.lockutils [req-f2139636-2c28-429d-8fc7-b5c433059741 req-a894f421-6771-4cbd-ae19-9a463fb65971 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.100 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquired lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.100 247708 DEBUG nova.network.neutron [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.191 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.193 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.193 247708 INFO nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Creating image(s)
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.223 247708 DEBUG nova.storage.rbd_utils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image 7e357b28-4198-49a1-b10b-bbd4274180d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:42 compute-0 ceph-mon[74496]: pgmap v1783: 305 pgs: 305 active+clean; 271 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 36 op/s
Jan 31 07:56:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3177322587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.255 247708 DEBUG nova.storage.rbd_utils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image 7e357b28-4198-49a1-b10b-bbd4274180d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.280 247708 DEBUG nova.storage.rbd_utils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image 7e357b28-4198-49a1-b10b-bbd4274180d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.286 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.343 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.344 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.345 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.345 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.370 247708 DEBUG nova.storage.rbd_utils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image 7e357b28-4198-49a1-b10b-bbd4274180d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.376 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 7e357b28-4198-49a1-b10b-bbd4274180d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.827 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 7e357b28-4198-49a1-b10b-bbd4274180d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.900 247708 DEBUG nova.storage.rbd_utils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] resizing rbd image 7e357b28-4198-49a1-b10b-bbd4274180d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:56:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:42.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:42 compute-0 nova_compute[247704]: 2026-01-31 07:56:42.942 247708 DEBUG nova.policy [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56eecf4373334b18a454186e0c54e924', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56ce08a86486427fbebbfbd075cdb404', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:56:43 compute-0 nova_compute[247704]: 2026-01-31 07:56:43.067 247708 DEBUG nova.network.neutron [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:56:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 301 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 63 op/s
Jan 31 07:56:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:43 compute-0 nova_compute[247704]: 2026-01-31 07:56:43.442 247708 DEBUG nova.objects.instance [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lazy-loading 'migration_context' on Instance uuid 7e357b28-4198-49a1-b10b-bbd4274180d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:56:43 compute-0 nova_compute[247704]: 2026-01-31 07:56:43.462 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:56:43 compute-0 nova_compute[247704]: 2026-01-31 07:56:43.463 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Ensure instance console log exists: /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:56:43 compute-0 nova_compute[247704]: 2026-01-31 07:56:43.463 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:43 compute-0 nova_compute[247704]: 2026-01-31 07:56:43.463 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:43 compute-0 nova_compute[247704]: 2026-01-31 07:56:43.464 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:43 compute-0 nova_compute[247704]: 2026-01-31 07:56:43.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:44 compute-0 nova_compute[247704]: 2026-01-31 07:56:44.317 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:44 compute-0 ceph-mon[74496]: pgmap v1784: 305 pgs: 305 active+clean; 301 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 63 op/s
Jan 31 07:56:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2742351362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2933832284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:44.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 367 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.2 MiB/s wr, 101 op/s
Jan 31 07:56:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.415 247708 DEBUG nova.network.neutron [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Successfully created port: 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.450 247708 DEBUG nova.network.neutron [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Updating instance_info_cache with network_info: [{"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:56:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/351507074' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:56:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/351507074' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.580 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Releasing lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.580 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Instance network_info: |[{"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.582 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Start _get_guest_xml network_info=[{"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '40cf2ff3-f7ff-4843-b4ab-b7dcc843006f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.588 247708 WARNING nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.593 247708 DEBUG nova.virt.libvirt.host [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.594 247708 DEBUG nova.virt.libvirt.host [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.606 247708 DEBUG nova.virt.libvirt.host [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.607 247708 DEBUG nova.virt.libvirt.host [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.608 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.609 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.609 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.609 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.610 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.610 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.610 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.610 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.611 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.611 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.611 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.611 247708 DEBUG nova.virt.hardware [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:56:45 compute-0 nova_compute[247704]: 2026-01-31 07:56:45.614 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:56:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256788063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.098 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.139 247708 DEBUG nova.storage.rbd_utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 4c02576e-848d-4193-88a4-239a9e86e206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.145 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:56:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2010572055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.579 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.582 247708 DEBUG nova.virt.libvirt.vif [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-352749004',display_name='tempest-ListServerFiltersTestJSON-instance-352749004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-352749004',id=82,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-imjxf71g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-List
ServerFiltersTestJSON-334452958-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:33Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=4c02576e-848d-4193-88a4-239a9e86e206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.583 247708 DEBUG nova.network.os_vif_util [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.584 247708 DEBUG nova.network.os_vif_util [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:7a:df,bridge_name='br-int',has_traffic_filtering=True,id=ab92f411-7c5d-40bc-b720-f7cea0eb4596,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92f411-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.586 247708 DEBUG nova.objects.instance [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c02576e-848d-4193-88a4-239a9e86e206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:56:46 compute-0 ceph-mon[74496]: pgmap v1785: 305 pgs: 305 active+clean; 367 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.2 MiB/s wr, 101 op/s
Jan 31 07:56:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4256788063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2010572055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.614 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <uuid>4c02576e-848d-4193-88a4-239a9e86e206</uuid>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <name>instance-00000052</name>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-352749004</nova:name>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:56:45</nova:creationTime>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <nova:user uuid="f60419a58aea43b9a0b6db7d61d71246">tempest-ListServerFiltersTestJSON-334452958-project-member</nova:user>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <nova:project uuid="1cd91610847a480caeee0ae3cdabf066">tempest-ListServerFiltersTestJSON-334452958</nova:project>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <nova:port uuid="ab92f411-7c5d-40bc-b720-f7cea0eb4596">
Jan 31 07:56:46 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <system>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <entry name="serial">4c02576e-848d-4193-88a4-239a9e86e206</entry>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <entry name="uuid">4c02576e-848d-4193-88a4-239a9e86e206</entry>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </system>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <os>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   </os>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <features>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   </features>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4c02576e-848d-4193-88a4-239a9e86e206_disk">
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       </source>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4c02576e-848d-4193-88a4-239a9e86e206_disk.config">
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       </source>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:56:46 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:2a:7a:df"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <target dev="tapab92f411-7c"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206/console.log" append="off"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <video>
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </video>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:56:46 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:56:46 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:56:46 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:56:46 compute-0 nova_compute[247704]: </domain>
Jan 31 07:56:46 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.616 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Preparing to wait for external event network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.616 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.617 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.617 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.618 247708 DEBUG nova.virt.libvirt.vif [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-352749004',display_name='tempest-ListServerFiltersTestJSON-instance-352749004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-352749004',id=82,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-imjxf71g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:33Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=4c02576e-848d-4193-88a4-239a9e86e206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.619 247708 DEBUG nova.network.os_vif_util [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.620 247708 DEBUG nova.network.os_vif_util [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:7a:df,bridge_name='br-int',has_traffic_filtering=True,id=ab92f411-7c5d-40bc-b720-f7cea0eb4596,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92f411-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.621 247708 DEBUG os_vif [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:7a:df,bridge_name='br-int',has_traffic_filtering=True,id=ab92f411-7c5d-40bc-b720-f7cea0eb4596,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92f411-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.622 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.623 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.624 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.629 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.630 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab92f411-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.630 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab92f411-7c, col_values=(('external_ids', {'iface-id': 'ab92f411-7c5d-40bc-b720-f7cea0eb4596', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:7a:df', 'vm-uuid': '4c02576e-848d-4193-88a4-239a9e86e206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.632 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:46 compute-0 NetworkManager[49108]: <info>  [1769846206.6345] manager: (tapab92f411-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.634 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.639 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.640 247708 INFO os_vif [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:7a:df,bridge_name='br-int',has_traffic_filtering=True,id=ab92f411-7c5d-40bc-b720-f7cea0eb4596,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92f411-7c')
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.710 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.711 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.711 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] No VIF found with MAC fa:16:3e:2a:7a:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.712 247708 INFO nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Using config drive
Jan 31 07:56:46 compute-0 nova_compute[247704]: 2026-01-31 07:56:46.742 247708 DEBUG nova.storage.rbd_utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 4c02576e-848d-4193-88a4-239a9e86e206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:46.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 375 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 103 op/s
Jan 31 07:56:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:47.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:47 compute-0 nova_compute[247704]: 2026-01-31 07:56:47.771 247708 INFO nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Creating config drive at /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206/disk.config
Jan 31 07:56:47 compute-0 nova_compute[247704]: 2026-01-31 07:56:47.776 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdw68wryf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:47 compute-0 nova_compute[247704]: 2026-01-31 07:56:47.906 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdw68wryf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:47 compute-0 nova_compute[247704]: 2026-01-31 07:56:47.942 247708 DEBUG nova.storage.rbd_utils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] rbd image 4c02576e-848d-4193-88a4-239a9e86e206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:56:47 compute-0 nova_compute[247704]: 2026-01-31 07:56:47.947 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206/disk.config 4c02576e-848d-4193-88a4-239a9e86e206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.148 247708 DEBUG oslo_concurrency.processutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206/disk.config 4c02576e-848d-4193-88a4-239a9e86e206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.150 247708 INFO nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Deleting local config drive /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206/disk.config because it was imported into RBD.
Jan 31 07:56:48 compute-0 kernel: tapab92f411-7c: entered promiscuous mode
Jan 31 07:56:48 compute-0 ovn_controller[149457]: 2026-01-31T07:56:48Z|00294|binding|INFO|Claiming lport ab92f411-7c5d-40bc-b720-f7cea0eb4596 for this chassis.
Jan 31 07:56:48 compute-0 ovn_controller[149457]: 2026-01-31T07:56:48Z|00295|binding|INFO|ab92f411-7c5d-40bc-b720-f7cea0eb4596: Claiming fa:16:3e:2a:7a:df 10.100.0.6
Jan 31 07:56:48 compute-0 NetworkManager[49108]: <info>  [1769846208.2247] manager: (tapab92f411-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.223 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:48 compute-0 systemd-udevd[300132]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.256 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:48 compute-0 systemd-machined[214448]: New machine qemu-32-instance-00000052.
Jan 31 07:56:48 compute-0 NetworkManager[49108]: <info>  [1769846208.2598] device (tapab92f411-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:56:48 compute-0 NetworkManager[49108]: <info>  [1769846208.2612] device (tapab92f411-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:56:48 compute-0 ovn_controller[149457]: 2026-01-31T07:56:48Z|00296|binding|INFO|Setting lport ab92f411-7c5d-40bc-b720-f7cea0eb4596 ovn-installed in OVS
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.264 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:48 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-00000052.
Jan 31 07:56:48 compute-0 ovn_controller[149457]: 2026-01-31T07:56:48Z|00297|binding|INFO|Setting lport ab92f411-7c5d-40bc-b720-f7cea0eb4596 up in Southbound
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.278 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:7a:df 10.100.0.6'], port_security=['fa:16:3e:2a:7a:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4c02576e-848d-4193-88a4-239a9e86e206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cd91610847a480caeee0ae3cdabf066', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1695a46-1d81-4453-9c52-9917c020bc65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e38d659-b6a8-4d3d-8a23-b8299c5114da, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=ab92f411-7c5d-40bc-b720-f7cea0eb4596) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.279 160028 INFO neutron.agent.ovn.metadata.agent [-] Port ab92f411-7c5d-40bc-b720-f7cea0eb4596 in datapath ca1ed3b2-b27d-427e-a9bd-cc12393752eb bound to our chassis
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.281 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.290 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dea23f58-4c62-4a7b-9b5b-95c509a03ffb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.291 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca1ed3b2-b1 in ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.293 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca1ed3b2-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.293 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[29376b6f-c8c3-430a-944f-ace9cdd6ef56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.294 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b0854439-a0e5-42db-a9ea-f70a77b7dde1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.306 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[b09f9754-9d38-4a3c-8cbb-59e02823daea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.319 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[caf13e33-934c-4ed7-90de-be017fb3bc2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.352 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[154f95c7-18f4-4ff5-8425-e1f833aeb64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 systemd-udevd[300136]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:56:48 compute-0 NetworkManager[49108]: <info>  [1769846208.3601] manager: (tapca1ed3b2-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.359 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3dac1c-904a-4637-aa52-ef193ac1a7ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.387 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[df126e0c-5ddb-4f11-a8e3-fd07ca737c44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.390 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[86e58701-bbdc-45a2-9771-4b24ebee0a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 NetworkManager[49108]: <info>  [1769846208.4115] device (tapca1ed3b2-b0): carrier: link connected
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.416 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5760f042-1d7e-425e-823d-bba66293657c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.430 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9e243239-29bd-4807-8e65-27a1f95376c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca1ed3b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:70:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645102, 'reachable_time': 39100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300167, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.450 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ff039166-e297-46f1-a0e3-640f067defa2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:7011'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 645102, 'tstamp': 645102}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300168, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.463 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7a4d47-be4b-4ea8-b916-e72ad9a1da58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca1ed3b2-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:70:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645102, 'reachable_time': 39100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300169, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.503 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[23b131cc-f222-41db-b6e5-1dbd81fda10d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.567 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[59436f51-7893-4f67-a432-a02d3ed154a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.569 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1ed3b2-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.570 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.570 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca1ed3b2-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.573 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:48 compute-0 kernel: tapca1ed3b2-b0: entered promiscuous mode
Jan 31 07:56:48 compute-0 NetworkManager[49108]: <info>  [1769846208.5739] manager: (tapca1ed3b2-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.576 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.579 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca1ed3b2-b0, col_values=(('external_ids', {'iface-id': 'd19b5f05-fa79-4835-8ef4-51f87493d59b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.580 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:48 compute-0 ovn_controller[149457]: 2026-01-31T07:56:48Z|00298|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.589 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.591 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.592 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5161c37a-7e03-4609-8008-4791b87bc1e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.593 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.pid.haproxy
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID ca1ed3b2-b27d-427e-a9bd-cc12393752eb
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:56:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:56:48.594 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'env', 'PROCESS_TAG=haproxy-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca1ed3b2-b27d-427e-a9bd-cc12393752eb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:56:48 compute-0 ceph-mon[74496]: pgmap v1786: 305 pgs: 305 active+clean; 375 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 103 op/s
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.879 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846208.8785293, 4c02576e-848d-4193-88a4-239a9e86e206 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.880 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] VM Started (Lifecycle Event)
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.920 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.926 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846208.8786962, 4c02576e-848d-4193-88a4-239a9e86e206 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.927 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] VM Paused (Lifecycle Event)
Jan 31 07:56:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:48.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.963 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:48 compute-0 nova_compute[247704]: 2026-01-31 07:56:48.968 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.002 247708 DEBUG nova.network.neutron [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Successfully created port: 52146f19-c519-41e7-a10c-703af57ee396 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.052 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:56:49 compute-0 podman[300242]: 2026-01-31 07:56:49.058482664 +0000 UTC m=+0.069964748 container create 6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:56:49 compute-0 systemd[1]: Started libpod-conmon-6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8.scope.
Jan 31 07:56:49 compute-0 podman[300242]: 2026-01-31 07:56:49.022464587 +0000 UTC m=+0.033946721 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:56:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc35fb10fa0787aeaab3c398522230f8922fbb3f6c9646b4aca798c02b6dfdf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:56:49 compute-0 podman[300242]: 2026-01-31 07:56:49.15601399 +0000 UTC m=+0.167496094 container init 6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:56:49 compute-0 podman[300242]: 2026-01-31 07:56:49.16450747 +0000 UTC m=+0.175989594 container start 6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.166 247708 DEBUG nova.compute.manager [req-fe6dcc59-1509-4015-8756-ae7e8b1a5c2f req-717636c5-6407-4e77-96dc-e37e3c09e795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received event network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.166 247708 DEBUG oslo_concurrency.lockutils [req-fe6dcc59-1509-4015-8756-ae7e8b1a5c2f req-717636c5-6407-4e77-96dc-e37e3c09e795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.167 247708 DEBUG oslo_concurrency.lockutils [req-fe6dcc59-1509-4015-8756-ae7e8b1a5c2f req-717636c5-6407-4e77-96dc-e37e3c09e795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.167 247708 DEBUG oslo_concurrency.lockutils [req-fe6dcc59-1509-4015-8756-ae7e8b1a5c2f req-717636c5-6407-4e77-96dc-e37e3c09e795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.167 247708 DEBUG nova.compute.manager [req-fe6dcc59-1509-4015-8756-ae7e8b1a5c2f req-717636c5-6407-4e77-96dc-e37e3c09e795 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Processing event network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.168 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.172 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846209.1719975, 4c02576e-848d-4193-88a4-239a9e86e206 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.172 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] VM Resumed (Lifecycle Event)
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.175 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.179 247708 INFO nova.virt.libvirt.driver [-] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Instance spawned successfully.
Jan 31 07:56:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 126 op/s
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.193 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:56:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:49 compute-0 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[300257]: [NOTICE]   (300261) : New worker (300263) forked
Jan 31 07:56:49 compute-0 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[300257]: [NOTICE]   (300261) : Loading success.
Jan 31 07:56:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.004000087s ======
Jan 31 07:56:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:49.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000087s
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.269 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.277 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.281 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.281 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.282 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.283 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.283 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.284 247708 DEBUG nova.virt.libvirt.driver [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.347 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:56:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.462 247708 INFO nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Took 15.39 seconds to spawn the instance on the hypervisor.
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.463 247708 DEBUG nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.575 247708 INFO nova.compute.manager [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Took 16.65 seconds to build instance.
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.610 247708 DEBUG oslo_concurrency.lockutils [None req-2442edd9-910f-468f-b7b1-3911c394bc4c f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.611 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "4c02576e-848d-4193-88a4-239a9e86e206" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 9.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.612 247708 INFO nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:56:49 compute-0 nova_compute[247704]: 2026-01-31 07:56:49.612 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "4c02576e-848d-4193-88a4-239a9e86e206" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:49 compute-0 ceph-mon[74496]: pgmap v1787: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 126 op/s
Jan 31 07:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:56:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1084162188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 153 op/s
Jan 31 07:56:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:56:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:51.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:56:51 compute-0 nova_compute[247704]: 2026-01-31 07:56:51.633 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:52 compute-0 ceph-mon[74496]: pgmap v1788: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 153 op/s
Jan 31 07:56:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1497168377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:56:52 compute-0 nova_compute[247704]: 2026-01-31 07:56:52.088 247708 DEBUG nova.compute.manager [req-03f0770c-606c-47cc-989a-95293bece81c req-727705ef-c6bc-4055-8207-63dfee1a41ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received event network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:56:52 compute-0 nova_compute[247704]: 2026-01-31 07:56:52.089 247708 DEBUG oslo_concurrency.lockutils [req-03f0770c-606c-47cc-989a-95293bece81c req-727705ef-c6bc-4055-8207-63dfee1a41ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:56:52 compute-0 nova_compute[247704]: 2026-01-31 07:56:52.089 247708 DEBUG oslo_concurrency.lockutils [req-03f0770c-606c-47cc-989a-95293bece81c req-727705ef-c6bc-4055-8207-63dfee1a41ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:56:52 compute-0 nova_compute[247704]: 2026-01-31 07:56:52.089 247708 DEBUG oslo_concurrency.lockutils [req-03f0770c-606c-47cc-989a-95293bece81c req-727705ef-c6bc-4055-8207-63dfee1a41ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:56:52 compute-0 nova_compute[247704]: 2026-01-31 07:56:52.090 247708 DEBUG nova.compute.manager [req-03f0770c-606c-47cc-989a-95293bece81c req-727705ef-c6bc-4055-8207-63dfee1a41ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] No waiting events found dispatching network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:56:52 compute-0 nova_compute[247704]: 2026-01-31 07:56:52.090 247708 WARNING nova.compute.manager [req-03f0770c-606c-47cc-989a-95293bece81c req-727705ef-c6bc-4055-8207-63dfee1a41ed 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received unexpected event network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 for instance with vm_state active and task_state None.
Jan 31 07:56:52 compute-0 nova_compute[247704]: 2026-01-31 07:56:52.295 247708 DEBUG nova.network.neutron [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Successfully updated port: 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:56:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:52.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 193 op/s
Jan 31 07:56:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:53.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:53 compute-0 nova_compute[247704]: 2026-01-31 07:56:53.779 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:53 compute-0 nova_compute[247704]: 2026-01-31 07:56:53.827 247708 DEBUG nova.compute.manager [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-changed-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:56:53 compute-0 nova_compute[247704]: 2026-01-31 07:56:53.828 247708 DEBUG nova.compute.manager [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Refreshing instance network info cache due to event network-changed-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:56:53 compute-0 nova_compute[247704]: 2026-01-31 07:56:53.828 247708 DEBUG oslo_concurrency.lockutils [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:56:53 compute-0 nova_compute[247704]: 2026-01-31 07:56:53.829 247708 DEBUG oslo_concurrency.lockutils [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:56:53 compute-0 nova_compute[247704]: 2026-01-31 07:56:53.830 247708 DEBUG nova.network.neutron [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Refreshing network info cache for port 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:56:54 compute-0 ceph-mon[74496]: pgmap v1789: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 193 op/s
Jan 31 07:56:54 compute-0 nova_compute[247704]: 2026-01-31 07:56:54.401 247708 DEBUG nova.network.neutron [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:56:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:56:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:54.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:55 compute-0 nova_compute[247704]: 2026-01-31 07:56:55.136 247708 DEBUG nova.network.neutron [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:56:55 compute-0 nova_compute[247704]: 2026-01-31 07:56:55.177 247708 DEBUG oslo_concurrency.lockutils [req-e016cffb-cb3f-4d40-bccb-ec40ee754a1d req-bf1ad417-9057-4306-a30f-02b92961afd7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:56:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Jan 31 07:56:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:55.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:56 compute-0 ceph-mon[74496]: pgmap v1790: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Jan 31 07:56:56 compute-0 nova_compute[247704]: 2026-01-31 07:56:56.377 247708 DEBUG nova.network.neutron [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Successfully updated port: 52146f19-c519-41e7-a10c-703af57ee396 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:56:56 compute-0 nova_compute[247704]: 2026-01-31 07:56:56.455 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:56:56 compute-0 nova_compute[247704]: 2026-01-31 07:56:56.456 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquired lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:56:56 compute-0 nova_compute[247704]: 2026-01-31 07:56:56.457 247708 DEBUG nova.network.neutron [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:56:56 compute-0 nova_compute[247704]: 2026-01-31 07:56:56.637 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:56 compute-0 nova_compute[247704]: 2026-01-31 07:56:56.797 247708 DEBUG nova.compute.manager [req-abe99814-e9b5-46c8-8490-dcb572258c71 req-fe0351d3-d9d5-423a-acdf-34f67f63af2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-changed-52146f19-c519-41e7-a10c-703af57ee396 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:56:56 compute-0 nova_compute[247704]: 2026-01-31 07:56:56.798 247708 DEBUG nova.compute.manager [req-abe99814-e9b5-46c8-8490-dcb572258c71 req-fe0351d3-d9d5-423a-acdf-34f67f63af2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Refreshing instance network info cache due to event network-changed-52146f19-c519-41e7-a10c-703af57ee396. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:56:56 compute-0 nova_compute[247704]: 2026-01-31 07:56:56.799 247708 DEBUG oslo_concurrency.lockutils [req-abe99814-e9b5-46c8-8490-dcb572258c71 req-fe0351d3-d9d5-423a-acdf-34f67f63af2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:56:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:56 compute-0 podman[300276]: 2026-01-31 07:56:56.984412976 +0000 UTC m=+0.147808503 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 07:56:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 920 KiB/s wr, 163 op/s
Jan 31 07:56:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:57.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:57 compute-0 nova_compute[247704]: 2026-01-31 07:56:57.472 247708 DEBUG nova.network.neutron [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:56:57 compute-0 sudo[300305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:57 compute-0 sudo[300305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:57 compute-0 sudo[300305]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:58 compute-0 sudo[300330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:56:58 compute-0 sudo[300330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:56:58 compute-0 sudo[300330]: pam_unix(sudo:session): session closed for user root
Jan 31 07:56:58 compute-0 ceph-mon[74496]: pgmap v1791: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 920 KiB/s wr, 163 op/s
Jan 31 07:56:58 compute-0 nova_compute[247704]: 2026-01-31 07:56:58.781 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:56:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:56:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:56:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 353 KiB/s wr, 216 op/s
Jan 31 07:56:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:56:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:56:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:59.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:56:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:00 compute-0 ceph-mon[74496]: pgmap v1792: 305 pgs: 305 active+clean; 386 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 353 KiB/s wr, 216 op/s
Jan 31 07:57:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 394 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 820 KiB/s wr, 201 op/s
Jan 31 07:57:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:01.227 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:01 compute-0 nova_compute[247704]: 2026-01-31 07:57:01.228 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:01.231 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:57:01 compute-0 nova_compute[247704]: 2026-01-31 07:57:01.639 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:02 compute-0 ovn_controller[149457]: 2026-01-31T07:57:02Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:7a:df 10.100.0.6
Jan 31 07:57:02 compute-0 ovn_controller[149457]: 2026-01-31T07:57:02Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:7a:df 10.100.0.6
Jan 31 07:57:02 compute-0 ceph-mon[74496]: pgmap v1793: 305 pgs: 305 active+clean; 394 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 820 KiB/s wr, 201 op/s
Jan 31 07:57:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:57:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:02.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:57:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 410 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.9 MiB/s wr, 194 op/s
Jan 31 07:57:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.411 247708 DEBUG nova.network.neutron [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Updating instance_info_cache with network_info: [{"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.782 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.962 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Releasing lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.963 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Instance network_info: |[{"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.964 247708 DEBUG oslo_concurrency.lockutils [req-abe99814-e9b5-46c8-8490-dcb572258c71 req-fe0351d3-d9d5-423a-acdf-34f67f63af2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.964 247708 DEBUG nova.network.neutron [req-abe99814-e9b5-46c8-8490-dcb572258c71 req-fe0351d3-d9d5-423a-acdf-34f67f63af2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Refreshing network info cache for port 52146f19-c519-41e7-a10c-703af57ee396 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.970 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Start _get_guest_xml network_info=[{"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.979 247708 WARNING nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.989 247708 DEBUG nova.virt.libvirt.host [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.990 247708 DEBUG nova.virt.libvirt.host [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.996 247708 DEBUG nova.virt.libvirt.host [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:57:03 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.997 247708 DEBUG nova.virt.libvirt.host [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:03.999 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.000 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.001 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.001 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.001 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.002 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.002 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.003 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.003 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.003 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.004 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.004 247708 DEBUG nova.virt.hardware [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.008 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:04.234 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:57:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621108763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.452 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.495 247708 DEBUG nova.storage.rbd_utils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image 7e357b28-4198-49a1-b10b-bbd4274180d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:57:04 compute-0 nova_compute[247704]: 2026-01-31 07:57:04.500 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:04 compute-0 ceph-mon[74496]: pgmap v1794: 305 pgs: 305 active+clean; 410 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.9 MiB/s wr, 194 op/s
Jan 31 07:57:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3621108763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:04.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:57:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031796697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.040 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.042 247708 DEBUG nova.virt.libvirt.vif [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1201502254',display_name='tempest-ServersTestMultiNic-server-1201502254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1201502254',id=84,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-nqtchzo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:42Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=7e357b28-4198-49a1-b10b-bbd4274180d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.042 247708 DEBUG nova.network.os_vif_util [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.043 247708 DEBUG nova.network.os_vif_util [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d9:0e,bridge_name='br-int',has_traffic_filtering=True,id=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15,network=Network(dc4862ee-e9f8-47d9-848f-baebec117c85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608af9e6-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.044 247708 DEBUG nova.virt.libvirt.vif [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1201502254',display_name='tempest-ServersTestMultiNic-server-1201502254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1201502254',id=84,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-nqtchzo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:42Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=7e357b28-4198-49a1-b10b-bbd4274180d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.044 247708 DEBUG nova.network.os_vif_util [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.045 247708 DEBUG nova.network.os_vif_util [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=52146f19-c519-41e7-a10c-703af57ee396,network=Network(20c6d006-2998-430a-ace9-5819b781e4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52146f19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.046 247708 DEBUG nova.objects.instance [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7e357b28-4198-49a1-b10b-bbd4274180d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.136 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <uuid>7e357b28-4198-49a1-b10b-bbd4274180d0</uuid>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <name>instance-00000054</name>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersTestMultiNic-server-1201502254</nova:name>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:57:03</nova:creationTime>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:user uuid="56eecf4373334b18a454186e0c54e924">tempest-ServersTestMultiNic-539084191-project-member</nova:user>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:project uuid="56ce08a86486427fbebbfbd075cdb404">tempest-ServersTestMultiNic-539084191</nova:project>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:port uuid="608af9e6-ae8b-4d49-9d2c-b8973ad0fe15">
Jan 31 07:57:05 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.186" ipVersion="4"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <nova:port uuid="52146f19-c519-41e7-a10c-703af57ee396">
Jan 31 07:57:05 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.1.206" ipVersion="4"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <system>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <entry name="serial">7e357b28-4198-49a1-b10b-bbd4274180d0</entry>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <entry name="uuid">7e357b28-4198-49a1-b10b-bbd4274180d0</entry>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </system>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <os>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   </os>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <features>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   </features>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/7e357b28-4198-49a1-b10b-bbd4274180d0_disk">
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       </source>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/7e357b28-4198-49a1-b10b-bbd4274180d0_disk.config">
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       </source>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:57:05 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:c5:d9:0e"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <target dev="tap608af9e6-ae"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:1e:c0:91"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <target dev="tap52146f19-c5"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0/console.log" append="off"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <video>
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </video>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:57:05 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:57:05 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:57:05 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:57:05 compute-0 nova_compute[247704]: </domain>
Jan 31 07:57:05 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.138 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Preparing to wait for external event network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.139 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.139 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.140 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.140 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Preparing to wait for external event network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.140 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.141 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.141 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.142 247708 DEBUG nova.virt.libvirt.vif [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1201502254',display_name='tempest-ServersTestMultiNic-server-1201502254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1201502254',id=84,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-nqtchzo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:42Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=7e357b28-4198-49a1-b10b-bbd4274180d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.142 247708 DEBUG nova.network.os_vif_util [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.143 247708 DEBUG nova.network.os_vif_util [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d9:0e,bridge_name='br-int',has_traffic_filtering=True,id=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15,network=Network(dc4862ee-e9f8-47d9-848f-baebec117c85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608af9e6-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.144 247708 DEBUG os_vif [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d9:0e,bridge_name='br-int',has_traffic_filtering=True,id=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15,network=Network(dc4862ee-e9f8-47d9-848f-baebec117c85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608af9e6-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.144 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.145 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.145 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.151 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.152 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap608af9e6-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.153 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap608af9e6-ae, col_values=(('external_ids', {'iface-id': '608af9e6-ae8b-4d49-9d2c-b8973ad0fe15', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:d9:0e', 'vm-uuid': '7e357b28-4198-49a1-b10b-bbd4274180d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:05 compute-0 NetworkManager[49108]: <info>  [1769846225.1598] manager: (tap608af9e6-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.163 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.165 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.166 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.168 247708 INFO os_vif [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d9:0e,bridge_name='br-int',has_traffic_filtering=True,id=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15,network=Network(dc4862ee-e9f8-47d9-848f-baebec117c85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608af9e6-ae')
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.169 247708 DEBUG nova.virt.libvirt.vif [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1201502254',display_name='tempest-ServersTestMultiNic-server-1201502254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1201502254',id=84,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-nqtchzo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:42Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=7e357b28-4198-49a1-b10b-bbd4274180d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.169 247708 DEBUG nova.network.os_vif_util [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.170 247708 DEBUG nova.network.os_vif_util [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=52146f19-c519-41e7-a10c-703af57ee396,network=Network(20c6d006-2998-430a-ace9-5819b781e4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52146f19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.170 247708 DEBUG os_vif [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=52146f19-c519-41e7-a10c-703af57ee396,network=Network(20c6d006-2998-430a-ace9-5819b781e4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52146f19-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.171 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.171 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.171 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.174 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.174 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52146f19-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.174 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52146f19-c5, col_values=(('external_ids', {'iface-id': '52146f19-c519-41e7-a10c-703af57ee396', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:c0:91', 'vm-uuid': '7e357b28-4198-49a1-b10b-bbd4274180d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.175 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-0 NetworkManager[49108]: <info>  [1769846225.1764] manager: (tap52146f19-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.177 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.181 247708 INFO os_vif [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=52146f19-c519-41e7-a10c-703af57ee396,network=Network(20c6d006-2998-430a-ace9-5819b781e4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52146f19-c5')
Jan 31 07:57:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 443 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 221 op/s
Jan 31 07:57:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.528 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.529 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.530 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No VIF found with MAC fa:16:3e:c5:d9:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.530 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] No VIF found with MAC fa:16:3e:1e:c0:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.531 247708 INFO nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Using config drive
Jan 31 07:57:05 compute-0 nova_compute[247704]: 2026-01-31 07:57:05.580 247708 DEBUG nova.storage.rbd_utils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image 7e357b28-4198-49a1-b10b-bbd4274180d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:57:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4031796697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:06 compute-0 ceph-mon[74496]: pgmap v1795: 305 pgs: 305 active+clean; 443 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 221 op/s
Jan 31 07:57:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:06.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.190 247708 INFO nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Creating config drive at /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0/disk.config
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.195 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_g1ug_xo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 448 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Jan 31 07:57:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.334 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_g1ug_xo" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.368 247708 DEBUG nova.storage.rbd_utils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] rbd image 7e357b28-4198-49a1-b10b-bbd4274180d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.372 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0/disk.config 7e357b28-4198-49a1-b10b-bbd4274180d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.537 247708 DEBUG oslo_concurrency.processutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0/disk.config 7e357b28-4198-49a1-b10b-bbd4274180d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.537 247708 INFO nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Deleting local config drive /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0/disk.config because it was imported into RBD.
Jan 31 07:57:07 compute-0 NetworkManager[49108]: <info>  [1769846227.5831] manager: (tap608af9e6-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Jan 31 07:57:07 compute-0 kernel: tap608af9e6-ae: entered promiscuous mode
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00299|binding|INFO|Claiming lport 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 for this chassis.
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00300|binding|INFO|608af9e6-ae8b-4d49-9d2c-b8973ad0fe15: Claiming fa:16:3e:c5:d9:0e 10.100.0.186
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.587 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:07 compute-0 NetworkManager[49108]: <info>  [1769846227.5984] manager: (tap52146f19-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Jan 31 07:57:07 compute-0 kernel: tap52146f19-c5: entered promiscuous mode
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.605 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00301|if_status|INFO|Dropped 14 log messages in last 286 seconds (most recently, 286 seconds ago) due to excessive rate
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00302|if_status|INFO|Not updating pb chassis for 52146f19-c519-41e7-a10c-703af57ee396 now as sb is readonly
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.607 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.610 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:07 compute-0 systemd-udevd[300503]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:57:07 compute-0 systemd-udevd[300502]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:57:07 compute-0 systemd-machined[214448]: New machine qemu-33-instance-00000054.
Jan 31 07:57:07 compute-0 NetworkManager[49108]: <info>  [1769846227.6344] device (tap52146f19-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:57:07 compute-0 NetworkManager[49108]: <info>  [1769846227.6361] device (tap608af9e6-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:57:07 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-00000054.
Jan 31 07:57:07 compute-0 NetworkManager[49108]: <info>  [1769846227.6383] device (tap52146f19-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:57:07 compute-0 NetworkManager[49108]: <info>  [1769846227.6389] device (tap608af9e6-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00303|binding|INFO|Claiming lport 52146f19-c519-41e7-a10c-703af57ee396 for this chassis.
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00304|binding|INFO|52146f19-c519-41e7-a10c-703af57ee396: Claiming fa:16:3e:1e:c0:91 10.100.1.206
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00305|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00306|binding|INFO|Setting lport 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 ovn-installed in OVS
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00307|binding|INFO|Setting lport 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 up in Southbound
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.757 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:d9:0e 10.100.0.186'], port_security=['fa:16:3e:c5:d9:0e 10.100.0.186'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.186/24', 'neutron:device_id': '7e357b28-4198-49a1-b10b-bbd4274180d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4862ee-e9f8-47d9-848f-baebec117c85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a25596ef-adef-4729-8060-b60652a56c6c, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.759 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 in datapath dc4862ee-e9f8-47d9-848f-baebec117c85 bound to our chassis
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.753 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.761 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dc4862ee-e9f8-47d9-848f-baebec117c85
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00308|binding|INFO|Setting lport 52146f19-c519-41e7-a10c-703af57ee396 ovn-installed in OVS
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.770 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.775 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[610256f7-76c7-4a5c-a5ad-3a0e8b1fe670]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.777 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdc4862ee-e1 in ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.779 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdc4862ee-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.779 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[53b407d9-e27b-40ac-9ac7-e877f99c90db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.780 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fa014b0f-9101-4beb-85f8-5aa7c1e1f9be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.789 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa19bae-a99c-4da9-92a4-2b2a302f17a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.802 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6182d762-5449-4d05-8150-a842a8931dc0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.831 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[be0c0c90-03f3-491d-a30c-0ba68cab43ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 NetworkManager[49108]: <info>  [1769846227.8366] manager: (tapdc4862ee-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.837 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa3e9b1-59e3-4897-b636-0da788961883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.863 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3b5a9d-00f8-4ac9-94dd-ef5a9d3e70fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.868 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[923c6a8f-5a97-4010-8fc6-4f9a64d7f824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 NetworkManager[49108]: <info>  [1769846227.8898] device (tapdc4862ee-e0): carrier: link connected
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.894 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1d14f98d-0702-4b8a-a3be-9ce1dfca68e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_controller[149457]: 2026-01-31T07:57:07Z|00309|binding|INFO|Setting lport 52146f19-c519-41e7-a10c-703af57ee396 up in Southbound
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.906 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:c0:91 10.100.1.206'], port_security=['fa:16:3e:1e:c0:91 10.100.1.206'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.206/24', 'neutron:device_id': '7e357b28-4198-49a1-b10b-bbd4274180d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20c6d006-2998-430a-ace9-5819b781e4b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e9b9159-08e2-4f64-86b2-1e2537a66cc1, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=52146f19-c519-41e7-a10c-703af57ee396) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.908 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ee32f65d-0689-4526-bef8-aa0bed518f62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc4862ee-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:6f:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647050, 'reachable_time': 22494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300573, 'error': None, 'target': 'ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.921 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8235d255-a450-4f04-bbc3-f564842f4dc8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:6f90'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647050, 'tstamp': 647050}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300579, 'error': None, 'target': 'ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.935 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5fee45ba-bd7d-4896-bb12-e5e63bdc7b83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc4862ee-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:6f:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647050, 'reachable_time': 22494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300580, 'error': None, 'target': 'ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:07.959 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c12889e5-e649-4de4-ac60-a0fddc9e9e1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.989 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846227.9875455, 7e357b28-4198-49a1-b10b-bbd4274180d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:57:07 compute-0 nova_compute[247704]: 2026-01-31 07:57:07.989 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] VM Started (Lifecycle Event)
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.011 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[017cde1b-00e9-4140-bc0a-5108f36efe72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.013 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc4862ee-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.013 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.013 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc4862ee-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.015 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 NetworkManager[49108]: <info>  [1769846228.0159] manager: (tapdc4862ee-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Jan 31 07:57:08 compute-0 kernel: tapdc4862ee-e0: entered promiscuous mode
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.018 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.020 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdc4862ee-e0, col_values=(('external_ids', {'iface-id': '62fe0282-584a-40f2-8900-f9915e024987'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.021 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 ovn_controller[149457]: 2026-01-31T07:57:08Z|00310|binding|INFO|Releasing lport 62fe0282-584a-40f2-8900-f9915e024987 from this chassis (sb_readonly=0)
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.022 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.025 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dc4862ee-e9f8-47d9-848f-baebec117c85.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dc4862ee-e9f8-47d9-848f-baebec117c85.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.026 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[646e048c-e904-4f6d-abf1-a53403e074e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.027 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-dc4862ee-e9f8-47d9-848f-baebec117c85
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/dc4862ee-e9f8-47d9-848f-baebec117c85.pid.haproxy
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID dc4862ee-e9f8-47d9-848f-baebec117c85
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.027 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.028 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85', 'env', 'PROCESS_TAG=haproxy-dc4862ee-e9f8-47d9-848f-baebec117c85', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dc4862ee-e9f8-47d9-848f-baebec117c85.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.032 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.036 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846227.9886763, 7e357b28-4198-49a1-b10b-bbd4274180d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.036 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] VM Paused (Lifecycle Event)
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.162 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.166 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.283 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:57:08 compute-0 podman[300614]: 2026-01-31 07:57:08.3728041 +0000 UTC m=+0.050788309 container create 5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:57:08 compute-0 systemd[1]: Started libpod-conmon-5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a.scope.
Jan 31 07:57:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:08 compute-0 podman[300614]: 2026-01-31 07:57:08.344428458 +0000 UTC m=+0.022412687 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:57:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b73b013a4447a51fc81ca9163e3c967b41d1ff56170a6366a58a4cb179b9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:08 compute-0 podman[300614]: 2026-01-31 07:57:08.454973571 +0000 UTC m=+0.132957800 container init 5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:57:08 compute-0 podman[300614]: 2026-01-31 07:57:08.460697971 +0000 UTC m=+0.138682180 container start 5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 07:57:08 compute-0 neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85[300629]: [NOTICE]   (300633) : New worker (300635) forked
Jan 31 07:57:08 compute-0 neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85[300629]: [NOTICE]   (300633) : Loading success.
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.518 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 52146f19-c519-41e7-a10c-703af57ee396 in datapath 20c6d006-2998-430a-ace9-5819b781e4b4 unbound from our chassis
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.521 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20c6d006-2998-430a-ace9-5819b781e4b4
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.529 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fb5cc3-df47-479b-89b1-1864ceba542d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.530 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20c6d006-21 in ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.533 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20c6d006-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.533 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0816c3cb-8361-454f-b9b0-0a5e05b8c45d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.534 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0bab74-8174-4658-8ba5-ba23cb1f5cca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.542 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7c8f70-f38e-4071-8e42-5a1f41c79a81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.558 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[03d3aaae-e369-4739-99e8-9193b92ad94b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.586 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c38b50-09ae-44c0-9323-259fb45e8fc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.592 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cf28aaf9-d586-4891-96bf-add859256349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 NetworkManager[49108]: <info>  [1769846228.5934] manager: (tap20c6d006-20): new Veth device (/org/freedesktop/NetworkManager/Devices/152)
Jan 31 07:57:08 compute-0 systemd-udevd[300562]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.629 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[88d6e992-4a3a-4a11-8b32-78196ac8619b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.636 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[e901380a-7451-4abe-91d2-5cd496528341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ceph-mon[74496]: pgmap v1796: 305 pgs: 305 active+clean; 448 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Jan 31 07:57:08 compute-0 NetworkManager[49108]: <info>  [1769846228.6656] device (tap20c6d006-20): carrier: link connected
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.672 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[044378c5-8ff7-4fd8-87e9-66ee9afd2c36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.692 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb788d9-c815-40f4-9f76-b3d39007003c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20c6d006-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ed:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647128, 'reachable_time': 30194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300654, 'error': None, 'target': 'ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.708 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7afa5ae5-68c0-4fc4-b062-7a0410bb104d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:ed3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647128, 'tstamp': 647128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300655, 'error': None, 'target': 'ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.725 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9957df7f-df79-4880-8915-70a933b6adfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20c6d006-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ed:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647128, 'reachable_time': 30194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300656, 'error': None, 'target': 'ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.731 247708 DEBUG nova.compute.manager [req-84db4ab6-49da-4092-b320-64282155acb2 req-59a61695-ed88-4d10-bf0b-bfe73e2d6429 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.732 247708 DEBUG oslo_concurrency.lockutils [req-84db4ab6-49da-4092-b320-64282155acb2 req-59a61695-ed88-4d10-bf0b-bfe73e2d6429 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.733 247708 DEBUG oslo_concurrency.lockutils [req-84db4ab6-49da-4092-b320-64282155acb2 req-59a61695-ed88-4d10-bf0b-bfe73e2d6429 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.733 247708 DEBUG oslo_concurrency.lockutils [req-84db4ab6-49da-4092-b320-64282155acb2 req-59a61695-ed88-4d10-bf0b-bfe73e2d6429 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.734 247708 DEBUG nova.compute.manager [req-84db4ab6-49da-4092-b320-64282155acb2 req-59a61695-ed88-4d10-bf0b-bfe73e2d6429 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Processing event network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.749 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[be8daa35-9f7b-4866-b6b2-939646d50797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.784 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.806 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7607aa-0931-4808-9efb-fee862597c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.807 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20c6d006-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.808 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.808 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20c6d006-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.810 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 kernel: tap20c6d006-20: entered promiscuous mode
Jan 31 07:57:08 compute-0 NetworkManager[49108]: <info>  [1769846228.8120] manager: (tap20c6d006-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.817 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20c6d006-20, col_values=(('external_ids', {'iface-id': '33aba96a-0540-49d1-a5d0-fa468f831e95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.819 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 ovn_controller[149457]: 2026-01-31T07:57:08Z|00311|binding|INFO|Releasing lport 33aba96a-0540-49d1-a5d0-fa468f831e95 from this chassis (sb_readonly=0)
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.820 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20c6d006-2998-430a-ace9-5819b781e4b4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20c6d006-2998-430a-ace9-5819b781e4b4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.821 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a1814d29-a231-4bc4-8a81-951e83a62d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.822 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-20c6d006-2998-430a-ace9-5819b781e4b4
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/20c6d006-2998-430a-ace9-5819b781e4b4.pid.haproxy
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 20c6d006-2998-430a-ace9-5819b781e4b4
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:57:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:08.822 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4', 'env', 'PROCESS_TAG=haproxy-20c6d006-2998-430a-ace9-5819b781e4b4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20c6d006-2998-430a-ace9-5819b781e4b4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:57:08 compute-0 nova_compute[247704]: 2026-01-31 07:57:08.824 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:08.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.160 247708 DEBUG nova.network.neutron [req-abe99814-e9b5-46c8-8490-dcb572258c71 req-fe0351d3-d9d5-423a-acdf-34f67f63af2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Updated VIF entry in instance network info cache for port 52146f19-c519-41e7-a10c-703af57ee396. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.162 247708 DEBUG nova.network.neutron [req-abe99814-e9b5-46c8-8490-dcb572258c71 req-fe0351d3-d9d5-423a-acdf-34f67f63af2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Updating instance_info_cache with network_info: [{"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:57:09 compute-0 podman[300688]: 2026-01-31 07:57:09.16954888 +0000 UTC m=+0.061883048 container create e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:57:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 477 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 245 op/s
Jan 31 07:57:09 compute-0 systemd[1]: Started libpod-conmon-e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043.scope.
Jan 31 07:57:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:09.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:09 compute-0 podman[300688]: 2026-01-31 07:57:09.13342032 +0000 UTC m=+0.025754538 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:57:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea91a2575b3869ccd08d7e9ab4144f6f7a25bd8dddaf8cb18ebb38dd3997fc2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:09 compute-0 podman[300688]: 2026-01-31 07:57:09.25081753 +0000 UTC m=+0.143151808 container init e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:57:09 compute-0 podman[300688]: 2026-01-31 07:57:09.256608811 +0000 UTC m=+0.148943019 container start e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 07:57:09 compute-0 neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4[300704]: [NOTICE]   (300708) : New worker (300710) forked
Jan 31 07:57:09 compute-0 neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4[300704]: [NOTICE]   (300708) : Loading success.
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.293 247708 DEBUG oslo_concurrency.lockutils [req-abe99814-e9b5-46c8-8490-dcb572258c71 req-fe0351d3-d9d5-423a-acdf-34f67f63af2b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-7e357b28-4198-49a1-b10b-bbd4274180d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.400 247708 DEBUG nova.compute.manager [req-04e76124-e8d1-4960-ac05-be95d7385502 req-ef50b74d-d2d2-4602-bf8d-d0baa8f2fae5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.401 247708 DEBUG oslo_concurrency.lockutils [req-04e76124-e8d1-4960-ac05-be95d7385502 req-ef50b74d-d2d2-4602-bf8d-d0baa8f2fae5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.401 247708 DEBUG oslo_concurrency.lockutils [req-04e76124-e8d1-4960-ac05-be95d7385502 req-ef50b74d-d2d2-4602-bf8d-d0baa8f2fae5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.402 247708 DEBUG oslo_concurrency.lockutils [req-04e76124-e8d1-4960-ac05-be95d7385502 req-ef50b74d-d2d2-4602-bf8d-d0baa8f2fae5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.402 247708 DEBUG nova.compute.manager [req-04e76124-e8d1-4960-ac05-be95d7385502 req-ef50b74d-d2d2-4602-bf8d-d0baa8f2fae5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Processing event network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.403 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.409 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846229.4088004, 7e357b28-4198-49a1-b10b-bbd4274180d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.409 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] VM Resumed (Lifecycle Event)
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.413 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.417 247708 INFO nova.virt.libvirt.driver [-] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Instance spawned successfully.
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.417 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:57:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.476 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.484 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.492 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.493 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.494 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.494 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.495 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.495 247708 DEBUG nova.virt.libvirt.driver [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.664 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:57:09 compute-0 ceph-mon[74496]: pgmap v1797: 305 pgs: 305 active+clean; 477 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 245 op/s
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.945 247708 INFO nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Took 27.75 seconds to spawn the instance on the hypervisor.
Jan 31 07:57:09 compute-0 nova_compute[247704]: 2026-01-31 07:57:09.946 247708 DEBUG nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:10 compute-0 nova_compute[247704]: 2026-01-31 07:57:10.177 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:10 compute-0 nova_compute[247704]: 2026-01-31 07:57:10.353 247708 INFO nova.compute.manager [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Took 29.63 seconds to build instance.
Jan 31 07:57:10 compute-0 nova_compute[247704]: 2026-01-31 07:57:10.639 247708 DEBUG oslo_concurrency.lockutils [None req-2dd06841-6bd5-456c-b05d-6fa2a627b1ef 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:10.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:11.166 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:11.166 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:11.167 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 456 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.4 MiB/s wr, 213 op/s
Jan 31 07:57:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:11.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:11 compute-0 nova_compute[247704]: 2026-01-31 07:57:11.735 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:11 compute-0 podman[300721]: 2026-01-31 07:57:11.901186098 +0000 UTC m=+0.064811831 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.297 247708 DEBUG nova.compute.manager [req-d6d28638-d9b0-4b30-ba21-fd7325be3b43 req-7e5002d3-df8e-4149-a1ec-c9d8b8d3774d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.297 247708 DEBUG oslo_concurrency.lockutils [req-d6d28638-d9b0-4b30-ba21-fd7325be3b43 req-7e5002d3-df8e-4149-a1ec-c9d8b8d3774d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.298 247708 DEBUG oslo_concurrency.lockutils [req-d6d28638-d9b0-4b30-ba21-fd7325be3b43 req-7e5002d3-df8e-4149-a1ec-c9d8b8d3774d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.298 247708 DEBUG oslo_concurrency.lockutils [req-d6d28638-d9b0-4b30-ba21-fd7325be3b43 req-7e5002d3-df8e-4149-a1ec-c9d8b8d3774d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.298 247708 DEBUG nova.compute.manager [req-d6d28638-d9b0-4b30-ba21-fd7325be3b43 req-7e5002d3-df8e-4149-a1ec-c9d8b8d3774d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] No waiting events found dispatching network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.299 247708 WARNING nova.compute.manager [req-d6d28638-d9b0-4b30-ba21-fd7325be3b43 req-7e5002d3-df8e-4149-a1ec-c9d8b8d3774d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received unexpected event network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 for instance with vm_state active and task_state None.
Jan 31 07:57:12 compute-0 ceph-mon[74496]: pgmap v1798: 305 pgs: 305 active+clean; 456 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.4 MiB/s wr, 213 op/s
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.566 247708 DEBUG nova.compute.manager [req-0de5f24a-be29-4cbb-802d-36c6d95c2d5d req-db0a5e1e-1dd7-4038-9d85-3f68f4c7a256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.567 247708 DEBUG oslo_concurrency.lockutils [req-0de5f24a-be29-4cbb-802d-36c6d95c2d5d req-db0a5e1e-1dd7-4038-9d85-3f68f4c7a256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.567 247708 DEBUG oslo_concurrency.lockutils [req-0de5f24a-be29-4cbb-802d-36c6d95c2d5d req-db0a5e1e-1dd7-4038-9d85-3f68f4c7a256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.568 247708 DEBUG oslo_concurrency.lockutils [req-0de5f24a-be29-4cbb-802d-36c6d95c2d5d req-db0a5e1e-1dd7-4038-9d85-3f68f4c7a256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.568 247708 DEBUG nova.compute.manager [req-0de5f24a-be29-4cbb-802d-36c6d95c2d5d req-db0a5e1e-1dd7-4038-9d85-3f68f4c7a256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] No waiting events found dispatching network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:12 compute-0 nova_compute[247704]: 2026-01-31 07:57:12.569 247708 WARNING nova.compute.manager [req-0de5f24a-be29-4cbb-802d-36c6d95c2d5d req-db0a5e1e-1dd7-4038-9d85-3f68f4c7a256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received unexpected event network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 for instance with vm_state active and task_state None.
Jan 31 07:57:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:12.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 435 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.6 MiB/s wr, 233 op/s
Jan 31 07:57:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:13.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:13 compute-0 nova_compute[247704]: 2026-01-31 07:57:13.788 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:14 compute-0 ceph-mon[74496]: pgmap v1799: 305 pgs: 305 active+clean; 435 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.6 MiB/s wr, 233 op/s
Jan 31 07:57:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:14 compute-0 nova_compute[247704]: 2026-01-31 07:57:14.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:14.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:15 compute-0 nova_compute[247704]: 2026-01-31 07:57:15.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 405 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Jan 31 07:57:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:15 compute-0 nova_compute[247704]: 2026-01-31 07:57:15.447 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:15 compute-0 nova_compute[247704]: 2026-01-31 07:57:15.448 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:15 compute-0 nova_compute[247704]: 2026-01-31 07:57:15.448 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:15 compute-0 nova_compute[247704]: 2026-01-31 07:57:15.448 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:57:15 compute-0 nova_compute[247704]: 2026-01-31 07:57:15.449 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:57:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/747376279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:15 compute-0 nova_compute[247704]: 2026-01-31 07:57:15.887 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:16 compute-0 ceph-mon[74496]: pgmap v1800: 305 pgs: 305 active+clean; 405 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 245 op/s
Jan 31 07:57:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/747376279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/36840789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:16.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 405 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 174 op/s
Jan 31 07:57:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:17.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:17 compute-0 nova_compute[247704]: 2026-01-31 07:57:17.483 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:57:17 compute-0 nova_compute[247704]: 2026-01-31 07:57:17.483 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:57:17 compute-0 nova_compute[247704]: 2026-01-31 07:57:17.487 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:57:17 compute-0 nova_compute[247704]: 2026-01-31 07:57:17.488 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:57:17 compute-0 nova_compute[247704]: 2026-01-31 07:57:17.645 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:57:17 compute-0 nova_compute[247704]: 2026-01-31 07:57:17.646 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4093MB free_disk=20.78543472290039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:57:17 compute-0 nova_compute[247704]: 2026-01-31 07:57:17.646 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:17 compute-0 nova_compute[247704]: 2026-01-31 07:57:17.646 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.080 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 4c02576e-848d-4193-88a4-239a9e86e206 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.081 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 7e357b28-4198-49a1-b10b-bbd4274180d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.081 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.081 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:57:18 compute-0 sudo[300767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:18 compute-0 sudo[300767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:18 compute-0 sudo[300767]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.127 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.127 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.128 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.128 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.128 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.130 247708 INFO nova.compute.manager [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Terminating instance
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.131 247708 DEBUG nova.compute.manager [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:57:18 compute-0 sudo[300792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:18 compute-0 sudo[300792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:18 compute-0 kernel: tap608af9e6-ae (unregistering): left promiscuous mode
Jan 31 07:57:18 compute-0 sudo[300792]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:18 compute-0 NetworkManager[49108]: <info>  [1769846238.1734] device (tap608af9e6-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:57:18 compute-0 ovn_controller[149457]: 2026-01-31T07:57:18Z|00312|binding|INFO|Releasing lport 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 from this chassis (sb_readonly=0)
Jan 31 07:57:18 compute-0 ovn_controller[149457]: 2026-01-31T07:57:18Z|00313|binding|INFO|Setting lport 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 down in Southbound
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.177 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 ovn_controller[149457]: 2026-01-31T07:57:18Z|00314|binding|INFO|Removing iface tap608af9e6-ae ovn-installed in OVS
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.179 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.188 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 kernel: tap52146f19-c5 (unregistering): left promiscuous mode
Jan 31 07:57:18 compute-0 NetworkManager[49108]: <info>  [1769846238.2069] device (tap52146f19-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.219 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 ovn_controller[149457]: 2026-01-31T07:57:18Z|00315|binding|INFO|Releasing lport 52146f19-c519-41e7-a10c-703af57ee396 from this chassis (sb_readonly=1)
Jan 31 07:57:18 compute-0 ovn_controller[149457]: 2026-01-31T07:57:18Z|00316|binding|INFO|Removing iface tap52146f19-c5 ovn-installed in OVS
Jan 31 07:57:18 compute-0 ovn_controller[149457]: 2026-01-31T07:57:18Z|00317|if_status|INFO|Not setting lport 52146f19-c519-41e7-a10c-703af57ee396 down as sb is readonly
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.220 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.228 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 ovn_controller[149457]: 2026-01-31T07:57:18Z|00318|binding|INFO|Setting lport 52146f19-c519-41e7-a10c-703af57ee396 down in Southbound
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.280 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:d9:0e 10.100.0.186'], port_security=['fa:16:3e:c5:d9:0e 10.100.0.186'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.186/24', 'neutron:device_id': '7e357b28-4198-49a1-b10b-bbd4274180d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4862ee-e9f8-47d9-848f-baebec117c85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a25596ef-adef-4729-8060-b60652a56c6c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:18 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000054.scope: Deactivated successfully.
Jan 31 07:57:18 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000054.scope: Consumed 9.301s CPU time.
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.281 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 in datapath dc4862ee-e9f8-47d9-848f-baebec117c85 unbound from our chassis
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.283 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc4862ee-e9f8-47d9-848f-baebec117c85, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:57:18 compute-0 systemd-machined[214448]: Machine qemu-33-instance-00000054 terminated.
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.284 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4dce7b88-dc43-40a7-853d-de9a36ef7b2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.285 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85 namespace which is not needed anymore
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.321 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:18 compute-0 NetworkManager[49108]: <info>  [1769846238.3634] manager: (tap52146f19-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/154)
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.388 247708 INFO nova.virt.libvirt.driver [-] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Instance destroyed successfully.
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.389 247708 DEBUG nova.objects.instance [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lazy-loading 'resources' on Instance uuid 7e357b28-4198-49a1-b10b-bbd4274180d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.408 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:c0:91 10.100.1.206'], port_security=['fa:16:3e:1e:c0:91 10.100.1.206'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.206/24', 'neutron:device_id': '7e357b28-4198-49a1-b10b-bbd4274180d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20c6d006-2998-430a-ace9-5819b781e4b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56ce08a86486427fbebbfbd075cdb404', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dc2cd802-0b4d-4cd4-bf24-edbd81854edb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e9b9159-08e2-4f64-86b2-1e2537a66cc1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=52146f19-c519-41e7-a10c-703af57ee396) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85[300629]: [NOTICE]   (300633) : haproxy version is 2.8.14-c23fe91
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85[300629]: [NOTICE]   (300633) : path to executable is /usr/sbin/haproxy
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85[300629]: [WARNING]  (300633) : Exiting Master process...
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85[300629]: [ALERT]    (300633) : Current worker (300635) exited with code 143 (Terminated)
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85[300629]: [WARNING]  (300633) : All workers exited. Exiting... (0)
Jan 31 07:57:18 compute-0 systemd[1]: libpod-5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a.scope: Deactivated successfully.
Jan 31 07:57:18 compute-0 podman[300854]: 2026-01-31 07:57:18.429000598 +0000 UTC m=+0.054849247 container died 5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 07:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a-userdata-shm.mount: Deactivated successfully.
Jan 31 07:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-982b73b013a4447a51fc81ca9163e3c967b41d1ff56170a6366a58a4cb179b9c-merged.mount: Deactivated successfully.
Jan 31 07:57:18 compute-0 ceph-mon[74496]: pgmap v1801: 305 pgs: 305 active+clean; 405 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 174 op/s
Jan 31 07:57:18 compute-0 podman[300854]: 2026-01-31 07:57:18.478526674 +0000 UTC m=+0.104375323 container cleanup 5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:18 compute-0 systemd[1]: libpod-conmon-5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a.scope: Deactivated successfully.
Jan 31 07:57:18 compute-0 podman[300918]: 2026-01-31 07:57:18.555439788 +0000 UTC m=+0.059465540 container remove 5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.560 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3f6467-e477-4b1c-afbc-d61a7006e0f4]: (4, ('Sat Jan 31 07:57:18 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85 (5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a)\n5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a\nSat Jan 31 07:57:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85 (5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a)\n5d294f447fe4b8c7ad2085cb521fd23101cdcc9ff0d5ef6db4bfe970d686367a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.562 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed914b2-c60e-433a-bbda-4d60554c1500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.563 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc4862ee-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.565 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 kernel: tapdc4862ee-e0: left promiscuous mode
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.631 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.635 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[08cf1726-8eaa-4cc0-85e0-3904dcf66e22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.649 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9c0391-2ff8-42f2-af1c-580e7adf820c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.651 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1107b8e8-f623-4149-b6d9-d0f1d93582bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.669 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d08c35a3-5f34-4969-ba76-7385419192ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647044, 'reachable_time': 21525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300941, 'error': None, 'target': 'ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.672 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dc4862ee-e9f8-47d9-848f-baebec117c85 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.672 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[f376a2d1-8b7e-4fae-adec-fb138c501c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.673 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 52146f19-c519-41e7-a10c-703af57ee396 in datapath 20c6d006-2998-430a-ace9-5819b781e4b4 unbound from our chassis
Jan 31 07:57:18 compute-0 systemd[1]: run-netns-ovnmeta\x2ddc4862ee\x2de9f8\x2d47d9\x2d848f\x2dbaebec117c85.mount: Deactivated successfully.
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.675 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20c6d006-2998-430a-ace9-5819b781e4b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.676 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d10e87bd-4bad-46d6-8723-646c0a396c56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.677 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4 namespace which is not needed anymore
Jan 31 07:57:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:57:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650713589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.758 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.763 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4[300704]: [NOTICE]   (300708) : haproxy version is 2.8.14-c23fe91
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4[300704]: [NOTICE]   (300708) : path to executable is /usr/sbin/haproxy
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4[300704]: [WARNING]  (300708) : Exiting Master process...
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4[300704]: [ALERT]    (300708) : Current worker (300710) exited with code 143 (Terminated)
Jan 31 07:57:18 compute-0 neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4[300704]: [WARNING]  (300708) : All workers exited. Exiting... (0)
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 systemd[1]: libpod-e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043.scope: Deactivated successfully.
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.795 247708 DEBUG nova.virt.libvirt.vif [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1201502254',display_name='tempest-ServersTestMultiNic-server-1201502254',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1201502254',id=84,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:57:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-nqtchzo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:57:10Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=7e357b28-4198-49a1-b10b-bbd4274180d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.796 247708 DEBUG nova.network.os_vif_util [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "address": "fa:16:3e:c5:d9:0e", "network": {"id": "dc4862ee-e9f8-47d9-848f-baebec117c85", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-936777317", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.186", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap608af9e6-ae", "ovs_interfaceid": "608af9e6-ae8b-4d49-9d2c-b8973ad0fe15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.797 247708 DEBUG nova.network.os_vif_util [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d9:0e,bridge_name='br-int',has_traffic_filtering=True,id=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15,network=Network(dc4862ee-e9f8-47d9-848f-baebec117c85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608af9e6-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.797 247708 DEBUG os_vif [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d9:0e,bridge_name='br-int',has_traffic_filtering=True,id=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15,network=Network(dc4862ee-e9f8-47d9-848f-baebec117c85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608af9e6-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.800 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.800 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap608af9e6-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:18 compute-0 podman[300961]: 2026-01-31 07:57:18.801218746 +0000 UTC m=+0.050688756 container died e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.802 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.804 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.805 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.808 247708 INFO os_vif [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:d9:0e,bridge_name='br-int',has_traffic_filtering=True,id=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15,network=Network(dc4862ee-e9f8-47d9-848f-baebec117c85),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap608af9e6-ae')
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.809 247708 DEBUG nova.virt.libvirt.vif [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1201502254',display_name='tempest-ServersTestMultiNic-server-1201502254',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1201502254',id=84,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:57:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56ce08a86486427fbebbfbd075cdb404',ramdisk_id='',reservation_id='r-nqtchzo4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-539084191',owner_user_name='tempest-ServersTestMultiNic-539084191-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:57:10Z,user_data=None,user_id='56eecf4373334b18a454186e0c54e924',uuid=7e357b28-4198-49a1-b10b-bbd4274180d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.809 247708 DEBUG nova.network.os_vif_util [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converting VIF {"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.810 247708 DEBUG nova.network.os_vif_util [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=52146f19-c519-41e7-a10c-703af57ee396,network=Network(20c6d006-2998-430a-ace9-5819b781e4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52146f19-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.810 247708 DEBUG os_vif [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=52146f19-c519-41e7-a10c-703af57ee396,network=Network(20c6d006-2998-430a-ace9-5819b781e4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52146f19-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.811 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.811 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52146f19-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.813 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.814 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.816 247708 INFO os_vif [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=52146f19-c519-41e7-a10c-703af57ee396,network=Network(20c6d006-2998-430a-ace9-5819b781e4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52146f19-c5')
Jan 31 07:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043-userdata-shm.mount: Deactivated successfully.
Jan 31 07:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dea91a2575b3869ccd08d7e9ab4144f6f7a25bd8dddaf8cb18ebb38dd3997fc2-merged.mount: Deactivated successfully.
Jan 31 07:57:18 compute-0 podman[300961]: 2026-01-31 07:57:18.846380316 +0000 UTC m=+0.095850326 container cleanup e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:57:18 compute-0 systemd[1]: libpod-conmon-e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043.scope: Deactivated successfully.
Jan 31 07:57:18 compute-0 podman[301007]: 2026-01-31 07:57:18.914442244 +0000 UTC m=+0.045382677 container remove e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.919 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b84df529-af0d-4f8c-b074-537e71d9aa3e]: (4, ('Sat Jan 31 07:57:18 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4 (e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043)\ne744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043\nSat Jan 31 07:57:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4 (e744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043)\ne744ec44294b553c5c5d048309f947104c2c4b019372ad4e919433d7dfb0a043\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.921 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[79174d9b-54a7-4d55-9148-39224474cb3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.922 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20c6d006-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.924 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 kernel: tap20c6d006-20: left promiscuous mode
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.936 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.941 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d59f4a5f-6a38-4535-b2ac-6c24d9cd18dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.953 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[24d2f952-45cb-47fc-b902-f587f2da3554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.955 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[35881940-5534-4e9e-9385-4ad829dcf3ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 nova_compute[247704]: 2026-01-31 07:57:18.962 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.969 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3a39669d-ae3d-4ff9-ae58-1436611e8094]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647120, 'reachable_time': 32970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301025, 'error': None, 'target': 'ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.972 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20c6d006-2998-430a-ace9-5819b781e4b4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:57:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:18.972 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[81a901a4-ba39-45ab-a53f-81086bcacf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:57:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:18.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:19 compute-0 nova_compute[247704]: 2026-01-31 07:57:19.151 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:57:19 compute-0 nova_compute[247704]: 2026-01-31 07:57:19.151 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 405 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 31 07:57:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:19.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:19 compute-0 nova_compute[247704]: 2026-01-31 07:57:19.380 247708 INFO nova.virt.libvirt.driver [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Deleting instance files /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0_del
Jan 31 07:57:19 compute-0 nova_compute[247704]: 2026-01-31 07:57:19.381 247708 INFO nova.virt.libvirt.driver [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Deletion of /var/lib/nova/instances/7e357b28-4198-49a1-b10b-bbd4274180d0_del complete
Jan 31 07:57:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d20c6d006\x2d2998\x2d430a\x2dace9\x2d5819b781e4b4.mount: Deactivated successfully.
Jan 31 07:57:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3650713589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/664972945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:19 compute-0 nova_compute[247704]: 2026-01-31 07:57:19.920 247708 INFO nova.compute.manager [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Took 1.79 seconds to destroy the instance on the hypervisor.
Jan 31 07:57:19 compute-0 nova_compute[247704]: 2026-01-31 07:57:19.921 247708 DEBUG oslo.service.loopingcall [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:57:19 compute-0 nova_compute[247704]: 2026-01-31 07:57:19.921 247708 DEBUG nova.compute.manager [-] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:57:19 compute-0 nova_compute[247704]: 2026-01-31 07:57:19.921 247708 DEBUG nova.network.neutron [-] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:57:20
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'images', 'volumes']
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:57:20 compute-0 nova_compute[247704]: 2026-01-31 07:57:20.146 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:20 compute-0 nova_compute[247704]: 2026-01-31 07:57:20.146 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:20 compute-0 nova_compute[247704]: 2026-01-31 07:57:20.147 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:57:20 compute-0 nova_compute[247704]: 2026-01-31 07:57:20.147 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:57:20 compute-0 nova_compute[247704]: 2026-01-31 07:57:20.495 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:57:20 compute-0 ceph-mon[74496]: pgmap v1802: 305 pgs: 305 active+clean; 405 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 31 07:57:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:20.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:21 compute-0 nova_compute[247704]: 2026-01-31 07:57:21.204 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:57:21 compute-0 nova_compute[247704]: 2026-01-31 07:57:21.204 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:57:21 compute-0 nova_compute[247704]: 2026-01-31 07:57:21.205 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:57:21 compute-0 nova_compute[247704]: 2026-01-31 07:57:21.205 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4c02576e-848d-4193-88a4-239a9e86e206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 397 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 362 KiB/s wr, 128 op/s
Jan 31 07:57:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:21.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:21 compute-0 ceph-mon[74496]: pgmap v1803: 305 pgs: 305 active+clean; 397 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 362 KiB/s wr, 128 op/s
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.366 247708 DEBUG nova.compute.manager [req-6460959d-9cd3-4240-ad03-112a3812ed23 req-2e9cbbab-e71e-4a90-96f1-f5fc9295081e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-unplugged-52146f19-c519-41e7-a10c-703af57ee396 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.367 247708 DEBUG oslo_concurrency.lockutils [req-6460959d-9cd3-4240-ad03-112a3812ed23 req-2e9cbbab-e71e-4a90-96f1-f5fc9295081e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.367 247708 DEBUG oslo_concurrency.lockutils [req-6460959d-9cd3-4240-ad03-112a3812ed23 req-2e9cbbab-e71e-4a90-96f1-f5fc9295081e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.368 247708 DEBUG oslo_concurrency.lockutils [req-6460959d-9cd3-4240-ad03-112a3812ed23 req-2e9cbbab-e71e-4a90-96f1-f5fc9295081e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.368 247708 DEBUG nova.compute.manager [req-6460959d-9cd3-4240-ad03-112a3812ed23 req-2e9cbbab-e71e-4a90-96f1-f5fc9295081e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] No waiting events found dispatching network-vif-unplugged-52146f19-c519-41e7-a10c-703af57ee396 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.368 247708 DEBUG nova.compute.manager [req-6460959d-9cd3-4240-ad03-112a3812ed23 req-2e9cbbab-e71e-4a90-96f1-f5fc9295081e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-unplugged-52146f19-c519-41e7-a10c-703af57ee396 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.370 247708 DEBUG nova.compute.manager [req-5de60b26-d700-48b0-824d-a1f5fb326f0a req-2dea8175-2042-445e-a90a-c39a03345d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-unplugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.370 247708 DEBUG oslo_concurrency.lockutils [req-5de60b26-d700-48b0-824d-a1f5fb326f0a req-2dea8175-2042-445e-a90a-c39a03345d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.371 247708 DEBUG oslo_concurrency.lockutils [req-5de60b26-d700-48b0-824d-a1f5fb326f0a req-2dea8175-2042-445e-a90a-c39a03345d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.371 247708 DEBUG oslo_concurrency.lockutils [req-5de60b26-d700-48b0-824d-a1f5fb326f0a req-2dea8175-2042-445e-a90a-c39a03345d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.371 247708 DEBUG nova.compute.manager [req-5de60b26-d700-48b0-824d-a1f5fb326f0a req-2dea8175-2042-445e-a90a-c39a03345d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] No waiting events found dispatching network-vif-unplugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:22 compute-0 nova_compute[247704]: 2026-01-31 07:57:22.371 247708 DEBUG nova.compute.manager [req-5de60b26-d700-48b0-824d-a1f5fb326f0a req-2dea8175-2042-445e-a90a-c39a03345d9e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-unplugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:57:22 compute-0 sudo[301029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:22 compute-0 sudo[301029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:22 compute-0 sudo[301029]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:22 compute-0 sudo[301054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:22 compute-0 sudo[301054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:22 compute-0 sudo[301054]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:22 compute-0 sudo[301080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:22 compute-0 sudo[301080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:22 compute-0 sudo[301080]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:22 compute-0 sudo[301105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:57:22 compute-0 sudo[301105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:57:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:22.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:57:23 compute-0 sudo[301105]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 387 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 63 KiB/s wr, 100 op/s
Jan 31 07:57:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:57:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:57:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:57:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:57:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:57:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:23.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f5155c1a-a1ed-4e5a-a104-f872d199df10 does not exist
Jan 31 07:57:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9c2c0f7c-53c2-42c5-8d25-6048e492860c does not exist
Jan 31 07:57:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f2201b33-521e-4da5-8e7a-4f62a8d8faa3 does not exist
Jan 31 07:57:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:57:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:57:23 compute-0 nova_compute[247704]: 2026-01-31 07:57:23.274 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:23 compute-0 nova_compute[247704]: 2026-01-31 07:57:23.274 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:57:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:57:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:57:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:57:23 compute-0 sudo[301160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:23 compute-0 sudo[301160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:23 compute-0 sudo[301160]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:57:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:57:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:23 compute-0 sudo[301185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:23 compute-0 sudo[301185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:23 compute-0 sudo[301185]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:23 compute-0 sudo[301211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:23 compute-0 sudo[301211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:23 compute-0 sudo[301211]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:23 compute-0 sudo[301236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:57:23 compute-0 sudo[301236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:23 compute-0 nova_compute[247704]: 2026-01-31 07:57:23.793 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:23 compute-0 nova_compute[247704]: 2026-01-31 07:57:23.813 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:23 compute-0 podman[301303]: 2026-01-31 07:57:23.815257658 +0000 UTC m=+0.060134606 container create 4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 07:57:23 compute-0 systemd[1]: Started libpod-conmon-4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131.scope.
Jan 31 07:57:23 compute-0 podman[301303]: 2026-01-31 07:57:23.781949807 +0000 UTC m=+0.026826825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:57:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:23 compute-0 nova_compute[247704]: 2026-01-31 07:57:23.897 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:57:23 compute-0 podman[301303]: 2026-01-31 07:57:23.913070461 +0000 UTC m=+0.157947499 container init 4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_faraday, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:57:23 compute-0 podman[301303]: 2026-01-31 07:57:23.921694241 +0000 UTC m=+0.166571199 container start 4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:57:23 compute-0 podman[301303]: 2026-01-31 07:57:23.928867576 +0000 UTC m=+0.173744604 container attach 4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 07:57:23 compute-0 modest_faraday[301319]: 167 167
Jan 31 07:57:23 compute-0 systemd[1]: libpod-4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131.scope: Deactivated successfully.
Jan 31 07:57:23 compute-0 conmon[301319]: conmon 4b25e0965c5a318a45e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131.scope/container/memory.events
Jan 31 07:57:23 compute-0 podman[301303]: 2026-01-31 07:57:23.932365322 +0000 UTC m=+0.177242290 container died 4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 07:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5bfa1a0cb7ee3ca1f9d7ad65eb797dee40f4781c9dc65b4d1ee325acee06134-merged.mount: Deactivated successfully.
Jan 31 07:57:23 compute-0 podman[301303]: 2026-01-31 07:57:23.989203796 +0000 UTC m=+0.234080734 container remove 4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_faraday, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 07:57:24 compute-0 systemd[1]: libpod-conmon-4b25e0965c5a318a45e659c9ddb02da20d1f9d47570c3a7997f0af688baf9131.scope: Deactivated successfully.
Jan 31 07:57:24 compute-0 podman[301343]: 2026-01-31 07:57:24.142763017 +0000 UTC m=+0.043765497 container create 6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_galileo, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 07:57:24 compute-0 systemd[1]: Started libpod-conmon-6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e.scope.
Jan 31 07:57:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4202732ea58346bd92956cfa893cb66c2506647c7aea5b33890afe821ff8aa8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4202732ea58346bd92956cfa893cb66c2506647c7aea5b33890afe821ff8aa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4202732ea58346bd92956cfa893cb66c2506647c7aea5b33890afe821ff8aa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4202732ea58346bd92956cfa893cb66c2506647c7aea5b33890afe821ff8aa8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4202732ea58346bd92956cfa893cb66c2506647c7aea5b33890afe821ff8aa8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:24 compute-0 podman[301343]: 2026-01-31 07:57:24.122628386 +0000 UTC m=+0.023630876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:57:24 compute-0 podman[301343]: 2026-01-31 07:57:24.231264203 +0000 UTC m=+0.132266733 container init 6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_galileo, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:24 compute-0 podman[301343]: 2026-01-31 07:57:24.238205892 +0000 UTC m=+0.139208382 container start 6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_galileo, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:57:24 compute-0 podman[301343]: 2026-01-31 07:57:24.24551006 +0000 UTC m=+0.146512540 container attach 6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.359 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.362 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.386 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.386 247708 INFO nova.compute.claims [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:57:24 compute-0 ceph-mon[74496]: pgmap v1804: 305 pgs: 305 active+clean; 387 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 63 KiB/s wr, 100 op/s
Jan 31 07:57:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:57:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:57:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:57:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3936542120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.870 247708 DEBUG nova.compute.manager [req-ea1199c2-37af-40b5-ac25-96f1f1158001 req-35730d3e-4f5d-4328-b3aa-c2fe76355a87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.871 247708 DEBUG oslo_concurrency.lockutils [req-ea1199c2-37af-40b5-ac25-96f1f1158001 req-35730d3e-4f5d-4328-b3aa-c2fe76355a87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.871 247708 DEBUG oslo_concurrency.lockutils [req-ea1199c2-37af-40b5-ac25-96f1f1158001 req-35730d3e-4f5d-4328-b3aa-c2fe76355a87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.871 247708 DEBUG oslo_concurrency.lockutils [req-ea1199c2-37af-40b5-ac25-96f1f1158001 req-35730d3e-4f5d-4328-b3aa-c2fe76355a87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.872 247708 DEBUG nova.compute.manager [req-ea1199c2-37af-40b5-ac25-96f1f1158001 req-35730d3e-4f5d-4328-b3aa-c2fe76355a87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] No waiting events found dispatching network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.872 247708 WARNING nova.compute.manager [req-ea1199c2-37af-40b5-ac25-96f1f1158001 req-35730d3e-4f5d-4328-b3aa-c2fe76355a87 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received unexpected event network-vif-plugged-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 for instance with vm_state active and task_state deleting.
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.890 247708 DEBUG nova.compute.manager [req-f02f30e4-3c13-4981-901b-ccb0ef0fe8f2 req-3755a720-0aff-4fb2-be91-68c086ba4c8a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.891 247708 DEBUG oslo_concurrency.lockutils [req-f02f30e4-3c13-4981-901b-ccb0ef0fe8f2 req-3755a720-0aff-4fb2-be91-68c086ba4c8a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.891 247708 DEBUG oslo_concurrency.lockutils [req-f02f30e4-3c13-4981-901b-ccb0ef0fe8f2 req-3755a720-0aff-4fb2-be91-68c086ba4c8a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.892 247708 DEBUG oslo_concurrency.lockutils [req-f02f30e4-3c13-4981-901b-ccb0ef0fe8f2 req-3755a720-0aff-4fb2-be91-68c086ba4c8a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.892 247708 DEBUG nova.compute.manager [req-f02f30e4-3c13-4981-901b-ccb0ef0fe8f2 req-3755a720-0aff-4fb2-be91-68c086ba4c8a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] No waiting events found dispatching network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:57:24 compute-0 nova_compute[247704]: 2026-01-31 07:57:24.892 247708 WARNING nova.compute.manager [req-f02f30e4-3c13-4981-901b-ccb0ef0fe8f2 req-3755a720-0aff-4fb2-be91-68c086ba4c8a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received unexpected event network-vif-plugged-52146f19-c519-41e7-a10c-703af57ee396 for instance with vm_state active and task_state deleting.
Jan 31 07:57:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:24.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.005 247708 DEBUG nova.network.neutron [-] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:57:25 compute-0 elastic_galileo[301360]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:57:25 compute-0 elastic_galileo[301360]: --> relative data size: 1.0
Jan 31 07:57:25 compute-0 elastic_galileo[301360]: --> All data devices are unavailable
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.112 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:25 compute-0 systemd[1]: libpod-6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e.scope: Deactivated successfully.
Jan 31 07:57:25 compute-0 podman[301343]: 2026-01-31 07:57:25.131477904 +0000 UTC m=+1.032480404 container died 6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_galileo, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4202732ea58346bd92956cfa893cb66c2506647c7aea5b33890afe821ff8aa8-merged.mount: Deactivated successfully.
Jan 31 07:57:25 compute-0 podman[301343]: 2026-01-31 07:57:25.181580365 +0000 UTC m=+1.082582835 container remove 6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_galileo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 07:57:25 compute-0 systemd[1]: libpod-conmon-6d965f9032da59a449fd218fd3f86c7e2865538b3e65a11c5d88ea6cd953a50e.scope: Deactivated successfully.
Jan 31 07:57:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 359 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 34 KiB/s wr, 77 op/s
Jan 31 07:57:25 compute-0 sudo[301236]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:57:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:25.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:57:25 compute-0 sudo[301397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:25 compute-0 sudo[301397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:25 compute-0 sudo[301397]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:25 compute-0 sudo[301431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:25 compute-0 sudo[301431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:25 compute-0 sudo[301431]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.367 247708 DEBUG nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-deleted-608af9e6-ae8b-4d49-9d2c-b8973ad0fe15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.368 247708 INFO nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Neutron deleted interface 608af9e6-ae8b-4d49-9d2c-b8973ad0fe15; detaching it from the instance and deleting it from the info cache
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.368 247708 DEBUG nova.network.neutron [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Updating instance_info_cache with network_info: [{"id": "52146f19-c519-41e7-a10c-703af57ee396", "address": "fa:16:3e:1e:c0:91", "network": {"id": "20c6d006-2998-430a-ace9-5819b781e4b4", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-694156903", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.206", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56ce08a86486427fbebbfbd075cdb404", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52146f19-c5", "ovs_interfaceid": "52146f19-c519-41e7-a10c-703af57ee396", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:57:25 compute-0 sudo[301456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:25 compute-0 sudo[301456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:25 compute-0 sudo[301456]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.423 247708 INFO nova.compute.manager [-] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Took 5.50 seconds to deallocate network for instance.
Jan 31 07:57:25 compute-0 sudo[301482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:57:25 compute-0 sudo[301482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/914716901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:57:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2697800000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.553 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.562 247708 DEBUG nova.compute.provider_tree [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.688 247708 DEBUG nova.scheduler.client.report [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:57:25 compute-0 podman[301550]: 2026-01-31 07:57:25.784330828 +0000 UTC m=+0.037836172 container create 9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:57:25 compute-0 systemd[1]: Started libpod-conmon-9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed.scope.
Jan 31 07:57:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:25 compute-0 podman[301550]: 2026-01-31 07:57:25.770308107 +0000 UTC m=+0.023813471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:57:25 compute-0 podman[301550]: 2026-01-31 07:57:25.867125155 +0000 UTC m=+0.120630529 container init 9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:25 compute-0 podman[301550]: 2026-01-31 07:57:25.875297154 +0000 UTC m=+0.128802508 container start 9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 07:57:25 compute-0 podman[301550]: 2026-01-31 07:57:25.879458096 +0000 UTC m=+0.132963500 container attach 9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:25 compute-0 tender_bouman[301566]: 167 167
Jan 31 07:57:25 compute-0 systemd[1]: libpod-9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed.scope: Deactivated successfully.
Jan 31 07:57:25 compute-0 podman[301550]: 2026-01-31 07:57:25.883378042 +0000 UTC m=+0.136883406 container died 9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 07:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b49df24fc074951725c227fd6c3615202d256989f0ebc8e5fcbca39888b8a84b-merged.mount: Deactivated successfully.
Jan 31 07:57:25 compute-0 podman[301550]: 2026-01-31 07:57:25.936832324 +0000 UTC m=+0.190337668 container remove 9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_bouman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.941 247708 DEBUG nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Detach interface failed, port_id=608af9e6-ae8b-4d49-9d2c-b8973ad0fe15, reason: Instance 7e357b28-4198-49a1-b10b-bbd4274180d0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.944 247708 DEBUG nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Received event network-vif-deleted-52146f19-c519-41e7-a10c-703af57ee396 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.945 247708 INFO nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Neutron deleted interface 52146f19-c519-41e7-a10c-703af57ee396; detaching it from the instance and deleting it from the info cache
Jan 31 07:57:25 compute-0 nova_compute[247704]: 2026-01-31 07:57:25.945 247708 DEBUG nova.network.neutron [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:57:25 compute-0 systemd[1]: libpod-conmon-9093eee5de3dfc0a293963d4ea90ace61f170e07579d8887c7783d6fe1c87aed.scope: Deactivated successfully.
Jan 31 07:57:26 compute-0 podman[301590]: 2026-01-31 07:57:26.097213191 +0000 UTC m=+0.051253020 container create cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.106 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.107 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:57:26 compute-0 systemd[1]: Started libpod-conmon-cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1.scope.
Jan 31 07:57:26 compute-0 podman[301590]: 2026-01-31 07:57:26.074552179 +0000 UTC m=+0.028592048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:57:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ad61791c90378c37fdecd88cd1ec41c0f0352c44de830da5f2c9c1d77082e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ad61791c90378c37fdecd88cd1ec41c0f0352c44de830da5f2c9c1d77082e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ad61791c90378c37fdecd88cd1ec41c0f0352c44de830da5f2c9c1d77082e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ad61791c90378c37fdecd88cd1ec41c0f0352c44de830da5f2c9c1d77082e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:26 compute-0 podman[301590]: 2026-01-31 07:57:26.184522138 +0000 UTC m=+0.138562007 container init cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 07:57:26 compute-0 podman[301590]: 2026-01-31 07:57:26.190506974 +0000 UTC m=+0.144546803 container start cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 07:57:26 compute-0 podman[301590]: 2026-01-31 07:57:26.194949332 +0000 UTC m=+0.148989212 container attach cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.206 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Updating instance_info_cache with network_info: [{"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:57:26 compute-0 ceph-mon[74496]: pgmap v1805: 305 pgs: 305 active+clean; 359 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 34 KiB/s wr, 77 op/s
Jan 31 07:57:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2697800000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3826910628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.494 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.495 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.595 247708 DEBUG oslo_concurrency.processutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.743 247708 DEBUG nova.compute.manager [req-ba92d2f0-ccfc-47e7-955f-5212d38506e0 req-f9cc321a-c8d2-4a37-addf-e88d42ff8828 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Detach interface failed, port_id=52146f19-c519-41e7-a10c-703af57ee396, reason: Instance 7e357b28-4198-49a1-b10b-bbd4274180d0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.874 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.877 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.877 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.878 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.878 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.879 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:57:26 compute-0 nova_compute[247704]: 2026-01-31 07:57:26.879 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:57:26 compute-0 kind_wilbur[301607]: {
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:     "0": [
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:         {
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "devices": [
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "/dev/loop3"
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             ],
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "lv_name": "ceph_lv0",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "lv_size": "7511998464",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "name": "ceph_lv0",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "tags": {
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.cluster_name": "ceph",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.crush_device_class": "",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.encrypted": "0",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.osd_id": "0",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.type": "block",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:                 "ceph.vdo": "0"
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             },
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "type": "block",
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:             "vg_name": "ceph_vg0"
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:         }
Jan 31 07:57:26 compute-0 kind_wilbur[301607]:     ]
Jan 31 07:57:26 compute-0 kind_wilbur[301607]: }
Jan 31 07:57:26 compute-0 systemd[1]: libpod-cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1.scope: Deactivated successfully.
Jan 31 07:57:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:57:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923257819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:27.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:27 compute-0 podman[301636]: 2026-01-31 07:57:27.013430422 +0000 UTC m=+0.026255761 container died cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 07:57:27 compute-0 nova_compute[247704]: 2026-01-31 07:57:27.029 247708 DEBUG oslo_concurrency.processutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:27 compute-0 nova_compute[247704]: 2026-01-31 07:57:27.038 247708 DEBUG nova.compute.provider_tree [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:57:27 compute-0 nova_compute[247704]: 2026-01-31 07:57:27.044 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:57:27 compute-0 nova_compute[247704]: 2026-01-31 07:57:27.044 247708 DEBUG nova.network.neutron [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ad61791c90378c37fdecd88cd1ec41c0f0352c44de830da5f2c9c1d77082e9-merged.mount: Deactivated successfully.
Jan 31 07:57:27 compute-0 podman[301636]: 2026-01-31 07:57:27.075411551 +0000 UTC m=+0.088236860 container remove cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:57:27 compute-0 systemd[1]: libpod-conmon-cae017581fbaad2eaa285c6550226b3f6036653e8a83e5cb0ac49b51b0b451d1.scope: Deactivated successfully.
Jan 31 07:57:27 compute-0 sudo[301482]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:27 compute-0 sudo[301675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:27 compute-0 sudo[301675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:27 compute-0 sudo[301675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:27 compute-0 podman[301652]: 2026-01-31 07:57:27.166390348 +0000 UTC m=+0.104908437 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:27 compute-0 sudo[301703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:57:27 compute-0 sudo[301703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:27 compute-0 sudo[301703]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 359 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 34 KiB/s wr, 30 op/s
Jan 31 07:57:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:27.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:27 compute-0 sudo[301728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:27 compute-0 sudo[301728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:27 compute-0 sudo[301728]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:27 compute-0 sudo[301753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:57:27 compute-0 sudo[301753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:27 compute-0 nova_compute[247704]: 2026-01-31 07:57:27.312 247708 DEBUG nova.policy [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f64f1ef9370e4177b114d6c71857656d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '777e9a4d5b284c9eb16aa35161bd7517', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:57:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/923257819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:27 compute-0 nova_compute[247704]: 2026-01-31 07:57:27.565 247708 DEBUG nova.scheduler.client.report [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:57:27 compute-0 podman[301821]: 2026-01-31 07:57:27.647054408 +0000 UTC m=+0.059811068 container create 32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:27 compute-0 systemd[1]: Started libpod-conmon-32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97.scope.
Jan 31 07:57:27 compute-0 podman[301821]: 2026-01-31 07:57:27.626453376 +0000 UTC m=+0.039210076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:27 compute-0 podman[301821]: 2026-01-31 07:57:27.739708986 +0000 UTC m=+0.152465666 container init 32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 07:57:27 compute-0 podman[301821]: 2026-01-31 07:57:27.746194943 +0000 UTC m=+0.158951583 container start 32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:57:27 compute-0 podman[301821]: 2026-01-31 07:57:27.749964575 +0000 UTC m=+0.162721235 container attach 32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:57:27 compute-0 funny_booth[301837]: 167 167
Jan 31 07:57:27 compute-0 systemd[1]: libpod-32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97.scope: Deactivated successfully.
Jan 31 07:57:27 compute-0 podman[301821]: 2026-01-31 07:57:27.75384374 +0000 UTC m=+0.166600400 container died 32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 07:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbab1fd56a9ece2387830b2a091ba4b6fee8114618fcd86168c97c10d8ae1a3e-merged.mount: Deactivated successfully.
Jan 31 07:57:27 compute-0 podman[301821]: 2026-01-31 07:57:27.790087183 +0000 UTC m=+0.202843833 container remove 32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_booth, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 07:57:27 compute-0 systemd[1]: libpod-conmon-32ca2879a35886d1b524e020c110959aa63c6d6b88c83b9b0b28589098bf7a97.scope: Deactivated successfully.
Jan 31 07:57:27 compute-0 nova_compute[247704]: 2026-01-31 07:57:27.835 247708 INFO nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:57:27 compute-0 podman[301860]: 2026-01-31 07:57:27.907428132 +0000 UTC m=+0.037500355 container create 54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 07:57:27 compute-0 nova_compute[247704]: 2026-01-31 07:57:27.926 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:27 compute-0 systemd[1]: Started libpod-conmon-54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f.scope.
Jan 31 07:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b780c92490c0d9a5b399e7cdd5d3ad0f6f935035c020fe310c640898fade9372/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b780c92490c0d9a5b399e7cdd5d3ad0f6f935035c020fe310c640898fade9372/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b780c92490c0d9a5b399e7cdd5d3ad0f6f935035c020fe310c640898fade9372/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b780c92490c0d9a5b399e7cdd5d3ad0f6f935035c020fe310c640898fade9372/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:57:27 compute-0 podman[301860]: 2026-01-31 07:57:27.89177841 +0000 UTC m=+0.021850663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:57:27 compute-0 podman[301860]: 2026-01-31 07:57:27.997325742 +0000 UTC m=+0.127398035 container init 54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wu, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:57:28 compute-0 podman[301860]: 2026-01-31 07:57:28.007147711 +0000 UTC m=+0.137219934 container start 54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wu, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:57:28 compute-0 podman[301860]: 2026-01-31 07:57:28.011969388 +0000 UTC m=+0.142041621 container attach 54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:57:28 compute-0 nova_compute[247704]: 2026-01-31 07:57:28.011 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:57:28 compute-0 nova_compute[247704]: 2026-01-31 07:57:28.376 247708 INFO nova.scheduler.client.report [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Deleted allocations for instance 7e357b28-4198-49a1-b10b-bbd4274180d0
Jan 31 07:57:28 compute-0 ceph-mon[74496]: pgmap v1806: 305 pgs: 305 active+clean; 359 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 34 KiB/s wr, 30 op/s
Jan 31 07:57:28 compute-0 great_wu[301877]: {
Jan 31 07:57:28 compute-0 great_wu[301877]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:57:28 compute-0 great_wu[301877]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:57:28 compute-0 great_wu[301877]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:57:28 compute-0 great_wu[301877]:         "osd_id": 0,
Jan 31 07:57:28 compute-0 great_wu[301877]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:57:28 compute-0 great_wu[301877]:         "type": "bluestore"
Jan 31 07:57:28 compute-0 great_wu[301877]:     }
Jan 31 07:57:28 compute-0 great_wu[301877]: }
Jan 31 07:57:28 compute-0 nova_compute[247704]: 2026-01-31 07:57:28.795 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:28 compute-0 nova_compute[247704]: 2026-01-31 07:57:28.815 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:28 compute-0 systemd[1]: libpod-54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f.scope: Deactivated successfully.
Jan 31 07:57:28 compute-0 podman[301898]: 2026-01-31 07:57:28.863028691 +0000 UTC m=+0.023818151 container died 54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b780c92490c0d9a5b399e7cdd5d3ad0f6f935035c020fe310c640898fade9372-merged.mount: Deactivated successfully.
Jan 31 07:57:28 compute-0 podman[301898]: 2026-01-31 07:57:28.941005571 +0000 UTC m=+0.101794931 container remove 54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:57:28 compute-0 systemd[1]: libpod-conmon-54da9504d670eb8a5d90fb269aa027414cdc5057556688f28a24f357224dad1f.scope: Deactivated successfully.
Jan 31 07:57:28 compute-0 sudo[301753]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:57:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:57:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:29.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8df6dea5-7c34-4d73-bbb6-9c1f96ce0aa4 does not exist
Jan 31 07:57:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 00785155-edc8-48ec-a83a-ea31d5cffc0b does not exist
Jan 31 07:57:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 28ed0748-2d9d-4ba6-a221-16cb65164f57 does not exist
Jan 31 07:57:29 compute-0 sudo[301913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:29 compute-0 sudo[301913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:29 compute-0 sudo[301913]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:29 compute-0 sudo[301938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:57:29 compute-0 sudo[301938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:29 compute-0 sudo[301938]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 359 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 35 KiB/s wr, 31 op/s
Jan 31 07:57:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:29.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:29 compute-0 nova_compute[247704]: 2026-01-31 07:57:29.657 247708 DEBUG oslo_concurrency.lockutils [None req-458a96d9-c5b8-4710-bad0-8effa63f5f15 56eecf4373334b18a454186e0c54e924 56ce08a86486427fbebbfbd075cdb404 - - default default] Lock "7e357b28-4198-49a1-b10b-bbd4274180d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:57:30 compute-0 ceph-mon[74496]: pgmap v1807: 305 pgs: 305 active+clean; 359 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 35 KiB/s wr, 31 op/s
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.267 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.268 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.268 247708 INFO nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Creating image(s)
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.297 247708 DEBUG nova.storage.rbd_utils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] rbd image 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.330 247708 DEBUG nova.storage.rbd_utils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] rbd image 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.361 247708 DEBUG nova.storage.rbd_utils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] rbd image 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.367 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.424 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.425 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.426 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.426 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.450 247708 DEBUG nova.storage.rbd_utils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] rbd image 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.454 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:57:30 compute-0 nova_compute[247704]: 2026-01-31 07:57:30.923 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:57:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:31.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:31 compute-0 nova_compute[247704]: 2026-01-31 07:57:31.031 247708 DEBUG nova.storage.rbd_utils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] resizing rbd image 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:57:31 compute-0 nova_compute[247704]: 2026-01-31 07:57:31.156 247708 DEBUG nova.objects.instance [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lazy-loading 'migration_context' on Instance uuid 6ac7b21e-0a48-4bed-9b36-77bb332f73c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:57:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 341 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 263 KiB/s wr, 34 op/s
Jan 31 07:57:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:57:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:31.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:57:31 compute-0 nova_compute[247704]: 2026-01-31 07:57:31.423 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:57:31 compute-0 nova_compute[247704]: 2026-01-31 07:57:31.424 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Ensure instance console log exists: /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:57:31 compute-0 nova_compute[247704]: 2026-01-31 07:57:31.425 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:57:31 compute-0 nova_compute[247704]: 2026-01-31 07:57:31.425 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:57:31 compute-0 nova_compute[247704]: 2026-01-31 07:57:31.425 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:57:32 compute-0 ceph-mon[74496]: pgmap v1808: 305 pgs: 305 active+clean; 341 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 263 KiB/s wr, 34 op/s
Jan 31 07:57:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:33.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 328 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 811 KiB/s wr, 57 op/s
Jan 31 07:57:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:33 compute-0 nova_compute[247704]: 2026-01-31 07:57:33.379 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846238.378122, 7e357b28-4198-49a1-b10b-bbd4274180d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:57:33 compute-0 nova_compute[247704]: 2026-01-31 07:57:33.380 247708 INFO nova.compute.manager [-] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] VM Stopped (Lifecycle Event)
Jan 31 07:57:33 compute-0 nova_compute[247704]: 2026-01-31 07:57:33.630 247708 DEBUG nova.compute.manager [None req-9e774acb-2688-4fed-90f0-5c0f5da59300 - - - - - -] [instance: 7e357b28-4198-49a1-b10b-bbd4274180d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:57:33 compute-0 nova_compute[247704]: 2026-01-31 07:57:33.798 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:33 compute-0 nova_compute[247704]: 2026-01-31 07:57:33.817 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:34 compute-0 nova_compute[247704]: 2026-01-31 07:57:34.136 247708 DEBUG nova.network.neutron [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Successfully created port: 34ae45cc-d3c0-4212-80e5-816568207748 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:57:34 compute-0 ceph-mon[74496]: pgmap v1809: 305 pgs: 305 active+clean; 328 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 811 KiB/s wr, 57 op/s
Jan 31 07:57:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:35.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007780147845710032 of space, bias 1.0, pg target 2.3340443537130096 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 07:57:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 325 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 31 07:57:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:35.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:36 compute-0 ceph-mon[74496]: pgmap v1810: 305 pgs: 305 active+clean; 325 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 31 07:57:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:37.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 07:57:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:37.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:38 compute-0 sudo[302134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:38 compute-0 sudo[302134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:38 compute-0 sudo[302134]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:38 compute-0 sudo[302159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:38 compute-0 sudo[302159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:38 compute-0 sudo[302159]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:38 compute-0 ceph-mon[74496]: pgmap v1811: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 07:57:38 compute-0 nova_compute[247704]: 2026-01-31 07:57:38.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:38 compute-0 nova_compute[247704]: 2026-01-31 07:57:38.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:39.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 07:57:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:39.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:40 compute-0 ceph-mon[74496]: pgmap v1812: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 07:57:40 compute-0 sshd-session[302185]: Invalid user solv from 45.148.10.240 port 50664
Jan 31 07:57:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:41.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:41 compute-0 sshd-session[302185]: Connection closed by invalid user solv 45.148.10.240 port 50664 [preauth]
Jan 31 07:57:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 07:57:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:41.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:42 compute-0 ceph-mon[74496]: pgmap v1813: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 07:57:42 compute-0 podman[302188]: 2026-01-31 07:57:42.93522799 +0000 UTC m=+0.091478360 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:57:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:43.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Jan 31 07:57:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:43.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:43 compute-0 nova_compute[247704]: 2026-01-31 07:57:43.801 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:43 compute-0 nova_compute[247704]: 2026-01-31 07:57:43.820 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:44 compute-0 ceph-mon[74496]: pgmap v1814: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.5 MiB/s wr, 48 op/s
Jan 31 07:57:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:45.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1015 KiB/s wr, 15 op/s
Jan 31 07:57:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:45.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/516796285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:57:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/516796285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:57:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:45.612 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:57:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:45.613 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:57:45 compute-0 nova_compute[247704]: 2026-01-31 07:57:45.613 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:46 compute-0 ceph-mon[74496]: pgmap v1815: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1015 KiB/s wr, 15 op/s
Jan 31 07:57:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:47.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 4.4 KiB/s wr, 9 op/s
Jan 31 07:57:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:47.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:48 compute-0 ceph-mon[74496]: pgmap v1816: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 4.4 KiB/s wr, 9 op/s
Jan 31 07:57:48 compute-0 nova_compute[247704]: 2026-01-31 07:57:48.803 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:48 compute-0 nova_compute[247704]: 2026-01-31 07:57:48.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:49.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 31 07:57:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:49.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:57:50 compute-0 ceph-mon[74496]: pgmap v1817: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 31 07:57:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2379552753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2640853960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:57:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:57:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:51.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:57:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 2.3 KiB/s wr, 2 op/s
Jan 31 07:57:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:51.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:52 compute-0 ceph-mon[74496]: pgmap v1818: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 2.3 KiB/s wr, 2 op/s
Jan 31 07:57:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:53.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 2.4 KiB/s wr, 3 op/s
Jan 31 07:57:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:53.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:53 compute-0 ovn_controller[149457]: 2026-01-31T07:57:53Z|00319|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 07:57:53 compute-0 nova_compute[247704]: 2026-01-31 07:57:53.386 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:53 compute-0 nova_compute[247704]: 2026-01-31 07:57:53.806 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:53 compute-0 nova_compute[247704]: 2026-01-31 07:57:53.823 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:53 compute-0 ceph-mon[74496]: pgmap v1819: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 2.4 KiB/s wr, 3 op/s
Jan 31 07:57:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1737226846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:57:54 compute-0 nova_compute[247704]: 2026-01-31 07:57:54.378 247708 DEBUG nova.network.neutron [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Successfully updated port: 34ae45cc-d3c0-4212-80e5-816568207748 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:57:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:57:54 compute-0 nova_compute[247704]: 2026-01-31 07:57:54.569 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:57:54 compute-0 nova_compute[247704]: 2026-01-31 07:57:54.570 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquired lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:57:54 compute-0 nova_compute[247704]: 2026-01-31 07:57:54.570 247708 DEBUG nova.network.neutron [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:57:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:57:54.616 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:57:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:55.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 597 KiB/s rd, 10 KiB/s wr, 27 op/s
Jan 31 07:57:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:55.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:55 compute-0 nova_compute[247704]: 2026-01-31 07:57:55.288 247708 DEBUG nova.network.neutron [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:57:56 compute-0 ceph-mon[74496]: pgmap v1820: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 597 KiB/s rd, 10 KiB/s wr, 27 op/s
Jan 31 07:57:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:57:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:57.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:57:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.2 KiB/s wr, 49 op/s
Jan 31 07:57:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:57.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:57 compute-0 podman[302216]: 2026-01-31 07:57:57.902116353 +0000 UTC m=+0.070874017 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:57:58 compute-0 sudo[302242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:58 compute-0 sudo[302242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:58 compute-0 sudo[302242]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:58 compute-0 ceph-mon[74496]: pgmap v1821: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.2 KiB/s wr, 49 op/s
Jan 31 07:57:58 compute-0 sudo[302267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:57:58 compute-0 sudo[302267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:57:58 compute-0 sudo[302267]: pam_unix(sudo:session): session closed for user root
Jan 31 07:57:58 compute-0 nova_compute[247704]: 2026-01-31 07:57:58.807 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:58 compute-0 nova_compute[247704]: 2026-01-31 07:57:58.824 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:57:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Jan 31 07:57:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:57:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:57:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:59.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:57:59 compute-0 nova_compute[247704]: 2026-01-31 07:57:59.326 247708 DEBUG nova.compute.manager [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received event network-changed-34ae45cc-d3c0-4212-80e5-816568207748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:57:59 compute-0 nova_compute[247704]: 2026-01-31 07:57:59.327 247708 DEBUG nova.compute.manager [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Refreshing instance network info cache due to event network-changed-34ae45cc-d3c0-4212-80e5-816568207748. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:57:59 compute-0 nova_compute[247704]: 2026-01-31 07:57:59.327 247708 DEBUG oslo_concurrency.lockutils [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:57:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:00 compute-0 nova_compute[247704]: 2026-01-31 07:58:00.380 247708 DEBUG nova.network.neutron [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Updating instance_info_cache with network_info: [{"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:58:00 compute-0 ceph-mon[74496]: pgmap v1822: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Jan 31 07:58:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:01.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.162 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Releasing lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.163 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Instance network_info: |[{"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.164 247708 DEBUG oslo_concurrency.lockutils [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.165 247708 DEBUG nova.network.neutron [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Refreshing network info cache for port 34ae45cc-d3c0-4212-80e5-816568207748 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.169 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Start _get_guest_xml network_info=[{"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.177 247708 WARNING nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.196 247708 DEBUG nova.virt.libvirt.host [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.198 247708 DEBUG nova.virt.libvirt.host [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.206 247708 DEBUG nova.virt.libvirt.host [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.207 247708 DEBUG nova.virt.libvirt.host [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.209 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.210 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.210 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.211 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.211 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.211 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.212 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.212 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.213 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.213 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.213 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.214 247708 DEBUG nova.virt.hardware [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.219 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Jan 31 07:58:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:58:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1670089036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.679 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.720 247708 DEBUG nova.storage.rbd_utils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] rbd image 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:58:01 compute-0 nova_compute[247704]: 2026-01-31 07:58:01.726 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:01 compute-0 ceph-mon[74496]: pgmap v1823: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Jan 31 07:58:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1670089036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:58:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:58:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564199315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.231 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.233 247708 DEBUG nova.virt.libvirt.vif [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:57:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-744782285',display_name='tempest-ServersTestJSON-server-744782285',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-744782285',id=85,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNS6+yz/blQrIBEUDpuBXFU9VtBzqT5wj/P1ScszZJQaO1UIjFvsF0sJ4sEn7BTHSHnqc3VsUf3gN0JlVO2GOJ/7+LaLJlH/oN62/lOZ7X0Lnyhl0uma3T0wqnQ4mYKu3g==',key_name='tempest-keypair-487746000',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='777e9a4d5b284c9eb16aa35161bd7517',ramdisk_id='',reservation_id='r-sudnzhdx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-54862618',owner_user_name='tempest-ServersTestJSON-54862618-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:57:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f64f1ef9370e4177b114d6c71857656d',uuid=6ac7b21e-0a48-4bed-9b36-77bb332f73c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.234 247708 DEBUG nova.network.os_vif_util [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Converting VIF {"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.235 247708 DEBUG nova.network.os_vif_util [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:8a:23,bridge_name='br-int',has_traffic_filtering=True,id=34ae45cc-d3c0-4212-80e5-816568207748,network=Network(5ffa4b98-ff53-433c-a228-0a10f10eccfb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34ae45cc-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.237 247708 DEBUG nova.objects.instance [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ac7b21e-0a48-4bed-9b36-77bb332f73c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.588 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <uuid>6ac7b21e-0a48-4bed-9b36-77bb332f73c5</uuid>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <name>instance-00000055</name>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersTestJSON-server-744782285</nova:name>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:58:01</nova:creationTime>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <nova:user uuid="f64f1ef9370e4177b114d6c71857656d">tempest-ServersTestJSON-54862618-project-member</nova:user>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <nova:project uuid="777e9a4d5b284c9eb16aa35161bd7517">tempest-ServersTestJSON-54862618</nova:project>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <nova:port uuid="34ae45cc-d3c0-4212-80e5-816568207748">
Jan 31 07:58:02 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <system>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <entry name="serial">6ac7b21e-0a48-4bed-9b36-77bb332f73c5</entry>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <entry name="uuid">6ac7b21e-0a48-4bed-9b36-77bb332f73c5</entry>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </system>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <os>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   </os>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <features>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   </features>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk">
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       </source>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk.config">
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       </source>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:58:02 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:55:8a:23"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <target dev="tap34ae45cc-d3"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5/console.log" append="off"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <video>
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </video>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:58:02 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:58:02 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:58:02 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:58:02 compute-0 nova_compute[247704]: </domain>
Jan 31 07:58:02 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.589 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Preparing to wait for external event network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.589 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.590 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.590 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.591 247708 DEBUG nova.virt.libvirt.vif [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:57:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-744782285',display_name='tempest-ServersTestJSON-server-744782285',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-744782285',id=85,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNS6+yz/blQrIBEUDpuBXFU9VtBzqT5wj/P1ScszZJQaO1UIjFvsF0sJ4sEn7BTHSHnqc3VsUf3gN0JlVO2GOJ/7+LaLJlH/oN62/lOZ7X0Lnyhl0uma3T0wqnQ4mYKu3g==',key_name='tempest-keypair-487746000',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='777e9a4d5b284c9eb16aa35161bd7517',ramdisk_id='',reservation_id='r-sudnzhdx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-54862618',owner_user_name='tempest-ServersTestJSON-54862618-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:57:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f64f1ef9370e4177b114d6c71857656d',uuid=6ac7b21e-0a48-4bed-9b36-77bb332f73c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.591 247708 DEBUG nova.network.os_vif_util [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Converting VIF {"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.591 247708 DEBUG nova.network.os_vif_util [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:8a:23,bridge_name='br-int',has_traffic_filtering=True,id=34ae45cc-d3c0-4212-80e5-816568207748,network=Network(5ffa4b98-ff53-433c-a228-0a10f10eccfb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34ae45cc-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.592 247708 DEBUG os_vif [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:8a:23,bridge_name='br-int',has_traffic_filtering=True,id=34ae45cc-d3c0-4212-80e5-816568207748,network=Network(5ffa4b98-ff53-433c-a228-0a10f10eccfb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34ae45cc-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.592 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.593 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.593 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.598 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.599 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34ae45cc-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.600 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34ae45cc-d3, col_values=(('external_ids', {'iface-id': '34ae45cc-d3c0-4212-80e5-816568207748', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:8a:23', 'vm-uuid': '6ac7b21e-0a48-4bed-9b36-77bb332f73c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.602 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:02 compute-0 NetworkManager[49108]: <info>  [1769846282.6031] manager: (tap34ae45cc-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.604 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.609 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:02 compute-0 nova_compute[247704]: 2026-01-31 07:58:02.611 247708 INFO os_vif [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:8a:23,bridge_name='br-int',has_traffic_filtering=True,id=34ae45cc-d3c0-4212-80e5-816568207748,network=Network(5ffa4b98-ff53-433c-a228-0a10f10eccfb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34ae45cc-d3')
Jan 31 07:58:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2564199315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:58:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:03.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 71 op/s
Jan 31 07:58:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:03.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:03 compute-0 nova_compute[247704]: 2026-01-31 07:58:03.599 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:58:03 compute-0 nova_compute[247704]: 2026-01-31 07:58:03.599 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:58:03 compute-0 nova_compute[247704]: 2026-01-31 07:58:03.600 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] No VIF found with MAC fa:16:3e:55:8a:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:58:03 compute-0 nova_compute[247704]: 2026-01-31 07:58:03.600 247708 INFO nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Using config drive
Jan 31 07:58:03 compute-0 nova_compute[247704]: 2026-01-31 07:58:03.626 247708 DEBUG nova.storage.rbd_utils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] rbd image 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:58:03 compute-0 nova_compute[247704]: 2026-01-31 07:58:03.809 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:03 compute-0 ceph-mon[74496]: pgmap v1824: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 71 op/s
Jan 31 07:58:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.488282) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284488386, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1022, "num_deletes": 250, "total_data_size": 1556899, "memory_usage": 1585104, "flush_reason": "Manual Compaction"}
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284501485, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 942484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39062, "largest_seqno": 40082, "table_properties": {"data_size": 938542, "index_size": 1530, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10696, "raw_average_key_size": 20, "raw_value_size": 930034, "raw_average_value_size": 1812, "num_data_blocks": 68, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846194, "oldest_key_time": 1769846194, "file_creation_time": 1769846284, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 13329 microseconds, and 3480 cpu microseconds.
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.501580) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 942484 bytes OK
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.501647) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.503827) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.503848) EVENT_LOG_v1 {"time_micros": 1769846284503842, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.503869) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1552207, prev total WAL file size 1552207, number of live WAL files 2.
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.504750) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323538' seq:72057594037927935, type:22 .. '6D6772737461740031353039' seq:0, type:0; will stop at (end)
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(920KB)], [83(11MB)]
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284504785, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 12794333, "oldest_snapshot_seqno": -1}
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6803 keys, 9538024 bytes, temperature: kUnknown
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284615894, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9538024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9494227, "index_size": 25701, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 174657, "raw_average_key_size": 25, "raw_value_size": 9374141, "raw_average_value_size": 1377, "num_data_blocks": 1023, "num_entries": 6803, "num_filter_entries": 6803, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769846284, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.616244) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9538024 bytes
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.618582) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.1 rd, 85.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(23.7) write-amplify(10.1) OK, records in: 7277, records dropped: 474 output_compression: NoCompression
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.618613) EVENT_LOG_v1 {"time_micros": 1769846284618599, "job": 48, "event": "compaction_finished", "compaction_time_micros": 111201, "compaction_time_cpu_micros": 18607, "output_level": 6, "num_output_files": 1, "total_output_size": 9538024, "num_input_records": 7277, "num_output_records": 6803, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284618950, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846284620985, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.504650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.621111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.621117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.621119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.621121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:58:04.621123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 07:58:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:05.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 79 op/s
Jan 31 07:58:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:05.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:05 compute-0 nova_compute[247704]: 2026-01-31 07:58:05.538 247708 INFO nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Creating config drive at /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5/disk.config
Jan 31 07:58:05 compute-0 nova_compute[247704]: 2026-01-31 07:58:05.549 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfd8cz834 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:05 compute-0 nova_compute[247704]: 2026-01-31 07:58:05.682 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfd8cz834" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:05 compute-0 nova_compute[247704]: 2026-01-31 07:58:05.715 247708 DEBUG nova.storage.rbd_utils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] rbd image 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:58:05 compute-0 nova_compute[247704]: 2026-01-31 07:58:05.719 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5/disk.config 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:05 compute-0 nova_compute[247704]: 2026-01-31 07:58:05.741 247708 DEBUG nova.network.neutron [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Updated VIF entry in instance network info cache for port 34ae45cc-d3c0-4212-80e5-816568207748. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:58:05 compute-0 nova_compute[247704]: 2026-01-31 07:58:05.742 247708 DEBUG nova.network.neutron [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Updating instance_info_cache with network_info: [{"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.200 247708 DEBUG oslo_concurrency.processutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5/disk.config 6ac7b21e-0a48-4bed-9b36-77bb332f73c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.201 247708 INFO nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Deleting local config drive /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5/disk.config because it was imported into RBD.
Jan 31 07:58:06 compute-0 kernel: tap34ae45cc-d3: entered promiscuous mode
Jan 31 07:58:06 compute-0 NetworkManager[49108]: <info>  [1769846286.2728] manager: (tap34ae45cc-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Jan 31 07:58:06 compute-0 ovn_controller[149457]: 2026-01-31T07:58:06Z|00320|binding|INFO|Claiming lport 34ae45cc-d3c0-4212-80e5-816568207748 for this chassis.
Jan 31 07:58:06 compute-0 ovn_controller[149457]: 2026-01-31T07:58:06Z|00321|binding|INFO|34ae45cc-d3c0-4212-80e5-816568207748: Claiming fa:16:3e:55:8a:23 10.100.0.3
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.272 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:06 compute-0 systemd-udevd[302429]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:58:06 compute-0 NetworkManager[49108]: <info>  [1769846286.3145] device (tap34ae45cc-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:58:06 compute-0 NetworkManager[49108]: <info>  [1769846286.3149] device (tap34ae45cc-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:58:06 compute-0 systemd-machined[214448]: New machine qemu-34-instance-00000055.
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.324 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:06 compute-0 ovn_controller[149457]: 2026-01-31T07:58:06Z|00322|binding|INFO|Setting lport 34ae45cc-d3c0-4212-80e5-816568207748 ovn-installed in OVS
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.330 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:06 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-00000055.
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.443 247708 DEBUG oslo_concurrency.lockutils [req-096e7342-17ca-4944-a35d-6bb3a295c357 req-9e9756e6-2521-42be-8c4f-de76191ae5f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:58:06 compute-0 ovn_controller[149457]: 2026-01-31T07:58:06Z|00323|binding|INFO|Setting lport 34ae45cc-d3c0-4212-80e5-816568207748 up in Southbound
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.584 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:8a:23 10.100.0.3'], port_security=['fa:16:3e:55:8a:23 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6ac7b21e-0a48-4bed-9b36-77bb332f73c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '777e9a4d5b284c9eb16aa35161bd7517', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e7a0247a-7c51-4201-85e8-6b859dee42ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71ff9f10-c15d-4710-bf7b-1ae980d3e2ed, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=34ae45cc-d3c0-4212-80e5-816568207748) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.586 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 34ae45cc-d3c0-4212-80e5-816568207748 in datapath 5ffa4b98-ff53-433c-a228-0a10f10eccfb bound to our chassis
Jan 31 07:58:06 compute-0 ceph-mon[74496]: pgmap v1825: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 79 op/s
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.590 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ffa4b98-ff53-433c-a228-0a10f10eccfb
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.597 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bbed45-e9af-4eec-ab63-b061b8435762]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.598 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ffa4b98-f1 in ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.599 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ffa4b98-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.599 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdba231-c475-4e1a-9515-c217d29ba396]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.604 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea1dc4a-a88a-41d5-a3ad-ce3b4433cf3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.620 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[1a10b9f8-314f-418d-a593-7c86c7656135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.638 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc3453c-b18e-48c8-960b-7d0d253e6e6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.673 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[69eac85e-d259-42f9-af47-066b5c920874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.680 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[db9027c9-0d6a-4a45-bba2-b144efd10555]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 NetworkManager[49108]: <info>  [1769846286.6811] manager: (tap5ffa4b98-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/157)
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.707 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[14235747-45f2-42a9-aa4e-9570d3b5e086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.711 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[37a92d5d-6052-4f8f-b685-6c6f8ad53004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 NetworkManager[49108]: <info>  [1769846286.7310] device (tap5ffa4b98-f0): carrier: link connected
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.735 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bd65767b-e65c-4814-b03c-4ecfd517a25d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.749 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[31682b6b-e60f-45f4-9814-9d8f8f69b4fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ffa4b98-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:e4:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652934, 'reachable_time': 42848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302483, 'error': None, 'target': 'ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.759 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3834ece5-0871-41c5-ad97-b3607a16ae5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:e46b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652934, 'tstamp': 652934}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302484, 'error': None, 'target': 'ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.772 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3891c31c-12c9-4a39-b46e-2f2ae01747a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ffa4b98-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:e4:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652934, 'reachable_time': 42848, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302485, 'error': None, 'target': 'ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.795 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[acdee2fc-65b0-4eaa-b919-c6da3ca38651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.843 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5960fd5d-0f83-4da5-adf3-143791eb7700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.845 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ffa4b98-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.845 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.845 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ffa4b98-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:06 compute-0 NetworkManager[49108]: <info>  [1769846286.8482] manager: (tap5ffa4b98-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Jan 31 07:58:06 compute-0 kernel: tap5ffa4b98-f0: entered promiscuous mode
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.847 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.850 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ffa4b98-f0, col_values=(('external_ids', {'iface-id': '0421d305-ec23-4b5f-a0a0-7a3f3b67f8ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:06 compute-0 ovn_controller[149457]: 2026-01-31T07:58:06Z|00324|binding|INFO|Releasing lport 0421d305-ec23-4b5f-a0a0-7a3f3b67f8ff from this chassis (sb_readonly=0)
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.862 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.863 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ffa4b98-ff53-433c-a228-0a10f10eccfb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ffa4b98-ff53-433c-a228-0a10f10eccfb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.864 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[114014df-5b4d-4317-b5c7-f4bfc62d9e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.865 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-5ffa4b98-ff53-433c-a228-0a10f10eccfb
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/5ffa4b98-ff53-433c-a228-0a10f10eccfb.pid.haproxy
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 5ffa4b98-ff53-433c-a228-0a10f10eccfb
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:58:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:06.866 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'env', 'PROCESS_TAG=haproxy-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ffa4b98-ff53-433c-a228-0a10f10eccfb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.924 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846286.923793, 6ac7b21e-0a48-4bed-9b36-77bb332f73c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:58:06 compute-0 nova_compute[247704]: 2026-01-31 07:58:06.924 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] VM Started (Lifecycle Event)
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.029 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.034 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846286.924052, 6ac7b21e-0a48-4bed-9b36-77bb332f73c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.035 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] VM Paused (Lifecycle Event)
Jan 31 07:58:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:07.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 18 KiB/s wr, 58 op/s
Jan 31 07:58:07 compute-0 podman[302541]: 2026-01-31 07:58:07.249839292 +0000 UTC m=+0.061742946 container create a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 07:58:07 compute-0 systemd[1]: Started libpod-conmon-a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663.scope.
Jan 31 07:58:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:07.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:07 compute-0 podman[302541]: 2026-01-31 07:58:07.213190098 +0000 UTC m=+0.025093842 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:58:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dce5d28a67c6e4504fb0167e8f2db334f52958dd436de7b2051f06a50442b2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:07 compute-0 podman[302541]: 2026-01-31 07:58:07.343492653 +0000 UTC m=+0.155396327 container init a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 07:58:07 compute-0 podman[302541]: 2026-01-31 07:58:07.348867394 +0000 UTC m=+0.160771038 container start a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.365 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.371 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:58:07 compute-0 neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb[302556]: [NOTICE]   (302560) : New worker (302563) forked
Jan 31 07:58:07 compute-0 neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb[302556]: [NOTICE]   (302560) : Loading success.
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.530 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.602 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.718 247708 DEBUG nova.compute.manager [req-dad19138-e0db-4468-bdb2-159a7ec7cb0e req-081be7d6-de25-440c-ae41-82913fda83a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received event network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.719 247708 DEBUG oslo_concurrency.lockutils [req-dad19138-e0db-4468-bdb2-159a7ec7cb0e req-081be7d6-de25-440c-ae41-82913fda83a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.720 247708 DEBUG oslo_concurrency.lockutils [req-dad19138-e0db-4468-bdb2-159a7ec7cb0e req-081be7d6-de25-440c-ae41-82913fda83a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.720 247708 DEBUG oslo_concurrency.lockutils [req-dad19138-e0db-4468-bdb2-159a7ec7cb0e req-081be7d6-de25-440c-ae41-82913fda83a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.721 247708 DEBUG nova.compute.manager [req-dad19138-e0db-4468-bdb2-159a7ec7cb0e req-081be7d6-de25-440c-ae41-82913fda83a5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Processing event network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.722 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.726 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846287.7265418, 6ac7b21e-0a48-4bed-9b36-77bb332f73c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.727 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] VM Resumed (Lifecycle Event)
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.729 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.733 247708 INFO nova.virt.libvirt.driver [-] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Instance spawned successfully.
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.733 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.783 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.789 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.857 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.858 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.858 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.859 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.859 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.859 247708 DEBUG nova.virt.libvirt.driver [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:58:07 compute-0 nova_compute[247704]: 2026-01-31 07:58:07.912 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:58:08 compute-0 nova_compute[247704]: 2026-01-31 07:58:08.191 247708 INFO nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Took 37.92 seconds to spawn the instance on the hypervisor.
Jan 31 07:58:08 compute-0 nova_compute[247704]: 2026-01-31 07:58:08.191 247708 DEBUG nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:58:08 compute-0 nova_compute[247704]: 2026-01-31 07:58:08.474 247708 INFO nova.compute.manager [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Took 44.16 seconds to build instance.
Jan 31 07:58:08 compute-0 ceph-mon[74496]: pgmap v1826: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 18 KiB/s wr, 58 op/s
Jan 31 07:58:08 compute-0 nova_compute[247704]: 2026-01-31 07:58:08.778 247708 DEBUG oslo_concurrency.lockutils [None req-a1f866e4-2780-42f8-89eb-a5043d295299 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 45.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:08 compute-0 nova_compute[247704]: 2026-01-31 07:58:08.811 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:09.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 98 op/s
Jan 31 07:58:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:09.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:09 compute-0 nova_compute[247704]: 2026-01-31 07:58:09.939 247708 DEBUG nova.compute.manager [req-080d7475-1fe8-43aa-a468-8fead3f77e94 req-2f5cbf01-9985-4b06-b6ba-9b78b54b411a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received event network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:09 compute-0 nova_compute[247704]: 2026-01-31 07:58:09.940 247708 DEBUG oslo_concurrency.lockutils [req-080d7475-1fe8-43aa-a468-8fead3f77e94 req-2f5cbf01-9985-4b06-b6ba-9b78b54b411a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:09 compute-0 nova_compute[247704]: 2026-01-31 07:58:09.941 247708 DEBUG oslo_concurrency.lockutils [req-080d7475-1fe8-43aa-a468-8fead3f77e94 req-2f5cbf01-9985-4b06-b6ba-9b78b54b411a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:09 compute-0 nova_compute[247704]: 2026-01-31 07:58:09.941 247708 DEBUG oslo_concurrency.lockutils [req-080d7475-1fe8-43aa-a468-8fead3f77e94 req-2f5cbf01-9985-4b06-b6ba-9b78b54b411a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:09 compute-0 nova_compute[247704]: 2026-01-31 07:58:09.942 247708 DEBUG nova.compute.manager [req-080d7475-1fe8-43aa-a468-8fead3f77e94 req-2f5cbf01-9985-4b06-b6ba-9b78b54b411a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] No waiting events found dispatching network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:58:09 compute-0 nova_compute[247704]: 2026-01-31 07:58:09.942 247708 WARNING nova.compute.manager [req-080d7475-1fe8-43aa-a468-8fead3f77e94 req-2f5cbf01-9985-4b06-b6ba-9b78b54b411a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received unexpected event network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 for instance with vm_state active and task_state None.
Jan 31 07:58:09 compute-0 ceph-mon[74496]: pgmap v1827: 305 pgs: 305 active+clean; 325 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 98 op/s
Jan 31 07:58:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:11.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:11.167 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:11.168 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:11.169 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 326 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 26 KiB/s wr, 85 op/s
Jan 31 07:58:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:12 compute-0 nova_compute[247704]: 2026-01-31 07:58:12.605 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:13.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:13 compute-0 ceph-mon[74496]: pgmap v1828: 305 pgs: 305 active+clean; 326 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 26 KiB/s wr, 85 op/s
Jan 31 07:58:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 326 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 26 KiB/s wr, 111 op/s
Jan 31 07:58:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:13.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:13 compute-0 nova_compute[247704]: 2026-01-31 07:58:13.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:13 compute-0 nova_compute[247704]: 2026-01-31 07:58:13.812 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:13 compute-0 podman[302575]: 2026-01-31 07:58:13.900709289 +0000 UTC m=+0.065412375 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 07:58:14 compute-0 ceph-mon[74496]: pgmap v1829: 305 pgs: 305 active+clean; 326 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 26 KiB/s wr, 111 op/s
Jan 31 07:58:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:14 compute-0 nova_compute[247704]: 2026-01-31 07:58:14.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:15.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 327 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 124 op/s
Jan 31 07:58:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:15.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:16 compute-0 ceph-mon[74496]: pgmap v1830: 305 pgs: 305 active+clean; 327 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 124 op/s
Jan 31 07:58:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1802042998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.826 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.826 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.827 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.827 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.827 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:16 compute-0 ovn_controller[149457]: 2026-01-31T07:58:16Z|00325|binding|INFO|Releasing lport 0421d305-ec23-4b5f-a0a0-7a3f3b67f8ff from this chassis (sb_readonly=0)
Jan 31 07:58:16 compute-0 ovn_controller[149457]: 2026-01-31T07:58:16Z|00326|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 07:58:16 compute-0 NetworkManager[49108]: <info>  [1769846296.8484] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Jan 31 07:58:16 compute-0 NetworkManager[49108]: <info>  [1769846296.8494] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.849 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:16 compute-0 ovn_controller[149457]: 2026-01-31T07:58:16Z|00327|binding|INFO|Releasing lport 0421d305-ec23-4b5f-a0a0-7a3f3b67f8ff from this chassis (sb_readonly=0)
Jan 31 07:58:16 compute-0 ovn_controller[149457]: 2026-01-31T07:58:16Z|00328|binding|INFO|Releasing lport d19b5f05-fa79-4835-8ef4-51f87493d59b from this chassis (sb_readonly=0)
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.854 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:16 compute-0 nova_compute[247704]: 2026-01-31 07:58:16.859 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:17.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 313 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 26 KiB/s wr, 119 op/s
Jan 31 07:58:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:58:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773235226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:17 compute-0 nova_compute[247704]: 2026-01-31 07:58:17.268 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:17.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/14013203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2773235226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:17 compute-0 nova_compute[247704]: 2026-01-31 07:58:17.624 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:17 compute-0 nova_compute[247704]: 2026-01-31 07:58:17.947 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:58:17 compute-0 nova_compute[247704]: 2026-01-31 07:58:17.948 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:58:17 compute-0 nova_compute[247704]: 2026-01-31 07:58:17.953 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:58:17 compute-0 nova_compute[247704]: 2026-01-31 07:58:17.954 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 07:58:18 compute-0 nova_compute[247704]: 2026-01-31 07:58:18.110 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:58:18 compute-0 nova_compute[247704]: 2026-01-31 07:58:18.111 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4201MB free_disk=20.83077621459961GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:58:18 compute-0 nova_compute[247704]: 2026-01-31 07:58:18.112 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:18 compute-0 nova_compute[247704]: 2026-01-31 07:58:18.112 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:18 compute-0 ceph-mon[74496]: pgmap v1831: 305 pgs: 305 active+clean; 313 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 26 KiB/s wr, 119 op/s
Jan 31 07:58:18 compute-0 sudo[302621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:18 compute-0 sudo[302621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:18 compute-0 sudo[302621]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:18 compute-0 sudo[302646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:18 compute-0 sudo[302646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:18 compute-0 sudo[302646]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:18 compute-0 nova_compute[247704]: 2026-01-31 07:58:18.815 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:19.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 248 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 30 KiB/s wr, 133 op/s
Jan 31 07:58:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:19.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:19 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 07:58:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:19 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:58:20
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data']
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:58:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:58:20 compute-0 ceph-mon[74496]: pgmap v1832: 305 pgs: 305 active+clean; 248 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 30 KiB/s wr, 133 op/s
Jan 31 07:58:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.007000166s ======
Jan 31 07:58:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:21.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.007000166s
Jan 31 07:58:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 252 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 357 KiB/s wr, 83 op/s
Jan 31 07:58:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:21.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:22 compute-0 ovn_controller[149457]: 2026-01-31T07:58:22Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:8a:23 10.100.0.3
Jan 31 07:58:22 compute-0 ovn_controller[149457]: 2026-01-31T07:58:22Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:8a:23 10.100.0.3
Jan 31 07:58:22 compute-0 nova_compute[247704]: 2026-01-31 07:58:22.670 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:22 compute-0 ceph-mon[74496]: pgmap v1833: 305 pgs: 305 active+clean; 252 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 357 KiB/s wr, 83 op/s
Jan 31 07:58:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:58:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:23.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:58:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 259 MiB data, 797 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 728 KiB/s wr, 135 op/s
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.246 247708 DEBUG nova.compute.manager [req-1197a98d-3116-46d8-8180-6685c0ed900d req-613d056a-f485-4ba0-9de9-1f057c4d0639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received event network-changed-34ae45cc-d3c0-4212-80e5-816568207748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.247 247708 DEBUG nova.compute.manager [req-1197a98d-3116-46d8-8180-6685c0ed900d req-613d056a-f485-4ba0-9de9-1f057c4d0639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Refreshing instance network info cache due to event network-changed-34ae45cc-d3c0-4212-80e5-816568207748. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.247 247708 DEBUG oslo_concurrency.lockutils [req-1197a98d-3116-46d8-8180-6685c0ed900d req-613d056a-f485-4ba0-9de9-1f057c4d0639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.247 247708 DEBUG oslo_concurrency.lockutils [req-1197a98d-3116-46d8-8180-6685c0ed900d req-613d056a-f485-4ba0-9de9-1f057c4d0639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.247 247708 DEBUG nova.network.neutron [req-1197a98d-3116-46d8-8180-6685c0ed900d req-613d056a-f485-4ba0-9de9-1f057c4d0639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Refreshing network info cache for port 34ae45cc-d3c0-4212-80e5-816568207748 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:58:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:23.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.402 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 4c02576e-848d-4193-88a4-239a9e86e206 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.402 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 6ac7b21e-0a48-4bed-9b36-77bb332f73c5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.402 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.402 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.574 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:23 compute-0 ceph-mon[74496]: pgmap v1834: 305 pgs: 305 active+clean; 259 MiB data, 797 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 728 KiB/s wr, 135 op/s
Jan 31 07:58:23 compute-0 nova_compute[247704]: 2026-01-31 07:58:23.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:58:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2488209533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:24 compute-0 nova_compute[247704]: 2026-01-31 07:58:24.019 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:24 compute-0 nova_compute[247704]: 2026-01-31 07:58:24.024 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:58:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:24.044 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:58:24 compute-0 nova_compute[247704]: 2026-01-31 07:58:24.045 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:24.045 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:58:24 compute-0 nova_compute[247704]: 2026-01-31 07:58:24.053 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:58:24 compute-0 nova_compute[247704]: 2026-01-31 07:58:24.119 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:58:24 compute-0 nova_compute[247704]: 2026-01-31 07:58:24.120 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2488209533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:25.047 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:25.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:25 compute-0 nova_compute[247704]: 2026-01-31 07:58:25.120 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:25 compute-0 nova_compute[247704]: 2026-01-31 07:58:25.151 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:25 compute-0 nova_compute[247704]: 2026-01-31 07:58:25.152 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:58:25 compute-0 nova_compute[247704]: 2026-01-31 07:58:25.152 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 07:58:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 277 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 693 KiB/s rd, 2.1 MiB/s wr, 168 op/s
Jan 31 07:58:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:25.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:25 compute-0 nova_compute[247704]: 2026-01-31 07:58:25.707 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:58:25 compute-0 nova_compute[247704]: 2026-01-31 07:58:25.707 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:58:25 compute-0 nova_compute[247704]: 2026-01-31 07:58:25.708 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 07:58:25 compute-0 nova_compute[247704]: 2026-01-31 07:58:25.709 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4c02576e-848d-4193-88a4-239a9e86e206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:58:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1324782983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:26 compute-0 ceph-mon[74496]: pgmap v1835: 305 pgs: 305 active+clean; 277 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 693 KiB/s rd, 2.1 MiB/s wr, 168 op/s
Jan 31 07:58:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:27.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 204 op/s
Jan 31 07:58:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:27.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:27 compute-0 nova_compute[247704]: 2026-01-31 07:58:27.673 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:28 compute-0 ceph-mon[74496]: pgmap v1836: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 204 op/s
Jan 31 07:58:28 compute-0 nova_compute[247704]: 2026-01-31 07:58:28.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:28 compute-0 podman[302700]: 2026-01-31 07:58:28.929349658 +0000 UTC m=+0.094795380 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:58:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:29.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 234 op/s
Jan 31 07:58:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:58:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:29.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:58:29 compute-0 sudo[302727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:29 compute-0 sudo[302727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-0 sudo[302727]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:29 compute-0 sudo[302752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:29 compute-0 sudo[302752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-0 sudo[302752]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-0 sudo[302777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:29 compute-0 sudo[302777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-0 sudo[302777]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-0 sudo[302802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 07:58:29 compute-0 sudo[302802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-0 sudo[302802]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:58:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:58:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:29 compute-0 sudo[302847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:29 compute-0 sudo[302847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:29 compute-0 sudo[302847]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.138 247708 DEBUG nova.network.neutron [req-1197a98d-3116-46d8-8180-6685c0ed900d req-613d056a-f485-4ba0-9de9-1f057c4d0639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Updated VIF entry in instance network info cache for port 34ae45cc-d3c0-4212-80e5-816568207748. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.138 247708 DEBUG nova.network.neutron [req-1197a98d-3116-46d8-8180-6685c0ed900d req-613d056a-f485-4ba0-9de9-1f057c4d0639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Updating instance_info_cache with network_info: [{"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:58:30 compute-0 sudo[302872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:30 compute-0 sudo[302872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:30 compute-0 sudo[302872]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:30 compute-0 sudo[302897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:30 compute-0 sudo[302897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:30 compute-0 sudo[302897]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:30 compute-0 sudo[302922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:58:30 compute-0 sudo[302922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:30 compute-0 ceph-mon[74496]: pgmap v1837: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 234 op/s
Jan 31 07:58:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:30 compute-0 sudo[302922]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:58:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:58:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:58:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 23e7c708-55c4-4d7a-8e2b-7a0af9763f81 does not exist
Jan 31 07:58:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8124d7a1-a8f5-41c9-9e6a-08007eae4042 does not exist
Jan 31 07:58:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 478ef2f1-d9ab-4b10-b5ef-fe1d3c095c7c does not exist
Jan 31 07:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:58:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:58:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:58:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.712 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.712 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.713 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.713 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.713 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.715 247708 INFO nova.compute.manager [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Terminating instance
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.716 247708 DEBUG nova.compute.manager [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:58:30 compute-0 sudo[302978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:30 compute-0 sudo[302978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:30 compute-0 sudo[302978]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:30 compute-0 kernel: tapab92f411-7c (unregistering): left promiscuous mode
Jan 31 07:58:30 compute-0 NetworkManager[49108]: <info>  [1769846310.7755] device (tapab92f411-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:30 compute-0 ovn_controller[149457]: 2026-01-31T07:58:30Z|00329|binding|INFO|Releasing lport ab92f411-7c5d-40bc-b720-f7cea0eb4596 from this chassis (sb_readonly=1)
Jan 31 07:58:30 compute-0 ovn_controller[149457]: 2026-01-31T07:58:30Z|00330|binding|INFO|Removing iface tapab92f411-7c ovn-installed in OVS
Jan 31 07:58:30 compute-0 ovn_controller[149457]: 2026-01-31T07:58:30Z|00331|if_status|INFO|Dropped 3 log messages in last 73 seconds (most recently, 73 seconds ago) due to excessive rate
Jan 31 07:58:30 compute-0 ovn_controller[149457]: 2026-01-31T07:58:30Z|00332|if_status|INFO|Not setting lport ab92f411-7c5d-40bc-b720-f7cea0eb4596 down as sb is readonly
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.795 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.802 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:30 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000052.scope: Deactivated successfully.
Jan 31 07:58:30 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000052.scope: Consumed 17.536s CPU time.
Jan 31 07:58:30 compute-0 systemd-machined[214448]: Machine qemu-32-instance-00000052 terminated.
Jan 31 07:58:30 compute-0 ovn_controller[149457]: 2026-01-31T07:58:30Z|00333|binding|INFO|Setting lport ab92f411-7c5d-40bc-b720-f7cea0eb4596 down in Southbound
Jan 31 07:58:30 compute-0 sudo[303004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:30 compute-0 sudo[303004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:30 compute-0 sudo[303004]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.895 247708 DEBUG oslo_concurrency.lockutils [req-1197a98d-3116-46d8-8180-6685c0ed900d req-613d056a-f485-4ba0-9de9-1f057c4d0639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6ac7b21e-0a48-4bed-9b36-77bb332f73c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:58:30 compute-0 sudo[303035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:30 compute-0 sudo[303035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:30 compute-0 sudo[303035]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.937 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.949 247708 INFO nova.virt.libvirt.driver [-] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Instance destroyed successfully.
Jan 31 07:58:30 compute-0 nova_compute[247704]: 2026-01-31 07:58:30.950 247708 DEBUG nova.objects.instance [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lazy-loading 'resources' on Instance uuid 4c02576e-848d-4193-88a4-239a9e86e206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:58:30 compute-0 sudo[303060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:58:30 compute-0 sudo[303060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:30.972 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:7a:df 10.100.0.6'], port_security=['fa:16:3e:2a:7a:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4c02576e-848d-4193-88a4-239a9e86e206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1cd91610847a480caeee0ae3cdabf066', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1695a46-1d81-4453-9c52-9917c020bc65', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e38d659-b6a8-4d3d-8a23-b8299c5114da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=ab92f411-7c5d-40bc-b720-f7cea0eb4596) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:58:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:30.973 160028 INFO neutron.agent.ovn.metadata.agent [-] Port ab92f411-7c5d-40bc-b720-f7cea0eb4596 in datapath ca1ed3b2-b27d-427e-a9bd-cc12393752eb unbound from our chassis
Jan 31 07:58:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:30.975 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca1ed3b2-b27d-427e-a9bd-cc12393752eb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:58:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:30.976 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bb55a2d1-10aa-482f-b60c-044e1865fbad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:30.977 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb namespace which is not needed anymore
Jan 31 07:58:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:31 compute-0 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[300257]: [NOTICE]   (300261) : haproxy version is 2.8.14-c23fe91
Jan 31 07:58:31 compute-0 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[300257]: [NOTICE]   (300261) : path to executable is /usr/sbin/haproxy
Jan 31 07:58:31 compute-0 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[300257]: [ALERT]    (300261) : Current worker (300263) exited with code 143 (Terminated)
Jan 31 07:58:31 compute-0 neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb[300257]: [WARNING]  (300261) : All workers exited. Exiting... (0)
Jan 31 07:58:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:31.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:31 compute-0 systemd[1]: libpod-6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8.scope: Deactivated successfully.
Jan 31 07:58:31 compute-0 podman[303112]: 2026-01-31 07:58:31.11276535 +0000 UTC m=+0.048367010 container died 6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.128 247708 DEBUG nova.virt.libvirt.vif [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-352749004',display_name='tempest-ListServerFiltersTestJSON-instance-352749004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-352749004',id=82,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:56:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1cd91610847a480caeee0ae3cdabf066',ramdisk_id='',reservation_id='r-imjxf71g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-334452958',owner_user_name='tempest-ListServerFiltersTestJSON-334452958-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:56:49Z,user_data=None,user_id='f60419a58aea43b9a0b6db7d61d71246',uuid=4c02576e-848d-4193-88a4-239a9e86e206,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.130 247708 DEBUG nova.network.os_vif_util [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converting VIF {"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.131 247708 DEBUG nova.network.os_vif_util [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:7a:df,bridge_name='br-int',has_traffic_filtering=True,id=ab92f411-7c5d-40bc-b720-f7cea0eb4596,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92f411-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.132 247708 DEBUG os_vif [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:7a:df,bridge_name='br-int',has_traffic_filtering=True,id=ab92f411-7c5d-40bc-b720-f7cea0eb4596,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92f411-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.135 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.136 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab92f411-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8-userdata-shm.mount: Deactivated successfully.
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.143 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc35fb10fa0787aeaab3c398522230f8922fbb3f6c9646b4aca798c02b6dfdf-merged.mount: Deactivated successfully.
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.146 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.152 247708 INFO os_vif [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:7a:df,bridge_name='br-int',has_traffic_filtering=True,id=ab92f411-7c5d-40bc-b720-f7cea0eb4596,network=Network(ca1ed3b2-b27d-427e-a9bd-cc12393752eb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab92f411-7c')
Jan 31 07:58:31 compute-0 podman[303112]: 2026-01-31 07:58:31.158446042 +0000 UTC m=+0.094047702 container cleanup 6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 07:58:31 compute-0 systemd[1]: libpod-conmon-6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8.scope: Deactivated successfully.
Jan 31 07:58:31 compute-0 podman[303178]: 2026-01-31 07:58:31.228966111 +0000 UTC m=+0.053015343 container remove 6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.232 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[99ce03b8-86e5-493a-a820-30e5d1d7f1fb]: (4, ('Sat Jan 31 07:58:31 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb (6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8)\n6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8\nSat Jan 31 07:58:31 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb (6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8)\n6a9e062893a0685f6d1fed4389d420ac06ef10d2aaff619afefead598e72d1b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.235 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0aba756f-2a07-47df-9210-988991b5a1a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.236 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca1ed3b2-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.239 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:31 compute-0 kernel: tapca1ed3b2-b0: left promiscuous mode
Jan 31 07:58:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 216 op/s
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.246 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.247 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.249 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[787516c8-2798-4309-9ffe-1dde9c7f9dc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.264 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c88de763-beda-47c5-b260-b307677628f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.265 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[42b9c9ad-f827-40cf-87dd-c61b4833352b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.283 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fb39cb-dbc8-4e3e-8483-4334ab66ddb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645096, 'reachable_time': 38881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303212, 'error': None, 'target': 'ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.286 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca1ed3b2-b27d-427e-a9bd-cc12393752eb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:58:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dca1ed3b2\x2db27d\x2d427e\x2da9bd\x2dcc12393752eb.mount: Deactivated successfully.
Jan 31 07:58:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:31.286 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[effaad0e-51c9-41bc-be7d-1f6ca530881f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.298 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Updating instance_info_cache with network_info: [{"id": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "address": "fa:16:3e:2a:7a:df", "network": {"id": "ca1ed3b2-b27d-427e-a9bd-cc12393752eb", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2069485947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1cd91610847a480caeee0ae3cdabf066", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab92f411-7c", "ovs_interfaceid": "ab92f411-7c5d-40bc-b720-f7cea0eb4596", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:58:31 compute-0 podman[303210]: 2026-01-31 07:58:31.316426771 +0000 UTC m=+0.039384760 container create c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_joliot, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:58:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:31.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:31 compute-0 systemd[1]: Started libpod-conmon-c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3.scope.
Jan 31 07:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:31 compute-0 podman[303210]: 2026-01-31 07:58:31.297151502 +0000 UTC m=+0.020109531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:58:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:58:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:58:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:58:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:58:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:58:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2620656044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:31 compute-0 podman[303210]: 2026-01-31 07:58:31.406322061 +0000 UTC m=+0.129280070 container init c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:31 compute-0 podman[303210]: 2026-01-31 07:58:31.414016969 +0000 UTC m=+0.136974948 container start c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_joliot, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:58:31 compute-0 hungry_joliot[303228]: 167 167
Jan 31 07:58:31 compute-0 systemd[1]: libpod-c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3.scope: Deactivated successfully.
Jan 31 07:58:31 compute-0 conmon[303228]: conmon c9d3f512f63a2f45335e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3.scope/container/memory.events
Jan 31 07:58:31 compute-0 podman[303210]: 2026-01-31 07:58:31.421709576 +0000 UTC m=+0.144667585 container attach c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_joliot, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:31 compute-0 podman[303210]: 2026-01-31 07:58:31.422479995 +0000 UTC m=+0.145437974 container died c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_joliot, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a047e1c6b1ee98e03eba404d25fa6e92f533ae78c102ba0107b4851c9d4994fa-merged.mount: Deactivated successfully.
Jan 31 07:58:31 compute-0 podman[303210]: 2026-01-31 07:58:31.495066144 +0000 UTC m=+0.218024123 container remove c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:31 compute-0 systemd[1]: libpod-conmon-c9d3f512f63a2f45335e98e10e24c48beec592052dd433c520b1bfcbcbd898a3.scope: Deactivated successfully.
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.602 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-4c02576e-848d-4193-88a4-239a9e86e206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.603 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.604 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.604 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.604 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.604 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:58:31 compute-0 podman[303253]: 2026-01-31 07:58:31.649953947 +0000 UTC m=+0.047394455 container create 6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:31 compute-0 systemd[1]: Started libpod-conmon-6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf.scope.
Jan 31 07:58:31 compute-0 podman[303253]: 2026-01-31 07:58:31.629397836 +0000 UTC m=+0.026838364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:58:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3395a3283fd05cfc459d3820a7d9d1b6d11fcdb5d07fb3a27f12138aaca7534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3395a3283fd05cfc459d3820a7d9d1b6d11fcdb5d07fb3a27f12138aaca7534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3395a3283fd05cfc459d3820a7d9d1b6d11fcdb5d07fb3a27f12138aaca7534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3395a3283fd05cfc459d3820a7d9d1b6d11fcdb5d07fb3a27f12138aaca7534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3395a3283fd05cfc459d3820a7d9d1b6d11fcdb5d07fb3a27f12138aaca7534/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:31 compute-0 podman[303253]: 2026-01-31 07:58:31.749089512 +0000 UTC m=+0.146530070 container init 6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_turing, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:31 compute-0 podman[303253]: 2026-01-31 07:58:31.757103387 +0000 UTC m=+0.154543895 container start 6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:31 compute-0 podman[303253]: 2026-01-31 07:58:31.761997906 +0000 UTC m=+0.159438454 container attach 6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_turing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.800 247708 INFO nova.virt.libvirt.driver [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Deleting instance files /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206_del
Jan 31 07:58:31 compute-0 nova_compute[247704]: 2026-01-31 07:58:31.802 247708 INFO nova.virt.libvirt.driver [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Deletion of /var/lib/nova/instances/4c02576e-848d-4193-88a4-239a9e86e206_del complete
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.003 247708 DEBUG nova.compute.manager [req-8c77adc5-00a0-4ef5-bc76-9482a7c2be8e req-1aee3d16-9981-45c5-8ccb-6be562800f23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received event network-vif-unplugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.004 247708 DEBUG oslo_concurrency.lockutils [req-8c77adc5-00a0-4ef5-bc76-9482a7c2be8e req-1aee3d16-9981-45c5-8ccb-6be562800f23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.004 247708 DEBUG oslo_concurrency.lockutils [req-8c77adc5-00a0-4ef5-bc76-9482a7c2be8e req-1aee3d16-9981-45c5-8ccb-6be562800f23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.005 247708 DEBUG oslo_concurrency.lockutils [req-8c77adc5-00a0-4ef5-bc76-9482a7c2be8e req-1aee3d16-9981-45c5-8ccb-6be562800f23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.005 247708 DEBUG nova.compute.manager [req-8c77adc5-00a0-4ef5-bc76-9482a7c2be8e req-1aee3d16-9981-45c5-8ccb-6be562800f23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] No waiting events found dispatching network-vif-unplugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.006 247708 DEBUG nova.compute.manager [req-8c77adc5-00a0-4ef5-bc76-9482a7c2be8e req-1aee3d16-9981-45c5-8ccb-6be562800f23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received event network-vif-unplugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.203 247708 INFO nova.compute.manager [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Took 1.49 seconds to destroy the instance on the hypervisor.
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.204 247708 DEBUG oslo.service.loopingcall [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.204 247708 DEBUG nova.compute.manager [-] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:58:32 compute-0 nova_compute[247704]: 2026-01-31 07:58:32.204 247708 DEBUG nova.network.neutron [-] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:58:32 compute-0 ceph-mon[74496]: pgmap v1838: 305 pgs: 305 active+clean; 281 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 216 op/s
Jan 31 07:58:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2638932588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:32 compute-0 xenodochial_turing[303271]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:58:32 compute-0 xenodochial_turing[303271]: --> relative data size: 1.0
Jan 31 07:58:32 compute-0 xenodochial_turing[303271]: --> All data devices are unavailable
Jan 31 07:58:32 compute-0 systemd[1]: libpod-6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf.scope: Deactivated successfully.
Jan 31 07:58:32 compute-0 podman[303253]: 2026-01-31 07:58:32.615868318 +0000 UTC m=+1.013308866 container died 6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_turing, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:58:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3395a3283fd05cfc459d3820a7d9d1b6d11fcdb5d07fb3a27f12138aaca7534-merged.mount: Deactivated successfully.
Jan 31 07:58:32 compute-0 podman[303253]: 2026-01-31 07:58:32.672250643 +0000 UTC m=+1.069691151 container remove 6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 07:58:32 compute-0 systemd[1]: libpod-conmon-6271877388d5bc6fc809c0e77c399f09a5a6b19db9df819282ca5ae556a035bf.scope: Deactivated successfully.
Jan 31 07:58:32 compute-0 sudo[303060]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:32 compute-0 sudo[303298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:32 compute-0 sudo[303298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:32 compute-0 sudo[303298]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:32 compute-0 sudo[303323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:32 compute-0 sudo[303323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:32 compute-0 sudo[303323]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:32 compute-0 sudo[303348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:32 compute-0 sudo[303348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:32 compute-0 sudo[303348]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:32 compute-0 sudo[303373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:58:32 compute-0 sudo[303373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:33.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 257 MiB data, 802 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 1.8 MiB/s wr, 220 op/s
Jan 31 07:58:33 compute-0 podman[303437]: 2026-01-31 07:58:33.250175891 +0000 UTC m=+0.038095389 container create a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 07:58:33 compute-0 systemd[1]: Started libpod-conmon-a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60.scope.
Jan 31 07:58:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:33 compute-0 podman[303437]: 2026-01-31 07:58:33.32402755 +0000 UTC m=+0.111947048 container init a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 07:58:33 compute-0 podman[303437]: 2026-01-31 07:58:33.233035274 +0000 UTC m=+0.020954792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:58:33 compute-0 podman[303437]: 2026-01-31 07:58:33.331157614 +0000 UTC m=+0.119077112 container start a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 07:58:33 compute-0 podman[303437]: 2026-01-31 07:58:33.334629529 +0000 UTC m=+0.122549027 container attach a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 07:58:33 compute-0 heuristic_chebyshev[303453]: 167 167
Jan 31 07:58:33 compute-0 systemd[1]: libpod-a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60.scope: Deactivated successfully.
Jan 31 07:58:33 compute-0 podman[303437]: 2026-01-31 07:58:33.335986862 +0000 UTC m=+0.123906360 container died a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:33.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e90ebac1b417736268c11123da77be0eeff806546e35e800411b1cb1e88c3f6c-merged.mount: Deactivated successfully.
Jan 31 07:58:33 compute-0 podman[303437]: 2026-01-31 07:58:33.378853616 +0000 UTC m=+0.166773154 container remove a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chebyshev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 07:58:33 compute-0 systemd[1]: libpod-conmon-a476ac92ff05c1c303b03c0b78b8f4998cb3bc7112e7b3eb00493dcf05eeef60.scope: Deactivated successfully.
Jan 31 07:58:33 compute-0 podman[303478]: 2026-01-31 07:58:33.523358756 +0000 UTC m=+0.052790616 container create aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 07:58:33 compute-0 systemd[1]: Started libpod-conmon-aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837.scope.
Jan 31 07:58:33 compute-0 podman[303478]: 2026-01-31 07:58:33.493999201 +0000 UTC m=+0.023431101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:58:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcbdac6f3e28dd0293a0e80b2b7592299f91f1114c28bfec74b4c14289e4007d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcbdac6f3e28dd0293a0e80b2b7592299f91f1114c28bfec74b4c14289e4007d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcbdac6f3e28dd0293a0e80b2b7592299f91f1114c28bfec74b4c14289e4007d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcbdac6f3e28dd0293a0e80b2b7592299f91f1114c28bfec74b4c14289e4007d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:33 compute-0 podman[303478]: 2026-01-31 07:58:33.613319809 +0000 UTC m=+0.142751649 container init aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:33 compute-0 podman[303478]: 2026-01-31 07:58:33.62284771 +0000 UTC m=+0.152279530 container start aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 07:58:33 compute-0 podman[303478]: 2026-01-31 07:58:33.629220336 +0000 UTC m=+0.158652176 container attach aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:33 compute-0 nova_compute[247704]: 2026-01-31 07:58:33.825 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 gifted_keller[303494]: {
Jan 31 07:58:34 compute-0 gifted_keller[303494]:     "0": [
Jan 31 07:58:34 compute-0 gifted_keller[303494]:         {
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "devices": [
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "/dev/loop3"
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             ],
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "lv_name": "ceph_lv0",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "lv_size": "7511998464",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "name": "ceph_lv0",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "tags": {
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.cluster_name": "ceph",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.crush_device_class": "",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.encrypted": "0",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.osd_id": "0",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.type": "block",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:                 "ceph.vdo": "0"
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             },
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "type": "block",
Jan 31 07:58:34 compute-0 gifted_keller[303494]:             "vg_name": "ceph_vg0"
Jan 31 07:58:34 compute-0 gifted_keller[303494]:         }
Jan 31 07:58:34 compute-0 gifted_keller[303494]:     ]
Jan 31 07:58:34 compute-0 gifted_keller[303494]: }
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.328 247708 DEBUG nova.compute.manager [req-b0ebb743-3454-4053-81dc-2dc25b55d4ae req-c4bd4c57-0636-4b62-ab07-7dbab232d233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received event network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.328 247708 DEBUG oslo_concurrency.lockutils [req-b0ebb743-3454-4053-81dc-2dc25b55d4ae req-c4bd4c57-0636-4b62-ab07-7dbab232d233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4c02576e-848d-4193-88a4-239a9e86e206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.329 247708 DEBUG oslo_concurrency.lockutils [req-b0ebb743-3454-4053-81dc-2dc25b55d4ae req-c4bd4c57-0636-4b62-ab07-7dbab232d233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.329 247708 DEBUG oslo_concurrency.lockutils [req-b0ebb743-3454-4053-81dc-2dc25b55d4ae req-c4bd4c57-0636-4b62-ab07-7dbab232d233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.329 247708 DEBUG nova.compute.manager [req-b0ebb743-3454-4053-81dc-2dc25b55d4ae req-c4bd4c57-0636-4b62-ab07-7dbab232d233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] No waiting events found dispatching network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.329 247708 WARNING nova.compute.manager [req-b0ebb743-3454-4053-81dc-2dc25b55d4ae req-c4bd4c57-0636-4b62-ab07-7dbab232d233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received unexpected event network-vif-plugged-ab92f411-7c5d-40bc-b720-f7cea0eb4596 for instance with vm_state active and task_state deleting.
Jan 31 07:58:34 compute-0 systemd[1]: libpod-aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837.scope: Deactivated successfully.
Jan 31 07:58:34 compute-0 podman[303478]: 2026-01-31 07:58:34.35020653 +0000 UTC m=+0.879638360 container died aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 07:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcbdac6f3e28dd0293a0e80b2b7592299f91f1114c28bfec74b4c14289e4007d-merged.mount: Deactivated successfully.
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.394 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.395 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.395 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.395 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.395 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.397 247708 INFO nova.compute.manager [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Terminating instance
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.398 247708 DEBUG nova.compute.manager [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 07:58:34 compute-0 podman[303478]: 2026-01-31 07:58:34.404071852 +0000 UTC m=+0.933503672 container remove aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 07:58:34 compute-0 systemd[1]: libpod-conmon-aa98b5c6f7f393c97d547217c9c190fe6dc3e81aec126b16ccc19f64669a2837.scope: Deactivated successfully.
Jan 31 07:58:34 compute-0 ceph-mon[74496]: pgmap v1839: 305 pgs: 305 active+clean; 257 MiB data, 802 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 1.8 MiB/s wr, 220 op/s
Jan 31 07:58:34 compute-0 sudo[303373]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:34 compute-0 kernel: tap34ae45cc-d3 (unregistering): left promiscuous mode
Jan 31 07:58:34 compute-0 NetworkManager[49108]: <info>  [1769846314.4540] device (tap34ae45cc-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 07:58:34 compute-0 ovn_controller[149457]: 2026-01-31T07:58:34Z|00334|binding|INFO|Releasing lport 34ae45cc-d3c0-4212-80e5-816568207748 from this chassis (sb_readonly=0)
Jan 31 07:58:34 compute-0 ovn_controller[149457]: 2026-01-31T07:58:34Z|00335|binding|INFO|Setting lport 34ae45cc-d3c0-4212-80e5-816568207748 down in Southbound
Jan 31 07:58:34 compute-0 ovn_controller[149457]: 2026-01-31T07:58:34Z|00336|binding|INFO|Removing iface tap34ae45cc-d3 ovn-installed in OVS
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.461 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.463 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.471 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:34 compute-0 sudo[303517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:34 compute-0 sudo[303517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:34 compute-0 sudo[303517]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:34 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 31 07:58:34 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000055.scope: Consumed 14.107s CPU time.
Jan 31 07:58:34 compute-0 systemd-machined[214448]: Machine qemu-34-instance-00000055 terminated.
Jan 31 07:58:34 compute-0 sudo[303550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:58:34 compute-0 sudo[303550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:34 compute-0 sudo[303550]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.560 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:8a:23 10.100.0.3'], port_security=['fa:16:3e:55:8a:23 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6ac7b21e-0a48-4bed-9b36-77bb332f73c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '777e9a4d5b284c9eb16aa35161bd7517', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e7a0247a-7c51-4201-85e8-6b859dee42ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71ff9f10-c15d-4710-bf7b-1ae980d3e2ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=34ae45cc-d3c0-4212-80e5-816568207748) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.563 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 34ae45cc-d3c0-4212-80e5-816568207748 in datapath 5ffa4b98-ff53-433c-a228-0a10f10eccfb unbound from our chassis
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.564 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ffa4b98-ff53-433c-a228-0a10f10eccfb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.565 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c22394f3-4e35-43ce-84cd-3619554b0582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.565 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb namespace which is not needed anymore
Jan 31 07:58:34 compute-0 sudo[303575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:34 compute-0 sudo[303575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:34 compute-0 sudo[303575]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:34 compute-0 NetworkManager[49108]: <info>  [1769846314.6133] manager: (tap34ae45cc-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Jan 31 07:58:34 compute-0 kernel: tap34ae45cc-d3: entered promiscuous mode
Jan 31 07:58:34 compute-0 systemd-udevd[303543]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:58:34 compute-0 kernel: tap34ae45cc-d3 (unregistering): left promiscuous mode
Jan 31 07:58:34 compute-0 ovn_controller[149457]: 2026-01-31T07:58:34Z|00337|binding|INFO|Claiming lport 34ae45cc-d3c0-4212-80e5-816568207748 for this chassis.
Jan 31 07:58:34 compute-0 ovn_controller[149457]: 2026-01-31T07:58:34Z|00338|binding|INFO|34ae45cc-d3c0-4212-80e5-816568207748: Claiming fa:16:3e:55:8a:23 10.100.0.3
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.615 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.633 247708 INFO nova.virt.libvirt.driver [-] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Instance destroyed successfully.
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.634 247708 DEBUG nova.objects.instance [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lazy-loading 'resources' on Instance uuid 6ac7b21e-0a48-4bed-9b36-77bb332f73c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.637 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 sudo[303613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:58:34 compute-0 sudo[303613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:34 compute-0 neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb[302556]: [NOTICE]   (302560) : haproxy version is 2.8.14-c23fe91
Jan 31 07:58:34 compute-0 neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb[302556]: [NOTICE]   (302560) : path to executable is /usr/sbin/haproxy
Jan 31 07:58:34 compute-0 neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb[302556]: [WARNING]  (302560) : Exiting Master process...
Jan 31 07:58:34 compute-0 neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb[302556]: [ALERT]    (302560) : Current worker (302563) exited with code 143 (Terminated)
Jan 31 07:58:34 compute-0 neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb[302556]: [WARNING]  (302560) : All workers exited. Exiting... (0)
Jan 31 07:58:34 compute-0 systemd[1]: libpod-a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663.scope: Deactivated successfully.
Jan 31 07:58:34 compute-0 podman[303641]: 2026-01-31 07:58:34.699165451 +0000 UTC m=+0.048849091 container died a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 07:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663-userdata-shm.mount: Deactivated successfully.
Jan 31 07:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dce5d28a67c6e4504fb0167e8f2db334f52958dd436de7b2051f06a50442b2e-merged.mount: Deactivated successfully.
Jan 31 07:58:34 compute-0 podman[303641]: 2026-01-31 07:58:34.741872352 +0000 UTC m=+0.091555982 container cleanup a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 07:58:34 compute-0 systemd[1]: libpod-conmon-a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663.scope: Deactivated successfully.
Jan 31 07:58:34 compute-0 ovn_controller[149457]: 2026-01-31T07:58:34Z|00339|binding|INFO|Releasing lport 0421d305-ec23-4b5f-a0a0-7a3f3b67f8ff from this chassis (sb_readonly=0)
Jan 31 07:58:34 compute-0 ovn_controller[149457]: 2026-01-31T07:58:34Z|00340|binding|INFO|Releasing lport 34ae45cc-d3c0-4212-80e5-816568207748 from this chassis (sb_readonly=0)
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.759 247708 DEBUG nova.virt.libvirt.vif [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:57:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-744782285',display_name='tempest-ServersTestJSON-server-744782285',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-744782285',id=85,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNS6+yz/blQrIBEUDpuBXFU9VtBzqT5wj/P1ScszZJQaO1UIjFvsF0sJ4sEn7BTHSHnqc3VsUf3gN0JlVO2GOJ/7+LaLJlH/oN62/lOZ7X0Lnyhl0uma3T0wqnQ4mYKu3g==',key_name='tempest-keypair-487746000',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:58:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='777e9a4d5b284c9eb16aa35161bd7517',ramdisk_id='',reservation_id='r-sudnzhdx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-54862618',owner_user_name='tempest-ServersTestJSON-54862618-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:58:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f64f1ef9370e4177b114d6c71857656d',uuid=6ac7b21e-0a48-4bed-9b36-77bb332f73c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.760 247708 DEBUG nova.network.os_vif_util [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Converting VIF {"id": "34ae45cc-d3c0-4212-80e5-816568207748", "address": "fa:16:3e:55:8a:23", "network": {"id": "5ffa4b98-ff53-433c-a228-0a10f10eccfb", "bridge": "br-int", "label": "tempest-ServersTestJSON-76730771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "777e9a4d5b284c9eb16aa35161bd7517", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34ae45cc-d3", "ovs_interfaceid": "34ae45cc-d3c0-4212-80e5-816568207748", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.761 247708 DEBUG nova.network.os_vif_util [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:8a:23,bridge_name='br-int',has_traffic_filtering=True,id=34ae45cc-d3c0-4212-80e5-816568207748,network=Network(5ffa4b98-ff53-433c-a228-0a10f10eccfb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34ae45cc-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.761 247708 DEBUG os_vif [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:8a:23,bridge_name='br-int',has_traffic_filtering=True,id=34ae45cc-d3c0-4212-80e5-816568207748,network=Network(5ffa4b98-ff53-433c-a228-0a10f10eccfb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34ae45cc-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.763 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.764 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34ae45cc-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.770 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:8a:23 10.100.0.3'], port_security=['fa:16:3e:55:8a:23 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6ac7b21e-0a48-4bed-9b36-77bb332f73c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '777e9a4d5b284c9eb16aa35161bd7517', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e7a0247a-7c51-4201-85e8-6b859dee42ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71ff9f10-c15d-4710-bf7b-1ae980d3e2ed, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=34ae45cc-d3c0-4212-80e5-816568207748) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.821 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.824 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.828 247708 INFO os_vif [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:8a:23,bridge_name='br-int',has_traffic_filtering=True,id=34ae45cc-d3c0-4212-80e5-816568207748,network=Network(5ffa4b98-ff53-433c-a228-0a10f10eccfb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34ae45cc-d3')
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.844 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:8a:23 10.100.0.3'], port_security=['fa:16:3e:55:8a:23 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6ac7b21e-0a48-4bed-9b36-77bb332f73c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '777e9a4d5b284c9eb16aa35161bd7517', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e7a0247a-7c51-4201-85e8-6b859dee42ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=71ff9f10-c15d-4710-bf7b-1ae980d3e2ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=34ae45cc-d3c0-4212-80e5-816568207748) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:58:34 compute-0 ovn_controller[149457]: 2026-01-31T07:58:34Z|00341|binding|INFO|Releasing lport 0421d305-ec23-4b5f-a0a0-7a3f3b67f8ff from this chassis (sb_readonly=0)
Jan 31 07:58:34 compute-0 podman[303677]: 2026-01-31 07:58:34.854027374 +0000 UTC m=+0.083082665 container remove a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.863 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[44284d3c-3263-4c51-a25a-2ab9a3034ce2]: (4, ('Sat Jan 31 07:58:34 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb (a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663)\na487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663\nSat Jan 31 07:58:34 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb (a487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663)\na487b5260da31865fcfbdb07befd9e96b4ad6b5e4bbdfd97f2318317b9a5f663\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.865 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[38c161bc-b0c7-4883-b6a5-ed6d99ebc2ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.866 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ffa4b98-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.868 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 kernel: tap5ffa4b98-f0: left promiscuous mode
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.870 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.874 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.875 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[24e62a38-1ca1-4341-ace7-cb6190adb545]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 nova_compute[247704]: 2026-01-31 07:58:34.877 247708 DEBUG nova.network.neutron [-] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.893 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6cacc9-5955-498b-829b-c4bcaa8e121e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.895 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0228668a-d660-41a2-a02a-8e0e85d698cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.907 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[17a37b1f-34bc-48a9-b774-fa0c1b862241]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652928, 'reachable_time': 36670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303734, 'error': None, 'target': 'ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.910 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ffa4b98-ff53-433c-a228-0a10f10eccfb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.911 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[d6673fd1-c2a0-4c35-94bf-9d08a47fb2ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.911 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 34ae45cc-d3c0-4212-80e5-816568207748 in datapath 5ffa4b98-ff53-433c-a228-0a10f10eccfb unbound from our chassis
Jan 31 07:58:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d5ffa4b98\x2dff53\x2d433c\x2da228\x2d0a10f10eccfb.mount: Deactivated successfully.
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.912 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ffa4b98-ff53-433c-a228-0a10f10eccfb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.913 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8b0c73-70ec-4220-9a56-f15bfbb5bd24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.913 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 34ae45cc-d3c0-4212-80e5-816568207748 in datapath 5ffa4b98-ff53-433c-a228-0a10f10eccfb unbound from our chassis
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.914 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ffa4b98-ff53-433c-a228-0a10f10eccfb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 07:58:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:58:34.914 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9edab5-70ff-40fa-81c2-04429621b022]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:58:34 compute-0 podman[303748]: 2026-01-31 07:58:34.986874041 +0000 UTC m=+0.032065142 container create 20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:35 compute-0 systemd[1]: Started libpod-conmon-20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2.scope.
Jan 31 07:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:35 compute-0 podman[303748]: 2026-01-31 07:58:35.06606475 +0000 UTC m=+0.111255871 container init 20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:35 compute-0 podman[303748]: 2026-01-31 07:58:34.972789628 +0000 UTC m=+0.017980749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:58:35 compute-0 podman[303748]: 2026-01-31 07:58:35.07265443 +0000 UTC m=+0.117845531 container start 20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 07:58:35 compute-0 serene_pascal[303765]: 167 167
Jan 31 07:58:35 compute-0 systemd[1]: libpod-20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2.scope: Deactivated successfully.
Jan 31 07:58:35 compute-0 podman[303748]: 2026-01-31 07:58:35.07962226 +0000 UTC m=+0.124813371 container attach 20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pascal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:58:35 compute-0 podman[303748]: 2026-01-31 07:58:35.08003135 +0000 UTC m=+0.125222451 container died 20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pascal, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:58:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:35.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:35 compute-0 podman[303748]: 2026-01-31 07:58:35.125744144 +0000 UTC m=+0.170935245 container remove 20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pascal, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:58:35 compute-0 systemd[1]: libpod-conmon-20dbd10b80e7f2bbae8a6cdea97d646cf0ed5da688d011ca00e7ec0ea18150d2.scope: Deactivated successfully.
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00576504484226689 of space, bias 1.0, pg target 1.729513452680067 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 07:58:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 222 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 159 KiB/s rd, 1.4 MiB/s wr, 162 op/s
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.252 247708 INFO nova.compute.manager [-] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Took 3.05 seconds to deallocate network for instance.
Jan 31 07:58:35 compute-0 podman[303790]: 2026-01-31 07:58:35.268308227 +0000 UTC m=+0.051057475 container create 840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lalande, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.294 247708 INFO nova.virt.libvirt.driver [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Deleting instance files /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5_del
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.295 247708 INFO nova.virt.libvirt.driver [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Deletion of /var/lib/nova/instances/6ac7b21e-0a48-4bed-9b36-77bb332f73c5_del complete
Jan 31 07:58:35 compute-0 systemd[1]: Started libpod-conmon-840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83.scope.
Jan 31 07:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734b40ce2d2e92d21fe3ccaf022a26453d269e95045f9a4cd07cb2e5fe56bd3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734b40ce2d2e92d21fe3ccaf022a26453d269e95045f9a4cd07cb2e5fe56bd3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:35 compute-0 podman[303790]: 2026-01-31 07:58:35.248265538 +0000 UTC m=+0.031014806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734b40ce2d2e92d21fe3ccaf022a26453d269e95045f9a4cd07cb2e5fe56bd3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/734b40ce2d2e92d21fe3ccaf022a26453d269e95045f9a4cd07cb2e5fe56bd3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:58:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:35.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:35 compute-0 podman[303790]: 2026-01-31 07:58:35.362295857 +0000 UTC m=+0.145045125 container init 840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lalande, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:58:35 compute-0 podman[303790]: 2026-01-31 07:58:35.372467824 +0000 UTC m=+0.155217072 container start 840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 07:58:35 compute-0 podman[303790]: 2026-01-31 07:58:35.376129254 +0000 UTC m=+0.158878492 container attach 840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lalande, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c70a855daad72bc7e103cbe515e83d26e9a4ad8debf4dd14c4ea748231517f34-merged.mount: Deactivated successfully.
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.786 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.788 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.919 247708 DEBUG oslo_concurrency.processutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.966 247708 INFO nova.compute.manager [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Took 1.57 seconds to destroy the instance on the hypervisor.
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.967 247708 DEBUG oslo.service.loopingcall [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.968 247708 DEBUG nova.compute.manager [-] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 07:58:35 compute-0 nova_compute[247704]: 2026-01-31 07:58:35.969 247708 DEBUG nova.network.neutron [-] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.082 247708 DEBUG nova.compute.manager [req-f701cde1-ae9d-4060-ac01-08970202fd31 req-9b3f8864-55bb-4f37-a419-12a415938b80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received event network-vif-unplugged-34ae45cc-d3c0-4212-80e5-816568207748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.083 247708 DEBUG oslo_concurrency.lockutils [req-f701cde1-ae9d-4060-ac01-08970202fd31 req-9b3f8864-55bb-4f37-a419-12a415938b80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.083 247708 DEBUG oslo_concurrency.lockutils [req-f701cde1-ae9d-4060-ac01-08970202fd31 req-9b3f8864-55bb-4f37-a419-12a415938b80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.083 247708 DEBUG oslo_concurrency.lockutils [req-f701cde1-ae9d-4060-ac01-08970202fd31 req-9b3f8864-55bb-4f37-a419-12a415938b80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.084 247708 DEBUG nova.compute.manager [req-f701cde1-ae9d-4060-ac01-08970202fd31 req-9b3f8864-55bb-4f37-a419-12a415938b80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] No waiting events found dispatching network-vif-unplugged-34ae45cc-d3c0-4212-80e5-816568207748 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.084 247708 DEBUG nova.compute.manager [req-f701cde1-ae9d-4060-ac01-08970202fd31 req-9b3f8864-55bb-4f37-a419-12a415938b80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received event network-vif-unplugged-34ae45cc-d3c0-4212-80e5-816568207748 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 07:58:36 compute-0 priceless_lalande[303806]: {
Jan 31 07:58:36 compute-0 priceless_lalande[303806]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:58:36 compute-0 priceless_lalande[303806]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:58:36 compute-0 priceless_lalande[303806]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:58:36 compute-0 priceless_lalande[303806]:         "osd_id": 0,
Jan 31 07:58:36 compute-0 priceless_lalande[303806]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:58:36 compute-0 priceless_lalande[303806]:         "type": "bluestore"
Jan 31 07:58:36 compute-0 priceless_lalande[303806]:     }
Jan 31 07:58:36 compute-0 priceless_lalande[303806]: }
Jan 31 07:58:36 compute-0 systemd[1]: libpod-840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83.scope: Deactivated successfully.
Jan 31 07:58:36 compute-0 podman[303790]: 2026-01-31 07:58:36.17022822 +0000 UTC m=+0.952977518 container died 840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 07:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-734b40ce2d2e92d21fe3ccaf022a26453d269e95045f9a4cd07cb2e5fe56bd3e-merged.mount: Deactivated successfully.
Jan 31 07:58:36 compute-0 podman[303790]: 2026-01-31 07:58:36.224461121 +0000 UTC m=+1.007210369 container remove 840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_lalande, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 07:58:36 compute-0 systemd[1]: libpod-conmon-840b684e0be62557ea71596da3db71dff9e9a159f0167c0aa364d9679528da83.scope: Deactivated successfully.
Jan 31 07:58:36 compute-0 sudo[303613]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:58:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:58:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a7ed5ce1-e58b-493d-9d4b-2bb1413248ca does not exist
Jan 31 07:58:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 175813da-fc9c-48e9-b695-8427f19571da does not exist
Jan 31 07:58:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a9a59832-4304-43ef-a5e9-9d07d33e4b98 does not exist
Jan 31 07:58:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:58:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854533963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:36 compute-0 sudo[303861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:36 compute-0 sudo[303861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:36 compute-0 sudo[303861]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.351 247708 DEBUG oslo_concurrency.processutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.358 247708 DEBUG nova.compute.provider_tree [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:58:36 compute-0 sudo[303888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:58:36 compute-0 sudo[303888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:36 compute-0 sudo[303888]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.421 247708 DEBUG nova.scheduler.client.report [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:58:36 compute-0 ceph-mon[74496]: pgmap v1840: 305 pgs: 305 active+clean; 222 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 159 KiB/s rd, 1.4 MiB/s wr, 162 op/s
Jan 31 07:58:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:58:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1854533963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.626 247708 DEBUG nova.compute.manager [req-4e48ef88-85e9-4275-ac30-c67e21abe8fa req-23827017-68b9-4cfa-8940-ace120aafee4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Received event network-vif-deleted-ab92f411-7c5d-40bc-b720-f7cea0eb4596 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.744 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:36 compute-0 nova_compute[247704]: 2026-01-31 07:58:36.871 247708 INFO nova.scheduler.client.report [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Deleted allocations for instance 4c02576e-848d-4193-88a4-239a9e86e206
Jan 31 07:58:37 compute-0 nova_compute[247704]: 2026-01-31 07:58:37.047 247708 DEBUG oslo_concurrency.lockutils [None req-07ab8c67-a1ee-4c5f-893a-0af986aeddca f60419a58aea43b9a0b6db7d61d71246 1cd91610847a480caeee0ae3cdabf066 - - default default] Lock "4c02576e-848d-4193-88a4-239a9e86e206" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:37.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 190 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 18 KiB/s wr, 119 op/s
Jan 31 07:58:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:37.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:38 compute-0 ceph-mon[74496]: pgmap v1841: 305 pgs: 305 active+clean; 190 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 18 KiB/s wr, 119 op/s
Jan 31 07:58:38 compute-0 sudo[303914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:38 compute-0 sudo[303914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:38 compute-0 sudo[303914]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:38 compute-0 nova_compute[247704]: 2026-01-31 07:58:38.699 247708 DEBUG nova.compute.manager [req-fadef9aa-611f-424d-9b9c-e98917e6b665 req-8b1cd95c-19d6-4dbe-8758-edcf5094d85d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received event network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:38 compute-0 nova_compute[247704]: 2026-01-31 07:58:38.699 247708 DEBUG oslo_concurrency.lockutils [req-fadef9aa-611f-424d-9b9c-e98917e6b665 req-8b1cd95c-19d6-4dbe-8758-edcf5094d85d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:38 compute-0 nova_compute[247704]: 2026-01-31 07:58:38.700 247708 DEBUG oslo_concurrency.lockutils [req-fadef9aa-611f-424d-9b9c-e98917e6b665 req-8b1cd95c-19d6-4dbe-8758-edcf5094d85d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:38 compute-0 nova_compute[247704]: 2026-01-31 07:58:38.700 247708 DEBUG oslo_concurrency.lockutils [req-fadef9aa-611f-424d-9b9c-e98917e6b665 req-8b1cd95c-19d6-4dbe-8758-edcf5094d85d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:38 compute-0 nova_compute[247704]: 2026-01-31 07:58:38.700 247708 DEBUG nova.compute.manager [req-fadef9aa-611f-424d-9b9c-e98917e6b665 req-8b1cd95c-19d6-4dbe-8758-edcf5094d85d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] No waiting events found dispatching network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:58:38 compute-0 nova_compute[247704]: 2026-01-31 07:58:38.701 247708 WARNING nova.compute.manager [req-fadef9aa-611f-424d-9b9c-e98917e6b665 req-8b1cd95c-19d6-4dbe-8758-edcf5094d85d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received unexpected event network-vif-plugged-34ae45cc-d3c0-4212-80e5-816568207748 for instance with vm_state active and task_state deleting.
Jan 31 07:58:38 compute-0 sudo[303939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:38 compute-0 sudo[303939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:38 compute-0 sudo[303939]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:38 compute-0 nova_compute[247704]: 2026-01-31 07:58:38.827 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:58:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:39.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:58:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 122 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 17 KiB/s wr, 88 op/s
Jan 31 07:58:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:39.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:39 compute-0 nova_compute[247704]: 2026-01-31 07:58:39.823 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.018 247708 DEBUG nova.network.neutron [-] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.089 247708 INFO nova.compute.manager [-] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Took 4.12 seconds to deallocate network for instance.
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.131 247708 DEBUG nova.compute.manager [req-bb779544-ef6e-480e-bbec-491a01d8ede5 req-a6fc5f3e-a692-48d2-bd61-c138ac27e298 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Received event network-vif-deleted-34ae45cc-d3c0-4212-80e5-816568207748 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.243 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.244 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.309 247708 DEBUG oslo_concurrency.processutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:58:40 compute-0 ceph-mon[74496]: pgmap v1842: 305 pgs: 305 active+clean; 122 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 17 KiB/s wr, 88 op/s
Jan 31 07:58:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:58:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1933574814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.748 247708 DEBUG oslo_concurrency.processutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.755 247708 DEBUG nova.compute.provider_tree [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:58:40 compute-0 nova_compute[247704]: 2026-01-31 07:58:40.976 247708 DEBUG nova.scheduler.client.report [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:58:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:41.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:41 compute-0 nova_compute[247704]: 2026-01-31 07:58:41.141 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 97 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 5.2 KiB/s wr, 66 op/s
Jan 31 07:58:41 compute-0 nova_compute[247704]: 2026-01-31 07:58:41.351 247708 INFO nova.scheduler.client.report [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Deleted allocations for instance 6ac7b21e-0a48-4bed-9b36-77bb332f73c5
Jan 31 07:58:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:41.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:41 compute-0 nova_compute[247704]: 2026-01-31 07:58:41.484 247708 DEBUG oslo_concurrency.lockutils [None req-724edb1a-3a77-44cc-8888-f53777cc09a0 f64f1ef9370e4177b114d6c71857656d 777e9a4d5b284c9eb16aa35161bd7517 - - default default] Lock "6ac7b21e-0a48-4bed-9b36-77bb332f73c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:58:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1933574814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:42 compute-0 ceph-mon[74496]: pgmap v1843: 305 pgs: 305 active+clean; 97 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 5.2 KiB/s wr, 66 op/s
Jan 31 07:58:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1005632506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:58:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:43.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 78 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 5.5 KiB/s wr, 70 op/s
Jan 31 07:58:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:43.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:43 compute-0 ceph-mon[74496]: pgmap v1844: 305 pgs: 305 active+clean; 78 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 5.5 KiB/s wr, 70 op/s
Jan 31 07:58:43 compute-0 nova_compute[247704]: 2026-01-31 07:58:43.850 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:44 compute-0 nova_compute[247704]: 2026-01-31 07:58:44.825 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:44 compute-0 podman[303989]: 2026-01-31 07:58:44.9175287 +0000 UTC m=+0.075474250 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 07:58:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/388839460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 07:58:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/388839460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 07:58:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:45.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 5.8 KiB/s wr, 67 op/s
Jan 31 07:58:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:45.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:45 compute-0 nova_compute[247704]: 2026-01-31 07:58:45.946 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846310.9456306, 4c02576e-848d-4193-88a4-239a9e86e206 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:58:45 compute-0 nova_compute[247704]: 2026-01-31 07:58:45.946 247708 INFO nova.compute.manager [-] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] VM Stopped (Lifecycle Event)
Jan 31 07:58:46 compute-0 ceph-mon[74496]: pgmap v1845: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 5.8 KiB/s wr, 67 op/s
Jan 31 07:58:46 compute-0 nova_compute[247704]: 2026-01-31 07:58:46.084 247708 DEBUG nova.compute.manager [None req-193c6194-9703-4063-8779-2b555fb10762 - - - - - -] [instance: 4c02576e-848d-4193-88a4-239a9e86e206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:58:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:47.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 5.5 KiB/s wr, 61 op/s
Jan 31 07:58:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:47.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:48 compute-0 ceph-mon[74496]: pgmap v1846: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 5.5 KiB/s wr, 61 op/s
Jan 31 07:58:48 compute-0 nova_compute[247704]: 2026-01-31 07:58:48.852 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:49.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 4.2 KiB/s wr, 44 op/s
Jan 31 07:58:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:49.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:49 compute-0 nova_compute[247704]: 2026-01-31 07:58:49.634 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846314.63266, 6ac7b21e-0a48-4bed-9b36-77bb332f73c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:58:49 compute-0 nova_compute[247704]: 2026-01-31 07:58:49.634 247708 INFO nova.compute.manager [-] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] VM Stopped (Lifecycle Event)
Jan 31 07:58:49 compute-0 nova_compute[247704]: 2026-01-31 07:58:49.875 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:58:50 compute-0 ceph-mon[74496]: pgmap v1847: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 4.2 KiB/s wr, 44 op/s
Jan 31 07:58:50 compute-0 nova_compute[247704]: 2026-01-31 07:58:50.740 247708 DEBUG nova.compute.manager [None req-1294e209-7f0d-40fc-a183-f8ae2a15f673 - - - - - -] [instance: 6ac7b21e-0a48-4bed-9b36-77bb332f73c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:58:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:51.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 07:58:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:51.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:52 compute-0 ceph-mon[74496]: pgmap v1848: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 07:58:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:53.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Jan 31 07:58:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:53.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:53 compute-0 nova_compute[247704]: 2026-01-31 07:58:53.853 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:54 compute-0 ceph-mon[74496]: pgmap v1849: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Jan 31 07:58:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:54 compute-0 nova_compute[247704]: 2026-01-31 07:58:54.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:55.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 9.0 KiB/s rd, 341 B/s wr, 13 op/s
Jan 31 07:58:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:56 compute-0 ceph-mon[74496]: pgmap v1850: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 9.0 KiB/s rd, 341 B/s wr, 13 op/s
Jan 31 07:58:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:58:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:58:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:57.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:58:58 compute-0 ceph-mon[74496]: pgmap v1851: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:58:58 compute-0 sudo[304015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:58 compute-0 sudo[304015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:58 compute-0 sudo[304015]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:58 compute-0 sudo[304040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:58:58 compute-0 sudo[304040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:58:58 compute-0 sudo[304040]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:58 compute-0 nova_compute[247704]: 2026-01-31 07:58:58.856 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:58:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:58:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:58:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:59.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:58:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:58:59 compute-0 nova_compute[247704]: 2026-01-31 07:58:59.878 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:58:59 compute-0 podman[304066]: 2026-01-31 07:58:59.903429582 +0000 UTC m=+0.071435741 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 07:59:00 compute-0 ceph-mon[74496]: pgmap v1852: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:01.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:01.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:01 compute-0 ceph-mon[74496]: pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:03.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:03.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:03 compute-0 nova_compute[247704]: 2026-01-31 07:59:03.879 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:04 compute-0 ceph-mon[74496]: pgmap v1854: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:04 compute-0 nova_compute[247704]: 2026-01-31 07:59:04.881 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:59:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:05.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:59:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:05.374 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:59:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:05.376 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:59:05 compute-0 nova_compute[247704]: 2026-01-31 07:59:05.376 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:05.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:06 compute-0 ceph-mon[74496]: pgmap v1855: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:07.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:07.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:07 compute-0 ceph-mon[74496]: pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:08 compute-0 nova_compute[247704]: 2026-01-31 07:59:08.881 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:09.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:09.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:09 compute-0 nova_compute[247704]: 2026-01-31 07:59:09.883 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:10.378 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:10 compute-0 ceph-mon[74496]: pgmap v1857: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:11.170 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:11.170 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:11.171 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:11.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:11.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:12 compute-0 ceph-mon[74496]: pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/471546171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:13.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:13.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:13 compute-0 nova_compute[247704]: 2026-01-31 07:59:13.882 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:14 compute-0 ceph-mon[74496]: pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 678 MiB used, 20 GiB / 21 GiB avail
Jan 31 07:59:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:14 compute-0 nova_compute[247704]: 2026-01-31 07:59:14.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:14 compute-0 nova_compute[247704]: 2026-01-31 07:59:14.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:14 compute-0 nova_compute[247704]: 2026-01-31 07:59:14.885 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.075 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.075 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.115 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 07:59:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:15.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.223 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.224 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.234 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.234 247708 INFO nova.compute.claims [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Claim successful on node compute-0.ctlplane.example.com
Jan 31 07:59:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 59 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 915 KiB/s wr, 13 op/s
Jan 31 07:59:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:15.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.418 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:59:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140290837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.878 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:15 compute-0 nova_compute[247704]: 2026-01-31 07:59:15.885 247708 DEBUG nova.compute.provider_tree [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:59:15 compute-0 podman[304120]: 2026-01-31 07:59:15.89847139 +0000 UTC m=+0.076470823 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 07:59:16 compute-0 ceph-mon[74496]: pgmap v1860: 305 pgs: 305 active+clean; 59 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 915 KiB/s wr, 13 op/s
Jan 31 07:59:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1140290837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:17.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 70 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.0 MiB/s wr, 25 op/s
Jan 31 07:59:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:17.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:17 compute-0 nova_compute[247704]: 2026-01-31 07:59:17.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:17 compute-0 nova_compute[247704]: 2026-01-31 07:59:17.560 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:17 compute-0 nova_compute[247704]: 2026-01-31 07:59:17.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 07:59:18 compute-0 ceph-mon[74496]: pgmap v1861: 305 pgs: 305 active+clean; 70 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.0 MiB/s wr, 25 op/s
Jan 31 07:59:18 compute-0 sudo[304143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:18 compute-0 sudo[304143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:18 compute-0 sudo[304143]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:18 compute-0 nova_compute[247704]: 2026-01-31 07:59:18.937 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:18 compute-0 sudo[304168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:18 compute-0 sudo[304168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:18 compute-0 sudo[304168]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:59:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:19.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:59:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 88 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:59:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:19 compute-0 nova_compute[247704]: 2026-01-31 07:59:19.760 247708 DEBUG nova.scheduler.client.report [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:59:19 compute-0 nova_compute[247704]: 2026-01-31 07:59:19.765 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 07:59:19 compute-0 nova_compute[247704]: 2026-01-31 07:59:19.766 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:19 compute-0 nova_compute[247704]: 2026-01-31 07:59:19.766 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:19 compute-0 nova_compute[247704]: 2026-01-31 07:59:19.961 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:19 compute-0 nova_compute[247704]: 2026-01-31 07:59:19.963 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:19 compute-0 nova_compute[247704]: 2026-01-31 07:59:19.964 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.017 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.017 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.018 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.018 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.018 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.068 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.068 247708 DEBUG nova.network.neutron [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_07:59:20
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.107 247708 INFO nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.140 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.375 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.377 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.377 247708 INFO nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Creating image(s)
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.410 247708 DEBUG nova.storage.rbd_utils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.446 247708 DEBUG nova.storage.rbd_utils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:59:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182031490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.475 247708 DEBUG nova.storage.rbd_utils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.479 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.498 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 07:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.545 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.547 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.547 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.548 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.580 247708 DEBUG nova.storage.rbd_utils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.585 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:20 compute-0 ceph-mon[74496]: pgmap v1862: 305 pgs: 305 active+clean; 88 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:59:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3292487236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2114909117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/182031490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.607 247708 DEBUG nova.policy [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '63e95edea0164ae2a9820dc10467335d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.730 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.731 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4601MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.732 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.732 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.961 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.962 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 07:59:20 compute-0 nova_compute[247704]: 2026-01-31 07:59:20.963 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 07:59:21 compute-0 nova_compute[247704]: 2026-01-31 07:59:21.042 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:21.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 88 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:59:21 compute-0 nova_compute[247704]: 2026-01-31 07:59:21.276 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:21 compute-0 nova_compute[247704]: 2026-01-31 07:59:21.347 247708 DEBUG nova.storage.rbd_utils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] resizing rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 07:59:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:59:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:59:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 07:59:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206870935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:21 compute-0 nova_compute[247704]: 2026-01-31 07:59:21.493 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:21 compute-0 nova_compute[247704]: 2026-01-31 07:59:21.507 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 07:59:21 compute-0 nova_compute[247704]: 2026-01-31 07:59:21.655 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 07:59:21 compute-0 nova_compute[247704]: 2026-01-31 07:59:21.932 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 07:59:21 compute-0 nova_compute[247704]: 2026-01-31 07:59:21.933 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1724540177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3013397379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1206870935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.277 247708 DEBUG nova.objects.instance [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.315 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.316 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Ensure instance console log exists: /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.316 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.317 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.317 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.729 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.730 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.730 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 07:59:22 compute-0 nova_compute[247704]: 2026-01-31 07:59:22.730 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 07:59:23 compute-0 nova_compute[247704]: 2026-01-31 07:59:23.050 247708 DEBUG nova.network.neutron [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Successfully created port: 8833e127-3186-4142-8fd1-89a06d960d79 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 07:59:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:23.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:23 compute-0 ceph-mon[74496]: pgmap v1863: 305 pgs: 305 active+clean; 88 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 07:59:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 96 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Jan 31 07:59:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:23 compute-0 nova_compute[247704]: 2026-01-31 07:59:23.939 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:24 compute-0 ceph-mon[74496]: pgmap v1864: 305 pgs: 305 active+clean; 96 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Jan 31 07:59:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4102117780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2095121666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:24 compute-0 nova_compute[247704]: 2026-01-31 07:59:24.991 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:25.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 123 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.3 MiB/s wr, 53 op/s
Jan 31 07:59:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:26 compute-0 ceph-mon[74496]: pgmap v1865: 305 pgs: 305 active+clean; 123 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.3 MiB/s wr, 53 op/s
Jan 31 07:59:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:27.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Jan 31 07:59:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:59:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:59:27 compute-0 ceph-mon[74496]: pgmap v1866: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Jan 31 07:59:28 compute-0 nova_compute[247704]: 2026-01-31 07:59:28.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:59:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:29.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:59:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.5 MiB/s wr, 34 op/s
Jan 31 07:59:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:29 compute-0 nova_compute[247704]: 2026-01-31 07:59:29.994 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:30 compute-0 nova_compute[247704]: 2026-01-31 07:59:30.348 247708 DEBUG nova.network.neutron [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Successfully updated port: 8833e127-3186-4142-8fd1-89a06d960d79 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 07:59:30 compute-0 nova_compute[247704]: 2026-01-31 07:59:30.585 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:59:30 compute-0 nova_compute[247704]: 2026-01-31 07:59:30.585 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:59:30 compute-0 nova_compute[247704]: 2026-01-31 07:59:30.586 247708 DEBUG nova.network.neutron [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 07:59:30 compute-0 nova_compute[247704]: 2026-01-31 07:59:30.758 247708 DEBUG nova.compute.manager [req-2f77cb5e-c50d-4ea6-b707-9d5798a302bc req-2d445a1b-cc35-4edf-8da0-06fb475b939d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-changed-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:59:30 compute-0 nova_compute[247704]: 2026-01-31 07:59:30.758 247708 DEBUG nova.compute.manager [req-2f77cb5e-c50d-4ea6-b707-9d5798a302bc req-2d445a1b-cc35-4edf-8da0-06fb475b939d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Refreshing instance network info cache due to event network-changed-8833e127-3186-4142-8fd1-89a06d960d79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 07:59:30 compute-0 nova_compute[247704]: 2026-01-31 07:59:30.759 247708 DEBUG oslo_concurrency.lockutils [req-2f77cb5e-c50d-4ea6-b707-9d5798a302bc req-2d445a1b-cc35-4edf-8da0-06fb475b939d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 07:59:30 compute-0 podman[304411]: 2026-01-31 07:59:30.950031975 +0000 UTC m=+0.118822115 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Jan 31 07:59:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:59:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:31.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:59:31 compute-0 ceph-mon[74496]: pgmap v1867: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.5 MiB/s wr, 34 op/s
Jan 31 07:59:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 31 07:59:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:31 compute-0 nova_compute[247704]: 2026-01-31 07:59:31.494 247708 DEBUG nova.network.neutron [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 07:59:32 compute-0 ceph-mon[74496]: pgmap v1868: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.206 247708 DEBUG nova.network.neutron [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Updating instance_info_cache with network_info: [{"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:59:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:33.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.259 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.260 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance network_info: |[{"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.261 247708 DEBUG oslo_concurrency.lockutils [req-2f77cb5e-c50d-4ea6-b707-9d5798a302bc req-2d445a1b-cc35-4edf-8da0-06fb475b939d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.261 247708 DEBUG nova.network.neutron [req-2f77cb5e-c50d-4ea6-b707-9d5798a302bc req-2d445a1b-cc35-4edf-8da0-06fb475b939d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Refreshing network info cache for port 8833e127-3186-4142-8fd1-89a06d960d79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.264 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Start _get_guest_xml network_info=[{"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.270 247708 WARNING nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.281 247708 DEBUG nova.virt.libvirt.host [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 07:59:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 750 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.284 247708 DEBUG nova.virt.libvirt.host [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.289 247708 DEBUG nova.virt.libvirt.host [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.290 247708 DEBUG nova.virt.libvirt.host [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.291 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.291 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.291 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.292 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.292 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.292 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.292 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.292 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.292 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.292 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.293 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.293 247708 DEBUG nova.virt.hardware [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.295 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:59:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381825303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.711 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.750 247708 DEBUG nova.storage.rbd_utils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.756 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:33 compute-0 nova_compute[247704]: 2026-01-31 07:59:33.943 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 07:59:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3865166314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.285 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.287 247708 DEBUG nova.virt.libvirt.vif [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1843296913',display_name='tempest-ServerDiskConfigTestJSON-server-1843296913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1843296913',id=87,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fectldu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:59:20Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.287 247708 DEBUG nova.network.os_vif_util [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.288 247708 DEBUG nova.network.os_vif_util [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.289 247708 DEBUG nova.objects.instance [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.432 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] End _get_guest_xml xml=<domain type="kvm">
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <uuid>0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99</uuid>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <name>instance-00000057</name>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <metadata>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1843296913</nova:name>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 07:59:33</nova:creationTime>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <nova:port uuid="8833e127-3186-4142-8fd1-89a06d960d79">
Jan 31 07:59:34 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   </metadata>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <system>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <entry name="serial">0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99</entry>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <entry name="uuid">0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99</entry>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </system>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <os>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   </os>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <features>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <apic/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   </features>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   </clock>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   </cpu>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   <devices>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk">
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       </source>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config">
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       </source>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 07:59:34 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       </auth>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </disk>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:6a:40:8a"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <target dev="tap8833e127-31"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </interface>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/console.log" append="off"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </serial>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <video>
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </video>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </rng>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 07:59:34 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 07:59:34 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 07:59:34 compute-0 nova_compute[247704]:   </devices>
Jan 31 07:59:34 compute-0 nova_compute[247704]: </domain>
Jan 31 07:59:34 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.434 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Preparing to wait for external event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.434 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.434 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.435 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.435 247708 DEBUG nova.virt.libvirt.vif [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1843296913',display_name='tempest-ServerDiskConfigTestJSON-server-1843296913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1843296913',id=87,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fectldu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:59:20Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.435 247708 DEBUG nova.network.os_vif_util [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.436 247708 DEBUG nova.network.os_vif_util [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.436 247708 DEBUG os_vif [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.437 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.437 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.437 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.440 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.440 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8833e127-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.441 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8833e127-31, col_values=(('external_ids', {'iface-id': '8833e127-3186-4142-8fd1-89a06d960d79', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:40:8a', 'vm-uuid': '0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.442 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:34 compute-0 NetworkManager[49108]: <info>  [1769846374.4437] manager: (tap8833e127-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.444 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.450 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:34 compute-0 nova_compute[247704]: 2026-01-31 07:59:34.452 247708 INFO os_vif [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31')
Jan 31 07:59:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:34 compute-0 ceph-mon[74496]: pgmap v1869: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 750 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 07:59:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1381825303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3865166314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:35 compute-0 nova_compute[247704]: 2026-01-31 07:59:35.169 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:59:35 compute-0 nova_compute[247704]: 2026-01-31 07:59:35.170 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 07:59:35 compute-0 nova_compute[247704]: 2026-01-31 07:59:35.170 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:6a:40:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 07:59:35 compute-0 nova_compute[247704]: 2026-01-31 07:59:35.170 247708 INFO nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Using config drive
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 07:59:35 compute-0 nova_compute[247704]: 2026-01-31 07:59:35.200 247708 DEBUG nova.storage.rbd_utils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019845683859110366 of space, bias 1.0, pg target 0.595370515773311 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 07:59:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:35.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 84 op/s
Jan 31 07:59:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:35.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:35 compute-0 ovn_controller[149457]: 2026-01-31T07:59:35Z|00342|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 31 07:59:35 compute-0 ceph-mon[74496]: pgmap v1870: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 84 op/s
Jan 31 07:59:36 compute-0 nova_compute[247704]: 2026-01-31 07:59:36.283 247708 INFO nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Creating config drive at /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config
Jan 31 07:59:36 compute-0 nova_compute[247704]: 2026-01-31 07:59:36.289 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpnago61tf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:36 compute-0 nova_compute[247704]: 2026-01-31 07:59:36.416 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpnago61tf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:36 compute-0 nova_compute[247704]: 2026-01-31 07:59:36.451 247708 DEBUG nova.storage.rbd_utils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 07:59:36 compute-0 nova_compute[247704]: 2026-01-31 07:59:36.456 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 07:59:36 compute-0 sudo[304562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:36 compute-0 sudo[304562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:36 compute-0 sudo[304562]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:36 compute-0 nova_compute[247704]: 2026-01-31 07:59:36.664 247708 DEBUG nova.network.neutron [req-2f77cb5e-c50d-4ea6-b707-9d5798a302bc req-2d445a1b-cc35-4edf-8da0-06fb475b939d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Updated VIF entry in instance network info cache for port 8833e127-3186-4142-8fd1-89a06d960d79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 07:59:36 compute-0 nova_compute[247704]: 2026-01-31 07:59:36.665 247708 DEBUG nova.network.neutron [req-2f77cb5e-c50d-4ea6-b707-9d5798a302bc req-2d445a1b-cc35-4edf-8da0-06fb475b939d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Updating instance_info_cache with network_info: [{"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 07:59:36 compute-0 nova_compute[247704]: 2026-01-31 07:59:36.705 247708 DEBUG oslo_concurrency.lockutils [req-2f77cb5e-c50d-4ea6-b707-9d5798a302bc req-2d445a1b-cc35-4edf-8da0-06fb475b939d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 07:59:36 compute-0 sudo[304587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:36 compute-0 sudo[304587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:36 compute-0 sudo[304587]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:36 compute-0 sudo[304615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:36 compute-0 sudo[304615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:36 compute-0 sudo[304615]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:36 compute-0 sudo[304640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 07:59:36 compute-0 sudo[304640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 07:59:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 07:59:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:37 compute-0 sudo[304640]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 07:59:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 07:59:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:59:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:37.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.251 247708 DEBUG oslo_concurrency.processutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.794s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.251 247708 INFO nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Deleting local config drive /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config because it was imported into RBD.
Jan 31 07:59:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 241 KiB/s wr, 74 op/s
Jan 31 07:59:37 compute-0 kernel: tap8833e127-31: entered promiscuous mode
Jan 31 07:59:37 compute-0 ovn_controller[149457]: 2026-01-31T07:59:37Z|00343|binding|INFO|Claiming lport 8833e127-3186-4142-8fd1-89a06d960d79 for this chassis.
Jan 31 07:59:37 compute-0 ovn_controller[149457]: 2026-01-31T07:59:37Z|00344|binding|INFO|8833e127-3186-4142-8fd1-89a06d960d79: Claiming fa:16:3e:6a:40:8a 10.100.0.3
Jan 31 07:59:37 compute-0 NetworkManager[49108]: <info>  [1769846377.2948] manager: (tap8833e127-31): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.294 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:37 compute-0 systemd-udevd[304706]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.316 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:40:8a 10.100.0.3'], port_security=['fa:16:3e:6a:40:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=8833e127-3186-4142-8fd1-89a06d960d79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.317 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 8833e127-3186-4142-8fd1-89a06d960d79 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.318 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 07:59:37 compute-0 NetworkManager[49108]: <info>  [1769846377.3261] device (tap8833e127-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:59:37 compute-0 NetworkManager[49108]: <info>  [1769846377.3269] device (tap8833e127-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.340 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ffb397-e966-4c42-a5e7-fe27525ce6ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.341 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.343 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.344 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d95e10c0-edee-481f-8c83-412b29694f24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.344 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d2dbae3d-5429-43f8-906e-daf01ca93c34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 systemd-machined[214448]: New machine qemu-35-instance-00000057.
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.355 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[f88de683-e540-4177-b253-0e1362b7d9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.361 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:37 compute-0 ovn_controller[149457]: 2026-01-31T07:59:37Z|00345|binding|INFO|Setting lport 8833e127-3186-4142-8fd1-89a06d960d79 ovn-installed in OVS
Jan 31 07:59:37 compute-0 ovn_controller[149457]: 2026-01-31T07:59:37Z|00346|binding|INFO|Setting lport 8833e127-3186-4142-8fd1-89a06d960d79 up in Southbound
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.366 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:37 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-00000057.
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.377 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b2a451-1e14-4782-ba90-8a6822853623]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.410 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6ddfa214-5c6f-4c33-b929-89e9633808c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 NetworkManager[49108]: <info>  [1769846377.4157] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.414 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[73543bba-1ac2-403d-9cba-ebb5aca55858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 systemd-udevd[304708]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:59:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:37.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.443 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[98e07303-32c6-4939-9b27-d7e6dbacccf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.447 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[23fe4196-ffc8-4e28-bb25-af412e99fead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 NetworkManager[49108]: <info>  [1769846377.4687] device (tap121329c8-20): carrier: link connected
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.474 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c84e3250-b6e2-4cb5-a746-c7c989c20dfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.492 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cec3634b-09f3-4fd3-86b6-af43d36019e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662008, 'reachable_time': 44600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304741, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.507 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[484aefbb-a5c9-479f-9e86-02beef825924]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662008, 'tstamp': 662008}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304742, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.521 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[24ec4cad-fd61-43e5-997c-79d6344c8feb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662008, 'reachable_time': 44600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304743, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.546 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d274d656-8efe-461b-a354-2ccd6c357d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.583 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[92ba89cd-e97b-409c-8530-c60d1f268410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.584 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.585 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.585 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:37 compute-0 NetworkManager[49108]: <info>  [1769846377.5896] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Jan 31 07:59:37 compute-0 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.592 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.588 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.591 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:37 compute-0 ovn_controller[149457]: 2026-01-31T07:59:37Z|00347|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.594 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.595 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.596 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.597 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a5891bf8-defe-420d-bdab-318f86d8f9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.598 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: global
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 07:59:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:37.599 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.602 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.939 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846377.9394321, 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:59:37 compute-0 nova_compute[247704]: 2026-01-31 07:59:37.940 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] VM Started (Lifecycle Event)
Jan 31 07:59:37 compute-0 kernel: hrtimer: interrupt took 17793213 ns
Jan 31 07:59:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 07:59:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:59:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 07:59:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:59:38 compute-0 podman[304816]: 2026-01-31 07:59:37.918611814 +0000 UTC m=+0.017611419 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 07:59:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.071 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.077 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846377.9403148, 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.078 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] VM Paused (Lifecycle Event)
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.103 247708 DEBUG nova.compute.manager [req-b2cfa3b5-9e10-4b3a-a90d-5d45817b7956 req-ee61b77a-9ac9-47b7-bf12-990631d6ddbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.104 247708 DEBUG oslo_concurrency.lockutils [req-b2cfa3b5-9e10-4b3a-a90d-5d45817b7956 req-ee61b77a-9ac9-47b7-bf12-990631d6ddbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.104 247708 DEBUG oslo_concurrency.lockutils [req-b2cfa3b5-9e10-4b3a-a90d-5d45817b7956 req-ee61b77a-9ac9-47b7-bf12-990631d6ddbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.104 247708 DEBUG oslo_concurrency.lockutils [req-b2cfa3b5-9e10-4b3a-a90d-5d45817b7956 req-ee61b77a-9ac9-47b7-bf12-990631d6ddbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.104 247708 DEBUG nova.compute.manager [req-b2cfa3b5-9e10-4b3a-a90d-5d45817b7956 req-ee61b77a-9ac9-47b7-bf12-990631d6ddbf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Processing event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.105 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.109 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.113 247708 INFO nova.virt.libvirt.driver [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance spawned successfully.
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.113 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 07:59:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a2d19d8c-f26c-4db9-b38c-d2d11164ff19 does not exist
Jan 31 07:59:38 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0e0363c0-8c60-4aa1-8c91-dac1c9576bff does not exist
Jan 31 07:59:38 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f16a2d6d-faea-4241-829a-63fe32953b92 does not exist
Jan 31 07:59:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 07:59:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 07:59:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 07:59:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.133 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.136 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846378.1090996, 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.136 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] VM Resumed (Lifecycle Event)
Jan 31 07:59:38 compute-0 sudo[304830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:38 compute-0 sudo[304830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:38 compute-0 sudo[304830]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.204 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.206 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.207 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.207 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.208 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.208 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.208 247708 DEBUG nova.virt.libvirt.driver [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.214 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 07:59:38 compute-0 sudo[304855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:38 compute-0 sudo[304855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:38 compute-0 sudo[304855]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:38 compute-0 podman[304816]: 2026-01-31 07:59:38.319155422 +0000 UTC m=+0.418155007 container create 07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 07:59:38 compute-0 sudo[304880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:38 compute-0 sudo[304880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:38 compute-0 sudo[304880]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-0 ceph-mon[74496]: pgmap v1871: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 241 KiB/s wr, 74 op/s
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2139458933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 07:59:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 07:59:38 compute-0 sudo[304905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 07:59:38 compute-0 sudo[304905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:38 compute-0 systemd[1]: Started libpod-conmon-07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324.scope.
Jan 31 07:59:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66b5de6de1c1870f0d0ee6fb0a022068449033ab8cd95e792c4cd9a67b004375/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:38 compute-0 podman[304816]: 2026-01-31 07:59:38.662685992 +0000 UTC m=+0.761685657 container init 07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 07:59:38 compute-0 podman[304816]: 2026-01-31 07:59:38.67244497 +0000 UTC m=+0.771444585 container start 07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 07:59:38 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[304932]: [NOTICE]   (304959) : New worker (304961) forked
Jan 31 07:59:38 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[304932]: [NOTICE]   (304959) : Loading success.
Jan 31 07:59:38 compute-0 nova_compute[247704]: 2026-01-31 07:59:38.953 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:38 compute-0 podman[304984]: 2026-01-31 07:59:38.884574507 +0000 UTC m=+0.025533263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:59:39 compute-0 podman[304984]: 2026-01-31 07:59:39.027499059 +0000 UTC m=+0.168457805 container create 382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_fermi, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:39 compute-0 nova_compute[247704]: 2026-01-31 07:59:39.052 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 07:59:39 compute-0 sudo[304998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:39 compute-0 sudo[304998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:39 compute-0 sudo[304998]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:39 compute-0 systemd[1]: Started libpod-conmon-382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e.scope.
Jan 31 07:59:39 compute-0 sudo[305023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:39 compute-0 sudo[305023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:39 compute-0 sudo[305023]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:39.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:39 compute-0 nova_compute[247704]: 2026-01-31 07:59:39.222 247708 INFO nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Took 18.85 seconds to spawn the instance on the hypervisor.
Jan 31 07:59:39 compute-0 nova_compute[247704]: 2026-01-31 07:59:39.223 247708 DEBUG nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 83 op/s
Jan 31 07:59:39 compute-0 podman[304984]: 2026-01-31 07:59:39.295677843 +0000 UTC m=+0.436636649 container init 382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 07:59:39 compute-0 podman[304984]: 2026-01-31 07:59:39.304223691 +0000 UTC m=+0.445182437 container start 382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:59:39 compute-0 silly_fermi[305046]: 167 167
Jan 31 07:59:39 compute-0 systemd[1]: libpod-382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e.scope: Deactivated successfully.
Jan 31 07:59:39 compute-0 conmon[305046]: conmon 382efaf67d0331eef917 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e.scope/container/memory.events
Jan 31 07:59:39 compute-0 podman[304984]: 2026-01-31 07:59:39.350026417 +0000 UTC m=+0.490985233 container attach 382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_fermi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:39 compute-0 podman[304984]: 2026-01-31 07:59:39.351950444 +0000 UTC m=+0.492909210 container died 382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 07:59:39 compute-0 nova_compute[247704]: 2026-01-31 07:59:39.390 247708 INFO nova.compute.manager [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Took 24.20 seconds to build instance.
Jan 31 07:59:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:39.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:39 compute-0 nova_compute[247704]: 2026-01-31 07:59:39.434 247708 DEBUG oslo_concurrency.lockutils [None req-b1a23cdc-3a73-4326-9a97-3bb63e2804e7 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:39 compute-0 nova_compute[247704]: 2026-01-31 07:59:39.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa23380c3024820028908f54145526d3040c7d4f220e0f8af761ee1cba34539b-merged.mount: Deactivated successfully.
Jan 31 07:59:40 compute-0 podman[304984]: 2026-01-31 07:59:40.135298348 +0000 UTC m=+1.276257074 container remove 382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:40 compute-0 ceph-mon[74496]: pgmap v1872: 305 pgs: 305 active+clean; 134 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 83 op/s
Jan 31 07:59:40 compute-0 systemd[1]: libpod-conmon-382efaf67d0331eef9179b90030d7c2296b84fe525fad67958c70304a593050e.scope: Deactivated successfully.
Jan 31 07:59:40 compute-0 nova_compute[247704]: 2026-01-31 07:59:40.261 247708 DEBUG nova.compute.manager [req-76d509ca-6654-407a-b0aa-a6994cb1f813 req-bee0baac-3f3a-41cf-8c89-55036d5e8292 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 07:59:40 compute-0 nova_compute[247704]: 2026-01-31 07:59:40.261 247708 DEBUG oslo_concurrency.lockutils [req-76d509ca-6654-407a-b0aa-a6994cb1f813 req-bee0baac-3f3a-41cf-8c89-55036d5e8292 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 07:59:40 compute-0 nova_compute[247704]: 2026-01-31 07:59:40.262 247708 DEBUG oslo_concurrency.lockutils [req-76d509ca-6654-407a-b0aa-a6994cb1f813 req-bee0baac-3f3a-41cf-8c89-55036d5e8292 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 07:59:40 compute-0 nova_compute[247704]: 2026-01-31 07:59:40.262 247708 DEBUG oslo_concurrency.lockutils [req-76d509ca-6654-407a-b0aa-a6994cb1f813 req-bee0baac-3f3a-41cf-8c89-55036d5e8292 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 07:59:40 compute-0 nova_compute[247704]: 2026-01-31 07:59:40.262 247708 DEBUG nova.compute.manager [req-76d509ca-6654-407a-b0aa-a6994cb1f813 req-bee0baac-3f3a-41cf-8c89-55036d5e8292 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] No waiting events found dispatching network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 07:59:40 compute-0 nova_compute[247704]: 2026-01-31 07:59:40.262 247708 WARNING nova.compute.manager [req-76d509ca-6654-407a-b0aa-a6994cb1f813 req-bee0baac-3f3a-41cf-8c89-55036d5e8292 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received unexpected event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 for instance with vm_state active and task_state None.
Jan 31 07:59:40 compute-0 podman[305076]: 2026-01-31 07:59:40.292250241 +0000 UTC m=+0.022345665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:59:40 compute-0 podman[305076]: 2026-01-31 07:59:40.387289107 +0000 UTC m=+0.117384521 container create f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:40 compute-0 systemd[1]: Started libpod-conmon-f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0.scope.
Jan 31 07:59:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eedb932dd63b2b8d401d2523c09693937f8ac9041b9fa194de2e3402ebb5928/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eedb932dd63b2b8d401d2523c09693937f8ac9041b9fa194de2e3402ebb5928/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eedb932dd63b2b8d401d2523c09693937f8ac9041b9fa194de2e3402ebb5928/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eedb932dd63b2b8d401d2523c09693937f8ac9041b9fa194de2e3402ebb5928/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eedb932dd63b2b8d401d2523c09693937f8ac9041b9fa194de2e3402ebb5928/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:40 compute-0 podman[305076]: 2026-01-31 07:59:40.67761391 +0000 UTC m=+0.407709394 container init f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:40 compute-0 podman[305076]: 2026-01-31 07:59:40.688259119 +0000 UTC m=+0.418354553 container start f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jemison, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:59:40 compute-0 podman[305076]: 2026-01-31 07:59:40.772271156 +0000 UTC m=+0.502366610 container attach f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 07:59:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:59:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:41.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:59:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 151 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 523 KiB/s wr, 87 op/s
Jan 31 07:59:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:41.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:41 compute-0 cool_jemison[305093]: --> passed data devices: 0 physical, 1 LVM
Jan 31 07:59:41 compute-0 cool_jemison[305093]: --> relative data size: 1.0
Jan 31 07:59:41 compute-0 cool_jemison[305093]: --> All data devices are unavailable
Jan 31 07:59:41 compute-0 systemd[1]: libpod-f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0.scope: Deactivated successfully.
Jan 31 07:59:41 compute-0 podman[305076]: 2026-01-31 07:59:41.477161348 +0000 UTC m=+1.207256742 container died f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jemison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eedb932dd63b2b8d401d2523c09693937f8ac9041b9fa194de2e3402ebb5928-merged.mount: Deactivated successfully.
Jan 31 07:59:41 compute-0 podman[305076]: 2026-01-31 07:59:41.532614239 +0000 UTC m=+1.262709633 container remove f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 07:59:41 compute-0 systemd[1]: libpod-conmon-f5171a4f14ba0abfb0430f65e1087beca101e5f37aed553b22b7d9444f4661d0.scope: Deactivated successfully.
Jan 31 07:59:41 compute-0 sudo[304905]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:41 compute-0 sudo[305118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:41 compute-0 sudo[305118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:41 compute-0 sudo[305118]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:41 compute-0 sudo[305143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:41 compute-0 sudo[305143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:41 compute-0 sudo[305143]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:41 compute-0 sudo[305168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:41 compute-0 sudo[305168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:41 compute-0 sudo[305168]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:41 compute-0 sudo[305193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 07:59:41 compute-0 sudo[305193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:42 compute-0 podman[305259]: 2026-01-31 07:59:42.087332373 +0000 UTC m=+0.049605240 container create 05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 07:59:42 compute-0 systemd[1]: Started libpod-conmon-05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590.scope.
Jan 31 07:59:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:42 compute-0 podman[305259]: 2026-01-31 07:59:42.062109648 +0000 UTC m=+0.024382585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:59:42 compute-0 podman[305259]: 2026-01-31 07:59:42.160362502 +0000 UTC m=+0.122635399 container init 05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:42 compute-0 podman[305259]: 2026-01-31 07:59:42.165698742 +0000 UTC m=+0.127971579 container start 05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:42 compute-0 podman[305259]: 2026-01-31 07:59:42.169258008 +0000 UTC m=+0.131530905 container attach 05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:42 compute-0 loving_ishizaka[305275]: 167 167
Jan 31 07:59:42 compute-0 systemd[1]: libpod-05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590.scope: Deactivated successfully.
Jan 31 07:59:42 compute-0 podman[305259]: 2026-01-31 07:59:42.170895218 +0000 UTC m=+0.133168075 container died 05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a671bf696070aa77eba11f1a862c53d4addb15810cd0512329b2dd3b6b049456-merged.mount: Deactivated successfully.
Jan 31 07:59:42 compute-0 podman[305259]: 2026-01-31 07:59:42.212645585 +0000 UTC m=+0.174918432 container remove 05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ishizaka, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 07:59:42 compute-0 systemd[1]: libpod-conmon-05390a1eb2840c1b5d7a7154a5d15f1156921675813bda0d5ecfbf2e2acfb590.scope: Deactivated successfully.
Jan 31 07:59:42 compute-0 ceph-mon[74496]: pgmap v1873: 305 pgs: 305 active+clean; 151 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 523 KiB/s wr, 87 op/s
Jan 31 07:59:42 compute-0 podman[305298]: 2026-01-31 07:59:42.359631186 +0000 UTC m=+0.054319694 container create 092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 31 07:59:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2876323651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3586046035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:42 compute-0 systemd[1]: Started libpod-conmon-092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28.scope.
Jan 31 07:59:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee524f8653f5900e768de290acdd402118fd3cac81d50473a6680cb411c2fabd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee524f8653f5900e768de290acdd402118fd3cac81d50473a6680cb411c2fabd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee524f8653f5900e768de290acdd402118fd3cac81d50473a6680cb411c2fabd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee524f8653f5900e768de290acdd402118fd3cac81d50473a6680cb411c2fabd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:42 compute-0 podman[305298]: 2026-01-31 07:59:42.334550425 +0000 UTC m=+0.029238933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:59:42 compute-0 podman[305298]: 2026-01-31 07:59:42.431741293 +0000 UTC m=+0.126429781 container init 092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 31 07:59:42 compute-0 podman[305298]: 2026-01-31 07:59:42.436987911 +0000 UTC m=+0.131676379 container start 092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:42 compute-0 podman[305298]: 2026-01-31 07:59:42.440196219 +0000 UTC m=+0.134884687 container attach 092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]: {
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:     "0": [
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:         {
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "devices": [
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "/dev/loop3"
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             ],
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "lv_name": "ceph_lv0",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "lv_size": "7511998464",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "name": "ceph_lv0",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "tags": {
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.cluster_name": "ceph",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.crush_device_class": "",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.encrypted": "0",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.osd_id": "0",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.type": "block",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:                 "ceph.vdo": "0"
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             },
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "type": "block",
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:             "vg_name": "ceph_vg0"
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:         }
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]:     ]
Jan 31 07:59:43 compute-0 bold_dubinsky[305314]: }
Jan 31 07:59:43 compute-0 systemd[1]: libpod-092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28.scope: Deactivated successfully.
Jan 31 07:59:43 compute-0 podman[305298]: 2026-01-31 07:59:43.157565155 +0000 UTC m=+0.852253643 container died 092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee524f8653f5900e768de290acdd402118fd3cac81d50473a6680cb411c2fabd-merged.mount: Deactivated successfully.
Jan 31 07:59:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:43 compute-0 podman[305298]: 2026-01-31 07:59:43.222654912 +0000 UTC m=+0.917343380 container remove 092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 07:59:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 07:59:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:43.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 07:59:43 compute-0 systemd[1]: libpod-conmon-092690e01a9af1fafe4219a6dab857379865464df9222b9e5be051e3cea87d28.scope: Deactivated successfully.
Jan 31 07:59:43 compute-0 sudo[305193]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 158 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 729 KiB/s wr, 102 op/s
Jan 31 07:59:43 compute-0 sudo[305335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:43 compute-0 sudo[305335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:43 compute-0 sudo[305335]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:43 compute-0 sudo[305360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 07:59:43 compute-0 sudo[305360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:43 compute-0 sudo[305360]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:43.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:43 compute-0 sudo[305386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:43 compute-0 sudo[305386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:43 compute-0 sudo[305386]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:43 compute-0 sudo[305411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 07:59:43 compute-0 sudo[305411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:43 compute-0 podman[305477]: 2026-01-31 07:59:43.825401785 +0000 UTC m=+0.055141264 container create 85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ellis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:43 compute-0 podman[305477]: 2026-01-31 07:59:43.787232685 +0000 UTC m=+0.016972194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:59:43 compute-0 systemd[1]: Started libpod-conmon-85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302.scope.
Jan 31 07:59:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:43 compute-0 nova_compute[247704]: 2026-01-31 07:59:43.978 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:44 compute-0 podman[305477]: 2026-01-31 07:59:44.031588728 +0000 UTC m=+0.261328237 container init 85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ellis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 07:59:44 compute-0 podman[305477]: 2026-01-31 07:59:44.040996707 +0000 UTC m=+0.270736196 container start 85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ellis, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 07:59:44 compute-0 recursing_ellis[305494]: 167 167
Jan 31 07:59:44 compute-0 systemd[1]: libpod-85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302.scope: Deactivated successfully.
Jan 31 07:59:44 compute-0 podman[305477]: 2026-01-31 07:59:44.140555243 +0000 UTC m=+0.370294722 container attach 85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ellis, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:59:44 compute-0 podman[305477]: 2026-01-31 07:59:44.141068996 +0000 UTC m=+0.370808475 container died 85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ellis, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eef38ee583e162913bec86e328a6780e8afec83bd28f146d8f9b585ba95492a-merged.mount: Deactivated successfully.
Jan 31 07:59:44 compute-0 nova_compute[247704]: 2026-01-31 07:59:44.445 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:44 compute-0 podman[305477]: 2026-01-31 07:59:44.603760347 +0000 UTC m=+0.833499866 container remove 85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_ellis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 07:59:44 compute-0 ceph-mon[74496]: pgmap v1874: 305 pgs: 305 active+clean; 158 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 729 KiB/s wr, 102 op/s
Jan 31 07:59:44 compute-0 systemd[1]: libpod-conmon-85bd57a48f1d9b9dc5f35305061453ee21b24590e6262b6e9c534d08198e1302.scope: Deactivated successfully.
Jan 31 07:59:44 compute-0 podman[305520]: 2026-01-31 07:59:44.785914505 +0000 UTC m=+0.072977159 container create d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 07:59:44 compute-0 podman[305520]: 2026-01-31 07:59:44.751752373 +0000 UTC m=+0.038815087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 07:59:44 compute-0 systemd[1]: Started libpod-conmon-d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d.scope.
Jan 31 07:59:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 07:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0780ad00b7f5d1cd752cc9db517b70b20f3491f5b856f23f9fc379700185dd70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0780ad00b7f5d1cd752cc9db517b70b20f3491f5b856f23f9fc379700185dd70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0780ad00b7f5d1cd752cc9db517b70b20f3491f5b856f23f9fc379700185dd70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0780ad00b7f5d1cd752cc9db517b70b20f3491f5b856f23f9fc379700185dd70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 07:59:45 compute-0 podman[305520]: 2026-01-31 07:59:45.03487972 +0000 UTC m=+0.321942414 container init d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 07:59:45 compute-0 podman[305520]: 2026-01-31 07:59:45.041714797 +0000 UTC m=+0.328777471 container start d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 07:59:45 compute-0 podman[305520]: 2026-01-31 07:59:45.22451185 +0000 UTC m=+0.511574504 container attach d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 07:59:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:45.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 181 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 31 07:59:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:45 compute-0 nova_compute[247704]: 2026-01-31 07:59:45.502 247708 INFO nova.compute.manager [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Rebuilding instance
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]: {
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]:         "osd_id": 0,
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]:         "type": "bluestore"
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]:     }
Jan 31 07:59:45 compute-0 dreamy_noyce[305537]: }
Jan 31 07:59:45 compute-0 systemd[1]: libpod-d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d.scope: Deactivated successfully.
Jan 31 07:59:45 compute-0 podman[305520]: 2026-01-31 07:59:45.948663322 +0000 UTC m=+1.235725976 container died d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 07:59:46 compute-0 nova_compute[247704]: 2026-01-31 07:59:46.059 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:46 compute-0 nova_compute[247704]: 2026-01-31 07:59:46.151 247708 DEBUG nova.compute.manager [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 07:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0780ad00b7f5d1cd752cc9db517b70b20f3491f5b856f23f9fc379700185dd70-merged.mount: Deactivated successfully.
Jan 31 07:59:46 compute-0 podman[305520]: 2026-01-31 07:59:46.53928068 +0000 UTC m=+1.826343364 container remove d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 07:59:46 compute-0 nova_compute[247704]: 2026-01-31 07:59:46.545 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:46 compute-0 systemd[1]: libpod-conmon-d568a9c9c0612b789ba396f80221629a6f05b3879160cb0b487df077c6d8d73d.scope: Deactivated successfully.
Jan 31 07:59:46 compute-0 sudo[305411]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 07:59:46 compute-0 podman[305560]: 2026-01-31 07:59:46.643381717 +0000 UTC m=+0.652803905 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 07:59:46 compute-0 nova_compute[247704]: 2026-01-31 07:59:46.681 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:46 compute-0 nova_compute[247704]: 2026-01-31 07:59:46.981 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 07:59:47 compute-0 nova_compute[247704]: 2026-01-31 07:59:47.172 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 07:59:47 compute-0 ceph-mon[74496]: pgmap v1875: 305 pgs: 305 active+clean; 181 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 31 07:59:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 07:59:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:47.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 07:59:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 181 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 07:59:47 compute-0 nova_compute[247704]: 2026-01-31 07:59:47.316 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 07:59:47 compute-0 nova_compute[247704]: 2026-01-31 07:59:47.320 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 07:59:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:47.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 99c8ed63-af0e-4c98-bc7f-2dda49fdcc01 does not exist
Jan 31 07:59:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 65f5655c-2c8c-49c7-a0f4-e8ee8bdeb083 does not exist
Jan 31 07:59:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 73f3ea77-2641-4354-9f8a-3ef4679c1910 does not exist
Jan 31 07:59:47 compute-0 sudo[305593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:47 compute-0 sudo[305593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:47 compute-0 sudo[305593]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:47 compute-0 sudo[305618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 07:59:47 compute-0 sudo[305618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:47 compute-0 sudo[305618]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:48 compute-0 ceph-mon[74496]: pgmap v1876: 305 pgs: 305 active+clean; 181 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 07:59:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 07:59:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4264646496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/851711816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 07:59:48 compute-0 nova_compute[247704]: 2026-01-31 07:59:48.980 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 163 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Jan 31 07:59:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:49.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:49 compute-0 nova_compute[247704]: 2026-01-31 07:59:49.481 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 07:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 07:59:50 compute-0 ceph-mon[74496]: pgmap v1877: 305 pgs: 305 active+clean; 163 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Jan 31 07:59:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:51.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 155 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 31 07:59:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:51.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:52 compute-0 ceph-mon[74496]: pgmap v1878: 305 pgs: 305 active+clean; 155 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 31 07:59:52 compute-0 ovn_controller[149457]: 2026-01-31T07:59:52Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:40:8a 10.100.0.3
Jan 31 07:59:52 compute-0 ovn_controller[149457]: 2026-01-31T07:59:52Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:40:8a 10.100.0.3
Jan 31 07:59:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:53.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 145 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Jan 31 07:59:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:53.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:53 compute-0 nova_compute[247704]: 2026-01-31 07:59:53.983 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:54 compute-0 ceph-mon[74496]: pgmap v1879: 305 pgs: 305 active+clean; 145 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Jan 31 07:59:54 compute-0 nova_compute[247704]: 2026-01-31 07:59:54.483 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:55.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 164 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 224 op/s
Jan 31 07:59:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/18686290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 07:59:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:55.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:56 compute-0 ceph-mon[74496]: pgmap v1880: 305 pgs: 305 active+clean; 164 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 224 op/s
Jan 31 07:59:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:57.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 166 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 179 op/s
Jan 31 07:59:57 compute-0 nova_compute[247704]: 2026-01-31 07:59:57.370 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 07:59:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:57.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:57 compute-0 sshd-session[305648]: Invalid user solv from 45.148.10.240 port 57860
Jan 31 07:59:57 compute-0 sshd-session[305648]: Connection closed by invalid user solv 45.148.10.240 port 57860 [preauth]
Jan 31 07:59:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:58.035 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 07:59:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 07:59:58.036 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 07:59:58 compute-0 nova_compute[247704]: 2026-01-31 07:59:58.036 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:58 compute-0 ceph-mon[74496]: pgmap v1881: 305 pgs: 305 active+clean; 166 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 179 op/s
Jan 31 07:59:58 compute-0 nova_compute[247704]: 2026-01-31 07:59:58.986 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:59.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 167 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 222 op/s
Jan 31 07:59:59 compute-0 sudo[305650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:59 compute-0 sudo[305650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:59 compute-0 sudo[305650]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:59 compute-0 sudo[305675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 07:59:59 compute-0 sudo[305675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 07:59:59 compute-0 sudo[305675]: pam_unix(sudo:session): session closed for user root
Jan 31 07:59:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 07:59:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 07:59:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:59.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 07:59:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 07:59:59 compute-0 nova_compute[247704]: 2026-01-31 07:59:59.541 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 07:59:59 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 31 07:59:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:59:59.795909) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 07:59:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 31 07:59:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399796019, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1264, "num_deletes": 256, "total_data_size": 2132532, "memory_usage": 2165968, "flush_reason": "Manual Compaction"}
Jan 31 07:59:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 31 07:59:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846399912817, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2086952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40084, "largest_seqno": 41346, "table_properties": {"data_size": 2080869, "index_size": 3350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12959, "raw_average_key_size": 19, "raw_value_size": 2068660, "raw_average_value_size": 3172, "num_data_blocks": 147, "num_entries": 652, "num_filter_entries": 652, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846285, "oldest_key_time": 1769846285, "file_creation_time": 1769846399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 31 07:59:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 116996 microseconds, and 7066 cpu microseconds.
Jan 31 07:59:59 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:00:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:59:59.912901) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2086952 bytes OK
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-07:59:59.912933) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.070454) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.070511) EVENT_LOG_v1 {"time_micros": 1769846400070499, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.070543) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2126920, prev total WAL file size 2142973, number of live WAL files 2.
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.072067) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2038KB)], [86(9314KB)]
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400072201, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11624976, "oldest_snapshot_seqno": -1}
Jan 31 08:00:00 compute-0 ceph-mon[74496]: pgmap v1882: 305 pgs: 305 active+clean; 167 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 222 op/s
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6926 keys, 11452038 bytes, temperature: kUnknown
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400530246, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 11452038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11405313, "index_size": 28309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 178221, "raw_average_key_size": 25, "raw_value_size": 11280898, "raw_average_value_size": 1628, "num_data_blocks": 1133, "num_entries": 6926, "num_filter_entries": 6926, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769846400, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.530671) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11452038 bytes
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.570458) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.4 rd, 25.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(11.1) write-amplify(5.5) OK, records in: 7455, records dropped: 529 output_compression: NoCompression
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.570526) EVENT_LOG_v1 {"time_micros": 1769846400570494, "job": 50, "event": "compaction_finished", "compaction_time_micros": 458174, "compaction_time_cpu_micros": 23758, "output_level": 6, "num_output_files": 1, "total_output_size": 11452038, "num_input_records": 7455, "num_output_records": 6926, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400571071, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846400572601, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.071877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.572696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.572701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.572702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.572704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:00.572705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:01.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 167 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:00:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:01.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:01 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 08:00:01 compute-0 podman[305702]: 2026-01-31 08:00:01.930055281 +0000 UTC m=+0.100443358 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:00:02 compute-0 kernel: tap8833e127-31 (unregistering): left promiscuous mode
Jan 31 08:00:02 compute-0 NetworkManager[49108]: <info>  [1769846402.6187] device (tap8833e127-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:00:02 compute-0 ovn_controller[149457]: 2026-01-31T08:00:02Z|00348|binding|INFO|Releasing lport 8833e127-3186-4142-8fd1-89a06d960d79 from this chassis (sb_readonly=0)
Jan 31 08:00:02 compute-0 ovn_controller[149457]: 2026-01-31T08:00:02Z|00349|binding|INFO|Setting lport 8833e127-3186-4142-8fd1-89a06d960d79 down in Southbound
Jan 31 08:00:02 compute-0 nova_compute[247704]: 2026-01-31 08:00:02.625 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:02 compute-0 ovn_controller[149457]: 2026-01-31T08:00:02Z|00350|binding|INFO|Removing iface tap8833e127-31 ovn-installed in OVS
Jan 31 08:00:02 compute-0 nova_compute[247704]: 2026-01-31 08:00:02.628 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:02 compute-0 nova_compute[247704]: 2026-01-31 08:00:02.639 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:02 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000057.scope: Deactivated successfully.
Jan 31 08:00:02 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000057.scope: Consumed 14.369s CPU time.
Jan 31 08:00:02 compute-0 systemd-machined[214448]: Machine qemu-35-instance-00000057 terminated.
Jan 31 08:00:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:02.734 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:40:8a 10.100.0.3'], port_security=['fa:16:3e:6a:40:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=8833e127-3186-4142-8fd1-89a06d960d79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:00:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:02.736 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 8833e127-3186-4142-8fd1-89a06d960d79 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis
Jan 31 08:00:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:02.738 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:00:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:02.740 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[579ca45c-de0f-4185-9706-bba88d6c3f77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:02.741 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore
Jan 31 08:00:02 compute-0 ceph-mon[74496]: pgmap v1883: 305 pgs: 305 active+clean; 167 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:00:02 compute-0 nova_compute[247704]: 2026-01-31 08:00:02.845 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:02 compute-0 nova_compute[247704]: 2026-01-31 08:00:02.848 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:03 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[304932]: [NOTICE]   (304959) : haproxy version is 2.8.14-c23fe91
Jan 31 08:00:03 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[304932]: [NOTICE]   (304959) : path to executable is /usr/sbin/haproxy
Jan 31 08:00:03 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[304932]: [WARNING]  (304959) : Exiting Master process...
Jan 31 08:00:03 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[304932]: [WARNING]  (304959) : Exiting Master process...
Jan 31 08:00:03 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[304932]: [ALERT]    (304959) : Current worker (304961) exited with code 143 (Terminated)
Jan 31 08:00:03 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[304932]: [WARNING]  (304959) : All workers exited. Exiting... (0)
Jan 31 08:00:03 compute-0 systemd[1]: libpod-07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324.scope: Deactivated successfully.
Jan 31 08:00:03 compute-0 podman[305755]: 2026-01-31 08:00:03.019259206 +0000 UTC m=+0.198482367 container died 07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:03.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 160 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.400 247708 INFO nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance shutdown successfully after 16 seconds.
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.406 247708 INFO nova.virt.libvirt.driver [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance destroyed successfully.
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.411 247708 INFO nova.virt.libvirt.driver [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance destroyed successfully.
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.412 247708 DEBUG nova.virt.libvirt.vif [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1843296913',display_name='tempest-ServerDiskConfigTestJSON-server-1843296913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1843296913',id=87,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:59:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fectldu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:59:43Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.412 247708 DEBUG nova.network.os_vif_util [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.413 247708 DEBUG nova.network.os_vif_util [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.414 247708 DEBUG os_vif [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.415 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.415 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8833e127-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.417 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.418 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.422 247708 INFO os_vif [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31')
Jan 31 08:00:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:03.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324-userdata-shm.mount: Deactivated successfully.
Jan 31 08:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-66b5de6de1c1870f0d0ee6fb0a022068449033ab8cd95e792c4cd9a67b004375-merged.mount: Deactivated successfully.
Jan 31 08:00:03 compute-0 podman[305755]: 2026-01-31 08:00:03.529424984 +0000 UTC m=+0.708648175 container cleanup 07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:03 compute-0 systemd[1]: libpod-conmon-07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324.scope: Deactivated successfully.
Jan 31 08:00:03 compute-0 podman[305813]: 2026-01-31 08:00:03.61053199 +0000 UTC m=+0.062634837 container remove 07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.616 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfd30b2-4397-473e-ba99-dc936310a9e3]: (4, ('Sat Jan 31 08:00:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324)\n07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324\nSat Jan 31 08:00:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324)\n07dcb513f8fcd7c134d903d56744cd0ba4117774a48161a3387bda5d9bc55324\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.619 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b932352f-ff1c-4ef4-ae32-3571cad951ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.620 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.622 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:03 compute-0 kernel: tap121329c8-20: left promiscuous mode
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.626 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[61bafe77-76d3-492b-a750-6d5363a1c27d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.628 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.643 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[32bccb32-5264-4047-8c6d-9d9dc3f3f40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.645 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c1beb471-8af2-43d1-b466-361d9d3cacf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.658 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3be16917-f8cc-4819-82ab-bd8f57fde3ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662002, 'reachable_time': 33293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305827, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.661 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:00:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:03.661 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[9aba848c-0037-4201-9a2e-d6a0f84cf755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:03 compute-0 ceph-mon[74496]: pgmap v1884: 305 pgs: 305 active+clean; 160 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Jan 31 08:00:03 compute-0 nova_compute[247704]: 2026-01-31 08:00:03.990 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:05 compute-0 nova_compute[247704]: 2026-01-31 08:00:05.158 247708 INFO nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Deleting instance files /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_del
Jan 31 08:00:05 compute-0 nova_compute[247704]: 2026-01-31 08:00:05.159 247708 INFO nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Deletion of /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_del complete
Jan 31 08:00:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:05.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 138 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 142 op/s
Jan 31 08:00:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:05.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.398 247708 DEBUG nova.compute.manager [req-b10e99e6-b866-44bc-8359-e754ede47677 req-896c7766-973b-408d-82e4-d1c5c68720eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-unplugged-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.399 247708 DEBUG oslo_concurrency.lockutils [req-b10e99e6-b866-44bc-8359-e754ede47677 req-896c7766-973b-408d-82e4-d1c5c68720eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.399 247708 DEBUG oslo_concurrency.lockutils [req-b10e99e6-b866-44bc-8359-e754ede47677 req-896c7766-973b-408d-82e4-d1c5c68720eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.400 247708 DEBUG oslo_concurrency.lockutils [req-b10e99e6-b866-44bc-8359-e754ede47677 req-896c7766-973b-408d-82e4-d1c5c68720eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.400 247708 DEBUG nova.compute.manager [req-b10e99e6-b866-44bc-8359-e754ede47677 req-896c7766-973b-408d-82e4-d1c5c68720eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] No waiting events found dispatching network-vif-unplugged-8833e127-3186-4142-8fd1-89a06d960d79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.400 247708 WARNING nova.compute.manager [req-b10e99e6-b866-44bc-8359-e754ede47677 req-896c7766-973b-408d-82e4-d1c5c68720eb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received unexpected event network-vif-unplugged-8833e127-3186-4142-8fd1-89a06d960d79 for instance with vm_state active and task_state rebuild_block_device_mapping.
Jan 31 08:00:06 compute-0 ceph-mon[74496]: pgmap v1885: 305 pgs: 305 active+clean; 138 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 142 op/s
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.658 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.659 247708 INFO nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Creating image(s)
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.699 247708 DEBUG nova.storage.rbd_utils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.735 247708 DEBUG nova.storage.rbd_utils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.764 247708 DEBUG nova.storage.rbd_utils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.769 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.841 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.842 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.842 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.843 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "8c488581cdd7eb690478040e04ee9da4cb107c7c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.874 247708 DEBUG nova.storage.rbd_utils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:06 compute-0 nova_compute[247704]: 2026-01-31 08:00:06.878 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:07.038 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:07.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 105 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 103 op/s
Jan 31 08:00:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:07.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.019 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.107 247708 DEBUG nova.storage.rbd_utils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] resizing rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.456 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:08 compute-0 ceph-mon[74496]: pgmap v1886: 305 pgs: 305 active+clean; 105 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 103 op/s
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.817 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.817 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Ensure instance console log exists: /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.818 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.818 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.819 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.821 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Start _get_guest_xml network_info=[{"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.826 247708 WARNING nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.846 247708 DEBUG nova.virt.libvirt.host [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.847 247708 DEBUG nova.virt.libvirt.host [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.863 247708 DEBUG nova.virt.libvirt.host [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.864 247708 DEBUG nova.virt.libvirt.host [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.865 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.865 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.866 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.866 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.866 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.866 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.867 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.867 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.867 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.867 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.868 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.868 247708 DEBUG nova.virt.hardware [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.868 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.993 247708 DEBUG nova.compute.manager [req-07e81a2e-6c68-4fa8-95fc-23ec1818e2e0 req-5e608684-26bb-44b7-853d-71a6acd4c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.994 247708 DEBUG oslo_concurrency.lockutils [req-07e81a2e-6c68-4fa8-95fc-23ec1818e2e0 req-5e608684-26bb-44b7-853d-71a6acd4c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.994 247708 DEBUG oslo_concurrency.lockutils [req-07e81a2e-6c68-4fa8-95fc-23ec1818e2e0 req-5e608684-26bb-44b7-853d-71a6acd4c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.994 247708 DEBUG oslo_concurrency.lockutils [req-07e81a2e-6c68-4fa8-95fc-23ec1818e2e0 req-5e608684-26bb-44b7-853d-71a6acd4c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.995 247708 DEBUG nova.compute.manager [req-07e81a2e-6c68-4fa8-95fc-23ec1818e2e0 req-5e608684-26bb-44b7-853d-71a6acd4c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] No waiting events found dispatching network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.995 247708 WARNING nova.compute.manager [req-07e81a2e-6c68-4fa8-95fc-23ec1818e2e0 req-5e608684-26bb-44b7-853d-71a6acd4c3cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received unexpected event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 for instance with vm_state active and task_state rebuild_spawning.
Jan 31 08:00:08 compute-0 nova_compute[247704]: 2026-01-31 08:00:08.995 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:09 compute-0 nova_compute[247704]: 2026-01-31 08:00:09.083 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:09.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 88 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Jan 31 08:00:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:09.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:00:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829613338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:09 compute-0 nova_compute[247704]: 2026-01-31 08:00:09.628 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:09 compute-0 nova_compute[247704]: 2026-01-31 08:00:09.673 247708 DEBUG nova.storage.rbd_utils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:09 compute-0 nova_compute[247704]: 2026-01-31 08:00:09.679 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3673352418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2829613338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:09.959513) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846409959582, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 346, "num_deletes": 251, "total_data_size": 209209, "memory_usage": 217032, "flush_reason": "Manual Compaction"}
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846409963245, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 207473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41347, "largest_seqno": 41692, "table_properties": {"data_size": 205294, "index_size": 343, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5437, "raw_average_key_size": 18, "raw_value_size": 201023, "raw_average_value_size": 686, "num_data_blocks": 15, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846399, "oldest_key_time": 1769846399, "file_creation_time": 1769846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 3755 microseconds, and 1137 cpu microseconds.
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:09.963283) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 207473 bytes OK
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:09.963297) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:09.965210) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:09.965252) EVENT_LOG_v1 {"time_micros": 1769846409965242, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:09.965277) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 206872, prev total WAL file size 206872, number of live WAL files 2.
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:09.965855) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(202KB)], [89(10MB)]
Jan 31 08:00:09 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846409965902, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11659511, "oldest_snapshot_seqno": -1}
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6709 keys, 9719688 bytes, temperature: kUnknown
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410071229, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 9719688, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9675868, "index_size": 25916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 174404, "raw_average_key_size": 25, "raw_value_size": 9556768, "raw_average_value_size": 1424, "num_data_blocks": 1024, "num_entries": 6709, "num_filter_entries": 6709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:10.071485) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9719688 bytes
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:10.073064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.6 rd, 92.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(103.0) write-amplify(46.8) OK, records in: 7219, records dropped: 510 output_compression: NoCompression
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:10.073112) EVENT_LOG_v1 {"time_micros": 1769846410073083, "job": 52, "event": "compaction_finished", "compaction_time_micros": 105402, "compaction_time_cpu_micros": 32623, "output_level": 6, "num_output_files": 1, "total_output_size": 9719688, "num_input_records": 7219, "num_output_records": 6709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410073255, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846410074943, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:09.965757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:10.075063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:10.075071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:10.075075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:10.075077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:00:10.075078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:00:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:00:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388081703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.178 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.181 247708 DEBUG nova.virt.libvirt.vif [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1843296913',display_name='tempest-ServerDiskConfigTestJSON-server-1843296913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1843296913',id=87,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:59:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fectldu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:00:05Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.181 247708 DEBUG nova.network.os_vif_util [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.183 247708 DEBUG nova.network.os_vif_util [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.188 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <uuid>0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99</uuid>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <name>instance-00000057</name>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1843296913</nova:name>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:00:08</nova:creationTime>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="40cf2ff3-f7ff-4843-b4ab-b7dcc843006f"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <nova:port uuid="8833e127-3186-4142-8fd1-89a06d960d79">
Jan 31 08:00:10 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <system>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <entry name="serial">0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99</entry>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <entry name="uuid">0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99</entry>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </system>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <os>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   </os>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <features>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   </features>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk">
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       </source>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config">
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       </source>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:00:10 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:6a:40:8a"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <target dev="tap8833e127-31"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/console.log" append="off"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <video>
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </video>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:00:10 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:00:10 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:00:10 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:00:10 compute-0 nova_compute[247704]: </domain>
Jan 31 08:00:10 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.190 247708 DEBUG nova.compute.manager [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Preparing to wait for external event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.191 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.191 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.191 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.192 247708 DEBUG nova.virt.libvirt.vif [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1843296913',display_name='tempest-ServerDiskConfigTestJSON-server-1843296913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1843296913',id=87,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:59:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fectldu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:00:05Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.193 247708 DEBUG nova.network.os_vif_util [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.193 247708 DEBUG nova.network.os_vif_util [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.194 247708 DEBUG os_vif [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.195 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.196 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.198 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.198 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8833e127-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.199 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8833e127-31, col_values=(('external_ids', {'iface-id': '8833e127-3186-4142-8fd1-89a06d960d79', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:40:8a', 'vm-uuid': '0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.201 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:10 compute-0 NetworkManager[49108]: <info>  [1769846410.2023] manager: (tap8833e127-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.204 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.206 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.207 247708 INFO os_vif [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31')
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.674 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.674 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.674 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:6a:40:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.675 247708 INFO nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Using config drive
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.700 247708 DEBUG nova.storage.rbd_utils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:10 compute-0 ceph-mon[74496]: pgmap v1887: 305 pgs: 305 active+clean; 88 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Jan 31 08:00:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/388081703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:10 compute-0 nova_compute[247704]: 2026-01-31 08:00:10.920 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:00:11 compute-0 nova_compute[247704]: 2026-01-31 08:00:11.151 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'keypairs' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:00:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:11.172 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:11.172 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:11.173 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:11.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 88 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:00:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:11.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:11 compute-0 ceph-mon[74496]: pgmap v1888: 305 pgs: 305 active+clean; 88 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:00:12 compute-0 nova_compute[247704]: 2026-01-31 08:00:12.929 247708 INFO nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Creating config drive at /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config
Jan 31 08:00:12 compute-0 nova_compute[247704]: 2026-01-31 08:00:12.935 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmk92klfj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.065 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmk92klfj" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.109 247708 DEBUG nova.storage.rbd_utils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.114 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:13.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.332 247708 DEBUG oslo_concurrency.processutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.333 247708 INFO nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Deleting local config drive /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99/disk.config because it was imported into RBD.
Jan 31 08:00:13 compute-0 kernel: tap8833e127-31: entered promiscuous mode
Jan 31 08:00:13 compute-0 NetworkManager[49108]: <info>  [1769846413.3837] manager: (tap8833e127-31): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Jan 31 08:00:13 compute-0 ovn_controller[149457]: 2026-01-31T08:00:13Z|00351|binding|INFO|Claiming lport 8833e127-3186-4142-8fd1-89a06d960d79 for this chassis.
Jan 31 08:00:13 compute-0 ovn_controller[149457]: 2026-01-31T08:00:13Z|00352|binding|INFO|8833e127-3186-4142-8fd1-89a06d960d79: Claiming fa:16:3e:6a:40:8a 10.100.0.3
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.422 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:13 compute-0 ovn_controller[149457]: 2026-01-31T08:00:13Z|00353|binding|INFO|Setting lport 8833e127-3186-4142-8fd1-89a06d960d79 ovn-installed in OVS
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.433 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:13 compute-0 systemd-udevd[306135]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:00:13 compute-0 systemd-machined[214448]: New machine qemu-36-instance-00000057.
Jan 31 08:00:13 compute-0 NetworkManager[49108]: <info>  [1769846413.4570] device (tap8833e127-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:00:13 compute-0 NetworkManager[49108]: <info>  [1769846413.4579] device (tap8833e127-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:00:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:13.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:13 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000057.
Jan 31 08:00:13 compute-0 ovn_controller[149457]: 2026-01-31T08:00:13Z|00354|binding|INFO|Setting lport 8833e127-3186-4142-8fd1-89a06d960d79 up in Southbound
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.642 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:40:8a 10.100.0.3'], port_security=['fa:16:3e:6a:40:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '5', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=8833e127-3186-4142-8fd1-89a06d960d79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.646 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 8833e127-3186-4142-8fd1-89a06d960d79 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.649 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.656 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b53373bc-a44b-40e4-8315-32a06470a706]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.657 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.660 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.660 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[473a6117-fd76-43c6-bc2c-d5d51913b030]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.661 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b383b6-3172-445d-89f0-008e759ab907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.681 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[f128383e-8f72-46fc-bdd3-6299a25bbcca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.695 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5386114d-a7f2-4f0d-ba1e-739df52505b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.723 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ef188925-4692-4190-9262-31349ac88d40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 NetworkManager[49108]: <info>  [1769846413.7297] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.728 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[515de18a-1f01-4d61-9b90-3a07bec79a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.766 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfd1bf0-14d5-4edd-9565-48c6a4fb494a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.769 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[13d88f3a-ebb6-4b65-aceb-1865736983dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 NetworkManager[49108]: <info>  [1769846413.7873] device (tap121329c8-20): carrier: link connected
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.794 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ea98e29b-cec0-4797-b0e0-b7c2ebe6e5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.809 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[235364b4-4599-429d-af5e-f5f3e2569684]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665640, 'reachable_time': 37199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306169, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.822 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a77337-b7b6-46bf-8b74-c89b2f1f6ded]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 665640, 'tstamp': 665640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306170, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.832 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b01f60-9fc9-42ae-9793-c59c4c7f8313]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 105], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665640, 'reachable_time': 37199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306171, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.857 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[578b176e-fa18-4d54-9996-cae010e8aafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.897 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e26416-8ed8-4f99-a1e1-5fa2b140950e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.902 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.903 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.904 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.906 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:13 compute-0 NetworkManager[49108]: <info>  [1769846413.9069] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Jan 31 08:00:13 compute-0 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.908 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.912 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:13 compute-0 ovn_controller[149457]: 2026-01-31T08:00:13Z|00355|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.914 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.916 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.916 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d568a11c-62ff-44e7-90a1-e426dc87a601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.917 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:00:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:13.918 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.919 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:13 compute-0 nova_compute[247704]: 2026-01-31 08:00:13.994 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.280 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.281 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846414.279445, 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.281 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] VM Started (Lifecycle Event)
Jan 31 08:00:14 compute-0 podman[306242]: 2026-01-31 08:00:14.296380137 +0000 UTC m=+0.075060990 container create 11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:14 compute-0 systemd[1]: Started libpod-conmon-11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14.scope.
Jan 31 08:00:14 compute-0 podman[306242]: 2026-01-31 08:00:14.260773349 +0000 UTC m=+0.039454292 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:00:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d67cba7fc99c6057b70b888a524abd32c4db08b8dcadbf46d4ce4d48ce42e19e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:14 compute-0 podman[306242]: 2026-01-31 08:00:14.376172451 +0000 UTC m=+0.154853334 container init 11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.383 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:00:14 compute-0 podman[306242]: 2026-01-31 08:00:14.384255188 +0000 UTC m=+0.162936041 container start 11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.388 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846414.2807045, 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.389 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] VM Paused (Lifecycle Event)
Jan 31 08:00:14 compute-0 ceph-mon[74496]: pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:00:14 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[306259]: [NOTICE]   (306263) : New worker (306265) forked
Jan 31 08:00:14 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[306259]: [NOTICE]   (306263) : Loading success.
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.568 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.572 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:00:14 compute-0 nova_compute[247704]: 2026-01-31 08:00:14.776 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 08:00:15 compute-0 nova_compute[247704]: 2026-01-31 08:00:15.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:15.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 31 08:00:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:15.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:16 compute-0 ceph-mon[74496]: pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 31 08:00:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3962088818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:16 compute-0 podman[306275]: 2026-01-31 08:00:16.882274393 +0000 UTC m=+0.056337453 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 08:00:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:17.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 08:00:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:17 compute-0 nova_compute[247704]: 2026-01-31 08:00:17.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:18 compute-0 ceph-mon[74496]: pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 08:00:18 compute-0 nova_compute[247704]: 2026-01-31 08:00:18.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:18 compute-0 nova_compute[247704]: 2026-01-31 08:00:18.995 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:19.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 31 08:00:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:19 compute-0 sudo[306295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:19 compute-0 sudo[306295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:19 compute-0 sudo[306295]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:19 compute-0 sudo[306321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:19 compute-0 sudo[306321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:19 compute-0 sudo[306321]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:19 compute-0 nova_compute[247704]: 2026-01-31 08:00:19.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:19 compute-0 nova_compute[247704]: 2026-01-31 08:00:19.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:00:19 compute-0 nova_compute[247704]: 2026-01-31 08:00:19.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:00:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:19 compute-0 nova_compute[247704]: 2026-01-31 08:00:19.950 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:00:19 compute-0 nova_compute[247704]: 2026-01-31 08:00:19.950 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:00:19 compute-0 nova_compute[247704]: 2026-01-31 08:00:19.951 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:00:19 compute-0 nova_compute[247704]: 2026-01-31 08:00:19.951 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:00:20
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'backups', '.mgr', 'default.rgw.meta', 'vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:00:20 compute-0 nova_compute[247704]: 2026-01-31 08:00:20.205 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:20 compute-0 ceph-mon[74496]: pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:00:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:21.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 7.8 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 08:00:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:21.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1411870379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.444 247708 DEBUG nova.compute.manager [req-77a3708e-59fe-481d-b2d1-9a9c6321d5e7 req-2bbe3463-0643-4dea-bea5-85f8a1bfa3e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.445 247708 DEBUG oslo_concurrency.lockutils [req-77a3708e-59fe-481d-b2d1-9a9c6321d5e7 req-2bbe3463-0643-4dea-bea5-85f8a1bfa3e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.445 247708 DEBUG oslo_concurrency.lockutils [req-77a3708e-59fe-481d-b2d1-9a9c6321d5e7 req-2bbe3463-0643-4dea-bea5-85f8a1bfa3e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.446 247708 DEBUG oslo_concurrency.lockutils [req-77a3708e-59fe-481d-b2d1-9a9c6321d5e7 req-2bbe3463-0643-4dea-bea5-85f8a1bfa3e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.446 247708 DEBUG nova.compute.manager [req-77a3708e-59fe-481d-b2d1-9a9c6321d5e7 req-2bbe3463-0643-4dea-bea5-85f8a1bfa3e7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Processing event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.447 247708 DEBUG nova.compute.manager [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.452 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846422.4524593, 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.453 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] VM Resumed (Lifecycle Event)
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.455 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.459 247708 INFO nova.virt.libvirt.driver [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance spawned successfully.
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.460 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:00:22 compute-0 ceph-mon[74496]: pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 7.8 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 08:00:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2543805942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.588 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.593 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.594 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.595 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.595 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.596 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.597 247708 DEBUG nova.virt.libvirt.driver [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.603 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:00:22 compute-0 nova_compute[247704]: 2026-01-31 08:00:22.834 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 08:00:23 compute-0 nova_compute[247704]: 2026-01-31 08:00:23.111 247708 DEBUG nova.compute.manager [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:00:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:23.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:00:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:23.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1694252163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:23 compute-0 nova_compute[247704]: 2026-01-31 08:00:23.855 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:23 compute-0 nova_compute[247704]: 2026-01-31 08:00:23.856 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:23 compute-0 nova_compute[247704]: 2026-01-31 08:00:23.856 247708 DEBUG nova.objects.instance [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 08:00:23 compute-0 nova_compute[247704]: 2026-01-31 08:00:23.996 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:24 compute-0 ceph-mon[74496]: pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:00:24 compute-0 nova_compute[247704]: 2026-01-31 08:00:24.867 247708 DEBUG oslo_concurrency.lockutils [None req-7dfd439e-b828-453d-a2bb-50d90cd7b43d 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:25 compute-0 nova_compute[247704]: 2026-01-31 08:00:25.207 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:25.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 47 op/s
Jan 31 08:00:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:25.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:25 compute-0 ceph-mon[74496]: pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 47 op/s
Jan 31 08:00:26 compute-0 nova_compute[247704]: 2026-01-31 08:00:26.129 247708 DEBUG nova.compute.manager [req-bca5c328-969d-45c7-9a96-0750995989da req-d49d6d3d-4494-4bf8-8353-cbd81de8540f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:26 compute-0 nova_compute[247704]: 2026-01-31 08:00:26.130 247708 DEBUG oslo_concurrency.lockutils [req-bca5c328-969d-45c7-9a96-0750995989da req-d49d6d3d-4494-4bf8-8353-cbd81de8540f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:26 compute-0 nova_compute[247704]: 2026-01-31 08:00:26.130 247708 DEBUG oslo_concurrency.lockutils [req-bca5c328-969d-45c7-9a96-0750995989da req-d49d6d3d-4494-4bf8-8353-cbd81de8540f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:26 compute-0 nova_compute[247704]: 2026-01-31 08:00:26.130 247708 DEBUG oslo_concurrency.lockutils [req-bca5c328-969d-45c7-9a96-0750995989da req-d49d6d3d-4494-4bf8-8353-cbd81de8540f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:26 compute-0 nova_compute[247704]: 2026-01-31 08:00:26.130 247708 DEBUG nova.compute.manager [req-bca5c328-969d-45c7-9a96-0750995989da req-d49d6d3d-4494-4bf8-8353-cbd81de8540f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] No waiting events found dispatching network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:00:26 compute-0 nova_compute[247704]: 2026-01-31 08:00:26.130 247708 WARNING nova.compute.manager [req-bca5c328-969d-45c7-9a96-0750995989da req-d49d6d3d-4494-4bf8-8353-cbd81de8540f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received unexpected event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 for instance with vm_state active and task_state None.
Jan 31 08:00:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:27.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 46 op/s
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.329 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Updating instance_info_cache with network_info: [{"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.423 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.424 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.424 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.424 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.425 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.425 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.425 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.468 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.469 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.470 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.470 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.470 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:00:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982580096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:27 compute-0 nova_compute[247704]: 2026-01-31 08:00:27.922 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.165 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.166 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.324 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.325 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4422MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.325 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.326 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:28 compute-0 ceph-mon[74496]: pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 12 KiB/s wr, 46 op/s
Jan 31 08:00:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2982580096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.643 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.646 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.646 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.647 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.647 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.649 247708 INFO nova.compute.manager [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Terminating instance
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.653 247708 DEBUG nova.compute.manager [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:00:28 compute-0 kernel: tap8833e127-31 (unregistering): left promiscuous mode
Jan 31 08:00:28 compute-0 NetworkManager[49108]: <info>  [1769846428.6985] device (tap8833e127-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:00:28 compute-0 ovn_controller[149457]: 2026-01-31T08:00:28Z|00356|binding|INFO|Releasing lport 8833e127-3186-4142-8fd1-89a06d960d79 from this chassis (sb_readonly=0)
Jan 31 08:00:28 compute-0 ovn_controller[149457]: 2026-01-31T08:00:28Z|00357|binding|INFO|Setting lport 8833e127-3186-4142-8fd1-89a06d960d79 down in Southbound
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.704 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:28 compute-0 ovn_controller[149457]: 2026-01-31T08:00:28Z|00358|binding|INFO|Removing iface tap8833e127-31 ovn-installed in OVS
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.718 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:28 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000057.scope: Deactivated successfully.
Jan 31 08:00:28 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000057.scope: Consumed 7.257s CPU time.
Jan 31 08:00:28 compute-0 systemd-machined[214448]: Machine qemu-36-instance-00000057 terminated.
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.923 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.930 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.934 247708 INFO nova.virt.libvirt.driver [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Instance destroyed successfully.
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.935 247708 DEBUG nova.objects.instance [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:00:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:28.952 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:40:8a 10.100.0.3'], port_security=['fa:16:3e:6a:40:8a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '6', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=8833e127-3186-4142-8fd1-89a06d960d79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:00:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:28.954 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 8833e127-3186-4142-8fd1-89a06d960d79 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis
Jan 31 08:00:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:28.956 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:00:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:28.957 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a7029c4d-af5c-40bf-bcc1-8d19363ecb34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:28.958 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore
Jan 31 08:00:28 compute-0 nova_compute[247704]: 2026-01-31 08:00:28.998 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.048 247708 DEBUG nova.virt.libvirt.vif [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:59:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1843296913',display_name='tempest-ServerDiskConfigTestJSON-server-1843296913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1843296913',id=87,image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:00:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fectldu6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:00:24Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.049 247708 DEBUG nova.network.os_vif_util [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "8833e127-3186-4142-8fd1-89a06d960d79", "address": "fa:16:3e:6a:40:8a", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8833e127-31", "ovs_interfaceid": "8833e127-3186-4142-8fd1-89a06d960d79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.049 247708 DEBUG nova.network.os_vif_util [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.050 247708 DEBUG os_vif [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.051 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8833e127-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.053 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.054 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.058 247708 INFO os_vif [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:40:8a,bridge_name='br-int',has_traffic_filtering=True,id=8833e127-3186-4142-8fd1-89a06d960d79,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8833e127-31')
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.076 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.076 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.077 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:00:29 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[306259]: [NOTICE]   (306263) : haproxy version is 2.8.14-c23fe91
Jan 31 08:00:29 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[306259]: [NOTICE]   (306263) : path to executable is /usr/sbin/haproxy
Jan 31 08:00:29 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[306259]: [WARNING]  (306263) : Exiting Master process...
Jan 31 08:00:29 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[306259]: [ALERT]    (306263) : Current worker (306265) exited with code 143 (Terminated)
Jan 31 08:00:29 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[306259]: [WARNING]  (306263) : All workers exited. Exiting... (0)
Jan 31 08:00:29 compute-0 systemd[1]: libpod-11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14.scope: Deactivated successfully.
Jan 31 08:00:29 compute-0 podman[306411]: 2026-01-31 08:00:29.104809247 +0000 UTC m=+0.046487713 container died 11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 08:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d67cba7fc99c6057b70b888a524abd32c4db08b8dcadbf46d4ce4d48ce42e19e-merged.mount: Deactivated successfully.
Jan 31 08:00:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14-userdata-shm.mount: Deactivated successfully.
Jan 31 08:00:29 compute-0 podman[306411]: 2026-01-31 08:00:29.147346754 +0000 UTC m=+0.089025200 container cleanup 11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.150 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:29 compute-0 systemd[1]: libpod-conmon-11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14.scope: Deactivated successfully.
Jan 31 08:00:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:29.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 31 08:00:29 compute-0 podman[306453]: 2026-01-31 08:00:29.463966657 +0000 UTC m=+0.289843732 container remove 11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.471 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5792039d-6171-4070-840b-1e5d42b6a18a]: (4, ('Sat Jan 31 08:00:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14)\n11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14\nSat Jan 31 08:00:29 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14)\n11b454ff0e586b87e49ee4d7195de53887a9a8eec216dedd2e42b139067c2d14\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.473 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fb19aa7f-f830-4f5a-be62-8af684c4d65b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.474 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:00:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:29.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:29 compute-0 kernel: tap121329c8-20: left promiscuous mode
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.477 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.489 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.492 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[eabff611-0490-4e0c-aadd-b8f6ec3ed974]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.507 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[45dc6d23-bf68-429f-8b02-a6949106350e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.508 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[24454b40-6838-4e15-8567-bfef71dbe66b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.523 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[85a2413d-9185-44f9-b2f4-927cd6d599f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 665633, 'reachable_time': 32201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306489, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.525 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:00:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:00:29.526 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[579ba0bc-ad32-4375-8cdc-b9d212ada0e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:00:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 08:00:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:00:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59050685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.614 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.622 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:00:29 compute-0 nova_compute[247704]: 2026-01-31 08:00:29.732 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.202 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.203 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.226 247708 DEBUG nova.compute.manager [req-6d50af1d-2e8f-49c6-9c17-f3d7a2b897f9 req-e8698e7c-fb05-4114-b6a8-4a5193928f3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-unplugged-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.227 247708 DEBUG oslo_concurrency.lockutils [req-6d50af1d-2e8f-49c6-9c17-f3d7a2b897f9 req-e8698e7c-fb05-4114-b6a8-4a5193928f3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.227 247708 DEBUG oslo_concurrency.lockutils [req-6d50af1d-2e8f-49c6-9c17-f3d7a2b897f9 req-e8698e7c-fb05-4114-b6a8-4a5193928f3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.227 247708 DEBUG oslo_concurrency.lockutils [req-6d50af1d-2e8f-49c6-9c17-f3d7a2b897f9 req-e8698e7c-fb05-4114-b6a8-4a5193928f3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.227 247708 DEBUG nova.compute.manager [req-6d50af1d-2e8f-49c6-9c17-f3d7a2b897f9 req-e8698e7c-fb05-4114-b6a8-4a5193928f3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] No waiting events found dispatching network-vif-unplugged-8833e127-3186-4142-8fd1-89a06d960d79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.227 247708 DEBUG nova.compute.manager [req-6d50af1d-2e8f-49c6-9c17-f3d7a2b897f9 req-e8698e7c-fb05-4114-b6a8-4a5193928f3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-unplugged-8833e127-3186-4142-8fd1-89a06d960d79 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:00:30 compute-0 ceph-mon[74496]: pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 31 08:00:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/59050685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.848 247708 INFO nova.virt.libvirt.driver [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Deleting instance files /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_del
Jan 31 08:00:30 compute-0 nova_compute[247704]: 2026-01-31 08:00:30.849 247708 INFO nova.virt.libvirt.driver [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Deletion of /var/lib/nova/instances/0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99_del complete
Jan 31 08:00:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:31.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:31 compute-0 nova_compute[247704]: 2026-01-31 08:00:31.313 247708 INFO nova.compute.manager [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Took 2.66 seconds to destroy the instance on the hypervisor.
Jan 31 08:00:31 compute-0 nova_compute[247704]: 2026-01-31 08:00:31.314 247708 DEBUG oslo.service.loopingcall [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:00:31 compute-0 nova_compute[247704]: 2026-01-31 08:00:31.314 247708 DEBUG nova.compute.manager [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:00:31 compute-0 nova_compute[247704]: 2026-01-31 08:00:31.314 247708 DEBUG nova.network.neutron [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:00:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 KiB/s wr, 69 op/s
Jan 31 08:00:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:31.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:32 compute-0 ceph-mon[74496]: pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 KiB/s wr, 69 op/s
Jan 31 08:00:32 compute-0 podman[306494]: 2026-01-31 08:00:32.946481026 +0000 UTC m=+0.120294301 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:00:33 compute-0 nova_compute[247704]: 2026-01-31 08:00:33.117 247708 DEBUG nova.compute.manager [req-d4e0fd3d-2bc3-409a-a672-661f2abc1a58 req-f5e96399-c594-4f9c-84fa-db5df3acd541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:33 compute-0 nova_compute[247704]: 2026-01-31 08:00:33.117 247708 DEBUG oslo_concurrency.lockutils [req-d4e0fd3d-2bc3-409a-a672-661f2abc1a58 req-f5e96399-c594-4f9c-84fa-db5df3acd541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:33 compute-0 nova_compute[247704]: 2026-01-31 08:00:33.117 247708 DEBUG oslo_concurrency.lockutils [req-d4e0fd3d-2bc3-409a-a672-661f2abc1a58 req-f5e96399-c594-4f9c-84fa-db5df3acd541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:33 compute-0 nova_compute[247704]: 2026-01-31 08:00:33.118 247708 DEBUG oslo_concurrency.lockutils [req-d4e0fd3d-2bc3-409a-a672-661f2abc1a58 req-f5e96399-c594-4f9c-84fa-db5df3acd541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:33 compute-0 nova_compute[247704]: 2026-01-31 08:00:33.118 247708 DEBUG nova.compute.manager [req-d4e0fd3d-2bc3-409a-a672-661f2abc1a58 req-f5e96399-c594-4f9c-84fa-db5df3acd541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] No waiting events found dispatching network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:00:33 compute-0 nova_compute[247704]: 2026-01-31 08:00:33.118 247708 WARNING nova.compute.manager [req-d4e0fd3d-2bc3-409a-a672-661f2abc1a58 req-f5e96399-c594-4f9c-84fa-db5df3acd541 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received unexpected event network-vif-plugged-8833e127-3186-4142-8fd1-89a06d960d79 for instance with vm_state active and task_state deleting.
Jan 31 08:00:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:33.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 68 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 78 op/s
Jan 31 08:00:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:33 compute-0 ceph-mon[74496]: pgmap v1899: 305 pgs: 305 active+clean; 68 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 78 op/s
Jan 31 08:00:34 compute-0 nova_compute[247704]: 2026-01-31 08:00:34.008 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:34 compute-0 nova_compute[247704]: 2026-01-31 08:00:34.052 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:34 compute-0 nova_compute[247704]: 2026-01-31 08:00:34.853 247708 DEBUG nova.compute.manager [req-8dcf3025-6bd3-4c7e-b6c7-7658afd57338 req-e65788ff-b3e9-4b05-bc17-d42cc200730b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Received event network-vif-deleted-8833e127-3186-4142-8fd1-89a06d960d79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:00:34 compute-0 nova_compute[247704]: 2026-01-31 08:00:34.854 247708 INFO nova.compute.manager [req-8dcf3025-6bd3-4c7e-b6c7-7658afd57338 req-e65788ff-b3e9-4b05-bc17-d42cc200730b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Neutron deleted interface 8833e127-3186-4142-8fd1-89a06d960d79; detaching it from the instance and deleting it from the info cache
Jan 31 08:00:34 compute-0 nova_compute[247704]: 2026-01-31 08:00:34.854 247708 DEBUG nova.network.neutron [req-8dcf3025-6bd3-4c7e-b6c7-7658afd57338 req-e65788ff-b3e9-4b05-bc17-d42cc200730b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:00:34 compute-0 nova_compute[247704]: 2026-01-31 08:00:34.858 247708 DEBUG nova.network.neutron [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:00:35 compute-0 nova_compute[247704]: 2026-01-31 08:00:35.039 247708 INFO nova.compute.manager [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Took 3.72 seconds to deallocate network for instance.
Jan 31 08:00:35 compute-0 nova_compute[247704]: 2026-01-31 08:00:35.055 247708 DEBUG nova.compute.manager [req-8dcf3025-6bd3-4c7e-b6c7-7658afd57338 req-e65788ff-b3e9-4b05-bc17-d42cc200730b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Detach interface failed, port_id=8833e127-3186-4142-8fd1-89a06d960d79, reason: Instance 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:00:35 compute-0 nova_compute[247704]: 2026-01-31 08:00:35.167 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:35 compute-0 nova_compute[247704]: 2026-01-31 08:00:35.168 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007306497766610832 of space, bias 1.0, pg target 0.21919493299832496 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:00:35 compute-0 nova_compute[247704]: 2026-01-31 08:00:35.256 247708 DEBUG oslo_concurrency.processutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:35.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 91 op/s
Jan 31 08:00:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:00:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831992663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:35 compute-0 nova_compute[247704]: 2026-01-31 08:00:35.695 247708 DEBUG oslo_concurrency.processutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:35 compute-0 nova_compute[247704]: 2026-01-31 08:00:35.702 247708 DEBUG nova.compute.provider_tree [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:00:35 compute-0 nova_compute[247704]: 2026-01-31 08:00:35.757 247708 DEBUG nova.scheduler.client.report [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:00:36 compute-0 nova_compute[247704]: 2026-01-31 08:00:36.197 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:00:36 compute-0 nova_compute[247704]: 2026-01-31 08:00:36.457 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:36 compute-0 ceph-mon[74496]: pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 91 op/s
Jan 31 08:00:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1831992663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:36 compute-0 nova_compute[247704]: 2026-01-31 08:00:36.633 247708 INFO nova.scheduler.client.report [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Deleted allocations for instance 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99
Jan 31 08:00:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:37.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 1.5 KiB/s wr, 53 op/s
Jan 31 08:00:37 compute-0 nova_compute[247704]: 2026-01-31 08:00:37.475 247708 DEBUG oslo_concurrency.lockutils [None req-6b7d04ba-480d-42f0-aa73-d7e0534b3040 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:37.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:38 compute-0 ceph-mon[74496]: pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 1.5 KiB/s wr, 53 op/s
Jan 31 08:00:39 compute-0 nova_compute[247704]: 2026-01-31 08:00:39.009 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:39 compute-0 nova_compute[247704]: 2026-01-31 08:00:39.054 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:39.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 1.5 KiB/s wr, 53 op/s
Jan 31 08:00:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:39.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:39 compute-0 sudo[306546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:39 compute-0 sudo[306546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:39 compute-0 sudo[306546]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:39 compute-0 sudo[306571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:39 compute-0 sudo[306571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:39 compute-0 sudo[306571]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:40 compute-0 ceph-mon[74496]: pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 1.5 KiB/s wr, 53 op/s
Jan 31 08:00:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:41.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Jan 31 08:00:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:41.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:42 compute-0 ceph-mon[74496]: pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Jan 31 08:00:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/937994319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:43.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 426 B/s wr, 21 op/s
Jan 31 08:00:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:43 compute-0 ceph-mon[74496]: pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 426 B/s wr, 21 op/s
Jan 31 08:00:43 compute-0 nova_compute[247704]: 2026-01-31 08:00:43.933 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846428.9326367, 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:00:43 compute-0 nova_compute[247704]: 2026-01-31 08:00:43.934 247708 INFO nova.compute.manager [-] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] VM Stopped (Lifecycle Event)
Jan 31 08:00:43 compute-0 nova_compute[247704]: 2026-01-31 08:00:43.986 247708 DEBUG nova.compute.manager [None req-fa3b8926-0ddb-4227-bd53-ce93b17c66e4 - - - - - -] [instance: 0d6ddbfe-aabe-49b7-82c7-3bbc8ccf0e99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:00:44 compute-0 nova_compute[247704]: 2026-01-31 08:00:44.011 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:44 compute-0 nova_compute[247704]: 2026-01-31 08:00:44.070 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:00:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1479498858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:00:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:00:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1479498858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:00:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1479498858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:00:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1479498858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:00:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:45.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 60 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1021 KiB/s wr, 14 op/s
Jan 31 08:00:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:46 compute-0 ceph-mon[74496]: pgmap v1905: 305 pgs: 305 active+clean; 60 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1021 KiB/s wr, 14 op/s
Jan 31 08:00:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:47.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 71 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 1.3 MiB/s wr, 3 op/s
Jan 31 08:00:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:47.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:47 compute-0 podman[306600]: 2026-01-31 08:00:47.92893631 +0000 UTC m=+0.091444999 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:00:48 compute-0 sudo[306621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:48 compute-0 sudo[306621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-0 sudo[306621]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-0 sudo[306646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:48 compute-0 sudo[306646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-0 sudo[306646]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-0 sudo[306671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:48 compute-0 sudo[306671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-0 sudo[306671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-0 sudo[306696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:00:48 compute-0 sudo[306696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-0 ceph-mon[74496]: pgmap v1906: 305 pgs: 305 active+clean; 71 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 1.3 MiB/s wr, 3 op/s
Jan 31 08:00:48 compute-0 sudo[306696]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 08:00:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:00:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:00:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:00:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:00:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:00:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:00:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0d99430d-ca68-4517-890b-7f0a5a52935b does not exist
Jan 31 08:00:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6a97d05e-8bf1-4fc2-a7f6-cf66934cc9bf does not exist
Jan 31 08:00:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b13cc8ba-3818-41fb-9248-a12b4531a6dd does not exist
Jan 31 08:00:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:00:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:00:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:00:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:00:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:00:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:00:48 compute-0 sudo[306752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:48 compute-0 sudo[306752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-0 sudo[306752]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-0 sudo[306777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:48 compute-0 sudo[306777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-0 sudo[306777]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-0 sudo[306802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:48 compute-0 sudo[306802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:48 compute-0 sudo[306802]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:48 compute-0 sudo[306827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:00:48 compute-0 sudo[306827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:49 compute-0 nova_compute[247704]: 2026-01-31 08:00:49.013 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:49 compute-0 nova_compute[247704]: 2026-01-31 08:00:49.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:49 compute-0 podman[306893]: 2026-01-31 08:00:49.183928814 +0000 UTC m=+0.062777131 container create c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:00:49 compute-0 systemd[1]: Started libpod-conmon-c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c.scope.
Jan 31 08:00:49 compute-0 podman[306893]: 2026-01-31 08:00:49.15012266 +0000 UTC m=+0.028970987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:00:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:49 compute-0 podman[306893]: 2026-01-31 08:00:49.295143504 +0000 UTC m=+0.173991851 container init c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:00:49 compute-0 podman[306893]: 2026-01-31 08:00:49.303395004 +0000 UTC m=+0.182243321 container start c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elbakyan, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:49 compute-0 fervent_elbakyan[306910]: 167 167
Jan 31 08:00:49 compute-0 systemd[1]: libpod-c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c.scope: Deactivated successfully.
Jan 31 08:00:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:49 compute-0 podman[306893]: 2026-01-31 08:00:49.320229824 +0000 UTC m=+0.199078141 container attach c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elbakyan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:00:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:49.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:49 compute-0 podman[306893]: 2026-01-31 08:00:49.321186767 +0000 UTC m=+0.200035174 container died c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:00:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-042fd931e83e97b7c3056c2e16d039a74d9406f9c999b2e967e7b676621bec30-merged.mount: Deactivated successfully.
Jan 31 08:00:49 compute-0 podman[306893]: 2026-01-31 08:00:49.375553772 +0000 UTC m=+0.254402089 container remove c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_elbakyan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 08:00:49 compute-0 systemd[1]: libpod-conmon-c34097e99e4c46ef9807d1c1fee33c5df9f61e558bc778d86818c68944658d9c.scope: Deactivated successfully.
Jan 31 08:00:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:00:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:00:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:00:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:00:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:00:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:00:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:49.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:49 compute-0 podman[306934]: 2026-01-31 08:00:49.513957174 +0000 UTC m=+0.044537166 container create e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 08:00:49 compute-0 systemd[1]: Started libpod-conmon-e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95.scope.
Jan 31 08:00:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2668dd2fe1ddf704eb3050078efd3e6f29ea3fb270dafe5efe5695b2fafed19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2668dd2fe1ddf704eb3050078efd3e6f29ea3fb270dafe5efe5695b2fafed19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2668dd2fe1ddf704eb3050078efd3e6f29ea3fb270dafe5efe5695b2fafed19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2668dd2fe1ddf704eb3050078efd3e6f29ea3fb270dafe5efe5695b2fafed19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2668dd2fe1ddf704eb3050078efd3e6f29ea3fb270dafe5efe5695b2fafed19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:49 compute-0 podman[306934]: 2026-01-31 08:00:49.495376752 +0000 UTC m=+0.025956734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:00:49 compute-0 podman[306934]: 2026-01-31 08:00:49.603979257 +0000 UTC m=+0.134559249 container init e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elgamal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 08:00:49 compute-0 podman[306934]: 2026-01-31 08:00:49.61888511 +0000 UTC m=+0.149465092 container start e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:00:49 compute-0 podman[306934]: 2026-01-31 08:00:49.626388413 +0000 UTC m=+0.156968395 container attach e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elgamal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:00:50 compute-0 nervous_elgamal[306950]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:00:50 compute-0 nervous_elgamal[306950]: --> relative data size: 1.0
Jan 31 08:00:50 compute-0 nervous_elgamal[306950]: --> All data devices are unavailable
Jan 31 08:00:50 compute-0 systemd[1]: libpod-e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95.scope: Deactivated successfully.
Jan 31 08:00:50 compute-0 podman[306934]: 2026-01-31 08:00:50.44601132 +0000 UTC m=+0.976591312 container died e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elgamal, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2668dd2fe1ddf704eb3050078efd3e6f29ea3fb270dafe5efe5695b2fafed19-merged.mount: Deactivated successfully.
Jan 31 08:00:50 compute-0 podman[306934]: 2026-01-31 08:00:50.515745999 +0000 UTC m=+1.046325981 container remove e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_elgamal, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:00:50 compute-0 systemd[1]: libpod-conmon-e3aef9973107e7f65cd9cf359db8741855fac5d35e0d92a8d291c219d9b36d95.scope: Deactivated successfully.
Jan 31 08:00:50 compute-0 sudo[306827]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:50 compute-0 sudo[306979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:50 compute-0 sudo[306979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:50 compute-0 sudo[306979]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:50 compute-0 ceph-mon[74496]: pgmap v1907: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:50 compute-0 sudo[307004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:50 compute-0 sudo[307004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:50 compute-0 sudo[307004]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:50 compute-0 sudo[307029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:50 compute-0 sudo[307029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:50 compute-0 sudo[307029]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:50 compute-0 sudo[307054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:00:50 compute-0 sudo[307054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:51 compute-0 podman[307119]: 2026-01-31 08:00:51.129544433 +0000 UTC m=+0.059039939 container create 8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 31 08:00:51 compute-0 systemd[1]: Started libpod-conmon-8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2.scope.
Jan 31 08:00:51 compute-0 podman[307119]: 2026-01-31 08:00:51.106310507 +0000 UTC m=+0.035806103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:00:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:51 compute-0 podman[307119]: 2026-01-31 08:00:51.212927394 +0000 UTC m=+0.142422940 container init 8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 08:00:51 compute-0 podman[307119]: 2026-01-31 08:00:51.218857348 +0000 UTC m=+0.148352864 container start 8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:00:51 compute-0 podman[307119]: 2026-01-31 08:00:51.22342714 +0000 UTC m=+0.152922666 container attach 8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:00:51 compute-0 fervent_black[307135]: 167 167
Jan 31 08:00:51 compute-0 systemd[1]: libpod-8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2.scope: Deactivated successfully.
Jan 31 08:00:51 compute-0 conmon[307135]: conmon 8fd59a9b9af9344b1196 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2.scope/container/memory.events
Jan 31 08:00:51 compute-0 podman[307119]: 2026-01-31 08:00:51.225350827 +0000 UTC m=+0.154846383 container died 8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 08:00:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e1708a95fb38247fd3fcc7558e754ce40a5c3b8d8bbb299302632b84ea15fc9-merged.mount: Deactivated successfully.
Jan 31 08:00:51 compute-0 podman[307119]: 2026-01-31 08:00:51.26449888 +0000 UTC m=+0.193994426 container remove 8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:00:51 compute-0 systemd[1]: libpod-conmon-8fd59a9b9af9344b1196b293204d0824ef31af262cb06f33c9cf76bc754b4fd2.scope: Deactivated successfully.
Jan 31 08:00:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:51.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:51 compute-0 podman[307159]: 2026-01-31 08:00:51.405578778 +0000 UTC m=+0.054795466 container create 4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:00:51 compute-0 systemd[1]: Started libpod-conmon-4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56.scope.
Jan 31 08:00:51 compute-0 podman[307159]: 2026-01-31 08:00:51.379477711 +0000 UTC m=+0.028694459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:00:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:51.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d5ffd527f3883bff1ed8799ff3101990ef715436bd3e5130b80ab214960a5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d5ffd527f3883bff1ed8799ff3101990ef715436bd3e5130b80ab214960a5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d5ffd527f3883bff1ed8799ff3101990ef715436bd3e5130b80ab214960a5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d5ffd527f3883bff1ed8799ff3101990ef715436bd3e5130b80ab214960a5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:51 compute-0 podman[307159]: 2026-01-31 08:00:51.522707 +0000 UTC m=+0.171923768 container init 4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:00:51 compute-0 podman[307159]: 2026-01-31 08:00:51.529962888 +0000 UTC m=+0.179179586 container start 4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_germain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:51 compute-0 podman[307159]: 2026-01-31 08:00:51.535483903 +0000 UTC m=+0.184700621 container attach 4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_germain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:00:52 compute-0 nova_compute[247704]: 2026-01-31 08:00:52.162 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:52 compute-0 nova_compute[247704]: 2026-01-31 08:00:52.164 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:52 compute-0 hungry_germain[307176]: {
Jan 31 08:00:52 compute-0 hungry_germain[307176]:     "0": [
Jan 31 08:00:52 compute-0 hungry_germain[307176]:         {
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "devices": [
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "/dev/loop3"
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             ],
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "lv_name": "ceph_lv0",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "lv_size": "7511998464",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "name": "ceph_lv0",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "tags": {
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.cluster_name": "ceph",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.crush_device_class": "",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.encrypted": "0",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.osd_id": "0",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.type": "block",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:                 "ceph.vdo": "0"
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             },
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "type": "block",
Jan 31 08:00:52 compute-0 hungry_germain[307176]:             "vg_name": "ceph_vg0"
Jan 31 08:00:52 compute-0 hungry_germain[307176]:         }
Jan 31 08:00:52 compute-0 hungry_germain[307176]:     ]
Jan 31 08:00:52 compute-0 hungry_germain[307176]: }
Jan 31 08:00:52 compute-0 systemd[1]: libpod-4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56.scope: Deactivated successfully.
Jan 31 08:00:52 compute-0 podman[307159]: 2026-01-31 08:00:52.287988544 +0000 UTC m=+0.937205272 container died 4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:00:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1d5ffd527f3883bff1ed8799ff3101990ef715436bd3e5130b80ab214960a5a-merged.mount: Deactivated successfully.
Jan 31 08:00:52 compute-0 podman[307159]: 2026-01-31 08:00:52.342732178 +0000 UTC m=+0.991948876 container remove 4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_germain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:00:52 compute-0 systemd[1]: libpod-conmon-4e69dd6ce16c6b965ff52efb0d4540e32cda295387ffffe1a5df00cfca489b56.scope: Deactivated successfully.
Jan 31 08:00:52 compute-0 sudo[307054]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:52 compute-0 sudo[307197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:52 compute-0 sudo[307197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:52 compute-0 sudo[307197]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:52 compute-0 sudo[307222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:00:52 compute-0 sudo[307222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:52 compute-0 sudo[307222]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:52 compute-0 sudo[307247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:52 compute-0 sudo[307247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:52 compute-0 sudo[307247]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:52 compute-0 ceph-mon[74496]: pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:52 compute-0 sudo[307272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:00:52 compute-0 sudo[307272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:52 compute-0 nova_compute[247704]: 2026-01-31 08:00:52.782 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:00:53 compute-0 podman[307337]: 2026-01-31 08:00:53.022968 +0000 UTC m=+0.057136043 container create ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:00:53 compute-0 systemd[1]: Started libpod-conmon-ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc.scope.
Jan 31 08:00:53 compute-0 podman[307337]: 2026-01-31 08:00:52.984454881 +0000 UTC m=+0.018622954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:00:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:53 compute-0 podman[307337]: 2026-01-31 08:00:53.128925932 +0000 UTC m=+0.163094005 container init ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:00:53 compute-0 podman[307337]: 2026-01-31 08:00:53.134298073 +0000 UTC m=+0.168466126 container start ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:00:53 compute-0 systemd[1]: libpod-ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc.scope: Deactivated successfully.
Jan 31 08:00:53 compute-0 unruffled_lovelace[307353]: 167 167
Jan 31 08:00:53 compute-0 conmon[307353]: conmon ed778176c317d4e950cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc.scope/container/memory.events
Jan 31 08:00:53 compute-0 podman[307337]: 2026-01-31 08:00:53.138798472 +0000 UTC m=+0.172966545 container attach ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lovelace, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:00:53 compute-0 podman[307337]: 2026-01-31 08:00:53.139872668 +0000 UTC m=+0.174040711 container died ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lovelace, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:00:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfadffd8fbbceaf1cbcc1b3badc56db1fd7287253d148e417606bdb0906e46a1-merged.mount: Deactivated successfully.
Jan 31 08:00:53 compute-0 podman[307337]: 2026-01-31 08:00:53.218998886 +0000 UTC m=+0.253166929 container remove ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_lovelace, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:00:53 compute-0 systemd[1]: libpod-conmon-ed778176c317d4e950cf9f6294dc50fd51ef026a931ed6026a7d65385937dbbc.scope: Deactivated successfully.
Jan 31 08:00:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:53.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:53 compute-0 podman[307379]: 2026-01-31 08:00:53.359598131 +0000 UTC m=+0.056945538 container create a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mcclintock, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:00:53 compute-0 systemd[1]: Started libpod-conmon-a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9.scope.
Jan 31 08:00:53 compute-0 podman[307379]: 2026-01-31 08:00:53.322486377 +0000 UTC m=+0.019833874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:00:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea7856044da58c442117fd48b28c12e12f35df21fcc9f89733ec450b1fd7413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea7856044da58c442117fd48b28c12e12f35df21fcc9f89733ec450b1fd7413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea7856044da58c442117fd48b28c12e12f35df21fcc9f89733ec450b1fd7413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea7856044da58c442117fd48b28c12e12f35df21fcc9f89733ec450b1fd7413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:00:53 compute-0 podman[307379]: 2026-01-31 08:00:53.478166109 +0000 UTC m=+0.175513566 container init a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:00:53 compute-0 podman[307379]: 2026-01-31 08:00:53.485664062 +0000 UTC m=+0.183011469 container start a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:00:53 compute-0 podman[307379]: 2026-01-31 08:00:53.490406248 +0000 UTC m=+0.187753665 container attach a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:00:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:53.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:53 compute-0 nova_compute[247704]: 2026-01-31 08:00:53.624 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:53 compute-0 nova_compute[247704]: 2026-01-31 08:00:53.625 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:53 compute-0 nova_compute[247704]: 2026-01-31 08:00:53.638 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:00:53 compute-0 nova_compute[247704]: 2026-01-31 08:00:53.639 247708 INFO nova.compute.claims [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:00:54 compute-0 nova_compute[247704]: 2026-01-31 08:00:54.016 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:54 compute-0 nova_compute[247704]: 2026-01-31 08:00:54.074 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]: {
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]:         "osd_id": 0,
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]:         "type": "bluestore"
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]:     }
Jan 31 08:00:54 compute-0 trusting_mcclintock[307396]: }
Jan 31 08:00:54 compute-0 systemd[1]: libpod-a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9.scope: Deactivated successfully.
Jan 31 08:00:54 compute-0 podman[307379]: 2026-01-31 08:00:54.317325223 +0000 UTC m=+1.014672630 container died a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ea7856044da58c442117fd48b28c12e12f35df21fcc9f89733ec450b1fd7413-merged.mount: Deactivated successfully.
Jan 31 08:00:54 compute-0 podman[307379]: 2026-01-31 08:00:54.379871237 +0000 UTC m=+1.077218644 container remove a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:00:54 compute-0 systemd[1]: libpod-conmon-a314026b9151eb6d6368da79c28d812a1b753713ce46856b39ecd0ad685901d9.scope: Deactivated successfully.
Jan 31 08:00:54 compute-0 sudo[307272]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:00:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:00:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 22b867ef-a346-46d1-a4f0-696a5b415407 does not exist
Jan 31 08:00:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 19e11244-b1c4-4af8-bf46-1507d0cbf0d2 does not exist
Jan 31 08:00:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e1753b01-3e4b-47ba-95c2-915705737f12 does not exist
Jan 31 08:00:54 compute-0 sudo[307432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:54 compute-0 sudo[307432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:54 compute-0 sudo[307432]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:54 compute-0 sudo[307457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:00:54 compute-0 sudo[307457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:54 compute-0 sudo[307457]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:54 compute-0 ceph-mon[74496]: pgmap v1909: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:00:55 compute-0 nova_compute[247704]: 2026-01-31 08:00:55.221 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:00:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:55.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:00:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:00:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2327357670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:55 compute-0 nova_compute[247704]: 2026-01-31 08:00:55.664 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:55 compute-0 nova_compute[247704]: 2026-01-31 08:00:55.676 247708 DEBUG nova.compute.provider_tree [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:00:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2327357670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:00:55 compute-0 nova_compute[247704]: 2026-01-31 08:00:55.938 247708 DEBUG nova.scheduler.client.report [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:00:56 compute-0 nova_compute[247704]: 2026-01-31 08:00:56.371 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:56 compute-0 nova_compute[247704]: 2026-01-31 08:00:56.372 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:00:56 compute-0 nova_compute[247704]: 2026-01-31 08:00:56.728 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:00:56 compute-0 nova_compute[247704]: 2026-01-31 08:00:56.729 247708 DEBUG nova.network.neutron [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:00:56 compute-0 ceph-mon[74496]: pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:00:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/493614597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:57 compute-0 nova_compute[247704]: 2026-01-31 08:00:57.290 247708 INFO nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:00:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:57.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 794 KiB/s wr, 25 op/s
Jan 31 08:00:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:57.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:57 compute-0 nova_compute[247704]: 2026-01-31 08:00:57.730 247708 DEBUG nova.policy [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd454a4ec91e645e992f4a88ac60da747', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ab30fd72657435fb442fc59a53da644', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:00:57 compute-0 nova_compute[247704]: 2026-01-31 08:00:57.736 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:00:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3673755692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:00:57 compute-0 ceph-mon[74496]: pgmap v1911: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 794 KiB/s wr, 25 op/s
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.564 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.566 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.566 247708 INFO nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Creating image(s)
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.596 247708 DEBUG nova.storage.rbd_utils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] rbd image cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.628 247708 DEBUG nova.storage.rbd_utils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] rbd image cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.658 247708 DEBUG nova.storage.rbd_utils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] rbd image cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.661 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.725 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.726 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.727 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.727 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.755 247708 DEBUG nova.storage.rbd_utils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] rbd image cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:00:58 compute-0 nova_compute[247704]: 2026-01-31 08:00:58.760 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:00:59 compute-0 nova_compute[247704]: 2026-01-31 08:00:59.017 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:59 compute-0 nova_compute[247704]: 2026-01-31 08:00:59.076 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:00:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 485 KiB/s wr, 23 op/s
Jan 31 08:00:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:00:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:59.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:00:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:00:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:00:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:59.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:00:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:00:59 compute-0 nova_compute[247704]: 2026-01-31 08:00:59.684 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.924s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:00:59 compute-0 nova_compute[247704]: 2026-01-31 08:00:59.767 247708 DEBUG nova.storage.rbd_utils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] resizing rbd image cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:00:59 compute-0 sudo[307635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:59 compute-0 sudo[307635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:59 compute-0 sudo[307635]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:59 compute-0 sudo[307680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:00:59 compute-0 sudo[307680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:00:59 compute-0 sudo[307680]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:59 compute-0 nova_compute[247704]: 2026-01-31 08:00:59.909 247708 DEBUG nova.objects.instance [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'migration_context' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:00 compute-0 nova_compute[247704]: 2026-01-31 08:01:00.084 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:01:00 compute-0 nova_compute[247704]: 2026-01-31 08:01:00.085 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Ensure instance console log exists: /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:01:00 compute-0 nova_compute[247704]: 2026-01-31 08:01:00.085 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:00 compute-0 nova_compute[247704]: 2026-01-31 08:01:00.086 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:00 compute-0 nova_compute[247704]: 2026-01-31 08:01:00.086 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:00 compute-0 ceph-mon[74496]: pgmap v1912: 305 pgs: 305 active+clean; 88 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 485 KiB/s wr, 23 op/s
Jan 31 08:01:01 compute-0 CROND[307724]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 08:01:01 compute-0 run-parts[307727]: (/etc/cron.hourly) starting 0anacron
Jan 31 08:01:01 compute-0 run-parts[307733]: (/etc/cron.hourly) finished 0anacron
Jan 31 08:01:01 compute-0 CROND[307723]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 08:01:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 102 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 767 KiB/s wr, 12 op/s
Jan 31 08:01:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:01.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:02 compute-0 ceph-mon[74496]: pgmap v1913: 305 pgs: 305 active+clean; 102 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 7.5 KiB/s rd, 767 KiB/s wr, 12 op/s
Jan 31 08:01:03 compute-0 nova_compute[247704]: 2026-01-31 08:01:03.226 247708 DEBUG nova.network.neutron [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Successfully created port: f0a0388d-0f3d-4eb7-a195-234dd517f99b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:01:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 123 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 1.1 MiB/s wr, 29 op/s
Jan 31 08:01:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:03.384 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:03.385 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:01:03 compute-0 nova_compute[247704]: 2026-01-31 08:01:03.385 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:03.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:03 compute-0 podman[307736]: 2026-01-31 08:01:03.91170712 +0000 UTC m=+0.082697446 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true)
Jan 31 08:01:04 compute-0 nova_compute[247704]: 2026-01-31 08:01:04.019 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:04 compute-0 nova_compute[247704]: 2026-01-31 08:01:04.078 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:04 compute-0 ceph-mon[74496]: pgmap v1914: 305 pgs: 305 active+clean; 123 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 1.1 MiB/s wr, 29 op/s
Jan 31 08:01:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Jan 31 08:01:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:05.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:06 compute-0 ceph-mon[74496]: pgmap v1915: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Jan 31 08:01:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 31 08:01:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:07.388 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:07.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:08 compute-0 nova_compute[247704]: 2026-01-31 08:01:08.158 247708 DEBUG nova.network.neutron [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Successfully updated port: f0a0388d-0f3d-4eb7-a195-234dd517f99b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:01:08 compute-0 nova_compute[247704]: 2026-01-31 08:01:08.380 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:08 compute-0 nova_compute[247704]: 2026-01-31 08:01:08.380 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquired lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:08 compute-0 nova_compute[247704]: 2026-01-31 08:01:08.381 247708 DEBUG nova.network.neutron [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:01:08 compute-0 nova_compute[247704]: 2026-01-31 08:01:08.468 247708 DEBUG nova.compute.manager [req-a1303c5b-535e-497e-98e2-880fe0dcd8e1 req-e69971d4-4bb3-4182-bd2f-7ed1812e60cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-changed-f0a0388d-0f3d-4eb7-a195-234dd517f99b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:08 compute-0 nova_compute[247704]: 2026-01-31 08:01:08.469 247708 DEBUG nova.compute.manager [req-a1303c5b-535e-497e-98e2-880fe0dcd8e1 req-e69971d4-4bb3-4182-bd2f-7ed1812e60cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Refreshing instance network info cache due to event network-changed-f0a0388d-0f3d-4eb7-a195-234dd517f99b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:01:08 compute-0 nova_compute[247704]: 2026-01-31 08:01:08.470 247708 DEBUG oslo_concurrency.lockutils [req-a1303c5b-535e-497e-98e2-880fe0dcd8e1 req-e69971d4-4bb3-4182-bd2f-7ed1812e60cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:08 compute-0 ceph-mon[74496]: pgmap v1916: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 31 08:01:08 compute-0 nova_compute[247704]: 2026-01-31 08:01:08.763 247708 DEBUG nova.network.neutron [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:01:09 compute-0 nova_compute[247704]: 2026-01-31 08:01:09.022 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:09 compute-0 nova_compute[247704]: 2026-01-31 08:01:09.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:01:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:09.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:09.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:09 compute-0 ceph-mon[74496]: pgmap v1917: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.751 247708 DEBUG nova.network.neutron [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.878 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Releasing lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.879 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Instance network_info: |[{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.879 247708 DEBUG oslo_concurrency.lockutils [req-a1303c5b-535e-497e-98e2-880fe0dcd8e1 req-e69971d4-4bb3-4182-bd2f-7ed1812e60cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.880 247708 DEBUG nova.network.neutron [req-a1303c5b-535e-497e-98e2-880fe0dcd8e1 req-e69971d4-4bb3-4182-bd2f-7ed1812e60cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Refreshing network info cache for port f0a0388d-0f3d-4eb7-a195-234dd517f99b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.885 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Start _get_guest_xml network_info=[{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.892 247708 WARNING nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.906 247708 DEBUG nova.virt.libvirt.host [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.907 247708 DEBUG nova.virt.libvirt.host [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.912 247708 DEBUG nova.virt.libvirt.host [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.913 247708 DEBUG nova.virt.libvirt.host [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.915 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.915 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.916 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.917 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.918 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.918 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.918 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.919 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.919 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.920 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.920 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.921 247708 DEBUG nova.virt.hardware [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:01:10 compute-0 nova_compute[247704]: 2026-01-31 08:01:10.926 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:11.173 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:11.173 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:11.173 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:01:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:11.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911858492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:11 compute-0 nova_compute[247704]: 2026-01-31 08:01:11.439 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:11 compute-0 nova_compute[247704]: 2026-01-31 08:01:11.462 247708 DEBUG nova.storage.rbd_utils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] rbd image cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:11 compute-0 nova_compute[247704]: 2026-01-31 08:01:11.466 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529156101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:11 compute-0 nova_compute[247704]: 2026-01-31 08:01:11.977 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:11 compute-0 nova_compute[247704]: 2026-01-31 08:01:11.980 247708 DEBUG nova.virt.libvirt.vif [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:00:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:01:11 compute-0 nova_compute[247704]: 2026-01-31 08:01:11.980 247708 DEBUG nova.network.os_vif_util [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converting VIF {"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:11 compute-0 nova_compute[247704]: 2026-01-31 08:01:11.982 247708 DEBUG nova.network.os_vif_util [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=f0a0388d-0f3d-4eb7-a195-234dd517f99b,network=Network(5c660d8f-9cd6-4091-b9a9-e007762da7fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0a0388d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:11 compute-0 nova_compute[247704]: 2026-01-31 08:01:11.984 247708 DEBUG nova.objects.instance [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'pci_devices' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.057 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <uuid>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</uuid>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <name>instance-0000005a</name>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <nova:name>tempest-device-tagging-server-1787730133</nova:name>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:01:10</nova:creationTime>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <nova:user uuid="d454a4ec91e645e992f4a88ac60da747">tempest-TaggedAttachmentsTest-1028778083-project-member</nova:user>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <nova:project uuid="2ab30fd72657435fb442fc59a53da644">tempest-TaggedAttachmentsTest-1028778083</nova:project>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <nova:port uuid="f0a0388d-0f3d-4eb7-a195-234dd517f99b">
Jan 31 08:01:12 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <system>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <entry name="serial">cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <entry name="uuid">cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </system>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <os>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   </os>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <features>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   </features>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk">
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       </source>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config">
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       </source>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:01:12 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:ce:51:d8"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <target dev="tapf0a0388d-0f"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log" append="off"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <video>
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </video>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:01:12 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:01:12 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:01:12 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:01:12 compute-0 nova_compute[247704]: </domain>
Jan 31 08:01:12 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.058 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Preparing to wait for external event network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.059 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.060 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.061 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.062 247708 DEBUG nova.virt.libvirt.vif [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:00:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.062 247708 DEBUG nova.network.os_vif_util [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converting VIF {"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.064 247708 DEBUG nova.network.os_vif_util [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=f0a0388d-0f3d-4eb7-a195-234dd517f99b,network=Network(5c660d8f-9cd6-4091-b9a9-e007762da7fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0a0388d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.064 247708 DEBUG os_vif [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=f0a0388d-0f3d-4eb7-a195-234dd517f99b,network=Network(5c660d8f-9cd6-4091-b9a9-e007762da7fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0a0388d-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.065 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.066 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.067 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.073 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0a0388d-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.073 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf0a0388d-0f, col_values=(('external_ids', {'iface-id': 'f0a0388d-0f3d-4eb7-a195-234dd517f99b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:51:d8', 'vm-uuid': 'cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.133 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:12 compute-0 NetworkManager[49108]: <info>  [1769846472.1346] manager: (tapf0a0388d-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.138 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.144 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.146 247708 INFO os_vif [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=f0a0388d-0f3d-4eb7-a195-234dd517f99b,network=Network(5c660d8f-9cd6-4091-b9a9-e007762da7fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0a0388d-0f')
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.364 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.366 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.366 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No VIF found with MAC fa:16:3e:ce:51:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.367 247708 INFO nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Using config drive
Jan 31 08:01:12 compute-0 ceph-mon[74496]: pgmap v1918: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:01:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2911858492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/529156101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:12 compute-0 nova_compute[247704]: 2026-01-31 08:01:12.406 247708 DEBUG nova.storage.rbd_utils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] rbd image cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 88 op/s
Jan 31 08:01:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:13.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:13.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:14 compute-0 nova_compute[247704]: 2026-01-31 08:01:14.022 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:14 compute-0 ceph-mon[74496]: pgmap v1919: 305 pgs: 305 active+clean; 134 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 88 op/s
Jan 31 08:01:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4158684158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:15 compute-0 nova_compute[247704]: 2026-01-31 08:01:15.216 247708 DEBUG nova.network.neutron [req-a1303c5b-535e-497e-98e2-880fe0dcd8e1 req-e69971d4-4bb3-4182-bd2f-7ed1812e60cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updated VIF entry in instance network info cache for port f0a0388d-0f3d-4eb7-a195-234dd517f99b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:01:15 compute-0 nova_compute[247704]: 2026-01-31 08:01:15.216 247708 DEBUG nova.network.neutron [req-a1303c5b-535e-497e-98e2-880fe0dcd8e1 req-e69971d4-4bb3-4182-bd2f-7ed1812e60cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:15 compute-0 nova_compute[247704]: 2026-01-31 08:01:15.242 247708 DEBUG oslo_concurrency.lockutils [req-a1303c5b-535e-497e-98e2-880fe0dcd8e1 req-e69971d4-4bb3-4182-bd2f-7ed1812e60cd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 147 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:01:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:15 compute-0 nova_compute[247704]: 2026-01-31 08:01:15.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:15 compute-0 nova_compute[247704]: 2026-01-31 08:01:15.809 247708 INFO nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Creating config drive at /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/disk.config
Jan 31 08:01:15 compute-0 nova_compute[247704]: 2026-01-31 08:01:15.818 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkikcp9e_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:15 compute-0 nova_compute[247704]: 2026-01-31 08:01:15.954 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkikcp9e_" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:15 compute-0 nova_compute[247704]: 2026-01-31 08:01:15.998 247708 DEBUG nova.storage.rbd_utils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] rbd image cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.002 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/disk.config cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.239 247708 DEBUG oslo_concurrency.processutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/disk.config cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.240 247708 INFO nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Deleting local config drive /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/disk.config because it was imported into RBD.
Jan 31 08:01:16 compute-0 kernel: tapf0a0388d-0f: entered promiscuous mode
Jan 31 08:01:16 compute-0 NetworkManager[49108]: <info>  [1769846476.2959] manager: (tapf0a0388d-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Jan 31 08:01:16 compute-0 ovn_controller[149457]: 2026-01-31T08:01:16Z|00359|binding|INFO|Claiming lport f0a0388d-0f3d-4eb7-a195-234dd517f99b for this chassis.
Jan 31 08:01:16 compute-0 ovn_controller[149457]: 2026-01-31T08:01:16Z|00360|binding|INFO|f0a0388d-0f3d-4eb7-a195-234dd517f99b: Claiming fa:16:3e:ce:51:d8 10.100.0.9
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.296 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.318 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:51:d8 10.100.0.9'], port_security=['fa:16:3e:ce:51:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c660d8f-9cd6-4091-b9a9-e007762da7fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ab30fd72657435fb442fc59a53da644', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b77be73b-b461-413f-95ea-b1491f487dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=682aac33-b4be-4bd7-9b67-193b4cc436d1, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=f0a0388d-0f3d-4eb7-a195-234dd517f99b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.321 160028 INFO neutron.agent.ovn.metadata.agent [-] Port f0a0388d-0f3d-4eb7-a195-234dd517f99b in datapath 5c660d8f-9cd6-4091-b9a9-e007762da7fa bound to our chassis
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.324 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c660d8f-9cd6-4091-b9a9-e007762da7fa
Jan 31 08:01:16 compute-0 systemd-machined[214448]: New machine qemu-37-instance-0000005a.
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.338 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[23075231-ee42-47c6-a471-55378773cfd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.340 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c660d8f-91 in ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.342 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c660d8f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.342 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[133a65d2-5e06-4670-b74d-6095e8cffb08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.343 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b939e97c-de92-457f-9dd0-cd1d35ebfe98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.350 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:16 compute-0 ovn_controller[149457]: 2026-01-31T08:01:16Z|00361|binding|INFO|Setting lport f0a0388d-0f3d-4eb7-a195-234dd517f99b ovn-installed in OVS
Jan 31 08:01:16 compute-0 ovn_controller[149457]: 2026-01-31T08:01:16Z|00362|binding|INFO|Setting lport f0a0388d-0f3d-4eb7-a195-234dd517f99b up in Southbound
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.357 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.357 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[0db00d24-3a2e-4bfd-8472-4bd9a63322ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-0000005a.
Jan 31 08:01:16 compute-0 systemd-udevd[307907]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.371 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3e0ea8c5-bc6d-42eb-80b6-e13fb00901d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 NetworkManager[49108]: <info>  [1769846476.3882] device (tapf0a0388d-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:01:16 compute-0 NetworkManager[49108]: <info>  [1769846476.3892] device (tapf0a0388d-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.412 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[830a10b9-927d-42ab-ac46-0da1a3a37837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 NetworkManager[49108]: <info>  [1769846476.4191] manager: (tap5c660d8f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/172)
Jan 31 08:01:16 compute-0 systemd-udevd[307910]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.420 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f99181e4-8e28-4f26-983c-09aa917b16d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ceph-mon[74496]: pgmap v1920: 305 pgs: 305 active+clean; 147 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.454 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[18b30f08-828b-41b7-acc0-3386db601062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.458 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0af8464e-4183-4bb8-b7cb-87caa8d5eeb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 NetworkManager[49108]: <info>  [1769846476.4774] device (tap5c660d8f-90): carrier: link connected
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.481 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6f306a59-7c55-44db-b7e1-6650ffeef156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.498 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[361ec7aa-148e-4cff-9e50-7d3b5f63d22c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c660d8f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:53:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671909, 'reachable_time': 44138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307937, 'error': None, 'target': 'ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.510 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[057c9e9f-3e0b-42e2-b45b-25246dd69704]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:53a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671909, 'tstamp': 671909}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307938, 'error': None, 'target': 'ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.528 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dec20d54-98c9-4f44-8c84-ea42269915cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c660d8f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:53:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671909, 'reachable_time': 44138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307939, 'error': None, 'target': 'ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.552 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9b8a0c33-36cc-4196-9836-f1addecc99cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.603 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.612 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[58850732-4733-4bc2-bc5c-cfc85ea4061f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.614 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c660d8f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.615 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.615 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c660d8f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.617 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:16 compute-0 NetworkManager[49108]: <info>  [1769846476.6190] manager: (tap5c660d8f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Jan 31 08:01:16 compute-0 kernel: tap5c660d8f-90: entered promiscuous mode
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.622 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c660d8f-90, col_values=(('external_ids', {'iface-id': '8cc4176e-d73e-4847-a347-9cfdb887633d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.624 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:16 compute-0 ovn_controller[149457]: 2026-01-31T08:01:16Z|00363|binding|INFO|Releasing lport 8cc4176e-d73e-4847-a347-9cfdb887633d from this chassis (sb_readonly=0)
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.625 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.627 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c660d8f-9cd6-4091-b9a9-e007762da7fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c660d8f-9cd6-4091-b9a9-e007762da7fa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.628 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[286681fa-44fb-4cf5-ada4-1215a41d2f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.629 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-5c660d8f-9cd6-4091-b9a9-e007762da7fa
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/5c660d8f-9cd6-4091-b9a9-e007762da7fa.pid.haproxy
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 5c660d8f-9cd6-4091-b9a9-e007762da7fa
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:01:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:16.630 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa', 'env', 'PROCESS_TAG=haproxy-5c660d8f-9cd6-4091-b9a9-e007762da7fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c660d8f-9cd6-4091-b9a9-e007762da7fa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:01:16 compute-0 nova_compute[247704]: 2026-01-31 08:01:16.634 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.024 247708 DEBUG nova.compute.manager [req-8c80fd0f-00b0-4add-9a85-02a759d06545 req-fb59a7b8-4513-453c-b97f-7fbc30c04f47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.025 247708 DEBUG oslo_concurrency.lockutils [req-8c80fd0f-00b0-4add-9a85-02a759d06545 req-fb59a7b8-4513-453c-b97f-7fbc30c04f47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:17 compute-0 podman[307972]: 2026-01-31 08:01:17.025969567 +0000 UTC m=+0.071774520 container create 09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.025 247708 DEBUG oslo_concurrency.lockutils [req-8c80fd0f-00b0-4add-9a85-02a759d06545 req-fb59a7b8-4513-453c-b97f-7fbc30c04f47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.026 247708 DEBUG oslo_concurrency.lockutils [req-8c80fd0f-00b0-4add-9a85-02a759d06545 req-fb59a7b8-4513-453c-b97f-7fbc30c04f47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.026 247708 DEBUG nova.compute.manager [req-8c80fd0f-00b0-4add-9a85-02a759d06545 req-fb59a7b8-4513-453c-b97f-7fbc30c04f47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Processing event network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:01:17 compute-0 systemd[1]: Started libpod-conmon-09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685.scope.
Jan 31 08:01:17 compute-0 podman[307972]: 2026-01-31 08:01:16.988426592 +0000 UTC m=+0.034231595 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:01:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3357b0b6a38ad4c588d28ed8dddcfe237f7187daff2acabfca4e5aa465df672e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:17 compute-0 podman[307972]: 2026-01-31 08:01:17.13157662 +0000 UTC m=+0.177381603 container init 09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.132 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:17 compute-0 podman[307972]: 2026-01-31 08:01:17.137204857 +0000 UTC m=+0.183009760 container start 09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 08:01:17 compute-0 neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa[307994]: [NOTICE]   (308024) : New worker (308029) forked
Jan 31 08:01:17 compute-0 neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa[307994]: [NOTICE]   (308024) : Loading success.
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.281 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.283 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846477.2807922, cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.283 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] VM Started (Lifecycle Event)
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.287 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.292 247708 INFO nova.virt.libvirt.driver [-] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Instance spawned successfully.
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.293 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.311 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.314 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.323 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.323 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.324 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.324 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.325 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.325 247708 DEBUG nova.virt.libvirt.driver [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 171 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 722 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Jan 31 08:01:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:17.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.364 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.365 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846477.2811406, cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.365 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] VM Paused (Lifecycle Event)
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.428 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.437 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846477.2870526, cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.437 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] VM Resumed (Lifecycle Event)
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.443 247708 INFO nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Took 18.88 seconds to spawn the instance on the hypervisor.
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.443 247708 DEBUG nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1041038657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.524 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:17.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.529 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.588 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.615 247708 INFO nova.compute.manager [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Took 24.57 seconds to build instance.
Jan 31 08:01:17 compute-0 nova_compute[247704]: 2026-01-31 08:01:17.646 247708 DEBUG oslo_concurrency.lockutils [None req-ba802c3c-aa50-4e8f-a355-946916a24334 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:18 compute-0 ceph-mon[74496]: pgmap v1921: 305 pgs: 305 active+clean; 171 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 722 KiB/s rd, 2.5 MiB/s wr, 58 op/s
Jan 31 08:01:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/212975079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:18 compute-0 nova_compute[247704]: 2026-01-31 08:01:18.599 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:18 compute-0 nova_compute[247704]: 2026-01-31 08:01:18.600 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:18 compute-0 podman[308045]: 2026-01-31 08:01:18.8921258 +0000 UTC m=+0.062204586 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:01:19 compute-0 nova_compute[247704]: 2026-01-31 08:01:19.025 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 213 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 31 08:01:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:19.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:19 compute-0 nova_compute[247704]: 2026-01-31 08:01:19.834 247708 DEBUG nova.compute.manager [req-fd2d444a-8e26-4e62-8185-1668924e990e req-50500fe3-1fa0-4bbd-8282-60f3bbb65d57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:19 compute-0 nova_compute[247704]: 2026-01-31 08:01:19.835 247708 DEBUG oslo_concurrency.lockutils [req-fd2d444a-8e26-4e62-8185-1668924e990e req-50500fe3-1fa0-4bbd-8282-60f3bbb65d57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:19 compute-0 nova_compute[247704]: 2026-01-31 08:01:19.835 247708 DEBUG oslo_concurrency.lockutils [req-fd2d444a-8e26-4e62-8185-1668924e990e req-50500fe3-1fa0-4bbd-8282-60f3bbb65d57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:19 compute-0 nova_compute[247704]: 2026-01-31 08:01:19.835 247708 DEBUG oslo_concurrency.lockutils [req-fd2d444a-8e26-4e62-8185-1668924e990e req-50500fe3-1fa0-4bbd-8282-60f3bbb65d57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:19 compute-0 nova_compute[247704]: 2026-01-31 08:01:19.836 247708 DEBUG nova.compute.manager [req-fd2d444a-8e26-4e62-8185-1668924e990e req-50500fe3-1fa0-4bbd-8282-60f3bbb65d57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] No waiting events found dispatching network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:19 compute-0 nova_compute[247704]: 2026-01-31 08:01:19.836 247708 WARNING nova.compute.manager [req-fd2d444a-8e26-4e62-8185-1668924e990e req-50500fe3-1fa0-4bbd-8282-60f3bbb65d57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received unexpected event network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b for instance with vm_state active and task_state None.
Jan 31 08:01:19 compute-0 sudo[308066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:19 compute-0 sudo[308066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:19 compute-0 sudo[308066]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:20 compute-0 sudo[308091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:20 compute-0 sudo[308091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:20 compute-0 sudo[308091]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:01:20
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups']
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:01:20 compute-0 ceph-mon[74496]: pgmap v1922: 305 pgs: 305 active+clean; 213 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:01:20 compute-0 nova_compute[247704]: 2026-01-31 08:01:20.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:20 compute-0 nova_compute[247704]: 2026-01-31 08:01:20.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:01:20 compute-0 nova_compute[247704]: 2026-01-31 08:01:20.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:01:21 compute-0 nova_compute[247704]: 2026-01-31 08:01:21.125 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:21 compute-0 nova_compute[247704]: 2026-01-31 08:01:21.128 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:21 compute-0 nova_compute[247704]: 2026-01-31 08:01:21.128 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:01:21 compute-0 nova_compute[247704]: 2026-01-31 08:01:21.129 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Jan 31 08:01:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:01:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:21.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:01:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:01:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:21.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:01:22 compute-0 nova_compute[247704]: 2026-01-31 08:01:22.135 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:22 compute-0 ceph-mon[74496]: pgmap v1923: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Jan 31 08:01:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 08:01:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:23.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:01:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:23.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:01:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3362426033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:24 compute-0 nova_compute[247704]: 2026-01-31 08:01:24.027 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-0 nova_compute[247704]: 2026-01-31 08:01:24.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-0 NetworkManager[49108]: <info>  [1769846484.2704] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Jan 31 08:01:24 compute-0 NetworkManager[49108]: <info>  [1769846484.2716] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Jan 31 08:01:24 compute-0 nova_compute[247704]: 2026-01-31 08:01:24.307 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-0 ovn_controller[149457]: 2026-01-31T08:01:24Z|00364|binding|INFO|Releasing lport 8cc4176e-d73e-4847-a347-9cfdb887633d from this chassis (sb_readonly=0)
Jan 31 08:01:24 compute-0 nova_compute[247704]: 2026-01-31 08:01:24.326 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:24 compute-0 nova_compute[247704]: 2026-01-31 08:01:24.443 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:24 compute-0 nova_compute[247704]: 2026-01-31 08:01:24.443 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:24 compute-0 nova_compute[247704]: 2026-01-31 08:01:24.561 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:01:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:24 compute-0 ceph-mon[74496]: pgmap v1924: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 08:01:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2288252936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3381635505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.057 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.058 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.060 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.068 247708 DEBUG nova.compute.manager [req-81946960-cadf-4051-a8c2-d83d48117f28 req-07ad71cb-dc0a-49f1-a888-19b2d137652e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-changed-f0a0388d-0f3d-4eb7-a195-234dd517f99b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.069 247708 DEBUG nova.compute.manager [req-81946960-cadf-4051-a8c2-d83d48117f28 req-07ad71cb-dc0a-49f1-a888-19b2d137652e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Refreshing instance network info cache due to event network-changed-f0a0388d-0f3d-4eb7-a195-234dd517f99b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.069 247708 DEBUG oslo_concurrency.lockutils [req-81946960-cadf-4051-a8c2-d83d48117f28 req-07ad71cb-dc0a-49f1-a888-19b2d137652e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.081 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.082 247708 INFO nova.compute.claims [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.091 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.093 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.094 247708 DEBUG oslo_concurrency.lockutils [req-81946960-cadf-4051-a8c2-d83d48117f28 req-07ad71cb-dc0a-49f1-a888-19b2d137652e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.094 247708 DEBUG nova.network.neutron [req-81946960-cadf-4051-a8c2-d83d48117f28 req-07ad71cb-dc0a-49f1-a888-19b2d137652e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Refreshing network info cache for port f0a0388d-0f3d-4eb7-a195-234dd517f99b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.097 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.099 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.101 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.101 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.102 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.129 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.350 247708 DEBUG nova.scheduler.client.report [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:01:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.460 247708 DEBUG nova.scheduler.client.report [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.461 247708 DEBUG nova.compute.provider_tree [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.478 247708 DEBUG nova.scheduler.client.report [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.526 247708 DEBUG nova.scheduler.client.report [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:01:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:25.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:25 compute-0 nova_compute[247704]: 2026-01-31 08:01:25.867 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4219408586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:25 compute-0 ceph-mon[74496]: pgmap v1925: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 31 08:01:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:01:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496966673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:26 compute-0 nova_compute[247704]: 2026-01-31 08:01:26.478 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:26 compute-0 nova_compute[247704]: 2026-01-31 08:01:26.486 247708 DEBUG nova.compute.provider_tree [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:01:26 compute-0 nova_compute[247704]: 2026-01-31 08:01:26.880 247708 DEBUG nova.scheduler.client.report [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:01:26 compute-0 nova_compute[247704]: 2026-01-31 08:01:26.999 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.000 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.002 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.003 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.003 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.003 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.135 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.138 247708 DEBUG nova.network.neutron [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.173 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.240 247708 INFO nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:01:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/496966673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.315 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:01:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 147 op/s
Jan 31 08:01:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:27.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.471 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.477 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.478 247708 INFO nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Creating image(s)
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.507 247708 DEBUG nova.storage.rbd_utils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 32e56536-3edb-494c-9e8b-87cfa8396dac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.540 247708 DEBUG nova.storage.rbd_utils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 32e56536-3edb-494c-9e8b-87cfa8396dac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:01:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3110385296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.571 247708 DEBUG nova.storage.rbd_utils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 32e56536-3edb-494c-9e8b-87cfa8396dac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.577 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.606 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.643 247708 DEBUG nova.policy [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31043e345f6b48b585fb7b8ab7304764', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd352316ff6534075952e2d0c28061b09', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.645 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.646 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.646 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.647 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.673 247708 DEBUG nova.storage.rbd_utils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 32e56536-3edb-494c-9e8b-87cfa8396dac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.677 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 32e56536-3edb-494c-9e8b-87cfa8396dac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.744 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.744 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.947 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.949 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4404MB free_disk=20.901065826416016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.950 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:27 compute-0 nova_compute[247704]: 2026-01-31 08:01:27.950 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.069 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.070 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 32e56536-3edb-494c-9e8b-87cfa8396dac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.070 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.070 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.139 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.189 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 32e56536-3edb-494c-9e8b-87cfa8396dac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.275 247708 DEBUG nova.storage.rbd_utils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] resizing rbd image 32e56536-3edb-494c-9e8b-87cfa8396dac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:01:28 compute-0 ceph-mon[74496]: pgmap v1926: 305 pgs: 305 active+clean; 213 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 147 op/s
Jan 31 08:01:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3110385296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.394 247708 DEBUG nova.objects.instance [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'migration_context' on Instance uuid 32e56536-3edb-494c-9e8b-87cfa8396dac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.432 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.434 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Ensure instance console log exists: /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.435 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.436 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.437 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:01:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147785232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.616 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.624 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.648 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.689 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.689 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.691 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.691 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:01:28 compute-0 nova_compute[247704]: 2026-01-31 08:01:28.711 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:01:29 compute-0 nova_compute[247704]: 2026-01-31 08:01:29.032 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3147785232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 191 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 191 op/s
Jan 31 08:01:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:29.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:29 compute-0 nova_compute[247704]: 2026-01-31 08:01:29.673 247708 DEBUG nova.network.neutron [req-81946960-cadf-4051-a8c2-d83d48117f28 req-07ad71cb-dc0a-49f1-a888-19b2d137652e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updated VIF entry in instance network info cache for port f0a0388d-0f3d-4eb7-a195-234dd517f99b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:01:29 compute-0 nova_compute[247704]: 2026-01-31 08:01:29.673 247708 DEBUG nova.network.neutron [req-81946960-cadf-4051-a8c2-d83d48117f28 req-07ad71cb-dc0a-49f1-a888-19b2d137652e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:29 compute-0 ovn_controller[149457]: 2026-01-31T08:01:29Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ce:51:d8 10.100.0.9
Jan 31 08:01:29 compute-0 ovn_controller[149457]: 2026-01-31T08:01:29Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ce:51:d8 10.100.0.9
Jan 31 08:01:29 compute-0 nova_compute[247704]: 2026-01-31 08:01:29.746 247708 DEBUG oslo_concurrency.lockutils [req-81946960-cadf-4051-a8c2-d83d48117f28 req-07ad71cb-dc0a-49f1-a888-19b2d137652e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:30 compute-0 nova_compute[247704]: 2026-01-31 08:01:30.182 247708 DEBUG nova.network.neutron [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Successfully created port: 6dcdaf78-571b-42ba-bc17-7f6217ee6587 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:01:30 compute-0 ceph-mon[74496]: pgmap v1927: 305 pgs: 305 active+clean; 191 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 191 op/s
Jan 31 08:01:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3439344745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/824275673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 199 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 186 op/s
Jan 31 08:01:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:31.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:32 compute-0 nova_compute[247704]: 2026-01-31 08:01:32.177 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:32 compute-0 ceph-mon[74496]: pgmap v1928: 305 pgs: 305 active+clean; 199 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 186 op/s
Jan 31 08:01:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 248 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 190 op/s
Jan 31 08:01:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:33.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:33.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:33 compute-0 nova_compute[247704]: 2026-01-31 08:01:33.925 247708 DEBUG nova.network.neutron [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Successfully updated port: 6dcdaf78-571b-42ba-bc17-7f6217ee6587 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:01:33 compute-0 ceph-mon[74496]: pgmap v1929: 305 pgs: 305 active+clean; 248 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 190 op/s
Jan 31 08:01:34 compute-0 nova_compute[247704]: 2026-01-31 08:01:34.032 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:34 compute-0 nova_compute[247704]: 2026-01-31 08:01:34.208 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:34 compute-0 nova_compute[247704]: 2026-01-31 08:01:34.209 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquired lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:34 compute-0 nova_compute[247704]: 2026-01-31 08:01:34.209 247708 DEBUG nova.network.neutron [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:01:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:34 compute-0 podman[308358]: 2026-01-31 08:01:34.945336758 +0000 UTC m=+0.111299033 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004808293492927601 of space, bias 1.0, pg target 1.4424880478782802 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:01:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 246 op/s
Jan 31 08:01:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:35.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:35.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:36 compute-0 ceph-mon[74496]: pgmap v1930: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 246 op/s
Jan 31 08:01:36 compute-0 nova_compute[247704]: 2026-01-31 08:01:36.671 247708 DEBUG nova.network.neutron [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:01:37 compute-0 nova_compute[247704]: 2026-01-31 08:01:37.116 247708 DEBUG nova.compute.manager [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-changed-6dcdaf78-571b-42ba-bc17-7f6217ee6587 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:37 compute-0 nova_compute[247704]: 2026-01-31 08:01:37.116 247708 DEBUG nova.compute.manager [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Refreshing instance network info cache due to event network-changed-6dcdaf78-571b-42ba-bc17-7f6217ee6587. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:01:37 compute-0 nova_compute[247704]: 2026-01-31 08:01:37.116 247708 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:37 compute-0 nova_compute[247704]: 2026-01-31 08:01:37.179 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 265 op/s
Jan 31 08:01:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:37.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:38 compute-0 ceph-mon[74496]: pgmap v1931: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 265 op/s
Jan 31 08:01:39 compute-0 nova_compute[247704]: 2026-01-31 08:01:39.035 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 296 op/s
Jan 31 08:01:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:01:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:39.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:01:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:39.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:39 compute-0 nova_compute[247704]: 2026-01-31 08:01:39.979 247708 DEBUG nova.network.neutron [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:40 compute-0 sudo[308388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:40 compute-0 sudo[308388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:40 compute-0 sudo[308388]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:40 compute-0 sudo[308413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:40 compute-0 sudo[308413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:40 compute-0 sudo[308413]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:40 compute-0 ceph-mon[74496]: pgmap v1932: 305 pgs: 305 active+clean; 260 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 296 op/s
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.861 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Releasing lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.861 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Instance network_info: |[{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.862 247708 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.862 247708 DEBUG nova.network.neutron [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Refreshing network info cache for port 6dcdaf78-571b-42ba-bc17-7f6217ee6587 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.864 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Start _get_guest_xml network_info=[{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.871 247708 WARNING nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.878 247708 DEBUG nova.virt.libvirt.host [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.879 247708 DEBUG nova.virt.libvirt.host [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.889 247708 DEBUG nova.virt.libvirt.host [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.890 247708 DEBUG nova.virt.libvirt.host [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.891 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.892 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.893 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.893 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.893 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.894 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.894 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.895 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.895 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.896 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.896 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.896 247708 DEBUG nova.virt.hardware [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:01:40 compute-0 nova_compute[247704]: 2026-01-31 08:01:40.901 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 265 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.4 MiB/s wr, 244 op/s
Jan 31 08:01:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963186934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:41 compute-0 nova_compute[247704]: 2026-01-31 08:01:41.386 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:41.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:41 compute-0 nova_compute[247704]: 2026-01-31 08:01:41.412 247708 DEBUG nova.storage.rbd_utils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 32e56536-3edb-494c-9e8b-87cfa8396dac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:41 compute-0 nova_compute[247704]: 2026-01-31 08:01:41.417 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/963186934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:41.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:01:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3768615480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:41 compute-0 nova_compute[247704]: 2026-01-31 08:01:41.849 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:41 compute-0 nova_compute[247704]: 2026-01-31 08:01:41.850 247708 DEBUG nova.virt.libvirt.vif [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:01:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-114931390',display_name='tempest-ServerActionsTestOtherA-server-114931390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-114931390',id=92,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-nn0tigkg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=32e56536-3edb-494c-9e8b-87cfa8396dac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:01:41 compute-0 nova_compute[247704]: 2026-01-31 08:01:41.851 247708 DEBUG nova.network.os_vif_util [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:41 compute-0 nova_compute[247704]: 2026-01-31 08:01:41.851 247708 DEBUG nova.network.os_vif_util [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:23:9c,bridge_name='br-int',has_traffic_filtering=True,id=6dcdaf78-571b-42ba-bc17-7f6217ee6587,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6dcdaf78-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:41 compute-0 nova_compute[247704]: 2026-01-31 08:01:41.852 247708 DEBUG nova.objects.instance [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32e56536-3edb-494c-9e8b-87cfa8396dac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:42 compute-0 nova_compute[247704]: 2026-01-31 08:01:42.206 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:42 compute-0 ceph-mon[74496]: pgmap v1933: 305 pgs: 305 active+clean; 265 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.4 MiB/s wr, 244 op/s
Jan 31 08:01:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3768615480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.015 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <uuid>32e56536-3edb-494c-9e8b-87cfa8396dac</uuid>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <name>instance-0000005c</name>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerActionsTestOtherA-server-114931390</nova:name>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:01:40</nova:creationTime>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <nova:user uuid="31043e345f6b48b585fb7b8ab7304764">tempest-ServerActionsTestOtherA-527878807-project-member</nova:user>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <nova:project uuid="d352316ff6534075952e2d0c28061b09">tempest-ServerActionsTestOtherA-527878807</nova:project>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <nova:port uuid="6dcdaf78-571b-42ba-bc17-7f6217ee6587">
Jan 31 08:01:43 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <system>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <entry name="serial">32e56536-3edb-494c-9e8b-87cfa8396dac</entry>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <entry name="uuid">32e56536-3edb-494c-9e8b-87cfa8396dac</entry>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </system>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <os>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   </os>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <features>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   </features>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/32e56536-3edb-494c-9e8b-87cfa8396dac_disk">
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       </source>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/32e56536-3edb-494c-9e8b-87cfa8396dac_disk.config">
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       </source>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:01:43 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:cf:23:9c"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <target dev="tap6dcdaf78-57"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac/console.log" append="off"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <video>
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </video>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:01:43 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:01:43 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:01:43 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:01:43 compute-0 nova_compute[247704]: </domain>
Jan 31 08:01:43 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.017 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Preparing to wait for external event network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.018 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.018 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.018 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.019 247708 DEBUG nova.virt.libvirt.vif [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:01:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-114931390',display_name='tempest-ServerActionsTestOtherA-server-114931390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-114931390',id=92,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-nn0tigkg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=32e56536-3edb-494c-9e8b-87cfa8396dac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.020 247708 DEBUG nova.network.os_vif_util [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.021 247708 DEBUG nova.network.os_vif_util [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:23:9c,bridge_name='br-int',has_traffic_filtering=True,id=6dcdaf78-571b-42ba-bc17-7f6217ee6587,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6dcdaf78-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.021 247708 DEBUG os_vif [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:23:9c,bridge_name='br-int',has_traffic_filtering=True,id=6dcdaf78-571b-42ba-bc17-7f6217ee6587,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6dcdaf78-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.022 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.022 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.023 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.027 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.027 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6dcdaf78-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.028 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6dcdaf78-57, col_values=(('external_ids', {'iface-id': '6dcdaf78-571b-42ba-bc17-7f6217ee6587', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:23:9c', 'vm-uuid': '32e56536-3edb-494c-9e8b-87cfa8396dac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.029 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:43 compute-0 NetworkManager[49108]: <info>  [1769846503.0309] manager: (tap6dcdaf78-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.034 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.037 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.039 247708 INFO os_vif [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:23:9c,bridge_name='br-int',has_traffic_filtering=True,id=6dcdaf78-571b-42ba-bc17-7f6217ee6587,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6dcdaf78-57')
Jan 31 08:01:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 270 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.7 MiB/s wr, 181 op/s
Jan 31 08:01:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:43.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.484 247708 DEBUG oslo_concurrency.lockutils [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "interface-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.484 247708 DEBUG oslo_concurrency.lockutils [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "interface-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.485 247708 DEBUG nova.objects.instance [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'flavor' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.530 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.530 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.531 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:cf:23:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.531 247708 INFO nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Using config drive
Jan 31 08:01:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:43.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:43 compute-0 nova_compute[247704]: 2026-01-31 08:01:43.571 247708 DEBUG nova.storage.rbd_utils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 32e56536-3edb-494c-9e8b-87cfa8396dac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:44 compute-0 nova_compute[247704]: 2026-01-31 08:01:44.038 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:44 compute-0 nova_compute[247704]: 2026-01-31 08:01:44.304 247708 DEBUG nova.objects.instance [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'pci_requests' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:44 compute-0 nova_compute[247704]: 2026-01-31 08:01:44.408 247708 DEBUG nova.network.neutron [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updated VIF entry in instance network info cache for port 6dcdaf78-571b-42ba-bc17-7f6217ee6587. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:01:44 compute-0 nova_compute[247704]: 2026-01-31 08:01:44.409 247708 DEBUG nova.network.neutron [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:44 compute-0 ceph-mon[74496]: pgmap v1934: 305 pgs: 305 active+clean; 270 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.7 MiB/s wr, 181 op/s
Jan 31 08:01:44 compute-0 nova_compute[247704]: 2026-01-31 08:01:44.841 247708 DEBUG nova.network.neutron [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:01:44 compute-0 nova_compute[247704]: 2026-01-31 08:01:44.846 247708 DEBUG oslo_concurrency.lockutils [req-e5ee381e-e661-40f6-8306-aa3b60a895a2 req-808f61c6-888d-4bb4-95ee-ffef6ebeda9c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:01:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/563089829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:01:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:01:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/563089829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:01:45 compute-0 nova_compute[247704]: 2026-01-31 08:01:45.141 247708 INFO nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Creating config drive at /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac/disk.config
Jan 31 08:01:45 compute-0 nova_compute[247704]: 2026-01-31 08:01:45.146 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9tzey66p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:45 compute-0 nova_compute[247704]: 2026-01-31 08:01:45.276 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9tzey66p" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:45 compute-0 nova_compute[247704]: 2026-01-31 08:01:45.312 247708 DEBUG nova.storage.rbd_utils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 32e56536-3edb-494c-9e8b-87cfa8396dac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:01:45 compute-0 nova_compute[247704]: 2026-01-31 08:01:45.317 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac/disk.config 32e56536-3edb-494c-9e8b-87cfa8396dac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 265 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.8 MiB/s wr, 180 op/s
Jan 31 08:01:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:45.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:45.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/563089829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:01:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/563089829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.033 247708 DEBUG oslo_concurrency.processutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac/disk.config 32e56536-3edb-494c-9e8b-87cfa8396dac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.716s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.035 247708 INFO nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Deleting local config drive /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac/disk.config because it was imported into RBD.
Jan 31 08:01:46 compute-0 kernel: tap6dcdaf78-57: entered promiscuous mode
Jan 31 08:01:46 compute-0 NetworkManager[49108]: <info>  [1769846506.1016] manager: (tap6dcdaf78-57): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Jan 31 08:01:46 compute-0 ovn_controller[149457]: 2026-01-31T08:01:46Z|00365|binding|INFO|Claiming lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 for this chassis.
Jan 31 08:01:46 compute-0 ovn_controller[149457]: 2026-01-31T08:01:46Z|00366|binding|INFO|6dcdaf78-571b-42ba-bc17-7f6217ee6587: Claiming fa:16:3e:cf:23:9c 10.100.0.11
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.103 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:46 compute-0 ovn_controller[149457]: 2026-01-31T08:01:46Z|00367|binding|INFO|Setting lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 ovn-installed in OVS
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.110 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.114 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:46 compute-0 systemd-machined[214448]: New machine qemu-38-instance-0000005c.
Jan 31 08:01:46 compute-0 systemd-udevd[308576]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:01:46 compute-0 NetworkManager[49108]: <info>  [1769846506.1496] device (tap6dcdaf78-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:01:46 compute-0 NetworkManager[49108]: <info>  [1769846506.1504] device (tap6dcdaf78-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:01:46 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-0000005c.
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.259 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:23:9c 10.100.0.11'], port_security=['fa:16:3e:cf:23:9c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32e56536-3edb-494c-9e8b-87cfa8396dac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8897503-0019-487c-aacb-6eb623a53e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6dcdaf78-571b-42ba-bc17-7f6217ee6587) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:46 compute-0 ovn_controller[149457]: 2026-01-31T08:01:46Z|00368|binding|INFO|Setting lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 up in Southbound
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.261 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6dcdaf78-571b-42ba-bc17-7f6217ee6587 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.263 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.272 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8ceb0385-4864-41b8-a576-11e64b0805a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.273 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79cb2b81-31 in ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.274 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79cb2b81-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.275 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[18369c3d-9974-4f91-9726-5f5f00710ec9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.275 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[aeed0ed8-b6fa-4199-8e51-84c40d76b2fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.285 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2f9bbe-afdc-4ed1-ab38-225e290f7786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.296 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8b345a63-38b1-4d0b-8fe8-b0335b5c8b5e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.320 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3addd5-b65b-4481-91ca-be4ec0a103c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 NetworkManager[49108]: <info>  [1769846506.3302] manager: (tap79cb2b81-30): new Veth device (/org/freedesktop/NetworkManager/Devices/178)
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.329 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d7724c7d-ccb1-4155-8c8a-b76643f2e015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.360 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4619c20a-14ba-4188-98c2-4b78cba898fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.362 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[667df79e-7d79-4faa-a1ec-50a53232034f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 NetworkManager[49108]: <info>  [1769846506.3787] device (tap79cb2b81-30): carrier: link connected
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.382 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5796ec8f-8914-4ca8-bd3d-41edf92e112b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.394 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3628c48b-fbdc-4ccc-a032-3aab578e194b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674899, 'reachable_time': 19122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308611, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.406 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0ffc3d-b6b3-4516-9bce-83d83c5d04cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:12e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674899, 'tstamp': 674899}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308612, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.419 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e61d0e88-01a1-42a2-bc84-4bf9606556bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674899, 'reachable_time': 19122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308613, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.439 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3e05c5d4-6bcf-4505-aca1-28ae841d2b0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.492 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3d88d5a8-88ab-4ddf-b622-313aa1f838f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.494 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.494 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.495 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.496 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:46 compute-0 NetworkManager[49108]: <info>  [1769846506.4977] manager: (tap79cb2b81-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Jan 31 08:01:46 compute-0 kernel: tap79cb2b81-30: entered promiscuous mode
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.504 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.504 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:46 compute-0 ovn_controller[149457]: 2026-01-31T08:01:46Z|00369|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=1)
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.515 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.517 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.518 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.519 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5413add5-0b62-413f-a732-4e87dc43d2ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.519 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:01:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:46.520 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'env', 'PROCESS_TAG=haproxy-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79cb2b81-3369-468a-8bf6-7e13d5df334b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.734 247708 DEBUG nova.policy [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd454a4ec91e645e992f4a88ac60da747', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ab30fd72657435fb442fc59a53da644', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:01:46 compute-0 ceph-mon[74496]: pgmap v1935: 305 pgs: 305 active+clean; 265 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.8 MiB/s wr, 180 op/s
Jan 31 08:01:46 compute-0 podman[308681]: 2026-01-31 08:01:46.882710954 +0000 UTC m=+0.073184494 container create 2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.890 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846506.8903847, 32e56536-3edb-494c-9e8b-87cfa8396dac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:46 compute-0 nova_compute[247704]: 2026-01-31 08:01:46.891 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] VM Started (Lifecycle Event)
Jan 31 08:01:46 compute-0 systemd[1]: Started libpod-conmon-2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1.scope.
Jan 31 08:01:46 compute-0 podman[308681]: 2026-01-31 08:01:46.835770621 +0000 UTC m=+0.026244341 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:01:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5c0d814f48bbf2ba71cfdedb5500f2b19084433b27b6361618559b01f6b3621/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:46 compute-0 podman[308681]: 2026-01-31 08:01:46.983057389 +0000 UTC m=+0.173530979 container init 2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 08:01:46 compute-0 podman[308681]: 2026-01-31 08:01:46.991810842 +0000 UTC m=+0.182284382 container start 2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 08:01:47 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[308703]: [NOTICE]   (308707) : New worker (308709) forked
Jan 31 08:01:47 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[308703]: [NOTICE]   (308707) : Loading success.
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.266 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.271 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846506.8910422, 32e56536-3edb-494c-9e8b-87cfa8396dac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.272 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] VM Paused (Lifecycle Event)
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.312 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.316 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 259 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.377 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:01:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:47.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:47.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.790 247708 DEBUG nova.compute.manager [req-def559ce-1e7a-4194-883b-8dea594e8de4 req-b615ef9a-9c27-4cc9-96bd-8aab273d8a1d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.791 247708 DEBUG oslo_concurrency.lockutils [req-def559ce-1e7a-4194-883b-8dea594e8de4 req-b615ef9a-9c27-4cc9-96bd-8aab273d8a1d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.791 247708 DEBUG oslo_concurrency.lockutils [req-def559ce-1e7a-4194-883b-8dea594e8de4 req-b615ef9a-9c27-4cc9-96bd-8aab273d8a1d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.791 247708 DEBUG oslo_concurrency.lockutils [req-def559ce-1e7a-4194-883b-8dea594e8de4 req-b615ef9a-9c27-4cc9-96bd-8aab273d8a1d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.791 247708 DEBUG nova.compute.manager [req-def559ce-1e7a-4194-883b-8dea594e8de4 req-b615ef9a-9c27-4cc9-96bd-8aab273d8a1d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Processing event network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.792 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.796 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.797 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846507.7961438, 32e56536-3edb-494c-9e8b-87cfa8396dac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.797 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] VM Resumed (Lifecycle Event)
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.803 247708 INFO nova.virt.libvirt.driver [-] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Instance spawned successfully.
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.804 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.862 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.870 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.871 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.872 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.873 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.873 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.874 247708 DEBUG nova.virt.libvirt.driver [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.881 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:01:47 compute-0 ceph-mon[74496]: pgmap v1936: 305 pgs: 305 active+clean; 259 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 31 08:01:47 compute-0 nova_compute[247704]: 2026-01-31 08:01:47.970 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:01:48 compute-0 nova_compute[247704]: 2026-01-31 08:01:48.031 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:48 compute-0 nova_compute[247704]: 2026-01-31 08:01:48.045 247708 INFO nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Took 20.57 seconds to spawn the instance on the hypervisor.
Jan 31 08:01:48 compute-0 nova_compute[247704]: 2026-01-31 08:01:48.045 247708 DEBUG nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:01:48 compute-0 nova_compute[247704]: 2026-01-31 08:01:48.477 247708 INFO nova.compute.manager [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Took 23.77 seconds to build instance.
Jan 31 08:01:48 compute-0 nova_compute[247704]: 2026-01-31 08:01:48.622 247708 DEBUG oslo_concurrency.lockutils [None req-7f6314f0-032c-4801-b71c-da97a46015f6 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:49 compute-0 nova_compute[247704]: 2026-01-31 08:01:49.041 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1292270437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:49 compute-0 nova_compute[247704]: 2026-01-31 08:01:49.181 247708 DEBUG nova.network.neutron [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Successfully created port: a4bb4520-7491-4c40-8d8b-bf1044a7d29c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:01:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 246 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:01:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:49.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:01:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:49.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:01:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:49 compute-0 podman[308720]: 2026-01-31 08:01:49.878577791 +0000 UTC m=+0.048198435 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:01:49 compute-0 nova_compute[247704]: 2026-01-31 08:01:49.928 247708 DEBUG nova.compute.manager [req-2d3fdcec-a428-471f-ba83-b18db8b1af97 req-efaa1f3a-1be3-4ad9-ad7f-c86b49eb2da2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:49 compute-0 nova_compute[247704]: 2026-01-31 08:01:49.929 247708 DEBUG oslo_concurrency.lockutils [req-2d3fdcec-a428-471f-ba83-b18db8b1af97 req-efaa1f3a-1be3-4ad9-ad7f-c86b49eb2da2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:49 compute-0 nova_compute[247704]: 2026-01-31 08:01:49.929 247708 DEBUG oslo_concurrency.lockutils [req-2d3fdcec-a428-471f-ba83-b18db8b1af97 req-efaa1f3a-1be3-4ad9-ad7f-c86b49eb2da2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:49 compute-0 nova_compute[247704]: 2026-01-31 08:01:49.930 247708 DEBUG oslo_concurrency.lockutils [req-2d3fdcec-a428-471f-ba83-b18db8b1af97 req-efaa1f3a-1be3-4ad9-ad7f-c86b49eb2da2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:49 compute-0 nova_compute[247704]: 2026-01-31 08:01:49.930 247708 DEBUG nova.compute.manager [req-2d3fdcec-a428-471f-ba83-b18db8b1af97 req-efaa1f3a-1be3-4ad9-ad7f-c86b49eb2da2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] No waiting events found dispatching network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:49 compute-0 nova_compute[247704]: 2026-01-31 08:01:49.930 247708 WARNING nova.compute.manager [req-2d3fdcec-a428-471f-ba83-b18db8b1af97 req-efaa1f3a-1be3-4ad9-ad7f-c86b49eb2da2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received unexpected event network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 for instance with vm_state active and task_state None.
Jan 31 08:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:01:50 compute-0 ceph-mon[74496]: pgmap v1937: 305 pgs: 305 active+clean; 246 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:01:50 compute-0 nova_compute[247704]: 2026-01-31 08:01:50.868 247708 DEBUG nova.network.neutron [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Successfully updated port: a4bb4520-7491-4c40-8d8b-bf1044a7d29c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:01:50 compute-0 nova_compute[247704]: 2026-01-31 08:01:50.931 247708 DEBUG oslo_concurrency.lockutils [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:50 compute-0 nova_compute[247704]: 2026-01-31 08:01:50.931 247708 DEBUG oslo_concurrency.lockutils [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquired lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:50 compute-0 nova_compute[247704]: 2026-01-31 08:01:50.932 247708 DEBUG nova.network.neutron [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:01:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 247 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 31 08:01:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:51.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:51.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:51 compute-0 nova_compute[247704]: 2026-01-31 08:01:51.650 247708 DEBUG nova.compute.manager [req-2ff96ea4-2f05-45bf-9b10-60ab2d8f648c req-9430afd7-b514-498b-92b9-5c9763eedd7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-changed-a4bb4520-7491-4c40-8d8b-bf1044a7d29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:51 compute-0 nova_compute[247704]: 2026-01-31 08:01:51.650 247708 DEBUG nova.compute.manager [req-2ff96ea4-2f05-45bf-9b10-60ab2d8f648c req-9430afd7-b514-498b-92b9-5c9763eedd7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Refreshing instance network info cache due to event network-changed-a4bb4520-7491-4c40-8d8b-bf1044a7d29c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:01:51 compute-0 nova_compute[247704]: 2026-01-31 08:01:51.651 247708 DEBUG oslo_concurrency.lockutils [req-2ff96ea4-2f05-45bf-9b10-60ab2d8f648c req-9430afd7-b514-498b-92b9-5c9763eedd7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1763936160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:01:52 compute-0 ceph-mon[74496]: pgmap v1938: 305 pgs: 305 active+clean; 247 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Jan 31 08:01:53 compute-0 nova_compute[247704]: 2026-01-31 08:01:53.035 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 250 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 151 op/s
Jan 31 08:01:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:01:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:53.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:01:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:01:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:53.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:01:53 compute-0 nova_compute[247704]: 2026-01-31 08:01:53.796 247708 DEBUG nova.compute.manager [req-69b5dc52-126f-4eb1-ba1f-b570ff4fa8f2 req-06f766e4-7dcf-4f3d-b9dc-a3d5f5a93e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-changed-6dcdaf78-571b-42ba-bc17-7f6217ee6587 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:53 compute-0 nova_compute[247704]: 2026-01-31 08:01:53.797 247708 DEBUG nova.compute.manager [req-69b5dc52-126f-4eb1-ba1f-b570ff4fa8f2 req-06f766e4-7dcf-4f3d-b9dc-a3d5f5a93e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Refreshing instance network info cache due to event network-changed-6dcdaf78-571b-42ba-bc17-7f6217ee6587. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:01:53 compute-0 nova_compute[247704]: 2026-01-31 08:01:53.798 247708 DEBUG oslo_concurrency.lockutils [req-69b5dc52-126f-4eb1-ba1f-b570ff4fa8f2 req-06f766e4-7dcf-4f3d-b9dc-a3d5f5a93e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:01:53 compute-0 nova_compute[247704]: 2026-01-31 08:01:53.798 247708 DEBUG oslo_concurrency.lockutils [req-69b5dc52-126f-4eb1-ba1f-b570ff4fa8f2 req-06f766e4-7dcf-4f3d-b9dc-a3d5f5a93e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:53 compute-0 nova_compute[247704]: 2026-01-31 08:01:53.799 247708 DEBUG nova.network.neutron [req-69b5dc52-126f-4eb1-ba1f-b570ff4fa8f2 req-06f766e4-7dcf-4f3d-b9dc-a3d5f5a93e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Refreshing network info cache for port 6dcdaf78-571b-42ba-bc17-7f6217ee6587 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:01:53 compute-0 ceph-mon[74496]: pgmap v1939: 305 pgs: 305 active+clean; 250 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 151 op/s
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.044 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.356 247708 DEBUG nova.network.neutron [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.387 247708 DEBUG oslo_concurrency.lockutils [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Releasing lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.389 247708 DEBUG oslo_concurrency.lockutils [req-2ff96ea4-2f05-45bf-9b10-60ab2d8f648c req-9430afd7-b514-498b-92b9-5c9763eedd7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.389 247708 DEBUG nova.network.neutron [req-2ff96ea4-2f05-45bf-9b10-60ab2d8f648c req-9430afd7-b514-498b-92b9-5c9763eedd7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Refreshing network info cache for port a4bb4520-7491-4c40-8d8b-bf1044a7d29c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.393 247708 DEBUG nova.virt.libvirt.vif [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',ima
ge_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.395 247708 DEBUG nova.network.os_vif_util [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converting VIF {"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.396 247708 DEBUG nova.network.os_vif_util [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.398 247708 DEBUG os_vif [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.399 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.400 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.401 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.405 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4bb4520-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.405 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4bb4520-74, col_values=(('external_ids', {'iface-id': 'a4bb4520-7491-4c40-8d8b-bf1044a7d29c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:f7:dd', 'vm-uuid': 'cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.407 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 NetworkManager[49108]: <info>  [1769846514.4091] manager: (tapa4bb4520-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.414 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.416 247708 INFO os_vif [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74')
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.418 247708 DEBUG nova.virt.libvirt.vif [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',ima
ge_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.418 247708 DEBUG nova.network.os_vif_util [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converting VIF {"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.419 247708 DEBUG nova.network.os_vif_util [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.429 247708 DEBUG nova.virt.libvirt.guest [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] attach device xml: <interface type="ethernet">
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <mac address="fa:16:3e:d7:f7:dd"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <model type="virtio"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <mtu size="1442"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <target dev="tapa4bb4520-74"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]: </interface>
Jan 31 08:01:54 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:01:54 compute-0 NetworkManager[49108]: <info>  [1769846514.4409] manager: (tapa4bb4520-74): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Jan 31 08:01:54 compute-0 kernel: tapa4bb4520-74: entered promiscuous mode
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.447 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 ovn_controller[149457]: 2026-01-31T08:01:54Z|00370|binding|INFO|Claiming lport a4bb4520-7491-4c40-8d8b-bf1044a7d29c for this chassis.
Jan 31 08:01:54 compute-0 ovn_controller[149457]: 2026-01-31T08:01:54Z|00371|binding|INFO|a4bb4520-7491-4c40-8d8b-bf1044a7d29c: Claiming fa:16:3e:d7:f7:dd 10.10.10.23
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.471 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:f7:dd 10.10.10.23'], port_security=['fa:16:3e:d7:f7:dd 10.10.10.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.23/24', 'neutron:device_id': 'cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ab30fd72657435fb442fc59a53da644', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4581686a-bc33-40dd-9725-667831be9a62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76275e80-3a87-4424-b971-4a6b4136f99e, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a4bb4520-7491-4c40-8d8b-bf1044a7d29c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.475 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a4bb4520-7491-4c40-8d8b-bf1044a7d29c in datapath 6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a bound to our chassis
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.479 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a
Jan 31 08:01:54 compute-0 systemd-udevd[308748]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.493 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0c368328-f1e5-4b2a-bbb4-c880561bee20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.494 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6741c3f8-41 in ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:01:54 compute-0 ovn_controller[149457]: 2026-01-31T08:01:54Z|00372|binding|INFO|Setting lport a4bb4520-7491-4c40-8d8b-bf1044a7d29c ovn-installed in OVS
Jan 31 08:01:54 compute-0 ovn_controller[149457]: 2026-01-31T08:01:54Z|00373|binding|INFO|Setting lport a4bb4520-7491-4c40-8d8b-bf1044a7d29c up in Southbound
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.499 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 NetworkManager[49108]: <info>  [1769846514.5033] device (tapa4bb4520-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.498 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6741c3f8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.498 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[eca681ff-0e23-4ebc-95fe-ecace9035929]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.501 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[75a1a39a-f8a0-4aab-aa44-1a18583f67c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 NetworkManager[49108]: <info>  [1769846514.5045] device (tapa4bb4520-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.518 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[10d39d18-ed1a-4c83-a07d-5cffb9c1a731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.530 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c97e02-a071-450f-9d35-7392c7f2c09f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.559 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[38a71c2b-08c6-4294-a9d4-0d1e1f8b6109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.564 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d758a914-8b3d-4316-8f94-6b3e33377eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 NetworkManager[49108]: <info>  [1769846514.5652] manager: (tap6741c3f8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/182)
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.578 247708 DEBUG nova.virt.libvirt.driver [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.578 247708 DEBUG nova.virt.libvirt.driver [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.578 247708 DEBUG nova.virt.libvirt.driver [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No VIF found with MAC fa:16:3e:ce:51:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.595 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[179a21df-353f-40be-b243-732e224c985e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.599 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[58fa308d-2b74-4d0a-80f2-c3427783f65f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.610 247708 DEBUG nova.virt.libvirt.guest [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <nova:name>tempest-device-tagging-server-1787730133</nova:name>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <nova:creationTime>2026-01-31 08:01:54</nova:creationTime>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <nova:flavor name="m1.nano">
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:memory>128</nova:memory>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:disk>1</nova:disk>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:swap>0</nova:swap>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:vcpus>1</nova:vcpus>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   </nova:flavor>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <nova:owner>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:user uuid="d454a4ec91e645e992f4a88ac60da747">tempest-TaggedAttachmentsTest-1028778083-project-member</nova:user>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:project uuid="2ab30fd72657435fb442fc59a53da644">tempest-TaggedAttachmentsTest-1028778083</nova:project>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   </nova:owner>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   <nova:ports>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:port uuid="f0a0388d-0f3d-4eb7-a195-234dd517f99b">
Jan 31 08:01:54 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     <nova:port uuid="a4bb4520-7491-4c40-8d8b-bf1044a7d29c">
Jan 31 08:01:54 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.10.10.23" ipVersion="4"/>
Jan 31 08:01:54 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:01:54 compute-0 nova_compute[247704]:   </nova:ports>
Jan 31 08:01:54 compute-0 nova_compute[247704]: </nova:instance>
Jan 31 08:01:54 compute-0 nova_compute[247704]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 08:01:54 compute-0 NetworkManager[49108]: <info>  [1769846514.6259] device (tap6741c3f8-40): carrier: link connected
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.631 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9f984f17-731f-4ad0-a27c-7b96c442f01d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.649 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc910ce-aeda-4c6d-abe9-5f8af0929af2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6741c3f8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:f6:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675724, 'reachable_time': 15874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308775, 'error': None, 'target': 'ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.668 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[19956cc6-6842-43b2-86ed-6f3ff273f102]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:f63a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675724, 'tstamp': 675724}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308776, 'error': None, 'target': 'ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.685 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cd9c3b-e23b-4cae-aad6-2de68e57516b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6741c3f8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:f6:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675724, 'reachable_time': 15874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308777, 'error': None, 'target': 'ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.687 247708 DEBUG oslo_concurrency.lockutils [None req-06211948-2ad3-436a-be17-36c810f08f2d d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "interface-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.705 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7a48dca5-7108-46ef-9abb-9e63ed1ebb30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.767 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6eefea04-5c51-45be-a21a-a3472885e9f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.769 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6741c3f8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.769 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.769 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6741c3f8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:54 compute-0 kernel: tap6741c3f8-40: entered promiscuous mode
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.771 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 NetworkManager[49108]: <info>  [1769846514.7726] manager: (tap6741c3f8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.773 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.774 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6741c3f8-40, col_values=(('external_ids', {'iface-id': '31905072-2c84-4619-8690-c8e8f872e019'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 ovn_controller[149457]: 2026-01-31T08:01:54Z|00374|binding|INFO|Releasing lport 31905072-2c84-4619-8690-c8e8f872e019 from this chassis (sb_readonly=0)
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.784 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.785 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.786 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd79c45-7ea4-48f1-9cb7-c0127bae733f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.786 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a.pid.haproxy
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:01:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:54.787 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a', 'env', 'PROCESS_TAG=haproxy-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:01:54 compute-0 sudo[308787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:54 compute-0 sudo[308787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:54 compute-0 sudo[308787]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.995 247708 DEBUG nova.compute.manager [req-a443a95b-1f6b-4e3a-99c7-c511b0c86230 req-0b4277d5-e438-4ba0-8621-d3451b992697 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.997 247708 DEBUG oslo_concurrency.lockutils [req-a443a95b-1f6b-4e3a-99c7-c511b0c86230 req-0b4277d5-e438-4ba0-8621-d3451b992697 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.998 247708 DEBUG oslo_concurrency.lockutils [req-a443a95b-1f6b-4e3a-99c7-c511b0c86230 req-0b4277d5-e438-4ba0-8621-d3451b992697 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:54 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.999 247708 DEBUG oslo_concurrency.lockutils [req-a443a95b-1f6b-4e3a-99c7-c511b0c86230 req-0b4277d5-e438-4ba0-8621-d3451b992697 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:55 compute-0 nova_compute[247704]: 2026-01-31 08:01:54.999 247708 DEBUG nova.compute.manager [req-a443a95b-1f6b-4e3a-99c7-c511b0c86230 req-0b4277d5-e438-4ba0-8621-d3451b992697 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] No waiting events found dispatching network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:55 compute-0 nova_compute[247704]: 2026-01-31 08:01:55.000 247708 WARNING nova.compute.manager [req-a443a95b-1f6b-4e3a-99c7-c511b0c86230 req-0b4277d5-e438-4ba0-8621-d3451b992697 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received unexpected event network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c for instance with vm_state active and task_state None.
Jan 31 08:01:55 compute-0 sudo[308812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:01:55 compute-0 sudo[308812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-0 sudo[308812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-0 sudo[308847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:55 compute-0 sudo[308847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-0 sudo[308847]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-0 podman[308882]: 2026-01-31 08:01:55.120205148 +0000 UTC m=+0.040556099 container create 3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:55 compute-0 systemd[1]: Started libpod-conmon-3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731.scope.
Jan 31 08:01:55 compute-0 sudo[308894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:01:55 compute-0 sudo[308894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad01de360f1b85e79093689cf0235ba39d4fa7997d1f4aeb1141cb811ce2ee88/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:55 compute-0 podman[308882]: 2026-01-31 08:01:55.095709151 +0000 UTC m=+0.016060112 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:01:55 compute-0 podman[308882]: 2026-01-31 08:01:55.19668376 +0000 UTC m=+0.117034701 container init 3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:01:55 compute-0 podman[308882]: 2026-01-31 08:01:55.201461427 +0000 UTC m=+0.121812368 container start 3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 08:01:55 compute-0 neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a[308922]: [NOTICE]   (308927) : New worker (308929) forked
Jan 31 08:01:55 compute-0 neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a[308922]: [NOTICE]   (308927) : Loading success.
Jan 31 08:01:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 272 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 156 op/s
Jan 31 08:01:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:55.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:55 compute-0 sudo[308894]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:55.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:01:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:01:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:01:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:01:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:01:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:01:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2831c221-42ae-472d-a7f1-b9cf54367c37 does not exist
Jan 31 08:01:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fe113492-c672-44d6-a89a-275211006263 does not exist
Jan 31 08:01:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c7213194-a2ce-4c54-9d77-4787f1ccb1ae does not exist
Jan 31 08:01:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:01:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:01:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:01:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:01:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:01:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:01:55 compute-0 sudo[308969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:55 compute-0 sudo[308969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-0 sudo[308969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-0 sudo[308994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:01:55 compute-0 sudo[308994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-0 sudo[308994]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-0 sudo[309019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:55 compute-0 sudo[309019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-0 sudo[309019]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-0 sudo[309044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:01:55 compute-0 sudo[309044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:55 compute-0 ovn_controller[149457]: 2026-01-31T08:01:55Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:f7:dd 10.10.10.23
Jan 31 08:01:55 compute-0 ovn_controller[149457]: 2026-01-31T08:01:55Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:f7:dd 10.10.10.23
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.054 247708 DEBUG nova.network.neutron [req-69b5dc52-126f-4eb1-ba1f-b570ff4fa8f2 req-06f766e4-7dcf-4f3d-b9dc-a3d5f5a93e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updated VIF entry in instance network info cache for port 6dcdaf78-571b-42ba-bc17-7f6217ee6587. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.055 247708 DEBUG nova.network.neutron [req-69b5dc52-126f-4eb1-ba1f-b570ff4fa8f2 req-06f766e4-7dcf-4f3d-b9dc-a3d5f5a93e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.111 247708 DEBUG oslo_concurrency.lockutils [req-69b5dc52-126f-4eb1-ba1f-b570ff4fa8f2 req-06f766e4-7dcf-4f3d-b9dc-a3d5f5a93e99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.182 247708 DEBUG oslo_concurrency.lockutils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.182 247708 DEBUG oslo_concurrency.lockutils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.226 247708 DEBUG nova.objects.instance [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'flavor' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:56 compute-0 podman[309109]: 2026-01-31 08:01:56.151636835 +0000 UTC m=+0.021880963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:01:56 compute-0 podman[309109]: 2026-01-31 08:01:56.251049507 +0000 UTC m=+0.121293645 container create 8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lichterman, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.283 247708 DEBUG oslo_concurrency.lockutils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:56 compute-0 systemd[1]: Started libpod-conmon-8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2.scope.
Jan 31 08:01:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:56 compute-0 podman[309109]: 2026-01-31 08:01:56.570784057 +0000 UTC m=+0.441028185 container init 8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:01:56 compute-0 podman[309109]: 2026-01-31 08:01:56.583300512 +0000 UTC m=+0.453544630 container start 8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lichterman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:01:56 compute-0 funny_lichterman[309126]: 167 167
Jan 31 08:01:56 compute-0 systemd[1]: libpod-8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2.scope: Deactivated successfully.
Jan 31 08:01:56 compute-0 ceph-mon[74496]: pgmap v1940: 305 pgs: 305 active+clean; 272 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 156 op/s
Jan 31 08:01:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:01:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:01:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:01:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:01:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:01:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:01:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3520477777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3165637769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:56 compute-0 podman[309109]: 2026-01-31 08:01:56.760718863 +0000 UTC m=+0.630963011 container attach 8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 31 08:01:56 compute-0 podman[309109]: 2026-01-31 08:01:56.763178414 +0000 UTC m=+0.633422602 container died 8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.821 247708 DEBUG oslo_concurrency.lockutils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.823 247708 DEBUG oslo_concurrency.lockutils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:56 compute-0 nova_compute[247704]: 2026-01-31 08:01:56.824 247708 INFO nova.compute.manager [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Attaching volume 746b1dd4-a34a-454d-a628-2fc84c0145c9 to /dev/vdb
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.015 247708 DEBUG nova.network.neutron [req-2ff96ea4-2f05-45bf-9b10-60ab2d8f648c req-9430afd7-b514-498b-92b9-5c9763eedd7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updated VIF entry in instance network info cache for port a4bb4520-7491-4c40-8d8b-bf1044a7d29c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.017 247708 DEBUG nova.network.neutron [req-2ff96ea4-2f05-45bf-9b10-60ab2d8f648c req-9430afd7-b514-498b-92b9-5c9763eedd7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.047 247708 DEBUG oslo_concurrency.lockutils [req-2ff96ea4-2f05-45bf-9b10-60ab2d8f648c req-9430afd7-b514-498b-92b9-5c9763eedd7e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.079 247708 DEBUG os_brick.utils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.081 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.100 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.101 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[2e004014-7fcb-4599-b812-e7b71c2b9aef]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.103 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.112 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.112 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[80e84b5a-aca7-4f73-8e00-aa619ff927cf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.114 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.122 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.122 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[dda4f249-886f-4f62-9755-23d1e95172d5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.123 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[79e39e54-d12e-47f3-a545-404a83cd5a1f]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.124 247708 DEBUG oslo_concurrency.processutils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.154 247708 DEBUG oslo_concurrency.processutils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.156 247708 DEBUG os_brick.initiator.connectors.lightos [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.157 247708 DEBUG os_brick.initiator.connectors.lightos [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.157 247708 DEBUG os_brick.initiator.connectors.lightos [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.158 247708 DEBUG os_brick.utils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.158 247708 DEBUG nova.virt.block_device [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating existing volume attachment record: 13f4ec6b-044a-427f-b078-a9d63d3a5753 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.166 247708 DEBUG nova.compute.manager [req-9d8dbe6c-d3da-40ae-8d29-a01f041e40c9 req-6d8cc58b-f9ce-44e7-8f05-04e18235462a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.166 247708 DEBUG oslo_concurrency.lockutils [req-9d8dbe6c-d3da-40ae-8d29-a01f041e40c9 req-6d8cc58b-f9ce-44e7-8f05-04e18235462a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.167 247708 DEBUG oslo_concurrency.lockutils [req-9d8dbe6c-d3da-40ae-8d29-a01f041e40c9 req-6d8cc58b-f9ce-44e7-8f05-04e18235462a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.168 247708 DEBUG oslo_concurrency.lockutils [req-9d8dbe6c-d3da-40ae-8d29-a01f041e40c9 req-6d8cc58b-f9ce-44e7-8f05-04e18235462a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.168 247708 DEBUG nova.compute.manager [req-9d8dbe6c-d3da-40ae-8d29-a01f041e40c9 req-6d8cc58b-f9ce-44e7-8f05-04e18235462a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] No waiting events found dispatching network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:01:57 compute-0 nova_compute[247704]: 2026-01-31 08:01:57.168 247708 WARNING nova.compute.manager [req-9d8dbe6c-d3da-40ae-8d29-a01f041e40c9 req-6d8cc58b-f9ce-44e7-8f05-04e18235462a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received unexpected event network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c for instance with vm_state active and task_state None.
Jan 31 08:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-33f4462ceaa89a6d099a17ea68236353361c1148988d2f056c25851e92cedcf1-merged.mount: Deactivated successfully.
Jan 31 08:01:57 compute-0 podman[309109]: 2026-01-31 08:01:57.221535601 +0000 UTC m=+1.091779699 container remove 8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:01:57 compute-0 systemd[1]: libpod-conmon-8d6cde3bba5f0d7082815cac5c64f6c6d8ad58fee46021a7f564bc6f35b818e2.scope: Deactivated successfully.
Jan 31 08:01:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 293 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 31 08:01:57 compute-0 podman[309158]: 2026-01-31 08:01:57.377713475 +0000 UTC m=+0.035291381 container create 22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:01:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:57.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:57 compute-0 systemd[1]: Started libpod-conmon-22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73.scope.
Jan 31 08:01:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc691fbcbf586e7e93454f1b14af1056873f6a02beb73d78b7f60c7087c0c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc691fbcbf586e7e93454f1b14af1056873f6a02beb73d78b7f60c7087c0c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc691fbcbf586e7e93454f1b14af1056873f6a02beb73d78b7f60c7087c0c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc691fbcbf586e7e93454f1b14af1056873f6a02beb73d78b7f60c7087c0c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bedc691fbcbf586e7e93454f1b14af1056873f6a02beb73d78b7f60c7087c0c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:57 compute-0 podman[309158]: 2026-01-31 08:01:57.454020544 +0000 UTC m=+0.111598470 container init 22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:01:57 compute-0 podman[309158]: 2026-01-31 08:01:57.360501236 +0000 UTC m=+0.018079172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:01:57 compute-0 podman[309158]: 2026-01-31 08:01:57.464337165 +0000 UTC m=+0.121915071 container start 22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 08:01:57 compute-0 podman[309158]: 2026-01-31 08:01:57.467627135 +0000 UTC m=+0.125205071 container attach 22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:01:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:57.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3425632881' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:58 compute-0 stupefied_aryabhata[309176]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:01:58 compute-0 stupefied_aryabhata[309176]: --> relative data size: 1.0
Jan 31 08:01:58 compute-0 stupefied_aryabhata[309176]: --> All data devices are unavailable
Jan 31 08:01:58 compute-0 systemd[1]: libpod-22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73.scope: Deactivated successfully.
Jan 31 08:01:58 compute-0 podman[309158]: 2026-01-31 08:01:58.267462971 +0000 UTC m=+0.925040917 container died 22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:01:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bedc691fbcbf586e7e93454f1b14af1056873f6a02beb73d78b7f60c7087c0c1-merged.mount: Deactivated successfully.
Jan 31 08:01:58 compute-0 podman[309158]: 2026-01-31 08:01:58.324240744 +0000 UTC m=+0.981818650 container remove 22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_aryabhata, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:01:58 compute-0 systemd[1]: libpod-conmon-22eb8ab0229870520cec4558a23bff1cedbf3e1921543b98c2b29d2534b48b73.scope: Deactivated successfully.
Jan 31 08:01:58 compute-0 sudo[309044]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:58 compute-0 nova_compute[247704]: 2026-01-31 08:01:58.421 247708 DEBUG nova.objects.instance [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'flavor' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:01:58 compute-0 sudo[309203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:58 compute-0 sudo[309203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:58 compute-0 sudo[309203]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:58 compute-0 nova_compute[247704]: 2026-01-31 08:01:58.455 247708 DEBUG nova.virt.libvirt.driver [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Attempting to attach volume 746b1dd4-a34a-454d-a628-2fc84c0145c9 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:01:58 compute-0 nova_compute[247704]: 2026-01-31 08:01:58.460 247708 DEBUG nova.virt.libvirt.guest [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:01:58 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:01:58 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-746b1dd4-a34a-454d-a628-2fc84c0145c9">
Jan 31 08:01:58 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:01:58 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:01:58 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:01:58 compute-0 nova_compute[247704]:   </source>
Jan 31 08:01:58 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:01:58 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:01:58 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:01:58 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:01:58 compute-0 nova_compute[247704]:   <serial>746b1dd4-a34a-454d-a628-2fc84c0145c9</serial>
Jan 31 08:01:58 compute-0 nova_compute[247704]: </disk>
Jan 31 08:01:58 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:01:58 compute-0 sudo[309228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:01:58 compute-0 sudo[309228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:58 compute-0 sudo[309228]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:58 compute-0 sudo[309262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:01:58 compute-0 sudo[309262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:58 compute-0 sudo[309262]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:58 compute-0 sudo[309298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:01:58 compute-0 sudo[309298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:01:58 compute-0 nova_compute[247704]: 2026-01-31 08:01:58.629 247708 DEBUG nova.virt.libvirt.driver [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:58 compute-0 nova_compute[247704]: 2026-01-31 08:01:58.631 247708 DEBUG nova.virt.libvirt.driver [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:01:58 compute-0 nova_compute[247704]: 2026-01-31 08:01:58.631 247708 DEBUG nova.virt.libvirt.driver [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] No VIF found with MAC fa:16:3e:ce:51:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:01:58 compute-0 ceph-mon[74496]: pgmap v1941: 305 pgs: 305 active+clean; 293 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 31 08:01:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1697821044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3381306407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:01:58 compute-0 podman[309364]: 2026-01-31 08:01:58.931868857 +0000 UTC m=+0.056698612 container create f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:58 compute-0 systemd[1]: Started libpod-conmon-f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627.scope.
Jan 31 08:01:58 compute-0 nova_compute[247704]: 2026-01-31 08:01:58.982 247708 DEBUG oslo_concurrency.lockutils [None req-219fb1bc-ea24-48b8-809b-b35852113fa6 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:01:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:58 compute-0 podman[309364]: 2026-01-31 08:01:58.90985134 +0000 UTC m=+0.034681135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:01:59 compute-0 podman[309364]: 2026-01-31 08:01:59.002236061 +0000 UTC m=+0.127065836 container init f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:01:59 compute-0 podman[309364]: 2026-01-31 08:01:59.007707944 +0000 UTC m=+0.132537689 container start f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:01:59 compute-0 boring_wilson[309381]: 167 167
Jan 31 08:01:59 compute-0 systemd[1]: libpod-f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627.scope: Deactivated successfully.
Jan 31 08:01:59 compute-0 conmon[309381]: conmon f08931533394d66661e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627.scope/container/memory.events
Jan 31 08:01:59 compute-0 podman[309364]: 2026-01-31 08:01:59.012481551 +0000 UTC m=+0.137311406 container attach f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 08:01:59 compute-0 podman[309364]: 2026-01-31 08:01:59.013005544 +0000 UTC m=+0.137835349 container died f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:01:59 compute-0 nova_compute[247704]: 2026-01-31 08:01:59.047 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-86fbf6da47aa14df3be2f7e598e8c2abeeee50648d85d790634fdc701d53b848-merged.mount: Deactivated successfully.
Jan 31 08:01:59 compute-0 podman[309364]: 2026-01-31 08:01:59.067246365 +0000 UTC m=+0.192076120 container remove f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wilson, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 08:01:59 compute-0 systemd[1]: libpod-conmon-f08931533394d66661e90027db8e47a32ad93971ce640c671e654ebf33022627.scope: Deactivated successfully.
Jan 31 08:01:59 compute-0 podman[309405]: 2026-01-31 08:01:59.218577212 +0000 UTC m=+0.046763621 container create 9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:01:59 compute-0 systemd[1]: Started libpod-conmon-9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9.scope.
Jan 31 08:01:59 compute-0 podman[309405]: 2026-01-31 08:01:59.196677098 +0000 UTC m=+0.024863497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:01:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ffd7c7bdba5dbdd0320e08bc261f3b8fe792b57be3e172f7687412e8d12751/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ffd7c7bdba5dbdd0320e08bc261f3b8fe792b57be3e172f7687412e8d12751/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ffd7c7bdba5dbdd0320e08bc261f3b8fe792b57be3e172f7687412e8d12751/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ffd7c7bdba5dbdd0320e08bc261f3b8fe792b57be3e172f7687412e8d12751/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:01:59 compute-0 podman[309405]: 2026-01-31 08:01:59.321760576 +0000 UTC m=+0.149947045 container init 9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:01:59 compute-0 podman[309405]: 2026-01-31 08:01:59.327702391 +0000 UTC m=+0.155888800 container start 9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:01:59 compute-0 podman[309405]: 2026-01-31 08:01:59.332034746 +0000 UTC m=+0.160221205 container attach 9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:01:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 293 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Jan 31 08:01:59 compute-0 nova_compute[247704]: 2026-01-31 08:01:59.408 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:01:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:59.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:01:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:01:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:59.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:01:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:59.833 160292 DEBUG eventlet.wsgi.server [-] (160292) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:01:59.834 160292 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: Accept: */*
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: Connection: close
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: Content-Type: text/plain
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: Host: 169.254.169.254
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: User-Agent: curl/7.84.0
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: X-Forwarded-For: 10.100.0.9
Jan 31 08:01:59 compute-0 ovn_metadata_agent[160021]: X-Ovn-Network-Id: 5c660d8f-9cd6-4091-b9a9-e007762da7fa __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]: {
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:     "0": [
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:         {
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "devices": [
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "/dev/loop3"
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             ],
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "lv_name": "ceph_lv0",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "lv_size": "7511998464",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "name": "ceph_lv0",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "tags": {
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.cluster_name": "ceph",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.crush_device_class": "",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.encrypted": "0",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.osd_id": "0",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.type": "block",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:                 "ceph.vdo": "0"
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             },
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "type": "block",
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:             "vg_name": "ceph_vg0"
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:         }
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]:     ]
Jan 31 08:02:00 compute-0 hungry_wilbur[309422]: }
Jan 31 08:02:00 compute-0 systemd[1]: libpod-9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9.scope: Deactivated successfully.
Jan 31 08:02:00 compute-0 conmon[309422]: conmon 9ecd9668814c95d85fb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9.scope/container/memory.events
Jan 31 08:02:00 compute-0 podman[309405]: 2026-01-31 08:02:00.135537871 +0000 UTC m=+0.963724240 container died 9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:02:00 compute-0 sudo[309443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:00 compute-0 sudo[309443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:00 compute-0 sudo[309443]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-0 sudo[309468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:00 compute-0 sudo[309468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:00 compute-0 sudo[309468]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-0 ceph-mon[74496]: pgmap v1942: 305 pgs: 305 active+clean; 293 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Jan 31 08:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5ffd7c7bdba5dbdd0320e08bc261f3b8fe792b57be3e172f7687412e8d12751-merged.mount: Deactivated successfully.
Jan 31 08:02:00 compute-0 podman[309405]: 2026-01-31 08:02:00.380479118 +0000 UTC m=+1.208665497 container remove 9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:02:00 compute-0 systemd[1]: libpod-conmon-9ecd9668814c95d85fb525ccb4459122fd23b5c66d62a9fb5f118f37b2bcc2a9.scope: Deactivated successfully.
Jan 31 08:02:00 compute-0 sudo[309298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-0 sudo[309494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:00 compute-0 sudo[309494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:00 compute-0 sudo[309494]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-0 sudo[309519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:02:00 compute-0 sudo[309519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:00 compute-0 sudo[309519]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-0 sudo[309544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:00 compute-0 sudo[309544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:00 compute-0 sudo[309544]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-0 sudo[309569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:02:00 compute-0 sudo[309569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:00 compute-0 podman[309634]: 2026-01-31 08:02:00.933791838 +0000 UTC m=+0.044655069 container create 1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:02:00 compute-0 systemd[1]: Started libpod-conmon-1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb.scope.
Jan 31 08:02:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:02:01 compute-0 podman[309634]: 2026-01-31 08:02:01.006280484 +0000 UTC m=+0.117143755 container init 1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:02:01 compute-0 podman[309634]: 2026-01-31 08:02:00.915331888 +0000 UTC m=+0.026195159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:02:01 compute-0 podman[309634]: 2026-01-31 08:02:01.013130041 +0000 UTC m=+0.123993252 container start 1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:02:01 compute-0 podman[309634]: 2026-01-31 08:02:01.017773174 +0000 UTC m=+0.128636435 container attach 1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:02:01 compute-0 laughing_newton[309650]: 167 167
Jan 31 08:02:01 compute-0 systemd[1]: libpod-1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb.scope: Deactivated successfully.
Jan 31 08:02:01 compute-0 podman[309634]: 2026-01-31 08:02:01.020364397 +0000 UTC m=+0.131227608 container died 1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1753b95d7bd17b580a4a31d08fc51f41e7aef49399c9bddc8dd9bbb3aebcc0b-merged.mount: Deactivated successfully.
Jan 31 08:02:01 compute-0 podman[309634]: 2026-01-31 08:02:01.068309245 +0000 UTC m=+0.179172496 container remove 1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_newton, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:02:01 compute-0 systemd[1]: libpod-conmon-1e8155e63ce55047df9f8faddaf4815fbc1452c80fb76fabcf5d1ec6a7be69cb.scope: Deactivated successfully.
Jan 31 08:02:01 compute-0 podman[309673]: 2026-01-31 08:02:01.26064864 +0000 UTC m=+0.052867728 container create a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:02:01 compute-0 systemd[1]: Started libpod-conmon-a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3.scope.
Jan 31 08:02:01 compute-0 podman[309673]: 2026-01-31 08:02:01.22777782 +0000 UTC m=+0.019996908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:02:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf551ef2e6f8abaadb7af35c456d67e237d2695cf331616026b3f21b3093ac00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf551ef2e6f8abaadb7af35c456d67e237d2695cf331616026b3f21b3093ac00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf551ef2e6f8abaadb7af35c456d67e237d2695cf331616026b3f21b3093ac00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf551ef2e6f8abaadb7af35c456d67e237d2695cf331616026b3f21b3093ac00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:02:01 compute-0 podman[309673]: 2026-01-31 08:02:01.359468478 +0000 UTC m=+0.151687566 container init a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:02:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 307 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 31 08:02:01 compute-0 podman[309673]: 2026-01-31 08:02:01.366751265 +0000 UTC m=+0.158970333 container start a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:02:01 compute-0 podman[309673]: 2026-01-31 08:02:01.371149472 +0000 UTC m=+0.163368570 container attach a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:02:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:01.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:01 compute-0 ovn_controller[149457]: 2026-01-31T08:02:01Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:23:9c 10.100.0.11
Jan 31 08:02:01 compute-0 ovn_controller[149457]: 2026-01-31T08:02:01Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:23:9c 10.100.0.11
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]: {
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]:         "osd_id": 0,
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]:         "type": "bluestore"
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]:     }
Jan 31 08:02:02 compute-0 priceless_meninsky[309689]: }
Jan 31 08:02:02 compute-0 systemd[1]: libpod-a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3.scope: Deactivated successfully.
Jan 31 08:02:02 compute-0 podman[309673]: 2026-01-31 08:02:02.236108695 +0000 UTC m=+1.028327803 container died a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf551ef2e6f8abaadb7af35c456d67e237d2695cf331616026b3f21b3093ac00-merged.mount: Deactivated successfully.
Jan 31 08:02:02 compute-0 podman[309673]: 2026-01-31 08:02:02.299787076 +0000 UTC m=+1.092006154 container remove a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_meninsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:02:02 compute-0 systemd[1]: libpod-conmon-a0594ab8a88ffcb8ec61169b8bef26d5ceb2dd7c5885d6333a08a22b43fc37e3.scope: Deactivated successfully.
Jan 31 08:02:02 compute-0 sudo[309569]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:02:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:02.360 160292 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 31 08:02:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:02.361 160292 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1916 time: 2.5268085
Jan 31 08:02:02 compute-0 haproxy-metadata-proxy-5c660d8f-9cd6-4091-b9a9-e007762da7fa[308029]: 10.100.0.9:41352 [31/Jan/2026:08:01:59.831] listener listener/metadata 0/0/0/2529/2529 200 1900 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 31 08:02:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:02:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:02:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:02:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 465f848f-8ca2-47b9-8d3d-249bfdf044ee does not exist
Jan 31 08:02:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 98e663f3-cb0c-4340-bd2a-aa9b4d5d7fdb does not exist
Jan 31 08:02:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2f10897b-72e0-4617-9774-8b578191a679 does not exist
Jan 31 08:02:02 compute-0 ceph-mon[74496]: pgmap v1943: 305 pgs: 305 active+clean; 307 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 31 08:02:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:02:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:02:02 compute-0 sudo[309725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:02 compute-0 sudo[309725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:02 compute-0 sudo[309725]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:02 compute-0 sudo[309750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:02:02 compute-0 sudo[309750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:02 compute-0 sudo[309750]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.281 247708 DEBUG oslo_concurrency.lockutils [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.282 247708 DEBUG oslo_concurrency.lockutils [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.301 247708 INFO nova.compute.manager [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Detaching volume 746b1dd4-a34a-454d-a628-2fc84c0145c9
Jan 31 08:02:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 309 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 167 op/s
Jan 31 08:02:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:03.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:03.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.634 247708 INFO nova.virt.block_device [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Attempting to driver detach volume 746b1dd4-a34a-454d-a628-2fc84c0145c9 from mountpoint /dev/vdb
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.649 247708 DEBUG nova.virt.libvirt.driver [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Attempting to detach device vdb from instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.650 247708 DEBUG nova.virt.libvirt.guest [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-746b1dd4-a34a-454d-a628-2fc84c0145c9">
Jan 31 08:02:03 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   </source>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <serial>746b1dd4-a34a-454d-a628-2fc84c0145c9</serial>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]: </disk>
Jan 31 08:02:03 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.661 247708 INFO nova.virt.libvirt.driver [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Successfully detached device vdb from instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb from the persistent domain config.
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.662 247708 DEBUG nova.virt.libvirt.driver [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.662 247708 DEBUG nova.virt.libvirt.guest [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-746b1dd4-a34a-454d-a628-2fc84c0145c9">
Jan 31 08:02:03 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   </source>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <serial>746b1dd4-a34a-454d-a628-2fc84c0145c9</serial>
Jan 31 08:02:03 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 08:02:03 compute-0 nova_compute[247704]: </disk>
Jan 31 08:02:03 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.798 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769846523.7978463, cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.801 247708 DEBUG nova.virt.libvirt.driver [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:02:03 compute-0 nova_compute[247704]: 2026-01-31 08:02:03.805 247708 INFO nova.virt.libvirt.driver [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Successfully detached device vdb from instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb from the live domain config.
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.050 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.410 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:04 compute-0 ceph-mon[74496]: pgmap v1944: 305 pgs: 305 active+clean; 309 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 167 op/s
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.446 247708 DEBUG nova.objects.instance [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'flavor' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.575 247708 DEBUG oslo_concurrency.lockutils [None req-f2f1574a-f03c-40b2-afdb-57b13a04fa0b d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.797 247708 DEBUG oslo_concurrency.lockutils [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "interface-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-a4bb4520-7491-4c40-8d8b-bf1044a7d29c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.798 247708 DEBUG oslo_concurrency.lockutils [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "interface-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-a4bb4520-7491-4c40-8d8b-bf1044a7d29c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.838 247708 DEBUG nova.objects.instance [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'flavor' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.875 247708 DEBUG nova.virt.libvirt.vif [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.876 247708 DEBUG nova.network.os_vif_util [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converting VIF {"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.878 247708 DEBUG nova.network.os_vif_util [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.883 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.888 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.892 247708 DEBUG nova.virt.libvirt.driver [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Attempting to detach device tapa4bb4520-74 from instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.894 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] detach device xml: <interface type="ethernet">
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <mac address="fa:16:3e:d7:f7:dd"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <model type="virtio"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <mtu size="1442"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <target dev="tapa4bb4520-74"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]: </interface>
Jan 31 08:02:04 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.903 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.910 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface>not found in domain: <domain type='kvm' id='37'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <name>instance-0000005a</name>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <uuid>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</uuid>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <nova:name>tempest-device-tagging-server-1787730133</nova:name>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <nova:creationTime>2026-01-31 08:01:54</nova:creationTime>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <nova:flavor name="m1.nano">
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:memory>128</nova:memory>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:disk>1</nova:disk>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:swap>0</nova:swap>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:vcpus>1</nova:vcpus>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </nova:flavor>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <nova:owner>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:user uuid="d454a4ec91e645e992f4a88ac60da747">tempest-TaggedAttachmentsTest-1028778083-project-member</nova:user>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:project uuid="2ab30fd72657435fb442fc59a53da644">tempest-TaggedAttachmentsTest-1028778083</nova:project>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </nova:owner>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <nova:ports>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:port uuid="f0a0388d-0f3d-4eb7-a195-234dd517f99b">
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <nova:port uuid="a4bb4520-7491-4c40-8d8b-bf1044a7d29c">
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.10.10.23" ipVersion="4"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </nova:ports>
Jan 31 08:02:04 compute-0 nova_compute[247704]: </nova:instance>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <memory unit='KiB'>131072</memory>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <vcpu placement='static'>1</vcpu>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <resource>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <partition>/machine</partition>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </resource>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <sysinfo type='smbios'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <system>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <entry name='manufacturer'>RDO</entry>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <entry name='serial'>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <entry name='uuid'>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <entry name='family'>Virtual Machine</entry>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </system>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <os>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <boot dev='hd'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <smbios mode='sysinfo'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </os>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <features>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <vmcoreinfo state='on'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </features>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <cpu mode='custom' match='exact' check='full'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <model fallback='forbid'>Nehalem</model>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <feature policy='require' name='x2apic'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <feature policy='require' name='hypervisor'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <feature policy='require' name='vme'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <clock offset='utc'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <timer name='hpet' present='no'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <on_poweroff>destroy</on_poweroff>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <on_reboot>restart</on_reboot>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <on_crash>destroy</on_crash>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <disk type='network' device='disk'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <auth username='openstack'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <source protocol='rbd' name='vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk' index='2'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <host name='192.168.122.100' port='6789'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <host name='192.168.122.102' port='6789'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <host name='192.168.122.101' port='6789'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       </source>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target dev='vda' bus='virtio'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='virtio-disk0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <disk type='network' device='cdrom'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <auth username='openstack'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <source protocol='rbd' name='vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config' index='1'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <host name='192.168.122.100' port='6789'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <host name='192.168.122.102' port='6789'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <host name='192.168.122.101' port='6789'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       </source>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target dev='sda' bus='sata'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <readonly/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='sata0-0-0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='0' model='pcie-root'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pcie.0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='1' port='0x10'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.1'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='2' port='0x11'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.2'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='3' port='0x12'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.3'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='4' port='0x13'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.4'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='5' port='0x14'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.5'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='6' port='0x15'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.6'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='7' port='0x16'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.7'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='8' port='0x17'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.8'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='9' port='0x18'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.9'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='10' port='0x19'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.10'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='11' port='0x1a'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.11'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='12' port='0x1b'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.12'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='13' port='0x1c'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.13'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='14' port='0x1d'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.14'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='15' port='0x1e'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.15'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='16' port='0x1f'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.16'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='17' port='0x20'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.17'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='18' port='0x21'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.18'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='19' port='0x22'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.19'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='20' port='0x23'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.20'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='21' port='0x24'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.21'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='22' port='0x25'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.22'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='23' port='0x26'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.23'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='24' port='0x27'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.24'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target chassis='25' port='0x28'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.25'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model name='pcie-pci-bridge'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='pci.26'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='usb'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <controller type='sata' index='0'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='ide'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <interface type='ethernet'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <mac address='fa:16:3e:ce:51:d8'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target dev='tapf0a0388d-0f'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model type='virtio'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <mtu size='1442'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='net0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <interface type='ethernet'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <mac address='fa:16:3e:d7:f7:dd'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target dev='tapa4bb4520-74'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model type='virtio'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <mtu size='1442'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='net1'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <serial type='pty'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <source path='/dev/pts/0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <log file='/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log' append='off'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target type='isa-serial' port='0'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:         <model name='isa-serial'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       </target>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='serial0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <console type='pty' tty='/dev/pts/0'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <source path='/dev/pts/0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <log file='/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log' append='off'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <target type='serial' port='0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='serial0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </console>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <input type='tablet' bus='usb'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='input0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='usb' bus='0' port='1'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <input type='mouse' bus='ps2'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='input1'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <input type='keyboard' bus='ps2'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='input2'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <listen type='address' address='::0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </graphics>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <audio id='1' type='none'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <video>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='video0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </video>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <watchdog model='itco' action='reset'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='watchdog0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </watchdog>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <memballoon model='virtio'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <stats period='10'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='balloon0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <rng model='virtio'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <backend model='random'>/dev/urandom</backend>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <alias name='rng0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <label>system_u:system_r:svirt_t:s0:c171,c865</label>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c171,c865</imagelabel>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </seclabel>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <label>+107:+107</label>
Jan 31 08:02:04 compute-0 nova_compute[247704]:     <imagelabel>+107:+107</imagelabel>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   </seclabel>
Jan 31 08:02:04 compute-0 nova_compute[247704]: </domain>
Jan 31 08:02:04 compute-0 nova_compute[247704]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.912 247708 INFO nova.virt.libvirt.driver [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Successfully detached device tapa4bb4520-74 from instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb from the persistent domain config.
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.912 247708 DEBUG nova.virt.libvirt.driver [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] (1/8): Attempting to detach device tapa4bb4520-74 with device alias net1 from instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:02:04 compute-0 nova_compute[247704]: 2026-01-31 08:02:04.913 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] detach device xml: <interface type="ethernet">
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <mac address="fa:16:3e:d7:f7:dd"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <model type="virtio"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <mtu size="1442"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]:   <target dev="tapa4bb4520-74"/>
Jan 31 08:02:04 compute-0 nova_compute[247704]: </interface>
Jan 31 08:02:04 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:02:05 compute-0 kernel: tapa4bb4520-74 (unregistering): left promiscuous mode
Jan 31 08:02:05 compute-0 NetworkManager[49108]: <info>  [1769846525.0366] device (tapa4bb4520-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.042 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-0 ovn_controller[149457]: 2026-01-31T08:02:05Z|00375|binding|INFO|Releasing lport a4bb4520-7491-4c40-8d8b-bf1044a7d29c from this chassis (sb_readonly=0)
Jan 31 08:02:05 compute-0 ovn_controller[149457]: 2026-01-31T08:02:05Z|00376|binding|INFO|Setting lport a4bb4520-7491-4c40-8d8b-bf1044a7d29c down in Southbound
Jan 31 08:02:05 compute-0 ovn_controller[149457]: 2026-01-31T08:02:05Z|00377|binding|INFO|Removing iface tapa4bb4520-74 ovn-installed in OVS
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.044 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.056 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.057 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769846525.0568643, cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.060 247708 DEBUG nova.virt.libvirt.driver [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Start waiting for the detach event from libvirt for device tapa4bb4520-74 with device alias net1 for instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.061 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.062 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:f7:dd 10.10.10.23'], port_security=['fa:16:3e:d7:f7:dd 10.10.10.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.23/24', 'neutron:device_id': 'cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ab30fd72657435fb442fc59a53da644', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4581686a-bc33-40dd-9725-667831be9a62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76275e80-3a87-4424-b971-4a6b4136f99e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a4bb4520-7491-4c40-8d8b-bf1044a7d29c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.063 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a4bb4520-7491-4c40-8d8b-bf1044a7d29c in datapath 6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a unbound from our chassis
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.065 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.067 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[383393a6-1b0b-474a-ba8c-797a1ed785bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.068 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a namespace which is not needed anymore
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.066 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface>not found in domain: <domain type='kvm' id='37'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <name>instance-0000005a</name>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <uuid>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</uuid>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:name>tempest-device-tagging-server-1787730133</nova:name>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:creationTime>2026-01-31 08:01:54</nova:creationTime>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:flavor name="m1.nano">
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:memory>128</nova:memory>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:disk>1</nova:disk>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:swap>0</nova:swap>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:vcpus>1</nova:vcpus>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </nova:flavor>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:owner>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:user uuid="d454a4ec91e645e992f4a88ac60da747">tempest-TaggedAttachmentsTest-1028778083-project-member</nova:user>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:project uuid="2ab30fd72657435fb442fc59a53da644">tempest-TaggedAttachmentsTest-1028778083</nova:project>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </nova:owner>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:ports>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:port uuid="f0a0388d-0f3d-4eb7-a195-234dd517f99b">
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:port uuid="a4bb4520-7491-4c40-8d8b-bf1044a7d29c">
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.10.10.23" ipVersion="4"/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </nova:ports>
Jan 31 08:02:05 compute-0 nova_compute[247704]: </nova:instance>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <memory unit='KiB'>131072</memory>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <vcpu placement='static'>1</vcpu>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <resource>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <partition>/machine</partition>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </resource>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <sysinfo type='smbios'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <system>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <entry name='manufacturer'>RDO</entry>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <entry name='serial'>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <entry name='uuid'>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <entry name='family'>Virtual Machine</entry>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </system>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <os>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <boot dev='hd'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <smbios mode='sysinfo'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </os>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <features>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <vmcoreinfo state='on'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </features>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <cpu mode='custom' match='exact' check='full'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <model fallback='forbid'>Nehalem</model>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <feature policy='require' name='x2apic'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <feature policy='require' name='hypervisor'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <feature policy='require' name='vme'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <clock offset='utc'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <timer name='hpet' present='no'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <on_poweroff>destroy</on_poweroff>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <on_reboot>restart</on_reboot>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <on_crash>destroy</on_crash>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <disk type='network' device='disk'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <auth username='openstack'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <source protocol='rbd' name='vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk' index='2'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <host name='192.168.122.100' port='6789'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <host name='192.168.122.102' port='6789'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <host name='192.168.122.101' port='6789'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       </source>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target dev='vda' bus='virtio'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='virtio-disk0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <disk type='network' device='cdrom'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <auth username='openstack'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <source protocol='rbd' name='vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config' index='1'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <host name='192.168.122.100' port='6789'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <host name='192.168.122.102' port='6789'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <host name='192.168.122.101' port='6789'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       </source>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target dev='sda' bus='sata'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <readonly/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='sata0-0-0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='0' model='pcie-root'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pcie.0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='1' port='0x10'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.1'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='2' port='0x11'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.2'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='3' port='0x12'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.3'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='4' port='0x13'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.4'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='5' port='0x14'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.5'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='6' port='0x15'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.6'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='7' port='0x16'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.7'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='8' port='0x17'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.8'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='9' port='0x18'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.9'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='10' port='0x19'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.10'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='11' port='0x1a'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.11'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='12' port='0x1b'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.12'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='13' port='0x1c'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.13'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='14' port='0x1d'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.14'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='15' port='0x1e'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.15'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='16' port='0x1f'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.16'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='17' port='0x20'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.17'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='18' port='0x21'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.18'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='19' port='0x22'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.19'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='20' port='0x23'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.20'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='21' port='0x24'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.21'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='22' port='0x25'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.22'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='23' port='0x26'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.23'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='24' port='0x27'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.24'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target chassis='25' port='0x28'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.25'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model name='pcie-pci-bridge'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='pci.26'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='usb'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <controller type='sata' index='0'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='ide'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <interface type='ethernet'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <mac address='fa:16:3e:ce:51:d8'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target dev='tapf0a0388d-0f'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model type='virtio'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <mtu size='1442'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='net0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <serial type='pty'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <source path='/dev/pts/0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <log file='/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log' append='off'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target type='isa-serial' port='0'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:         <model name='isa-serial'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       </target>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='serial0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <console type='pty' tty='/dev/pts/0'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <source path='/dev/pts/0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <log file='/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log' append='off'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <target type='serial' port='0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='serial0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </console>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <input type='tablet' bus='usb'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='input0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='usb' bus='0' port='1'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <input type='mouse' bus='ps2'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='input1'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <input type='keyboard' bus='ps2'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='input2'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <listen type='address' address='::0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </graphics>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <audio id='1' type='none'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <video>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='video0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </video>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <watchdog model='itco' action='reset'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='watchdog0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </watchdog>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <memballoon model='virtio'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <stats period='10'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='balloon0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <rng model='virtio'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <backend model='random'>/dev/urandom</backend>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <alias name='rng0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <label>system_u:system_r:svirt_t:s0:c171,c865</label>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c171,c865</imagelabel>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </seclabel>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <label>+107:+107</label>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <imagelabel>+107:+107</imagelabel>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </seclabel>
Jan 31 08:02:05 compute-0 nova_compute[247704]: </domain>
Jan 31 08:02:05 compute-0 nova_compute[247704]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.067 247708 INFO nova.virt.libvirt.driver [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Successfully detached device tapa4bb4520-74 from instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb from the live domain config.
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.068 247708 DEBUG nova.virt.libvirt.vif [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.068 247708 DEBUG nova.network.os_vif_util [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converting VIF {"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.069 247708 DEBUG nova.network.os_vif_util [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.069 247708 DEBUG os_vif [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.071 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.072 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4bb4520-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.076 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.077 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.080 247708 INFO os_vif [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74')
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.081 247708 DEBUG nova.virt.libvirt.guest [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:name>tempest-device-tagging-server-1787730133</nova:name>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:creationTime>2026-01-31 08:02:05</nova:creationTime>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:flavor name="m1.nano">
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:memory>128</nova:memory>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:disk>1</nova:disk>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:swap>0</nova:swap>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:vcpus>1</nova:vcpus>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </nova:flavor>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:owner>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:user uuid="d454a4ec91e645e992f4a88ac60da747">tempest-TaggedAttachmentsTest-1028778083-project-member</nova:user>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:project uuid="2ab30fd72657435fb442fc59a53da644">tempest-TaggedAttachmentsTest-1028778083</nova:project>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </nova:owner>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   <nova:ports>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     <nova:port uuid="f0a0388d-0f3d-4eb7-a195-234dd517f99b">
Jan 31 08:02:05 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:02:05 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:02:05 compute-0 nova_compute[247704]:   </nova:ports>
Jan 31 08:02:05 compute-0 nova_compute[247704]: </nova:instance>
Jan 31 08:02:05 compute-0 nova_compute[247704]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 08:02:05 compute-0 podman[309780]: 2026-01-31 08:02:05.174907209 +0000 UTC m=+0.102227432 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:02:05 compute-0 neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a[308922]: [NOTICE]   (308927) : haproxy version is 2.8.14-c23fe91
Jan 31 08:02:05 compute-0 neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a[308922]: [NOTICE]   (308927) : path to executable is /usr/sbin/haproxy
Jan 31 08:02:05 compute-0 neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a[308922]: [WARNING]  (308927) : Exiting Master process...
Jan 31 08:02:05 compute-0 neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a[308922]: [WARNING]  (308927) : Exiting Master process...
Jan 31 08:02:05 compute-0 neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a[308922]: [ALERT]    (308927) : Current worker (308929) exited with code 143 (Terminated)
Jan 31 08:02:05 compute-0 neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a[308922]: [WARNING]  (308927) : All workers exited. Exiting... (0)
Jan 31 08:02:05 compute-0 systemd[1]: libpod-3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731.scope: Deactivated successfully.
Jan 31 08:02:05 compute-0 podman[309826]: 2026-01-31 08:02:05.225936622 +0000 UTC m=+0.050799939 container died 3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 08:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731-userdata-shm.mount: Deactivated successfully.
Jan 31 08:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad01de360f1b85e79093689cf0235ba39d4fa7997d1f4aeb1141cb811ce2ee88-merged.mount: Deactivated successfully.
Jan 31 08:02:05 compute-0 podman[309826]: 2026-01-31 08:02:05.270513558 +0000 UTC m=+0.095376845 container cleanup 3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:02:05 compute-0 systemd[1]: libpod-conmon-3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731.scope: Deactivated successfully.
Jan 31 08:02:05 compute-0 podman[309859]: 2026-01-31 08:02:05.343827174 +0000 UTC m=+0.053345021 container remove 3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.350 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[70f47d03-641c-422b-af63-85495c53c6dc]: (4, ('Sat Jan 31 08:02:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a (3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731)\n3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731\nSat Jan 31 08:02:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a (3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731)\n3ffa0e89835f97eebbfd6866616f3177e5ca902e23a2b441808e7c88b43be731\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.353 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e27287-cdc6-473c-98c8-6fe01110ea1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.355 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6741c3f8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.357 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-0 kernel: tap6741c3f8-40: left promiscuous mode
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.363 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 320 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.369 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[25a0fed5-9fc6-49d7-b8c2-a8b7cb84f364]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.398 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f07d16c5-ffc8-4585-9908-bf857e695fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.400 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[19e6f9dc-aadf-4f17-88ff-9b74cfa7a505]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.413 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fb4dde-e8e4-4910-958f-33a18faf9a55]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675717, 'reachable_time': 37466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309874, 'error': None, 'target': 'ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.416 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:02:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:05.416 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd78b45-ccc6-4095-9392-15487784dea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d6741c3f8\x2d4cb7\x2d4a56\x2dbc6c\x2d70a0d9fdd36a.mount: Deactivated successfully.
Jan 31 08:02:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:05.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:05.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.688 247708 DEBUG nova.compute.manager [req-86a300e2-a925-4f10-a55d-f825873598af req-468bec46-42ab-401a-9af9-7055276d39f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-unplugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.689 247708 DEBUG oslo_concurrency.lockutils [req-86a300e2-a925-4f10-a55d-f825873598af req-468bec46-42ab-401a-9af9-7055276d39f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.689 247708 DEBUG oslo_concurrency.lockutils [req-86a300e2-a925-4f10-a55d-f825873598af req-468bec46-42ab-401a-9af9-7055276d39f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.689 247708 DEBUG oslo_concurrency.lockutils [req-86a300e2-a925-4f10-a55d-f825873598af req-468bec46-42ab-401a-9af9-7055276d39f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.690 247708 DEBUG nova.compute.manager [req-86a300e2-a925-4f10-a55d-f825873598af req-468bec46-42ab-401a-9af9-7055276d39f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] No waiting events found dispatching network-vif-unplugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:05 compute-0 nova_compute[247704]: 2026-01-31 08:02:05.690 247708 WARNING nova.compute.manager [req-86a300e2-a925-4f10-a55d-f825873598af req-468bec46-42ab-401a-9af9-7055276d39f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received unexpected event network-vif-unplugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c for instance with vm_state active and task_state None.
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.019 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:06.020 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:06.022 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.147 247708 DEBUG oslo_concurrency.lockutils [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.148 247708 DEBUG oslo_concurrency.lockutils [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquired lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.149 247708 DEBUG nova.network.neutron [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.277 247708 DEBUG nova.compute.manager [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-deleted-a4bb4520-7491-4c40-8d8b-bf1044a7d29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.278 247708 INFO nova.compute.manager [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Neutron deleted interface a4bb4520-7491-4c40-8d8b-bf1044a7d29c; detaching it from the instance and deleting it from the info cache
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.278 247708 DEBUG nova.network.neutron [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.308 247708 DEBUG nova.objects.instance [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'system_metadata' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.349 247708 DEBUG nova.objects.instance [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lazy-loading 'flavor' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.386 247708 DEBUG nova.virt.libvirt.vif [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.387 247708 DEBUG nova.network.os_vif_util [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.388 247708 DEBUG nova.network.os_vif_util [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.392 247708 DEBUG nova.virt.libvirt.guest [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.395 247708 DEBUG nova.virt.libvirt.guest [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface>not found in domain: <domain type='kvm' id='37'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <name>instance-0000005a</name>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <uuid>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</uuid>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:name>tempest-device-tagging-server-1787730133</nova:name>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:creationTime>2026-01-31 08:02:05</nova:creationTime>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:flavor name="m1.nano">
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:memory>128</nova:memory>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:disk>1</nova:disk>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:swap>0</nova:swap>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:vcpus>1</nova:vcpus>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:flavor>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:owner>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:user uuid="d454a4ec91e645e992f4a88ac60da747">tempest-TaggedAttachmentsTest-1028778083-project-member</nova:user>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:project uuid="2ab30fd72657435fb442fc59a53da644">tempest-TaggedAttachmentsTest-1028778083</nova:project>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:owner>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:ports>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:port uuid="f0a0388d-0f3d-4eb7-a195-234dd517f99b">
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:ports>
Jan 31 08:02:06 compute-0 nova_compute[247704]: </nova:instance>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <memory unit='KiB'>131072</memory>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <vcpu placement='static'>1</vcpu>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <resource>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <partition>/machine</partition>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </resource>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <sysinfo type='smbios'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <system>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='manufacturer'>RDO</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='serial'>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='uuid'>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='family'>Virtual Machine</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </system>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <os>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <boot dev='hd'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <smbios mode='sysinfo'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </os>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <features>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <vmcoreinfo state='on'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </features>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <cpu mode='custom' match='exact' check='full'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <model fallback='forbid'>Nehalem</model>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <feature policy='require' name='x2apic'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <feature policy='require' name='hypervisor'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <feature policy='require' name='vme'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <clock offset='utc'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <timer name='hpet' present='no'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <on_poweroff>destroy</on_poweroff>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <on_reboot>restart</on_reboot>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <on_crash>destroy</on_crash>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <disk type='network' device='disk'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <auth username='openstack'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <source protocol='rbd' name='vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk' index='2'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.100' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.102' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.101' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </source>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target dev='vda' bus='virtio'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='virtio-disk0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <disk type='network' device='cdrom'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <auth username='openstack'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <source protocol='rbd' name='vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config' index='1'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.100' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.102' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.101' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </source>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target dev='sda' bus='sata'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <readonly/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='sata0-0-0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='0' model='pcie-root'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pcie.0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='1' port='0x10'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='2' port='0x11'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='3' port='0x12'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.3'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='4' port='0x13'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.4'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='5' port='0x14'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.5'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='6' port='0x15'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.6'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='7' port='0x16'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.7'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='8' port='0x17'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.8'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='9' port='0x18'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.9'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='10' port='0x19'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.10'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='11' port='0x1a'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.11'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='12' port='0x1b'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.12'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='13' port='0x1c'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.13'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='14' port='0x1d'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.14'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='15' port='0x1e'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.15'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='16' port='0x1f'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.16'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='17' port='0x20'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.17'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='18' port='0x21'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.18'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='19' port='0x22'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.19'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='20' port='0x23'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.20'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='21' port='0x24'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.21'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='22' port='0x25'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.22'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='23' port='0x26'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.23'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='24' port='0x27'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.24'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='25' port='0x28'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.25'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-pci-bridge'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.26'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='usb'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='sata' index='0'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='ide'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <interface type='ethernet'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <mac address='fa:16:3e:ce:51:d8'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target dev='tapf0a0388d-0f'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model type='virtio'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <mtu size='1442'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='net0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <serial type='pty'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <source path='/dev/pts/0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <log file='/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log' append='off'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target type='isa-serial' port='0'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <model name='isa-serial'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </target>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='serial0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <console type='pty' tty='/dev/pts/0'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <source path='/dev/pts/0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <log file='/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log' append='off'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target type='serial' port='0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='serial0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </console>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <input type='tablet' bus='usb'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='input0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='usb' bus='0' port='1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <input type='mouse' bus='ps2'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='input1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <input type='keyboard' bus='ps2'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='input2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <listen type='address' address='::0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </graphics>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <audio id='1' type='none'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <video>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='video0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </video>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <watchdog model='itco' action='reset'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='watchdog0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </watchdog>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <memballoon model='virtio'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <stats period='10'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='balloon0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <rng model='virtio'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <backend model='random'>/dev/urandom</backend>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='rng0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <label>system_u:system_r:svirt_t:s0:c171,c865</label>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c171,c865</imagelabel>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </seclabel>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <label>+107:+107</label>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <imagelabel>+107:+107</imagelabel>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </seclabel>
Jan 31 08:02:06 compute-0 nova_compute[247704]: </domain>
Jan 31 08:02:06 compute-0 nova_compute[247704]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.397 247708 DEBUG nova.virt.libvirt.guest [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.401 247708 DEBUG nova.virt.libvirt.guest [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:d7:f7:dd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa4bb4520-74"/></interface>not found in domain: <domain type='kvm' id='37'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <name>instance-0000005a</name>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <uuid>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</uuid>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:name>tempest-device-tagging-server-1787730133</nova:name>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:creationTime>2026-01-31 08:02:05</nova:creationTime>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:flavor name="m1.nano">
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:memory>128</nova:memory>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:disk>1</nova:disk>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:swap>0</nova:swap>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:vcpus>1</nova:vcpus>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:flavor>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:owner>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:user uuid="d454a4ec91e645e992f4a88ac60da747">tempest-TaggedAttachmentsTest-1028778083-project-member</nova:user>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:project uuid="2ab30fd72657435fb442fc59a53da644">tempest-TaggedAttachmentsTest-1028778083</nova:project>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:owner>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:ports>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:port uuid="f0a0388d-0f3d-4eb7-a195-234dd517f99b">
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:ports>
Jan 31 08:02:06 compute-0 nova_compute[247704]: </nova:instance>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <memory unit='KiB'>131072</memory>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <vcpu placement='static'>1</vcpu>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <resource>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <partition>/machine</partition>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </resource>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <sysinfo type='smbios'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <system>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='manufacturer'>RDO</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='product'>OpenStack Compute</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='serial'>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='uuid'>cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <entry name='family'>Virtual Machine</entry>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </system>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <os>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <boot dev='hd'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <smbios mode='sysinfo'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </os>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <features>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <vmcoreinfo state='on'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </features>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <cpu mode='custom' match='exact' check='full'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <model fallback='forbid'>Nehalem</model>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <feature policy='require' name='x2apic'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <feature policy='require' name='hypervisor'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <feature policy='require' name='vme'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <clock offset='utc'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <timer name='pit' tickpolicy='delay'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <timer name='hpet' present='no'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <on_poweroff>destroy</on_poweroff>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <on_reboot>restart</on_reboot>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <on_crash>destroy</on_crash>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <disk type='network' device='disk'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <auth username='openstack'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <source protocol='rbd' name='vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk' index='2'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.100' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.102' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.101' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </source>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target dev='vda' bus='virtio'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='virtio-disk0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <disk type='network' device='cdrom'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <driver name='qemu' type='raw' cache='none'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <auth username='openstack'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <secret type='ceph' uuid='f70fcd2a-dcb4-5f89-a4ba-79a09959083b'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <source protocol='rbd' name='vms/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_disk.config' index='1'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.100' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.102' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <host name='192.168.122.101' port='6789'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </source>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target dev='sda' bus='sata'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <readonly/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='sata0-0-0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='0' model='pcie-root'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pcie.0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='1' port='0x10'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='2' port='0x11'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='3' port='0x12'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.3'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='4' port='0x13'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.4'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='5' port='0x14'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.5'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='6' port='0x15'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.6'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='7' port='0x16'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.7'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='8' port='0x17'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.8'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='9' port='0x18'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.9'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='10' port='0x19'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.10'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='11' port='0x1a'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.11'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='12' port='0x1b'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.12'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='13' port='0x1c'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.13'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='14' port='0x1d'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.14'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='15' port='0x1e'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.15'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='16' port='0x1f'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.16'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='17' port='0x20'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.17'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='18' port='0x21'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.18'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='19' port='0x22'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.19'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='20' port='0x23'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.20'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='21' port='0x24'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.21'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='22' port='0x25'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.22'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='23' port='0x26'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.23'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='24' port='0x27'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.24'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-root-port'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target chassis='25' port='0x28'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.25'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model name='pcie-pci-bridge'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='pci.26'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='usb'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <controller type='sata' index='0'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='ide'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </controller>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <interface type='ethernet'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <mac address='fa:16:3e:ce:51:d8'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target dev='tapf0a0388d-0f'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model type='virtio'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <driver name='vhost' rx_queue_size='512'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <mtu size='1442'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='net0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <serial type='pty'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <source path='/dev/pts/0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <log file='/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log' append='off'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target type='isa-serial' port='0'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:         <model name='isa-serial'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       </target>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='serial0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <console type='pty' tty='/dev/pts/0'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <source path='/dev/pts/0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <log file='/var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb/console.log' append='off'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <target type='serial' port='0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='serial0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </console>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <input type='tablet' bus='usb'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='input0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='usb' bus='0' port='1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <input type='mouse' bus='ps2'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='input1'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <input type='keyboard' bus='ps2'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='input2'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </input>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <listen type='address' address='::0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </graphics>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <audio id='1' type='none'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <video>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <model type='virtio' heads='1' primary='yes'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='video0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </video>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <watchdog model='itco' action='reset'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='watchdog0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </watchdog>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <memballoon model='virtio'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <stats period='10'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='balloon0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <rng model='virtio'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <backend model='random'>/dev/urandom</backend>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <alias name='rng0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <label>system_u:system_r:svirt_t:s0:c171,c865</label>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c171,c865</imagelabel>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </seclabel>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <label>+107:+107</label>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <imagelabel>+107:+107</imagelabel>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </seclabel>
Jan 31 08:02:06 compute-0 nova_compute[247704]: </domain>
Jan 31 08:02:06 compute-0 nova_compute[247704]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.402 247708 WARNING nova.virt.libvirt.driver [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Detaching interface fa:16:3e:d7:f7:dd failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapa4bb4520-74' not found.
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.403 247708 DEBUG nova.virt.libvirt.vif [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.403 247708 DEBUG nova.network.os_vif_util [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converting VIF {"id": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "address": "fa:16:3e:d7:f7:dd", "network": {"id": "6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1778148081", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4bb4520-74", "ovs_interfaceid": "a4bb4520-7491-4c40-8d8b-bf1044a7d29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.404 247708 DEBUG nova.network.os_vif_util [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.404 247708 DEBUG os_vif [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.405 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.406 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4bb4520-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.406 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.408 247708 INFO os_vif [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:f7:dd,bridge_name='br-int',has_traffic_filtering=True,id=a4bb4520-7491-4c40-8d8b-bf1044a7d29c,network=Network(6741c3f8-4cb7-4a56-bc6c-70a0d9fdd36a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4bb4520-74')
Jan 31 08:02:06 compute-0 nova_compute[247704]: 2026-01-31 08:02:06.409 247708 DEBUG nova.virt.libvirt.guest [req-c300eb41-0312-46ec-a290-ec3ab11f49a6 req-d7733c25-bd3d-4698-b28c-0c2b305796d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:name>tempest-device-tagging-server-1787730133</nova:name>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:creationTime>2026-01-31 08:02:06</nova:creationTime>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:flavor name="m1.nano">
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:memory>128</nova:memory>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:disk>1</nova:disk>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:swap>0</nova:swap>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:vcpus>1</nova:vcpus>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:flavor>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:owner>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:user uuid="d454a4ec91e645e992f4a88ac60da747">tempest-TaggedAttachmentsTest-1028778083-project-member</nova:user>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:project uuid="2ab30fd72657435fb442fc59a53da644">tempest-TaggedAttachmentsTest-1028778083</nova:project>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:owner>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   <nova:ports>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     <nova:port uuid="f0a0388d-0f3d-4eb7-a195-234dd517f99b">
Jan 31 08:02:06 compute-0 nova_compute[247704]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:02:06 compute-0 nova_compute[247704]:     </nova:port>
Jan 31 08:02:06 compute-0 nova_compute[247704]:   </nova:ports>
Jan 31 08:02:06 compute-0 nova_compute[247704]: </nova:instance>
Jan 31 08:02:06 compute-0 nova_compute[247704]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 31 08:02:06 compute-0 ceph-mon[74496]: pgmap v1945: 305 pgs: 305 active+clean; 320 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Jan 31 08:02:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.8 MiB/s wr, 215 op/s
Jan 31 08:02:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:07.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:07.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:07 compute-0 nova_compute[247704]: 2026-01-31 08:02:07.838 247708 DEBUG nova.compute.manager [req-c3c7a9c6-b32b-4ea8-941b-b72fa40c1f58 req-ef23da0e-d8c1-4973-add8-873eb63c23d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:07 compute-0 nova_compute[247704]: 2026-01-31 08:02:07.838 247708 DEBUG oslo_concurrency.lockutils [req-c3c7a9c6-b32b-4ea8-941b-b72fa40c1f58 req-ef23da0e-d8c1-4973-add8-873eb63c23d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:07 compute-0 nova_compute[247704]: 2026-01-31 08:02:07.838 247708 DEBUG oslo_concurrency.lockutils [req-c3c7a9c6-b32b-4ea8-941b-b72fa40c1f58 req-ef23da0e-d8c1-4973-add8-873eb63c23d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:07 compute-0 nova_compute[247704]: 2026-01-31 08:02:07.839 247708 DEBUG oslo_concurrency.lockutils [req-c3c7a9c6-b32b-4ea8-941b-b72fa40c1f58 req-ef23da0e-d8c1-4973-add8-873eb63c23d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:07 compute-0 nova_compute[247704]: 2026-01-31 08:02:07.839 247708 DEBUG nova.compute.manager [req-c3c7a9c6-b32b-4ea8-941b-b72fa40c1f58 req-ef23da0e-d8c1-4973-add8-873eb63c23d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] No waiting events found dispatching network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:07 compute-0 nova_compute[247704]: 2026-01-31 08:02:07.839 247708 WARNING nova.compute.manager [req-c3c7a9c6-b32b-4ea8-941b-b72fa40c1f58 req-ef23da0e-d8c1-4973-add8-873eb63c23d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received unexpected event network-vif-plugged-a4bb4520-7491-4c40-8d8b-bf1044a7d29c for instance with vm_state active and task_state None.
Jan 31 08:02:08 compute-0 ceph-mon[74496]: pgmap v1946: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.8 MiB/s wr, 215 op/s
Jan 31 08:02:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:09.024 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:09 compute-0 nova_compute[247704]: 2026-01-31 08:02:09.053 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 219 op/s
Jan 31 08:02:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:09.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:09 compute-0 nova_compute[247704]: 2026-01-31 08:02:09.953 247708 INFO nova.network.neutron [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Port a4bb4520-7491-4c40-8d8b-bf1044a7d29c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 31 08:02:09 compute-0 nova_compute[247704]: 2026-01-31 08:02:09.954 247708 DEBUG nova.network.neutron [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [{"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:09 compute-0 nova_compute[247704]: 2026-01-31 08:02:09.994 247708 DEBUG oslo_concurrency.lockutils [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Releasing lock "refresh_cache-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:02:10 compute-0 nova_compute[247704]: 2026-01-31 08:02:10.030 247708 DEBUG oslo_concurrency.lockutils [None req-e9fc058c-0ad5-4569-b154-0c3b3ffd22d0 d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "interface-cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-a4bb4520-7491-4c40-8d8b-bf1044a7d29c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:10 compute-0 nova_compute[247704]: 2026-01-31 08:02:10.075 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:10 compute-0 ceph-mon[74496]: pgmap v1947: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 219 op/s
Jan 31 08:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:11.174 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:11.174 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:11.175 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.311 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.312 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.312 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.313 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.313 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.314 247708 INFO nova.compute.manager [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Terminating instance
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.315 247708 DEBUG nova.compute.manager [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:02:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:02:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:11.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:11 compute-0 kernel: tapf0a0388d-0f (unregistering): left promiscuous mode
Jan 31 08:02:11 compute-0 NetworkManager[49108]: <info>  [1769846531.5344] device (tapf0a0388d-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:02:11 compute-0 ovn_controller[149457]: 2026-01-31T08:02:11Z|00378|binding|INFO|Releasing lport f0a0388d-0f3d-4eb7-a195-234dd517f99b from this chassis (sb_readonly=0)
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.544 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:11 compute-0 ovn_controller[149457]: 2026-01-31T08:02:11Z|00379|binding|INFO|Setting lport f0a0388d-0f3d-4eb7-a195-234dd517f99b down in Southbound
Jan 31 08:02:11 compute-0 ovn_controller[149457]: 2026-01-31T08:02:11Z|00380|binding|INFO|Removing iface tapf0a0388d-0f ovn-installed in OVS
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.546 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:11.553 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:51:d8 10.100.0.9'], port_security=['fa:16:3e:ce:51:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c660d8f-9cd6-4091-b9a9-e007762da7fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ab30fd72657435fb442fc59a53da644', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b77be73b-b461-413f-95ea-b1491f487dca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=682aac33-b4be-4bd7-9b67-193b4cc436d1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=f0a0388d-0f3d-4eb7-a195-234dd517f99b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:11.556 160028 INFO neutron.agent.ovn.metadata.agent [-] Port f0a0388d-0f3d-4eb7-a195-234dd517f99b in datapath 5c660d8f-9cd6-4091-b9a9-e007762da7fa unbound from our chassis
Jan 31 08:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:11.560 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c660d8f-9cd6-4091-b9a9-e007762da7fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:11.562 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7331a3b4-fc30-49ec-9d11-025d0e65875e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:11.562 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa namespace which is not needed anymore
Jan 31 08:02:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:11 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Jan 31 08:02:11 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005a.scope: Consumed 15.958s CPU time.
Jan 31 08:02:11 compute-0 systemd-machined[214448]: Machine qemu-37-instance-0000005a terminated.
Jan 31 08:02:11 compute-0 neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa[307994]: [NOTICE]   (308024) : haproxy version is 2.8.14-c23fe91
Jan 31 08:02:11 compute-0 neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa[307994]: [NOTICE]   (308024) : path to executable is /usr/sbin/haproxy
Jan 31 08:02:11 compute-0 neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa[307994]: [WARNING]  (308024) : Exiting Master process...
Jan 31 08:02:11 compute-0 neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa[307994]: [ALERT]    (308024) : Current worker (308029) exited with code 143 (Terminated)
Jan 31 08:02:11 compute-0 neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa[307994]: [WARNING]  (308024) : All workers exited. Exiting... (0)
Jan 31 08:02:11 compute-0 systemd[1]: libpod-09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685.scope: Deactivated successfully.
Jan 31 08:02:11 compute-0 podman[309903]: 2026-01-31 08:02:11.699546422 +0000 UTC m=+0.058587899 container died 09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.735 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.741 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.756 247708 INFO nova.virt.libvirt.driver [-] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Instance destroyed successfully.
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.757 247708 DEBUG nova.objects.instance [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lazy-loading 'resources' on Instance uuid cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.786 247708 DEBUG nova.virt.libvirt.vif [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1787730133',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1787730133',id=90,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoG3wNzba0rmKSok2Vb0AKg8KlISIMOzAqbaj48bWEod/rzrEnZg+p8NTjcOH6lPnd9pKK1UB3fRDJO5pz9nQCguQ7XNRSR9yG97Gp9kW9HtHhuGMRGMY59oA4mPkq4CQ==',key_name='tempest-keypair-353418820',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ab30fd72657435fb442fc59a53da644',ramdisk_id='',reservation_id='r-dw0h0o2d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1028778083',owner_user_name='tempest-TaggedAttachmentsTest-1028778083-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d454a4ec91e645e992f4a88ac60da747',uuid=cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.787 247708 DEBUG nova.network.os_vif_util [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converting VIF {"id": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "address": "fa:16:3e:ce:51:d8", "network": {"id": "5c660d8f-9cd6-4091-b9a9-e007762da7fa", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1145899655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ab30fd72657435fb442fc59a53da644", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf0a0388d-0f", "ovs_interfaceid": "f0a0388d-0f3d-4eb7-a195-234dd517f99b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.788 247708 DEBUG nova.network.os_vif_util [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ce:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=f0a0388d-0f3d-4eb7-a195-234dd517f99b,network=Network(5c660d8f-9cd6-4091-b9a9-e007762da7fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0a0388d-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.788 247708 DEBUG os_vif [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ce:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=f0a0388d-0f3d-4eb7-a195-234dd517f99b,network=Network(5c660d8f-9cd6-4091-b9a9-e007762da7fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0a0388d-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.790 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0a0388d-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.792 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.793 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:11 compute-0 nova_compute[247704]: 2026-01-31 08:02:11.796 247708 INFO os_vif [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ce:51:d8,bridge_name='br-int',has_traffic_filtering=True,id=f0a0388d-0f3d-4eb7-a195-234dd517f99b,network=Network(5c660d8f-9cd6-4091-b9a9-e007762da7fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf0a0388d-0f')
Jan 31 08:02:11 compute-0 ceph-mon[74496]: pgmap v1948: 305 pgs: 305 active+clean; 326 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:02:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685-userdata-shm.mount: Deactivated successfully.
Jan 31 08:02:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3357b0b6a38ad4c588d28ed8dddcfe237f7187daff2acabfca4e5aa465df672e-merged.mount: Deactivated successfully.
Jan 31 08:02:11 compute-0 podman[309903]: 2026-01-31 08:02:11.956299406 +0000 UTC m=+0.315340883 container cleanup 09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:02:11 compute-0 systemd[1]: libpod-conmon-09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685.scope: Deactivated successfully.
Jan 31 08:02:12 compute-0 podman[309961]: 2026-01-31 08:02:12.044514446 +0000 UTC m=+0.062615106 container remove 09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.049 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbb766a-8166-4554-afa4-ddbb6eb5a87d]: (4, ('Sat Jan 31 08:02:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa (09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685)\n09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685\nSat Jan 31 08:02:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa (09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685)\n09664a17b4a79a33f36cfd5a9163178c276e29953d7b5f7cea0e6854db792685\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.052 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9d315d84-683e-4cad-87e2-460dc96e4c12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.052 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c660d8f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.054 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:12 compute-0 kernel: tap5c660d8f-90: left promiscuous mode
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.065 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.067 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.069 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2e782c22-6303-44d0-a7d8-86aede09019e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.081 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f4215b48-cf7f-41be-9719-e8c9e75f93db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.082 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4e67f227-753e-48b8-9017-d436f1e23525]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.092 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4a0c91-4c81-4309-b2d2-2922afaf78ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671902, 'reachable_time': 38018, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309977, 'error': None, 'target': 'ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.096 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c660d8f-9cd6-4091-b9a9-e007762da7fa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:02:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:12.096 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[47e1482e-56c9-4f97-a7dc-3fa1013ee044]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:02:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c660d8f\x2d9cd6\x2d4091\x2db9a9\x2de007762da7fa.mount: Deactivated successfully.
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.128 247708 DEBUG nova.compute.manager [req-8d1a7ac8-166e-4538-858e-a17731c39620 req-298c5687-b1a3-492b-a769-443ce8d7bec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-unplugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.128 247708 DEBUG oslo_concurrency.lockutils [req-8d1a7ac8-166e-4538-858e-a17731c39620 req-298c5687-b1a3-492b-a769-443ce8d7bec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.128 247708 DEBUG oslo_concurrency.lockutils [req-8d1a7ac8-166e-4538-858e-a17731c39620 req-298c5687-b1a3-492b-a769-443ce8d7bec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.129 247708 DEBUG oslo_concurrency.lockutils [req-8d1a7ac8-166e-4538-858e-a17731c39620 req-298c5687-b1a3-492b-a769-443ce8d7bec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.129 247708 DEBUG nova.compute.manager [req-8d1a7ac8-166e-4538-858e-a17731c39620 req-298c5687-b1a3-492b-a769-443ce8d7bec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] No waiting events found dispatching network-vif-unplugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.129 247708 DEBUG nova.compute.manager [req-8d1a7ac8-166e-4538-858e-a17731c39620 req-298c5687-b1a3-492b-a769-443ce8d7bec4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-unplugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.328 247708 INFO nova.virt.libvirt.driver [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Deleting instance files /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_del
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.329 247708 INFO nova.virt.libvirt.driver [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Deletion of /var/lib/nova/instances/cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb_del complete
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.431 247708 INFO nova.compute.manager [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Took 1.12 seconds to destroy the instance on the hypervisor.
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.432 247708 DEBUG oslo.service.loopingcall [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.433 247708 DEBUG nova.compute.manager [-] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:02:12 compute-0 nova_compute[247704]: 2026-01-31 08:02:12.433 247708 DEBUG nova.network.neutron [-] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:02:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 296 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 152 op/s
Jan 31 08:02:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:13.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:14 compute-0 nova_compute[247704]: 2026-01-31 08:02:14.057 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:14 compute-0 ceph-mon[74496]: pgmap v1949: 305 pgs: 305 active+clean; 296 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 152 op/s
Jan 31 08:02:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:14 compute-0 nova_compute[247704]: 2026-01-31 08:02:14.837 247708 DEBUG nova.network.neutron [-] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:14 compute-0 nova_compute[247704]: 2026-01-31 08:02:14.981 247708 DEBUG nova.compute.manager [req-3b9ed0e4-1094-4c1a-8570-34b8cd031795 req-510c3352-123f-487e-8d88-4f217099e17b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-deleted-f0a0388d-0f3d-4eb7-a195-234dd517f99b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:14 compute-0 nova_compute[247704]: 2026-01-31 08:02:14.981 247708 INFO nova.compute.manager [req-3b9ed0e4-1094-4c1a-8570-34b8cd031795 req-510c3352-123f-487e-8d88-4f217099e17b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Neutron deleted interface f0a0388d-0f3d-4eb7-a195-234dd517f99b; detaching it from the instance and deleting it from the info cache
Jan 31 08:02:14 compute-0 nova_compute[247704]: 2026-01-31 08:02:14.982 247708 DEBUG nova.network.neutron [req-3b9ed0e4-1094-4c1a-8570-34b8cd031795 req-510c3352-123f-487e-8d88-4f217099e17b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:14 compute-0 nova_compute[247704]: 2026-01-31 08:02:14.985 247708 INFO nova.compute.manager [-] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Took 2.55 seconds to deallocate network for instance.
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.140 247708 DEBUG nova.compute.manager [req-3b9ed0e4-1094-4c1a-8570-34b8cd031795 req-510c3352-123f-487e-8d88-4f217099e17b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Detach interface failed, port_id=f0a0388d-0f3d-4eb7-a195-234dd517f99b, reason: Instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.150 247708 DEBUG nova.compute.manager [req-36c4ac4c-c4c3-4c70-a517-638f76a5a07a req-f74f2524-07df-4ccd-a77a-f8a31d29ea49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received event network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.151 247708 DEBUG oslo_concurrency.lockutils [req-36c4ac4c-c4c3-4c70-a517-638f76a5a07a req-f74f2524-07df-4ccd-a77a-f8a31d29ea49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.151 247708 DEBUG oslo_concurrency.lockutils [req-36c4ac4c-c4c3-4c70-a517-638f76a5a07a req-f74f2524-07df-4ccd-a77a-f8a31d29ea49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.151 247708 DEBUG oslo_concurrency.lockutils [req-36c4ac4c-c4c3-4c70-a517-638f76a5a07a req-f74f2524-07df-4ccd-a77a-f8a31d29ea49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.151 247708 DEBUG nova.compute.manager [req-36c4ac4c-c4c3-4c70-a517-638f76a5a07a req-f74f2524-07df-4ccd-a77a-f8a31d29ea49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] No waiting events found dispatching network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.152 247708 WARNING nova.compute.manager [req-36c4ac4c-c4c3-4c70-a517-638f76a5a07a req-f74f2524-07df-4ccd-a77a-f8a31d29ea49 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Received unexpected event network-vif-plugged-f0a0388d-0f3d-4eb7-a195-234dd517f99b for instance with vm_state active and task_state deleting.
Jan 31 08:02:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 274 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 720 KiB/s wr, 165 op/s
Jan 31 08:02:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:15.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.457 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:15 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.458 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:15 compute-0 sshd-session[309980]: Invalid user solv from 45.148.10.240 port 46956
Jan 31 08:02:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:15 compute-0 sshd-session[309980]: Connection closed by invalid user solv 45.148.10.240 port 46956 [preauth]
Jan 31 08:02:16 compute-0 nova_compute[247704]: 2026-01-31 08:02:15.999 247708 DEBUG oslo_concurrency.processutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:02:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1810113601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:16 compute-0 ceph-mon[74496]: pgmap v1950: 305 pgs: 305 active+clean; 274 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 720 KiB/s wr, 165 op/s
Jan 31 08:02:16 compute-0 nova_compute[247704]: 2026-01-31 08:02:16.465 247708 DEBUG oslo_concurrency.processutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:16 compute-0 nova_compute[247704]: 2026-01-31 08:02:16.472 247708 DEBUG nova.compute.provider_tree [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:02:16 compute-0 nova_compute[247704]: 2026-01-31 08:02:16.596 247708 DEBUG nova.scheduler.client.report [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:02:16 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:02:16 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:02:16 compute-0 nova_compute[247704]: 2026-01-31 08:02:16.739 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:16 compute-0 nova_compute[247704]: 2026-01-31 08:02:16.792 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:16 compute-0 nova_compute[247704]: 2026-01-31 08:02:16.804 247708 INFO nova.scheduler.client.report [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Deleted allocations for instance cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb
Jan 31 08:02:16 compute-0 nova_compute[247704]: 2026-01-31 08:02:16.912 247708 DEBUG oslo_concurrency.lockutils [None req-06c0fc66-cd57-436b-9f24-b137db4757ab d454a4ec91e645e992f4a88ac60da747 2ab30fd72657435fb442fc59a53da644 - - default default] Lock "cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 254 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 839 KiB/s wr, 129 op/s
Jan 31 08:02:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:17.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1810113601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3785980968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:17.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:18 compute-0 ceph-mon[74496]: pgmap v1951: 305 pgs: 305 active+clean; 254 MiB data, 863 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 839 KiB/s wr, 129 op/s
Jan 31 08:02:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3615033175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:19 compute-0 nova_compute[247704]: 2026-01-31 08:02:19.060 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 299 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 31 08:02:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:19.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1313823788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:19.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:02:20
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', '.mgr', 'backups', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:02:20 compute-0 nova_compute[247704]: 2026-01-31 08:02:20.193 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:20 compute-0 nova_compute[247704]: 2026-01-31 08:02:20.193 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:20 compute-0 sudo[310008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:20 compute-0 sudo[310008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:20 compute-0 sudo[310008]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:20 compute-0 sudo[310039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:20 compute-0 sudo[310039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:02:20 compute-0 sudo[310039]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:02:20 compute-0 ceph-mon[74496]: pgmap v1952: 305 pgs: 305 active+clean; 299 MiB data, 878 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 31 08:02:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3581130653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:20 compute-0 podman[310032]: 2026-01-31 08:02:20.527788347 +0000 UTC m=+0.099879404 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:02:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:02:20 compute-0 nova_compute[247704]: 2026-01-31 08:02:20.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:20 compute-0 nova_compute[247704]: 2026-01-31 08:02:20.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:20 compute-0 nova_compute[247704]: 2026-01-31 08:02:20.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:02:20 compute-0 nova_compute[247704]: 2026-01-31 08:02:20.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:02:21 compute-0 nova_compute[247704]: 2026-01-31 08:02:21.026 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:02:21 compute-0 nova_compute[247704]: 2026-01-31 08:02:21.027 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:02:21 compute-0 nova_compute[247704]: 2026-01-31 08:02:21.027 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:02:21 compute-0 nova_compute[247704]: 2026-01-31 08:02:21.027 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 32e56536-3edb-494c-9e8b-87cfa8396dac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:02:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 313 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 882 KiB/s rd, 3.1 MiB/s wr, 151 op/s
Jan 31 08:02:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:21.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:21.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:21 compute-0 nova_compute[247704]: 2026-01-31 08:02:21.797 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:22 compute-0 ceph-mon[74496]: pgmap v1953: 305 pgs: 305 active+clean; 313 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 882 KiB/s rd, 3.1 MiB/s wr, 151 op/s
Jan 31 08:02:23 compute-0 ovn_controller[149457]: 2026-01-31T08:02:23Z|00381|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.370 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 328 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 31 08:02:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:23.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:23.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.897 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.937 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.938 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.938 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.939 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.939 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.939 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.939 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.997 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.998 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.999 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.999 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:02:23 compute-0 nova_compute[247704]: 2026-01-31 08:02:23.999 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:02:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440359653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.465 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.610 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.610 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:02:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:24 compute-0 ceph-mon[74496]: pgmap v1954: 305 pgs: 305 active+clean; 328 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 710 KiB/s rd, 3.9 MiB/s wr, 150 op/s
Jan 31 08:02:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1830699388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/440359653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1328614510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.791 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.793 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4336MB free_disk=20.83095932006836GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.793 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.793 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.901 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 32e56536-3edb-494c-9e8b-87cfa8396dac actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.901 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.902 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:02:24 compute-0 nova_compute[247704]: 2026-01-31 08:02:24.982 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:02:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 328 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 31 08:02:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:02:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928281601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:25 compute-0 nova_compute[247704]: 2026-01-31 08:02:25.411 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:02:25 compute-0 nova_compute[247704]: 2026-01-31 08:02:25.419 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:02:25 compute-0 nova_compute[247704]: 2026-01-31 08:02:25.441 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:02:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:25.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:25 compute-0 nova_compute[247704]: 2026-01-31 08:02:25.475 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:02:25 compute-0 nova_compute[247704]: 2026-01-31 08:02:25.476 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:02:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:25.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2310884254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4180076573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3503687341' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:02:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3503687341' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:02:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3928281601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:26 compute-0 nova_compute[247704]: 2026-01-31 08:02:26.099 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:26 compute-0 ceph-mon[74496]: pgmap v1955: 305 pgs: 305 active+clean; 328 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 31 08:02:26 compute-0 nova_compute[247704]: 2026-01-31 08:02:26.755 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846531.7540689, cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:02:26 compute-0 nova_compute[247704]: 2026-01-31 08:02:26.755 247708 INFO nova.compute.manager [-] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] VM Stopped (Lifecycle Event)
Jan 31 08:02:26 compute-0 nova_compute[247704]: 2026-01-31 08:02:26.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:26 compute-0 nova_compute[247704]: 2026-01-31 08:02:26.878 247708 DEBUG nova.compute.manager [None req-e608a217-eea1-4ec7-b939-7babde5cdbe2 - - - - - -] [instance: cebc73e4-5b0d-4a7c-b9e3-4bb7575794bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:02:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 328 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 4.0 MiB/s wr, 122 op/s
Jan 31 08:02:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:27.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:28 compute-0 ceph-mon[74496]: pgmap v1956: 305 pgs: 305 active+clean; 328 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 4.0 MiB/s wr, 122 op/s
Jan 31 08:02:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/718281349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2772903479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:29 compute-0 nova_compute[247704]: 2026-01-31 08:02:29.064 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Jan 31 08:02:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:29.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:29 compute-0 ceph-mon[74496]: pgmap v1957: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Jan 31 08:02:30 compute-0 nova_compute[247704]: 2026-01-31 08:02:30.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:02:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 31 08:02:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 31 08:02:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 31 08:02:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 832 KiB/s rd, 1.0 MiB/s wr, 78 op/s
Jan 31 08:02:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:31.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:02:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:02:31 compute-0 nova_compute[247704]: 2026-01-31 08:02:31.804 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:32 compute-0 ceph-mon[74496]: osdmap e251: 3 total, 3 up, 3 in
Jan 31 08:02:32 compute-0 ceph-mon[74496]: pgmap v1959: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 832 KiB/s rd, 1.0 MiB/s wr, 78 op/s
Jan 31 08:02:33 compute-0 ovn_controller[149457]: 2026-01-31T08:02:33Z|00382|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 08:02:33 compute-0 nova_compute[247704]: 2026-01-31 08:02:33.190 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3417555742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 46 KiB/s wr, 114 op/s
Jan 31 08:02:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:33.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:33.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:34 compute-0 nova_compute[247704]: 2026-01-31 08:02:34.066 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:34 compute-0 ceph-mon[74496]: pgmap v1960: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 46 KiB/s wr, 114 op/s
Jan 31 08:02:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/520128495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007510061883491532 of space, bias 1.0, pg target 2.2530185650474595 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:02:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 45 KiB/s wr, 198 op/s
Jan 31 08:02:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:35 compute-0 podman[310133]: 2026-01-31 08:02:35.95437443 +0000 UTC m=+0.122224699 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:02:36 compute-0 nova_compute[247704]: 2026-01-31 08:02:36.806 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:37 compute-0 ceph-mon[74496]: pgmap v1961: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 45 KiB/s wr, 198 op/s
Jan 31 08:02:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 18 KiB/s wr, 230 op/s
Jan 31 08:02:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:37.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:38 compute-0 ceph-mon[74496]: pgmap v1962: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 18 KiB/s wr, 230 op/s
Jan 31 08:02:39 compute-0 nova_compute[247704]: 2026-01-31 08:02:39.068 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.7 KiB/s wr, 266 op/s
Jan 31 08:02:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:39.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:40 compute-0 ceph-mon[74496]: pgmap v1963: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.7 KiB/s wr, 266 op/s
Jan 31 08:02:40 compute-0 sudo[310163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:40 compute-0 sudo[310163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:40 compute-0 sudo[310163]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:40 compute-0 sudo[310188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:02:40 compute-0 sudo[310188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:02:40 compute-0 sudo[310188]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.4 KiB/s wr, 236 op/s
Jan 31 08:02:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:41.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:41 compute-0 ovn_controller[149457]: 2026-01-31T08:02:41Z|00383|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 08:02:41 compute-0 nova_compute[247704]: 2026-01-31 08:02:41.553 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:41.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:41 compute-0 nova_compute[247704]: 2026-01-31 08:02:41.808 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:42 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 31 08:02:42 compute-0 ceph-mon[74496]: pgmap v1964: 305 pgs: 305 active+clean; 328 MiB data, 901 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.4 KiB/s wr, 236 op/s
Jan 31 08:02:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 332 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 341 KiB/s wr, 203 op/s
Jan 31 08:02:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:43 compute-0 ceph-mon[74496]: pgmap v1965: 305 pgs: 305 active+clean; 332 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 341 KiB/s wr, 203 op/s
Jan 31 08:02:44 compute-0 nova_compute[247704]: 2026-01-31 08:02:44.071 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 31 08:02:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 31 08:02:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 31 08:02:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 343 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 142 op/s
Jan 31 08:02:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:45.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:45.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3863939065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:02:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3863939065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:02:46 compute-0 ceph-mon[74496]: osdmap e252: 3 total, 3 up, 3 in
Jan 31 08:02:46 compute-0 ceph-mon[74496]: pgmap v1967: 305 pgs: 305 active+clean; 343 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.7 MiB/s wr, 142 op/s
Jan 31 08:02:46 compute-0 nova_compute[247704]: 2026-01-31 08:02:46.814 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3153227033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:02:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 349 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Jan 31 08:02:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:47.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:48 compute-0 ceph-mon[74496]: pgmap v1968: 305 pgs: 305 active+clean; 349 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Jan 31 08:02:49 compute-0 nova_compute[247704]: 2026-01-31 08:02:49.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 360 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Jan 31 08:02:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 31 08:02:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 31 08:02:49 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 31 08:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:02:50 compute-0 ceph-mon[74496]: pgmap v1969: 305 pgs: 305 active+clean; 360 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 167 op/s
Jan 31 08:02:50 compute-0 ceph-mon[74496]: osdmap e253: 3 total, 3 up, 3 in
Jan 31 08:02:50 compute-0 podman[310218]: 2026-01-31 08:02:50.893840897 +0000 UTC m=+0.063353280 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 08:02:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 234 op/s
Jan 31 08:02:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:51.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:51.773 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:02:51 compute-0 nova_compute[247704]: 2026-01-31 08:02:51.774 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:51.775 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:02:51 compute-0 nova_compute[247704]: 2026-01-31 08:02:51.816 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:52 compute-0 ceph-mon[74496]: pgmap v1971: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 234 op/s
Jan 31 08:02:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 181 op/s
Jan 31 08:02:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:53.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:53.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:02:53.778 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:02:54 compute-0 nova_compute[247704]: 2026-01-31 08:02:54.075 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:02:54 compute-0 ceph-mon[74496]: pgmap v1972: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 181 op/s
Jan 31 08:02:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 947 KiB/s wr, 152 op/s
Jan 31 08:02:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:55.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:55.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:55 compute-0 ceph-mon[74496]: pgmap v1973: 305 pgs: 305 active+clean; 361 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 947 KiB/s wr, 152 op/s
Jan 31 08:02:56 compute-0 nova_compute[247704]: 2026-01-31 08:02:56.850 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 345 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 425 KiB/s wr, 130 op/s
Jan 31 08:02:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:02:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:57.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:02:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:57.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:58 compute-0 nova_compute[247704]: 2026-01-31 08:02:58.053 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:58 compute-0 ceph-mon[74496]: pgmap v1974: 305 pgs: 305 active+clean; 345 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 425 KiB/s wr, 130 op/s
Jan 31 08:02:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1288948321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/729951909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:02:59 compute-0 nova_compute[247704]: 2026-01-31 08:02:59.111 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:02:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 271 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 1.2 MiB/s wr, 117 op/s
Jan 31 08:02:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:02:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:59.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:02:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:02:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:02:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:59.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:02:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:00 compute-0 ceph-mon[74496]: pgmap v1975: 305 pgs: 305 active+clean; 271 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 1.2 MiB/s wr, 117 op/s
Jan 31 08:03:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2639171500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2933201801' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:00 compute-0 sudo[310244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:00 compute-0 sudo[310244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:00 compute-0 sudo[310244]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:00 compute-0 sudo[310269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:00 compute-0 sudo[310269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:00 compute-0 sudo[310269]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 258 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 262 KiB/s rd, 1.9 MiB/s wr, 115 op/s
Jan 31 08:03:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:01.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:01 compute-0 nova_compute[247704]: 2026-01-31 08:03:01.853 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.043 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.044 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.082 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.202 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.203 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.209 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.209 247708 INFO nova.compute.claims [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.365 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:02 compute-0 ceph-mon[74496]: pgmap v1976: 305 pgs: 305 active+clean; 258 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 262 KiB/s rd, 1.9 MiB/s wr, 115 op/s
Jan 31 08:03:02 compute-0 sudo[310315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:02 compute-0 sudo[310315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:02 compute-0 sudo[310315]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:02 compute-0 sudo[310340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:02 compute-0 sudo[310340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:02 compute-0 sudo[310340]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:03:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381090428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.872 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.877 247708 DEBUG nova.compute.provider_tree [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:03:02 compute-0 sudo[310367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:02 compute-0 sudo[310367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:02 compute-0 sudo[310367]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:02 compute-0 nova_compute[247704]: 2026-01-31 08:03:02.903 247708 DEBUG nova.scheduler.client.report [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:03:02 compute-0 sudo[310392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:03:02 compute-0 sudo[310392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.002 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.003 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.066 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.067 247708 DEBUG nova.network.neutron [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.091 247708 INFO nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.118 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.271 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.273 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.273 247708 INFO nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Creating image(s)
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.304 247708 DEBUG nova.storage.rbd_utils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image dd1cfd57-90d1-4f96-b16e-96d64815af69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.337 247708 DEBUG nova.storage.rbd_utils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image dd1cfd57-90d1-4f96-b16e-96d64815af69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.373 247708 DEBUG nova.storage.rbd_utils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image dd1cfd57-90d1-4f96-b16e-96d64815af69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.378 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:03 compute-0 sudo[310392]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 249 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 208 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.399 247708 DEBUG nova.policy [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '63e95edea0164ae2a9820dc10467335d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.427 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.428 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.428 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.429 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.454 247708 DEBUG nova.storage.rbd_utils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image dd1cfd57-90d1-4f96-b16e-96d64815af69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.457 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 dd1cfd57-90d1-4f96-b16e-96d64815af69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/381090428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.770 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 dd1cfd57-90d1-4f96-b16e-96d64815af69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:03:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:03:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.867 247708 DEBUG nova.storage.rbd_utils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] resizing rbd image dd1cfd57-90d1-4f96-b16e-96d64815af69_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:03:03 compute-0 nova_compute[247704]: 2026-01-31 08:03:03.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:04 compute-0 nova_compute[247704]: 2026-01-31 08:03:04.005 247708 DEBUG nova.objects.instance [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:04 compute-0 nova_compute[247704]: 2026-01-31 08:03:04.109 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:03:04 compute-0 nova_compute[247704]: 2026-01-31 08:03:04.110 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Ensure instance console log exists: /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:03:04 compute-0 nova_compute[247704]: 2026-01-31 08:03:04.111 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:04 compute-0 nova_compute[247704]: 2026-01-31 08:03:04.112 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:04 compute-0 nova_compute[247704]: 2026-01-31 08:03:04.113 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:04 compute-0 nova_compute[247704]: 2026-01-31 08:03:04.115 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:03:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:03:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:03:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7f92b6cb-a424-43ff-91c1-dd99bf518668 does not exist
Jan 31 08:03:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9db12b6e-4cf9-43d6-830c-54b5d7b1c195 does not exist
Jan 31 08:03:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cb7d3b4a-0b37-4bcc-bba5-28047189e7ce does not exist
Jan 31 08:03:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:03:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:03:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:03:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: pgmap v1977: 305 pgs: 305 active+clean; 249 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 208 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:03:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:03:04 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:03:04 compute-0 sudo[310615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:04 compute-0 sudo[310615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:04 compute-0 sudo[310615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:04 compute-0 sudo[310640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:04 compute-0 sudo[310640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:04 compute-0 sudo[310640]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:04 compute-0 nova_compute[247704]: 2026-01-31 08:03:04.629 247708 DEBUG nova.network.neutron [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Successfully created port: 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:03:04 compute-0 sudo[310665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:04 compute-0 sudo[310665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:04 compute-0 sudo[310665]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:04 compute-0 sudo[310690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:03:04 compute-0 sudo[310690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:05 compute-0 podman[310754]: 2026-01-31 08:03:05.052121922 +0000 UTC m=+0.066697332 container create 155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:03:05 compute-0 systemd[1]: Started libpod-conmon-155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a.scope.
Jan 31 08:03:05 compute-0 podman[310754]: 2026-01-31 08:03:05.021874032 +0000 UTC m=+0.036449472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:03:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:05 compute-0 podman[310754]: 2026-01-31 08:03:05.145236069 +0000 UTC m=+0.159811539 container init 155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:03:05 compute-0 podman[310754]: 2026-01-31 08:03:05.15063162 +0000 UTC m=+0.165207030 container start 155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:03:05 compute-0 podman[310754]: 2026-01-31 08:03:05.15428663 +0000 UTC m=+0.168862040 container attach 155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:03:05 compute-0 keen_perlman[310770]: 167 167
Jan 31 08:03:05 compute-0 systemd[1]: libpod-155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a.scope: Deactivated successfully.
Jan 31 08:03:05 compute-0 podman[310754]: 2026-01-31 08:03:05.157775586 +0000 UTC m=+0.172351016 container died 155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 08:03:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-57108ca40b48f2d978725c7db0c3fc9c2c4e7e59d51b87a9d8577a8c0611e337-merged.mount: Deactivated successfully.
Jan 31 08:03:05 compute-0 podman[310754]: 2026-01-31 08:03:05.20091187 +0000 UTC m=+0.215487290 container remove 155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:03:05 compute-0 systemd[1]: libpod-conmon-155dc5d42ae11c3c7f956850152d2506a90db43eea8d43b421a4e243cad2238a.scope: Deactivated successfully.
Jan 31 08:03:05 compute-0 podman[310796]: 2026-01-31 08:03:05.343539048 +0000 UTC m=+0.042408028 container create c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:03:05 compute-0 systemd[1]: Started libpod-conmon-c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af.scope.
Jan 31 08:03:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 264 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 148 op/s
Jan 31 08:03:05 compute-0 podman[310796]: 2026-01-31 08:03:05.325805135 +0000 UTC m=+0.024674155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:03:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470d74ceda7465fe942998ceb6dfd2614cfddeff65b4af03fd7ce0040ec3006f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470d74ceda7465fe942998ceb6dfd2614cfddeff65b4af03fd7ce0040ec3006f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470d74ceda7465fe942998ceb6dfd2614cfddeff65b4af03fd7ce0040ec3006f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470d74ceda7465fe942998ceb6dfd2614cfddeff65b4af03fd7ce0040ec3006f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470d74ceda7465fe942998ceb6dfd2614cfddeff65b4af03fd7ce0040ec3006f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:05 compute-0 podman[310796]: 2026-01-31 08:03:05.444337162 +0000 UTC m=+0.143206152 container init c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:03:05 compute-0 podman[310796]: 2026-01-31 08:03:05.456122021 +0000 UTC m=+0.154990991 container start c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:03:05 compute-0 podman[310796]: 2026-01-31 08:03:05.46059814 +0000 UTC m=+0.159467120 container attach c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:03:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:05.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:05.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:05 compute-0 nova_compute[247704]: 2026-01-31 08:03:05.802 247708 DEBUG nova.network.neutron [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Successfully updated port: 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:03:05 compute-0 nova_compute[247704]: 2026-01-31 08:03:05.824 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:03:05 compute-0 nova_compute[247704]: 2026-01-31 08:03:05.825 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:03:05 compute-0 nova_compute[247704]: 2026-01-31 08:03:05.826 247708 DEBUG nova.network.neutron [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:03:06 compute-0 nova_compute[247704]: 2026-01-31 08:03:06.096 247708 DEBUG nova.compute.manager [req-789d3c4d-5612-4937-8619-c791dd11af6f req-ee9fdf9b-d1d3-4708-ac0f-9a6501879eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-changed-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:06 compute-0 nova_compute[247704]: 2026-01-31 08:03:06.096 247708 DEBUG nova.compute.manager [req-789d3c4d-5612-4937-8619-c791dd11af6f req-ee9fdf9b-d1d3-4708-ac0f-9a6501879eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Refreshing instance network info cache due to event network-changed-0681aec4-62fb-4ff7-9e0f-3038c32e48a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:03:06 compute-0 nova_compute[247704]: 2026-01-31 08:03:06.097 247708 DEBUG oslo_concurrency.lockutils [req-789d3c4d-5612-4937-8619-c791dd11af6f req-ee9fdf9b-d1d3-4708-ac0f-9a6501879eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:03:06 compute-0 quirky_elgamal[310812]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:03:06 compute-0 quirky_elgamal[310812]: --> relative data size: 1.0
Jan 31 08:03:06 compute-0 quirky_elgamal[310812]: --> All data devices are unavailable
Jan 31 08:03:06 compute-0 systemd[1]: libpod-c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af.scope: Deactivated successfully.
Jan 31 08:03:06 compute-0 podman[310796]: 2026-01-31 08:03:06.228797944 +0000 UTC m=+0.927666914 container died c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-470d74ceda7465fe942998ceb6dfd2614cfddeff65b4af03fd7ce0040ec3006f-merged.mount: Deactivated successfully.
Jan 31 08:03:06 compute-0 podman[310796]: 2026-01-31 08:03:06.285898671 +0000 UTC m=+0.984767651 container remove c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_elgamal, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:03:06 compute-0 systemd[1]: libpod-conmon-c278e346b2e25639bcef2f5bfee2237d3bcea4208e548058075b2ba4b48310af.scope: Deactivated successfully.
Jan 31 08:03:06 compute-0 sudo[310690]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:06 compute-0 nova_compute[247704]: 2026-01-31 08:03:06.330 247708 DEBUG nova.network.neutron [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:03:06 compute-0 sudo[310860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:06 compute-0 sudo[310860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:06 compute-0 sudo[310860]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:06 compute-0 podman[310829]: 2026-01-31 08:03:06.38319037 +0000 UTC m=+0.126890574 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:03:06 compute-0 sudo[310888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:06 compute-0 sudo[310888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:06 compute-0 sudo[310888]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:06 compute-0 sudo[310916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:06 compute-0 sudo[310916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:06 compute-0 sudo[310916]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:06 compute-0 sudo[310941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:03:06 compute-0 sudo[310941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:06 compute-0 ceph-mon[74496]: pgmap v1978: 305 pgs: 305 active+clean; 264 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 148 op/s
Jan 31 08:03:06 compute-0 podman[311006]: 2026-01-31 08:03:06.771168357 +0000 UTC m=+0.037023557 container create 96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:03:06 compute-0 systemd[1]: Started libpod-conmon-96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892.scope.
Jan 31 08:03:06 compute-0 podman[311006]: 2026-01-31 08:03:06.753207538 +0000 UTC m=+0.019062768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:03:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:06 compute-0 nova_compute[247704]: 2026-01-31 08:03:06.855 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:06 compute-0 podman[311006]: 2026-01-31 08:03:06.868921867 +0000 UTC m=+0.134777337 container init 96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pascal, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:03:06 compute-0 podman[311006]: 2026-01-31 08:03:06.875017266 +0000 UTC m=+0.140872476 container start 96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:03:06 compute-0 podman[311006]: 2026-01-31 08:03:06.879517926 +0000 UTC m=+0.145373166 container attach 96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pascal, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:03:06 compute-0 stupefied_pascal[311023]: 167 167
Jan 31 08:03:06 compute-0 systemd[1]: libpod-96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892.scope: Deactivated successfully.
Jan 31 08:03:06 compute-0 podman[311006]: 2026-01-31 08:03:06.881781852 +0000 UTC m=+0.147637062 container died 96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pascal, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6a1af55291277d1986ca7393abbad9fc950b7189048ac62f305735d2cbfbd73-merged.mount: Deactivated successfully.
Jan 31 08:03:06 compute-0 podman[311006]: 2026-01-31 08:03:06.919238808 +0000 UTC m=+0.185094018 container remove 96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:03:06 compute-0 systemd[1]: libpod-conmon-96729f492e3183b66dcde5179c1613244752b42d35658203cda5ee99da669892.scope: Deactivated successfully.
Jan 31 08:03:07 compute-0 podman[311047]: 2026-01-31 08:03:07.047904293 +0000 UTC m=+0.019885687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:03:07 compute-0 podman[311047]: 2026-01-31 08:03:07.21134197 +0000 UTC m=+0.183323394 container create 6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shtern, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:03:07 compute-0 systemd[1]: Started libpod-conmon-6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463.scope.
Jan 31 08:03:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6e6d2bf60167dfcb608e3be0de24c9c66ad94e5effdeaffcc55eaf4bab6916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6e6d2bf60167dfcb608e3be0de24c9c66ad94e5effdeaffcc55eaf4bab6916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6e6d2bf60167dfcb608e3be0de24c9c66ad94e5effdeaffcc55eaf4bab6916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6e6d2bf60167dfcb608e3be0de24c9c66ad94e5effdeaffcc55eaf4bab6916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:07 compute-0 podman[311047]: 2026-01-31 08:03:07.334640265 +0000 UTC m=+0.306621639 container init 6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:03:07 compute-0 podman[311047]: 2026-01-31 08:03:07.342153359 +0000 UTC m=+0.314134733 container start 6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shtern, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:03:07 compute-0 podman[311047]: 2026-01-31 08:03:07.345577372 +0000 UTC m=+0.317558746 container attach 6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shtern, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.387 247708 DEBUG nova.network.neutron [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:03:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 280 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 191 op/s
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.414 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.414 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance network_info: |[{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.414 247708 DEBUG oslo_concurrency.lockutils [req-789d3c4d-5612-4937-8619-c791dd11af6f req-ee9fdf9b-d1d3-4708-ac0f-9a6501879eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.415 247708 DEBUG nova.network.neutron [req-789d3c4d-5612-4937-8619-c791dd11af6f req-ee9fdf9b-d1d3-4708-ac0f-9a6501879eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Refreshing network info cache for port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.417 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Start _get_guest_xml network_info=[{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.421 247708 WARNING nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.425 247708 DEBUG nova.virt.libvirt.host [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.426 247708 DEBUG nova.virt.libvirt.host [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.429 247708 DEBUG nova.virt.libvirt.host [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.429 247708 DEBUG nova.virt.libvirt.host [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.430 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.431 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.431 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.431 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.431 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.431 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.432 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.432 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.432 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.432 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.432 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.433 247708 DEBUG nova.virt.hardware [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.435 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:07.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:07.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:03:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623362724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.899 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.924 247708 DEBUG nova.storage.rbd_utils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image dd1cfd57-90d1-4f96-b16e-96d64815af69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:07 compute-0 nova_compute[247704]: 2026-01-31 08:03:07.930 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]: {
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:     "0": [
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:         {
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "devices": [
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "/dev/loop3"
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             ],
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "lv_name": "ceph_lv0",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "lv_size": "7511998464",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "name": "ceph_lv0",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "tags": {
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.cluster_name": "ceph",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.crush_device_class": "",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.encrypted": "0",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.osd_id": "0",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.type": "block",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:                 "ceph.vdo": "0"
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             },
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "type": "block",
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:             "vg_name": "ceph_vg0"
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:         }
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]:     ]
Jan 31 08:03:08 compute-0 sleepy_shtern[311064]: }
Jan 31 08:03:08 compute-0 systemd[1]: libpod-6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463.scope: Deactivated successfully.
Jan 31 08:03:08 compute-0 podman[311047]: 2026-01-31 08:03:08.13187705 +0000 UTC m=+1.103858434 container died 6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:03:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b6e6d2bf60167dfcb608e3be0de24c9c66ad94e5effdeaffcc55eaf4bab6916-merged.mount: Deactivated successfully.
Jan 31 08:03:08 compute-0 podman[311047]: 2026-01-31 08:03:08.195239269 +0000 UTC m=+1.167220643 container remove 6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_shtern, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:03:08 compute-0 sudo[310941]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:08 compute-0 systemd[1]: libpod-conmon-6e9c621950f16a63cbf19edb72331fca892df8021188ffbacceebf0190b1d463.scope: Deactivated successfully.
Jan 31 08:03:08 compute-0 sudo[311148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:08 compute-0 sudo[311148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:08 compute-0 sudo[311148]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:03:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514863003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:08 compute-0 sudo[311173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:03:08 compute-0 sudo[311173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.347 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.349 247708 DEBUG nova.virt.libvirt.vif [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1636985884',display_name='tempest-ServerDiskConfigTestJSON-server-1636985884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1636985884',id=95,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-chqeaza3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskC
onfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:03Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=dd1cfd57-90d1-4f96-b16e-96d64815af69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.349 247708 DEBUG nova.network.os_vif_util [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:03:08 compute-0 sudo[311173]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.350 247708 DEBUG nova.network.os_vif_util [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.351 247708 DEBUG nova.objects.instance [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.381 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <uuid>dd1cfd57-90d1-4f96-b16e-96d64815af69</uuid>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <name>instance-0000005f</name>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1636985884</nova:name>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:03:07</nova:creationTime>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <nova:port uuid="0681aec4-62fb-4ff7-9e0f-3038c32e48a2">
Jan 31 08:03:08 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <system>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <entry name="serial">dd1cfd57-90d1-4f96-b16e-96d64815af69</entry>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <entry name="uuid">dd1cfd57-90d1-4f96-b16e-96d64815af69</entry>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </system>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <os>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   </os>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <features>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   </features>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/dd1cfd57-90d1-4f96-b16e-96d64815af69_disk">
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       </source>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/dd1cfd57-90d1-4f96-b16e-96d64815af69_disk.config">
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       </source>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:03:08 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:7c:f2:ba"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <target dev="tap0681aec4-62"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/console.log" append="off"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <video>
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </video>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:03:08 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:03:08 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:03:08 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:03:08 compute-0 nova_compute[247704]: </domain>
Jan 31 08:03:08 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:03:08 compute-0 sudo[311200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.381 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Preparing to wait for external event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.381 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.382 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.382 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.382 247708 DEBUG nova.virt.libvirt.vif [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1636985884',display_name='tempest-ServerDiskConfigTestJSON-server-1636985884',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1636985884',id=95,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-chqeaza3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:03:03Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=dd1cfd57-90d1-4f96-b16e-96d64815af69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.383 247708 DEBUG nova.network.os_vif_util [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.383 247708 DEBUG nova.network.os_vif_util [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.383 247708 DEBUG os_vif [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.384 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.385 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.385 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.388 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.388 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0681aec4-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.389 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0681aec4-62, col_values=(('external_ids', {'iface-id': '0681aec4-62fb-4ff7-9e0f-3038c32e48a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:f2:ba', 'vm-uuid': 'dd1cfd57-90d1-4f96-b16e-96d64815af69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:08 compute-0 sudo[311200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:08 compute-0 NetworkManager[49108]: <info>  [1769846588.3920] manager: (tap0681aec4-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.392 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:08 compute-0 sudo[311200]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.394 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.397 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.398 247708 INFO os_vif [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62')
Jan 31 08:03:08 compute-0 sudo[311227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:03:08 compute-0 sudo[311227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.514 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.515 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.515 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:7c:f2:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.516 247708 INFO nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Using config drive
Jan 31 08:03:08 compute-0 nova_compute[247704]: 2026-01-31 08:03:08.548 247708 DEBUG nova.storage.rbd_utils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image dd1cfd57-90d1-4f96-b16e-96d64815af69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:08 compute-0 ceph-mon[74496]: pgmap v1979: 305 pgs: 305 active+clean; 280 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 191 op/s
Jan 31 08:03:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/623362724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3514863003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:08 compute-0 podman[311311]: 2026-01-31 08:03:08.789663373 +0000 UTC m=+0.037826516 container create 3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:03:08 compute-0 systemd[1]: Started libpod-conmon-3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9.scope.
Jan 31 08:03:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:08 compute-0 podman[311311]: 2026-01-31 08:03:08.775821936 +0000 UTC m=+0.023985109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:03:08 compute-0 podman[311311]: 2026-01-31 08:03:08.871264309 +0000 UTC m=+0.119427532 container init 3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:03:08 compute-0 podman[311311]: 2026-01-31 08:03:08.879779877 +0000 UTC m=+0.127943020 container start 3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:03:08 compute-0 podman[311311]: 2026-01-31 08:03:08.884104353 +0000 UTC m=+0.132267576 container attach 3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:03:08 compute-0 optimistic_hellman[311327]: 167 167
Jan 31 08:03:08 compute-0 systemd[1]: libpod-3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9.scope: Deactivated successfully.
Jan 31 08:03:08 compute-0 podman[311311]: 2026-01-31 08:03:08.887016754 +0000 UTC m=+0.135179937 container died 3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:03:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c3592e4e05fdc439c11e6900c9fe828fdda1240c48ef9f48ea9f76eba771d2c-merged.mount: Deactivated successfully.
Jan 31 08:03:08 compute-0 podman[311311]: 2026-01-31 08:03:08.936632997 +0000 UTC m=+0.184796150 container remove 3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hellman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:03:08 compute-0 systemd[1]: libpod-conmon-3f8b68c71e00a3951c138af433fa14d1cee0c4e4de5d7d033e834855b5fb91b9.scope: Deactivated successfully.
Jan 31 08:03:09 compute-0 podman[311352]: 2026-01-31 08:03:09.096823345 +0000 UTC m=+0.051207354 container create 67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shamir, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.116 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:09 compute-0 systemd[1]: Started libpod-conmon-67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23.scope.
Jan 31 08:03:09 compute-0 podman[311352]: 2026-01-31 08:03:09.080173267 +0000 UTC m=+0.034557296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:03:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e380438c26973bd18c29657860f9ad02217689e5061aebd9c51d892f02e1e44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e380438c26973bd18c29657860f9ad02217689e5061aebd9c51d892f02e1e44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e380438c26973bd18c29657860f9ad02217689e5061aebd9c51d892f02e1e44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e380438c26973bd18c29657860f9ad02217689e5061aebd9c51d892f02e1e44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:09 compute-0 podman[311352]: 2026-01-31 08:03:09.199182907 +0000 UTC m=+0.153566966 container init 67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shamir, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:03:09 compute-0 podman[311352]: 2026-01-31 08:03:09.205308397 +0000 UTC m=+0.159692396 container start 67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 08:03:09 compute-0 podman[311352]: 2026-01-31 08:03:09.209057899 +0000 UTC m=+0.163441988 container attach 67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.262 247708 INFO nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Creating config drive at /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/disk.config
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.268 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8l4j01km execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.296 247708 DEBUG nova.network.neutron [req-789d3c4d-5612-4937-8619-c791dd11af6f req-ee9fdf9b-d1d3-4708-ac0f-9a6501879eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updated VIF entry in instance network info cache for port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.297 247708 DEBUG nova.network.neutron [req-789d3c4d-5612-4937-8619-c791dd11af6f req-ee9fdf9b-d1d3-4708-ac0f-9a6501879eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.330 247708 DEBUG oslo_concurrency.lockutils [req-789d3c4d-5612-4937-8619-c791dd11af6f req-ee9fdf9b-d1d3-4708-ac0f-9a6501879eef 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:03:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 277 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.400 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8l4j01km" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.426 247708 DEBUG nova.storage.rbd_utils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image dd1cfd57-90d1-4f96-b16e-96d64815af69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.430 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/disk.config dd1cfd57-90d1-4f96-b16e-96d64815af69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:09.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.599 247708 DEBUG oslo_concurrency.processutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/disk.config dd1cfd57-90d1-4f96-b16e-96d64815af69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.601 247708 INFO nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Deleting local config drive /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/disk.config because it was imported into RBD.
Jan 31 08:03:09 compute-0 kernel: tap0681aec4-62: entered promiscuous mode
Jan 31 08:03:09 compute-0 NetworkManager[49108]: <info>  [1769846589.6519] manager: (tap0681aec4-62): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Jan 31 08:03:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:09.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:09 compute-0 ovn_controller[149457]: 2026-01-31T08:03:09Z|00384|binding|INFO|Claiming lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for this chassis.
Jan 31 08:03:09 compute-0 ovn_controller[149457]: 2026-01-31T08:03:09Z|00385|binding|INFO|0681aec4-62fb-4ff7-9e0f-3038c32e48a2: Claiming fa:16:3e:7c:f2:ba 10.100.0.13
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.692 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:09 compute-0 systemd-udevd[311424]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:03:09 compute-0 ovn_controller[149457]: 2026-01-31T08:03:09Z|00386|binding|INFO|Setting lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 ovn-installed in OVS
Jan 31 08:03:09 compute-0 ovn_controller[149457]: 2026-01-31T08:03:09Z|00387|binding|INFO|Setting lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 up in Southbound
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.700 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.702 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:f2:ba 10.100.0.13'], port_security=['fa:16:3e:7c:f2:ba 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dd1cfd57-90d1-4f96-b16e-96d64815af69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=0681aec4-62fb-4ff7-9e0f-3038c32e48a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.705 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis
Jan 31 08:03:09 compute-0 NetworkManager[49108]: <info>  [1769846589.7106] device (tap0681aec4-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.710 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:03:09 compute-0 NetworkManager[49108]: <info>  [1769846589.7113] device (tap0681aec4-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:03:09 compute-0 systemd-machined[214448]: New machine qemu-39-instance-0000005f.
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.719 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0f392b-34ae-40f4-b5bf-07ffa3d2a99c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.720 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.721 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.722 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d7884208-98b2-41ec-b9f2-306466399543]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-0000005f.
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.722 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d8399080-dc29-4d7d-8e98-33c312882c02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.733 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[2face7a0-8487-44b1-b919-917d7b2b595d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.743 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1f718b07-dfa9-4f47-9902-8ae0044a199e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.767 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[50f8f4d9-1540-480a-9591-07da5289b7b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.772 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8f280c79-164a-4a95-9444-250466ed2bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 NetworkManager[49108]: <info>  [1769846589.7730] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/186)
Jan 31 08:03:09 compute-0 systemd-udevd[311429]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.794 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[975a4d6d-890d-4f41-964b-f5e6ee67efbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.797 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f88aef83-c089-4eaf-814e-2aa30b85e4c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 NetworkManager[49108]: <info>  [1769846589.8175] device (tap121329c8-20): carrier: link connected
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.823 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d4caf6-27e0-446b-9f93-769f6974aad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.842 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[679cf6bc-d758-49f7-993f-b44b3c276dbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683243, 'reachable_time': 29991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311460, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.858 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[09c83989-0619-4811-9fbe-4f92f0de6996]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683243, 'tstamp': 683243}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311461, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.878 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[582c5576-ffc4-48c5-8945-a243d4d52c65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683243, 'reachable_time': 29991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311462, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.905 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d780a1a8-e0c1-41d6-872d-ab4de8bb0515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.962 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2f398b33-94f7-41a9-9c5e-b9aa0bbe63f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.964 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.965 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.965 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.967 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:09 compute-0 NetworkManager[49108]: <info>  [1769846589.9680] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 31 08:03:09 compute-0 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.970 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.971 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.971 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:09 compute-0 ovn_controller[149457]: 2026-01-31T08:03:09Z|00388|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 08:03:09 compute-0 nova_compute[247704]: 2026-01-31 08:03:09.977 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.978 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.979 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[46be5fd8-18ae-44d0-934a-67b0c163b9b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.980 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:03:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:09.980 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]: {
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]:         "osd_id": 0,
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]:         "type": "bluestore"
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]:     }
Jan 31 08:03:10 compute-0 xenodochial_shamir[311368]: }
Jan 31 08:03:10 compute-0 systemd[1]: libpod-67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23.scope: Deactivated successfully.
Jan 31 08:03:10 compute-0 conmon[311368]: conmon 67345134f9bd70afd7c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23.scope/container/memory.events
Jan 31 08:03:10 compute-0 podman[311352]: 2026-01-31 08:03:10.08262628 +0000 UTC m=+1.037010279 container died 67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e380438c26973bd18c29657860f9ad02217689e5061aebd9c51d892f02e1e44-merged.mount: Deactivated successfully.
Jan 31 08:03:10 compute-0 podman[311352]: 2026-01-31 08:03:10.228325252 +0000 UTC m=+1.182709251 container remove 67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shamir, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:03:10 compute-0 systemd[1]: libpod-conmon-67345134f9bd70afd7c4de0615cae5ce3062291faadb97e5142a40f56ffa1a23.scope: Deactivated successfully.
Jan 31 08:03:10 compute-0 sudo[311227]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:03:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:03:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 99faf1ad-dda6-4315-a266-5f535b96c840 does not exist
Jan 31 08:03:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 395cfec7-5913-4ac7-a89f-c270a80ed929 does not exist
Jan 31 08:03:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6a0aa533-6442-45d6-b7b7-b042b96d3616 does not exist
Jan 31 08:03:10 compute-0 podman[311563]: 2026-01-31 08:03:10.321643685 +0000 UTC m=+0.049521773 container create 18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:03:10 compute-0 nova_compute[247704]: 2026-01-31 08:03:10.336 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846590.3361588, dd1cfd57-90d1-4f96-b16e-96d64815af69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:10 compute-0 nova_compute[247704]: 2026-01-31 08:03:10.337 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] VM Started (Lifecycle Event)
Jan 31 08:03:10 compute-0 systemd[1]: Started libpod-conmon-18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110.scope.
Jan 31 08:03:10 compute-0 sudo[311578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:10 compute-0 sudo[311578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:10 compute-0 sudo[311578]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:10 compute-0 nova_compute[247704]: 2026-01-31 08:03:10.384 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:03:10 compute-0 podman[311563]: 2026-01-31 08:03:10.292253566 +0000 UTC m=+0.020131674 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:03:10 compute-0 nova_compute[247704]: 2026-01-31 08:03:10.390 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846590.3363194, dd1cfd57-90d1-4f96-b16e-96d64815af69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:10 compute-0 nova_compute[247704]: 2026-01-31 08:03:10.391 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] VM Paused (Lifecycle Event)
Jan 31 08:03:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65913f9d83ce6e0d0caa2c6590da6fc5e7d43b3f74c224b181446f35179fb51/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:03:10 compute-0 podman[311563]: 2026-01-31 08:03:10.419020275 +0000 UTC m=+0.146898383 container init 18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 08:03:10 compute-0 sudo[311608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:03:10 compute-0 sudo[311608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:10 compute-0 sudo[311608]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:10 compute-0 nova_compute[247704]: 2026-01-31 08:03:10.425 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:10 compute-0 podman[311563]: 2026-01-31 08:03:10.427513013 +0000 UTC m=+0.155391111 container start 18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:03:10 compute-0 nova_compute[247704]: 2026-01-31 08:03:10.428 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:03:10 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[311604]: [NOTICE]   (311634) : New worker (311636) forked
Jan 31 08:03:10 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[311604]: [NOTICE]   (311634) : Loading success.
Jan 31 08:03:10 compute-0 nova_compute[247704]: 2026-01-31 08:03:10.490 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:03:10 compute-0 ceph-mon[74496]: pgmap v1980: 305 pgs: 305 active+clean; 277 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Jan 31 08:03:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:03:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1024500865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:11.175 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:11.175 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:11.176 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 260 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Jan 31 08:03:11 compute-0 nova_compute[247704]: 2026-01-31 08:03:11.503 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:11.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:11.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.580 247708 DEBUG nova.compute.manager [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.580 247708 DEBUG oslo_concurrency.lockutils [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.581 247708 DEBUG oslo_concurrency.lockutils [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.581 247708 DEBUG oslo_concurrency.lockutils [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.581 247708 DEBUG nova.compute.manager [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Processing event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.582 247708 DEBUG nova.compute.manager [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.582 247708 DEBUG oslo_concurrency.lockutils [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.583 247708 DEBUG oslo_concurrency.lockutils [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.583 247708 DEBUG oslo_concurrency.lockutils [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.584 247708 DEBUG nova.compute.manager [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.584 247708 WARNING nova.compute.manager [req-8523b9a8-a30c-4b52-a745-73c12d974782 req-2db1b6e8-fdd3-42b3-bcab-c1f72b5c7b57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state building and task_state spawning.
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.585 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:03:12 compute-0 ceph-mon[74496]: pgmap v1981: 305 pgs: 305 active+clean; 260 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.589 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846592.5891569, dd1cfd57-90d1-4f96-b16e-96d64815af69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.591 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] VM Resumed (Lifecycle Event)
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.593 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.596 247708 INFO nova.virt.libvirt.driver [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance spawned successfully.
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.597 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.618 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.626 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.630 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.631 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.631 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.632 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.632 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.633 247708 DEBUG nova.virt.libvirt.driver [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.648 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.716 247708 INFO nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Took 9.44 seconds to spawn the instance on the hypervisor.
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.717 247708 DEBUG nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.802 247708 INFO nova.compute.manager [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Took 10.63 seconds to build instance.
Jan 31 08:03:12 compute-0 nova_compute[247704]: 2026-01-31 08:03:12.830 247708 DEBUG oslo_concurrency.lockutils [None req-fc4b66ef-ed85-4c5b-a776-6bde6218f592 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:13 compute-0 nova_compute[247704]: 2026-01-31 08:03:13.391 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Jan 31 08:03:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:13.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:13.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:14 compute-0 nova_compute[247704]: 2026-01-31 08:03:14.118 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:14 compute-0 ceph-mon[74496]: pgmap v1982: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Jan 31 08:03:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 31 08:03:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:15.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:15.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:15 compute-0 ceph-mon[74496]: pgmap v1983: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 31 08:03:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 161 op/s
Jan 31 08:03:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:17.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:17 compute-0 nova_compute[247704]: 2026-01-31 08:03:17.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:17.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:18 compute-0 nova_compute[247704]: 2026-01-31 08:03:18.395 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:18 compute-0 ceph-mon[74496]: pgmap v1984: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 161 op/s
Jan 31 08:03:19 compute-0 nova_compute[247704]: 2026-01-31 08:03:19.121 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 511 KiB/s wr, 148 op/s
Jan 31 08:03:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/449284864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:19 compute-0 nova_compute[247704]: 2026-01-31 08:03:19.512 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:19.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:19.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:03:20
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'backups', 'default.rgw.log']
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:03:20 compute-0 ceph-mon[74496]: pgmap v1985: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 511 KiB/s wr, 148 op/s
Jan 31 08:03:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1309241626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:03:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:03:20 compute-0 nova_compute[247704]: 2026-01-31 08:03:20.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:20 compute-0 nova_compute[247704]: 2026-01-31 08:03:20.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:20 compute-0 sudo[311650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:20 compute-0 sudo[311650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:20 compute-0 sudo[311650]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:20 compute-0 sudo[311675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:20 compute-0 sudo[311675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:20 compute-0 sudo[311675]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:21 compute-0 podman[311699]: 2026-01-31 08:03:21.024186637 +0000 UTC m=+0.091541930 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.035 247708 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.036 247708 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.037 247708 DEBUG nova.network.neutron [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:03:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 33 KiB/s wr, 134 op/s
Jan 31 08:03:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/689929559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:21.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:03:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:21.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.979 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.979 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.979 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:03:21 compute-0 nova_compute[247704]: 2026-01-31 08:03:21.980 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 32e56536-3edb-494c-9e8b-87cfa8396dac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:22 compute-0 ceph-mon[74496]: pgmap v1986: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 33 KiB/s wr, 134 op/s
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.166 247708 DEBUG nova.network.neutron [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.201 247708 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.398 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 116 op/s
Jan 31 08:03:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:23.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.579 247708 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.579 247708 DEBUG nova.virt.libvirt.volume.remotefs [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Creating file /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/97a04c4306c04c3398435926b4cddc02.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.580 247708 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/97a04c4306c04c3398435926b4cddc02.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:23.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.951 247708 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/97a04c4306c04c3398435926b4cddc02.tmp" returned: 1 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.952 247708 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69/97a04c4306c04c3398435926b4cddc02.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.953 247708 DEBUG nova.virt.libvirt.volume.remotefs [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Creating directory /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 08:03:23 compute-0 nova_compute[247704]: 2026-01-31 08:03:23.954 247708 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.124 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.160 247708 DEBUG oslo_concurrency.processutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/dd1cfd57-90d1-4f96-b16e-96d64815af69" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.165 247708 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.252 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.286 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.287 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.288 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.288 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.288 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.456 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.457 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.458 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.458 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.459 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:24 compute-0 ceph-mon[74496]: pgmap v1987: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 116 op/s
Jan 31 08:03:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:03:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142760437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.884 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.964 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.964 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.968 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:03:24 compute-0 nova_compute[247704]: 2026-01-31 08:03:24.968 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.123 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.125 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4143MB free_disk=20.876117706298828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.125 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.125 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.327 247708 INFO nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating resource usage from migration 47022a27-72dc-45c4-962a-90a5e5996bee
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.376 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 32e56536-3edb-494c-9e8b-87cfa8396dac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.376 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Migration 47022a27-72dc-45c4-962a-90a5e5996bee is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.377 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.377 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:03:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 16 KiB/s wr, 107 op/s
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.484 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:25.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:25.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1142760437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2436905090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:03:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2769752874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.949 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.958 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.973 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.994 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:03:25 compute-0 nova_compute[247704]: 2026-01-31 08:03:25.995 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:26 compute-0 nova_compute[247704]: 2026-01-31 08:03:26.269 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:26 compute-0 nova_compute[247704]: 2026-01-31 08:03:26.269 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:26 compute-0 nova_compute[247704]: 2026-01-31 08:03:26.270 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:03:26 compute-0 ovn_controller[149457]: 2026-01-31T08:03:26Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:f2:ba 10.100.0.13
Jan 31 08:03:26 compute-0 ovn_controller[149457]: 2026-01-31T08:03:26Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:f2:ba 10.100.0.13
Jan 31 08:03:26 compute-0 ceph-mon[74496]: pgmap v1988: 305 pgs: 305 active+clean; 249 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 16 KiB/s wr, 107 op/s
Jan 31 08:03:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2769752874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/521678790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 254 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 464 KiB/s wr, 93 op/s
Jan 31 08:03:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:27.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/125820870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:28 compute-0 nova_compute[247704]: 2026-01-31 08:03:28.401 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:28 compute-0 ceph-mon[74496]: pgmap v1989: 305 pgs: 305 active+clean; 254 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 464 KiB/s wr, 93 op/s
Jan 31 08:03:29 compute-0 nova_compute[247704]: 2026-01-31 08:03:29.126 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 315 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.5 MiB/s wr, 127 op/s
Jan 31 08:03:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:30 compute-0 ceph-mon[74496]: pgmap v1990: 305 pgs: 305 active+clean; 315 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.5 MiB/s wr, 127 op/s
Jan 31 08:03:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 328 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 31 08:03:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:31.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:31.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:32 compute-0 ceph-mon[74496]: pgmap v1991: 305 pgs: 305 active+clean; 328 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 31 08:03:33 compute-0 nova_compute[247704]: 2026-01-31 08:03:33.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 343 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 112 op/s
Jan 31 08:03:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:33.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:33.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/787839996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:34 compute-0 nova_compute[247704]: 2026-01-31 08:03:34.128 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:34 compute-0 nova_compute[247704]: 2026-01-31 08:03:34.212 247708 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:03:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:34 compute-0 ceph-mon[74496]: pgmap v1992: 305 pgs: 305 active+clean; 343 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 112 op/s
Jan 31 08:03:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4228796052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0075037005048390094 of space, bias 1.0, pg target 2.251110151451703 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0002982577819653825 of space, bias 1.0, pg target 0.08888081902568398 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:03:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 367 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.6 MiB/s wr, 120 op/s
Jan 31 08:03:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:35.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:35.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:36 compute-0 ceph-mon[74496]: pgmap v1993: 305 pgs: 305 active+clean; 367 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.6 MiB/s wr, 120 op/s
Jan 31 08:03:36 compute-0 podman[311775]: 2026-01-31 08:03:36.924595619 +0000 UTC m=+0.104269071 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 08:03:36 compute-0 kernel: tap0681aec4-62 (unregistering): left promiscuous mode
Jan 31 08:03:36 compute-0 NetworkManager[49108]: <info>  [1769846616.9705] device (tap0681aec4-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:03:36 compute-0 nova_compute[247704]: 2026-01-31 08:03:36.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:36 compute-0 ovn_controller[149457]: 2026-01-31T08:03:36Z|00389|binding|INFO|Releasing lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 from this chassis (sb_readonly=0)
Jan 31 08:03:36 compute-0 ovn_controller[149457]: 2026-01-31T08:03:36Z|00390|binding|INFO|Setting lport 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 down in Southbound
Jan 31 08:03:36 compute-0 ovn_controller[149457]: 2026-01-31T08:03:36Z|00391|binding|INFO|Removing iface tap0681aec4-62 ovn-installed in OVS
Jan 31 08:03:36 compute-0 nova_compute[247704]: 2026-01-31 08:03:36.984 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:36 compute-0 nova_compute[247704]: 2026-01-31 08:03:36.993 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.002 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:f2:ba 10.100.0.13'], port_security=['fa:16:3e:7c:f2:ba 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'dd1cfd57-90d1-4f96-b16e-96d64815af69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=0681aec4-62fb-4ff7-9e0f-3038c32e48a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.004 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.007 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.008 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[14c0ff08-9b4e-4c18-ab93-178e7e0c68a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.009 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore
Jan 31 08:03:37 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 31 08:03:37 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005f.scope: Consumed 13.966s CPU time.
Jan 31 08:03:37 compute-0 systemd-machined[214448]: Machine qemu-39-instance-0000005f terminated.
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.204 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.209 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.224 247708 INFO nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance shutdown successfully after 13 seconds.
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.232 247708 INFO nova.virt.libvirt.driver [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Instance destroyed successfully.
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.234 247708 DEBUG nova.virt.libvirt.vif [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1636985884',display_name='tempest-ServerDiskConfigTestJSON-server-1636985884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1636985884',id=95,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-chqeaza3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:03:19Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=dd1cfd57-90d1-4f96-b16e-96d64815af69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:7c:f2:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.234 247708 DEBUG nova.network.os_vif_util [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "vif_mac": "fa:16:3e:7c:f2:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.235 247708 DEBUG nova.network.os_vif_util [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.235 247708 DEBUG os_vif [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.237 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.238 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0681aec4-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.239 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.241 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.242 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.244 247708 INFO os_vif [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62')
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.248 247708 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.249 247708 DEBUG nova.virt.libvirt.driver [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:03:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 134 op/s
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.420 247708 DEBUG nova.compute.manager [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.421 247708 DEBUG oslo_concurrency.lockutils [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.422 247708 DEBUG oslo_concurrency.lockutils [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.422 247708 DEBUG oslo_concurrency.lockutils [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.423 247708 DEBUG nova.compute.manager [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.423 247708 WARNING nova.compute.manager [req-f54885ab-9648-4466-b6ff-a055bd9c9563 req-ee00b238-5252-41d1-b9e6-d7123c3e92a4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-unplugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state active and task_state resize_migrating.
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.579 247708 DEBUG neutronclient.v2_0.client [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:03:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:37.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:37 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[311604]: [NOTICE]   (311634) : haproxy version is 2.8.14-c23fe91
Jan 31 08:03:37 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[311604]: [NOTICE]   (311634) : path to executable is /usr/sbin/haproxy
Jan 31 08:03:37 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[311604]: [WARNING]  (311634) : Exiting Master process...
Jan 31 08:03:37 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[311604]: [WARNING]  (311634) : Exiting Master process...
Jan 31 08:03:37 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[311604]: [ALERT]    (311634) : Current worker (311636) exited with code 143 (Terminated)
Jan 31 08:03:37 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[311604]: [WARNING]  (311634) : All workers exited. Exiting... (0)
Jan 31 08:03:37 compute-0 systemd[1]: libpod-18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110.scope: Deactivated successfully.
Jan 31 08:03:37 compute-0 podman[311827]: 2026-01-31 08:03:37.608651005 +0000 UTC m=+0.486349953 container died 18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110-userdata-shm.mount: Deactivated successfully.
Jan 31 08:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65913f9d83ce6e0d0caa2c6590da6fc5e7d43b3f74c224b181446f35179fb51-merged.mount: Deactivated successfully.
Jan 31 08:03:37 compute-0 podman[311827]: 2026-01-31 08:03:37.656328401 +0000 UTC m=+0.534027309 container cleanup 18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 08:03:37 compute-0 systemd[1]: libpod-conmon-18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110.scope: Deactivated successfully.
Jan 31 08:03:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:37.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.707 247708 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.708 247708 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.708 247708 DEBUG oslo_concurrency.lockutils [None req-3af99eeb-d30b-49ab-9f45-01e97f5e2499 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:37 compute-0 podman[311869]: 2026-01-31 08:03:37.734432491 +0000 UTC m=+0.052123765 container remove 18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.739 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c81b9c62-1b03-4b41-96da-4aab980b2429]: (4, ('Sat Jan 31 08:03:37 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110)\n18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110\nSat Jan 31 08:03:37 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110)\n18a65506d7cbe28ad72f2ec184c00cd10719b16cd5b7008a0dfe8a6f10567110\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.741 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[80f9b96a-70f8-4f61-8f6f-f5133322eddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.743 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 kernel: tap121329c8-20: left promiscuous mode
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.752 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1a8c4a-4543-49e2-8691-5287c3060aff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:37 compute-0 nova_compute[247704]: 2026-01-31 08:03:37.754 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.769 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[79de95ee-b8f8-48b0-b2c5-c54eadd78391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:37 compute-0 ceph-mon[74496]: pgmap v1994: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 134 op/s
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.772 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[46697c6c-ee3b-46c7-8210-3f30aef12065]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.786 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4228f025-7ed5-4aad-8af9-0329a561090f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683238, 'reachable_time': 44076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311883, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.791 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:03:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:37.791 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[a16b61b4-a12c-43b9-af8e-16143bec4abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:03:39 compute-0 nova_compute[247704]: 2026-01-31 08:03:39.129 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 160 op/s
Jan 31 08:03:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:03:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:39.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:03:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:39.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:39 compute-0 nova_compute[247704]: 2026-01-31 08:03:39.742 247708 DEBUG nova.compute.manager [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:39 compute-0 nova_compute[247704]: 2026-01-31 08:03:39.742 247708 DEBUG oslo_concurrency.lockutils [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:39 compute-0 nova_compute[247704]: 2026-01-31 08:03:39.742 247708 DEBUG oslo_concurrency.lockutils [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:39 compute-0 nova_compute[247704]: 2026-01-31 08:03:39.743 247708 DEBUG oslo_concurrency.lockutils [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:39 compute-0 nova_compute[247704]: 2026-01-31 08:03:39.743 247708 DEBUG nova.compute.manager [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:39 compute-0 nova_compute[247704]: 2026-01-31 08:03:39.743 247708 WARNING nova.compute.manager [req-9ab6613a-ef89-4d8a-9723-948c2f6c1dec req-f0b65800-25f1-4e88-aa9b-d39f8eef7dca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:03:40 compute-0 ceph-mon[74496]: pgmap v1995: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 160 op/s
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.497352) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620497436, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 252, "total_data_size": 3755506, "memory_usage": 3817168, "flush_reason": "Manual Compaction"}
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620540892, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3688012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41694, "largest_seqno": 43841, "table_properties": {"data_size": 3678370, "index_size": 6072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20449, "raw_average_key_size": 20, "raw_value_size": 3658904, "raw_average_value_size": 3684, "num_data_blocks": 264, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846410, "oldest_key_time": 1769846410, "file_creation_time": 1769846620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 43613 microseconds, and 8881 cpu microseconds.
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.540971) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3688012 bytes OK
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.541002) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.543770) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.543795) EVENT_LOG_v1 {"time_micros": 1769846620543787, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.543821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3746775, prev total WAL file size 3746775, number of live WAL files 2.
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.545042) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3601KB)], [92(9491KB)]
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620545122, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 13407700, "oldest_snapshot_seqno": -1}
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7179 keys, 11455151 bytes, temperature: kUnknown
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620749149, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11455151, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11406762, "index_size": 29333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 185081, "raw_average_key_size": 25, "raw_value_size": 11278039, "raw_average_value_size": 1570, "num_data_blocks": 1164, "num_entries": 7179, "num_filter_entries": 7179, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769846620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.749461) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11455151 bytes
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.751267) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.7 rd, 56.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.3 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7702, records dropped: 523 output_compression: NoCompression
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.751311) EVENT_LOG_v1 {"time_micros": 1769846620751288, "job": 54, "event": "compaction_finished", "compaction_time_micros": 204110, "compaction_time_cpu_micros": 35029, "output_level": 6, "num_output_files": 1, "total_output_size": 11455151, "num_input_records": 7702, "num_output_records": 7179, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620752313, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846620754057, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.544913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.754203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.754224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.754227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.754229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:40 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:03:40.754231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:03:41 compute-0 sudo[311885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:41 compute-0 sudo[311885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:41 compute-0 sudo[311885]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:41 compute-0 sudo[311910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:03:41 compute-0 sudo[311910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:03:41 compute-0 sudo[311910]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 104 op/s
Jan 31 08:03:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:41.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:03:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:03:41 compute-0 nova_compute[247704]: 2026-01-31 08:03:41.994 247708 DEBUG nova.compute.manager [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-changed-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:41 compute-0 nova_compute[247704]: 2026-01-31 08:03:41.994 247708 DEBUG nova.compute.manager [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Refreshing instance network info cache due to event network-changed-0681aec4-62fb-4ff7-9e0f-3038c32e48a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:03:41 compute-0 nova_compute[247704]: 2026-01-31 08:03:41.995 247708 DEBUG oslo_concurrency.lockutils [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:03:41 compute-0 nova_compute[247704]: 2026-01-31 08:03:41.995 247708 DEBUG oslo_concurrency.lockutils [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:03:41 compute-0 nova_compute[247704]: 2026-01-31 08:03:41.996 247708 DEBUG nova.network.neutron [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Refreshing network info cache for port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:03:42 compute-0 nova_compute[247704]: 2026-01-31 08:03:42.240 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:42 compute-0 ceph-mon[74496]: pgmap v1996: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 104 op/s
Jan 31 08:03:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Jan 31 08:03:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 31 08:03:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 31 08:03:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 31 08:03:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4063449277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:43.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:03:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:43.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:03:44 compute-0 nova_compute[247704]: 2026-01-31 08:03:44.132 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:44 compute-0 nova_compute[247704]: 2026-01-31 08:03:44.215 247708 DEBUG nova.network.neutron [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updated VIF entry in instance network info cache for port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:03:44 compute-0 nova_compute[247704]: 2026-01-31 08:03:44.215 247708 DEBUG nova.network.neutron [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:03:44 compute-0 nova_compute[247704]: 2026-01-31 08:03:44.236 247708 DEBUG oslo_concurrency.lockutils [req-637b3e0a-c9de-4044-b07b-f76c27210071 req-c8dbea9f-42d3-4241-b95b-c2902fb9a9bd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:03:44 compute-0 ceph-mon[74496]: pgmap v1997: 305 pgs: 305 active+clean; 374 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Jan 31 08:03:44 compute-0 ceph-mon[74496]: osdmap e254: 3 total, 3 up, 3 in
Jan 31 08:03:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2913285531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 399 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 08:03:45 compute-0 nova_compute[247704]: 2026-01-31 08:03:45.505 247708 DEBUG nova.compute.manager [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:45 compute-0 nova_compute[247704]: 2026-01-31 08:03:45.506 247708 DEBUG oslo_concurrency.lockutils [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:45 compute-0 nova_compute[247704]: 2026-01-31 08:03:45.506 247708 DEBUG oslo_concurrency.lockutils [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:45 compute-0 nova_compute[247704]: 2026-01-31 08:03:45.506 247708 DEBUG oslo_concurrency.lockutils [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:45 compute-0 nova_compute[247704]: 2026-01-31 08:03:45.507 247708 DEBUG nova.compute.manager [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:45 compute-0 nova_compute[247704]: 2026-01-31 08:03:45.507 247708 WARNING nova.compute.manager [req-d5b213c0-4955-477e-8c75-b0070ad32672 req-8af824d5-0ff0-47c5-8009-ffe3a35e7708 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state active and task_state resize_finish.
Jan 31 08:03:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:03:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:45.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:03:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:45.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1226616456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1766540124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:03:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1766540124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:03:46 compute-0 ceph-mon[74496]: pgmap v1999: 305 pgs: 305 active+clean; 399 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Jan 31 08:03:47 compute-0 nova_compute[247704]: 2026-01-31 08:03:47.241 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 406 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 138 op/s
Jan 31 08:03:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:47.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:47 compute-0 nova_compute[247704]: 2026-01-31 08:03:47.625 247708 DEBUG nova.compute.manager [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:03:47 compute-0 nova_compute[247704]: 2026-01-31 08:03:47.625 247708 DEBUG oslo_concurrency.lockutils [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:47 compute-0 nova_compute[247704]: 2026-01-31 08:03:47.625 247708 DEBUG oslo_concurrency.lockutils [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:47 compute-0 nova_compute[247704]: 2026-01-31 08:03:47.626 247708 DEBUG oslo_concurrency.lockutils [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:47 compute-0 nova_compute[247704]: 2026-01-31 08:03:47.626 247708 DEBUG nova.compute.manager [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] No waiting events found dispatching network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:03:47 compute-0 nova_compute[247704]: 2026-01-31 08:03:47.626 247708 WARNING nova.compute.manager [req-c700cf15-f4f3-45b6-ae6d-04824f5d3d08 req-c3551996-15c0-437e-8f82-b06419d7cd92 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Received unexpected event network-vif-plugged-0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for instance with vm_state resized and task_state None.
Jan 31 08:03:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:47.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:48 compute-0 ceph-mon[74496]: pgmap v2000: 305 pgs: 305 active+clean; 406 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 138 op/s
Jan 31 08:03:49 compute-0 nova_compute[247704]: 2026-01-31 08:03:49.134 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 426 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.0 MiB/s wr, 153 op/s
Jan 31 08:03:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:49.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:03:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:49.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:03:50 compute-0 ceph-mon[74496]: pgmap v2001: 305 pgs: 305 active+clean; 426 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.0 MiB/s wr, 153 op/s
Jan 31 08:03:50 compute-0 nova_compute[247704]: 2026-01-31 08:03:50.593 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:50 compute-0 nova_compute[247704]: 2026-01-31 08:03:50.593 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:50 compute-0 nova_compute[247704]: 2026-01-31 08:03:50.593 247708 DEBUG nova.compute.manager [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Going to confirm migration 13 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 31 08:03:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 433 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 31 08:03:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1927397384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3553796967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:03:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:51.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:03:51 compute-0 nova_compute[247704]: 2026-01-31 08:03:51.679 247708 DEBUG neutronclient.v2_0.client [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 0681aec4-62fb-4ff7-9e0f-3038c32e48a2 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:03:51 compute-0 nova_compute[247704]: 2026-01-31 08:03:51.680 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:03:51 compute-0 nova_compute[247704]: 2026-01-31 08:03:51.681 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:03:51 compute-0 nova_compute[247704]: 2026-01-31 08:03:51.681 247708 DEBUG nova.network.neutron [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:03:51 compute-0 nova_compute[247704]: 2026-01-31 08:03:51.682 247708 DEBUG nova.objects.instance [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'info_cache' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:51.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:51 compute-0 podman[311941]: 2026-01-31 08:03:51.922296898 +0000 UTC m=+0.083640455 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:03:52 compute-0 nova_compute[247704]: 2026-01-31 08:03:52.219 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846617.2177215, dd1cfd57-90d1-4f96-b16e-96d64815af69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:03:52 compute-0 nova_compute[247704]: 2026-01-31 08:03:52.219 247708 INFO nova.compute.manager [-] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] VM Stopped (Lifecycle Event)
Jan 31 08:03:52 compute-0 nova_compute[247704]: 2026-01-31 08:03:52.241 247708 DEBUG nova.compute.manager [None req-93449d12-7252-4b12-9da2-363a4ccb79ea - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:03:52 compute-0 nova_compute[247704]: 2026-01-31 08:03:52.243 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:52 compute-0 nova_compute[247704]: 2026-01-31 08:03:52.247 247708 DEBUG nova.compute.manager [None req-93449d12-7252-4b12-9da2-363a4ccb79ea - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:03:52 compute-0 nova_compute[247704]: 2026-01-31 08:03:52.269 247708 INFO nova.compute.manager [None req-93449d12-7252-4b12-9da2-363a4ccb79ea - - - - - -] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 31 08:03:52 compute-0 ceph-mon[74496]: pgmap v2002: 305 pgs: 305 active+clean; 433 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 183 op/s
Jan 31 08:03:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 439 MiB data, 958 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Jan 31 08:03:53 compute-0 nova_compute[247704]: 2026-01-31 08:03:53.433 247708 DEBUG nova.network.neutron [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: dd1cfd57-90d1-4f96-b16e-96d64815af69] Updating instance_info_cache with network_info: [{"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:03:53 compute-0 nova_compute[247704]: 2026-01-31 08:03:53.475 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-dd1cfd57-90d1-4f96-b16e-96d64815af69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:03:53 compute-0 nova_compute[247704]: 2026-01-31 08:03:53.476 247708 DEBUG nova.objects.instance [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid dd1cfd57-90d1-4f96-b16e-96d64815af69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:03:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:53.525 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:03:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:03:53.526 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:03:53 compute-0 nova_compute[247704]: 2026-01-31 08:03:53.554 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:53 compute-0 nova_compute[247704]: 2026-01-31 08:03:53.602 247708 DEBUG nova.storage.rbd_utils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] removing snapshot(nova-resize) on rbd image(dd1cfd57-90d1-4f96-b16e-96d64815af69_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:03:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:53.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:03:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:53.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.135 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 31 08:03:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 31 08:03:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 31 08:03:54 compute-0 ceph-mon[74496]: pgmap v2003: 305 pgs: 305 active+clean; 439 MiB data, 958 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.584 247708 DEBUG nova.virt.libvirt.vif [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1636985884',display_name='tempest-ServerDiskConfigTestJSON-server-1636985884',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1636985884',id=95,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:03:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-chqeaza3',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:03:46Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=dd1cfd57-90d1-4f96-b16e-96d64815af69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.585 247708 DEBUG nova.network.os_vif_util [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "address": "fa:16:3e:7c:f2:ba", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0681aec4-62", "ovs_interfaceid": "0681aec4-62fb-4ff7-9e0f-3038c32e48a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.585 247708 DEBUG nova.network.os_vif_util [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.586 247708 DEBUG os_vif [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.588 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.588 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0681aec4-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.589 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.592 247708 INFO os_vif [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7c:f2:ba,bridge_name='br-int',has_traffic_filtering=True,id=0681aec4-62fb-4ff7-9e0f-3038c32e48a2,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0681aec4-62')
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.593 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.594 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:03:54 compute-0 nova_compute[247704]: 2026-01-31 08:03:54.685 247708 DEBUG oslo_concurrency.processutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:03:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:03:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818416923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:55 compute-0 nova_compute[247704]: 2026-01-31 08:03:55.144 247708 DEBUG oslo_concurrency.processutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:03:55 compute-0 nova_compute[247704]: 2026-01-31 08:03:55.153 247708 DEBUG nova.compute.provider_tree [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:03:55 compute-0 nova_compute[247704]: 2026-01-31 08:03:55.194 247708 DEBUG nova.scheduler.client.report [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:03:55 compute-0 nova_compute[247704]: 2026-01-31 08:03:55.248 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:55 compute-0 nova_compute[247704]: 2026-01-31 08:03:55.377 247708 INFO nova.scheduler.client.report [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Deleted allocation for migration 47022a27-72dc-45c4-962a-90a5e5996bee
Jan 31 08:03:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 451 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 248 op/s
Jan 31 08:03:55 compute-0 nova_compute[247704]: 2026-01-31 08:03:55.433 247708 DEBUG oslo_concurrency.lockutils [None req-641475d5-2b01-4863-9c24-89806ceaa5e5 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "dd1cfd57-90d1-4f96-b16e-96d64815af69" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:03:55 compute-0 ceph-mon[74496]: osdmap e255: 3 total, 3 up, 3 in
Jan 31 08:03:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2818416923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:03:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:55.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:55.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:56 compute-0 ceph-mon[74496]: pgmap v2005: 305 pgs: 305 active+clean; 451 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 248 op/s
Jan 31 08:03:57 compute-0 nova_compute[247704]: 2026-01-31 08:03:57.245 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 453 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 232 op/s
Jan 31 08:03:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:57.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:57.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:58 compute-0 ceph-mon[74496]: pgmap v2006: 305 pgs: 305 active+clean; 453 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 232 op/s
Jan 31 08:03:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1010431320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:03:59 compute-0 nova_compute[247704]: 2026-01-31 08:03:59.137 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:03:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 453 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 234 op/s
Jan 31 08:03:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:59.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:03:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 31 08:03:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:03:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:03:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:59.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:03:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 31 08:03:59 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 31 08:04:00 compute-0 ceph-mon[74496]: pgmap v2007: 305 pgs: 305 active+clean; 453 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.7 MiB/s wr, 234 op/s
Jan 31 08:04:00 compute-0 ceph-mon[74496]: osdmap e256: 3 total, 3 up, 3 in
Jan 31 08:04:01 compute-0 sudo[312022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:01 compute-0 sudo[312022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:01 compute-0 sudo[312022]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:01 compute-0 sudo[312047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:01 compute-0 sudo[312047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:01 compute-0 sudo[312047]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 433 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1008 KiB/s wr, 243 op/s
Jan 31 08:04:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:01.529 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:01.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:01.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:01 compute-0 ceph-mon[74496]: pgmap v2009: 305 pgs: 305 active+clean; 433 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1008 KiB/s wr, 243 op/s
Jan 31 08:04:02 compute-0 nova_compute[247704]: 2026-01-31 08:04:02.246 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:03 compute-0 nova_compute[247704]: 2026-01-31 08:04:03.204 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:03 compute-0 nova_compute[247704]: 2026-01-31 08:04:03.204 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 409 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 98 KiB/s wr, 165 op/s
Jan 31 08:04:03 compute-0 nova_compute[247704]: 2026-01-31 08:04:03.492 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:04:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:03.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:04 compute-0 nova_compute[247704]: 2026-01-31 08:04:04.121 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:04 compute-0 nova_compute[247704]: 2026-01-31 08:04:04.122 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:04 compute-0 nova_compute[247704]: 2026-01-31 08:04:04.132 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:04:04 compute-0 nova_compute[247704]: 2026-01-31 08:04:04.132 247708 INFO nova.compute.claims [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:04:04 compute-0 nova_compute[247704]: 2026-01-31 08:04:04.179 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:04 compute-0 ceph-mon[74496]: pgmap v2010: 305 pgs: 305 active+clean; 409 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 98 KiB/s wr, 165 op/s
Jan 31 08:04:04 compute-0 nova_compute[247704]: 2026-01-31 08:04:04.624 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218588558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:05 compute-0 nova_compute[247704]: 2026-01-31 08:04:05.086 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:05 compute-0 nova_compute[247704]: 2026-01-31 08:04:05.093 247708 DEBUG nova.compute.provider_tree [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:04:05 compute-0 nova_compute[247704]: 2026-01-31 08:04:05.251 247708 DEBUG nova.scheduler.client.report [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:04:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 374 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 154 op/s
Jan 31 08:04:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/218588558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:05.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:05 compute-0 nova_compute[247704]: 2026-01-31 08:04:05.628 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:05 compute-0 nova_compute[247704]: 2026-01-31 08:04:05.629 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:04:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:05.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:05 compute-0 nova_compute[247704]: 2026-01-31 08:04:05.857 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:04:05 compute-0 nova_compute[247704]: 2026-01-31 08:04:05.857 247708 DEBUG nova.network.neutron [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:04:05 compute-0 nova_compute[247704]: 2026-01-31 08:04:05.979 247708 INFO nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.073 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.458 247708 DEBUG nova.policy [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '63e95edea0164ae2a9820dc10467335d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.523 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.525 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.525 247708 INFO nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Creating image(s)
Jan 31 08:04:06 compute-0 ceph-mon[74496]: pgmap v2011: 305 pgs: 305 active+clean; 374 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 154 op/s
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.560 247708 DEBUG nova.storage.rbd_utils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.590 247708 DEBUG nova.storage.rbd_utils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.621 247708 DEBUG nova.storage.rbd_utils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.624 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.691 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.692 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.693 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.694 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.730 247708 DEBUG nova.storage.rbd_utils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:06 compute-0 nova_compute[247704]: 2026-01-31 08:04:06.736 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:07 compute-0 nova_compute[247704]: 2026-01-31 08:04:07.072 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:07 compute-0 nova_compute[247704]: 2026-01-31 08:04:07.161 247708 DEBUG nova.storage.rbd_utils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] resizing rbd image e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:04:07 compute-0 nova_compute[247704]: 2026-01-31 08:04:07.249 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 376 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 446 KiB/s wr, 142 op/s
Jan 31 08:04:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:07.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:07.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:07 compute-0 podman[312246]: 2026-01-31 08:04:07.897755878 +0000 UTC m=+0.068888835 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:04:07 compute-0 nova_compute[247704]: 2026-01-31 08:04:07.933 247708 DEBUG nova.objects.instance [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'migration_context' on Instance uuid e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:08 compute-0 nova_compute[247704]: 2026-01-31 08:04:08.026 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:04:08 compute-0 nova_compute[247704]: 2026-01-31 08:04:08.027 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Ensure instance console log exists: /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:04:08 compute-0 nova_compute[247704]: 2026-01-31 08:04:08.028 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:08 compute-0 nova_compute[247704]: 2026-01-31 08:04:08.028 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:08 compute-0 nova_compute[247704]: 2026-01-31 08:04:08.029 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:08 compute-0 nova_compute[247704]: 2026-01-31 08:04:08.218 247708 DEBUG nova.network.neutron [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Successfully created port: 4bbc984e-71f8-4e5f-b802-c135b54248f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:04:08 compute-0 ceph-mon[74496]: pgmap v2012: 305 pgs: 305 active+clean; 376 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 446 KiB/s wr, 142 op/s
Jan 31 08:04:09 compute-0 nova_compute[247704]: 2026-01-31 08:04:09.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 392 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 4.0 MiB/s wr, 181 op/s
Jan 31 08:04:09 compute-0 nova_compute[247704]: 2026-01-31 08:04:09.581 247708 DEBUG nova.network.neutron [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Successfully updated port: 4bbc984e-71f8-4e5f-b802-c135b54248f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:09.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:09 compute-0 nova_compute[247704]: 2026-01-31 08:04:09.680 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "refresh_cache-e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:09 compute-0 nova_compute[247704]: 2026-01-31 08:04:09.680 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquired lock "refresh_cache-e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:09 compute-0 nova_compute[247704]: 2026-01-31 08:04:09.680 247708 DEBUG nova.network.neutron [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:04:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:09.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:09 compute-0 nova_compute[247704]: 2026-01-31 08:04:09.785 247708 DEBUG nova.compute.manager [req-89696570-6dd1-4d9c-b13a-4c90c9e37721 req-fba7d1df-4103-48da-9086-8bb38363440d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received event network-changed-4bbc984e-71f8-4e5f-b802-c135b54248f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:09 compute-0 nova_compute[247704]: 2026-01-31 08:04:09.786 247708 DEBUG nova.compute.manager [req-89696570-6dd1-4d9c-b13a-4c90c9e37721 req-fba7d1df-4103-48da-9086-8bb38363440d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Refreshing instance network info cache due to event network-changed-4bbc984e-71f8-4e5f-b802-c135b54248f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:09 compute-0 nova_compute[247704]: 2026-01-31 08:04:09.786 247708 DEBUG oslo_concurrency.lockutils [req-89696570-6dd1-4d9c-b13a-4c90c9e37721 req-fba7d1df-4103-48da-9086-8bb38363440d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:10 compute-0 nova_compute[247704]: 2026-01-31 08:04:10.437 247708 DEBUG nova.network.neutron [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:04:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4123609605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:10 compute-0 sudo[312291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:10 compute-0 sudo[312291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:10 compute-0 sudo[312291]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-0 ceph-mon[74496]: pgmap v2013: 305 pgs: 305 active+clean; 392 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 4.0 MiB/s wr, 181 op/s
Jan 31 08:04:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4123609605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:10 compute-0 sudo[312316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:10 compute-0 sudo[312316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:10 compute-0 sudo[312316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-0 sudo[312341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:10 compute-0 sudo[312341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:10 compute-0 sudo[312341]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-0 sudo[312366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:04:10 compute-0 sudo[312366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:11.175 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:11.176 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:11.176 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 396 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 521 KiB/s rd, 4.6 MiB/s wr, 174 op/s
Jan 31 08:04:11 compute-0 podman[312461]: 2026-01-31 08:04:11.552041885 +0000 UTC m=+0.289135891 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:04:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:11.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:11 compute-0 podman[312461]: 2026-01-31 08:04:11.721923299 +0000 UTC m=+0.459017325 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:04:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3314209745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1075596497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.252 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:12 compute-0 podman[312618]: 2026-01-31 08:04:12.737744998 +0000 UTC m=+0.207581196 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:04:12 compute-0 podman[312618]: 2026-01-31 08:04:12.784695057 +0000 UTC m=+0.254531255 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:04:12 compute-0 ceph-mon[74496]: pgmap v2014: 305 pgs: 305 active+clean; 396 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 521 KiB/s rd, 4.6 MiB/s wr, 174 op/s
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.945 247708 DEBUG nova.network.neutron [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Updating instance_info_cache with network_info: [{"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.973 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Releasing lock "refresh_cache-e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.973 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Instance network_info: |[{"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.973 247708 DEBUG oslo_concurrency.lockutils [req-89696570-6dd1-4d9c-b13a-4c90c9e37721 req-fba7d1df-4103-48da-9086-8bb38363440d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.974 247708 DEBUG nova.network.neutron [req-89696570-6dd1-4d9c-b13a-4c90c9e37721 req-fba7d1df-4103-48da-9086-8bb38363440d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Refreshing network info cache for port 4bbc984e-71f8-4e5f-b802-c135b54248f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.976 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Start _get_guest_xml network_info=[{"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.980 247708 WARNING nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.985 247708 DEBUG nova.virt.libvirt.host [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.986 247708 DEBUG nova.virt.libvirt.host [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.989 247708 DEBUG nova.virt.libvirt.host [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.989 247708 DEBUG nova.virt.libvirt.host [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.990 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.991 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.991 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.991 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.991 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.992 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.992 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.992 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.992 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.992 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.993 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.993 247708 DEBUG nova.virt.hardware [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:04:12 compute-0 nova_compute[247704]: 2026-01-31 08:04:12.996 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:13 compute-0 podman[312682]: 2026-01-31 08:04:13.237414027 +0000 UTC m=+0.220366480 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, com.redhat.component=keepalived-container, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, name=keepalived, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc.)
Jan 31 08:04:13 compute-0 podman[312719]: 2026-01-31 08:04:13.363253074 +0000 UTC m=+0.107584902 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2, vcs-type=git)
Jan 31 08:04:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 361 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Jan 31 08:04:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:04:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1909634964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:13 compute-0 nova_compute[247704]: 2026-01-31 08:04:13.471 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:13 compute-0 podman[312682]: 2026-01-31 08:04:13.489743836 +0000 UTC m=+0.472696289 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-type=git, build-date=2023-02-22T09:23:20)
Jan 31 08:04:13 compute-0 nova_compute[247704]: 2026-01-31 08:04:13.506 247708 DEBUG nova.storage.rbd_utils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:13 compute-0 nova_compute[247704]: 2026-01-31 08:04:13.510 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:04:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:13.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:04:13 compute-0 sudo[312366]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:04:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:04:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:13 compute-0 sudo[312773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:13 compute-0 sudo[312773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:13 compute-0 sudo[312773]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:13 compute-0 sudo[312798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:13 compute-0 sudo[312798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:13 compute-0 sudo[312798]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:13 compute-0 ceph-mon[74496]: pgmap v2015: 305 pgs: 305 active+clean; 361 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 4.7 MiB/s wr, 156 op/s
Jan 31 08:04:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1909634964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:13 compute-0 sudo[312823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:13 compute-0 sudo[312823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:04:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3501060088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:13 compute-0 sudo[312823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:13 compute-0 nova_compute[247704]: 2026-01-31 08:04:13.992 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:13 compute-0 nova_compute[247704]: 2026-01-31 08:04:13.994 247708 DEBUG nova.virt.libvirt.vif [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:04:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569326212',display_name='tempest-ServerDiskConfigTestJSON-server-1569326212',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569326212',id=98,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fvmgp08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:06Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:04:13 compute-0 nova_compute[247704]: 2026-01-31 08:04:13.994 247708 DEBUG nova.network.os_vif_util [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:13 compute-0 nova_compute[247704]: 2026-01-31 08:04:13.995 247708 DEBUG nova.network.os_vif_util [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:c4:b0,bridge_name='br-int',has_traffic_filtering=True,id=4bbc984e-71f8-4e5f-b802-c135b54248f1,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bbc984e-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:13 compute-0 nova_compute[247704]: 2026-01-31 08:04:13.996 247708 DEBUG nova.objects.instance [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'pci_devices' on Instance uuid e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:14 compute-0 sudo[312850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:04:14 compute-0 sudo[312850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.033 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <uuid>e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0</uuid>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <name>instance-00000062</name>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1569326212</nova:name>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:04:12</nova:creationTime>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <nova:user uuid="63e95edea0164ae2a9820dc10467335d">tempest-ServerDiskConfigTestJSON-984925022-project-member</nova:user>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <nova:project uuid="be74d11d2f5a4d9aae2dbe32c31ad9c3">tempest-ServerDiskConfigTestJSON-984925022</nova:project>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <nova:port uuid="4bbc984e-71f8-4e5f-b802-c135b54248f1">
Jan 31 08:04:14 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <system>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <entry name="serial">e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0</entry>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <entry name="uuid">e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0</entry>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </system>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <os>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   </os>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <features>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   </features>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk">
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       </source>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk.config">
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       </source>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:04:14 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:39:c4:b0"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <target dev="tap4bbc984e-71"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0/console.log" append="off"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <video>
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </video>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:04:14 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:04:14 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:04:14 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:04:14 compute-0 nova_compute[247704]: </domain>
Jan 31 08:04:14 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.034 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Preparing to wait for external event network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.035 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.035 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.035 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.036 247708 DEBUG nova.virt.libvirt.vif [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:04:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569326212',display_name='tempest-ServerDiskConfigTestJSON-server-1569326212',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569326212',id=98,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fvmgp08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:06Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.036 247708 DEBUG nova.network.os_vif_util [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.036 247708 DEBUG nova.network.os_vif_util [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:c4:b0,bridge_name='br-int',has_traffic_filtering=True,id=4bbc984e-71f8-4e5f-b802-c135b54248f1,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bbc984e-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.037 247708 DEBUG os_vif [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:c4:b0,bridge_name='br-int',has_traffic_filtering=True,id=4bbc984e-71f8-4e5f-b802-c135b54248f1,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bbc984e-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.037 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.038 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.038 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.041 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.041 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bbc984e-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.041 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4bbc984e-71, col_values=(('external_ids', {'iface-id': '4bbc984e-71f8-4e5f-b802-c135b54248f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:c4:b0', 'vm-uuid': 'e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.043 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:14 compute-0 NetworkManager[49108]: <info>  [1769846654.0438] manager: (tap4bbc984e-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.046 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.052 247708 INFO os_vif [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:c4:b0,bridge_name='br-int',has_traffic_filtering=True,id=4bbc984e-71f8-4e5f-b802-c135b54248f1,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bbc984e-71')
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.161 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.161 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.161 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] No VIF found with MAC fa:16:3e:39:c4:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.162 247708 INFO nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Using config drive
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.193 247708 DEBUG nova.storage.rbd_utils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.199 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:14 compute-0 sudo[312850]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:04:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:04:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:04:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:04:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:04:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7aab13d4-e9ea-453b-81ae-d8ce156d0bf5 does not exist
Jan 31 08:04:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b479e984-57b2-4685-82ae-905fc75c4feb does not exist
Jan 31 08:04:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4b6c801d-2164-4e87-b7c2-2c6732801768 does not exist
Jan 31 08:04:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:04:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:04:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:04:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:04:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:04:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:04:14 compute-0 sudo[312927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:14 compute-0 sudo[312927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:14 compute-0 sudo[312927]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:14 compute-0 sudo[312952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:14 compute-0 sudo[312952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:14 compute-0 sudo[312952]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:14 compute-0 sudo[312977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:14 compute-0 sudo[312977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:14 compute-0 sudo[312977]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:14 compute-0 sudo[313002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:04:14 compute-0 sudo[313002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.844 247708 INFO nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Creating config drive at /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0/disk.config
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.851 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7xh3zd8f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:14 compute-0 nova_compute[247704]: 2026-01-31 08:04:14.976 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7xh3zd8f" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.012 247708 DEBUG nova.storage.rbd_utils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] rbd image e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.016 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0/disk.config e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:15 compute-0 podman[313099]: 2026-01-31 08:04:15.130377864 +0000 UTC m=+0.024007858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:04:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1873123469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3501060088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:04:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:04:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:04:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:04:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:04:15 compute-0 podman[313099]: 2026-01-31 08:04:15.430270397 +0000 UTC m=+0.323900371 container create f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:04:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 363 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 6.6 MiB/s wr, 196 op/s
Jan 31 08:04:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:15.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:15 compute-0 systemd[1]: Started libpod-conmon-f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a.scope.
Jan 31 08:04:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:15 compute-0 podman[313099]: 2026-01-31 08:04:15.707005314 +0000 UTC m=+0.600635308 container init f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaum, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:04:15 compute-0 podman[313099]: 2026-01-31 08:04:15.713912873 +0000 UTC m=+0.607542847 container start f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:04:15 compute-0 frosty_chaum[313126]: 167 167
Jan 31 08:04:15 compute-0 systemd[1]: libpod-f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a.scope: Deactivated successfully.
Jan 31 08:04:15 compute-0 conmon[313126]: conmon f2706e80261d565574b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a.scope/container/memory.events
Jan 31 08:04:15 compute-0 podman[313099]: 2026-01-31 08:04:15.721856227 +0000 UTC m=+0.615486211 container attach f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaum, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:04:15 compute-0 podman[313099]: 2026-01-31 08:04:15.722471123 +0000 UTC m=+0.616101107 container died f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:04:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.737 247708 DEBUG oslo_concurrency.processutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0/disk.config e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.741 247708 INFO nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Deleting local config drive /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0/disk.config because it was imported into RBD.
Jan 31 08:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7bbed2c12a717b4a5cc5a30f52e3d2afc7afb8434979b914a704081255a5a36-merged.mount: Deactivated successfully.
Jan 31 08:04:15 compute-0 podman[313099]: 2026-01-31 08:04:15.766250613 +0000 UTC m=+0.659880587 container remove f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:04:15 compute-0 systemd[1]: libpod-conmon-f2706e80261d565574b5621a12e5ad7123145d036778df6f0365acab4a60dd4a.scope: Deactivated successfully.
Jan 31 08:04:15 compute-0 kernel: tap4bbc984e-71: entered promiscuous mode
Jan 31 08:04:15 compute-0 NetworkManager[49108]: <info>  [1769846655.7882] manager: (tap4bbc984e-71): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Jan 31 08:04:15 compute-0 ovn_controller[149457]: 2026-01-31T08:04:15Z|00392|binding|INFO|Claiming lport 4bbc984e-71f8-4e5f-b802-c135b54248f1 for this chassis.
Jan 31 08:04:15 compute-0 ovn_controller[149457]: 2026-01-31T08:04:15Z|00393|binding|INFO|4bbc984e-71f8-4e5f-b802-c135b54248f1: Claiming fa:16:3e:39:c4:b0 10.100.0.5
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.789 247708 DEBUG nova.network.neutron [req-89696570-6dd1-4d9c-b13a-4c90c9e37721 req-fba7d1df-4103-48da-9086-8bb38363440d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Updated VIF entry in instance network info cache for port 4bbc984e-71f8-4e5f-b802-c135b54248f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.790 247708 DEBUG nova.network.neutron [req-89696570-6dd1-4d9c-b13a-4c90c9e37721 req-fba7d1df-4103-48da-9086-8bb38363440d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Updating instance_info_cache with network_info: [{"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.791 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:15 compute-0 ovn_controller[149457]: 2026-01-31T08:04:15Z|00394|binding|INFO|Setting lport 4bbc984e-71f8-4e5f-b802-c135b54248f1 ovn-installed in OVS
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.800 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:15 compute-0 nova_compute[247704]: 2026-01-31 08:04:15.802 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:15 compute-0 systemd-machined[214448]: New machine qemu-40-instance-00000062.
Jan 31 08:04:15 compute-0 systemd-udevd[313158]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:04:15 compute-0 NetworkManager[49108]: <info>  [1769846655.8292] device (tap4bbc984e-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:04:15 compute-0 NetworkManager[49108]: <info>  [1769846655.8299] device (tap4bbc984e-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:04:15 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000062.
Jan 31 08:04:15 compute-0 podman[313168]: 2026-01-31 08:04:15.91091977 +0000 UTC m=+0.039167999 container create 0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:04:15 compute-0 systemd[1]: Started libpod-conmon-0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1.scope.
Jan 31 08:04:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843ffa2fc981ee4909ed3099d359c4d627750da7eb1df2423c834575948e396b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843ffa2fc981ee4909ed3099d359c4d627750da7eb1df2423c834575948e396b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843ffa2fc981ee4909ed3099d359c4d627750da7eb1df2423c834575948e396b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843ffa2fc981ee4909ed3099d359c4d627750da7eb1df2423c834575948e396b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/843ffa2fc981ee4909ed3099d359c4d627750da7eb1df2423c834575948e396b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:15 compute-0 podman[313168]: 2026-01-31 08:04:15.893568886 +0000 UTC m=+0.021817145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:04:15 compute-0 podman[313168]: 2026-01-31 08:04:15.997093288 +0000 UTC m=+0.125341537 container init 0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:04:16 compute-0 podman[313168]: 2026-01-31 08:04:16.007269206 +0000 UTC m=+0.135517435 container start 0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:04:16 compute-0 podman[313168]: 2026-01-31 08:04:16.01189933 +0000 UTC m=+0.140147609 container attach 0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:04:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1632053411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:16 compute-0 ceph-mon[74496]: pgmap v2016: 305 pgs: 305 active+clean; 363 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 6.6 MiB/s wr, 196 op/s
Jan 31 08:04:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4027809735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1474416236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.352 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846656.3523633, e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.354 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] VM Started (Lifecycle Event)
Jan 31 08:04:16 compute-0 ovn_controller[149457]: 2026-01-31T08:04:16Z|00395|binding|INFO|Setting lport 4bbc984e-71f8-4e5f-b802-c135b54248f1 up in Southbound
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.425 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:c4:b0 10.100.0.5'], port_security=['fa:16:3e:39:c4:b0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4bbc984e-71f8-4e5f-b802-c135b54248f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.427 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4bbc984e-71f8-4e5f-b802-c135b54248f1 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 bound to our chassis
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.432 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.446 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bf008c51-5c83-43a8-9636-05c55aa14286]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.447 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap121329c8-21 in ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.451 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap121329c8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.451 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3b63ca70-7c45-4cf1-a071-446512be4140]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.452 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e10f3a98-458d-4358-b7d1-f48bca0aa230]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.465 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[80018f0b-7d96-4749-8780-b35bdc08b7ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.478 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3792b638-0729-4274-ac81-b78f6c5bbf55]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.489 247708 DEBUG oslo_concurrency.lockutils [req-89696570-6dd1-4d9c-b13a-4c90c9e37721 req-fba7d1df-4103-48da-9086-8bb38363440d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.506 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.506 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a423daa7-94cd-491b-aa93-013736b67a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.511 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846656.3525956, e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.512 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] VM Paused (Lifecycle Event)
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.512 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f926ab39-91d4-4573-a58a-29df84787ca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 NetworkManager[49108]: <info>  [1769846656.5140] manager: (tap121329c8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/190)
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.535 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[124112c0-a312-41b5-b3ca-f209a0385cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.538 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[92deb7d2-2bc8-44ee-a5ce-82d626c10c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.544 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.548 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:04:16 compute-0 NetworkManager[49108]: <info>  [1769846656.5560] device (tap121329c8-20): carrier: link connected
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.559 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c87a6e64-8a61-4ce9-b6be-827684c80af0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.571 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[35114f21-3b0f-485c-bb5e-35a71d902f5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689917, 'reachable_time': 40112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313259, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.580 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4d36f84f-855d-4aab-b034-ab68ada1d4c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:a3c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 689917, 'tstamp': 689917}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313260, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.592 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4e19b478-2af3-4f8d-a301-43b04e4683b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap121329c8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:a3:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689917, 'reachable_time': 40112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313261, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.605 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.616 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ec20ad0c-5e20-41cf-b9e4-cce3ddf44e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.664 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d430ada5-2a8d-4dfd-b141-1fd2ef678735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.665 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.666 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.666 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap121329c8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:16 compute-0 NetworkManager[49108]: <info>  [1769846656.6687] manager: (tap121329c8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Jan 31 08:04:16 compute-0 kernel: tap121329c8-20: entered promiscuous mode
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.668 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.672 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap121329c8-20, col_values=(('external_ids', {'iface-id': 'e59d8348-5c5c-4c82-ba21-91f3a512c65e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.673 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:16 compute-0 ovn_controller[149457]: 2026-01-31T08:04:16Z|00396|binding|INFO|Releasing lport e59d8348-5c5c-4c82-ba21-91f3a512c65e from this chassis (sb_readonly=0)
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.674 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.676 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.676 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[58a65741-5408-41b1-b7b2-ff7b00ef393c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.677 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/121329c8-2359-4e9d-9f2b-4932f8740470.pid.haproxy
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 121329c8-2359-4e9d-9f2b-4932f8740470
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:04:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:16.678 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'env', 'PROCESS_TAG=haproxy-121329c8-2359-4e9d-9f2b-4932f8740470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/121329c8-2359-4e9d-9f2b-4932f8740470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.679 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:16 compute-0 hopeful_kirch[313188]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:04:16 compute-0 hopeful_kirch[313188]: --> relative data size: 1.0
Jan 31 08:04:16 compute-0 hopeful_kirch[313188]: --> All data devices are unavailable
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.776 247708 DEBUG nova.compute.manager [req-d714a209-4843-45b0-98ef-b03a0f0f48d8 req-a01ed2b5-393c-4a9f-9fec-d687e051b531 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received event network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.777 247708 DEBUG oslo_concurrency.lockutils [req-d714a209-4843-45b0-98ef-b03a0f0f48d8 req-a01ed2b5-393c-4a9f-9fec-d687e051b531 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.777 247708 DEBUG oslo_concurrency.lockutils [req-d714a209-4843-45b0-98ef-b03a0f0f48d8 req-a01ed2b5-393c-4a9f-9fec-d687e051b531 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.777 247708 DEBUG oslo_concurrency.lockutils [req-d714a209-4843-45b0-98ef-b03a0f0f48d8 req-a01ed2b5-393c-4a9f-9fec-d687e051b531 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.777 247708 DEBUG nova.compute.manager [req-d714a209-4843-45b0-98ef-b03a0f0f48d8 req-a01ed2b5-393c-4a9f-9fec-d687e051b531 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Processing event network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.778 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.782 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846656.782274, e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.782 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] VM Resumed (Lifecycle Event)
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.784 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:04:16 compute-0 systemd[1]: libpod-0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1.scope: Deactivated successfully.
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.791 247708 INFO nova.virt.libvirt.driver [-] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Instance spawned successfully.
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.797 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:04:16 compute-0 podman[313281]: 2026-01-31 08:04:16.836920104 +0000 UTC m=+0.030795304 container died 0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.910 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.916 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.917 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.918 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.918 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.919 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.919 247708 DEBUG nova.virt.libvirt.driver [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.924 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:04:16 compute-0 nova_compute[247704]: 2026-01-31 08:04:16.988 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:04:17 compute-0 nova_compute[247704]: 2026-01-31 08:04:17.041 247708 INFO nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Took 10.52 seconds to spawn the instance on the hypervisor.
Jan 31 08:04:17 compute-0 nova_compute[247704]: 2026-01-31 08:04:17.041 247708 DEBUG nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:17 compute-0 nova_compute[247704]: 2026-01-31 08:04:17.140 247708 INFO nova.compute.manager [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Took 13.04 seconds to build instance.
Jan 31 08:04:17 compute-0 nova_compute[247704]: 2026-01-31 08:04:17.185 247708 DEBUG oslo_concurrency.lockutils [None req-91854954-fd93-4e76-90bf-9416314bfcff 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-843ffa2fc981ee4909ed3099d359c4d627750da7eb1df2423c834575948e396b-merged.mount: Deactivated successfully.
Jan 31 08:04:17 compute-0 podman[313281]: 2026-01-31 08:04:17.330868442 +0000 UTC m=+0.524743622 container remove 0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:04:17 compute-0 systemd[1]: libpod-conmon-0271f33cc73c5c00a529a74743538f0f7d391267ace9e7db906a0a63e3670dc1.scope: Deactivated successfully.
Jan 31 08:04:17 compute-0 sudo[313002]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:17 compute-0 sudo[313312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:17 compute-0 sudo[313312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:17 compute-0 sudo[313312]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 384 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 7.5 MiB/s wr, 199 op/s
Jan 31 08:04:17 compute-0 podman[313335]: 2026-01-31 08:04:17.476382879 +0000 UTC m=+0.071240852 container create 0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 08:04:17 compute-0 sudo[313356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:17 compute-0 sudo[313356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:17 compute-0 sudo[313356]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:17 compute-0 systemd[1]: Started libpod-conmon-0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df.scope.
Jan 31 08:04:17 compute-0 podman[313335]: 2026-01-31 08:04:17.42449151 +0000 UTC m=+0.019349483 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:04:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17877cee8497a588439943aa417b4e6a31e5e1dd382cd3f3378e1777568b0db1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:17 compute-0 sudo[313383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:17 compute-0 sudo[313383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:17 compute-0 sudo[313383]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:17 compute-0 podman[313335]: 2026-01-31 08:04:17.563256674 +0000 UTC m=+0.158114657 container init 0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:04:17 compute-0 podman[313335]: 2026-01-31 08:04:17.5675982 +0000 UTC m=+0.162456173 container start 0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:04:17 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[313389]: [NOTICE]   (313419) : New worker (313437) forked
Jan 31 08:04:17 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[313389]: [NOTICE]   (313419) : Loading success.
Jan 31 08:04:17 compute-0 sudo[313411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:04:17 compute-0 sudo[313411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:17.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:04:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3197452955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:17 compute-0 podman[313488]: 2026-01-31 08:04:17.899240899 +0000 UTC m=+0.043909094 container create c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:04:17 compute-0 systemd[1]: Started libpod-conmon-c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c.scope.
Jan 31 08:04:17 compute-0 podman[313488]: 2026-01-31 08:04:17.879450385 +0000 UTC m=+0.024118600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:04:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:18 compute-0 podman[313488]: 2026-01-31 08:04:18.021714534 +0000 UTC m=+0.166382739 container init c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:04:18 compute-0 podman[313488]: 2026-01-31 08:04:18.02847718 +0000 UTC m=+0.173145375 container start c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:04:18 compute-0 trusting_mendel[313504]: 167 167
Jan 31 08:04:18 compute-0 systemd[1]: libpod-c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c.scope: Deactivated successfully.
Jan 31 08:04:18 compute-0 podman[313488]: 2026-01-31 08:04:18.035328968 +0000 UTC m=+0.179997203 container attach c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:04:18 compute-0 podman[313488]: 2026-01-31 08:04:18.035768678 +0000 UTC m=+0.180436893 container died c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:04:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b08fd444b04ed251928d3af13ce6ba764314ad67412befb3f3c21165cd6207f0-merged.mount: Deactivated successfully.
Jan 31 08:04:18 compute-0 podman[313488]: 2026-01-31 08:04:18.086867847 +0000 UTC m=+0.231536042 container remove c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:04:18 compute-0 systemd[1]: libpod-conmon-c46f1656c5129104d99ffe308e6a29e47ecd3cc7cced9655df537e7f4913f03c.scope: Deactivated successfully.
Jan 31 08:04:18 compute-0 podman[313527]: 2026-01-31 08:04:18.219263525 +0000 UTC m=+0.038307568 container create ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:04:18 compute-0 systemd[1]: Started libpod-conmon-ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610.scope.
Jan 31 08:04:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95e9a7b6a56d1472d78e3d0f4cbd91ea18edc0e251ffc0ffd58b3b129bb359e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95e9a7b6a56d1472d78e3d0f4cbd91ea18edc0e251ffc0ffd58b3b129bb359e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:18 compute-0 podman[313527]: 2026-01-31 08:04:18.203966351 +0000 UTC m=+0.023010414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95e9a7b6a56d1472d78e3d0f4cbd91ea18edc0e251ffc0ffd58b3b129bb359e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d95e9a7b6a56d1472d78e3d0f4cbd91ea18edc0e251ffc0ffd58b3b129bb359e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:18 compute-0 podman[313527]: 2026-01-31 08:04:18.318541702 +0000 UTC m=+0.137585785 container init ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:04:18 compute-0 podman[313527]: 2026-01-31 08:04:18.326827115 +0000 UTC m=+0.145871168 container start ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:04:18 compute-0 podman[313527]: 2026-01-31 08:04:18.333936329 +0000 UTC m=+0.152980482 container attach ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:04:18 compute-0 ceph-mon[74496]: pgmap v2017: 305 pgs: 305 active+clean; 384 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 7.5 MiB/s wr, 199 op/s
Jan 31 08:04:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3197452955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.046 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]: {
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:     "0": [
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:         {
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "devices": [
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "/dev/loop3"
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             ],
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "lv_name": "ceph_lv0",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "lv_size": "7511998464",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "name": "ceph_lv0",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "tags": {
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.cluster_name": "ceph",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.crush_device_class": "",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.encrypted": "0",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.osd_id": "0",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.type": "block",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:                 "ceph.vdo": "0"
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             },
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "type": "block",
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:             "vg_name": "ceph_vg0"
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:         }
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]:     ]
Jan 31 08:04:19 compute-0 naughty_torvalds[313543]: }
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.137 247708 DEBUG nova.compute.manager [req-41bfe42d-8dea-4b0b-b78f-9be2f2ac56b0 req-7de84578-f52f-4fc2-a89f-b57662616f76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received event network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.137 247708 DEBUG oslo_concurrency.lockutils [req-41bfe42d-8dea-4b0b-b78f-9be2f2ac56b0 req-7de84578-f52f-4fc2-a89f-b57662616f76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.138 247708 DEBUG oslo_concurrency.lockutils [req-41bfe42d-8dea-4b0b-b78f-9be2f2ac56b0 req-7de84578-f52f-4fc2-a89f-b57662616f76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.138 247708 DEBUG oslo_concurrency.lockutils [req-41bfe42d-8dea-4b0b-b78f-9be2f2ac56b0 req-7de84578-f52f-4fc2-a89f-b57662616f76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.138 247708 DEBUG nova.compute.manager [req-41bfe42d-8dea-4b0b-b78f-9be2f2ac56b0 req-7de84578-f52f-4fc2-a89f-b57662616f76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] No waiting events found dispatching network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.139 247708 WARNING nova.compute.manager [req-41bfe42d-8dea-4b0b-b78f-9be2f2ac56b0 req-7de84578-f52f-4fc2-a89f-b57662616f76 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received unexpected event network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 for instance with vm_state active and task_state None.
Jan 31 08:04:19 compute-0 systemd[1]: libpod-ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610.scope: Deactivated successfully.
Jan 31 08:04:19 compute-0 podman[313527]: 2026-01-31 08:04:19.149871451 +0000 UTC m=+0.968915494 container died ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d95e9a7b6a56d1472d78e3d0f4cbd91ea18edc0e251ffc0ffd58b3b129bb359e-merged.mount: Deactivated successfully.
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.186 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:19 compute-0 podman[313527]: 2026-01-31 08:04:19.211802395 +0000 UTC m=+1.030846438 container remove ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_torvalds, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:04:19 compute-0 systemd[1]: libpod-conmon-ba1512faf4a7740cd4bb98e58b87f4d5f85dbcc5f0575d18543774c2ffde3610.scope: Deactivated successfully.
Jan 31 08:04:19 compute-0 sudo[313411]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:19 compute-0 sudo[313565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:19 compute-0 sudo[313565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:19 compute-0 sudo[313565]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:19 compute-0 sudo[313590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:04:19 compute-0 sudo[313590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:19 compute-0 sudo[313590]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:19 compute-0 sudo[313615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:19 compute-0 sudo[313615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:19 compute-0 sudo[313615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:19 compute-0 sudo[313640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:04:19 compute-0 sudo[313640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 388 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 7.2 MiB/s wr, 278 op/s
Jan 31 08:04:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2681368728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:19 compute-0 nova_compute[247704]: 2026-01-31 08:04:19.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:19.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:19 compute-0 podman[313706]: 2026-01-31 08:04:19.730330374 +0000 UTC m=+0.054274559 container create d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lewin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:04:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:19.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:19 compute-0 systemd[1]: Started libpod-conmon-d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e.scope.
Jan 31 08:04:19 compute-0 podman[313706]: 2026-01-31 08:04:19.699389307 +0000 UTC m=+0.023333542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:04:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:19 compute-0 podman[313706]: 2026-01-31 08:04:19.82790206 +0000 UTC m=+0.151846235 container init d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:04:19 compute-0 podman[313706]: 2026-01-31 08:04:19.83443968 +0000 UTC m=+0.158383865 container start d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:04:19 compute-0 confident_lewin[313722]: 167 167
Jan 31 08:04:19 compute-0 podman[313706]: 2026-01-31 08:04:19.841966944 +0000 UTC m=+0.165911089 container attach d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lewin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:04:19 compute-0 systemd[1]: libpod-d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e.scope: Deactivated successfully.
Jan 31 08:04:19 compute-0 podman[313706]: 2026-01-31 08:04:19.844511046 +0000 UTC m=+0.168455181 container died d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-857b58db3407a21ec9f9d3802a7251c2f1883c2f301f23be96218df7eda3f93f-merged.mount: Deactivated successfully.
Jan 31 08:04:19 compute-0 podman[313706]: 2026-01-31 08:04:19.891341361 +0000 UTC m=+0.215285506 container remove d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:04:19 compute-0 systemd[1]: libpod-conmon-d17e1d37ef8bd7100dd78f46bc621b479fab40f376be0ec200df2e2356cb821e.scope: Deactivated successfully.
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:20 compute-0 podman[313747]: 2026-01-31 08:04:20.052496731 +0000 UTC m=+0.053386526 container create 0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ganguly, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:20 compute-0 systemd[1]: Started libpod-conmon-0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892.scope.
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:04:20
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:04:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:04:20 compute-0 podman[313747]: 2026-01-31 08:04:20.030959845 +0000 UTC m=+0.031849670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35c13e21d06a277a61497bc57be0266fe58ba5f660548ed402433d97b2489dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35c13e21d06a277a61497bc57be0266fe58ba5f660548ed402433d97b2489dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35c13e21d06a277a61497bc57be0266fe58ba5f660548ed402433d97b2489dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35c13e21d06a277a61497bc57be0266fe58ba5f660548ed402433d97b2489dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:04:20 compute-0 podman[313747]: 2026-01-31 08:04:20.142168824 +0000 UTC m=+0.143058639 container init 0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:04:20 compute-0 podman[313747]: 2026-01-31 08:04:20.150383025 +0000 UTC m=+0.151272820 container start 0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 31 08:04:20 compute-0 podman[313747]: 2026-01-31 08:04:20.163034224 +0000 UTC m=+0.163924039 container attach 0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:04:20 compute-0 ceph-mon[74496]: pgmap v2018: 305 pgs: 305 active+clean; 388 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 7.2 MiB/s wr, 278 op/s
Jan 31 08:04:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:04:20 compute-0 nova_compute[247704]: 2026-01-31 08:04:20.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]: {
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]:         "osd_id": 0,
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]:         "type": "bluestore"
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]:     }
Jan 31 08:04:20 compute-0 upbeat_ganguly[313764]: }
Jan 31 08:04:20 compute-0 systemd[1]: libpod-0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892.scope: Deactivated successfully.
Jan 31 08:04:20 compute-0 podman[313785]: 2026-01-31 08:04:20.985703181 +0000 UTC m=+0.026200462 container died 0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ganguly, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e35c13e21d06a277a61497bc57be0266fe58ba5f660548ed402433d97b2489dc-merged.mount: Deactivated successfully.
Jan 31 08:04:21 compute-0 podman[313785]: 2026-01-31 08:04:21.039357843 +0000 UTC m=+0.079855154 container remove 0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:04:21 compute-0 systemd[1]: libpod-conmon-0ce5c8ebde7a15f2f0ced89ac28831f28f86d34ce6e2afb2f22f91c379177892.scope: Deactivated successfully.
Jan 31 08:04:21 compute-0 sudo[313640]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:04:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:04:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e782f246-7e76-40a3-a75a-1adc0cd9e7e9 does not exist
Jan 31 08:04:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f6b3dbac-fe60-4e2b-888b-18d6c4b35b20 does not exist
Jan 31 08:04:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bb468ce0-211d-49e1-a21c-1a4b700191b9 does not exist
Jan 31 08:04:21 compute-0 sudo[313800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:21 compute-0 sudo[313800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:21 compute-0 sudo[313800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-0 sudo[313825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:04:21 compute-0 sudo[313832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:21 compute-0 sudo[313832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:21 compute-0 sudo[313825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:21 compute-0 sudo[313832]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-0 sudo[313825]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-0 sudo[313875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:21 compute-0 sudo[313875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:21 compute-0 sudo[313875]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 372 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 261 op/s
Jan 31 08:04:21 compute-0 nova_compute[247704]: 2026-01-31 08:04:21.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:21 compute-0 nova_compute[247704]: 2026-01-31 08:04:21.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:04:21 compute-0 nova_compute[247704]: 2026-01-31 08:04:21.610 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:04:21 compute-0 nova_compute[247704]: 2026-01-31 08:04:21.611 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:21.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:04:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3994239849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:22 compute-0 ceph-mon[74496]: pgmap v2019: 305 pgs: 305 active+clean; 372 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.2 MiB/s wr, 261 op/s
Jan 31 08:04:22 compute-0 podman[313902]: 2026-01-31 08:04:22.880916533 +0000 UTC m=+0.059498466 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:04:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1877866369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 362 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.0 MiB/s wr, 289 op/s
Jan 31 08:04:23 compute-0 nova_compute[247704]: 2026-01-31 08:04:23.603 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:04:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:23.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:04:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 ceph-mon[74496]: pgmap v2020: 305 pgs: 305 active+clean; 362 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.0 MiB/s wr, 289 op/s
Jan 31 08:04:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2361180351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.197 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.198 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.198 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.198 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.199 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.200 247708 INFO nova.compute.manager [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Terminating instance
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.202 247708 DEBUG nova.compute.manager [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.203 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 kernel: tap4bbc984e-71 (unregistering): left promiscuous mode
Jan 31 08:04:24 compute-0 NetworkManager[49108]: <info>  [1769846664.2857] device (tap4bbc984e-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:04:24 compute-0 ovn_controller[149457]: 2026-01-31T08:04:24Z|00397|binding|INFO|Releasing lport 4bbc984e-71f8-4e5f-b802-c135b54248f1 from this chassis (sb_readonly=0)
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.294 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 ovn_controller[149457]: 2026-01-31T08:04:24Z|00398|binding|INFO|Setting lport 4bbc984e-71f8-4e5f-b802-c135b54248f1 down in Southbound
Jan 31 08:04:24 compute-0 ovn_controller[149457]: 2026-01-31T08:04:24Z|00399|binding|INFO|Removing iface tap4bbc984e-71 ovn-installed in OVS
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.298 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.308 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000062.scope: Deactivated successfully.
Jan 31 08:04:24 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000062.scope: Consumed 7.983s CPU time.
Jan 31 08:04:24 compute-0 systemd-machined[214448]: Machine qemu-40-instance-00000062 terminated.
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.426 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:c4:b0 10.100.0.5'], port_security=['fa:16:3e:39:c4:b0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-121329c8-2359-4e9d-9f2b-4932f8740470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be74d11d2f5a4d9aae2dbe32c31ad9c3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '755edf0d-318a-4b49-b9f5-851611889f15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cae77ce7-0851-4c6f-a030-c066a50c0f3d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4bbc984e-71f8-4e5f-b802-c135b54248f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.429 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4bbc984e-71f8-4e5f-b802-c135b54248f1 in datapath 121329c8-2359-4e9d-9f2b-4932f8740470 unbound from our chassis
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.434 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 121329c8-2359-4e9d-9f2b-4932f8740470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.436 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7508d1b8-4528-475b-b700-ca88a378733e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.437 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 namespace which is not needed anymore
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.446 247708 INFO nova.virt.libvirt.driver [-] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Instance destroyed successfully.
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.447 247708 DEBUG nova.objects.instance [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lazy-loading 'resources' on Instance uuid e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:04:24 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[313389]: [NOTICE]   (313419) : haproxy version is 2.8.14-c23fe91
Jan 31 08:04:24 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[313389]: [NOTICE]   (313419) : path to executable is /usr/sbin/haproxy
Jan 31 08:04:24 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[313389]: [WARNING]  (313419) : Exiting Master process...
Jan 31 08:04:24 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[313389]: [ALERT]    (313419) : Current worker (313437) exited with code 143 (Terminated)
Jan 31 08:04:24 compute-0 neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470[313389]: [WARNING]  (313419) : All workers exited. Exiting... (0)
Jan 31 08:04:24 compute-0 systemd[1]: libpod-0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df.scope: Deactivated successfully.
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.607 247708 DEBUG nova.virt.libvirt.vif [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1569326212',display_name='tempest-ServerDiskConfigTestJSON-server-1569326212',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1569326212',id=98,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:04:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be74d11d2f5a4d9aae2dbe32c31ad9c3',ramdisk_id='',reservation_id='r-fvmgp08m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-984925022',owner_user_name='tempest-ServerDiskConfigTestJSON-984925022-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:04:22Z,user_data=None,user_id='63e95edea0164ae2a9820dc10467335d',uuid=e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.608 247708 DEBUG nova.network.os_vif_util [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converting VIF {"id": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "address": "fa:16:3e:39:c4:b0", "network": {"id": "121329c8-2359-4e9d-9f2b-4932f8740470", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-93002743-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be74d11d2f5a4d9aae2dbe32c31ad9c3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bbc984e-71", "ovs_interfaceid": "4bbc984e-71f8-4e5f-b802-c135b54248f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.608 247708 DEBUG nova.network.os_vif_util [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:c4:b0,bridge_name='br-int',has_traffic_filtering=True,id=4bbc984e-71f8-4e5f-b802-c135b54248f1,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bbc984e-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.609 247708 DEBUG os_vif [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:c4:b0,bridge_name='br-int',has_traffic_filtering=True,id=4bbc984e-71f8-4e5f-b802-c135b54248f1,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bbc984e-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.610 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.610 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bbc984e-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.612 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 podman[313954]: 2026-01-31 08:04:24.614179435 +0000 UTC m=+0.075090796 container died 0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.615 247708 INFO os_vif [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:c4:b0,bridge_name='br-int',has_traffic_filtering=True,id=4bbc984e-71f8-4e5f-b802-c135b54248f1,network=Network(121329c8-2359-4e9d-9f2b-4932f8740470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bbc984e-71')
Jan 31 08:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df-userdata-shm.mount: Deactivated successfully.
Jan 31 08:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-17877cee8497a588439943aa417b4e6a31e5e1dd382cd3f3378e1777568b0db1-merged.mount: Deactivated successfully.
Jan 31 08:04:24 compute-0 podman[313954]: 2026-01-31 08:04:24.653927738 +0000 UTC m=+0.114839089 container cleanup 0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:04:24 compute-0 systemd[1]: libpod-conmon-0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df.scope: Deactivated successfully.
Jan 31 08:04:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:24 compute-0 podman[314001]: 2026-01-31 08:04:24.714915168 +0000 UTC m=+0.043753010 container remove 0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.719 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf8f2a4-3fee-4234-95ff-393dec2783c6]: (4, ('Sat Jan 31 08:04:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df)\n0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df\nSat Jan 31 08:04:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 (0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df)\n0fd1ecb9d753864b05c667329075e4368059fac4b6f7e44016928899268871df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.721 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a20b7e42-5ddc-4fa4-af0f-25c1a2c3872a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.722 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap121329c8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:24 compute-0 kernel: tap121329c8-20: left promiscuous mode
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.724 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.729 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.732 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b047a385-a3f0-41b1-9676-4118b15f250a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.739 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9d931cd3-6ba4-4811-a174-8a2557e0c167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.741 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[656ead66-c46e-42eb-9513-91bdc00d7081]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.752 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2129d8a4-f4d3-47f3-b7ec-46171d445226]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 689912, 'reachable_time': 19422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314017, 'error': None, 'target': 'ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.755 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-121329c8-2359-4e9d-9f2b-4932f8740470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:04:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:24.755 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[32879fec-114d-4231-9bb0-36abedd03c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:04:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d121329c8\x2d2359\x2d4e9d\x2d9f2b\x2d4932f8740470.mount: Deactivated successfully.
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.908 247708 DEBUG nova.compute.manager [req-7b7b7b0f-d178-416c-900f-6334f213d49c req-2d0cda65-4bfc-4b57-a97d-0fb5cb1186c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received event network-vif-unplugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.909 247708 DEBUG oslo_concurrency.lockutils [req-7b7b7b0f-d178-416c-900f-6334f213d49c req-2d0cda65-4bfc-4b57-a97d-0fb5cb1186c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.909 247708 DEBUG oslo_concurrency.lockutils [req-7b7b7b0f-d178-416c-900f-6334f213d49c req-2d0cda65-4bfc-4b57-a97d-0fb5cb1186c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.909 247708 DEBUG oslo_concurrency.lockutils [req-7b7b7b0f-d178-416c-900f-6334f213d49c req-2d0cda65-4bfc-4b57-a97d-0fb5cb1186c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.910 247708 DEBUG nova.compute.manager [req-7b7b7b0f-d178-416c-900f-6334f213d49c req-2d0cda65-4bfc-4b57-a97d-0fb5cb1186c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] No waiting events found dispatching network-vif-unplugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:24 compute-0 nova_compute[247704]: 2026-01-31 08:04:24.910 247708 DEBUG nova.compute.manager [req-7b7b7b0f-d178-416c-900f-6334f213d49c req-2d0cda65-4bfc-4b57-a97d-0fb5cb1186c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received event network-vif-unplugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:04:25 compute-0 nova_compute[247704]: 2026-01-31 08:04:25.070 247708 INFO nova.virt.libvirt.driver [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Deleting instance files /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_del
Jan 31 08:04:25 compute-0 nova_compute[247704]: 2026-01-31 08:04:25.071 247708 INFO nova.virt.libvirt.driver [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Deletion of /var/lib/nova/instances/e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0_del complete
Jan 31 08:04:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 341 MiB data, 948 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.9 MiB/s wr, 312 op/s
Jan 31 08:04:25 compute-0 nova_compute[247704]: 2026-01-31 08:04:25.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:25 compute-0 nova_compute[247704]: 2026-01-31 08:04:25.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:25.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:04:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:25.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:04:25 compute-0 nova_compute[247704]: 2026-01-31 08:04:25.900 247708 INFO nova.compute.manager [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Took 1.70 seconds to destroy the instance on the hypervisor.
Jan 31 08:04:25 compute-0 nova_compute[247704]: 2026-01-31 08:04:25.901 247708 DEBUG oslo.service.loopingcall [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:04:25 compute-0 nova_compute[247704]: 2026-01-31 08:04:25.902 247708 DEBUG nova.compute.manager [-] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:04:25 compute-0 nova_compute[247704]: 2026-01-31 08:04:25.902 247708 DEBUG nova.network.neutron [-] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:04:26 compute-0 nova_compute[247704]: 2026-01-31 08:04:26.013 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:26 compute-0 nova_compute[247704]: 2026-01-31 08:04:26.014 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:26 compute-0 nova_compute[247704]: 2026-01-31 08:04:26.014 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:26 compute-0 nova_compute[247704]: 2026-01-31 08:04:26.015 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:04:26 compute-0 nova_compute[247704]: 2026-01-31 08:04:26.015 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666690667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:26 compute-0 nova_compute[247704]: 2026-01-31 08:04:26.473 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:26 compute-0 ceph-mon[74496]: pgmap v2021: 305 pgs: 305 active+clean; 341 MiB data, 948 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.9 MiB/s wr, 312 op/s
Jan 31 08:04:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3666690667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 321 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1011 KiB/s wr, 279 op/s
Jan 31 08:04:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:27.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:28 compute-0 ceph-mon[74496]: pgmap v2022: 305 pgs: 305 active+clean; 321 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1011 KiB/s wr, 279 op/s
Jan 31 08:04:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4046848482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:29 compute-0 nova_compute[247704]: 2026-01-31 08:04:29.200 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 295 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 35 KiB/s wr, 283 op/s
Jan 31 08:04:29 compute-0 nova_compute[247704]: 2026-01-31 08:04:29.612 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:29.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:04:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:04:30 compute-0 nova_compute[247704]: 2026-01-31 08:04:30.198 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:04:30 compute-0 nova_compute[247704]: 2026-01-31 08:04:30.198 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:04:30 compute-0 nova_compute[247704]: 2026-01-31 08:04:30.349 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:04:30 compute-0 nova_compute[247704]: 2026-01-31 08:04:30.350 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4238MB free_disk=20.855194091796875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:04:30 compute-0 nova_compute[247704]: 2026-01-31 08:04:30.350 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:30 compute-0 nova_compute[247704]: 2026-01-31 08:04:30.350 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:30 compute-0 ceph-mon[74496]: pgmap v2023: 305 pgs: 305 active+clean; 295 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 35 KiB/s wr, 283 op/s
Jan 31 08:04:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 304 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.1 MiB/s wr, 202 op/s
Jan 31 08:04:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:31.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:31 compute-0 nova_compute[247704]: 2026-01-31 08:04:31.842 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 32e56536-3edb-494c-9e8b-87cfa8396dac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:04:31 compute-0 nova_compute[247704]: 2026-01-31 08:04:31.842 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:04:31 compute-0 nova_compute[247704]: 2026-01-31 08:04:31.842 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:04:31 compute-0 nova_compute[247704]: 2026-01-31 08:04:31.842 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:04:31 compute-0 nova_compute[247704]: 2026-01-31 08:04:31.976 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.366 247708 DEBUG nova.compute.manager [req-488c175d-a12c-4962-a592-ef5def4694e0 req-f8a1a1e9-65bb-4403-a335-c50c8ff0c892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received event network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.367 247708 DEBUG oslo_concurrency.lockutils [req-488c175d-a12c-4962-a592-ef5def4694e0 req-f8a1a1e9-65bb-4403-a335-c50c8ff0c892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.367 247708 DEBUG oslo_concurrency.lockutils [req-488c175d-a12c-4962-a592-ef5def4694e0 req-f8a1a1e9-65bb-4403-a335-c50c8ff0c892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.368 247708 DEBUG oslo_concurrency.lockutils [req-488c175d-a12c-4962-a592-ef5def4694e0 req-f8a1a1e9-65bb-4403-a335-c50c8ff0c892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.368 247708 DEBUG nova.compute.manager [req-488c175d-a12c-4962-a592-ef5def4694e0 req-f8a1a1e9-65bb-4403-a335-c50c8ff0c892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] No waiting events found dispatching network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.368 247708 WARNING nova.compute.manager [req-488c175d-a12c-4962-a592-ef5def4694e0 req-f8a1a1e9-65bb-4403-a335-c50c8ff0c892 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received unexpected event network-vif-plugged-4bbc984e-71f8-4e5f-b802-c135b54248f1 for instance with vm_state active and task_state deleting.
Jan 31 08:04:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472333272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.462 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.468 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.494 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.541 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.541 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:32.666 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.667 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:32.668 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.681 247708 DEBUG nova.network.neutron [-] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:32 compute-0 ceph-mon[74496]: pgmap v2024: 305 pgs: 305 active+clean; 304 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.1 MiB/s wr, 202 op/s
Jan 31 08:04:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1682547861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3472333272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.721 247708 INFO nova.compute.manager [-] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Took 6.82 seconds to deallocate network for instance.
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.777 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.778 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:32 compute-0 nova_compute[247704]: 2026-01-31 08:04:32.904 247708 DEBUG oslo_concurrency.processutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480641687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:33 compute-0 nova_compute[247704]: 2026-01-31 08:04:33.355 247708 DEBUG oslo_concurrency.processutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:33 compute-0 nova_compute[247704]: 2026-01-31 08:04:33.362 247708 DEBUG nova.compute.provider_tree [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:04:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 309 MiB data, 923 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 127 op/s
Jan 31 08:04:33 compute-0 nova_compute[247704]: 2026-01-31 08:04:33.492 247708 DEBUG nova.scheduler.client.report [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:04:33 compute-0 nova_compute[247704]: 2026-01-31 08:04:33.542 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:33 compute-0 nova_compute[247704]: 2026-01-31 08:04:33.589 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:33 compute-0 nova_compute[247704]: 2026-01-31 08:04:33.643 247708 INFO nova.scheduler.client.report [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Deleted allocations for instance e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0
Jan 31 08:04:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:33.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3480641687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:33.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:33 compute-0 nova_compute[247704]: 2026-01-31 08:04:33.750 247708 DEBUG oslo_concurrency.lockutils [None req-9e86307b-8b9d-4f6a-9d1e-e6b8dd30304f 63e95edea0164ae2a9820dc10467335d be74d11d2f5a4d9aae2dbe32c31ad9c3 - - default default] Lock "e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:34 compute-0 nova_compute[247704]: 2026-01-31 08:04:34.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1527 writes, 6553 keys, 1527 commit groups, 1.0 writes per commit group, ingest: 10.12 MB, 0.02 MB/s
                                           Interval WAL: 1527 writes, 1527 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     68.6      0.85              0.14        27    0.031       0      0       0.0       0.0
                                             L6      1/0   10.92 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1     69.5     58.0      4.13              0.58        26    0.159    151K    14K       0.0       0.0
                                            Sum      1/0   10.92 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1     57.6     59.8      4.98              0.72        53    0.094    151K    14K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.0     45.8     46.8      1.32              0.17        10    0.132     36K   2561       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     69.5     58.0      4.13              0.58        26    0.159    151K    14K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     68.9      0.84              0.14        26    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.057, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 5.0 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 32.90 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000439 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1946,31.75 MB,10.443%) FilterBlock(54,431.11 KB,0.138489%) IndexBlock(54,749.70 KB,0.240833%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:04:34 compute-0 nova_compute[247704]: 2026-01-31 08:04:34.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:04:34 compute-0 nova_compute[247704]: 2026-01-31 08:04:34.591 247708 DEBUG nova.compute.manager [req-bc15b90f-5b8a-4369-bcb2-fc38590351cc req-5d9ca616-7a22-46e6-b3e0-b840fae65add 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Received event network-vif-deleted-4bbc984e-71f8-4e5f-b802-c135b54248f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:34 compute-0 nova_compute[247704]: 2026-01-31 08:04:34.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:04:34.670 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:04:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:34 compute-0 ceph-mon[74496]: pgmap v2025: 305 pgs: 305 active+clean; 309 MiB data, 923 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 127 op/s
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006092201458682301 of space, bias 1.0, pg target 1.82766043760469 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.29601366933044854 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:04:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 325 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 31 08:04:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:35.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:35.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:36 compute-0 sshd-session[314092]: Invalid user validator from 45.148.10.240 port 60772
Jan 31 08:04:36 compute-0 sshd-session[314092]: Connection closed by invalid user validator 45.148.10.240 port 60772 [preauth]
Jan 31 08:04:36 compute-0 ceph-mon[74496]: pgmap v2026: 305 pgs: 305 active+clean; 325 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 31 08:04:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 294 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 496 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Jan 31 08:04:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:37.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:37.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/29766981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3004535060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:38 compute-0 podman[314095]: 2026-01-31 08:04:38.89852112 +0000 UTC m=+0.063979094 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:04:39 compute-0 ceph-mon[74496]: pgmap v2027: 305 pgs: 305 active+clean; 294 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 496 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Jan 31 08:04:39 compute-0 nova_compute[247704]: 2026-01-31 08:04:39.204 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 503 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Jan 31 08:04:39 compute-0 nova_compute[247704]: 2026-01-31 08:04:39.445 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846664.4436092, e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:04:39 compute-0 nova_compute[247704]: 2026-01-31 08:04:39.446 247708 INFO nova.compute.manager [-] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] VM Stopped (Lifecycle Event)
Jan 31 08:04:39 compute-0 nova_compute[247704]: 2026-01-31 08:04:39.616 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:39.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:39.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:39 compute-0 nova_compute[247704]: 2026-01-31 08:04:39.781 247708 DEBUG nova.compute.manager [None req-3102dbc1-115a-4e1f-83f5-b15636e042a7 - - - - - -] [instance: e62dd73e-76b3-4b2c-aa51-0d33d13a4ff0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:04:40 compute-0 ceph-mon[74496]: pgmap v2028: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 503 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Jan 31 08:04:41 compute-0 sudo[314124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:41 compute-0 sudo[314124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:41 compute-0 sudo[314124]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 495 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 31 08:04:41 compute-0 sudo[314149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:04:41 compute-0 sudo[314149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:04:41 compute-0 sudo[314149]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:41.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:41.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:42 compute-0 ceph-mon[74496]: pgmap v2029: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 495 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 31 08:04:42 compute-0 ovn_controller[149457]: 2026-01-31T08:04:42Z|00400|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 08:04:42 compute-0 nova_compute[247704]: 2026-01-31 08:04:42.677 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Jan 31 08:04:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:43.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:43.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:44 compute-0 nova_compute[247704]: 2026-01-31 08:04:44.207 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:44 compute-0 ceph-mon[74496]: pgmap v2030: 305 pgs: 305 active+clean; 248 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 482 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Jan 31 08:04:44 compute-0 nova_compute[247704]: 2026-01-31 08:04:44.618 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 860 KiB/s wr, 86 op/s
Jan 31 08:04:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3868303849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:04:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3868303849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:04:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:45.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:46 compute-0 ceph-mon[74496]: pgmap v2031: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 860 KiB/s wr, 86 op/s
Jan 31 08:04:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 64 KiB/s wr, 43 op/s
Jan 31 08:04:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:47.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:47.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:48 compute-0 nova_compute[247704]: 2026-01-31 08:04:48.426 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:48 compute-0 nova_compute[247704]: 2026-01-31 08:04:48.426 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:48 compute-0 nova_compute[247704]: 2026-01-31 08:04:48.500 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:04:48 compute-0 ceph-mon[74496]: pgmap v2032: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 64 KiB/s wr, 43 op/s
Jan 31 08:04:48 compute-0 nova_compute[247704]: 2026-01-31 08:04:48.670 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:48 compute-0 nova_compute[247704]: 2026-01-31 08:04:48.671 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:48 compute-0 nova_compute[247704]: 2026-01-31 08:04:48.681 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:04:48 compute-0 nova_compute[247704]: 2026-01-31 08:04:48.681 247708 INFO nova.compute.claims [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:04:48 compute-0 nova_compute[247704]: 2026-01-31 08:04:48.911 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.208 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:04:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069683438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.348 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.357 247708 DEBUG nova.compute.provider_tree [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.409 247708 DEBUG nova.scheduler.client.report [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:04:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 6.2 KiB/s wr, 25 op/s
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.498 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.499 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:04:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2069683438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.587 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.587 247708 DEBUG nova.network.neutron [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.619 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.649 247708 INFO nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:04:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.702 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:04:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:49.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.842 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.844 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.868 247708 INFO nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Creating image(s)
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.902 247708 DEBUG nova.storage.rbd_utils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 021ee385-cfad-415f-a2ac-cdcf925fccac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.933 247708 DEBUG nova.storage.rbd_utils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 021ee385-cfad-415f-a2ac-cdcf925fccac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.967 247708 DEBUG nova.storage.rbd_utils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 021ee385-cfad-415f-a2ac-cdcf925fccac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:49 compute-0 nova_compute[247704]: 2026-01-31 08:04:49.973 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.058 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.059 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.061 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.062 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.095 247708 DEBUG nova.storage.rbd_utils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 021ee385-cfad-415f-a2ac-cdcf925fccac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.098 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 021ee385-cfad-415f-a2ac-cdcf925fccac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.120 247708 DEBUG nova.policy [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31043e345f6b48b585fb7b8ab7304764', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd352316ff6534075952e2d0c28061b09', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.423 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 021ee385-cfad-415f-a2ac-cdcf925fccac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.509 247708 DEBUG nova.storage.rbd_utils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] resizing rbd image 021ee385-cfad-415f-a2ac-cdcf925fccac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:04:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 31 08:04:50 compute-0 ceph-mon[74496]: pgmap v2033: 305 pgs: 305 active+clean; 248 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 6.2 KiB/s wr, 25 op/s
Jan 31 08:04:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 31 08:04:50 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.695 247708 DEBUG nova.objects.instance [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'migration_context' on Instance uuid 021ee385-cfad-415f-a2ac-cdcf925fccac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.726 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.726 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Ensure instance console log exists: /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.727 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.727 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:04:50 compute-0 nova_compute[247704]: 2026-01-31 08:04:50.727 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:04:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 264 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 5.1 KiB/s rd, 640 KiB/s wr, 6 op/s
Jan 31 08:04:51 compute-0 ceph-mon[74496]: osdmap e257: 3 total, 3 up, 3 in
Jan 31 08:04:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/91292284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:51.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:51.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:51 compute-0 nova_compute[247704]: 2026-01-31 08:04:51.925 247708 DEBUG nova.network.neutron [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Successfully created port: efaa3b34-7260-4b2d-aca0-09a2d2ffe251 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:04:52 compute-0 ceph-mon[74496]: pgmap v2035: 305 pgs: 305 active+clean; 264 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 5.1 KiB/s rd, 640 KiB/s wr, 6 op/s
Jan 31 08:04:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3432603807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 269 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 848 KiB/s wr, 34 op/s
Jan 31 08:04:53 compute-0 podman[314369]: 2026-01-31 08:04:53.52569196 +0000 UTC m=+0.048632800 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:04:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:53.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:53 compute-0 nova_compute[247704]: 2026-01-31 08:04:53.704 247708 DEBUG nova.network.neutron [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Successfully updated port: efaa3b34-7260-4b2d-aca0-09a2d2ffe251 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:04:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:53.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:54 compute-0 ceph-mon[74496]: pgmap v2036: 305 pgs: 305 active+clean; 269 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 848 KiB/s wr, 34 op/s
Jan 31 08:04:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/884422311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:04:54 compute-0 nova_compute[247704]: 2026-01-31 08:04:54.068 247708 DEBUG nova.compute.manager [req-83342e2c-349c-4833-8198-64496535cd48 req-f62798df-cbaf-48fb-a236-e657b6c3a348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received event network-changed-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:04:54 compute-0 nova_compute[247704]: 2026-01-31 08:04:54.069 247708 DEBUG nova.compute.manager [req-83342e2c-349c-4833-8198-64496535cd48 req-f62798df-cbaf-48fb-a236-e657b6c3a348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Refreshing instance network info cache due to event network-changed-efaa3b34-7260-4b2d-aca0-09a2d2ffe251. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:04:54 compute-0 nova_compute[247704]: 2026-01-31 08:04:54.069 247708 DEBUG oslo_concurrency.lockutils [req-83342e2c-349c-4833-8198-64496535cd48 req-f62798df-cbaf-48fb-a236-e657b6c3a348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:54 compute-0 nova_compute[247704]: 2026-01-31 08:04:54.069 247708 DEBUG oslo_concurrency.lockutils [req-83342e2c-349c-4833-8198-64496535cd48 req-f62798df-cbaf-48fb-a236-e657b6c3a348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:54 compute-0 nova_compute[247704]: 2026-01-31 08:04:54.070 247708 DEBUG nova.network.neutron [req-83342e2c-349c-4833-8198-64496535cd48 req-f62798df-cbaf-48fb-a236-e657b6c3a348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Refreshing network info cache for port efaa3b34-7260-4b2d-aca0-09a2d2ffe251 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:04:54 compute-0 nova_compute[247704]: 2026-01-31 08:04:54.168 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:04:54 compute-0 nova_compute[247704]: 2026-01-31 08:04:54.210 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:54 compute-0 nova_compute[247704]: 2026-01-31 08:04:54.621 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 295 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 31 08:04:55 compute-0 nova_compute[247704]: 2026-01-31 08:04:55.467 247708 DEBUG nova.network.neutron [req-83342e2c-349c-4833-8198-64496535cd48 req-f62798df-cbaf-48fb-a236-e657b6c3a348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:04:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:55.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:04:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:04:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:55.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:04:55 compute-0 nova_compute[247704]: 2026-01-31 08:04:55.862 247708 DEBUG nova.network.neutron [req-83342e2c-349c-4833-8198-64496535cd48 req-f62798df-cbaf-48fb-a236-e657b6c3a348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:56 compute-0 nova_compute[247704]: 2026-01-31 08:04:56.055 247708 DEBUG oslo_concurrency.lockutils [req-83342e2c-349c-4833-8198-64496535cd48 req-f62798df-cbaf-48fb-a236-e657b6c3a348 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:56 compute-0 nova_compute[247704]: 2026-01-31 08:04:56.056 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquired lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:04:56 compute-0 nova_compute[247704]: 2026-01-31 08:04:56.056 247708 DEBUG nova.network.neutron [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:04:56 compute-0 nova_compute[247704]: 2026-01-31 08:04:56.463 247708 DEBUG nova.network.neutron [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:04:56 compute-0 nova_compute[247704]: 2026-01-31 08:04:56.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:56 compute-0 ceph-mon[74496]: pgmap v2037: 305 pgs: 305 active+clean; 295 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 31 08:04:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 31 08:04:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:57.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:04:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:57.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:04:58 compute-0 ceph-mon[74496]: pgmap v2038: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.213 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.623 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.652 247708 DEBUG nova.network.neutron [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Updating instance_info_cache with network_info: [{"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.674 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Releasing lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.675 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Instance network_info: |[{"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.680 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Start _get_guest_xml network_info=[{"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.686 247708 WARNING nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.694 247708 DEBUG nova.virt.libvirt.host [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.694 247708 DEBUG nova.virt.libvirt.host [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.699 247708 DEBUG nova.virt.libvirt.host [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.700 247708 DEBUG nova.virt.libvirt.host [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.701 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.701 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.702 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.702 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.702 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.703 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.703 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.703 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.704 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.704 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.704 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.704 247708 DEBUG nova.virt.hardware [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:04:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:04:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:59.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:04:59 compute-0 nova_compute[247704]: 2026-01-31 08:04:59.708 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:04:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:04:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:04:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:04:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:59.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:05:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410870166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.163 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.192 247708 DEBUG nova.storage.rbd_utils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 021ee385-cfad-415f-a2ac-cdcf925fccac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.196 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:05:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505055136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.620 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.623 247708 DEBUG nova.virt.libvirt.vif [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-552048963',display_name='tempest-ServerActionsTestOtherA-server-552048963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-552048963',id=100,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-n9p8t3du',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTest
OtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:49Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=021ee385-cfad-415f-a2ac-cdcf925fccac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.624 247708 DEBUG nova.network.os_vif_util [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.625 247708 DEBUG nova.network.os_vif_util [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:5f:b3,bridge_name='br-int',has_traffic_filtering=True,id=efaa3b34-7260-4b2d-aca0-09a2d2ffe251,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefaa3b34-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.627 247708 DEBUG nova.objects.instance [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 021ee385-cfad-415f-a2ac-cdcf925fccac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.646 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <uuid>021ee385-cfad-415f-a2ac-cdcf925fccac</uuid>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <name>instance-00000064</name>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerActionsTestOtherA-server-552048963</nova:name>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:04:59</nova:creationTime>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <nova:user uuid="31043e345f6b48b585fb7b8ab7304764">tempest-ServerActionsTestOtherA-527878807-project-member</nova:user>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <nova:project uuid="d352316ff6534075952e2d0c28061b09">tempest-ServerActionsTestOtherA-527878807</nova:project>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <nova:port uuid="efaa3b34-7260-4b2d-aca0-09a2d2ffe251">
Jan 31 08:05:00 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <system>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <entry name="serial">021ee385-cfad-415f-a2ac-cdcf925fccac</entry>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <entry name="uuid">021ee385-cfad-415f-a2ac-cdcf925fccac</entry>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </system>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <os>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   </os>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <features>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   </features>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/021ee385-cfad-415f-a2ac-cdcf925fccac_disk">
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       </source>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/021ee385-cfad-415f-a2ac-cdcf925fccac_disk.config">
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       </source>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:05:00 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:40:5f:b3"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <target dev="tapefaa3b34-72"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac/console.log" append="off"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <video>
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </video>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:05:00 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:05:00 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:05:00 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:05:00 compute-0 nova_compute[247704]: </domain>
Jan 31 08:05:00 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.647 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Preparing to wait for external event network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.649 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.650 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.650 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.651 247708 DEBUG nova.virt.libvirt.vif [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-552048963',display_name='tempest-ServerActionsTestOtherA-server-552048963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-552048963',id=100,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-n9p8t3du',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:49Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=021ee385-cfad-415f-a2ac-cdcf925fccac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.651 247708 DEBUG nova.network.os_vif_util [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.651 247708 DEBUG nova.network.os_vif_util [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:5f:b3,bridge_name='br-int',has_traffic_filtering=True,id=efaa3b34-7260-4b2d-aca0-09a2d2ffe251,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefaa3b34-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.652 247708 DEBUG os_vif [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:5f:b3,bridge_name='br-int',has_traffic_filtering=True,id=efaa3b34-7260-4b2d-aca0-09a2d2ffe251,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefaa3b34-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.652 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.653 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.653 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.655 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.656 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefaa3b34-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.656 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapefaa3b34-72, col_values=(('external_ids', {'iface-id': 'efaa3b34-7260-4b2d-aca0-09a2d2ffe251', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:5f:b3', 'vm-uuid': '021ee385-cfad-415f-a2ac-cdcf925fccac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.658 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-0 NetworkManager[49108]: <info>  [1769846700.6590] manager: (tapefaa3b34-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.663 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.664 247708 INFO os_vif [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:5f:b3,bridge_name='br-int',has_traffic_filtering=True,id=efaa3b34-7260-4b2d-aca0-09a2d2ffe251,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefaa3b34-72')
Jan 31 08:05:00 compute-0 ceph-mon[74496]: pgmap v2039: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 31 08:05:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3410870166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1505055136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.736 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.737 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.737 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:40:5f:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.737 247708 INFO nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Using config drive
Jan 31 08:05:00 compute-0 nova_compute[247704]: 2026-01-31 08:05:00.764 247708 DEBUG nova.storage.rbd_utils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 021ee385-cfad-415f-a2ac-cdcf925fccac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Jan 31 08:05:01 compute-0 sudo[314474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:01 compute-0 sudo[314474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:01 compute-0 sudo[314474]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:01 compute-0 sudo[314499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:01 compute-0 sudo[314499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:01 compute-0 sudo[314499]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:01.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:01.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:02 compute-0 nova_compute[247704]: 2026-01-31 08:05:02.517 247708 INFO nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Creating config drive at /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac/disk.config
Jan 31 08:05:02 compute-0 nova_compute[247704]: 2026-01-31 08:05:02.524 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmf0pdvpb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:02 compute-0 nova_compute[247704]: 2026-01-31 08:05:02.657 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmf0pdvpb" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 31 08:05:02 compute-0 nova_compute[247704]: 2026-01-31 08:05:02.686 247708 DEBUG nova.storage.rbd_utils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] rbd image 021ee385-cfad-415f-a2ac-cdcf925fccac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:05:02 compute-0 nova_compute[247704]: 2026-01-31 08:05:02.691 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac/disk.config 021ee385-cfad-415f-a2ac-cdcf925fccac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 31 08:05:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 31 08:05:02 compute-0 ceph-mon[74496]: pgmap v2040: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.076 247708 DEBUG oslo_concurrency.processutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac/disk.config 021ee385-cfad-415f-a2ac-cdcf925fccac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.077 247708 INFO nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Deleting local config drive /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac/disk.config because it was imported into RBD.
Jan 31 08:05:03 compute-0 kernel: tapefaa3b34-72: entered promiscuous mode
Jan 31 08:05:03 compute-0 NetworkManager[49108]: <info>  [1769846703.1283] manager: (tapefaa3b34-72): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Jan 31 08:05:03 compute-0 ovn_controller[149457]: 2026-01-31T08:05:03Z|00401|binding|INFO|Claiming lport efaa3b34-7260-4b2d-aca0-09a2d2ffe251 for this chassis.
Jan 31 08:05:03 compute-0 ovn_controller[149457]: 2026-01-31T08:05:03Z|00402|binding|INFO|efaa3b34-7260-4b2d-aca0-09a2d2ffe251: Claiming fa:16:3e:40:5f:b3 10.100.0.10
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.129 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.137 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:5f:b3 10.100.0.10'], port_security=['fa:16:3e:40:5f:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '021ee385-cfad-415f-a2ac-cdcf925fccac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd60e8396-567a-4b0f-955f-0777fc5fbc4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=efaa3b34-7260-4b2d-aca0-09a2d2ffe251) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:03 compute-0 ovn_controller[149457]: 2026-01-31T08:05:03Z|00403|binding|INFO|Setting lport efaa3b34-7260-4b2d-aca0-09a2d2ffe251 ovn-installed in OVS
Jan 31 08:05:03 compute-0 ovn_controller[149457]: 2026-01-31T08:05:03Z|00404|binding|INFO|Setting lport efaa3b34-7260-4b2d-aca0-09a2d2ffe251 up in Southbound
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.140 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.139 160028 INFO neutron.agent.ovn.metadata.agent [-] Port efaa3b34-7260-4b2d-aca0-09a2d2ffe251 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.144 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.144 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:05:03 compute-0 systemd-udevd[314577]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.155 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[125007ab-ad16-4f39-a159-59299eb6fc03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:03 compute-0 systemd-machined[214448]: New machine qemu-41-instance-00000064.
Jan 31 08:05:03 compute-0 NetworkManager[49108]: <info>  [1769846703.1707] device (tapefaa3b34-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:05:03 compute-0 NetworkManager[49108]: <info>  [1769846703.1717] device (tapefaa3b34-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:05:03 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000064.
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.181 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2a53c7-aed7-4db5-ae8a-32fd35d555c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.184 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[14c9f2cf-9367-4466-889d-fd74130f6aa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.202 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[34b7fdc4-cdab-4de1-85b5-59518d21cb4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.216 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2e399f-34eb-445b-8234-c4f8220520bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674899, 'reachable_time': 41105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314586, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.234 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3b425453-0070-4aef-9379-4b8c2127f10e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79cb2b81-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674908, 'tstamp': 674908}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314591, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79cb2b81-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674910, 'tstamp': 674910}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314591, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.237 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.239 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.240 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.241 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.241 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.242 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:03.243 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:05:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 122 op/s
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.571 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846703.5700192, 021ee385-cfad-415f-a2ac-cdcf925fccac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.572 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] VM Started (Lifecycle Event)
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.599 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.605 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846703.570747, 021ee385-cfad-415f-a2ac-cdcf925fccac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.605 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] VM Paused (Lifecycle Event)
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.624 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.628 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:05:03 compute-0 nova_compute[247704]: 2026-01-31 08:05:03.658 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:05:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:03.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:03.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:03 compute-0 ceph-mon[74496]: osdmap e258: 3 total, 3 up, 3 in
Jan 31 08:05:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3126004755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.216 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.405 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.776 247708 DEBUG nova.compute.manager [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received event network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.777 247708 DEBUG oslo_concurrency.lockutils [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.777 247708 DEBUG oslo_concurrency.lockutils [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.777 247708 DEBUG oslo_concurrency.lockutils [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.778 247708 DEBUG nova.compute.manager [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Processing event network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.778 247708 DEBUG nova.compute.manager [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received event network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.778 247708 DEBUG oslo_concurrency.lockutils [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.778 247708 DEBUG oslo_concurrency.lockutils [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.778 247708 DEBUG oslo_concurrency.lockutils [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.778 247708 DEBUG nova.compute.manager [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] No waiting events found dispatching network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.779 247708 WARNING nova.compute.manager [req-4b80e528-d698-49fe-8f92-623ff15c21b9 req-89e1e6be-8fde-4076-b00a-807512faa4df 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received unexpected event network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 for instance with vm_state building and task_state spawning.
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.779 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.783 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846704.7830284, 021ee385-cfad-415f-a2ac-cdcf925fccac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.783 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] VM Resumed (Lifecycle Event)
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.785 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.788 247708 INFO nova.virt.libvirt.driver [-] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Instance spawned successfully.
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.789 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.820 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.825 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.828 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.828 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.829 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.829 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.829 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.830 247708 DEBUG nova.virt.libvirt.driver [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:05:04 compute-0 ceph-mon[74496]: pgmap v2042: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 122 op/s
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.862 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.889 247708 INFO nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Took 15.05 seconds to spawn the instance on the hypervisor.
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.890 247708 DEBUG nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.958 247708 INFO nova.compute.manager [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Took 16.33 seconds to build instance.
Jan 31 08:05:04 compute-0 nova_compute[247704]: 2026-01-31 08:05:04.976 247708 DEBUG oslo_concurrency.lockutils [None req-a723b4de-414a-47e1-aa15-850cb321f812 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 20 KiB/s wr, 70 op/s
Jan 31 08:05:05 compute-0 nova_compute[247704]: 2026-01-31 08:05:05.659 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:05.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:05:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:05.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:05 compute-0 ceph-mon[74496]: pgmap v2043: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 20 KiB/s wr, 70 op/s
Jan 31 08:05:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 38 KiB/s wr, 96 op/s
Jan 31 08:05:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:07.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:08 compute-0 ceph-mon[74496]: pgmap v2044: 305 pgs: 305 active+clean; 295 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 38 KiB/s wr, 96 op/s
Jan 31 08:05:09 compute-0 nova_compute[247704]: 2026-01-31 08:05:09.219 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 241 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 27 KiB/s wr, 269 op/s
Jan 31 08:05:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 31 08:05:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:09.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 31 08:05:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 31 08:05:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:09.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:09 compute-0 nova_compute[247704]: 2026-01-31 08:05:09.879 247708 DEBUG nova.compute.manager [req-2eaf03b4-ad4c-4f37-b955-f155313ed77b req-91eac776-2744-4e54-b417-baeb1d3dabae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received event network-changed-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:09 compute-0 nova_compute[247704]: 2026-01-31 08:05:09.881 247708 DEBUG nova.compute.manager [req-2eaf03b4-ad4c-4f37-b955-f155313ed77b req-91eac776-2744-4e54-b417-baeb1d3dabae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Refreshing instance network info cache due to event network-changed-efaa3b34-7260-4b2d-aca0-09a2d2ffe251. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:05:09 compute-0 nova_compute[247704]: 2026-01-31 08:05:09.881 247708 DEBUG oslo_concurrency.lockutils [req-2eaf03b4-ad4c-4f37-b955-f155313ed77b req-91eac776-2744-4e54-b417-baeb1d3dabae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:05:09 compute-0 nova_compute[247704]: 2026-01-31 08:05:09.881 247708 DEBUG oslo_concurrency.lockutils [req-2eaf03b4-ad4c-4f37-b955-f155313ed77b req-91eac776-2744-4e54-b417-baeb1d3dabae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:05:09 compute-0 nova_compute[247704]: 2026-01-31 08:05:09.881 247708 DEBUG nova.network.neutron [req-2eaf03b4-ad4c-4f37-b955-f155313ed77b req-91eac776-2744-4e54-b417-baeb1d3dabae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Refreshing network info cache for port efaa3b34-7260-4b2d-aca0-09a2d2ffe251 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:05:09 compute-0 podman[314639]: 2026-01-31 08:05:09.942010207 +0000 UTC m=+0.105853069 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:05:10 compute-0 nova_compute[247704]: 2026-01-31 08:05:10.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:10 compute-0 ceph-mon[74496]: pgmap v2045: 305 pgs: 305 active+clean; 241 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 27 KiB/s wr, 269 op/s
Jan 31 08:05:10 compute-0 ceph-mon[74496]: osdmap e259: 3 total, 3 up, 3 in
Jan 31 08:05:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:11.176 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:11.177 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:11.177 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 214 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 31 KiB/s wr, 304 op/s
Jan 31 08:05:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:05:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:11.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3053920949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:11.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:11 compute-0 nova_compute[247704]: 2026-01-31 08:05:11.897 247708 DEBUG nova.network.neutron [req-2eaf03b4-ad4c-4f37-b955-f155313ed77b req-91eac776-2744-4e54-b417-baeb1d3dabae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Updated VIF entry in instance network info cache for port efaa3b34-7260-4b2d-aca0-09a2d2ffe251. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:05:11 compute-0 nova_compute[247704]: 2026-01-31 08:05:11.897 247708 DEBUG nova.network.neutron [req-2eaf03b4-ad4c-4f37-b955-f155313ed77b req-91eac776-2744-4e54-b417-baeb1d3dabae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Updating instance_info_cache with network_info: [{"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:11 compute-0 nova_compute[247704]: 2026-01-31 08:05:11.926 247708 DEBUG oslo_concurrency.lockutils [req-2eaf03b4-ad4c-4f37-b955-f155313ed77b req-91eac776-2744-4e54-b417-baeb1d3dabae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:12 compute-0 ceph-mon[74496]: pgmap v2047: 305 pgs: 305 active+clean; 214 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 31 KiB/s wr, 304 op/s
Jan 31 08:05:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 222 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 433 KiB/s wr, 279 op/s
Jan 31 08:05:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:13.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:13.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:13 compute-0 ceph-mon[74496]: pgmap v2048: 305 pgs: 305 active+clean; 222 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 433 KiB/s wr, 279 op/s
Jan 31 08:05:14 compute-0 nova_compute[247704]: 2026-01-31 08:05:14.222 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 236 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.1 MiB/s wr, 276 op/s
Jan 31 08:05:15 compute-0 nova_compute[247704]: 2026-01-31 08:05:15.663 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:15.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:15.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:16 compute-0 ceph-mon[74496]: pgmap v2049: 305 pgs: 305 active+clean; 236 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.1 MiB/s wr, 276 op/s
Jan 31 08:05:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1257996655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3115382932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 260 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Jan 31 08:05:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:05:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:17.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3374096316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:19 compute-0 nova_compute[247704]: 2026-01-31 08:05:19.222 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:19 compute-0 ceph-mon[74496]: pgmap v2050: 305 pgs: 305 active+clean; 260 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Jan 31 08:05:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 322 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 877 KiB/s rd, 7.0 MiB/s wr, 152 op/s
Jan 31 08:05:19 compute-0 nova_compute[247704]: 2026-01-31 08:05:19.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:05:20
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:05:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:20.199 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:05:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:20.200 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:05:20 compute-0 nova_compute[247704]: 2026-01-31 08:05:20.245 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:20 compute-0 ceph-mon[74496]: pgmap v2051: 305 pgs: 305 active+clean; 322 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 877 KiB/s rd, 7.0 MiB/s wr, 152 op/s
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:05:20 compute-0 nova_compute[247704]: 2026-01-31 08:05:20.665 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:20 compute-0 ceph-mgr[74791]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 08:05:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 359 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.9 MiB/s wr, 207 op/s
Jan 31 08:05:21 compute-0 sudo[314671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:21 compute-0 sudo[314671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-0 nova_compute[247704]: 2026-01-31 08:05:21.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:21 compute-0 sudo[314671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-0 sudo[314696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:21 compute-0 sudo[314696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-0 sudo[314696]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-0 sudo[314719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:21 compute-0 sudo[314719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-0 sudo[314719]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-0 sudo[314738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:21 compute-0 sudo[314738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-0 sudo[314738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-0 sudo[314770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:21 compute-0 sudo[314770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-0 sudo[314770]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:21.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:21 compute-0 sudo[314779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:05:21 compute-0 sudo[314779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:22 compute-0 sudo[314779]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:05:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:05:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:05:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5cf8f762-6e8b-42a7-95e0-b7ac5477c138 does not exist
Jan 31 08:05:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0e681d34-1f5c-47b2-a759-8499384bca64 does not exist
Jan 31 08:05:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 621fc7fb-352e-4600-b510-c23006e79b2c does not exist
Jan 31 08:05:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:05:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:05:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:05:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:05:22 compute-0 sudo[314853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:22 compute-0 sudo[314853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:22 compute-0 sudo[314853]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:22 compute-0 sudo[314878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:22 compute-0 sudo[314878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:22 compute-0 sudo[314878]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:22 compute-0 sudo[314903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:22 compute-0 sudo[314903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:22 compute-0 sudo[314903]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:22 compute-0 sudo[314928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:05:22 compute-0 sudo[314928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:22 compute-0 nova_compute[247704]: 2026-01-31 08:05:22.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:22 compute-0 nova_compute[247704]: 2026-01-31 08:05:22.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:05:22 compute-0 nova_compute[247704]: 2026-01-31 08:05:22.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:05:22 compute-0 ceph-mon[74496]: pgmap v2052: 305 pgs: 305 active+clean; 359 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.9 MiB/s wr, 207 op/s
Jan 31 08:05:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/18868591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:05:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:05:22 compute-0 nova_compute[247704]: 2026-01-31 08:05:22.796 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:05:22 compute-0 nova_compute[247704]: 2026-01-31 08:05:22.797 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:05:22 compute-0 nova_compute[247704]: 2026-01-31 08:05:22.797 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:05:22 compute-0 nova_compute[247704]: 2026-01-31 08:05:22.797 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 32e56536-3edb-494c-9e8b-87cfa8396dac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:22 compute-0 podman[314993]: 2026-01-31 08:05:22.802026746 +0000 UTC m=+0.070079755 container create cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:05:22 compute-0 podman[314993]: 2026-01-31 08:05:22.751185063 +0000 UTC m=+0.019238102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:05:22 compute-0 systemd[1]: Started libpod-conmon-cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561.scope.
Jan 31 08:05:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:23 compute-0 podman[314993]: 2026-01-31 08:05:23.010626527 +0000 UTC m=+0.278679576 container init cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 08:05:23 compute-0 podman[314993]: 2026-01-31 08:05:23.018306614 +0000 UTC m=+0.286359623 container start cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:05:23 compute-0 zealous_khorana[315010]: 167 167
Jan 31 08:05:23 compute-0 systemd[1]: libpod-cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561.scope: Deactivated successfully.
Jan 31 08:05:23 compute-0 podman[314993]: 2026-01-31 08:05:23.066314599 +0000 UTC m=+0.334367618 container attach cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:05:23 compute-0 podman[314993]: 2026-01-31 08:05:23.068344238 +0000 UTC m=+0.336397247 container died cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8486bf95976bfc0c310226382a4ef48afe60ad77e1e276ff1c7b864a9821d381-merged.mount: Deactivated successfully.
Jan 31 08:05:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 366 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.8 MiB/s wr, 219 op/s
Jan 31 08:05:23 compute-0 podman[314993]: 2026-01-31 08:05:23.673684 +0000 UTC m=+0.941737029 container remove cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:05:23 compute-0 systemd[1]: libpod-conmon-cd7b43bc40e291159bbe029dbcc18762f3eb7a8e32063538b24127ad1605c561.scope: Deactivated successfully.
Jan 31 08:05:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:23.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:23 compute-0 podman[315031]: 2026-01-31 08:05:23.792580968 +0000 UTC m=+0.078911950 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:05:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:23.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:23 compute-0 podman[315050]: 2026-01-31 08:05:23.899432131 +0000 UTC m=+0.105790058 container create 9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_faraday, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:05:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1767179113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1249913127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/737999452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:23 compute-0 podman[315050]: 2026-01-31 08:05:23.821467655 +0000 UTC m=+0.027825602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:05:23 compute-0 systemd[1]: Started libpod-conmon-9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61.scope.
Jan 31 08:05:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beacc8ef2e6d012ab755d9b5d7d8d818968606851918dad3e1d41d2850edb90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beacc8ef2e6d012ab755d9b5d7d8d818968606851918dad3e1d41d2850edb90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beacc8ef2e6d012ab755d9b5d7d8d818968606851918dad3e1d41d2850edb90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beacc8ef2e6d012ab755d9b5d7d8d818968606851918dad3e1d41d2850edb90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8beacc8ef2e6d012ab755d9b5d7d8d818968606851918dad3e1d41d2850edb90/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:24 compute-0 podman[315050]: 2026-01-31 08:05:24.092516362 +0000 UTC m=+0.298874309 container init 9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_faraday, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:05:24 compute-0 podman[315050]: 2026-01-31 08:05:24.10265899 +0000 UTC m=+0.309016917 container start 9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_faraday, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:05:24 compute-0 podman[315050]: 2026-01-31 08:05:24.151775511 +0000 UTC m=+0.358133458 container attach 9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:05:24 compute-0 nova_compute[247704]: 2026-01-31 08:05:24.226 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:24 compute-0 nova_compute[247704]: 2026-01-31 08:05:24.446 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:24 compute-0 nova_compute[247704]: 2026-01-31 08:05:24.479 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:24 compute-0 nova_compute[247704]: 2026-01-31 08:05:24.480 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:05:24 compute-0 nova_compute[247704]: 2026-01-31 08:05:24.481 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:24 compute-0 zen_faraday[315071]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:05:24 compute-0 zen_faraday[315071]: --> relative data size: 1.0
Jan 31 08:05:24 compute-0 zen_faraday[315071]: --> All data devices are unavailable
Jan 31 08:05:24 compute-0 systemd[1]: libpod-9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61.scope: Deactivated successfully.
Jan 31 08:05:24 compute-0 conmon[315071]: conmon 9700d978decfbc2016e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61.scope/container/memory.events
Jan 31 08:05:24 compute-0 podman[315050]: 2026-01-31 08:05:24.860902711 +0000 UTC m=+1.067260678 container died 9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:05:25 compute-0 ceph-mon[74496]: pgmap v2053: 305 pgs: 305 active+clean; 366 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.8 MiB/s wr, 219 op/s
Jan 31 08:05:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8beacc8ef2e6d012ab755d9b5d7d8d818968606851918dad3e1d41d2850edb90-merged.mount: Deactivated successfully.
Jan 31 08:05:25 compute-0 podman[315050]: 2026-01-31 08:05:25.311660333 +0000 UTC m=+1.518018280 container remove 9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_faraday, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:05:25 compute-0 systemd[1]: libpod-conmon-9700d978decfbc2016e3d72da695b1f275d5901c9aa12414f45a8591003a2a61.scope: Deactivated successfully.
Jan 31 08:05:25 compute-0 sudo[314928]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:25 compute-0 sudo[315101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:25 compute-0 sudo[315101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:25 compute-0 sudo[315101]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 359 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 246 op/s
Jan 31 08:05:25 compute-0 sudo[315127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:25 compute-0 sudo[315127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:25 compute-0 sudo[315127]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:25 compute-0 sudo[315152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:25 compute-0 sudo[315152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:25 compute-0 sudo[315152]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:25 compute-0 nova_compute[247704]: 2026-01-31 08:05:25.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:25 compute-0 nova_compute[247704]: 2026-01-31 08:05:25.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:25 compute-0 nova_compute[247704]: 2026-01-31 08:05:25.596 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:25 compute-0 nova_compute[247704]: 2026-01-31 08:05:25.597 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:25 compute-0 nova_compute[247704]: 2026-01-31 08:05:25.597 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:25 compute-0 nova_compute[247704]: 2026-01-31 08:05:25.597 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:05:25 compute-0 nova_compute[247704]: 2026-01-31 08:05:25.597 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:25 compute-0 sudo[315177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:05:25 compute-0 sudo[315177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:25 compute-0 nova_compute[247704]: 2026-01-31 08:05:25.678 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:25.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:05:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:25.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:26 compute-0 podman[315260]: 2026-01-31 08:05:25.943921714 +0000 UTC m=+0.026884299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:05:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:05:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1225765375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:26 compute-0 podman[315260]: 2026-01-31 08:05:26.092409294 +0000 UTC m=+0.175371899 container create 1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:05:26 compute-0 nova_compute[247704]: 2026-01-31 08:05:26.106 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:05:26.203 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:05:26 compute-0 systemd[1]: Started libpod-conmon-1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250.scope.
Jan 31 08:05:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:26 compute-0 podman[315260]: 2026-01-31 08:05:26.378206903 +0000 UTC m=+0.461169548 container init 1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:05:26 compute-0 ceph-mon[74496]: pgmap v2054: 305 pgs: 305 active+clean; 359 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 246 op/s
Jan 31 08:05:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1225765375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:26 compute-0 podman[315260]: 2026-01-31 08:05:26.391172139 +0000 UTC m=+0.474134744 container start 1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:05:26 compute-0 kind_goldberg[315279]: 167 167
Jan 31 08:05:26 compute-0 systemd[1]: libpod-1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250.scope: Deactivated successfully.
Jan 31 08:05:26 compute-0 podman[315260]: 2026-01-31 08:05:26.458030894 +0000 UTC m=+0.540993509 container attach 1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:05:26 compute-0 podman[315260]: 2026-01-31 08:05:26.458521206 +0000 UTC m=+0.541483771 container died 1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:05:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4b4105d5bc1e0e05a90d8f1a6f4c9bf3b571de4fe87b708ac6cf9dba33c5524-merged.mount: Deactivated successfully.
Jan 31 08:05:26 compute-0 podman[315260]: 2026-01-31 08:05:26.638368924 +0000 UTC m=+0.721331509 container remove 1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:05:26 compute-0 systemd[1]: libpod-conmon-1be2bcbfc05325300218e7692d6d480b9ff48c82cfa13e6289675f5022909250.scope: Deactivated successfully.
Jan 31 08:05:26 compute-0 podman[315304]: 2026-01-31 08:05:26.829638352 +0000 UTC m=+0.060281856 container create 7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:05:26 compute-0 systemd[1]: Started libpod-conmon-7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2.scope.
Jan 31 08:05:26 compute-0 podman[315304]: 2026-01-31 08:05:26.80384269 +0000 UTC m=+0.034486214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:05:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200d31918b4051c7e2982a7fe92d4cdbea7505bee8a8e2b6b80930758b191169/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200d31918b4051c7e2982a7fe92d4cdbea7505bee8a8e2b6b80930758b191169/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200d31918b4051c7e2982a7fe92d4cdbea7505bee8a8e2b6b80930758b191169/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200d31918b4051c7e2982a7fe92d4cdbea7505bee8a8e2b6b80930758b191169/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:26 compute-0 podman[315304]: 2026-01-31 08:05:26.937021757 +0000 UTC m=+0.167665271 container init 7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:05:26 compute-0 podman[315304]: 2026-01-31 08:05:26.942532782 +0000 UTC m=+0.173176286 container start 7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:05:26 compute-0 podman[315304]: 2026-01-31 08:05:26.951723027 +0000 UTC m=+0.182366541 container attach 7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:05:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 334 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.0 MiB/s wr, 268 op/s
Jan 31 08:05:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:27.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:27 compute-0 nova_compute[247704]: 2026-01-31 08:05:27.774 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:27 compute-0 nova_compute[247704]: 2026-01-31 08:05:27.776 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:27 compute-0 nova_compute[247704]: 2026-01-31 08:05:27.780 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:27 compute-0 nova_compute[247704]: 2026-01-31 08:05:27.780 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:05:27 compute-0 boring_benz[315320]: {
Jan 31 08:05:27 compute-0 boring_benz[315320]:     "0": [
Jan 31 08:05:27 compute-0 boring_benz[315320]:         {
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "devices": [
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "/dev/loop3"
Jan 31 08:05:27 compute-0 boring_benz[315320]:             ],
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "lv_name": "ceph_lv0",
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "lv_size": "7511998464",
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "name": "ceph_lv0",
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "tags": {
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.cluster_name": "ceph",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.crush_device_class": "",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.encrypted": "0",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.osd_id": "0",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.type": "block",
Jan 31 08:05:27 compute-0 boring_benz[315320]:                 "ceph.vdo": "0"
Jan 31 08:05:27 compute-0 boring_benz[315320]:             },
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "type": "block",
Jan 31 08:05:27 compute-0 boring_benz[315320]:             "vg_name": "ceph_vg0"
Jan 31 08:05:27 compute-0 boring_benz[315320]:         }
Jan 31 08:05:27 compute-0 boring_benz[315320]:     ]
Jan 31 08:05:27 compute-0 boring_benz[315320]: }
Jan 31 08:05:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:27.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:27 compute-0 systemd[1]: libpod-7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2.scope: Deactivated successfully.
Jan 31 08:05:27 compute-0 conmon[315320]: conmon 7469a79d403ff293048a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2.scope/container/memory.events
Jan 31 08:05:27 compute-0 podman[315304]: 2026-01-31 08:05:27.844059306 +0000 UTC m=+1.074702810 container died 7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:05:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-200d31918b4051c7e2982a7fe92d4cdbea7505bee8a8e2b6b80930758b191169-merged.mount: Deactivated successfully.
Jan 31 08:05:27 compute-0 podman[315304]: 2026-01-31 08:05:27.911400793 +0000 UTC m=+1.142044297 container remove 7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:05:27 compute-0 systemd[1]: libpod-conmon-7469a79d403ff293048ab3d89dbc9c122a8d25669e2f593e3c6d879f379947b2.scope: Deactivated successfully.
Jan 31 08:05:27 compute-0 sudo[315177]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:27 compute-0 nova_compute[247704]: 2026-01-31 08:05:27.961 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:05:27 compute-0 nova_compute[247704]: 2026-01-31 08:05:27.963 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4027MB free_disk=20.855541229248047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:05:27 compute-0 nova_compute[247704]: 2026-01-31 08:05:27.963 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:27 compute-0 nova_compute[247704]: 2026-01-31 08:05:27.963 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:27 compute-0 sudo[315342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:28 compute-0 sudo[315342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:28 compute-0 sudo[315342]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:28 compute-0 sudo[315367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.045 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 32e56536-3edb-494c-9e8b-87cfa8396dac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.046 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 021ee385-cfad-415f-a2ac-cdcf925fccac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.046 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.046 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:05:28 compute-0 sudo[315367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:28 compute-0 sudo[315367]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:28 compute-0 sudo[315392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:28 compute-0 sudo[315392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:28 compute-0 sudo[315392]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.108 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:28 compute-0 sudo[315417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:05:28 compute-0 sudo[315417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:05:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2404807119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:28 compute-0 podman[315501]: 2026-01-31 08:05:28.467678465 +0000 UTC m=+0.031962212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.562 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.570 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.598 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.640 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:05:28 compute-0 nova_compute[247704]: 2026-01-31 08:05:28.640 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:28 compute-0 podman[315501]: 2026-01-31 08:05:28.812908547 +0000 UTC m=+0.377192194 container create 7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_boyd, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:05:28 compute-0 ceph-mon[74496]: pgmap v2055: 305 pgs: 305 active+clean; 334 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.0 MiB/s wr, 268 op/s
Jan 31 08:05:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2404807119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:28 compute-0 systemd[1]: Started libpod-conmon-7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199.scope.
Jan 31 08:05:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:29 compute-0 podman[315501]: 2026-01-31 08:05:29.130646247 +0000 UTC m=+0.694930004 container init 7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_boyd, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:05:29 compute-0 podman[315501]: 2026-01-31 08:05:29.144487766 +0000 UTC m=+0.708771443 container start 7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_boyd, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 31 08:05:29 compute-0 goofy_boyd[315520]: 167 167
Jan 31 08:05:29 compute-0 systemd[1]: libpod-7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199.scope: Deactivated successfully.
Jan 31 08:05:29 compute-0 podman[315501]: 2026-01-31 08:05:29.195889513 +0000 UTC m=+0.760173150 container attach 7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_boyd, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:05:29 compute-0 podman[315501]: 2026-01-31 08:05:29.197703827 +0000 UTC m=+0.761987504 container died 7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_boyd, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 08:05:29 compute-0 nova_compute[247704]: 2026-01-31 08:05:29.229 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 356 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.6 MiB/s wr, 325 op/s
Jan 31 08:05:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-08b2d8874d21aff9fcf66999ea7adec6a91e9a48ddf7127c0ef99b0d6b7c1d8d-merged.mount: Deactivated successfully.
Jan 31 08:05:29 compute-0 nova_compute[247704]: 2026-01-31 08:05:29.641 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:29 compute-0 nova_compute[247704]: 2026-01-31 08:05:29.641 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:29 compute-0 nova_compute[247704]: 2026-01-31 08:05:29.642 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:29 compute-0 nova_compute[247704]: 2026-01-31 08:05:29.642 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:05:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:29 compute-0 podman[315501]: 2026-01-31 08:05:29.746319963 +0000 UTC m=+1.310603600 container remove 7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_boyd, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:05:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:29.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:29.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:29 compute-0 systemd[1]: libpod-conmon-7faa6ed981e75e469c3f7438c855f9f8945192235d826e2c534852301573d199.scope: Deactivated successfully.
Jan 31 08:05:29 compute-0 podman[315546]: 2026-01-31 08:05:29.923930975 +0000 UTC m=+0.063180456 container create 97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:05:29 compute-0 ceph-mon[74496]: pgmap v2056: 305 pgs: 305 active+clean; 356 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.6 MiB/s wr, 325 op/s
Jan 31 08:05:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1934745028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:29 compute-0 podman[315546]: 2026-01-31 08:05:29.902420299 +0000 UTC m=+0.041669870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:05:30 compute-0 systemd[1]: Started libpod-conmon-97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e.scope.
Jan 31 08:05:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd1b4215bb2d911fb99ba7b661f1b0539cde5d6d2753d1094ae500db819cdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd1b4215bb2d911fb99ba7b661f1b0539cde5d6d2753d1094ae500db819cdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd1b4215bb2d911fb99ba7b661f1b0539cde5d6d2753d1094ae500db819cdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd1b4215bb2d911fb99ba7b661f1b0539cde5d6d2753d1094ae500db819cdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:05:30 compute-0 podman[315546]: 2026-01-31 08:05:30.145856292 +0000 UTC m=+0.285105783 container init 97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:05:30 compute-0 podman[315546]: 2026-01-31 08:05:30.151780397 +0000 UTC m=+0.291029888 container start 97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:05:30 compute-0 podman[315546]: 2026-01-31 08:05:30.208941765 +0000 UTC m=+0.348191256 container attach 97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:05:30 compute-0 nova_compute[247704]: 2026-01-31 08:05:30.687 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]: {
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]:         "osd_id": 0,
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]:         "type": "bluestore"
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]:     }
Jan 31 08:05:30 compute-0 pedantic_gagarin[315562]: }
Jan 31 08:05:30 compute-0 systemd[1]: libpod-97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e.scope: Deactivated successfully.
Jan 31 08:05:30 compute-0 podman[315546]: 2026-01-31 08:05:30.973328556 +0000 UTC m=+1.112578037 container died 97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-27fd1b4215bb2d911fb99ba7b661f1b0539cde5d6d2753d1094ae500db819cdb-merged.mount: Deactivated successfully.
Jan 31 08:05:31 compute-0 podman[315546]: 2026-01-31 08:05:31.072300106 +0000 UTC m=+1.211549627 container remove 97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gagarin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:05:31 compute-0 systemd[1]: libpod-conmon-97ee6383570f27f48fb138f9f080e53a5eee15612e191323bedb294623f22b0e.scope: Deactivated successfully.
Jan 31 08:05:31 compute-0 sudo[315417]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:05:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:05:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ea49385d-095d-46ca-86ea-9e653f16fad8 does not exist
Jan 31 08:05:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 843e8ae1-bc22-40ee-ab6b-3a511ac58798 does not exist
Jan 31 08:05:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d175aed1-4be8-4b0f-939b-f0c6dd1647bc does not exist
Jan 31 08:05:31 compute-0 sudo[315597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:31 compute-0 sudo[315597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:31 compute-0 sudo[315597]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:31 compute-0 sudo[315622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:05:31 compute-0 sudo[315622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:31 compute-0 sudo[315622]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 267 op/s
Jan 31 08:05:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:31.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:31.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:05:32 compute-0 ceph-mon[74496]: pgmap v2057: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.8 MiB/s wr, 267 op/s
Jan 31 08:05:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 198 op/s
Jan 31 08:05:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:33.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:05:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:33.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:34 compute-0 nova_compute[247704]: 2026-01-31 08:05:34.231 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:34 compute-0 ceph-mon[74496]: pgmap v2058: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 198 op/s
Jan 31 08:05:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1535432295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005345012097524661 of space, bias 1.0, pg target 1.6035036292573983 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031559708263539923 of space, bias 1.0, pg target 0.9436352770798437 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:05:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Jan 31 08:05:35 compute-0 nova_compute[247704]: 2026-01-31 08:05:35.691 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:05:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:35.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:35.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/148157636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:36 compute-0 ovn_controller[149457]: 2026-01-31T08:05:36Z|00405|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 08:05:36 compute-0 nova_compute[247704]: 2026-01-31 08:05:36.702 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:37 compute-0 ceph-mon[74496]: pgmap v2059: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Jan 31 08:05:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/285771705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 31 08:05:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:37.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:37.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1753826075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:38 compute-0 ceph-mon[74496]: pgmap v2060: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 31 08:05:39 compute-0 nova_compute[247704]: 2026-01-31 08:05:39.234 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:05:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:39.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:39.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:40 compute-0 ceph-mon[74496]: pgmap v2061: 305 pgs: 305 active+clean; 372 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:05:40 compute-0 nova_compute[247704]: 2026-01-31 08:05:40.693 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:40 compute-0 podman[315652]: 2026-01-31 08:05:40.962433554 +0000 UTC m=+0.132581773 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:05:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 376 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 989 KiB/s wr, 57 op/s
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.563 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.564 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.565 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.565 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.566 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.566 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.614 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.615 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Image id 7c23949f-bba8-4466-bb79-caf568852d38 yields fingerprint ff90c10b8251df1dd96780c3025774cae23123c6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.615 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] image 7c23949f-bba8-4466-bb79-caf568852d38 at (/var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6): checking
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.615 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] image 7c23949f-bba8-4466-bb79-caf568852d38 at (/var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 31 08:05:41 compute-0 nova_compute[247704]: 2026-01-31 08:05:41.616 247708 INFO oslo.privsep.daemon [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpel5it6fi/privsep.sock']
Jan 31 08:05:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:41 compute-0 sudo[315682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:41 compute-0 sudo[315682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:41 compute-0 sudo[315682]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:41.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:41 compute-0 sudo[315707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:05:41 compute-0 sudo[315707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:41 compute-0 sudo[315707]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.442 247708 INFO oslo.privsep.daemon [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Spawned new privsep daemon via rootwrap
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.312 315733 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.316 315733 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.318 315733 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.318 315733 INFO oslo.privsep.daemon [-] privsep daemon running as pid 315733
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.543 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.543 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] 32e56536-3edb-494c-9e8b-87cfa8396dac is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.543 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] 021ee385-cfad-415f-a2ac-cdcf925fccac is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.544 247708 WARNING nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.544 247708 WARNING nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.544 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Active base files: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.544 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Removable base files: /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.544 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.545 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.545 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.545 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 31 08:05:42 compute-0 nova_compute[247704]: 2026-01-31 08:05:42.545 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 31 08:05:42 compute-0 ceph-mon[74496]: pgmap v2062: 305 pgs: 305 active+clean; 376 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 989 KiB/s wr, 57 op/s
Jan 31 08:05:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 388 MiB data, 1024 MiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 1.5 MiB/s wr, 35 op/s
Jan 31 08:05:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:43.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:43.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:05:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1434333554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:05:43 compute-0 ceph-mon[74496]: pgmap v2063: 305 pgs: 305 active+clean; 388 MiB data, 1024 MiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 1.5 MiB/s wr, 35 op/s
Jan 31 08:05:44 compute-0 nova_compute[247704]: 2026-01-31 08:05:44.236 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3064647811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:05:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3064647811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:05:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 401 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 197 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 08:05:45 compute-0 nova_compute[247704]: 2026-01-31 08:05:45.727 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:45.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:05:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:46 compute-0 ceph-mon[74496]: pgmap v2064: 305 pgs: 305 active+clean; 401 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 197 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 08:05:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:05:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:47.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:05:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:47.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:05:48 compute-0 ceph-mon[74496]: pgmap v2065: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:05:49 compute-0 nova_compute[247704]: 2026-01-31 08:05:49.028 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:49 compute-0 nova_compute[247704]: 2026-01-31 08:05:49.238 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 31 08:05:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:49 compute-0 nova_compute[247704]: 2026-01-31 08:05:49.738 247708 DEBUG nova.compute.manager [req-259250df-e1a5-4f8a-94ac-47414798769b req-c8a70929-ff95-44e7-a8ed-99ceba74495b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-changed-6dcdaf78-571b-42ba-bc17-7f6217ee6587 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:05:49 compute-0 nova_compute[247704]: 2026-01-31 08:05:49.738 247708 DEBUG nova.compute.manager [req-259250df-e1a5-4f8a-94ac-47414798769b req-c8a70929-ff95-44e7-a8ed-99ceba74495b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Refreshing instance network info cache due to event network-changed-6dcdaf78-571b-42ba-bc17-7f6217ee6587. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:05:49 compute-0 nova_compute[247704]: 2026-01-31 08:05:49.738 247708 DEBUG oslo_concurrency.lockutils [req-259250df-e1a5-4f8a-94ac-47414798769b req-c8a70929-ff95-44e7-a8ed-99ceba74495b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:05:49 compute-0 nova_compute[247704]: 2026-01-31 08:05:49.739 247708 DEBUG oslo_concurrency.lockutils [req-259250df-e1a5-4f8a-94ac-47414798769b req-c8a70929-ff95-44e7-a8ed-99ceba74495b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:05:49 compute-0 nova_compute[247704]: 2026-01-31 08:05:49.739 247708 DEBUG nova.network.neutron [req-259250df-e1a5-4f8a-94ac-47414798769b req-c8a70929-ff95-44e7-a8ed-99ceba74495b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Refreshing network info cache for port 6dcdaf78-571b-42ba-bc17-7f6217ee6587 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:05:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:49.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:49 compute-0 ceph-mon[74496]: pgmap v2066: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 31 08:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:05:50 compute-0 nova_compute[247704]: 2026-01-31 08:05:50.729 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 31 08:05:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:51.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:51.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:52 compute-0 nova_compute[247704]: 2026-01-31 08:05:52.017 247708 DEBUG nova.network.neutron [req-259250df-e1a5-4f8a-94ac-47414798769b req-c8a70929-ff95-44e7-a8ed-99ceba74495b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updated VIF entry in instance network info cache for port 6dcdaf78-571b-42ba-bc17-7f6217ee6587. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:05:52 compute-0 nova_compute[247704]: 2026-01-31 08:05:52.017 247708 DEBUG nova.network.neutron [req-259250df-e1a5-4f8a-94ac-47414798769b req-c8a70929-ff95-44e7-a8ed-99ceba74495b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [{"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:05:52 compute-0 nova_compute[247704]: 2026-01-31 08:05:52.059 247708 DEBUG oslo_concurrency.lockutils [req-259250df-e1a5-4f8a-94ac-47414798769b req-c8a70929-ff95-44e7-a8ed-99ceba74495b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-32e56536-3edb-494c-9e8b-87cfa8396dac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:05:52 compute-0 ceph-mon[74496]: pgmap v2067: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.079 247708 DEBUG nova.compute.manager [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.163 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.164 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.191 247708 DEBUG nova.objects.instance [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.212 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.213 247708 INFO nova.compute.claims [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.213 247708 DEBUG nova.objects.instance [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'resources' on Instance uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.225 247708 DEBUG nova.objects.instance [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.271 247708 INFO nova.compute.resource_tracker [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating resource usage from migration 4a86c12b-502d-451d-8b81-01c67cc03b3a
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.272 247708 DEBUG nova.compute.resource_tracker [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Starting to track incoming migration 4a86c12b-502d-451d-8b81-01c67cc03b3a with flavor e3bd1dad-95f3-4ed9-94b4-27245cd798b5 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.357 247708 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:05:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 133 op/s
Jan 31 08:05:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:05:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2722636730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:53.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.786 247708 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.793 247708 DEBUG nova.compute.provider_tree [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:05:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3561526686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.824 247708 DEBUG nova.scheduler.client.report [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:05:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.870 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:05:53 compute-0 nova_compute[247704]: 2026-01-31 08:05:53.871 247708 INFO nova.compute.manager [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Migrating
Jan 31 08:05:54 compute-0 nova_compute[247704]: 2026-01-31 08:05:54.097 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:54 compute-0 nova_compute[247704]: 2026-01-31 08:05:54.240 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:54 compute-0 podman[315763]: 2026-01-31 08:05:54.895021279 +0000 UTC m=+0.067645155 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:05:55 compute-0 ceph-mon[74496]: pgmap v2068: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 133 op/s
Jan 31 08:05:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2722636730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:05:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 727 KiB/s wr, 120 op/s
Jan 31 08:05:55 compute-0 nova_compute[247704]: 2026-01-31 08:05:55.774 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:55.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:55.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:56 compute-0 ceph-mon[74496]: pgmap v2069: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 727 KiB/s wr, 120 op/s
Jan 31 08:05:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 67 KiB/s wr, 95 op/s
Jan 31 08:05:57 compute-0 sshd-session[315784]: Accepted publickey for nova from 192.168.122.101 port 53344 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:05:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:57.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:57 compute-0 systemd-logind[816]: New session 58 of user nova.
Jan 31 08:05:57 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 08:05:57 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 08:05:57 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 08:05:57 compute-0 systemd[1]: Starting User Manager for UID 42436...
Jan 31 08:05:57 compute-0 systemd[315788]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:05:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:57.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:57 compute-0 systemd[315788]: Queued start job for default target Main User Target.
Jan 31 08:05:57 compute-0 systemd[315788]: Created slice User Application Slice.
Jan 31 08:05:57 compute-0 systemd[315788]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:05:57 compute-0 systemd[315788]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:05:57 compute-0 systemd[315788]: Reached target Paths.
Jan 31 08:05:57 compute-0 systemd[315788]: Reached target Timers.
Jan 31 08:05:57 compute-0 systemd[315788]: Starting D-Bus User Message Bus Socket...
Jan 31 08:05:58 compute-0 systemd[315788]: Starting Create User's Volatile Files and Directories...
Jan 31 08:05:58 compute-0 systemd[315788]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:05:58 compute-0 systemd[315788]: Finished Create User's Volatile Files and Directories.
Jan 31 08:05:58 compute-0 systemd[315788]: Reached target Sockets.
Jan 31 08:05:58 compute-0 systemd[315788]: Reached target Basic System.
Jan 31 08:05:58 compute-0 systemd[315788]: Reached target Main User Target.
Jan 31 08:05:58 compute-0 systemd[315788]: Startup finished in 152ms.
Jan 31 08:05:58 compute-0 systemd[1]: Started User Manager for UID 42436.
Jan 31 08:05:58 compute-0 systemd[1]: Started Session 58 of User nova.
Jan 31 08:05:58 compute-0 sshd-session[315784]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:05:58 compute-0 sshd-session[315803]: Received disconnect from 192.168.122.101 port 53344:11: disconnected by user
Jan 31 08:05:58 compute-0 sshd-session[315803]: Disconnected from user nova 192.168.122.101 port 53344
Jan 31 08:05:58 compute-0 sshd-session[315784]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:05:58 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Jan 31 08:05:58 compute-0 systemd-logind[816]: Session 58 logged out. Waiting for processes to exit.
Jan 31 08:05:58 compute-0 systemd-logind[816]: Removed session 58.
Jan 31 08:05:58 compute-0 sshd-session[315805]: Accepted publickey for nova from 192.168.122.101 port 53358 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:05:58 compute-0 systemd-logind[816]: New session 60 of user nova.
Jan 31 08:05:58 compute-0 systemd[1]: Started Session 60 of User nova.
Jan 31 08:05:58 compute-0 sshd-session[315805]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:05:58 compute-0 sshd-session[315808]: Received disconnect from 192.168.122.101 port 53358:11: disconnected by user
Jan 31 08:05:58 compute-0 sshd-session[315808]: Disconnected from user nova 192.168.122.101 port 53358
Jan 31 08:05:58 compute-0 sshd-session[315805]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:05:58 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Jan 31 08:05:58 compute-0 systemd-logind[816]: Session 60 logged out. Waiting for processes to exit.
Jan 31 08:05:58 compute-0 systemd-logind[816]: Removed session 60.
Jan 31 08:05:58 compute-0 ceph-mon[74496]: pgmap v2070: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 67 KiB/s wr, 95 op/s
Jan 31 08:05:59 compute-0 nova_compute[247704]: 2026-01-31 08:05:59.242 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:05:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 411 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 619 KiB/s wr, 91 op/s
Jan 31 08:05:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:05:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:05:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:59.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:05:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:05:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:05:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:05:59 compute-0 ceph-mon[74496]: pgmap v2071: 305 pgs: 305 active+clean; 411 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 619 KiB/s wr, 91 op/s
Jan 31 08:06:00 compute-0 nova_compute[247704]: 2026-01-31 08:06:00.788 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 422 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 905 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Jan 31 08:06:01 compute-0 nova_compute[247704]: 2026-01-31 08:06:01.523 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:01 compute-0 nova_compute[247704]: 2026-01-31 08:06:01.524 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:01 compute-0 nova_compute[247704]: 2026-01-31 08:06:01.590 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:06:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:01.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:01.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:01 compute-0 nova_compute[247704]: 2026-01-31 08:06:01.888 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:01 compute-0 nova_compute[247704]: 2026-01-31 08:06:01.888 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:01 compute-0 nova_compute[247704]: 2026-01-31 08:06:01.897 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:06:01 compute-0 nova_compute[247704]: 2026-01-31 08:06:01.898 247708 INFO nova.compute.claims [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:06:01 compute-0 sudo[315813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:01 compute-0 sudo[315813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:01 compute-0 sudo[315813]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:01 compute-0 nova_compute[247704]: 2026-01-31 08:06:01.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:01.916 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:01.917 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:06:01 compute-0 sudo[315838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:01 compute-0 sudo[315838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:01 compute-0 sudo[315838]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:02 compute-0 nova_compute[247704]: 2026-01-31 08:06:02.118 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411586841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:02 compute-0 nova_compute[247704]: 2026-01-31 08:06:02.553 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:02 compute-0 nova_compute[247704]: 2026-01-31 08:06:02.559 247708 DEBUG nova.compute.provider_tree [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:02 compute-0 ceph-mon[74496]: pgmap v2072: 305 pgs: 305 active+clean; 422 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 905 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Jan 31 08:06:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/963996101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/411586841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1697033539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:02 compute-0 nova_compute[247704]: 2026-01-31 08:06:02.678 247708 DEBUG nova.scheduler.client.report [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:02 compute-0 nova_compute[247704]: 2026-01-31 08:06:02.845 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:02 compute-0 nova_compute[247704]: 2026-01-31 08:06:02.846 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.028 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.029 247708 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.095 247708 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.178 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.366 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.367 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.368 247708 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Creating image(s)
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.403 247708 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.435 247708 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.474 247708 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.479 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 432 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 530 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.553 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.554 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.554 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.555 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.592 247708 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.596 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:03 compute-0 nova_compute[247704]: 2026-01-31 08:06:03.612 247708 DEBUG nova.policy [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '972b6e928f014e5394261f9c8655f1de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43b462f5b43d48b4a33a13b069618e4c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:06:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:03.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.021 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.122 247708 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] resizing rbd image f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.244 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.303 247708 DEBUG nova.objects.instance [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'migration_context' on Instance uuid f2056d13-c1bf-45cf-82c7-f8e1aa48472b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.358 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.359 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Ensure instance console log exists: /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.360 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.361 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.361 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:04 compute-0 ceph-mon[74496]: pgmap v2073: 305 pgs: 305 active+clean; 432 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 530 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Jan 31 08:06:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:04 compute-0 nova_compute[247704]: 2026-01-31 08:06:04.802 247708 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Successfully created port: fd1cbebb-a755-4e08-a23f-5d1d70b501d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:06:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 489 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Jan 31 08:06:05 compute-0 nova_compute[247704]: 2026-01-31 08:06:05.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:06:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:05.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:06:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:05.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:06 compute-0 nova_compute[247704]: 2026-01-31 08:06:06.107 247708 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Successfully updated port: fd1cbebb-a755-4e08-a23f-5d1d70b501d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:06:06 compute-0 nova_compute[247704]: 2026-01-31 08:06:06.143 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "refresh_cache-f2056d13-c1bf-45cf-82c7-f8e1aa48472b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:06 compute-0 nova_compute[247704]: 2026-01-31 08:06:06.143 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquired lock "refresh_cache-f2056d13-c1bf-45cf-82c7-f8e1aa48472b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:06 compute-0 nova_compute[247704]: 2026-01-31 08:06:06.143 247708 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:06:06 compute-0 nova_compute[247704]: 2026-01-31 08:06:06.363 247708 DEBUG nova.compute.manager [req-da705177-0af0-4cdb-9851-9778b707a2c9 req-4508be6c-e867-4f4f-b062-28c2045d7419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received event network-changed-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:06 compute-0 nova_compute[247704]: 2026-01-31 08:06:06.364 247708 DEBUG nova.compute.manager [req-da705177-0af0-4cdb-9851-9778b707a2c9 req-4508be6c-e867-4f4f-b062-28c2045d7419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Refreshing instance network info cache due to event network-changed-fd1cbebb-a755-4e08-a23f-5d1d70b501d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:06:06 compute-0 nova_compute[247704]: 2026-01-31 08:06:06.364 247708 DEBUG oslo_concurrency.lockutils [req-da705177-0af0-4cdb-9851-9778b707a2c9 req-4508be6c-e867-4f4f-b062-28c2045d7419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f2056d13-c1bf-45cf-82c7-f8e1aa48472b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:06 compute-0 nova_compute[247704]: 2026-01-31 08:06:06.409 247708 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:06:06 compute-0 ceph-mon[74496]: pgmap v2074: 305 pgs: 305 active+clean; 489 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 4.4 MiB/s wr, 96 op/s
Jan 31 08:06:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1254059917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1254059917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.482 247708 DEBUG nova.network.neutron [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Updating instance_info_cache with network_info: [{"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 527 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 5.7 MiB/s wr, 116 op/s
Jan 31 08:06:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:07.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:07.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.946 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Releasing lock "refresh_cache-f2056d13-c1bf-45cf-82c7-f8e1aa48472b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.947 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Instance network_info: |[{"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.947 247708 DEBUG oslo_concurrency.lockutils [req-da705177-0af0-4cdb-9851-9778b707a2c9 req-4508be6c-e867-4f4f-b062-28c2045d7419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f2056d13-c1bf-45cf-82c7-f8e1aa48472b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.947 247708 DEBUG nova.network.neutron [req-da705177-0af0-4cdb-9851-9778b707a2c9 req-4508be6c-e867-4f4f-b062-28c2045d7419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Refreshing network info cache for port fd1cbebb-a755-4e08-a23f-5d1d70b501d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.951 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Start _get_guest_xml network_info=[{"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.954 247708 WARNING nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.959 247708 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.960 247708 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.963 247708 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.963 247708 DEBUG nova.virt.libvirt.host [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.964 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.965 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.965 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.965 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.965 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.966 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.966 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.966 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.966 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.966 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.967 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.967 247708 DEBUG nova.virt.hardware [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:06:07 compute-0 nova_compute[247704]: 2026-01-31 08:06:07.970 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:07 compute-0 ceph-mon[74496]: pgmap v2075: 305 pgs: 305 active+clean; 527 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 5.7 MiB/s wr, 116 op/s
Jan 31 08:06:08 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 08:06:08 compute-0 systemd[315788]: Activating special unit Exit the Session...
Jan 31 08:06:08 compute-0 systemd[315788]: Stopped target Main User Target.
Jan 31 08:06:08 compute-0 systemd[315788]: Stopped target Basic System.
Jan 31 08:06:08 compute-0 systemd[315788]: Stopped target Paths.
Jan 31 08:06:08 compute-0 systemd[315788]: Stopped target Sockets.
Jan 31 08:06:08 compute-0 systemd[315788]: Stopped target Timers.
Jan 31 08:06:08 compute-0 systemd[315788]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:06:08 compute-0 systemd[315788]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:06:08 compute-0 systemd[315788]: Closed D-Bus User Message Bus Socket.
Jan 31 08:06:08 compute-0 systemd[315788]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:06:08 compute-0 systemd[315788]: Removed slice User Application Slice.
Jan 31 08:06:08 compute-0 systemd[315788]: Reached target Shutdown.
Jan 31 08:06:08 compute-0 systemd[315788]: Finished Exit the Session.
Jan 31 08:06:08 compute-0 systemd[315788]: Reached target Exit the Session.
Jan 31 08:06:08 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 08:06:08 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 08:06:08 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 08:06:08 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 08:06:08 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 08:06:08 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 08:06:08 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 08:06:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236691885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.452 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.487 247708 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.492 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2537159548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.931 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.933 247708 DEBUG nova.virt.libvirt.vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-2',id=105,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:03Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=f2056d13-c1bf-45cf-82c7-f8e1aa48472b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.933 247708 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.934 247708 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:52:a3,bridge_name='br-int',has_traffic_filtering=True,id=fd1cbebb-a755-4e08-a23f-5d1d70b501d0,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd1cbebb-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.936 247708 DEBUG nova.objects.instance [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'pci_devices' on Instance uuid f2056d13-c1bf-45cf-82c7-f8e1aa48472b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.971 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <uuid>f2056d13-c1bf-45cf-82c7-f8e1aa48472b</uuid>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <name>instance-00000069</name>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <nova:name>tempest-ListServersNegativeTestJSON-server-19898720-2</nova:name>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:06:07</nova:creationTime>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <nova:user uuid="972b6e928f014e5394261f9c8655f1de">tempest-ListServersNegativeTestJSON-1511652820-project-member</nova:user>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <nova:project uuid="43b462f5b43d48b4a33a13b069618e4c">tempest-ListServersNegativeTestJSON-1511652820</nova:project>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <nova:port uuid="fd1cbebb-a755-4e08-a23f-5d1d70b501d0">
Jan 31 08:06:08 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <system>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <entry name="serial">f2056d13-c1bf-45cf-82c7-f8e1aa48472b</entry>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <entry name="uuid">f2056d13-c1bf-45cf-82c7-f8e1aa48472b</entry>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </system>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <os>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   </os>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <features>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   </features>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk">
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       </source>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk.config">
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       </source>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:06:08 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:ac:52:a3"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <target dev="tapfd1cbebb-a7"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b/console.log" append="off"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <video>
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </video>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:06:08 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:06:08 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:06:08 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:06:08 compute-0 nova_compute[247704]: </domain>
Jan 31 08:06:08 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.972 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Preparing to wait for external event network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.973 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.973 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.973 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.974 247708 DEBUG nova.virt.libvirt.vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-2',id=105,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:03Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=f2056d13-c1bf-45cf-82c7-f8e1aa48472b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.974 247708 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.974 247708 DEBUG nova.network.os_vif_util [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:52:a3,bridge_name='br-int',has_traffic_filtering=True,id=fd1cbebb-a755-4e08-a23f-5d1d70b501d0,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd1cbebb-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.975 247708 DEBUG os_vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:52:a3,bridge_name='br-int',has_traffic_filtering=True,id=fd1cbebb-a755-4e08-a23f-5d1d70b501d0,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd1cbebb-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.975 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.976 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.976 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.978 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.979 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd1cbebb-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.979 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd1cbebb-a7, col_values=(('external_ids', {'iface-id': 'fd1cbebb-a755-4e08-a23f-5d1d70b501d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:52:a3', 'vm-uuid': 'f2056d13-c1bf-45cf-82c7-f8e1aa48472b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:08 compute-0 NetworkManager[49108]: <info>  [1769846768.9822] manager: (tapfd1cbebb-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.984 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.987 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:08 compute-0 nova_compute[247704]: 2026-01-31 08:06:08.989 247708 INFO os_vif [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:52:a3,bridge_name='br-int',has_traffic_filtering=True,id=fd1cbebb-a755-4e08-a23f-5d1d70b501d0,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd1cbebb-a7')
Jan 31 08:06:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1421020183' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1421020183' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4236691885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2537159548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.070 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.070 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.070 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] No VIF found with MAC fa:16:3e:ac:52:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.071 247708 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Using config drive
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.098 247708 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.247 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 7.5 MiB/s wr, 178 op/s
Jan 31 08:06:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.778 247708 DEBUG nova.network.neutron [req-da705177-0af0-4cdb-9851-9778b707a2c9 req-4508be6c-e867-4f4f-b062-28c2045d7419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Updated VIF entry in instance network info cache for port fd1cbebb-a755-4e08-a23f-5d1d70b501d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.779 247708 DEBUG nova.network.neutron [req-da705177-0af0-4cdb-9851-9778b707a2c9 req-4508be6c-e867-4f4f-b062-28c2045d7419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Updating instance_info_cache with network_info: [{"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:09.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.814 247708 DEBUG oslo_concurrency.lockutils [req-da705177-0af0-4cdb-9851-9778b707a2c9 req-4508be6c-e867-4f4f-b062-28c2045d7419 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f2056d13-c1bf-45cf-82c7-f8e1aa48472b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.822 247708 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Creating config drive at /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b/disk.config
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.826 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpk1k3b_lm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:09.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.960 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpk1k3b_lm" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:09 compute-0 nova_compute[247704]: 2026-01-31 08:06:09.995 247708 DEBUG nova.storage.rbd_utils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] rbd image f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.001 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b/disk.config f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1553681548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:10 compute-0 ceph-mon[74496]: pgmap v2076: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 7.5 MiB/s wr, 178 op/s
Jan 31 08:06:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3616744025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3616744025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.187 247708 DEBUG oslo_concurrency.processutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b/disk.config f2056d13-c1bf-45cf-82c7-f8e1aa48472b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.189 247708 INFO nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Deleting local config drive /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b/disk.config because it was imported into RBD.
Jan 31 08:06:10 compute-0 NetworkManager[49108]: <info>  [1769846770.2280] manager: (tapfd1cbebb-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Jan 31 08:06:10 compute-0 kernel: tapfd1cbebb-a7: entered promiscuous mode
Jan 31 08:06:10 compute-0 ovn_controller[149457]: 2026-01-31T08:06:10Z|00406|binding|INFO|Claiming lport fd1cbebb-a755-4e08-a23f-5d1d70b501d0 for this chassis.
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.230 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-0 ovn_controller[149457]: 2026-01-31T08:06:10Z|00407|binding|INFO|fd1cbebb-a755-4e08-a23f-5d1d70b501d0: Claiming fa:16:3e:ac:52:a3 10.100.0.6
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.237 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-0 ovn_controller[149457]: 2026-01-31T08:06:10Z|00408|binding|INFO|Setting lport fd1cbebb-a755-4e08-a23f-5d1d70b501d0 ovn-installed in OVS
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.238 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.240 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-0 systemd-udevd[316190]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:06:10 compute-0 systemd-machined[214448]: New machine qemu-42-instance-00000069.
Jan 31 08:06:10 compute-0 NetworkManager[49108]: <info>  [1769846770.2675] device (tapfd1cbebb-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:06:10 compute-0 NetworkManager[49108]: <info>  [1769846770.2682] device (tapfd1cbebb-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:06:10 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000069.
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.283 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:52:a3 10.100.0.6'], port_security=['fa:16:3e:ac:52:a3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f2056d13-c1bf-45cf-82c7-f8e1aa48472b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43b462f5b43d48b4a33a13b069618e4c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '001ae016-61eb-444d-a215-8f70012d923a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aaf32b66-0fe8-4826-8186-77a88483534c, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=fd1cbebb-a755-4e08-a23f-5d1d70b501d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:10 compute-0 ovn_controller[149457]: 2026-01-31T08:06:10Z|00409|binding|INFO|Setting lport fd1cbebb-a755-4e08-a23f-5d1d70b501d0 up in Southbound
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.284 160028 INFO neutron.agent.ovn.metadata.agent [-] Port fd1cbebb-a755-4e08-a23f-5d1d70b501d0 in datapath f3091b0d-0fc9-4172-b2af-6d9c678c6569 bound to our chassis
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.286 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3091b0d-0fc9-4172-b2af-6d9c678c6569
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.293 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfe7495-44c0-4fbe-9d4b-36ac5dc348a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.294 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3091b0d-01 in ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.296 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3091b0d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.296 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2a96d6f7-4062-4db9-ba62-b9cdd697b063]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.296 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[14307070-a54d-4548-b6d7-0090fd1e69aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.307 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[178ecd8b-78bb-4f06-88b8-8792a70328bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.330 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcda32c-8188-4ce9-8a75-a00a31dcaeef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.362 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e677bf-85b0-4c53-bc00-aad10082ecd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.368 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f51cd7-3531-43cf-b8d3-95d8affeefc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 NetworkManager[49108]: <info>  [1769846770.3698] manager: (tapf3091b0d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/196)
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.396 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc63fc6-cb5c-43f5-8063-f079be94bab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.399 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ff47ab7c-74f0-4c80-adb4-5155e71fb080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 NetworkManager[49108]: <info>  [1769846770.4229] device (tapf3091b0d-00): carrier: link connected
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.429 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[44c3d898-6e40-421d-8395-c57f4ff38bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.447 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[938547ac-5a33-43d3-b1b8-d14692d556a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3091b0d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701304, 'reachable_time': 43340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316224, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.466 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[157d9363-fd61-4bd9-a32f-b49228f2a05f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:a99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 701304, 'tstamp': 701304}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316225, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.483 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b0fd2f-7fcd-4aae-a8f6-77b1f8fd9379]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3091b0d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701304, 'reachable_time': 43340, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316226, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.511 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7670b6a8-adee-4306-9c75-64368cc95b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.566 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7e78ae-7566-491d-9d54-ae6dfac664b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.567 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3091b0d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.567 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.567 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3091b0d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.569 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-0 NetworkManager[49108]: <info>  [1769846770.5705] manager: (tapf3091b0d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Jan 31 08:06:10 compute-0 kernel: tapf3091b0d-00: entered promiscuous mode
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.572 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3091b0d-00, col_values=(('external_ids', {'iface-id': '67f0642a-d8ea-421d-83b2-8b692f5bc044'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:10 compute-0 ovn_controller[149457]: 2026-01-31T08:06:10Z|00410|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.573 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.575 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.577 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.579 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a558e591-4a41-4a04-aa23-36d73fee3985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.581 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.581 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-f3091b0d-0fc9-4172-b2af-6d9c678c6569
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/f3091b0d-0fc9-4172-b2af-6d9c678c6569.pid.haproxy
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID f3091b0d-0fc9-4172-b2af-6d9c678c6569
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:06:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:10.582 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'env', 'PROCESS_TAG=haproxy-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3091b0d-0fc9-4172-b2af-6d9c678c6569.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.711 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846770.7110286, f2056d13-c1bf-45cf-82c7-f8e1aa48472b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.712 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] VM Started (Lifecycle Event)
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.821 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.827 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846770.7139347, f2056d13-c1bf-45cf-82c7-f8e1aa48472b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.827 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] VM Paused (Lifecycle Event)
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.892 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:10 compute-0 nova_compute[247704]: 2026-01-31 08:06:10.896 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:10 compute-0 podman[316301]: 2026-01-31 08:06:10.938210914 +0000 UTC m=+0.051519110 container create 5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:10 compute-0 systemd[1]: Started libpod-conmon-5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4.scope.
Jan 31 08:06:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0b6cbb1e5e46dda71e1522703fa9a3dcdc980afc8113113052a5b72892e2c1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 podman[316301]: 2026-01-31 08:06:10.907033192 +0000 UTC m=+0.020341408 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:06:11 compute-0 podman[316301]: 2026-01-31 08:06:11.023988862 +0000 UTC m=+0.137297088 container init 5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:06:11 compute-0 podman[316301]: 2026-01-31 08:06:11.028983734 +0000 UTC m=+0.142291940 container start 5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:06:11 compute-0 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[316317]: [NOTICE]   (316336) : New worker (316344) forked
Jan 31 08:06:11 compute-0 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[316317]: [NOTICE]   (316336) : Loading success.
Jan 31 08:06:11 compute-0 podman[316319]: 2026-01-31 08:06:11.105805652 +0000 UTC m=+0.111185719 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 31 08:06:11 compute-0 nova_compute[247704]: 2026-01-31 08:06:11.167 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:06:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:11.177 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:11.178 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:11.179 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 563 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 441 KiB/s rd, 6.9 MiB/s wr, 177 op/s
Jan 31 08:06:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:11.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:11.919 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:12 compute-0 ceph-mon[74496]: pgmap v2077: 305 pgs: 305 active+clean; 563 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 441 KiB/s rd, 6.9 MiB/s wr, 177 op/s
Jan 31 08:06:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/219294127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 6.0 MiB/s wr, 170 op/s
Jan 31 08:06:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:13.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:13.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2044255212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:13 compute-0 ceph-mon[74496]: pgmap v2078: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 6.0 MiB/s wr, 170 op/s
Jan 31 08:06:13 compute-0 nova_compute[247704]: 2026-01-31 08:06:13.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:14 compute-0 nova_compute[247704]: 2026-01-31 08:06:14.250 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/551986456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/594456422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 5.5 MiB/s wr, 166 op/s
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.550 247708 DEBUG nova.compute.manager [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-unplugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.551 247708 DEBUG oslo_concurrency.lockutils [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.552 247708 DEBUG oslo_concurrency.lockutils [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.552 247708 DEBUG oslo_concurrency.lockutils [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.553 247708 DEBUG nova.compute.manager [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-unplugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.553 247708 WARNING nova.compute.manager [req-2034425e-f398-4d3d-b548-24e5af0f69f8 req-eaecf80f-38a6-4906-906e-c4fb0b501344 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-unplugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state active and task_state resize_migrating.
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.554 247708 DEBUG nova.compute.manager [req-09d51807-7991-4111-8032-ec9e5d1adc55 req-81b6c31e-ec8a-45e4-835f-14c5e42435c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received event network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.554 247708 DEBUG oslo_concurrency.lockutils [req-09d51807-7991-4111-8032-ec9e5d1adc55 req-81b6c31e-ec8a-45e4-835f-14c5e42435c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.554 247708 DEBUG oslo_concurrency.lockutils [req-09d51807-7991-4111-8032-ec9e5d1adc55 req-81b6c31e-ec8a-45e4-835f-14c5e42435c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.554 247708 DEBUG oslo_concurrency.lockutils [req-09d51807-7991-4111-8032-ec9e5d1adc55 req-81b6c31e-ec8a-45e4-835f-14c5e42435c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.555 247708 DEBUG nova.compute.manager [req-09d51807-7991-4111-8032-ec9e5d1adc55 req-81b6c31e-ec8a-45e4-835f-14c5e42435c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Processing event network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.555 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.559 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846775.5595505, f2056d13-c1bf-45cf-82c7-f8e1aa48472b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.560 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] VM Resumed (Lifecycle Event)
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.561 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.564 247708 INFO nova.virt.libvirt.driver [-] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Instance spawned successfully.
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.565 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.754 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.754 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.755 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.755 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.756 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.756 247708 DEBUG nova.virt.libvirt.driver [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.761 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.764 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.833 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:06:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:15.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.904 247708 INFO nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Took 12.54 seconds to spawn the instance on the hypervisor.
Jan 31 08:06:15 compute-0 nova_compute[247704]: 2026-01-31 08:06:15.904 247708 DEBUG nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:16 compute-0 nova_compute[247704]: 2026-01-31 08:06:16.004 247708 INFO nova.compute.manager [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Took 14.16 seconds to build instance.
Jan 31 08:06:16 compute-0 nova_compute[247704]: 2026-01-31 08:06:16.045 247708 DEBUG oslo_concurrency.lockutils [None req-0a02f07e-cece-4c82-b3a6-0d1aa14d11eb 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:16 compute-0 ceph-mon[74496]: pgmap v2079: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 5.5 MiB/s wr, 166 op/s
Jan 31 08:06:16 compute-0 nova_compute[247704]: 2026-01-31 08:06:16.688 247708 INFO nova.network.neutron [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating port 67a82948-b72f-49d6-b07c-3058947bd453 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:06:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 3.1 MiB/s wr, 110 op/s
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.747 247708 DEBUG nova.compute.manager [req-64b6908d-cd9f-4d2b-9256-115c41cfdba9 req-9300967a-1c43-4d2f-b20e-1f793c0e2dd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received event network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.748 247708 DEBUG oslo_concurrency.lockutils [req-64b6908d-cd9f-4d2b-9256-115c41cfdba9 req-9300967a-1c43-4d2f-b20e-1f793c0e2dd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.748 247708 DEBUG oslo_concurrency.lockutils [req-64b6908d-cd9f-4d2b-9256-115c41cfdba9 req-9300967a-1c43-4d2f-b20e-1f793c0e2dd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.749 247708 DEBUG oslo_concurrency.lockutils [req-64b6908d-cd9f-4d2b-9256-115c41cfdba9 req-9300967a-1c43-4d2f-b20e-1f793c0e2dd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.749 247708 DEBUG nova.compute.manager [req-64b6908d-cd9f-4d2b-9256-115c41cfdba9 req-9300967a-1c43-4d2f-b20e-1f793c0e2dd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] No waiting events found dispatching network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.749 247708 WARNING nova.compute.manager [req-64b6908d-cd9f-4d2b-9256-115c41cfdba9 req-9300967a-1c43-4d2f-b20e-1f793c0e2dd4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received unexpected event network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 for instance with vm_state active and task_state None.
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.764 247708 DEBUG nova.compute.manager [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.765 247708 DEBUG oslo_concurrency.lockutils [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.765 247708 DEBUG oslo_concurrency.lockutils [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.766 247708 DEBUG oslo_concurrency.lockutils [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.766 247708 DEBUG nova.compute.manager [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.766 247708 WARNING nova.compute.manager [req-99543007-a0fd-4406-9827-86c0598cafbb req-34513819-1852-4544-9801-8321bb6bac43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:06:17 compute-0 nova_compute[247704]: 2026-01-31 08:06:17.769 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:06:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:17.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:18 compute-0 ceph-mon[74496]: pgmap v2080: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 3.1 MiB/s wr, 110 op/s
Jan 31 08:06:18 compute-0 nova_compute[247704]: 2026-01-31 08:06:18.985 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:19 compute-0 nova_compute[247704]: 2026-01-31 08:06:19.254 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Jan 31 08:06:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:19 compute-0 nova_compute[247704]: 2026-01-31 08:06:19.770 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:19.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:19.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:19 compute-0 nova_compute[247704]: 2026-01-31 08:06:19.991 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:19 compute-0 nova_compute[247704]: 2026-01-31 08:06:19.992 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquired lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:19 compute-0 nova_compute[247704]: 2026-01-31 08:06:19.992 247708 DEBUG nova.network.neutron [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:06:20
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.meta']
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:06:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:06:20 compute-0 ceph-mon[74496]: pgmap v2081: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Jan 31 08:06:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 84 KiB/s wr, 137 op/s
Jan 31 08:06:21 compute-0 ovn_controller[149457]: 2026-01-31T08:06:21Z|00411|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 08:06:21 compute-0 ovn_controller[149457]: 2026-01-31T08:06:21Z|00412|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 08:06:21 compute-0 nova_compute[247704]: 2026-01-31 08:06:21.735 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:06:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:21.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:06:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:22 compute-0 sudo[316364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:22 compute-0 sudo[316364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:22 compute-0 sudo[316364]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:22 compute-0 sudo[316389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:22 compute-0 sudo[316389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:22 compute-0 sudo[316389]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:22 compute-0 ceph-mon[74496]: pgmap v2082: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 84 KiB/s wr, 137 op/s
Jan 31 08:06:22 compute-0 nova_compute[247704]: 2026-01-31 08:06:22.187 247708 DEBUG nova.compute.manager [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-changed-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:22 compute-0 nova_compute[247704]: 2026-01-31 08:06:22.188 247708 DEBUG nova.compute.manager [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Refreshing instance network info cache due to event network-changed-67a82948-b72f-49d6-b07c-3058947bd453. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:06:22 compute-0 nova_compute[247704]: 2026-01-31 08:06:22.188 247708 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:22 compute-0 nova_compute[247704]: 2026-01-31 08:06:22.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:22 compute-0 nova_compute[247704]: 2026-01-31 08:06:22.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 66 KiB/s wr, 167 op/s
Jan 31 08:06:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.833 247708 DEBUG nova.network.neutron [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:23.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:23.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.895 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Releasing lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.899 247708 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.900 247708 DEBUG nova.network.neutron [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Refreshing network info cache for port 67a82948-b72f-49d6-b07c-3058947bd453 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.968 247708 DEBUG os_brick.utils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.969 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.979 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.979 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd5b37e-8525-4a84-8a95-7faa0281d18b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.981 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.986 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.986 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[09c90aef-8247-4e05-a83a-7dcd8f885216]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.988 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.988 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.996 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.997 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[d01a3840-9275-4f49-9b63-362ea4d5f201]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:23 compute-0 nova_compute[247704]: 2026-01-31 08:06:23.998 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4a98fd-5ee1-4a3a-b6bb-ed49a594f51c]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.000 247708 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.021 247708 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.023 247708 DEBUG os_brick.initiator.connectors.lightos [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.023 247708 DEBUG os_brick.initiator.connectors.lightos [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.024 247708 DEBUG os_brick.initiator.connectors.lightos [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.024 247708 DEBUG os_brick.utils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.256 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:24 compute-0 nova_compute[247704]: 2026-01-31 08:06:24.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:06:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 31 08:06:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 31 08:06:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 31 08:06:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:24 compute-0 ceph-mon[74496]: pgmap v2083: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 66 KiB/s wr, 167 op/s
Jan 31 08:06:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/46344885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.761320) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784761382, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1809, "num_deletes": 252, "total_data_size": 3026789, "memory_usage": 3068360, "flush_reason": "Manual Compaction"}
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784807524, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 1805591, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43843, "largest_seqno": 45650, "table_properties": {"data_size": 1799383, "index_size": 3154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16854, "raw_average_key_size": 21, "raw_value_size": 1785465, "raw_average_value_size": 2254, "num_data_blocks": 141, "num_entries": 792, "num_filter_entries": 792, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846621, "oldest_key_time": 1769846621, "file_creation_time": 1769846784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 46265 microseconds, and 3519 cpu microseconds.
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.807581) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 1805591 bytes OK
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.807609) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.809374) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.809392) EVENT_LOG_v1 {"time_micros": 1769846784809386, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.809412) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3019202, prev total WAL file size 3019202, number of live WAL files 2.
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.810050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353038' seq:72057594037927935, type:22 .. '6D6772737461740031373539' seq:0, type:0; will stop at (end)
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(1763KB)], [95(10MB)]
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784810113, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 13260742, "oldest_snapshot_seqno": -1}
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7527 keys, 10585282 bytes, temperature: kUnknown
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784926900, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10585282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10536949, "index_size": 28381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 192778, "raw_average_key_size": 25, "raw_value_size": 10404618, "raw_average_value_size": 1382, "num_data_blocks": 1128, "num_entries": 7527, "num_filter_entries": 7527, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769846784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.927217) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10585282 bytes
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.935265) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.5 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(13.2) write-amplify(5.9) OK, records in: 7971, records dropped: 444 output_compression: NoCompression
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.935307) EVENT_LOG_v1 {"time_micros": 1769846784935292, "job": 56, "event": "compaction_finished", "compaction_time_micros": 116873, "compaction_time_cpu_micros": 22094, "output_level": 6, "num_output_files": 1, "total_output_size": 10585282, "num_input_records": 7971, "num_output_records": 7527, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784935633, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846784937016, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.809954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.937091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.937097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.937099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.937100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:24.937102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:25 compute-0 nova_compute[247704]: 2026-01-31 08:06:25.290 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:06:25 compute-0 nova_compute[247704]: 2026-01-31 08:06:25.290 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:06:25 compute-0 nova_compute[247704]: 2026-01-31 08:06:25.291 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:06:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 59 KiB/s wr, 254 op/s
Jan 31 08:06:25 compute-0 ceph-mon[74496]: osdmap e260: 3 total, 3 up, 3 in
Jan 31 08:06:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2562189325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/207896373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1547766470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:25.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:25 compute-0 podman[316424]: 2026-01-31 08:06:25.882135249 +0000 UTC m=+0.049482562 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:06:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:25.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.167 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.170 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.170 247708 INFO nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Creating image(s)
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.171 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.172 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Ensure instance console log exists: /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.172 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.173 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.173 247708 DEBUG oslo_concurrency.lockutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.176 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Start _get_guest_xml network_info=[{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1343233929-network", "vif_mac": "fa:16:3e:48:8d:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'ded4e542-1993-4756-afd5-73d156885b75', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0eed4460-dbe8-45e7-8d1d-2c1a6334d70e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0eed4460-dbe8-45e7-8d1d-2c1a6334d70e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '0f94fbbc-a8e1-4d6e-838f-925bcbdf538e', 'attached_at': '2026-01-31T08:06:25.000000', 'detached_at': '', 'volume_id': '0eed4460-dbe8-45e7-8d1d-2c1a6334d70e', 'serial': '0eed4460-dbe8-45e7-8d1d-2c1a6334d70e'}, 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.181 247708 WARNING nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.190 247708 DEBUG nova.virt.libvirt.host [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.191 247708 DEBUG nova.virt.libvirt.host [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.201 247708 DEBUG nova.virt.libvirt.host [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.202 247708 DEBUG nova.virt.libvirt.host [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.204 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.204 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e3bd1dad-95f3-4ed9-94b4-27245cd798b5',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.205 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.205 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.205 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.206 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.206 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.206 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.207 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.207 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.207 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.208 247708 DEBUG nova.virt.hardware [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.208 247708 DEBUG nova.objects.instance [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.259 247708 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.369 247708 DEBUG nova.network.neutron [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updated VIF entry in instance network info cache for port 67a82948-b72f-49d6-b07c-3058947bd453. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.369 247708 DEBUG nova.network.neutron [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [{"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.393 247708 DEBUG oslo_concurrency.lockutils [req-0f08a54c-eb06-4727-8dc3-6254218b742e req-aa7befd2-3721-4a13-ab53-3c7b313ee71d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:06:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1441244861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.744 247708 DEBUG oslo_concurrency.processutils [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.774 247708 DEBUG nova.virt.libvirt.vif [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-372861281',display_name='tempest-ServerActionsTestOtherA-server-372861281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-372861281',id=103,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-sctr4iai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=0f94fbbc-a8e1-4d6e-838f-925bcbdf538e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1343233929-network", "vif_mac": "fa:16:3e:48:8d:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.775 247708 DEBUG nova.network.os_vif_util [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1343233929-network", "vif_mac": "fa:16:3e:48:8d:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.776 247708 DEBUG nova.network.os_vif_util [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.779 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <uuid>0f94fbbc-a8e1-4d6e-838f-925bcbdf538e</uuid>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <name>instance-00000067</name>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <memory>196608</memory>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerActionsTestOtherA-server-372861281</nova:name>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:06:26</nova:creationTime>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <nova:flavor name="m1.micro">
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <nova:memory>192</nova:memory>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <nova:user uuid="31043e345f6b48b585fb7b8ab7304764">tempest-ServerActionsTestOtherA-527878807-project-member</nova:user>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <nova:project uuid="d352316ff6534075952e2d0c28061b09">tempest-ServerActionsTestOtherA-527878807</nova:project>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <nova:port uuid="67a82948-b72f-49d6-b07c-3058947bd453">
Jan 31 08:06:26 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <system>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <entry name="serial">0f94fbbc-a8e1-4d6e-838f-925bcbdf538e</entry>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <entry name="uuid">0f94fbbc-a8e1-4d6e-838f-925bcbdf538e</entry>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </system>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <os>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   </os>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <features>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   </features>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_disk.config">
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       </source>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-0eed4460-dbe8-45e7-8d1d-2c1a6334d70e">
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       </source>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:06:26 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <serial>0eed4460-dbe8-45e7-8d1d-2c1a6334d70e</serial>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:48:8d:f4"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <target dev="tap67a82948-b7"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e/console.log" append="off"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <video>
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </video>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:06:26 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:06:26 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:06:26 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:06:26 compute-0 nova_compute[247704]: </domain>
Jan 31 08:06:26 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.781 247708 DEBUG nova.virt.libvirt.vif [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-372861281',display_name='tempest-ServerActionsTestOtherA-server-372861281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-372861281',id=103,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-sctr4iai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=0f94fbbc-a8e1-4d6e-838f-925bcbdf538e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1343233929-network", "vif_mac": "fa:16:3e:48:8d:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.781 247708 DEBUG nova.network.os_vif_util [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1343233929-network", "vif_mac": "fa:16:3e:48:8d:f4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.782 247708 DEBUG nova.network.os_vif_util [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.782 247708 DEBUG os_vif [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.783 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.783 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.784 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.787 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.787 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67a82948-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.788 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67a82948-b7, col_values=(('external_ids', {'iface-id': '67a82948-b72f-49d6-b07c-3058947bd453', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:8d:f4', 'vm-uuid': '0f94fbbc-a8e1-4d6e-838f-925bcbdf538e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.789 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-0 NetworkManager[49108]: <info>  [1769846786.7913] manager: (tap67a82948-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.792 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.798 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.799 247708 INFO os_vif [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7')
Jan 31 08:06:26 compute-0 ceph-mon[74496]: pgmap v2085: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 59 KiB/s wr, 254 op/s
Jan 31 08:06:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1149175025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1441244861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.951 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.951 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.952 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] No VIF found with MAC fa:16:3e:48:8d:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:06:26 compute-0 nova_compute[247704]: 2026-01-31 08:06:26.952 247708 INFO nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Using config drive
Jan 31 08:06:27 compute-0 NetworkManager[49108]: <info>  [1769846787.0547] manager: (tap67a82948-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/199)
Jan 31 08:06:27 compute-0 kernel: tap67a82948-b7: entered promiscuous mode
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.058 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:27 compute-0 ovn_controller[149457]: 2026-01-31T08:06:27Z|00413|binding|INFO|Claiming lport 67a82948-b72f-49d6-b07c-3058947bd453 for this chassis.
Jan 31 08:06:27 compute-0 ovn_controller[149457]: 2026-01-31T08:06:27Z|00414|binding|INFO|67a82948-b72f-49d6-b07c-3058947bd453: Claiming fa:16:3e:48:8d:f4 10.100.0.6
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.065 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:8d:f4 10.100.0.6'], port_security=['fa:16:3e:48:8d:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0f94fbbc-a8e1-4d6e-838f-925bcbdf538e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd8897503-0019-487c-aacb-6eb623a53e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=67a82948-b72f-49d6-b07c-3058947bd453) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:27 compute-0 ovn_controller[149457]: 2026-01-31T08:06:27Z|00415|binding|INFO|Setting lport 67a82948-b72f-49d6-b07c-3058947bd453 ovn-installed in OVS
Jan 31 08:06:27 compute-0 ovn_controller[149457]: 2026-01-31T08:06:27Z|00416|binding|INFO|Setting lport 67a82948-b72f-49d6-b07c-3058947bd453 up in Southbound
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.068 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.067 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 67a82948-b72f-49d6-b07c-3058947bd453 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.068 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.085 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e8735d3e-64b2-4a1d-aad6-1a31771dbb23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:27 compute-0 systemd-udevd[316517]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:06:27 compute-0 systemd-machined[214448]: New machine qemu-43-instance-00000067.
Jan 31 08:06:27 compute-0 NetworkManager[49108]: <info>  [1769846787.1041] device (tap67a82948-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:06:27 compute-0 NetworkManager[49108]: <info>  [1769846787.1046] device (tap67a82948-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:06:27 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000067.
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.121 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[31e5548f-c87a-4c0b-a2e4-0428e13a3bd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.124 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c4440e-93a6-4236-8c79-4ba41cdc64f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.148 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb713c2-9abf-4288-b48d-0fb68e71a77d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.164 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d3df205e-e67a-440f-bc6b-bb3489921fa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674899, 'reachable_time': 41105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316527, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.183 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd80bc2-6073-41e0-b808-f3fc1bd314e5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79cb2b81-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674908, 'tstamp': 674908}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316530, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79cb2b81-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674910, 'tstamp': 674910}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316530, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.186 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.188 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.191 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.191 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.192 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:27 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:27.192 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 57 KiB/s wr, 259 op/s
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.774 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Updating instance_info_cache with network_info: [{"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.808 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-021ee385-cfad-415f-a2ac-cdcf925fccac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.809 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.809 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.810 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.810 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.810 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.822 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846787.8219862, 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.822 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] VM Resumed (Lifecycle Event)
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.826 247708 DEBUG nova.compute.manager [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.831 247708 INFO nova.virt.libvirt.driver [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance running successfully.
Jan 31 08:06:27 compute-0 virtqemud[247621]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:06:27 compute-0 ceph-mon[74496]: pgmap v2086: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 57 KiB/s wr, 259 op/s
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.834 247708 DEBUG nova.virt.libvirt.guest [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.834 247708 DEBUG nova.virt.libvirt.driver [None req-da243f15-a388-4c93-8626-6f60a5ba70d2 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 08:06:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:27.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.862 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.862 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.862 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.863 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.863 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:27.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.884 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.889 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.959 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.959 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846787.8255653, 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:27 compute-0 nova_compute[247704]: 2026-01-31 08:06:27.959 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] VM Started (Lifecycle Event)
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.010 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.016 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.075 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:06:28 compute-0 ovn_controller[149457]: 2026-01-31T08:06:28Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:52:a3 10.100.0.6
Jan 31 08:06:28 compute-0 ovn_controller[149457]: 2026-01-31T08:06:28Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:52:a3 10.100.0.6
Jan 31 08:06:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759180012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.341 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.450 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.451 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.455 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.455 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.459 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.459 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.463 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.463 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.653 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.655 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3808MB free_disk=20.788639068603516GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.655 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.656 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.722 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Applying migration context for instance 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e as it has an incoming, in-progress migration 4a86c12b-502d-451d-8b81-01c67cc03b3a. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.723 247708 INFO nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating resource usage from migration 4a86c12b-502d-451d-8b81-01c67cc03b3a
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.749 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 32e56536-3edb-494c-9e8b-87cfa8396dac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.749 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 021ee385-cfad-415f-a2ac-cdcf925fccac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.749 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.749 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance f2056d13-c1bf-45cf-82c7-f8e1aa48472b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.750 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.750 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1088MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.779 247708 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.779 247708 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.780 247708 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.780 247708 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.781 247708 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.781 247708 WARNING nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state resized and task_state None.
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.781 247708 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.782 247708 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.782 247708 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.782 247708 DEBUG oslo_concurrency.lockutils [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.782 247708 DEBUG nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] No waiting events found dispatching network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.783 247708 WARNING nova.compute.manager [req-b67728cc-3468-4f3d-b745-f65e2a439168 req-5a54fdd4-7be2-455e-adb7-7d962672cd8b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received unexpected event network-vif-plugged-67a82948-b72f-49d6-b07c-3058947bd453 for instance with vm_state resized and task_state None.
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.837 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.924 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.924 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.941 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:06:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2964881060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3759180012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:28 compute-0 nova_compute[247704]: 2026-01-31 08:06:28.981 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.235 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.258 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 31K writes, 121K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 10K syncs, 2.91 writes per sync, written: 0.11 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5344 writes, 19K keys, 5344 commit groups, 1.0 writes per commit group, ingest: 21.33 MB, 0.04 MB/s
                                           Interval WAL: 5344 writes, 2108 syncs, 2.54 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:06:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.1 MiB/s wr, 349 op/s
Jan 31 08:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999485390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.725 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.731 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:06:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:29.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.852 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:29.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.897 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.898 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.898 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:29 compute-0 nova_compute[247704]: 2026-01-31 08:06:29.899 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:06:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3666056923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:30 compute-0 ceph-mon[74496]: pgmap v2087: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.1 MiB/s wr, 349 op/s
Jan 31 08:06:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3999485390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:30 compute-0 nova_compute[247704]: 2026-01-31 08:06:30.675 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:30 compute-0 nova_compute[247704]: 2026-01-31 08:06:30.675 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/599890779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:31 compute-0 ovn_controller[149457]: 2026-01-31T08:06:31Z|00417|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 08:06:31 compute-0 ovn_controller[149457]: 2026-01-31T08:06:31Z|00418|binding|INFO|Releasing lport 67f0642a-d8ea-421d-83b2-8b692f5bc044 from this chassis (sb_readonly=0)
Jan 31 08:06:31 compute-0 nova_compute[247704]: 2026-01-31 08:06:31.338 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 2.5 MiB/s wr, 378 op/s
Jan 31 08:06:31 compute-0 sudo[316622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:31 compute-0 sudo[316622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:31 compute-0 sudo[316622]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:31 compute-0 sudo[316647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:31 compute-0 sudo[316647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:31 compute-0 sudo[316647]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:31 compute-0 sudo[316672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:31 compute-0 sudo[316672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:31 compute-0 sudo[316672]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:31 compute-0 sudo[316697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:06:31 compute-0 sudo[316697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:31 compute-0 nova_compute[247704]: 2026-01-31 08:06:31.791 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:31.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:31.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:32 compute-0 sudo[316697]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:32 compute-0 sudo[316753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:32 compute-0 sudo[316753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:32 compute-0 sudo[316753]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:32 compute-0 sudo[316778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:32 compute-0 sudo[316778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:32 compute-0 sudo[316778]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:32 compute-0 sudo[316803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:32 compute-0 sudo[316803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:32 compute-0 sudo[316803]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:32 compute-0 sudo[316828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 08:06:32 compute-0 sudo[316828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:32 compute-0 sudo[316828]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:06:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 482 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 366 op/s
Jan 31 08:06:33 compute-0 nova_compute[247704]: 2026-01-31 08:06:33.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:33 compute-0 ceph-mon[74496]: pgmap v2088: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 2.5 MiB/s wr, 378 op/s
Jan 31 08:06:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:06:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:06:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:33.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:06:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:33.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:06:34 compute-0 nova_compute[247704]: 2026-01-31 08:06:34.261 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-0 ceph-mon[74496]: pgmap v2089: 305 pgs: 305 active+clean; 482 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 366 op/s
Jan 31 08:06:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2213965646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3270420668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.757474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794757537, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 392, "num_deletes": 256, "total_data_size": 284591, "memory_usage": 293544, "flush_reason": "Manual Compaction"}
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794766145, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 282570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45651, "largest_seqno": 46042, "table_properties": {"data_size": 280152, "index_size": 518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6039, "raw_average_key_size": 18, "raw_value_size": 275223, "raw_average_value_size": 846, "num_data_blocks": 21, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846785, "oldest_key_time": 1769846785, "file_creation_time": 1769846794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 8719 microseconds, and 1603 cpu microseconds.
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.766190) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 282570 bytes OK
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.766217) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.770221) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.770253) EVENT_LOG_v1 {"time_micros": 1769846794770244, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.770275) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 282023, prev total WAL file size 282023, number of live WAL files 2.
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.770750) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373538' seq:0, type:0; will stop at (end)
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(275KB)], [98(10MB)]
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794770788, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 10867852, "oldest_snapshot_seqno": -1}
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7327 keys, 10720153 bytes, temperature: kUnknown
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794923325, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10720153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10672561, "index_size": 28152, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 189605, "raw_average_key_size": 25, "raw_value_size": 10543065, "raw_average_value_size": 1438, "num_data_blocks": 1116, "num_entries": 7327, "num_filter_entries": 7327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769846794, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.923666) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10720153 bytes
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.933281) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.2 rd, 70.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.1 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(76.4) write-amplify(37.9) OK, records in: 7852, records dropped: 525 output_compression: NoCompression
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.933306) EVENT_LOG_v1 {"time_micros": 1769846794933295, "job": 58, "event": "compaction_finished", "compaction_time_micros": 152600, "compaction_time_cpu_micros": 18091, "output_level": 6, "num_output_files": 1, "total_output_size": 10720153, "num_input_records": 7852, "num_output_records": 7327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794933527, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 31 08:06:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6351d759-a390-40dc-9fbf-7a025991d4a3 does not exist
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846794934644, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.770653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.934719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5ea1bbbf-3ab4-49cd-a0dd-6fe9e5b8a6a0 does not exist
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.934727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.934731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.934734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:34.934737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9b8dc340-9be8-4fb6-a5ba-5d6657b8a8cb does not exist
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:06:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:06:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:06:35 compute-0 sudo[316872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:35 compute-0 sudo[316872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:35 compute-0 sudo[316872]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.062 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.063 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.063 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.063 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.063 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.065 247708 INFO nova.compute.manager [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Terminating instance
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.066 247708 DEBUG nova.compute.manager [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:06:35 compute-0 sudo[316897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:35 compute-0 sudo[316897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:35 compute-0 sudo[316897]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:35 compute-0 kernel: tapfd1cbebb-a7 (unregistering): left promiscuous mode
Jan 31 08:06:35 compute-0 sudo[316922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:35 compute-0 NetworkManager[49108]: <info>  [1769846795.1347] device (tapfd1cbebb-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:06:35 compute-0 sudo[316922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:35 compute-0 sudo[316922]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:35 compute-0 ovn_controller[149457]: 2026-01-31T08:06:35Z|00419|binding|INFO|Releasing lport fd1cbebb-a755-4e08-a23f-5d1d70b501d0 from this chassis (sb_readonly=0)
Jan 31 08:06:35 compute-0 ovn_controller[149457]: 2026-01-31T08:06:35Z|00420|binding|INFO|Setting lport fd1cbebb-a755-4e08-a23f-5d1d70b501d0 down in Southbound
Jan 31 08:06:35 compute-0 ovn_controller[149457]: 2026-01-31T08:06:35Z|00421|binding|INFO|Removing iface tapfd1cbebb-a7 ovn-installed in OVS
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.144 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.145 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.151 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.153 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:52:a3 10.100.0.6'], port_security=['fa:16:3e:ac:52:a3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f2056d13-c1bf-45cf-82c7-f8e1aa48472b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43b462f5b43d48b4a33a13b069618e4c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '001ae016-61eb-444d-a215-8f70012d923a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aaf32b66-0fe8-4826-8186-77a88483534c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=fd1cbebb-a755-4e08-a23f-5d1d70b501d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.154 160028 INFO neutron.agent.ovn.metadata.agent [-] Port fd1cbebb-a755-4e08-a23f-5d1d70b501d0 in datapath f3091b0d-0fc9-4172-b2af-6d9c678c6569 unbound from our chassis
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.155 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3091b0d-0fc9-4172-b2af-6d9c678c6569, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.156 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e12b2e5c-8438-4971-bd83-7eb7fdbd3652]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.156 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 namespace which is not needed anymore
Jan 31 08:06:35 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000069.scope: Consumed 13.038s CPU time.
Jan 31 08:06:35 compute-0 systemd-machined[214448]: Machine qemu-42-instance-00000069 terminated.
Jan 31 08:06:35 compute-0 sudo[316949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:06:35 compute-0 sudo[316949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:35 compute-0 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[316317]: [NOTICE]   (316336) : haproxy version is 2.8.14-c23fe91
Jan 31 08:06:35 compute-0 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[316317]: [NOTICE]   (316336) : path to executable is /usr/sbin/haproxy
Jan 31 08:06:35 compute-0 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[316317]: [WARNING]  (316336) : Exiting Master process...
Jan 31 08:06:35 compute-0 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[316317]: [ALERT]    (316336) : Current worker (316344) exited with code 143 (Terminated)
Jan 31 08:06:35 compute-0 neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569[316317]: [WARNING]  (316336) : All workers exited. Exiting... (0)
Jan 31 08:06:35 compute-0 systemd[1]: libpod-5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[316996]: 2026-01-31 08:06:35.285714909 +0000 UTC m=+0.042354597 container died 5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009694741066443328 of space, bias 1.0, pg target 2.9084223199329986 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002165958554345803 of space, bias 1.0, pg target 0.6454556491950494 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.316 247708 INFO nova.virt.libvirt.driver [-] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Instance destroyed successfully.
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.316 247708 DEBUG nova.objects.instance [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lazy-loading 'resources' on Instance uuid f2056d13-c1bf-45cf-82c7-f8e1aa48472b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4-userdata-shm.mount: Deactivated successfully.
Jan 31 08:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0b6cbb1e5e46dda71e1522703fa9a3dcdc980afc8113113052a5b72892e2c1c-merged.mount: Deactivated successfully.
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.337 247708 DEBUG nova.virt.libvirt.vif [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-19898720',display_name='tempest-ListServersNegativeTestJSON-server-19898720-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-19898720-2',id=105,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-31T08:06:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43b462f5b43d48b4a33a13b069618e4c',ramdisk_id='',reservation_id='r-c7zkmfmo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1511652820',owner_user_name='tempest-ListServersNegativeTestJSON-1511652820-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:15Z,user_data=None,user_id='972b6e928f014e5394261f9c8655f1de',uuid=f2056d13-c1bf-45cf-82c7-f8e1aa48472b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.338 247708 DEBUG nova.network.os_vif_util [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converting VIF {"id": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "address": "fa:16:3e:ac:52:a3", "network": {"id": "f3091b0d-0fc9-4172-b2af-6d9c678c6569", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1117225874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43b462f5b43d48b4a33a13b069618e4c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1cbebb-a7", "ovs_interfaceid": "fd1cbebb-a755-4e08-a23f-5d1d70b501d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.339 247708 DEBUG nova.network.os_vif_util [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:52:a3,bridge_name='br-int',has_traffic_filtering=True,id=fd1cbebb-a755-4e08-a23f-5d1d70b501d0,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd1cbebb-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.339 247708 DEBUG os_vif [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:52:a3,bridge_name='br-int',has_traffic_filtering=True,id=fd1cbebb-a755-4e08-a23f-5d1d70b501d0,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd1cbebb-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.341 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.341 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd1cbebb-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.343 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.347 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.350 247708 INFO os_vif [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:52:a3,bridge_name='br-int',has_traffic_filtering=True,id=fd1cbebb-a755-4e08-a23f-5d1d70b501d0,network=Network(f3091b0d-0fc9-4172-b2af-6d9c678c6569),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd1cbebb-a7')
Jan 31 08:06:35 compute-0 podman[316996]: 2026-01-31 08:06:35.352982663 +0000 UTC m=+0.109622351 container cleanup 5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:06:35 compute-0 systemd[1]: libpod-conmon-5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[317053]: 2026-01-31 08:06:35.429741251 +0000 UTC m=+0.056534624 container remove 5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.435 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[34142132-6afe-4d85-a555-9ccf604044c9]: (4, ('Sat Jan 31 08:06:35 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 (5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4)\n5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4\nSat Jan 31 08:06:35 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 (5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4)\n5874166026d31da4525a0f052d65f0cc25480fa546af47c63ae18444e26059f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.437 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ce641d11-1f19-447d-85af-4749ee43061c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.447 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3091b0d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.451 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-0 kernel: tapf3091b0d-00: left promiscuous mode
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.460 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.465 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[afb47869-529f-4a61-8b19-3876ecb34db8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.482 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0a9ff5-d726-405d-9ee5-970a5284ad7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.486 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e7127a1c-a6e5-43e7-8c83-5f19bf24806c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.497 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[add092ab-47f7-43e8-ac02-038a9c8115cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 701297, 'reachable_time': 25187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317105, 'error': None, 'target': 'ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:35 compute-0 systemd[1]: run-netns-ovnmeta\x2df3091b0d\x2d0fc9\x2d4172\x2db2af\x2d6d9c678c6569.mount: Deactivated successfully.
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.502 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3091b0d-0fc9-4172-b2af-6d9c678c6569 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:06:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:35.502 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7a849a-4f84-4c69-8cf5-6382f50f7f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.0 MiB/s wr, 295 op/s
Jan 31 08:06:35 compute-0 podman[317106]: 2026-01-31 08:06:35.564774422 +0000 UTC m=+0.042856609 container create 40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.603 247708 DEBUG nova.compute.manager [req-78868630-3059-4d26-a7ca-b612c3b3480c req-33f9876d-bd34-4a65-830b-1e932ce42022 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received event network-vif-unplugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.604 247708 DEBUG oslo_concurrency.lockutils [req-78868630-3059-4d26-a7ca-b612c3b3480c req-33f9876d-bd34-4a65-830b-1e932ce42022 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.604 247708 DEBUG oslo_concurrency.lockutils [req-78868630-3059-4d26-a7ca-b612c3b3480c req-33f9876d-bd34-4a65-830b-1e932ce42022 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.604 247708 DEBUG oslo_concurrency.lockutils [req-78868630-3059-4d26-a7ca-b612c3b3480c req-33f9876d-bd34-4a65-830b-1e932ce42022 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.605 247708 DEBUG nova.compute.manager [req-78868630-3059-4d26-a7ca-b612c3b3480c req-33f9876d-bd34-4a65-830b-1e932ce42022 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] No waiting events found dispatching network-vif-unplugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:35 compute-0 nova_compute[247704]: 2026-01-31 08:06:35.605 247708 DEBUG nova.compute.manager [req-78868630-3059-4d26-a7ca-b612c3b3480c req-33f9876d-bd34-4a65-830b-1e932ce42022 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received event network-vif-unplugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:06:35 compute-0 systemd[1]: Started libpod-conmon-40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e.scope.
Jan 31 08:06:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:35 compute-0 podman[317106]: 2026-01-31 08:06:35.543632485 +0000 UTC m=+0.021714692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:06:35 compute-0 podman[317106]: 2026-01-31 08:06:35.655654174 +0000 UTC m=+0.133736411 container init 40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:06:35 compute-0 podman[317106]: 2026-01-31 08:06:35.663120957 +0000 UTC m=+0.141203144 container start 40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:35 compute-0 podman[317106]: 2026-01-31 08:06:35.668271653 +0000 UTC m=+0.146353840 container attach 40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:35 compute-0 systemd[1]: libpod-40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 keen_feistel[317122]: 167 167
Jan 31 08:06:35 compute-0 conmon[317122]: conmon 40488d9ff9491a8d4837 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e.scope/container/memory.events
Jan 31 08:06:35 compute-0 podman[317106]: 2026-01-31 08:06:35.671947143 +0000 UTC m=+0.150029330 container died 40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-30feebb00a532917d0d310e7660dc7c1f7c9c0371ea0ebe68921b294edda604c-merged.mount: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[317106]: 2026-01-31 08:06:35.721363361 +0000 UTC m=+0.199445548 container remove 40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_feistel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:35 compute-0 systemd[1]: libpod-conmon-40488d9ff9491a8d48373b8d47516fd1884fa6241c01def516ff3abd4d7d339e.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:06:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:06:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:35.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:35 compute-0 podman[317146]: 2026-01-31 08:06:35.870745004 +0000 UTC m=+0.046615651 container create 6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:06:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:35.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:35 compute-0 systemd[1]: Started libpod-conmon-6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376.scope.
Jan 31 08:06:35 compute-0 podman[317146]: 2026-01-31 08:06:35.854264031 +0000 UTC m=+0.030134708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:06:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c0341f183b061091dc44359a7bb46f40ff852c9cedf31199f4fa405f3dca91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c0341f183b061091dc44359a7bb46f40ff852c9cedf31199f4fa405f3dca91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c0341f183b061091dc44359a7bb46f40ff852c9cedf31199f4fa405f3dca91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c0341f183b061091dc44359a7bb46f40ff852c9cedf31199f4fa405f3dca91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1c0341f183b061091dc44359a7bb46f40ff852c9cedf31199f4fa405f3dca91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 podman[317146]: 2026-01-31 08:06:35.995123915 +0000 UTC m=+0.170994592 container init 6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chebyshev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:36 compute-0 podman[317146]: 2026-01-31 08:06:36.003551261 +0000 UTC m=+0.179421928 container start 6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 08:06:36 compute-0 podman[317146]: 2026-01-31 08:06:36.011032514 +0000 UTC m=+0.186903171 container attach 6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 31 08:06:36 compute-0 nova_compute[247704]: 2026-01-31 08:06:36.368 247708 INFO nova.virt.libvirt.driver [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Deleting instance files /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b_del
Jan 31 08:06:36 compute-0 nova_compute[247704]: 2026-01-31 08:06:36.372 247708 INFO nova.virt.libvirt.driver [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Deletion of /var/lib/nova/instances/f2056d13-c1bf-45cf-82c7-f8e1aa48472b_del complete
Jan 31 08:06:36 compute-0 nova_compute[247704]: 2026-01-31 08:06:36.466 247708 INFO nova.compute.manager [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Took 1.40 seconds to destroy the instance on the hypervisor.
Jan 31 08:06:36 compute-0 nova_compute[247704]: 2026-01-31 08:06:36.467 247708 DEBUG oslo.service.loopingcall [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:06:36 compute-0 nova_compute[247704]: 2026-01-31 08:06:36.467 247708 DEBUG nova.compute.manager [-] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:06:36 compute-0 nova_compute[247704]: 2026-01-31 08:06:36.467 247708 DEBUG nova.network.neutron [-] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:06:36 compute-0 nova_compute[247704]: 2026-01-31 08:06:36.829 247708 INFO nova.compute.manager [None req-b70fbac0-a284-4562-8311-ef82f0e07fc4 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Get console output
Jan 31 08:06:37 compute-0 gifted_chebyshev[317162]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:06:37 compute-0 gifted_chebyshev[317162]: --> relative data size: 1.0
Jan 31 08:06:37 compute-0 gifted_chebyshev[317162]: --> All data devices are unavailable
Jan 31 08:06:37 compute-0 ceph-mon[74496]: pgmap v2090: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.0 MiB/s wr, 295 op/s
Jan 31 08:06:37 compute-0 systemd[1]: libpod-6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376.scope: Deactivated successfully.
Jan 31 08:06:37 compute-0 podman[317146]: 2026-01-31 08:06:37.054063569 +0000 UTC m=+1.229934226 container died 6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chebyshev, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1c0341f183b061091dc44359a7bb46f40ff852c9cedf31199f4fa405f3dca91-merged.mount: Deactivated successfully.
Jan 31 08:06:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 477 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 309 op/s
Jan 31 08:06:37 compute-0 podman[317146]: 2026-01-31 08:06:37.556489185 +0000 UTC m=+1.732359842 container remove 6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_chebyshev, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 08:06:37 compute-0 systemd[1]: libpod-conmon-6de9932dce3e976442202143359cb71c6e547d626ad73cfb6e0b5280a241f376.scope: Deactivated successfully.
Jan 31 08:06:37 compute-0 sudo[316949]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.612 247708 DEBUG nova.network.neutron [-] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.631 247708 INFO nova.compute.manager [-] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Took 1.16 seconds to deallocate network for instance.
Jan 31 08:06:37 compute-0 sudo[317192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:37 compute-0 sudo[317192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:37 compute-0 sudo[317192]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.680 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.682 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:37 compute-0 sudo[317217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:37 compute-0 sudo[317217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:37 compute-0 sudo[317217]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:37 compute-0 sudo[317242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:37 compute-0 sudo[317242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:37 compute-0 sudo[317242]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.770 247708 DEBUG nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received event network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.771 247708 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.772 247708 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.772 247708 DEBUG oslo_concurrency.lockutils [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.772 247708 DEBUG nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] No waiting events found dispatching network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.772 247708 WARNING nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received unexpected event network-vif-plugged-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 for instance with vm_state deleted and task_state None.
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.773 247708 DEBUG nova.compute.manager [req-8e3c7c99-d8bd-4d4f-822c-e16d77d17372 req-59245392-38b8-431a-9e6d-1caf74b724ab 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Received event network-vif-deleted-fd1cbebb-a755-4e08-a23f-5d1d70b501d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:37 compute-0 sudo[317267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:06:37 compute-0 sudo[317267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:37 compute-0 nova_compute[247704]: 2026-01-31 08:06:37.830 247708 DEBUG oslo_concurrency.processutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:37.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:37.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:38 compute-0 podman[317353]: 2026-01-31 08:06:38.164298627 +0000 UTC m=+0.026199082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:06:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641212261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:38 compute-0 nova_compute[247704]: 2026-01-31 08:06:38.343 247708 DEBUG oslo_concurrency.processutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:38 compute-0 nova_compute[247704]: 2026-01-31 08:06:38.349 247708 DEBUG nova.compute.provider_tree [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:38 compute-0 nova_compute[247704]: 2026-01-31 08:06:38.394 247708 DEBUG nova.scheduler.client.report [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:38 compute-0 podman[317353]: 2026-01-31 08:06:38.469485239 +0000 UTC m=+0.331385674 container create 207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:38 compute-0 nova_compute[247704]: 2026-01-31 08:06:38.577 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:06:38 compute-0 nova_compute[247704]: 2026-01-31 08:06:38.736 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:38 compute-0 nova_compute[247704]: 2026-01-31 08:06:38.780 247708 INFO nova.scheduler.client.report [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Deleted allocations for instance f2056d13-c1bf-45cf-82c7-f8e1aa48472b
Jan 31 08:06:38 compute-0 nova_compute[247704]: 2026-01-31 08:06:38.906 247708 DEBUG oslo_concurrency.lockutils [None req-e78846e6-534e-45e2-8846-245c3e392d3e 972b6e928f014e5394261f9c8655f1de 43b462f5b43d48b4a33a13b069618e4c - - default default] Lock "f2056d13-c1bf-45cf-82c7-f8e1aa48472b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:39 compute-0 nova_compute[247704]: 2026-01-31 08:06:39.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:39 compute-0 systemd[1]: Started libpod-conmon-207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c.scope.
Jan 31 08:06:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 412 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 345 op/s
Jan 31 08:06:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:39.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:40 compute-0 podman[317353]: 2026-01-31 08:06:40.115921249 +0000 UTC m=+1.977821724 container init 207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:06:40 compute-0 podman[317353]: 2026-01-31 08:06:40.127624604 +0000 UTC m=+1.989525079 container start 207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:40 compute-0 romantic_murdock[317372]: 167 167
Jan 31 08:06:40 compute-0 systemd[1]: libpod-207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c.scope: Deactivated successfully.
Jan 31 08:06:40 compute-0 conmon[317372]: conmon 207b09fff1e20a7e4c59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c.scope/container/memory.events
Jan 31 08:06:40 compute-0 ceph-mon[74496]: pgmap v2091: 305 pgs: 305 active+clean; 477 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 309 op/s
Jan 31 08:06:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2586767570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:40 compute-0 nova_compute[247704]: 2026-01-31 08:06:40.343 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:40 compute-0 podman[317353]: 2026-01-31 08:06:40.503121896 +0000 UTC m=+2.365022351 container attach 207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:06:40 compute-0 podman[317353]: 2026-01-31 08:06:40.504221773 +0000 UTC m=+2.366122208 container died 207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a1cc22a303b80dd314d86ca42815b5e2e3fcc5c38e6b98226eb039893660b08-merged.mount: Deactivated successfully.
Jan 31 08:06:41 compute-0 podman[317353]: 2026-01-31 08:06:41.251527196 +0000 UTC m=+3.113427681 container remove 207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:41 compute-0 systemd[1]: libpod-conmon-207b09fff1e20a7e4c592b2d5703fa25ecf4cb9e0bd34ad882474955c4f7c23c.scope: Deactivated successfully.
Jan 31 08:06:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/641212261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:41 compute-0 ceph-mon[74496]: pgmap v2092: 305 pgs: 305 active+clean; 412 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 345 op/s
Jan 31 08:06:41 compute-0 podman[317406]: 2026-01-31 08:06:41.402445656 +0000 UTC m=+0.027220866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:06:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Jan 31 08:06:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 08:06:41 compute-0 podman[317406]: 2026-01-31 08:06:41.809635793 +0000 UTC m=+0.434411023 container create a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khayyam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:06:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:06:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:06:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:41.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:42 compute-0 systemd[1]: Started libpod-conmon-a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b.scope.
Jan 31 08:06:42 compute-0 podman[317392]: 2026-01-31 08:06:42.157606132 +0000 UTC m=+0.808278985 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:06:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:42 compute-0 sudo[317438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92437ed7de8890bf2f34965f46f652a736702abba4c59508c97f1c3883205420/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92437ed7de8890bf2f34965f46f652a736702abba4c59508c97f1c3883205420/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92437ed7de8890bf2f34965f46f652a736702abba4c59508c97f1c3883205420/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92437ed7de8890bf2f34965f46f652a736702abba4c59508c97f1c3883205420/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:42 compute-0 sudo[317438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:42 compute-0 sudo[317438]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:42 compute-0 sudo[317469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:42 compute-0 sudo[317469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:42 compute-0 sudo[317469]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:42 compute-0 podman[317406]: 2026-01-31 08:06:42.268521114 +0000 UTC m=+0.893296324 container init a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khayyam, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:06:42 compute-0 podman[317406]: 2026-01-31 08:06:42.275621547 +0000 UTC m=+0.900396737 container start a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khayyam, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:06:42 compute-0 podman[317406]: 2026-01-31 08:06:42.388826126 +0000 UTC m=+1.013601446 container attach a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:06:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]: {
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:     "0": [
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:         {
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "devices": [
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "/dev/loop3"
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             ],
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "lv_name": "ceph_lv0",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "lv_size": "7511998464",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "name": "ceph_lv0",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "tags": {
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.cluster_name": "ceph",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.crush_device_class": "",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.encrypted": "0",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.osd_id": "0",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.type": "block",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:                 "ceph.vdo": "0"
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             },
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "type": "block",
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:             "vg_name": "ceph_vg0"
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:         }
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]:     ]
Jan 31 08:06:43 compute-0 recursing_khayyam[317464]: }
Jan 31 08:06:43 compute-0 systemd[1]: libpod-a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b.scope: Deactivated successfully.
Jan 31 08:06:43 compute-0 podman[317406]: 2026-01-31 08:06:43.043496213 +0000 UTC m=+1.668271433 container died a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:06:43 compute-0 ceph-mon[74496]: pgmap v2093: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Jan 31 08:06:43 compute-0 ovn_controller[149457]: 2026-01-31T08:06:43Z|00422|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=0)
Jan 31 08:06:43 compute-0 nova_compute[247704]: 2026-01-31 08:06:43.439 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Jan 31 08:06:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 31 08:06:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 31 08:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-92437ed7de8890bf2f34965f46f652a736702abba4c59508c97f1c3883205420-merged.mount: Deactivated successfully.
Jan 31 08:06:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:43.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:43.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:44 compute-0 podman[317406]: 2026-01-31 08:06:44.032763544 +0000 UTC m=+2.657538734 container remove a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:06:44 compute-0 sudo[317267]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:44 compute-0 sudo[317513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:44 compute-0 systemd[1]: libpod-conmon-a4e83c62adcbebf371e29ee12908cf5a09cb7f49fc5ed600d5f6de43429d867b.scope: Deactivated successfully.
Jan 31 08:06:44 compute-0 sudo[317513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:44 compute-0 sudo[317513]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:44 compute-0 sudo[317538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:44 compute-0 sudo[317538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:44 compute-0 sudo[317538]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:44 compute-0 sudo[317563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:44 compute-0 sudo[317563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:44 compute-0 sudo[317563]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:44 compute-0 nova_compute[247704]: 2026-01-31 08:06:44.265 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:44 compute-0 sudo[317588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:06:44 compute-0 sudo[317588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:44 compute-0 ovn_controller[149457]: 2026-01-31T08:06:44Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:8d:f4 10.100.0.6
Jan 31 08:06:44 compute-0 podman[317651]: 2026-01-31 08:06:44.615226998 +0000 UTC m=+0.057118388 container create 8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:44 compute-0 systemd[1]: Started libpod-conmon-8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486.scope.
Jan 31 08:06:44 compute-0 podman[317651]: 2026-01-31 08:06:44.578982241 +0000 UTC m=+0.020873651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:06:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:44 compute-0 podman[317651]: 2026-01-31 08:06:44.715442698 +0000 UTC m=+0.157334118 container init 8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:06:44 compute-0 podman[317651]: 2026-01-31 08:06:44.721493616 +0000 UTC m=+0.163385036 container start 8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 31 08:06:44 compute-0 systemd[1]: libpod-8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486.scope: Deactivated successfully.
Jan 31 08:06:44 compute-0 dazzling_galois[317667]: 167 167
Jan 31 08:06:44 compute-0 conmon[317667]: conmon 8b33b582c3755ebb986d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486.scope/container/memory.events
Jan 31 08:06:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:44 compute-0 podman[317651]: 2026-01-31 08:06:44.755909258 +0000 UTC m=+0.197800678 container attach 8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:06:44 compute-0 podman[317651]: 2026-01-31 08:06:44.757125197 +0000 UTC m=+0.199016597 container died 8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:06:44 compute-0 nova_compute[247704]: 2026-01-31 08:06:44.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf2079daa40cd78882edce944369120e3f05f1efccbcb791c1a2f914526255f-merged.mount: Deactivated successfully.
Jan 31 08:06:44 compute-0 podman[317672]: 2026-01-31 08:06:44.904358478 +0000 UTC m=+0.165508509 container remove 8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:44 compute-0 systemd[1]: libpod-conmon-8b33b582c3755ebb986dc724170f5562f67820ac9105c8d20b3443aeab08c486.scope: Deactivated successfully.
Jan 31 08:06:44 compute-0 ceph-mon[74496]: pgmap v2094: 305 pgs: 305 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Jan 31 08:06:44 compute-0 ceph-mon[74496]: osdmap e261: 3 total, 3 up, 3 in
Jan 31 08:06:45 compute-0 podman[317693]: 2026-01-31 08:06:45.105284561 +0000 UTC m=+0.077852625 container create 3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:45 compute-0 podman[317693]: 2026-01-31 08:06:45.049578089 +0000 UTC m=+0.022146143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:06:45 compute-0 systemd[1]: Started libpod-conmon-3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3.scope.
Jan 31 08:06:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f9129c0244e376fca7ea9eedb2220d17e9de0c0216b8d30736125bb56fc691/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f9129c0244e376fca7ea9eedb2220d17e9de0c0216b8d30736125bb56fc691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f9129c0244e376fca7ea9eedb2220d17e9de0c0216b8d30736125bb56fc691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33f9129c0244e376fca7ea9eedb2220d17e9de0c0216b8d30736125bb56fc691/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:45 compute-0 nova_compute[247704]: 2026-01-31 08:06:45.345 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:45 compute-0 podman[317693]: 2026-01-31 08:06:45.40906586 +0000 UTC m=+0.381633914 container init 3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 08:06:45 compute-0 podman[317693]: 2026-01-31 08:06:45.41722585 +0000 UTC m=+0.389793874 container start 3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 08:06:45 compute-0 podman[317693]: 2026-01-31 08:06:45.442406975 +0000 UTC m=+0.414975009 container attach 3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:06:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 590 KiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 31 08:06:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:45.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:45.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/507011724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1300736106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:06:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1300736106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:06:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1345138318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:06:46 compute-0 ceph-mon[74496]: pgmap v2096: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 590 KiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 31 08:06:46 compute-0 confident_black[317709]: {
Jan 31 08:06:46 compute-0 confident_black[317709]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:06:46 compute-0 confident_black[317709]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:06:46 compute-0 confident_black[317709]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:06:46 compute-0 confident_black[317709]:         "osd_id": 0,
Jan 31 08:06:46 compute-0 confident_black[317709]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:06:46 compute-0 confident_black[317709]:         "type": "bluestore"
Jan 31 08:06:46 compute-0 confident_black[317709]:     }
Jan 31 08:06:46 compute-0 confident_black[317709]: }
Jan 31 08:06:46 compute-0 systemd[1]: libpod-3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3.scope: Deactivated successfully.
Jan 31 08:06:46 compute-0 conmon[317709]: conmon 3782811e1fe3009aa880 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3.scope/container/memory.events
Jan 31 08:06:46 compute-0 podman[317693]: 2026-01-31 08:06:46.276028579 +0000 UTC m=+1.248596603 container died 3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-33f9129c0244e376fca7ea9eedb2220d17e9de0c0216b8d30736125bb56fc691-merged.mount: Deactivated successfully.
Jan 31 08:06:46 compute-0 podman[317693]: 2026-01-31 08:06:46.338773214 +0000 UTC m=+1.311341248 container remove 3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 31 08:06:46 compute-0 systemd[1]: libpod-conmon-3782811e1fe3009aa880bd9f54b0afc4acce7a65e0d8275d455ce4cb19c5f0b3.scope: Deactivated successfully.
Jan 31 08:06:46 compute-0 sudo[317588]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:06:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:06:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e4dbf618-7d46-44ec-b7b9-d92be146b5a6 does not exist
Jan 31 08:06:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 91df298a-f581-4abb-a98e-172512c7204a does not exist
Jan 31 08:06:46 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cb81dd8b-9b76-487c-a8b9-83b474cb1c82 does not exist
Jan 31 08:06:46 compute-0 sudo[317742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:06:46 compute-0 sudo[317742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:46 compute-0 sudo[317742]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:46 compute-0 sudo[317767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:06:46 compute-0 sudo[317767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:46 compute-0 sudo[317767]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 48 KiB/s wr, 126 op/s
Jan 31 08:06:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:06:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:47.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:47.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:48 compute-0 ceph-mon[74496]: pgmap v2097: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 48 KiB/s wr, 126 op/s
Jan 31 08:06:49 compute-0 nova_compute[247704]: 2026-01-31 08:06:49.267 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 27 KiB/s wr, 86 op/s
Jan 31 08:06:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:49.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:49.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:06:50 compute-0 nova_compute[247704]: 2026-01-31 08:06:50.312 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846795.3110495, f2056d13-c1bf-45cf-82c7-f8e1aa48472b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:06:50 compute-0 nova_compute[247704]: 2026-01-31 08:06:50.313 247708 INFO nova.compute.manager [-] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] VM Stopped (Lifecycle Event)
Jan 31 08:06:50 compute-0 nova_compute[247704]: 2026-01-31 08:06:50.347 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:50 compute-0 ceph-mon[74496]: pgmap v2098: 305 pgs: 305 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 27 KiB/s wr, 86 op/s
Jan 31 08:06:50 compute-0 nova_compute[247704]: 2026-01-31 08:06:50.377 247708 DEBUG nova.compute.manager [None req-9ad9f648-d725-4d46-99d0-eed7a90e5f27 - - - - - -] [instance: f2056d13-c1bf-45cf-82c7-f8e1aa48472b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:06:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 921 KiB/s rd, 26 KiB/s wr, 79 op/s
Jan 31 08:06:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:51.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:51.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:52 compute-0 ceph-mon[74496]: pgmap v2099: 305 pgs: 305 active+clean; 359 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 921 KiB/s rd, 26 KiB/s wr, 79 op/s
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.509 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.510 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.510 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.510 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.511 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:06:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 361 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 26 KiB/s wr, 84 op/s
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.512 247708 INFO nova.compute.manager [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Terminating instance
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.513 247708 DEBUG nova.compute.manager [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:06:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:53.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:53.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:53 compute-0 kernel: tap67a82948-b7 (unregistering): left promiscuous mode
Jan 31 08:06:53 compute-0 NetworkManager[49108]: <info>  [1769846813.9439] device (tap67a82948-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.951 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-0 ovn_controller[149457]: 2026-01-31T08:06:53Z|00423|binding|INFO|Releasing lport 67a82948-b72f-49d6-b07c-3058947bd453 from this chassis (sb_readonly=0)
Jan 31 08:06:53 compute-0 ovn_controller[149457]: 2026-01-31T08:06:53Z|00424|binding|INFO|Setting lport 67a82948-b72f-49d6-b07c-3058947bd453 down in Southbound
Jan 31 08:06:53 compute-0 ovn_controller[149457]: 2026-01-31T08:06:53Z|00425|binding|INFO|Removing iface tap67a82948-b7 ovn-installed in OVS
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.954 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-0 nova_compute[247704]: 2026-01-31 08:06:53.961 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:53 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000067.scope: Deactivated successfully.
Jan 31 08:06:53 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000067.scope: Consumed 14.244s CPU time.
Jan 31 08:06:54 compute-0 systemd-machined[214448]: Machine qemu-43-instance-00000067 terminated.
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.056 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:8d:f4 10.100.0.6'], port_security=['fa:16:3e:48:8d:f4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0f94fbbc-a8e1-4d6e-838f-925bcbdf538e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd8897503-0019-487c-aacb-6eb623a53e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.173', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=67a82948-b72f-49d6-b07c-3058947bd453) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.058 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 67a82948-b72f-49d6-b07c-3058947bd453 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.061 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.074 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6827c863-2de0-480d-ad86-53db74722dd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.102 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[63d745b8-7e6a-4a03-b8ca-84a592e32833]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.106 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba80bc1-66f6-4726-a177-2282b23cfb6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.139 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[e06173ec-7d56-4b8c-92c2-3b529eba7b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.151 247708 INFO nova.virt.libvirt.driver [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Instance destroyed successfully.
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.152 247708 DEBUG nova.objects.instance [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'resources' on Instance uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.160 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7e3fd4-541a-474f-b8a4-871296264761]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674899, 'reachable_time': 41105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317818, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.181 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f6646605-7486-494a-99f1-03fb6c761616]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79cb2b81-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674908, 'tstamp': 674908}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317820, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79cb2b81-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674910, 'tstamp': 674910}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317820, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.184 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.186 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.191 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.192 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.192 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.192 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:06:54.193 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.257 247708 DEBUG nova.virt.libvirt.vif [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-372861281',display_name='tempest-ServerActionsTestOtherA-server-372861281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-372861281',id=103,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-sctr4iai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=0f94fbbc-a8e1-4d6e-838f-925bcbdf538e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.258 247708 DEBUG nova.network.os_vif_util [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "67a82948-b72f-49d6-b07c-3058947bd453", "address": "fa:16:3e:48:8d:f4", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67a82948-b7", "ovs_interfaceid": "67a82948-b72f-49d6-b07c-3058947bd453", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.259 247708 DEBUG nova.network.os_vif_util [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.259 247708 DEBUG os_vif [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.260 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.261 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67a82948-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.264 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.265 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.268 247708 INFO os_vif [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:8d:f4,bridge_name='br-int',has_traffic_filtering=True,id=67a82948-b72f-49d6-b07c-3058947bd453,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67a82948-b7')
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.269 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-0 nova_compute[247704]: 2026-01-31 08:06:54.631 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 31 08:06:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 31 08:06:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 31 08:06:54 compute-0 ceph-mon[74496]: pgmap v2100: 305 pgs: 305 active+clean; 361 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 26 KiB/s wr, 84 op/s
Jan 31 08:06:55 compute-0 nova_compute[247704]: 2026-01-31 08:06:55.055 247708 INFO nova.virt.libvirt.driver [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Deleting instance files /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_del
Jan 31 08:06:55 compute-0 nova_compute[247704]: 2026-01-31 08:06:55.056 247708 INFO nova.virt.libvirt.driver [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Deletion of /var/lib/nova/instances/0f94fbbc-a8e1-4d6e-838f-925bcbdf538e_del complete
Jan 31 08:06:55 compute-0 nova_compute[247704]: 2026-01-31 08:06:55.310 247708 INFO nova.compute.manager [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Took 1.80 seconds to destroy the instance on the hypervisor.
Jan 31 08:06:55 compute-0 nova_compute[247704]: 2026-01-31 08:06:55.311 247708 DEBUG oslo.service.loopingcall [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:06:55 compute-0 nova_compute[247704]: 2026-01-31 08:06:55.311 247708 DEBUG nova.compute.manager [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:06:55 compute-0 nova_compute[247704]: 2026-01-31 08:06:55.311 247708 DEBUG nova.network.neutron [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:06:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 361 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 83 op/s
Jan 31 08:06:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:55.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:06:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:55.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:06:56 compute-0 ceph-mon[74496]: osdmap e262: 3 total, 3 up, 3 in
Jan 31 08:06:56 compute-0 ceph-mon[74496]: pgmap v2102: 305 pgs: 305 active+clean; 361 MiB data, 1014 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 83 op/s
Jan 31 08:06:56 compute-0 podman[317842]: 2026-01-31 08:06:56.896338843 +0000 UTC m=+0.062570880 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:06:57 compute-0 nova_compute[247704]: 2026-01-31 08:06:57.463 247708 DEBUG nova.network.neutron [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 312 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 14 KiB/s wr, 112 op/s
Jan 31 08:06:57 compute-0 nova_compute[247704]: 2026-01-31 08:06:57.553 247708 DEBUG nova.compute.manager [req-4d074f36-3d8a-488b-97d1-160f8be77e41 req-681bde27-8073-44e0-9f10-3147d070d32d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Received event network-vif-deleted-67a82948-b72f-49d6-b07c-3058947bd453 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:06:57 compute-0 nova_compute[247704]: 2026-01-31 08:06:57.554 247708 INFO nova.compute.manager [req-4d074f36-3d8a-488b-97d1-160f8be77e41 req-681bde27-8073-44e0-9f10-3147d070d32d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Neutron deleted interface 67a82948-b72f-49d6-b07c-3058947bd453; detaching it from the instance and deleting it from the info cache
Jan 31 08:06:57 compute-0 nova_compute[247704]: 2026-01-31 08:06:57.555 247708 DEBUG nova.network.neutron [req-4d074f36-3d8a-488b-97d1-160f8be77e41 req-681bde27-8073-44e0-9f10-3147d070d32d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:06:57 compute-0 sshd-session[317861]: Invalid user solana from 45.148.10.240 port 33996
Jan 31 08:06:57 compute-0 sshd-session[317861]: Connection closed by invalid user solana 45.148.10.240 port 33996 [preauth]
Jan 31 08:06:57 compute-0 nova_compute[247704]: 2026-01-31 08:06:57.846 247708 INFO nova.compute.manager [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Took 2.53 seconds to deallocate network for instance.
Jan 31 08:06:57 compute-0 nova_compute[247704]: 2026-01-31 08:06:57.856 247708 DEBUG nova.compute.manager [req-4d074f36-3d8a-488b-97d1-160f8be77e41 req-681bde27-8073-44e0-9f10-3147d070d32d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Detach interface failed, port_id=67a82948-b72f-49d6-b07c-3058947bd453, reason: Instance 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:06:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:06:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:57.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:06:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:57.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:57 compute-0 ceph-mon[74496]: pgmap v2103: 305 pgs: 305 active+clean; 312 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 14 KiB/s wr, 112 op/s
Jan 31 08:06:58 compute-0 nova_compute[247704]: 2026-01-31 08:06:58.230 247708 INFO nova.compute.manager [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Took 0.38 seconds to detach 1 volumes for instance.
Jan 31 08:06:58 compute-0 nova_compute[247704]: 2026-01-31 08:06:58.231 247708 DEBUG nova.compute.manager [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Deleting volume: 0eed4460-dbe8-45e7-8d1d-2c1a6334d70e _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 31 08:06:59 compute-0 nova_compute[247704]: 2026-01-31 08:06:59.097 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:06:59 compute-0 nova_compute[247704]: 2026-01-31 08:06:59.098 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:06:59 compute-0 nova_compute[247704]: 2026-01-31 08:06:59.185 247708 DEBUG oslo_concurrency.processutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:06:59 compute-0 nova_compute[247704]: 2026-01-31 08:06:59.263 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:59 compute-0 nova_compute[247704]: 2026-01-31 08:06:59.271 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:06:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 281 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 128 op/s
Jan 31 08:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/27261143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:06:59 compute-0 nova_compute[247704]: 2026-01-31 08:06:59.644 247708 DEBUG oslo_concurrency.processutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:06:59 compute-0 nova_compute[247704]: 2026-01-31 08:06:59.650 247708 DEBUG nova.compute.provider_tree [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.676543) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819676670, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 505, "num_deletes": 252, "total_data_size": 528455, "memory_usage": 538200, "flush_reason": "Manual Compaction"}
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819737589, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 523258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46043, "largest_seqno": 46547, "table_properties": {"data_size": 520325, "index_size": 905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7142, "raw_average_key_size": 19, "raw_value_size": 514368, "raw_average_value_size": 1413, "num_data_blocks": 39, "num_entries": 364, "num_filter_entries": 364, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846794, "oldest_key_time": 1769846794, "file_creation_time": 1769846819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 61080 microseconds, and 2909 cpu microseconds.
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:06:59 compute-0 nova_compute[247704]: 2026-01-31 08:06:59.804 247708 DEBUG nova.scheduler.client.report [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.737662) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 523258 bytes OK
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.737694) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.808701) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.808752) EVENT_LOG_v1 {"time_micros": 1769846819808741, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.808780) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 525520, prev total WAL file size 525520, number of live WAL files 2.
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.809377) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(510KB)], [101(10MB)]
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819809463, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11243411, "oldest_snapshot_seqno": -1}
Jan 31 08:06:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:59.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7171 keys, 9276529 bytes, temperature: kUnknown
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819913179, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9276529, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9231069, "index_size": 26404, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 187107, "raw_average_key_size": 26, "raw_value_size": 9105493, "raw_average_value_size": 1269, "num_data_blocks": 1034, "num_entries": 7171, "num_filter_entries": 7171, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769846819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:06:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:06:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:06:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.913601) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9276529 bytes
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.936565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.2 rd, 89.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.2 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(39.2) write-amplify(17.7) OK, records in: 7691, records dropped: 520 output_compression: NoCompression
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.936617) EVENT_LOG_v1 {"time_micros": 1769846819936598, "job": 60, "event": "compaction_finished", "compaction_time_micros": 103907, "compaction_time_cpu_micros": 34615, "output_level": 6, "num_output_files": 1, "total_output_size": 9276529, "num_input_records": 7691, "num_output_records": 7171, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819937050, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846819938109, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.809233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.938152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.938157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.938158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.938160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:06:59 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:06:59.938162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:07:00 compute-0 nova_compute[247704]: 2026-01-31 08:07:00.053 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:00 compute-0 ceph-mon[74496]: pgmap v2104: 305 pgs: 305 active+clean; 281 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 128 op/s
Jan 31 08:07:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/27261143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.464 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 250 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.0 KiB/s wr, 122 op/s
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.753 247708 WARNING nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] While synchronizing instance power states, found 3 instances in the database and 2 instances on the hypervisor.
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.754 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 32e56536-3edb-494c-9e8b-87cfa8396dac _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.754 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 021ee385-cfad-415f-a2ac-cdcf925fccac _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.754 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.755 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.755 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.756 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.756 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.757 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.885 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:01 compute-0 nova_compute[247704]: 2026-01-31 08:07:01.887 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:01.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:01.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:01 compute-0 ceph-mon[74496]: pgmap v2105: 305 pgs: 305 active+clean; 250 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.0 KiB/s wr, 122 op/s
Jan 31 08:07:02 compute-0 sudo[317889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:02 compute-0 sudo[317889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:02 compute-0 sudo[317889]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:02 compute-0 sudo[317914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:02 compute-0 sudo[317914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:02 compute-0 sudo[317914]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:03 compute-0 nova_compute[247704]: 2026-01-31 08:07:03.352 247708 INFO nova.scheduler.client.report [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Deleted allocations for instance 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e
Jan 31 08:07:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 235 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 KiB/s wr, 118 op/s
Jan 31 08:07:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:03.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:03.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.162 247708 DEBUG oslo_concurrency.lockutils [None req-1ba5efc3-a4c1-4a55-88f3-6c17a0915666 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.166 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.209 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "0f94fbbc-a8e1-4d6e-838f-925bcbdf538e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.265 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.274 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.786 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:04 compute-0 ceph-mon[74496]: pgmap v2106: 305 pgs: 305 active+clean; 235 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 KiB/s wr, 118 op/s
Jan 31 08:07:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2669144292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:07:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2669144292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.995 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.996 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.996 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.996 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.997 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.998 247708 INFO nova.compute.manager [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Terminating instance
Jan 31 08:07:04 compute-0 nova_compute[247704]: 2026-01-31 08:07:04.999 247708 DEBUG nova.compute.manager [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:07:05 compute-0 kernel: tapefaa3b34-72 (unregistering): left promiscuous mode
Jan 31 08:07:05 compute-0 NetworkManager[49108]: <info>  [1769846825.2607] device (tapefaa3b34-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:07:05 compute-0 ovn_controller[149457]: 2026-01-31T08:07:05Z|00426|binding|INFO|Releasing lport efaa3b34-7260-4b2d-aca0-09a2d2ffe251 from this chassis (sb_readonly=0)
Jan 31 08:07:05 compute-0 ovn_controller[149457]: 2026-01-31T08:07:05Z|00427|binding|INFO|Setting lport efaa3b34-7260-4b2d-aca0-09a2d2ffe251 down in Southbound
Jan 31 08:07:05 compute-0 ovn_controller[149457]: 2026-01-31T08:07:05Z|00428|binding|INFO|Removing iface tapefaa3b34-72 ovn-installed in OVS
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.307 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.314 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.333 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.334 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.333 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:05 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 31 08:07:05 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000064.scope: Consumed 17.479s CPU time.
Jan 31 08:07:05 compute-0 systemd-machined[214448]: Machine qemu-41-instance-00000064 terminated.
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.360 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:5f:b3 10.100.0.10'], port_security=['fa:16:3e:40:5f:b3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '021ee385-cfad-415f-a2ac-cdcf925fccac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=efaa3b34-7260-4b2d-aca0-09a2d2ffe251) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.361 160028 INFO neutron.agent.ovn.metadata.agent [-] Port efaa3b34-7260-4b2d-aca0-09a2d2ffe251 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.363 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.377 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9b415bac-d268-4a7b-b0fc-825c125a248e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.399 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8b79913e-facb-47f7-963f-1d7e4eeefd9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.402 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4132f7-7bb6-4c0c-8bac-6a2de6bc1a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.423 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[25f22c8b-7938-4981-9f0c-54840ad7b727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.437 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e10cf4b2-7003-4575-a8be-989812e90e57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674899, 'reachable_time': 41105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317958, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.442 247708 INFO nova.virt.libvirt.driver [-] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Instance destroyed successfully.
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.442 247708 DEBUG nova.objects.instance [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'resources' on Instance uuid 021ee385-cfad-415f-a2ac-cdcf925fccac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.449 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca9fa01-e0a0-42f1-9db6-aa1adca2dc77]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79cb2b81-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674908, 'tstamp': 674908}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317964, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79cb2b81-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674910, 'tstamp': 674910}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317964, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.452 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.454 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.458 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.459 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.459 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.459 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:05.460 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:07:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 200 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 2.6 KiB/s wr, 90 op/s
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.698 247708 DEBUG nova.virt.libvirt.vif [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-552048963',display_name='tempest-ServerActionsTestOtherA-server-552048963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-552048963',id=100,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-n9p8t3du',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:04Z,user_data=None,user_id='31043e345f6b48b585fb7b8ab7304764',uuid=021ee385-cfad-415f-a2ac-cdcf925fccac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.699 247708 DEBUG nova.network.os_vif_util [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "address": "fa:16:3e:40:5f:b3", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefaa3b34-72", "ovs_interfaceid": "efaa3b34-7260-4b2d-aca0-09a2d2ffe251", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.699 247708 DEBUG nova.network.os_vif_util [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:5f:b3,bridge_name='br-int',has_traffic_filtering=True,id=efaa3b34-7260-4b2d-aca0-09a2d2ffe251,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefaa3b34-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.700 247708 DEBUG os_vif [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:5f:b3,bridge_name='br-int',has_traffic_filtering=True,id=efaa3b34-7260-4b2d-aca0-09a2d2ffe251,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefaa3b34-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.701 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.702 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefaa3b34-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.705 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.707 247708 INFO os_vif [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:5f:b3,bridge_name='br-int',has_traffic_filtering=True,id=efaa3b34-7260-4b2d-aca0-09a2d2ffe251,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefaa3b34-72')
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.787 247708 DEBUG nova.compute.manager [req-052cdb91-c2f2-4fc1-89ea-25da87070afc req-d6ce0032-1e91-485b-8f5f-cc8130b3a763 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received event network-vif-unplugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.787 247708 DEBUG oslo_concurrency.lockutils [req-052cdb91-c2f2-4fc1-89ea-25da87070afc req-d6ce0032-1e91-485b-8f5f-cc8130b3a763 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.788 247708 DEBUG oslo_concurrency.lockutils [req-052cdb91-c2f2-4fc1-89ea-25da87070afc req-d6ce0032-1e91-485b-8f5f-cc8130b3a763 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.788 247708 DEBUG oslo_concurrency.lockutils [req-052cdb91-c2f2-4fc1-89ea-25da87070afc req-d6ce0032-1e91-485b-8f5f-cc8130b3a763 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.788 247708 DEBUG nova.compute.manager [req-052cdb91-c2f2-4fc1-89ea-25da87070afc req-d6ce0032-1e91-485b-8f5f-cc8130b3a763 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] No waiting events found dispatching network-vif-unplugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:05 compute-0 nova_compute[247704]: 2026-01-31 08:07:05.788 247708 DEBUG nova.compute.manager [req-052cdb91-c2f2-4fc1-89ea-25da87070afc req-d6ce0032-1e91-485b-8f5f-cc8130b3a763 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received event network-vif-unplugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:07:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:05.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:06 compute-0 ceph-mon[74496]: pgmap v2107: 305 pgs: 305 active+clean; 200 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 2.6 KiB/s wr, 90 op/s
Jan 31 08:07:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 181 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 842 KiB/s rd, 2.3 KiB/s wr, 88 op/s
Jan 31 08:07:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:07.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:07 compute-0 nova_compute[247704]: 2026-01-31 08:07:07.922 247708 DEBUG nova.compute.manager [req-f8ea32f3-5fa8-4a17-8a1f-c76bf9831940 req-f219a1e8-93f6-4430-8051-0763c7b5f160 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received event network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:07 compute-0 nova_compute[247704]: 2026-01-31 08:07:07.922 247708 DEBUG oslo_concurrency.lockutils [req-f8ea32f3-5fa8-4a17-8a1f-c76bf9831940 req-f219a1e8-93f6-4430-8051-0763c7b5f160 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:07 compute-0 nova_compute[247704]: 2026-01-31 08:07:07.923 247708 DEBUG oslo_concurrency.lockutils [req-f8ea32f3-5fa8-4a17-8a1f-c76bf9831940 req-f219a1e8-93f6-4430-8051-0763c7b5f160 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:07 compute-0 nova_compute[247704]: 2026-01-31 08:07:07.923 247708 DEBUG oslo_concurrency.lockutils [req-f8ea32f3-5fa8-4a17-8a1f-c76bf9831940 req-f219a1e8-93f6-4430-8051-0763c7b5f160 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:07 compute-0 nova_compute[247704]: 2026-01-31 08:07:07.924 247708 DEBUG nova.compute.manager [req-f8ea32f3-5fa8-4a17-8a1f-c76bf9831940 req-f219a1e8-93f6-4430-8051-0763c7b5f160 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] No waiting events found dispatching network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:07 compute-0 nova_compute[247704]: 2026-01-31 08:07:07.924 247708 WARNING nova.compute.manager [req-f8ea32f3-5fa8-4a17-8a1f-c76bf9831940 req-f219a1e8-93f6-4430-8051-0763c7b5f160 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received unexpected event network-vif-plugged-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 for instance with vm_state active and task_state deleting.
Jan 31 08:07:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:07.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:08 compute-0 nova_compute[247704]: 2026-01-31 08:07:08.180 247708 INFO nova.virt.libvirt.driver [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Deleting instance files /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac_del
Jan 31 08:07:08 compute-0 nova_compute[247704]: 2026-01-31 08:07:08.182 247708 INFO nova.virt.libvirt.driver [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Deletion of /var/lib/nova/instances/021ee385-cfad-415f-a2ac-cdcf925fccac_del complete
Jan 31 08:07:08 compute-0 nova_compute[247704]: 2026-01-31 08:07:08.277 247708 INFO nova.compute.manager [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Took 3.28 seconds to destroy the instance on the hypervisor.
Jan 31 08:07:08 compute-0 nova_compute[247704]: 2026-01-31 08:07:08.278 247708 DEBUG oslo.service.loopingcall [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:07:08 compute-0 nova_compute[247704]: 2026-01-31 08:07:08.279 247708 DEBUG nova.compute.manager [-] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:07:08 compute-0 nova_compute[247704]: 2026-01-31 08:07:08.279 247708 DEBUG nova.network.neutron [-] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:07:08 compute-0 ceph-mon[74496]: pgmap v2108: 305 pgs: 305 active+clean; 181 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 842 KiB/s rd, 2.3 KiB/s wr, 88 op/s
Jan 31 08:07:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1370603499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:08 compute-0 nova_compute[247704]: 2026-01-31 08:07:08.649 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.148 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846814.1478145, 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.149 247708 INFO nova.compute.manager [-] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] VM Stopped (Lifecycle Event)
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.275 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:09.336 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.466 247708 DEBUG nova.compute.manager [None req-6bd63b36-c91e-4bc1-a08b-04ab50575e86 - - - - - -] [instance: 0f94fbbc-a8e1-4d6e-838f-925bcbdf538e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 141 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.6 KiB/s wr, 60 op/s
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.689 247708 DEBUG nova.network.neutron [-] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.734 247708 DEBUG nova.compute.manager [req-2fbf336d-9ecf-445a-9f50-7cb4f65fe1dd req-1f0bba4a-68ab-4607-a29b-68bcee32dac3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Received event network-vif-deleted-efaa3b34-7260-4b2d-aca0-09a2d2ffe251 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.734 247708 INFO nova.compute.manager [req-2fbf336d-9ecf-445a-9f50-7cb4f65fe1dd req-1f0bba4a-68ab-4607-a29b-68bcee32dac3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Neutron deleted interface efaa3b34-7260-4b2d-aca0-09a2d2ffe251; detaching it from the instance and deleting it from the info cache
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.734 247708 DEBUG nova.network.neutron [req-2fbf336d-9ecf-445a-9f50-7cb4f65fe1dd req-1f0bba4a-68ab-4607-a29b-68bcee32dac3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.777 247708 INFO nova.compute.manager [-] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Took 1.50 seconds to deallocate network for instance.
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.792 247708 DEBUG nova.compute.manager [req-2fbf336d-9ecf-445a-9f50-7cb4f65fe1dd req-1f0bba4a-68ab-4607-a29b-68bcee32dac3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Detach interface failed, port_id=efaa3b34-7260-4b2d-aca0-09a2d2ffe251, reason: Instance 021ee385-cfad-415f-a2ac-cdcf925fccac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:07:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:09.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.927 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:09 compute-0 nova_compute[247704]: 2026-01-31 08:07:09.927 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:09.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:10 compute-0 nova_compute[247704]: 2026-01-31 08:07:10.015 247708 DEBUG oslo_concurrency.processutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707410916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:10 compute-0 nova_compute[247704]: 2026-01-31 08:07:10.531 247708 DEBUG oslo_concurrency.processutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:10 compute-0 nova_compute[247704]: 2026-01-31 08:07:10.539 247708 DEBUG nova.compute.provider_tree [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:10 compute-0 ceph-mon[74496]: pgmap v2109: 305 pgs: 305 active+clean; 141 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.6 KiB/s wr, 60 op/s
Jan 31 08:07:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/325742308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2707410916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:10 compute-0 nova_compute[247704]: 2026-01-31 08:07:10.615 247708 DEBUG nova.scheduler.client.report [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:10 compute-0 nova_compute[247704]: 2026-01-31 08:07:10.704 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:10 compute-0 nova_compute[247704]: 2026-01-31 08:07:10.714 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:10 compute-0 nova_compute[247704]: 2026-01-31 08:07:10.821 247708 INFO nova.scheduler.client.report [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Deleted allocations for instance 021ee385-cfad-415f-a2ac-cdcf925fccac
Jan 31 08:07:11 compute-0 nova_compute[247704]: 2026-01-31 08:07:11.057 247708 DEBUG oslo_concurrency.lockutils [None req-d929c299-8d97-4a8d-9b28-ea63f184dc87 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "021ee385-cfad-415f-a2ac-cdcf925fccac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:11.178 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:11.178 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:11.179 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 138 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 506 KiB/s wr, 44 op/s
Jan 31 08:07:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:11.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:12 compute-0 ceph-mon[74496]: pgmap v2110: 305 pgs: 305 active+clean; 138 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 506 KiB/s wr, 44 op/s
Jan 31 08:07:12 compute-0 podman[318010]: 2026-01-31 08:07:12.934286351 +0000 UTC m=+0.094277506 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 08:07:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 157 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 836 KiB/s wr, 57 op/s
Jan 31 08:07:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:07:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:13.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:07:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:14 compute-0 ceph-mon[74496]: pgmap v2111: 305 pgs: 305 active+clean; 157 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 836 KiB/s wr, 57 op/s
Jan 31 08:07:14 compute-0 nova_compute[247704]: 2026-01-31 08:07:14.278 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 202 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.1 MiB/s wr, 79 op/s
Jan 31 08:07:15 compute-0 nova_compute[247704]: 2026-01-31 08:07:15.706 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:15.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:16 compute-0 ceph-mon[74496]: pgmap v2112: 305 pgs: 305 active+clean; 202 MiB data, 911 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.1 MiB/s wr, 79 op/s
Jan 31 08:07:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3047707787' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:07:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3047707787' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:07:16 compute-0 nova_compute[247704]: 2026-01-31 08:07:16.936 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:16 compute-0 nova_compute[247704]: 2026-01-31 08:07:16.937 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.177 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:07:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 3.5 MiB/s wr, 94 op/s
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.560 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.561 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.566 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.567 247708 INFO nova.compute.claims [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.915 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.916 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.916 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:17.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.917 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.917 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.919 247708 INFO nova.compute.manager [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Terminating instance
Jan 31 08:07:17 compute-0 nova_compute[247704]: 2026-01-31 08:07:17.919 247708 DEBUG nova.compute.manager [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:07:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:17.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.296 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:18 compute-0 kernel: tap6dcdaf78-57 (unregistering): left promiscuous mode
Jan 31 08:07:18 compute-0 NetworkManager[49108]: <info>  [1769846838.4983] device (tap6dcdaf78-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00429|binding|INFO|Releasing lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 from this chassis (sb_readonly=0)
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00430|binding|INFO|Setting lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 down in Southbound
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00431|binding|INFO|Removing iface tap6dcdaf78-57 ovn-installed in OVS
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.511 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.513 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.519 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.536 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:23:9c 10.100.0.11'], port_security=['fa:16:3e:cf:23:9c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32e56536-3edb-494c-9e8b-87cfa8396dac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8897503-0019-487c-aacb-6eb623a53e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6dcdaf78-571b-42ba-bc17-7f6217ee6587) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.537 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6dcdaf78-571b-42ba-bc17-7f6217ee6587 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.539 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79cb2b81-3369-468a-8bf6-7e13d5df334b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.540 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d359f7bf-f2f2-48dc-bffd-9b5adae07472]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.540 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace which is not needed anymore
Jan 31 08:07:18 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Jan 31 08:07:18 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005c.scope: Consumed 26.935s CPU time.
Jan 31 08:07:18 compute-0 systemd-machined[214448]: Machine qemu-38-instance-0000005c terminated.
Jan 31 08:07:18 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[308703]: [NOTICE]   (308707) : haproxy version is 2.8.14-c23fe91
Jan 31 08:07:18 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[308703]: [NOTICE]   (308707) : path to executable is /usr/sbin/haproxy
Jan 31 08:07:18 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[308703]: [ALERT]    (308707) : Current worker (308709) exited with code 143 (Terminated)
Jan 31 08:07:18 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[308703]: [WARNING]  (308707) : All workers exited. Exiting... (0)
Jan 31 08:07:18 compute-0 systemd[1]: libpod-2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1.scope: Deactivated successfully.
Jan 31 08:07:18 compute-0 podman[318084]: 2026-01-31 08:07:18.664730556 +0000 UTC m=+0.049764869 container died 2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1-userdata-shm.mount: Deactivated successfully.
Jan 31 08:07:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5c0d814f48bbf2ba71cfdedb5500f2b19084433b27b6361618559b01f6b3621-merged.mount: Deactivated successfully.
Jan 31 08:07:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761439231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:18 compute-0 podman[318084]: 2026-01-31 08:07:18.719784632 +0000 UTC m=+0.104818955 container cleanup 2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:07:18 compute-0 systemd[1]: libpod-conmon-2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1.scope: Deactivated successfully.
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.731 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:18 compute-0 kernel: tap6dcdaf78-57: entered promiscuous mode
Jan 31 08:07:18 compute-0 NetworkManager[49108]: <info>  [1769846838.7363] manager: (tap6dcdaf78-57): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Jan 31 08:07:18 compute-0 systemd-udevd[318062]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:07:18 compute-0 kernel: tap6dcdaf78-57 (unregistering): left promiscuous mode
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00432|binding|INFO|Claiming lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 for this chassis.
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00433|binding|INFO|6dcdaf78-571b-42ba-bc17-7f6217ee6587: Claiming fa:16:3e:cf:23:9c 10.100.0.11
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.743 247708 DEBUG nova.compute.provider_tree [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00434|binding|INFO|Setting lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 ovn-installed in OVS
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00435|if_status|INFO|Dropped 7 log messages in last 528 seconds (most recently, 528 seconds ago) due to excessive rate
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00436|if_status|INFO|Not setting lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 down as sb is readonly
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.758 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-0 ceph-mon[74496]: pgmap v2113: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 3.5 MiB/s wr, 94 op/s
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.763 247708 INFO nova.virt.libvirt.driver [-] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Instance destroyed successfully.
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.763 247708 DEBUG nova.objects.instance [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lazy-loading 'resources' on Instance uuid 32e56536-3edb-494c-9e8b-87cfa8396dac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:18 compute-0 podman[318114]: 2026-01-31 08:07:18.807392134 +0000 UTC m=+0.063108485 container remove 2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:07:18 compute-0 ovn_controller[149457]: 2026-01-31T08:07:18Z|00437|binding|INFO|Releasing lport 6dcdaf78-571b-42ba-bc17-7f6217ee6587 from this chassis (sb_readonly=0)
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.814 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:23:9c 10.100.0.11'], port_security=['fa:16:3e:cf:23:9c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32e56536-3edb-494c-9e8b-87cfa8396dac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8897503-0019-487c-aacb-6eb623a53e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6dcdaf78-571b-42ba-bc17-7f6217ee6587) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.813 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2646421f-b696-47be-83d1-a27925df15bd]: (4, ('Sat Jan 31 08:07:18 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1)\n2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1\nSat Jan 31 08:07:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1)\n2a62ea9eb1a43e8416d9784c6d88a4dfc06e110625c228b2981b0d2a9f1245f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.818 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[17f22ecc-f3c1-448d-a653-a391456389d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.819 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.820 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-0 kernel: tap79cb2b81-30: left promiscuous mode
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.832 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.835 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c26fa3fb-68ee-4b6e-89db-bec4fc41ec38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.847 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[088bdd7e-0838-45b3-a8ab-e3296cd33ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.849 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e0455402-74ee-44f1-a81f-1e4deb06c864]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.864 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a8917b-dff2-47c1-997a-fe83e25e2faf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674893, 'reachable_time': 27911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318137, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.867 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:07:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d79cb2b81\x2d3369\x2d468a\x2d8bf6\x2d7e13d5df334b.mount: Deactivated successfully.
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.867 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[6422c778-8424-4c52-90aa-b40f6b903d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.869 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6dcdaf78-571b-42ba-bc17-7f6217ee6587 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b bound to our chassis
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.872 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.886 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4e4f52ef-f0da-40bc-a7e5-f0871c8b5480]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.887 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79cb2b81-31 in ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.890 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79cb2b81-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.890 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba31580-f2c6-4695-b9cf-4e1cc24b4483]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.891 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[66317df1-b468-415c-969b-61f20b35a71e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 nova_compute[247704]: 2026-01-31 08:07:18.894 247708 DEBUG nova.scheduler.client.report [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.901 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[6f821fe0-3b99-45f5-af49-b716a0e7b143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.912 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8dfa27-7b61-4f40-a147-cc14bbc8f082]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.941 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ec0914-4cf1-469e-9433-3dd1f198d5e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.948 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[20654348-0eb2-4e81-ba1c-89fc5a2cdbb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 NetworkManager[49108]: <info>  [1769846838.9494] manager: (tap79cb2b81-30): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.981 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[491499c8-ab81-44fc-8a03-60de91950fa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:18.984 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a606573e-7a4b-4f5b-b125-50a2e0267403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 NetworkManager[49108]: <info>  [1769846839.0058] device (tap79cb2b81-30): carrier: link connected
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.010 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[e587b082-81f7-4086-a4d8-f59c97285593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.029 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c02e4d-2fdc-4ae8-8316-e890332e19db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708162, 'reachable_time': 24725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318164, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.044 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[192b9811-7a39-408b-a1db-66303e30feb0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:12e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708162, 'tstamp': 708162}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318165, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.065 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[89c04575-ad49-4df1-b965-507cb36c9393]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79cb2b81-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:12:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708162, 'reachable_time': 24725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318166, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.099 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3613fc5b-2ae8-4a29-ab42-951be85d2dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.165 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f6dc33-dae2-4dc4-8620-5b9a167f1b81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.167 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.168 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.168 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79cb2b81-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.171 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 NetworkManager[49108]: <info>  [1769846839.1717] manager: (tap79cb2b81-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Jan 31 08:07:19 compute-0 kernel: tap79cb2b81-30: entered promiscuous mode
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.179 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.180 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79cb2b81-30, col_values=(('external_ids', {'iface-id': '9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.182 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 ovn_controller[149457]: 2026-01-31T08:07:19Z|00438|binding|INFO|Releasing lport 9edfb786-bff8-4b8b-89b7-3ba5b0f2e9ef from this chassis (sb_readonly=1)
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.192 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.196 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.196 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.197 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[93027d25-5d7e-4fb9-8ae8-0ab55fe30bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.198 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/79cb2b81-3369-468a-8bf6-7e13d5df334b.pid.haproxy
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 79cb2b81-3369-468a-8bf6-7e13d5df334b
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.199 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'env', 'PROCESS_TAG=haproxy-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79cb2b81-3369-468a-8bf6-7e13d5df334b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.255 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:23:9c 10.100.0.11'], port_security=['fa:16:3e:cf:23:9c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '32e56536-3edb-494c-9e8b-87cfa8396dac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd352316ff6534075952e2d0c28061b09', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8897503-0019-487c-aacb-6eb623a53e29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4b782827-2c44-4671-8f12-bb67b4f64804, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6dcdaf78-571b-42ba-bc17-7f6217ee6587) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.280 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.387 247708 DEBUG nova.virt.libvirt.vif [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-114931390',display_name='tempest-ServerActionsTestOtherA-server-114931390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-114931390',id=92,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDZ2ulplX2H2AL5NuU34s6GJNVqGMriUIj2eQg4OgerjQ8NWhsk6znxGcALW3k4Z9H1uedU1AeWtQAxMMtaMSBGS2G2VQQrwipi4fvjn/GJrPshiFNiDq6ym/pNUZzm75g==',key_name='tempest-keypair-1727230566',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d352316ff6534075952e2d0c28061b09',ramdisk_id='',reservation_id='r-nn0tigkg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-527878807',owner_user_name='tempest-ServerActionsTestOtherA-527878807-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='31043e345f6b48b585fb7b8ab7304764',uuid=32e56536-3edb-494c-9e8b-87cfa8396dac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.390 247708 DEBUG nova.network.os_vif_util [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converting VIF {"id": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "address": "fa:16:3e:cf:23:9c", "network": {"id": "79cb2b81-3369-468a-8bf6-7e13d5df334b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1343233929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d352316ff6534075952e2d0c28061b09", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6dcdaf78-57", "ovs_interfaceid": "6dcdaf78-571b-42ba-bc17-7f6217ee6587", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.391 247708 DEBUG nova.network.os_vif_util [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:23:9c,bridge_name='br-int',has_traffic_filtering=True,id=6dcdaf78-571b-42ba-bc17-7f6217ee6587,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6dcdaf78-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.392 247708 DEBUG os_vif [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:23:9c,bridge_name='br-int',has_traffic_filtering=True,id=6dcdaf78-571b-42ba-bc17-7f6217ee6587,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6dcdaf78-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.395 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.396 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6dcdaf78-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.398 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.401 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.404 247708 INFO os_vif [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:23:9c,bridge_name='br-int',has_traffic_filtering=True,id=6dcdaf78-571b-42ba-bc17-7f6217ee6587,network=Network(79cb2b81-3369-468a-8bf6-7e13d5df334b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6dcdaf78-57')
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.433 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.435 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:07:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Jan 31 08:07:19 compute-0 podman[318218]: 2026-01-31 08:07:19.551049868 +0000 UTC m=+0.055885007 container create 5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:07:19 compute-0 systemd[1]: Started libpod-conmon-5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e.scope.
Jan 31 08:07:19 compute-0 podman[318218]: 2026-01-31 08:07:19.520426589 +0000 UTC m=+0.025261788 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:07:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391cf42f1a8aec7fde30b6e8debcb9e3321a1636bc9d2fdfbe387e6af4c069b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.640 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:07:19 compute-0 podman[318218]: 2026-01-31 08:07:19.641404607 +0000 UTC m=+0.146239806 container init 5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.642 247708 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:07:19 compute-0 podman[318218]: 2026-01-31 08:07:19.645470697 +0000 UTC m=+0.150305836 container start 5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:07:19 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[318233]: [NOTICE]   (318237) : New worker (318239) forked
Jan 31 08:07:19 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[318233]: [NOTICE]   (318237) : Loading success.
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.708 247708 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.714 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6dcdaf78-571b-42ba-bc17-7f6217ee6587 in datapath 79cb2b81-3369-468a-8bf6-7e13d5df334b unbound from our chassis
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.716 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79cb2b81-3369-468a-8bf6-7e13d5df334b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.717 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[394fd81c-6ef4-45ca-ab36-5ccc555bbbd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.718 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b namespace which is not needed anymore
Jan 31 08:07:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:19 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[318233]: [NOTICE]   (318237) : haproxy version is 2.8.14-c23fe91
Jan 31 08:07:19 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[318233]: [NOTICE]   (318237) : path to executable is /usr/sbin/haproxy
Jan 31 08:07:19 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[318233]: [WARNING]  (318237) : Exiting Master process...
Jan 31 08:07:19 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[318233]: [WARNING]  (318237) : Exiting Master process...
Jan 31 08:07:19 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[318233]: [ALERT]    (318237) : Current worker (318239) exited with code 143 (Terminated)
Jan 31 08:07:19 compute-0 neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b[318233]: [WARNING]  (318237) : All workers exited. Exiting... (0)
Jan 31 08:07:19 compute-0 systemd[1]: libpod-5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e.scope: Deactivated successfully.
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.841 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:07:19 compute-0 podman[318265]: 2026-01-31 08:07:19.842550046 +0000 UTC m=+0.040092802 container died 5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.854 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e-userdata-shm.mount: Deactivated successfully.
Jan 31 08:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-391cf42f1a8aec7fde30b6e8debcb9e3321a1636bc9d2fdfbe387e6af4c069b8-merged.mount: Deactivated successfully.
Jan 31 08:07:19 compute-0 podman[318265]: 2026-01-31 08:07:19.891436511 +0000 UTC m=+0.088979307 container cleanup 5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:07:19 compute-0 systemd[1]: libpod-conmon-5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e.scope: Deactivated successfully.
Jan 31 08:07:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:19.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:19 compute-0 podman[318291]: 2026-01-31 08:07:19.966202929 +0000 UTC m=+0.053723364 container remove 5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.972 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9442ef10-ff6c-4c99-87b8-fe60c95a1625]: (4, ('Sat Jan 31 08:07:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e)\n5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e\nSat Jan 31 08:07:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b (5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e)\n5a7cf4e5c978d9a2dccd88e4a308e2223331b5a166843430f5638fff2915478e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.974 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5b34fc2f-f062-4917-9c7b-b6a214896c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.975 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79cb2b81-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.976 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:19 compute-0 kernel: tap79cb2b81-30: left promiscuous mode
Jan 31 08:07:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:19.982 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[364d7703-6004-475c-ac85-9a62fdb01c72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:19 compute-0 nova_compute[247704]: 2026-01-31 08:07:19.983 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:20.002 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[802190aa-223e-4e9b-bf4c-8bfaa4cf83b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:20.003 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d97885ae-babd-427b-b7e2-733f6e18b056]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:20.014 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6d93c6-4dc1-4a9b-8750-ee41c5218a5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708155, 'reachable_time': 30059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318311, 'error': None, 'target': 'ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d79cb2b81\x2d3369\x2d468a\x2d8bf6\x2d7e13d5df334b.mount: Deactivated successfully.
Jan 31 08:07:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:20.016 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79cb2b81-3369-468a-8bf6-7e13d5df334b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:07:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:20.016 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8a5585-19ad-45a2-9d44-65af5a13ed2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2761439231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/719574498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:07:20
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'vms', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images']
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.351 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.354 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.355 247708 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Creating image(s)
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.393 247708 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.426 247708 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.451 247708 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.455 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.472 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846825.4387956, 021ee385-cfad-415f-a2ac-cdcf925fccac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.473 247708 INFO nova.compute.manager [-] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] VM Stopped (Lifecycle Event)
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.513 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.514 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.514 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.515 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.539 247708 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.543 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:07:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.580 247708 DEBUG nova.compute.manager [None req-42f90443-1fb6-40c1-b154-c09cb1a52d12 - - - - - -] [instance: 021ee385-cfad-415f-a2ac-cdcf925fccac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:20 compute-0 nova_compute[247704]: 2026-01-31 08:07:20.612 247708 DEBUG nova.policy [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ba00f420cd940ff802c16e8c25c35c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b97d933ec6c34696b0483a895f47feef', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:07:21 compute-0 ceph-mon[74496]: pgmap v2114: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Jan 31 08:07:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3349957308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 3.5 MiB/s wr, 76 op/s
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.810 247708 DEBUG nova.compute.manager [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-vif-unplugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.810 247708 DEBUG oslo_concurrency.lockutils [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.811 247708 DEBUG oslo_concurrency.lockutils [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.811 247708 DEBUG oslo_concurrency.lockutils [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.811 247708 DEBUG nova.compute.manager [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] No waiting events found dispatching network-vif-unplugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.811 247708 DEBUG nova.compute.manager [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-vif-unplugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.811 247708 DEBUG nova.compute.manager [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.812 247708 DEBUG oslo_concurrency.lockutils [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.812 247708 DEBUG oslo_concurrency.lockutils [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.812 247708 DEBUG oslo_concurrency.lockutils [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.812 247708 DEBUG nova.compute.manager [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] No waiting events found dispatching network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:21 compute-0 nova_compute[247704]: 2026-01-31 08:07:21.812 247708 WARNING nova.compute.manager [req-05768ad0-ea98-41bd-9130-78b7f6c3857a req-6ee74c6b-671b-4d92-9a32-6e6a773d2758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received unexpected event network-vif-plugged-6dcdaf78-571b-42ba-bc17-7f6217ee6587 for instance with vm_state active and task_state deleting.
Jan 31 08:07:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:21.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:21.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:22 compute-0 sudo[318407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:22 compute-0 sudo[318407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:22 compute-0 sudo[318407]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:22 compute-0 sudo[318432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:22 compute-0 sudo[318432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:22 compute-0 sudo[318432]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:22 compute-0 nova_compute[247704]: 2026-01-31 08:07:22.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2887923547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:22 compute-0 ceph-mon[74496]: pgmap v2115: 305 pgs: 305 active+clean; 213 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 3.5 MiB/s wr, 76 op/s
Jan 31 08:07:23 compute-0 nova_compute[247704]: 2026-01-31 08:07:23.064 247708 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Successfully created port: 2e83b2ea-6a97-4f9c-b834-cea4fb36733d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:07:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 221 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Jan 31 08:07:23 compute-0 nova_compute[247704]: 2026-01-31 08:07:23.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:23.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:23.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.114 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.243 247708 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] resizing rbd image 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.289 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.389 247708 DEBUG nova.objects.instance [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'migration_context' on Instance uuid 71f4fe98-bb2f-4117-8dec-d6d8b464d01d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.421 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.422 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Ensure instance console log exists: /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.423 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.423 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.424 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.440 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:24 compute-0 ceph-mon[74496]: pgmap v2116: 305 pgs: 305 active+clean; 221 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 3.1 MiB/s wr, 71 op/s
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.621 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.622 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.622 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:07:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.780 247708 INFO nova.virt.libvirt.driver [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Deleting instance files /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac_del
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.782 247708 INFO nova.virt.libvirt.driver [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Deletion of /var/lib/nova/instances/32e56536-3edb-494c-9e8b-87cfa8396dac_del complete
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.874 247708 INFO nova.compute.manager [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Took 6.95 seconds to destroy the instance on the hypervisor.
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.875 247708 DEBUG oslo.service.loopingcall [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.876 247708 DEBUG nova.compute.manager [-] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.876 247708 DEBUG nova.network.neutron [-] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:07:24 compute-0 nova_compute[247704]: 2026-01-31 08:07:24.998 247708 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Successfully updated port: 2e83b2ea-6a97-4f9c-b834-cea4fb36733d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:07:25 compute-0 nova_compute[247704]: 2026-01-31 08:07:25.141 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "refresh_cache-71f4fe98-bb2f-4117-8dec-d6d8b464d01d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:07:25 compute-0 nova_compute[247704]: 2026-01-31 08:07:25.141 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquired lock "refresh_cache-71f4fe98-bb2f-4117-8dec-d6d8b464d01d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:07:25 compute-0 nova_compute[247704]: 2026-01-31 08:07:25.141 247708 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:07:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 242 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 31 08:07:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/853840470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1519004852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:25 compute-0 nova_compute[247704]: 2026-01-31 08:07:25.740 247708 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:07:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:25.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:25.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:26 compute-0 ceph-mon[74496]: pgmap v2117: 305 pgs: 305 active+clean; 242 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.729 247708 DEBUG nova.network.neutron [-] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.807 247708 INFO nova.compute.manager [-] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Took 1.93 seconds to deallocate network for instance.
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.816 247708 DEBUG nova.compute.manager [req-1d430bd6-3ce1-4191-8df5-dc9e5262801e req-62e26c57-2fe3-4fe9-b6eb-2fb72d9b1d6f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Received event network-vif-deleted-6dcdaf78-571b-42ba-bc17-7f6217ee6587 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.817 247708 INFO nova.compute.manager [req-1d430bd6-3ce1-4191-8df5-dc9e5262801e req-62e26c57-2fe3-4fe9-b6eb-2fb72d9b1d6f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Neutron deleted interface 6dcdaf78-571b-42ba-bc17-7f6217ee6587; detaching it from the instance and deleting it from the info cache
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.817 247708 DEBUG nova.network.neutron [req-1d430bd6-3ce1-4191-8df5-dc9e5262801e req-62e26c57-2fe3-4fe9-b6eb-2fb72d9b1d6f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.904 247708 DEBUG nova.compute.manager [req-1d430bd6-3ce1-4191-8df5-dc9e5262801e req-62e26c57-2fe3-4fe9-b6eb-2fb72d9b1d6f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Detach interface failed, port_id=6dcdaf78-571b-42ba-bc17-7f6217ee6587, reason: Instance 32e56536-3edb-494c-9e8b-87cfa8396dac could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.982 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:26 compute-0 nova_compute[247704]: 2026-01-31 08:07:26.982 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.089 247708 DEBUG oslo_concurrency.processutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126133665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.521 247708 DEBUG oslo_concurrency.processutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 226 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 3.2 MiB/s wr, 121 op/s
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.528 247708 DEBUG nova.compute.manager [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received event network-changed-2e83b2ea-6a97-4f9c-b834-cea4fb36733d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.529 247708 DEBUG nova.compute.manager [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Refreshing instance network info cache due to event network-changed-2e83b2ea-6a97-4f9c-b834-cea4fb36733d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.530 247708 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-71f4fe98-bb2f-4117-8dec-d6d8b464d01d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.531 247708 DEBUG nova.compute.provider_tree [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.597 247708 DEBUG nova.scheduler.client.report [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.656 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1126133665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.822 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.826 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.826 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.827 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:07:27 compute-0 nova_compute[247704]: 2026-01-31 08:07:27.827 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:27 compute-0 podman[318556]: 2026-01-31 08:07:27.87754296 +0000 UTC m=+0.049593414 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:07:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:27.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:27.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.111 247708 DEBUG nova.network.neutron [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Updating instance_info_cache with network_info: [{"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.151 247708 INFO nova.scheduler.client.report [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Deleted allocations for instance 32e56536-3edb-494c-9e8b-87cfa8396dac
Jan 31 08:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/216643493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.338 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.366 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Releasing lock "refresh_cache-71f4fe98-bb2f-4117-8dec-d6d8b464d01d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.368 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Instance network_info: |[{"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.369 247708 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-71f4fe98-bb2f-4117-8dec-d6d8b464d01d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.370 247708 DEBUG nova.network.neutron [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Refreshing network info cache for port 2e83b2ea-6a97-4f9c-b834-cea4fb36733d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.375 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Start _get_guest_xml network_info=[{"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.381 247708 WARNING nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.386 247708 DEBUG nova.virt.libvirt.host [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.387 247708 DEBUG nova.virt.libvirt.host [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.396 247708 DEBUG nova.virt.libvirt.host [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.397 247708 DEBUG nova.virt.libvirt.host [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.399 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.399 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.399 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.400 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.400 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.400 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.401 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.401 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.401 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.402 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.402 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.402 247708 DEBUG nova.virt.hardware [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.406 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.559 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.560 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4413MB free_disk=20.90032958984375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.560 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.560 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.627 247708 DEBUG oslo_concurrency.lockutils [None req-63403a4e-c35d-41a8-9362-38a20f898985 31043e345f6b48b585fb7b8ab7304764 d352316ff6534075952e2d0c28061b09 - - default default] Lock "32e56536-3edb-494c-9e8b-87cfa8396dac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:07:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2733325384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.847 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:28 compute-0 ceph-mon[74496]: pgmap v2118: 305 pgs: 305 active+clean; 226 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 3.2 MiB/s wr, 121 op/s
Jan 31 08:07:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/216643493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.892 247708 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.897 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.920 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 71f4fe98-bb2f-4117-8dec-d6d8b464d01d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.921 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.921 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:07:28 compute-0 nova_compute[247704]: 2026-01-31 08:07:28.996 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.287 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:07:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248679973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.349 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.351 247708 DEBUG nova.virt.libvirt.vif [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-261953401',display_name='tempest-tempest.common.compute-instance-261953401-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-261953401-2',id=110,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-w1iilz7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCre
ateTestJSON-744612571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:20Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=71f4fe98-bb2f-4117-8dec-d6d8b464d01d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.352 247708 DEBUG nova.network.os_vif_util [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.353 247708 DEBUG nova.network.os_vif_util [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:07:96,bridge_name='br-int',has_traffic_filtering=True,id=2e83b2ea-6a97-4f9c-b834-cea4fb36733d,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e83b2ea-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.354 247708 DEBUG nova.objects.instance [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'pci_devices' on Instance uuid 71f4fe98-bb2f-4117-8dec-d6d8b464d01d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.423 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <uuid>71f4fe98-bb2f-4117-8dec-d6d8b464d01d</uuid>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <name>instance-0000006e</name>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <nova:name>tempest-tempest.common.compute-instance-261953401-2</nova:name>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:07:28</nova:creationTime>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <nova:user uuid="5ba00f420cd940ff802c16e8c25c35c4">tempest-MultipleCreateTestJSON-744612571-project-member</nova:user>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <nova:project uuid="b97d933ec6c34696b0483a895f47feef">tempest-MultipleCreateTestJSON-744612571</nova:project>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <nova:port uuid="2e83b2ea-6a97-4f9c-b834-cea4fb36733d">
Jan 31 08:07:29 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <system>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <entry name="serial">71f4fe98-bb2f-4117-8dec-d6d8b464d01d</entry>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <entry name="uuid">71f4fe98-bb2f-4117-8dec-d6d8b464d01d</entry>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </system>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <os>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   </os>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <features>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   </features>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk">
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       </source>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk.config">
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       </source>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:07:29 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:47:07:96"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <target dev="tap2e83b2ea-6a"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d/console.log" append="off"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <video>
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </video>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:07:29 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:07:29 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:07:29 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:07:29 compute-0 nova_compute[247704]: </domain>
Jan 31 08:07:29 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.424 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Preparing to wait for external event network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.424 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.425 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.425 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.425 247708 DEBUG nova.virt.libvirt.vif [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-261953401',display_name='tempest-tempest.common.compute-instance-261953401-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-261953401-2',id=110,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-w1iilz7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCreateTestJSON-744612571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:20Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=71f4fe98-bb2f-4117-8dec-d6d8b464d01d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.426 247708 DEBUG nova.network.os_vif_util [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.426 247708 DEBUG nova.network.os_vif_util [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:07:96,bridge_name='br-int',has_traffic_filtering=True,id=2e83b2ea-6a97-4f9c-b834-cea4fb36733d,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e83b2ea-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.426 247708 DEBUG os_vif [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:07:96,bridge_name='br-int',has_traffic_filtering=True,id=2e83b2ea-6a97-4f9c-b834-cea4fb36733d,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e83b2ea-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.427 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.427 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.428 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.430 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.430 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e83b2ea-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.430 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e83b2ea-6a, col_values=(('external_ids', {'iface-id': '2e83b2ea-6a97-4f9c-b834-cea4fb36733d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:07:96', 'vm-uuid': '71f4fe98-bb2f-4117-8dec-d6d8b464d01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-0 NetworkManager[49108]: <info>  [1769846849.4329] manager: (tap2e83b2ea-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.438 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.439 247708 INFO os_vif [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:07:96,bridge_name='br-int',has_traffic_filtering=True,id=2e83b2ea-6a97-4f9c-b834-cea4fb36733d,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e83b2ea-6a')
Jan 31 08:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150270100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.475 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.480 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.570 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.635 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.636 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.636 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No VIF found with MAC fa:16:3e:47:07:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.636 247708 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Using config drive
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.665 247708 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.706 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:07:29 compute-0 nova_compute[247704]: 2026-01-31 08:07:29.706 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:29.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:29.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2733325384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1322478330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1080046223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4248679973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2150270100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-0 ceph-mon[74496]: pgmap v2119: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Jan 31 08:07:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/828928554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:30 compute-0 nova_compute[247704]: 2026-01-31 08:07:30.678 247708 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Creating config drive at /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d/disk.config
Jan 31 08:07:30 compute-0 nova_compute[247704]: 2026-01-31 08:07:30.684 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbmqj0a6h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:30 compute-0 nova_compute[247704]: 2026-01-31 08:07:30.705 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:30 compute-0 nova_compute[247704]: 2026-01-31 08:07:30.706 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:30 compute-0 nova_compute[247704]: 2026-01-31 08:07:30.706 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:07:30 compute-0 nova_compute[247704]: 2026-01-31 08:07:30.812 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbmqj0a6h" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:30 compute-0 nova_compute[247704]: 2026-01-31 08:07:30.854 247708 DEBUG nova.storage.rbd_utils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:30 compute-0 nova_compute[247704]: 2026-01-31 08:07:30.860 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d/disk.config 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.028 247708 DEBUG oslo_concurrency.processutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d/disk.config 71f4fe98-bb2f-4117-8dec-d6d8b464d01d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.029 247708 INFO nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Deleting local config drive /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d/disk.config because it was imported into RBD.
Jan 31 08:07:31 compute-0 kernel: tap2e83b2ea-6a: entered promiscuous mode
Jan 31 08:07:31 compute-0 ovn_controller[149457]: 2026-01-31T08:07:31Z|00439|binding|INFO|Claiming lport 2e83b2ea-6a97-4f9c-b834-cea4fb36733d for this chassis.
Jan 31 08:07:31 compute-0 ovn_controller[149457]: 2026-01-31T08:07:31Z|00440|binding|INFO|2e83b2ea-6a97-4f9c-b834-cea4fb36733d: Claiming fa:16:3e:47:07:96 10.100.0.12
Jan 31 08:07:31 compute-0 NetworkManager[49108]: <info>  [1769846851.0912] manager: (tap2e83b2ea-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.089 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-0 ovn_controller[149457]: 2026-01-31T08:07:31Z|00441|binding|INFO|Setting lport 2e83b2ea-6a97-4f9c-b834-cea4fb36733d ovn-installed in OVS
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.101 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.106 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-0 ovn_controller[149457]: 2026-01-31T08:07:31Z|00442|binding|INFO|Setting lport 2e83b2ea-6a97-4f9c-b834-cea4fb36733d up in Southbound
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.116 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:07:96 10.100.0.12'], port_security=['fa:16:3e:47:07:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '71f4fe98-bb2f-4117-8dec-d6d8b464d01d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae9ebc2-f854-4add-b86c-a5209381ad20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97d933ec6c34696b0483a895f47feef', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5e2db8d-c3a5-46be-bb72-92eb36b476fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14b206ab-a379-4d4a-9b80-58ba0ce20e17, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=2e83b2ea-6a97-4f9c-b834-cea4fb36733d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.118 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 2e83b2ea-6a97-4f9c-b834-cea4fb36733d in datapath aae9ebc2-f854-4add-b86c-a5209381ad20 bound to our chassis
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.120 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.129 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a04169e8-efe4-4523-8b53-099abf9e12d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.129 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaae9ebc2-f1 in ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.131 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaae9ebc2-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.131 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[617ee424-0f44-4d87-85e7-134d7301280b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.132 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a6599b88-5581-4a22-8aa3-c43f40fe4d61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 systemd-udevd[318757]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:07:31 compute-0 systemd-machined[214448]: New machine qemu-44-instance-0000006e.
Jan 31 08:07:31 compute-0 NetworkManager[49108]: <info>  [1769846851.1458] device (tap2e83b2ea-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:07:31 compute-0 NetworkManager[49108]: <info>  [1769846851.1465] device (tap2e83b2ea-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.144 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[628d2e89-2bc5-4e33-b4af-40e4d5db11f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-0000006e.
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.156 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3b00073f-ea26-4cf5-a034-e44574f6b693]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.175 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3a6f70-4dbd-4344-88a8-2c1b0fe09a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.179 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4f3935-ba71-4093-8ab3-507b9c0f5127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 NetworkManager[49108]: <info>  [1769846851.1807] manager: (tapaae9ebc2-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/205)
Jan 31 08:07:31 compute-0 systemd-udevd[318760]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:07:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1898498656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.209 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[272da4ae-8d02-47ea-a11a-ed3595fe098f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.212 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[79d27801-8ea5-4678-93ae-a0433fbc9223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 NetworkManager[49108]: <info>  [1769846851.2334] device (tapaae9ebc2-f0): carrier: link connected
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.241 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1676d18c-2f21-4acc-a5a3-866377732fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.258 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[89f9b78a-c9f1-4984-b429-ca9f32fe2e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae9ebc2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:4b:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709385, 'reachable_time': 17555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318789, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.274 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9adfc8a1-247a-4357-b41b-187bd14c0b17]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:4b98'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709385, 'tstamp': 709385}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318790, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.293 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[873e2a4e-297f-4edf-8c0d-c5fc23770658]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae9ebc2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:4b:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709385, 'reachable_time': 17555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318791, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.327 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1616fa42-bb5b-44ee-aae8-aabce2fb3188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.399 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fee441ed-3ba9-4d8b-aade-d1e0eb224b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.401 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae9ebc2-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.401 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.402 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaae9ebc2-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:31 compute-0 NetworkManager[49108]: <info>  [1769846851.4057] manager: (tapaae9ebc2-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.405 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-0 kernel: tapaae9ebc2-f0: entered promiscuous mode
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.408 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.412 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaae9ebc2-f0, col_values=(('external_ids', {'iface-id': '18dd3685-0abe-42cf-9017-dc52c6cb4266'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.413 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-0 ovn_controller[149457]: 2026-01-31T08:07:31Z|00443|binding|INFO|Releasing lport 18dd3685-0abe-42cf-9017-dc52c6cb4266 from this chassis (sb_readonly=0)
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.416 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.420 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b399ee-f1e9-4a12-80a8-44394d8fa5d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.421 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:07:31 compute-0 nova_compute[247704]: 2026-01-31 08:07:31.421 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:31.421 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'env', 'PROCESS_TAG=haproxy-aae9ebc2-f854-4add-b86c-a5209381ad20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aae9ebc2-f854-4add-b86c-a5209381ad20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:07:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Jan 31 08:07:31 compute-0 podman[318824]: 2026-01-31 08:07:31.762422014 +0000 UTC m=+0.049608213 container create a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:07:31 compute-0 systemd[1]: Started libpod-conmon-a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef.scope.
Jan 31 08:07:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:31 compute-0 podman[318824]: 2026-01-31 08:07:31.734242385 +0000 UTC m=+0.021428594 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2106c7f71af1490e4a6f8e61f39f4c5d6e6a181dfb64692ece651ebc0b144c15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:31 compute-0 podman[318824]: 2026-01-31 08:07:31.851339949 +0000 UTC m=+0.138526168 container init a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:07:31 compute-0 podman[318824]: 2026-01-31 08:07:31.86362724 +0000 UTC m=+0.150813429 container start a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 08:07:31 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[318839]: [NOTICE]   (318843) : New worker (318845) forked
Jan 31 08:07:31 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[318839]: [NOTICE]   (318843) : Loading success.
Jan 31 08:07:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:31.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:31.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.085 247708 DEBUG nova.network.neutron [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Updated VIF entry in instance network info cache for port 2e83b2ea-6a97-4f9c-b834-cea4fb36733d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.086 247708 DEBUG nova.network.neutron [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Updating instance_info_cache with network_info: [{"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.094 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846852.0938528, 71f4fe98-bb2f-4117-8dec-d6d8b464d01d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.095 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] VM Started (Lifecycle Event)
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.162 247708 DEBUG nova.compute.manager [req-cf036670-2e09-4ac3-b7df-3060a0db1ae1 req-cd3fd2f7-0dea-4653-97cf-0dbdadcd76f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received event network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.163 247708 DEBUG oslo_concurrency.lockutils [req-cf036670-2e09-4ac3-b7df-3060a0db1ae1 req-cd3fd2f7-0dea-4653-97cf-0dbdadcd76f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.163 247708 DEBUG oslo_concurrency.lockutils [req-cf036670-2e09-4ac3-b7df-3060a0db1ae1 req-cd3fd2f7-0dea-4653-97cf-0dbdadcd76f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.164 247708 DEBUG oslo_concurrency.lockutils [req-cf036670-2e09-4ac3-b7df-3060a0db1ae1 req-cd3fd2f7-0dea-4653-97cf-0dbdadcd76f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.164 247708 DEBUG nova.compute.manager [req-cf036670-2e09-4ac3-b7df-3060a0db1ae1 req-cd3fd2f7-0dea-4653-97cf-0dbdadcd76f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Processing event network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.165 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.172 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.177 247708 INFO nova.virt.libvirt.driver [-] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Instance spawned successfully.
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.177 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.185 247708 DEBUG oslo_concurrency.lockutils [req-a93a61b9-1b78-4a0f-95ec-f78fc03969ba req-1266f7bf-9510-474c-bf40-5ed2745d4d25 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-71f4fe98-bb2f-4117-8dec-d6d8b464d01d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.187 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.189 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:07:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1252239058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:32 compute-0 ceph-mon[74496]: pgmap v2120: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.241 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.242 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.242 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.243 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.243 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.244 247708 DEBUG nova.virt.libvirt.driver [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.278 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.281 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846852.0944262, 71f4fe98-bb2f-4117-8dec-d6d8b464d01d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.281 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] VM Paused (Lifecycle Event)
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.428 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.433 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846852.1711106, 71f4fe98-bb2f-4117-8dec-d6d8b464d01d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.434 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] VM Resumed (Lifecycle Event)
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.570 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.575 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.647 247708 INFO nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Took 12.30 seconds to spawn the instance on the hypervisor.
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.648 247708 DEBUG nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.668 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.812 247708 INFO nova.compute.manager [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Took 15.28 seconds to build instance.
Jan 31 08:07:32 compute-0 nova_compute[247704]: 2026-01-31 08:07:32.932 247708 DEBUG oslo_concurrency.lockutils [None req-88995467-35cf-42fc-acb7-1084c454bce1 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1848110150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 226 op/s
Jan 31 08:07:33 compute-0 nova_compute[247704]: 2026-01-31 08:07:33.759 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846838.7576814, 32e56536-3edb-494c-9e8b-87cfa8396dac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:33 compute-0 nova_compute[247704]: 2026-01-31 08:07:33.759 247708 INFO nova.compute.manager [-] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] VM Stopped (Lifecycle Event)
Jan 31 08:07:33 compute-0 nova_compute[247704]: 2026-01-31 08:07:33.909 247708 DEBUG nova.compute.manager [None req-84400fc1-aece-4a45-9fd2-1abc5822060f - - - - - -] [instance: 32e56536-3edb-494c-9e8b-87cfa8396dac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:33.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:33.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:34 compute-0 ceph-mon[74496]: pgmap v2121: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 226 op/s
Jan 31 08:07:34 compute-0 nova_compute[247704]: 2026-01-31 08:07:34.288 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:34 compute-0 nova_compute[247704]: 2026-01-31 08:07:34.356 247708 DEBUG nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received event network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:34 compute-0 nova_compute[247704]: 2026-01-31 08:07:34.357 247708 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:34 compute-0 nova_compute[247704]: 2026-01-31 08:07:34.357 247708 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:34 compute-0 nova_compute[247704]: 2026-01-31 08:07:34.358 247708 DEBUG oslo_concurrency.lockutils [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:34 compute-0 nova_compute[247704]: 2026-01-31 08:07:34.358 247708 DEBUG nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] No waiting events found dispatching network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:34 compute-0 nova_compute[247704]: 2026-01-31 08:07:34.358 247708 WARNING nova.compute.manager [req-5ece3e9e-e3d0-41dc-ac34-6079ef55ed52 req-71fda6e0-ba20-4510-ae65-b0ce9e9539ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received unexpected event network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d for instance with vm_state active and task_state None.
Jan 31 08:07:34 compute-0 nova_compute[247704]: 2026-01-31 08:07:34.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003971136062255723 of space, bias 1.0, pg target 1.191340818676717 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:07:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.5 MiB/s wr, 261 op/s
Jan 31 08:07:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:35.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:35.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:36 compute-0 ceph-mon[74496]: pgmap v2122: 305 pgs: 305 active+clean; 227 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.5 MiB/s wr, 261 op/s
Jan 31 08:07:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 235 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 3.1 MiB/s wr, 347 op/s
Jan 31 08:07:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:37.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:37.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:37 compute-0 ceph-mon[74496]: pgmap v2123: 305 pgs: 305 active+clean; 235 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 3.1 MiB/s wr, 347 op/s
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.797 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.797 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.798 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.798 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.798 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.800 247708 INFO nova.compute.manager [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Terminating instance
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.801 247708 DEBUG nova.compute.manager [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:07:38 compute-0 kernel: tap2e83b2ea-6a (unregistering): left promiscuous mode
Jan 31 08:07:38 compute-0 NetworkManager[49108]: <info>  [1769846858.8401] device (tap2e83b2ea-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-0 ovn_controller[149457]: 2026-01-31T08:07:38Z|00444|binding|INFO|Releasing lport 2e83b2ea-6a97-4f9c-b834-cea4fb36733d from this chassis (sb_readonly=0)
Jan 31 08:07:38 compute-0 ovn_controller[149457]: 2026-01-31T08:07:38Z|00445|binding|INFO|Setting lport 2e83b2ea-6a97-4f9c-b834-cea4fb36733d down in Southbound
Jan 31 08:07:38 compute-0 ovn_controller[149457]: 2026-01-31T08:07:38Z|00446|binding|INFO|Removing iface tap2e83b2ea-6a ovn-installed in OVS
Jan 31 08:07:38 compute-0 nova_compute[247704]: 2026-01-31 08:07:38.884 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:38 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 31 08:07:38 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000006e.scope: Consumed 7.786s CPU time.
Jan 31 08:07:38 compute-0 systemd-machined[214448]: Machine qemu-44-instance-0000006e terminated.
Jan 31 08:07:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:38.928 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:07:96 10.100.0.12'], port_security=['fa:16:3e:47:07:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '71f4fe98-bb2f-4117-8dec-d6d8b464d01d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae9ebc2-f854-4add-b86c-a5209381ad20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97d933ec6c34696b0483a895f47feef', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5e2db8d-c3a5-46be-bb72-92eb36b476fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14b206ab-a379-4d4a-9b80-58ba0ce20e17, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=2e83b2ea-6a97-4f9c-b834-cea4fb36733d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:38.930 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 2e83b2ea-6a97-4f9c-b834-cea4fb36733d in datapath aae9ebc2-f854-4add-b86c-a5209381ad20 unbound from our chassis
Jan 31 08:07:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:38.931 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aae9ebc2-f854-4add-b86c-a5209381ad20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:07:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:38.932 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a676131f-0e45-4497-839d-7fd6bc6161b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:38.933 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 namespace which is not needed anymore
Jan 31 08:07:39 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[318839]: [NOTICE]   (318843) : haproxy version is 2.8.14-c23fe91
Jan 31 08:07:39 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[318839]: [NOTICE]   (318843) : path to executable is /usr/sbin/haproxy
Jan 31 08:07:39 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[318839]: [ALERT]    (318843) : Current worker (318845) exited with code 143 (Terminated)
Jan 31 08:07:39 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[318839]: [WARNING]  (318843) : All workers exited. Exiting... (0)
Jan 31 08:07:39 compute-0 systemd[1]: libpod-a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef.scope: Deactivated successfully.
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.043 247708 INFO nova.virt.libvirt.driver [-] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Instance destroyed successfully.
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.044 247708 DEBUG nova.objects.instance [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'resources' on Instance uuid 71f4fe98-bb2f-4117-8dec-d6d8b464d01d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:39 compute-0 podman[318923]: 2026-01-31 08:07:39.050580258 +0000 UTC m=+0.055307744 container died a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.076 247708 DEBUG nova.virt.libvirt.vif [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-261953401',display_name='tempest-tempest.common.compute-instance-261953401-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-261953401-2',id=110,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-31T08:07:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-w1iilz7a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCreateTestJSON-744612571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:32Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=71f4fe98-bb2f-4117-8dec-d6d8b464d01d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.078 247708 DEBUG nova.network.os_vif_util [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "address": "fa:16:3e:47:07:96", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e83b2ea-6a", "ovs_interfaceid": "2e83b2ea-6a97-4f9c-b834-cea4fb36733d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.079 247708 DEBUG nova.network.os_vif_util [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:07:96,bridge_name='br-int',has_traffic_filtering=True,id=2e83b2ea-6a97-4f9c-b834-cea4fb36733d,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e83b2ea-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.080 247708 DEBUG os_vif [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:07:96,bridge_name='br-int',has_traffic_filtering=True,id=2e83b2ea-6a97-4f9c-b834-cea4fb36733d,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e83b2ea-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.082 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.083 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e83b2ea-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.087 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.090 247708 INFO os_vif [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:07:96,bridge_name='br-int',has_traffic_filtering=True,id=2e83b2ea-6a97-4f9c-b834-cea4fb36733d,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e83b2ea-6a')
Jan 31 08:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2106c7f71af1490e4a6f8e61f39f4c5d6e6a181dfb64692ece651ebc0b144c15-merged.mount: Deactivated successfully.
Jan 31 08:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef-userdata-shm.mount: Deactivated successfully.
Jan 31 08:07:39 compute-0 podman[318923]: 2026-01-31 08:07:39.126544395 +0000 UTC m=+0.131271891 container cleanup a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:39 compute-0 systemd[1]: libpod-conmon-a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef.scope: Deactivated successfully.
Jan 31 08:07:39 compute-0 podman[318979]: 2026-01-31 08:07:39.212562698 +0000 UTC m=+0.068613689 container remove a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.217 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dffeffe5-791e-4ae2-b93f-68bfd0be0b68]: (4, ('Sat Jan 31 08:07:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 (a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef)\na41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef\nSat Jan 31 08:07:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 (a41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef)\na41a243f7b9d2838b2472023789cde6097cb6822aeee602925f803dd2f22f1ef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.219 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c87dd726-cc45-42c5-93f0-f924c0f5966d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.220 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae9ebc2-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.222 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:39 compute-0 kernel: tapaae9ebc2-f0: left promiscuous mode
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.225 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.228 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b5cbda7e-a1eb-4885-a314-cd110d31edb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.230 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.241 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[92d6bc88-0b81-4e54-a884-05fd15f950ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.243 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b6d452-a20e-42cf-b6c4-c4fe02d5ee19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.255 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4a5de0-da33-45cb-a0c3-e472aff6967e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709379, 'reachable_time': 20771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318994, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:39 compute-0 systemd[1]: run-netns-ovnmeta\x2daae9ebc2\x2df854\x2d4add\x2db86c\x2da5209381ad20.mount: Deactivated successfully.
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.260 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:07:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:39.261 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5ca6a9-5436-4d6e-bdc3-e297c54792fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.291 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 226 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 2.2 MiB/s wr, 330 op/s
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.657 247708 INFO nova.virt.libvirt.driver [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Deleting instance files /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d_del
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.658 247708 INFO nova.virt.libvirt.driver [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Deletion of /var/lib/nova/instances/71f4fe98-bb2f-4117-8dec-d6d8b464d01d_del complete
Jan 31 08:07:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.827 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.895 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.946 247708 INFO nova.compute.manager [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.947 247708 DEBUG oslo.service.loopingcall [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.947 247708 DEBUG nova.compute.manager [-] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:07:39 compute-0 nova_compute[247704]: 2026-01-31 08:07:39.948 247708 DEBUG nova.network.neutron [-] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:07:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:39.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:39.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:40 compute-0 ceph-mon[74496]: pgmap v2124: 305 pgs: 305 active+clean; 226 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 2.2 MiB/s wr, 330 op/s
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.058 247708 DEBUG nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received event network-vif-unplugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.058 247708 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.059 247708 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.059 247708 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.059 247708 DEBUG nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] No waiting events found dispatching network-vif-unplugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.059 247708 DEBUG nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received event network-vif-unplugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.060 247708 DEBUG nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received event network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.060 247708 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.060 247708 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.060 247708 DEBUG oslo_concurrency.lockutils [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.061 247708 DEBUG nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] No waiting events found dispatching network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:07:41 compute-0 nova_compute[247704]: 2026-01-31 08:07:41.061 247708 WARNING nova.compute.manager [req-26e536ef-fe7d-468e-967b-994cb39863b8 req-9d1c7627-a254-4dfe-ac5a-03ee96c5f0ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received unexpected event network-vif-plugged-2e83b2ea-6a97-4f9c-b834-cea4fb36733d for instance with vm_state active and task_state deleting.
Jan 31 08:07:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 217 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.0 MiB/s wr, 296 op/s
Jan 31 08:07:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:41.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:41.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:42 compute-0 ceph-mon[74496]: pgmap v2125: 305 pgs: 305 active+clean; 217 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.0 MiB/s wr, 296 op/s
Jan 31 08:07:42 compute-0 sudo[318998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:42 compute-0 sudo[318998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:42 compute-0 sudo[318998]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:42 compute-0 sudo[319023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:42 compute-0 sudo[319023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:42 compute-0 sudo[319023]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:42.724 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:07:42 compute-0 nova_compute[247704]: 2026-01-31 08:07:42.725 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:42.727 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:07:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3483021904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 206 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.5 MiB/s wr, 285 op/s
Jan 31 08:07:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:07:43.729 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:07:43 compute-0 podman[319049]: 2026-01-31 08:07:43.919575406 +0000 UTC m=+0.088005223 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:07:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:43.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:43 compute-0 nova_compute[247704]: 2026-01-31 08:07:43.955 247708 DEBUG nova.network.neutron [-] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:43.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:44 compute-0 nova_compute[247704]: 2026-01-31 08:07:44.085 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:44 compute-0 nova_compute[247704]: 2026-01-31 08:07:44.194 247708 INFO nova.compute.manager [-] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Took 4.25 seconds to deallocate network for instance.
Jan 31 08:07:44 compute-0 nova_compute[247704]: 2026-01-31 08:07:44.294 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:44 compute-0 nova_compute[247704]: 2026-01-31 08:07:44.345 247708 DEBUG nova.compute.manager [req-eb8ffaa6-0a71-4172-876d-86db029291ca req-d2535f8e-4b6d-4e9e-aa0d-80581a59ef6f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Received event network-vif-deleted-2e83b2ea-6a97-4f9c-b834-cea4fb36733d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:44 compute-0 nova_compute[247704]: 2026-01-31 08:07:44.356 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:44 compute-0 nova_compute[247704]: 2026-01-31 08:07:44.356 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:44 compute-0 ceph-mon[74496]: pgmap v2126: 305 pgs: 305 active+clean; 206 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.5 MiB/s wr, 285 op/s
Jan 31 08:07:44 compute-0 nova_compute[247704]: 2026-01-31 08:07:44.563 247708 DEBUG oslo_concurrency.processutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089367561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:45 compute-0 nova_compute[247704]: 2026-01-31 08:07:45.009 247708 DEBUG oslo_concurrency.processutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:45 compute-0 nova_compute[247704]: 2026-01-31 08:07:45.016 247708 DEBUG nova.compute.provider_tree [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:45 compute-0 nova_compute[247704]: 2026-01-31 08:07:45.051 247708 DEBUG nova.scheduler.client.report [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:45 compute-0 nova_compute[247704]: 2026-01-31 08:07:45.154 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:45 compute-0 nova_compute[247704]: 2026-01-31 08:07:45.212 247708 INFO nova.scheduler.client.report [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Deleted allocations for instance 71f4fe98-bb2f-4117-8dec-d6d8b464d01d
Jan 31 08:07:45 compute-0 nova_compute[247704]: 2026-01-31 08:07:45.356 247708 DEBUG oslo_concurrency.lockutils [None req-10184237-ae5e-4a3b-a750-cd547ad79044 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "71f4fe98-bb2f-4117-8dec-d6d8b464d01d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 196 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 314 op/s
Jan 31 08:07:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3791122339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:07:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3791122339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:07:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2089367561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:45.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:45.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:46 compute-0 ceph-mon[74496]: pgmap v2127: 305 pgs: 305 active+clean; 196 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 314 op/s
Jan 31 08:07:47 compute-0 sudo[319098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:47 compute-0 sudo[319098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-0 sudo[319098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-0 sudo[319123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:47 compute-0 sudo[319123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-0 sudo[319123]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-0 sudo[319148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:47 compute-0 sudo[319148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-0 sudo[319148]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-0 sudo[319173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:07:47 compute-0 sudo[319173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 292 op/s
Jan 31 08:07:47 compute-0 sudo[319173]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-0 sudo[319229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:47 compute-0 sudo[319229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-0 sudo[319229]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-0 sudo[319254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:47 compute-0 sudo[319254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-0 sudo[319254]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:07:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:47.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:07:47 compute-0 sudo[319279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:47 compute-0 sudo[319279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:47 compute-0 sudo[319279]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:47.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:48 compute-0 sudo[319304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 08:07:48 compute-0 sudo[319304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:07:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:07:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:07:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:07:48 compute-0 podman[319371]: 2026-01-31 08:07:48.278105433 +0000 UTC m=+0.033960101 container create 1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-0 systemd[1]: Started libpod-conmon-1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71.scope.
Jan 31 08:07:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:48 compute-0 podman[319371]: 2026-01-31 08:07:48.345686895 +0000 UTC m=+0.101541563 container init 1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:07:48 compute-0 podman[319371]: 2026-01-31 08:07:48.350718489 +0000 UTC m=+0.106573157 container start 1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:07:48 compute-0 podman[319371]: 2026-01-31 08:07:48.354177553 +0000 UTC m=+0.110032221 container attach 1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:07:48 compute-0 jovial_hertz[319388]: 167 167
Jan 31 08:07:48 compute-0 systemd[1]: libpod-1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71.scope: Deactivated successfully.
Jan 31 08:07:48 compute-0 podman[319371]: 2026-01-31 08:07:48.356838488 +0000 UTC m=+0.112693186 container died 1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:48 compute-0 podman[319371]: 2026-01-31 08:07:48.26123058 +0000 UTC m=+0.017085268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4e7ec86b04c5db001a3bc125f3cf52b7befd3b8be352df28f934546bee1d639-merged.mount: Deactivated successfully.
Jan 31 08:07:48 compute-0 podman[319371]: 2026-01-31 08:07:48.413066733 +0000 UTC m=+0.168921401 container remove 1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:48 compute-0 systemd[1]: libpod-conmon-1e9310d7e9c943ee1bd97a3f2e5cf841843eae05615e5705ecaf825a7ddc1f71.scope: Deactivated successfully.
Jan 31 08:07:48 compute-0 podman[319414]: 2026-01-31 08:07:48.556857519 +0000 UTC m=+0.050385373 container create 4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:07:48 compute-0 ceph-mon[74496]: pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 292 op/s
Jan 31 08:07:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:48 compute-0 systemd[1]: Started libpod-conmon-4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4.scope.
Jan 31 08:07:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de42bd15daab78cf6c4e7ed30d09244aee070f72294682c818d875df1c99dcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de42bd15daab78cf6c4e7ed30d09244aee070f72294682c818d875df1c99dcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de42bd15daab78cf6c4e7ed30d09244aee070f72294682c818d875df1c99dcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de42bd15daab78cf6c4e7ed30d09244aee070f72294682c818d875df1c99dcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:48 compute-0 podman[319414]: 2026-01-31 08:07:48.534359519 +0000 UTC m=+0.027887393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:07:48 compute-0 podman[319414]: 2026-01-31 08:07:48.632062558 +0000 UTC m=+0.125590512 container init 4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:48 compute-0 podman[319414]: 2026-01-31 08:07:48.639699045 +0000 UTC m=+0.133226899 container start 4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:48 compute-0 podman[319414]: 2026-01-31 08:07:48.64359692 +0000 UTC m=+0.137124854 container attach 4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:49 compute-0 nova_compute[247704]: 2026-01-31 08:07:49.133 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:49 compute-0 nova_compute[247704]: 2026-01-31 08:07:49.295 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.0 MiB/s wr, 204 op/s
Jan 31 08:07:49 compute-0 loving_poitras[319431]: [
Jan 31 08:07:49 compute-0 loving_poitras[319431]:     {
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         "available": false,
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         "ceph_device": false,
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         "lsm_data": {},
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         "lvs": [],
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         "path": "/dev/sr0",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         "rejected_reasons": [
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "Has a FileSystem",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "Insufficient space (<5GB)"
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         ],
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         "sys_api": {
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "actuators": null,
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "device_nodes": "sr0",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "devname": "sr0",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "human_readable_size": "482.00 KB",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "id_bus": "ata",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "model": "QEMU DVD-ROM",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "nr_requests": "2",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "parent": "/dev/sr0",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "partitions": {},
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "path": "/dev/sr0",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "removable": "1",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "rev": "2.5+",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "ro": "0",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "rotational": "1",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "sas_address": "",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "sas_device_handle": "",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "scheduler_mode": "mq-deadline",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "sectors": 0,
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "sectorsize": "2048",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "size": 493568.0,
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "support_discard": "2048",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "type": "disk",
Jan 31 08:07:49 compute-0 loving_poitras[319431]:             "vendor": "QEMU"
Jan 31 08:07:49 compute-0 loving_poitras[319431]:         }
Jan 31 08:07:49 compute-0 loving_poitras[319431]:     }
Jan 31 08:07:49 compute-0 loving_poitras[319431]: ]
Jan 31 08:07:49 compute-0 systemd[1]: libpod-4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4.scope: Deactivated successfully.
Jan 31 08:07:49 compute-0 systemd[1]: libpod-4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4.scope: Consumed 1.063s CPU time.
Jan 31 08:07:49 compute-0 conmon[319431]: conmon 4b00a893bc63f22db6aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4.scope/container/memory.events
Jan 31 08:07:49 compute-0 podman[320630]: 2026-01-31 08:07:49.735327026 +0000 UTC m=+0.021821425 container died 4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6de42bd15daab78cf6c4e7ed30d09244aee070f72294682c818d875df1c99dcb-merged.mount: Deactivated successfully.
Jan 31 08:07:49 compute-0 podman[320630]: 2026-01-31 08:07:49.784486428 +0000 UTC m=+0.070980807 container remove 4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:49 compute-0 systemd[1]: libpod-conmon-4b00a893bc63f22db6aa064d28a69f8dea3e6cb58ff82f1fde84b3f98c6ff9c4.scope: Deactivated successfully.
Jan 31 08:07:49 compute-0 sudo[319304]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:07:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:07:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:49.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:49.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 150d8562-b18c-439d-9ba3-28b7688face2 does not exist
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5eb852d8-6f46-4737-b918-1cf39d80cb48 does not exist
Jan 31 08:07:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4334b0d2-e332-4271-86e8-7326be1a5ea7 does not exist
Jan 31 08:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:07:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:07:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:07:50 compute-0 sudo[320645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:50 compute-0 sudo[320645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:50 compute-0 sudo[320645]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:50 compute-0 sudo[320670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:50 compute-0 sudo[320670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:50 compute-0 sudo[320670]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:50 compute-0 sudo[320695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:50 compute-0 sudo[320695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:50 compute-0 sudo[320695]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:50 compute-0 sudo[320720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:07:50 compute-0 sudo[320720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:50 compute-0 ceph-mon[74496]: pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.0 MiB/s wr, 204 op/s
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:07:50 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:07:50 compute-0 podman[320787]: 2026-01-31 08:07:50.892070621 +0000 UTC m=+0.042612214 container create 2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:07:50 compute-0 systemd[1]: Started libpod-conmon-2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f.scope.
Jan 31 08:07:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:50 compute-0 podman[320787]: 2026-01-31 08:07:50.871548029 +0000 UTC m=+0.022089672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:07:50 compute-0 podman[320787]: 2026-01-31 08:07:50.974575938 +0000 UTC m=+0.125117561 container init 2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:07:50 compute-0 podman[320787]: 2026-01-31 08:07:50.980714758 +0000 UTC m=+0.131256381 container start 2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:07:50 compute-0 upbeat_hugle[320803]: 167 167
Jan 31 08:07:50 compute-0 systemd[1]: libpod-2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f.scope: Deactivated successfully.
Jan 31 08:07:50 compute-0 podman[320787]: 2026-01-31 08:07:50.985916696 +0000 UTC m=+0.136458319 container attach 2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:50 compute-0 podman[320787]: 2026-01-31 08:07:50.986595782 +0000 UTC m=+0.137137375 container died 2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fbe45cd6e9c9cc89238591b66c7b4abda674471ad282292a1cc202bd17d1f6f-merged.mount: Deactivated successfully.
Jan 31 08:07:51 compute-0 podman[320787]: 2026-01-31 08:07:51.024337135 +0000 UTC m=+0.174878728 container remove 2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:51 compute-0 systemd[1]: libpod-conmon-2b122a95b440154b863f26fdd6f105f3f2668274ed86afc69b667d34d976d45f.scope: Deactivated successfully.
Jan 31 08:07:51 compute-0 podman[320826]: 2026-01-31 08:07:51.189204986 +0000 UTC m=+0.046067707 container create 8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:07:51 compute-0 systemd[1]: Started libpod-conmon-8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e.scope.
Jan 31 08:07:51 compute-0 podman[320826]: 2026-01-31 08:07:51.163524359 +0000 UTC m=+0.020387120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:07:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e207821f7b5880e9e533c6e53272ccaa784ce3888b60a47f01707ba4df0ab2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e207821f7b5880e9e533c6e53272ccaa784ce3888b60a47f01707ba4df0ab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e207821f7b5880e9e533c6e53272ccaa784ce3888b60a47f01707ba4df0ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e207821f7b5880e9e533c6e53272ccaa784ce3888b60a47f01707ba4df0ab2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e207821f7b5880e9e533c6e53272ccaa784ce3888b60a47f01707ba4df0ab2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:51 compute-0 podman[320826]: 2026-01-31 08:07:51.290719399 +0000 UTC m=+0.147582180 container init 8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:07:51 compute-0 podman[320826]: 2026-01-31 08:07:51.299379271 +0000 UTC m=+0.156241952 container start 8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:07:51 compute-0 podman[320826]: 2026-01-31 08:07:51.303146562 +0000 UTC m=+0.160009323 container attach 8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:07:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 3.1 MiB/s wr, 135 op/s
Jan 31 08:07:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:51.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:52 compute-0 nervous_kare[320842]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:07:52 compute-0 nervous_kare[320842]: --> relative data size: 1.0
Jan 31 08:07:52 compute-0 nervous_kare[320842]: --> All data devices are unavailable
Jan 31 08:07:52 compute-0 systemd[1]: libpod-8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e.scope: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[320826]: 2026-01-31 08:07:52.131520988 +0000 UTC m=+0.988383669 container died 8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 08:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-59e207821f7b5880e9e533c6e53272ccaa784ce3888b60a47f01707ba4df0ab2-merged.mount: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[320826]: 2026-01-31 08:07:52.190003278 +0000 UTC m=+1.046865959 container remove 8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:07:52 compute-0 systemd[1]: libpod-conmon-8a9c3e812b70f9706650f8e94bcdff2fe5677b4119f9fd3e93080c2947ab917e.scope: Deactivated successfully.
Jan 31 08:07:52 compute-0 sudo[320720]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:52 compute-0 sudo[320870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:52 compute-0 sudo[320870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:52 compute-0 sudo[320870]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:52 compute-0 sudo[320895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:52 compute-0 sudo[320895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:52 compute-0 sudo[320895]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:52 compute-0 nova_compute[247704]: 2026-01-31 08:07:52.381 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "53e406eb-6457-4cc1-ae9a-a525e21631c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:52 compute-0 nova_compute[247704]: 2026-01-31 08:07:52.382 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:52 compute-0 sudo[320920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:52 compute-0 sudo[320920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:52 compute-0 sudo[320920]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:52 compute-0 nova_compute[247704]: 2026-01-31 08:07:52.495 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:07:52 compute-0 sudo[320945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:07:52 compute-0 sudo[320945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:52 compute-0 nova_compute[247704]: 2026-01-31 08:07:52.622 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:52 compute-0 nova_compute[247704]: 2026-01-31 08:07:52.622 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:52 compute-0 ceph-mon[74496]: pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 3.1 MiB/s wr, 135 op/s
Jan 31 08:07:52 compute-0 nova_compute[247704]: 2026-01-31 08:07:52.759 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:07:52 compute-0 nova_compute[247704]: 2026-01-31 08:07:52.760 247708 INFO nova.compute.claims [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:07:52 compute-0 podman[321010]: 2026-01-31 08:07:52.843778675 +0000 UTC m=+0.054980505 container create dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:07:52 compute-0 systemd[1]: Started libpod-conmon-dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3.scope.
Jan 31 08:07:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:52 compute-0 podman[321010]: 2026-01-31 08:07:52.823414566 +0000 UTC m=+0.034616496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:07:52 compute-0 podman[321010]: 2026-01-31 08:07:52.927996204 +0000 UTC m=+0.139198064 container init dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:07:52 compute-0 nova_compute[247704]: 2026-01-31 08:07:52.933 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:52 compute-0 podman[321010]: 2026-01-31 08:07:52.934105493 +0000 UTC m=+0.145307333 container start dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:52 compute-0 hopeful_hertz[321026]: 167 167
Jan 31 08:07:52 compute-0 systemd[1]: libpod-dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3.scope: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[321010]: 2026-01-31 08:07:52.937689781 +0000 UTC m=+0.148891721 container attach dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:52 compute-0 podman[321010]: 2026-01-31 08:07:52.938902281 +0000 UTC m=+0.150104201 container died dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-39dba4a4f702bd56c90215a0f436a99015c191389a037839913c4a75bf932dfa-merged.mount: Deactivated successfully.
Jan 31 08:07:52 compute-0 podman[321010]: 2026-01-31 08:07:52.979035511 +0000 UTC m=+0.190237351 container remove dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:52 compute-0 systemd[1]: libpod-conmon-dc8b03144b35f697cc654057c8d6469c6e4d785d0402c0a2d1fdf585484a25d3.scope: Deactivated successfully.
Jan 31 08:07:53 compute-0 podman[321068]: 2026-01-31 08:07:53.134523974 +0000 UTC m=+0.048234921 container create eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ramanujan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:07:53 compute-0 systemd[1]: Started libpod-conmon-eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8.scope.
Jan 31 08:07:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:53 compute-0 podman[321068]: 2026-01-31 08:07:53.114164826 +0000 UTC m=+0.027875803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bd396ca7156cccf9a728d70cce5102902396d31fe087afe29b60f31cf13e26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bd396ca7156cccf9a728d70cce5102902396d31fe087afe29b60f31cf13e26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bd396ca7156cccf9a728d70cce5102902396d31fe087afe29b60f31cf13e26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22bd396ca7156cccf9a728d70cce5102902396d31fe087afe29b60f31cf13e26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:53 compute-0 podman[321068]: 2026-01-31 08:07:53.22351653 +0000 UTC m=+0.137227537 container init eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ramanujan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:07:53 compute-0 podman[321068]: 2026-01-31 08:07:53.230348497 +0000 UTC m=+0.144059444 container start eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:07:53 compute-0 podman[321068]: 2026-01-31 08:07:53.234025827 +0000 UTC m=+0.147736884 container attach eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 08:07:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:07:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487880865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.364 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.371 247708 DEBUG nova.compute.provider_tree [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.415 247708 DEBUG nova.scheduler.client.report [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.480 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.481 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:07:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 1.4 MiB/s wr, 73 op/s
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.621 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.622 247708 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.679 247708 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:07:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3487880865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2015146701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.797 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]: {
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:     "0": [
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:         {
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "devices": [
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "/dev/loop3"
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             ],
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "lv_name": "ceph_lv0",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "lv_size": "7511998464",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "name": "ceph_lv0",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "tags": {
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.crush_device_class": "",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.encrypted": "0",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.osd_id": "0",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.type": "block",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:                 "ceph.vdo": "0"
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             },
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "type": "block",
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:             "vg_name": "ceph_vg0"
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:         }
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]:     ]
Jan 31 08:07:53 compute-0 amazing_ramanujan[321084]: }
Jan 31 08:07:53 compute-0 systemd[1]: libpod-eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8.scope: Deactivated successfully.
Jan 31 08:07:53 compute-0 podman[321068]: 2026-01-31 08:07:53.955787296 +0000 UTC m=+0.869498283 container died eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ramanujan, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:07:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:53.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:53 compute-0 nova_compute[247704]: 2026-01-31 08:07:53.982 247708 DEBUG nova.policy [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ba00f420cd940ff802c16e8c25c35c4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b97d933ec6c34696b0483a895f47feef', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:07:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:53.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.040 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846859.0403194, 71f4fe98-bb2f-4117-8dec-d6d8b464d01d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.041 247708 INFO nova.compute.manager [-] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] VM Stopped (Lifecycle Event)
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.057 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.058 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.058 247708 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Creating image(s)
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.088 247708 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-22bd396ca7156cccf9a728d70cce5102902396d31fe087afe29b60f31cf13e26-merged.mount: Deactivated successfully.
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.116 247708 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.150 247708 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.156 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.183 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.187 247708 DEBUG nova.compute.manager [None req-3bffd467-52c3-4904-9a49-275b261fe411 - - - - - -] [instance: 71f4fe98-bb2f-4117-8dec-d6d8b464d01d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:07:54 compute-0 podman[321068]: 2026-01-31 08:07:54.215562788 +0000 UTC m=+1.129273735 container remove eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:54 compute-0 systemd[1]: libpod-conmon-eb94af1445ed5afa3865cc4e60addf00f50e4434ffe0978a7495bea409ece8b8.scope: Deactivated successfully.
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.223 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.224 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.224 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.224 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:54 compute-0 sudo[320945]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.263 247708 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.268 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:54 compute-0 nova_compute[247704]: 2026-01-31 08:07:54.298 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:54 compute-0 sudo[321185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:54 compute-0 sudo[321185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:54 compute-0 sudo[321185]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 sudo[321225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:54 compute-0 sudo[321225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:54 compute-0 sudo[321225]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 sudo[321251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:54 compute-0 sudo[321251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:54 compute-0 sudo[321251]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 sudo[321276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:07:54 compute-0 sudo[321276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:54 compute-0 podman[321345]: 2026-01-31 08:07:54.803743691 +0000 UTC m=+0.051456520 container create 84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:07:54 compute-0 ceph-mon[74496]: pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 1.4 MiB/s wr, 73 op/s
Jan 31 08:07:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/755635357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3162634110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:07:54 compute-0 systemd[1]: Started libpod-conmon-84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9.scope.
Jan 31 08:07:54 compute-0 podman[321345]: 2026-01-31 08:07:54.771597904 +0000 UTC m=+0.019310693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:07:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:54 compute-0 podman[321345]: 2026-01-31 08:07:54.927454945 +0000 UTC m=+0.175167754 container init 84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:54 compute-0 podman[321345]: 2026-01-31 08:07:54.934117668 +0000 UTC m=+0.181830477 container start 84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_murdock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:07:54 compute-0 sad_murdock[321362]: 167 167
Jan 31 08:07:54 compute-0 systemd[1]: libpod-84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9.scope: Deactivated successfully.
Jan 31 08:07:54 compute-0 conmon[321362]: conmon 84bdcd2c48e0f9c0be37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9.scope/container/memory.events
Jan 31 08:07:54 compute-0 podman[321345]: 2026-01-31 08:07:54.945194439 +0000 UTC m=+0.192907228 container attach 84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:07:54 compute-0 podman[321345]: 2026-01-31 08:07:54.94564762 +0000 UTC m=+0.193360409 container died 84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4d644d68248c7ed8bf3b3e167e843e8ffb770699dfef59b44d05f9571a1e96d-merged.mount: Deactivated successfully.
Jan 31 08:07:55 compute-0 podman[321345]: 2026-01-31 08:07:55.025458592 +0000 UTC m=+0.273171421 container remove 84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:55 compute-0 systemd[1]: libpod-conmon-84bdcd2c48e0f9c0be3711ccd25b3de7484d38eb3b8f62acec5dea32ec6322e9.scope: Deactivated successfully.
Jan 31 08:07:55 compute-0 podman[321386]: 2026-01-31 08:07:55.168265914 +0000 UTC m=+0.054922924 container create 96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:07:55 compute-0 systemd[1]: Started libpod-conmon-96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de.scope.
Jan 31 08:07:55 compute-0 podman[321386]: 2026-01-31 08:07:55.145903287 +0000 UTC m=+0.032560307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:07:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0965ca2d113ac3f41997114975ed470bdce9ec6e19cea3a9b5111f4c920ee38f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0965ca2d113ac3f41997114975ed470bdce9ec6e19cea3a9b5111f4c920ee38f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0965ca2d113ac3f41997114975ed470bdce9ec6e19cea3a9b5111f4c920ee38f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0965ca2d113ac3f41997114975ed470bdce9ec6e19cea3a9b5111f4c920ee38f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:55 compute-0 podman[321386]: 2026-01-31 08:07:55.275511986 +0000 UTC m=+0.162169006 container init 96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:07:55 compute-0 podman[321386]: 2026-01-31 08:07:55.287779166 +0000 UTC m=+0.174436186 container start 96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:55 compute-0 podman[321386]: 2026-01-31 08:07:55.292701797 +0000 UTC m=+0.179358797 container attach 96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:07:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 235 KiB/s rd, 893 KiB/s wr, 54 op/s
Jan 31 08:07:55 compute-0 nova_compute[247704]: 2026-01-31 08:07:55.624 247708 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Successfully created port: a8f37427-8a41-4ba9-b278-a64c13d982b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:07:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:55.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:55 compute-0 ceph-mon[74496]: pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 235 KiB/s rd, 893 KiB/s wr, 54 op/s
Jan 31 08:07:56 compute-0 nova_compute[247704]: 2026-01-31 08:07:56.104 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.836s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]: {
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]:         "osd_id": 0,
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]:         "type": "bluestore"
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]:     }
Jan 31 08:07:56 compute-0 happy_heisenberg[321402]: }
Jan 31 08:07:56 compute-0 systemd[1]: libpod-96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de.scope: Deactivated successfully.
Jan 31 08:07:56 compute-0 podman[321386]: 2026-01-31 08:07:56.172169351 +0000 UTC m=+1.058826321 container died 96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:07:56 compute-0 nova_compute[247704]: 2026-01-31 08:07:56.180 247708 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] resizing rbd image 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0965ca2d113ac3f41997114975ed470bdce9ec6e19cea3a9b5111f4c920ee38f-merged.mount: Deactivated successfully.
Jan 31 08:07:56 compute-0 podman[321386]: 2026-01-31 08:07:56.211604486 +0000 UTC m=+1.098261456 container remove 96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:56 compute-0 systemd[1]: libpod-conmon-96dc5a6fdb5afa8d0a9dbf24c1e55b3fff269065a1fc2c57380198a5c12526de.scope: Deactivated successfully.
Jan 31 08:07:56 compute-0 sudo[321276]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:07:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:07:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c14ed655-c0be-4316-a1de-025d699ffb39 does not exist
Jan 31 08:07:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7e334755-16ad-463e-997d-5af9e1c07547 does not exist
Jan 31 08:07:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 10d3afee-75de-4e9d-bb18-cc0c31edd400 does not exist
Jan 31 08:07:56 compute-0 nova_compute[247704]: 2026-01-31 08:07:56.299 247708 DEBUG nova.objects.instance [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'migration_context' on Instance uuid 53e406eb-6457-4cc1-ae9a-a525e21631c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:07:56 compute-0 sudo[321499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:07:56 compute-0 sudo[321499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:56 compute-0 sudo[321499]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:56 compute-0 sudo[321531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:07:56 compute-0 sudo[321531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:56 compute-0 sudo[321531]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 31 08:07:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 31 08:07:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 31 08:07:57 compute-0 nova_compute[247704]: 2026-01-31 08:07:57.186 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:07:57 compute-0 nova_compute[247704]: 2026-01-31 08:07:57.187 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Ensure instance console log exists: /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:07:57 compute-0 nova_compute[247704]: 2026-01-31 08:07:57.188 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:07:57 compute-0 nova_compute[247704]: 2026-01-31 08:07:57.188 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:07:57 compute-0 nova_compute[247704]: 2026-01-31 08:07:57.188 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:07:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:07:57 compute-0 ceph-mon[74496]: osdmap e263: 3 total, 3 up, 3 in
Jan 31 08:07:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 258 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 78 op/s
Jan 31 08:07:57 compute-0 nova_compute[247704]: 2026-01-31 08:07:57.963 247708 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Successfully updated port: a8f37427-8a41-4ba9-b278-a64c13d982b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:07:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:57.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:07:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:57.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:07:58 compute-0 nova_compute[247704]: 2026-01-31 08:07:58.112 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "refresh_cache-53e406eb-6457-4cc1-ae9a-a525e21631c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:07:58 compute-0 nova_compute[247704]: 2026-01-31 08:07:58.113 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquired lock "refresh_cache-53e406eb-6457-4cc1-ae9a-a525e21631c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:07:58 compute-0 nova_compute[247704]: 2026-01-31 08:07:58.113 247708 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:07:58 compute-0 nova_compute[247704]: 2026-01-31 08:07:58.149 247708 DEBUG nova.compute.manager [req-1ce93a58-b129-46c2-b55a-0390bf14ab1b req-44e9171f-cee3-41e4-b87e-66b2f6c01078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received event network-changed-a8f37427-8a41-4ba9-b278-a64c13d982b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:07:58 compute-0 nova_compute[247704]: 2026-01-31 08:07:58.149 247708 DEBUG nova.compute.manager [req-1ce93a58-b129-46c2-b55a-0390bf14ab1b req-44e9171f-cee3-41e4-b87e-66b2f6c01078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Refreshing instance network info cache due to event network-changed-a8f37427-8a41-4ba9-b278-a64c13d982b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:07:58 compute-0 nova_compute[247704]: 2026-01-31 08:07:58.149 247708 DEBUG oslo_concurrency.lockutils [req-1ce93a58-b129-46c2-b55a-0390bf14ab1b req-44e9171f-cee3-41e4-b87e-66b2f6c01078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-53e406eb-6457-4cc1-ae9a-a525e21631c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:07:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 31 08:07:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 31 08:07:58 compute-0 ceph-mon[74496]: pgmap v2134: 305 pgs: 305 active+clean; 258 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 78 op/s
Jan 31 08:07:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 31 08:07:58 compute-0 nova_compute[247704]: 2026-01-31 08:07:58.370 247708 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:07:58 compute-0 podman[321557]: 2026-01-31 08:07:58.876029857 +0000 UTC m=+0.049779947 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.187 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 31 08:07:59 compute-0 ceph-mon[74496]: osdmap e264: 3 total, 3 up, 3 in
Jan 31 08:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.301 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:07:59 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.437 247708 DEBUG nova.network.neutron [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Updating instance_info_cache with network_info: [{"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.512 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Releasing lock "refresh_cache-53e406eb-6457-4cc1-ae9a-a525e21631c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.513 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Instance network_info: |[{"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.513 247708 DEBUG oslo_concurrency.lockutils [req-1ce93a58-b129-46c2-b55a-0390bf14ab1b req-44e9171f-cee3-41e4-b87e-66b2f6c01078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-53e406eb-6457-4cc1-ae9a-a525e21631c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.514 247708 DEBUG nova.network.neutron [req-1ce93a58-b129-46c2-b55a-0390bf14ab1b req-44e9171f-cee3-41e4-b87e-66b2f6c01078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Refreshing network info cache for port a8f37427-8a41-4ba9-b278-a64c13d982b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.517 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Start _get_guest_xml network_info=[{"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.521 247708 WARNING nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.527 247708 DEBUG nova.virt.libvirt.host [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.527 247708 DEBUG nova.virt.libvirt.host [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.531 247708 DEBUG nova.virt.libvirt.host [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.532 247708 DEBUG nova.virt.libvirt.host [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.534 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.534 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.535 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.535 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.536 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.536 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.537 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.537 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.538 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.538 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.539 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.539 247708 DEBUG nova.virt.hardware [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:07:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 333 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 11 MiB/s wr, 316 op/s
Jan 31 08:07:59 compute-0 nova_compute[247704]: 2026-01-31 08:07:59.544 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:07:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:59.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:07:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:07:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:59.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:07:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:07:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3387995892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.009 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.036 247708 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.041 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:00 compute-0 ceph-mon[74496]: osdmap e265: 3 total, 3 up, 3 in
Jan 31 08:08:00 compute-0 ceph-mon[74496]: pgmap v2137: 305 pgs: 305 active+clean; 333 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 11 MiB/s wr, 316 op/s
Jan 31 08:08:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3387995892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:08:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031210827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.475 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.477 247708 DEBUG nova.virt.libvirt.vif [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-882171554',display_name='tempest-MultipleCreateTestJSON-server-882171554-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-882171554-2',id=112,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-z1qzyfi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCreateTestJSON-744612571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:53Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=53e406eb-6457-4cc1-ae9a-a525e21631c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.477 247708 DEBUG nova.network.os_vif_util [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.478 247708 DEBUG nova.network.os_vif_util [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:d3:b9,bridge_name='br-int',has_traffic_filtering=True,id=a8f37427-8a41-4ba9-b278-a64c13d982b3,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8f37427-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.480 247708 DEBUG nova.objects.instance [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'pci_devices' on Instance uuid 53e406eb-6457-4cc1-ae9a-a525e21631c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.552 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <uuid>53e406eb-6457-4cc1-ae9a-a525e21631c2</uuid>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <name>instance-00000070</name>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <nova:name>tempest-MultipleCreateTestJSON-server-882171554-2</nova:name>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:07:59</nova:creationTime>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <nova:user uuid="5ba00f420cd940ff802c16e8c25c35c4">tempest-MultipleCreateTestJSON-744612571-project-member</nova:user>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <nova:project uuid="b97d933ec6c34696b0483a895f47feef">tempest-MultipleCreateTestJSON-744612571</nova:project>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <nova:port uuid="a8f37427-8a41-4ba9-b278-a64c13d982b3">
Jan 31 08:08:00 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <system>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <entry name="serial">53e406eb-6457-4cc1-ae9a-a525e21631c2</entry>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <entry name="uuid">53e406eb-6457-4cc1-ae9a-a525e21631c2</entry>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </system>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <os>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   </os>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <features>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   </features>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/53e406eb-6457-4cc1-ae9a-a525e21631c2_disk">
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       </source>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/53e406eb-6457-4cc1-ae9a-a525e21631c2_disk.config">
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       </source>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:08:00 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:09:d3:b9"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <target dev="tapa8f37427-8a"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2/console.log" append="off"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <video>
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </video>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:08:00 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:08:00 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:08:00 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:08:00 compute-0 nova_compute[247704]: </domain>
Jan 31 08:08:00 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.553 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Preparing to wait for external event network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.554 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.554 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.554 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.555 247708 DEBUG nova.virt.libvirt.vif [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-882171554',display_name='tempest-MultipleCreateTestJSON-server-882171554-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-882171554-2',id=112,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-z1qzyfi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCreateTestJSON-744612571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:53Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=53e406eb-6457-4cc1-ae9a-a525e21631c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.555 247708 DEBUG nova.network.os_vif_util [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.555 247708 DEBUG nova.network.os_vif_util [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:d3:b9,bridge_name='br-int',has_traffic_filtering=True,id=a8f37427-8a41-4ba9-b278-a64c13d982b3,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8f37427-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.556 247708 DEBUG os_vif [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:d3:b9,bridge_name='br-int',has_traffic_filtering=True,id=a8f37427-8a41-4ba9-b278-a64c13d982b3,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8f37427-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.556 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.557 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.557 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.561 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.561 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8f37427-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.561 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8f37427-8a, col_values=(('external_ids', {'iface-id': 'a8f37427-8a41-4ba9-b278-a64c13d982b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:d3:b9', 'vm-uuid': '53e406eb-6457-4cc1-ae9a-a525e21631c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.563 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:00 compute-0 NetworkManager[49108]: <info>  [1769846880.5648] manager: (tapa8f37427-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.565 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.569 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.570 247708 INFO os_vif [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:d3:b9,bridge_name='br-int',has_traffic_filtering=True,id=a8f37427-8a41-4ba9-b278-a64c13d982b3,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8f37427-8a')
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.888 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.889 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.889 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] No VIF found with MAC fa:16:3e:09:d3:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.891 247708 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Using config drive
Jan 31 08:08:00 compute-0 nova_compute[247704]: 2026-01-31 08:08:00.929 247708 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:01 compute-0 nova_compute[247704]: 2026-01-31 08:08:01.262 247708 DEBUG nova.network.neutron [req-1ce93a58-b129-46c2-b55a-0390bf14ab1b req-44e9171f-cee3-41e4-b87e-66b2f6c01078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Updated VIF entry in instance network info cache for port a8f37427-8a41-4ba9-b278-a64c13d982b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:08:01 compute-0 nova_compute[247704]: 2026-01-31 08:08:01.262 247708 DEBUG nova.network.neutron [req-1ce93a58-b129-46c2-b55a-0390bf14ab1b req-44e9171f-cee3-41e4-b87e-66b2f6c01078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Updating instance_info_cache with network_info: [{"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:01 compute-0 nova_compute[247704]: 2026-01-31 08:08:01.283 247708 DEBUG oslo_concurrency.lockutils [req-1ce93a58-b129-46c2-b55a-0390bf14ab1b req-44e9171f-cee3-41e4-b87e-66b2f6c01078 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-53e406eb-6457-4cc1-ae9a-a525e21631c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:08:01 compute-0 nova_compute[247704]: 2026-01-31 08:08:01.386 247708 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Creating config drive at /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2/disk.config
Jan 31 08:08:01 compute-0 nova_compute[247704]: 2026-01-31 08:08:01.390 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7qsy1688 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:01 compute-0 nova_compute[247704]: 2026-01-31 08:08:01.511 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7qsy1688" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 364 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 386 op/s
Jan 31 08:08:01 compute-0 nova_compute[247704]: 2026-01-31 08:08:01.675 247708 DEBUG nova.storage.rbd_utils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] rbd image 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:01 compute-0 nova_compute[247704]: 2026-01-31 08:08:01.681 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2/disk.config 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4031210827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:01.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:01.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.330 247708 DEBUG oslo_concurrency.processutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2/disk.config 53e406eb-6457-4cc1-ae9a-a525e21631c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.331 247708 INFO nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Deleting local config drive /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2/disk.config because it was imported into RBD.
Jan 31 08:08:02 compute-0 kernel: tapa8f37427-8a: entered promiscuous mode
Jan 31 08:08:02 compute-0 NetworkManager[49108]: <info>  [1769846882.3852] manager: (tapa8f37427-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Jan 31 08:08:02 compute-0 ovn_controller[149457]: 2026-01-31T08:08:02Z|00447|binding|INFO|Claiming lport a8f37427-8a41-4ba9-b278-a64c13d982b3 for this chassis.
Jan 31 08:08:02 compute-0 ovn_controller[149457]: 2026-01-31T08:08:02Z|00448|binding|INFO|a8f37427-8a41-4ba9-b278-a64c13d982b3: Claiming fa:16:3e:09:d3:b9 10.100.0.10
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.384 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.388 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-0 systemd-udevd[321713]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.413 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-0 systemd-machined[214448]: New machine qemu-45-instance-00000070.
Jan 31 08:08:02 compute-0 ovn_controller[149457]: 2026-01-31T08:08:02Z|00449|binding|INFO|Setting lport a8f37427-8a41-4ba9-b278-a64c13d982b3 ovn-installed in OVS
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.422 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-0 NetworkManager[49108]: <info>  [1769846882.4266] device (tapa8f37427-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:08:02 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000070.
Jan 31 08:08:02 compute-0 NetworkManager[49108]: <info>  [1769846882.4278] device (tapa8f37427-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:08:02 compute-0 ovn_controller[149457]: 2026-01-31T08:08:02Z|00450|binding|INFO|Setting lport a8f37427-8a41-4ba9-b278-a64c13d982b3 up in Southbound
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.656 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:d3:b9 10.100.0.10'], port_security=['fa:16:3e:09:d3:b9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '53e406eb-6457-4cc1-ae9a-a525e21631c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae9ebc2-f854-4add-b86c-a5209381ad20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97d933ec6c34696b0483a895f47feef', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5e2db8d-c3a5-46be-bb72-92eb36b476fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14b206ab-a379-4d4a-9b80-58ba0ce20e17, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a8f37427-8a41-4ba9-b278-a64c13d982b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.658 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a8f37427-8a41-4ba9-b278-a64c13d982b3 in datapath aae9ebc2-f854-4add-b86c-a5209381ad20 bound to our chassis
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.660 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.669 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6abc41-bf03-41e7-851c-14e2660568de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.671 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaae9ebc2-f1 in ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.673 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaae9ebc2-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.673 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[02ab119c-b208-4ecd-a29e-9baabf1e23e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.675 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5541ed88-5004-4f00-bc33-419842915571]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.687 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ee8463-e47c-449f-8c75-331186f96a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 sudo[321723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.703 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e6a9435c-cb2c-4782-a342-a50ad109f0c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 sudo[321723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:02 compute-0 sudo[321723]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.738 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[34258cd1-5d3d-421a-94d8-ca09934fd83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 NetworkManager[49108]: <info>  [1769846882.7475] manager: (tapaae9ebc2-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.745 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1070bcf8-192f-480b-9d09-babdac7c85d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 sudo[321752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:02 compute-0 sudo[321752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.777 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1c3ac7-9e29-4467-9a20-39f7cdbb6f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 sudo[321752]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.780 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9651ca87-c88c-4a48-b6d2-3b2b001d10e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 NetworkManager[49108]: <info>  [1769846882.8037] device (tapaae9ebc2-f0): carrier: link connected
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.808 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[aa073513-35ef-4cf3-b4c8-4ab881421aa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.830 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e9b600-49ad-41a5-8adb-abe593896ece]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae9ebc2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:4b:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712542, 'reachable_time': 22473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321830, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.848 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ed81caf0-eada-4053-890d-a5845c99f498]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:4b98'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 712542, 'tstamp': 712542}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321834, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.868 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d2874e69-22a1-42ad-8eec-ef9340f7ccf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae9ebc2-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:4b:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712542, 'reachable_time': 22473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321836, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1609935540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:02 compute-0 ceph-mon[74496]: pgmap v2138: 305 pgs: 305 active+clean; 364 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 14 MiB/s wr, 386 op/s
Jan 31 08:08:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4021844921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.899 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6404fee8-96bb-4880-b1fe-16728cc7e8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.955 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846882.9542243, 53e406eb-6457-4cc1-ae9a-a525e21631c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.955 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] VM Started (Lifecycle Event)
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.971 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b0540aba-bd03-4d0a-893e-4d7ea20bd923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.973 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae9ebc2-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.973 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.974 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaae9ebc2-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.976 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-0 NetworkManager[49108]: <info>  [1769846882.9766] manager: (tapaae9ebc2-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Jan 31 08:08:02 compute-0 kernel: tapaae9ebc2-f0: entered promiscuous mode
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.979 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.981 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaae9ebc2-f0, col_values=(('external_ids', {'iface-id': '18dd3685-0abe-42cf-9017-dc52c6cb4266'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.982 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-0 ovn_controller[149457]: 2026-01-31T08:08:02Z|00451|binding|INFO|Releasing lport 18dd3685-0abe-42cf-9017-dc52c6cb4266 from this chassis (sb_readonly=0)
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.983 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.983 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.984 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[277c326d-2fea-4abc-a35c-b5de9415b540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.985 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/aae9ebc2-f854-4add-b86c-a5209381ad20.pid.haproxy
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID aae9ebc2-f854-4add-b86c-a5209381ad20
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:08:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:02.985 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'env', 'PROCESS_TAG=haproxy-aae9ebc2-f854-4add-b86c-a5209381ad20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aae9ebc2-f854-4add-b86c-a5209381ad20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:08:02 compute-0 nova_compute[247704]: 2026-01-31 08:08:02.990 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.020 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.024 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846882.9557838, 53e406eb-6457-4cc1-ae9a-a525e21631c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.025 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] VM Paused (Lifecycle Event)
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.063 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.067 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.103 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.296 247708 DEBUG nova.compute.manager [req-f84600e4-f0a7-4553-aa8e-6c8351915e1f req-bf5a1b1c-7ef2-4b4b-a887-a15aa2023fc7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received event network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.298 247708 DEBUG oslo_concurrency.lockutils [req-f84600e4-f0a7-4553-aa8e-6c8351915e1f req-bf5a1b1c-7ef2-4b4b-a887-a15aa2023fc7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.298 247708 DEBUG oslo_concurrency.lockutils [req-f84600e4-f0a7-4553-aa8e-6c8351915e1f req-bf5a1b1c-7ef2-4b4b-a887-a15aa2023fc7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.298 247708 DEBUG oslo_concurrency.lockutils [req-f84600e4-f0a7-4553-aa8e-6c8351915e1f req-bf5a1b1c-7ef2-4b4b-a887-a15aa2023fc7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.299 247708 DEBUG nova.compute.manager [req-f84600e4-f0a7-4553-aa8e-6c8351915e1f req-bf5a1b1c-7ef2-4b4b-a887-a15aa2023fc7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Processing event network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.299 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.302 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846883.302451, 53e406eb-6457-4cc1-ae9a-a525e21631c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.303 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] VM Resumed (Lifecycle Event)
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.304 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.307 247708 INFO nova.virt.libvirt.driver [-] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Instance spawned successfully.
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.307 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:08:03 compute-0 podman[321873]: 2026-01-31 08:08:03.373578442 +0000 UTC m=+0.058393348 container create e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:08:03 compute-0 systemd[1]: Started libpod-conmon-e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155.scope.
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.418 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.424 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.425 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.425 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.426 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.426 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.426 247708 DEBUG nova.virt.libvirt.driver [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745a9c3bb00ba4681a75bd19a6271a617754c4b4b4c014ca77423fadb9fb02d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.431 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:08:03 compute-0 podman[321873]: 2026-01-31 08:08:03.344307686 +0000 UTC m=+0.029122672 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:08:03 compute-0 podman[321873]: 2026-01-31 08:08:03.443929653 +0000 UTC m=+0.128744589 container init e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:08:03 compute-0 podman[321873]: 2026-01-31 08:08:03.448266609 +0000 UTC m=+0.133081555 container start e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 08:08:03 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[321888]: [NOTICE]   (321893) : New worker (321895) forked
Jan 31 08:08:03 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[321888]: [NOTICE]   (321893) : Loading success.
Jan 31 08:08:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 372 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 337 op/s
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.614 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.692 247708 INFO nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Took 9.64 seconds to spawn the instance on the hypervisor.
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.693 247708 DEBUG nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.865 247708 INFO nova.compute.manager [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Took 11.28 seconds to build instance.
Jan 31 08:08:03 compute-0 ceph-mon[74496]: pgmap v2139: 305 pgs: 305 active+clean; 372 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 12 MiB/s wr, 337 op/s
Jan 31 08:08:03 compute-0 nova_compute[247704]: 2026-01-31 08:08:03.922 247708 DEBUG oslo_concurrency.lockutils [None req-19ad48e4-7f9f-4c57-a493-e1fb82e2598b 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:03.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:08:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:08:04 compute-0 nova_compute[247704]: 2026-01-31 08:08:04.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 31 08:08:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 31 08:08:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 31 08:08:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 372 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 8.2 MiB/s wr, 338 op/s
Jan 31 08:08:05 compute-0 nova_compute[247704]: 2026-01-31 08:08:05.607 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:05 compute-0 nova_compute[247704]: 2026-01-31 08:08:05.615 247708 DEBUG nova.compute.manager [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received event network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:05 compute-0 nova_compute[247704]: 2026-01-31 08:08:05.616 247708 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:05 compute-0 nova_compute[247704]: 2026-01-31 08:08:05.616 247708 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:05 compute-0 nova_compute[247704]: 2026-01-31 08:08:05.616 247708 DEBUG oslo_concurrency.lockutils [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:05 compute-0 nova_compute[247704]: 2026-01-31 08:08:05.616 247708 DEBUG nova.compute.manager [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] No waiting events found dispatching network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:05 compute-0 nova_compute[247704]: 2026-01-31 08:08:05.617 247708 WARNING nova.compute.manager [req-b91fa46c-e347-4be0-a12c-692a3a775faa req-aa5dcbfc-0d2e-464f-88b3-73b31cd2006f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received unexpected event network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 for instance with vm_state active and task_state None.
Jan 31 08:08:05 compute-0 ceph-mon[74496]: osdmap e266: 3 total, 3 up, 3 in
Jan 31 08:08:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:05.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:08:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:05.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:08:06 compute-0 ceph-mon[74496]: pgmap v2141: 305 pgs: 305 active+clean; 372 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 8.2 MiB/s wr, 338 op/s
Jan 31 08:08:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 372 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.6 MiB/s wr, 195 op/s
Jan 31 08:08:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 31 08:08:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 31 08:08:07 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 31 08:08:07 compute-0 ceph-mon[74496]: pgmap v2142: 305 pgs: 305 active+clean; 372 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.6 MiB/s wr, 195 op/s
Jan 31 08:08:07 compute-0 ceph-mon[74496]: osdmap e267: 3 total, 3 up, 3 in
Jan 31 08:08:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:07.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:07.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 31 08:08:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 31 08:08:08 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.308 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.508 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "53e406eb-6457-4cc1-ae9a-a525e21631c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.508 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.509 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.510 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.510 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.512 247708 INFO nova.compute.manager [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Terminating instance
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.514 247708 DEBUG nova.compute.manager [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:08:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 389 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 2.1 MiB/s wr, 352 op/s
Jan 31 08:08:09 compute-0 kernel: tapa8f37427-8a (unregistering): left promiscuous mode
Jan 31 08:08:09 compute-0 NetworkManager[49108]: <info>  [1769846889.6073] device (tapa8f37427-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.607 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 ovn_controller[149457]: 2026-01-31T08:08:09Z|00452|binding|INFO|Releasing lport a8f37427-8a41-4ba9-b278-a64c13d982b3 from this chassis (sb_readonly=0)
Jan 31 08:08:09 compute-0 ovn_controller[149457]: 2026-01-31T08:08:09Z|00453|binding|INFO|Setting lport a8f37427-8a41-4ba9-b278-a64c13d982b3 down in Southbound
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 ovn_controller[149457]: 2026-01-31T08:08:09Z|00454|binding|INFO|Removing iface tapa8f37427-8a ovn-installed in OVS
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.616 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.622 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000070.scope: Deactivated successfully.
Jan 31 08:08:09 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000070.scope: Consumed 6.887s CPU time.
Jan 31 08:08:09 compute-0 systemd-machined[214448]: Machine qemu-45-instance-00000070 terminated.
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.655 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:d3:b9 10.100.0.10'], port_security=['fa:16:3e:09:d3:b9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '53e406eb-6457-4cc1-ae9a-a525e21631c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae9ebc2-f854-4add-b86c-a5209381ad20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b97d933ec6c34696b0483a895f47feef', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5e2db8d-c3a5-46be-bb72-92eb36b476fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14b206ab-a379-4d4a-9b80-58ba0ce20e17, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a8f37427-8a41-4ba9-b278-a64c13d982b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.656 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a8f37427-8a41-4ba9-b278-a64c13d982b3 in datapath aae9ebc2-f854-4add-b86c-a5209381ad20 unbound from our chassis
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.658 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aae9ebc2-f854-4add-b86c-a5209381ad20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.658 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[419dbc79-4202-4cfd-9518-f62b57c9a275]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.659 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 namespace which is not needed anymore
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.746 247708 INFO nova.virt.libvirt.driver [-] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Instance destroyed successfully.
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.746 247708 DEBUG nova.objects.instance [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lazy-loading 'resources' on Instance uuid 53e406eb-6457-4cc1-ae9a-a525e21631c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:08:09 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[321888]: [NOTICE]   (321893) : haproxy version is 2.8.14-c23fe91
Jan 31 08:08:09 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[321888]: [NOTICE]   (321893) : path to executable is /usr/sbin/haproxy
Jan 31 08:08:09 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[321888]: [WARNING]  (321893) : Exiting Master process...
Jan 31 08:08:09 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[321888]: [WARNING]  (321893) : Exiting Master process...
Jan 31 08:08:09 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[321888]: [ALERT]    (321893) : Current worker (321895) exited with code 143 (Terminated)
Jan 31 08:08:09 compute-0 neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20[321888]: [WARNING]  (321893) : All workers exited. Exiting... (0)
Jan 31 08:08:09 compute-0 systemd[1]: libpod-e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155.scope: Deactivated successfully.
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.767 247708 DEBUG nova.virt.libvirt.vif [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-882171554',display_name='tempest-MultipleCreateTestJSON-server-882171554-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-882171554-2',id=112,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-31T08:08:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b97d933ec6c34696b0483a895f47feef',ramdisk_id='',reservation_id='r-z1qzyfi5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-744612571',owner_user_name='tempest-MultipleCreateTestJSON-744612571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:08:03Z,user_data=None,user_id='5ba00f420cd940ff802c16e8c25c35c4',uuid=53e406eb-6457-4cc1-ae9a-a525e21631c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.768 247708 DEBUG nova.network.os_vif_util [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converting VIF {"id": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "address": "fa:16:3e:09:d3:b9", "network": {"id": "aae9ebc2-f854-4add-b86c-a5209381ad20", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-667077248-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b97d933ec6c34696b0483a895f47feef", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8f37427-8a", "ovs_interfaceid": "a8f37427-8a41-4ba9-b278-a64c13d982b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.769 247708 DEBUG nova.network.os_vif_util [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:d3:b9,bridge_name='br-int',has_traffic_filtering=True,id=a8f37427-8a41-4ba9-b278-a64c13d982b3,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8f37427-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:09 compute-0 podman[321931]: 2026-01-31 08:08:09.769870028 +0000 UTC m=+0.041592278 container died e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.769 247708 DEBUG os_vif [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:d3:b9,bridge_name='br-int',has_traffic_filtering=True,id=a8f37427-8a41-4ba9-b278-a64c13d982b3,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8f37427-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.771 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.772 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8f37427-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.774 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.776 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.779 247708 INFO os_vif [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:d3:b9,bridge_name='br-int',has_traffic_filtering=True,id=a8f37427-8a41-4ba9-b278-a64c13d982b3,network=Network(aae9ebc2-f854-4add-b86c-a5209381ad20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8f37427-8a')
Jan 31 08:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155-userdata-shm.mount: Deactivated successfully.
Jan 31 08:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-745a9c3bb00ba4681a75bd19a6271a617754c4b4b4c014ca77423fadb9fb02d5-merged.mount: Deactivated successfully.
Jan 31 08:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:09 compute-0 podman[321931]: 2026-01-31 08:08:09.810402789 +0000 UTC m=+0.082125029 container cleanup e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:09 compute-0 systemd[1]: libpod-conmon-e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155.scope: Deactivated successfully.
Jan 31 08:08:09 compute-0 podman[321986]: 2026-01-31 08:08:09.863929748 +0000 UTC m=+0.038435091 container remove e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.867 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[14d7b5b4-acc7-4eee-8741-006672dc794a]: (4, ('Sat Jan 31 08:08:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 (e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155)\ne1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155\nSat Jan 31 08:08:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 (e1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155)\ne1b449a9ca363d22f0336349ab58d70978139c4cd5ccf563b79cc081313c3155\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.869 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e6755a2d-d4f3-4b64-b7bb-3f355da83f9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.869 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae9ebc2-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.872 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 kernel: tapaae9ebc2-f0: left promiscuous mode
Jan 31 08:08:09 compute-0 nova_compute[247704]: 2026-01-31 08:08:09.878 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.880 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d06fb1f0-013b-48f2-81db-a1306d25ef72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.898 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8e23873c-57a5-43f4-908c-10f7fa862e2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.900 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ab22bfd2-664c-4a96-97e2-c42ced9e76c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.910 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[740f0e4c-6bc4-4b99-b024-03d148e40e6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 712534, 'reachable_time': 17647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322004, 'error': None, 'target': 'ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.913 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aae9ebc2-f854-4add-b86c-a5209381ad20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:08:09 compute-0 systemd[1]: run-netns-ovnmeta\x2daae9ebc2\x2df854\x2d4add\x2db86c\x2da5209381ad20.mount: Deactivated successfully.
Jan 31 08:08:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:09.913 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[164ea57f-9799-4307-b501-49757f880970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 31 08:08:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 31 08:08:09 compute-0 ceph-mon[74496]: osdmap e268: 3 total, 3 up, 3 in
Jan 31 08:08:09 compute-0 ceph-mon[74496]: pgmap v2145: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 389 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 2.1 MiB/s wr, 352 op/s
Jan 31 08:08:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 31 08:08:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:09.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:10.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:10 compute-0 nova_compute[247704]: 2026-01-31 08:08:10.320 247708 INFO nova.virt.libvirt.driver [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Deleting instance files /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2_del
Jan 31 08:08:10 compute-0 nova_compute[247704]: 2026-01-31 08:08:10.321 247708 INFO nova.virt.libvirt.driver [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Deletion of /var/lib/nova/instances/53e406eb-6457-4cc1-ae9a-a525e21631c2_del complete
Jan 31 08:08:10 compute-0 nova_compute[247704]: 2026-01-31 08:08:10.560 247708 INFO nova.compute.manager [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Took 1.05 seconds to destroy the instance on the hypervisor.
Jan 31 08:08:10 compute-0 nova_compute[247704]: 2026-01-31 08:08:10.561 247708 DEBUG oslo.service.loopingcall [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:08:10 compute-0 nova_compute[247704]: 2026-01-31 08:08:10.561 247708 DEBUG nova.compute.manager [-] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:08:10 compute-0 nova_compute[247704]: 2026-01-31 08:08:10.561 247708 DEBUG nova.network.neutron [-] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:08:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:11.179 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:11.179 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:11.179 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:11 compute-0 ceph-mon[74496]: osdmap e269: 3 total, 3 up, 3 in
Jan 31 08:08:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 396 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.4 MiB/s wr, 348 op/s
Jan 31 08:08:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:12.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.042 247708 DEBUG nova.compute.manager [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received event network-vif-unplugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.042 247708 DEBUG oslo_concurrency.lockutils [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.043 247708 DEBUG oslo_concurrency.lockutils [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.043 247708 DEBUG oslo_concurrency.lockutils [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.044 247708 DEBUG nova.compute.manager [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] No waiting events found dispatching network-vif-unplugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.044 247708 DEBUG nova.compute.manager [req-c559cad4-4abc-46ed-828f-9d3024c42ee9 req-aa694c5b-3eb8-4c41-91d9-aa9a5f8766d6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received event network-vif-unplugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:08:12 compute-0 ceph-mon[74496]: pgmap v2147: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 396 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.4 MiB/s wr, 348 op/s
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.510 247708 DEBUG nova.network.neutron [-] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.619 247708 INFO nova.compute.manager [-] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Took 2.06 seconds to deallocate network for instance.
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.653 247708 DEBUG nova.compute.manager [req-7185f890-112a-4e87-85c0-48a6bd506fa5 req-99487984-0184-4ca4-adbe-842742b2c0fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received event network-vif-deleted-a8f37427-8a41-4ba9-b278-a64c13d982b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.654 247708 INFO nova.compute.manager [req-7185f890-112a-4e87-85c0-48a6bd506fa5 req-99487984-0184-4ca4-adbe-842742b2c0fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Neutron deleted interface a8f37427-8a41-4ba9-b278-a64c13d982b3; detaching it from the instance and deleting it from the info cache
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.654 247708 DEBUG nova.network.neutron [req-7185f890-112a-4e87-85c0-48a6bd506fa5 req-99487984-0184-4ca4-adbe-842742b2c0fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.765 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.766 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.771 247708 DEBUG nova.compute.manager [req-7185f890-112a-4e87-85c0-48a6bd506fa5 req-99487984-0184-4ca4-adbe-842742b2c0fe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Detach interface failed, port_id=a8f37427-8a41-4ba9-b278-a64c13d982b3, reason: Instance 53e406eb-6457-4cc1-ae9a-a525e21631c2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:08:12 compute-0 nova_compute[247704]: 2026-01-31 08:08:12.834 247708 DEBUG oslo_concurrency.processutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4167090398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:13 compute-0 nova_compute[247704]: 2026-01-31 08:08:13.284 247708 DEBUG oslo_concurrency.processutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:13 compute-0 nova_compute[247704]: 2026-01-31 08:08:13.293 247708 DEBUG nova.compute.provider_tree [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:08:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4039298737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4167090398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:13 compute-0 nova_compute[247704]: 2026-01-31 08:08:13.446 247708 DEBUG nova.scheduler.client.report [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:08:13 compute-0 nova_compute[247704]: 2026-01-31 08:08:13.545 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 411 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 378 op/s
Jan 31 08:08:13 compute-0 nova_compute[247704]: 2026-01-31 08:08:13.594 247708 INFO nova.scheduler.client.report [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Deleted allocations for instance 53e406eb-6457-4cc1-ae9a-a525e21631c2
Jan 31 08:08:13 compute-0 nova_compute[247704]: 2026-01-31 08:08:13.784 247708 DEBUG oslo_concurrency.lockutils [None req-8e4a6474-0eb5-458e-883a-dfcbb7a3830f 5ba00f420cd940ff802c16e8c25c35c4 b97d933ec6c34696b0483a895f47feef - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:13.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:14.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.308 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.327 247708 DEBUG nova.compute.manager [req-c9c03ce6-de3d-4d96-98f4-19ac37038aa0 req-0d594f13-0d9f-4172-b894-70c4e9d2b85f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received event network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.327 247708 DEBUG oslo_concurrency.lockutils [req-c9c03ce6-de3d-4d96-98f4-19ac37038aa0 req-0d594f13-0d9f-4172-b894-70c4e9d2b85f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.327 247708 DEBUG oslo_concurrency.lockutils [req-c9c03ce6-de3d-4d96-98f4-19ac37038aa0 req-0d594f13-0d9f-4172-b894-70c4e9d2b85f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.327 247708 DEBUG oslo_concurrency.lockutils [req-c9c03ce6-de3d-4d96-98f4-19ac37038aa0 req-0d594f13-0d9f-4172-b894-70c4e9d2b85f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53e406eb-6457-4cc1-ae9a-a525e21631c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.327 247708 DEBUG nova.compute.manager [req-c9c03ce6-de3d-4d96-98f4-19ac37038aa0 req-0d594f13-0d9f-4172-b894-70c4e9d2b85f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] No waiting events found dispatching network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.328 247708 WARNING nova.compute.manager [req-c9c03ce6-de3d-4d96-98f4-19ac37038aa0 req-0d594f13-0d9f-4172-b894-70c4e9d2b85f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Received unexpected event network-vif-plugged-a8f37427-8a41-4ba9-b278-a64c13d982b3 for instance with vm_state deleted and task_state None.
Jan 31 08:08:14 compute-0 ceph-mon[74496]: pgmap v2148: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 411 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 378 op/s
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.528 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.529 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.774 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:14 compute-0 nova_compute[247704]: 2026-01-31 08:08:14.782 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:08:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:14 compute-0 podman[322030]: 2026-01-31 08:08:14.913418631 +0000 UTC m=+0.084472056 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:08:15 compute-0 nova_compute[247704]: 2026-01-31 08:08:15.017 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:15 compute-0 nova_compute[247704]: 2026-01-31 08:08:15.018 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:15 compute-0 nova_compute[247704]: 2026-01-31 08:08:15.026 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:08:15 compute-0 nova_compute[247704]: 2026-01-31 08:08:15.026 247708 INFO nova.compute.claims [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:08:15 compute-0 nova_compute[247704]: 2026-01-31 08:08:15.439 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 297 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.1 MiB/s wr, 346 op/s
Jan 31 08:08:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589736456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:15 compute-0 nova_compute[247704]: 2026-01-31 08:08:15.890 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:15 compute-0 nova_compute[247704]: 2026-01-31 08:08:15.897 247708 DEBUG nova.compute.provider_tree [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:08:15 compute-0 nova_compute[247704]: 2026-01-31 08:08:15.980 247708 DEBUG nova.scheduler.client.report [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:08:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:15.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:08:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:16.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.037 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.038 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.190 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.190 247708 DEBUG nova.network.neutron [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.309 247708 INFO nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.360 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.538 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.540 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.540 247708 INFO nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Creating image(s)
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.578 247708 DEBUG nova.storage.rbd_utils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] rbd image 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.609 247708 DEBUG nova.storage.rbd_utils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] rbd image 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.649 247708 DEBUG nova.storage.rbd_utils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] rbd image 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.653 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.675 247708 DEBUG nova.policy [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '41e4bf44a95742c68d5709b7cd31d18b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '861b60b5b65c4460a2d622f9ae973d86', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:08:16 compute-0 ceph-mon[74496]: pgmap v2149: 305 pgs: 305 active+clean; 297 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.1 MiB/s wr, 346 op/s
Jan 31 08:08:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/589736456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 31 08:08:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.721 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.722 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.722 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.723 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.748 247708 DEBUG nova.storage.rbd_utils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] rbd image 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:16 compute-0 nova_compute[247704]: 2026-01-31 08:08:16.753 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:17 compute-0 nova_compute[247704]: 2026-01-31 08:08:17.024 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.271s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:17 compute-0 nova_compute[247704]: 2026-01-31 08:08:17.107 247708 DEBUG nova.storage.rbd_utils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] resizing rbd image 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:08:17 compute-0 nova_compute[247704]: 2026-01-31 08:08:17.317 247708 DEBUG nova.objects.instance [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lazy-loading 'migration_context' on Instance uuid 4daa5779-6af5-48ab-bb17-1fbc0d9040c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:08:17 compute-0 nova_compute[247704]: 2026-01-31 08:08:17.358 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:08:17 compute-0 nova_compute[247704]: 2026-01-31 08:08:17.358 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Ensure instance console log exists: /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:08:17 compute-0 nova_compute[247704]: 2026-01-31 08:08:17.358 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:17 compute-0 nova_compute[247704]: 2026-01-31 08:08:17.359 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:17 compute-0 nova_compute[247704]: 2026-01-31 08:08:17.359 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 320 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.8 MiB/s wr, 246 op/s
Jan 31 08:08:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 31 08:08:17 compute-0 ceph-mon[74496]: osdmap e270: 3 total, 3 up, 3 in
Jan 31 08:08:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3096185768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 31 08:08:17 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 31 08:08:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:18.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:18.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:18 compute-0 nova_compute[247704]: 2026-01-31 08:08:18.256 247708 DEBUG nova.network.neutron [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Successfully created port: fe3618d7-d627-4b17-9860-0f219ded52bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:08:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 31 08:08:18 compute-0 ceph-mon[74496]: pgmap v2151: 305 pgs: 305 active+clean; 320 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.8 MiB/s wr, 246 op/s
Jan 31 08:08:18 compute-0 ceph-mon[74496]: osdmap e271: 3 total, 3 up, 3 in
Jan 31 08:08:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 31 08:08:18 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 31 08:08:19 compute-0 nova_compute[247704]: 2026-01-31 08:08:19.310 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 357 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 7.5 MiB/s wr, 257 op/s
Jan 31 08:08:19 compute-0 ceph-mon[74496]: osdmap e272: 3 total, 3 up, 3 in
Jan 31 08:08:19 compute-0 nova_compute[247704]: 2026-01-31 08:08:19.776 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 31 08:08:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 31 08:08:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 31 08:08:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:20.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:20.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:08:20
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'vms', '.rgw.root', 'backups', 'default.rgw.control']
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:08:20 compute-0 nova_compute[247704]: 2026-01-31 08:08:20.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:08:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:08:20 compute-0 nova_compute[247704]: 2026-01-31 08:08:20.717 247708 DEBUG nova.network.neutron [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Successfully updated port: fe3618d7-d627-4b17-9860-0f219ded52bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:08:20 compute-0 nova_compute[247704]: 2026-01-31 08:08:20.765 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "refresh_cache-4daa5779-6af5-48ab-bb17-1fbc0d9040c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:08:20 compute-0 nova_compute[247704]: 2026-01-31 08:08:20.765 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquired lock "refresh_cache-4daa5779-6af5-48ab-bb17-1fbc0d9040c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:08:20 compute-0 nova_compute[247704]: 2026-01-31 08:08:20.765 247708 DEBUG nova.network.neutron [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:08:20 compute-0 ceph-mon[74496]: pgmap v2154: 305 pgs: 305 active+clean; 357 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 7.5 MiB/s wr, 257 op/s
Jan 31 08:08:20 compute-0 ceph-mon[74496]: osdmap e273: 3 total, 3 up, 3 in
Jan 31 08:08:20 compute-0 nova_compute[247704]: 2026-01-31 08:08:20.838 247708 DEBUG nova.compute.manager [req-4a09574f-0789-44c0-a238-7759f106ece3 req-7e8c51b7-4194-4d27-8223-d7c4c4579c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received event network-changed-fe3618d7-d627-4b17-9860-0f219ded52bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:20 compute-0 nova_compute[247704]: 2026-01-31 08:08:20.838 247708 DEBUG nova.compute.manager [req-4a09574f-0789-44c0-a238-7759f106ece3 req-7e8c51b7-4194-4d27-8223-d7c4c4579c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Refreshing instance network info cache due to event network-changed-fe3618d7-d627-4b17-9860-0f219ded52bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:08:20 compute-0 nova_compute[247704]: 2026-01-31 08:08:20.838 247708 DEBUG oslo_concurrency.lockutils [req-4a09574f-0789-44c0-a238-7759f106ece3 req-7e8c51b7-4194-4d27-8223-d7c4c4579c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4daa5779-6af5-48ab-bb17-1fbc0d9040c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:08:21 compute-0 nova_compute[247704]: 2026-01-31 08:08:21.195 247708 DEBUG nova.network.neutron [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:08:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 14 MiB/s wr, 378 op/s
Jan 31 08:08:21 compute-0 nova_compute[247704]: 2026-01-31 08:08:21.890 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:22.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:22.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:22 compute-0 ceph-mon[74496]: pgmap v2156: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 14 MiB/s wr, 378 op/s
Jan 31 08:08:22 compute-0 sudo[322250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:22 compute-0 sudo[322250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:22 compute-0 sudo[322250]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:22 compute-0 sudo[322275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:22 compute-0 sudo[322275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:22 compute-0 sudo[322275]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 31 08:08:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 31 08:08:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 31 08:08:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 424 op/s
Jan 31 08:08:23 compute-0 nova_compute[247704]: 2026-01-31 08:08:23.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:08:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:24.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:08:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:24.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.070 247708 DEBUG nova.network.neutron [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Updating instance_info_cache with network_info: [{"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:24 compute-0 ceph-mon[74496]: osdmap e274: 3 total, 3 up, 3 in
Jan 31 08:08:24 compute-0 ceph-mon[74496]: pgmap v2158: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 424 op/s
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.248 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Releasing lock "refresh_cache-4daa5779-6af5-48ab-bb17-1fbc0d9040c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.249 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Instance network_info: |[{"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.249 247708 DEBUG oslo_concurrency.lockutils [req-4a09574f-0789-44c0-a238-7759f106ece3 req-7e8c51b7-4194-4d27-8223-d7c4c4579c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4daa5779-6af5-48ab-bb17-1fbc0d9040c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.249 247708 DEBUG nova.network.neutron [req-4a09574f-0789-44c0-a238-7759f106ece3 req-7e8c51b7-4194-4d27-8223-d7c4c4579c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Refreshing network info cache for port fe3618d7-d627-4b17-9860-0f219ded52bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.253 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Start _get_guest_xml network_info=[{"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.260 247708 WARNING nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.265 247708 DEBUG nova.virt.libvirt.host [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.266 247708 DEBUG nova.virt.libvirt.host [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.272 247708 DEBUG nova.virt.libvirt.host [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.273 247708 DEBUG nova.virt.libvirt.host [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.274 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.274 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.275 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.275 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.275 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.275 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.276 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.276 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.276 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.276 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.277 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.277 247708 DEBUG nova.virt.hardware [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.280 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.312 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:08:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701679411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.694 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.720 247708 DEBUG nova.storage.rbd_utils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] rbd image 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.724 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.745 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846889.7438436, 53e406eb-6457-4cc1-ae9a-a525e21631c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.745 247708 INFO nova.compute.manager [-] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] VM Stopped (Lifecycle Event)
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.778 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 31 08:08:24 compute-0 nova_compute[247704]: 2026-01-31 08:08:24.816 247708 DEBUG nova.compute.manager [None req-467d50e0-2d3f-48e4-adbd-24ae09364dd9 - - - - - -] [instance: 53e406eb-6457-4cc1-ae9a-a525e21631c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 31 08:08:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 31 08:08:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:08:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2592313796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.128 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.129 247708 DEBUG nova.virt.libvirt.vif [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-842598130',display_name='tempest-ServerMetadataNegativeTestJSON-server-842598130',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-842598130',id=113,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='861b60b5b65c4460a2d622f9ae973d86',ramdisk_id='',reservation_id='r-n7anozzp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-966082543',owner_user_name='tempest-ServerMetadataNegativeTestJSON-966082543-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:08:16Z,user_data=None,user_id='41e4bf44a95742c68d5709b7cd31d18b',uuid=4daa5779-6af5-48ab-bb17-1fbc0d9040c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.129 247708 DEBUG nova.network.os_vif_util [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Converting VIF {"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.130 247708 DEBUG nova.network.os_vif_util [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:67:58,bridge_name='br-int',has_traffic_filtering=True,id=fe3618d7-d627-4b17-9860-0f219ded52bf,network=Network(c451338f-2e79-4900-b14c-c179e6229528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe3618d7-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.131 247708 DEBUG nova.objects.instance [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4daa5779-6af5-48ab-bb17-1fbc0d9040c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.158 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <uuid>4daa5779-6af5-48ab-bb17-1fbc0d9040c7</uuid>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <name>instance-00000071</name>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-842598130</nova:name>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:08:24</nova:creationTime>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <nova:user uuid="41e4bf44a95742c68d5709b7cd31d18b">tempest-ServerMetadataNegativeTestJSON-966082543-project-member</nova:user>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <nova:project uuid="861b60b5b65c4460a2d622f9ae973d86">tempest-ServerMetadataNegativeTestJSON-966082543</nova:project>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <nova:port uuid="fe3618d7-d627-4b17-9860-0f219ded52bf">
Jan 31 08:08:25 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <system>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <entry name="serial">4daa5779-6af5-48ab-bb17-1fbc0d9040c7</entry>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <entry name="uuid">4daa5779-6af5-48ab-bb17-1fbc0d9040c7</entry>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </system>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <os>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   </os>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <features>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   </features>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk">
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       </source>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk.config">
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       </source>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:08:25 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:13:67:58"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <target dev="tapfe3618d7-d6"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7/console.log" append="off"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <video>
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </video>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:08:25 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:08:25 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:08:25 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:08:25 compute-0 nova_compute[247704]: </domain>
Jan 31 08:08:25 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.159 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Preparing to wait for external event network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.160 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.160 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.160 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.161 247708 DEBUG nova.virt.libvirt.vif [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-842598130',display_name='tempest-ServerMetadataNegativeTestJSON-server-842598130',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-842598130',id=113,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='861b60b5b65c4460a2d622f9ae973d86',ramdisk_id='',reservation_id='r-n7anozzp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-966082543',owner_user_name='tempest-ServerMetadataNegativeTestJSON-966082543-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:08:16Z,user_data=None,user_id='41e4bf44a95742c68d5709b7cd31d18b',uuid=4daa5779-6af5-48ab-bb17-1fbc0d9040c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.161 247708 DEBUG nova.network.os_vif_util [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Converting VIF {"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.162 247708 DEBUG nova.network.os_vif_util [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:67:58,bridge_name='br-int',has_traffic_filtering=True,id=fe3618d7-d627-4b17-9860-0f219ded52bf,network=Network(c451338f-2e79-4900-b14c-c179e6229528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe3618d7-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.162 247708 DEBUG os_vif [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:67:58,bridge_name='br-int',has_traffic_filtering=True,id=fe3618d7-d627-4b17-9860-0f219ded52bf,network=Network(c451338f-2e79-4900-b14c-c179e6229528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe3618d7-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.162 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.163 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.163 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.167 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.167 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe3618d7-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.167 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfe3618d7-d6, col_values=(('external_ids', {'iface-id': 'fe3618d7-d627-4b17-9860-0f219ded52bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:67:58', 'vm-uuid': '4daa5779-6af5-48ab-bb17-1fbc0d9040c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.169 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:25 compute-0 NetworkManager[49108]: <info>  [1769846905.1709] manager: (tapfe3618d7-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.172 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.177 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.178 247708 INFO os_vif [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:67:58,bridge_name='br-int',has_traffic_filtering=True,id=fe3618d7-d627-4b17-9860-0f219ded52bf,network=Network(c451338f-2e79-4900-b14c-c179e6229528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe3618d7-d6')
Jan 31 08:08:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/701679411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:25 compute-0 ceph-mon[74496]: osdmap e275: 3 total, 3 up, 3 in
Jan 31 08:08:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2592313796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.304 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.304 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.304 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] No VIF found with MAC fa:16:3e:13:67:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.305 247708 INFO nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Using config drive
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.330 247708 DEBUG nova.storage.rbd_utils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] rbd image 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 350 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 500 op/s
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.882 247708 INFO nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Creating config drive at /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7/disk.config
Jan 31 08:08:25 compute-0 nova_compute[247704]: 2026-01-31 08:08:25.889 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpypyf1td9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:08:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:26.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:08:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:26.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.020 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpypyf1td9" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.050 247708 DEBUG nova.storage.rbd_utils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] rbd image 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.054 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7/disk.config 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.251 247708 DEBUG oslo_concurrency.processutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7/disk.config 4daa5779-6af5-48ab-bb17-1fbc0d9040c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.252 247708 INFO nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Deleting local config drive /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7/disk.config because it was imported into RBD.
Jan 31 08:08:26 compute-0 ceph-mon[74496]: pgmap v2160: 305 pgs: 305 active+clean; 350 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 500 op/s
Jan 31 08:08:26 compute-0 kernel: tapfe3618d7-d6: entered promiscuous mode
Jan 31 08:08:26 compute-0 NetworkManager[49108]: <info>  [1769846906.2882] manager: (tapfe3618d7-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Jan 31 08:08:26 compute-0 ovn_controller[149457]: 2026-01-31T08:08:26Z|00455|binding|INFO|Claiming lport fe3618d7-d627-4b17-9860-0f219ded52bf for this chassis.
Jan 31 08:08:26 compute-0 ovn_controller[149457]: 2026-01-31T08:08:26Z|00456|binding|INFO|fe3618d7-d627-4b17-9860-0f219ded52bf: Claiming fa:16:3e:13:67:58 10.100.0.7
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.289 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.294 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.298 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:26 compute-0 systemd-udevd[322438]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:08:26 compute-0 systemd-machined[214448]: New machine qemu-46-instance-00000071.
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.317 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:67:58 10.100.0.7'], port_security=['fa:16:3e:13:67:58 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4daa5779-6af5-48ab-bb17-1fbc0d9040c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c451338f-2e79-4900-b14c-c179e6229528', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '861b60b5b65c4460a2d622f9ae973d86', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac34cda3-35e1-49fa-b49e-154956beb7de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b1ebad-c7d7-4365-83c5-85961d55938b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=fe3618d7-d627-4b17-9860-0f219ded52bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.318 160028 INFO neutron.agent.ovn.metadata.agent [-] Port fe3618d7-d627-4b17-9860-0f219ded52bf in datapath c451338f-2e79-4900-b14c-c179e6229528 bound to our chassis
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.320 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c451338f-2e79-4900-b14c-c179e6229528
Jan 31 08:08:26 compute-0 NetworkManager[49108]: <info>  [1769846906.3244] device (tapfe3618d7-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:08:26 compute-0 NetworkManager[49108]: <info>  [1769846906.3256] device (tapfe3618d7-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.327 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3c383a-7e81-4d8a-a343-6e258b76962e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.328 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc451338f-21 in ovnmeta-c451338f-2e79-4900-b14c-c179e6229528 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:08:26 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000071.
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.329 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc451338f-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.330 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[48300057-00f7-408f-857c-cd2856ee6a37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.330 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c26add65-101d-4a49-8065-8a11ce6f84a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.334 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:26 compute-0 ovn_controller[149457]: 2026-01-31T08:08:26Z|00457|binding|INFO|Setting lport fe3618d7-d627-4b17-9860-0f219ded52bf ovn-installed in OVS
Jan 31 08:08:26 compute-0 ovn_controller[149457]: 2026-01-31T08:08:26Z|00458|binding|INFO|Setting lport fe3618d7-d627-4b17-9860-0f219ded52bf up in Southbound
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.340 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.341 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[94dd40df-addb-4fe1-a081-0e7d9de9039e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.350 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[369ff6e4-221a-48c6-94eb-c043e22fe109]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.371 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a89dcfa2-38d6-4b3f-bf98-026c172bca09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.375 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d9780a-2e1e-4ee4-87d2-0631b8e61d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 NetworkManager[49108]: <info>  [1769846906.3775] manager: (tapc451338f-20): new Veth device (/org/freedesktop/NetworkManager/Devices/213)
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.397 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c65927d7-cbf2-4804-9afb-282548f213cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.400 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d661ad3d-ae18-4624-9859-fc47f3680b3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 NetworkManager[49108]: <info>  [1769846906.4130] device (tapc451338f-20): carrier: link connected
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.414 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[46e625e8-1cfc-4601-9e82-6393b9fca6d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.423 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[385f7f59-4e24-4f54-b0b8-52300e3e6bba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc451338f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:c0:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714903, 'reachable_time': 18161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322471, 'error': None, 'target': 'ovnmeta-c451338f-2e79-4900-b14c-c179e6229528', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.436 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0945358e-e03c-43d4-bded-f062deb85e72]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:c059'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714903, 'tstamp': 714903}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322472, 'error': None, 'target': 'ovnmeta-c451338f-2e79-4900-b14c-c179e6229528', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.448 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[856a2229-34d9-439c-a2a7-1be4b2dabd15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc451338f-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:c0:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714903, 'reachable_time': 18161, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322473, 'error': None, 'target': 'ovnmeta-c451338f-2e79-4900-b14c-c179e6229528', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.466 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[59cd3bc5-54d4-4053-aead-bb6d6d49c737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.497 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[682bcf19-6db9-4140-af1e-5f4c898e6ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.499 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc451338f-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.499 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.500 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc451338f-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:26 compute-0 kernel: tapc451338f-20: entered promiscuous mode
Jan 31 08:08:26 compute-0 NetworkManager[49108]: <info>  [1769846906.5025] manager: (tapc451338f-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.506 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.508 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc451338f-20, col_values=(('external_ids', {'iface-id': '7dd85e7c-b37b-4af9-8c03-38070f5565e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:26 compute-0 ovn_controller[149457]: 2026-01-31T08:08:26Z|00459|binding|INFO|Releasing lport 7dd85e7c-b37b-4af9-8c03-38070f5565e0 from this chassis (sb_readonly=0)
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.513 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c451338f-2e79-4900-b14c-c179e6229528.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c451338f-2e79-4900-b14c-c179e6229528.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.514 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.516 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.516 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[42c6ba31-31e2-41cb-b49b-f504ddcc0886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.518 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-c451338f-2e79-4900-b14c-c179e6229528
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/c451338f-2e79-4900-b14c-c179e6229528.pid.haproxy
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID c451338f-2e79-4900-b14c-c179e6229528
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:08:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:26.521 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c451338f-2e79-4900-b14c-c179e6229528', 'env', 'PROCESS_TAG=haproxy-c451338f-2e79-4900-b14c-c179e6229528', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c451338f-2e79-4900-b14c-c179e6229528.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.600 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.601 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.720 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846906.7202077, 4daa5779-6af5-48ab-bb17-1fbc0d9040c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.721 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] VM Started (Lifecycle Event)
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.762 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.767 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846906.7204418, 4daa5779-6af5-48ab-bb17-1fbc0d9040c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.768 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] VM Paused (Lifecycle Event)
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.807 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.811 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:08:26 compute-0 podman[322547]: 2026-01-31 08:08:26.870252927 +0000 UTC m=+0.051468980 container create db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.879 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.897 247708 DEBUG nova.network.neutron [req-4a09574f-0789-44c0-a238-7759f106ece3 req-7e8c51b7-4194-4d27-8223-d7c4c4579c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Updated VIF entry in instance network info cache for port fe3618d7-d627-4b17-9860-0f219ded52bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.898 247708 DEBUG nova.network.neutron [req-4a09574f-0789-44c0-a238-7759f106ece3 req-7e8c51b7-4194-4d27-8223-d7c4c4579c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Updating instance_info_cache with network_info: [{"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:26 compute-0 systemd[1]: Started libpod-conmon-db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459.scope.
Jan 31 08:08:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3c62a1c28ef3144e5fbc6531de470c421e712e44cc70f01a3c765fd5a10c17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:26 compute-0 podman[322547]: 2026-01-31 08:08:26.83850045 +0000 UTC m=+0.019716563 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:08:26 compute-0 podman[322547]: 2026-01-31 08:08:26.936399074 +0000 UTC m=+0.117615157 container init db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:08:26 compute-0 podman[322547]: 2026-01-31 08:08:26.940627568 +0000 UTC m=+0.121843631 container start db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:08:26 compute-0 neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528[322562]: [NOTICE]   (322566) : New worker (322568) forked
Jan 31 08:08:26 compute-0 neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528[322562]: [NOTICE]   (322566) : Loading success.
Jan 31 08:08:26 compute-0 nova_compute[247704]: 2026-01-31 08:08:26.959 247708 DEBUG oslo_concurrency.lockutils [req-4a09574f-0789-44c0-a238-7759f106ece3 req-7e8c51b7-4194-4d27-8223-d7c4c4579c9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4daa5779-6af5-48ab-bb17-1fbc0d9040c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:08:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 326 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 439 op/s
Jan 31 08:08:27 compute-0 nova_compute[247704]: 2026-01-31 08:08:27.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:27 compute-0 nova_compute[247704]: 2026-01-31 08:08:27.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:08:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:28.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:28.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.032 247708 DEBUG nova.compute.manager [req-1a7d2697-d103-438e-8bba-dab691e4b090 req-f939a7ab-9465-483a-9562-e5ce9e039f61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received event network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.033 247708 DEBUG oslo_concurrency.lockutils [req-1a7d2697-d103-438e-8bba-dab691e4b090 req-f939a7ab-9465-483a-9562-e5ce9e039f61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.033 247708 DEBUG oslo_concurrency.lockutils [req-1a7d2697-d103-438e-8bba-dab691e4b090 req-f939a7ab-9465-483a-9562-e5ce9e039f61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.033 247708 DEBUG oslo_concurrency.lockutils [req-1a7d2697-d103-438e-8bba-dab691e4b090 req-f939a7ab-9465-483a-9562-e5ce9e039f61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.034 247708 DEBUG nova.compute.manager [req-1a7d2697-d103-438e-8bba-dab691e4b090 req-f939a7ab-9465-483a-9562-e5ce9e039f61 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Processing event network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.034 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.038 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846908.0381172, 4daa5779-6af5-48ab-bb17-1fbc0d9040c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.038 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] VM Resumed (Lifecycle Event)
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.040 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.043 247708 INFO nova.virt.libvirt.driver [-] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Instance spawned successfully.
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.044 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.129 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.137 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.138 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.139 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.140 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.141 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.141 247708 DEBUG nova.virt.libvirt.driver [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.149 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.195 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.299 247708 INFO nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Took 11.76 seconds to spawn the instance on the hypervisor.
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.299 247708 DEBUG nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:28 compute-0 ceph-mon[74496]: pgmap v2161: 305 pgs: 305 active+clean; 326 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 439 op/s
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.692 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.693 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.693 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.694 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.694 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.840 247708 INFO nova.compute.manager [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Took 13.86 seconds to build instance.
Jan 31 08:08:28 compute-0 nova_compute[247704]: 2026-01-31 08:08:28.953 247708 DEBUG oslo_concurrency.lockutils [None req-b68ab0af-a219-4e3a-8241-30240a373018 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59320802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.176 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:29 compute-0 podman[322600]: 2026-01-31 08:08:29.291757938 +0000 UTC m=+0.069411698 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.308 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.308 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.324 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.479 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.480 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4268MB free_disk=20.921802520751953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.481 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.481 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 326 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 34 KiB/s wr, 364 op/s
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.613 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 4daa5779-6af5-48ab-bb17-1fbc0d9040c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.614 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.614 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 31 08:08:29 compute-0 nova_compute[247704]: 2026-01-31 08:08:29.660 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 31 08:08:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/59320802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2030909049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 31 08:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 31 08:08:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 31 08:08:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 31 08:08:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/477472252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.105 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.119 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.169 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.198 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.303 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.304 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.617 247708 DEBUG nova.compute.manager [req-8e91244b-f5cb-432b-82df-8d6aa370a2b2 req-56f9d2fa-a3e7-47f2-a8bb-9bd1d533d78a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received event network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.617 247708 DEBUG oslo_concurrency.lockutils [req-8e91244b-f5cb-432b-82df-8d6aa370a2b2 req-56f9d2fa-a3e7-47f2-a8bb-9bd1d533d78a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.618 247708 DEBUG oslo_concurrency.lockutils [req-8e91244b-f5cb-432b-82df-8d6aa370a2b2 req-56f9d2fa-a3e7-47f2-a8bb-9bd1d533d78a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.619 247708 DEBUG oslo_concurrency.lockutils [req-8e91244b-f5cb-432b-82df-8d6aa370a2b2 req-56f9d2fa-a3e7-47f2-a8bb-9bd1d533d78a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.619 247708 DEBUG nova.compute.manager [req-8e91244b-f5cb-432b-82df-8d6aa370a2b2 req-56f9d2fa-a3e7-47f2-a8bb-9bd1d533d78a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] No waiting events found dispatching network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:30 compute-0 nova_compute[247704]: 2026-01-31 08:08:30.619 247708 WARNING nova.compute.manager [req-8e91244b-f5cb-432b-82df-8d6aa370a2b2 req-56f9d2fa-a3e7-47f2-a8bb-9bd1d533d78a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received unexpected event network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf for instance with vm_state active and task_state None.
Jan 31 08:08:30 compute-0 ceph-mon[74496]: pgmap v2162: 305 pgs: 305 active+clean; 326 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 34 KiB/s wr, 364 op/s
Jan 31 08:08:30 compute-0 ceph-mon[74496]: osdmap e276: 3 total, 3 up, 3 in
Jan 31 08:08:30 compute-0 ceph-mon[74496]: osdmap e277: 3 total, 3 up, 3 in
Jan 31 08:08:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2460577806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/477472252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:31 compute-0 nova_compute[247704]: 2026-01-31 08:08:31.305 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:31 compute-0 nova_compute[247704]: 2026-01-31 08:08:31.305 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:31 compute-0 nova_compute[247704]: 2026-01-31 08:08:31.306 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 293 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 226 op/s
Jan 31 08:08:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 08:08:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:32 compute-0 ceph-mon[74496]: pgmap v2165: 305 pgs: 305 active+clean; 293 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 226 op/s
Jan 31 08:08:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 260 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 158 op/s
Jan 31 08:08:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 31 08:08:34 compute-0 ceph-mon[74496]: pgmap v2166: 305 pgs: 305 active+clean; 260 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 34 KiB/s wr, 158 op/s
Jan 31 08:08:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1859182276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:34.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:34.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:34 compute-0 nova_compute[247704]: 2026-01-31 08:08:34.326 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 31 08:08:34 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 31 08:08:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:35 compute-0 nova_compute[247704]: 2026-01-31 08:08:35.171 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031676030616043177 of space, bias 1.0, pg target 0.9502809184812954 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066920249395123 of space, bias 1.0, pg target 1.220076074818537 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:08:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.5 KiB/s wr, 167 op/s
Jan 31 08:08:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2428103131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:35 compute-0 ceph-mon[74496]: osdmap e278: 3 total, 3 up, 3 in
Jan 31 08:08:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:36.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:36 compute-0 nova_compute[247704]: 2026-01-31 08:08:36.434 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:36.434 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:36.435 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:08:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:36.436 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:37 compute-0 ceph-mon[74496]: pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.5 KiB/s wr, 167 op/s
Jan 31 08:08:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 KiB/s wr, 131 op/s
Jan 31 08:08:37 compute-0 nova_compute[247704]: 2026-01-31 08:08:37.905 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:37 compute-0 nova_compute[247704]: 2026-01-31 08:08:37.906 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:37 compute-0 nova_compute[247704]: 2026-01-31 08:08:37.907 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:37 compute-0 nova_compute[247704]: 2026-01-31 08:08:37.908 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:37 compute-0 nova_compute[247704]: 2026-01-31 08:08:37.908 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:37 compute-0 nova_compute[247704]: 2026-01-31 08:08:37.910 247708 INFO nova.compute.manager [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Terminating instance
Jan 31 08:08:37 compute-0 nova_compute[247704]: 2026-01-31 08:08:37.911 247708 DEBUG nova.compute.manager [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:08:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:38.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:38.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:38 compute-0 kernel: tapfe3618d7-d6 (unregistering): left promiscuous mode
Jan 31 08:08:38 compute-0 NetworkManager[49108]: <info>  [1769846918.1606] device (tapfe3618d7-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:08:38 compute-0 ovn_controller[149457]: 2026-01-31T08:08:38Z|00460|binding|INFO|Releasing lport fe3618d7-d627-4b17-9860-0f219ded52bf from this chassis (sb_readonly=0)
Jan 31 08:08:38 compute-0 ovn_controller[149457]: 2026-01-31T08:08:38Z|00461|binding|INFO|Setting lport fe3618d7-d627-4b17-9860-0f219ded52bf down in Southbound
Jan 31 08:08:38 compute-0 ovn_controller[149457]: 2026-01-31T08:08:38Z|00462|binding|INFO|Removing iface tapfe3618d7-d6 ovn-installed in OVS
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.172 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:38.179 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:67:58 10.100.0.7'], port_security=['fa:16:3e:13:67:58 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4daa5779-6af5-48ab-bb17-1fbc0d9040c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c451338f-2e79-4900-b14c-c179e6229528', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '861b60b5b65c4460a2d622f9ae973d86', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac34cda3-35e1-49fa-b49e-154956beb7de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91b1ebad-c7d7-4365-83c5-85961d55938b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=fe3618d7-d627-4b17-9860-0f219ded52bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:08:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:38.180 160028 INFO neutron.agent.ovn.metadata.agent [-] Port fe3618d7-d627-4b17-9860-0f219ded52bf in datapath c451338f-2e79-4900-b14c-c179e6229528 unbound from our chassis
Jan 31 08:08:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:38.183 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c451338f-2e79-4900-b14c-c179e6229528, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:08:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:38.184 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[30981313-073f-4aa3-aadb-59e26e21cf0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:38.185 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c451338f-2e79-4900-b14c-c179e6229528 namespace which is not needed anymore
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.188 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:38 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000071.scope: Deactivated successfully.
Jan 31 08:08:38 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000071.scope: Consumed 10.419s CPU time.
Jan 31 08:08:38 compute-0 systemd-machined[214448]: Machine qemu-46-instance-00000071 terminated.
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.344 247708 INFO nova.virt.libvirt.driver [-] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Instance destroyed successfully.
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.345 247708 DEBUG nova.objects.instance [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lazy-loading 'resources' on Instance uuid 4daa5779-6af5-48ab-bb17-1fbc0d9040c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.366 247708 DEBUG nova.virt.libvirt.vif [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:08:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-842598130',display_name='tempest-ServerMetadataNegativeTestJSON-server-842598130',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-842598130',id=113,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:08:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='861b60b5b65c4460a2d622f9ae973d86',ramdisk_id='',reservation_id='r-n7anozzp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-966082543',owner_user_name='tempest-ServerMetadataNegativeTestJSON-966082543-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:08:28Z,user_data=None,user_id='41e4bf44a95742c68d5709b7cd31d18b',uuid=4daa5779-6af5-48ab-bb17-1fbc0d9040c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.367 247708 DEBUG nova.network.os_vif_util [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Converting VIF {"id": "fe3618d7-d627-4b17-9860-0f219ded52bf", "address": "fa:16:3e:13:67:58", "network": {"id": "c451338f-2e79-4900-b14c-c179e6229528", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-1154549442-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "861b60b5b65c4460a2d622f9ae973d86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe3618d7-d6", "ovs_interfaceid": "fe3618d7-d627-4b17-9860-0f219ded52bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.367 247708 DEBUG nova.network.os_vif_util [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:67:58,bridge_name='br-int',has_traffic_filtering=True,id=fe3618d7-d627-4b17-9860-0f219ded52bf,network=Network(c451338f-2e79-4900-b14c-c179e6229528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe3618d7-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.368 247708 DEBUG os_vif [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:67:58,bridge_name='br-int',has_traffic_filtering=True,id=fe3618d7-d627-4b17-9860-0f219ded52bf,network=Network(c451338f-2e79-4900-b14c-c179e6229528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe3618d7-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.370 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe3618d7-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.422 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.423 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:38 compute-0 nova_compute[247704]: 2026-01-31 08:08:38.426 247708 INFO os_vif [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:67:58,bridge_name='br-int',has_traffic_filtering=True,id=fe3618d7-d627-4b17-9860-0f219ded52bf,network=Network(c451338f-2e79-4900-b14c-c179e6229528),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe3618d7-d6')
Jan 31 08:08:38 compute-0 neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528[322562]: [NOTICE]   (322566) : haproxy version is 2.8.14-c23fe91
Jan 31 08:08:38 compute-0 neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528[322562]: [NOTICE]   (322566) : path to executable is /usr/sbin/haproxy
Jan 31 08:08:38 compute-0 neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528[322562]: [WARNING]  (322566) : Exiting Master process...
Jan 31 08:08:38 compute-0 neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528[322562]: [ALERT]    (322566) : Current worker (322568) exited with code 143 (Terminated)
Jan 31 08:08:38 compute-0 neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528[322562]: [WARNING]  (322566) : All workers exited. Exiting... (0)
Jan 31 08:08:38 compute-0 systemd[1]: libpod-db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459.scope: Deactivated successfully.
Jan 31 08:08:38 compute-0 podman[322672]: 2026-01-31 08:08:38.471118096 +0000 UTC m=+0.203263982 container died db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:08:38 compute-0 ceph-mon[74496]: pgmap v2169: 305 pgs: 305 active+clean; 246 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 KiB/s wr, 131 op/s
Jan 31 08:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459-userdata-shm.mount: Deactivated successfully.
Jan 31 08:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d3c62a1c28ef3144e5fbc6531de470c421e712e44cc70f01a3c765fd5a10c17-merged.mount: Deactivated successfully.
Jan 31 08:08:39 compute-0 podman[322672]: 2026-01-31 08:08:39.029204652 +0000 UTC m=+0.761350518 container cleanup db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.130 247708 DEBUG nova.compute.manager [req-1f2fcf1a-3a30-43c6-9cbd-b0961c5b63b0 req-4b5f577a-b62c-43a7-9077-39aa66f00cde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received event network-vif-unplugged-fe3618d7-d627-4b17-9860-0f219ded52bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.130 247708 DEBUG oslo_concurrency.lockutils [req-1f2fcf1a-3a30-43c6-9cbd-b0961c5b63b0 req-4b5f577a-b62c-43a7-9077-39aa66f00cde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.130 247708 DEBUG oslo_concurrency.lockutils [req-1f2fcf1a-3a30-43c6-9cbd-b0961c5b63b0 req-4b5f577a-b62c-43a7-9077-39aa66f00cde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.131 247708 DEBUG oslo_concurrency.lockutils [req-1f2fcf1a-3a30-43c6-9cbd-b0961c5b63b0 req-4b5f577a-b62c-43a7-9077-39aa66f00cde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.131 247708 DEBUG nova.compute.manager [req-1f2fcf1a-3a30-43c6-9cbd-b0961c5b63b0 req-4b5f577a-b62c-43a7-9077-39aa66f00cde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] No waiting events found dispatching network-vif-unplugged-fe3618d7-d627-4b17-9860-0f219ded52bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.131 247708 DEBUG nova.compute.manager [req-1f2fcf1a-3a30-43c6-9cbd-b0961c5b63b0 req-4b5f577a-b62c-43a7-9077-39aa66f00cde 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received event network-vif-unplugged-fe3618d7-d627-4b17-9860-0f219ded52bf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:08:39 compute-0 podman[322730]: 2026-01-31 08:08:39.304680509 +0000 UTC m=+0.249143414 container remove db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.308 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[66360095-6e0f-4614-90b7-67f7c4c96b7b]: (4, ('Sat Jan 31 08:08:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528 (db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459)\ndb0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459\nSat Jan 31 08:08:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c451338f-2e79-4900-b14c-c179e6229528 (db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459)\ndb0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.310 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0e4856-e9c2-41bb-93bf-e60ac6855fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.311 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc451338f-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.313 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:39 compute-0 kernel: tapc451338f-20: left promiscuous mode
Jan 31 08:08:39 compute-0 systemd[1]: libpod-conmon-db0531834bb41388f1a74e5728d31d46f2416cc02af36e2845b5f5c85c86d459.scope: Deactivated successfully.
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.321 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.323 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a27a55b6-00e6-4016-b8a1-2d6e93f96604]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:39 compute-0 nova_compute[247704]: 2026-01-31 08:08:39.329 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.335 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[44d9ff66-6589-4c6c-accf-cf7df1cf0bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.337 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c5938e4e-8d1a-4406-ab57-3c16c099e807]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.349 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8984e5cb-3327-4cc6-b1b1-574fa1df63e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714898, 'reachable_time': 20635, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322744, 'error': None, 'target': 'ovnmeta-c451338f-2e79-4900-b14c-c179e6229528', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.350 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c451338f-2e79-4900-b14c-c179e6229528 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:08:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:08:39.351 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[65d356d8-a5cf-421c-bb79-c9b377bd0562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:08:39 compute-0 systemd[1]: run-netns-ovnmeta\x2dc451338f\x2d2e79\x2d4900\x2db14c\x2dc179e6229528.mount: Deactivated successfully.
Jan 31 08:08:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 230 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 KiB/s wr, 113 op/s
Jan 31 08:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 31 08:08:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:40.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:40.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 31 08:08:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 31 08:08:40 compute-0 nova_compute[247704]: 2026-01-31 08:08:40.674 247708 INFO nova.virt.libvirt.driver [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Deleting instance files /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7_del
Jan 31 08:08:40 compute-0 nova_compute[247704]: 2026-01-31 08:08:40.676 247708 INFO nova.virt.libvirt.driver [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Deletion of /var/lib/nova/instances/4daa5779-6af5-48ab-bb17-1fbc0d9040c7_del complete
Jan 31 08:08:40 compute-0 nova_compute[247704]: 2026-01-31 08:08:40.851 247708 INFO nova.compute.manager [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Took 2.94 seconds to destroy the instance on the hypervisor.
Jan 31 08:08:40 compute-0 nova_compute[247704]: 2026-01-31 08:08:40.852 247708 DEBUG oslo.service.loopingcall [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:08:40 compute-0 nova_compute[247704]: 2026-01-31 08:08:40.852 247708 DEBUG nova.compute.manager [-] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:08:40 compute-0 nova_compute[247704]: 2026-01-31 08:08:40.852 247708 DEBUG nova.network.neutron [-] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:08:40 compute-0 ceph-mon[74496]: pgmap v2170: 305 pgs: 305 active+clean; 230 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 KiB/s wr, 113 op/s
Jan 31 08:08:40 compute-0 ceph-mon[74496]: osdmap e279: 3 total, 3 up, 3 in
Jan 31 08:08:41 compute-0 nova_compute[247704]: 2026-01-31 08:08:41.271 247708 DEBUG nova.compute.manager [req-a5f9c4f9-3887-4e0b-9d43-b254ea2eda58 req-24a5a916-bb51-459a-baf8-ce4bfa829d1b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received event network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:41 compute-0 nova_compute[247704]: 2026-01-31 08:08:41.272 247708 DEBUG oslo_concurrency.lockutils [req-a5f9c4f9-3887-4e0b-9d43-b254ea2eda58 req-24a5a916-bb51-459a-baf8-ce4bfa829d1b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:41 compute-0 nova_compute[247704]: 2026-01-31 08:08:41.273 247708 DEBUG oslo_concurrency.lockutils [req-a5f9c4f9-3887-4e0b-9d43-b254ea2eda58 req-24a5a916-bb51-459a-baf8-ce4bfa829d1b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:41 compute-0 nova_compute[247704]: 2026-01-31 08:08:41.274 247708 DEBUG oslo_concurrency.lockutils [req-a5f9c4f9-3887-4e0b-9d43-b254ea2eda58 req-24a5a916-bb51-459a-baf8-ce4bfa829d1b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:41 compute-0 nova_compute[247704]: 2026-01-31 08:08:41.274 247708 DEBUG nova.compute.manager [req-a5f9c4f9-3887-4e0b-9d43-b254ea2eda58 req-24a5a916-bb51-459a-baf8-ce4bfa829d1b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] No waiting events found dispatching network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:08:41 compute-0 nova_compute[247704]: 2026-01-31 08:08:41.275 247708 WARNING nova.compute.manager [req-a5f9c4f9-3887-4e0b-9d43-b254ea2eda58 req-24a5a916-bb51-459a-baf8-ce4bfa829d1b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received unexpected event network-vif-plugged-fe3618d7-d627-4b17-9860-0f219ded52bf for instance with vm_state active and task_state deleting.
Jan 31 08:08:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 184 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 982 KiB/s rd, 2.2 KiB/s wr, 77 op/s
Jan 31 08:08:41 compute-0 nova_compute[247704]: 2026-01-31 08:08:41.961 247708 DEBUG nova.network.neutron [-] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:08:41 compute-0 nova_compute[247704]: 2026-01-31 08:08:41.984 247708 INFO nova.compute.manager [-] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Took 1.13 seconds to deallocate network for instance.
Jan 31 08:08:42 compute-0 ceph-mon[74496]: pgmap v2172: 305 pgs: 305 active+clean; 184 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 982 KiB/s rd, 2.2 KiB/s wr, 77 op/s
Jan 31 08:08:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:42.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.038 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.038 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:42.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.066 247708 DEBUG nova.compute.manager [req-704bb599-102d-4866-b0ed-080a42640ad1 req-9cd0e8a8-32cd-45f0-a98a-1054aa99a256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Received event network-vif-deleted-fe3618d7-d627-4b17-9860-0f219ded52bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.096 247708 DEBUG oslo_concurrency.processutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.557 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:08:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666301356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.601 247708 DEBUG oslo_concurrency.processutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.609 247708 DEBUG nova.compute.provider_tree [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.627 247708 DEBUG nova.scheduler.client.report [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.658 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.684 247708 INFO nova.scheduler.client.report [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Deleted allocations for instance 4daa5779-6af5-48ab-bb17-1fbc0d9040c7
Jan 31 08:08:42 compute-0 nova_compute[247704]: 2026-01-31 08:08:42.772 247708 DEBUG oslo_concurrency.lockutils [None req-e13ec00b-5aa8-4d4f-8f93-e22b5bfdcf83 41e4bf44a95742c68d5709b7cd31d18b 861b60b5b65c4460a2d622f9ae973d86 - - default default] Lock "4daa5779-6af5-48ab-bb17-1fbc0d9040c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:42 compute-0 sudo[322770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:42 compute-0 sudo[322770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:42 compute-0 sudo[322770]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:43 compute-0 sudo[322795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:43 compute-0 sudo[322795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:43 compute-0 sudo[322795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3666301356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:43 compute-0 nova_compute[247704]: 2026-01-31 08:08:43.422 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 144 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.9 KiB/s wr, 58 op/s
Jan 31 08:08:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:08:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:44.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:08:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:44.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:44 compute-0 ceph-mon[74496]: pgmap v2173: 305 pgs: 305 active+clean; 144 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.9 KiB/s wr, 58 op/s
Jan 31 08:08:44 compute-0 nova_compute[247704]: 2026-01-31 08:08:44.331 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/537857212' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:08:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/537857212' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:08:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 49 op/s
Jan 31 08:08:45 compute-0 podman[322822]: 2026-01-31 08:08:45.908737202 +0000 UTC m=+0.079434543 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:08:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:46.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:46 compute-0 ceph-mon[74496]: pgmap v2174: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 49 op/s
Jan 31 08:08:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 KiB/s wr, 46 op/s
Jan 31 08:08:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:08:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:48.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:08:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:48 compute-0 nova_compute[247704]: 2026-01-31 08:08:48.424 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:48 compute-0 ceph-mon[74496]: pgmap v2175: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 KiB/s wr, 46 op/s
Jan 31 08:08:49 compute-0 nova_compute[247704]: 2026-01-31 08:08:49.333 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 KiB/s wr, 39 op/s
Jan 31 08:08:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:50.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:50 compute-0 ceph-mon[74496]: pgmap v2176: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 KiB/s wr, 39 op/s
Jan 31 08:08:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:50.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 34 op/s
Jan 31 08:08:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:08:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:52.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:08:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:52.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:52 compute-0 ceph-mon[74496]: pgmap v2177: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 34 op/s
Jan 31 08:08:53 compute-0 nova_compute[247704]: 2026-01-31 08:08:53.344 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846918.3424468, 4daa5779-6af5-48ab-bb17-1fbc0d9040c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:08:53 compute-0 nova_compute[247704]: 2026-01-31 08:08:53.344 247708 INFO nova.compute.manager [-] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] VM Stopped (Lifecycle Event)
Jan 31 08:08:53 compute-0 nova_compute[247704]: 2026-01-31 08:08:53.419 247708 DEBUG nova.compute.manager [None req-262f220b-8fd8-4040-89e9-64e835986c53 - - - - - -] [instance: 4daa5779-6af5-48ab-bb17-1fbc0d9040c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:08:53 compute-0 nova_compute[247704]: 2026-01-31 08:08:53.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 852 B/s wr, 18 op/s
Jan 31 08:08:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:54.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:54.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:54 compute-0 nova_compute[247704]: 2026-01-31 08:08:54.333 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:54 compute-0 nova_compute[247704]: 2026-01-31 08:08:54.612 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:08:55 compute-0 nova_compute[247704]: 2026-01-31 08:08:55.010 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "53a5c321-1278-4df4-9fb0-feb465508681" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:55 compute-0 nova_compute[247704]: 2026-01-31 08:08:55.011 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:55 compute-0 ceph-mon[74496]: pgmap v2178: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 852 B/s wr, 18 op/s
Jan 31 08:08:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 08:08:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:56.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:56 compute-0 nova_compute[247704]: 2026-01-31 08:08:56.058 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:08:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:56.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:56 compute-0 nova_compute[247704]: 2026-01-31 08:08:56.585 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:56 compute-0 nova_compute[247704]: 2026-01-31 08:08:56.586 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:56 compute-0 nova_compute[247704]: 2026-01-31 08:08:56.598 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:08:56 compute-0 ceph-mon[74496]: pgmap v2179: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 08:08:56 compute-0 nova_compute[247704]: 2026-01-31 08:08:56.598 247708 INFO nova.compute.claims [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:08:56 compute-0 sudo[322854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:56 compute-0 sudo[322854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:56 compute-0 sudo[322854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:56 compute-0 sudo[322879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:56 compute-0 sudo[322879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:56 compute-0 sudo[322879]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:56 compute-0 sudo[322904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:56 compute-0 sudo[322904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:56 compute-0 sudo[322904]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:56 compute-0 nova_compute[247704]: 2026-01-31 08:08:56.897 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:56 compute-0 sudo[322929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:08:56 compute-0 sudo[322929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-0 sudo[322929]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:08:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:08:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:08:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:08:57 compute-0 sudo[322994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:57 compute-0 sudo[322994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-0 sudo[322994]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:08:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022534877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.376 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.381 247708 DEBUG nova.compute.provider_tree [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:08:57 compute-0 sudo[323019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:57 compute-0 sudo[323019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-0 sudo[323019]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-0 sudo[323046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:08:57 compute-0 sudo[323046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-0 sudo[323046]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-0 sudo[323072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:08:57 compute-0 sudo[323072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.528 247708 DEBUG nova.scheduler.client.report [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:08:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.720 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.722 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.771 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.772 247708 DEBUG nova.network.neutron [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.814 247708 INFO nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:08:57 compute-0 sudo[323072]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:57 compute-0 nova_compute[247704]: 2026-01-31 08:08:57.979 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:08:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:08:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:58.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:08:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:08:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:08:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:58.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.115 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.117 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.117 247708 INFO nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Creating image(s)
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.145 247708 DEBUG nova.storage.rbd_utils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 53a5c321-1278-4df4-9fb0-feb465508681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.175 247708 DEBUG nova.storage.rbd_utils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 53a5c321-1278-4df4-9fb0-feb465508681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.203 247708 DEBUG nova.storage.rbd_utils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 53a5c321-1278-4df4-9fb0-feb465508681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.207 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.231 247708 DEBUG nova.policy [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '18aee9d81d404f77ac81cde538f140d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:08:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:08:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:08:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4022534877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:08:58 compute-0 ceph-mon[74496]: pgmap v2180: 305 pgs: 305 active+clean; 121 MiB data, 871 MiB used, 20 GiB / 21 GiB avail
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.275 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.276 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.277 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.278 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.314 247708 DEBUG nova.storage.rbd_utils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 53a5c321-1278-4df4-9fb0-feb465508681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.317 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 53a5c321-1278-4df4-9fb0-feb465508681_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:08:58 compute-0 nova_compute[247704]: 2026-01-31 08:08:58.428 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:59 compute-0 nova_compute[247704]: 2026-01-31 08:08:59.335 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:08:59 compute-0 nova_compute[247704]: 2026-01-31 08:08:59.338 247708 DEBUG nova.network.neutron [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Successfully created port: 86453c5b-b126-4762-81e9-44c33b078bfc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:08:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 123 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 92 KiB/s wr, 3 op/s
Jan 31 08:08:59 compute-0 nova_compute[247704]: 2026-01-31 08:08:59.644 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 53a5c321-1278-4df4-9fb0-feb465508681_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:08:59 compute-0 nova_compute[247704]: 2026-01-31 08:08:59.712 247708 DEBUG nova.storage.rbd_utils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] resizing rbd image 53a5c321-1278-4df4-9fb0-feb465508681_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:08:59 compute-0 podman[323278]: 2026-01-31 08:08:59.920228197 +0000 UTC m=+0.090033813 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 31 08:08:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:00.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:09:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:00.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:09:00 compute-0 nova_compute[247704]: 2026-01-31 08:09:00.657 247708 DEBUG nova.network.neutron [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Successfully updated port: 86453c5b-b126-4762-81e9-44c33b078bfc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:09:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:09:00 compute-0 nova_compute[247704]: 2026-01-31 08:09:00.862 247708 DEBUG nova.objects.instance [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'migration_context' on Instance uuid 53a5c321-1278-4df4-9fb0-feb465508681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:09:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:00 compute-0 nova_compute[247704]: 2026-01-31 08:09:00.993 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:09:00 compute-0 nova_compute[247704]: 2026-01-31 08:09:00.994 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:09:00 compute-0 nova_compute[247704]: 2026-01-31 08:09:00.994 247708 DEBUG nova.network.neutron [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.072 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.073 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Ensure instance console log exists: /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.073 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.074 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.074 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.134 247708 DEBUG nova.compute.manager [req-f133b92d-1efc-422d-b340-7df698021107 req-a6cb1013-eab5-4b1c-a219-8a6a46dfa6ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received event network-changed-86453c5b-b126-4762-81e9-44c33b078bfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.134 247708 DEBUG nova.compute.manager [req-f133b92d-1efc-422d-b340-7df698021107 req-a6cb1013-eab5-4b1c-a219-8a6a46dfa6ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Refreshing instance network info cache due to event network-changed-86453c5b-b126-4762-81e9-44c33b078bfc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.135 247708 DEBUG oslo_concurrency.lockutils [req-f133b92d-1efc-422d-b340-7df698021107 req-a6cb1013-eab5-4b1c-a219-8a6a46dfa6ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:09:01 compute-0 ceph-mon[74496]: pgmap v2181: 305 pgs: 305 active+clean; 123 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 92 KiB/s wr, 3 op/s
Jan 31 08:09:01 compute-0 nova_compute[247704]: 2026-01-31 08:09:01.461 247708 DEBUG nova.network.neutron [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:09:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:09:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:09:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:09:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 140 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 1008 KiB/s wr, 4 op/s
Jan 31 08:09:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 28d40cfd-9ec3-4d04-b414-03971b7cb8b1 does not exist
Jan 31 08:09:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6532c18a-a205-4feb-8840-298846ebe69a does not exist
Jan 31 08:09:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b28e5f68-8e46-46cb-8f42-1e41030c8aff does not exist
Jan 31 08:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:09:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:09:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:09:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:09:01 compute-0 sudo[323316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:01 compute-0 sudo[323316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:01 compute-0 sudo[323316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:01 compute-0 sudo[323341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:01 compute-0 sudo[323341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:01 compute-0 sudo[323341]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:01 compute-0 sudo[323366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:01 compute-0 sudo[323366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:01 compute-0 sudo[323366]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:01 compute-0 sudo[323391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:09:01 compute-0 sudo[323391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:09:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:02.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:09:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:02.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:02 compute-0 podman[323456]: 2026-01-31 08:09:02.289245055 +0000 UTC m=+0.118505379 container create 2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:09:02 compute-0 podman[323456]: 2026-01-31 08:09:02.197105972 +0000 UTC m=+0.026366316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:09:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:09:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:09:02 compute-0 ceph-mon[74496]: pgmap v2182: 305 pgs: 305 active+clean; 140 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 2.5 KiB/s rd, 1008 KiB/s wr, 4 op/s
Jan 31 08:09:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:09:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:09:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:09:02 compute-0 systemd[1]: Started libpod-conmon-2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1.scope.
Jan 31 08:09:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:02 compute-0 podman[323456]: 2026-01-31 08:09:02.636579899 +0000 UTC m=+0.465840223 container init 2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:09:02 compute-0 podman[323456]: 2026-01-31 08:09:02.646711226 +0000 UTC m=+0.475971530 container start 2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:09:02 compute-0 sweet_gauss[323472]: 167 167
Jan 31 08:09:02 compute-0 systemd[1]: libpod-2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1.scope: Deactivated successfully.
Jan 31 08:09:02 compute-0 conmon[323472]: conmon 2e9b8813f16da001470e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1.scope/container/memory.events
Jan 31 08:09:02 compute-0 podman[323456]: 2026-01-31 08:09:02.728419973 +0000 UTC m=+0.557680287 container attach 2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:09:02 compute-0 podman[323456]: 2026-01-31 08:09:02.730050884 +0000 UTC m=+0.559311248 container died 2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5a1b0b747d94df8ba73030210304c35c0de8faa6e0af2cf1d4fad6e42a0d1bb-merged.mount: Deactivated successfully.
Jan 31 08:09:03 compute-0 sudo[323489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:03 compute-0 sudo[323489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:03 compute-0 sudo[323489]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:03 compute-0 sudo[323514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:03 compute-0 sudo[323514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:03 compute-0 sudo[323514]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:03 compute-0 podman[323456]: 2026-01-31 08:09:03.164689722 +0000 UTC m=+0.993950026 container remove 2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:09:03 compute-0 systemd[1]: libpod-conmon-2e9b8813f16da001470e2cb50e49c93d22c6470a9a5949085f2b2d8ce24fb2a1.scope: Deactivated successfully.
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.280 247708 DEBUG nova.network.neutron [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updating instance_info_cache with network_info: [{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:03 compute-0 podman[323546]: 2026-01-31 08:09:03.267899666 +0000 UTC m=+0.022154603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:09:03 compute-0 podman[323546]: 2026-01-31 08:09:03.422632339 +0000 UTC m=+0.176887216 container create c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.431 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:03 compute-0 systemd[1]: Started libpod-conmon-c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375.scope.
Jan 31 08:09:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b793d73c01f671aeb7c84d15ee839ae98ae37da505935015687abd9b1d1844/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b793d73c01f671aeb7c84d15ee839ae98ae37da505935015687abd9b1d1844/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b793d73c01f671aeb7c84d15ee839ae98ae37da505935015687abd9b1d1844/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b793d73c01f671aeb7c84d15ee839ae98ae37da505935015687abd9b1d1844/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b793d73c01f671aeb7c84d15ee839ae98ae37da505935015687abd9b1d1844/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 156 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Jan 31 08:09:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/983292820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.621 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.622 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Instance network_info: |[{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.623 247708 DEBUG oslo_concurrency.lockutils [req-f133b92d-1efc-422d-b340-7df698021107 req-a6cb1013-eab5-4b1c-a219-8a6a46dfa6ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.623 247708 DEBUG nova.network.neutron [req-f133b92d-1efc-422d-b340-7df698021107 req-a6cb1013-eab5-4b1c-a219-8a6a46dfa6ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Refreshing network info cache for port 86453c5b-b126-4762-81e9-44c33b078bfc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.626 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Start _get_guest_xml network_info=[{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.631 247708 WARNING nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.637 247708 DEBUG nova.virt.libvirt.host [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.638 247708 DEBUG nova.virt.libvirt.host [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.642 247708 DEBUG nova.virt.libvirt.host [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.642 247708 DEBUG nova.virt.libvirt.host [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.644 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.644 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.644 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.645 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.645 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.645 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.646 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.646 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.646 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.646 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.647 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.647 247708 DEBUG nova.virt.hardware [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:09:03 compute-0 nova_compute[247704]: 2026-01-31 08:09:03.650 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:03 compute-0 podman[323546]: 2026-01-31 08:09:03.692836536 +0000 UTC m=+0.447091503 container init c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:09:03 compute-0 podman[323546]: 2026-01-31 08:09:03.706551092 +0000 UTC m=+0.460806009 container start c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 08:09:03 compute-0 podman[323546]: 2026-01-31 08:09:03.839160684 +0000 UTC m=+0.593415561 container attach c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:09:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:04.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:09:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:04.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:09:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:09:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061122625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.148 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.179 247708 DEBUG nova.storage.rbd_utils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 53a5c321-1278-4df4-9fb0-feb465508681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.183 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.336 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:04 compute-0 crazy_gould[323563]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:09:04 compute-0 crazy_gould[323563]: --> relative data size: 1.0
Jan 31 08:09:04 compute-0 crazy_gould[323563]: --> All data devices are unavailable
Jan 31 08:09:04 compute-0 systemd[1]: libpod-c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375.scope: Deactivated successfully.
Jan 31 08:09:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:09:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860341834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.696 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.699 247708 DEBUG nova.virt.libvirt.vif [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2046159',display_name='tempest-ServerActionsTestOtherB-server-2046159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2046159',id=114,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-138tdv2g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:08:58Z,user_data=None,user_id='18aee9d81d404f77ac81cde538f140d8',uuid=53a5c321-1278-4df4-9fb0-feb465508681,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.700 247708 DEBUG nova.network.os_vif_util [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.702 247708 DEBUG nova.network.os_vif_util [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:2f:9f,bridge_name='br-int',has_traffic_filtering=True,id=86453c5b-b126-4762-81e9-44c33b078bfc,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86453c5b-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.705 247708 DEBUG nova.objects.instance [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 53a5c321-1278-4df4-9fb0-feb465508681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:09:04 compute-0 podman[323639]: 2026-01-31 08:09:04.706324668 +0000 UTC m=+0.035954850 container died c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.725 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <uuid>53a5c321-1278-4df4-9fb0-feb465508681</uuid>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <name>instance-00000072</name>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerActionsTestOtherB-server-2046159</nova:name>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:09:03</nova:creationTime>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <nova:port uuid="86453c5b-b126-4762-81e9-44c33b078bfc">
Jan 31 08:09:04 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <system>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <entry name="serial">53a5c321-1278-4df4-9fb0-feb465508681</entry>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <entry name="uuid">53a5c321-1278-4df4-9fb0-feb465508681</entry>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </system>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <os>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   </os>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <features>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   </features>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/53a5c321-1278-4df4-9fb0-feb465508681_disk">
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       </source>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/53a5c321-1278-4df4-9fb0-feb465508681_disk.config">
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       </source>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:09:04 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:42:2f:9f"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <target dev="tap86453c5b-b1"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681/console.log" append="off"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <video>
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </video>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:09:04 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:09:04 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:09:04 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:09:04 compute-0 nova_compute[247704]: </domain>
Jan 31 08:09:04 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.726 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Preparing to wait for external event network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.726 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "53a5c321-1278-4df4-9fb0-feb465508681-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.727 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.727 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.728 247708 DEBUG nova.virt.libvirt.vif [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2046159',display_name='tempest-ServerActionsTestOtherB-server-2046159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2046159',id=114,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-138tdv2g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:08:58Z,user_data=None,user_id='18aee9d81d404f77ac81cde538f140d8',uuid=53a5c321-1278-4df4-9fb0-feb465508681,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.729 247708 DEBUG nova.network.os_vif_util [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.730 247708 DEBUG nova.network.os_vif_util [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:2f:9f,bridge_name='br-int',has_traffic_filtering=True,id=86453c5b-b126-4762-81e9-44c33b078bfc,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86453c5b-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.730 247708 DEBUG os_vif [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:2f:9f,bridge_name='br-int',has_traffic_filtering=True,id=86453c5b-b126-4762-81e9-44c33b078bfc,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86453c5b-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.731 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.731 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.732 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.735 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.735 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86453c5b-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.736 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86453c5b-b1, col_values=(('external_ids', {'iface-id': '86453c5b-b126-4762-81e9-44c33b078bfc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:2f:9f', 'vm-uuid': '53a5c321-1278-4df4-9fb0-feb465508681'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:04 compute-0 NetworkManager[49108]: <info>  [1769846944.7400] manager: (tap86453c5b-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/215)
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.741 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:04 compute-0 nova_compute[247704]: 2026-01-31 08:09:04.749 247708 INFO os_vif [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:2f:9f,bridge_name='br-int',has_traffic_filtering=True,id=86453c5b-b126-4762-81e9-44c33b078bfc,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86453c5b-b1')
Jan 31 08:09:04 compute-0 ceph-mon[74496]: pgmap v2183: 305 pgs: 305 active+clean; 156 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.6 MiB/s wr, 17 op/s
Jan 31 08:09:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4061122625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:05 compute-0 nova_compute[247704]: 2026-01-31 08:09:05.173 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:09:05 compute-0 nova_compute[247704]: 2026-01-31 08:09:05.174 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:09:05 compute-0 nova_compute[247704]: 2026-01-31 08:09:05.174 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:42:2f:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:09:05 compute-0 nova_compute[247704]: 2026-01-31 08:09:05.175 247708 INFO nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Using config drive
Jan 31 08:09:05 compute-0 nova_compute[247704]: 2026-01-31 08:09:05.208 247708 DEBUG nova.storage.rbd_utils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 53a5c321-1278-4df4-9fb0-feb465508681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-17b793d73c01f671aeb7c84d15ee839ae98ae37da505935015687abd9b1d1844-merged.mount: Deactivated successfully.
Jan 31 08:09:05 compute-0 nova_compute[247704]: 2026-01-31 08:09:05.988 247708 DEBUG nova.network.neutron [req-f133b92d-1efc-422d-b340-7df698021107 req-a6cb1013-eab5-4b1c-a219-8a6a46dfa6ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updated VIF entry in instance network info cache for port 86453c5b-b126-4762-81e9-44c33b078bfc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:09:05 compute-0 nova_compute[247704]: 2026-01-31 08:09:05.989 247708 DEBUG nova.network.neutron [req-f133b92d-1efc-422d-b340-7df698021107 req-a6cb1013-eab5-4b1c-a219-8a6a46dfa6ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updating instance_info_cache with network_info: [{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:06 compute-0 nova_compute[247704]: 2026-01-31 08:09:06.015 247708 DEBUG oslo_concurrency.lockutils [req-f133b92d-1efc-422d-b340-7df698021107 req-a6cb1013-eab5-4b1c-a219-8a6a46dfa6ae 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:09:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:06.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:06.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:06 compute-0 nova_compute[247704]: 2026-01-31 08:09:06.122 247708 INFO nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Creating config drive at /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681/disk.config
Jan 31 08:09:06 compute-0 nova_compute[247704]: 2026-01-31 08:09:06.127 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplmux7slx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2860341834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2926926082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:06 compute-0 nova_compute[247704]: 2026-01-31 08:09:06.249 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplmux7slx" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:06 compute-0 nova_compute[247704]: 2026-01-31 08:09:06.313 247708 DEBUG nova.storage.rbd_utils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image 53a5c321-1278-4df4-9fb0-feb465508681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:09:06 compute-0 nova_compute[247704]: 2026-01-31 08:09:06.317 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681/disk.config 53a5c321-1278-4df4-9fb0-feb465508681_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:06 compute-0 podman[323639]: 2026-01-31 08:09:06.321335529 +0000 UTC m=+1.650965731 container remove c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:09:06 compute-0 systemd[1]: libpod-conmon-c11048fd32dcf1f9ca1ce146966f27c4813a0af4d47110295b0a4c5b9670e375.scope: Deactivated successfully.
Jan 31 08:09:06 compute-0 sudo[323391]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:06 compute-0 sudo[323698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:06 compute-0 sudo[323698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:06 compute-0 sudo[323698]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:06 compute-0 sudo[323738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:06 compute-0 sudo[323738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:06 compute-0 sudo[323738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:06 compute-0 sudo[323763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:06 compute-0 sudo[323763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:06 compute-0 sudo[323763]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:06 compute-0 sudo[323788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:09:06 compute-0 sudo[323788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:07 compute-0 podman[323853]: 2026-01-31 08:09:07.103770021 +0000 UTC m=+0.108151345 container create a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:09:07 compute-0 podman[323853]: 2026-01-31 08:09:07.022250208 +0000 UTC m=+0.026631572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:09:07 compute-0 systemd[1]: Started libpod-conmon-a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a.scope.
Jan 31 08:09:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:07 compute-0 podman[323853]: 2026-01-31 08:09:07.49467049 +0000 UTC m=+0.499051794 container init a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:09:07 compute-0 podman[323853]: 2026-01-31 08:09:07.500289968 +0000 UTC m=+0.504671252 container start a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:09:07 compute-0 cool_johnson[323869]: 167 167
Jan 31 08:09:07 compute-0 systemd[1]: libpod-a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a.scope: Deactivated successfully.
Jan 31 08:09:07 compute-0 conmon[323869]: conmon a1f71579edf1a0f92e7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a.scope/container/memory.events
Jan 31 08:09:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 181 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 31 08:09:07 compute-0 ceph-mon[74496]: pgmap v2184: 305 pgs: 305 active+clean; 167 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:09:07 compute-0 podman[323853]: 2026-01-31 08:09:07.698176186 +0000 UTC m=+0.702557520 container attach a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:09:07 compute-0 podman[323853]: 2026-01-31 08:09:07.698640007 +0000 UTC m=+0.703021301 container died a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:09:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:08.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:08.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a37bff5dd8047d431b9345462971d9a8008e2ce4f1cc8603ec1e3bad0581d3ea-merged.mount: Deactivated successfully.
Jan 31 08:09:08 compute-0 nova_compute[247704]: 2026-01-31 08:09:08.614 247708 DEBUG oslo_concurrency.processutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681/disk.config 53a5c321-1278-4df4-9fb0-feb465508681_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:08 compute-0 nova_compute[247704]: 2026-01-31 08:09:08.617 247708 INFO nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Deleting local config drive /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681/disk.config because it was imported into RBD.
Jan 31 08:09:08 compute-0 kernel: tap86453c5b-b1: entered promiscuous mode
Jan 31 08:09:08 compute-0 NetworkManager[49108]: <info>  [1769846948.6862] manager: (tap86453c5b-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/216)
Jan 31 08:09:08 compute-0 nova_compute[247704]: 2026-01-31 08:09:08.687 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:08 compute-0 ovn_controller[149457]: 2026-01-31T08:09:08Z|00463|binding|INFO|Claiming lport 86453c5b-b126-4762-81e9-44c33b078bfc for this chassis.
Jan 31 08:09:08 compute-0 ovn_controller[149457]: 2026-01-31T08:09:08Z|00464|binding|INFO|86453c5b-b126-4762-81e9-44c33b078bfc: Claiming fa:16:3e:42:2f:9f 10.100.0.7
Jan 31 08:09:08 compute-0 nova_compute[247704]: 2026-01-31 08:09:08.695 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.714 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:2f:9f 10.100.0.7'], port_security=['fa:16:3e:42:2f:9f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '53a5c321-1278-4df4-9fb0-feb465508681', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cef7bb84-6ec0-48b7-8775-111e40762c53', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=86453c5b-b126-4762-81e9-44c33b078bfc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.716 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 86453c5b-b126-4762-81e9-44c33b078bfc in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.719 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.728 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f06aab72-14cd-492e-b872-50140ba664cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.729 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8014d6b-21 in ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.735 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8014d6b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.735 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[75f211c2-fc28-4173-9150-86600331fa32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.737 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[31a16093-0e55-4a90-8b25-6034158f29cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 nova_compute[247704]: 2026-01-31 08:09:08.779 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:08 compute-0 NetworkManager[49108]: <info>  [1769846948.7812] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Jan 31 08:09:08 compute-0 NetworkManager[49108]: <info>  [1769846948.7822] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Jan 31 08:09:08 compute-0 nova_compute[247704]: 2026-01-31 08:09:08.783 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:08 compute-0 systemd-machined[214448]: New machine qemu-47-instance-00000072.
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.788 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[870b4d76-4808-42dd-8b11-d0a2c2c6a414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 podman[323853]: 2026-01-31 08:09:08.795237972 +0000 UTC m=+1.799619256 container remove a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:09:08 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-00000072.
Jan 31 08:09:08 compute-0 nova_compute[247704]: 2026-01-31 08:09:08.801 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.804 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c51ffbef-2b2b-4f87-8413-5ad2cabdebde]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ovn_controller[149457]: 2026-01-31T08:09:08Z|00465|binding|INFO|Setting lport 86453c5b-b126-4762-81e9-44c33b078bfc ovn-installed in OVS
Jan 31 08:09:08 compute-0 ovn_controller[149457]: 2026-01-31T08:09:08Z|00466|binding|INFO|Setting lport 86453c5b-b126-4762-81e9-44c33b078bfc up in Southbound
Jan 31 08:09:08 compute-0 systemd-udevd[323906]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:09:08 compute-0 nova_compute[247704]: 2026-01-31 08:09:08.810 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:08 compute-0 NetworkManager[49108]: <info>  [1769846948.8259] device (tap86453c5b-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:09:08 compute-0 NetworkManager[49108]: <info>  [1769846948.8274] device (tap86453c5b-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.837 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c03ffc1b-e43e-4495-b14d-b05fdd163875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.841 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7367a7b7-6bf5-4f0b-af73-e507bf97a9e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 NetworkManager[49108]: <info>  [1769846948.8426] manager: (tape8014d6b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.867 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a98d7464-ffc7-4a35-b006-c56e2ef84e9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.872 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d09a4558-c8c3-4ace-a4c2-171dcea7708e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 systemd[1]: libpod-conmon-a1f71579edf1a0f92e7fad02b37c2495fc305a05263afc31200e071c9cfec16a.scope: Deactivated successfully.
Jan 31 08:09:08 compute-0 NetworkManager[49108]: <info>  [1769846948.8947] device (tape8014d6b-20): carrier: link connected
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.901 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd5ecab-3b40-4bce-8b76-e62a7c3c8430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.914 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6e55a260-f05f-420e-9c44-08c59cc8480c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719151, 'reachable_time': 38392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323945, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ceph-mon[74496]: pgmap v2185: 305 pgs: 305 active+clean; 181 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.931 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[431e7564-785d-4cc1-b5a1-2b5c8b20b2bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:c1c3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719151, 'tstamp': 719151}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323953, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.948 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e790aa10-c3ce-4190-9b28-8d26386fff28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719151, 'reachable_time': 38392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323956, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:08.972 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[05888c59-d265-4a03-9f58-9ac7702a7e88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.021 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f466e4d5-07a9-4c5d-8e9c-8048438d33a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.023 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.023 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.024 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:09 compute-0 podman[323943]: 2026-01-31 08:09:08.928150202 +0000 UTC m=+0.020975694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:09:09 compute-0 kernel: tape8014d6b-20: entered promiscuous mode
Jan 31 08:09:09 compute-0 NetworkManager[49108]: <info>  [1769846949.0269] manager: (tape8014d6b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Jan 31 08:09:09 compute-0 nova_compute[247704]: 2026-01-31 08:09:09.032 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.034 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:09 compute-0 nova_compute[247704]: 2026-01-31 08:09:09.035 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.037 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.038 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5f345754-7576-43de-b1d3-e4d669e0eb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.039 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/e8014d6b-23e1-41ef-b5e2-3d770d302e72.pid.haproxy
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:09:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:09.039 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'env', 'PROCESS_TAG=haproxy-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8014d6b-23e1-41ef-b5e2-3d770d302e72.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:09:09 compute-0 ovn_controller[149457]: 2026-01-31T08:09:09Z|00467|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:09:09 compute-0 nova_compute[247704]: 2026-01-31 08:09:09.061 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:09 compute-0 podman[323943]: 2026-01-31 08:09:09.069869258 +0000 UTC m=+0.162694730 container create cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:09:09 compute-0 systemd[1]: Started libpod-conmon-cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb.scope.
Jan 31 08:09:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670591583a1de018213955547899b32de83543daf169f88dc94c75066818eba7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670591583a1de018213955547899b32de83543daf169f88dc94c75066818eba7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670591583a1de018213955547899b32de83543daf169f88dc94c75066818eba7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670591583a1de018213955547899b32de83543daf169f88dc94c75066818eba7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:09 compute-0 nova_compute[247704]: 2026-01-31 08:09:09.338 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:09 compute-0 podman[323943]: 2026-01-31 08:09:09.395967211 +0000 UTC m=+0.488792743 container init cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:09:09 compute-0 podman[323943]: 2026-01-31 08:09:09.406472508 +0000 UTC m=+0.499297980 container start cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ptolemy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:09 compute-0 podman[323943]: 2026-01-31 08:09:09.56889094 +0000 UTC m=+0.661716462 container attach cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:09:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 228 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.9 MiB/s wr, 64 op/s
Jan 31 08:09:09 compute-0 podman[323997]: 2026-01-31 08:09:09.588761786 +0000 UTC m=+0.130212515 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:09:09 compute-0 nova_compute[247704]: 2026-01-31 08:09:09.739 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:09 compute-0 nova_compute[247704]: 2026-01-31 08:09:09.921 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846949.9207754, 53a5c321-1278-4df4-9fb0-feb465508681 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:09 compute-0 nova_compute[247704]: 2026-01-31 08:09:09.921 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] VM Started (Lifecycle Event)
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.019 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.024 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846949.924169, 53a5c321-1278-4df4-9fb0-feb465508681 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.024 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] VM Paused (Lifecycle Event)
Jan 31 08:09:10 compute-0 podman[323997]: 2026-01-31 08:09:10.029754359 +0000 UTC m=+0.571204998 container create d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.045 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.050 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:09:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:10.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.072 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:09:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:10.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3208905229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:10 compute-0 ceph-mon[74496]: pgmap v2186: 305 pgs: 305 active+clean; 228 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.9 MiB/s wr, 64 op/s
Jan 31 08:09:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1987200555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]: {
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:     "0": [
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:         {
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "devices": [
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "/dev/loop3"
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             ],
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "lv_name": "ceph_lv0",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "lv_size": "7511998464",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "name": "ceph_lv0",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "tags": {
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.cluster_name": "ceph",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.crush_device_class": "",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.encrypted": "0",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.osd_id": "0",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.type": "block",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:                 "ceph.vdo": "0"
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             },
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "type": "block",
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:             "vg_name": "ceph_vg0"
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:         }
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]:     ]
Jan 31 08:09:10 compute-0 priceless_ptolemy[323969]: }
Jan 31 08:09:10 compute-0 systemd[1]: libpod-cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb.scope: Deactivated successfully.
Jan 31 08:09:10 compute-0 podman[323943]: 2026-01-31 08:09:10.324382154 +0000 UTC m=+1.417207646 container died cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.336 247708 DEBUG nova.compute.manager [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received event network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.338 247708 DEBUG oslo_concurrency.lockutils [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "53a5c321-1278-4df4-9fb0-feb465508681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.338 247708 DEBUG oslo_concurrency.lockutils [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.338 247708 DEBUG oslo_concurrency.lockutils [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.339 247708 DEBUG nova.compute.manager [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Processing event network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.339 247708 DEBUG nova.compute.manager [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received event network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.339 247708 DEBUG oslo_concurrency.lockutils [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "53a5c321-1278-4df4-9fb0-feb465508681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.339 247708 DEBUG oslo_concurrency.lockutils [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.339 247708 DEBUG oslo_concurrency.lockutils [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.340 247708 DEBUG nova.compute.manager [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] No waiting events found dispatching network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.340 247708 WARNING nova.compute.manager [req-7217433e-ade9-46f5-8a6d-762277bd529f req-dca9d8ce-c15e-4ed4-8b69-f5af924a608b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received unexpected event network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc for instance with vm_state building and task_state spawning.
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.341 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:09:10 compute-0 systemd[1]: Started libpod-conmon-d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725.scope.
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.345 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769846950.3452585, 53a5c321-1278-4df4-9fb0-feb465508681 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.345 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] VM Resumed (Lifecycle Event)
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.347 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.350 247708 INFO nova.virt.libvirt.driver [-] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Instance spawned successfully.
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.351 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.369 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.374 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:09:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.377 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.377 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.378 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.378 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.379 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.379 247708 DEBUG nova.virt.libvirt.driver [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297c3994664edfbd55c78c2ea4265a198135f89872f84c8dba35506fa9f76182/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.412 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.445 247708 INFO nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Took 12.33 seconds to spawn the instance on the hypervisor.
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.446 247708 DEBUG nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:09:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.526 247708 INFO nova.compute.manager [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Took 14.00 seconds to build instance.
Jan 31 08:09:10 compute-0 nova_compute[247704]: 2026-01-31 08:09:10.546 247708 DEBUG oslo_concurrency.lockutils [None req-84e42b88-b2bb-44a1-a3f3-44fb81b99002 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-670591583a1de018213955547899b32de83543daf169f88dc94c75066818eba7-merged.mount: Deactivated successfully.
Jan 31 08:09:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:11.179 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:11.180 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:11.181 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:11 compute-0 podman[323943]: 2026-01-31 08:09:11.441855549 +0000 UTC m=+2.534681031 container remove cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:09:11 compute-0 sudo[323788]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:11 compute-0 systemd[1]: libpod-conmon-cd1e321c902f0e6189b6bc666c92c9e201cf6426a7819885db299cdc88740efb.scope: Deactivated successfully.
Jan 31 08:09:11 compute-0 sudo[324073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:11 compute-0 sudo[324073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:11 compute-0 sudo[324073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 252 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.2 MiB/s wr, 80 op/s
Jan 31 08:09:11 compute-0 sudo[324098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:11 compute-0 sudo[324098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:11 compute-0 sudo[324098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:11 compute-0 sudo[324123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:11 compute-0 nova_compute[247704]: 2026-01-31 08:09:11.668 247708 INFO nova.compute.manager [None req-c4e70657-163b-4c01-a31b-e9f5434effe3 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Get console output
Jan 31 08:09:11 compute-0 sudo[324123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:11 compute-0 sudo[324123]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:11 compute-0 nova_compute[247704]: 2026-01-31 08:09:11.675 315733 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:09:11 compute-0 sudo[324148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:09:11 compute-0 sudo[324148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:11 compute-0 podman[323997]: 2026-01-31 08:09:11.802927598 +0000 UTC m=+2.344378257 container init d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:09:11 compute-0 podman[323997]: 2026-01-31 08:09:11.810484962 +0000 UTC m=+2.351935641 container start d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 08:09:11 compute-0 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[324068]: [NOTICE]   (324174) : New worker (324176) forked
Jan 31 08:09:11 compute-0 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[324068]: [NOTICE]   (324174) : Loading success.
Jan 31 08:09:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/686841026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1666735505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:12.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:12 compute-0 podman[324224]: 2026-01-31 08:09:12.077812209 +0000 UTC m=+0.081034172 container create ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lovelace, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:09:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:12.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:12 compute-0 podman[324224]: 2026-01-31 08:09:12.019859421 +0000 UTC m=+0.023081394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:09:12 compute-0 systemd[1]: Started libpod-conmon-ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e.scope.
Jan 31 08:09:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:12 compute-0 podman[324224]: 2026-01-31 08:09:12.18988383 +0000 UTC m=+0.193105783 container init ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 31 08:09:12 compute-0 podman[324224]: 2026-01-31 08:09:12.199800602 +0000 UTC m=+0.203022525 container start ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:09:12 compute-0 fervent_lovelace[324240]: 167 167
Jan 31 08:09:12 compute-0 systemd[1]: libpod-ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e.scope: Deactivated successfully.
Jan 31 08:09:12 compute-0 podman[324224]: 2026-01-31 08:09:12.208583006 +0000 UTC m=+0.211804969 container attach ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lovelace, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:12 compute-0 podman[324224]: 2026-01-31 08:09:12.208910684 +0000 UTC m=+0.212132617 container died ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:09:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb044b3c3e78f88aa78dbeb5c53e32b65660333177b922920a7a74fe5b699bfc-merged.mount: Deactivated successfully.
Jan 31 08:09:12 compute-0 podman[324224]: 2026-01-31 08:09:12.270034199 +0000 UTC m=+0.273256122 container remove ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 08:09:12 compute-0 systemd[1]: libpod-conmon-ad5f65c789c652977aae6c2f2a75894c4f69641732fd0855004667c16a9a266e.scope: Deactivated successfully.
Jan 31 08:09:12 compute-0 podman[324263]: 2026-01-31 08:09:12.463963412 +0000 UTC m=+0.078182413 container create 95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jang, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:09:12 compute-0 podman[324263]: 2026-01-31 08:09:12.408749611 +0000 UTC m=+0.022968622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:09:12 compute-0 systemd[1]: Started libpod-conmon-95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267.scope.
Jan 31 08:09:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43d4de60207d578bd6299cee3bc942f51b439e9883a386424c063263da9a7e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43d4de60207d578bd6299cee3bc942f51b439e9883a386424c063263da9a7e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43d4de60207d578bd6299cee3bc942f51b439e9883a386424c063263da9a7e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d43d4de60207d578bd6299cee3bc942f51b439e9883a386424c063263da9a7e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:12 compute-0 podman[324263]: 2026-01-31 08:09:12.595942539 +0000 UTC m=+0.210161540 container init 95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:09:12 compute-0 podman[324263]: 2026-01-31 08:09:12.604228611 +0000 UTC m=+0.218447622 container start 95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 08:09:12 compute-0 podman[324263]: 2026-01-31 08:09:12.61111038 +0000 UTC m=+0.225329391 container attach 95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jang, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 08:09:12 compute-0 nova_compute[247704]: 2026-01-31 08:09:12.797 247708 INFO nova.compute.manager [None req-7e94253c-c009-448e-8a55-53841422edc9 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Get console output
Jan 31 08:09:12 compute-0 nova_compute[247704]: 2026-01-31 08:09:12.805 315733 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:09:12 compute-0 ceph-mon[74496]: pgmap v2187: 305 pgs: 305 active+clean; 252 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.2 MiB/s wr, 80 op/s
Jan 31 08:09:13 compute-0 nova_compute[247704]: 2026-01-31 08:09:13.313 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:13 compute-0 musing_jang[324280]: {
Jan 31 08:09:13 compute-0 musing_jang[324280]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:09:13 compute-0 musing_jang[324280]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:09:13 compute-0 musing_jang[324280]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:09:13 compute-0 musing_jang[324280]:         "osd_id": 0,
Jan 31 08:09:13 compute-0 musing_jang[324280]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:09:13 compute-0 musing_jang[324280]:         "type": "bluestore"
Jan 31 08:09:13 compute-0 musing_jang[324280]:     }
Jan 31 08:09:13 compute-0 musing_jang[324280]: }
Jan 31 08:09:13 compute-0 systemd[1]: libpod-95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267.scope: Deactivated successfully.
Jan 31 08:09:13 compute-0 podman[324263]: 2026-01-31 08:09:13.48874093 +0000 UTC m=+1.102959921 container died 95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d43d4de60207d578bd6299cee3bc942f51b439e9883a386424c063263da9a7e2-merged.mount: Deactivated successfully.
Jan 31 08:09:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 260 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 4.3 MiB/s wr, 108 op/s
Jan 31 08:09:13 compute-0 podman[324263]: 2026-01-31 08:09:13.626696023 +0000 UTC m=+1.240915014 container remove 95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jang, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:09:13 compute-0 systemd[1]: libpod-conmon-95b0a63cb60c2927bf9028c20e85b2694246dd3865c6cdf1325f11b80141c267.scope: Deactivated successfully.
Jan 31 08:09:13 compute-0 sudo[324148]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:09:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:09:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:13 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 699fd85d-56d4-47c2-8de6-08b2409e472c does not exist
Jan 31 08:09:13 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0d50ca40-7254-47e0-a515-5aa408ee7440 does not exist
Jan 31 08:09:13 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 117ecc94-cccc-462c-9302-aacf5246fe48 does not exist
Jan 31 08:09:13 compute-0 sudo[324315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:13 compute-0 sudo[324315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:13 compute-0 sudo[324315]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:13 compute-0 sudo[324340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:09:13 compute-0 sudo[324340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:13 compute-0 sudo[324340]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:14.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:14.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:14 compute-0 nova_compute[247704]: 2026-01-31 08:09:14.340 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:14 compute-0 sshd-session[324365]: Invalid user solana from 45.148.10.240 port 40032
Jan 31 08:09:14 compute-0 sshd-session[324365]: Connection closed by invalid user solana 45.148.10.240 port 40032 [preauth]
Jan 31 08:09:14 compute-0 ceph-mon[74496]: pgmap v2188: 305 pgs: 305 active+clean; 260 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 4.3 MiB/s wr, 108 op/s
Jan 31 08:09:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:09:14 compute-0 nova_compute[247704]: 2026-01-31 08:09:14.741 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 260 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 154 op/s
Jan 31 08:09:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:16.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:16.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:16 compute-0 podman[324368]: 2026-01-31 08:09:16.907621369 +0000 UTC m=+0.075975479 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:09:17 compute-0 ceph-mon[74496]: pgmap v2189: 305 pgs: 305 active+clean; 260 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 154 op/s
Jan 31 08:09:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 238 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 250 op/s
Jan 31 08:09:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:18.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:18.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:18 compute-0 ceph-mon[74496]: pgmap v2190: 305 pgs: 305 active+clean; 238 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.6 MiB/s wr, 250 op/s
Jan 31 08:09:19 compute-0 nova_compute[247704]: 2026-01-31 08:09:19.342 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 214 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.3 MiB/s wr, 269 op/s
Jan 31 08:09:19 compute-0 nova_compute[247704]: 2026-01-31 08:09:19.743 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:20.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:20.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:09:20
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'images']
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:09:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:09:20 compute-0 ceph-mon[74496]: pgmap v2191: 305 pgs: 305 active+clean; 214 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.3 MiB/s wr, 269 op/s
Jan 31 08:09:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 214 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.4 MiB/s wr, 261 op/s
Jan 31 08:09:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:22.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:22 compute-0 nova_compute[247704]: 2026-01-31 08:09:22.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:23 compute-0 ceph-mon[74496]: pgmap v2192: 305 pgs: 305 active+clean; 214 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.4 MiB/s wr, 261 op/s
Jan 31 08:09:23 compute-0 sudo[324400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:23 compute-0 sudo[324400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:23 compute-0 sudo[324400]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:23 compute-0 sudo[324425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:23 compute-0 sudo[324425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:23 compute-0 sudo[324425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:23 compute-0 nova_compute[247704]: 2026-01-31 08:09:23.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 214 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 116 KiB/s wr, 242 op/s
Jan 31 08:09:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:24.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:24.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:24 compute-0 nova_compute[247704]: 2026-01-31 08:09:24.346 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2203133916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:24 compute-0 ceph-mon[74496]: pgmap v2193: 305 pgs: 305 active+clean; 214 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 116 KiB/s wr, 242 op/s
Jan 31 08:09:24 compute-0 nova_compute[247704]: 2026-01-31 08:09:24.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:25 compute-0 nova_compute[247704]: 2026-01-31 08:09:25.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 214 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 220 KiB/s wr, 228 op/s
Jan 31 08:09:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:26.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:26.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:26 compute-0 ceph-mon[74496]: pgmap v2194: 305 pgs: 305 active+clean; 214 MiB data, 933 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 220 KiB/s wr, 228 op/s
Jan 31 08:09:27 compute-0 nova_compute[247704]: 2026-01-31 08:09:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:27 compute-0 nova_compute[247704]: 2026-01-31 08:09:27.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:09:27 compute-0 nova_compute[247704]: 2026-01-31 08:09:27.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:09:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 214 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Jan 31 08:09:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:28.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:28.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:29 compute-0 nova_compute[247704]: 2026-01-31 08:09:29.347 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:29 compute-0 ceph-mon[74496]: pgmap v2195: 305 pgs: 305 active+clean; 214 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Jan 31 08:09:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 225 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 115 op/s
Jan 31 08:09:29 compute-0 nova_compute[247704]: 2026-01-31 08:09:29.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:30.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:09:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:30.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:09:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:30 compute-0 podman[324454]: 2026-01-31 08:09:30.880161042 +0000 UTC m=+0.048051377 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 08:09:30 compute-0 ceph-mon[74496]: pgmap v2196: 305 pgs: 305 active+clean; 225 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 115 op/s
Jan 31 08:09:31 compute-0 ovn_controller[149457]: 2026-01-31T08:09:31Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:2f:9f 10.100.0.7
Jan 31 08:09:31 compute-0 ovn_controller[149457]: 2026-01-31T08:09:31Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:2f:9f 10.100.0.7
Jan 31 08:09:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.8 MiB/s wr, 86 op/s
Jan 31 08:09:31 compute-0 nova_compute[247704]: 2026-01-31 08:09:31.762 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:09:31 compute-0 nova_compute[247704]: 2026-01-31 08:09:31.763 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:09:31 compute-0 nova_compute[247704]: 2026-01-31 08:09:31.763 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:09:31 compute-0 nova_compute[247704]: 2026-01-31 08:09:31.764 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53a5c321-1278-4df4-9fb0-feb465508681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:09:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:32.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:32.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:32 compute-0 ceph-mon[74496]: pgmap v2197: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.8 MiB/s wr, 86 op/s
Jan 31 08:09:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 244 KiB/s rd, 3.8 MiB/s wr, 88 op/s
Jan 31 08:09:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2588865719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:34.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:34 compute-0 nova_compute[247704]: 2026-01-31 08:09:34.350 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:34 compute-0 nova_compute[247704]: 2026-01-31 08:09:34.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:35 compute-0 ceph-mon[74496]: pgmap v2198: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 244 KiB/s rd, 3.8 MiB/s wr, 88 op/s
Jan 31 08:09:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2999692136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:35 compute-0 nova_compute[247704]: 2026-01-31 08:09:35.206 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updating instance_info_cache with network_info: [{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0052904859947887585 of space, bias 1.0, pg target 1.5871457984366275 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:09:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 106 op/s
Jan 31 08:09:35 compute-0 nova_compute[247704]: 2026-01-31 08:09:35.825 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:09:35 compute-0 nova_compute[247704]: 2026-01-31 08:09:35.825 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:09:35 compute-0 nova_compute[247704]: 2026-01-31 08:09:35.826 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:35 compute-0 nova_compute[247704]: 2026-01-31 08:09:35.827 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:35 compute-0 nova_compute[247704]: 2026-01-31 08:09:35.827 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:35 compute-0 nova_compute[247704]: 2026-01-31 08:09:35.827 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:09:35 compute-0 nova_compute[247704]: 2026-01-31 08:09:35.827 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:36.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:36.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/307380271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:36 compute-0 ceph-mon[74496]: pgmap v2199: 305 pgs: 305 active+clean; 239 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 301 KiB/s rd, 3.9 MiB/s wr, 106 op/s
Jan 31 08:09:36 compute-0 nova_compute[247704]: 2026-01-31 08:09:36.412 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:36 compute-0 nova_compute[247704]: 2026-01-31 08:09:36.413 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:36 compute-0 nova_compute[247704]: 2026-01-31 08:09:36.414 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:36 compute-0 nova_compute[247704]: 2026-01-31 08:09:36.415 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:09:36 compute-0 nova_compute[247704]: 2026-01-31 08:09:36.416 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:09:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/804275966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:36 compute-0 nova_compute[247704]: 2026-01-31 08:09:36.956 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:37 compute-0 nova_compute[247704]: 2026-01-31 08:09:37.591 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:09:37 compute-0 nova_compute[247704]: 2026-01-31 08:09:37.592 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:09:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 240 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 3.7 MiB/s wr, 100 op/s
Jan 31 08:09:37 compute-0 nova_compute[247704]: 2026-01-31 08:09:37.876 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:09:37 compute-0 nova_compute[247704]: 2026-01-31 08:09:37.877 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4244MB free_disk=20.876785278320312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:09:37 compute-0 nova_compute[247704]: 2026-01-31 08:09:37.877 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:09:37 compute-0 nova_compute[247704]: 2026-01-31 08:09:37.877 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:09:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/804275966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/969850163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1831518582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:38.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:38.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:38 compute-0 nova_compute[247704]: 2026-01-31 08:09:38.725 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 53a5c321-1278-4df4-9fb0-feb465508681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:09:38 compute-0 nova_compute[247704]: 2026-01-31 08:09:38.726 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:09:38 compute-0 nova_compute[247704]: 2026-01-31 08:09:38.726 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:09:38 compute-0 nova_compute[247704]: 2026-01-31 08:09:38.767 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:09:38 compute-0 ceph-mon[74496]: pgmap v2200: 305 pgs: 305 active+clean; 240 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 313 KiB/s rd, 3.7 MiB/s wr, 100 op/s
Jan 31 08:09:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:09:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259734391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:39 compute-0 nova_compute[247704]: 2026-01-31 08:09:39.261 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:09:39 compute-0 nova_compute[247704]: 2026-01-31 08:09:39.265 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:09:39 compute-0 nova_compute[247704]: 2026-01-31 08:09:39.351 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:39 compute-0 nova_compute[247704]: 2026-01-31 08:09:39.357 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:09:39 compute-0 nova_compute[247704]: 2026-01-31 08:09:39.395 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:09:39 compute-0 nova_compute[247704]: 2026-01-31 08:09:39.395 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:09:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 242 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 308 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Jan 31 08:09:39 compute-0 nova_compute[247704]: 2026-01-31 08:09:39.752 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/821602593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4259734391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:40 compute-0 ceph-mon[74496]: pgmap v2201: 305 pgs: 305 active+clean; 242 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 308 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Jan 31 08:09:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2347554547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:40.058 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:09:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:40.060 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:09:40 compute-0 nova_compute[247704]: 2026-01-31 08:09:40.103 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:40.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:40.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2439194594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:09:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/966626106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 246 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 870 KiB/s wr, 53 op/s
Jan 31 08:09:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:42.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:42.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:42 compute-0 ceph-mon[74496]: pgmap v2202: 305 pgs: 305 active+clean; 246 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 870 KiB/s wr, 53 op/s
Jan 31 08:09:43 compute-0 sudo[324524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:43 compute-0 sudo[324524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:43 compute-0 sudo[324524]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:43 compute-0 sudo[324549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:09:43 compute-0 sudo[324549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:43 compute-0 sudo[324549]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 246 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 89 KiB/s wr, 41 op/s
Jan 31 08:09:44 compute-0 ovn_controller[149457]: 2026-01-31T08:09:44Z|00468|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:09:44 compute-0 nova_compute[247704]: 2026-01-31 08:09:44.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:44.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:44.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:44 compute-0 nova_compute[247704]: 2026-01-31 08:09:44.353 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:44 compute-0 nova_compute[247704]: 2026-01-31 08:09:44.389 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:09:44 compute-0 ceph-mon[74496]: pgmap v2203: 305 pgs: 305 active+clean; 246 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 89 KiB/s wr, 41 op/s
Jan 31 08:09:44 compute-0 nova_compute[247704]: 2026-01-31 08:09:44.754 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:09:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1188548440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:09:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:09:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1188548440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:09:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 246 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 110 KiB/s wr, 31 op/s
Jan 31 08:09:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1188548440' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:09:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1188548440' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:09:46 compute-0 ovn_controller[149457]: 2026-01-31T08:09:46Z|00469|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:09:46 compute-0 nova_compute[247704]: 2026-01-31 08:09:46.035 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:46.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:09:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:46.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:09:46 compute-0 ceph-mon[74496]: pgmap v2204: 305 pgs: 305 active+clean; 246 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 110 KiB/s wr, 31 op/s
Jan 31 08:09:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 69 KiB/s wr, 44 op/s
Jan 31 08:09:47 compute-0 podman[324577]: 2026-01-31 08:09:47.955795581 +0000 UTC m=+0.124563497 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:09:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:48.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:48.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 31 08:09:48 compute-0 ceph-mon[74496]: pgmap v2205: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 69 KiB/s wr, 44 op/s
Jan 31 08:09:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:09:49.062 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:09:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 31 08:09:49 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 31 08:09:49 compute-0 nova_compute[247704]: 2026-01-31 08:09:49.355 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 82 op/s
Jan 31 08:09:49 compute-0 nova_compute[247704]: 2026-01-31 08:09:49.756 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:50.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:09:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:50.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:09:50 compute-0 ceph-mon[74496]: osdmap e280: 3 total, 3 up, 3 in
Jan 31 08:09:50 compute-0 ceph-mon[74496]: pgmap v2207: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 82 op/s
Jan 31 08:09:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/611290882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2197638865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:09:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 95 op/s
Jan 31 08:09:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:52.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:52.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:52 compute-0 ceph-mon[74496]: pgmap v2208: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 27 KiB/s wr, 95 op/s
Jan 31 08:09:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 101 op/s
Jan 31 08:09:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:09:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:54.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:09:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:54.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:54 compute-0 nova_compute[247704]: 2026-01-31 08:09:54.358 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:54 compute-0 ceph-mon[74496]: pgmap v2209: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 101 op/s
Jan 31 08:09:54 compute-0 nova_compute[247704]: 2026-01-31 08:09:54.758 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:09:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 108 op/s
Jan 31 08:09:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:56.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:56.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:56 compute-0 ceph-mon[74496]: pgmap v2210: 305 pgs: 305 active+clean; 246 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 108 op/s
Jan 31 08:09:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 247 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 19 KiB/s wr, 76 op/s
Jan 31 08:09:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:09:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:58.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:09:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:09:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:09:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:58.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:09:58 compute-0 ceph-mon[74496]: pgmap v2211: 305 pgs: 305 active+clean; 247 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 19 KiB/s wr, 76 op/s
Jan 31 08:09:59 compute-0 nova_compute[247704]: 2026-01-31 08:09:59.361 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:09:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 258 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 1.5 MiB/s wr, 75 op/s
Jan 31 08:09:59 compute-0 nova_compute[247704]: 2026-01-31 08:09:59.760 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 08:10:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:00.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:00.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:00 compute-0 ceph-mon[74496]: pgmap v2212: 305 pgs: 305 active+clean; 258 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 1.5 MiB/s wr, 75 op/s
Jan 31 08:10:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 08:10:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 270 MiB data, 975 MiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 1.5 MiB/s wr, 73 op/s
Jan 31 08:10:01 compute-0 podman[324613]: 2026-01-31 08:10:01.892784694 +0000 UTC m=+0.059526086 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:10:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:02.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:02.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 31 08:10:02 compute-0 ceph-mon[74496]: pgmap v2213: 305 pgs: 305 active+clean; 270 MiB data, 975 MiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 1.5 MiB/s wr, 73 op/s
Jan 31 08:10:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 31 08:10:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 31 08:10:03 compute-0 sudo[324632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:03 compute-0 sudo[324632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:03 compute-0 sudo[324632]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:03 compute-0 sudo[324658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:03 compute-0 sudo[324658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:03 compute-0 sudo[324658]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 279 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 370 KiB/s rd, 2.6 MiB/s wr, 80 op/s
Jan 31 08:10:03 compute-0 ceph-mon[74496]: osdmap e281: 3 total, 3 up, 3 in
Jan 31 08:10:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:04.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:04.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:04 compute-0 nova_compute[247704]: 2026-01-31 08:10:04.365 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:04 compute-0 nova_compute[247704]: 2026-01-31 08:10:04.763 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:04 compute-0 ceph-mon[74496]: pgmap v2215: 305 pgs: 305 active+clean; 279 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 370 KiB/s rd, 2.6 MiB/s wr, 80 op/s
Jan 31 08:10:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1587739989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 279 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.6 MiB/s wr, 85 op/s
Jan 31 08:10:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:06.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:06.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:06 compute-0 ceph-mon[74496]: pgmap v2216: 305 pgs: 305 active+clean; 279 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.6 MiB/s wr, 85 op/s
Jan 31 08:10:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 290 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:07.900623) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007900727, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2240, "num_deletes": 258, "total_data_size": 3874938, "memory_usage": 3933304, "flush_reason": "Manual Compaction"}
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007943394, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3773074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46548, "largest_seqno": 48787, "table_properties": {"data_size": 3762830, "index_size": 6546, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21677, "raw_average_key_size": 20, "raw_value_size": 3742244, "raw_average_value_size": 3619, "num_data_blocks": 282, "num_entries": 1034, "num_filter_entries": 1034, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846820, "oldest_key_time": 1769846820, "file_creation_time": 1769847007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 42928 microseconds, and 12338 cpu microseconds.
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:07.943560) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3773074 bytes OK
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:07.943610) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:07.950951) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:07.950972) EVENT_LOG_v1 {"time_micros": 1769847007950967, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:07.950990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3865708, prev total WAL file size 3865708, number of live WAL files 2.
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:07.951860) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3684KB)], [104(9059KB)]
Jan 31 08:10:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847007951988, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 13049603, "oldest_snapshot_seqno": -1}
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7673 keys, 11136811 bytes, temperature: kUnknown
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847008076154, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11136811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11086272, "index_size": 30267, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198575, "raw_average_key_size": 25, "raw_value_size": 10950159, "raw_average_value_size": 1427, "num_data_blocks": 1192, "num_entries": 7673, "num_filter_entries": 7673, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:08.076532) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11136811 bytes
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:08.078575) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.0 rd, 89.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8205, records dropped: 532 output_compression: NoCompression
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:08.078607) EVENT_LOG_v1 {"time_micros": 1769847008078593, "job": 62, "event": "compaction_finished", "compaction_time_micros": 124279, "compaction_time_cpu_micros": 36362, "output_level": 6, "num_output_files": 1, "total_output_size": 11136811, "num_input_records": 8205, "num_output_records": 7673, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847008079595, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847008081923, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:07.951715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:08.082187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:08.082199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:08.082204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:08.082207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:08 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:10:08.082211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:10:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:08.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:08.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:08 compute-0 ceph-mon[74496]: pgmap v2217: 305 pgs: 305 active+clean; 290 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Jan 31 08:10:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3304592696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2532497826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:09 compute-0 nova_compute[247704]: 2026-01-31 08:10:09.382 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 311 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.9 MiB/s wr, 59 op/s
Jan 31 08:10:09 compute-0 nova_compute[247704]: 2026-01-31 08:10:09.765 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/544179835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:09 compute-0 ceph-mon[74496]: pgmap v2218: 305 pgs: 305 active+clean; 311 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 176 KiB/s rd, 2.9 MiB/s wr, 59 op/s
Jan 31 08:10:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:10.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:10.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 31 08:10:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 31 08:10:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 31 08:10:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:11.180 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:11.181 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:11.182 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:11 compute-0 ceph-mon[74496]: osdmap e282: 3 total, 3 up, 3 in
Jan 31 08:10:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 326 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.5 MiB/s wr, 42 op/s
Jan 31 08:10:11 compute-0 nova_compute[247704]: 2026-01-31 08:10:11.710 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:12.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:12 compute-0 ceph-mon[74496]: pgmap v2220: 305 pgs: 305 active+clean; 326 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.5 MiB/s wr, 42 op/s
Jan 31 08:10:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1128975270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 326 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.2 MiB/s wr, 45 op/s
Jan 31 08:10:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/14629453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:14 compute-0 sudo[324688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:14 compute-0 sudo[324688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-0 sudo[324688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:14.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:14 compute-0 sudo[324713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:14 compute-0 sudo[324713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-0 sudo[324713]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:14.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:14 compute-0 sudo[324738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:14 compute-0 sudo[324738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-0 sudo[324738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-0 sudo[324763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:10:14 compute-0 sudo[324763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:14 compute-0 nova_compute[247704]: 2026-01-31 08:10:14.384 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:10:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:10:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:10:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:10:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-0 sudo[324763]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:14 compute-0 nova_compute[247704]: 2026-01-31 08:10:14.767 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 08:10:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:10:14 compute-0 ceph-mon[74496]: pgmap v2221: 305 pgs: 305 active+clean; 326 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.2 MiB/s wr, 45 op/s
Jan 31 08:10:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 08:10:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:10:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:10:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:10:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c138500f-d3b6-45b7-9c99-fd5e3eef56ab does not exist
Jan 31 08:10:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 50296ad1-0a82-4d6d-988b-8e7715db997f does not exist
Jan 31 08:10:15 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev de538951-00b4-497d-af9b-e27ec42a0358 does not exist
Jan 31 08:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:10:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:10:15 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:10:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:10:15 compute-0 sudo[324818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:15 compute-0 sudo[324818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:15 compute-0 sudo[324818]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:15 compute-0 sudo[324844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:15 compute-0 sudo[324844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:15 compute-0 sudo[324844]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:15 compute-0 sudo[324869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:15 compute-0 sudo[324869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:15 compute-0 sudo[324869]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:15 compute-0 sudo[324894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:10:15 compute-0 sudo[324894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 326 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 81 op/s
Jan 31 08:10:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:10:15 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:10:15 compute-0 nova_compute[247704]: 2026-01-31 08:10:15.889 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:15 compute-0 podman[324959]: 2026-01-31 08:10:15.926215955 +0000 UTC m=+0.058585824 container create e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_liskov, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:10:15 compute-0 systemd[1]: Started libpod-conmon-e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992.scope.
Jan 31 08:10:15 compute-0 podman[324959]: 2026-01-31 08:10:15.898702503 +0000 UTC m=+0.031072472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:10:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:16 compute-0 podman[324959]: 2026-01-31 08:10:16.025641186 +0000 UTC m=+0.158011075 container init e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:10:16 compute-0 podman[324959]: 2026-01-31 08:10:16.036723917 +0000 UTC m=+0.169093786 container start e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:10:16 compute-0 podman[324959]: 2026-01-31 08:10:16.040882559 +0000 UTC m=+0.173252468 container attach e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:10:16 compute-0 competent_liskov[324975]: 167 167
Jan 31 08:10:16 compute-0 systemd[1]: libpod-e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992.scope: Deactivated successfully.
Jan 31 08:10:16 compute-0 podman[324959]: 2026-01-31 08:10:16.046379164 +0000 UTC m=+0.178749053 container died e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-75dcde9c7af173b19803f543b8cd18d930da666da5687c54c6008f3e6f92381d-merged.mount: Deactivated successfully.
Jan 31 08:10:16 compute-0 podman[324959]: 2026-01-31 08:10:16.093500206 +0000 UTC m=+0.225870075 container remove e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_liskov, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 08:10:16 compute-0 systemd[1]: libpod-conmon-e7c50386bf573a4425d6d10cb03612e0f8d6a19ed815b909a5fe0097c53c5992.scope: Deactivated successfully.
Jan 31 08:10:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:10:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:16.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:10:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:16.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:16 compute-0 podman[324999]: 2026-01-31 08:10:16.288602666 +0000 UTC m=+0.063809420 container create 4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:10:16 compute-0 systemd[1]: Started libpod-conmon-4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879.scope.
Jan 31 08:10:16 compute-0 podman[324999]: 2026-01-31 08:10:16.262500718 +0000 UTC m=+0.037707552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:10:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610201dc8b82dc1e8a0da32a734a0e426c630283105cae27c89f0a58fc8d7fdb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610201dc8b82dc1e8a0da32a734a0e426c630283105cae27c89f0a58fc8d7fdb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610201dc8b82dc1e8a0da32a734a0e426c630283105cae27c89f0a58fc8d7fdb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610201dc8b82dc1e8a0da32a734a0e426c630283105cae27c89f0a58fc8d7fdb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610201dc8b82dc1e8a0da32a734a0e426c630283105cae27c89f0a58fc8d7fdb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:16 compute-0 podman[324999]: 2026-01-31 08:10:16.403612329 +0000 UTC m=+0.178819093 container init 4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ride, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:10:16 compute-0 podman[324999]: 2026-01-31 08:10:16.41142844 +0000 UTC m=+0.186635214 container start 4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ride, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:10:16 compute-0 podman[324999]: 2026-01-31 08:10:16.415545791 +0000 UTC m=+0.190752555 container attach 4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:10:16 compute-0 ceph-mon[74496]: pgmap v2222: 305 pgs: 305 active+clean; 326 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 81 op/s
Jan 31 08:10:17 compute-0 modest_ride[325015]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:10:17 compute-0 modest_ride[325015]: --> relative data size: 1.0
Jan 31 08:10:17 compute-0 modest_ride[325015]: --> All data devices are unavailable
Jan 31 08:10:17 compute-0 systemd[1]: libpod-4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879.scope: Deactivated successfully.
Jan 31 08:10:17 compute-0 podman[324999]: 2026-01-31 08:10:17.197723546 +0000 UTC m=+0.972930320 container died 4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ride, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 08:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-610201dc8b82dc1e8a0da32a734a0e426c630283105cae27c89f0a58fc8d7fdb-merged.mount: Deactivated successfully.
Jan 31 08:10:17 compute-0 podman[324999]: 2026-01-31 08:10:17.261511107 +0000 UTC m=+1.036717851 container remove 4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ride, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:10:17 compute-0 systemd[1]: libpod-conmon-4f55537354db675bd38583270393ea9c5dbbd74a5e5fcb0be47ca90d2fd7c879.scope: Deactivated successfully.
Jan 31 08:10:17 compute-0 sudo[324894]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:17 compute-0 sudo[325041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:17 compute-0 sudo[325041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:17 compute-0 sudo[325041]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:17 compute-0 sudo[325066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:17 compute-0 sudo[325066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:17 compute-0 sudo[325066]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:17 compute-0 sudo[325091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:17 compute-0 sudo[325091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:17 compute-0 sudo[325091]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:17 compute-0 sudo[325117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:10:17 compute-0 sudo[325117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 306 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 156 op/s
Jan 31 08:10:17 compute-0 podman[325183]: 2026-01-31 08:10:17.855367288 +0000 UTC m=+0.032060195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:10:18 compute-0 podman[325183]: 2026-01-31 08:10:18.025865257 +0000 UTC m=+0.202558164 container create 2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:10:18 compute-0 ceph-mon[74496]: pgmap v2223: 305 pgs: 305 active+clean; 306 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 156 op/s
Jan 31 08:10:18 compute-0 systemd[1]: Started libpod-conmon-2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9.scope.
Jan 31 08:10:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:18 compute-0 podman[325183]: 2026-01-31 08:10:18.150533065 +0000 UTC m=+0.327225992 container init 2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 31 08:10:18 compute-0 podman[325183]: 2026-01-31 08:10:18.158680265 +0000 UTC m=+0.335373172 container start 2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 08:10:18 compute-0 podman[325183]: 2026-01-31 08:10:18.162521778 +0000 UTC m=+0.339214705 container attach 2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:10:18 compute-0 nostalgic_williamson[325214]: 167 167
Jan 31 08:10:18 compute-0 systemd[1]: libpod-2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9.scope: Deactivated successfully.
Jan 31 08:10:18 compute-0 podman[325183]: 2026-01-31 08:10:18.165187513 +0000 UTC m=+0.341880440 container died 2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:18 compute-0 podman[325199]: 2026-01-31 08:10:18.164747643 +0000 UTC m=+0.092748539 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc8ddf25be0e1a8ac79b8d5a49b4b7b8b26e883afe0d41e4a37d7e0dd514bc42-merged.mount: Deactivated successfully.
Jan 31 08:10:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:18.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:18 compute-0 podman[325183]: 2026-01-31 08:10:18.208108713 +0000 UTC m=+0.384801610 container remove 2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:10:18 compute-0 systemd[1]: libpod-conmon-2514a743d6582507e48e1782b54e9f96f579d3b5567a04bb21ec7a7463e0fda9.scope: Deactivated successfully.
Jan 31 08:10:18 compute-0 podman[325254]: 2026-01-31 08:10:18.349781037 +0000 UTC m=+0.051321835 container create e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhaskara, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:10:18 compute-0 systemd[1]: Started libpod-conmon-e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340.scope.
Jan 31 08:10:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abb2a24a99a1eb44a79777b09524730dcb03528bbaf99c0cc81d716f1d15e80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:18 compute-0 podman[325254]: 2026-01-31 08:10:18.322028128 +0000 UTC m=+0.023568996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abb2a24a99a1eb44a79777b09524730dcb03528bbaf99c0cc81d716f1d15e80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abb2a24a99a1eb44a79777b09524730dcb03528bbaf99c0cc81d716f1d15e80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abb2a24a99a1eb44a79777b09524730dcb03528bbaf99c0cc81d716f1d15e80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:18 compute-0 podman[325254]: 2026-01-31 08:10:18.436956508 +0000 UTC m=+0.138497346 container init e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:10:18 compute-0 podman[325254]: 2026-01-31 08:10:18.452753666 +0000 UTC m=+0.154294454 container start e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhaskara, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:10:18 compute-0 podman[325254]: 2026-01-31 08:10:18.45702795 +0000 UTC m=+0.158568748 container attach e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]: {
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:     "0": [
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:         {
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "devices": [
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "/dev/loop3"
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             ],
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "lv_name": "ceph_lv0",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "lv_size": "7511998464",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "name": "ceph_lv0",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "tags": {
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.cluster_name": "ceph",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.crush_device_class": "",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.encrypted": "0",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.osd_id": "0",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.type": "block",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:                 "ceph.vdo": "0"
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             },
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "type": "block",
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:             "vg_name": "ceph_vg0"
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:         }
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]:     ]
Jan 31 08:10:19 compute-0 elated_bhaskara[325271]: }
Jan 31 08:10:19 compute-0 systemd[1]: libpod-e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340.scope: Deactivated successfully.
Jan 31 08:10:19 compute-0 podman[325254]: 2026-01-31 08:10:19.276395855 +0000 UTC m=+0.977936693 container died e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhaskara, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:10:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1abb2a24a99a1eb44a79777b09524730dcb03528bbaf99c0cc81d716f1d15e80-merged.mount: Deactivated successfully.
Jan 31 08:10:19 compute-0 podman[325254]: 2026-01-31 08:10:19.353506441 +0000 UTC m=+1.055047199 container remove e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:10:19 compute-0 systemd[1]: libpod-conmon-e929876e3804755d3495296f30a19a2ac8ce7aeaa3f8c92fae1d539c5b2df340.scope: Deactivated successfully.
Jan 31 08:10:19 compute-0 nova_compute[247704]: 2026-01-31 08:10:19.386 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:19 compute-0 sudo[325117]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:19 compute-0 sudo[325293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:19 compute-0 sudo[325293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:19 compute-0 sudo[325293]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:19 compute-0 sudo[325319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:19 compute-0 sudo[325319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:19 compute-0 sudo[325319]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:19 compute-0 sudo[325344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:19 compute-0 sudo[325344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:19 compute-0 sudo[325344]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 270 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 226 KiB/s wr, 204 op/s
Jan 31 08:10:19 compute-0 sudo[325369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:10:19 compute-0 sudo[325369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:19 compute-0 nova_compute[247704]: 2026-01-31 08:10:19.769 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:19.806 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:10:19 compute-0 nova_compute[247704]: 2026-01-31 08:10:19.806 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:19.807 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:10:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:19.808 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:20 compute-0 podman[325435]: 2026-01-31 08:10:20.032281499 +0000 UTC m=+0.043735391 container create 90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:20 compute-0 systemd[1]: Started libpod-conmon-90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2.scope.
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:20 compute-0 podman[325435]: 2026-01-31 08:10:20.012673309 +0000 UTC m=+0.024127221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:10:20 compute-0 podman[325435]: 2026-01-31 08:10:20.120284461 +0000 UTC m=+0.131738433 container init 90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:10:20
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', '.mgr', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:10:20 compute-0 podman[325435]: 2026-01-31 08:10:20.128650065 +0000 UTC m=+0.140103937 container start 90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:10:20 compute-0 podman[325435]: 2026-01-31 08:10:20.131800262 +0000 UTC m=+0.143254164 container attach 90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:10:20 compute-0 kind_stonebraker[325451]: 167 167
Jan 31 08:10:20 compute-0 systemd[1]: libpod-90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2.scope: Deactivated successfully.
Jan 31 08:10:20 compute-0 podman[325435]: 2026-01-31 08:10:20.135433211 +0000 UTC m=+0.146887093 container died 90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:10:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:10:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:20.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d31fc74a6c45417a38c12e5510da1f6bb5a4d61d49b0425e0058fa575d5fb7c8-merged.mount: Deactivated successfully.
Jan 31 08:10:20 compute-0 podman[325435]: 2026-01-31 08:10:20.173745888 +0000 UTC m=+0.185199770 container remove 90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:20 compute-0 systemd[1]: libpod-conmon-90e8ae96373b77d904256026ab3ce2a77414088cba45117bcc0f94e5451c38b2.scope: Deactivated successfully.
Jan 31 08:10:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:20.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:20 compute-0 podman[325475]: 2026-01-31 08:10:20.319206065 +0000 UTC m=+0.050485636 container create 466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:20 compute-0 systemd[1]: Started libpod-conmon-466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa.scope.
Jan 31 08:10:20 compute-0 podman[325475]: 2026-01-31 08:10:20.301412869 +0000 UTC m=+0.032692430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:10:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a87eac843916b9606f3e077fd8d6b29a1b914fc8f8e5a17be0f4decc02ed24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a87eac843916b9606f3e077fd8d6b29a1b914fc8f8e5a17be0f4decc02ed24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a87eac843916b9606f3e077fd8d6b29a1b914fc8f8e5a17be0f4decc02ed24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16a87eac843916b9606f3e077fd8d6b29a1b914fc8f8e5a17be0f4decc02ed24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:20 compute-0 podman[325475]: 2026-01-31 08:10:20.427231766 +0000 UTC m=+0.158511347 container init 466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:10:20 compute-0 podman[325475]: 2026-01-31 08:10:20.433011427 +0000 UTC m=+0.164290968 container start 466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 08:10:20 compute-0 podman[325475]: 2026-01-31 08:10:20.437040376 +0000 UTC m=+0.168319967 container attach 466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:10:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:10:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:10:20 compute-0 ceph-mon[74496]: pgmap v2224: 305 pgs: 305 active+clean; 270 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 226 KiB/s wr, 204 op/s
Jan 31 08:10:20 compute-0 nova_compute[247704]: 2026-01-31 08:10:20.994 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]: {
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]:         "osd_id": 0,
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]:         "type": "bluestore"
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]:     }
Jan 31 08:10:21 compute-0 vigilant_aryabhata[325492]: }
Jan 31 08:10:21 compute-0 systemd[1]: libpod-466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa.scope: Deactivated successfully.
Jan 31 08:10:21 compute-0 podman[325475]: 2026-01-31 08:10:21.328527785 +0000 UTC m=+1.059807356 container died 466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-16a87eac843916b9606f3e077fd8d6b29a1b914fc8f8e5a17be0f4decc02ed24-merged.mount: Deactivated successfully.
Jan 31 08:10:21 compute-0 podman[325475]: 2026-01-31 08:10:21.384196246 +0000 UTC m=+1.115475787 container remove 466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:10:21 compute-0 systemd[1]: libpod-conmon-466a31e48ac8ecbd743755e5ea6b3fb261d978dd110bc5fcdf80f1710aaaccfa.scope: Deactivated successfully.
Jan 31 08:10:21 compute-0 sudo[325369]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:10:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:10:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 254c9b2a-130c-4f0d-b395-913f0435ab71 does not exist
Jan 31 08:10:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 59c890ba-7b40-463a-a8b6-c3af5075e64e does not exist
Jan 31 08:10:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e96fcc2c-2e48-47a6-81ee-f1cf490ec67f does not exist
Jan 31 08:10:21 compute-0 sudo[325525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:21 compute-0 sudo[325525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:21 compute-0 sudo[325525]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:21 compute-0 sudo[325551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:10:21 compute-0 sudo[325551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:21 compute-0 sudo[325551]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 206 KiB/s wr, 191 op/s
Jan 31 08:10:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:10:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:22.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:10:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:22.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:10:22 compute-0 ceph-mon[74496]: pgmap v2225: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 206 KiB/s wr, 191 op/s
Jan 31 08:10:22 compute-0 nova_compute[247704]: 2026-01-31 08:10:22.941 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:23 compute-0 nova_compute[247704]: 2026-01-31 08:10:23.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 175 op/s
Jan 31 08:10:23 compute-0 sudo[325578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:23 compute-0 sudo[325578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:23 compute-0 sudo[325578]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:23 compute-0 sudo[325603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:23 compute-0 sudo[325603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:23 compute-0 sudo[325603]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:24.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:24.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:24 compute-0 nova_compute[247704]: 2026-01-31 08:10:24.389 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:24 compute-0 nova_compute[247704]: 2026-01-31 08:10:24.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:24 compute-0 ceph-mon[74496]: pgmap v2226: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 175 op/s
Jan 31 08:10:24 compute-0 nova_compute[247704]: 2026-01-31 08:10:24.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.076 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "6e12a30c-8c62-4756-a0b8-c71359aaf303" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.077 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.139 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.253 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.254 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.263 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.263 247708 INFO nova.compute.claims [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.499 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.598 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.599 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 174 op/s
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.630 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.725 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:10:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854736004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.967 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:25 compute-0 nova_compute[247704]: 2026-01-31 08:10:25.971 247708 DEBUG nova.compute.provider_tree [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.070 247708 DEBUG nova.scheduler.client.report [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:10:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:26.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.243 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.244 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.246 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.252 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.252 247708 INFO nova.compute.claims [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.416 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.417 247708 DEBUG nova.network.neutron [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.531 247708 INFO nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.577 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.634 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.696 247708 DEBUG nova.policy [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6215f7b8636349358e68ad58782a5ed9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9d0c9f5a2b074ef290312d1045d1747f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.796 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:10:26 compute-0 ceph-mon[74496]: pgmap v2227: 305 pgs: 305 active+clean; 246 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 21 KiB/s wr, 174 op/s
Jan 31 08:10:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1854736004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.805 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.806 247708 INFO nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Creating image(s)
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.831 247708 DEBUG nova.storage.rbd_utils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] rbd image 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.860 247708 DEBUG nova.storage.rbd_utils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] rbd image 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.887 247708 DEBUG nova.storage.rbd_utils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] rbd image 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.891 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.951 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.953 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.954 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.955 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.991 247708 DEBUG nova.storage.rbd_utils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] rbd image 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:26 compute-0 nova_compute[247704]: 2026-01-31 08:10:26.996 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:10:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/498864749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.095 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.102 247708 DEBUG nova.compute.provider_tree [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.118 247708 DEBUG nova.scheduler.client.report [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.139 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.140 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.195 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.196 247708 DEBUG nova.network.neutron [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.216 247708 INFO nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.239 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.301 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.337 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.338 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.338 247708 INFO nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Creating image(s)
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.361 247708 DEBUG nova.storage.rbd_utils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image af8d85d9-c7a2-4709-a234-19511f3e4395_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.387 247708 DEBUG nova.storage.rbd_utils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image af8d85d9-c7a2-4709-a234-19511f3e4395_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.411 247708 DEBUG nova.storage.rbd_utils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image af8d85d9-c7a2-4709-a234-19511f3e4395_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.414 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.436 247708 DEBUG nova.policy [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '18aee9d81d404f77ac81cde538f140d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.482 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.483 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.484 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.484 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.516 247708 DEBUG nova.storage.rbd_utils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image af8d85d9-c7a2-4709-a234-19511f3e4395_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.528 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 af8d85d9-c7a2-4709-a234-19511f3e4395_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.557 247708 DEBUG nova.storage.rbd_utils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] resizing rbd image 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:10:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 256 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 502 KiB/s wr, 174 op/s
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.692 247708 DEBUG nova.objects.instance [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lazy-loading 'migration_context' on Instance uuid 6e12a30c-8c62-4756-a0b8-c71359aaf303 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.707 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.708 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Ensure instance console log exists: /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.708 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.709 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.709 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/498864749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.870 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 af8d85d9-c7a2-4709-a234-19511f3e4395_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:27 compute-0 nova_compute[247704]: 2026-01-31 08:10:27.941 247708 DEBUG nova.storage.rbd_utils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] resizing rbd image af8d85d9-c7a2-4709-a234-19511f3e4395_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:10:28 compute-0 nova_compute[247704]: 2026-01-31 08:10:28.077 247708 DEBUG nova.objects.instance [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'migration_context' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:28 compute-0 nova_compute[247704]: 2026-01-31 08:10:28.107 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:10:28 compute-0 nova_compute[247704]: 2026-01-31 08:10:28.107 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Ensure instance console log exists: /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:10:28 compute-0 nova_compute[247704]: 2026-01-31 08:10:28.108 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:28 compute-0 nova_compute[247704]: 2026-01-31 08:10:28.108 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:28 compute-0 nova_compute[247704]: 2026-01-31 08:10:28.109 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:28.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:28 compute-0 nova_compute[247704]: 2026-01-31 08:10:28.204 247708 DEBUG nova.network.neutron [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Successfully created port: bbcc9e40-fa0a-4002-902e-79173be941a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:10:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:28.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:28 compute-0 nova_compute[247704]: 2026-01-31 08:10:28.765 247708 DEBUG nova.network.neutron [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Successfully created port: 4fa5cff4-40cb-4379-bda2-213171730f4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:10:28 compute-0 ceph-mon[74496]: pgmap v2228: 305 pgs: 305 active+clean; 256 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 502 KiB/s wr, 174 op/s
Jan 31 08:10:29 compute-0 ovn_controller[149457]: 2026-01-31T08:10:29Z|00470|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.060 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:29 compute-0 ovn_controller[149457]: 2026-01-31T08:10:29Z|00471|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.154 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.323 247708 DEBUG nova.network.neutron [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Successfully updated port: bbcc9e40-fa0a-4002-902e-79173be941a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.338 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "refresh_cache-6e12a30c-8c62-4756-a0b8-c71359aaf303" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.338 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquired lock "refresh_cache-6e12a30c-8c62-4756-a0b8-c71359aaf303" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.338 247708 DEBUG nova.network.neutron [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.391 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.422 247708 DEBUG nova.compute.manager [req-ef2042b2-699d-43d2-8d28-c9687975710d req-96dff79b-5f9d-401c-96f0-067a69ae0a5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received event network-changed-bbcc9e40-fa0a-4002-902e-79173be941a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.422 247708 DEBUG nova.compute.manager [req-ef2042b2-699d-43d2-8d28-c9687975710d req-96dff79b-5f9d-401c-96f0-067a69ae0a5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Refreshing instance network info cache due to event network-changed-bbcc9e40-fa0a-4002-902e-79173be941a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.422 247708 DEBUG oslo_concurrency.lockutils [req-ef2042b2-699d-43d2-8d28-c9687975710d req-96dff79b-5f9d-401c-96f0-067a69ae0a5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6e12a30c-8c62-4756-a0b8-c71359aaf303" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.529 247708 DEBUG nova.network.neutron [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.598 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.598 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:10:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 290 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.774 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.823 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.824 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.824 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:10:29 compute-0 nova_compute[247704]: 2026-01-31 08:10:29.824 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53a5c321-1278-4df4-9fb0-feb465508681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:10:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:10:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:30.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.421 247708 DEBUG nova.network.neutron [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Successfully updated port: 4fa5cff4-40cb-4379-bda2-213171730f4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.442 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.443 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.443 247708 DEBUG nova.network.neutron [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:10:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.820 247708 DEBUG nova.network.neutron [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Updating instance_info_cache with network_info: [{"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.843 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Releasing lock "refresh_cache-6e12a30c-8c62-4756-a0b8-c71359aaf303" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.844 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Instance network_info: |[{"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.844 247708 DEBUG oslo_concurrency.lockutils [req-ef2042b2-699d-43d2-8d28-c9687975710d req-96dff79b-5f9d-401c-96f0-067a69ae0a5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6e12a30c-8c62-4756-a0b8-c71359aaf303" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.844 247708 DEBUG nova.network.neutron [req-ef2042b2-699d-43d2-8d28-c9687975710d req-96dff79b-5f9d-401c-96f0-067a69ae0a5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Refreshing network info cache for port bbcc9e40-fa0a-4002-902e-79173be941a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.847 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Start _get_guest_xml network_info=[{"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.851 247708 WARNING nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.855 247708 DEBUG nova.virt.libvirt.host [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.856 247708 DEBUG nova.virt.libvirt.host [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.863 247708 DEBUG nova.virt.libvirt.host [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.863 247708 DEBUG nova.virt.libvirt.host [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.864 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.865 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.865 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.866 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.866 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.866 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.866 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.867 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.867 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.867 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.868 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.868 247708 DEBUG nova.virt.hardware [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:10:30 compute-0 nova_compute[247704]: 2026-01-31 08:10:30.871 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:30 compute-0 ceph-mon[74496]: pgmap v2229: 305 pgs: 305 active+clean; 290 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Jan 31 08:10:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2174295167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2742948316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.286 247708 DEBUG nova.network.neutron [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845828736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.308 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.332 247708 DEBUG nova.storage.rbd_utils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] rbd image 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.335 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.374 247708 DEBUG nova.compute.manager [req-dfb8cc37-d6d2-403f-a86a-2ab631c87c4a req-2a681075-565a-4611-99cd-6800ba4cb97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.375 247708 DEBUG nova.compute.manager [req-dfb8cc37-d6d2-403f-a86a-2ab631c87c4a req-2a681075-565a-4611-99cd-6800ba4cb97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing instance network info cache due to event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.375 247708 DEBUG oslo_concurrency.lockutils [req-dfb8cc37-d6d2-403f-a86a-2ab631c87c4a req-2a681075-565a-4611-99cd-6800ba4cb97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 323 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 2.9 MiB/s wr, 106 op/s
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.705 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updating instance_info_cache with network_info: [{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.730 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.730 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.730 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.731 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.731 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.731 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.731 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1891158710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.757 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.758 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.759 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.759 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.759 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.787 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.789 247708 DEBUG nova.virt.libvirt.vif [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-423103778',display_name='tempest-ServerMetadataTestJSON-server-423103778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-423103778',id=118,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d0c9f5a2b074ef290312d1045d1747f',ramdisk_id='',reservation_id='r-wycemqq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-2083347598',owner_user_name='tempest-ServerMetadataTestJSON-2083347598-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:10:26Z,user_data=None,user_id='6215f7b8636349358e68ad58782a5ed9',uuid=6e12a30c-8c62-4756-a0b8-c71359aaf303,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.789 247708 DEBUG nova.network.os_vif_util [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Converting VIF {"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.790 247708 DEBUG nova.network.os_vif_util [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:67:27,bridge_name='br-int',has_traffic_filtering=True,id=bbcc9e40-fa0a-4002-902e-79173be941a9,network=Network(3d8fa5a4-f572-4b54-8e63-83d1d2d3f747),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcc9e40-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.791 247708 DEBUG nova.objects.instance [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e12a30c-8c62-4756-a0b8-c71359aaf303 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.813 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <uuid>6e12a30c-8c62-4756-a0b8-c71359aaf303</uuid>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <name>instance-00000076</name>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerMetadataTestJSON-server-423103778</nova:name>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:10:30</nova:creationTime>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <nova:user uuid="6215f7b8636349358e68ad58782a5ed9">tempest-ServerMetadataTestJSON-2083347598-project-member</nova:user>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <nova:project uuid="9d0c9f5a2b074ef290312d1045d1747f">tempest-ServerMetadataTestJSON-2083347598</nova:project>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <nova:port uuid="bbcc9e40-fa0a-4002-902e-79173be941a9">
Jan 31 08:10:31 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <system>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <entry name="serial">6e12a30c-8c62-4756-a0b8-c71359aaf303</entry>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <entry name="uuid">6e12a30c-8c62-4756-a0b8-c71359aaf303</entry>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </system>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <os>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   </os>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <features>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   </features>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6e12a30c-8c62-4756-a0b8-c71359aaf303_disk">
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       </source>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6e12a30c-8c62-4756-a0b8-c71359aaf303_disk.config">
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       </source>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:10:31 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:bd:67:27"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <target dev="tapbbcc9e40-fa"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303/console.log" append="off"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <video>
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </video>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:10:31 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:10:31 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:10:31 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:10:31 compute-0 nova_compute[247704]: </domain>
Jan 31 08:10:31 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.814 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Preparing to wait for external event network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.815 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.815 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.815 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.816 247708 DEBUG nova.virt.libvirt.vif [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-423103778',display_name='tempest-ServerMetadataTestJSON-server-423103778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-423103778',id=118,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9d0c9f5a2b074ef290312d1045d1747f',ramdisk_id='',reservation_id='r-wycemqq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-2083347598',owner_user_name='tempest-ServerMetadataTestJSON-2083347598-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:10:26Z,user_data=None,user_id='6215f7b8636349358e68ad58782a5ed9',uuid=6e12a30c-8c62-4756-a0b8-c71359aaf303,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.816 247708 DEBUG nova.network.os_vif_util [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Converting VIF {"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.817 247708 DEBUG nova.network.os_vif_util [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:67:27,bridge_name='br-int',has_traffic_filtering=True,id=bbcc9e40-fa0a-4002-902e-79173be941a9,network=Network(3d8fa5a4-f572-4b54-8e63-83d1d2d3f747),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcc9e40-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.817 247708 DEBUG os_vif [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:67:27,bridge_name='br-int',has_traffic_filtering=True,id=bbcc9e40-fa0a-4002-902e-79173be941a9,network=Network(3d8fa5a4-f572-4b54-8e63-83d1d2d3f747),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcc9e40-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.818 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.818 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.822 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbbcc9e40-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.822 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbbcc9e40-fa, col_values=(('external_ids', {'iface-id': 'bbcc9e40-fa0a-4002-902e-79173be941a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:67:27', 'vm-uuid': '6e12a30c-8c62-4756-a0b8-c71359aaf303'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.824 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:31 compute-0 NetworkManager[49108]: <info>  [1769847031.8251] manager: (tapbbcc9e40-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.826 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.833 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.834 247708 INFO os_vif [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:67:27,bridge_name='br-int',has_traffic_filtering=True,id=bbcc9e40-fa0a-4002-902e-79173be941a9,network=Network(3d8fa5a4-f572-4b54-8e63-83d1d2d3f747),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcc9e40-fa')
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.891 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.892 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.892 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] No VIF found with MAC fa:16:3e:bd:67:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.893 247708 INFO nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Using config drive
Jan 31 08:10:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2845828736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1891158710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:31 compute-0 nova_compute[247704]: 2026-01-31 08:10:31.924 247708 DEBUG nova.storage.rbd_utils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] rbd image 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:32.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:10:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707992801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.187 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:32.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.243 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.243 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.249 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.249 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.404 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.405 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4155MB free_disk=20.842449188232422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.405 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.405 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.417 247708 INFO nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Creating config drive at /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303/disk.config
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.420 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmvqdeslm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.545 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmvqdeslm" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.569 247708 DEBUG nova.storage.rbd_utils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] rbd image 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.572 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303/disk.config 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.591 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 53a5c321-1278-4df4-9fb0-feb465508681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.592 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 6e12a30c-8c62-4756-a0b8-c71359aaf303 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.592 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance af8d85d9-c7a2-4709-a234-19511f3e4395 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.592 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.592 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.596 247708 DEBUG nova.network.neutron [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.603 247708 DEBUG nova.network.neutron [req-ef2042b2-699d-43d2-8d28-c9687975710d req-96dff79b-5f9d-401c-96f0-067a69ae0a5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Updated VIF entry in instance network info cache for port bbcc9e40-fa0a-4002-902e-79173be941a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.604 247708 DEBUG nova.network.neutron [req-ef2042b2-699d-43d2-8d28-c9687975710d req-96dff79b-5f9d-401c-96f0-067a69ae0a5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Updating instance_info_cache with network_info: [{"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.625 247708 DEBUG oslo_concurrency.lockutils [req-ef2042b2-699d-43d2-8d28-c9687975710d req-96dff79b-5f9d-401c-96f0-067a69ae0a5f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6e12a30c-8c62-4756-a0b8-c71359aaf303" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.626 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.626 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance network_info: |[{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.627 247708 DEBUG oslo_concurrency.lockutils [req-dfb8cc37-d6d2-403f-a86a-2ab631c87c4a req-2a681075-565a-4611-99cd-6800ba4cb97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.627 247708 DEBUG nova.network.neutron [req-dfb8cc37-d6d2-403f-a86a-2ab631c87c4a req-2a681075-565a-4611-99cd-6800ba4cb97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.632 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Start _get_guest_xml network_info=[{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.638 247708 WARNING nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.642 247708 DEBUG nova.virt.libvirt.host [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.642 247708 DEBUG nova.virt.libvirt.host [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.644 247708 DEBUG nova.virt.libvirt.host [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.644 247708 DEBUG nova.virt.libvirt.host [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.645 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.645 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.646 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.646 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.646 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.646 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.646 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.646 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.647 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.647 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.647 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.647 247708 DEBUG nova.virt.hardware [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.649 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.690 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.714 247708 DEBUG oslo_concurrency.processutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303/disk.config 6e12a30c-8c62-4756-a0b8-c71359aaf303_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.714 247708 INFO nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Deleting local config drive /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303/disk.config because it was imported into RBD.
Jan 31 08:10:32 compute-0 kernel: tapbbcc9e40-fa: entered promiscuous mode
Jan 31 08:10:32 compute-0 ovn_controller[149457]: 2026-01-31T08:10:32Z|00472|binding|INFO|Claiming lport bbcc9e40-fa0a-4002-902e-79173be941a9 for this chassis.
Jan 31 08:10:32 compute-0 ovn_controller[149457]: 2026-01-31T08:10:32Z|00473|binding|INFO|bbcc9e40-fa0a-4002-902e-79173be941a9: Claiming fa:16:3e:bd:67:27 10.100.0.9
Jan 31 08:10:32 compute-0 NetworkManager[49108]: <info>  [1769847032.7497] manager: (tapbbcc9e40-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.763 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:67:27 10.100.0.9'], port_security=['fa:16:3e:bd:67:27 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6e12a30c-8c62-4756-a0b8-c71359aaf303', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d0c9f5a2b074ef290312d1045d1747f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8af6ab65-dff6-4231-b8dc-678acb0e6bd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6518b6f6-ebc3-4c6f-860c-5b13e28ce158, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=bbcc9e40-fa0a-4002-902e-79173be941a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.764 160028 INFO neutron.agent.ovn.metadata.agent [-] Port bbcc9e40-fa0a-4002-902e-79173be941a9 in datapath 3d8fa5a4-f572-4b54-8e63-83d1d2d3f747 bound to our chassis
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.765 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d8fa5a4-f572-4b54-8e63-83d1d2d3f747
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.776 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[732cd74d-461b-430e-9e02-2aaa847d5abe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.777 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d8fa5a4-f1 in ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.779 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d8fa5a4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.779 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[05b545dc-07ee-4d83-898d-2b40b3c5ff87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.781 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6219a94c-9dc5-476c-aed7-e7e0f990e2f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.783 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:32 compute-0 systemd-udevd[326194]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:10:32 compute-0 ovn_controller[149457]: 2026-01-31T08:10:32Z|00474|binding|INFO|Setting lport bbcc9e40-fa0a-4002-902e-79173be941a9 ovn-installed in OVS
Jan 31 08:10:32 compute-0 ovn_controller[149457]: 2026-01-31T08:10:32Z|00475|binding|INFO|Setting lport bbcc9e40-fa0a-4002-902e-79173be941a9 up in Southbound
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.791 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef2255f-704a-4c71-9661-39462fec85a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 nova_compute[247704]: 2026-01-31 08:10:32.793 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:32 compute-0 systemd-machined[214448]: New machine qemu-48-instance-00000076.
Jan 31 08:10:32 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-00000076.
Jan 31 08:10:32 compute-0 NetworkManager[49108]: <info>  [1769847032.8105] device (tapbbcc9e40-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:10:32 compute-0 NetworkManager[49108]: <info>  [1769847032.8115] device (tapbbcc9e40-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.817 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1afc7920-12d7-476b-8926-25bfd2aef013]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.844 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc500c8-4acd-4126-b144-219397913ec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.847 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9faae0d8-4358-4246-b92a-990ae00bf85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 NetworkManager[49108]: <info>  [1769847032.8502] manager: (tap3d8fa5a4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Jan 31 08:10:32 compute-0 podman[326165]: 2026-01-31 08:10:32.87033831 +0000 UTC m=+0.095823353 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.886 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d9edf779-3ee2-4857-b5e2-394fbd06c3ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.888 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3817df06-e4b7-4cba-9f3b-0fa6951172fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 NetworkManager[49108]: <info>  [1769847032.9035] device (tap3d8fa5a4-f0): carrier: link connected
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.905 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[eed02cdd-57f4-4157-ba58-539a8c8dc857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.916 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[17e47b08-1d8d-4f0b-96c0-cb32d5024888]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d8fa5a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:ed:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727552, 'reachable_time': 25917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326259, 'error': None, 'target': 'ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.924 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe9158c-b47e-4d77-9470-d2ed3da60d56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:ed9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 727552, 'tstamp': 727552}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326260, 'error': None, 'target': 'ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ceph-mon[74496]: pgmap v2230: 305 pgs: 305 active+clean; 323 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 2.9 MiB/s wr, 106 op/s
Jan 31 08:10:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/20760702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1707992801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/398891581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.934 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5c4c7f-a887-4da0-95fc-6215f135c77f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d8fa5a4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:ed:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727552, 'reachable_time': 25917, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326261, 'error': None, 'target': 'ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:32.958 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[79aa8449-03a7-4aa5-8f00-993cf920da2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.008 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6a623f51-ce68-4c63-9bcf-6e92b8eb409a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.010 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d8fa5a4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.010 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.010 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d8fa5a4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:33 compute-0 kernel: tap3d8fa5a4-f0: entered promiscuous mode
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.037 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:33 compute-0 NetworkManager[49108]: <info>  [1769847033.0383] manager: (tap3d8fa5a4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.043 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d8fa5a4-f0, col_values=(('external_ids', {'iface-id': 'd9eaeb9a-af91-47d8-9f2c-05f28294115b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:33 compute-0 ovn_controller[149457]: 2026-01-31T08:10:33Z|00476|binding|INFO|Releasing lport d9eaeb9a-af91-47d8-9f2c-05f28294115b from this chassis (sb_readonly=0)
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.045 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.047 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d8fa5a4-f572-4b54-8e63-83d1d2d3f747.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d8fa5a4-f572-4b54-8e63-83d1d2d3f747.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.048 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff9a192-3d73-4a88-987b-2f20b2b3c328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.048 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/3d8fa5a4-f572-4b54-8e63-83d1d2d3f747.pid.haproxy
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 3d8fa5a4-f572-4b54-8e63-83d1d2d3f747
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:10:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:33.049 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747', 'env', 'PROCESS_TAG=haproxy-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d8fa5a4-f572-4b54-8e63-83d1d2d3f747.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:10:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428002861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.131 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:10:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691849931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.163 247708 DEBUG nova.storage.rbd_utils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image af8d85d9-c7a2-4709-a234-19511f3e4395_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.169 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.196 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.205 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.243 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.269 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.270 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.385 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847033.384866, 6e12a30c-8c62-4756-a0b8-c71359aaf303 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.386 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] VM Started (Lifecycle Event)
Jan 31 08:10:33 compute-0 podman[326374]: 2026-01-31 08:10:33.391246568 +0000 UTC m=+0.056993305 container create 8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.432 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.436 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847033.3857024, 6e12a30c-8c62-4756-a0b8-c71359aaf303 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.436 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] VM Paused (Lifecycle Event)
Jan 31 08:10:33 compute-0 systemd[1]: Started libpod-conmon-8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24.scope.
Jan 31 08:10:33 compute-0 podman[326374]: 2026-01-31 08:10:33.35331979 +0000 UTC m=+0.019066557 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:10:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/097c0dbb628d16f2886716a29a04ac5887357d1f1a779f550f3c7f6c2879374a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:33 compute-0 podman[326374]: 2026-01-31 08:10:33.47883286 +0000 UTC m=+0.144579647 container init 8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.482 247708 DEBUG nova.compute.manager [req-9cbe9fb5-d17d-4783-a9be-d6c5c731eaa1 req-2e05a70f-3edf-49c2-b63b-03dc164d8be4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received event network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.482 247708 DEBUG oslo_concurrency.lockutils [req-9cbe9fb5-d17d-4783-a9be-d6c5c731eaa1 req-2e05a70f-3edf-49c2-b63b-03dc164d8be4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.482 247708 DEBUG oslo_concurrency.lockutils [req-9cbe9fb5-d17d-4783-a9be-d6c5c731eaa1 req-2e05a70f-3edf-49c2-b63b-03dc164d8be4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.482 247708 DEBUG oslo_concurrency.lockutils [req-9cbe9fb5-d17d-4783-a9be-d6c5c731eaa1 req-2e05a70f-3edf-49c2-b63b-03dc164d8be4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.483 247708 DEBUG nova.compute.manager [req-9cbe9fb5-d17d-4783-a9be-d6c5c731eaa1 req-2e05a70f-3edf-49c2-b63b-03dc164d8be4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Processing event network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.483 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:10:33 compute-0 podman[326374]: 2026-01-31 08:10:33.485989605 +0000 UTC m=+0.151736352 container start 8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.501 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.507 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:33 compute-0 neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747[326390]: [NOTICE]   (326395) : New worker (326397) forked
Jan 31 08:10:33 compute-0 neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747[326390]: [NOTICE]   (326395) : Loading success.
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.511 247708 INFO nova.virt.libvirt.driver [-] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Instance spawned successfully.
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.511 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.513 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847033.486994, 6e12a30c-8c62-4756-a0b8-c71359aaf303 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.514 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] VM Resumed (Lifecycle Event)
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.545 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.551 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.551 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.552 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.553 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.553 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.554 247708 DEBUG nova.virt.libvirt.driver [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.562 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:10:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:10:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134985353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.598 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.599 247708 DEBUG nova.virt.libvirt.vif [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:10:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.600 247708 DEBUG nova.network.os_vif_util [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.600 247708 DEBUG nova.network.os_vif_util [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.601 247708 DEBUG nova.objects.instance [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'pci_devices' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.603 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:10:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 339 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.621 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <uuid>af8d85d9-c7a2-4709-a234-19511f3e4395</uuid>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <name>instance-00000077</name>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerActionsTestOtherB-server-2134235451</nova:name>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:10:32</nova:creationTime>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <nova:port uuid="4fa5cff4-40cb-4379-bda2-213171730f4f">
Jan 31 08:10:33 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <system>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <entry name="serial">af8d85d9-c7a2-4709-a234-19511f3e4395</entry>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <entry name="uuid">af8d85d9-c7a2-4709-a234-19511f3e4395</entry>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </system>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <os>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   </os>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <features>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   </features>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/af8d85d9-c7a2-4709-a234-19511f3e4395_disk">
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       </source>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/af8d85d9-c7a2-4709-a234-19511f3e4395_disk.config">
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       </source>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:10:33 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:af:7b:39"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <target dev="tap4fa5cff4-40"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/console.log" append="off"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <video>
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </video>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:10:33 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:10:33 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:10:33 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:10:33 compute-0 nova_compute[247704]: </domain>
Jan 31 08:10:33 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.628 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Preparing to wait for external event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.629 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.629 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.629 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.630 247708 DEBUG nova.virt.libvirt.vif [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:10:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.631 247708 DEBUG nova.network.os_vif_util [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.632 247708 DEBUG nova.network.os_vif_util [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.632 247708 DEBUG os_vif [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.634 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.634 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.635 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.636 247708 INFO nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Took 6.84 seconds to spawn the instance on the hypervisor.
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.637 247708 DEBUG nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.640 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.641 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fa5cff4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.642 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fa5cff4-40, col_values=(('external_ids', {'iface-id': '4fa5cff4-40cb-4379-bda2-213171730f4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:7b:39', 'vm-uuid': 'af8d85d9-c7a2-4709-a234-19511f3e4395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:33 compute-0 NetworkManager[49108]: <info>  [1769847033.6443] manager: (tap4fa5cff4-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.646 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.650 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.651 247708 INFO os_vif [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40')
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.729 247708 INFO nova.compute.manager [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Took 8.52 seconds to build instance.
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.735 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.736 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.736 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:af:7b:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.737 247708 INFO nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Using config drive
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.776 247708 DEBUG nova.storage.rbd_utils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image af8d85d9-c7a2-4709-a234-19511f3e4395_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:33 compute-0 nova_compute[247704]: 2026-01-31 08:10:33.800 247708 DEBUG oslo_concurrency.lockutils [None req-bf3649c8-32b5-452f-9f87-737d8d13b59a 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/428002861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2691849931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3134985353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:10:33 compute-0 ceph-mon[74496]: pgmap v2231: 305 pgs: 305 active+clean; 339 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 645 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Jan 31 08:10:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:10:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:34.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:10:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:34.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.308 247708 INFO nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Creating config drive at /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/disk.config
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.315 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfnqglotq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.393 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.421 247708 DEBUG nova.network.neutron [req-dfb8cc37-d6d2-403f-a86a-2ab631c87c4a req-2a681075-565a-4611-99cd-6800ba4cb97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updated VIF entry in instance network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.422 247708 DEBUG nova.network.neutron [req-dfb8cc37-d6d2-403f-a86a-2ab631c87c4a req-2a681075-565a-4611-99cd-6800ba4cb97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.442 247708 DEBUG oslo_concurrency.lockutils [req-dfb8cc37-d6d2-403f-a86a-2ab631c87c4a req-2a681075-565a-4611-99cd-6800ba4cb97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.451 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfnqglotq" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.482 247708 DEBUG nova.storage.rbd_utils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rbd image af8d85d9-c7a2-4709-a234-19511f3e4395_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.486 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/disk.config af8d85d9-c7a2-4709-a234-19511f3e4395_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.749 247708 DEBUG oslo_concurrency.processutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/disk.config af8d85d9-c7a2-4709-a234-19511f3e4395_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.751 247708 INFO nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Deleting local config drive /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/disk.config because it was imported into RBD.
Jan 31 08:10:34 compute-0 kernel: tap4fa5cff4-40: entered promiscuous mode
Jan 31 08:10:34 compute-0 NetworkManager[49108]: <info>  [1769847034.8037] manager: (tap4fa5cff4-40): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Jan 31 08:10:34 compute-0 systemd-udevd[326240]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.805 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:34 compute-0 ovn_controller[149457]: 2026-01-31T08:10:34Z|00477|binding|INFO|Claiming lport 4fa5cff4-40cb-4379-bda2-213171730f4f for this chassis.
Jan 31 08:10:34 compute-0 ovn_controller[149457]: 2026-01-31T08:10:34Z|00478|binding|INFO|4fa5cff4-40cb-4379-bda2-213171730f4f: Claiming fa:16:3e:af:7b:39 10.100.0.6
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.812 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:34 compute-0 ovn_controller[149457]: 2026-01-31T08:10:34Z|00479|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f up in Southbound
Jan 31 08:10:34 compute-0 ovn_controller[149457]: 2026-01-31T08:10:34Z|00480|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f ovn-installed in OVS
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.816 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:7b:39 10.100.0.6'], port_security=['fa:16:3e:af:7b:39 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af8d85d9-c7a2-4709-a234-19511f3e4395', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4fa5cff4-40cb-4379-bda2-213171730f4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.816 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.818 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4fa5cff4-40cb-4379-bda2-213171730f4f in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis
Jan 31 08:10:34 compute-0 NetworkManager[49108]: <info>  [1769847034.8194] device (tap4fa5cff4-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:10:34 compute-0 NetworkManager[49108]: <info>  [1769847034.8202] device (tap4fa5cff4-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.820 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.826 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.831 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cebdc90e-3de9-48c4-9f64-09bff477794b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:34 compute-0 systemd-machined[214448]: New machine qemu-49-instance-00000077.
Jan 31 08:10:34 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-00000077.
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.865 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[144bb091-537c-4fc6-b997-65150612c2f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.869 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6b3600-d7d3-4981-b8d0-4bf1ecc657bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.889 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1f686419-e561-4b55-82e4-6184610c04ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.905 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ed478223-ccf1-4964-b384-c83d3995cc9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719151, 'reachable_time': 38392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326492, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.918 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0b18d509-ed1e-4414-8d89-4839e8cbca4e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719161, 'tstamp': 719161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326494, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719163, 'tstamp': 719163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326494, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.920 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:34 compute-0 nova_compute[247704]: 2026-01-31 08:10:34.922 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.923 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.923 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.923 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:34.924 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.228 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847035.2283068, af8d85d9-c7a2-4709-a234-19511f3e4395 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.229 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Started (Lifecycle Event)
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.251 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.255 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847035.2284586, af8d85d9-c7a2-4709-a234-19511f3e4395 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.255 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Paused (Lifecycle Event)
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.284 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.287 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.320 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007311950376884422 of space, bias 1.0, pg target 2.1935851130653266 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.549 247708 DEBUG nova.compute.manager [req-77c876d5-1a68-4253-a88f-7ebfe0df118c req-ce47ffbf-3282-4151-a1f4-a11ee75dafe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.550 247708 DEBUG oslo_concurrency.lockutils [req-77c876d5-1a68-4253-a88f-7ebfe0df118c req-ce47ffbf-3282-4151-a1f4-a11ee75dafe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.550 247708 DEBUG oslo_concurrency.lockutils [req-77c876d5-1a68-4253-a88f-7ebfe0df118c req-ce47ffbf-3282-4151-a1f4-a11ee75dafe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.551 247708 DEBUG oslo_concurrency.lockutils [req-77c876d5-1a68-4253-a88f-7ebfe0df118c req-ce47ffbf-3282-4151-a1f4-a11ee75dafe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.551 247708 DEBUG nova.compute.manager [req-77c876d5-1a68-4253-a88f-7ebfe0df118c req-ce47ffbf-3282-4151-a1f4-a11ee75dafe8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Processing event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.552 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.558 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847035.557974, af8d85d9-c7a2-4709-a234-19511f3e4395 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.558 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Resumed (Lifecycle Event)
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.560 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.563 247708 INFO nova.virt.libvirt.driver [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance spawned successfully.
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.564 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:10:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.580 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.594 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.603 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.603 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.604 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.605 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.606 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.606 247708 DEBUG nova.virt.libvirt.driver [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.612 247708 DEBUG nova.compute.manager [req-bd6020e3-a4c2-4a23-881f-8d364e8d2c7d req-6870c7d7-94a6-4aa6-bbcf-2ceeb5884efd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received event network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.612 247708 DEBUG oslo_concurrency.lockutils [req-bd6020e3-a4c2-4a23-881f-8d364e8d2c7d req-6870c7d7-94a6-4aa6-bbcf-2ceeb5884efd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.613 247708 DEBUG oslo_concurrency.lockutils [req-bd6020e3-a4c2-4a23-881f-8d364e8d2c7d req-6870c7d7-94a6-4aa6-bbcf-2ceeb5884efd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.613 247708 DEBUG oslo_concurrency.lockutils [req-bd6020e3-a4c2-4a23-881f-8d364e8d2c7d req-6870c7d7-94a6-4aa6-bbcf-2ceeb5884efd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.614 247708 DEBUG nova.compute.manager [req-bd6020e3-a4c2-4a23-881f-8d364e8d2c7d req-6870c7d7-94a6-4aa6-bbcf-2ceeb5884efd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] No waiting events found dispatching network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.614 247708 WARNING nova.compute.manager [req-bd6020e3-a4c2-4a23-881f-8d364e8d2c7d req-6870c7d7-94a6-4aa6-bbcf-2ceeb5884efd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received unexpected event network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 for instance with vm_state active and task_state None.
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.617 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:10:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 651 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.670 247708 INFO nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Took 8.33 seconds to spawn the instance on the hypervisor.
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.670 247708 DEBUG nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.734 247708 INFO nova.compute.manager [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Took 10.04 seconds to build instance.
Jan 31 08:10:35 compute-0 nova_compute[247704]: 2026-01-31 08:10:35.769 247708 DEBUG oslo_concurrency.lockutils [None req-b7419052-13f0-4424-9140-b7822abba85a 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:36.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:10:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:36.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:10:36 compute-0 nova_compute[247704]: 2026-01-31 08:10:36.264 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:37 compute-0 ceph-mon[74496]: pgmap v2232: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 651 KiB/s rd, 3.6 MiB/s wr, 122 op/s
Jan 31 08:10:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 185 op/s
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.003 247708 DEBUG nova.compute.manager [req-881862de-96af-42e5-b1f8-4c7013662777 req-e6f7f3a4-4134-42ea-b627-f79662328b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.003 247708 DEBUG oslo_concurrency.lockutils [req-881862de-96af-42e5-b1f8-4c7013662777 req-e6f7f3a4-4134-42ea-b627-f79662328b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.003 247708 DEBUG oslo_concurrency.lockutils [req-881862de-96af-42e5-b1f8-4c7013662777 req-e6f7f3a4-4134-42ea-b627-f79662328b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.004 247708 DEBUG oslo_concurrency.lockutils [req-881862de-96af-42e5-b1f8-4c7013662777 req-e6f7f3a4-4134-42ea-b627-f79662328b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.004 247708 DEBUG nova.compute.manager [req-881862de-96af-42e5-b1f8-4c7013662777 req-e6f7f3a4-4134-42ea-b627-f79662328b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.004 247708 WARNING nova.compute.manager [req-881862de-96af-42e5-b1f8-4c7013662777 req-e6f7f3a4-4134-42ea-b627-f79662328b5d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state active and task_state None.
Jan 31 08:10:38 compute-0 ceph-mon[74496]: pgmap v2233: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 185 op/s
Jan 31 08:10:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:38.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.637 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "6e12a30c-8c62-4756-a0b8-c71359aaf303" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.637 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.638 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.638 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.638 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.640 247708 INFO nova.compute.manager [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Terminating instance
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.641 247708 DEBUG nova.compute.manager [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.682 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:38 compute-0 kernel: tapbbcc9e40-fa (unregistering): left promiscuous mode
Jan 31 08:10:38 compute-0 NetworkManager[49108]: <info>  [1769847038.7353] device (tapbbcc9e40-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.740 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:38 compute-0 ovn_controller[149457]: 2026-01-31T08:10:38Z|00481|binding|INFO|Releasing lport bbcc9e40-fa0a-4002-902e-79173be941a9 from this chassis (sb_readonly=0)
Jan 31 08:10:38 compute-0 ovn_controller[149457]: 2026-01-31T08:10:38Z|00482|binding|INFO|Setting lport bbcc9e40-fa0a-4002-902e-79173be941a9 down in Southbound
Jan 31 08:10:38 compute-0 ovn_controller[149457]: 2026-01-31T08:10:38Z|00483|binding|INFO|Removing iface tapbbcc9e40-fa ovn-installed in OVS
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:38.752 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:67:27 10.100.0.9'], port_security=['fa:16:3e:bd:67:27 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '6e12a30c-8c62-4756-a0b8-c71359aaf303', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9d0c9f5a2b074ef290312d1045d1747f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8af6ab65-dff6-4231-b8dc-678acb0e6bd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6518b6f6-ebc3-4c6f-860c-5b13e28ce158, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=bbcc9e40-fa0a-4002-902e-79173be941a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:10:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:38.754 160028 INFO neutron.agent.ovn.metadata.agent [-] Port bbcc9e40-fa0a-4002-902e-79173be941a9 in datapath 3d8fa5a4-f572-4b54-8e63-83d1d2d3f747 unbound from our chassis
Jan 31 08:10:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:38.756 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d8fa5a4-f572-4b54-8e63-83d1d2d3f747, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:10:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:38.757 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[77bc83d6-e72f-4bda-8782-c583a773915f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:38.757 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747 namespace which is not needed anymore
Jan 31 08:10:38 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000076.scope: Deactivated successfully.
Jan 31 08:10:38 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000076.scope: Consumed 5.876s CPU time.
Jan 31 08:10:38 compute-0 systemd-machined[214448]: Machine qemu-48-instance-00000076 terminated.
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.862 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.865 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.876 247708 INFO nova.virt.libvirt.driver [-] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Instance destroyed successfully.
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.876 247708 DEBUG nova.objects.instance [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lazy-loading 'resources' on Instance uuid 6e12a30c-8c62-4756-a0b8-c71359aaf303 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.896 247708 DEBUG nova.virt.libvirt.vif [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-423103778',display_name='tempest-ServerMetadataTestJSON-server-423103778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-423103778',id=118,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:10:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9d0c9f5a2b074ef290312d1045d1747f',ramdisk_id='',reservation_id='r-wycemqq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-2083347598',owner_user_name='tempest-ServerMetadataTestJSON-2083347598-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:10:38Z,user_data=None,user_id='6215f7b8636349358e68ad58782a5ed9',uuid=6e12a30c-8c62-4756-a0b8-c71359aaf303,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:10:38 compute-0 neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747[326390]: [NOTICE]   (326395) : haproxy version is 2.8.14-c23fe91
Jan 31 08:10:38 compute-0 neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747[326390]: [NOTICE]   (326395) : path to executable is /usr/sbin/haproxy
Jan 31 08:10:38 compute-0 neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747[326390]: [WARNING]  (326395) : Exiting Master process...
Jan 31 08:10:38 compute-0 neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747[326390]: [WARNING]  (326395) : Exiting Master process...
Jan 31 08:10:38 compute-0 neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747[326390]: [ALERT]    (326395) : Current worker (326397) exited with code 143 (Terminated)
Jan 31 08:10:38 compute-0 neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747[326390]: [WARNING]  (326395) : All workers exited. Exiting... (0)
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.897 247708 DEBUG nova.network.os_vif_util [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Converting VIF {"id": "bbcc9e40-fa0a-4002-902e-79173be941a9", "address": "fa:16:3e:bd:67:27", "network": {"id": "3d8fa5a4-f572-4b54-8e63-83d1d2d3f747", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1606649686-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9d0c9f5a2b074ef290312d1045d1747f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbbcc9e40-fa", "ovs_interfaceid": "bbcc9e40-fa0a-4002-902e-79173be941a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:10:38 compute-0 systemd[1]: libpod-8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24.scope: Deactivated successfully.
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.903 247708 DEBUG nova.network.os_vif_util [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:67:27,bridge_name='br-int',has_traffic_filtering=True,id=bbcc9e40-fa0a-4002-902e-79173be941a9,network=Network(3d8fa5a4-f572-4b54-8e63-83d1d2d3f747),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcc9e40-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.904 247708 DEBUG os_vif [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:67:27,bridge_name='br-int',has_traffic_filtering=True,id=bbcc9e40-fa0a-4002-902e-79173be941a9,network=Network(3d8fa5a4-f572-4b54-8e63-83d1d2d3f747),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcc9e40-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:10:38 compute-0 podman[326560]: 2026-01-31 08:10:38.908660702 +0000 UTC m=+0.058285467 container died 8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.906 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.907 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbcc9e40-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.911 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.913 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:38 compute-0 nova_compute[247704]: 2026-01-31 08:10:38.917 247708 INFO os_vif [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:67:27,bridge_name='br-int',has_traffic_filtering=True,id=bbcc9e40-fa0a-4002-902e-79173be941a9,network=Network(3d8fa5a4-f572-4b54-8e63-83d1d2d3f747),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbbcc9e40-fa')
Jan 31 08:10:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24-userdata-shm.mount: Deactivated successfully.
Jan 31 08:10:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-097c0dbb628d16f2886716a29a04ac5887357d1f1a779f550f3c7f6c2879374a-merged.mount: Deactivated successfully.
Jan 31 08:10:38 compute-0 podman[326560]: 2026-01-31 08:10:38.957227869 +0000 UTC m=+0.106852594 container cleanup 8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:10:38 compute-0 systemd[1]: libpod-conmon-8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24.scope: Deactivated successfully.
Jan 31 08:10:39 compute-0 podman[326612]: 2026-01-31 08:10:39.025937359 +0000 UTC m=+0.048478496 container remove 8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.030 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf51ab0-40ef-4cca-99b8-c5b46e8d187b]: (4, ('Sat Jan 31 08:10:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747 (8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24)\n8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24\nSat Jan 31 08:10:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747 (8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24)\n8974de0a0ffbbc810b5c16fe06166e0b32f4d26def5ba3164121ad9fc3a16f24\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.032 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5cf54e-8ce5-4ca2-b933-d635dc600d22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.033 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d8fa5a4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.034 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-0 kernel: tap3d8fa5a4-f0: left promiscuous mode
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.046 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a59f03f8-ec54-4f7f-a89c-b5aed7b25dd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.059 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[177cfc2b-22da-467a-8356-d8e4fc696057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.060 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[05503dd8-8635-4e97-ae4d-6434f5d9cb88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.075 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9d06afb9-a2f9-4125-ab18-2235ea039fd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 727545, 'reachable_time': 44334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326632, 'error': None, 'target': 'ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d3d8fa5a4\x2df572\x2d4b54\x2d8e63\x2d83d1d2d3f747.mount: Deactivated successfully.
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.081 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d8fa5a4-f572-4b54-8e63-83d1d2d3f747 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:10:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:10:39.081 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ee8a0a-0eb8-4e3e-9101-bedc6680721e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.126 247708 DEBUG nova.compute.manager [req-efcd5785-fdbb-4851-a318-8f486a1ea2de req-ce756d86-87be-4bf7-8b72-301233cd8948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received event network-vif-unplugged-bbcc9e40-fa0a-4002-902e-79173be941a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.127 247708 DEBUG oslo_concurrency.lockutils [req-efcd5785-fdbb-4851-a318-8f486a1ea2de req-ce756d86-87be-4bf7-8b72-301233cd8948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.127 247708 DEBUG oslo_concurrency.lockutils [req-efcd5785-fdbb-4851-a318-8f486a1ea2de req-ce756d86-87be-4bf7-8b72-301233cd8948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.128 247708 DEBUG oslo_concurrency.lockutils [req-efcd5785-fdbb-4851-a318-8f486a1ea2de req-ce756d86-87be-4bf7-8b72-301233cd8948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.128 247708 DEBUG nova.compute.manager [req-efcd5785-fdbb-4851-a318-8f486a1ea2de req-ce756d86-87be-4bf7-8b72-301233cd8948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] No waiting events found dispatching network-vif-unplugged-bbcc9e40-fa0a-4002-902e-79173be941a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.128 247708 DEBUG nova.compute.manager [req-efcd5785-fdbb-4851-a318-8f486a1ea2de req-ce756d86-87be-4bf7-8b72-301233cd8948 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received event network-vif-unplugged-bbcc9e40-fa0a-4002-902e-79173be941a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:10:39 compute-0 NetworkManager[49108]: <info>  [1769847039.3003] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Jan 31 08:10:39 compute-0 NetworkManager[49108]: <info>  [1769847039.3007] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.301 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.322 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-0 ovn_controller[149457]: 2026-01-31T08:10:39Z|00484|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.339 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.394 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.452 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 206 op/s
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.990 247708 INFO nova.virt.libvirt.driver [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Deleting instance files /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303_del
Jan 31 08:10:39 compute-0 nova_compute[247704]: 2026-01-31 08:10:39.991 247708 INFO nova.virt.libvirt.driver [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Deletion of /var/lib/nova/instances/6e12a30c-8c62-4756-a0b8-c71359aaf303_del complete
Jan 31 08:10:40 compute-0 nova_compute[247704]: 2026-01-31 08:10:40.054 247708 INFO nova.compute.manager [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Took 1.41 seconds to destroy the instance on the hypervisor.
Jan 31 08:10:40 compute-0 nova_compute[247704]: 2026-01-31 08:10:40.055 247708 DEBUG oslo.service.loopingcall [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:10:40 compute-0 nova_compute[247704]: 2026-01-31 08:10:40.055 247708 DEBUG nova.compute.manager [-] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:10:40 compute-0 nova_compute[247704]: 2026-01-31 08:10:40.056 247708 DEBUG nova.network.neutron [-] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:10:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:40.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:40.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:40 compute-0 ceph-mon[74496]: pgmap v2234: 305 pgs: 305 active+clean; 341 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 206 op/s
Jan 31 08:10:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 292 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 195 op/s
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.980 247708 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received event network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.982 247708 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.982 247708 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.983 247708 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.983 247708 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] No waiting events found dispatching network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.983 247708 WARNING nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received unexpected event network-vif-plugged-bbcc9e40-fa0a-4002-902e-79173be941a9 for instance with vm_state active and task_state deleting.
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.984 247708 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.984 247708 DEBUG nova.compute.manager [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing instance network info cache due to event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.984 247708 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.984 247708 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:10:41 compute-0 nova_compute[247704]: 2026-01-31 08:10:41.985 247708 DEBUG nova.network.neutron [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:10:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:42.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:42.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:42 compute-0 nova_compute[247704]: 2026-01-31 08:10:42.308 247708 DEBUG nova.network.neutron [-] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:42 compute-0 nova_compute[247704]: 2026-01-31 08:10:42.334 247708 INFO nova.compute.manager [-] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Took 2.28 seconds to deallocate network for instance.
Jan 31 08:10:42 compute-0 nova_compute[247704]: 2026-01-31 08:10:42.402 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:10:42 compute-0 nova_compute[247704]: 2026-01-31 08:10:42.403 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:10:42 compute-0 nova_compute[247704]: 2026-01-31 08:10:42.411 247708 DEBUG nova.compute.manager [req-462e36bf-ecca-443c-9e70-cb74dd0cb876 req-50cac99a-3d76-4a71-9f04-be0aff652283 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Received event network-vif-deleted-bbcc9e40-fa0a-4002-902e-79173be941a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:10:42 compute-0 nova_compute[247704]: 2026-01-31 08:10:42.554 247708 DEBUG oslo_concurrency.processutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:10:42 compute-0 nova_compute[247704]: 2026-01-31 08:10:42.586 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:10:42 compute-0 ceph-mon[74496]: pgmap v2235: 305 pgs: 305 active+clean; 292 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 195 op/s
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.055 247708 DEBUG oslo_concurrency.processutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.062 247708 DEBUG nova.compute.provider_tree [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.080 247708 DEBUG nova.scheduler.client.report [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.110 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.142 247708 INFO nova.scheduler.client.report [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Deleted allocations for instance 6e12a30c-8c62-4756-a0b8-c71359aaf303
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.220 247708 DEBUG oslo_concurrency.lockutils [None req-f13f0eb1-c1c3-454b-8d67-063537bbaacf 6215f7b8636349358e68ad58782a5ed9 9d0c9f5a2b074ef290312d1045d1747f - - default default] Lock "6e12a30c-8c62-4756-a0b8-c71359aaf303" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:10:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 249 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 718 KiB/s wr, 214 op/s
Jan 31 08:10:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1321363103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4087330589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:43 compute-0 sudo[326659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:43 compute-0 sudo[326659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:43 compute-0 sudo[326659]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:43 compute-0 sudo[326684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:10:43 compute-0 sudo[326684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:43 compute-0 sudo[326684]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.910 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.950 247708 DEBUG nova.network.neutron [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updated VIF entry in instance network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.951 247708 DEBUG nova.network.neutron [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:10:43 compute-0 nova_compute[247704]: 2026-01-31 08:10:43.979 247708 DEBUG oslo_concurrency.lockutils [req-da95d2d2-f37a-4f8b-b73c-b46243578a43 req-237846cc-97c1-4a3e-bdfe-78020364a638 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:10:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:44.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:44.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:44 compute-0 nova_compute[247704]: 2026-01-31 08:10:44.276 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:44 compute-0 nova_compute[247704]: 2026-01-31 08:10:44.395 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:44 compute-0 ceph-mon[74496]: pgmap v2236: 305 pgs: 305 active+clean; 249 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 718 KiB/s wr, 214 op/s
Jan 31 08:10:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 167 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 50 KiB/s wr, 229 op/s
Jan 31 08:10:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4221475382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:10:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4221475382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:10:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:46.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:46.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:46 compute-0 ceph-mon[74496]: pgmap v2237: 305 pgs: 305 active+clean; 167 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 50 KiB/s wr, 229 op/s
Jan 31 08:10:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 167 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 41 KiB/s wr, 222 op/s
Jan 31 08:10:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:48.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:48.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:48 compute-0 ceph-mon[74496]: pgmap v2238: 305 pgs: 305 active+clean; 167 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 41 KiB/s wr, 222 op/s
Jan 31 08:10:48 compute-0 ovn_controller[149457]: 2026-01-31T08:10:48Z|00485|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:10:48 compute-0 nova_compute[247704]: 2026-01-31 08:10:48.839 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:48 compute-0 nova_compute[247704]: 2026-01-31 08:10:48.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:48 compute-0 podman[326711]: 2026-01-31 08:10:48.948185771 +0000 UTC m=+0.104526727 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 08:10:49 compute-0 nova_compute[247704]: 2026-01-31 08:10:49.398 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:49 compute-0 ovn_controller[149457]: 2026-01-31T08:10:49Z|00486|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:10:49 compute-0 nova_compute[247704]: 2026-01-31 08:10:49.563 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 188 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 171 op/s
Jan 31 08:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:50.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:50.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:50 compute-0 ovn_controller[149457]: 2026-01-31T08:10:50Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:7b:39 10.100.0.6
Jan 31 08:10:50 compute-0 ovn_controller[149457]: 2026-01-31T08:10:50Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:7b:39 10.100.0.6
Jan 31 08:10:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:50 compute-0 ceph-mon[74496]: pgmap v2239: 305 pgs: 305 active+clean; 188 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 171 op/s
Jan 31 08:10:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 191 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 1.8 MiB/s wr, 138 op/s
Jan 31 08:10:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:52.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:52.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:52 compute-0 ceph-mon[74496]: pgmap v2240: 305 pgs: 305 active+clean; 191 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 1.8 MiB/s wr, 138 op/s
Jan 31 08:10:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 194 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 31 08:10:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3915785734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:10:53 compute-0 nova_compute[247704]: 2026-01-31 08:10:53.875 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847038.8737957, 6e12a30c-8c62-4756-a0b8-c71359aaf303 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:10:53 compute-0 nova_compute[247704]: 2026-01-31 08:10:53.875 247708 INFO nova.compute.manager [-] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] VM Stopped (Lifecycle Event)
Jan 31 08:10:53 compute-0 nova_compute[247704]: 2026-01-31 08:10:53.918 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:53 compute-0 nova_compute[247704]: 2026-01-31 08:10:53.937 247708 DEBUG nova.compute.manager [None req-e0fafcc3-18a9-4946-9c48-74843ff08296 - - - - - -] [instance: 6e12a30c-8c62-4756-a0b8-c71359aaf303] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:10:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:54.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:54.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:54 compute-0 nova_compute[247704]: 2026-01-31 08:10:54.399 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:54 compute-0 ceph-mon[74496]: pgmap v2241: 305 pgs: 305 active+clean; 194 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 31 08:10:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:10:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 31 08:10:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:10:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:56.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:10:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:56.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:56 compute-0 ceph-mon[74496]: pgmap v2242: 305 pgs: 305 active+clean; 200 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 31 08:10:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 218 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Jan 31 08:10:57 compute-0 ceph-mon[74496]: pgmap v2243: 305 pgs: 305 active+clean; 218 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Jan 31 08:10:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:58.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:10:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:10:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:58.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:10:58 compute-0 nova_compute[247704]: 2026-01-31 08:10:58.921 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:59 compute-0 nova_compute[247704]: 2026-01-31 08:10:59.401 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:10:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 08:11:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:00.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:00.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:00 compute-0 ceph-mon[74496]: pgmap v2244: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 08:11:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1226629130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3936287298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Jan 31 08:11:01 compute-0 nova_compute[247704]: 2026-01-31 08:11:01.770 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:02.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:02.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:02 compute-0 ceph-mon[74496]: pgmap v2245: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 304 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Jan 31 08:11:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Jan 31 08:11:03 compute-0 nova_compute[247704]: 2026-01-31 08:11:03.658 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:03 compute-0 sudo[326752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:03 compute-0 sudo[326752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:03 compute-0 sudo[326752]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:03 compute-0 podman[326746]: 2026-01-31 08:11:03.910478204 +0000 UTC m=+0.077367386 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:11:03 compute-0 nova_compute[247704]: 2026-01-31 08:11:03.923 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:03 compute-0 sudo[326789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:03 compute-0 sudo[326789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:03 compute-0 sudo[326789]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:04.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:04.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:04 compute-0 nova_compute[247704]: 2026-01-31 08:11:04.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:04 compute-0 ceph-mon[74496]: pgmap v2246: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Jan 31 08:11:05 compute-0 nova_compute[247704]: 2026-01-31 08:11:05.538 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 516 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 31 08:11:05 compute-0 ceph-mon[74496]: pgmap v2247: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 516 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 31 08:11:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:06.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:07 compute-0 nova_compute[247704]: 2026-01-31 08:11:07.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:11:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:08.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:08.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:08 compute-0 ceph-mon[74496]: pgmap v2248: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 08:11:08 compute-0 nova_compute[247704]: 2026-01-31 08:11:08.927 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:09 compute-0 nova_compute[247704]: 2026-01-31 08:11:09.405 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 31 08:11:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:10.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:10.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:10 compute-0 ceph-mon[74496]: pgmap v2249: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Jan 31 08:11:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:11.182 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:11.183 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:11.184 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 31 08:11:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:12.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:12.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:12 compute-0 ceph-mon[74496]: pgmap v2250: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 31 08:11:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1691286314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 08:11:13 compute-0 nova_compute[247704]: 2026-01-31 08:11:13.929 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:14.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:14 compute-0 nova_compute[247704]: 2026-01-31 08:11:14.409 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:14 compute-0 nova_compute[247704]: 2026-01-31 08:11:14.539 247708 DEBUG oslo_concurrency.lockutils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:14 compute-0 nova_compute[247704]: 2026-01-31 08:11:14.540 247708 DEBUG oslo_concurrency.lockutils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:14 compute-0 nova_compute[247704]: 2026-01-31 08:11:14.591 247708 DEBUG nova.objects.instance [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'flavor' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:14 compute-0 ceph-mon[74496]: pgmap v2251: 305 pgs: 305 active+clean; 246 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 08:11:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3639175740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:15 compute-0 nova_compute[247704]: 2026-01-31 08:11:15.008 247708 DEBUG oslo_concurrency.lockutils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 264 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 488 KiB/s wr, 77 op/s
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.075 247708 DEBUG oslo_concurrency.lockutils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.076 247708 DEBUG oslo_concurrency.lockutils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.077 247708 INFO nova.compute.manager [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Attaching volume eba44545-2378-4cdd-bfad-f9f0d9c730c0 to /dev/vdb
Jan 31 08:11:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:16.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.290 247708 DEBUG os_brick.utils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:11:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:16.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.292 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.308 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.308 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0bf75d-4f1d-469a-b808-80eb593344c7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.310 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.320 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.320 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ee2c99-3d3e-4782-a380-0d182fd5995b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.322 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.332 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.332 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c20744-6b77-4d6a-8981-48ddcd19b84a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.334 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2236ab-213b-4a59-b38e-4b6e747a5f81]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.334 247708 DEBUG oslo_concurrency.processutils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.373 247708 DEBUG oslo_concurrency.processutils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "nvme version" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.375 247708 DEBUG os_brick.initiator.connectors.lightos [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.375 247708 DEBUG os_brick.initiator.connectors.lightos [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.376 247708 DEBUG os_brick.initiator.connectors.lightos [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.376 247708 DEBUG os_brick.utils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:11:16 compute-0 nova_compute[247704]: 2026-01-31 08:11:16.376 247708 DEBUG nova.virt.block_device [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating existing volume attachment record: 11dcf9ec-8116-45f2-b743-b88783331531 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:11:17 compute-0 ceph-mon[74496]: pgmap v2252: 305 pgs: 305 active+clean; 264 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 488 KiB/s wr, 77 op/s
Jan 31 08:11:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3159024888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:11:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3324127290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 330 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Jan 31 08:11:17 compute-0 nova_compute[247704]: 2026-01-31 08:11:17.734 247708 DEBUG nova.objects.instance [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'flavor' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:17 compute-0 nova_compute[247704]: 2026-01-31 08:11:17.861 247708 DEBUG nova.virt.libvirt.driver [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Attempting to attach volume eba44545-2378-4cdd-bfad-f9f0d9c730c0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:11:17 compute-0 nova_compute[247704]: 2026-01-31 08:11:17.866 247708 DEBUG nova.virt.libvirt.guest [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:11:17 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:11:17 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-eba44545-2378-4cdd-bfad-f9f0d9c730c0">
Jan 31 08:11:17 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:11:17 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:11:17 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:11:17 compute-0 nova_compute[247704]:   </source>
Jan 31 08:11:17 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:11:17 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:11:17 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:11:17 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:11:17 compute-0 nova_compute[247704]:   <serial>eba44545-2378-4cdd-bfad-f9f0d9c730c0</serial>
Jan 31 08:11:17 compute-0 nova_compute[247704]: </disk>
Jan 31 08:11:17 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:11:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3324127290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:18 compute-0 ceph-mon[74496]: pgmap v2253: 305 pgs: 305 active+clean; 330 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.8 MiB/s wr, 122 op/s
Jan 31 08:11:18 compute-0 nova_compute[247704]: 2026-01-31 08:11:18.167 247708 DEBUG nova.virt.libvirt.driver [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:11:18 compute-0 nova_compute[247704]: 2026-01-31 08:11:18.168 247708 DEBUG nova.virt.libvirt.driver [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:11:18 compute-0 nova_compute[247704]: 2026-01-31 08:11:18.168 247708 DEBUG nova.virt.libvirt.driver [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:11:18 compute-0 nova_compute[247704]: 2026-01-31 08:11:18.169 247708 DEBUG nova.virt.libvirt.driver [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] No VIF found with MAC fa:16:3e:af:7b:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:11:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:18.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:18.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:18 compute-0 nova_compute[247704]: 2026-01-31 08:11:18.577 247708 DEBUG oslo_concurrency.lockutils [None req-341d243f-d249-4e14-b7e1-088c41a94731 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:18 compute-0 nova_compute[247704]: 2026-01-31 08:11:18.966 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:19 compute-0 nova_compute[247704]: 2026-01-31 08:11:19.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 871 KiB/s rd, 6.5 MiB/s wr, 114 op/s
Jan 31 08:11:19 compute-0 podman[326849]: 2026-01-31 08:11:19.916880001 +0000 UTC m=+0.085316180 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:11:20
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.meta', 'images', 'vms', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log']
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:11:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:11:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:20.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:11:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:20.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:11:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:11:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:11:20 compute-0 ceph-mon[74496]: pgmap v2254: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 871 KiB/s rd, 6.5 MiB/s wr, 114 op/s
Jan 31 08:11:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/788118537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/810772048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 401 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 6.9 MiB/s wr, 133 op/s
Jan 31 08:11:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/728860884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:21 compute-0 sudo[326876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:21 compute-0 sudo[326876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:21 compute-0 sudo[326876]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:21 compute-0 sudo[326901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:22 compute-0 sudo[326901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:22 compute-0 sudo[326901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-0 sudo[326926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:22 compute-0 sudo[326926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:22 compute-0 sudo[326926]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-0 sudo[326951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:11:22 compute-0 sudo[326951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:22.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:22.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 08:11:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:11:22 compute-0 sudo[326951]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-0 nova_compute[247704]: 2026-01-31 08:11:22.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:22 compute-0 nova_compute[247704]: 2026-01-31 08:11:22.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:11:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:11:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:11:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:11:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a5e879d1-53df-4429-ba07-7b04da5ee0a8 does not exist
Jan 31 08:11:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2df43de7-5f62-42d0-958f-957a56bbd436 does not exist
Jan 31 08:11:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 17dc986b-fd0d-41a5-9d4d-7889ee86b85b does not exist
Jan 31 08:11:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:11:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:11:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:11:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:11:22 compute-0 sudo[327009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:22 compute-0 sudo[327009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:22 compute-0 sudo[327009]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-0 sudo[327034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:22 compute-0 sudo[327034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:22 compute-0 sudo[327034]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-0 sudo[327059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:22 compute-0 sudo[327059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:22 compute-0 sudo[327059]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:22 compute-0 ceph-mon[74496]: pgmap v2255: 305 pgs: 305 active+clean; 401 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 6.9 MiB/s wr, 133 op/s
Jan 31 08:11:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1219824679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:11:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:11:22 compute-0 sudo[327084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:11:22 compute-0 sudo[327084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:22 compute-0 nova_compute[247704]: 2026-01-31 08:11:22.825 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:11:23 compute-0 podman[327151]: 2026-01-31 08:11:23.142767164 +0000 UTC m=+0.050938462 container create 3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jemison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 08:11:23 compute-0 systemd[1]: Started libpod-conmon-3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5.scope.
Jan 31 08:11:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:23 compute-0 podman[327151]: 2026-01-31 08:11:23.118026181 +0000 UTC m=+0.026197519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:11:23 compute-0 podman[327151]: 2026-01-31 08:11:23.223148073 +0000 UTC m=+0.131319411 container init 3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:11:23 compute-0 podman[327151]: 2026-01-31 08:11:23.229890956 +0000 UTC m=+0.138062244 container start 3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 08:11:23 compute-0 podman[327151]: 2026-01-31 08:11:23.233215978 +0000 UTC m=+0.141387316 container attach 3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jemison, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:11:23 compute-0 infallible_jemison[327167]: 167 167
Jan 31 08:11:23 compute-0 systemd[1]: libpod-3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5.scope: Deactivated successfully.
Jan 31 08:11:23 compute-0 podman[327151]: 2026-01-31 08:11:23.23494605 +0000 UTC m=+0.143117348 container died 3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jemison, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:11:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa12884d9dd7f198be70c1afab51f14703e25d85e26554441d5b230c8dc62f24-merged.mount: Deactivated successfully.
Jan 31 08:11:23 compute-0 podman[327151]: 2026-01-31 08:11:23.272419073 +0000 UTC m=+0.180590361 container remove 3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jemison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:11:23 compute-0 systemd[1]: libpod-conmon-3131737913e78f8865d0a82992db51eaef24441745861a5d7073cddba1ee06b5.scope: Deactivated successfully.
Jan 31 08:11:23 compute-0 podman[327190]: 2026-01-31 08:11:23.42497633 +0000 UTC m=+0.052130951 container create 77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:11:23 compute-0 systemd[1]: Started libpod-conmon-77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f.scope.
Jan 31 08:11:23 compute-0 podman[327190]: 2026-01-31 08:11:23.397671645 +0000 UTC m=+0.024826316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:11:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cf2b59b185140e51c6969f13f3becb228a9327328bb7904f73a2c8a755bf2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cf2b59b185140e51c6969f13f3becb228a9327328bb7904f73a2c8a755bf2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cf2b59b185140e51c6969f13f3becb228a9327328bb7904f73a2c8a755bf2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cf2b59b185140e51c6969f13f3becb228a9327328bb7904f73a2c8a755bf2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cf2b59b185140e51c6969f13f3becb228a9327328bb7904f73a2c8a755bf2a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:23 compute-0 podman[327190]: 2026-01-31 08:11:23.53246601 +0000 UTC m=+0.159620671 container init 77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 08:11:23 compute-0 podman[327190]: 2026-01-31 08:11:23.548231694 +0000 UTC m=+0.175386315 container start 77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 08:11:23 compute-0 podman[327190]: 2026-01-31 08:11:23.552883887 +0000 UTC m=+0.180038498 container attach 77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 08:11:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 31 08:11:23 compute-0 nova_compute[247704]: 2026-01-31 08:11:23.969 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:24 compute-0 sudo[327213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:24 compute-0 sudo[327213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:24 compute-0 sudo[327213]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 sudo[327238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:24 compute-0 sudo[327238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:24 compute-0 sudo[327238]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:11:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:24.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:11:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:24.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:24 compute-0 boring_lederberg[327208]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:11:24 compute-0 boring_lederberg[327208]: --> relative data size: 1.0
Jan 31 08:11:24 compute-0 boring_lederberg[327208]: --> All data devices are unavailable
Jan 31 08:11:24 compute-0 systemd[1]: libpod-77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f.scope: Deactivated successfully.
Jan 31 08:11:24 compute-0 podman[327190]: 2026-01-31 08:11:24.40666635 +0000 UTC m=+1.033820971 container died 77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 08:11:24 compute-0 nova_compute[247704]: 2026-01-31 08:11:24.414 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-05cf2b59b185140e51c6969f13f3becb228a9327328bb7904f73a2c8a755bf2a-merged.mount: Deactivated successfully.
Jan 31 08:11:24 compute-0 podman[327190]: 2026-01-31 08:11:24.475847236 +0000 UTC m=+1.103001807 container remove 77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:11:24 compute-0 systemd[1]: libpod-conmon-77636bd889be015a9103d654a9466254b74ba515e4ae74aa7ed36b36e83bf92f.scope: Deactivated successfully.
Jan 31 08:11:24 compute-0 sudo[327084]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 sudo[327287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:24 compute-0 sudo[327287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:24 compute-0 sudo[327287]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 sudo[327312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:24 compute-0 sudo[327312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:24 compute-0 sudo[327312]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 sudo[327337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:24 compute-0 sudo[327337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:24 compute-0 sudo[327337]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 sudo[327362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:11:24 compute-0 sudo[327362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:24 compute-0 ceph-mon[74496]: pgmap v2256: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 7.4 MiB/s wr, 148 op/s
Jan 31 08:11:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3711382014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1353661443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:24 compute-0 nova_compute[247704]: 2026-01-31 08:11:24.825 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:25 compute-0 podman[327428]: 2026-01-31 08:11:25.039509141 +0000 UTC m=+0.038311445 container create 70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:11:25 compute-0 systemd[1]: Started libpod-conmon-70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927.scope.
Jan 31 08:11:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:25 compute-0 podman[327428]: 2026-01-31 08:11:25.025162131 +0000 UTC m=+0.023964455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:11:25 compute-0 podman[327428]: 2026-01-31 08:11:25.13267091 +0000 UTC m=+0.131473244 container init 70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:11:25 compute-0 podman[327428]: 2026-01-31 08:11:25.140837339 +0000 UTC m=+0.139639643 container start 70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:11:25 compute-0 podman[327428]: 2026-01-31 08:11:25.144050528 +0000 UTC m=+0.142852832 container attach 70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:11:25 compute-0 systemd[1]: libpod-70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927.scope: Deactivated successfully.
Jan 31 08:11:25 compute-0 great_poitras[327445]: 167 167
Jan 31 08:11:25 compute-0 podman[327428]: 2026-01-31 08:11:25.146547648 +0000 UTC m=+0.145349982 container died 70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:11:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0c69afff097542166a73582b933d3a1f5bd34ea3960591df8304819b18ace72-merged.mount: Deactivated successfully.
Jan 31 08:11:25 compute-0 podman[327428]: 2026-01-31 08:11:25.183729175 +0000 UTC m=+0.182531469 container remove 70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:11:25 compute-0 systemd[1]: libpod-conmon-70153113e808d68dc48e9821b53bbf2ed2eb3faa297034490b7500ed2aae7927.scope: Deactivated successfully.
Jan 31 08:11:25 compute-0 podman[327469]: 2026-01-31 08:11:25.346267065 +0000 UTC m=+0.050017180 container create 59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 08:11:25 compute-0 systemd[1]: Started libpod-conmon-59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1.scope.
Jan 31 08:11:25 compute-0 podman[327469]: 2026-01-31 08:11:25.320950888 +0000 UTC m=+0.024701073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:11:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40652bd8c97795625f4aa42cd9ffa040a943da34d9d5a99c3ccb3a613420866/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40652bd8c97795625f4aa42cd9ffa040a943da34d9d5a99c3ccb3a613420866/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40652bd8c97795625f4aa42cd9ffa040a943da34d9d5a99c3ccb3a613420866/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c40652bd8c97795625f4aa42cd9ffa040a943da34d9d5a99c3ccb3a613420866/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:25 compute-0 podman[327469]: 2026-01-31 08:11:25.445500933 +0000 UTC m=+0.149251078 container init 59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:11:25 compute-0 podman[327469]: 2026-01-31 08:11:25.451829337 +0000 UTC m=+0.155579442 container start 59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dhawan, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:11:25 compute-0 podman[327469]: 2026-01-31 08:11:25.456093991 +0000 UTC m=+0.159844146 container attach 59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dhawan, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:11:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 773 KiB/s rd, 7.5 MiB/s wr, 169 op/s
Jan 31 08:11:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]: {
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:     "0": [
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:         {
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "devices": [
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "/dev/loop3"
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             ],
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "lv_name": "ceph_lv0",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "lv_size": "7511998464",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "name": "ceph_lv0",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "tags": {
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.cluster_name": "ceph",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.crush_device_class": "",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.encrypted": "0",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.osd_id": "0",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.type": "block",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:                 "ceph.vdo": "0"
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             },
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "type": "block",
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:             "vg_name": "ceph_vg0"
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:         }
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]:     ]
Jan 31 08:11:26 compute-0 laughing_dhawan[327486]: }
Jan 31 08:11:26 compute-0 systemd[1]: libpod-59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1.scope: Deactivated successfully.
Jan 31 08:11:26 compute-0 podman[327469]: 2026-01-31 08:11:26.268581959 +0000 UTC m=+0.972332144 container died 59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c40652bd8c97795625f4aa42cd9ffa040a943da34d9d5a99c3ccb3a613420866-merged.mount: Deactivated successfully.
Jan 31 08:11:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:26 compute-0 podman[327469]: 2026-01-31 08:11:26.328414357 +0000 UTC m=+1.032164472 container remove 59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:11:26 compute-0 systemd[1]: libpod-conmon-59bcff63927942b19e23695805edd1829e12f92bd6f4e4f7497ae4096a6834e1.scope: Deactivated successfully.
Jan 31 08:11:26 compute-0 sudo[327362]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:26 compute-0 sudo[327508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:26 compute-0 sudo[327508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:26 compute-0 sudo[327508]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:26 compute-0 sudo[327533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:11:26 compute-0 sudo[327533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:26 compute-0 sudo[327533]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:26 compute-0 sudo[327558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:26 compute-0 sudo[327558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:26 compute-0 sudo[327558]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:26 compute-0 sudo[327583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:11:26 compute-0 sudo[327583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:26 compute-0 nova_compute[247704]: 2026-01-31 08:11:26.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:26 compute-0 ceph-mon[74496]: pgmap v2257: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 773 KiB/s rd, 7.5 MiB/s wr, 169 op/s
Jan 31 08:11:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2252345541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:26 compute-0 podman[327648]: 2026-01-31 08:11:26.839085319 +0000 UTC m=+0.032917502 container create d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 31 08:11:26 compute-0 systemd[1]: Started libpod-conmon-d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799.scope.
Jan 31 08:11:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:26 compute-0 podman[327648]: 2026-01-31 08:11:26.823552951 +0000 UTC m=+0.017385174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:11:26 compute-0 podman[327648]: 2026-01-31 08:11:26.922520363 +0000 UTC m=+0.116352586 container init d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:11:26 compute-0 podman[327648]: 2026-01-31 08:11:26.927623457 +0000 UTC m=+0.121455670 container start d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:11:26 compute-0 modest_shockley[327664]: 167 167
Jan 31 08:11:26 compute-0 systemd[1]: libpod-d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799.scope: Deactivated successfully.
Jan 31 08:11:26 compute-0 podman[327648]: 2026-01-31 08:11:26.932499056 +0000 UTC m=+0.126331289 container attach d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:11:26 compute-0 podman[327648]: 2026-01-31 08:11:26.932993148 +0000 UTC m=+0.126825371 container died d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-da533731c1edb275488733c63ec281ac97d7e4f8bf291d7e05ff39595e6a04be-merged.mount: Deactivated successfully.
Jan 31 08:11:26 compute-0 podman[327648]: 2026-01-31 08:11:26.967881598 +0000 UTC m=+0.161713791 container remove d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:11:26 compute-0 systemd[1]: libpod-conmon-d5da8c580cad6d6e9943991c35532b01ed39a4d6616d3f7de1b0067ade2ab799.scope: Deactivated successfully.
Jan 31 08:11:27 compute-0 nova_compute[247704]: 2026-01-31 08:11:27.057 247708 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:11:27 compute-0 nova_compute[247704]: 2026-01-31 08:11:27.058 247708 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:11:27 compute-0 nova_compute[247704]: 2026-01-31 08:11:27.058 247708 DEBUG nova.network.neutron [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:11:27 compute-0 podman[327690]: 2026-01-31 08:11:27.118215721 +0000 UTC m=+0.032029171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:11:27 compute-0 podman[327690]: 2026-01-31 08:11:27.275053473 +0000 UTC m=+0.188866903 container create e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:11:27 compute-0 systemd[1]: Started libpod-conmon-e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86.scope.
Jan 31 08:11:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39257d835859c31eda3f69b0f8040620839c645db0b071eece9230cb1dc55981/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39257d835859c31eda3f69b0f8040620839c645db0b071eece9230cb1dc55981/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39257d835859c31eda3f69b0f8040620839c645db0b071eece9230cb1dc55981/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39257d835859c31eda3f69b0f8040620839c645db0b071eece9230cb1dc55981/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:11:27 compute-0 podman[327690]: 2026-01-31 08:11:27.467747918 +0000 UTC m=+0.381561348 container init e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:11:27 compute-0 podman[327690]: 2026-01-31 08:11:27.473484117 +0000 UTC m=+0.387297537 container start e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:11:27 compute-0 podman[327690]: 2026-01-31 08:11:27.476798159 +0000 UTC m=+0.390611579 container attach e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_franklin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:11:27 compute-0 nova_compute[247704]: 2026-01-31 08:11:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.0 MiB/s wr, 261 op/s
Jan 31 08:11:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]: {
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]:         "osd_id": 0,
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]:         "type": "bluestore"
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]:     }
Jan 31 08:11:28 compute-0 vigilant_franklin[327707]: }
Jan 31 08:11:28 compute-0 systemd[1]: libpod-e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86.scope: Deactivated successfully.
Jan 31 08:11:28 compute-0 podman[327690]: 2026-01-31 08:11:28.267550726 +0000 UTC m=+1.181364166 container died e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 08:11:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-39257d835859c31eda3f69b0f8040620839c645db0b071eece9230cb1dc55981-merged.mount: Deactivated successfully.
Jan 31 08:11:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:28.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:28 compute-0 podman[327690]: 2026-01-31 08:11:28.318421796 +0000 UTC m=+1.232235216 container remove e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:11:28 compute-0 systemd[1]: libpod-conmon-e480281faf66945765b88e78a407b75db807508871fd3540066d5749428d5f86.scope: Deactivated successfully.
Jan 31 08:11:28 compute-0 sudo[327583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:11:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:11:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b5f3a6d9-65de-4029-a22c-473eff1e21a7 does not exist
Jan 31 08:11:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 10ecdc60-77e7-48c4-9b4b-8e1990fc5ac0 does not exist
Jan 31 08:11:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3ef777ff-ebb7-448e-9faf-d592a2fde1c9 does not exist
Jan 31 08:11:28 compute-0 sudo[327742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:28 compute-0 sudo[327742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:28 compute-0 sudo[327742]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:28 compute-0 sudo[327767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:11:28 compute-0 sudo[327767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:28 compute-0 sudo[327767]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:28 compute-0 nova_compute[247704]: 2026-01-31 08:11:28.694 247708 DEBUG nova.network.neutron [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:11:28 compute-0 nova_compute[247704]: 2026-01-31 08:11:28.796 247708 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:11:28 compute-0 ceph-mon[74496]: pgmap v2258: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.0 MiB/s wr, 261 op/s
Jan 31 08:11:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:11:28 compute-0 nova_compute[247704]: 2026-01-31 08:11:28.972 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:29 compute-0 nova_compute[247704]: 2026-01-31 08:11:29.418 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:29 compute-0 nova_compute[247704]: 2026-01-31 08:11:29.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:29 compute-0 nova_compute[247704]: 2026-01-31 08:11:29.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:11:29 compute-0 nova_compute[247704]: 2026-01-31 08:11:29.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:11:29 compute-0 nova_compute[247704]: 2026-01-31 08:11:29.628 247708 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 08:11:29 compute-0 nova_compute[247704]: 2026-01-31 08:11:29.629 247708 DEBUG nova.virt.libvirt.volume.remotefs [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Creating file /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/1e98362bf34949cc907dc5e972158897.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 08:11:29 compute-0 nova_compute[247704]: 2026-01-31 08:11:29.630 247708 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/1e98362bf34949cc907dc5e972158897.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 400 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.7 MiB/s wr, 269 op/s
Jan 31 08:11:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 31 08:11:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 31 08:11:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.090 247708 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/1e98362bf34949cc907dc5e972158897.tmp" returned: 1 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.091 247708 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/1e98362bf34949cc907dc5e972158897.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.091 247708 DEBUG nova.virt.libvirt.volume.remotefs [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Creating directory /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.091 247708 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:30.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.293 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.294 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.294 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.295 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53a5c321-1278-4df4-9fb0-feb465508681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.308 247708 DEBUG oslo_concurrency.processutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:30.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.314 247708 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:11:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:30.698 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:11:30 compute-0 nova_compute[247704]: 2026-01-31 08:11:30.698 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:30.699 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:11:30 compute-0 ceph-mon[74496]: pgmap v2259: 305 pgs: 305 active+clean; 400 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.7 MiB/s wr, 269 op/s
Jan 31 08:11:30 compute-0 ceph-mon[74496]: osdmap e283: 3 total, 3 up, 3 in
Jan 31 08:11:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 31 08:11:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 31 08:11:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 31 08:11:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 425 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 1.6 MiB/s wr, 379 op/s
Jan 31 08:11:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 31 08:11:31 compute-0 ceph-mon[74496]: osdmap e284: 3 total, 3 up, 3 in
Jan 31 08:11:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3890559462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 31 08:11:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 31 08:11:32 compute-0 sshd-session[327796]: Invalid user pbanx from 45.148.10.240 port 50322
Jan 31 08:11:32 compute-0 sshd-session[327796]: Connection closed by invalid user pbanx 45.148.10.240 port 50322 [preauth]
Jan 31 08:11:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:32.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:32.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:32 compute-0 kernel: tap4fa5cff4-40 (unregistering): left promiscuous mode
Jan 31 08:11:32 compute-0 NetworkManager[49108]: <info>  [1769847092.6846] device (tap4fa5cff4-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:11:32 compute-0 ovn_controller[149457]: 2026-01-31T08:11:32Z|00487|binding|INFO|Releasing lport 4fa5cff4-40cb-4379-bda2-213171730f4f from this chassis (sb_readonly=0)
Jan 31 08:11:32 compute-0 ovn_controller[149457]: 2026-01-31T08:11:32Z|00488|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f down in Southbound
Jan 31 08:11:32 compute-0 ovn_controller[149457]: 2026-01-31T08:11:32Z|00489|binding|INFO|Removing iface tap4fa5cff4-40 ovn-installed in OVS
Jan 31 08:11:32 compute-0 nova_compute[247704]: 2026-01-31 08:11:32.694 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:32 compute-0 nova_compute[247704]: 2026-01-31 08:11:32.701 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:32.703 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:32 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 31 08:11:32 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000077.scope: Consumed 15.805s CPU time.
Jan 31 08:11:32 compute-0 systemd-machined[214448]: Machine qemu-49-instance-00000077 terminated.
Jan 31 08:11:32 compute-0 nova_compute[247704]: 2026-01-31 08:11:32.846 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updating instance_info_cache with network_info: [{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:11:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:32.846 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:7b:39 10.100.0.6'], port_security=['fa:16:3e:af:7b:39 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af8d85d9-c7a2-4709-a234-19511f3e4395', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4fa5cff4-40cb-4379-bda2-213171730f4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:11:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:32.848 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4fa5cff4-40cb-4379-bda2-213171730f4f in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis
Jan 31 08:11:32 compute-0 ceph-mon[74496]: pgmap v2262: 305 pgs: 305 active+clean; 425 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 1.6 MiB/s wr, 379 op/s
Jan 31 08:11:32 compute-0 ceph-mon[74496]: osdmap e285: 3 total, 3 up, 3 in
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.099 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.113 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2a517ae0-0470-453f-a9d8-0fc1e6ceea8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.120 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.121 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.122 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.122 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.123 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.123 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.141 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8344fe7d-7bb1-44fe-9608-1e7d21e57bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.147 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c1481841-e290-47bf-9493-b298adb52d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.171 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[aed7aa73-7cfc-43aa-995b-dd3acf38e990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.190 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[919bf017-fd94-4f69-a0fe-e14cf0b4939f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719151, 'reachable_time': 38392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327820, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.206 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd4b4a5-415d-4c4d-bbb0-79b0b080c604]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719161, 'tstamp': 719161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327821, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719163, 'tstamp': 719163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327821, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.207 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.209 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.214 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.214 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.215 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.215 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:11:33.216 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.336 247708 INFO nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance shutdown successfully after 3 seconds.
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.344 247708 INFO nova.virt.libvirt.driver [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance destroyed successfully.
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.346 247708 DEBUG nova.virt.libvirt.vif [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:10:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:af:7b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.347 247708 DEBUG nova.network.os_vif_util [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1220738298-network", "vif_mac": "fa:16:3e:af:7b:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.349 247708 DEBUG nova.network.os_vif_util [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.350 247708 DEBUG os_vif [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.353 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.354 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fa5cff4-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.356 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.360 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.363 247708 INFO os_vif [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40')
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.372 247708 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.372 247708 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.373 247708 DEBUG nova.virt.libvirt.driver [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.433 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.435 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.436 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.436 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.437 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 390 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.2 MiB/s wr, 384 op/s
Jan 31 08:11:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:11:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214386677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:33 compute-0 nova_compute[247704]: 2026-01-31 08:11:33.868 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2795073161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3214386677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:34.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:34.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.391 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.391 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.394 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.394 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.394 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.419 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.541 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.542 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4188MB free_disk=20.800498962402344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.542 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:34 compute-0 nova_compute[247704]: 2026-01-31 08:11:34.542 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:34 compute-0 podman[327847]: 2026-01-31 08:11:34.872236787 +0000 UTC m=+0.043456819 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 08:11:34 compute-0 ceph-mon[74496]: pgmap v2264: 305 pgs: 305 active+clean; 390 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.2 MiB/s wr, 384 op/s
Jan 31 08:11:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/982230014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:35 compute-0 nova_compute[247704]: 2026-01-31 08:11:35.345 247708 INFO nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating resource usage from migration 8b4ee4de-01d3-44bb-b219-d1f249fed0f0
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008182732237576772 of space, bias 1.0, pg target 2.454819671273032 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00249675024427694 of space, bias 1.0, pg target 0.7440315727945281 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:11:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 379 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Jan 31 08:11:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:36.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.733 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 53a5c321-1278-4df4-9fb0-feb465508681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.733 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Migration 8b4ee4de-01d3-44bb-b219-d1f249fed0f0 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.733 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.733 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.830 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.852 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.852 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.873 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:11:36 compute-0 nova_compute[247704]: 2026-01-31 08:11:36.910 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:11:37 compute-0 nova_compute[247704]: 2026-01-31 08:11:37.013 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:11:37 compute-0 ceph-mon[74496]: pgmap v2265: 305 pgs: 305 active+clean; 379 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Jan 31 08:11:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:11:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566712285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:37 compute-0 nova_compute[247704]: 2026-01-31 08:11:37.409 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:11:37 compute-0 nova_compute[247704]: 2026-01-31 08:11:37.414 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:11:37 compute-0 nova_compute[247704]: 2026-01-31 08:11:37.558 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:11:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 293 op/s
Jan 31 08:11:37 compute-0 nova_compute[247704]: 2026-01-31 08:11:37.847 247708 DEBUG neutronclient.v2_0.client [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 4fa5cff4-40cb-4379-bda2-213171730f4f for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:11:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1539722970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1777838766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2566712285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:38 compute-0 ceph-mon[74496]: pgmap v2266: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 293 op/s
Jan 31 08:11:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:38.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:38.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:38 compute-0 nova_compute[247704]: 2026-01-31 08:11:38.357 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:38 compute-0 nova_compute[247704]: 2026-01-31 08:11:38.513 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:11:38 compute-0 nova_compute[247704]: 2026-01-31 08:11:38.513 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:38 compute-0 nova_compute[247704]: 2026-01-31 08:11:38.514 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:38 compute-0 nova_compute[247704]: 2026-01-31 08:11:38.514 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:11:39 compute-0 nova_compute[247704]: 2026-01-31 08:11:39.037 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:39 compute-0 nova_compute[247704]: 2026-01-31 08:11:39.422 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 209 op/s
Jan 31 08:11:39 compute-0 nova_compute[247704]: 2026-01-31 08:11:39.809 247708 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:39 compute-0 nova_compute[247704]: 2026-01-31 08:11:39.810 247708 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:39 compute-0 nova_compute[247704]: 2026-01-31 08:11:39.810 247708 DEBUG oslo_concurrency.lockutils [None req-af09894e-aff8-4cf8-8793-30bc5b3747e8 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:39 compute-0 nova_compute[247704]: 2026-01-31 08:11:39.842 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:39 compute-0 nova_compute[247704]: 2026-01-31 08:11:39.843 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:11:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:40.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:40.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.661 247708 DEBUG nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.661 247708 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.662 247708 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.662 247708 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.662 247708 DEBUG nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.662 247708 WARNING nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state active and task_state resize_migrated.
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.662 247708 DEBUG nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.663 247708 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.663 247708 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.663 247708 DEBUG oslo_concurrency.lockutils [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.664 247708 DEBUG nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:40 compute-0 nova_compute[247704]: 2026-01-31 08:11:40.664 247708 WARNING nova.compute.manager [req-15213c0e-460a-47b2-a5eb-ab4863a55fe0 req-1c039c09-34b0-4f69-bc44-5154a78469de 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state active and task_state resize_migrated.
Jan 31 08:11:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 31 08:11:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 31 08:11:40 compute-0 ceph-mon[74496]: pgmap v2267: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 209 op/s
Jan 31 08:11:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2209944268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 188 op/s
Jan 31 08:11:41 compute-0 ceph-mon[74496]: osdmap e286: 3 total, 3 up, 3 in
Jan 31 08:11:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:42.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:42.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:42 compute-0 ceph-mon[74496]: pgmap v2269: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.5 MiB/s wr, 188 op/s
Jan 31 08:11:42 compute-0 nova_compute[247704]: 2026-01-31 08:11:42.975 247708 DEBUG nova.compute.manager [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:42 compute-0 nova_compute[247704]: 2026-01-31 08:11:42.976 247708 DEBUG nova.compute.manager [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing instance network info cache due to event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:11:42 compute-0 nova_compute[247704]: 2026-01-31 08:11:42.976 247708 DEBUG oslo_concurrency.lockutils [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:11:42 compute-0 nova_compute[247704]: 2026-01-31 08:11:42.976 247708 DEBUG oslo_concurrency.lockutils [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:11:42 compute-0 nova_compute[247704]: 2026-01-31 08:11:42.977 247708 DEBUG nova.network.neutron [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:11:43 compute-0 nova_compute[247704]: 2026-01-31 08:11:43.358 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 989 KiB/s rd, 3.4 MiB/s wr, 119 op/s
Jan 31 08:11:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1515250436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:11:44 compute-0 sudo[327896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:44 compute-0 sudo[327896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:44 compute-0 sudo[327896]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:44 compute-0 sudo[327921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:11:44 compute-0 sudo[327921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:44 compute-0 sudo[327921]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:11:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:44.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:11:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:44.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:44 compute-0 nova_compute[247704]: 2026-01-31 08:11:44.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:44 compute-0 nova_compute[247704]: 2026-01-31 08:11:44.612 247708 DEBUG nova.network.neutron [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updated VIF entry in instance network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:11:44 compute-0 nova_compute[247704]: 2026-01-31 08:11:44.613 247708 DEBUG nova.network.neutron [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:11:44 compute-0 nova_compute[247704]: 2026-01-31 08:11:44.634 247708 DEBUG oslo_concurrency.lockutils [req-802adb83-fa90-4c6f-bd10-609c04475816 req-920e072e-32d2-4ddc-96c1-d1868f6e8187 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:11:44 compute-0 ceph-mon[74496]: pgmap v2270: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 989 KiB/s rd, 3.4 MiB/s wr, 119 op/s
Jan 31 08:11:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1349055314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Jan 31 08:11:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 31 08:11:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 31 08:11:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 31 08:11:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3253062455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:11:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3253062455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:11:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:11:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:11:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:46 compute-0 ovn_controller[149457]: 2026-01-31T08:11:46Z|00490|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:11:46 compute-0 nova_compute[247704]: 2026-01-31 08:11:46.421 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:47 compute-0 ceph-mon[74496]: pgmap v2271: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Jan 31 08:11:47 compute-0 ceph-mon[74496]: osdmap e287: 3 total, 3 up, 3 in
Jan 31 08:11:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2257966533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.3 MiB/s wr, 38 op/s
Jan 31 08:11:47 compute-0 nova_compute[247704]: 2026-01-31 08:11:47.940 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847092.937943, af8d85d9-c7a2-4709-a234-19511f3e4395 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:11:47 compute-0 nova_compute[247704]: 2026-01-31 08:11:47.940 247708 INFO nova.compute.manager [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Stopped (Lifecycle Event)
Jan 31 08:11:48 compute-0 nova_compute[247704]: 2026-01-31 08:11:48.012 247708 DEBUG nova.compute.manager [None req-fbabd612-db07-4173-b96f-ae4ef35d79da - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:11:48 compute-0 nova_compute[247704]: 2026-01-31 08:11:48.015 247708 DEBUG nova.compute.manager [None req-fbabd612-db07-4173-b96f-ae4ef35d79da - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:11:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3444596718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:48 compute-0 ceph-mon[74496]: pgmap v2273: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.3 MiB/s wr, 38 op/s
Jan 31 08:11:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4215506736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2970087483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:48 compute-0 nova_compute[247704]: 2026-01-31 08:11:48.084 247708 INFO nova.compute.manager [None req-fbabd612-db07-4173-b96f-ae4ef35d79da - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 31 08:11:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:48.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:48.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:48 compute-0 nova_compute[247704]: 2026-01-31 08:11:48.360 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1997915657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2564914381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2760709419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:11:49 compute-0 nova_compute[247704]: 2026-01-31 08:11:49.429 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Jan 31 08:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:50.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:50 compute-0 ceph-mon[74496]: pgmap v2274: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Jan 31 08:11:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:50.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:50 compute-0 nova_compute[247704]: 2026-01-31 08:11:50.472 247708 DEBUG nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:50 compute-0 nova_compute[247704]: 2026-01-31 08:11:50.472 247708 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:50 compute-0 nova_compute[247704]: 2026-01-31 08:11:50.473 247708 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:50 compute-0 nova_compute[247704]: 2026-01-31 08:11:50.473 247708 DEBUG oslo_concurrency.lockutils [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:50 compute-0 nova_compute[247704]: 2026-01-31 08:11:50.474 247708 DEBUG nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:50 compute-0 nova_compute[247704]: 2026-01-31 08:11:50.474 247708 WARNING nova.compute.manager [req-b6403214-f910-495c-9c00-9fb98dde4a67 req-c12152c8-e9fe-4e51-a3c8-e0ce76990988 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state None.
Jan 31 08:11:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:50 compute-0 podman[327949]: 2026-01-31 08:11:50.921851066 +0000 UTC m=+0.093815127 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:11:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Jan 31 08:11:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:52.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:11:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:52.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:11:52 compute-0 nova_compute[247704]: 2026-01-31 08:11:52.722 247708 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:11:52 compute-0 nova_compute[247704]: 2026-01-31 08:11:52.723 247708 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:11:52 compute-0 nova_compute[247704]: 2026-01-31 08:11:52.723 247708 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:11:52 compute-0 nova_compute[247704]: 2026-01-31 08:11:52.723 247708 DEBUG oslo_concurrency.lockutils [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:11:52 compute-0 nova_compute[247704]: 2026-01-31 08:11:52.723 247708 DEBUG nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:11:52 compute-0 nova_compute[247704]: 2026-01-31 08:11:52.724 247708 WARNING nova.compute.manager [req-6f40a1e9-afca-4e53-af1f-4940f4297d20 req-b90de4c8-ee2a-4825-ac08-41508ebfd1d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state None.
Jan 31 08:11:52 compute-0 ceph-mon[74496]: pgmap v2275: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Jan 31 08:11:53 compute-0 nova_compute[247704]: 2026-01-31 08:11:53.361 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Jan 31 08:11:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:54.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:11:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:54.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:11:54 compute-0 nova_compute[247704]: 2026-01-31 08:11:54.431 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:54 compute-0 ceph-mon[74496]: pgmap v2276: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Jan 31 08:11:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:11:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 245 op/s
Jan 31 08:11:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:56.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:11:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:56.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:11:56 compute-0 ceph-mon[74496]: pgmap v2277: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 245 op/s
Jan 31 08:11:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 1.6 MiB/s wr, 286 op/s
Jan 31 08:11:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:11:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:58.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:11:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:11:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:11:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:58.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:11:58 compute-0 nova_compute[247704]: 2026-01-31 08:11:58.362 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:58 compute-0 ceph-mon[74496]: pgmap v2278: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 1.6 MiB/s wr, 286 op/s
Jan 31 08:11:59 compute-0 nova_compute[247704]: 2026-01-31 08:11:59.434 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:11:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 964 KiB/s wr, 266 op/s
Jan 31 08:12:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:00.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:00 compute-0 ceph-mon[74496]: pgmap v2279: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 964 KiB/s wr, 266 op/s
Jan 31 08:12:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 36 KiB/s wr, 234 op/s
Jan 31 08:12:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:12:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:02.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:12:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:02.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:02 compute-0 ceph-mon[74496]: pgmap v2280: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 36 KiB/s wr, 234 op/s
Jan 31 08:12:03 compute-0 nova_compute[247704]: 2026-01-31 08:12:03.364 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 440 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 36 KiB/s wr, 205 op/s
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.009 247708 DEBUG nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.010 247708 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.011 247708 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.011 247708 DEBUG oslo_concurrency.lockutils [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.012 247708 DEBUG nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.012 247708 WARNING nova.compute.manager [req-75d9a0c3-3717-46c1-908f-4747d6cf645a req-ec3a4b50-d977-4134-adab-946a5820e3e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:12:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:04.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:04 compute-0 sudo[327982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:04 compute-0 sudo[327982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:04 compute-0 sudo[327982]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:04 compute-0 sudo[328007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:04 compute-0 sudo[328007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:04 compute-0 sudo[328007]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:04.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.436 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.695 247708 INFO nova.compute.manager [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Swapping old allocation on dict_keys(['39dae8fb-a3d6-4f01-ab04-67eb06f4b735']) held by migration 8b4ee4de-01d3-44bb-b219-d1f249fed0f0 for instance
Jan 31 08:12:04 compute-0 nova_compute[247704]: 2026-01-31 08:12:04.776 247708 DEBUG nova.scheduler.client.report [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Overwriting current allocation {'allocations': {'2e9f21b2-0b2a-410a-a5c8-4f9dd13a78fc': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 60}}, 'project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'user_id': '18aee9d81d404f77ac81cde538f140d8', 'consumer_generation': 1} on consumer af8d85d9-c7a2-4709-a234-19511f3e4395 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Jan 31 08:12:04 compute-0 ceph-mon[74496]: pgmap v2281: 305 pgs: 305 active+clean; 440 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 36 KiB/s wr, 205 op/s
Jan 31 08:12:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2218415899' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/784879266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 415 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 37 KiB/s wr, 216 op/s
Jan 31 08:12:05 compute-0 podman[328034]: 2026-01-31 08:12:05.893278348 +0000 UTC m=+0.064324488 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:12:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:06.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:06.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:06 compute-0 ceph-mon[74496]: pgmap v2282: 305 pgs: 305 active+clean; 415 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 37 KiB/s wr, 216 op/s
Jan 31 08:12:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 23 KiB/s wr, 225 op/s
Jan 31 08:12:08 compute-0 nova_compute[247704]: 2026-01-31 08:12:08.192 247708 INFO nova.network.neutron [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating port 4fa5cff4-40cb-4379-bda2-213171730f4f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:12:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:08.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:08 compute-0 nova_compute[247704]: 2026-01-31 08:12:08.366 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:08.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:08 compute-0 nova_compute[247704]: 2026-01-31 08:12:08.412 247708 DEBUG nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:08 compute-0 nova_compute[247704]: 2026-01-31 08:12:08.413 247708 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:08 compute-0 nova_compute[247704]: 2026-01-31 08:12:08.413 247708 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:08 compute-0 nova_compute[247704]: 2026-01-31 08:12:08.414 247708 DEBUG oslo_concurrency.lockutils [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:08 compute-0 nova_compute[247704]: 2026-01-31 08:12:08.414 247708 DEBUG nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:08 compute-0 nova_compute[247704]: 2026-01-31 08:12:08.414 247708 WARNING nova.compute.manager [req-4a27e6bc-f7c2-45c3-a21e-f00cc91bc23b req-809a7da3-3dbe-4d3e-8097-8058c6f5600a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:12:08 compute-0 ceph-mon[74496]: pgmap v2283: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 23 KiB/s wr, 225 op/s
Jan 31 08:12:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:09.027 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:09.028 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.070 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.363 247708 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.364 247708 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.364 247708 DEBUG nova.network.neutron [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.437 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.539 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 23 KiB/s wr, 169 op/s
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.723 247708 DEBUG nova.compute.manager [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.724 247708 DEBUG nova.compute.manager [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing instance network info cache due to event network-changed-4fa5cff4-40cb-4379-bda2-213171730f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:12:09 compute-0 nova_compute[247704]: 2026-01-31 08:12:09.724 247708 DEBUG oslo_concurrency.lockutils [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:09 compute-0 ceph-mon[74496]: pgmap v2284: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 23 KiB/s wr, 169 op/s
Jan 31 08:12:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:10.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:10.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:11.184 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:11.184 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:11.184 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 23 KiB/s wr, 169 op/s
Jan 31 08:12:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:12.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:12.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.395 247708 DEBUG nova.network.neutron [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.458 247708 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.458 247708 DEBUG os_brick.utils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.459 247708 DEBUG oslo_concurrency.lockutils [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.459 247708 DEBUG nova.network.neutron [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Refreshing network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.460 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.477 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.478 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[c37af656-aa9c-4360-b327-a2a4553fb4fa]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.479 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.487 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.487 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa89ee6-72ae-4d10-a4e5-420401e0a6d5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.490 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.501 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.501 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[0557bf85-7af9-49ea-b2b0-6099a7312577]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.502 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[6394865a-0151-4c18-8d0b-2a3c8a1d6803]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.503 247708 DEBUG oslo_concurrency.processutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.530 247708 DEBUG oslo_concurrency.processutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.533 247708 DEBUG os_brick.initiator.connectors.lightos [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.533 247708 DEBUG os_brick.initiator.connectors.lightos [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.533 247708 DEBUG os_brick.initiator.connectors.lightos [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:12:12 compute-0 nova_compute[247704]: 2026-01-31 08:12:12.534 247708 DEBUG os_brick.utils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] <== get_connector_properties: return (74ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:12:12 compute-0 ceph-mon[74496]: pgmap v2285: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 23 KiB/s wr, 169 op/s
Jan 31 08:12:13 compute-0 nova_compute[247704]: 2026-01-31 08:12:13.368 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3258372223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 161 op/s
Jan 31 08:12:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3258372223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:14.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:14 compute-0 nova_compute[247704]: 2026-01-31 08:12:14.440 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:14 compute-0 ceph-mon[74496]: pgmap v2286: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 161 op/s
Jan 31 08:12:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 156 op/s
Jan 31 08:12:15 compute-0 nova_compute[247704]: 2026-01-31 08:12:15.896 247708 DEBUG nova.virt.libvirt.driver [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Jan 31 08:12:15 compute-0 nova_compute[247704]: 2026-01-31 08:12:15.996 247708 DEBUG nova.storage.rbd_utils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] rolling back rbd image(af8d85d9-c7a2-4709-a234-19511f3e4395_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Jan 31 08:12:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:16.030 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:16.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.302 247708 DEBUG nova.storage.rbd_utils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] removing snapshot(nova-resize) on rbd image(af8d85d9-c7a2-4709-a234-19511f3e4395_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:12:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:16.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 31 08:12:16 compute-0 ceph-mon[74496]: pgmap v2287: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 156 op/s
Jan 31 08:12:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2812206464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 31 08:12:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.874 247708 DEBUG nova.virt.libvirt.driver [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Start _get_guest_xml network_info=[{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '45d037e6-681c-4fcd-b6f6-8c6313d49ebf', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eba44545-2378-4cdd-bfad-f9f0d9c730c0', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eba44545-2378-4cdd-bfad-f9f0d9c730c0', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'af8d85d9-c7a2-4709-a234-19511f3e4395', 'attached_at': '2026-01-31T08:12:15.000000', 'detached_at': '', 'volume_id': 'eba44545-2378-4cdd-bfad-f9f0d9c730c0', 'serial': 'eba44545-2378-4cdd-bfad-f9f0d9c730c0'}, 'mount_device': '/dev/vdb', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.880 247708 WARNING nova.virt.libvirt.driver [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.889 247708 DEBUG nova.virt.libvirt.host [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.890 247708 DEBUG nova.virt.libvirt.host [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.895 247708 DEBUG nova.virt.libvirt.host [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.896 247708 DEBUG nova.virt.libvirt.host [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.899 247708 DEBUG nova.virt.libvirt.driver [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.899 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.900 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.901 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.901 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.902 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.902 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.902 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.903 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.903 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.904 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.904 247708 DEBUG nova.virt.hardware [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.905 247708 DEBUG nova.objects.instance [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'vcpu_model' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:16 compute-0 nova_compute[247704]: 2026-01-31 08:12:16.935 247708 DEBUG oslo_concurrency.processutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091787693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.381 247708 DEBUG oslo_concurrency.processutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.421 247708 DEBUG oslo_concurrency.processutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 407 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 933 KiB/s rd, 163 KiB/s wr, 75 op/s
Jan 31 08:12:17 compute-0 ceph-mon[74496]: osdmap e288: 3 total, 3 up, 3 in
Jan 31 08:12:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/310619649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1091787693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530508858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.868 247708 DEBUG oslo_concurrency.processutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.926 247708 DEBUG nova.virt.libvirt.vif [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.927 247708 DEBUG nova.network.os_vif_util [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.928 247708 DEBUG nova.network.os_vif_util [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.932 247708 DEBUG nova.virt.libvirt.driver [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <uuid>af8d85d9-c7a2-4709-a234-19511f3e4395</uuid>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <name>instance-00000077</name>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerActionsTestOtherB-server-2134235451</nova:name>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:12:16</nova:creationTime>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <nova:user uuid="18aee9d81d404f77ac81cde538f140d8">tempest-ServerActionsTestOtherB-2012907318-project-member</nova:user>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <nova:project uuid="c3ddadeb950a490db5c99da98a32c9ec">tempest-ServerActionsTestOtherB-2012907318</nova:project>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <nova:port uuid="4fa5cff4-40cb-4379-bda2-213171730f4f">
Jan 31 08:12:17 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <system>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <entry name="serial">af8d85d9-c7a2-4709-a234-19511f3e4395</entry>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <entry name="uuid">af8d85d9-c7a2-4709-a234-19511f3e4395</entry>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </system>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <os>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   </os>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <features>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   </features>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/af8d85d9-c7a2-4709-a234-19511f3e4395_disk">
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </source>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/af8d85d9-c7a2-4709-a234-19511f3e4395_disk.config">
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </source>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-eba44545-2378-4cdd-bfad-f9f0d9c730c0">
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </source>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:12:17 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <serial>eba44545-2378-4cdd-bfad-f9f0d9c730c0</serial>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:af:7b:39"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <target dev="tap4fa5cff4-40"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395/console.log" append="off"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <video>
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </video>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <input type="keyboard" bus="usb"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:12:17 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:12:17 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:12:17 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:12:17 compute-0 nova_compute[247704]: </domain>
Jan 31 08:12:17 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.934 247708 DEBUG nova.compute.manager [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Preparing to wait for external event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.934 247708 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.934 247708 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.935 247708 DEBUG oslo_concurrency.lockutils [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.936 247708 DEBUG nova.virt.libvirt.vif [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.936 247708 DEBUG nova.network.os_vif_util [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.937 247708 DEBUG nova.network.os_vif_util [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.938 247708 DEBUG os_vif [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.938 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.939 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.939 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.944 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.944 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fa5cff4-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.945 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fa5cff4-40, col_values=(('external_ids', {'iface-id': '4fa5cff4-40cb-4379-bda2-213171730f4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:7b:39', 'vm-uuid': 'af8d85d9-c7a2-4709-a234-19511f3e4395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.946 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:17 compute-0 NetworkManager[49108]: <info>  [1769847137.9477] manager: (tap4fa5cff4-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.949 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.953 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:17 compute-0 nova_compute[247704]: 2026-01-31 08:12:17.954 247708 INFO os_vif [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40')
Jan 31 08:12:18 compute-0 kernel: tap4fa5cff4-40: entered promiscuous mode
Jan 31 08:12:18 compute-0 NetworkManager[49108]: <info>  [1769847138.0124] manager: (tap4fa5cff4-40): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Jan 31 08:12:18 compute-0 ovn_controller[149457]: 2026-01-31T08:12:18Z|00491|binding|INFO|Claiming lport 4fa5cff4-40cb-4379-bda2-213171730f4f for this chassis.
Jan 31 08:12:18 compute-0 ovn_controller[149457]: 2026-01-31T08:12:18Z|00492|binding|INFO|4fa5cff4-40cb-4379-bda2-213171730f4f: Claiming fa:16:3e:af:7b:39 10.100.0.6
Jan 31 08:12:18 compute-0 nova_compute[247704]: 2026-01-31 08:12:18.014 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.021 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:7b:39 10.100.0.6'], port_security=['fa:16:3e:af:7b:39 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af8d85d9-c7a2-4709-a234-19511f3e4395', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '10', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4fa5cff4-40cb-4379-bda2-213171730f4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:18 compute-0 ovn_controller[149457]: 2026-01-31T08:12:18Z|00493|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f ovn-installed in OVS
Jan 31 08:12:18 compute-0 ovn_controller[149457]: 2026-01-31T08:12:18Z|00494|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f up in Southbound
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.022 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4fa5cff4-40cb-4379-bda2-213171730f4f in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 bound to our chassis
Jan 31 08:12:18 compute-0 nova_compute[247704]: 2026-01-31 08:12:18.023 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.025 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:12:18 compute-0 nova_compute[247704]: 2026-01-31 08:12:18.027 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.040 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ace6a9e9-c7ee-42a4-a8a3-e1d946e603e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:18 compute-0 systemd-udevd[328200]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:12:18 compute-0 systemd-machined[214448]: New machine qemu-50-instance-00000077.
Jan 31 08:12:18 compute-0 NetworkManager[49108]: <info>  [1769847138.0622] device (tap4fa5cff4-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:12:18 compute-0 NetworkManager[49108]: <info>  [1769847138.0631] device (tap4fa5cff4-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:12:18 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:12:18 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-00000077.
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.064 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[50e31d07-d110-43aa-9ca3-f7ac3bb2480d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.069 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[71e3d8be-1bb3-4a67-a1e2-69bc5761f7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:18 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.091 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8256e594-0dd4-4aff-a285-1b83d5ed4f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.104 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3552fb-5af9-44b4-a442-c3bbb58be60d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719151, 'reachable_time': 38392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328207, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.117 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b25e03fb-fe2b-45c8-bb18-d2a500c99a31]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719161, 'tstamp': 719161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328211, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719163, 'tstamp': 719163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328211, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.118 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:18 compute-0 nova_compute[247704]: 2026-01-31 08:12:18.119 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:18 compute-0 nova_compute[247704]: 2026-01-31 08:12:18.121 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.121 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.121 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.121 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:18.122 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:12:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:18.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:12:18 compute-0 nova_compute[247704]: 2026-01-31 08:12:18.293 247708 DEBUG nova.network.neutron [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updated VIF entry in instance network info cache for port 4fa5cff4-40cb-4379-bda2-213171730f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:12:18 compute-0 nova_compute[247704]: 2026-01-31 08:12:18.294 247708 DEBUG nova.network.neutron [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:18 compute-0 nova_compute[247704]: 2026-01-31 08:12:18.334 247708 DEBUG oslo_concurrency.lockutils [req-02900304-5cc7-482c-9f18-fc04d88c27ef req-0d01e1c8-22f4-4f97-bbec-472ac695c9a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:18.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:18 compute-0 ceph-mon[74496]: pgmap v2289: 305 pgs: 305 active+clean; 407 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 933 KiB/s rd, 163 KiB/s wr, 75 op/s
Jan 31 08:12:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2530508858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.256 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.442 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 632 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.703 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847139.7026258, af8d85d9-c7a2-4709-a234-19511f3e4395 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.704 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Started (Lifecycle Event)
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.733 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.738 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847139.7030182, af8d85d9-c7a2-4709-a234-19511f3e4395 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.739 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Paused (Lifecycle Event)
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.780 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.785 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:12:19 compute-0 nova_compute[247704]: 2026-01-31 08:12:19.823 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:12:20
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'images', '.mgr', 'backups', 'volumes', '.rgw.root', 'vms']
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:12:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:20.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:12:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:12:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:12:20 compute-0 ceph-mon[74496]: pgmap v2290: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 632 KiB/s rd, 3.2 MiB/s wr, 99 op/s
Jan 31 08:12:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 482 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 694 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 31 08:12:21 compute-0 podman[328277]: 2026-01-31 08:12:21.979814057 +0000 UTC m=+0.149890714 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:12:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:12:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:22.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:12:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:22.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:22 compute-0 ceph-mon[74496]: pgmap v2291: 305 pgs: 305 active+clean; 482 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 694 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 31 08:12:22 compute-0 nova_compute[247704]: 2026-01-31 08:12:22.947 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 696 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.855 247708 DEBUG nova.compute.manager [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.855 247708 DEBUG oslo_concurrency.lockutils [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.856 247708 DEBUG oslo_concurrency.lockutils [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.856 247708 DEBUG oslo_concurrency.lockutils [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.857 247708 DEBUG nova.compute.manager [req-8adf6ed5-725b-492f-beb7-b2e8ef9405f3 req-b4183c6e-ab91-4282-8090-7b1d92b9fc5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Processing event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.858 247708 DEBUG nova.compute.manager [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.863 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847143.8632402, af8d85d9-c7a2-4709-a234-19511f3e4395 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.864 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Resumed (Lifecycle Event)
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.873 247708 INFO nova.virt.libvirt.driver [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance running successfully.
Jan 31 08:12:23 compute-0 nova_compute[247704]: 2026-01-31 08:12:23.874 247708 DEBUG nova.virt.libvirt.driver [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 31 08:12:24 compute-0 nova_compute[247704]: 2026-01-31 08:12:24.075 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:24 compute-0 nova_compute[247704]: 2026-01-31 08:12:24.079 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:12:24 compute-0 nova_compute[247704]: 2026-01-31 08:12:24.246 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 08:12:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:24.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:24.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:24 compute-0 sudo[328306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:24 compute-0 sudo[328306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:24 compute-0 sudo[328306]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:24 compute-0 nova_compute[247704]: 2026-01-31 08:12:24.445 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:24 compute-0 sudo[328331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:24 compute-0 sudo[328331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:24 compute-0 sudo[328331]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:24 compute-0 nova_compute[247704]: 2026-01-31 08:12:24.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:24 compute-0 ceph-mon[74496]: pgmap v2292: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 696 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Jan 31 08:12:25 compute-0 nova_compute[247704]: 2026-01-31 08:12:25.333 247708 INFO nova.compute.manager [None req-90876105-f250-45b1-9ac3-c4a4f14a7ff5 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance to original state: 'active'
Jan 31 08:12:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 31 08:12:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 31 08:12:25 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 31 08:12:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 696 KiB/s rd, 4.8 MiB/s wr, 148 op/s
Jan 31 08:12:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3098008576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4176926919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:25 compute-0 ceph-mon[74496]: osdmap e289: 3 total, 3 up, 3 in
Jan 31 08:12:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:26.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:26.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:26 compute-0 ceph-mon[74496]: pgmap v2294: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 696 KiB/s rd, 4.8 MiB/s wr, 148 op/s
Jan 31 08:12:27 compute-0 nova_compute[247704]: 2026-01-31 08:12:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:27 compute-0 nova_compute[247704]: 2026-01-31 08:12:27.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 148 op/s
Jan 31 08:12:27 compute-0 ceph-mon[74496]: pgmap v2295: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 148 op/s
Jan 31 08:12:27 compute-0 nova_compute[247704]: 2026-01-31 08:12:27.948 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:28 compute-0 nova_compute[247704]: 2026-01-31 08:12:28.135 247708 DEBUG nova.compute.manager [req-2e5fde07-3a93-489f-a2f6-5bbc2165cfdc req-ffd4bbda-3a18-47e2-88c7-03d931842bc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:28 compute-0 nova_compute[247704]: 2026-01-31 08:12:28.136 247708 DEBUG oslo_concurrency.lockutils [req-2e5fde07-3a93-489f-a2f6-5bbc2165cfdc req-ffd4bbda-3a18-47e2-88c7-03d931842bc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:28 compute-0 nova_compute[247704]: 2026-01-31 08:12:28.136 247708 DEBUG oslo_concurrency.lockutils [req-2e5fde07-3a93-489f-a2f6-5bbc2165cfdc req-ffd4bbda-3a18-47e2-88c7-03d931842bc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:28 compute-0 nova_compute[247704]: 2026-01-31 08:12:28.137 247708 DEBUG oslo_concurrency.lockutils [req-2e5fde07-3a93-489f-a2f6-5bbc2165cfdc req-ffd4bbda-3a18-47e2-88c7-03d931842bc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:28 compute-0 nova_compute[247704]: 2026-01-31 08:12:28.137 247708 DEBUG nova.compute.manager [req-2e5fde07-3a93-489f-a2f6-5bbc2165cfdc req-ffd4bbda-3a18-47e2-88c7-03d931842bc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:28 compute-0 nova_compute[247704]: 2026-01-31 08:12:28.138 247708 WARNING nova.compute.manager [req-2e5fde07-3a93-489f-a2f6-5bbc2165cfdc req-ffd4bbda-3a18-47e2-88c7-03d931842bc6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state active and task_state None.
Jan 31 08:12:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:28.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:28.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:28 compute-0 sudo[328358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:28 compute-0 sudo[328358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:28 compute-0 sudo[328358]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:28 compute-0 sudo[328383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:28 compute-0 sudo[328383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:28 compute-0 sudo[328383]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:28 compute-0 sudo[328408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:28 compute-0 sudo[328408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:28 compute-0 sudo[328408]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:28 compute-0 sudo[328433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:12:28 compute-0 sudo[328433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:29 compute-0 sudo[328433]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:29 compute-0 nova_compute[247704]: 2026-01-31 08:12:29.456 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:29 compute-0 nova_compute[247704]: 2026-01-31 08:12:29.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:29 compute-0 nova_compute[247704]: 2026-01-31 08:12:29.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:12:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:12:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:12:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 54e761c0-1ee7-4c29-8639-0e42e27744c3 does not exist
Jan 31 08:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:12:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d87c1941-fe2d-4d5b-96e1-1429075f53fd does not exist
Jan 31 08:12:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0db25b4e-f40e-432a-9101-62cbf16c015e does not exist
Jan 31 08:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:12:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:12:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:12:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:12:29 compute-0 sudo[328490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:29 compute-0 sudo[328490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:29 compute-0 sudo[328490]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Jan 31 08:12:29 compute-0 sudo[328515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:29 compute-0 sudo[328515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:29 compute-0 sudo[328515]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:29 compute-0 sudo[328540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:29 compute-0 sudo[328540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:29 compute-0 sudo[328540]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:29 compute-0 sudo[328565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:12:29 compute-0 sudo[328565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:30 compute-0 podman[328629]: 2026-01-31 08:12:30.164313983 +0000 UTC m=+0.051810294 container create 9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_stonebraker, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:12:30 compute-0 systemd[1]: Started libpod-conmon-9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc.scope.
Jan 31 08:12:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:30 compute-0 podman[328629]: 2026-01-31 08:12:30.134768073 +0000 UTC m=+0.022264404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:12:30 compute-0 podman[328629]: 2026-01-31 08:12:30.246980817 +0000 UTC m=+0.134477158 container init 9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:12:30 compute-0 podman[328629]: 2026-01-31 08:12:30.254012858 +0000 UTC m=+0.141509179 container start 9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_stonebraker, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:12:30 compute-0 podman[328629]: 2026-01-31 08:12:30.258349514 +0000 UTC m=+0.145845855 container attach 9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:12:30 compute-0 suspicious_stonebraker[328645]: 167 167
Jan 31 08:12:30 compute-0 systemd[1]: libpod-9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc.scope: Deactivated successfully.
Jan 31 08:12:30 compute-0 podman[328629]: 2026-01-31 08:12:30.261257435 +0000 UTC m=+0.148753776 container died 9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:12:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c108212869394b173f0b1f84f33ed6e846205fe30e6d856638f510f8100fec0-merged.mount: Deactivated successfully.
Jan 31 08:12:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:30.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:30 compute-0 podman[328629]: 2026-01-31 08:12:30.301610338 +0000 UTC m=+0.189106649 container remove 9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:12:30 compute-0 systemd[1]: libpod-conmon-9ed8ccb10d3391e9075eb93aac5d592b462f5e22ea6a687953431438169b33dc.scope: Deactivated successfully.
Jan 31 08:12:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:30 compute-0 podman[328670]: 2026-01-31 08:12:30.477577776 +0000 UTC m=+0.063659753 container create 59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 08:12:30 compute-0 systemd[1]: Started libpod-conmon-59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19.scope.
Jan 31 08:12:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:30 compute-0 podman[328670]: 2026-01-31 08:12:30.449791029 +0000 UTC m=+0.035873086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ca851c04912e23af38f6cde00ee0f2613928291db6b0d7ba6f1ee93d9c73d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ca851c04912e23af38f6cde00ee0f2613928291db6b0d7ba6f1ee93d9c73d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ca851c04912e23af38f6cde00ee0f2613928291db6b0d7ba6f1ee93d9c73d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ca851c04912e23af38f6cde00ee0f2613928291db6b0d7ba6f1ee93d9c73d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ca851c04912e23af38f6cde00ee0f2613928291db6b0d7ba6f1ee93d9c73d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:30 compute-0 podman[328670]: 2026-01-31 08:12:30.57993673 +0000 UTC m=+0.166018697 container init 59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:12:30 compute-0 podman[328670]: 2026-01-31 08:12:30.58731497 +0000 UTC m=+0.173396907 container start 59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kirch, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:12:30 compute-0 podman[328670]: 2026-01-31 08:12:30.590773715 +0000 UTC m=+0.176855702 container attach 59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kirch, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:12:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:30 compute-0 ceph-mon[74496]: pgmap v2296: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Jan 31 08:12:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3571684988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:31 compute-0 boring_kirch[328687]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:12:31 compute-0 boring_kirch[328687]: --> relative data size: 1.0
Jan 31 08:12:31 compute-0 boring_kirch[328687]: --> All data devices are unavailable
Jan 31 08:12:31 compute-0 systemd[1]: libpod-59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19.scope: Deactivated successfully.
Jan 31 08:12:31 compute-0 podman[328670]: 2026-01-31 08:12:31.333527652 +0000 UTC m=+0.919609589 container died 59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3ca851c04912e23af38f6cde00ee0f2613928291db6b0d7ba6f1ee93d9c73d2-merged.mount: Deactivated successfully.
Jan 31 08:12:31 compute-0 podman[328670]: 2026-01-31 08:12:31.393568745 +0000 UTC m=+0.979650672 container remove 59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:12:31 compute-0 systemd[1]: libpod-conmon-59c90dcfe956417d5f6e0aaf262b91ab3f3007091d982f5e898e46917b627f19.scope: Deactivated successfully.
Jan 31 08:12:31 compute-0 sudo[328565]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:31 compute-0 sudo[328714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:31 compute-0 sudo[328714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:31 compute-0 sudo[328714]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:31 compute-0 nova_compute[247704]: 2026-01-31 08:12:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:31 compute-0 nova_compute[247704]: 2026-01-31 08:12:31.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:12:31 compute-0 sudo[328740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:31 compute-0 sudo[328740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:31 compute-0 sudo[328740]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:31 compute-0 sudo[328765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:31 compute-0 sudo[328765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:31 compute-0 sudo[328765]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:31 compute-0 sudo[328790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:12:31 compute-0 sudo[328790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3123372633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 377 KiB/s wr, 82 op/s
Jan 31 08:12:31 compute-0 podman[328856]: 2026-01-31 08:12:31.947610896 +0000 UTC m=+0.040764325 container create ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:12:31 compute-0 systemd[1]: Started libpod-conmon-ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc.scope.
Jan 31 08:12:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:32 compute-0 podman[328856]: 2026-01-31 08:12:31.93261582 +0000 UTC m=+0.025769269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:12:32 compute-0 podman[328856]: 2026-01-31 08:12:32.042071147 +0000 UTC m=+0.135224626 container init ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:32 compute-0 podman[328856]: 2026-01-31 08:12:32.051120988 +0000 UTC m=+0.144274417 container start ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 31 08:12:32 compute-0 podman[328856]: 2026-01-31 08:12:32.054169252 +0000 UTC m=+0.147322731 container attach ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:12:32 compute-0 festive_pascal[328873]: 167 167
Jan 31 08:12:32 compute-0 systemd[1]: libpod-ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc.scope: Deactivated successfully.
Jan 31 08:12:32 compute-0 podman[328856]: 2026-01-31 08:12:32.057046142 +0000 UTC m=+0.150199621 container died ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:12:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-425cb210f55b2b77cdacd407e6c7cd10a5922fb972eb1568e84e46384279edf9-merged.mount: Deactivated successfully.
Jan 31 08:12:32 compute-0 podman[328856]: 2026-01-31 08:12:32.108846054 +0000 UTC m=+0.201999483 container remove ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pascal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:32 compute-0 systemd[1]: libpod-conmon-ee24b5bddda474d9a49398a70ee65aceab2b0b8f29cea4d619c9899e2b4c15bc.scope: Deactivated successfully.
Jan 31 08:12:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:32.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:32 compute-0 podman[328897]: 2026-01-31 08:12:32.295972543 +0000 UTC m=+0.044792042 container create 708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:12:32 compute-0 systemd[1]: Started libpod-conmon-708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034.scope.
Jan 31 08:12:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7681b02714c93e837e552f3fa2f89be68264f36bf93f5d802133142df2f31653/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7681b02714c93e837e552f3fa2f89be68264f36bf93f5d802133142df2f31653/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7681b02714c93e837e552f3fa2f89be68264f36bf93f5d802133142df2f31653/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7681b02714c93e837e552f3fa2f89be68264f36bf93f5d802133142df2f31653/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:32 compute-0 podman[328897]: 2026-01-31 08:12:32.370311405 +0000 UTC m=+0.119130924 container init 708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:12:32 compute-0 nova_compute[247704]: 2026-01-31 08:12:32.374 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:32 compute-0 podman[328897]: 2026-01-31 08:12:32.278798375 +0000 UTC m=+0.027617914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:12:32 compute-0 nova_compute[247704]: 2026-01-31 08:12:32.376 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:32 compute-0 nova_compute[247704]: 2026-01-31 08:12:32.376 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:12:32 compute-0 podman[328897]: 2026-01-31 08:12:32.385814023 +0000 UTC m=+0.134633532 container start 708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:12:32 compute-0 podman[328897]: 2026-01-31 08:12:32.391759438 +0000 UTC m=+0.140578957 container attach 708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2130249577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:32 compute-0 ceph-mon[74496]: pgmap v2297: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 377 KiB/s wr, 82 op/s
Jan 31 08:12:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2904659359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:32 compute-0 nova_compute[247704]: 2026-01-31 08:12:32.950 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:33 compute-0 adoring_gauss[328915]: {
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:     "0": [
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:         {
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "devices": [
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "/dev/loop3"
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             ],
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "lv_name": "ceph_lv0",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "lv_size": "7511998464",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "name": "ceph_lv0",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "tags": {
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.cluster_name": "ceph",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.crush_device_class": "",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.encrypted": "0",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.osd_id": "0",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.type": "block",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:                 "ceph.vdo": "0"
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             },
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "type": "block",
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:             "vg_name": "ceph_vg0"
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:         }
Jan 31 08:12:33 compute-0 adoring_gauss[328915]:     ]
Jan 31 08:12:33 compute-0 adoring_gauss[328915]: }
Jan 31 08:12:33 compute-0 systemd[1]: libpod-708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034.scope: Deactivated successfully.
Jan 31 08:12:33 compute-0 conmon[328915]: conmon 708b7b2059bba27f238d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034.scope/container/memory.events
Jan 31 08:12:33 compute-0 podman[328897]: 2026-01-31 08:12:33.165690686 +0000 UTC m=+0.914510215 container died 708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7681b02714c93e837e552f3fa2f89be68264f36bf93f5d802133142df2f31653-merged.mount: Deactivated successfully.
Jan 31 08:12:33 compute-0 podman[328897]: 2026-01-31 08:12:33.218495562 +0000 UTC m=+0.967315051 container remove 708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:12:33 compute-0 systemd[1]: libpod-conmon-708b7b2059bba27f238d4526d9613e3e0d2e9d2e60fffbfe253d816dede07034.scope: Deactivated successfully.
Jan 31 08:12:33 compute-0 sudo[328790]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:33 compute-0 sudo[328934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:33 compute-0 sudo[328934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:33 compute-0 sudo[328934]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:33 compute-0 sudo[328959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:33 compute-0 sudo[328959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:33 compute-0 sudo[328959]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:33 compute-0 sudo[328984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:33 compute-0 sudo[328984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:33 compute-0 sudo[328984]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:33 compute-0 sudo[329009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:12:33 compute-0 sudo[329009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 08:12:33 compute-0 podman[329072]: 2026-01-31 08:12:33.767633323 +0000 UTC m=+0.035985878 container create 42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williams, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:12:33 compute-0 systemd[1]: Started libpod-conmon-42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883.scope.
Jan 31 08:12:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:33 compute-0 podman[329072]: 2026-01-31 08:12:33.842594909 +0000 UTC m=+0.110947434 container init 42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:12:33 compute-0 podman[329072]: 2026-01-31 08:12:33.749806048 +0000 UTC m=+0.018158583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:12:33 compute-0 podman[329072]: 2026-01-31 08:12:33.848658597 +0000 UTC m=+0.117011132 container start 42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williams, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:33 compute-0 nostalgic_williams[329089]: 167 167
Jan 31 08:12:33 compute-0 systemd[1]: libpod-42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883.scope: Deactivated successfully.
Jan 31 08:12:33 compute-0 podman[329072]: 2026-01-31 08:12:33.85327841 +0000 UTC m=+0.121630945 container attach 42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williams, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:12:33 compute-0 podman[329072]: 2026-01-31 08:12:33.85369985 +0000 UTC m=+0.122052385 container died 42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williams, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b75d32cfb5ce9466f613dc4ca2c4125bc8a288fc7d19cfdc42f827d4e309757-merged.mount: Deactivated successfully.
Jan 31 08:12:33 compute-0 podman[329072]: 2026-01-31 08:12:33.884022639 +0000 UTC m=+0.152375174 container remove 42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williams, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:12:33 compute-0 systemd[1]: libpod-conmon-42b9abbb4b88b3ab1ef76f32251d7fb893eab30260fd630e485a2e8a831d0883.scope: Deactivated successfully.
Jan 31 08:12:34 compute-0 podman[329113]: 2026-01-31 08:12:34.026722606 +0000 UTC m=+0.035239890 container create b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hypatia, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:12:34 compute-0 systemd[1]: Started libpod-conmon-b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd.scope.
Jan 31 08:12:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624535bf767a8601299ef2e2571c7d05cb6bc85380e43a61272003b1963add38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624535bf767a8601299ef2e2571c7d05cb6bc85380e43a61272003b1963add38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624535bf767a8601299ef2e2571c7d05cb6bc85380e43a61272003b1963add38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624535bf767a8601299ef2e2571c7d05cb6bc85380e43a61272003b1963add38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:34 compute-0 podman[329113]: 2026-01-31 08:12:34.011237559 +0000 UTC m=+0.019754823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:12:34 compute-0 podman[329113]: 2026-01-31 08:12:34.110258521 +0000 UTC m=+0.118775765 container init b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hypatia, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:34 compute-0 podman[329113]: 2026-01-31 08:12:34.116105623 +0000 UTC m=+0.124622867 container start b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hypatia, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 08:12:34 compute-0 podman[329113]: 2026-01-31 08:12:34.119286881 +0000 UTC m=+0.127804145 container attach b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hypatia, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:12:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:34.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.463 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.599 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.599 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.600 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.600 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.601 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.602 247708 INFO nova.compute.manager [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Terminating instance
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.605 247708 DEBUG nova.compute.manager [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:12:34 compute-0 kernel: tap4fa5cff4-40 (unregistering): left promiscuous mode
Jan 31 08:12:34 compute-0 NetworkManager[49108]: <info>  [1769847154.6613] device (tap4fa5cff4-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:12:34 compute-0 ovn_controller[149457]: 2026-01-31T08:12:34Z|00495|binding|INFO|Releasing lport 4fa5cff4-40cb-4379-bda2-213171730f4f from this chassis (sb_readonly=0)
Jan 31 08:12:34 compute-0 ovn_controller[149457]: 2026-01-31T08:12:34Z|00496|binding|INFO|Setting lport 4fa5cff4-40cb-4379-bda2-213171730f4f down in Southbound
Jan 31 08:12:34 compute-0 ovn_controller[149457]: 2026-01-31T08:12:34Z|00497|binding|INFO|Removing iface tap4fa5cff4-40 ovn-installed in OVS
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.670 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.678 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:34 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 31 08:12:34 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000077.scope: Consumed 12.715s CPU time.
Jan 31 08:12:34 compute-0 systemd-machined[214448]: Machine qemu-50-instance-00000077 terminated.
Jan 31 08:12:34 compute-0 ceph-mon[74496]: pgmap v2298: 305 pgs: 305 active+clean; 499 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 08:12:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1081685098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.843 247708 INFO nova.virt.libvirt.driver [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Instance destroyed successfully.
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.843 247708 DEBUG nova.objects.instance [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid af8d85d9-c7a2-4709-a234-19511f3e4395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.859 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:7b:39 10.100.0.6'], port_security=['fa:16:3e:af:7b:39 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'af8d85d9-c7a2-4709-a234-19511f3e4395', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '12', 'neutron:security_group_ids': '5b02cc0a-856b-4d31-80e9-eccd1c696448', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=4fa5cff4-40cb-4379-bda2-213171730f4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.860 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 4fa5cff4-40cb-4379-bda2-213171730f4f in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.861 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8014d6b-23e1-41ef-b5e2-3d770d302e72
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.877 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d9934441-2c03-4e2b-bad9-5227c61360f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.902 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c0e072-adb1-4b71-a3a9-269c434848f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.906 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0c73dfaa-3cd4-4146-8a87-ed1bc11be5cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.908 247708 DEBUG nova.virt.libvirt.vif [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2134235451',display_name='tempest-ServerActionsTestOtherB-server-2134235451',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2134235451',id=119,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIBsrFBicHYtOHR1R6vRAALJ9Bas8uQqwQxjg40t1CSqKgx9y2TPvbXQ87aJFDYMxnRLTQoY5DczCJahVhqvpmedcWwWPeQP/d3vWA175RU6Mi7x6I2zA/JKc2hVh/HOXw==',key_name='tempest-keypair-1408271761',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:12:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-9xfxf7l5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:12:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='18aee9d81d404f77ac81cde538f140d8',uuid=af8d85d9-c7a2-4709-a234-19511f3e4395,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.909 247708 DEBUG nova.network.os_vif_util [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.909 247708 DEBUG nova.network.os_vif_util [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.910 247708 DEBUG os_vif [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.911 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.911 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fa5cff4-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.912 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.918 247708 INFO os_vif [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:7b:39,bridge_name='br-int',has_traffic_filtering=True,id=4fa5cff4-40cb-4379-bda2-213171730f4f,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fa5cff4-40')
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.930 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0e23619b-9f04-4e4d-8870-eb252bc44c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.944 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a79474-b361-46ca-af79-3b8482b81b4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8014d6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:c1:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719151, 'reachable_time': 38392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329169, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.958 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b45d505a-5f70-4360-bf77-668b45663edb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719161, 'tstamp': 719161}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329181, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape8014d6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719163, 'tstamp': 719163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329181, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.960 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:34 compute-0 nova_compute[247704]: 2026-01-31 08:12:34.962 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.963 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8014d6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.963 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.964 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8014d6b-20, col_values=(('external_ids', {'iface-id': '4bb3ff19-f70b-4c8d-a829-66ff18233b61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:34.964 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]: {
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]:         "osd_id": 0,
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]:         "type": "bluestore"
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]:     }
Jan 31 08:12:35 compute-0 ecstatic_hypatia[329130]: }
Jan 31 08:12:35 compute-0 systemd[1]: libpod-b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd.scope: Deactivated successfully.
Jan 31 08:12:35 compute-0 podman[329113]: 2026-01-31 08:12:35.063039307 +0000 UTC m=+1.071556561 container died b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hypatia, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:12:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-624535bf767a8601299ef2e2571c7d05cb6bc85380e43a61272003b1963add38-merged.mount: Deactivated successfully.
Jan 31 08:12:35 compute-0 podman[329113]: 2026-01-31 08:12:35.123887179 +0000 UTC m=+1.132404423 container remove b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 31 08:12:35 compute-0 systemd[1]: libpod-conmon-b02fd5174cebd216fbdefd7fbb262b4f13167f103d80eed8fb1a3b097ce7d3cd.scope: Deactivated successfully.
Jan 31 08:12:35 compute-0 sudo[329009]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:12:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:12:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7c1ffe6f-a5c2-4397-923b-e29d16c17c54 does not exist
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev dd594ec2-6ebd-4b88-a44a-d5ebc5eff5cc does not exist
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fd93db10-7548-4fb8-b9c4-3652a53a8db6 does not exist
Jan 31 08:12:35 compute-0 sudo[329208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:35 compute-0 sudo[329208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:35 compute-0 sudo[329208]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:35 compute-0 sudo[329233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:12:35 compute-0 sudo[329233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:35 compute-0 sudo[329233]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.359 247708 INFO nova.virt.libvirt.driver [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Deleting instance files /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395_del
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.360 247708 INFO nova.virt.libvirt.driver [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Deletion of /var/lib/nova/instances/af8d85d9-c7a2-4709-a234-19511f3e4395_del complete
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.398 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [{"id": "4fa5cff4-40cb-4379-bda2-213171730f4f", "address": "fa:16:3e:af:7b:39", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fa5cff4-40", "ovs_interfaceid": "4fa5cff4-40cb-4379-bda2-213171730f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010663851665736088 of space, bias 1.0, pg target 3.1991554997208262 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016194252512562814 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8589431532663316 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 08:12:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 526 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 873 KiB/s wr, 137 op/s
Jan 31 08:12:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/413626707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.869 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-af8d85d9-c7a2-4709-a234-19511f3e4395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.869 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.869 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.870 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.870 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.971 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.971 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.971 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.972 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.972 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.995 247708 INFO nova.compute.manager [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Took 1.39 seconds to destroy the instance on the hypervisor.
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.996 247708 DEBUG oslo.service.loopingcall [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.996 247708 DEBUG nova.compute.manager [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:12:35 compute-0 nova_compute[247704]: 2026-01-31 08:12:35.997 247708 DEBUG nova.network.neutron [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:12:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:36.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661894411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.395 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:36.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:36 compute-0 podman[329282]: 2026-01-31 08:12:36.486742248 +0000 UTC m=+0.052068291 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.561 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.561 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.737 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.738 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4137MB free_disk=20.756126403808594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.738 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.738 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:36 compute-0 ceph-mon[74496]: pgmap v2299: 305 pgs: 305 active+clean; 526 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 873 KiB/s wr, 137 op/s
Jan 31 08:12:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/661894411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/263703871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.881 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 53a5c321-1278-4df4-9fb0-feb465508681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.882 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance af8d85d9-c7a2-4709-a234-19511f3e4395 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.882 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.882 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:12:36 compute-0 nova_compute[247704]: 2026-01-31 08:12:36.961 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.235 247708 DEBUG nova.network.neutron [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.243 247708 DEBUG nova.compute.manager [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.244 247708 DEBUG oslo_concurrency.lockutils [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.245 247708 DEBUG oslo_concurrency.lockutils [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.245 247708 DEBUG oslo_concurrency.lockutils [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.246 247708 DEBUG nova.compute.manager [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.246 247708 DEBUG nova.compute.manager [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-unplugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.247 247708 DEBUG nova.compute.manager [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.247 247708 DEBUG oslo_concurrency.lockutils [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.247 247708 DEBUG oslo_concurrency.lockutils [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.248 247708 DEBUG oslo_concurrency.lockutils [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.248 247708 DEBUG nova.compute.manager [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] No waiting events found dispatching network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.249 247708 WARNING nova.compute.manager [req-7e58d8e1-eac0-4ef6-b1e8-d2bf008794c6 req-b6f1f680-ea1e-4f55-9722-65d8784fcb2e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received unexpected event network-vif-plugged-4fa5cff4-40cb-4379-bda2-213171730f4f for instance with vm_state active and task_state deleting.
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.285 247708 INFO nova.compute.manager [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Took 1.29 seconds to deallocate network for instance.
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.405 247708 DEBUG nova.compute.manager [req-09ddc2f1-6d0f-419e-94a8-df015c904871 req-ddbcdee6-c12c-44be-9617-8f9fca104d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Received event network-vif-deleted-4fa5cff4-40cb-4379-bda2-213171730f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/793552847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.435 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.443 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.465 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.514 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.515 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.642 247708 INFO nova.compute.manager [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Took 0.36 seconds to detach 1 volumes for instance.
Jan 31 08:12:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 507 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 204 op/s
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.714 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.714 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:37 compute-0 nova_compute[247704]: 2026-01-31 08:12:37.813 247708 DEBUG oslo_concurrency.processutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/793552847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4149491747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:38 compute-0 nova_compute[247704]: 2026-01-31 08:12:38.283 247708 DEBUG oslo_concurrency.processutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:38 compute-0 nova_compute[247704]: 2026-01-31 08:12:38.292 247708 DEBUG nova.compute.provider_tree [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:12:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:38.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:38.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:38 compute-0 nova_compute[247704]: 2026-01-31 08:12:38.429 247708 DEBUG nova.scheduler.client.report [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:12:38 compute-0 nova_compute[247704]: 2026-01-31 08:12:38.466 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:38 compute-0 nova_compute[247704]: 2026-01-31 08:12:38.581 247708 INFO nova.scheduler.client.report [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Deleted allocations for instance af8d85d9-c7a2-4709-a234-19511f3e4395
Jan 31 08:12:38 compute-0 nova_compute[247704]: 2026-01-31 08:12:38.736 247708 DEBUG oslo_concurrency.lockutils [None req-50ab1c0b-8d5f-459e-865e-4a5a1595dc08 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "af8d85d9-c7a2-4709-a234-19511f3e4395" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:38 compute-0 ceph-mon[74496]: pgmap v2300: 305 pgs: 305 active+clean; 507 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 204 op/s
Jan 31 08:12:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4149491747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:39 compute-0 nova_compute[247704]: 2026-01-31 08:12:39.465 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 466 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Jan 31 08:12:39 compute-0 nova_compute[247704]: 2026-01-31 08:12:39.914 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 31 08:12:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 31 08:12:40 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 31 08:12:40 compute-0 ceph-mon[74496]: pgmap v2301: 305 pgs: 305 active+clean; 466 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 227 op/s
Jan 31 08:12:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:40.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:40.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 31 08:12:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 31 08:12:41 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 31 08:12:41 compute-0 ceph-mon[74496]: osdmap e290: 3 total, 3 up, 3 in
Jan 31 08:12:41 compute-0 nova_compute[247704]: 2026-01-31 08:12:41.508 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 481 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.1 MiB/s wr, 312 op/s
Jan 31 08:12:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 31 08:12:42 compute-0 ceph-mon[74496]: osdmap e291: 3 total, 3 up, 3 in
Jan 31 08:12:42 compute-0 ceph-mon[74496]: pgmap v2304: 305 pgs: 305 active+clean; 481 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.1 MiB/s wr, 312 op/s
Jan 31 08:12:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1620311410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 31 08:12:42 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 31 08:12:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:42.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:12:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:42.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:12:42 compute-0 nova_compute[247704]: 2026-01-31 08:12:42.549 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:42 compute-0 nova_compute[247704]: 2026-01-31 08:12:42.550 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:42 compute-0 nova_compute[247704]: 2026-01-31 08:12:42.586 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:12:42 compute-0 nova_compute[247704]: 2026-01-31 08:12:42.726 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:42 compute-0 nova_compute[247704]: 2026-01-31 08:12:42.727 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:42 compute-0 nova_compute[247704]: 2026-01-31 08:12:42.735 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:12:42 compute-0 nova_compute[247704]: 2026-01-31 08:12:42.736 247708 INFO nova.compute.claims [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:12:42 compute-0 nova_compute[247704]: 2026-01-31 08:12:42.957 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:43 compute-0 ceph-mon[74496]: osdmap e292: 3 total, 3 up, 3 in
Jan 31 08:12:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1119874547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:12:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516289054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.450 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.456 247708 DEBUG nova.compute.provider_tree [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.489 247708 DEBUG nova.scheduler.client.report [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.551 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.552 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.617 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.617 247708 DEBUG nova.network.neutron [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.653 247708 INFO nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.683 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:12:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 488 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.2 MiB/s wr, 165 op/s
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.893 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.895 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.895 247708 INFO nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Creating image(s)
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.936 247708 DEBUG nova.storage.rbd_utils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:43 compute-0 nova_compute[247704]: 2026-01-31 08:12:43.971 247708 DEBUG nova.storage.rbd_utils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.010 247708 DEBUG nova.storage.rbd_utils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.016 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.045 247708 DEBUG nova.policy [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5366d122b359489fb9d2bda8d19611a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.094 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.095 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.096 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.097 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.127 247708 DEBUG nova.storage.rbd_utils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.133 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:44.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/516289054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:44 compute-0 ceph-mon[74496]: pgmap v2306: 305 pgs: 305 active+clean; 488 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.2 MiB/s wr, 165 op/s
Jan 31 08:12:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:44.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.467 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:44 compute-0 sudo[329465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:44 compute-0 sudo[329465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:44 compute-0 sudo[329465]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.559 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:44 compute-0 sudo[329490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:12:44 compute-0 sudo[329490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:44 compute-0 sudo[329490]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.631 247708 DEBUG nova.storage.rbd_utils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] resizing rbd image e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.752 247708 DEBUG nova.objects.instance [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'migration_context' on Instance uuid e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.793 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.793 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Ensure instance console log exists: /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.794 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.794 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.794 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:44 compute-0 nova_compute[247704]: 2026-01-31 08:12:44.956 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4141045526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:12:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4141045526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:12:45 compute-0 nova_compute[247704]: 2026-01-31 08:12:45.402 247708 DEBUG nova.network.neutron [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Successfully created port: f951c334-901a-4c41-af64-ad3eb7647eb7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:12:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 513 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 120 op/s
Jan 31 08:12:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:46.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:46 compute-0 ceph-mon[74496]: pgmap v2307: 305 pgs: 305 active+clean; 513 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 120 op/s
Jan 31 08:12:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1305186884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:12:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:46 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 31 08:12:47 compute-0 nova_compute[247704]: 2026-01-31 08:12:47.111 247708 DEBUG nova.network.neutron [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Successfully updated port: f951c334-901a-4c41-af64-ad3eb7647eb7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:12:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 9.4 MiB/s wr, 274 op/s
Jan 31 08:12:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:12:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:48.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:12:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:48 compute-0 ceph-mon[74496]: pgmap v2308: 305 pgs: 305 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 9.4 MiB/s wr, 274 op/s
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.254 247708 DEBUG nova.compute.manager [req-1de1552c-3d23-46bc-b3ce-e1a0fb2ef567 req-bb0d4455-04ff-4701-aa5a-e7129acc2179 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received event network-changed-f951c334-901a-4c41-af64-ad3eb7647eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.254 247708 DEBUG nova.compute.manager [req-1de1552c-3d23-46bc-b3ce-e1a0fb2ef567 req-bb0d4455-04ff-4701-aa5a-e7129acc2179 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Refreshing instance network info cache due to event network-changed-f951c334-901a-4c41-af64-ad3eb7647eb7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.255 247708 DEBUG oslo_concurrency.lockutils [req-1de1552c-3d23-46bc-b3ce-e1a0fb2ef567 req-bb0d4455-04ff-4701-aa5a-e7129acc2179 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.255 247708 DEBUG oslo_concurrency.lockutils [req-1de1552c-3d23-46bc-b3ce-e1a0fb2ef567 req-bb0d4455-04ff-4701-aa5a-e7129acc2179 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.255 247708 DEBUG nova.network.neutron [req-1de1552c-3d23-46bc-b3ce-e1a0fb2ef567 req-bb0d4455-04ff-4701-aa5a-e7129acc2179 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Refreshing network info cache for port f951c334-901a-4c41-af64-ad3eb7647eb7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.260 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "refresh_cache-e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.468 247708 DEBUG nova.network.neutron [req-1de1552c-3d23-46bc-b3ce-e1a0fb2ef567 req-bb0d4455-04ff-4701-aa5a-e7129acc2179 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.471 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 652 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 12 MiB/s wr, 392 op/s
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.841 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847154.839697, af8d85d9-c7a2-4709-a234-19511f3e4395 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.842 247708 INFO nova.compute.manager [-] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] VM Stopped (Lifecycle Event)
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.889 247708 DEBUG nova.compute.manager [None req-5bc9a51a-d2e5-42cf-8d4f-31588559ff9d - - - - - -] [instance: af8d85d9-c7a2-4709-a234-19511f3e4395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:49 compute-0 nova_compute[247704]: 2026-01-31 08:12:49.958 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:50 compute-0 nova_compute[247704]: 2026-01-31 08:12:50.040 247708 DEBUG nova.network.neutron [req-1de1552c-3d23-46bc-b3ce-e1a0fb2ef567 req-bb0d4455-04ff-4701-aa5a-e7129acc2179 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:50 compute-0 nova_compute[247704]: 2026-01-31 08:12:50.070 247708 DEBUG oslo_concurrency.lockutils [req-1de1552c-3d23-46bc-b3ce-e1a0fb2ef567 req-bb0d4455-04ff-4701-aa5a-e7129acc2179 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:50 compute-0 nova_compute[247704]: 2026-01-31 08:12:50.072 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquired lock "refresh_cache-e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:12:50 compute-0 nova_compute[247704]: 2026-01-31 08:12:50.072 247708 DEBUG nova.network.neutron [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:12:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:50.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:12:50 compute-0 nova_compute[247704]: 2026-01-31 08:12:50.373 247708 DEBUG nova.network.neutron [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:12:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:50.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 31 08:12:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 31 08:12:50 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 31 08:12:50 compute-0 ceph-mon[74496]: pgmap v2309: 305 pgs: 305 active+clean; 652 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 12 MiB/s wr, 392 op/s
Jan 31 08:12:50 compute-0 ceph-mon[74496]: osdmap e293: 3 total, 3 up, 3 in
Jan 31 08:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:50.951 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:50 compute-0 nova_compute[247704]: 2026-01-31 08:12:50.952 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:50.953 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:12:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 669 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 12 MiB/s wr, 385 op/s
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.799 247708 DEBUG nova.network.neutron [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Updating instance_info_cache with network_info: [{"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.939 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Releasing lock "refresh_cache-e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.940 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Instance network_info: |[{"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.943 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Start _get_guest_xml network_info=[{"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.951 247708 WARNING nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.962 247708 DEBUG nova.virt.libvirt.host [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.963 247708 DEBUG nova.virt.libvirt.host [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.969 247708 DEBUG nova.virt.libvirt.host [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.969 247708 DEBUG nova.virt.libvirt.host [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.971 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.971 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.972 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.972 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.972 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.972 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.972 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.973 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.973 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.973 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.973 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.974 247708 DEBUG nova.virt.hardware [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:12:51 compute-0 nova_compute[247704]: 2026-01-31 08:12:51.977 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:52 compute-0 ceph-mon[74496]: pgmap v2311: 305 pgs: 305 active+clean; 669 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 12 MiB/s wr, 385 op/s
Jan 31 08:12:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:12:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:52.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:12:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:12:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:12:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294508577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:52 compute-0 nova_compute[247704]: 2026-01-31 08:12:52.499 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:52 compute-0 nova_compute[247704]: 2026-01-31 08:12:52.538 247708 DEBUG nova.storage.rbd_utils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:52 compute-0 nova_compute[247704]: 2026-01-31 08:12:52.543 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:52 compute-0 podman[329651]: 2026-01-31 08:12:52.921547955 +0000 UTC m=+0.083077486 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:12:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1294508577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:12:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970627810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.084 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.087 247708 DEBUG nova.virt.libvirt.vif [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-860967629',display_name='tempest-ServersTestJSON-server-860967629',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-860967629',id=128,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-aynuhsdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=T
agList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:43Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.088 247708 DEBUG nova.network.os_vif_util [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.089 247708 DEBUG nova.network.os_vif_util [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=f951c334-901a-4c41-af64-ad3eb7647eb7,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf951c334-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.090 247708 DEBUG nova.objects.instance [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.177 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <uuid>e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7</uuid>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <name>instance-00000080</name>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersTestJSON-server-860967629</nova:name>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:12:51</nova:creationTime>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <nova:user uuid="5366d122b359489fb9d2bda8d19611a6">tempest-ServersTestJSON-327201738-project-member</nova:user>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <nova:project uuid="4aa06cf35d8c468fb16884f19dc8ce71">tempest-ServersTestJSON-327201738</nova:project>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <nova:port uuid="f951c334-901a-4c41-af64-ad3eb7647eb7">
Jan 31 08:12:53 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <system>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <entry name="serial">e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7</entry>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <entry name="uuid">e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7</entry>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </system>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <os>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   </os>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <features>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   </features>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk">
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       </source>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk.config">
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       </source>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:12:53 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:7f:f5:42"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <target dev="tapf951c334-90"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7/console.log" append="off"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <video>
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </video>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:12:53 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:12:53 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:12:53 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:12:53 compute-0 nova_compute[247704]: </domain>
Jan 31 08:12:53 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.178 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Preparing to wait for external event network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.178 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.179 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.179 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.179 247708 DEBUG nova.virt.libvirt.vif [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:12:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-860967629',display_name='tempest-ServersTestJSON-server-860967629',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-860967629',id=128,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-aynuhsdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-membe
r'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:43Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.180 247708 DEBUG nova.network.os_vif_util [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.180 247708 DEBUG nova.network.os_vif_util [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=f951c334-901a-4c41-af64-ad3eb7647eb7,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf951c334-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.181 247708 DEBUG os_vif [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=f951c334-901a-4c41-af64-ad3eb7647eb7,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf951c334-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.182 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.182 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.184 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.184 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf951c334-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.185 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf951c334-90, col_values=(('external_ids', {'iface-id': 'f951c334-901a-4c41-af64-ad3eb7647eb7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:f5:42', 'vm-uuid': 'e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.187 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:53 compute-0 NetworkManager[49108]: <info>  [1769847173.1886] manager: (tapf951c334-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.189 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.195 247708 INFO os_vif [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=f951c334-901a-4c41-af64-ad3eb7647eb7,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf951c334-90')
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.417 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.418 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.418 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No VIF found with MAC fa:16:3e:7f:f5:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.418 247708 INFO nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Using config drive
Jan 31 08:12:53 compute-0 nova_compute[247704]: 2026-01-31 08:12:53.442 247708 DEBUG nova.storage.rbd_utils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 11 MiB/s wr, 358 op/s
Jan 31 08:12:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/970627810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:54 compute-0 ceph-mon[74496]: pgmap v2312: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 11 MiB/s wr, 358 op/s
Jan 31 08:12:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:54.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.380 247708 INFO nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Creating config drive at /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7/disk.config
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.384 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpn4eh_mnu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:12:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:54.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.470 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.511 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpn4eh_mnu" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.540 247708 DEBUG nova.storage.rbd_utils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.544 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7/disk.config e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.731 247708 DEBUG oslo_concurrency.processutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7/disk.config e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.732 247708 INFO nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Deleting local config drive /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7/disk.config because it was imported into RBD.
Jan 31 08:12:54 compute-0 kernel: tapf951c334-90: entered promiscuous mode
Jan 31 08:12:54 compute-0 NetworkManager[49108]: <info>  [1769847174.7809] manager: (tapf951c334-90): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Jan 31 08:12:54 compute-0 ovn_controller[149457]: 2026-01-31T08:12:54Z|00498|binding|INFO|Claiming lport f951c334-901a-4c41-af64-ad3eb7647eb7 for this chassis.
Jan 31 08:12:54 compute-0 ovn_controller[149457]: 2026-01-31T08:12:54Z|00499|binding|INFO|f951c334-901a-4c41-af64-ad3eb7647eb7: Claiming fa:16:3e:7f:f5:42 10.100.0.12
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.782 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:54 compute-0 ovn_controller[149457]: 2026-01-31T08:12:54Z|00500|binding|INFO|Setting lport f951c334-901a-4c41-af64-ad3eb7647eb7 ovn-installed in OVS
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:54 compute-0 nova_compute[247704]: 2026-01-31 08:12:54.792 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:54 compute-0 systemd-machined[214448]: New machine qemu-51-instance-00000080.
Jan 31 08:12:54 compute-0 systemd-udevd[329755]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:12:54 compute-0 NetworkManager[49108]: <info>  [1769847174.8205] device (tapf951c334-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:12:54 compute-0 NetworkManager[49108]: <info>  [1769847174.8214] device (tapf951c334-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:12:54 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-00000080.
Jan 31 08:12:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2682229306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/813564231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.123 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847175.1231532, e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.124 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] VM Started (Lifecycle Event)
Jan 31 08:12:55 compute-0 ovn_controller[149457]: 2026-01-31T08:12:55Z|00501|binding|INFO|Setting lport f951c334-901a-4c41-af64-ad3eb7647eb7 up in Southbound
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.234 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:f5:42 10.100.0.12'], port_security=['fa:16:3e:7f:f5:42 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=f951c334-901a-4c41-af64-ad3eb7647eb7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.235 160028 INFO neutron.agent.ovn.metadata.agent [-] Port f951c334-901a-4c41-af64-ad3eb7647eb7 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 bound to our chassis
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.237 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.246 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b043d515-ccd3-4d2e-b066-8b5dcdbe5320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.247 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb88251fc-71 in ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.249 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb88251fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.249 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fb369a6e-34dd-4199-a6d3-671f5dbbaaed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.250 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[078320dd-78dd-4e16-bf0c-11b0ea1e8bc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.265 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[31446764-9f28-4b88-9022-41379de70a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.280 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[54ccff8d-cff4-4993-aef5-748750757b9f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.309 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae448ac-4bfd-4614-9a47-4ea33404d808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 systemd-udevd[329757]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.314 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ee25cb50-8c0a-41f6-86de-e940f48538e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 NetworkManager[49108]: <info>  [1769847175.3160] manager: (tapb88251fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.347 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb74984-b8c9-48a2-b472-75a54fdbd447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.350 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[fc73e7c5-170b-432e-b8c7-64457c0945e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 NetworkManager[49108]: <info>  [1769847175.3728] device (tapb88251fc-70): carrier: link connected
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.377 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c46913c6-5560-472e-804e-ed3f93d5fc2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.395 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[444ecdb9-588a-4abe-a45c-59ef81e84111]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741799, 'reachable_time': 32511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329830, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.410 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bf431ab1-87e0-40aa-8784-4465510c615f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2a68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741799, 'tstamp': 741799}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329831, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.436 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[80875065-4aad-49cc-b57a-cbb954c9a6e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741799, 'reachable_time': 32511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329832, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.471 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b23f84cf-3d51-42af-b4bb-28086adeaaf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.518 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a2650119-fcd4-4813-a51f-1cc13ac61a16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.520 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.520 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.520 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb88251fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.522 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:55 compute-0 NetworkManager[49108]: <info>  [1769847175.5237] manager: (tapb88251fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Jan 31 08:12:55 compute-0 kernel: tapb88251fc-70: entered promiscuous mode
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.525 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.527 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb88251fc-70, col_values=(('external_ids', {'iface-id': '950341c4-aa2a-4261-8207-ff7e92fd4830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.528 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:55 compute-0 ovn_controller[149457]: 2026-01-31T08:12:55Z|00502|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.534 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.535 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.536 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[38960293-18c7-4d15-9ab9-c35c2be815d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.537 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:55.537 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'env', 'PROCESS_TAG=haproxy-b88251fc-7610-460a-ba55-2ed186c6f696', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b88251fc-7610-460a-ba55-2ed186c6f696.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.636 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.640 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847175.1241982, e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.640 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] VM Paused (Lifecycle Event)
Jan 31 08:12:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:12:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 9.4 MiB/s wr, 312 op/s
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.907 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:55 compute-0 nova_compute[247704]: 2026-01-31 08:12:55.911 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:12:55 compute-0 podman[329865]: 2026-01-31 08:12:55.922694372 +0000 UTC m=+0.080859502 container create 9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:12:55 compute-0 podman[329865]: 2026-01-31 08:12:55.865228301 +0000 UTC m=+0.023393521 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:12:55 compute-0 systemd[1]: Started libpod-conmon-9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291.scope.
Jan 31 08:12:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/328c0f1b473dad5fbd8b6439e6390722098277b7edfa7dd0662ddbf1b56445a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:56 compute-0 podman[329865]: 2026-01-31 08:12:56.023214681 +0000 UTC m=+0.181379831 container init 9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 08:12:56 compute-0 podman[329865]: 2026-01-31 08:12:56.028316035 +0000 UTC m=+0.186481175 container start 9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:12:56 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[329880]: [NOTICE]   (329884) : New worker (329886) forked
Jan 31 08:12:56 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[329880]: [NOTICE]   (329884) : Loading success.
Jan 31 08:12:56 compute-0 ceph-mon[74496]: pgmap v2313: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 9.4 MiB/s wr, 312 op/s
Jan 31 08:12:56 compute-0 nova_compute[247704]: 2026-01-31 08:12:56.243 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:12:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:56.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:12:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:56.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:12:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:12:56.956 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:12:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 682 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 197 op/s
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.815 247708 DEBUG nova.compute.manager [req-2a07157c-0c70-45c6-87ed-2dd73051290f req-481c001c-80c2-4488-a398-c36930da32ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received event network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.816 247708 DEBUG oslo_concurrency.lockutils [req-2a07157c-0c70-45c6-87ed-2dd73051290f req-481c001c-80c2-4488-a398-c36930da32ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.816 247708 DEBUG oslo_concurrency.lockutils [req-2a07157c-0c70-45c6-87ed-2dd73051290f req-481c001c-80c2-4488-a398-c36930da32ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.816 247708 DEBUG oslo_concurrency.lockutils [req-2a07157c-0c70-45c6-87ed-2dd73051290f req-481c001c-80c2-4488-a398-c36930da32ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.817 247708 DEBUG nova.compute.manager [req-2a07157c-0c70-45c6-87ed-2dd73051290f req-481c001c-80c2-4488-a398-c36930da32ce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Processing event network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.818 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.823 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847177.822717, e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.823 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] VM Resumed (Lifecycle Event)
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.826 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.830 247708 INFO nova.virt.libvirt.driver [-] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Instance spawned successfully.
Jan 31 08:12:57 compute-0 nova_compute[247704]: 2026-01-31 08:12:57.830 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.037 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.044 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.050 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.051 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.052 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.053 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.054 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.055 247708 DEBUG nova.virt.libvirt.driver [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.187 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.238 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:12:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:12:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:12:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.597 247708 INFO nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Took 14.70 seconds to spawn the instance on the hypervisor.
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.598 247708 DEBUG nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:12:58 compute-0 nova_compute[247704]: 2026-01-31 08:12:58.815 247708 INFO nova.compute.manager [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Took 16.12 seconds to build instance.
Jan 31 08:12:58 compute-0 ceph-mon[74496]: pgmap v2314: 305 pgs: 305 active+clean; 682 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 197 op/s
Jan 31 08:12:59 compute-0 nova_compute[247704]: 2026-01-31 08:12:59.017 247708 DEBUG oslo_concurrency.lockutils [None req-4b74d8e9-2317-4bb3-83b4-0dc1fe2f0796 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:12:59 compute-0 nova_compute[247704]: 2026-01-31 08:12:59.473 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:12:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 696 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 126 op/s
Jan 31 08:12:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4270166023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:00 compute-0 nova_compute[247704]: 2026-01-31 08:13:00.446 247708 DEBUG nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received event network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:00 compute-0 nova_compute[247704]: 2026-01-31 08:13:00.447 247708 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:00 compute-0 nova_compute[247704]: 2026-01-31 08:13:00.447 247708 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:00 compute-0 nova_compute[247704]: 2026-01-31 08:13:00.447 247708 DEBUG oslo_concurrency.lockutils [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:00 compute-0 nova_compute[247704]: 2026-01-31 08:13:00.448 247708 DEBUG nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] No waiting events found dispatching network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:00 compute-0 nova_compute[247704]: 2026-01-31 08:13:00.448 247708 WARNING nova.compute.manager [req-aabf603b-d4f0-40f7-9739-68a4669a568c req-57a14e4f-5d2f-4169-954e-21ecb33a9c4b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received unexpected event network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 for instance with vm_state active and task_state None.
Jan 31 08:13:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:13:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:00.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:13:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:00.668894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180668996, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2011, "num_deletes": 261, "total_data_size": 3348164, "memory_usage": 3414944, "flush_reason": "Manual Compaction"}
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180709929, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 3284278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48788, "largest_seqno": 50798, "table_properties": {"data_size": 3275094, "index_size": 5678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19871, "raw_average_key_size": 20, "raw_value_size": 3256358, "raw_average_value_size": 3384, "num_data_blocks": 245, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847008, "oldest_key_time": 1769847008, "file_creation_time": 1769847180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 41100 microseconds, and 6657 cpu microseconds.
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:00.710000) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 3284278 bytes OK
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:00.710034) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:00.712108) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:00.712125) EVENT_LOG_v1 {"time_micros": 1769847180712119, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:00.712153) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3339706, prev total WAL file size 3339706, number of live WAL files 2.
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:00.713042) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373537' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(3207KB)], [107(10MB)]
Jan 31 08:13:00 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847180713126, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 14421089, "oldest_snapshot_seqno": -1}
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 8094 keys, 14269686 bytes, temperature: kUnknown
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847181298915, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 14269686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14212923, "index_size": 35403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208627, "raw_average_key_size": 25, "raw_value_size": 14066115, "raw_average_value_size": 1737, "num_data_blocks": 1407, "num_entries": 8094, "num_filter_entries": 8094, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:01.299214) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14269686 bytes
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:01.385381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.6 rd, 24.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.6 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 8635, records dropped: 541 output_compression: NoCompression
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:01.385428) EVENT_LOG_v1 {"time_micros": 1769847181385411, "job": 64, "event": "compaction_finished", "compaction_time_micros": 585887, "compaction_time_cpu_micros": 22179, "output_level": 6, "num_output_files": 1, "total_output_size": 14269686, "num_input_records": 8635, "num_output_records": 8094, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847181385888, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847181387129, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:00.712883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:01.387191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:01.387197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:01.387199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:01.387201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:01.387203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:01 compute-0 ceph-mon[74496]: pgmap v2315: 305 pgs: 305 active+clean; 696 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.1 MiB/s wr, 126 op/s
Jan 31 08:13:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1811768806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2254943178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 701 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 155 op/s
Jan 31 08:13:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:13:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:02.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:13:02 compute-0 ceph-mon[74496]: pgmap v2316: 305 pgs: 305 active+clean; 701 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 155 op/s
Jan 31 08:13:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.189 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.430 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.431 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.431 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.431 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.432 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.433 247708 INFO nova.compute.manager [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Terminating instance
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.434 247708 DEBUG nova.compute.manager [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:13:03 compute-0 kernel: tapf951c334-90 (unregistering): left promiscuous mode
Jan 31 08:13:03 compute-0 NetworkManager[49108]: <info>  [1769847183.4807] device (tapf951c334-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.491 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:03 compute-0 ovn_controller[149457]: 2026-01-31T08:13:03Z|00503|binding|INFO|Releasing lport f951c334-901a-4c41-af64-ad3eb7647eb7 from this chassis (sb_readonly=0)
Jan 31 08:13:03 compute-0 ovn_controller[149457]: 2026-01-31T08:13:03Z|00504|binding|INFO|Setting lport f951c334-901a-4c41-af64-ad3eb7647eb7 down in Southbound
Jan 31 08:13:03 compute-0 ovn_controller[149457]: 2026-01-31T08:13:03Z|00505|binding|INFO|Removing iface tapf951c334-90 ovn-installed in OVS
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.499 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:03 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000080.scope: Deactivated successfully.
Jan 31 08:13:03 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000080.scope: Consumed 6.141s CPU time.
Jan 31 08:13:03 compute-0 systemd-machined[214448]: Machine qemu-51-instance-00000080 terminated.
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.673 247708 INFO nova.virt.libvirt.driver [-] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Instance destroyed successfully.
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.674 247708 DEBUG nova.objects.instance [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'resources' on Instance uuid e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:13:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:03.761 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:f5:42 10.100.0.12'], port_security=['fa:16:3e:7f:f5:42 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=f951c334-901a-4c41-af64-ad3eb7647eb7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:03.763 160028 INFO neutron.agent.ovn.metadata.agent [-] Port f951c334-901a-4c41-af64-ad3eb7647eb7 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 unbound from our chassis
Jan 31 08:13:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:03.767 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b88251fc-7610-460a-ba55-2ed186c6f696, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:13:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:03.768 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2277dd08-362d-4f91-ab79-714e7f4b3e37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:03.769 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace which is not needed anymore
Jan 31 08:13:03 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[329880]: [NOTICE]   (329884) : haproxy version is 2.8.14-c23fe91
Jan 31 08:13:03 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[329880]: [NOTICE]   (329884) : path to executable is /usr/sbin/haproxy
Jan 31 08:13:03 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[329880]: [WARNING]  (329884) : Exiting Master process...
Jan 31 08:13:03 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[329880]: [ALERT]    (329884) : Current worker (329886) exited with code 143 (Terminated)
Jan 31 08:13:03 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[329880]: [WARNING]  (329884) : All workers exited. Exiting... (0)
Jan 31 08:13:03 compute-0 systemd[1]: libpod-9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291.scope: Deactivated successfully.
Jan 31 08:13:03 compute-0 podman[329933]: 2026-01-31 08:13:03.921525325 +0000 UTC m=+0.053393222 container died 9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291-userdata-shm.mount: Deactivated successfully.
Jan 31 08:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-328c0f1b473dad5fbd8b6439e6390722098277b7edfa7dd0662ddbf1b56445a9-merged.mount: Deactivated successfully.
Jan 31 08:13:03 compute-0 podman[329933]: 2026-01-31 08:13:03.976141516 +0000 UTC m=+0.108009433 container cleanup 9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:13:03 compute-0 systemd[1]: libpod-conmon-9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291.scope: Deactivated successfully.
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.986 247708 DEBUG nova.virt.libvirt.vif [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-860967629',display_name='tempest-ServersTestJSON-server-860967629',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-860967629',id=128,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:12:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-aynuhsdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:12:58Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.987 247708 DEBUG nova.network.os_vif_util [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "f951c334-901a-4c41-af64-ad3eb7647eb7", "address": "fa:16:3e:7f:f5:42", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf951c334-90", "ovs_interfaceid": "f951c334-901a-4c41-af64-ad3eb7647eb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.988 247708 DEBUG nova.network.os_vif_util [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=f951c334-901a-4c41-af64-ad3eb7647eb7,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf951c334-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.988 247708 DEBUG os_vif [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=f951c334-901a-4c41-af64-ad3eb7647eb7,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf951c334-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.990 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.990 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf951c334-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.993 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:03 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.996 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:13:04 compute-0 nova_compute[247704]: 2026-01-31 08:13:03.999 247708 INFO os_vif [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:f5:42,bridge_name='br-int',has_traffic_filtering=True,id=f951c334-901a-4c41-af64-ad3eb7647eb7,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf951c334-90')
Jan 31 08:13:04 compute-0 podman[329964]: 2026-01-31 08:13:04.046583052 +0000 UTC m=+0.047293663 container remove 9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.052 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1b262fd3-db87-45e7-a6be-a63fa3be2392]: (4, ('Sat Jan 31 08:13:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291)\n9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291\nSat Jan 31 08:13:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291)\n9f1a30e69380e6694598a52184980534701af9fa52c897fdb0d89656607cf291\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.054 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3ec717-ee2b-47c0-873d-39288b94ff07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.055 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:04 compute-0 nova_compute[247704]: 2026-01-31 08:13:04.056 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:04 compute-0 kernel: tapb88251fc-70: left promiscuous mode
Jan 31 08:13:04 compute-0 nova_compute[247704]: 2026-01-31 08:13:04.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.067 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b0056da8-6651-44be-ba41-e2f65ff5bb4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.089 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cff219-9ab4-47e6-95bb-27fac85c1ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.091 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[13870548-a455-4d67-b6ff-24d02da358f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.107 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[195226f0-509b-441d-ac31-e7283ae5e149]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741792, 'reachable_time': 37807, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329997, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:04 compute-0 systemd[1]: run-netns-ovnmeta\x2db88251fc\x2d7610\x2d460a\x2dba55\x2d2ed186c6f696.mount: Deactivated successfully.
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.111 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:13:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:04.111 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2c9a4f-d678-494a-bb17-f6e81c875840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:04 compute-0 nova_compute[247704]: 2026-01-31 08:13:04.476 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:04 compute-0 sudo[329999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:04 compute-0 sudo[329999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:04 compute-0 sudo[329999]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:04 compute-0 sudo[330024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:04 compute-0 sudo[330024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:04 compute-0 sudo[330024]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:04 compute-0 ceph-mon[74496]: pgmap v2317: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:13:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:05 compute-0 nova_compute[247704]: 2026-01-31 08:13:05.687 247708 INFO nova.virt.libvirt.driver [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Deleting instance files /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_del
Jan 31 08:13:05 compute-0 nova_compute[247704]: 2026-01-31 08:13:05.688 247708 INFO nova.virt.libvirt.driver [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Deletion of /var/lib/nova/instances/e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7_del complete
Jan 31 08:13:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 222 op/s
Jan 31 08:13:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:06.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.334 247708 INFO nova.compute.manager [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Took 2.90 seconds to destroy the instance on the hypervisor.
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.334 247708 DEBUG oslo.service.loopingcall [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.335 247708 DEBUG nova.compute.manager [-] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.335 247708 DEBUG nova.network.neutron [-] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:13:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:13:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.510 247708 DEBUG nova.compute.manager [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received event network-vif-unplugged-f951c334-901a-4c41-af64-ad3eb7647eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.510 247708 DEBUG oslo_concurrency.lockutils [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.511 247708 DEBUG oslo_concurrency.lockutils [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.511 247708 DEBUG oslo_concurrency.lockutils [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.511 247708 DEBUG nova.compute.manager [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] No waiting events found dispatching network-vif-unplugged-f951c334-901a-4c41-af64-ad3eb7647eb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:06 compute-0 nova_compute[247704]: 2026-01-31 08:13:06.511 247708 DEBUG nova.compute.manager [req-e23ef9fe-3a74-4cf1-b3f6-9b0a6767b87e req-1d5914be-889b-4a5e-9256-bfdb798697c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received event network-vif-unplugged-f951c334-901a-4c41-af64-ad3eb7647eb7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:13:06 compute-0 podman[330050]: 2026-01-31 08:13:06.87944778 +0000 UTC m=+0.051905506 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 08:13:06 compute-0 ceph-mon[74496]: pgmap v2318: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 222 op/s
Jan 31 08:13:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 268 op/s
Jan 31 08:13:08 compute-0 ceph-mon[74496]: pgmap v2319: 305 pgs: 305 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 268 op/s
Jan 31 08:13:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:08.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:08 compute-0 nova_compute[247704]: 2026-01-31 08:13:08.994 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:09 compute-0 nova_compute[247704]: 2026-01-31 08:13:09.479 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 258 op/s
Jan 31 08:13:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:10.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:13:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:13:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:11 compute-0 ceph-mon[74496]: pgmap v2320: 305 pgs: 305 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 258 op/s
Jan 31 08:13:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:11.185 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:11.186 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:11.187 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 684 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 214 op/s
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.829 247708 DEBUG nova.network.neutron [-] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.873 247708 INFO nova.compute.manager [-] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Took 5.54 seconds to deallocate network for instance.
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.952 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.952 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.979 247708 DEBUG nova.compute.manager [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received event network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.980 247708 DEBUG oslo_concurrency.lockutils [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.980 247708 DEBUG oslo_concurrency.lockutils [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.980 247708 DEBUG oslo_concurrency.lockutils [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.981 247708 DEBUG nova.compute.manager [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] No waiting events found dispatching network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:11 compute-0 nova_compute[247704]: 2026-01-31 08:13:11.981 247708 WARNING nova.compute.manager [req-52b6c407-be7a-49ef-b7fd-5dc1f1a725de req-1eeb793e-f837-4dfb-bc8c-606250378a79 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received unexpected event network-vif-plugged-f951c334-901a-4c41-af64-ad3eb7647eb7 for instance with vm_state deleted and task_state None.
Jan 31 08:13:12 compute-0 nova_compute[247704]: 2026-01-31 08:13:12.077 247708 DEBUG oslo_concurrency.processutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:12 compute-0 nova_compute[247704]: 2026-01-31 08:13:12.110 247708 DEBUG nova.compute.manager [req-70ce398e-4086-467b-af03-7e6aae360693 req-edb0962c-a814-4f39-8d92-a685c9681f0d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Received event network-vif-deleted-f951c334-901a-4c41-af64-ad3eb7647eb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:12 compute-0 ceph-mon[74496]: pgmap v2321: 305 pgs: 305 active+clean; 684 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 214 op/s
Jan 31 08:13:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:12.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/983180038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:12 compute-0 nova_compute[247704]: 2026-01-31 08:13:12.530 247708 DEBUG oslo_concurrency.processutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:12 compute-0 nova_compute[247704]: 2026-01-31 08:13:12.537 247708 DEBUG nova.compute.provider_tree [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:13:12 compute-0 nova_compute[247704]: 2026-01-31 08:13:12.595 247708 DEBUG nova.scheduler.client.report [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:13:12 compute-0 nova_compute[247704]: 2026-01-31 08:13:12.627 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:12 compute-0 nova_compute[247704]: 2026-01-31 08:13:12.698 247708 INFO nova.scheduler.client.report [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Deleted allocations for instance e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7
Jan 31 08:13:12 compute-0 nova_compute[247704]: 2026-01-31 08:13:12.861 247708 DEBUG oslo_concurrency.lockutils [None req-af10caae-3dbe-4b52-9502-868ef9e2b7eb 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1414180678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/983180038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2099496265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2156842279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Jan 31 08:13:13 compute-0 nova_compute[247704]: 2026-01-31 08:13:13.996 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:14.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:14 compute-0 nova_compute[247704]: 2026-01-31 08:13:14.481 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:14.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:14 compute-0 ceph-mon[74496]: pgmap v2322: 305 pgs: 305 active+clean; 704 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Jan 31 08:13:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 709 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 31 08:13:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:16.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:17 compute-0 ceph-mon[74496]: pgmap v2323: 305 pgs: 305 active+clean; 709 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Jan 31 08:13:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Jan 31 08:13:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:18.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:18 compute-0 ceph-mon[74496]: pgmap v2324: 305 pgs: 305 active+clean; 688 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Jan 31 08:13:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:18.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:18 compute-0 nova_compute[247704]: 2026-01-31 08:13:18.672 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847183.670409, e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:18 compute-0 nova_compute[247704]: 2026-01-31 08:13:18.672 247708 INFO nova.compute.manager [-] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] VM Stopped (Lifecycle Event)
Jan 31 08:13:18 compute-0 nova_compute[247704]: 2026-01-31 08:13:18.949 247708 DEBUG nova.compute.manager [None req-ad2777bd-1e9f-454a-ba09-616b445971e7 - - - - - -] [instance: e8fcb865-0c15-4bc8-b0f1-7f982aaab6d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:19 compute-0 nova_compute[247704]: 2026-01-31 08:13:19.000 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:19 compute-0 nova_compute[247704]: 2026-01-31 08:13:19.484 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 650 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 174 op/s
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:13:20
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.data']
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:13:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:20.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:13:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:13:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:13:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 651 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 223 op/s
Jan 31 08:13:22 compute-0 ceph-mon[74496]: pgmap v2325: 305 pgs: 305 active+clean; 650 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 174 op/s
Jan 31 08:13:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:22.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:22.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 651 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 252 op/s
Jan 31 08:13:24 compute-0 nova_compute[247704]: 2026-01-31 08:13:24.031 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:24.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:24 compute-0 nova_compute[247704]: 2026-01-31 08:13:24.487 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:24.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:24 compute-0 podman[330101]: 2026-01-31 08:13:24.522059957 +0000 UTC m=+0.688486087 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:24 compute-0 sudo[330127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:24 compute-0 sudo[330127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:24 compute-0 sudo[330127]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:24 compute-0 ceph-mon[74496]: pgmap v2326: 305 pgs: 305 active+clean; 651 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 223 op/s
Jan 31 08:13:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/645181621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:24 compute-0 sudo[330152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:24 compute-0 sudo[330152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:24 compute-0 sudo[330152]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 655 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 248 op/s
Jan 31 08:13:26 compute-0 ceph-mon[74496]: pgmap v2327: 305 pgs: 305 active+clean; 651 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 252 op/s
Jan 31 08:13:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:26.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:26.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:26 compute-0 nova_compute[247704]: 2026-01-31 08:13:26.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:26.889734) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206889811, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 464, "num_deletes": 251, "total_data_size": 442535, "memory_usage": 452656, "flush_reason": "Manual Compaction"}
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206963768, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 437883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50799, "largest_seqno": 51262, "table_properties": {"data_size": 435225, "index_size": 694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6410, "raw_average_key_size": 18, "raw_value_size": 429957, "raw_average_value_size": 1268, "num_data_blocks": 31, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847181, "oldest_key_time": 1769847181, "file_creation_time": 1769847206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 74101 microseconds, and 2099 cpu microseconds.
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:26.963843) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 437883 bytes OK
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:26.963873) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:26.968135) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:26.968160) EVENT_LOG_v1 {"time_micros": 1769847206968152, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:26.968185) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 439792, prev total WAL file size 439792, number of live WAL files 2.
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:26.968833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(427KB)], [110(13MB)]
Jan 31 08:13:26 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847206968934, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 14707569, "oldest_snapshot_seqno": -1}
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7920 keys, 12781135 bytes, temperature: kUnknown
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847207352313, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 12781135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12726865, "index_size": 33358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205710, "raw_average_key_size": 25, "raw_value_size": 12584301, "raw_average_value_size": 1588, "num_data_blocks": 1314, "num_entries": 7920, "num_filter_entries": 7920, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847206, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:27.352561) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12781135 bytes
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:27.459360) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.4 rd, 33.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(62.8) write-amplify(29.2) OK, records in: 8433, records dropped: 513 output_compression: NoCompression
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:27.459396) EVENT_LOG_v1 {"time_micros": 1769847207459382, "job": 66, "event": "compaction_finished", "compaction_time_micros": 383352, "compaction_time_cpu_micros": 38795, "output_level": 6, "num_output_files": 1, "total_output_size": 12781135, "num_input_records": 8433, "num_output_records": 7920, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847207459736, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847207461811, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:26.968687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:27.461980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:27.461988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:27.461992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:27.461995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:27 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:13:27.461998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:27 compute-0 nova_compute[247704]: 2026-01-31 08:13:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:27 compute-0 ceph-mon[74496]: pgmap v2328: 305 pgs: 305 active+clean; 655 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 248 op/s
Jan 31 08:13:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 247 op/s
Jan 31 08:13:27 compute-0 nova_compute[247704]: 2026-01-31 08:13:27.747 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:27 compute-0 nova_compute[247704]: 2026-01-31 08:13:27.748 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:28 compute-0 nova_compute[247704]: 2026-01-31 08:13:28.186 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:13:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:28.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:28.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:28 compute-0 nova_compute[247704]: 2026-01-31 08:13:28.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:28 compute-0 nova_compute[247704]: 2026-01-31 08:13:28.685 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:28 compute-0 nova_compute[247704]: 2026-01-31 08:13:28.686 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:28 compute-0 nova_compute[247704]: 2026-01-31 08:13:28.697 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:13:28 compute-0 nova_compute[247704]: 2026-01-31 08:13:28.697 247708 INFO nova.compute.claims [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:13:28 compute-0 ceph-mon[74496]: pgmap v2329: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 247 op/s
Jan 31 08:13:29 compute-0 nova_compute[247704]: 2026-01-31 08:13:29.034 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:29 compute-0 nova_compute[247704]: 2026-01-31 08:13:29.235 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:29 compute-0 nova_compute[247704]: 2026-01-31 08:13:29.489 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:29 compute-0 nova_compute[247704]: 2026-01-31 08:13:29.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:29 compute-0 nova_compute[247704]: 2026-01-31 08:13:29.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:13:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253429130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:29 compute-0 nova_compute[247704]: 2026-01-31 08:13:29.700 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:29 compute-0 nova_compute[247704]: 2026-01-31 08:13:29.709 247708 DEBUG nova.compute.provider_tree [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:13:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 95 KiB/s wr, 222 op/s
Jan 31 08:13:29 compute-0 nova_compute[247704]: 2026-01-31 08:13:29.839 247708 DEBUG nova.scheduler.client.report [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:13:30 compute-0 nova_compute[247704]: 2026-01-31 08:13:30.180 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:30 compute-0 nova_compute[247704]: 2026-01-31 08:13:30.182 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:13:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:30.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/253429130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:30 compute-0 ceph-mon[74496]: pgmap v2330: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 95 KiB/s wr, 222 op/s
Jan 31 08:13:30 compute-0 nova_compute[247704]: 2026-01-31 08:13:30.499 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:13:30 compute-0 nova_compute[247704]: 2026-01-31 08:13:30.500 247708 DEBUG nova.network.neutron [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:13:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:30 compute-0 nova_compute[247704]: 2026-01-31 08:13:30.684 247708 INFO nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:13:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:30 compute-0 nova_compute[247704]: 2026-01-31 08:13:30.776 247708 DEBUG nova.policy [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ab9e181016f4d5a899c91dae3aa26e0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:13:31 compute-0 nova_compute[247704]: 2026-01-31 08:13:31.110 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:13:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 109 KiB/s wr, 137 op/s
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.112 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.114 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.115 247708 INFO nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Creating image(s)
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.153 247708 DEBUG nova.storage.rbd_utils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.184 247708 DEBUG nova.storage.rbd_utils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.285 247708 DEBUG nova.storage.rbd_utils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.289 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.354 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.355 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.355 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.356 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:32.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:32 compute-0 ceph-mon[74496]: pgmap v2331: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 109 KiB/s wr, 137 op/s
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.378 247708 DEBUG nova.storage.rbd_utils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.382 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 827a8d86-f250-441f-911a-98626a322ef7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.938 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.938 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.939 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.939 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:13:32 compute-0 nova_compute[247704]: 2026-01-31 08:13:32.939 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.009 247708 DEBUG nova.network.neutron [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Successfully created port: 77c06224-c2b3-45e0-90f5-afd0c8400ff7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:13:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462951967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.384 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.488 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.488 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:13:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1462951967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.634 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.635 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4240MB free_disk=20.694122314453125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.635 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.635 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 51 KiB/s wr, 96 op/s
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.766 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 53a5c321-1278-4df4-9fb0-feb465508681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.766 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 827a8d86-f250-441f-911a-98626a322ef7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.766 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.767 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:13:33 compute-0 nova_compute[247704]: 2026-01-31 08:13:33.888 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.036 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.131 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "905c4dc1-7ad5-4442-ba06-76af73f6742c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.132 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.197 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:13:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3800349255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.332 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.337 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.362 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.369 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.392 247708 DEBUG nova.network.neutron [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Successfully updated port: 77c06224-c2b3-45e0-90f5-afd0c8400ff7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.491 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.501 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.501 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.503 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.504 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.504 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquired lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.505 247708 DEBUG nova.network.neutron [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.514 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.515 247708 INFO nova.compute.claims [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:13:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:34.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.538 247708 DEBUG nova.compute.manager [req-9d5889dd-a4cc-4150-bbdd-c4196f2e0a0b req-bacbb402-8e17-45d2-96dd-3460f613d784 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-changed-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.539 247708 DEBUG nova.compute.manager [req-9d5889dd-a4cc-4150-bbdd-c4196f2e0a0b req-bacbb402-8e17-45d2-96dd-3460f613d784 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Refreshing instance network info cache due to event network-changed-77c06224-c2b3-45e0-90f5-afd0c8400ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.539 247708 DEBUG oslo_concurrency.lockutils [req-9d5889dd-a4cc-4150-bbdd-c4196f2e0a0b req-bacbb402-8e17-45d2-96dd-3460f613d784 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.837 247708 DEBUG nova.network.neutron [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:13:34 compute-0 nova_compute[247704]: 2026-01-31 08:13:34.845 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:35 compute-0 nova_compute[247704]: 2026-01-31 08:13:35.279 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 827a8d86-f250-441f-911a-98626a322ef7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.897s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192415656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:35 compute-0 nova_compute[247704]: 2026-01-31 08:13:35.326 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014015389447236182 of space, bias 1.0, pg target 4.204616834170855 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016139726409826913 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1487719267634469 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 31 08:13:35 compute-0 nova_compute[247704]: 2026-01-31 08:13:35.499 247708 DEBUG nova.storage.rbd_utils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] resizing rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:13:35 compute-0 sudo[330420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:35 compute-0 sudo[330420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:35 compute-0 sudo[330420]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-0 sudo[330445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:35 compute-0 sudo[330445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:35 compute-0 sudo[330445]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-0 ceph-mon[74496]: pgmap v2332: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 51 KiB/s wr, 96 op/s
Jan 31 08:13:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3800349255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:35 compute-0 sudo[330470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:35 compute-0 sudo[330470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:35 compute-0 sudo[330470]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 41 KiB/s wr, 85 op/s
Jan 31 08:13:35 compute-0 sudo[330495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:13:35 compute-0 sudo[330495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:36 compute-0 sudo[330495]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.225 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.225 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.225 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.227 247708 DEBUG nova.compute.provider_tree [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.325 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.325 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.330 247708 DEBUG nova.scheduler.client.report [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:13:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.468 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.468 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:13:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:36.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.569 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.569 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.570 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.570 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53a5c321-1278-4df4-9fb0-feb465508681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.724 247708 DEBUG nova.network.neutron [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Updating instance_info_cache with network_info: [{"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.988 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:13:36 compute-0 nova_compute[247704]: 2026-01-31 08:13:36.989 247708 DEBUG nova.network.neutron [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.051 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Releasing lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.052 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance network_info: |[{"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.052 247708 DEBUG oslo_concurrency.lockutils [req-9d5889dd-a4cc-4150-bbdd-c4196f2e0a0b req-bacbb402-8e17-45d2-96dd-3460f613d784 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.053 247708 DEBUG nova.network.neutron [req-9d5889dd-a4cc-4150-bbdd-c4196f2e0a0b req-bacbb402-8e17-45d2-96dd-3460f613d784 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Refreshing network info cache for port 77c06224-c2b3-45e0-90f5-afd0c8400ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.071 247708 INFO nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.202 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.290 247708 DEBUG nova.policy [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5366d122b359489fb9d2bda8d19611a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.471 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.473 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.473 247708 INFO nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Creating image(s)
Jan 31 08:13:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 687 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 131 op/s
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.863 247708 DEBUG nova.storage.rbd_utils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:37 compute-0 podman[330559]: 2026-01-31 08:13:37.894021253 +0000 UTC m=+0.058739752 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.915 247708 DEBUG nova.storage.rbd_utils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.945 247708 DEBUG nova.storage.rbd_utils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:37 compute-0 nova_compute[247704]: 2026-01-31 08:13:37.951 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:38 compute-0 nova_compute[247704]: 2026-01-31 08:13:38.013 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:38 compute-0 nova_compute[247704]: 2026-01-31 08:13:38.014 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:38 compute-0 nova_compute[247704]: 2026-01-31 08:13:38.015 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:38 compute-0 nova_compute[247704]: 2026-01-31 08:13:38.015 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:13:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2337270161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:38 compute-0 nova_compute[247704]: 2026-01-31 08:13:38.196 247708 DEBUG nova.storage.rbd_utils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:38 compute-0 nova_compute[247704]: 2026-01-31 08:13:38.200 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3750999947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4192415656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:38 compute-0 ceph-mon[74496]: pgmap v2333: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 41 KiB/s wr, 85 op/s
Jan 31 08:13:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:38.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:38.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:13:39 compute-0 nova_compute[247704]: 2026-01-31 08:13:39.040 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:13:39 compute-0 nova_compute[247704]: 2026-01-31 08:13:39.494 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 31 08:13:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1966568629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1064267121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:40 compute-0 ceph-mon[74496]: pgmap v2334: 305 pgs: 305 active+clean; 687 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 131 op/s
Jan 31 08:13:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2337270161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:40.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:40.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:40 compute-0 nova_compute[247704]: 2026-01-31 08:13:40.701 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updating instance_info_cache with network_info: [{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:40 compute-0 nova_compute[247704]: 2026-01-31 08:13:40.742 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:40 compute-0 nova_compute[247704]: 2026-01-31 08:13:40.742 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:13:40 compute-0 nova_compute[247704]: 2026-01-31 08:13:40.743 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:40 compute-0 nova_compute[247704]: 2026-01-31 08:13:40.743 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.035 247708 DEBUG nova.network.neutron [req-9d5889dd-a4cc-4150-bbdd-c4196f2e0a0b req-bacbb402-8e17-45d2-96dd-3460f613d784 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Updated VIF entry in instance network info cache for port 77c06224-c2b3-45e0-90f5-afd0c8400ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.036 247708 DEBUG nova.network.neutron [req-9d5889dd-a4cc-4150-bbdd-c4196f2e0a0b req-bacbb402-8e17-45d2-96dd-3460f613d784 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Updating instance_info_cache with network_info: [{"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.064 247708 DEBUG oslo_concurrency.lockutils [req-9d5889dd-a4cc-4150-bbdd-c4196f2e0a0b req-bacbb402-8e17-45d2-96dd-3460f613d784 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.074 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.161 247708 DEBUG nova.network.neutron [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Successfully created port: 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.172 247708 DEBUG nova.objects.instance [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'migration_context' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.290 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.291 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Ensure instance console log exists: /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.291 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.292 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.292 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.294 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Start _get_guest_xml network_info=[{"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.299 247708 WARNING nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.303 247708 DEBUG nova.virt.libvirt.host [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.304 247708 DEBUG nova.virt.libvirt.host [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.307 247708 DEBUG nova.virt.libvirt.host [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.307 247708 DEBUG nova.virt.libvirt.host [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.308 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.308 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.309 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.309 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.309 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.309 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.309 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.310 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.310 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.310 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.310 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.311 247708 DEBUG nova.virt.hardware [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.313 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:13:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:13:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:13:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:13:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:13:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 708 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 114 op/s
Jan 31 08:13:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:13:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629244402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.779 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.820 247708 DEBUG nova.storage.rbd_utils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.827 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:41 compute-0 nova_compute[247704]: 2026-01-31 08:13:41.890 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:41 compute-0 ceph-mon[74496]: pgmap v2335: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 31 08:13:41 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:42 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:42 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e4cb8ab0-28b9-4883-ad5c-bec3791398c1 does not exist
Jan 31 08:13:42 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3fe981c1-61ce-41ac-a970-e2e36e4f064d does not exist
Jan 31 08:13:42 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev edff2315-4ca3-480d-b2ca-1c6b6258cdb9 does not exist
Jan 31 08:13:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:13:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:13:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:13:42 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:13:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:13:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.113 247708 DEBUG nova.storage.rbd_utils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] resizing rbd image 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:13:42 compute-0 sudo[330779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:42 compute-0 sudo[330779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:42 compute-0 sudo[330779]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:42 compute-0 sudo[330821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:42 compute-0 sudo[330821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:42 compute-0 sudo[330821]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:42 compute-0 sudo[330849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:42 compute-0 sudo[330849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:42 compute-0 sudo[330849]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:13:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583203475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.277 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.278 247708 DEBUG nova.virt.libvirt.vif [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1146155717',display_name='tempest-ServerRescueTestJSON-server-1146155717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1146155717',id=130,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbd0f41e455b4b3b9a8edf35ef0b85ed',ramdisk_id='',reservation_id='r-dm7efqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1911109090',owner_user_name='tempest-ServerRescueTestJSON-1911109090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:31Z,user_data=None,user_id='6ab9e181016f4d5a899c91dae3aa26e0',uuid=827a8d86-f250-441f-911a-98626a322ef7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.279 247708 DEBUG nova.network.os_vif_util [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converting VIF {"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.280 247708 DEBUG nova.network.os_vif_util [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:5f:e2,bridge_name='br-int',has_traffic_filtering=True,id=77c06224-c2b3-45e0-90f5-afd0c8400ff7,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77c06224-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.281 247708 DEBUG nova.objects.instance [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:42 compute-0 sudo[330874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:13:42 compute-0 sudo[330874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.454 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <uuid>827a8d86-f250-441f-911a-98626a322ef7</uuid>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <name>instance-00000082</name>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerRescueTestJSON-server-1146155717</nova:name>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:13:41</nova:creationTime>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <nova:user uuid="6ab9e181016f4d5a899c91dae3aa26e0">tempest-ServerRescueTestJSON-1911109090-project-member</nova:user>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <nova:project uuid="cbd0f41e455b4b3b9a8edf35ef0b85ed">tempest-ServerRescueTestJSON-1911109090</nova:project>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <nova:port uuid="77c06224-c2b3-45e0-90f5-afd0c8400ff7">
Jan 31 08:13:42 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <system>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <entry name="serial">827a8d86-f250-441f-911a-98626a322ef7</entry>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <entry name="uuid">827a8d86-f250-441f-911a-98626a322ef7</entry>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </system>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <os>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   </os>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <features>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   </features>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/827a8d86-f250-441f-911a-98626a322ef7_disk">
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       </source>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/827a8d86-f250-441f-911a-98626a322ef7_disk.config">
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       </source>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:13:42 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:51:5f:e2"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <target dev="tap77c06224-c2"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/console.log" append="off"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <video>
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </video>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:13:42 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:13:42 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:13:42 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:13:42 compute-0 nova_compute[247704]: </domain>
Jan 31 08:13:42 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.454 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Preparing to wait for external event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.455 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.455 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.455 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.456 247708 DEBUG nova.virt.libvirt.vif [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1146155717',display_name='tempest-ServerRescueTestJSON-server-1146155717',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1146155717',id=130,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cbd0f41e455b4b3b9a8edf35ef0b85ed',ramdisk_id='',reservation_id='r-dm7efqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1911109090',owner_user_name='tempest-ServerRescueTestJSON-1911109090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:31Z,user_data=None,user_id='6ab9e181016f4d5a899c91dae3aa26e0',uuid=827a8d86-f250-441f-911a-98626a322ef7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.457 247708 DEBUG nova.network.os_vif_util [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converting VIF {"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.457 247708 DEBUG nova.network.os_vif_util [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:51:5f:e2,bridge_name='br-int',has_traffic_filtering=True,id=77c06224-c2b3-45e0-90f5-afd0c8400ff7,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77c06224-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.458 247708 DEBUG os_vif [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:5f:e2,bridge_name='br-int',has_traffic_filtering=True,id=77c06224-c2b3-45e0-90f5-afd0c8400ff7,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77c06224-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.459 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.460 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.461 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.466 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.466 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77c06224-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.467 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77c06224-c2, col_values=(('external_ids', {'iface-id': '77c06224-c2b3-45e0-90f5-afd0c8400ff7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:51:5f:e2', 'vm-uuid': '827a8d86-f250-441f-911a-98626a322ef7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.469 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:42 compute-0 NetworkManager[49108]: <info>  [1769847222.4709] manager: (tap77c06224-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.473 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.480 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:42 compute-0 nova_compute[247704]: 2026-01-31 08:13:42.483 247708 INFO os_vif [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:51:5f:e2,bridge_name='br-int',has_traffic_filtering=True,id=77c06224-c2b3-45e0-90f5-afd0c8400ff7,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77c06224-c2')
Jan 31 08:13:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:13:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:42.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:13:42 compute-0 podman[330942]: 2026-01-31 08:13:42.614606558 +0000 UTC m=+0.021004143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:13:42 compute-0 podman[330942]: 2026-01-31 08:13:42.82688585 +0000 UTC m=+0.233283415 container create 496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:13:43 compute-0 systemd[1]: Started libpod-conmon-496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca.scope.
Jan 31 08:13:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.127 247708 DEBUG nova.objects.instance [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'migration_context' on Instance uuid 905c4dc1-7ad5-4442-ba06-76af73f6742c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:13:43 compute-0 ceph-mon[74496]: pgmap v2336: 305 pgs: 305 active+clean; 708 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 114 op/s
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/629244402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3296464802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:13:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3583203475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:43 compute-0 podman[330942]: 2026-01-31 08:13:43.203570979 +0000 UTC m=+0.609968574 container init 496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_colden, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.208 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.209 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.209 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No VIF found with MAC fa:16:3e:51:5f:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:13:43 compute-0 podman[330942]: 2026-01-31 08:13:43.21058653 +0000 UTC m=+0.616984075 container start 496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.210 247708 INFO nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Using config drive
Jan 31 08:13:43 compute-0 upbeat_colden[330958]: 167 167
Jan 31 08:13:43 compute-0 systemd[1]: libpod-496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca.scope: Deactivated successfully.
Jan 31 08:13:43 compute-0 podman[330942]: 2026-01-31 08:13:43.24135104 +0000 UTC m=+0.647748635 container attach 496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 08:13:43 compute-0 podman[330942]: 2026-01-31 08:13:43.243464591 +0000 UTC m=+0.649862186 container died 496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.269 247708 DEBUG nova.storage.rbd_utils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.295 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.295 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Ensure instance console log exists: /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.295 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.296 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:43 compute-0 nova_compute[247704]: 2026-01-31 08:13:43.296 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-20ff5dcc41363472728a3c32839c5bbe434c16ac366aabffe964d3426fd3f9a8-merged.mount: Deactivated successfully.
Jan 31 08:13:43 compute-0 podman[330942]: 2026-01-31 08:13:43.527558193 +0000 UTC m=+0.933955768 container remove 496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:43 compute-0 systemd[1]: libpod-conmon-496527a163926c35cdabaecf3da991b0b20bf6c41044955b9cec7ef44c993cca.scope: Deactivated successfully.
Jan 31 08:13:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 718 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 31 08:13:43 compute-0 podman[331021]: 2026-01-31 08:13:43.766324961 +0000 UTC m=+0.102285953 container create 4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:13:43 compute-0 podman[331021]: 2026-01-31 08:13:43.697482923 +0000 UTC m=+0.033443965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:13:43 compute-0 systemd[1]: Started libpod-conmon-4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0.scope.
Jan 31 08:13:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9281adc3218b0de441659728105f7bb640baa8e76a7430196d50d40426abb6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9281adc3218b0de441659728105f7bb640baa8e76a7430196d50d40426abb6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9281adc3218b0de441659728105f7bb640baa8e76a7430196d50d40426abb6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9281adc3218b0de441659728105f7bb640baa8e76a7430196d50d40426abb6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9281adc3218b0de441659728105f7bb640baa8e76a7430196d50d40426abb6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:44 compute-0 podman[331021]: 2026-01-31 08:13:44.082535785 +0000 UTC m=+0.418496797 container init 4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:13:44 compute-0 podman[331021]: 2026-01-31 08:13:44.087983658 +0000 UTC m=+0.423944650 container start 4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.122 247708 INFO nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Creating config drive at /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.126 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp52_jeynb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:44 compute-0 podman[331021]: 2026-01-31 08:13:44.197779094 +0000 UTC m=+0.533740106 container attach 4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.247 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp52_jeynb" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.285 247708 DEBUG nova.storage.rbd_utils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.288 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config 827a8d86-f250-441f-911a-98626a322ef7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:44 compute-0 ceph-mon[74496]: pgmap v2337: 305 pgs: 305 active+clean; 718 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 31 08:13:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:44.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.382 247708 DEBUG nova.network.neutron [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Successfully updated port: 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.401 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "refresh_cache-905c4dc1-7ad5-4442-ba06-76af73f6742c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.401 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquired lock "refresh_cache-905c4dc1-7ad5-4442-ba06-76af73f6742c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.402 247708 DEBUG nova.network.neutron [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.495 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.512 247708 DEBUG nova.compute.manager [req-a6706955-4ba3-46ad-84b7-742ab1d74a2f req-8267ead1-3a03-48aa-8e65-086bead3e90c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received event network-changed-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.512 247708 DEBUG nova.compute.manager [req-a6706955-4ba3-46ad-84b7-742ab1d74a2f req-8267ead1-3a03-48aa-8e65-086bead3e90c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Refreshing instance network info cache due to event network-changed-6fd77fc5-facb-4b81-bbc5-d38e65a67c12. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.512 247708 DEBUG oslo_concurrency.lockutils [req-a6706955-4ba3-46ad-84b7-742ab1d74a2f req-8267ead1-3a03-48aa-8e65-086bead3e90c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-905c4dc1-7ad5-4442-ba06-76af73f6742c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:44.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.677 247708 DEBUG nova.network.neutron [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:13:44 compute-0 cool_euler[331037]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:13:44 compute-0 cool_euler[331037]: --> relative data size: 1.0
Jan 31 08:13:44 compute-0 cool_euler[331037]: --> All data devices are unavailable
Jan 31 08:13:44 compute-0 systemd[1]: libpod-4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0.scope: Deactivated successfully.
Jan 31 08:13:44 compute-0 podman[331021]: 2026-01-31 08:13:44.863895124 +0000 UTC m=+1.199856126 container died 4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.898 247708 DEBUG oslo_concurrency.processutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config 827a8d86-f250-441f-911a-98626a322ef7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.898 247708 INFO nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Deleting local config drive /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config because it was imported into RBD.
Jan 31 08:13:44 compute-0 kernel: tap77c06224-c2: entered promiscuous mode
Jan 31 08:13:44 compute-0 ovn_controller[149457]: 2026-01-31T08:13:44Z|00506|binding|INFO|Claiming lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 for this chassis.
Jan 31 08:13:44 compute-0 ovn_controller[149457]: 2026-01-31T08:13:44Z|00507|binding|INFO|77c06224-c2b3-45e0-90f5-afd0c8400ff7: Claiming fa:16:3e:51:5f:e2 10.100.0.14
Jan 31 08:13:44 compute-0 nova_compute[247704]: 2026-01-31 08:13:44.956 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:44 compute-0 NetworkManager[49108]: <info>  [1769847224.9576] manager: (tap77c06224-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Jan 31 08:13:44 compute-0 ovn_controller[149457]: 2026-01-31T08:13:44Z|00508|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 up in Southbound
Jan 31 08:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:44.963 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:5f:e2 10.100.0.14'], port_security=['fa:16:3e:51:5f:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '827a8d86-f250-441f-911a-98626a322ef7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=77c06224-c2b3-45e0-90f5-afd0c8400ff7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:44.964 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 77c06224-c2b3-45e0-90f5-afd0c8400ff7 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c bound to our chassis
Jan 31 08:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:44.966 160028 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:44.967 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9d567707-4e7e-477f-9761-4097cafd7afd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:45 compute-0 ovn_controller[149457]: 2026-01-31T08:13:45Z|00509|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 ovn-installed in OVS
Jan 31 08:13:45 compute-0 nova_compute[247704]: 2026-01-31 08:13:45.015 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:45 compute-0 sudo[331106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:45 compute-0 sudo[331106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:45 compute-0 sudo[331106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-0 systemd-machined[214448]: New machine qemu-52-instance-00000082.
Jan 31 08:13:45 compute-0 systemd-udevd[331142]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:13:45 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-00000082.
Jan 31 08:13:45 compute-0 NetworkManager[49108]: <info>  [1769847225.0619] device (tap77c06224-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:13:45 compute-0 NetworkManager[49108]: <info>  [1769847225.0624] device (tap77c06224-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:13:45 compute-0 sudo[331141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:45 compute-0 sudo[331141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:45 compute-0 sudo[331141]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9281adc3218b0de441659728105f7bb640baa8e76a7430196d50d40426abb6a-merged.mount: Deactivated successfully.
Jan 31 08:13:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/523617742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:13:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/523617742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:13:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 759 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 675 KiB/s rd, 3.7 MiB/s wr, 119 op/s
Jan 31 08:13:45 compute-0 podman[331021]: 2026-01-31 08:13:45.835372087 +0000 UTC m=+2.171333079 container remove 4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_euler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:45 compute-0 sudo[330874]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-0 sudo[331213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:45 compute-0 systemd[1]: libpod-conmon-4e200a0605105a36f9d49abce794c4a104ee639bbcf4f210b0faa39926713ec0.scope: Deactivated successfully.
Jan 31 08:13:45 compute-0 sudo[331213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:45 compute-0 sudo[331213]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-0 sudo[331238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:46 compute-0 sudo[331238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:46 compute-0 sudo[331238]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:46 compute-0 sudo[331268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:46 compute-0 sudo[331268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:46 compute-0 sudo[331268]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.073 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847226.0722024, 827a8d86-f250-441f-911a-98626a322ef7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.074 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] VM Started (Lifecycle Event)
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.103 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.109 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847226.0729625, 827a8d86-f250-441f-911a-98626a322ef7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.109 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] VM Paused (Lifecycle Event)
Jan 31 08:13:46 compute-0 sudo[331294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:13:46 compute-0 sudo[331294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.139 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.145 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.172 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:13:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:13:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:46.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:13:46 compute-0 podman[331357]: 2026-01-31 08:13:46.502918443 +0000 UTC m=+0.087104153 container create 7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pike, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:13:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:46.531 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:46.532 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.533 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:46 compute-0 podman[331357]: 2026-01-31 08:13:46.446905388 +0000 UTC m=+0.031091068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:13:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:46.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.571 247708 DEBUG nova.compute.manager [req-c20fd4a9-c122-4a5a-9675-106042e1690a req-8599d20a-4f7a-4b09-a08f-2e90b5f96a74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.571 247708 DEBUG oslo_concurrency.lockutils [req-c20fd4a9-c122-4a5a-9675-106042e1690a req-8599d20a-4f7a-4b09-a08f-2e90b5f96a74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.572 247708 DEBUG oslo_concurrency.lockutils [req-c20fd4a9-c122-4a5a-9675-106042e1690a req-8599d20a-4f7a-4b09-a08f-2e90b5f96a74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.572 247708 DEBUG oslo_concurrency.lockutils [req-c20fd4a9-c122-4a5a-9675-106042e1690a req-8599d20a-4f7a-4b09-a08f-2e90b5f96a74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.573 247708 DEBUG nova.compute.manager [req-c20fd4a9-c122-4a5a-9675-106042e1690a req-8599d20a-4f7a-4b09-a08f-2e90b5f96a74 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Processing event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.574 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.578 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847226.5779934, 827a8d86-f250-441f-911a-98626a322ef7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.578 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] VM Resumed (Lifecycle Event)
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.580 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.583 247708 INFO nova.virt.libvirt.driver [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance spawned successfully.
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.583 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.615 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:46 compute-0 systemd[1]: Started libpod-conmon-7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3.scope.
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.619 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.620 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.621 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.621 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.622 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.622 247708 DEBUG nova.virt.libvirt.driver [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.629 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:13:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:46 compute-0 podman[331357]: 2026-01-31 08:13:46.729556266 +0000 UTC m=+0.313742026 container init 7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pike, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:13:46 compute-0 podman[331357]: 2026-01-31 08:13:46.737406417 +0000 UTC m=+0.321592117 container start 7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pike, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:46 compute-0 youthful_pike[331373]: 167 167
Jan 31 08:13:46 compute-0 systemd[1]: libpod-7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3.scope: Deactivated successfully.
Jan 31 08:13:46 compute-0 podman[331357]: 2026-01-31 08:13:46.749932673 +0000 UTC m=+0.334118373 container attach 7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:46 compute-0 podman[331357]: 2026-01-31 08:13:46.75065526 +0000 UTC m=+0.334840970 container died 7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:46 compute-0 ceph-mon[74496]: pgmap v2338: 305 pgs: 305 active+clean; 759 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 675 KiB/s rd, 3.7 MiB/s wr, 119 op/s
Jan 31 08:13:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1571607756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:13:46 compute-0 nova_compute[247704]: 2026-01-31 08:13:46.961 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6715a210a82f1e85a9eaac95cad895d02d4f1675c00b1611d6b3f16e36ae852-merged.mount: Deactivated successfully.
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.064 247708 INFO nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Took 14.95 seconds to spawn the instance on the hypervisor.
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.065 247708 DEBUG nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:47 compute-0 podman[331357]: 2026-01-31 08:13:47.224154357 +0000 UTC m=+0.808340047 container remove 7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pike, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:13:47 compute-0 systemd[1]: libpod-conmon-7057504895f09bd6c43cb5f9f53d2f925e6c7b1bf4a3d9b469a23e3b39e472b3.scope: Deactivated successfully.
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.379 247708 INFO nova.compute.manager [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Took 18.77 seconds to build instance.
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.469 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:47 compute-0 podman[331398]: 2026-01-31 08:13:47.396790463 +0000 UTC m=+0.037144555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.503 247708 DEBUG oslo_concurrency.lockutils [None req-9e049f2c-2bdd-4d22-a79f-3b0e54c5f38e 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.532 247708 DEBUG nova.network.neutron [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Updating instance_info_cache with network_info: [{"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:47 compute-0 podman[331398]: 2026-01-31 08:13:47.641318452 +0000 UTC m=+0.281672454 container create f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:13:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 808 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 5.4 MiB/s wr, 136 op/s
Jan 31 08:13:47 compute-0 systemd[1]: Started libpod-conmon-f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15.scope.
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.793 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Releasing lock "refresh_cache-905c4dc1-7ad5-4442-ba06-76af73f6742c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.797 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Instance network_info: |[{"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.799 247708 DEBUG oslo_concurrency.lockutils [req-a6706955-4ba3-46ad-84b7-742ab1d74a2f req-8267ead1-3a03-48aa-8e65-086bead3e90c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-905c4dc1-7ad5-4442-ba06-76af73f6742c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.800 247708 DEBUG nova.network.neutron [req-a6706955-4ba3-46ad-84b7-742ab1d74a2f req-8267ead1-3a03-48aa-8e65-086bead3e90c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Refreshing network info cache for port 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.804 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Start _get_guest_xml network_info=[{"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.809 247708 WARNING nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.820 247708 DEBUG nova.virt.libvirt.host [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.821 247708 DEBUG nova.virt.libvirt.host [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.824 247708 DEBUG nova.virt.libvirt.host [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.825 247708 DEBUG nova.virt.libvirt.host [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.826 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:13:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.827 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.843 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.843 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.844 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.844 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.844 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.844 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.844 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.845 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.845 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.845 247708 DEBUG nova.virt.hardware [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:13:47 compute-0 nova_compute[247704]: 2026-01-31 08:13:47.848 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86517b9f712621a06484a6cde798b8c5688c7e62762304ee4f4e800245517772/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86517b9f712621a06484a6cde798b8c5688c7e62762304ee4f4e800245517772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86517b9f712621a06484a6cde798b8c5688c7e62762304ee4f4e800245517772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86517b9f712621a06484a6cde798b8c5688c7e62762304ee4f4e800245517772/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:48 compute-0 podman[331398]: 2026-01-31 08:13:48.019039766 +0000 UTC m=+0.659393788 container init f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:48 compute-0 podman[331398]: 2026-01-31 08:13:48.027386699 +0000 UTC m=+0.667740701 container start f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:48 compute-0 podman[331398]: 2026-01-31 08:13:48.060839885 +0000 UTC m=+0.701193887 container attach f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:13:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869198860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.280 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.309 247708 DEBUG nova.storage.rbd_utils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.314 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:48 compute-0 ceph-mon[74496]: pgmap v2339: 305 pgs: 305 active+clean; 808 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 754 KiB/s rd, 5.4 MiB/s wr, 136 op/s
Jan 31 08:13:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2869198860' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:48.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]: {
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:     "0": [
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:         {
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "devices": [
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "/dev/loop3"
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             ],
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "lv_name": "ceph_lv0",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "lv_size": "7511998464",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "name": "ceph_lv0",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "tags": {
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.cluster_name": "ceph",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.crush_device_class": "",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.encrypted": "0",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.osd_id": "0",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.type": "block",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:                 "ceph.vdo": "0"
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             },
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "type": "block",
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:             "vg_name": "ceph_vg0"
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:         }
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]:     ]
Jan 31 08:13:48 compute-0 nostalgic_williamson[331415]: }
Jan 31 08:13:48 compute-0 systemd[1]: libpod-f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15.scope: Deactivated successfully.
Jan 31 08:13:48 compute-0 podman[331398]: 2026-01-31 08:13:48.866957687 +0000 UTC m=+1.507311689 container died f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.918 247708 DEBUG nova.compute.manager [req-a01dbc13-47d8-4df9-8182-d4d543cf8ea9 req-f55e3a49-32f1-4cbc-8c5c-49bed6611c4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.919 247708 DEBUG oslo_concurrency.lockutils [req-a01dbc13-47d8-4df9-8182-d4d543cf8ea9 req-f55e3a49-32f1-4cbc-8c5c-49bed6611c4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.919 247708 DEBUG oslo_concurrency.lockutils [req-a01dbc13-47d8-4df9-8182-d4d543cf8ea9 req-f55e3a49-32f1-4cbc-8c5c-49bed6611c4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.919 247708 DEBUG oslo_concurrency.lockutils [req-a01dbc13-47d8-4df9-8182-d4d543cf8ea9 req-f55e3a49-32f1-4cbc-8c5c-49bed6611c4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.919 247708 DEBUG nova.compute.manager [req-a01dbc13-47d8-4df9-8182-d4d543cf8ea9 req-f55e3a49-32f1-4cbc-8c5c-49bed6611c4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:48 compute-0 nova_compute[247704]: 2026-01-31 08:13:48.919 247708 WARNING nova.compute.manager [req-a01dbc13-47d8-4df9-8182-d4d543cf8ea9 req-f55e3a49-32f1-4cbc-8c5c-49bed6611c4a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state active and task_state None.
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.166 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.852s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.171 247708 DEBUG nova.virt.libvirt.vif [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-718061765',display_name='tempest-ServersTestJSON-server-718061765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-718061765',id=131,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-0s68bku0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:37Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=905c4dc1-7ad5-4442-ba06-76af73f6742c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.171 247708 DEBUG nova.network.os_vif_util [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.173 247708 DEBUG nova.network.os_vif_util [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:cd:65,bridge_name='br-int',has_traffic_filtering=True,id=6fd77fc5-facb-4b81-bbc5-d38e65a67c12,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fd77fc5-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.175 247708 DEBUG nova.objects.instance [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 905c4dc1-7ad5-4442-ba06-76af73f6742c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-86517b9f712621a06484a6cde798b8c5688c7e62762304ee4f4e800245517772-merged.mount: Deactivated successfully.
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.310 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <uuid>905c4dc1-7ad5-4442-ba06-76af73f6742c</uuid>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <name>instance-00000083</name>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersTestJSON-server-718061765</nova:name>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:13:47</nova:creationTime>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <nova:user uuid="5366d122b359489fb9d2bda8d19611a6">tempest-ServersTestJSON-327201738-project-member</nova:user>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <nova:project uuid="4aa06cf35d8c468fb16884f19dc8ce71">tempest-ServersTestJSON-327201738</nova:project>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <nova:port uuid="6fd77fc5-facb-4b81-bbc5-d38e65a67c12">
Jan 31 08:13:49 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <system>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <entry name="serial">905c4dc1-7ad5-4442-ba06-76af73f6742c</entry>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <entry name="uuid">905c4dc1-7ad5-4442-ba06-76af73f6742c</entry>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </system>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <os>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   </os>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <features>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   </features>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/905c4dc1-7ad5-4442-ba06-76af73f6742c_disk">
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       </source>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/905c4dc1-7ad5-4442-ba06-76af73f6742c_disk.config">
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       </source>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:13:49 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:59:cd:65"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <target dev="tap6fd77fc5-fa"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c/console.log" append="off"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <video>
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </video>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:13:49 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:13:49 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:13:49 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:13:49 compute-0 nova_compute[247704]: </domain>
Jan 31 08:13:49 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.311 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Preparing to wait for external event network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.311 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.315 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.316 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.316 247708 DEBUG nova.virt.libvirt.vif [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-718061765',display_name='tempest-ServersTestJSON-server-718061765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-718061765',id=131,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-0s68bku0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:37Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=905c4dc1-7ad5-4442-ba06-76af73f6742c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.317 247708 DEBUG nova.network.os_vif_util [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.317 247708 DEBUG nova.network.os_vif_util [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:cd:65,bridge_name='br-int',has_traffic_filtering=True,id=6fd77fc5-facb-4b81-bbc5-d38e65a67c12,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fd77fc5-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.318 247708 DEBUG os_vif [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:cd:65,bridge_name='br-int',has_traffic_filtering=True,id=6fd77fc5-facb-4b81-bbc5-d38e65a67c12,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fd77fc5-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.319 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.319 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.320 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.324 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.325 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fd77fc5-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.325 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6fd77fc5-fa, col_values=(('external_ids', {'iface-id': '6fd77fc5-facb-4b81-bbc5-d38e65a67c12', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:cd:65', 'vm-uuid': '905c4dc1-7ad5-4442-ba06-76af73f6742c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.327 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:49 compute-0 NetworkManager[49108]: <info>  [1769847229.3281] manager: (tap6fd77fc5-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.332 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.336 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.337 247708 INFO os_vif [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:cd:65,bridge_name='br-int',has_traffic_filtering=True,id=6fd77fc5-facb-4b81-bbc5-d38e65a67c12,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fd77fc5-fa')
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.504 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.635 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.635 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.636 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No VIF found with MAC fa:16:3e:59:cd:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.637 247708 INFO nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Using config drive
Jan 31 08:13:49 compute-0 nova_compute[247704]: 2026-01-31 08:13:49.675 247708 DEBUG nova.storage.rbd_utils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 834 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.4 MiB/s wr, 111 op/s
Jan 31 08:13:49 compute-0 podman[331398]: 2026-01-31 08:13:49.876708641 +0000 UTC m=+2.517062683 container remove f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:13:49 compute-0 sudo[331294]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:49 compute-0 systemd[1]: libpod-conmon-f72152ed30956056f50b61fbf5971d87672dd0e7d2a3f20e689c7ef127173a15.scope: Deactivated successfully.
Jan 31 08:13:50 compute-0 sudo[331521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:50 compute-0 sudo[331521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:50 compute-0 sudo[331521]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:50 compute-0 sudo[331546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:50 compute-0 sudo[331546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:50 compute-0 sudo[331546]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:50 compute-0 sudo[331571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:50 compute-0 sudo[331571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:50 compute-0 sudo[331571]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:50 compute-0 sshd-session[331519]: Invalid user banxgg from 45.148.10.240 port 38298
Jan 31 08:13:50 compute-0 sudo[331596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:13:50 compute-0 sudo[331596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:50 compute-0 sshd-session[331519]: Connection closed by invalid user banxgg 45.148.10.240 port 38298 [preauth]
Jan 31 08:13:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:13:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:50.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:13:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:50.534 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:50 compute-0 podman[331664]: 2026-01-31 08:13:50.47794904 +0000 UTC m=+0.019907076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:13:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:50.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:50 compute-0 nova_compute[247704]: 2026-01-31 08:13:50.647 247708 INFO nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Creating config drive at /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c/disk.config
Jan 31 08:13:50 compute-0 nova_compute[247704]: 2026-01-31 08:13:50.654 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpoglut045 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:50 compute-0 podman[331664]: 2026-01-31 08:13:50.700854191 +0000 UTC m=+0.242812217 container create 9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wilbur, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:13:50 compute-0 nova_compute[247704]: 2026-01-31 08:13:50.778 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpoglut045" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:50 compute-0 systemd[1]: Started libpod-conmon-9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d.scope.
Jan 31 08:13:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:50 compute-0 nova_compute[247704]: 2026-01-31 08:13:50.958 247708 DEBUG nova.storage.rbd_utils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:13:50 compute-0 nova_compute[247704]: 2026-01-31 08:13:50.964 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c/disk.config 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:13:50 compute-0 nova_compute[247704]: 2026-01-31 08:13:50.996 247708 DEBUG nova.network.neutron [req-a6706955-4ba3-46ad-84b7-742ab1d74a2f req-8267ead1-3a03-48aa-8e65-086bead3e90c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Updated VIF entry in instance network info cache for port 6fd77fc5-facb-4b81-bbc5-d38e65a67c12. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:13:50 compute-0 nova_compute[247704]: 2026-01-31 08:13:50.997 247708 DEBUG nova.network.neutron [req-a6706955-4ba3-46ad-84b7-742ab1d74a2f req-8267ead1-3a03-48aa-8e65-086bead3e90c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Updating instance_info_cache with network_info: [{"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:51 compute-0 podman[331664]: 2026-01-31 08:13:51.049022075 +0000 UTC m=+0.590980141 container init 9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:51 compute-0 podman[331664]: 2026-01-31 08:13:51.058455116 +0000 UTC m=+0.600413172 container start 9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:13:51 compute-0 romantic_wilbur[331696]: 167 167
Jan 31 08:13:51 compute-0 systemd[1]: libpod-9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d.scope: Deactivated successfully.
Jan 31 08:13:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4047450471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:51 compute-0 ceph-mon[74496]: pgmap v2340: 305 pgs: 305 active+clean; 834 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.4 MiB/s wr, 111 op/s
Jan 31 08:13:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1537017837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:51 compute-0 podman[331664]: 2026-01-31 08:13:51.188882203 +0000 UTC m=+0.730840269 container attach 9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:13:51 compute-0 podman[331664]: 2026-01-31 08:13:51.189347474 +0000 UTC m=+0.731305500 container died 9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:13:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:51 compute-0 nova_compute[247704]: 2026-01-31 08:13:51.272 247708 DEBUG oslo_concurrency.lockutils [req-a6706955-4ba3-46ad-84b7-742ab1d74a2f req-8267ead1-3a03-48aa-8e65-086bead3e90c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-905c4dc1-7ad5-4442-ba06-76af73f6742c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:51 compute-0 nova_compute[247704]: 2026-01-31 08:13:51.377 247708 INFO nova.compute.manager [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Rescuing
Jan 31 08:13:51 compute-0 nova_compute[247704]: 2026-01-31 08:13:51.377 247708 DEBUG oslo_concurrency.lockutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:13:51 compute-0 nova_compute[247704]: 2026-01-31 08:13:51.378 247708 DEBUG oslo_concurrency.lockutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquired lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:13:51 compute-0 nova_compute[247704]: 2026-01-31 08:13:51.378 247708 DEBUG nova.network.neutron [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-877741a9cdd5098f65f65a7f259b0e2c1a78e05dacea6a3797e4a4889436bf55-merged.mount: Deactivated successfully.
Jan 31 08:13:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 142 op/s
Jan 31 08:13:52 compute-0 podman[331664]: 2026-01-31 08:13:52.206364576 +0000 UTC m=+1.748322602 container remove 9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:13:52 compute-0 systemd[1]: libpod-conmon-9f41286dcf98dfa59a4042f43f454dd6270c16bd1b82721d6ffde2fe3b2c261d.scope: Deactivated successfully.
Jan 31 08:13:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:52.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:52 compute-0 podman[331744]: 2026-01-31 08:13:52.35387099 +0000 UTC m=+0.037963846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:13:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2975001044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:52 compute-0 ceph-mon[74496]: pgmap v2341: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 142 op/s
Jan 31 08:13:52 compute-0 podman[331744]: 2026-01-31 08:13:52.510940017 +0000 UTC m=+0.195032893 container create 872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:13:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:52 compute-0 nova_compute[247704]: 2026-01-31 08:13:52.789 247708 DEBUG oslo_concurrency.processutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c/disk.config 905c4dc1-7ad5-4442-ba06-76af73f6742c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.825s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:13:52 compute-0 nova_compute[247704]: 2026-01-31 08:13:52.793 247708 INFO nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Deleting local config drive /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c/disk.config because it was imported into RBD.
Jan 31 08:13:52 compute-0 systemd[1]: Started libpod-conmon-872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8.scope.
Jan 31 08:13:52 compute-0 kernel: tap6fd77fc5-fa: entered promiscuous mode
Jan 31 08:13:52 compute-0 ovn_controller[149457]: 2026-01-31T08:13:52Z|00510|binding|INFO|Claiming lport 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 for this chassis.
Jan 31 08:13:52 compute-0 ovn_controller[149457]: 2026-01-31T08:13:52Z|00511|binding|INFO|6fd77fc5-facb-4b81-bbc5-d38e65a67c12: Claiming fa:16:3e:59:cd:65 10.100.0.8
Jan 31 08:13:52 compute-0 nova_compute[247704]: 2026-01-31 08:13:52.845 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:52 compute-0 ovn_controller[149457]: 2026-01-31T08:13:52Z|00512|binding|INFO|Setting lport 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 ovn-installed in OVS
Jan 31 08:13:52 compute-0 nova_compute[247704]: 2026-01-31 08:13:52.852 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:52 compute-0 NetworkManager[49108]: <info>  [1769847232.8544] manager: (tap6fd77fc5-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Jan 31 08:13:52 compute-0 nova_compute[247704]: 2026-01-31 08:13:52.855 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20fc1f31bf892680b3e43061fdaedbb7ecd9a1c453e9e67555822f95ebc64314/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20fc1f31bf892680b3e43061fdaedbb7ecd9a1c453e9e67555822f95ebc64314/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20fc1f31bf892680b3e43061fdaedbb7ecd9a1c453e9e67555822f95ebc64314/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20fc1f31bf892680b3e43061fdaedbb7ecd9a1c453e9e67555822f95ebc64314/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:52 compute-0 systemd-udevd[331775]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:13:52 compute-0 NetworkManager[49108]: <info>  [1769847232.9032] device (tap6fd77fc5-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:13:52 compute-0 NetworkManager[49108]: <info>  [1769847232.9040] device (tap6fd77fc5-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:13:52 compute-0 systemd-machined[214448]: New machine qemu-53-instance-00000083.
Jan 31 08:13:52 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000083.
Jan 31 08:13:52 compute-0 ovn_controller[149457]: 2026-01-31T08:13:52Z|00513|binding|INFO|Setting lport 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 up in Southbound
Jan 31 08:13:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:52.971 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:cd:65 10.100.0.8'], port_security=['fa:16:3e:59:cd:65 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '905c4dc1-7ad5-4442-ba06-76af73f6742c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6fd77fc5-facb-4b81-bbc5-d38e65a67c12) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:13:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:52.973 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 bound to our chassis
Jan 31 08:13:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:52.976 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:13:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:52.985 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a16f3e9d-fd60-4085-8f68-d48e06ffdea2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:52.986 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb88251fc-71 in ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:13:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:52.988 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb88251fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:13:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:52.988 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d4d1c2-d4c0-4cd4-9b18-03b20622f362]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:52.989 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dd1c51b6-23cf-4570-b6b3-7112d8a5fe06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.002 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[fc383b54-81f5-48df-8ed8-c827a1cf4304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.015 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc6623f-1ca5-4b5f-be53-86436d1328ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.042 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5a89ffe2-edf3-4bd4-9cfe-5278bb65e5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 NetworkManager[49108]: <info>  [1769847233.0557] manager: (tapb88251fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Jan 31 08:13:53 compute-0 systemd-udevd[331778]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.055 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[324ada9c-e108-4766-8d9c-5c534278e9d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.100 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[fa02b6c2-b9af-4181-8327-a9dbdfa90b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.106 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ad6cf2-5be6-408d-9972-e8f2c9cb72c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 NetworkManager[49108]: <info>  [1769847233.1270] device (tapb88251fc-70): carrier: link connected
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.133 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[34e5304f-8355-4cc4-a9d6-19ad2b595aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.155 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cefea39f-b8ab-4857-9895-7312240acaab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747574, 'reachable_time': 18075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331809, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.179 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e1e78bd9-d8b5-4bfb-ac7e-6d450a610181]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2a68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747574, 'tstamp': 747574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331810, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 podman[331744]: 2026-01-31 08:13:53.187457431 +0000 UTC m=+0.871550327 container init 872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:13:53 compute-0 podman[331744]: 2026-01-31 08:13:53.199372502 +0000 UTC m=+0.883465348 container start 872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.205 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3782a31a-8baa-4111-a45c-46a0115211b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747574, 'reachable_time': 18075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331811, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.245 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cbac599a-8dad-430d-a883-e99007c209d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 podman[331744]: 2026-01-31 08:13:53.261852713 +0000 UTC m=+0.945945539 container attach 872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.298 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1470f4-d6d9-4fb8-b289-75841735823e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.299 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.299 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.300 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb88251fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:53 compute-0 NetworkManager[49108]: <info>  [1769847233.3024] manager: (tapb88251fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Jan 31 08:13:53 compute-0 kernel: tapb88251fc-70: entered promiscuous mode
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.304 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb88251fc-70, col_values=(('external_ids', {'iface-id': '950341c4-aa2a-4261-8207-ff7e92fd4830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:13:53 compute-0 ovn_controller[149457]: 2026-01-31T08:13:53Z|00514|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.310 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.311 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.312 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[041b9335-efdb-455f-8ade-85a09e26bcbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.313 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:13:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:13:53.315 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'env', 'PROCESS_TAG=haproxy-b88251fc-7610-460a-ba55-2ed186c6f696', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b88251fc-7610-460a-ba55-2ed186c6f696.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.638 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847233.637439, 905c4dc1-7ad5-4442-ba06-76af73f6742c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.638 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] VM Started (Lifecycle Event)
Jan 31 08:13:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 156 op/s
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.779 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.784 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847233.637709, 905c4dc1-7ad5-4442-ba06-76af73f6742c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.785 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] VM Paused (Lifecycle Event)
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.950 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:53 compute-0 nova_compute[247704]: 2026-01-31 08:13:53.956 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:13:54 compute-0 podman[331886]: 2026-01-31 08:13:53.937830915 +0000 UTC m=+0.033925268 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.043 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.199 247708 DEBUG nova.compute.manager [req-b1457f40-9114-445a-bc86-b2b7d685a43d req-1abcc743-541b-4732-8737-a26797eac16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received event network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.200 247708 DEBUG oslo_concurrency.lockutils [req-b1457f40-9114-445a-bc86-b2b7d685a43d req-1abcc743-541b-4732-8737-a26797eac16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.200 247708 DEBUG oslo_concurrency.lockutils [req-b1457f40-9114-445a-bc86-b2b7d685a43d req-1abcc743-541b-4732-8737-a26797eac16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.200 247708 DEBUG oslo_concurrency.lockutils [req-b1457f40-9114-445a-bc86-b2b7d685a43d req-1abcc743-541b-4732-8737-a26797eac16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.200 247708 DEBUG nova.compute.manager [req-b1457f40-9114-445a-bc86-b2b7d685a43d req-1abcc743-541b-4732-8737-a26797eac16f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Processing event network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.201 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.207 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.210 247708 INFO nova.virt.libvirt.driver [-] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Instance spawned successfully.
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.211 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.213 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847234.2129927, 905c4dc1-7ad5-4442-ba06-76af73f6742c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.213 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] VM Resumed (Lifecycle Event)
Jan 31 08:13:54 compute-0 ceph-mon[74496]: pgmap v2342: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 156 op/s
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.291 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.291 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.292 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.292 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.292 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.293 247708 DEBUG nova.virt.libvirt.driver [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.297 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.300 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.328 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:54 compute-0 strange_roentgen[331766]: {
Jan 31 08:13:54 compute-0 strange_roentgen[331766]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:13:54 compute-0 strange_roentgen[331766]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:13:54 compute-0 strange_roentgen[331766]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:13:54 compute-0 strange_roentgen[331766]:         "osd_id": 0,
Jan 31 08:13:54 compute-0 strange_roentgen[331766]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:13:54 compute-0 strange_roentgen[331766]:         "type": "bluestore"
Jan 31 08:13:54 compute-0 strange_roentgen[331766]:     }
Jan 31 08:13:54 compute-0 strange_roentgen[331766]: }
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.357 247708 DEBUG nova.network.neutron [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Updating instance_info_cache with network_info: [{"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:13:54 compute-0 systemd[1]: libpod-872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8.scope: Deactivated successfully.
Jan 31 08:13:54 compute-0 podman[331886]: 2026-01-31 08:13:54.370279742 +0000 UTC m=+0.466374105 container create c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:13:54 compute-0 podman[331744]: 2026-01-31 08:13:54.373310296 +0000 UTC m=+2.057403162 container died 872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:13:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:54.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.406 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:13:54 compute-0 systemd[1]: Started libpod-conmon-c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa.scope.
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.498 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b04be1528438f7748df0625d441966cd7ecbe336fbbadaae79878a476f0e05b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-20fc1f31bf892680b3e43061fdaedbb7ecd9a1c453e9e67555822f95ebc64314-merged.mount: Deactivated successfully.
Jan 31 08:13:54 compute-0 podman[331744]: 2026-01-31 08:13:54.571779572 +0000 UTC m=+2.255872418 container remove 872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.577 247708 DEBUG oslo_concurrency.lockutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Releasing lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:13:54 compute-0 systemd[1]: libpod-conmon-872b80ea669ff796d359073a69ff9d1557a2ee8332e528c1c344990667fabdf8.scope: Deactivated successfully.
Jan 31 08:13:54 compute-0 podman[331886]: 2026-01-31 08:13:54.584636925 +0000 UTC m=+0.680731248 container init c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:13:54 compute-0 podman[331886]: 2026-01-31 08:13:54.589954445 +0000 UTC m=+0.686048768 container start c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:13:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:54.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:54 compute-0 sudo[331596]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:13:54 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[331928]: [NOTICE]   (331934) : New worker (331942) forked
Jan 31 08:13:54 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[331928]: [NOTICE]   (331934) : Loading success.
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.644 247708 INFO nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Took 17.17 seconds to spawn the instance on the hypervisor.
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.645 247708 DEBUG nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:13:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:13:54 compute-0 podman[331932]: 2026-01-31 08:13:54.67842321 +0000 UTC m=+0.067055225 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:13:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 568ba4c5-a05b-4eb6-81da-84c3cb825537 does not exist
Jan 31 08:13:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 20400520-bb4f-41cf-b32e-25612abfcf66 does not exist
Jan 31 08:13:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev aceb3686-e79b-4128-a70a-cdabbcdfa5d9 does not exist
Jan 31 08:13:54 compute-0 sudo[331970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:13:54 compute-0 sudo[331970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:54 compute-0 sudo[331970]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.794 247708 INFO nova.compute.manager [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Took 20.45 seconds to build instance.
Jan 31 08:13:54 compute-0 sudo[331995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:13:54 compute-0 sudo[331995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:54 compute-0 sudo[331995]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:54 compute-0 nova_compute[247704]: 2026-01-31 08:13:54.892 247708 DEBUG oslo_concurrency.lockutils [None req-8b7bddd4-42dc-4870-a663-57b2d94eff53 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.1 MiB/s wr, 149 op/s
Jan 31 08:13:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:13:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1459224970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1232812821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:13:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:13:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:56.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:56 compute-0 nova_compute[247704]: 2026-01-31 08:13:56.392 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:13:56 compute-0 nova_compute[247704]: 2026-01-31 08:13:56.509 247708 DEBUG nova.compute.manager [req-bad69a05-2d8b-4bae-ae0e-288e1195459e req-4adde9b7-4447-445b-b766-129cffb86888 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received event network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:13:56 compute-0 nova_compute[247704]: 2026-01-31 08:13:56.510 247708 DEBUG oslo_concurrency.lockutils [req-bad69a05-2d8b-4bae-ae0e-288e1195459e req-4adde9b7-4447-445b-b766-129cffb86888 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:56 compute-0 nova_compute[247704]: 2026-01-31 08:13:56.511 247708 DEBUG oslo_concurrency.lockutils [req-bad69a05-2d8b-4bae-ae0e-288e1195459e req-4adde9b7-4447-445b-b766-129cffb86888 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:56 compute-0 nova_compute[247704]: 2026-01-31 08:13:56.512 247708 DEBUG oslo_concurrency.lockutils [req-bad69a05-2d8b-4bae-ae0e-288e1195459e req-4adde9b7-4447-445b-b766-129cffb86888 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:56 compute-0 nova_compute[247704]: 2026-01-31 08:13:56.512 247708 DEBUG nova.compute.manager [req-bad69a05-2d8b-4bae-ae0e-288e1195459e req-4adde9b7-4447-445b-b766-129cffb86888 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] No waiting events found dispatching network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:13:56 compute-0 nova_compute[247704]: 2026-01-31 08:13:56.513 247708 WARNING nova.compute.manager [req-bad69a05-2d8b-4bae-ae0e-288e1195459e req-4adde9b7-4447-445b-b766-129cffb86888 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received unexpected event network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 for instance with vm_state active and task_state None.
Jan 31 08:13:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:57 compute-0 ceph-mon[74496]: pgmap v2343: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.1 MiB/s wr, 149 op/s
Jan 31 08:13:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Jan 31 08:13:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:58.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:58 compute-0 ceph-mon[74496]: pgmap v2344: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Jan 31 08:13:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:13:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:13:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:58.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.157 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "905c4dc1-7ad5-4442-ba06-76af73f6742c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.159 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.161 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.161 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.161 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.164 247708 INFO nova.compute.manager [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Terminating instance
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.166 247708 DEBUG nova.compute.manager [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.330 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.499 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Jan 31 08:13:59 compute-0 kernel: tap6fd77fc5-fa (unregistering): left promiscuous mode
Jan 31 08:13:59 compute-0 NetworkManager[49108]: <info>  [1769847239.9327] device (tap6fd77fc5-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:13:59 compute-0 ovn_controller[149457]: 2026-01-31T08:13:59Z|00515|binding|INFO|Releasing lport 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 from this chassis (sb_readonly=0)
Jan 31 08:13:59 compute-0 ovn_controller[149457]: 2026-01-31T08:13:59Z|00516|binding|INFO|Setting lport 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 down in Southbound
Jan 31 08:13:59 compute-0 ovn_controller[149457]: 2026-01-31T08:13:59Z|00517|binding|INFO|Removing iface tap6fd77fc5-fa ovn-installed in OVS
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:59 compute-0 nova_compute[247704]: 2026-01-31 08:13:59.952 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:13:59 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000083.scope: Deactivated successfully.
Jan 31 08:13:59 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000083.scope: Consumed 5.698s CPU time.
Jan 31 08:13:59 compute-0 systemd-machined[214448]: Machine qemu-53-instance-00000083 terminated.
Jan 31 08:14:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:00.001 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:cd:65 10.100.0.8'], port_security=['fa:16:3e:59:cd:65 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '905c4dc1-7ad5-4442-ba06-76af73f6742c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6fd77fc5-facb-4b81-bbc5-d38e65a67c12) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:00.002 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6fd77fc5-facb-4b81-bbc5-d38e65a67c12 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 unbound from our chassis
Jan 31 08:14:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:00.003 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b88251fc-7610-460a-ba55-2ed186c6f696, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:14:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:00.005 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[96e25afe-0fec-49cf-835b-14e08db221f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:00.005 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace which is not needed anymore
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.212 247708 INFO nova.virt.libvirt.driver [-] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Instance destroyed successfully.
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.212 247708 DEBUG nova.objects.instance [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'resources' on Instance uuid 905c4dc1-7ad5-4442-ba06-76af73f6742c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.256 247708 DEBUG nova.virt.libvirt.vif [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-718061765',display_name='tempest-ServersTestJSON-server-718061765',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-718061765',id=131,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-0s68bku0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:13:56Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=905c4dc1-7ad5-4442-ba06-76af73f6742c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.258 247708 DEBUG nova.network.os_vif_util [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "address": "fa:16:3e:59:cd:65", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fd77fc5-fa", "ovs_interfaceid": "6fd77fc5-facb-4b81-bbc5-d38e65a67c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.258 247708 DEBUG nova.network.os_vif_util [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:cd:65,bridge_name='br-int',has_traffic_filtering=True,id=6fd77fc5-facb-4b81-bbc5-d38e65a67c12,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fd77fc5-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.259 247708 DEBUG os_vif [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:cd:65,bridge_name='br-int',has_traffic_filtering=True,id=6fd77fc5-facb-4b81-bbc5-d38e65a67c12,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fd77fc5-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.260 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.260 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fd77fc5-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.263 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:00 compute-0 nova_compute[247704]: 2026-01-31 08:14:00.265 247708 INFO os_vif [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:cd:65,bridge_name='br-int',has_traffic_filtering=True,id=6fd77fc5-facb-4b81-bbc5-d38e65a67c12,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fd77fc5-fa')
Jan 31 08:14:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:00.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:00 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[331928]: [NOTICE]   (331934) : haproxy version is 2.8.14-c23fe91
Jan 31 08:14:00 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[331928]: [NOTICE]   (331934) : path to executable is /usr/sbin/haproxy
Jan 31 08:14:00 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[331928]: [WARNING]  (331934) : Exiting Master process...
Jan 31 08:14:00 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[331928]: [ALERT]    (331934) : Current worker (331942) exited with code 143 (Terminated)
Jan 31 08:14:00 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[331928]: [WARNING]  (331934) : All workers exited. Exiting... (0)
Jan 31 08:14:00 compute-0 systemd[1]: libpod-c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa.scope: Deactivated successfully.
Jan 31 08:14:00 compute-0 podman[332047]: 2026-01-31 08:14:00.516506784 +0000 UTC m=+0.416292955 container died c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:14:00 compute-0 ceph-mon[74496]: pgmap v2345: 305 pgs: 305 active+clean; 847 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Jan 31 08:14:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:14:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:00.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:14:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa-userdata-shm.mount: Deactivated successfully.
Jan 31 08:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b04be1528438f7748df0625d441966cd7ecbe336fbbadaae79878a476f0e05b0-merged.mount: Deactivated successfully.
Jan 31 08:14:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 848 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 526 KiB/s wr, 216 op/s
Jan 31 08:14:01 compute-0 podman[332047]: 2026-01-31 08:14:01.803318038 +0000 UTC m=+1.703104159 container cleanup c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:14:01 compute-0 systemd[1]: libpod-conmon-c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa.scope: Deactivated successfully.
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.355 247708 DEBUG nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received event network-vif-unplugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.356 247708 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.356 247708 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.357 247708 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.357 247708 DEBUG nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] No waiting events found dispatching network-vif-unplugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.358 247708 DEBUG nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received event network-vif-unplugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.358 247708 DEBUG nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received event network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.358 247708 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.358 247708 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.358 247708 DEBUG oslo_concurrency.lockutils [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.359 247708 DEBUG nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] No waiting events found dispatching network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.359 247708 WARNING nova.compute.manager [req-5ac9a5fc-9335-470b-bea0-527bab6a409b req-0cf4e03f-5cab-41f5-acb1-c884ef6273c6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received unexpected event network-vif-plugged-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 for instance with vm_state active and task_state deleting.
Jan 31 08:14:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:02.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:02 compute-0 ceph-mon[74496]: pgmap v2346: 305 pgs: 305 active+clean; 848 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 526 KiB/s wr, 216 op/s
Jan 31 08:14:02 compute-0 podman[332110]: 2026-01-31 08:14:02.549822277 +0000 UTC m=+0.729154958 container remove c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.556 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fb19bd2d-6b31-44d3-be16-858afc4c161b]: (4, ('Sat Jan 31 08:14:00 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa)\nc3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa\nSat Jan 31 08:14:01 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (c3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa)\nc3a75a49bc72345489a39b4b51e634ddfb269e9a0892c903f0fefb3d77756cfa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.559 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff889f2-1297-4083-a388-108887ed6583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.560 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:02 compute-0 kernel: tapb88251fc-70: left promiscuous mode
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.564 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:02 compute-0 nova_compute[247704]: 2026-01-31 08:14:02.571 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.574 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf45c68-267c-49b5-9895-5c1a7122fdf3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.587 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[61638fc9-a968-43c3-912e-6384c85ddeac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.588 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f13f8f20-35a9-4cdb-a5a8-ccb3b3a78d0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.601 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e92550-35ef-4be0-81cb-01a75c9fd15b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747565, 'reachable_time': 42137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332125, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:02 compute-0 systemd[1]: run-netns-ovnmeta\x2db88251fc\x2d7610\x2d460a\x2dba55\x2d2ed186c6f696.mount: Deactivated successfully.
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.605 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:14:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:02.605 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4d5029-b809-4369-a9b2-da96c6987b70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 31 08:14:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.1 MiB/s wr, 196 op/s
Jan 31 08:14:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 31 08:14:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:04.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 31 08:14:04 compute-0 nova_compute[247704]: 2026-01-31 08:14:04.501 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:04.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:04 compute-0 ceph-mon[74496]: pgmap v2347: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.1 MiB/s wr, 196 op/s
Jan 31 08:14:05 compute-0 sudo[332127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:05 compute-0 sudo[332127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:05 compute-0 sudo[332127]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:05 compute-0 sudo[332152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:05 compute-0 sudo[332152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:05 compute-0 sudo[332152]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:05 compute-0 nova_compute[247704]: 2026-01-31 08:14:05.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.4 MiB/s wr, 290 op/s
Jan 31 08:14:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:14:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:06.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:14:06 compute-0 ceph-mon[74496]: osdmap e294: 3 total, 3 up, 3 in
Jan 31 08:14:06 compute-0 nova_compute[247704]: 2026-01-31 08:14:06.498 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:14:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 846 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 273 op/s
Jan 31 08:14:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:08.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:08.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:08 compute-0 podman[332180]: 2026-01-31 08:14:08.890376413 +0000 UTC m=+0.060382343 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:14:09 compute-0 ceph-mon[74496]: pgmap v2349: 305 pgs: 305 active+clean; 867 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.4 MiB/s wr, 290 op/s
Jan 31 08:14:09 compute-0 nova_compute[247704]: 2026-01-31 08:14:09.505 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 827 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 214 op/s
Jan 31 08:14:10 compute-0 nova_compute[247704]: 2026-01-31 08:14:10.266 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:10.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:10 compute-0 ceph-mon[74496]: pgmap v2350: 305 pgs: 305 active+clean; 846 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.5 MiB/s wr, 273 op/s
Jan 31 08:14:10 compute-0 ceph-mon[74496]: pgmap v2351: 305 pgs: 305 active+clean; 827 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 214 op/s
Jan 31 08:14:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:10.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:11.186 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:11.186 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:11.187 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 31 08:14:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 834 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Jan 31 08:14:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 31 08:14:12 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 31 08:14:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:14:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:14:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:12.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:13 compute-0 ceph-mon[74496]: pgmap v2352: 305 pgs: 305 active+clean; 834 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Jan 31 08:14:13 compute-0 ceph-mon[74496]: osdmap e295: 3 total, 3 up, 3 in
Jan 31 08:14:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 31 08:14:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 31 08:14:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:14.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:14 compute-0 nova_compute[247704]: 2026-01-31 08:14:14.507 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:14.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:14 compute-0 ceph-mon[74496]: pgmap v2354: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 31 08:14:15 compute-0 nova_compute[247704]: 2026-01-31 08:14:15.210 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847240.2095733, 905c4dc1-7ad5-4442-ba06-76af73f6742c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:15 compute-0 nova_compute[247704]: 2026-01-31 08:14:15.211 247708 INFO nova.compute.manager [-] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] VM Stopped (Lifecycle Event)
Jan 31 08:14:15 compute-0 nova_compute[247704]: 2026-01-31 08:14:15.309 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 31 08:14:15 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 31 08:14:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 874 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 153 op/s
Jan 31 08:14:15 compute-0 nova_compute[247704]: 2026-01-31 08:14:15.922 247708 DEBUG nova.compute.manager [None req-1b354c1e-5de2-4eaf-b703-f1d369fc51c0 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:15 compute-0 nova_compute[247704]: 2026-01-31 08:14:15.925 247708 DEBUG nova.compute.manager [None req-1b354c1e-5de2-4eaf-b703-f1d369fc51c0 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:15 compute-0 nova_compute[247704]: 2026-01-31 08:14:15.973 247708 INFO nova.compute.manager [None req-1b354c1e-5de2-4eaf-b703-f1d369fc51c0 - - - - - -] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 31 08:14:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:14:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:16.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:14:16 compute-0 nova_compute[247704]: 2026-01-31 08:14:16.543 247708 INFO nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance shutdown successfully after 20 seconds.
Jan 31 08:14:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:16.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:17 compute-0 ceph-mon[74496]: osdmap e296: 3 total, 3 up, 3 in
Jan 31 08:14:17 compute-0 ceph-mon[74496]: pgmap v2356: 305 pgs: 305 active+clean; 874 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 153 op/s
Jan 31 08:14:17 compute-0 kernel: tap77c06224-c2 (unregistering): left promiscuous mode
Jan 31 08:14:17 compute-0 NetworkManager[49108]: <info>  [1769847257.4856] device (tap77c06224-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:14:17 compute-0 ovn_controller[149457]: 2026-01-31T08:14:17Z|00518|binding|INFO|Releasing lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 from this chassis (sb_readonly=0)
Jan 31 08:14:17 compute-0 ovn_controller[149457]: 2026-01-31T08:14:17Z|00519|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 down in Southbound
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.492 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:17 compute-0 ovn_controller[149457]: 2026-01-31T08:14:17Z|00520|binding|INFO|Removing iface tap77c06224-c2 ovn-installed in OVS
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.493 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.499 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:17.515 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:5f:e2 10.100.0.14'], port_security=['fa:16:3e:51:5f:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '827a8d86-f250-441f-911a-98626a322ef7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=77c06224-c2b3-45e0-90f5-afd0c8400ff7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:17.517 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 77c06224-c2b3-45e0-90f5-afd0c8400ff7 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c unbound from our chassis
Jan 31 08:14:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:17.519 160028 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:14:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:17.522 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc2a45d-7514-494b-a4f7-06325726d39a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:17 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000082.scope: Deactivated successfully.
Jan 31 08:14:17 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000082.scope: Consumed 15.104s CPU time.
Jan 31 08:14:17 compute-0 systemd-machined[214448]: Machine qemu-52-instance-00000082 terminated.
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.579 247708 INFO nova.virt.libvirt.driver [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance destroyed successfully.
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.580 247708 DEBUG nova.objects.instance [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'numa_topology' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.624 247708 INFO nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Attempting rescue
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.625 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.629 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.629 247708 INFO nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Creating image(s)
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.663 247708 DEBUG nova.storage.rbd_utils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.667 247708 DEBUG nova.objects.instance [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'trusted_certs' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.725 247708 DEBUG nova.storage.rbd_utils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.2 MiB/s wr, 218 op/s
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.773 247708 DEBUG nova.storage.rbd_utils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.777 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.838 247708 DEBUG nova.compute.manager [req-0e06be26-7a6b-4f56-b87a-e02e25b063cb req-400edaf9-cabb-4b75-ba27-fc16b4f5abd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.839 247708 DEBUG oslo_concurrency.lockutils [req-0e06be26-7a6b-4f56-b87a-e02e25b063cb req-400edaf9-cabb-4b75-ba27-fc16b4f5abd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.839 247708 DEBUG oslo_concurrency.lockutils [req-0e06be26-7a6b-4f56-b87a-e02e25b063cb req-400edaf9-cabb-4b75-ba27-fc16b4f5abd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.840 247708 DEBUG oslo_concurrency.lockutils [req-0e06be26-7a6b-4f56-b87a-e02e25b063cb req-400edaf9-cabb-4b75-ba27-fc16b4f5abd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.840 247708 DEBUG nova.compute.manager [req-0e06be26-7a6b-4f56-b87a-e02e25b063cb req-400edaf9-cabb-4b75-ba27-fc16b4f5abd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.840 247708 WARNING nova.compute.manager [req-0e06be26-7a6b-4f56-b87a-e02e25b063cb req-400edaf9-cabb-4b75-ba27-fc16b4f5abd0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state active and task_state rescuing.
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.875 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.876 247708 DEBUG oslo_concurrency.lockutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.877 247708 DEBUG oslo_concurrency.lockutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.877 247708 DEBUG oslo_concurrency.lockutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.920 247708 DEBUG nova.storage.rbd_utils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.924 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 827a8d86-f250-441f-911a-98626a322ef7_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.967 247708 INFO nova.virt.libvirt.driver [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Deleting instance files /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c_del
Jan 31 08:14:17 compute-0 nova_compute[247704]: 2026-01-31 08:14:17.969 247708 INFO nova.virt.libvirt.driver [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Deletion of /var/lib/nova/instances/905c4dc1-7ad5-4442-ba06-76af73f6742c_del complete
Jan 31 08:14:18 compute-0 nova_compute[247704]: 2026-01-31 08:14:18.049 247708 INFO nova.compute.manager [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Took 18.88 seconds to destroy the instance on the hypervisor.
Jan 31 08:14:18 compute-0 nova_compute[247704]: 2026-01-31 08:14:18.049 247708 DEBUG oslo.service.loopingcall [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:14:18 compute-0 nova_compute[247704]: 2026-01-31 08:14:18.050 247708 DEBUG nova.compute.manager [-] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:14:18 compute-0 nova_compute[247704]: 2026-01-31 08:14:18.050 247708 DEBUG nova.network.neutron [-] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:14:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 31 08:14:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:18.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 31 08:14:18 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 31 08:14:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:18.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:18 compute-0 ceph-mon[74496]: pgmap v2357: 305 pgs: 305 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.2 MiB/s wr, 218 op/s
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.334 247708 DEBUG nova.network.neutron [-] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.360 247708 INFO nova.compute.manager [-] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Took 1.31 seconds to deallocate network for instance.
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.450 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.451 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.474 247708 DEBUG nova.compute.manager [req-85a8ff87-7549-4f68-8826-57b8f92613a3 req-db12edf5-fae9-41b0-9130-6541e9e1ddaa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 905c4dc1-7ad5-4442-ba06-76af73f6742c] Received event network-vif-deleted-6fd77fc5-facb-4b81-bbc5-d38e65a67c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.510 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.565 247708 DEBUG oslo_concurrency.processutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 31 08:14:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 31 08:14:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 957 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 8.5 MiB/s wr, 264 op/s
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.776 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 827a8d86-f250-441f-911a-98626a322ef7_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.852s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.777 247708 DEBUG nova.objects.instance [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'migration_context' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.803 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.804 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Start _get_guest_xml network_info=[{"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2001325626-network", "vif_mac": "fa:16:3e:51:5f:e2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7c23949f-bba8-4466-bb79-caf568852d38', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.805 247708 DEBUG nova.objects.instance [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'resources' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.845 247708 WARNING nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.852 247708 DEBUG nova.virt.libvirt.host [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.853 247708 DEBUG nova.virt.libvirt.host [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.859 247708 DEBUG nova.virt.libvirt.host [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.859 247708 DEBUG nova.virt.libvirt.host [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.861 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.862 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.862 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.863 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.863 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.863 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.864 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.864 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.864 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.865 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.865 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.866 247708 DEBUG nova.virt.hardware [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.866 247708 DEBUG nova.objects.instance [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'vcpu_model' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:19 compute-0 ceph-mon[74496]: osdmap e297: 3 total, 3 up, 3 in
Jan 31 08:14:19 compute-0 ceph-mon[74496]: osdmap e298: 3 total, 3 up, 3 in
Jan 31 08:14:19 compute-0 nova_compute[247704]: 2026-01-31 08:14:19.905 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2897049485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.076 247708 DEBUG oslo_concurrency.processutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.082 247708 DEBUG nova.compute.provider_tree [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.115 247708 DEBUG nova.scheduler.client.report [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:14:20
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', 'default.rgw.control', 'images']
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.167 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.232 247708 INFO nova.scheduler.client.report [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Deleted allocations for instance 905c4dc1-7ad5-4442-ba06-76af73f6742c
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.319 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:14:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080364343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.379 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.380 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:20.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.426 247708 DEBUG oslo_concurrency.lockutils [None req-ea9a6308-b53d-46bc-8e4e-9267d17415c8 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "905c4dc1-7ad5-4442-ba06-76af73f6742c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 21.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.452 247708 DEBUG nova.compute.manager [req-6aeae17c-fdfd-4620-bee3-83a281513f4a req-6b8ea12e-d474-448f-8752-6a0fd88268e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.453 247708 DEBUG oslo_concurrency.lockutils [req-6aeae17c-fdfd-4620-bee3-83a281513f4a req-6b8ea12e-d474-448f-8752-6a0fd88268e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.453 247708 DEBUG oslo_concurrency.lockutils [req-6aeae17c-fdfd-4620-bee3-83a281513f4a req-6b8ea12e-d474-448f-8752-6a0fd88268e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.453 247708 DEBUG oslo_concurrency.lockutils [req-6aeae17c-fdfd-4620-bee3-83a281513f4a req-6b8ea12e-d474-448f-8752-6a0fd88268e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.454 247708 DEBUG nova.compute.manager [req-6aeae17c-fdfd-4620-bee3-83a281513f4a req-6b8ea12e-d474-448f-8752-6a0fd88268e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.454 247708 WARNING nova.compute.manager [req-6aeae17c-fdfd-4620-bee3-83a281513f4a req-6b8ea12e-d474-448f-8752-6a0fd88268e9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state active and task_state rescuing.
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:14:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:14:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:14:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234446353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.825 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:20 compute-0 nova_compute[247704]: 2026-01-31 08:14:20.826 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:21 compute-0 ceph-mon[74496]: pgmap v2360: 305 pgs: 305 active+clean; 957 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 8.5 MiB/s wr, 264 op/s
Jan 31 08:14:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2897049485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1080364343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2234446353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:14:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159375368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:21 compute-0 nova_compute[247704]: 2026-01-31 08:14:21.300 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:21 compute-0 nova_compute[247704]: 2026-01-31 08:14:21.303 247708 DEBUG nova.virt.libvirt.vif [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1146155717',display_name='tempest-ServerRescueTestJSON-server-1146155717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1146155717',id=130,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cbd0f41e455b4b3b9a8edf35ef0b85ed',ramdisk_id='',reservation_id='r-dm7efqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1911109090',owner_user_name='tempest-ServerRescueTestJSON-1911109090-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:13:47Z,user_data=None,user_id='6ab9e181016f4d5a899c91dae3aa26e0',uuid=827a8d86-f250-441f-911a-98626a322ef7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2001325626-network", "vif_mac": "fa:16:3e:51:5f:e2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:14:21 compute-0 nova_compute[247704]: 2026-01-31 08:14:21.303 247708 DEBUG nova.network.os_vif_util [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converting VIF {"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-2001325626-network", "vif_mac": "fa:16:3e:51:5f:e2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:14:21 compute-0 nova_compute[247704]: 2026-01-31 08:14:21.305 247708 DEBUG nova.network.os_vif_util [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:5f:e2,bridge_name='br-int',has_traffic_filtering=True,id=77c06224-c2b3-45e0-90f5-afd0c8400ff7,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77c06224-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:14:21 compute-0 nova_compute[247704]: 2026-01-31 08:14:21.307 247708 DEBUG nova.objects.instance [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'pci_devices' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 8.8 MiB/s wr, 206 op/s
Jan 31 08:14:21 compute-0 nova_compute[247704]: 2026-01-31 08:14:21.767 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <uuid>827a8d86-f250-441f-911a-98626a322ef7</uuid>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <name>instance-00000082</name>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerRescueTestJSON-server-1146155717</nova:name>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:14:19</nova:creationTime>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <nova:user uuid="6ab9e181016f4d5a899c91dae3aa26e0">tempest-ServerRescueTestJSON-1911109090-project-member</nova:user>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <nova:project uuid="cbd0f41e455b4b3b9a8edf35ef0b85ed">tempest-ServerRescueTestJSON-1911109090</nova:project>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <nova:port uuid="77c06224-c2b3-45e0-90f5-afd0c8400ff7">
Jan 31 08:14:21 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <system>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <entry name="serial">827a8d86-f250-441f-911a-98626a322ef7</entry>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <entry name="uuid">827a8d86-f250-441f-911a-98626a322ef7</entry>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </system>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <os>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   </os>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <features>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   </features>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/827a8d86-f250-441f-911a-98626a322ef7_disk.rescue">
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </source>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/827a8d86-f250-441f-911a-98626a322ef7_disk">
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </source>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/827a8d86-f250-441f-911a-98626a322ef7_disk.config.rescue">
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </source>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:14:21 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:51:5f:e2"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <target dev="tap77c06224-c2"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/console.log" append="off"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <video>
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </video>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:14:21 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:14:21 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:14:21 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:14:21 compute-0 nova_compute[247704]: </domain>
Jan 31 08:14:21 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:14:21 compute-0 nova_compute[247704]: 2026-01-31 08:14:21.778 247708 INFO nova.virt.libvirt.driver [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance destroyed successfully.
Jan 31 08:14:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 31 08:14:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:22.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 31 08:14:22 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 31 08:14:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2159375368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:23 compute-0 ceph-mon[74496]: pgmap v2361: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 8.8 MiB/s wr, 206 op/s
Jan 31 08:14:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 994 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 114 op/s
Jan 31 08:14:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:24 compute-0 nova_compute[247704]: 2026-01-31 08:14:24.513 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:24 compute-0 nova_compute[247704]: 2026-01-31 08:14:24.524 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:14:24 compute-0 nova_compute[247704]: 2026-01-31 08:14:24.524 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:14:24 compute-0 nova_compute[247704]: 2026-01-31 08:14:24.525 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:14:24 compute-0 nova_compute[247704]: 2026-01-31 08:14:24.525 247708 DEBUG nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] No VIF found with MAC fa:16:3e:51:5f:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:14:24 compute-0 nova_compute[247704]: 2026-01-31 08:14:24.526 247708 INFO nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Using config drive
Jan 31 08:14:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:24.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:24 compute-0 podman[332416]: 2026-01-31 08:14:24.968460576 +0000 UTC m=+0.136699242 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:14:25 compute-0 sudo[332443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:25 compute-0 sudo[332443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:25 compute-0 sudo[332443]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:25 compute-0 sudo[332468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:25 compute-0 sudo[332468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:25 compute-0 sudo[332468]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:25 compute-0 nova_compute[247704]: 2026-01-31 08:14:25.527 247708 DEBUG nova.storage.rbd_utils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:25 compute-0 nova_compute[247704]: 2026-01-31 08:14:25.534 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:25 compute-0 nova_compute[247704]: 2026-01-31 08:14:25.553 247708 DEBUG nova.objects.instance [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'ec2_ids' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:25 compute-0 nova_compute[247704]: 2026-01-31 08:14:25.587 247708 DEBUG nova.objects.instance [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'keypairs' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 109 op/s
Jan 31 08:14:25 compute-0 ceph-mon[74496]: osdmap e299: 3 total, 3 up, 3 in
Jan 31 08:14:25 compute-0 ceph-mon[74496]: pgmap v2363: 305 pgs: 305 active+clean; 994 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 114 op/s
Jan 31 08:14:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 31 08:14:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:26.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 31 08:14:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 31 08:14:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:26.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:26 compute-0 nova_compute[247704]: 2026-01-31 08:14:26.920 247708 INFO nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Creating config drive at /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config.rescue
Jan 31 08:14:26 compute-0 nova_compute[247704]: 2026-01-31 08:14:26.927 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfjpaj1x4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.054 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfjpaj1x4" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.084 247708 DEBUG nova.storage.rbd_utils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] rbd image 827a8d86-f250-441f-911a-98626a322ef7_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.088 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config.rescue 827a8d86-f250-441f-911a-98626a322ef7_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.115 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.116 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:27 compute-0 ceph-mon[74496]: pgmap v2364: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 109 op/s
Jan 31 08:14:27 compute-0 ceph-mon[74496]: osdmap e300: 3 total, 3 up, 3 in
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.217 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.404 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.405 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.416 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.416 247708 INFO nova.compute.claims [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:27 compute-0 nova_compute[247704]: 2026-01-31 08:14:27.637 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 2.8 MiB/s wr, 77 op/s
Jan 31 08:14:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448647849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.074 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.081 247708 DEBUG nova.compute.provider_tree [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.103 247708 DEBUG nova.scheduler.client.report [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.149 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.150 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.215 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.216 247708 DEBUG nova.network.neutron [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.259 247708 INFO nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.338 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:14:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:28.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:28 compute-0 ceph-mon[74496]: pgmap v2366: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 2.8 MiB/s wr, 77 op/s
Jan 31 08:14:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1448647849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.500 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.502 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.502 247708 INFO nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Creating image(s)
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.535 247708 DEBUG nova.storage.rbd_utils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 8db08219-6945-4f16-a138-8f5ddff67421_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.571 247708 DEBUG nova.storage.rbd_utils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 8db08219-6945-4f16-a138-8f5ddff67421_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.604 247708 DEBUG nova.storage.rbd_utils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 8db08219-6945-4f16-a138-8f5ddff67421_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.608 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.657 247708 DEBUG nova.policy [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5366d122b359489fb9d2bda8d19611a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:14:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:28.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.661 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.661 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.662 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.662 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.686 247708 DEBUG nova.storage.rbd_utils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 8db08219-6945-4f16-a138-8f5ddff67421_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:28 compute-0 nova_compute[247704]: 2026-01-31 08:14:28.690 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8db08219-6945-4f16-a138-8f5ddff67421_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:29 compute-0 nova_compute[247704]: 2026-01-31 08:14:29.515 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:29 compute-0 nova_compute[247704]: 2026-01-31 08:14:29.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:29 compute-0 nova_compute[247704]: 2026-01-31 08:14:29.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:29 compute-0 nova_compute[247704]: 2026-01-31 08:14:29.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:14:29 compute-0 nova_compute[247704]: 2026-01-31 08:14:29.567 247708 DEBUG oslo_concurrency.processutils [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config.rescue 827a8d86-f250-441f-911a-98626a322ef7_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:29 compute-0 nova_compute[247704]: 2026-01-31 08:14:29.567 247708 INFO nova.virt.libvirt.driver [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Deleting local config drive /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7/disk.config.rescue because it was imported into RBD.
Jan 31 08:14:29 compute-0 kernel: tap77c06224-c2: entered promiscuous mode
Jan 31 08:14:29 compute-0 NetworkManager[49108]: <info>  [1769847269.6155] manager: (tap77c06224-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Jan 31 08:14:29 compute-0 ovn_controller[149457]: 2026-01-31T08:14:29Z|00521|binding|INFO|Claiming lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 for this chassis.
Jan 31 08:14:29 compute-0 ovn_controller[149457]: 2026-01-31T08:14:29Z|00522|binding|INFO|77c06224-c2b3-45e0-90f5-afd0c8400ff7: Claiming fa:16:3e:51:5f:e2 10.100.0.14
Jan 31 08:14:29 compute-0 nova_compute[247704]: 2026-01-31 08:14:29.618 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:29 compute-0 ovn_controller[149457]: 2026-01-31T08:14:29Z|00523|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 ovn-installed in OVS
Jan 31 08:14:29 compute-0 nova_compute[247704]: 2026-01-31 08:14:29.626 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:29 compute-0 systemd-udevd[332670]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:14:29 compute-0 NetworkManager[49108]: <info>  [1769847269.6551] device (tap77c06224-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:14:29 compute-0 NetworkManager[49108]: <info>  [1769847269.6563] device (tap77c06224-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:14:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:29.657 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:5f:e2 10.100.0.14'], port_security=['fa:16:3e:51:5f:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '827a8d86-f250-441f-911a-98626a322ef7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=77c06224-c2b3-45e0-90f5-afd0c8400ff7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:29.658 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 77c06224-c2b3-45e0-90f5-afd0c8400ff7 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c bound to our chassis
Jan 31 08:14:29 compute-0 ovn_controller[149457]: 2026-01-31T08:14:29Z|00524|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 up in Southbound
Jan 31 08:14:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:29.659 160028 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:14:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:29.660 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d055fe9a-ad57-4eb1-bf83-16be066699e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:29 compute-0 systemd-machined[214448]: New machine qemu-54-instance-00000082.
Jan 31 08:14:29 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000082.
Jan 31 08:14:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 1.3 MiB/s wr, 80 op/s
Jan 31 08:14:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:30.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:30 compute-0 nova_compute[247704]: 2026-01-31 08:14:30.535 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:30.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:30 compute-0 ceph-mon[74496]: pgmap v2367: 305 pgs: 305 active+clean; 1006 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 1.3 MiB/s wr, 80 op/s
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.008 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 8db08219-6945-4f16-a138-8f5ddff67421_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.112 247708 DEBUG nova.storage.rbd_utils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] resizing rbd image 8db08219-6945-4f16-a138-8f5ddff67421_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:14:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:31.172 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:31.173 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.218 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.225 247708 DEBUG nova.compute.manager [None req-9c21b3f6-389b-4270-96d4-c98cf3557657 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.226 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 827a8d86-f250-441f-911a-98626a322ef7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.226 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847271.138914, 827a8d86-f250-441f-911a-98626a322ef7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.227 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] VM Resumed (Lifecycle Event)
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.231 247708 DEBUG nova.compute.manager [req-a3c7ac2a-8c8c-4479-9f4f-54495dcb541b req-93b89a47-df74-4dc3-8092-9311f0c6d28d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.231 247708 DEBUG oslo_concurrency.lockutils [req-a3c7ac2a-8c8c-4479-9f4f-54495dcb541b req-93b89a47-df74-4dc3-8092-9311f0c6d28d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.232 247708 DEBUG oslo_concurrency.lockutils [req-a3c7ac2a-8c8c-4479-9f4f-54495dcb541b req-93b89a47-df74-4dc3-8092-9311f0c6d28d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.232 247708 DEBUG oslo_concurrency.lockutils [req-a3c7ac2a-8c8c-4479-9f4f-54495dcb541b req-93b89a47-df74-4dc3-8092-9311f0c6d28d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.232 247708 DEBUG nova.compute.manager [req-a3c7ac2a-8c8c-4479-9f4f-54495dcb541b req-93b89a47-df74-4dc3-8092-9311f0c6d28d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.232 247708 WARNING nova.compute.manager [req-a3c7ac2a-8c8c-4479-9f4f-54495dcb541b req-93b89a47-df74-4dc3-8092-9311f0c6d28d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state active and task_state rescuing.
Jan 31 08:14:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.352 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.356 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.555 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.556 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847271.139129, 827a8d86-f250-441f-911a-98626a322ef7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.557 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] VM Started (Lifecycle Event)
Jan 31 08:14:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.622 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.628 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.677 247708 DEBUG nova.network.neutron [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Successfully created port: c9fa121d-38bf-475f-924e-ce6f8d3af489 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:14:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 1017 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.4 MiB/s wr, 61 op/s
Jan 31 08:14:31 compute-0 nova_compute[247704]: 2026-01-31 08:14:31.961 247708 DEBUG nova.objects.instance [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'migration_context' on Instance uuid 8db08219-6945-4f16-a138-8f5ddff67421 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.010 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.011 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Ensure instance console log exists: /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.012 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.013 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.013 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:32.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.598 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.598 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.599 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.599 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:14:32 compute-0 nova_compute[247704]: 2026-01-31 08:14:32.599 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:32 compute-0 ceph-mon[74496]: osdmap e301: 3 total, 3 up, 3 in
Jan 31 08:14:32 compute-0 ceph-mon[74496]: pgmap v2369: 305 pgs: 305 active+clean; 1017 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.4 MiB/s wr, 61 op/s
Jan 31 08:14:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788472167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.051 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.140 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.141 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.146 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.146 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.146 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.185 247708 INFO nova.compute.manager [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Unrescuing
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.186 247708 DEBUG oslo_concurrency.lockutils [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.187 247708 DEBUG oslo_concurrency.lockutils [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquired lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.188 247708 DEBUG nova.network.neutron [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.325 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.326 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4085MB free_disk=20.550960540771484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.326 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.326 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.330 247708 DEBUG nova.compute.manager [req-bb920efc-c9f1-4c84-a98c-3bc1e93066a4 req-a463d240-fb49-4ac8-bfde-fed549028554 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.330 247708 DEBUG oslo_concurrency.lockutils [req-bb920efc-c9f1-4c84-a98c-3bc1e93066a4 req-a463d240-fb49-4ac8-bfde-fed549028554 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.330 247708 DEBUG oslo_concurrency.lockutils [req-bb920efc-c9f1-4c84-a98c-3bc1e93066a4 req-a463d240-fb49-4ac8-bfde-fed549028554 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.331 247708 DEBUG oslo_concurrency.lockutils [req-bb920efc-c9f1-4c84-a98c-3bc1e93066a4 req-a463d240-fb49-4ac8-bfde-fed549028554 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.331 247708 DEBUG nova.compute.manager [req-bb920efc-c9f1-4c84-a98c-3bc1e93066a4 req-a463d240-fb49-4ac8-bfde-fed549028554 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.331 247708 WARNING nova.compute.manager [req-bb920efc-c9f1-4c84-a98c-3bc1e93066a4 req-a463d240-fb49-4ac8-bfde-fed549028554 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.397 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 53a5c321-1278-4df4-9fb0-feb465508681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.397 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 827a8d86-f250-441f-911a-98626a322ef7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.397 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 8db08219-6945-4f16-a138-8f5ddff67421 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.397 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.398 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.516 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 1.1 MiB/s wr, 61 op/s
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.897 247708 DEBUG nova.network.neutron [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Successfully updated port: c9fa121d-38bf-475f-924e-ce6f8d3af489 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.916 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "refresh_cache-8db08219-6945-4f16-a138-8f5ddff67421" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.917 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquired lock "refresh_cache-8db08219-6945-4f16-a138-8f5ddff67421" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.917 247708 DEBUG nova.network.neutron [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:14:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940524916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:33 compute-0 nova_compute[247704]: 2026-01-31 08:14:33.996 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.002 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.020 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.052 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.052 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2788472167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:14:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1690 writes, 7537 keys, 1690 commit groups, 1.0 writes per commit group, ingest: 11.23 MB, 0.02 MB/s
                                           Interval WAL: 1690 writes, 1690 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     60.4      1.12              0.17        33    0.034       0      0       0.0       0.0
                                             L6      1/0   12.19 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.5     64.5     54.5      5.60              0.75        32    0.175    200K    17K       0.0       0.0
                                            Sum      1/0   12.19 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.5     53.7     55.5      6.72              0.92        65    0.103    200K    17K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8     42.5     43.2      1.74              0.20        12    0.145     48K   3075       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     64.5     54.5      5.60              0.75        32    0.175    200K    17K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     60.6      1.12              0.17        32    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.066, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.36 GB write, 0.09 MB/s write, 0.35 GB read, 0.09 MB/s read, 6.7 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 41.12 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000357 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2419,39.63 MB,13.0349%) FilterBlock(66,561.61 KB,0.18041%) IndexBlock(66,970.64 KB,0.311806%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:14:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:14:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:34.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.509 247708 DEBUG nova.network.neutron [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.520 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.874 247708 DEBUG nova.compute.manager [req-b43ea53c-5888-46de-af31-1725ec4089a4 req-06b53011-94ff-4619-9ea6-1202edfef607 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received event network-changed-c9fa121d-38bf-475f-924e-ce6f8d3af489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.874 247708 DEBUG nova.compute.manager [req-b43ea53c-5888-46de-af31-1725ec4089a4 req-06b53011-94ff-4619-9ea6-1202edfef607 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Refreshing instance network info cache due to event network-changed-c9fa121d-38bf-475f-924e-ce6f8d3af489. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:14:34 compute-0 nova_compute[247704]: 2026-01-31 08:14:34.874 247708 DEBUG oslo_concurrency.lockutils [req-b43ea53c-5888-46de-af31-1725ec4089a4 req-06b53011-94ff-4619-9ea6-1202edfef607 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-8db08219-6945-4f16-a138-8f5ddff67421" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.052 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.085 247708 DEBUG nova.network.neutron [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Updating instance_info_cache with network_info: [{"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.102 247708 DEBUG oslo_concurrency.lockutils [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Releasing lock "refresh_cache-827a8d86-f250-441f-911a-98626a322ef7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.104 247708 DEBUG nova.objects.instance [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'flavor' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:35 compute-0 ceph-mon[74496]: pgmap v2370: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 1.1 MiB/s wr, 61 op/s
Jan 31 08:14:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/940524916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.02109960246836032 of space, bias 1.0, pg target 6.329880740508097 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001603067420435511 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005858829738972641 of space, bias 1.0, pg target 1.7224959432579565 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017041224641727154 quantized to 16 (current 16)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00031952296203238413 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018106301181835102 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042603061604317886 quantized to 32 (current 32)
Jan 31 08:14:35 compute-0 kernel: tap77c06224-c2 (unregistering): left promiscuous mode
Jan 31 08:14:35 compute-0 NetworkManager[49108]: <info>  [1769847275.4842] device (tap77c06224-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.493 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:35 compute-0 ovn_controller[149457]: 2026-01-31T08:14:35Z|00525|binding|INFO|Releasing lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 from this chassis (sb_readonly=0)
Jan 31 08:14:35 compute-0 ovn_controller[149457]: 2026-01-31T08:14:35Z|00526|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 down in Southbound
Jan 31 08:14:35 compute-0 ovn_controller[149457]: 2026-01-31T08:14:35Z|00527|binding|INFO|Removing iface tap77c06224-c2 ovn-installed in OVS
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.496 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:35.500 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:5f:e2 10.100.0.14'], port_security=['fa:16:3e:51:5f:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '827a8d86-f250-441f-911a-98626a322ef7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=77c06224-c2b3-45e0-90f5-afd0c8400ff7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:35.501 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 77c06224-c2b3-45e0-90f5-afd0c8400ff7 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c unbound from our chassis
Jan 31 08:14:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:35.502 160028 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.503 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:35.503 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[de039768-0c71-43ec-9305-28b9da8748b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.540 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:35 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000082.scope: Deactivated successfully.
Jan 31 08:14:35 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000082.scope: Consumed 4.734s CPU time.
Jan 31 08:14:35 compute-0 systemd-machined[214448]: Machine qemu-54-instance-00000082 terminated.
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.584 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.591 247708 DEBUG nova.network.neutron [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Updating instance_info_cache with network_info: [{"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.619 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Releasing lock "refresh_cache-8db08219-6945-4f16-a138-8f5ddff67421" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.619 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Instance network_info: |[{"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.620 247708 DEBUG oslo_concurrency.lockutils [req-b43ea53c-5888-46de-af31-1725ec4089a4 req-06b53011-94ff-4619-9ea6-1202edfef607 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-8db08219-6945-4f16-a138-8f5ddff67421" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.620 247708 DEBUG nova.network.neutron [req-b43ea53c-5888-46de-af31-1725ec4089a4 req-06b53011-94ff-4619-9ea6-1202edfef607 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Refreshing network info cache for port c9fa121d-38bf-475f-924e-ce6f8d3af489 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.625 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Start _get_guest_xml network_info=[{"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.630 247708 WARNING nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.637 247708 DEBUG nova.virt.libvirt.host [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.638 247708 DEBUG nova.virt.libvirt.host [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.641 247708 DEBUG nova.virt.libvirt.host [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.642 247708 DEBUG nova.virt.libvirt.host [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.643 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.644 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.645 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.645 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.646 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.646 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.646 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.647 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.647 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.648 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.648 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.648 247708 DEBUG nova.virt.hardware [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.653 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.699 247708 DEBUG nova.compute.manager [req-faf40c30-f82d-4bbc-a959-e79ca00812a0 req-1195998d-707f-4396-be97-89c11a3a1ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.700 247708 DEBUG oslo_concurrency.lockutils [req-faf40c30-f82d-4bbc-a959-e79ca00812a0 req-1195998d-707f-4396-be97-89c11a3a1ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.701 247708 DEBUG oslo_concurrency.lockutils [req-faf40c30-f82d-4bbc-a959-e79ca00812a0 req-1195998d-707f-4396-be97-89c11a3a1ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.702 247708 DEBUG oslo_concurrency.lockutils [req-faf40c30-f82d-4bbc-a959-e79ca00812a0 req-1195998d-707f-4396-be97-89c11a3a1ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.702 247708 DEBUG nova.compute.manager [req-faf40c30-f82d-4bbc-a959-e79ca00812a0 req-1195998d-707f-4396-be97-89c11a3a1ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.703 247708 WARNING nova.compute.manager [req-faf40c30-f82d-4bbc-a959-e79ca00812a0 req-1195998d-707f-4396-be97-89c11a3a1ad1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:14:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 134 op/s
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.762 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.763 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.764 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.764 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53a5c321-1278-4df4-9fb0-feb465508681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.774 247708 INFO nova.virt.libvirt.driver [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance destroyed successfully.
Jan 31 08:14:35 compute-0 nova_compute[247704]: 2026-01-31 08:14:35.775 247708 DEBUG nova.objects.instance [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'numa_topology' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:14:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1553411770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.114 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.142 247708 DEBUG nova.storage.rbd_utils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 8db08219-6945-4f16-a138-8f5ddff67421_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.146 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:36 compute-0 kernel: tap77c06224-c2: entered promiscuous mode
Jan 31 08:14:36 compute-0 NetworkManager[49108]: <info>  [1769847276.2320] manager: (tap77c06224-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Jan 31 08:14:36 compute-0 systemd-udevd[332868]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:14:36 compute-0 NetworkManager[49108]: <info>  [1769847276.2448] device (tap77c06224-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:14:36 compute-0 NetworkManager[49108]: <info>  [1769847276.2466] device (tap77c06224-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.276 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:36 compute-0 ovn_controller[149457]: 2026-01-31T08:14:36Z|00528|binding|INFO|Claiming lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 for this chassis.
Jan 31 08:14:36 compute-0 ovn_controller[149457]: 2026-01-31T08:14:36Z|00529|binding|INFO|77c06224-c2b3-45e0-90f5-afd0c8400ff7: Claiming fa:16:3e:51:5f:e2 10.100.0.14
Jan 31 08:14:36 compute-0 ovn_controller[149457]: 2026-01-31T08:14:36Z|00530|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 ovn-installed in OVS
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.286 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:36 compute-0 systemd-machined[214448]: New machine qemu-55-instance-00000082.
Jan 31 08:14:36 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000082.
Jan 31 08:14:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:36 compute-0 ovn_controller[149457]: 2026-01-31T08:14:36Z|00531|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 up in Southbound
Jan 31 08:14:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:36.395 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:5f:e2 10.100.0.14'], port_security=['fa:16:3e:51:5f:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '827a8d86-f250-441f-911a-98626a322ef7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '7', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=77c06224-c2b3-45e0-90f5-afd0c8400ff7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:36.397 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 77c06224-c2b3-45e0-90f5-afd0c8400ff7 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c bound to our chassis
Jan 31 08:14:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:36.398 160028 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:14:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:36.399 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[54481113-ad91-406a-bc2f-4b20f683b6dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:36.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:14:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364045828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:36.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.686 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.687 247708 DEBUG nova.virt.libvirt.vif [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-902266142',display_name='tempest-ServersTestJSON-server-902266142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-902266142',id=134,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-xz9s0eu8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:28Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=8db08219-6945-4f16-a138-8f5ddff67421,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.688 247708 DEBUG nova.network.os_vif_util [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.688 247708 DEBUG nova.network.os_vif_util [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0a:83,bridge_name='br-int',has_traffic_filtering=True,id=c9fa121d-38bf-475f-924e-ce6f8d3af489,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9fa121d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.690 247708 DEBUG nova.objects.instance [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8db08219-6945-4f16-a138-8f5ddff67421 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:36 compute-0 ceph-mon[74496]: pgmap v2371: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 134 op/s
Jan 31 08:14:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4072180426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1553411770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.702 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <uuid>8db08219-6945-4f16-a138-8f5ddff67421</uuid>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <name>instance-00000086</name>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersTestJSON-server-902266142</nova:name>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:14:35</nova:creationTime>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <nova:user uuid="5366d122b359489fb9d2bda8d19611a6">tempest-ServersTestJSON-327201738-project-member</nova:user>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <nova:project uuid="4aa06cf35d8c468fb16884f19dc8ce71">tempest-ServersTestJSON-327201738</nova:project>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <nova:port uuid="c9fa121d-38bf-475f-924e-ce6f8d3af489">
Jan 31 08:14:36 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <system>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <entry name="serial">8db08219-6945-4f16-a138-8f5ddff67421</entry>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <entry name="uuid">8db08219-6945-4f16-a138-8f5ddff67421</entry>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </system>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <os>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   </os>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <features>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   </features>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/8db08219-6945-4f16-a138-8f5ddff67421_disk">
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       </source>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/8db08219-6945-4f16-a138-8f5ddff67421_disk.config">
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       </source>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:14:36 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:53:0a:83"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <target dev="tapc9fa121d-38"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421/console.log" append="off"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <video>
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </video>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:14:36 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:14:36 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:14:36 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:14:36 compute-0 nova_compute[247704]: </domain>
Jan 31 08:14:36 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.703 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Preparing to wait for external event network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.704 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.704 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.704 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.705 247708 DEBUG nova.virt.libvirt.vif [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-902266142',display_name='tempest-ServersTestJSON-server-902266142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-902266142',id=134,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-xz9s0eu8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:28Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=8db08219-6945-4f16-a138-8f5ddff67421,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.705 247708 DEBUG nova.network.os_vif_util [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.706 247708 DEBUG nova.network.os_vif_util [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0a:83,bridge_name='br-int',has_traffic_filtering=True,id=c9fa121d-38bf-475f-924e-ce6f8d3af489,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9fa121d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.706 247708 DEBUG os_vif [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0a:83,bridge_name='br-int',has_traffic_filtering=True,id=c9fa121d-38bf-475f-924e-ce6f8d3af489,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9fa121d-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.706 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.707 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.707 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.711 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.711 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9fa121d-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.712 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9fa121d-38, col_values=(('external_ids', {'iface-id': 'c9fa121d-38bf-475f-924e-ce6f8d3af489', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:0a:83', 'vm-uuid': '8db08219-6945-4f16-a138-8f5ddff67421'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.713 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:36 compute-0 NetworkManager[49108]: <info>  [1769847276.7147] manager: (tapc9fa121d-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.716 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.720 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.721 247708 INFO os_vif [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0a:83,bridge_name='br-int',has_traffic_filtering=True,id=c9fa121d-38bf-475f-924e-ce6f8d3af489,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9fa121d-38')
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.834 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 827a8d86-f250-441f-911a-98626a322ef7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.835 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847276.8337326, 827a8d86-f250-441f-911a-98626a322ef7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.835 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] VM Resumed (Lifecycle Event)
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.867 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.874 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.899 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.900 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847276.8346376, 827a8d86-f250-441f-911a-98626a322ef7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.901 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] VM Started (Lifecycle Event)
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.921 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.922 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.922 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] No VIF found with MAC fa:16:3e:53:0a:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.923 247708 INFO nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Using config drive
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.957 247708 DEBUG nova.storage.rbd_utils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 8db08219-6945-4f16-a138-8f5ddff67421_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.962 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:36 compute-0 nova_compute[247704]: 2026-01-31 08:14:36.965 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.005 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.624 247708 INFO nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Creating config drive at /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421/disk.config
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.629 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpl3gqf1bn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.664 247708 DEBUG nova.network.neutron [req-b43ea53c-5888-46de-af31-1725ec4089a4 req-06b53011-94ff-4619-9ea6-1202edfef607 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Updated VIF entry in instance network info cache for port c9fa121d-38bf-475f-924e-ce6f8d3af489. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.665 247708 DEBUG nova.network.neutron [req-b43ea53c-5888-46de-af31-1725ec4089a4 req-06b53011-94ff-4619-9ea6-1202edfef607 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Updating instance_info_cache with network_info: [{"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.688 247708 DEBUG oslo_concurrency.lockutils [req-b43ea53c-5888-46de-af31-1725ec4089a4 req-06b53011-94ff-4619-9ea6-1202edfef607 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-8db08219-6945-4f16-a138-8f5ddff67421" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:14:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.758 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpl3gqf1bn" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.784 247708 DEBUG nova.storage.rbd_utils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] rbd image 8db08219-6945-4f16-a138-8f5ddff67421_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.789 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421/disk.config 8db08219-6945-4f16-a138-8f5ddff67421_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3364045828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1717633416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4290890653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.817 247708 DEBUG nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.818 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.818 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.819 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.819 247708 DEBUG nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.819 247708 WARNING nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.820 247708 DEBUG nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.820 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.820 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.821 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.821 247708 DEBUG nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.823 247708 WARNING nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.824 247708 DEBUG nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.824 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.824 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.824 247708 DEBUG oslo_concurrency.lockutils [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.825 247708 DEBUG nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.825 247708 WARNING nova.compute.manager [req-488ad1b2-6588-410e-8102-e26de52c187e req-111bac98-c2d9-4acf-942c-06936ce8d87c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.827 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updating instance_info_cache with network_info: [{"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.863 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-53a5c321-1278-4df4-9fb0-feb465508681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.863 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:14:37 compute-0 nova_compute[247704]: 2026-01-31 08:14:37.864 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.096 247708 DEBUG oslo_concurrency.processutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421/disk.config 8db08219-6945-4f16-a138-8f5ddff67421_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.097 247708 INFO nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Deleting local config drive /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421/disk.config because it was imported into RBD.
Jan 31 08:14:38 compute-0 kernel: tapc9fa121d-38: entered promiscuous mode
Jan 31 08:14:38 compute-0 NetworkManager[49108]: <info>  [1769847278.1441] manager: (tapc9fa121d-38): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.145 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:38 compute-0 ovn_controller[149457]: 2026-01-31T08:14:38Z|00532|binding|INFO|Claiming lport c9fa121d-38bf-475f-924e-ce6f8d3af489 for this chassis.
Jan 31 08:14:38 compute-0 ovn_controller[149457]: 2026-01-31T08:14:38Z|00533|binding|INFO|c9fa121d-38bf-475f-924e-ce6f8d3af489: Claiming fa:16:3e:53:0a:83 10.100.0.8
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.159 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:38 compute-0 NetworkManager[49108]: <info>  [1769847278.1616] device (tapc9fa121d-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.158 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0a:83 10.100.0.8'], port_security=['fa:16:3e:53:0a:83 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8db08219-6945-4f16-a138-8f5ddff67421', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=c9fa121d-38bf-475f-924e-ce6f8d3af489) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.162 160028 INFO neutron.agent.ovn.metadata.agent [-] Port c9fa121d-38bf-475f-924e-ce6f8d3af489 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 bound to our chassis
Jan 31 08:14:38 compute-0 ovn_controller[149457]: 2026-01-31T08:14:38Z|00534|binding|INFO|Setting lport c9fa121d-38bf-475f-924e-ce6f8d3af489 ovn-installed in OVS
Jan 31 08:14:38 compute-0 ovn_controller[149457]: 2026-01-31T08:14:38Z|00535|binding|INFO|Setting lport c9fa121d-38bf-475f-924e-ce6f8d3af489 up in Southbound
Jan 31 08:14:38 compute-0 NetworkManager[49108]: <info>  [1769847278.1662] device (tapc9fa121d-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.166 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.167 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.176 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7ce566-492c-4b37-ab07-395e3513f266]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.176 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb88251fc-71 in ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.178 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb88251fc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.178 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4477aa-382b-4f35-80ac-79a3096b9557]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 systemd-machined[214448]: New machine qemu-56-instance-00000086.
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.182 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[143b87cf-bcf9-483a-a402-e08add3f442f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.195 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[1d8361b2-d898-4605-b319-9c1a4ef033a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000086.
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.212 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f2de19af-c1f0-49b4-a302-17c929389ae4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.241 247708 DEBUG nova.compute.manager [None req-96050521-8d48-4469-8163-aa84fe3b32b8 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.242 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1b89d1-c187-40ab-bff1-e9bf5971d5a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.247 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e0766d39-2257-47d2-9214-95ce2273b740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 NetworkManager[49108]: <info>  [1769847278.2491] manager: (tapb88251fc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.275 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[74a9face-d86b-451e-b0bf-741596939ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.287 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc52f62-afac-4754-af04-44cadeafcfef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 NetworkManager[49108]: <info>  [1769847278.3108] device (tapb88251fc-70): carrier: link connected
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.318 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5615871d-c548-4ab6-970b-db9297a2d4b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.331 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[70b39ae1-b94d-4165-8543-29b22e9c9e89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752092, 'reachable_time': 41785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333132, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.342 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0d53a36b-467a-4a5a-bda1-29abf5e6fbd2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:2a68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752092, 'tstamp': 752092}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333133, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.354 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[077e8187-9176-4d70-8f1f-2ef2df42c37f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb88251fc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:2a:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752092, 'reachable_time': 41785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333134, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.382 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[12d7526d-5d3a-45cf-a51f-fe84e5f37c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.431 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[002c30dd-8ad9-496c-8cf3-28ad4a9a4f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.433 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.433 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.434 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb88251fc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:38.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.474 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:38 compute-0 kernel: tapb88251fc-70: entered promiscuous mode
Jan 31 08:14:38 compute-0 NetworkManager[49108]: <info>  [1769847278.4755] manager: (tapb88251fc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.477 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb88251fc-70, col_values=(('external_ids', {'iface-id': '950341c4-aa2a-4261-8207-ff7e92fd4830'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:38 compute-0 ovn_controller[149457]: 2026-01-31T08:14:38Z|00536|binding|INFO|Releasing lport 950341c4-aa2a-4261-8207-ff7e92fd4830 from this chassis (sb_readonly=0)
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.479 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.479 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.484 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.484 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[90e7f96d-2a78-4600-ba05-b0b994518c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.485 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/b88251fc-7610-460a-ba55-2ed186c6f696.pid.haproxy
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID b88251fc-7610-460a-ba55-2ed186c6f696
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:14:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:38.485 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'env', 'PROCESS_TAG=haproxy-b88251fc-7610-460a-ba55-2ed186c6f696', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b88251fc-7610-460a-ba55-2ed186c6f696.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:14:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:14:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:38.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.704 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847278.7033398, 8db08219-6945-4f16-a138-8f5ddff67421 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.704 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] VM Started (Lifecycle Event)
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.732 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.736 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847278.7062612, 8db08219-6945-4f16-a138-8f5ddff67421 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.737 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] VM Paused (Lifecycle Event)
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.861 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.865 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:38 compute-0 ceph-mon[74496]: pgmap v2372: 305 pgs: 305 active+clean; 1.0 GiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Jan 31 08:14:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2905960649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3465886312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.886 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.894 247708 DEBUG nova.compute.manager [req-4b584024-5101-4c72-b955-724f8b9f1ab4 req-e8b3e129-6090-42a0-b0e9-7a7afc017536 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received event network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.894 247708 DEBUG oslo_concurrency.lockutils [req-4b584024-5101-4c72-b955-724f8b9f1ab4 req-e8b3e129-6090-42a0-b0e9-7a7afc017536 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.894 247708 DEBUG oslo_concurrency.lockutils [req-4b584024-5101-4c72-b955-724f8b9f1ab4 req-e8b3e129-6090-42a0-b0e9-7a7afc017536 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.895 247708 DEBUG oslo_concurrency.lockutils [req-4b584024-5101-4c72-b955-724f8b9f1ab4 req-e8b3e129-6090-42a0-b0e9-7a7afc017536 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.895 247708 DEBUG nova.compute.manager [req-4b584024-5101-4c72-b955-724f8b9f1ab4 req-e8b3e129-6090-42a0-b0e9-7a7afc017536 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Processing event network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.895 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.898 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847278.898762, 8db08219-6945-4f16-a138-8f5ddff67421 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.899 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] VM Resumed (Lifecycle Event)
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.900 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.903 247708 INFO nova.virt.libvirt.driver [-] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Instance spawned successfully.
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.903 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:14:38 compute-0 podman[333209]: 2026-01-31 08:14:38.813191421 +0000 UTC m=+0.024512858 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.914 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.916 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.931 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.932 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.933 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.933 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.933 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.934 247708 DEBUG nova.virt.libvirt.driver [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.942 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:14:38 compute-0 podman[333209]: 2026-01-31 08:14:38.961452914 +0000 UTC m=+0.172774331 container create 7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.983 247708 INFO nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Took 10.48 seconds to spawn the instance on the hypervisor.
Jan 31 08:14:38 compute-0 nova_compute[247704]: 2026-01-31 08:14:38.985 247708 DEBUG nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:39 compute-0 systemd[1]: Started libpod-conmon-7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6.scope.
Jan 31 08:14:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4078a9e17c45398d3efc23af7852d7e5ee3d162e8b120fa64be1ab6cf680fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:39 compute-0 nova_compute[247704]: 2026-01-31 08:14:39.053 247708 INFO nova.compute.manager [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Took 11.70 seconds to build instance.
Jan 31 08:14:39 compute-0 nova_compute[247704]: 2026-01-31 08:14:39.071 247708 DEBUG oslo_concurrency.lockutils [None req-ce803112-b9fa-43e9-9caa-c40a3ff12f5e 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:39 compute-0 podman[333209]: 2026-01-31 08:14:39.107307228 +0000 UTC m=+0.318628695 container init 7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:14:39 compute-0 podman[333222]: 2026-01-31 08:14:39.109307626 +0000 UTC m=+0.101883103 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:39 compute-0 podman[333209]: 2026-01-31 08:14:39.117851684 +0000 UTC m=+0.329173141 container start 7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:14:39 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[333230]: [NOTICE]   (333247) : New worker (333249) forked
Jan 31 08:14:39 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[333230]: [NOTICE]   (333247) : Loading success.
Jan 31 08:14:39 compute-0 nova_compute[247704]: 2026-01-31 08:14:39.567 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 999 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 209 op/s
Jan 31 08:14:39 compute-0 nova_compute[247704]: 2026-01-31 08:14:39.858 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:40 compute-0 ceph-mon[74496]: pgmap v2373: 305 pgs: 305 active+clean; 999 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 209 op/s
Jan 31 08:14:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:40.176 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:40.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:40.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:40 compute-0 nova_compute[247704]: 2026-01-31 08:14:40.986 247708 DEBUG nova.compute.manager [req-dc5b9fba-7138-4985-8973-7b77770c908b req-cbe70878-4ad5-4baf-b45d-9d1055c03149 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received event network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:40 compute-0 nova_compute[247704]: 2026-01-31 08:14:40.987 247708 DEBUG oslo_concurrency.lockutils [req-dc5b9fba-7138-4985-8973-7b77770c908b req-cbe70878-4ad5-4baf-b45d-9d1055c03149 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:40 compute-0 nova_compute[247704]: 2026-01-31 08:14:40.988 247708 DEBUG oslo_concurrency.lockutils [req-dc5b9fba-7138-4985-8973-7b77770c908b req-cbe70878-4ad5-4baf-b45d-9d1055c03149 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:40 compute-0 nova_compute[247704]: 2026-01-31 08:14:40.988 247708 DEBUG oslo_concurrency.lockutils [req-dc5b9fba-7138-4985-8973-7b77770c908b req-cbe70878-4ad5-4baf-b45d-9d1055c03149 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:40 compute-0 nova_compute[247704]: 2026-01-31 08:14:40.989 247708 DEBUG nova.compute.manager [req-dc5b9fba-7138-4985-8973-7b77770c908b req-cbe70878-4ad5-4baf-b45d-9d1055c03149 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] No waiting events found dispatching network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:40 compute-0 nova_compute[247704]: 2026-01-31 08:14:40.990 247708 WARNING nova.compute.manager [req-dc5b9fba-7138-4985-8973-7b77770c908b req-cbe70878-4ad5-4baf-b45d-9d1055c03149 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received unexpected event network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 for instance with vm_state active and task_state None.
Jan 31 08:14:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 973 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 290 op/s
Jan 31 08:14:41 compute-0 nova_compute[247704]: 2026-01-31 08:14:41.771 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:41 compute-0 nova_compute[247704]: 2026-01-31 08:14:41.775 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:41 compute-0 nova_compute[247704]: 2026-01-31 08:14:41.775 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:41 compute-0 nova_compute[247704]: 2026-01-31 08:14:41.775 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:41 compute-0 nova_compute[247704]: 2026-01-31 08:14:41.776 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:41 compute-0 nova_compute[247704]: 2026-01-31 08:14:41.776 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:41 compute-0 nova_compute[247704]: 2026-01-31 08:14:41.777 247708 INFO nova.compute.manager [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Terminating instance
Jan 31 08:14:41 compute-0 nova_compute[247704]: 2026-01-31 08:14:41.778 247708 DEBUG nova.compute.manager [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:14:42 compute-0 kernel: tap77c06224-c2 (unregistering): left promiscuous mode
Jan 31 08:14:42 compute-0 NetworkManager[49108]: <info>  [1769847282.0789] device (tap77c06224-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:14:42 compute-0 ovn_controller[149457]: 2026-01-31T08:14:42Z|00537|binding|INFO|Releasing lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 from this chassis (sb_readonly=0)
Jan 31 08:14:42 compute-0 ovn_controller[149457]: 2026-01-31T08:14:42Z|00538|binding|INFO|Setting lport 77c06224-c2b3-45e0-90f5-afd0c8400ff7 down in Southbound
Jan 31 08:14:42 compute-0 ovn_controller[149457]: 2026-01-31T08:14:42Z|00539|binding|INFO|Removing iface tap77c06224-c2 ovn-installed in OVS
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.095 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:42.102 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:5f:e2 10.100.0.14'], port_security=['fa:16:3e:51:5f:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '827a8d86-f250-441f-911a-98626a322ef7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b57bf91-5573-4777-9a03-b1fa3ca3351c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cbd0f41e455b4b3b9a8edf35ef0b85ed', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5f679240-571e-49a9-90f1-7fce9428e205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6382cf61-a4d2-45ec-ba90-ec1b527a3e06, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=77c06224-c2b3-45e0-90f5-afd0c8400ff7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:42.106 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 77c06224-c2b3-45e0-90f5-afd0c8400ff7 in datapath 1b57bf91-5573-4777-9a03-b1fa3ca3351c unbound from our chassis
Jan 31 08:14:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:42.108 160028 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1b57bf91-5573-4777-9a03-b1fa3ca3351c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 08:14:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:42.109 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0696caa5-24af-428e-a467-612a77b0b268]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.115 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:42 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000082.scope: Deactivated successfully.
Jan 31 08:14:42 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000082.scope: Consumed 5.458s CPU time.
Jan 31 08:14:42 compute-0 systemd-machined[214448]: Machine qemu-55-instance-00000082 terminated.
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.209 247708 INFO nova.virt.libvirt.driver [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Instance destroyed successfully.
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.209 247708 DEBUG nova.objects.instance [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lazy-loading 'resources' on Instance uuid 827a8d86-f250-441f-911a-98626a322ef7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:42 compute-0 ceph-mon[74496]: pgmap v2374: 305 pgs: 305 active+clean; 973 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 290 op/s
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.258 247708 DEBUG nova.virt.libvirt.vif [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1146155717',display_name='tempest-ServerRescueTestJSON-server-1146155717',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1146155717',id=130,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:14:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cbd0f41e455b4b3b9a8edf35ef0b85ed',ramdisk_id='',reservation_id='r-dm7efqr7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1911109090',owner_user_name='tempest-ServerRescueTestJSON-1911109090-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:14:38Z,user_data=None,user_id='6ab9e181016f4d5a899c91dae3aa26e0',uuid=827a8d86-f250-441f-911a-98626a322ef7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.258 247708 DEBUG nova.network.os_vif_util [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converting VIF {"id": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "address": "fa:16:3e:51:5f:e2", "network": {"id": "1b57bf91-5573-4777-9a03-b1fa3ca3351c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-2001325626-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "cbd0f41e455b4b3b9a8edf35ef0b85ed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77c06224-c2", "ovs_interfaceid": "77c06224-c2b3-45e0-90f5-afd0c8400ff7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.259 247708 DEBUG nova.network.os_vif_util [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:51:5f:e2,bridge_name='br-int',has_traffic_filtering=True,id=77c06224-c2b3-45e0-90f5-afd0c8400ff7,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77c06224-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.260 247708 DEBUG os_vif [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:5f:e2,bridge_name='br-int',has_traffic_filtering=True,id=77c06224-c2b3-45e0-90f5-afd0c8400ff7,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77c06224-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.262 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77c06224-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.265 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.267 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.270 247708 INFO os_vif [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:51:5f:e2,bridge_name='br-int',has_traffic_filtering=True,id=77c06224-c2b3-45e0-90f5-afd0c8400ff7,network=Network(1b57bf91-5573-4777-9a03-b1fa3ca3351c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77c06224-c2')
Jan 31 08:14:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:42.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.822 247708 DEBUG nova.compute.manager [req-bdc12b9d-072d-4173-8311-7a93b694a6de req-6ab3f8a7-76f0-4855-886a-1e0d81debece 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.823 247708 DEBUG oslo_concurrency.lockutils [req-bdc12b9d-072d-4173-8311-7a93b694a6de req-6ab3f8a7-76f0-4855-886a-1e0d81debece 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.823 247708 DEBUG oslo_concurrency.lockutils [req-bdc12b9d-072d-4173-8311-7a93b694a6de req-6ab3f8a7-76f0-4855-886a-1e0d81debece 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.823 247708 DEBUG oslo_concurrency.lockutils [req-bdc12b9d-072d-4173-8311-7a93b694a6de req-6ab3f8a7-76f0-4855-886a-1e0d81debece 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.824 247708 DEBUG nova.compute.manager [req-bdc12b9d-072d-4173-8311-7a93b694a6de req-6ab3f8a7-76f0-4855-886a-1e0d81debece 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:42 compute-0 nova_compute[247704]: 2026-01-31 08:14:42.824 247708 DEBUG nova.compute.manager [req-bdc12b9d-072d-4173-8311-7a93b694a6de req-6ab3f8a7-76f0-4855-886a-1e0d81debece 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-unplugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:14:43 compute-0 nova_compute[247704]: 2026-01-31 08:14:43.290 247708 DEBUG oslo_concurrency.lockutils [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:43 compute-0 nova_compute[247704]: 2026-01-31 08:14:43.291 247708 DEBUG oslo_concurrency.lockutils [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:43 compute-0 nova_compute[247704]: 2026-01-31 08:14:43.292 247708 DEBUG nova.compute.manager [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:43 compute-0 nova_compute[247704]: 2026-01-31 08:14:43.295 247708 DEBUG nova.compute.manager [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 31 08:14:43 compute-0 nova_compute[247704]: 2026-01-31 08:14:43.296 247708 DEBUG nova.objects.instance [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'flavor' on Instance uuid 8db08219-6945-4f16-a138-8f5ddff67421 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:43 compute-0 nova_compute[247704]: 2026-01-31 08:14:43.323 247708 DEBUG nova.virt.libvirt.driver [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:14:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 959 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.5 MiB/s wr, 278 op/s
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.115 247708 INFO nova.virt.libvirt.driver [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Deleting instance files /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7_del
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.117 247708 INFO nova.virt.libvirt.driver [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Deletion of /var/lib/nova/instances/827a8d86-f250-441f-911a-98626a322ef7_del complete
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.331 247708 INFO nova.compute.manager [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Took 2.55 seconds to destroy the instance on the hypervisor.
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.333 247708 DEBUG oslo.service.loopingcall [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.334 247708 DEBUG nova.compute.manager [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.334 247708 DEBUG nova.network.neutron [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:14:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:44.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.570 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:44.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.927 247708 DEBUG nova.compute.manager [req-95ef8a08-daa5-4645-b5cd-ac5bd797ea71 req-2acd9e39-551a-40ac-adf8-5aed782dde32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.928 247708 DEBUG oslo_concurrency.lockutils [req-95ef8a08-daa5-4645-b5cd-ac5bd797ea71 req-2acd9e39-551a-40ac-adf8-5aed782dde32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "827a8d86-f250-441f-911a-98626a322ef7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.928 247708 DEBUG oslo_concurrency.lockutils [req-95ef8a08-daa5-4645-b5cd-ac5bd797ea71 req-2acd9e39-551a-40ac-adf8-5aed782dde32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.928 247708 DEBUG oslo_concurrency.lockutils [req-95ef8a08-daa5-4645-b5cd-ac5bd797ea71 req-2acd9e39-551a-40ac-adf8-5aed782dde32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.929 247708 DEBUG nova.compute.manager [req-95ef8a08-daa5-4645-b5cd-ac5bd797ea71 req-2acd9e39-551a-40ac-adf8-5aed782dde32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] No waiting events found dispatching network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:44 compute-0 nova_compute[247704]: 2026-01-31 08:14:44.929 247708 WARNING nova.compute.manager [req-95ef8a08-daa5-4645-b5cd-ac5bd797ea71 req-2acd9e39-551a-40ac-adf8-5aed782dde32 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received unexpected event network-vif-plugged-77c06224-c2b3-45e0-90f5-afd0c8400ff7 for instance with vm_state active and task_state deleting.
Jan 31 08:14:44 compute-0 ceph-mon[74496]: pgmap v2375: 305 pgs: 305 active+clean; 959 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.5 MiB/s wr, 278 op/s
Jan 31 08:14:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:14:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2418829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:14:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:14:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2418829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:14:45 compute-0 sudo[333298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:45 compute-0 sudo[333298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:45 compute-0 sudo[333298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:45 compute-0 sudo[333324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:45 compute-0 sudo[333324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:45 compute-0 sudo[333324]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 925 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.1 MiB/s wr, 290 op/s
Jan 31 08:14:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2418829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:14:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2418829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:14:45 compute-0 ceph-mon[74496]: pgmap v2376: 305 pgs: 305 active+clean; 925 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.1 MiB/s wr, 290 op/s
Jan 31 08:14:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:46.498546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286498641, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1031, "num_deletes": 254, "total_data_size": 1576093, "memory_usage": 1595240, "flush_reason": "Manual Compaction"}
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286525674, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1036934, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51263, "largest_seqno": 52293, "table_properties": {"data_size": 1032515, "index_size": 1943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11640, "raw_average_key_size": 21, "raw_value_size": 1023083, "raw_average_value_size": 1891, "num_data_blocks": 84, "num_entries": 541, "num_filter_entries": 541, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847207, "oldest_key_time": 1769847207, "file_creation_time": 1769847286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 27174 microseconds, and 3977 cpu microseconds.
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:14:46 compute-0 nova_compute[247704]: 2026-01-31 08:14:46.533 247708 DEBUG nova.network.neutron [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:14:46 compute-0 nova_compute[247704]: 2026-01-31 08:14:46.554 247708 INFO nova.compute.manager [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Took 2.22 seconds to deallocate network for instance.
Jan 31 08:14:46 compute-0 nova_compute[247704]: 2026-01-31 08:14:46.593 247708 DEBUG nova.compute.manager [req-b3ab50b3-6f06-40c6-9f72-a72efe0dacca req-5da473a5-3521-46a6-9efd-713feba6ddbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Received event network-vif-deleted-77c06224-c2b3-45e0-90f5-afd0c8400ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:46 compute-0 nova_compute[247704]: 2026-01-31 08:14:46.599 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:46 compute-0 nova_compute[247704]: 2026-01-31 08:14:46.600 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:46.525732) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1036934 bytes OK
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:46.525801) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:46.615102) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:46.615150) EVENT_LOG_v1 {"time_micros": 1769847286615139, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:46.615179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1571190, prev total WAL file size 1571190, number of live WAL files 2.
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:46.615961) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373538' seq:72057594037927935, type:22 .. '6D6772737461740032303130' seq:0, type:0; will stop at (end)
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1012KB)], [113(12MB)]
Jan 31 08:14:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847286616048, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 13818069, "oldest_snapshot_seqno": -1}
Jan 31 08:14:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:46.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:46 compute-0 nova_compute[247704]: 2026-01-31 08:14:46.693 247708 DEBUG oslo_concurrency.processutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:14:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:14:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913157800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:47 compute-0 nova_compute[247704]: 2026-01-31 08:14:47.168 247708 DEBUG oslo_concurrency.processutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7970 keys, 10452965 bytes, temperature: kUnknown
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287172538, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 10452965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10401888, "index_size": 30037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 207158, "raw_average_key_size": 25, "raw_value_size": 10262076, "raw_average_value_size": 1287, "num_data_blocks": 1176, "num_entries": 7970, "num_filter_entries": 7970, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847286, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:14:47 compute-0 nova_compute[247704]: 2026-01-31 08:14:47.174 247708 DEBUG nova.compute.provider_tree [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:14:47 compute-0 nova_compute[247704]: 2026-01-31 08:14:47.193 247708 DEBUG nova.scheduler.client.report [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:14:47 compute-0 nova_compute[247704]: 2026-01-31 08:14:47.217 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:47.172784) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10452965 bytes
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:47.222762) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.8 rd, 18.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(23.4) write-amplify(10.1) OK, records in: 8461, records dropped: 491 output_compression: NoCompression
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:47.222830) EVENT_LOG_v1 {"time_micros": 1769847287222808, "job": 68, "event": "compaction_finished", "compaction_time_micros": 556586, "compaction_time_cpu_micros": 23979, "output_level": 6, "num_output_files": 1, "total_output_size": 10452965, "num_input_records": 8461, "num_output_records": 7970, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287223518, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287225872, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:46.615864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:47.226015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:47.226023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:47.226025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:47.226027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:47 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:14:47.226028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:14:47 compute-0 nova_compute[247704]: 2026-01-31 08:14:47.253 247708 INFO nova.scheduler.client.report [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Deleted allocations for instance 827a8d86-f250-441f-911a-98626a322ef7
Jan 31 08:14:47 compute-0 nova_compute[247704]: 2026-01-31 08:14:47.266 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:47 compute-0 nova_compute[247704]: 2026-01-31 08:14:47.339 247708 DEBUG oslo_concurrency.lockutils [None req-ad1e8ccc-e24d-4853-be3a-7c25002ebe52 6ab9e181016f4d5a899c91dae3aa26e0 cbd0f41e455b4b3b9a8edf35ef0b85ed - - default default] Lock "827a8d86-f250-441f-911a-98626a322ef7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:47 compute-0 nova_compute[247704]: 2026-01-31 08:14:47.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:14:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3913157800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 880 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 681 KiB/s wr, 252 op/s
Jan 31 08:14:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:48.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:48.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 31 08:14:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 31 08:14:48 compute-0 ceph-mon[74496]: pgmap v2377: 305 pgs: 305 active+clean; 880 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 681 KiB/s wr, 252 op/s
Jan 31 08:14:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 31 08:14:49 compute-0 nova_compute[247704]: 2026-01-31 08:14:49.572 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 858 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.3 MiB/s wr, 220 op/s
Jan 31 08:14:49 compute-0 ceph-mon[74496]: osdmap e302: 3 total, 3 up, 3 in
Jan 31 08:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:50.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 31 08:14:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 31 08:14:50 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 31 08:14:50 compute-0 ceph-mon[74496]: pgmap v2379: 305 pgs: 305 active+clean; 858 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.3 MiB/s wr, 220 op/s
Jan 31 08:14:50 compute-0 ceph-mon[74496]: osdmap e303: 3 total, 3 up, 3 in
Jan 31 08:14:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 862 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.3 MiB/s wr, 212 op/s
Jan 31 08:14:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 31 08:14:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 31 08:14:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 31 08:14:52 compute-0 nova_compute[247704]: 2026-01-31 08:14:52.267 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:52.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:52 compute-0 ceph-mon[74496]: pgmap v2381: 305 pgs: 305 active+clean; 862 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.3 MiB/s wr, 212 op/s
Jan 31 08:14:52 compute-0 ceph-mon[74496]: osdmap e304: 3 total, 3 up, 3 in
Jan 31 08:14:53 compute-0 nova_compute[247704]: 2026-01-31 08:14:53.383 247708 DEBUG nova.virt.libvirt.driver [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:14:53 compute-0 ovn_controller[149457]: 2026-01-31T08:14:53Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:0a:83 10.100.0.8
Jan 31 08:14:53 compute-0 ovn_controller[149457]: 2026-01-31T08:14:53Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:0a:83 10.100.0.8
Jan 31 08:14:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 847 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 8.4 MiB/s wr, 237 op/s
Jan 31 08:14:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1016822471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:14:54 compute-0 ceph-mon[74496]: pgmap v2383: 305 pgs: 305 active+clean; 847 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 8.4 MiB/s wr, 237 op/s
Jan 31 08:14:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:54 compute-0 nova_compute[247704]: 2026-01-31 08:14:54.575 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:54.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:55 compute-0 sudo[333375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:55 compute-0 sudo[333375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:55 compute-0 sudo[333375]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-0 sudo[333401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:55 compute-0 sudo[333401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:55 compute-0 sudo[333401]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-0 podman[333399]: 2026-01-31 08:14:55.235086363 +0000 UTC m=+0.126625147 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:55 compute-0 sudo[333447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:55 compute-0 sudo[333447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:55 compute-0 sudo[333447]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-0 sudo[333476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:14:55 compute-0 sudo[333476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 857 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 9.0 MiB/s wr, 299 op/s
Jan 31 08:14:55 compute-0 podman[333574]: 2026-01-31 08:14:55.87029512 +0000 UTC m=+0.096976554 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:14:55 compute-0 podman[333574]: 2026-01-31 08:14:55.953869077 +0000 UTC m=+0.180550511 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 08:14:56 compute-0 kernel: tapc9fa121d-38 (unregistering): left promiscuous mode
Jan 31 08:14:56 compute-0 NetworkManager[49108]: <info>  [1769847296.1127] device (tapc9fa121d-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:14:56 compute-0 ovn_controller[149457]: 2026-01-31T08:14:56Z|00540|binding|INFO|Releasing lport c9fa121d-38bf-475f-924e-ce6f8d3af489 from this chassis (sb_readonly=0)
Jan 31 08:14:56 compute-0 ovn_controller[149457]: 2026-01-31T08:14:56Z|00541|binding|INFO|Setting lport c9fa121d-38bf-475f-924e-ce6f8d3af489 down in Southbound
Jan 31 08:14:56 compute-0 ovn_controller[149457]: 2026-01-31T08:14:56Z|00542|binding|INFO|Removing iface tapc9fa121d-38 ovn-installed in OVS
Jan 31 08:14:56 compute-0 nova_compute[247704]: 2026-01-31 08:14:56.122 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:56 compute-0 nova_compute[247704]: 2026-01-31 08:14:56.128 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:56.131 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0a:83 10.100.0.8'], port_security=['fa:16:3e:53:0a:83 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '8db08219-6945-4f16-a138-8f5ddff67421', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b88251fc-7610-460a-ba55-2ed186c6f696', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4aa06cf35d8c468fb16884f19dc8ce71', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f076789-2616-4234-8eab-1fc3da7d63b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7bb3690e-e43b-4d54-9d64-4797e471bf50, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=c9fa121d-38bf-475f-924e-ce6f8d3af489) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:14:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:56.133 160028 INFO neutron.agent.ovn.metadata.agent [-] Port c9fa121d-38bf-475f-924e-ce6f8d3af489 in datapath b88251fc-7610-460a-ba55-2ed186c6f696 unbound from our chassis
Jan 31 08:14:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:56.137 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b88251fc-7610-460a-ba55-2ed186c6f696, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:14:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:56.138 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2a17810f-f176-4cc4-a29a-c7190e7dfb08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:56.138 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 namespace which is not needed anymore
Jan 31 08:14:56 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000086.scope: Deactivated successfully.
Jan 31 08:14:56 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000086.scope: Consumed 14.157s CPU time.
Jan 31 08:14:56 compute-0 systemd-machined[214448]: Machine qemu-56-instance-00000086 terminated.
Jan 31 08:14:56 compute-0 nova_compute[247704]: 2026-01-31 08:14:56.381 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:14:56 compute-0 nova_compute[247704]: 2026-01-31 08:14:56.400 247708 INFO nova.virt.libvirt.driver [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Instance shutdown successfully after 13 seconds.
Jan 31 08:14:56 compute-0 nova_compute[247704]: 2026-01-31 08:14:56.405 247708 INFO nova.virt.libvirt.driver [-] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Instance destroyed successfully.
Jan 31 08:14:56 compute-0 nova_compute[247704]: 2026-01-31 08:14:56.406 247708 DEBUG nova.objects.instance [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8db08219-6945-4f16-a138-8f5ddff67421 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:14:56 compute-0 nova_compute[247704]: 2026-01-31 08:14:56.440 247708 DEBUG nova.compute.manager [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:14:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:14:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:56.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:56 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[333230]: [NOTICE]   (333247) : haproxy version is 2.8.14-c23fe91
Jan 31 08:14:56 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[333230]: [NOTICE]   (333247) : path to executable is /usr/sbin/haproxy
Jan 31 08:14:56 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[333230]: [WARNING]  (333247) : Exiting Master process...
Jan 31 08:14:56 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[333230]: [ALERT]    (333247) : Current worker (333249) exited with code 143 (Terminated)
Jan 31 08:14:56 compute-0 neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696[333230]: [WARNING]  (333247) : All workers exited. Exiting... (0)
Jan 31 08:14:56 compute-0 systemd[1]: libpod-7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6.scope: Deactivated successfully.
Jan 31 08:14:56 compute-0 podman[333647]: 2026-01-31 08:14:56.74812962 +0000 UTC m=+0.200566968 container died 7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:14:56 compute-0 nova_compute[247704]: 2026-01-31 08:14:56.871 247708 DEBUG oslo_concurrency.lockutils [None req-4a7ebaf5-56a9-40ad-b5cb-b404b0acb000 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:57 compute-0 ceph-mon[74496]: pgmap v2384: 305 pgs: 305 active+clean; 857 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 9.0 MiB/s wr, 299 op/s
Jan 31 08:14:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1135928742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6-userdata-shm.mount: Deactivated successfully.
Jan 31 08:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb4078a9e17c45398d3efc23af7852d7e5ee3d162e8b120fa64be1ab6cf680fc-merged.mount: Deactivated successfully.
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.207 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847282.2070405, 827a8d86-f250-441f-911a-98626a322ef7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.208 247708 INFO nova.compute.manager [-] [instance: 827a8d86-f250-441f-911a-98626a322ef7] VM Stopped (Lifecycle Event)
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.244 247708 DEBUG nova.compute.manager [None req-7f5be893-a16b-4be0-b181-13825250c3f9 - - - - - -] [instance: 827a8d86-f250-441f-911a-98626a322ef7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.271 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:57 compute-0 podman[333647]: 2026-01-31 08:14:57.481457499 +0000 UTC m=+0.933894857 container cleanup 7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:57 compute-0 systemd[1]: libpod-conmon-7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6.scope: Deactivated successfully.
Jan 31 08:14:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 864 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.6 MiB/s wr, 288 op/s
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.952 247708 DEBUG nova.compute.manager [req-786e1f2b-e50b-4eb6-91d4-7bbba8be9ad7 req-879cd24a-9218-4855-97ce-01a191e716e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received event network-vif-unplugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.952 247708 DEBUG oslo_concurrency.lockutils [req-786e1f2b-e50b-4eb6-91d4-7bbba8be9ad7 req-879cd24a-9218-4855-97ce-01a191e716e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.953 247708 DEBUG oslo_concurrency.lockutils [req-786e1f2b-e50b-4eb6-91d4-7bbba8be9ad7 req-879cd24a-9218-4855-97ce-01a191e716e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.953 247708 DEBUG oslo_concurrency.lockutils [req-786e1f2b-e50b-4eb6-91d4-7bbba8be9ad7 req-879cd24a-9218-4855-97ce-01a191e716e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.954 247708 DEBUG nova.compute.manager [req-786e1f2b-e50b-4eb6-91d4-7bbba8be9ad7 req-879cd24a-9218-4855-97ce-01a191e716e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] No waiting events found dispatching network-vif-unplugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:14:57 compute-0 nova_compute[247704]: 2026-01-31 08:14:57.954 247708 WARNING nova.compute.manager [req-786e1f2b-e50b-4eb6-91d4-7bbba8be9ad7 req-879cd24a-9218-4855-97ce-01a191e716e4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received unexpected event network-vif-unplugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 for instance with vm_state stopped and task_state None.
Jan 31 08:14:58 compute-0 podman[333730]: 2026-01-31 08:14:58.112893585 +0000 UTC m=+0.610198190 container remove 7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.119 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cac6679e-4d29-4822-9760-72f9c4231dc5]: (4, ('Sat Jan 31 08:14:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6)\n7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6\nSat Jan 31 08:14:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 (7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6)\n7aba3da673d375b5cb0fe66f64d58f93ff35ca31fc8a1c8580458da8405a0ba6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.120 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0557d08a-ff21-41a4-9be1-03064be72513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.121 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb88251fc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:14:58 compute-0 nova_compute[247704]: 2026-01-31 08:14:58.123 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:58 compute-0 nova_compute[247704]: 2026-01-31 08:14:58.138 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.141 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b305ef83-0da3-4a41-ac90-c3f24f721436]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:58 compute-0 kernel: tapb88251fc-70: left promiscuous mode
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.162 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[28e57a9f-1336-4893-9c71-3965a657f29b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.163 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[21e88317-5bad-48e9-a642-843608e94e49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.177 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3a581265-a30e-4862-b2d7-0f5f4009241d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752085, 'reachable_time': 31871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333773, 'error': None, 'target': 'ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.180 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b88251fc-7610-460a-ba55-2ed186c6f696 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:14:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:14:58.181 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[189fa500-a976-4daf-a600-b21685fed7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:14:58 compute-0 systemd[1]: run-netns-ovnmeta\x2db88251fc\x2d7610\x2d460a\x2dba55\x2d2ed186c6f696.mount: Deactivated successfully.
Jan 31 08:14:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:14:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:14:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3915545150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:58 compute-0 ceph-mon[74496]: pgmap v2385: 305 pgs: 305 active+clean; 864 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.6 MiB/s wr, 288 op/s
Jan 31 08:14:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/410555706' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:14:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:14:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:14:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:58.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:14:58 compute-0 podman[333807]: 2026-01-31 08:14:58.718882641 +0000 UTC m=+0.356341854 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:14:58 compute-0 podman[333807]: 2026-01-31 08:14:58.739809221 +0000 UTC m=+0.377268344 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:14:59 compute-0 podman[333872]: 2026-01-31 08:14:59.065820815 +0000 UTC m=+0.095614032 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, release=1793, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, architecture=x86_64)
Jan 31 08:14:59 compute-0 podman[333872]: 2026-01-31 08:14:59.079581809 +0000 UTC m=+0.109374926 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, name=keepalived, architecture=x86_64, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Jan 31 08:14:59 compute-0 sudo[333476]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:14:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:14:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:14:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:14:59 compute-0 sudo[333905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:59 compute-0 sudo[333905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:59 compute-0 sudo[333905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:59 compute-0 sudo[333930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:59 compute-0 sudo[333930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:59 compute-0 sudo[333930]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:59 compute-0 sudo[333955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:14:59 compute-0 sudo[333955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:59 compute-0 sudo[333955]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:59 compute-0 sudo[333980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:14:59 compute-0 sudo[333980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:59 compute-0 nova_compute[247704]: 2026-01-31 08:14:59.576 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:14:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 864 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 176 op/s
Jan 31 08:14:59 compute-0 sudo[333980]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:15:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:15:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:15:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:15:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:15:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 98f0397d-09a0-4e45-be17-65e732f65797 does not exist
Jan 31 08:15:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7aa1b65c-d6f0-41c2-afc0-46180cd0e82c does not exist
Jan 31 08:15:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d44f2200-f2c0-4f17-b2ef-fe0fa0dd0628 does not exist
Jan 31 08:15:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:15:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:15:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:15:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:15:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:15:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:15:00 compute-0 sudo[334038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:00 compute-0 sudo[334038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:00 compute-0 sudo[334038]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:00 compute-0 sudo[334063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:00 compute-0 sudo[334063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:00 compute-0 sudo[334063]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:00 compute-0 sudo[334088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:00 compute-0 sudo[334088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:00 compute-0 sudo[334088]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:00 compute-0 ceph-mon[74496]: pgmap v2386: 305 pgs: 305 active+clean; 864 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 176 op/s
Jan 31 08:15:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:15:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:15:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:15:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:15:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:15:00 compute-0 sudo[334113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:15:00 compute-0 sudo[334113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:00.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:00 compute-0 nova_compute[247704]: 2026-01-31 08:15:00.494 247708 DEBUG nova.compute.manager [req-53dd41d4-0f05-45b1-9d73-2aab49605ce0 req-91c68897-7cd6-427a-a6a5-a148ec3e0890 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received event network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:00 compute-0 nova_compute[247704]: 2026-01-31 08:15:00.494 247708 DEBUG oslo_concurrency.lockutils [req-53dd41d4-0f05-45b1-9d73-2aab49605ce0 req-91c68897-7cd6-427a-a6a5-a148ec3e0890 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:00 compute-0 nova_compute[247704]: 2026-01-31 08:15:00.495 247708 DEBUG oslo_concurrency.lockutils [req-53dd41d4-0f05-45b1-9d73-2aab49605ce0 req-91c68897-7cd6-427a-a6a5-a148ec3e0890 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:00 compute-0 nova_compute[247704]: 2026-01-31 08:15:00.495 247708 DEBUG oslo_concurrency.lockutils [req-53dd41d4-0f05-45b1-9d73-2aab49605ce0 req-91c68897-7cd6-427a-a6a5-a148ec3e0890 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:00 compute-0 nova_compute[247704]: 2026-01-31 08:15:00.495 247708 DEBUG nova.compute.manager [req-53dd41d4-0f05-45b1-9d73-2aab49605ce0 req-91c68897-7cd6-427a-a6a5-a148ec3e0890 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] No waiting events found dispatching network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:00 compute-0 nova_compute[247704]: 2026-01-31 08:15:00.495 247708 WARNING nova.compute.manager [req-53dd41d4-0f05-45b1-9d73-2aab49605ce0 req-91c68897-7cd6-427a-a6a5-a148ec3e0890 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received unexpected event network-vif-plugged-c9fa121d-38bf-475f-924e-ce6f8d3af489 for instance with vm_state stopped and task_state None.
Jan 31 08:15:00 compute-0 podman[334179]: 2026-01-31 08:15:00.633351379 +0000 UTC m=+0.062097614 container create 8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:15:00 compute-0 systemd[1]: Started libpod-conmon-8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db.scope.
Jan 31 08:15:00 compute-0 podman[334179]: 2026-01-31 08:15:00.601217366 +0000 UTC m=+0.029963701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:15:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:00.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:00 compute-0 podman[334179]: 2026-01-31 08:15:00.741047403 +0000 UTC m=+0.169793648 container init 8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 08:15:00 compute-0 podman[334179]: 2026-01-31 08:15:00.748178117 +0000 UTC m=+0.176924362 container start 8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gagarin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:00 compute-0 podman[334179]: 2026-01-31 08:15:00.754193923 +0000 UTC m=+0.182940178 container attach 8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:15:00 compute-0 cranky_gagarin[334196]: 167 167
Jan 31 08:15:00 compute-0 systemd[1]: libpod-8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db.scope: Deactivated successfully.
Jan 31 08:15:00 compute-0 podman[334179]: 2026-01-31 08:15:00.755917636 +0000 UTC m=+0.184663861 container died 8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gagarin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0b351e6329160a70ab6351b85151e9f82fd69a6e416654566e537612c0e4b9b-merged.mount: Deactivated successfully.
Jan 31 08:15:00 compute-0 podman[334179]: 2026-01-31 08:15:00.813122409 +0000 UTC m=+0.241868634 container remove 8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:15:00 compute-0 systemd[1]: libpod-conmon-8254665e02eeb29f70e72941324b139a05a04482a7c42c56776d77e810a985db.scope: Deactivated successfully.
Jan 31 08:15:00 compute-0 podman[334220]: 2026-01-31 08:15:00.97402237 +0000 UTC m=+0.050121742 container create c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:01 compute-0 systemd[1]: Started libpod-conmon-c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc.scope.
Jan 31 08:15:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7f203467fe00e91d897d2c261c4d7e90377b1165e8e01d8e0d6b53261d389/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7f203467fe00e91d897d2c261c4d7e90377b1165e8e01d8e0d6b53261d389/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7f203467fe00e91d897d2c261c4d7e90377b1165e8e01d8e0d6b53261d389/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7f203467fe00e91d897d2c261c4d7e90377b1165e8e01d8e0d6b53261d389/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc7f203467fe00e91d897d2c261c4d7e90377b1165e8e01d8e0d6b53261d389/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:01 compute-0 podman[334220]: 2026-01-31 08:15:00.950387684 +0000 UTC m=+0.026487136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:15:01 compute-0 podman[334220]: 2026-01-31 08:15:01.058268813 +0000 UTC m=+0.134368215 container init c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:01 compute-0 podman[334220]: 2026-01-31 08:15:01.069110287 +0000 UTC m=+0.145209679 container start c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:15:01 compute-0 podman[334220]: 2026-01-31 08:15:01.073810552 +0000 UTC m=+0.149909954 container attach c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:15:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 31 08:15:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 31 08:15:01 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 31 08:15:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 840 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 31 08:15:01 compute-0 competent_black[334237]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:15:01 compute-0 competent_black[334237]: --> relative data size: 1.0
Jan 31 08:15:01 compute-0 competent_black[334237]: --> All data devices are unavailable
Jan 31 08:15:01 compute-0 systemd[1]: libpod-c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc.scope: Deactivated successfully.
Jan 31 08:15:01 compute-0 podman[334253]: 2026-01-31 08:15:01.924292365 +0000 UTC m=+0.027017699 container died c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bc7f203467fe00e91d897d2c261c4d7e90377b1165e8e01d8e0d6b53261d389-merged.mount: Deactivated successfully.
Jan 31 08:15:01 compute-0 podman[334253]: 2026-01-31 08:15:01.982061302 +0000 UTC m=+0.084786636 container remove c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:15:01 compute-0 systemd[1]: libpod-conmon-c631f8d15ed76a252bedfdf96fc7e48dc7d9eb8e78e64e81064e8e565a846dcc.scope: Deactivated successfully.
Jan 31 08:15:02 compute-0 sudo[334113]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:02 compute-0 sudo[334269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:02 compute-0 sudo[334269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:02 compute-0 sudo[334269]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:02 compute-0 sudo[334294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:02 compute-0 sudo[334294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:02 compute-0 sudo[334294]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:02 compute-0 sudo[334319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:02 compute-0 sudo[334319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:02 compute-0 sudo[334319]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:02 compute-0 sudo[334344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:15:02 compute-0 sudo[334344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:02 compute-0 nova_compute[247704]: 2026-01-31 08:15:02.274 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:02 compute-0 ceph-mon[74496]: osdmap e305: 3 total, 3 up, 3 in
Jan 31 08:15:02 compute-0 ceph-mon[74496]: pgmap v2388: 305 pgs: 305 active+clean; 840 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 31 08:15:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/6157968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:02.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:02 compute-0 podman[334407]: 2026-01-31 08:15:02.516917304 +0000 UTC m=+0.038059918 container create 8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:15:02 compute-0 systemd[1]: Started libpod-conmon-8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2.scope.
Jan 31 08:15:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:02 compute-0 podman[334407]: 2026-01-31 08:15:02.496730633 +0000 UTC m=+0.017873267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:15:02 compute-0 podman[334407]: 2026-01-31 08:15:02.619473024 +0000 UTC m=+0.140615658 container init 8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:15:02 compute-0 podman[334407]: 2026-01-31 08:15:02.627219692 +0000 UTC m=+0.148362306 container start 8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 08:15:02 compute-0 reverent_fermat[334424]: 167 167
Jan 31 08:15:02 compute-0 systemd[1]: libpod-8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2.scope: Deactivated successfully.
Jan 31 08:15:02 compute-0 podman[334407]: 2026-01-31 08:15:02.631466076 +0000 UTC m=+0.152608710 container attach 8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 08:15:02 compute-0 conmon[334424]: conmon 8dc85ee51252d7ff0dc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2.scope/container/memory.events
Jan 31 08:15:02 compute-0 podman[334407]: 2026-01-31 08:15:02.632619174 +0000 UTC m=+0.153761788 container died 8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermat, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-70e04efa26cf1e50e09647310f1e80c2d22eb33328e8eb8b9db18bbe27378ddf-merged.mount: Deactivated successfully.
Jan 31 08:15:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:02.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:02 compute-0 podman[334407]: 2026-01-31 08:15:02.730713774 +0000 UTC m=+0.251856408 container remove 8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:15:02 compute-0 systemd[1]: libpod-conmon-8dc85ee51252d7ff0dc1f2dc5e787ccbeafcb8dafed6ae28a68263b05eab52e2.scope: Deactivated successfully.
Jan 31 08:15:02 compute-0 podman[334446]: 2026-01-31 08:15:02.853703441 +0000 UTC m=+0.026413675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:15:03 compute-0 podman[334446]: 2026-01-31 08:15:03.043218158 +0000 UTC m=+0.215928332 container create a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:03 compute-0 systemd[1]: Started libpod-conmon-a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575.scope.
Jan 31 08:15:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495958af961830302213f988afa438e502e893ba3d412b111e8e8ff415f1e254/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495958af961830302213f988afa438e502e893ba3d412b111e8e8ff415f1e254/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495958af961830302213f988afa438e502e893ba3d412b111e8e8ff415f1e254/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/495958af961830302213f988afa438e502e893ba3d412b111e8e8ff415f1e254/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:03 compute-0 podman[334446]: 2026-01-31 08:15:03.242121045 +0000 UTC m=+0.414831249 container init a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:15:03 compute-0 podman[334446]: 2026-01-31 08:15:03.249919615 +0000 UTC m=+0.422629789 container start a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:15:03 compute-0 podman[334446]: 2026-01-31 08:15:03.307835016 +0000 UTC m=+0.480545210 container attach a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:15:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 821 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 605 KiB/s rd, 2.2 MiB/s wr, 173 op/s
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]: {
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:     "0": [
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:         {
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "devices": [
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "/dev/loop3"
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             ],
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "lv_name": "ceph_lv0",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "lv_size": "7511998464",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "name": "ceph_lv0",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "tags": {
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.cluster_name": "ceph",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.crush_device_class": "",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.encrypted": "0",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.osd_id": "0",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.type": "block",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:                 "ceph.vdo": "0"
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             },
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "type": "block",
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:             "vg_name": "ceph_vg0"
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:         }
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]:     ]
Jan 31 08:15:03 compute-0 xenodochial_mclean[334463]: }
Jan 31 08:15:04 compute-0 systemd[1]: libpod-a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575.scope: Deactivated successfully.
Jan 31 08:15:04 compute-0 podman[334446]: 2026-01-31 08:15:04.001929419 +0000 UTC m=+1.174639633 container died a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:15:04 compute-0 ovn_controller[149457]: 2026-01-31T08:15:04Z|00543|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:15:04 compute-0 nova_compute[247704]: 2026-01-31 08:15:04.165 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:04 compute-0 ceph-mon[74496]: pgmap v2389: 305 pgs: 305 active+clean; 821 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 605 KiB/s rd, 2.2 MiB/s wr, 173 op/s
Jan 31 08:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-495958af961830302213f988afa438e502e893ba3d412b111e8e8ff415f1e254-merged.mount: Deactivated successfully.
Jan 31 08:15:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:04 compute-0 nova_compute[247704]: 2026-01-31 08:15:04.579 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:04.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:05 compute-0 podman[334446]: 2026-01-31 08:15:05.064326946 +0000 UTC m=+2.237037120 container remove a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:15:05 compute-0 systemd[1]: libpod-conmon-a61e7758b206b24be8d64368de64e05c4cfd6f56b84851dc85453c7b3404b575.scope: Deactivated successfully.
Jan 31 08:15:05 compute-0 sudo[334344]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:05 compute-0 sudo[334485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:05 compute-0 sudo[334485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:05 compute-0 sudo[334485]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:05 compute-0 sudo[334510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:05 compute-0 sudo[334510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:05 compute-0 sudo[334510]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:05 compute-0 sudo[334535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:05 compute-0 sudo[334535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:05 compute-0 sudo[334535]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:05 compute-0 sudo[334560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:15:05 compute-0 sudo[334560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:05 compute-0 sudo[334612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:05 compute-0 sudo[334612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:05 compute-0 sudo[334612]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:05 compute-0 sudo[334659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:05 compute-0 sudo[334659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:05 compute-0 sudo[334659]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:05 compute-0 podman[334651]: 2026-01-31 08:15:05.724046211 +0000 UTC m=+0.092675999 container create d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:15:05 compute-0 podman[334651]: 2026-01-31 08:15:05.66695195 +0000 UTC m=+0.035581758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:15:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 785 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 568 KiB/s rd, 1010 KiB/s wr, 130 op/s
Jan 31 08:15:05 compute-0 systemd[1]: Started libpod-conmon-d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af.scope.
Jan 31 08:15:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:06 compute-0 podman[334651]: 2026-01-31 08:15:06.064472636 +0000 UTC m=+0.433102394 container init d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:06 compute-0 podman[334651]: 2026-01-31 08:15:06.074306465 +0000 UTC m=+0.442936223 container start d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:06 compute-0 systemd[1]: libpod-d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af.scope: Deactivated successfully.
Jan 31 08:15:06 compute-0 magical_yonath[334693]: 167 167
Jan 31 08:15:06 compute-0 conmon[334693]: conmon d5ef68f423ddce2b1048 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af.scope/container/memory.events
Jan 31 08:15:06 compute-0 podman[334651]: 2026-01-31 08:15:06.244237655 +0000 UTC m=+0.612867443 container attach d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:06 compute-0 podman[334651]: 2026-01-31 08:15:06.244909892 +0000 UTC m=+0.613539650 container died d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 08:15:06 compute-0 ceph-mon[74496]: pgmap v2390: 305 pgs: 305 active+clean; 785 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 568 KiB/s rd, 1010 KiB/s wr, 130 op/s
Jan 31 08:15:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-16985255067a980270c916d784036ad33958056c8eba45c20a593bb59b1697f9-merged.mount: Deactivated successfully.
Jan 31 08:15:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:06.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:06 compute-0 podman[334651]: 2026-01-31 08:15:06.762724349 +0000 UTC m=+1.131354107 container remove d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:15:06 compute-0 systemd[1]: libpod-conmon-d5ef68f423ddce2b1048e71d19bf40d599941f463ecd0a41e3639719f9db20af.scope: Deactivated successfully.
Jan 31 08:15:06 compute-0 podman[334719]: 2026-01-31 08:15:06.997643504 +0000 UTC m=+0.120765335 container create a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:15:07 compute-0 podman[334719]: 2026-01-31 08:15:06.909715521 +0000 UTC m=+0.032837392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:15:07 compute-0 systemd[1]: Started libpod-conmon-a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04.scope.
Jan 31 08:15:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb88468a668a5bcce0907a959a0083e4d066441cf45a7de225402488f7db2938/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb88468a668a5bcce0907a959a0083e4d066441cf45a7de225402488f7db2938/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb88468a668a5bcce0907a959a0083e4d066441cf45a7de225402488f7db2938/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb88468a668a5bcce0907a959a0083e4d066441cf45a7de225402488f7db2938/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:07 compute-0 nova_compute[247704]: 2026-01-31 08:15:07.279 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:07 compute-0 podman[334719]: 2026-01-31 08:15:07.30737852 +0000 UTC m=+0.430500401 container init a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:15:07 compute-0 podman[334719]: 2026-01-31 08:15:07.314541734 +0000 UTC m=+0.437663555 container start a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:15:07 compute-0 podman[334719]: 2026-01-31 08:15:07.472819111 +0000 UTC m=+0.595940932 container attach a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 08:15:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 785 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 139 op/s
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.044 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.045 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.045 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "8db08219-6945-4f16-a138-8f5ddff67421-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.045 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.045 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.046 247708 INFO nova.compute.manager [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Terminating instance
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.047 247708 DEBUG nova.compute.manager [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.054 247708 INFO nova.virt.libvirt.driver [-] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Instance destroyed successfully.
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.054 247708 DEBUG nova.objects.instance [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lazy-loading 'resources' on Instance uuid 8db08219-6945-4f16-a138-8f5ddff67421 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.073 247708 DEBUG nova.virt.libvirt.vif [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-902266142',display_name='tempest-Íñstáñcé-213814897',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-902266142',id=134,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:14:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4aa06cf35d8c468fb16884f19dc8ce71',ramdisk_id='',reservation_id='r-xz9s0eu8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-327201738',owner_user_name='tempest-ServersTestJSON-327201738-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:02Z,user_data=None,user_id='5366d122b359489fb9d2bda8d19611a6',uuid=8db08219-6945-4f16-a138-8f5ddff67421,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.074 247708 DEBUG nova.network.os_vif_util [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converting VIF {"id": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "address": "fa:16:3e:53:0a:83", "network": {"id": "b88251fc-7610-460a-ba55-2ed186c6f696", "bridge": "br-int", "label": "tempest-ServersTestJSON-1832820458-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4aa06cf35d8c468fb16884f19dc8ce71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9fa121d-38", "ovs_interfaceid": "c9fa121d-38bf-475f-924e-ce6f8d3af489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.076 247708 DEBUG nova.network.os_vif_util [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0a:83,bridge_name='br-int',has_traffic_filtering=True,id=c9fa121d-38bf-475f-924e-ce6f8d3af489,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9fa121d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.077 247708 DEBUG os_vif [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0a:83,bridge_name='br-int',has_traffic_filtering=True,id=c9fa121d-38bf-475f-924e-ce6f8d3af489,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9fa121d-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.080 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9fa121d-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.082 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.086 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.090 247708 INFO os_vif [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0a:83,bridge_name='br-int',has_traffic_filtering=True,id=c9fa121d-38bf-475f-924e-ce6f8d3af489,network=Network(b88251fc-7610-460a-ba55-2ed186c6f696),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9fa121d-38')
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]: {
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]:         "osd_id": 0,
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]:         "type": "bluestore"
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]:     }
Jan 31 08:15:08 compute-0 wizardly_hawking[334735]: }
Jan 31 08:15:08 compute-0 systemd[1]: libpod-a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04.scope: Deactivated successfully.
Jan 31 08:15:08 compute-0 podman[334719]: 2026-01-31 08:15:08.153443185 +0000 UTC m=+1.276565016 container died a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb88468a668a5bcce0907a959a0083e4d066441cf45a7de225402488f7db2938-merged.mount: Deactivated successfully.
Jan 31 08:15:08 compute-0 podman[334719]: 2026-01-31 08:15:08.238038786 +0000 UTC m=+1.361160577 container remove a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:08 compute-0 systemd[1]: libpod-conmon-a7f33b8155285e83405d39be1ee20d9acf818622db76f02fc09dfaa736aa3d04.scope: Deactivated successfully.
Jan 31 08:15:08 compute-0 sudo[334560]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:15:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:15:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ff9dec00-d1f7-4517-a909-81a9229caa99 does not exist
Jan 31 08:15:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 266ec6fc-bacb-4052-932b-6b35559c961d does not exist
Jan 31 08:15:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c4978f89-9686-4595-a94f-26d98ae48072 does not exist
Jan 31 08:15:08 compute-0 sudo[334786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:08 compute-0 sudo[334786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:08 compute-0 sudo[334786]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:08 compute-0 sudo[334812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:15:08 compute-0 sudo[334812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:08 compute-0 sudo[334812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:08.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.680 247708 INFO nova.virt.libvirt.driver [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Deleting instance files /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421_del
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.681 247708 INFO nova.virt.libvirt.driver [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Deletion of /var/lib/nova/instances/8db08219-6945-4f16-a138-8f5ddff67421_del complete
Jan 31 08:15:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:08.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.783 247708 INFO nova.compute.manager [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.784 247708 DEBUG oslo.service.loopingcall [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.784 247708 DEBUG nova.compute.manager [-] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:15:08 compute-0 nova_compute[247704]: 2026-01-31 08:15:08.784 247708 DEBUG nova.network.neutron [-] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:15:08 compute-0 ceph-mon[74496]: pgmap v2391: 305 pgs: 305 active+clean; 785 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 139 op/s
Jan 31 08:15:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:15:09 compute-0 nova_compute[247704]: 2026-01-31 08:15:09.582 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 759 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 157 op/s
Jan 31 08:15:09 compute-0 podman[334838]: 2026-01-31 08:15:09.917234912 +0000 UTC m=+0.082922982 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:10.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:10 compute-0 ceph-mon[74496]: pgmap v2392: 305 pgs: 305 active+clean; 759 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 157 op/s
Jan 31 08:15:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:11.188 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:11.189 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:11.190 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:11 compute-0 nova_compute[247704]: 2026-01-31 08:15:11.388 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847296.3867292, 8db08219-6945-4f16-a138-8f5ddff67421 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:11 compute-0 nova_compute[247704]: 2026-01-31 08:15:11.389 247708 INFO nova.compute.manager [-] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] VM Stopped (Lifecycle Event)
Jan 31 08:15:11 compute-0 nova_compute[247704]: 2026-01-31 08:15:11.415 247708 DEBUG nova.compute.manager [None req-2863c9b2-07e3-423d-a9d4-34666802cc0e - - - - - -] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 730 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 148 op/s
Jan 31 08:15:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:12.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:12.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:12 compute-0 ceph-mon[74496]: pgmap v2393: 305 pgs: 305 active+clean; 730 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 148 op/s
Jan 31 08:15:13 compute-0 nova_compute[247704]: 2026-01-31 08:15:13.082 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 128 op/s
Jan 31 08:15:13 compute-0 nova_compute[247704]: 2026-01-31 08:15:13.793 247708 DEBUG nova.network.neutron [-] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:13 compute-0 nova_compute[247704]: 2026-01-31 08:15:13.964 247708 INFO nova.compute.manager [-] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Took 5.18 seconds to deallocate network for instance.
Jan 31 08:15:14 compute-0 nova_compute[247704]: 2026-01-31 08:15:14.108 247708 DEBUG nova.compute.manager [req-3cd026a7-0264-468f-8961-c7e30650edc4 req-ad91b571-d489-46bd-863a-9f46f2398f8a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 8db08219-6945-4f16-a138-8f5ddff67421] Received event network-vif-deleted-c9fa121d-38bf-475f-924e-ce6f8d3af489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:14 compute-0 nova_compute[247704]: 2026-01-31 08:15:14.207 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:14 compute-0 nova_compute[247704]: 2026-01-31 08:15:14.207 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:14 compute-0 nova_compute[247704]: 2026-01-31 08:15:14.324 247708 DEBUG oslo_concurrency.processutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:14 compute-0 nova_compute[247704]: 2026-01-31 08:15:14.586 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105481692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:14.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:14 compute-0 nova_compute[247704]: 2026-01-31 08:15:14.750 247708 DEBUG oslo_concurrency.processutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:14 compute-0 nova_compute[247704]: 2026-01-31 08:15:14.757 247708 DEBUG nova.compute.provider_tree [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:14 compute-0 nova_compute[247704]: 2026-01-31 08:15:14.908 247708 DEBUG nova.scheduler.client.report [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:14 compute-0 ceph-mon[74496]: pgmap v2394: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 128 op/s
Jan 31 08:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3105481692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:15 compute-0 nova_compute[247704]: 2026-01-31 08:15:15.070 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:15 compute-0 nova_compute[247704]: 2026-01-31 08:15:15.561 247708 INFO nova.scheduler.client.report [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Deleted allocations for instance 8db08219-6945-4f16-a138-8f5ddff67421
Jan 31 08:15:15 compute-0 nova_compute[247704]: 2026-01-31 08:15:15.765 247708 DEBUG oslo_concurrency.lockutils [None req-f0ca43fe-1915-4f74-ad8f-31598eb7bdf7 5366d122b359489fb9d2bda8d19611a6 4aa06cf35d8c468fb16884f19dc8ce71 - - default default] Lock "8db08219-6945-4f16-a138-8f5ddff67421" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 710 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 201 KiB/s wr, 111 op/s
Jan 31 08:15:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1660655347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:16.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:16.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:16 compute-0 ceph-mon[74496]: pgmap v2395: 305 pgs: 305 active+clean; 710 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 201 KiB/s wr, 111 op/s
Jan 31 08:15:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 710 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 455 KiB/s wr, 159 op/s
Jan 31 08:15:18 compute-0 nova_compute[247704]: 2026-01-31 08:15:18.084 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:18 compute-0 ceph-mon[74496]: pgmap v2396: 305 pgs: 305 active+clean; 710 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 455 KiB/s wr, 159 op/s
Jan 31 08:15:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:18.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:19 compute-0 nova_compute[247704]: 2026-01-31 08:15:19.588 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 142 op/s
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:15:20
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:15:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:20.369 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:20.370 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:15:20 compute-0 nova_compute[247704]: 2026-01-31 08:15:20.419 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:20.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:15:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:15:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:20.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:20 compute-0 ceph-mon[74496]: pgmap v2397: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 142 op/s
Jan 31 08:15:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 679 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 136 op/s
Jan 31 08:15:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:22.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:22.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:22 compute-0 ceph-mon[74496]: pgmap v2398: 305 pgs: 305 active+clean; 679 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 136 op/s
Jan 31 08:15:23 compute-0 nova_compute[247704]: 2026-01-31 08:15:23.086 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 670 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 133 op/s
Jan 31 08:15:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3764220402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:24.373 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:24.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:24 compute-0 nova_compute[247704]: 2026-01-31 08:15:24.590 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:24.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:24 compute-0 nova_compute[247704]: 2026-01-31 08:15:24.898 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:24 compute-0 ceph-mon[74496]: pgmap v2399: 305 pgs: 305 active+clean; 670 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 455 KiB/s wr, 133 op/s
Jan 31 08:15:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/192795781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:25 compute-0 sudo[334889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:25 compute-0 sudo[334889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 653 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.7 MiB/s wr, 165 op/s
Jan 31 08:15:25 compute-0 sudo[334889]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:25 compute-0 sudo[334919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:25 compute-0 sudo[334919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:25 compute-0 sudo[334919]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:25 compute-0 podman[334913]: 2026-01-31 08:15:25.911068973 +0000 UTC m=+0.123414788 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 31 08:15:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3434580900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:26.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:26.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:26 compute-0 ceph-mon[74496]: pgmap v2400: 305 pgs: 305 active+clean; 653 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.7 MiB/s wr, 165 op/s
Jan 31 08:15:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.1 MiB/s wr, 226 op/s
Jan 31 08:15:27 compute-0 ceph-mon[74496]: pgmap v2401: 305 pgs: 305 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.1 MiB/s wr, 226 op/s
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.071 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.072 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.089 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.164 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.377 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.378 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.390 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.390 247708 INFO nova.compute.claims [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:15:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:28.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:28 compute-0 nova_compute[247704]: 2026-01-31 08:15:28.711 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:28.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 31 08:15:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 31 08:15:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 31 08:15:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1775546745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.166 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.174 247708 DEBUG nova.compute.provider_tree [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.200 247708 DEBUG nova.scheduler.client.report [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.239 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.240 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.295 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.295 247708 DEBUG nova.network.neutron [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.320 247708 INFO nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.340 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.462 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.464 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.465 247708 INFO nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Creating image(s)
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.503 247708 DEBUG nova.storage.rbd_utils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.538 247708 DEBUG nova.storage.rbd_utils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.569 247708 DEBUG nova.storage.rbd_utils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.573 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.592 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.594 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.596 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.597 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.598 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.635 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.636 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.637 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.637 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.666 247708 DEBUG nova.storage.rbd_utils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.670 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 743cf933-3139-4c25-9c75-b45150274ae3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:29 compute-0 nova_compute[247704]: 2026-01-31 08:15:29.693 247708 DEBUG nova.policy [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd7d9a44201d548aba1e1654e136ddd06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:15:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 676 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.7 MiB/s wr, 212 op/s
Jan 31 08:15:30 compute-0 ceph-mon[74496]: osdmap e306: 3 total, 3 up, 3 in
Jan 31 08:15:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1775546745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:30 compute-0 ceph-mon[74496]: pgmap v2403: 305 pgs: 305 active+clean; 676 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.7 MiB/s wr, 212 op/s
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.013 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 743cf933-3139-4c25-9c75-b45150274ae3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.110 247708 DEBUG nova.storage.rbd_utils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] resizing rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.221 247708 DEBUG nova.objects.instance [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'migration_context' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.252 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.252 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Ensure instance console log exists: /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.253 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.253 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.254 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:30 compute-0 ovn_controller[149457]: 2026-01-31T08:15:30Z|00544|binding|INFO|Releasing lport 4bb3ff19-f70b-4c8d-a829-66ff18233b61 from this chassis (sb_readonly=0)
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.370 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:30.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:30 compute-0 nova_compute[247704]: 2026-01-31 08:15:30.672 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:31 compute-0 nova_compute[247704]: 2026-01-31 08:15:31.406 247708 DEBUG nova.network.neutron [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Successfully created port: e33f46e9-c188-4c53-b863-93620a4c3452 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:15:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 665 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.0 MiB/s wr, 257 op/s
Jan 31 08:15:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:32.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:32.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:32 compute-0 ceph-mon[74496]: pgmap v2404: 305 pgs: 305 active+clean; 665 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.0 MiB/s wr, 257 op/s
Jan 31 08:15:33 compute-0 nova_compute[247704]: 2026-01-31 08:15:33.090 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 661 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 6.2 MiB/s wr, 304 op/s
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.153 247708 DEBUG nova.network.neutron [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Successfully updated port: e33f46e9-c188-4c53-b863-93620a4c3452 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.224 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.225 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.225 247708 DEBUG nova.network.neutron [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.398 247708 DEBUG nova.compute.manager [req-3d45c3dd-39cb-40bf-b917-9b0703b9009a req-995a5cdc-8e15-43ed-b157-9c9e26a85f40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-changed-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.399 247708 DEBUG nova.compute.manager [req-3d45c3dd-39cb-40bf-b917-9b0703b9009a req-995a5cdc-8e15-43ed-b157-9c9e26a85f40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Refreshing instance network info cache due to event network-changed-e33f46e9-c188-4c53-b863-93620a4c3452. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.399 247708 DEBUG oslo_concurrency.lockutils [req-3d45c3dd-39cb-40bf-b917-9b0703b9009a req-995a5cdc-8e15-43ed-b157-9c9e26a85f40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.504 247708 DEBUG nova.network.neutron [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.595 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.626 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.627 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.627 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.627 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:15:34 compute-0 nova_compute[247704]: 2026-01-31 08:15:34.628 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:34 compute-0 ceph-mon[74496]: pgmap v2405: 305 pgs: 305 active+clean; 661 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 6.2 MiB/s wr, 304 op/s
Jan 31 08:15:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2815147355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.111 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011551173110924995 of space, bias 1.0, pg target 3.4653519332774985 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002699042085427136 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005858829738972641 of space, bias 1.0, pg target 1.7400724324748744 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.526 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.527 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.718 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.720 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4206MB free_disk=20.74584197998047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.720 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.721 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 675 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.3 MiB/s wr, 267 op/s
Jan 31 08:15:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 31 08:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2815147355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 31 08:15:35 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.998 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 53a5c321-1278-4df4-9fb0-feb465508681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.998 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 743cf933-3139-4c25-9c75-b45150274ae3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.999 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:15:35 compute-0 nova_compute[247704]: 2026-01-31 08:15:35.999 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:15:36 compute-0 nova_compute[247704]: 2026-01-31 08:15:36.105 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 31 08:15:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 31 08:15:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 31 08:15:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:36.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720226546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:36 compute-0 nova_compute[247704]: 2026-01-31 08:15:36.618 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:36 compute-0 nova_compute[247704]: 2026-01-31 08:15:36.625 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:36 compute-0 nova_compute[247704]: 2026-01-31 08:15:36.669 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:36 compute-0 nova_compute[247704]: 2026-01-31 08:15:36.705 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:15:36 compute-0 nova_compute[247704]: 2026-01-31 08:15:36.705 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:36 compute-0 ceph-mon[74496]: pgmap v2406: 305 pgs: 305 active+clean; 675 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.3 MiB/s wr, 267 op/s
Jan 31 08:15:36 compute-0 ceph-mon[74496]: osdmap e307: 3 total, 3 up, 3 in
Jan 31 08:15:36 compute-0 ceph-mon[74496]: osdmap e308: 3 total, 3 up, 3 in
Jan 31 08:15:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1720226546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:37 compute-0 nova_compute[247704]: 2026-01-31 08:15:37.706 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:37 compute-0 nova_compute[247704]: 2026-01-31 08:15:37.707 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:15:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 147 op/s
Jan 31 08:15:37 compute-0 nova_compute[247704]: 2026-01-31 08:15:37.872 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:15:37 compute-0 nova_compute[247704]: 2026-01-31 08:15:37.873 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/967709114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:37 compute-0 nova_compute[247704]: 2026-01-31 08:15:37.970 247708 DEBUG nova.network.neutron [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.016 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.017 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance network_info: |[{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.018 247708 DEBUG oslo_concurrency.lockutils [req-3d45c3dd-39cb-40bf-b917-9b0703b9009a req-995a5cdc-8e15-43ed-b157-9c9e26a85f40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.019 247708 DEBUG nova.network.neutron [req-3d45c3dd-39cb-40bf-b917-9b0703b9009a req-995a5cdc-8e15-43ed-b157-9c9e26a85f40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Refreshing network info cache for port e33f46e9-c188-4c53-b863-93620a4c3452 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.025 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Start _get_guest_xml network_info=[{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.033 247708 WARNING nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.038 247708 DEBUG nova.virt.libvirt.host [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.039 247708 DEBUG nova.virt.libvirt.host [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.043 247708 DEBUG nova.virt.libvirt.host [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.044 247708 DEBUG nova.virt.libvirt.host [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.046 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.047 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.048 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.049 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.049 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.050 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.050 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.051 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.051 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.052 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.053 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.053 247708 DEBUG nova.virt.hardware [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.059 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.095 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:15:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3728581685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:38.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.523 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.548 247708 DEBUG nova.storage.rbd_utils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.552 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:38.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:38 compute-0 ceph-mon[74496]: pgmap v2409: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 147 op/s
Jan 31 08:15:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3396587203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3728581685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:15:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615842584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:38 compute-0 nova_compute[247704]: 2026-01-31 08:15:38.998 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.000 247708 DEBUG nova.virt.libvirt.vif [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-107442355',display_name='tempest-ServerStableDeviceRescueTest-server-107442355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-107442355',id=135,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbP3gf7LmroDDYk0USWIG5AuQO84YQj17XtBS8JALIesJpm7oyiz9kFGiUhk5vHZ+8lJiStdnRkjJm79czkwPbDOitiF7fhmca3/rEfJLTDNBTZ+uam26C3TTY99imMvg==',key_name='tempest-keypair-719720114',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-401rbz0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:15:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=743cf933-3139-4c25-9c75-b45150274ae3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.001 247708 DEBUG nova.network.os_vif_util [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.002 247708 DEBUG nova.network.os_vif_util [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:71:3b,bridge_name='br-int',has_traffic_filtering=True,id=e33f46e9-c188-4c53-b863-93620a4c3452,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape33f46e9-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.003 247708 DEBUG nova.objects.instance [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'pci_devices' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.044 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <uuid>743cf933-3139-4c25-9c75-b45150274ae3</uuid>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <name>instance-00000087</name>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-107442355</nova:name>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:15:38</nova:creationTime>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <nova:user uuid="d7d9a44201d548aba1e1654e136ddd06">tempest-ServerStableDeviceRescueTest-569420416-project-member</nova:user>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <nova:project uuid="1633c84ea1bf46b080aaafd30bbcf25f">tempest-ServerStableDeviceRescueTest-569420416</nova:project>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <nova:port uuid="e33f46e9-c188-4c53-b863-93620a4c3452">
Jan 31 08:15:39 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <system>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <entry name="serial">743cf933-3139-4c25-9c75-b45150274ae3</entry>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <entry name="uuid">743cf933-3139-4c25-9c75-b45150274ae3</entry>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </system>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <os>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   </os>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <features>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   </features>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/743cf933-3139-4c25-9c75-b45150274ae3_disk">
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       </source>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/743cf933-3139-4c25-9c75-b45150274ae3_disk.config">
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       </source>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:15:39 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:79:71:3b"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <target dev="tape33f46e9-c1"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/console.log" append="off"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <video>
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </video>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:15:39 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:15:39 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:15:39 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:15:39 compute-0 nova_compute[247704]: </domain>
Jan 31 08:15:39 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.045 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Preparing to wait for external event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.046 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.046 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.046 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.048 247708 DEBUG nova.virt.libvirt.vif [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-107442355',display_name='tempest-ServerStableDeviceRescueTest-server-107442355',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-107442355',id=135,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbP3gf7LmroDDYk0USWIG5AuQO84YQj17XtBS8JALIesJpm7oyiz9kFGiUhk5vHZ+8lJiStdnRkjJm79czkwPbDOitiF7fhmca3/rEfJLTDNBTZ+uam26C3TTY99imMvg==',key_name='tempest-keypair-719720114',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-401rbz0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:15:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=743cf933-3139-4c25-9c75-b45150274ae3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.048 247708 DEBUG nova.network.os_vif_util [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.049 247708 DEBUG nova.network.os_vif_util [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:71:3b,bridge_name='br-int',has_traffic_filtering=True,id=e33f46e9-c188-4c53-b863-93620a4c3452,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape33f46e9-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.050 247708 DEBUG os_vif [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:71:3b,bridge_name='br-int',has_traffic_filtering=True,id=e33f46e9-c188-4c53-b863-93620a4c3452,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape33f46e9-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.052 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.052 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.057 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.057 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape33f46e9-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.058 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape33f46e9-c1, col_values=(('external_ids', {'iface-id': 'e33f46e9-c188-4c53-b863-93620a4c3452', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:71:3b', 'vm-uuid': '743cf933-3139-4c25-9c75-b45150274ae3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.060 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:39 compute-0 NetworkManager[49108]: <info>  [1769847339.0622] manager: (tape33f46e9-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.069 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.070 247708 INFO os_vif [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:71:3b,bridge_name='br-int',has_traffic_filtering=True,id=e33f46e9-c188-4c53-b863-93620a4c3452,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape33f46e9-c1')
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.155 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.156 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.156 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:79:71:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.156 247708 INFO nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Using config drive
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.183 247708 DEBUG nova.storage.rbd_utils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.597 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 643 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 111 op/s
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.838 247708 INFO nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Creating config drive at /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.847 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptci6sel7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1615842584' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1058907294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:39 compute-0 nova_compute[247704]: 2026-01-31 08:15:39.975 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptci6sel7" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.017 247708 DEBUG nova.storage.rbd_utils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.022 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config 743cf933-3139-4c25-9c75-b45150274ae3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.211 247708 DEBUG oslo_concurrency.processutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config 743cf933-3139-4c25-9c75-b45150274ae3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.212 247708 INFO nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Deleting local config drive /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config because it was imported into RBD.
Jan 31 08:15:40 compute-0 kernel: tape33f46e9-c1: entered promiscuous mode
Jan 31 08:15:40 compute-0 NetworkManager[49108]: <info>  [1769847340.2550] manager: (tape33f46e9-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Jan 31 08:15:40 compute-0 ovn_controller[149457]: 2026-01-31T08:15:40Z|00545|binding|INFO|Claiming lport e33f46e9-c188-4c53-b863-93620a4c3452 for this chassis.
Jan 31 08:15:40 compute-0 ovn_controller[149457]: 2026-01-31T08:15:40Z|00546|binding|INFO|e33f46e9-c188-4c53-b863-93620a4c3452: Claiming fa:16:3e:79:71:3b 10.100.0.4
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.257 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-0 ovn_controller[149457]: 2026-01-31T08:15:40Z|00547|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 ovn-installed in OVS
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.267 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-0 ovn_controller[149457]: 2026-01-31T08:15:40Z|00548|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 up in Southbound
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.271 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:71:3b 10.100.0.4'], port_security=['fa:16:3e:79:71:3b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '743cf933-3139-4c25-9c75-b45150274ae3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11deed3f-7193-4325-affb-77a66beb8424', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e33f46e9-c188-4c53-b863-93620a4c3452) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.272 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e33f46e9-c188-4c53-b863-93620a4c3452 in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.274 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.285 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[214c4474-8e79-4d3f-b16d-71b99b1b5ee9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.285 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap088d6992-61 in ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.287 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap088d6992-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.287 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb67b01-f8af-4734-ae7b-7e4a3b32151c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.288 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7c514e-f19b-4063-99fb-399e1c050554]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.295 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[14e19f6a-8f48-4724-918e-efbb8ba371d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 systemd-machined[214448]: New machine qemu-57-instance-00000087.
Jan 31 08:15:40 compute-0 systemd-udevd[335353]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:15:40 compute-0 NetworkManager[49108]: <info>  [1769847340.3142] device (tape33f46e9-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:15:40 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000087.
Jan 31 08:15:40 compute-0 NetworkManager[49108]: <info>  [1769847340.3153] device (tape33f46e9-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.317 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5cf80b-9b6d-4f9c-b507-d8a66cc5002b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.342 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[71e04c60-0e92-4ee1-9dbd-c1cc4d721b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 NetworkManager[49108]: <info>  [1769847340.3483] manager: (tap088d6992-60): new Veth device (/org/freedesktop/NetworkManager/Devices/249)
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.347 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[84542c62-e777-4a72-b42b-b55ec28e322c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 podman[335337]: 2026-01-31 08:15:40.35385858 +0000 UTC m=+0.072952479 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.376 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5519de1b-ea88-4240-96f0-73dd69384630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.378 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[cfad42f9-31ca-4edd-9a73-4bc32e21fe98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 NetworkManager[49108]: <info>  [1769847340.3938] device (tap088d6992-60): carrier: link connected
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.398 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[61460929-82ca-4107-b6a0-83495cee5211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.413 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea3af7d-d953-46e0-89d5-a61b1882c24c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758301, 'reachable_time': 38522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335389, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.425 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fc498a92-8fb0-4053-b300-6e97ba4d424d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:87bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758301, 'tstamp': 758301}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335390, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.440 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d4559c1e-1bff-43ef-ba65-6a2dffa82762]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758301, 'reachable_time': 38522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335391, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.462 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c7521f7d-1fb2-410a-97d2-6d4921849633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:40.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.518 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[faf73aa9-c245-4570-ba01-3317e7bc0a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.520 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.520 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.520 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.522 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-0 NetworkManager[49108]: <info>  [1769847340.5236] manager: (tap088d6992-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Jan 31 08:15:40 compute-0 kernel: tap088d6992-60: entered promiscuous mode
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.526 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.527 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.528 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.529 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-0 ovn_controller[149457]: 2026-01-31T08:15:40Z|00549|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.530 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.535 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.534 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2db7c75b-288b-49e3-b0e4-3599826006cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.535 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:15:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:40.536 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'env', 'PROCESS_TAG=haproxy-088d6992-6ba6-4719-a977-b3d306740157', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/088d6992-6ba6-4719-a977-b3d306740157.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.743 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847340.7432008, 743cf933-3139-4c25-9c75-b45150274ae3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.744 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] VM Started (Lifecycle Event)
Jan 31 08:15:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:40.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.805 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.809 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847340.7437406, 743cf933-3139-4c25-9c75-b45150274ae3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.809 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] VM Paused (Lifecycle Event)
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.847 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.851 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:15:40 compute-0 nova_compute[247704]: 2026-01-31 08:15:40.921 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:15:40 compute-0 podman[335465]: 2026-01-31 08:15:40.944144583 +0000 UTC m=+0.068078930 container create a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:15:40 compute-0 ceph-mon[74496]: pgmap v2410: 305 pgs: 305 active+clean; 643 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 111 op/s
Jan 31 08:15:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1660631670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:40 compute-0 systemd[1]: Started libpod-conmon-a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393.scope.
Jan 31 08:15:40 compute-0 podman[335465]: 2026-01-31 08:15:40.90588066 +0000 UTC m=+0.029815077 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:15:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34e850c5e94334fc8ae832d71ad49698da23163986d2c25c8cf4bf0f1a416921/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:41 compute-0 podman[335465]: 2026-01-31 08:15:41.042716115 +0000 UTC m=+0.166650542 container init a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:15:41 compute-0 podman[335465]: 2026-01-31 08:15:41.048786932 +0000 UTC m=+0.172721309 container start a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:15:41 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[335481]: [NOTICE]   (335485) : New worker (335487) forked
Jan 31 08:15:41 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[335481]: [NOTICE]   (335485) : Loading success.
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.106 247708 DEBUG nova.network.neutron [req-3d45c3dd-39cb-40bf-b917-9b0703b9009a req-995a5cdc-8e15-43ed-b157-9c9e26a85f40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updated VIF entry in instance network info cache for port e33f46e9-c188-4c53-b863-93620a4c3452. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.107 247708 DEBUG nova.network.neutron [req-3d45c3dd-39cb-40bf-b917-9b0703b9009a req-995a5cdc-8e15-43ed-b157-9c9e26a85f40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.136 247708 DEBUG oslo_concurrency.lockutils [req-3d45c3dd-39cb-40bf-b917-9b0703b9009a req-995a5cdc-8e15-43ed-b157-9c9e26a85f40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 650 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 1.7 MiB/s wr, 94 op/s
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.960 247708 DEBUG nova.compute.manager [req-457ea193-97b6-49af-ac2c-353f5705da94 req-89592d41-7c99-44de-a9bc-eba64c3ab455 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.961 247708 DEBUG oslo_concurrency.lockutils [req-457ea193-97b6-49af-ac2c-353f5705da94 req-89592d41-7c99-44de-a9bc-eba64c3ab455 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.961 247708 DEBUG oslo_concurrency.lockutils [req-457ea193-97b6-49af-ac2c-353f5705da94 req-89592d41-7c99-44de-a9bc-eba64c3ab455 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.961 247708 DEBUG oslo_concurrency.lockutils [req-457ea193-97b6-49af-ac2c-353f5705da94 req-89592d41-7c99-44de-a9bc-eba64c3ab455 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.962 247708 DEBUG nova.compute.manager [req-457ea193-97b6-49af-ac2c-353f5705da94 req-89592d41-7c99-44de-a9bc-eba64c3ab455 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Processing event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.963 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.967 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847341.9674213, 743cf933-3139-4c25-9c75-b45150274ae3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.968 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] VM Resumed (Lifecycle Event)
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.969 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.973 247708 INFO nova.virt.libvirt.driver [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance spawned successfully.
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.973 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:15:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3699285779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:41 compute-0 ceph-mon[74496]: pgmap v2411: 305 pgs: 305 active+clean; 650 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 1.7 MiB/s wr, 94 op/s
Jan 31 08:15:41 compute-0 nova_compute[247704]: 2026-01-31 08:15:41.994 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.004 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.014 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.015 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.016 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.016 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.017 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.018 247708 DEBUG nova.virt.libvirt.driver [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.051 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.108 247708 INFO nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Took 12.65 seconds to spawn the instance on the hypervisor.
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.109 247708 DEBUG nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.314 247708 INFO nova.compute.manager [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Took 14.02 seconds to build instance.
Jan 31 08:15:42 compute-0 nova_compute[247704]: 2026-01-31 08:15:42.340 247708 DEBUG oslo_concurrency.lockutils [None req-80eab9b0-fe44-48c5-afb8-865c78d32ee8 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:42.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:42.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 619 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 629 KiB/s rd, 933 KiB/s wr, 122 op/s
Jan 31 08:15:43 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:15:44 compute-0 nova_compute[247704]: 2026-01-31 08:15:44.062 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:44 compute-0 nova_compute[247704]: 2026-01-31 08:15:44.167 247708 DEBUG nova.compute.manager [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:44 compute-0 nova_compute[247704]: 2026-01-31 08:15:44.168 247708 DEBUG oslo_concurrency.lockutils [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:44 compute-0 nova_compute[247704]: 2026-01-31 08:15:44.169 247708 DEBUG oslo_concurrency.lockutils [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:44 compute-0 nova_compute[247704]: 2026-01-31 08:15:44.170 247708 DEBUG oslo_concurrency.lockutils [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:44 compute-0 nova_compute[247704]: 2026-01-31 08:15:44.170 247708 DEBUG nova.compute.manager [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:44 compute-0 nova_compute[247704]: 2026-01-31 08:15:44.171 247708 WARNING nova.compute.manager [req-81bddabf-08ce-4d07-a008-dde43ff1cbf8 req-9f5b7756-fad1-40ad-bbcd-0aa86f29689f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state active and task_state None.
Jan 31 08:15:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:44 compute-0 nova_compute[247704]: 2026-01-31 08:15:44.603 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:44.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:44 compute-0 ceph-mon[74496]: pgmap v2412: 305 pgs: 305 active+clean; 619 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 629 KiB/s rd, 933 KiB/s wr, 122 op/s
Jan 31 08:15:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1582960450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 618 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Jan 31 08:15:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1263486483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:15:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1263486483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:15:45 compute-0 sudo[335499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:45 compute-0 sudo[335499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:45 compute-0 sudo[335499]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:45 compute-0 sudo[335524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:15:45 compute-0 sudo[335524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:45 compute-0 sudo[335524]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 31 08:15:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 31 08:15:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 31 08:15:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:46.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:46 compute-0 ceph-mon[74496]: pgmap v2413: 305 pgs: 305 active+clean; 618 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Jan 31 08:15:46 compute-0 ceph-mon[74496]: osdmap e309: 3 total, 3 up, 3 in
Jan 31 08:15:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2993535437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:46 compute-0 nova_compute[247704]: 2026-01-31 08:15:46.925 247708 DEBUG nova.compute.manager [req-6ff08e6e-ef09-4e04-ad79-9a47948a6c30 req-0a34eddb-37d9-47cc-838b-d692f02487b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-changed-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:46 compute-0 nova_compute[247704]: 2026-01-31 08:15:46.925 247708 DEBUG nova.compute.manager [req-6ff08e6e-ef09-4e04-ad79-9a47948a6c30 req-0a34eddb-37d9-47cc-838b-d692f02487b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Refreshing instance network info cache due to event network-changed-e33f46e9-c188-4c53-b863-93620a4c3452. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:15:46 compute-0 nova_compute[247704]: 2026-01-31 08:15:46.926 247708 DEBUG oslo_concurrency.lockutils [req-6ff08e6e-ef09-4e04-ad79-9a47948a6c30 req-0a34eddb-37d9-47cc-838b-d692f02487b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:15:46 compute-0 nova_compute[247704]: 2026-01-31 08:15:46.926 247708 DEBUG oslo_concurrency.lockutils [req-6ff08e6e-ef09-4e04-ad79-9a47948a6c30 req-0a34eddb-37d9-47cc-838b-d692f02487b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:15:46 compute-0 nova_compute[247704]: 2026-01-31 08:15:46.926 247708 DEBUG nova.network.neutron [req-6ff08e6e-ef09-4e04-ad79-9a47948a6c30 req-0a34eddb-37d9-47cc-838b-d692f02487b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Refreshing network info cache for port e33f46e9-c188-4c53-b863-93620a4c3452 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:15:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 596 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 243 op/s
Jan 31 08:15:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/804013123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:15:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/804013123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:15:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/923116940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:15:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:48.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:48.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:48 compute-0 nova_compute[247704]: 2026-01-31 08:15:48.808 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "53a5c321-1278-4df4-9fb0-feb465508681" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:48 compute-0 nova_compute[247704]: 2026-01-31 08:15:48.810 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:48 compute-0 nova_compute[247704]: 2026-01-31 08:15:48.811 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "53a5c321-1278-4df4-9fb0-feb465508681-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:48 compute-0 nova_compute[247704]: 2026-01-31 08:15:48.811 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:48 compute-0 nova_compute[247704]: 2026-01-31 08:15:48.811 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:48 compute-0 nova_compute[247704]: 2026-01-31 08:15:48.813 247708 INFO nova.compute.manager [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Terminating instance
Jan 31 08:15:48 compute-0 nova_compute[247704]: 2026-01-31 08:15:48.813 247708 DEBUG nova.compute.manager [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:15:48 compute-0 kernel: tap86453c5b-b1 (unregistering): left promiscuous mode
Jan 31 08:15:48 compute-0 NetworkManager[49108]: <info>  [1769847348.8764] device (tap86453c5b-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:15:48 compute-0 ovn_controller[149457]: 2026-01-31T08:15:48Z|00550|binding|INFO|Releasing lport 86453c5b-b126-4762-81e9-44c33b078bfc from this chassis (sb_readonly=0)
Jan 31 08:15:48 compute-0 ovn_controller[149457]: 2026-01-31T08:15:48Z|00551|binding|INFO|Setting lport 86453c5b-b126-4762-81e9-44c33b078bfc down in Southbound
Jan 31 08:15:48 compute-0 ovn_controller[149457]: 2026-01-31T08:15:48Z|00552|binding|INFO|Removing iface tap86453c5b-b1 ovn-installed in OVS
Jan 31 08:15:48 compute-0 nova_compute[247704]: 2026-01-31 08:15:48.887 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:48.894 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:2f:9f 10.100.0.7'], port_security=['fa:16:3e:42:2f:9f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '53a5c321-1278-4df4-9fb0-feb465508681', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3ddadeb950a490db5c99da98a32c9ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cef7bb84-6ec0-48b7-8775-111e40762c53', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17e596e7-33b3-44a6-9cbf-f9eacfd974b4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=86453c5b-b126-4762-81e9-44c33b078bfc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:15:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:48.896 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 86453c5b-b126-4762-81e9-44c33b078bfc in datapath e8014d6b-23e1-41ef-b5e2-3d770d302e72 unbound from our chassis
Jan 31 08:15:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:48.897 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8014d6b-23e1-41ef-b5e2-3d770d302e72, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:15:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:48.898 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d6b195e7-494c-4035-9d59-5b3efbc44cba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:48.900 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 namespace which is not needed anymore
Jan 31 08:15:48 compute-0 ceph-mon[74496]: pgmap v2415: 305 pgs: 305 active+clean; 596 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 243 op/s
Jan 31 08:15:48 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000072.scope: Deactivated successfully.
Jan 31 08:15:48 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000072.scope: Consumed 30.456s CPU time.
Jan 31 08:15:48 compute-0 systemd-machined[214448]: Machine qemu-47-instance-00000072 terminated.
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.006 247708 DEBUG nova.network.neutron [req-6ff08e6e-ef09-4e04-ad79-9a47948a6c30 req-0a34eddb-37d9-47cc-838b-d692f02487b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updated VIF entry in instance network info cache for port e33f46e9-c188-4c53-b863-93620a4c3452. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.007 247708 DEBUG nova.network.neutron [req-6ff08e6e-ef09-4e04-ad79-9a47948a6c30 req-0a34eddb-37d9-47cc-838b-d692f02487b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:49 compute-0 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[324068]: [NOTICE]   (324174) : haproxy version is 2.8.14-c23fe91
Jan 31 08:15:49 compute-0 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[324068]: [NOTICE]   (324174) : path to executable is /usr/sbin/haproxy
Jan 31 08:15:49 compute-0 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[324068]: [WARNING]  (324174) : Exiting Master process...
Jan 31 08:15:49 compute-0 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[324068]: [WARNING]  (324174) : Exiting Master process...
Jan 31 08:15:49 compute-0 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[324068]: [ALERT]    (324174) : Current worker (324176) exited with code 143 (Terminated)
Jan 31 08:15:49 compute-0 neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72[324068]: [WARNING]  (324174) : All workers exited. Exiting... (0)
Jan 31 08:15:49 compute-0 systemd[1]: libpod-d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725.scope: Deactivated successfully.
Jan 31 08:15:49 compute-0 podman[335573]: 2026-01-31 08:15:49.022171865 +0000 UTC m=+0.044721461 container died d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.037 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.042 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725-userdata-shm.mount: Deactivated successfully.
Jan 31 08:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-297c3994664edfbd55c78c2ea4265a198135f89872f84c8dba35506fa9f76182-merged.mount: Deactivated successfully.
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.059 247708 INFO nova.virt.libvirt.driver [-] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Instance destroyed successfully.
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.061 247708 DEBUG nova.objects.instance [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lazy-loading 'resources' on Instance uuid 53a5c321-1278-4df4-9fb0-feb465508681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.064 247708 DEBUG oslo_concurrency.lockutils [req-6ff08e6e-ef09-4e04-ad79-9a47948a6c30 req-0a34eddb-37d9-47cc-838b-d692f02487b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:15:49 compute-0 podman[335573]: 2026-01-31 08:15:49.067833528 +0000 UTC m=+0.090383064 container cleanup d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:15:49 compute-0 systemd[1]: libpod-conmon-d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725.scope: Deactivated successfully.
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.091 247708 DEBUG nova.virt.libvirt.vif [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:08:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2046159',display_name='tempest-ServerActionsTestOtherB-server-2046159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2046159',id=114,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3ddadeb950a490db5c99da98a32c9ec',ramdisk_id='',reservation_id='r-138tdv2g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-2012907318',owner_user_name='tempest-ServerActionsTestOtherB-2012907318-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:09:10Z,user_data=None,user_id='18aee9d81d404f77ac81cde538f140d8',uuid=53a5c321-1278-4df4-9fb0-feb465508681,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.092 247708 DEBUG nova.network.os_vif_util [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converting VIF {"id": "86453c5b-b126-4762-81e9-44c33b078bfc", "address": "fa:16:3e:42:2f:9f", "network": {"id": "e8014d6b-23e1-41ef-b5e2-3d770d302e72", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1220738298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3ddadeb950a490db5c99da98a32c9ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86453c5b-b1", "ovs_interfaceid": "86453c5b-b126-4762-81e9-44c33b078bfc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.093 247708 DEBUG nova.network.os_vif_util [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:2f:9f,bridge_name='br-int',has_traffic_filtering=True,id=86453c5b-b126-4762-81e9-44c33b078bfc,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86453c5b-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.093 247708 DEBUG os_vif [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:2f:9f,bridge_name='br-int',has_traffic_filtering=True,id=86453c5b-b126-4762-81e9-44c33b078bfc,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86453c5b-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.095 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.095 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86453c5b-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.096 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.098 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.101 247708 INFO os_vif [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:2f:9f,bridge_name='br-int',has_traffic_filtering=True,id=86453c5b-b126-4762-81e9-44c33b078bfc,network=Network(e8014d6b-23e1-41ef-b5e2-3d770d302e72),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86453c5b-b1')
Jan 31 08:15:49 compute-0 podman[335611]: 2026-01-31 08:15:49.120634994 +0000 UTC m=+0.036935271 container remove d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.124 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c69fdf60-18b9-43ee-9bcf-b6b846374dfd]: (4, ('Sat Jan 31 08:15:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725)\nd5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725\nSat Jan 31 08:15:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 (d5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725)\nd5ee085f1cb7b5c9ebc622bc9478515456eb4fa978b9cfa9a8fee9a691813725\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.125 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a8df3f7a-0a4c-4810-be8f-3c7708ad2348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.126 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8014d6b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.128 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 kernel: tape8014d6b-20: left promiscuous mode
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.135 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.138 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ee52ca02-a145-4300-9f70-ba94e624d5f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.154 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[969209f2-a8e0-4cce-add6-b4a23ba4f109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.155 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[19cccaac-3a31-41f1-be33-68d5959286e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.166 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b20b8f63-7336-48b1-b1a6-f56a4110b27a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719144, 'reachable_time': 27608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335644, 'error': None, 'target': 'ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.168 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8014d6b-23e1-41ef-b5e2-3d770d302e72 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:15:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:15:49.168 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[4b764c93-cda9-499e-b014-e791ef6d1fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:15:49 compute-0 systemd[1]: run-netns-ovnmeta\x2de8014d6b\x2d23e1\x2d41ef\x2db5e2\x2d3d770d302e72.mount: Deactivated successfully.
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.554 247708 INFO nova.virt.libvirt.driver [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Deleting instance files /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681_del
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.555 247708 INFO nova.virt.libvirt.driver [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Deletion of /var/lib/nova/instances/53a5c321-1278-4df4-9fb0-feb465508681_del complete
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.603 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.675 247708 INFO nova.compute.manager [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Took 0.86 seconds to destroy the instance on the hypervisor.
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.676 247708 DEBUG oslo.service.loopingcall [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.676 247708 DEBUG nova.compute.manager [-] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:15:49 compute-0 nova_compute[247704]: 2026-01-31 08:15:49.677 247708 DEBUG nova.network.neutron [-] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:15:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 574 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Jan 31 08:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:50.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:50.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:50 compute-0 ceph-mon[74496]: pgmap v2416: 305 pgs: 305 active+clean; 574 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.208 247708 DEBUG nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received event network-vif-unplugged-86453c5b-b126-4762-81e9-44c33b078bfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.209 247708 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "53a5c321-1278-4df4-9fb0-feb465508681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.210 247708 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.211 247708 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.211 247708 DEBUG nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] No waiting events found dispatching network-vif-unplugged-86453c5b-b126-4762-81e9-44c33b078bfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.212 247708 DEBUG nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received event network-vif-unplugged-86453c5b-b126-4762-81e9-44c33b078bfc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.212 247708 DEBUG nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received event network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.213 247708 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "53a5c321-1278-4df4-9fb0-feb465508681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.214 247708 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.214 247708 DEBUG oslo_concurrency.lockutils [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.215 247708 DEBUG nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] No waiting events found dispatching network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.215 247708 WARNING nova.compute.manager [req-df11d218-65f0-4bd5-ab7d-6a73826a963b req-7595314a-4fe3-4563-9ed9-ca4a575a0c42 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received unexpected event network-vif-plugged-86453c5b-b126-4762-81e9-44c33b078bfc for instance with vm_state active and task_state deleting.
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.295 247708 DEBUG nova.network.neutron [-] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.334 247708 INFO nova.compute.manager [-] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Took 1.66 seconds to deallocate network for instance.
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.448 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.449 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:15:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:51 compute-0 nova_compute[247704]: 2026-01-31 08:15:51.590 247708 DEBUG oslo_concurrency.processutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:15:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 244 op/s
Jan 31 08:15:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:15:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079221792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:52 compute-0 nova_compute[247704]: 2026-01-31 08:15:52.021 247708 DEBUG oslo_concurrency.processutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:15:52 compute-0 nova_compute[247704]: 2026-01-31 08:15:52.028 247708 DEBUG nova.compute.provider_tree [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:15:52 compute-0 nova_compute[247704]: 2026-01-31 08:15:52.051 247708 DEBUG nova.scheduler.client.report [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:15:52 compute-0 nova_compute[247704]: 2026-01-31 08:15:52.079 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:52 compute-0 nova_compute[247704]: 2026-01-31 08:15:52.181 247708 INFO nova.scheduler.client.report [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Deleted allocations for instance 53a5c321-1278-4df4-9fb0-feb465508681
Jan 31 08:15:52 compute-0 nova_compute[247704]: 2026-01-31 08:15:52.283 247708 DEBUG oslo_concurrency.lockutils [None req-d1271876-97b1-44fe-9fe0-41e29ead87af 18aee9d81d404f77ac81cde538f140d8 c3ddadeb950a490db5c99da98a32c9ec - - default default] Lock "53a5c321-1278-4df4-9fb0-feb465508681" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:15:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:15:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:52.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:15:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:52.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:52 compute-0 ceph-mon[74496]: pgmap v2417: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 244 op/s
Jan 31 08:15:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3079221792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:53 compute-0 nova_compute[247704]: 2026-01-31 08:15:53.377 247708 DEBUG nova.compute.manager [req-2b1d1305-7297-4c1f-95e2-069ab1bfcbd3 req-b9685dcd-2772-45ed-b226-61775642ec69 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Received event network-vif-deleted-86453c5b-b126-4762-81e9-44c33b078bfc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:15:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 225 op/s
Jan 31 08:15:54 compute-0 nova_compute[247704]: 2026-01-31 08:15:54.097 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:54.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:54 compute-0 nova_compute[247704]: 2026-01-31 08:15:54.606 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:54.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:54 compute-0 ceph-mon[74496]: pgmap v2418: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 225 op/s
Jan 31 08:15:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 210 op/s
Jan 31 08:15:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:15:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:56.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:56.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:56 compute-0 ceph-mon[74496]: pgmap v2419: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 210 op/s
Jan 31 08:15:56 compute-0 podman[335674]: 2026-01-31 08:15:56.892688635 +0000 UTC m=+0.070547870 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base 
Image, tcib_managed=true, config_id=ovn_controller)
Jan 31 08:15:57 compute-0 ovn_controller[149457]: 2026-01-31T08:15:57Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:79:71:3b 10.100.0.4
Jan 31 08:15:57 compute-0 ovn_controller[149457]: 2026-01-31T08:15:57Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:71:3b 10.100.0.4
Jan 31 08:15:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 524 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Jan 31 08:15:58 compute-0 nova_compute[247704]: 2026-01-31 08:15:58.287 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:15:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:58.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:15:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:15:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:15:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:58.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:15:58 compute-0 ceph-mon[74496]: pgmap v2420: 305 pgs: 305 active+clean; 524 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Jan 31 08:15:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/561970599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:15:59 compute-0 nova_compute[247704]: 2026-01-31 08:15:59.101 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:59 compute-0 ovn_controller[149457]: 2026-01-31T08:15:59Z|00553|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 08:15:59 compute-0 nova_compute[247704]: 2026-01-31 08:15:59.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:59 compute-0 nova_compute[247704]: 2026-01-31 08:15:59.609 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:15:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 524 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 163 op/s
Jan 31 08:16:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:00.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:00.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:00 compute-0 ceph-mon[74496]: pgmap v2421: 305 pgs: 305 active+clean; 524 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 163 op/s
Jan 31 08:16:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Jan 31 08:16:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:16:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:02.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:16:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:02.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:02 compute-0 nova_compute[247704]: 2026-01-31 08:16:02.903 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:02 compute-0 ceph-mon[74496]: pgmap v2422: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 185 op/s
Jan 31 08:16:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 31 08:16:04 compute-0 nova_compute[247704]: 2026-01-31 08:16:04.050 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847349.0486052, 53a5c321-1278-4df4-9fb0-feb465508681 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:16:04 compute-0 nova_compute[247704]: 2026-01-31 08:16:04.050 247708 INFO nova.compute.manager [-] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] VM Stopped (Lifecycle Event)
Jan 31 08:16:04 compute-0 nova_compute[247704]: 2026-01-31 08:16:04.100 247708 DEBUG nova.compute.manager [None req-bc481cb3-c703-4db4-8f38-8413a8031b17 - - - - - -] [instance: 53a5c321-1278-4df4-9fb0-feb465508681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:16:04 compute-0 nova_compute[247704]: 2026-01-31 08:16:04.103 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:04 compute-0 ovn_controller[149457]: 2026-01-31T08:16:04Z|00554|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 08:16:04 compute-0 nova_compute[247704]: 2026-01-31 08:16:04.468 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:16:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:04.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:16:04 compute-0 nova_compute[247704]: 2026-01-31 08:16:04.613 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:04 compute-0 sshd-session[335705]: Invalid user banx from 45.148.10.240 port 46628
Jan 31 08:16:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:16:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:04.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:16:04 compute-0 sshd-session[335705]: Connection closed by invalid user banx 45.148.10.240 port 46628 [preauth]
Jan 31 08:16:04 compute-0 ceph-mon[74496]: pgmap v2423: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 31 08:16:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 31 08:16:06 compute-0 sudo[335708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:06 compute-0 sudo[335708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:06 compute-0 sudo[335708]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:06 compute-0 sudo[335733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:06 compute-0 sudo[335733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:06 compute-0 sudo[335733]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:06.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:06.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:06 compute-0 ceph-mon[74496]: pgmap v2424: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 31 08:16:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 1016 KiB/s wr, 80 op/s
Jan 31 08:16:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:08.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:08 compute-0 sudo[335759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:08 compute-0 sudo[335759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:08 compute-0 sudo[335759]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:08 compute-0 sudo[335784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:08 compute-0 sudo[335784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:08 compute-0 sudo[335784]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:08 compute-0 sudo[335809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:08 compute-0 sudo[335809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:08 compute-0 sudo[335809]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:08 compute-0 sudo[335834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:16:08 compute-0 sudo[335834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:08 compute-0 ceph-mon[74496]: pgmap v2425: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 1016 KiB/s wr, 80 op/s
Jan 31 08:16:09 compute-0 nova_compute[247704]: 2026-01-31 08:16:09.136 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:09 compute-0 sudo[335834]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:16:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:16:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:16:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:16:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:16:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3c4c636a-ef6c-44a1-8640-c963e9f81a41 does not exist
Jan 31 08:16:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f70ac101-f0cb-4b0d-9357-ad78ba37f5ec does not exist
Jan 31 08:16:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a5a662a2-0377-4b47-94cc-c098d85d5e9a does not exist
Jan 31 08:16:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:16:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:16:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:16:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:16:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:16:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:16:09 compute-0 sudo[335892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:09 compute-0 sudo[335892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:09 compute-0 sudo[335892]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:09 compute-0 nova_compute[247704]: 2026-01-31 08:16:09.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:09 compute-0 sudo[335917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:09 compute-0 sudo[335917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:09 compute-0 sudo[335917]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:09 compute-0 sudo[335942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:09 compute-0 sudo[335942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:09 compute-0 sudo[335942]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:09 compute-0 sudo[335967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:16:09 compute-0 sudo[335967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 140 KiB/s rd, 152 KiB/s wr, 34 op/s
Jan 31 08:16:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:16:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:16:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:16:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:16:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:16:10 compute-0 podman[336033]: 2026-01-31 08:16:10.112689049 +0000 UTC m=+0.050022020 container create ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:10 compute-0 systemd[1]: Started libpod-conmon-ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90.scope.
Jan 31 08:16:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:10 compute-0 podman[336033]: 2026-01-31 08:16:10.091496063 +0000 UTC m=+0.028829064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:16:10 compute-0 podman[336033]: 2026-01-31 08:16:10.200478209 +0000 UTC m=+0.137811200 container init ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:16:10 compute-0 podman[336033]: 2026-01-31 08:16:10.207213713 +0000 UTC m=+0.144546684 container start ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:16:10 compute-0 podman[336033]: 2026-01-31 08:16:10.210059582 +0000 UTC m=+0.147392553 container attach ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:16:10 compute-0 festive_swirles[336049]: 167 167
Jan 31 08:16:10 compute-0 podman[336033]: 2026-01-31 08:16:10.213176978 +0000 UTC m=+0.150509979 container died ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:16:10 compute-0 systemd[1]: libpod-ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90.scope: Deactivated successfully.
Jan 31 08:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c387d4773778f87fefcecd347dfb7553478623fc5d030e6ff0b883eae04c4f65-merged.mount: Deactivated successfully.
Jan 31 08:16:10 compute-0 podman[336033]: 2026-01-31 08:16:10.252685991 +0000 UTC m=+0.190018962 container remove ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:16:10 compute-0 systemd[1]: libpod-conmon-ec6d8e5563defafc64a1588c06dca7a2405ef53bb0f5ab105efdff3f6ecabd90.scope: Deactivated successfully.
Jan 31 08:16:10 compute-0 podman[336073]: 2026-01-31 08:16:10.395623943 +0000 UTC m=+0.051567337 container create 9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:16:10 compute-0 systemd[1]: Started libpod-conmon-9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756.scope.
Jan 31 08:16:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/116f036aed0e39ba2e0a3dce31c5b99f2c967ef1d6b49c5367894339da0db018/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/116f036aed0e39ba2e0a3dce31c5b99f2c967ef1d6b49c5367894339da0db018/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/116f036aed0e39ba2e0a3dce31c5b99f2c967ef1d6b49c5367894339da0db018/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/116f036aed0e39ba2e0a3dce31c5b99f2c967ef1d6b49c5367894339da0db018/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/116f036aed0e39ba2e0a3dce31c5b99f2c967ef1d6b49c5367894339da0db018/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:10 compute-0 podman[336073]: 2026-01-31 08:16:10.374883368 +0000 UTC m=+0.030826762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:16:10 compute-0 podman[336073]: 2026-01-31 08:16:10.479741053 +0000 UTC m=+0.135684447 container init 9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:16:10 compute-0 podman[336073]: 2026-01-31 08:16:10.486489808 +0000 UTC m=+0.142433182 container start 9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:16:10 compute-0 podman[336073]: 2026-01-31 08:16:10.493181781 +0000 UTC m=+0.149125185 container attach 9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:16:10 compute-0 podman[336087]: 2026-01-31 08:16:10.510994325 +0000 UTC m=+0.071224306 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:16:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:10.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:10.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:10 compute-0 ceph-mon[74496]: pgmap v2426: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 140 KiB/s rd, 152 KiB/s wr, 34 op/s
Jan 31 08:16:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:16:11.189 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:16:11.191 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:16:11.193 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:11 compute-0 thirsty_golick[336090]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:16:11 compute-0 thirsty_golick[336090]: --> relative data size: 1.0
Jan 31 08:16:11 compute-0 thirsty_golick[336090]: --> All data devices are unavailable
Jan 31 08:16:11 compute-0 systemd[1]: libpod-9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756.scope: Deactivated successfully.
Jan 31 08:16:11 compute-0 podman[336073]: 2026-01-31 08:16:11.247491421 +0000 UTC m=+0.903434795 container died 9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 08:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-116f036aed0e39ba2e0a3dce31c5b99f2c967ef1d6b49c5367894339da0db018-merged.mount: Deactivated successfully.
Jan 31 08:16:11 compute-0 podman[336073]: 2026-01-31 08:16:11.322462687 +0000 UTC m=+0.978406071 container remove 9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:16:11 compute-0 systemd[1]: libpod-conmon-9fe17ee07b7a7ccf2c19a7d86df706e68087ab47cc24fabcfb49b88d5cfaf756.scope: Deactivated successfully.
Jan 31 08:16:11 compute-0 sudo[335967]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:11 compute-0 sudo[336136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:11 compute-0 sudo[336136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:11 compute-0 sudo[336136]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:11 compute-0 sudo[336161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:11 compute-0 sudo[336161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:11 compute-0 sudo[336161]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:11 compute-0 sudo[336187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:11 compute-0 sudo[336187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:11 compute-0 sudo[336187]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:11 compute-0 sudo[336212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:16:11 compute-0 sudo[336212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 140 KiB/s rd, 156 KiB/s wr, 35 op/s
Jan 31 08:16:11 compute-0 podman[336276]: 2026-01-31 08:16:11.896999667 +0000 UTC m=+0.049000975 container create ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:16:11 compute-0 systemd[1]: Started libpod-conmon-ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d.scope.
Jan 31 08:16:11 compute-0 podman[336276]: 2026-01-31 08:16:11.868518022 +0000 UTC m=+0.020519390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:16:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:11 compute-0 podman[336276]: 2026-01-31 08:16:11.989641534 +0000 UTC m=+0.141642902 container init ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:16:12 compute-0 podman[336276]: 2026-01-31 08:16:11.999824382 +0000 UTC m=+0.151825700 container start ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:12 compute-0 naughty_tu[336293]: 167 167
Jan 31 08:16:12 compute-0 systemd[1]: libpod-ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d.scope: Deactivated successfully.
Jan 31 08:16:12 compute-0 conmon[336293]: conmon ef7d4bf8e61672741bfb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d.scope/container/memory.events
Jan 31 08:16:12 compute-0 podman[336276]: 2026-01-31 08:16:12.008043893 +0000 UTC m=+0.160045241 container attach ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 08:16:12 compute-0 podman[336276]: 2026-01-31 08:16:12.00917895 +0000 UTC m=+0.161180258 container died ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:16:12 compute-0 ceph-mon[74496]: pgmap v2427: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 140 KiB/s rd, 156 KiB/s wr, 35 op/s
Jan 31 08:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-98f7f538d15bc0b98430a4d914e5679508f4846c0c3143f1fd82bc456b54620c-merged.mount: Deactivated successfully.
Jan 31 08:16:12 compute-0 podman[336276]: 2026-01-31 08:16:12.09822262 +0000 UTC m=+0.250223898 container remove ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_tu, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:16:12 compute-0 systemd[1]: libpod-conmon-ef7d4bf8e61672741bfb9e8ac4114039185dbdc7f17a5913c822bb2483b0cf1d.scope: Deactivated successfully.
Jan 31 08:16:12 compute-0 podman[336317]: 2026-01-31 08:16:12.235389492 +0000 UTC m=+0.051180698 container create 73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:16:12 compute-0 systemd[1]: Started libpod-conmon-73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62.scope.
Jan 31 08:16:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:12 compute-0 podman[336317]: 2026-01-31 08:16:12.205012492 +0000 UTC m=+0.020803798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4133578056fed10d2281f4f760942d75e433723398c6ce6d5195cf9abd251421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4133578056fed10d2281f4f760942d75e433723398c6ce6d5195cf9abd251421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4133578056fed10d2281f4f760942d75e433723398c6ce6d5195cf9abd251421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4133578056fed10d2281f4f760942d75e433723398c6ce6d5195cf9abd251421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:12 compute-0 podman[336317]: 2026-01-31 08:16:12.320037835 +0000 UTC m=+0.135829041 container init 73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 08:16:12 compute-0 podman[336317]: 2026-01-31 08:16:12.328380668 +0000 UTC m=+0.144171894 container start 73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:16:12 compute-0 podman[336317]: 2026-01-31 08:16:12.335559143 +0000 UTC m=+0.151350399 container attach 73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:16:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:12.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:12.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:13 compute-0 happy_euclid[336334]: {
Jan 31 08:16:13 compute-0 happy_euclid[336334]:     "0": [
Jan 31 08:16:13 compute-0 happy_euclid[336334]:         {
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "devices": [
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "/dev/loop3"
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             ],
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "lv_name": "ceph_lv0",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "lv_size": "7511998464",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "name": "ceph_lv0",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "tags": {
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.cluster_name": "ceph",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.crush_device_class": "",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.encrypted": "0",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.osd_id": "0",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.type": "block",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:                 "ceph.vdo": "0"
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             },
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "type": "block",
Jan 31 08:16:13 compute-0 happy_euclid[336334]:             "vg_name": "ceph_vg0"
Jan 31 08:16:13 compute-0 happy_euclid[336334]:         }
Jan 31 08:16:13 compute-0 happy_euclid[336334]:     ]
Jan 31 08:16:13 compute-0 happy_euclid[336334]: }
Jan 31 08:16:13 compute-0 systemd[1]: libpod-73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62.scope: Deactivated successfully.
Jan 31 08:16:13 compute-0 podman[336317]: 2026-01-31 08:16:13.145043797 +0000 UTC m=+0.960835003 container died 73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 08:16:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4133578056fed10d2281f4f760942d75e433723398c6ce6d5195cf9abd251421-merged.mount: Deactivated successfully.
Jan 31 08:16:13 compute-0 podman[336317]: 2026-01-31 08:16:13.221945551 +0000 UTC m=+1.037736757 container remove 73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_euclid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:16:13 compute-0 systemd[1]: libpod-conmon-73accd82cc368a6fb7b87943777fbc572560f865ad9bbadfdfc0acd4b379cd62.scope: Deactivated successfully.
Jan 31 08:16:13 compute-0 sudo[336212]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:13 compute-0 sudo[336357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:13 compute-0 sudo[336357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:13 compute-0 sudo[336357]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:13 compute-0 sudo[336382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:13 compute-0 sudo[336382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:13 compute-0 sudo[336382]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:13 compute-0 sudo[336407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:13 compute-0 sudo[336407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:13 compute-0 sudo[336407]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:13 compute-0 sudo[336433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:16:13 compute-0 sudo[336433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 08:16:13 compute-0 podman[336498]: 2026-01-31 08:16:13.818841605 +0000 UTC m=+0.019938157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:16:13 compute-0 podman[336498]: 2026-01-31 08:16:13.984167033 +0000 UTC m=+0.185263515 container create 0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:16:14 compute-0 nova_compute[247704]: 2026-01-31 08:16:14.139 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:14 compute-0 nova_compute[247704]: 2026-01-31 08:16:14.155 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:14 compute-0 systemd[1]: Started libpod-conmon-0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91.scope.
Jan 31 08:16:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:14 compute-0 podman[336498]: 2026-01-31 08:16:14.375292264 +0000 UTC m=+0.576388786 container init 0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:16:14 compute-0 podman[336498]: 2026-01-31 08:16:14.381334901 +0000 UTC m=+0.582431373 container start 0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moser, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 08:16:14 compute-0 crazy_moser[336515]: 167 167
Jan 31 08:16:14 compute-0 systemd[1]: libpod-0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91.scope: Deactivated successfully.
Jan 31 08:16:14 compute-0 ceph-mon[74496]: pgmap v2428: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s wr, 0 op/s
Jan 31 08:16:14 compute-0 podman[336498]: 2026-01-31 08:16:14.421774426 +0000 UTC m=+0.622870938 container attach 0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moser, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:16:14 compute-0 podman[336498]: 2026-01-31 08:16:14.422405771 +0000 UTC m=+0.623502283 container died 0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:16:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:14.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd33621179f362c32f2b7d16d363a54a4ce800c46019e5606c394e4ace14a99a-merged.mount: Deactivated successfully.
Jan 31 08:16:14 compute-0 nova_compute[247704]: 2026-01-31 08:16:14.639 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:14 compute-0 podman[336498]: 2026-01-31 08:16:14.740713218 +0000 UTC m=+0.941809720 container remove 0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:16:14 compute-0 systemd[1]: libpod-conmon-0f911af7d7bcdb0115e36d04f11162393a6807a632a74041067b556268034c91.scope: Deactivated successfully.
Jan 31 08:16:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:14.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:14 compute-0 podman[336540]: 2026-01-31 08:16:14.979587918 +0000 UTC m=+0.098936011 container create 356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:16:15 compute-0 podman[336540]: 2026-01-31 08:16:14.905715747 +0000 UTC m=+0.025063830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:16:15 compute-0 systemd[1]: Started libpod-conmon-356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb.scope.
Jan 31 08:16:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee46ed2022f8cf634aa4d8981e7c17bf82553d109e1843631febea5e4d06f8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee46ed2022f8cf634aa4d8981e7c17bf82553d109e1843631febea5e4d06f8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee46ed2022f8cf634aa4d8981e7c17bf82553d109e1843631febea5e4d06f8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee46ed2022f8cf634aa4d8981e7c17bf82553d109e1843631febea5e4d06f8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:15 compute-0 podman[336540]: 2026-01-31 08:16:15.172425687 +0000 UTC m=+0.291773760 container init 356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:16:15 compute-0 podman[336540]: 2026-01-31 08:16:15.17998324 +0000 UTC m=+0.299331293 container start 356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_matsumoto, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:16:15 compute-0 podman[336540]: 2026-01-31 08:16:15.224131207 +0000 UTC m=+0.343479260 container attach 356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 08:16:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]: {
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]:         "osd_id": 0,
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]:         "type": "bluestore"
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]:     }
Jan 31 08:16:15 compute-0 lucid_matsumoto[336556]: }
Jan 31 08:16:15 compute-0 systemd[1]: libpod-356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb.scope: Deactivated successfully.
Jan 31 08:16:16 compute-0 podman[336580]: 2026-01-31 08:16:16.044144417 +0000 UTC m=+0.032009750 container died 356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee46ed2022f8cf634aa4d8981e7c17bf82553d109e1843631febea5e4d06f8c-merged.mount: Deactivated successfully.
Jan 31 08:16:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:16 compute-0 podman[336580]: 2026-01-31 08:16:16.541315072 +0000 UTC m=+0.529180315 container remove 356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:16:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:16.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:16 compute-0 systemd[1]: libpod-conmon-356d1a8e65ab4958f7662c233494b0e24c6e1b32e30b012cd79a01c4f86c70fb.scope: Deactivated successfully.
Jan 31 08:16:16 compute-0 sudo[336433]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:16:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:16:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:16.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1fbaec42-1724-4e11-986b-a64c2bebd934 does not exist
Jan 31 08:16:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 406a6197-0fac-49d8-bdc8-b2302986eb5e does not exist
Jan 31 08:16:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0c27221a-7c74-4b5d-8e1d-52637c338f30 does not exist
Jan 31 08:16:16 compute-0 sudo[336595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:16 compute-0 sudo[336595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:16 compute-0 sudo[336595]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:16 compute-0 ceph-mon[74496]: pgmap v2429: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:16:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3818137887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:16:17 compute-0 sudo[336620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:16:17 compute-0 sudo[336620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:17 compute-0 sudo[336620]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 520 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 734 KiB/s wr, 25 op/s
Jan 31 08:16:18 compute-0 ceph-mon[74496]: pgmap v2430: 305 pgs: 305 active+clean; 520 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 734 KiB/s wr, 25 op/s
Jan 31 08:16:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:18.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:19 compute-0 nova_compute[247704]: 2026-01-31 08:16:19.141 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:19 compute-0 nova_compute[247704]: 2026-01-31 08:16:19.642 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 520 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 733 KiB/s wr, 24 op/s
Jan 31 08:16:20 compute-0 nova_compute[247704]: 2026-01-31 08:16:20.019 247708 DEBUG nova.compute.manager [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:20 compute-0 nova_compute[247704]: 2026-01-31 08:16:20.083 247708 INFO nova.compute.manager [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] instance snapshotting
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:16:20
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:16:20 compute-0 nova_compute[247704]: 2026-01-31 08:16:20.479 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:20.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:16:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:16:20 compute-0 nova_compute[247704]: 2026-01-31 08:16:20.638 247708 INFO nova.virt.libvirt.driver [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Beginning live snapshot process
Jan 31 08:16:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:20.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:20 compute-0 ceph-mon[74496]: pgmap v2431: 305 pgs: 305 active+clean; 520 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 733 KiB/s wr, 24 op/s
Jan 31 08:16:20 compute-0 nova_compute[247704]: 2026-01-31 08:16:20.932 247708 DEBUG nova.virt.libvirt.imagebackend [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 08:16:21 compute-0 nova_compute[247704]: 2026-01-31 08:16:21.054 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:16:21 compute-0 nova_compute[247704]: 2026-01-31 08:16:21.868 247708 DEBUG nova.storage.rbd_utils [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] creating snapshot(f42bc4d6bd794d818030934e6e635745) on rbd image(743cf933-3139-4c25-9c75-b45150274ae3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:16:22 compute-0 ceph-mon[74496]: pgmap v2432: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:16:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:16:22.650 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:16:22 compute-0 nova_compute[247704]: 2026-01-31 08:16:22.650 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:16:22.651 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:16:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:22.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 31 08:16:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 31 08:16:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 31 08:16:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/121152499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:23 compute-0 ceph-mon[74496]: osdmap e310: 3 total, 3 up, 3 in
Jan 31 08:16:23 compute-0 nova_compute[247704]: 2026-01-31 08:16:23.651 247708 DEBUG nova.storage.rbd_utils [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] cloning vms/743cf933-3139-4c25-9c75-b45150274ae3_disk@f42bc4d6bd794d818030934e6e635745 to images/2ca7a053-cec0-4d6e-8cfb-6697bc58e21c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:16:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 08:16:24 compute-0 nova_compute[247704]: 2026-01-31 08:16:24.066 247708 DEBUG nova.storage.rbd_utils [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] flattening images/2ca7a053-cec0-4d6e-8cfb-6697bc58e21c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:16:24 compute-0 nova_compute[247704]: 2026-01-31 08:16:24.250 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:24.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4224457695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:24 compute-0 ceph-mon[74496]: pgmap v2434: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 08:16:24 compute-0 nova_compute[247704]: 2026-01-31 08:16:24.644 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:25 compute-0 nova_compute[247704]: 2026-01-31 08:16:25.123 247708 DEBUG nova.storage.rbd_utils [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] removing snapshot(f42bc4d6bd794d818030934e6e635745) on rbd image(743cf933-3139-4c25-9c75-b45150274ae3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:16:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 588 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.4 MiB/s wr, 91 op/s
Jan 31 08:16:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 31 08:16:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 31 08:16:25 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 31 08:16:25 compute-0 nova_compute[247704]: 2026-01-31 08:16:25.904 247708 DEBUG nova.storage.rbd_utils [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] creating snapshot(snap) on rbd image(2ca7a053-cec0-4d6e-8cfb-6697bc58e21c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:16:26 compute-0 nova_compute[247704]: 2026-01-31 08:16:26.001 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:26 compute-0 sudo[336792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:26 compute-0 sudo[336792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:26 compute-0 sudo[336792]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:26 compute-0 sudo[336817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:26 compute-0 sudo[336817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:26 compute-0 sudo[336817]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:16:26.653 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:16:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:26 compute-0 ceph-mon[74496]: pgmap v2435: 305 pgs: 305 active+clean; 588 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.4 MiB/s wr, 91 op/s
Jan 31 08:16:26 compute-0 ceph-mon[74496]: osdmap e311: 3 total, 3 up, 3 in
Jan 31 08:16:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 31 08:16:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 31 08:16:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 31 08:16:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 615 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.8 MiB/s wr, 156 op/s
Jan 31 08:16:27 compute-0 ceph-mon[74496]: osdmap e312: 3 total, 3 up, 3 in
Jan 31 08:16:27 compute-0 podman[336843]: 2026-01-31 08:16:27.964364179 +0000 UTC m=+0.135502303 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:16:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:28.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:28 compute-0 ceph-mon[74496]: pgmap v2438: 305 pgs: 305 active+clean; 615 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.8 MiB/s wr, 156 op/s
Jan 31 08:16:29 compute-0 nova_compute[247704]: 2026-01-31 08:16:29.252 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:16:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 38K writes, 146K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.83 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6580 writes, 25K keys, 6580 commit groups, 1.0 writes per commit group, ingest: 24.19 MB, 0.04 MB/s
                                           Interval WAL: 6580 writes, 2621 syncs, 2.51 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:16:29 compute-0 nova_compute[247704]: 2026-01-31 08:16:29.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:29 compute-0 nova_compute[247704]: 2026-01-31 08:16:29.646 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 615 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.1 MiB/s wr, 133 op/s
Jan 31 08:16:29 compute-0 nova_compute[247704]: 2026-01-31 08:16:29.929 247708 INFO nova.virt.libvirt.driver [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Snapshot image upload complete
Jan 31 08:16:29 compute-0 nova_compute[247704]: 2026-01-31 08:16:29.930 247708 INFO nova.compute.manager [None req-94d97747-bf02-4c27-8e65-c2396c97fb31 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Took 9.85 seconds to snapshot the instance on the hypervisor.
Jan 31 08:16:30 compute-0 ceph-mon[74496]: pgmap v2439: 305 pgs: 305 active+clean; 615 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.1 MiB/s wr, 133 op/s
Jan 31 08:16:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:30.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:30 compute-0 nova_compute[247704]: 2026-01-31 08:16:30.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:30.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:31 compute-0 nova_compute[247704]: 2026-01-31 08:16:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:31 compute-0 nova_compute[247704]: 2026-01-31 08:16:31.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:31 compute-0 nova_compute[247704]: 2026-01-31 08:16:31.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:16:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.9 MiB/s wr, 147 op/s
Jan 31 08:16:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:33 compute-0 ceph-mon[74496]: pgmap v2440: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.9 MiB/s wr, 147 op/s
Jan 31 08:16:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 88 op/s
Jan 31 08:16:34 compute-0 ceph-mon[74496]: pgmap v2441: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 88 op/s
Jan 31 08:16:34 compute-0 nova_compute[247704]: 2026-01-31 08:16:34.254 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:34 compute-0 nova_compute[247704]: 2026-01-31 08:16:34.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:34 compute-0 nova_compute[247704]: 2026-01-31 08:16:34.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:16:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:34 compute-0 nova_compute[247704]: 2026-01-31 08:16:34.603 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:16:34 compute-0 nova_compute[247704]: 2026-01-31 08:16:34.649 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:34.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:35 compute-0 nova_compute[247704]: 2026-01-31 08:16:35.306 247708 DEBUG oslo_concurrency.lockutils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:35 compute-0 nova_compute[247704]: 2026-01-31 08:16:35.307 247708 DEBUG oslo_concurrency.lockutils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:35 compute-0 nova_compute[247704]: 2026-01-31 08:16:35.352 247708 DEBUG nova.objects.instance [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'flavor' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:16:35 compute-0 nova_compute[247704]: 2026-01-31 08:16:35.440 247708 DEBUG oslo_concurrency.lockutils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009689106702493951 of space, bias 1.0, pg target 2.9067320107481853 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007034776021310255 of space, bias 1.0, pg target 2.096363254350456 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 08:16:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.4 MiB/s wr, 117 op/s
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.157 247708 DEBUG oslo_concurrency.lockutils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.158 247708 DEBUG oslo_concurrency.lockutils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.158 247708 INFO nova.compute.manager [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Attaching volume ddc44c03-3580-457e-b1d4-9a35d3c393e8 to /dev/vdb
Jan 31 08:16:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 31 08:16:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 31 08:16:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.584 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.603 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.627 247708 DEBUG os_brick.utils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.629 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.642 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.643 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[ac00b6b6-ee1a-4ad7-9148-38245eab2bdd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.644 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.652 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.653 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[25ee6458-d825-4c52-b4da-6dfef482d678]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.654 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.656 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.657 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.657 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.657 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.658 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.663 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.664 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[21752ced-526a-4e5d-8aff-7e3df6553175]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.688 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[b52fc8bf-d8c1-4fe0-945c-35e0e502be13]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.688 247708 DEBUG oslo_concurrency.processutils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.720 247708 DEBUG oslo_concurrency.processutils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.723 247708 DEBUG os_brick.initiator.connectors.lightos [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.723 247708 DEBUG os_brick.initiator.connectors.lightos [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.723 247708 DEBUG os_brick.initiator.connectors.lightos [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.724 247708 DEBUG os_brick.utils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:16:36 compute-0 nova_compute[247704]: 2026-01-31 08:16:36.724 247708 DEBUG nova.virt.block_device [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating existing volume attachment record: f59239f8-ac09-4b3b-a97a-ef159be2d50b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:16:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:36.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 630 KiB/s wr, 114 op/s
Jan 31 08:16:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:38.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:38.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:39 compute-0 nova_compute[247704]: 2026-01-31 08:16:39.294 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:39 compute-0 nova_compute[247704]: 2026-01-31 08:16:39.652 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 630 KiB/s wr, 114 op/s
Jan 31 08:16:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:40.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:40 compute-0 podman[336904]: 2026-01-31 08:16:40.896771734 +0000 UTC m=+0.069715520 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:16:41 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 08:16:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 08:16:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 511 B/s wr, 85 op/s
Jan 31 08:16:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:42.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:42.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 409 B/s wr, 78 op/s
Jan 31 08:16:44 compute-0 nova_compute[247704]: 2026-01-31 08:16:44.296 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:44.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:44 compute-0 nova_compute[247704]: 2026-01-31 08:16:44.655 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:44.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:45 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 08:16:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 37 op/s
Jan 31 08:16:46 compute-0 nova_compute[247704]: 2026-01-31 08:16:46.062 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:46 compute-0 sudo[336926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:46 compute-0 sudo[336926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:46 compute-0 sudo[336926]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:46 compute-0 sudo[336951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:16:46 compute-0 sudo[336951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:46 compute-0 sudo[336951]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:46.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:16:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636384441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:46.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:46 compute-0 nova_compute[247704]: 2026-01-31 08:16:46.857 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 10.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:47 compute-0 nova_compute[247704]: 2026-01-31 08:16:47.374 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:16:47 compute-0 nova_compute[247704]: 2026-01-31 08:16:47.374 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:16:47 compute-0 nova_compute[247704]: 2026-01-31 08:16:47.565 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:16:47 compute-0 nova_compute[247704]: 2026-01-31 08:16:47.566 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4168MB free_disk=20.784923553466797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:16:47 compute-0 nova_compute[247704]: 2026-01-31 08:16:47.567 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:16:47 compute-0 nova_compute[247704]: 2026-01-31 08:16:47.567 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:16:47 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 08:16:47 compute-0 ceph-mon[74496]: paxos.0).electionLogic(41) init, last seen epoch 41, mid-election, bumping
Jan 31 08:16:47 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:16:47 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:16:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1530353335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 0 B/s wr, 37 op/s
Jan 31 08:16:48 compute-0 nova_compute[247704]: 2026-01-31 08:16:48.527 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 743cf933-3139-4c25-9c75-b45150274ae3 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:16:48 compute-0 nova_compute[247704]: 2026-01-31 08:16:48.527 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:16:48 compute-0 nova_compute[247704]: 2026-01-31 08:16:48.528 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:16:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:48.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:48 compute-0 nova_compute[247704]: 2026-01-31 08:16:48.677 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:16:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:48.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:48 compute-0 nova_compute[247704]: 2026-01-31 08:16:48.890 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:16:48 compute-0 nova_compute[247704]: 2026-01-31 08:16:48.891 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:16:48 compute-0 nova_compute[247704]: 2026-01-31 08:16:48.909 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:16:48 compute-0 nova_compute[247704]: 2026-01-31 08:16:48.936 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:16:49 compute-0 nova_compute[247704]: 2026-01-31 08:16:49.012 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:16:49 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:49 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:49 compute-0 nova_compute[247704]: 2026-01-31 08:16:49.298 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:49 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 08:16:49 compute-0 nova_compute[247704]: 2026-01-31 08:16:49.656 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:16:49 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:49 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:49 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:50 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:50 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:50.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:51 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:51 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:51 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 638 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 903 KiB/s wr, 13 op/s
Jan 31 08:16:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:52.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:52 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 31 08:16:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_auth_request failed to assign global_id
Jan 31 08:16:53 compute-0 ceph-mon[74496]: pgmap v2442: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.4 MiB/s wr, 117 op/s
Jan 31 08:16:53 compute-0 ceph-mon[74496]: osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:16:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 71m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 08:16:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3581355920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:53 compute-0 nova_compute[247704]: 2026-01-31 08:16:53.394 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:16:53 compute-0 nova_compute[247704]: 2026-01-31 08:16:53.399 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:16:53 compute-0 nova_compute[247704]: 2026-01-31 08:16:53.470 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 31 08:16:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [WRN] :     mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 31 08:16:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 646 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.4 MiB/s wr, 45 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2444: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 630 KiB/s wr, 114 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2445: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 630 KiB/s wr, 114 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2446: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 511 B/s wr, 85 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2447: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 409 B/s wr, 78 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2448: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 37 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3636384441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:54 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 08:16:54 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 08:16:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1530353335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2449: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 0 B/s wr, 37 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2450: 305 pgs: 305 active+clean; 629 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2451: 305 pgs: 305 active+clean; 638 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 903 KiB/s wr, 13 op/s
Jan 31 08:16:54 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Jan 31 08:16:54 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:16:54 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:16:54 compute-0 ceph-mon[74496]: osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:54 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 71m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:16:54 compute-0 ceph-mon[74496]: Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Jan 31 08:16:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3581355920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2529377701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:54 compute-0 ceph-mon[74496]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Jan 31 08:16:54 compute-0 ceph-mon[74496]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Jan 31 08:16:54 compute-0 ceph-mon[74496]:     mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Jan 31 08:16:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2899978361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:16:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2899978361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:16:54 compute-0 ceph-mon[74496]: pgmap v2452: 305 pgs: 305 active+clean; 646 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.4 MiB/s wr, 45 op/s
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.353 247708 DEBUG nova.objects.instance [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'flavor' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.359 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.364 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.365 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.365 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.366 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.479 247708 DEBUG nova.virt.libvirt.driver [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Attempting to attach volume ddc44c03-3580-457e-b1d4-9a35d3c393e8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.481 247708 DEBUG nova.virt.libvirt.guest [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:16:54 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:16:54 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-ddc44c03-3580-457e-b1d4-9a35d3c393e8">
Jan 31 08:16:54 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:16:54 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:16:54 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:16:54 compute-0 nova_compute[247704]:   </source>
Jan 31 08:16:54 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:16:54 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:16:54 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:16:54 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:16:54 compute-0 nova_compute[247704]:   <serial>ddc44c03-3580-457e-b1d4-9a35d3c393e8</serial>
Jan 31 08:16:54 compute-0 nova_compute[247704]: </disk>
Jan 31 08:16:54 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.551 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:54.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:54 compute-0 nova_compute[247704]: 2026-01-31 08:16:54.659 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 08:16:55 compute-0 ceph-mon[74496]: paxos.0).electionLogic(44) init, last seen epoch 44
Jan 31 08:16:55 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:16:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 71m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 08:16:55 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 08:16:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1987985800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4037236936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3089800500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:55 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 08:16:55 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 08:16:55 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 08:16:55 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 08:16:55 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:16:55 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:16:55 compute-0 ceph-mon[74496]: osdmap e313: 3 total, 3 up, 3 in
Jan 31 08:16:55 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 71m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:16:55 compute-0 ceph-mon[74496]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Jan 31 08:16:55 compute-0 ceph-mon[74496]: Cluster is now healthy
Jan 31 08:16:55 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 08:16:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 664 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Jan 31 08:16:56 compute-0 nova_compute[247704]: 2026-01-31 08:16:56.110 247708 DEBUG nova.virt.libvirt.driver [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:16:56 compute-0 nova_compute[247704]: 2026-01-31 08:16:56.111 247708 DEBUG nova.virt.libvirt.driver [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:16:56 compute-0 nova_compute[247704]: 2026-01-31 08:16:56.111 247708 DEBUG nova.virt.libvirt.driver [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:16:56 compute-0 nova_compute[247704]: 2026-01-31 08:16:56.111 247708 DEBUG nova.virt.libvirt.driver [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:79:71:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:16:56 compute-0 nova_compute[247704]: 2026-01-31 08:16:56.404 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:56 compute-0 nova_compute[247704]: 2026-01-31 08:16:56.405 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:56 compute-0 ceph-mon[74496]: pgmap v2453: 305 pgs: 305 active+clean; 664 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Jan 31 08:16:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:56.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:16:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:56.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:57 compute-0 nova_compute[247704]: 2026-01-31 08:16:57.215 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:16:57 compute-0 nova_compute[247704]: 2026-01-31 08:16:57.215 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:16:57 compute-0 nova_compute[247704]: 2026-01-31 08:16:57.215 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:16:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.3 MiB/s wr, 84 op/s
Jan 31 08:16:58 compute-0 ceph-mon[74496]: pgmap v2454: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.3 MiB/s wr, 84 op/s
Jan 31 08:16:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:16:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:58.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:16:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:16:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:16:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:58.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:16:58 compute-0 podman[337026]: 2026-01-31 08:16:58.956998344 +0000 UTC m=+0.128968513 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:16:59 compute-0 nova_compute[247704]: 2026-01-31 08:16:59.126 247708 DEBUG oslo_concurrency.lockutils [None req-23515f72-af65-48e4-b0a0-86d956518ec7 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 22.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:16:59 compute-0 nova_compute[247704]: 2026-01-31 08:16:59.177 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:16:59 compute-0 nova_compute[247704]: 2026-01-31 08:16:59.178 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:16:59 compute-0 nova_compute[247704]: 2026-01-31 08:16:59.179 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:16:59 compute-0 nova_compute[247704]: 2026-01-31 08:16:59.179 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:16:59 compute-0 nova_compute[247704]: 2026-01-31 08:16:59.361 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:59 compute-0 nova_compute[247704]: 2026-01-31 08:16:59.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:16:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1300768238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:16:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Jan 31 08:17:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:00.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:00 compute-0 ceph-mon[74496]: pgmap v2455: 305 pgs: 305 active+clean; 669 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Jan 31 08:17:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:00.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 708 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Jan 31 08:17:02 compute-0 nova_compute[247704]: 2026-01-31 08:17:02.588 247708 INFO nova.compute.manager [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Rescuing
Jan 31 08:17:02 compute-0 nova_compute[247704]: 2026-01-31 08:17:02.588 247708 DEBUG oslo_concurrency.lockutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:02.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:02 compute-0 ceph-mon[74496]: pgmap v2456: 305 pgs: 305 active+clean; 708 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:02.921104) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422921162, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1535, "num_deletes": 254, "total_data_size": 2401570, "memory_usage": 2436864, "flush_reason": "Manual Compaction"}
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422953315, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 2342684, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52294, "largest_seqno": 53828, "table_properties": {"data_size": 2335522, "index_size": 4105, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16151, "raw_average_key_size": 20, "raw_value_size": 2320820, "raw_average_value_size": 3006, "num_data_blocks": 181, "num_entries": 772, "num_filter_entries": 772, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847287, "oldest_key_time": 1769847287, "file_creation_time": 1769847422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 32325 microseconds, and 5603 cpu microseconds.
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:02.953422) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 2342684 bytes OK
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:02.953453) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:02.955931) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:02.955952) EVENT_LOG_v1 {"time_micros": 1769847422955945, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:02.955978) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2394855, prev total WAL file size 2394855, number of live WAL files 2.
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:02.956680) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(2287KB)], [116(10207KB)]
Jan 31 08:17:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847422956756, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 12795649, "oldest_snapshot_seqno": -1}
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 8211 keys, 10839410 bytes, temperature: kUnknown
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423106739, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10839410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10786362, "index_size": 31432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 213353, "raw_average_key_size": 25, "raw_value_size": 10641922, "raw_average_value_size": 1296, "num_data_blocks": 1229, "num_entries": 8211, "num_filter_entries": 8211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:03.107061) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10839410 bytes
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:03.110042) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.3 rd, 72.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 10.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(10.1) write-amplify(4.6) OK, records in: 8742, records dropped: 531 output_compression: NoCompression
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:03.110095) EVENT_LOG_v1 {"time_micros": 1769847423110062, "job": 70, "event": "compaction_finished", "compaction_time_micros": 149987, "compaction_time_cpu_micros": 17841, "output_level": 6, "num_output_files": 1, "total_output_size": 10839410, "num_input_records": 8742, "num_output_records": 8211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423110635, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847423112217, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:02.956519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:03.112289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:03.112294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:03.112297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:03.112300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:17:03.112302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.476 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.551 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.551 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.551 247708 DEBUG oslo_concurrency.lockutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.552 247708 DEBUG nova.network.neutron [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.553 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.553 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.554 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.584 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 743cf933-3139-4c25-9c75-b45150274ae3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.585 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.585 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "743cf933-3139-4c25-9c75-b45150274ae3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.586 247708 INFO nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 31 08:17:03 compute-0 nova_compute[247704]: 2026-01-31 08:17:03.586 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "743cf933-3139-4c25-9c75-b45150274ae3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 4.0 MiB/s wr, 104 op/s
Jan 31 08:17:04 compute-0 nova_compute[247704]: 2026-01-31 08:17:04.363 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:04.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:04 compute-0 nova_compute[247704]: 2026-01-31 08:17:04.664 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:04 compute-0 ceph-mon[74496]: pgmap v2457: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 4.0 MiB/s wr, 104 op/s
Jan 31 08:17:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 4.3 MiB/s wr, 87 op/s
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.006 247708 DEBUG nova.network.neutron [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.061 247708 DEBUG oslo_concurrency.lockutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.407 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.407 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.449 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:17:06 compute-0 ceph-mon[74496]: pgmap v2458: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 4.3 MiB/s wr, 87 op/s
Jan 31 08:17:06 compute-0 sudo[337057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:06 compute-0 sudo[337057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:06 compute-0 sudo[337057]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:06 compute-0 sudo[337082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:06 compute-0 sudo[337082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:06 compute-0 sudo[337082]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:06.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.610 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.612 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.625 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.625 247708 INFO nova.compute.claims [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.652 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:17:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:06 compute-0 nova_compute[247704]: 2026-01-31 08:17:06.849 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:17:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757802043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.304 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.311 247708 DEBUG nova.compute.provider_tree [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.345 247708 DEBUG nova.scheduler.client.report [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.402 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.402 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:17:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2757802043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.548 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.548 247708 DEBUG nova.network.neutron [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.605 247708 INFO nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.644 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:17:07 compute-0 nova_compute[247704]: 2026-01-31 08:17:07.702 247708 INFO nova.virt.block_device [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Booting with volume 28bdb463-7d0a-42f5-8392-3acd847cfe3e at /dev/vda
Jan 31 08:17:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.5 MiB/s wr, 47 op/s
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.094 247708 DEBUG os_brick.utils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.097 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.107 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.107 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[4b75bec7-d2f4-4339-b1ca-ac437b7b2b7e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.109 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.114 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.115 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[e125ac62-148d-497f-a7e3-5f5e683c8d9b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.116 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.126 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.126 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[8fef12d0-2688-43d4-9a12-9aae2ed9a424]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.128 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[20fad2db-7473-4ad0-abb1-8b2e0b6ccd71]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.128 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.161 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.164 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.165 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.165 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.165 247708 DEBUG os_brick.utils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.165 247708 DEBUG nova.virt.block_device [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updating existing volume attachment record: 602d0d17-75b2-4e80-9833-94a1008d60c6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:17:08 compute-0 nova_compute[247704]: 2026-01-31 08:17:08.305 247708 DEBUG nova.policy [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90313c82677c4144953c58efc0e13c3e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bda3ebf6541d46309fc9b2ce089dd857', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:17:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:08 compute-0 ceph-mon[74496]: pgmap v2459: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.5 MiB/s wr, 47 op/s
Jan 31 08:17:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:08.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:09 compute-0 kernel: tape33f46e9-c1 (unregistering): left promiscuous mode
Jan 31 08:17:09 compute-0 NetworkManager[49108]: <info>  [1769847429.0623] device (tape33f46e9-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:17:09 compute-0 ovn_controller[149457]: 2026-01-31T08:17:09Z|00555|binding|INFO|Releasing lport e33f46e9-c188-4c53-b863-93620a4c3452 from this chassis (sb_readonly=0)
Jan 31 08:17:09 compute-0 ovn_controller[149457]: 2026-01-31T08:17:09Z|00556|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 down in Southbound
Jan 31 08:17:09 compute-0 nova_compute[247704]: 2026-01-31 08:17:09.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:09 compute-0 ovn_controller[149457]: 2026-01-31T08:17:09Z|00557|binding|INFO|Removing iface tape33f46e9-c1 ovn-installed in OVS
Jan 31 08:17:09 compute-0 nova_compute[247704]: 2026-01-31 08:17:09.084 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:09 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 31 08:17:09 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000087.scope: Consumed 16.073s CPU time.
Jan 31 08:17:09 compute-0 systemd-machined[214448]: Machine qemu-57-instance-00000087 terminated.
Jan 31 08:17:09 compute-0 nova_compute[247704]: 2026-01-31 08:17:09.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:09 compute-0 nova_compute[247704]: 2026-01-31 08:17:09.666 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:09 compute-0 nova_compute[247704]: 2026-01-31 08:17:09.675 247708 INFO nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance shutdown successfully after 3 seconds.
Jan 31 08:17:09 compute-0 nova_compute[247704]: 2026-01-31 08:17:09.681 247708 INFO nova.virt.libvirt.driver [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance destroyed successfully.
Jan 31 08:17:09 compute-0 nova_compute[247704]: 2026-01-31 08:17:09.681 247708 DEBUG nova.objects.instance [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'numa_topology' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.4 MiB/s wr, 44 op/s
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.114 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:71:3b 10.100.0.4'], port_security=['fa:16:3e:79:71:3b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '743cf933-3139-4c25-9c75-b45150274ae3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11deed3f-7193-4325-affb-77a66beb8424', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e33f46e9-c188-4c53-b863-93620a4c3452) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.115 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e33f46e9-c188-4c53-b863-93620a4c3452 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.117 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 088d6992-6ba6-4719-a977-b3d306740157, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.122 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[20eae9f9-fc62-46cb-a224-25a3db861240]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.123 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace which is not needed anymore
Jan 31 08:17:10 compute-0 nova_compute[247704]: 2026-01-31 08:17:10.194 247708 INFO nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Attempting a stable device rescue
Jan 31 08:17:10 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[335481]: [NOTICE]   (335485) : haproxy version is 2.8.14-c23fe91
Jan 31 08:17:10 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[335481]: [NOTICE]   (335485) : path to executable is /usr/sbin/haproxy
Jan 31 08:17:10 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[335481]: [WARNING]  (335485) : Exiting Master process...
Jan 31 08:17:10 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[335481]: [ALERT]    (335485) : Current worker (335487) exited with code 143 (Terminated)
Jan 31 08:17:10 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[335481]: [WARNING]  (335485) : All workers exited. Exiting... (0)
Jan 31 08:17:10 compute-0 systemd[1]: libpod-a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393.scope: Deactivated successfully.
Jan 31 08:17:10 compute-0 podman[337174]: 2026-01-31 08:17:10.290572821 +0000 UTC m=+0.059288406 container died a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:17:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393-userdata-shm.mount: Deactivated successfully.
Jan 31 08:17:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-34e850c5e94334fc8ae832d71ad49698da23163986d2c25c8cf4bf0f1a416921-merged.mount: Deactivated successfully.
Jan 31 08:17:10 compute-0 podman[337174]: 2026-01-31 08:17:10.363405655 +0000 UTC m=+0.132121240 container cleanup a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:17:10 compute-0 systemd[1]: libpod-conmon-a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393.scope: Deactivated successfully.
Jan 31 08:17:10 compute-0 podman[337204]: 2026-01-31 08:17:10.4477547 +0000 UTC m=+0.065430214 container remove a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.453 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[11acd340-ab50-4440-8173-4a9092e6dac1]: (4, ('Sat Jan 31 08:17:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393)\na67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393\nSat Jan 31 08:17:10 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (a67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393)\na67079c0210f5627c64338d7d2d1f47867513cd8ad91faf8edbee598c96f0393\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.455 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0aff4bd7-ad83-4094-900a-ff1e12259e57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.456 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:10 compute-0 nova_compute[247704]: 2026-01-31 08:17:10.515 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:10 compute-0 kernel: tap088d6992-60: left promiscuous mode
Jan 31 08:17:10 compute-0 nova_compute[247704]: 2026-01-31 08:17:10.527 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.531 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[608c3a53-1f71-47da-9406-145e94723ac1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.544 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc7bda7-f6ab-48e4-912b-3444c461554f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.545 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b8dd39-9fcc-4211-b681-792f15a66ed4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.556 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e176bf37-a8db-487f-8207-911e15a6e8e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758295, 'reachable_time': 17976, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337224, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d088d6992\x2d6ba6\x2d4719\x2da977\x2db3d306740157.mount: Deactivated successfully.
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.561 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:17:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:10.561 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[f0eee98e-7db3-41e5-bc35-fe0b8200ce7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:10.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:10 compute-0 ceph-mon[74496]: pgmap v2460: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.4 MiB/s wr, 44 op/s
Jan 31 08:17:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4075107395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.056 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.061 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.062 247708 INFO nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Creating image(s)
Jan 31 08:17:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:11.093 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.093 247708 DEBUG nova.storage.rbd_utils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:11.095 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.098 247708 DEBUG nova.objects.instance [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.100 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.160 247708 DEBUG nova.storage.rbd_utils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:11.189 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:11.190 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:11.190 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.200 247708 DEBUG nova.storage.rbd_utils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.205 247708 DEBUG oslo_concurrency.lockutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "0d77e380bf9c9cb4c589a1f57a933a0495afb697" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:11 compute-0 nova_compute[247704]: 2026-01-31 08:17:11.207 247708 DEBUG oslo_concurrency.lockutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "0d77e380bf9c9cb4c589a1f57a933a0495afb697" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 3.4 MiB/s wr, 46 op/s
Jan 31 08:17:11 compute-0 podman[337280]: 2026-01-31 08:17:11.892431402 +0000 UTC m=+0.063360895 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:17:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/595431473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1102898705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.124 247708 DEBUG nova.network.neutron [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Successfully created port: 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.456 247708 DEBUG nova.compute.manager [req-160569ff-b6d7-4c5c-8797-ab0c05a46c6e req-cdb0caed-8c83-4cff-9f1f-8e539dcf52f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.456 247708 DEBUG oslo_concurrency.lockutils [req-160569ff-b6d7-4c5c-8797-ab0c05a46c6e req-cdb0caed-8c83-4cff-9f1f-8e539dcf52f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.457 247708 DEBUG oslo_concurrency.lockutils [req-160569ff-b6d7-4c5c-8797-ab0c05a46c6e req-cdb0caed-8c83-4cff-9f1f-8e539dcf52f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.457 247708 DEBUG oslo_concurrency.lockutils [req-160569ff-b6d7-4c5c-8797-ab0c05a46c6e req-cdb0caed-8c83-4cff-9f1f-8e539dcf52f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.457 247708 DEBUG nova.compute.manager [req-160569ff-b6d7-4c5c-8797-ab0c05a46c6e req-cdb0caed-8c83-4cff-9f1f-8e539dcf52f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.458 247708 WARNING nova.compute.manager [req-160569ff-b6d7-4c5c-8797-ab0c05a46c6e req-cdb0caed-8c83-4cff-9f1f-8e539dcf52f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state active and task_state rescuing.
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.526 247708 DEBUG nova.virt.libvirt.imagebackend [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/2ca7a053-cec0-4d6e-8cfb-6697bc58e21c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/2ca7a053-cec0-4d6e-8cfb-6697bc58e21c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.589 247708 DEBUG nova.virt.libvirt.imagebackend [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/2ca7a053-cec0-4d6e-8cfb-6697bc58e21c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.590 247708 DEBUG nova.storage.rbd_utils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] cloning images/2ca7a053-cec0-4d6e-8cfb-6697bc58e21c@snap to None/743cf933-3139-4c25-9c75-b45150274ae3_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:17:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:12.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.738 247708 DEBUG oslo_concurrency.lockutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "0d77e380bf9c9cb4c589a1f57a933a0495afb697" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.796 247708 DEBUG nova.objects.instance [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'migration_context' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:12.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.960 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.965 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Start _get_guest_xml network_info=[{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:79:71:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '2ca7a053-cec0-4d6e-8cfb-6697bc58e21c', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'f59239f8-ac09-4b3b-a97a-ef159be2d50b', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ddc44c03-3580-457e-b1d4-9a35d3c393e8', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ddc44c03-3580-457e-b1d4-9a35d3c393e8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '743cf933-3139-4c25-9c75-b45150274ae3', 'attached_at': '', 'detached_at': '', 'volume_id': 'ddc44c03-3580-457e-b1d4-9a35d3c393e8', 'serial': 'ddc44c03-3580-457e-b1d4-9a35d3c393e8'}, 'mount_device': '/dev/vdb', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:17:12 compute-0 nova_compute[247704]: 2026-01-31 08:17:12.966 247708 DEBUG nova.objects.instance [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'resources' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:13 compute-0 ceph-mon[74496]: pgmap v2461: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 3.4 MiB/s wr, 46 op/s
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.026 247708 WARNING nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.042 247708 DEBUG nova.virt.libvirt.host [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.043 247708 DEBUG nova.virt.libvirt.host [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.047 247708 DEBUG nova.virt.libvirt.host [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.048 247708 DEBUG nova.virt.libvirt.host [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.049 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.049 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.050 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.050 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.051 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.051 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.051 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.051 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.052 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.052 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.052 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.053 247708 DEBUG nova.virt.hardware [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.053 247708 DEBUG nova.objects.instance [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.098 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.100 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.100 247708 INFO nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Creating image(s)
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.101 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.101 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Ensure instance console log exists: /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.102 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.102 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.102 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.157 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193268699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.639 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.689 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:13 compute-0 nova_compute[247704]: 2026-01-31 08:17:13.817 247708 DEBUG nova.network.neutron [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Successfully updated port: 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:17:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:17:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4193268699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:14 compute-0 ceph-mon[74496]: pgmap v2462: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.078 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.079 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquired lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.079 247708 DEBUG nova.network.neutron [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:17:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4054031621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.150 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.365 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.408 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.431 247708 DEBUG nova.compute.manager [req-e2704bb5-a264-4ef5-8ad6-eb4e9417f149 req-07348a6f-01b9-4e1d-8fb4-af75496c7115 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-changed-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.432 247708 DEBUG nova.compute.manager [req-e2704bb5-a264-4ef5-8ad6-eb4e9417f149 req-07348a6f-01b9-4e1d-8fb4-af75496c7115 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Refreshing instance network info cache due to event network-changed-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.432 247708 DEBUG oslo_concurrency.lockutils [req-e2704bb5-a264-4ef5-8ad6-eb4e9417f149 req-07348a6f-01b9-4e1d-8fb4-af75496c7115 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:14.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.670 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.780 247708 DEBUG nova.network.neutron [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:17:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253487533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:14.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.895 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.897 247708 DEBUG nova.virt.libvirt.vif [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-107442355',display_name='tempest-ServerStableDeviceRescueTest-server-107442355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-107442355',id=135,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbP3gf7LmroDDYk0USWIG5AuQO84YQj17XtBS8JALIesJpm7oyiz9kFGiUhk5vHZ+8lJiStdnRkjJm79czkwPbDOitiF7fhmca3/rEfJLTDNBTZ+uam26C3TTY99imMvg==',key_name='tempest-keypair-719720114',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-401rbz0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:16:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=743cf933-3139-4c25-9c75-b45150274ae3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:79:71:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.898 247708 DEBUG nova.network.os_vif_util [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "vif_mac": "fa:16:3e:79:71:3b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.899 247708 DEBUG nova.network.os_vif_util [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:71:3b,bridge_name='br-int',has_traffic_filtering=True,id=e33f46e9-c188-4c53-b863-93620a4c3452,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape33f46e9-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.902 247708 DEBUG nova.objects.instance [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'pci_devices' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.938 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <uuid>743cf933-3139-4c25-9c75-b45150274ae3</uuid>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <name>instance-00000087</name>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-107442355</nova:name>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:17:13</nova:creationTime>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <nova:user uuid="d7d9a44201d548aba1e1654e136ddd06">tempest-ServerStableDeviceRescueTest-569420416-project-member</nova:user>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <nova:project uuid="1633c84ea1bf46b080aaafd30bbcf25f">tempest-ServerStableDeviceRescueTest-569420416</nova:project>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <nova:port uuid="e33f46e9-c188-4c53-b863-93620a4c3452">
Jan 31 08:17:14 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <system>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <entry name="serial">743cf933-3139-4c25-9c75-b45150274ae3</entry>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <entry name="uuid">743cf933-3139-4c25-9c75-b45150274ae3</entry>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </system>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <os>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   </os>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <features>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   </features>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/743cf933-3139-4c25-9c75-b45150274ae3_disk">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </source>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/743cf933-3139-4c25-9c75-b45150274ae3_disk.config">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </source>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-ddc44c03-3580-457e-b1d4-9a35d3c393e8">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </source>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <serial>ddc44c03-3580-457e-b1d4-9a35d3c393e8</serial>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/743cf933-3139-4c25-9c75-b45150274ae3_disk.rescue">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </source>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:17:14 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <target dev="vdc" bus="virtio"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <boot order="1"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:79:71:3b"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <target dev="tape33f46e9-c1"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/console.log" append="off"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <video>
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </video>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:17:14 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:17:14 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:17:14 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:17:14 compute-0 nova_compute[247704]: </domain>
Jan 31 08:17:14 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:17:14 compute-0 nova_compute[247704]: 2026-01-31 08:17:14.948 247708 INFO nova.virt.libvirt.driver [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance destroyed successfully.
Jan 31 08:17:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4054031621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2253487533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.356 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.357 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.358 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.358 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.358 247708 DEBUG nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] No VIF found with MAC fa:16:3e:79:71:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.359 247708 INFO nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Using config drive
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.399 247708 DEBUG nova.storage.rbd_utils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.437 247708 DEBUG nova.objects.instance [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.492 247708 DEBUG nova.objects.instance [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'keypairs' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.678 247708 DEBUG nova.compute.manager [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.678 247708 DEBUG oslo_concurrency.lockutils [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.678 247708 DEBUG oslo_concurrency.lockutils [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.679 247708 DEBUG oslo_concurrency.lockutils [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.679 247708 DEBUG nova.compute.manager [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:15 compute-0 nova_compute[247704]: 2026-01-31 08:17:15.679 247708 WARNING nova.compute.manager [req-53a9884d-b9ae-4721-bfcb-60aeb9430e84 req-d22105af-b88e-4610-ae5a-2df223f64826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state active and task_state rescuing.
Jan 31 08:17:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 850 KiB/s wr, 43 op/s
Jan 31 08:17:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:16.097 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:16 compute-0 nova_compute[247704]: 2026-01-31 08:17:16.228 247708 INFO nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Creating config drive at /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config.rescue
Jan 31 08:17:16 compute-0 nova_compute[247704]: 2026-01-31 08:17:16.233 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz9lp0v3t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:16 compute-0 nova_compute[247704]: 2026-01-31 08:17:16.367 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpz9lp0v3t" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:16 compute-0 nova_compute[247704]: 2026-01-31 08:17:16.406 247708 DEBUG nova.storage.rbd_utils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] rbd image 743cf933-3139-4c25-9c75-b45150274ae3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:16 compute-0 nova_compute[247704]: 2026-01-31 08:17:16.410 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config.rescue 743cf933-3139-4c25-9c75-b45150274ae3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:16 compute-0 ceph-mon[74496]: pgmap v2463: 305 pgs: 305 active+clean; 754 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 850 KiB/s wr, 43 op/s
Jan 31 08:17:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:16.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:16 compute-0 nova_compute[247704]: 2026-01-31 08:17:16.994 247708 DEBUG oslo_concurrency.processutils [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config.rescue 743cf933-3139-4c25-9c75-b45150274ae3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:16 compute-0 nova_compute[247704]: 2026-01-31 08:17:16.995 247708 INFO nova.virt.libvirt.driver [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Deleting local config drive /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3/disk.config.rescue because it was imported into RBD.
Jan 31 08:17:17 compute-0 kernel: tape33f46e9-c1: entered promiscuous mode
Jan 31 08:17:17 compute-0 NetworkManager[49108]: <info>  [1769847437.0615] manager: (tape33f46e9-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 ovn_controller[149457]: 2026-01-31T08:17:17Z|00558|binding|INFO|Claiming lport e33f46e9-c188-4c53-b863-93620a4c3452 for this chassis.
Jan 31 08:17:17 compute-0 ovn_controller[149457]: 2026-01-31T08:17:17Z|00559|binding|INFO|e33f46e9-c188-4c53-b863-93620a4c3452: Claiming fa:16:3e:79:71:3b 10.100.0.4
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.069 247708 DEBUG nova.network.neutron [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updating instance_info_cache with network_info: [{"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.075 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:71:3b 10.100.0.4'], port_security=['fa:16:3e:79:71:3b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '743cf933-3139-4c25-9c75-b45150274ae3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '11deed3f-7193-4325-affb-77a66beb8424', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e33f46e9-c188-4c53-b863-93620a4c3452) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:17 compute-0 ovn_controller[149457]: 2026-01-31T08:17:17Z|00560|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 ovn-installed in OVS
Jan 31 08:17:17 compute-0 ovn_controller[149457]: 2026-01-31T08:17:17Z|00561|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 up in Southbound
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.078 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e33f46e9-c188-4c53-b863-93620a4c3452 in datapath 088d6992-6ba6-4719-a977-b3d306740157 bound to our chassis
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.081 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.085 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.093 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5c3e09-15fb-41cf-b025-4532b7d4d168]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.094 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap088d6992-61 in ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.096 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap088d6992-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.097 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[eacaffa0-b6f2-4d2d-8f1c-b1abc5eeed4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.098 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2d5c96-8047-4931-911e-eabfff64e851]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 systemd-machined[214448]: New machine qemu-58-instance-00000087.
Jan 31 08:17:17 compute-0 systemd-udevd[337546]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.110 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Releasing lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.111 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance network_info: |[{"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.111 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[0d95f150-4151-4000-8ffc-00e77a9d87fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.112 247708 DEBUG oslo_concurrency.lockutils [req-e2704bb5-a264-4ef5-8ad6-eb4e9417f149 req-07348a6f-01b9-4e1d-8fb4-af75496c7115 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.113 247708 DEBUG nova.network.neutron [req-e2704bb5-a264-4ef5-8ad6-eb4e9417f149 req-07348a6f-01b9-4e1d-8fb4-af75496c7115 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Refreshing network info cache for port 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:17:17 compute-0 NetworkManager[49108]: <info>  [1769847437.1178] device (tape33f46e9-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.117 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Start _get_guest_xml network_info=[{"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '602d0d17-75b2-4e80-9833-94a1008d60c6', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-28bdb463-7d0a-42f5-8392-3acd847cfe3e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '28bdb463-7d0a-42f5-8392-3acd847cfe3e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '83252cb5-25d7-40e3-823d-02d1d0eb73f1', 'attached_at': '', 'detached_at': '', 'volume_id': '28bdb463-7d0a-42f5-8392-3acd847cfe3e', 'serial': '28bdb463-7d0a-42f5-8392-3acd847cfe3e'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:17:17 compute-0 NetworkManager[49108]: <info>  [1769847437.1186] device (tape33f46e9-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:17:17 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000087.
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.123 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2af78e-d627-44a2-90f4-437abaa8cb4c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.125 247708 WARNING nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.134 247708 DEBUG nova.virt.libvirt.host [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.135 247708 DEBUG nova.virt.libvirt.host [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.145 247708 DEBUG nova.virt.libvirt.host [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.146 247708 DEBUG nova.virt.libvirt.host [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.147 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.148 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.148 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.149 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.149 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.149 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.149 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.150 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.150 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.150 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.151 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.151 247708 DEBUG nova.virt.hardware [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.158 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[acaa6338-5ee5-4e82-9d59-b82c1983a493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.163 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3dce6c24-ceac-427d-9329-9c588474c9bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 NetworkManager[49108]: <info>  [1769847437.1650] manager: (tap088d6992-60): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.194 247708 DEBUG nova.storage.rbd_utils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] rbd image 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.197 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3b29596b-cc2d-4f68-a67a-0b6ab8d42cea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.200 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.201 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cbb46d-5bcf-4b1c-8e70-0a7eb96ff12f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 NetworkManager[49108]: <info>  [1769847437.2329] device (tap088d6992-60): carrier: link connected
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.241 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7670b032-cf2c-41e5-bb01-6fef3a94e790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.258 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fba3c59b-2f88-42c0-a17e-989a606c0ad3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 767985, 'reachable_time': 20164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337597, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.274 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f07c5fb8-6f32-4985-867f-2a2a7f459387]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:87bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 767985, 'tstamp': 767985}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337598, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.292 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fddc15ce-408d-4da9-b634-88a56538f17c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 767985, 'reachable_time': 20164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337599, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.328 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[85b9361d-e1fd-4b64-a1a3-cf26e2661cdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 sudo[337600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:17 compute-0 sudo[337600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:17 compute-0 sudo[337600]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.398 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ec5b0b-59c1-4e77-bf21-da0faea93a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.400 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.400 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.400 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.403 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 NetworkManager[49108]: <info>  [1769847437.4045] manager: (tap088d6992-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Jan 31 08:17:17 compute-0 kernel: tap088d6992-60: entered promiscuous mode
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.406 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.408 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 ovn_controller[149457]: 2026-01-31T08:17:17Z|00562|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.415 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.418 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.419 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[11284ad6-08f2-4778-bf6f-587417807a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.425 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:17:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:17.426 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'env', 'PROCESS_TAG=haproxy-088d6992-6ba6-4719-a977-b3d306740157', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/088d6992-6ba6-4719-a977-b3d306740157.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:17:17 compute-0 sudo[337668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:17 compute-0 sudo[337668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:17 compute-0 sudo[337668]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:17 compute-0 sudo[337712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:17 compute-0 sudo[337712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:17 compute-0 sudo[337712]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:17 compute-0 sudo[337756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:17:17 compute-0 sudo[337756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.690 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.722 247708 DEBUG nova.virt.libvirt.vif [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:17:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-20534748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-20534748',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTBDp6a0SyMFs7OU2ywUkjN63r2N0PxPaRE6bSDGEX+Eq/ObkyuaXGME0SMKDkMYoPYqAUjzxLW21QhzrJW+bRoGYJmTS4PrdUXkRuBCox1bDVqwWhiTbfXfC+MijeFQ==',key_name='tempest-keypair-1991810396',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bda3ebf6541d46309fc9b2ce089dd857',ramdisk_id='',reservation_id='r-canzs8i7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name=
'tempest-ServerActionsV293TestJSON-284184156',owner_user_name='tempest-ServerActionsV293TestJSON-284184156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90313c82677c4144953c58efc0e13c3e',uuid=83252cb5-25d7-40e3-823d-02d1d0eb73f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.725 247708 DEBUG nova.network.os_vif_util [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converting VIF {"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2807485188' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.727 247708 DEBUG nova.network.os_vif_util [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.729 247708 DEBUG nova.objects.instance [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'pci_devices' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.764 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <uuid>83252cb5-25d7-40e3-823d-02d1d0eb73f1</uuid>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <name>instance-0000008b</name>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerActionsV293TestJSON-server-20534748</nova:name>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:17:17</nova:creationTime>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <nova:user uuid="90313c82677c4144953c58efc0e13c3e">tempest-ServerActionsV293TestJSON-284184156-project-member</nova:user>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <nova:project uuid="bda3ebf6541d46309fc9b2ce089dd857">tempest-ServerActionsV293TestJSON-284184156</nova:project>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <nova:port uuid="1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7">
Jan 31 08:17:17 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <system>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <entry name="serial">83252cb5-25d7-40e3-823d-02d1d0eb73f1</entry>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <entry name="uuid">83252cb5-25d7-40e3-823d-02d1d0eb73f1</entry>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </system>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <os>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   </os>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <features>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   </features>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config">
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       </source>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-28bdb463-7d0a-42f5-8392-3acd847cfe3e">
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       </source>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:17:17 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <serial>28bdb463-7d0a-42f5-8392-3acd847cfe3e</serial>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:85:b5:29"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <target dev="tap1cf8d8b8-8d"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/console.log" append="off"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <video>
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </video>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:17:17 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:17:17 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:17:17 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:17:17 compute-0 nova_compute[247704]: </domain>
Jan 31 08:17:17 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.766 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Preparing to wait for external event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.767 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.768 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.768 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.769 247708 DEBUG nova.virt.libvirt.vif [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:17:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-20534748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-20534748',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTBDp6a0SyMFs7OU2ywUkjN63r2N0PxPaRE6bSDGEX+Eq/ObkyuaXGME0SMKDkMYoPYqAUjzxLW21QhzrJW+bRoGYJmTS4PrdUXkRuBCox1bDVqwWhiTbfXfC+MijeFQ==',key_name='tempest-keypair-1991810396',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bda3ebf6541d46309fc9b2ce089dd857',ramdisk_id='',reservation_id='r-canzs8i7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_pro
ject_name='tempest-ServerActionsV293TestJSON-284184156',owner_user_name='tempest-ServerActionsV293TestJSON-284184156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90313c82677c4144953c58efc0e13c3e',uuid=83252cb5-25d7-40e3-823d-02d1d0eb73f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.769 247708 DEBUG nova.network.os_vif_util [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converting VIF {"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.770 247708 DEBUG nova.network.os_vif_util [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.771 247708 DEBUG os_vif [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.772 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.773 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.774 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 743cf933-3139-4c25-9c75-b45150274ae3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.774 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847437.7661388, 743cf933-3139-4c25-9c75-b45150274ae3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.775 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] VM Resumed (Lifecycle Event)
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.779 247708 DEBUG nova.compute.manager [None req-69fb07ba-3cfe-4f6f-9e6b-02d7e526bc9d d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.785 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.786 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cf8d8b8-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.786 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cf8d8b8-8d, col_values=(('external_ids', {'iface-id': '1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:b5:29', 'vm-uuid': '83252cb5-25d7-40e3-823d-02d1d0eb73f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.788 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.789 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:17:17 compute-0 NetworkManager[49108]: <info>  [1769847437.7904] manager: (tap1cf8d8b8-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.804 247708 INFO os_vif [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d')
Jan 31 08:17:17 compute-0 podman[337844]: 2026-01-31 08:17:17.841202814 +0000 UTC m=+0.092377792 container create 0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:17:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 974 KiB/s rd, 48 KiB/s wr, 69 op/s
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.872 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:17 compute-0 podman[337844]: 2026-01-31 08:17:17.782888953 +0000 UTC m=+0.034063951 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:17:17 compute-0 nova_compute[247704]: 2026-01-31 08:17:17.881 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:17 compute-0 systemd[1]: Started libpod-conmon-0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336.scope.
Jan 31 08:17:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f9c06873bad39e50c073c4c0ccdad2852bb6ef8a541673b01ccb0ca1860ef8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:17 compute-0 podman[337844]: 2026-01-31 08:17:17.936898346 +0000 UTC m=+0.188073354 container init 0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:17:17 compute-0 podman[337844]: 2026-01-31 08:17:17.941710764 +0000 UTC m=+0.192885742 container start 0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:17:17 compute-0 sudo[337756]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:17 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[337864]: [NOTICE]   (337880) : New worker (337882) forked
Jan 31 08:17:17 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[337864]: [NOTICE]   (337880) : Loading success.
Jan 31 08:17:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:18.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:18 compute-0 ceph-mon[74496]: pgmap v2464: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 974 KiB/s rd, 48 KiB/s wr, 69 op/s
Jan 31 08:17:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:18.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:17:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:17:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:19 compute-0 nova_compute[247704]: 2026-01-31 08:17:19.672 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:17:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:17:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:17:19 compute-0 nova_compute[247704]: 2026-01-31 08:17:19.802 247708 DEBUG nova.network.neutron [req-e2704bb5-a264-4ef5-8ad6-eb4e9417f149 req-07348a6f-01b9-4e1d-8fb4-af75496c7115 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updated VIF entry in instance network info cache for port 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:17:19 compute-0 nova_compute[247704]: 2026-01-31 08:17:19.803 247708 DEBUG nova.network.neutron [req-e2704bb5-a264-4ef5-8ad6-eb4e9417f149 req-07348a6f-01b9-4e1d-8fb4-af75496c7115 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updating instance_info_cache with network_info: [{"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:17:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 974 KiB/s rd, 48 KiB/s wr, 69 op/s
Jan 31 08:17:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:19 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 67a1bdc9-2202-4557-ae22-45864e7a03ee does not exist
Jan 31 08:17:19 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3fcc9e89-5932-4096-97fd-da1604b2c775 does not exist
Jan 31 08:17:19 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c7014aed-6eb3-4756-82d8-2e0b6c28b306 does not exist
Jan 31 08:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:17:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:17:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:17:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:17:19 compute-0 sudo[337892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:19 compute-0 sudo[337892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:19 compute-0 sudo[337892]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:19 compute-0 sudo[337917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:19 compute-0 sudo[337917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:19 compute-0 sudo[337917]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:20 compute-0 sudo[337942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:20 compute-0 sudo[337942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:20 compute-0 sudo[337942]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:20 compute-0 sudo[337967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:17:20 compute-0 sudo[337967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.151 247708 DEBUG nova.compute.manager [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.151 247708 DEBUG oslo_concurrency.lockutils [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.152 247708 DEBUG oslo_concurrency.lockutils [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.152 247708 DEBUG oslo_concurrency.lockutils [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:17:20
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.152 247708 DEBUG nova.compute.manager [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups', 'vms', '.rgw.root', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control']
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.153 247708 WARNING nova.compute.manager [req-07fd2d6d-43ae-4dee-a2c6-08f1a3af3795 req-271eb04f-4a27-4dc9-b902-cf421e77a127 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state active and task_state rescuing.
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:17:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:17:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:17:20 compute-0 ceph-mon[74496]: pgmap v2465: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 974 KiB/s rd, 48 KiB/s wr, 69 op/s
Jan 31 08:17:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:17:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:17:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.255 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.255 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847437.7672946, 743cf933-3139-4c25-9c75-b45150274ae3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.255 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] VM Started (Lifecycle Event)
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.263 247708 DEBUG oslo_concurrency.lockutils [req-e2704bb5-a264-4ef5-8ad6-eb4e9417f149 req-07348a6f-01b9-4e1d-8fb4-af75496c7115 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.270 247708 INFO nova.compute.manager [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Unrescuing
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.271 247708 DEBUG oslo_concurrency.lockutils [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.271 247708 DEBUG oslo_concurrency.lockutils [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.271 247708 DEBUG nova.network.neutron [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.287 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.288 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.288 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] No VIF found with MAC fa:16:3e:85:b5:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.289 247708 INFO nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Using config drive
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.321 247708 DEBUG nova.storage.rbd_utils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] rbd image 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.336 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.341 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:20 compute-0 nova_compute[247704]: 2026-01-31 08:17:20.378 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:17:20 compute-0 podman[338048]: 2026-01-31 08:17:20.441380442 +0000 UTC m=+0.059878470 container create 1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:17:20 compute-0 systemd[1]: Started libpod-conmon-1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4.scope.
Jan 31 08:17:20 compute-0 podman[338048]: 2026-01-31 08:17:20.411157675 +0000 UTC m=+0.029655733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:17:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:20 compute-0 podman[338048]: 2026-01-31 08:17:20.531271832 +0000 UTC m=+0.149769870 container init 1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:17:20 compute-0 podman[338048]: 2026-01-31 08:17:20.536812647 +0000 UTC m=+0.155310675 container start 1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:17:20 compute-0 podman[338048]: 2026-01-31 08:17:20.540431656 +0000 UTC m=+0.158929824 container attach 1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:17:20 compute-0 competent_satoshi[338065]: 167 167
Jan 31 08:17:20 compute-0 systemd[1]: libpod-1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4.scope: Deactivated successfully.
Jan 31 08:17:20 compute-0 podman[338048]: 2026-01-31 08:17:20.547956619 +0000 UTC m=+0.166454637 container died 1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:17:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f84696348e3e51f85753b9b1230309d36cbc8c06cbecd9f7ddc0d07b121431f3-merged.mount: Deactivated successfully.
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:17:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:20.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:17:20 compute-0 podman[338048]: 2026-01-31 08:17:20.628491021 +0000 UTC m=+0.246989049 container remove 1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:17:20 compute-0 systemd[1]: libpod-conmon-1849d40eb2e19c52593060cdbc9aacbd9ed1cecb0e2d98ce35d26d94aebf15b4.scope: Deactivated successfully.
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:17:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:17:20 compute-0 podman[338091]: 2026-01-31 08:17:20.784027951 +0000 UTC m=+0.041371589 container create aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:17:20 compute-0 systemd[1]: Started libpod-conmon-aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7.scope.
Jan 31 08:17:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b5d5f3a11609dd494596861bda52c595486c78052055091c857370315d66ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b5d5f3a11609dd494596861bda52c595486c78052055091c857370315d66ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b5d5f3a11609dd494596861bda52c595486c78052055091c857370315d66ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b5d5f3a11609dd494596861bda52c595486c78052055091c857370315d66ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b5d5f3a11609dd494596861bda52c595486c78052055091c857370315d66ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:20 compute-0 podman[338091]: 2026-01-31 08:17:20.766283409 +0000 UTC m=+0.023627077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:17:20 compute-0 podman[338091]: 2026-01-31 08:17:20.887477042 +0000 UTC m=+0.144820710 container init aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:17:20 compute-0 podman[338091]: 2026-01-31 08:17:20.897944817 +0000 UTC m=+0.155288455 container start aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:17:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:20.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:20 compute-0 podman[338091]: 2026-01-31 08:17:20.905592893 +0000 UTC m=+0.162936621 container attach aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.289 247708 INFO nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Creating config drive at /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.295 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppw0z9ng_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.432 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppw0z9ng_" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.459 247708 DEBUG nova.storage.rbd_utils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] rbd image 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.465 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:21 compute-0 charming_darwin[338108]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:17:21 compute-0 charming_darwin[338108]: --> relative data size: 1.0
Jan 31 08:17:21 compute-0 charming_darwin[338108]: --> All data devices are unavailable
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.697 247708 DEBUG oslo_concurrency.processutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.699 247708 INFO nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Deleting local config drive /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config because it was imported into RBD.
Jan 31 08:17:21 compute-0 systemd[1]: libpod-aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7.scope: Deactivated successfully.
Jan 31 08:17:21 compute-0 podman[338091]: 2026-01-31 08:17:21.733503896 +0000 UTC m=+0.990847534 container died aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:17:21 compute-0 NetworkManager[49108]: <info>  [1769847441.7533] manager: (tap1cf8d8b8-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Jan 31 08:17:21 compute-0 kernel: tap1cf8d8b8-8d: entered promiscuous mode
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.760 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:21 compute-0 ovn_controller[149457]: 2026-01-31T08:17:21Z|00563|binding|INFO|Claiming lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 for this chassis.
Jan 31 08:17:21 compute-0 ovn_controller[149457]: 2026-01-31T08:17:21Z|00564|binding|INFO|1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7: Claiming fa:16:3e:85:b5:29 10.100.0.3
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.775 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:b5:29 10.100.0.3'], port_security=['fa:16:3e:85:b5:29 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '83252cb5-25d7-40e3-823d-02d1d0eb73f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26cb596-7b93-4afd-9b50-133e9afd768d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bda3ebf6541d46309fc9b2ce089dd857', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc88ce0b-3f35-4088-82b2-3c421cb8a010', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06815ca2-b7af-4c41-816c-402a584fa466, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:21 compute-0 ovn_controller[149457]: 2026-01-31T08:17:21Z|00565|binding|INFO|Setting lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 ovn-installed in OVS
Jan 31 08:17:21 compute-0 ovn_controller[149457]: 2026-01-31T08:17:21Z|00566|binding|INFO|Setting lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 up in Southbound
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.778 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 in datapath c26cb596-7b93-4afd-9b50-133e9afd768d bound to our chassis
Jan 31 08:17:21 compute-0 nova_compute[247704]: 2026-01-31 08:17:21.783 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7b5d5f3a11609dd494596861bda52c595486c78052055091c857370315d66ab-merged.mount: Deactivated successfully.
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.789 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26cb596-7b93-4afd-9b50-133e9afd768d
Jan 31 08:17:21 compute-0 systemd-machined[214448]: New machine qemu-59-instance-0000008b.
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.803 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[43bf569e-84ac-4765-8fc4-21a021a42297]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.804 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc26cb596-71 in ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.806 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc26cb596-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.807 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5f3ca8-6b4d-470f-9272-c96c83dacf90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.807 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d44fd69a-96b2-43a7-bc18-2fac3a8b5d1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-0000008b.
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.819 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff6a97b-8667-4f4d-a3cc-89b1a1a3b218]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 podman[338091]: 2026-01-31 08:17:21.836175988 +0000 UTC m=+1.093519626 container remove aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_darwin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 31 08:17:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 56 KiB/s wr, 150 op/s
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.845 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bca54c89-cee8-406c-88f5-b8a99e9ba74c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 systemd-udevd[338192]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:17:21 compute-0 systemd[1]: libpod-conmon-aa7d993f97df4f034f8d440a1abd019172359e2da313de0685a8c0848dd653a7.scope: Deactivated successfully.
Jan 31 08:17:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 31 08:17:21 compute-0 NetworkManager[49108]: <info>  [1769847441.8683] device (tap1cf8d8b8-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:17:21 compute-0 NetworkManager[49108]: <info>  [1769847441.8695] device (tap1cf8d8b8-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:17:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 31 08:17:21 compute-0 sudo[337967]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.889 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ea27107e-5443-4356-ac79-a57575eaa992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 NetworkManager[49108]: <info>  [1769847441.8953] manager: (tapc26cb596-70): new Veth device (/org/freedesktop/NetworkManager/Devices/256)
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.894 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[da3a88ca-22f3-4627-bbde-bf2b722ac493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.924 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[535a0b4e-e38a-441a-8f70-cf76c391eb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.928 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[433eb4f1-43c5-4a88-a3b8-97a692a9f66b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 sudo[338202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:21 compute-0 sudo[338202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:21 compute-0 sudo[338202]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:21 compute-0 NetworkManager[49108]: <info>  [1769847441.9606] device (tapc26cb596-70): carrier: link connected
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.970 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9488e2-3e55-44b5-a872-365c5e2980e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:21.992 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2e794c-e970-4c43-b341-645fb94738c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26cb596-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:20:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768457, 'reachable_time': 41547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338259, 'error': None, 'target': 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-0 sudo[338246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:22 compute-0 sudo[338246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:22 compute-0 sudo[338246]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.017 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c84cf9c6-a212-494f-a11d-65911d1780ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:2057'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 768457, 'tstamp': 768457}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338271, 'error': None, 'target': 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.038 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc1b391-56bd-405a-8f0a-9029ff3b3843]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26cb596-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:20:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768457, 'reachable_time': 41547, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338280, 'error': None, 'target': 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-0 sudo[338273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:22 compute-0 sudo[338273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:22 compute-0 sudo[338273]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.070 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4da49a5f-ef1f-4de1-8a28-335427eb3f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-0 sudo[338301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:17:22 compute-0 sudo[338301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.117 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b64fe2-7954-4ff4-acf7-e6e4f3e8fcb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.119 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26cb596-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.119 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.119 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26cb596-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:22 compute-0 NetworkManager[49108]: <info>  [1769847442.1228] manager: (tapc26cb596-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Jan 31 08:17:22 compute-0 kernel: tapc26cb596-70: entered promiscuous mode
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.126 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.126 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26cb596-70, col_values=(('external_ids', {'iface-id': '9c7eeaf4-153c-4957-b0ec-6ca490c32a88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:22 compute-0 ovn_controller[149457]: 2026-01-31T08:17:22Z|00567|binding|INFO|Releasing lport 9c7eeaf4-153c-4957-b0ec-6ca490c32a88 from this chassis (sb_readonly=0)
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.129 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c26cb596-7b93-4afd-9b50-133e9afd768d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c26cb596-7b93-4afd-9b50-133e9afd768d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.130 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7d23614a-48b4-482a-9f10-817decdf5142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.131 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-c26cb596-7b93-4afd-9b50-133e9afd768d
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/c26cb596-7b93-4afd-9b50-133e9afd768d.pid.haproxy
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID c26cb596-7b93-4afd-9b50-133e9afd768d
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:17:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:22.133 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'env', 'PROCESS_TAG=haproxy-c26cb596-7b93-4afd-9b50-133e9afd768d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c26cb596-7b93-4afd-9b50-133e9afd768d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.140 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.341 247708 DEBUG nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.341 247708 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.342 247708 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.342 247708 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.342 247708 DEBUG nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.342 247708 WARNING nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.343 247708 DEBUG nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.343 247708 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.343 247708 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.343 247708 DEBUG oslo_concurrency.lockutils [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.344 247708 DEBUG nova.compute.manager [req-ed24e923-0463-4ee3-9f78-bc37abac70dd req-4201fb65-4f35-45b8-8e2e-cdde5f03e222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Processing event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:17:22 compute-0 podman[338379]: 2026-01-31 08:17:22.480815625 +0000 UTC m=+0.082345257 container create da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:17:22 compute-0 podman[338379]: 2026-01-31 08:17:22.429187988 +0000 UTC m=+0.030717650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:17:22 compute-0 systemd[1]: Started libpod-conmon-da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a.scope.
Jan 31 08:17:22 compute-0 podman[338402]: 2026-01-31 08:17:22.543501283 +0000 UTC m=+0.098816039 container create e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:17:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:22 compute-0 systemd[1]: Started libpod-conmon-e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9.scope.
Jan 31 08:17:22 compute-0 podman[338402]: 2026-01-31 08:17:22.491970227 +0000 UTC m=+0.047284983 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:17:22 compute-0 podman[338379]: 2026-01-31 08:17:22.600743687 +0000 UTC m=+0.202273339 container init da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 08:17:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:22 compute-0 podman[338379]: 2026-01-31 08:17:22.606633521 +0000 UTC m=+0.208163173 container start da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:17:22 compute-0 boring_curran[338423]: 167 167
Jan 31 08:17:22 compute-0 systemd[1]: libpod-da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a.scope: Deactivated successfully.
Jan 31 08:17:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b000d246d0c958a4feb3846bc286fc1cef7df2b870694a9543b22abf005106d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:22.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:22 compute-0 podman[338379]: 2026-01-31 08:17:22.627986292 +0000 UTC m=+0.229515944 container attach da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curran, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:17:22 compute-0 podman[338379]: 2026-01-31 08:17:22.629031996 +0000 UTC m=+0.230561628 container died da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curran, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:17:22 compute-0 podman[338402]: 2026-01-31 08:17:22.665005733 +0000 UTC m=+0.220320509 container init e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4eb6c3091aa438b3e9311bbe854667e2728a438b1187493d9e0e02b9aba6f0b-merged.mount: Deactivated successfully.
Jan 31 08:17:22 compute-0 podman[338402]: 2026-01-31 08:17:22.676036802 +0000 UTC m=+0.231351558 container start e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:17:22 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[338440]: [NOTICE]   (338458) : New worker (338476) forked
Jan 31 08:17:22 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[338440]: [NOTICE]   (338458) : Loading success.
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.712 247708 DEBUG nova.network.neutron [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.744 247708 DEBUG oslo_concurrency.lockutils [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.745 247708 DEBUG nova.objects.instance [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'flavor' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:22 compute-0 podman[338379]: 2026-01-31 08:17:22.748672022 +0000 UTC m=+0.350201654 container remove da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_curran, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 08:17:22 compute-0 systemd[1]: libpod-conmon-da81771e3c0d4222b2d07a121ee0e5dcdbf9be90d791ab5afa9b8a1133c73b9a.scope: Deactivated successfully.
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.788 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-0 kernel: tape33f46e9-c1 (unregistering): left promiscuous mode
Jan 31 08:17:22 compute-0 NetworkManager[49108]: <info>  [1769847442.8767] device (tape33f46e9-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:17:22 compute-0 ceph-mon[74496]: pgmap v2466: 305 pgs: 305 active+clean; 755 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 56 KiB/s wr, 150 op/s
Jan 31 08:17:22 compute-0 ceph-mon[74496]: osdmap e314: 3 total, 3 up, 3 in
Jan 31 08:17:22 compute-0 ovn_controller[149457]: 2026-01-31T08:17:22Z|00568|binding|INFO|Releasing lport e33f46e9-c188-4c53-b863-93620a4c3452 from this chassis (sb_readonly=0)
Jan 31 08:17:22 compute-0 ovn_controller[149457]: 2026-01-31T08:17:22Z|00569|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 down in Southbound
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.886 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-0 ovn_controller[149457]: 2026-01-31T08:17:22Z|00570|binding|INFO|Removing iface tape33f46e9-c1 ovn-installed in OVS
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.891 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.898 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.901 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.901 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847442.9006693, 83252cb5-25d7-40e3-823d-02d1d0eb73f1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.902 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] VM Started (Lifecycle Event)
Jan 31 08:17:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.905 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:17:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:22.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.910 247708 INFO nova.virt.libvirt.driver [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance spawned successfully.
Jan 31 08:17:22 compute-0 nova_compute[247704]: 2026-01-31 08:17:22.911 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:17:22 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 31 08:17:22 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000087.scope: Consumed 5.702s CPU time.
Jan 31 08:17:22 compute-0 podman[338501]: 2026-01-31 08:17:22.925831218 +0000 UTC m=+0.060020433 container create 8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:17:22 compute-0 systemd-machined[214448]: Machine qemu-58-instance-00000087 terminated.
Jan 31 08:17:22 compute-0 systemd[1]: Started libpod-conmon-8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf.scope.
Jan 31 08:17:22 compute-0 podman[338501]: 2026-01-31 08:17:22.90742024 +0000 UTC m=+0.041609445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:17:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5dca48d492e8eedd54346579a7ecdef3354c9643a7221f2e85825b193eaf81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5dca48d492e8eedd54346579a7ecdef3354c9643a7221f2e85825b193eaf81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5dca48d492e8eedd54346579a7ecdef3354c9643a7221f2e85825b193eaf81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5dca48d492e8eedd54346579a7ecdef3354c9643a7221f2e85825b193eaf81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.043 247708 INFO nova.virt.libvirt.driver [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance destroyed successfully.
Jan 31 08:17:23 compute-0 podman[338501]: 2026-01-31 08:17:23.047212776 +0000 UTC m=+0.181402001 container init 8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.046 247708 DEBUG nova.objects.instance [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'numa_topology' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:23 compute-0 podman[338501]: 2026-01-31 08:17:23.057894667 +0000 UTC m=+0.192083862 container start 8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:17:23 compute-0 podman[338501]: 2026-01-31 08:17:23.061771261 +0000 UTC m=+0.195960476 container attach 8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.085 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.092 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:23 compute-0 kernel: tape33f46e9-c1: entered promiscuous mode
Jan 31 08:17:23 compute-0 NetworkManager[49108]: <info>  [1769847443.2255] manager: (tape33f46e9-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Jan 31 08:17:23 compute-0 systemd-udevd[338230]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.226 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:23 compute-0 ovn_controller[149457]: 2026-01-31T08:17:23Z|00571|if_status|INFO|Not updating pb chassis for e33f46e9-c188-4c53-b863-93620a4c3452 now as sb is readonly
Jan 31 08:17:23 compute-0 NetworkManager[49108]: <info>  [1769847443.2394] device (tape33f46e9-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:17:23 compute-0 NetworkManager[49108]: <info>  [1769847443.2403] device (tape33f46e9-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.241 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.242 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847442.9008615, 83252cb5-25d7-40e3-823d-02d1d0eb73f1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.242 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] VM Paused (Lifecycle Event)
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.246 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:23 compute-0 systemd-machined[214448]: New machine qemu-60-instance-00000087.
Jan 31 08:17:23 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000087.
Jan 31 08:17:23 compute-0 ovn_controller[149457]: 2026-01-31T08:17:23Z|00572|binding|INFO|Claiming lport e33f46e9-c188-4c53-b863-93620a4c3452 for this chassis.
Jan 31 08:17:23 compute-0 ovn_controller[149457]: 2026-01-31T08:17:23Z|00573|binding|INFO|e33f46e9-c188-4c53-b863-93620a4c3452: Claiming fa:16:3e:79:71:3b 10.100.0.4
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.275 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:71:3b 10.100.0.4'], port_security=['fa:16:3e:79:71:3b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '743cf933-3139-4c25-9c75-b45150274ae3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '11deed3f-7193-4325-affb-77a66beb8424', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e33f46e9-c188-4c53-b863-93620a4c3452) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.277 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e33f46e9-c188-4c53-b863-93620a4c3452 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis
Jan 31 08:17:23 compute-0 ovn_controller[149457]: 2026-01-31T08:17:23Z|00574|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 ovn-installed in OVS
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.278 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.279 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 088d6992-6ba6-4719-a977-b3d306740157, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.280 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[02170b65-193b-4137-9f2f-8b1625315fce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.281 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace which is not needed anymore
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.286 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:23 compute-0 ovn_controller[149457]: 2026-01-31T08:17:23Z|00575|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 up in Southbound
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.297 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:71:3b 10.100.0.4'], port_security=['fa:16:3e:79:71:3b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '743cf933-3139-4c25-9c75-b45150274ae3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '11deed3f-7193-4325-affb-77a66beb8424', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e33f46e9-c188-4c53-b863-93620a4c3452) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.304 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847442.904323, 83252cb5-25d7-40e3-823d-02d1d0eb73f1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.305 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] VM Resumed (Lifecycle Event)
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.309 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.309 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.310 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.310 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.311 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.311 247708 DEBUG nova.virt.libvirt.driver [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.355 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.360 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.393 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:17:23 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[337864]: [NOTICE]   (337880) : haproxy version is 2.8.14-c23fe91
Jan 31 08:17:23 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[337864]: [NOTICE]   (337880) : path to executable is /usr/sbin/haproxy
Jan 31 08:17:23 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[337864]: [ALERT]    (337880) : Current worker (337882) exited with code 143 (Terminated)
Jan 31 08:17:23 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[337864]: [WARNING]  (337880) : All workers exited. Exiting... (0)
Jan 31 08:17:23 compute-0 systemd[1]: libpod-0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336.scope: Deactivated successfully.
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.432 247708 INFO nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Took 10.33 seconds to spawn the instance on the hypervisor.
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.433 247708 DEBUG nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:23 compute-0 podman[338573]: 2026-01-31 08:17:23.440827907 +0000 UTC m=+0.063478237 container died 0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336-userdata-shm.mount: Deactivated successfully.
Jan 31 08:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0f9c06873bad39e50c073c4c0ccdad2852bb6ef8a541673b01ccb0ca1860ef8-merged.mount: Deactivated successfully.
Jan 31 08:17:23 compute-0 podman[338573]: 2026-01-31 08:17:23.537763219 +0000 UTC m=+0.160413529 container cleanup 0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:17:23 compute-0 systemd[1]: libpod-conmon-0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336.scope: Deactivated successfully.
Jan 31 08:17:23 compute-0 podman[338604]: 2026-01-31 08:17:23.626243986 +0000 UTC m=+0.065200090 container remove 0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.634 247708 INFO nova.compute.manager [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Took 17.07 seconds to build instance.
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.638 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6979ae13-4560-4b51-adfb-1bf57eba34da]: (4, ('Sat Jan 31 08:17:23 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336)\n0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336\nSat Jan 31 08:17:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336)\n0a46ff834a32f24ed3e6259a86a6415615620782b8516ea8d03b3c51501ed336\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.642 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2aafb968-4e2f-4734-9a21-afe7f72e385a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.644 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.647 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:23 compute-0 kernel: tap088d6992-60: left promiscuous mode
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.662 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.665 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3058aa4b-0040-4fac-89d2-aaeb97e05d74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.682 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a89bd3-ff0f-4306-b18e-94704ed8cdf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.684 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bb069a34-a7ff-49f3-8c42-8c30d83efcd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.698 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3da82c-c8c2-4b21-a438-987fe73c4b83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 767976, 'reachable_time': 18381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338619, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 nova_compute[247704]: 2026-01-31 08:17:23.700 247708 DEBUG oslo_concurrency.lockutils [None req-c975199b-b265-4c98-b1eb-6561837e8bec 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d088d6992\x2d6ba6\x2d4719\x2da977\x2db3d306740157.mount: Deactivated successfully.
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.704 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.705 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[8de2d3fe-5765-4e5d-8845-8d7fe4aba42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.707 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e33f46e9-c188-4c53-b863-93620a4c3452 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.708 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.718 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fe505dae-52a1-48f1-a3a7-74dba3c37b0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.719 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap088d6992-61 in ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.723 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap088d6992-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.723 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2d945014-8cbc-4ecf-84c3-ae42d8b62af9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.724 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[206be339-8f8d-498a-aeae-78061aa7c0c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.734 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[fe68fbf5-97b7-4be9-86cf-d140088b4123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.749 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[28b9d8b2-a64b-4818-8376-c5967189e1dd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.784 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6fa80b-7838-4af1-86d8-0b5737cb4ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 NetworkManager[49108]: <info>  [1769847443.7911] manager: (tap088d6992-60): new Veth device (/org/freedesktop/NetworkManager/Devices/259)
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.792 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7f58fdc1-dbe8-434a-998c-6db7f59684bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.822 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[81d634f4-5e58-4ef8-9622-86d8cbcc58c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.826 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1e79e6-1e72-4751-bb30-12883d026f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 770 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.3 MiB/s wr, 251 op/s
Jan 31 08:17:23 compute-0 NetworkManager[49108]: <info>  [1769847443.8515] device (tap088d6992-60): carrier: link connected
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.856 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1f6e74-59ab-4756-a577-b4846bca7f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.876 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c29bd06a-9e30-48de-aeab-13c422da76f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768646, 'reachable_time': 32667, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338633, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.891 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[12b8bc75-e007-499f-80f4-5c18e4dafd91]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:87bc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 768646, 'tstamp': 768646}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338635, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.907 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[17c39d49-df02-4276-9c91-764cc1e67143]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088d6992-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:87:bc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768646, 'reachable_time': 32667, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338636, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 wizardly_curie[338522]: {
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:     "0": [
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:         {
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "devices": [
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "/dev/loop3"
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             ],
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "lv_name": "ceph_lv0",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "lv_size": "7511998464",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "name": "ceph_lv0",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "tags": {
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.cluster_name": "ceph",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.crush_device_class": "",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.encrypted": "0",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.osd_id": "0",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.type": "block",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:                 "ceph.vdo": "0"
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             },
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "type": "block",
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:             "vg_name": "ceph_vg0"
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:         }
Jan 31 08:17:23 compute-0 wizardly_curie[338522]:     ]
Jan 31 08:17:23 compute-0 wizardly_curie[338522]: }
Jan 31 08:17:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 31 08:17:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:23.938 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6de0e5-fa5f-490d-bb83-efbd6a806d27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:23 compute-0 systemd[1]: libpod-8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf.scope: Deactivated successfully.
Jan 31 08:17:23 compute-0 podman[338501]: 2026-01-31 08:17:23.953410117 +0000 UTC m=+1.087599322 container died 8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e5dca48d492e8eedd54346579a7ecdef3354c9643a7221f2e85825b193eaf81-merged.mount: Deactivated successfully.
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.044 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f78e3e1d-9fdf-4485-8150-f4019e8498e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.048 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.048 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.049 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088d6992-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:24 compute-0 podman[338501]: 2026-01-31 08:17:24.050090743 +0000 UTC m=+1.184279938 container remove 8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:17:24 compute-0 NetworkManager[49108]: <info>  [1769847444.0520] manager: (tap088d6992-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Jan 31 08:17:24 compute-0 kernel: tap088d6992-60: entered promiscuous mode
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.056 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088d6992-60, col_values=(('external_ids', {'iface-id': '7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:24 compute-0 ovn_controller[149457]: 2026-01-31T08:17:24Z|00576|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.058 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:24 compute-0 systemd[1]: libpod-conmon-8bc7f8ac0022ea77a8895fb214774ca97a04d4dcd51793b2c0328effd21801cf.scope: Deactivated successfully.
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.066 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.068 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.070 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[63188124-fd74-4834-afd1-55c137703fdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.071 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/088d6992-6ba6-4719-a977-b3d306740157.pid.haproxy
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 088d6992-6ba6-4719-a977-b3d306740157
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:17:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:24.072 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'env', 'PROCESS_TAG=haproxy-088d6992-6ba6-4719-a977-b3d306740157', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/088d6992-6ba6-4719-a977-b3d306740157.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:17:24 compute-0 sudo[338301]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:24 compute-0 sudo[338685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:24 compute-0 sudo[338685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:24 compute-0 sudo[338685]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:24 compute-0 sudo[338736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:24 compute-0 sudo[338736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:24 compute-0 sudo[338736]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:24 compute-0 sudo[338765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:24 compute-0 sudo[338765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:24 compute-0 sudo[338765]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:24 compute-0 sudo[338794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.360 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 743cf933-3139-4c25-9c75-b45150274ae3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.363 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847444.3604808, 743cf933-3139-4c25-9c75-b45150274ae3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.363 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] VM Resumed (Lifecycle Event)
Jan 31 08:17:24 compute-0 sudo[338794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.404 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.409 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.438 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.439 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847444.3630197, 743cf933-3139-4c25-9c75-b45150274ae3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.439 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] VM Started (Lifecycle Event)
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.476 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:24 compute-0 podman[338856]: 2026-01-31 08:17:24.477875237 +0000 UTC m=+0.052076390 container create c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.487 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:17:24 compute-0 systemd[1]: Started libpod-conmon-c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d.scope.
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.523 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:17:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbf5d2d8bb00b93c65060bb77a2b78cb6f73f86398d751a12563e975ca098d56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:24 compute-0 podman[338856]: 2026-01-31 08:17:24.450978572 +0000 UTC m=+0.025179745 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:17:24 compute-0 podman[338856]: 2026-01-31 08:17:24.588581664 +0000 UTC m=+0.162782837 container init c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.594 247708 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-changed-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.596 247708 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Refreshing instance network info cache due to event network-changed-e33f46e9-c188-4c53-b863-93620a4c3452. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.596 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.597 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.597 247708 DEBUG nova.network.neutron [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Refreshing network info cache for port e33f46e9-c188-4c53-b863-93620a4c3452 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:17:24 compute-0 podman[338856]: 2026-01-31 08:17:24.599148961 +0000 UTC m=+0.173350114 container start c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:17:24 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[338885]: [NOTICE]   (338900) : New worker (338905) forked
Jan 31 08:17:24 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[338885]: [NOTICE]   (338900) : Loading success.
Jan 31 08:17:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:24 compute-0 nova_compute[247704]: 2026-01-31 08:17:24.674 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:24 compute-0 podman[338927]: 2026-01-31 08:17:24.784780375 +0000 UTC m=+0.067311881 container create 8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:17:24 compute-0 systemd[1]: Started libpod-conmon-8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60.scope.
Jan 31 08:17:24 compute-0 podman[338927]: 2026-01-31 08:17:24.740915536 +0000 UTC m=+0.023447072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:17:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:24 compute-0 podman[338927]: 2026-01-31 08:17:24.891916396 +0000 UTC m=+0.174447912 container init 8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:17:24 compute-0 podman[338927]: 2026-01-31 08:17:24.899304806 +0000 UTC m=+0.181836302 container start 8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:17:24 compute-0 serene_beaver[338944]: 167 167
Jan 31 08:17:24 compute-0 systemd[1]: libpod-8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60.scope: Deactivated successfully.
Jan 31 08:17:24 compute-0 conmon[338944]: conmon 8cd5703945a8a8120e5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60.scope/container/memory.events
Jan 31 08:17:24 compute-0 podman[338927]: 2026-01-31 08:17:24.909255318 +0000 UTC m=+0.191786814 container attach 8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_beaver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:17:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:24.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:24 compute-0 podman[338927]: 2026-01-31 08:17:24.909879183 +0000 UTC m=+0.192410679 container died 8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_beaver, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:17:24 compute-0 ceph-mon[74496]: pgmap v2468: 305 pgs: 305 active+clean; 770 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.3 MiB/s wr, 251 op/s
Jan 31 08:17:24 compute-0 ceph-mon[74496]: osdmap e315: 3 total, 3 up, 3 in
Jan 31 08:17:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Jan 31 08:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b83883a928b515923389bef3cb04fdaba347c9dc74b7fd722b945b6e6504d346-merged.mount: Deactivated successfully.
Jan 31 08:17:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Jan 31 08:17:25 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Jan 31 08:17:25 compute-0 podman[338927]: 2026-01-31 08:17:25.210618681 +0000 UTC m=+0.493150217 container remove 8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:17:25 compute-0 systemd[1]: libpod-conmon-8cd5703945a8a8120e5efe22c69abf15dcc6147b76c8194944155e12f5db6e60.scope: Deactivated successfully.
Jan 31 08:17:25 compute-0 podman[338969]: 2026-01-31 08:17:25.39688993 +0000 UTC m=+0.059666624 container create f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:17:25 compute-0 systemd[1]: Started libpod-conmon-f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597.scope.
Jan 31 08:17:25 compute-0 podman[338969]: 2026-01-31 08:17:25.366412767 +0000 UTC m=+0.029189471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:17:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b755d7cc13b8e511fce801dc6dd040699a055d3963b5dff75a79bdaff96f90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b755d7cc13b8e511fce801dc6dd040699a055d3963b5dff75a79bdaff96f90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b755d7cc13b8e511fce801dc6dd040699a055d3963b5dff75a79bdaff96f90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b755d7cc13b8e511fce801dc6dd040699a055d3963b5dff75a79bdaff96f90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:25 compute-0 podman[338969]: 2026-01-31 08:17:25.501603332 +0000 UTC m=+0.164380046 container init f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_turing, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:17:25 compute-0 podman[338969]: 2026-01-31 08:17:25.507919305 +0000 UTC m=+0.170695989 container start f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_turing, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:17:25 compute-0 podman[338969]: 2026-01-31 08:17:25.515785347 +0000 UTC m=+0.178562061 container attach f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_turing, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:17:25 compute-0 nova_compute[247704]: 2026-01-31 08:17:25.612 247708 DEBUG nova.compute.manager [None req-df42770c-c8f8-4d3f-9277-8984856ae89b d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 794 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 18 MiB/s rd, 6.8 MiB/s wr, 596 op/s
Jan 31 08:17:26 compute-0 ceph-mon[74496]: osdmap e316: 3 total, 3 up, 3 in
Jan 31 08:17:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1733097171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:26 compute-0 ceph-mon[74496]: pgmap v2471: 305 pgs: 305 active+clean; 794 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 18 MiB/s rd, 6.8 MiB/s wr, 596 op/s
Jan 31 08:17:26 compute-0 frosty_turing[338986]: {
Jan 31 08:17:26 compute-0 frosty_turing[338986]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:17:26 compute-0 frosty_turing[338986]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:17:26 compute-0 frosty_turing[338986]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:17:26 compute-0 frosty_turing[338986]:         "osd_id": 0,
Jan 31 08:17:26 compute-0 frosty_turing[338986]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:17:26 compute-0 frosty_turing[338986]:         "type": "bluestore"
Jan 31 08:17:26 compute-0 frosty_turing[338986]:     }
Jan 31 08:17:26 compute-0 frosty_turing[338986]: }
Jan 31 08:17:26 compute-0 systemd[1]: libpod-f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597.scope: Deactivated successfully.
Jan 31 08:17:26 compute-0 podman[338969]: 2026-01-31 08:17:26.425956774 +0000 UTC m=+1.088733468 container died f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_turing, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 08:17:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-53b755d7cc13b8e511fce801dc6dd040699a055d3963b5dff75a79bdaff96f90-merged.mount: Deactivated successfully.
Jan 31 08:17:26 compute-0 podman[338969]: 2026-01-31 08:17:26.484776387 +0000 UTC m=+1.147553071 container remove f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:17:26 compute-0 systemd[1]: libpod-conmon-f51fdc2ff30e0be136a9a4f0ad1969a74e4b6750ac9992fb5d5ae18857369597.scope: Deactivated successfully.
Jan 31 08:17:26 compute-0 sudo[338794]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:17:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:17:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 770c9a88-4264-4fb5-a5a3-4e014186781a does not exist
Jan 31 08:17:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 33965371-be32-47d7-a6f5-e34c84b02622 does not exist
Jan 31 08:17:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5e306ca1-adc0-4760-b77e-4108cce6ffb1 does not exist
Jan 31 08:17:26 compute-0 sudo[339024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:26 compute-0 sudo[339023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:26 compute-0 sudo[339023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:26 compute-0 sudo[339024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:26 compute-0 sudo[339023]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-0 sudo[339024]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:26 compute-0 sudo[339074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:17:26 compute-0 sudo[339074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:26 compute-0 sudo[339073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:26 compute-0 sudo[339074]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-0 sudo[339073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:26 compute-0 sudo[339073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.783 247708 DEBUG nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.787 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.788 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.788 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.789 247708 DEBUG nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.790 247708 WARNING nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state active and task_state None.
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.791 247708 DEBUG nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.791 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.792 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.792 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.793 247708 DEBUG nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.794 247708 WARNING nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state active and task_state None.
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.794 247708 DEBUG nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.795 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.796 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.798 247708 DEBUG oslo_concurrency.lockutils [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.798 247708 DEBUG nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:26 compute-0 nova_compute[247704]: 2026-01-31 08:17:26.799 247708 WARNING nova.compute.manager [req-0e58b414-d747-438d-8fec-6ac46d6952c7 req-202ffa4e-37c3-4600-a3df-f1b0c406eeee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state active and task_state None.
Jan 31 08:17:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:26.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:27 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:27 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.793 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.836 247708 DEBUG nova.network.neutron [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updated VIF entry in instance network info cache for port e33f46e9-c188-4c53-b863-93620a4c3452. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.837 247708 DEBUG nova.network.neutron [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 MiB/s rd, 7.8 MiB/s wr, 565 op/s
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.877 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.878 247708 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-changed-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.879 247708 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Refreshing instance network info cache due to event network-changed-e33f46e9-c188-4c53-b863-93620a4c3452. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.880 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.880 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:27 compute-0 nova_compute[247704]: 2026-01-31 08:17:27.881 247708 DEBUG nova.network.neutron [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Refreshing network info cache for port e33f46e9-c188-4c53-b863-93620a4c3452 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:17:28 compute-0 ceph-mon[74496]: pgmap v2472: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 MiB/s rd, 7.8 MiB/s wr, 565 op/s
Jan 31 08:17:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:28.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:29 compute-0 nova_compute[247704]: 2026-01-31 08:17:29.595 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:29 compute-0 nova_compute[247704]: 2026-01-31 08:17:29.690 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 425 op/s
Jan 31 08:17:29 compute-0 podman[339125]: 2026-01-31 08:17:29.986292117 +0000 UTC m=+0.128053651 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:17:30 compute-0 nova_compute[247704]: 2026-01-31 08:17:30.348 247708 DEBUG nova.compute.manager [req-4ef47c3e-9039-4308-9f06-12d65e540a54 req-ae30e418-b41f-4098-aae6-d1ceec23e363 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-changed-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:30 compute-0 nova_compute[247704]: 2026-01-31 08:17:30.349 247708 DEBUG nova.compute.manager [req-4ef47c3e-9039-4308-9f06-12d65e540a54 req-ae30e418-b41f-4098-aae6-d1ceec23e363 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Refreshing instance network info cache due to event network-changed-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:17:30 compute-0 nova_compute[247704]: 2026-01-31 08:17:30.349 247708 DEBUG oslo_concurrency.lockutils [req-4ef47c3e-9039-4308-9f06-12d65e540a54 req-ae30e418-b41f-4098-aae6-d1ceec23e363 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:30 compute-0 nova_compute[247704]: 2026-01-31 08:17:30.350 247708 DEBUG oslo_concurrency.lockutils [req-4ef47c3e-9039-4308-9f06-12d65e540a54 req-ae30e418-b41f-4098-aae6-d1ceec23e363 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:30 compute-0 nova_compute[247704]: 2026-01-31 08:17:30.350 247708 DEBUG nova.network.neutron [req-4ef47c3e-9039-4308-9f06-12d65e540a54 req-ae30e418-b41f-4098-aae6-d1ceec23e363 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Refreshing network info cache for port 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:17:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:30.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:30.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:30 compute-0 ceph-mon[74496]: pgmap v2473: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 5.9 MiB/s wr, 425 op/s
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Jan 31 08:17:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 4.3 MiB/s wr, 373 op/s
Jan 31 08:17:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.860 247708 DEBUG nova.network.neutron [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updated VIF entry in instance network info cache for port e33f46e9-c188-4c53-b863-93620a4c3452. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.861 247708 DEBUG nova.network.neutron [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.916 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.917 247708 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.917 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.918 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.918 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.918 247708 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] No waiting events found dispatching network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.919 247708 WARNING nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received unexpected event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 for instance with vm_state active and task_state None.
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.919 247708 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.920 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.920 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.920 247708 DEBUG oslo_concurrency.lockutils [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.921 247708 DEBUG nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:31 compute-0 nova_compute[247704]: 2026-01-31 08:17:31.921 247708 WARNING nova.compute.manager [req-0b47ffd4-30a6-4778-96d7-a35dae235273 req-6fe24ef2-6b76-4a1f-a892-3868ded8e2a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:17:32 compute-0 nova_compute[247704]: 2026-01-31 08:17:32.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:32 compute-0 nova_compute[247704]: 2026-01-31 08:17:32.798 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:32 compute-0 ceph-mon[74496]: pgmap v2474: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 4.3 MiB/s wr, 373 op/s
Jan 31 08:17:32 compute-0 ceph-mon[74496]: osdmap e317: 3 total, 3 up, 3 in
Jan 31 08:17:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:32.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:33 compute-0 nova_compute[247704]: 2026-01-31 08:17:33.385 247708 DEBUG nova.network.neutron [req-4ef47c3e-9039-4308-9f06-12d65e540a54 req-ae30e418-b41f-4098-aae6-d1ceec23e363 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updated VIF entry in instance network info cache for port 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:17:33 compute-0 nova_compute[247704]: 2026-01-31 08:17:33.386 247708 DEBUG nova.network.neutron [req-4ef47c3e-9039-4308-9f06-12d65e540a54 req-ae30e418-b41f-4098-aae6-d1ceec23e363 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updating instance_info_cache with network_info: [{"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:33 compute-0 nova_compute[247704]: 2026-01-31 08:17:33.447 247708 DEBUG oslo_concurrency.lockutils [req-4ef47c3e-9039-4308-9f06-12d65e540a54 req-ae30e418-b41f-4098-aae6-d1ceec23e363 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:33 compute-0 nova_compute[247704]: 2026-01-31 08:17:33.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:33 compute-0 nova_compute[247704]: 2026-01-31 08:17:33.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:17:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 1.6 MiB/s wr, 210 op/s
Jan 31 08:17:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:34.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:34 compute-0 nova_compute[247704]: 2026-01-31 08:17:34.694 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:34.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:35 compute-0 ceph-mon[74496]: pgmap v2476: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 1.6 MiB/s wr, 210 op/s
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010877775742136608 of space, bias 1.0, pg target 3.2633327226409827 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.29392568310301503 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.009201825097710776 of space, bias 1.0, pg target 2.7329420540201004 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 31 08:17:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 790 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 125 op/s
Jan 31 08:17:36 compute-0 ceph-mon[74496]: pgmap v2477: 305 pgs: 305 active+clean; 790 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 125 op/s
Jan 31 08:17:36 compute-0 ovn_controller[149457]: 2026-01-31T08:17:36Z|00577|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 08:17:36 compute-0 ovn_controller[149457]: 2026-01-31T08:17:36Z|00578|binding|INFO|Releasing lport 9c7eeaf4-153c-4957-b0ec-6ca490c32a88 from this chassis (sb_readonly=0)
Jan 31 08:17:36 compute-0 nova_compute[247704]: 2026-01-31 08:17:36.170 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:36.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:36.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:37 compute-0 ovn_controller[149457]: 2026-01-31T08:17:37Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:b5:29 10.100.0.3
Jan 31 08:17:37 compute-0 ovn_controller[149457]: 2026-01-31T08:17:37Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:b5:29 10.100.0.3
Jan 31 08:17:37 compute-0 nova_compute[247704]: 2026-01-31 08:17:37.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:37 compute-0 nova_compute[247704]: 2026-01-31 08:17:37.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:37 compute-0 nova_compute[247704]: 2026-01-31 08:17:37.643 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:37 compute-0 nova_compute[247704]: 2026-01-31 08:17:37.644 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:37 compute-0 nova_compute[247704]: 2026-01-31 08:17:37.645 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:37 compute-0 nova_compute[247704]: 2026-01-31 08:17:37.645 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:17:37 compute-0 nova_compute[247704]: 2026-01-31 08:17:37.646 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:37 compute-0 nova_compute[247704]: 2026-01-31 08:17:37.800 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 771 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 1000 KiB/s wr, 75 op/s
Jan 31 08:17:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/311006205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:17:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896216350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.086 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.219 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.220 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.220 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.223 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.224 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.376 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.377 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3949MB free_disk=20.774208068847656GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.377 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.378 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:38 compute-0 ovn_controller[149457]: 2026-01-31T08:17:38Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:79:71:3b 10.100.0.4
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.571 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 743cf933-3139-4c25-9c75-b45150274ae3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.571 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 83252cb5-25d7-40e3-823d-02d1d0eb73f1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.571 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.572 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:17:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:38 compute-0 nova_compute[247704]: 2026-01-31 08:17:38.642 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:17:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:38.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:38 compute-0 ceph-mon[74496]: pgmap v2478: 305 pgs: 305 active+clean; 771 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 1000 KiB/s wr, 75 op/s
Jan 31 08:17:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2896216350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2024757108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:17:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571439263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:39 compute-0 nova_compute[247704]: 2026-01-31 08:17:39.087 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:17:39 compute-0 nova_compute[247704]: 2026-01-31 08:17:39.093 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:17:39 compute-0 nova_compute[247704]: 2026-01-31 08:17:39.158 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:17:39 compute-0 nova_compute[247704]: 2026-01-31 08:17:39.194 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:17:39 compute-0 nova_compute[247704]: 2026-01-31 08:17:39.194 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:39 compute-0 nova_compute[247704]: 2026-01-31 08:17:39.697 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 135 op/s
Jan 31 08:17:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/571439263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:40 compute-0 nova_compute[247704]: 2026-01-31 08:17:40.195 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:40 compute-0 nova_compute[247704]: 2026-01-31 08:17:40.196 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:17:40 compute-0 nova_compute[247704]: 2026-01-31 08:17:40.196 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:17:40 compute-0 nova_compute[247704]: 2026-01-31 08:17:40.467 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:40 compute-0 nova_compute[247704]: 2026-01-31 08:17:40.467 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:40 compute-0 nova_compute[247704]: 2026-01-31 08:17:40.468 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:17:40 compute-0 nova_compute[247704]: 2026-01-31 08:17:40.468 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:40.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:40.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:40 compute-0 ceph-mon[74496]: pgmap v2479: 305 pgs: 305 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 135 op/s
Jan 31 08:17:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3919650562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/889878504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:41 compute-0 ovn_controller[149457]: 2026-01-31T08:17:41Z|00579|binding|INFO|Releasing lport 7bb97e8d-2b9f-4994-acb6-0aa8c7b822d8 from this chassis (sb_readonly=0)
Jan 31 08:17:41 compute-0 ovn_controller[149457]: 2026-01-31T08:17:41Z|00580|binding|INFO|Releasing lport 9c7eeaf4-153c-4957-b0ec-6ca490c32a88 from this chassis (sb_readonly=0)
Jan 31 08:17:41 compute-0 nova_compute[247704]: 2026-01-31 08:17:41.591 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 740 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 945 KiB/s rd, 2.5 MiB/s wr, 158 op/s
Jan 31 08:17:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2629141597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:42.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:42 compute-0 nova_compute[247704]: 2026-01-31 08:17:42.803 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:42 compute-0 podman[339202]: 2026-01-31 08:17:42.89268895 +0000 UTC m=+0.054531760 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:42.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:43 compute-0 ceph-mon[74496]: pgmap v2480: 305 pgs: 305 active+clean; 740 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 945 KiB/s rd, 2.5 MiB/s wr, 158 op/s
Jan 31 08:17:43 compute-0 nova_compute[247704]: 2026-01-31 08:17:43.351 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [{"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:17:43 compute-0 nova_compute[247704]: 2026-01-31 08:17:43.404 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-743cf933-3139-4c25-9c75-b45150274ae3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:43 compute-0 nova_compute[247704]: 2026-01-31 08:17:43.404 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:17:43 compute-0 nova_compute[247704]: 2026-01-31 08:17:43.405 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:17:44 compute-0 ceph-mon[74496]: pgmap v2481: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:17:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:44.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:44 compute-0 nova_compute[247704]: 2026-01-31 08:17:44.700 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:44 compute-0 nova_compute[247704]: 2026-01-31 08:17:44.765 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:17:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:44.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 835 KiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 08:17:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:46 compute-0 sudo[339223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:46 compute-0 sudo[339223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:46 compute-0 sudo[339223]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:46 compute-0 sudo[339248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:17:46 compute-0 sudo[339248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:46 compute-0 sudo[339248]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:46.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:46 compute-0 ceph-mon[74496]: pgmap v2482: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 835 KiB/s rd, 2.2 MiB/s wr, 133 op/s
Jan 31 08:17:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:17:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134165204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:47 compute-0 nova_compute[247704]: 2026-01-31 08:17:47.808 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 1.7 MiB/s wr, 125 op/s
Jan 31 08:17:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3134165204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:48.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:48 compute-0 ceph-mon[74496]: pgmap v2483: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 1.7 MiB/s wr, 125 op/s
Jan 31 08:17:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1584127127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:17:49 compute-0 nova_compute[247704]: 2026-01-31 08:17:49.703 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 1.4 MiB/s wr, 102 op/s
Jan 31 08:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:50 compute-0 ceph-mon[74496]: pgmap v2484: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 1.4 MiB/s wr, 102 op/s
Jan 31 08:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:50.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:50.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:51.069 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:51.070 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:17:51 compute-0 nova_compute[247704]: 2026-01-31 08:17:51.070 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:51 compute-0 nova_compute[247704]: 2026-01-31 08:17:51.630 247708 INFO nova.compute.manager [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Rebuilding instance
Jan 31 08:17:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 567 KiB/s wr, 52 op/s
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.366 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.389 247708 DEBUG nova.compute.manager [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.462 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'pci_requests' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.484 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'pci_devices' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.508 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'resources' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.529 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'migration_context' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.543 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.547 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:17:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:52.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:52 compute-0 nova_compute[247704]: 2026-01-31 08:17:52.812 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:52 compute-0 ceph-mon[74496]: pgmap v2485: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 567 KiB/s wr, 52 op/s
Jan 31 08:17:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:52.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:17:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/242637935' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:17:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:17:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/242637935' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:17:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 62 KiB/s wr, 7 op/s
Jan 31 08:17:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/242637935' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:17:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/242637935' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:17:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:54.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:54 compute-0 nova_compute[247704]: 2026-01-31 08:17:54.705 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:54 compute-0 kernel: tap1cf8d8b8-8d (unregistering): left promiscuous mode
Jan 31 08:17:54 compute-0 NetworkManager[49108]: <info>  [1769847474.8886] device (tap1cf8d8b8-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:17:54 compute-0 ovn_controller[149457]: 2026-01-31T08:17:54Z|00581|binding|INFO|Releasing lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 from this chassis (sb_readonly=0)
Jan 31 08:17:54 compute-0 ovn_controller[149457]: 2026-01-31T08:17:54Z|00582|binding|INFO|Setting lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 down in Southbound
Jan 31 08:17:54 compute-0 nova_compute[247704]: 2026-01-31 08:17:54.902 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:54 compute-0 ovn_controller[149457]: 2026-01-31T08:17:54Z|00583|binding|INFO|Removing iface tap1cf8d8b8-8d ovn-installed in OVS
Jan 31 08:17:54 compute-0 nova_compute[247704]: 2026-01-31 08:17:54.905 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:54 compute-0 nova_compute[247704]: 2026-01-31 08:17:54.909 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:54.917 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:b5:29 10.100.0.3'], port_security=['fa:16:3e:85:b5:29 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '83252cb5-25d7-40e3-823d-02d1d0eb73f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26cb596-7b93-4afd-9b50-133e9afd768d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bda3ebf6541d46309fc9b2ce089dd857', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc88ce0b-3f35-4088-82b2-3c421cb8a010', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06815ca2-b7af-4c41-816c-402a584fa466, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:54.919 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 in datapath c26cb596-7b93-4afd-9b50-133e9afd768d unbound from our chassis
Jan 31 08:17:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:54.922 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c26cb596-7b93-4afd-9b50-133e9afd768d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:17:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:54.924 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f61046e7-974e-41f8-809f-12649a174254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:54.925 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d namespace which is not needed anymore
Jan 31 08:17:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:54.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:54 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Jan 31 08:17:54 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000008b.scope: Consumed 15.935s CPU time.
Jan 31 08:17:54 compute-0 systemd-machined[214448]: Machine qemu-59-instance-0000008b terminated.
Jan 31 08:17:54 compute-0 ceph-mon[74496]: pgmap v2486: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 62 KiB/s wr, 7 op/s
Jan 31 08:17:55 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[338440]: [NOTICE]   (338458) : haproxy version is 2.8.14-c23fe91
Jan 31 08:17:55 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[338440]: [NOTICE]   (338458) : path to executable is /usr/sbin/haproxy
Jan 31 08:17:55 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[338440]: [WARNING]  (338458) : Exiting Master process...
Jan 31 08:17:55 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[338440]: [ALERT]    (338458) : Current worker (338476) exited with code 143 (Terminated)
Jan 31 08:17:55 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[338440]: [WARNING]  (338458) : All workers exited. Exiting... (0)
Jan 31 08:17:55 compute-0 systemd[1]: libpod-e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9.scope: Deactivated successfully.
Jan 31 08:17:55 compute-0 podman[339300]: 2026-01-31 08:17:55.097326345 +0000 UTC m=+0.085994787 container died e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.128 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.135 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9-userdata-shm.mount: Deactivated successfully.
Jan 31 08:17:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b000d246d0c958a4feb3846bc286fc1cef7df2b870694a9543b22abf005106d-merged.mount: Deactivated successfully.
Jan 31 08:17:55 compute-0 podman[339300]: 2026-01-31 08:17:55.204509716 +0000 UTC m=+0.193178158 container cleanup e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:17:55 compute-0 systemd[1]: libpod-conmon-e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9.scope: Deactivated successfully.
Jan 31 08:17:55 compute-0 podman[339340]: 2026-01-31 08:17:55.327692797 +0000 UTC m=+0.100393597 container remove e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.334 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[46551c09-c6d4-4c18-a1ab-f81e23b66195]: (4, ('Sat Jan 31 08:17:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d (e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9)\ne4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9\nSat Jan 31 08:17:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d (e4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9)\ne4bf347dee733ec67ed53ace38f4a4afd6a38d9b285fac0148aaa46733e158a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.336 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[be68a426-5c3d-493f-840e-676df431ae76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.338 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26cb596-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.341 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:55 compute-0 kernel: tapc26cb596-70: left promiscuous mode
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.357 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.362 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[450050c7-c1ee-4037-94b6-5b051b95d3e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.378 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7ccf6b07-4f5a-4b09-8618-d45eb1aaa1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.379 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[366ba4ec-e595-4da6-b47d-cfe84c9859ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.394 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[65bbaef1-6b14-45f0-93df-a3ce0b487ebb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768450, 'reachable_time': 41245, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339359, 'error': None, 'target': 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:55 compute-0 systemd[1]: run-netns-ovnmeta\x2dc26cb596\x2d7b93\x2d4afd\x2d9b50\x2d133e9afd768d.mount: Deactivated successfully.
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.399 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:17:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:55.399 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3272b7-847e-4b9b-95d3-50e2800e177d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.465 247708 DEBUG nova.compute.manager [req-4b68774f-98a6-4e49-8598-94d289e201ae req-8f078c81-e174-4b59-868e-3cc33ebb61c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-vif-unplugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.465 247708 DEBUG oslo_concurrency.lockutils [req-4b68774f-98a6-4e49-8598-94d289e201ae req-8f078c81-e174-4b59-868e-3cc33ebb61c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.466 247708 DEBUG oslo_concurrency.lockutils [req-4b68774f-98a6-4e49-8598-94d289e201ae req-8f078c81-e174-4b59-868e-3cc33ebb61c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.466 247708 DEBUG oslo_concurrency.lockutils [req-4b68774f-98a6-4e49-8598-94d289e201ae req-8f078c81-e174-4b59-868e-3cc33ebb61c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.466 247708 DEBUG nova.compute.manager [req-4b68774f-98a6-4e49-8598-94d289e201ae req-8f078c81-e174-4b59-868e-3cc33ebb61c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] No waiting events found dispatching network-vif-unplugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.466 247708 WARNING nova.compute.manager [req-4b68774f-98a6-4e49-8598-94d289e201ae req-8f078c81-e174-4b59-868e-3cc33ebb61c9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received unexpected event network-vif-unplugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 for instance with vm_state active and task_state rebuilding.
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.565 247708 INFO nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance shutdown successfully after 3 seconds.
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.572 247708 INFO nova.virt.libvirt.driver [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance destroyed successfully.
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.578 247708 INFO nova.virt.libvirt.driver [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance destroyed successfully.
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.579 247708 DEBUG nova.virt.libvirt.vif [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:17:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-934525208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-20534748',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTBDp6a0SyMFs7OU2ywUkjN63r2N0PxPaRE6bSDGEX+Eq/ObkyuaXGME0SMKDkMYoPYqAUjzxLW21QhzrJW+bRoGYJmTS4PrdUXkRuBCox1bDVqwWhiTbfXfC+MijeFQ==',key_name='tempest-keypair-1991810396',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:17:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bda3ebf6541d46309fc9b2ce089dd857',ramdisk_id='',reservation_id='r-canzs8i7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-284184156',owner_user_name='tempest-ServerActionsV293TestJSON-284184156-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90313c82677c4144953c58efc0e13c3e',uuid=83252cb5-25d7-40e3-823d-02d1d0eb73f1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.580 247708 DEBUG nova.network.os_vif_util [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converting VIF {"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.580 247708 DEBUG nova.network.os_vif_util [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.581 247708 DEBUG os_vif [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.584 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.584 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cf8d8b8-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.586 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.588 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:55 compute-0 nova_compute[247704]: 2026-01-31 08:17:55.592 247708 INFO os_vif [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d')
Jan 31 08:17:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 51 KiB/s wr, 7 op/s
Jan 31 08:17:56 compute-0 nova_compute[247704]: 2026-01-31 08:17:56.056 247708 INFO nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Deleting instance files /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1_del
Jan 31 08:17:56 compute-0 nova_compute[247704]: 2026-01-31 08:17:56.056 247708 INFO nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Deletion of /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1_del complete
Jan 31 08:17:56 compute-0 ceph-mon[74496]: pgmap v2487: 305 pgs: 305 active+clean; 743 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 51 KiB/s wr, 7 op/s
Jan 31 08:17:56 compute-0 nova_compute[247704]: 2026-01-31 08:17:56.625 247708 WARNING nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] During detach_volume, instance disappeared.: nova.exception.InstanceNotFound: Instance 83252cb5-25d7-40e3-823d-02d1d0eb73f1 could not be found.
Jan 31 08:17:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:17:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:56.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:17:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:17:56 compute-0 nova_compute[247704]: 2026-01-31 08:17:56.904 247708 DEBUG oslo_concurrency.lockutils [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:56 compute-0 nova_compute[247704]: 2026-01-31 08:17:56.905 247708 DEBUG oslo_concurrency.lockutils [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:56 compute-0 nova_compute[247704]: 2026-01-31 08:17:56.923 247708 INFO nova.compute.manager [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Detaching volume ddc44c03-3580-457e-b1d4-9a35d3c393e8
Jan 31 08:17:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:17:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:56.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:17:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:17:57.072 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.320 247708 INFO nova.virt.block_device [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Attempting to driver detach volume ddc44c03-3580-457e-b1d4-9a35d3c393e8 from mountpoint /dev/vdb
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.339 247708 DEBUG nova.virt.libvirt.driver [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Attempting to detach device vdb from instance 743cf933-3139-4c25-9c75-b45150274ae3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.340 247708 DEBUG nova.virt.libvirt.guest [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-ddc44c03-3580-457e-b1d4-9a35d3c393e8">
Jan 31 08:17:57 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   </source>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <serial>ddc44c03-3580-457e-b1d4-9a35d3c393e8</serial>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]: </disk>
Jan 31 08:17:57 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.351 247708 INFO nova.virt.libvirt.driver [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully detached device vdb from instance 743cf933-3139-4c25-9c75-b45150274ae3 from the persistent domain config.
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.351 247708 DEBUG nova.virt.libvirt.driver [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 743cf933-3139-4c25-9c75-b45150274ae3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.352 247708 DEBUG nova.virt.libvirt.guest [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-ddc44c03-3580-457e-b1d4-9a35d3c393e8">
Jan 31 08:17:57 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   </source>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <serial>ddc44c03-3580-457e-b1d4-9a35d3c393e8</serial>
Jan 31 08:17:57 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:17:57 compute-0 nova_compute[247704]: </disk>
Jan 31 08:17:57 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.468 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769847477.4677472, 743cf933-3139-4c25-9c75-b45150274ae3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.470 247708 DEBUG nova.virt.libvirt.driver [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 743cf933-3139-4c25-9c75-b45150274ae3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.473 247708 INFO nova.virt.libvirt.driver [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully detached device vdb from instance 743cf933-3139-4c25-9c75-b45150274ae3 from the live domain config.
Jan 31 08:17:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 760 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 788 KiB/s wr, 32 op/s
Jan 31 08:17:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2776682966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:57 compute-0 nova_compute[247704]: 2026-01-31 08:17:57.929 247708 DEBUG nova.objects.instance [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'flavor' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.175 247708 DEBUG nova.compute.manager [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Preparing to wait for external event volume-reimaged-28bdb463-7d0a-42f5-8392-3acd847cfe3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.175 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.176 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.176 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.199 247708 DEBUG nova.compute.manager [req-85b47f07-eb2a-401a-b9db-affd41a49bb3 req-a7ec6750-f3e2-4116-8f88-e82f3b2e1c0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.200 247708 DEBUG oslo_concurrency.lockutils [req-85b47f07-eb2a-401a-b9db-affd41a49bb3 req-a7ec6750-f3e2-4116-8f88-e82f3b2e1c0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.201 247708 DEBUG oslo_concurrency.lockutils [req-85b47f07-eb2a-401a-b9db-affd41a49bb3 req-a7ec6750-f3e2-4116-8f88-e82f3b2e1c0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.201 247708 DEBUG oslo_concurrency.lockutils [req-85b47f07-eb2a-401a-b9db-affd41a49bb3 req-a7ec6750-f3e2-4116-8f88-e82f3b2e1c0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.202 247708 DEBUG nova.compute.manager [req-85b47f07-eb2a-401a-b9db-affd41a49bb3 req-a7ec6750-f3e2-4116-8f88-e82f3b2e1c0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] No event matching network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 in dict_keys([('volume-reimaged', '28bdb463-7d0a-42f5-8392-3acd847cfe3e')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.202 247708 WARNING nova.compute.manager [req-85b47f07-eb2a-401a-b9db-affd41a49bb3 req-a7ec6750-f3e2-4116-8f88-e82f3b2e1c0b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received unexpected event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 for instance with vm_state active and task_state rebuilding.
Jan 31 08:17:58 compute-0 nova_compute[247704]: 2026-01-31 08:17:58.260 247708 DEBUG oslo_concurrency.lockutils [None req-4abb0e0c-cc18-424d-889b-69748cd835c9 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:58.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:17:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:17:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:58.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:17:58 compute-0 ceph-mon[74496]: pgmap v2488: 305 pgs: 305 active+clean; 760 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 788 KiB/s wr, 32 op/s
Jan 31 08:17:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1378512047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:17:59 compute-0 nova_compute[247704]: 2026-01-31 08:17:59.708 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:17:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 797 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 56 op/s
Jan 31 08:17:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Jan 31 08:18:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Jan 31 08:18:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Jan 31 08:18:00 compute-0 ceph-mon[74496]: pgmap v2489: 305 pgs: 305 active+clean; 797 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 56 op/s
Jan 31 08:18:00 compute-0 nova_compute[247704]: 2026-01-31 08:18:00.586 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:00.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:00 compute-0 podman[339385]: 2026-01-31 08:18:00.915862338 +0000 UTC m=+0.086279012 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:18:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:18:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:18:01 compute-0 ceph-mon[74496]: osdmap e318: 3 total, 3 up, 3 in
Jan 31 08:18:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 822 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 31 08:18:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Jan 31 08:18:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Jan 31 08:18:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Jan 31 08:18:02 compute-0 ceph-mon[74496]: pgmap v2491: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 822 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 31 08:18:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:02.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:03 compute-0 ceph-mon[74496]: osdmap e319: 3 total, 3 up, 3 in
Jan 31 08:18:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 795 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 5.9 MiB/s wr, 198 op/s
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.476 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.477 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.477 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.478 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.478 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.480 247708 INFO nova.compute.manager [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Terminating instance
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.482 247708 DEBUG nova.compute.manager [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:18:04 compute-0 ceph-mon[74496]: pgmap v2493: 305 pgs: 1 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 300 active+clean; 795 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 5.9 MiB/s wr, 198 op/s
Jan 31 08:18:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:04.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:04 compute-0 kernel: tape33f46e9-c1 (unregistering): left promiscuous mode
Jan 31 08:18:04 compute-0 NetworkManager[49108]: <info>  [1769847484.7092] device (tape33f46e9-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.720 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:04 compute-0 ovn_controller[149457]: 2026-01-31T08:18:04Z|00584|binding|INFO|Releasing lport e33f46e9-c188-4c53-b863-93620a4c3452 from this chassis (sb_readonly=0)
Jan 31 08:18:04 compute-0 ovn_controller[149457]: 2026-01-31T08:18:04Z|00585|binding|INFO|Setting lport e33f46e9-c188-4c53-b863-93620a4c3452 down in Southbound
Jan 31 08:18:04 compute-0 ovn_controller[149457]: 2026-01-31T08:18:04Z|00586|binding|INFO|Removing iface tape33f46e9-c1 ovn-installed in OVS
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.724 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.737 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:04.744 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:71:3b 10.100.0.4'], port_security=['fa:16:3e:79:71:3b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '743cf933-3139-4c25-9c75-b45150274ae3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088d6992-6ba6-4719-a977-b3d306740157', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1633c84ea1bf46b080aaafd30bbcf25f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '11deed3f-7193-4325-affb-77a66beb8424', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=205f218b-b5d5-4c71-b350-59436d69ba1b, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e33f46e9-c188-4c53-b863-93620a4c3452) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:18:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:04.746 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e33f46e9-c188-4c53-b863-93620a4c3452 in datapath 088d6992-6ba6-4719-a977-b3d306740157 unbound from our chassis
Jan 31 08:18:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:04.747 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 088d6992-6ba6-4719-a977-b3d306740157, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:18:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:04.748 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0857a9-baec-4d25-8532-395183bb9dad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:04.749 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 namespace which is not needed anymore
Jan 31 08:18:04 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 31 08:18:04 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000087.scope: Consumed 15.855s CPU time.
Jan 31 08:18:04 compute-0 systemd-machined[214448]: Machine qemu-60-instance-00000087 terminated.
Jan 31 08:18:04 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[338885]: [NOTICE]   (338900) : haproxy version is 2.8.14-c23fe91
Jan 31 08:18:04 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[338885]: [NOTICE]   (338900) : path to executable is /usr/sbin/haproxy
Jan 31 08:18:04 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[338885]: [WARNING]  (338900) : Exiting Master process...
Jan 31 08:18:04 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[338885]: [ALERT]    (338900) : Current worker (338905) exited with code 143 (Terminated)
Jan 31 08:18:04 compute-0 neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157[338885]: [WARNING]  (338900) : All workers exited. Exiting... (0)
Jan 31 08:18:04 compute-0 systemd[1]: libpod-c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d.scope: Deactivated successfully.
Jan 31 08:18:04 compute-0 conmon[338885]: conmon c601e8e9102e2e6568f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d.scope/container/memory.events
Jan 31 08:18:04 compute-0 podman[339438]: 2026-01-31 08:18:04.898348079 +0000 UTC m=+0.055844301 container died c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.927 247708 INFO nova.virt.libvirt.driver [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Instance destroyed successfully.
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.928 247708 DEBUG nova.objects.instance [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lazy-loading 'resources' on Instance uuid 743cf933-3139-4c25-9c75-b45150274ae3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:18:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d-userdata-shm.mount: Deactivated successfully.
Jan 31 08:18:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbf5d2d8bb00b93c65060bb77a2b78cb6f73f86398d751a12563e975ca098d56-merged.mount: Deactivated successfully.
Jan 31 08:18:04 compute-0 podman[339438]: 2026-01-31 08:18:04.947700522 +0000 UTC m=+0.105196734 container cleanup c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:18:04 compute-0 systemd[1]: libpod-conmon-c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d.scope: Deactivated successfully.
Jan 31 08:18:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:04.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.966 247708 DEBUG nova.virt.libvirt.vif [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-107442355',display_name='tempest-ServerStableDeviceRescueTest-server-107442355',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-107442355',id=135,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbP3gf7LmroDDYk0USWIG5AuQO84YQj17XtBS8JALIesJpm7oyiz9kFGiUhk5vHZ+8lJiStdnRkjJm79czkwPbDOitiF7fhmca3/rEfJLTDNBTZ+uam26C3TTY99imMvg==',key_name='tempest-keypair-719720114',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:17:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1633c84ea1bf46b080aaafd30bbcf25f',ramdisk_id='',reservation_id='r-401rbz0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-569420416',owner_user_name='tempest-ServerStableDeviceRescueTest-569420416-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:17:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d7d9a44201d548aba1e1654e136ddd06',uuid=743cf933-3139-4c25-9c75-b45150274ae3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.967 247708 DEBUG nova.network.os_vif_util [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converting VIF {"id": "e33f46e9-c188-4c53-b863-93620a4c3452", "address": "fa:16:3e:79:71:3b", "network": {"id": "088d6992-6ba6-4719-a977-b3d306740157", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-453071632-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1633c84ea1bf46b080aaafd30bbcf25f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape33f46e9-c1", "ovs_interfaceid": "e33f46e9-c188-4c53-b863-93620a4c3452", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.968 247708 DEBUG nova.network.os_vif_util [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:71:3b,bridge_name='br-int',has_traffic_filtering=True,id=e33f46e9-c188-4c53-b863-93620a4c3452,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape33f46e9-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.968 247708 DEBUG os_vif [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:71:3b,bridge_name='br-int',has_traffic_filtering=True,id=e33f46e9-c188-4c53-b863-93620a4c3452,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape33f46e9-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.970 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.970 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape33f46e9-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.971 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.973 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:04 compute-0 nova_compute[247704]: 2026-01-31 08:18:04.976 247708 INFO os_vif [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:71:3b,bridge_name='br-int',has_traffic_filtering=True,id=e33f46e9-c188-4c53-b863-93620a4c3452,network=Network(088d6992-6ba6-4719-a977-b3d306740157),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape33f46e9-c1')
Jan 31 08:18:05 compute-0 podman[339480]: 2026-01-31 08:18:05.059702031 +0000 UTC m=+0.084356056 container remove c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.063 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e70bbc5b-3e7d-464c-b531-f4c187a541b4]: (4, ('Sat Jan 31 08:18:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d)\nc601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d\nSat Jan 31 08:18:04 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 (c601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d)\nc601e8e9102e2e6568f6401b7fe896652ff4485ef112c1f4e6fd7cc88c84437d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.066 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ef8490-b24c-439b-88e3-58954c855687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.066 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088d6992-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:05 compute-0 nova_compute[247704]: 2026-01-31 08:18:05.068 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:05 compute-0 kernel: tap088d6992-60: left promiscuous mode
Jan 31 08:18:05 compute-0 nova_compute[247704]: 2026-01-31 08:18:05.077 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:05 compute-0 nova_compute[247704]: 2026-01-31 08:18:05.078 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.080 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a34ced38-b7c4-44c2-9e08-aac6d5fb0c14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.097 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4574e92f-0e35-4b90-8105-1e6389b09aae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.098 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[318975ed-154f-43fa-af94-03f882257f0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.113 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cbb78f-9d7c-416f-ad53-97c9fd0d1909]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768639, 'reachable_time': 18286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339513, 'error': None, 'target': 'ovnmeta-088d6992-6ba6-4719-a977-b3d306740157', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.116 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-088d6992-6ba6-4719-a977-b3d306740157 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:18:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:05.116 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[3c168685-b51b-4a40-be48-7755c159d589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d088d6992\x2d6ba6\x2d4719\x2da977\x2db3d306740157.mount: Deactivated successfully.
Jan 31 08:18:05 compute-0 nova_compute[247704]: 2026-01-31 08:18:05.784 247708 INFO nova.virt.libvirt.driver [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Deleting instance files /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3_del
Jan 31 08:18:05 compute-0 nova_compute[247704]: 2026-01-31 08:18:05.785 247708 INFO nova.virt.libvirt.driver [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Deletion of /var/lib/nova/instances/743cf933-3139-4c25-9c75-b45150274ae3_del complete
Jan 31 08:18:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 668 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 4.7 MiB/s wr, 266 op/s
Jan 31 08:18:06 compute-0 nova_compute[247704]: 2026-01-31 08:18:06.643 247708 INFO nova.compute.manager [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Took 2.16 seconds to destroy the instance on the hypervisor.
Jan 31 08:18:06 compute-0 nova_compute[247704]: 2026-01-31 08:18:06.643 247708 DEBUG oslo.service.loopingcall [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:18:06 compute-0 nova_compute[247704]: 2026-01-31 08:18:06.644 247708 DEBUG nova.compute.manager [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:18:06 compute-0 nova_compute[247704]: 2026-01-31 08:18:06.644 247708 DEBUG nova.network.neutron [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:18:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Jan 31 08:18:06 compute-0 sudo[339516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:06 compute-0 sudo[339516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:06 compute-0 sudo[339516]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:06 compute-0 sudo[339541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:06 compute-0 sudo[339541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:06 compute-0 sudo[339541]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:06.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:07 compute-0 nova_compute[247704]: 2026-01-31 08:18:07.105 247708 DEBUG nova.compute.manager [req-b17ef5b1-415b-4bfb-b8f5-2935db25b3f8 req-b9b920b3-8ec2-4fa5-a90c-f69c56e4eb5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:07 compute-0 nova_compute[247704]: 2026-01-31 08:18:07.106 247708 DEBUG oslo_concurrency.lockutils [req-b17ef5b1-415b-4bfb-b8f5-2935db25b3f8 req-b9b920b3-8ec2-4fa5-a90c-f69c56e4eb5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:07 compute-0 nova_compute[247704]: 2026-01-31 08:18:07.106 247708 DEBUG oslo_concurrency.lockutils [req-b17ef5b1-415b-4bfb-b8f5-2935db25b3f8 req-b9b920b3-8ec2-4fa5-a90c-f69c56e4eb5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:07 compute-0 nova_compute[247704]: 2026-01-31 08:18:07.107 247708 DEBUG oslo_concurrency.lockutils [req-b17ef5b1-415b-4bfb-b8f5-2935db25b3f8 req-b9b920b3-8ec2-4fa5-a90c-f69c56e4eb5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:07 compute-0 nova_compute[247704]: 2026-01-31 08:18:07.107 247708 DEBUG nova.compute.manager [req-b17ef5b1-415b-4bfb-b8f5-2935db25b3f8 req-b9b920b3-8ec2-4fa5-a90c-f69c56e4eb5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:18:07 compute-0 nova_compute[247704]: 2026-01-31 08:18:07.108 247708 DEBUG nova.compute.manager [req-b17ef5b1-415b-4bfb-b8f5-2935db25b3f8 req-b9b920b3-8ec2-4fa5-a90c-f69c56e4eb5c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-unplugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:18:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Jan 31 08:18:07 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Jan 31 08:18:07 compute-0 ceph-mon[74496]: pgmap v2494: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 668 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 4.7 MiB/s wr, 266 op/s
Jan 31 08:18:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Jan 31 08:18:08 compute-0 ceph-mon[74496]: osdmap e320: 3 total, 3 up, 3 in
Jan 31 08:18:08 compute-0 ceph-mon[74496]: pgmap v2496: 305 pgs: 305 active+clean; 614 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.8 MiB/s wr, 287 op/s
Jan 31 08:18:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:08 compute-0 nova_compute[247704]: 2026-01-31 08:18:08.739 247708 DEBUG nova.network.neutron [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:18:08 compute-0 nova_compute[247704]: 2026-01-31 08:18:08.796 247708 INFO nova.compute.manager [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Took 2.15 seconds to deallocate network for instance.
Jan 31 08:18:08 compute-0 nova_compute[247704]: 2026-01-31 08:18:08.938 247708 DEBUG nova.compute.manager [req-445bc0a9-871d-4786-b648-d869b88936cc req-96c12e83-1c63-443e-8ab6-654ab29826d5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-deleted-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:08.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:08 compute-0 nova_compute[247704]: 2026-01-31 08:18:08.995 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:08 compute-0 nova_compute[247704]: 2026-01-31 08:18:08.996 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.079 247708 DEBUG oslo_concurrency.processutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.407 247708 DEBUG nova.compute.manager [req-bee4bacd-3bde-4193-ae64-a8f0914af2c1 req-2939007c-66e1-41b4-89ee-e9aa53d2f613 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.408 247708 DEBUG oslo_concurrency.lockutils [req-bee4bacd-3bde-4193-ae64-a8f0914af2c1 req-2939007c-66e1-41b4-89ee-e9aa53d2f613 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "743cf933-3139-4c25-9c75-b45150274ae3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.408 247708 DEBUG oslo_concurrency.lockutils [req-bee4bacd-3bde-4193-ae64-a8f0914af2c1 req-2939007c-66e1-41b4-89ee-e9aa53d2f613 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.409 247708 DEBUG oslo_concurrency.lockutils [req-bee4bacd-3bde-4193-ae64-a8f0914af2c1 req-2939007c-66e1-41b4-89ee-e9aa53d2f613 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.409 247708 DEBUG nova.compute.manager [req-bee4bacd-3bde-4193-ae64-a8f0914af2c1 req-2939007c-66e1-41b4-89ee-e9aa53d2f613 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] No waiting events found dispatching network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.410 247708 WARNING nova.compute.manager [req-bee4bacd-3bde-4193-ae64-a8f0914af2c1 req-2939007c-66e1-41b4-89ee-e9aa53d2f613 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Received unexpected event network-vif-plugged-e33f46e9-c188-4c53-b863-93620a4c3452 for instance with vm_state deleted and task_state None.
Jan 31 08:18:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:18:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030513395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.512 247708 DEBUG oslo_concurrency.processutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.520 247708 DEBUG nova.compute.provider_tree [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:18:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2030513395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.580 247708 DEBUG nova.scheduler.client.report [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.670 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 559 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 427 KiB/s wr, 230 op/s
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.941 247708 INFO nova.scheduler.client.report [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Deleted allocations for instance 743cf933-3139-4c25-9c75-b45150274ae3
Jan 31 08:18:09 compute-0 nova_compute[247704]: 2026-01-31 08:18:09.972 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.137 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847475.1366339, 83252cb5-25d7-40e3-823d-02d1d0eb73f1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.138 247708 INFO nova.compute.manager [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] VM Stopped (Lifecycle Event)
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.211 247708 DEBUG nova.compute.manager [None req-b7edec87-d817-4312-94b3-710975f3f8ed - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.289 247708 DEBUG oslo_concurrency.lockutils [None req-0bb0bd47-3d6b-4be6-b1da-641554a17560 d7d9a44201d548aba1e1654e136ddd06 1633c84ea1bf46b080aaafd30bbcf25f - - default default] Lock "743cf933-3139-4c25-9c75-b45150274ae3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.603 247708 DEBUG nova.compute.manager [req-8abd5235-00cf-48bc-8bc1-20c9481e553c req-fdf619f8-ade6-4b15-8c3a-76bb9ad0401e 56002d9dca034ce2b7f7186567cfa432 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event volume-reimaged-28bdb463-7d0a-42f5-8392-3acd847cfe3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.604 247708 DEBUG oslo_concurrency.lockutils [req-8abd5235-00cf-48bc-8bc1-20c9481e553c req-fdf619f8-ade6-4b15-8c3a-76bb9ad0401e 56002d9dca034ce2b7f7186567cfa432 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.605 247708 DEBUG oslo_concurrency.lockutils [req-8abd5235-00cf-48bc-8bc1-20c9481e553c req-fdf619f8-ade6-4b15-8c3a-76bb9ad0401e 56002d9dca034ce2b7f7186567cfa432 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.605 247708 DEBUG oslo_concurrency.lockutils [req-8abd5235-00cf-48bc-8bc1-20c9481e553c req-fdf619f8-ade6-4b15-8c3a-76bb9ad0401e 56002d9dca034ce2b7f7186567cfa432 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.605 247708 DEBUG nova.compute.manager [req-8abd5235-00cf-48bc-8bc1-20c9481e553c req-fdf619f8-ade6-4b15-8c3a-76bb9ad0401e 56002d9dca034ce2b7f7186567cfa432 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Processing event volume-reimaged-28bdb463-7d0a-42f5-8392-3acd847cfe3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.607 247708 DEBUG nova.compute.manager [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance event wait completed in 10 seconds for volume-reimaged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:18:10 compute-0 ceph-mon[74496]: pgmap v2497: 305 pgs: 305 active+clean; 559 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 427 KiB/s wr, 230 op/s
Jan 31 08:18:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:10 compute-0 nova_compute[247704]: 2026-01-31 08:18:10.932 247708 INFO nova.virt.block_device [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Booting with volume 28bdb463-7d0a-42f5-8392-3acd847cfe3e at /dev/vda
Jan 31 08:18:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:11.190 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:11.191 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:11.191 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.285 247708 DEBUG os_brick.utils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.287 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.302 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.303 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[8b31604c-2f66-4081-b817-f1e547d32203]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.306 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.312 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.313 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[9e75737c-ecbe-4734-b5d2-f76dce12b9cf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.315 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.324 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.325 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c59960-0ab8-42d0-a3d5-9e43e956d50c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.327 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[ac617a68-795f-40c7-88fb-509b2f709306]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.328 247708 DEBUG oslo_concurrency.processutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.354 247708 DEBUG oslo_concurrency.processutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.358 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.358 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.359 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.360 247708 DEBUG os_brick.utils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:18:11 compute-0 nova_compute[247704]: 2026-01-31 08:18:11.360 247708 DEBUG nova.virt.block_device [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updating existing volume attachment record: 702eea6c-f82b-4551-9b33-10fe93968866 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:18:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Jan 31 08:18:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 542 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.4 MiB/s wr, 204 op/s
Jan 31 08:18:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Jan 31 08:18:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Jan 31 08:18:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:12.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:12.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:13 compute-0 ceph-mon[74496]: pgmap v2498: 305 pgs: 305 active+clean; 542 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.4 MiB/s wr, 204 op/s
Jan 31 08:18:13 compute-0 ceph-mon[74496]: osdmap e321: 3 total, 3 up, 3 in
Jan 31 08:18:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2716440508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.222 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.223 247708 INFO nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Creating image(s)
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.223 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.224 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Ensure instance console log exists: /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.225 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.225 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.225 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.230 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Start _get_guest_xml network_info=[{"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '702eea6c-f82b-4551-9b33-10fe93968866', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-28bdb463-7d0a-42f5-8392-3acd847cfe3e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '28bdb463-7d0a-42f5-8392-3acd847cfe3e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '83252cb5-25d7-40e3-823d-02d1d0eb73f1', 'attached_at': '', 'detached_at': '', 'volume_id': '28bdb463-7d0a-42f5-8392-3acd847cfe3e', 'serial': '28bdb463-7d0a-42f5-8392-3acd847cfe3e'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.236 247708 WARNING nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.245 247708 DEBUG nova.virt.libvirt.host [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.247 247708 DEBUG nova.virt.libvirt.host [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.251 247708 DEBUG nova.virt.libvirt.host [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.252 247708 DEBUG nova.virt.libvirt.host [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.254 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.254 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:34Z,direct_url=<?>,disk_format='qcow2',id=40cf2ff3-f7ff-4843-b4ab-b7dcc843006f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:39Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.255 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.255 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.256 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.256 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.257 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.257 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.258 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.258 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.259 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.259 247708 DEBUG nova.virt.hardware [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.260 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.402 247708 DEBUG nova.storage.rbd_utils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] rbd image 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.408 247708 DEBUG oslo_concurrency.processutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:18:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826215668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.844 247708 DEBUG oslo_concurrency.processutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 120 op/s
Jan 31 08:18:13 compute-0 podman[339637]: 2026-01-31 08:18:13.920018515 +0000 UTC m=+0.087597775 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.978 247708 DEBUG nova.virt.libvirt.vif [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:17:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-934525208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-20534748',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTBDp6a0SyMFs7OU2ywUkjN63r2N0PxPaRE6bSDGEX+Eq/ObkyuaXGME0SMKDkMYoPYqAUjzxLW21QhzrJW+bRoGYJmTS4PrdUXkRuBCox1bDVqwWhiTbfXfC+MijeFQ==',key_name='tempest-keypair-1991810396',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:17:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bda3ebf6541d46309fc9b2ce089dd857',ramdisk_id='',reservation_id='r-canzs8i7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q
35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-284184156',owner_user_name='tempest-ServerActionsV293TestJSON-284184156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:18:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90313c82677c4144953c58efc0e13c3e',uuid=83252cb5-25d7-40e3-823d-02d1d0eb73f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.980 247708 DEBUG nova.network.os_vif_util [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converting VIF {"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.982 247708 DEBUG nova.network.os_vif_util [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.985 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <uuid>83252cb5-25d7-40e3-823d-02d1d0eb73f1</uuid>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <name>instance-0000008b</name>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerActionsV293TestJSON-server-934525208</nova:name>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:18:13</nova:creationTime>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <nova:user uuid="90313c82677c4144953c58efc0e13c3e">tempest-ServerActionsV293TestJSON-284184156-project-member</nova:user>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <nova:project uuid="bda3ebf6541d46309fc9b2ce089dd857">tempest-ServerActionsV293TestJSON-284184156</nova:project>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <nova:port uuid="1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7">
Jan 31 08:18:13 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <system>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <entry name="serial">83252cb5-25d7-40e3-823d-02d1d0eb73f1</entry>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <entry name="uuid">83252cb5-25d7-40e3-823d-02d1d0eb73f1</entry>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </system>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <os>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   </os>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <features>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   </features>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config">
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       </source>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-28bdb463-7d0a-42f5-8392-3acd847cfe3e">
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       </source>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:18:13 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <serial>28bdb463-7d0a-42f5-8392-3acd847cfe3e</serial>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:85:b5:29"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <target dev="tap1cf8d8b8-8d"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/console.log" append="off"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <video>
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </video>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:18:13 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:18:13 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:18:13 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:18:13 compute-0 nova_compute[247704]: </domain>
Jan 31 08:18:13 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.987 247708 DEBUG nova.virt.libvirt.vif [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:17:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-934525208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-20534748',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTBDp6a0SyMFs7OU2ywUkjN63r2N0PxPaRE6bSDGEX+Eq/ObkyuaXGME0SMKDkMYoPYqAUjzxLW21QhzrJW+bRoGYJmTS4PrdUXkRuBCox1bDVqwWhiTbfXfC+MijeFQ==',key_name='tempest-keypair-1991810396',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:17:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='bda3ebf6541d46309fc9b2ce089dd857',ramdisk_id='',reservation_id='r-canzs8i7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q
35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-284184156',owner_user_name='tempest-ServerActionsV293TestJSON-284184156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:18:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90313c82677c4144953c58efc0e13c3e',uuid=83252cb5-25d7-40e3-823d-02d1d0eb73f1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.987 247708 DEBUG nova.network.os_vif_util [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converting VIF {"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.988 247708 DEBUG nova.network.os_vif_util [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.989 247708 DEBUG os_vif [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.990 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.990 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.991 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.996 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.996 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cf8d8b8-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:13 compute-0 nova_compute[247704]: 2026-01-31 08:18:13.997 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cf8d8b8-8d, col_values=(('external_ids', {'iface-id': '1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:b5:29', 'vm-uuid': '83252cb5-25d7-40e3-823d-02d1d0eb73f1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.000 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:14 compute-0 NetworkManager[49108]: <info>  [1769847494.0013] manager: (tap1cf8d8b8-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.003 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.008 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.010 247708 INFO os_vif [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d')
Jan 31 08:18:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Jan 31 08:18:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Jan 31 08:18:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.163 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.164 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.165 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] No VIF found with MAC fa:16:3e:85:b5:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.165 247708 INFO nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Using config drive
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.218 247708 DEBUG nova.storage.rbd_utils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] rbd image 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.244 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.289 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'keypairs' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:18:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1826215668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:18:14 compute-0 ceph-mon[74496]: pgmap v2500: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 120 op/s
Jan 31 08:18:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:18:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:18:14 compute-0 nova_compute[247704]: 2026-01-31 08:18:14.741 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:18:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:14.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:18:15 compute-0 ceph-mon[74496]: osdmap e322: 3 total, 3 up, 3 in
Jan 31 08:18:15 compute-0 nova_compute[247704]: 2026-01-31 08:18:15.468 247708 INFO nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Creating config drive at /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config
Jan 31 08:18:15 compute-0 nova_compute[247704]: 2026-01-31 08:18:15.477 247708 DEBUG oslo_concurrency.processutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpt9tduyi5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:15 compute-0 nova_compute[247704]: 2026-01-31 08:18:15.615 247708 DEBUG oslo_concurrency.processutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpt9tduyi5" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:15 compute-0 nova_compute[247704]: 2026-01-31 08:18:15.654 247708 DEBUG nova.storage.rbd_utils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] rbd image 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:18:15 compute-0 nova_compute[247704]: 2026-01-31 08:18:15.660 247708 DEBUG oslo_concurrency.processutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 533 KiB/s rd, 2.7 MiB/s wr, 123 op/s
Jan 31 08:18:16 compute-0 ceph-mon[74496]: pgmap v2502: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 533 KiB/s rd, 2.7 MiB/s wr, 123 op/s
Jan 31 08:18:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:18:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:18:16 compute-0 nova_compute[247704]: 2026-01-31 08:18:16.768 247708 DEBUG oslo_concurrency.processutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config 83252cb5-25d7-40e3-823d-02d1d0eb73f1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:16 compute-0 nova_compute[247704]: 2026-01-31 08:18:16.770 247708 INFO nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Deleting local config drive /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1/disk.config because it was imported into RBD.
Jan 31 08:18:16 compute-0 kernel: tap1cf8d8b8-8d: entered promiscuous mode
Jan 31 08:18:16 compute-0 NetworkManager[49108]: <info>  [1769847496.8406] manager: (tap1cf8d8b8-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Jan 31 08:18:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:16 compute-0 ovn_controller[149457]: 2026-01-31T08:18:16Z|00587|binding|INFO|Claiming lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 for this chassis.
Jan 31 08:18:16 compute-0 ovn_controller[149457]: 2026-01-31T08:18:16Z|00588|binding|INFO|1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7: Claiming fa:16:3e:85:b5:29 10.100.0.3
Jan 31 08:18:16 compute-0 nova_compute[247704]: 2026-01-31 08:18:16.890 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:16 compute-0 ovn_controller[149457]: 2026-01-31T08:18:16Z|00589|binding|INFO|Setting lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 ovn-installed in OVS
Jan 31 08:18:16 compute-0 ovn_controller[149457]: 2026-01-31T08:18:16Z|00590|binding|INFO|Setting lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 up in Southbound
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.901 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:b5:29 10.100.0.3'], port_security=['fa:16:3e:85:b5:29 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '83252cb5-25d7-40e3-823d-02d1d0eb73f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26cb596-7b93-4afd-9b50-133e9afd768d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bda3ebf6541d46309fc9b2ce089dd857', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bc88ce0b-3f35-4088-82b2-3c421cb8a010', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06815ca2-b7af-4c41-816c-402a584fa466, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.904 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 in datapath c26cb596-7b93-4afd-9b50-133e9afd768d bound to our chassis
Jan 31 08:18:16 compute-0 nova_compute[247704]: 2026-01-31 08:18:16.903 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:16 compute-0 nova_compute[247704]: 2026-01-31 08:18:16.905 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:16 compute-0 nova_compute[247704]: 2026-01-31 08:18:16.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.909 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c26cb596-7b93-4afd-9b50-133e9afd768d
Jan 31 08:18:16 compute-0 systemd-machined[214448]: New machine qemu-61-instance-0000008b.
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.925 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1c05211b-eb3f-4bcb-9f8f-944997aa2075]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.926 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc26cb596-71 in ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.928 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc26cb596-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.929 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d666fe24-2841-48e2-b4aa-ff271b59ca5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.930 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e4226261-b839-4dff-b616-38f87c3e8424]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:16 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-0000008b.
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.940 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[1eaa7114-6288-41dc-8bdc-df0794311701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:16 compute-0 systemd-udevd[339737]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.954 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf13c65-e324-4d0a-8645-956deb5a0105]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:16 compute-0 NetworkManager[49108]: <info>  [1769847496.9687] device (tap1cf8d8b8-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:18:16 compute-0 NetworkManager[49108]: <info>  [1769847496.9701] device (tap1cf8d8b8-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:18:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:16.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:16.995 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[16611477-fb28-4228-9b20-75d844f967e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 NetworkManager[49108]: <info>  [1769847497.0055] manager: (tapc26cb596-70): new Veth device (/org/freedesktop/NetworkManager/Devices/263)
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.005 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[355123c9-ff9f-4d26-8437-0d7cd4b64aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.035 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[705953f8-26db-42f2-8577-03c47c888b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.038 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bf544c4a-f9da-48d0-b571-60855b7a499f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 NetworkManager[49108]: <info>  [1769847497.0578] device (tapc26cb596-70): carrier: link connected
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.060 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9d4df927-011a-44a3-97b5-3ad77ca016f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.078 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c91b0518-eb45-464f-a4be-13145c1dfb5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26cb596-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:20:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773967, 'reachable_time': 42630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339767, 'error': None, 'target': 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.092 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3cc16730-4d46-4306-9ca3-3ba2bfeedeae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:2057'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 773967, 'tstamp': 773967}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339768, 'error': None, 'target': 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.110 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d28c2c56-3d46-4435-a16a-b9331d0a4732]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc26cb596-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:20:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773967, 'reachable_time': 42630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339769, 'error': None, 'target': 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.136 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e254a663-66d2-48ed-b8f8-e294eb307254]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.192 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7ad646-be69-4bfb-8397-37a921294f28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.194 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26cb596-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.194 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.195 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc26cb596-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.196 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:17 compute-0 NetworkManager[49108]: <info>  [1769847497.1978] manager: (tapc26cb596-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Jan 31 08:18:17 compute-0 kernel: tapc26cb596-70: entered promiscuous mode
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.199 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.201 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc26cb596-70, col_values=(('external_ids', {'iface-id': '9c7eeaf4-153c-4957-b0ec-6ca490c32a88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.201 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:17 compute-0 ovn_controller[149457]: 2026-01-31T08:18:17Z|00591|binding|INFO|Releasing lport 9c7eeaf4-153c-4957-b0ec-6ca490c32a88 from this chassis (sb_readonly=0)
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.208 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.209 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.210 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c26cb596-7b93-4afd-9b50-133e9afd768d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c26cb596-7b93-4afd-9b50-133e9afd768d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.211 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee42f08-48c1-4874-9397-8b3e0ba391e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.211 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-c26cb596-7b93-4afd-9b50-133e9afd768d
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/c26cb596-7b93-4afd-9b50-133e9afd768d.pid.haproxy
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID c26cb596-7b93-4afd-9b50-133e9afd768d
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:18:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:17.212 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'env', 'PROCESS_TAG=haproxy-c26cb596-7b93-4afd-9b50-133e9afd768d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c26cb596-7b93-4afd-9b50-133e9afd768d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.245 247708 DEBUG nova.compute.manager [req-a3a4060a-d349-4cf2-b46e-b004bb1ecf67 req-2e2dd164-675c-45ae-91da-587b7aaef6fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.245 247708 DEBUG oslo_concurrency.lockutils [req-a3a4060a-d349-4cf2-b46e-b004bb1ecf67 req-2e2dd164-675c-45ae-91da-587b7aaef6fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.245 247708 DEBUG oslo_concurrency.lockutils [req-a3a4060a-d349-4cf2-b46e-b004bb1ecf67 req-2e2dd164-675c-45ae-91da-587b7aaef6fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.246 247708 DEBUG oslo_concurrency.lockutils [req-a3a4060a-d349-4cf2-b46e-b004bb1ecf67 req-2e2dd164-675c-45ae-91da-587b7aaef6fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.246 247708 DEBUG nova.compute.manager [req-a3a4060a-d349-4cf2-b46e-b004bb1ecf67 req-2e2dd164-675c-45ae-91da-587b7aaef6fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] No waiting events found dispatching network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.246 247708 WARNING nova.compute.manager [req-a3a4060a-d349-4cf2-b46e-b004bb1ecf67 req-2e2dd164-675c-45ae-91da-587b7aaef6fa 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received unexpected event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 for instance with vm_state active and task_state rebuild_spawning.
Jan 31 08:18:17 compute-0 podman[339819]: 2026-01-31 08:18:17.527120347 +0000 UTC m=+0.046799561 container create 695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:18:17 compute-0 systemd[1]: Started libpod-conmon-695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568.scope.
Jan 31 08:18:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:17 compute-0 podman[339819]: 2026-01-31 08:18:17.499680879 +0000 UTC m=+0.019360113 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:18:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4c341013206974a396c3219bd170547826a1918df91ada6a109beba08b2c4ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:17 compute-0 podman[339819]: 2026-01-31 08:18:17.611607566 +0000 UTC m=+0.131286790 container init 695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:18:17 compute-0 podman[339819]: 2026-01-31 08:18:17.616041054 +0000 UTC m=+0.135720268 container start 695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:18:17 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[339851]: [NOTICE]   (339863) : New worker (339865) forked
Jan 31 08:18:17 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[339851]: [NOTICE]   (339863) : Loading success.
Jan 31 08:18:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.706 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847497.7059271, 83252cb5-25d7-40e3-823d-02d1d0eb73f1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.707 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] VM Resumed (Lifecycle Event)
Jan 31 08:18:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3663357759' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:18:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3663357759' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.712 247708 DEBUG nova.compute.manager [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.714 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.721 247708 INFO nova.virt.libvirt.driver [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance spawned successfully.
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.722 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.771 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.779 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.781 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.782 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.782 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.783 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.783 247708 DEBUG nova.virt.libvirt.driver [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.786 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.830 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.831 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847497.707377, 83252cb5-25d7-40e3-823d-02d1d0eb73f1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.832 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] VM Started (Lifecycle Event)
Jan 31 08:18:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Jan 31 08:18:17 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Jan 31 08:18:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 906 KiB/s rd, 1.4 MiB/s wr, 156 op/s
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.877 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.882 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.885 247708 DEBUG nova.compute.manager [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.924 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.970 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.971 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:17 compute-0 nova_compute[247704]: 2026-01-31 08:18:17.971 247708 DEBUG nova.objects.instance [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 08:18:18 compute-0 nova_compute[247704]: 2026-01-31 08:18:18.055 247708 DEBUG oslo_concurrency.lockutils [None req-8abd5235-00cf-48bc-8bc1-20c9481e553c 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:18 compute-0 ceph-mon[74496]: osdmap e323: 3 total, 3 up, 3 in
Jan 31 08:18:18 compute-0 ceph-mon[74496]: pgmap v2504: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 906 KiB/s rd, 1.4 MiB/s wr, 156 op/s
Jan 31 08:18:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:18:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:18.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.000 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.365 247708 DEBUG nova.compute.manager [req-0cda2242-44e9-42e2-b0be-94943d5902ff req-3e69dde4-8bc5-4748-a2cd-acb901db75a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.366 247708 DEBUG oslo_concurrency.lockutils [req-0cda2242-44e9-42e2-b0be-94943d5902ff req-3e69dde4-8bc5-4748-a2cd-acb901db75a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.366 247708 DEBUG oslo_concurrency.lockutils [req-0cda2242-44e9-42e2-b0be-94943d5902ff req-3e69dde4-8bc5-4748-a2cd-acb901db75a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.366 247708 DEBUG oslo_concurrency.lockutils [req-0cda2242-44e9-42e2-b0be-94943d5902ff req-3e69dde4-8bc5-4748-a2cd-acb901db75a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.367 247708 DEBUG nova.compute.manager [req-0cda2242-44e9-42e2-b0be-94943d5902ff req-3e69dde4-8bc5-4748-a2cd-acb901db75a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] No waiting events found dispatching network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.367 247708 WARNING nova.compute.manager [req-0cda2242-44e9-42e2-b0be-94943d5902ff req-3e69dde4-8bc5-4748-a2cd-acb901db75a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received unexpected event network-vif-plugged-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 for instance with vm_state active and task_state None.
Jan 31 08:18:19 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:19 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 08:18:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 186 op/s
Jan 31 08:18:19 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.926 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847484.925941, 743cf933-3139-4c25-9c75-b45150274ae3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.928 247708 INFO nova.compute.manager [-] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] VM Stopped (Lifecycle Event)
Jan 31 08:18:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Jan 31 08:18:19 compute-0 nova_compute[247704]: 2026-01-31 08:18:19.961 247708 DEBUG nova.compute.manager [None req-40490a5f-44aa-47bb-b810-026b3d84c8a2 - - - - - -] [instance: 743cf933-3139-4c25-9c75-b45150274ae3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:18:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Jan 31 08:18:19 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:18:20
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'volumes', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root']
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:18:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:18:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1692506578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:18:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:18:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1692506578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:18:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:18:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:20.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:21 compute-0 ceph-mon[74496]: pgmap v2505: 305 pgs: 305 active+clean; 549 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 186 op/s
Jan 31 08:18:21 compute-0 ceph-mon[74496]: osdmap e324: 3 total, 3 up, 3 in
Jan 31 08:18:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1692506578' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:18:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1692506578' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:18:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 537 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 60 KiB/s wr, 230 op/s
Jan 31 08:18:22 compute-0 ceph-mon[74496]: pgmap v2507: 305 pgs: 305 active+clean; 537 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 60 KiB/s wr, 230 op/s
Jan 31 08:18:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:22.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 55 KiB/s wr, 307 op/s
Jan 31 08:18:24 compute-0 nova_compute[247704]: 2026-01-31 08:18:24.045 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:24 compute-0 nova_compute[247704]: 2026-01-31 08:18:24.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:24.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:25 compute-0 ceph-mon[74496]: pgmap v2508: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 55 KiB/s wr, 307 op/s
Jan 31 08:18:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 42 KiB/s wr, 444 op/s
Jan 31 08:18:26 compute-0 ceph-mon[74496]: pgmap v2509: 305 pgs: 305 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 42 KiB/s wr, 444 op/s
Jan 31 08:18:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3249558292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:26 compute-0 sshd-session[339880]: Connection closed by authenticating user root 45.148.10.240 port 42638 [preauth]
Jan 31 08:18:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Jan 31 08:18:26 compute-0 sudo[339882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:26 compute-0 sudo[339882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:26 compute-0 sudo[339882]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:26.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:27 compute-0 sudo[339905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:27 compute-0 sudo[339905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-0 sudo[339905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Jan 31 08:18:27 compute-0 sudo[339930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:27 compute-0 sudo[339930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-0 sudo[339930]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-0 sudo[339940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:27 compute-0 sudo[339940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-0 sudo[339940]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-0 sudo[339980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:27 compute-0 sudo[339980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-0 sudo[339980]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Jan 31 08:18:27 compute-0 sudo[340007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:18:27 compute-0 sudo[340007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-0 sudo[340007]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:18:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:18:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:18:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:18:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:18:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 08aa98e7-a935-41eb-b796-1c2c6707330a does not exist
Jan 31 08:18:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a0c45fa2-0f71-4e76-8c11-929dcfd8bdcc does not exist
Jan 31 08:18:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c069bac1-7b36-4b33-87da-383029729047 does not exist
Jan 31 08:18:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:18:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:18:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:18:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:18:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:18:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:18:27 compute-0 sudo[340064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:27 compute-0 sudo[340064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-0 sudo[340064]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 20 KiB/s wr, 398 op/s
Jan 31 08:18:27 compute-0 sudo[340089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:27 compute-0 sudo[340089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-0 sudo[340089]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:27 compute-0 auditd[699]: Audit daemon rotating log files
Jan 31 08:18:27 compute-0 sudo[340114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:27 compute-0 sudo[340114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:27 compute-0 sudo[340114]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:28 compute-0 sudo[340139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:18:28 compute-0 sudo[340139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:28 compute-0 ceph-mon[74496]: osdmap e325: 3 total, 3 up, 3 in
Jan 31 08:18:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:18:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:18:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:18:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:18:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:18:28 compute-0 ceph-mon[74496]: pgmap v2511: 305 pgs: 305 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 20 KiB/s wr, 398 op/s
Jan 31 08:18:28 compute-0 podman[340203]: 2026-01-31 08:18:28.323043524 +0000 UTC m=+0.022564971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:18:28 compute-0 podman[340203]: 2026-01-31 08:18:28.438461296 +0000 UTC m=+0.137982733 container create b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:28 compute-0 systemd[1]: Started libpod-conmon-b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702.scope.
Jan 31 08:18:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:28 compute-0 podman[340203]: 2026-01-31 08:18:28.619529349 +0000 UTC m=+0.319050796 container init b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:18:28 compute-0 podman[340203]: 2026-01-31 08:18:28.630768743 +0000 UTC m=+0.330290180 container start b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:18:28 compute-0 peaceful_pike[340219]: 167 167
Jan 31 08:18:28 compute-0 systemd[1]: libpod-b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702.scope: Deactivated successfully.
Jan 31 08:18:28 compute-0 podman[340203]: 2026-01-31 08:18:28.651185979 +0000 UTC m=+0.350707406 container attach b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:18:28 compute-0 podman[340203]: 2026-01-31 08:18:28.652554283 +0000 UTC m=+0.352075700 container died b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:18:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:28.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-591471b27917daa943619234056a18bbd70714a444d3f991bfb8ef2008e7b1e6-merged.mount: Deactivated successfully.
Jan 31 08:18:28 compute-0 podman[340203]: 2026-01-31 08:18:28.874620224 +0000 UTC m=+0.574141691 container remove b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:18:28 compute-0 systemd[1]: libpod-conmon-b9908fedd8ac1d1b4ba6c1bc7e3e0b3fb6b5967906d6e307ba3536615611b702.scope: Deactivated successfully.
Jan 31 08:18:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:29.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:29 compute-0 nova_compute[247704]: 2026-01-31 08:18:29.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:29 compute-0 podman[340244]: 2026-01-31 08:18:29.037007621 +0000 UTC m=+0.030094005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:18:29 compute-0 podman[340244]: 2026-01-31 08:18:29.141164928 +0000 UTC m=+0.134251282 container create 56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:18:29 compute-0 systemd[1]: Started libpod-conmon-56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819.scope.
Jan 31 08:18:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a11a3fcbd7edbb432f4666db924b2d15077126bbd579a728c116ae71a7cd668/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a11a3fcbd7edbb432f4666db924b2d15077126bbd579a728c116ae71a7cd668/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a11a3fcbd7edbb432f4666db924b2d15077126bbd579a728c116ae71a7cd668/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a11a3fcbd7edbb432f4666db924b2d15077126bbd579a728c116ae71a7cd668/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a11a3fcbd7edbb432f4666db924b2d15077126bbd579a728c116ae71a7cd668/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:29 compute-0 podman[340244]: 2026-01-31 08:18:29.318257494 +0000 UTC m=+0.311343898 container init 56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:18:29 compute-0 podman[340244]: 2026-01-31 08:18:29.327668302 +0000 UTC m=+0.320754666 container start 56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 31 08:18:29 compute-0 podman[340244]: 2026-01-31 08:18:29.354713082 +0000 UTC m=+0.347799536 container attach 56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:18:29 compute-0 nova_compute[247704]: 2026-01-31 08:18:29.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 387 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 331 op/s
Jan 31 08:18:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Jan 31 08:18:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Jan 31 08:18:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Jan 31 08:18:30 compute-0 happy_driscoll[340261]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:18:30 compute-0 happy_driscoll[340261]: --> relative data size: 1.0
Jan 31 08:18:30 compute-0 happy_driscoll[340261]: --> All data devices are unavailable
Jan 31 08:18:30 compute-0 systemd[1]: libpod-56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819.scope: Deactivated successfully.
Jan 31 08:18:30 compute-0 podman[340244]: 2026-01-31 08:18:30.210662128 +0000 UTC m=+1.203748532 container died 56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:18:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a11a3fcbd7edbb432f4666db924b2d15077126bbd579a728c116ae71a7cd668-merged.mount: Deactivated successfully.
Jan 31 08:18:30 compute-0 podman[340244]: 2026-01-31 08:18:30.27270477 +0000 UTC m=+1.265791124 container remove 56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_driscoll, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:18:30 compute-0 systemd[1]: libpod-conmon-56a20b03dcae332835c82e45051ed5bcaf261a325db6b73f52a33d5845210819.scope: Deactivated successfully.
Jan 31 08:18:30 compute-0 sudo[340139]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:30 compute-0 sudo[340290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:30 compute-0 sudo[340290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:30 compute-0 sudo[340290]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:30 compute-0 sudo[340315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:30 compute-0 sudo[340315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:30 compute-0 sudo[340315]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:30 compute-0 sudo[340340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:30 compute-0 sudo[340340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:30 compute-0 sudo[340340]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:30 compute-0 sudo[340365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:18:30 compute-0 sudo[340365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:30 compute-0 ceph-mon[74496]: pgmap v2512: 305 pgs: 305 active+clean; 387 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 331 op/s
Jan 31 08:18:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:30 compute-0 podman[340427]: 2026-01-31 08:18:30.86081528 +0000 UTC m=+0.056933038 container create 022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 08:18:30 compute-0 systemd[1]: Started libpod-conmon-022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e.scope.
Jan 31 08:18:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:30 compute-0 podman[340427]: 2026-01-31 08:18:30.837182565 +0000 UTC m=+0.033300383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:18:30 compute-0 podman[340427]: 2026-01-31 08:18:30.93754687 +0000 UTC m=+0.133664648 container init 022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:18:30 compute-0 podman[340427]: 2026-01-31 08:18:30.947544473 +0000 UTC m=+0.143662231 container start 022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:30 compute-0 podman[340427]: 2026-01-31 08:18:30.952196667 +0000 UTC m=+0.148314475 container attach 022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:18:30 compute-0 fervent_galileo[340443]: 167 167
Jan 31 08:18:30 compute-0 systemd[1]: libpod-022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e.scope: Deactivated successfully.
Jan 31 08:18:30 compute-0 podman[340427]: 2026-01-31 08:18:30.954120624 +0000 UTC m=+0.150238392 container died 022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:18:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a879779e3f081d4d974830afb79286ec670140bbcc6c6c20ce0aefd5ec0ea24f-merged.mount: Deactivated successfully.
Jan 31 08:18:30 compute-0 podman[340427]: 2026-01-31 08:18:30.991429613 +0000 UTC m=+0.187547351 container remove 022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:18:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:31.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:31 compute-0 systemd[1]: libpod-conmon-022d672f2ef7b5c1658382fb0e2f4917543cd9b8ee387de90afd1ed0f145286e.scope: Deactivated successfully.
Jan 31 08:18:31 compute-0 podman[340448]: 2026-01-31 08:18:31.097236521 +0000 UTC m=+0.107422468 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 31 08:18:31 compute-0 podman[340491]: 2026-01-31 08:18:31.176054082 +0000 UTC m=+0.046458834 container create 64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 31 08:18:31 compute-0 systemd[1]: Started libpod-conmon-64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553.scope.
Jan 31 08:18:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304f7ce7acd088db0ccd255da151a70b3f31f932d6f9ce094354d10b91eec914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304f7ce7acd088db0ccd255da151a70b3f31f932d6f9ce094354d10b91eec914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304f7ce7acd088db0ccd255da151a70b3f31f932d6f9ce094354d10b91eec914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/304f7ce7acd088db0ccd255da151a70b3f31f932d6f9ce094354d10b91eec914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:31 compute-0 podman[340491]: 2026-01-31 08:18:31.249191534 +0000 UTC m=+0.119596306 container init 64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:18:31 compute-0 podman[340491]: 2026-01-31 08:18:31.15547921 +0000 UTC m=+0.025884012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:18:31 compute-0 podman[340491]: 2026-01-31 08:18:31.260848457 +0000 UTC m=+0.131253209 container start 64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_snyder, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:18:31 compute-0 podman[340491]: 2026-01-31 08:18:31.264811434 +0000 UTC m=+0.135216196 container attach 64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_snyder, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:31.499 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:18:31 compute-0 nova_compute[247704]: 2026-01-31 08:18:31.501 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:31.503 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:18:31 compute-0 nova_compute[247704]: 2026-01-31 08:18:31.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:31 compute-0 ceph-mon[74496]: osdmap e326: 3 total, 3 up, 3 in
Jan 31 08:18:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 167 KiB/s rd, 578 KiB/s wr, 234 op/s
Jan 31 08:18:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:31 compute-0 exciting_snyder[340508]: {
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:     "0": [
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:         {
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "devices": [
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "/dev/loop3"
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             ],
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "lv_name": "ceph_lv0",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "lv_size": "7511998464",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "name": "ceph_lv0",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "tags": {
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.cluster_name": "ceph",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.crush_device_class": "",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.encrypted": "0",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.osd_id": "0",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.type": "block",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:                 "ceph.vdo": "0"
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             },
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "type": "block",
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:             "vg_name": "ceph_vg0"
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:         }
Jan 31 08:18:31 compute-0 exciting_snyder[340508]:     ]
Jan 31 08:18:31 compute-0 exciting_snyder[340508]: }
Jan 31 08:18:31 compute-0 systemd[1]: libpod-64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553.scope: Deactivated successfully.
Jan 31 08:18:31 compute-0 podman[340491]: 2026-01-31 08:18:31.992191567 +0000 UTC m=+0.862596329 container died 64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_snyder, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-304f7ce7acd088db0ccd255da151a70b3f31f932d6f9ce094354d10b91eec914-merged.mount: Deactivated successfully.
Jan 31 08:18:32 compute-0 podman[340491]: 2026-01-31 08:18:32.064282574 +0000 UTC m=+0.934687356 container remove 64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:32 compute-0 systemd[1]: libpod-conmon-64d94d7b34b85e0ea21f8314eea7cef617f57a71abf242f161a2b4021367a553.scope: Deactivated successfully.
Jan 31 08:18:32 compute-0 sudo[340365]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:32 compute-0 sudo[340531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:32 compute-0 sudo[340531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:32 compute-0 sudo[340531]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:32 compute-0 sudo[340556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:32 compute-0 sudo[340556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:32 compute-0 sudo[340556]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:32 compute-0 sudo[340581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:32 compute-0 sudo[340581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:32 compute-0 sudo[340581]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:32 compute-0 sudo[340606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:18:32 compute-0 sudo[340606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:32 compute-0 nova_compute[247704]: 2026-01-31 08:18:32.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:32 compute-0 nova_compute[247704]: 2026-01-31 08:18:32.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:32 compute-0 ceph-mon[74496]: pgmap v2514: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 167 KiB/s rd, 578 KiB/s wr, 234 op/s
Jan 31 08:18:32 compute-0 podman[340672]: 2026-01-31 08:18:32.868162322 +0000 UTC m=+0.102457858 container create b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:18:32 compute-0 podman[340672]: 2026-01-31 08:18:32.799326025 +0000 UTC m=+0.033621571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:18:32 compute-0 systemd[1]: Started libpod-conmon-b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360.scope.
Jan 31 08:18:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:32 compute-0 podman[340672]: 2026-01-31 08:18:32.968424125 +0000 UTC m=+0.202719721 container init b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:32 compute-0 podman[340672]: 2026-01-31 08:18:32.97601394 +0000 UTC m=+0.210309466 container start b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 08:18:32 compute-0 podman[340672]: 2026-01-31 08:18:32.980319654 +0000 UTC m=+0.214615630 container attach b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:32 compute-0 strange_blackwell[340688]: 167 167
Jan 31 08:18:32 compute-0 systemd[1]: libpod-b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360.scope: Deactivated successfully.
Jan 31 08:18:32 compute-0 conmon[340688]: conmon b67e1d05a7b192d96eb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360.scope/container/memory.events
Jan 31 08:18:32 compute-0 podman[340672]: 2026-01-31 08:18:32.983717068 +0000 UTC m=+0.218012614 container died b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:18:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:33.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c49df4858934a346b1a22572fc7ebda21370c626b2a3d82cac76804df4d3bd50-merged.mount: Deactivated successfully.
Jan 31 08:18:33 compute-0 podman[340672]: 2026-01-31 08:18:33.035343016 +0000 UTC m=+0.269638562 container remove b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_blackwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:18:33 compute-0 systemd[1]: libpod-conmon-b67e1d05a7b192d96eb41351a0bb0f31b8afe1e484acd2e09a6ba9395bb81360.scope: Deactivated successfully.
Jan 31 08:18:33 compute-0 podman[340712]: 2026-01-31 08:18:33.225822327 +0000 UTC m=+0.054540760 container create 6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:18:33 compute-0 systemd[1]: Started libpod-conmon-6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315.scope.
Jan 31 08:18:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:33 compute-0 podman[340712]: 2026-01-31 08:18:33.202842567 +0000 UTC m=+0.031560990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f5024dc830f7acd8656e17f0d9d4721f734297fe7e27401aa265911e9050e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f5024dc830f7acd8656e17f0d9d4721f734297fe7e27401aa265911e9050e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f5024dc830f7acd8656e17f0d9d4721f734297fe7e27401aa265911e9050e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f5024dc830f7acd8656e17f0d9d4721f734297fe7e27401aa265911e9050e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:33 compute-0 podman[340712]: 2026-01-31 08:18:33.3265344 +0000 UTC m=+0.155252803 container init 6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:18:33 compute-0 podman[340712]: 2026-01-31 08:18:33.334043324 +0000 UTC m=+0.162761727 container start 6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_snyder, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:33 compute-0 podman[340712]: 2026-01-31 08:18:33.337782535 +0000 UTC m=+0.166500938 container attach 6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:18:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 347 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 278 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Jan 31 08:18:34 compute-0 nova_compute[247704]: 2026-01-31 08:18:34.053 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:34 compute-0 sad_snyder[340729]: {
Jan 31 08:18:34 compute-0 sad_snyder[340729]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:18:34 compute-0 sad_snyder[340729]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:18:34 compute-0 sad_snyder[340729]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:18:34 compute-0 sad_snyder[340729]:         "osd_id": 0,
Jan 31 08:18:34 compute-0 sad_snyder[340729]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:18:34 compute-0 sad_snyder[340729]:         "type": "bluestore"
Jan 31 08:18:34 compute-0 sad_snyder[340729]:     }
Jan 31 08:18:34 compute-0 sad_snyder[340729]: }
Jan 31 08:18:34 compute-0 systemd[1]: libpod-6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315.scope: Deactivated successfully.
Jan 31 08:18:34 compute-0 podman[340712]: 2026-01-31 08:18:34.146801577 +0000 UTC m=+0.975520010 container died 6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_snyder, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c2f5024dc830f7acd8656e17f0d9d4721f734297fe7e27401aa265911e9050e-merged.mount: Deactivated successfully.
Jan 31 08:18:34 compute-0 ceph-mon[74496]: pgmap v2515: 305 pgs: 305 active+clean; 347 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 278 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Jan 31 08:18:34 compute-0 podman[340712]: 2026-01-31 08:18:34.303983998 +0000 UTC m=+1.132702401 container remove 6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 08:18:34 compute-0 systemd[1]: libpod-conmon-6a6450af3894f3fedb52c93362ddf0790df0b8b575a8e8f1d119302f1643e315.scope: Deactivated successfully.
Jan 31 08:18:34 compute-0 sudo[340606]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:18:34 compute-0 ovn_controller[149457]: 2026-01-31T08:18:34Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:b5:29 10.100.0.3
Jan 31 08:18:34 compute-0 ovn_controller[149457]: 2026-01-31T08:18:34Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:b5:29 10.100.0.3
Jan 31 08:18:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:34 compute-0 nova_compute[247704]: 2026-01-31 08:18:34.708 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:34 compute-0 nova_compute[247704]: 2026-01-31 08:18:34.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:18:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7a17690d-3f2f-45ff-a069-8766d574482c does not exist
Jan 31 08:18:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:35.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 86b80aaa-6bcd-4b45-b48c-ffdafce56594 does not exist
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 375d60de-519a-4a91-a80d-e77029f653a1 does not exist
Jan 31 08:18:35 compute-0 sudo[340763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:35 compute-0 sudo[340763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:35 compute-0 sudo[340763]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:35 compute-0 sudo[340788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:18:35 compute-0 sudo[340788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:35 compute-0 sudo[340788]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004359361913735343 of space, bias 1.0, pg target 1.3078085741206031 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0017037589568211428 of space, bias 1.0, pg target 0.5094239280895217 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003689236111111111 of space, bias 1.0, pg target 1.103081597222222 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:18:35 compute-0 nova_compute[247704]: 2026-01-31 08:18:35.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:35 compute-0 nova_compute[247704]: 2026-01-31 08:18:35.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:18:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:18:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.9 MiB/s wr, 151 op/s
Jan 31 08:18:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:36 compute-0 ceph-mon[74496]: pgmap v2516: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.9 MiB/s wr, 151 op/s
Jan 31 08:18:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:37.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 2.5 MiB/s wr, 139 op/s
Jan 31 08:18:38 compute-0 ceph-mon[74496]: pgmap v2517: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 2.5 MiB/s wr, 139 op/s
Jan 31 08:18:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:38.506 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:38 compute-0 nova_compute[247704]: 2026-01-31 08:18:38.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:38 compute-0 nova_compute[247704]: 2026-01-31 08:18:38.607 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:38 compute-0 nova_compute[247704]: 2026-01-31 08:18:38.608 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:38 compute-0 nova_compute[247704]: 2026-01-31 08:18:38.608 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:38 compute-0 nova_compute[247704]: 2026-01-31 08:18:38.609 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:18:38 compute-0 nova_compute[247704]: 2026-01-31 08:18:38.609 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:38.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:39.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.058 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:18:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544434311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.091 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.250 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.250 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.390 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.391 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4165MB free_disk=20.90057373046875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.391 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.392 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2544434311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.754 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 2.6 MiB/s wr, 159 op/s
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.970 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 83252cb5-25d7-40e3-823d-02d1d0eb73f1 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.971 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:18:39 compute-0 nova_compute[247704]: 2026-01-31 08:18:39.971 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:18:40 compute-0 nova_compute[247704]: 2026-01-31 08:18:40.062 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:18:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:18:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3198695922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:40 compute-0 nova_compute[247704]: 2026-01-31 08:18:40.517 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:18:40 compute-0 nova_compute[247704]: 2026-01-31 08:18:40.524 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:18:40 compute-0 ceph-mon[74496]: pgmap v2518: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 2.6 MiB/s wr, 159 op/s
Jan 31 08:18:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1463827182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3198695922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:40 compute-0 nova_compute[247704]: 2026-01-31 08:18:40.764 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:18:40 compute-0 nova_compute[247704]: 2026-01-31 08:18:40.868 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:18:40 compute-0 nova_compute[247704]: 2026-01-31 08:18:40.869 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:41 compute-0 nova_compute[247704]: 2026-01-31 08:18:41.460 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:41 compute-0 nova_compute[247704]: 2026-01-31 08:18:41.870 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:41 compute-0 nova_compute[247704]: 2026-01-31 08:18:41.870 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:41 compute-0 nova_compute[247704]: 2026-01-31 08:18:41.870 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:18:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 248 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:18:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Jan 31 08:18:42 compute-0 nova_compute[247704]: 2026-01-31 08:18:42.626 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:18:42 compute-0 nova_compute[247704]: 2026-01-31 08:18:42.627 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:18:42 compute-0 nova_compute[247704]: 2026-01-31 08:18:42.627 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:18:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:42.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:43.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Jan 31 08:18:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Jan 31 08:18:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2614889670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2356523471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 248 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 249 KiB/s rd, 1.0 MiB/s wr, 104 op/s
Jan 31 08:18:44 compute-0 nova_compute[247704]: 2026-01-31 08:18:44.060 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Jan 31 08:18:44 compute-0 ceph-mon[74496]: pgmap v2519: 305 pgs: 305 active+clean; 248 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:18:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/441361465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3826855599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:44 compute-0 ceph-mon[74496]: osdmap e327: 3 total, 3 up, 3 in
Jan 31 08:18:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3652022244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:44 compute-0 ceph-mon[74496]: pgmap v2521: 305 pgs: 305 active+clean; 248 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 249 KiB/s rd, 1.0 MiB/s wr, 104 op/s
Jan 31 08:18:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Jan 31 08:18:44 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Jan 31 08:18:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:18:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:18:44 compute-0 nova_compute[247704]: 2026-01-31 08:18:44.788 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:44 compute-0 nova_compute[247704]: 2026-01-31 08:18:44.917 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updating instance_info_cache with network_info: [{"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:18:44 compute-0 podman[340863]: 2026-01-31 08:18:44.941430964 +0000 UTC m=+0.091223974 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 08:18:44 compute-0 nova_compute[247704]: 2026-01-31 08:18:44.978 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-83252cb5-25d7-40e3-823d-02d1d0eb73f1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:18:44 compute-0 nova_compute[247704]: 2026-01-31 08:18:44.979 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:18:44 compute-0 nova_compute[247704]: 2026-01-31 08:18:44.979 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:44 compute-0 nova_compute[247704]: 2026-01-31 08:18:44.980 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:45.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:45 compute-0 ceph-mon[74496]: osdmap e328: 3 total, 3 up, 3 in
Jan 31 08:18:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 218 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 69 KiB/s wr, 55 op/s
Jan 31 08:18:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:46.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:46 compute-0 ceph-mon[74496]: pgmap v2523: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 218 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 69 KiB/s wr, 55 op/s
Jan 31 08:18:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:18:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:47.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:18:47 compute-0 sudo[340885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:47 compute-0 sudo[340885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:47 compute-0 sudo[340885]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:47 compute-0 sudo[340910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:18:47 compute-0 sudo[340910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:47 compute-0 sudo[340910]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 185 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 24 KiB/s wr, 38 op/s
Jan 31 08:18:48 compute-0 ceph-mon[74496]: pgmap v2524: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 185 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 24 KiB/s wr, 38 op/s
Jan 31 08:18:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:48.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:49.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:49 compute-0 nova_compute[247704]: 2026-01-31 08:18:49.062 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:49 compute-0 nova_compute[247704]: 2026-01-31 08:18:49.792 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 144 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 21 KiB/s wr, 53 op/s
Jan 31 08:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:50 compute-0 ceph-mon[74496]: pgmap v2525: 305 pgs: 305 active+clean; 144 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 21 KiB/s wr, 53 op/s
Jan 31 08:18:50 compute-0 nova_compute[247704]: 2026-01-31 08:18:50.537 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:18:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:51.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:18:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 125 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 64 op/s
Jan 31 08:18:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Jan 31 08:18:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Jan 31 08:18:52 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Jan 31 08:18:52 compute-0 ceph-mon[74496]: pgmap v2526: 305 pgs: 305 active+clean; 125 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 19 KiB/s wr, 64 op/s
Jan 31 08:18:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:52.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:53.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:53 compute-0 nova_compute[247704]: 2026-01-31 08:18:53.665 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:18:53 compute-0 ceph-mon[74496]: osdmap e329: 3 total, 3 up, 3 in
Jan 31 08:18:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3720147076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:18:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3720147076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:18:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/467398597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:18:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 18 KiB/s wr, 60 op/s
Jan 31 08:18:54 compute-0 nova_compute[247704]: 2026-01-31 08:18:54.066 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:54 compute-0 nova_compute[247704]: 2026-01-31 08:18:54.794 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:54 compute-0 ceph-mon[74496]: pgmap v2528: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 18 KiB/s wr, 60 op/s
Jan 31 08:18:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:18:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:55.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:18:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 3.4 KiB/s wr, 49 op/s
Jan 31 08:18:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:18:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:18:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:18:56 compute-0 ceph-mon[74496]: pgmap v2529: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 3.4 KiB/s wr, 49 op/s
Jan 31 08:18:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:57.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.437 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.439 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.439 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.439 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.440 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.442 247708 INFO nova.compute.manager [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Terminating instance
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.444 247708 DEBUG nova.compute.manager [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:18:57 compute-0 kernel: tap1cf8d8b8-8d (unregistering): left promiscuous mode
Jan 31 08:18:57 compute-0 NetworkManager[49108]: <info>  [1769847537.5248] device (tap1cf8d8b8-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.535 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:57 compute-0 ovn_controller[149457]: 2026-01-31T08:18:57Z|00592|binding|INFO|Releasing lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 from this chassis (sb_readonly=0)
Jan 31 08:18:57 compute-0 ovn_controller[149457]: 2026-01-31T08:18:57Z|00593|binding|INFO|Setting lport 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 down in Southbound
Jan 31 08:18:57 compute-0 ovn_controller[149457]: 2026-01-31T08:18:57Z|00594|binding|INFO|Removing iface tap1cf8d8b8-8d ovn-installed in OVS
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.547 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:57 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Jan 31 08:18:57 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000008b.scope: Consumed 15.087s CPU time.
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.591 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:b5:29 10.100.0.3'], port_security=['fa:16:3e:85:b5:29 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '83252cb5-25d7-40e3-823d-02d1d0eb73f1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c26cb596-7b93-4afd-9b50-133e9afd768d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bda3ebf6541d46309fc9b2ce089dd857', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'bc88ce0b-3f35-4088-82b2-3c421cb8a010', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06815ca2-b7af-4c41-816c-402a584fa466, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.592 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 in datapath c26cb596-7b93-4afd-9b50-133e9afd768d unbound from our chassis
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.594 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c26cb596-7b93-4afd-9b50-133e9afd768d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:18:57 compute-0 systemd-machined[214448]: Machine qemu-61-instance-0000008b terminated.
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.595 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1fff16e1-8cb6-456e-9fe6-ce487a91716a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.596 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d namespace which is not needed anymore
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.669 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.674 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.688 247708 INFO nova.virt.libvirt.driver [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Instance destroyed successfully.
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.688 247708 DEBUG nova.objects.instance [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lazy-loading 'resources' on Instance uuid 83252cb5-25d7-40e3-823d-02d1d0eb73f1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:18:57 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[339851]: [NOTICE]   (339863) : haproxy version is 2.8.14-c23fe91
Jan 31 08:18:57 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[339851]: [NOTICE]   (339863) : path to executable is /usr/sbin/haproxy
Jan 31 08:18:57 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[339851]: [WARNING]  (339863) : Exiting Master process...
Jan 31 08:18:57 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[339851]: [WARNING]  (339863) : Exiting Master process...
Jan 31 08:18:57 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[339851]: [ALERT]    (339863) : Current worker (339865) exited with code 143 (Terminated)
Jan 31 08:18:57 compute-0 neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d[339851]: [WARNING]  (339863) : All workers exited. Exiting... (0)
Jan 31 08:18:57 compute-0 systemd[1]: libpod-695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568.scope: Deactivated successfully.
Jan 31 08:18:57 compute-0 podman[340966]: 2026-01-31 08:18:57.744073265 +0000 UTC m=+0.062094055 container died 695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.759 247708 DEBUG nova.virt.libvirt.vif [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:17:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-934525208',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-20534748',id=139,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIgTBDp6a0SyMFs7OU2ywUkjN63r2N0PxPaRE6bSDGEX+Eq/ObkyuaXGME0SMKDkMYoPYqAUjzxLW21QhzrJW+bRoGYJmTS4PrdUXkRuBCox1bDVqwWhiTbfXfC+MijeFQ==',key_name='tempest-keypair-1991810396',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:18:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bda3ebf6541d46309fc9b2ce089dd857',ramdisk_id='',reservation_id='r-canzs8i7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='40cf2ff3-f7ff-4843-b4ab-b7dcc843006f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio
',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-284184156',owner_user_name='tempest-ServerActionsV293TestJSON-284184156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:18:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90313c82677c4144953c58efc0e13c3e',uuid=83252cb5-25d7-40e3-823d-02d1d0eb73f1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.760 247708 DEBUG nova.network.os_vif_util [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converting VIF {"id": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "address": "fa:16:3e:85:b5:29", "network": {"id": "c26cb596-7b93-4afd-9b50-133e9afd768d", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-1600458559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bda3ebf6541d46309fc9b2ce089dd857", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cf8d8b8-8d", "ovs_interfaceid": "1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.762 247708 DEBUG nova.network.os_vif_util [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.762 247708 DEBUG os_vif [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.765 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.766 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cf8d8b8-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.768 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568-userdata-shm.mount: Deactivated successfully.
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.776 247708 INFO os_vif [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:b5:29,bridge_name='br-int',has_traffic_filtering=True,id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7,network=Network(c26cb596-7b93-4afd-9b50-133e9afd768d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cf8d8b8-8d')
Jan 31 08:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4c341013206974a396c3219bd170547826a1918df91ada6a109beba08b2c4ad-merged.mount: Deactivated successfully.
Jan 31 08:18:57 compute-0 podman[340966]: 2026-01-31 08:18:57.787654217 +0000 UTC m=+0.105656206 container cleanup 695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 08:18:57 compute-0 systemd[1]: libpod-conmon-695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568.scope: Deactivated successfully.
Jan 31 08:18:57 compute-0 podman[341019]: 2026-01-31 08:18:57.854761502 +0000 UTC m=+0.046974936 container remove 695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.859 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4b130cc7-e98c-4317-93de-e664766eaa91]: (4, ('Sat Jan 31 08:18:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d (695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568)\n695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568\nSat Jan 31 08:18:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d (695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568)\n695f9b4ded841374420820fa0163df96b249a11098ef9cca5bde423695f6f568\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.861 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[60252631-56c2-4e91-a846-f4d6f41e26f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.862 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc26cb596-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.863 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:57 compute-0 kernel: tapc26cb596-70: left promiscuous mode
Jan 31 08:18:57 compute-0 nova_compute[247704]: 2026-01-31 08:18:57.871 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.871 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[20a51f2e-6700-40df-974f-013a0f92ca4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.888 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[92ba9d29-8572-4bb2-83bd-5fe6ef4f751f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.892 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5386973b-b160-42a1-8197-1ac1158943f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.6 KiB/s wr, 34 op/s
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.914 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a4332e88-7e9d-40a2-a398-ce581ad73e4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 773960, 'reachable_time': 26523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341038, 'error': None, 'target': 'ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:57 compute-0 systemd[1]: run-netns-ovnmeta\x2dc26cb596\x2d7b93\x2d4afd\x2d9b50\x2d133e9afd768d.mount: Deactivated successfully.
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.919 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c26cb596-7b93-4afd-9b50-133e9afd768d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:18:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:18:57.919 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[a73f8dcd-a661-43c9-b1d7-394a1ee00334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:18:58 compute-0 nova_compute[247704]: 2026-01-31 08:18:58.058 247708 INFO nova.virt.libvirt.driver [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Deleting instance files /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1_del
Jan 31 08:18:58 compute-0 nova_compute[247704]: 2026-01-31 08:18:58.059 247708 INFO nova.virt.libvirt.driver [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Deletion of /var/lib/nova/instances/83252cb5-25d7-40e3-823d-02d1d0eb73f1_del complete
Jan 31 08:18:58 compute-0 nova_compute[247704]: 2026-01-31 08:18:58.455 247708 INFO nova.compute.manager [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Took 1.01 seconds to destroy the instance on the hypervisor.
Jan 31 08:18:58 compute-0 nova_compute[247704]: 2026-01-31 08:18:58.455 247708 DEBUG oslo.service.loopingcall [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:18:58 compute-0 nova_compute[247704]: 2026-01-31 08:18:58.456 247708 DEBUG nova.compute.manager [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:18:58 compute-0 nova_compute[247704]: 2026-01-31 08:18:58.456 247708 DEBUG nova.network.neutron [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:18:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:58.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:58 compute-0 ceph-mon[74496]: pgmap v2530: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.6 KiB/s wr, 34 op/s
Jan 31 08:18:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:18:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:18:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:59.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:18:59 compute-0 nova_compute[247704]: 2026-01-31 08:18:59.837 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:18:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 31 08:19:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3710704562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:00 compute-0 nova_compute[247704]: 2026-01-31 08:19:00.313 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:00.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:00 compute-0 nova_compute[247704]: 2026-01-31 08:19:00.745 247708 DEBUG nova.network.neutron [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:00 compute-0 nova_compute[247704]: 2026-01-31 08:19:00.885 247708 DEBUG nova.compute.manager [req-c570d49c-3bc4-458e-9381-dc3e5369c04b req-f9807e22-95a4-4a56-a01d-864ab2efd37b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Received event network-vif-deleted-1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:00 compute-0 nova_compute[247704]: 2026-01-31 08:19:00.885 247708 INFO nova.compute.manager [req-c570d49c-3bc4-458e-9381-dc3e5369c04b req-f9807e22-95a4-4a56-a01d-864ab2efd37b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Neutron deleted interface 1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7; detaching it from the instance and deleting it from the info cache
Jan 31 08:19:00 compute-0 nova_compute[247704]: 2026-01-31 08:19:00.885 247708 DEBUG nova.network.neutron [req-c570d49c-3bc4-458e-9381-dc3e5369c04b req-f9807e22-95a4-4a56-a01d-864ab2efd37b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:19:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:01.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:19:01 compute-0 ceph-mon[74496]: pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 31 08:19:01 compute-0 nova_compute[247704]: 2026-01-31 08:19:01.207 247708 INFO nova.compute.manager [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Took 2.75 seconds to deallocate network for instance.
Jan 31 08:19:01 compute-0 nova_compute[247704]: 2026-01-31 08:19:01.212 247708 DEBUG nova.compute.manager [req-c570d49c-3bc4-458e-9381-dc3e5369c04b req-f9807e22-95a4-4a56-a01d-864ab2efd37b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Detach interface failed, port_id=1cf8d8b8-8da1-4815-936e-21ecd9a3d8f7, reason: Instance 83252cb5-25d7-40e3-823d-02d1d0eb73f1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:19:01 compute-0 nova_compute[247704]: 2026-01-31 08:19:01.655 247708 INFO nova.compute.manager [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Took 0.45 seconds to detach 1 volumes for instance.
Jan 31 08:19:01 compute-0 nova_compute[247704]: 2026-01-31 08:19:01.657 247708 DEBUG nova.compute.manager [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Deleting volume: 28bdb463-7d0a-42f5-8392-3acd847cfe3e _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 31 08:19:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 16 op/s
Jan 31 08:19:01 compute-0 podman[341042]: 2026-01-31 08:19:01.923880562 +0000 UTC m=+0.098490701 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:19:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:02 compute-0 ceph-mon[74496]: pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 16 op/s
Jan 31 08:19:02 compute-0 nova_compute[247704]: 2026-01-31 08:19:02.369 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:02 compute-0 nova_compute[247704]: 2026-01-31 08:19:02.370 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:02 compute-0 nova_compute[247704]: 2026-01-31 08:19:02.450 247708 DEBUG oslo_concurrency.processutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:19:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:02.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:19:02 compute-0 nova_compute[247704]: 2026-01-31 08:19:02.769 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/681996875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:02 compute-0 nova_compute[247704]: 2026-01-31 08:19:02.871 247708 DEBUG oslo_concurrency.processutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:02 compute-0 nova_compute[247704]: 2026-01-31 08:19:02.878 247708 DEBUG nova.compute.provider_tree [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:19:02 compute-0 nova_compute[247704]: 2026-01-31 08:19:02.952 247708 DEBUG nova.scheduler.client.report [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:19:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:03.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:03 compute-0 nova_compute[247704]: 2026-01-31 08:19:03.103 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Jan 31 08:19:03 compute-0 nova_compute[247704]: 2026-01-31 08:19:03.240 247708 INFO nova.scheduler.client.report [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Deleted allocations for instance 83252cb5-25d7-40e3-823d-02d1d0eb73f1
Jan 31 08:19:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Jan 31 08:19:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Jan 31 08:19:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/681996875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1771961321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:19:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1771961321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:19:03 compute-0 nova_compute[247704]: 2026-01-31 08:19:03.703 247708 DEBUG oslo_concurrency.lockutils [None req-9233a1cc-01f6-496c-bd3c-dd76d05b56aa 90313c82677c4144953c58efc0e13c3e bda3ebf6541d46309fc9b2ce089dd857 - - default default] Lock "83252cb5-25d7-40e3-823d-02d1d0eb73f1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 165 KiB/s wr, 39 op/s
Jan 31 08:19:04 compute-0 nova_compute[247704]: 2026-01-31 08:19:04.189 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:04 compute-0 ceph-mon[74496]: osdmap e330: 3 total, 3 up, 3 in
Jan 31 08:19:04 compute-0 ceph-mon[74496]: pgmap v2534: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 165 KiB/s wr, 39 op/s
Jan 31 08:19:04 compute-0 nova_compute[247704]: 2026-01-31 08:19:04.281 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:04.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:04 compute-0 nova_compute[247704]: 2026-01-31 08:19:04.875 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:19:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:05.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:19:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 119 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 4.2 MiB/s wr, 82 op/s
Jan 31 08:19:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:07 compute-0 ceph-mon[74496]: pgmap v2535: 305 pgs: 305 active+clean; 119 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 4.2 MiB/s wr, 82 op/s
Jan 31 08:19:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4218099084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:07.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:07 compute-0 sudo[341095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:07 compute-0 sudo[341095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:07 compute-0 sudo[341095]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:07 compute-0 sudo[341120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:07 compute-0 sudo[341120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:07 compute-0 sudo[341120]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:07 compute-0 nova_compute[247704]: 2026-01-31 08:19:07.773 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 4.2 MiB/s wr, 87 op/s
Jan 31 08:19:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1216326437' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:08 compute-0 ceph-mon[74496]: pgmap v2536: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 4.2 MiB/s wr, 87 op/s
Jan 31 08:19:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:08.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:09.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:09 compute-0 nova_compute[247704]: 2026-01-31 08:19:09.783 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:09 compute-0 nova_compute[247704]: 2026-01-31 08:19:09.783 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:09 compute-0 nova_compute[247704]: 2026-01-31 08:19:09.878 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.2 MiB/s wr, 78 op/s
Jan 31 08:19:09 compute-0 nova_compute[247704]: 2026-01-31 08:19:09.929 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:19:10 compute-0 nova_compute[247704]: 2026-01-31 08:19:10.197 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:10 compute-0 nova_compute[247704]: 2026-01-31 08:19:10.198 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:10 compute-0 nova_compute[247704]: 2026-01-31 08:19:10.208 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:19:10 compute-0 nova_compute[247704]: 2026-01-31 08:19:10.208 247708 INFO nova.compute.claims [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:19:10 compute-0 nova_compute[247704]: 2026-01-31 08:19:10.517 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:10.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:10 compute-0 ceph-mon[74496]: pgmap v2537: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.2 MiB/s wr, 78 op/s
Jan 31 08:19:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012524099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.014 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.021 247708 DEBUG nova.compute.provider_tree [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:19:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:11.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.085 247708 DEBUG nova.scheduler.client.report [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.149 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.150 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:11.191 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:11.192 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:11.194 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.268 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.269 247708 DEBUG nova.network.neutron [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.380 247708 INFO nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.545 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:19:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.2 MiB/s wr, 76 op/s
Jan 31 08:19:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.967 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.969 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:19:11 compute-0 nova_compute[247704]: 2026-01-31 08:19:11.970 247708 INFO nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Creating image(s)
Jan 31 08:19:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1012524099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.022 247708 DEBUG nova.storage.rbd_utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image 2050d0a7-d773-4467-886b-af07b4efa0d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.067 247708 DEBUG nova.storage.rbd_utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image 2050d0a7-d773-4467-886b-af07b4efa0d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.117 247708 DEBUG nova.storage.rbd_utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image 2050d0a7-d773-4467-886b-af07b4efa0d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.122 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.123 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.130 247708 DEBUG nova.policy [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f0be9090fdf49d2ac15246a0a820d3f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '134c066ac92844ff853b216870fa8eed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.686 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847537.6849167, 83252cb5-25d7-40e3-823d-02d1d0eb73f1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.687 247708 INFO nova.compute.manager [-] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] VM Stopped (Lifecycle Event)
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.723 247708 DEBUG nova.compute.manager [None req-1b3d8b1b-6872-4186-b795-ac6968f0533d - - - - - -] [instance: 83252cb5-25d7-40e3-823d-02d1d0eb73f1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:12.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.763 247708 DEBUG nova.virt.libvirt.imagebackend [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/f053e722-a943-4ece-ad77-a071fe4c9f59/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/f053e722-a943-4ece-ad77-a071fe4c9f59/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 08:19:12 compute-0 nova_compute[247704]: 2026-01-31 08:19:12.777 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:12 compute-0 ceph-mon[74496]: pgmap v2538: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 4.2 MiB/s wr, 76 op/s
Jan 31 08:19:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:13 compute-0 nova_compute[247704]: 2026-01-31 08:19:13.407 247708 DEBUG nova.network.neutron [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Successfully created port: 3d5b4f77-1672-4b27-83d1-741ef4fda685 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:19:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 3.8 MiB/s wr, 58 op/s
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.342 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.429 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9.part --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.430 247708 DEBUG nova.virt.images [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] f053e722-a943-4ece-ad77-a071fe4c9f59 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.432 247708 DEBUG nova.privsep.utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.433 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9.part /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.732 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9.part /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9.converted" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.735 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.828 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9.converted --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.830 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.866 247708 DEBUG nova.storage.rbd_utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image 2050d0a7-d773-4467-886b-af07b4efa0d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.872 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9 2050d0a7-d773-4467-886b-af07b4efa0d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:14 compute-0 nova_compute[247704]: 2026-01-31 08:19:14.923 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:15 compute-0 ceph-mon[74496]: pgmap v2539: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 3.8 MiB/s wr, 58 op/s
Jan 31 08:19:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:15.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.212 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9 2050d0a7-d773-4467-886b-af07b4efa0d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.324 247708 DEBUG nova.storage.rbd_utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] resizing rbd image 2050d0a7-d773-4467-886b-af07b4efa0d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.414 247708 DEBUG nova.network.neutron [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Successfully updated port: 3d5b4f77-1672-4b27-83d1-741ef4fda685 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.446 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.447 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.448 247708 DEBUG nova.network.neutron [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.503 247708 DEBUG nova.objects.instance [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'migration_context' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.524 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.525 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Ensure instance console log exists: /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.526 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.526 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.527 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:15 compute-0 nova_compute[247704]: 2026-01-31 08:19:15.816 247708 DEBUG nova.network.neutron [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:19:15 compute-0 podman[341351]: 2026-01-31 08:19:15.896964585 +0000 UTC m=+0.065828205 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:19:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 109 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.4 MiB/s wr, 118 op/s
Jan 31 08:19:16 compute-0 ceph-mon[74496]: pgmap v2540: 305 pgs: 305 active+clean; 109 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.4 MiB/s wr, 118 op/s
Jan 31 08:19:16 compute-0 nova_compute[247704]: 2026-01-31 08:19:16.481 247708 DEBUG nova.compute.manager [req-d842d2b8-6203-42db-9af0-f69274298e06 req-f7c6caed-2272-48b9-86bb-2043cc12149c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:16 compute-0 nova_compute[247704]: 2026-01-31 08:19:16.482 247708 DEBUG nova.compute.manager [req-d842d2b8-6203-42db-9af0-f69274298e06 req-f7c6caed-2272-48b9-86bb-2043cc12149c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing instance network info cache due to event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:19:16 compute-0 nova_compute[247704]: 2026-01-31 08:19:16.482 247708 DEBUG oslo_concurrency.lockutils [req-d842d2b8-6203-42db-9af0-f69274298e06 req-f7c6caed-2272-48b9-86bb-2043cc12149c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:16.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:17.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:17 compute-0 nova_compute[247704]: 2026-01-31 08:19:17.781 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 898 KiB/s wr, 99 op/s
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.057 247708 DEBUG nova.network.neutron [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.089 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.090 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Instance network_info: |[{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.090 247708 DEBUG oslo_concurrency.lockutils [req-d842d2b8-6203-42db-9af0-f69274298e06 req-f7c6caed-2272-48b9-86bb-2043cc12149c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.091 247708 DEBUG nova.network.neutron [req-d842d2b8-6203-42db-9af0-f69274298e06 req-f7c6caed-2272-48b9-86bb-2043cc12149c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.094 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Start _get_guest_xml network_info=[{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T08:19:01Z,direct_url=<?>,disk_format='qcow2',id=f053e722-a943-4ece-ad77-a071fe4c9f59,min_disk=0,min_ram=0,name='tempest-scenario-img--811139934',owner='134c066ac92844ff853b216870fa8eed',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T08:19:03Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': 'f053e722-a943-4ece-ad77-a071fe4c9f59'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.100 247708 WARNING nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.114 247708 DEBUG nova.virt.libvirt.host [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.115 247708 DEBUG nova.virt.libvirt.host [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.120 247708 DEBUG nova.virt.libvirt.host [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.121 247708 DEBUG nova.virt.libvirt.host [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.122 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.122 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T08:19:01Z,direct_url=<?>,disk_format='qcow2',id=f053e722-a943-4ece-ad77-a071fe4c9f59,min_disk=0,min_ram=0,name='tempest-scenario-img--811139934',owner='134c066ac92844ff853b216870fa8eed',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T08:19:03Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.123 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.123 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.123 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.124 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.124 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.124 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.124 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.124 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.125 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.125 247708 DEBUG nova.virt.hardware [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.127 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:19:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069976186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.594 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.635 247708 DEBUG nova.storage.rbd_utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image 2050d0a7-d773-4467-886b-af07b4efa0d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:18 compute-0 nova_compute[247704]: 2026-01-31 08:19:18.641 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:18.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:18 compute-0 ceph-mon[74496]: pgmap v2541: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 898 KiB/s wr, 99 op/s
Jan 31 08:19:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1069976186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:19:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549304386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:19.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.087 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.090 247708 DEBUG nova.virt.libvirt.vif [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:19:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1587986582',display_name='tempest-TestMinimumBasicScenario-server-1587986582',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1587986582',id=141,image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHmcExNwe2fixW74P9gMs2+p3Ai01ht0KWMZ+YQVrB+lWzAPhuDeCsQup7t70K9V4kEA8Q6TjuiL3oYataTcIeP973UypS9on/cMTGNZfWc+a/I9G+q9wlRQBnSo+WIwWQ==',key_name='tempest-TestMinimumBasicScenario-589136251',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-gqbdxfhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:11Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=2050d0a7-d773-4467-886b-af07b4efa0d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.091 247708 DEBUG nova.network.os_vif_util [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.092 247708 DEBUG nova.network.os_vif_util [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.094 247708 DEBUG nova.objects.instance [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'pci_devices' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.123 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <uuid>2050d0a7-d773-4467-886b-af07b4efa0d8</uuid>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <name>instance-0000008d</name>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <nova:name>tempest-TestMinimumBasicScenario-server-1587986582</nova:name>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:19:18</nova:creationTime>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <nova:user uuid="7f0be9090fdf49d2ac15246a0a820d3f">tempest-TestMinimumBasicScenario-975831205-project-member</nova:user>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <nova:project uuid="134c066ac92844ff853b216870fa8eed">tempest-TestMinimumBasicScenario-975831205</nova:project>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="f053e722-a943-4ece-ad77-a071fe4c9f59"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <nova:port uuid="3d5b4f77-1672-4b27-83d1-741ef4fda685">
Jan 31 08:19:19 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <system>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <entry name="serial">2050d0a7-d773-4467-886b-af07b4efa0d8</entry>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <entry name="uuid">2050d0a7-d773-4467-886b-af07b4efa0d8</entry>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </system>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <os>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   </os>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <features>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   </features>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/2050d0a7-d773-4467-886b-af07b4efa0d8_disk">
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       </source>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/2050d0a7-d773-4467-886b-af07b4efa0d8_disk.config">
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       </source>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:19:19 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:93:f1:46"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <target dev="tap3d5b4f77-16"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/console.log" append="off"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <video>
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </video>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:19:19 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:19:19 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:19:19 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:19:19 compute-0 nova_compute[247704]: </domain>
Jan 31 08:19:19 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.124 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Preparing to wait for external event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.124 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.125 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.125 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.126 247708 DEBUG nova.virt.libvirt.vif [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:19:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1587986582',display_name='tempest-TestMinimumBasicScenario-server-1587986582',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1587986582',id=141,image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHmcExNwe2fixW74P9gMs2+p3Ai01ht0KWMZ+YQVrB+lWzAPhuDeCsQup7t70K9V4kEA8Q6TjuiL3oYataTcIeP973UypS9on/cMTGNZfWc+a/I9G+q9wlRQBnSo+WIwWQ==',key_name='tempest-TestMinimumBasicScenario-589136251',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-gqbdxfhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:11Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=2050d0a7-d773-4467-886b-af07b4efa0d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.126 247708 DEBUG nova.network.os_vif_util [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.127 247708 DEBUG nova.network.os_vif_util [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.128 247708 DEBUG os_vif [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.129 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.129 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.130 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.134 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.134 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d5b4f77-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.135 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d5b4f77-16, col_values=(('external_ids', {'iface-id': '3d5b4f77-1672-4b27-83d1-741ef4fda685', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:f1:46', 'vm-uuid': '2050d0a7-d773-4467-886b-af07b4efa0d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.137 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:19 compute-0 NetworkManager[49108]: <info>  [1769847559.1389] manager: (tap3d5b4f77-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.139 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.145 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.148 247708 INFO os_vif [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16')
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.244 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.245 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.245 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No VIF found with MAC fa:16:3e:93:f1:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.246 247708 INFO nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Using config drive
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.286 247708 DEBUG nova.storage.rbd_utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image 2050d0a7-d773-4467-886b-af07b4efa0d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Jan 31 08:19:19 compute-0 nova_compute[247704]: 2026-01-31 08:19:19.955 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3549304386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:19:20
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', 'default.rgw.log', 'backups', '.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data']
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:19:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:19:20 compute-0 nova_compute[247704]: 2026-01-31 08:19:20.679 247708 INFO nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Creating config drive at /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/disk.config
Jan 31 08:19:20 compute-0 nova_compute[247704]: 2026-01-31 08:19:20.684 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo962nc3w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:19:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:19:20 compute-0 nova_compute[247704]: 2026-01-31 08:19:20.811 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo962nc3w" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:20 compute-0 nova_compute[247704]: 2026-01-31 08:19:20.854 247708 DEBUG nova.storage.rbd_utils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] rbd image 2050d0a7-d773-4467-886b-af07b4efa0d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:20 compute-0 nova_compute[247704]: 2026-01-31 08:19:20.859 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/disk.config 2050d0a7-d773-4467-886b-af07b4efa0d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:21 compute-0 ceph-mon[74496]: pgmap v2542: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Jan 31 08:19:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:21.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.189 247708 DEBUG oslo_concurrency.processutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/disk.config 2050d0a7-d773-4467-886b-af07b4efa0d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.190 247708 INFO nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Deleting local config drive /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/disk.config because it was imported into RBD.
Jan 31 08:19:21 compute-0 kernel: tap3d5b4f77-16: entered promiscuous mode
Jan 31 08:19:21 compute-0 NetworkManager[49108]: <info>  [1769847561.2468] manager: (tap3d5b4f77-16): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Jan 31 08:19:21 compute-0 ovn_controller[149457]: 2026-01-31T08:19:21Z|00595|binding|INFO|Claiming lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 for this chassis.
Jan 31 08:19:21 compute-0 ovn_controller[149457]: 2026-01-31T08:19:21Z|00596|binding|INFO|3d5b4f77-1672-4b27-83d1-741ef4fda685: Claiming fa:16:3e:93:f1:46 10.100.0.14
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.247 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.272 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:f1:46 10.100.0.14'], port_security=['fa:16:3e:93:f1:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2050d0a7-d773-4467-886b-af07b4efa0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134c066ac92844ff853b216870fa8eed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '967ea74e-50db-4569-92ae-9b918e86440d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bd5646e-2523-4ee9-a162-795050792e9d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3d5b4f77-1672-4b27-83d1-741ef4fda685) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:19:21 compute-0 systemd-udevd[341505]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.274 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3d5b4f77-1672-4b27-83d1-741ef4fda685 in datapath b8453b6a-05bd-4d59-86e9-a509416a9ef0 bound to our chassis
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.276 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.278 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:21 compute-0 systemd-machined[214448]: New machine qemu-62-instance-0000008d.
Jan 31 08:19:21 compute-0 ovn_controller[149457]: 2026-01-31T08:19:21Z|00597|binding|INFO|Setting lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 ovn-installed in OVS
Jan 31 08:19:21 compute-0 ovn_controller[149457]: 2026-01-31T08:19:21Z|00598|binding|INFO|Setting lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 up in Southbound
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.287 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:21 compute-0 NetworkManager[49108]: <info>  [1769847561.2895] device (tap3d5b4f77-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.287 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa9a558-f824-4739-9c87-7a855f850cee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.289 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8453b6a-01 in ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:19:21 compute-0 NetworkManager[49108]: <info>  [1769847561.2905] device (tap3d5b4f77-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.291 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8453b6a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.291 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f99bd381-f503-48e9-b672-99a3059f918c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.292 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6952e39e-b074-4a2a-b64f-1ff391d12456]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-0000008d.
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.302 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc47650-09a2-4d45-8a68-6870d70e98e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.324 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb4ff3c-4466-4074-9e8f-52aa5546de9b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.346 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bea095b0-f40c-4a1f-ad80-85871a41da6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 systemd-udevd[341511]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.354 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b7238196-b7af-482e-91bd-5d407797671c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 NetworkManager[49108]: <info>  [1769847561.3554] manager: (tapb8453b6a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/267)
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.381 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[55ccc57b-1bc4-4e52-96c8-86ff1cb852da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.384 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ddaa1caf-29b8-4d2d-bc0f-9f529f96ab68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 NetworkManager[49108]: <info>  [1769847561.4062] device (tapb8453b6a-00): carrier: link connected
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.409 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa53e31-61af-4834-b7e1-e3ff2bc9b7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.429 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[25ba046c-edfd-47b2-8c57-699c2299b2d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8453b6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:cc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780402, 'reachable_time': 39279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341541, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.445 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[94575b9b-b4ce-4a4b-b1a0-22f35749f18e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:cc04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 780402, 'tstamp': 780402}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341542, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.460 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[59fef086-5559-4ab1-a7ec-6cfa65e5a6f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8453b6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:cc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780402, 'reachable_time': 39279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341557, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.485 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e264e207-b355-4f62-aa20-61ca2fd97d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.541 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5d42b615-8f31-4b25-b3be-782bb2636793]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.544 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8453b6a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.544 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.545 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8453b6a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.547 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:21 compute-0 NetworkManager[49108]: <info>  [1769847561.5488] manager: (tapb8453b6a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Jan 31 08:19:21 compute-0 kernel: tapb8453b6a-00: entered promiscuous mode
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.553 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.555 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8453b6a-00, col_values=(('external_ids', {'iface-id': 'eb4259dc-1b35-4b46-af47-bdd24739342f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.557 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:21 compute-0 ovn_controller[149457]: 2026-01-31T08:19:21Z|00599|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.559 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.560 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c99be754-29a3-4ee3-a773-6e8f88f2e05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.562 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:19:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:21.562 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'env', 'PROCESS_TAG=haproxy-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8453b6a-05bd-4d59-86e9-a509416a9ef0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.565 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.634 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847561.6341507, 2050d0a7-d773-4467-886b-af07b4efa0d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.635 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] VM Started (Lifecycle Event)
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.667 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.673 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847561.6345615, 2050d0a7-d773-4467-886b-af07b4efa0d8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.674 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] VM Paused (Lifecycle Event)
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.707 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.721 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.751 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.825 247708 DEBUG nova.compute.manager [req-95a84040-1b75-4cc8-93b4-0a4e089f38bc req-05c9d098-5d43-42b6-b6e4-a2bd9d8cfe77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.826 247708 DEBUG oslo_concurrency.lockutils [req-95a84040-1b75-4cc8-93b4-0a4e089f38bc req-05c9d098-5d43-42b6-b6e4-a2bd9d8cfe77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.826 247708 DEBUG oslo_concurrency.lockutils [req-95a84040-1b75-4cc8-93b4-0a4e089f38bc req-05c9d098-5d43-42b6-b6e4-a2bd9d8cfe77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.826 247708 DEBUG oslo_concurrency.lockutils [req-95a84040-1b75-4cc8-93b4-0a4e089f38bc req-05c9d098-5d43-42b6-b6e4-a2bd9d8cfe77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.827 247708 DEBUG nova.compute.manager [req-95a84040-1b75-4cc8-93b4-0a4e089f38bc req-05c9d098-5d43-42b6-b6e4-a2bd9d8cfe77 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Processing event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.827 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.831 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847561.831057, 2050d0a7-d773-4467-886b-af07b4efa0d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.831 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] VM Resumed (Lifecycle Event)
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.833 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.837 247708 INFO nova.virt.libvirt.driver [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Instance spawned successfully.
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.838 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.870 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.876 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.879 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.880 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.880 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.881 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.881 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.882 247708 DEBUG nova.virt.libvirt.driver [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.932 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:19:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:21 compute-0 podman[341618]: 2026-01-31 08:19:21.97025731 +0000 UTC m=+0.060253911 container create 152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:19:21 compute-0 nova_compute[247704]: 2026-01-31 08:19:21.999 247708 INFO nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Took 10.03 seconds to spawn the instance on the hypervisor.
Jan 31 08:19:22 compute-0 nova_compute[247704]: 2026-01-31 08:19:22.000 247708 DEBUG nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:22 compute-0 nova_compute[247704]: 2026-01-31 08:19:22.018 247708 DEBUG nova.network.neutron [req-d842d2b8-6203-42db-9af0-f69274298e06 req-f7c6caed-2272-48b9-86bb-2043cc12149c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated VIF entry in instance network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:19:22 compute-0 nova_compute[247704]: 2026-01-31 08:19:22.020 247708 DEBUG nova.network.neutron [req-d842d2b8-6203-42db-9af0-f69274298e06 req-f7c6caed-2272-48b9-86bb-2043cc12149c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:22 compute-0 systemd[1]: Started libpod-conmon-152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf.scope.
Jan 31 08:19:22 compute-0 podman[341618]: 2026-01-31 08:19:21.932481649 +0000 UTC m=+0.022478270 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:19:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:22 compute-0 nova_compute[247704]: 2026-01-31 08:19:22.066 247708 DEBUG oslo_concurrency.lockutils [req-d842d2b8-6203-42db-9af0-f69274298e06 req-f7c6caed-2272-48b9-86bb-2043cc12149c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:19:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a6c1780a3fac206db0383bd7b0f9333e7b764cf25de7ba542260aea258bad6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:22 compute-0 ceph-mon[74496]: pgmap v2543: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 31 08:19:22 compute-0 nova_compute[247704]: 2026-01-31 08:19:22.099 247708 INFO nova.compute.manager [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Took 11.94 seconds to build instance.
Jan 31 08:19:22 compute-0 nova_compute[247704]: 2026-01-31 08:19:22.120 247708 DEBUG oslo_concurrency.lockutils [None req-8fe1b02f-d56d-49e0-8b68-d1cdd00ffd67 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:22 compute-0 podman[341618]: 2026-01-31 08:19:22.152780157 +0000 UTC m=+0.242776808 container init 152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:19:22 compute-0 podman[341618]: 2026-01-31 08:19:22.187325899 +0000 UTC m=+0.277322510 container start 152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:19:22 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[341634]: [NOTICE]   (341638) : New worker (341640) forked
Jan 31 08:19:22 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[341634]: [NOTICE]   (341638) : Loading success.
Jan 31 08:19:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:22.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:23.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:23 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 31 08:19:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 08:19:24 compute-0 nova_compute[247704]: 2026-01-31 08:19:24.069 247708 DEBUG nova.compute.manager [req-497b6e17-8b31-4dc0-bd3f-e0c84b23abc9 req-b344edb1-e8fc-490d-bb82-a93f2ace99cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:24 compute-0 nova_compute[247704]: 2026-01-31 08:19:24.070 247708 DEBUG oslo_concurrency.lockutils [req-497b6e17-8b31-4dc0-bd3f-e0c84b23abc9 req-b344edb1-e8fc-490d-bb82-a93f2ace99cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:24 compute-0 nova_compute[247704]: 2026-01-31 08:19:24.070 247708 DEBUG oslo_concurrency.lockutils [req-497b6e17-8b31-4dc0-bd3f-e0c84b23abc9 req-b344edb1-e8fc-490d-bb82-a93f2ace99cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:24 compute-0 nova_compute[247704]: 2026-01-31 08:19:24.071 247708 DEBUG oslo_concurrency.lockutils [req-497b6e17-8b31-4dc0-bd3f-e0c84b23abc9 req-b344edb1-e8fc-490d-bb82-a93f2ace99cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:24 compute-0 nova_compute[247704]: 2026-01-31 08:19:24.071 247708 DEBUG nova.compute.manager [req-497b6e17-8b31-4dc0-bd3f-e0c84b23abc9 req-b344edb1-e8fc-490d-bb82-a93f2ace99cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] No waiting events found dispatching network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:19:24 compute-0 nova_compute[247704]: 2026-01-31 08:19:24.072 247708 WARNING nova.compute.manager [req-497b6e17-8b31-4dc0-bd3f-e0c84b23abc9 req-b344edb1-e8fc-490d-bb82-a93f2ace99cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received unexpected event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 for instance with vm_state active and task_state None.
Jan 31 08:19:24 compute-0 nova_compute[247704]: 2026-01-31 08:19:24.138 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:24.989 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:19:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:24.991 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:19:25 compute-0 ceph-mon[74496]: pgmap v2544: 305 pgs: 305 active+clean; 155 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 08:19:25 compute-0 nova_compute[247704]: 2026-01-31 08:19:25.007 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:25.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.4 MiB/s wr, 206 op/s
Jan 31 08:19:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3709655572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:26.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:26.994 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:27 compute-0 ceph-mon[74496]: pgmap v2545: 305 pgs: 305 active+clean; 176 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.4 MiB/s wr, 206 op/s
Jan 31 08:19:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:27.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:27 compute-0 sudo[341651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:27 compute-0 sudo[341651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:27 compute-0 sudo[341651]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:27 compute-0 sudo[341676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:27 compute-0 sudo[341676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:27 compute-0 sudo[341676]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 168 op/s
Jan 31 08:19:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2092078173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/70020733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:28 compute-0 ceph-mon[74496]: pgmap v2546: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 168 op/s
Jan 31 08:19:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:28.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:29.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:29 compute-0 nova_compute[247704]: 2026-01-31 08:19:29.140 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 211 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Jan 31 08:19:30 compute-0 nova_compute[247704]: 2026-01-31 08:19:30.055 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:30.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:30 compute-0 ceph-mon[74496]: pgmap v2547: 305 pgs: 305 active+clean; 211 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Jan 31 08:19:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:31.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:31 compute-0 nova_compute[247704]: 2026-01-31 08:19:31.428 247708 DEBUG oslo_concurrency.lockutils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:31 compute-0 nova_compute[247704]: 2026-01-31 08:19:31.428 247708 DEBUG oslo_concurrency.lockutils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:31 compute-0 nova_compute[247704]: 2026-01-31 08:19:31.451 247708 DEBUG nova.objects.instance [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'flavor' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:31 compute-0 nova_compute[247704]: 2026-01-31 08:19:31.509 247708 DEBUG oslo_concurrency.lockutils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:31 compute-0 nova_compute[247704]: 2026-01-31 08:19:31.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:31 compute-0 nova_compute[247704]: 2026-01-31 08:19:31.895 247708 DEBUG oslo_concurrency.lockutils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:31 compute-0 nova_compute[247704]: 2026-01-31 08:19:31.896 247708 DEBUG oslo_concurrency.lockutils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:31 compute-0 nova_compute[247704]: 2026-01-31 08:19:31.897 247708 INFO nova.compute.manager [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Attaching volume 124bfb1e-b91a-4179-9a6b-46ee143687ea to /dev/vdb
Jan 31 08:19:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 234 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Jan 31 08:19:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.151 247708 DEBUG os_brick.utils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.153 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.166 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.166 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[6c99563b-137a-4626-9149-c3738d0b64e8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.168 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.174 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.175 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[66455a74-eeae-49c5-9925-09956c8d0449]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.176 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.185 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.186 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[05ee9a01-09f8-46e9-9ecc-c88cd44f4213]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.187 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe2d794-504f-4f6c-ba8a-766839a0aa6f]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.187 247708 DEBUG oslo_concurrency.processutils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.216 247708 DEBUG oslo_concurrency.processutils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.219 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.220 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.220 247708 DEBUG os_brick.initiator.connectors.lightos [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.221 247708 DEBUG os_brick.utils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:19:32 compute-0 nova_compute[247704]: 2026-01-31 08:19:32.221 247708 DEBUG nova.virt.block_device [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating existing volume attachment record: 427a8d76-874f-48e6-8b77-2a24c293c220 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:19:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:32.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:32 compute-0 podman[341711]: 2026-01-31 08:19:32.952542608 +0000 UTC m=+0.116836536 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:19:32 compute-0 ceph-mon[74496]: pgmap v2548: 305 pgs: 305 active+clean; 234 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Jan 31 08:19:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1892932877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:19:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1809861043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:33.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:33 compute-0 nova_compute[247704]: 2026-01-31 08:19:33.286 247708 DEBUG nova.objects.instance [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'flavor' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:33 compute-0 nova_compute[247704]: 2026-01-31 08:19:33.310 247708 DEBUG nova.virt.libvirt.driver [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Attempting to attach volume 124bfb1e-b91a-4179-9a6b-46ee143687ea with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:19:33 compute-0 nova_compute[247704]: 2026-01-31 08:19:33.315 247708 DEBUG nova.virt.libvirt.guest [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:19:33 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:19:33 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-124bfb1e-b91a-4179-9a6b-46ee143687ea">
Jan 31 08:19:33 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:19:33 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:19:33 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:19:33 compute-0 nova_compute[247704]:   </source>
Jan 31 08:19:33 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:19:33 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:19:33 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:19:33 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:19:33 compute-0 nova_compute[247704]:   <serial>124bfb1e-b91a-4179-9a6b-46ee143687ea</serial>
Jan 31 08:19:33 compute-0 nova_compute[247704]: </disk>
Jan 31 08:19:33 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:19:33 compute-0 nova_compute[247704]: 2026-01-31 08:19:33.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:33 compute-0 nova_compute[247704]: 2026-01-31 08:19:33.623 247708 DEBUG nova.virt.libvirt.driver [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:33 compute-0 nova_compute[247704]: 2026-01-31 08:19:33.624 247708 DEBUG nova.virt.libvirt.driver [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:33 compute-0 nova_compute[247704]: 2026-01-31 08:19:33.625 247708 DEBUG nova.virt.libvirt.driver [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:33 compute-0 nova_compute[247704]: 2026-01-31 08:19:33.625 247708 DEBUG nova.virt.libvirt.driver [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] No VIF found with MAC fa:16:3e:93:f1:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:19:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 234 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 218 op/s
Jan 31 08:19:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1809861043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.118 247708 DEBUG oslo_concurrency.lockutils [None req-a76aa1d2-b1f0-41c6-8eb1-4de03fb53ea8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.143 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.218 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.218 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.254 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.362 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.363 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.373 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.373 247708 INFO nova.compute.claims [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:34.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:34 compute-0 nova_compute[247704]: 2026-01-31 08:19:34.880 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:35 compute-0 ceph-mon[74496]: pgmap v2549: 305 pgs: 305 active+clean; 234 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 218 op/s
Jan 31 08:19:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/336001031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.097 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:35.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433741234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.337 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.346 247708 DEBUG nova.compute.provider_tree [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.453 247708 DEBUG nova.scheduler.client.report [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.501 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.502 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:19:35 compute-0 sudo[341782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004161613914479807 of space, bias 1.0, pg target 1.248484174343942 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:19:35 compute-0 sudo[341782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:19:35 compute-0 sudo[341782]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:19:35 compute-0 sudo[341808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:35 compute-0 sudo[341808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:35 compute-0 ovn_controller[149457]: 2026-01-31T08:19:35Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:f1:46 10.100.0.14
Jan 31 08:19:35 compute-0 ovn_controller[149457]: 2026-01-31T08:19:35Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:f1:46 10.100.0.14
Jan 31 08:19:35 compute-0 sudo[341808]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:35 compute-0 sudo[341833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:35 compute-0 sudo[341833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:35 compute-0 sudo[341833]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:35 compute-0 sudo[341858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:19:35 compute-0 sudo[341858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.755 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.756 247708 DEBUG nova.network.neutron [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.810 247708 INFO nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:19:35 compute-0 nova_compute[247704]: 2026-01-31 08:19:35.913 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:19:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.6 MiB/s wr, 295 op/s
Jan 31 08:19:35 compute-0 sudo[341858]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Jan 31 08:19:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Jan 31 08:19:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Jan 31 08:19:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1433741234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.090 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.092 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.092 247708 INFO nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Creating image(s)
Jan 31 08:19:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:19:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.130 247708 DEBUG nova.storage.rbd_utils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.160 247708 DEBUG nova.storage.rbd_utils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:36 compute-0 sudo[341937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.200 247708 DEBUG nova.storage.rbd_utils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:36 compute-0 sudo[341937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.205 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:36 compute-0 sudo[341937]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:36 compute-0 sudo[341983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:36 compute-0 sudo[341983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:36 compute-0 sudo[341983]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.270 247708 DEBUG nova.policy [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6788b0883cb348719d1222b1c9483be2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.295 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.296 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.297 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.298 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:36 compute-0 sudo[342009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:36 compute-0 sudo[342009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:36 compute-0 sudo[342009]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.339 247708 DEBUG nova.storage.rbd_utils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:36 compute-0 nova_compute[247704]: 2026-01-31 08:19:36.345 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:36 compute-0 sudo[342054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:19:36 compute-0 sudo[342054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:36.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:37.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:37 compute-0 sudo[342054]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:19:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:19:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:19:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:19:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:19:37 compute-0 ceph-mon[74496]: pgmap v2550: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.6 MiB/s wr, 295 op/s
Jan 31 08:19:37 compute-0 ceph-mon[74496]: osdmap e331: 3 total, 3 up, 3 in
Jan 31 08:19:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:37 compute-0 nova_compute[247704]: 2026-01-31 08:19:37.509 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:37 compute-0 nova_compute[247704]: 2026-01-31 08:19:37.764 247708 DEBUG nova.network.neutron [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Successfully created port: 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:19:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 48d5d34f-62d7-4d18-a679-006e2d18f53a does not exist
Jan 31 08:19:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0c62b959-528e-4878-9bf2-fb1f188190db does not exist
Jan 31 08:19:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cd80f242-4220-4a6d-afff-01e4bfa5bac3 does not exist
Jan 31 08:19:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:19:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:19:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:19:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:19:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:19:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:19:37 compute-0 nova_compute[247704]: 2026-01-31 08:19:37.824 247708 DEBUG nova.storage.rbd_utils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] resizing rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:19:37 compute-0 sudo[342149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:37 compute-0 sudo[342149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:37 compute-0 sudo[342149]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:37 compute-0 sudo[342207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:37 compute-0 sudo[342207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:37 compute-0 sudo[342207]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 261 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 265 op/s
Jan 31 08:19:37 compute-0 nova_compute[247704]: 2026-01-31 08:19:37.961 247708 DEBUG nova.objects.instance [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'migration_context' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:37 compute-0 sudo[342242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:37 compute-0 sudo[342242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:37 compute-0 sudo[342242]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:38 compute-0 sudo[342278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:19:38 compute-0 sudo[342278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.347 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.348 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Ensure instance console log exists: /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.348 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.349 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.349 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:38 compute-0 podman[342342]: 2026-01-31 08:19:38.380123168 +0000 UTC m=+0.036991479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:38 compute-0 podman[342342]: 2026-01-31 08:19:38.596069343 +0000 UTC m=+0.252937634 container create 81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:19:38 compute-0 systemd[1]: Started libpod-conmon-81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6.scope.
Jan 31 08:19:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:38 compute-0 podman[342342]: 2026-01-31 08:19:38.697219693 +0000 UTC m=+0.354087974 container init 81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 08:19:38 compute-0 podman[342342]: 2026-01-31 08:19:38.7048203 +0000 UTC m=+0.361688581 container start 81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:19:38 compute-0 podman[342342]: 2026-01-31 08:19:38.709172126 +0000 UTC m=+0.366040407 container attach 81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:38 compute-0 wonderful_perlman[342359]: 167 167
Jan 31 08:19:38 compute-0 systemd[1]: libpod-81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6.scope: Deactivated successfully.
Jan 31 08:19:38 compute-0 conmon[342359]: conmon 81a08d797aead4d7d8ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6.scope/container/memory.events
Jan 31 08:19:38 compute-0 podman[342342]: 2026-01-31 08:19:38.714776844 +0000 UTC m=+0.371645155 container died 81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ab0fe58435ce889ef372c8c3f88b56964c9afb467681d797a09dacb87143a60-merged.mount: Deactivated successfully.
Jan 31 08:19:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:38.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:38 compute-0 podman[342342]: 2026-01-31 08:19:38.783900098 +0000 UTC m=+0.440768399 container remove 81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:38 compute-0 systemd[1]: libpod-conmon-81a08d797aead4d7d8cea4824c8b2eb8e8a36b6467bdee4507f7e7b18600f1f6.scope: Deactivated successfully.
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.793 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.793 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.793 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.793 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:19:38 compute-0 nova_compute[247704]: 2026-01-31 08:19:38.794 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:38 compute-0 podman[342386]: 2026-01-31 08:19:38.944129607 +0000 UTC m=+0.060544685 container create e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:19:39 compute-0 systemd[1]: Started libpod-conmon-e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7.scope.
Jan 31 08:19:39 compute-0 podman[342386]: 2026-01-31 08:19:38.910507783 +0000 UTC m=+0.026922961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:19:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd140819b50729ddec3157427af1ff727106bc9d0e64fedc1f4cc2c4b5e65218/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd140819b50729ddec3157427af1ff727106bc9d0e64fedc1f4cc2c4b5e65218/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd140819b50729ddec3157427af1ff727106bc9d0e64fedc1f4cc2c4b5e65218/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd140819b50729ddec3157427af1ff727106bc9d0e64fedc1f4cc2c4b5e65218/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd140819b50729ddec3157427af1ff727106bc9d0e64fedc1f4cc2c4b5e65218/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:39 compute-0 podman[342386]: 2026-01-31 08:19:39.056510393 +0000 UTC m=+0.172925501 container init e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:19:39 compute-0 podman[342386]: 2026-01-31 08:19:39.064272583 +0000 UTC m=+0.180687691 container start e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:39 compute-0 podman[342386]: 2026-01-31 08:19:39.068565458 +0000 UTC m=+0.184980566 container attach e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 08:19:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:39.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:39 compute-0 nova_compute[247704]: 2026-01-31 08:19:39.146 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3559065145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:39 compute-0 nova_compute[247704]: 2026-01-31 08:19:39.210 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:19:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:19:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:19:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:19:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:19:39 compute-0 ceph-mon[74496]: pgmap v2552: 305 pgs: 305 active+clean; 261 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.6 MiB/s wr, 265 op/s
Jan 31 08:19:39 compute-0 gracious_napier[342421]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:19:39 compute-0 gracious_napier[342421]: --> relative data size: 1.0
Jan 31 08:19:39 compute-0 gracious_napier[342421]: --> All data devices are unavailable
Jan 31 08:19:39 compute-0 systemd[1]: libpod-e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7.scope: Deactivated successfully.
Jan 31 08:19:39 compute-0 conmon[342421]: conmon e0def993f601fbe0e886 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7.scope/container/memory.events
Jan 31 08:19:39 compute-0 podman[342386]: 2026-01-31 08:19:39.878346026 +0000 UTC m=+0.994761154 container died e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:19:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 285 op/s
Jan 31 08:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd140819b50729ddec3157427af1ff727106bc9d0e64fedc1f4cc2c4b5e65218-merged.mount: Deactivated successfully.
Jan 31 08:19:39 compute-0 podman[342386]: 2026-01-31 08:19:39.944835366 +0000 UTC m=+1.061250444 container remove e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:39 compute-0 systemd[1]: libpod-conmon-e0def993f601fbe0e8869ce9399e86ff0e691bb5229352269edda47f7b2bcab7.scope: Deactivated successfully.
Jan 31 08:19:39 compute-0 sudo[342278]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:40 compute-0 sudo[342454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:40 compute-0 sudo[342454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:40 compute-0 sudo[342454]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:40 compute-0 nova_compute[247704]: 2026-01-31 08:19:40.099 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:40 compute-0 sudo[342479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:40 compute-0 sudo[342479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:40 compute-0 sudo[342479]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:40 compute-0 sudo[342504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:40 compute-0 sudo[342504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:40 compute-0 sudo[342504]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:40 compute-0 sudo[342529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:19:40 compute-0 sudo[342529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3559065145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1997723901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:40 compute-0 ceph-mon[74496]: pgmap v2553: 305 pgs: 305 active+clean; 302 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.7 MiB/s wr, 285 op/s
Jan 31 08:19:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/887722912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:40 compute-0 podman[342594]: 2026-01-31 08:19:40.607189927 +0000 UTC m=+0.044346458 container create 119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:19:40 compute-0 systemd[1]: Started libpod-conmon-119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d.scope.
Jan 31 08:19:40 compute-0 nova_compute[247704]: 2026-01-31 08:19:40.654 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:19:40 compute-0 nova_compute[247704]: 2026-01-31 08:19:40.654 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:19:40 compute-0 nova_compute[247704]: 2026-01-31 08:19:40.654 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:19:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:40 compute-0 podman[342594]: 2026-01-31 08:19:40.588878478 +0000 UTC m=+0.026035009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:19:40 compute-0 podman[342594]: 2026-01-31 08:19:40.699253355 +0000 UTC m=+0.136409886 container init 119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hellman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:19:40 compute-0 podman[342594]: 2026-01-31 08:19:40.710019359 +0000 UTC m=+0.147175920 container start 119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hellman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:19:40 compute-0 intelligent_hellman[342609]: 167 167
Jan 31 08:19:40 compute-0 systemd[1]: libpod-119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d.scope: Deactivated successfully.
Jan 31 08:19:40 compute-0 conmon[342609]: conmon 119bd493ca89f1dea2b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d.scope/container/memory.events
Jan 31 08:19:40 compute-0 podman[342594]: 2026-01-31 08:19:40.719722547 +0000 UTC m=+0.156879138 container attach 119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 08:19:40 compute-0 podman[342594]: 2026-01-31 08:19:40.72069428 +0000 UTC m=+0.157850861 container died 119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hellman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-506c405f872adef8ef714f9cd5008a20c5bcb34f64f4417d4a0bcbdd20b9c526-merged.mount: Deactivated successfully.
Jan 31 08:19:40 compute-0 podman[342594]: 2026-01-31 08:19:40.775497804 +0000 UTC m=+0.212654305 container remove 119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:19:40 compute-0 systemd[1]: libpod-conmon-119bd493ca89f1dea2b7b79141e17e969b96298fa9f4a4b16f0f6fe89179e07d.scope: Deactivated successfully.
Jan 31 08:19:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:40.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:40 compute-0 nova_compute[247704]: 2026-01-31 08:19:40.847 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:19:40 compute-0 nova_compute[247704]: 2026-01-31 08:19:40.849 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4128MB free_disk=20.88744354248047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:19:40 compute-0 nova_compute[247704]: 2026-01-31 08:19:40.849 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:40 compute-0 nova_compute[247704]: 2026-01-31 08:19:40.850 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:40 compute-0 podman[342633]: 2026-01-31 08:19:40.928046824 +0000 UTC m=+0.036438514 container create 71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:19:40 compute-0 systemd[1]: Started libpod-conmon-71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35.scope.
Jan 31 08:19:41 compute-0 podman[342633]: 2026-01-31 08:19:40.912564835 +0000 UTC m=+0.020956525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:19:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96535fb00a342b5366aa7ba474d93dc21d4ddced698fef93a7808f775a8a862/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96535fb00a342b5366aa7ba474d93dc21d4ddced698fef93a7808f775a8a862/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96535fb00a342b5366aa7ba474d93dc21d4ddced698fef93a7808f775a8a862/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f96535fb00a342b5366aa7ba474d93dc21d4ddced698fef93a7808f775a8a862/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:41 compute-0 podman[342633]: 2026-01-31 08:19:41.037701334 +0000 UTC m=+0.146093034 container init 71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:41 compute-0 podman[342633]: 2026-01-31 08:19:41.052758282 +0000 UTC m=+0.161149962 container start 71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 08:19:41 compute-0 podman[342633]: 2026-01-31 08:19:41.057603742 +0000 UTC m=+0.165995412 container attach 71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.086 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 2050d0a7-d773-4467-886b-af07b4efa0d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.086 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.086 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.086 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:19:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.298 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2954870532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:19:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984747512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.730 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.739 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.762 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.789 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:19:41 compute-0 nova_compute[247704]: 2026-01-31 08:19:41.790 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]: {
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:     "0": [
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:         {
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "devices": [
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "/dev/loop3"
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             ],
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "lv_name": "ceph_lv0",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "lv_size": "7511998464",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "name": "ceph_lv0",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "tags": {
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.cluster_name": "ceph",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.crush_device_class": "",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.encrypted": "0",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.osd_id": "0",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.type": "block",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:                 "ceph.vdo": "0"
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             },
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "type": "block",
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:             "vg_name": "ceph_vg0"
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:         }
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]:     ]
Jan 31 08:19:41 compute-0 gallant_mahavira[342649]: }
Jan 31 08:19:41 compute-0 systemd[1]: libpod-71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35.scope: Deactivated successfully.
Jan 31 08:19:41 compute-0 podman[342633]: 2026-01-31 08:19:41.842923259 +0000 UTC m=+0.951315019 container died 71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mahavira, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96535fb00a342b5366aa7ba474d93dc21d4ddced698fef93a7808f775a8a862-merged.mount: Deactivated successfully.
Jan 31 08:19:41 compute-0 podman[342633]: 2026-01-31 08:19:41.908835575 +0000 UTC m=+1.017227285 container remove 71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:19:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.8 MiB/s wr, 232 op/s
Jan 31 08:19:41 compute-0 systemd[1]: libpod-conmon-71e586b56743ea6252c7b73a435d6e6059740d4ba1704192dce702f9d34f9d35.scope: Deactivated successfully.
Jan 31 08:19:41 compute-0 sudo[342529]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:42 compute-0 sudo[342693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:42 compute-0 sudo[342693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:42 compute-0 sudo[342693]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:42 compute-0 sudo[342718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:42 compute-0 sudo[342718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:42 compute-0 sudo[342718]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:42 compute-0 sudo[342743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:42 compute-0 sudo[342743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:42 compute-0 sudo[342743]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:42 compute-0 sudo[342768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:19:42 compute-0 sudo[342768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:42 compute-0 podman[342836]: 2026-01-31 08:19:42.591541496 +0000 UTC m=+0.022689918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:19:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Jan 31 08:19:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:42.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:42 compute-0 nova_compute[247704]: 2026-01-31 08:19:42.790 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:42 compute-0 nova_compute[247704]: 2026-01-31 08:19:42.792 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:19:42 compute-0 nova_compute[247704]: 2026-01-31 08:19:42.792 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:19:42 compute-0 podman[342836]: 2026-01-31 08:19:42.980576365 +0000 UTC m=+0.411724767 container create 84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:19:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:43 compute-0 nova_compute[247704]: 2026-01-31 08:19:43.195 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:19:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/984747512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/249068326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:43 compute-0 ceph-mon[74496]: pgmap v2554: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.8 MiB/s wr, 232 op/s
Jan 31 08:19:43 compute-0 systemd[1]: Started libpod-conmon-84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3.scope.
Jan 31 08:19:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Jan 31 08:19:43 compute-0 podman[342836]: 2026-01-31 08:19:43.804382227 +0000 UTC m=+1.235530669 container init 84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:19:43 compute-0 podman[342836]: 2026-01-31 08:19:43.811644084 +0000 UTC m=+1.242792486 container start 84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:19:43 compute-0 trusting_wilson[342854]: 167 167
Jan 31 08:19:43 compute-0 systemd[1]: libpod-84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3.scope: Deactivated successfully.
Jan 31 08:19:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.8 MiB/s wr, 219 op/s
Jan 31 08:19:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Jan 31 08:19:44 compute-0 nova_compute[247704]: 2026-01-31 08:19:44.151 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:44 compute-0 podman[342836]: 2026-01-31 08:19:44.196894571 +0000 UTC m=+1.628043063 container attach 84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 08:19:44 compute-0 podman[342836]: 2026-01-31 08:19:44.197592729 +0000 UTC m=+1.628741161 container died 84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:19:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:44.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-445af3b072fdf4f6a5da37e37093827b34a88151421c0c391b7b8be295fddfb0-merged.mount: Deactivated successfully.
Jan 31 08:19:44 compute-0 ceph-mon[74496]: pgmap v2555: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.8 MiB/s wr, 219 op/s
Jan 31 08:19:44 compute-0 ceph-mon[74496]: osdmap e332: 3 total, 3 up, 3 in
Jan 31 08:19:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2846739514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:45 compute-0 nova_compute[247704]: 2026-01-31 08:19:45.102 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:45 compute-0 podman[342836]: 2026-01-31 08:19:45.105265775 +0000 UTC m=+2.536414207 container remove 84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 08:19:45 compute-0 systemd[1]: libpod-conmon-84515090831b3e2c05efe6fa0897f58603c1c0604f5d88ba6e57f93f15313ed3.scope: Deactivated successfully.
Jan 31 08:19:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:45.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:45 compute-0 podman[342878]: 2026-01-31 08:19:45.264647464 +0000 UTC m=+0.039006008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:19:45 compute-0 podman[342878]: 2026-01-31 08:19:45.531692821 +0000 UTC m=+0.306051355 container create f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:19:45 compute-0 systemd[1]: Started libpod-conmon-f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b.scope.
Jan 31 08:19:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968cd44c78d122c7e0710872b33419c346a47504e2785668cb46ad5031f049ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968cd44c78d122c7e0710872b33419c346a47504e2785668cb46ad5031f049ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968cd44c78d122c7e0710872b33419c346a47504e2785668cb46ad5031f049ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968cd44c78d122c7e0710872b33419c346a47504e2785668cb46ad5031f049ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:45 compute-0 podman[342878]: 2026-01-31 08:19:45.893478513 +0000 UTC m=+0.667837057 container init f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:19:45 compute-0 podman[342878]: 2026-01-31 08:19:45.900452405 +0000 UTC m=+0.674810899 container start f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 08:19:45 compute-0 podman[342878]: 2026-01-31 08:19:45.904586636 +0000 UTC m=+0.678945130 container attach f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:19:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.7 MiB/s wr, 127 op/s
Jan 31 08:19:46 compute-0 ceph-mon[74496]: pgmap v2557: 305 pgs: 305 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.7 MiB/s wr, 127 op/s
Jan 31 08:19:46 compute-0 nova_compute[247704]: 2026-01-31 08:19:46.518 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:46 compute-0 nova_compute[247704]: 2026-01-31 08:19:46.518 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:46 compute-0 nova_compute[247704]: 2026-01-31 08:19:46.518 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:19:46 compute-0 nova_compute[247704]: 2026-01-31 08:19:46.518 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:46 compute-0 nervous_carson[342895]: {
Jan 31 08:19:46 compute-0 nervous_carson[342895]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:19:46 compute-0 nervous_carson[342895]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:19:46 compute-0 nervous_carson[342895]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:19:46 compute-0 nervous_carson[342895]:         "osd_id": 0,
Jan 31 08:19:46 compute-0 nervous_carson[342895]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:19:46 compute-0 nervous_carson[342895]:         "type": "bluestore"
Jan 31 08:19:46 compute-0 nervous_carson[342895]:     }
Jan 31 08:19:46 compute-0 nervous_carson[342895]: }
Jan 31 08:19:46 compute-0 systemd[1]: libpod-f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b.scope: Deactivated successfully.
Jan 31 08:19:46 compute-0 podman[342878]: 2026-01-31 08:19:46.749606626 +0000 UTC m=+1.523965160 container died f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:19:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:46.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:46 compute-0 nova_compute[247704]: 2026-01-31 08:19:46.800 247708 DEBUG nova.network.neutron [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Successfully updated port: 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:19:46 compute-0 nova_compute[247704]: 2026-01-31 08:19:46.836 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:46 compute-0 nova_compute[247704]: 2026-01-31 08:19:46.837 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquired lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:46 compute-0 nova_compute[247704]: 2026-01-31 08:19:46.837 247708 DEBUG nova.network.neutron [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:19:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-968cd44c78d122c7e0710872b33419c346a47504e2785668cb46ad5031f049ae-merged.mount: Deactivated successfully.
Jan 31 08:19:46 compute-0 podman[342917]: 2026-01-31 08:19:46.897301638 +0000 UTC m=+0.113033063 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:19:46 compute-0 podman[342878]: 2026-01-31 08:19:46.913571417 +0000 UTC m=+1.687929951 container remove f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:19:46 compute-0 systemd[1]: libpod-conmon-f368e1dbcf97aca6dce3a5d56c1a900541b39c466e0b69860563f26c80efad9b.scope: Deactivated successfully.
Jan 31 08:19:46 compute-0 sudo[342768]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:19:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:19:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3deee228-c162-41da-8252-cca74b0693d7 does not exist
Jan 31 08:19:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0110f1bb-ed9b-4479-bd97-1cea21834d19 does not exist
Jan 31 08:19:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev dd09c59b-090b-46dd-86c9-2289f30b09c6 does not exist
Jan 31 08:19:47 compute-0 nova_compute[247704]: 2026-01-31 08:19:47.194 247708 DEBUG nova.network.neutron [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:19:47 compute-0 sudo[342950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:47 compute-0 sudo[342950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:47 compute-0 sudo[342950]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Jan 31 08:19:47 compute-0 sudo[342975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:19:47 compute-0 sudo[342975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3886512229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:19:47 compute-0 sudo[342975]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Jan 31 08:19:47 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Jan 31 08:19:47 compute-0 nova_compute[247704]: 2026-01-31 08:19:47.373 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:47 compute-0 NetworkManager[49108]: <info>  [1769847587.3752] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Jan 31 08:19:47 compute-0 NetworkManager[49108]: <info>  [1769847587.3767] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Jan 31 08:19:47 compute-0 nova_compute[247704]: 2026-01-31 08:19:47.453 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:47 compute-0 ovn_controller[149457]: 2026-01-31T08:19:47Z|00600|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:19:47 compute-0 nova_compute[247704]: 2026-01-31 08:19:47.478 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:47 compute-0 sudo[343002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:47 compute-0 sudo[343002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:47 compute-0 sudo[343002]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:47 compute-0 sudo[343027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:19:47 compute-0 sudo[343027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:47 compute-0 sudo[343027]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 321 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 539 KiB/s wr, 40 op/s
Jan 31 08:19:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Jan 31 08:19:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Jan 31 08:19:48 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Jan 31 08:19:48 compute-0 ceph-mon[74496]: osdmap e333: 3 total, 3 up, 3 in
Jan 31 08:19:48 compute-0 ceph-mon[74496]: pgmap v2559: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 321 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 539 KiB/s wr, 40 op/s
Jan 31 08:19:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:48.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.121 247708 DEBUG nova.compute.manager [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-changed-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.122 247708 DEBUG nova.compute.manager [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Refreshing instance network info cache due to event network-changed-912fc9c9-cae4-4bd8-901c-7bb8a63759a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.122 247708 DEBUG oslo_concurrency.lockutils [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:49.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.153 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:49 compute-0 ceph-mon[74496]: osdmap e334: 3 total, 3 up, 3 in
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.847 247708 DEBUG nova.network.neutron [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updating instance_info_cache with network_info: [{"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.865 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.873 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Releasing lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.874 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance network_info: |[{"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.874 247708 DEBUG oslo_concurrency.lockutils [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.875 247708 DEBUG nova.network.neutron [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Refreshing network info cache for port 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.879 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Start _get_guest_xml network_info=[{"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.886 247708 WARNING nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.895 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.895 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.896 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.896 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.904 247708 DEBUG nova.virt.libvirt.host [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.904 247708 DEBUG nova.virt.libvirt.host [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:19:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.924 247708 DEBUG nova.virt.libvirt.host [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.930 247708 DEBUG nova.virt.libvirt.host [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.932 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.933 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.933 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.933 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.933 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.934 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.934 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.934 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.934 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.934 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.935 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.935 247708 DEBUG nova.virt.hardware [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:19:49 compute-0 nova_compute[247704]: 2026-01-31 08:19:49.938 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.125 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:19:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154019945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.372 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.404 247708 DEBUG nova.storage.rbd_utils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.410 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:50 compute-0 ceph-mon[74496]: pgmap v2561: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Jan 31 08:19:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2154019945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.662 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:19:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:50.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:19:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076698534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.855 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.857 247708 DEBUG nova.virt.libvirt.vif [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:19:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-28125457',display_name='tempest-ServerRescueNegativeTestJSON-server-28125457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-28125457',id=144,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-k7f0vyw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:35Z,user_data=None,user_id='6788b0883cb348719d1222b1c9483be2',uuid=c725d43e-b5fe-4a94-ad44-6df85e3c0fa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.857 247708 DEBUG nova.network.os_vif_util [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.858 247708 DEBUG nova.network.os_vif_util [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2d:b8,bridge_name='br-int',has_traffic_filtering=True,id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912fc9c9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.859 247708 DEBUG nova.objects.instance [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.884 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <uuid>c725d43e-b5fe-4a94-ad44-6df85e3c0fa0</uuid>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <name>instance-00000090</name>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-28125457</nova:name>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:19:49</nova:creationTime>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <nova:user uuid="6788b0883cb348719d1222b1c9483be2">tempest-ServerRescueNegativeTestJSON-1784809431-project-member</nova:user>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <nova:project uuid="4849ff916e1b4e2aa162faaf2c0717a2">tempest-ServerRescueNegativeTestJSON-1784809431</nova:project>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <nova:port uuid="912fc9c9-cae4-4bd8-901c-7bb8a63759a4">
Jan 31 08:19:50 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <system>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <entry name="serial">c725d43e-b5fe-4a94-ad44-6df85e3c0fa0</entry>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <entry name="uuid">c725d43e-b5fe-4a94-ad44-6df85e3c0fa0</entry>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </system>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <os>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   </os>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <features>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   </features>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk">
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       </source>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config">
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       </source>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:19:50 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:a5:2d:b8"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <target dev="tap912fc9c9-ca"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/console.log" append="off"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <video>
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </video>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:19:50 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:19:50 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:19:50 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:19:50 compute-0 nova_compute[247704]: </domain>
Jan 31 08:19:50 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.886 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Preparing to wait for external event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.887 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.888 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.888 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.891 247708 DEBUG nova.virt.libvirt.vif [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:19:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-28125457',display_name='tempest-ServerRescueNegativeTestJSON-server-28125457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-28125457',id=144,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-k7f0vyw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:35Z,user_data=None,user_id='6788b0883cb348719d1222b1c9483be2',uuid=c725d43e-b5fe-4a94-ad44-6df85e3c0fa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.891 247708 DEBUG nova.network.os_vif_util [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.893 247708 DEBUG nova.network.os_vif_util [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2d:b8,bridge_name='br-int',has_traffic_filtering=True,id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912fc9c9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.894 247708 DEBUG os_vif [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2d:b8,bridge_name='br-int',has_traffic_filtering=True,id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912fc9c9-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.896 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.897 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.898 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.904 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.904 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap912fc9c9-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.905 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap912fc9c9-ca, col_values=(('external_ids', {'iface-id': '912fc9c9-cae4-4bd8-901c-7bb8a63759a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:2d:b8', 'vm-uuid': 'c725d43e-b5fe-4a94-ad44-6df85e3c0fa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:50 compute-0 NetworkManager[49108]: <info>  [1769847590.9089] manager: (tap912fc9c9-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.910 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.916 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.918 247708 INFO os_vif [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:2d:b8,bridge_name='br-int',has_traffic_filtering=True,id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912fc9c9-ca')
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.986 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.987 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.987 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No VIF found with MAC fa:16:3e:a5:2d:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:19:50 compute-0 nova_compute[247704]: 2026-01-31 08:19:50.988 247708 INFO nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Using config drive
Jan 31 08:19:51 compute-0 nova_compute[247704]: 2026-01-31 08:19:51.025 247708 DEBUG nova.storage.rbd_utils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2076698534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:19:51 compute-0 nova_compute[247704]: 2026-01-31 08:19:51.902 247708 DEBUG nova.compute.manager [req-078cf897-414b-4bd2-8d20-61c1aacb0d22 req-6f15101b-abf1-4823-b328-9634d86cb398 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:51 compute-0 nova_compute[247704]: 2026-01-31 08:19:51.902 247708 DEBUG nova.compute.manager [req-078cf897-414b-4bd2-8d20-61c1aacb0d22 req-6f15101b-abf1-4823-b328-9634d86cb398 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing instance network info cache due to event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:19:51 compute-0 nova_compute[247704]: 2026-01-31 08:19:51.903 247708 DEBUG oslo_concurrency.lockutils [req-078cf897-414b-4bd2-8d20-61c1aacb0d22 req-6f15101b-abf1-4823-b328-9634d86cb398 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:51 compute-0 nova_compute[247704]: 2026-01-31 08:19:51.903 247708 DEBUG oslo_concurrency.lockutils [req-078cf897-414b-4bd2-8d20-61c1aacb0d22 req-6f15101b-abf1-4823-b328-9634d86cb398 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:51 compute-0 nova_compute[247704]: 2026-01-31 08:19:51.903 247708 DEBUG nova.network.neutron [req-078cf897-414b-4bd2-8d20-61c1aacb0d22 req-6f15101b-abf1-4823-b328-9634d86cb398 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:19:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 5.9 MiB/s wr, 207 op/s
Jan 31 08:19:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:51 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Jan 31 08:19:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:51.984412) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:19:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Jan 31 08:19:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847591984517, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2081, "num_deletes": 266, "total_data_size": 3418875, "memory_usage": 3489512, "flush_reason": "Manual Compaction"}
Jan 31 08:19:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592002648, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 3341001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53829, "largest_seqno": 55909, "table_properties": {"data_size": 3331502, "index_size": 5929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20313, "raw_average_key_size": 20, "raw_value_size": 3312244, "raw_average_value_size": 3383, "num_data_blocks": 257, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847423, "oldest_key_time": 1769847423, "file_creation_time": 1769847591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 18286 microseconds, and 5424 cpu microseconds.
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.002705) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 3341001 bytes OK
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.002739) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.011842) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.011864) EVENT_LOG_v1 {"time_micros": 1769847592011857, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.011892) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 3410126, prev total WAL file size 3410126, number of live WAL files 2.
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.012832) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(3262KB)], [119(10MB)]
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592012902, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 14180411, "oldest_snapshot_seqno": -1}
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 8644 keys, 14029784 bytes, temperature: kUnknown
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592148676, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 14029784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13970324, "index_size": 36756, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 223626, "raw_average_key_size": 25, "raw_value_size": 13814765, "raw_average_value_size": 1598, "num_data_blocks": 1452, "num_entries": 8644, "num_filter_entries": 8644, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.149244) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 14029784 bytes
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.150649) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.4 rd, 103.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.3 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 9190, records dropped: 546 output_compression: NoCompression
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.150674) EVENT_LOG_v1 {"time_micros": 1769847592150663, "job": 72, "event": "compaction_finished", "compaction_time_micros": 135856, "compaction_time_cpu_micros": 32507, "output_level": 6, "num_output_files": 1, "total_output_size": 14029784, "num_input_records": 9190, "num_output_records": 8644, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592151627, "job": 72, "event": "table_file_deletion", "file_number": 121}
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847592153313, "job": 72, "event": "table_file_deletion", "file_number": 119}
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.012706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.153510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.153520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.153522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.153525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:19:52.153529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.173 247708 INFO nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Creating config drive at /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.177 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgohggout execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.310 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgohggout" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.346 247708 DEBUG nova.storage.rbd_utils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.351 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.542 247708 DEBUG oslo_concurrency.processutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.544 247708 INFO nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Deleting local config drive /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config because it was imported into RBD.
Jan 31 08:19:52 compute-0 kernel: tap912fc9c9-ca: entered promiscuous mode
Jan 31 08:19:52 compute-0 NetworkManager[49108]: <info>  [1769847592.6155] manager: (tap912fc9c9-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Jan 31 08:19:52 compute-0 ovn_controller[149457]: 2026-01-31T08:19:52Z|00601|binding|INFO|Claiming lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for this chassis.
Jan 31 08:19:52 compute-0 ovn_controller[149457]: 2026-01-31T08:19:52Z|00602|binding|INFO|912fc9c9-cae4-4bd8-901c-7bb8a63759a4: Claiming fa:16:3e:a5:2d:b8 10.100.0.10
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.619 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:52 compute-0 ovn_controller[149457]: 2026-01-31T08:19:52Z|00603|binding|INFO|Setting lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 ovn-installed in OVS
Jan 31 08:19:52 compute-0 ovn_controller[149457]: 2026-01-31T08:19:52Z|00604|binding|INFO|Setting lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 up in Southbound
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.631 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:2d:b8 10.100.0.10'], port_security=['fa:16:3e:a5:2d:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c725d43e-b5fe-4a94-ad44-6df85e3c0fa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=912fc9c9-cae4-4bd8-901c-7bb8a63759a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.631 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.633 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 bound to our chassis
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.636 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:19:52 compute-0 systemd-udevd[343189]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.651 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[65b9bc38-2dc1-4d7a-8804-d7499ee751fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.653 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape03fc320-c1 in ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.655 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape03fc320-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.655 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0c52073d-9702-4bbd-bf94-9b086128cee2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.656 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[85c889e5-f04e-4c43-8cd8-44fa890118bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 systemd-machined[214448]: New machine qemu-63-instance-00000090.
Jan 31 08:19:52 compute-0 NetworkManager[49108]: <info>  [1769847592.6718] device (tap912fc9c9-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:19:52 compute-0 NetworkManager[49108]: <info>  [1769847592.6725] device (tap912fc9c9-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.672 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[64552a82-a4d3-405c-b3ad-63d2b7afd623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000090.
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.689 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0764ff1c-c6f2-4ba5-b96a-08a6cb84f485]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.725 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ef6f50-f3d9-4bb4-a2f9-df3b2b31070e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.732 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b6165b1f-ec9f-470d-932a-285c486cabee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 NetworkManager[49108]: <info>  [1769847592.7342] manager: (tape03fc320-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/273)
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.770 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c4544346-e43e-4c36-a34b-291f4ba124f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.775 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[e5185600-8470-42b9-b9b0-621f8649e864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:52.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:52 compute-0 NetworkManager[49108]: <info>  [1769847592.8018] device (tape03fc320-c0): carrier: link connected
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.811 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[991bc05a-6174-4589-a0c4-20adf72eac5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.830 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5e35a888-6217-4f31-994b-3421f296a02d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783542, 'reachable_time': 25694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343222, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.848 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b364aa15-4c0b-4d60-ba63-504ef4433bf5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:2269'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 783542, 'tstamp': 783542}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343223, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.868 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[20c9e5c5-14bf-4dd4-aad5-705871c2d804]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783542, 'reachable_time': 25694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343224, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.905 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cff31275-4af8-456c-b2a7-550a6c7d472b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.970 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[82a57909-6a1b-4c4f-992d-4daed3e293a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.972 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.972 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.973 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:52 compute-0 NetworkManager[49108]: <info>  [1769847592.9767] manager: (tape03fc320-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.976 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:52 compute-0 kernel: tape03fc320-c0: entered promiscuous mode
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.985 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:19:52 compute-0 ovn_controller[149457]: 2026-01-31T08:19:52Z|00605|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.986 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.990 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:19:52 compute-0 nova_compute[247704]: 2026-01-31 08:19:52.992 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.991 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[83e046dc-f1c8-4f9f-a682-dc0ac534f8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.993 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:19:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:19:52.994 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'env', 'PROCESS_TAG=haproxy-e03fc320-c87d-42d2-a772-ec94aeb05209', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e03fc320-c87d-42d2-a772-ec94aeb05209.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:19:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.267 247708 DEBUG nova.network.neutron [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updated VIF entry in instance network info cache for port 912fc9c9-cae4-4bd8-901c-7bb8a63759a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.269 247708 DEBUG nova.network.neutron [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updating instance_info_cache with network_info: [{"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.294 247708 DEBUG oslo_concurrency.lockutils [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.295 247708 DEBUG nova.compute.manager [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.296 247708 DEBUG nova.compute.manager [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing instance network info cache due to event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.298 247708 DEBUG oslo_concurrency.lockutils [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:53 compute-0 ceph-mon[74496]: pgmap v2562: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 5.9 MiB/s wr, 207 op/s
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.406 247708 DEBUG nova.compute.manager [req-c5af6e1a-6add-414b-b9f1-81a866b27def req-8510f3d8-161d-4065-b04e-6e6eb2619941 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.407 247708 DEBUG oslo_concurrency.lockutils [req-c5af6e1a-6add-414b-b9f1-81a866b27def req-8510f3d8-161d-4065-b04e-6e6eb2619941 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.407 247708 DEBUG oslo_concurrency.lockutils [req-c5af6e1a-6add-414b-b9f1-81a866b27def req-8510f3d8-161d-4065-b04e-6e6eb2619941 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.407 247708 DEBUG oslo_concurrency.lockutils [req-c5af6e1a-6add-414b-b9f1-81a866b27def req-8510f3d8-161d-4065-b04e-6e6eb2619941 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.408 247708 DEBUG nova.compute.manager [req-c5af6e1a-6add-414b-b9f1-81a866b27def req-8510f3d8-161d-4065-b04e-6e6eb2619941 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Processing event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:19:53 compute-0 podman[343256]: 2026-01-31 08:19:53.324409607 +0000 UTC m=+0.023193390 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:19:53 compute-0 podman[343256]: 2026-01-31 08:19:53.582863075 +0000 UTC m=+0.281646838 container create cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.640 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.659 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847593.658785, c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.660 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] VM Started (Lifecycle Event)
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.662 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.666 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.671 247708 INFO nova.virt.libvirt.driver [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance spawned successfully.
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.672 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.698 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.704 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.705 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.706 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.706 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.707 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.707 247708 DEBUG nova.virt.libvirt.driver [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.714 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:19:53 compute-0 systemd[1]: Started libpod-conmon-cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82.scope.
Jan 31 08:19:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d603f24ecc2845666e511ab832e8a7e556730743a4896e41808b2a345cda8c15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.815 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.816 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847593.6591165, c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.816 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] VM Paused (Lifecycle Event)
Jan 31 08:19:53 compute-0 podman[343256]: 2026-01-31 08:19:53.849977774 +0000 UTC m=+0.548761537 container init cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.852 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:53 compute-0 podman[343256]: 2026-01-31 08:19:53.855838848 +0000 UTC m=+0.554622611 container start cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.856 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847593.66555, c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.856 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] VM Resumed (Lifecycle Event)
Jan 31 08:19:53 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[343314]: [NOTICE]   (343318) : New worker (343320) forked
Jan 31 08:19:53 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[343314]: [NOTICE]   (343318) : Loading success.
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.898 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.903 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.908 247708 INFO nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Took 17.82 seconds to spawn the instance on the hypervisor.
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.909 247708 DEBUG nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:19:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 208 op/s
Jan 31 08:19:53 compute-0 nova_compute[247704]: 2026-01-31 08:19:53.942 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.032 247708 INFO nova.compute.manager [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Took 19.71 seconds to build instance.
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.068 247708 DEBUG oslo_concurrency.lockutils [None req-34f543c2-60fd-49c9-8054-85d87212fc39 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.126 247708 DEBUG nova.compute.manager [req-d6e3491c-7bee-4a88-b410-bf685a902035 req-6fb41210-1f5d-4385-bea3-1466d1bc2e59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.126 247708 DEBUG nova.compute.manager [req-d6e3491c-7bee-4a88-b410-bf685a902035 req-6fb41210-1f5d-4385-bea3-1466d1bc2e59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing instance network info cache due to event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.127 247708 DEBUG oslo_concurrency.lockutils [req-d6e3491c-7bee-4a88-b410-bf685a902035 req-6fb41210-1f5d-4385-bea3-1466d1bc2e59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1275390790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:19:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1275390790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:19:54 compute-0 ceph-mon[74496]: pgmap v2563: 305 pgs: 305 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 208 op/s
Jan 31 08:19:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:54.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.910 247708 DEBUG nova.network.neutron [req-078cf897-414b-4bd2-8d20-61c1aacb0d22 req-6f15101b-abf1-4823-b328-9634d86cb398 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated VIF entry in instance network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.910 247708 DEBUG nova.network.neutron [req-078cf897-414b-4bd2-8d20-61c1aacb0d22 req-6f15101b-abf1-4823-b328-9634d86cb398 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.940 247708 DEBUG oslo_concurrency.lockutils [req-078cf897-414b-4bd2-8d20-61c1aacb0d22 req-6f15101b-abf1-4823-b328-9634d86cb398 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.941 247708 DEBUG oslo_concurrency.lockutils [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:54 compute-0 nova_compute[247704]: 2026-01-31 08:19:54.941 247708 DEBUG nova.network.neutron [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:19:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:55.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:55 compute-0 nova_compute[247704]: 2026-01-31 08:19:55.157 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:55 compute-0 nova_compute[247704]: 2026-01-31 08:19:55.908 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:19:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 5.5 MiB/s wr, 276 op/s
Jan 31 08:19:56 compute-0 nova_compute[247704]: 2026-01-31 08:19:56.095 247708 DEBUG nova.compute.manager [req-1d058d87-3ffe-4fe7-b576-dfdb851aeb58 req-c2d7e7ae-c456-41b9-ac0e-a762e409dc40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:19:56 compute-0 nova_compute[247704]: 2026-01-31 08:19:56.096 247708 DEBUG oslo_concurrency.lockutils [req-1d058d87-3ffe-4fe7-b576-dfdb851aeb58 req-c2d7e7ae-c456-41b9-ac0e-a762e409dc40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:56 compute-0 nova_compute[247704]: 2026-01-31 08:19:56.096 247708 DEBUG oslo_concurrency.lockutils [req-1d058d87-3ffe-4fe7-b576-dfdb851aeb58 req-c2d7e7ae-c456-41b9-ac0e-a762e409dc40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:56 compute-0 nova_compute[247704]: 2026-01-31 08:19:56.097 247708 DEBUG oslo_concurrency.lockutils [req-1d058d87-3ffe-4fe7-b576-dfdb851aeb58 req-c2d7e7ae-c456-41b9-ac0e-a762e409dc40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:56 compute-0 nova_compute[247704]: 2026-01-31 08:19:56.098 247708 DEBUG nova.compute.manager [req-1d058d87-3ffe-4fe7-b576-dfdb851aeb58 req-c2d7e7ae-c456-41b9-ac0e-a762e409dc40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] No waiting events found dispatching network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:19:56 compute-0 nova_compute[247704]: 2026-01-31 08:19:56.098 247708 WARNING nova.compute.manager [req-1d058d87-3ffe-4fe7-b576-dfdb851aeb58 req-c2d7e7ae-c456-41b9-ac0e-a762e409dc40 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received unexpected event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for instance with vm_state active and task_state None.
Jan 31 08:19:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:56.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:19:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Jan 31 08:19:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Jan 31 08:19:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Jan 31 08:19:57 compute-0 ceph-mon[74496]: pgmap v2564: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 5.5 MiB/s wr, 276 op/s
Jan 31 08:19:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:19:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:57.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.206 247708 INFO nova.compute.manager [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Rescuing
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.206 247708 DEBUG oslo_concurrency.lockutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.207 247708 DEBUG oslo_concurrency.lockutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquired lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.207 247708 DEBUG nova.network.neutron [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.849 247708 DEBUG nova.network.neutron [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated VIF entry in instance network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.850 247708 DEBUG nova.network.neutron [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.870 247708 DEBUG oslo_concurrency.lockutils [req-5eef096a-99b4-419e-aff9-8fbcb04f7ae0 req-50b2c7cf-0903-4f07-bf36-23c08fc94f9f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.871 247708 DEBUG oslo_concurrency.lockutils [req-d6e3491c-7bee-4a88-b410-bf685a902035 req-6fb41210-1f5d-4385-bea3-1466d1bc2e59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:19:57 compute-0 nova_compute[247704]: 2026-01-31 08:19:57.871 247708 DEBUG nova.network.neutron [req-d6e3491c-7bee-4a88-b410-bf685a902035 req-6fb41210-1f5d-4385-bea3-1466d1bc2e59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:19:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 4.6 MiB/s wr, 245 op/s
Jan 31 08:19:58 compute-0 ceph-mon[74496]: osdmap e335: 3 total, 3 up, 3 in
Jan 31 08:19:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:19:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:19:59 compute-0 ceph-mon[74496]: pgmap v2566: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 4.6 MiB/s wr, 245 op/s
Jan 31 08:19:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/592186997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:19:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:19:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:19:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:59.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:19:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.5 MiB/s wr, 189 op/s
Jan 31 08:20:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 08:20:00 compute-0 ceph-mon[74496]: pgmap v2567: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.5 MiB/s wr, 189 op/s
Jan 31 08:20:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 08:20:00 compute-0 nova_compute[247704]: 2026-01-31 08:20:00.161 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:00 compute-0 nova_compute[247704]: 2026-01-31 08:20:00.609 247708 DEBUG nova.network.neutron [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updating instance_info_cache with network_info: [{"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:00 compute-0 nova_compute[247704]: 2026-01-31 08:20:00.644 247708 DEBUG oslo_concurrency.lockutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Releasing lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:20:00 compute-0 nova_compute[247704]: 2026-01-31 08:20:00.743 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:00.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:00 compute-0 nova_compute[247704]: 2026-01-31 08:20:00.911 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:01.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:01 compute-0 nova_compute[247704]: 2026-01-31 08:20:01.405 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:20:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 336 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 606 KiB/s wr, 154 op/s
Jan 31 08:20:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:02 compute-0 nova_compute[247704]: 2026-01-31 08:20:02.123 247708 DEBUG nova.network.neutron [req-d6e3491c-7bee-4a88-b410-bf685a902035 req-6fb41210-1f5d-4385-bea3-1466d1bc2e59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated VIF entry in instance network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:20:02 compute-0 nova_compute[247704]: 2026-01-31 08:20:02.124 247708 DEBUG nova.network.neutron [req-d6e3491c-7bee-4a88-b410-bf685a902035 req-6fb41210-1f5d-4385-bea3-1466d1bc2e59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:02 compute-0 nova_compute[247704]: 2026-01-31 08:20:02.185 247708 DEBUG oslo_concurrency.lockutils [req-d6e3491c-7bee-4a88-b410-bf685a902035 req-6fb41210-1f5d-4385-bea3-1466d1bc2e59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:20:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:02.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:02 compute-0 nova_compute[247704]: 2026-01-31 08:20:02.952 247708 DEBUG nova.compute.manager [req-03ac25be-61e5-47df-b09e-b70a8a22482d req-a0e9b6ca-828f-4c1d-b300-34cc67307256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:02 compute-0 nova_compute[247704]: 2026-01-31 08:20:02.952 247708 DEBUG nova.compute.manager [req-03ac25be-61e5-47df-b09e-b70a8a22482d req-a0e9b6ca-828f-4c1d-b300-34cc67307256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing instance network info cache due to event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:20:02 compute-0 nova_compute[247704]: 2026-01-31 08:20:02.953 247708 DEBUG oslo_concurrency.lockutils [req-03ac25be-61e5-47df-b09e-b70a8a22482d req-a0e9b6ca-828f-4c1d-b300-34cc67307256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:20:02 compute-0 nova_compute[247704]: 2026-01-31 08:20:02.953 247708 DEBUG oslo_concurrency.lockutils [req-03ac25be-61e5-47df-b09e-b70a8a22482d req-a0e9b6ca-828f-4c1d-b300-34cc67307256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:20:02 compute-0 nova_compute[247704]: 2026-01-31 08:20:02.953 247708 DEBUG nova.network.neutron [req-03ac25be-61e5-47df-b09e-b70a8a22482d req-a0e9b6ca-828f-4c1d-b300-34cc67307256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:20:03 compute-0 ceph-mon[74496]: pgmap v2568: 305 pgs: 305 active+clean; 336 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 606 KiB/s wr, 154 op/s
Jan 31 08:20:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 327 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 161 op/s
Jan 31 08:20:03 compute-0 podman[343334]: 2026-01-31 08:20:03.968121981 +0000 UTC m=+0.138474847 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:20:04 compute-0 ceph-mon[74496]: pgmap v2569: 305 pgs: 305 active+clean; 327 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 161 op/s
Jan 31 08:20:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:04.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:05 compute-0 nova_compute[247704]: 2026-01-31 08:20:05.161 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:05.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Jan 31 08:20:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Jan 31 08:20:05 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Jan 31 08:20:05 compute-0 nova_compute[247704]: 2026-01-31 08:20:05.852 247708 DEBUG nova.network.neutron [req-03ac25be-61e5-47df-b09e-b70a8a22482d req-a0e9b6ca-828f-4c1d-b300-34cc67307256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated VIF entry in instance network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:20:05 compute-0 nova_compute[247704]: 2026-01-31 08:20:05.853 247708 DEBUG nova.network.neutron [req-03ac25be-61e5-47df-b09e-b70a8a22482d req-a0e9b6ca-828f-4c1d-b300-34cc67307256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:05 compute-0 nova_compute[247704]: 2026-01-31 08:20:05.901 247708 DEBUG oslo_concurrency.lockutils [req-03ac25be-61e5-47df-b09e-b70a8a22482d req-a0e9b6ca-828f-4c1d-b300-34cc67307256 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:20:05 compute-0 nova_compute[247704]: 2026-01-31 08:20:05.914 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.2 MiB/s wr, 164 op/s
Jan 31 08:20:06 compute-0 nova_compute[247704]: 2026-01-31 08:20:06.420 247708 DEBUG oslo_concurrency.lockutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:06 compute-0 nova_compute[247704]: 2026-01-31 08:20:06.421 247708 DEBUG oslo_concurrency.lockutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:06 compute-0 nova_compute[247704]: 2026-01-31 08:20:06.422 247708 INFO nova.compute.manager [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Rebooting instance
Jan 31 08:20:06 compute-0 nova_compute[247704]: 2026-01-31 08:20:06.463 247708 DEBUG oslo_concurrency.lockutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:20:06 compute-0 nova_compute[247704]: 2026-01-31 08:20:06.463 247708 DEBUG oslo_concurrency.lockutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:20:06 compute-0 nova_compute[247704]: 2026-01-31 08:20:06.463 247708 DEBUG nova.network.neutron [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:20:06 compute-0 ceph-mon[74496]: osdmap e336: 3 total, 3 up, 3 in
Jan 31 08:20:06 compute-0 ceph-mon[74496]: pgmap v2571: 305 pgs: 305 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.2 MiB/s wr, 164 op/s
Jan 31 08:20:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:06.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:07.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3739093903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:07 compute-0 sudo[343363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:07 compute-0 sudo[343363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:07 compute-0 sudo[343363]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:07 compute-0 sudo[343388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:07 compute-0 sudo[343388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:07 compute-0 sudo[343388]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 352 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.2 MiB/s wr, 152 op/s
Jan 31 08:20:08 compute-0 ceph-mon[74496]: pgmap v2572: 305 pgs: 305 active+clean; 352 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.2 MiB/s wr, 152 op/s
Jan 31 08:20:08 compute-0 ovn_controller[149457]: 2026-01-31T08:20:08Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:2d:b8 10.100.0.10
Jan 31 08:20:08 compute-0 ovn_controller[149457]: 2026-01-31T08:20:08Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:2d:b8 10.100.0.10
Jan 31 08:20:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:08.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:09 compute-0 nova_compute[247704]: 2026-01-31 08:20:09.731 247708 DEBUG nova.network.neutron [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:09 compute-0 nova_compute[247704]: 2026-01-31 08:20:09.776 247708 DEBUG oslo_concurrency.lockutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:20:09 compute-0 nova_compute[247704]: 2026-01-31 08:20:09.779 247708 DEBUG nova.compute.manager [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 384 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 751 KiB/s rd, 5.5 MiB/s wr, 181 op/s
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3945026701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:10 compute-0 kernel: tap3d5b4f77-16 (unregistering): left promiscuous mode
Jan 31 08:20:10 compute-0 NetworkManager[49108]: <info>  [1769847610.5061] device (tap3d5b4f77-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.513 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:10 compute-0 ovn_controller[149457]: 2026-01-31T08:20:10Z|00606|binding|INFO|Releasing lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 from this chassis (sb_readonly=0)
Jan 31 08:20:10 compute-0 ovn_controller[149457]: 2026-01-31T08:20:10Z|00607|binding|INFO|Setting lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 down in Southbound
Jan 31 08:20:10 compute-0 ovn_controller[149457]: 2026-01-31T08:20:10Z|00608|binding|INFO|Removing iface tap3d5b4f77-16 ovn-installed in OVS
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.516 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.523 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:10.535 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:f1:46 10.100.0.14'], port_security=['fa:16:3e:93:f1:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2050d0a7-d773-4467-886b-af07b4efa0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134c066ac92844ff853b216870fa8eed', 'neutron:revision_number': '5', 'neutron:security_group_ids': '967ea74e-50db-4569-92ae-9b918e86440d e9af8e5e-499e-47f8-b75b-91273617ee9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bd5646e-2523-4ee9-a162-795050792e9d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3d5b4f77-1672-4b27-83d1-741ef4fda685) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:20:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:10.537 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3d5b4f77-1672-4b27-83d1-741ef4fda685 in datapath b8453b6a-05bd-4d59-86e9-a509416a9ef0 unbound from our chassis
Jan 31 08:20:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:10.539 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8453b6a-05bd-4d59-86e9-a509416a9ef0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:20:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:10.541 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbeef97-f5a7-4324-a934-4173e39f3cfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:10.542 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 namespace which is not needed anymore
Jan 31 08:20:10 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Jan 31 08:20:10 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008d.scope: Consumed 16.179s CPU time.
Jan 31 08:20:10 compute-0 systemd-machined[214448]: Machine qemu-62-instance-0000008d terminated.
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:10.710421) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610710526, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 442, "num_deletes": 252, "total_data_size": 374481, "memory_usage": 384248, "flush_reason": "Manual Compaction"}
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.714 247708 INFO nova.virt.libvirt.driver [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Instance destroyed successfully.
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.715 247708 DEBUG nova.objects.instance [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'resources' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.736 247708 DEBUG nova.virt.libvirt.vif [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1587986582',display_name='tempest-TestMinimumBasicScenario-server-1587986582',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1587986582',id=141,image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHmcExNwe2fixW74P9gMs2+p3Ai01ht0KWMZ+YQVrB+lWzAPhuDeCsQup7t70K9V4kEA8Q6TjuiL3oYataTcIeP973UypS9on/cMTGNZfWc+a/I9G+q9wlRQBnSo+WIwWQ==',key_name='tempest-TestMinimumBasicScenario-589136251',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-gqbdxfhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:09Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=2050d0a7-d773-4467-886b-af07b4efa0d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.736 247708 DEBUG nova.network.os_vif_util [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.737 247708 DEBUG nova.network.os_vif_util [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.737 247708 DEBUG os_vif [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.739 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.739 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d5b4f77-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.741 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.742 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.743 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.745 247708 INFO os_vif [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16')
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.753 247708 DEBUG nova.virt.libvirt.driver [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Start _get_guest_xml network_info=[{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f053e722-a943-4ece-ad77-a071fe4c9f59,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': 'f053e722-a943-4ece-ad77-a071fe4c9f59'}], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '427a8d76-874f-48e6-8b77-2a24c293c220', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-124bfb1e-b91a-4179-9a6b-46ee143687ea', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '124bfb1e-b91a-4179-9a6b-46ee143687ea', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2050d0a7-d773-4467-886b-af07b4efa0d8', 'attached_at': '', 'detached_at': '', 'volume_id': '124bfb1e-b91a-4179-9a6b-46ee143687ea', 'serial': '124bfb1e-b91a-4179-9a6b-46ee143687ea'}, 'mount_device': '/dev/vdb', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.757 247708 WARNING nova.virt.libvirt.driver [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.765 247708 DEBUG nova.virt.libvirt.host [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.766 247708 DEBUG nova.virt.libvirt.host [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.771 247708 DEBUG nova.virt.libvirt.host [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.771 247708 DEBUG nova.virt.libvirt.host [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.772 247708 DEBUG nova.virt.libvirt.driver [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.772 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=f053e722-a943-4ece-ad77-a071fe4c9f59,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.773 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.773 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.773 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.773 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.774 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.774 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.774 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.774 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.774 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.774 247708 DEBUG nova.virt.hardware [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.775 247708 DEBUG nova.objects.instance [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:10 compute-0 nova_compute[247704]: 2026-01-31 08:20:10.790 247708 DEBUG oslo_concurrency.processutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:10 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[341634]: [NOTICE]   (341638) : haproxy version is 2.8.14-c23fe91
Jan 31 08:20:10 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[341634]: [NOTICE]   (341638) : path to executable is /usr/sbin/haproxy
Jan 31 08:20:10 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[341634]: [WARNING]  (341638) : Exiting Master process...
Jan 31 08:20:10 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[341634]: [WARNING]  (341638) : Exiting Master process...
Jan 31 08:20:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:10 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[341634]: [ALERT]    (341638) : Current worker (341640) exited with code 143 (Terminated)
Jan 31 08:20:10 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[341634]: [WARNING]  (341638) : All workers exited. Exiting... (0)
Jan 31 08:20:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:20:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:10.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:20:10 compute-0 systemd[1]: libpod-152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf.scope: Deactivated successfully.
Jan 31 08:20:10 compute-0 podman[343438]: 2026-01-31 08:20:10.824580727 +0000 UTC m=+0.192716037 container died 152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610905261, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 370738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55910, "largest_seqno": 56351, "table_properties": {"data_size": 368140, "index_size": 634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6442, "raw_average_key_size": 19, "raw_value_size": 362903, "raw_average_value_size": 1083, "num_data_blocks": 28, "num_entries": 335, "num_filter_entries": 335, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847592, "oldest_key_time": 1769847592, "file_creation_time": 1769847610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 194928 microseconds, and 2477 cpu microseconds.
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:10.905357) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 370738 bytes OK
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:10.905396) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:10.911045) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:10.911125) EVENT_LOG_v1 {"time_micros": 1769847610911067, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:10.911159) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 371795, prev total WAL file size 372076, number of live WAL files 2.
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:10.911908) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(362KB)], [122(13MB)]
Jan 31 08:20:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847610911965, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 14400522, "oldest_snapshot_seqno": -1}
Jan 31 08:20:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf-userdata-shm.mount: Deactivated successfully.
Jan 31 08:20:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a6c1780a3fac206db0383bd7b0f9333e7b764cf25de7ba542260aea258bad6-merged.mount: Deactivated successfully.
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 8462 keys, 12499653 bytes, temperature: kUnknown
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847611122242, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 12499653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12442675, "index_size": 34722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 220565, "raw_average_key_size": 26, "raw_value_size": 12291674, "raw_average_value_size": 1452, "num_data_blocks": 1359, "num_entries": 8462, "num_filter_entries": 8462, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:11.122628) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12499653 bytes
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:11.127122) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.4 rd, 59.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.4 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(72.6) write-amplify(33.7) OK, records in: 8979, records dropped: 517 output_compression: NoCompression
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:11.127153) EVENT_LOG_v1 {"time_micros": 1769847611127138, "job": 74, "event": "compaction_finished", "compaction_time_micros": 210397, "compaction_time_cpu_micros": 45032, "output_level": 6, "num_output_files": 1, "total_output_size": 12499653, "num_input_records": 8979, "num_output_records": 8462, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847611127375, "job": 74, "event": "table_file_deletion", "file_number": 124}
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847611129450, "job": 74, "event": "table_file_deletion", "file_number": 122}
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:10.911782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:11.129574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:11.129584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:11.129587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:11.129590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:20:11.129593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:11.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.192 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.192 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.193 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.245 247708 DEBUG nova.compute.manager [req-10f047d5-73a8-4602-9bc8-84b4a20cdd6b req-e1d9b233-71c4-4bee-87fd-1e01fc617d81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-unplugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.245 247708 DEBUG oslo_concurrency.lockutils [req-10f047d5-73a8-4602-9bc8-84b4a20cdd6b req-e1d9b233-71c4-4bee-87fd-1e01fc617d81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.246 247708 DEBUG oslo_concurrency.lockutils [req-10f047d5-73a8-4602-9bc8-84b4a20cdd6b req-e1d9b233-71c4-4bee-87fd-1e01fc617d81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.246 247708 DEBUG oslo_concurrency.lockutils [req-10f047d5-73a8-4602-9bc8-84b4a20cdd6b req-e1d9b233-71c4-4bee-87fd-1e01fc617d81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.246 247708 DEBUG nova.compute.manager [req-10f047d5-73a8-4602-9bc8-84b4a20cdd6b req-e1d9b233-71c4-4bee-87fd-1e01fc617d81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] No waiting events found dispatching network-vif-unplugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.247 247708 WARNING nova.compute.manager [req-10f047d5-73a8-4602-9bc8-84b4a20cdd6b req-e1d9b233-71c4-4bee-87fd-1e01fc617d81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received unexpected event network-vif-unplugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 for instance with vm_state active and task_state reboot_started_hard.
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.289 247708 DEBUG oslo_concurrency.processutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:11 compute-0 podman[343438]: 2026-01-31 08:20:11.311870086 +0000 UTC m=+0.680005366 container cleanup 152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:20:11 compute-0 systemd[1]: libpod-conmon-152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf.scope: Deactivated successfully.
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.334 247708 DEBUG oslo_concurrency.processutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.456 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:20:11 compute-0 ceph-mon[74496]: pgmap v2573: 305 pgs: 305 active+clean; 384 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 751 KiB/s rd, 5.5 MiB/s wr, 181 op/s
Jan 31 08:20:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1956623074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:20:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604866068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.773 247708 DEBUG oslo_concurrency.processutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.847 247708 DEBUG nova.virt.libvirt.vif [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1587986582',display_name='tempest-TestMinimumBasicScenario-server-1587986582',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1587986582',id=141,image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHmcExNwe2fixW74P9gMs2+p3Ai01ht0KWMZ+YQVrB+lWzAPhuDeCsQup7t70K9V4kEA8Q6TjuiL3oYataTcIeP973UypS9on/cMTGNZfWc+a/I9G+q9wlRQBnSo+WIwWQ==',key_name='tempest-TestMinimumBasicScenario-589136251',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-gqbdxfhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:09Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=2050d0a7-d773-4467-886b-af07b4efa0d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.848 247708 DEBUG nova.network.os_vif_util [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.849 247708 DEBUG nova.network.os_vif_util [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.850 247708 DEBUG nova.objects.instance [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'pci_devices' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:11 compute-0 podman[343519]: 2026-01-31 08:20:11.856411589 +0000 UTC m=+0.527638809 container remove 152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.861 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a7096af4-fcfe-43ba-ac4d-69f60b856006]: (4, ('Sat Jan 31 08:20:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 (152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf)\n152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf\nSat Jan 31 08:20:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 (152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf)\n152810f3649275cd25c4d8c91023eb3eb66f24f52680cbecf2a91a15317e0dcf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.863 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[92b35ae4-170b-4919-b933-1d3430b9fb03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.865 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8453b6a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.867 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:11 compute-0 kernel: tapb8453b6a-00: left promiscuous mode
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.874 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.877 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[95aa6c4f-dfc9-420d-896c-438ea232eec6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.891 247708 DEBUG nova.virt.libvirt.driver [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <uuid>2050d0a7-d773-4467-886b-af07b4efa0d8</uuid>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <name>instance-0000008d</name>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <nova:name>tempest-TestMinimumBasicScenario-server-1587986582</nova:name>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:20:10</nova:creationTime>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <nova:user uuid="7f0be9090fdf49d2ac15246a0a820d3f">tempest-TestMinimumBasicScenario-975831205-project-member</nova:user>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <nova:project uuid="134c066ac92844ff853b216870fa8eed">tempest-TestMinimumBasicScenario-975831205</nova:project>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="f053e722-a943-4ece-ad77-a071fe4c9f59"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <nova:port uuid="3d5b4f77-1672-4b27-83d1-741ef4fda685">
Jan 31 08:20:11 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <system>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <entry name="serial">2050d0a7-d773-4467-886b-af07b4efa0d8</entry>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <entry name="uuid">2050d0a7-d773-4467-886b-af07b4efa0d8</entry>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </system>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <os>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   </os>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <features>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   </features>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/2050d0a7-d773-4467-886b-af07b4efa0d8_disk">
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </source>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/2050d0a7-d773-4467-886b-af07b4efa0d8_disk.config">
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </source>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-124bfb1e-b91a-4179-9a6b-46ee143687ea">
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </source>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:20:11 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <serial>124bfb1e-b91a-4179-9a6b-46ee143687ea</serial>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:93:f1:46"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <target dev="tap3d5b4f77-16"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8/console.log" append="off"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <video>
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </video>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <input type="keyboard" bus="usb"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:20:11 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:20:11 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:20:11 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:20:11 compute-0 nova_compute[247704]: </domain>
Jan 31 08:20:11 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.893 247708 DEBUG nova.virt.libvirt.driver [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.893 247708 DEBUG nova.virt.libvirt.driver [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.893 247708 DEBUG nova.virt.libvirt.driver [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.895 247708 DEBUG nova.virt.libvirt.vif [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1587986582',display_name='tempest-TestMinimumBasicScenario-server-1587986582',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1587986582',id=141,image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHmcExNwe2fixW74P9gMs2+p3Ai01ht0KWMZ+YQVrB+lWzAPhuDeCsQup7t70K9V4kEA8Q6TjuiL3oYataTcIeP973UypS9on/cMTGNZfWc+a/I9G+q9wlRQBnSo+WIwWQ==',key_name='tempest-TestMinimumBasicScenario-589136251',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-gqbdxfhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:09Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=2050d0a7-d773-4467-886b-af07b4efa0d8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.895 247708 DEBUG nova.network.os_vif_util [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.895 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[732c88d5-88e5-4c85-ad10-2021bee96f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.897 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[80a3d2eb-d32e-48e3-9306-fb347e06a5d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.897 247708 DEBUG nova.network.os_vif_util [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.899 247708 DEBUG os_vif [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.900 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.901 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.901 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.903 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.904 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d5b4f77-16, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.904 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d5b4f77-16, col_values=(('external_ids', {'iface-id': '3d5b4f77-1672-4b27-83d1-741ef4fda685', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:f1:46', 'vm-uuid': '2050d0a7-d773-4467-886b-af07b4efa0d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.906 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:11 compute-0 NetworkManager[49108]: <info>  [1769847611.9077] manager: (tap3d5b4f77-16): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.908 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.910 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2c51a4-e47f-4129-8600-2d96f906d7a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780396, 'reachable_time': 25121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343559, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.912 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:11 compute-0 systemd[1]: run-netns-ovnmeta\x2db8453b6a\x2d05bd\x2d4d59\x2d86e9\x2da509416a9ef0.mount: Deactivated successfully.
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.913 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:11.913 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[8119055d-6a55-49b9-a527-c911c7b2a8f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:11 compute-0 nova_compute[247704]: 2026-01-31 08:20:11.914 247708 INFO os_vif [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16')
Jan 31 08:20:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 406 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 765 KiB/s rd, 5.9 MiB/s wr, 165 op/s
Jan 31 08:20:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Jan 31 08:20:12 compute-0 kernel: tap3d5b4f77-16: entered promiscuous mode
Jan 31 08:20:12 compute-0 NetworkManager[49108]: <info>  [1769847612.0365] manager: (tap3d5b4f77-16): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Jan 31 08:20:12 compute-0 systemd-udevd[343416]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.038 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:12 compute-0 ovn_controller[149457]: 2026-01-31T08:20:12Z|00609|binding|INFO|Claiming lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 for this chassis.
Jan 31 08:20:12 compute-0 ovn_controller[149457]: 2026-01-31T08:20:12Z|00610|binding|INFO|3d5b4f77-1672-4b27-83d1-741ef4fda685: Claiming fa:16:3e:93:f1:46 10.100.0.14
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.050 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:f1:46 10.100.0.14'], port_security=['fa:16:3e:93:f1:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2050d0a7-d773-4467-886b-af07b4efa0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134c066ac92844ff853b216870fa8eed', 'neutron:revision_number': '6', 'neutron:security_group_ids': '967ea74e-50db-4569-92ae-9b918e86440d e9af8e5e-499e-47f8-b75b-91273617ee9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bd5646e-2523-4ee9-a162-795050792e9d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3d5b4f77-1672-4b27-83d1-741ef4fda685) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.052 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3d5b4f77-1672-4b27-83d1-741ef4fda685 in datapath b8453b6a-05bd-4d59-86e9-a509416a9ef0 bound to our chassis
Jan 31 08:20:12 compute-0 ovn_controller[149457]: 2026-01-31T08:20:12Z|00611|binding|INFO|Setting lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 ovn-installed in OVS
Jan 31 08:20:12 compute-0 ovn_controller[149457]: 2026-01-31T08:20:12Z|00612|binding|INFO|Setting lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 up in Southbound
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.057 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:20:12 compute-0 NetworkManager[49108]: <info>  [1769847612.0590] device (tap3d5b4f77-16): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.059 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:12 compute-0 NetworkManager[49108]: <info>  [1769847612.0623] device (tap3d5b4f77-16): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:20:12 compute-0 systemd-machined[214448]: New machine qemu-64-instance-0000008d.
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.077 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2f678b-a08c-4212-803c-39510dabcc38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.078 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8453b6a-01 in ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.080 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8453b6a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.081 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1db39d1f-5005-44ad-808f-f02ed9f4fff4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.081 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[46670d6a-2391-44d1-a257-c53d44e4f0bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-0000008d.
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.094 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[95d82e1b-c5d4-4463-a554-7094e3fbeb39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.118 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[05fab8aa-a05b-46b8-8e99-fac9834ef11b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.156 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[41205bc8-b84b-46b2-bf07-3fa8e67bb256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 NetworkManager[49108]: <info>  [1769847612.1662] manager: (tapb8453b6a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/277)
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.164 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9fcdcd-7ec1-4cab-b94c-7ad4ce8c03b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.209 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[eb59da52-05f4-4d8b-ad5c-7579e5e895a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.216 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[95a54893-bcf6-450c-a71f-1f04ac5cccfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 NetworkManager[49108]: <info>  [1769847612.2457] device (tapb8453b6a-00): carrier: link connected
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.252 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d04756-b1fd-4591-8d4a-f8647971bef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.275 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9f28b01d-b8b8-4c06-adc7-ddcb79bbac65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8453b6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:cc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785486, 'reachable_time': 20069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343604, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.294 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[60ac4626-df11-4ddb-9891-71044ec7b66d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:cc04'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785486, 'tstamp': 785486}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343605, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.320 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[067a3f7c-2855-452b-9d86-953af4f47d87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8453b6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:cc:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785486, 'reachable_time': 20069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343606, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.390 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a36408-82b3-4915-afe8-4d9e15daa8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.471 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d33c1d-2e54-4a23-aad9-3cae38924866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.473 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8453b6a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.473 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.474 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8453b6a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.481 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:12 compute-0 kernel: tapb8453b6a-00: entered promiscuous mode
Jan 31 08:20:12 compute-0 NetworkManager[49108]: <info>  [1769847612.4823] manager: (tapb8453b6a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.485 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.487 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8453b6a-00, col_values=(('external_ids', {'iface-id': 'eb4259dc-1b35-4b46-af47-bdd24739342f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.490 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:12 compute-0 ovn_controller[149457]: 2026-01-31T08:20:12Z|00613|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.491 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.493 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.494 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e74777a1-85ef-4672-849d-d1c06872e0c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.495 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/b8453b6a-05bd-4d59-86e9-a509416a9ef0.pid.haproxy
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID b8453b6a-05bd-4d59-86e9-a509416a9ef0
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:20:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:12.496 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'env', 'PROCESS_TAG=haproxy-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8453b6a-05bd-4d59-86e9-a509416a9ef0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.502 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2604866068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:12 compute-0 ceph-mon[74496]: pgmap v2574: 305 pgs: 305 active+clean; 406 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 765 KiB/s rd, 5.9 MiB/s wr, 165 op/s
Jan 31 08:20:12 compute-0 ceph-mon[74496]: osdmap e337: 3 total, 3 up, 3 in
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.715 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 2050d0a7-d773-4467-886b-af07b4efa0d8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.715 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847612.714345, 2050d0a7-d773-4467-886b-af07b4efa0d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.716 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] VM Resumed (Lifecycle Event)
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.720 247708 DEBUG nova.compute.manager [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.730 247708 INFO nova.virt.libvirt.driver [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Instance rebooted successfully.
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.730 247708 DEBUG nova.compute.manager [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.776 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.782 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:20:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.818 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.819 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847612.7192502, 2050d0a7-d773-4467-886b-af07b4efa0d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.819 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] VM Started (Lifecycle Event)
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.830 247708 DEBUG oslo_concurrency.lockutils [None req-366814b0-80a1-4d86-ba2d-868321a765ca 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.854 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:12 compute-0 nova_compute[247704]: 2026-01-31 08:20:12.858 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:20:12 compute-0 podman[343698]: 2026-01-31 08:20:12.890682871 +0000 UTC m=+0.041055679 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:20:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:13.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:13 compute-0 podman[343698]: 2026-01-31 08:20:13.190068661 +0000 UTC m=+0.340441479 container create 9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:20:13 compute-0 systemd[1]: Started libpod-conmon-9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670.scope.
Jan 31 08:20:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1d3b83906763f8516eb285f9fd8f6fd486b6c13d8805d9b8737d243570e1ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.392 247708 DEBUG nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.393 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.394 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.394 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.395 247708 DEBUG nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] No waiting events found dispatching network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.395 247708 WARNING nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received unexpected event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 for instance with vm_state active and task_state None.
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.395 247708 DEBUG nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.396 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.396 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.397 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.397 247708 DEBUG nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] No waiting events found dispatching network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.398 247708 WARNING nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received unexpected event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 for instance with vm_state active and task_state None.
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.398 247708 DEBUG nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.398 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.399 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.399 247708 DEBUG oslo_concurrency.lockutils [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.400 247708 DEBUG nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] No waiting events found dispatching network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:13 compute-0 nova_compute[247704]: 2026-01-31 08:20:13.400 247708 WARNING nova.compute.manager [req-6b88cf65-c79b-4ce3-b827-6d60ae8a3862 req-f6d0e3e0-9cb8-439b-8b41-bd3bd1fa6b27 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received unexpected event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 for instance with vm_state active and task_state None.
Jan 31 08:20:13 compute-0 podman[343698]: 2026-01-31 08:20:13.535204145 +0000 UTC m=+0.685577013 container init 9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:20:13 compute-0 podman[343698]: 2026-01-31 08:20:13.549198298 +0000 UTC m=+0.699571106 container start 9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:20:13 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[343713]: [NOTICE]   (343718) : New worker (343720) forked
Jan 31 08:20:13 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[343713]: [NOTICE]   (343718) : Loading success.
Jan 31 08:20:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 5.4 MiB/s wr, 143 op/s
Jan 31 08:20:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1582425122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.473 247708 INFO nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance shutdown successfully after 13 seconds.
Jan 31 08:20:14 compute-0 kernel: tap912fc9c9-ca (unregistering): left promiscuous mode
Jan 31 08:20:14 compute-0 NetworkManager[49108]: <info>  [1769847614.4888] device (tap912fc9c9-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:20:14 compute-0 ovn_controller[149457]: 2026-01-31T08:20:14Z|00614|binding|INFO|Releasing lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 from this chassis (sb_readonly=0)
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.503 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:14 compute-0 ovn_controller[149457]: 2026-01-31T08:20:14Z|00615|binding|INFO|Setting lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 down in Southbound
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.507 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:14 compute-0 ovn_controller[149457]: 2026-01-31T08:20:14Z|00616|binding|INFO|Removing iface tap912fc9c9-ca ovn-installed in OVS
Jan 31 08:20:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:14.513 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:2d:b8 10.100.0.10'], port_security=['fa:16:3e:a5:2d:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c725d43e-b5fe-4a94-ad44-6df85e3c0fa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=912fc9c9-cae4-4bd8-901c-7bb8a63759a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:20:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:14.515 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis
Jan 31 08:20:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:14.519 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e03fc320-c87d-42d2-a772-ec94aeb05209, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:20:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:14.521 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9066ba-bc3d-43c5-bd99-c3ec6c636005]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:14.521 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace which is not needed anymore
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.522 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:14 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000090.scope: Deactivated successfully.
Jan 31 08:20:14 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000090.scope: Consumed 15.130s CPU time.
Jan 31 08:20:14 compute-0 systemd-machined[214448]: Machine qemu-63-instance-00000090 terminated.
Jan 31 08:20:14 compute-0 systemd[1]: libpod-cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82.scope: Deactivated successfully.
Jan 31 08:20:14 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[343314]: [NOTICE]   (343318) : haproxy version is 2.8.14-c23fe91
Jan 31 08:20:14 compute-0 conmon[343314]: conmon cdb570dc3db103cb0b7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82.scope/container/memory.events
Jan 31 08:20:14 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[343314]: [NOTICE]   (343318) : path to executable is /usr/sbin/haproxy
Jan 31 08:20:14 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[343314]: [WARNING]  (343318) : Exiting Master process...
Jan 31 08:20:14 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[343314]: [ALERT]    (343318) : Current worker (343320) exited with code 143 (Terminated)
Jan 31 08:20:14 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[343314]: [WARNING]  (343318) : All workers exited. Exiting... (0)
Jan 31 08:20:14 compute-0 podman[343751]: 2026-01-31 08:20:14.679100774 +0000 UTC m=+0.070281894 container died cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.709 247708 INFO nova.virt.libvirt.driver [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance destroyed successfully.
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.710 247708 DEBUG nova.objects.instance [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'numa_topology' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.738 247708 INFO nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Attempting rescue
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.740 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.749 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.750 247708 INFO nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Creating image(s)
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.782 247708 DEBUG nova.storage.rbd_utils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.791 247708 DEBUG nova.objects.instance [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:14.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.846 247708 DEBUG nova.storage.rbd_utils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82-userdata-shm.mount: Deactivated successfully.
Jan 31 08:20:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d603f24ecc2845666e511ab832e8a7e556730743a4896e41808b2a345cda8c15-merged.mount: Deactivated successfully.
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.880 247708 DEBUG nova.storage.rbd_utils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.885 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.948 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.949 247708 DEBUG oslo_concurrency.lockutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.950 247708 DEBUG oslo_concurrency.lockutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.950 247708 DEBUG oslo_concurrency.lockutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.976 247708 DEBUG nova.storage.rbd_utils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:14 compute-0 nova_compute[247704]: 2026-01-31 08:20:14.987 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.056 247708 DEBUG nova.compute.manager [req-0fe0d534-c17c-40a0-bb50-7fe704a72cc8 req-3e2278f4-7c22-40b3-b5da-139d93c9f496 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-unplugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.056 247708 DEBUG oslo_concurrency.lockutils [req-0fe0d534-c17c-40a0-bb50-7fe704a72cc8 req-3e2278f4-7c22-40b3-b5da-139d93c9f496 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.056 247708 DEBUG oslo_concurrency.lockutils [req-0fe0d534-c17c-40a0-bb50-7fe704a72cc8 req-3e2278f4-7c22-40b3-b5da-139d93c9f496 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.057 247708 DEBUG oslo_concurrency.lockutils [req-0fe0d534-c17c-40a0-bb50-7fe704a72cc8 req-3e2278f4-7c22-40b3-b5da-139d93c9f496 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.057 247708 DEBUG nova.compute.manager [req-0fe0d534-c17c-40a0-bb50-7fe704a72cc8 req-3e2278f4-7c22-40b3-b5da-139d93c9f496 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] No waiting events found dispatching network-vif-unplugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.057 247708 WARNING nova.compute.manager [req-0fe0d534-c17c-40a0-bb50-7fe704a72cc8 req-3e2278f4-7c22-40b3-b5da-139d93c9f496 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received unexpected event network-vif-unplugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for instance with vm_state active and task_state rescuing.
Jan 31 08:20:15 compute-0 podman[343751]: 2026-01-31 08:20:15.071055095 +0000 UTC m=+0.462236215 container cleanup cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:20:15 compute-0 systemd[1]: libpod-conmon-cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82.scope: Deactivated successfully.
Jan 31 08:20:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:20:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:15.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.204 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:15 compute-0 ceph-mon[74496]: pgmap v2576: 305 pgs: 305 active+clean; 419 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 5.4 MiB/s wr, 143 op/s
Jan 31 08:20:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2724465647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:15 compute-0 podman[343883]: 2026-01-31 08:20:15.847496454 +0000 UTC m=+0.746065605 container remove cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.855 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0bec2de2-3f45-4af3-a059-bcff76c02038]: (4, ('Sat Jan 31 08:20:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82)\ncdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82\nSat Jan 31 08:20:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (cdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82)\ncdb570dc3db103cb0b7adebc86796808007fc2a084335b19c4b3387fe01dab82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.858 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7937117e-d9f9-4ff5-83ac-6ce39144bc57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.859 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.863 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:15 compute-0 kernel: tape03fc320-c0: left promiscuous mode
Jan 31 08:20:15 compute-0 nova_compute[247704]: 2026-01-31 08:20:15.880 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.885 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9beab7-bcb5-43a3-83d4-97c03278b1e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.907 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[87a3790a-3916-4451-8912-d7f9b3242593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.909 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[be9c3881-49d7-4007-8aff-1accb31bcfdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.923 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3d9579-14f1-46fe-ba9c-c438c1fafe5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 783533, 'reachable_time': 37383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343907, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:15 compute-0 systemd[1]: run-netns-ovnmeta\x2de03fc320\x2dc87d\x2d42d2\x2da772\x2dec94aeb05209.mount: Deactivated successfully.
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.930 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:20:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:15.930 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[6f77e3a0-c872-4be4-8e9b-a853712bdddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 MiB/s wr, 175 op/s
Jan 31 08:20:16 compute-0 ceph-mon[74496]: pgmap v2577: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.5 MiB/s wr, 175 op/s
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.325 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.326 247708 DEBUG nova.objects.instance [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'migration_context' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.568 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.569 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Start _get_guest_xml network_info=[{"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:a5:2d:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7c23949f-bba8-4466-bb79-caf568852d38', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.570 247708 DEBUG nova.objects.instance [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'resources' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.778 247708 WARNING nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.793 247708 DEBUG nova.virt.libvirt.host [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.794 247708 DEBUG nova.virt.libvirt.host [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.805 247708 DEBUG nova.virt.libvirt.host [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.807 247708 DEBUG nova.virt.libvirt.host [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.809 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.810 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.811 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.812 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.812 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.813 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.813 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.814 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.815 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.815 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.815 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.815 247708 DEBUG nova.virt.hardware [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.817 247708 DEBUG nova.objects.instance [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:16.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.865 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:16 compute-0 nova_compute[247704]: 2026-01-31 08:20:16.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:17.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.195 247708 DEBUG nova.compute.manager [req-79799bf0-7c13-4379-a0b2-d1b849a834b7 req-c0c5a682-e7a3-433d-8818-32bd0f7e6ec6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.196 247708 DEBUG oslo_concurrency.lockutils [req-79799bf0-7c13-4379-a0b2-d1b849a834b7 req-c0c5a682-e7a3-433d-8818-32bd0f7e6ec6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.196 247708 DEBUG oslo_concurrency.lockutils [req-79799bf0-7c13-4379-a0b2-d1b849a834b7 req-c0c5a682-e7a3-433d-8818-32bd0f7e6ec6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.197 247708 DEBUG oslo_concurrency.lockutils [req-79799bf0-7c13-4379-a0b2-d1b849a834b7 req-c0c5a682-e7a3-433d-8818-32bd0f7e6ec6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.197 247708 DEBUG nova.compute.manager [req-79799bf0-7c13-4379-a0b2-d1b849a834b7 req-c0c5a682-e7a3-433d-8818-32bd0f7e6ec6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] No waiting events found dispatching network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.197 247708 WARNING nova.compute.manager [req-79799bf0-7c13-4379-a0b2-d1b849a834b7 req-c0c5a682-e7a3-433d-8818-32bd0f7e6ec6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received unexpected event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for instance with vm_state active and task_state rescuing.
Jan 31 08:20:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:20:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140923325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.372 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.374 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/140923325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:20:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/895517308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.885 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:17 compute-0 nova_compute[247704]: 2026-01-31 08:20:17.887 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:17 compute-0 podman[343951]: 2026-01-31 08:20:17.890776618 +0000 UTC m=+0.063493889 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 08:20:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 456 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 212 op/s
Jan 31 08:20:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:20:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1156638752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.357 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.360 247708 DEBUG nova.virt.libvirt.vif [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-28125457',display_name='tempest-ServerRescueNegativeTestJSON-server-28125457',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-28125457',id=144,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-k7f0vyw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:53Z,user_data=None,user_id='6788b0883cb348719d1222b1c9483be2',uuid=c725d43e-b5fe-4a94-ad44-6df85e3c0fa0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:a5:2d:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.360 247708 DEBUG nova.network.os_vif_util [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:a5:2d:b8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.362 247708 DEBUG nova.network.os_vif_util [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2d:b8,bridge_name='br-int',has_traffic_filtering=True,id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912fc9c9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.363 247708 DEBUG nova.objects.instance [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.395 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <uuid>c725d43e-b5fe-4a94-ad44-6df85e3c0fa0</uuid>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <name>instance-00000090</name>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-28125457</nova:name>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:20:16</nova:creationTime>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <nova:user uuid="6788b0883cb348719d1222b1c9483be2">tempest-ServerRescueNegativeTestJSON-1784809431-project-member</nova:user>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <nova:project uuid="4849ff916e1b4e2aa162faaf2c0717a2">tempest-ServerRescueNegativeTestJSON-1784809431</nova:project>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <nova:port uuid="912fc9c9-cae4-4bd8-901c-7bb8a63759a4">
Jan 31 08:20:18 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <system>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <entry name="serial">c725d43e-b5fe-4a94-ad44-6df85e3c0fa0</entry>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <entry name="uuid">c725d43e-b5fe-4a94-ad44-6df85e3c0fa0</entry>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </system>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <os>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   </os>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <features>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   </features>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.rescue">
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </source>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk">
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </source>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config.rescue">
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </source>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:20:18 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:a5:2d:b8"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <target dev="tap912fc9c9-ca"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/console.log" append="off"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <video>
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </video>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:20:18 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:20:18 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:20:18 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:20:18 compute-0 nova_compute[247704]: </domain>
Jan 31 08:20:18 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.408 247708 INFO nova.virt.libvirt.driver [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance destroyed successfully.
Jan 31 08:20:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1307699288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/895517308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:18 compute-0 ceph-mon[74496]: pgmap v2578: 305 pgs: 305 active+clean; 456 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 212 op/s
Jan 31 08:20:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2106032858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1156638752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.505 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.506 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.506 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.507 247708 DEBUG nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No VIF found with MAC fa:16:3e:a5:2d:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.507 247708 INFO nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Using config drive
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.540 247708 DEBUG nova.storage.rbd_utils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.581 247708 DEBUG nova.objects.instance [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:18 compute-0 nova_compute[247704]: 2026-01-31 08:20:18.625 247708 DEBUG nova.objects.instance [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'keypairs' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:19.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.194 247708 INFO nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Creating config drive at /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config.rescue
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.200 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptyuq0ufi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.342 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptyuq0ufi" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.394 247708 DEBUG nova.storage.rbd_utils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.399 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config.rescue c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.641 247708 DEBUG oslo_concurrency.processutils [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config.rescue c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.242s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.642 247708 INFO nova.virt.libvirt.driver [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Deleting local config drive /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0/disk.config.rescue because it was imported into RBD.
Jan 31 08:20:19 compute-0 kernel: tap912fc9c9-ca: entered promiscuous mode
Jan 31 08:20:19 compute-0 NetworkManager[49108]: <info>  [1769847619.7444] manager: (tap912fc9c9-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Jan 31 08:20:19 compute-0 ovn_controller[149457]: 2026-01-31T08:20:19Z|00617|binding|INFO|Claiming lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for this chassis.
Jan 31 08:20:19 compute-0 ovn_controller[149457]: 2026-01-31T08:20:19Z|00618|binding|INFO|912fc9c9-cae4-4bd8-901c-7bb8a63759a4: Claiming fa:16:3e:a5:2d:b8 10.100.0.10
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.742 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-0 ovn_controller[149457]: 2026-01-31T08:20:19Z|00619|binding|INFO|Setting lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 ovn-installed in OVS
Jan 31 08:20:19 compute-0 nova_compute[247704]: 2026-01-31 08:20:19.761 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:19 compute-0 ovn_controller[149457]: 2026-01-31T08:20:19Z|00620|binding|INFO|Setting lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 up in Southbound
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.775 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:2d:b8 10.100.0.10'], port_security=['fa:16:3e:a5:2d:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c725d43e-b5fe-4a94-ad44-6df85e3c0fa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=912fc9c9-cae4-4bd8-901c-7bb8a63759a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.777 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 bound to our chassis
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.780 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:20:19 compute-0 systemd-machined[214448]: New machine qemu-65-instance-00000090.
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.791 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b1394f43-5f1b-427a-af48-7200458dbf3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.792 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape03fc320-c1 in ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:20:19 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-00000090.
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.795 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape03fc320-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.795 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[483c8b5e-fcc5-40ad-8744-b6e505a2a73f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.797 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5f420d-725c-43e7-b7b7-9d119c5693a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 systemd-udevd[344069]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.822 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[15144c9e-8c39-4d9e-94bd-6a08dca1ffae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 NetworkManager[49108]: <info>  [1769847619.8331] device (tap912fc9c9-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:20:19 compute-0 NetworkManager[49108]: <info>  [1769847619.8336] device (tap912fc9c9-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.840 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fd00ed-262e-4428-9034-bac610fce701]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.872 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3e208c11-7e6e-407f-baa6-e62a94c682dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 systemd-udevd[344072]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:20:19 compute-0 NetworkManager[49108]: <info>  [1769847619.8808] manager: (tape03fc320-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.879 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c251521e-251d-4c68-9608-172fdb1791e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.908 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a46fc4-ee9d-48c7-ac23-55af7f2d39fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.911 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[fec7471c-769a-49da-b3a3-8bf7a2eff4f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 NetworkManager[49108]: <info>  [1769847619.9329] device (tape03fc320-c0): carrier: link connected
Jan 31 08:20:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 522 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 7.3 MiB/s wr, 276 op/s
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.941 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[15e0a465-4428-4191-ab15-7c347195b5c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.957 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[30f985d0-5b1b-46c5-95c9-29fe92607745]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786255, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344100, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.972 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ce167f1f-7df5-43c9-8b4c-948a21ab1a8f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:2269'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786255, 'tstamp': 786255}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344101, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:19.990 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ebcf2f37-0ebd-4300-82aa-b797cd0d4c0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786255, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344102, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.026 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[88b1fec2-a41f-4389-ae59-5c4bba1a077f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.098 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ab43e8a3-e049-4574-8ecc-ac174c1ed9ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.100 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.100 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.101 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:20 compute-0 NetworkManager[49108]: <info>  [1769847620.1047] manager: (tape03fc320-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Jan 31 08:20:20 compute-0 kernel: tape03fc320-c0: entered promiscuous mode
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.104 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.106 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.111 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.112 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:20 compute-0 ovn_controller[149457]: 2026-01-31T08:20:20Z|00621|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.113 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.114 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.115 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[edfb426c-ec36-48f0-9e45-7a9193336f8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.116 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/e03fc320-c87d-42d2-a772-ec94aeb05209.pid.haproxy
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:20:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:20.117 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'env', 'PROCESS_TAG=haproxy-e03fc320-c87d-42d2-a772-ec94aeb05209', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e03fc320-c87d-42d2-a772-ec94aeb05209.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.120 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:20:20
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'vms']
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.206 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.269 247708 DEBUG nova.compute.manager [req-7fab2b9c-ba5a-4f67-a8a9-7700bee271bc req-47e3e0d7-dcdd-40b5-bc01-8ff2f1695f04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.270 247708 DEBUG oslo_concurrency.lockutils [req-7fab2b9c-ba5a-4f67-a8a9-7700bee271bc req-47e3e0d7-dcdd-40b5-bc01-8ff2f1695f04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.270 247708 DEBUG oslo_concurrency.lockutils [req-7fab2b9c-ba5a-4f67-a8a9-7700bee271bc req-47e3e0d7-dcdd-40b5-bc01-8ff2f1695f04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.270 247708 DEBUG oslo_concurrency.lockutils [req-7fab2b9c-ba5a-4f67-a8a9-7700bee271bc req-47e3e0d7-dcdd-40b5-bc01-8ff2f1695f04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.270 247708 DEBUG nova.compute.manager [req-7fab2b9c-ba5a-4f67-a8a9-7700bee271bc req-47e3e0d7-dcdd-40b5-bc01-8ff2f1695f04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] No waiting events found dispatching network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.271 247708 WARNING nova.compute.manager [req-7fab2b9c-ba5a-4f67-a8a9-7700bee271bc req-47e3e0d7-dcdd-40b5-bc01-8ff2f1695f04 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received unexpected event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for instance with vm_state active and task_state rescuing.
Jan 31 08:20:20 compute-0 podman[344165]: 2026-01-31 08:20:20.504333174 +0000 UTC m=+0.058274689 container create f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:20:20 compute-0 systemd[1]: Started libpod-conmon-f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919.scope.
Jan 31 08:20:20 compute-0 podman[344165]: 2026-01-31 08:20:20.475768854 +0000 UTC m=+0.029710389 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:20:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2e624e40bd0c409140c65632d85177c60ae81a779dfb84415f49bc63fb6aa8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:20 compute-0 podman[344165]: 2026-01-31 08:20:20.596027093 +0000 UTC m=+0.149968638 container init f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:20:20 compute-0 podman[344165]: 2026-01-31 08:20:20.603877235 +0000 UTC m=+0.157818760 container start f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.613 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.613 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847620.6124575, c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.614 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] VM Resumed (Lifecycle Event)
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.619 247708 DEBUG nova.compute.manager [None req-371ab7af-5824-41e4-82fe-a8c1f380c184 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:20:20 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[344206]: [NOTICE]   (344211) : New worker (344213) forked
Jan 31 08:20:20 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[344206]: [NOTICE]   (344211) : Loading success.
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.667 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.671 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.702 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.702 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847620.6162198, c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.703 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] VM Started (Lifecycle Event)
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.723 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:20:20 compute-0 nova_compute[247704]: 2026-01-31 08:20:20.727 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:20:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:20.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:20 compute-0 ceph-mgr[74791]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 08:20:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Jan 31 08:20:21 compute-0 ceph-mon[74496]: pgmap v2579: 305 pgs: 305 active+clean; 522 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 7.3 MiB/s wr, 276 op/s
Jan 31 08:20:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Jan 31 08:20:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Jan 31 08:20:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:21.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:21 compute-0 nova_compute[247704]: 2026-01-31 08:20:21.909 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 7.8 MiB/s wr, 350 op/s
Jan 31 08:20:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:22 compute-0 ceph-mon[74496]: osdmap e338: 3 total, 3 up, 3 in
Jan 31 08:20:22 compute-0 nova_compute[247704]: 2026-01-31 08:20:22.391 247708 DEBUG nova.compute.manager [req-d77706e6-1c52-42cb-a57e-fe98ea0b7cb1 req-68d5049b-2b3f-4f76-8b5a-12888b38f2d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:22 compute-0 nova_compute[247704]: 2026-01-31 08:20:22.392 247708 DEBUG oslo_concurrency.lockutils [req-d77706e6-1c52-42cb-a57e-fe98ea0b7cb1 req-68d5049b-2b3f-4f76-8b5a-12888b38f2d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:22 compute-0 nova_compute[247704]: 2026-01-31 08:20:22.392 247708 DEBUG oslo_concurrency.lockutils [req-d77706e6-1c52-42cb-a57e-fe98ea0b7cb1 req-68d5049b-2b3f-4f76-8b5a-12888b38f2d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:22 compute-0 nova_compute[247704]: 2026-01-31 08:20:22.393 247708 DEBUG oslo_concurrency.lockutils [req-d77706e6-1c52-42cb-a57e-fe98ea0b7cb1 req-68d5049b-2b3f-4f76-8b5a-12888b38f2d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:22 compute-0 nova_compute[247704]: 2026-01-31 08:20:22.393 247708 DEBUG nova.compute.manager [req-d77706e6-1c52-42cb-a57e-fe98ea0b7cb1 req-68d5049b-2b3f-4f76-8b5a-12888b38f2d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] No waiting events found dispatching network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:20:22 compute-0 nova_compute[247704]: 2026-01-31 08:20:22.394 247708 WARNING nova.compute.manager [req-d77706e6-1c52-42cb-a57e-fe98ea0b7cb1 req-68d5049b-2b3f-4f76-8b5a-12888b38f2d8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received unexpected event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for instance with vm_state rescued and task_state None.
Jan 31 08:20:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:22.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:23 compute-0 ceph-mon[74496]: pgmap v2581: 305 pgs: 305 active+clean; 534 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 7.8 MiB/s wr, 350 op/s
Jan 31 08:20:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3235684417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 518 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.9 MiB/s wr, 401 op/s
Jan 31 08:20:24 compute-0 ceph-mon[74496]: pgmap v2582: 305 pgs: 305 active+clean; 518 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.9 MiB/s wr, 401 op/s
Jan 31 08:20:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:24.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:25 compute-0 nova_compute[247704]: 2026-01-31 08:20:25.208 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 6.8 MiB/s wr, 491 op/s
Jan 31 08:20:26 compute-0 ovn_controller[149457]: 2026-01-31T08:20:26Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:f1:46 10.100.0.14
Jan 31 08:20:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:26 compute-0 nova_compute[247704]: 2026-01-31 08:20:26.912 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:27 compute-0 ceph-mon[74496]: pgmap v2583: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 6.8 MiB/s wr, 491 op/s
Jan 31 08:20:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Jan 31 08:20:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Jan 31 08:20:27 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Jan 31 08:20:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:27.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:27 compute-0 sudo[344227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:27 compute-0 sudo[344227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:27 compute-0 sudo[344227]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:27 compute-0 sudo[344252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:27 compute-0 sudo[344252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:27 compute-0 sudo[344252]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 1.7 MiB/s wr, 415 op/s
Jan 31 08:20:28 compute-0 ceph-mon[74496]: osdmap e339: 3 total, 3 up, 3 in
Jan 31 08:20:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:28.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:29 compute-0 ceph-mon[74496]: pgmap v2585: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 1.7 MiB/s wr, 415 op/s
Jan 31 08:20:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:29.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 20 KiB/s wr, 308 op/s
Jan 31 08:20:30 compute-0 ceph-mon[74496]: pgmap v2586: 305 pgs: 305 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 20 KiB/s wr, 308 op/s
Jan 31 08:20:30 compute-0 nova_compute[247704]: 2026-01-31 08:20:30.211 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:30 compute-0 ovn_controller[149457]: 2026-01-31T08:20:30Z|00622|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 08:20:30 compute-0 ovn_controller[149457]: 2026-01-31T08:20:30Z|00623|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:20:30 compute-0 nova_compute[247704]: 2026-01-31 08:20:30.699 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:30.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:31.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:31 compute-0 nova_compute[247704]: 2026-01-31 08:20:31.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 20 KiB/s wr, 279 op/s
Jan 31 08:20:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:32 compute-0 nova_compute[247704]: 2026-01-31 08:20:32.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:33 compute-0 ceph-mon[74496]: pgmap v2587: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 20 KiB/s wr, 279 op/s
Jan 31 08:20:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:33.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:33 compute-0 nova_compute[247704]: 2026-01-31 08:20:33.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 208 op/s
Jan 31 08:20:34 compute-0 ovn_controller[149457]: 2026-01-31T08:20:34Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:2d:b8 10.100.0.10
Jan 31 08:20:34 compute-0 ovn_controller[149457]: 2026-01-31T08:20:34Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:2d:b8 10.100.0.10
Jan 31 08:20:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:34 compute-0 podman[344280]: 2026-01-31 08:20:34.955660496 +0000 UTC m=+0.124523865 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:20:35 compute-0 ceph-mon[74496]: pgmap v2588: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 208 op/s
Jan 31 08:20:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:35.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:35 compute-0 nova_compute[247704]: 2026-01-31 08:20:35.213 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009680019018704635 of space, bias 1.0, pg target 2.9040057056113904 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0004874633584589615 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:20:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 42 KiB/s wr, 159 op/s
Jan 31 08:20:36 compute-0 ceph-mon[74496]: pgmap v2589: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 42 KiB/s wr, 159 op/s
Jan 31 08:20:36 compute-0 nova_compute[247704]: 2026-01-31 08:20:36.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:36.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:36 compute-0 nova_compute[247704]: 2026-01-31 08:20:36.917 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:37.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:37 compute-0 nova_compute[247704]: 2026-01-31 08:20:37.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:37 compute-0 nova_compute[247704]: 2026-01-31 08:20:37.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:20:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 42 KiB/s wr, 151 op/s
Jan 31 08:20:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:38.378 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:20:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:38.379 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:20:38 compute-0 nova_compute[247704]: 2026-01-31 08:20:38.379 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:38 compute-0 ceph-mon[74496]: pgmap v2590: 305 pgs: 305 active+clean; 427 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 42 KiB/s wr, 151 op/s
Jan 31 08:20:38 compute-0 nova_compute[247704]: 2026-01-31 08:20:38.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:38 compute-0 nova_compute[247704]: 2026-01-31 08:20:38.589 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:38 compute-0 nova_compute[247704]: 2026-01-31 08:20:38.590 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:38 compute-0 nova_compute[247704]: 2026-01-31 08:20:38.590 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:38 compute-0 nova_compute[247704]: 2026-01-31 08:20:38.590 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:20:38 compute-0 nova_compute[247704]: 2026-01-31 08:20:38.590 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:20:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771619444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.089 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:39.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.248 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.248 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.248 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.253 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.254 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.254 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:20:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1771619444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.475 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.477 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4024MB free_disk=20.785114288330078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.477 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.477 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.604 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 2050d0a7-d773-4467-886b-af07b4efa0d8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.604 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.605 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.605 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:20:39 compute-0 nova_compute[247704]: 2026-01-31 08:20:39.760 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:20:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 49 KiB/s wr, 126 op/s
Jan 31 08:20:40 compute-0 nova_compute[247704]: 2026-01-31 08:20:40.237 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:20:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1056137073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:40 compute-0 nova_compute[247704]: 2026-01-31 08:20:40.267 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:20:40 compute-0 nova_compute[247704]: 2026-01-31 08:20:40.272 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:20:40 compute-0 nova_compute[247704]: 2026-01-31 08:20:40.296 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:20:40 compute-0 nova_compute[247704]: 2026-01-31 08:20:40.327 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:20:40 compute-0 nova_compute[247704]: 2026-01-31 08:20:40.327 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:40 compute-0 ceph-mon[74496]: pgmap v2591: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 49 KiB/s wr, 126 op/s
Jan 31 08:20:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1056137073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:41.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:41 compute-0 nova_compute[247704]: 2026-01-31 08:20:41.920 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 79 KiB/s wr, 119 op/s
Jan 31 08:20:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:43 compute-0 ceph-mon[74496]: pgmap v2592: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 79 KiB/s wr, 119 op/s
Jan 31 08:20:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/342037608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:43.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:43 compute-0 nova_compute[247704]: 2026-01-31 08:20:43.329 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:43 compute-0 nova_compute[247704]: 2026-01-31 08:20:43.330 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:20:43 compute-0 nova_compute[247704]: 2026-01-31 08:20:43.330 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:20:43 compute-0 nova_compute[247704]: 2026-01-31 08:20:43.584 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:43 compute-0 nova_compute[247704]: 2026-01-31 08:20:43.716 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:20:43 compute-0 nova_compute[247704]: 2026-01-31 08:20:43.716 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:20:43 compute-0 nova_compute[247704]: 2026-01-31 08:20:43.717 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:20:43 compute-0 nova_compute[247704]: 2026-01-31 08:20:43.717 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:20:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 406 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 266 KiB/s wr, 117 op/s
Jan 31 08:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/481649911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2769729325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1405474155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2878432924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1314591952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:20:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:44.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:45 compute-0 ceph-mon[74496]: pgmap v2593: 305 pgs: 305 active+clean; 406 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 266 KiB/s wr, 117 op/s
Jan 31 08:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/983807351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1977395580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:45.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:45 compute-0 nova_compute[247704]: 2026-01-31 08:20:45.243 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 31 08:20:46 compute-0 ceph-mon[74496]: pgmap v2594: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 31 08:20:46 compute-0 nova_compute[247704]: 2026-01-31 08:20:46.259 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:46 compute-0 nova_compute[247704]: 2026-01-31 08:20:46.287 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:20:46 compute-0 nova_compute[247704]: 2026-01-31 08:20:46.288 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:20:46 compute-0 nova_compute[247704]: 2026-01-31 08:20:46.288 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:46 compute-0 nova_compute[247704]: 2026-01-31 08:20:46.289 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:46 compute-0 sshd-session[344357]: Invalid user ethereum from 45.148.10.240 port 32986
Jan 31 08:20:46 compute-0 nova_compute[247704]: 2026-01-31 08:20:46.515 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:46 compute-0 sshd-session[344357]: Connection closed by invalid user ethereum 45.148.10.240 port 32986 [preauth]
Jan 31 08:20:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:46.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:46 compute-0 nova_compute[247704]: 2026-01-31 08:20:46.923 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:47.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:20:47.382 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:20:47 compute-0 sudo[344360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:47 compute-0 sudo[344360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:47 compute-0 sudo[344360]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:47 compute-0 sudo[344385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:47 compute-0 sudo[344385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:47 compute-0 sudo[344385]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:47 compute-0 sudo[344410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:47 compute-0 sudo[344410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:47 compute-0 sudo[344410]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 31 08:20:47 compute-0 sudo[344435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:20:47 compute-0 sudo[344435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-0 sudo[344460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:48 compute-0 sudo[344460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-0 sudo[344460]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-0 podman[344459]: 2026-01-31 08:20:48.051422165 +0000 UTC m=+0.079284495 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:20:48 compute-0 sudo[344503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:48 compute-0 sudo[344503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-0 sudo[344503]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:48 compute-0 sudo[344435]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:20:48 compute-0 nova_compute[247704]: 2026-01-31 08:20:48.525 247708 DEBUG nova.compute.manager [req-ead1adf6-4519-482c-8ffe-0c584c74c2d2 req-b562a873-51b0-41bb-bece-4e1a17d2629a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:48 compute-0 nova_compute[247704]: 2026-01-31 08:20:48.525 247708 DEBUG nova.compute.manager [req-ead1adf6-4519-482c-8ffe-0c584c74c2d2 req-b562a873-51b0-41bb-bece-4e1a17d2629a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing instance network info cache due to event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:20:48 compute-0 nova_compute[247704]: 2026-01-31 08:20:48.525 247708 DEBUG oslo_concurrency.lockutils [req-ead1adf6-4519-482c-8ffe-0c584c74c2d2 req-b562a873-51b0-41bb-bece-4e1a17d2629a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:20:48 compute-0 nova_compute[247704]: 2026-01-31 08:20:48.525 247708 DEBUG oslo_concurrency.lockutils [req-ead1adf6-4519-482c-8ffe-0c584c74c2d2 req-b562a873-51b0-41bb-bece-4e1a17d2629a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:20:48 compute-0 nova_compute[247704]: 2026-01-31 08:20:48.526 247708 DEBUG nova.network.neutron [req-ead1adf6-4519-482c-8ffe-0c584c74c2d2 req-b562a873-51b0-41bb-bece-4e1a17d2629a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a219e3fa-e334-46a4-87c3-6879bc4854e0 does not exist
Jan 31 08:20:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c39fe1a2-ee53-4d30-8767-d47f0edf6b6d does not exist
Jan 31 08:20:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 594f2e86-f1f7-4a37-8fbb-49b23e706fb8 does not exist
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:20:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:48 compute-0 sudo[344561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:48 compute-0 sudo[344561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-0 sudo[344561]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-0 sudo[344586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:48 compute-0 sudo[344586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:48 compute-0 sudo[344586]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:49 compute-0 ceph-mon[74496]: pgmap v2595: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/514583558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/514583558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:20:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:20:49 compute-0 sudo[344611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:49 compute-0 sudo[344611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:49 compute-0 sudo[344611]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:49 compute-0 sudo[344636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:20:49 compute-0 sudo[344636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:49 compute-0 nova_compute[247704]: 2026-01-31 08:20:49.423 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:49 compute-0 podman[344702]: 2026-01-31 08:20:49.51203573 +0000 UTC m=+0.053166175 container create 0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:20:49 compute-0 systemd[1]: Started libpod-conmon-0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083.scope.
Jan 31 08:20:49 compute-0 podman[344702]: 2026-01-31 08:20:49.485460918 +0000 UTC m=+0.026591453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:20:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:49 compute-0 podman[344702]: 2026-01-31 08:20:49.639065415 +0000 UTC m=+0.180195900 container init 0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:20:49 compute-0 podman[344702]: 2026-01-31 08:20:49.647361938 +0000 UTC m=+0.188492423 container start 0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:20:49 compute-0 reverent_sinoussi[344720]: 167 167
Jan 31 08:20:49 compute-0 systemd[1]: libpod-0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083.scope: Deactivated successfully.
Jan 31 08:20:49 compute-0 podman[344702]: 2026-01-31 08:20:49.65685849 +0000 UTC m=+0.197988945 container attach 0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:49 compute-0 podman[344702]: 2026-01-31 08:20:49.65843636 +0000 UTC m=+0.199566815 container died 0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7cb16c4d1ec979bed4dc7780e78fcf6a88d524f27318307674626e466f2a9f68-merged.mount: Deactivated successfully.
Jan 31 08:20:49 compute-0 podman[344702]: 2026-01-31 08:20:49.782201184 +0000 UTC m=+0.323331659 container remove 0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:20:49 compute-0 systemd[1]: libpod-conmon-0bec72202fee827c5c9f187f8c10ccf4a818d44f56b61f390dde7f95f58a5083.scope: Deactivated successfully.
Jan 31 08:20:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Jan 31 08:20:49 compute-0 podman[344746]: 2026-01-31 08:20:49.969651851 +0000 UTC m=+0.051113295 container create e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hellman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:20:50 compute-0 systemd[1]: Started libpod-conmon-e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337.scope.
Jan 31 08:20:50 compute-0 podman[344746]: 2026-01-31 08:20:49.949892467 +0000 UTC m=+0.031353961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:20:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7b9c081863fe3084ec930843d24aff97cb2125adaa260fdb358482b352b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7b9c081863fe3084ec930843d24aff97cb2125adaa260fdb358482b352b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7b9c081863fe3084ec930843d24aff97cb2125adaa260fdb358482b352b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7b9c081863fe3084ec930843d24aff97cb2125adaa260fdb358482b352b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e7b9c081863fe3084ec930843d24aff97cb2125adaa260fdb358482b352b04/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:50 compute-0 podman[344746]: 2026-01-31 08:20:50.083482322 +0000 UTC m=+0.164943786 container init e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hellman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:50 compute-0 podman[344746]: 2026-01-31 08:20:50.094914312 +0000 UTC m=+0.176375746 container start e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hellman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:20:50 compute-0 podman[344746]: 2026-01-31 08:20:50.099483045 +0000 UTC m=+0.180944659 container attach e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:20:50 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 31 08:20:50 compute-0 nova_compute[247704]: 2026-01-31 08:20:50.245 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:50 compute-0 nova_compute[247704]: 2026-01-31 08:20:50.570 247708 DEBUG nova.network.neutron [req-ead1adf6-4519-482c-8ffe-0c584c74c2d2 req-b562a873-51b0-41bb-bece-4e1a17d2629a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated VIF entry in instance network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:20:50 compute-0 nova_compute[247704]: 2026-01-31 08:20:50.571 247708 DEBUG nova.network.neutron [req-ead1adf6-4519-482c-8ffe-0c584c74c2d2 req-b562a873-51b0-41bb-bece-4e1a17d2629a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:20:50 compute-0 nova_compute[247704]: 2026-01-31 08:20:50.615 247708 DEBUG oslo_concurrency.lockutils [req-ead1adf6-4519-482c-8ffe-0c584c74c2d2 req-b562a873-51b0-41bb-bece-4e1a17d2629a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:20:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:50.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:50 compute-0 blissful_hellman[344763]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:20:50 compute-0 blissful_hellman[344763]: --> relative data size: 1.0
Jan 31 08:20:50 compute-0 blissful_hellman[344763]: --> All data devices are unavailable
Jan 31 08:20:50 compute-0 systemd[1]: libpod-e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337.scope: Deactivated successfully.
Jan 31 08:20:50 compute-0 podman[344746]: 2026-01-31 08:20:50.974837919 +0000 UTC m=+1.056299363 container died e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:20:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-31e7b9c081863fe3084ec930843d24aff97cb2125adaa260fdb358482b352b04-merged.mount: Deactivated successfully.
Jan 31 08:20:51 compute-0 podman[344746]: 2026-01-31 08:20:51.046756882 +0000 UTC m=+1.128218336 container remove e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:20:51 compute-0 ceph-mon[74496]: pgmap v2596: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Jan 31 08:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1739381354' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1739381354' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:20:51 compute-0 systemd[1]: libpod-conmon-e05422882e53d999a634f3b46fb15e3b85fcabe3b120e0b7762261ed3f0d2337.scope: Deactivated successfully.
Jan 31 08:20:51 compute-0 sudo[344636]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:51 compute-0 sudo[344790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:51 compute-0 sudo[344790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:51 compute-0 sudo[344790]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:51 compute-0 sudo[344815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:51 compute-0 sudo[344815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:51 compute-0 sudo[344815]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:51.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:51 compute-0 sudo[344840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:51 compute-0 sudo[344840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:51 compute-0 sudo[344840]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:51 compute-0 sudo[344865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:20:51 compute-0 sudo[344865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:51 compute-0 podman[344930]: 2026-01-31 08:20:51.682347977 +0000 UTC m=+0.057009359 container create 1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 08:20:51 compute-0 systemd[1]: Started libpod-conmon-1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc.scope.
Jan 31 08:20:51 compute-0 podman[344930]: 2026-01-31 08:20:51.652008273 +0000 UTC m=+0.026669665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:20:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:51 compute-0 podman[344930]: 2026-01-31 08:20:51.80189883 +0000 UTC m=+0.176560232 container init 1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 08:20:51 compute-0 podman[344930]: 2026-01-31 08:20:51.810036568 +0000 UTC m=+0.184697950 container start 1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:20:51 compute-0 podman[344930]: 2026-01-31 08:20:51.814534429 +0000 UTC m=+0.189195791 container attach 1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:20:51 compute-0 intelligent_faraday[344947]: 167 167
Jan 31 08:20:51 compute-0 systemd[1]: libpod-1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc.scope: Deactivated successfully.
Jan 31 08:20:51 compute-0 podman[344930]: 2026-01-31 08:20:51.817952283 +0000 UTC m=+0.192613655 container died 1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:20:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9190e61f0d9d632a66bbd3e4e2541164a6cd63bdb5c4fbaef3571fd7d0c47d2-merged.mount: Deactivated successfully.
Jan 31 08:20:51 compute-0 podman[344930]: 2026-01-31 08:20:51.865422967 +0000 UTC m=+0.240084349 container remove 1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:20:51 compute-0 systemd[1]: libpod-conmon-1fead57cfa768f8c03c6222d00b8e0243270c87dfeb877409c5b8aea72197dfc.scope: Deactivated successfully.
Jan 31 08:20:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 31 08:20:51 compute-0 nova_compute[247704]: 2026-01-31 08:20:51.974 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:52 compute-0 ceph-mon[74496]: pgmap v2597: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 31 08:20:52 compute-0 podman[344971]: 2026-01-31 08:20:52.117326034 +0000 UTC m=+0.059379168 container create 852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatelet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:20:52 compute-0 systemd[1]: Started libpod-conmon-852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157.scope.
Jan 31 08:20:52 compute-0 podman[344971]: 2026-01-31 08:20:52.087579095 +0000 UTC m=+0.029632279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:20:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bd71d4d3df022696318e8d5aedc8e7cb75ef10336915179da76766445d4f98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bd71d4d3df022696318e8d5aedc8e7cb75ef10336915179da76766445d4f98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bd71d4d3df022696318e8d5aedc8e7cb75ef10336915179da76766445d4f98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bd71d4d3df022696318e8d5aedc8e7cb75ef10336915179da76766445d4f98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:52 compute-0 podman[344971]: 2026-01-31 08:20:52.21545052 +0000 UTC m=+0.157503664 container init 852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:20:52 compute-0 podman[344971]: 2026-01-31 08:20:52.222994015 +0000 UTC m=+0.165047109 container start 852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:20:52 compute-0 podman[344971]: 2026-01-31 08:20:52.226401259 +0000 UTC m=+0.168454463 container attach 852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 08:20:52 compute-0 nova_compute[247704]: 2026-01-31 08:20:52.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:20:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:20:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:52.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]: {
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:     "0": [
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:         {
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "devices": [
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "/dev/loop3"
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             ],
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "lv_name": "ceph_lv0",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "lv_size": "7511998464",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "name": "ceph_lv0",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "tags": {
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.cluster_name": "ceph",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.crush_device_class": "",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.encrypted": "0",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.osd_id": "0",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.type": "block",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:                 "ceph.vdo": "0"
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             },
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "type": "block",
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:             "vg_name": "ceph_vg0"
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:         }
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]:     ]
Jan 31 08:20:52 compute-0 priceless_chatelet[344988]: }
Jan 31 08:20:53 compute-0 systemd[1]: libpod-852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157.scope: Deactivated successfully.
Jan 31 08:20:53 compute-0 podman[344971]: 2026-01-31 08:20:53.006767284 +0000 UTC m=+0.948820388 container died 852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6bd71d4d3df022696318e8d5aedc8e7cb75ef10336915179da76766445d4f98-merged.mount: Deactivated successfully.
Jan 31 08:20:53 compute-0 podman[344971]: 2026-01-31 08:20:53.071921372 +0000 UTC m=+1.013974466 container remove 852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:53 compute-0 systemd[1]: libpod-conmon-852ce3828c48ae18680578061e443294a63be8123b6ec9075df571a3b54ec157.scope: Deactivated successfully.
Jan 31 08:20:53 compute-0 sudo[344865]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:53 compute-0 sudo[345011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:53 compute-0 sudo[345011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:53 compute-0 sudo[345011]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:53 compute-0 sudo[345036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:53 compute-0 sudo[345036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:53 compute-0 sudo[345036]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:53.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:53 compute-0 sudo[345061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:53 compute-0 sudo[345061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:53 compute-0 sudo[345061]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:53 compute-0 sudo[345086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:20:53 compute-0 sudo[345086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2573482399' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2573482399' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:20:53 compute-0 podman[345153]: 2026-01-31 08:20:53.693506544 +0000 UTC m=+0.045322203 container create 25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haslett, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:20:53 compute-0 systemd[1]: Started libpod-conmon-25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f.scope.
Jan 31 08:20:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:53 compute-0 podman[345153]: 2026-01-31 08:20:53.670301524 +0000 UTC m=+0.022117243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:20:53 compute-0 podman[345153]: 2026-01-31 08:20:53.773674759 +0000 UTC m=+0.125490438 container init 25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:20:53 compute-0 podman[345153]: 2026-01-31 08:20:53.778836315 +0000 UTC m=+0.130651974 container start 25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haslett, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 08:20:53 compute-0 eager_haslett[345169]: 167 167
Jan 31 08:20:53 compute-0 systemd[1]: libpod-25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f.scope: Deactivated successfully.
Jan 31 08:20:53 compute-0 conmon[345169]: conmon 25b073abcf6bd7aec15d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f.scope/container/memory.events
Jan 31 08:20:53 compute-0 podman[345153]: 2026-01-31 08:20:53.789163429 +0000 UTC m=+0.140979088 container attach 25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haslett, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:20:53 compute-0 podman[345153]: 2026-01-31 08:20:53.78965048 +0000 UTC m=+0.141466139 container died 25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haslett, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:20:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-410968ba65f8602e68f4afaf20955e1341a4555ff260a55676c2c42645fcd195-merged.mount: Deactivated successfully.
Jan 31 08:20:53 compute-0 podman[345153]: 2026-01-31 08:20:53.842189259 +0000 UTC m=+0.194004918 container remove 25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haslett, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 08:20:53 compute-0 systemd[1]: libpod-conmon-25b073abcf6bd7aec15dd1c6cf936132390790c28fa807482516b15104eb487f.scope: Deactivated successfully.
Jan 31 08:20:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 31 08:20:54 compute-0 podman[345193]: 2026-01-31 08:20:54.008306382 +0000 UTC m=+0.047476145 container create 53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 08:20:54 compute-0 systemd[1]: Started libpod-conmon-53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1.scope.
Jan 31 08:20:54 compute-0 podman[345193]: 2026-01-31 08:20:53.986900957 +0000 UTC m=+0.026070740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:20:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:20:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d270cde350ddd8cb5d31ba43fd1bd2eca5eb996ab860bc70adf51d8e8b69de6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d270cde350ddd8cb5d31ba43fd1bd2eca5eb996ab860bc70adf51d8e8b69de6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d270cde350ddd8cb5d31ba43fd1bd2eca5eb996ab860bc70adf51d8e8b69de6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d270cde350ddd8cb5d31ba43fd1bd2eca5eb996ab860bc70adf51d8e8b69de6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:20:54 compute-0 podman[345193]: 2026-01-31 08:20:54.111898512 +0000 UTC m=+0.151068275 container init 53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:20:54 compute-0 podman[345193]: 2026-01-31 08:20:54.118525775 +0000 UTC m=+0.157695528 container start 53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:20:54 compute-0 podman[345193]: 2026-01-31 08:20:54.135355938 +0000 UTC m=+0.174525781 container attach 53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:20:54 compute-0 ceph-mon[74496]: pgmap v2598: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 31 08:20:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:54.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:54 compute-0 serene_raman[345209]: {
Jan 31 08:20:54 compute-0 serene_raman[345209]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:20:54 compute-0 serene_raman[345209]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:20:54 compute-0 serene_raman[345209]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:20:54 compute-0 serene_raman[345209]:         "osd_id": 0,
Jan 31 08:20:54 compute-0 serene_raman[345209]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:20:54 compute-0 serene_raman[345209]:         "type": "bluestore"
Jan 31 08:20:54 compute-0 serene_raman[345209]:     }
Jan 31 08:20:54 compute-0 serene_raman[345209]: }
Jan 31 08:20:54 compute-0 systemd[1]: libpod-53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1.scope: Deactivated successfully.
Jan 31 08:20:54 compute-0 podman[345193]: 2026-01-31 08:20:54.945699938 +0000 UTC m=+0.984869671 container died 53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 08:20:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d270cde350ddd8cb5d31ba43fd1bd2eca5eb996ab860bc70adf51d8e8b69de6-merged.mount: Deactivated successfully.
Jan 31 08:20:55 compute-0 podman[345193]: 2026-01-31 08:20:55.001701701 +0000 UTC m=+1.040871464 container remove 53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_raman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 31 08:20:55 compute-0 systemd[1]: libpod-conmon-53286edcc5349524556171b232773c85bd8cd55f99138616d6251a3833d6d2b1.scope: Deactivated successfully.
Jan 31 08:20:55 compute-0 sudo[345086]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:20:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:20:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 443b9fe2-394c-47a4-8322-cca11b201ac0 does not exist
Jan 31 08:20:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c7abff74-74a4-4773-a0a3-e0ad61b1f462 does not exist
Jan 31 08:20:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 75881d6c-b519-4661-ab92-714fe2558bf8 does not exist
Jan 31 08:20:55 compute-0 sudo[345240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:20:55 compute-0 sudo[345240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:55 compute-0 sudo[345240]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:55 compute-0 sudo[345265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:20:55 compute-0 sudo[345265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:55 compute-0 sudo[345265]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:55.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:55 compute-0 nova_compute[247704]: 2026-01-31 08:20:55.250 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.6 MiB/s wr, 243 op/s
Jan 31 08:20:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:20:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:56 compute-0 nova_compute[247704]: 2026-01-31 08:20:56.977 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:20:57 compute-0 ceph-mon[74496]: pgmap v2599: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.6 MiB/s wr, 243 op/s
Jan 31 08:20:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:57.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 19 KiB/s wr, 205 op/s
Jan 31 08:20:58 compute-0 ceph-mon[74496]: pgmap v2600: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 19 KiB/s wr, 205 op/s
Jan 31 08:20:58 compute-0 nova_compute[247704]: 2026-01-31 08:20:58.583 247708 DEBUG nova.compute.manager [req-9b3e3704-6b2e-41e2-9ad9-46d8bc8952c0 req-9f8a1bb3-bf06-4d86-a20e-7d56bc33b282 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:20:58 compute-0 nova_compute[247704]: 2026-01-31 08:20:58.584 247708 DEBUG nova.compute.manager [req-9b3e3704-6b2e-41e2-9ad9-46d8bc8952c0 req-9f8a1bb3-bf06-4d86-a20e-7d56bc33b282 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing instance network info cache due to event network-changed-3d5b4f77-1672-4b27-83d1-741ef4fda685. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:20:58 compute-0 nova_compute[247704]: 2026-01-31 08:20:58.584 247708 DEBUG oslo_concurrency.lockutils [req-9b3e3704-6b2e-41e2-9ad9-46d8bc8952c0 req-9f8a1bb3-bf06-4d86-a20e-7d56bc33b282 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:20:58 compute-0 nova_compute[247704]: 2026-01-31 08:20:58.584 247708 DEBUG oslo_concurrency.lockutils [req-9b3e3704-6b2e-41e2-9ad9-46d8bc8952c0 req-9f8a1bb3-bf06-4d86-a20e-7d56bc33b282 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:20:58 compute-0 nova_compute[247704]: 2026-01-31 08:20:58.584 247708 DEBUG nova.network.neutron [req-9b3e3704-6b2e-41e2-9ad9-46d8bc8952c0 req-9f8a1bb3-bf06-4d86-a20e-7d56bc33b282 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Refreshing network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:20:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:20:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:58.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:20:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:20:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:20:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:59.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:20:59 compute-0 ovn_controller[149457]: 2026-01-31T08:20:59Z|00624|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 08:20:59 compute-0 ovn_controller[149457]: 2026-01-31T08:20:59Z|00625|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:20:59 compute-0 nova_compute[247704]: 2026-01-31 08:20:59.445 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:59 compute-0 ovn_controller[149457]: 2026-01-31T08:20:59Z|00626|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 08:20:59 compute-0 ovn_controller[149457]: 2026-01-31T08:20:59Z|00627|binding|INFO|Releasing lport eb4259dc-1b35-4b46-af47-bdd24739342f from this chassis (sb_readonly=0)
Jan 31 08:20:59 compute-0 nova_compute[247704]: 2026-01-31 08:20:59.537 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:20:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4028800591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:20:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 192 op/s
Jan 31 08:21:00 compute-0 nova_compute[247704]: 2026-01-31 08:21:00.251 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:00 compute-0 ceph-mon[74496]: pgmap v2601: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 192 op/s
Jan 31 08:21:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:00.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:00 compute-0 nova_compute[247704]: 2026-01-31 08:21:00.939 247708 DEBUG nova.network.neutron [req-9b3e3704-6b2e-41e2-9ad9-46d8bc8952c0 req-9f8a1bb3-bf06-4d86-a20e-7d56bc33b282 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updated VIF entry in instance network info cache for port 3d5b4f77-1672-4b27-83d1-741ef4fda685. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:00 compute-0 nova_compute[247704]: 2026-01-31 08:21:00.940 247708 DEBUG nova.network.neutron [req-9b3e3704-6b2e-41e2-9ad9-46d8bc8952c0 req-9f8a1bb3-bf06-4d86-a20e-7d56bc33b282 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [{"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:01.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:01 compute-0 nova_compute[247704]: 2026-01-31 08:21:01.280 247708 DEBUG oslo_concurrency.lockutils [req-9b3e3704-6b2e-41e2-9ad9-46d8bc8952c0 req-9f8a1bb3-bf06-4d86-a20e-7d56bc33b282 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2050d0a7-d773-4467-886b-af07b4efa0d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:01 compute-0 nova_compute[247704]: 2026-01-31 08:21:01.756 247708 DEBUG oslo_concurrency.lockutils [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:01 compute-0 nova_compute[247704]: 2026-01-31 08:21:01.757 247708 DEBUG oslo_concurrency.lockutils [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:01 compute-0 nova_compute[247704]: 2026-01-31 08:21:01.852 247708 INFO nova.compute.manager [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Detaching volume 124bfb1e-b91a-4179-9a6b-46ee143687ea
Jan 31 08:21:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.1 KiB/s wr, 141 op/s
Jan 31 08:21:01 compute-0 nova_compute[247704]: 2026-01-31 08:21:01.979 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.118 247708 INFO nova.virt.block_device [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Attempting to driver detach volume 124bfb1e-b91a-4179-9a6b-46ee143687ea from mountpoint /dev/vdb
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.133 247708 DEBUG nova.virt.libvirt.driver [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Attempting to detach device vdb from instance 2050d0a7-d773-4467-886b-af07b4efa0d8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.134 247708 DEBUG nova.virt.libvirt.guest [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-124bfb1e-b91a-4179-9a6b-46ee143687ea">
Jan 31 08:21:02 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   </source>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <serial>124bfb1e-b91a-4179-9a6b-46ee143687ea</serial>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]: </disk>
Jan 31 08:21:02 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.146 247708 INFO nova.virt.libvirt.driver [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully detached device vdb from instance 2050d0a7-d773-4467-886b-af07b4efa0d8 from the persistent domain config.
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.147 247708 DEBUG nova.virt.libvirt.driver [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2050d0a7-d773-4467-886b-af07b4efa0d8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.147 247708 DEBUG nova.virt.libvirt.guest [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-124bfb1e-b91a-4179-9a6b-46ee143687ea">
Jan 31 08:21:02 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   </source>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <serial>124bfb1e-b91a-4179-9a6b-46ee143687ea</serial>
Jan 31 08:21:02 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 08:21:02 compute-0 nova_compute[247704]: </disk>
Jan 31 08:21:02 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.271 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769847662.27106, 2050d0a7-d773-4467-886b-af07b4efa0d8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.274 247708 DEBUG nova.virt.libvirt.driver [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2050d0a7-d773-4467-886b-af07b4efa0d8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.276 247708 INFO nova.virt.libvirt.driver [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully detached device vdb from instance 2050d0a7-d773-4467-886b-af07b4efa0d8 from the live domain config.
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.777 247708 DEBUG nova.objects.instance [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'flavor' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:02.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:02 compute-0 nova_compute[247704]: 2026-01-31 08:21:02.964 247708 DEBUG oslo_concurrency.lockutils [None req-915c4d26-c4d9-4f2a-a2ea-857ae682e8e8 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:03 compute-0 ceph-mon[74496]: pgmap v2602: 305 pgs: 305 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.1 KiB/s wr, 141 op/s
Jan 31 08:21:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:03.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 284 KiB/s wr, 120 op/s
Jan 31 08:21:04 compute-0 ceph-mon[74496]: pgmap v2603: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 284 KiB/s wr, 120 op/s
Jan 31 08:21:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:04.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:05 compute-0 nova_compute[247704]: 2026-01-31 08:21:05.254 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:05.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:21:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3138236152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:21:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:21:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3138236152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:21:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3138236152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:21:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3138236152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:21:05 compute-0 podman[345299]: 2026-01-31 08:21:05.922972012 +0000 UTC m=+0.088375008 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Jan 31 08:21:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Jan 31 08:21:06 compute-0 ceph-mon[74496]: pgmap v2604: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Jan 31 08:21:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/253835237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.759 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.761 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.855 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.855 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.856 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.856 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.856 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.857 247708 INFO nova.compute.manager [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Terminating instance
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.858 247708 DEBUG nova.compute.manager [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:21:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:06.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:06 compute-0 kernel: tap3d5b4f77-16 (unregistering): left promiscuous mode
Jan 31 08:21:06 compute-0 NetworkManager[49108]: <info>  [1769847666.9180] device (tap3d5b4f77-16): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:21:06 compute-0 ovn_controller[149457]: 2026-01-31T08:21:06Z|00628|binding|INFO|Releasing lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 from this chassis (sb_readonly=0)
Jan 31 08:21:06 compute-0 ovn_controller[149457]: 2026-01-31T08:21:06Z|00629|binding|INFO|Setting lport 3d5b4f77-1672-4b27-83d1-741ef4fda685 down in Southbound
Jan 31 08:21:06 compute-0 ovn_controller[149457]: 2026-01-31T08:21:06Z|00630|binding|INFO|Removing iface tap3d5b4f77-16 ovn-installed in OVS
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.926 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.935 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.964 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:06 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Jan 31 08:21:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:06.981 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:f1:46 10.100.0.14'], port_security=['fa:16:3e:93:f1:46 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2050d0a7-d773-4467-886b-af07b4efa0d8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '134c066ac92844ff853b216870fa8eed', 'neutron:revision_number': '8', 'neutron:security_group_ids': '967ea74e-50db-4569-92ae-9b918e86440d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bd5646e-2523-4ee9-a162-795050792e9d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3d5b4f77-1672-4b27-83d1-741ef4fda685) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:21:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:06.983 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3d5b4f77-1672-4b27-83d1-741ef4fda685 in datapath b8453b6a-05bd-4d59-86e9-a509416a9ef0 unbound from our chassis
Jan 31 08:21:06 compute-0 nova_compute[247704]: 2026-01-31 08:21:06.983 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:06 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000008d.scope: Consumed 16.041s CPU time.
Jan 31 08:21:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:06.984 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8453b6a-05bd-4d59-86e9-a509416a9ef0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:21:06 compute-0 systemd-machined[214448]: Machine qemu-64-instance-0000008d terminated.
Jan 31 08:21:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:06.986 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e4d8aa-706b-41fd-829e-310c2ddd2f46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:06.986 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 namespace which is not needed anymore
Jan 31 08:21:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.100 247708 INFO nova.virt.libvirt.driver [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Instance destroyed successfully.
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.100 247708 DEBUG nova.objects.instance [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lazy-loading 'resources' on Instance uuid 2050d0a7-d773-4467-886b-af07b4efa0d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:07 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[343713]: [NOTICE]   (343718) : haproxy version is 2.8.14-c23fe91
Jan 31 08:21:07 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[343713]: [NOTICE]   (343718) : path to executable is /usr/sbin/haproxy
Jan 31 08:21:07 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[343713]: [WARNING]  (343718) : Exiting Master process...
Jan 31 08:21:07 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[343713]: [ALERT]    (343718) : Current worker (343720) exited with code 143 (Terminated)
Jan 31 08:21:07 compute-0 neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0[343713]: [WARNING]  (343718) : All workers exited. Exiting... (0)
Jan 31 08:21:07 compute-0 systemd[1]: libpod-9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670.scope: Deactivated successfully.
Jan 31 08:21:07 compute-0 podman[345350]: 2026-01-31 08:21:07.145652662 +0000 UTC m=+0.060197797 container died 9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670-userdata-shm.mount: Deactivated successfully.
Jan 31 08:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-af1d3b83906763f8516eb285f9fd8f6fd486b6c13d8805d9b8737d243570e1ef-merged.mount: Deactivated successfully.
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.190 247708 DEBUG nova.virt.libvirt.vif [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-1587986582',display_name='tempest-TestMinimumBasicScenario-server-1587986582',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-1587986582',id=141,image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHmcExNwe2fixW74P9gMs2+p3Ai01ht0KWMZ+YQVrB+lWzAPhuDeCsQup7t70K9V4kEA8Q6TjuiL3oYataTcIeP973UypS9on/cMTGNZfWc+a/I9G+q9wlRQBnSo+WIwWQ==',key_name='tempest-TestMinimumBasicScenario-589136251',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='134c066ac92844ff853b216870fa8eed',ramdisk_id='',reservation_id='r-gqbdxfhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='f053e722-a943-4ece-ad77-a071fe4c9f59',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-975831205',owner_user_name='tempest-TestMinimumBasicScenario-975831205-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:12Z,user_data=None,user_id='7f0be9090fdf49d2ac15246a0a820d3f',uuid=2050d0a7-d773-4467-886b-af07b4efa0d8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.191 247708 DEBUG nova.network.os_vif_util [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converting VIF {"id": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "address": "fa:16:3e:93:f1:46", "network": {"id": "b8453b6a-05bd-4d59-86e9-a509416a9ef0", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1743292655-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "134c066ac92844ff853b216870fa8eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d5b4f77-16", "ovs_interfaceid": "3d5b4f77-1672-4b27-83d1-741ef4fda685", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.191 247708 DEBUG nova.network.os_vif_util [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.192 247708 DEBUG os_vif [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.193 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.193 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d5b4f77-16, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-0 podman[345350]: 2026-01-31 08:21:07.196660784 +0000 UTC m=+0.111205939 container cleanup 9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.196 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.200 247708 INFO os_vif [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:f1:46,bridge_name='br-int',has_traffic_filtering=True,id=3d5b4f77-1672-4b27-83d1-741ef4fda685,network=Network(b8453b6a-05bd-4d59-86e9-a509416a9ef0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d5b4f77-16')
Jan 31 08:21:07 compute-0 systemd[1]: libpod-conmon-9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670.scope: Deactivated successfully.
Jan 31 08:21:07 compute-0 podman[345389]: 2026-01-31 08:21:07.258252694 +0000 UTC m=+0.047770362 container remove 9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:21:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:07.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.263 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[84753dd7-61de-44b0-9fbf-b5363a29eefa]: (4, ('Sat Jan 31 08:21:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 (9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670)\n9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670\nSat Jan 31 08:21:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 (9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670)\n9fd2675ddc0a36a9bf6ed26c9ea751264a79fdf410af16f0685bc684d19bb670\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.265 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9242e891-fb37-41d0-8b0c-094ba2f437a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.268 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8453b6a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-0 kernel: tapb8453b6a-00: left promiscuous mode
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.309 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.309 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.311 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.314 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2a8128-e657-4c01-a775-65900eadfd39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.320 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.321 247708 INFO nova.compute.claims [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.329 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a4a908-0899-4162-95a7-a875f75b33de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.331 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[37d42a49-333b-4384-8b45-6b75c5ce9da9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.347 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b7cfad6c-3982-46fc-a4a7-3f70593e5ac2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785476, 'reachable_time': 15870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345421, 'error': None, 'target': 'ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.350 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8453b6a-05bd-4d59-86e9-a509416a9ef0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:21:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:07.350 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[96b9c8c5-d562-4a26-be09-94cfb874943b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:07 compute-0 systemd[1]: run-netns-ovnmeta\x2db8453b6a\x2d05bd\x2d4d59\x2d86e9\x2da509416a9ef0.mount: Deactivated successfully.
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.583 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.615 247708 DEBUG nova.compute.manager [req-ecae839f-380d-4b65-831f-9965dfd6e807 req-9ecf0f2e-670c-45f0-9b93-d26a6d0ad8b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-unplugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.616 247708 DEBUG oslo_concurrency.lockutils [req-ecae839f-380d-4b65-831f-9965dfd6e807 req-9ecf0f2e-670c-45f0-9b93-d26a6d0ad8b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.616 247708 DEBUG oslo_concurrency.lockutils [req-ecae839f-380d-4b65-831f-9965dfd6e807 req-9ecf0f2e-670c-45f0-9b93-d26a6d0ad8b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.617 247708 DEBUG oslo_concurrency.lockutils [req-ecae839f-380d-4b65-831f-9965dfd6e807 req-9ecf0f2e-670c-45f0-9b93-d26a6d0ad8b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.617 247708 DEBUG nova.compute.manager [req-ecae839f-380d-4b65-831f-9965dfd6e807 req-9ecf0f2e-670c-45f0-9b93-d26a6d0ad8b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] No waiting events found dispatching network-vif-unplugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.618 247708 DEBUG nova.compute.manager [req-ecae839f-380d-4b65-831f-9965dfd6e807 req-9ecf0f2e-670c-45f0-9b93-d26a6d0ad8b0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-unplugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:21:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2403454183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.722 247708 INFO nova.virt.libvirt.driver [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Deleting instance files /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8_del
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.723 247708 INFO nova.virt.libvirt.driver [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Deletion of /var/lib/nova/instances/2050d0a7-d773-4467-886b-af07b4efa0d8_del complete
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.842 247708 INFO nova.compute.manager [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Took 0.98 seconds to destroy the instance on the hypervisor.
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.843 247708 DEBUG oslo.service.loopingcall [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.843 247708 DEBUG nova.compute.manager [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:21:07 compute-0 nova_compute[247704]: 2026-01-31 08:21:07.843 247708 DEBUG nova.network.neutron [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:21:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 08:21:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:21:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552483045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.039 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.045 247708 DEBUG nova.compute.provider_tree [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.066 247708 DEBUG nova.scheduler.client.report [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.089 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.090 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.152 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.153 247708 DEBUG nova.network.neutron [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:21:08 compute-0 sudo[345446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:08 compute-0 sudo[345446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:08 compute-0 sudo[345446]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.214 247708 INFO nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.247 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:21:08 compute-0 sudo[345471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:08 compute-0 sudo[345471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:08 compute-0 sudo[345471]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.440 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.443 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.444 247708 INFO nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Creating image(s)
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.480 247708 DEBUG nova.storage.rbd_utils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.523 247708 DEBUG nova.storage.rbd_utils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.558 247708 DEBUG nova.storage.rbd_utils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.564 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.596 247708 DEBUG nova.policy [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6788b0883cb348719d1222b1c9483be2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.640 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.641 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.642 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.643 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.670 247708 DEBUG nova.storage.rbd_utils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:08 compute-0 nova_compute[247704]: 2026-01-31 08:21:08.673 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 9461628f-d09f-4923-824c-3b03dfe4bb13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:08 compute-0 ceph-mon[74496]: pgmap v2605: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 08:21:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/552483045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:08.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.009 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 9461628f-d09f-4923-824c-3b03dfe4bb13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.096 247708 DEBUG nova.storage.rbd_utils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] resizing rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.143 247708 DEBUG nova.network.neutron [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.224 247708 DEBUG nova.objects.instance [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.227 247708 INFO nova.compute.manager [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Took 1.38 seconds to deallocate network for instance.
Jan 31 08:21:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:09.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.271 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.271 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Ensure instance console log exists: /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.272 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.272 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.273 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:09 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:21:09 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.327 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.328 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.438 247708 DEBUG oslo_concurrency.processutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.468 247708 DEBUG nova.compute.manager [req-32f62571-7b85-4d80-8a46-aa7ef058d6a5 req-7cea7a27-33d3-4fc5-86b7-1ae648e824c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-deleted-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.751 247708 DEBUG nova.compute.manager [req-ab088e88-4a34-4263-a6dd-5f633dc0765d req-8055010a-fae3-403d-9b20-4640a6f74758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.752 247708 DEBUG oslo_concurrency.lockutils [req-ab088e88-4a34-4263-a6dd-5f633dc0765d req-8055010a-fae3-403d-9b20-4640a6f74758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.752 247708 DEBUG oslo_concurrency.lockutils [req-ab088e88-4a34-4263-a6dd-5f633dc0765d req-8055010a-fae3-403d-9b20-4640a6f74758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.752 247708 DEBUG oslo_concurrency.lockutils [req-ab088e88-4a34-4263-a6dd-5f633dc0765d req-8055010a-fae3-403d-9b20-4640a6f74758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.753 247708 DEBUG nova.compute.manager [req-ab088e88-4a34-4263-a6dd-5f633dc0765d req-8055010a-fae3-403d-9b20-4640a6f74758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] No waiting events found dispatching network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.753 247708 WARNING nova.compute.manager [req-ab088e88-4a34-4263-a6dd-5f633dc0765d req-8055010a-fae3-403d-9b20-4640a6f74758 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Received unexpected event network-vif-plugged-3d5b4f77-1672-4b27-83d1-741ef4fda685 for instance with vm_state deleted and task_state None.
Jan 31 08:21:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:21:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237790933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.895 247708 DEBUG nova.network.neutron [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Successfully created port: 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.921 247708 DEBUG oslo_concurrency.processutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.927 247708 DEBUG nova.compute.provider_tree [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.944 247708 DEBUG nova.scheduler.client.report [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:21:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1237790933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 2.7 MiB/s wr, 132 op/s
Jan 31 08:21:09 compute-0 nova_compute[247704]: 2026-01-31 08:21:09.974 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:10 compute-0 nova_compute[247704]: 2026-01-31 08:21:10.009 247708 INFO nova.scheduler.client.report [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Deleted allocations for instance 2050d0a7-d773-4467-886b-af07b4efa0d8
Jan 31 08:21:10 compute-0 nova_compute[247704]: 2026-01-31 08:21:10.138 247708 DEBUG oslo_concurrency.lockutils [None req-553b704e-d5fa-4196-ba11-37f6a7777c1a 7f0be9090fdf49d2ac15246a0a820d3f 134c066ac92844ff853b216870fa8eed - - default default] Lock "2050d0a7-d773-4467-886b-af07b4efa0d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:10 compute-0 nova_compute[247704]: 2026-01-31 08:21:10.259 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:21:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:10.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:21:10 compute-0 ceph-mon[74496]: pgmap v2606: 305 pgs: 305 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 2.7 MiB/s wr, 132 op/s
Jan 31 08:21:11 compute-0 nova_compute[247704]: 2026-01-31 08:21:11.026 247708 DEBUG nova.network.neutron [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Successfully updated port: 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:21:11 compute-0 nova_compute[247704]: 2026-01-31 08:21:11.067 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:11 compute-0 nova_compute[247704]: 2026-01-31 08:21:11.067 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquired lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:11 compute-0 nova_compute[247704]: 2026-01-31 08:21:11.067 247708 DEBUG nova.network.neutron [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:21:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:11.192 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:11.193 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:11.194 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:11.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:11 compute-0 nova_compute[247704]: 2026-01-31 08:21:11.266 247708 DEBUG nova.network.neutron [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:21:11 compute-0 nova_compute[247704]: 2026-01-31 08:21:11.412 247708 DEBUG nova.compute.manager [req-94f5bda3-6768-4a1f-b7db-b258f46fe797 req-f4d3f93c-74e9-49e6-8d2e-a01aff73f0e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-changed-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:11 compute-0 nova_compute[247704]: 2026-01-31 08:21:11.412 247708 DEBUG nova.compute.manager [req-94f5bda3-6768-4a1f-b7db-b258f46fe797 req-f4d3f93c-74e9-49e6-8d2e-a01aff73f0e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Refreshing instance network info cache due to event network-changed-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:21:11 compute-0 nova_compute[247704]: 2026-01-31 08:21:11.413 247708 DEBUG oslo_concurrency.lockutils [req-94f5bda3-6768-4a1f-b7db-b258f46fe797 req-f4d3f93c-74e9-49e6-8d2e-a01aff73f0e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 356 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 171 op/s
Jan 31 08:21:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Jan 31 08:21:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Jan 31 08:21:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Jan 31 08:21:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.409 247708 DEBUG nova.network.neutron [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating instance_info_cache with network_info: [{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.443 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Releasing lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.444 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance network_info: |[{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.445 247708 DEBUG oslo_concurrency.lockutils [req-94f5bda3-6768-4a1f-b7db-b258f46fe797 req-f4d3f93c-74e9-49e6-8d2e-a01aff73f0e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.445 247708 DEBUG nova.network.neutron [req-94f5bda3-6768-4a1f-b7db-b258f46fe797 req-f4d3f93c-74e9-49e6-8d2e-a01aff73f0e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Refreshing network info cache for port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.448 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Start _get_guest_xml network_info=[{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.452 247708 WARNING nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.460 247708 DEBUG nova.virt.libvirt.host [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.460 247708 DEBUG nova.virt.libvirt.host [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.464 247708 DEBUG nova.virt.libvirt.host [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.464 247708 DEBUG nova.virt.libvirt.host [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.465 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.465 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.466 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.466 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.466 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.466 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.467 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.467 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.467 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.467 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.467 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.468 247708 DEBUG nova.virt.hardware [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.471 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:12.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956749508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.953 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:12 compute-0 ceph-mon[74496]: pgmap v2607: 305 pgs: 305 active+clean; 356 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 171 op/s
Jan 31 08:21:12 compute-0 ceph-mon[74496]: osdmap e340: 3 total, 3 up, 3 in
Jan 31 08:21:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2956749508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.988 247708 DEBUG nova.storage.rbd_utils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:12 compute-0 nova_compute[247704]: 2026-01-31 08:21:12.994 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:13.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828417063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.445 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.446 247708 DEBUG nova.virt.libvirt.vif [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1067474769',display_name='tempest-ServerRescueNegativeTestJSON-server-1067474769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1067474769',id=147,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL1XQ9RINPbQvwpLEj3q2/JDcye0xA2ChkvrNwwmCbJqk4tZWS/gilMtbqPwst/ucA1/c+m+Q83K+vDN4Tb2I1/339TQot95of7EYxS4NG33iMzRU+P4Lgy9rP/4HtB3Ww==',key_name='tempest-keypair-1375178499',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-573z0d2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6788b0883cb348719d1222b1c9483be2',uuid=9461628f-d09f-4923-824c-3b03dfe4bb13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.447 247708 DEBUG nova.network.os_vif_util [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.448 247708 DEBUG nova.network.os_vif_util [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:0b:2a,bridge_name='br-int',has_traffic_filtering=True,id=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe2b029-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.449 247708 DEBUG nova.objects.instance [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.482 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <uuid>9461628f-d09f-4923-824c-3b03dfe4bb13</uuid>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <name>instance-00000093</name>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1067474769</nova:name>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:21:12</nova:creationTime>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <nova:user uuid="6788b0883cb348719d1222b1c9483be2">tempest-ServerRescueNegativeTestJSON-1784809431-project-member</nova:user>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <nova:project uuid="4849ff916e1b4e2aa162faaf2c0717a2">tempest-ServerRescueNegativeTestJSON-1784809431</nova:project>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <nova:port uuid="5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e">
Jan 31 08:21:13 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <system>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <entry name="serial">9461628f-d09f-4923-824c-3b03dfe4bb13</entry>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <entry name="uuid">9461628f-d09f-4923-824c-3b03dfe4bb13</entry>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </system>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <os>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   </os>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <features>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   </features>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/9461628f-d09f-4923-824c-3b03dfe4bb13_disk">
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       </source>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config">
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       </source>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:21:13 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:df:0b:2a"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <target dev="tap5fe2b029-2d"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/console.log" append="off"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <video>
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </video>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:21:13 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:21:13 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:21:13 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:21:13 compute-0 nova_compute[247704]: </domain>
Jan 31 08:21:13 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.483 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Preparing to wait for external event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.483 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.484 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.484 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.485 247708 DEBUG nova.virt.libvirt.vif [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1067474769',display_name='tempest-ServerRescueNegativeTestJSON-server-1067474769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1067474769',id=147,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL1XQ9RINPbQvwpLEj3q2/JDcye0xA2ChkvrNwwmCbJqk4tZWS/gilMtbqPwst/ucA1/c+m+Q83K+vDN4Tb2I1/339TQot95of7EYxS4NG33iMzRU+P4Lgy9rP/4HtB3Ww==',key_name='tempest-keypair-1375178499',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-573z0d2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6788b0883cb348719d1222b1c9483be2',uuid=9461628f-d09f-4923-824c-3b03dfe4bb13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.485 247708 DEBUG nova.network.os_vif_util [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.485 247708 DEBUG nova.network.os_vif_util [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:0b:2a,bridge_name='br-int',has_traffic_filtering=True,id=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe2b029-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.486 247708 DEBUG os_vif [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0b:2a,bridge_name='br-int',has_traffic_filtering=True,id=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe2b029-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.486 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.487 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.487 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.491 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.491 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fe2b029-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.492 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fe2b029-2d, col_values=(('external_ids', {'iface-id': '5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:0b:2a', 'vm-uuid': '9461628f-d09f-4923-824c-3b03dfe4bb13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.493 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:13 compute-0 NetworkManager[49108]: <info>  [1769847673.4947] manager: (tap5fe2b029-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.496 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.499 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.500 247708 INFO os_vif [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0b:2a,bridge_name='br-int',has_traffic_filtering=True,id=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe2b029-2d')
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.573 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.574 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.574 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No VIF found with MAC fa:16:3e:df:0b:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.575 247708 INFO nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Using config drive
Jan 31 08:21:13 compute-0 nova_compute[247704]: 2026-01-31 08:21:13.601 247708 DEBUG nova.storage.rbd_utils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 363 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 245 op/s
Jan 31 08:21:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Jan 31 08:21:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2828417063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Jan 31 08:21:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.081 247708 INFO nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Creating config drive at /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.089 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx0f0ccju execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.215 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx0f0ccju" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.247 247708 DEBUG nova.storage.rbd_utils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.253 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.403 247708 DEBUG nova.network.neutron [req-94f5bda3-6768-4a1f-b7db-b258f46fe797 req-f4d3f93c-74e9-49e6-8d2e-a01aff73f0e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updated VIF entry in instance network info cache for port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.405 247708 DEBUG nova.network.neutron [req-94f5bda3-6768-4a1f-b7db-b258f46fe797 req-f4d3f93c-74e9-49e6-8d2e-a01aff73f0e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating instance_info_cache with network_info: [{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.428 247708 DEBUG oslo_concurrency.lockutils [req-94f5bda3-6768-4a1f-b7db-b258f46fe797 req-f4d3f93c-74e9-49e6-8d2e-a01aff73f0e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.548 247708 DEBUG oslo_concurrency.processutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.548 247708 INFO nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Deleting local config drive /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config because it was imported into RBD.
Jan 31 08:21:14 compute-0 kernel: tap5fe2b029-2d: entered promiscuous mode
Jan 31 08:21:14 compute-0 NetworkManager[49108]: <info>  [1769847674.5973] manager: (tap5fe2b029-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Jan 31 08:21:14 compute-0 ovn_controller[149457]: 2026-01-31T08:21:14Z|00631|binding|INFO|Claiming lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for this chassis.
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.636 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:14 compute-0 ovn_controller[149457]: 2026-01-31T08:21:14Z|00632|binding|INFO|5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e: Claiming fa:16:3e:df:0b:2a 10.100.0.13
Jan 31 08:21:14 compute-0 ovn_controller[149457]: 2026-01-31T08:21:14Z|00633|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e ovn-installed in OVS
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.643 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:14 compute-0 systemd-udevd[345823]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:21:14 compute-0 systemd-machined[214448]: New machine qemu-66-instance-00000093.
Jan 31 08:21:14 compute-0 NetworkManager[49108]: <info>  [1769847674.6661] device (tap5fe2b029-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:21:14 compute-0 NetworkManager[49108]: <info>  [1769847674.6670] device (tap5fe2b029-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:21:14 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-00000093.
Jan 31 08:21:14 compute-0 ovn_controller[149457]: 2026-01-31T08:21:14Z|00634|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e up in Southbound
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.683 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0b:2a 10.100.0.13'], port_security=['fa:16:3e:df:0b:2a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9461628f-d09f-4923-824c-3b03dfe4bb13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f73e2ea1-9bc5-4762-ba07-cf77fb23394f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.685 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 bound to our chassis
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.687 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.702 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[944c4d29-344a-4b88-9329-2029fe003d4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.734 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[703b6753-4f82-46f0-95a6-a8c30f4e1984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.738 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[df323d07-1d95-447c-9478-3772ca3a8b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.766 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d47f4e13-8607-48a7-944e-7da8c52a3013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.788 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[63bde347-b54f-4f37-8d68-55e02b2e5593]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786255, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345838, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.807 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[93c6f1ff-3719-4c49-8a97-216000c5c203]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786267, 'tstamp': 786267}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345839, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786270, 'tstamp': 786270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345839, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.809 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.811 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:14 compute-0 nova_compute[247704]: 2026-01-31 08:21:14.812 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.813 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.813 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.813 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:14.813 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:14.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Jan 31 08:21:15 compute-0 ceph-mon[74496]: pgmap v2609: 305 pgs: 305 active+clean; 363 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 245 op/s
Jan 31 08:21:15 compute-0 ceph-mon[74496]: osdmap e341: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.135 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847675.1346817, 9461628f-d09f-4923-824c-3b03dfe4bb13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.136 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] VM Started (Lifecycle Event)
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.260 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:15.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.318 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.321 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847675.1348667, 9461628f-d09f-4923-824c-3b03dfe4bb13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.322 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] VM Paused (Lifecycle Event)
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.351 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.355 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.399 247708 DEBUG nova.compute.manager [req-44935e40-4069-4dd0-bf71-084c3ee94655 req-076c9019-1f40-4c11-9247-66dbd871d7b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.400 247708 DEBUG oslo_concurrency.lockutils [req-44935e40-4069-4dd0-bf71-084c3ee94655 req-076c9019-1f40-4c11-9247-66dbd871d7b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.400 247708 DEBUG oslo_concurrency.lockutils [req-44935e40-4069-4dd0-bf71-084c3ee94655 req-076c9019-1f40-4c11-9247-66dbd871d7b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.400 247708 DEBUG oslo_concurrency.lockutils [req-44935e40-4069-4dd0-bf71-084c3ee94655 req-076c9019-1f40-4c11-9247-66dbd871d7b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.401 247708 DEBUG nova.compute.manager [req-44935e40-4069-4dd0-bf71-084c3ee94655 req-076c9019-1f40-4c11-9247-66dbd871d7b6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Processing event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.403 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.404 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.407 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847675.4073124, 9461628f-d09f-4923-824c-3b03dfe4bb13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.407 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] VM Resumed (Lifecycle Event)
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.409 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.413 247708 INFO nova.virt.libvirt.driver [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance spawned successfully.
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.414 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.439 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.442 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.452 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.453 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.453 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.454 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.454 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.454 247708 DEBUG nova.virt.libvirt.driver [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.496 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.551 247708 INFO nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Took 7.11 seconds to spawn the instance on the hypervisor.
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.551 247708 DEBUG nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.633 247708 INFO nova.compute.manager [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Took 8.41 seconds to build instance.
Jan 31 08:21:15 compute-0 nova_compute[247704]: 2026-01-31 08:21:15.656 247708 DEBUG oslo_concurrency.lockutils [None req-320da659-bec4-4601-a2e8-00e9f4284729 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.7 MiB/s wr, 310 op/s
Jan 31 08:21:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Jan 31 08:21:16 compute-0 ceph-mon[74496]: osdmap e342: 3 total, 3 up, 3 in
Jan 31 08:21:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Jan 31 08:21:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Jan 31 08:21:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:16.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Jan 31 08:21:17 compute-0 ceph-mon[74496]: pgmap v2612: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.7 MiB/s wr, 310 op/s
Jan 31 08:21:17 compute-0 ceph-mon[74496]: osdmap e343: 3 total, 3 up, 3 in
Jan 31 08:21:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Jan 31 08:21:17 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Jan 31 08:21:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:17.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:17 compute-0 NetworkManager[49108]: <info>  [1769847677.3702] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Jan 31 08:21:17 compute-0 NetworkManager[49108]: <info>  [1769847677.3710] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.437 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:17 compute-0 ovn_controller[149457]: 2026-01-31T08:21:17Z|00635|binding|INFO|Releasing lport 075aefe0-13df-4a17-ae95-485ece950a10 from this chassis (sb_readonly=0)
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.462 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.595 247708 DEBUG nova.compute.manager [req-79abd98b-3891-42db-8e97-4eede0135b33 req-cf68b63a-cc82-46aa-be91-1d860c087ebd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.595 247708 DEBUG oslo_concurrency.lockutils [req-79abd98b-3891-42db-8e97-4eede0135b33 req-cf68b63a-cc82-46aa-be91-1d860c087ebd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.596 247708 DEBUG oslo_concurrency.lockutils [req-79abd98b-3891-42db-8e97-4eede0135b33 req-cf68b63a-cc82-46aa-be91-1d860c087ebd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.596 247708 DEBUG oslo_concurrency.lockutils [req-79abd98b-3891-42db-8e97-4eede0135b33 req-cf68b63a-cc82-46aa-be91-1d860c087ebd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.596 247708 DEBUG nova.compute.manager [req-79abd98b-3891-42db-8e97-4eede0135b33 req-cf68b63a-cc82-46aa-be91-1d860c087ebd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:21:17 compute-0 nova_compute[247704]: 2026-01-31 08:21:17.596 247708 WARNING nova.compute.manager [req-79abd98b-3891-42db-8e97-4eede0135b33 req-cf68b63a-cc82-46aa-be91-1d860c087ebd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state active and task_state None.
Jan 31 08:21:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 325 op/s
Jan 31 08:21:18 compute-0 ceph-mon[74496]: osdmap e344: 3 total, 3 up, 3 in
Jan 31 08:21:18 compute-0 ceph-mon[74496]: pgmap v2615: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 325 op/s
Jan 31 08:21:18 compute-0 nova_compute[247704]: 2026-01-31 08:21:18.524 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:18 compute-0 nova_compute[247704]: 2026-01-31 08:21:18.575 247708 DEBUG nova.compute.manager [req-a669e30d-c9a3-4080-ae81-20ff814a8320 req-33c19d7c-9475-4cd9-865d-82daf19eb6b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-changed-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:21:18 compute-0 nova_compute[247704]: 2026-01-31 08:21:18.575 247708 DEBUG nova.compute.manager [req-a669e30d-c9a3-4080-ae81-20ff814a8320 req-33c19d7c-9475-4cd9-865d-82daf19eb6b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Refreshing instance network info cache due to event network-changed-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:21:18 compute-0 nova_compute[247704]: 2026-01-31 08:21:18.576 247708 DEBUG oslo_concurrency.lockutils [req-a669e30d-c9a3-4080-ae81-20ff814a8320 req-33c19d7c-9475-4cd9-865d-82daf19eb6b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:18 compute-0 nova_compute[247704]: 2026-01-31 08:21:18.576 247708 DEBUG oslo_concurrency.lockutils [req-a669e30d-c9a3-4080-ae81-20ff814a8320 req-33c19d7c-9475-4cd9-865d-82daf19eb6b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:18 compute-0 nova_compute[247704]: 2026-01-31 08:21:18.576 247708 DEBUG nova.network.neutron [req-a669e30d-c9a3-4080-ae81-20ff814a8320 req-33c19d7c-9475-4cd9-865d-82daf19eb6b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Refreshing network info cache for port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:21:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:18.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:18 compute-0 podman[345885]: 2026-01-31 08:21:18.899058227 +0000 UTC m=+0.067531448 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:21:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:19.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1073583248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 7.0 MiB/s wr, 289 op/s
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:21:20
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'images']
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:21:20 compute-0 nova_compute[247704]: 2026-01-31 08:21:20.263 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:20 compute-0 ceph-mon[74496]: pgmap v2616: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 7.0 MiB/s wr, 289 op/s
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:21:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:21:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:20.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:21 compute-0 nova_compute[247704]: 2026-01-31 08:21:21.072 247708 DEBUG nova.network.neutron [req-a669e30d-c9a3-4080-ae81-20ff814a8320 req-33c19d7c-9475-4cd9-865d-82daf19eb6b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updated VIF entry in instance network info cache for port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:21:21 compute-0 nova_compute[247704]: 2026-01-31 08:21:21.072 247708 DEBUG nova.network.neutron [req-a669e30d-c9a3-4080-ae81-20ff814a8320 req-33c19d7c-9475-4cd9-865d-82daf19eb6b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating instance_info_cache with network_info: [{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:21 compute-0 nova_compute[247704]: 2026-01-31 08:21:21.123 247708 DEBUG oslo_concurrency.lockutils [req-a669e30d-c9a3-4080-ae81-20ff814a8320 req-33c19d7c-9475-4cd9-865d-82daf19eb6b4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:21.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 413 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.4 MiB/s wr, 231 op/s
Jan 31 08:21:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Jan 31 08:21:22 compute-0 nova_compute[247704]: 2026-01-31 08:21:22.098 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847667.096351, 2050d0a7-d773-4467-886b-af07b4efa0d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:21:22 compute-0 nova_compute[247704]: 2026-01-31 08:21:22.098 247708 INFO nova.compute.manager [-] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] VM Stopped (Lifecycle Event)
Jan 31 08:21:22 compute-0 nova_compute[247704]: 2026-01-31 08:21:22.130 247708 DEBUG nova.compute.manager [None req-4c1e5d3b-d325-4fc4-89f7-0da2937a93ab - - - - - -] [instance: 2050d0a7-d773-4467-886b-af07b4efa0d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:21:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Jan 31 08:21:22 compute-0 ceph-mon[74496]: pgmap v2617: 305 pgs: 305 active+clean; 413 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.4 MiB/s wr, 231 op/s
Jan 31 08:21:22 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Jan 31 08:21:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:22.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:23.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:23 compute-0 ceph-mon[74496]: osdmap e345: 3 total, 3 up, 3 in
Jan 31 08:21:23 compute-0 nova_compute[247704]: 2026-01-31 08:21:23.529 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 31 08:21:24 compute-0 ceph-mon[74496]: pgmap v2619: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 31 08:21:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:24.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:25 compute-0 nova_compute[247704]: 2026-01-31 08:21:25.291 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:21:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:25.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:21:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2298688102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 483 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 6.3 MiB/s wr, 254 op/s
Jan 31 08:21:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1999579920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3591730240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:26 compute-0 ceph-mon[74496]: pgmap v2620: 305 pgs: 305 active+clean; 483 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 6.3 MiB/s wr, 254 op/s
Jan 31 08:21:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3856109048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:26.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:27.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 504 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 234 op/s
Jan 31 08:21:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/907480789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:28 compute-0 sudo[345911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:28 compute-0 sudo[345911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:28 compute-0 sudo[345911]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:28 compute-0 sudo[345936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:28 compute-0 sudo[345936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:28 compute-0 sudo[345936]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:28 compute-0 nova_compute[247704]: 2026-01-31 08:21:28.533 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:28.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:29 compute-0 ceph-mon[74496]: pgmap v2621: 305 pgs: 305 active+clean; 504 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.7 MiB/s wr, 234 op/s
Jan 31 08:21:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/907480789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:29.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:29 compute-0 ovn_controller[149457]: 2026-01-31T08:21:29Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:0b:2a 10.100.0.13
Jan 31 08:21:29 compute-0 ovn_controller[149457]: 2026-01-31T08:21:29Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:0b:2a 10.100.0.13
Jan 31 08:21:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 530 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.1 MiB/s wr, 231 op/s
Jan 31 08:21:30 compute-0 nova_compute[247704]: 2026-01-31 08:21:30.294 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:30 compute-0 ceph-mon[74496]: pgmap v2622: 305 pgs: 305 active+clean; 530 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.1 MiB/s wr, 231 op/s
Jan 31 08:21:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:30.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:31.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/410450162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.6 MiB/s wr, 247 op/s
Jan 31 08:21:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:32 compute-0 ceph-mon[74496]: pgmap v2623: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 8.6 MiB/s wr, 247 op/s
Jan 31 08:21:32 compute-0 nova_compute[247704]: 2026-01-31 08:21:32.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:32.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:33.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3987036066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:33 compute-0 nova_compute[247704]: 2026-01-31 08:21:33.576 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 565 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.7 MiB/s wr, 250 op/s
Jan 31 08:21:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2232488299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:34 compute-0 ceph-mon[74496]: pgmap v2624: 305 pgs: 305 active+clean; 565 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.7 MiB/s wr, 250 op/s
Jan 31 08:21:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:34.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:35 compute-0 nova_compute[247704]: 2026-01-31 08:21:35.297 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:35.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:35 compute-0 nova_compute[247704]: 2026-01-31 08:21:35.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011651137632607483 of space, bias 1.0, pg target 3.4953412897822447 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.000377865891959799 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003843544981853713 of space, bias 1.0, pg target 1.1415328596105527 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 08:21:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 568 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.3 MiB/s wr, 261 op/s
Jan 31 08:21:36 compute-0 ceph-mon[74496]: pgmap v2625: 305 pgs: 305 active+clean; 568 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.3 MiB/s wr, 261 op/s
Jan 31 08:21:36 compute-0 nova_compute[247704]: 2026-01-31 08:21:36.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:36.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:36 compute-0 podman[345965]: 2026-01-31 08:21:36.969398406 +0000 UTC m=+0.134749345 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:21:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:37 compute-0 nova_compute[247704]: 2026-01-31 08:21:37.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:37 compute-0 nova_compute[247704]: 2026-01-31 08:21:37.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:21:37 compute-0 nova_compute[247704]: 2026-01-31 08:21:37.587 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:21:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3451054505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3451054505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Jan 31 08:21:38 compute-0 nova_compute[247704]: 2026-01-31 08:21:38.581 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:38 compute-0 nova_compute[247704]: 2026-01-31 08:21:38.586 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:38 compute-0 nova_compute[247704]: 2026-01-31 08:21:38.587 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:21:38 compute-0 ceph-mon[74496]: pgmap v2626: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Jan 31 08:21:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:38.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:39.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:39 compute-0 nova_compute[247704]: 2026-01-31 08:21:39.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:39 compute-0 nova_compute[247704]: 2026-01-31 08:21:39.606 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:39 compute-0 nova_compute[247704]: 2026-01-31 08:21:39.607 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:39 compute-0 nova_compute[247704]: 2026-01-31 08:21:39.607 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:39 compute-0 nova_compute[247704]: 2026-01-31 08:21:39.607 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:21:39 compute-0 nova_compute[247704]: 2026-01-31 08:21:39.607 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 178 op/s
Jan 31 08:21:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:21:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092639563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.190 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.301 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:40 compute-0 ceph-mon[74496]: pgmap v2627: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.1 MiB/s wr, 178 op/s
Jan 31 08:21:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2092639563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.410 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.411 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.416 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.416 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.417 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.569 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.571 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3990MB free_disk=20.743377685546875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.571 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.572 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:21:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:40.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:21:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:40.950 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:21:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:40.951 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.963 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.964 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 9461628f-d09f-4923-824c-3b03dfe4bb13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.964 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.964 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:21:40 compute-0 nova_compute[247704]: 2026-01-31 08:21:40.988 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:41 compute-0 nova_compute[247704]: 2026-01-31 08:21:41.028 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:21:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:41.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:21:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:21:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1798986891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:41 compute-0 nova_compute[247704]: 2026-01-31 08:21:41.493 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:41 compute-0 nova_compute[247704]: 2026-01-31 08:21:41.501 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:21:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1798986891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 165 op/s
Jan 31 08:21:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:42 compute-0 nova_compute[247704]: 2026-01-31 08:21:42.084 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:21:42 compute-0 nova_compute[247704]: 2026-01-31 08:21:42.302 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:21:42 compute-0 nova_compute[247704]: 2026-01-31 08:21:42.302 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:42.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:42 compute-0 ceph-mon[74496]: pgmap v2628: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 165 op/s
Jan 31 08:21:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:43.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:43 compute-0 nova_compute[247704]: 2026-01-31 08:21:43.583 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1191907265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 315 KiB/s wr, 142 op/s
Jan 31 08:21:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:44.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:45 compute-0 ceph-mon[74496]: pgmap v2629: 305 pgs: 305 active+clean; 569 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 315 KiB/s wr, 142 op/s
Jan 31 08:21:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1300955847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2707252371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:45 compute-0 nova_compute[247704]: 2026-01-31 08:21:45.304 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:45 compute-0 nova_compute[247704]: 2026-01-31 08:21:45.304 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:21:45 compute-0 nova_compute[247704]: 2026-01-31 08:21:45.307 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:45 compute-0 nova_compute[247704]: 2026-01-31 08:21:45.615 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:45 compute-0 nova_compute[247704]: 2026-01-31 08:21:45.615 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:45 compute-0 nova_compute[247704]: 2026-01-31 08:21:45.615 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:21:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 593 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 162 op/s
Jan 31 08:21:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3079979552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:21:46 compute-0 ceph-mon[74496]: pgmap v2630: 305 pgs: 305 active+clean; 593 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 162 op/s
Jan 31 08:21:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:46.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:47.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 601 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.381 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updating instance_info_cache with network_info: [{"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.413 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.413 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.414 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.414 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.414 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.414 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:21:48 compute-0 sudo[346042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:48 compute-0 sudo[346042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:48 compute-0 sudo[346042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:48 compute-0 sudo[346067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:48 compute-0 sudo[346067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:48 compute-0 sudo[346067]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.588 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:48 compute-0 nova_compute[247704]: 2026-01-31 08:21:48.691 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:21:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:48.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:48.954 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:49 compute-0 ceph-mon[74496]: pgmap v2631: 305 pgs: 305 active+clean; 601 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Jan 31 08:21:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:49 compute-0 podman[346093]: 2026-01-31 08:21:49.92624564 +0000 UTC m=+0.097373269 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:21:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 614 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Jan 31 08:21:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.298 247708 DEBUG oslo_concurrency.lockutils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.299 247708 DEBUG oslo_concurrency.lockutils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.306 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.320 247708 DEBUG nova.objects.instance [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'flavor' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.385 247708 DEBUG oslo_concurrency.lockutils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.712 247708 DEBUG oslo_concurrency.lockutils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.713 247708 DEBUG oslo_concurrency.lockutils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.713 247708 INFO nova.compute.manager [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Attaching volume 9e7640b2-419e-4d50-9b23-7d76e34131d8 to /dev/vdb
Jan 31 08:21:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:50.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.986 247708 DEBUG os_brick.utils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.987 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.998 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:50 compute-0 nova_compute[247704]: 2026-01-31 08:21:50.999 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[807bc6ae-fe7a-4aaa-b961-acdc2cf21ae9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.000 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.006 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.006 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8153eb-7ecf-4a8a-9cfd-f6637313b344]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.008 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.013 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.014 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[763bf8aa-8c96-4a71-821f-ff5b80d81b95]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.016 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4d7de1-4b4d-4929-a94e-4fe4cf94c48c]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.016 247708 DEBUG oslo_concurrency.processutils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.049 247708 DEBUG oslo_concurrency.processutils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.051 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.051 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.052 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.052 247708 DEBUG os_brick.utils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.052 247708 DEBUG nova.virt.block_device [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating existing volume attachment record: 49d7fed1-88b6-4b64-8464-c958a7045955 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:21:51 compute-0 ceph-mon[74496]: pgmap v2632: 305 pgs: 305 active+clean; 614 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Jan 31 08:21:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:21:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951126738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.928 247708 DEBUG nova.objects.instance [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'flavor' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.962 247708 DEBUG nova.virt.libvirt.driver [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Attempting to attach volume 9e7640b2-419e-4d50-9b23-7d76e34131d8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:21:51 compute-0 nova_compute[247704]: 2026-01-31 08:21:51.965 247708 DEBUG nova.virt.libvirt.guest [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:21:51 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:21:51 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-9e7640b2-419e-4d50-9b23-7d76e34131d8">
Jan 31 08:21:51 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:21:51 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:21:51 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:21:51 compute-0 nova_compute[247704]:   </source>
Jan 31 08:21:51 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:21:51 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:21:51 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:21:51 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:21:51 compute-0 nova_compute[247704]:   <serial>9e7640b2-419e-4d50-9b23-7d76e34131d8</serial>
Jan 31 08:21:51 compute-0 nova_compute[247704]: </disk>
Jan 31 08:21:51 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:21:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Jan 31 08:21:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:52 compute-0 nova_compute[247704]: 2026-01-31 08:21:52.140 247708 DEBUG nova.virt.libvirt.driver [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:52 compute-0 nova_compute[247704]: 2026-01-31 08:21:52.140 247708 DEBUG nova.virt.libvirt.driver [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:52 compute-0 nova_compute[247704]: 2026-01-31 08:21:52.140 247708 DEBUG nova.virt.libvirt.driver [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:21:52 compute-0 nova_compute[247704]: 2026-01-31 08:21:52.141 247708 DEBUG nova.virt.libvirt.driver [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No VIF found with MAC fa:16:3e:df:0b:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:21:52 compute-0 nova_compute[247704]: 2026-01-31 08:21:52.375 247708 DEBUG oslo_concurrency.lockutils [None req-c6d9e662-809e-4242-8238-4b22c0572eb0 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:52.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2951126738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:21:52 compute-0 ceph-mon[74496]: pgmap v2633: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Jan 31 08:21:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:53.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:53 compute-0 nova_compute[247704]: 2026-01-31 08:21:53.592 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:53 compute-0 nova_compute[247704]: 2026-01-31 08:21:53.693 247708 INFO nova.compute.manager [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Rescuing
Jan 31 08:21:53 compute-0 nova_compute[247704]: 2026-01-31 08:21:53.694 247708 DEBUG oslo_concurrency.lockutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:21:53 compute-0 nova_compute[247704]: 2026-01-31 08:21:53.694 247708 DEBUG oslo_concurrency.lockutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquired lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:21:53 compute-0 nova_compute[247704]: 2026-01-31 08:21:53.694 247708 DEBUG nova.network.neutron [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:21:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 154 op/s
Jan 31 08:21:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3918858088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:21:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3918858088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:21:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:54.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:55 compute-0 ceph-mon[74496]: pgmap v2634: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.3 MiB/s wr, 154 op/s
Jan 31 08:21:55 compute-0 nova_compute[247704]: 2026-01-31 08:21:55.310 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:55.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:55 compute-0 sudo[346140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:55 compute-0 sudo[346140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:55 compute-0 sudo[346140]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:55 compute-0 sudo[346165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:55 compute-0 sudo[346165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:55 compute-0 sudo[346165]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:55 compute-0 sudo[346190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:55 compute-0 sudo[346190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:55 compute-0 sudo[346190]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:55 compute-0 sudo[346215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:21:55 compute-0 sudo[346215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 726 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 31 08:21:56 compute-0 ceph-mon[74496]: pgmap v2635: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 726 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Jan 31 08:21:56 compute-0 sudo[346215]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 08:21:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:21:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:21:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:21:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:21:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev aebbcbac-a8c3-4915-aafc-6714af24f14c does not exist
Jan 31 08:21:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev da27b3fc-1902-44b4-83bb-7cd9e4a1146e does not exist
Jan 31 08:21:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b05d463c-cb7e-43f4-9dc3-55f06a1d3ad5 does not exist
Jan 31 08:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:21:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:21:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:21:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:21:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:21:56 compute-0 nova_compute[247704]: 2026-01-31 08:21:56.370 247708 DEBUG nova.network.neutron [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating instance_info_cache with network_info: [{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:21:56 compute-0 nova_compute[247704]: 2026-01-31 08:21:56.391 247708 DEBUG oslo_concurrency.lockutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Releasing lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:21:56 compute-0 sudo[346271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:56 compute-0 sudo[346271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:56 compute-0 sudo[346271]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:56 compute-0 sudo[346296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:56 compute-0 sudo[346296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:56 compute-0 sudo[346296]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:56 compute-0 sudo[346321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:56 compute-0 sudo[346321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:56 compute-0 sudo[346321]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:56 compute-0 sudo[346346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:21:56 compute-0 sudo[346346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:56 compute-0 nova_compute[247704]: 2026-01-31 08:21:56.784 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:21:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:56.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:56 compute-0 podman[346411]: 2026-01-31 08:21:56.994164582 +0000 UTC m=+0.051455233 container create 7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:21:57 compute-0 systemd[1]: Started libpod-conmon-7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74.scope.
Jan 31 08:21:57 compute-0 podman[346411]: 2026-01-31 08:21:56.970950883 +0000 UTC m=+0.028241554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:21:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:21:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:57 compute-0 podman[346411]: 2026-01-31 08:21:57.143671198 +0000 UTC m=+0.200961949 container init 7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:21:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:21:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:21:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:21:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:21:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:21:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:21:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:21:57 compute-0 podman[346411]: 2026-01-31 08:21:57.159316492 +0000 UTC m=+0.216607143 container start 7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:21:57 compute-0 podman[346411]: 2026-01-31 08:21:57.164618511 +0000 UTC m=+0.221909182 container attach 7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:21:57 compute-0 jolly_khorana[346428]: 167 167
Jan 31 08:21:57 compute-0 systemd[1]: libpod-7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74.scope: Deactivated successfully.
Jan 31 08:21:57 compute-0 podman[346411]: 2026-01-31 08:21:57.170291241 +0000 UTC m=+0.227581912 container died 7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:21:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bb38150b20b705d3028dfb7279b8b7dada5dfa6d1a4b4854833ee15c6facfe4-merged.mount: Deactivated successfully.
Jan 31 08:21:57 compute-0 podman[346411]: 2026-01-31 08:21:57.238986666 +0000 UTC m=+0.296277357 container remove 7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_khorana, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:21:57 compute-0 systemd[1]: libpod-conmon-7c5223e92707eff9d779eedbaf61c8a560c0b52ce02843048a2fc66d4554ca74.scope: Deactivated successfully.
Jan 31 08:21:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:21:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:57.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:21:57 compute-0 podman[346454]: 2026-01-31 08:21:57.390212994 +0000 UTC m=+0.052629962 container create 97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:21:57 compute-0 systemd[1]: Started libpod-conmon-97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8.scope.
Jan 31 08:21:57 compute-0 podman[346454]: 2026-01-31 08:21:57.366966394 +0000 UTC m=+0.029383392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:21:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043191515ca5cec120f3605070d2eca99b08199e59198d80f918bf13b018a460/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043191515ca5cec120f3605070d2eca99b08199e59198d80f918bf13b018a460/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043191515ca5cec120f3605070d2eca99b08199e59198d80f918bf13b018a460/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043191515ca5cec120f3605070d2eca99b08199e59198d80f918bf13b018a460/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043191515ca5cec120f3605070d2eca99b08199e59198d80f918bf13b018a460/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:57 compute-0 podman[346454]: 2026-01-31 08:21:57.493044095 +0000 UTC m=+0.155461063 container init 97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:21:57 compute-0 podman[346454]: 2026-01-31 08:21:57.502253721 +0000 UTC m=+0.164670679 container start 97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:21:57 compute-0 podman[346454]: 2026-01-31 08:21:57.506378743 +0000 UTC m=+0.168795701 container attach 97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:21:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Jan 31 08:21:58 compute-0 ceph-mon[74496]: pgmap v2636: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Jan 31 08:21:58 compute-0 gallant_sinoussi[346471]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:21:58 compute-0 gallant_sinoussi[346471]: --> relative data size: 1.0
Jan 31 08:21:58 compute-0 gallant_sinoussi[346471]: --> All data devices are unavailable
Jan 31 08:21:58 compute-0 systemd[1]: libpod-97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8.scope: Deactivated successfully.
Jan 31 08:21:58 compute-0 podman[346454]: 2026-01-31 08:21:58.36228452 +0000 UTC m=+1.024701488 container died 97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:21:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-043191515ca5cec120f3605070d2eca99b08199e59198d80f918bf13b018a460-merged.mount: Deactivated successfully.
Jan 31 08:21:58 compute-0 podman[346454]: 2026-01-31 08:21:58.428613566 +0000 UTC m=+1.091030524 container remove 97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:21:58 compute-0 systemd[1]: libpod-conmon-97730a1f0beac39dc25f2ad5d4f29a2820408163019925ab0303d840b0a613e8.scope: Deactivated successfully.
Jan 31 08:21:58 compute-0 sudo[346346]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:58 compute-0 sudo[346502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:58 compute-0 sudo[346502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:58 compute-0 sudo[346502]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:58 compute-0 sudo[346527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:58 compute-0 sudo[346527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:58 compute-0 nova_compute[247704]: 2026-01-31 08:21:58.633 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:58 compute-0 sudo[346527]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:58 compute-0 sudo[346552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:21:58 compute-0 sudo[346552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:58 compute-0 sudo[346552]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:58 compute-0 sudo[346577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:21:58 compute-0 sudo[346577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:58.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:59 compute-0 kernel: tap5fe2b029-2d (unregistering): left promiscuous mode
Jan 31 08:21:59 compute-0 podman[346641]: 2026-01-31 08:21:59.182779349 +0000 UTC m=+0.057217764 container create 9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:21:59 compute-0 NetworkManager[49108]: <info>  [1769847719.1843] device (tap5fe2b029-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:21:59 compute-0 nova_compute[247704]: 2026-01-31 08:21:59.200 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:59 compute-0 ovn_controller[149457]: 2026-01-31T08:21:59Z|00636|binding|INFO|Releasing lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e from this chassis (sb_readonly=0)
Jan 31 08:21:59 compute-0 ovn_controller[149457]: 2026-01-31T08:21:59Z|00637|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e down in Southbound
Jan 31 08:21:59 compute-0 ovn_controller[149457]: 2026-01-31T08:21:59Z|00638|binding|INFO|Removing iface tap5fe2b029-2d ovn-installed in OVS
Jan 31 08:21:59 compute-0 nova_compute[247704]: 2026-01-31 08:21:59.203 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:59 compute-0 nova_compute[247704]: 2026-01-31 08:21:59.209 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:59 compute-0 systemd[1]: Started libpod-conmon-9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c.scope.
Jan 31 08:21:59 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 31 08:21:59 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000093.scope: Consumed 15.606s CPU time.
Jan 31 08:21:59 compute-0 systemd-machined[214448]: Machine qemu-66-instance-00000093 terminated.
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.251 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0b:2a 10.100.0.13'], port_security=['fa:16:3e:df:0b:2a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9461628f-d09f-4923-824c-3b03dfe4bb13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f73e2ea1-9bc5-4762-ba07-cf77fb23394f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:21:59 compute-0 podman[346641]: 2026-01-31 08:21:59.163128107 +0000 UTC m=+0.037566312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.259 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.263 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:21:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.284 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0c460eac-d29e-4869-8562-b7f1739394c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:59 compute-0 podman[346641]: 2026-01-31 08:21:59.302173047 +0000 UTC m=+0.176611262 container init 9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 08:21:59 compute-0 podman[346641]: 2026-01-31 08:21:59.314503479 +0000 UTC m=+0.188941674 container start 9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cannon, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:21:59 compute-0 podman[346641]: 2026-01-31 08:21:59.318797674 +0000 UTC m=+0.193235959 container attach 9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cannon, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.319 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c130e3-1dfa-48c3-ad76-a77cc185fdc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:59 compute-0 vibrant_cannon[346659]: 167 167
Jan 31 08:21:59 compute-0 systemd[1]: libpod-9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c.scope: Deactivated successfully.
Jan 31 08:21:59 compute-0 conmon[346659]: conmon 9a0eddbd76637bf3c9b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c.scope/container/memory.events
Jan 31 08:21:59 compute-0 podman[346641]: 2026-01-31 08:21:59.325046717 +0000 UTC m=+0.199484902 container died 9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.325 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d96203ce-92ed-4b65-b319-2c554d6fe832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:21:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:21:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:59.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:21:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-27c1eed2d75c133ef19040b3211caa482ead689e0d3732a9fee815954155d2ef-merged.mount: Deactivated successfully.
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.358 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4380b5d2-02b1-48a2-b645-ea4b239a8673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:59 compute-0 podman[346641]: 2026-01-31 08:21:59.381129933 +0000 UTC m=+0.255568148 container remove 9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cannon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.383 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[300bb67d-b05a-4fb2-bd1e-1ce364b96741]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786255, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346685, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:59 compute-0 systemd[1]: libpod-conmon-9a0eddbd76637bf3c9b3f32c84a899a7a02cbac8011aa87a050d95485261ce3c.scope: Deactivated successfully.
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.403 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0dc0d2-1e72-4db6-ae8c-c0c0d9616438]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786267, 'tstamp': 786267}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346687, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786270, 'tstamp': 786270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346687, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.405 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:59 compute-0 nova_compute[247704]: 2026-01-31 08:21:59.407 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:59 compute-0 nova_compute[247704]: 2026-01-31 08:21:59.415 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.418 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.419 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.419 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:21:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:21:59.420 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:21:59 compute-0 podman[346707]: 2026-01-31 08:21:59.57147927 +0000 UTC m=+0.050774456 container create a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:21:59 compute-0 systemd[1]: Started libpod-conmon-a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14.scope.
Jan 31 08:21:59 compute-0 podman[346707]: 2026-01-31 08:21:59.549366338 +0000 UTC m=+0.028661534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:21:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bccd2d1b89b19f390713592db16031ebb1fe675d3b39763d2be644d332d6af9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bccd2d1b89b19f390713592db16031ebb1fe675d3b39763d2be644d332d6af9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bccd2d1b89b19f390713592db16031ebb1fe675d3b39763d2be644d332d6af9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bccd2d1b89b19f390713592db16031ebb1fe675d3b39763d2be644d332d6af9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:59 compute-0 podman[346707]: 2026-01-31 08:21:59.693127823 +0000 UTC m=+0.172423049 container init a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:21:59 compute-0 podman[346707]: 2026-01-31 08:21:59.704155763 +0000 UTC m=+0.183450939 container start a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:21:59 compute-0 podman[346707]: 2026-01-31 08:21:59.710248934 +0000 UTC m=+0.189544100 container attach a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:21:59 compute-0 nova_compute[247704]: 2026-01-31 08:21:59.811 247708 INFO nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance shutdown successfully after 3 seconds.
Jan 31 08:21:59 compute-0 nova_compute[247704]: 2026-01-31 08:21:59.820 247708 INFO nova.virt.libvirt.driver [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance destroyed successfully.
Jan 31 08:21:59 compute-0 nova_compute[247704]: 2026-01-31 08:21:59.821 247708 DEBUG nova.objects.instance [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:21:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.128 247708 INFO nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Attempting rescue
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.130 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.134 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.135 247708 INFO nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Creating image(s)
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.182 247708 DEBUG nova.storage.rbd_utils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.187 247708 DEBUG nova.objects.instance [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.317 247708 DEBUG nova.storage.rbd_utils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.364 247708 DEBUG nova.storage.rbd_utils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.371 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:00 compute-0 suspicious_bell[346723]: {
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:     "0": [
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:         {
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "devices": [
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "/dev/loop3"
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             ],
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "lv_name": "ceph_lv0",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "lv_size": "7511998464",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "name": "ceph_lv0",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "tags": {
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.cluster_name": "ceph",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.crush_device_class": "",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.encrypted": "0",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.osd_id": "0",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.type": "block",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:                 "ceph.vdo": "0"
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             },
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "type": "block",
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:             "vg_name": "ceph_vg0"
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:         }
Jan 31 08:22:00 compute-0 suspicious_bell[346723]:     ]
Jan 31 08:22:00 compute-0 suspicious_bell[346723]: }
Jan 31 08:22:00 compute-0 systemd[1]: libpod-a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14.scope: Deactivated successfully.
Jan 31 08:22:00 compute-0 podman[346707]: 2026-01-31 08:22:00.451793606 +0000 UTC m=+0.931088762 container died a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.462 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.465 247708 DEBUG oslo_concurrency.lockutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.466 247708 DEBUG oslo_concurrency.lockutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.467 247708 DEBUG oslo_concurrency.lockutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bccd2d1b89b19f390713592db16031ebb1fe675d3b39763d2be644d332d6af9-merged.mount: Deactivated successfully.
Jan 31 08:22:00 compute-0 podman[346707]: 2026-01-31 08:22:00.513152061 +0000 UTC m=+0.992447217 container remove a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.519 247708 DEBUG nova.storage.rbd_utils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.525 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:00 compute-0 systemd[1]: libpod-conmon-a697c65b88e4f4ab7b22af86c780d5ab091f8b2da56d59593c850f187cdaef14.scope: Deactivated successfully.
Jan 31 08:22:00 compute-0 sudo[346577]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.559 247708 DEBUG nova.compute.manager [req-e567a130-e572-427e-8b95-07ab9598550b req-1063892a-b873-440a-b937-d8bab0dd8873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.560 247708 DEBUG oslo_concurrency.lockutils [req-e567a130-e572-427e-8b95-07ab9598550b req-1063892a-b873-440a-b937-d8bab0dd8873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.561 247708 DEBUG oslo_concurrency.lockutils [req-e567a130-e572-427e-8b95-07ab9598550b req-1063892a-b873-440a-b937-d8bab0dd8873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.561 247708 DEBUG oslo_concurrency.lockutils [req-e567a130-e572-427e-8b95-07ab9598550b req-1063892a-b873-440a-b937-d8bab0dd8873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.561 247708 DEBUG nova.compute.manager [req-e567a130-e572-427e-8b95-07ab9598550b req-1063892a-b873-440a-b937-d8bab0dd8873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.562 247708 WARNING nova.compute.manager [req-e567a130-e572-427e-8b95-07ab9598550b req-1063892a-b873-440a-b937-d8bab0dd8873 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state active and task_state rescuing.
Jan 31 08:22:00 compute-0 sudo[346819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:00 compute-0 sudo[346819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:00 compute-0 sudo[346819]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:00 compute-0 sudo[346859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:22:00 compute-0 sudo[346859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:00 compute-0 sudo[346859]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:00 compute-0 sudo[346887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:00 compute-0 sudo[346887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:00 compute-0 sudo[346887]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:00 compute-0 sudo[346912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:22:00 compute-0 sudo[346912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.838 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.838 247708 DEBUG nova.objects.instance [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.921 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.923 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Start _get_guest_xml network_info=[{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:df:0b:2a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7c23949f-bba8-4466-bb79-caf568852d38', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.926 247708 DEBUG nova.objects.instance [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'resources' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:00.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.982 247708 WARNING nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.989 247708 DEBUG nova.virt.libvirt.host [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.990 247708 DEBUG nova.virt.libvirt.host [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.997 247708 DEBUG nova.virt.libvirt.host [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:22:00 compute-0 nova_compute[247704]: 2026-01-31 08:22:00.998 247708 DEBUG nova.virt.libvirt.host [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.000 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.000 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.001 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.001 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.002 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.002 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.002 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.003 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.003 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.003 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.003 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.004 247708 DEBUG nova.virt.hardware [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.004 247708 DEBUG nova.objects.instance [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.059 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:01 compute-0 ceph-mon[74496]: pgmap v2637: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 31 08:22:01 compute-0 podman[346975]: 2026-01-31 08:22:01.129643658 +0000 UTC m=+0.053616216 container create 33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:22:01 compute-0 systemd[1]: Started libpod-conmon-33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e.scope.
Jan 31 08:22:01 compute-0 podman[346975]: 2026-01-31 08:22:01.101832066 +0000 UTC m=+0.025804634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:22:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:01 compute-0 podman[346975]: 2026-01-31 08:22:01.224913284 +0000 UTC m=+0.148885932 container init 33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:01 compute-0 podman[346975]: 2026-01-31 08:22:01.234037328 +0000 UTC m=+0.158009886 container start 33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:22:01 compute-0 podman[346975]: 2026-01-31 08:22:01.238433486 +0000 UTC m=+0.162406104 container attach 33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:22:01 compute-0 jovial_buck[346992]: 167 167
Jan 31 08:22:01 compute-0 systemd[1]: libpod-33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e.scope: Deactivated successfully.
Jan 31 08:22:01 compute-0 podman[346975]: 2026-01-31 08:22:01.240151818 +0000 UTC m=+0.164124416 container died 33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 08:22:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-973b40ccf46c7273068a3ef9088dc604049182fce29810000ab1acc24e795612-merged.mount: Deactivated successfully.
Jan 31 08:22:01 compute-0 podman[346975]: 2026-01-31 08:22:01.288176556 +0000 UTC m=+0.212149074 container remove 33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:22:01 compute-0 systemd[1]: libpod-conmon-33964d2f3677385f944e95fab089b13a2390bf4a8e7e6a922a5fce037e69ce5e.scope: Deactivated successfully.
Jan 31 08:22:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:01.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:01 compute-0 podman[347035]: 2026-01-31 08:22:01.438820929 +0000 UTC m=+0.050110260 container create c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:22:01 compute-0 systemd[1]: Started libpod-conmon-c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38.scope.
Jan 31 08:22:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026c504dae50ecd54957a77f7b59543046817d5316b1d3db698c7fd0809cbf11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026c504dae50ecd54957a77f7b59543046817d5316b1d3db698c7fd0809cbf11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026c504dae50ecd54957a77f7b59543046817d5316b1d3db698c7fd0809cbf11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026c504dae50ecd54957a77f7b59543046817d5316b1d3db698c7fd0809cbf11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:01 compute-0 podman[347035]: 2026-01-31 08:22:01.413389026 +0000 UTC m=+0.024678377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:22:01 compute-0 podman[347035]: 2026-01-31 08:22:01.520400319 +0000 UTC m=+0.131689670 container init c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:22:01 compute-0 podman[347035]: 2026-01-31 08:22:01.528199971 +0000 UTC m=+0.139489312 container start c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_darwin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:22:01 compute-0 podman[347035]: 2026-01-31 08:22:01.532818164 +0000 UTC m=+0.144107605 container attach c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:22:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:22:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300814650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.555 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.557 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:22:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469410732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 640 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.997 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:01 compute-0 nova_compute[247704]: 2026-01-31 08:22:01.999 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3300814650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1469410732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:02 compute-0 trusting_darwin[347051]: {
Jan 31 08:22:02 compute-0 trusting_darwin[347051]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:22:02 compute-0 trusting_darwin[347051]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:22:02 compute-0 trusting_darwin[347051]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:22:02 compute-0 trusting_darwin[347051]:         "osd_id": 0,
Jan 31 08:22:02 compute-0 trusting_darwin[347051]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:22:02 compute-0 trusting_darwin[347051]:         "type": "bluestore"
Jan 31 08:22:02 compute-0 trusting_darwin[347051]:     }
Jan 31 08:22:02 compute-0 trusting_darwin[347051]: }
Jan 31 08:22:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:22:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3061871491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:02 compute-0 systemd[1]: libpod-c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38.scope: Deactivated successfully.
Jan 31 08:22:02 compute-0 podman[347035]: 2026-01-31 08:22:02.431815598 +0000 UTC m=+1.043104979 container died c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.439 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.442 247708 DEBUG nova.virt.libvirt.vif [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1067474769',display_name='tempest-ServerRescueNegativeTestJSON-server-1067474769',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1067474769',id=147,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL1XQ9RINPbQvwpLEj3q2/JDcye0xA2ChkvrNwwmCbJqk4tZWS/gilMtbqPwst/ucA1/c+m+Q83K+vDN4Tb2I1/339TQot95of7EYxS4NG33iMzRU+P4Lgy9rP/4HtB3Ww==',key_name='tempest-keypair-1375178499',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-573z0d2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6788b0883cb348719d1222b1c9483be2',uuid=9461628f-d09f-4923-824c-3b03dfe4bb13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:df:0b:2a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.443 247708 DEBUG nova.network.os_vif_util [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "vif_mac": "fa:16:3e:df:0b:2a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.445 247708 DEBUG nova.network.os_vif_util [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:0b:2a,bridge_name='br-int',has_traffic_filtering=True,id=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe2b029-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.448 247708 DEBUG nova.objects.instance [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-026c504dae50ecd54957a77f7b59543046817d5316b1d3db698c7fd0809cbf11-merged.mount: Deactivated successfully.
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.479 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <uuid>9461628f-d09f-4923-824c-3b03dfe4bb13</uuid>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <name>instance-00000093</name>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1067474769</nova:name>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:22:00</nova:creationTime>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <nova:user uuid="6788b0883cb348719d1222b1c9483be2">tempest-ServerRescueNegativeTestJSON-1784809431-project-member</nova:user>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <nova:project uuid="4849ff916e1b4e2aa162faaf2c0717a2">tempest-ServerRescueNegativeTestJSON-1784809431</nova:project>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <nova:port uuid="5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e">
Jan 31 08:22:02 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <system>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <entry name="serial">9461628f-d09f-4923-824c-3b03dfe4bb13</entry>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <entry name="uuid">9461628f-d09f-4923-824c-3b03dfe4bb13</entry>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </system>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <os>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   </os>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <features>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   </features>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/9461628f-d09f-4923-824c-3b03dfe4bb13_disk.rescue">
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </source>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/9461628f-d09f-4923-824c-3b03dfe4bb13_disk">
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </source>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config.rescue">
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </source>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:22:02 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:df:0b:2a"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <target dev="tap5fe2b029-2d"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/console.log" append="off"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <video>
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </video>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:22:02 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:22:02 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:22:02 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:22:02 compute-0 nova_compute[247704]: </domain>
Jan 31 08:22:02 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.489 247708 INFO nova.virt.libvirt.driver [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance destroyed successfully.
Jan 31 08:22:02 compute-0 podman[347035]: 2026-01-31 08:22:02.49182437 +0000 UTC m=+1.103113701 container remove c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:22:02 compute-0 systemd[1]: libpod-conmon-c0a450bde442e75292b44d422b0968b5f93eb9b992b302e8a9456347a137ee38.scope: Deactivated successfully.
Jan 31 08:22:02 compute-0 sudo[346912]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:22:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:22:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:22:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:22:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bc343b00-c09b-4142-b162-0b8a32c8a08a does not exist
Jan 31 08:22:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3b647932-6d26-44d0-a6b0-a3480a2921f2 does not exist
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.564 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.564 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.565 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.565 247708 DEBUG nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] No VIF found with MAC fa:16:3e:df:0b:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:22:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev af9d50e7-e5ab-4f89-8acd-b6b1c6202580 does not exist
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.565 247708 INFO nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Using config drive
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.597 247708 DEBUG nova.storage.rbd_utils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.622 247708 DEBUG nova.objects.instance [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:02 compute-0 sudo[347131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:02 compute-0 sudo[347131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:02 compute-0 sudo[347131]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.694 247708 DEBUG nova.objects.instance [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'keypairs' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:02 compute-0 sudo[347174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:22:02 compute-0 sudo[347174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:02 compute-0 sudo[347174]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.707 247708 DEBUG nova.compute.manager [req-b551661a-fa14-45f1-bd55-603a580ce29b req-1ba05880-112b-40b1-9a12-9517b3ff9aa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.707 247708 DEBUG oslo_concurrency.lockutils [req-b551661a-fa14-45f1-bd55-603a580ce29b req-1ba05880-112b-40b1-9a12-9517b3ff9aa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.708 247708 DEBUG oslo_concurrency.lockutils [req-b551661a-fa14-45f1-bd55-603a580ce29b req-1ba05880-112b-40b1-9a12-9517b3ff9aa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.708 247708 DEBUG oslo_concurrency.lockutils [req-b551661a-fa14-45f1-bd55-603a580ce29b req-1ba05880-112b-40b1-9a12-9517b3ff9aa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.708 247708 DEBUG nova.compute.manager [req-b551661a-fa14-45f1-bd55-603a580ce29b req-1ba05880-112b-40b1-9a12-9517b3ff9aa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:02 compute-0 nova_compute[247704]: 2026-01-31 08:22:02.708 247708 WARNING nova.compute.manager [req-b551661a-fa14-45f1-bd55-603a580ce29b req-1ba05880-112b-40b1-9a12-9517b3ff9aa6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state active and task_state rescuing.
Jan 31 08:22:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:02.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:03 compute-0 ceph-mon[74496]: pgmap v2638: 305 pgs: 305 active+clean; 640 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Jan 31 08:22:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3061871491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:22:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:22:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:03.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:03 compute-0 nova_compute[247704]: 2026-01-31 08:22:03.563 247708 INFO nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Creating config drive at /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config.rescue
Jan 31 08:22:03 compute-0 nova_compute[247704]: 2026-01-31 08:22:03.571 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsvgg3wmv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:03 compute-0 nova_compute[247704]: 2026-01-31 08:22:03.689 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:03 compute-0 nova_compute[247704]: 2026-01-31 08:22:03.700 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsvgg3wmv" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:03 compute-0 nova_compute[247704]: 2026-01-31 08:22:03.746 247708 DEBUG nova.storage.rbd_utils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] rbd image 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:22:03 compute-0 nova_compute[247704]: 2026-01-31 08:22:03.752 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config.rescue 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:03 compute-0 nova_compute[247704]: 2026-01-31 08:22:03.942 247708 DEBUG oslo_concurrency.processutils [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config.rescue 9461628f-d09f-4923-824c-3b03dfe4bb13_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:03 compute-0 nova_compute[247704]: 2026-01-31 08:22:03.944 247708 INFO nova.virt.libvirt.driver [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Deleting local config drive /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13/disk.config.rescue because it was imported into RBD.
Jan 31 08:22:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 897 KiB/s wr, 50 op/s
Jan 31 08:22:04 compute-0 kernel: tap5fe2b029-2d: entered promiscuous mode
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.012 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:04 compute-0 NetworkManager[49108]: <info>  [1769847724.0141] manager: (tap5fe2b029-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Jan 31 08:22:04 compute-0 ovn_controller[149457]: 2026-01-31T08:22:04Z|00639|binding|INFO|Claiming lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for this chassis.
Jan 31 08:22:04 compute-0 ovn_controller[149457]: 2026-01-31T08:22:04Z|00640|binding|INFO|5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e: Claiming fa:16:3e:df:0b:2a 10.100.0.13
Jan 31 08:22:04 compute-0 ovn_controller[149457]: 2026-01-31T08:22:04Z|00641|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e ovn-installed in OVS
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.023 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:04 compute-0 ovn_controller[149457]: 2026-01-31T08:22:04Z|00642|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e up in Southbound
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.027 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0b:2a 10.100.0.13'], port_security=['fa:16:3e:df:0b:2a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9461628f-d09f-4923-824c-3b03dfe4bb13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f73e2ea1-9bc5-4762-ba07-cf77fb23394f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.028 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.031 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 bound to our chassis
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.035 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:22:04 compute-0 systemd-udevd[347253]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.049 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2732e4e3-8188-4720-8d67-b272307cf027]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:04 compute-0 NetworkManager[49108]: <info>  [1769847724.0624] device (tap5fe2b029-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:22:04 compute-0 NetworkManager[49108]: <info>  [1769847724.0633] device (tap5fe2b029-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:22:04 compute-0 systemd-machined[214448]: New machine qemu-67-instance-00000093.
Jan 31 08:22:04 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-00000093.
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.093 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f481368e-dca2-46ef-b39c-776f792349b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.099 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[75a252c2-b54a-4776-9194-04f3e6d5eba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:04 compute-0 ceph-mon[74496]: pgmap v2639: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 897 KiB/s wr, 50 op/s
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.129 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6d510a14-329f-42ca-ab07-937a469f8853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.149 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ee0007-3ad7-4d7c-b93f-d8a831d5c11c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786255, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347266, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.166 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4d871292-cf50-487b-8926-08dd706c86a8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786267, 'tstamp': 786267}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347268, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786270, 'tstamp': 786270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347268, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.167 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.169 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.170 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.170 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.170 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.171 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:04.171 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.573 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 9461628f-d09f-4923-824c-3b03dfe4bb13 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.573 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847724.5723176, 9461628f-d09f-4923-824c-3b03dfe4bb13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.574 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] VM Resumed (Lifecycle Event)
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.580 247708 DEBUG nova.compute.manager [None req-a5251845-b730-44cd-8c38-5c4006dca0eb 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.690 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.695 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.733 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847724.5749516, 9461628f-d09f-4923-824c-3b03dfe4bb13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.734 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] VM Started (Lifecycle Event)
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.783 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.787 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.872 247708 DEBUG nova.compute.manager [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.872 247708 DEBUG oslo_concurrency.lockutils [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.874 247708 DEBUG oslo_concurrency.lockutils [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.874 247708 DEBUG oslo_concurrency.lockutils [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.875 247708 DEBUG nova.compute.manager [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.875 247708 WARNING nova.compute.manager [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state rescued and task_state None.
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.875 247708 DEBUG nova.compute.manager [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.875 247708 DEBUG oslo_concurrency.lockutils [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.875 247708 DEBUG oslo_concurrency.lockutils [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.876 247708 DEBUG oslo_concurrency.lockutils [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.876 247708 DEBUG nova.compute.manager [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:04 compute-0 nova_compute[247704]: 2026-01-31 08:22:04.876 247708 WARNING nova.compute.manager [req-b96f7300-39c6-4163-98d6-2b01c4f8ddd8 req-d4e41e34-9339-47c5-8d38-c21654eecd63 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state rescued and task_state None.
Jan 31 08:22:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:04.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:05 compute-0 nova_compute[247704]: 2026-01-31 08:22:05.313 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 31 08:22:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:07 compute-0 ceph-mon[74496]: pgmap v2640: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 31 08:22:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.104723) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727104809, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1504, "num_deletes": 254, "total_data_size": 2307674, "memory_usage": 2341120, "flush_reason": "Manual Compaction"}
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727128390, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 2267714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56353, "largest_seqno": 57855, "table_properties": {"data_size": 2260799, "index_size": 3922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14863, "raw_average_key_size": 19, "raw_value_size": 2246497, "raw_average_value_size": 2952, "num_data_blocks": 170, "num_entries": 761, "num_filter_entries": 761, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847610, "oldest_key_time": 1769847610, "file_creation_time": 1769847727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 23749 microseconds, and 7842 cpu microseconds.
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.128465) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 2267714 bytes OK
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.128507) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.133378) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.133431) EVENT_LOG_v1 {"time_micros": 1769847727133417, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.133463) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 2301104, prev total WAL file size 2301104, number of live WAL files 2.
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.134566) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(2214KB)], [125(11MB)]
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727134660, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 14767367, "oldest_snapshot_seqno": -1}
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 8694 keys, 13659771 bytes, temperature: kUnknown
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727284908, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 13659771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13600236, "index_size": 36702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227305, "raw_average_key_size": 26, "raw_value_size": 13443984, "raw_average_value_size": 1546, "num_data_blocks": 1425, "num_entries": 8694, "num_filter_entries": 8694, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.285283) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13659771 bytes
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.286972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.2 rd, 90.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.9 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(12.5) write-amplify(6.0) OK, records in: 9223, records dropped: 529 output_compression: NoCompression
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.286995) EVENT_LOG_v1 {"time_micros": 1769847727286983, "job": 76, "event": "compaction_finished", "compaction_time_micros": 150355, "compaction_time_cpu_micros": 23077, "output_level": 6, "num_output_files": 1, "total_output_size": 13659771, "num_input_records": 9223, "num_output_records": 8694, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727287347, "job": 76, "event": "table_file_deletion", "file_number": 127}
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847727288627, "job": 76, "event": "table_file_deletion", "file_number": 125}
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.134376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.288769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.288779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.288782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.288785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:22:07.288788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:22:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:07 compute-0 nova_compute[247704]: 2026-01-31 08:22:07.554 247708 INFO nova.compute.manager [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Unrescuing
Jan 31 08:22:07 compute-0 nova_compute[247704]: 2026-01-31 08:22:07.555 247708 DEBUG oslo_concurrency.lockutils [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:22:07 compute-0 nova_compute[247704]: 2026-01-31 08:22:07.555 247708 DEBUG oslo_concurrency.lockutils [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquired lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:22:07 compute-0 nova_compute[247704]: 2026-01-31 08:22:07.555 247708 DEBUG nova.network.neutron [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:22:07 compute-0 podman[347331]: 2026-01-31 08:22:07.947131369 +0000 UTC m=+0.111069994 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:22:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 31 08:22:08 compute-0 ceph-mon[74496]: pgmap v2641: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 31 08:22:08 compute-0 sudo[347357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:08 compute-0 sudo[347357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:08 compute-0 sudo[347357]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:08 compute-0 nova_compute[247704]: 2026-01-31 08:22:08.729 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:08 compute-0 sudo[347382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:08 compute-0 sudo[347382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:08 compute-0 sudo[347382]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:22:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:08.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:22:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:09.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 31 08:22:10 compute-0 nova_compute[247704]: 2026-01-31 08:22:10.315 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:10 compute-0 nova_compute[247704]: 2026-01-31 08:22:10.784 247708 DEBUG nova.network.neutron [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating instance_info_cache with network_info: [{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:10 compute-0 nova_compute[247704]: 2026-01-31 08:22:10.813 247708 DEBUG oslo_concurrency.lockutils [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Releasing lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:22:10 compute-0 nova_compute[247704]: 2026-01-31 08:22:10.814 247708 DEBUG nova.objects.instance [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'flavor' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:10 compute-0 kernel: tap5fe2b029-2d (unregistering): left promiscuous mode
Jan 31 08:22:10 compute-0 NetworkManager[49108]: <info>  [1769847730.9403] device (tap5fe2b029-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:22:10 compute-0 nova_compute[247704]: 2026-01-31 08:22:10.949 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:10 compute-0 ovn_controller[149457]: 2026-01-31T08:22:10Z|00643|binding|INFO|Releasing lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e from this chassis (sb_readonly=0)
Jan 31 08:22:10 compute-0 ovn_controller[149457]: 2026-01-31T08:22:10Z|00644|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e down in Southbound
Jan 31 08:22:10 compute-0 ovn_controller[149457]: 2026-01-31T08:22:10Z|00645|binding|INFO|Removing iface tap5fe2b029-2d ovn-installed in OVS
Jan 31 08:22:10 compute-0 nova_compute[247704]: 2026-01-31 08:22:10.955 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:10 compute-0 nova_compute[247704]: 2026-01-31 08:22:10.960 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:22:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:10.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:22:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:10.975 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0b:2a 10.100.0.13'], port_security=['fa:16:3e:df:0b:2a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9461628f-d09f-4923-824c-3b03dfe4bb13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f73e2ea1-9bc5-4762-ba07-cf77fb23394f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:10.978 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis
Jan 31 08:22:10 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 31 08:22:10 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000093.scope: Consumed 7.057s CPU time.
Jan 31 08:22:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:10.981 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:22:10 compute-0 systemd-machined[214448]: Machine qemu-67-instance-00000093 terminated.
Jan 31 08:22:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:10.997 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a41d4d26-eee8-45d8-b1ed-622274da6954]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.025 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1a658206-b255-405e-8c22-440ad4ef1b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.029 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc9c21a-ae3c-455b-9a18-167938c6df16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.057 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8368f1-6194-49d8-8781-a14cba9da7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ceph-mon[74496]: pgmap v2642: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.079 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6f90fb-edc4-4a60-b709-f9e7b4e85f1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786255, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347420, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.097 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[85b09737-5129-4c3a-9bff-6721ea45f1c9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786267, 'tstamp': 786267}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347422, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786270, 'tstamp': 786270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347422, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.099 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.142 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.148 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.147 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.148 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.148 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.149 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.152 247708 INFO nova.virt.libvirt.driver [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance destroyed successfully.
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.153 247708 DEBUG nova.objects.instance [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.193 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.194 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.195 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:11 compute-0 kernel: tap5fe2b029-2d: entered promiscuous mode
Jan 31 08:22:11 compute-0 systemd-udevd[347412]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:22:11 compute-0 NetworkManager[49108]: <info>  [1769847731.2351] manager: (tap5fe2b029-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/287)
Jan 31 08:22:11 compute-0 ovn_controller[149457]: 2026-01-31T08:22:11Z|00646|binding|INFO|Claiming lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for this chassis.
Jan 31 08:22:11 compute-0 ovn_controller[149457]: 2026-01-31T08:22:11Z|00647|binding|INFO|5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e: Claiming fa:16:3e:df:0b:2a 10.100.0.13
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.238 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-0 NetworkManager[49108]: <info>  [1769847731.2461] device (tap5fe2b029-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:22:11 compute-0 NetworkManager[49108]: <info>  [1769847731.2476] device (tap5fe2b029-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.249 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-0 ovn_controller[149457]: 2026-01-31T08:22:11Z|00648|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e ovn-installed in OVS
Jan 31 08:22:11 compute-0 ovn_controller[149457]: 2026-01-31T08:22:11Z|00649|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e up in Southbound
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.251 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0b:2a 10.100.0.13'], port_security=['fa:16:3e:df:0b:2a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9461628f-d09f-4923-824c-3b03dfe4bb13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f73e2ea1-9bc5-4762-ba07-cf77fb23394f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.252 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.253 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 bound to our chassis
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.255 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.266 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[20664295-6d93-43ff-ace2-fc8bf719845b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 systemd-machined[214448]: New machine qemu-68-instance-00000093.
Jan 31 08:22:11 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-00000093.
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.288 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[eef46556-fca9-4127-8a86-8b3d4f7dd22d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.291 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bda73930-ebbb-4162-94a2-bd306e95cd38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.313 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f830ee96-02e7-48e1-8e44-dcbe6f92b32f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.367 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2040a43d-6c24-4451-a42a-0b5baabb864a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786255, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347453, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:11.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.386 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[920f4333-1cb0-45e9-b927-eb0e188fb9fb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786267, 'tstamp': 786267}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347459, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786270, 'tstamp': 786270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347459, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.389 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.391 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.392 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.393 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.393 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.394 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:11.394 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.610 247708 DEBUG nova.compute.manager [req-9abbd1d4-459d-49a7-9281-a83cc500e1c5 req-8bf1cf31-6815-4b2f-b253-d2283a1fa362 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.611 247708 DEBUG oslo_concurrency.lockutils [req-9abbd1d4-459d-49a7-9281-a83cc500e1c5 req-8bf1cf31-6815-4b2f-b253-d2283a1fa362 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.614 247708 DEBUG oslo_concurrency.lockutils [req-9abbd1d4-459d-49a7-9281-a83cc500e1c5 req-8bf1cf31-6815-4b2f-b253-d2283a1fa362 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.615 247708 DEBUG oslo_concurrency.lockutils [req-9abbd1d4-459d-49a7-9281-a83cc500e1c5 req-8bf1cf31-6815-4b2f-b253-d2283a1fa362 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.615 247708 DEBUG nova.compute.manager [req-9abbd1d4-459d-49a7-9281-a83cc500e1c5 req-8bf1cf31-6815-4b2f-b253-d2283a1fa362 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.616 247708 WARNING nova.compute.manager [req-9abbd1d4-459d-49a7-9281-a83cc500e1c5 req-8bf1cf31-6815-4b2f-b253-d2283a1fa362 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.979 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 9461628f-d09f-4923-824c-3b03dfe4bb13 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.980 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847731.9787076, 9461628f-d09f-4923-824c-3b03dfe4bb13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:22:11 compute-0 nova_compute[247704]: 2026-01-31 08:22:11.981 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] VM Resumed (Lifecycle Event)
Jan 31 08:22:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.021 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.027 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.054 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.054 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847731.9798937, 9461628f-d09f-4923-824c-3b03dfe4bb13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.054 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] VM Started (Lifecycle Event)
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.087 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.091 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:22:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.122 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:22:12 compute-0 ceph-mon[74496]: pgmap v2643: 305 pgs: 305 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Jan 31 08:22:12 compute-0 ovn_controller[149457]: 2026-01-31T08:22:12Z|00650|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 08:22:12 compute-0 nova_compute[247704]: 2026-01-31 08:22:12.546 247708 DEBUG nova.compute.manager [None req-15bc2a6b-df50-4a7f-882a-a000530a837f 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:12.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:13.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.733 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.742 247708 DEBUG nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.742 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.742 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.742 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.743 247708 DEBUG nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.743 247708 WARNING nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state active and task_state None.
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.743 247708 DEBUG nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.743 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.743 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.744 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.744 247708 DEBUG nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.744 247708 WARNING nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state active and task_state None.
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.744 247708 DEBUG nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.744 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.745 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.745 247708 DEBUG oslo_concurrency.lockutils [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.745 247708 DEBUG nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:13 compute-0 nova_compute[247704]: 2026-01-31 08:22:13.745 247708 WARNING nova.compute.manager [req-ef0e634c-c4af-4ef1-98bd-617aefeed439 req-e45fe174-af4c-4a72-9253-7048e2f14d33 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state active and task_state None.
Jan 31 08:22:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 169 op/s
Jan 31 08:22:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:14.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:15 compute-0 ceph-mon[74496]: pgmap v2644: 305 pgs: 305 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 169 op/s
Jan 31 08:22:15 compute-0 nova_compute[247704]: 2026-01-31 08:22:15.319 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:15.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 1.2 MiB/s wr, 273 op/s
Jan 31 08:22:16 compute-0 nova_compute[247704]: 2026-01-31 08:22:16.134 247708 DEBUG nova.compute.manager [req-7b097d02-e353-439e-9bc8-326284992136 req-349ce080-513b-4005-9389-58a4237e6ecb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-changed-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:16 compute-0 nova_compute[247704]: 2026-01-31 08:22:16.134 247708 DEBUG nova.compute.manager [req-7b097d02-e353-439e-9bc8-326284992136 req-349ce080-513b-4005-9389-58a4237e6ecb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Refreshing instance network info cache due to event network-changed-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:22:16 compute-0 nova_compute[247704]: 2026-01-31 08:22:16.135 247708 DEBUG oslo_concurrency.lockutils [req-7b097d02-e353-439e-9bc8-326284992136 req-349ce080-513b-4005-9389-58a4237e6ecb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:22:16 compute-0 nova_compute[247704]: 2026-01-31 08:22:16.135 247708 DEBUG oslo_concurrency.lockutils [req-7b097d02-e353-439e-9bc8-326284992136 req-349ce080-513b-4005-9389-58a4237e6ecb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:22:16 compute-0 nova_compute[247704]: 2026-01-31 08:22:16.136 247708 DEBUG nova.network.neutron [req-7b097d02-e353-439e-9bc8-326284992136 req-349ce080-513b-4005-9389-58a4237e6ecb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Refreshing network info cache for port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:22:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/845999303' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:16 compute-0 ceph-mon[74496]: pgmap v2645: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 1.2 MiB/s wr, 273 op/s
Jan 31 08:22:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:16.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:17.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:17 compute-0 nova_compute[247704]: 2026-01-31 08:22:17.998 247708 DEBUG nova.network.neutron [req-7b097d02-e353-439e-9bc8-326284992136 req-349ce080-513b-4005-9389-58a4237e6ecb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updated VIF entry in instance network info cache for port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:22:17 compute-0 nova_compute[247704]: 2026-01-31 08:22:17.999 247708 DEBUG nova.network.neutron [req-7b097d02-e353-439e-9bc8-326284992136 req-349ce080-513b-4005-9389-58a4237e6ecb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating instance_info_cache with network_info: [{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 23 KiB/s wr, 210 op/s
Jan 31 08:22:18 compute-0 nova_compute[247704]: 2026-01-31 08:22:18.029 247708 DEBUG oslo_concurrency.lockutils [req-7b097d02-e353-439e-9bc8-326284992136 req-349ce080-513b-4005-9389-58a4237e6ecb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:22:18 compute-0 nova_compute[247704]: 2026-01-31 08:22:18.296 247708 DEBUG nova.compute.manager [req-118853a9-8275-4edc-b10a-0d495c21773d req-6fed6166-e187-422c-954b-630ada626d8f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-changed-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:18 compute-0 nova_compute[247704]: 2026-01-31 08:22:18.297 247708 DEBUG nova.compute.manager [req-118853a9-8275-4edc-b10a-0d495c21773d req-6fed6166-e187-422c-954b-630ada626d8f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Refreshing instance network info cache due to event network-changed-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:22:18 compute-0 nova_compute[247704]: 2026-01-31 08:22:18.297 247708 DEBUG oslo_concurrency.lockutils [req-118853a9-8275-4edc-b10a-0d495c21773d req-6fed6166-e187-422c-954b-630ada626d8f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:22:18 compute-0 nova_compute[247704]: 2026-01-31 08:22:18.298 247708 DEBUG oslo_concurrency.lockutils [req-118853a9-8275-4edc-b10a-0d495c21773d req-6fed6166-e187-422c-954b-630ada626d8f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:22:18 compute-0 nova_compute[247704]: 2026-01-31 08:22:18.298 247708 DEBUG nova.network.neutron [req-118853a9-8275-4edc-b10a-0d495c21773d req-6fed6166-e187-422c-954b-630ada626d8f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Refreshing network info cache for port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:22:18 compute-0 nova_compute[247704]: 2026-01-31 08:22:18.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:18.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:19 compute-0 ceph-mon[74496]: pgmap v2646: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 23 KiB/s wr, 210 op/s
Jan 31 08:22:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:19.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 48 KiB/s wr, 198 op/s
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:20 compute-0 ceph-mon[74496]: pgmap v2647: 305 pgs: 305 active+clean; 634 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 48 KiB/s wr, 198 op/s
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:22:20
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root']
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:22:20 compute-0 nova_compute[247704]: 2026-01-31 08:22:20.321 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:22:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:22:20 compute-0 podman[347545]: 2026-01-31 08:22:20.896062288 +0000 UTC m=+0.066075681 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 08:22:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:20.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:21.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:21 compute-0 nova_compute[247704]: 2026-01-31 08:22:21.985 247708 DEBUG nova.network.neutron [req-118853a9-8275-4edc-b10a-0d495c21773d req-6fed6166-e187-422c-954b-630ada626d8f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updated VIF entry in instance network info cache for port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:22:21 compute-0 nova_compute[247704]: 2026-01-31 08:22:21.986 247708 DEBUG nova.network.neutron [req-118853a9-8275-4edc-b10a-0d495c21773d req-6fed6166-e187-422c-954b-630ada626d8f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating instance_info_cache with network_info: [{"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 42 KiB/s wr, 157 op/s
Jan 31 08:22:22 compute-0 nova_compute[247704]: 2026-01-31 08:22:22.008 247708 DEBUG oslo_concurrency.lockutils [req-118853a9-8275-4edc-b10a-0d495c21773d req-6fed6166-e187-422c-954b-630ada626d8f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9461628f-d09f-4923-824c-3b03dfe4bb13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:22:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:22.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:23 compute-0 ceph-mon[74496]: pgmap v2648: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 42 KiB/s wr, 157 op/s
Jan 31 08:22:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:23.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:23 compute-0 nova_compute[247704]: 2026-01-31 08:22:23.788 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 43 KiB/s wr, 153 op/s
Jan 31 08:22:24 compute-0 ceph-mon[74496]: pgmap v2649: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 43 KiB/s wr, 153 op/s
Jan 31 08:22:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:24.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:25 compute-0 ovn_controller[149457]: 2026-01-31T08:22:25Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:0b:2a 10.100.0.13
Jan 31 08:22:25 compute-0 nova_compute[247704]: 2026-01-31 08:22:25.324 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:25.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 45 KiB/s wr, 174 op/s
Jan 31 08:22:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:26.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:27 compute-0 ceph-mon[74496]: pgmap v2650: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 45 KiB/s wr, 174 op/s
Jan 31 08:22:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:27.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 831 KiB/s rd, 43 KiB/s wr, 70 op/s
Jan 31 08:22:28 compute-0 ceph-mon[74496]: pgmap v2651: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 831 KiB/s rd, 43 KiB/s wr, 70 op/s
Jan 31 08:22:28 compute-0 nova_compute[247704]: 2026-01-31 08:22:28.837 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:28 compute-0 sudo[347568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:28 compute-0 sudo[347568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:28 compute-0 sudo[347568]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:28 compute-0 sudo[347593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:28 compute-0 sudo[347593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:28 compute-0 sudo[347593]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:28.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:29.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 725 KiB/s rd, 57 KiB/s wr, 65 op/s
Jan 31 08:22:30 compute-0 ceph-mon[74496]: pgmap v2652: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 725 KiB/s rd, 57 KiB/s wr, 65 op/s
Jan 31 08:22:30 compute-0 nova_compute[247704]: 2026-01-31 08:22:30.327 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:31.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 50 op/s
Jan 31 08:22:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:32 compute-0 nova_compute[247704]: 2026-01-31 08:22:32.504 247708 DEBUG oslo_concurrency.lockutils [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:32 compute-0 nova_compute[247704]: 2026-01-31 08:22:32.505 247708 DEBUG oslo_concurrency.lockutils [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:32 compute-0 nova_compute[247704]: 2026-01-31 08:22:32.621 247708 INFO nova.compute.manager [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Detaching volume 9e7640b2-419e-4d50-9b23-7d76e34131d8
Jan 31 08:22:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:33 compute-0 ceph-mon[74496]: pgmap v2653: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 50 op/s
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.143 247708 INFO nova.virt.block_device [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Attempting to driver detach volume 9e7640b2-419e-4d50-9b23-7d76e34131d8 from mountpoint /dev/vdb
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.156 247708 DEBUG nova.virt.libvirt.driver [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Attempting to detach device vdb from instance 9461628f-d09f-4923-824c-3b03dfe4bb13 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.157 247708 DEBUG nova.virt.libvirt.guest [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-9e7640b2-419e-4d50-9b23-7d76e34131d8">
Jan 31 08:22:33 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   </source>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <serial>9e7640b2-419e-4d50-9b23-7d76e34131d8</serial>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]: </disk>
Jan 31 08:22:33 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.166 247708 INFO nova.virt.libvirt.driver [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Successfully detached device vdb from instance 9461628f-d09f-4923-824c-3b03dfe4bb13 from the persistent domain config.
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.166 247708 DEBUG nova.virt.libvirt.driver [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9461628f-d09f-4923-824c-3b03dfe4bb13 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.167 247708 DEBUG nova.virt.libvirt.guest [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-9e7640b2-419e-4d50-9b23-7d76e34131d8">
Jan 31 08:22:33 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   </source>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <serial>9e7640b2-419e-4d50-9b23-7d76e34131d8</serial>
Jan 31 08:22:33 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:22:33 compute-0 nova_compute[247704]: </disk>
Jan 31 08:22:33 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:22:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:22:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/235625692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.282 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769847753.2821925, 9461628f-d09f-4923-824c-3b03dfe4bb13 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.285 247708 DEBUG nova.virt.libvirt.driver [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9461628f-d09f-4923-824c-3b03dfe4bb13 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.287 247708 INFO nova.virt.libvirt.driver [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Successfully detached device vdb from instance 9461628f-d09f-4923-824c-3b03dfe4bb13 from the live domain config.
Jan 31 08:22:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:33.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.623 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.692 247708 DEBUG nova.objects.instance [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'flavor' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.839 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:33 compute-0 nova_compute[247704]: 2026-01-31 08:22:33.851 247708 DEBUG oslo_concurrency.lockutils [None req-7118c06f-9f4b-45e1-aa12-a930244ada3a 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 49 op/s
Jan 31 08:22:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/235625692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2068840843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:34 compute-0 ceph-mon[74496]: pgmap v2654: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 534 KiB/s rd, 33 KiB/s wr, 49 op/s
Jan 31 08:22:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:34.536 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:34 compute-0 nova_compute[247704]: 2026-01-31 08:22:34.537 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:34.538 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:22:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:34.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:35 compute-0 nova_compute[247704]: 2026-01-31 08:22:35.329 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:35.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014037199888330541 of space, bias 1.0, pg target 4.211159966499163 quantized to 32 (current 32)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.99929043364973e-06 of space, bias 1.0, pg target 0.0005917899683603201 quantized to 32 (current 32)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003843544981853713 of space, bias 1.0, pg target 1.137689314628699 quantized to 32 (current 32)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:22:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 31 08:22:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 825 KiB/s rd, 33 KiB/s wr, 62 op/s
Jan 31 08:22:36 compute-0 ceph-mon[74496]: pgmap v2655: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 825 KiB/s rd, 33 KiB/s wr, 62 op/s
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.189 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.189 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.190 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.190 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.190 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.192 247708 INFO nova.compute.manager [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Terminating instance
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.192 247708 DEBUG nova.compute.manager [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:22:36 compute-0 kernel: tap5fe2b029-2d (unregistering): left promiscuous mode
Jan 31 08:22:36 compute-0 NetworkManager[49108]: <info>  [1769847756.2426] device (tap5fe2b029-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.249 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 ovn_controller[149457]: 2026-01-31T08:22:36Z|00651|binding|INFO|Releasing lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e from this chassis (sb_readonly=0)
Jan 31 08:22:36 compute-0 ovn_controller[149457]: 2026-01-31T08:22:36Z|00652|binding|INFO|Setting lport 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e down in Southbound
Jan 31 08:22:36 compute-0 ovn_controller[149457]: 2026-01-31T08:22:36Z|00653|binding|INFO|Removing iface tap5fe2b029-2d ovn-installed in OVS
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.252 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.257 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.284 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0b:2a 10.100.0.13'], port_security=['fa:16:3e:df:0b:2a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9461628f-d09f-4923-824c-3b03dfe4bb13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f73e2ea1-9bc5-4762-ba07-cf77fb23394f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.285 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.287 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e03fc320-c87d-42d2-a772-ec94aeb05209
Jan 31 08:22:36 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 31 08:22:36 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000093.scope: Consumed 14.239s CPU time.
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.306 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbd8f3d-2772-4dfb-ad5a-c073f3d856b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:36 compute-0 systemd-machined[214448]: Machine qemu-68-instance-00000093 terminated.
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.345 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4750c004-77d2-49b0-b5e3-fbf1a7a1a26a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.350 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[889e2b82-e96e-42ce-8b50-98616fd45564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.388 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf2e623-e73f-434b-841f-564a4227dbcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.407 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[96b57803-3ba0-48da-bd60-43949e3a66bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape03fc320-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:22:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786255, 'reachable_time': 15912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347636, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.414 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.419 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.429 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[61f41969-336f-4123-b839-030087eab92e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786267, 'tstamp': 786267}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347641, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape03fc320-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 786270, 'tstamp': 786270}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347641, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.431 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.430 247708 INFO nova.virt.libvirt.driver [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Instance destroyed successfully.
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.430 247708 DEBUG nova.objects.instance [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'resources' on Instance uuid 9461628f-d09f-4923-824c-3b03dfe4bb13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.436 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.437 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape03fc320-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.438 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.439 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape03fc320-c0, col_values=(('external_ids', {'iface-id': '075aefe0-13df-4a17-ae95-485ece950a10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:36.439 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.679 247708 DEBUG nova.virt.libvirt.vif [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:21:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1067474769',display_name='tempest-ServerRescueNegativeTestJSON-server-1067474769',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1067474769',id=147,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL1XQ9RINPbQvwpLEj3q2/JDcye0xA2ChkvrNwwmCbJqk4tZWS/gilMtbqPwst/ucA1/c+m+Q83K+vDN4Tb2I1/339TQot95of7EYxS4NG33iMzRU+P4Lgy9rP/4HtB3Ww==',key_name='tempest-keypair-1375178499',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:22:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-573z0d2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:22:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6788b0883cb348719d1222b1c9483be2',uuid=9461628f-d09f-4923-824c-3b03dfe4bb13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.680 247708 DEBUG nova.network.os_vif_util [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "address": "fa:16:3e:df:0b:2a", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe2b029-2d", "ovs_interfaceid": "5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.681 247708 DEBUG nova.network.os_vif_util [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:0b:2a,bridge_name='br-int',has_traffic_filtering=True,id=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe2b029-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.681 247708 DEBUG os_vif [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:0b:2a,bridge_name='br-int',has_traffic_filtering=True,id=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe2b029-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.682 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.683 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fe2b029-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.685 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.686 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.687 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:36 compute-0 nova_compute[247704]: 2026-01-31 08:22:36.691 247708 INFO os_vif [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:0b:2a,bridge_name='br-int',has_traffic_filtering=True,id=5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe2b029-2d')
Jan 31 08:22:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:22:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:36.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.073 247708 DEBUG nova.compute.manager [req-801beda8-bf4a-4914-b6b6-f74bcf2a0ffa req-248ca278-b1a2-4835-ab6b-a26c0d25843a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.073 247708 DEBUG oslo_concurrency.lockutils [req-801beda8-bf4a-4914-b6b6-f74bcf2a0ffa req-248ca278-b1a2-4835-ab6b-a26c0d25843a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.073 247708 DEBUG oslo_concurrency.lockutils [req-801beda8-bf4a-4914-b6b6-f74bcf2a0ffa req-248ca278-b1a2-4835-ab6b-a26c0d25843a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.073 247708 DEBUG oslo_concurrency.lockutils [req-801beda8-bf4a-4914-b6b6-f74bcf2a0ffa req-248ca278-b1a2-4835-ab6b-a26c0d25843a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.074 247708 DEBUG nova.compute.manager [req-801beda8-bf4a-4914-b6b6-f74bcf2a0ffa req-248ca278-b1a2-4835-ab6b-a26c0d25843a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.074 247708 DEBUG nova.compute.manager [req-801beda8-bf4a-4914-b6b6-f74bcf2a0ffa req-248ca278-b1a2-4835-ab6b-a26c0d25843a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-unplugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:22:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.855 247708 INFO nova.virt.libvirt.driver [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Deleting instance files /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13_del
Jan 31 08:22:37 compute-0 nova_compute[247704]: 2026-01-31 08:22:37.856 247708 INFO nova.virt.libvirt.driver [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Deletion of /var/lib/nova/instances/9461628f-d09f-4923-824c-3b03dfe4bb13_del complete
Jan 31 08:22:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 30 KiB/s wr, 65 op/s
Jan 31 08:22:38 compute-0 ceph-mon[74496]: pgmap v2656: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 30 KiB/s wr, 65 op/s
Jan 31 08:22:38 compute-0 nova_compute[247704]: 2026-01-31 08:22:38.437 247708 INFO nova.compute.manager [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Took 2.24 seconds to destroy the instance on the hypervisor.
Jan 31 08:22:38 compute-0 nova_compute[247704]: 2026-01-31 08:22:38.438 247708 DEBUG oslo.service.loopingcall [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:22:38 compute-0 nova_compute[247704]: 2026-01-31 08:22:38.439 247708 DEBUG nova.compute.manager [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:22:38 compute-0 nova_compute[247704]: 2026-01-31 08:22:38.439 247708 DEBUG nova.network.neutron [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:22:38 compute-0 nova_compute[247704]: 2026-01-31 08:22:38.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:38 compute-0 nova_compute[247704]: 2026-01-31 08:22:38.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:38 compute-0 nova_compute[247704]: 2026-01-31 08:22:38.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:22:38 compute-0 podman[347669]: 2026-01-31 08:22:38.9162848 +0000 UTC m=+0.082997256 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:22:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:38.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 21 KiB/s wr, 86 op/s
Jan 31 08:22:40 compute-0 ceph-mon[74496]: pgmap v2657: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 21 KiB/s wr, 86 op/s
Jan 31 08:22:40 compute-0 nova_compute[247704]: 2026-01-31 08:22:40.332 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:40.540 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:40 compute-0 nova_compute[247704]: 2026-01-31 08:22:40.720 247708 DEBUG nova.compute.manager [req-6b0387f0-6034-48d7-8dd5-bc0efcf7013e req-dc677eab-43d1-4c4d-8302-1ea2a77d9728 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:40 compute-0 nova_compute[247704]: 2026-01-31 08:22:40.720 247708 DEBUG oslo_concurrency.lockutils [req-6b0387f0-6034-48d7-8dd5-bc0efcf7013e req-dc677eab-43d1-4c4d-8302-1ea2a77d9728 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:40 compute-0 nova_compute[247704]: 2026-01-31 08:22:40.721 247708 DEBUG oslo_concurrency.lockutils [req-6b0387f0-6034-48d7-8dd5-bc0efcf7013e req-dc677eab-43d1-4c4d-8302-1ea2a77d9728 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:40 compute-0 nova_compute[247704]: 2026-01-31 08:22:40.721 247708 DEBUG oslo_concurrency.lockutils [req-6b0387f0-6034-48d7-8dd5-bc0efcf7013e req-dc677eab-43d1-4c4d-8302-1ea2a77d9728 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:40 compute-0 nova_compute[247704]: 2026-01-31 08:22:40.722 247708 DEBUG nova.compute.manager [req-6b0387f0-6034-48d7-8dd5-bc0efcf7013e req-dc677eab-43d1-4c4d-8302-1ea2a77d9728 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] No waiting events found dispatching network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:40 compute-0 nova_compute[247704]: 2026-01-31 08:22:40.722 247708 WARNING nova.compute.manager [req-6b0387f0-6034-48d7-8dd5-bc0efcf7013e req-dc677eab-43d1-4c4d-8302-1ea2a77d9728 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received unexpected event network-vif-plugged-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e for instance with vm_state active and task_state deleting.
Jan 31 08:22:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:40.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:41.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:41 compute-0 nova_compute[247704]: 2026-01-31 08:22:41.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:41 compute-0 nova_compute[247704]: 2026-01-31 08:22:41.615 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:41 compute-0 nova_compute[247704]: 2026-01-31 08:22:41.615 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:41 compute-0 nova_compute[247704]: 2026-01-31 08:22:41.616 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:41 compute-0 nova_compute[247704]: 2026-01-31 08:22:41.616 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:22:41 compute-0 nova_compute[247704]: 2026-01-31 08:22:41.616 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:41 compute-0 nova_compute[247704]: 2026-01-31 08:22:41.685 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.8 KiB/s wr, 101 op/s
Jan 31 08:22:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:22:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035816931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:42 compute-0 nova_compute[247704]: 2026-01-31 08:22:42.095 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:42 compute-0 ceph-mon[74496]: pgmap v2658: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.8 KiB/s wr, 101 op/s
Jan 31 08:22:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3035816931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:42 compute-0 nova_compute[247704]: 2026-01-31 08:22:42.733 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:22:42 compute-0 nova_compute[247704]: 2026-01-31 08:22:42.735 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:22:42 compute-0 nova_compute[247704]: 2026-01-31 08:22:42.735 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:22:42 compute-0 nova_compute[247704]: 2026-01-31 08:22:42.942 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:22:42 compute-0 nova_compute[247704]: 2026-01-31 08:22:42.943 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4213MB free_disk=20.713241577148438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:22:42 compute-0 nova_compute[247704]: 2026-01-31 08:22:42.944 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:42 compute-0 nova_compute[247704]: 2026-01-31 08:22:42.944 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:42.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:22:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1237920725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:22:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1237920725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1237920725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1237920725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.307 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.307 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 9461628f-d09f-4923-824c-3b03dfe4bb13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.307 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.307 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.398 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:22:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.538 247708 DEBUG nova.network.neutron [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.568 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.568 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.598 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.629 247708 INFO nova.compute.manager [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Took 5.19 seconds to deallocate network for instance.
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.631 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.707 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:43 compute-0 nova_compute[247704]: 2026-01-31 08:22:43.800 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.0 KiB/s wr, 113 op/s
Jan 31 08:22:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:22:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076322004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:44 compute-0 nova_compute[247704]: 2026-01-31 08:22:44.153 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:44 compute-0 ceph-mon[74496]: pgmap v2659: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.0 KiB/s wr, 113 op/s
Jan 31 08:22:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3227282260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2076322004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:44 compute-0 nova_compute[247704]: 2026-01-31 08:22:44.160 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:22:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:45 compute-0 nova_compute[247704]: 2026-01-31 08:22:45.335 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:22:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:45.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:22:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 KiB/s wr, 124 op/s
Jan 31 08:22:46 compute-0 ceph-mon[74496]: pgmap v2660: 305 pgs: 305 active+clean; 557 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 KiB/s wr, 124 op/s
Jan 31 08:22:46 compute-0 nova_compute[247704]: 2026-01-31 08:22:46.358 247708 DEBUG nova.compute.manager [req-6c695929-3240-4556-8445-fc63cfb99fa9 req-5942e543-27a8-4074-a1a2-8bb77fc603f2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Received event network-vif-deleted-5fe2b029-2d1e-4b09-aa7e-5b557b9dc90e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:46 compute-0 nova_compute[247704]: 2026-01-31 08:22:46.362 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:22:46 compute-0 nova_compute[247704]: 2026-01-31 08:22:46.463 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:22:46 compute-0 nova_compute[247704]: 2026-01-31 08:22:46.463 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:46 compute-0 nova_compute[247704]: 2026-01-31 08:22:46.465 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 2.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:46 compute-0 nova_compute[247704]: 2026-01-31 08:22:46.585 247708 DEBUG oslo_concurrency.processutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:22:46 compute-0 nova_compute[247704]: 2026-01-31 08:22:46.687 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:22:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3903040416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-0 nova_compute[247704]: 2026-01-31 08:22:47.036 247708 DEBUG oslo_concurrency.processutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:22:47 compute-0 nova_compute[247704]: 2026-01-31 08:22:47.046 247708 DEBUG nova.compute.provider_tree [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:22:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1587069712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/492939812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2237483166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3903040416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:47.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:47 compute-0 nova_compute[247704]: 2026-01-31 08:22:47.466 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:47 compute-0 nova_compute[247704]: 2026-01-31 08:22:47.466 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:47 compute-0 nova_compute[247704]: 2026-01-31 08:22:47.467 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:22:47 compute-0 nova_compute[247704]: 2026-01-31 08:22:47.467 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:22:47 compute-0 nova_compute[247704]: 2026-01-31 08:22:47.991 247708 DEBUG nova.scheduler.client.report [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:22:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 545 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 KiB/s wr, 147 op/s
Jan 31 08:22:48 compute-0 nova_compute[247704]: 2026-01-31 08:22:48.047 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:48 compute-0 nova_compute[247704]: 2026-01-31 08:22:48.098 247708 INFO nova.scheduler.client.report [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Deleted allocations for instance 9461628f-d09f-4923-824c-3b03dfe4bb13
Jan 31 08:22:48 compute-0 nova_compute[247704]: 2026-01-31 08:22:48.249 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:22:48 compute-0 nova_compute[247704]: 2026-01-31 08:22:48.249 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:22:48 compute-0 nova_compute[247704]: 2026-01-31 08:22:48.249 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:22:48 compute-0 nova_compute[247704]: 2026-01-31 08:22:48.250 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:48 compute-0 nova_compute[247704]: 2026-01-31 08:22:48.357 247708 DEBUG oslo_concurrency.lockutils [None req-90b420d4-514a-42e1-8be5-9f9d600b4e45 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "9461628f-d09f-4923-824c-3b03dfe4bb13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:48 compute-0 ceph-mon[74496]: pgmap v2661: 305 pgs: 305 active+clean; 545 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 KiB/s wr, 147 op/s
Jan 31 08:22:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:49.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:49 compute-0 sudo[347769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:49 compute-0 sudo[347769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:49 compute-0 sudo[347769]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:49 compute-0 sudo[347794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:22:49 compute-0 sudo[347794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:49 compute-0 sudo[347794]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:49.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/851481534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 15 KiB/s wr, 142 op/s
Jan 31 08:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:50 compute-0 nova_compute[247704]: 2026-01-31 08:22:50.338 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/245710413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:50 compute-0 ceph-mon[74496]: pgmap v2662: 305 pgs: 305 active+clean; 510 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 15 KiB/s wr, 142 op/s
Jan 31 08:22:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2482508508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:22:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:22:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:51.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.382 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updating instance_info_cache with network_info: [{"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.425 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.425 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.426 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.426 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.427 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847756.4271011, 9461628f-d09f-4923-824c-3b03dfe4bb13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.428 247708 INFO nova.compute.manager [-] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] VM Stopped (Lifecycle Event)
Jan 31 08:22:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.465 247708 DEBUG nova.compute.manager [None req-8d40ee8c-f1c6-46d7-ad85-91418704c286 - - - - - -] [instance: 9461628f-d09f-4923-824c-3b03dfe4bb13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:22:51 compute-0 nova_compute[247704]: 2026-01-31 08:22:51.690 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:51 compute-0 podman[347821]: 2026-01-31 08:22:51.883879989 +0000 UTC m=+0.049813043 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:22:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 16 KiB/s wr, 135 op/s
Jan 31 08:22:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:52 compute-0 ceph-mon[74496]: pgmap v2663: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 16 KiB/s wr, 135 op/s
Jan 31 08:22:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:22:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1518565786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:22:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1518565786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:53.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Jan 31 08:22:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Jan 31 08:22:53 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Jan 31 08:22:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1518565786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1518565786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 33 KiB/s wr, 172 op/s
Jan 31 08:22:54 compute-0 ceph-mon[74496]: osdmap e346: 3 total, 3 up, 3 in
Jan 31 08:22:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2446759486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2446759486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:54 compute-0 ceph-mon[74496]: pgmap v2665: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 33 KiB/s wr, 172 op/s
Jan 31 08:22:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2483533362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:22:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2483533362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:22:55 compute-0 nova_compute[247704]: 2026-01-31 08:22:55.340 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:55.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 46 KiB/s wr, 234 op/s
Jan 31 08:22:56 compute-0 ceph-mon[74496]: pgmap v2666: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 46 KiB/s wr, 234 op/s
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.693 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.852 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.853 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.853 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.853 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.854 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.856 247708 INFO nova.compute.manager [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Terminating instance
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.859 247708 DEBUG nova.compute.manager [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:22:56 compute-0 kernel: tap912fc9c9-ca (unregistering): left promiscuous mode
Jan 31 08:22:56 compute-0 NetworkManager[49108]: <info>  [1769847776.9289] device (tap912fc9c9-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:56 compute-0 ovn_controller[149457]: 2026-01-31T08:22:56Z|00654|binding|INFO|Releasing lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 from this chassis (sb_readonly=0)
Jan 31 08:22:56 compute-0 ovn_controller[149457]: 2026-01-31T08:22:56Z|00655|binding|INFO|Setting lport 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 down in Southbound
Jan 31 08:22:56 compute-0 ovn_controller[149457]: 2026-01-31T08:22:56Z|00656|binding|INFO|Removing iface tap912fc9c9-ca ovn-installed in OVS
Jan 31 08:22:56 compute-0 nova_compute[247704]: 2026-01-31 08:22:56.952 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:56.961 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:2d:b8 10.100.0.10'], port_security=['fa:16:3e:a5:2d:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c725d43e-b5fe-4a94-ad44-6df85e3c0fa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e03fc320-c87d-42d2-a772-ec94aeb05209', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4849ff916e1b4e2aa162faaf2c0717a2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0a8345fd-717b-4084-912f-0c496810f08f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed93fa99-ea0a-43df-97d5-ee7154033108, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=912fc9c9-cae4-4bd8-901c-7bb8a63759a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:22:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:56.963 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 912fc9c9-cae4-4bd8-901c-7bb8a63759a4 in datapath e03fc320-c87d-42d2-a772-ec94aeb05209 unbound from our chassis
Jan 31 08:22:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:56.967 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e03fc320-c87d-42d2-a772-ec94aeb05209, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:22:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:56.968 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd39c53-3081-4735-aafc-d84d17c3caed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:56.970 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 namespace which is not needed anymore
Jan 31 08:22:56 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000090.scope: Deactivated successfully.
Jan 31 08:22:56 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000090.scope: Consumed 19.784s CPU time.
Jan 31 08:22:56 compute-0 systemd-machined[214448]: Machine qemu-65-instance-00000090 terminated.
Jan 31 08:22:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:57.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.084 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.088 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[344206]: [NOTICE]   (344211) : haproxy version is 2.8.14-c23fe91
Jan 31 08:22:57 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[344206]: [NOTICE]   (344211) : path to executable is /usr/sbin/haproxy
Jan 31 08:22:57 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[344206]: [WARNING]  (344211) : Exiting Master process...
Jan 31 08:22:57 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[344206]: [ALERT]    (344211) : Current worker (344213) exited with code 143 (Terminated)
Jan 31 08:22:57 compute-0 neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209[344206]: [WARNING]  (344211) : All workers exited. Exiting... (0)
Jan 31 08:22:57 compute-0 systemd[1]: libpod-f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919.scope: Deactivated successfully.
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.103 247708 INFO nova.virt.libvirt.driver [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Instance destroyed successfully.
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.104 247708 DEBUG nova.objects.instance [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lazy-loading 'resources' on Instance uuid c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:22:57 compute-0 podman[347864]: 2026-01-31 08:22:57.109185417 +0000 UTC m=+0.052121740 container died f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:22:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.129 247708 DEBUG nova.virt.libvirt.vif [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-28125457',display_name='tempest-ServerRescueNegativeTestJSON-server-28125457',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-28125457',id=144,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:20:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4849ff916e1b4e2aa162faaf2c0717a2',ramdisk_id='',reservation_id='r-k7f0vyw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1784809431',owner_user_name='tempest-ServerRescueNegativeTestJSON-1784809431-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:20Z,user_data=None,user_id='6788b0883cb348719d1222b1c9483be2',uuid=c725d43e-b5fe-4a94-ad44-6df85e3c0fa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.130 247708 DEBUG nova.network.os_vif_util [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converting VIF {"id": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "address": "fa:16:3e:a5:2d:b8", "network": {"id": "e03fc320-c87d-42d2-a772-ec94aeb05209", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1371534224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4849ff916e1b4e2aa162faaf2c0717a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap912fc9c9-ca", "ovs_interfaceid": "912fc9c9-cae4-4bd8-901c-7bb8a63759a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.130 247708 DEBUG nova.network.os_vif_util [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2d:b8,bridge_name='br-int',has_traffic_filtering=True,id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912fc9c9-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.131 247708 DEBUG os_vif [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2d:b8,bridge_name='br-int',has_traffic_filtering=True,id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912fc9c9-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.132 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.133 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap912fc9c9-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.134 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.136 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.138 247708 INFO os_vif [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:2d:b8,bridge_name='br-int',has_traffic_filtering=True,id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4,network=Network(e03fc320-c87d-42d2-a772-ec94aeb05209),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap912fc9c9-ca')
Jan 31 08:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919-userdata-shm.mount: Deactivated successfully.
Jan 31 08:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b2e624e40bd0c409140c65632d85177c60ae81a779dfb84415f49bc63fb6aa8-merged.mount: Deactivated successfully.
Jan 31 08:22:57 compute-0 podman[347864]: 2026-01-31 08:22:57.164362699 +0000 UTC m=+0.107299002 container cleanup f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:22:57 compute-0 systemd[1]: libpod-conmon-f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919.scope: Deactivated successfully.
Jan 31 08:22:57 compute-0 podman[347922]: 2026-01-31 08:22:57.227396585 +0000 UTC m=+0.041625132 container remove f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.232 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9adeeb25-6fa3-495a-843c-862a24d111fa]: (4, ('Sat Jan 31 08:22:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919)\nf5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919\nSat Jan 31 08:22:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 (f5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919)\nf5f7840f2c230cad514053c379d033cf6418871b397c35970f97191380b61919\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.234 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[477c8daa-5111-4e59-b91b-dfed91ffa36d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.235 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape03fc320-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:22:57 compute-0 kernel: tape03fc320-c0: left promiscuous mode
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.239 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.244 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.244 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.246 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f97524cd-f358-4fa9-8eb4-179f0d5fd358]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.264 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[71b89714-5d52-4d71-92a5-cf63b6b5aa94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.265 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3188bceb-9b5f-4a7b-bde7-898236dcf6ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.281 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9403533f-df5f-4e63-9b1f-372a6819c36a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 786248, 'reachable_time': 18746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347940, 'error': None, 'target': 'ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.286 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e03fc320-c87d-42d2-a772-ec94aeb05209 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:22:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:22:57.286 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[cbff210d-1159-4614-aa82-99e6c7b00eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:22:57 compute-0 systemd[1]: run-netns-ovnmeta\x2de03fc320\x2dc87d\x2d42d2\x2da772\x2dec94aeb05209.mount: Deactivated successfully.
Jan 31 08:22:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:57.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.515 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.585 247708 DEBUG nova.compute.manager [req-85a7eed4-f5ae-4e76-a7ad-7be060d0eb29 req-556889ce-1fec-458d-a7d0-de5075db64e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-unplugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.586 247708 DEBUG oslo_concurrency.lockutils [req-85a7eed4-f5ae-4e76-a7ad-7be060d0eb29 req-556889ce-1fec-458d-a7d0-de5075db64e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.586 247708 DEBUG oslo_concurrency.lockutils [req-85a7eed4-f5ae-4e76-a7ad-7be060d0eb29 req-556889ce-1fec-458d-a7d0-de5075db64e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.587 247708 DEBUG oslo_concurrency.lockutils [req-85a7eed4-f5ae-4e76-a7ad-7be060d0eb29 req-556889ce-1fec-458d-a7d0-de5075db64e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.587 247708 DEBUG nova.compute.manager [req-85a7eed4-f5ae-4e76-a7ad-7be060d0eb29 req-556889ce-1fec-458d-a7d0-de5075db64e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] No waiting events found dispatching network-vif-unplugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.587 247708 DEBUG nova.compute.manager [req-85a7eed4-f5ae-4e76-a7ad-7be060d0eb29 req-556889ce-1fec-458d-a7d0-de5075db64e8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-unplugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.946 247708 INFO nova.virt.libvirt.driver [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Deleting instance files /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_del
Jan 31 08:22:57 compute-0 nova_compute[247704]: 2026-01-31 08:22:57.948 247708 INFO nova.virt.libvirt.driver [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Deletion of /var/lib/nova/instances/c725d43e-b5fe-4a94-ad44-6df85e3c0fa0_del complete
Jan 31 08:22:58 compute-0 nova_compute[247704]: 2026-01-31 08:22:58.012 247708 INFO nova.compute.manager [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 31 08:22:58 compute-0 nova_compute[247704]: 2026-01-31 08:22:58.013 247708 DEBUG oslo.service.loopingcall [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:22:58 compute-0 nova_compute[247704]: 2026-01-31 08:22:58.014 247708 DEBUG nova.compute.manager [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:22:58 compute-0 nova_compute[247704]: 2026-01-31 08:22:58.014 247708 DEBUG nova.network.neutron [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:22:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 45 KiB/s wr, 213 op/s
Jan 31 08:22:58 compute-0 ceph-mon[74496]: pgmap v2667: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 45 KiB/s wr, 213 op/s
Jan 31 08:22:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:22:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:59.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.446 247708 DEBUG nova.network.neutron [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:22:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:22:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.622 247708 DEBUG nova.compute.manager [req-7c5b6406-d6c3-4edf-8e8d-3d494b9350a7 req-af279797-122e-474f-893e-527b780906b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-deleted-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.623 247708 INFO nova.compute.manager [req-7c5b6406-d6c3-4edf-8e8d-3d494b9350a7 req-af279797-122e-474f-893e-527b780906b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Neutron deleted interface 912fc9c9-cae4-4bd8-901c-7bb8a63759a4; detaching it from the instance and deleting it from the info cache
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.623 247708 DEBUG nova.network.neutron [req-7c5b6406-d6c3-4edf-8e8d-3d494b9350a7 req-af279797-122e-474f-893e-527b780906b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.670 247708 INFO nova.compute.manager [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Took 1.66 seconds to deallocate network for instance.
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.696 247708 DEBUG nova.compute.manager [req-7c5b6406-d6c3-4edf-8e8d-3d494b9350a7 req-af279797-122e-474f-893e-527b780906b7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Detach interface failed, port_id=912fc9c9-cae4-4bd8-901c-7bb8a63759a4, reason: Instance c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.812 247708 DEBUG nova.compute.manager [req-668bb14d-79e1-49c4-8e80-632a06615a38 req-1cedf583-e7d4-4a24-adae-9089468759e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.813 247708 DEBUG oslo_concurrency.lockutils [req-668bb14d-79e1-49c4-8e80-632a06615a38 req-1cedf583-e7d4-4a24-adae-9089468759e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.813 247708 DEBUG oslo_concurrency.lockutils [req-668bb14d-79e1-49c4-8e80-632a06615a38 req-1cedf583-e7d4-4a24-adae-9089468759e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.813 247708 DEBUG oslo_concurrency.lockutils [req-668bb14d-79e1-49c4-8e80-632a06615a38 req-1cedf583-e7d4-4a24-adae-9089468759e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.813 247708 DEBUG nova.compute.manager [req-668bb14d-79e1-49c4-8e80-632a06615a38 req-1cedf583-e7d4-4a24-adae-9089468759e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] No waiting events found dispatching network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.814 247708 WARNING nova.compute.manager [req-668bb14d-79e1-49c4-8e80-632a06615a38 req-1cedf583-e7d4-4a24-adae-9089468759e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Received unexpected event network-vif-plugged-912fc9c9-cae4-4bd8-901c-7bb8a63759a4 for instance with vm_state rescued and task_state deleting.
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.816 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.816 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:59 compute-0 nova_compute[247704]: 2026-01-31 08:22:59.963 247708 DEBUG oslo_concurrency.processutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 33 KiB/s wr, 242 op/s
Jan 31 08:23:00 compute-0 ceph-mon[74496]: pgmap v2668: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 33 KiB/s wr, 242 op/s
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.343 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:23:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867350194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.450 247708 DEBUG oslo_concurrency.processutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.458 247708 DEBUG nova.compute.provider_tree [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.505 247708 DEBUG nova.scheduler.client.report [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.563 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.620 247708 INFO nova.scheduler.client.report [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Deleted allocations for instance c725d43e-b5fe-4a94-ad44-6df85e3c0fa0
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.736 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.812 247708 DEBUG oslo_concurrency.lockutils [None req-f7c80e4f-fcbf-4d59-a408-70f5192cf43c 6788b0883cb348719d1222b1c9483be2 4849ff916e1b4e2aa162faaf2c0717a2 - - default default] Lock "c725d43e-b5fe-4a94-ad44-6df85e3c0fa0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:00 compute-0 nova_compute[247704]: 2026-01-31 08:23:00.841 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:01.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1867350194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:01.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 348 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 238 op/s
Jan 31 08:23:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Jan 31 08:23:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Jan 31 08:23:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Jan 31 08:23:02 compute-0 nova_compute[247704]: 2026-01-31 08:23:02.136 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:02 compute-0 ceph-mon[74496]: pgmap v2669: 305 pgs: 305 active+clean; 348 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 238 op/s
Jan 31 08:23:02 compute-0 ceph-mon[74496]: osdmap e347: 3 total, 3 up, 3 in
Jan 31 08:23:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:03.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:03 compute-0 sudo[347968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:03 compute-0 sudo[347968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-0 sudo[347968]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 sudo[347993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:03 compute-0 sudo[347993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-0 sudo[347993]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 sudo[348018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:03 compute-0 sudo[348018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-0 sudo[348018]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 sudo[348043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:23:03 compute-0 sudo[348043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:03.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:03 compute-0 sudo[348043]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:23:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:23:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:23:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 560cf1bc-f057-4e1f-bd34-694a027fa6a3 does not exist
Jan 31 08:23:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4e1ba2c6-d543-4ab8-9434-e0209b128b50 does not exist
Jan 31 08:23:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 90437756-1077-43ea-bb10-de516fb47922 does not exist
Jan 31 08:23:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:23:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:23:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:23:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:23:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:23:03 compute-0 sudo[348101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:03 compute-0 sudo[348101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-0 sudo[348101]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 sudo[348126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:03 compute-0 sudo[348126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-0 sudo[348126]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 sudo[348151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:03 compute-0 sudo[348151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:03 compute-0 sudo[348151]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 329 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 22 KiB/s wr, 197 op/s
Jan 31 08:23:04 compute-0 sudo[348176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:23:04 compute-0 sudo[348176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:04 compute-0 podman[348241]: 2026-01-31 08:23:04.377783926 +0000 UTC m=+0.045370783 container create c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:23:04 compute-0 systemd[1]: Started libpod-conmon-c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1.scope.
Jan 31 08:23:04 compute-0 podman[348241]: 2026-01-31 08:23:04.356900654 +0000 UTC m=+0.024487541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:23:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:04 compute-0 podman[348241]: 2026-01-31 08:23:04.478308361 +0000 UTC m=+0.145895218 container init c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:04 compute-0 podman[348241]: 2026-01-31 08:23:04.485957738 +0000 UTC m=+0.153544575 container start c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:23:04 compute-0 podman[348241]: 2026-01-31 08:23:04.490183792 +0000 UTC m=+0.157770629 container attach c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_thompson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 08:23:04 compute-0 goofy_thompson[348257]: 167 167
Jan 31 08:23:04 compute-0 systemd[1]: libpod-c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1.scope: Deactivated successfully.
Jan 31 08:23:04 compute-0 podman[348241]: 2026-01-31 08:23:04.49496735 +0000 UTC m=+0.162554187 container died c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_thompson, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 08:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d512d52cdb08e2dd9b1fc12bd6120200bbbe8719eef6354158f77579126fda9-merged.mount: Deactivated successfully.
Jan 31 08:23:04 compute-0 podman[348241]: 2026-01-31 08:23:04.537495482 +0000 UTC m=+0.205082319 container remove c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:23:04 compute-0 systemd[1]: libpod-conmon-c05f3e8c1da176ffbda77f49165feb43c4efb1d0715163ccb0c5691caf602be1.scope: Deactivated successfully.
Jan 31 08:23:04 compute-0 podman[348284]: 2026-01-31 08:23:04.69605214 +0000 UTC m=+0.052438626 container create 5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_raman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:23:04 compute-0 systemd[1]: Started libpod-conmon-5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82.scope.
Jan 31 08:23:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbbb51edee8c8ff4d17a82a26a8a429a4fec4f3b951bc255c61aafd77fa4788c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbbb51edee8c8ff4d17a82a26a8a429a4fec4f3b951bc255c61aafd77fa4788c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbbb51edee8c8ff4d17a82a26a8a429a4fec4f3b951bc255c61aafd77fa4788c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbbb51edee8c8ff4d17a82a26a8a429a4fec4f3b951bc255c61aafd77fa4788c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbbb51edee8c8ff4d17a82a26a8a429a4fec4f3b951bc255c61aafd77fa4788c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:04 compute-0 podman[348284]: 2026-01-31 08:23:04.674337388 +0000 UTC m=+0.030723894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:23:04 compute-0 podman[348284]: 2026-01-31 08:23:04.780359248 +0000 UTC m=+0.136745734 container init 5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:23:04 compute-0 podman[348284]: 2026-01-31 08:23:04.787942643 +0000 UTC m=+0.144329119 container start 5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 08:23:04 compute-0 podman[348284]: 2026-01-31 08:23:04.792059264 +0000 UTC m=+0.148445740 container attach 5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:23:04 compute-0 ceph-mon[74496]: pgmap v2671: 305 pgs: 305 active+clean; 329 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 22 KiB/s wr, 197 op/s
Jan 31 08:23:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:05.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:05 compute-0 nova_compute[247704]: 2026-01-31 08:23:05.346 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:05.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:05 compute-0 sshd-session[348305]: Invalid user eth from 45.148.10.240 port 40662
Jan 31 08:23:05 compute-0 affectionate_raman[348300]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:23:05 compute-0 affectionate_raman[348300]: --> relative data size: 1.0
Jan 31 08:23:05 compute-0 affectionate_raman[348300]: --> All data devices are unavailable
Jan 31 08:23:05 compute-0 systemd[1]: libpod-5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82.scope: Deactivated successfully.
Jan 31 08:23:05 compute-0 podman[348284]: 2026-01-31 08:23:05.624134867 +0000 UTC m=+0.980521333 container died 5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:23:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbbb51edee8c8ff4d17a82a26a8a429a4fec4f3b951bc255c61aafd77fa4788c-merged.mount: Deactivated successfully.
Jan 31 08:23:05 compute-0 podman[348284]: 2026-01-31 08:23:05.678584553 +0000 UTC m=+1.034971019 container remove 5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:05 compute-0 systemd[1]: libpod-conmon-5e484ca71b3da443e0ea609c86c9649eb5a74676fde6f2b97e115deabd0bca82.scope: Deactivated successfully.
Jan 31 08:23:05 compute-0 sudo[348176]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:05 compute-0 sshd-session[348305]: Connection closed by invalid user eth 45.148.10.240 port 40662 [preauth]
Jan 31 08:23:05 compute-0 sudo[348329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:05 compute-0 sudo[348329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:05 compute-0 sudo[348329]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:05 compute-0 sudo[348354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:05 compute-0 sudo[348354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:05 compute-0 sudo[348354]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:05 compute-0 sudo[348379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:05 compute-0 sudo[348379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:05 compute-0 sudo[348379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:05 compute-0 sudo[348404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:23:05 compute-0 sudo[348404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:05 compute-0 nova_compute[247704]: 2026-01-31 08:23:05.947 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "ed055707-78a7-4777-97f3-842e56be52d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:05 compute-0 nova_compute[247704]: 2026-01-31 08:23:05.950 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 261 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 9.1 KiB/s wr, 132 op/s
Jan 31 08:23:06 compute-0 ceph-mon[74496]: pgmap v2672: 305 pgs: 305 active+clean; 261 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 9.1 KiB/s wr, 132 op/s
Jan 31 08:23:06 compute-0 nova_compute[247704]: 2026-01-31 08:23:06.241 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:23:06 compute-0 podman[348469]: 2026-01-31 08:23:06.299256282 +0000 UTC m=+0.037368337 container create 938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:23:06 compute-0 systemd[1]: Started libpod-conmon-938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9.scope.
Jan 31 08:23:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:06 compute-0 podman[348469]: 2026-01-31 08:23:06.28285414 +0000 UTC m=+0.020966195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:23:06 compute-0 podman[348469]: 2026-01-31 08:23:06.390700935 +0000 UTC m=+0.128813030 container init 938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:23:06 compute-0 podman[348469]: 2026-01-31 08:23:06.39869991 +0000 UTC m=+0.136811985 container start 938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:23:06 compute-0 podman[348469]: 2026-01-31 08:23:06.402654677 +0000 UTC m=+0.140766742 container attach 938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 08:23:06 compute-0 peaceful_kapitsa[348486]: 167 167
Jan 31 08:23:06 compute-0 systemd[1]: libpod-938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9.scope: Deactivated successfully.
Jan 31 08:23:06 compute-0 podman[348469]: 2026-01-31 08:23:06.40643722 +0000 UTC m=+0.144549265 container died 938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:23:06 compute-0 nova_compute[247704]: 2026-01-31 08:23:06.412 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:06 compute-0 nova_compute[247704]: 2026-01-31 08:23:06.413 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:06 compute-0 nova_compute[247704]: 2026-01-31 08:23:06.424 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:23:06 compute-0 nova_compute[247704]: 2026-01-31 08:23:06.425 247708 INFO nova.compute.claims [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:23:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab73a19afe9acbd3d1d6df38d4d399d41d66b4f9b6cbeca2f5e4afdf1e8b44d8-merged.mount: Deactivated successfully.
Jan 31 08:23:06 compute-0 podman[348469]: 2026-01-31 08:23:06.450755438 +0000 UTC m=+0.188867473 container remove 938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kapitsa, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 31 08:23:06 compute-0 systemd[1]: libpod-conmon-938239aed4158fb3f14486d105e9dcbeb1a88dcb89157d5523e343227dd333f9.scope: Deactivated successfully.
Jan 31 08:23:06 compute-0 podman[348509]: 2026-01-31 08:23:06.648624929 +0000 UTC m=+0.057947601 container create c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:23:06 compute-0 systemd[1]: Started libpod-conmon-c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842.scope.
Jan 31 08:23:06 compute-0 podman[348509]: 2026-01-31 08:23:06.627691826 +0000 UTC m=+0.037014538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:23:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c956fcfbe37b4e5bc15f31cb1aa6bb8602d102f89cee0a03c40fcc11e7c1cef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c956fcfbe37b4e5bc15f31cb1aa6bb8602d102f89cee0a03c40fcc11e7c1cef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c956fcfbe37b4e5bc15f31cb1aa6bb8602d102f89cee0a03c40fcc11e7c1cef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c956fcfbe37b4e5bc15f31cb1aa6bb8602d102f89cee0a03c40fcc11e7c1cef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:06 compute-0 podman[348509]: 2026-01-31 08:23:06.756705519 +0000 UTC m=+0.166028201 container init c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:23:06 compute-0 podman[348509]: 2026-01-31 08:23:06.762260745 +0000 UTC m=+0.171583417 container start c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:23:06 compute-0 podman[348509]: 2026-01-31 08:23:06.76570647 +0000 UTC m=+0.175029152 container attach c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:23:06 compute-0 nova_compute[247704]: 2026-01-31 08:23:06.773 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:07.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:23:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506549681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.222 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.230 247708 DEBUG nova.compute.provider_tree [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.281 247708 DEBUG nova.scheduler.client.report [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.334 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.336 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:23:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/506549681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.449 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.450 247708 DEBUG nova.network.neutron [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:23:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:23:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:07.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.516 247708 INFO nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.575 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:23:07 compute-0 determined_mendel[348526]: {
Jan 31 08:23:07 compute-0 determined_mendel[348526]:     "0": [
Jan 31 08:23:07 compute-0 determined_mendel[348526]:         {
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "devices": [
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "/dev/loop3"
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             ],
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "lv_name": "ceph_lv0",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "lv_size": "7511998464",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "name": "ceph_lv0",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "tags": {
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.cluster_name": "ceph",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.crush_device_class": "",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.encrypted": "0",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.osd_id": "0",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.type": "block",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:                 "ceph.vdo": "0"
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             },
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "type": "block",
Jan 31 08:23:07 compute-0 determined_mendel[348526]:             "vg_name": "ceph_vg0"
Jan 31 08:23:07 compute-0 determined_mendel[348526]:         }
Jan 31 08:23:07 compute-0 determined_mendel[348526]:     ]
Jan 31 08:23:07 compute-0 determined_mendel[348526]: }
Jan 31 08:23:07 compute-0 systemd[1]: libpod-c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842.scope: Deactivated successfully.
Jan 31 08:23:07 compute-0 podman[348509]: 2026-01-31 08:23:07.655526329 +0000 UTC m=+1.064849061 container died c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.853 247708 DEBUG nova.policy [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '038e2b3b4f174162a3ac6c4870857e60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c90ea7f1be5f484bb873548236fadc00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.880 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.882 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.883 247708 INFO nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Creating image(s)
Jan 31 08:23:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c956fcfbe37b4e5bc15f31cb1aa6bb8602d102f89cee0a03c40fcc11e7c1cef-merged.mount: Deactivated successfully.
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.917 247708 DEBUG nova.storage.rbd_utils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image ed055707-78a7-4777-97f3-842e56be52d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.946 247708 DEBUG nova.storage.rbd_utils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image ed055707-78a7-4777-97f3-842e56be52d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.977 247708 DEBUG nova.storage.rbd_utils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image ed055707-78a7-4777-97f3-842e56be52d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:07 compute-0 nova_compute[247704]: 2026-01-31 08:23:07.982 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 8.6 KiB/s wr, 133 op/s
Jan 31 08:23:08 compute-0 nova_compute[247704]: 2026-01-31 08:23:08.048 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:08 compute-0 nova_compute[247704]: 2026-01-31 08:23:08.049 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:08 compute-0 nova_compute[247704]: 2026-01-31 08:23:08.050 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:08 compute-0 nova_compute[247704]: 2026-01-31 08:23:08.050 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:08 compute-0 podman[348509]: 2026-01-31 08:23:08.078011889 +0000 UTC m=+1.487334561 container remove c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:08 compute-0 nova_compute[247704]: 2026-01-31 08:23:08.083 247708 DEBUG nova.storage.rbd_utils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image ed055707-78a7-4777-97f3-842e56be52d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:08 compute-0 systemd[1]: libpod-conmon-c94720d33983341d422b3efa6571d0e0985a0cd679986f6aad8f294155491842.scope: Deactivated successfully.
Jan 31 08:23:08 compute-0 nova_compute[247704]: 2026-01-31 08:23:08.091 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ed055707-78a7-4777-97f3-842e56be52d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:08 compute-0 sudo[348404]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:08 compute-0 sudo[348648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:08 compute-0 sudo[348648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:08 compute-0 sudo[348648]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:08 compute-0 sudo[348688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:08 compute-0 sudo[348688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:08 compute-0 sudo[348688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:08 compute-0 sudo[348713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:08 compute-0 sudo[348713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:08 compute-0 sudo[348713]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:08 compute-0 sudo[348738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:23:08 compute-0 sudo[348738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4137991365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:08 compute-0 ceph-mon[74496]: pgmap v2673: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 8.6 KiB/s wr, 133 op/s
Jan 31 08:23:08 compute-0 podman[348806]: 2026-01-31 08:23:08.717057529 +0000 UTC m=+0.023456517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:23:08 compute-0 podman[348806]: 2026-01-31 08:23:08.829692301 +0000 UTC m=+0.136091269 container create e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:23:08 compute-0 systemd[1]: Started libpod-conmon-e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5.scope.
Jan 31 08:23:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:09.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:09 compute-0 podman[348806]: 2026-01-31 08:23:09.143748672 +0000 UTC m=+0.450147660 container init e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:23:09 compute-0 sudo[348835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:09 compute-0 podman[348806]: 2026-01-31 08:23:09.153117432 +0000 UTC m=+0.459516410 container start e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:09 compute-0 sudo[348835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:09 compute-0 sudo[348835]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.157 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 ed055707-78a7-4777-97f3-842e56be52d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:09 compute-0 magical_hawking[348822]: 167 167
Jan 31 08:23:09 compute-0 systemd[1]: libpod-e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5.scope: Deactivated successfully.
Jan 31 08:23:09 compute-0 sudo[348870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:09 compute-0 sudo[348870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:09 compute-0 sudo[348870]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:09 compute-0 podman[348806]: 2026-01-31 08:23:09.240661258 +0000 UTC m=+0.547060256 container attach e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:23:09 compute-0 podman[348806]: 2026-01-31 08:23:09.24276876 +0000 UTC m=+0.549167728 container died e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.245 247708 DEBUG nova.storage.rbd_utils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] resizing rbd image ed055707-78a7-4777-97f3-842e56be52d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:23:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf4cfd60b01eadebe03f9499c007bb12315e970a4dbf095b3374082e65f02547-merged.mount: Deactivated successfully.
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.439 247708 DEBUG nova.network.neutron [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Successfully created port: 5ad80e9c-4635-405b-a513-4af4441d6e17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:23:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:09.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:09 compute-0 podman[348806]: 2026-01-31 08:23:09.466412063 +0000 UTC m=+0.772811031 container remove e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hawking, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 08:23:09 compute-0 systemd[1]: libpod-conmon-e9756fe33dbf284e992b6d30befc32bece97e6715d5c39e95c9d53e190e99ed5.scope: Deactivated successfully.
Jan 31 08:23:09 compute-0 podman[348824]: 2026-01-31 08:23:09.547965174 +0000 UTC m=+0.551367851 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 08:23:09 compute-0 podman[348976]: 2026-01-31 08:23:09.590799204 +0000 UTC m=+0.030356005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:23:09 compute-0 podman[348976]: 2026-01-31 08:23:09.722455562 +0000 UTC m=+0.162012343 container create 1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.735 247708 DEBUG nova.objects.instance [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'migration_context' on Instance uuid ed055707-78a7-4777-97f3-842e56be52d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.858 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.859 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Ensure instance console log exists: /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.860 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.860 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:09 compute-0 nova_compute[247704]: 2026-01-31 08:23:09.860 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:09 compute-0 systemd[1]: Started libpod-conmon-1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65.scope.
Jan 31 08:23:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904fd1029080198c159180c8c5d534022833d7bae73969c90b06a44f90af3619/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904fd1029080198c159180c8c5d534022833d7bae73969c90b06a44f90af3619/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904fd1029080198c159180c8c5d534022833d7bae73969c90b06a44f90af3619/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/904fd1029080198c159180c8c5d534022833d7bae73969c90b06a44f90af3619/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:09 compute-0 podman[348976]: 2026-01-31 08:23:09.956838069 +0000 UTC m=+0.396394920 container init 1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:23:09 compute-0 podman[348976]: 2026-01-31 08:23:09.968810123 +0000 UTC m=+0.408366944 container start 1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:23:10 compute-0 podman[348976]: 2026-01-31 08:23:10.029763078 +0000 UTC m=+0.469319899 container attach 1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:23:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 268 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 622 KiB/s wr, 86 op/s
Jan 31 08:23:10 compute-0 ceph-mon[74496]: pgmap v2674: 305 pgs: 305 active+clean; 268 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 622 KiB/s wr, 86 op/s
Jan 31 08:23:10 compute-0 nova_compute[247704]: 2026-01-31 08:23:10.347 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:10 compute-0 nice_taussig[349011]: {
Jan 31 08:23:10 compute-0 nice_taussig[349011]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:23:10 compute-0 nice_taussig[349011]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:23:10 compute-0 nice_taussig[349011]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:23:10 compute-0 nice_taussig[349011]:         "osd_id": 0,
Jan 31 08:23:10 compute-0 nice_taussig[349011]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:23:10 compute-0 nice_taussig[349011]:         "type": "bluestore"
Jan 31 08:23:10 compute-0 nice_taussig[349011]:     }
Jan 31 08:23:10 compute-0 nice_taussig[349011]: }
Jan 31 08:23:10 compute-0 systemd[1]: libpod-1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65.scope: Deactivated successfully.
Jan 31 08:23:10 compute-0 podman[348976]: 2026-01-31 08:23:10.834458789 +0000 UTC m=+1.274015610 container died 1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:23:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-904fd1029080198c159180c8c5d534022833d7bae73969c90b06a44f90af3619-merged.mount: Deactivated successfully.
Jan 31 08:23:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:11.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:11.195 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:11.196 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:11.196 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:11 compute-0 podman[348976]: 2026-01-31 08:23:11.349654442 +0000 UTC m=+1.789211223 container remove 1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_taussig, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:11 compute-0 systemd[1]: libpod-conmon-1f6790fb97f2b45f88f509e705caea7df5e2d5a2d2e2e1f320ebc5253ecddc65.scope: Deactivated successfully.
Jan 31 08:23:11 compute-0 sudo[348738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:23:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:11.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:11 compute-0 nova_compute[247704]: 2026-01-31 08:23:11.605 247708 DEBUG nova.network.neutron [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Successfully updated port: 5ad80e9c-4635-405b-a513-4af4441d6e17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:23:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:23:11 compute-0 nova_compute[247704]: 2026-01-31 08:23:11.657 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:23:11 compute-0 nova_compute[247704]: 2026-01-31 08:23:11.658 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquired lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:23:11 compute-0 nova_compute[247704]: 2026-01-31 08:23:11.658 247708 DEBUG nova.network.neutron [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:23:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 811b92f3-c268-4fbb-b348-912add5529d2 does not exist
Jan 31 08:23:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 30416b9a-e820-4cc7-9e92-c43dfa575fc7 does not exist
Jan 31 08:23:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1397f5ee-a9f5-44cc-a7af-112c3de761ae does not exist
Jan 31 08:23:11 compute-0 sudo[349047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:11 compute-0 sudo[349047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:11 compute-0 sudo[349047]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:11 compute-0 sudo[349072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:23:11 compute-0 sudo[349072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:11 compute-0 sudo[349072]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:12 compute-0 nova_compute[247704]: 2026-01-31 08:23:12.029 247708 DEBUG nova.compute.manager [req-7703190f-a3b1-4fa4-a684-1ccf21506896 req-8836d6f4-b9d2-41e0-89da-cf657b9ad99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received event network-changed-5ad80e9c-4635-405b-a513-4af4441d6e17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:23:12 compute-0 nova_compute[247704]: 2026-01-31 08:23:12.029 247708 DEBUG nova.compute.manager [req-7703190f-a3b1-4fa4-a684-1ccf21506896 req-8836d6f4-b9d2-41e0-89da-cf657b9ad99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Refreshing instance network info cache due to event network-changed-5ad80e9c-4635-405b-a513-4af4441d6e17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:23:12 compute-0 nova_compute[247704]: 2026-01-31 08:23:12.030 247708 DEBUG oslo_concurrency.lockutils [req-7703190f-a3b1-4fa4-a684-1ccf21506896 req-8836d6f4-b9d2-41e0-89da-cf657b9ad99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:23:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 283 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Jan 31 08:23:12 compute-0 nova_compute[247704]: 2026-01-31 08:23:12.098 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847777.096835, c725d43e-b5fe-4a94-ad44-6df85e3c0fa0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:23:12 compute-0 nova_compute[247704]: 2026-01-31 08:23:12.098 247708 INFO nova.compute.manager [-] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] VM Stopped (Lifecycle Event)
Jan 31 08:23:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:12 compute-0 nova_compute[247704]: 2026-01-31 08:23:12.125 247708 DEBUG nova.compute.manager [None req-c4e58711-a327-4f21-93b2-84951bddaedf - - - - - -] [instance: c725d43e-b5fe-4a94-ad44-6df85e3c0fa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:12 compute-0 nova_compute[247704]: 2026-01-31 08:23:12.205 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:12 compute-0 nova_compute[247704]: 2026-01-31 08:23:12.432 247708 DEBUG nova.network.neutron [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:23:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:23:12 compute-0 ceph-mon[74496]: pgmap v2675: 305 pgs: 305 active+clean; 283 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Jan 31 08:23:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:13.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:13.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.911 247708 DEBUG nova.network.neutron [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Updating instance_info_cache with network_info: [{"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.963 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Releasing lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.963 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Instance network_info: |[{"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.964 247708 DEBUG oslo_concurrency.lockutils [req-7703190f-a3b1-4fa4-a684-1ccf21506896 req-8836d6f4-b9d2-41e0-89da-cf657b9ad99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.965 247708 DEBUG nova.network.neutron [req-7703190f-a3b1-4fa4-a684-1ccf21506896 req-8836d6f4-b9d2-41e0-89da-cf657b9ad99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Refreshing network info cache for port 5ad80e9c-4635-405b-a513-4af4441d6e17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.971 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Start _get_guest_xml network_info=[{"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.979 247708 WARNING nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.987 247708 DEBUG nova.virt.libvirt.host [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.988 247708 DEBUG nova.virt.libvirt.host [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.994 247708 DEBUG nova.virt.libvirt.host [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.995 247708 DEBUG nova.virt.libvirt.host [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.997 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.997 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.998 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.999 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:23:13 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.999 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:13.999 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.000 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.000 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.001 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.001 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.002 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.002 247708 DEBUG nova.virt.hardware [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.009 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 31 08:23:14 compute-0 ceph-mon[74496]: pgmap v2676: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 31 08:23:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:23:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97320698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.518 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.555 247708 DEBUG nova.storage.rbd_utils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image ed055707-78a7-4777-97f3-842e56be52d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.561 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:23:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268451175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.987 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.989 247708 DEBUG nova.virt.libvirt.vif [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:23:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1532317084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1532317084',id=151,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-f2y7gvlm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeS
tableRescueTest-1116995694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:23:07Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=ed055707-78a7-4777-97f3-842e56be52d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.990 247708 DEBUG nova.network.os_vif_util [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.991 247708 DEBUG nova.network.os_vif_util [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:d8:c5,bridge_name='br-int',has_traffic_filtering=True,id=5ad80e9c-4635-405b-a513-4af4441d6e17,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad80e9c-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:23:14 compute-0 nova_compute[247704]: 2026-01-31 08:23:14.992 247708 DEBUG nova.objects.instance [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'pci_devices' on Instance uuid ed055707-78a7-4777-97f3-842e56be52d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.020 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <uuid>ed055707-78a7-4777-97f3-842e56be52d9</uuid>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <name>instance-00000097</name>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1532317084</nova:name>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:23:13</nova:creationTime>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <nova:user uuid="038e2b3b4f174162a3ac6c4870857e60">tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member</nova:user>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <nova:project uuid="c90ea7f1be5f484bb873548236fadc00">tempest-ServerBootFromVolumeStableRescueTest-1116995694</nova:project>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <nova:port uuid="5ad80e9c-4635-405b-a513-4af4441d6e17">
Jan 31 08:23:15 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <system>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <entry name="serial">ed055707-78a7-4777-97f3-842e56be52d9</entry>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <entry name="uuid">ed055707-78a7-4777-97f3-842e56be52d9</entry>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </system>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <os>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   </os>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <features>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   </features>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/ed055707-78a7-4777-97f3-842e56be52d9_disk">
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       </source>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/ed055707-78a7-4777-97f3-842e56be52d9_disk.config">
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       </source>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:23:15 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:36:d8:c5"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <target dev="tap5ad80e9c-46"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9/console.log" append="off"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <video>
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </video>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:23:15 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:23:15 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:23:15 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:23:15 compute-0 nova_compute[247704]: </domain>
Jan 31 08:23:15 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.022 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Preparing to wait for external event network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.023 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "ed055707-78a7-4777-97f3-842e56be52d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.023 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.024 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.025 247708 DEBUG nova.virt.libvirt.vif [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:23:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1532317084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1532317084',id=151,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-f2y7gvlm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootF
romVolumeStableRescueTest-1116995694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:23:07Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=ed055707-78a7-4777-97f3-842e56be52d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.025 247708 DEBUG nova.network.os_vif_util [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.026 247708 DEBUG nova.network.os_vif_util [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:d8:c5,bridge_name='br-int',has_traffic_filtering=True,id=5ad80e9c-4635-405b-a513-4af4441d6e17,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad80e9c-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.026 247708 DEBUG os_vif [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:d8:c5,bridge_name='br-int',has_traffic_filtering=True,id=5ad80e9c-4635-405b-a513-4af4441d6e17,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad80e9c-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.027 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.027 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.028 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:23:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:15.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.032 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.032 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ad80e9c-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.033 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ad80e9c-46, col_values=(('external_ids', {'iface-id': '5ad80e9c-4635-405b-a513-4af4441d6e17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:d8:c5', 'vm-uuid': 'ed055707-78a7-4777-97f3-842e56be52d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.077 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:15 compute-0 NetworkManager[49108]: <info>  [1769847795.0789] manager: (tap5ad80e9c-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.082 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.086 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.089 247708 INFO os_vif [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:d8:c5,bridge_name='br-int',has_traffic_filtering=True,id=5ad80e9c-4635-405b-a513-4af4441d6e17,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad80e9c-46')
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.189 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.189 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.189 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No VIF found with MAC fa:16:3e:36:d8:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.190 247708 INFO nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Using config drive
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.222 247708 DEBUG nova.storage.rbd_utils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image ed055707-78a7-4777-97f3-842e56be52d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:15 compute-0 nova_compute[247704]: 2026-01-31 08:23:15.348 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:15.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/97320698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4268451175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 31 08:23:16 compute-0 ceph-mon[74496]: pgmap v2677: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.524 247708 INFO nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Creating config drive at /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9/disk.config
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.529 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpijrwi021 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.667 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpijrwi021" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.696 247708 DEBUG nova.storage.rbd_utils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image ed055707-78a7-4777-97f3-842e56be52d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.700 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9/disk.config ed055707-78a7-4777-97f3-842e56be52d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.864 247708 DEBUG oslo_concurrency.processutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9/disk.config ed055707-78a7-4777-97f3-842e56be52d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.865 247708 INFO nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Deleting local config drive /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9/disk.config because it was imported into RBD.
Jan 31 08:23:16 compute-0 kernel: tap5ad80e9c-46: entered promiscuous mode
Jan 31 08:23:16 compute-0 NetworkManager[49108]: <info>  [1769847796.9234] manager: (tap5ad80e9c-46): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Jan 31 08:23:16 compute-0 ovn_controller[149457]: 2026-01-31T08:23:16Z|00657|binding|INFO|Claiming lport 5ad80e9c-4635-405b-a513-4af4441d6e17 for this chassis.
Jan 31 08:23:16 compute-0 ovn_controller[149457]: 2026-01-31T08:23:16Z|00658|binding|INFO|5ad80e9c-4635-405b-a513-4af4441d6e17: Claiming fa:16:3e:36:d8:c5 10.100.0.14
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.924 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.928 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.932 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.947 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:d8:c5 10.100.0.14'], port_security=['fa:16:3e:36:d8:c5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ed055707-78a7-4777-97f3-842e56be52d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5ad80e9c-4635-405b-a513-4af4441d6e17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.949 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5ad80e9c-4635-405b-a513-4af4441d6e17 in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 bound to our chassis
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.950 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:23:16 compute-0 systemd-udevd[349234]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.955 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:16 compute-0 ovn_controller[149457]: 2026-01-31T08:23:16Z|00659|binding|INFO|Setting lport 5ad80e9c-4635-405b-a513-4af4441d6e17 ovn-installed in OVS
Jan 31 08:23:16 compute-0 ovn_controller[149457]: 2026-01-31T08:23:16Z|00660|binding|INFO|Setting lport 5ad80e9c-4635-405b-a513-4af4441d6e17 up in Southbound
Jan 31 08:23:16 compute-0 systemd-machined[214448]: New machine qemu-69-instance-00000097.
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.966 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0597db-f781-40ff-aa45-0dd64a0e55d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.968 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap936cead9-b1 in ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:23:16 compute-0 nova_compute[247704]: 2026-01-31 08:23:16.966 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:16 compute-0 NetworkManager[49108]: <info>  [1769847796.9718] device (tap5ad80e9c-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:23:16 compute-0 NetworkManager[49108]: <info>  [1769847796.9728] device (tap5ad80e9c-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.970 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap936cead9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.970 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[636cb048-e698-44d3-acea-093dc5d1df4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.973 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[28c09ae3-9674-4c76-926d-2a0b14b212b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:16 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-00000097.
Jan 31 08:23:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:16.985 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b81813-9db8-4815-882a-7096c94e1b43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.000 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0e698742-866c-4147-b0e3-bb70f9d3e17f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:17.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.034 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[00c5b000-d76c-492a-86cd-8c5034809469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.038 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ad4594-aeba-4599-938e-2fc65fbd3576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 systemd-udevd[349238]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:23:17 compute-0 NetworkManager[49108]: <info>  [1769847797.0405] manager: (tap936cead9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.070 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6573d847-8a77-4650-85ff-136adfdfe3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.074 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5827a9-b784-4b03-b171-9a57c5a421ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 NetworkManager[49108]: <info>  [1769847797.0948] device (tap936cead9-b0): carrier: link connected
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.098 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0b735998-dd35-4306-98c0-e6546c1c65c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.117 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5fbc25-b513-4a34-a24f-4e5b661e1280]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803971, 'reachable_time': 24986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349268, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.133 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1aee0a-56c4-4580-8645-0f19befcdb0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:62a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803971, 'tstamp': 803971}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349269, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.149 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b421b136-825b-43fc-8d51-ca0d245bf673]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803971, 'reachable_time': 24986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349270, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.178 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cb426076-35ad-455c-af98-01ca432b4301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.231 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d70e3b-d510-4c15-b2e8-71d9fba5b048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.233 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.233 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.234 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:23:17 compute-0 kernel: tap936cead9-b0: entered promiscuous mode
Jan 31 08:23:17 compute-0 NetworkManager[49108]: <info>  [1769847797.2371] manager: (tap936cead9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.236 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.239 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:23:17 compute-0 ovn_controller[149457]: 2026-01-31T08:23:17Z|00661|binding|INFO|Releasing lport fd5187fd-cce9-41da-96d2-ef75fbcbcf0f from this chassis (sb_readonly=0)
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.240 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.246 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.247 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.248 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[aa33d0c0-568e-46c0-8578-a29252c59e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.249 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/936cead9-bc2f-4c2d-8b4c-6079d2159263.pid.haproxy
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:23:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:17.250 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'env', 'PROCESS_TAG=haproxy-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/936cead9-bc2f-4c2d-8b4c-6079d2159263.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.258 247708 DEBUG nova.network.neutron [req-7703190f-a3b1-4fa4-a684-1ccf21506896 req-8836d6f4-b9d2-41e0-89da-cf657b9ad99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Updated VIF entry in instance network info cache for port 5ad80e9c-4635-405b-a513-4af4441d6e17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.259 247708 DEBUG nova.network.neutron [req-7703190f-a3b1-4fa4-a684-1ccf21506896 req-8836d6f4-b9d2-41e0-89da-cf657b9ad99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Updating instance_info_cache with network_info: [{"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.282 247708 DEBUG oslo_concurrency.lockutils [req-7703190f-a3b1-4fa4-a684-1ccf21506896 req-8836d6f4-b9d2-41e0-89da-cf657b9ad99b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:23:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:17.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:17 compute-0 podman[349303]: 2026-01-31 08:23:17.573103917 +0000 UTC m=+0.025327742 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.808 247708 DEBUG nova.compute.manager [req-1dc95884-a689-4911-bccd-7b358706cc00 req-a978920f-6b7a-4d4f-857f-74689528cff2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received event network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.809 247708 DEBUG oslo_concurrency.lockutils [req-1dc95884-a689-4911-bccd-7b358706cc00 req-a978920f-6b7a-4d4f-857f-74689528cff2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ed055707-78a7-4777-97f3-842e56be52d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.810 247708 DEBUG oslo_concurrency.lockutils [req-1dc95884-a689-4911-bccd-7b358706cc00 req-a978920f-6b7a-4d4f-857f-74689528cff2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.810 247708 DEBUG oslo_concurrency.lockutils [req-1dc95884-a689-4911-bccd-7b358706cc00 req-a978920f-6b7a-4d4f-857f-74689528cff2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:17 compute-0 nova_compute[247704]: 2026-01-31 08:23:17.811 247708 DEBUG nova.compute.manager [req-1dc95884-a689-4911-bccd-7b358706cc00 req-a978920f-6b7a-4d4f-857f-74689528cff2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Processing event network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:23:17 compute-0 podman[349303]: 2026-01-31 08:23:17.8937587 +0000 UTC m=+0.345982505 container create 168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:23:17 compute-0 systemd[1]: Started libpod-conmon-168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2.scope.
Jan 31 08:23:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/097d4bec168e05748f850731fe9315c23796636a941c9c9b1ddad144d00e98dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:17 compute-0 podman[349303]: 2026-01-31 08:23:17.994810498 +0000 UTC m=+0.447034343 container init 168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 08:23:18 compute-0 podman[349303]: 2026-01-31 08:23:18.000681893 +0000 UTC m=+0.452905728 container start 168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:23:18 compute-0 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[349336]: [NOTICE]   (349358) : New worker (349361) forked
Jan 31 08:23:18 compute-0 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[349336]: [NOTICE]   (349358) : Loading success.
Jan 31 08:23:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 31 08:23:18 compute-0 nova_compute[247704]: 2026-01-31 08:23:18.126 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:23:18 compute-0 nova_compute[247704]: 2026-01-31 08:23:18.128 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847798.1275537, ed055707-78a7-4777-97f3-842e56be52d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:23:18 compute-0 nova_compute[247704]: 2026-01-31 08:23:18.129 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] VM Started (Lifecycle Event)
Jan 31 08:23:18 compute-0 nova_compute[247704]: 2026-01-31 08:23:18.132 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:23:18 compute-0 nova_compute[247704]: 2026-01-31 08:23:18.136 247708 INFO nova.virt.libvirt.driver [-] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Instance spawned successfully.
Jan 31 08:23:18 compute-0 nova_compute[247704]: 2026-01-31 08:23:18.137 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:23:18 compute-0 ceph-mon[74496]: pgmap v2678: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 31 08:23:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:19.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:19.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 652 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.066 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.073 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.109 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.110 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.111 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.111 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.112 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.113 247708 DEBUG nova.virt.libvirt.driver [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.116 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.144 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.145 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847798.128777, ed055707-78a7-4777-97f3-842e56be52d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.146 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] VM Paused (Lifecycle Event)
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:23:20
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['images', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta']
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.191 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.202 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847798.1304235, ed055707-78a7-4777-97f3-842e56be52d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.203 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] VM Resumed (Lifecycle Event)
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.235 247708 INFO nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Took 12.35 seconds to spawn the instance on the hypervisor.
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.236 247708 DEBUG nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:20 compute-0 ceph-mon[74496]: pgmap v2679: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 652 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.275 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.278 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.333 247708 INFO nova.compute.manager [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Took 13.96 seconds to build instance.
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.351 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.403 247708 DEBUG oslo_concurrency.lockutils [None req-aa2d1bb0-a7ee-457d-87d7-c50b47d74ba9 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:23:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.696 247708 DEBUG nova.compute.manager [req-cc68cd50-7516-4664-95e5-716b4c493192 req-3fdcddbc-5d4e-4fe6-936e-fabf6c31997e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received event network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.697 247708 DEBUG oslo_concurrency.lockutils [req-cc68cd50-7516-4664-95e5-716b4c493192 req-3fdcddbc-5d4e-4fe6-936e-fabf6c31997e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ed055707-78a7-4777-97f3-842e56be52d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.697 247708 DEBUG oslo_concurrency.lockutils [req-cc68cd50-7516-4664-95e5-716b4c493192 req-3fdcddbc-5d4e-4fe6-936e-fabf6c31997e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.697 247708 DEBUG oslo_concurrency.lockutils [req-cc68cd50-7516-4664-95e5-716b4c493192 req-3fdcddbc-5d4e-4fe6-936e-fabf6c31997e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.697 247708 DEBUG nova.compute.manager [req-cc68cd50-7516-4664-95e5-716b4c493192 req-3fdcddbc-5d4e-4fe6-936e-fabf6c31997e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] No waiting events found dispatching network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:23:20 compute-0 nova_compute[247704]: 2026-01-31 08:23:20.698 247708 WARNING nova.compute.manager [req-cc68cd50-7516-4664-95e5-716b4c493192 req-3fdcddbc-5d4e-4fe6-936e-fabf6c31997e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received unexpected event network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 for instance with vm_state active and task_state None.
Jan 31 08:23:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:21.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:21.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 69 op/s
Jan 31 08:23:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.142567) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802142675, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 930, "num_deletes": 251, "total_data_size": 1373808, "memory_usage": 1403608, "flush_reason": "Manual Compaction"}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802153957, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 863215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57856, "largest_seqno": 58785, "table_properties": {"data_size": 859378, "index_size": 1488, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10386, "raw_average_key_size": 21, "raw_value_size": 851117, "raw_average_value_size": 1729, "num_data_blocks": 66, "num_entries": 492, "num_filter_entries": 492, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847728, "oldest_key_time": 1769847728, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 11445 microseconds, and 3358 cpu microseconds.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.154022) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 863215 bytes OK
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.154048) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.156481) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.156505) EVENT_LOG_v1 {"time_micros": 1769847802156500, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.156530) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1369411, prev total WAL file size 1370124, number of live WAL files 2.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.157276) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303039' seq:72057594037927935, type:22 .. '6D6772737461740032323630' seq:0, type:0; will stop at (end)
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(842KB)], [128(13MB)]
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802157347, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 14522986, "oldest_snapshot_seqno": -1}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: pgmap v2680: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 69 op/s
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 8700 keys, 11173427 bytes, temperature: kUnknown
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802263252, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11173427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11117668, "index_size": 32926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227725, "raw_average_key_size": 26, "raw_value_size": 10965022, "raw_average_value_size": 1260, "num_data_blocks": 1270, "num_entries": 8700, "num_filter_entries": 8700, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.263601) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11173427 bytes
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.277842) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.0 rd, 105.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(29.8) write-amplify(12.9) OK, records in: 9186, records dropped: 486 output_compression: NoCompression
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.277905) EVENT_LOG_v1 {"time_micros": 1769847802277884, "job": 78, "event": "compaction_finished", "compaction_time_micros": 106012, "compaction_time_cpu_micros": 42759, "output_level": 6, "num_output_files": 1, "total_output_size": 11173427, "num_input_records": 9186, "num_output_records": 8700, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802278408, "job": 78, "event": "table_file_deletion", "file_number": 130}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802280127, "job": 78, "event": "table_file_deletion", "file_number": 128}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.157122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.280239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.280246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.280248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.280249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.280251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.280804) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802280927, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 263, "num_deletes": 251, "total_data_size": 14664, "memory_usage": 20104, "flush_reason": "Manual Compaction"}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802289634, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 14621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58786, "largest_seqno": 59048, "table_properties": {"data_size": 12786, "index_size": 67, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4731, "raw_average_key_size": 18, "raw_value_size": 9304, "raw_average_value_size": 35, "num_data_blocks": 3, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847802, "oldest_key_time": 1769847802, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 8840 microseconds, and 1326 cpu microseconds.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.289681) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 14621 bytes OK
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.289703) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.301157) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.301177) EVENT_LOG_v1 {"time_micros": 1769847802301172, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.301199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 12620, prev total WAL file size 12620, number of live WAL files 2.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.301851) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(14KB)], [131(10MB)]
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802301943, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11188048, "oldest_snapshot_seqno": -1}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 8453 keys, 9195512 bytes, temperature: kUnknown
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802395965, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9195512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9143251, "index_size": 30004, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 223289, "raw_average_key_size": 26, "raw_value_size": 8996725, "raw_average_value_size": 1064, "num_data_blocks": 1139, "num_entries": 8453, "num_filter_entries": 8453, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.396381) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9195512 bytes
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.420307) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.8 rd, 97.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.7 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(1394.1) write-amplify(628.9) OK, records in: 8960, records dropped: 507 output_compression: NoCompression
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.420364) EVENT_LOG_v1 {"time_micros": 1769847802420341, "job": 80, "event": "compaction_finished", "compaction_time_micros": 94187, "compaction_time_cpu_micros": 30287, "output_level": 6, "num_output_files": 1, "total_output_size": 9195512, "num_input_records": 8960, "num_output_records": 8453, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802420620, "job": 80, "event": "table_file_deletion", "file_number": 133}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847802422924, "job": 80, "event": "table_file_deletion", "file_number": 131}
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.301691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.423065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.423096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.423098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.423100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:23:22.423102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:22 compute-0 podman[349378]: 2026-01-31 08:23:22.887787779 +0000 UTC m=+0.057293846 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:23:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:23.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:23.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 614 KiB/s wr, 89 op/s
Jan 31 08:23:24 compute-0 ceph-mon[74496]: pgmap v2681: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 614 KiB/s wr, 89 op/s
Jan 31 08:23:24 compute-0 nova_compute[247704]: 2026-01-31 08:23:24.376 247708 DEBUG nova.compute.manager [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:23:24 compute-0 nova_compute[247704]: 2026-01-31 08:23:24.451 247708 INFO nova.compute.manager [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] instance snapshotting
Jan 31 08:23:24 compute-0 nova_compute[247704]: 2026-01-31 08:23:24.917 247708 INFO nova.virt.libvirt.driver [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Beginning live snapshot process
Jan 31 08:23:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.003000072s ======
Jan 31 08:23:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:25.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000072s
Jan 31 08:23:25 compute-0 nova_compute[247704]: 2026-01-31 08:23:25.147 247708 DEBUG nova.virt.libvirt.imagebackend [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 08:23:25 compute-0 nova_compute[247704]: 2026-01-31 08:23:25.152 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1503404094' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:25 compute-0 nova_compute[247704]: 2026-01-31 08:23:25.353 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:25.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:25 compute-0 nova_compute[247704]: 2026-01-31 08:23:25.642 247708 DEBUG nova.storage.rbd_utils [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] creating snapshot(a4badc20765342eb9e63a4a00d3b8b23) on rbd image(ed055707-78a7-4777-97f3-842e56be52d9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:23:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 76 op/s
Jan 31 08:23:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Jan 31 08:23:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Jan 31 08:23:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/597961286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:26 compute-0 ceph-mon[74496]: pgmap v2682: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 76 op/s
Jan 31 08:23:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Jan 31 08:23:26 compute-0 nova_compute[247704]: 2026-01-31 08:23:26.872 247708 DEBUG nova.storage.rbd_utils [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] cloning vms/ed055707-78a7-4777-97f3-842e56be52d9_disk@a4badc20765342eb9e63a4a00d3b8b23 to images/9a73b720-4ab7-4126-8c93-bff12492db80 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:23:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:27.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:27 compute-0 nova_compute[247704]: 2026-01-31 08:23:27.307 247708 DEBUG nova.storage.rbd_utils [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] flattening images/9a73b720-4ab7-4126-8c93-bff12492db80 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:23:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:27.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 16 KiB/s wr, 137 op/s
Jan 31 08:23:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/344176187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:28 compute-0 ceph-mon[74496]: osdmap e348: 3 total, 3 up, 3 in
Jan 31 08:23:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:29.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:29 compute-0 ceph-mon[74496]: pgmap v2684: 305 pgs: 305 active+clean; 295 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 16 KiB/s wr, 137 op/s
Jan 31 08:23:29 compute-0 sudo[349524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:29 compute-0 sudo[349524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:29 compute-0 sudo[349524]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:29 compute-0 sudo[349549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:29 compute-0 sudo[349549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:29 compute-0 sudo[349549]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:29 compute-0 nova_compute[247704]: 2026-01-31 08:23:29.405 247708 DEBUG nova.storage.rbd_utils [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] removing snapshot(a4badc20765342eb9e63a4a00d3b8b23) on rbd image(ed055707-78a7-4777-97f3-842e56be52d9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:23:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:29.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 152 op/s
Jan 31 08:23:30 compute-0 nova_compute[247704]: 2026-01-31 08:23:30.200 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Jan 31 08:23:30 compute-0 ceph-mon[74496]: pgmap v2685: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 152 op/s
Jan 31 08:23:30 compute-0 nova_compute[247704]: 2026-01-31 08:23:30.354 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Jan 31 08:23:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Jan 31 08:23:30 compute-0 nova_compute[247704]: 2026-01-31 08:23:30.603 247708 DEBUG nova.storage.rbd_utils [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] creating snapshot(snap) on rbd image(9a73b720-4ab7-4126-8c93-bff12492db80) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:23:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:31.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Jan 31 08:23:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:31.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Jan 31 08:23:31 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Jan 31 08:23:31 compute-0 ceph-mon[74496]: osdmap e349: 3 total, 3 up, 3 in
Jan 31 08:23:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/879210536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 363 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.2 MiB/s wr, 245 op/s
Jan 31 08:23:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:32 compute-0 ovn_controller[149457]: 2026-01-31T08:23:32Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:d8:c5 10.100.0.14
Jan 31 08:23:32 compute-0 ovn_controller[149457]: 2026-01-31T08:23:32Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:d8:c5 10.100.0.14
Jan 31 08:23:32 compute-0 ceph-mon[74496]: osdmap e350: 3 total, 3 up, 3 in
Jan 31 08:23:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/836179707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:32 compute-0 ceph-mon[74496]: pgmap v2688: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 363 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.2 MiB/s wr, 245 op/s
Jan 31 08:23:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:33.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:23:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:23:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 395 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.8 MiB/s wr, 276 op/s
Jan 31 08:23:34 compute-0 ceph-mon[74496]: pgmap v2689: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 395 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.8 MiB/s wr, 276 op/s
Jan 31 08:23:34 compute-0 nova_compute[247704]: 2026-01-31 08:23:34.565 247708 INFO nova.virt.libvirt.driver [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Snapshot image upload complete
Jan 31 08:23:34 compute-0 nova_compute[247704]: 2026-01-31 08:23:34.566 247708 INFO nova.compute.manager [None req-6dc8853b-6fde-414f-9e43-a0ab3dcb5598 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Took 10.11 seconds to snapshot the instance on the hypervisor.
Jan 31 08:23:34 compute-0 nova_compute[247704]: 2026-01-31 08:23:34.571 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:35 compute-0 nova_compute[247704]: 2026-01-31 08:23:35.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:35 compute-0 nova_compute[247704]: 2026-01-31 08:23:35.356 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:35.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006725794772473479 of space, bias 1.0, pg target 2.0177384317420435 quantized to 32 (current 32)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1565338992415783 quantized to 32 (current 32)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:23:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 08:23:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 415 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 8.5 MiB/s wr, 305 op/s
Jan 31 08:23:36 compute-0 ceph-mon[74496]: pgmap v2690: 305 pgs: 305 active+clean; 415 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 8.5 MiB/s wr, 305 op/s
Jan 31 08:23:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:23:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:23:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Jan 31 08:23:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:37.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Jan 31 08:23:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Jan 31 08:23:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.1 MiB/s wr, 344 op/s
Jan 31 08:23:38 compute-0 nova_compute[247704]: 2026-01-31 08:23:38.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:38 compute-0 nova_compute[247704]: 2026-01-31 08:23:38.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:38 compute-0 ceph-mon[74496]: osdmap e351: 3 total, 3 up, 3 in
Jan 31 08:23:38 compute-0 ceph-mon[74496]: pgmap v2692: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.1 MiB/s wr, 344 op/s
Jan 31 08:23:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:39 compute-0 nova_compute[247704]: 2026-01-31 08:23:39.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:39 compute-0 nova_compute[247704]: 2026-01-31 08:23:39.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:23:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:39 compute-0 podman[349598]: 2026-01-31 08:23:39.939229286 +0000 UTC m=+0.111833424 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:23:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.4 MiB/s wr, 287 op/s
Jan 31 08:23:40 compute-0 ceph-mon[74496]: pgmap v2693: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.4 MiB/s wr, 287 op/s
Jan 31 08:23:40 compute-0 nova_compute[247704]: 2026-01-31 08:23:40.245 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:40 compute-0 nova_compute[247704]: 2026-01-31 08:23:40.359 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.294 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.295 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.340 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.463 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.463 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:41 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:41.478 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:23:41 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:41.479 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.480 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.497 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.498 247708 INFO nova.compute.claims [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:23:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:41 compute-0 nova_compute[247704]: 2026-01-31 08:23:41.707 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 278 op/s
Jan 31 08:23:42 compute-0 ceph-mon[74496]: pgmap v2694: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 278 op/s
Jan 31 08:23:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:23:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289977359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.189 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.199 247708 DEBUG nova.compute.provider_tree [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.221 247708 DEBUG nova.scheduler.client.report [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.264 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.265 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.339 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.340 247708 DEBUG nova.network.neutron [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.373 247708 INFO nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.402 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.465 247708 INFO nova.virt.block_device [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Booting with volume-backed-image 7c23949f-bba8-4466-bb79-caf568852d38 at /dev/vda
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.606 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.606 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.607 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.607 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.607 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:42 compute-0 nova_compute[247704]: 2026-01-31 08:23:42.710 247708 DEBUG nova.policy [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '038e2b3b4f174162a3ac6c4870857e60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c90ea7f1be5f484bb873548236fadc00', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:23:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:23:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3228575897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:43 compute-0 nova_compute[247704]: 2026-01-31 08:23:43.028 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:43.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:43 compute-0 nova_compute[247704]: 2026-01-31 08:23:43.136 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:23:43 compute-0 nova_compute[247704]: 2026-01-31 08:23:43.137 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:23:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1289977359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3228575897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:43 compute-0 nova_compute[247704]: 2026-01-31 08:23:43.325 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:23:43 compute-0 nova_compute[247704]: 2026-01-31 08:23:43.327 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4168MB free_disk=20.83056640625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:23:43 compute-0 nova_compute[247704]: 2026-01-31 08:23:43.327 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:43 compute-0 nova_compute[247704]: 2026-01-31 08:23:43.328 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:43.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:43 compute-0 nova_compute[247704]: 2026-01-31 08:23:43.961 247708 DEBUG nova.network.neutron [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Successfully created port: 568e8160-e7ce-4ef4-ba79-b8571d160073 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:23:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 31 08:23:44 compute-0 ceph-mon[74496]: pgmap v2695: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 31 08:23:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1119347368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:23:44.482 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:23:44 compute-0 nova_compute[247704]: 2026-01-31 08:23:44.673 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance ed055707-78a7-4777-97f3-842e56be52d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:23:44 compute-0 nova_compute[247704]: 2026-01-31 08:23:44.674 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:23:44 compute-0 nova_compute[247704]: 2026-01-31 08:23:44.674 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:23:44 compute-0 nova_compute[247704]: 2026-01-31 08:23:44.675 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:23:44 compute-0 nova_compute[247704]: 2026-01-31 08:23:44.998 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:23:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:45.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.247 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.363 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:23:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841049691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.544 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.552 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:23:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/841049691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.600 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:23:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:45.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.668 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.669 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.903 247708 DEBUG nova.network.neutron [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Successfully updated port: 568e8160-e7ce-4ef4-ba79-b8571d160073 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.944 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.945 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquired lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:23:45 compute-0 nova_compute[247704]: 2026-01-31 08:23:45.946 247708 DEBUG nova.network.neutron [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:23:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 45 KiB/s wr, 131 op/s
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.443 247708 DEBUG nova.network.neutron [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:23:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1246198736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:46 compute-0 ceph-mon[74496]: pgmap v2696: 305 pgs: 305 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 45 KiB/s wr, 131 op/s
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.670 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.671 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.728 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.729 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.729 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.907 247708 DEBUG nova.compute.manager [req-d8d54730-2ab6-44bc-b6cc-e4bb4aeebf2d req-643355ff-11b9-4304-ad6c-b4feb3a7a5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-changed-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.907 247708 DEBUG nova.compute.manager [req-d8d54730-2ab6-44bc-b6cc-e4bb4aeebf2d req-643355ff-11b9-4304-ad6c-b4feb3a7a5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Refreshing instance network info cache due to event network-changed-568e8160-e7ce-4ef4-ba79-b8571d160073. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:23:46 compute-0 nova_compute[247704]: 2026-01-31 08:23:46.908 247708 DEBUG oslo_concurrency.lockutils [req-d8d54730-2ab6-44bc-b6cc-e4bb4aeebf2d req-643355ff-11b9-4304-ad6c-b4feb3a7a5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:23:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:47.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/320626493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 413 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 968 KiB/s wr, 109 op/s
Jan 31 08:23:48 compute-0 nova_compute[247704]: 2026-01-31 08:23:48.615 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:23:48 compute-0 ceph-mon[74496]: pgmap v2697: 305 pgs: 305 active+clean; 413 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 968 KiB/s wr, 109 op/s
Jan 31 08:23:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3497861562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:49.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:49 compute-0 sudo[349696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:49 compute-0 sudo[349696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:49 compute-0 sudo[349696]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:49 compute-0 sudo[349721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:23:49 compute-0 sudo[349721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:49 compute-0 sudo[349721]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:49.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Jan 31 08:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:50 compute-0 nova_compute[247704]: 2026-01-31 08:23:50.254 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:50 compute-0 ceph-mon[74496]: pgmap v2698: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 114 op/s
Jan 31 08:23:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1434908461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:50 compute-0 nova_compute[247704]: 2026-01-31 08:23:50.352 247708 DEBUG nova.network.neutron [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Updating instance_info_cache with network_info: [{"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:23:50 compute-0 nova_compute[247704]: 2026-01-31 08:23:50.364 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:50 compute-0 nova_compute[247704]: 2026-01-31 08:23:50.406 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Releasing lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:23:50 compute-0 nova_compute[247704]: 2026-01-31 08:23:50.407 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance network_info: |[{"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:23:50 compute-0 nova_compute[247704]: 2026-01-31 08:23:50.407 247708 DEBUG oslo_concurrency.lockutils [req-d8d54730-2ab6-44bc-b6cc-e4bb4aeebf2d req-643355ff-11b9-4304-ad6c-b4feb3a7a5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:23:50 compute-0 nova_compute[247704]: 2026-01-31 08:23:50.408 247708 DEBUG nova.network.neutron [req-d8d54730-2ab6-44bc-b6cc-e4bb4aeebf2d req-643355ff-11b9-4304-ad6c-b4feb3a7a5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Refreshing network info cache for port 568e8160-e7ce-4ef4-ba79-b8571d160073 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:23:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:23:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:51.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 368 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 116 op/s
Jan 31 08:23:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1673176264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:23:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:53.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:53 compute-0 ceph-mon[74496]: pgmap v2699: 305 pgs: 305 active+clean; 368 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 116 op/s
Jan 31 08:23:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:53 compute-0 nova_compute[247704]: 2026-01-31 08:23:53.872 247708 DEBUG nova.network.neutron [req-d8d54730-2ab6-44bc-b6cc-e4bb4aeebf2d req-643355ff-11b9-4304-ad6c-b4feb3a7a5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Updated VIF entry in instance network info cache for port 568e8160-e7ce-4ef4-ba79-b8571d160073. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:23:53 compute-0 nova_compute[247704]: 2026-01-31 08:23:53.873 247708 DEBUG nova.network.neutron [req-d8d54730-2ab6-44bc-b6cc-e4bb4aeebf2d req-643355ff-11b9-4304-ad6c-b4feb3a7a5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Updating instance_info_cache with network_info: [{"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:23:53 compute-0 podman[349749]: 2026-01-31 08:23:53.907944181 +0000 UTC m=+0.078299461 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:23:53 compute-0 nova_compute[247704]: 2026-01-31 08:23:53.960 247708 DEBUG oslo_concurrency.lockutils [req-d8d54730-2ab6-44bc-b6cc-e4bb4aeebf2d req-643355ff-11b9-4304-ad6c-b4feb3a7a5a9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:23:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 31 08:23:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1351194824' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:23:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1351194824' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:23:54 compute-0 ceph-mon[74496]: pgmap v2700: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 101 op/s
Jan 31 08:23:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:55.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:55 compute-0 nova_compute[247704]: 2026-01-31 08:23:55.267 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:55 compute-0 nova_compute[247704]: 2026-01-31 08:23:55.365 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:23:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:55.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 384 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 119 op/s
Jan 31 08:23:56 compute-0 ceph-mon[74496]: pgmap v2701: 305 pgs: 305 active+clean; 384 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 119 op/s
Jan 31 08:23:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:57.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:23:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:57.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 388 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 158 op/s
Jan 31 08:23:58 compute-0 ceph-mon[74496]: pgmap v2702: 305 pgs: 305 active+clean; 388 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 158 op/s
Jan 31 08:23:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:23:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:59.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:23:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/851229378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:23:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:23:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:23:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 397 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.5 MiB/s wr, 152 op/s
Jan 31 08:24:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2749672095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:00 compute-0 ceph-mon[74496]: pgmap v2703: 305 pgs: 305 active+clean; 397 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.5 MiB/s wr, 152 op/s
Jan 31 08:24:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/681131750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.270 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.366 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.374 247708 DEBUG os_brick.utils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.377 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.388 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.388 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[44ac8abd-d21c-452b-b817-c55cf3ab9c34]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.390 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.398 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.399 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[0c48b96e-292b-47b3-9f63-7403ac6b6af0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.401 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.411 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.412 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[53c1c3d8-96d8-46c4-9b8c-49835e237217]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.414 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[d80770af-70e9-4ad0-888c-577945e844f9]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.414 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.444 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.449 247708 DEBUG os_brick.initiator.connectors.lightos [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.449 247708 DEBUG os_brick.initiator.connectors.lightos [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.450 247708 DEBUG os_brick.initiator.connectors.lightos [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.451 247708 DEBUG os_brick.utils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:24:00 compute-0 nova_compute[247704]: 2026-01-31 08:24:00.451 247708 DEBUG nova.virt.block_device [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Updating existing volume attachment record: f16d44ff-26d6-4ad8-8798-323b6a48be9d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:24:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:01.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2372279131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1572303610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2372279131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:01.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.681 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.686 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.687 247708 INFO nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Creating image(s)
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.688 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.688 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Ensure instance console log exists: /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.690 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.690 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.691 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.696 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Start _get_guest_xml network_info=[{"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'f16d44ff-26d6-4ad8-8798-323b6a48be9d', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bb6c3a76-a1e4-4bd7-8526-2c74f412b508', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bb6c3a76-a1e4-4bd7-8526-2c74f412b508', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56', 'attached_at': '', 'detached_at': '', 'volume_id': 'bb6c3a76-a1e4-4bd7-8526-2c74f412b508', 'serial': 'bb6c3a76-a1e4-4bd7-8526-2c74f412b508'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.705 247708 WARNING nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.712 247708 DEBUG nova.virt.libvirt.host [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.713 247708 DEBUG nova.virt.libvirt.host [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.718 247708 DEBUG nova.virt.libvirt.host [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.718 247708 DEBUG nova.virt.libvirt.host [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.719 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.719 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.720 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.720 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.720 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.720 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.720 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.720 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.721 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.721 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.721 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.721 247708 DEBUG nova.virt.hardware [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.751 247708 DEBUG nova.storage.rbd_utils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:01 compute-0 nova_compute[247704]: 2026-01-31 08:24:01.757 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 416 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 151 op/s
Jan 31 08:24:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2384994557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.180 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.231 247708 DEBUG nova.virt.libvirt.vif [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1237883975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1237883975',id=153,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-40a23vvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeSt
ableRescueTest-1116995694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:23:42Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=6a10c21a-772d-4a5c-8f62-3d90d4b7ca56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.232 247708 DEBUG nova.network.os_vif_util [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.235 247708 DEBUG nova.network.os_vif_util [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:71:92,bridge_name='br-int',has_traffic_filtering=True,id=568e8160-e7ce-4ef4-ba79-b8571d160073,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap568e8160-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.237 247708 DEBUG nova.objects.instance [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.258 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <uuid>6a10c21a-772d-4a5c-8f62-3d90d4b7ca56</uuid>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <name>instance-00000099</name>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1237883975</nova:name>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:24:01</nova:creationTime>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <nova:user uuid="038e2b3b4f174162a3ac6c4870857e60">tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member</nova:user>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <nova:project uuid="c90ea7f1be5f484bb873548236fadc00">tempest-ServerBootFromVolumeStableRescueTest-1116995694</nova:project>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <nova:port uuid="568e8160-e7ce-4ef4-ba79-b8571d160073">
Jan 31 08:24:02 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <system>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <entry name="serial">6a10c21a-772d-4a5c-8f62-3d90d4b7ca56</entry>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <entry name="uuid">6a10c21a-772d-4a5c-8f62-3d90d4b7ca56</entry>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </system>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <os>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   </os>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <features>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   </features>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config">
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       </source>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-bb6c3a76-a1e4-4bd7-8526-2c74f412b508">
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       </source>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:24:02 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <serial>bb6c3a76-a1e4-4bd7-8526-2c74f412b508</serial>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:c2:71:92"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <target dev="tap568e8160-e7"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/console.log" append="off"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <video>
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </video>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:24:02 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:24:02 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:24:02 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:24:02 compute-0 nova_compute[247704]: </domain>
Jan 31 08:24:02 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.259 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Preparing to wait for external event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.260 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.261 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.261 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.262 247708 DEBUG nova.virt.libvirt.vif [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1237883975',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1237883975',id=153,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-40a23vvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFr
omVolumeStableRescueTest-1116995694-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:23:42Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=6a10c21a-772d-4a5c-8f62-3d90d4b7ca56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.263 247708 DEBUG nova.network.os_vif_util [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.264 247708 DEBUG nova.network.os_vif_util [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:71:92,bridge_name='br-int',has_traffic_filtering=True,id=568e8160-e7ce-4ef4-ba79-b8571d160073,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap568e8160-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.265 247708 DEBUG os_vif [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:71:92,bridge_name='br-int',has_traffic_filtering=True,id=568e8160-e7ce-4ef4-ba79-b8571d160073,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap568e8160-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.266 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.267 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.267 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.272 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.273 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap568e8160-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.274 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap568e8160-e7, col_values=(('external_ids', {'iface-id': '568e8160-e7ce-4ef4-ba79-b8571d160073', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:71:92', 'vm-uuid': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.276 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:02 compute-0 NetworkManager[49108]: <info>  [1769847842.2777] manager: (tap568e8160-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.278 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.287 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.290 247708 INFO os_vif [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:71:92,bridge_name='br-int',has_traffic_filtering=True,id=568e8160-e7ce-4ef4-ba79-b8571d160073,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap568e8160-e7')
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.387 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.388 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.388 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No VIF found with MAC fa:16:3e:c2:71:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.389 247708 INFO nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Using config drive
Jan 31 08:24:02 compute-0 nova_compute[247704]: 2026-01-31 08:24:02.432 247708 DEBUG nova.storage.rbd_utils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:02 compute-0 ceph-mon[74496]: pgmap v2704: 305 pgs: 305 active+clean; 416 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 151 op/s
Jan 31 08:24:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2384994557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:24:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:03.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.125 247708 INFO nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Creating config drive at /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.131 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi10zptut execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.261 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpi10zptut" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.295 247708 DEBUG nova.storage.rbd_utils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.299 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.486 247708 DEBUG oslo_concurrency.processutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.488 247708 INFO nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Deleting local config drive /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config because it was imported into RBD.
Jan 31 08:24:03 compute-0 kernel: tap568e8160-e7: entered promiscuous mode
Jan 31 08:24:03 compute-0 NetworkManager[49108]: <info>  [1769847843.5483] manager: (tap568e8160-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Jan 31 08:24:03 compute-0 systemd-udevd[349890]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:24:03 compute-0 ovn_controller[149457]: 2026-01-31T08:24:03Z|00662|binding|INFO|Claiming lport 568e8160-e7ce-4ef4-ba79-b8571d160073 for this chassis.
Jan 31 08:24:03 compute-0 ovn_controller[149457]: 2026-01-31T08:24:03Z|00663|binding|INFO|568e8160-e7ce-4ef4-ba79-b8571d160073: Claiming fa:16:3e:c2:71:92 10.100.0.7
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.580 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:03 compute-0 ovn_controller[149457]: 2026-01-31T08:24:03Z|00664|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 ovn-installed in OVS
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.590 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:03 compute-0 ovn_controller[149457]: 2026-01-31T08:24:03Z|00665|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 up in Southbound
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.592 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:71:92 10.100.0.7'], port_security=['fa:16:3e:c2:71:92 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=568e8160-e7ce-4ef4-ba79-b8571d160073) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.594 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 568e8160-e7ce-4ef4-ba79-b8571d160073 in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 bound to our chassis
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.596 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:24:03 compute-0 NetworkManager[49108]: <info>  [1769847843.5986] device (tap568e8160-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:24:03 compute-0 NetworkManager[49108]: <info>  [1769847843.5993] device (tap568e8160-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:24:03 compute-0 systemd-machined[214448]: New machine qemu-70-instance-00000099.
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.614 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[27fc4172-93c5-44db-aa61-793ee2a0d4c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:03 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-00000099.
Jan 31 08:24:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:03.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.647 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[45342903-375e-4e99-95b5-6e1a72e91709]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.652 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a02b1eaf-4c3f-4e61-ae59-f0d1460c0010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.684 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce0713a-b699-4cbe-847d-fe8da3802cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.704 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdc6ed2-70ae-4838-bc35-080df65aeaae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803971, 'reachable_time': 32750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349906, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.721 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[793a66ff-543d-4bfb-ae7a-23906aaa751a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803982, 'tstamp': 803982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349908, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803984, 'tstamp': 803984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349908, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.723 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.725 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.726 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.727 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.727 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.728 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:03.728 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3841614421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.962 247708 DEBUG nova.compute.manager [req-b3a4a8f9-784b-460a-ae3a-670b3347e5f1 req-2281519a-843d-4831-9993-8e06345b6907 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.962 247708 DEBUG oslo_concurrency.lockutils [req-b3a4a8f9-784b-460a-ae3a-670b3347e5f1 req-2281519a-843d-4831-9993-8e06345b6907 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.963 247708 DEBUG oslo_concurrency.lockutils [req-b3a4a8f9-784b-460a-ae3a-670b3347e5f1 req-2281519a-843d-4831-9993-8e06345b6907 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.963 247708 DEBUG oslo_concurrency.lockutils [req-b3a4a8f9-784b-460a-ae3a-670b3347e5f1 req-2281519a-843d-4831-9993-8e06345b6907 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:03 compute-0 nova_compute[247704]: 2026-01-31 08:24:03.964 247708 DEBUG nova.compute.manager [req-b3a4a8f9-784b-460a-ae3a-670b3347e5f1 req-2281519a-843d-4831-9993-8e06345b6907 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Processing event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:24:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 431 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 5.3 MiB/s wr, 134 op/s
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.184 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847844.1838837, 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.185 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] VM Started (Lifecycle Event)
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.187 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.192 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.195 247708 INFO nova.virt.libvirt.driver [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance spawned successfully.
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.195 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.215 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.222 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.225 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.226 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.226 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.226 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.226 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.227 247708 DEBUG nova.virt.libvirt.driver [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.262 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.263 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847844.1842327, 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.263 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] VM Paused (Lifecycle Event)
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.301 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.305 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847844.1899683, 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.305 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] VM Resumed (Lifecycle Event)
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.338 247708 INFO nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Took 2.66 seconds to spawn the instance on the hypervisor.
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.339 247708 DEBUG nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.340 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.350 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.386 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.418 247708 INFO nova.compute.manager [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Took 22.99 seconds to build instance.
Jan 31 08:24:04 compute-0 nova_compute[247704]: 2026-01-31 08:24:04.450 247708 DEBUG oslo_concurrency.lockutils [None req-1ecbc13d-38f0-46b3-b8ee-be9fe8d5548d 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:04 compute-0 ceph-mon[74496]: pgmap v2705: 305 pgs: 305 active+clean; 431 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 5.3 MiB/s wr, 134 op/s
Jan 31 08:24:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:05 compute-0 nova_compute[247704]: 2026-01-31 08:24:05.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:05.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2866160867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.3 MiB/s wr, 272 op/s
Jan 31 08:24:06 compute-0 nova_compute[247704]: 2026-01-31 08:24:06.133 247708 DEBUG nova.compute.manager [req-5f75b4ce-f728-4935-b60d-0e7d7edec870 req-1980a226-fc72-46fb-9fc0-949ae57fbd3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:06 compute-0 nova_compute[247704]: 2026-01-31 08:24:06.134 247708 DEBUG oslo_concurrency.lockutils [req-5f75b4ce-f728-4935-b60d-0e7d7edec870 req-1980a226-fc72-46fb-9fc0-949ae57fbd3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:06 compute-0 nova_compute[247704]: 2026-01-31 08:24:06.134 247708 DEBUG oslo_concurrency.lockutils [req-5f75b4ce-f728-4935-b60d-0e7d7edec870 req-1980a226-fc72-46fb-9fc0-949ae57fbd3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:06 compute-0 nova_compute[247704]: 2026-01-31 08:24:06.134 247708 DEBUG oslo_concurrency.lockutils [req-5f75b4ce-f728-4935-b60d-0e7d7edec870 req-1980a226-fc72-46fb-9fc0-949ae57fbd3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:06 compute-0 nova_compute[247704]: 2026-01-31 08:24:06.134 247708 DEBUG nova.compute.manager [req-5f75b4ce-f728-4935-b60d-0e7d7edec870 req-1980a226-fc72-46fb-9fc0-949ae57fbd3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:06 compute-0 nova_compute[247704]: 2026-01-31 08:24:06.134 247708 WARNING nova.compute.manager [req-5f75b4ce-f728-4935-b60d-0e7d7edec870 req-1980a226-fc72-46fb-9fc0-949ae57fbd3f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state active and task_state None.
Jan 31 08:24:06 compute-0 ceph-mon[74496]: pgmap v2706: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.3 MiB/s wr, 272 op/s
Jan 31 08:24:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:07.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:07 compute-0 nova_compute[247704]: 2026-01-31 08:24:07.277 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:07.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 360 op/s
Jan 31 08:24:08 compute-0 ceph-mon[74496]: pgmap v2707: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 360 op/s
Jan 31 08:24:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:09.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:09 compute-0 sudo[349954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:09 compute-0 sudo[349954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:09 compute-0 sudo[349954]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:09 compute-0 sudo[349979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:09 compute-0 sudo[349979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:09.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:09 compute-0 sudo[349979]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:10 compute-0 nova_compute[247704]: 2026-01-31 08:24:10.039 247708 INFO nova.compute.manager [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Rescuing
Jan 31 08:24:10 compute-0 nova_compute[247704]: 2026-01-31 08:24:10.040 247708 DEBUG oslo_concurrency.lockutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:10 compute-0 nova_compute[247704]: 2026-01-31 08:24:10.040 247708 DEBUG oslo_concurrency.lockutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquired lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:10 compute-0 nova_compute[247704]: 2026-01-31 08:24:10.041 247708 DEBUG nova.network.neutron [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:24:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.1 MiB/s wr, 331 op/s
Jan 31 08:24:10 compute-0 ceph-mon[74496]: pgmap v2708: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.1 MiB/s wr, 331 op/s
Jan 31 08:24:10 compute-0 nova_compute[247704]: 2026-01-31 08:24:10.371 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:10 compute-0 podman[350004]: 2026-01-31 08:24:10.984950517 +0000 UTC m=+0.142488626 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:11.195 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:11.196 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:11.196 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2899694428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 304 op/s
Jan 31 08:24:12 compute-0 sudo[350033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:12 compute-0 sudo[350033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-0 nova_compute[247704]: 2026-01-31 08:24:12.280 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:12 compute-0 sudo[350033]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:12 compute-0 sudo[350058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:12 compute-0 sudo[350058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-0 sudo[350058]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:12 compute-0 sudo[350083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:12 compute-0 sudo[350083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-0 sudo[350083]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4275877689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:12 compute-0 ceph-mon[74496]: pgmap v2709: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 304 op/s
Jan 31 08:24:12 compute-0 sudo[350108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:24:12 compute-0 sudo[350108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:12 compute-0 nova_compute[247704]: 2026-01-31 08:24:12.548 247708 DEBUG nova.network.neutron [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Updating instance_info_cache with network_info: [{"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:12 compute-0 nova_compute[247704]: 2026-01-31 08:24:12.613 247708 DEBUG oslo_concurrency.lockutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Releasing lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:12 compute-0 sudo[350108]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:13 compute-0 nova_compute[247704]: 2026-01-31 08:24:13.087 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:24:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:24:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:24:13 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.6 MiB/s wr, 287 op/s
Jan 31 08:24:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:24:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:24:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:24:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:24:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:24:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 01808c81-020c-4534-bdf6-32edfa2c7a6e does not exist
Jan 31 08:24:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 683e7206-a570-4d5b-891c-03b7ba753122 does not exist
Jan 31 08:24:14 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6c89c7a9-3920-4de8-8f72-d107e9676b6c does not exist
Jan 31 08:24:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:24:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:24:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:24:14 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:24:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:24:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:24:14 compute-0 sudo[350165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:14 compute-0 sudo[350165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:14 compute-0 sudo[350165]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:14 compute-0 sudo[350190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:14 compute-0 sudo[350190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:14 compute-0 sudo[350190]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:14 compute-0 sudo[350215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:14 compute-0 sudo[350215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:14 compute-0 sudo[350215]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:14 compute-0 sudo[350240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:24:14 compute-0 sudo[350240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:14 compute-0 ceph-mon[74496]: pgmap v2710: 305 pgs: 305 active+clean; 432 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.6 MiB/s wr, 287 op/s
Jan 31 08:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:24:14 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:24:14 compute-0 podman[350304]: 2026-01-31 08:24:14.832712357 +0000 UTC m=+0.043396575 container create 477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:24:14 compute-0 systemd[1]: Started libpod-conmon-477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e.scope.
Jan 31 08:24:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:14 compute-0 podman[350304]: 2026-01-31 08:24:14.810589675 +0000 UTC m=+0.021273923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:24:14 compute-0 podman[350304]: 2026-01-31 08:24:14.919852104 +0000 UTC m=+0.130536362 container init 477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mirzakhani, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:24:14 compute-0 podman[350304]: 2026-01-31 08:24:14.927790039 +0000 UTC m=+0.138474267 container start 477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mirzakhani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:24:14 compute-0 podman[350304]: 2026-01-31 08:24:14.932039923 +0000 UTC m=+0.142724151 container attach 477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:24:14 compute-0 sweet_mirzakhani[350321]: 167 167
Jan 31 08:24:14 compute-0 systemd[1]: libpod-477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e.scope: Deactivated successfully.
Jan 31 08:24:14 compute-0 conmon[350321]: conmon 477f514d88827fadd655 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e.scope/container/memory.events
Jan 31 08:24:14 compute-0 podman[350304]: 2026-01-31 08:24:14.935676412 +0000 UTC m=+0.146360650 container died 477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 08:24:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb8c2a022c5c9d4b385dc65149c02917dd0350cb25d9096d58e37d3d8c122c3-merged.mount: Deactivated successfully.
Jan 31 08:24:14 compute-0 podman[350304]: 2026-01-31 08:24:14.985849712 +0000 UTC m=+0.196533940 container remove 477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:24:14 compute-0 systemd[1]: libpod-conmon-477f514d88827fadd655fb3a7fab63271f2575f9e941b1fe397665f7ac47b78e.scope: Deactivated successfully.
Jan 31 08:24:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:15.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:15 compute-0 podman[350346]: 2026-01-31 08:24:15.141748586 +0000 UTC m=+0.042648468 container create 238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:24:15 compute-0 systemd[1]: Started libpod-conmon-238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3.scope.
Jan 31 08:24:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcd0bede47f5ccdc6ba7f28de6d55dfcac99eb7538085e1db30913d6676ae65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcd0bede47f5ccdc6ba7f28de6d55dfcac99eb7538085e1db30913d6676ae65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:15 compute-0 podman[350346]: 2026-01-31 08:24:15.125114088 +0000 UTC m=+0.026013990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcd0bede47f5ccdc6ba7f28de6d55dfcac99eb7538085e1db30913d6676ae65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcd0bede47f5ccdc6ba7f28de6d55dfcac99eb7538085e1db30913d6676ae65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bcd0bede47f5ccdc6ba7f28de6d55dfcac99eb7538085e1db30913d6676ae65/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:15 compute-0 podman[350346]: 2026-01-31 08:24:15.233796922 +0000 UTC m=+0.134696854 container init 238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:24:15 compute-0 podman[350346]: 2026-01-31 08:24:15.246008842 +0000 UTC m=+0.146908724 container start 238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:15 compute-0 podman[350346]: 2026-01-31 08:24:15.253434973 +0000 UTC m=+0.154334885 container attach 238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:24:15 compute-0 nova_compute[247704]: 2026-01-31 08:24:15.372 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 448 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 346 op/s
Jan 31 08:24:16 compute-0 pedantic_mendeleev[350363]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:24:16 compute-0 pedantic_mendeleev[350363]: --> relative data size: 1.0
Jan 31 08:24:16 compute-0 pedantic_mendeleev[350363]: --> All data devices are unavailable
Jan 31 08:24:16 compute-0 systemd[1]: libpod-238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3.scope: Deactivated successfully.
Jan 31 08:24:16 compute-0 conmon[350363]: conmon 238425cffb61402e384f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3.scope/container/memory.events
Jan 31 08:24:16 compute-0 ceph-mon[74496]: pgmap v2711: 305 pgs: 305 active+clean; 448 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 3.2 MiB/s wr, 346 op/s
Jan 31 08:24:16 compute-0 podman[350379]: 2026-01-31 08:24:16.175524154 +0000 UTC m=+0.039319915 container died 238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:24:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bcd0bede47f5ccdc6ba7f28de6d55dfcac99eb7538085e1db30913d6676ae65-merged.mount: Deactivated successfully.
Jan 31 08:24:16 compute-0 podman[350379]: 2026-01-31 08:24:16.231729053 +0000 UTC m=+0.095524724 container remove 238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 08:24:16 compute-0 systemd[1]: libpod-conmon-238425cffb61402e384fcd8044197903298695dd48d8e75f65d3e9e02716a0c3.scope: Deactivated successfully.
Jan 31 08:24:16 compute-0 sudo[350240]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:16 compute-0 sudo[350392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:16 compute-0 sudo[350392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:16 compute-0 sudo[350392]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:16 compute-0 sudo[350417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:16 compute-0 sudo[350417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:16 compute-0 sudo[350417]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:16 compute-0 sudo[350442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:16 compute-0 sudo[350442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:16 compute-0 sudo[350442]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:16 compute-0 sudo[350467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:24:16 compute-0 sudo[350467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:16 compute-0 podman[350534]: 2026-01-31 08:24:16.905941645 +0000 UTC m=+0.057008359 container create 7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:24:16 compute-0 systemd[1]: Started libpod-conmon-7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416.scope.
Jan 31 08:24:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:16 compute-0 podman[350534]: 2026-01-31 08:24:16.887731859 +0000 UTC m=+0.038798603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:24:16 compute-0 podman[350534]: 2026-01-31 08:24:16.992164719 +0000 UTC m=+0.143231453 container init 7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:17 compute-0 podman[350534]: 2026-01-31 08:24:16.999907759 +0000 UTC m=+0.150974463 container start 7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:17 compute-0 podman[350534]: 2026-01-31 08:24:17.004973763 +0000 UTC m=+0.156040477 container attach 7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:17 compute-0 angry_visvesvaraya[350551]: 167 167
Jan 31 08:24:17 compute-0 systemd[1]: libpod-7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416.scope: Deactivated successfully.
Jan 31 08:24:17 compute-0 conmon[350551]: conmon 7278f14c363b45b29276 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416.scope/container/memory.events
Jan 31 08:24:17 compute-0 podman[350534]: 2026-01-31 08:24:17.008965761 +0000 UTC m=+0.160032505 container died 7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:24:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a52b6019e19602caa06ea598c9fd63f72c210d467c328606b03db165da3040d6-merged.mount: Deactivated successfully.
Jan 31 08:24:17 compute-0 podman[350534]: 2026-01-31 08:24:17.055224465 +0000 UTC m=+0.206291169 container remove 7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:24:17 compute-0 systemd[1]: libpod-conmon-7278f14c363b45b29276e1d5e8d93da4c09772f873a247f13f1ad7a91bdb3416.scope: Deactivated successfully.
Jan 31 08:24:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:17.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:17 compute-0 podman[350574]: 2026-01-31 08:24:17.218429698 +0000 UTC m=+0.052634343 container create f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:24:17 compute-0 systemd[1]: Started libpod-conmon-f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941.scope.
Jan 31 08:24:17 compute-0 nova_compute[247704]: 2026-01-31 08:24:17.284 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:17 compute-0 podman[350574]: 2026-01-31 08:24:17.194570403 +0000 UTC m=+0.028775078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:24:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382723e9091b36eacd9615a0ff9f13a3d0051788bc81bb6281e0304d66e74525/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382723e9091b36eacd9615a0ff9f13a3d0051788bc81bb6281e0304d66e74525/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382723e9091b36eacd9615a0ff9f13a3d0051788bc81bb6281e0304d66e74525/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382723e9091b36eacd9615a0ff9f13a3d0051788bc81bb6281e0304d66e74525/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:17 compute-0 podman[350574]: 2026-01-31 08:24:17.342721066 +0000 UTC m=+0.176925741 container init f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 08:24:17 compute-0 podman[350574]: 2026-01-31 08:24:17.348600229 +0000 UTC m=+0.182804874 container start f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_antonelli, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:24:17 compute-0 podman[350574]: 2026-01-31 08:24:17.353248413 +0000 UTC m=+0.187453058 container attach f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:17 compute-0 ovn_controller[149457]: 2026-01-31T08:24:17Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c2:71:92 10.100.0.7
Jan 31 08:24:17 compute-0 ovn_controller[149457]: 2026-01-31T08:24:17Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c2:71:92 10.100.0.7
Jan 31 08:24:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.0 MiB/s wr, 265 op/s
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]: {
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:     "0": [
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:         {
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "devices": [
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "/dev/loop3"
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             ],
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "lv_name": "ceph_lv0",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "lv_size": "7511998464",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "name": "ceph_lv0",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "tags": {
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.cluster_name": "ceph",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.crush_device_class": "",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.encrypted": "0",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.osd_id": "0",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.type": "block",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:                 "ceph.vdo": "0"
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             },
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "type": "block",
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:             "vg_name": "ceph_vg0"
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:         }
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]:     ]
Jan 31 08:24:18 compute-0 cranky_antonelli[350590]: }
Jan 31 08:24:18 compute-0 ceph-mon[74496]: pgmap v2712: 305 pgs: 305 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.0 MiB/s wr, 265 op/s
Jan 31 08:24:18 compute-0 systemd[1]: libpod-f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941.scope: Deactivated successfully.
Jan 31 08:24:18 compute-0 podman[350600]: 2026-01-31 08:24:18.222570909 +0000 UTC m=+0.029284609 container died f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_antonelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:24:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-382723e9091b36eacd9615a0ff9f13a3d0051788bc81bb6281e0304d66e74525-merged.mount: Deactivated successfully.
Jan 31 08:24:18 compute-0 podman[350600]: 2026-01-31 08:24:18.271872037 +0000 UTC m=+0.078585737 container remove f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_antonelli, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:18 compute-0 systemd[1]: libpod-conmon-f1c9fbb0304975c2065bb00c707d54868353c619fbdbf29093dcde96b231e941.scope: Deactivated successfully.
Jan 31 08:24:18 compute-0 sudo[350467]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:18 compute-0 sudo[350613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:18 compute-0 sudo[350613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:18 compute-0 sudo[350613]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:18 compute-0 sudo[350638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:18 compute-0 sudo[350638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:18 compute-0 sudo[350638]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:18 compute-0 sudo[350663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:18 compute-0 sudo[350663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:18 compute-0 sudo[350663]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:18 compute-0 sudo[350688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:24:18 compute-0 sudo[350688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:18 compute-0 podman[350753]: 2026-01-31 08:24:18.872339762 +0000 UTC m=+0.043620921 container create 0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:24:18 compute-0 systemd[1]: Started libpod-conmon-0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729.scope.
Jan 31 08:24:18 compute-0 podman[350753]: 2026-01-31 08:24:18.850716042 +0000 UTC m=+0.021997221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:24:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:18 compute-0 podman[350753]: 2026-01-31 08:24:18.969208907 +0000 UTC m=+0.140490136 container init 0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:24:18 compute-0 podman[350753]: 2026-01-31 08:24:18.977595363 +0000 UTC m=+0.148876502 container start 0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:18 compute-0 practical_moser[350769]: 167 167
Jan 31 08:24:18 compute-0 podman[350753]: 2026-01-31 08:24:18.982306119 +0000 UTC m=+0.153587358 container attach 0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:24:18 compute-0 systemd[1]: libpod-0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729.scope: Deactivated successfully.
Jan 31 08:24:18 compute-0 podman[350753]: 2026-01-31 08:24:18.98318137 +0000 UTC m=+0.154462529 container died 0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e32d125873ce891b24dd7eddd950bdc4895d9eb343c01f4fbd6fa1b92fa79ac5-merged.mount: Deactivated successfully.
Jan 31 08:24:19 compute-0 podman[350753]: 2026-01-31 08:24:19.028044949 +0000 UTC m=+0.199326128 container remove 0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:24:19 compute-0 systemd[1]: libpod-conmon-0d8e504b7b53f76832b1255195de664e0f628cca9d518faa708781ac9f764729.scope: Deactivated successfully.
Jan 31 08:24:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:19 compute-0 podman[350793]: 2026-01-31 08:24:19.163792058 +0000 UTC m=+0.043339354 container create f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carver, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:24:19 compute-0 systemd[1]: Started libpod-conmon-f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f.scope.
Jan 31 08:24:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a1aee08b52d7767274d30db17d43a336d581aaae29335ad3d88feb45087c8d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a1aee08b52d7767274d30db17d43a336d581aaae29335ad3d88feb45087c8d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a1aee08b52d7767274d30db17d43a336d581aaae29335ad3d88feb45087c8d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a1aee08b52d7767274d30db17d43a336d581aaae29335ad3d88feb45087c8d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:19 compute-0 podman[350793]: 2026-01-31 08:24:19.146117115 +0000 UTC m=+0.025664431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:24:19 compute-0 podman[350793]: 2026-01-31 08:24:19.246971409 +0000 UTC m=+0.126518785 container init f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:24:19 compute-0 podman[350793]: 2026-01-31 08:24:19.252615227 +0000 UTC m=+0.132162523 container start f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carver, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:24:19 compute-0 podman[350793]: 2026-01-31 08:24:19.256222945 +0000 UTC m=+0.135770301 container attach f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:19.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:20 compute-0 fervent_carver[350809]: {
Jan 31 08:24:20 compute-0 fervent_carver[350809]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:24:20 compute-0 fervent_carver[350809]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:24:20 compute-0 fervent_carver[350809]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:24:20 compute-0 fervent_carver[350809]:         "osd_id": 0,
Jan 31 08:24:20 compute-0 fervent_carver[350809]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:24:20 compute-0 fervent_carver[350809]:         "type": "bluestore"
Jan 31 08:24:20 compute-0 fervent_carver[350809]:     }
Jan 31 08:24:20 compute-0 fervent_carver[350809]: }
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 489 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 189 op/s
Jan 31 08:24:20 compute-0 systemd[1]: libpod-f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f.scope: Deactivated successfully.
Jan 31 08:24:20 compute-0 conmon[350809]: conmon f67975fefa7a647a1499 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f.scope/container/memory.events
Jan 31 08:24:20 compute-0 podman[350793]: 2026-01-31 08:24:20.090055032 +0000 UTC m=+0.969602318 container died f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a1aee08b52d7767274d30db17d43a336d581aaae29335ad3d88feb45087c8d9-merged.mount: Deactivated successfully.
Jan 31 08:24:20 compute-0 ceph-mon[74496]: pgmap v2713: 305 pgs: 305 active+clean; 489 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 189 op/s
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:24:20
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.mgr', 'vms']
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:24:20 compute-0 podman[350793]: 2026-01-31 08:24:20.178805917 +0000 UTC m=+1.058353213 container remove f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:24:20 compute-0 systemd[1]: libpod-conmon-f67975fefa7a647a149953f6107fb53989ead7eef78786762103afc54c78fa3f.scope: Deactivated successfully.
Jan 31 08:24:20 compute-0 sudo[350688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:24:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:24:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ec38d00b-79e3-4fe1-83d1-316b5241f979 does not exist
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7ef42540-c9a3-43fd-b4af-666448360632 does not exist
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1f0893da-518a-492e-bc8c-3491e220b683 does not exist
Jan 31 08:24:20 compute-0 sudo[350845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:20 compute-0 sudo[350845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:20 compute-0 sudo[350845]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:20 compute-0 nova_compute[247704]: 2026-01-31 08:24:20.373 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:20 compute-0 sudo[350870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:24:20 compute-0 sudo[350870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:20 compute-0 sudo[350870]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:24:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:24:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:24:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Jan 31 08:24:22 compute-0 ceph-mon[74496]: pgmap v2714: 305 pgs: 305 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Jan 31 08:24:22 compute-0 nova_compute[247704]: 2026-01-31 08:24:22.327 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:23 compute-0 nova_compute[247704]: 2026-01-31 08:24:23.145 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 08:24:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Jan 31 08:24:24 compute-0 ceph-mon[74496]: pgmap v2715: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Jan 31 08:24:24 compute-0 podman[350898]: 2026-01-31 08:24:24.922067636 +0000 UTC m=+0.079018469 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:24:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:25.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:25 compute-0 nova_compute[247704]: 2026-01-31 08:24:25.377 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:25.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:25 compute-0 kernel: tap568e8160-e7 (unregistering): left promiscuous mode
Jan 31 08:24:25 compute-0 NetworkManager[49108]: <info>  [1769847865.8667] device (tap568e8160-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:24:25 compute-0 ovn_controller[149457]: 2026-01-31T08:24:25Z|00666|binding|INFO|Releasing lport 568e8160-e7ce-4ef4-ba79-b8571d160073 from this chassis (sb_readonly=0)
Jan 31 08:24:25 compute-0 ovn_controller[149457]: 2026-01-31T08:24:25Z|00667|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 down in Southbound
Jan 31 08:24:25 compute-0 nova_compute[247704]: 2026-01-31 08:24:25.910 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:25 compute-0 ovn_controller[149457]: 2026-01-31T08:24:25Z|00668|binding|INFO|Removing iface tap568e8160-e7 ovn-installed in OVS
Jan 31 08:24:25 compute-0 nova_compute[247704]: 2026-01-31 08:24:25.913 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:25 compute-0 nova_compute[247704]: 2026-01-31 08:24:25.918 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:25.935 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:71:92 10.100.0.7'], port_security=['fa:16:3e:c2:71:92 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=568e8160-e7ce-4ef4-ba79-b8571d160073) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:25.936 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 568e8160-e7ce-4ef4-ba79-b8571d160073 in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis
Jan 31 08:24:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:25.938 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:24:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:25.951 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8dfee1-7505-4dca-8c9f-166de009046e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:25 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 31 08:24:25 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000099.scope: Consumed 14.350s CPU time.
Jan 31 08:24:25 compute-0 systemd-machined[214448]: Machine qemu-70-instance-00000099 terminated.
Jan 31 08:24:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:25.983 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c32cb9c7-5369-43dc-8419-35efff9096b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:25.988 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[98886e24-d7ff-4e05-a3e2-23738c86454a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:26.016 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c5baeda4-e3f2-4a94-88e1-907117b4f991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:26.039 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[96f36648-bdd8-4aa2-99d8-7426552ae8a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803971, 'reachable_time': 32750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350931, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:26.059 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f09706ca-4e9e-454a-a8e1-423d46cfa6f7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803982, 'tstamp': 803982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350932, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803984, 'tstamp': 803984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350932, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:26.062 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.064 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.069 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:26.069 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:26.069 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:26.070 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:26.070 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.131 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.136 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.161 247708 INFO nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance shutdown successfully after 13 seconds.
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.169 247708 INFO nova.virt.libvirt.driver [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance destroyed successfully.
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.170 247708 DEBUG nova.objects.instance [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.192 247708 INFO nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Attempting a stable device rescue
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.258 247708 DEBUG nova.compute.manager [req-d8c78fd1-9ee0-4981-b333-3d8e1f8e6b9c req-1ba9c75f-c08a-47ec-ba8f-c51ca8fe688f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.258 247708 DEBUG oslo_concurrency.lockutils [req-d8c78fd1-9ee0-4981-b333-3d8e1f8e6b9c req-1ba9c75f-c08a-47ec-ba8f-c51ca8fe688f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.259 247708 DEBUG oslo_concurrency.lockutils [req-d8c78fd1-9ee0-4981-b333-3d8e1f8e6b9c req-1ba9c75f-c08a-47ec-ba8f-c51ca8fe688f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.259 247708 DEBUG oslo_concurrency.lockutils [req-d8c78fd1-9ee0-4981-b333-3d8e1f8e6b9c req-1ba9c75f-c08a-47ec-ba8f-c51ca8fe688f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.259 247708 DEBUG nova.compute.manager [req-d8c78fd1-9ee0-4981-b333-3d8e1f8e6b9c req-1ba9c75f-c08a-47ec-ba8f-c51ca8fe688f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.259 247708 WARNING nova.compute.manager [req-d8c78fd1-9ee0-4981-b333-3d8e1f8e6b9c req-1ba9c75f-c08a-47ec-ba8f-c51ca8fe688f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state active and task_state rescuing.
Jan 31 08:24:26 compute-0 ceph-mon[74496]: pgmap v2716: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.762 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.767 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.767 247708 INFO nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Creating image(s)
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.801 247708 DEBUG nova.storage.rbd_utils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.806 247708 DEBUG nova.objects.instance [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.860 247708 DEBUG nova.storage.rbd_utils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.894 247708 DEBUG nova.storage.rbd_utils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.900 247708 DEBUG oslo_concurrency.lockutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "7d2a16d0de152d87ff48387dac566aee55340b2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:26 compute-0 nova_compute[247704]: 2026-01-31 08:24:26.902 247708 DEBUG oslo_concurrency.lockutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "7d2a16d0de152d87ff48387dac566aee55340b2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:27.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.283 247708 DEBUG nova.virt.libvirt.imagebackend [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image locations are: [{'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/9a73b720-4ab7-4126-8c93-bff12492db80/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/9a73b720-4ab7-4126-8c93-bff12492db80/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.366 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.373 247708 DEBUG nova.virt.libvirt.imagebackend [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Selected location: {'url': 'rbd://f70fcd2a-dcb4-5f89-a4ba-79a09959083b/images/9a73b720-4ab7-4126-8c93-bff12492db80/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.373 247708 DEBUG nova.storage.rbd_utils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] cloning images/9a73b720-4ab7-4126-8c93-bff12492db80@snap to None/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.529 247708 DEBUG oslo_concurrency.lockutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "7d2a16d0de152d87ff48387dac566aee55340b2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.578 247708 DEBUG nova.objects.instance [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.606 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.609 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Start _get_guest_xml network_info=[{"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "vif_mac": "fa:16:3e:c2:71:92"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '9a73b720-4ab7-4126-8c93-bff12492db80', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'f16d44ff-26d6-4ad8-8798-323b6a48be9d', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bb6c3a76-a1e4-4bd7-8526-2c74f412b508', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bb6c3a76-a1e4-4bd7-8526-2c74f412b508', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56', 'attached_at': '', 'detached_at': '', 'volume_id': 'bb6c3a76-a1e4-4bd7-8526-2c74f412b508', 'serial': 'bb6c3a76-a1e4-4bd7-8526-2c74f412b508'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.609 247708 DEBUG nova.objects.instance [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'resources' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.633 247708 WARNING nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.642 247708 DEBUG nova.virt.libvirt.host [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.643 247708 DEBUG nova.virt.libvirt.host [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.649 247708 DEBUG nova.virt.libvirt.host [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.650 247708 DEBUG nova.virt.libvirt.host [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.651 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.652 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.652 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.653 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.653 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.654 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.654 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.654 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.655 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.655 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.656 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.656 247708 DEBUG nova.virt.hardware [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.656 247708 DEBUG nova.objects.instance [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:27.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:27 compute-0 nova_compute[247704]: 2026-01-31 08:24:27.721 247708 DEBUG oslo_concurrency.processutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 513 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Jan 31 08:24:28 compute-0 ceph-mon[74496]: pgmap v2717: 305 pgs: 305 active+clean; 513 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Jan 31 08:24:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456242543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.206 247708 DEBUG oslo_concurrency.processutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.232 247708 DEBUG oslo_concurrency.processutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.481 247708 DEBUG nova.compute.manager [req-0f61c457-6f62-429d-b51c-0e7be19f15be req-8f406fb2-60a4-42cf-9354-1235bd332729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.482 247708 DEBUG oslo_concurrency.lockutils [req-0f61c457-6f62-429d-b51c-0e7be19f15be req-8f406fb2-60a4-42cf-9354-1235bd332729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.483 247708 DEBUG oslo_concurrency.lockutils [req-0f61c457-6f62-429d-b51c-0e7be19f15be req-8f406fb2-60a4-42cf-9354-1235bd332729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.484 247708 DEBUG oslo_concurrency.lockutils [req-0f61c457-6f62-429d-b51c-0e7be19f15be req-8f406fb2-60a4-42cf-9354-1235bd332729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.485 247708 DEBUG nova.compute.manager [req-0f61c457-6f62-429d-b51c-0e7be19f15be req-8f406fb2-60a4-42cf-9354-1235bd332729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.485 247708 WARNING nova.compute.manager [req-0f61c457-6f62-429d-b51c-0e7be19f15be req-8f406fb2-60a4-42cf-9354-1235bd332729 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state active and task_state rescuing.
Jan 31 08:24:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2646330775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.668 247708 DEBUG oslo_concurrency.processutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.670 247708 DEBUG nova.virt.libvirt.vif [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1237883975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1237883975',id=153,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-40a23vvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:24:04Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=6a10c21a-772d-4a5c-8f62-3d90d4b7ca56,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "vif_mac": "fa:16:3e:c2:71:92"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.670 247708 DEBUG nova.network.os_vif_util [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "vif_mac": "fa:16:3e:c2:71:92"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.671 247708 DEBUG nova.network.os_vif_util [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:71:92,bridge_name='br-int',has_traffic_filtering=True,id=568e8160-e7ce-4ef4-ba79-b8571d160073,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap568e8160-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.673 247708 DEBUG nova.objects.instance [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.699 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <uuid>6a10c21a-772d-4a5c-8f62-3d90d4b7ca56</uuid>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <name>instance-00000099</name>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1237883975</nova:name>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:24:27</nova:creationTime>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <nova:user uuid="038e2b3b4f174162a3ac6c4870857e60">tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member</nova:user>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <nova:project uuid="c90ea7f1be5f484bb873548236fadc00">tempest-ServerBootFromVolumeStableRescueTest-1116995694</nova:project>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <nova:port uuid="568e8160-e7ce-4ef4-ba79-b8571d160073">
Jan 31 08:24:28 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <system>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <entry name="serial">6a10c21a-772d-4a5c-8f62-3d90d4b7ca56</entry>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <entry name="uuid">6a10c21a-772d-4a5c-8f62-3d90d4b7ca56</entry>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </system>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <os>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   </os>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <features>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   </features>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config">
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </source>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-bb6c3a76-a1e4-4bd7-8526-2c74f412b508">
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </source>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <serial>bb6c3a76-a1e4-4bd7-8526-2c74f412b508</serial>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.rescue">
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </source>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:24:28 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <target dev="vdb" bus="virtio"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <boot order="1"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:c2:71:92"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <target dev="tap568e8160-e7"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/console.log" append="off"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <video>
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </video>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:24:28 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:24:28 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:24:28 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:24:28 compute-0 nova_compute[247704]: </domain>
Jan 31 08:24:28 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.711 247708 INFO nova.virt.libvirt.driver [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance destroyed successfully.
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.863 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.864 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.864 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.864 247708 DEBUG nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] No VIF found with MAC fa:16:3e:c2:71:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.865 247708 INFO nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Using config drive
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.894 247708 DEBUG nova.storage.rbd_utils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.920 247708 DEBUG nova.objects.instance [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:28 compute-0 nova_compute[247704]: 2026-01-31 08:24:28.953 247708 DEBUG nova.objects.instance [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'keypairs' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:29.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3456242543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2646330775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:29 compute-0 nova_compute[247704]: 2026-01-31 08:24:29.504 247708 INFO nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Creating config drive at /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config.rescue
Jan 31 08:24:29 compute-0 nova_compute[247704]: 2026-01-31 08:24:29.508 247708 DEBUG oslo_concurrency.processutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpl_hdahti execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:29 compute-0 nova_compute[247704]: 2026-01-31 08:24:29.651 247708 DEBUG oslo_concurrency.processutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpl_hdahti" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:24:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:29.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:24:29 compute-0 nova_compute[247704]: 2026-01-31 08:24:29.682 247708 DEBUG nova.storage.rbd_utils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] rbd image 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:29 compute-0 nova_compute[247704]: 2026-01-31 08:24:29.688 247708 DEBUG oslo_concurrency.processutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config.rescue 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:29 compute-0 sudo[351178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:29 compute-0 sudo[351178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:29 compute-0 sudo[351178]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:29 compute-0 sudo[351216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:29 compute-0 sudo[351216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:29 compute-0 sudo[351216]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 3.2 MiB/s wr, 124 op/s
Jan 31 08:24:30 compute-0 nova_compute[247704]: 2026-01-31 08:24:30.380 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:30 compute-0 ceph-mon[74496]: pgmap v2718: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 665 KiB/s rd, 3.2 MiB/s wr, 124 op/s
Jan 31 08:24:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:31.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.213 247708 DEBUG oslo_concurrency.processutils [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config.rescue 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.215 247708 INFO nova.virt.libvirt.driver [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Deleting local config drive /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56/disk.config.rescue because it was imported into RBD.
Jan 31 08:24:31 compute-0 kernel: tap568e8160-e7: entered promiscuous mode
Jan 31 08:24:31 compute-0 NetworkManager[49108]: <info>  [1769847871.2586] manager: (tap568e8160-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/294)
Jan 31 08:24:31 compute-0 ovn_controller[149457]: 2026-01-31T08:24:31Z|00669|binding|INFO|Claiming lport 568e8160-e7ce-4ef4-ba79-b8571d160073 for this chassis.
Jan 31 08:24:31 compute-0 ovn_controller[149457]: 2026-01-31T08:24:31Z|00670|binding|INFO|568e8160-e7ce-4ef4-ba79-b8571d160073: Claiming fa:16:3e:c2:71:92 10.100.0.7
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.300 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:31 compute-0 ovn_controller[149457]: 2026-01-31T08:24:31Z|00671|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 ovn-installed in OVS
Jan 31 08:24:31 compute-0 ovn_controller[149457]: 2026-01-31T08:24:31Z|00672|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 up in Southbound
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.308 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.310 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:71:92 10.100.0.7'], port_security=['fa:16:3e:c2:71:92 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '5', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=568e8160-e7ce-4ef4-ba79-b8571d160073) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.311 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 568e8160-e7ce-4ef4-ba79-b8571d160073 in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 bound to our chassis
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.313 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:24:31 compute-0 systemd-udevd[351271]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.328 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f72f44-a619-4cf2-88ca-ea4b062f4a91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:31 compute-0 systemd-machined[214448]: New machine qemu-71-instance-00000099.
Jan 31 08:24:31 compute-0 NetworkManager[49108]: <info>  [1769847871.3392] device (tap568e8160-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:24:31 compute-0 NetworkManager[49108]: <info>  [1769847871.3403] device (tap568e8160-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:24:31 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000099.
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.362 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf59d21-f48c-417c-a97d-64bd0bca63ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.365 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[abf5fa47-0718-44ba-b3da-d06d8054c54f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.399 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[406825f2-81f3-440a-9d5a-1082342fefb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.417 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5f524d45-72b2-4e1f-965f-b51285448604]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 10, 'rx_bytes': 658, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 10, 'rx_bytes': 658, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803971, 'reachable_time': 32750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351285, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.433 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0686a3cc-d7a1-4028-b2a9-f2dd6a35d5e3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803982, 'tstamp': 803982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351286, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803984, 'tstamp': 803984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351286, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.434 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.438 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.438 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.438 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.439 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:31.439 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.620 247708 DEBUG nova.compute.manager [req-b299deee-5dc0-4bbf-8274-e7aa3020f779 req-50de4aba-9b8a-4b06-a23f-970ba9fc5e8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.621 247708 DEBUG oslo_concurrency.lockutils [req-b299deee-5dc0-4bbf-8274-e7aa3020f779 req-50de4aba-9b8a-4b06-a23f-970ba9fc5e8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.621 247708 DEBUG oslo_concurrency.lockutils [req-b299deee-5dc0-4bbf-8274-e7aa3020f779 req-50de4aba-9b8a-4b06-a23f-970ba9fc5e8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.621 247708 DEBUG oslo_concurrency.lockutils [req-b299deee-5dc0-4bbf-8274-e7aa3020f779 req-50de4aba-9b8a-4b06-a23f-970ba9fc5e8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.621 247708 DEBUG nova.compute.manager [req-b299deee-5dc0-4bbf-8274-e7aa3020f779 req-50de4aba-9b8a-4b06-a23f-970ba9fc5e8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:31 compute-0 nova_compute[247704]: 2026-01-31 08:24:31.621 247708 WARNING nova.compute.manager [req-b299deee-5dc0-4bbf-8274-e7aa3020f779 req-50de4aba-9b8a-4b06-a23f-970ba9fc5e8d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state active and task_state rescuing.
Jan 31 08:24:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:31.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2082796801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 2.4 MiB/s wr, 115 op/s
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.116 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.117 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847872.1160676, 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.117 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] VM Resumed (Lifecycle Event)
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.124 247708 DEBUG nova.compute.manager [None req-b5cb5980-1782-44d7-826a-ee0ebe3696cd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.140 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.145 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.171 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.172 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847872.1208954, 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.172 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] VM Started (Lifecycle Event)
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.208 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.213 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:32 compute-0 nova_compute[247704]: 2026-01-31 08:24:32.507 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:32 compute-0 ceph-mon[74496]: pgmap v2719: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 2.4 MiB/s wr, 115 op/s
Jan 31 08:24:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:33.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.229 247708 INFO nova.compute.manager [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Unrescuing
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.229 247708 DEBUG oslo_concurrency.lockutils [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.230 247708 DEBUG oslo_concurrency.lockutils [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquired lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.230 247708 DEBUG nova.network.neutron [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:24:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:33.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.786 247708 DEBUG nova.compute.manager [req-6ed0b4dc-2e35-4b0c-a811-0cc17a87bb8b req-e87cdae0-26e0-4813-abf7-845dc07e3fb6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.787 247708 DEBUG oslo_concurrency.lockutils [req-6ed0b4dc-2e35-4b0c-a811-0cc17a87bb8b req-e87cdae0-26e0-4813-abf7-845dc07e3fb6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.787 247708 DEBUG oslo_concurrency.lockutils [req-6ed0b4dc-2e35-4b0c-a811-0cc17a87bb8b req-e87cdae0-26e0-4813-abf7-845dc07e3fb6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.788 247708 DEBUG oslo_concurrency.lockutils [req-6ed0b4dc-2e35-4b0c-a811-0cc17a87bb8b req-e87cdae0-26e0-4813-abf7-845dc07e3fb6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.788 247708 DEBUG nova.compute.manager [req-6ed0b4dc-2e35-4b0c-a811-0cc17a87bb8b req-e87cdae0-26e0-4813-abf7-845dc07e3fb6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:33 compute-0 nova_compute[247704]: 2026-01-31 08:24:33.789 247708 WARNING nova.compute.manager [req-6ed0b4dc-2e35-4b0c-a811-0cc17a87bb8b req-e87cdae0-26e0-4813-abf7-845dc07e3fb6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:24:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 31 08:24:34 compute-0 ceph-mon[74496]: pgmap v2720: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 31 08:24:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:24:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 59K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1710 writes, 7845 keys, 1710 commit groups, 1.0 writes per commit group, ingest: 10.92 MB, 0.02 MB/s
                                           Interval WAL: 1710 writes, 1710 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     53.9      1.44              0.20        40    0.036       0      0       0.0       0.0
                                             L6      1/0    8.77 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   4.9     64.6     54.7      7.00              0.96        39    0.179    262K    21K       0.0       0.0
                                            Sum      1/0    8.77 MB   0.0      0.4     0.1      0.4       0.5      0.1       0.0   5.9     53.6     54.6      8.44              1.17        79    0.107    262K    21K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.0     53.0     51.1      1.72              0.25        14    0.123     62K   3607       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0     64.6     54.7      7.00              0.96        39    0.179    262K    21K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.1      1.44              0.20        39    0.037       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.076, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.45 GB write, 0.10 MB/s write, 0.44 GB read, 0.09 MB/s read, 8.4 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 49.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000443 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2909,47.59 MB,15.6537%) FilterBlock(80,737.17 KB,0.236807%) IndexBlock(80,1.22 MB,0.400403%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:24:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:24:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:35.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.383 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.393 247708 DEBUG nova.network.neutron [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Updating instance_info_cache with network_info: [{"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.429 247708 DEBUG oslo_concurrency.lockutils [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Releasing lock "refresh_cache-6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.430 247708 DEBUG nova.objects.instance [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'flavor' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:35 compute-0 kernel: tap568e8160-e7 (unregistering): left promiscuous mode
Jan 31 08:24:35 compute-0 NetworkManager[49108]: <info>  [1769847875.5234] device (tap568e8160-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.531 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 ovn_controller[149457]: 2026-01-31T08:24:35Z|00673|binding|INFO|Releasing lport 568e8160-e7ce-4ef4-ba79-b8571d160073 from this chassis (sb_readonly=0)
Jan 31 08:24:35 compute-0 ovn_controller[149457]: 2026-01-31T08:24:35Z|00674|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 down in Southbound
Jan 31 08:24:35 compute-0 ovn_controller[149457]: 2026-01-31T08:24:35Z|00675|binding|INFO|Removing iface tap568e8160-e7 ovn-installed in OVS
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.534 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.543 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.551 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:71:92 10.100.0.7'], port_security=['fa:16:3e:c2:71:92 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=568e8160-e7ce-4ef4-ba79-b8571d160073) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.553 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 568e8160-e7ce-4ef4-ba79-b8571d160073 in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.556 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:24:35 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 31 08:24:35 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000099.scope: Consumed 4.160s CPU time.
Jan 31 08:24:35 compute-0 systemd-machined[214448]: Machine qemu-71-instance-00000099 terminated.
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.570 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[46a72296-f0ae-46cf-8c25-ec32f3c839ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.594 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b947cc1e-b4c1-45f5-986e-6763bd0d31fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.598 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9eedfbc0-81a7-44ef-8c8b-694bab97522f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.621 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[97dbaa2e-0983-4077-90ee-48c16bde814e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008684917643774428 of space, bias 1.0, pg target 2.6054752931323284 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6443723972873627 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1565338992415783 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:24:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.636 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d94a4d56-0ee4-410a-a0c3-78f349384b0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 12, 'rx_bytes': 658, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 12, 'rx_bytes': 658, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803971, 'reachable_time': 32750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351362, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.649 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[10b6a375-9eb4-4054-b97e-7b67cf9be004]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803982, 'tstamp': 803982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351363, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803984, 'tstamp': 803984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351363, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.651 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.652 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.656 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.657 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.657 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.658 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.658 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:35.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.689 247708 INFO nova.virt.libvirt.driver [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance destroyed successfully.
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.690 247708 DEBUG nova.objects.instance [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:35 compute-0 kernel: tap568e8160-e7: entered promiscuous mode
Jan 31 08:24:35 compute-0 systemd-udevd[351352]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:24:35 compute-0 NetworkManager[49108]: <info>  [1769847875.7762] manager: (tap568e8160-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Jan 31 08:24:35 compute-0 ovn_controller[149457]: 2026-01-31T08:24:35Z|00676|binding|INFO|Claiming lport 568e8160-e7ce-4ef4-ba79-b8571d160073 for this chassis.
Jan 31 08:24:35 compute-0 ovn_controller[149457]: 2026-01-31T08:24:35Z|00677|binding|INFO|568e8160-e7ce-4ef4-ba79-b8571d160073: Claiming fa:16:3e:c2:71:92 10.100.0.7
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.778 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 NetworkManager[49108]: <info>  [1769847875.7849] device (tap568e8160-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:24:35 compute-0 NetworkManager[49108]: <info>  [1769847875.7857] device (tap568e8160-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.787 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:71:92 10.100.0.7'], port_security=['fa:16:3e:c2:71:92 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '6', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=568e8160-e7ce-4ef4-ba79-b8571d160073) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.788 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 568e8160-e7ce-4ef4-ba79-b8571d160073 in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 bound to our chassis
Jan 31 08:24:35 compute-0 ovn_controller[149457]: 2026-01-31T08:24:35Z|00678|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 ovn-installed in OVS
Jan 31 08:24:35 compute-0 ovn_controller[149457]: 2026-01-31T08:24:35Z|00679|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 up in Southbound
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.789 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.790 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.791 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.792 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 systemd-machined[214448]: New machine qemu-72-instance-00000099.
Jan 31 08:24:35 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-00000099.
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.817 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[45b9fb68-e59d-45c3-aeab-195bcc74f9c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.849 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb841c5-36f9-4d51-aea7-0a547814c6b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.853 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4910253e-a900-48c2-b2da-50ab8c75b1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.879 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[76d37c47-4314-4a1c-91f0-f441b6aaeb1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.896 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5466d9ae-eea1-4f84-bf7b-5c50efd3ea5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 14, 'rx_bytes': 658, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 14, 'rx_bytes': 658, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803971, 'reachable_time': 32750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351401, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.912 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[65c1fd80-4262-4d67-aedf-22b3d5f25dde]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803982, 'tstamp': 803982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351402, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803984, 'tstamp': 803984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351402, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.913 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.964 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 nova_compute[247704]: 2026-01-31 08:24:35.967 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.968 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.968 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.969 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:35.969 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:36 compute-0 nova_compute[247704]: 2026-01-31 08:24:36.027 247708 DEBUG nova.compute.manager [req-d1edad72-a9b9-4dd4-aeb9-010b5e678ab2 req-f7d94de4-c5f0-42c7-990a-fe38bb5cab5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:36 compute-0 nova_compute[247704]: 2026-01-31 08:24:36.027 247708 DEBUG oslo_concurrency.lockutils [req-d1edad72-a9b9-4dd4-aeb9-010b5e678ab2 req-f7d94de4-c5f0-42c7-990a-fe38bb5cab5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:36 compute-0 nova_compute[247704]: 2026-01-31 08:24:36.028 247708 DEBUG oslo_concurrency.lockutils [req-d1edad72-a9b9-4dd4-aeb9-010b5e678ab2 req-f7d94de4-c5f0-42c7-990a-fe38bb5cab5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:36 compute-0 nova_compute[247704]: 2026-01-31 08:24:36.028 247708 DEBUG oslo_concurrency.lockutils [req-d1edad72-a9b9-4dd4-aeb9-010b5e678ab2 req-f7d94de4-c5f0-42c7-990a-fe38bb5cab5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:36 compute-0 nova_compute[247704]: 2026-01-31 08:24:36.028 247708 DEBUG nova.compute.manager [req-d1edad72-a9b9-4dd4-aeb9-010b5e678ab2 req-f7d94de4-c5f0-42c7-990a-fe38bb5cab5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:36 compute-0 nova_compute[247704]: 2026-01-31 08:24:36.028 247708 WARNING nova.compute.manager [req-d1edad72-a9b9-4dd4-aeb9-010b5e678ab2 req-f7d94de4-c5f0-42c7-990a-fe38bb5cab5e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state rescued and task_state unrescuing.
Jan 31 08:24:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Jan 31 08:24:36 compute-0 ceph-mon[74496]: pgmap v2721: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Jan 31 08:24:36 compute-0 nova_compute[247704]: 2026-01-31 08:24:36.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.100 247708 DEBUG nova.virt.libvirt.host [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Removed pending event for 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.101 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847877.099846, 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.101 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] VM Resumed (Lifecycle Event)
Jan 31 08:24:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:37.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.147 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.152 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.198 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.199 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847877.1037698, 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.199 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] VM Started (Lifecycle Event)
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.238 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.243 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.278 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.511 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:37 compute-0 nova_compute[247704]: 2026-01-31 08:24:37.593 247708 DEBUG nova.compute.manager [None req-608ed441-e650-4a12-a509-f7d4fcf003bd 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:37.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:24:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1950468783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.357 247708 DEBUG nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.358 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.358 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.359 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.359 247708 DEBUG nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.359 247708 WARNING nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state active and task_state None.
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.359 247708 DEBUG nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.360 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.360 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.360 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.360 247708 DEBUG nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.361 247708 WARNING nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state active and task_state None.
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.361 247708 DEBUG nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.361 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.361 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.362 247708 DEBUG oslo_concurrency.lockutils [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.362 247708 DEBUG nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:38 compute-0 nova_compute[247704]: 2026-01-31 08:24:38.362 247708 WARNING nova.compute.manager [req-9fd790a4-22fd-4aea-b1cf-2c948ebca7bd req-e3f9b3aa-fd1a-460d-bc9a-fd9fe598141d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state active and task_state None.
Jan 31 08:24:39 compute-0 ceph-mon[74496]: pgmap v2722: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:24:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:39.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:39 compute-0 nova_compute[247704]: 2026-01-31 08:24:39.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:39 compute-0 nova_compute[247704]: 2026-01-31 08:24:39.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:39 compute-0 nova_compute[247704]: 2026-01-31 08:24:39.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:24:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:39.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.113 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.114 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.114 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.114 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.114 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.115 247708 INFO nova.compute.manager [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Terminating instance
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.116 247708 DEBUG nova.compute.manager [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:24:40 compute-0 ceph-mon[74496]: pgmap v2723: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Jan 31 08:24:40 compute-0 kernel: tap568e8160-e7 (unregistering): left promiscuous mode
Jan 31 08:24:40 compute-0 NetworkManager[49108]: <info>  [1769847880.1642] device (tap568e8160-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.170 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:40 compute-0 ovn_controller[149457]: 2026-01-31T08:24:40Z|00680|binding|INFO|Releasing lport 568e8160-e7ce-4ef4-ba79-b8571d160073 from this chassis (sb_readonly=0)
Jan 31 08:24:40 compute-0 ovn_controller[149457]: 2026-01-31T08:24:40Z|00681|binding|INFO|Setting lport 568e8160-e7ce-4ef4-ba79-b8571d160073 down in Southbound
Jan 31 08:24:40 compute-0 ovn_controller[149457]: 2026-01-31T08:24:40Z|00682|binding|INFO|Removing iface tap568e8160-e7 ovn-installed in OVS
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.172 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.177 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.184 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:71:92 10.100.0.7'], port_security=['fa:16:3e:c2:71:92 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6a10c21a-772d-4a5c-8f62-3d90d4b7ca56', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '8', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=568e8160-e7ce-4ef4-ba79-b8571d160073) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.185 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 568e8160-e7ce-4ef4-ba79-b8571d160073 in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.187 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 936cead9-bc2f-4c2d-8b4c-6079d2159263
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.198 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4a3528-2c14-453c-9263-6ef9de62c185]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:40 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 31 08:24:40 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000099.scope: Consumed 4.371s CPU time.
Jan 31 08:24:40 compute-0 systemd-machined[214448]: Machine qemu-72-instance-00000099 terminated.
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.223 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[273d6cc5-e0fb-4aba-951e-28f3d9a4970a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.226 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[259f2f55-01cf-4f84-9816-eaf83b21d428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.241 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[bf9e1928-ab1b-4215-9bbb-345f30343cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.251 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c4538198-e541-4d31-88d6-be62a562ae5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap936cead9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:06:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 658, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 16, 'rx_bytes': 658, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803971, 'reachable_time': 32750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351477, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.263 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb61d86-f2b3-475e-ba3c-2286b4e50344]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803982, 'tstamp': 803982}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351478, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap936cead9-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803984, 'tstamp': 803984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351478, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.264 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.266 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.270 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.270 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap936cead9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.270 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.271 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap936cead9-b0, col_values=(('external_ids', {'iface-id': 'fd5187fd-cce9-41da-96d2-ef75fbcbcf0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:40 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:40.271 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.359 247708 INFO nova.virt.libvirt.driver [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Instance destroyed successfully.
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.360 247708 DEBUG nova.objects.instance [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'resources' on Instance uuid 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.377 247708 DEBUG nova.virt.libvirt.vif [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1237883975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1237883975',id=153,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-40a23vvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:24:37Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=6a10c21a-772d-4a5c-8f62-3d90d4b7ca56,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.377 247708 DEBUG nova.network.os_vif_util [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "568e8160-e7ce-4ef4-ba79-b8571d160073", "address": "fa:16:3e:c2:71:92", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap568e8160-e7", "ovs_interfaceid": "568e8160-e7ce-4ef4-ba79-b8571d160073", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.378 247708 DEBUG nova.network.os_vif_util [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:71:92,bridge_name='br-int',has_traffic_filtering=True,id=568e8160-e7ce-4ef4-ba79-b8571d160073,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap568e8160-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.378 247708 DEBUG os_vif [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:71:92,bridge_name='br-int',has_traffic_filtering=True,id=568e8160-e7ce-4ef4-ba79-b8571d160073,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap568e8160-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.380 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.380 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap568e8160-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.403 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.406 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.409 247708 INFO os_vif [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:71:92,bridge_name='br-int',has_traffic_filtering=True,id=568e8160-e7ce-4ef4-ba79-b8571d160073,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap568e8160-e7')
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.550 247708 DEBUG nova.compute.manager [req-5b171b44-3914-4784-b181-3b4c7488264e req-f8380115-6180-4b41-bf2e-88ceabfb07e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.552 247708 DEBUG oslo_concurrency.lockutils [req-5b171b44-3914-4784-b181-3b4c7488264e req-f8380115-6180-4b41-bf2e-88ceabfb07e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.552 247708 DEBUG oslo_concurrency.lockutils [req-5b171b44-3914-4784-b181-3b4c7488264e req-f8380115-6180-4b41-bf2e-88ceabfb07e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.552 247708 DEBUG oslo_concurrency.lockutils [req-5b171b44-3914-4784-b181-3b4c7488264e req-f8380115-6180-4b41-bf2e-88ceabfb07e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.553 247708 DEBUG nova.compute.manager [req-5b171b44-3914-4784-b181-3b4c7488264e req-f8380115-6180-4b41-bf2e-88ceabfb07e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.553 247708 DEBUG nova.compute.manager [req-5b171b44-3914-4784-b181-3b4c7488264e req-f8380115-6180-4b41-bf2e-88ceabfb07e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-unplugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.654 247708 INFO nova.virt.libvirt.driver [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Deleting instance files /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_del
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.656 247708 INFO nova.virt.libvirt.driver [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Deletion of /var/lib/nova/instances/6a10c21a-772d-4a5c-8f62-3d90d4b7ca56_del complete
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.738 247708 INFO nova.compute.manager [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.739 247708 DEBUG oslo.service.loopingcall [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.740 247708 DEBUG nova.compute.manager [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:24:40 compute-0 nova_compute[247704]: 2026-01-31 08:24:40.740 247708 DEBUG nova.network.neutron [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:24:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:41.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:41.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:41 compute-0 podman[351511]: 2026-01-31 08:24:41.958888136 +0000 UTC m=+0.132663224 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:24:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 678 KiB/s wr, 192 op/s
Jan 31 08:24:42 compute-0 ceph-mon[74496]: pgmap v2724: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 678 KiB/s wr, 192 op/s
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.612 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.613 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.613 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.613 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.613 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:42.634 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:42.635 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.642 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.694 247708 DEBUG nova.network.neutron [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.739 247708 INFO nova.compute.manager [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Took 2.00 seconds to deallocate network for instance.
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.787 247708 DEBUG nova.compute.manager [req-fc0d3a28-773f-45b2-9d51-564289b64293 req-5f53ab54-6b01-48bd-89dc-efadb6169e29 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.788 247708 DEBUG oslo_concurrency.lockutils [req-fc0d3a28-773f-45b2-9d51-564289b64293 req-5f53ab54-6b01-48bd-89dc-efadb6169e29 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.788 247708 DEBUG oslo_concurrency.lockutils [req-fc0d3a28-773f-45b2-9d51-564289b64293 req-5f53ab54-6b01-48bd-89dc-efadb6169e29 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.789 247708 DEBUG oslo_concurrency.lockutils [req-fc0d3a28-773f-45b2-9d51-564289b64293 req-5f53ab54-6b01-48bd-89dc-efadb6169e29 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.789 247708 DEBUG nova.compute.manager [req-fc0d3a28-773f-45b2-9d51-564289b64293 req-5f53ab54-6b01-48bd-89dc-efadb6169e29 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] No waiting events found dispatching network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.789 247708 WARNING nova.compute.manager [req-fc0d3a28-773f-45b2-9d51-564289b64293 req-5f53ab54-6b01-48bd-89dc-efadb6169e29 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received unexpected event network-vif-plugged-568e8160-e7ce-4ef4-ba79-b8571d160073 for instance with vm_state active and task_state deleting.
Jan 31 08:24:42 compute-0 nova_compute[247704]: 2026-01-31 08:24:42.931 247708 DEBUG nova.compute.manager [req-ea045cb0-4ba5-403f-a02c-569e41c4a52d req-2fa53b06-eabe-471e-b3b6-96f50cc7ed52 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Received event network-vif-deleted-568e8160-e7ce-4ef4-ba79-b8571d160073 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.014 247708 INFO nova.compute.manager [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Took 0.28 seconds to detach 1 volumes for instance.
Jan 31 08:24:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563591530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.100 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.101 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.122 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.227 247708 DEBUG oslo_concurrency.processutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.249 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.250 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:24:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Jan 31 08:24:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Jan 31 08:24:43 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Jan 31 08:24:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2563591530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.443 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.444 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4155MB free_disk=20.8057861328125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.445 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:43.638 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633162157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.688 247708 DEBUG oslo_concurrency.processutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.693 247708 DEBUG nova.compute.provider_tree [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:24:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:43.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.720 247708 DEBUG nova.scheduler.client.report [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.777 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.780 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.881 247708 INFO nova.scheduler.client.report [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Deleted allocations for instance 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.900 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance ed055707-78a7-4777-97f3-842e56be52d9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.900 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.901 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:24:43 compute-0 nova_compute[247704]: 2026-01-31 08:24:43.981 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:44 compute-0 nova_compute[247704]: 2026-01-31 08:24:44.034 247708 DEBUG oslo_concurrency.lockutils [None req-83e9e43d-0f4b-4dbb-873b-3309bfd1cb88 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "6a10c21a-772d-4a5c-8f62-3d90d4b7ca56" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 571 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.0 MiB/s wr, 278 op/s
Jan 31 08:24:44 compute-0 ceph-mon[74496]: osdmap e352: 3 total, 3 up, 3 in
Jan 31 08:24:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3633162157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3317738892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1905162500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-0 ceph-mon[74496]: pgmap v2726: 305 pgs: 305 active+clean; 571 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.0 MiB/s wr, 278 op/s
Jan 31 08:24:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/933169485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1098951561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:44 compute-0 nova_compute[247704]: 2026-01-31 08:24:44.423 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:44 compute-0 nova_compute[247704]: 2026-01-31 08:24:44.430 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:24:44 compute-0 nova_compute[247704]: 2026-01-31 08:24:44.501 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:24:44 compute-0 nova_compute[247704]: 2026-01-31 08:24:44.582 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:24:44 compute-0 nova_compute[247704]: 2026-01-31 08:24:44.582 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1098951561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3702335835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:45 compute-0 nova_compute[247704]: 2026-01-31 08:24:45.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:45 compute-0 nova_compute[247704]: 2026-01-31 08:24:45.644 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:45 compute-0 nova_compute[247704]: 2026-01-31 08:24:45.644 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:45 compute-0 nova_compute[247704]: 2026-01-31 08:24:45.732 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:24:45 compute-0 nova_compute[247704]: 2026-01-31 08:24:45.879 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:45 compute-0 nova_compute[247704]: 2026-01-31 08:24:45.880 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:45 compute-0 nova_compute[247704]: 2026-01-31 08:24:45.886 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:24:45 compute-0 nova_compute[247704]: 2026-01-31 08:24:45.887 247708 INFO nova.compute.claims [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:24:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.189 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Jan 31 08:24:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Jan 31 08:24:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2806860288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1145741290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:46 compute-0 ceph-mon[74496]: pgmap v2727: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Jan 31 08:24:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.583 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.584 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.585 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:24:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565020080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.626 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.633 247708 DEBUG nova.compute.provider_tree [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.641 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.683 247708 DEBUG nova.scheduler.client.report [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.731 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.731 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.802 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.802 247708 DEBUG nova.network.neutron [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.837 247708 INFO nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.877 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.882 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.882 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.882 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:24:46 compute-0 nova_compute[247704]: 2026-01-31 08:24:46.883 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ed055707-78a7-4777-97f3-842e56be52d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.032 247708 DEBUG nova.policy [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85dfa8546d9942648bb4197c8b1947e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbdbdee526499e90da7e971ede68d3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.067 247708 INFO nova.virt.block_device [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Booting with volume aa9253f0-3405-4d95-ab32-e423b297e3e5 at /dev/vda
Jan 31 08:24:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.244 247708 DEBUG os_brick.utils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.247 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.261 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.262 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[f71a5103-2379-4f86-bbc4-a1f65762c286]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.265 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.273 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.274 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[e92b010b-e1fb-493f-a8a7-35a62da98416]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.276 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.284 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.284 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[617c0753-021e-43cc-8f9b-bb13727caef3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.286 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[c75251ea-8462-4140-a683-84e90c92e6bf]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.286 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.307 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.309 247708 DEBUG os_brick.initiator.connectors.lightos [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.309 247708 DEBUG os_brick.initiator.connectors.lightos [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.309 247708 DEBUG os_brick.initiator.connectors.lightos [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.310 247708 DEBUG os_brick.utils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.310 247708 DEBUG nova.virt.block_device [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updating existing volume attachment record: baed91ee-e5df-4d94-afbd-ea04a1f2d51b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:24:47 compute-0 ceph-mon[74496]: osdmap e353: 3 total, 3 up, 3 in
Jan 31 08:24:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3565020080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3613285156' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.524 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "ed055707-78a7-4777-97f3-842e56be52d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.525 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.525 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "ed055707-78a7-4777-97f3-842e56be52d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.525 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.526 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.527 247708 INFO nova.compute.manager [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Terminating instance
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.529 247708 DEBUG nova.compute.manager [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:24:47 compute-0 kernel: tap5ad80e9c-46 (unregistering): left promiscuous mode
Jan 31 08:24:47 compute-0 NetworkManager[49108]: <info>  [1769847887.5953] device (tap5ad80e9c-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.597 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:47 compute-0 ovn_controller[149457]: 2026-01-31T08:24:47Z|00683|binding|INFO|Releasing lport 5ad80e9c-4635-405b-a513-4af4441d6e17 from this chassis (sb_readonly=0)
Jan 31 08:24:47 compute-0 ovn_controller[149457]: 2026-01-31T08:24:47Z|00684|binding|INFO|Setting lport 5ad80e9c-4635-405b-a513-4af4441d6e17 down in Southbound
Jan 31 08:24:47 compute-0 ovn_controller[149457]: 2026-01-31T08:24:47Z|00685|binding|INFO|Removing iface tap5ad80e9c-46 ovn-installed in OVS
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:47.635 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:d8:c5 10.100.0.14'], port_security=['fa:16:3e:36:d8:c5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ed055707-78a7-4777-97f3-842e56be52d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c90ea7f1be5f484bb873548236fadc00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '952b4f08-f5a7-4fc0-ae2c-267f2ba857a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=42261fad-d2a1-4da1-823a-75e271c17223, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5ad80e9c-4635-405b-a513-4af4441d6e17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:47.638 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5ad80e9c-4635-405b-a513-4af4441d6e17 in datapath 936cead9-bc2f-4c2d-8b4c-6079d2159263 unbound from our chassis
Jan 31 08:24:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:47.641 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 936cead9-bc2f-4c2d-8b4c-6079d2159263, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:24:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:47.643 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbea5ca-5096-4577-a2d0-caa6e7f58c14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:47.644 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 namespace which is not needed anymore
Jan 31 08:24:47 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000097.scope: Deactivated successfully.
Jan 31 08:24:47 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000097.scope: Consumed 18.030s CPU time.
Jan 31 08:24:47 compute-0 systemd-machined[214448]: Machine qemu-69-instance-00000097 terminated.
Jan 31 08:24:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:47.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:47 compute-0 NetworkManager[49108]: <info>  [1769847887.7509] manager: (tap5ad80e9c-46): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.753 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.759 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.769 247708 INFO nova.virt.libvirt.driver [-] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Instance destroyed successfully.
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.770 247708 DEBUG nova.objects.instance [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lazy-loading 'resources' on Instance uuid ed055707-78a7-4777-97f3-842e56be52d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:47 compute-0 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[349336]: [NOTICE]   (349358) : haproxy version is 2.8.14-c23fe91
Jan 31 08:24:47 compute-0 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[349336]: [NOTICE]   (349358) : path to executable is /usr/sbin/haproxy
Jan 31 08:24:47 compute-0 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[349336]: [WARNING]  (349358) : Exiting Master process...
Jan 31 08:24:47 compute-0 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[349336]: [WARNING]  (349358) : Exiting Master process...
Jan 31 08:24:47 compute-0 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[349336]: [ALERT]    (349358) : Current worker (349361) exited with code 143 (Terminated)
Jan 31 08:24:47 compute-0 neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263[349336]: [WARNING]  (349358) : All workers exited. Exiting... (0)
Jan 31 08:24:47 compute-0 systemd[1]: libpod-168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2.scope: Deactivated successfully.
Jan 31 08:24:47 compute-0 podman[351662]: 2026-01-31 08:24:47.831488348 +0000 UTC m=+0.073154985 container died 168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.848 247708 DEBUG nova.virt.libvirt.vif [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:23:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1532317084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1532317084',id=151,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:23:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c90ea7f1be5f484bb873548236fadc00',ramdisk_id='',reservation_id='r-f2y7gvlm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1116995694-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:23:34Z,user_data=None,user_id='038e2b3b4f174162a3ac6c4870857e60',uuid=ed055707-78a7-4777-97f3-842e56be52d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.848 247708 DEBUG nova.network.os_vif_util [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converting VIF {"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.849 247708 DEBUG nova.network.os_vif_util [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:d8:c5,bridge_name='br-int',has_traffic_filtering=True,id=5ad80e9c-4635-405b-a513-4af4441d6e17,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad80e9c-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.850 247708 DEBUG os_vif [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:d8:c5,bridge_name='br-int',has_traffic_filtering=True,id=5ad80e9c-4635-405b-a513-4af4441d6e17,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad80e9c-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.852 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.853 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ad80e9c-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.856 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.860 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.865 247708 INFO os_vif [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:d8:c5,bridge_name='br-int',has_traffic_filtering=True,id=5ad80e9c-4635-405b-a513-4af4441d6e17,network=Network(936cead9-bc2f-4c2d-8b4c-6079d2159263),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad80e9c-46')
Jan 31 08:24:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2-userdata-shm.mount: Deactivated successfully.
Jan 31 08:24:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-097d4bec168e05748f850731fe9315c23796636a941c9c9b1ddad144d00e98dd-merged.mount: Deactivated successfully.
Jan 31 08:24:47 compute-0 podman[351662]: 2026-01-31 08:24:47.886742343 +0000 UTC m=+0.128409010 container cleanup 168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:24:47 compute-0 systemd[1]: libpod-conmon-168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2.scope: Deactivated successfully.
Jan 31 08:24:47 compute-0 podman[351714]: 2026-01-31 08:24:47.978195445 +0000 UTC m=+0.063601681 container remove 168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:47.985 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[98dd332b-4d96-4338-a05f-6f1274b1c0cd]: (4, ('Sat Jan 31 08:24:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2)\n168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2\nSat Jan 31 08:24:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 (168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2)\n168af507c6eeb2f86aec5b9ff6801ef8772faa658d1019a388ac3de8bab2bbc2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:47.988 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e48ad0d9-3c69-4aac-bfae-5374730b8be8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:47 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:47.989 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap936cead9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:47 compute-0 nova_compute[247704]: 2026-01-31 08:24:47.992 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:47 compute-0 kernel: tap936cead9-b0: left promiscuous mode
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.003 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:48.007 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[91c194c3-461b-4b00-ae43-ff064806804c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:48.032 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[39af733b-159b-4755-b19e-6143f0a283f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:48.034 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab66524-ecd5-4501-9f5e-dcd5188b4ea5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:48.054 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf11be2-1aa0-4a50-b53d-18f2bee39758]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803964, 'reachable_time': 33941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351732, 'error': None, 'target': 'ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d936cead9\x2dbc2f\x2d4c2d\x2d8b4c\x2d6079d2159263.mount: Deactivated successfully.
Jan 31 08:24:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:48.059 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-936cead9-bc2f-4c2d-8b4c-6079d2159263 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:24:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:48.060 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7eae2e-0106-4ad9-bee9-e0ba9dcffc97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 566 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 228 op/s
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.195 247708 DEBUG nova.network.neutron [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Successfully created port: eae745f2-80f8-4d46-9f62-485ae14b62ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:24:48 compute-0 ceph-mon[74496]: pgmap v2729: 305 pgs: 305 active+clean; 566 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 228 op/s
Jan 31 08:24:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2870581475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.469 247708 INFO nova.virt.libvirt.driver [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Deleting instance files /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9_del
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.470 247708 INFO nova.virt.libvirt.driver [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Deletion of /var/lib/nova/instances/ed055707-78a7-4777-97f3-842e56be52d9_del complete
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.675 247708 INFO nova.compute.manager [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.676 247708 DEBUG oslo.service.loopingcall [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.676 247708 DEBUG nova.compute.manager [-] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.677 247708 DEBUG nova.network.neutron [-] [instance: ed055707-78a7-4777-97f3-842e56be52d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.723 247708 DEBUG nova.compute.manager [req-2885939b-7d30-40f1-95b0-a833df9fd666 req-3f43739a-a188-474f-8f39-ed5eb6fccbdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received event network-vif-unplugged-5ad80e9c-4635-405b-a513-4af4441d6e17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.724 247708 DEBUG oslo_concurrency.lockutils [req-2885939b-7d30-40f1-95b0-a833df9fd666 req-3f43739a-a188-474f-8f39-ed5eb6fccbdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ed055707-78a7-4777-97f3-842e56be52d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.725 247708 DEBUG oslo_concurrency.lockutils [req-2885939b-7d30-40f1-95b0-a833df9fd666 req-3f43739a-a188-474f-8f39-ed5eb6fccbdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.725 247708 DEBUG oslo_concurrency.lockutils [req-2885939b-7d30-40f1-95b0-a833df9fd666 req-3f43739a-a188-474f-8f39-ed5eb6fccbdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.726 247708 DEBUG nova.compute.manager [req-2885939b-7d30-40f1-95b0-a833df9fd666 req-3f43739a-a188-474f-8f39-ed5eb6fccbdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] No waiting events found dispatching network-vif-unplugged-5ad80e9c-4635-405b-a513-4af4441d6e17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.727 247708 DEBUG nova.compute.manager [req-2885939b-7d30-40f1-95b0-a833df9fd666 req-3f43739a-a188-474f-8f39-ed5eb6fccbdc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received event network-vif-unplugged-5ad80e9c-4635-405b-a513-4af4441d6e17 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.903 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.905 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.905 247708 INFO nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Creating image(s)
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.905 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.906 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Ensure instance console log exists: /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.906 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.906 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:48 compute-0 nova_compute[247704]: 2026-01-31 08:24:48.906 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:49.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:49 compute-0 nova_compute[247704]: 2026-01-31 08:24:49.200 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Updating instance_info_cache with network_info: [{"id": "5ad80e9c-4635-405b-a513-4af4441d6e17", "address": "fa:16:3e:36:d8:c5", "network": {"id": "936cead9-bc2f-4c2d-8b4c-6079d2159263", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-1814386317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c90ea7f1be5f484bb873548236fadc00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad80e9c-46", "ovs_interfaceid": "5ad80e9c-4635-405b-a513-4af4441d6e17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:49 compute-0 nova_compute[247704]: 2026-01-31 08:24:49.254 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-ed055707-78a7-4777-97f3-842e56be52d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:49 compute-0 nova_compute[247704]: 2026-01-31 08:24:49.255 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:24:49 compute-0 nova_compute[247704]: 2026-01-31 08:24:49.256 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:49 compute-0 nova_compute[247704]: 2026-01-31 08:24:49.256 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:49.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:49 compute-0 sudo[351735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:49 compute-0 sudo[351735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:49 compute-0 sudo[351735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:49 compute-0 sudo[351760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:24:49 compute-0 sudo[351760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:49 compute-0 sudo[351760]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 523 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Jan 31 08:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:50 compute-0 ceph-mon[74496]: pgmap v2730: 305 pgs: 305 active+clean; 523 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 206 op/s
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.215 247708 DEBUG nova.network.neutron [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Successfully updated port: eae745f2-80f8-4d46-9f62-485ae14b62ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.251 247708 DEBUG nova.network.neutron [-] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.286 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.286 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.286 247708 DEBUG nova.network.neutron [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.335 247708 DEBUG nova.compute.manager [req-3241a9ee-b86f-49f6-9ab2-a5c86b5bdd8f req-2ac71019-2b6f-4f61-903b-26fa1ba2834b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received event network-vif-deleted-5ad80e9c-4635-405b-a513-4af4441d6e17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.335 247708 INFO nova.compute.manager [req-3241a9ee-b86f-49f6-9ab2-a5c86b5bdd8f req-2ac71019-2b6f-4f61-903b-26fa1ba2834b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Neutron deleted interface 5ad80e9c-4635-405b-a513-4af4441d6e17; detaching it from the instance and deleting it from the info cache
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.336 247708 DEBUG nova.network.neutron [req-3241a9ee-b86f-49f6-9ab2-a5c86b5bdd8f req-2ac71019-2b6f-4f61-903b-26fa1ba2834b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.340 247708 INFO nova.compute.manager [-] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Took 1.66 seconds to deallocate network for instance.
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.406 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.413 247708 DEBUG nova.compute.manager [req-3241a9ee-b86f-49f6-9ab2-a5c86b5bdd8f req-2ac71019-2b6f-4f61-903b-26fa1ba2834b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Detach interface failed, port_id=5ad80e9c-4635-405b-a513-4af4441d6e17, reason: Instance ed055707-78a7-4777-97f3-842e56be52d9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.427 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.428 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.517 247708 DEBUG oslo_concurrency.processutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.555 247708 DEBUG nova.network.neutron [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.931 247708 DEBUG nova.compute.manager [req-c31f56cc-c553-458d-9f2e-5d5e81ac5695 req-3f7af63d-552a-4a25-9e76-dc180bbd0542 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received event network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.932 247708 DEBUG oslo_concurrency.lockutils [req-c31f56cc-c553-458d-9f2e-5d5e81ac5695 req-3f7af63d-552a-4a25-9e76-dc180bbd0542 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "ed055707-78a7-4777-97f3-842e56be52d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.932 247708 DEBUG oslo_concurrency.lockutils [req-c31f56cc-c553-458d-9f2e-5d5e81ac5695 req-3f7af63d-552a-4a25-9e76-dc180bbd0542 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.933 247708 DEBUG oslo_concurrency.lockutils [req-c31f56cc-c553-458d-9f2e-5d5e81ac5695 req-3f7af63d-552a-4a25-9e76-dc180bbd0542 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.933 247708 DEBUG nova.compute.manager [req-c31f56cc-c553-458d-9f2e-5d5e81ac5695 req-3f7af63d-552a-4a25-9e76-dc180bbd0542 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] No waiting events found dispatching network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577017772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.933 247708 WARNING nova.compute.manager [req-c31f56cc-c553-458d-9f2e-5d5e81ac5695 req-3f7af63d-552a-4a25-9e76-dc180bbd0542 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Received unexpected event network-vif-plugged-5ad80e9c-4635-405b-a513-4af4441d6e17 for instance with vm_state deleted and task_state None.
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.950 247708 DEBUG oslo_concurrency.processutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.956 247708 DEBUG nova.compute.provider_tree [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:24:50 compute-0 nova_compute[247704]: 2026-01-31 08:24:50.991 247708 DEBUG nova.scheduler.client.report [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.049 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.092 247708 INFO nova.scheduler.client.report [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Deleted allocations for instance ed055707-78a7-4777-97f3-842e56be52d9
Jan 31 08:24:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:51.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Jan 31 08:24:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3577017772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Jan 31 08:24:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.252 247708 DEBUG oslo_concurrency.lockutils [None req-2115c488-29c4-4cbb-a4a2-1b7cbe27193f 038e2b3b4f174162a3ac6c4870857e60 c90ea7f1be5f484bb873548236fadc00 - - default default] Lock "ed055707-78a7-4777-97f3-842e56be52d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:51.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.780 247708 DEBUG nova.network.neutron [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updating instance_info_cache with network_info: [{"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.826 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.826 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Instance network_info: |[{"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.832 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Start _get_guest_xml network_info=[{"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'baed91ee-e5df-4d94-afbd-ea04a1f2d51b', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aa9253f0-3405-4d95-ab32-e423b297e3e5', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aa9253f0-3405-4d95-ab32-e423b297e3e5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '1beb42da-13c9-4f95-8a5d-e2c3c1affd2a', 'attached_at': '', 'detached_at': '', 'volume_id': 'aa9253f0-3405-4d95-ab32-e423b297e3e5', 'serial': 'aa9253f0-3405-4d95-ab32-e423b297e3e5', 'multiattach': True}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.838 247708 WARNING nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.844 247708 DEBUG nova.virt.libvirt.host [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.845 247708 DEBUG nova.virt.libvirt.host [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.849 247708 DEBUG nova.virt.libvirt.host [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.850 247708 DEBUG nova.virt.libvirt.host [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.852 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.853 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.853 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.854 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.855 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.855 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.855 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.856 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.857 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.857 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.858 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.858 247708 DEBUG nova.virt.hardware [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.895 247708 DEBUG nova.storage.rbd_utils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:51 compute-0 nova_compute[247704]: 2026-01-31 08:24:51.899 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 268 KiB/s wr, 187 op/s
Jan 31 08:24:52 compute-0 ceph-mon[74496]: osdmap e354: 3 total, 3 up, 3 in
Jan 31 08:24:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4119211165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:52 compute-0 ceph-mon[74496]: pgmap v2732: 305 pgs: 305 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 268 KiB/s wr, 187 op/s
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.230 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:24:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737181080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.329 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Jan 31 08:24:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Jan 31 08:24:52 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.676 247708 DEBUG nova.virt.libvirt.vif [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:24:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1810503935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1810503935',id=156,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-lmso5hs7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:24:47Z,user_data=None,user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=1beb42da-13c9-4f95-8a5d-e2c3c1affd2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.677 247708 DEBUG nova.network.os_vif_util [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.678 247708 DEBUG nova.network.os_vif_util [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:48:03,bridge_name='br-int',has_traffic_filtering=True,id=eae745f2-80f8-4d46-9f62-485ae14b62ea,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae745f2-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.680 247708 DEBUG nova.objects.instance [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.693 247708 DEBUG nova.compute.manager [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received event network-changed-eae745f2-80f8-4d46-9f62-485ae14b62ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.694 247708 DEBUG nova.compute.manager [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Refreshing instance network info cache due to event network-changed-eae745f2-80f8-4d46-9f62-485ae14b62ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.694 247708 DEBUG oslo_concurrency.lockutils [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.695 247708 DEBUG oslo_concurrency.lockutils [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.695 247708 DEBUG nova.network.neutron [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Refreshing network info cache for port eae745f2-80f8-4d46-9f62-485ae14b62ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.718 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <uuid>1beb42da-13c9-4f95-8a5d-e2c3c1affd2a</uuid>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <name>instance-0000009c</name>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <nova:name>tempest-AttachVolumeMultiAttachTest-server-1810503935</nova:name>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:24:51</nova:creationTime>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <nova:user uuid="85dfa8546d9942648bb4197c8b1947e3">tempest-AttachVolumeMultiAttachTest-2017021026-project-member</nova:user>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <nova:project uuid="48bbdbdee526499e90da7e971ede68d3">tempest-AttachVolumeMultiAttachTest-2017021026</nova:project>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <nova:port uuid="eae745f2-80f8-4d46-9f62-485ae14b62ea">
Jan 31 08:24:52 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <system>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <entry name="serial">1beb42da-13c9-4f95-8a5d-e2c3c1affd2a</entry>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <entry name="uuid">1beb42da-13c9-4f95-8a5d-e2c3c1affd2a</entry>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </system>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <os>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   </os>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <features>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   </features>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a_disk.config">
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       </source>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-aa9253f0-3405-4d95-ab32-e423b297e3e5">
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       </source>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:24:52 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <serial>aa9253f0-3405-4d95-ab32-e423b297e3e5</serial>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <shareable/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:b6:48:03"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <target dev="tapeae745f2-80"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a/console.log" append="off"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <video>
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </video>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:24:52 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:24:52 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:24:52 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:24:52 compute-0 nova_compute[247704]: </domain>
Jan 31 08:24:52 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.720 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Preparing to wait for external event network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.721 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.721 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.721 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.722 247708 DEBUG nova.virt.libvirt.vif [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:24:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1810503935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1810503935',id=156,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-lmso5hs7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:24:47Z,user_data=None,user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=1beb42da-13c9-4f95-8a5d-e2c3c1affd2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.723 247708 DEBUG nova.network.os_vif_util [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.726 247708 DEBUG nova.network.os_vif_util [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:48:03,bridge_name='br-int',has_traffic_filtering=True,id=eae745f2-80f8-4d46-9f62-485ae14b62ea,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae745f2-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.726 247708 DEBUG os_vif [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:48:03,bridge_name='br-int',has_traffic_filtering=True,id=eae745f2-80f8-4d46-9f62-485ae14b62ea,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae745f2-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.727 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.728 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.728 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.735 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.735 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeae745f2-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.736 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeae745f2-80, col_values=(('external_ids', {'iface-id': 'eae745f2-80f8-4d46-9f62-485ae14b62ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:48:03', 'vm-uuid': '1beb42da-13c9-4f95-8a5d-e2c3c1affd2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:52 compute-0 NetworkManager[49108]: <info>  [1769847892.7387] manager: (tapeae745f2-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.739 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.742 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.749 247708 INFO os_vif [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:48:03,bridge_name='br-int',has_traffic_filtering=True,id=eae745f2-80f8-4d46-9f62-485ae14b62ea,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae745f2-80')
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.941 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.942 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.945 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:b6:48:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.946 247708 INFO nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Using config drive
Jan 31 08:24:52 compute-0 nova_compute[247704]: 2026-01-31 08:24:52.984 247708 DEBUG nova.storage.rbd_utils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/737181080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:24:53 compute-0 ceph-mon[74496]: osdmap e355: 3 total, 3 up, 3 in
Jan 31 08:24:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:24:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:53.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:24:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.2 KiB/s wr, 225 op/s
Jan 31 08:24:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1721956627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:24:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1721956627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:24:54 compute-0 ceph-mon[74496]: pgmap v2734: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.2 KiB/s wr, 225 op/s
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.479 247708 INFO nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Creating config drive at /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a/disk.config
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.484 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpc0qsyrsd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.625 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpc0qsyrsd" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.659 247708 DEBUG nova.storage.rbd_utils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.664 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a/disk.config 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.907 247708 DEBUG oslo_concurrency.processutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a/disk.config 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.908 247708 INFO nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Deleting local config drive /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a/disk.config because it was imported into RBD.
Jan 31 08:24:54 compute-0 kernel: tapeae745f2-80: entered promiscuous mode
Jan 31 08:24:54 compute-0 NetworkManager[49108]: <info>  [1769847894.9537] manager: (tapeae745f2-80): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.955 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:54 compute-0 ovn_controller[149457]: 2026-01-31T08:24:54Z|00686|binding|INFO|Claiming lport eae745f2-80f8-4d46-9f62-485ae14b62ea for this chassis.
Jan 31 08:24:54 compute-0 ovn_controller[149457]: 2026-01-31T08:24:54Z|00687|binding|INFO|eae745f2-80f8-4d46-9f62-485ae14b62ea: Claiming fa:16:3e:b6:48:03 10.100.0.12
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.958 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.960 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:54.966 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:48:03 10.100.0.12'], port_security=['fa:16:3e:b6:48:03 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1beb42da-13c9-4f95-8a5d-e2c3c1affd2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4abfdb7-aa95-4407-b049-c51322e9a052', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=eae745f2-80f8-4d46-9f62-485ae14b62ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:24:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:54.968 160028 INFO neutron.agent.ovn.metadata.agent [-] Port eae745f2-80f8-4d46-9f62-485ae14b62ea in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad bound to our chassis
Jan 31 08:24:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:54.970 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:24:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:54.981 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d469b849-5c6e-4f67-a1e3-e80a09bb08ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:54.983 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26ad6a8f-31 in ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:54 compute-0 ovn_controller[149457]: 2026-01-31T08:24:54Z|00688|binding|INFO|Setting lport eae745f2-80f8-4d46-9f62-485ae14b62ea ovn-installed in OVS
Jan 31 08:24:54 compute-0 ovn_controller[149457]: 2026-01-31T08:24:54Z|00689|binding|INFO|Setting lport eae745f2-80f8-4d46-9f62-485ae14b62ea up in Southbound
Jan 31 08:24:54 compute-0 nova_compute[247704]: 2026-01-31 08:24:54.986 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:54.986 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26ad6a8f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:24:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:54.986 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d68c477f-56cc-48eb-8fc6-62601faede08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:54.988 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[da43c450-0375-423f-9064-a410fd29ee2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:54 compute-0 systemd-machined[214448]: New machine qemu-73-instance-0000009c.
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.001 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[40880866-f086-4b45-a882-79547bc85c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.013 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6930e1e7-e2c3-43cb-8489-9fb11b838b95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-0000009c.
Jan 31 08:24:55 compute-0 systemd-udevd[351940]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.032 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[47dce79b-01d8-4bfc-98f7-b8a6724217a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 NetworkManager[49108]: <info>  [1769847895.0390] manager: (tap26ad6a8f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/299)
Jan 31 08:24:55 compute-0 NetworkManager[49108]: <info>  [1769847895.0398] device (tapeae745f2-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:24:55 compute-0 NetworkManager[49108]: <info>  [1769847895.0401] device (tapeae745f2-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.038 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8b8a27-b30c-4204-9858-c19bf08449f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 systemd-udevd[351947]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.071 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1fe7f4-3c36-4539-896e-ced75a5c61b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.074 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[80b08fc0-92c9-45f1-9120-52bd76b2ba61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 podman[351920]: 2026-01-31 08:24:55.082256444 +0000 UTC m=+0.095429300 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:55 compute-0 NetworkManager[49108]: <info>  [1769847895.0991] device (tap26ad6a8f-30): carrier: link connected
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.102 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9fed927f-f658-454f-bd09-1e14f1285f41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.119 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[41e5e4df-78d5-43ec-bdcf-c460cf8e79b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813771, 'reachable_time': 37081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351975, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.136 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c35262fc-678c-4e7d-b03b-3c192e3c8759]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:605d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813771, 'tstamp': 813771}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351976, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:24:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:55.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.154 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7f30693b-218d-419c-b61e-2e28dc6f34e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813771, 'reachable_time': 37081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351977, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.187 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[71bb551f-cb94-49e5-8e4e-e64126235c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.255 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e2aabfcc-8b1d-4e99-9250-abe6558b4c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.257 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.257 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.258 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.260 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:55 compute-0 kernel: tap26ad6a8f-30: entered promiscuous mode
Jan 31 08:24:55 compute-0 NetworkManager[49108]: <info>  [1769847895.2610] manager: (tap26ad6a8f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.262 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:24:55 compute-0 ovn_controller[149457]: 2026-01-31T08:24:55Z|00690|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.263 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.269 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.269 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.270 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c1019143-fd5d-4694-953b-2de41ae374df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.271 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.pid.haproxy
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:24:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:24:55.272 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'env', 'PROCESS_TAG=haproxy-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26ad6a8f-33d5-432e-83d3-63a9d2f165ad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.358 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847880.357735, 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.359 247708 INFO nova.compute.manager [-] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] VM Stopped (Lifecycle Event)
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.407 247708 DEBUG nova.compute.manager [None req-b81eb012-2b91-485f-9d03-873eedacdfd1 - - - - - -] [instance: 6a10c21a-772d-4a5c-8f62-3d90d4b7ca56] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.408 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:55 compute-0 podman[352010]: 2026-01-31 08:24:55.617272373 +0000 UTC m=+0.051999946 container create 9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:24:55 compute-0 systemd[1]: Started libpod-conmon-9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a.scope.
Jan 31 08:24:55 compute-0 podman[352010]: 2026-01-31 08:24:55.588549929 +0000 UTC m=+0.023277522 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:24:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577026d6a9564561c364784dcfda717ad7092a046dabbbbf7ab687ecebaa972a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:55 compute-0 podman[352010]: 2026-01-31 08:24:55.714281802 +0000 UTC m=+0.149009385 container init 9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 08:24:55 compute-0 podman[352010]: 2026-01-31 08:24:55.718810663 +0000 UTC m=+0.153538236 container start 9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:24:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:55.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:55 compute-0 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[352025]: [NOTICE]   (352029) : New worker (352031) forked
Jan 31 08:24:55 compute-0 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[352025]: [NOTICE]   (352029) : Loading success.
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.971 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847895.9707901, 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:55 compute-0 nova_compute[247704]: 2026-01-31 08:24:55.971 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] VM Started (Lifecycle Event)
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.004 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.009 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847895.970934, 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.010 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] VM Paused (Lifecycle Event)
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.028 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.033 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/88557555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.061 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:24:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 27 KiB/s wr, 228 op/s
Jan 31 08:24:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:24:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3119264981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.914 247708 DEBUG nova.network.neutron [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updated VIF entry in instance network info cache for port eae745f2-80f8-4d46-9f62-485ae14b62ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.915 247708 DEBUG nova.network.neutron [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updating instance_info_cache with network_info: [{"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:24:56 compute-0 nova_compute[247704]: 2026-01-31 08:24:56.936 247708 DEBUG oslo_concurrency.lockutils [req-5a526360-90c9-4a79-b5d7-89eda613b43e req-6f760262-ee7a-4a9b-bb22-04cf3ee25184 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:57 compute-0 ceph-mon[74496]: pgmap v2735: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 27 KiB/s wr, 228 op/s
Jan 31 08:24:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3119264981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:24:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:57.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:24:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Jan 31 08:24:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Jan 31 08:24:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Jan 31 08:24:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:57.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.736 247708 DEBUG nova.compute.manager [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received event network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.737 247708 DEBUG oslo_concurrency.lockutils [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.737 247708 DEBUG oslo_concurrency.lockutils [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.737 247708 DEBUG oslo_concurrency.lockutils [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.737 247708 DEBUG nova.compute.manager [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Processing event network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.737 247708 DEBUG nova.compute.manager [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received event network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.738 247708 DEBUG oslo_concurrency.lockutils [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.738 247708 DEBUG oslo_concurrency.lockutils [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.738 247708 DEBUG oslo_concurrency.lockutils [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.738 247708 DEBUG nova.compute.manager [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] No waiting events found dispatching network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.738 247708 WARNING nova.compute.manager [req-207ebb3d-3ca3-43cd-a083-7a07d5d40fb5 req-027000ff-c813-4a90-bcad-70ee535af0a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received unexpected event network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea for instance with vm_state building and task_state spawning.
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.739 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.739 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.742 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847897.7422178, 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.742 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] VM Resumed (Lifecycle Event)
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.744 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.746 247708 INFO nova.virt.libvirt.driver [-] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Instance spawned successfully.
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.746 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.849 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.856 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.860 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.861 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.861 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.861 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.862 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.862 247708 DEBUG nova.virt.libvirt.driver [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:24:57 compute-0 nova_compute[247704]: 2026-01-31 08:24:57.925 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:24:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 45 KiB/s wr, 142 op/s
Jan 31 08:24:58 compute-0 nova_compute[247704]: 2026-01-31 08:24:58.121 247708 INFO nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Took 9.22 seconds to spawn the instance on the hypervisor.
Jan 31 08:24:58 compute-0 nova_compute[247704]: 2026-01-31 08:24:58.121 247708 DEBUG nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:24:58 compute-0 nova_compute[247704]: 2026-01-31 08:24:58.277 247708 INFO nova.compute.manager [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Took 12.43 seconds to build instance.
Jan 31 08:24:58 compute-0 nova_compute[247704]: 2026-01-31 08:24:58.404 247708 DEBUG oslo_concurrency.lockutils [None req-fa955fd0-6f32-439a-88f3-30e9e8f1e448 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Jan 31 08:24:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Jan 31 08:24:58 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Jan 31 08:24:58 compute-0 ceph-mon[74496]: osdmap e356: 3 total, 3 up, 3 in
Jan 31 08:24:58 compute-0 ceph-mon[74496]: pgmap v2737: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 45 KiB/s wr, 142 op/s
Jan 31 08:24:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:24:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:59.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:24:59 compute-0 ceph-mon[74496]: osdmap e357: 3 total, 3 up, 3 in
Jan 31 08:24:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:24:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:24:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:59.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 43 KiB/s wr, 105 op/s
Jan 31 08:25:00 compute-0 nova_compute[247704]: 2026-01-31 08:25:00.410 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:00 compute-0 ceph-mon[74496]: pgmap v2739: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 657 KiB/s rd, 43 KiB/s wr, 105 op/s
Jan 31 08:25:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:01.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1055653894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1055653894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:01.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 42 KiB/s wr, 180 op/s
Jan 31 08:25:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:02 compute-0 ceph-mon[74496]: pgmap v2740: 305 pgs: 305 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 42 KiB/s wr, 180 op/s
Jan 31 08:25:02 compute-0 nova_compute[247704]: 2026-01-31 08:25:02.770 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847887.765605, ed055707-78a7-4777-97f3-842e56be52d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:25:02 compute-0 nova_compute[247704]: 2026-01-31 08:25:02.771 247708 INFO nova.compute.manager [-] [instance: ed055707-78a7-4777-97f3-842e56be52d9] VM Stopped (Lifecycle Event)
Jan 31 08:25:02 compute-0 nova_compute[247704]: 2026-01-31 08:25:02.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:02 compute-0 nova_compute[247704]: 2026-01-31 08:25:02.841 247708 DEBUG nova.compute.manager [None req-0815823d-9d89-47f8-8073-5bb986278818 - - - - - -] [instance: ed055707-78a7-4777-97f3-842e56be52d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:25:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:03.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:03.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2295090872' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2295090872' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3430440580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 311 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 21 KiB/s wr, 256 op/s
Jan 31 08:25:04 compute-0 ceph-mon[74496]: pgmap v2741: 305 pgs: 305 active+clean; 311 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 21 KiB/s wr, 256 op/s
Jan 31 08:25:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:05.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:05 compute-0 nova_compute[247704]: 2026-01-31 08:25:05.414 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:05.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 252 op/s
Jan 31 08:25:06 compute-0 ceph-mon[74496]: pgmap v2742: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 252 op/s
Jan 31 08:25:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:07.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Jan 31 08:25:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Jan 31 08:25:07 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Jan 31 08:25:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:07.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:07 compute-0 nova_compute[247704]: 2026-01-31 08:25:07.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 19 KiB/s wr, 230 op/s
Jan 31 08:25:08 compute-0 ceph-mon[74496]: osdmap e358: 3 total, 3 up, 3 in
Jan 31 08:25:08 compute-0 ceph-mon[74496]: pgmap v2744: 305 pgs: 305 active+clean; 248 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 19 KiB/s wr, 230 op/s
Jan 31 08:25:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2344999837' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2344999837' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:09.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:25:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:09.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:25:10 compute-0 sudo[352089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:10 compute-0 sudo[352089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:10 compute-0 sudo[352089]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 251 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 216 KiB/s wr, 206 op/s
Jan 31 08:25:10 compute-0 sudo[352114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:10 compute-0 sudo[352114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:10 compute-0 sudo[352114]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:10 compute-0 ceph-mon[74496]: pgmap v2745: 305 pgs: 305 active+clean; 251 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 216 KiB/s wr, 206 op/s
Jan 31 08:25:10 compute-0 nova_compute[247704]: 2026-01-31 08:25:10.413 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:11 compute-0 ovn_controller[149457]: 2026-01-31T08:25:11Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:48:03 10.100.0.12
Jan 31 08:25:11 compute-0 ovn_controller[149457]: 2026-01-31T08:25:11Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:48:03 10.100.0.12
Jan 31 08:25:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:25:11.196 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:25:11.197 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:25:11.198 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:11.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1401734021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 256 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 163 op/s
Jan 31 08:25:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:12 compute-0 nova_compute[247704]: 2026-01-31 08:25:12.779 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:12 compute-0 ceph-mon[74496]: pgmap v2746: 305 pgs: 305 active+clean; 256 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 163 op/s
Jan 31 08:25:12 compute-0 podman[352140]: 2026-01-31 08:25:12.966436379 +0000 UTC m=+0.135686338 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:25:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:13.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:13.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 239 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 1.9 MiB/s wr, 97 op/s
Jan 31 08:25:14 compute-0 ceph-mon[74496]: pgmap v2747: 305 pgs: 305 active+clean; 239 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 1.9 MiB/s wr, 97 op/s
Jan 31 08:25:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:15.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:15 compute-0 nova_compute[247704]: 2026-01-31 08:25:15.416 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:15.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 231 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 4.0 MiB/s wr, 132 op/s
Jan 31 08:25:16 compute-0 ceph-mon[74496]: pgmap v2748: 305 pgs: 305 active+clean; 231 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 4.0 MiB/s wr, 132 op/s
Jan 31 08:25:16 compute-0 ovn_controller[149457]: 2026-01-31T08:25:16Z|00691|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:25:16 compute-0 nova_compute[247704]: 2026-01-31 08:25:16.700 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:17.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1727550074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:17.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:17 compute-0 nova_compute[247704]: 2026-01-31 08:25:17.831 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 436 KiB/s rd, 4.5 MiB/s wr, 141 op/s
Jan 31 08:25:18 compute-0 ceph-mon[74496]: pgmap v2749: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 436 KiB/s rd, 4.5 MiB/s wr, 141 op/s
Jan 31 08:25:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:19.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:19.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:25:20
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', '.rgw.root', 'default.rgw.log']
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:25:20 compute-0 ceph-mon[74496]: pgmap v2750: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Jan 31 08:25:20 compute-0 nova_compute[247704]: 2026-01-31 08:25:20.419 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:25:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:25:20 compute-0 sudo[352172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:20 compute-0 sudo[352172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:20 compute-0 sudo[352172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:20 compute-0 sudo[352197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:20 compute-0 sudo[352197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:20 compute-0 sudo[352197]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:20 compute-0 sudo[352222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:20 compute-0 sudo[352222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:20 compute-0 sudo[352222]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:20 compute-0 sudo[352247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:25:20 compute-0 sudo[352247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:21.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:21 compute-0 podman[352347]: 2026-01-31 08:25:21.421224469 +0000 UTC m=+0.075527584 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:25:21 compute-0 podman[352347]: 2026-01-31 08:25:21.54483507 +0000 UTC m=+0.199138155 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:25:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:25:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:21.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:25:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 3.7 MiB/s wr, 109 op/s
Jan 31 08:25:22 compute-0 ceph-mon[74496]: pgmap v2751: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 3.7 MiB/s wr, 109 op/s
Jan 31 08:25:22 compute-0 podman[352504]: 2026-01-31 08:25:22.245168633 +0000 UTC m=+0.073747240 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:25:22 compute-0 podman[352504]: 2026-01-31 08:25:22.255552107 +0000 UTC m=+0.084130674 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:25:22 compute-0 podman[352571]: 2026-01-31 08:25:22.487584737 +0000 UTC m=+0.060229988 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, vcs-type=git)
Jan 31 08:25:22 compute-0 podman[352571]: 2026-01-31 08:25:22.500522824 +0000 UTC m=+0.073168035 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 08:25:22 compute-0 sudo[352247]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:25:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:25:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:22 compute-0 sudo[352606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:22 compute-0 sudo[352606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:22 compute-0 sudo[352606]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:22 compute-0 sudo[352631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:22 compute-0 sudo[352631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:22 compute-0 sudo[352631]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:22 compute-0 sudo[352656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:22 compute-0 sudo[352656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:22 compute-0 sudo[352656]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:22 compute-0 nova_compute[247704]: 2026-01-31 08:25:22.833 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:22 compute-0 sudo[352681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:25:22 compute-0 sudo[352681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:23 compute-0 sudo[352681]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f9ace097-2b27-4aa9-a969-a46e71e2ab7f does not exist
Jan 31 08:25:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 470b7837-67a5-4c7e-bf68-d9794ccd8bbe does not exist
Jan 31 08:25:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 712d8bb3-214f-4311-b2dd-abda12bbb652 does not exist
Jan 31 08:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:25:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:25:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:25:23 compute-0 sudo[352738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:23 compute-0 sudo[352738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:23 compute-0 sudo[352738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:23 compute-0 sudo[352763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:23 compute-0 sudo[352763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:23 compute-0 sudo[352763]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:23 compute-0 sudo[352788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:23 compute-0 sudo[352788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:23 compute-0 sudo[352788]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:23 compute-0 sudo[352813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:25:23 compute-0 sudo[352813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:25:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:25:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:25:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:23.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:25:23 compute-0 podman[352881]: 2026-01-31 08:25:23.888029597 +0000 UTC m=+0.052362614 container create 3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:25:23 compute-0 systemd[1]: Started libpod-conmon-3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912.scope.
Jan 31 08:25:23 compute-0 podman[352881]: 2026-01-31 08:25:23.861854885 +0000 UTC m=+0.026188002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:25:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:23 compute-0 podman[352881]: 2026-01-31 08:25:23.984037352 +0000 UTC m=+0.148370399 container init 3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:25:23 compute-0 podman[352881]: 2026-01-31 08:25:23.99540543 +0000 UTC m=+0.159738447 container start 3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 08:25:24 compute-0 podman[352881]: 2026-01-31 08:25:23.999793328 +0000 UTC m=+0.164126375 container attach 3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:25:24 compute-0 ecstatic_curie[352898]: 167 167
Jan 31 08:25:24 compute-0 systemd[1]: libpod-3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912.scope: Deactivated successfully.
Jan 31 08:25:24 compute-0 podman[352881]: 2026-01-31 08:25:24.004240667 +0000 UTC m=+0.168573694 container died 3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:25:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b0f084f12fb5db50733ba60be91490306a86b6c93df70de5dad76a8e11c7576-merged.mount: Deactivated successfully.
Jan 31 08:25:24 compute-0 podman[352881]: 2026-01-31 08:25:24.049983699 +0000 UTC m=+0.214316746 container remove 3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:25:24 compute-0 systemd[1]: libpod-conmon-3ea0d359d529b3aad5bbf0565d38e2c6f8f8c8574d7fc48940cfb6f5d61dc912.scope: Deactivated successfully.
Jan 31 08:25:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 3.0 MiB/s wr, 90 op/s
Jan 31 08:25:24 compute-0 podman[352922]: 2026-01-31 08:25:24.229698625 +0000 UTC m=+0.069056444 container create 0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:25:24 compute-0 systemd[1]: Started libpod-conmon-0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d.scope.
Jan 31 08:25:24 compute-0 podman[352922]: 2026-01-31 08:25:24.206892966 +0000 UTC m=+0.046250805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:25:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e36f7a784fc02cc95241bd0a4291d43572f5e2057368e4a2cfacdb62c7a4162/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e36f7a784fc02cc95241bd0a4291d43572f5e2057368e4a2cfacdb62c7a4162/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e36f7a784fc02cc95241bd0a4291d43572f5e2057368e4a2cfacdb62c7a4162/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e36f7a784fc02cc95241bd0a4291d43572f5e2057368e4a2cfacdb62c7a4162/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e36f7a784fc02cc95241bd0a4291d43572f5e2057368e4a2cfacdb62c7a4162/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:24 compute-0 podman[352922]: 2026-01-31 08:25:24.331702037 +0000 UTC m=+0.171059846 container init 0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:25:24 compute-0 podman[352922]: 2026-01-31 08:25:24.342774918 +0000 UTC m=+0.182132757 container start 0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 08:25:24 compute-0 podman[352922]: 2026-01-31 08:25:24.347412882 +0000 UTC m=+0.186770711 container attach 0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:25:24 compute-0 ceph-mon[74496]: pgmap v2752: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 3.0 MiB/s wr, 90 op/s
Jan 31 08:25:24 compute-0 sshd-session[352945]: Invalid user solv from 45.148.10.240 port 43274
Jan 31 08:25:24 compute-0 sshd-session[352945]: Connection closed by invalid user solv 45.148.10.240 port 43274 [preauth]
Jan 31 08:25:25 compute-0 gracious_goodall[352939]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:25:25 compute-0 gracious_goodall[352939]: --> relative data size: 1.0
Jan 31 08:25:25 compute-0 gracious_goodall[352939]: --> All data devices are unavailable
Jan 31 08:25:25 compute-0 systemd[1]: libpod-0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d.scope: Deactivated successfully.
Jan 31 08:25:25 compute-0 podman[352922]: 2026-01-31 08:25:25.18539741 +0000 UTC m=+1.024755219 container died 0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:25:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:25.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e36f7a784fc02cc95241bd0a4291d43572f5e2057368e4a2cfacdb62c7a4162-merged.mount: Deactivated successfully.
Jan 31 08:25:25 compute-0 podman[352922]: 2026-01-31 08:25:25.26775996 +0000 UTC m=+1.107117769 container remove 0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goodall, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:25:25 compute-0 systemd[1]: libpod-conmon-0e4cf974639da4101d8b44ee4383636355e8b2607d3f1b05d3f8306379c7006d.scope: Deactivated successfully.
Jan 31 08:25:25 compute-0 sudo[352813]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:25 compute-0 podman[352958]: 2026-01-31 08:25:25.309807321 +0000 UTC m=+0.095594466 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 08:25:25 compute-0 sudo[352990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:25 compute-0 sudo[352990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:25 compute-0 sudo[352990]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:25 compute-0 nova_compute[247704]: 2026-01-31 08:25:25.422 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:25 compute-0 sudo[353015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:25 compute-0 sudo[353015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:25 compute-0 sudo[353015]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:25 compute-0 sudo[353040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:25 compute-0 sudo[353040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:25 compute-0 sudo[353040]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:25 compute-0 sudo[353066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:25:25 compute-0 sudo[353066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:25.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:25 compute-0 podman[353134]: 2026-01-31 08:25:25.984451683 +0000 UTC m=+0.051683537 container create e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:26 compute-0 systemd[1]: Started libpod-conmon-e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899.scope.
Jan 31 08:25:26 compute-0 podman[353134]: 2026-01-31 08:25:25.962000523 +0000 UTC m=+0.029232457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:25:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:26 compute-0 podman[353134]: 2026-01-31 08:25:26.097865634 +0000 UTC m=+0.165097508 container init e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:25:26 compute-0 podman[353134]: 2026-01-31 08:25:26.106158078 +0000 UTC m=+0.173389942 container start e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_thompson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:26 compute-0 podman[353134]: 2026-01-31 08:25:26.109551411 +0000 UTC m=+0.176783275 container attach e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Jan 31 08:25:26 compute-0 adoring_thompson[353150]: 167 167
Jan 31 08:25:26 compute-0 systemd[1]: libpod-e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899.scope: Deactivated successfully.
Jan 31 08:25:26 compute-0 podman[353134]: 2026-01-31 08:25:26.1152023 +0000 UTC m=+0.182434154 container died e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_thompson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 31 08:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f6b79c2a7d18ff2501e043814b1acbbae6c085e18b834dbfd4b3624c57e1f15-merged.mount: Deactivated successfully.
Jan 31 08:25:26 compute-0 podman[353134]: 2026-01-31 08:25:26.157043196 +0000 UTC m=+0.224275050 container remove e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 08:25:26 compute-0 systemd[1]: libpod-conmon-e692353091093902ef94ab33d1cfc6eb29fa8dc58980990eab5c157a7c635899.scope: Deactivated successfully.
Jan 31 08:25:26 compute-0 ceph-mon[74496]: pgmap v2753: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Jan 31 08:25:26 compute-0 podman[353172]: 2026-01-31 08:25:26.354974379 +0000 UTC m=+0.066042570 container create d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:25:26 compute-0 systemd[1]: Started libpod-conmon-d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07.scope.
Jan 31 08:25:26 compute-0 podman[353172]: 2026-01-31 08:25:26.323623201 +0000 UTC m=+0.034691482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:25:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea9ef39c7143e207cdcf2b33d84f8a4cd6b56d222658f6dfa8d5e61203a77b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea9ef39c7143e207cdcf2b33d84f8a4cd6b56d222658f6dfa8d5e61203a77b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea9ef39c7143e207cdcf2b33d84f8a4cd6b56d222658f6dfa8d5e61203a77b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ea9ef39c7143e207cdcf2b33d84f8a4cd6b56d222658f6dfa8d5e61203a77b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:26 compute-0 podman[353172]: 2026-01-31 08:25:26.458918688 +0000 UTC m=+0.169986929 container init d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:25:26 compute-0 podman[353172]: 2026-01-31 08:25:26.465995701 +0000 UTC m=+0.177063882 container start d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:25:26 compute-0 podman[353172]: 2026-01-31 08:25:26.468772119 +0000 UTC m=+0.179840360 container attach d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:25:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:27.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/18559313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:27 compute-0 brave_cannon[353188]: {
Jan 31 08:25:27 compute-0 brave_cannon[353188]:     "0": [
Jan 31 08:25:27 compute-0 brave_cannon[353188]:         {
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "devices": [
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "/dev/loop3"
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             ],
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "lv_name": "ceph_lv0",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "lv_size": "7511998464",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "name": "ceph_lv0",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "tags": {
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.cluster_name": "ceph",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.crush_device_class": "",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.encrypted": "0",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.osd_id": "0",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.type": "block",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:                 "ceph.vdo": "0"
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             },
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "type": "block",
Jan 31 08:25:27 compute-0 brave_cannon[353188]:             "vg_name": "ceph_vg0"
Jan 31 08:25:27 compute-0 brave_cannon[353188]:         }
Jan 31 08:25:27 compute-0 brave_cannon[353188]:     ]
Jan 31 08:25:27 compute-0 brave_cannon[353188]: }
Jan 31 08:25:27 compute-0 systemd[1]: libpod-d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07.scope: Deactivated successfully.
Jan 31 08:25:27 compute-0 podman[353172]: 2026-01-31 08:25:27.303119738 +0000 UTC m=+1.014187949 container died d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-19ea9ef39c7143e207cdcf2b33d84f8a4cd6b56d222658f6dfa8d5e61203a77b-merged.mount: Deactivated successfully.
Jan 31 08:25:27 compute-0 podman[353172]: 2026-01-31 08:25:27.358592259 +0000 UTC m=+1.069660490 container remove d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:27 compute-0 systemd[1]: libpod-conmon-d1779704f13d490bdb34e51a1368685f4b8b0f1dd89b9d8e02ed17fd108c5e07.scope: Deactivated successfully.
Jan 31 08:25:27 compute-0 sudo[353066]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:27 compute-0 sudo[353208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:27 compute-0 sudo[353208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:27 compute-0 sudo[353208]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:27 compute-0 sudo[353233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:27 compute-0 sudo[353233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:27 compute-0 sudo[353233]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:27 compute-0 sudo[353259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:27 compute-0 sudo[353259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:27 compute-0 sudo[353259]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:27 compute-0 sudo[353284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:25:27 compute-0 sudo[353284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:27.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:27 compute-0 nova_compute[247704]: 2026-01-31 08:25:27.836 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:28 compute-0 podman[353349]: 2026-01-31 08:25:28.023578915 +0000 UTC m=+0.045862976 container create 9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:25:28 compute-0 systemd[1]: Started libpod-conmon-9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022.scope.
Jan 31 08:25:28 compute-0 podman[353349]: 2026-01-31 08:25:28.00299072 +0000 UTC m=+0.025274821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:25:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 606 KiB/s wr, 16 op/s
Jan 31 08:25:28 compute-0 podman[353349]: 2026-01-31 08:25:28.127917303 +0000 UTC m=+0.150201434 container init 9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:28 compute-0 podman[353349]: 2026-01-31 08:25:28.136384221 +0000 UTC m=+0.158668282 container start 9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:25:28 compute-0 podman[353349]: 2026-01-31 08:25:28.141197679 +0000 UTC m=+0.163481760 container attach 9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:25:28 compute-0 funny_kalam[353365]: 167 167
Jan 31 08:25:28 compute-0 systemd[1]: libpod-9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022.scope: Deactivated successfully.
Jan 31 08:25:28 compute-0 podman[353349]: 2026-01-31 08:25:28.143427643 +0000 UTC m=+0.165711734 container died 9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7926ae69e049e6d290265751c5faea0c25ef91ca56d696c7f01897e04dbb091b-merged.mount: Deactivated successfully.
Jan 31 08:25:28 compute-0 podman[353349]: 2026-01-31 08:25:28.194202689 +0000 UTC m=+0.216486740 container remove 9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kalam, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:28 compute-0 systemd[1]: libpod-conmon-9c274475bacfd8a2a659cfbfda2923f4cde63c9ad47d1ffb13eeb13bbf338022.scope: Deactivated successfully.
Jan 31 08:25:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4210798319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:28 compute-0 ceph-mon[74496]: pgmap v2754: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 606 KiB/s wr, 16 op/s
Jan 31 08:25:28 compute-0 podman[353389]: 2026-01-31 08:25:28.347897527 +0000 UTC m=+0.043389945 container create 766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 08:25:28 compute-0 systemd[1]: Started libpod-conmon-766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4.scope.
Jan 31 08:25:28 compute-0 podman[353389]: 2026-01-31 08:25:28.32965594 +0000 UTC m=+0.025148398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:25:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5d541469dce3387dc9a2a5812cd6f4dacc9c4015ca9e2e2c1dd191415bcbee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5d541469dce3387dc9a2a5812cd6f4dacc9c4015ca9e2e2c1dd191415bcbee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5d541469dce3387dc9a2a5812cd6f4dacc9c4015ca9e2e2c1dd191415bcbee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5d541469dce3387dc9a2a5812cd6f4dacc9c4015ca9e2e2c1dd191415bcbee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:28 compute-0 podman[353389]: 2026-01-31 08:25:28.450536235 +0000 UTC m=+0.146028673 container init 766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:28 compute-0 podman[353389]: 2026-01-31 08:25:28.459176256 +0000 UTC m=+0.154668674 container start 766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:25:28 compute-0 podman[353389]: 2026-01-31 08:25:28.46258339 +0000 UTC m=+0.158075828 container attach 766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:25:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:25:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:29.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]: {
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]:         "osd_id": 0,
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]:         "type": "bluestore"
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]:     }
Jan 31 08:25:29 compute-0 unruffled_leakey[353405]: }
Jan 31 08:25:29 compute-0 systemd[1]: libpod-766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4.scope: Deactivated successfully.
Jan 31 08:25:29 compute-0 podman[353426]: 2026-01-31 08:25:29.343986833 +0000 UTC m=+0.034461246 container died 766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e5d541469dce3387dc9a2a5812cd6f4dacc9c4015ca9e2e2c1dd191415bcbee-merged.mount: Deactivated successfully.
Jan 31 08:25:29 compute-0 podman[353426]: 2026-01-31 08:25:29.417472814 +0000 UTC m=+0.107947157 container remove 766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:25:29 compute-0 systemd[1]: libpod-conmon-766a521fb6228d60f0b76cb31c3fb583c51883eeb76e67bb0d8c59e13a6f32e4.scope: Deactivated successfully.
Jan 31 08:25:29 compute-0 sudo[353284]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:25:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:25:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cd3502f1-b9d8-470c-b166-57c8986a8f70 does not exist
Jan 31 08:25:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 65a583a5-ed8c-4bc8-baf2-4a6d24019393 does not exist
Jan 31 08:25:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4b3ea01b-fb49-4cc5-93f0-47df9421cd14 does not exist
Jan 31 08:25:29 compute-0 sudo[353442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:29 compute-0 sudo[353442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:29 compute-0 sudo[353442]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:29 compute-0 sudo[353467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:25:29 compute-0 sudo[353467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:29 compute-0 sudo[353467]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:29.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s wr, 0 op/s
Jan 31 08:25:30 compute-0 sudo[353492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:30 compute-0 sudo[353492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:30 compute-0 sudo[353492]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:30 compute-0 sudo[353517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:30 compute-0 sudo[353517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:30 compute-0 sudo[353517]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:30 compute-0 nova_compute[247704]: 2026-01-31 08:25:30.424 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:25:30 compute-0 ceph-mon[74496]: pgmap v2755: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s wr, 0 op/s
Jan 31 08:25:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:25:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:31.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:25:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:31.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 22 KiB/s wr, 0 op/s
Jan 31 08:25:32 compute-0 ceph-mon[74496]: pgmap v2756: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 255 B/s rd, 22 KiB/s wr, 0 op/s
Jan 31 08:25:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:32 compute-0 nova_compute[247704]: 2026-01-31 08:25:32.838 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:33.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:33.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 16 KiB/s wr, 8 op/s
Jan 31 08:25:34 compute-0 ceph-mon[74496]: pgmap v2757: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 16 KiB/s wr, 8 op/s
Jan 31 08:25:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:35.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:35 compute-0 nova_compute[247704]: 2026-01-31 08:25:35.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010036437976921645 of space, bias 1.0, pg target 0.30109313930764936 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0043215571491717845 of space, bias 1.0, pg target 1.2964671447515352 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:25:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:25:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:35.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 55 op/s
Jan 31 08:25:36 compute-0 ceph-mon[74496]: pgmap v2758: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 55 op/s
Jan 31 08:25:36 compute-0 nova_compute[247704]: 2026-01-31 08:25:36.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:37.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:37 compute-0 nova_compute[247704]: 2026-01-31 08:25:37.842 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/567916548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 31 08:25:38 compute-0 ovn_controller[149457]: 2026-01-31T08:25:38Z|00692|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:25:38 compute-0 nova_compute[247704]: 2026-01-31 08:25:38.935 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:38 compute-0 NetworkManager[49108]: <info>  [1769847938.9366] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Jan 31 08:25:38 compute-0 NetworkManager[49108]: <info>  [1769847938.9380] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Jan 31 08:25:38 compute-0 ovn_controller[149457]: 2026-01-31T08:25:38Z|00693|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:25:38 compute-0 nova_compute[247704]: 2026-01-31 08:25:38.946 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:38 compute-0 nova_compute[247704]: 2026-01-31 08:25:38.950 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:39 compute-0 ceph-mon[74496]: pgmap v2759: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 31 08:25:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:39.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:39 compute-0 nova_compute[247704]: 2026-01-31 08:25:39.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:39 compute-0 nova_compute[247704]: 2026-01-31 08:25:39.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:39 compute-0 nova_compute[247704]: 2026-01-31 08:25:39.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:25:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:39.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 31 08:25:40 compute-0 ceph-mon[74496]: pgmap v2760: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 31 08:25:40 compute-0 nova_compute[247704]: 2026-01-31 08:25:40.428 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:41.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3804637156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:41 compute-0 nova_compute[247704]: 2026-01-31 08:25:41.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:41 compute-0 nova_compute[247704]: 2026-01-31 08:25:41.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:25:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:41.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:25:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 265 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 75 op/s
Jan 31 08:25:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2644805062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2327237780' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:42 compute-0 ceph-mon[74496]: pgmap v2761: 305 pgs: 305 active+clean; 265 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 718 KiB/s wr, 75 op/s
Jan 31 08:25:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:42 compute-0 nova_compute[247704]: 2026-01-31 08:25:42.845 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:43.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:43 compute-0 nova_compute[247704]: 2026-01-31 08:25:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:43 compute-0 nova_compute[247704]: 2026-01-31 08:25:43.741 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:43 compute-0 nova_compute[247704]: 2026-01-31 08:25:43.742 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:43 compute-0 nova_compute[247704]: 2026-01-31 08:25:43.742 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:43 compute-0 nova_compute[247704]: 2026-01-31 08:25:43.743 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:25:43 compute-0 nova_compute[247704]: 2026-01-31 08:25:43.743 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:43 compute-0 podman[353551]: 2026-01-31 08:25:43.946004648 +0000 UTC m=+0.113167416 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:25:44 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:25:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 269 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 747 KiB/s wr, 75 op/s
Jan 31 08:25:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:25:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109718097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:44 compute-0 nova_compute[247704]: 2026-01-31 08:25:44.183 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:44 compute-0 ceph-mon[74496]: pgmap v2762: 305 pgs: 305 active+clean; 269 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 747 KiB/s wr, 75 op/s
Jan 31 08:25:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2109718097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:25:44.739 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:25:44 compute-0 nova_compute[247704]: 2026-01-31 08:25:44.741 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:25:44.741 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:25:44 compute-0 nova_compute[247704]: 2026-01-31 08:25:44.845 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:25:44 compute-0 nova_compute[247704]: 2026-01-31 08:25:44.845 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.028 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.030 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4152MB free_disk=20.95916748046875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.030 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.030 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.125 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.125 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.126 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.185 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2405147875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2694520024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:45.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.430 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:25:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1072790435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.696 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.703 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:25:45 compute-0 nova_compute[247704]: 2026-01-31 08:25:45.765 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:25:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:25:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.008 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.009 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 152 op/s
Jan 31 08:25:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1072790435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/579878461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:46 compute-0 ceph-mon[74496]: pgmap v2763: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 152 op/s
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.637 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.638 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.696 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.843 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.844 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.850 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:25:46 compute-0 nova_compute[247704]: 2026-01-31 08:25:46.851 247708 INFO nova.compute.claims [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:25:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:47.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1443221660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:47 compute-0 nova_compute[247704]: 2026-01-31 08:25:47.441 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:47.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:47 compute-0 nova_compute[247704]: 2026-01-31 08:25:47.869 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:25:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949229944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.010 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.011 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.012 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.016 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.022 247708 DEBUG nova.compute.provider_tree [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:25:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 371 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 216 op/s
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.325 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.334 247708 DEBUG nova.scheduler.client.report [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.386 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.388 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:25:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1949229944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:48 compute-0 ceph-mon[74496]: pgmap v2764: 305 pgs: 305 active+clean; 371 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 216 op/s
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.570 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.570 247708 DEBUG nova.network.neutron [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.600 247708 INFO nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.653 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.773 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.773 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.774 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.774 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.817 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.819 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.819 247708 INFO nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Creating image(s)
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.846 247708 DEBUG nova.storage.rbd_utils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.874 247708 DEBUG nova.storage.rbd_utils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.902 247708 DEBUG nova.storage.rbd_utils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.907 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:48 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.940 247708 DEBUG nova.policy [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85dfa8546d9942648bb4197c8b1947e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbdbdee526499e90da7e971ede68d3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:48.999 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.000 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.001 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.001 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.030 247708 DEBUG nova.storage.rbd_utils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.035 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:49.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.336 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.440 247708 DEBUG nova.storage.rbd_utils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] resizing rbd image bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.571 247708 DEBUG nova.objects.instance [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'migration_context' on Instance uuid bbc5f09e-71d7-4009-bdf6-06e95b32574c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.629 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.630 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Ensure instance console log exists: /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.630 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.630 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:49 compute-0 nova_compute[247704]: 2026-01-31 08:25:49.631 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:49.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.3 MiB/s wr, 276 op/s
Jan 31 08:25:50 compute-0 sudo[353812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:50 compute-0 sudo[353812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:50 compute-0 sudo[353812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:50 compute-0 sudo[353837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:25:50 compute-0 sudo[353837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:50 compute-0 sudo[353837]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:50 compute-0 nova_compute[247704]: 2026-01-31 08:25:50.460 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:50 compute-0 ceph-mon[74496]: pgmap v2765: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.3 MiB/s wr, 276 op/s
Jan 31 08:25:50 compute-0 nova_compute[247704]: 2026-01-31 08:25:50.881 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:51.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3045328600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2226192119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:25:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:51.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 292 op/s
Jan 31 08:25:52 compute-0 nova_compute[247704]: 2026-01-31 08:25:52.249 247708 DEBUG nova.network.neutron [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Successfully created port: 85a0f13a-358c-4d17-a146-95c5b877e950 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:25:52 compute-0 nova_compute[247704]: 2026-01-31 08:25:52.254 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updating instance_info_cache with network_info: [{"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:25:52 compute-0 nova_compute[247704]: 2026-01-31 08:25:52.394 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:25:52 compute-0 nova_compute[247704]: 2026-01-31 08:25:52.395 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:25:52 compute-0 nova_compute[247704]: 2026-01-31 08:25:52.396 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:52 compute-0 nova_compute[247704]: 2026-01-31 08:25:52.397 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:52 compute-0 ceph-mon[74496]: pgmap v2766: 305 pgs: 305 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.7 MiB/s wr, 292 op/s
Jan 31 08:25:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.670464) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952670533, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1837, "num_deletes": 262, "total_data_size": 2926507, "memory_usage": 2977696, "flush_reason": "Manual Compaction"}
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952704721, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2866418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59049, "largest_seqno": 60885, "table_properties": {"data_size": 2858202, "index_size": 4901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18167, "raw_average_key_size": 20, "raw_value_size": 2841245, "raw_average_value_size": 3203, "num_data_blocks": 213, "num_entries": 887, "num_filter_entries": 887, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847804, "oldest_key_time": 1769847804, "file_creation_time": 1769847952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 34343 microseconds, and 7770 cpu microseconds.
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.704803) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2866418 bytes OK
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.704839) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.707824) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.707852) EVENT_LOG_v1 {"time_micros": 1769847952707844, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.707885) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2918726, prev total WAL file size 2918726, number of live WAL files 2.
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.708746) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323631' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2799KB)], [134(8979KB)]
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952708796, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12061930, "oldest_snapshot_seqno": -1}
Jan 31 08:25:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:25:52.744 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:52 compute-0 nova_compute[247704]: 2026-01-31 08:25:52.872 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8799 keys, 11909355 bytes, temperature: kUnknown
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952883433, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11909355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11851994, "index_size": 34272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 231651, "raw_average_key_size": 26, "raw_value_size": 11696652, "raw_average_value_size": 1329, "num_data_blocks": 1318, "num_entries": 8799, "num_filter_entries": 8799, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.883777) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11909355 bytes
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.886356) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 69.0 rd, 68.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 8.8 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(8.4) write-amplify(4.2) OK, records in: 9340, records dropped: 541 output_compression: NoCompression
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.886388) EVENT_LOG_v1 {"time_micros": 1769847952886374, "job": 82, "event": "compaction_finished", "compaction_time_micros": 174741, "compaction_time_cpu_micros": 43696, "output_level": 6, "num_output_files": 1, "total_output_size": 11909355, "num_input_records": 9340, "num_output_records": 8799, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952886932, "job": 82, "event": "table_file_deletion", "file_number": 136}
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847952888560, "job": 82, "event": "table_file_deletion", "file_number": 134}
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.708634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.888666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.888673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.888677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.888681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:25:52.888685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:25:53 compute-0 nova_compute[247704]: 2026-01-31 08:25:53.064 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:53.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:53.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3522562724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:25:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3522562724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:25:53 compute-0 nova_compute[247704]: 2026-01-31 08:25:53.941 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 292 op/s
Jan 31 08:25:54 compute-0 nova_compute[247704]: 2026-01-31 08:25:54.437 247708 DEBUG nova.network.neutron [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Successfully updated port: 85a0f13a-358c-4d17-a146-95c5b877e950 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:25:54 compute-0 nova_compute[247704]: 2026-01-31 08:25:54.475 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:25:54 compute-0 nova_compute[247704]: 2026-01-31 08:25:54.475 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:25:54 compute-0 nova_compute[247704]: 2026-01-31 08:25:54.475 247708 DEBUG nova.network.neutron [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:25:54 compute-0 nova_compute[247704]: 2026-01-31 08:25:54.686 247708 DEBUG nova.compute.manager [req-be7c536a-e5e9-4874-b8bf-dd447febcb8e req-b28022d7-40d0-4f2b-89ef-96dfdf4c790f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-changed-85a0f13a-358c-4d17-a146-95c5b877e950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:25:54 compute-0 nova_compute[247704]: 2026-01-31 08:25:54.687 247708 DEBUG nova.compute.manager [req-be7c536a-e5e9-4874-b8bf-dd447febcb8e req-b28022d7-40d0-4f2b-89ef-96dfdf4c790f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Refreshing instance network info cache due to event network-changed-85a0f13a-358c-4d17-a146-95c5b877e950. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:25:54 compute-0 nova_compute[247704]: 2026-01-31 08:25:54.687 247708 DEBUG oslo_concurrency.lockutils [req-be7c536a-e5e9-4874-b8bf-dd447febcb8e req-b28022d7-40d0-4f2b-89ef-96dfdf4c790f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:25:54 compute-0 nova_compute[247704]: 2026-01-31 08:25:54.834 247708 DEBUG nova.network.neutron [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:25:55 compute-0 ceph-mon[74496]: pgmap v2767: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 292 op/s
Jan 31 08:25:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:55.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:55 compute-0 nova_compute[247704]: 2026-01-31 08:25:55.463 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:55.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:55 compute-0 podman[353865]: 2026-01-31 08:25:55.888133741 +0000 UTC m=+0.064212006 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:25:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 292 op/s
Jan 31 08:25:56 compute-0 ceph-mon[74496]: pgmap v2768: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 292 op/s
Jan 31 08:25:56 compute-0 nova_compute[247704]: 2026-01-31 08:25:56.735 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.043 247708 DEBUG nova.network.neutron [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating instance_info_cache with network_info: [{"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.076 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.077 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Instance network_info: |[{"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.078 247708 DEBUG oslo_concurrency.lockutils [req-be7c536a-e5e9-4874-b8bf-dd447febcb8e req-b28022d7-40d0-4f2b-89ef-96dfdf4c790f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.079 247708 DEBUG nova.network.neutron [req-be7c536a-e5e9-4874-b8bf-dd447febcb8e req-b28022d7-40d0-4f2b-89ef-96dfdf4c790f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Refreshing network info cache for port 85a0f13a-358c-4d17-a146-95c5b877e950 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.084 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Start _get_guest_xml network_info=[{"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.091 247708 WARNING nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.099 247708 DEBUG nova.virt.libvirt.host [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.100 247708 DEBUG nova.virt.libvirt.host [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.104 247708 DEBUG nova.virt.libvirt.host [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.105 247708 DEBUG nova.virt.libvirt.host [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.107 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.108 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.109 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.109 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.109 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.110 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.110 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.111 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.112 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.113 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.113 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.114 247708 DEBUG nova.virt.hardware [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.119 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:25:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:57.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:25:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:25:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3704513804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.605 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.634 247708 DEBUG nova.storage.rbd_utils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.640 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3704513804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:25:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:57.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:57 compute-0 nova_compute[247704]: 2026-01-31 08:25:57.874 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:25:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656717281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.8 MiB/s wr, 214 op/s
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.143 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.145 247708 DEBUG nova.virt.libvirt.vif [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=160,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-a49r1jvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:25:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=bbc5f09e-71d7-4009-bdf6-06e95b32574c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.146 247708 DEBUG nova.network.os_vif_util [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.147 247708 DEBUG nova.network.os_vif_util [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:57:08,bridge_name='br-int',has_traffic_filtering=True,id=85a0f13a-358c-4d17-a146-95c5b877e950,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85a0f13a-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.149 247708 DEBUG nova.objects.instance [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid bbc5f09e-71d7-4009-bdf6-06e95b32574c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.168 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <uuid>bbc5f09e-71d7-4009-bdf6-06e95b32574c</uuid>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <name>instance-000000a0</name>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <nova:name>multiattach-server-1</nova:name>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:25:57</nova:creationTime>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <nova:user uuid="85dfa8546d9942648bb4197c8b1947e3">tempest-AttachVolumeMultiAttachTest-2017021026-project-member</nova:user>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <nova:project uuid="48bbdbdee526499e90da7e971ede68d3">tempest-AttachVolumeMultiAttachTest-2017021026</nova:project>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <nova:port uuid="85a0f13a-358c-4d17-a146-95c5b877e950">
Jan 31 08:25:58 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <system>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <entry name="serial">bbc5f09e-71d7-4009-bdf6-06e95b32574c</entry>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <entry name="uuid">bbc5f09e-71d7-4009-bdf6-06e95b32574c</entry>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </system>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <os>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   </os>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <features>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   </features>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk">
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       </source>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk.config">
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       </source>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:25:58 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:be:57:08"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <target dev="tap85a0f13a-35"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c/console.log" append="off"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <video>
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </video>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:25:58 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:25:58 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:25:58 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:25:58 compute-0 nova_compute[247704]: </domain>
Jan 31 08:25:58 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.170 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Preparing to wait for external event network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.170 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.170 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.170 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.171 247708 DEBUG nova.virt.libvirt.vif [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=160,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-a49r1jvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='
0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:25:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=bbc5f09e-71d7-4009-bdf6-06e95b32574c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.171 247708 DEBUG nova.network.os_vif_util [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.172 247708 DEBUG nova.network.os_vif_util [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:57:08,bridge_name='br-int',has_traffic_filtering=True,id=85a0f13a-358c-4d17-a146-95c5b877e950,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85a0f13a-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.172 247708 DEBUG os_vif [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:57:08,bridge_name='br-int',has_traffic_filtering=True,id=85a0f13a-358c-4d17-a146-95c5b877e950,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85a0f13a-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.173 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.173 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.174 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.178 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.179 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85a0f13a-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.179 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap85a0f13a-35, col_values=(('external_ids', {'iface-id': '85a0f13a-358c-4d17-a146-95c5b877e950', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:57:08', 'vm-uuid': 'bbc5f09e-71d7-4009-bdf6-06e95b32574c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:25:58 compute-0 NetworkManager[49108]: <info>  [1769847958.1819] manager: (tap85a0f13a-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.182 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.188 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.190 247708 INFO os_vif [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:57:08,bridge_name='br-int',has_traffic_filtering=True,id=85a0f13a-358c-4d17-a146-95c5b877e950,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85a0f13a-35')
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.333 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.333 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.333 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:be:57:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.334 247708 INFO nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Using config drive
Jan 31 08:25:58 compute-0 nova_compute[247704]: 2026-01-31 08:25:58.362 247708 DEBUG nova.storage.rbd_utils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.102 247708 INFO nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Creating config drive at /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c/disk.config
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.107 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpniwr_ols execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2656717281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.228 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpniwr_ols" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:59.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.548 247708 DEBUG nova.storage.rbd_utils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.553 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c/disk.config bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.803 247708 DEBUG oslo_concurrency.processutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c/disk.config bbc5f09e-71d7-4009-bdf6-06e95b32574c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.804 247708 INFO nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Deleting local config drive /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c/disk.config because it was imported into RBD.
Jan 31 08:25:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:25:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:25:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:25:59 compute-0 kernel: tap85a0f13a-35: entered promiscuous mode
Jan 31 08:25:59 compute-0 NetworkManager[49108]: <info>  [1769847959.8531] manager: (tap85a0f13a-35): new Tun device (/org/freedesktop/NetworkManager/Devices/304)
Jan 31 08:25:59 compute-0 systemd-udevd[354020]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:25:59 compute-0 ovn_controller[149457]: 2026-01-31T08:25:59Z|00694|binding|INFO|Claiming lport 85a0f13a-358c-4d17-a146-95c5b877e950 for this chassis.
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.895 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:59 compute-0 ovn_controller[149457]: 2026-01-31T08:25:59Z|00695|binding|INFO|85a0f13a-358c-4d17-a146-95c5b877e950: Claiming fa:16:3e:be:57:08 10.100.0.14
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.898 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:59 compute-0 ovn_controller[149457]: 2026-01-31T08:25:59Z|00696|binding|INFO|Setting lport 85a0f13a-358c-4d17-a146-95c5b877e950 ovn-installed in OVS
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.908 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:25:59 compute-0 NetworkManager[49108]: <info>  [1769847959.9113] device (tap85a0f13a-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:25:59 compute-0 NetworkManager[49108]: <info>  [1769847959.9121] device (tap85a0f13a-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:25:59 compute-0 systemd-machined[214448]: New machine qemu-74-instance-000000a0.
Jan 31 08:25:59 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-000000a0.
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.961 247708 DEBUG nova.network.neutron [req-be7c536a-e5e9-4874-b8bf-dd447febcb8e req-b28022d7-40d0-4f2b-89ef-96dfdf4c790f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updated VIF entry in instance network info cache for port 85a0f13a-358c-4d17-a146-95c5b877e950. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:25:59 compute-0 nova_compute[247704]: 2026-01-31 08:25:59.962 247708 DEBUG nova.network.neutron [req-be7c536a-e5e9-4874-b8bf-dd447febcb8e req-b28022d7-40d0-4f2b-89ef-96dfdf4c790f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating instance_info_cache with network_info: [{"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 154 op/s
Jan 31 08:26:00 compute-0 ovn_controller[149457]: 2026-01-31T08:26:00Z|00697|binding|INFO|Setting lport 85a0f13a-358c-4d17-a146-95c5b877e950 up in Southbound
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.153 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:57:08 10.100.0.14'], port_security=['fa:16:3e:be:57:08 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bbc5f09e-71d7-4009-bdf6-06e95b32574c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=85a0f13a-358c-4d17-a146-95c5b877e950) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.156 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 85a0f13a-358c-4d17-a146-95c5b877e950 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad bound to our chassis
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.159 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.177 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[30628b04-a138-46fc-954e-4af31a1d0739]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:00 compute-0 nova_compute[247704]: 2026-01-31 08:26:00.182 247708 DEBUG oslo_concurrency.lockutils [req-be7c536a-e5e9-4874-b8bf-dd447febcb8e req-b28022d7-40d0-4f2b-89ef-96dfdf4c790f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.211 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9981e2e3-7641-4f5b-a82e-58489a0449bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.217 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2e1722-f8b2-4228-9d9e-6465106e0c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:00 compute-0 ceph-mon[74496]: pgmap v2769: 305 pgs: 305 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.8 MiB/s wr, 214 op/s
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.249 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f0d403-9864-40a1-9db4-878d1d0b28e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.266 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[69f579fc-2d83-483f-b491-4a1fed00cb95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813771, 'reachable_time': 37081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354037, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.282 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e374542c-0ffc-440b-866c-936e810a9a20]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813783, 'tstamp': 813783}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354038, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813786, 'tstamp': 813786}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354038, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.284 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:00 compute-0 nova_compute[247704]: 2026-01-31 08:26:00.286 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.287 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.287 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.287 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:00.288 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:26:00 compute-0 nova_compute[247704]: 2026-01-31 08:26:00.464 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:00 compute-0 nova_compute[247704]: 2026-01-31 08:26:00.764 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847960.7639332, bbc5f09e-71d7-4009-bdf6-06e95b32574c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:00 compute-0 nova_compute[247704]: 2026-01-31 08:26:00.765 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] VM Started (Lifecycle Event)
Jan 31 08:26:01 compute-0 nova_compute[247704]: 2026-01-31 08:26:01.066 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:01 compute-0 nova_compute[247704]: 2026-01-31 08:26:01.071 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847960.7642481, bbc5f09e-71d7-4009-bdf6-06e95b32574c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:01 compute-0 nova_compute[247704]: 2026-01-31 08:26:01.071 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] VM Paused (Lifecycle Event)
Jan 31 08:26:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:26:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:01.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:26:01 compute-0 nova_compute[247704]: 2026-01-31 08:26:01.574 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:01 compute-0 ceph-mon[74496]: pgmap v2770: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 154 op/s
Jan 31 08:26:01 compute-0 nova_compute[247704]: 2026-01-31 08:26:01.580 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:26:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:01.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:01 compute-0 nova_compute[247704]: 2026-01-31 08:26:01.967 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:26:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 5.1 MiB/s wr, 116 op/s
Jan 31 08:26:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:02 compute-0 nova_compute[247704]: 2026-01-31 08:26:02.808 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:03 compute-0 nova_compute[247704]: 2026-01-31 08:26:03.182 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:03.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:03 compute-0 ceph-mon[74496]: pgmap v2771: 305 pgs: 305 active+clean; 467 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 5.1 MiB/s wr, 116 op/s
Jan 31 08:26:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 4.9 MiB/s wr, 122 op/s
Jan 31 08:26:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:05.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:05 compute-0 nova_compute[247704]: 2026-01-31 08:26:05.468 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:05 compute-0 ceph-mon[74496]: pgmap v2772: 305 pgs: 305 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 4.9 MiB/s wr, 122 op/s
Jan 31 08:26:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:05.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 602 KiB/s rd, 4.3 MiB/s wr, 153 op/s
Jan 31 08:26:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:07 compute-0 ceph-mon[74496]: pgmap v2773: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 602 KiB/s rd, 4.3 MiB/s wr, 153 op/s
Jan 31 08:26:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1500801380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2069233769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:07.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 4.4 MiB/s wr, 175 op/s
Jan 31 08:26:08 compute-0 nova_compute[247704]: 2026-01-31 08:26:08.185 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:26:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:09.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.364 247708 DEBUG nova.compute.manager [req-b533bcc2-aa48-4ffd-9957-f1bc26002206 req-768a02e2-4ac2-4ed9-b69a-47240e015e47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.365 247708 DEBUG oslo_concurrency.lockutils [req-b533bcc2-aa48-4ffd-9957-f1bc26002206 req-768a02e2-4ac2-4ed9-b69a-47240e015e47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.365 247708 DEBUG oslo_concurrency.lockutils [req-b533bcc2-aa48-4ffd-9957-f1bc26002206 req-768a02e2-4ac2-4ed9-b69a-47240e015e47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.366 247708 DEBUG oslo_concurrency.lockutils [req-b533bcc2-aa48-4ffd-9957-f1bc26002206 req-768a02e2-4ac2-4ed9-b69a-47240e015e47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.366 247708 DEBUG nova.compute.manager [req-b533bcc2-aa48-4ffd-9957-f1bc26002206 req-768a02e2-4ac2-4ed9-b69a-47240e015e47 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Processing event network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.368 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.373 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847969.3724964, bbc5f09e-71d7-4009-bdf6-06e95b32574c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.373 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] VM Resumed (Lifecycle Event)
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.376 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.381 247708 INFO nova.virt.libvirt.driver [-] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Instance spawned successfully.
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.381 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.438 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.447 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.451 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.452 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.452 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.453 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.454 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.455 247708 DEBUG nova.virt.libvirt.driver [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.540 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.638 247708 INFO nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Took 20.82 seconds to spawn the instance on the hypervisor.
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.639 247708 DEBUG nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2906156154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:09 compute-0 ceph-mon[74496]: pgmap v2774: 305 pgs: 305 active+clean; 417 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 4.4 MiB/s wr, 175 op/s
Jan 31 08:26:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:09.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.864 247708 INFO nova.compute.manager [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Took 23.08 seconds to build instance.
Jan 31 08:26:09 compute-0 nova_compute[247704]: 2026-01-31 08:26:09.978 247708 DEBUG oslo_concurrency.lockutils [None req-50d23f8f-0a9a-4d42-bc25-70053411617b 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 430 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.7 MiB/s wr, 205 op/s
Jan 31 08:26:10 compute-0 nova_compute[247704]: 2026-01-31 08:26:10.511 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:10 compute-0 sudo[354086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:10 compute-0 sudo[354086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:10 compute-0 sudo[354086]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:10 compute-0 sudo[354111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:10 compute-0 sudo[354111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:10 compute-0 sudo[354111]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2906156154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:11.197 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:11.197 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:11.198 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:11.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.363 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.364 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.492 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.700 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.701 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.711 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.712 247708 INFO nova.compute.claims [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:26:11 compute-0 ceph-mon[74496]: pgmap v2775: 305 pgs: 305 active+clean; 430 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.7 MiB/s wr, 205 op/s
Jan 31 08:26:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:11.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.962 247708 DEBUG nova.compute.manager [req-ebeb60e8-9a80-460f-abdf-1705b6fbfa53 req-27dad580-1f83-41ad-a852-a1b5b3bd16cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.963 247708 DEBUG oslo_concurrency.lockutils [req-ebeb60e8-9a80-460f-abdf-1705b6fbfa53 req-27dad580-1f83-41ad-a852-a1b5b3bd16cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.964 247708 DEBUG oslo_concurrency.lockutils [req-ebeb60e8-9a80-460f-abdf-1705b6fbfa53 req-27dad580-1f83-41ad-a852-a1b5b3bd16cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.964 247708 DEBUG oslo_concurrency.lockutils [req-ebeb60e8-9a80-460f-abdf-1705b6fbfa53 req-27dad580-1f83-41ad-a852-a1b5b3bd16cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.965 247708 DEBUG nova.compute.manager [req-ebeb60e8-9a80-460f-abdf-1705b6fbfa53 req-27dad580-1f83-41ad-a852-a1b5b3bd16cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] No waiting events found dispatching network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:26:11 compute-0 nova_compute[247704]: 2026-01-31 08:26:11.965 247708 WARNING nova.compute.manager [req-ebeb60e8-9a80-460f-abdf-1705b6fbfa53 req-27dad580-1f83-41ad-a852-a1b5b3bd16cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received unexpected event network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 for instance with vm_state active and task_state None.
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.105 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 463 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 220 op/s
Jan 31 08:26:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260443752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.608 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.615 247708 DEBUG nova.compute.provider_tree [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.682 247708 DEBUG nova.scheduler.client.report [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.794 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.796 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:26:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2260443752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.930 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.931 247708 DEBUG nova.network.neutron [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:26:12 compute-0 nova_compute[247704]: 2026-01-31 08:26:12.967 247708 INFO nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.022 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.188 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:13.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.308 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.309 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.310 247708 INFO nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Creating image(s)
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.341 247708 DEBUG nova.storage.rbd_utils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.381 247708 DEBUG nova.storage.rbd_utils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.425 247708 DEBUG nova.storage.rbd_utils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.430 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.498 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.499 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.500 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.501 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.527 247708 DEBUG nova.storage.rbd_utils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.533 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:13.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:13 compute-0 nova_compute[247704]: 2026-01-31 08:26:13.891 247708 DEBUG nova.policy [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48d684de9ba340f48e249b4cce857bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '418d5319c640455ab23850c0b0f24f92', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:26:13 compute-0 ceph-mon[74496]: pgmap v2776: 305 pgs: 305 active+clean; 463 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 220 op/s
Jan 31 08:26:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 470 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 226 op/s
Jan 31 08:26:14 compute-0 nova_compute[247704]: 2026-01-31 08:26:14.847 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:14 compute-0 nova_compute[247704]: 2026-01-31 08:26:14.992 247708 DEBUG nova.storage.rbd_utils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] resizing rbd image 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:26:15 compute-0 podman[354254]: 2026-01-31 08:26:15.00806277 +0000 UTC m=+0.177185487 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:26:15 compute-0 nova_compute[247704]: 2026-01-31 08:26:15.211 247708 DEBUG nova.objects.instance [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'migration_context' on Instance uuid 884d5d5d-6ad9-46a8-867a-b01ed20a527d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:15.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:15 compute-0 nova_compute[247704]: 2026-01-31 08:26:15.271 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:26:15 compute-0 nova_compute[247704]: 2026-01-31 08:26:15.272 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Ensure instance console log exists: /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:26:15 compute-0 nova_compute[247704]: 2026-01-31 08:26:15.273 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:15 compute-0 nova_compute[247704]: 2026-01-31 08:26:15.273 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:15 compute-0 nova_compute[247704]: 2026-01-31 08:26:15.274 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:15 compute-0 nova_compute[247704]: 2026-01-31 08:26:15.514 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:15.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:16 compute-0 ceph-mon[74496]: pgmap v2777: 305 pgs: 305 active+clean; 470 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 226 op/s
Jan 31 08:26:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.4 MiB/s wr, 277 op/s
Jan 31 08:26:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3581095025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:17.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:17.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Jan 31 08:26:18 compute-0 ceph-mon[74496]: pgmap v2778: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.4 MiB/s wr, 277 op/s
Jan 31 08:26:18 compute-0 nova_compute[247704]: 2026-01-31 08:26:18.192 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:19 compute-0 nova_compute[247704]: 2026-01-31 08:26:19.111 247708 DEBUG nova.network.neutron [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Successfully created port: 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:26:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:19.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:19 compute-0 ceph-mon[74496]: pgmap v2779: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 255 op/s
Jan 31 08:26:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:26:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:19.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:26:19 compute-0 nova_compute[247704]: 2026-01-31 08:26:19.876 247708 DEBUG nova.compute.manager [req-b19592bb-7430-4258-89bd-4697da87989e req-d1247600-6a05-4125-b5d2-8b4557360417 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-changed-85a0f13a-358c-4d17-a146-95c5b877e950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:19 compute-0 nova_compute[247704]: 2026-01-31 08:26:19.876 247708 DEBUG nova.compute.manager [req-b19592bb-7430-4258-89bd-4697da87989e req-d1247600-6a05-4125-b5d2-8b4557360417 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Refreshing instance network info cache due to event network-changed-85a0f13a-358c-4d17-a146-95c5b877e950. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:26:19 compute-0 nova_compute[247704]: 2026-01-31 08:26:19.877 247708 DEBUG oslo_concurrency.lockutils [req-b19592bb-7430-4258-89bd-4697da87989e req-d1247600-6a05-4125-b5d2-8b4557360417 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:19 compute-0 nova_compute[247704]: 2026-01-31 08:26:19.877 247708 DEBUG oslo_concurrency.lockutils [req-b19592bb-7430-4258-89bd-4697da87989e req-d1247600-6a05-4125-b5d2-8b4557360417 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:19 compute-0 nova_compute[247704]: 2026-01-31 08:26:19.878 247708 DEBUG nova.network.neutron [req-b19592bb-7430-4258-89bd-4697da87989e req-d1247600-6a05-4125-b5d2-8b4557360417 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Refreshing network info cache for port 85a0f13a-358c-4d17-a146-95c5b877e950 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.2 MiB/s wr, 233 op/s
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:26:20
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:26:20 compute-0 nova_compute[247704]: 2026-01-31 08:26:20.516 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:26:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:26:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:21.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:21 compute-0 ceph-mon[74496]: pgmap v2780: 305 pgs: 305 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.2 MiB/s wr, 233 op/s
Jan 31 08:26:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Jan 31 08:26:22 compute-0 ovn_controller[149457]: 2026-01-31T08:26:22Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:57:08 10.100.0.14
Jan 31 08:26:22 compute-0 ovn_controller[149457]: 2026-01-31T08:26:22Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:57:08 10.100.0.14
Jan 31 08:26:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:23 compute-0 nova_compute[247704]: 2026-01-31 08:26:23.197 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:23.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:23 compute-0 nova_compute[247704]: 2026-01-31 08:26:23.385 247708 DEBUG nova.network.neutron [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Successfully updated port: 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:26:23 compute-0 nova_compute[247704]: 2026-01-31 08:26:23.423 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:23 compute-0 nova_compute[247704]: 2026-01-31 08:26:23.424 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquired lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:23 compute-0 nova_compute[247704]: 2026-01-31 08:26:23.424 247708 DEBUG nova.network.neutron [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:26:23 compute-0 ceph-mon[74496]: pgmap v2781: 305 pgs: 305 active+clean; 497 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.3 MiB/s wr, 197 op/s
Jan 31 08:26:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3860694169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:23 compute-0 nova_compute[247704]: 2026-01-31 08:26:23.719 247708 DEBUG nova.compute.manager [req-27ab433c-b52e-43b0-b89c-8ae070de191e req-910b3eee-1528-4d80-b8ed-d0ca47387304 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received event network-changed-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:23 compute-0 nova_compute[247704]: 2026-01-31 08:26:23.719 247708 DEBUG nova.compute.manager [req-27ab433c-b52e-43b0-b89c-8ae070de191e req-910b3eee-1528-4d80-b8ed-d0ca47387304 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Refreshing instance network info cache due to event network-changed-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:26:23 compute-0 nova_compute[247704]: 2026-01-31 08:26:23.720 247708 DEBUG oslo_concurrency.lockutils [req-27ab433c-b52e-43b0-b89c-8ae070de191e req-910b3eee-1528-4d80-b8ed-d0ca47387304 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:23.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:24 compute-0 nova_compute[247704]: 2026-01-31 08:26:24.004 247708 DEBUG nova.network.neutron [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:26:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Jan 31 08:26:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3903398586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:24 compute-0 nova_compute[247704]: 2026-01-31 08:26:24.982 247708 DEBUG nova.network.neutron [req-b19592bb-7430-4258-89bd-4697da87989e req-d1247600-6a05-4125-b5d2-8b4557360417 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updated VIF entry in instance network info cache for port 85a0f13a-358c-4d17-a146-95c5b877e950. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:26:24 compute-0 nova_compute[247704]: 2026-01-31 08:26:24.983 247708 DEBUG nova.network.neutron [req-b19592bb-7430-4258-89bd-4697da87989e req-d1247600-6a05-4125-b5d2-8b4557360417 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating instance_info_cache with network_info: [{"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:25 compute-0 nova_compute[247704]: 2026-01-31 08:26:25.029 247708 DEBUG oslo_concurrency.lockutils [req-b19592bb-7430-4258-89bd-4697da87989e req-d1247600-6a05-4125-b5d2-8b4557360417 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:25.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:25 compute-0 nova_compute[247704]: 2026-01-31 08:26:25.518 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:25 compute-0 ceph-mon[74496]: pgmap v2782: 305 pgs: 305 active+clean; 500 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 138 op/s
Jan 31 08:26:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.8 MiB/s wr, 179 op/s
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.160 247708 DEBUG nova.network.neutron [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Updating instance_info_cache with network_info: [{"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.438 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Releasing lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.439 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Instance network_info: |[{"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.439 247708 DEBUG oslo_concurrency.lockutils [req-27ab433c-b52e-43b0-b89c-8ae070de191e req-910b3eee-1528-4d80-b8ed-d0ca47387304 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.440 247708 DEBUG nova.network.neutron [req-27ab433c-b52e-43b0-b89c-8ae070de191e req-910b3eee-1528-4d80-b8ed-d0ca47387304 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Refreshing network info cache for port 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.443 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Start _get_guest_xml network_info=[{"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.449 247708 WARNING nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.462 247708 DEBUG nova.virt.libvirt.host [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.463 247708 DEBUG nova.virt.libvirt.host [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.466 247708 DEBUG nova.virt.libvirt.host [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.466 247708 DEBUG nova.virt.libvirt.host [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.468 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.468 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.469 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.469 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.469 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.469 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.470 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.470 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.470 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.471 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.471 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.471 247708 DEBUG nova.virt.hardware [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.475 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4235895941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:26 compute-0 podman[354380]: 2026-01-31 08:26:26.917479649 +0000 UTC m=+0.088862371 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:26:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:26:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1778908129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:26 compute-0 nova_compute[247704]: 2026-01-31 08:26:26.988 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.023 247708 DEBUG nova.storage.rbd_utils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.029 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:27.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:26:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3912877065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.547 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.551 247708 DEBUG nova.virt.libvirt.vif [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:26:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-435720527',display_name='tempest-AttachVolumeNegativeTest-server-435720527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-435720527',id=162,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGTasBb7yA6yl4NsN6VKzciwzxddV3xWuEa+pvWeaIqTayVegS57LL03uNYCNkNWMeetdRe0i56n50ShiYxicVG4xkWY3c9k2KD8xxZZvG+1vcbn+ZilpCG+NeW+N1kD9A==',key_name='tempest-keypair-1125943210',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-h95jhdb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:26:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=884d5d5d-6ad9-46a8-867a-b01ed20a527d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.552 247708 DEBUG nova.network.os_vif_util [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.554 247708 DEBUG nova.network.os_vif_util [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:30:cf,bridge_name='br-int',has_traffic_filtering=True,id=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36dc53a0-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.557 247708 DEBUG nova.objects.instance [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'pci_devices' on Instance uuid 884d5d5d-6ad9-46a8-867a-b01ed20a527d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.609 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <uuid>884d5d5d-6ad9-46a8-867a-b01ed20a527d</uuid>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <name>instance-000000a2</name>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <nova:name>tempest-AttachVolumeNegativeTest-server-435720527</nova:name>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:26:26</nova:creationTime>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <nova:user uuid="48d684de9ba340f48e249b4cce857bfa">tempest-AttachVolumeNegativeTest-562353674-project-member</nova:user>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <nova:project uuid="418d5319c640455ab23850c0b0f24f92">tempest-AttachVolumeNegativeTest-562353674</nova:project>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <nova:port uuid="36dc53a0-e7ae-426b-a9bb-da7abd1ebb84">
Jan 31 08:26:27 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <system>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <entry name="serial">884d5d5d-6ad9-46a8-867a-b01ed20a527d</entry>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <entry name="uuid">884d5d5d-6ad9-46a8-867a-b01ed20a527d</entry>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </system>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <os>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   </os>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <features>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   </features>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk">
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       </source>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk.config">
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       </source>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:26:27 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:a6:30:cf"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <target dev="tap36dc53a0-e7"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d/console.log" append="off"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <video>
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </video>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:26:27 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:26:27 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:26:27 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:26:27 compute-0 nova_compute[247704]: </domain>
Jan 31 08:26:27 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.611 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Preparing to wait for external event network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.611 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.611 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.611 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.612 247708 DEBUG nova.virt.libvirt.vif [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:26:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-435720527',display_name='tempest-AttachVolumeNegativeTest-server-435720527',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-435720527',id=162,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGTasBb7yA6yl4NsN6VKzciwzxddV3xWuEa+pvWeaIqTayVegS57LL03uNYCNkNWMeetdRe0i56n50ShiYxicVG4xkWY3c9k2KD8xxZZvG+1vcbn+ZilpCG+NeW+N1kD9A==',key_name='tempest-keypair-1125943210',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-h95jhdb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:26:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=884d5d5d-6ad9-46a8-867a-b01ed20a527d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.612 247708 DEBUG nova.network.os_vif_util [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.613 247708 DEBUG nova.network.os_vif_util [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:30:cf,bridge_name='br-int',has_traffic_filtering=True,id=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36dc53a0-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.614 247708 DEBUG os_vif [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:30:cf,bridge_name='br-int',has_traffic_filtering=True,id=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36dc53a0-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.615 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.615 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.619 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.619 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36dc53a0-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.620 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36dc53a0-e7, col_values=(('external_ids', {'iface-id': '36dc53a0-e7ae-426b-a9bb-da7abd1ebb84', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:30:cf', 'vm-uuid': '884d5d5d-6ad9-46a8-867a-b01ed20a527d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:27 compute-0 NetworkManager[49108]: <info>  [1769847987.6226] manager: (tap36dc53a0-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.621 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.625 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.629 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.631 247708 INFO os_vif [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:30:cf,bridge_name='br-int',has_traffic_filtering=True,id=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36dc53a0-e7')
Jan 31 08:26:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.805 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.806 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.807 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No VIF found with MAC fa:16:3e:a6:30:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.808 247708 INFO nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Using config drive
Jan 31 08:26:27 compute-0 nova_compute[247704]: 2026-01-31 08:26:27.843 247708 DEBUG nova.storage.rbd_utils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:27.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:27 compute-0 ceph-mon[74496]: pgmap v2783: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.8 MiB/s wr, 179 op/s
Jan 31 08:26:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1778908129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3912877065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 809 KiB/s rd, 3.2 MiB/s wr, 126 op/s
Jan 31 08:26:28 compute-0 nova_compute[247704]: 2026-01-31 08:26:28.946 247708 INFO nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Creating config drive at /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d/disk.config
Jan 31 08:26:28 compute-0 nova_compute[247704]: 2026-01-31 08:26:28.972 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpu6_us17u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.111 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpu6_us17u" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.148 247708 DEBUG nova.storage.rbd_utils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] rbd image 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.156 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d/disk.config 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:29.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:29 compute-0 ceph-mon[74496]: pgmap v2784: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 809 KiB/s rd, 3.2 MiB/s wr, 126 op/s
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.363 247708 DEBUG oslo_concurrency.processutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d/disk.config 884d5d5d-6ad9-46a8-867a-b01ed20a527d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.364 247708 INFO nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Deleting local config drive /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d/disk.config because it was imported into RBD.
Jan 31 08:26:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:26:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 172K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.77 writes per sync, written: 0.16 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7749 writes, 26K keys, 7749 commit groups, 1.0 writes per commit group, ingest: 27.16 MB, 0.05 MB/s
                                           Interval WAL: 7749 writes, 3103 syncs, 2.50 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:26:29 compute-0 kernel: tap36dc53a0-e7: entered promiscuous mode
Jan 31 08:26:29 compute-0 NetworkManager[49108]: <info>  [1769847989.4215] manager: (tap36dc53a0-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.420 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:29 compute-0 ovn_controller[149457]: 2026-01-31T08:26:29Z|00698|binding|INFO|Claiming lport 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 for this chassis.
Jan 31 08:26:29 compute-0 ovn_controller[149457]: 2026-01-31T08:26:29Z|00699|binding|INFO|36dc53a0-e7ae-426b-a9bb-da7abd1ebb84: Claiming fa:16:3e:a6:30:cf 10.100.0.14
Jan 31 08:26:29 compute-0 ovn_controller[149457]: 2026-01-31T08:26:29Z|00700|binding|INFO|Setting lport 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 ovn-installed in OVS
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.431 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.436 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:29 compute-0 systemd-udevd[354514]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:26:29 compute-0 NetworkManager[49108]: <info>  [1769847989.4621] device (tap36dc53a0-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:26:29 compute-0 NetworkManager[49108]: <info>  [1769847989.4631] device (tap36dc53a0-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:26:29 compute-0 systemd-machined[214448]: New machine qemu-75-instance-000000a2.
Jan 31 08:26:29 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-000000a2.
Jan 31 08:26:29 compute-0 ovn_controller[149457]: 2026-01-31T08:26:29Z|00701|binding|INFO|Setting lport 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 up in Southbound
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.622 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:30:cf 10.100.0.14'], port_security=['fa:16:3e:a6:30:cf 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '884d5d5d-6ad9-46a8-867a-b01ed20a527d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e26a2af1-a850-4885-977e-596b6be13fb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '418d5319c640455ab23850c0b0f24f92', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82b0b280-5a6f-4a3b-a288-461c595a8d30', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068708bd-dd36-4d03-9d65-912eb9981ecc, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.624 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 in datapath e26a2af1-a850-4885-977e-596b6be13fb8 bound to our chassis
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.628 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.642 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[22c15dce-263d-45c9-8f77-02734aa01924]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.643 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape26a2af1-a1 in ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.646 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape26a2af1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.647 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[847b39c4-c62e-4571-9780-05a575f6d690]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.647 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[38909f2d-fb5a-4b58-95c6-375215de8145]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.661 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[70692906-275e-40e8-8052-bf4a29c7ac7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.679 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[11b8c6de-8acb-47d8-af07-5e1d0e3f1a74]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.707 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[baac1f88-a63c-40a0-a95a-150b3553083e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 NetworkManager[49108]: <info>  [1769847989.7175] manager: (tape26a2af1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/307)
Jan 31 08:26:29 compute-0 systemd-udevd[354516]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.717 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ab21b102-7313-4a53-a214-8584dbf57113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.753 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0b99501b-0bba-44ab-bd38-ed874126d9a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.757 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f35701-ec95-4ea6-8a70-c4b154cf5170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 NetworkManager[49108]: <info>  [1769847989.7852] device (tape26a2af1-a0): carrier: link connected
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.790 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c2413edc-7893-41f3-8762-2c571f5d21dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.807 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea86134-0395-4b9b-bfe7-95e5f0e71a7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape26a2af1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:7d:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823240, 'reachable_time': 19612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354551, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.822 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d799af-fb27-4bb5-8ead-f8768d6c5635]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9c:7dba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 823240, 'tstamp': 823240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354552, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.839 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e967a08b-8dd1-4df7-80f7-9101f6dc6c84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape26a2af1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9c:7d:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823240, 'reachable_time': 19612, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354553, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.857 247708 DEBUG nova.network.neutron [req-27ab433c-b52e-43b0-b89c-8ae070de191e req-910b3eee-1528-4d80-b8ed-d0ca47387304 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Updated VIF entry in instance network info cache for port 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.857 247708 DEBUG nova.network.neutron [req-27ab433c-b52e-43b0-b89c-8ae070de191e req-910b3eee-1528-4d80-b8ed-d0ca47387304 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Updating instance_info_cache with network_info: [{"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.870 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b3bfed-7d01-45c5-a89d-ca9f24060b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.912 247708 DEBUG oslo_concurrency.lockutils [req-27ab433c-b52e-43b0-b89c-8ae070de191e req-910b3eee-1528-4d80-b8ed-d0ca47387304 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.937 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[951f1edb-ceda-4ea6-b96c-ca0c2ba753c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.939 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape26a2af1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.939 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.939 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape26a2af1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.941 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:29 compute-0 NetworkManager[49108]: <info>  [1769847989.9424] manager: (tape26a2af1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Jan 31 08:26:29 compute-0 kernel: tape26a2af1-a0: entered promiscuous mode
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.945 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.946 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape26a2af1-a0, col_values=(('external_ids', {'iface-id': '003d1f0e-744f-4244-8c9f-3a9be6033652'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.947 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:29 compute-0 ovn_controller[149457]: 2026-01-31T08:26:29Z|00702|binding|INFO|Releasing lport 003d1f0e-744f-4244-8c9f-3a9be6033652 from this chassis (sb_readonly=0)
Jan 31 08:26:29 compute-0 nova_compute[247704]: 2026-01-31 08:26:29.953 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.953 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.954 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0a89499d-8620-4b4e-8dc7-7c9f02d0eafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.955 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/e26a2af1-a850-4885-977e-596b6be13fb8.pid.haproxy
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID e26a2af1-a850-4885-977e-596b6be13fb8
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:26:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:26:29.955 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'env', 'PROCESS_TAG=haproxy-e26a2af1-a850-4885-977e-596b6be13fb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e26a2af1-a850-4885-977e-596b6be13fb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:26:30 compute-0 sudo[354560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-0 sudo[354560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 sudo[354560]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-0 sudo[354613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:30 compute-0 sudo[354613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 sudo[354613]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Jan 31 08:26:30 compute-0 sudo[354653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-0 sudo[354653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 sudo[354653]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.189 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847990.1888075, 884d5d5d-6ad9-46a8-867a-b01ed20a527d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.190 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] VM Started (Lifecycle Event)
Jan 31 08:26:30 compute-0 sudo[354680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:26:30 compute-0 sudo[354680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.282 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.290 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847990.1900482, 884d5d5d-6ad9-46a8-867a-b01ed20a527d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.290 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] VM Paused (Lifecycle Event)
Jan 31 08:26:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3639916361' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.359 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.364 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.397 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:26:30 compute-0 podman[354727]: 2026-01-31 08:26:30.335822159 +0000 UTC m=+0.025597748 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.453 247708 DEBUG nova.compute.manager [req-b0ed98e4-23a5-488b-bb3b-da952a434304 req-121216d5-283e-415f-90f7-f6b32b6b926f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received event network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.455 247708 DEBUG oslo_concurrency.lockutils [req-b0ed98e4-23a5-488b-bb3b-da952a434304 req-121216d5-283e-415f-90f7-f6b32b6b926f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.455 247708 DEBUG oslo_concurrency.lockutils [req-b0ed98e4-23a5-488b-bb3b-da952a434304 req-121216d5-283e-415f-90f7-f6b32b6b926f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.455 247708 DEBUG oslo_concurrency.lockutils [req-b0ed98e4-23a5-488b-bb3b-da952a434304 req-121216d5-283e-415f-90f7-f6b32b6b926f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.455 247708 DEBUG nova.compute.manager [req-b0ed98e4-23a5-488b-bb3b-da952a434304 req-121216d5-283e-415f-90f7-f6b32b6b926f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Processing event network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.456 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.459 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769847990.4595091, 884d5d5d-6ad9-46a8-867a-b01ed20a527d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.460 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] VM Resumed (Lifecycle Event)
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.463 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.466 247708 INFO nova.virt.libvirt.driver [-] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Instance spawned successfully.
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.467 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:26:30 compute-0 podman[354727]: 2026-01-31 08:26:30.509951259 +0000 UTC m=+0.199726818 container create 01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.521 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.532 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.538 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.539 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.539 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.540 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.540 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.541 247708 DEBUG nova.virt.libvirt.driver [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.548 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:26:30 compute-0 systemd[1]: Started libpod-conmon-01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0.scope.
Jan 31 08:26:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4828ed6d745975bd77c893242461ce89b76fd93d70cecc8394f35ca71ea95c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:30 compute-0 podman[354727]: 2026-01-31 08:26:30.613134539 +0000 UTC m=+0.302910108 container init 01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:26:30 compute-0 podman[354727]: 2026-01-31 08:26:30.618350017 +0000 UTC m=+0.308125576 container start 01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.626 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:26:30 compute-0 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[354758]: [NOTICE]   (354764) : New worker (354766) forked
Jan 31 08:26:30 compute-0 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[354758]: [NOTICE]   (354764) : Loading success.
Jan 31 08:26:30 compute-0 sudo[354680]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-0 sudo[354787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-0 sudo[354787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 sudo[354787]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:26:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.770 247708 INFO nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Took 17.46 seconds to spawn the instance on the hypervisor.
Jan 31 08:26:30 compute-0 nova_compute[247704]: 2026-01-31 08:26:30.770 247708 DEBUG nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:26:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:26:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:26:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:26:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d77a9947-cf01-42fa-9542-6f72ea2dbfd5 does not exist
Jan 31 08:26:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4bf5415f-351a-44dc-bef3-5ca01b54eec6 does not exist
Jan 31 08:26:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 673b016f-0279-4a74-b771-85dc10d366ab does not exist
Jan 31 08:26:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:26:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:26:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:26:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:26:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:26:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:26:30 compute-0 sudo[354812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-0 sudo[354812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 sudo[354812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-0 sudo[354835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-0 sudo[354835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 sudo[354835]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-0 sudo[354862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:30 compute-0 sudo[354862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 sudo[354862]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:30 compute-0 sudo[354887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:30 compute-0 sudo[354887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:30 compute-0 sudo[354887]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:31 compute-0 sudo[354912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:26:31 compute-0 sudo[354912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:31 compute-0 nova_compute[247704]: 2026-01-31 08:26:31.041 247708 INFO nova.compute.manager [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Took 19.38 seconds to build instance.
Jan 31 08:26:31 compute-0 nova_compute[247704]: 2026-01-31 08:26:31.179 247708 DEBUG oslo_concurrency.lockutils [None req-6a34225f-3aea-4980-b438-e64ba9c2ac13 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:31.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:31 compute-0 podman[354979]: 2026-01-31 08:26:31.391712711 +0000 UTC m=+0.053113414 container create 3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 08:26:31 compute-0 podman[354979]: 2026-01-31 08:26:31.361178792 +0000 UTC m=+0.022579525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:26:31 compute-0 systemd[1]: Started libpod-conmon-3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7.scope.
Jan 31 08:26:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:31 compute-0 podman[354979]: 2026-01-31 08:26:31.513818065 +0000 UTC m=+0.175218798 container init 3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:26:31 compute-0 ceph-mon[74496]: pgmap v2785: 305 pgs: 305 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Jan 31 08:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:26:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:26:31 compute-0 podman[354979]: 2026-01-31 08:26:31.522493238 +0000 UTC m=+0.183893941 container start 3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:26:31 compute-0 systemd[1]: libpod-3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7.scope: Deactivated successfully.
Jan 31 08:26:31 compute-0 conmon[354996]: conmon 3701a455745e026ad7b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7.scope/container/memory.events
Jan 31 08:26:31 compute-0 zealous_mestorf[354996]: 167 167
Jan 31 08:26:31 compute-0 podman[354979]: 2026-01-31 08:26:31.542007317 +0000 UTC m=+0.203408020 container attach 3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mestorf, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:26:31 compute-0 podman[354979]: 2026-01-31 08:26:31.543607486 +0000 UTC m=+0.205008209 container died 3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.552370) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991552624, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 612, "num_deletes": 251, "total_data_size": 653127, "memory_usage": 664248, "flush_reason": "Manual Compaction"}
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991560541, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 644796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60886, "largest_seqno": 61497, "table_properties": {"data_size": 641646, "index_size": 1057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7682, "raw_average_key_size": 19, "raw_value_size": 635209, "raw_average_value_size": 1596, "num_data_blocks": 47, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847953, "oldest_key_time": 1769847953, "file_creation_time": 1769847991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 8252 microseconds, and 2257 cpu microseconds.
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.560614) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 644796 bytes OK
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.560652) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.566180) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.566245) EVENT_LOG_v1 {"time_micros": 1769847991566233, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.566273) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 649864, prev total WAL file size 649864, number of live WAL files 2.
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.567252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(629KB)], [137(11MB)]
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991567318, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12554151, "oldest_snapshot_seqno": -1}
Jan 31 08:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-06a25f0b8baeffc41d9710bb9305f62dc04a1c403a3171c9a7de4c2032a529b4-merged.mount: Deactivated successfully.
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8685 keys, 10651772 bytes, temperature: kUnknown
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991780584, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 10651772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10596369, "index_size": 32570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 230003, "raw_average_key_size": 26, "raw_value_size": 10444394, "raw_average_value_size": 1202, "num_data_blocks": 1238, "num_entries": 8685, "num_filter_entries": 8685, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769847991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:26:31 compute-0 podman[354979]: 2026-01-31 08:26:31.781694454 +0000 UTC m=+0.443095147 container remove 3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.780981) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10651772 bytes
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.787672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.8 rd, 49.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.4 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(36.0) write-amplify(16.5) OK, records in: 9197, records dropped: 512 output_compression: NoCompression
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.787739) EVENT_LOG_v1 {"time_micros": 1769847991787714, "job": 84, "event": "compaction_finished", "compaction_time_micros": 213397, "compaction_time_cpu_micros": 21700, "output_level": 6, "num_output_files": 1, "total_output_size": 10651772, "num_input_records": 9197, "num_output_records": 8685, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991791482, "job": 84, "event": "table_file_deletion", "file_number": 139}
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847991792785, "job": 84, "event": "table_file_deletion", "file_number": 137}
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.567155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.792888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.792896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.792897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.792899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:26:31.792901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:31 compute-0 systemd[1]: libpod-conmon-3701a455745e026ad7b06279d39fa703c6106fddac1a06641e3a3a69a05d10a7.scope: Deactivated successfully.
Jan 31 08:26:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:31.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:31 compute-0 podman[355021]: 2026-01-31 08:26:31.978289534 +0000 UTC m=+0.052098388 container create c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:32 compute-0 systemd[1]: Started libpod-conmon-c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0.scope.
Jan 31 08:26:32 compute-0 podman[355021]: 2026-01-31 08:26:31.951972539 +0000 UTC m=+0.025781413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:26:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae51f5424f757c9d3078a19917a1e7f7bdad1b9fdc285a842c9cfc0446478680/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae51f5424f757c9d3078a19917a1e7f7bdad1b9fdc285a842c9cfc0446478680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae51f5424f757c9d3078a19917a1e7f7bdad1b9fdc285a842c9cfc0446478680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae51f5424f757c9d3078a19917a1e7f7bdad1b9fdc285a842c9cfc0446478680/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae51f5424f757c9d3078a19917a1e7f7bdad1b9fdc285a842c9cfc0446478680/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:32 compute-0 podman[355021]: 2026-01-31 08:26:32.09313844 +0000 UTC m=+0.166947294 container init c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rubin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:26:32 compute-0 podman[355021]: 2026-01-31 08:26:32.104279564 +0000 UTC m=+0.178088418 container start c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rubin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:26:32 compute-0 podman[355021]: 2026-01-31 08:26:32.123433503 +0000 UTC m=+0.197242407 container attach c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.135 247708 DEBUG oslo_concurrency.lockutils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.140 247708 DEBUG oslo_concurrency.lockutils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.282 247708 DEBUG nova.objects.instance [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid bbc5f09e-71d7-4009-bdf6-06e95b32574c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.489 247708 DEBUG oslo_concurrency.lockutils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.623 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.872 247708 DEBUG oslo_concurrency.lockutils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.873 247708 DEBUG oslo_concurrency.lockutils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.874 247708 INFO nova.compute.manager [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Attaching volume 4e49c5a3-4191-4757-9b9f-0465fe16a7f1 to /dev/vdb
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.882 247708 DEBUG nova.compute.manager [req-e47ab180-8eda-49f9-b0be-253cac2c14ac req-42e531f9-ba6c-466e-a829-62fab8f39978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received event network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.882 247708 DEBUG oslo_concurrency.lockutils [req-e47ab180-8eda-49f9-b0be-253cac2c14ac req-42e531f9-ba6c-466e-a829-62fab8f39978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.883 247708 DEBUG oslo_concurrency.lockutils [req-e47ab180-8eda-49f9-b0be-253cac2c14ac req-42e531f9-ba6c-466e-a829-62fab8f39978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.883 247708 DEBUG oslo_concurrency.lockutils [req-e47ab180-8eda-49f9-b0be-253cac2c14ac req-42e531f9-ba6c-466e-a829-62fab8f39978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.883 247708 DEBUG nova.compute.manager [req-e47ab180-8eda-49f9-b0be-253cac2c14ac req-42e531f9-ba6c-466e-a829-62fab8f39978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] No waiting events found dispatching network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:26:32 compute-0 nova_compute[247704]: 2026-01-31 08:26:32.884 247708 WARNING nova.compute.manager [req-e47ab180-8eda-49f9-b0be-253cac2c14ac req-42e531f9-ba6c-466e-a829-62fab8f39978 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received unexpected event network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 for instance with vm_state active and task_state None.
Jan 31 08:26:33 compute-0 flamboyant_rubin[355038]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:26:33 compute-0 flamboyant_rubin[355038]: --> relative data size: 1.0
Jan 31 08:26:33 compute-0 flamboyant_rubin[355038]: --> All data devices are unavailable
Jan 31 08:26:33 compute-0 systemd[1]: libpod-c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0.scope: Deactivated successfully.
Jan 31 08:26:33 compute-0 conmon[355038]: conmon c40bb0b846b78872f5a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0.scope/container/memory.events
Jan 31 08:26:33 compute-0 podman[355021]: 2026-01-31 08:26:33.094532155 +0000 UTC m=+1.168341019 container died c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rubin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae51f5424f757c9d3078a19917a1e7f7bdad1b9fdc285a842c9cfc0446478680-merged.mount: Deactivated successfully.
Jan 31 08:26:33 compute-0 podman[355021]: 2026-01-31 08:26:33.166286695 +0000 UTC m=+1.240095549 container remove c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_rubin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 08:26:33 compute-0 sudo[354912]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:33 compute-0 systemd[1]: libpod-conmon-c40bb0b846b78872f5a8c3bd6c18ad270b9d50ba6b6087659c8674f49643eeb0.scope: Deactivated successfully.
Jan 31 08:26:33 compute-0 sudo[355066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:33 compute-0 sudo[355066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:33 compute-0 sudo[355066]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:26:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:33.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:26:33 compute-0 sudo[355091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:33 compute-0 sudo[355091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:33 compute-0 sudo[355091]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.349 247708 DEBUG os_brick.utils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.354 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.373 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.374 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[e485b0e9-3e5a-46e2-90c3-68bcb6511147]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.376 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.385 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.386 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[aff8bd1b-bf18-44b2-96a1-7f849aec8a62]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:33 compute-0 sudo[355116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.387 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:33 compute-0 sudo[355116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:33 compute-0 sudo[355116]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.396 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.396 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[41fa7fcb-4c1d-4029-b788-c6a483e20c89]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.397 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[be417c94-3ae5-4e50-aab6-8e17975e0816]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.398 247708 DEBUG oslo_concurrency.processutils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.431 247708 DEBUG oslo_concurrency.processutils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.433 247708 DEBUG os_brick.initiator.connectors.lightos [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.433 247708 DEBUG os_brick.initiator.connectors.lightos [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.433 247708 DEBUG os_brick.initiator.connectors.lightos [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.434 247708 DEBUG os_brick.utils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:26:33 compute-0 nova_compute[247704]: 2026-01-31 08:26:33.434 247708 DEBUG nova.virt.block_device [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating existing volume attachment record: 7d1a6c4a-de83-4ac6-a420-a85b4d557c1a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:26:33 compute-0 sudo[355147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:26:33 compute-0 sudo[355147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:33 compute-0 ceph-mon[74496]: pgmap v2786: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:26:33 compute-0 podman[355214]: 2026-01-31 08:26:33.767401845 +0000 UTC m=+0.047786503 container create 24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 31 08:26:33 compute-0 systemd[1]: Started libpod-conmon-24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607.scope.
Jan 31 08:26:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:33 compute-0 podman[355214]: 2026-01-31 08:26:33.749141647 +0000 UTC m=+0.029526325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:26:33 compute-0 podman[355214]: 2026-01-31 08:26:33.851379794 +0000 UTC m=+0.131764492 container init 24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:26:33 compute-0 podman[355214]: 2026-01-31 08:26:33.858430177 +0000 UTC m=+0.138814875 container start 24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:33 compute-0 podman[355214]: 2026-01-31 08:26:33.862932147 +0000 UTC m=+0.143316815 container attach 24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:26:33 compute-0 laughing_heisenberg[355230]: 167 167
Jan 31 08:26:33 compute-0 systemd[1]: libpod-24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607.scope: Deactivated successfully.
Jan 31 08:26:33 compute-0 conmon[355230]: conmon 24f274288329ddcf48aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607.scope/container/memory.events
Jan 31 08:26:33 compute-0 podman[355214]: 2026-01-31 08:26:33.869749595 +0000 UTC m=+0.150134263 container died 24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:26:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b16a2c74a56a65f2fe6f2673c9e60edee28540bf261689d081cbf9ac1dcde3c9-merged.mount: Deactivated successfully.
Jan 31 08:26:33 compute-0 podman[355214]: 2026-01-31 08:26:33.915609359 +0000 UTC m=+0.195994017 container remove 24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_heisenberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:26:33 compute-0 systemd[1]: libpod-conmon-24f274288329ddcf48aa0b5741d7b2dccc6ce842f4621e6c7e7c6392a4011607.scope: Deactivated successfully.
Jan 31 08:26:34 compute-0 podman[355254]: 2026-01-31 08:26:34.086994172 +0000 UTC m=+0.048155952 container create b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:26:34 compute-0 systemd[1]: Started libpod-conmon-b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886.scope.
Jan 31 08:26:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Jan 31 08:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb999fe22d0d13dd2e825d5854fe36dcd34206f83520b0d19211582074670ad5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb999fe22d0d13dd2e825d5854fe36dcd34206f83520b0d19211582074670ad5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb999fe22d0d13dd2e825d5854fe36dcd34206f83520b0d19211582074670ad5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb999fe22d0d13dd2e825d5854fe36dcd34206f83520b0d19211582074670ad5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:34 compute-0 podman[355254]: 2026-01-31 08:26:34.065641368 +0000 UTC m=+0.026803158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:26:34 compute-0 podman[355254]: 2026-01-31 08:26:34.165578578 +0000 UTC m=+0.126740378 container init b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:26:34 compute-0 podman[355254]: 2026-01-31 08:26:34.173610446 +0000 UTC m=+0.134772206 container start b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_northcutt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:26:34 compute-0 podman[355254]: 2026-01-31 08:26:34.187168118 +0000 UTC m=+0.148329908 container attach b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_northcutt, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:26:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/121583807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:34 compute-0 nova_compute[247704]: 2026-01-31 08:26:34.721 247708 DEBUG nova.objects.instance [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid bbc5f09e-71d7-4009-bdf6-06e95b32574c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:34 compute-0 nova_compute[247704]: 2026-01-31 08:26:34.780 247708 DEBUG nova.virt.libvirt.driver [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Attempting to attach volume 4e49c5a3-4191-4757-9b9f-0465fe16a7f1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:26:34 compute-0 nova_compute[247704]: 2026-01-31 08:26:34.784 247708 DEBUG nova.virt.libvirt.guest [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:26:34 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:26:34 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-4e49c5a3-4191-4757-9b9f-0465fe16a7f1">
Jan 31 08:26:34 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:34 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:34 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:34 compute-0 nova_compute[247704]:   </source>
Jan 31 08:26:34 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:26:34 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:26:34 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:26:34 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:26:34 compute-0 nova_compute[247704]:   <serial>4e49c5a3-4191-4757-9b9f-0465fe16a7f1</serial>
Jan 31 08:26:34 compute-0 nova_compute[247704]:   <shareable/>
Jan 31 08:26:34 compute-0 nova_compute[247704]: </disk>
Jan 31 08:26:34 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]: {
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:     "0": [
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:         {
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "devices": [
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "/dev/loop3"
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             ],
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "lv_name": "ceph_lv0",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "lv_size": "7511998464",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "name": "ceph_lv0",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "tags": {
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.cluster_name": "ceph",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.crush_device_class": "",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.encrypted": "0",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.osd_id": "0",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.type": "block",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:                 "ceph.vdo": "0"
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             },
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "type": "block",
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:             "vg_name": "ceph_vg0"
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:         }
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]:     ]
Jan 31 08:26:34 compute-0 interesting_northcutt[355270]: }
Jan 31 08:26:34 compute-0 nova_compute[247704]: 2026-01-31 08:26:34.975 247708 DEBUG nova.virt.libvirt.driver [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:34 compute-0 nova_compute[247704]: 2026-01-31 08:26:34.976 247708 DEBUG nova.virt.libvirt.driver [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:34 compute-0 nova_compute[247704]: 2026-01-31 08:26:34.976 247708 DEBUG nova.virt.libvirt.driver [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:26:34 compute-0 nova_compute[247704]: 2026-01-31 08:26:34.976 247708 DEBUG nova.virt.libvirt.driver [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:be:57:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:26:34 compute-0 systemd[1]: libpod-b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886.scope: Deactivated successfully.
Jan 31 08:26:34 compute-0 conmon[355270]: conmon b548afb634977d8a7434 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886.scope/container/memory.events
Jan 31 08:26:34 compute-0 podman[355254]: 2026-01-31 08:26:34.987699338 +0000 UTC m=+0.948861108 container died b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_northcutt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb999fe22d0d13dd2e825d5854fe36dcd34206f83520b0d19211582074670ad5-merged.mount: Deactivated successfully.
Jan 31 08:26:35 compute-0 podman[355254]: 2026-01-31 08:26:35.037706764 +0000 UTC m=+0.998868534 container remove b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:35 compute-0 systemd[1]: libpod-conmon-b548afb634977d8a743444ea327977677dc222f87254f08e9c5f0684fe730886.scope: Deactivated successfully.
Jan 31 08:26:35 compute-0 sudo[355147]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:35 compute-0 sudo[355310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:35 compute-0 sudo[355310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:35 compute-0 sudo[355310]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:35 compute-0 sudo[355335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:35 compute-0 sudo[355335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:35 compute-0 sudo[355335]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:35 compute-0 sudo[355360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:35 compute-0 sudo[355360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:35 compute-0 sudo[355360]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:35.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:35 compute-0 sudo[355385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:26:35 compute-0 sudo[355385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:35 compute-0 nova_compute[247704]: 2026-01-31 08:26:35.551 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:35 compute-0 ceph-mon[74496]: pgmap v2787: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006343021531267448 of space, bias 1.0, pg target 1.9029064593802345 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004321738902847571 of space, bias 1.0, pg target 1.292199931951424 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:26:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:26:35 compute-0 podman[355452]: 2026-01-31 08:26:35.6896378 +0000 UTC m=+0.044304247 container create c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:26:35 compute-0 systemd[1]: Started libpod-conmon-c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce.scope.
Jan 31 08:26:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:35 compute-0 podman[355452]: 2026-01-31 08:26:35.668776519 +0000 UTC m=+0.023442986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:26:35 compute-0 podman[355452]: 2026-01-31 08:26:35.774782308 +0000 UTC m=+0.129448755 container init c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:35 compute-0 podman[355452]: 2026-01-31 08:26:35.781443661 +0000 UTC m=+0.136110098 container start c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:26:35 compute-0 strange_zhukovsky[355469]: 167 167
Jan 31 08:26:35 compute-0 systemd[1]: libpod-c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce.scope: Deactivated successfully.
Jan 31 08:26:35 compute-0 podman[355452]: 2026-01-31 08:26:35.786013084 +0000 UTC m=+0.140679621 container attach c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:26:35 compute-0 podman[355452]: 2026-01-31 08:26:35.791652252 +0000 UTC m=+0.146318699 container died c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-35f90c008c99221e50d0128eaa3b9d253dd150c2def01fc5137c67570f72b442-merged.mount: Deactivated successfully.
Jan 31 08:26:35 compute-0 podman[355452]: 2026-01-31 08:26:35.839962326 +0000 UTC m=+0.194628783 container remove c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:26:35 compute-0 systemd[1]: libpod-conmon-c1c3bbd9f9770ae261779432930f5e6e1d3fc6d0c5f54644ec962f5866ac3fce.scope: Deactivated successfully.
Jan 31 08:26:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:35.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:35 compute-0 nova_compute[247704]: 2026-01-31 08:26:35.954 247708 DEBUG nova.compute.manager [req-4f53b4b7-6953-46d4-ab0a-94bbb54bd10d req-8a91c4b0-c5aa-4b6d-a6bd-981c822ccc66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received event network-changed-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:26:35 compute-0 nova_compute[247704]: 2026-01-31 08:26:35.954 247708 DEBUG nova.compute.manager [req-4f53b4b7-6953-46d4-ab0a-94bbb54bd10d req-8a91c4b0-c5aa-4b6d-a6bd-981c822ccc66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Refreshing instance network info cache due to event network-changed-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:26:35 compute-0 nova_compute[247704]: 2026-01-31 08:26:35.955 247708 DEBUG oslo_concurrency.lockutils [req-4f53b4b7-6953-46d4-ab0a-94bbb54bd10d req-8a91c4b0-c5aa-4b6d-a6bd-981c822ccc66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:35 compute-0 nova_compute[247704]: 2026-01-31 08:26:35.955 247708 DEBUG oslo_concurrency.lockutils [req-4f53b4b7-6953-46d4-ab0a-94bbb54bd10d req-8a91c4b0-c5aa-4b6d-a6bd-981c822ccc66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:35 compute-0 nova_compute[247704]: 2026-01-31 08:26:35.955 247708 DEBUG nova.network.neutron [req-4f53b4b7-6953-46d4-ab0a-94bbb54bd10d req-8a91c4b0-c5aa-4b6d-a6bd-981c822ccc66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Refreshing network info cache for port 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:26:36 compute-0 podman[355495]: 2026-01-31 08:26:36.039376176 +0000 UTC m=+0.064873012 container create 098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:26:36 compute-0 systemd[1]: Started libpod-conmon-098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607.scope.
Jan 31 08:26:36 compute-0 podman[355495]: 2026-01-31 08:26:36.000188456 +0000 UTC m=+0.025685302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:26:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc918ff302f3494787c5bc80436f23b7156d514ada3518601b4af8c499ac64d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc918ff302f3494787c5bc80436f23b7156d514ada3518601b4af8c499ac64d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc918ff302f3494787c5bc80436f23b7156d514ada3518601b4af8c499ac64d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc918ff302f3494787c5bc80436f23b7156d514ada3518601b4af8c499ac64d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:36 compute-0 podman[355495]: 2026-01-31 08:26:36.119819978 +0000 UTC m=+0.145316814 container init 098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:26:36 compute-0 podman[355495]: 2026-01-31 08:26:36.125558339 +0000 UTC m=+0.151055175 container start 098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:36 compute-0 podman[355495]: 2026-01-31 08:26:36.135959615 +0000 UTC m=+0.161456461 container attach 098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Jan 31 08:26:36 compute-0 nova_compute[247704]: 2026-01-31 08:26:36.306 247708 DEBUG oslo_concurrency.lockutils [None req-b14670d8-5862-44c1-b963-8da9d4e72032 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:36 compute-0 nova_compute[247704]: 2026-01-31 08:26:36.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]: {
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]:         "osd_id": 0,
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]:         "type": "bluestore"
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]:     }
Jan 31 08:26:37 compute-0 happy_visvesvaraya[355512]: }
Jan 31 08:26:37 compute-0 systemd[1]: libpod-098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607.scope: Deactivated successfully.
Jan 31 08:26:37 compute-0 podman[355495]: 2026-01-31 08:26:37.119500932 +0000 UTC m=+1.144997778 container died 098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc918ff302f3494787c5bc80436f23b7156d514ada3518601b4af8c499ac64d-merged.mount: Deactivated successfully.
Jan 31 08:26:37 compute-0 podman[355495]: 2026-01-31 08:26:37.189620131 +0000 UTC m=+1.215116977 container remove 098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 08:26:37 compute-0 systemd[1]: libpod-conmon-098979a7af5b7c9be6e7c5b45efeec547bf927da73fe8641e3138f52f88eb607.scope: Deactivated successfully.
Jan 31 08:26:37 compute-0 sudo[355385]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:26:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:26:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9c593057-4dba-414d-aba7-8374c63086b5 does not exist
Jan 31 08:26:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 26559c4b-e981-4743-8396-f6f8e61dc6e9 does not exist
Jan 31 08:26:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6392a71e-2640-4f57-8288-ec363a31511e does not exist
Jan 31 08:26:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:37.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:37 compute-0 sudo[355547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:37 compute-0 sudo[355547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:37 compute-0 sudo[355547]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:37 compute-0 sudo[355572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:26:37 compute-0 sudo[355572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:37 compute-0 sudo[355572]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:37 compute-0 nova_compute[247704]: 2026-01-31 08:26:37.629 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:37.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 49 KiB/s wr, 156 op/s
Jan 31 08:26:38 compute-0 ceph-mon[74496]: pgmap v2788: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Jan 31 08:26:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:26:39 compute-0 nova_compute[247704]: 2026-01-31 08:26:39.286 247708 DEBUG nova.network.neutron [req-4f53b4b7-6953-46d4-ab0a-94bbb54bd10d req-8a91c4b0-c5aa-4b6d-a6bd-981c822ccc66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Updated VIF entry in instance network info cache for port 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:26:39 compute-0 nova_compute[247704]: 2026-01-31 08:26:39.287 247708 DEBUG nova.network.neutron [req-4f53b4b7-6953-46d4-ab0a-94bbb54bd10d req-8a91c4b0-c5aa-4b6d-a6bd-981c822ccc66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Updating instance_info_cache with network_info: [{"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:39 compute-0 nova_compute[247704]: 2026-01-31 08:26:39.373 247708 DEBUG oslo_concurrency.lockutils [req-4f53b4b7-6953-46d4-ab0a-94bbb54bd10d req-8a91c4b0-c5aa-4b6d-a6bd-981c822ccc66 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-884d5d5d-6ad9-46a8-867a-b01ed20a527d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:39 compute-0 ceph-mon[74496]: pgmap v2789: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 49 KiB/s wr, 156 op/s
Jan 31 08:26:39 compute-0 nova_compute[247704]: 2026-01-31 08:26:39.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:39 compute-0 nova_compute[247704]: 2026-01-31 08:26:39.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:39 compute-0 nova_compute[247704]: 2026-01-31 08:26:39.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:26:39 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 31 08:26:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:26:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:39.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:26:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 460 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.1 MiB/s wr, 170 op/s
Jan 31 08:26:40 compute-0 nova_compute[247704]: 2026-01-31 08:26:40.553 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.119 247708 DEBUG oslo_concurrency.lockutils [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.119 247708 DEBUG oslo_concurrency.lockutils [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.140 247708 INFO nova.compute.manager [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Detaching volume 4e49c5a3-4191-4757-9b9f-0465fe16a7f1
Jan 31 08:26:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:41.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.472 247708 INFO nova.virt.block_device [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Attempting to driver detach volume 4e49c5a3-4191-4757-9b9f-0465fe16a7f1 from mountpoint /dev/vdb
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.492 247708 DEBUG nova.virt.libvirt.driver [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Attempting to detach device vdb from instance bbc5f09e-71d7-4009-bdf6-06e95b32574c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.493 247708 DEBUG nova.virt.libvirt.guest [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-4e49c5a3-4191-4757-9b9f-0465fe16a7f1">
Jan 31 08:26:41 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   </source>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <serial>4e49c5a3-4191-4757-9b9f-0465fe16a7f1</serial>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <shareable/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]: </disk>
Jan 31 08:26:41 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.504 247708 INFO nova.virt.libvirt.driver [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully detached device vdb from instance bbc5f09e-71d7-4009-bdf6-06e95b32574c from the persistent domain config.
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.505 247708 DEBUG nova.virt.libvirt.driver [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance bbc5f09e-71d7-4009-bdf6-06e95b32574c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.505 247708 DEBUG nova.virt.libvirt.guest [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-4e49c5a3-4191-4757-9b9f-0465fe16a7f1">
Jan 31 08:26:41 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   </source>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <serial>4e49c5a3-4191-4757-9b9f-0465fe16a7f1</serial>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <shareable/>
Jan 31 08:26:41 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:26:41 compute-0 nova_compute[247704]: </disk>
Jan 31 08:26:41 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:26:41 compute-0 ceph-mon[74496]: pgmap v2790: 305 pgs: 305 active+clean; 460 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.1 MiB/s wr, 170 op/s
Jan 31 08:26:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.560 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769848001.5597143, bbc5f09e-71d7-4009-bdf6-06e95b32574c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.562 247708 DEBUG nova.virt.libvirt.driver [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance bbc5f09e-71d7-4009-bdf6-06e95b32574c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:26:41 compute-0 nova_compute[247704]: 2026-01-31 08:26:41.564 247708 INFO nova.virt.libvirt.driver [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully detached device vdb from instance bbc5f09e-71d7-4009-bdf6-06e95b32574c from the live domain config.
Jan 31 08:26:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:41.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 471 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 148 op/s
Jan 31 08:26:42 compute-0 nova_compute[247704]: 2026-01-31 08:26:42.321 247708 DEBUG nova.objects.instance [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid bbc5f09e-71d7-4009-bdf6-06e95b32574c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:42 compute-0 nova_compute[247704]: 2026-01-31 08:26:42.395 247708 DEBUG oslo_concurrency.lockutils [None req-3fab118a-3b42-4313-b17b-57ff86820aaa 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:42 compute-0 nova_compute[247704]: 2026-01-31 08:26:42.600 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:42 compute-0 nova_compute[247704]: 2026-01-31 08:26:42.633 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:43.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:43 compute-0 ceph-mon[74496]: pgmap v2791: 305 pgs: 305 active+clean; 471 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 148 op/s
Jan 31 08:26:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:43.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 481 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 31 08:26:44 compute-0 nova_compute[247704]: 2026-01-31 08:26:44.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:44 compute-0 nova_compute[247704]: 2026-01-31 08:26:44.612 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:44 compute-0 nova_compute[247704]: 2026-01-31 08:26:44.613 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:44 compute-0 nova_compute[247704]: 2026-01-31 08:26:44.613 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:44 compute-0 nova_compute[247704]: 2026-01-31 08:26:44.613 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:26:44 compute-0 nova_compute[247704]: 2026-01-31 08:26:44.613 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063618554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.075 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.206 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.208 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.212 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.212 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.215 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.215 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:26:45 compute-0 podman[355626]: 2026-01-31 08:26:45.233536876 +0000 UTC m=+0.102367941 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:26:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:45.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.431 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.432 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3769MB free_disk=20.830669403076172GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.432 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.432 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.556 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.646 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.646 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance bbc5f09e-71d7-4009-bdf6-06e95b32574c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.646 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.647 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.647 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:26:45 compute-0 nova_compute[247704]: 2026-01-31 08:26:45.780 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:45 compute-0 ceph-mon[74496]: pgmap v2792: 305 pgs: 305 active+clean; 481 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 31 08:26:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2063618554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/848783938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:45.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 105 op/s
Jan 31 08:26:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791014467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:46 compute-0 nova_compute[247704]: 2026-01-31 08:26:46.252 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:46 compute-0 nova_compute[247704]: 2026-01-31 08:26:46.259 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:46 compute-0 nova_compute[247704]: 2026-01-31 08:26:46.286 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:46 compute-0 nova_compute[247704]: 2026-01-31 08:26:46.361 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:26:46 compute-0 nova_compute[247704]: 2026-01-31 08:26:46.362 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:46 compute-0 ovn_controller[149457]: 2026-01-31T08:26:46Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:30:cf 10.100.0.14
Jan 31 08:26:46 compute-0 ovn_controller[149457]: 2026-01-31T08:26:46Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:30:cf 10.100.0.14
Jan 31 08:26:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1898698978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/791014467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:47.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:47 compute-0 nova_compute[247704]: 2026-01-31 08:26:47.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:47 compute-0 nova_compute[247704]: 2026-01-31 08:26:47.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:26:47 compute-0 nova_compute[247704]: 2026-01-31 08:26:47.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:26:47 compute-0 nova_compute[247704]: 2026-01-31 08:26:47.667 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:47.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:47 compute-0 ceph-mon[74496]: pgmap v2793: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.4 MiB/s wr, 105 op/s
Jan 31 08:26:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2720197295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1176362589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1160455600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:48 compute-0 nova_compute[247704]: 2026-01-31 08:26:48.031 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:26:48 compute-0 nova_compute[247704]: 2026-01-31 08:26:48.032 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:26:48 compute-0 nova_compute[247704]: 2026-01-31 08:26:48.032 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:26:48 compute-0 nova_compute[247704]: 2026-01-31 08:26:48.032 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 509 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 460 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Jan 31 08:26:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1267069725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2763669411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:49.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:26:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:26:50 compute-0 ceph-mon[74496]: pgmap v2794: 305 pgs: 305 active+clean; 509 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 460 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Jan 31 08:26:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1876234704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 566 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 867 KiB/s rd, 6.7 MiB/s wr, 150 op/s
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.349 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updating instance_info_cache with network_info: [{"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.368 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.368 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.369 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.369 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.369 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.369 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.385 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.559 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:50 compute-0 nova_compute[247704]: 2026-01-31 08:26:50.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:26:50 compute-0 sudo[355677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:50 compute-0 sudo[355677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:50 compute-0 sudo[355677]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:50 compute-0 sudo[355702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:26:50 compute-0 sudo[355702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:50 compute-0 sudo[355702]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:51.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:51.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:52 compute-0 ceph-mon[74496]: pgmap v2795: 305 pgs: 305 active+clean; 566 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 867 KiB/s rd, 6.7 MiB/s wr, 150 op/s
Jan 31 08:26:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.2 MiB/s wr, 157 op/s
Jan 31 08:26:52 compute-0 nova_compute[247704]: 2026-01-31 08:26:52.670 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:52 compute-0 nova_compute[247704]: 2026-01-31 08:26:52.858 247708 DEBUG nova.compute.manager [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.077 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.078 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.137 247708 DEBUG nova.objects.instance [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'pci_requests' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.173 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.174 247708 INFO nova.compute.claims [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.175 247708 DEBUG nova.objects.instance [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'resources' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.202 247708 DEBUG nova.objects.instance [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'numa_topology' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.232 247708 DEBUG nova.objects.instance [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'pci_devices' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:26:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:53.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.382 247708 INFO nova.compute.resource_tracker [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating resource usage from migration 45878621-34ad-484d-97db-dac570b012b2
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.383 247708 DEBUG nova.compute.resource_tracker [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Starting to track incoming migration 45878621-34ad-484d-97db-dac570b012b2 with flavor fea01737-128b-41fa-a695-aaaa6e96e4b2 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.567 247708 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:53.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:26:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1848615711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.977 247708 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:53 compute-0 nova_compute[247704]: 2026-01-31 08:26:53.983 247708 DEBUG nova.compute.provider_tree [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:54 compute-0 ceph-mon[74496]: pgmap v2796: 305 pgs: 305 active+clean; 596 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.2 MiB/s wr, 157 op/s
Jan 31 08:26:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2557482008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:26:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2557482008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:26:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1848615711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:26:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Jan 31 08:26:54 compute-0 nova_compute[247704]: 2026-01-31 08:26:54.172 247708 DEBUG nova.scheduler.client.report [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:54 compute-0 nova_compute[247704]: 2026-01-31 08:26:54.214 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:54 compute-0 nova_compute[247704]: 2026-01-31 08:26:54.214 247708 INFO nova.compute.manager [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Migrating
Jan 31 08:26:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:55.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:55 compute-0 nova_compute[247704]: 2026-01-31 08:26:55.562 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:55.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:56 compute-0 ceph-mon[74496]: pgmap v2797: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Jan 31 08:26:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 198 op/s
Jan 31 08:26:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/306460858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:57.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:57 compute-0 nova_compute[247704]: 2026-01-31 08:26:57.616 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:26:57 compute-0 nova_compute[247704]: 2026-01-31 08:26:57.699 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:26:57 compute-0 podman[355753]: 2026-01-31 08:26:57.904475 +0000 UTC m=+0.071487244 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:26:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:26:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:57.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:26:58 compute-0 ceph-mon[74496]: pgmap v2798: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 198 op/s
Jan 31 08:26:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2863717728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:26:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.5 MiB/s wr, 184 op/s
Jan 31 08:26:59 compute-0 sshd-session[355772]: Accepted publickey for nova from 192.168.122.102 port 60648 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:26:59 compute-0 systemd-logind[816]: New session 61 of user nova.
Jan 31 08:26:59 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 08:26:59 compute-0 ceph-mon[74496]: pgmap v2799: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.5 MiB/s wr, 184 op/s
Jan 31 08:26:59 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 08:26:59 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 08:26:59 compute-0 systemd[1]: Starting User Manager for UID 42436...
Jan 31 08:26:59 compute-0 systemd[355776]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:26:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:59.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:26:59 compute-0 systemd[355776]: Queued start job for default target Main User Target.
Jan 31 08:26:59 compute-0 systemd[355776]: Created slice User Application Slice.
Jan 31 08:26:59 compute-0 systemd[355776]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:26:59 compute-0 systemd[355776]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:26:59 compute-0 systemd[355776]: Reached target Paths.
Jan 31 08:26:59 compute-0 systemd[355776]: Reached target Timers.
Jan 31 08:26:59 compute-0 systemd[355776]: Starting D-Bus User Message Bus Socket...
Jan 31 08:26:59 compute-0 systemd[355776]: Starting Create User's Volatile Files and Directories...
Jan 31 08:26:59 compute-0 systemd[355776]: Finished Create User's Volatile Files and Directories.
Jan 31 08:26:59 compute-0 systemd[355776]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:26:59 compute-0 systemd[355776]: Reached target Sockets.
Jan 31 08:26:59 compute-0 systemd[355776]: Reached target Basic System.
Jan 31 08:26:59 compute-0 systemd[355776]: Reached target Main User Target.
Jan 31 08:26:59 compute-0 systemd[355776]: Startup finished in 150ms.
Jan 31 08:26:59 compute-0 systemd[1]: Started User Manager for UID 42436.
Jan 31 08:26:59 compute-0 systemd[1]: Started Session 61 of User nova.
Jan 31 08:26:59 compute-0 sshd-session[355772]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:26:59 compute-0 sshd-session[355791]: Received disconnect from 192.168.122.102 port 60648:11: disconnected by user
Jan 31 08:26:59 compute-0 sshd-session[355791]: Disconnected from user nova 192.168.122.102 port 60648
Jan 31 08:26:59 compute-0 sshd-session[355772]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:26:59 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Jan 31 08:26:59 compute-0 systemd-logind[816]: Session 61 logged out. Waiting for processes to exit.
Jan 31 08:26:59 compute-0 systemd-logind[816]: Removed session 61.
Jan 31 08:26:59 compute-0 sshd-session[355794]: Accepted publickey for nova from 192.168.122.102 port 60654 ssh2: ECDSA SHA256:x674mWemszn5UyYA1PQSm9fK8+OEaBfRnNSUktYnOE0
Jan 31 08:26:59 compute-0 systemd-logind[816]: New session 63 of user nova.
Jan 31 08:26:59 compute-0 systemd[1]: Started Session 63 of User nova.
Jan 31 08:26:59 compute-0 sshd-session[355794]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Jan 31 08:26:59 compute-0 sshd-session[355797]: Received disconnect from 192.168.122.102 port 60654:11: disconnected by user
Jan 31 08:26:59 compute-0 sshd-session[355797]: Disconnected from user nova 192.168.122.102 port 60654
Jan 31 08:26:59 compute-0 sshd-session[355794]: pam_unix(sshd:session): session closed for user nova
Jan 31 08:26:59 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Jan 31 08:26:59 compute-0 systemd-logind[816]: Session 63 logged out. Waiting for processes to exit.
Jan 31 08:26:59 compute-0 systemd-logind[816]: Removed session 63.
Jan 31 08:26:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:26:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:26:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:59.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 153 op/s
Jan 31 08:27:00 compute-0 nova_compute[247704]: 2026-01-31 08:27:00.564 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:01 compute-0 ceph-mon[74496]: pgmap v2800: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.0 MiB/s wr, 153 op/s
Jan 31 08:27:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:27:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:01.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:27:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:01.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 619 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Jan 31 08:27:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:02 compute-0 nova_compute[247704]: 2026-01-31 08:27:02.701 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:02 compute-0 nova_compute[247704]: 2026-01-31 08:27:02.810 247708 DEBUG nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:02 compute-0 nova_compute[247704]: 2026-01-31 08:27:02.811 247708 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:02 compute-0 nova_compute[247704]: 2026-01-31 08:27:02.811 247708 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:02 compute-0 nova_compute[247704]: 2026-01-31 08:27:02.811 247708 DEBUG oslo_concurrency.lockutils [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:02 compute-0 nova_compute[247704]: 2026-01-31 08:27:02.811 247708 DEBUG nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:02 compute-0 nova_compute[247704]: 2026-01-31 08:27:02.812 247708 WARNING nova.compute.manager [req-b88f55b4-d96b-4df9-9701-1a891812a6d6 req-6c39127d-45fb-4073-a92d-e8f580341408 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state resize_migrating.
Jan 31 08:27:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:03.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:03 compute-0 ceph-mon[74496]: pgmap v2801: 305 pgs: 305 active+clean; 619 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Jan 31 08:27:03 compute-0 nova_compute[247704]: 2026-01-31 08:27:03.778 247708 INFO nova.network.neutron [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 08:27:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 627 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 127 op/s
Jan 31 08:27:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:04.490 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:27:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:04.491 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:27:04 compute-0 nova_compute[247704]: 2026-01-31 08:27:04.491 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:04 compute-0 nova_compute[247704]: 2026-01-31 08:27:04.934 247708 DEBUG nova.compute.manager [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:04 compute-0 nova_compute[247704]: 2026-01-31 08:27:04.935 247708 DEBUG oslo_concurrency.lockutils [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:04 compute-0 nova_compute[247704]: 2026-01-31 08:27:04.935 247708 DEBUG oslo_concurrency.lockutils [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:04 compute-0 nova_compute[247704]: 2026-01-31 08:27:04.936 247708 DEBUG oslo_concurrency.lockutils [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:04 compute-0 nova_compute[247704]: 2026-01-31 08:27:04.936 247708 DEBUG nova.compute.manager [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:04 compute-0 nova_compute[247704]: 2026-01-31 08:27:04.937 247708 WARNING nova.compute.manager [req-62080176-df66-489c-9ddf-7df297fa5d83 req-822b21f9-e22b-4946-a7a5-0eda01da2d86 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state resize_migrated.
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.192 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.192 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.192 247708 DEBUG nova.network.neutron [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.277 247708 DEBUG nova.compute.manager [req-c8f577c8-8554-4924-aecc-7b09ea2b1230 req-95e68a86-1976-48ca-8d04-66c2c65c4fbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-changed-85a0f13a-358c-4d17-a146-95c5b877e950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.278 247708 DEBUG nova.compute.manager [req-c8f577c8-8554-4924-aecc-7b09ea2b1230 req-95e68a86-1976-48ca-8d04-66c2c65c4fbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Refreshing instance network info cache due to event network-changed-85a0f13a-358c-4d17-a146-95c5b877e950. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.278 247708 DEBUG oslo_concurrency.lockutils [req-c8f577c8-8554-4924-aecc-7b09ea2b1230 req-95e68a86-1976-48ca-8d04-66c2c65c4fbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.278 247708 DEBUG oslo_concurrency.lockutils [req-c8f577c8-8554-4924-aecc-7b09ea2b1230 req-95e68a86-1976-48ca-8d04-66c2c65c4fbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.279 247708 DEBUG nova.network.neutron [req-c8f577c8-8554-4924-aecc-7b09ea2b1230 req-95e68a86-1976-48ca-8d04-66c2c65c4fbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Refreshing network info cache for port 85a0f13a-358c-4d17-a146-95c5b877e950 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:05.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:05 compute-0 ceph-mon[74496]: pgmap v2802: 305 pgs: 305 active+clean; 627 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 127 op/s
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.567 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.601 247708 DEBUG nova.compute.manager [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.601 247708 DEBUG nova.compute.manager [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing instance network info cache due to event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:05 compute-0 nova_compute[247704]: 2026-01-31 08:27:05.601 247708 DEBUG oslo_concurrency.lockutils [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Jan 31 08:27:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:07.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.463 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:07 compute-0 ceph-mon[74496]: pgmap v2803: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.529 247708 DEBUG nova.network.neutron [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.557 247708 WARNING nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.558 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.558 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid bbc5f09e-71d7-4009-bdf6-06e95b32574c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.559 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.559 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 884d5d5d-6ad9-46a8-867a-b01ed20a527d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.559 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.560 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.561 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.561 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.561 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.562 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.562 247708 INFO nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] During sync_power_state the instance has a pending task (resize_migrated). Skip.
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.562 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.562 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.563 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.698 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.703 247708 DEBUG oslo_concurrency.lockutils [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.704 247708 DEBUG nova.network.neutron [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.708 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.739 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.743 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.788 247708 DEBUG nova.network.neutron [req-c8f577c8-8554-4924-aecc-7b09ea2b1230 req-95e68a86-1976-48ca-8d04-66c2c65c4fbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updated VIF entry in instance network info cache for port 85a0f13a-358c-4d17-a146-95c5b877e950. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.789 247708 DEBUG nova.network.neutron [req-c8f577c8-8554-4924-aecc-7b09ea2b1230 req-95e68a86-1976-48ca-8d04-66c2c65c4fbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating instance_info_cache with network_info: [{"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.842 247708 DEBUG oslo_concurrency.lockutils [req-c8f577c8-8554-4924-aecc-7b09ea2b1230 req-95e68a86-1976-48ca-8d04-66c2c65c4fbe 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.854 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.857 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.857 247708 INFO nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Creating image(s)
Jan 31 08:27:07 compute-0 nova_compute[247704]: 2026-01-31 08:27:07.900 247708 DEBUG nova.storage.rbd_utils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] creating snapshot(nova-resize) on rbd image(651d6b65-a0ee-4942-bf60-88b037eb6508_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:27:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 642 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 162 op/s
Jan 31 08:27:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Jan 31 08:27:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2764240598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1378778972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Jan 31 08:27:08 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.606 247708 DEBUG nova.objects.instance [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'trusted_certs' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.762 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.763 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Ensure instance console log exists: /var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.763 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.763 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.764 247708 DEBUG oslo_concurrency.lockutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.766 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Start _get_guest_xml network_info=[{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1449295874", "vif_mac": "fa:16:3e:75:f3:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.771 247708 WARNING nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.779 247708 DEBUG nova.virt.libvirt.host [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.779 247708 DEBUG nova.virt.libvirt.host [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.787 247708 DEBUG nova.virt.libvirt.host [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.787 247708 DEBUG nova.virt.libvirt.host [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.788 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.789 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.789 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.789 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.790 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.790 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.790 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.790 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.790 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.791 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.791 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.791 247708 DEBUG nova.virt.hardware [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.791 247708 DEBUG nova.objects.instance [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lazy-loading 'vcpu_model' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:08 compute-0 nova_compute[247704]: 2026-01-31 08:27:08.832 247708 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:27:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3732445394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.302 247708 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:09.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.353 247708 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:09 compute-0 ceph-mon[74496]: pgmap v2804: 305 pgs: 305 active+clean; 642 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 162 op/s
Jan 31 08:27:09 compute-0 ceph-mon[74496]: osdmap e359: 3 total, 3 up, 3 in
Jan 31 08:27:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3732445394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:09 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 08:27:09 compute-0 systemd[355776]: Activating special unit Exit the Session...
Jan 31 08:27:09 compute-0 systemd[355776]: Stopped target Main User Target.
Jan 31 08:27:09 compute-0 systemd[355776]: Stopped target Basic System.
Jan 31 08:27:09 compute-0 systemd[355776]: Stopped target Paths.
Jan 31 08:27:09 compute-0 systemd[355776]: Stopped target Sockets.
Jan 31 08:27:09 compute-0 systemd[355776]: Stopped target Timers.
Jan 31 08:27:09 compute-0 systemd[355776]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:27:09 compute-0 systemd[355776]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:27:09 compute-0 systemd[355776]: Closed D-Bus User Message Bus Socket.
Jan 31 08:27:09 compute-0 systemd[355776]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:27:09 compute-0 systemd[355776]: Removed slice User Application Slice.
Jan 31 08:27:09 compute-0 systemd[355776]: Reached target Shutdown.
Jan 31 08:27:09 compute-0 systemd[355776]: Finished Exit the Session.
Jan 31 08:27:09 compute-0 systemd[355776]: Reached target Exit the Session.
Jan 31 08:27:09 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 08:27:09 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 08:27:09 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 08:27:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:27:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2994273057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:09 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 08:27:09 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 08:27:09 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 08:27:09 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.863 247708 DEBUG oslo_concurrency.processutils [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.866 247708 DEBUG nova.virt.libvirt.vif [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:26:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:27:03Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1449295874", "vif_mac": "fa:16:3e:75:f3:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.866 247708 DEBUG nova.network.os_vif_util [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1449295874", "vif_mac": "fa:16:3e:75:f3:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.867 247708 DEBUG nova.network.os_vif_util [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.871 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <uuid>651d6b65-a0ee-4942-bf60-88b037eb6508</uuid>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <name>instance-000000a1</name>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-2107337279</nova:name>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:27:08</nova:creationTime>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <nova:port uuid="ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f">
Jan 31 08:27:09 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <system>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <entry name="serial">651d6b65-a0ee-4942-bf60-88b037eb6508</entry>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <entry name="uuid">651d6b65-a0ee-4942-bf60-88b037eb6508</entry>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </system>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <os>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   </os>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <features>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   </features>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/651d6b65-a0ee-4942-bf60-88b037eb6508_disk">
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       </source>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/651d6b65-a0ee-4942-bf60-88b037eb6508_disk.config">
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       </source>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:27:09 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:75:f3:24"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <target dev="tapec7fbb6b-9a"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/651d6b65-a0ee-4942-bf60-88b037eb6508/console.log" append="off"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <video>
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </video>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:27:09 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:27:09 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:27:09 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:27:09 compute-0 nova_compute[247704]: </domain>
Jan 31 08:27:09 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.873 247708 DEBUG nova.virt.libvirt.vif [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:26:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:27:03Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1449295874", "vif_mac": "fa:16:3e:75:f3:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.873 247708 DEBUG nova.network.os_vif_util [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1449295874", "vif_mac": "fa:16:3e:75:f3:24"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.874 247708 DEBUG nova.network.os_vif_util [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.874 247708 DEBUG os_vif [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.875 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.876 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.877 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.880 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.881 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec7fbb6b-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.881 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec7fbb6b-9a, col_values=(('external_ids', {'iface-id': 'ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:f3:24', 'vm-uuid': '651d6b65-a0ee-4942-bf60-88b037eb6508'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:09 compute-0 NetworkManager[49108]: <info>  [1769848029.8846] manager: (tapec7fbb6b-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.884 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.887 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.891 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.892 247708 INFO os_vif [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a')
Jan 31 08:27:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:09.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.968 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.968 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.968 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] No VIF found with MAC fa:16:3e:75:f3:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:27:09 compute-0 nova_compute[247704]: 2026-01-31 08:27:09.969 247708 INFO nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Using config drive
Jan 31 08:27:10 compute-0 kernel: tapec7fbb6b-9a: entered promiscuous mode
Jan 31 08:27:10 compute-0 NetworkManager[49108]: <info>  [1769848030.0613] manager: (tapec7fbb6b-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.104 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:10 compute-0 ovn_controller[149457]: 2026-01-31T08:27:10Z|00703|binding|INFO|Claiming lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for this chassis.
Jan 31 08:27:10 compute-0 ovn_controller[149457]: 2026-01-31T08:27:10Z|00704|binding|INFO|ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f: Claiming fa:16:3e:75:f3:24 10.100.0.11
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.111 247708 DEBUG oslo_concurrency.lockutils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.111 247708 DEBUG oslo_concurrency.lockutils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:10 compute-0 ovn_controller[149457]: 2026-01-31T08:27:10Z|00705|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f ovn-installed in OVS
Jan 31 08:27:10 compute-0 ovn_controller[149457]: 2026-01-31T08:27:10Z|00706|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f up in Southbound
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.114 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:f3:24 10.100.0.11'], port_security=['fa:16:3e:75:f3:24 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '651d6b65-a0ee-4942-bf60-88b037eb6508', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '53c8d6c3-3fc8-4e05-8f45-013b15b35751', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eeea185c-a70e-4e33-a1d7-88e2fb6e75b6, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.116 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.116 160028 INFO neutron.agent.ovn.metadata.agent [-] Port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f in datapath cc669a9b-1a99-4cea-8b35-6d932fb2087c bound to our chassis
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.119 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:27:10 compute-0 systemd-machined[214448]: New machine qemu-76-instance-000000a1.
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.131 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8b302e95-960a-47c1-88bc-dc8fa0b75542]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.132 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcc669a9b-11 in ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:27:10 compute-0 systemd-udevd[355973]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.134 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcc669a9b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.134 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0f9082-d3c1-4c0b-9f4f-7fa6ccfcf457]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.135 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[594e0706-3d7c-4802-a559-8b730ded89d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-000000a1.
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.142 247708 DEBUG nova.objects.instance [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid 884d5d5d-6ad9-46a8-867a-b01ed20a527d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.149 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[564f964e-9aa6-4c18-ae86-6abb60cea0ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 NetworkManager[49108]: <info>  [1769848030.1505] device (tapec7fbb6b-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:27:10 compute-0 NetworkManager[49108]: <info>  [1769848030.1516] device (tapec7fbb6b-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.163 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[41aa9de4-b08e-4e07-9811-4f3619d9ce57]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 626 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.0 MiB/s wr, 225 op/s
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.181 247708 DEBUG nova.network.neutron [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated VIF entry in instance network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.182 247708 DEBUG nova.network.neutron [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.186 247708 DEBUG oslo_concurrency.lockutils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.193 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfa53f3-d088-483a-9e9d-87bf0143fb4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 NetworkManager[49108]: <info>  [1769848030.2002] manager: (tapcc669a9b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.202 247708 DEBUG oslo_concurrency.lockutils [req-0b542e79-75f7-4bd5-a610-ada7a5ac4fc2 req-6071908c-fdd5-4cc5-8fe5-f877d1b37897 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.201 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a50548-04cb-422d-95ea-f30b0edc702f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.224 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[73873735-0447-431c-8248-6e96a2a507b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.228 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c0337917-d6b6-45c0-bf7b-9551d42e7df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 NetworkManager[49108]: <info>  [1769848030.2448] device (tapcc669a9b-10): carrier: link connected
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.250 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4d47a2ea-af78-459d-b2df-8ee5b8661764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.267 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c5cdc1ff-5167-4698-8155-7805ebc47608]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc669a9b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827286, 'reachable_time': 27745, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356005, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.287 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[becf0749-51dc-4373-a5a4-cc9c47eed605]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:92df'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827286, 'tstamp': 827286}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356006, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.310 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[db44e7e5-ee64-44ef-93ae-2843842ef2cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc669a9b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:92:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827286, 'reachable_time': 27745, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356007, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.346 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1b13426d-701a-449e-be8c-931662add0c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.409 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[48cf4aec-58cc-4f6d-ac3e-bf7661cb87d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.411 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc669a9b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.411 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.411 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc669a9b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:10 compute-0 NetworkManager[49108]: <info>  [1769848030.4147] manager: (tapcc669a9b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.414 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:10 compute-0 kernel: tapcc669a9b-10: entered promiscuous mode
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.416 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.419 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcc669a9b-10, col_values=(('external_ids', {'iface-id': '897a561a-9f88-407a-b979-589100a315c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.421 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:10 compute-0 ovn_controller[149457]: 2026-01-31T08:27:10Z|00707|binding|INFO|Releasing lport 897a561a-9f88-407a-b979-589100a315c7 from this chassis (sb_readonly=0)
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.425 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.426 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a9efb7ca-56a0-4714-a0d2-3bd1c8249a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.427 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/cc669a9b-1a99-4cea-8b35-6d932fb2087c.pid.haproxy
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID cc669a9b-1a99-4cea-8b35-6d932fb2087c
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:27:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:10.427 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'env', 'PROCESS_TAG=haproxy-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cc669a9b-1a99-4cea-8b35-6d932fb2087c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.429 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.500 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848030.499546, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.500 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Resumed (Lifecycle Event)
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.503 247708 DEBUG nova.compute.manager [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.508 247708 DEBUG nova.compute.manager [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.509 247708 DEBUG oslo_concurrency.lockutils [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.509 247708 DEBUG oslo_concurrency.lockutils [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.510 247708 DEBUG oslo_concurrency.lockutils [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.510 247708 DEBUG nova.compute.manager [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.510 247708 WARNING nova.compute.manager [req-7ee31c96-5fa2-4f80-b091-e61b3c492b4f req-84af3f10-3763-491f-9ba6-f7051570293c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state active and task_state resize_finish.
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.517 247708 INFO nova.virt.libvirt.driver [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance running successfully.
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.521 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.522 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:10 compute-0 virtqemud[247621]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.529 247708 DEBUG nova.virt.libvirt.guest [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.530 247708 DEBUG nova.virt.libvirt.driver [None req-0fd19d79-15ee-43be-acf8-b4dbce35a57d af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.563 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.569 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.574 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.598 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:27:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2994273057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:10 compute-0 podman[356078]: 2026-01-31 08:27:10.820311698 +0000 UTC m=+0.058714321 container create a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:27:10 compute-0 systemd[1]: Started libpod-conmon-a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8.scope.
Jan 31 08:27:10 compute-0 podman[356078]: 2026-01-31 08:27:10.786047607 +0000 UTC m=+0.024450260 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:27:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546c36bf2932cdd78fd72242fc35f517e75f0af6c78a01cb73086e8250c6bf9f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:10 compute-0 podman[356078]: 2026-01-31 08:27:10.922256247 +0000 UTC m=+0.160658920 container init a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:27:10 compute-0 podman[356078]: 2026-01-31 08:27:10.928117961 +0000 UTC m=+0.166520604 container start a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:27:10 compute-0 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[356092]: [NOTICE]   (356096) : New worker (356098) forked
Jan 31 08:27:10 compute-0 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[356092]: [NOTICE]   (356096) : Loading success.
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.979 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.980 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848030.5004258, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:10 compute-0 nova_compute[247704]: 2026-01-31 08:27:10.980 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Started (Lifecycle Event)
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.009 247708 DEBUG oslo_concurrency.lockutils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.010 247708 DEBUG oslo_concurrency.lockutils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.010 247708 INFO nova.compute.manager [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Attaching volume d1f290e7-6438-4a38-9b76-cefa7f5a6025 to /dev/vdb
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.022 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:11 compute-0 sudo[356107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:11 compute-0 sudo[356107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:11 compute-0 sudo[356107]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.056 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.069 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.070 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.080 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.081 247708 INFO nova.compute.claims [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:27:11 compute-0 sudo[356132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:11 compute-0 sudo[356132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:11 compute-0 sudo[356132]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:11.197 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:11.198 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:11.199 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.242 247708 DEBUG os_brick.utils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.243 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.258 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.258 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6e1b61-93b3-4403-8508-327664265aaa]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.262 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.273 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.274 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[b5765beb-d08c-418c-ae81-873d228d3088]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.275 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.288 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.289 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7f44f4-91c3-4d1e-aa02-07a2c165c72a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.290 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[b2928eee-6b4e-4722-8c18-3f5f4832ddc2]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.291 247708 DEBUG oslo_concurrency.processutils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.322 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:11.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.355 247708 DEBUG oslo_concurrency.processutils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "nvme version" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.358 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.358 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.358 247708 DEBUG os_brick.initiator.connectors.lightos [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.358 247708 DEBUG os_brick.utils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] <== get_connector_properties: return (116ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.359 247708 DEBUG nova.virt.block_device [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Updating existing volume attachment record: cea545d4-0d68-460c-b27d-29f6b2c30e27 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:27:11 compute-0 ceph-mon[74496]: pgmap v2806: 305 pgs: 305 active+clean; 626 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.0 MiB/s wr, 225 op/s
Jan 31 08:27:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:27:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058255815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.802 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.809 247708 DEBUG nova.compute.provider_tree [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.839 247708 DEBUG nova.scheduler.client.report [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.869 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.870 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:27:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:11.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.989 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:27:11 compute-0 nova_compute[247704]: 2026-01-31 08:27:11.990 247708 DEBUG nova.network.neutron [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.022 247708 INFO nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.065 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:27:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 220 op/s
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.213 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.215 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.215 247708 INFO nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Creating image(s)
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.245 247708 DEBUG nova.storage.rbd_utils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 10816ede-cf43-4736-aba7-48389f607d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:27:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:27:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4158850951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.284 247708 DEBUG nova.storage.rbd_utils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 10816ede-cf43-4736-aba7-48389f607d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.314 247708 DEBUG nova.storage.rbd_utils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 10816ede-cf43-4736-aba7-48389f607d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.318 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.345 247708 DEBUG nova.policy [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85dfa8546d9942648bb4197c8b1947e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbdbdee526499e90da7e971ede68d3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.383 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.384 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.385 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.385 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.414 247708 DEBUG nova.storage.rbd_utils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 10816ede-cf43-4736-aba7-48389f607d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.417 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 10816ede-cf43-4736-aba7-48389f607d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.452 247708 DEBUG nova.objects.instance [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid 884d5d5d-6ad9-46a8-867a-b01ed20a527d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.491 247708 DEBUG nova.virt.libvirt.driver [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Attempting to attach volume d1f290e7-6438-4a38-9b76-cefa7f5a6025 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.494 247708 DEBUG nova.virt.libvirt.guest [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:27:12 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:27:12 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-d1f290e7-6438-4a38-9b76-cefa7f5a6025">
Jan 31 08:27:12 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:12 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:12 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:12 compute-0 nova_compute[247704]:   </source>
Jan 31 08:27:12 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:27:12 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:12 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:27:12 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:27:12 compute-0 nova_compute[247704]:   <serial>d1f290e7-6438-4a38-9b76-cefa7f5a6025</serial>
Jan 31 08:27:12 compute-0 nova_compute[247704]: </disk>
Jan 31 08:27:12 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:27:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3058255815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4158850951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.669 247708 DEBUG nova.virt.libvirt.driver [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.669 247708 DEBUG nova.virt.libvirt.driver [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.669 247708 DEBUG nova.virt.libvirt.driver [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.670 247708 DEBUG nova.virt.libvirt.driver [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] No VIF found with MAC fa:16:3e:a6:30:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:27:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.706 247708 DEBUG nova.compute.manager [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.706 247708 DEBUG oslo_concurrency.lockutils [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.707 247708 DEBUG oslo_concurrency.lockutils [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.707 247708 DEBUG oslo_concurrency.lockutils [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.707 247708 DEBUG nova.compute.manager [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.707 247708 WARNING nova.compute.manager [req-3bd68cb5-39c7-462c-9c46-1a98db56ad5c req-a4cee998-106b-4c0e-8815-4e5edcb06609 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:27:12 compute-0 nova_compute[247704]: 2026-01-31 08:27:12.979 247708 DEBUG oslo_concurrency.lockutils [None req-8cfaac13-190e-4ca6-8630-b692fb235d29 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.041 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 10816ede-cf43-4736-aba7-48389f607d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.125 247708 DEBUG nova.storage.rbd_utils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] resizing rbd image 10816ede-cf43-4736-aba7-48389f607d30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.255 247708 DEBUG nova.objects.instance [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'migration_context' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.282 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.283 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Ensure instance console log exists: /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.284 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.284 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.284 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:13.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:13 compute-0 ceph-mon[74496]: pgmap v2807: 305 pgs: 305 active+clean; 610 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.5 MiB/s wr, 220 op/s
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.926 247708 DEBUG nova.network.neutron [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.927 247708 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.927 247708 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:13 compute-0 nova_compute[247704]: 2026-01-31 08:27:13.928 247708 DEBUG nova.network.neutron [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:27:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:27:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:13.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:27:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 616 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 264 op/s
Jan 31 08:27:14 compute-0 nova_compute[247704]: 2026-01-31 08:27:14.195 247708 DEBUG nova.network.neutron [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Successfully created port: aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:27:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:14.493 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1067723279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:14 compute-0 nova_compute[247704]: 2026-01-31 08:27:14.884 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:15.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:15 compute-0 nova_compute[247704]: 2026-01-31 08:27:15.571 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:15 compute-0 ceph-mon[74496]: pgmap v2808: 305 pgs: 305 active+clean; 616 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 264 op/s
Jan 31 08:27:15 compute-0 podman[356375]: 2026-01-31 08:27:15.914981773 +0000 UTC m=+0.079472249 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:27:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:15.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:16 compute-0 nova_compute[247704]: 2026-01-31 08:27:16.007 247708 DEBUG nova.network.neutron [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Successfully updated port: aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:27:16 compute-0 nova_compute[247704]: 2026-01-31 08:27:16.093 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:16 compute-0 nova_compute[247704]: 2026-01-31 08:27:16.094 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:16 compute-0 nova_compute[247704]: 2026-01-31 08:27:16.095 247708 DEBUG nova.network.neutron [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:27:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 661 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.8 MiB/s wr, 393 op/s
Jan 31 08:27:16 compute-0 nova_compute[247704]: 2026-01-31 08:27:16.474 247708 DEBUG nova.compute.manager [req-be7e211f-c72d-4e60-bd1d-ee8bb2caa5f9 req-f2be0c0e-c8e9-45bb-8be6-53aac22993f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:16 compute-0 nova_compute[247704]: 2026-01-31 08:27:16.475 247708 DEBUG nova.compute.manager [req-be7e211f-c72d-4e60-bd1d-ee8bb2caa5f9 req-f2be0c0e-c8e9-45bb-8be6-53aac22993f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing instance network info cache due to event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:16 compute-0 nova_compute[247704]: 2026-01-31 08:27:16.475 247708 DEBUG oslo_concurrency.lockutils [req-be7e211f-c72d-4e60-bd1d-ee8bb2caa5f9 req-f2be0c0e-c8e9-45bb-8be6-53aac22993f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:17 compute-0 nova_compute[247704]: 2026-01-31 08:27:17.099 247708 DEBUG nova.network.neutron [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:27:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:27:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:27:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:17 compute-0 ceph-mon[74496]: pgmap v2809: 305 pgs: 305 active+clean; 661 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 6.8 MiB/s wr, 393 op/s
Jan 31 08:27:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:17.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:18 compute-0 nova_compute[247704]: 2026-01-31 08:27:18.117 247708 DEBUG nova.network.neutron [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 381 op/s
Jan 31 08:27:18 compute-0 nova_compute[247704]: 2026-01-31 08:27:18.634 247708 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:19 compute-0 kernel: tapec7fbb6b-9a (unregistering): left promiscuous mode
Jan 31 08:27:19 compute-0 NetworkManager[49108]: <info>  [1769848039.1979] device (tapec7fbb6b-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:27:19 compute-0 ovn_controller[149457]: 2026-01-31T08:27:19Z|00708|binding|INFO|Releasing lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f from this chassis (sb_readonly=0)
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.204 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:19 compute-0 ovn_controller[149457]: 2026-01-31T08:27:19Z|00709|binding|INFO|Setting lport ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f down in Southbound
Jan 31 08:27:19 compute-0 ovn_controller[149457]: 2026-01-31T08:27:19Z|00710|binding|INFO|Removing iface tapec7fbb6b-9a ovn-installed in OVS
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.208 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.213 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.222 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:f3:24 10.100.0.11'], port_security=['fa:16:3e:75:f3:24 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '651d6b65-a0ee-4942-bf60-88b037eb6508', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '8', 'neutron:security_group_ids': '53c8d6c3-3fc8-4e05-8f45-013b15b35751', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.190', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eeea185c-a70e-4e33-a1d7-88e2fb6e75b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.225 160028 INFO neutron.agent.ovn.metadata.agent [-] Port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f in datapath cc669a9b-1a99-4cea-8b35-6d932fb2087c unbound from our chassis
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.227 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cc669a9b-1a99-4cea-8b35-6d932fb2087c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.228 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[57568133-efa1-4c49-a7b9-397523809d54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.229 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c namespace which is not needed anymore
Jan 31 08:27:19 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Jan 31 08:27:19 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a1.scope: Consumed 9.045s CPU time.
Jan 31 08:27:19 compute-0 systemd-machined[214448]: Machine qemu-76-instance-000000a1 terminated.
Jan 31 08:27:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:27:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:27:19 compute-0 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[356092]: [NOTICE]   (356096) : haproxy version is 2.8.14-c23fe91
Jan 31 08:27:19 compute-0 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[356092]: [NOTICE]   (356096) : path to executable is /usr/sbin/haproxy
Jan 31 08:27:19 compute-0 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[356092]: [WARNING]  (356096) : Exiting Master process...
Jan 31 08:27:19 compute-0 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[356092]: [ALERT]    (356096) : Current worker (356098) exited with code 143 (Terminated)
Jan 31 08:27:19 compute-0 neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c[356092]: [WARNING]  (356096) : All workers exited. Exiting... (0)
Jan 31 08:27:19 compute-0 systemd[1]: libpod-a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8.scope: Deactivated successfully.
Jan 31 08:27:19 compute-0 podman[356427]: 2026-01-31 08:27:19.377474747 +0000 UTC m=+0.069634118 container died a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.384 247708 INFO nova.virt.libvirt.driver [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Instance destroyed successfully.
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.385 247708 DEBUG nova.objects.instance [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.433 247708 DEBUG nova.virt.libvirt.vif [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2107337279',display_name='tempest-TestNetworkAdvancedServerOps-server-2107337279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2107337279',id=161,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWmLoXJJT8HJ06ToetETaDpi1L/xOsFHTQftPXbFuOs01yOf7k2Y65Sy9zq5BZjupL2Q9fI+9QOPb9ecuesAa9df6vRKXVMZCeU6kML4//7cHw0FciNooD1B0Aw/wAsgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1409064792',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ka16rdmx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:27:11Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=651d6b65-a0ee-4942-bf60-88b037eb6508,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.435 247708 DEBUG nova.network.os_vif_util [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.437 247708 DEBUG nova.network.os_vif_util [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.438 247708 DEBUG os_vif [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.440 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.440 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec7fbb6b-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.442 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.446 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.448 247708 INFO os_vif [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:f3:24,bridge_name='br-int',has_traffic_filtering=True,id=ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f,network=Network(cc669a9b-1a99-4cea-8b35-6d932fb2087c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec7fbb6b-9a')
Jan 31 08:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8-userdata-shm.mount: Deactivated successfully.
Jan 31 08:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-546c36bf2932cdd78fd72242fc35f517e75f0af6c78a01cb73086e8250c6bf9f-merged.mount: Deactivated successfully.
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.455 247708 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.456 247708 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:19 compute-0 podman[356427]: 2026-01-31 08:27:19.482915823 +0000 UTC m=+0.175075204 container cleanup a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:27:19 compute-0 systemd[1]: libpod-conmon-a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8.scope: Deactivated successfully.
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.498 247708 DEBUG nova.objects.instance [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 651d6b65-a0ee-4942-bf60-88b037eb6508 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:19 compute-0 podman[356469]: 2026-01-31 08:27:19.590725837 +0000 UTC m=+0.084163855 container remove a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.596 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdfbf19-308e-4f20-8be0-aaedcf7b29bc]: (4, ('Sat Jan 31 08:27:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c (a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8)\na1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8\nSat Jan 31 08:27:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c (a1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8)\na1126a41eb2c77bec8a941215d97095613f21ea5067037c569a1de7dee6867d8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.598 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[222a0b19-9046-4b5d-bf38-dbc191d64521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.599 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc669a9b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:19 compute-0 kernel: tapcc669a9b-10: left promiscuous mode
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.602 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.614 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6e62f5-22b3-4824-986f-8bf97d346886]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.627 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2e096d-f640-4f2a-ab26-febf2059f4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.629 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[37ed423f-c40e-41b0-b379-ec017f09d2dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.644 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[006b3dc9-e615-4b9d-95fd-02141028514f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827280, 'reachable_time': 35323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356485, 'error': None, 'target': 'ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.646 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cc669a9b-1a99-4cea-8b35-6d932fb2087c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:27:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:19.646 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[73e2579e-43a9-4102-ace0-707f08e25c12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:19 compute-0 systemd[1]: run-netns-ovnmeta\x2dcc669a9b\x2d1a99\x2d4cea\x2d8b35\x2d6d932fb2087c.mount: Deactivated successfully.
Jan 31 08:27:19 compute-0 ceph-mon[74496]: pgmap v2810: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 381 op/s
Jan 31 08:27:19 compute-0 nova_compute[247704]: 2026-01-31 08:27:19.830 247708 DEBUG oslo_concurrency.processutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:19.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.115 247708 DEBUG nova.network.neutron [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.166 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.166 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance network_info: |[{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.167 247708 DEBUG oslo_concurrency.lockutils [req-be7e211f-c72d-4e60-bd1d-ee8bb2caa5f9 req-f2be0c0e-c8e9-45bb-8be6-53aac22993f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.167 247708 DEBUG nova.network.neutron [req-be7e211f-c72d-4e60-bd1d-ee8bb2caa5f9 req-f2be0c0e-c8e9-45bb-8be6-53aac22993f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.9 MiB/s wr, 329 op/s
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.172 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Start _get_guest_xml network_info=[{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.178 247708 WARNING nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:27:20
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'volumes', 'vms', 'images', 'default.rgw.control']
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.188 247708 DEBUG nova.virt.libvirt.host [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.189 247708 DEBUG nova.virt.libvirt.host [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.195 247708 DEBUG nova.virt.libvirt.host [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.195 247708 DEBUG nova.virt.libvirt.host [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.196 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.197 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.197 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.197 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.197 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.198 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.198 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.198 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.198 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.198 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.198 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.199 247708 DEBUG nova.virt.hardware [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.201 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.241 247708 DEBUG nova.compute.manager [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.241 247708 DEBUG oslo_concurrency.lockutils [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3122160911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.242 247708 DEBUG oslo_concurrency.lockutils [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.242 247708 DEBUG oslo_concurrency.lockutils [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.242 247708 DEBUG nova.compute.manager [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.242 247708 WARNING nova.compute.manager [req-2b005196-7c85-4fed-9904-34bcaf9126c8 req-e6a7320d-1e79-4f74-9bc8-8edc87358dfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-unplugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.261 247708 DEBUG oslo_concurrency.processutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.266 247708 DEBUG nova.compute.provider_tree [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.329 247708 DEBUG nova.scheduler.client.report [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.508 247708 DEBUG oslo_concurrency.lockutils [None req-326aeb0f-b433-409e-8910-e9a6fba4a58c 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.576 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:27:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247627803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.644 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.672 247708 DEBUG nova.storage.rbd_utils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 10816ede-cf43-4736-aba7-48389f607d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:27:20 compute-0 nova_compute[247704]: 2026-01-31 08:27:20.684 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:27:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:27:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3122160911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3247627803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:27:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954419915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.119 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.121 247708 DEBUG nova.virt.libvirt.vif [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:27:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=165,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-uhld1k6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:27:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=10816ede-cf43-4736-aba7-48389f607d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.122 247708 DEBUG nova.network.os_vif_util [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.122 247708 DEBUG nova.network.os_vif_util [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.124 247708 DEBUG nova.objects.instance [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.331 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <uuid>10816ede-cf43-4736-aba7-48389f607d30</uuid>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <name>instance-000000a5</name>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <nova:name>multiattach-server-1</nova:name>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:27:20</nova:creationTime>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <nova:user uuid="85dfa8546d9942648bb4197c8b1947e3">tempest-AttachVolumeMultiAttachTest-2017021026-project-member</nova:user>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <nova:project uuid="48bbdbdee526499e90da7e971ede68d3">tempest-AttachVolumeMultiAttachTest-2017021026</nova:project>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <nova:port uuid="aeb09486-b68f-4fa4-a410-dd0ffaf49b05">
Jan 31 08:27:21 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <system>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <entry name="serial">10816ede-cf43-4736-aba7-48389f607d30</entry>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <entry name="uuid">10816ede-cf43-4736-aba7-48389f607d30</entry>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </system>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <os>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   </os>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <features>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   </features>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/10816ede-cf43-4736-aba7-48389f607d30_disk">
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       </source>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/10816ede-cf43-4736-aba7-48389f607d30_disk.config">
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       </source>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:27:21 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:ec:78:f9"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <target dev="tapaeb09486-b6"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/console.log" append="off"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <video>
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </video>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:27:21 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:27:21 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:27:21 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:27:21 compute-0 nova_compute[247704]: </domain>
Jan 31 08:27:21 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.332 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Preparing to wait for external event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.333 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.333 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.333 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.334 247708 DEBUG nova.virt.libvirt.vif [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:27:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=165,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-uhld1k6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:27:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=10816ede-cf43-4736-aba7-48389f607d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.334 247708 DEBUG nova.network.os_vif_util [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.335 247708 DEBUG nova.network.os_vif_util [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.335 247708 DEBUG os_vif [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.336 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.337 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.337 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.342 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.342 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaeb09486-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.343 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaeb09486-b6, col_values=(('external_ids', {'iface-id': 'aeb09486-b68f-4fa4-a410-dd0ffaf49b05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:78:f9', 'vm-uuid': '10816ede-cf43-4736-aba7-48389f607d30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:21.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.378 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:21 compute-0 NetworkManager[49108]: <info>  [1769848041.3804] manager: (tapaeb09486-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.382 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.390 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.391 247708 INFO os_vif [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6')
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.512 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.513 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.514 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:ec:78:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.515 247708 INFO nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Using config drive
Jan 31 08:27:21 compute-0 nova_compute[247704]: 2026-01-31 08:27:21.550 247708 DEBUG nova.storage.rbd_utils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 10816ede-cf43-4736-aba7-48389f607d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:27:21 compute-0 ceph-mon[74496]: pgmap v2811: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.9 MiB/s wr, 329 op/s
Jan 31 08:27:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2954419915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:21.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 287 op/s
Jan 31 08:27:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:22 compute-0 nova_compute[247704]: 2026-01-31 08:27:22.946 247708 INFO nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Creating config drive at /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/disk.config
Jan 31 08:27:22 compute-0 nova_compute[247704]: 2026-01-31 08:27:22.955 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_xkgr0vw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.096 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_xkgr0vw" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.133 247708 DEBUG nova.storage.rbd_utils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image 10816ede-cf43-4736-aba7-48389f607d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.137 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/disk.config 10816ede-cf43-4736-aba7-48389f607d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.171 247708 DEBUG nova.compute.manager [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.172 247708 DEBUG oslo_concurrency.lockutils [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.172 247708 DEBUG oslo_concurrency.lockutils [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.173 247708 DEBUG oslo_concurrency.lockutils [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.173 247708 DEBUG nova.compute.manager [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.173 247708 WARNING nova.compute.manager [req-3af41134-d726-4029-80ec-a36732781fdb req-9897c439-1a5e-41f2-b903-b0124fe0047a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.307 247708 DEBUG oslo_concurrency.processutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/disk.config 10816ede-cf43-4736-aba7-48389f607d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.308 247708 INFO nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Deleting local config drive /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/disk.config because it was imported into RBD.
Jan 31 08:27:23 compute-0 kernel: tapaeb09486-b6: entered promiscuous mode
Jan 31 08:27:23 compute-0 NetworkManager[49108]: <info>  [1769848043.3585] manager: (tapaeb09486-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Jan 31 08:27:23 compute-0 ovn_controller[149457]: 2026-01-31T08:27:23Z|00711|binding|INFO|Claiming lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for this chassis.
Jan 31 08:27:23 compute-0 ovn_controller[149457]: 2026-01-31T08:27:23Z|00712|binding|INFO|aeb09486-b68f-4fa4-a410-dd0ffaf49b05: Claiming fa:16:3e:ec:78:f9 10.100.0.11
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.359 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.368 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:23 compute-0 ovn_controller[149457]: 2026-01-31T08:27:23Z|00713|binding|INFO|Setting lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 ovn-installed in OVS
Jan 31 08:27:23 compute-0 ovn_controller[149457]: 2026-01-31T08:27:23Z|00714|binding|INFO|Setting lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 up in Southbound
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.370 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.373 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:78:f9 10.100.0.11'], port_security=['fa:16:3e:ec:78:f9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '10816ede-cf43-4736-aba7-48389f607d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=aeb09486-b68f-4fa4-a410-dd0ffaf49b05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.376 160028 INFO neutron.agent.ovn.metadata.agent [-] Port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad bound to our chassis
Jan 31 08:27:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:23.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.381 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:27:23 compute-0 systemd-udevd[356645]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.396 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9d75f899-038b-4617-bef8-a3d82d3d51b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:23 compute-0 systemd-machined[214448]: New machine qemu-77-instance-000000a5.
Jan 31 08:27:23 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-000000a5.
Jan 31 08:27:23 compute-0 NetworkManager[49108]: <info>  [1769848043.4159] device (tapaeb09486-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:27:23 compute-0 NetworkManager[49108]: <info>  [1769848043.4207] device (tapaeb09486-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.426 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[dae175e1-8909-49c6-8f1e-90b9a700960c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.430 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[55a793c0-1624-47b9-86a3-449d4a87a0fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.468 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5cac62-f519-43bd-baa4-20b60b34cb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.486 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[aba6bf12-1bec-425f-be42-d02b3928416d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813771, 'reachable_time': 37081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356659, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.501 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7f74fae2-9218-4867-a00d-457a36e254c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813783, 'tstamp': 813783}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356660, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813786, 'tstamp': 813786}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356660, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.503 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.550 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.551 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.551 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:27:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:27:23.552 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:27:23 compute-0 ceph-mon[74496]: pgmap v2812: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 287 op/s
Jan 31 08:27:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.953 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848043.9516752, 10816ede-cf43-4736-aba7-48389f607d30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.953 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] VM Started (Lifecycle Event)
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.977 247708 DEBUG nova.compute.manager [req-5fd8de71-b9e2-4a5f-b50f-5479f414b61f req-f5b53ae4-91e6-45d6-8d59-077e1b199bc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.978 247708 DEBUG oslo_concurrency.lockutils [req-5fd8de71-b9e2-4a5f-b50f-5479f414b61f req-f5b53ae4-91e6-45d6-8d59-077e1b199bc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.978 247708 DEBUG oslo_concurrency.lockutils [req-5fd8de71-b9e2-4a5f-b50f-5479f414b61f req-f5b53ae4-91e6-45d6-8d59-077e1b199bc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.978 247708 DEBUG oslo_concurrency.lockutils [req-5fd8de71-b9e2-4a5f-b50f-5479f414b61f req-f5b53ae4-91e6-45d6-8d59-077e1b199bc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.979 247708 DEBUG nova.compute.manager [req-5fd8de71-b9e2-4a5f-b50f-5479f414b61f req-f5b53ae4-91e6-45d6-8d59-077e1b199bc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Processing event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.980 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.984 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.989 247708 INFO nova.virt.libvirt.driver [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance spawned successfully.
Jan 31 08:27:23 compute-0 nova_compute[247704]: 2026-01-31 08:27:23.989 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.040 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.044 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.065 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.065 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.066 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.067 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.068 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.068 247708 DEBUG nova.virt.libvirt.driver [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.089 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.090 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848043.9524367, 10816ede-cf43-4736-aba7-48389f607d30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.090 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] VM Paused (Lifecycle Event)
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.128 247708 DEBUG nova.network.neutron [req-be7e211f-c72d-4e60-bd1d-ee8bb2caa5f9 req-f2be0c0e-c8e9-45bb-8be6-53aac22993f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updated VIF entry in instance network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.128 247708 DEBUG nova.network.neutron [req-be7e211f-c72d-4e60-bd1d-ee8bb2caa5f9 req-f2be0c0e-c8e9-45bb-8be6-53aac22993f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.159 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.164 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848043.98371, 10816ede-cf43-4736-aba7-48389f607d30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.164 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] VM Resumed (Lifecycle Event)
Jan 31 08:27:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.208 247708 DEBUG oslo_concurrency.lockutils [req-be7e211f-c72d-4e60-bd1d-ee8bb2caa5f9 req-f2be0c0e-c8e9-45bb-8be6-53aac22993f3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.215 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.219 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.237 247708 INFO nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Took 12.02 seconds to spawn the instance on the hypervisor.
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.238 247708 DEBUG nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.257 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.325 247708 INFO nova.compute.manager [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Took 13.30 seconds to build instance.
Jan 31 08:27:24 compute-0 nova_compute[247704]: 2026-01-31 08:27:24.355 247708 DEBUG oslo_concurrency.lockutils [None req-c1beddb9-7b8c-443a-9220-081c0e683d1e 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:25 compute-0 nova_compute[247704]: 2026-01-31 08:27:25.328 247708 DEBUG nova.compute.manager [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:25 compute-0 nova_compute[247704]: 2026-01-31 08:27:25.328 247708 DEBUG nova.compute.manager [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing instance network info cache due to event network-changed-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:25 compute-0 nova_compute[247704]: 2026-01-31 08:27:25.329 247708 DEBUG oslo_concurrency.lockutils [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:25 compute-0 nova_compute[247704]: 2026-01-31 08:27:25.329 247708 DEBUG oslo_concurrency.lockutils [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:25 compute-0 nova_compute[247704]: 2026-01-31 08:27:25.329 247708 DEBUG nova.network.neutron [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Refreshing network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:25.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:25 compute-0 nova_compute[247704]: 2026-01-31 08:27:25.578 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:26 compute-0 nova_compute[247704]: 2026-01-31 08:27:26.086 247708 DEBUG nova.compute.manager [req-9fe8722e-c668-4c92-953c-192a50d45545 req-65201d6d-83aa-486a-9991-5a0f8241b6e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:26 compute-0 nova_compute[247704]: 2026-01-31 08:27:26.086 247708 DEBUG oslo_concurrency.lockutils [req-9fe8722e-c668-4c92-953c-192a50d45545 req-65201d6d-83aa-486a-9991-5a0f8241b6e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:26 compute-0 nova_compute[247704]: 2026-01-31 08:27:26.086 247708 DEBUG oslo_concurrency.lockutils [req-9fe8722e-c668-4c92-953c-192a50d45545 req-65201d6d-83aa-486a-9991-5a0f8241b6e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:26 compute-0 nova_compute[247704]: 2026-01-31 08:27:26.087 247708 DEBUG oslo_concurrency.lockutils [req-9fe8722e-c668-4c92-953c-192a50d45545 req-65201d6d-83aa-486a-9991-5a0f8241b6e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:26 compute-0 nova_compute[247704]: 2026-01-31 08:27:26.087 247708 DEBUG nova.compute.manager [req-9fe8722e-c668-4c92-953c-192a50d45545 req-65201d6d-83aa-486a-9991-5a0f8241b6e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:26 compute-0 nova_compute[247704]: 2026-01-31 08:27:26.087 247708 WARNING nova.compute.manager [req-9fe8722e-c668-4c92-953c-192a50d45545 req-65201d6d-83aa-486a-9991-5a0f8241b6e0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state active and task_state None.
Jan 31 08:27:26 compute-0 ceph-mon[74496]: pgmap v2813: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 269 op/s
Jan 31 08:27:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 250 op/s
Jan 31 08:27:26 compute-0 nova_compute[247704]: 2026-01-31 08:27:26.378 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:27 compute-0 ceph-mon[74496]: pgmap v2814: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 250 op/s
Jan 31 08:27:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:27.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:28 compute-0 nova_compute[247704]: 2026-01-31 08:27:28.142 247708 DEBUG nova.network.neutron [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updated VIF entry in instance network info cache for port ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:28 compute-0 nova_compute[247704]: 2026-01-31 08:27:28.143 247708 DEBUG nova.network.neutron [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Updating instance_info_cache with network_info: [{"id": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "address": "fa:16:3e:75:f3:24", "network": {"id": "cc669a9b-1a99-4cea-8b35-6d932fb2087c", "bridge": "br-int", "label": "tempest-network-smoke--1449295874", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec7fbb6b-9a", "ovs_interfaceid": "ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 32 KiB/s wr, 70 op/s
Jan 31 08:27:28 compute-0 nova_compute[247704]: 2026-01-31 08:27:28.193 247708 DEBUG oslo_concurrency.lockutils [req-22e2ebb5-2416-4715-91a4-7a06c0bb65dc req-231f2606-1b3a-44f0-bb72-8cfb710a887a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-651d6b65-a0ee-4942-bf60-88b037eb6508" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:28 compute-0 podman[356706]: 2026-01-31 08:27:28.892810502 +0000 UTC m=+0.052793424 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 08:27:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:29.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Jan 31 08:27:29 compute-0 ceph-mon[74496]: pgmap v2815: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 32 KiB/s wr, 70 op/s
Jan 31 08:27:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/35819088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Jan 31 08:27:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Jan 31 08:27:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 660 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 398 KiB/s wr, 94 op/s
Jan 31 08:27:30 compute-0 ceph-mon[74496]: osdmap e360: 3 total, 3 up, 3 in
Jan 31 08:27:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3654738820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:30 compute-0 nova_compute[247704]: 2026-01-31 08:27:30.580 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:30 compute-0 nova_compute[247704]: 2026-01-31 08:27:30.994 247708 DEBUG nova.compute.manager [req-58bf2f22-16b7-4623-8c56-7bd73cf2c3fe req-9c3b73d5-a74c-41eb-967d-0e58930d0f67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:30 compute-0 nova_compute[247704]: 2026-01-31 08:27:30.994 247708 DEBUG nova.compute.manager [req-58bf2f22-16b7-4623-8c56-7bd73cf2c3fe req-9c3b73d5-a74c-41eb-967d-0e58930d0f67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing instance network info cache due to event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:27:30 compute-0 nova_compute[247704]: 2026-01-31 08:27:30.995 247708 DEBUG oslo_concurrency.lockutils [req-58bf2f22-16b7-4623-8c56-7bd73cf2c3fe req-9c3b73d5-a74c-41eb-967d-0e58930d0f67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:30 compute-0 nova_compute[247704]: 2026-01-31 08:27:30.995 247708 DEBUG oslo_concurrency.lockutils [req-58bf2f22-16b7-4623-8c56-7bd73cf2c3fe req-9c3b73d5-a74c-41eb-967d-0e58930d0f67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:30 compute-0 nova_compute[247704]: 2026-01-31 08:27:30.995 247708 DEBUG nova.network.neutron [req-58bf2f22-16b7-4623-8c56-7bd73cf2c3fe req-9c3b73d5-a74c-41eb-967d-0e58930d0f67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:27:31 compute-0 sudo[356726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:31 compute-0 sudo[356726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:31 compute-0 sudo[356726]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:31 compute-0 sudo[356751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:31 compute-0 sudo[356751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:31 compute-0 sudo[356751]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:31 compute-0 nova_compute[247704]: 2026-01-31 08:27:31.381 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:31.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:31 compute-0 ceph-mon[74496]: pgmap v2817: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 660 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 398 KiB/s wr, 94 op/s
Jan 31 08:27:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/965567276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:31 compute-0 nova_compute[247704]: 2026-01-31 08:27:31.588 247708 DEBUG nova.compute.manager [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:27:31 compute-0 nova_compute[247704]: 2026-01-31 08:27:31.589 247708 DEBUG oslo_concurrency.lockutils [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:31 compute-0 nova_compute[247704]: 2026-01-31 08:27:31.589 247708 DEBUG oslo_concurrency.lockutils [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:31 compute-0 nova_compute[247704]: 2026-01-31 08:27:31.589 247708 DEBUG oslo_concurrency.lockutils [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "651d6b65-a0ee-4942-bf60-88b037eb6508-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:31 compute-0 nova_compute[247704]: 2026-01-31 08:27:31.589 247708 DEBUG nova.compute.manager [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] No waiting events found dispatching network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:27:31 compute-0 nova_compute[247704]: 2026-01-31 08:27:31.590 247708 WARNING nova.compute.manager [req-2b5601e5-e3c3-40ad-a8d8-1b4763b86351 req-60c26496-1f25-4769-b9c8-250a5a0fce81 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Received unexpected event network-vif-plugged-ec7fbb6b-9a5c-4cbd-a8b5-48e819e0265f for instance with vm_state resized and task_state resize_reverting.
Jan 31 08:27:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 674 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 980 KiB/s wr, 101 op/s
Jan 31 08:27:32 compute-0 nova_compute[247704]: 2026-01-31 08:27:32.645 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:33.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:33 compute-0 ceph-mon[74496]: pgmap v2818: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 674 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 980 KiB/s wr, 101 op/s
Jan 31 08:27:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:33.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 689 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Jan 31 08:27:34 compute-0 nova_compute[247704]: 2026-01-31 08:27:34.361 247708 DEBUG nova.network.neutron [req-58bf2f22-16b7-4623-8c56-7bd73cf2c3fe req-9c3b73d5-a74c-41eb-967d-0e58930d0f67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updated VIF entry in instance network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:27:34 compute-0 nova_compute[247704]: 2026-01-31 08:27:34.362 247708 DEBUG nova.network.neutron [req-58bf2f22-16b7-4623-8c56-7bd73cf2c3fe req-9c3b73d5-a74c-41eb-967d-0e58930d0f67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:34 compute-0 nova_compute[247704]: 2026-01-31 08:27:34.382 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848039.3807607, 651d6b65-a0ee-4942-bf60-88b037eb6508 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:27:34 compute-0 nova_compute[247704]: 2026-01-31 08:27:34.383 247708 INFO nova.compute.manager [-] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] VM Stopped (Lifecycle Event)
Jan 31 08:27:34 compute-0 nova_compute[247704]: 2026-01-31 08:27:34.510 247708 DEBUG nova.compute.manager [None req-893a4ba5-18a0-4036-8cef-488417764038 - - - - - -] [instance: 651d6b65-a0ee-4942-bf60-88b037eb6508] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:27:34 compute-0 nova_compute[247704]: 2026-01-31 08:27:34.545 247708 DEBUG oslo_concurrency.lockutils [req-58bf2f22-16b7-4623-8c56-7bd73cf2c3fe req-9c3b73d5-a74c-41eb-967d-0e58930d0f67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:35.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:35 compute-0 nova_compute[247704]: 2026-01-31 08:27:35.581 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012855073981016192 of space, bias 1.0, pg target 3.8565221943048575 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004322102410199144 of space, bias 1.0, pg target 1.283664415829146 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:27:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 08:27:35 compute-0 ceph-mon[74496]: pgmap v2819: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 689 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Jan 31 08:27:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:35.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 689 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Jan 31 08:27:36 compute-0 nova_compute[247704]: 2026-01-31 08:27:36.385 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:36 compute-0 nova_compute[247704]: 2026-01-31 08:27:36.719 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1469878505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:37.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:37 compute-0 sudo[356780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:37 compute-0 sudo[356780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:37 compute-0 sudo[356780]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Jan 31 08:27:37 compute-0 sudo[356805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:37 compute-0 sudo[356805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:37 compute-0 sudo[356805]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Jan 31 08:27:37 compute-0 sudo[356830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:37 compute-0 sudo[356830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Jan 31 08:27:37 compute-0 sudo[356830]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:37 compute-0 sudo[356855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:27:37 compute-0 sudo[356855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:27:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:37.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:27:38 compute-0 ceph-mon[74496]: pgmap v2820: 305 pgs: 305 active+clean; 689 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Jan 31 08:27:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/953468057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1320643255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 695 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.9 MiB/s wr, 190 op/s
Jan 31 08:27:38 compute-0 nova_compute[247704]: 2026-01-31 08:27:38.186 247708 DEBUG oslo_concurrency.lockutils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:38 compute-0 nova_compute[247704]: 2026-01-31 08:27:38.187 247708 DEBUG oslo_concurrency.lockutils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:38 compute-0 nova_compute[247704]: 2026-01-31 08:27:38.229 247708 DEBUG nova.objects.instance [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:38 compute-0 sudo[356855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:38 compute-0 nova_compute[247704]: 2026-01-31 08:27:38.342 247708 DEBUG oslo_concurrency.lockutils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:38 compute-0 ovn_controller[149457]: 2026-01-31T08:27:38Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:78:f9 10.100.0.11
Jan 31 08:27:38 compute-0 ovn_controller[149457]: 2026-01-31T08:27:38Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:78:f9 10.100.0.11
Jan 31 08:27:38 compute-0 nova_compute[247704]: 2026-01-31 08:27:38.930 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:38 compute-0 nova_compute[247704]: 2026-01-31 08:27:38.988 247708 DEBUG oslo_concurrency.lockutils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:38 compute-0 nova_compute[247704]: 2026-01-31 08:27:38.989 247708 DEBUG oslo_concurrency.lockutils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:38 compute-0 nova_compute[247704]: 2026-01-31 08:27:38.989 247708 INFO nova.compute.manager [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Attaching volume c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4 to /dev/vdb
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.252 247708 DEBUG os_brick.utils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.255 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.266 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.266 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9f2091-cbed-4e0f-97a4-68f3fb9787f6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.267 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.273 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.273 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[fd351707-0ed1-430d-86c6-2b881a121a2f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.274 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.284 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.285 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4afaa8-d578-4de1-86b1-772db75a52a5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.286 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[e928dea6-75d1-4c50-8c3f-fb27659da001]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.286 247708 DEBUG oslo_concurrency.processutils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.320 247708 DEBUG oslo_concurrency.processutils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.323 247708 DEBUG os_brick.initiator.connectors.lightos [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.323 247708 DEBUG os_brick.initiator.connectors.lightos [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.323 247708 DEBUG os_brick.initiator.connectors.lightos [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.324 247708 DEBUG os_brick.utils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.325 247708 DEBUG nova.virt.block_device [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating existing volume attachment record: cf5beae6-16a5-4e99-b7c5-ed0efcabe671 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:27:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:39.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.606 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:39 compute-0 ceph-mon[74496]: osdmap e361: 3 total, 3 up, 3 in
Jan 31 08:27:39 compute-0 nova_compute[247704]: 2026-01-31 08:27:39.831 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:27:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 713 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.5 MiB/s wr, 199 op/s
Jan 31 08:27:40 compute-0 sshd-session[356918]: Invalid user ubuntu from 45.148.10.240 port 34638
Jan 31 08:27:40 compute-0 sshd-session[356918]: Connection closed by invalid user ubuntu 45.148.10.240 port 34638 [preauth]
Jan 31 08:27:40 compute-0 nova_compute[247704]: 2026-01-31 08:27:40.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:40 compute-0 nova_compute[247704]: 2026-01-31 08:27:40.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:27:40 compute-0 nova_compute[247704]: 2026-01-31 08:27:40.584 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:40 compute-0 nova_compute[247704]: 2026-01-31 08:27:40.706 247708 DEBUG nova.objects.instance [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'flavor' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:27:40 compute-0 ceph-mon[74496]: pgmap v2822: 305 pgs: 305 active+clean; 695 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.9 MiB/s wr, 190 op/s
Jan 31 08:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2333100646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:27:40 compute-0 nova_compute[247704]: 2026-01-31 08:27:40.845 247708 DEBUG nova.virt.libvirt.driver [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Attempting to attach volume c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:27:40 compute-0 nova_compute[247704]: 2026-01-31 08:27:40.850 247708 DEBUG nova.virt.libvirt.guest [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:27:40 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:27:40 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4">
Jan 31 08:27:40 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:27:40 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:27:40 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:27:40 compute-0 nova_compute[247704]:   </source>
Jan 31 08:27:40 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:27:40 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:27:40 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:27:40 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:27:40 compute-0 nova_compute[247704]:   <serial>c66e3c0d-56c2-4ac0-89fe-027a1e7afcf4</serial>
Jan 31 08:27:40 compute-0 nova_compute[247704]:   <shareable/>
Jan 31 08:27:40 compute-0 nova_compute[247704]: </disk>
Jan 31 08:27:40 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:27:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:27:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:27:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:27:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:27:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:27:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 716d8294-3946-4e51-8f8e-50c2134c0236 does not exist
Jan 31 08:27:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bc265bb4-3ae2-414b-8f58-d650ca74907c does not exist
Jan 31 08:27:40 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 518e5b32-6b1e-4529-9ff5-526f4f973d47 does not exist
Jan 31 08:27:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:27:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:27:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:27:40 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:27:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:27:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:27:40 compute-0 sudo[356938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:40 compute-0 sudo[356938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:40 compute-0 sudo[356938]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:41 compute-0 sudo[356965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:41 compute-0 sudo[356965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:41 compute-0 sudo[356965]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:41 compute-0 sudo[356990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:41 compute-0 sudo[356990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:41 compute-0 sudo[356990]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:41 compute-0 sudo[357015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:27:41 compute-0 sudo[357015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:41 compute-0 nova_compute[247704]: 2026-01-31 08:27:41.388 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:41.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:41 compute-0 podman[357079]: 2026-01-31 08:27:41.465731862 +0000 UTC m=+0.025634889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:27:41 compute-0 podman[357079]: 2026-01-31 08:27:41.886313615 +0000 UTC m=+0.446216642 container create b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:41 compute-0 nova_compute[247704]: 2026-01-31 08:27:41.939 247708 DEBUG nova.virt.libvirt.driver [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:41 compute-0 nova_compute[247704]: 2026-01-31 08:27:41.940 247708 DEBUG nova.virt.libvirt.driver [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:41 compute-0 nova_compute[247704]: 2026-01-31 08:27:41.940 247708 DEBUG nova.virt.libvirt.driver [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:27:41 compute-0 nova_compute[247704]: 2026-01-31 08:27:41.940 247708 DEBUG nova.virt.libvirt.driver [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:ec:78:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:27:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:41.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:42 compute-0 ceph-mon[74496]: pgmap v2823: 305 pgs: 305 active+clean; 713 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.5 MiB/s wr, 199 op/s
Jan 31 08:27:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:27:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:27:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:27:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:27:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:27:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:27:42 compute-0 systemd[1]: Started libpod-conmon-b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0.scope.
Jan 31 08:27:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 210 op/s
Jan 31 08:27:42 compute-0 podman[357079]: 2026-01-31 08:27:42.298072843 +0000 UTC m=+0.857975880 container init b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:42 compute-0 podman[357079]: 2026-01-31 08:27:42.306637062 +0000 UTC m=+0.866540079 container start b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:27:42 compute-0 inspiring_heyrovsky[357096]: 167 167
Jan 31 08:27:42 compute-0 systemd[1]: libpod-b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0.scope: Deactivated successfully.
Jan 31 08:27:42 compute-0 conmon[357096]: conmon b8da857c4ceab4fcfe97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0.scope/container/memory.events
Jan 31 08:27:42 compute-0 podman[357079]: 2026-01-31 08:27:42.402756629 +0000 UTC m=+0.962659656 container attach b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:27:42 compute-0 podman[357079]: 2026-01-31 08:27:42.40526441 +0000 UTC m=+0.965167427 container died b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:27:42 compute-0 nova_compute[247704]: 2026-01-31 08:27:42.469 247708 DEBUG oslo_concurrency.lockutils [None req-cab50a1f-226e-463a-b3a5-de3a806aa1ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:42 compute-0 nova_compute[247704]: 2026-01-31 08:27:42.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f34e09bc4c45138424d88d499a8cfe0446e66cc93e9189cc6cd9173603de6c0-merged.mount: Deactivated successfully.
Jan 31 08:27:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:42 compute-0 podman[357079]: 2026-01-31 08:27:42.829771949 +0000 UTC m=+1.389674956 container remove b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_heyrovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:27:42 compute-0 systemd[1]: libpod-conmon-b8da857c4ceab4fcfe97aac0b9d926669d9ba1b429a4d7c8a6f365beb85c31b0.scope: Deactivated successfully.
Jan 31 08:27:43 compute-0 podman[357119]: 2026-01-31 08:27:43.068399041 +0000 UTC m=+0.122783092 container create 5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:43 compute-0 podman[357119]: 2026-01-31 08:27:42.979016229 +0000 UTC m=+0.033400280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:27:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:43 compute-0 systemd[1]: Started libpod-conmon-5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6.scope.
Jan 31 08:27:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e6d5ddc27e5f4173eaf9ddde5b61c63bdc6dca4b16d5d7b2b0be6050f428d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e6d5ddc27e5f4173eaf9ddde5b61c63bdc6dca4b16d5d7b2b0be6050f428d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e6d5ddc27e5f4173eaf9ddde5b61c63bdc6dca4b16d5d7b2b0be6050f428d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e6d5ddc27e5f4173eaf9ddde5b61c63bdc6dca4b16d5d7b2b0be6050f428d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3e6d5ddc27e5f4173eaf9ddde5b61c63bdc6dca4b16d5d7b2b0be6050f428d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:43 compute-0 ceph-mon[74496]: pgmap v2824: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 210 op/s
Jan 31 08:27:43 compute-0 podman[357119]: 2026-01-31 08:27:43.810712444 +0000 UTC m=+0.865096545 container init 5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:27:43 compute-0 podman[357119]: 2026-01-31 08:27:43.823658831 +0000 UTC m=+0.878042852 container start 5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 08:27:43 compute-0 podman[357119]: 2026-01-31 08:27:43.879828428 +0000 UTC m=+0.934212529 container attach 5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:27:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.6 MiB/s wr, 204 op/s
Jan 31 08:27:44 compute-0 nova_compute[247704]: 2026-01-31 08:27:44.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:44 compute-0 practical_blackburn[357135]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:27:44 compute-0 practical_blackburn[357135]: --> relative data size: 1.0
Jan 31 08:27:44 compute-0 practical_blackburn[357135]: --> All data devices are unavailable
Jan 31 08:27:44 compute-0 systemd[1]: libpod-5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6.scope: Deactivated successfully.
Jan 31 08:27:44 compute-0 podman[357119]: 2026-01-31 08:27:44.668327533 +0000 UTC m=+1.722711554 container died 5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:44 compute-0 nova_compute[247704]: 2026-01-31 08:27:44.806 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:44 compute-0 nova_compute[247704]: 2026-01-31 08:27:44.806 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:44 compute-0 nova_compute[247704]: 2026-01-31 08:27:44.807 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:44 compute-0 nova_compute[247704]: 2026-01-31 08:27:44.807 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:27:44 compute-0 nova_compute[247704]: 2026-01-31 08:27:44.808 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3e6d5ddc27e5f4173eaf9ddde5b61c63bdc6dca4b16d5d7b2b0be6050f428d4-merged.mount: Deactivated successfully.
Jan 31 08:27:45 compute-0 podman[357119]: 2026-01-31 08:27:45.200372159 +0000 UTC m=+2.254756190 container remove 5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_blackburn, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:27:45 compute-0 sudo[357015]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:45 compute-0 systemd[1]: libpod-conmon-5c3e19c34c22c8d0c794cb9ac02649ff952165360131355ccf1f82836d1045b6.scope: Deactivated successfully.
Jan 31 08:27:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:27:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373990389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:45 compute-0 sudo[357183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:45 compute-0 sudo[357183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:45 compute-0 sudo[357183]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.315 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:45 compute-0 sudo[357210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:45 compute-0 sudo[357210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:45 compute-0 sudo[357210]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:45 compute-0 sudo[357236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:45 compute-0 sudo[357236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:45 compute-0 sudo[357236]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:27:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:45.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:27:45 compute-0 sudo[357261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:27:45 compute-0 sudo[357261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.586 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:45 compute-0 podman[357327]: 2026-01-31 08:27:45.802740401 +0000 UTC m=+0.096316533 container create bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:45 compute-0 podman[357327]: 2026-01-31 08:27:45.731319259 +0000 UTC m=+0.024895381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:27:45 compute-0 systemd[1]: Started libpod-conmon-bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763.scope.
Jan 31 08:27:45 compute-0 ceph-mon[74496]: pgmap v2825: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.6 MiB/s wr, 204 op/s
Jan 31 08:27:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/373990389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.935 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.935 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.935 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.938 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.939 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.942 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.942 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.945 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.945 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 nova_compute[247704]: 2026-01-31 08:27:45.946 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:27:45 compute-0 podman[357327]: 2026-01-31 08:27:45.975811414 +0000 UTC m=+0.269387526 container init bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:27:45 compute-0 podman[357327]: 2026-01-31 08:27:45.98623394 +0000 UTC m=+0.279810032 container start bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 08:27:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:45.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:45 compute-0 nice_borg[357344]: 167 167
Jan 31 08:27:45 compute-0 systemd[1]: libpod-bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763.scope: Deactivated successfully.
Jan 31 08:27:45 compute-0 conmon[357344]: conmon bedfaadefa8f0a47f351 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763.scope/container/memory.events
Jan 31 08:27:46 compute-0 podman[357327]: 2026-01-31 08:27:46.01480737 +0000 UTC m=+0.308383462 container attach bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:27:46 compute-0 podman[357327]: 2026-01-31 08:27:46.016479301 +0000 UTC m=+0.310055393 container died bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfb590ed40b22ad20189ba740323ad9e3ceff9ee602fa9e584dfedcc127a143f-merged.mount: Deactivated successfully.
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.162 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.163 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3460MB free_disk=20.693790435791016GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.164 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.164 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 211 op/s
Jan 31 08:27:46 compute-0 podman[357327]: 2026-01-31 08:27:46.271381671 +0000 UTC m=+0.564957763 container remove bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_borg, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:27:46 compute-0 systemd[1]: libpod-conmon-bedfaadefa8f0a47f351bed403a871cc55b79a8ad3ce2a7c630d4d2be4e79763.scope: Deactivated successfully.
Jan 31 08:27:46 compute-0 podman[357349]: 2026-01-31 08:27:46.340235429 +0000 UTC m=+0.316672895 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.390 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.414 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.416 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance bbc5f09e-71d7-4009-bdf6-06e95b32574c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.416 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.416 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 10816ede-cf43-4736-aba7-48389f607d30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.416 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.417 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.437 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.462 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:27:46 compute-0 nova_compute[247704]: 2026-01-31 08:27:46.462 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:27:46 compute-0 podman[357394]: 2026-01-31 08:27:46.474179644 +0000 UTC m=+0.091738080 container create 03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:27:46 compute-0 podman[357394]: 2026-01-31 08:27:46.41568876 +0000 UTC m=+0.033247226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:27:46 compute-0 systemd[1]: Started libpod-conmon-03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12.scope.
Jan 31 08:27:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87a1509b99f20070f3e9fad59bf41887a59ce1ffe076368c231c4ecf0f1a0dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87a1509b99f20070f3e9fad59bf41887a59ce1ffe076368c231c4ecf0f1a0dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87a1509b99f20070f3e9fad59bf41887a59ce1ffe076368c231c4ecf0f1a0dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87a1509b99f20070f3e9fad59bf41887a59ce1ffe076368c231c4ecf0f1a0dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:46 compute-0 podman[357394]: 2026-01-31 08:27:46.607247517 +0000 UTC m=+0.224805953 container init 03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:27:46 compute-0 podman[357394]: 2026-01-31 08:27:46.611926403 +0000 UTC m=+0.229484819 container start 03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:27:46 compute-0 podman[357394]: 2026-01-31 08:27:46.647393662 +0000 UTC m=+0.264952118 container attach 03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:27:47 compute-0 nova_compute[247704]: 2026-01-31 08:27:47.008 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:27:47 compute-0 nova_compute[247704]: 2026-01-31 08:27:47.046 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:27:47 compute-0 nova_compute[247704]: 2026-01-31 08:27:47.179 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]: {
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:     "0": [
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:         {
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "devices": [
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "/dev/loop3"
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             ],
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "lv_name": "ceph_lv0",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "lv_size": "7511998464",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "name": "ceph_lv0",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "tags": {
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.cluster_name": "ceph",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.crush_device_class": "",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.encrypted": "0",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.osd_id": "0",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.type": "block",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:                 "ceph.vdo": "0"
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             },
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "type": "block",
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:             "vg_name": "ceph_vg0"
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:         }
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]:     ]
Jan 31 08:27:47 compute-0 sad_heisenberg[357414]: }
Jan 31 08:27:47 compute-0 systemd[1]: libpod-03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12.scope: Deactivated successfully.
Jan 31 08:27:47 compute-0 podman[357394]: 2026-01-31 08:27:47.373687002 +0000 UTC m=+0.991245418 container died 03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:27:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:27:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553281191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:27:47 compute-0 nova_compute[247704]: 2026-01-31 08:27:47.654 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:47 compute-0 nova_compute[247704]: 2026-01-31 08:27:47.663 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:27:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:27:47 compute-0 nova_compute[247704]: 2026-01-31 08:27:47.789 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:27:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:47.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Jan 31 08:27:48 compute-0 nova_compute[247704]: 2026-01-31 08:27:48.211 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:27:48 compute-0 nova_compute[247704]: 2026-01-31 08:27:48.211 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:49.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:49.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Jan 31 08:27:50 compute-0 nova_compute[247704]: 2026-01-31 08:27:50.624 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:51 compute-0 sudo[357457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:51 compute-0 sudo[357457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:51 compute-0 sudo[357457]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:51 compute-0 nova_compute[247704]: 2026-01-31 08:27:51.392 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:51 compute-0 sudo[357482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:27:51 compute-0 sudo[357482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:27:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:27:51 compute-0 sudo[357482]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:51.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 727 KiB/s wr, 134 op/s
Jan 31 08:27:52 compute-0 nova_compute[247704]: 2026-01-31 08:27:52.211 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:52 compute-0 nova_compute[247704]: 2026-01-31 08:27:52.211 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:27:52 compute-0 nova_compute[247704]: 2026-01-31 08:27:52.912 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:27:52 compute-0 nova_compute[247704]: 2026-01-31 08:27:52.913 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:27:52 compute-0 nova_compute[247704]: 2026-01-31 08:27:52.913 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:27:53 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 08:27:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:53.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:54.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 41 KiB/s wr, 125 op/s
Jan 31 08:27:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:55.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:55 compute-0 nova_compute[247704]: 2026-01-31 08:27:55.626 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:56.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 38 KiB/s wr, 94 op/s
Jan 31 08:27:56 compute-0 nova_compute[247704]: 2026-01-31 08:27:56.394 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:27:57 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 08:27:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:27:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:57.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:27:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:58.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 297 KiB/s rd, 13 KiB/s wr, 23 op/s
Jan 31 08:27:58 compute-0 nova_compute[247704]: 2026-01-31 08:27:58.979 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating instance_info_cache with network_info: [{"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:27:59 compute-0 nova_compute[247704]: 2026-01-31 08:27:59.397 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:27:59 compute-0 nova_compute[247704]: 2026-01-31 08:27:59.398 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:27:59 compute-0 nova_compute[247704]: 2026-01-31 08:27:59.398 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:59 compute-0 nova_compute[247704]: 2026-01-31 08:27:59.399 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:27:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:27:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:59.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:27:59 compute-0 nova_compute[247704]: 2026-01-31 08:27:59.743 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:00.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 117 KiB/s rd, 29 KiB/s wr, 19 op/s
Jan 31 08:28:00 compute-0 nova_compute[247704]: 2026-01-31 08:28:00.629 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:01 compute-0 ceph-mds[94769]: mds.beacon.cephfs.compute-0.voybui missed beacon ack from the monitors
Jan 31 08:28:01 compute-0 nova_compute[247704]: 2026-01-31 08:28:01.396 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:01.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).mds e9 check_health: resetting beacon timeouts due to mon delay (slow election?) of 14.0228 seconds
Jan 31 08:28:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:01 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 9.319046021s
Jan 31 08:28:01 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 9.319046974s
Jan 31 08:28:01 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.319399834s, txc = 0x55f9da6a2c00
Jan 31 08:28:01 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 08:28:01 compute-0 ceph-mon[74496]: paxos.0).electionLogic(47) init, last seen epoch 47, mid-election, bumping
Jan 31 08:28:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:02.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 28 KiB/s wr, 12 op/s
Jan 31 08:28:02 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b87a1509b99f20070f3e9fad59bf41887a59ce1ffe076368c231c4ecf0f1a0dd-merged.mount: Deactivated successfully.
Jan 31 08:28:02 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:28:02 compute-0 ceph-mon[74496]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 08:28:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 08:28:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3308763206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:03 compute-0 ceph-mon[74496]: pgmap v2826: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 211 op/s
Jan 31 08:28:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3553281191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:03 compute-0 podman[357394]: 2026-01-31 08:28:03.790225139 +0000 UTC m=+17.407783565 container remove 03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:28:03 compute-0 sudo[357261]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:28:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:28:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:28:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Jan 31 08:28:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.hhuoua(active, since 82m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:28:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 08:28:03 compute-0 systemd[1]: libpod-conmon-03bbb890db6447b2949c10926fefc1cb68cdf666a8a55c1c2aa8c89c5c468a12.scope: Deactivated successfully.
Jan 31 08:28:03 compute-0 podman[357513]: 2026-01-31 08:28:03.882937362 +0000 UTC m=+4.034680776 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:28:03 compute-0 sudo[357535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:03 compute-0 sudo[357535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:03 compute-0 sudo[357535]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:03 compute-0 sudo[357562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:28:03 compute-0 sudo[357562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:03 compute-0 sudo[357562]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:04 compute-0 sudo[357587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:04 compute-0 sudo[357587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:04 compute-0 sudo[357587]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:04.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:04 compute-0 sudo[357612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:28:04 compute-0 sudo[357612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 28 KiB/s wr, 12 op/s
Jan 31 08:28:04 compute-0 podman[357678]: 2026-01-31 08:28:04.404313206 +0000 UTC m=+0.024625694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:28:04 compute-0 podman[357678]: 2026-01-31 08:28:04.915641894 +0000 UTC m=+0.535954352 container create 565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:28:05 compute-0 systemd[1]: Started libpod-conmon-565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d.scope.
Jan 31 08:28:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:28:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:05.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: pgmap v2827: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3567843601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-0 ceph-mon[74496]: pgmap v2828: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/933239023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-0 ceph-mon[74496]: pgmap v2829: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 727 KiB/s wr, 134 op/s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/377933433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-0 ceph-mon[74496]: pgmap v2830: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 41 KiB/s wr, 125 op/s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2599037876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-0 ceph-mon[74496]: pgmap v2831: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 38 KiB/s wr, 94 op/s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: pgmap v2832: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 297 KiB/s rd, 13 KiB/s wr, 23 op/s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: pgmap v2833: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 117 KiB/s rd, 29 KiB/s wr, 19 op/s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: mon.compute-2 calling monitor election
Jan 31 08:28:05 compute-0 ceph-mon[74496]: mon.compute-1 calling monitor election
Jan 31 08:28:05 compute-0 ceph-mon[74496]: mon.compute-0 calling monitor election
Jan 31 08:28:05 compute-0 ceph-mon[74496]: pgmap v2834: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 28 KiB/s wr, 12 op/s
Jan 31 08:28:05 compute-0 ceph-mon[74496]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 08:28:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3308763206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/346384978' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:28:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/346384978' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:28:05 compute-0 ceph-mon[74496]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 08:28:05 compute-0 ceph-mon[74496]: fsmap cephfs:1 {0=cephfs.compute-2.ihffma=up:active} 2 up:standby
Jan 31 08:28:05 compute-0 ceph-mon[74496]: osdmap e361: 3 total, 3 up, 3 in
Jan 31 08:28:05 compute-0 ceph-mon[74496]: mgrmap e11: compute-0.hhuoua(active, since 82m), standbys: compute-2.wmgest, compute-1.hodsiu
Jan 31 08:28:05 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 08:28:05 compute-0 podman[357678]: 2026-01-31 08:28:05.622447456 +0000 UTC m=+1.242759984 container init 565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cannon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:28:05 compute-0 podman[357678]: 2026-01-31 08:28:05.629882448 +0000 UTC m=+1.250194916 container start 565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cannon, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:28:05 compute-0 nova_compute[247704]: 2026-01-31 08:28:05.633 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:05 compute-0 eloquent_cannon[357694]: 167 167
Jan 31 08:28:05 compute-0 systemd[1]: libpod-565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d.scope: Deactivated successfully.
Jan 31 08:28:05 compute-0 podman[357678]: 2026-01-31 08:28:05.819835416 +0000 UTC m=+1.440147844 container attach 565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 08:28:05 compute-0 podman[357678]: 2026-01-31 08:28:05.821709912 +0000 UTC m=+1.442022340 container died 565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:28:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:06.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 733 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 31 08:28:06 compute-0 nova_compute[247704]: 2026-01-31 08:28:06.445 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5835db1b59649c8c01bddaf9144deaa7292b369fd9ca68c83cd0c7fb8122564f-merged.mount: Deactivated successfully.
Jan 31 08:28:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:06 compute-0 ceph-mon[74496]: pgmap v2835: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 28 KiB/s wr, 12 op/s
Jan 31 08:28:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2965806856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:07 compute-0 podman[357678]: 2026-01-31 08:28:07.350250373 +0000 UTC m=+2.970562801 container remove 565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:28:07 compute-0 systemd[1]: libpod-conmon-565fbecf62eb22d8ac5931e84f76fcae8cee134ee6211ae0349b477a9d4cf32d.scope: Deactivated successfully.
Jan 31 08:28:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:07.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:07 compute-0 podman[357720]: 2026-01-31 08:28:07.516333686 +0000 UTC m=+0.057194364 container create 1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:28:07 compute-0 systemd[1]: Started libpod-conmon-1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0.scope.
Jan 31 08:28:07 compute-0 podman[357720]: 2026-01-31 08:28:07.483675114 +0000 UTC m=+0.024541093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:28:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be05aa169c923152bccc9162d98324eae1a5a6095e5b8c7381ee4a29af8eaa28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be05aa169c923152bccc9162d98324eae1a5a6095e5b8c7381ee4a29af8eaa28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be05aa169c923152bccc9162d98324eae1a5a6095e5b8c7381ee4a29af8eaa28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be05aa169c923152bccc9162d98324eae1a5a6095e5b8c7381ee4a29af8eaa28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:07 compute-0 podman[357720]: 2026-01-31 08:28:07.644783495 +0000 UTC m=+0.185644183 container init 1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:28:07 compute-0 podman[357720]: 2026-01-31 08:28:07.650839344 +0000 UTC m=+0.191700032 container start 1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:28:07 compute-0 podman[357720]: 2026-01-31 08:28:07.688492908 +0000 UTC m=+0.229353606 container attach 1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:28:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:08 compute-0 ceph-mon[74496]: pgmap v2836: 305 pgs: 305 active+clean; 733 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 31 08:28:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 741 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 31 08:28:08 compute-0 keen_mendel[357738]: {
Jan 31 08:28:08 compute-0 keen_mendel[357738]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:28:08 compute-0 keen_mendel[357738]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:28:08 compute-0 keen_mendel[357738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:28:08 compute-0 keen_mendel[357738]:         "osd_id": 0,
Jan 31 08:28:08 compute-0 keen_mendel[357738]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:28:08 compute-0 keen_mendel[357738]:         "type": "bluestore"
Jan 31 08:28:08 compute-0 keen_mendel[357738]:     }
Jan 31 08:28:08 compute-0 keen_mendel[357738]: }
Jan 31 08:28:08 compute-0 systemd[1]: libpod-1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0.scope: Deactivated successfully.
Jan 31 08:28:08 compute-0 podman[357720]: 2026-01-31 08:28:08.558726536 +0000 UTC m=+1.099587224 container died 1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-be05aa169c923152bccc9162d98324eae1a5a6095e5b8c7381ee4a29af8eaa28-merged.mount: Deactivated successfully.
Jan 31 08:28:09 compute-0 podman[357720]: 2026-01-31 08:28:09.010368701 +0000 UTC m=+1.551229399 container remove 1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:28:09 compute-0 systemd[1]: libpod-conmon-1e7ad6609a98dd7941741075a8a40fab9431d2edafd8ee4c13d43675ca1947d0.scope: Deactivated successfully.
Jan 31 08:28:09 compute-0 sudo[357612]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:28:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:28:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:28:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:28:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 82f6fddb-32ce-48ad-ba6a-a7a3688e014f does not exist
Jan 31 08:28:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1e4c2c35-7822-4d60-8af0-a766169402e8 does not exist
Jan 31 08:28:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6b7e3683-4335-4a1e-b9f9-e9f690066e1d does not exist
Jan 31 08:28:09 compute-0 sudo[357770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:09 compute-0 sudo[357770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:09 compute-0 sudo[357770]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:09 compute-0 sudo[357795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:28:09 compute-0 sudo[357795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:09 compute-0 sudo[357795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:09.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 728 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 2.7 MiB/s wr, 75 op/s
Jan 31 08:28:10 compute-0 ceph-mon[74496]: pgmap v2837: 305 pgs: 305 active+clean; 741 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 MiB/s wr, 27 op/s
Jan 31 08:28:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:28:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:28:10 compute-0 nova_compute[247704]: 2026-01-31 08:28:10.635 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:11.003 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:11 compute-0 nova_compute[247704]: 2026-01-31 08:28:11.004 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:11.005 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:28:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:11.199 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:11.199 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:11.200 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:11.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:11 compute-0 nova_compute[247704]: 2026-01-31 08:28:11.484 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:11 compute-0 sudo[357821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:11 compute-0 sudo[357821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:11 compute-0 sudo[357821]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:11 compute-0 sudo[357847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:11 compute-0 sudo[357847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:11 compute-0 sudo[357847]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:11 compute-0 ceph-mon[74496]: pgmap v2838: 305 pgs: 305 active+clean; 728 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 2.7 MiB/s wr, 75 op/s
Jan 31 08:28:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:28:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:12.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:28:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 3.4 MiB/s wr, 84 op/s
Jan 31 08:28:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4148727517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:13.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:13 compute-0 ceph-mon[74496]: pgmap v2839: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 3.4 MiB/s wr, 84 op/s
Jan 31 08:28:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4211918690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/960312472' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:14.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Jan 31 08:28:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Jan 31 08:28:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Jan 31 08:28:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 720 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 358 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 31 08:28:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3856339751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:15 compute-0 ceph-mon[74496]: osdmap e362: 3 total, 3 up, 3 in
Jan 31 08:28:15 compute-0 ceph-mon[74496]: pgmap v2841: 305 pgs: 305 active+clean; 720 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 358 KiB/s rd, 4.7 MiB/s wr, 132 op/s
Jan 31 08:28:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:15.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:15 compute-0 nova_compute[247704]: 2026-01-31 08:28:15.636 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:16.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 3.3 MiB/s wr, 138 op/s
Jan 31 08:28:16 compute-0 nova_compute[247704]: 2026-01-31 08:28:16.486 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1068672120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1560134433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:16 compute-0 podman[357874]: 2026-01-31 08:28:16.965738446 +0000 UTC m=+0.123408697 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:28:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:17.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:17 compute-0 ceph-mon[74496]: pgmap v2842: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 3.3 MiB/s wr, 138 op/s
Jan 31 08:28:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:18.008 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.6 MiB/s wr, 136 op/s
Jan 31 08:28:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:19.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:20.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:28:20
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'backups']
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Jan 31 08:28:20 compute-0 ceph-mon[74496]: pgmap v2843: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.6 MiB/s wr, 136 op/s
Jan 31 08:28:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 31 08:28:20 compute-0 nova_compute[247704]: 2026-01-31 08:28:20.639 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:28:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:28:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 08:28:20 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-0 ovn_controller[149457]: 2026-01-31T08:28:21Z|00715|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:28:21 compute-0 ovn_controller[149457]: 2026-01-31T08:28:21Z|00716|binding|INFO|Releasing lport 003d1f0e-744f-4244-8c9f-3a9be6033652 from this chassis (sb_readonly=0)
Jan 31 08:28:21 compute-0 nova_compute[247704]: 2026-01-31 08:28:21.109 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:21 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 08:28:21 compute-0 nova_compute[247704]: 2026-01-31 08:28:21.488 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:21.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:22.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:22 compute-0 ceph-mon[74496]: pgmap v2844: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Jan 31 08:28:22 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 08:28:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 703 KiB/s wr, 158 op/s
Jan 31 08:28:22 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 31 08:28:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:28:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:23.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:28:23 compute-0 ceph-mon[74496]: pgmap v2845: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 703 KiB/s wr, 158 op/s
Jan 31 08:28:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 174 op/s
Jan 31 08:28:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Jan 31 08:28:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Jan 31 08:28:25 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Jan 31 08:28:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:28:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:25.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:28:25 compute-0 nova_compute[247704]: 2026-01-31 08:28:25.641 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:26.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 41 KiB/s wr, 224 op/s
Jan 31 08:28:26 compute-0 ceph-mon[74496]: pgmap v2846: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 174 op/s
Jan 31 08:28:26 compute-0 ceph-mon[74496]: osdmap e363: 3 total, 3 up, 3 in
Jan 31 08:28:26 compute-0 nova_compute[247704]: 2026-01-31 08:28:26.491 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:27 compute-0 nova_compute[247704]: 2026-01-31 08:28:27.352 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:27 compute-0 nova_compute[247704]: 2026-01-31 08:28:27.353 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:27 compute-0 nova_compute[247704]: 2026-01-31 08:28:27.390 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:28:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:27.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:27 compute-0 nova_compute[247704]: 2026-01-31 08:28:27.506 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:27 compute-0 nova_compute[247704]: 2026-01-31 08:28:27.507 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:27 compute-0 nova_compute[247704]: 2026-01-31 08:28:27.520 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:28:27 compute-0 nova_compute[247704]: 2026-01-31 08:28:27.520 247708 INFO nova.compute.claims [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:28:27 compute-0 nova_compute[247704]: 2026-01-31 08:28:27.767 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:28.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:28 compute-0 ceph-mon[74496]: pgmap v2848: 305 pgs: 305 active+clean; 723 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 41 KiB/s wr, 224 op/s
Jan 31 08:28:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/912552622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 697 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 39 KiB/s wr, 220 op/s
Jan 31 08:28:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559272313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.492 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.725s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.502 247708 DEBUG nova.compute.provider_tree [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.531 247708 DEBUG nova.scheduler.client.report [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.558 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.559 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.620 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.621 247708 DEBUG nova.network.neutron [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.648 247708 INFO nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.679 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.814 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.816 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.817 247708 INFO nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Creating image(s)
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.846 247708 DEBUG nova.storage.rbd_utils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.878 247708 DEBUG nova.storage.rbd_utils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.910 247708 DEBUG nova.storage.rbd_utils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.914 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.942 247708 DEBUG nova.policy [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eac51187531841e2891fc5d3c5f84123', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '722ab2e9dd674709953be812d4c88493', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.979 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.980 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.981 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:28 compute-0 nova_compute[247704]: 2026-01-31 08:28:28.981 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:29 compute-0 nova_compute[247704]: 2026-01-31 08:28:29.015 247708 DEBUG nova.storage.rbd_utils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:28:29 compute-0 nova_compute[247704]: 2026-01-31 08:28:29.021 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:29.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1559272313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:30.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 662 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 39 KiB/s wr, 198 op/s
Jan 31 08:28:30 compute-0 nova_compute[247704]: 2026-01-31 08:28:30.424 247708 DEBUG nova.network.neutron [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Successfully created port: 328810d5-4ccb-4c97-b300-745f5283e08f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:28:30 compute-0 nova_compute[247704]: 2026-01-31 08:28:30.644 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:30 compute-0 nova_compute[247704]: 2026-01-31 08:28:30.780 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:31 compute-0 ceph-mon[74496]: pgmap v2849: 305 pgs: 305 active+clean; 697 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 39 KiB/s wr, 220 op/s
Jan 31 08:28:31 compute-0 nova_compute[247704]: 2026-01-31 08:28:31.493 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:31.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:31 compute-0 sudo[358025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:31 compute-0 sudo[358025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:31 compute-0 sudo[358025]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:31 compute-0 sudo[358050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:31 compute-0 sudo[358050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:31 compute-0 sudo[358050]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:32.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 651 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 129 KiB/s wr, 184 op/s
Jan 31 08:28:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:32 compute-0 ceph-mon[74496]: pgmap v2850: 305 pgs: 305 active+clean; 662 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 39 KiB/s wr, 198 op/s
Jan 31 08:28:32 compute-0 nova_compute[247704]: 2026-01-31 08:28:32.544 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:32 compute-0 nova_compute[247704]: 2026-01-31 08:28:32.651 247708 DEBUG nova.storage.rbd_utils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] resizing rbd image 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:28:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:33.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.534 247708 DEBUG nova.objects.instance [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'migration_context' on Instance uuid 30230cc0-c29f-42f3-9135-d3bc8ec7f901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.610 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.610 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Ensure instance console log exists: /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.611 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.612 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.612 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:33 compute-0 ceph-mon[74496]: pgmap v2851: 305 pgs: 305 active+clean; 651 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 129 KiB/s wr, 184 op/s
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.932 247708 DEBUG nova.compute.manager [req-e821b530-64b8-464d-a139-3dd848ec8654 req-07efa565-478a-4652-be6d-b9339ca538ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.932 247708 DEBUG nova.compute.manager [req-e821b530-64b8-464d-a139-3dd848ec8654 req-07efa565-478a-4652-be6d-b9339ca538ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing instance network info cache due to event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.933 247708 DEBUG oslo_concurrency.lockutils [req-e821b530-64b8-464d-a139-3dd848ec8654 req-07efa565-478a-4652-be6d-b9339ca538ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.933 247708 DEBUG oslo_concurrency.lockutils [req-e821b530-64b8-464d-a139-3dd848ec8654 req-07efa565-478a-4652-be6d-b9339ca538ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:33 compute-0 nova_compute[247704]: 2026-01-31 08:28:33.934 247708 DEBUG nova.network.neutron [req-e821b530-64b8-464d-a139-3dd848ec8654 req-07efa565-478a-4652-be6d-b9339ca538ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:28:34 compute-0 nova_compute[247704]: 2026-01-31 08:28:34.018 247708 DEBUG nova.network.neutron [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Successfully updated port: 328810d5-4ccb-4c97-b300-745f5283e08f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:28:34 compute-0 nova_compute[247704]: 2026-01-31 08:28:34.051 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "refresh_cache-30230cc0-c29f-42f3-9135-d3bc8ec7f901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:34 compute-0 nova_compute[247704]: 2026-01-31 08:28:34.051 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquired lock "refresh_cache-30230cc0-c29f-42f3-9135-d3bc8ec7f901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:34 compute-0 nova_compute[247704]: 2026-01-31 08:28:34.052 247708 DEBUG nova.network.neutron [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:28:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:34.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 659 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 998 KiB/s wr, 174 op/s
Jan 31 08:28:34 compute-0 nova_compute[247704]: 2026-01-31 08:28:34.232 247708 DEBUG nova.compute.manager [req-c218609f-79e3-4e64-9d52-c4d99fd88dc2 req-17dbf804-3042-4e1f-9588-f6e83964d0b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received event network-changed-328810d5-4ccb-4c97-b300-745f5283e08f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:34 compute-0 nova_compute[247704]: 2026-01-31 08:28:34.233 247708 DEBUG nova.compute.manager [req-c218609f-79e3-4e64-9d52-c4d99fd88dc2 req-17dbf804-3042-4e1f-9588-f6e83964d0b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Refreshing instance network info cache due to event network-changed-328810d5-4ccb-4c97-b300-745f5283e08f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:28:34 compute-0 nova_compute[247704]: 2026-01-31 08:28:34.233 247708 DEBUG oslo_concurrency.lockutils [req-c218609f-79e3-4e64-9d52-c4d99fd88dc2 req-17dbf804-3042-4e1f-9588-f6e83964d0b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-30230cc0-c29f-42f3-9135-d3bc8ec7f901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:34 compute-0 nova_compute[247704]: 2026-01-31 08:28:34.429 247708 DEBUG nova.network.neutron [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:28:34 compute-0 podman[358148]: 2026-01-31 08:28:34.918146238 +0000 UTC m=+0.081567501 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 08:28:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:35.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:35 compute-0 nova_compute[247704]: 2026-01-31 08:28:35.659 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012508287967615857 of space, bias 1.0, pg target 3.752486390284757 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432228416387493 of space, bias 1.0, pg target 1.2837183966708543 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:28:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 08:28:35 compute-0 ceph-mon[74496]: pgmap v2852: 305 pgs: 305 active+clean; 659 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 998 KiB/s wr, 174 op/s
Jan 31 08:28:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:36.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 712 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 877 KiB/s rd, 3.9 MiB/s wr, 198 op/s
Jan 31 08:28:36 compute-0 nova_compute[247704]: 2026-01-31 08:28:36.496 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:36 compute-0 nova_compute[247704]: 2026-01-31 08:28:36.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Jan 31 08:28:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Jan 31 08:28:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Jan 31 08:28:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:37.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:37 compute-0 ceph-mon[74496]: pgmap v2853: 305 pgs: 305 active+clean; 712 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 877 KiB/s rd, 3.9 MiB/s wr, 198 op/s
Jan 31 08:28:37 compute-0 ceph-mon[74496]: osdmap e364: 3 total, 3 up, 3 in
Jan 31 08:28:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:38.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 718 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 4.6 MiB/s wr, 218 op/s
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.400 247708 DEBUG nova.network.neutron [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Updating instance_info_cache with network_info: [{"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.522 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Releasing lock "refresh_cache-30230cc0-c29f-42f3-9135-d3bc8ec7f901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.522 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Instance network_info: |[{"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.523 247708 DEBUG oslo_concurrency.lockutils [req-c218609f-79e3-4e64-9d52-c4d99fd88dc2 req-17dbf804-3042-4e1f-9588-f6e83964d0b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-30230cc0-c29f-42f3-9135-d3bc8ec7f901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.523 247708 DEBUG nova.network.neutron [req-c218609f-79e3-4e64-9d52-c4d99fd88dc2 req-17dbf804-3042-4e1f-9588-f6e83964d0b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Refreshing network info cache for port 328810d5-4ccb-4c97-b300-745f5283e08f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.526 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Start _get_guest_xml network_info=[{"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.530 247708 WARNING nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.545 247708 DEBUG nova.virt.libvirt.host [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.546 247708 DEBUG nova.virt.libvirt.host [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.551 247708 DEBUG nova.virt.libvirt.host [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.552 247708 DEBUG nova.virt.libvirt.host [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.553 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.553 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.554 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.554 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.554 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.554 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.555 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.555 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.555 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.555 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.556 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.556 247708 DEBUG nova.virt.hardware [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.558 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.887 247708 DEBUG nova.network.neutron [req-e821b530-64b8-464d-a139-3dd848ec8654 req-07efa565-478a-4652-be6d-b9339ca538ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updated VIF entry in instance network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:28:38 compute-0 nova_compute[247704]: 2026-01-31 08:28:38.888 247708 DEBUG nova.network.neutron [req-e821b530-64b8-464d-a139-3dd848ec8654 req-07efa565-478a-4652-be6d-b9339ca538ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:28:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663713159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.011 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.048 247708 DEBUG nova.storage.rbd_utils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.054 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.084 247708 DEBUG oslo_concurrency.lockutils [req-e821b530-64b8-464d-a139-3dd848ec8654 req-07efa565-478a-4652-be6d-b9339ca538ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3289421549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/663713159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:39.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:28:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999143793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.545 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.547 247708 DEBUG nova.virt.libvirt.vif [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:28:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-216807903',display_name='tempest-ServersNegativeTestJSON-server-216807903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-216807903',id=168,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-t8nmow1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegativeT
estJSON-1810683467-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:28:28Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=30230cc0-c29f-42f3-9135-d3bc8ec7f901,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.548 247708 DEBUG nova.network.os_vif_util [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.549 247708 DEBUG nova.network.os_vif_util [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:e1:8a,bridge_name='br-int',has_traffic_filtering=True,id=328810d5-4ccb-4c97-b300-745f5283e08f,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap328810d5-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.550 247708 DEBUG nova.objects.instance [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30230cc0-c29f-42f3-9135-d3bc8ec7f901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.593 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <uuid>30230cc0-c29f-42f3-9135-d3bc8ec7f901</uuid>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <name>instance-000000a8</name>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <nova:name>tempest-ServersNegativeTestJSON-server-216807903</nova:name>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:28:38</nova:creationTime>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <nova:user uuid="eac51187531841e2891fc5d3c5f84123">tempest-ServersNegativeTestJSON-1810683467-project-member</nova:user>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <nova:project uuid="722ab2e9dd674709953be812d4c88493">tempest-ServersNegativeTestJSON-1810683467</nova:project>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <nova:port uuid="328810d5-4ccb-4c97-b300-745f5283e08f">
Jan 31 08:28:39 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <system>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <entry name="serial">30230cc0-c29f-42f3-9135-d3bc8ec7f901</entry>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <entry name="uuid">30230cc0-c29f-42f3-9135-d3bc8ec7f901</entry>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </system>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <os>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   </os>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <features>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   </features>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk">
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       </source>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk.config">
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       </source>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:28:39 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:9b:e1:8a"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <target dev="tap328810d5-4c"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901/console.log" append="off"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <video>
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </video>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:28:39 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:28:39 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:28:39 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:28:39 compute-0 nova_compute[247704]: </domain>
Jan 31 08:28:39 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.594 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Preparing to wait for external event network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.595 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.595 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.595 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.596 247708 DEBUG nova.virt.libvirt.vif [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:28:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-216807903',display_name='tempest-ServersNegativeTestJSON-server-216807903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-216807903',id=168,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-t8nmow1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-Server
sNegativeTestJSON-1810683467-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:28:28Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=30230cc0-c29f-42f3-9135-d3bc8ec7f901,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.596 247708 DEBUG nova.network.os_vif_util [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.597 247708 DEBUG nova.network.os_vif_util [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:e1:8a,bridge_name='br-int',has_traffic_filtering=True,id=328810d5-4ccb-4c97-b300-745f5283e08f,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap328810d5-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.597 247708 DEBUG os_vif [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:e1:8a,bridge_name='br-int',has_traffic_filtering=True,id=328810d5-4ccb-4c97-b300-745f5283e08f,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap328810d5-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.598 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.598 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.599 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.604 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.604 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap328810d5-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.605 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap328810d5-4c, col_values=(('external_ids', {'iface-id': '328810d5-4ccb-4c97-b300-745f5283e08f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:e1:8a', 'vm-uuid': '30230cc0-c29f-42f3-9135-d3bc8ec7f901'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.607 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:39 compute-0 NetworkManager[49108]: <info>  [1769848119.6081] manager: (tap328810d5-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.612 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.615 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.617 247708 INFO os_vif [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:e1:8a,bridge_name='br-int',has_traffic_filtering=True,id=328810d5-4ccb-4c97-b300-745f5283e08f,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap328810d5-4c')
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.841 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.842 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.842 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] No VIF found with MAC fa:16:3e:9b:e1:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.843 247708 INFO nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Using config drive
Jan 31 08:28:39 compute-0 nova_compute[247704]: 2026-01-31 08:28:39.881 247708 DEBUG nova.storage.rbd_utils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:28:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:40.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:40 compute-0 ceph-mon[74496]: pgmap v2855: 305 pgs: 305 active+clean; 718 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 4.6 MiB/s wr, 218 op/s
Jan 31 08:28:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1999143793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:28:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 859 KiB/s rd, 4.7 MiB/s wr, 206 op/s
Jan 31 08:28:40 compute-0 nova_compute[247704]: 2026-01-31 08:28:40.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:40 compute-0 nova_compute[247704]: 2026-01-31 08:28:40.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:28:40 compute-0 nova_compute[247704]: 2026-01-31 08:28:40.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:41 compute-0 nova_compute[247704]: 2026-01-31 08:28:41.382 247708 INFO nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Creating config drive at /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901/disk.config
Jan 31 08:28:41 compute-0 nova_compute[247704]: 2026-01-31 08:28:41.387 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmped02xpnp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:41.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:41 compute-0 nova_compute[247704]: 2026-01-31 08:28:41.522 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmped02xpnp" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:41 compute-0 nova_compute[247704]: 2026-01-31 08:28:41.869 247708 DEBUG nova.storage.rbd_utils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] rbd image 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:28:41 compute-0 nova_compute[247704]: 2026-01-31 08:28:41.875 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901/disk.config 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:41 compute-0 nova_compute[247704]: 2026-01-31 08:28:41.903 247708 DEBUG nova.network.neutron [req-c218609f-79e3-4e64-9d52-c4d99fd88dc2 req-17dbf804-3042-4e1f-9588-f6e83964d0b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Updated VIF entry in instance network info cache for port 328810d5-4ccb-4c97-b300-745f5283e08f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:28:41 compute-0 nova_compute[247704]: 2026-01-31 08:28:41.905 247708 DEBUG nova.network.neutron [req-c218609f-79e3-4e64-9d52-c4d99fd88dc2 req-17dbf804-3042-4e1f-9588-f6e83964d0b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Updating instance_info_cache with network_info: [{"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:41 compute-0 nova_compute[247704]: 2026-01-31 08:28:41.972 247708 DEBUG oslo_concurrency.lockutils [req-c218609f-79e3-4e64-9d52-c4d99fd88dc2 req-17dbf804-3042-4e1f-9588-f6e83964d0b1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-30230cc0-c29f-42f3-9135-d3bc8ec7f901" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:42 compute-0 ceph-mon[74496]: pgmap v2856: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 859 KiB/s rd, 4.7 MiB/s wr, 206 op/s
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.072 247708 DEBUG oslo_concurrency.processutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901/disk.config 30230cc0-c29f-42f3-9135-d3bc8ec7f901_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.073 247708 INFO nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Deleting local config drive /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901/disk.config because it was imported into RBD.
Jan 31 08:28:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:42.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:42 compute-0 kernel: tap328810d5-4c: entered promiscuous mode
Jan 31 08:28:42 compute-0 NetworkManager[49108]: <info>  [1769848122.1383] manager: (tap328810d5-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Jan 31 08:28:42 compute-0 ovn_controller[149457]: 2026-01-31T08:28:42Z|00717|binding|INFO|Claiming lport 328810d5-4ccb-4c97-b300-745f5283e08f for this chassis.
Jan 31 08:28:42 compute-0 ovn_controller[149457]: 2026-01-31T08:28:42Z|00718|binding|INFO|328810d5-4ccb-4c97-b300-745f5283e08f: Claiming fa:16:3e:9b:e1:8a 10.100.0.11
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.139 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:42 compute-0 ovn_controller[149457]: 2026-01-31T08:28:42Z|00719|binding|INFO|Setting lport 328810d5-4ccb-4c97-b300-745f5283e08f ovn-installed in OVS
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.152 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.155 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:42 compute-0 systemd-udevd[358304]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:28:42 compute-0 NetworkManager[49108]: <info>  [1769848122.1786] device (tap328810d5-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:28:42 compute-0 NetworkManager[49108]: <info>  [1769848122.1794] device (tap328810d5-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:28:42 compute-0 systemd-machined[214448]: New machine qemu-78-instance-000000a8.
Jan 31 08:28:42 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-000000a8.
Jan 31 08:28:42 compute-0 ovn_controller[149457]: 2026-01-31T08:28:42Z|00720|binding|INFO|Setting lport 328810d5-4ccb-4c97-b300-745f5283e08f up in Southbound
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.206 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:e1:8a 10.100.0.11'], port_security=['fa:16:3e:9b:e1:8a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30230cc0-c29f-42f3-9135-d3bc8ec7f901', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '722ab2e9dd674709953be812d4c88493', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9520bea1-c264-4093-b5f2-d0e6aba13129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2519b9f-dcfa-4a01-b5ee-7b195f240dfa, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=328810d5-4ccb-4c97-b300-745f5283e08f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.207 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 328810d5-4ccb-4c97-b300-745f5283e08f in datapath e14a8b1b-1e10-44db-9858-e5a0ae5f2476 bound to our chassis
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.209 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 08:28:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 761 KiB/s rd, 4.6 MiB/s wr, 203 op/s
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.221 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fed019d1-f890-436e-b559-8c1046445156]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.223 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape14a8b1b-11 in ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.225 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape14a8b1b-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.225 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9c75dc4d-dfd3-46e8-a1f2-4d6092b1edd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.226 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[60c039b7-e83c-4e5c-9d98-0132c2a8fb00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.239 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[0da38afa-da5e-4e40-aba9-74bb451dbe1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.253 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[23bfefeb-8c4f-4f7d-871b-f95e99ae0d16]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.283 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d51dd60a-9681-4239-9c05-b0380eb34a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.288 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc7dacb-b9c7-4873-9047-4211ce690791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 NetworkManager[49108]: <info>  [1769848122.2901] manager: (tape14a8b1b-10): new Veth device (/org/freedesktop/NetworkManager/Devices/317)
Jan 31 08:28:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.320 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[df71f97f-77a2-4cd3-8b02-efc6f7da47ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.324 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c5193ebb-842d-4aa3-afbf-c2e4d6f34794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 NetworkManager[49108]: <info>  [1769848122.3448] device (tape14a8b1b-10): carrier: link connected
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.353 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4002c5-ca93-452f-b2be-03c6e426ad9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.372 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e05d20a2-965a-425b-88ef-4b44149f3516]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape14a8b1b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d3:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836496, 'reachable_time': 23028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358339, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.392 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b88c71ad-a039-4acd-8288-f559894319ee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:d359'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 836496, 'tstamp': 836496}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358340, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.415 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfab0eb-6f3c-4268-bac9-2be1ea44eeba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape14a8b1b-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:d3:59'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836496, 'reachable_time': 23028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 358341, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.449 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c1eae5e8-4e7e-437b-a73f-84ecd7bc0111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.505 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b31a569e-e060-4fdb-bd93-36c98969fc9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.508 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14a8b1b-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.508 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.508 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape14a8b1b-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.529 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:42 compute-0 NetworkManager[49108]: <info>  [1769848122.5301] manager: (tape14a8b1b-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Jan 31 08:28:42 compute-0 kernel: tape14a8b1b-10: entered promiscuous mode
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.532 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.533 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape14a8b1b-10, col_values=(('external_ids', {'iface-id': '6550f58c-c002-4c91-85a2-53b4c1c807b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.535 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:42 compute-0 ovn_controller[149457]: 2026-01-31T08:28:42Z|00721|binding|INFO|Releasing lport 6550f58c-c002-4c91-85a2-53b4c1c807b7 from this chassis (sb_readonly=0)
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.546 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.548 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.549 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8120574a-fee2-44dc-82d6-873da138498c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.549 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.pid.haproxy
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID e14a8b1b-1e10-44db-9858-e5a0ae5f2476
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:28:42 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:42.550 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'env', 'PROCESS_TAG=haproxy-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e14a8b1b-1e10-44db-9858-e5a0ae5f2476.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.860 247708 DEBUG nova.compute.manager [req-b7ea49ee-2c05-49c7-a1f9-105730747efe req-1473d0ac-5a7a-4ec3-94a0-5814a74ed7ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received event network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.862 247708 DEBUG oslo_concurrency.lockutils [req-b7ea49ee-2c05-49c7-a1f9-105730747efe req-1473d0ac-5a7a-4ec3-94a0-5814a74ed7ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.863 247708 DEBUG oslo_concurrency.lockutils [req-b7ea49ee-2c05-49c7-a1f9-105730747efe req-1473d0ac-5a7a-4ec3-94a0-5814a74ed7ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.863 247708 DEBUG oslo_concurrency.lockutils [req-b7ea49ee-2c05-49c7-a1f9-105730747efe req-1473d0ac-5a7a-4ec3-94a0-5814a74ed7ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.863 247708 DEBUG nova.compute.manager [req-b7ea49ee-2c05-49c7-a1f9-105730747efe req-1473d0ac-5a7a-4ec3-94a0-5814a74ed7ca 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Processing event network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.909 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848122.9088395, 30230cc0-c29f-42f3-9135-d3bc8ec7f901 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.910 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] VM Started (Lifecycle Event)
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.912 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.915 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.918 247708 INFO nova.virt.libvirt.driver [-] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Instance spawned successfully.
Jan 31 08:28:42 compute-0 nova_compute[247704]: 2026-01-31 08:28:42.919 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:28:42 compute-0 podman[358414]: 2026-01-31 08:28:42.877046177 +0000 UTC m=+0.023270962 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.073 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.077 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.101 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.102 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.103 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.103 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.104 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.105 247708 DEBUG nova.virt.libvirt.driver [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.158 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.159 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848122.9090617, 30230cc0-c29f-42f3-9135-d3bc8ec7f901 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.159 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] VM Paused (Lifecycle Event)
Jan 31 08:28:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:43.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.550 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.554 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848122.9141858, 30230cc0-c29f-42f3-9135-d3bc8ec7f901 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.554 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] VM Resumed (Lifecycle Event)
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:43 compute-0 podman[358414]: 2026-01-31 08:28:43.583673824 +0000 UTC m=+0.729898589 container create e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.605 247708 DEBUG oslo_concurrency.lockutils [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.606 247708 DEBUG oslo_concurrency.lockutils [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:43 compute-0 ceph-mon[74496]: pgmap v2857: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 761 KiB/s rd, 4.6 MiB/s wr, 203 op/s
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.656 247708 INFO nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Took 14.84 seconds to spawn the instance on the hypervisor.
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.656 247708 DEBUG nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.668 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.672 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:28:43 compute-0 systemd[1]: Started libpod-conmon-e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d.scope.
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.719 247708 INFO nova.compute.manager [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Detaching volume d1f290e7-6438-4a38-9b76-cefa7f5a6025
Jan 31 08:28:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:28:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd18b9c600d952f18e4e95e92264e179afad1b1295aa1038406b9fb0184f0e14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:28:43 compute-0 podman[358414]: 2026-01-31 08:28:43.769117571 +0000 UTC m=+0.915342336 container init e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 08:28:43 compute-0 podman[358414]: 2026-01-31 08:28:43.77600029 +0000 UTC m=+0.922225055 container start e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:28:43 compute-0 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[358431]: [NOTICE]   (358435) : New worker (358437) forked
Jan 31 08:28:43 compute-0 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[358431]: [NOTICE]   (358435) : Loading success.
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.820 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:28:43 compute-0 nova_compute[247704]: 2026-01-31 08:28:43.896 247708 INFO nova.compute.manager [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Took 16.43 seconds to build instance.
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.011 247708 INFO nova.virt.block_device [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Attempting to driver detach volume d1f290e7-6438-4a38-9b76-cefa7f5a6025 from mountpoint /dev/vdb
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.025 247708 DEBUG nova.virt.libvirt.driver [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Attempting to detach device vdb from instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.028 247708 DEBUG nova.virt.libvirt.guest [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-d1f290e7-6438-4a38-9b76-cefa7f5a6025">
Jan 31 08:28:44 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   </source>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <serial>d1f290e7-6438-4a38-9b76-cefa7f5a6025</serial>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]: </disk>
Jan 31 08:28:44 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.048 247708 INFO nova.virt.libvirt.driver [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully detached device vdb from instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d from the persistent domain config.
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.049 247708 DEBUG nova.virt.libvirt.driver [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.050 247708 DEBUG nova.virt.libvirt.guest [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-d1f290e7-6438-4a38-9b76-cefa7f5a6025">
Jan 31 08:28:44 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   </source>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <serial>d1f290e7-6438-4a38-9b76-cefa7f5a6025</serial>
Jan 31 08:28:44 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:28:44 compute-0 nova_compute[247704]: </disk>
Jan 31 08:28:44 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.060 247708 DEBUG oslo_concurrency.lockutils [None req-4835324c-e309-4044-bf69-6a7eddcf3c31 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:44.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.200 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769848124.200464, 884d5d5d-6ad9-46a8-867a-b01ed20a527d => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.203 247708 DEBUG nova.virt.libvirt.driver [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.205 247708 INFO nova.virt.libvirt.driver [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully detached device vdb from instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d from the live domain config.
Jan 31 08:28:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 588 KiB/s rd, 3.7 MiB/s wr, 188 op/s
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.607 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:44 compute-0 nova_compute[247704]: 2026-01-31 08:28:44.831 247708 DEBUG nova.objects.instance [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'flavor' on Instance uuid 884d5d5d-6ad9-46a8-867a-b01ed20a527d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.197 247708 DEBUG oslo_concurrency.lockutils [None req-8fad37dc-f3ea-4fb8-ac79-f466aedb1207 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.450 247708 DEBUG nova.compute.manager [req-7ceba75c-a262-400c-9500-47c15ec2e554 req-6e250556-2fce-42ad-b5f4-788982575753 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received event network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.452 247708 DEBUG oslo_concurrency.lockutils [req-7ceba75c-a262-400c-9500-47c15ec2e554 req-6e250556-2fce-42ad-b5f4-788982575753 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.453 247708 DEBUG oslo_concurrency.lockutils [req-7ceba75c-a262-400c-9500-47c15ec2e554 req-6e250556-2fce-42ad-b5f4-788982575753 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.453 247708 DEBUG oslo_concurrency.lockutils [req-7ceba75c-a262-400c-9500-47c15ec2e554 req-6e250556-2fce-42ad-b5f4-788982575753 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.453 247708 DEBUG nova.compute.manager [req-7ceba75c-a262-400c-9500-47c15ec2e554 req-6e250556-2fce-42ad-b5f4-788982575753 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] No waiting events found dispatching network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.454 247708 WARNING nova.compute.manager [req-7ceba75c-a262-400c-9500-47c15ec2e554 req-6e250556-2fce-42ad-b5f4-788982575753 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received unexpected event network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f for instance with vm_state active and task_state None.
Jan 31 08:28:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:45.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.616 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.617 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.617 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.617 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.618 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:45 compute-0 nova_compute[247704]: 2026-01-31 08:28:45.664 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:46.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 357 KiB/s wr, 170 op/s
Jan 31 08:28:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3624284653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:46 compute-0 nova_compute[247704]: 2026-01-31 08:28:46.452 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.835s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:46 compute-0 ceph-mon[74496]: pgmap v2858: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 588 KiB/s rd, 3.7 MiB/s wr, 188 op/s
Jan 31 08:28:46 compute-0 nova_compute[247704]: 2026-01-31 08:28:46.985 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:46 compute-0 nova_compute[247704]: 2026-01-31 08:28:46.985 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:46 compute-0 nova_compute[247704]: 2026-01-31 08:28:46.992 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:46 compute-0 nova_compute[247704]: 2026-01-31 08:28:46.993 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:46 compute-0 nova_compute[247704]: 2026-01-31 08:28:46.998 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:46 compute-0 nova_compute[247704]: 2026-01-31 08:28:46.999 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.003 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.004 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.009 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.009 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.009 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.212 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.213 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3338MB free_disk=20.69400405883789GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.213 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.214 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:47.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.691 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.740 247708 INFO nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating resource usage from migration 25fea84c-439e-4b69-837e-757239c42f77
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.843 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.845 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.846 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.846 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.846 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.848 247708 INFO nova.compute.manager [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Terminating instance
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.849 247708 DEBUG nova.compute.manager [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.909 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.909 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance bbc5f09e-71d7-4009-bdf6-06e95b32574c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.910 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.910 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 30230cc0-c29f-42f3-9135-d3bc8ec7f901 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.910 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Migration 25fea84c-439e-4b69-837e-757239c42f77 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.911 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.911 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:28:47 compute-0 podman[358473]: 2026-01-31 08:28:47.961726267 +0000 UTC m=+0.117754588 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.987 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.987 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.987 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.988 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.988 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.989 247708 INFO nova.compute.manager [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Terminating instance
Jan 31 08:28:47 compute-0 nova_compute[247704]: 2026-01-31 08:28:47.990 247708 DEBUG nova.compute.manager [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:28:48 compute-0 kernel: tap328810d5-4c (unregistering): left promiscuous mode
Jan 31 08:28:48 compute-0 NetworkManager[49108]: <info>  [1769848128.0443] device (tap328810d5-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:28:48 compute-0 ovn_controller[149457]: 2026-01-31T08:28:48Z|00722|binding|INFO|Releasing lport 328810d5-4ccb-4c97-b300-745f5283e08f from this chassis (sb_readonly=0)
Jan 31 08:28:48 compute-0 ovn_controller[149457]: 2026-01-31T08:28:48Z|00723|binding|INFO|Setting lport 328810d5-4ccb-4c97-b300-745f5283e08f down in Southbound
Jan 31 08:28:48 compute-0 ovn_controller[149457]: 2026-01-31T08:28:48Z|00724|binding|INFO|Removing iface tap328810d5-4c ovn-installed in OVS
Jan 31 08:28:48 compute-0 ceph-mon[74496]: pgmap v2859: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 357 KiB/s wr, 170 op/s
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.061 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3624284653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.068 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:48.071 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:e1:8a 10.100.0.11'], port_security=['fa:16:3e:9b:e1:8a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30230cc0-c29f-42f3-9135-d3bc8ec7f901', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '722ab2e9dd674709953be812d4c88493', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9520bea1-c264-4093-b5f2-d0e6aba13129', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2519b9f-dcfa-4a01-b5ee-7b195f240dfa, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=328810d5-4ccb-4c97-b300-745f5283e08f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:48.074 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 328810d5-4ccb-4c97-b300-745f5283e08f in datapath e14a8b1b-1e10-44db-9858-e5a0ae5f2476 unbound from our chassis
Jan 31 08:28:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:48.076 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e14a8b1b-1e10-44db-9858-e5a0ae5f2476, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:28:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:48.079 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5767da8a-6cab-498a-9227-b4694e3cd5fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:48.079 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 namespace which is not needed anymore
Jan 31 08:28:48 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Jan 31 08:28:48 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a8.scope: Consumed 5.859s CPU time.
Jan 31 08:28:48 compute-0 systemd-machined[214448]: Machine qemu-78-instance-000000a8 terminated.
Jan 31 08:28:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:28:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:48.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.206 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 333 KiB/s wr, 182 op/s
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.231 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.240 247708 INFO nova.virt.libvirt.driver [-] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Instance destroyed successfully.
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.241 247708 DEBUG nova.objects.instance [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lazy-loading 'resources' on Instance uuid 30230cc0-c29f-42f3-9135-d3bc8ec7f901 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.433 247708 DEBUG nova.virt.libvirt.vif [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:28:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-216807903',display_name='tempest-ServersNegativeTestJSON-server-216807903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-216807903',id=168,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:28:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='722ab2e9dd674709953be812d4c88493',ramdisk_id='',reservation_id='r-t8nmow1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1810683467',owner_user_name='tempest-ServersNegativeTestJSON-1810683467-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:28:43Z,user_data=None,user_id='eac51187531841e2891fc5d3c5f84123',uuid=30230cc0-c29f-42f3-9135-d3bc8ec7f901,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.434 247708 DEBUG nova.network.os_vif_util [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converting VIF {"id": "328810d5-4ccb-4c97-b300-745f5283e08f", "address": "fa:16:3e:9b:e1:8a", "network": {"id": "e14a8b1b-1e10-44db-9858-e5a0ae5f2476", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-313753571-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "722ab2e9dd674709953be812d4c88493", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap328810d5-4c", "ovs_interfaceid": "328810d5-4ccb-4c97-b300-745f5283e08f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.435 247708 DEBUG nova.network.os_vif_util [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:e1:8a,bridge_name='br-int',has_traffic_filtering=True,id=328810d5-4ccb-4c97-b300-745f5283e08f,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap328810d5-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.436 247708 DEBUG os_vif [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:e1:8a,bridge_name='br-int',has_traffic_filtering=True,id=328810d5-4ccb-4c97-b300-745f5283e08f,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap328810d5-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.444 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap328810d5-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.446 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.448 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.449 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.452 247708 INFO os_vif [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:e1:8a,bridge_name='br-int',has_traffic_filtering=True,id=328810d5-4ccb-4c97-b300-745f5283e08f,network=Network(e14a8b1b-1e10-44db-9858-e5a0ae5f2476),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap328810d5-4c')
Jan 31 08:28:48 compute-0 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[358431]: [NOTICE]   (358435) : haproxy version is 2.8.14-c23fe91
Jan 31 08:28:48 compute-0 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[358431]: [NOTICE]   (358435) : path to executable is /usr/sbin/haproxy
Jan 31 08:28:48 compute-0 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[358431]: [WARNING]  (358435) : Exiting Master process...
Jan 31 08:28:48 compute-0 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[358431]: [WARNING]  (358435) : Exiting Master process...
Jan 31 08:28:48 compute-0 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[358431]: [ALERT]    (358435) : Current worker (358437) exited with code 143 (Terminated)
Jan 31 08:28:48 compute-0 neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476[358431]: [WARNING]  (358435) : All workers exited. Exiting... (0)
Jan 31 08:28:48 compute-0 systemd[1]: libpod-e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d.scope: Deactivated successfully.
Jan 31 08:28:48 compute-0 podman[358525]: 2026-01-31 08:28:48.572643518 +0000 UTC m=+0.404187802 container died e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:28:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319289982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.846 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:48 compute-0 kernel: tap36dc53a0-e7 (unregistering): left promiscuous mode
Jan 31 08:28:48 compute-0 NetworkManager[49108]: <info>  [1769848128.8549] device (tap36dc53a0-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.856 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:48 compute-0 ovn_controller[149457]: 2026-01-31T08:28:48Z|00725|binding|INFO|Releasing lport 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 from this chassis (sb_readonly=0)
Jan 31 08:28:48 compute-0 ovn_controller[149457]: 2026-01-31T08:28:48Z|00726|binding|INFO|Setting lport 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 down in Southbound
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.861 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:48 compute-0 ovn_controller[149457]: 2026-01-31T08:28:48Z|00727|binding|INFO|Removing iface tap36dc53a0-e7 ovn-installed in OVS
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.868 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d-userdata-shm.mount: Deactivated successfully.
Jan 31 08:28:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd18b9c600d952f18e4e95e92264e179afad1b1295aa1038406b9fb0184f0e14-merged.mount: Deactivated successfully.
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.898 247708 DEBUG nova.compute.manager [req-f36a977d-595c-4a4a-af58-834b0f080790 req-9d0e2eb9-b3cd-4d95-ac11-baed0a30bdce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received event network-vif-unplugged-328810d5-4ccb-4c97-b300-745f5283e08f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.899 247708 DEBUG oslo_concurrency.lockutils [req-f36a977d-595c-4a4a-af58-834b0f080790 req-9d0e2eb9-b3cd-4d95-ac11-baed0a30bdce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.899 247708 DEBUG oslo_concurrency.lockutils [req-f36a977d-595c-4a4a-af58-834b0f080790 req-9d0e2eb9-b3cd-4d95-ac11-baed0a30bdce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.899 247708 DEBUG oslo_concurrency.lockutils [req-f36a977d-595c-4a4a-af58-834b0f080790 req-9d0e2eb9-b3cd-4d95-ac11-baed0a30bdce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.899 247708 DEBUG nova.compute.manager [req-f36a977d-595c-4a4a-af58-834b0f080790 req-9d0e2eb9-b3cd-4d95-ac11-baed0a30bdce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] No waiting events found dispatching network-vif-unplugged-328810d5-4ccb-4c97-b300-745f5283e08f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.899 247708 DEBUG nova.compute.manager [req-f36a977d-595c-4a4a-af58-834b0f080790 req-9d0e2eb9-b3cd-4d95-ac11-baed0a30bdce 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received event network-vif-unplugged-328810d5-4ccb-4c97-b300-745f5283e08f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:28:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:48.904 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:30:cf 10.100.0.14'], port_security=['fa:16:3e:a6:30:cf 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '884d5d5d-6ad9-46a8-867a-b01ed20a527d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e26a2af1-a850-4885-977e-596b6be13fb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '418d5319c640455ab23850c0b0f24f92', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82b0b280-5a6f-4a3b-a288-461c595a8d30', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=068708bd-dd36-4d03-9d65-912eb9981ecc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:48 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Jan 31 08:28:48 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a2.scope: Consumed 21.132s CPU time.
Jan 31 08:28:48 compute-0 systemd-machined[214448]: Machine qemu-75-instance-000000a2 terminated.
Jan 31 08:28:48 compute-0 nova_compute[247704]: 2026-01-31 08:28:48.934 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.066 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.067 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.098 247708 INFO nova.virt.libvirt.driver [-] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Instance destroyed successfully.
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.098 247708 DEBUG nova.objects.instance [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lazy-loading 'resources' on Instance uuid 884d5d5d-6ad9-46a8-867a-b01ed20a527d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:28:49 compute-0 podman[358525]: 2026-01-31 08:28:49.166659514 +0000 UTC m=+0.998203748 container cleanup e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:28:49 compute-0 systemd[1]: libpod-conmon-e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d.scope: Deactivated successfully.
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.310 247708 DEBUG nova.virt.libvirt.vif [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:26:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-435720527',display_name='tempest-AttachVolumeNegativeTest-server-435720527',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-435720527',id=162,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGTasBb7yA6yl4NsN6VKzciwzxddV3xWuEa+pvWeaIqTayVegS57LL03uNYCNkNWMeetdRe0i56n50ShiYxicVG4xkWY3c9k2KD8xxZZvG+1vcbn+ZilpCG+NeW+N1kD9A==',key_name='tempest-keypair-1125943210',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:26:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='418d5319c640455ab23850c0b0f24f92',ramdisk_id='',reservation_id='r-h95jhdb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-562353674',owner_user_name='tempest-AttachVolumeNegativeTest-562353674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:26:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='48d684de9ba340f48e249b4cce857bfa',uuid=884d5d5d-6ad9-46a8-867a-b01ed20a527d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.311 247708 DEBUG nova.network.os_vif_util [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converting VIF {"id": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "address": "fa:16:3e:a6:30:cf", "network": {"id": "e26a2af1-a850-4885-977e-596b6be13fb8", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1152670441-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "418d5319c640455ab23850c0b0f24f92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36dc53a0-e7", "ovs_interfaceid": "36dc53a0-e7ae-426b-a9bb-da7abd1ebb84", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.311 247708 DEBUG nova.network.os_vif_util [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:30:cf,bridge_name='br-int',has_traffic_filtering=True,id=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36dc53a0-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.311 247708 DEBUG os_vif [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:30:cf,bridge_name='br-int',has_traffic_filtering=True,id=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36dc53a0-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.313 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.313 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36dc53a0-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.315 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1814181840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1319289982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.316 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.322 247708 INFO os_vif [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:30:cf,bridge_name='br-int',has_traffic_filtering=True,id=36dc53a0-e7ae-426b-a9bb-da7abd1ebb84,network=Network(e26a2af1-a850-4885-977e-596b6be13fb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36dc53a0-e7')
Jan 31 08:28:49 compute-0 podman[358620]: 2026-01-31 08:28:49.485879232 +0000 UTC m=+0.298814908 container remove e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.491 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[aa83bb39-8158-4de8-aaff-8f9cd68b67af]: (4, ('Sat Jan 31 08:28:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 (e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d)\ne39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d\nSat Jan 31 08:28:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 (e39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d)\ne39c6844337ef9eeab96fccf931fdb91f4564f907f643fa0ff1e69bac395d42d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.494 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fb880468-6270-4db0-ad7b-85e9adf363ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.495 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape14a8b1b-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.497 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:49 compute-0 kernel: tape14a8b1b-10: left promiscuous mode
Jan 31 08:28:49 compute-0 nova_compute[247704]: 2026-01-31 08:28:49.505 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.509 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a919ac-12d0-4545-97a8-2293de6d79fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:49.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.525 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cd599c63-6ab3-48d6-9eef-01ea1887b9da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.526 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[14017c76-1725-4a2c-bc9a-367e7201515d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.542 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa52c34-dbb9-4353-b70d-8c648c8d399e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836489, 'reachable_time': 37499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358654, 'error': None, 'target': 'ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:49 compute-0 systemd[1]: run-netns-ovnmeta\x2de14a8b1b\x2d1e10\x2d44db\x2d9858\x2de5a0ae5f2476.mount: Deactivated successfully.
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.546 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e14a8b1b-1e10-44db-9858-e5a0ae5f2476 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.547 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8405fd-d1a7-44ef-a471-a371dd0f58a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.548 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 in datapath e26a2af1-a850-4885-977e-596b6be13fb8 unbound from our chassis
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.550 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e26a2af1-a850-4885-977e-596b6be13fb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.551 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[67601c5f-073e-4aba-8007-edd1cae0d158]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:49.552 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 namespace which is not needed anymore
Jan 31 08:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:28:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:50.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:50 compute-0 nova_compute[247704]: 2026-01-31 08:28:50.190 247708 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:28:50 compute-0 nova_compute[247704]: 2026-01-31 08:28:50.191 247708 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:28:50 compute-0 nova_compute[247704]: 2026-01-31 08:28:50.191 247708 DEBUG nova.network.neutron [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:28:50 compute-0 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[354758]: [NOTICE]   (354764) : haproxy version is 2.8.14-c23fe91
Jan 31 08:28:50 compute-0 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[354758]: [NOTICE]   (354764) : path to executable is /usr/sbin/haproxy
Jan 31 08:28:50 compute-0 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[354758]: [WARNING]  (354764) : Exiting Master process...
Jan 31 08:28:50 compute-0 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[354758]: [WARNING]  (354764) : Exiting Master process...
Jan 31 08:28:50 compute-0 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[354758]: [ALERT]    (354764) : Current worker (354766) exited with code 143 (Terminated)
Jan 31 08:28:50 compute-0 neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8[354758]: [WARNING]  (354764) : All workers exited. Exiting... (0)
Jan 31 08:28:50 compute-0 systemd[1]: libpod-01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0.scope: Deactivated successfully.
Jan 31 08:28:50 compute-0 podman[358673]: 2026-01-31 08:28:50.212830257 +0000 UTC m=+0.570129861 container died 01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:28:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 118 KiB/s wr, 167 op/s
Jan 31 08:28:50 compute-0 ceph-mon[74496]: pgmap v2860: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 333 KiB/s wr, 182 op/s
Jan 31 08:28:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/643233360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2216114932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0-userdata-shm.mount: Deactivated successfully.
Jan 31 08:28:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d4828ed6d745975bd77c893242461ce89b76fd93d70cecc8394f35ca71ea95c-merged.mount: Deactivated successfully.
Jan 31 08:28:50 compute-0 nova_compute[247704]: 2026-01-31 08:28:50.667 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:50 compute-0 podman[358673]: 2026-01-31 08:28:50.699718866 +0000 UTC m=+1.057018410 container cleanup 01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:28:50 compute-0 podman[358703]: 2026-01-31 08:28:50.950507076 +0000 UTC m=+0.229823717 container remove 01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:28:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:50.980 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9963f1ee-d175-406c-868d-3d2ce8f734ec]: (4, ('Sat Jan 31 08:28:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 (01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0)\n01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0\nSat Jan 31 08:28:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 (01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0)\n01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:50 compute-0 systemd[1]: libpod-conmon-01f9551a0e9da07c3b794812d09975853684e03c329e137c57f634f8182aaba0.scope: Deactivated successfully.
Jan 31 08:28:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:50.983 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1638ae02-2e01-4919-84aa-55b22c8340db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:50.984 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape26a2af1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:50 compute-0 nova_compute[247704]: 2026-01-31 08:28:50.987 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:50 compute-0 kernel: tape26a2af1-a0: left promiscuous mode
Jan 31 08:28:50 compute-0 nova_compute[247704]: 2026-01-31 08:28:50.993 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:50.997 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4e827cdb-13f8-448c-8cf0-ca4afe0ef094]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:51.020 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[819d08d2-0025-4b9f-8528-c4abc61d9f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:51.022 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f7f6ec-169e-415c-81f5-9833cadc5365]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:51.041 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[30d1b313-3225-4f70-a557-972f366df332]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823231, 'reachable_time': 18987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358723, 'error': None, 'target': 'ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:51.044 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e26a2af1-a850-4885-977e-596b6be13fb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:28:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:51.044 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6546ef-eacb-4275-ae45-9b6235ee2d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:51 compute-0 systemd[1]: run-netns-ovnmeta\x2de26a2af1\x2da850\x2d4885\x2d977e\x2d596b6be13fb8.mount: Deactivated successfully.
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.109 247708 DEBUG nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received event network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.110 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.110 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.111 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.111 247708 DEBUG nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] No waiting events found dispatching network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.111 247708 WARNING nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received unexpected event network-vif-plugged-328810d5-4ccb-4c97-b300-745f5283e08f for instance with vm_state active and task_state deleting.
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.111 247708 DEBUG nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received event network-vif-unplugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.112 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.112 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.112 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.113 247708 DEBUG nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] No waiting events found dispatching network-vif-unplugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.113 247708 DEBUG nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received event network-vif-unplugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.113 247708 DEBUG nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received event network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.113 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.114 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.114 247708 DEBUG oslo_concurrency.lockutils [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.114 247708 DEBUG nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] No waiting events found dispatching network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.115 247708 WARNING nova.compute.manager [req-596d2d0c-9a3a-4f77-bc7a-c47c2ea8bdd1 req-7e380005-5aa0-43f2-9496-0367338e0bf6 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received unexpected event network-vif-plugged-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 for instance with vm_state active and task_state deleting.
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.335 247708 INFO nova.virt.libvirt.driver [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Deleting instance files /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901_del
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.336 247708 INFO nova.virt.libvirt.driver [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Deletion of /var/lib/nova/instances/30230cc0-c29f-42f3-9135-d3bc8ec7f901_del complete
Jan 31 08:28:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:51.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.633 247708 INFO nova.virt.libvirt.driver [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Deleting instance files /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d_del
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.634 247708 INFO nova.virt.libvirt.driver [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Deletion of /var/lib/nova/instances/884d5d5d-6ad9-46a8-867a-b01ed20a527d_del complete
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.640 247708 INFO nova.compute.manager [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Took 3.65 seconds to destroy the instance on the hypervisor.
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.640 247708 DEBUG oslo.service.loopingcall [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.640 247708 DEBUG nova.compute.manager [-] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.641 247708 DEBUG nova.network.neutron [-] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:28:51 compute-0 sudo[358726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:51 compute-0 sudo[358726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:51 compute-0 sudo[358726]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:51 compute-0 ceph-mon[74496]: pgmap v2861: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 118 KiB/s wr, 167 op/s
Jan 31 08:28:51 compute-0 sudo[358751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:28:51 compute-0 sudo[358751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:51 compute-0 sudo[358751]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.965 247708 INFO nova.compute.manager [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Took 4.12 seconds to destroy the instance on the hypervisor.
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.965 247708 DEBUG oslo.service.loopingcall [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.966 247708 DEBUG nova.compute.manager [-] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:28:51 compute-0 nova_compute[247704]: 2026-01-31 08:28:51.966 247708 DEBUG nova.network.neutron [-] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:28:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:52.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 700 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 39 KiB/s wr, 172 op/s
Jan 31 08:28:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:52 compute-0 nova_compute[247704]: 2026-01-31 08:28:52.346 247708 DEBUG nova.network.neutron [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:52 compute-0 nova_compute[247704]: 2026-01-31 08:28:52.399 247708 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:28:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1528313895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.068 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.068 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:28:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:53.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.762 247708 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.763 247708 DEBUG nova.virt.libvirt.volume.remotefs [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Creating file /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/72308696c348409886fa510173ddd2e8.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.763 247708 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/72308696c348409886fa510173ddd2e8.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.794 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.795 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.796 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:53 compute-0 nova_compute[247704]: 2026-01-31 08:28:53.797 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:53 compute-0 ceph-mon[74496]: pgmap v2862: 305 pgs: 305 active+clean; 700 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 39 KiB/s wr, 172 op/s
Jan 31 08:28:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3992570297' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:28:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3992570297' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:28:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:54 compute-0 nova_compute[247704]: 2026-01-31 08:28:54.170 247708 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/72308696c348409886fa510173ddd2e8.tmp" returned: 1 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:54 compute-0 nova_compute[247704]: 2026-01-31 08:28:54.170 247708 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30/72308696c348409886fa510173ddd2e8.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:28:54 compute-0 nova_compute[247704]: 2026-01-31 08:28:54.171 247708 DEBUG nova.virt.libvirt.volume.remotefs [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Creating directory /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Jan 31 08:28:54 compute-0 nova_compute[247704]: 2026-01-31 08:28:54.171 247708 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 656 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 39 KiB/s wr, 165 op/s
Jan 31 08:28:54 compute-0 nova_compute[247704]: 2026-01-31 08:28:54.315 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:54 compute-0 nova_compute[247704]: 2026-01-31 08:28:54.382 247708 DEBUG oslo_concurrency.processutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/10816ede-cf43-4736-aba7-48389f607d30" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:54 compute-0 nova_compute[247704]: 2026-01-31 08:28:54.387 247708 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 08:28:55 compute-0 nova_compute[247704]: 2026-01-31 08:28:55.286 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:28:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:55.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:28:55 compute-0 nova_compute[247704]: 2026-01-31 08:28:55.669 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:55 compute-0 ceph-mon[74496]: pgmap v2863: 305 pgs: 305 active+clean; 656 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 39 KiB/s wr, 165 op/s
Jan 31 08:28:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:56.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 49 KiB/s wr, 177 op/s
Jan 31 08:28:56 compute-0 nova_compute[247704]: 2026-01-31 08:28:56.959 247708 DEBUG nova.network.neutron [-] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1234714180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.186 247708 INFO nova.compute.manager [-] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Took 5.55 seconds to deallocate network for instance.
Jan 31 08:28:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.352 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.353 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.401 247708 DEBUG nova.compute.manager [req-ad448d56-7a8e-424a-8e0e-49994da45677 req-7d4a1c89-8912-4e65-99f1-d3dc5a67524c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Received event network-vif-deleted-328810d5-4ccb-4c97-b300-745f5283e08f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:57.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.609 247708 DEBUG nova.network.neutron [-] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.686 247708 INFO nova.compute.manager [-] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Took 5.72 seconds to deallocate network for instance.
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.705 247708 DEBUG oslo_concurrency.processutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.803 247708 DEBUG nova.compute.manager [req-a738a6dd-7431-4af9-8f37-3ab49a450575 req-4902c1bf-d080-476f-af76-738f39f85e9a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Received event network-vif-deleted-36dc53a0-e7ae-426b-a9bb-da7abd1ebb84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:57 compute-0 nova_compute[247704]: 2026-01-31 08:28:57.816 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:58 compute-0 kernel: tapaeb09486-b6 (unregistering): left promiscuous mode
Jan 31 08:28:58 compute-0 NetworkManager[49108]: <info>  [1769848138.0116] device (tapaeb09486-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.020 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:58 compute-0 ovn_controller[149457]: 2026-01-31T08:28:58Z|00728|binding|INFO|Releasing lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 from this chassis (sb_readonly=0)
Jan 31 08:28:58 compute-0 ovn_controller[149457]: 2026-01-31T08:28:58Z|00729|binding|INFO|Setting lport aeb09486-b68f-4fa4-a410-dd0ffaf49b05 down in Southbound
Jan 31 08:28:58 compute-0 ovn_controller[149457]: 2026-01-31T08:28:58Z|00730|binding|INFO|Removing iface tapaeb09486-b6 ovn-installed in OVS
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.022 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.028 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.042 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:78:f9 10.100.0.11'], port_security=['fa:16:3e:ec:78:f9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '10816ede-cf43-4736-aba7-48389f607d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=aeb09486-b68f-4fa4-a410-dd0ffaf49b05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.043 160028 INFO neutron.agent.ovn.metadata.agent [-] Port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad unbound from our chassis
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.045 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.066 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[922f7e85-5fff-434d-b985-747a294391f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:58 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Jan 31 08:28:58 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a5.scope: Consumed 17.329s CPU time.
Jan 31 08:28:58 compute-0 systemd-machined[214448]: Machine qemu-77-instance-000000a5 terminated.
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.097 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[be902fd3-c681-4321-890f-414f25006bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:58 compute-0 ceph-mon[74496]: pgmap v2864: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 49 KiB/s wr, 177 op/s
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.101 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb00c12-7324-402c-9d41-94200f8fe4e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.132 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9e49d6d0-fb34-42d9-a6d2-ec0a5edc2ca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.156 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[451e489b-628f-4a50-af43-3f4e0eabe168]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813771, 'reachable_time': 30578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358812, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.177 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4ee6b4-72fc-45a1-81ed-58443b39e15d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813783, 'tstamp': 813783}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358813, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813786, 'tstamp': 813786}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 358813, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.180 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.182 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136540381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.186 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.187 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.187 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.187 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:28:58.188 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.207 247708 DEBUG oslo_concurrency.processutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.213 247708 DEBUG nova.compute.provider_tree [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 459 KiB/s rd, 21 KiB/s wr, 104 op/s
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.246 247708 DEBUG nova.scheduler.client.report [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.306 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.309 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.417 247708 INFO nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance shutdown successfully after 4 seconds.
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.425 247708 INFO nova.virt.libvirt.driver [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Instance destroyed successfully.
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.426 247708 DEBUG nova.virt.libvirt.vif [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:27:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=165,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:27:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-uhld1k6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:28:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=10816ede-cf43-4736-aba7-48389f607d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:ec:78:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.427 247708 DEBUG nova.network.os_vif_util [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "vif_mac": "fa:16:3e:ec:78:f9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.428 247708 DEBUG nova.network.os_vif_util [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.428 247708 DEBUG os_vif [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.430 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.430 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaeb09486-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.433 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.436 247708 INFO os_vif [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6')
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.593 247708 INFO nova.scheduler.client.report [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Deleted allocations for instance 30230cc0-c29f-42f3-9135-d3bc8ec7f901
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.621 247708 DEBUG oslo_concurrency.processutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.843 247708 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.844 247708 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.844 247708 DEBUG nova.virt.libvirt.driver [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:28:58 compute-0 nova_compute[247704]: 2026-01-31 08:28:58.877 247708 DEBUG oslo_concurrency.lockutils [None req-aabc78ed-a236-48ae-8af8-1ed1c8925878 eac51187531841e2891fc5d3c5f84123 722ab2e9dd674709953be812d4c88493 - - default default] Lock "30230cc0-c29f-42f3-9135-d3bc8ec7f901" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:28:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326598905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.074 247708 DEBUG oslo_concurrency.processutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.079 247708 DEBUG nova.compute.manager [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.079 247708 DEBUG oslo_concurrency.lockutils [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.079 247708 DEBUG oslo_concurrency.lockutils [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.080 247708 DEBUG oslo_concurrency.lockutils [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.080 247708 DEBUG nova.compute.manager [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.080 247708 WARNING nova.compute.manager [req-4189f661-9a37-4a1a-b5e5-b9f96aee2ee1 req-6fb3d761-76e5-4544-92e8-799a0f1fed38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-unplugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state active and task_state resize_migrating.
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.085 247708 DEBUG nova.compute.provider_tree [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2136540381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3570348620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/326598905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.218 247708 DEBUG nova.scheduler.client.report [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:28:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:28:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:59.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.640 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.831 247708 INFO nova.scheduler.client.report [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Deleted allocations for instance 884d5d5d-6ad9-46a8-867a-b01ed20a527d
Jan 31 08:28:59 compute-0 nova_compute[247704]: 2026-01-31 08:28:59.998 247708 DEBUG oslo_concurrency.lockutils [None req-20037c9c-26db-4642-9afc-96ef2a6056ff 48d684de9ba340f48e249b4cce857bfa 418d5319c640455ab23850c0b0f24f92 - - default default] Lock "884d5d5d-6ad9-46a8-867a-b01ed20a527d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:00.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:00 compute-0 nova_compute[247704]: 2026-01-31 08:29:00.120 247708 DEBUG neutronclient.v2_0.client [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:29:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 616 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 857 KiB/s wr, 84 op/s
Jan 31 08:29:00 compute-0 ceph-mon[74496]: pgmap v2865: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 459 KiB/s rd, 21 KiB/s wr, 104 op/s
Jan 31 08:29:00 compute-0 nova_compute[247704]: 2026-01-31 08:29:00.607 247708 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:00 compute-0 nova_compute[247704]: 2026-01-31 08:29:00.607 247708 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:00 compute-0 nova_compute[247704]: 2026-01-31 08:29:00.607 247708 DEBUG oslo_concurrency.lockutils [None req-cd591ccf-e301-4e60-a3f3-a4fc5ce21321 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:00 compute-0 nova_compute[247704]: 2026-01-31 08:29:00.670 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:01.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:01 compute-0 ceph-mon[74496]: pgmap v2866: 305 pgs: 305 active+clean; 616 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 857 KiB/s wr, 84 op/s
Jan 31 08:29:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:02.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 1.7 MiB/s wr, 80 op/s
Jan 31 08:29:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:02 compute-0 nova_compute[247704]: 2026-01-31 08:29:02.624 247708 DEBUG nova.compute.manager [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:02 compute-0 nova_compute[247704]: 2026-01-31 08:29:02.625 247708 DEBUG oslo_concurrency.lockutils [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:02 compute-0 nova_compute[247704]: 2026-01-31 08:29:02.625 247708 DEBUG oslo_concurrency.lockutils [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:02 compute-0 nova_compute[247704]: 2026-01-31 08:29:02.625 247708 DEBUG oslo_concurrency.lockutils [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:02 compute-0 nova_compute[247704]: 2026-01-31 08:29:02.625 247708 DEBUG nova.compute.manager [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:29:02 compute-0 nova_compute[247704]: 2026-01-31 08:29:02.626 247708 WARNING nova.compute.manager [req-815221c8-f72b-452a-90cb-e87ddb3190b0 req-52b7ca9c-3447-4996-b22b-46ad52b41514 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state active and task_state resize_migrated.
Jan 31 08:29:03 compute-0 nova_compute[247704]: 2026-01-31 08:29:03.235 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848128.2229824, 30230cc0-c29f-42f3-9135-d3bc8ec7f901 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:29:03 compute-0 nova_compute[247704]: 2026-01-31 08:29:03.235 247708 INFO nova.compute.manager [-] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] VM Stopped (Lifecycle Event)
Jan 31 08:29:03 compute-0 nova_compute[247704]: 2026-01-31 08:29:03.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:03.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:03 compute-0 ceph-mon[74496]: pgmap v2867: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 1.7 MiB/s wr, 80 op/s
Jan 31 08:29:04 compute-0 nova_compute[247704]: 2026-01-31 08:29:04.034 247708 DEBUG nova.compute.manager [None req-280f1b7d-d454-4c44-b3d9-08f2092eca69 - - - - - -] [instance: 30230cc0-c29f-42f3-9135-d3bc8ec7f901] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:04 compute-0 nova_compute[247704]: 2026-01-31 08:29:04.097 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848129.0959618, 884d5d5d-6ad9-46a8-867a-b01ed20a527d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:29:04 compute-0 nova_compute[247704]: 2026-01-31 08:29:04.097 247708 INFO nova.compute.manager [-] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] VM Stopped (Lifecycle Event)
Jan 31 08:29:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:04.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 31 08:29:04 compute-0 nova_compute[247704]: 2026-01-31 08:29:04.243 247708 DEBUG nova.compute.manager [None req-0221a853-cff1-4733-89fe-851e87d221b8 - - - - - -] [instance: 884d5d5d-6ad9-46a8-867a-b01ed20a527d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:05.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:05 compute-0 nova_compute[247704]: 2026-01-31 08:29:05.672 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:05 compute-0 podman[358853]: 2026-01-31 08:29:05.942458 +0000 UTC m=+0.102670488 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 08:29:06 compute-0 nova_compute[247704]: 2026-01-31 08:29:06.097 247708 DEBUG nova.compute.manager [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:06 compute-0 nova_compute[247704]: 2026-01-31 08:29:06.097 247708 DEBUG nova.compute.manager [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing instance network info cache due to event network-changed-aeb09486-b68f-4fa4-a410-dd0ffaf49b05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:29:06 compute-0 nova_compute[247704]: 2026-01-31 08:29:06.098 247708 DEBUG oslo_concurrency.lockutils [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:06 compute-0 nova_compute[247704]: 2026-01-31 08:29:06.098 247708 DEBUG oslo_concurrency.lockutils [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:06 compute-0 nova_compute[247704]: 2026-01-31 08:29:06.098 247708 DEBUG nova.network.neutron [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Refreshing network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:29:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:06 compute-0 ceph-mon[74496]: pgmap v2868: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 31 08:29:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 31 08:29:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:07.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:07 compute-0 ceph-mon[74496]: pgmap v2869: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 31 08:29:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:08.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:29:08 compute-0 nova_compute[247704]: 2026-01-31 08:29:08.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:09.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:09 compute-0 ceph-mon[74496]: pgmap v2870: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:29:09 compute-0 sudo[358876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:09 compute-0 sudo[358876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:09 compute-0 sudo[358876]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:09 compute-0 sudo[358901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:09 compute-0 sudo[358901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:09 compute-0 sudo[358901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:09 compute-0 sudo[358926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:09 compute-0 sudo[358926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:09 compute-0 sudo[358926]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:10 compute-0 sudo[358951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:29:10 compute-0 sudo[358951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:10.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:29:10 compute-0 sudo[358951]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:29:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:29:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:29:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:29:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:29:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1d21a0e4-7dce-4dd1-973d-77d2fb81165a does not exist
Jan 31 08:29:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8237e176-062f-4437-a2f2-749bf6d3c133 does not exist
Jan 31 08:29:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 835f3989-ccc0-459f-a598-3030921fdb0d does not exist
Jan 31 08:29:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:29:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:29:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:29:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:29:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:29:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:29:10 compute-0 nova_compute[247704]: 2026-01-31 08:29:10.674 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:10 compute-0 sudo[359007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:10 compute-0 sudo[359007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:10 compute-0 sudo[359007]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:10 compute-0 sudo[359032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:10 compute-0 sudo[359032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:10 compute-0 sudo[359032]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:29:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:29:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:29:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:29:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:29:10 compute-0 sudo[359057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:10 compute-0 sudo[359057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:10 compute-0 sudo[359057]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:10 compute-0 sudo[359082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:29:10 compute-0 sudo[359082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:29:11.199 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:29:11.200 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:29:11.201 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:11 compute-0 podman[359148]: 2026-01-31 08:29:11.132414283 +0000 UTC m=+0.021806875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:29:11 compute-0 podman[359148]: 2026-01-31 08:29:11.227249849 +0000 UTC m=+0.116642401 container create 5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:29:11 compute-0 systemd[1]: Started libpod-conmon-5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e.scope.
Jan 31 08:29:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:11 compute-0 podman[359148]: 2026-01-31 08:29:11.529435868 +0000 UTC m=+0.418828440 container init 5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:11 compute-0 podman[359148]: 2026-01-31 08:29:11.540563381 +0000 UTC m=+0.429955923 container start 5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 08:29:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:11.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:11 compute-0 busy_neumann[359164]: 167 167
Jan 31 08:29:11 compute-0 systemd[1]: libpod-5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e.scope: Deactivated successfully.
Jan 31 08:29:11 compute-0 podman[359148]: 2026-01-31 08:29:11.583917144 +0000 UTC m=+0.473309716 container attach 5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:11 compute-0 podman[359148]: 2026-01-31 08:29:11.585956294 +0000 UTC m=+0.475348846 container died 5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 31 08:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7a4f513fbbed1dac2bda049d4222862ce44c210d62d0588adecc75b49e089e2-merged.mount: Deactivated successfully.
Jan 31 08:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:29:11.706 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:29:11 compute-0 nova_compute[247704]: 2026-01-31 08:29:11.708 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:29:11.708 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:29:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:29:11.709 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:11 compute-0 podman[359148]: 2026-01-31 08:29:11.781887348 +0000 UTC m=+0.671279910 container remove 5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:29:11 compute-0 systemd[1]: libpod-conmon-5a26590db08a910c97688a5af4895fd361bacff588b401e4e32cf2dc6c9d338e.scope: Deactivated successfully.
Jan 31 08:29:11 compute-0 nova_compute[247704]: 2026-01-31 08:29:11.881 247708 DEBUG nova.network.neutron [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updated VIF entry in instance network info cache for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:29:11 compute-0 nova_compute[247704]: 2026-01-31 08:29:11.882 247708 DEBUG nova.network.neutron [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:11 compute-0 nova_compute[247704]: 2026-01-31 08:29:11.920 247708 DEBUG oslo_concurrency.lockutils [req-f501cdfe-d649-4bb1-85b2-0d31cb93b3d2 req-bb35528a-aef0-4a37-badb-e9e8c09c3264 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:11 compute-0 podman[359191]: 2026-01-31 08:29:11.958524909 +0000 UTC m=+0.063842445 container create 660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 08:29:11 compute-0 sudo[359205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:12 compute-0 sudo[359205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:12 compute-0 sudo[359205]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:12 compute-0 podman[359191]: 2026-01-31 08:29:11.917265548 +0000 UTC m=+0.022583114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:29:12 compute-0 systemd[1]: Started libpod-conmon-660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77.scope.
Jan 31 08:29:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cdfeaddf6d99039827e08522cfee8657926e9da90bfc3a535dbc7aac3b8038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:12 compute-0 ceph-mon[74496]: pgmap v2871: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:29:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/503516173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cdfeaddf6d99039827e08522cfee8657926e9da90bfc3a535dbc7aac3b8038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cdfeaddf6d99039827e08522cfee8657926e9da90bfc3a535dbc7aac3b8038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cdfeaddf6d99039827e08522cfee8657926e9da90bfc3a535dbc7aac3b8038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3cdfeaddf6d99039827e08522cfee8657926e9da90bfc3a535dbc7aac3b8038/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:12 compute-0 sudo[359231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:12 compute-0 sudo[359231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:12 compute-0 sudo[359231]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:12.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 981 KiB/s wr, 27 op/s
Jan 31 08:29:12 compute-0 podman[359191]: 2026-01-31 08:29:12.242460272 +0000 UTC m=+0.347777828 container init 660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:29:12 compute-0 podman[359191]: 2026-01-31 08:29:12.250553491 +0000 UTC m=+0.355871027 container start 660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 08:29:12 compute-0 podman[359191]: 2026-01-31 08:29:12.287095966 +0000 UTC m=+0.392413502 container attach 660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:29:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:13 compute-0 dazzling_beaver[359251]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:29:13 compute-0 dazzling_beaver[359251]: --> relative data size: 1.0
Jan 31 08:29:13 compute-0 dazzling_beaver[359251]: --> All data devices are unavailable
Jan 31 08:29:13 compute-0 systemd[1]: libpod-660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77.scope: Deactivated successfully.
Jan 31 08:29:13 compute-0 podman[359191]: 2026-01-31 08:29:13.170628421 +0000 UTC m=+1.275945967 container died 660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3cdfeaddf6d99039827e08522cfee8657926e9da90bfc3a535dbc7aac3b8038-merged.mount: Deactivated successfully.
Jan 31 08:29:13 compute-0 nova_compute[247704]: 2026-01-31 08:29:13.262 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848138.2616696, 10816ede-cf43-4736-aba7-48389f607d30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:29:13 compute-0 nova_compute[247704]: 2026-01-31 08:29:13.266 247708 INFO nova.compute.manager [-] [instance: 10816ede-cf43-4736-aba7-48389f607d30] VM Stopped (Lifecycle Event)
Jan 31 08:29:13 compute-0 podman[359191]: 2026-01-31 08:29:13.302068784 +0000 UTC m=+1.407386320 container remove 660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:29:13 compute-0 systemd[1]: libpod-conmon-660ff76200623a5f71785067375adea0c8f06370823f74448e6fb10d13788d77.scope: Deactivated successfully.
Jan 31 08:29:13 compute-0 sudo[359082]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:13 compute-0 nova_compute[247704]: 2026-01-31 08:29:13.375 247708 DEBUG nova.compute.manager [None req-f3bcc979-f41a-4a26-ac8b-530cee4f3aaf - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:29:13 compute-0 nova_compute[247704]: 2026-01-31 08:29:13.380 247708 DEBUG nova.compute.manager [None req-f3bcc979-f41a-4a26-ac8b-530cee4f3aaf - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:29:13 compute-0 sudo[359287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:13 compute-0 sudo[359287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:13 compute-0 sudo[359287]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:13 compute-0 nova_compute[247704]: 2026-01-31 08:29:13.454 247708 INFO nova.compute.manager [None req-f3bcc979-f41a-4a26-ac8b-530cee4f3aaf - - - - - -] [instance: 10816ede-cf43-4736-aba7-48389f607d30] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 31 08:29:13 compute-0 sudo[359312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:13 compute-0 sudo[359312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:13 compute-0 sudo[359312]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:13 compute-0 nova_compute[247704]: 2026-01-31 08:29:13.479 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:13 compute-0 sudo[359337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:13 compute-0 sudo[359337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:13 compute-0 sudo[359337]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:13.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:13 compute-0 sudo[359363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:29:13 compute-0 sudo[359363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:13 compute-0 podman[359428]: 2026-01-31 08:29:13.853954357 +0000 UTC m=+0.022296208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:29:13 compute-0 podman[359428]: 2026-01-31 08:29:13.96380964 +0000 UTC m=+0.132151431 container create f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:29:14 compute-0 systemd[1]: Started libpod-conmon-f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307.scope.
Jan 31 08:29:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Jan 31 08:29:14 compute-0 podman[359428]: 2026-01-31 08:29:14.127011912 +0000 UTC m=+0.295353753 container init f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hermann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:29:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:14.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:14 compute-0 podman[359428]: 2026-01-31 08:29:14.134370292 +0000 UTC m=+0.302712083 container start f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:29:14 compute-0 magical_hermann[359444]: 167 167
Jan 31 08:29:14 compute-0 systemd[1]: libpod-f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307.scope: Deactivated successfully.
Jan 31 08:29:14 compute-0 conmon[359444]: conmon f89f37570e4d2adda609 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307.scope/container/memory.events
Jan 31 08:29:14 compute-0 ceph-mon[74496]: pgmap v2872: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 981 KiB/s wr, 27 op/s
Jan 31 08:29:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4283237309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:14 compute-0 podman[359428]: 2026-01-31 08:29:14.193591985 +0000 UTC m=+0.361933776 container attach f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hermann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 08:29:14 compute-0 podman[359428]: 2026-01-31 08:29:14.19420998 +0000 UTC m=+0.362551771 container died f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 91 KiB/s wr, 14 op/s
Jan 31 08:29:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Jan 31 08:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-231d11e8c695c80ca41e44928426e5544a235f0bc2981969e41e3fc16dbd0323-merged.mount: Deactivated successfully.
Jan 31 08:29:14 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Jan 31 08:29:14 compute-0 podman[359428]: 2026-01-31 08:29:14.369406506 +0000 UTC m=+0.537748307 container remove f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:29:14 compute-0 systemd[1]: libpod-conmon-f89f37570e4d2adda609d2c0425f69e7694f677639bb8bcaa2d95b38497f4307.scope: Deactivated successfully.
Jan 31 08:29:14 compute-0 podman[359468]: 2026-01-31 08:29:14.547590855 +0000 UTC m=+0.055309137 container create 987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:29:14 compute-0 systemd[1]: Started libpod-conmon-987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd.scope.
Jan 31 08:29:14 compute-0 podman[359468]: 2026-01-31 08:29:14.516772059 +0000 UTC m=+0.024490361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:29:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000d9945aa5b6c3653a2b665b6d986d1ea14d501ad624c3ef808b02333abf9f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000d9945aa5b6c3653a2b665b6d986d1ea14d501ad624c3ef808b02333abf9f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000d9945aa5b6c3653a2b665b6d986d1ea14d501ad624c3ef808b02333abf9f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000d9945aa5b6c3653a2b665b6d986d1ea14d501ad624c3ef808b02333abf9f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:14 compute-0 podman[359468]: 2026-01-31 08:29:14.678842033 +0000 UTC m=+0.186560345 container init 987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:29:14 compute-0 podman[359468]: 2026-01-31 08:29:14.685306432 +0000 UTC m=+0.193024714 container start 987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:29:14 compute-0 podman[359468]: 2026-01-31 08:29:14.717326108 +0000 UTC m=+0.225044390 container attach 987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 31 08:29:15 compute-0 ceph-mon[74496]: pgmap v2873: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 91 KiB/s wr, 14 op/s
Jan 31 08:29:15 compute-0 ceph-mon[74496]: osdmap e365: 3 total, 3 up, 3 in
Jan 31 08:29:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2624916612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1429176043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:15 compute-0 peaceful_cori[359484]: {
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:     "0": [
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:         {
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "devices": [
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "/dev/loop3"
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             ],
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "lv_name": "ceph_lv0",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "lv_size": "7511998464",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "name": "ceph_lv0",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "tags": {
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.cluster_name": "ceph",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.crush_device_class": "",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.encrypted": "0",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.osd_id": "0",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.type": "block",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:                 "ceph.vdo": "0"
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             },
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "type": "block",
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:             "vg_name": "ceph_vg0"
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:         }
Jan 31 08:29:15 compute-0 peaceful_cori[359484]:     ]
Jan 31 08:29:15 compute-0 peaceful_cori[359484]: }
Jan 31 08:29:15 compute-0 systemd[1]: libpod-987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd.scope: Deactivated successfully.
Jan 31 08:29:15 compute-0 podman[359468]: 2026-01-31 08:29:15.477616991 +0000 UTC m=+0.985335293 container died 987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-000d9945aa5b6c3653a2b665b6d986d1ea14d501ad624c3ef808b02333abf9f3-merged.mount: Deactivated successfully.
Jan 31 08:29:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:15.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:15 compute-0 nova_compute[247704]: 2026-01-31 08:29:15.677 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:15 compute-0 podman[359468]: 2026-01-31 08:29:15.681689464 +0000 UTC m=+1.189407746 container remove 987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cori, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:29:15 compute-0 systemd[1]: libpod-conmon-987182f199ba4046ce3ee0e5e0d980de82085e541f9683c79b4c8cff364d5bcd.scope: Deactivated successfully.
Jan 31 08:29:15 compute-0 sudo[359363]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:15 compute-0 sudo[359507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:15 compute-0 sudo[359507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:15 compute-0 sudo[359507]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:15 compute-0 sudo[359532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:15 compute-0 sudo[359532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:15 compute-0 sudo[359532]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:15 compute-0 sudo[359557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:15 compute-0 sudo[359557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:15 compute-0 sudo[359557]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:15 compute-0 sudo[359582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:29:15 compute-0 sudo[359582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:29:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:16.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:29:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 3.7 KiB/s wr, 21 op/s
Jan 31 08:29:16 compute-0 podman[359649]: 2026-01-31 08:29:16.350983066 +0000 UTC m=+0.076228280 container create da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 31 08:29:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/569270921' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:16 compute-0 podman[359649]: 2026-01-31 08:29:16.299527755 +0000 UTC m=+0.024772989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:29:16 compute-0 systemd[1]: Started libpod-conmon-da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337.scope.
Jan 31 08:29:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:16 compute-0 podman[359649]: 2026-01-31 08:29:16.470755023 +0000 UTC m=+0.196000257 container init da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:29:16 compute-0 podman[359649]: 2026-01-31 08:29:16.479469356 +0000 UTC m=+0.204714570 container start da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:29:16 compute-0 suspicious_jemison[359665]: 167 167
Jan 31 08:29:16 compute-0 systemd[1]: libpod-da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337.scope: Deactivated successfully.
Jan 31 08:29:16 compute-0 conmon[359665]: conmon da4695e00375fc47df16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337.scope/container/memory.events
Jan 31 08:29:16 compute-0 podman[359649]: 2026-01-31 08:29:16.5048952 +0000 UTC m=+0.230140434 container attach da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:29:16 compute-0 podman[359649]: 2026-01-31 08:29:16.50569868 +0000 UTC m=+0.230943904 container died da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:29:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0ff328df17fc2b6f99dca25506c77d2eec9e05bd950df0229beff3284b74294-merged.mount: Deactivated successfully.
Jan 31 08:29:16 compute-0 podman[359649]: 2026-01-31 08:29:16.633976886 +0000 UTC m=+0.359222100 container remove da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_jemison, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 08:29:16 compute-0 systemd[1]: libpod-conmon-da4695e00375fc47df16445c3776fa4d8b1b61f6f96bb01d877bd9e1a44b3337.scope: Deactivated successfully.
Jan 31 08:29:16 compute-0 podman[359691]: 2026-01-31 08:29:16.790705078 +0000 UTC m=+0.058255049 container create c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_black, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:29:16 compute-0 systemd[1]: Started libpod-conmon-c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697.scope.
Jan 31 08:29:16 compute-0 podman[359691]: 2026-01-31 08:29:16.753407184 +0000 UTC m=+0.020957175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:29:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfc190b1a8d695ca6e696779ec2808b3b0cdc3ced0bea89d6bae164692035ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfc190b1a8d695ca6e696779ec2808b3b0cdc3ced0bea89d6bae164692035ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfc190b1a8d695ca6e696779ec2808b3b0cdc3ced0bea89d6bae164692035ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dfc190b1a8d695ca6e696779ec2808b3b0cdc3ced0bea89d6bae164692035ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:16 compute-0 podman[359691]: 2026-01-31 08:29:16.914599396 +0000 UTC m=+0.182149407 container init c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 31 08:29:16 compute-0 podman[359691]: 2026-01-31 08:29:16.923543885 +0000 UTC m=+0.191093856 container start c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_black, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:29:16 compute-0 podman[359691]: 2026-01-31 08:29:16.932134236 +0000 UTC m=+0.199684417 container attach c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_black, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:29:17 compute-0 nova_compute[247704]: 2026-01-31 08:29:17.283 247708 DEBUG nova.compute.manager [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:17 compute-0 nova_compute[247704]: 2026-01-31 08:29:17.285 247708 DEBUG oslo_concurrency.lockutils [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:17 compute-0 nova_compute[247704]: 2026-01-31 08:29:17.286 247708 DEBUG oslo_concurrency.lockutils [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:17 compute-0 nova_compute[247704]: 2026-01-31 08:29:17.286 247708 DEBUG oslo_concurrency.lockutils [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:17 compute-0 nova_compute[247704]: 2026-01-31 08:29:17.286 247708 DEBUG nova.compute.manager [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:29:17 compute-0 nova_compute[247704]: 2026-01-31 08:29:17.286 247708 WARNING nova.compute.manager [req-c13d3c8d-0c7d-4b05-9fef-4f677f56268d req-35d395d6-1465-48be-8c51-bef7bd2198a1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state resized and task_state None.
Jan 31 08:29:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:17 compute-0 ceph-mon[74496]: pgmap v2875: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 3.7 KiB/s wr, 21 op/s
Jan 31 08:29:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:17.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:17 compute-0 brave_black[359708]: {
Jan 31 08:29:17 compute-0 brave_black[359708]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:29:17 compute-0 brave_black[359708]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:29:17 compute-0 brave_black[359708]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:29:17 compute-0 brave_black[359708]:         "osd_id": 0,
Jan 31 08:29:17 compute-0 brave_black[359708]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:29:17 compute-0 brave_black[359708]:         "type": "bluestore"
Jan 31 08:29:17 compute-0 brave_black[359708]:     }
Jan 31 08:29:17 compute-0 brave_black[359708]: }
Jan 31 08:29:17 compute-0 systemd[1]: libpod-c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697.scope: Deactivated successfully.
Jan 31 08:29:17 compute-0 podman[359691]: 2026-01-31 08:29:17.82664082 +0000 UTC m=+1.094190831 container died c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_black, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:29:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dfc190b1a8d695ca6e696779ec2808b3b0cdc3ced0bea89d6bae164692035ed-merged.mount: Deactivated successfully.
Jan 31 08:29:18 compute-0 podman[359691]: 2026-01-31 08:29:18.084352579 +0000 UTC m=+1.351902550 container remove c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:29:18 compute-0 systemd[1]: libpod-conmon-c27d89e7c72ba4da57c27d8d066bac4f95b4e26e06365017aa0f050d13147697.scope: Deactivated successfully.
Jan 31 08:29:18 compute-0 sudo[359582]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:29:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:18.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:18 compute-0 podman[359743]: 2026-01-31 08:29:18.224297911 +0000 UTC m=+0.094970999 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 08:29:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 4.3 KiB/s wr, 32 op/s
Jan 31 08:29:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:29:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3f95411c-923b-4fc9-a6e1-70da6bc317e3 does not exist
Jan 31 08:29:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5a682502-62b1-441c-9def-77f90de030a4 does not exist
Jan 31 08:29:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9795df88-e6a6-47b9-96b3-3c985a1196a6 does not exist
Jan 31 08:29:18 compute-0 sudo[359769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:18 compute-0 sudo[359769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:18 compute-0 sudo[359769]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:18 compute-0 sudo[359794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:29:18 compute-0 sudo[359794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:18 compute-0 sudo[359794]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:18 compute-0 nova_compute[247704]: 2026-01-31 08:29:18.481 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:19 compute-0 ceph-mon[74496]: pgmap v2876: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 4.3 KiB/s wr, 32 op/s
Jan 31 08:29:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:29:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:19.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:20.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:29:20
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.meta', 'volumes']
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.222 247708 DEBUG nova.compute.manager [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.223 247708 DEBUG oslo_concurrency.lockutils [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.224 247708 DEBUG oslo_concurrency.lockutils [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.224 247708 DEBUG oslo_concurrency.lockutils [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.224 247708 DEBUG nova.compute.manager [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] No waiting events found dispatching network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.225 247708 WARNING nova.compute.manager [req-111eac0e-8b02-4a65-ba11-3bc6158e513c req-8c12ae04-f954-4255-9189-14f75b6007c2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Received unexpected event network-vif-plugged-aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for instance with vm_state resized and task_state None.
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 18 KiB/s wr, 109 op/s
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.679 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:29:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.865 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "10816ede-cf43-4736-aba7-48389f607d30" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.866 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:20 compute-0 nova_compute[247704]: 2026-01-31 08:29:20.866 247708 DEBUG nova.compute.manager [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Going to confirm migration 22 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 31 08:29:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:21.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:21 compute-0 ceph-mon[74496]: pgmap v2877: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 18 KiB/s wr, 109 op/s
Jan 31 08:29:21 compute-0 nova_compute[247704]: 2026-01-31 08:29:21.820 247708 DEBUG neutronclient.v2_0.client [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port aeb09486-b68f-4fa4-a410-dd0ffaf49b05 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 31 08:29:21 compute-0 nova_compute[247704]: 2026-01-31 08:29:21.821 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:21 compute-0 nova_compute[247704]: 2026-01-31 08:29:21.821 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:21 compute-0 nova_compute[247704]: 2026-01-31 08:29:21.821 247708 DEBUG nova.network.neutron [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:29:21 compute-0 nova_compute[247704]: 2026-01-31 08:29:21.822 247708 DEBUG nova.objects.instance [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'info_cache' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 16 KiB/s wr, 193 op/s
Jan 31 08:29:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:23 compute-0 nova_compute[247704]: 2026-01-31 08:29:23.484 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:23.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:24 compute-0 ceph-mon[74496]: pgmap v2878: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 16 KiB/s wr, 193 op/s
Jan 31 08:29:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3450192108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 196 op/s
Jan 31 08:29:24 compute-0 nova_compute[247704]: 2026-01-31 08:29:24.432 247708 DEBUG nova.network.neutron [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 10816ede-cf43-4736-aba7-48389f607d30] Updating instance_info_cache with network_info: [{"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:24 compute-0 nova_compute[247704]: 2026-01-31 08:29:24.496 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-10816ede-cf43-4736-aba7-48389f607d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:24 compute-0 nova_compute[247704]: 2026-01-31 08:29:24.497 247708 DEBUG nova.objects.instance [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'migration_context' on Instance uuid 10816ede-cf43-4736-aba7-48389f607d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:25 compute-0 ceph-mon[74496]: pgmap v2879: 305 pgs: 305 active+clean; 645 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 196 op/s
Jan 31 08:29:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:29:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:29:25 compute-0 nova_compute[247704]: 2026-01-31 08:29:25.682 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 682 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 181 op/s
Jan 31 08:29:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:27.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:27 compute-0 ceph-mon[74496]: pgmap v2880: 305 pgs: 305 active+clean; 682 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 181 op/s
Jan 31 08:29:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 31 08:29:28 compute-0 nova_compute[247704]: 2026-01-31 08:29:28.486 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:28 compute-0 nova_compute[247704]: 2026-01-31 08:29:28.677 247708 DEBUG nova.storage.rbd_utils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] removing snapshot(nova-resize) on rbd image(10816ede-cf43-4736-aba7-48389f607d30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:29:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:29.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Jan 31 08:29:29 compute-0 ceph-mon[74496]: pgmap v2881: 305 pgs: 305 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 31 08:29:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Jan 31 08:29:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.832 247708 DEBUG nova.virt.libvirt.vif [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:27:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='multiattach-server-1',id=165,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:29:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-uhld1k6v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q
35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:29:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=10816ede-cf43-4736-aba7-48389f607d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.832 247708 DEBUG nova.network.os_vif_util [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "address": "fa:16:3e:ec:78:f9", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaeb09486-b6", "ovs_interfaceid": "aeb09486-b68f-4fa4-a410-dd0ffaf49b05", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.833 247708 DEBUG nova.network.os_vif_util [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.834 247708 DEBUG os_vif [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.836 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.836 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaeb09486-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.836 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.839 247708 INFO os_vif [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:78:f9,bridge_name='br-int',has_traffic_filtering=True,id=aeb09486-b68f-4fa4-a410-dd0ffaf49b05,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaeb09486-b6')
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.839 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:29 compute-0 nova_compute[247704]: 2026-01-31 08:29:29.839 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:30 compute-0 nova_compute[247704]: 2026-01-31 08:29:30.099 247708 DEBUG oslo_concurrency.processutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Jan 31 08:29:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:29:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308574213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:30 compute-0 nova_compute[247704]: 2026-01-31 08:29:30.525 247708 DEBUG oslo_concurrency.processutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:30 compute-0 nova_compute[247704]: 2026-01-31 08:29:30.532 247708 DEBUG nova.compute.provider_tree [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:29:30 compute-0 nova_compute[247704]: 2026-01-31 08:29:30.734 247708 DEBUG nova.scheduler.client.report [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:29:30 compute-0 nova_compute[247704]: 2026-01-31 08:29:30.737 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:30 compute-0 ceph-mon[74496]: osdmap e366: 3 total, 3 up, 3 in
Jan 31 08:29:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2308574213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:31.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:31 compute-0 ceph-mon[74496]: pgmap v2883: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Jan 31 08:29:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:32 compute-0 sudo[359885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:32.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:32 compute-0 sudo[359885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:32 compute-0 sudo[359885]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 698 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 852 KiB/s rd, 3.1 MiB/s wr, 126 op/s
Jan 31 08:29:32 compute-0 sudo[359910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:32 compute-0 sudo[359910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:32 compute-0 sudo[359910]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:32 compute-0 nova_compute[247704]: 2026-01-31 08:29:32.296 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 2.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:33 compute-0 nova_compute[247704]: 2026-01-31 08:29:33.359 247708 INFO nova.scheduler.client.report [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Deleted allocation for migration 25fea84c-439e-4b69-837e-757239c42f77
Jan 31 08:29:33 compute-0 nova_compute[247704]: 2026-01-31 08:29:33.488 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:33.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:33 compute-0 nova_compute[247704]: 2026-01-31 08:29:33.696 247708 DEBUG oslo_concurrency.lockutils [None req-5c9231bf-4e0b-4bb1-bd11-59fca2ae7f7d 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "10816ede-cf43-4736-aba7-48389f607d30" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 12.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:34.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:34 compute-0 ceph-mon[74496]: pgmap v2884: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 698 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 852 KiB/s rd, 3.1 MiB/s wr, 126 op/s
Jan 31 08:29:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 707 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 802 KiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 31 08:29:35 compute-0 ceph-mon[74496]: pgmap v2885: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 707 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 802 KiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 31 08:29:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:35.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013670057463242137 of space, bias 1.0, pg target 4.101017238972641 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004323011178578076 of space, bias 1.0, pg target 1.2796113088591103 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:29:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 31 08:29:35 compute-0 nova_compute[247704]: 2026-01-31 08:29:35.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:29:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:36.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:29:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 31 08:29:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/277042438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3857347424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:29:36 compute-0 podman[359937]: 2026-01-31 08:29:36.903170218 +0000 UTC m=+0.077232805 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:29:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Jan 31 08:29:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Jan 31 08:29:37 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Jan 31 08:29:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:37.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:37 compute-0 ceph-mon[74496]: pgmap v2886: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.8 MiB/s wr, 160 op/s
Jan 31 08:29:37 compute-0 ceph-mon[74496]: osdmap e367: 3 total, 3 up, 3 in
Jan 31 08:29:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:38.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 190 op/s
Jan 31 08:29:38 compute-0 nova_compute[247704]: 2026-01-31 08:29:38.491 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:38 compute-0 nova_compute[247704]: 2026-01-31 08:29:38.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:39.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:40 compute-0 ceph-mon[74496]: pgmap v2888: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.1 MiB/s wr, 190 op/s
Jan 31 08:29:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:40.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 544 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Jan 31 08:29:40 compute-0 nova_compute[247704]: 2026-01-31 08:29:40.808 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:41 compute-0 nova_compute[247704]: 2026-01-31 08:29:41.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:41 compute-0 nova_compute[247704]: 2026-01-31 08:29:41.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:41 compute-0 nova_compute[247704]: 2026-01-31 08:29:41.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:29:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:41.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:29:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:42.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:29:42 compute-0 ceph-mon[74496]: pgmap v2889: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 544 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Jan 31 08:29:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 1.7 MiB/s wr, 71 op/s
Jan 31 08:29:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:43 compute-0 nova_compute[247704]: 2026-01-31 08:29:43.492 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:43.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:44 compute-0 ceph-mon[74496]: pgmap v2890: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 1.7 MiB/s wr, 71 op/s
Jan 31 08:29:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:44.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 572 KiB/s rd, 874 KiB/s wr, 64 op/s
Jan 31 08:29:45 compute-0 nova_compute[247704]: 2026-01-31 08:29:45.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:45.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:45 compute-0 nova_compute[247704]: 2026-01-31 08:29:45.815 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:46 compute-0 ceph-mon[74496]: pgmap v2891: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 572 KiB/s rd, 874 KiB/s wr, 64 op/s
Jan 31 08:29:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:46.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 41 KiB/s wr, 90 op/s
Jan 31 08:29:46 compute-0 nova_compute[247704]: 2026-01-31 08:29:46.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:46 compute-0 nova_compute[247704]: 2026-01-31 08:29:46.770 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:46 compute-0 nova_compute[247704]: 2026-01-31 08:29:46.770 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:46 compute-0 nova_compute[247704]: 2026-01-31 08:29:46.771 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:46 compute-0 nova_compute[247704]: 2026-01-31 08:29:46.771 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:29:46 compute-0 nova_compute[247704]: 2026-01-31 08:29:46.771 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:29:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416992503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.261 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:47.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.725 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.726 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.731 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.731 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.922 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.925 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3859MB free_disk=20.693622589111328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.925 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:47 compute-0 nova_compute[247704]: 2026-01-31 08:29:47.926 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:48.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:48 compute-0 ceph-mon[74496]: pgmap v2892: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 41 KiB/s wr, 90 op/s
Jan 31 08:29:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2416992503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 38 KiB/s wr, 83 op/s
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.371 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.371 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance bbc5f09e-71d7-4009-bdf6-06e95b32574c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.372 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.372 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.462 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.495 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:29:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836140822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.903 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.909 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:29:48 compute-0 podman[360006]: 2026-01-31 08:29:48.918726611 +0000 UTC m=+0.090677465 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:29:48 compute-0 nova_compute[247704]: 2026-01-31 08:29:48.957 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:29:49 compute-0 nova_compute[247704]: 2026-01-31 08:29:49.195 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:29:49 compute-0 nova_compute[247704]: 2026-01-31 08:29:49.196 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:49 compute-0 ceph-mon[74496]: pgmap v2893: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 38 KiB/s wr, 83 op/s
Jan 31 08:29:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3836140822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3395964674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:49.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:50.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 75 op/s
Jan 31 08:29:50 compute-0 nova_compute[247704]: 2026-01-31 08:29:50.819 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:51 compute-0 nova_compute[247704]: 2026-01-31 08:29:51.086 247708 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Creating tmpfile /var/lib/nova/instances/tmpda3il2h9 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Jan 31 08:29:51 compute-0 nova_compute[247704]: 2026-01-31 08:29:51.220 247708 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpda3il2h9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Jan 31 08:29:51 compute-0 ceph-mon[74496]: pgmap v2894: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 75 op/s
Jan 31 08:29:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2309520996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:51.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:29:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:52.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:29:52 compute-0 nova_compute[247704]: 2026-01-31 08:29:52.195 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:52 compute-0 nova_compute[247704]: 2026-01-31 08:29:52.196 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:29:52 compute-0 nova_compute[247704]: 2026-01-31 08:29:52.196 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:29:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 70 op/s
Jan 31 08:29:52 compute-0 sudo[360037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:52 compute-0 sudo[360037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:52 compute-0 sudo[360037]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:52 compute-0 sudo[360062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:29:52 compute-0 sudo[360062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:52 compute-0 sudo[360062]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:52 compute-0 nova_compute[247704]: 2026-01-31 08:29:52.770 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:52 compute-0 nova_compute[247704]: 2026-01-31 08:29:52.771 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:52 compute-0 nova_compute[247704]: 2026-01-31 08:29:52.771 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:29:52 compute-0 nova_compute[247704]: 2026-01-31 08:29:52.772 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:29:53 compute-0 ceph-mon[74496]: pgmap v2895: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 70 op/s
Jan 31 08:29:53 compute-0 nova_compute[247704]: 2026-01-31 08:29:53.498 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:29:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:53.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:29:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:54.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 64 op/s
Jan 31 08:29:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1075930529' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:29:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1075930529' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:29:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3050272016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:55 compute-0 nova_compute[247704]: 2026-01-31 08:29:55.074 247708 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpda3il2h9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b24a8d0-ad95-4460-acf1-0acb658330aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Jan 31 08:29:55 compute-0 nova_compute[247704]: 2026-01-31 08:29:55.144 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:29:55 compute-0 nova_compute[247704]: 2026-01-31 08:29:55.145 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquired lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:29:55 compute-0 nova_compute[247704]: 2026-01-31 08:29:55.145 247708 DEBUG nova.network.neutron [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:29:55 compute-0 ceph-mon[74496]: pgmap v2896: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 64 op/s
Jan 31 08:29:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:55.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:55 compute-0 nova_compute[247704]: 2026-01-31 08:29:55.821 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:56.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 746 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.7 MiB/s wr, 84 op/s
Jan 31 08:29:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1243262519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:29:56 compute-0 sshd-session[360089]: Invalid user ubuntu from 45.148.10.240 port 38804
Jan 31 08:29:57 compute-0 sshd-session[360089]: Connection closed by invalid user ubuntu 45.148.10.240 port 38804 [preauth]
Jan 31 08:29:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:57 compute-0 ceph-mon[74496]: pgmap v2897: 305 pgs: 305 active+clean; 746 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.7 MiB/s wr, 84 op/s
Jan 31 08:29:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:57.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:57 compute-0 nova_compute[247704]: 2026-01-31 08:29:57.959 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updating instance_info_cache with network_info: [{"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:57 compute-0 nova_compute[247704]: 2026-01-31 08:29:57.994 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:57 compute-0 nova_compute[247704]: 2026-01-31 08:29:57.995 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:29:57 compute-0 nova_compute[247704]: 2026-01-31 08:29:57.995 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:57 compute-0 nova_compute[247704]: 2026-01-31 08:29:57.996 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:29:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:58.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:29:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 751 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 31 08:29:58 compute-0 nova_compute[247704]: 2026-01-31 08:29:58.500 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.475 247708 DEBUG nova.network.neutron [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating instance_info_cache with network_info: [{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:29:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:29:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:29:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:59.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.661 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Releasing lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.664 247708 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpda3il2h9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b24a8d0-ad95-4460-acf1-0acb658330aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.664 247708 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Creating instance directory: /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.665 247708 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Ensure instance console log exists: /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.665 247708 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.667 247708 DEBUG nova.virt.libvirt.vif [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2146916992',display_name='tempest-TestNetworkAdvancedServerOps-server-2146916992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2146916992',id=169,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK7T1FRM2hnvWHsS2vC2H2YEYiw3bA7e7xk8qp1+JMKzlq7jPy/PzZqxW2VC1KQOHQDuHMYvJPGgKGcZZL8ySdxPXOSaobgGLmos6mi9nlU6Dv+erNyHG2EMArHdJN1ryg==',key_name='tempest-TestNetworkAdvancedServerOps-694728097',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:29:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ah4zs948',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:29:19Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=2b24a8d0-ad95-4460-acf1-0acb658330aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.667 247708 DEBUG nova.network.os_vif_util [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converting VIF {"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.668 247708 DEBUG nova.network.os_vif_util [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.668 247708 DEBUG os_vif [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.669 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.670 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.670 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.674 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.674 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap324249e6-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.675 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap324249e6-62, col_values=(('external_ids', {'iface-id': '324249e6-6299-4722-a570-3439880bde1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:84:a5', 'vm-uuid': '2b24a8d0-ad95-4460-acf1-0acb658330aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.677 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:59 compute-0 NetworkManager[49108]: <info>  [1769848199.6795] manager: (tap324249e6-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.680 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.686 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.687 247708 INFO os_vif [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62')
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.687 247708 DEBUG nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 31 08:29:59 compute-0 nova_compute[247704]: 2026-01-31 08:29:59.688 247708 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpda3il2h9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b24a8d0-ad95-4460-acf1-0acb658330aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 31 08:29:59 compute-0 ceph-mon[74496]: pgmap v2898: 305 pgs: 305 active+clean; 751 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 31 08:30:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 08:30:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:00.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 31 08:30:00 compute-0 nova_compute[247704]: 2026-01-31 08:30:00.356 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 08:30:00 compute-0 nova_compute[247704]: 2026-01-31 08:30:00.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:30:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:30:01 compute-0 ceph-mon[74496]: pgmap v2899: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 31 08:30:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:01.909 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:01.911 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:30:01 compute-0 nova_compute[247704]: 2026-01-31 08:30:01.910 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:02.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 08:30:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:03 compute-0 nova_compute[247704]: 2026-01-31 08:30:03.364 247708 DEBUG nova.network.neutron [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Port 324249e6-6299-4722-a570-3439880bde1f updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 31 08:30:03 compute-0 nova_compute[247704]: 2026-01-31 08:30:03.366 247708 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpda3il2h9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b24a8d0-ad95-4460-acf1-0acb658330aa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 31 08:30:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:03.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:03 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 08:30:03 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 08:30:03 compute-0 ceph-mon[74496]: pgmap v2900: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 08:30:03 compute-0 kernel: tap324249e6-62: entered promiscuous mode
Jan 31 08:30:03 compute-0 NetworkManager[49108]: <info>  [1769848203.9116] manager: (tap324249e6-62): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Jan 31 08:30:03 compute-0 ovn_controller[149457]: 2026-01-31T08:30:03Z|00731|binding|INFO|Claiming lport 324249e6-6299-4722-a570-3439880bde1f for this additional chassis.
Jan 31 08:30:03 compute-0 ovn_controller[149457]: 2026-01-31T08:30:03Z|00732|binding|INFO|324249e6-6299-4722-a570-3439880bde1f: Claiming fa:16:3e:73:84:a5 10.100.0.7
Jan 31 08:30:03 compute-0 nova_compute[247704]: 2026-01-31 08:30:03.911 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:03 compute-0 ovn_controller[149457]: 2026-01-31T08:30:03Z|00733|binding|INFO|Setting lport 324249e6-6299-4722-a570-3439880bde1f ovn-installed in OVS
Jan 31 08:30:03 compute-0 nova_compute[247704]: 2026-01-31 08:30:03.920 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:03 compute-0 nova_compute[247704]: 2026-01-31 08:30:03.922 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:03 compute-0 systemd-machined[214448]: New machine qemu-79-instance-000000a9.
Jan 31 08:30:03 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-000000a9.
Jan 31 08:30:03 compute-0 systemd-udevd[360129]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:30:03 compute-0 NetworkManager[49108]: <info>  [1769848203.9857] device (tap324249e6-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:30:03 compute-0 NetworkManager[49108]: <info>  [1769848203.9866] device (tap324249e6-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:30:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:04.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 08:30:04 compute-0 nova_compute[247704]: 2026-01-31 08:30:04.677 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:05 compute-0 ceph-osd[84816]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.077 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848205.0769508, 2b24a8d0-ad95-4460-acf1-0acb658330aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.078 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] VM Started (Lifecycle Event)
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.130 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.455 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848205.4549098, 2b24a8d0-ad95-4460-acf1-0acb658330aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.455 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] VM Resumed (Lifecycle Event)
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.606 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:05.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.613 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.746 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 31 08:30:05 compute-0 nova_compute[247704]: 2026-01-31 08:30:05.826 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:05 compute-0 ceph-mon[74496]: pgmap v2901: 305 pgs: 305 active+clean; 759 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 08:30:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:06.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 779 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 105 op/s
Jan 31 08:30:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:07.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:07 compute-0 podman[360182]: 2026-01-31 08:30:07.923688157 +0000 UTC m=+0.084899482 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:30:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:08.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.3 MiB/s wr, 89 op/s
Jan 31 08:30:08 compute-0 ceph-mon[74496]: pgmap v2902: 305 pgs: 305 active+clean; 779 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 105 op/s
Jan 31 08:30:09 compute-0 ovn_controller[149457]: 2026-01-31T08:30:09Z|00734|binding|INFO|Claiming lport 324249e6-6299-4722-a570-3439880bde1f for this chassis.
Jan 31 08:30:09 compute-0 ovn_controller[149457]: 2026-01-31T08:30:09Z|00735|binding|INFO|324249e6-6299-4722-a570-3439880bde1f: Claiming fa:16:3e:73:84:a5 10.100.0.7
Jan 31 08:30:09 compute-0 ovn_controller[149457]: 2026-01-31T08:30:09Z|00736|binding|INFO|Setting lport 324249e6-6299-4722-a570-3439880bde1f up in Southbound
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.552 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:84:a5 10.100.0.7'], port_security=['fa:16:3e:73:84:a5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b24a8d0-ad95-4460-acf1-0acb658330aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c8cd691-2e19-4c04-b061-2e5304161623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '11', 'neutron:security_group_ids': '0e11440e-2192-492f-a554-7817b0fd324e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bee9a73-d2c9-492b-9a5b-302cf0b7a5b8, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=324249e6-6299-4722-a570-3439880bde1f) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.553 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 324249e6-6299-4722-a570-3439880bde1f in datapath 5c8cd691-2e19-4c04-b061-2e5304161623 bound to our chassis
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.555 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c8cd691-2e19-4c04-b061-2e5304161623
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.567 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0ea053-628d-4ba6-a292-6dfe31687e4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.568 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c8cd691-21 in ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.570 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c8cd691-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.571 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c171cdbc-b5bd-4718-9907-db03a84db7be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.572 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e47016-bdfc-4ee6-86f7-cd502d4a59df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.583 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[afd05724-d74d-4e16-a4a9-d428383017a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.592 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f59b12-ab53-49dc-ae12-53ab1f8f0097]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.621 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8b79ca47-78bd-4c94-9168-eb4ad09c274f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 NetworkManager[49108]: <info>  [1769848209.6288] manager: (tap5c8cd691-20): new Veth device (/org/freedesktop/NetworkManager/Devices/321)
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.627 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b06f0223-8b15-4eb1-8e22-6ac98e4cb3d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 systemd-udevd[360209]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.664 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[35fb69a2-79fd-4beb-aba2-c1eaab0d6351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.669 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[75007d61-6055-4698-ad3b-b048e311f7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 nova_compute[247704]: 2026-01-31 08:30:09.682 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:09 compute-0 NetworkManager[49108]: <info>  [1769848209.6914] device (tap5c8cd691-20): carrier: link connected
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.697 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d50335fd-05de-4d28-9d77-076280e27149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.715 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f79479ff-96b2-41e6-9017-ea9bdae5a8b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c8cd691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:0e:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845230, 'reachable_time': 31572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360229, 'error': None, 'target': 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.732 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[96c64855-97b5-4c49-a704-6ab289243d6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:ed3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845230, 'tstamp': 845230}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360230, 'error': None, 'target': 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.753 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0583e49f-fb52-4f37-99c2-2e1713c0f226]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c8cd691-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:0e:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845230, 'reachable_time': 31572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360231, 'error': None, 'target': 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.788 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9d6158-43c3-4a14-b8cd-634dd1f04bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.836 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[88440d9b-8205-4b62-83cd-0e15645c2d70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.838 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c8cd691-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.838 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.839 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c8cd691-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:09 compute-0 nova_compute[247704]: 2026-01-31 08:30:09.840 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:09 compute-0 NetworkManager[49108]: <info>  [1769848209.8417] manager: (tap5c8cd691-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Jan 31 08:30:09 compute-0 kernel: tap5c8cd691-20: entered promiscuous mode
Jan 31 08:30:09 compute-0 nova_compute[247704]: 2026-01-31 08:30:09.843 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.845 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c8cd691-20, col_values=(('external_ids', {'iface-id': '0e00cbca-3de8-4f0d-92ce-93192811a08b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:09 compute-0 nova_compute[247704]: 2026-01-31 08:30:09.846 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:09 compute-0 ovn_controller[149457]: 2026-01-31T08:30:09Z|00737|binding|INFO|Releasing lport 0e00cbca-3de8-4f0d-92ce-93192811a08b from this chassis (sb_readonly=0)
Jan 31 08:30:09 compute-0 nova_compute[247704]: 2026-01-31 08:30:09.846 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.847 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c8cd691-2e19-4c04-b061-2e5304161623.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c8cd691-2e19-4c04-b061-2e5304161623.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.848 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a9612ae8-0607-40d3-9a7f-8e8e963cde1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.849 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-5c8cd691-2e19-4c04-b061-2e5304161623
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/5c8cd691-2e19-4c04-b061-2e5304161623.pid.haproxy
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 5c8cd691-2e19-4c04-b061-2e5304161623
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:30:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:09.850 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'env', 'PROCESS_TAG=haproxy-5c8cd691-2e19-4c04-b061-2e5304161623', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c8cd691-2e19-4c04-b061-2e5304161623.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:30:09 compute-0 nova_compute[247704]: 2026-01-31 08:30:09.853 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:10 compute-0 nova_compute[247704]: 2026-01-31 08:30:10.028 247708 INFO nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Post operation of migration started
Jan 31 08:30:10 compute-0 ceph-mon[74496]: pgmap v2903: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.3 MiB/s wr, 89 op/s
Jan 31 08:30:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:10.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 485 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 31 08:30:10 compute-0 podman[360264]: 2026-01-31 08:30:10.164257609 +0000 UTC m=+0.023201580 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:30:10 compute-0 nova_compute[247704]: 2026-01-31 08:30:10.427 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:30:10 compute-0 nova_compute[247704]: 2026-01-31 08:30:10.427 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquired lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:30:10 compute-0 nova_compute[247704]: 2026-01-31 08:30:10.428 247708 DEBUG nova.network.neutron [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:30:10 compute-0 podman[360264]: 2026-01-31 08:30:10.669436705 +0000 UTC m=+0.528380626 container create f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:30:10 compute-0 nova_compute[247704]: 2026-01-31 08:30:10.827 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:10 compute-0 systemd[1]: Started libpod-conmon-f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d.scope.
Jan 31 08:30:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2967b5dc5af895ce0bb96475a4fdfe5ddefe73fe0ff4a32c57b5b9f1ca236726/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:10 compute-0 podman[360264]: 2026-01-31 08:30:10.944610463 +0000 UTC m=+0.803554404 container init f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:30:10 compute-0 podman[360264]: 2026-01-31 08:30:10.950460467 +0000 UTC m=+0.809404388 container start f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:30:10 compute-0 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[360280]: [NOTICE]   (360284) : New worker (360286) forked
Jan 31 08:30:10 compute-0 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[360280]: [NOTICE]   (360284) : Loading success.
Jan 31 08:30:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:30:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1044056741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:11.200 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:11.201 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:11.202 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:11 compute-0 ceph-mon[74496]: pgmap v2904: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 485 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 31 08:30:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1044056741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:11.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:11 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Jan 31 08:30:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:11.772203) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:30:11 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Jan 31 08:30:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848211772274, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2210, "num_deletes": 254, "total_data_size": 3795547, "memory_usage": 3849824, "flush_reason": "Manual Compaction"}
Jan 31 08:30:11 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Jan 31 08:30:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:11.913 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212064329, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3723977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61498, "largest_seqno": 63707, "table_properties": {"data_size": 3714058, "index_size": 6220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21440, "raw_average_key_size": 20, "raw_value_size": 3693969, "raw_average_value_size": 3596, "num_data_blocks": 270, "num_entries": 1027, "num_filter_entries": 1027, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847992, "oldest_key_time": 1769847992, "file_creation_time": 1769848211, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 292203 microseconds, and 9581 cpu microseconds.
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.090 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "a845825c-4dfb-41ff-b896-557e01cb3e3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.090 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.119 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.064402) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3723977 bytes OK
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.064434) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.184312) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.184374) EVENT_LOG_v1 {"time_micros": 1769848212184363, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.184407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3786495, prev total WAL file size 3786495, number of live WAL files 2.
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.185561) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3636KB)], [140(10MB)]
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212185647, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 14375749, "oldest_snapshot_seqno": -1}
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.216 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.216 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.225 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.225 247708 INFO nova.compute.claims [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:30:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:12.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 31 08:30:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.486 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:12 compute-0 sudo[360296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:12 compute-0 sudo[360296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:12 compute-0 sudo[360296]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 9184 keys, 12495306 bytes, temperature: kUnknown
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212560498, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 12495306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12435128, "index_size": 36129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241491, "raw_average_key_size": 26, "raw_value_size": 12272836, "raw_average_value_size": 1336, "num_data_blocks": 1384, "num_entries": 9184, "num_filter_entries": 9184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769848212, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:30:12 compute-0 sudo[360322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:12 compute-0 sudo[360322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:12 compute-0 sudo[360322]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.561070) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12495306 bytes
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.599287) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.3 rd, 33.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.2) write-amplify(3.4) OK, records in: 9712, records dropped: 528 output_compression: NoCompression
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.599347) EVENT_LOG_v1 {"time_micros": 1769848212599324, "job": 86, "event": "compaction_finished", "compaction_time_micros": 375207, "compaction_time_cpu_micros": 24863, "output_level": 6, "num_output_files": 1, "total_output_size": 12495306, "num_input_records": 9712, "num_output_records": 9184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212600156, "job": 86, "event": "table_file_deletion", "file_number": 142}
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848212601338, "job": 86, "event": "table_file_deletion", "file_number": 140}
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.185387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.601493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.601500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.601503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.601505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:12 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:30:12.601507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:30:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:30:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2326995779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.966 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.973 247708 DEBUG nova.compute.provider_tree [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.990 247708 DEBUG nova.network.neutron [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating instance_info_cache with network_info: [{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:12 compute-0 nova_compute[247704]: 2026-01-31 08:30:12.997 247708 DEBUG nova.scheduler.client.report [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.049 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Releasing lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.067 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.068 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.080 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.081 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.081 247708 DEBUG oslo_concurrency.lockutils [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.087 247708 INFO nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Jan 31 08:30:13 compute-0 virtqemud[247621]: Domain id=79 name='instance-000000a9' uuid=2b24a8d0-ad95-4460-acf1-0acb658330aa is tainted: custom-monitor
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.165 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.166 247708 DEBUG nova.network.neutron [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.217 247708 INFO nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.241 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.344 247708 INFO nova.virt.block_device [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Booting with volume cb325f0f-dc0c-4601-836f-df0aeb8cb723 at /dev/vda
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.419 247708 DEBUG nova.policy [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '85dfa8546d9942648bb4197c8b1947e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbdbdee526499e90da7e971ede68d3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.613 247708 DEBUG os_brick.utils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.616 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.634 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.634 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[67bfc151-a7f8-4934-af3b-bbe25fe36824]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.635 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.644 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.644 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[cded0752-d1df-4f6f-ad97-028a368274f5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.646 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.658 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.659 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[98975fa8-3183-48a6-96ef-1cefd05af939]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.660 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[86227f32-2562-4af1-a894-d3ff8c819d4c]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.662 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.692 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.696 247708 DEBUG os_brick.initiator.connectors.lightos [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.697 247708 DEBUG os_brick.initiator.connectors.lightos [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.697 247708 DEBUG os_brick.initiator.connectors.lightos [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.698 247708 DEBUG os_brick.utils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] <== get_connector_properties: return (83ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:30:13 compute-0 nova_compute[247704]: 2026-01-31 08:30:13.699 247708 DEBUG nova.virt.block_device [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Updating existing volume attachment record: 10af0b0e-5ef2-4c55-8239-29de6540845c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:30:13 compute-0 ceph-mon[74496]: pgmap v2905: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 31 08:30:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2326995779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:14 compute-0 nova_compute[247704]: 2026-01-31 08:30:14.097 247708 INFO nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Jan 31 08:30:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:14.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 31 08:30:14 compute-0 nova_compute[247704]: 2026-01-31 08:30:14.703 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4019013427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.105 247708 INFO nova.virt.libvirt.driver [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.108 247708 DEBUG nova.network.neutron [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Successfully created port: 5c2dac79-c85e-416c-8cba-93d89b11d6a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.115 247708 DEBUG nova.compute.manager [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.173 247708 DEBUG nova.objects.instance [None req-4d9e0df7-a0fd-4166-9f2c-e81fee790c8e af307e30a6d7498fba4a004660bea0ee cd06f893291343c99e6ddf4e33b223da - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 08:30:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:15.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.831 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.953 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.955 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.955 247708 INFO nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Creating image(s)
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.956 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.956 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Ensure instance console log exists: /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.956 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.956 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:15 compute-0 nova_compute[247704]: 2026-01-31 08:30:15.957 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:16 compute-0 ceph-mon[74496]: pgmap v2906: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 31 08:30:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:30:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:16.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:30:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 31 08:30:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:17.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:18 compute-0 ceph-mon[74496]: pgmap v2907: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 31 08:30:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/647630110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:18.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 18 op/s
Jan 31 08:30:18 compute-0 nova_compute[247704]: 2026-01-31 08:30:18.417 247708 DEBUG nova.network.neutron [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Successfully updated port: 5c2dac79-c85e-416c-8cba-93d89b11d6a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:30:18 compute-0 sudo[360379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:18 compute-0 sudo[360379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:18 compute-0 sudo[360379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:18 compute-0 sudo[360404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:18 compute-0 sudo[360404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:18 compute-0 sudo[360404]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:18 compute-0 nova_compute[247704]: 2026-01-31 08:30:18.787 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "refresh_cache-a845825c-4dfb-41ff-b896-557e01cb3e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:30:18 compute-0 nova_compute[247704]: 2026-01-31 08:30:18.788 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquired lock "refresh_cache-a845825c-4dfb-41ff-b896-557e01cb3e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:30:18 compute-0 nova_compute[247704]: 2026-01-31 08:30:18.788 247708 DEBUG nova.network.neutron [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:30:18 compute-0 nova_compute[247704]: 2026-01-31 08:30:18.797 247708 DEBUG nova.compute.manager [req-1024940a-301e-4005-b21a-55027f56524d req-089d78fd-1621-46d0-87ea-f03b26b090cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received event network-changed-5c2dac79-c85e-416c-8cba-93d89b11d6a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:18 compute-0 nova_compute[247704]: 2026-01-31 08:30:18.798 247708 DEBUG nova.compute.manager [req-1024940a-301e-4005-b21a-55027f56524d req-089d78fd-1621-46d0-87ea-f03b26b090cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Refreshing instance network info cache due to event network-changed-5c2dac79-c85e-416c-8cba-93d89b11d6a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:30:18 compute-0 nova_compute[247704]: 2026-01-31 08:30:18.798 247708 DEBUG oslo_concurrency.lockutils [req-1024940a-301e-4005-b21a-55027f56524d req-089d78fd-1621-46d0-87ea-f03b26b090cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-a845825c-4dfb-41ff-b896-557e01cb3e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:30:18 compute-0 sudo[360429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:18 compute-0 sudo[360429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:18 compute-0 sudo[360429]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:18 compute-0 sudo[360454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:30:18 compute-0 sudo[360454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:19 compute-0 sudo[360454]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:19 compute-0 podman[360493]: 2026-01-31 08:30:19.147819703 +0000 UTC m=+0.084912433 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:30:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:30:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:30:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:19 compute-0 sudo[360523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:19 compute-0 sudo[360523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:19 compute-0 sudo[360523]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:19 compute-0 nova_compute[247704]: 2026-01-31 08:30:19.368 247708 DEBUG nova.network.neutron [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:30:19 compute-0 sudo[360548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:19 compute-0 sudo[360548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:19 compute-0 sudo[360548]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:19 compute-0 sudo[360573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:19 compute-0 sudo[360573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:19 compute-0 sudo[360573]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:19 compute-0 sudo[360598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:30:19 compute-0 sudo[360598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:19.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:19 compute-0 nova_compute[247704]: 2026-01-31 08:30:19.706 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:19 compute-0 sudo[360598]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:30:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:30:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:30:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:30:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:30:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0e17379f-babe-4611-8da9-6adde81d5bfe does not exist
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c11b2f35-5841-42bc-a02a-1f93737ae2ae does not exist
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 33f711a9-3c80-4eb8-90de-9a23d411fda4 does not exist
Jan 31 08:30:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:30:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:30:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:30:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:30:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:30:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:30:20 compute-0 sudo[360655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:20 compute-0 sudo[360655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:20 compute-0 sudo[360655]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:20 compute-0 sudo[360680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:20 compute-0 sudo[360680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:20 compute-0 sudo[360680]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:20 compute-0 sudo[360705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:20 compute-0 sudo[360705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:20 compute-0 sudo[360705]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:30:20
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'vms', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'volumes']
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:30:20 compute-0 ceph-mon[74496]: pgmap v2908: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 18 op/s
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4186738958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:30:20 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:30:20 compute-0 sudo[360730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:30:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:20.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:20 compute-0 sudo[360730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Jan 31 08:30:20 compute-0 podman[360797]: 2026-01-31 08:30:20.569562935 +0000 UTC m=+0.057604593 container create 73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:20 compute-0 systemd[1]: Started libpod-conmon-73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b.scope.
Jan 31 08:30:20 compute-0 podman[360797]: 2026-01-31 08:30:20.537374016 +0000 UTC m=+0.025415714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:30:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:20 compute-0 podman[360797]: 2026-01-31 08:30:20.668194424 +0000 UTC m=+0.156236102 container init 73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:20 compute-0 podman[360797]: 2026-01-31 08:30:20.675611236 +0000 UTC m=+0.163652894 container start 73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brown, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:30:20 compute-0 affectionate_brown[360814]: 167 167
Jan 31 08:30:20 compute-0 systemd[1]: libpod-73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b.scope: Deactivated successfully.
Jan 31 08:30:20 compute-0 podman[360797]: 2026-01-31 08:30:20.681985551 +0000 UTC m=+0.170027219 container attach 73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:20 compute-0 podman[360797]: 2026-01-31 08:30:20.68315226 +0000 UTC m=+0.171193918 container died 73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brown, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6f41b6bebf05141c32b0372cfdb52ee809e6cd035703a5be2f9957dc352332d-merged.mount: Deactivated successfully.
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:30:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:30:20 compute-0 podman[360797]: 2026-01-31 08:30:20.757580435 +0000 UTC m=+0.245622093 container remove 73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:30:20 compute-0 systemd[1]: libpod-conmon-73517c24ccacce1e17784e0ad703cbfbb40407f72d8dfe573d0ec6de6aa8a80b.scope: Deactivated successfully.
Jan 31 08:30:20 compute-0 nova_compute[247704]: 2026-01-31 08:30:20.832 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:20 compute-0 podman[360838]: 2026-01-31 08:30:20.911159441 +0000 UTC m=+0.050838507 container create a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_torvalds, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:20 compute-0 systemd[1]: Started libpod-conmon-a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca.scope.
Jan 31 08:30:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257560de59f9d39a04285e469424a81e4d7d138f2972c5f09a50c681c7c5b755/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:20 compute-0 podman[360838]: 2026-01-31 08:30:20.884627151 +0000 UTC m=+0.024306227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257560de59f9d39a04285e469424a81e4d7d138f2972c5f09a50c681c7c5b755/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257560de59f9d39a04285e469424a81e4d7d138f2972c5f09a50c681c7c5b755/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257560de59f9d39a04285e469424a81e4d7d138f2972c5f09a50c681c7c5b755/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/257560de59f9d39a04285e469424a81e4d7d138f2972c5f09a50c681c7c5b755/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:20 compute-0 podman[360838]: 2026-01-31 08:30:20.999294283 +0000 UTC m=+0.138973389 container init a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_torvalds, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:30:21 compute-0 podman[360838]: 2026-01-31 08:30:21.005626618 +0000 UTC m=+0.145305674 container start a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 08:30:21 compute-0 podman[360838]: 2026-01-31 08:30:21.0101844 +0000 UTC m=+0.149863466 container attach a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:30:21 compute-0 ceph-mon[74496]: pgmap v2909: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.223 247708 DEBUG nova.network.neutron [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Updating instance_info_cache with network_info: [{"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.401 247708 INFO nova.compute.manager [None req-9488d74f-d1b2-4d49-84dc-c3d13e0fae67 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Get console output
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.408 315733 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.448 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Releasing lock "refresh_cache-a845825c-4dfb-41ff-b896-557e01cb3e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.449 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Instance network_info: |[{"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.450 247708 DEBUG oslo_concurrency.lockutils [req-1024940a-301e-4005-b21a-55027f56524d req-089d78fd-1621-46d0-87ea-f03b26b090cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-a845825c-4dfb-41ff-b896-557e01cb3e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.450 247708 DEBUG nova.network.neutron [req-1024940a-301e-4005-b21a-55027f56524d req-089d78fd-1621-46d0-87ea-f03b26b090cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Refreshing network info cache for port 5c2dac79-c85e-416c-8cba-93d89b11d6a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.454 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Start _get_guest_xml network_info=[{"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '10af0b0e-5ef2-4c55-8239-29de6540845c', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cb325f0f-dc0c-4601-836f-df0aeb8cb723', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cb325f0f-dc0c-4601-836f-df0aeb8cb723', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a845825c-4dfb-41ff-b896-557e01cb3e3b', 'attached_at': '', 'detached_at': '', 'volume_id': 'cb325f0f-dc0c-4601-836f-df0aeb8cb723', 'serial': 'cb325f0f-dc0c-4601-836f-df0aeb8cb723', 'multiattach': True}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.459 247708 WARNING nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.465 247708 DEBUG nova.virt.libvirt.host [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.466 247708 DEBUG nova.virt.libvirt.host [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.473 247708 DEBUG nova.virt.libvirt.host [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.474 247708 DEBUG nova.virt.libvirt.host [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.476 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.476 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.477 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.477 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.477 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.478 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.478 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.479 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.480 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.480 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.480 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.481 247708 DEBUG nova.virt.hardware [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.517 247708 DEBUG nova.storage.rbd_utils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image a845825c-4dfb-41ff-b896-557e01cb3e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:21 compute-0 nova_compute[247704]: 2026-01-31 08:30:21.523 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:21.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:21 compute-0 stoic_torvalds[360854]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:30:21 compute-0 stoic_torvalds[360854]: --> relative data size: 1.0
Jan 31 08:30:21 compute-0 stoic_torvalds[360854]: --> All data devices are unavailable
Jan 31 08:30:21 compute-0 systemd[1]: libpod-a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca.scope: Deactivated successfully.
Jan 31 08:30:21 compute-0 podman[360838]: 2026-01-31 08:30:21.823524464 +0000 UTC m=+0.963203540 container died a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-257560de59f9d39a04285e469424a81e4d7d138f2972c5f09a50c681c7c5b755-merged.mount: Deactivated successfully.
Jan 31 08:30:21 compute-0 podman[360838]: 2026-01-31 08:30:21.898492452 +0000 UTC m=+1.038171508 container remove a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_torvalds, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:30:21 compute-0 systemd[1]: libpod-conmon-a27cfbfb95826fcb8fcbe4f69d73887f1c6ca900b2c77413decf5af05fdf79ca.scope: Deactivated successfully.
Jan 31 08:30:21 compute-0 sudo[360730]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:30:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662883223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:21 compute-0 sudo[360924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:22 compute-0 sudo[360924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.000 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:22 compute-0 sudo[360924]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:22 compute-0 sudo[360951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:22 compute-0 sudo[360951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:22 compute-0 sudo[360951]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:22 compute-0 sudo[360976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:22 compute-0 sudo[360976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:22 compute-0 sudo[360976]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:22 compute-0 sudo[361001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:30:22 compute-0 sudo[361001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1662883223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:22.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 781 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 21 op/s
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.297 247708 DEBUG nova.virt.libvirt.vif [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-2125524709',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-2125524709',id=171,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-a9f0gb0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:30:13Z,user_data=None,user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=a845825c-4dfb-41ff-b896-557e01cb3e3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.299 247708 DEBUG nova.network.os_vif_util [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.300 247708 DEBUG nova.network.os_vif_util [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:16:1a,bridge_name='br-int',has_traffic_filtering=True,id=5c2dac79-c85e-416c-8cba-93d89b11d6a9,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c2dac79-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.302 247708 DEBUG nova.objects.instance [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a845825c-4dfb-41ff-b896-557e01cb3e3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.414 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <uuid>a845825c-4dfb-41ff-b896-557e01cb3e3b</uuid>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <name>instance-000000ab</name>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <nova:name>tempest-AttachVolumeMultiAttachTest-server-2125524709</nova:name>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:30:21</nova:creationTime>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <nova:user uuid="85dfa8546d9942648bb4197c8b1947e3">tempest-AttachVolumeMultiAttachTest-2017021026-project-member</nova:user>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <nova:project uuid="48bbdbdee526499e90da7e971ede68d3">tempest-AttachVolumeMultiAttachTest-2017021026</nova:project>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <nova:port uuid="5c2dac79-c85e-416c-8cba-93d89b11d6a9">
Jan 31 08:30:22 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <system>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <entry name="serial">a845825c-4dfb-41ff-b896-557e01cb3e3b</entry>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <entry name="uuid">a845825c-4dfb-41ff-b896-557e01cb3e3b</entry>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </system>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <os>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   </os>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <features>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   </features>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/a845825c-4dfb-41ff-b896-557e01cb3e3b_disk.config">
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       </source>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-cb325f0f-dc0c-4601-836f-df0aeb8cb723">
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       </source>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:30:22 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <serial>cb325f0f-dc0c-4601-836f-df0aeb8cb723</serial>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <shareable/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:86:16:1a"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <target dev="tap5c2dac79-c8"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b/console.log" append="off"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <video>
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </video>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:30:22 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:30:22 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:30:22 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:30:22 compute-0 nova_compute[247704]: </domain>
Jan 31 08:30:22 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.415 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Preparing to wait for external event network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.415 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.416 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.416 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.416 247708 DEBUG nova.virt.libvirt.vif [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-2125524709',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-2125524709',id=171,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-a9f0gb0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:30:13Z,user_data=None,user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=a845825c-4dfb-41ff-b896-557e01cb3e3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.417 247708 DEBUG nova.network.os_vif_util [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.417 247708 DEBUG nova.network.os_vif_util [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:16:1a,bridge_name='br-int',has_traffic_filtering=True,id=5c2dac79-c85e-416c-8cba-93d89b11d6a9,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c2dac79-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.418 247708 DEBUG os_vif [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:16:1a,bridge_name='br-int',has_traffic_filtering=True,id=5c2dac79-c85e-416c-8cba-93d89b11d6a9,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c2dac79-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.418 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.419 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.419 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.423 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.423 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c2dac79-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.424 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c2dac79-c8, col_values=(('external_ids', {'iface-id': '5c2dac79-c85e-416c-8cba-93d89b11d6a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:86:16:1a', 'vm-uuid': 'a845825c-4dfb-41ff-b896-557e01cb3e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.428 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:30:22 compute-0 NetworkManager[49108]: <info>  [1769848222.4287] manager: (tap5c2dac79-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.436 247708 INFO os_vif [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:16:1a,bridge_name='br-int',has_traffic_filtering=True,id=5c2dac79-c85e-416c-8cba-93d89b11d6a9,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c2dac79-c8')
Jan 31 08:30:22 compute-0 podman[361069]: 2026-01-31 08:30:22.501475078 +0000 UTC m=+0.041145580 container create 9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:30:22 compute-0 systemd[1]: Started libpod-conmon-9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5.scope.
Jan 31 08:30:22 compute-0 podman[361069]: 2026-01-31 08:30:22.483828475 +0000 UTC m=+0.023498997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:30:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:22 compute-0 podman[361069]: 2026-01-31 08:30:22.5953591 +0000 UTC m=+0.135029632 container init 9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 08:30:22 compute-0 podman[361069]: 2026-01-31 08:30:22.605917989 +0000 UTC m=+0.145588491 container start 9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:22 compute-0 affectionate_leakey[361085]: 167 167
Jan 31 08:30:22 compute-0 systemd[1]: libpod-9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5.scope: Deactivated successfully.
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.610 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.611 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:30:22 compute-0 conmon[361085]: conmon 9f983b6e570cedd104b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5.scope/container/memory.events
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.612 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] No VIF found with MAC fa:16:3e:86:16:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.612 247708 INFO nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Using config drive
Jan 31 08:30:22 compute-0 podman[361069]: 2026-01-31 08:30:22.615230767 +0000 UTC m=+0.154901269 container attach 9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:30:22 compute-0 podman[361069]: 2026-01-31 08:30:22.616004446 +0000 UTC m=+0.155674948 container died 9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-abe7c6e071bd6be7b1c92d42d9b13b8ed34cd5435047a0c2a02aa70ee0b87295-merged.mount: Deactivated successfully.
Jan 31 08:30:22 compute-0 nova_compute[247704]: 2026-01-31 08:30:22.651 247708 DEBUG nova.storage.rbd_utils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image a845825c-4dfb-41ff-b896-557e01cb3e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:22 compute-0 podman[361069]: 2026-01-31 08:30:22.669346044 +0000 UTC m=+0.209016566 container remove 9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_leakey, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:30:22 compute-0 systemd[1]: libpod-conmon-9f983b6e570cedd104b2630a0678eaf6f9d41cbf4999530eeb631d9475f5c6d5.scope: Deactivated successfully.
Jan 31 08:30:22 compute-0 podman[361126]: 2026-01-31 08:30:22.80903477 +0000 UTC m=+0.042052643 container create b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:30:22 compute-0 systemd[1]: Started libpod-conmon-b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6.scope.
Jan 31 08:30:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a916fc249dcdd19514620e5b3ee39e05ab59e24c45e9f50c04568eac5543e71b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a916fc249dcdd19514620e5b3ee39e05ab59e24c45e9f50c04568eac5543e71b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a916fc249dcdd19514620e5b3ee39e05ab59e24c45e9f50c04568eac5543e71b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a916fc249dcdd19514620e5b3ee39e05ab59e24c45e9f50c04568eac5543e71b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:22 compute-0 podman[361126]: 2026-01-31 08:30:22.882626244 +0000 UTC m=+0.115644147 container init b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:30:22 compute-0 podman[361126]: 2026-01-31 08:30:22.790729681 +0000 UTC m=+0.023747574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:30:22 compute-0 podman[361126]: 2026-01-31 08:30:22.888644182 +0000 UTC m=+0.121662065 container start b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:30:22 compute-0 podman[361126]: 2026-01-31 08:30:22.893222994 +0000 UTC m=+0.126240867 container attach b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:30:23 compute-0 ceph-mon[74496]: pgmap v2910: 305 pgs: 305 active+clean; 781 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 21 op/s
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.449 247708 INFO nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Creating config drive at /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b/disk.config
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.452 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5cz0janl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.588 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5cz0janl" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.621 247708 DEBUG nova.storage.rbd_utils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] rbd image a845825c-4dfb-41ff-b896-557e01cb3e3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.627 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b/disk.config a845825c-4dfb-41ff-b896-557e01cb3e3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:23 compute-0 laughing_darwin[361141]: {
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:     "0": [
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:         {
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "devices": [
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "/dev/loop3"
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             ],
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "lv_name": "ceph_lv0",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "lv_size": "7511998464",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "name": "ceph_lv0",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "tags": {
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.cluster_name": "ceph",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.crush_device_class": "",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.encrypted": "0",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.osd_id": "0",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.type": "block",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:                 "ceph.vdo": "0"
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             },
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "type": "block",
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:             "vg_name": "ceph_vg0"
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:         }
Jan 31 08:30:23 compute-0 laughing_darwin[361141]:     ]
Jan 31 08:30:23 compute-0 laughing_darwin[361141]: }
Jan 31 08:30:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:23.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:23 compute-0 systemd[1]: libpod-b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6.scope: Deactivated successfully.
Jan 31 08:30:23 compute-0 podman[361173]: 2026-01-31 08:30:23.703182734 +0000 UTC m=+0.026886310 container died b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a916fc249dcdd19514620e5b3ee39e05ab59e24c45e9f50c04568eac5543e71b-merged.mount: Deactivated successfully.
Jan 31 08:30:23 compute-0 podman[361173]: 2026-01-31 08:30:23.780457439 +0000 UTC m=+0.104161195 container remove b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_darwin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:30:23 compute-0 systemd[1]: libpod-conmon-b622e79c59fe12b9dafb94d6568844ea5faa1b24174636bce41d02e2ce75d8f6.scope: Deactivated successfully.
Jan 31 08:30:23 compute-0 sudo[361001]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.875 247708 DEBUG nova.network.neutron [req-1024940a-301e-4005-b21a-55027f56524d req-089d78fd-1621-46d0-87ea-f03b26b090cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Updated VIF entry in instance network info cache for port 5c2dac79-c85e-416c-8cba-93d89b11d6a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.876 247708 DEBUG nova.network.neutron [req-1024940a-301e-4005-b21a-55027f56524d req-089d78fd-1621-46d0-87ea-f03b26b090cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Updating instance_info_cache with network_info: [{"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.879 247708 DEBUG oslo_concurrency.processutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b/disk.config a845825c-4dfb-41ff-b896-557e01cb3e3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.880 247708 INFO nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Deleting local config drive /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b/disk.config because it was imported into RBD.
Jan 31 08:30:23 compute-0 sudo[361206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:23 compute-0 sudo[361206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:23 compute-0 sudo[361206]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:23 compute-0 kernel: tap5c2dac79-c8: entered promiscuous mode
Jan 31 08:30:23 compute-0 NetworkManager[49108]: <info>  [1769848223.9487] manager: (tap5c2dac79-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.962 247708 DEBUG oslo_concurrency.lockutils [req-1024940a-301e-4005-b21a-55027f56524d req-089d78fd-1621-46d0-87ea-f03b26b090cc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-a845825c-4dfb-41ff-b896-557e01cb3e3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:30:23 compute-0 ovn_controller[149457]: 2026-01-31T08:30:23Z|00738|binding|INFO|Claiming lport 5c2dac79-c85e-416c-8cba-93d89b11d6a9 for this chassis.
Jan 31 08:30:23 compute-0 ovn_controller[149457]: 2026-01-31T08:30:23Z|00739|binding|INFO|5c2dac79-c85e-416c-8cba-93d89b11d6a9: Claiming fa:16:3e:86:16:1a 10.100.0.7
Jan 31 08:30:23 compute-0 nova_compute[247704]: 2026-01-31 08:30:23.987 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:23 compute-0 ovn_controller[149457]: 2026-01-31T08:30:23Z|00740|binding|INFO|Setting lport 5c2dac79-c85e-416c-8cba-93d89b11d6a9 ovn-installed in OVS
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.001 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 systemd-machined[214448]: New machine qemu-80-instance-000000ab.
Jan 31 08:30:24 compute-0 ovn_controller[149457]: 2026-01-31T08:30:24Z|00741|binding|INFO|Setting lport 5c2dac79-c85e-416c-8cba-93d89b11d6a9 up in Southbound
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.018 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:16:1a 10.100.0.7'], port_security=['fa:16:3e:86:16:1a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a845825c-4dfb-41ff-b896-557e01cb3e3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4abfdb7-aa95-4407-b049-c51322e9a052', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5c2dac79-c85e-416c-8cba-93d89b11d6a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.019 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5c2dac79-c85e-416c-8cba-93d89b11d6a9 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad bound to our chassis
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.022 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:30:24 compute-0 systemd-udevd[361268]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:30:24 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-000000ab.
Jan 31 08:30:24 compute-0 sudo[361237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:24 compute-0 sudo[361237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:24 compute-0 sudo[361237]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.038 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e5138103-baac-4c8c-9389-0e89483c3af9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 NetworkManager[49108]: <info>  [1769848224.0421] device (tap5c2dac79-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:30:24 compute-0 NetworkManager[49108]: <info>  [1769848224.0425] device (tap5c2dac79-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.072 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c2864382-1ba0-4c1f-a68c-45efe6aaae7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.076 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[38f4497d-3efa-42d3-8e0f-4fa7bcb18e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.077 247708 DEBUG nova.compute.manager [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-changed-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.078 247708 DEBUG nova.compute.manager [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Refreshing instance network info cache due to event network-changed-324249e6-6299-4722-a570-3439880bde1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.078 247708 DEBUG oslo_concurrency.lockutils [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.078 247708 DEBUG oslo_concurrency.lockutils [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.079 247708 DEBUG nova.network.neutron [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Refreshing network info cache for port 324249e6-6299-4722-a570-3439880bde1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:30:24 compute-0 sudo[361274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:24 compute-0 sudo[361274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.105 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebc67a5-2b1a-437d-ad0b-d9cf50b03180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 sudo[361274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.125 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e1edc6ef-84dd-4f96-9868-eebf0b60f1f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813771, 'reachable_time': 30578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361308, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.146 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b599c32e-592c-4d0b-a1f4-ae1e2da69669]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813783, 'tstamp': 813783}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361322, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813786, 'tstamp': 813786}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361322, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.148 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.150 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.151 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.151 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.152 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.152 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:30:24 compute-0 sudo[361309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:30:24 compute-0 sudo[361309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.166 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.166 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.167 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.167 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.167 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.169 247708 INFO nova.compute.manager [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Terminating instance
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.170 247708 DEBUG nova.compute.manager [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:30:24 compute-0 kernel: tap324249e6-62 (unregistering): left promiscuous mode
Jan 31 08:30:24 compute-0 NetworkManager[49108]: <info>  [1769848224.2421] device (tap324249e6-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:30:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:24.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:24 compute-0 ovn_controller[149457]: 2026-01-31T08:30:24Z|00742|binding|INFO|Releasing lport 324249e6-6299-4722-a570-3439880bde1f from this chassis (sb_readonly=0)
Jan 31 08:30:24 compute-0 ovn_controller[149457]: 2026-01-31T08:30:24Z|00743|binding|INFO|Setting lport 324249e6-6299-4722-a570-3439880bde1f down in Southbound
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.251 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 ovn_controller[149457]: 2026-01-31T08:30:24Z|00744|binding|INFO|Removing iface tap324249e6-62 ovn-installed in OVS
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.255 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 767 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Jan 31 08:30:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3100385578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.285 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:84:a5 10.100.0.7'], port_security=['fa:16:3e:73:84:a5 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b24a8d0-ad95-4460-acf1-0acb658330aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c8cd691-2e19-4c04-b061-2e5304161623', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '13', 'neutron:security_group_ids': '0e11440e-2192-492f-a554-7817b0fd324e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0bee9a73-d2c9-492b-9a5b-302cf0b7a5b8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=324249e6-6299-4722-a570-3439880bde1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.286 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 324249e6-6299-4722-a570-3439880bde1f in datapath 5c8cd691-2e19-4c04-b061-2e5304161623 unbound from our chassis
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.288 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c8cd691-2e19-4c04-b061-2e5304161623, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.289 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f93531bf-fc79-43ed-a68a-edcdde65cda5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.290 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 namespace which is not needed anymore
Jan 31 08:30:24 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Jan 31 08:30:24 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a9.scope: Consumed 2.050s CPU time.
Jan 31 08:30:24 compute-0 systemd-machined[214448]: Machine qemu-79-instance-000000a9 terminated.
Jan 31 08:30:24 compute-0 NetworkManager[49108]: <info>  [1769848224.3957] manager: (tap324249e6-62): new Tun device (/org/freedesktop/NetworkManager/Devices/325)
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.400 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.405 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.411 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848224.409893, a845825c-4dfb-41ff-b896-557e01cb3e3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.411 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] VM Started (Lifecycle Event)
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.420 247708 INFO nova.virt.libvirt.driver [-] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Instance destroyed successfully.
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.421 247708 DEBUG nova.objects.instance [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 2b24a8d0-ad95-4460-acf1-0acb658330aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:24 compute-0 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[360280]: [NOTICE]   (360284) : haproxy version is 2.8.14-c23fe91
Jan 31 08:30:24 compute-0 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[360280]: [NOTICE]   (360284) : path to executable is /usr/sbin/haproxy
Jan 31 08:30:24 compute-0 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[360280]: [WARNING]  (360284) : Exiting Master process...
Jan 31 08:30:24 compute-0 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[360280]: [ALERT]    (360284) : Current worker (360286) exited with code 143 (Terminated)
Jan 31 08:30:24 compute-0 neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623[360280]: [WARNING]  (360284) : All workers exited. Exiting... (0)
Jan 31 08:30:24 compute-0 systemd[1]: libpod-f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d.scope: Deactivated successfully.
Jan 31 08:30:24 compute-0 podman[361417]: 2026-01-31 08:30:24.436964257 +0000 UTC m=+0.056579528 container died f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.479 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2967b5dc5af895ce0bb96475a4fdfe5ddefe73fe0ff4a32c57b5b9f1ca236726-merged.mount: Deactivated successfully.
Jan 31 08:30:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d-userdata-shm.mount: Deactivated successfully.
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.487 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848224.4129617, a845825c-4dfb-41ff-b896-557e01cb3e3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.488 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] VM Paused (Lifecycle Event)
Jan 31 08:30:24 compute-0 podman[361417]: 2026-01-31 08:30:24.513500984 +0000 UTC m=+0.133116255 container cleanup f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.519 247708 DEBUG nova.virt.libvirt.vif [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:28:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2146916992',display_name='tempest-TestNetworkAdvancedServerOps-server-2146916992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2146916992',id=169,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK7T1FRM2hnvWHsS2vC2H2YEYiw3bA7e7xk8qp1+JMKzlq7jPy/PzZqxW2VC1KQOHQDuHMYvJPGgKGcZZL8ySdxPXOSaobgGLmos6mi9nlU6Dv+erNyHG2EMArHdJN1ryg==',key_name='tempest-TestNetworkAdvancedServerOps-694728097',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:29:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-ah4zs948',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:30:15Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=2b24a8d0-ad95-4460-acf1-0acb658330aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.520 247708 DEBUG nova.network.os_vif_util [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.521 247708 DEBUG nova.network.os_vif_util [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.522 247708 DEBUG os_vif [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:30:24 compute-0 systemd[1]: libpod-conmon-f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d.scope: Deactivated successfully.
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.523 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.524 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap324249e6-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.526 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.528 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.531 247708 INFO os_vif [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:84:a5,bridge_name='br-int',has_traffic_filtering=True,id=324249e6-6299-4722-a570-3439880bde1f,network=Network(5c8cd691-2e19-4c04-b061-2e5304161623),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap324249e6-62')
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.556 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.565 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:30:24 compute-0 podman[361472]: 2026-01-31 08:30:24.599222146 +0000 UTC m=+0.061398917 container remove f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.604 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9a66ee50-6089-41d0-be6a-d50ab676923b]: (4, ('Sat Jan 31 08:30:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 (f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d)\nf57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d\nSat Jan 31 08:30:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 (f57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d)\nf57c1a8675daa9a5d245a535243c19a5742689e23d9b6b3147e30b686bb9648d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.607 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8d694257-3f2c-4d25-8994-feeb6f84cffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.608 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c8cd691-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.610 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 kernel: tap5c8cd691-20: left promiscuous mode
Jan 31 08:30:24 compute-0 podman[361473]: 2026-01-31 08:30:24.615234519 +0000 UTC m=+0.071398131 container create de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.617 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.621 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b846fb04-04a1-4e97-9458-fbc6592f97d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.641 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.649 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c9035d34-4d73-48c2-bd1f-45b48c4f2a98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.651 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c0df8dc7-57c5-499b-8ebb-8aba536bc4c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 systemd[1]: Started libpod-conmon-de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f.scope.
Jan 31 08:30:24 compute-0 podman[361473]: 2026-01-31 08:30:24.569278902 +0000 UTC m=+0.025442544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.666 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[15e6140d-5943-4cfd-bd76-9ee5f68256fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845223, 'reachable_time': 38851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361521, 'error': None, 'target': 'ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.669 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c8cd691-2e19-4c04-b061-2e5304161623 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:30:24 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:24.669 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[1da9d1ff-aeae-4192-b4b0-1daf7193114c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:24 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c8cd691\x2d2e19\x2d4c04\x2db061\x2d2e5304161623.mount: Deactivated successfully.
Jan 31 08:30:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:24 compute-0 podman[361473]: 2026-01-31 08:30:24.711876339 +0000 UTC m=+0.168039971 container init de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:30:24 compute-0 podman[361473]: 2026-01-31 08:30:24.723021102 +0000 UTC m=+0.179184714 container start de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:30:24 compute-0 beautiful_ptolemy[361522]: 167 167
Jan 31 08:30:24 compute-0 systemd[1]: libpod-de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f.scope: Deactivated successfully.
Jan 31 08:30:24 compute-0 podman[361473]: 2026-01-31 08:30:24.729744206 +0000 UTC m=+0.185907938 container attach de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:24 compute-0 podman[361473]: 2026-01-31 08:30:24.730739131 +0000 UTC m=+0.186902743 container died de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 08:30:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5efad01ea68cb61875d718ff4996030161395f3a2dcf454ec38ff1eb150cff0-merged.mount: Deactivated successfully.
Jan 31 08:30:24 compute-0 podman[361473]: 2026-01-31 08:30:24.804759926 +0000 UTC m=+0.260923538 container remove de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_ptolemy, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.824 247708 DEBUG nova.compute.manager [req-aec82b5b-4f05-4c1a-8393-f607e3a9ceea req-2158eaba-49aa-48d9-848e-28d5de47bbfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received event network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.825 247708 DEBUG oslo_concurrency.lockutils [req-aec82b5b-4f05-4c1a-8393-f607e3a9ceea req-2158eaba-49aa-48d9-848e-28d5de47bbfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.825 247708 DEBUG oslo_concurrency.lockutils [req-aec82b5b-4f05-4c1a-8393-f607e3a9ceea req-2158eaba-49aa-48d9-848e-28d5de47bbfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.825 247708 DEBUG oslo_concurrency.lockutils [req-aec82b5b-4f05-4c1a-8393-f607e3a9ceea req-2158eaba-49aa-48d9-848e-28d5de47bbfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.825 247708 DEBUG nova.compute.manager [req-aec82b5b-4f05-4c1a-8393-f607e3a9ceea req-2158eaba-49aa-48d9-848e-28d5de47bbfb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Processing event network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.826 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:30:24 compute-0 systemd[1]: libpod-conmon-de11e99570c0355989b1c2c1a9d262bc97517d323b4dd393411231c1ffa9334f.scope: Deactivated successfully.
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.830 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848224.830348, a845825c-4dfb-41ff-b896-557e01cb3e3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.831 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] VM Resumed (Lifecycle Event)
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.833 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.836 247708 INFO nova.virt.libvirt.driver [-] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Instance spawned successfully.
Jan 31 08:30:24 compute-0 nova_compute[247704]: 2026-01-31 08:30:24.836 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:30:24 compute-0 podman[361548]: 2026-01-31 08:30:24.96437647 +0000 UTC m=+0.046672365 container create 8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.017 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.018 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:25 compute-0 systemd[1]: Started libpod-conmon-8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6.scope.
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.019 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.019 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.019 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.020 247708 DEBUG nova.virt.libvirt.driver [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:30:25 compute-0 podman[361548]: 2026-01-31 08:30:24.947285681 +0000 UTC m=+0.029581616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:30:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568326175f82d077e498520da180acdf561a1c66b1952d18de36d65389229104/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568326175f82d077e498520da180acdf561a1c66b1952d18de36d65389229104/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568326175f82d077e498520da180acdf561a1c66b1952d18de36d65389229104/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/568326175f82d077e498520da180acdf561a1c66b1952d18de36d65389229104/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:25 compute-0 podman[361548]: 2026-01-31 08:30:25.073023655 +0000 UTC m=+0.155319560 container init 8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:25 compute-0 podman[361548]: 2026-01-31 08:30:25.07853199 +0000 UTC m=+0.160827895 container start 8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 31 08:30:25 compute-0 podman[361548]: 2026-01-31 08:30:25.082591069 +0000 UTC m=+0.164886994 container attach 8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.086 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.094 247708 INFO nova.virt.libvirt.driver [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Deleting instance files /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa_del
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.095 247708 INFO nova.virt.libvirt.driver [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Deletion of /var/lib/nova/instances/2b24a8d0-ad95-4460-acf1-0acb658330aa_del complete
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.100 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.186 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.253 247708 INFO nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Took 9.30 seconds to spawn the instance on the hypervisor.
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.253 247708 DEBUG nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.268 247708 INFO nova.compute.manager [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Took 1.10 seconds to destroy the instance on the hypervisor.
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.269 247708 DEBUG oslo.service.loopingcall [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.269 247708 DEBUG nova.compute.manager [-] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.270 247708 DEBUG nova.network.neutron [-] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:30:25 compute-0 ceph-mon[74496]: pgmap v2911: 305 pgs: 305 active+clean; 767 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.470 247708 INFO nova.compute.manager [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Took 13.29 seconds to build instance.
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.496 247708 DEBUG oslo_concurrency.lockutils [None req-3173517e-e426-4c08-8ea8-9333b70f53eb 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:25.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.812 247708 DEBUG nova.compute.manager [req-ec3fcb7f-5186-4642-b043-4f0dd56df482 req-3118b643-729e-45af-8674-82c9ac3b5923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.814 247708 DEBUG oslo_concurrency.lockutils [req-ec3fcb7f-5186-4642-b043-4f0dd56df482 req-3118b643-729e-45af-8674-82c9ac3b5923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.814 247708 DEBUG oslo_concurrency.lockutils [req-ec3fcb7f-5186-4642-b043-4f0dd56df482 req-3118b643-729e-45af-8674-82c9ac3b5923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.814 247708 DEBUG oslo_concurrency.lockutils [req-ec3fcb7f-5186-4642-b043-4f0dd56df482 req-3118b643-729e-45af-8674-82c9ac3b5923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.814 247708 DEBUG nova.compute.manager [req-ec3fcb7f-5186-4642-b043-4f0dd56df482 req-3118b643-729e-45af-8674-82c9ac3b5923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.814 247708 DEBUG nova.compute.manager [req-ec3fcb7f-5186-4642-b043-4f0dd56df482 req-3118b643-729e-45af-8674-82c9ac3b5923 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-unplugged-324249e6-6299-4722-a570-3439880bde1f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:30:25 compute-0 nova_compute[247704]: 2026-01-31 08:30:25.834 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]: {
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]:         "osd_id": 0,
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]:         "type": "bluestore"
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]:     }
Jan 31 08:30:25 compute-0 upbeat_chandrasekhar[361565]: }
Jan 31 08:30:25 compute-0 systemd[1]: libpod-8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6.scope: Deactivated successfully.
Jan 31 08:30:25 compute-0 podman[361548]: 2026-01-31 08:30:25.934725684 +0000 UTC m=+1.017021599 container died 8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:30:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-568326175f82d077e498520da180acdf561a1c66b1952d18de36d65389229104-merged.mount: Deactivated successfully.
Jan 31 08:30:26 compute-0 podman[361548]: 2026-01-31 08:30:26.008654197 +0000 UTC m=+1.090950112 container remove 8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:30:26 compute-0 systemd[1]: libpod-conmon-8f4c7a2c435f8139f5b3c12cc37ab954563857524d4fcff999813b9068a458c6.scope: Deactivated successfully.
Jan 31 08:30:26 compute-0 sudo[361309]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:30:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:30:26 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ff703567-9665-4daf-b407-0f1e50cef439 does not exist
Jan 31 08:30:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2784879f-0528-4ec4-851f-cabf7243e13c does not exist
Jan 31 08:30:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 32759df0-3d98-46bd-93f7-05f8e88ea10d does not exist
Jan 31 08:30:26 compute-0 sudo[361601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:26 compute-0 sudo[361601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:26 compute-0 sudo[361601]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:26 compute-0 sudo[361626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:30:26 compute-0 sudo[361626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:26 compute-0 sudo[361626]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:26.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 664 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 382 KiB/s rd, 3.2 KiB/s wr, 73 op/s
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.393 247708 DEBUG nova.network.neutron [-] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.421 247708 INFO nova.compute.manager [-] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Took 1.15 seconds to deallocate network for instance.
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.535 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.535 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.542 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.616 247708 INFO nova.scheduler.client.report [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance 2b24a8d0-ad95-4460-acf1-0acb658330aa
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.681 247708 DEBUG nova.network.neutron [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updated VIF entry in instance network info cache for port 324249e6-6299-4722-a570-3439880bde1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.682 247708 DEBUG nova.network.neutron [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Updating instance_info_cache with network_info: [{"id": "324249e6-6299-4722-a570-3439880bde1f", "address": "fa:16:3e:73:84:a5", "network": {"id": "5c8cd691-2e19-4c04-b061-2e5304161623", "bridge": "br-int", "label": "tempest-network-smoke--467716151", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap324249e6-62", "ovs_interfaceid": "324249e6-6299-4722-a570-3439880bde1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.684 247708 DEBUG nova.compute.manager [req-0b117dad-ae52-4cd1-8289-4c88410ecc02 req-528deb21-af16-42b3-8848-0f5259ee3d99 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-deleted-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.786 247708 DEBUG oslo_concurrency.lockutils [req-f43cd165-cc09-4544-81af-57f0682e6442 req-e38de3a2-c3d9-49ba-a0aa-e06c0357b510 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-2b24a8d0-ad95-4460-acf1-0acb658330aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:30:26 compute-0 nova_compute[247704]: 2026-01-31 08:30:26.904 247708 DEBUG oslo_concurrency.lockutils [None req-84fd25c1-9019-4997-ac5e-d6d10a0651c8 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:27 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:27 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.105 247708 DEBUG nova.compute.manager [req-a697a0c0-061b-41be-84a3-501a483b2c2f req-dad34438-1ff9-4c16-b825-b26a850bd700 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received event network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.105 247708 DEBUG oslo_concurrency.lockutils [req-a697a0c0-061b-41be-84a3-501a483b2c2f req-dad34438-1ff9-4c16-b825-b26a850bd700 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.106 247708 DEBUG oslo_concurrency.lockutils [req-a697a0c0-061b-41be-84a3-501a483b2c2f req-dad34438-1ff9-4c16-b825-b26a850bd700 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.106 247708 DEBUG oslo_concurrency.lockutils [req-a697a0c0-061b-41be-84a3-501a483b2c2f req-dad34438-1ff9-4c16-b825-b26a850bd700 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.106 247708 DEBUG nova.compute.manager [req-a697a0c0-061b-41be-84a3-501a483b2c2f req-dad34438-1ff9-4c16-b825-b26a850bd700 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] No waiting events found dispatching network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.106 247708 WARNING nova.compute.manager [req-a697a0c0-061b-41be-84a3-501a483b2c2f req-dad34438-1ff9-4c16-b825-b26a850bd700 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received unexpected event network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 for instance with vm_state active and task_state None.
Jan 31 08:30:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:27.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.970 247708 DEBUG nova.compute.manager [req-4a739d15-b213-45b7-9115-b1cf11911592 req-2df594ab-6baa-4495-81f4-5a6860051b59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.971 247708 DEBUG oslo_concurrency.lockutils [req-4a739d15-b213-45b7-9115-b1cf11911592 req-2df594ab-6baa-4495-81f4-5a6860051b59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.971 247708 DEBUG oslo_concurrency.lockutils [req-4a739d15-b213-45b7-9115-b1cf11911592 req-2df594ab-6baa-4495-81f4-5a6860051b59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.971 247708 DEBUG oslo_concurrency.lockutils [req-4a739d15-b213-45b7-9115-b1cf11911592 req-2df594ab-6baa-4495-81f4-5a6860051b59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "2b24a8d0-ad95-4460-acf1-0acb658330aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.971 247708 DEBUG nova.compute.manager [req-4a739d15-b213-45b7-9115-b1cf11911592 req-2df594ab-6baa-4495-81f4-5a6860051b59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] No waiting events found dispatching network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:30:27 compute-0 nova_compute[247704]: 2026-01-31 08:30:27.972 247708 WARNING nova.compute.manager [req-4a739d15-b213-45b7-9115-b1cf11911592 req-2df594ab-6baa-4495-81f4-5a6860051b59 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Received unexpected event network-vif-plugged-324249e6-6299-4722-a570-3439880bde1f for instance with vm_state deleted and task_state None.
Jan 31 08:30:28 compute-0 ceph-mon[74496]: pgmap v2912: 305 pgs: 305 active+clean; 664 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 382 KiB/s rd, 3.2 KiB/s wr, 73 op/s
Jan 31 08:30:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:28.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 15 KiB/s wr, 116 op/s
Jan 31 08:30:29 compute-0 nova_compute[247704]: 2026-01-31 08:30:29.528 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:29.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Jan 31 08:30:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:30 compute-0 ceph-mon[74496]: pgmap v2913: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 15 KiB/s wr, 116 op/s
Jan 31 08:30:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 130 op/s
Jan 31 08:30:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Jan 31 08:30:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Jan 31 08:30:30 compute-0 nova_compute[247704]: 2026-01-31 08:30:30.837 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:31 compute-0 ceph-mon[74496]: pgmap v2914: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 130 op/s
Jan 31 08:30:31 compute-0 ceph-mon[74496]: osdmap e368: 3 total, 3 up, 3 in
Jan 31 08:30:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:31.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:32.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 135 op/s
Jan 31 08:30:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:32 compute-0 sudo[361654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:32 compute-0 sudo[361654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:32 compute-0 sudo[361654]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:32 compute-0 sudo[361679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:32 compute-0 sudo[361679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:32 compute-0 sudo[361679]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Jan 31 08:30:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Jan 31 08:30:33 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Jan 31 08:30:33 compute-0 ceph-mon[74496]: pgmap v2916: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 135 op/s
Jan 31 08:30:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:33.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:33 compute-0 ovn_controller[149457]: 2026-01-31T08:30:33Z|00745|binding|INFO|Releasing lport 0b9d56f1-a803-44f1-b709-3bfbc71e0f57 from this chassis (sb_readonly=0)
Jan 31 08:30:33 compute-0 nova_compute[247704]: 2026-01-31 08:30:33.969 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:30:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:34.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:30:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 90 op/s
Jan 31 08:30:34 compute-0 nova_compute[247704]: 2026-01-31 08:30:34.531 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:34 compute-0 ceph-mon[74496]: osdmap e369: 3 total, 3 up, 3 in
Jan 31 08:30:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:35.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:35 compute-0 ceph-mon[74496]: pgmap v2918: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 90 op/s
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010880138539921832 of space, bias 1.0, pg target 3.2640415619765495 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005311932928531547 of space, bias 1.0, pg target 1.5776440797738693 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:30:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 08:30:35 compute-0 nova_compute[247704]: 2026-01-31 08:30:35.838 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:36.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 546 KiB/s rd, 10 KiB/s wr, 32 op/s
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.231 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "a845825c-4dfb-41ff-b896-557e01cb3e3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.232 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.232 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.232 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.233 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.234 247708 INFO nova.compute.manager [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Terminating instance
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.235 247708 DEBUG nova.compute.manager [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:30:37 compute-0 kernel: tap5c2dac79-c8 (unregistering): left promiscuous mode
Jan 31 08:30:37 compute-0 NetworkManager[49108]: <info>  [1769848237.2954] device (tap5c2dac79-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-0 ovn_controller[149457]: 2026-01-31T08:30:37Z|00746|binding|INFO|Releasing lport 5c2dac79-c85e-416c-8cba-93d89b11d6a9 from this chassis (sb_readonly=0)
Jan 31 08:30:37 compute-0 ovn_controller[149457]: 2026-01-31T08:30:37Z|00747|binding|INFO|Setting lport 5c2dac79-c85e-416c-8cba-93d89b11d6a9 down in Southbound
Jan 31 08:30:37 compute-0 ovn_controller[149457]: 2026-01-31T08:30:37Z|00748|binding|INFO|Removing iface tap5c2dac79-c8 ovn-installed in OVS
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.307 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.313 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:86:16:1a 10.100.0.7'], port_security=['fa:16:3e:86:16:1a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a845825c-4dfb-41ff-b896-557e01cb3e3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4abfdb7-aa95-4407-b049-c51322e9a052', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=5c2dac79-c85e-416c-8cba-93d89b11d6a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.314 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.314 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 5c2dac79-c85e-416c-8cba-93d89b11d6a9 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad unbound from our chassis
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.316 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.333 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ce12279f-7f19-473f-997f-32e3cc4bf447]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:37 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Jan 31 08:30:37 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000ab.scope: Consumed 12.097s CPU time.
Jan 31 08:30:37 compute-0 systemd-machined[214448]: Machine qemu-80-instance-000000ab terminated.
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.359 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d2743b21-8ea0-4c23-99db-25490a2f68f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.362 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4906d895-a6d5-48dc-84d6-a61f744f75f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.391 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[6c53eee7-beb4-489c-874f-199c6d3d1bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.408 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fa019a5c-dabf-4488-ba7d-00101e36da43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813771, 'reachable_time': 30578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361718, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.421 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8df285a3-b7d2-4b85-a9de-043529d89a2e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813783, 'tstamp': 813783}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361719, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813786, 'tstamp': 813786}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361719, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.424 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.464 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.469 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.470 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.470 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.470 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:37.471 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.489 247708 INFO nova.virt.libvirt.driver [-] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Instance destroyed successfully.
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.490 247708 DEBUG nova.objects.instance [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'resources' on Instance uuid a845825c-4dfb-41ff-b896-557e01cb3e3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.512 247708 DEBUG nova.virt.libvirt.vif [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:30:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-2125524709',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-2125524709',id=171,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:30:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-a9f0gb0b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:30:25Z,user_data=None,user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=a845825c-4dfb-41ff-b896-557e01cb3e3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.512 247708 DEBUG nova.network.os_vif_util [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "address": "fa:16:3e:86:16:1a", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c2dac79-c8", "ovs_interfaceid": "5c2dac79-c85e-416c-8cba-93d89b11d6a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.513 247708 DEBUG nova.network.os_vif_util [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:86:16:1a,bridge_name='br-int',has_traffic_filtering=True,id=5c2dac79-c85e-416c-8cba-93d89b11d6a9,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c2dac79-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.513 247708 DEBUG os_vif [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:16:1a,bridge_name='br-int',has_traffic_filtering=True,id=5c2dac79-c85e-416c-8cba-93d89b11d6a9,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c2dac79-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.514 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.514 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c2dac79-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.516 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.518 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.520 247708 INFO os_vif [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:86:16:1a,bridge_name='br-int',has_traffic_filtering=True,id=5c2dac79-c85e-416c-8cba-93d89b11d6a9,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c2dac79-c8')
Jan 31 08:30:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:37.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.946 247708 DEBUG nova.compute.manager [req-139e8d09-354a-4983-9911-494002af04b9 req-8d1ee5f8-ba9f-4e73-bdb5-afe6a404ca57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received event network-vif-unplugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.947 247708 DEBUG oslo_concurrency.lockutils [req-139e8d09-354a-4983-9911-494002af04b9 req-8d1ee5f8-ba9f-4e73-bdb5-afe6a404ca57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.947 247708 DEBUG oslo_concurrency.lockutils [req-139e8d09-354a-4983-9911-494002af04b9 req-8d1ee5f8-ba9f-4e73-bdb5-afe6a404ca57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.948 247708 DEBUG oslo_concurrency.lockutils [req-139e8d09-354a-4983-9911-494002af04b9 req-8d1ee5f8-ba9f-4e73-bdb5-afe6a404ca57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.948 247708 DEBUG nova.compute.manager [req-139e8d09-354a-4983-9911-494002af04b9 req-8d1ee5f8-ba9f-4e73-bdb5-afe6a404ca57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] No waiting events found dispatching network-vif-unplugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:30:37 compute-0 nova_compute[247704]: 2026-01-31 08:30:37.949 247708 DEBUG nova.compute.manager [req-139e8d09-354a-4983-9911-494002af04b9 req-8d1ee5f8-ba9f-4e73-bdb5-afe6a404ca57 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received event network-vif-unplugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:30:37 compute-0 ceph-mon[74496]: pgmap v2919: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 546 KiB/s rd, 10 KiB/s wr, 32 op/s
Jan 31 08:30:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:38.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 664 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.7 MiB/s wr, 54 op/s
Jan 31 08:30:38 compute-0 nova_compute[247704]: 2026-01-31 08:30:38.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:38 compute-0 podman[361751]: 2026-01-31 08:30:38.911953246 +0000 UTC m=+0.069427964 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:30:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:39.118 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.119 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:39.121 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.307 247708 INFO nova.virt.libvirt.driver [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Deleting instance files /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b_del
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.308 247708 INFO nova.virt.libvirt.driver [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Deletion of /var/lib/nova/instances/a845825c-4dfb-41ff-b896-557e01cb3e3b_del complete
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.447 247708 INFO nova.compute.manager [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Took 2.21 seconds to destroy the instance on the hypervisor.
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.447 247708 DEBUG oslo.service.loopingcall [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.448 247708 DEBUG nova.compute.manager [-] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.448 247708 DEBUG nova.network.neutron [-] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.642 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848224.4131634, 2b24a8d0-ad95-4460-acf1-0acb658330aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.643 247708 INFO nova.compute.manager [-] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] VM Stopped (Lifecycle Event)
Jan 31 08:30:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:39.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:39 compute-0 nova_compute[247704]: 2026-01-31 08:30:39.672 247708 DEBUG nova.compute.manager [None req-106e5a1a-aca1-4c46-aef4-cd375e513727 - - - - - -] [instance: 2b24a8d0-ad95-4460-acf1-0acb658330aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:40 compute-0 nova_compute[247704]: 2026-01-31 08:30:40.142 247708 DEBUG nova.compute.manager [req-7ec09710-46c3-43c1-b8c0-3b92fa7b25ba req-1a32252f-d62b-40c6-8bd8-e93423a1aa67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received event network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:40 compute-0 nova_compute[247704]: 2026-01-31 08:30:40.142 247708 DEBUG oslo_concurrency.lockutils [req-7ec09710-46c3-43c1-b8c0-3b92fa7b25ba req-1a32252f-d62b-40c6-8bd8-e93423a1aa67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:40 compute-0 nova_compute[247704]: 2026-01-31 08:30:40.142 247708 DEBUG oslo_concurrency.lockutils [req-7ec09710-46c3-43c1-b8c0-3b92fa7b25ba req-1a32252f-d62b-40c6-8bd8-e93423a1aa67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:40 compute-0 nova_compute[247704]: 2026-01-31 08:30:40.142 247708 DEBUG oslo_concurrency.lockutils [req-7ec09710-46c3-43c1-b8c0-3b92fa7b25ba req-1a32252f-d62b-40c6-8bd8-e93423a1aa67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:40 compute-0 nova_compute[247704]: 2026-01-31 08:30:40.143 247708 DEBUG nova.compute.manager [req-7ec09710-46c3-43c1-b8c0-3b92fa7b25ba req-1a32252f-d62b-40c6-8bd8-e93423a1aa67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] No waiting events found dispatching network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:30:40 compute-0 nova_compute[247704]: 2026-01-31 08:30:40.143 247708 WARNING nova.compute.manager [req-7ec09710-46c3-43c1-b8c0-3b92fa7b25ba req-1a32252f-d62b-40c6-8bd8-e93423a1aa67 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received unexpected event network-vif-plugged-5c2dac79-c85e-416c-8cba-93d89b11d6a9 for instance with vm_state active and task_state deleting.
Jan 31 08:30:40 compute-0 ceph-mon[74496]: pgmap v2920: 305 pgs: 305 active+clean; 664 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.7 MiB/s wr, 54 op/s
Jan 31 08:30:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 672 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 31 08:30:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:40.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:40 compute-0 nova_compute[247704]: 2026-01-31 08:30:40.871 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:41 compute-0 nova_compute[247704]: 2026-01-31 08:30:41.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:41 compute-0 nova_compute[247704]: 2026-01-31 08:30:41.573 247708 DEBUG nova.network.neutron [-] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:41 compute-0 nova_compute[247704]: 2026-01-31 08:30:41.633 247708 INFO nova.compute.manager [-] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Took 2.18 seconds to deallocate network for instance.
Jan 31 08:30:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:41.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:41 compute-0 ceph-mon[74496]: pgmap v2921: 305 pgs: 305 active+clean; 672 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 223 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 31 08:30:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1777556716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Jan 31 08:30:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:42 compute-0 nova_compute[247704]: 2026-01-31 08:30:42.516 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:42 compute-0 nova_compute[247704]: 2026-01-31 08:30:42.723 247708 DEBUG nova.compute.manager [req-c7b42970-42a4-4f36-ad34-58c33b331226 req-6f8e26b0-ea80-4df0-920a-c562a7abf28c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Received event network-vif-deleted-5c2dac79-c85e-416c-8cba-93d89b11d6a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:30:42 compute-0 nova_compute[247704]: 2026-01-31 08:30:42.890 247708 INFO nova.compute.manager [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Took 1.26 seconds to detach 1 volumes for instance.
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.112 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.112 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.272 247708 DEBUG oslo_concurrency.processutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:43 compute-0 ceph-mon[74496]: pgmap v2922: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:30:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:43.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:30:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810849707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.741 247708 DEBUG oslo_concurrency.processutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.749 247708 DEBUG nova.compute.provider_tree [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.792 247708 DEBUG nova.scheduler.client.report [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:30:43 compute-0 nova_compute[247704]: 2026-01-31 08:30:43.945 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:44 compute-0 nova_compute[247704]: 2026-01-31 08:30:44.119 247708 INFO nova.scheduler.client.report [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Deleted allocations for instance a845825c-4dfb-41ff-b896-557e01cb3e3b
Jan 31 08:30:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 692 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.9 MiB/s wr, 82 op/s
Jan 31 08:30:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:44.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:44 compute-0 nova_compute[247704]: 2026-01-31 08:30:44.370 247708 DEBUG oslo_concurrency.lockutils [None req-d4783d6f-6726-4456-97cd-5cea4a641827 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "a845825c-4dfb-41ff-b896-557e01cb3e3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/810849707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:45 compute-0 ceph-mon[74496]: pgmap v2923: 305 pgs: 305 active+clean; 692 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.9 MiB/s wr, 82 op/s
Jan 31 08:30:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:45.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:45 compute-0 nova_compute[247704]: 2026-01-31 08:30:45.873 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:30:46.124 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:30:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 256 KiB/s rd, 3.8 MiB/s wr, 89 op/s
Jan 31 08:30:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:46.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:46 compute-0 nova_compute[247704]: 2026-01-31 08:30:46.565 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:47 compute-0 nova_compute[247704]: 2026-01-31 08:30:47.517 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:30:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:47.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:30:47 compute-0 ceph-mon[74496]: pgmap v2924: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 256 KiB/s rd, 3.8 MiB/s wr, 89 op/s
Jan 31 08:30:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Jan 31 08:30:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Jan 31 08:30:47 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Jan 31 08:30:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 3.2 MiB/s wr, 63 op/s
Jan 31 08:30:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:48.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:48 compute-0 nova_compute[247704]: 2026-01-31 08:30:48.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:48 compute-0 nova_compute[247704]: 2026-01-31 08:30:48.696 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:48 compute-0 nova_compute[247704]: 2026-01-31 08:30:48.696 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:48 compute-0 nova_compute[247704]: 2026-01-31 08:30:48.696 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:48 compute-0 nova_compute[247704]: 2026-01-31 08:30:48.696 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:30:48 compute-0 nova_compute[247704]: 2026-01-31 08:30:48.697 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:48 compute-0 ceph-mon[74496]: osdmap e370: 3 total, 3 up, 3 in
Jan 31 08:30:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:30:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018523473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.141 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:49 compute-0 podman[361821]: 2026-01-31 08:30:49.263508311 +0000 UTC m=+0.075847881 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.269 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.270 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.274 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.274 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000a0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.435 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.436 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3799MB free_disk=20.73931884765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.436 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.437 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.548 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.549 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance bbc5f09e-71d7-4009-bdf6-06e95b32574c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.549 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.549 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:30:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:49.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:49 compute-0 nova_compute[247704]: 2026-01-31 08:30:49.793 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:30:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160593388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 703 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 2.5 MiB/s wr, 71 op/s
Jan 31 08:30:50 compute-0 nova_compute[247704]: 2026-01-31 08:30:50.279 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:50 compute-0 nova_compute[247704]: 2026-01-31 08:30:50.285 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:30:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:50.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:50 compute-0 ceph-mon[74496]: pgmap v2926: 305 pgs: 305 active+clean; 723 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 3.2 MiB/s wr, 63 op/s
Jan 31 08:30:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3018523473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/573318407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:50 compute-0 nova_compute[247704]: 2026-01-31 08:30:50.329 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:30:50 compute-0 nova_compute[247704]: 2026-01-31 08:30:50.404 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:30:50 compute-0 nova_compute[247704]: 2026-01-31 08:30:50.405 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:50 compute-0 nova_compute[247704]: 2026-01-31 08:30:50.917 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:51 compute-0 nova_compute[247704]: 2026-01-31 08:30:51.126 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1160593388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:51 compute-0 ceph-mon[74496]: pgmap v2927: 305 pgs: 305 active+clean; 703 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 2.5 MiB/s wr, 71 op/s
Jan 31 08:30:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3089461330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1955153886' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:30:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1955153886' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:30:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:51.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 690 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 31 08:30:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:52.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3159674751' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:52 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:30:52 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:30:52 compute-0 nova_compute[247704]: 2026-01-31 08:30:52.407 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:52 compute-0 nova_compute[247704]: 2026-01-31 08:30:52.407 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:30:52 compute-0 nova_compute[247704]: 2026-01-31 08:30:52.488 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848237.4873576, a845825c-4dfb-41ff-b896-557e01cb3e3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:30:52 compute-0 nova_compute[247704]: 2026-01-31 08:30:52.489 247708 INFO nova.compute.manager [-] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] VM Stopped (Lifecycle Event)
Jan 31 08:30:52 compute-0 nova_compute[247704]: 2026-01-31 08:30:52.518 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:52 compute-0 sudo[361874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:52 compute-0 sudo[361874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:52 compute-0 sudo[361874]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:52 compute-0 nova_compute[247704]: 2026-01-31 08:30:52.854 247708 DEBUG nova.compute.manager [None req-efdce2a3-178b-46c8-abf0-0619ab8dab87 - - - - - -] [instance: a845825c-4dfb-41ff-b896-557e01cb3e3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:30:52 compute-0 sudo[361899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:30:52 compute-0 sudo[361899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:52 compute-0 sudo[361899]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:53 compute-0 nova_compute[247704]: 2026-01-31 08:30:53.084 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:30:53 compute-0 nova_compute[247704]: 2026-01-31 08:30:53.084 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:30:53 compute-0 nova_compute[247704]: 2026-01-31 08:30:53.084 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:30:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Jan 31 08:30:53 compute-0 ceph-mon[74496]: pgmap v2928: 305 pgs: 305 active+clean; 690 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 31 08:30:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3900223522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:30:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1006507322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:30:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1006507322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:30:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Jan 31 08:30:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Jan 31 08:30:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 34 op/s
Jan 31 08:30:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:54.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2760016401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:55 compute-0 ceph-mon[74496]: osdmap e371: 3 total, 3 up, 3 in
Jan 31 08:30:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:55.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:55 compute-0 nova_compute[247704]: 2026-01-31 08:30:55.920 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 34 KiB/s wr, 51 op/s
Jan 31 08:30:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:30:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:56.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:30:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Jan 31 08:30:56 compute-0 ceph-mon[74496]: pgmap v2930: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 34 op/s
Jan 31 08:30:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/820759006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:30:56 compute-0 nova_compute[247704]: 2026-01-31 08:30:56.638 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating instance_info_cache with network_info: [{"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:30:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Jan 31 08:30:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Jan 31 08:30:56 compute-0 nova_compute[247704]: 2026-01-31 08:30:56.739 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-bbc5f09e-71d7-4009-bdf6-06e95b32574c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:30:56 compute-0 nova_compute[247704]: 2026-01-31 08:30:56.740 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:30:56 compute-0 nova_compute[247704]: 2026-01-31 08:30:56.740 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:56 compute-0 nova_compute[247704]: 2026-01-31 08:30:56.740 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Jan 31 08:30:57 compute-0 nova_compute[247704]: 2026-01-31 08:30:57.520 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:57.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Jan 31 08:30:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Jan 31 08:30:58 compute-0 ceph-mon[74496]: pgmap v2931: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 34 KiB/s wr, 51 op/s
Jan 31 08:30:58 compute-0 ceph-mon[74496]: osdmap e372: 3 total, 3 up, 3 in
Jan 31 08:30:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Jan 31 08:30:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:30:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:58.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:30:58 compute-0 nova_compute[247704]: 2026-01-31 08:30:58.889 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:58 compute-0 nova_compute[247704]: 2026-01-31 08:30:58.922 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:30:59 compute-0 ceph-mon[74496]: osdmap e373: 3 total, 3 up, 3 in
Jan 31 08:30:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:30:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:30:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.7 MiB/s wr, 282 op/s
Jan 31 08:31:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:00.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:00 compute-0 ceph-mon[74496]: pgmap v2934: 305 pgs: 305 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 127 op/s
Jan 31 08:31:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Jan 31 08:31:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Jan 31 08:31:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Jan 31 08:31:00 compute-0 nova_compute[247704]: 2026-01-31 08:31:00.958 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:01 compute-0 nova_compute[247704]: 2026-01-31 08:31:01.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:31:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:01.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Jan 31 08:31:02 compute-0 ceph-mon[74496]: pgmap v2935: 305 pgs: 305 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.7 MiB/s wr, 282 op/s
Jan 31 08:31:02 compute-0 ceph-mon[74496]: osdmap e374: 3 total, 3 up, 3 in
Jan 31 08:31:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 6.8 MiB/s wr, 299 op/s
Jan 31 08:31:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:02.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:02 compute-0 nova_compute[247704]: 2026-01-31 08:31:02.522 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Jan 31 08:31:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Jan 31 08:31:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:03 compute-0 ceph-mon[74496]: pgmap v2937: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 6.8 MiB/s wr, 299 op/s
Jan 31 08:31:03 compute-0 ceph-mon[74496]: osdmap e375: 3 total, 3 up, 3 in
Jan 31 08:31:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 298 op/s
Jan 31 08:31:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:04.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2377867101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2377867101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:05.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:05 compute-0 nova_compute[247704]: 2026-01-31 08:31:05.960 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:06 compute-0 ceph-mon[74496]: pgmap v2939: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 298 op/s
Jan 31 08:31:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.0 MiB/s wr, 175 op/s
Jan 31 08:31:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:06.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:07 compute-0 nova_compute[247704]: 2026-01-31 08:31:07.461 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:07 compute-0 nova_compute[247704]: 2026-01-31 08:31:07.462 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:07 compute-0 nova_compute[247704]: 2026-01-31 08:31:07.524 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:07.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Jan 31 08:31:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Jan 31 08:31:08 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Jan 31 08:31:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.7 MiB/s wr, 72 op/s
Jan 31 08:31:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:08.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:08 compute-0 ceph-mon[74496]: pgmap v2940: 305 pgs: 305 active+clean; 726 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.0 MiB/s wr, 175 op/s
Jan 31 08:31:08 compute-0 ceph-mon[74496]: osdmap e376: 3 total, 3 up, 3 in
Jan 31 08:31:08 compute-0 nova_compute[247704]: 2026-01-31 08:31:08.596 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:31:09 compute-0 nova_compute[247704]: 2026-01-31 08:31:09.604 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:09 compute-0 nova_compute[247704]: 2026-01-31 08:31:09.605 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:09 compute-0 nova_compute[247704]: 2026-01-31 08:31:09.618 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:31:09 compute-0 nova_compute[247704]: 2026-01-31 08:31:09.618 247708 INFO nova.compute.claims [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:31:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:09.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:09 compute-0 ceph-mon[74496]: pgmap v2942: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.7 MiB/s wr, 72 op/s
Jan 31 08:31:09 compute-0 podman[361933]: 2026-01-31 08:31:09.889609067 +0000 UTC m=+0.056131637 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:31:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 695 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 3.3 MiB/s wr, 86 op/s
Jan 31 08:31:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:10.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:11 compute-0 nova_compute[247704]: 2026-01-31 08:31:11.001 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:11 compute-0 nova_compute[247704]: 2026-01-31 08:31:11.064 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:11.201 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:11.202 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:11.202 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:31:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602169119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:11 compute-0 nova_compute[247704]: 2026-01-31 08:31:11.540 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:11 compute-0 nova_compute[247704]: 2026-01-31 08:31:11.546 247708 DEBUG nova.compute.provider_tree [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:31:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:11.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:11 compute-0 ceph-mon[74496]: pgmap v2943: 305 pgs: 305 active+clean; 695 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 3.3 MiB/s wr, 86 op/s
Jan 31 08:31:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1602169119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:11 compute-0 nova_compute[247704]: 2026-01-31 08:31:11.853 247708 DEBUG nova.scheduler.client.report [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:31:12 compute-0 nova_compute[247704]: 2026-01-31 08:31:12.252 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:12 compute-0 nova_compute[247704]: 2026-01-31 08:31:12.253 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:31:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 671 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 3.3 MiB/s wr, 93 op/s
Jan 31 08:31:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:12.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:12 compute-0 nova_compute[247704]: 2026-01-31 08:31:12.526 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:12 compute-0 sudo[361975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:12 compute-0 sudo[361975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:12 compute-0 sudo[361975]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:13 compute-0 sudo[362000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:13 compute-0 sudo[362000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:13 compute-0 sudo[362000]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:13 compute-0 nova_compute[247704]: 2026-01-31 08:31:13.185 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:31:13 compute-0 nova_compute[247704]: 2026-01-31 08:31:13.186 247708 DEBUG nova.network.neutron [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:31:13 compute-0 nova_compute[247704]: 2026-01-31 08:31:13.551 247708 INFO nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:31:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:31:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:13.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:13 compute-0 ceph-mon[74496]: pgmap v2944: 305 pgs: 305 active+clean; 671 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 3.3 MiB/s wr, 93 op/s
Jan 31 08:31:13 compute-0 nova_compute[247704]: 2026-01-31 08:31:13.848 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:31:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 671 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 114 KiB/s rd, 2.5 MiB/s wr, 97 op/s
Jan 31 08:31:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:14.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:14 compute-0 nova_compute[247704]: 2026-01-31 08:31:14.836 247708 DEBUG nova.policy [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4d0e9d918b4041fabd5ded633b4cf404', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9710f0cf77d84353ae13fa47922b085d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.077 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.078 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.079 247708 INFO nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Creating image(s)
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.109 247708 DEBUG nova.storage.rbd_utils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.138 247708 DEBUG nova.storage.rbd_utils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.167 247708 DEBUG nova.storage.rbd_utils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.172 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.245 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.246 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.246 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.247 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.277 247708 DEBUG nova.storage.rbd_utils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:31:15 compute-0 nova_compute[247704]: 2026-01-31 08:31:15.281 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:15.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:16 compute-0 nova_compute[247704]: 2026-01-31 08:31:16.005 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 678 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 2.6 MiB/s wr, 126 op/s
Jan 31 08:31:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:16.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:16 compute-0 ceph-mon[74496]: pgmap v2945: 305 pgs: 305 active+clean; 671 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 114 KiB/s rd, 2.5 MiB/s wr, 97 op/s
Jan 31 08:31:17 compute-0 nova_compute[247704]: 2026-01-31 08:31:17.121 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.840s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:17 compute-0 nova_compute[247704]: 2026-01-31 08:31:17.195 247708 DEBUG nova.storage.rbd_utils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] resizing rbd image 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:31:17 compute-0 nova_compute[247704]: 2026-01-31 08:31:17.528 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:17.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:18 compute-0 ceph-mon[74496]: pgmap v2946: 305 pgs: 305 active+clean; 678 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 2.6 MiB/s wr, 126 op/s
Jan 31 08:31:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2605209381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:18 compute-0 nova_compute[247704]: 2026-01-31 08:31:18.156 247708 DEBUG nova.objects.instance [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'migration_context' on Instance uuid 10312c53-fc1c-4baf-a301-9ac7ba4bb651 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 667 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 429 KiB/s rd, 2.8 MiB/s wr, 136 op/s
Jan 31 08:31:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:18.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:18 compute-0 nova_compute[247704]: 2026-01-31 08:31:18.431 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:31:18 compute-0 nova_compute[247704]: 2026-01-31 08:31:18.432 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Ensure instance console log exists: /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:31:18 compute-0 nova_compute[247704]: 2026-01-31 08:31:18.432 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:18 compute-0 nova_compute[247704]: 2026-01-31 08:31:18.433 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:18 compute-0 nova_compute[247704]: 2026-01-31 08:31:18.433 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:19 compute-0 nova_compute[247704]: 2026-01-31 08:31:19.166 247708 DEBUG nova.network.neutron [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Successfully created port: 6c7a1046-54de-4879-bb02-d0a2fd364536 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:31:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:19.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:19 compute-0 podman[362195]: 2026-01-31 08:31:19.909163307 +0000 UTC m=+0.083893799 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:20 compute-0 ceph-mon[74496]: pgmap v2947: 305 pgs: 305 active+clean; 667 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 429 KiB/s rd, 2.8 MiB/s wr, 136 op/s
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:31:20
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'backups', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 670 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 3.4 MiB/s wr, 130 op/s
Jan 31 08:31:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:20.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:31:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:31:21 compute-0 nova_compute[247704]: 2026-01-31 08:31:21.005 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:22 compute-0 ceph-mon[74496]: pgmap v2948: 305 pgs: 305 active+clean; 670 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 3.4 MiB/s wr, 130 op/s
Jan 31 08:31:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3503097286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Jan 31 08:31:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:22.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:22 compute-0 nova_compute[247704]: 2026-01-31 08:31:22.530 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:23 compute-0 ceph-mon[74496]: pgmap v2949: 305 pgs: 305 active+clean; 645 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Jan 31 08:31:23 compute-0 nova_compute[247704]: 2026-01-31 08:31:23.320 247708 DEBUG nova.network.neutron [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Successfully updated port: 6c7a1046-54de-4879-bb02-d0a2fd364536 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:31:23 compute-0 nova_compute[247704]: 2026-01-31 08:31:23.328 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:23.328 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:31:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:23.330 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:31:23 compute-0 nova_compute[247704]: 2026-01-31 08:31:23.481 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:31:23 compute-0 nova_compute[247704]: 2026-01-31 08:31:23.482 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquired lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:31:23 compute-0 nova_compute[247704]: 2026-01-31 08:31:23.482 247708 DEBUG nova.network.neutron [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:31:23 compute-0 nova_compute[247704]: 2026-01-31 08:31:23.556 247708 DEBUG nova.compute.manager [req-7030345e-dc58-474b-97c1-307246c0448a req-282016c8-628b-467f-9303-9945902101c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-changed-6c7a1046-54de-4879-bb02-d0a2fd364536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:23 compute-0 nova_compute[247704]: 2026-01-31 08:31:23.557 247708 DEBUG nova.compute.manager [req-7030345e-dc58-474b-97c1-307246c0448a req-282016c8-628b-467f-9303-9945902101c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Refreshing instance network info cache due to event network-changed-6c7a1046-54de-4879-bb02-d0a2fd364536. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:31:23 compute-0 nova_compute[247704]: 2026-01-31 08:31:23.557 247708 DEBUG oslo_concurrency.lockutils [req-7030345e-dc58-474b-97c1-307246c0448a req-282016c8-628b-467f-9303-9945902101c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:31:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:31:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:23.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:24 compute-0 nova_compute[247704]: 2026-01-31 08:31:24.182 247708 DEBUG nova.network.neutron [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:31:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 626 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 335 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Jan 31 08:31:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:25 compute-0 ceph-mon[74496]: pgmap v2950: 305 pgs: 305 active+clean; 626 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 335 KiB/s rd, 1.9 MiB/s wr, 109 op/s
Jan 31 08:31:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:31:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:25.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:26 compute-0 nova_compute[247704]: 2026-01-31 08:31:26.007 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 321 KiB/s rd, 1.9 MiB/s wr, 110 op/s
Jan 31 08:31:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:26.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:26 compute-0 sudo[362225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:26 compute-0 sudo[362225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:26 compute-0 sudo[362225]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:26 compute-0 sudo[362250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:26 compute-0 sudo[362250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:26 compute-0 sudo[362250]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:26 compute-0 sudo[362275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:26 compute-0 sudo[362275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:26 compute-0 sudo[362275]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:26 compute-0 sudo[362300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:31:26 compute-0 sudo[362300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:31:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:31:27 compute-0 sudo[362300]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3568431585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 08:31:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:31:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.532 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.553 247708 DEBUG nova.network.neutron [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updating instance_info_cache with network_info: [{"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:27.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.951 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Releasing lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.951 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Instance network_info: |[{"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.952 247708 DEBUG oslo_concurrency.lockutils [req-7030345e-dc58-474b-97c1-307246c0448a req-282016c8-628b-467f-9303-9945902101c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.952 247708 DEBUG nova.network.neutron [req-7030345e-dc58-474b-97c1-307246c0448a req-282016c8-628b-467f-9303-9945902101c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Refreshing network info cache for port 6c7a1046-54de-4879-bb02-d0a2fd364536 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.956 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Start _get_guest_xml network_info=[{"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.960 247708 WARNING nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.967 247708 DEBUG nova.virt.libvirt.host [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.968 247708 DEBUG nova.virt.libvirt.host [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.971 247708 DEBUG nova.virt.libvirt.host [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.971 247708 DEBUG nova.virt.libvirt.host [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.973 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.973 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.974 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.974 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.974 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.974 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.974 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.975 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.975 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.975 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.975 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.976 247708 DEBUG nova.virt.hardware [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:31:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:31:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:27 compute-0 nova_compute[247704]: 2026-01-31 08:31:27.980 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 31 08:31:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:28.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:28 compute-0 ceph-mon[74496]: pgmap v2951: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 321 KiB/s rd, 1.9 MiB/s wr, 110 op/s
Jan 31 08:31:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:31:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2823635743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:31:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942918122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:28 compute-0 nova_compute[247704]: 2026-01-31 08:31:28.478 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:28 compute-0 nova_compute[247704]: 2026-01-31 08:31:28.690 247708 DEBUG nova.storage.rbd_utils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:31:28 compute-0 nova_compute[247704]: 2026-01-31 08:31:28.696 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 08:31:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:31:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198668702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.155 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.157 247708 DEBUG nova.virt.libvirt.vif [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:31:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1017045621',display_name='tempest-TestNetworkAdvancedServerOps-server-1017045621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1017045621',id=173,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTYK+BhJjAijQwA1D6+hu7h09ttTa3tDUilYfVAOXLs7Y9bMEIzPKdA0Hy6+tZbDHq/8tz93mNyDFSEj3ocLpKTUhRTkac+Txwb84VPqTZ9LbsC84WpYANyYnuLUqXoDw==',key_name='tempest-TestNetworkAdvancedServerOps-754632833',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-kjaq00rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:31:14Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=10312c53-fc1c-4baf-a301-9ac7ba4bb651,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.157 247708 DEBUG nova.network.os_vif_util [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.158 247708 DEBUG nova.network.os_vif_util [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:61:ce,bridge_name='br-int',has_traffic_filtering=True,id=6c7a1046-54de-4879-bb02-d0a2fd364536,network=Network(e776ddae-9f2f-4d5d-9275-c52333e26d47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7a1046-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.159 247708 DEBUG nova.objects.instance [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'pci_devices' on Instance uuid 10312c53-fc1c-4baf-a301-9ac7ba4bb651 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:31:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:31:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:31:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 19bd4c3a-ed20-4e03-8568-0e2b895a05db does not exist
Jan 31 08:31:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 850231ca-988f-4cb0-9b15-ed9abc9af4c5 does not exist
Jan 31 08:31:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1516e6c2-2f3d-42bb-877f-8a3154baafa7 does not exist
Jan 31 08:31:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:31:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:31:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:31:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:31:29 compute-0 sudo[362419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:29 compute-0 sudo[362419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:29 compute-0 sudo[362419]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:29 compute-0 ceph-mon[74496]: pgmap v2952: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 31 08:31:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/942918122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1198668702' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:31:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:31:29 compute-0 sudo[362445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:29 compute-0 sudo[362445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:29 compute-0 sudo[362445]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.601 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <uuid>10312c53-fc1c-4baf-a301-9ac7ba4bb651</uuid>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <name>instance-000000ad</name>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1017045621</nova:name>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:31:27</nova:creationTime>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <nova:user uuid="4d0e9d918b4041fabd5ded633b4cf404">tempest-TestNetworkAdvancedServerOps-483180749-project-member</nova:user>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <nova:project uuid="9710f0cf77d84353ae13fa47922b085d">tempest-TestNetworkAdvancedServerOps-483180749</nova:project>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <nova:port uuid="6c7a1046-54de-4879-bb02-d0a2fd364536">
Jan 31 08:31:29 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <system>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <entry name="serial">10312c53-fc1c-4baf-a301-9ac7ba4bb651</entry>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <entry name="uuid">10312c53-fc1c-4baf-a301-9ac7ba4bb651</entry>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </system>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <os>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   </os>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <features>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   </features>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk">
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       </source>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk.config">
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       </source>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:31:29 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:9d:61:ce"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <target dev="tap6c7a1046-54"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651/console.log" append="off"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <video>
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </video>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:31:29 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:31:29 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:31:29 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:31:29 compute-0 nova_compute[247704]: </domain>
Jan 31 08:31:29 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.602 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Preparing to wait for external event network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.602 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.602 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.603 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.603 247708 DEBUG nova.virt.libvirt.vif [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:31:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1017045621',display_name='tempest-TestNetworkAdvancedServerOps-server-1017045621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1017045621',id=173,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTYK+BhJjAijQwA1D6+hu7h09ttTa3tDUilYfVAOXLs7Y9bMEIzPKdA0Hy6+tZbDHq/8tz93mNyDFSEj3ocLpKTUhRTkac+Txwb84VPqTZ9LbsC84WpYANyYnuLUqXoDw==',key_name='tempest-TestNetworkAdvancedServerOps-754632833',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-kjaq00rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:31:14Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=10312c53-fc1c-4baf-a301-9ac7ba4bb651,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.603 247708 DEBUG nova.network.os_vif_util [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.604 247708 DEBUG nova.network.os_vif_util [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:61:ce,bridge_name='br-int',has_traffic_filtering=True,id=6c7a1046-54de-4879-bb02-d0a2fd364536,network=Network(e776ddae-9f2f-4d5d-9275-c52333e26d47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7a1046-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.604 247708 DEBUG os_vif [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:61:ce,bridge_name='br-int',has_traffic_filtering=True,id=6c7a1046-54de-4879-bb02-d0a2fd364536,network=Network(e776ddae-9f2f-4d5d-9275-c52333e26d47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7a1046-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.605 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.605 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.606 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.609 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.609 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c7a1046-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.609 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c7a1046-54, col_values=(('external_ids', {'iface-id': '6c7a1046-54de-4879-bb02-d0a2fd364536', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:61:ce', 'vm-uuid': '10312c53-fc1c-4baf-a301-9ac7ba4bb651'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:29 compute-0 NetworkManager[49108]: <info>  [1769848289.6122] manager: (tap6c7a1046-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.619 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:29 compute-0 nova_compute[247704]: 2026-01-31 08:31:29.620 247708 INFO os_vif [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:61:ce,bridge_name='br-int',has_traffic_filtering=True,id=6c7a1046-54de-4879-bb02-d0a2fd364536,network=Network(e776ddae-9f2f-4d5d-9275-c52333e26d47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7a1046-54')
Jan 31 08:31:29 compute-0 sudo[362470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:29 compute-0 sudo[362470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:29 compute-0 sudo[362470]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:29 compute-0 sudo[362497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:31:29 compute-0 sudo[362497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:29.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:30 compute-0 podman[362564]: 2026-01-31 08:31:30.065696456 +0000 UTC m=+0.047756532 container create 4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:31:30 compute-0 systemd[1]: Started libpod-conmon-4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f.scope.
Jan 31 08:31:30 compute-0 podman[362564]: 2026-01-31 08:31:30.041847491 +0000 UTC m=+0.023907587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:31:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:30 compute-0 podman[362564]: 2026-01-31 08:31:30.166952649 +0000 UTC m=+0.149012775 container init 4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:30 compute-0 podman[362564]: 2026-01-31 08:31:30.175579061 +0000 UTC m=+0.157639137 container start 4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:31:30 compute-0 crazy_carver[362580]: 167 167
Jan 31 08:31:30 compute-0 systemd[1]: libpod-4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f.scope: Deactivated successfully.
Jan 31 08:31:30 compute-0 podman[362564]: 2026-01-31 08:31:30.18576226 +0000 UTC m=+0.167822366 container attach 4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:30 compute-0 podman[362564]: 2026-01-31 08:31:30.186798106 +0000 UTC m=+0.168858192 container died 4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f67fac6fc83f466b88fd0dea70f866dff4d821458e2e9b360dd193c21d54a2-merged.mount: Deactivated successfully.
Jan 31 08:31:30 compute-0 podman[362564]: 2026-01-31 08:31:30.263127367 +0000 UTC m=+0.245187483 container remove 4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:31:30 compute-0 systemd[1]: libpod-conmon-4e219d202e3fd2d5323e9c2aa01ae0310e023c730f07a861096ac43bbb32b57f.scope: Deactivated successfully.
Jan 31 08:31:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 1.5 MiB/s wr, 74 op/s
Jan 31 08:31:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:30.334 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:30.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:30 compute-0 nova_compute[247704]: 2026-01-31 08:31:30.459 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:31:30 compute-0 nova_compute[247704]: 2026-01-31 08:31:30.459 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:31:30 compute-0 nova_compute[247704]: 2026-01-31 08:31:30.460 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] No VIF found with MAC fa:16:3e:9d:61:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:31:30 compute-0 nova_compute[247704]: 2026-01-31 08:31:30.460 247708 INFO nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Using config drive
Jan 31 08:31:30 compute-0 podman[362607]: 2026-01-31 08:31:30.464980446 +0000 UTC m=+0.100703279 container create df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:30 compute-0 podman[362607]: 2026-01-31 08:31:30.387775983 +0000 UTC m=+0.023498836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:31:30 compute-0 nova_compute[247704]: 2026-01-31 08:31:30.490 247708 DEBUG nova.storage.rbd_utils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:31:30 compute-0 systemd[1]: Started libpod-conmon-df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c.scope.
Jan 31 08:31:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:31:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:31:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:31:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d87d2a62fd177c12b04987a427174a1973dc7aebc905fc8ecb3ebd9de32a9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d87d2a62fd177c12b04987a427174a1973dc7aebc905fc8ecb3ebd9de32a9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d87d2a62fd177c12b04987a427174a1973dc7aebc905fc8ecb3ebd9de32a9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d87d2a62fd177c12b04987a427174a1973dc7aebc905fc8ecb3ebd9de32a9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d87d2a62fd177c12b04987a427174a1973dc7aebc905fc8ecb3ebd9de32a9d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:30 compute-0 podman[362607]: 2026-01-31 08:31:30.649539942 +0000 UTC m=+0.285262795 container init df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:30 compute-0 podman[362607]: 2026-01-31 08:31:30.656632836 +0000 UTC m=+0.292355669 container start df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:31:30 compute-0 podman[362607]: 2026-01-31 08:31:30.695198922 +0000 UTC m=+0.330921755 container attach df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 08:31:31 compute-0 nova_compute[247704]: 2026-01-31 08:31:31.012 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:31 compute-0 youthful_mestorf[362642]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:31:31 compute-0 youthful_mestorf[362642]: --> relative data size: 1.0
Jan 31 08:31:31 compute-0 youthful_mestorf[362642]: --> All data devices are unavailable
Jan 31 08:31:31 compute-0 systemd[1]: libpod-df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c.scope: Deactivated successfully.
Jan 31 08:31:31 compute-0 podman[362607]: 2026-01-31 08:31:31.557466736 +0000 UTC m=+1.193189569 container died df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-95d87d2a62fd177c12b04987a427174a1973dc7aebc905fc8ecb3ebd9de32a9d-merged.mount: Deactivated successfully.
Jan 31 08:31:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:31.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:31 compute-0 ceph-mon[74496]: pgmap v2953: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 1.5 MiB/s wr, 74 op/s
Jan 31 08:31:31 compute-0 podman[362607]: 2026-01-31 08:31:31.734251111 +0000 UTC m=+1.369973954 container remove df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:31:31 compute-0 systemd[1]: libpod-conmon-df0388343a2a8bb2041f3c6c2d5ad936c2104bab9361c98060047c5d16190d0c.scope: Deactivated successfully.
Jan 31 08:31:31 compute-0 sudo[362497]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:31 compute-0 sudo[362669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:31 compute-0 sudo[362669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:31 compute-0 sudo[362669]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:31 compute-0 sudo[362694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:31 compute-0 sudo[362694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:31 compute-0 sudo[362694]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:31 compute-0 sudo[362719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:31 compute-0 sudo[362719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:31 compute-0 sudo[362719]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:31 compute-0 sudo[362744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:31:31 compute-0 sudo[362744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 203 KiB/s wr, 45 op/s
Jan 31 08:31:32 compute-0 podman[362810]: 2026-01-31 08:31:32.309695441 +0000 UTC m=+0.040550035 container create 9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.329 247708 INFO nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Creating config drive at /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651/disk.config
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.334 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpucznfnle execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:32 compute-0 systemd[1]: Started libpod-conmon-9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64.scope.
Jan 31 08:31:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:32.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:32 compute-0 podman[362810]: 2026-01-31 08:31:32.290565752 +0000 UTC m=+0.021420376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:31:32 compute-0 podman[362810]: 2026-01-31 08:31:32.438276854 +0000 UTC m=+0.169131468 container init 9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:31:32 compute-0 podman[362810]: 2026-01-31 08:31:32.44466203 +0000 UTC m=+0.175516624 container start 9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:31:32 compute-0 suspicious_satoshi[362828]: 167 167
Jan 31 08:31:32 compute-0 systemd[1]: libpod-9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64.scope: Deactivated successfully.
Jan 31 08:31:32 compute-0 podman[362810]: 2026-01-31 08:31:32.452268007 +0000 UTC m=+0.183122621 container attach 9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:31:32 compute-0 podman[362810]: 2026-01-31 08:31:32.453187199 +0000 UTC m=+0.184041793 container died 9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.469 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpucznfnle" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-31d1ad40bc2505059dcf71b87ff03cfd1db0f05636cc284aa64e15cbd4d46bb3-merged.mount: Deactivated successfully.
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.505 247708 DEBUG nova.storage.rbd_utils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] rbd image 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:31:32 compute-0 podman[362810]: 2026-01-31 08:31:32.51031357 +0000 UTC m=+0.241168164 container remove 9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.511 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651/disk.config 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:32 compute-0 systemd[1]: libpod-conmon-9446bb2e2e65a62698913b07a6c03463b05a30182e21d89b7a456e6ca78f7f64.scope: Deactivated successfully.
Jan 31 08:31:32 compute-0 podman[362889]: 2026-01-31 08:31:32.669367711 +0000 UTC m=+0.046939503 container create eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:31:32 compute-0 systemd[1]: Started libpod-conmon-eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c.scope.
Jan 31 08:31:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c3a1e0e200db9ce6fd3af29cf8a32e70db133ffea3e56cef3864862e55987d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c3a1e0e200db9ce6fd3af29cf8a32e70db133ffea3e56cef3864862e55987d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c3a1e0e200db9ce6fd3af29cf8a32e70db133ffea3e56cef3864862e55987d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c3a1e0e200db9ce6fd3af29cf8a32e70db133ffea3e56cef3864862e55987d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:32 compute-0 podman[362889]: 2026-01-31 08:31:32.648759145 +0000 UTC m=+0.026330957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:31:32 compute-0 podman[362889]: 2026-01-31 08:31:32.759947752 +0000 UTC m=+0.137519564 container init eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:31:32 compute-0 podman[362889]: 2026-01-31 08:31:32.767557038 +0000 UTC m=+0.145128830 container start eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.770 247708 DEBUG oslo_concurrency.processutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651/disk.config 10312c53-fc1c-4baf-a301-9ac7ba4bb651_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.772 247708 INFO nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Deleting local config drive /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651/disk.config because it was imported into RBD.
Jan 31 08:31:32 compute-0 podman[362889]: 2026-01-31 08:31:32.774862097 +0000 UTC m=+0.152433909 container attach eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 31 08:31:32 compute-0 kernel: tap6c7a1046-54: entered promiscuous mode
Jan 31 08:31:32 compute-0 NetworkManager[49108]: <info>  [1769848292.8322] manager: (tap6c7a1046-54): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Jan 31 08:31:32 compute-0 ovn_controller[149457]: 2026-01-31T08:31:32Z|00749|binding|INFO|Claiming lport 6c7a1046-54de-4879-bb02-d0a2fd364536 for this chassis.
Jan 31 08:31:32 compute-0 ovn_controller[149457]: 2026-01-31T08:31:32Z|00750|binding|INFO|6c7a1046-54de-4879-bb02-d0a2fd364536: Claiming fa:16:3e:9d:61:ce 10.100.0.5
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.832 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:32 compute-0 ovn_controller[149457]: 2026-01-31T08:31:32Z|00751|binding|INFO|Setting lport 6c7a1046-54de-4879-bb02-d0a2fd364536 ovn-installed in OVS
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.845 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:32 compute-0 systemd-udevd[362926]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.862 247708 DEBUG nova.network.neutron [req-7030345e-dc58-474b-97c1-307246c0448a req-282016c8-628b-467f-9303-9945902101c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updated VIF entry in instance network info cache for port 6c7a1046-54de-4879-bb02-d0a2fd364536. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.863 247708 DEBUG nova.network.neutron [req-7030345e-dc58-474b-97c1-307246c0448a req-282016c8-628b-467f-9303-9945902101c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updating instance_info_cache with network_info: [{"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:32 compute-0 systemd-machined[214448]: New machine qemu-81-instance-000000ad.
Jan 31 08:31:32 compute-0 ovn_controller[149457]: 2026-01-31T08:31:32Z|00752|binding|INFO|Setting lport 6c7a1046-54de-4879-bb02-d0a2fd364536 up in Southbound
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.869 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:61:ce 10.100.0.5'], port_security=['fa:16:3e:9d:61:ce 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '10312c53-fc1c-4baf-a301-9ac7ba4bb651', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e776ddae-9f2f-4d5d-9275-c52333e26d47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ac67f602-a4ad-4f17-93c3-d263afab8713', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36b1d1ee-9764-48fa-9a47-6af05aff3317, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6c7a1046-54de-4879-bb02-d0a2fd364536) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.871 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6c7a1046-54de-4879-bb02-d0a2fd364536 in datapath e776ddae-9f2f-4d5d-9275-c52333e26d47 bound to our chassis
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.873 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e776ddae-9f2f-4d5d-9275-c52333e26d47
Jan 31 08:31:32 compute-0 NetworkManager[49108]: <info>  [1769848292.8758] device (tap6c7a1046-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:31:32 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-000000ad.
Jan 31 08:31:32 compute-0 NetworkManager[49108]: <info>  [1769848292.8764] device (tap6c7a1046-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.888 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[efb39aa4-489c-460f-a552-a4513170cb4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.889 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape776ddae-91 in ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.892 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape776ddae-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.892 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6d93f7be-ac66-4e5c-81a8-d5b315acd070]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.894 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f654a324-6f86-41ab-9aa4-0e8f09196a69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.905 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[7725c7dc-fe46-4983-a6f0-86e13384c505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:32 compute-0 nova_compute[247704]: 2026-01-31 08:31:32.930 247708 DEBUG oslo_concurrency.lockutils [req-7030345e-dc58-474b-97c1-307246c0448a req-282016c8-628b-467f-9303-9945902101c1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.944 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d36870c2-30d9-4560-a95c-55019a9c39d9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.977 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ec528a7c-1730-422c-92a7-465c64e6a3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:32 compute-0 NetworkManager[49108]: <info>  [1769848292.9855] manager: (tape776ddae-90): new Veth device (/org/freedesktop/NetworkManager/Devices/328)
Jan 31 08:31:32 compute-0 systemd-udevd[362928]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:31:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:32.985 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4c149f-1f1c-433f-a841-a2765f775000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.018 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd644fd-3949-4d1b-a92e-add64c05098a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.022 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[89196e7b-c648-4b5b-be1a-9353332c93f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 NetworkManager[49108]: <info>  [1769848293.0460] device (tape776ddae-90): carrier: link connected
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.052 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4d045cf4-d602-418f-ae40-e7c7e12c41b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.069 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2204f88c-6826-4f00-9663-0d0fa5e69400]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape776ddae-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:f8:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853566, 'reachable_time': 40969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362982, 'error': None, 'target': 'ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 sudo[362958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:33 compute-0 sudo[362958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:33 compute-0 sudo[362958]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.087 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3d140368-0ce9-42bb-b2d1-146bb8111661]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:f827'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 853566, 'tstamp': 853566}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362984, 'error': None, 'target': 'ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.111 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f391d0-bae6-455e-afe1-aed7effe5cd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape776ddae-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:f8:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853566, 'reachable_time': 40969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362987, 'error': None, 'target': 'ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 sudo[362986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:33 compute-0 sudo[362986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:33 compute-0 sudo[362986]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.143 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a99f9dc4-2fba-49be-9058-1862a2160e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.196 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0d11ab6a-fb65-4354-b7f8-5158dfdca65d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.198 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape776ddae-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.199 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.199 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape776ddae-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:33 compute-0 NetworkManager[49108]: <info>  [1769848293.2028] manager: (tape776ddae-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Jan 31 08:31:33 compute-0 kernel: tape776ddae-90: entered promiscuous mode
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.204 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape776ddae-90, col_values=(('external_ids', {'iface-id': '8e261acf-5e56-454f-a39a-241d0a42e323'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:33 compute-0 ovn_controller[149457]: 2026-01-31T08:31:33Z|00753|binding|INFO|Releasing lport 8e261acf-5e56-454f-a39a-241d0a42e323 from this chassis (sb_readonly=0)
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.204 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.215 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e776ddae-9f2f-4d5d-9275-c52333e26d47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e776ddae-9f2f-4d5d-9275-c52333e26d47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.219 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[71602cf0-11e7-4184-b7e1-85592ab8f668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.220 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-e776ddae-9f2f-4d5d-9275-c52333e26d47
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/e776ddae-9f2f-4d5d-9275-c52333e26d47.pid.haproxy
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID e776ddae-9f2f-4d5d-9275-c52333e26d47
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:31:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:33.222 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47', 'env', 'PROCESS_TAG=haproxy-e776ddae-9f2f-4d5d-9275-c52333e26d47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e776ddae-9f2f-4d5d-9275-c52333e26d47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.496 247708 DEBUG nova.compute.manager [req-67ebb99b-389f-4f4d-a2c9-957e5ed82a60 req-52413251-008b-4fb0-9cf2-4c4841b4c08f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.497 247708 DEBUG oslo_concurrency.lockutils [req-67ebb99b-389f-4f4d-a2c9-957e5ed82a60 req-52413251-008b-4fb0-9cf2-4c4841b4c08f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.497 247708 DEBUG oslo_concurrency.lockutils [req-67ebb99b-389f-4f4d-a2c9-957e5ed82a60 req-52413251-008b-4fb0-9cf2-4c4841b4c08f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.497 247708 DEBUG oslo_concurrency.lockutils [req-67ebb99b-389f-4f4d-a2c9-957e5ed82a60 req-52413251-008b-4fb0-9cf2-4c4841b4c08f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.497 247708 DEBUG nova.compute.manager [req-67ebb99b-389f-4f4d-a2c9-957e5ed82a60 req-52413251-008b-4fb0-9cf2-4c4841b4c08f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Processing event network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:31:33 compute-0 optimistic_moore[362908]: {
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:     "0": [
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:         {
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "devices": [
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "/dev/loop3"
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             ],
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "lv_name": "ceph_lv0",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "lv_size": "7511998464",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "name": "ceph_lv0",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "tags": {
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.cluster_name": "ceph",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.crush_device_class": "",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.encrypted": "0",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.osd_id": "0",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.type": "block",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:                 "ceph.vdo": "0"
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             },
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "type": "block",
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:             "vg_name": "ceph_vg0"
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:         }
Jan 31 08:31:33 compute-0 optimistic_moore[362908]:     ]
Jan 31 08:31:33 compute-0 optimistic_moore[362908]: }
Jan 31 08:31:33 compute-0 systemd[1]: libpod-eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c.scope: Deactivated successfully.
Jan 31 08:31:33 compute-0 podman[362889]: 2026-01-31 08:31:33.637958422 +0000 UTC m=+1.015530214 container died eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:31:33 compute-0 podman[363048]: 2026-01-31 08:31:33.614051136 +0000 UTC m=+0.026764619 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:31:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:33.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:33 compute-0 podman[363048]: 2026-01-31 08:31:33.731115305 +0000 UTC m=+0.143828838 container create ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:31:33 compute-0 ceph-mon[74496]: pgmap v2954: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 203 KiB/s wr, 45 op/s
Jan 31 08:31:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3932476374' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3932476374' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:33 compute-0 systemd[1]: Started libpod-conmon-ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4.scope.
Jan 31 08:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-11c3a1e0e200db9ce6fd3af29cf8a32e70db133ffea3e56cef3864862e55987d-merged.mount: Deactivated successfully.
Jan 31 08:31:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd3bf6866cda80262abf80fbc17412ccddccac8a0dfe44cb4b37956b5fe3e34c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:33 compute-0 podman[363048]: 2026-01-31 08:31:33.832564214 +0000 UTC m=+0.245277687 container init ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:31:33 compute-0 podman[363048]: 2026-01-31 08:31:33.838770215 +0000 UTC m=+0.251483668 container start ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.845 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.845 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.845 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.846 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.846 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.847 247708 INFO nova.compute.manager [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Terminating instance
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.848 247708 DEBUG nova.compute.manager [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:31:33 compute-0 podman[362889]: 2026-01-31 08:31:33.851557859 +0000 UTC m=+1.229129651 container remove eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:31:33 compute-0 systemd[1]: libpod-conmon-eb193b4cee3923506691089f117b2caca5bd1d99363fdb48042e86659e13004c.scope: Deactivated successfully.
Jan 31 08:31:33 compute-0 neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47[363116]: [NOTICE]   (363123) : New worker (363125) forked
Jan 31 08:31:33 compute-0 neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47[363116]: [NOTICE]   (363123) : Loading success.
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.868 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848293.868279, 10312c53-fc1c-4baf-a301-9ac7ba4bb651 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.869 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] VM Started (Lifecycle Event)
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.872 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.876 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.881 247708 INFO nova.virt.libvirt.driver [-] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Instance spawned successfully.
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.881 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:31:33 compute-0 sudo[362744]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.925 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.933 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.937 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.937 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.938 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.938 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.939 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.939 247708 DEBUG nova.virt.libvirt.driver [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:31:33 compute-0 sudo[363134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:33 compute-0 sudo[363134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:33 compute-0 sudo[363134]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.985 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.985 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848293.8693788, 10312c53-fc1c-4baf-a301-9ac7ba4bb651 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:31:33 compute-0 nova_compute[247704]: 2026-01-31 08:31:33.985 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] VM Paused (Lifecycle Event)
Jan 31 08:31:34 compute-0 sudo[363159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:34 compute-0 sudo[363159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:34 compute-0 sudo[363159]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:34 compute-0 kernel: tap85a0f13a-35 (unregistering): left promiscuous mode
Jan 31 08:31:34 compute-0 NetworkManager[49108]: <info>  [1769848294.0250] device (tap85a0f13a-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.024 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.030 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848293.874863, 10312c53-fc1c-4baf-a301-9ac7ba4bb651 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.031 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] VM Resumed (Lifecycle Event)
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.035 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-0 ovn_controller[149457]: 2026-01-31T08:31:34Z|00754|binding|INFO|Releasing lport 85a0f13a-358c-4d17-a146-95c5b877e950 from this chassis (sb_readonly=0)
Jan 31 08:31:34 compute-0 ovn_controller[149457]: 2026-01-31T08:31:34Z|00755|binding|INFO|Setting lport 85a0f13a-358c-4d17-a146-95c5b877e950 down in Southbound
Jan 31 08:31:34 compute-0 ovn_controller[149457]: 2026-01-31T08:31:34Z|00756|binding|INFO|Removing iface tap85a0f13a-35 ovn-installed in OVS
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.038 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.044 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.047 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:57:08 10.100.0.14'], port_security=['fa:16:3e:be:57:08 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'bbc5f09e-71d7-4009-bdf6-06e95b32574c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d5863f6-4aa0-486a-96ed-eb36f7d4a61d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=85a0f13a-358c-4d17-a146-95c5b877e950) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.050 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 85a0f13a-358c-4d17-a146-95c5b877e950 in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad unbound from our chassis
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.052 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.070 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7adaee-08b5-49f8-9d26-faa27fff0293]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.072 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:34 compute-0 sudo[363186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.078 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:31:34 compute-0 sudo[363186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:34 compute-0 sudo[363186]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:34 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a0.scope: Deactivated successfully.
Jan 31 08:31:34 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a0.scope: Consumed 26.666s CPU time.
Jan 31 08:31:34 compute-0 systemd-machined[214448]: Machine qemu-74-instance-000000a0 terminated.
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.092 247708 INFO nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Took 19.01 seconds to spawn the instance on the hypervisor.
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.092 247708 DEBUG nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.105 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf057ef-8f95-4ced-acfa-cf3e53514eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.108 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[5690f649-daf6-481f-b845-2dba258cf93c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.112 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:31:34 compute-0 sudo[363215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:31:34 compute-0 sudo[363215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.141 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[be43f7f7-d6a5-4ed5-9a57-0672eb1d9f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.168 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[54e0226b-3121-40d1-a1e3-f42683111a91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26ad6a8f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:60:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813771, 'reachable_time': 30578, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363242, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.185 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[34416394-f733-4ca5-bcb8-31615c780f4d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813783, 'tstamp': 813783}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363243, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap26ad6a8f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 813786, 'tstamp': 813786}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363243, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.186 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.188 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.192 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.193 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26ad6a8f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.194 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.194 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26ad6a8f-30, col_values=(('external_ids', {'iface-id': '0b9d56f1-a803-44f1-b709-3bfbc71e0f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:34.195 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.236 247708 INFO nova.compute.manager [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Took 24.70 seconds to build instance.
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.267 247708 DEBUG oslo_concurrency.lockutils [None req-de0b7ea1-331d-46f0-af77-f471f172ce34 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.284 247708 INFO nova.virt.libvirt.driver [-] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Instance destroyed successfully.
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.285 247708 DEBUG nova.objects.instance [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'resources' on Instance uuid bbc5f09e-71d7-4009-bdf6-06e95b32574c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.316 247708 DEBUG nova.virt.libvirt.vif [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=160,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMx4MwFIYcLufTgJTIjkaLBrJONZCyY8Yf6X6pbg0U3Us81VO6LfliTNhhDzgfgfMWpf9GXPg5uphWD0tDnxS1Zf2IaRx1ENKXJOF1zVaOJTSt3BjSDZpbJsUpD0/zLEPw==',key_name='tempest-keypair-253684506',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:26:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-a49r1jvl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:26:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=bbc5f09e-71d7-4009-bdf6-06e95b32574c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.317 247708 DEBUG nova.network.os_vif_util [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "85a0f13a-358c-4d17-a146-95c5b877e950", "address": "fa:16:3e:be:57:08", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap85a0f13a-35", "ovs_interfaceid": "85a0f13a-358c-4d17-a146-95c5b877e950", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.317 247708 DEBUG nova.network.os_vif_util [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:57:08,bridge_name='br-int',has_traffic_filtering=True,id=85a0f13a-358c-4d17-a146-95c5b877e950,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85a0f13a-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.318 247708 DEBUG os_vif [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:57:08,bridge_name='br-int',has_traffic_filtering=True,id=85a0f13a-358c-4d17-a146-95c5b877e950,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85a0f13a-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.319 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.320 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85a0f13a-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.322 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.324 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:34 compute-0 nova_compute[247704]: 2026-01-31 08:31:34.326 247708 INFO os_vif [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:57:08,bridge_name='br-int',has_traffic_filtering=True,id=85a0f13a-358c-4d17-a146-95c5b877e950,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap85a0f13a-35')
Jan 31 08:31:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:34.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:34 compute-0 podman[363313]: 2026-01-31 08:31:34.465014361 +0000 UTC m=+0.053633126 container create 85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 08:31:34 compute-0 systemd[1]: Started libpod-conmon-85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a.scope.
Jan 31 08:31:34 compute-0 podman[363313]: 2026-01-31 08:31:34.432504694 +0000 UTC m=+0.021123479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:31:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:34 compute-0 podman[363313]: 2026-01-31 08:31:34.587336931 +0000 UTC m=+0.175955726 container init 85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:31:34 compute-0 podman[363313]: 2026-01-31 08:31:34.597963791 +0000 UTC m=+0.186582556 container start 85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:31:34 compute-0 nice_shirley[363330]: 167 167
Jan 31 08:31:34 compute-0 systemd[1]: libpod-85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a.scope: Deactivated successfully.
Jan 31 08:31:34 compute-0 conmon[363330]: conmon 85068c093159e88a7442 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a.scope/container/memory.events
Jan 31 08:31:34 compute-0 podman[363313]: 2026-01-31 08:31:34.605353873 +0000 UTC m=+0.193972658 container attach 85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shirley, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:31:34 compute-0 podman[363313]: 2026-01-31 08:31:34.606119952 +0000 UTC m=+0.194738737 container died 85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:31:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-034b8bd550b00a755801c08158820e2906e54d2599e4b0056306279b6bf712d3-merged.mount: Deactivated successfully.
Jan 31 08:31:34 compute-0 podman[363313]: 2026-01-31 08:31:34.745674464 +0000 UTC m=+0.334293229 container remove 85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:31:34 compute-0 systemd[1]: libpod-conmon-85068c093159e88a7442765353cad19cb7401eb8d9d1676bc0fe51640789743a.scope: Deactivated successfully.
Jan 31 08:31:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/245584937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:34 compute-0 podman[363356]: 2026-01-31 08:31:34.92658831 +0000 UTC m=+0.065682802 container create 9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_buck, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:31:34 compute-0 systemd[1]: Started libpod-conmon-9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a.scope.
Jan 31 08:31:34 compute-0 podman[363356]: 2026-01-31 08:31:34.889830159 +0000 UTC m=+0.028924651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:31:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bded04e34e583cd7a3bcf42fc61500946bff0579f80933dfe4720d39845b4de9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bded04e34e583cd7a3bcf42fc61500946bff0579f80933dfe4720d39845b4de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bded04e34e583cd7a3bcf42fc61500946bff0579f80933dfe4720d39845b4de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bded04e34e583cd7a3bcf42fc61500946bff0579f80933dfe4720d39845b4de9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:35 compute-0 podman[363356]: 2026-01-31 08:31:35.036385042 +0000 UTC m=+0.175479554 container init 9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_buck, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:31:35 compute-0 podman[363356]: 2026-01-31 08:31:35.042916402 +0000 UTC m=+0.182010914 container start 9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_buck, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:31:35 compute-0 podman[363356]: 2026-01-31 08:31:35.076115227 +0000 UTC m=+0.215209739 container attach 9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:31:35 compute-0 nova_compute[247704]: 2026-01-31 08:31:35.485 247708 INFO nova.virt.libvirt.driver [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Deleting instance files /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c_del
Jan 31 08:31:35 compute-0 nova_compute[247704]: 2026-01-31 08:31:35.487 247708 INFO nova.virt.libvirt.driver [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Deletion of /var/lib/nova/instances/bbc5f09e-71d7-4009-bdf6-06e95b32574c_del complete
Jan 31 08:31:35 compute-0 nova_compute[247704]: 2026-01-31 08:31:35.584 247708 INFO nova.compute.manager [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Took 1.74 seconds to destroy the instance on the hypervisor.
Jan 31 08:31:35 compute-0 nova_compute[247704]: 2026-01-31 08:31:35.585 247708 DEBUG oslo.service.loopingcall [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:31:35 compute-0 nova_compute[247704]: 2026-01-31 08:31:35.585 247708 DEBUG nova.compute.manager [-] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:31:35 compute-0 nova_compute[247704]: 2026-01-31 08:31:35.585 247708 DEBUG nova.network.neutron [-] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:31:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:35.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007511879420249395 of space, bias 1.0, pg target 2.2535638260748185 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004323556439605435 of space, bias 1.0, pg target 1.2884198190024194 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066738495719337 of space, bias 1.0, pg target 1.2078213332286432 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:31:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 08:31:35 compute-0 ceph-mon[74496]: pgmap v2955: 305 pgs: 305 active+clean; 563 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Jan 31 08:31:35 compute-0 gracious_buck[363371]: {
Jan 31 08:31:35 compute-0 gracious_buck[363371]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:31:35 compute-0 gracious_buck[363371]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:31:35 compute-0 gracious_buck[363371]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:31:35 compute-0 gracious_buck[363371]:         "osd_id": 0,
Jan 31 08:31:35 compute-0 gracious_buck[363371]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:31:35 compute-0 gracious_buck[363371]:         "type": "bluestore"
Jan 31 08:31:35 compute-0 gracious_buck[363371]:     }
Jan 31 08:31:35 compute-0 gracious_buck[363371]: }
Jan 31 08:31:35 compute-0 systemd[1]: libpod-9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a.scope: Deactivated successfully.
Jan 31 08:31:35 compute-0 podman[363356]: 2026-01-31 08:31:35.956765671 +0000 UTC m=+1.095860193 container died 9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.012 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bded04e34e583cd7a3bcf42fc61500946bff0579f80933dfe4720d39845b4de9-merged.mount: Deactivated successfully.
Jan 31 08:31:36 compute-0 podman[363356]: 2026-01-31 08:31:36.078754402 +0000 UTC m=+1.217848904 container remove 9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:31:36 compute-0 systemd[1]: libpod-conmon-9e42b2018b1ef6bfb12b34cc07547b7ab3ad35fddb954223334327027639c03a.scope: Deactivated successfully.
Jan 31 08:31:36 compute-0 sudo[363215]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:31:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:31:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 309a8c1a-0acd-4a74-92af-bc697bc56136 does not exist
Jan 31 08:31:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev eeb752b5-1ce9-45d8-9e44-56df0e12672b does not exist
Jan 31 08:31:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b28c8cbd-38dc-4872-ba17-57a5bff7882b does not exist
Jan 31 08:31:36 compute-0 sudo[363408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:36 compute-0 sudo[363408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:36 compute-0 sudo[363408]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:36 compute-0 sudo[363433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:31:36 compute-0 sudo[363433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:36 compute-0 sudo[363433]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 480 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 17 KiB/s wr, 97 op/s
Jan 31 08:31:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:36.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.652 247708 DEBUG nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.652 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.653 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.653 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.653 247708 DEBUG nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] No waiting events found dispatching network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.653 247708 WARNING nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received unexpected event network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 for instance with vm_state active and task_state None.
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.653 247708 DEBUG nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-vif-unplugged-85a0f13a-358c-4d17-a146-95c5b877e950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.653 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.654 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.654 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.654 247708 DEBUG nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] No waiting events found dispatching network-vif-unplugged-85a0f13a-358c-4d17-a146-95c5b877e950 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.654 247708 DEBUG nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-vif-unplugged-85a0f13a-358c-4d17-a146-95c5b877e950 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.654 247708 DEBUG nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.655 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.655 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.655 247708 DEBUG oslo_concurrency.lockutils [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.655 247708 DEBUG nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] No waiting events found dispatching network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:31:36 compute-0 nova_compute[247704]: 2026-01-31 08:31:36.655 247708 WARNING nova.compute.manager [req-3047378c-6388-425d-b65a-95ddabc38ba4 req-2e86c36c-be69-4d27-8bf7-3114803bcfeb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received unexpected event network-vif-plugged-85a0f13a-358c-4d17-a146-95c5b877e950 for instance with vm_state active and task_state deleting.
Jan 31 08:31:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:31:37 compute-0 nova_compute[247704]: 2026-01-31 08:31:37.659 247708 DEBUG nova.network.neutron [-] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:37.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:37 compute-0 nova_compute[247704]: 2026-01-31 08:31:37.810 247708 INFO nova.compute.manager [-] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Took 2.23 seconds to deallocate network for instance.
Jan 31 08:31:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.038 247708 DEBUG nova.compute.manager [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Received event network-vif-deleted-85a0f13a-358c-4d17-a146-95c5b877e950 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.038 247708 INFO nova.compute.manager [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Neutron deleted interface 85a0f13a-358c-4d17-a146-95c5b877e950; detaching it from the instance and deleting it from the info cache
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.039 247708 DEBUG nova.network.neutron [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.151 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.151 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.172 247708 DEBUG nova.compute.manager [req-e979ac27-c6aa-4de0-8d5a-2e456651a19c req-e301d94f-b214-425e-af38-f7be1376bbee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Detach interface failed, port_id=85a0f13a-358c-4d17-a146-95c5b877e950, reason: Instance bbc5f09e-71d7-4009-bdf6-06e95b32574c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:31:38 compute-0 ceph-mon[74496]: pgmap v2956: 305 pgs: 305 active+clean; 480 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 17 KiB/s wr, 97 op/s
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.283 247708 DEBUG oslo_concurrency.processutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 447 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:31:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:38.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:31:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3092006587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.715 247708 DEBUG oslo_concurrency.processutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.723 247708 DEBUG nova.compute.provider_tree [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.761 247708 DEBUG nova.scheduler.client.report [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:31:38 compute-0 nova_compute[247704]: 2026-01-31 08:31:38.909 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:39 compute-0 nova_compute[247704]: 2026-01-31 08:31:39.077 247708 INFO nova.scheduler.client.report [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Deleted allocations for instance bbc5f09e-71d7-4009-bdf6-06e95b32574c
Jan 31 08:31:39 compute-0 nova_compute[247704]: 2026-01-31 08:31:39.322 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:39 compute-0 nova_compute[247704]: 2026-01-31 08:31:39.360 247708 DEBUG oslo_concurrency.lockutils [None req-c5f25836-8ad8-4f67-a285-f8a67c69e6ae 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "bbc5f09e-71d7-4009-bdf6-06e95b32574c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3092006587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:39.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 140 op/s
Jan 31 08:31:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:40.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:40 compute-0 ceph-mon[74496]: pgmap v2957: 305 pgs: 305 active+clean; 447 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:31:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2549394482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:40 compute-0 podman[363482]: 2026-01-31 08:31:40.904142625 +0000 UTC m=+0.067199638 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:31:41 compute-0 nova_compute[247704]: 2026-01-31 08:31:41.014 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:41 compute-0 nova_compute[247704]: 2026-01-31 08:31:41.114 247708 DEBUG nova.compute.manager [req-37a7328a-b1b3-469e-b10f-3b157b21e835 req-ea50a83d-0de5-4441-b1bb-719d31385cfc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-changed-6c7a1046-54de-4879-bb02-d0a2fd364536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:31:41 compute-0 nova_compute[247704]: 2026-01-31 08:31:41.115 247708 DEBUG nova.compute.manager [req-37a7328a-b1b3-469e-b10f-3b157b21e835 req-ea50a83d-0de5-4441-b1bb-719d31385cfc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Refreshing instance network info cache due to event network-changed-6c7a1046-54de-4879-bb02-d0a2fd364536. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:31:41 compute-0 nova_compute[247704]: 2026-01-31 08:31:41.115 247708 DEBUG oslo_concurrency.lockutils [req-37a7328a-b1b3-469e-b10f-3b157b21e835 req-ea50a83d-0de5-4441-b1bb-719d31385cfc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:31:41 compute-0 nova_compute[247704]: 2026-01-31 08:31:41.115 247708 DEBUG oslo_concurrency.lockutils [req-37a7328a-b1b3-469e-b10f-3b157b21e835 req-ea50a83d-0de5-4441-b1bb-719d31385cfc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:31:41 compute-0 nova_compute[247704]: 2026-01-31 08:31:41.115 247708 DEBUG nova.network.neutron [req-37a7328a-b1b3-469e-b10f-3b157b21e835 req-ea50a83d-0de5-4441-b1bb-719d31385cfc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Refreshing network info cache for port 6c7a1046-54de-4879-bb02-d0a2fd364536 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:31:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:41 compute-0 ceph-mon[74496]: pgmap v2958: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 140 op/s
Jan 31 08:31:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 442 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 171 op/s
Jan 31 08:31:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:42.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:42 compute-0 nova_compute[247704]: 2026-01-31 08:31:42.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/632567154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/185363447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:31:43 compute-0 nova_compute[247704]: 2026-01-31 08:31:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:43 compute-0 nova_compute[247704]: 2026-01-31 08:31:43.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:31:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:31:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:43.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:44 compute-0 ceph-mon[74496]: pgmap v2959: 305 pgs: 305 active+clean; 442 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 171 op/s
Jan 31 08:31:44 compute-0 nova_compute[247704]: 2026-01-31 08:31:44.276 247708 DEBUG nova.network.neutron [req-37a7328a-b1b3-469e-b10f-3b157b21e835 req-ea50a83d-0de5-4441-b1bb-719d31385cfc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updated VIF entry in instance network info cache for port 6c7a1046-54de-4879-bb02-d0a2fd364536. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:31:44 compute-0 nova_compute[247704]: 2026-01-31 08:31:44.277 247708 DEBUG nova.network.neutron [req-37a7328a-b1b3-469e-b10f-3b157b21e835 req-ea50a83d-0de5-4441-b1bb-719d31385cfc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updating instance_info_cache with network_info: [{"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:31:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 437 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Jan 31 08:31:44 compute-0 nova_compute[247704]: 2026-01-31 08:31:44.325 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:31:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:44.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:44 compute-0 nova_compute[247704]: 2026-01-31 08:31:44.766 247708 DEBUG oslo_concurrency.lockutils [req-37a7328a-b1b3-469e-b10f-3b157b21e835 req-ea50a83d-0de5-4441-b1bb-719d31385cfc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:31:45 compute-0 ceph-mon[74496]: pgmap v2960: 305 pgs: 305 active+clean; 437 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Jan 31 08:31:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:45.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:46 compute-0 nova_compute[247704]: 2026-01-31 08:31:46.017 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 416 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 259 op/s
Jan 31 08:31:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:31:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:46.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:31:47 compute-0 ceph-mon[74496]: pgmap v2961: 305 pgs: 305 active+clean; 416 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 259 op/s
Jan 31 08:31:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/881127251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:47 compute-0 nova_compute[247704]: 2026-01-31 08:31:47.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:47.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.1 MiB/s wr, 201 op/s
Jan 31 08:31:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:48.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:48 compute-0 ovn_controller[149457]: 2026-01-31T08:31:48Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:61:ce 10.100.0.5
Jan 31 08:31:48 compute-0 ovn_controller[149457]: 2026-01-31T08:31:48Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:61:ce 10.100.0.5
Jan 31 08:31:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1011709156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1011709156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:49 compute-0 nova_compute[247704]: 2026-01-31 08:31:49.282 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848294.2813277, bbc5f09e-71d7-4009-bdf6-06e95b32574c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:31:49 compute-0 nova_compute[247704]: 2026-01-31 08:31:49.283 247708 INFO nova.compute.manager [-] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] VM Stopped (Lifecycle Event)
Jan 31 08:31:49 compute-0 nova_compute[247704]: 2026-01-31 08:31:49.327 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:49 compute-0 nova_compute[247704]: 2026-01-31 08:31:49.431 247708 DEBUG nova.compute.manager [None req-0e675c35-268d-43eb-aa41-c5ce032597f6 - - - - - -] [instance: bbc5f09e-71d7-4009-bdf6-06e95b32574c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:31:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:49.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:49 compute-0 ceph-mon[74496]: pgmap v2962: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.1 MiB/s wr, 201 op/s
Jan 31 08:31:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/602892621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.9 MiB/s wr, 245 op/s
Jan 31 08:31:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:50.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Jan 31 08:31:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Jan 31 08:31:50 compute-0 nova_compute[247704]: 2026-01-31 08:31:50.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:50 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Jan 31 08:31:50 compute-0 nova_compute[247704]: 2026-01-31 08:31:50.740 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:50 compute-0 nova_compute[247704]: 2026-01-31 08:31:50.741 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:50 compute-0 nova_compute[247704]: 2026-01-31 08:31:50.741 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:50 compute-0 nova_compute[247704]: 2026-01-31 08:31:50.742 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:31:50 compute-0 nova_compute[247704]: 2026-01-31 08:31:50.742 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:50 compute-0 podman[363506]: 2026-01-31 08:31:50.944939026 +0000 UTC m=+0.112724556 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.019 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:31:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375828464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.220 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.675 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.676 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.679 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.680 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:31:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:51.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.861 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.862 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3797MB free_disk=20.907024383544922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.862 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:51 compute-0 nova_compute[247704]: 2026-01-31 08:31:51.863 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:51 compute-0 ceph-mon[74496]: pgmap v2963: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.9 MiB/s wr, 245 op/s
Jan 31 08:31:51 compute-0 ceph-mon[74496]: osdmap e377: 3 total, 3 up, 3 in
Jan 31 08:31:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3824279903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2375828464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:52 compute-0 nova_compute[247704]: 2026-01-31 08:31:52.107 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:31:52 compute-0 nova_compute[247704]: 2026-01-31 08:31:52.107 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 10312c53-fc1c-4baf-a301-9ac7ba4bb651 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:31:52 compute-0 nova_compute[247704]: 2026-01-31 08:31:52.107 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:31:52 compute-0 nova_compute[247704]: 2026-01-31 08:31:52.108 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:31:52 compute-0 nova_compute[247704]: 2026-01-31 08:31:52.188 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Jan 31 08:31:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:52.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:31:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401579954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:52 compute-0 nova_compute[247704]: 2026-01-31 08:31:52.685 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:52 compute-0 nova_compute[247704]: 2026-01-31 08:31:52.691 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:31:52 compute-0 nova_compute[247704]: 2026-01-31 08:31:52.785 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:31:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.072 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.073 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:53 compute-0 sudo[363577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:53 compute-0 sudo[363577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:53 compute-0 sudo[363577]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:53 compute-0 sudo[363602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:31:53 compute-0 sudo[363602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:53 compute-0 sudo[363602]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3401579954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:53.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.903 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.903 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.904 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.904 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.904 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.906 247708 INFO nova.compute.manager [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Terminating instance
Jan 31 08:31:53 compute-0 nova_compute[247704]: 2026-01-31 08:31:53.907 247708 DEBUG nova.compute.manager [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:31:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 230 op/s
Jan 31 08:31:54 compute-0 nova_compute[247704]: 2026-01-31 08:31:54.382 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:31:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:31:55 compute-0 nova_compute[247704]: 2026-01-31 08:31:55.073 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:55 compute-0 nova_compute[247704]: 2026-01-31 08:31:55.074 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:31:55 compute-0 nova_compute[247704]: 2026-01-31 08:31:55.074 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:31:55 compute-0 nova_compute[247704]: 2026-01-31 08:31:55.471 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:31:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:31:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:55.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:31:56 compute-0 nova_compute[247704]: 2026-01-31 08:31:56.023 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 183 op/s
Jan 31 08:31:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:56.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:57.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:57 compute-0 nova_compute[247704]: 2026-01-31 08:31:57.802 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:31:57 compute-0 nova_compute[247704]: 2026-01-31 08:31:57.803 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:31:57 compute-0 nova_compute[247704]: 2026-01-31 08:31:57.803 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:31:57 compute-0 nova_compute[247704]: 2026-01-31 08:31:57.803 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 10312c53-fc1c-4baf-a301-9ac7ba4bb651 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:57 compute-0 nova_compute[247704]: 2026-01-31 08:31:57.841 247708 INFO nova.compute.manager [None req-8d7eb7a2-ac91-45d8-a54d-d4cd5b42b004 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Get console output
Jan 31 08:31:57 compute-0 nova_compute[247704]: 2026-01-31 08:31:57.847 315733 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:31:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 166 op/s
Jan 31 08:31:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:58.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:31:58 compute-0 ceph-mon[74496]: pgmap v2965: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 264 op/s
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.628 247708 INFO nova.compute.manager [None req-8027bf94-3169-4633-ade5-921e7883916b 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Pausing
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.629 247708 DEBUG nova.objects.instance [None req-8027bf94-3169-4633-ade5-921e7883916b 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'flavor' on Instance uuid 10312c53-fc1c-4baf-a301-9ac7ba4bb651 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:58 compute-0 kernel: tapeae745f2-80 (unregistering): left promiscuous mode
Jan 31 08:31:58 compute-0 NetworkManager[49108]: <info>  [1769848318.6375] device (tapeae745f2-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.646 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:58 compute-0 ovn_controller[149457]: 2026-01-31T08:31:58Z|00757|binding|INFO|Releasing lport eae745f2-80f8-4d46-9f62-485ae14b62ea from this chassis (sb_readonly=0)
Jan 31 08:31:58 compute-0 ovn_controller[149457]: 2026-01-31T08:31:58Z|00758|binding|INFO|Setting lport eae745f2-80f8-4d46-9f62-485ae14b62ea down in Southbound
Jan 31 08:31:58 compute-0 ovn_controller[149457]: 2026-01-31T08:31:58Z|00759|binding|INFO|Removing iface tapeae745f2-80 ovn-installed in OVS
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.654 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:58 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Jan 31 08:31:58 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009c.scope: Consumed 31.275s CPU time.
Jan 31 08:31:58 compute-0 systemd-machined[214448]: Machine qemu-73-instance-0000009c terminated.
Jan 31 08:31:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:31:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2481018420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:31:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2481018420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.745 247708 INFO nova.virt.libvirt.driver [-] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Instance destroyed successfully.
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.746 247708 DEBUG nova.objects.instance [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lazy-loading 'resources' on Instance uuid 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:31:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:58.878 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:48:03 10.100.0.12'], port_security=['fa:16:3e:b6:48:03 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1beb42da-13c9-4f95-8a5d-e2c3c1affd2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbdbdee526499e90da7e971ede68d3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4abfdb7-aa95-4407-b049-c51322e9a052', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=379aaecc-3dde-4f00-82cf-dc8bd8367d4b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=eae745f2-80f8-4d46-9f62-485ae14b62ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:31:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:58.880 160028 INFO neutron.agent.ovn.metadata.agent [-] Port eae745f2-80f8-4d46-9f62-485ae14b62ea in datapath 26ad6a8f-33d5-432e-83d3-63a9d2f165ad unbound from our chassis
Jan 31 08:31:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:58.881 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26ad6a8f-33d5-432e-83d3-63a9d2f165ad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:31:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:58.883 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dba49dc8-3e0f-4a2f-bf1c-f5a81f06d9ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:58.884 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad namespace which is not needed anymore
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.916 247708 DEBUG nova.virt.libvirt.vif [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:24:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1810503935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1810503935',id=156,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:24:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbdbdee526499e90da7e971ede68d3',ramdisk_id='',reservation_id='r-lmso5hs7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-2017021026',owner_user_name='tempest-AttachVolumeMultiAttachTest-2017021026-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:24:58Z,user_data=None,user_id='85dfa8546d9942648bb4197c8b1947e3',uuid=1beb42da-13c9-4f95-8a5d-e2c3c1affd2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.917 247708 DEBUG nova.network.os_vif_util [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converting VIF {"id": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "address": "fa:16:3e:b6:48:03", "network": {"id": "26ad6a8f-33d5-432e-83d3-63a9d2f165ad", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-259510200-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbdbdee526499e90da7e971ede68d3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeae745f2-80", "ovs_interfaceid": "eae745f2-80f8-4d46-9f62-485ae14b62ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.917 247708 DEBUG nova.network.os_vif_util [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b6:48:03,bridge_name='br-int',has_traffic_filtering=True,id=eae745f2-80f8-4d46-9f62-485ae14b62ea,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae745f2-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.918 247708 DEBUG os_vif [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:48:03,bridge_name='br-int',has_traffic_filtering=True,id=eae745f2-80f8-4d46-9f62-485ae14b62ea,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae745f2-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.920 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.921 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeae745f2-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.951 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.954 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.956 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848318.9557316, 10312c53-fc1c-4baf-a301-9ac7ba4bb651 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.956 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] VM Paused (Lifecycle Event)
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.957 247708 DEBUG nova.compute.manager [None req-8027bf94-3169-4633-ade5-921e7883916b 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:58 compute-0 nova_compute[247704]: 2026-01-31 08:31:58.959 247708 INFO os_vif [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b6:48:03,bridge_name='br-int',has_traffic_filtering=True,id=eae745f2-80f8-4d46-9f62-485ae14b62ea,network=Network(26ad6a8f-33d5-432e-83d3-63a9d2f165ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeae745f2-80')
Jan 31 08:31:59 compute-0 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[352025]: [NOTICE]   (352029) : haproxy version is 2.8.14-c23fe91
Jan 31 08:31:59 compute-0 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[352025]: [NOTICE]   (352029) : path to executable is /usr/sbin/haproxy
Jan 31 08:31:59 compute-0 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[352025]: [WARNING]  (352029) : Exiting Master process...
Jan 31 08:31:59 compute-0 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[352025]: [WARNING]  (352029) : Exiting Master process...
Jan 31 08:31:59 compute-0 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[352025]: [ALERT]    (352029) : Current worker (352031) exited with code 143 (Terminated)
Jan 31 08:31:59 compute-0 neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad[352025]: [WARNING]  (352029) : All workers exited. Exiting... (0)
Jan 31 08:31:59 compute-0 systemd[1]: libpod-9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a.scope: Deactivated successfully.
Jan 31 08:31:59 compute-0 podman[363666]: 2026-01-31 08:31:59.020481554 +0000 UTC m=+0.050970591 container died 9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a-userdata-shm.mount: Deactivated successfully.
Jan 31 08:31:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-577026d6a9564561c364784dcfda717ad7092a046dabbbbf7ab687ecebaa972a-merged.mount: Deactivated successfully.
Jan 31 08:31:59 compute-0 nova_compute[247704]: 2026-01-31 08:31:59.054 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:31:59 compute-0 nova_compute[247704]: 2026-01-31 08:31:59.059 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:31:59 compute-0 podman[363666]: 2026-01-31 08:31:59.065892648 +0000 UTC m=+0.096381695 container cleanup 9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:31:59 compute-0 systemd[1]: libpod-conmon-9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a.scope: Deactivated successfully.
Jan 31 08:31:59 compute-0 podman[363696]: 2026-01-31 08:31:59.126118395 +0000 UTC m=+0.042802121 container remove 9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.131 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[def879b6-5dcf-4257-bd9a-23eeb5856c7b]: (4, ('Sat Jan 31 08:31:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad (9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a)\n9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a\nSat Jan 31 08:31:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad (9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a)\n9945767d7a7ba2910c965ce112f974a808aca9d16387c8ee7aee7c8d8439d05a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.133 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a185d4c3-467e-4c9c-8aba-0efb0922c4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.134 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26ad6a8f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:31:59 compute-0 nova_compute[247704]: 2026-01-31 08:31:59.136 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:59 compute-0 kernel: tap26ad6a8f-30: left promiscuous mode
Jan 31 08:31:59 compute-0 nova_compute[247704]: 2026-01-31 08:31:59.143 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.145 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3d1195-502d-43ad-9266-68c30eb10629]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.157 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9c31fbb8-faae-411a-9a6c-cf85f5677dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.159 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0efbaa33-c4a8-4dd1-9ff0-dd9a5058b8ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.174 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d787e1ea-ad52-4c50-8ba0-ae74c8e66380]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 813764, 'reachable_time': 19912, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363711, 'error': None, 'target': 'ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.178 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26ad6a8f-33d5-432e-83d3-63a9d2f165ad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:31:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:31:59.178 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d164b3-68d9-4b03-8ac6-2996264156c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:31:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d26ad6a8f\x2d33d5\x2d432e\x2d83d3\x2d63a9d2f165ad.mount: Deactivated successfully.
Jan 31 08:31:59 compute-0 ceph-mon[74496]: pgmap v2966: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 230 op/s
Jan 31 08:31:59 compute-0 ceph-mon[74496]: pgmap v2967: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 183 op/s
Jan 31 08:31:59 compute-0 ceph-mon[74496]: pgmap v2968: 305 pgs: 305 active+clean; 437 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 166 op/s
Jan 31 08:31:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2481018420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:31:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2481018420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:31:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/433518010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:31:59 compute-0 nova_compute[247704]: 2026-01-31 08:31:59.623 247708 INFO nova.virt.libvirt.driver [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Deleting instance files /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a_del
Jan 31 08:31:59 compute-0 nova_compute[247704]: 2026-01-31 08:31:59.624 247708 INFO nova.virt.libvirt.driver [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Deletion of /var/lib/nova/instances/1beb42da-13c9-4f95-8a5d-e2c3c1affd2a_del complete
Jan 31 08:31:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:31:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:31:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:59.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.204 247708 INFO nova.compute.manager [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Took 6.30 seconds to destroy the instance on the hypervisor.
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.205 247708 DEBUG oslo.service.loopingcall [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.205 247708 DEBUG nova.compute.manager [-] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.206 247708 DEBUG nova.network.neutron [-] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:32:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 407 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 102 KiB/s wr, 128 op/s
Jan 31 08:32:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:00.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1410069005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1410069005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.997 247708 DEBUG nova.compute.manager [req-d79d8393-8bbe-4f3e-82d5-861e3a7af34b req-4fdd7513-4d02-4263-b004-3e4d7590522e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received event network-vif-unplugged-eae745f2-80f8-4d46-9f62-485ae14b62ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.998 247708 DEBUG oslo_concurrency.lockutils [req-d79d8393-8bbe-4f3e-82d5-861e3a7af34b req-4fdd7513-4d02-4263-b004-3e4d7590522e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.999 247708 DEBUG oslo_concurrency.lockutils [req-d79d8393-8bbe-4f3e-82d5-861e3a7af34b req-4fdd7513-4d02-4263-b004-3e4d7590522e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.999 247708 DEBUG oslo_concurrency.lockutils [req-d79d8393-8bbe-4f3e-82d5-861e3a7af34b req-4fdd7513-4d02-4263-b004-3e4d7590522e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:00 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.999 247708 DEBUG nova.compute.manager [req-d79d8393-8bbe-4f3e-82d5-861e3a7af34b req-4fdd7513-4d02-4263-b004-3e4d7590522e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] No waiting events found dispatching network-vif-unplugged-eae745f2-80f8-4d46-9f62-485ae14b62ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:32:01 compute-0 nova_compute[247704]: 2026-01-31 08:32:00.999 247708 DEBUG nova.compute.manager [req-d79d8393-8bbe-4f3e-82d5-861e3a7af34b req-4fdd7513-4d02-4263-b004-3e4d7590522e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received event network-vif-unplugged-eae745f2-80f8-4d46-9f62-485ae14b62ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:32:01 compute-0 nova_compute[247704]: 2026-01-31 08:32:01.026 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:01 compute-0 ceph-mon[74496]: pgmap v2969: 305 pgs: 305 active+clean; 407 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 102 KiB/s wr, 128 op/s
Jan 31 08:32:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2472068711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:01.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 102 KiB/s wr, 129 op/s
Jan 31 08:32:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:02.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Jan 31 08:32:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.097 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updating instance_info_cache with network_info: [{"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:32:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.322250) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323322362, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1438, "num_deletes": 255, "total_data_size": 2278675, "memory_usage": 2305640, "flush_reason": "Manual Compaction"}
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323347882, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 1478122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63708, "largest_seqno": 65145, "table_properties": {"data_size": 1472581, "index_size": 2744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14755, "raw_average_key_size": 21, "raw_value_size": 1460436, "raw_average_value_size": 2138, "num_data_blocks": 119, "num_entries": 683, "num_filter_entries": 683, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848212, "oldest_key_time": 1769848212, "file_creation_time": 1769848323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 25682 microseconds, and 4380 cpu microseconds.
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.347948) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 1478122 bytes OK
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.347975) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.350330) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.350351) EVENT_LOG_v1 {"time_micros": 1769848323350343, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.350373) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2272326, prev total WAL file size 2272326, number of live WAL files 2.
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.351232) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323539' seq:72057594037927935, type:22 .. '6D6772737461740032353131' seq:0, type:0; will stop at (end)
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(1443KB)], [143(11MB)]
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323351268, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 13973428, "oldest_snapshot_seqno": -1}
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.383 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.383 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.383 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.384 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.384 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.384 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 9386 keys, 10836417 bytes, temperature: kUnknown
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323564819, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10836417, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10777804, "index_size": 34052, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 246105, "raw_average_key_size": 26, "raw_value_size": 10614979, "raw_average_value_size": 1130, "num_data_blocks": 1299, "num_entries": 9386, "num_filter_entries": 9386, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769848323, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.565197) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10836417 bytes
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.573317) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.4 rd, 50.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 11.9 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(16.8) write-amplify(7.3) OK, records in: 9867, records dropped: 481 output_compression: NoCompression
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.573424) EVENT_LOG_v1 {"time_micros": 1769848323573405, "job": 88, "event": "compaction_finished", "compaction_time_micros": 213651, "compaction_time_cpu_micros": 31495, "output_level": 6, "num_output_files": 1, "total_output_size": 10836417, "num_input_records": 9867, "num_output_records": 9386, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323574000, "job": 88, "event": "table_file_deletion", "file_number": 145}
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848323575775, "job": 88, "event": "table_file_deletion", "file_number": 143}
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.351100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.575833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.575837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.575839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.575841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:03.575843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:32:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:03.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.922 247708 DEBUG nova.network.neutron [-] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:32:03 compute-0 nova_compute[247704]: 2026-01-31 08:32:03.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:04 compute-0 ceph-mon[74496]: pgmap v2970: 305 pgs: 305 active+clean; 383 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 102 KiB/s wr, 129 op/s
Jan 31 08:32:04 compute-0 ceph-mon[74496]: osdmap e378: 3 total, 3 up, 3 in
Jan 31 08:32:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 31 KiB/s wr, 100 op/s
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.386 247708 DEBUG nova.compute.manager [req-41dbd3d3-411e-4845-a277-04f83ef2d6c5 req-1abaef68-b018-4fec-9201-725c71c9f3a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received event network-vif-deleted-eae745f2-80f8-4d46-9f62-485ae14b62ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.386 247708 INFO nova.compute.manager [req-41dbd3d3-411e-4845-a277-04f83ef2d6c5 req-1abaef68-b018-4fec-9201-725c71c9f3a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Neutron deleted interface eae745f2-80f8-4d46-9f62-485ae14b62ea; detaching it from the instance and deleting it from the info cache
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.387 247708 DEBUG nova.network.neutron [req-41dbd3d3-411e-4845-a277-04f83ef2d6c5 req-1abaef68-b018-4fec-9201-725c71c9f3a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.389 247708 DEBUG nova.compute.manager [req-933938fc-18b7-4d94-8da5-74e454eb2aa0 req-fb26795f-4f8d-494a-97e5-9ae44a483c0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received event network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.389 247708 DEBUG oslo_concurrency.lockutils [req-933938fc-18b7-4d94-8da5-74e454eb2aa0 req-fb26795f-4f8d-494a-97e5-9ae44a483c0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.389 247708 DEBUG oslo_concurrency.lockutils [req-933938fc-18b7-4d94-8da5-74e454eb2aa0 req-fb26795f-4f8d-494a-97e5-9ae44a483c0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.389 247708 DEBUG oslo_concurrency.lockutils [req-933938fc-18b7-4d94-8da5-74e454eb2aa0 req-fb26795f-4f8d-494a-97e5-9ae44a483c0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.390 247708 DEBUG nova.compute.manager [req-933938fc-18b7-4d94-8da5-74e454eb2aa0 req-fb26795f-4f8d-494a-97e5-9ae44a483c0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] No waiting events found dispatching network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.390 247708 WARNING nova.compute.manager [req-933938fc-18b7-4d94-8da5-74e454eb2aa0 req-fb26795f-4f8d-494a-97e5-9ae44a483c0f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Received unexpected event network-vif-plugged-eae745f2-80f8-4d46-9f62-485ae14b62ea for instance with vm_state active and task_state deleting.
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.404 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.405 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.405 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:32:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:04.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:04.737 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:32:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:04.739 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:32:04 compute-0 nova_compute[247704]: 2026-01-31 08:32:04.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:05 compute-0 nova_compute[247704]: 2026-01-31 08:32:05.174 247708 INFO nova.compute.manager [-] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Took 4.97 seconds to deallocate network for instance.
Jan 31 08:32:05 compute-0 nova_compute[247704]: 2026-01-31 08:32:05.183 247708 DEBUG nova.compute.manager [req-41dbd3d3-411e-4845-a277-04f83ef2d6c5 req-1abaef68-b018-4fec-9201-725c71c9f3a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Detach interface failed, port_id=eae745f2-80f8-4d46-9f62-485ae14b62ea, reason: Instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:32:05 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 08:32:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:05.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:05 compute-0 nova_compute[247704]: 2026-01-31 08:32:05.781 247708 INFO nova.compute.manager [None req-f6f6d013-4df0-4f59-93ce-cc8494f5ffae 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Get console output
Jan 31 08:32:05 compute-0 nova_compute[247704]: 2026-01-31 08:32:05.786 315733 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:32:06 compute-0 nova_compute[247704]: 2026-01-31 08:32:06.028 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:06 compute-0 ceph-mon[74496]: pgmap v2972: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 31 KiB/s wr, 100 op/s
Jan 31 08:32:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 664 KiB/s rd, 31 KiB/s wr, 95 op/s
Jan 31 08:32:06 compute-0 nova_compute[247704]: 2026-01-31 08:32:06.412 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:06.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:06 compute-0 nova_compute[247704]: 2026-01-31 08:32:06.566 247708 INFO nova.compute.manager [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Took 1.39 seconds to detach 1 volumes for instance.
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.068 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.069 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.187 247708 DEBUG oslo_concurrency.processutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.341 247708 INFO nova.compute.manager [None req-57b7ef15-ba4f-41c9-a881-18876f9feb9d 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Unpausing
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.344 247708 DEBUG nova.objects.instance [None req-57b7ef15-ba4f-41c9-a881-18876f9feb9d 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'flavor' on Instance uuid 10312c53-fc1c-4baf-a301-9ac7ba4bb651 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:32:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3019601616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3019601616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.526 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848327.5261776, 10312c53-fc1c-4baf-a301-9ac7ba4bb651 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.527 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] VM Resumed (Lifecycle Event)
Jan 31 08:32:07 compute-0 virtqemud[247621]: argument unsupported: QEMU guest agent is not configured
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.531 247708 DEBUG nova.virt.libvirt.guest [None req-57b7ef15-ba4f-41c9-a881-18876f9feb9d 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.531 247708 DEBUG nova.compute.manager [None req-57b7ef15-ba4f-41c9-a881-18876f9feb9d 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:32:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:32:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475266291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.627 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.633 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.648 247708 DEBUG oslo_concurrency.processutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.655 247708 DEBUG nova.compute.provider_tree [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:32:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:07.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:07 compute-0 nova_compute[247704]: 2026-01-31 08:32:07.835 247708 DEBUG nova.scheduler.client.report [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:32:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:08 compute-0 nova_compute[247704]: 2026-01-31 08:32:08.104 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:08 compute-0 nova_compute[247704]: 2026-01-31 08:32:08.233 247708 INFO nova.scheduler.client.report [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Deleted allocations for instance 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a
Jan 31 08:32:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 670 KiB/s rd, 31 KiB/s wr, 103 op/s
Jan 31 08:32:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:08.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:08 compute-0 ceph-mon[74496]: pgmap v2973: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 664 KiB/s rd, 31 KiB/s wr, 95 op/s
Jan 31 08:32:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3475266291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:08 compute-0 nova_compute[247704]: 2026-01-31 08:32:08.649 247708 DEBUG oslo_concurrency.lockutils [None req-09f09ea0-ad2d-4917-aa16-515c9f6988ad 85dfa8546d9942648bb4197c8b1947e3 48bbdbdee526499e90da7e971ede68d3 - - default default] Lock "1beb42da-13c9-4f95-8a5d-e2c3c1affd2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:08 compute-0 nova_compute[247704]: 2026-01-31 08:32:08.992 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:09.743 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:32:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:09.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:10 compute-0 ceph-mon[74496]: pgmap v2974: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 670 KiB/s rd, 31 KiB/s wr, 103 op/s
Jan 31 08:32:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 40 KiB/s wr, 56 op/s
Jan 31 08:32:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:10.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:11 compute-0 nova_compute[247704]: 2026-01-31 08:32:11.032 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:11.202 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:11.203 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:11.204 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:11 compute-0 nova_compute[247704]: 2026-01-31 08:32:11.387 247708 INFO nova.compute.manager [None req-77ec6966-aab0-4fc2-95b9-55592d877ffa 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Get console output
Jan 31 08:32:11 compute-0 nova_compute[247704]: 2026-01-31 08:32:11.393 315733 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:32:11 compute-0 ceph-mon[74496]: pgmap v2975: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 40 KiB/s wr, 56 op/s
Jan 31 08:32:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:11.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:11 compute-0 podman[363761]: 2026-01-31 08:32:11.877007687 +0000 UTC m=+0.052539230 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:32:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 23 KiB/s wr, 32 op/s
Jan 31 08:32:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:12.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:13 compute-0 sudo[363780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:13 compute-0 sudo[363780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:13 compute-0 sudo[363780]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:13 compute-0 sudo[363805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:13 compute-0 sudo[363805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:13 compute-0 sudo[363805]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:13 compute-0 ceph-mon[74496]: pgmap v2976: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 23 KiB/s wr, 32 op/s
Jan 31 08:32:13 compute-0 nova_compute[247704]: 2026-01-31 08:32:13.744 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848318.7433374, 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:32:13 compute-0 nova_compute[247704]: 2026-01-31 08:32:13.745 247708 INFO nova.compute.manager [-] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] VM Stopped (Lifecycle Event)
Jan 31 08:32:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:13.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:13 compute-0 nova_compute[247704]: 2026-01-31 08:32:13.895 247708 DEBUG nova.compute.manager [None req-c73d134a-c419-4d79-aad9-88ea094eeebb - - - - - -] [instance: 1beb42da-13c9-4f95-8a5d-e2c3c1affd2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:32:13 compute-0 nova_compute[247704]: 2026-01-31 08:32:13.996 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 10 KiB/s wr, 20 op/s
Jan 31 08:32:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:15 compute-0 nova_compute[247704]: 2026-01-31 08:32:15.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:15.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:15 compute-0 ceph-mon[74496]: pgmap v2977: 305 pgs: 305 active+clean; 360 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 10 KiB/s wr, 20 op/s
Jan 31 08:32:16 compute-0 nova_compute[247704]: 2026-01-31 08:32:16.034 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 316 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 22 op/s
Jan 31 08:32:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:16.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:16 compute-0 nova_compute[247704]: 2026-01-31 08:32:16.746 247708 DEBUG nova.compute.manager [req-ea2f870d-1a32-4e94-947c-0f276d4bc2f4 req-767f73fd-9423-4173-8613-1ed10f72f1d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-changed-6c7a1046-54de-4879-bb02-d0a2fd364536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:32:16 compute-0 nova_compute[247704]: 2026-01-31 08:32:16.746 247708 DEBUG nova.compute.manager [req-ea2f870d-1a32-4e94-947c-0f276d4bc2f4 req-767f73fd-9423-4173-8613-1ed10f72f1d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Refreshing instance network info cache due to event network-changed-6c7a1046-54de-4879-bb02-d0a2fd364536. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:32:16 compute-0 nova_compute[247704]: 2026-01-31 08:32:16.747 247708 DEBUG oslo_concurrency.lockutils [req-ea2f870d-1a32-4e94-947c-0f276d4bc2f4 req-767f73fd-9423-4173-8613-1ed10f72f1d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:32:16 compute-0 nova_compute[247704]: 2026-01-31 08:32:16.747 247708 DEBUG oslo_concurrency.lockutils [req-ea2f870d-1a32-4e94-947c-0f276d4bc2f4 req-767f73fd-9423-4173-8613-1ed10f72f1d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:32:16 compute-0 nova_compute[247704]: 2026-01-31 08:32:16.747 247708 DEBUG nova.network.neutron [req-ea2f870d-1a32-4e94-947c-0f276d4bc2f4 req-767f73fd-9423-4173-8613-1ed10f72f1d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Refreshing network info cache for port 6c7a1046-54de-4879-bb02-d0a2fd364536 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:32:17 compute-0 ovn_controller[149457]: 2026-01-31T08:32:17Z|00760|binding|INFO|Releasing lport 8e261acf-5e56-454f-a39a-241d0a42e323 from this chassis (sb_readonly=0)
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.013 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-0 ovn_controller[149457]: 2026-01-31T08:32:17Z|00761|binding|INFO|Releasing lport 8e261acf-5e56-454f-a39a-241d0a42e323 from this chassis (sb_readonly=0)
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.069 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.107 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.108 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.109 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.109 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.110 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.111 247708 INFO nova.compute.manager [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Terminating instance
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.112 247708 DEBUG nova.compute.manager [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:32:17 compute-0 kernel: tap6c7a1046-54 (unregistering): left promiscuous mode
Jan 31 08:32:17 compute-0 NetworkManager[49108]: <info>  [1769848337.3662] device (tap6c7a1046-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.374 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-0 ovn_controller[149457]: 2026-01-31T08:32:17Z|00762|binding|INFO|Releasing lport 6c7a1046-54de-4879-bb02-d0a2fd364536 from this chassis (sb_readonly=0)
Jan 31 08:32:17 compute-0 ovn_controller[149457]: 2026-01-31T08:32:17Z|00763|binding|INFO|Setting lport 6c7a1046-54de-4879-bb02-d0a2fd364536 down in Southbound
Jan 31 08:32:17 compute-0 ovn_controller[149457]: 2026-01-31T08:32:17Z|00764|binding|INFO|Removing iface tap6c7a1046-54 ovn-installed in OVS
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.377 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.387 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:17.397 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:61:ce 10.100.0.5'], port_security=['fa:16:3e:9d:61:ce 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '10312c53-fc1c-4baf-a301-9ac7ba4bb651', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e776ddae-9f2f-4d5d-9275-c52333e26d47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9710f0cf77d84353ae13fa47922b085d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ac67f602-a4ad-4f17-93c3-d263afab8713', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36b1d1ee-9764-48fa-9a47-6af05aff3317, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=6c7a1046-54de-4879-bb02-d0a2fd364536) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:17.398 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 6c7a1046-54de-4879-bb02-d0a2fd364536 in datapath e776ddae-9f2f-4d5d-9275-c52333e26d47 unbound from our chassis
Jan 31 08:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:17.399 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e776ddae-9f2f-4d5d-9275-c52333e26d47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:17.400 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b150d26c-b140-408a-9b8c-7063a65b2d51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:32:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:17.401 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47 namespace which is not needed anymore
Jan 31 08:32:17 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Jan 31 08:32:17 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ad.scope: Consumed 14.866s CPU time.
Jan 31 08:32:17 compute-0 systemd-machined[214448]: Machine qemu-81-instance-000000ad terminated.
Jan 31 08:32:17 compute-0 sshd-session[363833]: Invalid user ubuntu from 45.148.10.240 port 58072
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.545 247708 INFO nova.virt.libvirt.driver [-] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Instance destroyed successfully.
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.546 247708 DEBUG nova.objects.instance [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lazy-loading 'resources' on Instance uuid 10312c53-fc1c-4baf-a301-9ac7ba4bb651 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:32:17 compute-0 neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47[363116]: [NOTICE]   (363123) : haproxy version is 2.8.14-c23fe91
Jan 31 08:32:17 compute-0 neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47[363116]: [NOTICE]   (363123) : path to executable is /usr/sbin/haproxy
Jan 31 08:32:17 compute-0 neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47[363116]: [WARNING]  (363123) : Exiting Master process...
Jan 31 08:32:17 compute-0 neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47[363116]: [WARNING]  (363123) : Exiting Master process...
Jan 31 08:32:17 compute-0 neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47[363116]: [ALERT]    (363123) : Current worker (363125) exited with code 143 (Terminated)
Jan 31 08:32:17 compute-0 neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47[363116]: [WARNING]  (363123) : All workers exited. Exiting... (0)
Jan 31 08:32:17 compute-0 systemd[1]: libpod-ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4.scope: Deactivated successfully.
Jan 31 08:32:17 compute-0 podman[363860]: 2026-01-31 08:32:17.57885037 +0000 UTC m=+0.091841224 container died ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.584 247708 DEBUG nova.virt.libvirt.vif [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:31:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1017045621',display_name='tempest-TestNetworkAdvancedServerOps-server-1017045621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1017045621',id=173,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTYK+BhJjAijQwA1D6+hu7h09ttTa3tDUilYfVAOXLs7Y9bMEIzPKdA0Hy6+tZbDHq/8tz93mNyDFSEj3ocLpKTUhRTkac+Txwb84VPqTZ9LbsC84WpYANyYnuLUqXoDw==',key_name='tempest-TestNetworkAdvancedServerOps-754632833',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:31:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9710f0cf77d84353ae13fa47922b085d',ramdisk_id='',reservation_id='r-kjaq00rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-483180749',owner_user_name='tempest-TestNetworkAdvancedServerOps-483180749-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:32:07Z,user_data=None,user_id='4d0e9d918b4041fabd5ded633b4cf404',uuid=10312c53-fc1c-4baf-a301-9ac7ba4bb651,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:32:17 compute-0 sshd-session[363833]: Connection closed by invalid user ubuntu 45.148.10.240 port 58072 [preauth]
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.585 247708 DEBUG nova.network.os_vif_util [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converting VIF {"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.585 247708 DEBUG nova.network.os_vif_util [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:61:ce,bridge_name='br-int',has_traffic_filtering=True,id=6c7a1046-54de-4879-bb02-d0a2fd364536,network=Network(e776ddae-9f2f-4d5d-9275-c52333e26d47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7a1046-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.586 247708 DEBUG os_vif [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:61:ce,bridge_name='br-int',has_traffic_filtering=True,id=6c7a1046-54de-4879-bb02-d0a2fd364536,network=Network(e776ddae-9f2f-4d5d-9275-c52333e26d47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7a1046-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.587 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.588 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c7a1046-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.589 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.591 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:17 compute-0 nova_compute[247704]: 2026-01-31 08:32:17.594 247708 INFO os_vif [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:61:ce,bridge_name='br-int',has_traffic_filtering=True,id=6c7a1046-54de-4879-bb02-d0a2fd364536,network=Network(e776ddae-9f2f-4d5d-9275-c52333e26d47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c7a1046-54')
Jan 31 08:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4-userdata-shm.mount: Deactivated successfully.
Jan 31 08:32:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd3bf6866cda80262abf80fbc17412ccddccac8a0dfe44cb4b37956b5fe3e34c-merged.mount: Deactivated successfully.
Jan 31 08:32:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:17 compute-0 podman[363860]: 2026-01-31 08:32:17.984562478 +0000 UTC m=+0.497553332 container cleanup ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:32:17 compute-0 systemd[1]: libpod-conmon-ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4.scope: Deactivated successfully.
Jan 31 08:32:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:18 compute-0 ceph-mon[74496]: pgmap v2978: 305 pgs: 305 active+clean; 316 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 22 op/s
Jan 31 08:32:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/216193617' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/216193617' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:18 compute-0 podman[363921]: 2026-01-31 08:32:18.20405354 +0000 UTC m=+0.198853137 container remove ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.208 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b542181c-14af-4df7-b8a9-00c21bbcf155]: (4, ('Sat Jan 31 08:32:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47 (ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4)\nddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4\nSat Jan 31 08:32:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47 (ddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4)\nddfb65ffe9b3519c8db157e51c881ff7de13b2636c103d5d78a26c4a6988f4a4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.210 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4abf0a-eec1-414f-bf49-a80f6318e7bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.212 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape776ddae-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:32:18 compute-0 nova_compute[247704]: 2026-01-31 08:32:18.215 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:18 compute-0 kernel: tape776ddae-90: left promiscuous mode
Jan 31 08:32:18 compute-0 nova_compute[247704]: 2026-01-31 08:32:18.221 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.225 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ed0612-08d3-4410-a42f-a6589400b9ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.247 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[678a6970-c68d-4526-870e-6bbc1d4e2c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.249 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[885823b3-b552-4a3b-a386-31776e0473ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.264 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c49cbcde-3f77-4d30-9932-d6aa3432d63e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853559, 'reachable_time': 34519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363936, 'error': None, 'target': 'ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.267 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e776ddae-9f2f-4d5d-9275-c52333e26d47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:32:18 compute-0 systemd[1]: run-netns-ovnmeta\x2de776ddae\x2d9f2f\x2d4d5d\x2d9275\x2dc52333e26d47.mount: Deactivated successfully.
Jan 31 08:32:18 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:32:18.267 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b43334-de15-4bb6-870a-3eccabe2914e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:32:18 compute-0 nova_compute[247704]: 2026-01-31 08:32:18.280 247708 DEBUG nova.compute.manager [req-61882d00-8d58-4341-b3e3-98e5d0e631cd req-00161f57-409f-4f37-aa79-f816cb07241d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-vif-unplugged-6c7a1046-54de-4879-bb02-d0a2fd364536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:32:18 compute-0 nova_compute[247704]: 2026-01-31 08:32:18.281 247708 DEBUG oslo_concurrency.lockutils [req-61882d00-8d58-4341-b3e3-98e5d0e631cd req-00161f57-409f-4f37-aa79-f816cb07241d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:18 compute-0 nova_compute[247704]: 2026-01-31 08:32:18.282 247708 DEBUG oslo_concurrency.lockutils [req-61882d00-8d58-4341-b3e3-98e5d0e631cd req-00161f57-409f-4f37-aa79-f816cb07241d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:18 compute-0 nova_compute[247704]: 2026-01-31 08:32:18.282 247708 DEBUG oslo_concurrency.lockutils [req-61882d00-8d58-4341-b3e3-98e5d0e631cd req-00161f57-409f-4f37-aa79-f816cb07241d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:18 compute-0 nova_compute[247704]: 2026-01-31 08:32:18.283 247708 DEBUG nova.compute.manager [req-61882d00-8d58-4341-b3e3-98e5d0e631cd req-00161f57-409f-4f37-aa79-f816cb07241d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] No waiting events found dispatching network-vif-unplugged-6c7a1046-54de-4879-bb02-d0a2fd364536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:32:18 compute-0 nova_compute[247704]: 2026-01-31 08:32:18.283 247708 DEBUG nova.compute.manager [req-61882d00-8d58-4341-b3e3-98e5d0e631cd req-00161f57-409f-4f37-aa79-f816cb07241d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-vif-unplugged-6c7a1046-54de-4879-bb02-d0a2fd364536 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:32:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 283 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 27 op/s
Jan 31 08:32:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:18.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:32:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:19.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.119 247708 INFO nova.virt.libvirt.driver [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Deleting instance files /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651_del
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.120 247708 INFO nova.virt.libvirt.driver [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Deletion of /var/lib/nova/instances/10312c53-fc1c-4baf-a301-9ac7ba4bb651_del complete
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:32:20
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.meta', '.mgr', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta']
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:32:20 compute-0 ceph-mon[74496]: pgmap v2979: 305 pgs: 305 active+clean; 283 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 27 op/s
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.325 247708 INFO nova.compute.manager [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Took 3.21 seconds to destroy the instance on the hypervisor.
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.326 247708 DEBUG oslo.service.loopingcall [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.326 247708 DEBUG nova.compute.manager [-] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.326 247708 DEBUG nova.network.neutron [-] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 241 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 27 op/s
Jan 31 08:32:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:20.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.587 247708 DEBUG nova.compute.manager [req-18a040d1-40ba-44bd-af80-a54fe244ad9a req-58747b76-58f3-40bc-bffe-0556fde8b201 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.588 247708 DEBUG oslo_concurrency.lockutils [req-18a040d1-40ba-44bd-af80-a54fe244ad9a req-58747b76-58f3-40bc-bffe-0556fde8b201 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.589 247708 DEBUG oslo_concurrency.lockutils [req-18a040d1-40ba-44bd-af80-a54fe244ad9a req-58747b76-58f3-40bc-bffe-0556fde8b201 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.589 247708 DEBUG oslo_concurrency.lockutils [req-18a040d1-40ba-44bd-af80-a54fe244ad9a req-58747b76-58f3-40bc-bffe-0556fde8b201 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.589 247708 DEBUG nova.compute.manager [req-18a040d1-40ba-44bd-af80-a54fe244ad9a req-58747b76-58f3-40bc-bffe-0556fde8b201 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] No waiting events found dispatching network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:32:20 compute-0 nova_compute[247704]: 2026-01-31 08:32:20.590 247708 WARNING nova.compute.manager [req-18a040d1-40ba-44bd-af80-a54fe244ad9a req-58747b76-58f3-40bc-bffe-0556fde8b201 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received unexpected event network-vif-plugged-6c7a1046-54de-4879-bb02-d0a2fd364536 for instance with vm_state active and task_state deleting.
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:32:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:32:21 compute-0 nova_compute[247704]: 2026-01-31 08:32:21.037 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:21 compute-0 ceph-mon[74496]: pgmap v2980: 305 pgs: 305 active+clean; 241 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 27 op/s
Jan 31 08:32:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:21.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:21 compute-0 podman[363940]: 2026-01-31 08:32:21.91557683 +0000 UTC m=+0.089834484 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:32:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 219 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 5.4 KiB/s wr, 37 op/s
Jan 31 08:32:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:22 compute-0 nova_compute[247704]: 2026-01-31 08:32:22.590 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:23 compute-0 nova_compute[247704]: 2026-01-31 08:32:23.065 247708 DEBUG nova.network.neutron [req-ea2f870d-1a32-4e94-947c-0f276d4bc2f4 req-767f73fd-9423-4173-8613-1ed10f72f1d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updated VIF entry in instance network info cache for port 6c7a1046-54de-4879-bb02-d0a2fd364536. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:32:23 compute-0 nova_compute[247704]: 2026-01-31 08:32:23.066 247708 DEBUG nova.network.neutron [req-ea2f870d-1a32-4e94-947c-0f276d4bc2f4 req-767f73fd-9423-4173-8613-1ed10f72f1d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updating instance_info_cache with network_info: [{"id": "6c7a1046-54de-4879-bb02-d0a2fd364536", "address": "fa:16:3e:9d:61:ce", "network": {"id": "e776ddae-9f2f-4d5d-9275-c52333e26d47", "bridge": "br-int", "label": "tempest-network-smoke--1844769573", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9710f0cf77d84353ae13fa47922b085d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c7a1046-54", "ovs_interfaceid": "6c7a1046-54de-4879-bb02-d0a2fd364536", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:32:23 compute-0 nova_compute[247704]: 2026-01-31 08:32:23.413 247708 DEBUG oslo_concurrency.lockutils [req-ea2f870d-1a32-4e94-947c-0f276d4bc2f4 req-767f73fd-9423-4173-8613-1ed10f72f1d7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-10312c53-fc1c-4baf-a301-9ac7ba4bb651" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:32:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:23.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:23 compute-0 ceph-mon[74496]: pgmap v2981: 305 pgs: 305 active+clean; 219 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 5.4 KiB/s wr, 37 op/s
Jan 31 08:32:23 compute-0 nova_compute[247704]: 2026-01-31 08:32:23.935 247708 DEBUG nova.network.neutron [-] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:32:24 compute-0 nova_compute[247704]: 2026-01-31 08:32:24.196 247708 DEBUG nova.compute.manager [req-5b539b9c-1e5c-4660-ac36-85570fc19a76 req-6b22378b-de84-4189-ae3b-f9c8046d4ea4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Received event network-vif-deleted-6c7a1046-54de-4879-bb02-d0a2fd364536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:32:24 compute-0 nova_compute[247704]: 2026-01-31 08:32:24.197 247708 INFO nova.compute.manager [req-5b539b9c-1e5c-4660-ac36-85570fc19a76 req-6b22378b-de84-4189-ae3b-f9c8046d4ea4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Neutron deleted interface 6c7a1046-54de-4879-bb02-d0a2fd364536; detaching it from the instance and deleting it from the info cache
Jan 31 08:32:24 compute-0 nova_compute[247704]: 2026-01-31 08:32:24.197 247708 DEBUG nova.network.neutron [req-5b539b9c-1e5c-4660-ac36-85570fc19a76 req-6b22378b-de84-4189-ae3b-f9c8046d4ea4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:32:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 5.4 KiB/s wr, 41 op/s
Jan 31 08:32:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:24.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:24 compute-0 nova_compute[247704]: 2026-01-31 08:32:24.704 247708 INFO nova.compute.manager [-] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Took 4.38 seconds to deallocate network for instance.
Jan 31 08:32:24 compute-0 nova_compute[247704]: 2026-01-31 08:32:24.731 247708 DEBUG nova.compute.manager [req-5b539b9c-1e5c-4660-ac36-85570fc19a76 req-6b22378b-de84-4189-ae3b-f9c8046d4ea4 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Detach interface failed, port_id=6c7a1046-54de-4879-bb02-d0a2fd364536, reason: Instance 10312c53-fc1c-4baf-a301-9ac7ba4bb651 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:32:25 compute-0 nova_compute[247704]: 2026-01-31 08:32:25.322 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:25 compute-0 nova_compute[247704]: 2026-01-31 08:32:25.323 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:25 compute-0 nova_compute[247704]: 2026-01-31 08:32:25.510 247708 DEBUG oslo_concurrency.processutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:25.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:32:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982090795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:25 compute-0 nova_compute[247704]: 2026-01-31 08:32:25.952 247708 DEBUG oslo_concurrency.processutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:25 compute-0 nova_compute[247704]: 2026-01-31 08:32:25.958 247708 DEBUG nova.compute.provider_tree [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:32:26 compute-0 ceph-mon[74496]: pgmap v2982: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 5.4 KiB/s wr, 41 op/s
Jan 31 08:32:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1982090795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:26 compute-0 nova_compute[247704]: 2026-01-31 08:32:26.040 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:26 compute-0 nova_compute[247704]: 2026-01-31 08:32:26.307 247708 DEBUG nova.scheduler.client.report [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:32:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 5.4 KiB/s wr, 34 op/s
Jan 31 08:32:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:26.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:26 compute-0 nova_compute[247704]: 2026-01-31 08:32:26.495 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:26 compute-0 nova_compute[247704]: 2026-01-31 08:32:26.748 247708 INFO nova.scheduler.client.report [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Deleted allocations for instance 10312c53-fc1c-4baf-a301-9ac7ba4bb651
Jan 31 08:32:27 compute-0 nova_compute[247704]: 2026-01-31 08:32:27.247 247708 DEBUG oslo_concurrency.lockutils [None req-fb2cdca1-60c1-49d8-b64a-fd78aeabd0f3 4d0e9d918b4041fabd5ded633b4cf404 9710f0cf77d84353ae13fa47922b085d - - default default] Lock "10312c53-fc1c-4baf-a301-9ac7ba4bb651" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:27 compute-0 nova_compute[247704]: 2026-01-31 08:32:27.592 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:27.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:28 compute-0 ceph-mon[74496]: pgmap v2983: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 5.4 KiB/s wr, 34 op/s
Jan 31 08:32:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 31 08:32:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:28.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:29 compute-0 ceph-mon[74496]: pgmap v2984: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 31 08:32:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:29.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:32:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:30.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:31 compute-0 nova_compute[247704]: 2026-01-31 08:32:31.073 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:31 compute-0 ceph-mon[74496]: pgmap v2985: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:32:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:31.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 19 op/s
Jan 31 08:32:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:32.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:32 compute-0 nova_compute[247704]: 2026-01-31 08:32:32.544 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848337.5419033, 10312c53-fc1c-4baf-a301-9ac7ba4bb651 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:32:32 compute-0 nova_compute[247704]: 2026-01-31 08:32:32.545 247708 INFO nova.compute.manager [-] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] VM Stopped (Lifecycle Event)
Jan 31 08:32:32 compute-0 nova_compute[247704]: 2026-01-31 08:32:32.594 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:32 compute-0 nova_compute[247704]: 2026-01-31 08:32:32.677 247708 DEBUG nova.compute.manager [None req-c5946dd1-6ec7-463b-a838-dbdcd96ecffe - - - - - -] [instance: 10312c53-fc1c-4baf-a301-9ac7ba4bb651] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:32:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:33 compute-0 sudo[363996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:33 compute-0 sudo[363996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:33 compute-0 sudo[363996]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:33 compute-0 sudo[364021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:33 compute-0 sudo[364021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:33 compute-0 sudo[364021]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:33.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:34 compute-0 ceph-mon[74496]: pgmap v2986: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 511 B/s wr, 19 op/s
Jan 31 08:32:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 08:32:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:34.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021715929182951794 of space, bias 1.0, pg target 0.6514778754885538 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.6488060964544947 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:32:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:32:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:35.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:36 compute-0 nova_compute[247704]: 2026-01-31 08:32:36.075 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:36 compute-0 ceph-mon[74496]: pgmap v2987: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 08:32:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 31 08:32:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:36.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:36 compute-0 sudo[364048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:36 compute-0 sudo[364048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:36 compute-0 sudo[364048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:36 compute-0 sudo[364073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:36 compute-0 sudo[364073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:36 compute-0 sudo[364073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:36 compute-0 sudo[364098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:36 compute-0 sudo[364098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:36 compute-0 sudo[364098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:37 compute-0 sudo[364123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:32:37 compute-0 sudo[364123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 08:32:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:32:37 compute-0 sudo[364123]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:32:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:32:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:32:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:32:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:32:37 compute-0 ceph-mon[74496]: pgmap v2988: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Jan 31 08:32:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fc057cfe-7e4c-422c-bb43-250fcad752aa does not exist
Jan 31 08:32:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d7bfee38-50c1-4068-a559-f6260ad5e7fd does not exist
Jan 31 08:32:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cc747b98-60fc-40bf-bce8-14074f195307 does not exist
Jan 31 08:32:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:32:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:32:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:32:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:32:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:32:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:32:37 compute-0 nova_compute[247704]: 2026-01-31 08:32:37.596 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:37 compute-0 sudo[364181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:37 compute-0 sudo[364181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:37 compute-0 sudo[364181]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:37 compute-0 sudo[364206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:37 compute-0 sudo[364206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:37 compute-0 sudo[364206]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:37 compute-0 sudo[364231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:37 compute-0 sudo[364231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:37 compute-0 sudo[364231]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:37 compute-0 sudo[364256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:32:37 compute-0 sudo[364256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.017274) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358017357, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 532, "num_deletes": 255, "total_data_size": 571927, "memory_usage": 582024, "flush_reason": "Manual Compaction"}
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358023962, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 565752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65146, "largest_seqno": 65677, "table_properties": {"data_size": 562822, "index_size": 901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6727, "raw_average_key_size": 18, "raw_value_size": 556979, "raw_average_value_size": 1525, "num_data_blocks": 40, "num_entries": 365, "num_filter_entries": 365, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848324, "oldest_key_time": 1769848324, "file_creation_time": 1769848358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 6745 microseconds, and 2122 cpu microseconds.
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.024028) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 565752 bytes OK
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.024059) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.026166) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.026216) EVENT_LOG_v1 {"time_micros": 1769848358026205, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.026246) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 568944, prev total WAL file size 568944, number of live WAL files 2.
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.026704) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(552KB)], [146(10MB)]
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358026746, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 11402169, "oldest_snapshot_seqno": -1}
Jan 31 08:32:38 compute-0 podman[364322]: 2026-01-31 08:32:38.125131864 +0000 UTC m=+0.024693726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 9231 keys, 11267601 bytes, temperature: kUnknown
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358228626, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11267601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11209138, "index_size": 34286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 243819, "raw_average_key_size": 26, "raw_value_size": 11048171, "raw_average_value_size": 1196, "num_data_blocks": 1306, "num_entries": 9231, "num_filter_entries": 9231, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769848358, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 1.2 KiB/s wr, 8 op/s
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.228978) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11267601 bytes
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.386835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.4 rd, 55.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(40.1) write-amplify(19.9) OK, records in: 9751, records dropped: 520 output_compression: NoCompression
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.386888) EVENT_LOG_v1 {"time_micros": 1769848358386868, "job": 90, "event": "compaction_finished", "compaction_time_micros": 202047, "compaction_time_cpu_micros": 23563, "output_level": 6, "num_output_files": 1, "total_output_size": 11267601, "num_input_records": 9751, "num_output_records": 9231, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358387494, "job": 90, "event": "table_file_deletion", "file_number": 148}
Jan 31 08:32:38 compute-0 podman[364322]: 2026-01-31 08:32:38.388703857 +0000 UTC m=+0.288265699 container create 94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848358389678, "job": 90, "event": "table_file_deletion", "file_number": 146}
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.026655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.389873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.389883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.389887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.389890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:32:38.389894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:38 compute-0 systemd[1]: Started libpod-conmon-94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7.scope.
Jan 31 08:32:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:38.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:38 compute-0 podman[364322]: 2026-01-31 08:32:38.52177131 +0000 UTC m=+0.421333172 container init 94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:32:38 compute-0 podman[364322]: 2026-01-31 08:32:38.532031401 +0000 UTC m=+0.431593243 container start 94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:32:38 compute-0 podman[364322]: 2026-01-31 08:32:38.538758297 +0000 UTC m=+0.438320139 container attach 94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:32:38 compute-0 cranky_heisenberg[364338]: 167 167
Jan 31 08:32:38 compute-0 systemd[1]: libpod-94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7.scope: Deactivated successfully.
Jan 31 08:32:38 compute-0 podman[364322]: 2026-01-31 08:32:38.542628791 +0000 UTC m=+0.442190633 container died 94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:32:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:32:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:32:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:32:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:32:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:32:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4a17ff6915df0671e39d603bd4637fbddddff31d4f70c79a2071aefa76a9fb4-merged.mount: Deactivated successfully.
Jan 31 08:32:38 compute-0 podman[364322]: 2026-01-31 08:32:38.597764053 +0000 UTC m=+0.497325905 container remove 94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:32:38 compute-0 systemd[1]: libpod-conmon-94264761cdf0f545c2d9e9adcffe6627870e745ea667939f3a37b9f5ab0a62e7.scope: Deactivated successfully.
Jan 31 08:32:38 compute-0 podman[364361]: 2026-01-31 08:32:38.756970017 +0000 UTC m=+0.055593284 container create bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:32:38 compute-0 systemd[1]: Started libpod-conmon-bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283.scope.
Jan 31 08:32:38 compute-0 podman[364361]: 2026-01-31 08:32:38.735982943 +0000 UTC m=+0.034606200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:32:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9815ed254c338cf7551fbc46a706e6e55dd5ed90788828fca851b1fb0594d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9815ed254c338cf7551fbc46a706e6e55dd5ed90788828fca851b1fb0594d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9815ed254c338cf7551fbc46a706e6e55dd5ed90788828fca851b1fb0594d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9815ed254c338cf7551fbc46a706e6e55dd5ed90788828fca851b1fb0594d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9815ed254c338cf7551fbc46a706e6e55dd5ed90788828fca851b1fb0594d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:38 compute-0 podman[364361]: 2026-01-31 08:32:38.866579685 +0000 UTC m=+0.165202952 container init bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:32:38 compute-0 podman[364361]: 2026-01-31 08:32:38.876406216 +0000 UTC m=+0.175029443 container start bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_babbage, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:38 compute-0 podman[364361]: 2026-01-31 08:32:38.881321626 +0000 UTC m=+0.179944863 container attach bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:32:39 compute-0 priceless_babbage[364377]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:32:39 compute-0 priceless_babbage[364377]: --> relative data size: 1.0
Jan 31 08:32:39 compute-0 priceless_babbage[364377]: --> All data devices are unavailable
Jan 31 08:32:39 compute-0 systemd[1]: libpod-bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283.scope: Deactivated successfully.
Jan 31 08:32:39 compute-0 podman[364361]: 2026-01-31 08:32:39.741808076 +0000 UTC m=+1.040431383 container died bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d9815ed254c338cf7551fbc46a706e6e55dd5ed90788828fca851b1fb0594d6-merged.mount: Deactivated successfully.
Jan 31 08:32:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:39 compute-0 podman[364361]: 2026-01-31 08:32:39.812787457 +0000 UTC m=+1.111410724 container remove bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:32:39 compute-0 systemd[1]: libpod-conmon-bd84f192de77ec060d084ccc4535f75805233bc59492976f2a968b8585ba1283.scope: Deactivated successfully.
Jan 31 08:32:39 compute-0 nova_compute[247704]: 2026-01-31 08:32:39.821 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:39 compute-0 sudo[364256]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:39 compute-0 sudo[364408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:39 compute-0 sudo[364408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:39 compute-0 sudo[364408]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:39 compute-0 sudo[364433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:39 compute-0 sudo[364433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:39 compute-0 sudo[364433]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:40 compute-0 sudo[364458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:40 compute-0 sudo[364458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:40 compute-0 sudo[364458]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:40 compute-0 ceph-mon[74496]: pgmap v2989: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 1.2 KiB/s wr, 8 op/s
Jan 31 08:32:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2808292028' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2808292028' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:40 compute-0 sudo[364483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:32:40 compute-0 sudo[364483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 18 op/s
Jan 31 08:32:40 compute-0 podman[364547]: 2026-01-31 08:32:40.436023279 +0000 UTC m=+0.042188465 container create 542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hawking, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:32:40 compute-0 systemd[1]: Started libpod-conmon-542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a.scope.
Jan 31 08:32:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:40.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:40 compute-0 podman[364547]: 2026-01-31 08:32:40.415056515 +0000 UTC m=+0.021221701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:32:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:40 compute-0 podman[364547]: 2026-01-31 08:32:40.528949167 +0000 UTC m=+0.135114343 container init 542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:32:40 compute-0 podman[364547]: 2026-01-31 08:32:40.537363424 +0000 UTC m=+0.143528560 container start 542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:40 compute-0 podman[364547]: 2026-01-31 08:32:40.541934656 +0000 UTC m=+0.148099802 container attach 542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hawking, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:32:40 compute-0 clever_hawking[364563]: 167 167
Jan 31 08:32:40 compute-0 podman[364547]: 2026-01-31 08:32:40.543822333 +0000 UTC m=+0.149987479 container died 542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hawking, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:32:40 compute-0 systemd[1]: libpod-542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a.scope: Deactivated successfully.
Jan 31 08:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf972b137dadeb5521a199cee89b682a112cdff57a6c73037acb7932976b0076-merged.mount: Deactivated successfully.
Jan 31 08:32:40 compute-0 podman[364547]: 2026-01-31 08:32:40.582896561 +0000 UTC m=+0.189061727 container remove 542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hawking, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:32:40 compute-0 systemd[1]: libpod-conmon-542ae0302fbcc4802f369c11c3317a2a10e7a42a947b59abe9c7547d16a6444a.scope: Deactivated successfully.
Jan 31 08:32:40 compute-0 podman[364588]: 2026-01-31 08:32:40.738930756 +0000 UTC m=+0.040271438 container create d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:32:40 compute-0 systemd[1]: Started libpod-conmon-d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b.scope.
Jan 31 08:32:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5822242ad06407a9466f9e7c28f6cf09d02ab77ffcee4d318a6f2e3295a6099a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5822242ad06407a9466f9e7c28f6cf09d02ab77ffcee4d318a6f2e3295a6099a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5822242ad06407a9466f9e7c28f6cf09d02ab77ffcee4d318a6f2e3295a6099a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5822242ad06407a9466f9e7c28f6cf09d02ab77ffcee4d318a6f2e3295a6099a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:40 compute-0 podman[364588]: 2026-01-31 08:32:40.816040877 +0000 UTC m=+0.117381609 container init d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:40 compute-0 podman[364588]: 2026-01-31 08:32:40.724325399 +0000 UTC m=+0.025666101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:32:40 compute-0 podman[364588]: 2026-01-31 08:32:40.822377573 +0000 UTC m=+0.123718265 container start d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:32:40 compute-0 podman[364588]: 2026-01-31 08:32:40.825561011 +0000 UTC m=+0.126901733 container attach d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:32:41 compute-0 nova_compute[247704]: 2026-01-31 08:32:41.078 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:41 compute-0 relaxed_cori[364604]: {
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:     "0": [
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:         {
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "devices": [
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "/dev/loop3"
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             ],
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "lv_name": "ceph_lv0",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "lv_size": "7511998464",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "name": "ceph_lv0",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "tags": {
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.cluster_name": "ceph",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.crush_device_class": "",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.encrypted": "0",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.osd_id": "0",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.type": "block",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:                 "ceph.vdo": "0"
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             },
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "type": "block",
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:             "vg_name": "ceph_vg0"
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:         }
Jan 31 08:32:41 compute-0 relaxed_cori[364604]:     ]
Jan 31 08:32:41 compute-0 relaxed_cori[364604]: }
Jan 31 08:32:41 compute-0 systemd[1]: libpod-d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b.scope: Deactivated successfully.
Jan 31 08:32:41 compute-0 podman[364588]: 2026-01-31 08:32:41.608766096 +0000 UTC m=+0.910106808 container died d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5822242ad06407a9466f9e7c28f6cf09d02ab77ffcee4d318a6f2e3295a6099a-merged.mount: Deactivated successfully.
Jan 31 08:32:41 compute-0 podman[364588]: 2026-01-31 08:32:41.662723069 +0000 UTC m=+0.964063761 container remove d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:32:41 compute-0 systemd[1]: libpod-conmon-d9083b5ed4653afbdad258fb06be50fbd02bd633bc50cf03280974e6f1ca249b.scope: Deactivated successfully.
Jan 31 08:32:41 compute-0 sudo[364483]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:41 compute-0 sudo[364628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:41 compute-0 sudo[364628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:41 compute-0 sudo[364628]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:41.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:41 compute-0 sudo[364653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:41 compute-0 sudo[364653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:41 compute-0 sudo[364653]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:41 compute-0 sudo[364678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:41 compute-0 sudo[364678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:41 compute-0 sudo[364678]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:41 compute-0 sudo[364709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:32:41 compute-0 sudo[364709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:41 compute-0 podman[364702]: 2026-01-31 08:32:41.993011998 +0000 UTC m=+0.071427402 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:32:42 compute-0 ceph-mon[74496]: pgmap v2990: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 18 op/s
Jan 31 08:32:42 compute-0 nova_compute[247704]: 2026-01-31 08:32:42.280 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Jan 31 08:32:42 compute-0 podman[364787]: 2026-01-31 08:32:42.343703685 +0000 UTC m=+0.082887923 container create f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cohen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:42 compute-0 podman[364787]: 2026-01-31 08:32:42.281661355 +0000 UTC m=+0.020845613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:32:42 compute-0 systemd[1]: Started libpod-conmon-f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d.scope.
Jan 31 08:32:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:42 compute-0 podman[364787]: 2026-01-31 08:32:42.441241257 +0000 UTC m=+0.180425505 container init f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:32:42 compute-0 podman[364787]: 2026-01-31 08:32:42.449732196 +0000 UTC m=+0.188916424 container start f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cohen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 08:32:42 compute-0 podman[364787]: 2026-01-31 08:32:42.45521178 +0000 UTC m=+0.194396028 container attach f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cohen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:32:42 compute-0 boring_cohen[364804]: 167 167
Jan 31 08:32:42 compute-0 systemd[1]: libpod-f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d.scope: Deactivated successfully.
Jan 31 08:32:42 compute-0 podman[364787]: 2026-01-31 08:32:42.459191228 +0000 UTC m=+0.198375516 container died f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 08:32:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-974c4f465f885a6cdc01bf9354e70ce861d0b15c7b5c0907bd35f36fb786e71f-merged.mount: Deactivated successfully.
Jan 31 08:32:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:42.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:42 compute-0 podman[364787]: 2026-01-31 08:32:42.504286543 +0000 UTC m=+0.243470771 container remove f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_cohen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:32:42 compute-0 systemd[1]: libpod-conmon-f408e567c68398c6c3c04a990854ce4531f1a66c2e1b201d2f790fc641640d1d.scope: Deactivated successfully.
Jan 31 08:32:42 compute-0 nova_compute[247704]: 2026-01-31 08:32:42.598 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:42 compute-0 podman[364827]: 2026-01-31 08:32:42.643356513 +0000 UTC m=+0.043907597 container create 0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:42 compute-0 systemd[1]: Started libpod-conmon-0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9.scope.
Jan 31 08:32:42 compute-0 podman[364827]: 2026-01-31 08:32:42.623111577 +0000 UTC m=+0.023662681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:32:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aab726682bdba52d1e68a390e71ea5a365940e6916777916961f9d98117fa73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aab726682bdba52d1e68a390e71ea5a365940e6916777916961f9d98117fa73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aab726682bdba52d1e68a390e71ea5a365940e6916777916961f9d98117fa73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aab726682bdba52d1e68a390e71ea5a365940e6916777916961f9d98117fa73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:42 compute-0 podman[364827]: 2026-01-31 08:32:42.741732726 +0000 UTC m=+0.142283820 container init 0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:32:42 compute-0 podman[364827]: 2026-01-31 08:32:42.750156573 +0000 UTC m=+0.150707677 container start 0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:32:42 compute-0 podman[364827]: 2026-01-31 08:32:42.754480199 +0000 UTC m=+0.155031363 container attach 0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:32:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:43 compute-0 nova_compute[247704]: 2026-01-31 08:32:43.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:43 compute-0 sweet_ellis[364843]: {
Jan 31 08:32:43 compute-0 sweet_ellis[364843]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:32:43 compute-0 sweet_ellis[364843]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:32:43 compute-0 sweet_ellis[364843]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:32:43 compute-0 sweet_ellis[364843]:         "osd_id": 0,
Jan 31 08:32:43 compute-0 sweet_ellis[364843]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:32:43 compute-0 sweet_ellis[364843]:         "type": "bluestore"
Jan 31 08:32:43 compute-0 sweet_ellis[364843]:     }
Jan 31 08:32:43 compute-0 sweet_ellis[364843]: }
Jan 31 08:32:43 compute-0 systemd[1]: libpod-0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9.scope: Deactivated successfully.
Jan 31 08:32:43 compute-0 podman[364827]: 2026-01-31 08:32:43.668904431 +0000 UTC m=+1.069455535 container died 0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aab726682bdba52d1e68a390e71ea5a365940e6916777916961f9d98117fa73-merged.mount: Deactivated successfully.
Jan 31 08:32:43 compute-0 podman[364827]: 2026-01-31 08:32:43.737527623 +0000 UTC m=+1.138078697 container remove 0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:32:43 compute-0 systemd[1]: libpod-conmon-0d91f8a7fe59c7420883483ede5e221f61dc65314858a9038414b5ecdfb194f9.scope: Deactivated successfully.
Jan 31 08:32:43 compute-0 sudo[364709]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:32:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:32:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:43.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:43 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0571c184-78e4-4d3e-be65-4ef823f395e3 does not exist
Jan 31 08:32:43 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 250de31e-f654-4d6c-a625-16b578c0de65 does not exist
Jan 31 08:32:43 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 73afe3d8-e0a0-46ca-bfac-dcd341869649 does not exist
Jan 31 08:32:43 compute-0 sudo[364876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:43 compute-0 sudo[364876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:43 compute-0 sudo[364876]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:43 compute-0 sudo[364901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:32:43 compute-0 sudo[364901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:43 compute-0 sudo[364901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:44 compute-0 ceph-mon[74496]: pgmap v2991: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Jan 31 08:32:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:32:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 31 08:32:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:32:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:44.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:32:45 compute-0 nova_compute[247704]: 2026-01-31 08:32:45.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:45 compute-0 nova_compute[247704]: 2026-01-31 08:32:45.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:32:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:32:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:45.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:32:46 compute-0 nova_compute[247704]: 2026-01-31 08:32:46.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:46 compute-0 ceph-mon[74496]: pgmap v2992: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 31 08:32:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 4.0 KiB/s wr, 15 op/s
Jan 31 08:32:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:46.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:47 compute-0 nova_compute[247704]: 2026-01-31 08:32:47.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:47 compute-0 nova_compute[247704]: 2026-01-31 08:32:47.602 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:47.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:48 compute-0 ceph-mon[74496]: pgmap v2993: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 4.0 KiB/s wr, 15 op/s
Jan 31 08:32:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 2.9 KiB/s wr, 14 op/s
Jan 31 08:32:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:48.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:49.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:50 compute-0 ceph-mon[74496]: pgmap v2994: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 2.9 KiB/s wr, 14 op/s
Jan 31 08:32:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.5 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Jan 31 08:32:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:50.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:51 compute-0 nova_compute[247704]: 2026-01-31 08:32:51.082 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:51.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:52 compute-0 ceph-mon[74496]: pgmap v2995: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.5 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Jan 31 08:32:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 2.7 KiB/s wr, 1 op/s
Jan 31 08:32:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:52.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:52 compute-0 nova_compute[247704]: 2026-01-31 08:32:52.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:52 compute-0 nova_compute[247704]: 2026-01-31 08:32:52.604 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:52 compute-0 podman[364930]: 2026-01-31 08:32:52.914309315 +0000 UTC m=+0.082001422 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:32:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:53 compute-0 ceph-mon[74496]: pgmap v2996: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 938 B/s rd, 2.7 KiB/s wr, 1 op/s
Jan 31 08:32:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/723900355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:32:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/214475849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:32:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/214475849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:53 compute-0 nova_compute[247704]: 2026-01-31 08:32:53.522 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:53 compute-0 nova_compute[247704]: 2026-01-31 08:32:53.522 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:53 compute-0 nova_compute[247704]: 2026-01-31 08:32:53.523 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:53 compute-0 nova_compute[247704]: 2026-01-31 08:32:53.523 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:32:53 compute-0 nova_compute[247704]: 2026-01-31 08:32:53.523 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:53 compute-0 sudo[364957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:53 compute-0 sudo[364957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:53 compute-0 sudo[364957]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:53 compute-0 sudo[364983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:32:53 compute-0 sudo[364983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:53 compute-0 sudo[364983]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:53.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:32:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1373963127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:53 compute-0 nova_compute[247704]: 2026-01-31 08:32:53.989 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:54 compute-0 nova_compute[247704]: 2026-01-31 08:32:54.203 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:32:54 compute-0 nova_compute[247704]: 2026-01-31 08:32:54.204 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4277MB free_disk=20.94268798828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:32:54 compute-0 nova_compute[247704]: 2026-01-31 08:32:54.204 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:54 compute-0 nova_compute[247704]: 2026-01-31 08:32:54.205 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:32:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/214475849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:32:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/214475849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:32:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1373963127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:54.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:55 compute-0 ceph-mon[74496]: pgmap v2997: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:32:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:55.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:32:56 compute-0 nova_compute[247704]: 2026-01-31 08:32:56.084 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:32:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:56.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:57 compute-0 nova_compute[247704]: 2026-01-31 08:32:57.606 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:32:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:57.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:57 compute-0 ceph-mon[74496]: pgmap v2998: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:32:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:32:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:32:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:32:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1899354999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:32:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:32:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:32:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:59.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:00 compute-0 ceph-mon[74496]: pgmap v2999: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:00.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:01 compute-0 nova_compute[247704]: 2026-01-31 08:33:01.086 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:01 compute-0 ceph-mon[74496]: pgmap v3000: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:01.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.137 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.137 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.156 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.179 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.179 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.206 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.232 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:33:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.493 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:02.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.608 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:33:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/929178862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.938 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:02 compute-0 nova_compute[247704]: 2026-01-31 08:33:02.946 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:33:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:03 compute-0 ceph-mon[74496]: pgmap v3001: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1638577862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/929178862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4132899862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:03.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:03 compute-0 nova_compute[247704]: 2026-01-31 08:33:03.846 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:33:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 31 08:33:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:04.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:04 compute-0 nova_compute[247704]: 2026-01-31 08:33:04.976 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:33:04 compute-0 nova_compute[247704]: 2026-01-31 08:33:04.976 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 10.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:05.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:05 compute-0 ceph-mon[74496]: pgmap v3002: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 31 08:33:05 compute-0 nova_compute[247704]: 2026-01-31 08:33:05.977 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:05 compute-0 nova_compute[247704]: 2026-01-31 08:33:05.978 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:06 compute-0 nova_compute[247704]: 2026-01-31 08:33:06.123 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 31 08:33:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:06.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:06 compute-0 nova_compute[247704]: 2026-01-31 08:33:06.764 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:06 compute-0 nova_compute[247704]: 2026-01-31 08:33:06.765 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:33:06 compute-0 nova_compute[247704]: 2026-01-31 08:33:06.765 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:33:07 compute-0 nova_compute[247704]: 2026-01-31 08:33:07.403 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:33:07 compute-0 nova_compute[247704]: 2026-01-31 08:33:07.405 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:07 compute-0 nova_compute[247704]: 2026-01-31 08:33:07.405 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:07 compute-0 nova_compute[247704]: 2026-01-31 08:33:07.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:07.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:08.082 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:33:08 compute-0 nova_compute[247704]: 2026-01-31 08:33:08.083 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:08.084 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:33:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 31 08:33:08 compute-0 ceph-mon[74496]: pgmap v3003: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 31 08:33:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:08.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:09.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:10.087 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:33:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 2 op/s
Jan 31 08:33:10 compute-0 ceph-mon[74496]: pgmap v3004: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 31 08:33:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:10.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:11 compute-0 nova_compute[247704]: 2026-01-31 08:33:11.126 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:11.203 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:11.204 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:11.204 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:11.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:12 compute-0 ceph-mon[74496]: pgmap v3005: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 2 op/s
Jan 31 08:33:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 3 op/s
Jan 31 08:33:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:12 compute-0 nova_compute[247704]: 2026-01-31 08:33:12.613 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:12 compute-0 podman[365061]: 2026-01-31 08:33:12.914959538 +0000 UTC m=+0.082343631 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 08:33:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3327579943' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3327579943' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:13 compute-0 sudo[365081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:13 compute-0 sudo[365081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:13 compute-0 sudo[365081]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:13 compute-0 sudo[365106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:13 compute-0 sudo[365106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:13 compute-0 sudo[365106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:13.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 13 op/s
Jan 31 08:33:14 compute-0 ceph-mon[74496]: pgmap v3006: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 3 op/s
Jan 31 08:33:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:15.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:16 compute-0 ceph-mon[74496]: pgmap v3007: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 13 op/s
Jan 31 08:33:16 compute-0 nova_compute[247704]: 2026-01-31 08:33:16.129 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.1 KiB/s wr, 26 op/s
Jan 31 08:33:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:17 compute-0 nova_compute[247704]: 2026-01-31 08:33:17.616 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:33:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:17.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:33:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:33:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3387723786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:33:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3387723786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:18 compute-0 ceph-mon[74496]: pgmap v3008: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.1 KiB/s wr, 26 op/s
Jan 31 08:33:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.2 KiB/s wr, 27 op/s
Jan 31 08:33:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3387723786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3387723786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:19 compute-0 ceph-mon[74496]: pgmap v3009: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.2 KiB/s wr, 27 op/s
Jan 31 08:33:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:19.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:33:20
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'vms', 'images', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Jan 31 08:33:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:20.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:33:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:33:21 compute-0 nova_compute[247704]: 2026-01-31 08:33:21.132 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:21 compute-0 ceph-mon[74496]: pgmap v3010: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 6.2 KiB/s wr, 29 op/s
Jan 31 08:33:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:21.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 164 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 6.9 KiB/s wr, 37 op/s
Jan 31 08:33:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3426008691' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3426008691' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:22 compute-0 nova_compute[247704]: 2026-01-31 08:33:22.618 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:23 compute-0 ceph-mon[74496]: pgmap v3011: 305 pgs: 305 active+clean; 164 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 6.9 KiB/s wr, 37 op/s
Jan 31 08:33:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1846240337' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1846240337' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:23 compute-0 podman[365137]: 2026-01-31 08:33:23.636855568 +0000 UTC m=+0.092619142 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:33:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:23.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 153 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 6.7 KiB/s wr, 38 op/s
Jan 31 08:33:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:24.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:25 compute-0 ceph-mon[74496]: pgmap v3012: 305 pgs: 305 active+clean; 153 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 6.7 KiB/s wr, 38 op/s
Jan 31 08:33:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1196754735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1196754735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:25.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:26 compute-0 nova_compute[247704]: 2026-01-31 08:33:26.135 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 7.6 KiB/s wr, 83 op/s
Jan 31 08:33:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:26.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1780901387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:27 compute-0 nova_compute[247704]: 2026-01-31 08:33:27.621 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:27 compute-0 ceph-mon[74496]: pgmap v3013: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 7.6 KiB/s wr, 83 op/s
Jan 31 08:33:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 5.2 KiB/s wr, 72 op/s
Jan 31 08:33:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:28.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:29 compute-0 ceph-mon[74496]: pgmap v3014: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 5.2 KiB/s wr, 72 op/s
Jan 31 08:33:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.9 KiB/s wr, 70 op/s
Jan 31 08:33:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:30.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:31 compute-0 nova_compute[247704]: 2026-01-31 08:33:31.200 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:31 compute-0 ceph-mon[74496]: pgmap v3015: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.9 KiB/s wr, 70 op/s
Jan 31 08:33:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:31.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Jan 31 08:33:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:32.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:32 compute-0 nova_compute[247704]: 2026-01-31 08:33:32.671 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.835641) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412835698, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 734, "num_deletes": 251, "total_data_size": 998772, "memory_usage": 1013696, "flush_reason": "Manual Compaction"}
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412848832, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 987800, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65678, "largest_seqno": 66411, "table_properties": {"data_size": 983972, "index_size": 1607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8809, "raw_average_key_size": 19, "raw_value_size": 976286, "raw_average_value_size": 2179, "num_data_blocks": 71, "num_entries": 448, "num_filter_entries": 448, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848358, "oldest_key_time": 1769848358, "file_creation_time": 1769848412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 13265 microseconds, and 4183 cpu microseconds.
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.848905) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 987800 bytes OK
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.848941) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.856493) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.856521) EVENT_LOG_v1 {"time_micros": 1769848412856513, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.856552) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 995051, prev total WAL file size 995051, number of live WAL files 2.
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.857263) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(964KB)], [149(10MB)]
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412857338, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12255401, "oldest_snapshot_seqno": -1}
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 9165 keys, 10449212 bytes, temperature: kUnknown
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412974458, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 10449212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10391853, "index_size": 33310, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 243217, "raw_average_key_size": 26, "raw_value_size": 10232868, "raw_average_value_size": 1116, "num_data_blocks": 1259, "num_entries": 9165, "num_filter_entries": 9165, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769848412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.974777) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10449212 bytes
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.976778) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.5 rd, 89.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(23.0) write-amplify(10.6) OK, records in: 9679, records dropped: 514 output_compression: NoCompression
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.976811) EVENT_LOG_v1 {"time_micros": 1769848412976796, "job": 92, "event": "compaction_finished", "compaction_time_micros": 117238, "compaction_time_cpu_micros": 20299, "output_level": 6, "num_output_files": 1, "total_output_size": 10449212, "num_input_records": 9679, "num_output_records": 9165, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412977209, "job": 92, "event": "table_file_deletion", "file_number": 151}
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848412978679, "job": 92, "event": "table_file_deletion", "file_number": 149}
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.857114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.978730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.978735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.978737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.978739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:33:32.978741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:33:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:33 compute-0 sudo[365169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:33 compute-0 sudo[365169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:33 compute-0 sudo[365169]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:33 compute-0 ceph-mon[74496]: pgmap v3016: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 KiB/s wr, 67 op/s
Jan 31 08:33:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:33.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:33 compute-0 sudo[365194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:33 compute-0 sudo[365194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:33 compute-0 sudo[365194]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.1 KiB/s wr, 58 op/s
Jan 31 08:33:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:33:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:33:35 compute-0 ceph-mon[74496]: pgmap v3017: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.1 KiB/s wr, 58 op/s
Jan 31 08:33:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:35.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:36 compute-0 nova_compute[247704]: 2026-01-31 08:33:36.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.1 KiB/s wr, 56 op/s
Jan 31 08:33:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:37 compute-0 nova_compute[247704]: 2026-01-31 08:33:37.673 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:37.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:37 compute-0 ceph-mon[74496]: pgmap v3018: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.1 KiB/s wr, 56 op/s
Jan 31 08:33:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:33:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:39.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:40 compute-0 ceph-mon[74496]: pgmap v3019: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:33:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:40 compute-0 nova_compute[247704]: 2026-01-31 08:33:40.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:40.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:41 compute-0 nova_compute[247704]: 2026-01-31 08:33:41.205 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2748956884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:41 compute-0 ceph-mon[74496]: pgmap v3020: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:33:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 437 KiB/s wr, 24 op/s
Jan 31 08:33:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:42.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:42 compute-0 nova_compute[247704]: 2026-01-31 08:33:42.676 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:43 compute-0 ceph-mon[74496]: pgmap v3021: 305 pgs: 305 active+clean; 129 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 437 KiB/s wr, 24 op/s
Jan 31 08:33:43 compute-0 nova_compute[247704]: 2026-01-31 08:33:43.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:43 compute-0 podman[365224]: 2026-01-31 08:33:43.872901155 +0000 UTC m=+0.046414939 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:33:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:44 compute-0 sudo[365243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:44 compute-0 sudo[365243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-0 sudo[365243]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-0 sudo[365268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:44 compute-0 sudo[365268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-0 sudo[365268]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-0 sudo[365293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:44 compute-0 sudo[365293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-0 sudo[365293]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 148 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Jan 31 08:33:44 compute-0 sudo[365318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:33:44 compute-0 sudo[365318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:44.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:44 compute-0 sudo[365318]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 99a685f6-8bb8-44e3-8970-d0bcad5d1ca7 does not exist
Jan 31 08:33:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev dac087b5-8c86-4e8e-a57d-c13f84c154f0 does not exist
Jan 31 08:33:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 980c6f8c-af87-4f1b-a3ed-8c5800b32f15 does not exist
Jan 31 08:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:33:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:33:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:33:44 compute-0 sudo[365376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:44 compute-0 sudo[365376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-0 sudo[365376]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-0 sudo[365401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:45 compute-0 sudo[365401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:45 compute-0 sudo[365401]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:45 compute-0 sudo[365426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:45 compute-0 sudo[365426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:45 compute-0 sudo[365426]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:45 compute-0 sudo[365451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:33:45 compute-0 sudo[365451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:45 compute-0 podman[365514]: 2026-01-31 08:33:45.451450093 +0000 UTC m=+0.044453621 container create 9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:33:45 compute-0 systemd[1]: Started libpod-conmon-9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b.scope.
Jan 31 08:33:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:45 compute-0 ceph-mon[74496]: pgmap v3022: 305 pgs: 305 active+clean; 148 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Jan 31 08:33:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:33:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:33:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:33:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:33:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:33:45 compute-0 podman[365514]: 2026-01-31 08:33:45.427565497 +0000 UTC m=+0.020569055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:33:45 compute-0 podman[365514]: 2026-01-31 08:33:45.535980566 +0000 UTC m=+0.128984114 container init 9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:33:45 compute-0 podman[365514]: 2026-01-31 08:33:45.543232483 +0000 UTC m=+0.136236011 container start 9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:33:45 compute-0 podman[365514]: 2026-01-31 08:33:45.547773984 +0000 UTC m=+0.140777512 container attach 9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:33:45 compute-0 xenodochial_fermi[365530]: 167 167
Jan 31 08:33:45 compute-0 systemd[1]: libpod-9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b.scope: Deactivated successfully.
Jan 31 08:33:45 compute-0 podman[365514]: 2026-01-31 08:33:45.560391514 +0000 UTC m=+0.153395042 container died 9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:33:45 compute-0 nova_compute[247704]: 2026-01-31 08:33:45.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:45 compute-0 nova_compute[247704]: 2026-01-31 08:33:45.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e44f2e295c7e87142ab1d1dad520ccf4d1d344a478ead5e213e6d46e2b0d561-merged.mount: Deactivated successfully.
Jan 31 08:33:45 compute-0 podman[365514]: 2026-01-31 08:33:45.613764483 +0000 UTC m=+0.206768021 container remove 9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:45 compute-0 systemd[1]: libpod-conmon-9f74a02cd4e47d88119288d066f1744ea430660693a07b81c8efed3fb7d2ea0b.scope: Deactivated successfully.
Jan 31 08:33:45 compute-0 podman[365557]: 2026-01-31 08:33:45.770594189 +0000 UTC m=+0.051938595 container create eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 31 08:33:45 compute-0 systemd[1]: Started libpod-conmon-eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914.scope.
Jan 31 08:33:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:45 compute-0 podman[365557]: 2026-01-31 08:33:45.744394546 +0000 UTC m=+0.025739012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f25652387c913dfc9969c14a239d6c942dac0ee49bf67efc0c87264bcf881630/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f25652387c913dfc9969c14a239d6c942dac0ee49bf67efc0c87264bcf881630/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f25652387c913dfc9969c14a239d6c942dac0ee49bf67efc0c87264bcf881630/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f25652387c913dfc9969c14a239d6c942dac0ee49bf67efc0c87264bcf881630/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f25652387c913dfc9969c14a239d6c942dac0ee49bf67efc0c87264bcf881630/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 podman[365557]: 2026-01-31 08:33:45.855593662 +0000 UTC m=+0.136938078 container init eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cray, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:33:45 compute-0 podman[365557]: 2026-01-31 08:33:45.862094432 +0000 UTC m=+0.143438818 container start eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cray, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:33:45 compute-0 podman[365557]: 2026-01-31 08:33:45.867496525 +0000 UTC m=+0.148840911 container attach eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:33:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:45.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:46 compute-0 nova_compute[247704]: 2026-01-31 08:33:46.255 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:46.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:46 compute-0 interesting_cray[365573]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:33:46 compute-0 interesting_cray[365573]: --> relative data size: 1.0
Jan 31 08:33:46 compute-0 interesting_cray[365573]: --> All data devices are unavailable
Jan 31 08:33:46 compute-0 systemd[1]: libpod-eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914.scope: Deactivated successfully.
Jan 31 08:33:46 compute-0 podman[365557]: 2026-01-31 08:33:46.652142125 +0000 UTC m=+0.933486501 container died eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f25652387c913dfc9969c14a239d6c942dac0ee49bf67efc0c87264bcf881630-merged.mount: Deactivated successfully.
Jan 31 08:33:46 compute-0 podman[365557]: 2026-01-31 08:33:46.754505565 +0000 UTC m=+1.035849941 container remove eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cray, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:33:46 compute-0 systemd[1]: libpod-conmon-eb238b16996278b679ab2e3508cad8d0545bc687fd6b46c84eae2e0ed324c914.scope: Deactivated successfully.
Jan 31 08:33:46 compute-0 sudo[365451]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:46 compute-0 sudo[365600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:46 compute-0 sudo[365600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:46 compute-0 sudo[365600]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:46 compute-0 sudo[365625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:46 compute-0 sudo[365625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:46 compute-0 sudo[365625]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:46 compute-0 sudo[365650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:46 compute-0 sudo[365650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:46 compute-0 sudo[365650]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:47 compute-0 sudo[365675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:33:47 compute-0 sudo[365675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:47 compute-0 podman[365740]: 2026-01-31 08:33:47.348130011 +0000 UTC m=+0.047931406 container create 2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:33:47 compute-0 systemd[1]: Started libpod-conmon-2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490.scope.
Jan 31 08:33:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:47 compute-0 podman[365740]: 2026-01-31 08:33:47.328551771 +0000 UTC m=+0.028353166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:33:47 compute-0 podman[365740]: 2026-01-31 08:33:47.431513786 +0000 UTC m=+0.131315191 container init 2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:33:47 compute-0 podman[365740]: 2026-01-31 08:33:47.439856021 +0000 UTC m=+0.139657396 container start 2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:33:47 compute-0 affectionate_perlman[365756]: 167 167
Jan 31 08:33:47 compute-0 systemd[1]: libpod-2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490.scope: Deactivated successfully.
Jan 31 08:33:47 compute-0 conmon[365756]: conmon 2afea7677e331556bfa1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490.scope/container/memory.events
Jan 31 08:33:47 compute-0 podman[365740]: 2026-01-31 08:33:47.450292096 +0000 UTC m=+0.150093521 container attach 2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:33:47 compute-0 podman[365740]: 2026-01-31 08:33:47.450965873 +0000 UTC m=+0.150767248 container died 2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3fb9622824f29e5e8a312142baead3bc985842a07d7469faf764845b52223e-merged.mount: Deactivated successfully.
Jan 31 08:33:47 compute-0 podman[365740]: 2026-01-31 08:33:47.528257358 +0000 UTC m=+0.228058733 container remove 2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:47 compute-0 ceph-mon[74496]: pgmap v3023: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:47 compute-0 systemd[1]: libpod-conmon-2afea7677e331556bfa18d2c932d642430e349dbb047750ed445f3654199c490.scope: Deactivated successfully.
Jan 31 08:33:47 compute-0 podman[365783]: 2026-01-31 08:33:47.669807909 +0000 UTC m=+0.054740753 container create 232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:33:47 compute-0 nova_compute[247704]: 2026-01-31 08:33:47.678 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:47 compute-0 systemd[1]: Started libpod-conmon-232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec.scope.
Jan 31 08:33:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:47 compute-0 podman[365783]: 2026-01-31 08:33:47.648901876 +0000 UTC m=+0.033834760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fd3baec88db1452cb4b70d6da940490a551f7315cd916f2e254f4e6063b404/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fd3baec88db1452cb4b70d6da940490a551f7315cd916f2e254f4e6063b404/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fd3baec88db1452cb4b70d6da940490a551f7315cd916f2e254f4e6063b404/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03fd3baec88db1452cb4b70d6da940490a551f7315cd916f2e254f4e6063b404/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:47 compute-0 podman[365783]: 2026-01-31 08:33:47.760645647 +0000 UTC m=+0.145578511 container init 232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:33:47 compute-0 podman[365783]: 2026-01-31 08:33:47.767251599 +0000 UTC m=+0.152184433 container start 232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:33:47 compute-0 podman[365783]: 2026-01-31 08:33:47.770898208 +0000 UTC m=+0.155831082 container attach 232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:33:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:47.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]: {
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:     "0": [
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:         {
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "devices": [
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "/dev/loop3"
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             ],
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "lv_name": "ceph_lv0",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "lv_size": "7511998464",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "name": "ceph_lv0",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "tags": {
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.cluster_name": "ceph",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.crush_device_class": "",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.encrypted": "0",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.osd_id": "0",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.type": "block",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:                 "ceph.vdo": "0"
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             },
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "type": "block",
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:             "vg_name": "ceph_vg0"
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:         }
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]:     ]
Jan 31 08:33:48 compute-0 charming_chaplygin[365799]: }
Jan 31 08:33:48 compute-0 nova_compute[247704]: 2026-01-31 08:33:48.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:48 compute-0 systemd[1]: libpod-232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec.scope: Deactivated successfully.
Jan 31 08:33:48 compute-0 podman[365783]: 2026-01-31 08:33:48.593615592 +0000 UTC m=+0.978548426 container died 232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 08:33:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:48.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-03fd3baec88db1452cb4b70d6da940490a551f7315cd916f2e254f4e6063b404-merged.mount: Deactivated successfully.
Jan 31 08:33:48 compute-0 podman[365783]: 2026-01-31 08:33:48.66489821 +0000 UTC m=+1.049831044 container remove 232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:33:48 compute-0 systemd[1]: libpod-conmon-232566bd4e672f5d8a9f89d57dc876889d2e09ff23f18e65bbe5c9e27902abec.scope: Deactivated successfully.
Jan 31 08:33:48 compute-0 sudo[365675]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:48 compute-0 sudo[365823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:48 compute-0 sudo[365823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:48 compute-0 sudo[365823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:48 compute-0 sudo[365848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:48 compute-0 sudo[365848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:48 compute-0 sudo[365848]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:48 compute-0 sudo[365873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:48 compute-0 sudo[365873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:48 compute-0 sudo[365873]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:48 compute-0 sudo[365898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:33:48 compute-0 sudo[365898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:49 compute-0 podman[365964]: 2026-01-31 08:33:49.247132447 +0000 UTC m=+0.047630549 container create 2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khorana, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 08:33:49 compute-0 systemd[1]: Started libpod-conmon-2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16.scope.
Jan 31 08:33:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:49 compute-0 podman[365964]: 2026-01-31 08:33:49.227584188 +0000 UTC m=+0.028082310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:33:49 compute-0 podman[365964]: 2026-01-31 08:33:49.331641509 +0000 UTC m=+0.132139631 container init 2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khorana, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:49 compute-0 podman[365964]: 2026-01-31 08:33:49.3381997 +0000 UTC m=+0.138697802 container start 2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:33:49 compute-0 podman[365964]: 2026-01-31 08:33:49.341801578 +0000 UTC m=+0.142299690 container attach 2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khorana, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:33:49 compute-0 recursing_khorana[365980]: 167 167
Jan 31 08:33:49 compute-0 systemd[1]: libpod-2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16.scope: Deactivated successfully.
Jan 31 08:33:49 compute-0 conmon[365980]: conmon 2d26c0c577345fdd2686 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16.scope/container/memory.events
Jan 31 08:33:49 compute-0 podman[365964]: 2026-01-31 08:33:49.344652128 +0000 UTC m=+0.145150240 container died 2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-80c186cb40ea29dc1a2c5d05da635a51701c67f25a523119df2edb9c5983328e-merged.mount: Deactivated successfully.
Jan 31 08:33:49 compute-0 podman[365964]: 2026-01-31 08:33:49.383032139 +0000 UTC m=+0.183530241 container remove 2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:33:49 compute-0 systemd[1]: libpod-conmon-2d26c0c577345fdd26861b98c9d601921d65caa1dbe8d413e7b9b7f8dc5a4a16.scope: Deactivated successfully.
Jan 31 08:33:49 compute-0 podman[366005]: 2026-01-31 08:33:49.526539698 +0000 UTC m=+0.037712936 container create 474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kalam, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:33:49 compute-0 ceph-mon[74496]: pgmap v3024: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:49 compute-0 systemd[1]: Started libpod-conmon-474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6.scope.
Jan 31 08:33:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd252f76b517b12d68fbda6842e6c11056b8b84b1f58f1c50651855bc5fb9e5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd252f76b517b12d68fbda6842e6c11056b8b84b1f58f1c50651855bc5fb9e5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd252f76b517b12d68fbda6842e6c11056b8b84b1f58f1c50651855bc5fb9e5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd252f76b517b12d68fbda6842e6c11056b8b84b1f58f1c50651855bc5fb9e5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:49 compute-0 podman[366005]: 2026-01-31 08:33:49.508910226 +0000 UTC m=+0.020083494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:33:49 compute-0 podman[366005]: 2026-01-31 08:33:49.699279984 +0000 UTC m=+0.210453232 container init 474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:49 compute-0 podman[366005]: 2026-01-31 08:33:49.707738132 +0000 UTC m=+0.218911370 container start 474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kalam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:33:49 compute-0 podman[366005]: 2026-01-31 08:33:49.711224597 +0000 UTC m=+0.222397835 container attach 474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:33:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:49.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:50 compute-0 competent_kalam[366022]: {
Jan 31 08:33:50 compute-0 competent_kalam[366022]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:33:50 compute-0 competent_kalam[366022]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:33:50 compute-0 competent_kalam[366022]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:33:50 compute-0 competent_kalam[366022]:         "osd_id": 0,
Jan 31 08:33:50 compute-0 competent_kalam[366022]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:33:50 compute-0 competent_kalam[366022]:         "type": "bluestore"
Jan 31 08:33:50 compute-0 competent_kalam[366022]:     }
Jan 31 08:33:50 compute-0 competent_kalam[366022]: }
Jan 31 08:33:50 compute-0 systemd[1]: libpod-474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6.scope: Deactivated successfully.
Jan 31 08:33:50 compute-0 podman[366005]: 2026-01-31 08:33:50.556546445 +0000 UTC m=+1.067719683 container died 474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:33:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/522710267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:33:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3827577837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd252f76b517b12d68fbda6842e6c11056b8b84b1f58f1c50651855bc5fb9e5b-merged.mount: Deactivated successfully.
Jan 31 08:33:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:50.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:50 compute-0 podman[366005]: 2026-01-31 08:33:50.617280644 +0000 UTC m=+1.128453882 container remove 474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:33:50 compute-0 systemd[1]: libpod-conmon-474ee73a070c230f3c540e12c2e00e963b05108fea1098cc4f0b523af0122ac6.scope: Deactivated successfully.
Jan 31 08:33:50 compute-0 sudo[365898]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:33:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:33:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 232db034-ad7c-48ab-affd-41e4c2c06293 does not exist
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0511698c-c1b1-4534-96d3-7749672e3682 does not exist
Jan 31 08:33:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev db1f9c02-93ae-4881-acd6-6a2bea36fc98 does not exist
Jan 31 08:33:50 compute-0 sudo[366055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:50 compute-0 sudo[366055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:50 compute-0 sudo[366055]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:50 compute-0 sudo[366080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:33:50 compute-0 sudo[366080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:50 compute-0 sudo[366080]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:51 compute-0 nova_compute[247704]: 2026-01-31 08:33:51.256 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:51 compute-0 ceph-mon[74496]: pgmap v3025: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:33:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2128890528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:52.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:52 compute-0 nova_compute[247704]: 2026-01-31 08:33:52.680 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:53 compute-0 ceph-mon[74496]: pgmap v3026: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:33:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3946079666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/955352894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:33:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/955352894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:33:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:33:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:33:53 compute-0 podman[366107]: 2026-01-31 08:33:53.925770381 +0000 UTC m=+0.095710447 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 08:33:53 compute-0 sudo[366125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:53 compute-0 sudo[366125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:53 compute-0 sudo[366125]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:54 compute-0 sudo[366159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:33:54 compute-0 sudo[366159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:54 compute-0 sudo[366159]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 1.3 MiB/s wr, 2 op/s
Jan 31 08:33:54 compute-0 nova_compute[247704]: 2026-01-31 08:33:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:33:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:54.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:33:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3141230756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:54 compute-0 nova_compute[247704]: 2026-01-31 08:33:54.897 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:54 compute-0 nova_compute[247704]: 2026-01-31 08:33:54.897 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:54 compute-0 nova_compute[247704]: 2026-01-31 08:33:54.897 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:54 compute-0 nova_compute[247704]: 2026-01-31 08:33:54.897 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:33:54 compute-0 nova_compute[247704]: 2026-01-31 08:33:54.897 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:33:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270474919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:55 compute-0 nova_compute[247704]: 2026-01-31 08:33:55.335 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:55 compute-0 nova_compute[247704]: 2026-01-31 08:33:55.491 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:33:55 compute-0 nova_compute[247704]: 2026-01-31 08:33:55.493 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4257MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:33:55 compute-0 nova_compute[247704]: 2026-01-31 08:33:55.494 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:55 compute-0 nova_compute[247704]: 2026-01-31 08:33:55.494 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:55 compute-0 ceph-mon[74496]: pgmap v3027: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 1.3 MiB/s wr, 2 op/s
Jan 31 08:33:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4270474919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:55 compute-0 nova_compute[247704]: 2026-01-31 08:33:55.876 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:33:55 compute-0 nova_compute[247704]: 2026-01-31 08:33:55.877 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:33:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:55.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:56.031 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:33:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:56.032 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:33:56 compute-0 nova_compute[247704]: 2026-01-31 08:33:56.033 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:56 compute-0 nova_compute[247704]: 2026-01-31 08:33:56.259 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:56 compute-0 nova_compute[247704]: 2026-01-31 08:33:56.276 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 615 KiB/s wr, 46 op/s
Jan 31 08:33:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:33:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:56.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:33:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:33:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875697994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:56 compute-0 nova_compute[247704]: 2026-01-31 08:33:56.747 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:56 compute-0 nova_compute[247704]: 2026-01-31 08:33:56.753 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:33:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/875697994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:56 compute-0 nova_compute[247704]: 2026-01-31 08:33:56.787 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:33:56 compute-0 nova_compute[247704]: 2026-01-31 08:33:56.788 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:33:56 compute-0 nova_compute[247704]: 2026-01-31 08:33:56.788 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:33:57.035 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:33:57 compute-0 nova_compute[247704]: 2026-01-31 08:33:57.681 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:33:57 compute-0 ceph-mon[74496]: pgmap v3028: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 615 KiB/s wr, 46 op/s
Jan 31 08:33:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/189213466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:57 compute-0 nova_compute[247704]: 2026-01-31 08:33:57.789 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:57 compute-0 nova_compute[247704]: 2026-01-31 08:33:57.789 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:33:57 compute-0 nova_compute[247704]: 2026-01-31 08:33:57.790 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:33:57 compute-0 nova_compute[247704]: 2026-01-31 08:33:57.846 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:33:57 compute-0 nova_compute[247704]: 2026-01-31 08:33:57.846 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:57 compute-0 nova_compute[247704]: 2026-01-31 08:33:57.847 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:57.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 179 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 646 KiB/s wr, 74 op/s
Jan 31 08:33:58 compute-0 nova_compute[247704]: 2026-01-31 08:33:58.615 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:33:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:58.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:33:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3629884805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:58 compute-0 nova_compute[247704]: 2026-01-31 08:33:58.828 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:58 compute-0 nova_compute[247704]: 2026-01-31 08:33:58.829 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:58 compute-0 nova_compute[247704]: 2026-01-31 08:33:58.859 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.134 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.135 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.142 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.142 247708 INFO nova.compute.claims [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.428 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:59 compute-0 ceph-mon[74496]: pgmap v3029: 305 pgs: 305 active+clean; 179 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 646 KiB/s wr, 74 op/s
Jan 31 08:33:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:33:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726001139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.860 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.868 247708 DEBUG nova.compute.provider_tree [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:33:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:33:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:33:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:59.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.903 247708 DEBUG nova.scheduler.client.report [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.929 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:59 compute-0 nova_compute[247704]: 2026-01-31 08:33:59.930 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.006 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.007 247708 DEBUG nova.network.neutron [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.037 247708 INFO nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.077 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.237 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.239 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.239 247708 INFO nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Creating image(s)
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.265 247708 DEBUG nova.storage.rbd_utils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.294 247708 DEBUG nova.storage.rbd_utils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.324 247708 DEBUG nova.storage.rbd_utils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.330 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 209 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 78 op/s
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.399 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.401 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.401 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.402 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.433 247708 DEBUG nova.storage.rbd_utils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.438 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:00.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2726001139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.834 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:00 compute-0 nova_compute[247704]: 2026-01-31 08:34:00.912 247708 DEBUG nova.storage.rbd_utils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] resizing rbd image 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:34:01 compute-0 nova_compute[247704]: 2026-01-31 08:34:01.027 247708 DEBUG nova.objects.instance [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'migration_context' on Instance uuid 4bb4a501-becc-4e54-a269-3ef1e73b5f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:01 compute-0 nova_compute[247704]: 2026-01-31 08:34:01.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:01 compute-0 nova_compute[247704]: 2026-01-31 08:34:01.333 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:34:01 compute-0 nova_compute[247704]: 2026-01-31 08:34:01.334 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Ensure instance console log exists: /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:34:01 compute-0 nova_compute[247704]: 2026-01-31 08:34:01.335 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:01 compute-0 nova_compute[247704]: 2026-01-31 08:34:01.335 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:01 compute-0 nova_compute[247704]: 2026-01-31 08:34:01.335 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:01 compute-0 ceph-mon[74496]: pgmap v3030: 305 pgs: 305 active+clean; 209 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 78 op/s
Jan 31 08:34:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:01.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 228 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 31 08:34:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:02.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:02 compute-0 nova_compute[247704]: 2026-01-31 08:34:02.683 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:02 compute-0 nova_compute[247704]: 2026-01-31 08:34:02.738 247708 DEBUG nova.network.neutron [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Successfully created port: 8e84479a-ddb3-40f5-871f-c61675953845 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:34:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:03 compute-0 ceph-mon[74496]: pgmap v3031: 305 pgs: 305 active+clean; 228 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 31 08:34:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:03.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 228 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 31 08:34:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:04.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4123949811' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:05 compute-0 ceph-mon[74496]: pgmap v3032: 305 pgs: 305 active+clean; 228 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 31 08:34:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2887728537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:05.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:06 compute-0 ovn_controller[149457]: 2026-01-31T08:34:06Z|00765|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 08:34:06 compute-0 nova_compute[247704]: 2026-01-31 08:34:06.263 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 271 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 154 op/s
Jan 31 08:34:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:06.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:07 compute-0 nova_compute[247704]: 2026-01-31 08:34:07.685 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:07 compute-0 ceph-mon[74496]: pgmap v3033: 305 pgs: 305 active+clean; 271 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 154 op/s
Jan 31 08:34:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:07.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:08 compute-0 nova_compute[247704]: 2026-01-31 08:34:08.366 247708 DEBUG nova.network.neutron [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Successfully updated port: 8e84479a-ddb3-40f5-871f-c61675953845 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:34:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 280 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 5.3 MiB/s wr, 118 op/s
Jan 31 08:34:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:08.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:08 compute-0 nova_compute[247704]: 2026-01-31 08:34:08.828 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "refresh_cache-4bb4a501-becc-4e54-a269-3ef1e73b5f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:08 compute-0 nova_compute[247704]: 2026-01-31 08:34:08.828 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquired lock "refresh_cache-4bb4a501-becc-4e54-a269-3ef1e73b5f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:34:08 compute-0 nova_compute[247704]: 2026-01-31 08:34:08.828 247708 DEBUG nova.network.neutron [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:34:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1097566403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:08 compute-0 nova_compute[247704]: 2026-01-31 08:34:08.974 247708 DEBUG nova.compute.manager [req-edad787a-851b-4326-81e8-794f4da3d394 req-886b398d-70c9-415b-a254-c7859bad73a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received event network-changed-8e84479a-ddb3-40f5-871f-c61675953845 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:08 compute-0 nova_compute[247704]: 2026-01-31 08:34:08.975 247708 DEBUG nova.compute.manager [req-edad787a-851b-4326-81e8-794f4da3d394 req-886b398d-70c9-415b-a254-c7859bad73a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Refreshing instance network info cache due to event network-changed-8e84479a-ddb3-40f5-871f-c61675953845. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:34:08 compute-0 nova_compute[247704]: 2026-01-31 08:34:08.975 247708 DEBUG oslo_concurrency.lockutils [req-edad787a-851b-4326-81e8-794f4da3d394 req-886b398d-70c9-415b-a254-c7859bad73a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-4bb4a501-becc-4e54-a269-3ef1e73b5f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:09 compute-0 nova_compute[247704]: 2026-01-31 08:34:09.578 247708 DEBUG nova.network.neutron [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:34:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:09.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:09 compute-0 ceph-mon[74496]: pgmap v3034: 305 pgs: 305 active+clean; 280 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 5.3 MiB/s wr, 118 op/s
Jan 31 08:34:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 289 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 5.0 MiB/s wr, 116 op/s
Jan 31 08:34:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:10.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:11.204 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:11.205 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:11.205 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:11 compute-0 nova_compute[247704]: 2026-01-31 08:34:11.265 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:11 compute-0 ceph-mon[74496]: pgmap v3035: 305 pgs: 305 active+clean; 289 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 5.0 MiB/s wr, 116 op/s
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.274 247708 DEBUG nova.network.neutron [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Updating instance_info_cache with network_info: [{"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.414 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Releasing lock "refresh_cache-4bb4a501-becc-4e54-a269-3ef1e73b5f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.414 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Instance network_info: |[{"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.415 247708 DEBUG oslo_concurrency.lockutils [req-edad787a-851b-4326-81e8-794f4da3d394 req-886b398d-70c9-415b-a254-c7859bad73a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-4bb4a501-becc-4e54-a269-3ef1e73b5f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.415 247708 DEBUG nova.network.neutron [req-edad787a-851b-4326-81e8-794f4da3d394 req-886b398d-70c9-415b-a254-c7859bad73a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Refreshing network info cache for port 8e84479a-ddb3-40f5-871f-c61675953845 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.419 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Start _get_guest_xml network_info=[{"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.425 247708 WARNING nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.434 247708 DEBUG nova.virt.libvirt.host [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.435 247708 DEBUG nova.virt.libvirt.host [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.445 247708 DEBUG nova.virt.libvirt.host [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.446 247708 DEBUG nova.virt.libvirt.host [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.448 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.448 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.449 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.449 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.449 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.450 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.450 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.451 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.451 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.451 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.452 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.452 247708 DEBUG nova.virt.hardware [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.457 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:12.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.687 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:34:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3283750753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.887 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.911 247708 DEBUG nova.storage.rbd_utils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:12 compute-0 nova_compute[247704]: 2026-01-31 08:34:12.916 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3283750753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.381 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.382 247708 DEBUG nova.virt.libvirt.vif [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-2063876725',display_name='tempest-TestServerMultinode-server-2063876725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-2063876725',id=176,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-qbk4oh4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultinode-2117392928-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:00Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=4bb4a501-becc-4e54-a269-3ef1e73b5f6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.383 247708 DEBUG nova.network.os_vif_util [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.383 247708 DEBUG nova.network.os_vif_util [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:24:aa,bridge_name='br-int',has_traffic_filtering=True,id=8e84479a-ddb3-40f5-871f-c61675953845,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e84479a-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.384 247708 DEBUG nova.objects.instance [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bb4a501-becc-4e54-a269-3ef1e73b5f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.431 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <uuid>4bb4a501-becc-4e54-a269-3ef1e73b5f6f</uuid>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <name>instance-000000b0</name>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <nova:name>tempest-TestServerMultinode-server-2063876725</nova:name>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:34:12</nova:creationTime>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <nova:user uuid="f601ac628957410b995fa67e240e4871">tempest-TestServerMultinode-2117392928-project-admin</nova:user>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <nova:project uuid="d69b70b0e4e340758cc43d45c1113d2f">tempest-TestServerMultinode-2117392928</nova:project>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <nova:port uuid="8e84479a-ddb3-40f5-871f-c61675953845">
Jan 31 08:34:13 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <system>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <entry name="serial">4bb4a501-becc-4e54-a269-3ef1e73b5f6f</entry>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <entry name="uuid">4bb4a501-becc-4e54-a269-3ef1e73b5f6f</entry>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </system>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <os>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   </os>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <features>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   </features>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk">
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       </source>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk.config">
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       </source>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:34:13 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:03:24:aa"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <target dev="tap8e84479a-dd"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f/console.log" append="off"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <video>
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </video>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:34:13 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:34:13 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:34:13 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:34:13 compute-0 nova_compute[247704]: </domain>
Jan 31 08:34:13 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.432 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Preparing to wait for external event network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.432 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.432 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.433 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.433 247708 DEBUG nova.virt.libvirt.vif [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-2063876725',display_name='tempest-TestServerMultinode-server-2063876725',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-2063876725',id=176,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-qbk4oh4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultinode-2117392928-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:00Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=4bb4a501-becc-4e54-a269-3ef1e73b5f6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.433 247708 DEBUG nova.network.os_vif_util [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.434 247708 DEBUG nova.network.os_vif_util [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:24:aa,bridge_name='br-int',has_traffic_filtering=True,id=8e84479a-ddb3-40f5-871f-c61675953845,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e84479a-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.434 247708 DEBUG os_vif [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:24:aa,bridge_name='br-int',has_traffic_filtering=True,id=8e84479a-ddb3-40f5-871f-c61675953845,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e84479a-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.435 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.436 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.439 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.439 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e84479a-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.440 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e84479a-dd, col_values=(('external_ids', {'iface-id': '8e84479a-ddb3-40f5-871f-c61675953845', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:24:aa', 'vm-uuid': '4bb4a501-becc-4e54-a269-3ef1e73b5f6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.441 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:13 compute-0 NetworkManager[49108]: <info>  [1769848453.4426] manager: (tap8e84479a-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.444 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.449 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.450 247708 INFO os_vif [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:24:aa,bridge_name='br-int',has_traffic_filtering=True,id=8e84479a-ddb3-40f5-871f-c61675953845,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e84479a-dd')
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.615 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.615 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.616 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] No VIF found with MAC fa:16:3e:03:24:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.616 247708 INFO nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Using config drive
Jan 31 08:34:13 compute-0 nova_compute[247704]: 2026-01-31 08:34:13.646 247708 DEBUG nova.storage.rbd_utils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:13.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:14 compute-0 ceph-mon[74496]: pgmap v3036: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Jan 31 08:34:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4035119726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:14 compute-0 sudo[366508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:14 compute-0 sudo[366508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:14 compute-0 sudo[366508]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:14 compute-0 sudo[366539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:14 compute-0 sudo[366539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:14 compute-0 podman[366532]: 2026-01-31 08:34:14.163027689 +0000 UTC m=+0.053957044 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:34:14 compute-0 sudo[366539]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1013 KiB/s rd, 3.4 MiB/s wr, 121 op/s
Jan 31 08:34:14 compute-0 nova_compute[247704]: 2026-01-31 08:34:14.568 247708 INFO nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Creating config drive at /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f/disk.config
Jan 31 08:34:14 compute-0 nova_compute[247704]: 2026-01-31 08:34:14.573 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpedw75v6d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:14 compute-0 nova_compute[247704]: 2026-01-31 08:34:14.706 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpedw75v6d" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:14 compute-0 nova_compute[247704]: 2026-01-31 08:34:14.741 247708 DEBUG nova.storage.rbd_utils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] rbd image 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:14 compute-0 nova_compute[247704]: 2026-01-31 08:34:14.747 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f/disk.config 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:14 compute-0 nova_compute[247704]: 2026-01-31 08:34:14.965 247708 DEBUG oslo_concurrency.processutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f/disk.config 4bb4a501-becc-4e54-a269-3ef1e73b5f6f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:14 compute-0 nova_compute[247704]: 2026-01-31 08:34:14.969 247708 INFO nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Deleting local config drive /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f/disk.config because it was imported into RBD.
Jan 31 08:34:15 compute-0 kernel: tap8e84479a-dd: entered promiscuous mode
Jan 31 08:34:15 compute-0 NetworkManager[49108]: <info>  [1769848455.0443] manager: (tap8e84479a-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Jan 31 08:34:15 compute-0 ovn_controller[149457]: 2026-01-31T08:34:15Z|00766|binding|INFO|Claiming lport 8e84479a-ddb3-40f5-871f-c61675953845 for this chassis.
Jan 31 08:34:15 compute-0 ovn_controller[149457]: 2026-01-31T08:34:15Z|00767|binding|INFO|8e84479a-ddb3-40f5-871f-c61675953845: Claiming fa:16:3e:03:24:aa 10.100.0.7
Jan 31 08:34:15 compute-0 systemd-udevd[366627]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.083 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.087 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:15 compute-0 NetworkManager[49108]: <info>  [1769848455.0997] device (tap8e84479a-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:34:15 compute-0 NetworkManager[49108]: <info>  [1769848455.1005] device (tap8e84479a-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:34:15 compute-0 systemd-machined[214448]: New machine qemu-82-instance-000000b0.
Jan 31 08:34:15 compute-0 ovn_controller[149457]: 2026-01-31T08:34:15Z|00768|binding|INFO|Setting lport 8e84479a-ddb3-40f5-871f-c61675953845 ovn-installed in OVS
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.118 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:15 compute-0 ovn_controller[149457]: 2026-01-31T08:34:15Z|00769|binding|INFO|Setting lport 8e84479a-ddb3-40f5-871f-c61675953845 up in Southbound
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.120 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:24:aa 10.100.0.7'], port_security=['fa:16:3e:03:24:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4bb4a501-becc-4e54-a269-3ef1e73b5f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd69b70b0e4e340758cc43d45c1113d2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c7793481-993b-4688-8da1-03fa2a12b295', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f6c418e-e1df-4f83-9060-d1845d483973, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=8e84479a-ddb3-40f5-871f-c61675953845) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.121 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 8e84479a-ddb3-40f5-871f-c61675953845 in datapath 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d bound to our chassis
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.122 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 08:34:15 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-000000b0.
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.134 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[26aa55a7-9d48-43cd-8a8a-c88085f34d76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.136 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap786b4c20-d1 in ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.138 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap786b4c20-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.138 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[29aa2636-45eb-4e80-9487-a3e29434844e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.139 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[55d59bee-1cd0-4122-96e7-f94a70630cfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.150 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[62cdfd95-c8b0-425e-99fa-185ccb12fef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.163 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6855b958-6925-41b6-b65b-b92a09783f08]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.192 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7f78db81-40b2-4af0-be17-7ec980ce4c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 NetworkManager[49108]: <info>  [1769848455.2002] manager: (tap786b4c20-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/332)
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.200 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[51e4fec9-e6d6-4bd9-9677-9550c9ffd161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.236 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[75f99f04-8a4f-4711-bc33-014924ea1f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.239 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9c23e1a8-52e9-4769-ab94-5b0913da6373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 NetworkManager[49108]: <info>  [1769848455.2632] device (tap786b4c20-d0): carrier: link connected
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.268 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f33e3338-f4cc-4f93-9a60-e64d594f83d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.287 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce55373-d8dd-4b32-bc5d-b682007babf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap786b4c20-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:d1:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869788, 'reachable_time': 43286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366663, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.298 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd86510-06d3-485b-8569-2e4cff9fe585]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:d113'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869788, 'tstamp': 869788}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366664, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.316 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd891cd-ccc0-464a-936c-48eeab26a061]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap786b4c20-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:d1:13'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869788, 'reachable_time': 43286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366665, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.339 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed7660e-089a-4682-a6ea-9e24d58187f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.402 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5e8bfc-8341-4cf3-a279-74d307e1e7db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.404 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap786b4c20-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.404 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.405 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap786b4c20-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:15 compute-0 kernel: tap786b4c20-d0: entered promiscuous mode
Jan 31 08:34:15 compute-0 NetworkManager[49108]: <info>  [1769848455.4091] manager: (tap786b4c20-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.408 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.416 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap786b4c20-d0, col_values=(('external_ids', {'iface-id': '7b0df60c-a238-46c0-acd9-976f981537f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:15 compute-0 ovn_controller[149457]: 2026-01-31T08:34:15Z|00770|binding|INFO|Releasing lport 7b0df60c-a238-46c0-acd9-976f981537f3 from this chassis (sb_readonly=0)
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.420 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.421 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.423 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0722181f-67e7-409b-bcf3-65665536b751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.424 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.pid.haproxy
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:15.426 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'env', 'PROCESS_TAG=haproxy-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/786b4c20-d3c9-4eba-b2c7-0e7b9805b52d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.624 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848455.6242101, 4bb4a501-becc-4e54-a269-3ef1e73b5f6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.625 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] VM Started (Lifecycle Event)
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.680 247708 DEBUG nova.network.neutron [req-edad787a-851b-4326-81e8-794f4da3d394 req-886b398d-70c9-415b-a254-c7859bad73a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Updated VIF entry in instance network info cache for port 8e84479a-ddb3-40f5-871f-c61675953845. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.681 247708 DEBUG nova.network.neutron [req-edad787a-851b-4326-81e8-794f4da3d394 req-886b398d-70c9-415b-a254-c7859bad73a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Updating instance_info_cache with network_info: [{"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.777 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.784 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848455.6251957, 4bb4a501-becc-4e54-a269-3ef1e73b5f6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.784 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] VM Paused (Lifecycle Event)
Jan 31 08:34:15 compute-0 podman[366740]: 2026-01-31 08:34:15.799318503 +0000 UTC m=+0.028115681 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:34:15 compute-0 nova_compute[247704]: 2026-01-31 08:34:15.898 247708 DEBUG oslo_concurrency.lockutils [req-edad787a-851b-4326-81e8-794f4da3d394 req-886b398d-70c9-415b-a254-c7859bad73a0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-4bb4a501-becc-4e54-a269-3ef1e73b5f6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:34:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:15.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:15 compute-0 podman[366740]: 2026-01-31 08:34:15.95415873 +0000 UTC m=+0.182955878 container create 15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:34:15 compute-0 systemd[1]: Started libpod-conmon-15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd.scope.
Jan 31 08:34:16 compute-0 nova_compute[247704]: 2026-01-31 08:34:16.014 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:16 compute-0 nova_compute[247704]: 2026-01-31 08:34:16.017 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:34:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c8ab6251db9ff8d3530670ab69fb23cabd5b1774b246fbbc50e32fa7f82ec0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:16 compute-0 ceph-mon[74496]: pgmap v3037: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1013 KiB/s rd, 3.4 MiB/s wr, 121 op/s
Jan 31 08:34:16 compute-0 podman[366740]: 2026-01-31 08:34:16.051513927 +0000 UTC m=+0.280311095 container init 15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 08:34:16 compute-0 podman[366740]: 2026-01-31 08:34:16.055905005 +0000 UTC m=+0.284702153 container start 15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:34:16 compute-0 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[366755]: [NOTICE]   (366759) : New worker (366761) forked
Jan 31 08:34:16 compute-0 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[366755]: [NOTICE]   (366759) : Loading success.
Jan 31 08:34:16 compute-0 nova_compute[247704]: 2026-01-31 08:34:16.130 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:34:16 compute-0 nova_compute[247704]: 2026-01-31 08:34:16.311 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 182 op/s
Jan 31 08:34:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:16.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.487 247708 DEBUG nova.compute.manager [req-9f67cc7f-ccf6-4c36-8015-142abb94952b req-4820e617-4cb9-4866-8502-4bd4759c082d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received event network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.488 247708 DEBUG oslo_concurrency.lockutils [req-9f67cc7f-ccf6-4c36-8015-142abb94952b req-4820e617-4cb9-4866-8502-4bd4759c082d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.489 247708 DEBUG oslo_concurrency.lockutils [req-9f67cc7f-ccf6-4c36-8015-142abb94952b req-4820e617-4cb9-4866-8502-4bd4759c082d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.489 247708 DEBUG oslo_concurrency.lockutils [req-9f67cc7f-ccf6-4c36-8015-142abb94952b req-4820e617-4cb9-4866-8502-4bd4759c082d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.489 247708 DEBUG nova.compute.manager [req-9f67cc7f-ccf6-4c36-8015-142abb94952b req-4820e617-4cb9-4866-8502-4bd4759c082d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Processing event network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.490 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.496 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848457.4954011, 4bb4a501-becc-4e54-a269-3ef1e73b5f6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.496 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] VM Resumed (Lifecycle Event)
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.498 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.502 247708 INFO nova.virt.libvirt.driver [-] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Instance spawned successfully.
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.502 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:34:17 compute-0 ceph-mon[74496]: pgmap v3038: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 182 op/s
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.867 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.877 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.881 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.881 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.882 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.883 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.883 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.884 247708 DEBUG nova.virt.libvirt.driver [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:17.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:17 compute-0 nova_compute[247704]: 2026-01-31 08:34:17.982 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:34:18 compute-0 nova_compute[247704]: 2026-01-31 08:34:18.191 247708 INFO nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Took 17.95 seconds to spawn the instance on the hypervisor.
Jan 31 08:34:18 compute-0 nova_compute[247704]: 2026-01-31 08:34:18.192 247708 DEBUG nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 147 op/s
Jan 31 08:34:18 compute-0 nova_compute[247704]: 2026-01-31 08:34:18.417 247708 INFO nova.compute.manager [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Took 19.46 seconds to build instance.
Jan 31 08:34:18 compute-0 nova_compute[247704]: 2026-01-31 08:34:18.442 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:18 compute-0 nova_compute[247704]: 2026-01-31 08:34:18.510 247708 DEBUG oslo_concurrency.lockutils [None req-48f7fd6f-50c7-44e1-ac4b-fe5ece040dc3 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:18.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:19 compute-0 ceph-mon[74496]: pgmap v3039: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 147 op/s
Jan 31 08:34:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/148716107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3577396168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:19.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:34:20
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'images', '.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:34:20 compute-0 nova_compute[247704]: 2026-01-31 08:34:20.657 247708 DEBUG nova.compute.manager [req-3bed9fff-57d2-4b50-8e2f-4bccb801fe87 req-79296ce5-c5b8-4f4e-9247-915547f7e5d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received event network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:20 compute-0 nova_compute[247704]: 2026-01-31 08:34:20.658 247708 DEBUG oslo_concurrency.lockutils [req-3bed9fff-57d2-4b50-8e2f-4bccb801fe87 req-79296ce5-c5b8-4f4e-9247-915547f7e5d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:20 compute-0 nova_compute[247704]: 2026-01-31 08:34:20.658 247708 DEBUG oslo_concurrency.lockutils [req-3bed9fff-57d2-4b50-8e2f-4bccb801fe87 req-79296ce5-c5b8-4f4e-9247-915547f7e5d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:20 compute-0 nova_compute[247704]: 2026-01-31 08:34:20.658 247708 DEBUG oslo_concurrency.lockutils [req-3bed9fff-57d2-4b50-8e2f-4bccb801fe87 req-79296ce5-c5b8-4f4e-9247-915547f7e5d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:20 compute-0 nova_compute[247704]: 2026-01-31 08:34:20.659 247708 DEBUG nova.compute.manager [req-3bed9fff-57d2-4b50-8e2f-4bccb801fe87 req-79296ce5-c5b8-4f4e-9247-915547f7e5d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] No waiting events found dispatching network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:34:20 compute-0 nova_compute[247704]: 2026-01-31 08:34:20.659 247708 WARNING nova.compute.manager [req-3bed9fff-57d2-4b50-8e2f-4bccb801fe87 req-79296ce5-c5b8-4f4e-9247-915547f7e5d1 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received unexpected event network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 for instance with vm_state active and task_state None.
Jan 31 08:34:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:20.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:34:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:34:21 compute-0 nova_compute[247704]: 2026-01-31 08:34:21.312 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:21 compute-0 ceph-mon[74496]: pgmap v3040: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Jan 31 08:34:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:21.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Jan 31 08:34:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:23 compute-0 nova_compute[247704]: 2026-01-31 08:34:23.445 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:23.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:23 compute-0 ceph-mon[74496]: pgmap v3041: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Jan 31 08:34:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 31 08:34:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:24.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:24 compute-0 podman[366774]: 2026-01-31 08:34:24.921003235 +0000 UTC m=+0.092447348 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:34:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:25.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:25 compute-0 ceph-mon[74496]: pgmap v3042: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 181 op/s
Jan 31 08:34:26 compute-0 nova_compute[247704]: 2026-01-31 08:34:26.314 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 369 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 3.8 MiB/s wr, 272 op/s
Jan 31 08:34:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:26.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:27.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:27 compute-0 ceph-mon[74496]: pgmap v3043: 305 pgs: 305 active+clean; 369 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 3.8 MiB/s wr, 272 op/s
Jan 31 08:34:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 271 op/s
Jan 31 08:34:28 compute-0 nova_compute[247704]: 2026-01-31 08:34:28.448 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:34:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:28.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:34:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:29.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:30 compute-0 ceph-mon[74496]: pgmap v3044: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 271 op/s
Jan 31 08:34:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 357 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.8 MiB/s wr, 315 op/s
Jan 31 08:34:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:34:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:34:30 compute-0 ovn_controller[149457]: 2026-01-31T08:34:30Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:24:aa 10.100.0.7
Jan 31 08:34:30 compute-0 ovn_controller[149457]: 2026-01-31T08:34:30Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:24:aa 10.100.0.7
Jan 31 08:34:31 compute-0 nova_compute[247704]: 2026-01-31 08:34:31.316 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:31.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:32 compute-0 nova_compute[247704]: 2026-01-31 08:34:32.003 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:32 compute-0 nova_compute[247704]: 2026-01-31 08:34:32.004 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:32 compute-0 ceph-mon[74496]: pgmap v3045: 305 pgs: 305 active+clean; 357 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.8 MiB/s wr, 315 op/s
Jan 31 08:34:32 compute-0 nova_compute[247704]: 2026-01-31 08:34:32.340 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:34:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 309 op/s
Jan 31 08:34:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:32.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:32 compute-0 nova_compute[247704]: 2026-01-31 08:34:32.887 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:32 compute-0 nova_compute[247704]: 2026-01-31 08:34:32.888 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:32 compute-0 nova_compute[247704]: 2026-01-31 08:34:32.898 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:34:32 compute-0 nova_compute[247704]: 2026-01-31 08:34:32.899 247708 INFO nova.compute.claims [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:34:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:33 compute-0 nova_compute[247704]: 2026-01-31 08:34:33.451 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:33.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:34 compute-0 ceph-mon[74496]: pgmap v3046: 305 pgs: 305 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 309 op/s
Jan 31 08:34:34 compute-0 sudo[366809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:34 compute-0 sudo[366809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:34 compute-0 sudo[366809]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:34 compute-0 nova_compute[247704]: 2026-01-31 08:34:34.302 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:34 compute-0 sudo[366834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:34 compute-0 sudo[366834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:34 compute-0 sudo[366834]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:34:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 66K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1611 writes, 7177 keys, 1610 commit groups, 1.0 writes per commit group, ingest: 10.36 MB, 0.02 MB/s
                                           Interval WAL: 1611 writes, 1610 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     48.0      1.82              0.23        46    0.040       0      0       0.0       0.0
                                             L6      1/0    9.97 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1     63.3     54.0      8.30              1.13        45    0.184    320K    24K       0.0       0.0
                                            Sum      1/0    9.97 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     51.9     52.9     10.12              1.36        91    0.111    320K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6     43.6     44.3      1.68              0.20        12    0.140     57K   3096       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     63.3     54.0      8.30              1.13        45    0.184    320K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     48.1      1.82              0.23        45    0.040       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.085, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.52 GB write, 0.10 MB/s write, 0.51 GB read, 0.10 MB/s read, 10.1 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 57.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000631 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3369,55.12 MB,18.1325%) FilterBlock(92,891.67 KB,0.286439%) IndexBlock(92,1.46 MB,0.481053%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:34:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 355 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 289 op/s
Jan 31 08:34:34 compute-0 sshd-session[366807]: Invalid user ubuntu from 45.148.10.240 port 60018
Jan 31 08:34:34 compute-0 sshd-session[366807]: Connection closed by invalid user ubuntu 45.148.10.240 port 60018 [preauth]
Jan 31 08:34:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:34:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:34.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:34:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3017647581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:34 compute-0 nova_compute[247704]: 2026-01-31 08:34:34.829 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:34 compute-0 nova_compute[247704]: 2026-01-31 08:34:34.836 247708 DEBUG nova.compute.provider_tree [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:34:35 compute-0 nova_compute[247704]: 2026-01-31 08:34:35.047 247708 DEBUG nova.scheduler.client.report [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:34:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3017647581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/206312902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:35 compute-0 nova_compute[247704]: 2026-01-31 08:34:35.330 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:35 compute-0 nova_compute[247704]: 2026-01-31 08:34:35.331 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:34:35 compute-0 nova_compute[247704]: 2026-01-31 08:34:35.616 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:34:35 compute-0 nova_compute[247704]: 2026-01-31 08:34:35.616 247708 DEBUG nova.network.neutron [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00647697399032198 of space, bias 1.0, pg target 1.943092197096594 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.6464803764191328 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:34:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:34:35 compute-0 nova_compute[247704]: 2026-01-31 08:34:35.878 247708 DEBUG nova.policy [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6968a1ee10e4e3b8651ffe0240a7e46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:34:35 compute-0 nova_compute[247704]: 2026-01-31 08:34:35.881 247708 INFO nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:34:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:34:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:35.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:34:36 compute-0 ceph-mon[74496]: pgmap v3047: 305 pgs: 305 active+clean; 355 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 289 op/s
Jan 31 08:34:36 compute-0 nova_compute[247704]: 2026-01-31 08:34:36.145 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:34:36 compute-0 nova_compute[247704]: 2026-01-31 08:34:36.318 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 359 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 289 op/s
Jan 31 08:34:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.210 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.212 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.212 247708 INFO nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Creating image(s)
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.244 247708 DEBUG nova.storage.rbd_utils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image c6c9300a-7126-4527-bb8f-79350d3b2d99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.280 247708 DEBUG nova.storage.rbd_utils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image c6c9300a-7126-4527-bb8f-79350d3b2d99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.311 247708 DEBUG nova.storage.rbd_utils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image c6c9300a-7126-4527-bb8f-79350d3b2d99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.316 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.621 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.622 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.623 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.623 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.624 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.626 247708 INFO nova.compute.manager [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Terminating instance
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.628 247708 DEBUG nova.compute.manager [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.630 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.631 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.635 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.636 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.669 247708 DEBUG nova.storage.rbd_utils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image c6c9300a-7126-4527-bb8f-79350d3b2d99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.674 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c6c9300a-7126-4527-bb8f-79350d3b2d99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:37 compute-0 kernel: tap8e84479a-dd (unregistering): left promiscuous mode
Jan 31 08:34:37 compute-0 NetworkManager[49108]: <info>  [1769848477.8092] device (tap8e84479a-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.809 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:37 compute-0 ovn_controller[149457]: 2026-01-31T08:34:37Z|00771|binding|INFO|Releasing lport 8e84479a-ddb3-40f5-871f-c61675953845 from this chassis (sb_readonly=0)
Jan 31 08:34:37 compute-0 ovn_controller[149457]: 2026-01-31T08:34:37Z|00772|binding|INFO|Setting lport 8e84479a-ddb3-40f5-871f-c61675953845 down in Southbound
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.817 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:37 compute-0 ovn_controller[149457]: 2026-01-31T08:34:37Z|00773|binding|INFO|Removing iface tap8e84479a-dd ovn-installed in OVS
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.824 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:37 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b0.scope: Deactivated successfully.
Jan 31 08:34:37 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b0.scope: Consumed 13.678s CPU time.
Jan 31 08:34:37 compute-0 systemd-machined[214448]: Machine qemu-82-instance-000000b0 terminated.
Jan 31 08:34:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:37.899 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:24:aa 10.100.0.7'], port_security=['fa:16:3e:03:24:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4bb4a501-becc-4e54-a269-3ef1e73b5f6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd69b70b0e4e340758cc43d45c1113d2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c7793481-993b-4688-8da1-03fa2a12b295', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f6c418e-e1df-4f83-9060-d1845d483973, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=8e84479a-ddb3-40f5-871f-c61675953845) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:34:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:37.900 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 8e84479a-ddb3-40f5-871f-c61675953845 in datapath 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d unbound from our chassis
Jan 31 08:34:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:37.903 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:34:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:37.904 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[35c6a31c-be38-4676-bb7d-6c128746f57c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:37 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:37.906 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d namespace which is not needed anymore
Jan 31 08:34:37 compute-0 kernel: tap8e84479a-dd: entered promiscuous mode
Jan 31 08:34:37 compute-0 kernel: tap8e84479a-dd (unregistering): left promiscuous mode
Jan 31 08:34:37 compute-0 NetworkManager[49108]: <info>  [1769848477.9217] manager: (tap8e84479a-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.925 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.940 247708 INFO nova.virt.libvirt.driver [-] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Instance destroyed successfully.
Jan 31 08:34:37 compute-0 nova_compute[247704]: 2026-01-31 08:34:37.941 247708 DEBUG nova.objects.instance [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lazy-loading 'resources' on Instance uuid 4bb4a501-becc-4e54-a269-3ef1e73b5f6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:37.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.105 247708 DEBUG nova.virt.libvirt.vif [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:33:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-2063876725',display_name='tempest-TestServerMultinode-server-2063876725',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-2063876725',id=176,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:34:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d69b70b0e4e340758cc43d45c1113d2f',ramdisk_id='',reservation_id='r-qbk4oh4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-2117392928',owner_user_name='tempest-TestServerMultinode-2117392928-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:34:18Z,user_data=None,user_id='f601ac628957410b995fa67e240e4871',uuid=4bb4a501-becc-4e54-a269-3ef1e73b5f6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.105 247708 DEBUG nova.network.os_vif_util [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converting VIF {"id": "8e84479a-ddb3-40f5-871f-c61675953845", "address": "fa:16:3e:03:24:aa", "network": {"id": "786b4c20-d3c9-4eba-b2c7-0e7b9805b52d", "bridge": "br-int", "label": "tempest-TestServerMultinode-1060339927-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9b7e7970671444098929a5073af4ba21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e84479a-dd", "ovs_interfaceid": "8e84479a-ddb3-40f5-871f-c61675953845", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.107 247708 DEBUG nova.network.os_vif_util [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:24:aa,bridge_name='br-int',has_traffic_filtering=True,id=8e84479a-ddb3-40f5-871f-c61675953845,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e84479a-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.108 247708 DEBUG os_vif [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:24:aa,bridge_name='br-int',has_traffic_filtering=True,id=8e84479a-ddb3-40f5-871f-c61675953845,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e84479a-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.112 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.112 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e84479a-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.147 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.149 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.153 247708 INFO os_vif [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:24:aa,bridge_name='br-int',has_traffic_filtering=True,id=8e84479a-ddb3-40f5-871f-c61675953845,network=Network(786b4c20-d3c9-4eba-b2c7-0e7b9805b52d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e84479a-dd')
Jan 31 08:34:38 compute-0 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[366755]: [NOTICE]   (366759) : haproxy version is 2.8.14-c23fe91
Jan 31 08:34:38 compute-0 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[366755]: [NOTICE]   (366759) : path to executable is /usr/sbin/haproxy
Jan 31 08:34:38 compute-0 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[366755]: [WARNING]  (366759) : Exiting Master process...
Jan 31 08:34:38 compute-0 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[366755]: [WARNING]  (366759) : Exiting Master process...
Jan 31 08:34:38 compute-0 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[366755]: [ALERT]    (366759) : Current worker (366761) exited with code 143 (Terminated)
Jan 31 08:34:38 compute-0 neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d[366755]: [WARNING]  (366759) : All workers exited. Exiting... (0)
Jan 31 08:34:38 compute-0 systemd[1]: libpod-15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd.scope: Deactivated successfully.
Jan 31 08:34:38 compute-0 podman[367009]: 2026-01-31 08:34:38.167326586 +0000 UTC m=+0.161633623 container died 15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.211 247708 DEBUG nova.network.neutron [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Successfully created port: 3eaec8f3-e8af-4992-82d1-25154406dead _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:34:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:38 compute-0 ceph-mon[74496]: pgmap v3048: 305 pgs: 305 active+clean; 359 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 289 op/s
Jan 31 08:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd-userdata-shm.mount: Deactivated successfully.
Jan 31 08:34:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-07c8ab6251db9ff8d3530670ab69fb23cabd5b1774b246fbbc50e32fa7f82ec0-merged.mount: Deactivated successfully.
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.325 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 c6c9300a-7126-4527-bb8f-79350d3b2d99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 359 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:34:38 compute-0 podman[367009]: 2026-01-31 08:34:38.41169974 +0000 UTC m=+0.406006777 container cleanup 15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:34:38 compute-0 systemd[1]: libpod-conmon-15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd.scope: Deactivated successfully.
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.421 247708 DEBUG nova.storage.rbd_utils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] resizing rbd image c6c9300a-7126-4527-bb8f-79350d3b2d99_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:34:38 compute-0 podman[367094]: 2026-01-31 08:34:38.642695374 +0000 UTC m=+0.207956391 container remove 15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.645 247708 DEBUG nova.objects.instance [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'migration_context' on Instance uuid c6c9300a-7126-4527-bb8f-79350d3b2d99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.647 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a502b6-4a2a-400b-b277-672b9432e6ca]: (4, ('Sat Jan 31 08:34:37 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d (15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd)\n15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd\nSat Jan 31 08:34:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d (15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd)\n15ad60eb9ae2bcb85761ddee8de7ad025f48e7e00e730619c19df0c47c12c0fd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.649 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[940917f6-1f1e-4d4b-bd2c-cbead7c348eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.651 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap786b4c20-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.653 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:38 compute-0 kernel: tap786b4c20-d0: left promiscuous mode
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.664 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6863afe9-1014-4482-8adf-11a09979bbd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.682 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[17637f49-fd64-4006-b42d-b9279b80f814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.683 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d6f6d9-d628-4eb5-bedb-8440048b125d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:38.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.697 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cac4fc1b-c7a9-4a7f-85cd-5d376424e037]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869780, 'reachable_time': 17135, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367146, 'error': None, 'target': 'ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.700 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-786b4c20-d3c9-4eba-b2c7-0e7b9805b52d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:34:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:38.700 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[f8d07f43-7429-41c9-9549-a9e204998a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d786b4c20\x2dd3c9\x2d4eba\x2db2c7\x2d0e7b9805b52d.mount: Deactivated successfully.
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.995 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.996 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Ensure instance console log exists: /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.997 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.997 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:38 compute-0 nova_compute[247704]: 2026-01-31 08:34:38.997 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:39 compute-0 ceph-mon[74496]: pgmap v3049: 305 pgs: 305 active+clean; 359 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 08:34:39 compute-0 nova_compute[247704]: 2026-01-31 08:34:39.559 247708 DEBUG nova.compute.manager [req-e58bda59-8178-4452-ae5f-e1be7ce53b92 req-1dcc480d-7c5b-4f9d-85c0-51dea8309ac8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received event network-vif-unplugged-8e84479a-ddb3-40f5-871f-c61675953845 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:39 compute-0 nova_compute[247704]: 2026-01-31 08:34:39.559 247708 DEBUG oslo_concurrency.lockutils [req-e58bda59-8178-4452-ae5f-e1be7ce53b92 req-1dcc480d-7c5b-4f9d-85c0-51dea8309ac8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:39 compute-0 nova_compute[247704]: 2026-01-31 08:34:39.559 247708 DEBUG oslo_concurrency.lockutils [req-e58bda59-8178-4452-ae5f-e1be7ce53b92 req-1dcc480d-7c5b-4f9d-85c0-51dea8309ac8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:39 compute-0 nova_compute[247704]: 2026-01-31 08:34:39.560 247708 DEBUG oslo_concurrency.lockutils [req-e58bda59-8178-4452-ae5f-e1be7ce53b92 req-1dcc480d-7c5b-4f9d-85c0-51dea8309ac8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:39 compute-0 nova_compute[247704]: 2026-01-31 08:34:39.560 247708 DEBUG nova.compute.manager [req-e58bda59-8178-4452-ae5f-e1be7ce53b92 req-1dcc480d-7c5b-4f9d-85c0-51dea8309ac8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] No waiting events found dispatching network-vif-unplugged-8e84479a-ddb3-40f5-871f-c61675953845 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:34:39 compute-0 nova_compute[247704]: 2026-01-31 08:34:39.560 247708 DEBUG nova.compute.manager [req-e58bda59-8178-4452-ae5f-e1be7ce53b92 req-1dcc480d-7c5b-4f9d-85c0-51dea8309ac8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received event network-vif-unplugged-8e84479a-ddb3-40f5-871f-c61675953845 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:34:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:39.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:40 compute-0 nova_compute[247704]: 2026-01-31 08:34:40.075 247708 INFO nova.virt.libvirt.driver [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Deleting instance files /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f_del
Jan 31 08:34:40 compute-0 nova_compute[247704]: 2026-01-31 08:34:40.076 247708 INFO nova.virt.libvirt.driver [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Deletion of /var/lib/nova/instances/4bb4a501-becc-4e54-a269-3ef1e73b5f6f_del complete
Jan 31 08:34:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 162 op/s
Jan 31 08:34:40 compute-0 nova_compute[247704]: 2026-01-31 08:34:40.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:40.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:41 compute-0 nova_compute[247704]: 2026-01-31 08:34:41.321 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:41 compute-0 nova_compute[247704]: 2026-01-31 08:34:41.323 247708 INFO nova.compute.manager [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Took 3.69 seconds to destroy the instance on the hypervisor.
Jan 31 08:34:41 compute-0 nova_compute[247704]: 2026-01-31 08:34:41.324 247708 DEBUG oslo.service.loopingcall [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:34:41 compute-0 nova_compute[247704]: 2026-01-31 08:34:41.324 247708 DEBUG nova.compute.manager [-] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:34:41 compute-0 nova_compute[247704]: 2026-01-31 08:34:41.324 247708 DEBUG nova.network.neutron [-] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:34:41 compute-0 nova_compute[247704]: 2026-01-31 08:34:41.355 247708 DEBUG nova.network.neutron [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Successfully updated port: 3eaec8f3-e8af-4992-82d1-25154406dead _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:34:41 compute-0 ceph-mon[74496]: pgmap v3050: 305 pgs: 305 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 162 op/s
Jan 31 08:34:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:41.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 323 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 2.8 MiB/s wr, 143 op/s
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.489 247708 DEBUG nova.compute.manager [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received event network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.489 247708 DEBUG oslo_concurrency.lockutils [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.490 247708 DEBUG oslo_concurrency.lockutils [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.490 247708 DEBUG oslo_concurrency.lockutils [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.490 247708 DEBUG nova.compute.manager [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] No waiting events found dispatching network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.491 247708 WARNING nova.compute.manager [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received unexpected event network-vif-plugged-8e84479a-ddb3-40f5-871f-c61675953845 for instance with vm_state active and task_state deleting.
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.491 247708 DEBUG nova.compute.manager [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-changed-3eaec8f3-e8af-4992-82d1-25154406dead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.491 247708 DEBUG nova.compute.manager [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Refreshing instance network info cache due to event network-changed-3eaec8f3-e8af-4992-82d1-25154406dead. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.492 247708 DEBUG oslo_concurrency.lockutils [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.492 247708 DEBUG oslo_concurrency.lockutils [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.492 247708 DEBUG nova.network.neutron [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Refreshing network info cache for port 3eaec8f3-e8af-4992-82d1-25154406dead _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:34:42 compute-0 nova_compute[247704]: 2026-01-31 08:34:42.497 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:34:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:34:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:34:43 compute-0 nova_compute[247704]: 2026-01-31 08:34:43.152 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:43 compute-0 nova_compute[247704]: 2026-01-31 08:34:43.524 247708 DEBUG nova.network.neutron [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:34:43 compute-0 nova_compute[247704]: 2026-01-31 08:34:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:43.634 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:34:43 compute-0 nova_compute[247704]: 2026-01-31 08:34:43.634 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:43.635 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:34:43 compute-0 nova_compute[247704]: 2026-01-31 08:34:43.729 247708 DEBUG nova.compute.manager [req-346dad56-30dc-45b6-b039-586bf8d602d9 req-ae781ac1-4466-45c8-a6c0-6778381edb82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Received event network-vif-deleted-8e84479a-ddb3-40f5-871f-c61675953845 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:43 compute-0 nova_compute[247704]: 2026-01-31 08:34:43.730 247708 INFO nova.compute.manager [req-346dad56-30dc-45b6-b039-586bf8d602d9 req-ae781ac1-4466-45c8-a6c0-6778381edb82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Neutron deleted interface 8e84479a-ddb3-40f5-871f-c61675953845; detaching it from the instance and deleting it from the info cache
Jan 31 08:34:43 compute-0 nova_compute[247704]: 2026-01-31 08:34:43.730 247708 DEBUG nova.network.neutron [req-346dad56-30dc-45b6-b039-586bf8d602d9 req-ae781ac1-4466-45c8-a6c0-6778381edb82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:43 compute-0 nova_compute[247704]: 2026-01-31 08:34:43.927 247708 DEBUG nova.network.neutron [-] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:43.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:44 compute-0 ceph-mon[74496]: pgmap v3051: 305 pgs: 305 active+clean; 323 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 2.8 MiB/s wr, 143 op/s
Jan 31 08:34:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/163603597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:44 compute-0 nova_compute[247704]: 2026-01-31 08:34:44.398 247708 DEBUG nova.network.neutron [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 614 KiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 31 08:34:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:44.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:44 compute-0 podman[367151]: 2026-01-31 08:34:44.89599734 +0000 UTC m=+0.064257516 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:34:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:45.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:46 compute-0 ceph-mon[74496]: pgmap v3052: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 614 KiB/s rd, 2.4 MiB/s wr, 121 op/s
Jan 31 08:34:46 compute-0 nova_compute[247704]: 2026-01-31 08:34:46.322 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1.9 MiB/s wr, 116 op/s
Jan 31 08:34:46 compute-0 nova_compute[247704]: 2026-01-31 08:34:46.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:46 compute-0 nova_compute[247704]: 2026-01-31 08:34:46.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:34:46 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:46.638 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:46.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:47 compute-0 nova_compute[247704]: 2026-01-31 08:34:47.037 247708 DEBUG nova.compute.manager [req-346dad56-30dc-45b6-b039-586bf8d602d9 req-ae781ac1-4466-45c8-a6c0-6778381edb82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Detach interface failed, port_id=8e84479a-ddb3-40f5-871f-c61675953845, reason: Instance 4bb4a501-becc-4e54-a269-3ef1e73b5f6f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:34:47 compute-0 ceph-mon[74496]: pgmap v3053: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1.9 MiB/s wr, 116 op/s
Jan 31 08:34:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:47.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:48 compute-0 nova_compute[247704]: 2026-01-31 08:34:48.154 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:34:48 compute-0 nova_compute[247704]: 2026-01-31 08:34:48.564 247708 INFO nova.compute.manager [-] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Took 7.24 seconds to deallocate network for instance.
Jan 31 08:34:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:48.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:48 compute-0 nova_compute[247704]: 2026-01-31 08:34:48.902 247708 DEBUG oslo_concurrency.lockutils [req-fc6a5547-39ff-4a2a-ae2a-4fad6180104f req-e2b9db2f-3cd4-4673-9c4e-196b190627e3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:34:48 compute-0 nova_compute[247704]: 2026-01-31 08:34:48.903 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquired lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:34:48 compute-0 nova_compute[247704]: 2026-01-31 08:34:48.904 247708 DEBUG nova.network.neutron [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:34:49 compute-0 nova_compute[247704]: 2026-01-31 08:34:49.062 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:49 compute-0 nova_compute[247704]: 2026-01-31 08:34:49.063 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:49 compute-0 ceph-mon[74496]: pgmap v3054: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 08:34:49 compute-0 nova_compute[247704]: 2026-01-31 08:34:49.703 247708 DEBUG oslo_concurrency.processutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:50 compute-0 nova_compute[247704]: 2026-01-31 08:34:50.082 247708 DEBUG nova.network.neutron [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:50 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721828798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:50 compute-0 nova_compute[247704]: 2026-01-31 08:34:50.157 247708 DEBUG oslo_concurrency.processutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:50 compute-0 nova_compute[247704]: 2026-01-31 08:34:50.163 247708 DEBUG nova.compute.provider_tree [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:34:50 compute-0 nova_compute[247704]: 2026-01-31 08:34:50.260 247708 DEBUG nova.scheduler.client.report [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:34:50 compute-0 nova_compute[247704]: 2026-01-31 08:34:50.398 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 469 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 31 08:34:50 compute-0 nova_compute[247704]: 2026-01-31 08:34:50.534 247708 INFO nova.scheduler.client.report [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Deleted allocations for instance 4bb4a501-becc-4e54-a269-3ef1e73b5f6f
Jan 31 08:34:50 compute-0 nova_compute[247704]: 2026-01-31 08:34:50.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3721828798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:50 compute-0 nova_compute[247704]: 2026-01-31 08:34:50.817 247708 DEBUG oslo_concurrency.lockutils [None req-d0d496b0-0e7f-4473-9835-7051df4f3d16 f601ac628957410b995fa67e240e4871 d69b70b0e4e340758cc43d45c1113d2f - - default default] Lock "4bb4a501-becc-4e54-a269-3ef1e73b5f6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:51 compute-0 sudo[367196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:51 compute-0 sudo[367196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-0 sudo[367196]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-0 sudo[367221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:51 compute-0 sudo[367221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-0 sudo[367221]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-0 sudo[367246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:51 compute-0 sudo[367246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-0 sudo[367246]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-0 sudo[367271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:34:51 compute-0 sudo[367271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-0 nova_compute[247704]: 2026-01-31 08:34:51.324 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:51 compute-0 ceph-mon[74496]: pgmap v3055: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 469 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 31 08:34:51 compute-0 sudo[367271]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:51.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:52 compute-0 nova_compute[247704]: 2026-01-31 08:34:52.349 247708 DEBUG nova.network.neutron [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updating instance_info_cache with network_info: [{"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:34:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 347 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Jan 31 08:34:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:34:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:52 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 58fcf9fc-13c5-4954-b2e8-fe659b2d3416 does not exist
Jan 31 08:34:52 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 03d813f5-eabc-4cbb-bad4-c6d4f4cbd3c4 does not exist
Jan 31 08:34:52 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d8d9acaf-3155-4531-8a0c-6b102fee3dbc does not exist
Jan 31 08:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:34:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:34:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:34:52 compute-0 sudo[367328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:52 compute-0 sudo[367328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:52 compute-0 sudo[367328]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:52 compute-0 sudo[367353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:52 compute-0 sudo[367353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:52 compute-0 sudo[367353]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:52 compute-0 sudo[367378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:52 compute-0 sudo[367378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:52 compute-0 sudo[367378]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:52 compute-0 nova_compute[247704]: 2026-01-31 08:34:52.938 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848477.936786, 4bb4a501-becc-4e54-a269-3ef1e73b5f6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:52 compute-0 nova_compute[247704]: 2026-01-31 08:34:52.939 247708 INFO nova.compute.manager [-] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] VM Stopped (Lifecycle Event)
Jan 31 08:34:52 compute-0 sudo[367403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:34:52 compute-0 sudo[367403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:34:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:34:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:34:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:34:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.156 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:53 compute-0 podman[367466]: 2026-01-31 08:34:53.286140204 +0000 UTC m=+0.045883226 container create 4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_johnson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:34:53 compute-0 systemd[1]: Started libpod-conmon-4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92.scope.
Jan 31 08:34:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:53 compute-0 podman[367466]: 2026-01-31 08:34:53.354130952 +0000 UTC m=+0.113874004 container init 4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:34:53 compute-0 podman[367466]: 2026-01-31 08:34:53.262183387 +0000 UTC m=+0.021926439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:34:53 compute-0 podman[367466]: 2026-01-31 08:34:53.362061336 +0000 UTC m=+0.121804368 container start 4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_johnson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:34:53 compute-0 exciting_johnson[367482]: 167 167
Jan 31 08:34:53 compute-0 systemd[1]: libpod-4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92.scope: Deactivated successfully.
Jan 31 08:34:53 compute-0 conmon[367482]: conmon 4dd92e1bd0139322983c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92.scope/container/memory.events
Jan 31 08:34:53 compute-0 podman[367466]: 2026-01-31 08:34:53.366930546 +0000 UTC m=+0.126673578 container attach 4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_johnson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:34:53 compute-0 podman[367466]: 2026-01-31 08:34:53.36754636 +0000 UTC m=+0.127289392 container died 4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e752137e54e67b8f1fc2ced88b695b3869060e4c7fe026b3af196528fa54060-merged.mount: Deactivated successfully.
Jan 31 08:34:53 compute-0 podman[367466]: 2026-01-31 08:34:53.408031923 +0000 UTC m=+0.167774955 container remove 4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_johnson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:34:53 compute-0 systemd[1]: libpod-conmon-4dd92e1bd0139322983c7230934e4e89c9664f02f1faf76fb6cd1d4128663d92.scope: Deactivated successfully.
Jan 31 08:34:53 compute-0 podman[367507]: 2026-01-31 08:34:53.606781287 +0000 UTC m=+0.058263459 container create 430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:34:53 compute-0 systemd[1]: Started libpod-conmon-430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433.scope.
Jan 31 08:34:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:53 compute-0 podman[367507]: 2026-01-31 08:34:53.58651921 +0000 UTC m=+0.038001432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378bb3656bf88b38c1a911ddceb3541fd927a03a732b2bb06220e0d614310914/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378bb3656bf88b38c1a911ddceb3541fd927a03a732b2bb06220e0d614310914/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378bb3656bf88b38c1a911ddceb3541fd927a03a732b2bb06220e0d614310914/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378bb3656bf88b38c1a911ddceb3541fd927a03a732b2bb06220e0d614310914/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/378bb3656bf88b38c1a911ddceb3541fd927a03a732b2bb06220e0d614310914/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:53 compute-0 podman[367507]: 2026-01-31 08:34:53.701574091 +0000 UTC m=+0.153056263 container init 430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:34:53 compute-0 podman[367507]: 2026-01-31 08:34:53.709056765 +0000 UTC m=+0.160538947 container start 430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 08:34:53 compute-0 podman[367507]: 2026-01-31 08:34:53.712730165 +0000 UTC m=+0.164212357 container attach 430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.915 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Releasing lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.917 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Instance network_info: |[{"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.920 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Start _get_guest_xml network_info=[{"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.926 247708 WARNING nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.933 247708 DEBUG nova.virt.libvirt.host [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.933 247708 DEBUG nova.virt.libvirt.host [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.937 247708 DEBUG nova.virt.libvirt.host [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.938 247708 DEBUG nova.virt.libvirt.host [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.939 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.939 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.940 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.940 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.940 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.940 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.940 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.941 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.941 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.941 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.941 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.942 247708 DEBUG nova.virt.hardware [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.944 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:53.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:34:53 compute-0 nova_compute[247704]: 2026-01-31 08:34:53.998 247708 DEBUG nova.compute.manager [None req-34e82ac1-6fda-49da-86fe-7647e4e6224f - - - - - -] [instance: 4bb4a501-becc-4e54-a269-3ef1e73b5f6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:54 compute-0 ceph-mon[74496]: pgmap v3056: 305 pgs: 305 active+clean; 347 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 1.6 MiB/s wr, 43 op/s
Jan 31 08:34:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1367228748' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:34:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1367228748' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:34:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:34:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4053869991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:54 compute-0 sudo[367548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:54 compute-0 sudo[367548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:54 compute-0 sudo[367548]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 304 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.9 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.417 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.448 247708 DEBUG nova.storage.rbd_utils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image c6c9300a-7126-4527-bb8f-79350d3b2d99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.454 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:54 compute-0 sudo[367579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:54 compute-0 sudo[367579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:54 compute-0 sudo[367579]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:54 compute-0 reverent_wilbur[367523]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:34:54 compute-0 reverent_wilbur[367523]: --> relative data size: 1.0
Jan 31 08:34:54 compute-0 reverent_wilbur[367523]: --> All data devices are unavailable
Jan 31 08:34:54 compute-0 systemd[1]: libpod-430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433.scope: Deactivated successfully.
Jan 31 08:34:54 compute-0 podman[367507]: 2026-01-31 08:34:54.520919772 +0000 UTC m=+0.972401954 container died 430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:34:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-378bb3656bf88b38c1a911ddceb3541fd927a03a732b2bb06220e0d614310914-merged.mount: Deactivated successfully.
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:54 compute-0 podman[367507]: 2026-01-31 08:34:54.584180763 +0000 UTC m=+1.035662935 container remove 430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:34:54 compute-0 systemd[1]: libpod-conmon-430a1f6196f93aa84ebe18ffe0bebbeeea429f5b25b0e42c63ab9d7b548c0433.scope: Deactivated successfully.
Jan 31 08:34:54 compute-0 sudo[367403]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.633 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.634 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.634 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.634 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.635 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:54 compute-0 sudo[367662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:54 compute-0 sudo[367662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:54 compute-0 sudo[367662]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:54 compute-0 sudo[367688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:54 compute-0 sudo[367688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:54 compute-0 sudo[367688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:54 compute-0 sudo[367713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:54 compute-0 sudo[367713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:54 compute-0 sudo[367713]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:54 compute-0 sudo[367757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:34:54 compute-0 sudo[367757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:34:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255376033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.921 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.923 247708 DEBUG nova.virt.libvirt.vif [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:34:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-100557639',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-100557639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=178,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWpNeypo9Q1sq9cZAjy5j4PdvjZU7wvIeGVblb4el5++/kXu5QxxUJCQ7sWuia4+hmpZjY2WQxIbUb1otO4A/PN5NRwLCFKeAM2g4yuKbNgm/SHmLCr0VD7rm0wEE+X+Q==',key_name='tempest-TestSecurityGroupsBasicOps-751432981',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-u4wqnf4n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:36Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=c6c9300a-7126-4527-bb8f-79350d3b2d99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.924 247708 DEBUG nova.network.os_vif_util [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.925 247708 DEBUG nova.network.os_vif_util [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:13:d6,bridge_name='br-int',has_traffic_filtering=True,id=3eaec8f3-e8af-4992-82d1-25154406dead,network=Network(effe8892-77d9-4f8d-a6b7-18d16342aaf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaec8f3-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:34:54 compute-0 nova_compute[247704]: 2026-01-31 08:34:54.926 247708 DEBUG nova.objects.instance [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'pci_devices' on Instance uuid c6c9300a-7126-4527-bb8f-79350d3b2d99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.034 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <uuid>c6c9300a-7126-4527-bb8f-79350d3b2d99</uuid>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <name>instance-000000b2</name>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-100557639</nova:name>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:34:53</nova:creationTime>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <nova:user uuid="c6968a1ee10e4e3b8651ffe0240a7e46">tempest-TestSecurityGroupsBasicOps-1014068786-project-member</nova:user>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <nova:project uuid="ba35ae24dbf3443e8a526dce39c6793b">tempest-TestSecurityGroupsBasicOps-1014068786</nova:project>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <nova:port uuid="3eaec8f3-e8af-4992-82d1-25154406dead">
Jan 31 08:34:55 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <system>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <entry name="serial">c6c9300a-7126-4527-bb8f-79350d3b2d99</entry>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <entry name="uuid">c6c9300a-7126-4527-bb8f-79350d3b2d99</entry>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </system>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <os>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   </os>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <features>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   </features>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c6c9300a-7126-4527-bb8f-79350d3b2d99_disk">
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       </source>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/c6c9300a-7126-4527-bb8f-79350d3b2d99_disk.config">
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       </source>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:34:55 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:13:13:d6"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <target dev="tap3eaec8f3-e8"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99/console.log" append="off"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <video>
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </video>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:34:55 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:34:55 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:34:55 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:34:55 compute-0 nova_compute[247704]: </domain>
Jan 31 08:34:55 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.036 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Preparing to wait for external event network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.036 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.036 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.036 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.037 247708 DEBUG nova.virt.libvirt.vif [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:34:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-100557639',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-100557639',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=178,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWpNeypo9Q1sq9cZAjy5j4PdvjZU7wvIeGVblb4el5++/kXu5QxxUJCQ7sWuia4+hmpZjY2WQxIbUb1otO4A/PN5NRwLCFKeAM2g4yuKbNgm/SHmLCr0VD7rm0wEE+X+Q==',key_name='tempest-TestSecurityGroupsBasicOps-751432981',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-u4wqnf4n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:36Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=c6c9300a-7126-4527-bb8f-79350d3b2d99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.038 247708 DEBUG nova.network.os_vif_util [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.038 247708 DEBUG nova.network.os_vif_util [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:13:d6,bridge_name='br-int',has_traffic_filtering=True,id=3eaec8f3-e8af-4992-82d1-25154406dead,network=Network(effe8892-77d9-4f8d-a6b7-18d16342aaf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaec8f3-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.039 247708 DEBUG os_vif [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:13:d6,bridge_name='br-int',has_traffic_filtering=True,id=3eaec8f3-e8af-4992-82d1-25154406dead,network=Network(effe8892-77d9-4f8d-a6b7-18d16342aaf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaec8f3-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.039 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.040 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.040 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:34:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109295381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.047 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.048 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eaec8f3-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.049 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3eaec8f3-e8, col_values=(('external_ids', {'iface-id': '3eaec8f3-e8af-4992-82d1-25154406dead', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:13:d6', 'vm-uuid': 'c6c9300a-7126-4527-bb8f-79350d3b2d99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:55 compute-0 NetworkManager[49108]: <info>  [1769848495.0529] manager: (tap3eaec8f3-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.055 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.059 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.060 247708 INFO os_vif [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:13:d6,bridge_name='br-int',has_traffic_filtering=True,id=3eaec8f3-e8af-4992-82d1-25154406dead,network=Network(effe8892-77d9-4f8d-a6b7-18d16342aaf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaec8f3-e8')
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.068 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:55 compute-0 podman[367812]: 2026-01-31 08:34:55.188198135 +0000 UTC m=+0.089327642 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.229 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.230 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.231 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.231 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.231 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No VIF found with MAC fa:16:3e:13:13:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.232 247708 INFO nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Using config drive
Jan 31 08:34:55 compute-0 podman[367842]: 2026-01-31 08:34:55.243005519 +0000 UTC m=+0.086702877 container create 87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shirley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.263 247708 DEBUG nova.storage.rbd_utils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image c6c9300a-7126-4527-bb8f-79350d3b2d99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:55 compute-0 podman[367842]: 2026-01-31 08:34:55.183014508 +0000 UTC m=+0.026711886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:34:55 compute-0 systemd[1]: Started libpod-conmon-87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c.scope.
Jan 31 08:34:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4053869991' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3255376033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2109295381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3869672271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.463 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.465 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4212MB free_disk=20.870807647705078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.465 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.466 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:55 compute-0 podman[367842]: 2026-01-31 08:34:55.584919332 +0000 UTC m=+0.428616780 container init 87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:55 compute-0 podman[367842]: 2026-01-31 08:34:55.59583451 +0000 UTC m=+0.439531878 container start 87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:34:55 compute-0 gifted_shirley[367888]: 167 167
Jan 31 08:34:55 compute-0 systemd[1]: libpod-87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c.scope: Deactivated successfully.
Jan 31 08:34:55 compute-0 podman[367842]: 2026-01-31 08:34:55.645570179 +0000 UTC m=+0.489267537 container attach 87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shirley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:55 compute-0 podman[367842]: 2026-01-31 08:34:55.647062476 +0000 UTC m=+0.490759844 container died 87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.779 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance c6c9300a-7126-4527-bb8f-79350d3b2d99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.781 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.781 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4796f28d8264178d46a2b499bbab7ad372e6fe45807995c1139e5de2d77531a6-merged.mount: Deactivated successfully.
Jan 31 08:34:55 compute-0 podman[367842]: 2026-01-31 08:34:55.811358935 +0000 UTC m=+0.655056293 container remove 87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 31 08:34:55 compute-0 systemd[1]: libpod-conmon-87ac14aef9654a5ea8a9ef28a41990a8e981ab8dadc4ee84d98c0aa1b6f23e6c.scope: Deactivated successfully.
Jan 31 08:34:55 compute-0 nova_compute[247704]: 2026-01-31 08:34:55.846 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:34:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:55.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:34:56 compute-0 podman[367916]: 2026-01-31 08:34:56.013335488 +0000 UTC m=+0.060787942 container create 4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:34:56 compute-0 systemd[1]: Started libpod-conmon-4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0.scope.
Jan 31 08:34:56 compute-0 podman[367916]: 2026-01-31 08:34:55.994769233 +0000 UTC m=+0.042221687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:34:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3e14c9227241a57bb8ca9258991fb6c07e5793a72e76a9c363d6d5cdfb2b7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3e14c9227241a57bb8ca9258991fb6c07e5793a72e76a9c363d6d5cdfb2b7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3e14c9227241a57bb8ca9258991fb6c07e5793a72e76a9c363d6d5cdfb2b7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d3e14c9227241a57bb8ca9258991fb6c07e5793a72e76a9c363d6d5cdfb2b7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:56 compute-0 podman[367916]: 2026-01-31 08:34:56.140975087 +0000 UTC m=+0.188427531 container init 4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:34:56 compute-0 podman[367916]: 2026-01-31 08:34:56.148855851 +0000 UTC m=+0.196308295 container start 4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:34:56 compute-0 podman[367916]: 2026-01-31 08:34:56.153232538 +0000 UTC m=+0.200684962 container attach 4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elbakyan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:34:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1064002843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:56 compute-0 nova_compute[247704]: 2026-01-31 08:34:56.304 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:56 compute-0 nova_compute[247704]: 2026-01-31 08:34:56.313 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:34:56 compute-0 nova_compute[247704]: 2026-01-31 08:34:56.328 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 31 08:34:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:56.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:56 compute-0 nova_compute[247704]: 2026-01-31 08:34:56.813 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]: {
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:     "0": [
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:         {
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "devices": [
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "/dev/loop3"
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             ],
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "lv_name": "ceph_lv0",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "lv_size": "7511998464",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "name": "ceph_lv0",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "tags": {
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.cluster_name": "ceph",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.crush_device_class": "",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.encrypted": "0",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.osd_id": "0",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.type": "block",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:                 "ceph.vdo": "0"
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             },
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "type": "block",
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:             "vg_name": "ceph_vg0"
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:         }
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]:     ]
Jan 31 08:34:56 compute-0 affectionate_elbakyan[367949]: }
Jan 31 08:34:56 compute-0 systemd[1]: libpod-4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0.scope: Deactivated successfully.
Jan 31 08:34:56 compute-0 podman[367916]: 2026-01-31 08:34:56.917597831 +0000 UTC m=+0.965050265 container died 4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elbakyan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:56 compute-0 ceph-mon[74496]: pgmap v3057: 305 pgs: 305 active+clean; 304 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.9 KiB/s rd, 1.5 MiB/s wr, 16 op/s
Jan 31 08:34:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3423816108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2292615389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1064002843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d3e14c9227241a57bb8ca9258991fb6c07e5793a72e76a9c363d6d5cdfb2b7d-merged.mount: Deactivated successfully.
Jan 31 08:34:56 compute-0 podman[367916]: 2026-01-31 08:34:56.98034823 +0000 UTC m=+1.027800664 container remove 4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 31 08:34:56 compute-0 nova_compute[247704]: 2026-01-31 08:34:56.982 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:34:56 compute-0 nova_compute[247704]: 2026-01-31 08:34:56.983 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:56 compute-0 systemd[1]: libpod-conmon-4abe15d8c49db909375d3659fd436227ccaa2bbb6c7f8a7e35d00a4e955137e0.scope: Deactivated successfully.
Jan 31 08:34:57 compute-0 sudo[367757]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:57 compute-0 sudo[367975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:57 compute-0 sudo[367975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:57 compute-0 sudo[367975]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:57 compute-0 sudo[368000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:57 compute-0 sudo[368000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:57 compute-0 sudo[368000]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:57 compute-0 nova_compute[247704]: 2026-01-31 08:34:57.165 247708 INFO nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Creating config drive at /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99/disk.config
Jan 31 08:34:57 compute-0 nova_compute[247704]: 2026-01-31 08:34:57.170 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgrzrlh4_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:57 compute-0 sudo[368025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:57 compute-0 sudo[368025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:57 compute-0 sudo[368025]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:57 compute-0 sudo[368051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:34:57 compute-0 sudo[368051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:57 compute-0 nova_compute[247704]: 2026-01-31 08:34:57.307 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgrzrlh4_" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:57 compute-0 nova_compute[247704]: 2026-01-31 08:34:57.339 247708 DEBUG nova.storage.rbd_utils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image c6c9300a-7126-4527-bb8f-79350d3b2d99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:34:57 compute-0 nova_compute[247704]: 2026-01-31 08:34:57.344 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99/disk.config c6c9300a-7126-4527-bb8f-79350d3b2d99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:57 compute-0 podman[368155]: 2026-01-31 08:34:57.559889441 +0000 UTC m=+0.037631134 container create 7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:57 compute-0 systemd[1]: Started libpod-conmon-7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb.scope.
Jan 31 08:34:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:34:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4220231484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:57 compute-0 podman[368155]: 2026-01-31 08:34:57.542405972 +0000 UTC m=+0.020147675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:34:57 compute-0 podman[368155]: 2026-01-31 08:34:57.651422055 +0000 UTC m=+0.129163838 container init 7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:34:57 compute-0 podman[368155]: 2026-01-31 08:34:57.660409025 +0000 UTC m=+0.138150718 container start 7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:57 compute-0 eloquent_swirles[368172]: 167 167
Jan 31 08:34:57 compute-0 systemd[1]: libpod-7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb.scope: Deactivated successfully.
Jan 31 08:34:57 compute-0 podman[368155]: 2026-01-31 08:34:57.666280079 +0000 UTC m=+0.144021852 container attach 7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:34:57 compute-0 podman[368155]: 2026-01-31 08:34:57.667407927 +0000 UTC m=+0.145149690 container died 7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 08:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca0f571bf9a07d545473f2af07f42d946f2bbb659bc30961d7265cd0c315d15-merged.mount: Deactivated successfully.
Jan 31 08:34:57 compute-0 podman[368155]: 2026-01-31 08:34:57.725659375 +0000 UTC m=+0.203401078 container remove 7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:34:57 compute-0 systemd[1]: libpod-conmon-7e7ed04773afaa2c611881597505c144c364344aa42c27b1673652703b8452cb.scope: Deactivated successfully.
Jan 31 08:34:57 compute-0 podman[368197]: 2026-01-31 08:34:57.876145355 +0000 UTC m=+0.058287190 container create 5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:57 compute-0 systemd[1]: Started libpod-conmon-5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4.scope.
Jan 31 08:34:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:57 compute-0 podman[368197]: 2026-01-31 08:34:57.854364021 +0000 UTC m=+0.036505916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5969b1930202f7a5b5b290d0297eebb1a0fd54fcb114c13112d111716071f385/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5969b1930202f7a5b5b290d0297eebb1a0fd54fcb114c13112d111716071f385/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5969b1930202f7a5b5b290d0297eebb1a0fd54fcb114c13112d111716071f385/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5969b1930202f7a5b5b290d0297eebb1a0fd54fcb114c13112d111716071f385/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:57 compute-0 podman[368197]: 2026-01-31 08:34:57.969505335 +0000 UTC m=+0.151647190 container init 5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 08:34:57 compute-0 podman[368197]: 2026-01-31 08:34:57.975989014 +0000 UTC m=+0.158130849 container start 5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:34:57 compute-0 podman[368197]: 2026-01-31 08:34:57.981987071 +0000 UTC m=+0.164128906 container attach 5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:34:57 compute-0 nova_compute[247704]: 2026-01-31 08:34:57.984 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:57 compute-0 nova_compute[247704]: 2026-01-31 08:34:57.985 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:34:57 compute-0 nova_compute[247704]: 2026-01-31 08:34:57.986 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:34:57 compute-0 ceph-mon[74496]: pgmap v3058: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 31 08:34:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4220231484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.083 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.083 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.084 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.084 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.167 247708 DEBUG oslo_concurrency.processutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99/disk.config c6c9300a-7126-4527-bb8f-79350d3b2d99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.823s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.168 247708 INFO nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Deleting local config drive /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99/disk.config because it was imported into RBD.
Jan 31 08:34:58 compute-0 kernel: tap3eaec8f3-e8: entered promiscuous mode
Jan 31 08:34:58 compute-0 NetworkManager[49108]: <info>  [1769848498.2339] manager: (tap3eaec8f3-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/336)
Jan 31 08:34:58 compute-0 ovn_controller[149457]: 2026-01-31T08:34:58Z|00774|binding|INFO|Claiming lport 3eaec8f3-e8af-4992-82d1-25154406dead for this chassis.
Jan 31 08:34:58 compute-0 ovn_controller[149457]: 2026-01-31T08:34:58Z|00775|binding|INFO|3eaec8f3-e8af-4992-82d1-25154406dead: Claiming fa:16:3e:13:13:d6 10.100.0.14
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.237 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:58 compute-0 systemd-udevd[368232]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.271 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:58 compute-0 systemd-machined[214448]: New machine qemu-83-instance-000000b2.
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.282 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:58 compute-0 ovn_controller[149457]: 2026-01-31T08:34:58Z|00776|binding|INFO|Setting lport 3eaec8f3-e8af-4992-82d1-25154406dead ovn-installed in OVS
Jan 31 08:34:58 compute-0 NetworkManager[49108]: <info>  [1769848498.2862] device (tap3eaec8f3-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:34:58 compute-0 NetworkManager[49108]: <info>  [1769848498.2870] device (tap3eaec8f3-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.288 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:58 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-000000b2.
Jan 31 08:34:58 compute-0 ovn_controller[149457]: 2026-01-31T08:34:58Z|00777|binding|INFO|Setting lport 3eaec8f3-e8af-4992-82d1-25154406dead up in Southbound
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.333 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:13:d6 10.100.0.14'], port_security=['fa:16:3e:13:13:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c6c9300a-7126-4527-bb8f-79350d3b2d99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-effe8892-77d9-4f8d-a6b7-18d16342aaf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '34189c60-de2d-42e8-858e-38d35028fcf6 a07d9fd6-325a-44eb-8786-8aa95ea9c7b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8646ae3-0ba7-4a4d-a93f-9ccdaad61a18, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3eaec8f3-e8af-4992-82d1-25154406dead) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.335 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3eaec8f3-e8af-4992-82d1-25154406dead in datapath effe8892-77d9-4f8d-a6b7-18d16342aaf9 bound to our chassis
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.338 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network effe8892-77d9-4f8d-a6b7-18d16342aaf9
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.350 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c313576a-6372-4691-8f5a-2c00d619c497]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.352 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeffe8892-71 in ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.357 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeffe8892-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.357 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[554e3d57-29c1-4d5a-b6ba-8c6a5f2a80da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.358 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9ec8ea-9b4d-4eb1-85cc-216bdec62732]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.369 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a95228-5a7d-4eb1-bb73-bb56d6fc8b20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.382 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c902876e-e57e-474d-91ca-be2b99f1310a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.412 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a000c8b2-7d73-4b1b-a9b7-e2d67d25fd52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.418 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bb398a25-e07a-4d66-9d81-ca6828678341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 NetworkManager[49108]: <info>  [1769848498.4198] manager: (tapeffe8892-70): new Veth device (/org/freedesktop/NetworkManager/Devices/337)
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.455 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[852a60e0-4eb2-4170-9ffd-97913f8de866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.458 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[95da4903-8c30-4736-8878-a445019349fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 NetworkManager[49108]: <info>  [1769848498.4890] device (tapeffe8892-70): carrier: link connected
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.495 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8f94ef8a-b8b2-43bc-b7ac-86ea21960126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.515 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a9279af8-9c17-4e83-a1e2-e58e4f313e46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeffe8892-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874110, 'reachable_time': 33167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368283, 'error': None, 'target': 'ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.533 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb0d0e4-379a-4bc3-b085-2e07bb7104ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:192a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874110, 'tstamp': 874110}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368284, 'error': None, 'target': 'ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.553 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f29079-6bde-4f22-b644-60568d078499]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeffe8892-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:19:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874110, 'reachable_time': 33167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368292, 'error': None, 'target': 'ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.583 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[be7e54e9-17a9-4d21-9ed4-ea847f4ffcd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.653 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4bbcd9-5a47-43fc-bc2e-3f99939a0296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.655 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeffe8892-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.655 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.655 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeffe8892-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.657 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:58 compute-0 NetworkManager[49108]: <info>  [1769848498.6589] manager: (tapeffe8892-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Jan 31 08:34:58 compute-0 kernel: tapeffe8892-70: entered promiscuous mode
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.660 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.664 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeffe8892-70, col_values=(('external_ids', {'iface-id': '15957490-998a-4a1b-bbd9-47328694c4d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.666 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:58 compute-0 ovn_controller[149457]: 2026-01-31T08:34:58Z|00778|binding|INFO|Releasing lport 15957490-998a-4a1b-bbd9-47328694c4d3 from this chassis (sb_readonly=0)
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.670 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/effe8892-77d9-4f8d-a6b7-18d16342aaf9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/effe8892-77d9-4f8d-a6b7-18d16342aaf9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.673 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d67a26-b2a9-49ff-8594-e7d6d5715bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.674 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-effe8892-77d9-4f8d-a6b7-18d16342aaf9
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/effe8892-77d9-4f8d-a6b7-18d16342aaf9.pid.haproxy
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID effe8892-77d9-4f8d-a6b7-18d16342aaf9
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.676 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:34:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:34:58.677 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9', 'env', 'PROCESS_TAG=haproxy-effe8892-77d9-4f8d-a6b7-18d16342aaf9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/effe8892-77d9-4f8d-a6b7-18d16342aaf9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.690 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848498.6899514, c6c9300a-7126-4527-bb8f-79350d3b2d99 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.691 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] VM Started (Lifecycle Event)
Jan 31 08:34:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:34:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:58.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:34:58 compute-0 competent_nash[368213]: {
Jan 31 08:34:58 compute-0 competent_nash[368213]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:34:58 compute-0 competent_nash[368213]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:34:58 compute-0 competent_nash[368213]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:34:58 compute-0 competent_nash[368213]:         "osd_id": 0,
Jan 31 08:34:58 compute-0 competent_nash[368213]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:34:58 compute-0 competent_nash[368213]:         "type": "bluestore"
Jan 31 08:34:58 compute-0 competent_nash[368213]:     }
Jan 31 08:34:58 compute-0 competent_nash[368213]: }
Jan 31 08:34:58 compute-0 systemd[1]: libpod-5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4.scope: Deactivated successfully.
Jan 31 08:34:58 compute-0 podman[368197]: 2026-01-31 08:34:58.923922838 +0000 UTC m=+1.106064683 container died 5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.944 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5969b1930202f7a5b5b290d0297eebb1a0fd54fcb114c13112d111716071f385-merged.mount: Deactivated successfully.
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.954 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848498.690257, c6c9300a-7126-4527-bb8f-79350d3b2d99 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:58 compute-0 nova_compute[247704]: 2026-01-31 08:34:58.957 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] VM Paused (Lifecycle Event)
Jan 31 08:34:58 compute-0 podman[368197]: 2026-01-31 08:34:58.997856571 +0000 UTC m=+1.179998406 container remove 5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 08:34:59 compute-0 systemd[1]: libpod-conmon-5316ee123c6ca4c10c6760ad8e7348b57cce56372e6bf990058d093d64c3fac4.scope: Deactivated successfully.
Jan 31 08:34:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1951587907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.019 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.025 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:34:59 compute-0 sudo[368051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:34:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:34:59 compute-0 podman[368368]: 2026-01-31 08:34:59.067508889 +0000 UTC m=+0.055047261 container create 4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:34:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:34:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b534aca0-7b6c-4658-8e7b-438aefa94f61 does not exist
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.076 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:34:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c1eb07d9-f578-43d0-a509-44e491583cd9 does not exist
Jan 31 08:34:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c62a5cc2-ac76-4db8-9fa1-9e1058f914e8 does not exist
Jan 31 08:34:59 compute-0 systemd[1]: Started libpod-conmon-4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227.scope.
Jan 31 08:34:59 compute-0 sudo[368379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:34:59 compute-0 podman[368368]: 2026-01-31 08:34:59.036687393 +0000 UTC m=+0.024225815 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:34:59 compute-0 sudo[368379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:59 compute-0 sudo[368379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2b6ce0d09af7372fe3ffb21261a06788c6ea97edb45055b1fdc8eeb7be9bdc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:59 compute-0 podman[368368]: 2026-01-31 08:34:59.165925693 +0000 UTC m=+0.153464115 container init 4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 08:34:59 compute-0 podman[368368]: 2026-01-31 08:34:59.17358801 +0000 UTC m=+0.161126392 container start 4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:34:59 compute-0 sudo[368412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:34:59 compute-0 sudo[368412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:59 compute-0 sudo[368412]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:59 compute-0 neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9[368407]: [NOTICE]   (368436) : New worker (368440) forked
Jan 31 08:34:59 compute-0 neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9[368407]: [NOTICE]   (368436) : Loading success.
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.291 247708 DEBUG nova.compute.manager [req-8108233c-a86c-4099-ac18-bf0e1e74fab6 req-cba8e695-c4e2-46cb-bbe6-cb0cc7ea6793 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.291 247708 DEBUG oslo_concurrency.lockutils [req-8108233c-a86c-4099-ac18-bf0e1e74fab6 req-cba8e695-c4e2-46cb-bbe6-cb0cc7ea6793 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.292 247708 DEBUG oslo_concurrency.lockutils [req-8108233c-a86c-4099-ac18-bf0e1e74fab6 req-cba8e695-c4e2-46cb-bbe6-cb0cc7ea6793 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.292 247708 DEBUG oslo_concurrency.lockutils [req-8108233c-a86c-4099-ac18-bf0e1e74fab6 req-cba8e695-c4e2-46cb-bbe6-cb0cc7ea6793 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.292 247708 DEBUG nova.compute.manager [req-8108233c-a86c-4099-ac18-bf0e1e74fab6 req-cba8e695-c4e2-46cb-bbe6-cb0cc7ea6793 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Processing event network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.293 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.297 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848499.2978022, c6c9300a-7126-4527-bb8f-79350d3b2d99 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.298 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] VM Resumed (Lifecycle Event)
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.300 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.305 247708 INFO nova.virt.libvirt.driver [-] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Instance spawned successfully.
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.306 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.369 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.375 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.375 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.376 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.376 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.376 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.377 247708 DEBUG nova.virt.libvirt.driver [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.383 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.450 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.607 247708 INFO nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Took 22.40 seconds to spawn the instance on the hypervisor.
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.608 247708 DEBUG nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.655 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.772 247708 INFO nova.compute.manager [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Took 27.20 seconds to build instance.
Jan 31 08:34:59 compute-0 nova_compute[247704]: 2026-01-31 08:34:59.837 247708 DEBUG oslo_concurrency.lockutils [None req-34a278b7-ce2e-43e4-8466-75539dd87337 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:34:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:34:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:59.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:00 compute-0 nova_compute[247704]: 2026-01-31 08:35:00.052 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:00 compute-0 ceph-mon[74496]: pgmap v3059: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 31 08:35:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:35:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:35:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1931554054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 31 08:35:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:00.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:01 compute-0 nova_compute[247704]: 2026-01-31 08:35:01.328 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:01 compute-0 nova_compute[247704]: 2026-01-31 08:35:01.532 247708 DEBUG nova.compute.manager [req-9a257788-d07b-4cc0-afbf-ed9fb81a0793 req-b0ef129c-46d3-4b89-8c70-cf9776744546 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:35:01 compute-0 nova_compute[247704]: 2026-01-31 08:35:01.533 247708 DEBUG oslo_concurrency.lockutils [req-9a257788-d07b-4cc0-afbf-ed9fb81a0793 req-b0ef129c-46d3-4b89-8c70-cf9776744546 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:01 compute-0 nova_compute[247704]: 2026-01-31 08:35:01.533 247708 DEBUG oslo_concurrency.lockutils [req-9a257788-d07b-4cc0-afbf-ed9fb81a0793 req-b0ef129c-46d3-4b89-8c70-cf9776744546 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:01 compute-0 nova_compute[247704]: 2026-01-31 08:35:01.533 247708 DEBUG oslo_concurrency.lockutils [req-9a257788-d07b-4cc0-afbf-ed9fb81a0793 req-b0ef129c-46d3-4b89-8c70-cf9776744546 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:01 compute-0 nova_compute[247704]: 2026-01-31 08:35:01.533 247708 DEBUG nova.compute.manager [req-9a257788-d07b-4cc0-afbf-ed9fb81a0793 req-b0ef129c-46d3-4b89-8c70-cf9776744546 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] No waiting events found dispatching network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:35:01 compute-0 nova_compute[247704]: 2026-01-31 08:35:01.534 247708 WARNING nova.compute.manager [req-9a257788-d07b-4cc0-afbf-ed9fb81a0793 req-b0ef129c-46d3-4b89-8c70-cf9776744546 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received unexpected event network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead for instance with vm_state active and task_state None.
Jan 31 08:35:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:35:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:01.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:35:02 compute-0 ceph-mon[74496]: pgmap v3060: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 31 08:35:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 08:35:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:02.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4115752316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:03.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 132 op/s
Jan 31 08:35:04 compute-0 ceph-mon[74496]: pgmap v3061: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 08:35:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1044282013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:04.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:05 compute-0 nova_compute[247704]: 2026-01-31 08:35:05.056 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:05 compute-0 ceph-mon[74496]: pgmap v3062: 305 pgs: 305 active+clean; 213 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 132 op/s
Jan 31 08:35:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:05.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:06 compute-0 nova_compute[247704]: 2026-01-31 08:35:06.331 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 776 KiB/s wr, 154 op/s
Jan 31 08:35:06 compute-0 nova_compute[247704]: 2026-01-31 08:35:06.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:07 compute-0 ceph-mon[74496]: pgmap v3063: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 776 KiB/s wr, 154 op/s
Jan 31 08:35:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:07.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 79 op/s
Jan 31 08:35:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:09 compute-0 ceph-mon[74496]: pgmap v3064: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 79 op/s
Jan 31 08:35:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:09.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:10 compute-0 nova_compute[247704]: 2026-01-31 08:35:10.059 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Jan 31 08:35:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:10.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:35:11.205 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:35:11.206 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:35:11.207 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:11 compute-0 nova_compute[247704]: 2026-01-31 08:35:11.334 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:11 compute-0 NetworkManager[49108]: <info>  [1769848511.5196] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Jan 31 08:35:11 compute-0 NetworkManager[49108]: <info>  [1769848511.5207] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Jan 31 08:35:11 compute-0 nova_compute[247704]: 2026-01-31 08:35:11.518 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:11 compute-0 nova_compute[247704]: 2026-01-31 08:35:11.566 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:11 compute-0 ovn_controller[149457]: 2026-01-31T08:35:11Z|00779|binding|INFO|Releasing lport 15957490-998a-4a1b-bbd9-47328694c4d3 from this chassis (sb_readonly=0)
Jan 31 08:35:11 compute-0 nova_compute[247704]: 2026-01-31 08:35:11.583 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:11 compute-0 nova_compute[247704]: 2026-01-31 08:35:11.589 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:11 compute-0 ceph-mon[74496]: pgmap v3065: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 83 op/s
Jan 31 08:35:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:11.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Jan 31 08:35:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:35:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:12.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:35:12 compute-0 nova_compute[247704]: 2026-01-31 08:35:12.923 247708 DEBUG nova.compute.manager [req-8eed27b2-789a-4297-9a0b-b7d4b32ea50d req-7c2a14d3-55c3-4f74-946c-728eb6e8c680 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-changed-3eaec8f3-e8af-4992-82d1-25154406dead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:35:12 compute-0 nova_compute[247704]: 2026-01-31 08:35:12.923 247708 DEBUG nova.compute.manager [req-8eed27b2-789a-4297-9a0b-b7d4b32ea50d req-7c2a14d3-55c3-4f74-946c-728eb6e8c680 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Refreshing instance network info cache due to event network-changed-3eaec8f3-e8af-4992-82d1-25154406dead. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:35:12 compute-0 nova_compute[247704]: 2026-01-31 08:35:12.923 247708 DEBUG oslo_concurrency.lockutils [req-8eed27b2-789a-4297-9a0b-b7d4b32ea50d req-7c2a14d3-55c3-4f74-946c-728eb6e8c680 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:35:12 compute-0 nova_compute[247704]: 2026-01-31 08:35:12.924 247708 DEBUG oslo_concurrency.lockutils [req-8eed27b2-789a-4297-9a0b-b7d4b32ea50d req-7c2a14d3-55c3-4f74-946c-728eb6e8c680 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:35:12 compute-0 nova_compute[247704]: 2026-01-31 08:35:12.924 247708 DEBUG nova.network.neutron [req-8eed27b2-789a-4297-9a0b-b7d4b32ea50d req-7c2a14d3-55c3-4f74-946c-728eb6e8c680 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Refreshing network info cache for port 3eaec8f3-e8af-4992-82d1-25154406dead _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:35:12 compute-0 ovn_controller[149457]: 2026-01-31T08:35:12Z|00780|binding|INFO|Releasing lport 15957490-998a-4a1b-bbd9-47328694c4d3 from this chassis (sb_readonly=0)
Jan 31 08:35:12 compute-0 nova_compute[247704]: 2026-01-31 08:35:12.969 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:13 compute-0 ceph-mon[74496]: pgmap v3066: 305 pgs: 305 active+clean; 214 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 82 op/s
Jan 31 08:35:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:13.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:14 compute-0 ovn_controller[149457]: 2026-01-31T08:35:14Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:13:d6 10.100.0.14
Jan 31 08:35:14 compute-0 ovn_controller[149457]: 2026-01-31T08:35:14Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:13:d6 10.100.0.14
Jan 31 08:35:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 865 KiB/s wr, 95 op/s
Jan 31 08:35:14 compute-0 sudo[368458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:14 compute-0 sudo[368458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:14 compute-0 sudo[368458]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:14 compute-0 sudo[368483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:14 compute-0 sudo[368483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:14 compute-0 sudo[368483]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:14.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:15 compute-0 nova_compute[247704]: 2026-01-31 08:35:15.095 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:15 compute-0 ceph-mon[74496]: pgmap v3067: 305 pgs: 305 active+clean; 222 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 865 KiB/s wr, 95 op/s
Jan 31 08:35:15 compute-0 podman[368509]: 2026-01-31 08:35:15.908359253 +0000 UTC m=+0.079313415 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:35:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:15.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:16 compute-0 nova_compute[247704]: 2026-01-31 08:35:16.360 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Jan 31 08:35:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:16.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:17 compute-0 ceph-mon[74496]: pgmap v3068: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Jan 31 08:35:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:17.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:18 compute-0 nova_compute[247704]: 2026-01-31 08:35:18.258 247708 DEBUG nova.network.neutron [req-8eed27b2-789a-4297-9a0b-b7d4b32ea50d req-7c2a14d3-55c3-4f74-946c-728eb6e8c680 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updated VIF entry in instance network info cache for port 3eaec8f3-e8af-4992-82d1-25154406dead. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:35:18 compute-0 nova_compute[247704]: 2026-01-31 08:35:18.259 247708 DEBUG nova.network.neutron [req-8eed27b2-789a-4297-9a0b-b7d4b32ea50d req-7c2a14d3-55c3-4f74-946c-728eb6e8c680 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updating instance_info_cache with network_info: [{"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:35:18 compute-0 nova_compute[247704]: 2026-01-31 08:35:18.358 247708 DEBUG oslo_concurrency.lockutils [req-8eed27b2-789a-4297-9a0b-b7d4b32ea50d req-7c2a14d3-55c3-4f74-946c-728eb6e8c680 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:35:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 31 08:35:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:19.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:20 compute-0 ceph-mon[74496]: pgmap v3069: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:20 compute-0 nova_compute[247704]: 2026-01-31 08:35:20.097 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:35:20
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'images', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data']
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:35:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:35:20 compute-0 ceph-mgr[74791]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 08:35:21 compute-0 nova_compute[247704]: 2026-01-31 08:35:21.364 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:21.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:22 compute-0 ceph-mon[74496]: pgmap v3070: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 31 08:35:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 31 08:35:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:22.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:24.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:24 compute-0 ceph-mon[74496]: pgmap v3071: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 31 08:35:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 31 08:35:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:24.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:25 compute-0 nova_compute[247704]: 2026-01-31 08:35:25.100 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:25 compute-0 podman[368534]: 2026-01-31 08:35:25.951485409 +0000 UTC m=+0.124218057 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:35:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:26.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:26 compute-0 ceph-mon[74496]: pgmap v3072: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 31 08:35:26 compute-0 nova_compute[247704]: 2026-01-31 08:35:26.367 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 144 op/s
Jan 31 08:35:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:26.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:27 compute-0 ceph-mon[74496]: pgmap v3073: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 144 op/s
Jan 31 08:35:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:28.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 08:35:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:28.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:29 compute-0 ceph-mon[74496]: pgmap v3074: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 08:35:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:30.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:30 compute-0 nova_compute[247704]: 2026-01-31 08:35:30.102 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:30.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:31 compute-0 nova_compute[247704]: 2026-01-31 08:35:31.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:31 compute-0 ceph-mon[74496]: pgmap v3075: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:32.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:35:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:32.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:35:33 compute-0 ovn_controller[149457]: 2026-01-31T08:35:33Z|00781|binding|INFO|Releasing lport 15957490-998a-4a1b-bbd9-47328694c4d3 from this chassis (sb_readonly=0)
Jan 31 08:35:33 compute-0 nova_compute[247704]: 2026-01-31 08:35:33.180 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:33 compute-0 ceph-mon[74496]: pgmap v3076: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:34.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:34 compute-0 nova_compute[247704]: 2026-01-31 08:35:34.155 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:34 compute-0 sudo[368565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:34 compute-0 sudo[368565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:34 compute-0 sudo[368565]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:34 compute-0 sudo[368590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:34 compute-0 sudo[368590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:34 compute-0 sudo[368590]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:34.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:35 compute-0 nova_compute[247704]: 2026-01-31 08:35:35.105 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:35 compute-0 ceph-mon[74496]: pgmap v3077: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004340459531453564 of space, bias 1.0, pg target 1.302137859436069 quantized to 32 (current 32)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002173955716080402 of space, bias 1.0, pg target 0.6500127591080402 quantized to 32 (current 32)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:35:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:35:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:35:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:36.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:35:36 compute-0 nova_compute[247704]: 2026-01-31 08:35:36.371 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:36.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:37 compute-0 nova_compute[247704]: 2026-01-31 08:35:37.686 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:38.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:35:38 compute-0 ceph-mon[74496]: pgmap v3078: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 08:35:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:40 compute-0 nova_compute[247704]: 2026-01-31 08:35:40.108 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:40.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:40 compute-0 ceph-mon[74496]: pgmap v3079: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:35:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:35:40 compute-0 nova_compute[247704]: 2026-01-31 08:35:40.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:40.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:41 compute-0 nova_compute[247704]: 2026-01-31 08:35:41.372 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:41 compute-0 ceph-mon[74496]: pgmap v3080: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Jan 31 08:35:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:35:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3676442032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:42.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s wr, 0 op/s
Jan 31 08:35:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3676442032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:35:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:43 compute-0 nova_compute[247704]: 2026-01-31 08:35:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:44.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:44 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:35:44 compute-0 ceph-mon[74496]: pgmap v3081: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s wr, 0 op/s
Jan 31 08:35:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Jan 31 08:35:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:45 compute-0 nova_compute[247704]: 2026-01-31 08:35:45.111 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:45 compute-0 ceph-mon[74496]: pgmap v3082: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Jan 31 08:35:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:46.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:46 compute-0 nova_compute[247704]: 2026-01-31 08:35:46.375 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 7.4 KiB/s wr, 0 op/s
Jan 31 08:35:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:46.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:46 compute-0 podman[368621]: 2026-01-31 08:35:46.914703748 +0000 UTC m=+0.074876046 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 08:35:47 compute-0 nova_compute[247704]: 2026-01-31 08:35:47.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:47 compute-0 nova_compute[247704]: 2026-01-31 08:35:47.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:35:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:48.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:48 compute-0 ceph-mon[74496]: pgmap v3083: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 7.4 KiB/s wr, 0 op/s
Jan 31 08:35:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.7 KiB/s wr, 0 op/s
Jan 31 08:35:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:48.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:50 compute-0 nova_compute[247704]: 2026-01-31 08:35:50.114 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:50.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:50 compute-0 ceph-mon[74496]: pgmap v3084: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.7 KiB/s wr, 0 op/s
Jan 31 08:35:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:35:50 compute-0 nova_compute[247704]: 2026-01-31 08:35:50.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:35:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:50.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:35:51 compute-0 nova_compute[247704]: 2026-01-31 08:35:51.378 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:35:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:52.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:35:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:35:52 compute-0 ceph-mon[74496]: pgmap v3085: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:35:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:52.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:35:53.343 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:35:53 compute-0 nova_compute[247704]: 2026-01-31 08:35:53.344 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:35:53.345 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:35:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:53 compute-0 ceph-mon[74496]: pgmap v3086: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:35:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/947235518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:35:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/947235518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:35:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:35:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:54.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:35:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 KiB/s rd, 2.4 KiB/s wr, 2 op/s
Jan 31 08:35:54 compute-0 sudo[368645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:54 compute-0 sudo[368645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:54 compute-0 sudo[368645]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:54.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:54 compute-0 sudo[368670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:54 compute-0 sudo[368670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:54 compute-0 sudo[368670]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:55 compute-0 nova_compute[247704]: 2026-01-31 08:35:55.117 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1126290724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1326395793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:55 compute-0 nova_compute[247704]: 2026-01-31 08:35:55.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:56.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:56 compute-0 ceph-mon[74496]: pgmap v3087: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 KiB/s rd, 2.4 KiB/s wr, 2 op/s
Jan 31 08:35:56 compute-0 nova_compute[247704]: 2026-01-31 08:35:56.379 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:35:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 3.7 KiB/s wr, 3 op/s
Jan 31 08:35:56 compute-0 nova_compute[247704]: 2026-01-31 08:35:56.677 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:56 compute-0 nova_compute[247704]: 2026-01-31 08:35:56.678 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:56 compute-0 nova_compute[247704]: 2026-01-31 08:35:56.678 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:56 compute-0 nova_compute[247704]: 2026-01-31 08:35:56.678 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:35:56 compute-0 nova_compute[247704]: 2026-01-31 08:35:56.678 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:35:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:56.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:35:56 compute-0 podman[368698]: 2026-01-31 08:35:56.934500231 +0000 UTC m=+0.106649866 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 08:35:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:35:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2939846883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.151 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2939846883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.447 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.447 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.607 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.608 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4070MB free_disk=20.897106170654297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.609 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.609 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.886 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance c6c9300a-7126-4527-bb8f-79350d3b2d99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.887 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.887 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:35:57 compute-0 nova_compute[247704]: 2026-01-31 08:35:57.928 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:58.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:35:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2355544338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:58 compute-0 nova_compute[247704]: 2026-01-31 08:35:58.411 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:58 compute-0 ceph-mon[74496]: pgmap v3088: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 3.7 KiB/s wr, 3 op/s
Jan 31 08:35:58 compute-0 nova_compute[247704]: 2026-01-31 08:35:58.416 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:35:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Jan 31 08:35:58 compute-0 nova_compute[247704]: 2026-01-31 08:35:58.753 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:35:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:35:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:35:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:35:59 compute-0 nova_compute[247704]: 2026-01-31 08:35:59.110 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:35:59 compute-0 nova_compute[247704]: 2026-01-31 08:35:59.110 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:59 compute-0 sudo[368768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:59 compute-0 sudo[368768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:59 compute-0 sudo[368768]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2355544338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:59 compute-0 ceph-mon[74496]: pgmap v3089: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.7 KiB/s wr, 3 op/s
Jan 31 08:35:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/272334319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:35:59 compute-0 sudo[368793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:35:59 compute-0 sudo[368793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:59 compute-0 sudo[368793]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:59 compute-0 sudo[368819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:35:59 compute-0 sudo[368819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:59 compute-0 sudo[368819]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:59 compute-0 sudo[368844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:35:59 compute-0 sudo[368844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:00 compute-0 nova_compute[247704]: 2026-01-31 08:36:00.110 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:00 compute-0 nova_compute[247704]: 2026-01-31 08:36:00.111 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:36:00 compute-0 nova_compute[247704]: 2026-01-31 08:36:00.111 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:36:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:00 compute-0 nova_compute[247704]: 2026-01-31 08:36:00.157 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:00.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:00 compute-0 podman[368941]: 2026-01-31 08:36:00.221717456 +0000 UTC m=+0.140825883 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:00 compute-0 podman[368941]: 2026-01-31 08:36:00.328484445 +0000 UTC m=+0.247592842 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:36:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:36:00.347 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:36:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.8 KiB/s wr, 3 op/s
Jan 31 08:36:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:00.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:01 compute-0 podman[369097]: 2026-01-31 08:36:01.172569663 +0000 UTC m=+0.241731079 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:36:01 compute-0 podman[369119]: 2026-01-31 08:36:01.242482036 +0000 UTC m=+0.053935253 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:36:01 compute-0 podman[369097]: 2026-01-31 08:36:01.262564309 +0000 UTC m=+0.331725705 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:36:01 compute-0 nova_compute[247704]: 2026-01-31 08:36:01.418 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:01 compute-0 podman[369162]: 2026-01-31 08:36:01.540514314 +0000 UTC m=+0.066281695 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vendor=Red Hat, Inc., release=1793, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Jan 31 08:36:01 compute-0 podman[369183]: 2026-01-31 08:36:01.608186865 +0000 UTC m=+0.051039614 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.openshift.tags=Ceph keepalived, release=1793, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, architecture=x86_64, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 08:36:01 compute-0 podman[369162]: 2026-01-31 08:36:01.645807057 +0000 UTC m=+0.171574418 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, description=keepalived for Ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 08:36:01 compute-0 sudo[368844]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:36:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:36:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:01 compute-0 nova_compute[247704]: 2026-01-31 08:36:01.803 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:36:01 compute-0 nova_compute[247704]: 2026-01-31 08:36:01.804 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:36:01 compute-0 nova_compute[247704]: 2026-01-31 08:36:01.804 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:36:01 compute-0 nova_compute[247704]: 2026-01-31 08:36:01.804 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c6c9300a-7126-4527-bb8f-79350d3b2d99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:36:01 compute-0 sudo[369196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:01 compute-0 ceph-mon[74496]: pgmap v3090: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.8 KiB/s wr, 3 op/s
Jan 31 08:36:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:01 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:01 compute-0 sudo[369196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:01 compute-0 sudo[369196]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:01 compute-0 sudo[369221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:01 compute-0 sudo[369221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:01 compute-0 sudo[369221]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:01 compute-0 sudo[369246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:01 compute-0 sudo[369246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:01 compute-0 sudo[369246]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:01 compute-0 sudo[369271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:36:01 compute-0 sudo[369271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:02.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:02 compute-0 sudo[369271]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:36:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:36:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:36:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8667e19a-100d-48ae-94c4-4d617b4969aa does not exist
Jan 31 08:36:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 20b88c6f-bea3-4554-86e5-259515d05208 does not exist
Jan 31 08:36:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b370b475-1925-474d-9497-5634bc70fd94 does not exist
Jan 31 08:36:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:36:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:36:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:36:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 3 op/s
Jan 31 08:36:02 compute-0 sudo[369327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:02 compute-0 sudo[369327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:02 compute-0 sudo[369327]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:02 compute-0 sudo[369352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:02 compute-0 sudo[369352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:02 compute-0 sudo[369352]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:02 compute-0 sudo[369377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:02 compute-0 sudo[369377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:02 compute-0 sudo[369377]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:02 compute-0 sudo[369402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:36:02 compute-0 sudo[369402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2076501237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:36:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:36:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:02.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:03 compute-0 podman[369468]: 2026-01-31 08:36:02.929458843 +0000 UTC m=+0.023606320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:36:03 compute-0 podman[369468]: 2026-01-31 08:36:03.070414139 +0000 UTC m=+0.164561596 container create 699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:36:03 compute-0 systemd[1]: Started libpod-conmon-699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d.scope.
Jan 31 08:36:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:03 compute-0 podman[369468]: 2026-01-31 08:36:03.201888873 +0000 UTC m=+0.296036340 container init 699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:03 compute-0 podman[369468]: 2026-01-31 08:36:03.210262918 +0000 UTC m=+0.304410375 container start 699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:36:03 compute-0 nice_lamarr[369484]: 167 167
Jan 31 08:36:03 compute-0 systemd[1]: libpod-699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d.scope: Deactivated successfully.
Jan 31 08:36:03 compute-0 podman[369468]: 2026-01-31 08:36:03.264535439 +0000 UTC m=+0.358682916 container attach 699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:36:03 compute-0 podman[369468]: 2026-01-31 08:36:03.265328868 +0000 UTC m=+0.359476325 container died 699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:36:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f96626524b494461ed2148247870a87bfe9bd5c9e829e0caf91d925e0ea01a6c-merged.mount: Deactivated successfully.
Jan 31 08:36:03 compute-0 podman[369468]: 2026-01-31 08:36:03.464963924 +0000 UTC m=+0.559111381 container remove 699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:03 compute-0 systemd[1]: libpod-conmon-699c997520f034b67fa9222ee0f6f2e56a64cd8b77a8bc28f8b3b40707b6e01d.scope: Deactivated successfully.
Jan 31 08:36:03 compute-0 podman[369512]: 2026-01-31 08:36:03.628209096 +0000 UTC m=+0.059457318 container create 3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:36:03 compute-0 systemd[1]: Started libpod-conmon-3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c.scope.
Jan 31 08:36:03 compute-0 podman[369512]: 2026-01-31 08:36:03.59327643 +0000 UTC m=+0.024524672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:36:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/102204b64f32178b84dc576992eece4df2ed50b41d3ed2b84d732df8e4d8eb70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/102204b64f32178b84dc576992eece4df2ed50b41d3ed2b84d732df8e4d8eb70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/102204b64f32178b84dc576992eece4df2ed50b41d3ed2b84d732df8e4d8eb70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/102204b64f32178b84dc576992eece4df2ed50b41d3ed2b84d732df8e4d8eb70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/102204b64f32178b84dc576992eece4df2ed50b41d3ed2b84d732df8e4d8eb70/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:03 compute-0 podman[369512]: 2026-01-31 08:36:03.820463601 +0000 UTC m=+0.251711863 container init 3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:36:03 compute-0 podman[369512]: 2026-01-31 08:36:03.830967968 +0000 UTC m=+0.262216220 container start 3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jones, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:36:03 compute-0 podman[369512]: 2026-01-31 08:36:03.93913784 +0000 UTC m=+0.370386082 container attach 3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:03 compute-0 ceph-mon[74496]: pgmap v3091: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 3 op/s
Jan 31 08:36:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:04.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 3 op/s
Jan 31 08:36:04 compute-0 confident_jones[369530]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:36:04 compute-0 confident_jones[369530]: --> relative data size: 1.0
Jan 31 08:36:04 compute-0 confident_jones[369530]: --> All data devices are unavailable
Jan 31 08:36:04 compute-0 systemd[1]: libpod-3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c.scope: Deactivated successfully.
Jan 31 08:36:04 compute-0 podman[369512]: 2026-01-31 08:36:04.761337792 +0000 UTC m=+1.192586064 container died 3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jones, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-102204b64f32178b84dc576992eece4df2ed50b41d3ed2b84d732df8e4d8eb70-merged.mount: Deactivated successfully.
Jan 31 08:36:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:36:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:04.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:36:04 compute-0 podman[369512]: 2026-01-31 08:36:04.879803317 +0000 UTC m=+1.311051539 container remove 3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_jones, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:04 compute-0 systemd[1]: libpod-conmon-3d3fef16c41f01970b86a5d490df0e8e9ab63d41b1aa3766b7324298221bf88c.scope: Deactivated successfully.
Jan 31 08:36:04 compute-0 sudo[369402]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:04 compute-0 sudo[369559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:04 compute-0 sudo[369559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:04 compute-0 sudo[369559]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:05 compute-0 sudo[369584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:05 compute-0 sudo[369584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:05 compute-0 sudo[369584]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:05 compute-0 sudo[369609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:05 compute-0 sudo[369609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:05 compute-0 sudo[369609]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:05 compute-0 sudo[369634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:36:05 compute-0 sudo[369634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:05 compute-0 nova_compute[247704]: 2026-01-31 08:36:05.159 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:05 compute-0 podman[369700]: 2026-01-31 08:36:05.454457718 +0000 UTC m=+0.043279762 container create 520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:36:05 compute-0 systemd[1]: Started libpod-conmon-520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8.scope.
Jan 31 08:36:05 compute-0 podman[369700]: 2026-01-31 08:36:05.431424113 +0000 UTC m=+0.020246167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:36:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:05 compute-0 podman[369700]: 2026-01-31 08:36:05.577923656 +0000 UTC m=+0.166745720 container init 520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chatelet, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:05 compute-0 podman[369700]: 2026-01-31 08:36:05.5842297 +0000 UTC m=+0.173051734 container start 520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chatelet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:05 compute-0 thirsty_chatelet[369716]: 167 167
Jan 31 08:36:05 compute-0 systemd[1]: libpod-520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8.scope: Deactivated successfully.
Jan 31 08:36:05 compute-0 podman[369700]: 2026-01-31 08:36:05.591483369 +0000 UTC m=+0.180305433 container attach 520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:05 compute-0 podman[369700]: 2026-01-31 08:36:05.591920559 +0000 UTC m=+0.180742593 container died 520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chatelet, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0445c9036ccebbe7cb7f5884d18c0893f6d8fc5f3724e94423c6b78d1954391-merged.mount: Deactivated successfully.
Jan 31 08:36:05 compute-0 podman[369700]: 2026-01-31 08:36:05.661781492 +0000 UTC m=+0.250603536 container remove 520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_chatelet, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:05 compute-0 systemd[1]: libpod-conmon-520207830ab0aa32021a748210244a8e1cdd7038eb4f3c223f2fe44856c472e8.scope: Deactivated successfully.
Jan 31 08:36:05 compute-0 podman[369744]: 2026-01-31 08:36:05.807059345 +0000 UTC m=+0.054728014 container create a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:36:05 compute-0 systemd[1]: Started libpod-conmon-a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049.scope.
Jan 31 08:36:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:05 compute-0 podman[369744]: 2026-01-31 08:36:05.783141137 +0000 UTC m=+0.030809806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0f80d01088137a7f0bf0ec96e3d6bb734e2502bd2264a23236b37eb67edc93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0f80d01088137a7f0bf0ec96e3d6bb734e2502bd2264a23236b37eb67edc93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0f80d01088137a7f0bf0ec96e3d6bb734e2502bd2264a23236b37eb67edc93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba0f80d01088137a7f0bf0ec96e3d6bb734e2502bd2264a23236b37eb67edc93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:05 compute-0 podman[369744]: 2026-01-31 08:36:05.904386051 +0000 UTC m=+0.152054800 container init a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:05 compute-0 podman[369744]: 2026-01-31 08:36:05.912189203 +0000 UTC m=+0.159857892 container start a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:05 compute-0 podman[369744]: 2026-01-31 08:36:05.928048441 +0000 UTC m=+0.175717140 container attach a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:05 compute-0 ceph-mon[74496]: pgmap v3092: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 3 op/s
Jan 31 08:36:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:06.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:06 compute-0 nova_compute[247704]: 2026-01-31 08:36:06.421 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Jan 31 08:36:06 compute-0 relaxed_cray[369760]: {
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:     "0": [
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:         {
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "devices": [
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "/dev/loop3"
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             ],
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "lv_name": "ceph_lv0",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "lv_size": "7511998464",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "name": "ceph_lv0",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "tags": {
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.cluster_name": "ceph",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.crush_device_class": "",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.encrypted": "0",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.osd_id": "0",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.type": "block",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:                 "ceph.vdo": "0"
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             },
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "type": "block",
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:             "vg_name": "ceph_vg0"
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:         }
Jan 31 08:36:06 compute-0 relaxed_cray[369760]:     ]
Jan 31 08:36:06 compute-0 relaxed_cray[369760]: }
Jan 31 08:36:06 compute-0 systemd[1]: libpod-a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049.scope: Deactivated successfully.
Jan 31 08:36:06 compute-0 podman[369744]: 2026-01-31 08:36:06.700946903 +0000 UTC m=+0.948615562 container died a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0f80d01088137a7f0bf0ec96e3d6bb734e2502bd2264a23236b37eb67edc93-merged.mount: Deactivated successfully.
Jan 31 08:36:06 compute-0 podman[369744]: 2026-01-31 08:36:06.839732086 +0000 UTC m=+1.087400745 container remove a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:06 compute-0 systemd[1]: libpod-conmon-a23aca066c185f1254e84c4b8706cddfa0e56c4876697544314feaf8a91ba049.scope: Deactivated successfully.
Jan 31 08:36:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:06 compute-0 sudo[369634]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:06 compute-0 sudo[369785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:06 compute-0 sudo[369785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:06 compute-0 sudo[369785]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:06 compute-0 sudo[369810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:06 compute-0 sudo[369810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:07 compute-0 sudo[369810]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:07 compute-0 sudo[369835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:07 compute-0 sudo[369835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:07 compute-0 sudo[369835]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:07 compute-0 sudo[369860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:36:07 compute-0 sudo[369860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:07 compute-0 podman[369925]: 2026-01-31 08:36:07.493828425 +0000 UTC m=+0.101758055 container create f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 08:36:07 compute-0 podman[369925]: 2026-01-31 08:36:07.425145961 +0000 UTC m=+0.033075631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:36:07 compute-0 systemd[1]: Started libpod-conmon-f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043.scope.
Jan 31 08:36:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:07 compute-0 podman[369925]: 2026-01-31 08:36:07.66610323 +0000 UTC m=+0.274032880 container init f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:36:07 compute-0 podman[369925]: 2026-01-31 08:36:07.671631705 +0000 UTC m=+0.279561335 container start f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:07 compute-0 elastic_merkle[369942]: 167 167
Jan 31 08:36:07 compute-0 systemd[1]: libpod-f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043.scope: Deactivated successfully.
Jan 31 08:36:07 compute-0 conmon[369942]: conmon f1428e18860fa2601ab7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043.scope/container/memory.events
Jan 31 08:36:07 compute-0 podman[369925]: 2026-01-31 08:36:07.697428768 +0000 UTC m=+0.305358418 container attach f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:36:07 compute-0 podman[369925]: 2026-01-31 08:36:07.69835503 +0000 UTC m=+0.306284660 container died f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 08:36:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-985a400211a578e18d451595a6fa91fa9a825377999759a926ce81ceafb0efc3-merged.mount: Deactivated successfully.
Jan 31 08:36:07 compute-0 podman[369925]: 2026-01-31 08:36:07.946549807 +0000 UTC m=+0.554479437 container remove f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_merkle, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:36:07 compute-0 systemd[1]: libpod-conmon-f1428e18860fa2601ab769a18d55000411a6f250f5fddc61d23905b9c993e043.scope: Deactivated successfully.
Jan 31 08:36:08 compute-0 nova_compute[247704]: 2026-01-31 08:36:08.043 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:08 compute-0 ceph-mon[74496]: pgmap v3093: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Jan 31 08:36:08 compute-0 podman[369968]: 2026-01-31 08:36:08.127878463 +0000 UTC m=+0.091140576 container create 10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:36:08 compute-0 podman[369968]: 2026-01-31 08:36:08.057010636 +0000 UTC m=+0.020272769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:36:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:08.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:08 compute-0 systemd[1]: Started libpod-conmon-10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f.scope.
Jan 31 08:36:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef68c638a8162a51e28b4ad20542f42a93f2af4c18860828cf595f6ee78eb65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef68c638a8162a51e28b4ad20542f42a93f2af4c18860828cf595f6ee78eb65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef68c638a8162a51e28b4ad20542f42a93f2af4c18860828cf595f6ee78eb65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef68c638a8162a51e28b4ad20542f42a93f2af4c18860828cf595f6ee78eb65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:08 compute-0 podman[369968]: 2026-01-31 08:36:08.414352609 +0000 UTC m=+0.377614772 container init 10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 08:36:08 compute-0 podman[369968]: 2026-01-31 08:36:08.422681183 +0000 UTC m=+0.385943286 container start 10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:36:08 compute-0 podman[369968]: 2026-01-31 08:36:08.44542886 +0000 UTC m=+0.408691023 container attach 10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 1.2 KiB/s wr, 0 op/s
Jan 31 08:36:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:08.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:09 compute-0 nova_compute[247704]: 2026-01-31 08:36:09.198 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updating instance_info_cache with network_info: [{"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:36:09 compute-0 sweet_diffie[369984]: {
Jan 31 08:36:09 compute-0 sweet_diffie[369984]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:36:09 compute-0 sweet_diffie[369984]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:36:09 compute-0 sweet_diffie[369984]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:36:09 compute-0 sweet_diffie[369984]:         "osd_id": 0,
Jan 31 08:36:09 compute-0 sweet_diffie[369984]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:36:09 compute-0 sweet_diffie[369984]:         "type": "bluestore"
Jan 31 08:36:09 compute-0 sweet_diffie[369984]:     }
Jan 31 08:36:09 compute-0 sweet_diffie[369984]: }
Jan 31 08:36:09 compute-0 podman[369968]: 2026-01-31 08:36:09.303265896 +0000 UTC m=+1.266528009 container died 10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:36:09 compute-0 systemd[1]: libpod-10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f.scope: Deactivated successfully.
Jan 31 08:36:09 compute-0 ceph-mon[74496]: pgmap v3094: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 1.2 KiB/s wr, 0 op/s
Jan 31 08:36:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-aef68c638a8162a51e28b4ad20542f42a93f2af4c18860828cf595f6ee78eb65-merged.mount: Deactivated successfully.
Jan 31 08:36:09 compute-0 nova_compute[247704]: 2026-01-31 08:36:09.480 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:36:09 compute-0 nova_compute[247704]: 2026-01-31 08:36:09.480 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:36:09 compute-0 nova_compute[247704]: 2026-01-31 08:36:09.481 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:09 compute-0 nova_compute[247704]: 2026-01-31 08:36:09.482 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:09 compute-0 podman[369968]: 2026-01-31 08:36:09.491183423 +0000 UTC m=+1.454445536 container remove 10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:36:09 compute-0 systemd[1]: libpod-conmon-10ea775ffb258b0181f5c6d99b00279256ec0e42957c6bf3f5b6e386a9b6307f.scope: Deactivated successfully.
Jan 31 08:36:09 compute-0 sudo[369860]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:36:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:36:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a8597b87-781c-4fdf-8b44-0fb185ba70d9 does not exist
Jan 31 08:36:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 31c62235-71da-4f29-b173-335061e010fa does not exist
Jan 31 08:36:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b52bc22b-8c50-4a4e-887e-cf6fdb7f8512 does not exist
Jan 31 08:36:09 compute-0 sudo[370017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:09 compute-0 sudo[370017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:09 compute-0 sudo[370017]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:09 compute-0 sudo[370042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:36:09 compute-0 sudo[370042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:09 compute-0 sudo[370042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:10 compute-0 nova_compute[247704]: 2026-01-31 08:36:10.163 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:10.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:10 compute-0 nova_compute[247704]: 2026-01-31 08:36:10.179 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 10 KiB/s wr, 22 op/s
Jan 31 08:36:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:36:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:10.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:10 compute-0 nova_compute[247704]: 2026-01-31 08:36:10.927 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:36:11.206 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:36:11.207 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:36:11.207 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:11 compute-0 nova_compute[247704]: 2026-01-31 08:36:11.423 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:11 compute-0 ceph-mon[74496]: pgmap v3095: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 10 KiB/s wr, 22 op/s
Jan 31 08:36:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:12.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 9.5 KiB/s wr, 25 op/s
Jan 31 08:36:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:12.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:13 compute-0 ceph-mon[74496]: pgmap v3096: 305 pgs: 305 active+clean; 208 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 9.5 KiB/s wr, 25 op/s
Jan 31 08:36:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:36:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:14.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:36:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:14.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:15 compute-0 sudo[370069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:15 compute-0 sudo[370069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:15 compute-0 sudo[370069]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:15 compute-0 sudo[370094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:15 compute-0 sudo[370094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:15 compute-0 sudo[370094]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:15 compute-0 nova_compute[247704]: 2026-01-31 08:36:15.167 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:16 compute-0 ceph-mon[74496]: pgmap v3097: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1655685972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:16.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:16 compute-0 nova_compute[247704]: 2026-01-31 08:36:16.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:16.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:17 compute-0 podman[370121]: 2026-01-31 08:36:17.939621588 +0000 UTC m=+0.101274665 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:36:18 compute-0 ceph-mon[74496]: pgmap v3098: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:36:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:18.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:36:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:20 compute-0 nova_compute[247704]: 2026-01-31 08:36:20.170 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:20.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:20 compute-0 ceph-mon[74496]: pgmap v3099: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:36:20
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', '.mgr', 'backups']
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 29 op/s
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:36:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:36:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:20.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:21 compute-0 nova_compute[247704]: 2026-01-31 08:36:21.430 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:36:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:22.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:36:22 compute-0 ceph-mon[74496]: pgmap v3100: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 29 op/s
Jan 31 08:36:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Jan 31 08:36:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:22.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:23 compute-0 ceph-mon[74496]: pgmap v3101: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 9.2 KiB/s wr, 7 op/s
Jan 31 08:36:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:36:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:24.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:36:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 KiB/s rd, 8.7 KiB/s wr, 4 op/s
Jan 31 08:36:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:24.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:25 compute-0 nova_compute[247704]: 2026-01-31 08:36:25.173 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:25 compute-0 ceph-mon[74496]: pgmap v3102: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 KiB/s rd, 8.7 KiB/s wr, 4 op/s
Jan 31 08:36:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:26.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:26 compute-0 nova_compute[247704]: 2026-01-31 08:36:26.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 31 08:36:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:26.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:27 compute-0 nova_compute[247704]: 2026-01-31 08:36:27.723 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:27 compute-0 ceph-mon[74496]: pgmap v3103: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 31 08:36:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1161939193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:27 compute-0 podman[370147]: 2026-01-31 08:36:27.919213356 +0000 UTC m=+0.088992893 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:36:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:28.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 8.3 KiB/s wr, 11 op/s
Jan 31 08:36:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1696661387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:36:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1696661387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:36:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:28.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:36:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 51K writes, 192K keys, 51K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 51K writes, 18K syncs, 2.77 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5151 writes, 19K keys, 5151 commit groups, 1.0 writes per commit group, ingest: 17.64 MB, 0.03 MB/s
                                           Interval WAL: 5151 writes, 1874 syncs, 2.75 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:36:30 compute-0 ceph-mon[74496]: pgmap v3104: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s rd, 8.3 KiB/s wr, 11 op/s
Jan 31 08:36:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4077521189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:30 compute-0 nova_compute[247704]: 2026-01-31 08:36:30.176 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:36:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:30.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:36:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 508 KiB/s wr, 25 op/s
Jan 31 08:36:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:30.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:31 compute-0 nova_compute[247704]: 2026-01-31 08:36:31.434 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:32.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:32 compute-0 ceph-mon[74496]: pgmap v3105: 305 pgs: 305 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 508 KiB/s wr, 25 op/s
Jan 31 08:36:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 250 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.4 MiB/s wr, 40 op/s
Jan 31 08:36:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:32.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:34.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:34 compute-0 ceph-mon[74496]: pgmap v3106: 305 pgs: 305 active+clean; 250 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.4 MiB/s wr, 40 op/s
Jan 31 08:36:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 262 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Jan 31 08:36:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:34.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:35 compute-0 sudo[370178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:35 compute-0 sudo[370178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:35 compute-0 nova_compute[247704]: 2026-01-31 08:36:35.180 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:35 compute-0 sudo[370178]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:35 compute-0 sudo[370203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:35 compute-0 sudo[370203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:35 compute-0 sudo[370203]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:35 compute-0 ceph-mon[74496]: pgmap v3107: 305 pgs: 305 active+clean; 262 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0037144998720454125 of space, bias 1.0, pg target 1.1143499616136237 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:36:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:36:35 compute-0 ovn_controller[149457]: 2026-01-31T08:36:35Z|00782|binding|INFO|Releasing lport 15957490-998a-4a1b-bbd9-47328694c4d3 from this chassis (sb_readonly=0)
Jan 31 08:36:35 compute-0 nova_compute[247704]: 2026-01-31 08:36:35.967 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:36.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:36 compute-0 nova_compute[247704]: 2026-01-31 08:36:36.425 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:36 compute-0 nova_compute[247704]: 2026-01-31 08:36:36.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 31 08:36:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:36.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:38.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 31 08:36:38 compute-0 ceph-mon[74496]: pgmap v3108: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 31 08:36:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/120920381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:36:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2551731456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:36:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:38.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3052520029' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:36:40 compute-0 ceph-mon[74496]: pgmap v3109: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 68 op/s
Jan 31 08:36:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/572176419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:36:40 compute-0 nova_compute[247704]: 2026-01-31 08:36:40.184 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:40.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Jan 31 08:36:40 compute-0 nova_compute[247704]: 2026-01-31 08:36:40.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:40.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:41 compute-0 nova_compute[247704]: 2026-01-31 08:36:41.438 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 08:36:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:42.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:42 compute-0 ceph-mon[74496]: pgmap v3110: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Jan 31 08:36:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 3.1 MiB/s wr, 44 op/s
Jan 31 08:36:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:36:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:42.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:36:43 compute-0 ceph-mon[74496]: pgmap v3111: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 3.1 MiB/s wr, 44 op/s
Jan 31 08:36:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:44.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.1 MiB/s wr, 32 op/s
Jan 31 08:36:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:44.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:45 compute-0 nova_compute[247704]: 2026-01-31 08:36:45.186 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:45 compute-0 ceph-mon[74496]: pgmap v3112: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.1 MiB/s wr, 32 op/s
Jan 31 08:36:45 compute-0 nova_compute[247704]: 2026-01-31 08:36:45.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:46.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:46 compute-0 nova_compute[247704]: 2026-01-31 08:36:46.440 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 828 KiB/s wr, 35 op/s
Jan 31 08:36:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:46.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:47 compute-0 ceph-mon[74496]: pgmap v3113: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 828 KiB/s wr, 35 op/s
Jan 31 08:36:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:48.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:36:48 compute-0 podman[370235]: 2026-01-31 08:36:48.890304527 +0000 UTC m=+0.055573452 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:36:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:49 compute-0 nova_compute[247704]: 2026-01-31 08:36:49.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:49 compute-0 nova_compute[247704]: 2026-01-31 08:36:49.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:36:49 compute-0 ceph-mon[74496]: pgmap v3114: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:50 compute-0 nova_compute[247704]: 2026-01-31 08:36:50.190 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:50.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 13 KiB/s wr, 37 op/s
Jan 31 08:36:50 compute-0 nova_compute[247704]: 2026-01-31 08:36:50.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:50.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:51 compute-0 nova_compute[247704]: 2026-01-31 08:36:51.442 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:51 compute-0 ceph-mon[74496]: pgmap v3115: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 13 KiB/s wr, 37 op/s
Jan 31 08:36:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:36:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:52.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:36:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 25 KiB/s wr, 79 op/s
Jan 31 08:36:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:53.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:53 compute-0 ceph-mon[74496]: pgmap v3116: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 25 KiB/s wr, 79 op/s
Jan 31 08:36:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3253503215' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:36:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3253503215' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:36:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:54.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Jan 31 08:36:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:55.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:55 compute-0 nova_compute[247704]: 2026-01-31 08:36:55.193 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:55 compute-0 sudo[370260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:55 compute-0 sudo[370260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:55 compute-0 sudo[370260]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:55 compute-0 sudo[370285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:36:55 compute-0 sudo[370285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:55 compute-0 sudo[370285]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:55 compute-0 sshd-session[370258]: Invalid user ubuntu from 45.148.10.240 port 40006
Jan 31 08:36:55 compute-0 sshd-session[370258]: Connection closed by invalid user ubuntu 45.148.10.240 port 40006 [preauth]
Jan 31 08:36:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:56.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:56 compute-0 nova_compute[247704]: 2026-01-31 08:36:56.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 25 KiB/s wr, 135 op/s
Jan 31 08:36:56 compute-0 ceph-mon[74496]: pgmap v3117: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Jan 31 08:36:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:57.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:57 compute-0 nova_compute[247704]: 2026-01-31 08:36:57.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:57 compute-0 nova_compute[247704]: 2026-01-31 08:36:57.747 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:57 compute-0 nova_compute[247704]: 2026-01-31 08:36:57.747 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:57 compute-0 nova_compute[247704]: 2026-01-31 08:36:57.748 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:57 compute-0 nova_compute[247704]: 2026-01-31 08:36:57.748 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:36:57 compute-0 nova_compute[247704]: 2026-01-31 08:36:57.748 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:36:57 compute-0 ceph-mon[74496]: pgmap v3118: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 25 KiB/s wr, 135 op/s
Jan 31 08:36:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:36:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720688179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:58 compute-0 nova_compute[247704]: 2026-01-31 08:36:58.223 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:36:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:36:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:58.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:36:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 139 op/s
Jan 31 08:36:58 compute-0 nova_compute[247704]: 2026-01-31 08:36:58.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:36:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:36:58.739 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:36:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:36:58.740 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:36:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:36:58.741 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:36:58 compute-0 podman[370334]: 2026-01-31 08:36:58.915006922 +0000 UTC m=+0.089085479 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:36:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:36:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:36:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.038 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.039 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:36:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3720688179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.205 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.206 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4082MB free_disk=20.900920867919922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.206 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.207 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.839 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance c6c9300a-7126-4527-bb8f-79350d3b2d99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.840 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.840 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:36:59 compute-0 nova_compute[247704]: 2026-01-31 08:36:59.894 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:00 compute-0 nova_compute[247704]: 2026-01-31 08:37:00.196 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:00.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:37:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2280977155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:00 compute-0 nova_compute[247704]: 2026-01-31 08:37:00.379 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:00 compute-0 nova_compute[247704]: 2026-01-31 08:37:00.385 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:37:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 139 op/s
Jan 31 08:37:00 compute-0 nova_compute[247704]: 2026-01-31 08:37:00.595 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:37:00 compute-0 nova_compute[247704]: 2026-01-31 08:37:00.597 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:37:00 compute-0 nova_compute[247704]: 2026-01-31 08:37:00.597 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:01.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:01 compute-0 ceph-mon[74496]: pgmap v3119: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 139 op/s
Jan 31 08:37:01 compute-0 nova_compute[247704]: 2026-01-31 08:37:01.446 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:01 compute-0 nova_compute[247704]: 2026-01-31 08:37:01.599 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:01 compute-0 nova_compute[247704]: 2026-01-31 08:37:01.599 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:37:01 compute-0 nova_compute[247704]: 2026-01-31 08:37:01.599 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:37:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:02.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:02 compute-0 nova_compute[247704]: 2026-01-31 08:37:02.445 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:37:02 compute-0 nova_compute[247704]: 2026-01-31 08:37:02.445 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:37:02 compute-0 nova_compute[247704]: 2026-01-31 08:37:02.445 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:37:02 compute-0 nova_compute[247704]: 2026-01-31 08:37:02.446 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c6c9300a-7126-4527-bb8f-79350d3b2d99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:37:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 12 KiB/s wr, 117 op/s
Jan 31 08:37:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2280977155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:02 compute-0 ceph-mon[74496]: pgmap v3120: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 139 op/s
Jan 31 08:37:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1795149143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4265750067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:04.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 297 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 613 KiB/s wr, 78 op/s
Jan 31 08:37:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:05.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:05 compute-0 nova_compute[247704]: 2026-01-31 08:37:05.200 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3438790100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:05 compute-0 ceph-mon[74496]: pgmap v3121: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 12 KiB/s wr, 117 op/s
Jan 31 08:37:05 compute-0 ovn_controller[149457]: 2026-01-31T08:37:05Z|00783|binding|INFO|Releasing lport 15957490-998a-4a1b-bbd9-47328694c4d3 from this chassis (sb_readonly=0)
Jan 31 08:37:05 compute-0 nova_compute[247704]: 2026-01-31 08:37:05.620 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:06.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3154223725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:06 compute-0 ceph-mon[74496]: pgmap v3122: 305 pgs: 305 active+clean; 297 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 613 KiB/s wr, 78 op/s
Jan 31 08:37:06 compute-0 nova_compute[247704]: 2026-01-31 08:37:06.449 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:06 compute-0 nova_compute[247704]: 2026-01-31 08:37:06.543 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updating instance_info_cache with network_info: [{"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:37:06 compute-0 nova_compute[247704]: 2026-01-31 08:37:06.604 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:37:06 compute-0 nova_compute[247704]: 2026-01-31 08:37:06.605 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:37:06 compute-0 nova_compute[247704]: 2026-01-31 08:37:06.605 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:06 compute-0 nova_compute[247704]: 2026-01-31 08:37:06.605 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:06 compute-0 nova_compute[247704]: 2026-01-31 08:37:06.606 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:06 compute-0 nova_compute[247704]: 2026-01-31 08:37:06.606 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:37:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:07.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:07 compute-0 ceph-mon[74496]: pgmap v3123: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:08.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 316 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 408 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Jan 31 08:37:08 compute-0 ovn_controller[149457]: 2026-01-31T08:37:08Z|00784|binding|INFO|Releasing lport 15957490-998a-4a1b-bbd9-47328694c4d3 from this chassis (sb_readonly=0)
Jan 31 08:37:08 compute-0 nova_compute[247704]: 2026-01-31 08:37:08.789 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:08 compute-0 nova_compute[247704]: 2026-01-31 08:37:08.851 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:09.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:09.375733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629375837, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2126, "num_deletes": 251, "total_data_size": 3834833, "memory_usage": 3885824, "flush_reason": "Manual Compaction"}
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Jan 31 08:37:09 compute-0 nova_compute[247704]: 2026-01-31 08:37:09.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:09 compute-0 nova_compute[247704]: 2026-01-31 08:37:09.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629690951, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3755670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66412, "largest_seqno": 68537, "table_properties": {"data_size": 3746166, "index_size": 5997, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19787, "raw_average_key_size": 20, "raw_value_size": 3727160, "raw_average_value_size": 3842, "num_data_blocks": 262, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848413, "oldest_key_time": 1769848413, "file_creation_time": 1769848629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 315400 microseconds, and 7666 cpu microseconds.
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:09.691064) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3755670 bytes OK
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:09.691167) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:09.707000) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:09.707128) EVENT_LOG_v1 {"time_micros": 1769848629707116, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:09.707161) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3826228, prev total WAL file size 3826228, number of live WAL files 2.
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:09.708373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3667KB)], [152(10204KB)]
Jan 31 08:37:09 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848629708457, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 14204882, "oldest_snapshot_seqno": -1}
Jan 31 08:37:09 compute-0 nova_compute[247704]: 2026-01-31 08:37:09.767 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:37:10 compute-0 sudo[370390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:10 compute-0 sudo[370390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 sudo[370390]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 sudo[370415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:10 compute-0 sudo[370415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 sudo[370415]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 sudo[370440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:10 compute-0 sudo[370440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 sudo[370440]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 9618 keys, 12284802 bytes, temperature: kUnknown
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848630188502, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 12284802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12223035, "index_size": 36630, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 253434, "raw_average_key_size": 26, "raw_value_size": 12054574, "raw_average_value_size": 1253, "num_data_blocks": 1397, "num_entries": 9618, "num_filter_entries": 9618, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769848629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:37:10 compute-0 nova_compute[247704]: 2026-01-31 08:37:10.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:10 compute-0 sudo[370465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:37:10 compute-0 sudo[370465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:10.188860) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12284802 bytes
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:10.244402) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.6 rd, 25.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 10135, records dropped: 517 output_compression: NoCompression
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:10.244453) EVENT_LOG_v1 {"time_micros": 1769848630244433, "job": 94, "event": "compaction_finished", "compaction_time_micros": 480174, "compaction_time_cpu_micros": 30681, "output_level": 6, "num_output_files": 1, "total_output_size": 12284802, "num_input_records": 10135, "num_output_records": 9618, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848630245458, "job": 94, "event": "table_file_deletion", "file_number": 154}
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848630247032, "job": 94, "event": "table_file_deletion", "file_number": 152}
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:09.708156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:10.247167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:10.247210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:10.247212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:10.247214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:37:10.247215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:37:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:10.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 323 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 491 KiB/s rd, 3.3 MiB/s wr, 83 op/s
Jan 31 08:37:10 compute-0 sudo[370465]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 ceph-mon[74496]: pgmap v3124: 305 pgs: 305 active+clean; 316 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 408 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Jan 31 08:37:10 compute-0 sudo[370522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:10 compute-0 sudo[370522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 sudo[370522]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 nova_compute[247704]: 2026-01-31 08:37:10.761 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:10 compute-0 sudo[370547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:10 compute-0 sudo[370547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 sudo[370547]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 sudo[370572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:10 compute-0 sudo[370572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 sudo[370572]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 sudo[370597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 08:37:10 compute-0 sudo[370597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:11.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:11 compute-0 sudo[370597]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:37:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:37:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:11.207 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:11.207 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:11.208 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:37:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:37:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:37:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:37:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:37:11 compute-0 nova_compute[247704]: 2026-01-31 08:37:11.450 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fd817260-5c72-4fef-a5c2-e45b767ec7cc does not exist
Jan 31 08:37:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8c38524e-cd35-4c43-9151-7b20ef0a0c9a does not exist
Jan 31 08:37:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cb6affc5-636c-48fc-b996-4c8734443c42 does not exist
Jan 31 08:37:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:37:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:37:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:37:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:37:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:37:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:37:11 compute-0 sudo[370640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:11 compute-0 sudo[370640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:11 compute-0 sudo[370640]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:11 compute-0 sudo[370666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:11 compute-0 sudo[370666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:11 compute-0 sudo[370666]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:11 compute-0 sudo[370691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:11 compute-0 sudo[370691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:11 compute-0 sudo[370691]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:11 compute-0 sudo[370716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:37:11 compute-0 sudo[370716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:12 compute-0 podman[370779]: 2026-01-31 08:37:12.00495592 +0000 UTC m=+0.024302742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:37:12 compute-0 podman[370779]: 2026-01-31 08:37:12.149826565 +0000 UTC m=+0.169173357 container create 7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:37:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:37:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:12.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:37:12 compute-0 systemd[1]: Started libpod-conmon-7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3.scope.
Jan 31 08:37:12 compute-0 ceph-mon[74496]: pgmap v3125: 305 pgs: 305 active+clean; 323 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 491 KiB/s rd, 3.3 MiB/s wr, 83 op/s
Jan 31 08:37:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:37:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:37:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:37:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:37:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:37:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 340 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 590 KiB/s rd, 4.2 MiB/s wr, 102 op/s
Jan 31 08:37:12 compute-0 podman[370779]: 2026-01-31 08:37:12.531048931 +0000 UTC m=+0.550395743 container init 7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:37:12 compute-0 podman[370779]: 2026-01-31 08:37:12.5425151 +0000 UTC m=+0.561861932 container start 7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:37:12 compute-0 competent_tu[370796]: 167 167
Jan 31 08:37:12 compute-0 systemd[1]: libpod-7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3.scope: Deactivated successfully.
Jan 31 08:37:12 compute-0 podman[370779]: 2026-01-31 08:37:12.703908297 +0000 UTC m=+0.723255149 container attach 7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:37:12 compute-0 podman[370779]: 2026-01-31 08:37:12.704663546 +0000 UTC m=+0.724010348 container died 7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:37:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-362f882de701a023a973835647e63406b5d54cda6c74800472aac76d6f68ccb6-merged.mount: Deactivated successfully.
Jan 31 08:37:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:13.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:13 compute-0 podman[370779]: 2026-01-31 08:37:13.27119589 +0000 UTC m=+1.290542692 container remove 7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:37:13 compute-0 systemd[1]: libpod-conmon-7deb68291a92a67bce81e862cf2b3a3e877fd0f0ddcd5e3d2ac9e0c36e6037e3.scope: Deactivated successfully.
Jan 31 08:37:13 compute-0 ceph-mon[74496]: pgmap v3126: 305 pgs: 305 active+clean; 340 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 590 KiB/s rd, 4.2 MiB/s wr, 102 op/s
Jan 31 08:37:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:13 compute-0 podman[370822]: 2026-01-31 08:37:13.431779448 +0000 UTC m=+0.026392603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:37:13 compute-0 podman[370822]: 2026-01-31 08:37:13.529030845 +0000 UTC m=+0.123643970 container create e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:37:13 compute-0 systemd[1]: Started libpod-conmon-e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3.scope.
Jan 31 08:37:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2c0ee69d2e081a612e87b65171552539c51eabf95554bf9de18d6e5f231b3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2c0ee69d2e081a612e87b65171552539c51eabf95554bf9de18d6e5f231b3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2c0ee69d2e081a612e87b65171552539c51eabf95554bf9de18d6e5f231b3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2c0ee69d2e081a612e87b65171552539c51eabf95554bf9de18d6e5f231b3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae2c0ee69d2e081a612e87b65171552539c51eabf95554bf9de18d6e5f231b3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:13 compute-0 podman[370822]: 2026-01-31 08:37:13.784712076 +0000 UTC m=+0.379325201 container init e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:37:13 compute-0 podman[370822]: 2026-01-31 08:37:13.793262013 +0000 UTC m=+0.387875148 container start e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:37:13 compute-0 podman[370822]: 2026-01-31 08:37:13.854508454 +0000 UTC m=+0.449121599 container attach e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:37:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 350 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 599 KiB/s rd, 4.2 MiB/s wr, 104 op/s
Jan 31 08:37:14 compute-0 romantic_curie[370839]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:37:14 compute-0 romantic_curie[370839]: --> relative data size: 1.0
Jan 31 08:37:14 compute-0 romantic_curie[370839]: --> All data devices are unavailable
Jan 31 08:37:14 compute-0 systemd[1]: libpod-e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3.scope: Deactivated successfully.
Jan 31 08:37:14 compute-0 podman[370822]: 2026-01-31 08:37:14.619929619 +0000 UTC m=+1.214542754 container died e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:37:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae2c0ee69d2e081a612e87b65171552539c51eabf95554bf9de18d6e5f231b3e-merged.mount: Deactivated successfully.
Jan 31 08:37:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:15.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:15 compute-0 nova_compute[247704]: 2026-01-31 08:37:15.206 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:15 compute-0 podman[370822]: 2026-01-31 08:37:15.367834737 +0000 UTC m=+1.962447872 container remove e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:37:15 compute-0 systemd[1]: libpod-conmon-e60c32474ba07f861d4b9d1a910d42442fd5b9566a032d90d77eaa36fcd448d3.scope: Deactivated successfully.
Jan 31 08:37:15 compute-0 sudo[370716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:15 compute-0 sudo[370866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:15 compute-0 sudo[370866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:15 compute-0 sudo[370866]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:15 compute-0 sudo[370886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:15 compute-0 sudo[370886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:15 compute-0 sudo[370886]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:15 compute-0 sudo[370914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:15 compute-0 sudo[370914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:15 compute-0 sudo[370914]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:15 compute-0 sudo[370934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:15 compute-0 sudo[370934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:15 compute-0 sudo[370934]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:15 compute-0 sudo[370966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:15 compute-0 sudo[370966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:15 compute-0 sudo[370966]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:15 compute-0 sudo[370992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:37:15 compute-0 sudo[370992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:15 compute-0 ceph-mon[74496]: pgmap v3127: 305 pgs: 305 active+clean; 350 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 599 KiB/s rd, 4.2 MiB/s wr, 104 op/s
Jan 31 08:37:15 compute-0 podman[371057]: 2026-01-31 08:37:15.889449328 +0000 UTC m=+0.021104394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:37:16 compute-0 podman[371057]: 2026-01-31 08:37:16.044716627 +0000 UTC m=+0.176371663 container create 708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:37:16 compute-0 systemd[1]: Started libpod-conmon-708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da.scope.
Jan 31 08:37:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:37:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:16.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:37:16 compute-0 podman[371057]: 2026-01-31 08:37:16.287585327 +0000 UTC m=+0.419240453 container init 708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 08:37:16 compute-0 podman[371057]: 2026-01-31 08:37:16.297375585 +0000 UTC m=+0.429030631 container start 708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 08:37:16 compute-0 epic_mahavira[371074]: 167 167
Jan 31 08:37:16 compute-0 systemd[1]: libpod-708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da.scope: Deactivated successfully.
Jan 31 08:37:16 compute-0 conmon[371074]: conmon 708d65ebcfd238500589 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da.scope/container/memory.events
Jan 31 08:37:16 compute-0 podman[371057]: 2026-01-31 08:37:16.333397551 +0000 UTC m=+0.465052637 container attach 708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:16 compute-0 podman[371057]: 2026-01-31 08:37:16.334034017 +0000 UTC m=+0.465689073 container died 708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:37:16 compute-0 nova_compute[247704]: 2026-01-31 08:37:16.453 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-16f0d72505f09774ca78ba1f0f610188f0ae32bdb485a066dd23ae5582cc9535-merged.mount: Deactivated successfully.
Jan 31 08:37:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 3.7 MiB/s wr, 121 op/s
Jan 31 08:37:16 compute-0 podman[371057]: 2026-01-31 08:37:16.534766731 +0000 UTC m=+0.666421757 container remove 708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:37:16 compute-0 systemd[1]: libpod-conmon-708d65ebcfd238500589af73063ccd57e7f24b9c4d193678f8e33e3de29338da.scope: Deactivated successfully.
Jan 31 08:37:16 compute-0 podman[371099]: 2026-01-31 08:37:16.723454812 +0000 UTC m=+0.096266553 container create 0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 31 08:37:16 compute-0 podman[371099]: 2026-01-31 08:37:16.652821894 +0000 UTC m=+0.025633665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:37:16 compute-0 systemd[1]: Started libpod-conmon-0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00.scope.
Jan 31 08:37:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2676e88cc5e3e148baf922d0b019c79d533400304f564be9ab03578536b65b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2676e88cc5e3e148baf922d0b019c79d533400304f564be9ab03578536b65b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2676e88cc5e3e148baf922d0b019c79d533400304f564be9ab03578536b65b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2676e88cc5e3e148baf922d0b019c79d533400304f564be9ab03578536b65b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:16 compute-0 podman[371099]: 2026-01-31 08:37:16.86638059 +0000 UTC m=+0.239192331 container init 0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:37:16 compute-0 podman[371099]: 2026-01-31 08:37:16.874297883 +0000 UTC m=+0.247109624 container start 0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:16 compute-0 podman[371099]: 2026-01-31 08:37:16.896247206 +0000 UTC m=+0.269059087 container attach 0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:37:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:17.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:17 compute-0 elastic_golick[371116]: {
Jan 31 08:37:17 compute-0 elastic_golick[371116]:     "0": [
Jan 31 08:37:17 compute-0 elastic_golick[371116]:         {
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "devices": [
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "/dev/loop3"
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             ],
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "lv_name": "ceph_lv0",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "lv_size": "7511998464",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "name": "ceph_lv0",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "tags": {
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.cluster_name": "ceph",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.crush_device_class": "",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.encrypted": "0",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.osd_id": "0",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.type": "block",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:                 "ceph.vdo": "0"
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             },
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "type": "block",
Jan 31 08:37:17 compute-0 elastic_golick[371116]:             "vg_name": "ceph_vg0"
Jan 31 08:37:17 compute-0 elastic_golick[371116]:         }
Jan 31 08:37:17 compute-0 elastic_golick[371116]:     ]
Jan 31 08:37:17 compute-0 elastic_golick[371116]: }
Jan 31 08:37:17 compute-0 systemd[1]: libpod-0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00.scope: Deactivated successfully.
Jan 31 08:37:17 compute-0 podman[371099]: 2026-01-31 08:37:17.721956138 +0000 UTC m=+1.094767879 container died 0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2676e88cc5e3e148baf922d0b019c79d533400304f564be9ab03578536b65b-merged.mount: Deactivated successfully.
Jan 31 08:37:17 compute-0 podman[371099]: 2026-01-31 08:37:17.780268997 +0000 UTC m=+1.153080738 container remove 0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:37:17 compute-0 systemd[1]: libpod-conmon-0629763373e93d14b1f61a83014247f0d4a6ee21187d313b20c6a8601dc81b00.scope: Deactivated successfully.
Jan 31 08:37:17 compute-0 sudo[370992]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:17 compute-0 sudo[371137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:17 compute-0 sudo[371137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:17 compute-0 sudo[371137]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:17 compute-0 ceph-mon[74496]: pgmap v3128: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 3.7 MiB/s wr, 121 op/s
Jan 31 08:37:17 compute-0 sudo[371162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:17 compute-0 sudo[371162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:17 compute-0 sudo[371162]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:17 compute-0 sudo[371187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:17 compute-0 sudo[371187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:17 compute-0 sudo[371187]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:18 compute-0 sudo[371212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:37:18 compute-0 sudo[371212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:18.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:18 compute-0 podman[371278]: 2026-01-31 08:37:18.360128226 +0000 UTC m=+0.037367031 container create b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:37:18 compute-0 systemd[1]: Started libpod-conmon-b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5.scope.
Jan 31 08:37:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:18 compute-0 podman[371278]: 2026-01-31 08:37:18.431770229 +0000 UTC m=+0.109009044 container init b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 08:37:18 compute-0 podman[371278]: 2026-01-31 08:37:18.438521743 +0000 UTC m=+0.115760548 container start b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:37:18 compute-0 podman[371278]: 2026-01-31 08:37:18.34508741 +0000 UTC m=+0.022326235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:37:18 compute-0 podman[371278]: 2026-01-31 08:37:18.441825593 +0000 UTC m=+0.119064398 container attach b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 08:37:18 compute-0 affectionate_mclean[371294]: 167 167
Jan 31 08:37:18 compute-0 systemd[1]: libpod-b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5.scope: Deactivated successfully.
Jan 31 08:37:18 compute-0 podman[371278]: 2026-01-31 08:37:18.444845737 +0000 UTC m=+0.122084542 container died b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:37:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-06f961415d97c1dd502ed018db0d2c9f50eb5342922c086b7d98db7244639f4c-merged.mount: Deactivated successfully.
Jan 31 08:37:18 compute-0 podman[371278]: 2026-01-31 08:37:18.480168437 +0000 UTC m=+0.157407242 container remove b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mclean, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:37:18 compute-0 systemd[1]: libpod-conmon-b248b1322c7b9d6ee6afcbf86d7bc093864deca3a04386bc660617ad566265d5.scope: Deactivated successfully.
Jan 31 08:37:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 2.5 MiB/s wr, 106 op/s
Jan 31 08:37:18 compute-0 nova_compute[247704]: 2026-01-31 08:37:18.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:18 compute-0 podman[371317]: 2026-01-31 08:37:18.619699271 +0000 UTC m=+0.045484487 container create a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:37:18 compute-0 systemd[1]: Started libpod-conmon-a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88.scope.
Jan 31 08:37:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219c658b734e1bafda5929edb3a07b07f55fee6608f165116e8401d924f33d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219c658b734e1bafda5929edb3a07b07f55fee6608f165116e8401d924f33d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219c658b734e1bafda5929edb3a07b07f55fee6608f165116e8401d924f33d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219c658b734e1bafda5929edb3a07b07f55fee6608f165116e8401d924f33d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:18 compute-0 podman[371317]: 2026-01-31 08:37:18.599328696 +0000 UTC m=+0.025113952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:37:18 compute-0 podman[371317]: 2026-01-31 08:37:18.706531134 +0000 UTC m=+0.132316380 container init a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:18 compute-0 podman[371317]: 2026-01-31 08:37:18.71456697 +0000 UTC m=+0.140352186 container start a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:37:18 compute-0 podman[371317]: 2026-01-31 08:37:18.717699686 +0000 UTC m=+0.143484912 container attach a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:37:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:19.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]: {
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]:         "osd_id": 0,
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]:         "type": "bluestore"
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]:     }
Jan 31 08:37:19 compute-0 happy_brahmagupta[371333]: }
Jan 31 08:37:19 compute-0 systemd[1]: libpod-a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88.scope: Deactivated successfully.
Jan 31 08:37:19 compute-0 podman[371317]: 2026-01-31 08:37:19.529858478 +0000 UTC m=+0.955643754 container died a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1219c658b734e1bafda5929edb3a07b07f55fee6608f165116e8401d924f33d4-merged.mount: Deactivated successfully.
Jan 31 08:37:19 compute-0 podman[371317]: 2026-01-31 08:37:19.58623747 +0000 UTC m=+1.012022696 container remove a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 31 08:37:19 compute-0 systemd[1]: libpod-conmon-a7ccd8f7d70d7ff4cde9525a8c4f2c6afe0106d8d2859c562e44cdc8a3bbcb88.scope: Deactivated successfully.
Jan 31 08:37:19 compute-0 sudo[371212]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:19 compute-0 podman[371354]: 2026-01-31 08:37:19.620932893 +0000 UTC m=+0.056284080 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:37:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:37:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:37:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:19 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 437ecb09-a29c-42bf-a724-871063f31a7c does not exist
Jan 31 08:37:19 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 04178232-fba3-4812-9071-b0c1d2a5254c does not exist
Jan 31 08:37:19 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c417878e-6bbd-4860-b189-1efee4ba0a36 does not exist
Jan 31 08:37:19 compute-0 sudo[371389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:19 compute-0 sudo[371389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:19 compute-0 sudo[371389]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:19 compute-0 sudo[371415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:37:19 compute-0 sudo[371415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:19 compute-0 sudo[371415]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:19 compute-0 ceph-mon[74496]: pgmap v3129: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 2.5 MiB/s wr, 106 op/s
Jan 31 08:37:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:20 compute-0 nova_compute[247704]: 2026-01-31 08:37:20.210 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:37:20
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.meta', 'vms']
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:37:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:20.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 537 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:37:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:37:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:21.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:21 compute-0 nova_compute[247704]: 2026-01-31 08:37:21.455 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:21 compute-0 ceph-mon[74496]: pgmap v3130: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 537 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Jan 31 08:37:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:22.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 989 KiB/s wr, 48 op/s
Jan 31 08:37:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:23.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:24 compute-0 ceph-mon[74496]: pgmap v3131: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 989 KiB/s wr, 48 op/s
Jan 31 08:37:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:24.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 133 KiB/s wr, 28 op/s
Jan 31 08:37:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:25.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:25 compute-0 nova_compute[247704]: 2026-01-31 08:37:25.212 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:26 compute-0 ceph-mon[74496]: pgmap v3132: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 133 KiB/s wr, 28 op/s
Jan 31 08:37:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:26 compute-0 nova_compute[247704]: 2026-01-31 08:37:26.458 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 102 KiB/s wr, 41 op/s
Jan 31 08:37:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:27.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:27 compute-0 ceph-mon[74496]: pgmap v3133: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 102 KiB/s wr, 41 op/s
Jan 31 08:37:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:28.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 45 KiB/s wr, 32 op/s
Jan 31 08:37:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:29.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:29 compute-0 nova_compute[247704]: 2026-01-31 08:37:29.574 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:29 compute-0 nova_compute[247704]: 2026-01-31 08:37:29.690 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid c6c9300a-7126-4527-bb8f-79350d3b2d99 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:37:29 compute-0 nova_compute[247704]: 2026-01-31 08:37:29.691 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:29 compute-0 nova_compute[247704]: 2026-01-31 08:37:29.691 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:29 compute-0 podman[371446]: 2026-01-31 08:37:29.916887886 +0000 UTC m=+0.076018711 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 31 08:37:30 compute-0 nova_compute[247704]: 2026-01-31 08:37:30.059 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:30 compute-0 ceph-mon[74496]: pgmap v3134: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 45 KiB/s wr, 32 op/s
Jan 31 08:37:30 compute-0 nova_compute[247704]: 2026-01-31 08:37:30.214 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:30.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 257 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 529 KiB/s wr, 45 op/s
Jan 31 08:37:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:31.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:31 compute-0 nova_compute[247704]: 2026-01-31 08:37:31.461 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:31 compute-0 ceph-mon[74496]: pgmap v3135: 305 pgs: 305 active+clean; 257 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 529 KiB/s wr, 45 op/s
Jan 31 08:37:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:32.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 249 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 31 08:37:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/598306189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:37:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:33.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:33 compute-0 nova_compute[247704]: 2026-01-31 08:37:33.828 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:33 compute-0 ceph-mon[74496]: pgmap v3136: 305 pgs: 305 active+clean; 249 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 31 08:37:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2614260154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:37:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:34.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 31 08:37:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/356581148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:35.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:35 compute-0 nova_compute[247704]: 2026-01-31 08:37:35.217 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:35 compute-0 sudo[371474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:35 compute-0 sudo[371474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:35 compute-0 sudo[371474]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:35 compute-0 sudo[371500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:35 compute-0 sudo[371500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:35 compute-0 sudo[371500]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003168148322631677 of space, bias 1.0, pg target 0.950444496789503 quantized to 32 (current 32)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:37:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:37:36 compute-0 ceph-mon[74496]: pgmap v3137: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 31 08:37:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:36 compute-0 nova_compute[247704]: 2026-01-31 08:37:36.480 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:37.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:38.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:38 compute-0 ceph-mon[74496]: pgmap v3138: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 467 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:39.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:39 compute-0 nova_compute[247704]: 2026-01-31 08:37:39.240 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:39 compute-0 ceph-mon[74496]: pgmap v3139: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 467 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 08:37:39 compute-0 ovn_controller[149457]: 2026-01-31T08:37:39Z|00785|binding|INFO|Releasing lport 15957490-998a-4a1b-bbd9-47328694c4d3 from this chassis (sb_readonly=0)
Jan 31 08:37:39 compute-0 nova_compute[247704]: 2026-01-31 08:37:39.961 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:40.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 31 08:37:41 compute-0 nova_compute[247704]: 2026-01-31 08:37:41.093 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:41.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:41 compute-0 nova_compute[247704]: 2026-01-31 08:37:41.482 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:41 compute-0 nova_compute[247704]: 2026-01-31 08:37:41.679 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:42 compute-0 ceph-mon[74496]: pgmap v3140: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 31 08:37:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:42.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 31 08:37:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.361 247708 DEBUG nova.compute.manager [req-0c13cba7-156f-43d1-a9d5-f22c32ea19cf req-49341700-f729-4cb6-8fa8-e745d0d2a5be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-changed-3eaec8f3-e8af-4992-82d1-25154406dead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.361 247708 DEBUG nova.compute.manager [req-0c13cba7-156f-43d1-a9d5-f22c32ea19cf req-49341700-f729-4cb6-8fa8-e745d0d2a5be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Refreshing instance network info cache due to event network-changed-3eaec8f3-e8af-4992-82d1-25154406dead. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.362 247708 DEBUG oslo_concurrency.lockutils [req-0c13cba7-156f-43d1-a9d5-f22c32ea19cf req-49341700-f729-4cb6-8fa8-e745d0d2a5be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.362 247708 DEBUG oslo_concurrency.lockutils [req-0c13cba7-156f-43d1-a9d5-f22c32ea19cf req-49341700-f729-4cb6-8fa8-e745d0d2a5be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.362 247708 DEBUG nova.network.neutron [req-0c13cba7-156f-43d1-a9d5-f22c32ea19cf req-49341700-f729-4cb6-8fa8-e745d0d2a5be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Refreshing network info cache for port 3eaec8f3-e8af-4992-82d1-25154406dead _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:37:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.981 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.982 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.982 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.983 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.983 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.984 247708 INFO nova.compute.manager [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Terminating instance
Jan 31 08:37:43 compute-0 nova_compute[247704]: 2026-01-31 08:37:43.985 247708 DEBUG nova.compute.manager [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:37:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:37:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:44.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:37:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 413 KiB/s wr, 111 op/s
Jan 31 08:37:44 compute-0 kernel: tap3eaec8f3-e8 (unregistering): left promiscuous mode
Jan 31 08:37:44 compute-0 NetworkManager[49108]: <info>  [1769848664.5730] device (tap3eaec8f3-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:37:44 compute-0 nova_compute[247704]: 2026-01-31 08:37:44.627 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:44 compute-0 ovn_controller[149457]: 2026-01-31T08:37:44Z|00786|binding|INFO|Releasing lport 3eaec8f3-e8af-4992-82d1-25154406dead from this chassis (sb_readonly=0)
Jan 31 08:37:44 compute-0 ovn_controller[149457]: 2026-01-31T08:37:44Z|00787|binding|INFO|Setting lport 3eaec8f3-e8af-4992-82d1-25154406dead down in Southbound
Jan 31 08:37:44 compute-0 ovn_controller[149457]: 2026-01-31T08:37:44Z|00788|binding|INFO|Removing iface tap3eaec8f3-e8 ovn-installed in OVS
Jan 31 08:37:44 compute-0 nova_compute[247704]: 2026-01-31 08:37:44.630 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:44 compute-0 nova_compute[247704]: 2026-01-31 08:37:44.635 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:44 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Jan 31 08:37:44 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b2.scope: Consumed 20.828s CPU time.
Jan 31 08:37:44 compute-0 systemd-machined[214448]: Machine qemu-83-instance-000000b2 terminated.
Jan 31 08:37:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:44.788 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:13:d6 10.100.0.14'], port_security=['fa:16:3e:13:13:d6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c6c9300a-7126-4527-bb8f-79350d3b2d99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-effe8892-77d9-4f8d-a6b7-18d16342aaf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '34189c60-de2d-42e8-858e-38d35028fcf6 a07d9fd6-325a-44eb-8786-8aa95ea9c7b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8646ae3-0ba7-4a4d-a93f-9ccdaad61a18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3eaec8f3-e8af-4992-82d1-25154406dead) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:37:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:44.790 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3eaec8f3-e8af-4992-82d1-25154406dead in datapath effe8892-77d9-4f8d-a6b7-18d16342aaf9 unbound from our chassis
Jan 31 08:37:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:44.794 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network effe8892-77d9-4f8d-a6b7-18d16342aaf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:37:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:44.796 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[db86986c-0780-4a63-bf5a-fc69095adbe4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:37:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:44.797 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9 namespace which is not needed anymore
Jan 31 08:37:44 compute-0 nova_compute[247704]: 2026-01-31 08:37:44.816 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:44 compute-0 nova_compute[247704]: 2026-01-31 08:37:44.821 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:44 compute-0 nova_compute[247704]: 2026-01-31 08:37:44.833 247708 INFO nova.virt.libvirt.driver [-] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Instance destroyed successfully.
Jan 31 08:37:44 compute-0 nova_compute[247704]: 2026-01-31 08:37:44.834 247708 DEBUG nova.objects.instance [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'resources' on Instance uuid c6c9300a-7126-4527-bb8f-79350d3b2d99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:37:44 compute-0 neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9[368407]: [NOTICE]   (368436) : haproxy version is 2.8.14-c23fe91
Jan 31 08:37:44 compute-0 neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9[368407]: [NOTICE]   (368436) : path to executable is /usr/sbin/haproxy
Jan 31 08:37:44 compute-0 neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9[368407]: [WARNING]  (368436) : Exiting Master process...
Jan 31 08:37:44 compute-0 neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9[368407]: [WARNING]  (368436) : Exiting Master process...
Jan 31 08:37:44 compute-0 neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9[368407]: [ALERT]    (368436) : Current worker (368440) exited with code 143 (Terminated)
Jan 31 08:37:44 compute-0 neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9[368407]: [WARNING]  (368436) : All workers exited. Exiting... (0)
Jan 31 08:37:44 compute-0 systemd[1]: libpod-4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227.scope: Deactivated successfully.
Jan 31 08:37:44 compute-0 podman[371564]: 2026-01-31 08:37:44.991037142 +0000 UTC m=+0.097635287 container died 4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.038 247708 DEBUG nova.virt.libvirt.vif [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:34:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-100557639',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-100557639',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=178,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWpNeypo9Q1sq9cZAjy5j4PdvjZU7wvIeGVblb4el5++/kXu5QxxUJCQ7sWuia4+hmpZjY2WQxIbUb1otO4A/PN5NRwLCFKeAM2g4yuKbNgm/SHmLCr0VD7rm0wEE+X+Q==',key_name='tempest-TestSecurityGroupsBasicOps-751432981',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:34:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-u4wqnf4n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:34:59Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=c6c9300a-7126-4527-bb8f-79350d3b2d99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.039 247708 DEBUG nova.network.os_vif_util [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.040 247708 DEBUG nova.network.os_vif_util [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:13:d6,bridge_name='br-int',has_traffic_filtering=True,id=3eaec8f3-e8af-4992-82d1-25154406dead,network=Network(effe8892-77d9-4f8d-a6b7-18d16342aaf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaec8f3-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.041 247708 DEBUG os_vif [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:13:d6,bridge_name='br-int',has_traffic_filtering=True,id=3eaec8f3-e8af-4992-82d1-25154406dead,network=Network(effe8892-77d9-4f8d-a6b7-18d16342aaf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaec8f3-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.044 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.044 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eaec8f3-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.046 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.048 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.052 247708 INFO os_vif [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:13:d6,bridge_name='br-int',has_traffic_filtering=True,id=3eaec8f3-e8af-4992-82d1-25154406dead,network=Network(effe8892-77d9-4f8d-a6b7-18d16342aaf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eaec8f3-e8')
Jan 31 08:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227-userdata-shm.mount: Deactivated successfully.
Jan 31 08:37:45 compute-0 ceph-mon[74496]: pgmap v3141: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 31 08:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e2b6ce0d09af7372fe3ffb21261a06788c6ea97edb45055b1fdc8eeb7be9bdc-merged.mount: Deactivated successfully.
Jan 31 08:37:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:37:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:45.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:37:45 compute-0 podman[371564]: 2026-01-31 08:37:45.199825082 +0000 UTC m=+0.306423237 container cleanup 4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:37:45 compute-0 systemd[1]: libpod-conmon-4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227.scope: Deactivated successfully.
Jan 31 08:37:45 compute-0 podman[371613]: 2026-01-31 08:37:45.587490245 +0000 UTC m=+0.366936259 container remove 4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.594 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[42501a5b-64e5-4f65-b092-ae9bd6ab0ca6]: (4, ('Sat Jan 31 08:37:44 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9 (4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227)\n4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227\nSat Jan 31 08:37:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9 (4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227)\n4f54e4630c976e1748fcbc2429d6120a3d4a0ef98010acd2836b639d3370b227\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.597 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[72d1831c-c607-448b-b1a0-ee46414deda5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.598 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeffe8892-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.600 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:45 compute-0 kernel: tapeffe8892-70: left promiscuous mode
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.605 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0c578876-09ff-428c-bb69-d8079c87e6b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:37:45 compute-0 nova_compute[247704]: 2026-01-31 08:37:45.607 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.625 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2e69ff1c-ee1f-4c29-aeda-a4fd2865fa92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.627 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3e791667-54ee-448f-af0c-bbb4a474cff3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.643 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dbfdcf74-cd19-495b-8bd4-e898b60ef057]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874102, 'reachable_time': 44601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371629, 'error': None, 'target': 'ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.645 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-effe8892-77d9-4f8d-a6b7-18d16342aaf9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:37:45 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:45.645 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[be0aa3a8-5a2a-43d2-beb1-9f13772e51d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:37:45 compute-0 systemd[1]: run-netns-ovnmeta\x2deffe8892\x2d77d9\x2d4f8d\x2da6b7\x2d18d16342aaf9.mount: Deactivated successfully.
Jan 31 08:37:46 compute-0 ceph-mon[74496]: pgmap v3142: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 413 KiB/s wr, 111 op/s
Jan 31 08:37:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:46.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.456 247708 DEBUG nova.compute.manager [req-b218a11f-4675-453b-99bc-3df38a6acc1b req-5c75a5c0-4f58-49f1-902e-7fcbe7532a94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-vif-unplugged-3eaec8f3-e8af-4992-82d1-25154406dead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.457 247708 DEBUG oslo_concurrency.lockutils [req-b218a11f-4675-453b-99bc-3df38a6acc1b req-5c75a5c0-4f58-49f1-902e-7fcbe7532a94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.457 247708 DEBUG oslo_concurrency.lockutils [req-b218a11f-4675-453b-99bc-3df38a6acc1b req-5c75a5c0-4f58-49f1-902e-7fcbe7532a94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.457 247708 DEBUG oslo_concurrency.lockutils [req-b218a11f-4675-453b-99bc-3df38a6acc1b req-5c75a5c0-4f58-49f1-902e-7fcbe7532a94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.457 247708 DEBUG nova.compute.manager [req-b218a11f-4675-453b-99bc-3df38a6acc1b req-5c75a5c0-4f58-49f1-902e-7fcbe7532a94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] No waiting events found dispatching network-vif-unplugged-3eaec8f3-e8af-4992-82d1-25154406dead pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.458 247708 DEBUG nova.compute.manager [req-b218a11f-4675-453b-99bc-3df38a6acc1b req-5c75a5c0-4f58-49f1-902e-7fcbe7532a94 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-vif-unplugged-3eaec8f3-e8af-4992-82d1-25154406dead for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.484 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.697 247708 INFO nova.virt.libvirt.driver [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Deleting instance files /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99_del
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.698 247708 INFO nova.virt.libvirt.driver [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Deletion of /var/lib/nova/instances/c6c9300a-7126-4527-bb8f-79350d3b2d99_del complete
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.940 247708 DEBUG nova.network.neutron [req-0c13cba7-156f-43d1-a9d5-f22c32ea19cf req-49341700-f729-4cb6-8fa8-e745d0d2a5be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updated VIF entry in instance network info cache for port 3eaec8f3-e8af-4992-82d1-25154406dead. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:37:46 compute-0 nova_compute[247704]: 2026-01-31 08:37:46.941 247708 DEBUG nova.network.neutron [req-0c13cba7-156f-43d1-a9d5-f22c32ea19cf req-49341700-f729-4cb6-8fa8-e745d0d2a5be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updating instance_info_cache with network_info: [{"id": "3eaec8f3-e8af-4992-82d1-25154406dead", "address": "fa:16:3e:13:13:d6", "network": {"id": "effe8892-77d9-4f8d-a6b7-18d16342aaf9", "bridge": "br-int", "label": "tempest-network-smoke--1227442907", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eaec8f3-e8", "ovs_interfaceid": "3eaec8f3-e8af-4992-82d1-25154406dead", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:37:47 compute-0 nova_compute[247704]: 2026-01-31 08:37:47.060 247708 INFO nova.compute.manager [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Took 3.07 seconds to destroy the instance on the hypervisor.
Jan 31 08:37:47 compute-0 nova_compute[247704]: 2026-01-31 08:37:47.061 247708 DEBUG oslo.service.loopingcall [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:37:47 compute-0 nova_compute[247704]: 2026-01-31 08:37:47.061 247708 DEBUG nova.compute.manager [-] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:37:47 compute-0 nova_compute[247704]: 2026-01-31 08:37:47.061 247708 DEBUG nova.network.neutron [-] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:37:47 compute-0 nova_compute[247704]: 2026-01-31 08:37:47.089 247708 DEBUG oslo_concurrency.lockutils [req-0c13cba7-156f-43d1-a9d5-f22c32ea19cf req-49341700-f729-4cb6-8fa8-e745d0d2a5be 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-c6c9300a-7126-4527-bb8f-79350d3b2d99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:37:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:47.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:47 compute-0 ceph-mon[74496]: pgmap v3143: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 08:37:48 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Jan 31 08:37:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 226 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 76 op/s
Jan 31 08:37:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.575 247708 DEBUG nova.network.neutron [-] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.740 247708 DEBUG nova.compute.manager [req-dc891725-02c7-4c4a-8931-f4532fc1cf19 req-ea5fdd85-986e-4555-a074-b84c34a6de43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-vif-deleted-3eaec8f3-e8af-4992-82d1-25154406dead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.741 247708 INFO nova.compute.manager [req-dc891725-02c7-4c4a-8931-f4532fc1cf19 req-ea5fdd85-986e-4555-a074-b84c34a6de43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Neutron deleted interface 3eaec8f3-e8af-4992-82d1-25154406dead; detaching it from the instance and deleting it from the info cache
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.741 247708 DEBUG nova.network.neutron [req-dc891725-02c7-4c4a-8931-f4532fc1cf19 req-ea5fdd85-986e-4555-a074-b84c34a6de43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.743 247708 DEBUG nova.compute.manager [req-996c11f7-998c-4d27-90a8-daf6fb43fbc0 req-4744da5e-6be1-4d30-8d6f-8671e6a6b456 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received event network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.744 247708 DEBUG oslo_concurrency.lockutils [req-996c11f7-998c-4d27-90a8-daf6fb43fbc0 req-4744da5e-6be1-4d30-8d6f-8671e6a6b456 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.744 247708 DEBUG oslo_concurrency.lockutils [req-996c11f7-998c-4d27-90a8-daf6fb43fbc0 req-4744da5e-6be1-4d30-8d6f-8671e6a6b456 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.745 247708 DEBUG oslo_concurrency.lockutils [req-996c11f7-998c-4d27-90a8-daf6fb43fbc0 req-4744da5e-6be1-4d30-8d6f-8671e6a6b456 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.745 247708 DEBUG nova.compute.manager [req-996c11f7-998c-4d27-90a8-daf6fb43fbc0 req-4744da5e-6be1-4d30-8d6f-8671e6a6b456 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] No waiting events found dispatching network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:37:48 compute-0 nova_compute[247704]: 2026-01-31 08:37:48.745 247708 WARNING nova.compute.manager [req-996c11f7-998c-4d27-90a8-daf6fb43fbc0 req-4744da5e-6be1-4d30-8d6f-8671e6a6b456 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Received unexpected event network-vif-plugged-3eaec8f3-e8af-4992-82d1-25154406dead for instance with vm_state active and task_state deleting.
Jan 31 08:37:49 compute-0 nova_compute[247704]: 2026-01-31 08:37:49.071 247708 INFO nova.compute.manager [-] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Took 2.01 seconds to deallocate network for instance.
Jan 31 08:37:49 compute-0 nova_compute[247704]: 2026-01-31 08:37:49.098 247708 DEBUG nova.compute.manager [req-dc891725-02c7-4c4a-8931-f4532fc1cf19 req-ea5fdd85-986e-4555-a074-b84c34a6de43 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Detach interface failed, port_id=3eaec8f3-e8af-4992-82d1-25154406dead, reason: Instance c6c9300a-7126-4527-bb8f-79350d3b2d99 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:37:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:49.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:49 compute-0 nova_compute[247704]: 2026-01-31 08:37:49.312 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:49 compute-0 nova_compute[247704]: 2026-01-31 08:37:49.312 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:49 compute-0 nova_compute[247704]: 2026-01-31 08:37:49.382 247708 DEBUG oslo_concurrency.processutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:49 compute-0 ceph-mon[74496]: pgmap v3144: 305 pgs: 305 active+clean; 226 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 76 op/s
Jan 31 08:37:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:37:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131859032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:49 compute-0 nova_compute[247704]: 2026-01-31 08:37:49.869 247708 DEBUG oslo_concurrency.processutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:49 compute-0 nova_compute[247704]: 2026-01-31 08:37:49.876 247708 DEBUG nova.compute.provider_tree [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:37:49 compute-0 podman[371653]: 2026-01-31 08:37:49.905159843 +0000 UTC m=+0.076367049 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 08:37:50 compute-0 nova_compute[247704]: 2026-01-31 08:37:50.032 247708 DEBUG nova.scheduler.client.report [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:37:50 compute-0 nova_compute[247704]: 2026-01-31 08:37:50.048 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:37:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:50.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:37:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 194 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 746 KiB/s wr, 102 op/s
Jan 31 08:37:50 compute-0 nova_compute[247704]: 2026-01-31 08:37:50.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:50 compute-0 nova_compute[247704]: 2026-01-31 08:37:50.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:50 compute-0 nova_compute[247704]: 2026-01-31 08:37:50.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:37:50 compute-0 nova_compute[247704]: 2026-01-31 08:37:50.646 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/131859032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:50 compute-0 nova_compute[247704]: 2026-01-31 08:37:50.736 247708 INFO nova.scheduler.client.report [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Deleted allocations for instance c6c9300a-7126-4527-bb8f-79350d3b2d99
Jan 31 08:37:50 compute-0 nova_compute[247704]: 2026-01-31 08:37:50.916 247708 DEBUG oslo_concurrency.lockutils [None req-e6e56fc6-821d-49da-afab-8dfb220009ef c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "c6c9300a-7126-4527-bb8f-79350d3b2d99" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:51.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:51 compute-0 nova_compute[247704]: 2026-01-31 08:37:51.524 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:51 compute-0 nova_compute[247704]: 2026-01-31 08:37:51.734 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:51 compute-0 ceph-mon[74496]: pgmap v3145: 305 pgs: 305 active+clean; 194 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 746 KiB/s wr, 102 op/s
Jan 31 08:37:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:52.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.7 MiB/s wr, 103 op/s
Jan 31 08:37:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:37:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:37:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:54 compute-0 ceph-mon[74496]: pgmap v3146: 305 pgs: 305 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.7 MiB/s wr, 103 op/s
Jan 31 08:37:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/869108237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:37:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/869108237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:37:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:54.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 08:37:55 compute-0 nova_compute[247704]: 2026-01-31 08:37:55.099 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:37:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:55.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:37:55 compute-0 sudo[371677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:55 compute-0 sudo[371677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:55 compute-0 sudo[371677]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:55 compute-0 sudo[371702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:37:55 compute-0 sudo[371702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:55 compute-0 sudo[371702]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:56 compute-0 ceph-mon[74496]: pgmap v3147: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 08:37:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:56.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:37:56 compute-0 nova_compute[247704]: 2026-01-31 08:37:56.526 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:37:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:57.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:37:57 compute-0 ceph-mon[74496]: pgmap v3148: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:37:57 compute-0 nova_compute[247704]: 2026-01-31 08:37:57.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:57 compute-0 nova_compute[247704]: 2026-01-31 08:37:57.610 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:57 compute-0 nova_compute[247704]: 2026-01-31 08:37:57.611 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:57 compute-0 nova_compute[247704]: 2026-01-31 08:37:57.611 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:57 compute-0 nova_compute[247704]: 2026-01-31 08:37:57.611 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:37:57 compute-0 nova_compute[247704]: 2026-01-31 08:37:57.612 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:57 compute-0 nova_compute[247704]: 2026-01-31 08:37:57.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:57 compute-0 nova_compute[247704]: 2026-01-31 08:37:57.827 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:37:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026186397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.054 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.257 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.259 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4295MB free_disk=20.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.259 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.259 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:37:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:58.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:37:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/751218233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2026186397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.498 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.499 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:37:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.531 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:37:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:58.877 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:37:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:37:58.878 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:37:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:37:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1654399255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.962 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:58 compute-0 nova_compute[247704]: 2026-01-31 08:37:58.968 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:37:59 compute-0 nova_compute[247704]: 2026-01-31 08:37:59.024 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:37:59 compute-0 nova_compute[247704]: 2026-01-31 08:37:59.111 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:37:59 compute-0 nova_compute[247704]: 2026-01-31 08:37:59.112 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:37:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:37:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:59.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:37:59 compute-0 ceph-mon[74496]: pgmap v3149: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:37:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1654399255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3217867652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:37:59 compute-0 nova_compute[247704]: 2026-01-31 08:37:59.831 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848664.8296218, c6c9300a-7126-4527-bb8f-79350d3b2d99 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:37:59 compute-0 nova_compute[247704]: 2026-01-31 08:37:59.831 247708 INFO nova.compute.manager [-] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] VM Stopped (Lifecycle Event)
Jan 31 08:38:00 compute-0 nova_compute[247704]: 2026-01-31 08:38:00.002 247708 DEBUG nova.compute.manager [None req-638ca46c-f80b-4406-b1a4-b42bd0ea5991 - - - - - -] [instance: c6c9300a-7126-4527-bb8f-79350d3b2d99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:38:00 compute-0 nova_compute[247704]: 2026-01-31 08:38:00.102 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:00 compute-0 nova_compute[247704]: 2026-01-31 08:38:00.112 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:00 compute-0 nova_compute[247704]: 2026-01-31 08:38:00.112 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:38:00 compute-0 nova_compute[247704]: 2026-01-31 08:38:00.113 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:38:00 compute-0 nova_compute[247704]: 2026-01-31 08:38:00.193 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:38:00 compute-0 nova_compute[247704]: 2026-01-31 08:38:00.194 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:00.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 31 08:38:00 compute-0 nova_compute[247704]: 2026-01-31 08:38:00.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:00 compute-0 podman[371775]: 2026-01-31 08:38:00.899248653 +0000 UTC m=+0.071978932 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:01.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:01 compute-0 nova_compute[247704]: 2026-01-31 08:38:01.528 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:02.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 146 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 31 08:38:02 compute-0 ceph-mon[74496]: pgmap v3150: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 31 08:38:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1761214171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1097934177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:02.881 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:03.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:04 compute-0 ceph-mon[74496]: pgmap v3151: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 146 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Jan 31 08:38:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:04.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 501 KiB/s wr, 29 op/s
Jan 31 08:38:04 compute-0 nova_compute[247704]: 2026-01-31 08:38:04.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:05 compute-0 nova_compute[247704]: 2026-01-31 08:38:05.164 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:05.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:05 compute-0 ceph-mon[74496]: pgmap v3152: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 501 KiB/s wr, 29 op/s
Jan 31 08:38:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:06.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 138 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 13 KiB/s wr, 20 op/s
Jan 31 08:38:06 compute-0 nova_compute[247704]: 2026-01-31 08:38:06.531 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Jan 31 08:38:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:07.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Jan 31 08:38:07 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Jan 31 08:38:08 compute-0 ceph-mon[74496]: pgmap v3153: 305 pgs: 305 active+clean; 138 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 13 KiB/s wr, 20 op/s
Jan 31 08:38:08 compute-0 ceph-mon[74496]: osdmap e379: 3 total, 3 up, 3 in
Jan 31 08:38:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 137 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.6 MiB/s wr, 39 op/s
Jan 31 08:38:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:09 compute-0 ceph-mon[74496]: pgmap v3155: 305 pgs: 305 active+clean; 137 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.6 MiB/s wr, 39 op/s
Jan 31 08:38:10 compute-0 nova_compute[247704]: 2026-01-31 08:38:10.168 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:10.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Jan 31 08:38:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1947169940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:11.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:11.207 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:11.208 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:11.208 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:11 compute-0 nova_compute[247704]: 2026-01-31 08:38:11.533 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:11 compute-0 ceph-mon[74496]: pgmap v3156: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Jan 31 08:38:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:12.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 31 08:38:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:14 compute-0 ceph-mon[74496]: pgmap v3157: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 31 08:38:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:14.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Jan 31 08:38:15 compute-0 nova_compute[247704]: 2026-01-31 08:38:15.172 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:15.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:15 compute-0 sudo[371810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:15 compute-0 sudo[371810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:15 compute-0 sudo[371810]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:15 compute-0 sudo[371835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:15 compute-0 sudo[371835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:15 compute-0 sudo[371835]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:16 compute-0 ceph-mon[74496]: pgmap v3158: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Jan 31 08:38:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:16.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.0 MiB/s wr, 35 op/s
Jan 31 08:38:16 compute-0 nova_compute[247704]: 2026-01-31 08:38:16.534 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:17.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:18 compute-0 ceph-mon[74496]: pgmap v3159: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.0 MiB/s wr, 35 op/s
Jan 31 08:38:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:18.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 08:38:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:19.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:20 compute-0 sudo[371862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:20 compute-0 sudo[371862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-0 sudo[371862]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:20 compute-0 sudo[371893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:20 compute-0 sudo[371893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-0 sudo[371893]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:20 compute-0 podman[371886]: 2026-01-31 08:38:20.121465702 +0000 UTC m=+0.076471122 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 31 08:38:20 compute-0 sudo[371928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:20 compute-0 sudo[371928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-0 sudo[371928]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-0 nova_compute[247704]: 2026-01-31 08:38:20.175 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:20 compute-0 sudo[371953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:38:20 compute-0 sudo[371953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:38:20
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'vms']
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:38:20 compute-0 ceph-mon[74496]: pgmap v3160: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 08:38:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:20.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 382 KiB/s wr, 17 op/s
Jan 31 08:38:20 compute-0 sudo[371953]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-0 sudo[372008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:20 compute-0 sudo[372008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-0 sudo[372008]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-0 sudo[372033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:20 compute-0 sudo[372033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-0 sudo[372033]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:38:20 compute-0 sudo[372058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:20 compute-0 sudo[372058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-0 sudo[372058]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:38:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:38:20 compute-0 sudo[372083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 08:38:20 compute-0 sudo[372083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:38:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:38:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:21.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:21 compute-0 podman[372147]: 2026-01-31 08:38:21.11429415 +0000 UTC m=+0.022289914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:38:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:21 compute-0 podman[372147]: 2026-01-31 08:38:21.2918412 +0000 UTC m=+0.199836924 container create 487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:38:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:38:21 compute-0 systemd[1]: Started libpod-conmon-487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f.scope.
Jan 31 08:38:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:38:21 compute-0 nova_compute[247704]: 2026-01-31 08:38:21.537 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:21 compute-0 podman[372147]: 2026-01-31 08:38:21.578504414 +0000 UTC m=+0.486500138 container init 487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 08:38:21 compute-0 podman[372147]: 2026-01-31 08:38:21.585897915 +0000 UTC m=+0.493893639 container start 487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:38:21 compute-0 boring_neumann[372163]: 167 167
Jan 31 08:38:21 compute-0 systemd[1]: libpod-487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f.scope: Deactivated successfully.
Jan 31 08:38:21 compute-0 podman[372147]: 2026-01-31 08:38:21.635242545 +0000 UTC m=+0.543238289 container attach 487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:38:21 compute-0 podman[372147]: 2026-01-31 08:38:21.636500606 +0000 UTC m=+0.544496330 container died 487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:38:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d204dbcd5734a3f6c5589a2710899ba5bc3603afe704fe3e8914dc2daa64e209-merged.mount: Deactivated successfully.
Jan 31 08:38:21 compute-0 podman[372147]: 2026-01-31 08:38:21.892524096 +0000 UTC m=+0.800519820 container remove 487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_neumann, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 08:38:21 compute-0 systemd[1]: libpod-conmon-487013530bc5387c4511c7292c9667a0a4670afe0c8fe55e812eb0a8ec2fd12f.scope: Deactivated successfully.
Jan 31 08:38:22 compute-0 podman[372190]: 2026-01-31 08:38:22.062513552 +0000 UTC m=+0.078967432 container create 5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 08:38:22 compute-0 ceph-mon[74496]: pgmap v3161: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 382 KiB/s wr, 17 op/s
Jan 31 08:38:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-0 podman[372190]: 2026-01-31 08:38:22.007590626 +0000 UTC m=+0.024044516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:38:22 compute-0 systemd[1]: Started libpod-conmon-5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2.scope.
Jan 31 08:38:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a02af0cfe46dc5e96d3b1e4d9384747f1ca5e5257f22346d48cb183ec721cad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a02af0cfe46dc5e96d3b1e4d9384747f1ca5e5257f22346d48cb183ec721cad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a02af0cfe46dc5e96d3b1e4d9384747f1ca5e5257f22346d48cb183ec721cad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a02af0cfe46dc5e96d3b1e4d9384747f1ca5e5257f22346d48cb183ec721cad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:22 compute-0 podman[372190]: 2026-01-31 08:38:22.193408857 +0000 UTC m=+0.209862737 container init 5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:38:22 compute-0 podman[372190]: 2026-01-31 08:38:22.205992064 +0000 UTC m=+0.222445944 container start 5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:38:22 compute-0 podman[372190]: 2026-01-31 08:38:22.226923633 +0000 UTC m=+0.243377533 container attach 5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 08:38:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:22.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:38:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.1 KiB/s wr, 45 op/s
Jan 31 08:38:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:38:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:23.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]: [
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:     {
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         "available": false,
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         "ceph_device": false,
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         "lsm_data": {},
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         "lvs": [],
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         "path": "/dev/sr0",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         "rejected_reasons": [
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "Insufficient space (<5GB)",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "Has a FileSystem"
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         ],
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         "sys_api": {
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "actuators": null,
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "device_nodes": "sr0",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "devname": "sr0",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "human_readable_size": "482.00 KB",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "id_bus": "ata",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "model": "QEMU DVD-ROM",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "nr_requests": "2",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "parent": "/dev/sr0",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "partitions": {},
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "path": "/dev/sr0",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "removable": "1",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "rev": "2.5+",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "ro": "0",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "rotational": "1",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "sas_address": "",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "sas_device_handle": "",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "scheduler_mode": "mq-deadline",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "sectors": 0,
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "sectorsize": "2048",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "size": 493568.0,
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "support_discard": "2048",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "type": "disk",
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:             "vendor": "QEMU"
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:         }
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]:     }
Jan 31 08:38:23 compute-0 adoring_hypatia[372207]: ]
Jan 31 08:38:23 compute-0 systemd[1]: libpod-5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2.scope: Deactivated successfully.
Jan 31 08:38:23 compute-0 podman[372190]: 2026-01-31 08:38:23.507159964 +0000 UTC m=+1.523613864 container died 5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 08:38:23 compute-0 systemd[1]: libpod-5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2.scope: Consumed 1.263s CPU time.
Jan 31 08:38:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a02af0cfe46dc5e96d3b1e4d9384747f1ca5e5257f22346d48cb183ec721cad-merged.mount: Deactivated successfully.
Jan 31 08:38:23 compute-0 ceph-mon[74496]: pgmap v3162: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.1 KiB/s wr, 45 op/s
Jan 31 08:38:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:23 compute-0 podman[372190]: 2026-01-31 08:38:23.603385005 +0000 UTC m=+1.619838885 container remove 5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:38:23 compute-0 systemd[1]: libpod-conmon-5443f368efe9b5c30f8b42b2db7b86cda243aceb65b4c5e6336d4656bbc54fa2.scope: Deactivated successfully.
Jan 31 08:38:23 compute-0 sudo[372083]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:38:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:38:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:38:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:38:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:38:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 94307636-966d-404d-9776-e0fca86fdb98 does not exist
Jan 31 08:38:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6804cca3-3e29-46d0-b9cd-3031e6eca443 does not exist
Jan 31 08:38:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0df5fd15-8a94-40f7-9969-ded2c902f386 does not exist
Jan 31 08:38:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:38:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:38:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:38:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:38:24 compute-0 sudo[373299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:24 compute-0 sudo[373299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:24 compute-0 sudo[373299]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:24 compute-0 sudo[373324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:24 compute-0 sudo[373324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:24 compute-0 sudo[373324]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:24 compute-0 sudo[373349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:24 compute-0 sudo[373349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:24.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:24 compute-0 sudo[373349]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:24 compute-0 sudo[373374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:38:24 compute-0 sudo[373374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Jan 31 08:38:24 compute-0 podman[373441]: 2026-01-31 08:38:24.766509116 +0000 UTC m=+0.057911030 container create 55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:38:24 compute-0 systemd[1]: Started libpod-conmon-55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9.scope.
Jan 31 08:38:24 compute-0 podman[373441]: 2026-01-31 08:38:24.733610196 +0000 UTC m=+0.025012130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:38:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:24 compute-0 podman[373441]: 2026-01-31 08:38:24.915632755 +0000 UTC m=+0.207034689 container init 55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:38:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:38:24 compute-0 podman[373441]: 2026-01-31 08:38:24.923681601 +0000 UTC m=+0.215083515 container start 55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:38:24 compute-0 agitated_matsumoto[373457]: 167 167
Jan 31 08:38:24 compute-0 systemd[1]: libpod-55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9.scope: Deactivated successfully.
Jan 31 08:38:24 compute-0 podman[373441]: 2026-01-31 08:38:24.953104527 +0000 UTC m=+0.244506461 container attach 55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:38:24 compute-0 podman[373441]: 2026-01-31 08:38:24.954673385 +0000 UTC m=+0.246075299 container died 55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-266a04c18d4acd2b3420caca47c0492c0193db6499b24fe7b3287912f2f28d60-merged.mount: Deactivated successfully.
Jan 31 08:38:25 compute-0 podman[373441]: 2026-01-31 08:38:25.133196848 +0000 UTC m=+0.424598752 container remove 55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_matsumoto, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:38:25 compute-0 systemd[1]: libpod-conmon-55ed1c93835c8e5aac1d58402c8fd9676fb6e062cd453b6150bf4ca89008a6f9.scope: Deactivated successfully.
Jan 31 08:38:25 compute-0 nova_compute[247704]: 2026-01-31 08:38:25.179 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:38:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:25.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:38:25 compute-0 podman[373481]: 2026-01-31 08:38:25.283000693 +0000 UTC m=+0.044516374 container create 509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 08:38:25 compute-0 systemd[1]: Started libpod-conmon-509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b.scope.
Jan 31 08:38:25 compute-0 podman[373481]: 2026-01-31 08:38:25.263256033 +0000 UTC m=+0.024771734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:38:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2b18457c91671b09287d30b68824e8a7f758ccb7aed210d82514a1ac55bdcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2b18457c91671b09287d30b68824e8a7f758ccb7aed210d82514a1ac55bdcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2b18457c91671b09287d30b68824e8a7f758ccb7aed210d82514a1ac55bdcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2b18457c91671b09287d30b68824e8a7f758ccb7aed210d82514a1ac55bdcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a2b18457c91671b09287d30b68824e8a7f758ccb7aed210d82514a1ac55bdcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:25 compute-0 podman[373481]: 2026-01-31 08:38:25.39132269 +0000 UTC m=+0.152838401 container init 509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:25 compute-0 podman[373481]: 2026-01-31 08:38:25.398995026 +0000 UTC m=+0.160510707 container start 509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:38:25 compute-0 podman[373481]: 2026-01-31 08:38:25.404218543 +0000 UTC m=+0.165734224 container attach 509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:38:26 compute-0 gifted_sammet[373498]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:38:26 compute-0 gifted_sammet[373498]: --> relative data size: 1.0
Jan 31 08:38:26 compute-0 gifted_sammet[373498]: --> All data devices are unavailable
Jan 31 08:38:26 compute-0 ceph-mon[74496]: pgmap v3163: 305 pgs: 305 active+clean; 141 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Jan 31 08:38:26 compute-0 systemd[1]: libpod-509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b.scope: Deactivated successfully.
Jan 31 08:38:26 compute-0 podman[373481]: 2026-01-31 08:38:26.227908005 +0000 UTC m=+0.989423716 container died 509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:38:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a2b18457c91671b09287d30b68824e8a7f758ccb7aed210d82514a1ac55bdcc-merged.mount: Deactivated successfully.
Jan 31 08:38:26 compute-0 podman[373481]: 2026-01-31 08:38:26.290645022 +0000 UTC m=+1.052160693 container remove 509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:38:26 compute-0 systemd[1]: libpod-conmon-509f48268dcd5ce363d09276da370285dd179db782605b8213ee6ff764bd593b.scope: Deactivated successfully.
Jan 31 08:38:26 compute-0 sudo[373374]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:26.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:26 compute-0 sudo[373526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:26 compute-0 sudo[373526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:26 compute-0 sudo[373526]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:26 compute-0 sudo[373551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:26 compute-0 sudo[373551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:26 compute-0 sudo[373551]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:26 compute-0 sudo[373576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:26 compute-0 sudo[373576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:26 compute-0 sudo[373576]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 183 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 08:38:26 compute-0 nova_compute[247704]: 2026-01-31 08:38:26.538 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:26 compute-0 sudo[373601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:38:26 compute-0 sudo[373601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:26 compute-0 podman[373666]: 2026-01-31 08:38:26.875530534 +0000 UTC m=+0.041319257 container create de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:38:26 compute-0 systemd[1]: Started libpod-conmon-de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab.scope.
Jan 31 08:38:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:26 compute-0 podman[373666]: 2026-01-31 08:38:26.856821728 +0000 UTC m=+0.022610471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:38:26 compute-0 podman[373666]: 2026-01-31 08:38:26.959464976 +0000 UTC m=+0.125253699 container init de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:38:26 compute-0 podman[373666]: 2026-01-31 08:38:26.965670717 +0000 UTC m=+0.131459440 container start de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pare, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:38:26 compute-0 elated_pare[373682]: 167 167
Jan 31 08:38:26 compute-0 podman[373666]: 2026-01-31 08:38:26.97074148 +0000 UTC m=+0.136530463 container attach de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pare, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:38:26 compute-0 systemd[1]: libpod-de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab.scope: Deactivated successfully.
Jan 31 08:38:26 compute-0 conmon[373682]: conmon de454c36871579d3fea0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab.scope/container/memory.events
Jan 31 08:38:26 compute-0 podman[373666]: 2026-01-31 08:38:26.972897402 +0000 UTC m=+0.138686125 container died de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4929df199f893e0428cb614be9babcbc803a43903eda24e8fce27efe2820bfa-merged.mount: Deactivated successfully.
Jan 31 08:38:27 compute-0 podman[373666]: 2026-01-31 08:38:27.02499343 +0000 UTC m=+0.190782153 container remove de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:38:27 compute-0 systemd[1]: libpod-conmon-de454c36871579d3fea0f88321a77deaefcd8414bf37a8709528523d3ee44cab.scope: Deactivated successfully.
Jan 31 08:38:27 compute-0 podman[373708]: 2026-01-31 08:38:27.164340141 +0000 UTC m=+0.041312547 container create 2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:27 compute-0 systemd[1]: Started libpod-conmon-2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14.scope.
Jan 31 08:38:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:27.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d7e5e3c14265af556f27870287a8cfe982d9a7bb708711b1f12d4d8ff70d25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d7e5e3c14265af556f27870287a8cfe982d9a7bb708711b1f12d4d8ff70d25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d7e5e3c14265af556f27870287a8cfe982d9a7bb708711b1f12d4d8ff70d25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d7e5e3c14265af556f27870287a8cfe982d9a7bb708711b1f12d4d8ff70d25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:27 compute-0 podman[373708]: 2026-01-31 08:38:27.147218604 +0000 UTC m=+0.024191040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:38:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/860123472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:27 compute-0 podman[373708]: 2026-01-31 08:38:27.262904999 +0000 UTC m=+0.139877495 container init 2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 08:38:27 compute-0 podman[373708]: 2026-01-31 08:38:27.268780032 +0000 UTC m=+0.145752438 container start 2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ellis, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:27 compute-0 podman[373708]: 2026-01-31 08:38:27.275370602 +0000 UTC m=+0.152342998 container attach 2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]: {
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:     "0": [
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:         {
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "devices": [
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "/dev/loop3"
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             ],
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "lv_name": "ceph_lv0",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "lv_size": "7511998464",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "name": "ceph_lv0",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "tags": {
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.cluster_name": "ceph",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.crush_device_class": "",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.encrypted": "0",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.osd_id": "0",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.type": "block",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:                 "ceph.vdo": "0"
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             },
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "type": "block",
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:             "vg_name": "ceph_vg0"
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:         }
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]:     ]
Jan 31 08:38:28 compute-0 dazzling_ellis[373724]: }
Jan 31 08:38:28 compute-0 systemd[1]: libpod-2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14.scope: Deactivated successfully.
Jan 31 08:38:28 compute-0 podman[373708]: 2026-01-31 08:38:28.126949103 +0000 UTC m=+1.003921529 container died 2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-80d7e5e3c14265af556f27870287a8cfe982d9a7bb708711b1f12d4d8ff70d25-merged.mount: Deactivated successfully.
Jan 31 08:38:28 compute-0 podman[373708]: 2026-01-31 08:38:28.19297102 +0000 UTC m=+1.069943426 container remove 2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 08:38:28 compute-0 systemd[1]: libpod-conmon-2bfe688f76d35167f12c5dc26b69f39487a97a622bced837af8490ce3d255e14.scope: Deactivated successfully.
Jan 31 08:38:28 compute-0 sudo[373601]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:28 compute-0 sudo[373747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:28 compute-0 sudo[373747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:28 compute-0 sudo[373747]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:28 compute-0 ceph-mon[74496]: pgmap v3164: 305 pgs: 305 active+clean; 183 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 08:38:28 compute-0 sudo[373772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:28 compute-0 sudo[373772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:28 compute-0 sudo[373772]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:28.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:28 compute-0 sudo[373797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:28 compute-0 sudo[373797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:28 compute-0 sudo[373797]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:28 compute-0 sudo[373822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:38:28 compute-0 sudo[373822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.6 MiB/s wr, 200 op/s
Jan 31 08:38:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:28 compute-0 podman[373888]: 2026-01-31 08:38:28.790856088 +0000 UTC m=+0.041510032 container create 1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:38:28 compute-0 systemd[1]: Started libpod-conmon-1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b.scope.
Jan 31 08:38:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:28 compute-0 podman[373888]: 2026-01-31 08:38:28.772142142 +0000 UTC m=+0.022796106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:38:28 compute-0 podman[373888]: 2026-01-31 08:38:28.888418841 +0000 UTC m=+0.139072815 container init 1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:38:28 compute-0 podman[373888]: 2026-01-31 08:38:28.895809991 +0000 UTC m=+0.146463935 container start 1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 08:38:28 compute-0 boring_williams[373904]: 167 167
Jan 31 08:38:28 compute-0 systemd[1]: libpod-1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b.scope: Deactivated successfully.
Jan 31 08:38:28 compute-0 podman[373888]: 2026-01-31 08:38:28.903442507 +0000 UTC m=+0.154096471 container attach 1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:38:28 compute-0 podman[373888]: 2026-01-31 08:38:28.904307498 +0000 UTC m=+0.154961442 container died 1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3789d0c8236d36761a93e2cf62880c85c081e7a2fbce6520b27f87d8ea1b0178-merged.mount: Deactivated successfully.
Jan 31 08:38:29 compute-0 podman[373888]: 2026-01-31 08:38:29.019583493 +0000 UTC m=+0.270237437 container remove 1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:38:29 compute-0 systemd[1]: libpod-conmon-1411980989a39c8d0b591ff6f3da000f0bac70aac3ad3ee4e064b45a80eac75b.scope: Deactivated successfully.
Jan 31 08:38:29 compute-0 podman[373929]: 2026-01-31 08:38:29.190756138 +0000 UTC m=+0.055855801 container create e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:38:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:29.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:29 compute-0 podman[373929]: 2026-01-31 08:38:29.158684157 +0000 UTC m=+0.023783840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:38:29 compute-0 systemd[1]: Started libpod-conmon-e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d.scope.
Jan 31 08:38:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6054c133b8f9641da635d1cb6dcdb7189db97d43e9f2efb00ca86078e630173e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6054c133b8f9641da635d1cb6dcdb7189db97d43e9f2efb00ca86078e630173e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6054c133b8f9641da635d1cb6dcdb7189db97d43e9f2efb00ca86078e630173e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6054c133b8f9641da635d1cb6dcdb7189db97d43e9f2efb00ca86078e630173e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:29 compute-0 podman[373929]: 2026-01-31 08:38:29.297607118 +0000 UTC m=+0.162706801 container init e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:38:29 compute-0 podman[373929]: 2026-01-31 08:38:29.303320677 +0000 UTC m=+0.168420340 container start e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:38:29 compute-0 podman[373929]: 2026-01-31 08:38:29.307140789 +0000 UTC m=+0.172240482 container attach e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:38:30 compute-0 nifty_bassi[373946]: {
Jan 31 08:38:30 compute-0 nifty_bassi[373946]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:38:30 compute-0 nifty_bassi[373946]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:38:30 compute-0 nifty_bassi[373946]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:38:30 compute-0 nifty_bassi[373946]:         "osd_id": 0,
Jan 31 08:38:30 compute-0 nifty_bassi[373946]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:38:30 compute-0 nifty_bassi[373946]:         "type": "bluestore"
Jan 31 08:38:30 compute-0 nifty_bassi[373946]:     }
Jan 31 08:38:30 compute-0 nifty_bassi[373946]: }
Jan 31 08:38:30 compute-0 systemd[1]: libpod-e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d.scope: Deactivated successfully.
Jan 31 08:38:30 compute-0 podman[373929]: 2026-01-31 08:38:30.228747085 +0000 UTC m=+1.093846758 container died e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:38:30 compute-0 nova_compute[247704]: 2026-01-31 08:38:30.228 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6054c133b8f9641da635d1cb6dcdb7189db97d43e9f2efb00ca86078e630173e-merged.mount: Deactivated successfully.
Jan 31 08:38:30 compute-0 podman[373929]: 2026-01-31 08:38:30.299595188 +0000 UTC m=+1.164694881 container remove e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:38:30 compute-0 systemd[1]: libpod-conmon-e66447aecd64565bb78236b2a327936434e641331b4c3f4b46270b96fbbd4a0d.scope: Deactivated successfully.
Jan 31 08:38:30 compute-0 ceph-mon[74496]: pgmap v3165: 305 pgs: 305 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.6 MiB/s wr, 200 op/s
Jan 31 08:38:30 compute-0 sudo[373822]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:38:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:38:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:30.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 581ac322-e3ee-4ca2-85c6-2def7878ca61 does not exist
Jan 31 08:38:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b198095b-830b-404f-82f6-80bb9d03b7c8 does not exist
Jan 31 08:38:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8c4bbeb3-68e8-46c9-b7d2-934309b52b1d does not exist
Jan 31 08:38:30 compute-0 sudo[373981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:30 compute-0 sudo[373981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:30 compute-0 sudo[373981]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:30 compute-0 sudo[374006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:38:30 compute-0 sudo[374006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:30 compute-0 sudo[374006]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 271 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 252 op/s
Jan 31 08:38:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:31.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:38:31 compute-0 ceph-mon[74496]: pgmap v3166: 305 pgs: 305 active+clean; 271 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 252 op/s
Jan 31 08:38:31 compute-0 nova_compute[247704]: 2026-01-31 08:38:31.541 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:31 compute-0 podman[374032]: 2026-01-31 08:38:31.928979885 +0000 UTC m=+0.093464716 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:38:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:32.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 324 op/s
Jan 31 08:38:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:33.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:33 compute-0 ceph-mon[74496]: pgmap v3167: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.1 MiB/s wr, 324 op/s
Jan 31 08:38:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:34.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 7.1 MiB/s wr, 295 op/s
Jan 31 08:38:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:35 compute-0 nova_compute[247704]: 2026-01-31 08:38:35.232 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005130361006420994 of space, bias 1.0, pg target 1.539108301926298 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:38:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:38:35 compute-0 ceph-mon[74496]: pgmap v3168: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 7.1 MiB/s wr, 295 op/s
Jan 31 08:38:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/906920371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1303395771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2515805486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:35 compute-0 sudo[374061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:35 compute-0 sudo[374061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:35 compute-0 sudo[374061]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:36 compute-0 sudo[374086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:36 compute-0 sudo[374086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:36 compute-0 sudo[374086]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:36.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 165 KiB/s rd, 7.1 MiB/s wr, 268 op/s
Jan 31 08:38:36 compute-0 nova_compute[247704]: 2026-01-31 08:38:36.542 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:37 compute-0 ceph-mon[74496]: pgmap v3169: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 165 KiB/s rd, 7.1 MiB/s wr, 268 op/s
Jan 31 08:38:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/755283535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:38 compute-0 nova_compute[247704]: 2026-01-31 08:38:38.205 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:38 compute-0 nova_compute[247704]: 2026-01-31 08:38:38.205 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:38.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:38 compute-0 nova_compute[247704]: 2026-01-31 08:38:38.477 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:38:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 134 KiB/s rd, 5.3 MiB/s wr, 215 op/s
Jan 31 08:38:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:38 compute-0 nova_compute[247704]: 2026-01-31 08:38:38.696 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:38 compute-0 nova_compute[247704]: 2026-01-31 08:38:38.697 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:38 compute-0 nova_compute[247704]: 2026-01-31 08:38:38.705 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:38:38 compute-0 nova_compute[247704]: 2026-01-31 08:38:38.705 247708 INFO nova.compute.claims [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:38:38 compute-0 nova_compute[247704]: 2026-01-31 08:38:38.996 247708 DEBUG nova.scheduler.client.report [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.052 247708 DEBUG nova.scheduler.client.report [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.053 247708 DEBUG nova.compute.provider_tree [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.104 247708 DEBUG nova.scheduler.client.report [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.140 247708 DEBUG nova.scheduler.client.report [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.196 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:39.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:38:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849201847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.636 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.643 247708 DEBUG nova.compute.provider_tree [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.706 247708 DEBUG nova.scheduler.client.report [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.830 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.831 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.911 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:38:39 compute-0 nova_compute[247704]: 2026-01-31 08:38:39.912 247708 DEBUG nova.network.neutron [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:38:40 compute-0 nova_compute[247704]: 2026-01-31 08:38:40.152 247708 DEBUG nova.policy [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cfc8a271e75e4a92b16ee6b5da9cfc9f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b38141686534a0fb9b947a7886cd4b6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:38:40 compute-0 nova_compute[247704]: 2026-01-31 08:38:40.220 247708 INFO nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:38:40 compute-0 nova_compute[247704]: 2026-01-31 08:38:40.235 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:40.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 878 KiB/s rd, 4.5 MiB/s wr, 159 op/s
Jan 31 08:38:40 compute-0 ceph-mon[74496]: pgmap v3170: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 134 KiB/s rd, 5.3 MiB/s wr, 215 op/s
Jan 31 08:38:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/849201847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:40 compute-0 nova_compute[247704]: 2026-01-31 08:38:40.767 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:38:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:41.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.240 247708 INFO nova.virt.block_device [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Booting with volume 52013e9d-e76f-40eb-be07-4c6f98de0219 at /dev/vda
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.420 247708 DEBUG os_brick.utils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.422 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.433 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.434 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[69ac6a45-9a4b-4d77-b1a2-1595bae695dd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.435 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.442 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.442 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[5984f890-7db2-49fa-add8-3e6800ac0f21]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.444 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.453 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.454 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[7852daca-bedc-4601-9a99-dae5c31ee338]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.456 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[59174779-22f2-479c-bfb5-9cff7e51a8e5]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.456 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.481 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.483 247708 DEBUG os_brick.initiator.connectors.lightos [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.484 247708 DEBUG os_brick.initiator.connectors.lightos [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.484 247708 DEBUG os_brick.initiator.connectors.lightos [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.484 247708 DEBUG os_brick.utils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.485 247708 DEBUG nova.virt.block_device [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating existing volume attachment record: 527351c5-ae12-43b2-bd7c-041790d888c0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.545 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:41 compute-0 nova_compute[247704]: 2026-01-31 08:38:41.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:41 compute-0 ceph-mon[74496]: pgmap v3171: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 878 KiB/s rd, 4.5 MiB/s wr, 159 op/s
Jan 31 08:38:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:42.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Jan 31 08:38:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/929030816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2556544296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:42 compute-0 nova_compute[247704]: 2026-01-31 08:38:42.751 247708 DEBUG nova.network.neutron [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Successfully created port: a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:38:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:43 compute-0 ceph-mon[74496]: pgmap v3172: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Jan 31 08:38:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:44.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 31 08:38:45 compute-0 nova_compute[247704]: 2026-01-31 08:38:45.239 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:45.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:45 compute-0 nova_compute[247704]: 2026-01-31 08:38:45.555 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:45 compute-0 nova_compute[247704]: 2026-01-31 08:38:45.555 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:45 compute-0 ceph-mon[74496]: pgmap v3173: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.287 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.288 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.289 247708 INFO nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Creating image(s)
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.289 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.289 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Ensure instance console log exists: /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.290 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.290 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.290 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.326 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:38:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:46.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:38:46 compute-0 nova_compute[247704]: 2026-01-31 08:38:46.546 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:47.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:47 compute-0 nova_compute[247704]: 2026-01-31 08:38:47.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:47 compute-0 nova_compute[247704]: 2026-01-31 08:38:47.989 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:47 compute-0 nova_compute[247704]: 2026-01-31 08:38:47.989 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:47 compute-0 nova_compute[247704]: 2026-01-31 08:38:47.996 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:38:47 compute-0 nova_compute[247704]: 2026-01-31 08:38:47.997 247708 INFO nova.compute.claims [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:38:48 compute-0 ceph-mon[74496]: pgmap v3174: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:38:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:48.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 31 08:38:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:48 compute-0 nova_compute[247704]: 2026-01-31 08:38:48.903 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:49.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:38:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828810280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:49 compute-0 nova_compute[247704]: 2026-01-31 08:38:49.394 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:49 compute-0 nova_compute[247704]: 2026-01-31 08:38:49.400 247708 DEBUG nova.compute.provider_tree [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:38:49 compute-0 nova_compute[247704]: 2026-01-31 08:38:49.462 247708 DEBUG nova.scheduler.client.report [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:38:49 compute-0 nova_compute[247704]: 2026-01-31 08:38:49.534 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:49 compute-0 nova_compute[247704]: 2026-01-31 08:38:49.535 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:38:49 compute-0 nova_compute[247704]: 2026-01-31 08:38:49.766 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:38:49 compute-0 nova_compute[247704]: 2026-01-31 08:38:49.767 247708 DEBUG nova.network.neutron [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:38:49 compute-0 nova_compute[247704]: 2026-01-31 08:38:49.929 247708 INFO nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.030 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.139 247708 DEBUG nova.policy [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cfc8a271e75e4a92b16ee6b5da9cfc9f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b38141686534a0fb9b947a7886cd4b6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.211 247708 INFO nova.virt.block_device [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Booting with volume 937c2f7d-b961-4dec-b1ad-34dd4c2d4cac at /dev/vda
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.240 247708 DEBUG nova.network.neutron [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Successfully updated port: a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.244 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.332 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.333 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.333 247708 DEBUG nova.network.neutron [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:38:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:50.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.437 247708 DEBUG os_brick.utils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.438 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.449 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.449 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[cdaa5e79-7396-4e0a-98cc-4f500f80d25b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.451 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.459 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.459 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[ec80c3bb-4d98-4a8b-9397-f110942b2548]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.461 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.468 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.469 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[064f98d0-c0dd-43c0-a9ef-907a5d4b654b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.471 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5b0b86-102a-4245-b015-948174598bb0]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.472 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.508 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.512 247708 DEBUG os_brick.initiator.connectors.lightos [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.513 247708 DEBUG os_brick.initiator.connectors.lightos [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.513 247708 DEBUG os_brick.initiator.connectors.lightos [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.513 247708 DEBUG os_brick.utils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.514 247708 DEBUG nova.virt.block_device [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating existing volume attachment record: 724a860c-bb9f-4cd1-8cef-1d90670977f8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.522 247708 DEBUG nova.compute.manager [req-dfae117f-7a6e-417e-93a4-5282066caee8 req-b4f39874-82b7-414a-b0c9-b92b74e3b2d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.523 247708 DEBUG nova.compute.manager [req-dfae117f-7a6e-417e-93a4-5282066caee8 req-b4f39874-82b7-414a-b0c9-b92b74e3b2d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing instance network info cache due to event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.523 247708 DEBUG oslo_concurrency.lockutils [req-dfae117f-7a6e-417e-93a4-5282066caee8 req-b4f39874-82b7-414a-b0c9-b92b74e3b2d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:38:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 78 op/s
Jan 31 08:38:50 compute-0 ceph-mon[74496]: pgmap v3175: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 31 08:38:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1828810280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:50 compute-0 nova_compute[247704]: 2026-01-31 08:38:50.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:50 compute-0 podman[374176]: 2026-01-31 08:38:50.88673974 +0000 UTC m=+0.051574256 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:38:51 compute-0 nova_compute[247704]: 2026-01-31 08:38:51.005 247708 DEBUG nova.network.neutron [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:38:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:51 compute-0 nova_compute[247704]: 2026-01-31 08:38:51.548 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:52 compute-0 ceph-mon[74496]: pgmap v3176: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 78 op/s
Jan 31 08:38:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2756972332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 910 KiB/s wr, 70 op/s
Jan 31 08:38:52 compute-0 nova_compute[247704]: 2026-01-31 08:38:52.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:52 compute-0 nova_compute[247704]: 2026-01-31 08:38:52.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:38:53 compute-0 nova_compute[247704]: 2026-01-31 08:38:53.102 247708 DEBUG nova.network.neutron [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Successfully created port: 637c7294-89aa-4b8f-8485-df3303e84675 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:38:53 compute-0 nova_compute[247704]: 2026-01-31 08:38:53.252 247708 DEBUG nova.network.neutron [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:38:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:53.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.012 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.013 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Instance network_info: |[{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.014 247708 DEBUG oslo_concurrency.lockutils [req-dfae117f-7a6e-417e-93a4-5282066caee8 req-b4f39874-82b7-414a-b0c9-b92b74e3b2d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.014 247708 DEBUG nova.network.neutron [req-dfae117f-7a6e-417e-93a4-5282066caee8 req-b4f39874-82b7-414a-b0c9-b92b74e3b2d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.019 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Start _get_guest_xml network_info=[{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '527351c5-ae12-43b2-bd7c-041790d888c0', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-52013e9d-e76f-40eb-be07-4c6f98de0219', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '52013e9d-e76f-40eb-be07-4c6f98de0219', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9e883c68-083a-45ab-81fe-942de74e50ef', 'attached_at': '', 'detached_at': '', 'volume_id': '52013e9d-e76f-40eb-be07-4c6f98de0219', 'serial': '52013e9d-e76f-40eb-be07-4c6f98de0219'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.025 247708 WARNING nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.038 247708 DEBUG nova.virt.libvirt.host [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.040 247708 DEBUG nova.virt.libvirt.host [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.045 247708 DEBUG nova.virt.libvirt.host [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.045 247708 DEBUG nova.virt.libvirt.host [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.047 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.047 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.047 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.048 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.048 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.048 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.048 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.049 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.049 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.049 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.050 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.050 247708 DEBUG nova.virt.hardware [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.081 247708 DEBUG nova.storage.rbd_utils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 9e883c68-083a-45ab-81fe-942de74e50ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.087 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.159 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.161 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.161 247708 INFO nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Creating image(s)
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.162 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.162 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Ensure instance console log exists: /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.162 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.163 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.163 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:54 compute-0 ceph-mon[74496]: pgmap v3177: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 910 KiB/s wr, 70 op/s
Jan 31 08:38:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3861035264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:38:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3861035264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:38:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:54.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:38:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2880464520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.555 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.796 247708 DEBUG nova.virt.libvirt.vif [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-864114139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-864114139',id=184,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-y7du7vy3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:41Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=9e883c68-083a-45ab-81fe-942de74e50ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.797 247708 DEBUG nova.network.os_vif_util [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.798 247708 DEBUG nova.network.os_vif_util [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:f7:bb,bridge_name='br-int',has_traffic_filtering=True,id=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9f7b34e-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.799 247708 DEBUG nova.objects.instance [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.865 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <uuid>9e883c68-083a-45ab-81fe-942de74e50ef</uuid>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <name>instance-000000b8</name>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-864114139</nova:name>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:38:54</nova:creationTime>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <nova:user uuid="cfc8a271e75e4a92b16ee6b5da9cfc9f">tempest-TestInstancesWithCinderVolumes-791993230-project-member</nova:user>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <nova:project uuid="4b38141686534a0fb9b947a7886cd4b6">tempest-TestInstancesWithCinderVolumes-791993230</nova:project>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <nova:port uuid="a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f">
Jan 31 08:38:54 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <system>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <entry name="serial">9e883c68-083a-45ab-81fe-942de74e50ef</entry>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <entry name="uuid">9e883c68-083a-45ab-81fe-942de74e50ef</entry>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </system>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <os>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   </os>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <features>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   </features>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/9e883c68-083a-45ab-81fe-942de74e50ef_disk.config">
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       </source>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-52013e9d-e76f-40eb-be07-4c6f98de0219">
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       </source>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:38:54 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <serial>52013e9d-e76f-40eb-be07-4c6f98de0219</serial>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:71:f7:bb"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <target dev="tapa9f7b34e-c5"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef/console.log" append="off"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <video>
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </video>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:38:54 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:38:54 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:38:54 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:38:54 compute-0 nova_compute[247704]: </domain>
Jan 31 08:38:54 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.865 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Preparing to wait for external event network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.866 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.866 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.866 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.866 247708 DEBUG nova.virt.libvirt.vif [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-864114139',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-864114139',id=184,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-y7du7vy3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:41Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=9e883c68-083a-45ab-81fe-942de74e50ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.867 247708 DEBUG nova.network.os_vif_util [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.867 247708 DEBUG nova.network.os_vif_util [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:f7:bb,bridge_name='br-int',has_traffic_filtering=True,id=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9f7b34e-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.868 247708 DEBUG os_vif [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:f7:bb,bridge_name='br-int',has_traffic_filtering=True,id=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9f7b34e-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.868 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.869 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.869 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.874 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.874 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9f7b34e-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.874 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa9f7b34e-c5, col_values=(('external_ids', {'iface-id': 'a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:f7:bb', 'vm-uuid': '9e883c68-083a-45ab-81fe-942de74e50ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:54 compute-0 NetworkManager[49108]: <info>  [1769848734.8771] manager: (tapa9f7b34e-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.883 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:54 compute-0 nova_compute[247704]: 2026-01-31 08:38:54.884 247708 INFO os_vif [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:f7:bb,bridge_name='br-int',has_traffic_filtering=True,id=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9f7b34e-c5')
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.046 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.047 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.047 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:71:f7:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.047 247708 INFO nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Using config drive
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.077 247708 DEBUG nova.storage.rbd_utils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 9e883c68-083a-45ab-81fe-942de74e50ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.085 247708 DEBUG nova.network.neutron [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Successfully updated port: 637c7294-89aa-4b8f-8485-df3303e84675 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.211 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.211 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.212 247708 DEBUG nova.network.neutron [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:38:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2880464520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.253 247708 DEBUG nova.compute.manager [req-ff906f1d-fa09-4314-b80a-e593b1399850 req-3133e0ce-2c3d-494d-adc7-e16f4d59704e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-changed-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.253 247708 DEBUG nova.compute.manager [req-ff906f1d-fa09-4314-b80a-e593b1399850 req-3133e0ce-2c3d-494d-adc7-e16f4d59704e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing instance network info cache due to event network-changed-637c7294-89aa-4b8f-8485-df3303e84675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.254 247708 DEBUG oslo_concurrency.lockutils [req-ff906f1d-fa09-4314-b80a-e593b1399850 req-3133e0ce-2c3d-494d-adc7-e16f4d59704e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:38:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:55.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.927 247708 INFO nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Creating config drive at /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef/disk.config
Jan 31 08:38:55 compute-0 nova_compute[247704]: 2026-01-31 08:38:55.932 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbwjtyvip execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.064 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbwjtyvip" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:56 compute-0 sudo[374262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:56 compute-0 sudo[374262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:56 compute-0 sudo[374262]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.102 247708 DEBUG nova.storage.rbd_utils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 9e883c68-083a-45ab-81fe-942de74e50ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.106 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef/disk.config 9e883c68-083a-45ab-81fe-942de74e50ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.128 247708 DEBUG nova.network.neutron [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:38:56 compute-0 sudo[374302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:38:56 compute-0 sudo[374302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:56 compute-0 sudo[374302]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:38:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:56.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:38:56 compute-0 ceph-mon[74496]: pgmap v3178: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.463 247708 DEBUG nova.network.neutron [req-dfae117f-7a6e-417e-93a4-5282066caee8 req-b4f39874-82b7-414a-b0c9-b92b74e3b2d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated VIF entry in instance network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.464 247708 DEBUG nova.network.neutron [req-dfae117f-7a6e-417e-93a4-5282066caee8 req-b4f39874-82b7-414a-b0c9-b92b74e3b2d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.540 247708 DEBUG oslo_concurrency.lockutils [req-dfae117f-7a6e-417e-93a4-5282066caee8 req-b4f39874-82b7-414a-b0c9-b92b74e3b2d0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:38:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 356 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:38:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1619984088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.763 247708 DEBUG oslo_concurrency.processutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef/disk.config 9e883c68-083a-45ab-81fe-942de74e50ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.764 247708 INFO nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Deleting local config drive /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef/disk.config because it was imported into RBD.
Jan 31 08:38:56 compute-0 kernel: tapa9f7b34e-c5: entered promiscuous mode
Jan 31 08:38:56 compute-0 NetworkManager[49108]: <info>  [1769848736.8216] manager: (tapa9f7b34e-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/342)
Jan 31 08:38:56 compute-0 systemd-udevd[374359]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.866 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:56 compute-0 ovn_controller[149457]: 2026-01-31T08:38:56Z|00789|binding|INFO|Claiming lport a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f for this chassis.
Jan 31 08:38:56 compute-0 ovn_controller[149457]: 2026-01-31T08:38:56Z|00790|binding|INFO|a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f: Claiming fa:16:3e:71:f7:bb 10.100.0.8
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.870 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:56 compute-0 NetworkManager[49108]: <info>  [1769848736.8796] device (tapa9f7b34e-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:38:56 compute-0 NetworkManager[49108]: <info>  [1769848736.8807] device (tapa9f7b34e-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:38:56 compute-0 ovn_controller[149457]: 2026-01-31T08:38:56Z|00791|binding|INFO|Setting lport a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f ovn-installed in OVS
Jan 31 08:38:56 compute-0 nova_compute[247704]: 2026-01-31 08:38:56.900 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:56 compute-0 systemd-machined[214448]: New machine qemu-84-instance-000000b8.
Jan 31 08:38:56 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-000000b8.
Jan 31 08:38:57 compute-0 ovn_controller[149457]: 2026-01-31T08:38:57Z|00792|binding|INFO|Setting lport a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f up in Southbound
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.083 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:f7:bb 10.100.0.8'], port_security=['fa:16:3e:71:f7:bb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9e883c68-083a-45ab-81fe-942de74e50ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-090376c2-ac34-46f0-acd4-344bb2bc1154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b38141686534a0fb9b947a7886cd4b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f827482b-eee1-43ff-a797-1ec84e5a6d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05d66413-db50-49eb-973e-490542297b8d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.084 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f in datapath 090376c2-ac34-46f0-acd4-344bb2bc1154 bound to our chassis
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.086 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 090376c2-ac34-46f0-acd4-344bb2bc1154
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.098 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2017e7-c6dd-4e21-8e05-d345456f2b89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.100 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap090376c2-a1 in ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.102 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap090376c2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.102 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b7b377-6434-4c7e-b2af-d835ce3f899e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.104 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[76168dee-9822-407c-b1f0-92ba3c23760c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.114 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[5e73d47e-a25c-44ba-9898-9e25e9b150ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.130 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6dde885e-c586-427d-a095-70c477f14066]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.158 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[96b72ff0-747f-4ea3-a0c2-20db44e06f5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.165 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[11ded55a-fb3e-467f-90ff-3274a1358e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 NetworkManager[49108]: <info>  [1769848737.1666] manager: (tap090376c2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/343)
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.203 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ef018f26-2eba-43b3-9cb7-f0746241d14e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.207 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a2a242-8440-4fc0-a5c8-e14109409c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 NetworkManager[49108]: <info>  [1769848737.2252] device (tap090376c2-a0): carrier: link connected
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.231 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[37c7142e-93f7-4be8-8521-4d5addbffffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.250 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b620d8-279a-420a-bc71-02ef5974ad69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap090376c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:64:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897984, 'reachable_time': 40404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374395, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.264 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4a85c4eb-67e3-4c86-90a0-a633897cbc6e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:640e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 897984, 'tstamp': 897984}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374411, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:38:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:57.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.281 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a385c1-4714-4e89-8dde-2a694a417485]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap090376c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:64:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897984, 'reachable_time': 40404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374415, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.310 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[21711664-63c0-4606-8a93-d3db20c96faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.360 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc80bdc-0cbe-4824-90d1-72cc6ce7fbd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.362 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap090376c2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.363 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.364 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap090376c2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:57 compute-0 nova_compute[247704]: 2026-01-31 08:38:57.366 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:57 compute-0 kernel: tap090376c2-a0: entered promiscuous mode
Jan 31 08:38:57 compute-0 NetworkManager[49108]: <info>  [1769848737.3671] manager: (tap090376c2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Jan 31 08:38:57 compute-0 nova_compute[247704]: 2026-01-31 08:38:57.368 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.369 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap090376c2-a0, col_values=(('external_ids', {'iface-id': 'c8a7eefb-b644-411b-b95a-f875570edfa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:38:57 compute-0 ovn_controller[149457]: 2026-01-31T08:38:57Z|00793|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:38:57 compute-0 nova_compute[247704]: 2026-01-31 08:38:57.370 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:57 compute-0 nova_compute[247704]: 2026-01-31 08:38:57.371 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.372 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/090376c2-ac34-46f0-acd4-344bb2bc1154.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/090376c2-ac34-46f0-acd4-344bb2bc1154.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.373 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3a05c5-ac7a-424b-93dd-220481917c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.373 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-090376c2-ac34-46f0-acd4-344bb2bc1154
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/090376c2-ac34-46f0-acd4-344bb2bc1154.pid.haproxy
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 090376c2-ac34-46f0-acd4-344bb2bc1154
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:38:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:38:57.375 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'env', 'PROCESS_TAG=haproxy-090376c2-ac34-46f0-acd4-344bb2bc1154', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/090376c2-ac34-46f0-acd4-344bb2bc1154.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:38:57 compute-0 nova_compute[247704]: 2026-01-31 08:38:57.375 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:57 compute-0 ceph-mon[74496]: pgmap v3179: 305 pgs: 305 active+clean; 356 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 08:38:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1619984088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:57 compute-0 podman[374464]: 2026-01-31 08:38:57.70167775 +0000 UTC m=+0.023675067 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:38:57 compute-0 podman[374464]: 2026-01-31 08:38:57.805010095 +0000 UTC m=+0.127007392 container create 358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:38:57 compute-0 nova_compute[247704]: 2026-01-31 08:38:57.812 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848737.808674, 9e883c68-083a-45ab-81fe-942de74e50ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:38:57 compute-0 nova_compute[247704]: 2026-01-31 08:38:57.814 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] VM Started (Lifecycle Event)
Jan 31 08:38:57 compute-0 systemd[1]: Started libpod-conmon-358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396.scope.
Jan 31 08:38:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e874a6bc25a888a6b7a755bf0e157ffd73e9face97c7407632a65d8f5ef9293a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:57 compute-0 podman[374464]: 2026-01-31 08:38:57.896762167 +0000 UTC m=+0.218759494 container init 358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:38:57 compute-0 podman[374464]: 2026-01-31 08:38:57.904173608 +0000 UTC m=+0.226170905 container start 358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:38:57 compute-0 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[374484]: [NOTICE]   (374488) : New worker (374490) forked
Jan 31 08:38:57 compute-0 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[374484]: [NOTICE]   (374488) : Loading success.
Jan 31 08:38:58 compute-0 nova_compute[247704]: 2026-01-31 08:38:58.069 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:38:58 compute-0 nova_compute[247704]: 2026-01-31 08:38:58.075 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848737.8090043, 9e883c68-083a-45ab-81fe-942de74e50ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:38:58 compute-0 nova_compute[247704]: 2026-01-31 08:38:58.076 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] VM Paused (Lifecycle Event)
Jan 31 08:38:58 compute-0 nova_compute[247704]: 2026-01-31 08:38:58.378 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:38:58 compute-0 nova_compute[247704]: 2026-01-31 08:38:58.382 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:38:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:58.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 795 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:38:58 compute-0 nova_compute[247704]: 2026-01-31 08:38:58.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/409800787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:58 compute-0 nova_compute[247704]: 2026-01-31 08:38:58.851 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:38:59 compute-0 nova_compute[247704]: 2026-01-31 08:38:59.200 247708 DEBUG nova.network.neutron [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:38:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:38:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:38:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:59.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:38:59 compute-0 nova_compute[247704]: 2026-01-31 08:38:59.362 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:59 compute-0 nova_compute[247704]: 2026-01-31 08:38:59.362 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:59 compute-0 nova_compute[247704]: 2026-01-31 08:38:59.362 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:59 compute-0 nova_compute[247704]: 2026-01-31 08:38:59.363 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:38:59 compute-0 nova_compute[247704]: 2026-01-31 08:38:59.363 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:38:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87716784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:38:59 compute-0 nova_compute[247704]: 2026-01-31 08:38:59.832 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:59 compute-0 nova_compute[247704]: 2026-01-31 08:38:59.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:38:59 compute-0 ceph-mon[74496]: pgmap v3180: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 795 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 31 08:38:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/87716784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.037 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.038 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Instance network_info: |[{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.039 247708 DEBUG oslo_concurrency.lockutils [req-ff906f1d-fa09-4314-b80a-e593b1399850 req-3133e0ce-2c3d-494d-adc7-e16f4d59704e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.039 247708 DEBUG nova.network.neutron [req-ff906f1d-fa09-4314-b80a-e593b1399850 req-3133e0ce-2c3d-494d-adc7-e16f4d59704e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.042 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Start _get_guest_xml network_info=[{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '724a860c-bb9f-4cd1-8cef-1d90670977f8', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-937c2f7d-b961-4dec-b1ad-34dd4c2d4cac', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '937c2f7d-b961-4dec-b1ad-34dd4c2d4cac', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5f00cd9b-b5f3-4eb6-ab53-387687853c27', 'attached_at': '', 'detached_at': '', 'volume_id': '937c2f7d-b961-4dec-b1ad-34dd4c2d4cac', 'serial': '937c2f7d-b961-4dec-b1ad-34dd4c2d4cac'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.047 247708 WARNING nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.051 247708 DEBUG nova.virt.libvirt.host [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.053 247708 DEBUG nova.virt.libvirt.host [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.056 247708 DEBUG nova.virt.libvirt.host [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.056 247708 DEBUG nova.virt.libvirt.host [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.058 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.059 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.059 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.060 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.060 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.060 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.060 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.060 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.060 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.060 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.061 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.061 247708 DEBUG nova.virt.hardware [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.086 247708 DEBUG nova.storage.rbd_utils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 5f00cd9b-b5f3-4eb6-ab53-387687853c27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.091 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:00.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:39:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078090768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 31 08:39:00 compute-0 nova_compute[247704]: 2026-01-31 08:39:00.566 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1078090768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.097 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.098 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:39:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.305 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.306 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4192MB free_disk=20.94274139404297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.306 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.306 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.333 247708 DEBUG nova.virt.libvirt.vif [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1174976220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1174976220',id=185,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-kh6rav0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:50Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=5f00cd9b-b5f3-4eb6-ab53-387687853c27,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.334 247708 DEBUG nova.network.os_vif_util [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.336 247708 DEBUG nova.network.os_vif_util [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:6c:09,bridge_name='br-int',has_traffic_filtering=True,id=637c7294-89aa-4b8f-8485-df3303e84675,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637c7294-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.337 247708 DEBUG nova.objects.instance [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f00cd9b-b5f3-4eb6-ab53-387687853c27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.551 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.736 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <uuid>5f00cd9b-b5f3-4eb6-ab53-387687853c27</uuid>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <name>instance-000000b9</name>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-1174976220</nova:name>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:39:00</nova:creationTime>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <nova:user uuid="cfc8a271e75e4a92b16ee6b5da9cfc9f">tempest-TestInstancesWithCinderVolumes-791993230-project-member</nova:user>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <nova:project uuid="4b38141686534a0fb9b947a7886cd4b6">tempest-TestInstancesWithCinderVolumes-791993230</nova:project>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <nova:port uuid="637c7294-89aa-4b8f-8485-df3303e84675">
Jan 31 08:39:01 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <system>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <entry name="serial">5f00cd9b-b5f3-4eb6-ab53-387687853c27</entry>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <entry name="uuid">5f00cd9b-b5f3-4eb6-ab53-387687853c27</entry>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </system>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <os>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   </os>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <features>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   </features>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/5f00cd9b-b5f3-4eb6-ab53-387687853c27_disk.config">
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       </source>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-937c2f7d-b961-4dec-b1ad-34dd4c2d4cac">
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       </source>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:39:01 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <serial>937c2f7d-b961-4dec-b1ad-34dd4c2d4cac</serial>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:4b:6c:09"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <target dev="tap637c7294-89"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27/console.log" append="off"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <video>
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </video>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:39:01 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:39:01 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:39:01 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:39:01 compute-0 nova_compute[247704]: </domain>
Jan 31 08:39:01 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.737 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Preparing to wait for external event network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.737 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.737 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.738 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.738 247708 DEBUG nova.virt.libvirt.vif [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1174976220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1174976220',id=185,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-kh6rav0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:50Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=5f00cd9b-b5f3-4eb6-ab53-387687853c27,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.739 247708 DEBUG nova.network.os_vif_util [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.739 247708 DEBUG nova.network.os_vif_util [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:6c:09,bridge_name='br-int',has_traffic_filtering=True,id=637c7294-89aa-4b8f-8485-df3303e84675,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637c7294-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.740 247708 DEBUG os_vif [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:6c:09,bridge_name='br-int',has_traffic_filtering=True,id=637c7294-89aa-4b8f-8485-df3303e84675,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637c7294-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.740 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.741 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.741 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.745 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap637c7294-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.746 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap637c7294-89, col_values=(('external_ids', {'iface-id': '637c7294-89aa-4b8f-8485-df3303e84675', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:6c:09', 'vm-uuid': '5f00cd9b-b5f3-4eb6-ab53-387687853c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:01 compute-0 NetworkManager[49108]: <info>  [1769848741.7494] manager: (tap637c7294-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.750 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.755 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:01 compute-0 nova_compute[247704]: 2026-01-31 08:39:01.756 247708 INFO os_vif [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:6c:09,bridge_name='br-int',has_traffic_filtering=True,id=637c7294-89aa-4b8f-8485-df3303e84675,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637c7294-89')
Jan 31 08:39:02 compute-0 ceph-mon[74496]: pgmap v3181: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 31 08:39:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:02.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 367 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 140 op/s
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.790 247708 DEBUG nova.compute.manager [req-4425ae88-d618-44dd-8d89-8c7faaf70f86 req-33e93e2f-7865-472a-98e5-5413d0b69980 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.790 247708 DEBUG oslo_concurrency.lockutils [req-4425ae88-d618-44dd-8d89-8c7faaf70f86 req-33e93e2f-7865-472a-98e5-5413d0b69980 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.791 247708 DEBUG oslo_concurrency.lockutils [req-4425ae88-d618-44dd-8d89-8c7faaf70f86 req-33e93e2f-7865-472a-98e5-5413d0b69980 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.791 247708 DEBUG oslo_concurrency.lockutils [req-4425ae88-d618-44dd-8d89-8c7faaf70f86 req-33e93e2f-7865-472a-98e5-5413d0b69980 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.791 247708 DEBUG nova.compute.manager [req-4425ae88-d618-44dd-8d89-8c7faaf70f86 req-33e93e2f-7865-472a-98e5-5413d0b69980 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Processing event network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.792 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.797 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848742.796448, 9e883c68-083a-45ab-81fe-942de74e50ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.797 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] VM Resumed (Lifecycle Event)
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.799 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.818 247708 INFO nova.virt.libvirt.driver [-] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Instance spawned successfully.
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.820 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.825 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.826 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.826 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:4b:6c:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.829 247708 INFO nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Using config drive
Jan 31 08:39:02 compute-0 nova_compute[247704]: 2026-01-31 08:39:02.867 247708 DEBUG nova.storage.rbd_utils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 5f00cd9b-b5f3-4eb6-ab53-387687853c27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:02 compute-0 podman[374566]: 2026-01-31 08:39:02.943977117 +0000 UTC m=+0.096220743 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:39:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.085 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:04 compute-0 ceph-mon[74496]: pgmap v3182: 305 pgs: 305 active+clean; 367 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 140 op/s
Jan 31 08:39:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/136018308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:04.089 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:39:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:04.097 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.364 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.377 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.384 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.385 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.386 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.387 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.388 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.389 247708 DEBUG nova.virt.libvirt.driver [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:04.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 395 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 157 op/s
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.929 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 9e883c68-083a-45ab-81fe-942de74e50ef actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.929 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.930 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:39:04 compute-0 nova_compute[247704]: 2026-01-31 08:39:04.930 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:39:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:05.101 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:05.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:05 compute-0 nova_compute[247704]: 2026-01-31 08:39:05.284 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:39:05 compute-0 nova_compute[247704]: 2026-01-31 08:39:05.695 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:05 compute-0 nova_compute[247704]: 2026-01-31 08:39:05.730 247708 INFO nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Took 19.44 seconds to spawn the instance on the hypervisor.
Jan 31 08:39:05 compute-0 nova_compute[247704]: 2026-01-31 08:39:05.731 247708 DEBUG nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:05 compute-0 nova_compute[247704]: 2026-01-31 08:39:05.858 247708 INFO nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Creating config drive at /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27/disk.config
Jan 31 08:39:05 compute-0 nova_compute[247704]: 2026-01-31 08:39:05.863 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9mjb5wdq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.002 247708 DEBUG nova.compute.manager [req-aa88f3d3-f149-4cad-9e00-4efc53f93ff6 req-e10d9c52-7670-461b-a1fb-2304451f3052 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.002 247708 DEBUG oslo_concurrency.lockutils [req-aa88f3d3-f149-4cad-9e00-4efc53f93ff6 req-e10d9c52-7670-461b-a1fb-2304451f3052 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.002 247708 DEBUG oslo_concurrency.lockutils [req-aa88f3d3-f149-4cad-9e00-4efc53f93ff6 req-e10d9c52-7670-461b-a1fb-2304451f3052 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.003 247708 DEBUG oslo_concurrency.lockutils [req-aa88f3d3-f149-4cad-9e00-4efc53f93ff6 req-e10d9c52-7670-461b-a1fb-2304451f3052 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.003 247708 DEBUG nova.compute.manager [req-aa88f3d3-f149-4cad-9e00-4efc53f93ff6 req-e10d9c52-7670-461b-a1fb-2304451f3052 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] No waiting events found dispatching network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.003 247708 WARNING nova.compute.manager [req-aa88f3d3-f149-4cad-9e00-4efc53f93ff6 req-e10d9c52-7670-461b-a1fb-2304451f3052 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received unexpected event network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f for instance with vm_state building and task_state spawning.
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.003 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9mjb5wdq" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.040 247708 DEBUG nova.storage.rbd_utils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] rbd image 5f00cd9b-b5f3-4eb6-ab53-387687853c27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.045 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27/disk.config 5f00cd9b-b5f3-4eb6-ab53-387687853c27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:06 compute-0 ceph-mon[74496]: pgmap v3183: 305 pgs: 305 active+clean; 395 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 157 op/s
Jan 31 08:39:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2964727998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:39:06 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962867648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.138 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.143 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.233 247708 DEBUG oslo_concurrency.processutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27/disk.config 5f00cd9b-b5f3-4eb6-ab53-387687853c27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.234 247708 INFO nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Deleting local config drive /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27/disk.config because it was imported into RBD.
Jan 31 08:39:06 compute-0 kernel: tap637c7294-89: entered promiscuous mode
Jan 31 08:39:06 compute-0 NetworkManager[49108]: <info>  [1769848746.2945] manager: (tap637c7294-89): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Jan 31 08:39:06 compute-0 ovn_controller[149457]: 2026-01-31T08:39:06Z|00794|binding|INFO|Claiming lport 637c7294-89aa-4b8f-8485-df3303e84675 for this chassis.
Jan 31 08:39:06 compute-0 ovn_controller[149457]: 2026-01-31T08:39:06Z|00795|binding|INFO|637c7294-89aa-4b8f-8485-df3303e84675: Claiming fa:16:3e:4b:6c:09 10.100.0.14
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.357 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:06 compute-0 ovn_controller[149457]: 2026-01-31T08:39:06Z|00796|binding|INFO|Setting lport 637c7294-89aa-4b8f-8485-df3303e84675 ovn-installed in OVS
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.374 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.382 247708 INFO nova.compute.manager [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Took 27.73 seconds to build instance.
Jan 31 08:39:06 compute-0 systemd-udevd[374684]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:39:06 compute-0 systemd-machined[214448]: New machine qemu-85-instance-000000b9.
Jan 31 08:39:06 compute-0 NetworkManager[49108]: <info>  [1769848746.3969] device (tap637c7294-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:39:06 compute-0 NetworkManager[49108]: <info>  [1769848746.3983] device (tap637c7294-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:39:06 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-000000b9.
Jan 31 08:39:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:06.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 406 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 161 op/s
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.554 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.801 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.924 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848746.9242387, 5f00cd9b-b5f3-4eb6-ab53-387687853c27 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:39:06 compute-0 nova_compute[247704]: 2026-01-31 08:39:06.925 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] VM Started (Lifecycle Event)
Jan 31 08:39:06 compute-0 ovn_controller[149457]: 2026-01-31T08:39:06Z|00797|binding|INFO|Setting lport 637c7294-89aa-4b8f-8485-df3303e84675 up in Southbound
Jan 31 08:39:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:06.962 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:6c:09 10.100.0.14'], port_security=['fa:16:3e:4b:6c:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5f00cd9b-b5f3-4eb6-ab53-387687853c27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-090376c2-ac34-46f0-acd4-344bb2bc1154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b38141686534a0fb9b947a7886cd4b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f827482b-eee1-43ff-a797-1ec84e5a6d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05d66413-db50-49eb-973e-490542297b8d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=637c7294-89aa-4b8f-8485-df3303e84675) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:39:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:06.963 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 637c7294-89aa-4b8f-8485-df3303e84675 in datapath 090376c2-ac34-46f0-acd4-344bb2bc1154 bound to our chassis
Jan 31 08:39:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:06.965 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 090376c2-ac34-46f0-acd4-344bb2bc1154
Jan 31 08:39:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:06.982 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[61ec3c70-56da-4bb4-821a-d3d3c31da654]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.018 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[59529790-affb-4755-a30f-9db5b4d22c05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.025 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e8a620-f8bb-41bf-895c-79fe3ea7b9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.053 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7c333efe-6e6f-444b-93e8-625ffb652a14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.072 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[42725e73-b9ce-4aa0-92b0-feda5abc9724]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap090376c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:64:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897984, 'reachable_time': 30594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374741, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.094 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1436bc-e25a-4729-903e-e428bdcdabc8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap090376c2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 897994, 'tstamp': 897994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374742, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap090376c2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 897997, 'tstamp': 897997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374742, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.096 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap090376c2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:07 compute-0 nova_compute[247704]: 2026-01-31 08:39:07.099 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.100 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap090376c2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.101 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.101 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap090376c2-a0, col_values=(('external_ids', {'iface-id': 'c8a7eefb-b644-411b-b95a-f875570edfa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:39:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:07.102 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:39:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/962867648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2495342213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:07.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:07 compute-0 nova_compute[247704]: 2026-01-31 08:39:07.836 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:07 compute-0 nova_compute[247704]: 2026-01-31 08:39:07.842 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848746.9244044, 5f00cd9b-b5f3-4eb6-ab53-387687853c27 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:39:07 compute-0 nova_compute[247704]: 2026-01-31 08:39:07.842 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] VM Paused (Lifecycle Event)
Jan 31 08:39:08 compute-0 ceph-mon[74496]: pgmap v3184: 305 pgs: 305 active+clean; 406 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 161 op/s
Jan 31 08:39:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:08.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.497 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.498 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.528 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.533 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:39:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 406 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 162 op/s
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.606 247708 DEBUG oslo_concurrency.lockutils [None req-92866e04-ac95-4adc-8a91-dde988aaf1b0 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.700 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:39:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.900 247708 DEBUG nova.compute.manager [req-9d3b5fae-a87c-4529-ac3f-6c1d0040d3d6 req-e6635233-8c41-4442-8170-838d2659d222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.901 247708 DEBUG oslo_concurrency.lockutils [req-9d3b5fae-a87c-4529-ac3f-6c1d0040d3d6 req-e6635233-8c41-4442-8170-838d2659d222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.902 247708 DEBUG oslo_concurrency.lockutils [req-9d3b5fae-a87c-4529-ac3f-6c1d0040d3d6 req-e6635233-8c41-4442-8170-838d2659d222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.902 247708 DEBUG oslo_concurrency.lockutils [req-9d3b5fae-a87c-4529-ac3f-6c1d0040d3d6 req-e6635233-8c41-4442-8170-838d2659d222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.903 247708 DEBUG nova.compute.manager [req-9d3b5fae-a87c-4529-ac3f-6c1d0040d3d6 req-e6635233-8c41-4442-8170-838d2659d222 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Processing event network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.905 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.914 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769848748.9136384, 5f00cd9b-b5f3-4eb6-ab53-387687853c27 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.914 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] VM Resumed (Lifecycle Event)
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.916 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.921 247708 INFO nova.virt.libvirt.driver [-] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Instance spawned successfully.
Jan 31 08:39:08 compute-0 nova_compute[247704]: 2026-01-31 08:39:08.921 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.259 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.264 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.265 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.266 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.266 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.266 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.267 247708 DEBUG nova.virt.libvirt.driver [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.272 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:39:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:09.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.497 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.498 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.499 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.499 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.552 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.711 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.758 247708 DEBUG nova.network.neutron [req-ff906f1d-fa09-4314-b80a-e593b1399850 req-3133e0ce-2c3d-494d-adc7-e16f4d59704e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updated VIF entry in instance network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.759 247708 DEBUG nova.network.neutron [req-ff906f1d-fa09-4314-b80a-e593b1399850 req-3133e0ce-2c3d-494d-adc7-e16f4d59704e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.763 247708 INFO nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Took 15.60 seconds to spawn the instance on the hypervisor.
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.763 247708 DEBUG nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.816 247708 DEBUG oslo_concurrency.lockutils [req-ff906f1d-fa09-4314-b80a-e593b1399850 req-3133e0ce-2c3d-494d-adc7-e16f4d59704e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.899 247708 INFO nova.compute.manager [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Took 22.86 seconds to build instance.
Jan 31 08:39:09 compute-0 nova_compute[247704]: 2026-01-31 08:39:09.949 247708 DEBUG oslo_concurrency.lockutils [None req-1ddb9b24-d9e5-421f-aa0d-da8121b5ec51 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:10 compute-0 nova_compute[247704]: 2026-01-31 08:39:10.105 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:39:10 compute-0 nova_compute[247704]: 2026-01-31 08:39:10.105 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:39:10 compute-0 nova_compute[247704]: 2026-01-31 08:39:10.106 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:39:10 compute-0 nova_compute[247704]: 2026-01-31 08:39:10.106 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:39:10 compute-0 ceph-mon[74496]: pgmap v3185: 305 pgs: 305 active+clean; 406 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 162 op/s
Jan 31 08:39:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2241056019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:10.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 415 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Jan 31 08:39:11 compute-0 nova_compute[247704]: 2026-01-31 08:39:11.122 247708 DEBUG nova.compute.manager [req-1ff57096-72cd-4496-aa1b-0dbdb3d14c88 req-739dfe24-c7d9-4ec1-9c85-5c8466dc6233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:39:11 compute-0 nova_compute[247704]: 2026-01-31 08:39:11.123 247708 DEBUG oslo_concurrency.lockutils [req-1ff57096-72cd-4496-aa1b-0dbdb3d14c88 req-739dfe24-c7d9-4ec1-9c85-5c8466dc6233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:11 compute-0 nova_compute[247704]: 2026-01-31 08:39:11.123 247708 DEBUG oslo_concurrency.lockutils [req-1ff57096-72cd-4496-aa1b-0dbdb3d14c88 req-739dfe24-c7d9-4ec1-9c85-5c8466dc6233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:11 compute-0 nova_compute[247704]: 2026-01-31 08:39:11.124 247708 DEBUG oslo_concurrency.lockutils [req-1ff57096-72cd-4496-aa1b-0dbdb3d14c88 req-739dfe24-c7d9-4ec1-9c85-5c8466dc6233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:11 compute-0 nova_compute[247704]: 2026-01-31 08:39:11.124 247708 DEBUG nova.compute.manager [req-1ff57096-72cd-4496-aa1b-0dbdb3d14c88 req-739dfe24-c7d9-4ec1-9c85-5c8466dc6233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] No waiting events found dispatching network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:39:11 compute-0 nova_compute[247704]: 2026-01-31 08:39:11.125 247708 WARNING nova.compute.manager [req-1ff57096-72cd-4496-aa1b-0dbdb3d14c88 req-739dfe24-c7d9-4ec1-9c85-5c8466dc6233 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received unexpected event network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 for instance with vm_state active and task_state None.
Jan 31 08:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:11.208 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:11.209 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:39:11.210 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:11.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:11 compute-0 nova_compute[247704]: 2026-01-31 08:39:11.555 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:11 compute-0 nova_compute[247704]: 2026-01-31 08:39:11.750 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:12.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:12 compute-0 ceph-mon[74496]: pgmap v3186: 305 pgs: 305 active+clean; 415 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Jan 31 08:39:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Jan 31 08:39:12 compute-0 nova_compute[247704]: 2026-01-31 08:39:12.737 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:39:12 compute-0 nova_compute[247704]: 2026-01-31 08:39:12.807 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:39:12 compute-0 nova_compute[247704]: 2026-01-31 08:39:12.808 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:39:12 compute-0 nova_compute[247704]: 2026-01-31 08:39:12.809 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:12 compute-0 nova_compute[247704]: 2026-01-31 08:39:12.809 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:13.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:13 compute-0 ceph-mon[74496]: pgmap v3187: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Jan 31 08:39:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4186895028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:14.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 230 op/s
Jan 31 08:39:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3098512103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1072869473' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:15.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:15 compute-0 nova_compute[247704]: 2026-01-31 08:39:15.867 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:16 compute-0 sudo[374748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:16 compute-0 sudo[374748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:16 compute-0 sudo[374748]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:16 compute-0 ceph-mon[74496]: pgmap v3188: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.4 MiB/s wr, 230 op/s
Jan 31 08:39:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1354848748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:16 compute-0 sudo[374773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:16 compute-0 sudo[374773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:16 compute-0 sudo[374773]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:16.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 440 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 195 op/s
Jan 31 08:39:16 compute-0 nova_compute[247704]: 2026-01-31 08:39:16.559 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:16 compute-0 nova_compute[247704]: 2026-01-31 08:39:16.752 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:17.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:18 compute-0 ovn_controller[149457]: 2026-01-31T08:39:18Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:f7:bb 10.100.0.8
Jan 31 08:39:18 compute-0 ovn_controller[149457]: 2026-01-31T08:39:18Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:f7:bb 10.100.0.8
Jan 31 08:39:18 compute-0 ceph-mon[74496]: pgmap v3189: 305 pgs: 305 active+clean; 440 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 195 op/s
Jan 31 08:39:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:18.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 190 op/s
Jan 31 08:39:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:19.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:19 compute-0 ceph-mon[74496]: pgmap v3190: 305 pgs: 305 active+clean; 468 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 190 op/s
Jan 31 08:39:19 compute-0 sshd-session[374799]: Invalid user ubuntu from 45.148.10.240 port 60818
Jan 31 08:39:19 compute-0 sshd-session[374799]: Connection closed by invalid user ubuntu 45.148.10.240 port 60818 [preauth]
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:39:20
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.meta']
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:39:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:20.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 495 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:39:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:39:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:21.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:21 compute-0 nova_compute[247704]: 2026-01-31 08:39:21.562 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:21 compute-0 ceph-mon[74496]: pgmap v3191: 305 pgs: 305 active+clean; 495 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Jan 31 08:39:21 compute-0 nova_compute[247704]: 2026-01-31 08:39:21.754 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:21 compute-0 podman[374803]: 2026-01-31 08:39:21.893365927 +0000 UTC m=+0.060775119 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:39:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:22.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.8 MiB/s wr, 276 op/s
Jan 31 08:39:22 compute-0 ovn_controller[149457]: 2026-01-31T08:39:22Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:6c:09 10.100.0.14
Jan 31 08:39:22 compute-0 ovn_controller[149457]: 2026-01-31T08:39:22Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:6c:09 10.100.0.14
Jan 31 08:39:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:23.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:23 compute-0 nova_compute[247704]: 2026-01-31 08:39:23.736 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:23 compute-0 NetworkManager[49108]: <info>  [1769848763.7377] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Jan 31 08:39:23 compute-0 NetworkManager[49108]: <info>  [1769848763.7391] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Jan 31 08:39:23 compute-0 nova_compute[247704]: 2026-01-31 08:39:23.781 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:23 compute-0 ovn_controller[149457]: 2026-01-31T08:39:23Z|00798|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:39:23 compute-0 nova_compute[247704]: 2026-01-31 08:39:23.809 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:23 compute-0 ceph-mon[74496]: pgmap v3192: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.8 MiB/s wr, 276 op/s
Jan 31 08:39:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:24.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 545 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Jan 31 08:39:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:25.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:26 compute-0 ceph-mon[74496]: pgmap v3193: 305 pgs: 305 active+clean; 545 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Jan 31 08:39:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3702192440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:26.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 232 op/s
Jan 31 08:39:26 compute-0 nova_compute[247704]: 2026-01-31 08:39:26.565 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:26 compute-0 nova_compute[247704]: 2026-01-31 08:39:26.756 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/239804748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:39:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:27.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.9 MiB/s wr, 228 op/s
Jan 31 08:39:28 compute-0 ceph-mon[74496]: pgmap v3194: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 232 op/s
Jan 31 08:39:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:29 compute-0 ceph-mon[74496]: pgmap v3195: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.9 MiB/s wr, 228 op/s
Jan 31 08:39:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:30.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 207 op/s
Jan 31 08:39:30 compute-0 sudo[374829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:30 compute-0 sudo[374829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:30 compute-0 sudo[374829]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:30 compute-0 sudo[374854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:30 compute-0 sudo[374854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:30 compute-0 sudo[374854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:30 compute-0 sudo[374879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:30 compute-0 sudo[374879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:30 compute-0 sudo[374879]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:30 compute-0 sudo[374904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:39:30 compute-0 sudo[374904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:31 compute-0 sudo[374904]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:31 compute-0 nova_compute[247704]: 2026-01-31 08:39:31.568 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:31 compute-0 nova_compute[247704]: 2026-01-31 08:39:31.759 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:31 compute-0 ceph-mon[74496]: pgmap v3196: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 207 op/s
Jan 31 08:39:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:32.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 555 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 168 op/s
Jan 31 08:39:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:39:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:39:33 compute-0 podman[374963]: 2026-01-31 08:39:33.903001458 +0000 UTC m=+0.076749178 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:39:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:39:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:39:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:39:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:39:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:39:34 compute-0 ceph-mon[74496]: pgmap v3197: 305 pgs: 305 active+clean; 555 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.2 MiB/s wr, 168 op/s
Jan 31 08:39:34 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 562 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 126 op/s
Jan 31 08:39:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8bf7769d-d356-40a9-a1ab-08a9f52c37ab does not exist
Jan 31 08:39:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f2c6ad69-7c3a-4b5f-b04b-749e1c2b59eb does not exist
Jan 31 08:39:34 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ee53b16e-e1d7-4830-9653-cf1b1c9d90c2 does not exist
Jan 31 08:39:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:39:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:39:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:39:34 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:39:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:39:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:39:34 compute-0 sudo[374989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:34 compute-0 sudo[374989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:34 compute-0 sudo[374989]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:34 compute-0 sudo[375014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:34 compute-0 sudo[375014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:34 compute-0 sudo[375014]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:34 compute-0 sudo[375039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:34 compute-0 sudo[375039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:34 compute-0 sudo[375039]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:34 compute-0 sudo[375064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:39:34 compute-0 sudo[375064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:35 compute-0 podman[375128]: 2026-01-31 08:39:35.381903323 +0000 UTC m=+0.078235685 container create 0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:39:35 compute-0 podman[375128]: 2026-01-31 08:39:35.328001441 +0000 UTC m=+0.024333833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:39:35 compute-0 systemd[1]: Started libpod-conmon-0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f.scope.
Jan 31 08:39:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:39:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:39:35 compute-0 ceph-mon[74496]: pgmap v3198: 305 pgs: 305 active+clean; 562 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 126 op/s
Jan 31 08:39:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:39:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:39:35 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0049420641983063465 of space, bias 1.0, pg target 1.4826192594919039 quantized to 32 (current 32)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008650384445375024 of space, bias 1.0, pg target 2.5864649491671323 quantized to 32 (current 32)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8478230998743718 quantized to 32 (current 32)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:39:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 08:39:36 compute-0 podman[375128]: 2026-01-31 08:39:36.055132145 +0000 UTC m=+0.751464527 container init 0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:36 compute-0 podman[375128]: 2026-01-31 08:39:36.065728372 +0000 UTC m=+0.762060734 container start 0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:36 compute-0 peaceful_galois[375145]: 167 167
Jan 31 08:39:36 compute-0 systemd[1]: libpod-0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f.scope: Deactivated successfully.
Jan 31 08:39:36 compute-0 podman[375128]: 2026-01-31 08:39:36.076129375 +0000 UTC m=+0.772461757 container attach 0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:36 compute-0 podman[375128]: 2026-01-31 08:39:36.076530335 +0000 UTC m=+0.772862727 container died 0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 08:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-246ed3189ca349391263c0728f46ff29f58d3089b2370abbcb6b4112f56025c3-merged.mount: Deactivated successfully.
Jan 31 08:39:36 compute-0 sudo[375163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:36 compute-0 sudo[375163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:36 compute-0 sudo[375163]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:36 compute-0 podman[375128]: 2026-01-31 08:39:36.403846409 +0000 UTC m=+1.100178781 container remove 0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:39:36 compute-0 sudo[375188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:36 compute-0 sudo[375188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:36 compute-0 sudo[375188]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:36 compute-0 systemd[1]: libpod-conmon-0db54f764adf474d8abefdbda4f98dee9410d7fdc0dc4de2c14bd98bda6e4c1f.scope: Deactivated successfully.
Jan 31 08:39:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 569 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Jan 31 08:39:36 compute-0 nova_compute[247704]: 2026-01-31 08:39:36.570 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:36 compute-0 podman[375220]: 2026-01-31 08:39:36.530364237 +0000 UTC m=+0.023466312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:39:36 compute-0 nova_compute[247704]: 2026-01-31 08:39:36.802 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:36 compute-0 podman[375220]: 2026-01-31 08:39:36.917054526 +0000 UTC m=+0.410156571 container create 4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:39:37 compute-0 systemd[1]: Started libpod-conmon-4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe.scope.
Jan 31 08:39:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83071a3b3fecdb9e1ef9e4c40204470b59719edb4318ea57d7825889843e7bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83071a3b3fecdb9e1ef9e4c40204470b59719edb4318ea57d7825889843e7bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83071a3b3fecdb9e1ef9e4c40204470b59719edb4318ea57d7825889843e7bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83071a3b3fecdb9e1ef9e4c40204470b59719edb4318ea57d7825889843e7bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e83071a3b3fecdb9e1ef9e4c40204470b59719edb4318ea57d7825889843e7bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:37 compute-0 podman[375220]: 2026-01-31 08:39:37.257657914 +0000 UTC m=+0.750759959 container init 4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:37 compute-0 podman[375220]: 2026-01-31 08:39:37.264413768 +0000 UTC m=+0.757515813 container start 4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:39:37 compute-0 podman[375220]: 2026-01-31 08:39:37.281956695 +0000 UTC m=+0.775058730 container attach 4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:39:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:37.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:37 compute-0 ceph-mon[74496]: pgmap v3199: 305 pgs: 305 active+clean; 569 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Jan 31 08:39:38 compute-0 determined_rosalind[375236]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:39:38 compute-0 determined_rosalind[375236]: --> relative data size: 1.0
Jan 31 08:39:38 compute-0 determined_rosalind[375236]: --> All data devices are unavailable
Jan 31 08:39:38 compute-0 systemd[1]: libpod-4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe.scope: Deactivated successfully.
Jan 31 08:39:38 compute-0 conmon[375236]: conmon 4a846c747643df75a39c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe.scope/container/memory.events
Jan 31 08:39:38 compute-0 podman[375220]: 2026-01-31 08:39:38.090458757 +0000 UTC m=+1.583560792 container died 4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e83071a3b3fecdb9e1ef9e4c40204470b59719edb4318ea57d7825889843e7bb-merged.mount: Deactivated successfully.
Jan 31 08:39:38 compute-0 podman[375220]: 2026-01-31 08:39:38.15634789 +0000 UTC m=+1.649449935 container remove 4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:38 compute-0 systemd[1]: libpod-conmon-4a846c747643df75a39c862fc54222964b061e3acf48ab78fb6855b241295fbe.scope: Deactivated successfully.
Jan 31 08:39:38 compute-0 sudo[375064]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:38 compute-0 sudo[375266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:38 compute-0 sudo[375266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:38 compute-0 sudo[375266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:38 compute-0 sudo[375291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:38 compute-0 sudo[375291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:38 compute-0 sudo[375291]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:38 compute-0 sudo[375316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:38 compute-0 sudo[375316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:38 compute-0 sudo[375316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:38 compute-0 sudo[375341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:39:38 compute-0 sudo[375341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:38.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 577 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 31 08:39:38 compute-0 podman[375407]: 2026-01-31 08:39:38.721254136 +0000 UTC m=+0.042948916 container create 967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:38 compute-0 systemd[1]: Started libpod-conmon-967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d.scope.
Jan 31 08:39:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:38 compute-0 podman[375407]: 2026-01-31 08:39:38.792753226 +0000 UTC m=+0.114448026 container init 967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 08:39:38 compute-0 podman[375407]: 2026-01-31 08:39:38.700577633 +0000 UTC m=+0.022272433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:39:38 compute-0 podman[375407]: 2026-01-31 08:39:38.799802948 +0000 UTC m=+0.121497728 container start 967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:39:38 compute-0 podman[375407]: 2026-01-31 08:39:38.803104988 +0000 UTC m=+0.124799768 container attach 967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khayyam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:39:38 compute-0 inspiring_khayyam[375423]: 167 167
Jan 31 08:39:38 compute-0 systemd[1]: libpod-967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d.scope: Deactivated successfully.
Jan 31 08:39:38 compute-0 podman[375407]: 2026-01-31 08:39:38.806935441 +0000 UTC m=+0.128630221 container died 967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khayyam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 08:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-08f59aafce0b0e88aefb74ead9342ba4c2e83b22cae1af02f51d01a3edd3ee5a-merged.mount: Deactivated successfully.
Jan 31 08:39:38 compute-0 podman[375407]: 2026-01-31 08:39:38.848768249 +0000 UTC m=+0.170463029 container remove 967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:38 compute-0 systemd[1]: libpod-conmon-967e92873932af74a02d1d0fd9dd19dd41972473be461bb7a137355cdf93125d.scope: Deactivated successfully.
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:38.874409) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778874454, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1655, "num_deletes": 256, "total_data_size": 2938850, "memory_usage": 2986376, "flush_reason": "Manual Compaction"}
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778905705, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 2841736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68538, "largest_seqno": 70192, "table_properties": {"data_size": 2834087, "index_size": 4527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16107, "raw_average_key_size": 20, "raw_value_size": 2818706, "raw_average_value_size": 3510, "num_data_blocks": 198, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848630, "oldest_key_time": 1769848630, "file_creation_time": 1769848778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 31355 microseconds, and 4912 cpu microseconds.
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:38.905755) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 2841736 bytes OK
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:38.905781) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:38.907553) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:38.907605) EVENT_LOG_v1 {"time_micros": 1769848778907598, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:38.907631) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 2931845, prev total WAL file size 2931845, number of live WAL files 2.
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:38.908341) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(2775KB)], [155(11MB)]
Jan 31 08:39:38 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848778908440, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 15126538, "oldest_snapshot_seqno": -1}
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 9888 keys, 14972067 bytes, temperature: kUnknown
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779048781, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 14972067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14905622, "index_size": 40604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24773, "raw_key_size": 260038, "raw_average_key_size": 26, "raw_value_size": 14729856, "raw_average_value_size": 1489, "num_data_blocks": 1564, "num_entries": 9888, "num_filter_entries": 9888, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769848778, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:39:39 compute-0 podman[375448]: 2026-01-31 08:39:38.969923236 +0000 UTC m=+0.021632006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:39.049287) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14972067 bytes
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:39.068408) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.7 rd, 106.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.7 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(10.6) write-amplify(5.3) OK, records in: 10421, records dropped: 533 output_compression: NoCompression
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:39.068447) EVENT_LOG_v1 {"time_micros": 1769848779068430, "job": 96, "event": "compaction_finished", "compaction_time_micros": 140438, "compaction_time_cpu_micros": 38207, "output_level": 6, "num_output_files": 1, "total_output_size": 14972067, "num_input_records": 10421, "num_output_records": 9888, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:39 compute-0 podman[375448]: 2026-01-31 08:39:39.069032588 +0000 UTC m=+0.120741338 container create d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779069133, "job": 96, "event": "table_file_deletion", "file_number": 157}
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848779071057, "job": 96, "event": "table_file_deletion", "file_number": 155}
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:38.908162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:39.071123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:39.071130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:39.071133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:39.071136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:39:39.071140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:39:39 compute-0 systemd[1]: Started libpod-conmon-d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8.scope.
Jan 31 08:39:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf883fdf885ca0c0072f7a9830870d1f232144085d33dcc0dc30e9f74f8b5968/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf883fdf885ca0c0072f7a9830870d1f232144085d33dcc0dc30e9f74f8b5968/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf883fdf885ca0c0072f7a9830870d1f232144085d33dcc0dc30e9f74f8b5968/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf883fdf885ca0c0072f7a9830870d1f232144085d33dcc0dc30e9f74f8b5968/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:39 compute-0 podman[375448]: 2026-01-31 08:39:39.144027693 +0000 UTC m=+0.195736463 container init d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:39:39 compute-0 podman[375448]: 2026-01-31 08:39:39.149282322 +0000 UTC m=+0.200991062 container start d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 08:39:39 compute-0 podman[375448]: 2026-01-31 08:39:39.152031718 +0000 UTC m=+0.203740468 container attach d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:39:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:39.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:39 compute-0 ceph-mon[74496]: pgmap v3200: 305 pgs: 305 active+clean; 577 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 31 08:39:39 compute-0 modest_merkle[375465]: {
Jan 31 08:39:39 compute-0 modest_merkle[375465]:     "0": [
Jan 31 08:39:39 compute-0 modest_merkle[375465]:         {
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "devices": [
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "/dev/loop3"
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             ],
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "lv_name": "ceph_lv0",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "lv_size": "7511998464",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "name": "ceph_lv0",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "tags": {
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.cluster_name": "ceph",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.crush_device_class": "",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.encrypted": "0",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.osd_id": "0",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.type": "block",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:                 "ceph.vdo": "0"
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             },
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "type": "block",
Jan 31 08:39:39 compute-0 modest_merkle[375465]:             "vg_name": "ceph_vg0"
Jan 31 08:39:39 compute-0 modest_merkle[375465]:         }
Jan 31 08:39:39 compute-0 modest_merkle[375465]:     ]
Jan 31 08:39:39 compute-0 modest_merkle[375465]: }
Jan 31 08:39:39 compute-0 systemd[1]: libpod-d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8.scope: Deactivated successfully.
Jan 31 08:39:39 compute-0 podman[375448]: 2026-01-31 08:39:39.939196022 +0000 UTC m=+0.990904772 container died d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf883fdf885ca0c0072f7a9830870d1f232144085d33dcc0dc30e9f74f8b5968-merged.mount: Deactivated successfully.
Jan 31 08:39:40 compute-0 podman[375448]: 2026-01-31 08:39:40.019302221 +0000 UTC m=+1.071010961 container remove d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:39:40 compute-0 systemd[1]: libpod-conmon-d99f89a4f3b038d4b1576288c7d8b57b0a1ab3233db170d26f01e016ce0277c8.scope: Deactivated successfully.
Jan 31 08:39:40 compute-0 sudo[375341]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:40 compute-0 sudo[375486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:40 compute-0 sudo[375486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:40 compute-0 sudo[375486]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:40 compute-0 sudo[375511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:40 compute-0 sudo[375511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:40 compute-0 sudo[375511]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:40 compute-0 sudo[375536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:40 compute-0 sudo[375536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:40 compute-0 sudo[375536]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:40 compute-0 sudo[375561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:39:40 compute-0 sudo[375561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:40.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 31 08:39:40 compute-0 podman[375624]: 2026-01-31 08:39:40.578703992 +0000 UTC m=+0.041769067 container create cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pascal, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:39:40 compute-0 systemd[1]: Started libpod-conmon-cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014.scope.
Jan 31 08:39:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:40 compute-0 podman[375624]: 2026-01-31 08:39:40.559974286 +0000 UTC m=+0.023039381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:39:40 compute-0 podman[375624]: 2026-01-31 08:39:40.669381818 +0000 UTC m=+0.132446923 container init cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:39:40 compute-0 podman[375624]: 2026-01-31 08:39:40.675988199 +0000 UTC m=+0.139053274 container start cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:39:40 compute-0 podman[375624]: 2026-01-31 08:39:40.679246938 +0000 UTC m=+0.142312033 container attach cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:40 compute-0 systemd[1]: libpod-cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014.scope: Deactivated successfully.
Jan 31 08:39:40 compute-0 goofy_pascal[375641]: 167 167
Jan 31 08:39:40 compute-0 podman[375624]: 2026-01-31 08:39:40.680804826 +0000 UTC m=+0.143869901 container died cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pascal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e421f1455f825e0ced9db2f4a801a45187f2361c9fb54b351cc5b2b68e7a17-merged.mount: Deactivated successfully.
Jan 31 08:39:40 compute-0 podman[375624]: 2026-01-31 08:39:40.712947828 +0000 UTC m=+0.176012903 container remove cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:40 compute-0 systemd[1]: libpod-conmon-cd0e2d077b6c07610857a273ca51b0eabc071edfb4d1ebbf76fb100404241014.scope: Deactivated successfully.
Jan 31 08:39:40 compute-0 podman[375666]: 2026-01-31 08:39:40.846958979 +0000 UTC m=+0.042970847 container create e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:40 compute-0 systemd[1]: Started libpod-conmon-e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521.scope.
Jan 31 08:39:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc974f51288f820e891353d0c40b72a48611fd69924d2f1425057fa8db2b81d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc974f51288f820e891353d0c40b72a48611fd69924d2f1425057fa8db2b81d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc974f51288f820e891353d0c40b72a48611fd69924d2f1425057fa8db2b81d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dc974f51288f820e891353d0c40b72a48611fd69924d2f1425057fa8db2b81d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:40 compute-0 podman[375666]: 2026-01-31 08:39:40.82931962 +0000 UTC m=+0.025331518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:39:40 compute-0 podman[375666]: 2026-01-31 08:39:40.931996339 +0000 UTC m=+0.128008227 container init e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:39:40 compute-0 podman[375666]: 2026-01-31 08:39:40.93821977 +0000 UTC m=+0.134231638 container start e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:39:40 compute-0 podman[375666]: 2026-01-31 08:39:40.951139614 +0000 UTC m=+0.147151612 container attach e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:41.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:41 compute-0 nova_compute[247704]: 2026-01-31 08:39:41.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:41 compute-0 nova_compute[247704]: 2026-01-31 08:39:41.571 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]: {
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]:         "osd_id": 0,
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]:         "type": "bluestore"
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]:     }
Jan 31 08:39:41 compute-0 xenodochial_kapitsa[375684]: }
Jan 31 08:39:41 compute-0 systemd[1]: libpod-e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521.scope: Deactivated successfully.
Jan 31 08:39:41 compute-0 podman[375666]: 2026-01-31 08:39:41.793692435 +0000 UTC m=+0.989704303 container died e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:39:41 compute-0 nova_compute[247704]: 2026-01-31 08:39:41.804 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dc974f51288f820e891353d0c40b72a48611fd69924d2f1425057fa8db2b81d-merged.mount: Deactivated successfully.
Jan 31 08:39:41 compute-0 podman[375666]: 2026-01-31 08:39:41.850406266 +0000 UTC m=+1.046418134 container remove e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:41 compute-0 systemd[1]: libpod-conmon-e88f016bd4d69ebff02a39803e6e155c417b87c7c117b7b259c09d54a2850521.scope: Deactivated successfully.
Jan 31 08:39:41 compute-0 sudo[375561]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:39:42 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:39:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Jan 31 08:39:42 compute-0 ceph-mon[74496]: pgmap v3201: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 31 08:39:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:43 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 17085c6b-1a15-4d58-b3ef-e70c9bcbde61 does not exist
Jan 31 08:39:43 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6464aa0f-9338-40b0-9a81-ddf78d5225d7 does not exist
Jan 31 08:39:43 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a57a6fef-1890-4ea1-a047-49c0a324966f does not exist
Jan 31 08:39:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:43.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:43 compute-0 sudo[375716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:43 compute-0 sudo[375716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:43 compute-0 sudo[375716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:43 compute-0 sudo[375741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:39:43 compute-0 sudo[375741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:43 compute-0 sudo[375741]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:44.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:44 compute-0 ceph-mon[74496]: pgmap v3202: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Jan 31 08:39:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:39:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 586 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:39:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:45.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 586 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 31 08:39:46 compute-0 nova_compute[247704]: 2026-01-31 08:39:46.575 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:46 compute-0 nova_compute[247704]: 2026-01-31 08:39:46.806 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:47 compute-0 ceph-mon[74496]: pgmap v3203: 305 pgs: 305 active+clean; 586 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:39:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:47.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:47 compute-0 nova_compute[247704]: 2026-01-31 08:39:47.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:48 compute-0 ceph-mon[74496]: pgmap v3204: 305 pgs: 305 active+clean; 586 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 91 op/s
Jan 31 08:39:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 587 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 854 KiB/s wr, 65 op/s
Jan 31 08:39:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:49.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:49 compute-0 ceph-mon[74496]: pgmap v3205: 305 pgs: 305 active+clean; 587 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 854 KiB/s wr, 65 op/s
Jan 31 08:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:50.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:50 compute-0 nova_compute[247704]: 2026-01-31 08:39:50.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 606 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 1.9 MiB/s wr, 65 op/s
Jan 31 08:39:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:51.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:51 compute-0 nova_compute[247704]: 2026-01-31 08:39:51.577 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:51 compute-0 ceph-mon[74496]: pgmap v3206: 305 pgs: 305 active+clean; 606 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 1.9 MiB/s wr, 65 op/s
Jan 31 08:39:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/153502525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:39:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/153502525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:39:51 compute-0 nova_compute[247704]: 2026-01-31 08:39:51.808 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:52.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 615 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 295 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Jan 31 08:39:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/339453851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:39:52 compute-0 podman[375771]: 2026-01-31 08:39:52.882296277 +0000 UTC m=+0.053698748 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:39:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:53 compute-0 ceph-mon[74496]: pgmap v3207: 305 pgs: 305 active+clean; 615 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 295 KiB/s rd, 2.3 MiB/s wr, 67 op/s
Jan 31 08:39:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2237009316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:39:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2237009316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:39:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:54.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:54 compute-0 nova_compute[247704]: 2026-01-31 08:39:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:54 compute-0 nova_compute[247704]: 2026-01-31 08:39:54.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:39:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 619 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 2.5 MiB/s wr, 82 op/s
Jan 31 08:39:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:55.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:55 compute-0 ceph-mon[74496]: pgmap v3208: 305 pgs: 305 active+clean; 619 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 2.5 MiB/s wr, 82 op/s
Jan 31 08:39:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:39:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:39:56 compute-0 sudo[375793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:56 compute-0 sudo[375793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:56 compute-0 sudo[375793]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:56 compute-0 sudo[375818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:39:56 compute-0 sudo[375818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:56 compute-0 sudo[375818]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 619 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 2.3 MiB/s wr, 83 op/s
Jan 31 08:39:56 compute-0 nova_compute[247704]: 2026-01-31 08:39:56.580 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:56 compute-0 nova_compute[247704]: 2026-01-31 08:39:56.810 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:39:57 compute-0 ovn_controller[149457]: 2026-01-31T08:39:57Z|00799|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 08:39:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:57.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:39:58 compute-0 ceph-mon[74496]: pgmap v3209: 305 pgs: 305 active+clean; 619 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 2.3 MiB/s wr, 83 op/s
Jan 31 08:39:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:39:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:58.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:39:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 2.3 MiB/s wr, 85 op/s
Jan 31 08:39:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:39:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:39:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:59.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 08:40:00 compute-0 ceph-mon[74496]: pgmap v3210: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 2.3 MiB/s wr, 85 op/s
Jan 31 08:40:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 08:40:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:00.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:00 compute-0 nova_compute[247704]: 2026-01-31 08:40:00.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 539 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.0 MiB/s wr, 108 op/s
Jan 31 08:40:00 compute-0 nova_compute[247704]: 2026-01-31 08:40:00.898 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:00 compute-0 nova_compute[247704]: 2026-01-31 08:40:00.898 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:00 compute-0 nova_compute[247704]: 2026-01-31 08:40:00.898 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:00 compute-0 nova_compute[247704]: 2026-01-31 08:40:00.899 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:40:00 compute-0 nova_compute[247704]: 2026-01-31 08:40:00.899 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:40:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984588358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:01 compute-0 nova_compute[247704]: 2026-01-31 08:40:01.349 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:01 compute-0 ceph-mon[74496]: pgmap v3211: 305 pgs: 305 active+clean; 539 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.0 MiB/s wr, 108 op/s
Jan 31 08:40:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/984588358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:01.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:01 compute-0 nova_compute[247704]: 2026-01-31 08:40:01.582 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:40:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4210333335' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:40:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:40:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4210333335' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:40:01 compute-0 nova_compute[247704]: 2026-01-31 08:40:01.812 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:01 compute-0 nova_compute[247704]: 2026-01-31 08:40:01.900 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:40:01 compute-0 nova_compute[247704]: 2026-01-31 08:40:01.900 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:40:01 compute-0 nova_compute[247704]: 2026-01-31 08:40:01.906 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:40:01 compute-0 nova_compute[247704]: 2026-01-31 08:40:01.906 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.092 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.093 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3900MB free_disk=20.871307373046875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.093 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.093 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4217328167' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4210333335' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:40:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4210333335' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:40:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:02.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 539 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 726 KiB/s wr, 78 op/s
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.729 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 9e883c68-083a-45ab-81fe-942de74e50ef actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.730 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.730 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.731 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:40:02 compute-0 nova_compute[247704]: 2026-01-31 08:40:02.817 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:40:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114058315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:03 compute-0 nova_compute[247704]: 2026-01-31 08:40:03.281 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:03 compute-0 nova_compute[247704]: 2026-01-31 08:40:03.287 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:40:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:03.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:03 compute-0 ceph-mon[74496]: pgmap v3212: 305 pgs: 305 active+clean; 539 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 726 KiB/s wr, 78 op/s
Jan 31 08:40:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2114058315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:40:03.642 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:40:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:40:03.644 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:40:03 compute-0 nova_compute[247704]: 2026-01-31 08:40:03.644 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:04 compute-0 nova_compute[247704]: 2026-01-31 08:40:04.171 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:40:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:04.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 64 KiB/s rd, 244 KiB/s wr, 75 op/s
Jan 31 08:40:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/246431782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:04 compute-0 nova_compute[247704]: 2026-01-31 08:40:04.845 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:40:04 compute-0 nova_compute[247704]: 2026-01-31 08:40:04.845 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:04 compute-0 podman[375892]: 2026-01-31 08:40:04.932021983 +0000 UTC m=+0.104985496 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:05.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:05 compute-0 ceph-mon[74496]: pgmap v3213: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 64 KiB/s rd, 244 KiB/s wr, 75 op/s
Jan 31 08:40:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1740064414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:05 compute-0 nova_compute[247704]: 2026-01-31 08:40:05.845 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:05 compute-0 nova_compute[247704]: 2026-01-31 08:40:05.846 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:40:05 compute-0 nova_compute[247704]: 2026-01-31 08:40:05.846 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:40:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:06.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:06 compute-0 nova_compute[247704]: 2026-01-31 08:40:06.584 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 36 KiB/s wr, 54 op/s
Jan 31 08:40:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:40:06.647 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:40:06 compute-0 nova_compute[247704]: 2026-01-31 08:40:06.722 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:06 compute-0 nova_compute[247704]: 2026-01-31 08:40:06.723 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:06 compute-0 nova_compute[247704]: 2026-01-31 08:40:06.723 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:40:06 compute-0 nova_compute[247704]: 2026-01-31 08:40:06.723 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:06 compute-0 nova_compute[247704]: 2026-01-31 08:40:06.740 247708 DEBUG oslo_concurrency.lockutils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:06 compute-0 nova_compute[247704]: 2026-01-31 08:40:06.741 247708 DEBUG oslo_concurrency.lockutils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:06 compute-0 nova_compute[247704]: 2026-01-31 08:40:06.813 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:07 compute-0 nova_compute[247704]: 2026-01-31 08:40:07.272 247708 DEBUG nova.objects.instance [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:07.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:07 compute-0 ceph-mon[74496]: pgmap v3214: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 36 KiB/s wr, 54 op/s
Jan 31 08:40:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2125069320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/397668415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:08.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 39 KiB/s wr, 49 op/s
Jan 31 08:40:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:08 compute-0 nova_compute[247704]: 2026-01-31 08:40:08.885 247708 DEBUG oslo_concurrency.lockutils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 2.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:09.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:09 compute-0 ceph-mon[74496]: pgmap v3215: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 39 KiB/s wr, 49 op/s
Jan 31 08:40:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 37 KiB/s wr, 54 op/s
Jan 31 08:40:10 compute-0 nova_compute[247704]: 2026-01-31 08:40:10.612 247708 DEBUG oslo_concurrency.lockutils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:10 compute-0 nova_compute[247704]: 2026-01-31 08:40:10.613 247708 DEBUG oslo_concurrency.lockutils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:10 compute-0 nova_compute[247704]: 2026-01-31 08:40:10.613 247708 INFO nova.compute.manager [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Attaching volume 5e3dddd5-2020-4bc8-ad80-973dfd3573b1 to /dev/vdb
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.210 247708 DEBUG os_brick.utils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:40:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:40:11.210 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:40:11.211 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:40:11.211 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.212 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.222 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.222 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[18941b80-1b90-4b25-9750-48e17cd416c9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.224 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.231 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.231 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[039f0228-e278-4686-8395-a4bbb0702a7e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.233 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.243 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.243 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[86004cb8-692c-49db-9a6b-e3051a77ddee]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.245 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[375303c0-1c8e-496e-a34e-008526d06653]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.246 247708 DEBUG oslo_concurrency.processutils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.267 247708 DEBUG oslo_concurrency.processutils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.270 247708 DEBUG os_brick.initiator.connectors.lightos [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.270 247708 DEBUG os_brick.initiator.connectors.lightos [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.270 247708 DEBUG os_brick.initiator.connectors.lightos [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.271 247708 DEBUG os_brick.utils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.271 247708 DEBUG nova.virt.block_device [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating existing volume attachment record: ef357034-57ec-488f-9238-ba9816613ea0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:40:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:11.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.493 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.586 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.726 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.727 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.728 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.728 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:11 compute-0 nova_compute[247704]: 2026-01-31 08:40:11.815 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:11 compute-0 ceph-mon[74496]: pgmap v3216: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 37 KiB/s wr, 54 op/s
Jan 31 08:40:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:12.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 513 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 28 op/s
Jan 31 08:40:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:13.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:13 compute-0 nova_compute[247704]: 2026-01-31 08:40:13.439 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:13 compute-0 nova_compute[247704]: 2026-01-31 08:40:13.592 247708 DEBUG nova.objects.instance [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:13 compute-0 nova_compute[247704]: 2026-01-31 08:40:13.833 247708 DEBUG nova.virt.libvirt.driver [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Attempting to attach volume 5e3dddd5-2020-4bc8-ad80-973dfd3573b1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:40:13 compute-0 nova_compute[247704]: 2026-01-31 08:40:13.837 247708 DEBUG nova.virt.libvirt.guest [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:40:13 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:40:13 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-5e3dddd5-2020-4bc8-ad80-973dfd3573b1">
Jan 31 08:40:13 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:40:13 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:40:13 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:40:13 compute-0 nova_compute[247704]:   </source>
Jan 31 08:40:13 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:40:13 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:40:13 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:40:13 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:40:13 compute-0 nova_compute[247704]:   <serial>5e3dddd5-2020-4bc8-ad80-973dfd3573b1</serial>
Jan 31 08:40:13 compute-0 nova_compute[247704]: </disk>
Jan 31 08:40:13 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:40:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:13 compute-0 ceph-mon[74496]: pgmap v3217: 305 pgs: 305 active+clean; 513 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 28 op/s
Jan 31 08:40:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/951835952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:14 compute-0 nova_compute[247704]: 2026-01-31 08:40:14.222 247708 DEBUG nova.virt.libvirt.driver [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:14 compute-0 nova_compute[247704]: 2026-01-31 08:40:14.224 247708 DEBUG nova.virt.libvirt.driver [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:14 compute-0 nova_compute[247704]: 2026-01-31 08:40:14.224 247708 DEBUG nova.virt.libvirt.driver [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:14 compute-0 nova_compute[247704]: 2026-01-31 08:40:14.225 247708 DEBUG nova.virt.libvirt.driver [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:71:f7:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:40:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:14.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 467 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 24 KiB/s wr, 38 op/s
Jan 31 08:40:14 compute-0 nova_compute[247704]: 2026-01-31 08:40:14.749 247708 DEBUG oslo_concurrency.lockutils [None req-75d3ce55-1922-4ee5-aeba-f540c3c438c9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:15.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Jan 31 08:40:15 compute-0 ceph-mon[74496]: pgmap v3218: 305 pgs: 305 active+clean; 467 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 24 KiB/s wr, 38 op/s
Jan 31 08:40:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3737376873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Jan 31 08:40:15 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Jan 31 08:40:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:16.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:16 compute-0 nova_compute[247704]: 2026-01-31 08:40:16.589 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 39 op/s
Jan 31 08:40:16 compute-0 sudo[375952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:16 compute-0 sudo[375952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:16 compute-0 sudo[375952]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:16 compute-0 nova_compute[247704]: 2026-01-31 08:40:16.709 247708 DEBUG oslo_concurrency.lockutils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:16 compute-0 nova_compute[247704]: 2026-01-31 08:40:16.709 247708 DEBUG oslo_concurrency.lockutils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:16 compute-0 sudo[375977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:16 compute-0 sudo[375977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:16 compute-0 sudo[375977]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:16 compute-0 nova_compute[247704]: 2026-01-31 08:40:16.817 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:16 compute-0 nova_compute[247704]: 2026-01-31 08:40:16.839 247708 DEBUG nova.objects.instance [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:16 compute-0 ceph-mon[74496]: osdmap e380: 3 total, 3 up, 3 in
Jan 31 08:40:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/532896343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:16 compute-0 nova_compute[247704]: 2026-01-31 08:40:16.987 247708 DEBUG oslo_concurrency.lockutils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:17.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.440 247708 DEBUG oslo_concurrency.lockutils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.441 247708 DEBUG oslo_concurrency.lockutils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.441 247708 INFO nova.compute.manager [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Attaching volume 32f698bc-fdf5-4d95-826a-13f70e60d6b8 to /dev/vdc
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.764 247708 DEBUG os_brick.utils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.765 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.777 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.777 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[035bcc06-57aa-44ae-a0b4-242b88ee852b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.778 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.786 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.787 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2fa45e-e08b-4fd8-a8e3-6a7743b2984d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.788 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.795 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.796 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[375493c4-7e42-402e-aedc-6b02cf02749e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.798 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[7767e544-ffaa-4899-9f11-bcda4354f2c1]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.798 247708 DEBUG oslo_concurrency.processutils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.827 247708 DEBUG oslo_concurrency.processutils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.829 247708 DEBUG os_brick.initiator.connectors.lightos [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.829 247708 DEBUG os_brick.initiator.connectors.lightos [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.829 247708 DEBUG os_brick.initiator.connectors.lightos [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.830 247708 DEBUG os_brick.utils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:40:17 compute-0 nova_compute[247704]: 2026-01-31 08:40:17.830 247708 DEBUG nova.virt.block_device [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating existing volume attachment record: 853a354f-45a8-4694-8e0d-8b79cafba13b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:40:17 compute-0 ceph-mon[74496]: pgmap v3220: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 39 op/s
Jan 31 08:40:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2290654224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:18.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 11 KiB/s wr, 38 op/s
Jan 31 08:40:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:19.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:19 compute-0 nova_compute[247704]: 2026-01-31 08:40:19.699 247708 DEBUG nova.objects.instance [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:40:19 compute-0 nova_compute[247704]: 2026-01-31 08:40:19.795 247708 DEBUG nova.virt.libvirt.driver [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Attempting to attach volume 32f698bc-fdf5-4d95-826a-13f70e60d6b8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:40:19 compute-0 nova_compute[247704]: 2026-01-31 08:40:19.798 247708 DEBUG nova.virt.libvirt.guest [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:40:19 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:40:19 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-32f698bc-fdf5-4d95-826a-13f70e60d6b8">
Jan 31 08:40:19 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:40:19 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:40:19 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:40:19 compute-0 nova_compute[247704]:   </source>
Jan 31 08:40:19 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:40:19 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:40:19 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:40:19 compute-0 nova_compute[247704]:   <target dev="vdc" bus="virtio"/>
Jan 31 08:40:19 compute-0 nova_compute[247704]:   <serial>32f698bc-fdf5-4d95-826a-13f70e60d6b8</serial>
Jan 31 08:40:19 compute-0 nova_compute[247704]: </disk>
Jan 31 08:40:19 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:40:19 compute-0 nova_compute[247704]: 2026-01-31 08:40:19.968 247708 DEBUG nova.virt.libvirt.driver [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:19 compute-0 nova_compute[247704]: 2026-01-31 08:40:19.969 247708 DEBUG nova.virt.libvirt.driver [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:19 compute-0 nova_compute[247704]: 2026-01-31 08:40:19.969 247708 DEBUG nova.virt.libvirt.driver [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:19 compute-0 nova_compute[247704]: 2026-01-31 08:40:19.969 247708 DEBUG nova.virt.libvirt.driver [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:40:19 compute-0 nova_compute[247704]: 2026-01-31 08:40:19.970 247708 DEBUG nova.virt.libvirt.driver [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:71:f7:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:40:20 compute-0 ceph-mon[74496]: pgmap v3221: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 11 KiB/s wr, 38 op/s
Jan 31 08:40:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1514366191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:40:20
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', 'images', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:40:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:20.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 KiB/s wr, 110 op/s
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:40:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:40:21 compute-0 nova_compute[247704]: 2026-01-31 08:40:21.341 247708 DEBUG oslo_concurrency.lockutils [None req-100e7d22-4922-401d-aa8c-cdd9dc9b14a1 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:21.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:21 compute-0 nova_compute[247704]: 2026-01-31 08:40:21.631 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:21 compute-0 nova_compute[247704]: 2026-01-31 08:40:21.819 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:22 compute-0 ceph-mon[74496]: pgmap v3222: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 KiB/s wr, 110 op/s
Jan 31 08:40:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:22.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 131 op/s
Jan 31 08:40:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:23.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:23 compute-0 podman[376033]: 2026-01-31 08:40:23.874712249 +0000 UTC m=+0.049898955 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 08:40:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:23 compute-0 ovn_controller[149457]: 2026-01-31T08:40:23Z|00800|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:40:23 compute-0 nova_compute[247704]: 2026-01-31 08:40:23.935 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:23 compute-0 ovn_controller[149457]: 2026-01-31T08:40:23Z|00801|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:40:23 compute-0 nova_compute[247704]: 2026-01-31 08:40:23.987 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.080112) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824080719, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 652, "num_deletes": 250, "total_data_size": 844354, "memory_usage": 857088, "flush_reason": "Manual Compaction"}
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824220047, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 575300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70193, "largest_seqno": 70844, "table_properties": {"data_size": 572221, "index_size": 986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8355, "raw_average_key_size": 20, "raw_value_size": 565717, "raw_average_value_size": 1410, "num_data_blocks": 43, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848779, "oldest_key_time": 1769848779, "file_creation_time": 1769848824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 140042 microseconds, and 3933 cpu microseconds.
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.220177) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 575300 bytes OK
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.220211) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.227592) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.227627) EVENT_LOG_v1 {"time_micros": 1769848824227616, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.227657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 840920, prev total WAL file size 841635, number of live WAL files 2.
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.228583) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353130' seq:72057594037927935, type:22 .. '6D6772737461740032373631' seq:0, type:0; will stop at (end)
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(561KB)], [158(14MB)]
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824228684, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 15547367, "oldest_snapshot_seqno": -1}
Jan 31 08:40:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:24.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 119 op/s
Jan 31 08:40:24 compute-0 ceph-mon[74496]: pgmap v3223: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 131 op/s
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 9790 keys, 11950979 bytes, temperature: kUnknown
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824914586, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11950979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11889437, "index_size": 35957, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 258248, "raw_average_key_size": 26, "raw_value_size": 11719531, "raw_average_value_size": 1197, "num_data_blocks": 1371, "num_entries": 9790, "num_filter_entries": 9790, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769848824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.914928) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11950979 bytes
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.956733) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.7 rd, 17.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.3 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(47.8) write-amplify(20.8) OK, records in: 10289, records dropped: 499 output_compression: NoCompression
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.956808) EVENT_LOG_v1 {"time_micros": 1769848824956780, "job": 98, "event": "compaction_finished", "compaction_time_micros": 685980, "compaction_time_cpu_micros": 46146, "output_level": 6, "num_output_files": 1, "total_output_size": 11950979, "num_input_records": 10289, "num_output_records": 9790, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824957297, "job": 98, "event": "table_file_deletion", "file_number": 160}
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848824960599, "job": 98, "event": "table_file_deletion", "file_number": 158}
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.228480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.960731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.960739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.960741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.960743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:24 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:24.960745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:25.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:26 compute-0 ceph-mon[74496]: pgmap v3224: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 119 op/s
Jan 31 08:40:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 673 B/s wr, 100 op/s
Jan 31 08:40:26 compute-0 nova_compute[247704]: 2026-01-31 08:40:26.633 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:26 compute-0 nova_compute[247704]: 2026-01-31 08:40:26.821 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:27.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:27 compute-0 nova_compute[247704]: 2026-01-31 08:40:27.927 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:27 compute-0 NetworkManager[49108]: <info>  [1769848827.9288] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Jan 31 08:40:27 compute-0 NetworkManager[49108]: <info>  [1769848827.9313] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Jan 31 08:40:27 compute-0 nova_compute[247704]: 2026-01-31 08:40:27.992 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:27 compute-0 ovn_controller[149457]: 2026-01-31T08:40:27Z|00802|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:40:28 compute-0 nova_compute[247704]: 2026-01-31 08:40:28.009 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:28 compute-0 ceph-mon[74496]: pgmap v3225: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 673 B/s wr, 100 op/s
Jan 31 08:40:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:28.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 88 op/s
Jan 31 08:40:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:29 compute-0 nova_compute[247704]: 2026-01-31 08:40:29.077 247708 DEBUG nova.compute.manager [req-93fc943e-f9bb-4f76-ada4-73b8120ca070 req-bd0d7d43-f42a-4495-b061-e44ad738e79a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:29 compute-0 nova_compute[247704]: 2026-01-31 08:40:29.078 247708 DEBUG nova.compute.manager [req-93fc943e-f9bb-4f76-ada4-73b8120ca070 req-bd0d7d43-f42a-4495-b061-e44ad738e79a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing instance network info cache due to event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:40:29 compute-0 nova_compute[247704]: 2026-01-31 08:40:29.078 247708 DEBUG oslo_concurrency.lockutils [req-93fc943e-f9bb-4f76-ada4-73b8120ca070 req-bd0d7d43-f42a-4495-b061-e44ad738e79a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:29 compute-0 nova_compute[247704]: 2026-01-31 08:40:29.078 247708 DEBUG oslo_concurrency.lockutils [req-93fc943e-f9bb-4f76-ada4-73b8120ca070 req-bd0d7d43-f42a-4495-b061-e44ad738e79a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:29 compute-0 nova_compute[247704]: 2026-01-31 08:40:29.079 247708 DEBUG nova.network.neutron [req-93fc943e-f9bb-4f76-ada4-73b8120ca070 req-bd0d7d43-f42a-4495-b061-e44ad738e79a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:40:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Jan 31 08:40:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Jan 31 08:40:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Jan 31 08:40:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:29.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:30 compute-0 ceph-mon[74496]: pgmap v3226: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 88 op/s
Jan 31 08:40:30 compute-0 ceph-mon[74496]: osdmap e381: 3 total, 3 up, 3 in
Jan 31 08:40:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2559611403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:30.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 59 op/s
Jan 31 08:40:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:31.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:31 compute-0 nova_compute[247704]: 2026-01-31 08:40:31.636 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:31 compute-0 nova_compute[247704]: 2026-01-31 08:40:31.823 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:32 compute-0 ceph-mon[74496]: pgmap v3228: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 59 op/s
Jan 31 08:40:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:32.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 396 KiB/s rd, 12 KiB/s wr, 39 op/s
Jan 31 08:40:32 compute-0 nova_compute[247704]: 2026-01-31 08:40:32.723 247708 DEBUG nova.network.neutron [req-93fc943e-f9bb-4f76-ada4-73b8120ca070 req-bd0d7d43-f42a-4495-b061-e44ad738e79a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated VIF entry in instance network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:40:32 compute-0 nova_compute[247704]: 2026-01-31 08:40:32.723 247708 DEBUG nova.network.neutron [req-93fc943e-f9bb-4f76-ada4-73b8120ca070 req-bd0d7d43-f42a-4495-b061-e44ad738e79a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.260558) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833261192, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 341, "num_deletes": 251, "total_data_size": 191983, "memory_usage": 198664, "flush_reason": "Manual Compaction"}
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833265443, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 190497, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70845, "largest_seqno": 71185, "table_properties": {"data_size": 188326, "index_size": 334, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5534, "raw_average_key_size": 18, "raw_value_size": 183952, "raw_average_value_size": 623, "num_data_blocks": 15, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848824, "oldest_key_time": 1769848824, "file_creation_time": 1769848833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 4924 microseconds, and 1579 cpu microseconds.
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.265494) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 190497 bytes OK
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.265521) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.268113) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.268138) EVENT_LOG_v1 {"time_micros": 1769848833268130, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.268165) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 189669, prev total WAL file size 189669, number of live WAL files 2.
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.268690) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(186KB)], [161(11MB)]
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833268841, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12141476, "oldest_snapshot_seqno": -1}
Jan 31 08:40:33 compute-0 nova_compute[247704]: 2026-01-31 08:40:33.372 247708 DEBUG oslo_concurrency.lockutils [req-93fc943e-f9bb-4f76-ada4-73b8120ca070 req-bd0d7d43-f42a-4495-b061-e44ad738e79a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9571 keys, 10210000 bytes, temperature: kUnknown
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833393247, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10210000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10151463, "index_size": 33528, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 254420, "raw_average_key_size": 26, "raw_value_size": 9986887, "raw_average_value_size": 1043, "num_data_blocks": 1262, "num_entries": 9571, "num_filter_entries": 9571, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769848833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.393546) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10210000 bytes
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.395481) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.5 rd, 82.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(117.3) write-amplify(53.6) OK, records in: 10085, records dropped: 514 output_compression: NoCompression
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.395527) EVENT_LOG_v1 {"time_micros": 1769848833395494, "job": 100, "event": "compaction_finished", "compaction_time_micros": 124471, "compaction_time_cpu_micros": 38302, "output_level": 6, "num_output_files": 1, "total_output_size": 10210000, "num_input_records": 10085, "num_output_records": 9571, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833395709, "job": 100, "event": "table_file_deletion", "file_number": 163}
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848833397070, "job": 100, "event": "table_file_deletion", "file_number": 161}
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.268481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.397339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.397351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.397355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.397358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:40:33.397366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:33.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:33 compute-0 nova_compute[247704]: 2026-01-31 08:40:33.475 247708 DEBUG nova.compute.manager [req-1437704e-866e-4fd1-8eb1-27ddea62b2ce req-aa77723d-d8e9-48a7-9533-c22eb64daf7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:33 compute-0 nova_compute[247704]: 2026-01-31 08:40:33.475 247708 DEBUG nova.compute.manager [req-1437704e-866e-4fd1-8eb1-27ddea62b2ce req-aa77723d-d8e9-48a7-9533-c22eb64daf7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing instance network info cache due to event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:40:33 compute-0 nova_compute[247704]: 2026-01-31 08:40:33.475 247708 DEBUG oslo_concurrency.lockutils [req-1437704e-866e-4fd1-8eb1-27ddea62b2ce req-aa77723d-d8e9-48a7-9533-c22eb64daf7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:33 compute-0 nova_compute[247704]: 2026-01-31 08:40:33.476 247708 DEBUG oslo_concurrency.lockutils [req-1437704e-866e-4fd1-8eb1-27ddea62b2ce req-aa77723d-d8e9-48a7-9533-c22eb64daf7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:33 compute-0 nova_compute[247704]: 2026-01-31 08:40:33.476 247708 DEBUG nova.network.neutron [req-1437704e-866e-4fd1-8eb1-27ddea62b2ce req-aa77723d-d8e9-48a7-9533-c22eb64daf7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:40:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:34 compute-0 ceph-mon[74496]: pgmap v3229: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 396 KiB/s rd, 12 KiB/s wr, 39 op/s
Jan 31 08:40:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:34.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 645 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:35.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:35 compute-0 nova_compute[247704]: 2026-01-31 08:40:35.708 247708 DEBUG nova.compute.manager [req-a0271878-b081-4c44-bb5d-ae0de69c21b3 req-0d3e2b77-efdb-4faf-bf2b-d1ee5a2baa4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:35 compute-0 nova_compute[247704]: 2026-01-31 08:40:35.709 247708 DEBUG nova.compute.manager [req-a0271878-b081-4c44-bb5d-ae0de69c21b3 req-0d3e2b77-efdb-4faf-bf2b-d1ee5a2baa4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing instance network info cache due to event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:40:35 compute-0 nova_compute[247704]: 2026-01-31 08:40:35.709 247708 DEBUG oslo_concurrency.lockutils [req-a0271878-b081-4c44-bb5d-ae0de69c21b3 req-0d3e2b77-efdb-4faf-bf2b-d1ee5a2baa4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021934033593895403 of space, bias 1.0, pg target 0.6580210078168621 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00866147141959799 of space, bias 1.0, pg target 2.5984414258793973 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:40:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:40:35 compute-0 podman[376061]: 2026-01-31 08:40:35.967383463 +0000 UTC m=+0.136842201 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:40:36 compute-0 ceph-mon[74496]: pgmap v3230: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 645 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:36.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:36 compute-0 nova_compute[247704]: 2026-01-31 08:40:36.629 247708 DEBUG nova.network.neutron [req-1437704e-866e-4fd1-8eb1-27ddea62b2ce req-aa77723d-d8e9-48a7-9533-c22eb64daf7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated VIF entry in instance network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:40:36 compute-0 nova_compute[247704]: 2026-01-31 08:40:36.630 247708 DEBUG nova.network.neutron [req-1437704e-866e-4fd1-8eb1-27ddea62b2ce req-aa77723d-d8e9-48a7-9533-c22eb64daf7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:36 compute-0 nova_compute[247704]: 2026-01-31 08:40:36.638 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:36 compute-0 nova_compute[247704]: 2026-01-31 08:40:36.661 247708 DEBUG oslo_concurrency.lockutils [req-1437704e-866e-4fd1-8eb1-27ddea62b2ce req-aa77723d-d8e9-48a7-9533-c22eb64daf7d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:40:36 compute-0 nova_compute[247704]: 2026-01-31 08:40:36.663 247708 DEBUG oslo_concurrency.lockutils [req-a0271878-b081-4c44-bb5d-ae0de69c21b3 req-0d3e2b77-efdb-4faf-bf2b-d1ee5a2baa4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:36 compute-0 nova_compute[247704]: 2026-01-31 08:40:36.663 247708 DEBUG nova.network.neutron [req-a0271878-b081-4c44-bb5d-ae0de69c21b3 req-0d3e2b77-efdb-4faf-bf2b-d1ee5a2baa4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:40:36 compute-0 sudo[376087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:36 compute-0 sudo[376087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:36 compute-0 sudo[376087]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:36 compute-0 nova_compute[247704]: 2026-01-31 08:40:36.825 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:36 compute-0 sudo[376112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:36 compute-0 sudo[376112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:36 compute-0 sudo[376112]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:37 compute-0 nova_compute[247704]: 2026-01-31 08:40:37.053 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:38 compute-0 ceph-mon[74496]: pgmap v3231: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:38.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Jan 31 08:40:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Jan 31 08:40:38 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Jan 31 08:40:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:40.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 415 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 349 KiB/s rd, 17 KiB/s wr, 52 op/s
Jan 31 08:40:41 compute-0 ceph-mon[74496]: pgmap v3232: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 31 08:40:41 compute-0 ceph-mon[74496]: osdmap e382: 3 total, 3 up, 3 in
Jan 31 08:40:41 compute-0 nova_compute[247704]: 2026-01-31 08:40:41.640 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:41 compute-0 nova_compute[247704]: 2026-01-31 08:40:41.869 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:42 compute-0 ceph-mon[74496]: pgmap v3234: 305 pgs: 305 active+clean; 415 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 349 KiB/s rd, 17 KiB/s wr, 52 op/s
Jan 31 08:40:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:40:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:40:42 compute-0 nova_compute[247704]: 2026-01-31 08:40:42.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 398 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 17 KiB/s wr, 59 op/s
Jan 31 08:40:43 compute-0 sudo[376140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:43 compute-0 sudo[376140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:43 compute-0 sudo[376140]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:43 compute-0 sudo[376165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:43 compute-0 sudo[376165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:43 compute-0 sudo[376165]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:43 compute-0 sudo[376190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:43 compute-0 sudo[376190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:43 compute-0 sudo[376190]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:43 compute-0 sudo[376215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:40:43 compute-0 sudo[376215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:44 compute-0 sudo[376215]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:40:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:40:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:44 compute-0 ceph-mon[74496]: pgmap v3235: 305 pgs: 305 active+clean; 398 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 17 KiB/s wr, 59 op/s
Jan 31 08:40:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:44 compute-0 sudo[376262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:44 compute-0 sudo[376262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:44 compute-0 sudo[376262]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:44.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:44 compute-0 sudo[376287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:44 compute-0 sudo[376287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:44 compute-0 sudo[376287]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:44 compute-0 sudo[376313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:44 compute-0 sudo[376313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:44 compute-0 sudo[376313]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:44 compute-0 sudo[376338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:40:44 compute-0 sudo[376338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:44.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:44 compute-0 nova_compute[247704]: 2026-01-31 08:40:44.763 247708 DEBUG nova.network.neutron [req-a0271878-b081-4c44-bb5d-ae0de69c21b3 req-0d3e2b77-efdb-4faf-bf2b-d1ee5a2baa4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated VIF entry in instance network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:40:44 compute-0 nova_compute[247704]: 2026-01-31 08:40:44.764 247708 DEBUG nova.network.neutron [req-a0271878-b081-4c44-bb5d-ae0de69c21b3 req-0d3e2b77-efdb-4faf-bf2b-d1ee5a2baa4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:40:44 compute-0 sudo[376338]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:40:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:40:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:40:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cd6e60aa-b738-48a7-b596-2834f128b15e does not exist
Jan 31 08:40:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 96ae5a1a-4c5a-4788-a54b-9cd282ab0a3e does not exist
Jan 31 08:40:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 81ffcb4e-df8e-494a-8c81-8b89ef37aed8 does not exist
Jan 31 08:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:40:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:40:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:40:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:40:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:40:44 compute-0 nova_compute[247704]: 2026-01-31 08:40:44.952 247708 DEBUG oslo_concurrency.lockutils [req-a0271878-b081-4c44-bb5d-ae0de69c21b3 req-0d3e2b77-efdb-4faf-bf2b-d1ee5a2baa4e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:40:44 compute-0 sudo[376394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:44 compute-0 sudo[376394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:45 compute-0 sudo[376394]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:45 compute-0 sudo[376419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:45 compute-0 sudo[376419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:45 compute-0 sudo[376419]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:45 compute-0 sudo[376444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:45 compute-0 sudo[376444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:45 compute-0 sudo[376444]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:40:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:40:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:40:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:40:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:40:45 compute-0 sudo[376469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:40:45 compute-0 sudo[376469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:45 compute-0 podman[376535]: 2026-01-31 08:40:45.484548375 +0000 UTC m=+0.041940732 container create f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 31 08:40:45 compute-0 systemd[1]: Started libpod-conmon-f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d.scope.
Jan 31 08:40:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:45 compute-0 podman[376535]: 2026-01-31 08:40:45.466335442 +0000 UTC m=+0.023727819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:40:45 compute-0 podman[376535]: 2026-01-31 08:40:45.57352379 +0000 UTC m=+0.130916257 container init f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_meitner, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:40:45 compute-0 podman[376535]: 2026-01-31 08:40:45.584885767 +0000 UTC m=+0.142278164 container start f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_meitner, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:40:45 compute-0 podman[376535]: 2026-01-31 08:40:45.590252378 +0000 UTC m=+0.147644735 container attach f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:40:45 compute-0 systemd[1]: libpod-f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d.scope: Deactivated successfully.
Jan 31 08:40:45 compute-0 modest_meitner[376551]: 167 167
Jan 31 08:40:45 compute-0 conmon[376551]: conmon f30f7d2bb831bd87260b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d.scope/container/memory.events
Jan 31 08:40:45 compute-0 podman[376535]: 2026-01-31 08:40:45.596906069 +0000 UTC m=+0.154298426 container died f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5167a5971ddb4a48030880323d4d3547469cb7d17e3157610e5a34a4e36973e-merged.mount: Deactivated successfully.
Jan 31 08:40:45 compute-0 podman[376535]: 2026-01-31 08:40:45.643937794 +0000 UTC m=+0.201330151 container remove f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_meitner, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:40:45 compute-0 systemd[1]: libpod-conmon-f30f7d2bb831bd87260b0d8ba911ed87db985beb4c58862fa2de982dfce5da0d.scope: Deactivated successfully.
Jan 31 08:40:45 compute-0 podman[376575]: 2026-01-31 08:40:45.835027003 +0000 UTC m=+0.051093564 container create 7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:40:45 compute-0 systemd[1]: Started libpod-conmon-7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414.scope.
Jan 31 08:40:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:45 compute-0 podman[376575]: 2026-01-31 08:40:45.813345116 +0000 UTC m=+0.029411717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f1fca75f198a3171814c059381d1a2a90d22e806a798bb34379f1156d8ce44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f1fca75f198a3171814c059381d1a2a90d22e806a798bb34379f1156d8ce44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f1fca75f198a3171814c059381d1a2a90d22e806a798bb34379f1156d8ce44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f1fca75f198a3171814c059381d1a2a90d22e806a798bb34379f1156d8ce44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09f1fca75f198a3171814c059381d1a2a90d22e806a798bb34379f1156d8ce44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:45 compute-0 podman[376575]: 2026-01-31 08:40:45.932587317 +0000 UTC m=+0.148653878 container init 7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:40:45 compute-0 podman[376575]: 2026-01-31 08:40:45.938169223 +0000 UTC m=+0.154235784 container start 7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:40:45 compute-0 podman[376575]: 2026-01-31 08:40:45.94260982 +0000 UTC m=+0.158676391 container attach 7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:46 compute-0 ceph-mon[74496]: pgmap v3236: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:46.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:46.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:46 compute-0 nova_compute[247704]: 2026-01-31 08:40:46.643 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:46 compute-0 optimistic_hamilton[376592]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:40:46 compute-0 optimistic_hamilton[376592]: --> relative data size: 1.0
Jan 31 08:40:46 compute-0 optimistic_hamilton[376592]: --> All data devices are unavailable
Jan 31 08:40:46 compute-0 systemd[1]: libpod-7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414.scope: Deactivated successfully.
Jan 31 08:40:46 compute-0 podman[376575]: 2026-01-31 08:40:46.795489323 +0000 UTC m=+1.011556004 container died 7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-09f1fca75f198a3171814c059381d1a2a90d22e806a798bb34379f1156d8ce44-merged.mount: Deactivated successfully.
Jan 31 08:40:46 compute-0 nova_compute[247704]: 2026-01-31 08:40:46.871 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:46 compute-0 podman[376575]: 2026-01-31 08:40:46.873786558 +0000 UTC m=+1.089853109 container remove 7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:40:46 compute-0 systemd[1]: libpod-conmon-7dc6b70a2e7f1a6c45b148465c6f81c78bc7cf7a32f5ddbdea54478d42d3f414.scope: Deactivated successfully.
Jan 31 08:40:46 compute-0 sudo[376469]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:46 compute-0 sudo[376619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:46 compute-0 sudo[376619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:46 compute-0 sudo[376619]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:47 compute-0 sudo[376644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:47 compute-0 sudo[376644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:47 compute-0 sudo[376644]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:47 compute-0 sudo[376669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:47 compute-0 sudo[376669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:47 compute-0 sudo[376669]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:47 compute-0 sudo[376694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:40:47 compute-0 sudo[376694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:47 compute-0 podman[376759]: 2026-01-31 08:40:47.519196362 +0000 UTC m=+0.054520817 container create 533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:40:47 compute-0 systemd[1]: Started libpod-conmon-533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063.scope.
Jan 31 08:40:47 compute-0 nova_compute[247704]: 2026-01-31 08:40:47.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:47 compute-0 podman[376759]: 2026-01-31 08:40:47.499895463 +0000 UTC m=+0.035219918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:40:47 compute-0 podman[376759]: 2026-01-31 08:40:47.60993662 +0000 UTC m=+0.145261125 container init 533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:40:47 compute-0 podman[376759]: 2026-01-31 08:40:47.61570366 +0000 UTC m=+0.151028105 container start 533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:47 compute-0 infallible_borg[376775]: 167 167
Jan 31 08:40:47 compute-0 systemd[1]: libpod-533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063.scope: Deactivated successfully.
Jan 31 08:40:47 compute-0 podman[376759]: 2026-01-31 08:40:47.619644427 +0000 UTC m=+0.154968912 container attach 533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 08:40:47 compute-0 podman[376759]: 2026-01-31 08:40:47.620519728 +0000 UTC m=+0.155844203 container died 533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5490e8c78a571e136db684ae4b8be89d84172ea1d37aba190aea80ff2e5c4f1e-merged.mount: Deactivated successfully.
Jan 31 08:40:47 compute-0 podman[376759]: 2026-01-31 08:40:47.659290461 +0000 UTC m=+0.194614886 container remove 533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:40:47 compute-0 systemd[1]: libpod-conmon-533581e83b5a7ffe9171fc495f2991bff11ccda1ace684e2cdda6b398c95e063.scope: Deactivated successfully.
Jan 31 08:40:47 compute-0 podman[376799]: 2026-01-31 08:40:47.837253741 +0000 UTC m=+0.064916420 container create 2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:40:47 compute-0 systemd[1]: Started libpod-conmon-2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858.scope.
Jan 31 08:40:47 compute-0 podman[376799]: 2026-01-31 08:40:47.808483432 +0000 UTC m=+0.036146151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:40:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfcc7ffbb35470a8e5896ce8e049e7304861da7e57312dc5b05156927d2470c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfcc7ffbb35470a8e5896ce8e049e7304861da7e57312dc5b05156927d2470c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfcc7ffbb35470a8e5896ce8e049e7304861da7e57312dc5b05156927d2470c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cfcc7ffbb35470a8e5896ce8e049e7304861da7e57312dc5b05156927d2470c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:47 compute-0 podman[376799]: 2026-01-31 08:40:47.93911841 +0000 UTC m=+0.166781059 container init 2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:47 compute-0 podman[376799]: 2026-01-31 08:40:47.946217343 +0000 UTC m=+0.173879982 container start 2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:40:47 compute-0 podman[376799]: 2026-01-31 08:40:47.949430251 +0000 UTC m=+0.177092890 container attach 2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:40:48 compute-0 ceph-mon[74496]: pgmap v3237: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:48.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:48.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:48 compute-0 crazy_murdock[376816]: {
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:     "0": [
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:         {
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "devices": [
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "/dev/loop3"
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             ],
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "lv_name": "ceph_lv0",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "lv_size": "7511998464",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "name": "ceph_lv0",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "tags": {
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.cluster_name": "ceph",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.crush_device_class": "",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.encrypted": "0",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.osd_id": "0",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.type": "block",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:                 "ceph.vdo": "0"
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             },
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "type": "block",
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:             "vg_name": "ceph_vg0"
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:         }
Jan 31 08:40:48 compute-0 crazy_murdock[376816]:     ]
Jan 31 08:40:48 compute-0 crazy_murdock[376816]: }
Jan 31 08:40:48 compute-0 systemd[1]: libpod-2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858.scope: Deactivated successfully.
Jan 31 08:40:48 compute-0 podman[376799]: 2026-01-31 08:40:48.78571009 +0000 UTC m=+1.013372729 container died 2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cfcc7ffbb35470a8e5896ce8e049e7304861da7e57312dc5b05156927d2470c-merged.mount: Deactivated successfully.
Jan 31 08:40:48 compute-0 podman[376799]: 2026-01-31 08:40:48.838987806 +0000 UTC m=+1.066650465 container remove 2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:40:48 compute-0 systemd[1]: libpod-conmon-2fa5786389688b3e26a492611613e93855dc973acd32230f3417b668bff47858.scope: Deactivated successfully.
Jan 31 08:40:48 compute-0 sudo[376694]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:48 compute-0 sudo[376838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:48 compute-0 sudo[376838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:48 compute-0 sudo[376838]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:49 compute-0 sudo[376863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:49 compute-0 sudo[376863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:49 compute-0 sudo[376863]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:49 compute-0 sudo[376888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:49 compute-0 sudo[376888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:49 compute-0 sudo[376888]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:49 compute-0 sudo[376913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:40:49 compute-0 sudo[376913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1154103464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:40:49 compute-0 podman[376981]: 2026-01-31 08:40:49.473393152 +0000 UTC m=+0.038357934 container create 6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:40:49 compute-0 systemd[1]: Started libpod-conmon-6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9.scope.
Jan 31 08:40:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:49 compute-0 podman[376981]: 2026-01-31 08:40:49.539924632 +0000 UTC m=+0.104889434 container init 6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:40:49 compute-0 podman[376981]: 2026-01-31 08:40:49.547902375 +0000 UTC m=+0.112867167 container start 6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:40:49 compute-0 podman[376981]: 2026-01-31 08:40:49.455770143 +0000 UTC m=+0.020734965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:40:49 compute-0 podman[376981]: 2026-01-31 08:40:49.551866202 +0000 UTC m=+0.116831004 container attach 6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:40:49 compute-0 friendly_kare[376997]: 167 167
Jan 31 08:40:49 compute-0 systemd[1]: libpod-6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9.scope: Deactivated successfully.
Jan 31 08:40:49 compute-0 podman[376981]: 2026-01-31 08:40:49.553535582 +0000 UTC m=+0.118500374 container died 6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 08:40:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-22a02be63c58bc656ba2818c96dceb1cba0b69d59f704965ecd88838807a75d5-merged.mount: Deactivated successfully.
Jan 31 08:40:49 compute-0 podman[376981]: 2026-01-31 08:40:49.60152499 +0000 UTC m=+0.166489782 container remove 6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:49 compute-0 systemd[1]: libpod-conmon-6480090352d5105bb34da10d62c40fea9af98dd5b12c89bff099b128a2887bf9.scope: Deactivated successfully.
Jan 31 08:40:49 compute-0 podman[377020]: 2026-01-31 08:40:49.794053925 +0000 UTC m=+0.064378858 container create 6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:40:49 compute-0 systemd[1]: Started libpod-conmon-6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032.scope.
Jan 31 08:40:49 compute-0 podman[377020]: 2026-01-31 08:40:49.766033253 +0000 UTC m=+0.036358216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:40:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d58fc2879667d84709a038f3dfb49618e5c5636fb55ccf4712daffb68eefc1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d58fc2879667d84709a038f3dfb49618e5c5636fb55ccf4712daffb68eefc1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d58fc2879667d84709a038f3dfb49618e5c5636fb55ccf4712daffb68eefc1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d58fc2879667d84709a038f3dfb49618e5c5636fb55ccf4712daffb68eefc1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:49 compute-0 podman[377020]: 2026-01-31 08:40:49.898026114 +0000 UTC m=+0.168351047 container init 6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:40:49 compute-0 podman[377020]: 2026-01-31 08:40:49.912122897 +0000 UTC m=+0.182447800 container start 6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:40:49 compute-0 podman[377020]: 2026-01-31 08:40:49.91635964 +0000 UTC m=+0.186684583 container attach 6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:50.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:50 compute-0 ceph-mon[74496]: pgmap v3238: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 31 08:40:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:50.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 31 08:40:50 compute-0 elated_davinci[377037]: {
Jan 31 08:40:50 compute-0 elated_davinci[377037]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:40:50 compute-0 elated_davinci[377037]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:40:50 compute-0 elated_davinci[377037]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:40:50 compute-0 elated_davinci[377037]:         "osd_id": 0,
Jan 31 08:40:50 compute-0 elated_davinci[377037]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:40:50 compute-0 elated_davinci[377037]:         "type": "bluestore"
Jan 31 08:40:50 compute-0 elated_davinci[377037]:     }
Jan 31 08:40:50 compute-0 elated_davinci[377037]: }
Jan 31 08:40:50 compute-0 systemd[1]: libpod-6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032.scope: Deactivated successfully.
Jan 31 08:40:50 compute-0 podman[377020]: 2026-01-31 08:40:50.81517199 +0000 UTC m=+1.085496923 container died 6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:40:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d58fc2879667d84709a038f3dfb49618e5c5636fb55ccf4712daffb68eefc1d-merged.mount: Deactivated successfully.
Jan 31 08:40:50 compute-0 podman[377020]: 2026-01-31 08:40:50.865630138 +0000 UTC m=+1.135955041 container remove 6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:40:50 compute-0 systemd[1]: libpod-conmon-6594c4d2b7eef0a68856b492a2cfe4203a840654cc48b4bfba4310e72d041032.scope: Deactivated successfully.
Jan 31 08:40:50 compute-0 sudo[376913]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:40:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:40:50 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev df8e9bfb-81aa-444c-8f95-d4cb6ed9bc7c does not exist
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d59c0492-fdaf-4680-9394-8824e41f0539 does not exist
Jan 31 08:40:50 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 42b6c6b2-82bf-48cc-a87c-72f4e4014b1b does not exist
Jan 31 08:40:51 compute-0 sudo[377073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:51 compute-0 sudo[377073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:51 compute-0 sudo[377073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:51 compute-0 sudo[377098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:40:51 compute-0 sudo[377098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:51 compute-0 sudo[377098]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:51 compute-0 nova_compute[247704]: 2026-01-31 08:40:51.643 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:51 compute-0 nova_compute[247704]: 2026-01-31 08:40:51.873 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:51 compute-0 ceph-mon[74496]: pgmap v3239: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 31 08:40:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:40:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:40:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:52.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:40:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:40:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:40:52 compute-0 nova_compute[247704]: 2026-01-31 08:40:52.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 852 B/s wr, 14 op/s
Jan 31 08:40:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:53 compute-0 ceph-mon[74496]: pgmap v3240: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 852 B/s wr, 14 op/s
Jan 31 08:40:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3761983410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:40:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3761983410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:40:54 compute-0 podman[377124]: 2026-01-31 08:40:54.008923221 +0000 UTC m=+0.064697835 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:40:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:54.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 08:40:55 compute-0 nova_compute[247704]: 2026-01-31 08:40:55.011 247708 DEBUG nova.compute.manager [req-eceb14c4-71ed-4029-ba53-b8940e8de3a4 req-512d87bc-be02-4271-8e79-c7007501ed82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:40:55 compute-0 nova_compute[247704]: 2026-01-31 08:40:55.011 247708 DEBUG nova.compute.manager [req-eceb14c4-71ed-4029-ba53-b8940e8de3a4 req-512d87bc-be02-4271-8e79-c7007501ed82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing instance network info cache due to event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:40:55 compute-0 nova_compute[247704]: 2026-01-31 08:40:55.012 247708 DEBUG oslo_concurrency.lockutils [req-eceb14c4-71ed-4029-ba53-b8940e8de3a4 req-512d87bc-be02-4271-8e79-c7007501ed82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:40:55 compute-0 nova_compute[247704]: 2026-01-31 08:40:55.012 247708 DEBUG oslo_concurrency.lockutils [req-eceb14c4-71ed-4029-ba53-b8940e8de3a4 req-512d87bc-be02-4271-8e79-c7007501ed82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:40:55 compute-0 nova_compute[247704]: 2026-01-31 08:40:55.012 247708 DEBUG nova.network.neutron [req-eceb14c4-71ed-4029-ba53-b8940e8de3a4 req-512d87bc-be02-4271-8e79-c7007501ed82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:40:56 compute-0 ceph-mon[74496]: pgmap v3241: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 3 op/s
Jan 31 08:40:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:56.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:56 compute-0 nova_compute[247704]: 2026-01-31 08:40:56.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:56 compute-0 nova_compute[247704]: 2026-01-31 08:40:56.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:40:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:40:56 compute-0 nova_compute[247704]: 2026-01-31 08:40:56.647 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:56 compute-0 nova_compute[247704]: 2026-01-31 08:40:56.875 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:40:56 compute-0 sudo[377146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:56 compute-0 sudo[377146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:56 compute-0 sudo[377146]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:56 compute-0 sudo[377171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:40:56 compute-0 sudo[377171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:56 compute-0 sudo[377171]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:58 compute-0 ceph-mon[74496]: pgmap v3242: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:40:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:58.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:40:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:40:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:40:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:40:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:59 compute-0 nova_compute[247704]: 2026-01-31 08:40:59.829 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:00 compute-0 nova_compute[247704]: 2026-01-31 08:41:00.104 247708 DEBUG nova.network.neutron [req-eceb14c4-71ed-4029-ba53-b8940e8de3a4 req-512d87bc-be02-4271-8e79-c7007501ed82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated VIF entry in instance network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:41:00 compute-0 nova_compute[247704]: 2026-01-31 08:41:00.105 247708 DEBUG nova.network.neutron [req-eceb14c4-71ed-4029-ba53-b8940e8de3a4 req-512d87bc-be02-4271-8e79-c7007501ed82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:00 compute-0 ceph-mon[74496]: pgmap v3243: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:41:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1036511080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:00 compute-0 nova_compute[247704]: 2026-01-31 08:41:00.394 247708 DEBUG oslo_concurrency.lockutils [req-eceb14c4-71ed-4029-ba53-b8940e8de3a4 req-512d87bc-be02-4271-8e79-c7007501ed82 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 8.9 KiB/s wr, 2 op/s
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.355 247708 DEBUG oslo_concurrency.lockutils [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.355 247708 DEBUG oslo_concurrency.lockutils [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:01 compute-0 ceph-mon[74496]: pgmap v3244: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 8.9 KiB/s wr, 2 op/s
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.640 247708 INFO nova.compute.manager [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Detaching volume 5e3dddd5-2020-4bc8-ad80-973dfd3573b1
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.648 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.807 247708 INFO nova.virt.block_device [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Attempting to driver detach volume 5e3dddd5-2020-4bc8-ad80-973dfd3573b1 from mountpoint /dev/vdb
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.819 247708 DEBUG nova.virt.libvirt.driver [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Attempting to detach device vdb from instance 9e883c68-083a-45ab-81fe-942de74e50ef from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.820 247708 DEBUG nova.virt.libvirt.guest [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-5e3dddd5-2020-4bc8-ad80-973dfd3573b1">
Jan 31 08:41:01 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   </source>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <serial>5e3dddd5-2020-4bc8-ad80-973dfd3573b1</serial>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]: </disk>
Jan 31 08:41:01 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.913 247708 INFO nova.virt.libvirt.driver [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdb from instance 9e883c68-083a-45ab-81fe-942de74e50ef from the persistent domain config.
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.913 247708 DEBUG nova.virt.libvirt.driver [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9e883c68-083a-45ab-81fe-942de74e50ef from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:41:01 compute-0 nova_compute[247704]: 2026-01-31 08:41:01.914 247708 DEBUG nova.virt.libvirt.guest [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-5e3dddd5-2020-4bc8-ad80-973dfd3573b1">
Jan 31 08:41:01 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   </source>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <serial>5e3dddd5-2020-4bc8-ad80-973dfd3573b1</serial>
Jan 31 08:41:01 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:41:01 compute-0 nova_compute[247704]: </disk>
Jan 31 08:41:01 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:41:02 compute-0 nova_compute[247704]: 2026-01-31 08:41:02.129 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769848862.128864, 9e883c68-083a-45ab-81fe-942de74e50ef => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:41:02 compute-0 nova_compute[247704]: 2026-01-31 08:41:02.131 247708 DEBUG nova.virt.libvirt.driver [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9e883c68-083a-45ab-81fe-942de74e50ef _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:41:02 compute-0 nova_compute[247704]: 2026-01-31 08:41:02.134 247708 INFO nova.virt.libvirt.driver [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdb from instance 9e883c68-083a-45ab-81fe-942de74e50ef from the live domain config.
Jan 31 08:41:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:02.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:02.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 8.9 KiB/s wr, 2 op/s
Jan 31 08:41:02 compute-0 nova_compute[247704]: 2026-01-31 08:41:02.782 247708 DEBUG nova.objects.instance [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:02 compute-0 nova_compute[247704]: 2026-01-31 08:41:02.784 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:02 compute-0 nova_compute[247704]: 2026-01-31 08:41:02.784 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:02 compute-0 nova_compute[247704]: 2026-01-31 08:41:02.784 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:41:03 compute-0 nova_compute[247704]: 2026-01-31 08:41:03.250 247708 DEBUG oslo_concurrency.lockutils [None req-91be2428-fc23-4097-aba6-43fbcd3ef19b cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:03 compute-0 ceph-mon[74496]: pgmap v3245: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 KiB/s rd, 8.9 KiB/s wr, 2 op/s
Jan 31 08:41:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1342224106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:03 compute-0 ovn_controller[149457]: 2026-01-31T08:41:03Z|00803|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:41:03 compute-0 nova_compute[247704]: 2026-01-31 08:41:03.998 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:04.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 186 KiB/s wr, 6 op/s
Jan 31 08:41:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:04.981 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:41:04 compute-0 nova_compute[247704]: 2026-01-31 08:41:04.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:04.984 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:41:05 compute-0 ceph-mon[74496]: pgmap v3246: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 186 KiB/s wr, 6 op/s
Jan 31 08:41:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3864762599' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:41:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3864762599' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:41:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1859579774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.242 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:06.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.472 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.473 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.473 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.474 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.475 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:06.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 187 KiB/s wr, 21 op/s
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.629 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.629 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.630 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.630 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.631 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.657 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.692 247708 DEBUG oslo_concurrency.lockutils [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.693 247708 DEBUG oslo_concurrency.lockutils [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.853 247708 INFO nova.compute.manager [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Detaching volume 32f698bc-fdf5-4d95-826a-13f70e60d6b8
Jan 31 08:41:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1931840101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:06 compute-0 nova_compute[247704]: 2026-01-31 08:41:06.879 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:06 compute-0 podman[377223]: 2026-01-31 08:41:06.930970372 +0000 UTC m=+0.093128286 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 08:41:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:06.986 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.041 247708 INFO nova.virt.block_device [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Attempting to driver detach volume 32f698bc-fdf5-4d95-826a-13f70e60d6b8 from mountpoint /dev/vdc
Jan 31 08:41:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:41:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254181457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.053 247708 DEBUG nova.virt.libvirt.driver [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Attempting to detach device vdc from instance 9e883c68-083a-45ab-81fe-942de74e50ef from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.055 247708 DEBUG nova.virt.libvirt.guest [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-32f698bc-fdf5-4d95-826a-13f70e60d6b8">
Jan 31 08:41:07 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   </source>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <target dev="vdc" bus="virtio"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <serial>32f698bc-fdf5-4d95-826a-13f70e60d6b8</serial>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]: </disk>
Jan 31 08:41:07 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.063 247708 INFO nova.virt.libvirt.driver [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdc from instance 9e883c68-083a-45ab-81fe-942de74e50ef from the persistent domain config.
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.064 247708 DEBUG nova.virt.libvirt.driver [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 9e883c68-083a-45ab-81fe-942de74e50ef from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.065 247708 DEBUG nova.virt.libvirt.guest [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-32f698bc-fdf5-4d95-826a-13f70e60d6b8">
Jan 31 08:41:07 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   </source>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <target dev="vdc" bus="virtio"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <serial>32f698bc-fdf5-4d95-826a-13f70e60d6b8</serial>
Jan 31 08:41:07 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 08:41:07 compute-0 nova_compute[247704]: </disk>
Jan 31 08:41:07 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.074 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.135 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769848867.1342113, 9e883c68-083a-45ab-81fe-942de74e50ef => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.136 247708 DEBUG nova.virt.libvirt.driver [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 9e883c68-083a-45ab-81fe-942de74e50ef _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.140 247708 INFO nova.virt.libvirt.driver [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdc from instance 9e883c68-083a-45ab-81fe-942de74e50ef from the live domain config.
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.537 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.538 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.543 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.544 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.615 247708 DEBUG nova.objects.instance [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.744 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.745 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3845MB free_disk=20.987842559814453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.745 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.746 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:07 compute-0 nova_compute[247704]: 2026-01-31 08:41:07.905 247708 DEBUG oslo_concurrency.lockutils [None req-91af3499-bc9c-4a56-8735-dc8b310f63b3 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:07 compute-0 ceph-mon[74496]: pgmap v3247: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 187 KiB/s wr, 21 op/s
Jan 31 08:41:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1254181457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.017 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 9e883c68-083a-45ab-81fe-942de74e50ef actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.018 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.018 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.018 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.084 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:08.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:41:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395554617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.527 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.534 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:41:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 361 KiB/s wr, 25 op/s
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.656 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.808 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:41:08 compute-0 nova_compute[247704]: 2026-01-31 08:41:08.809 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1395554617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:09 compute-0 ceph-mon[74496]: pgmap v3248: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 361 KiB/s wr, 25 op/s
Jan 31 08:41:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:10.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:10 compute-0 nova_compute[247704]: 2026-01-31 08:41:10.377 247708 DEBUG nova.compute.manager [req-206ecaab-f0d5-4ae6-abe2-2909e2438651 req-e6574070-4843-4acc-9c45-a8dc8d317956 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:10 compute-0 nova_compute[247704]: 2026-01-31 08:41:10.377 247708 DEBUG nova.compute.manager [req-206ecaab-f0d5-4ae6-abe2-2909e2438651 req-e6574070-4843-4acc-9c45-a8dc8d317956 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing instance network info cache due to event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:41:10 compute-0 nova_compute[247704]: 2026-01-31 08:41:10.377 247708 DEBUG oslo_concurrency.lockutils [req-206ecaab-f0d5-4ae6-abe2-2909e2438651 req-e6574070-4843-4acc-9c45-a8dc8d317956 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:10 compute-0 nova_compute[247704]: 2026-01-31 08:41:10.378 247708 DEBUG oslo_concurrency.lockutils [req-206ecaab-f0d5-4ae6-abe2-2909e2438651 req-e6574070-4843-4acc-9c45-a8dc8d317956 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:10 compute-0 nova_compute[247704]: 2026-01-31 08:41:10.378 247708 DEBUG nova.network.neutron [req-206ecaab-f0d5-4ae6-abe2-2909e2438651 req-e6574070-4843-4acc-9c45-a8dc8d317956 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:41:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 367 KiB/s wr, 29 op/s
Jan 31 08:41:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2250320806' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:41:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2250320806' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:41:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:11.210 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:11.211 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:11.212 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:11 compute-0 nova_compute[247704]: 2026-01-31 08:41:11.652 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:11 compute-0 nova_compute[247704]: 2026-01-31 08:41:11.881 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:12 compute-0 ceph-mon[74496]: pgmap v3249: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 367 KiB/s wr, 29 op/s
Jan 31 08:41:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:12.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 358 KiB/s wr, 27 op/s
Jan 31 08:41:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:14 compute-0 ceph-mon[74496]: pgmap v3250: 305 pgs: 305 active+clean; 381 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 358 KiB/s wr, 27 op/s
Jan 31 08:41:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:14.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:14.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 361 KiB/s wr, 40 op/s
Jan 31 08:41:14 compute-0 nova_compute[247704]: 2026-01-31 08:41:14.803 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:15 compute-0 nova_compute[247704]: 2026-01-31 08:41:15.359 247708 DEBUG nova.network.neutron [req-206ecaab-f0d5-4ae6-abe2-2909e2438651 req-e6574070-4843-4acc-9c45-a8dc8d317956 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated VIF entry in instance network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:41:15 compute-0 nova_compute[247704]: 2026-01-31 08:41:15.360 247708 DEBUG nova.network.neutron [req-206ecaab-f0d5-4ae6-abe2-2909e2438651 req-e6574070-4843-4acc-9c45-a8dc8d317956 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:15 compute-0 ceph-mon[74496]: pgmap v3251: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 361 KiB/s wr, 40 op/s
Jan 31 08:41:15 compute-0 nova_compute[247704]: 2026-01-31 08:41:15.577 247708 DEBUG oslo_concurrency.lockutils [req-206ecaab-f0d5-4ae6-abe2-2909e2438651 req-e6574070-4843-4acc-9c45-a8dc8d317956 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:16.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:16 compute-0 nova_compute[247704]: 2026-01-31 08:41:16.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:16.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 183 KiB/s wr, 35 op/s
Jan 31 08:41:16 compute-0 nova_compute[247704]: 2026-01-31 08:41:16.654 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2175157771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:16 compute-0 nova_compute[247704]: 2026-01-31 08:41:16.883 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:17 compute-0 sudo[377281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:17 compute-0 sudo[377281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:17 compute-0 sudo[377281]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:17 compute-0 sudo[377306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:17 compute-0 sudo[377306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:17 compute-0 sudo[377306]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:18 compute-0 ceph-mon[74496]: pgmap v3252: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 183 KiB/s wr, 35 op/s
Jan 31 08:41:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:18.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:18.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 183 KiB/s wr, 21 op/s
Jan 31 08:41:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:18 compute-0 nova_compute[247704]: 2026-01-31 08:41:18.976 247708 DEBUG oslo_concurrency.lockutils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:18 compute-0 nova_compute[247704]: 2026-01-31 08:41:18.977 247708 DEBUG oslo_concurrency.lockutils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:19 compute-0 nova_compute[247704]: 2026-01-31 08:41:19.123 247708 DEBUG nova.objects.instance [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 5f00cd9b-b5f3-4eb6-ab53-387687853c27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:19 compute-0 nova_compute[247704]: 2026-01-31 08:41:19.444 247708 DEBUG oslo_concurrency.lockutils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:19 compute-0 ceph-mon[74496]: pgmap v3253: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 183 KiB/s wr, 21 op/s
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:41:20
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.control', 'images', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:41:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:20.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.6 MiB/s wr, 33 op/s
Jan 31 08:41:20 compute-0 nova_compute[247704]: 2026-01-31 08:41:20.755 247708 DEBUG oslo_concurrency.lockutils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:20 compute-0 nova_compute[247704]: 2026-01-31 08:41:20.756 247708 DEBUG oslo_concurrency.lockutils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:20 compute-0 nova_compute[247704]: 2026-01-31 08:41:20.756 247708 INFO nova.compute.manager [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Attaching volume f0bb0c0d-db6f-42f3-aad1-86e70cc695a6 to /dev/vdb
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:41:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.067 247708 DEBUG os_brick.utils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.069 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.078 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.078 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[019cbeb9-c66f-4341-b94c-61ad429b785f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.080 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.086 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.086 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[71a8bf85-fe96-402c-93fd-9f5cdf4c0b01]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.087 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.093 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.093 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[4295232d-91e9-4017-a299-a79df491c145]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.094 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[deac6123-b843-47c2-a12a-7728792478e9]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.095 247708 DEBUG oslo_concurrency.processutils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.112 247708 DEBUG oslo_concurrency.processutils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.114 247708 DEBUG os_brick.initiator.connectors.lightos [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.114 247708 DEBUG os_brick.initiator.connectors.lightos [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.114 247708 DEBUG os_brick.initiator.connectors.lightos [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.115 247708 DEBUG os_brick.utils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (47ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.115 247708 DEBUG nova.virt.block_device [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating existing volume attachment record: 08e8d20e-744f-44e5-988a-74c74ca5dbc2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.657 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:21 compute-0 ceph-mon[74496]: pgmap v3254: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.6 MiB/s wr, 33 op/s
Jan 31 08:41:21 compute-0 nova_compute[247704]: 2026-01-31 08:41:21.885 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:41:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1587974525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:22 compute-0 nova_compute[247704]: 2026-01-31 08:41:22.450 247708 DEBUG nova.objects.instance [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 5f00cd9b-b5f3-4eb6-ab53-387687853c27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:22 compute-0 nova_compute[247704]: 2026-01-31 08:41:22.547 247708 DEBUG nova.virt.libvirt.driver [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Attempting to attach volume f0bb0c0d-db6f-42f3-aad1-86e70cc695a6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:41:22 compute-0 nova_compute[247704]: 2026-01-31 08:41:22.550 247708 DEBUG nova.virt.libvirt.guest [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:41:22 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:41:22 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-f0bb0c0d-db6f-42f3-aad1-86e70cc695a6">
Jan 31 08:41:22 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:41:22 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:41:22 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:41:22 compute-0 nova_compute[247704]:   </source>
Jan 31 08:41:22 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:41:22 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:41:22 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:41:22 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:41:22 compute-0 nova_compute[247704]:   <serial>f0bb0c0d-db6f-42f3-aad1-86e70cc695a6</serial>
Jan 31 08:41:22 compute-0 nova_compute[247704]: </disk>
Jan 31 08:41:22 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:41:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:22.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.6 MiB/s wr, 29 op/s
Jan 31 08:41:23 compute-0 nova_compute[247704]: 2026-01-31 08:41:23.078 247708 DEBUG nova.virt.libvirt.driver [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:23 compute-0 nova_compute[247704]: 2026-01-31 08:41:23.080 247708 DEBUG nova.virt.libvirt.driver [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:23 compute-0 nova_compute[247704]: 2026-01-31 08:41:23.080 247708 DEBUG nova.virt.libvirt.driver [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:23 compute-0 nova_compute[247704]: 2026-01-31 08:41:23.081 247708 DEBUG nova.virt.libvirt.driver [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:4b:6c:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:41:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1587974525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:24 compute-0 ceph-mon[74496]: pgmap v3255: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.6 MiB/s wr, 29 op/s
Jan 31 08:41:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:24.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:24.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 31 08:41:24 compute-0 podman[377362]: 2026-01-31 08:41:24.888305114 +0000 UTC m=+0.063936397 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:41:25 compute-0 ceph-mon[74496]: pgmap v3256: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 31 08:41:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:26.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:26.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:26 compute-0 nova_compute[247704]: 2026-01-31 08:41:26.658 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:26 compute-0 nova_compute[247704]: 2026-01-31 08:41:26.887 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:27 compute-0 ceph-mon[74496]: pgmap v3257: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:28.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:28 compute-0 nova_compute[247704]: 2026-01-31 08:41:28.843 247708 DEBUG oslo_concurrency.lockutils [None req-e96ff3dd-470f-4040-aa06-cd470a9e1761 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 8.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:29 compute-0 nova_compute[247704]: 2026-01-31 08:41:29.927 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:30 compute-0 ceph-mon[74496]: pgmap v3258: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:30 compute-0 nova_compute[247704]: 2026-01-31 08:41:30.621 247708 DEBUG oslo_concurrency.lockutils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:30 compute-0 nova_compute[247704]: 2026-01-31 08:41:30.623 247708 DEBUG oslo_concurrency.lockutils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:30 compute-0 nova_compute[247704]: 2026-01-31 08:41:30.764 247708 DEBUG nova.objects.instance [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 5f00cd9b-b5f3-4eb6-ab53-387687853c27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:31 compute-0 nova_compute[247704]: 2026-01-31 08:41:31.066 247708 DEBUG oslo_concurrency.lockutils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2642737543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:31 compute-0 nova_compute[247704]: 2026-01-31 08:41:31.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:31 compute-0 nova_compute[247704]: 2026-01-31 08:41:31.889 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:32 compute-0 ceph-mon[74496]: pgmap v3259: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 08:41:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2081854837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:32.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 176 KiB/s wr, 13 op/s
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.703 247708 DEBUG oslo_concurrency.lockutils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.704 247708 DEBUG oslo_concurrency.lockutils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.704 247708 INFO nova.compute.manager [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Attaching volume 273b2ec0-261d-40ed-887b-96124699257b to /dev/vdc
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.907 247708 DEBUG os_brick.utils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.908 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.918 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.919 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[8956e878-457b-4a37-8e8f-f3eb0f523475]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.920 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.926 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.927 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[640a14aa-15d0-4e99-8cfb-944ab8bbe552]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.928 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.936 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.936 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[87719fa4-dd41-43f0-8cb1-d3881efccd7f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.938 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[a927b2e0-01ea-4434-b2b8-26f62bc94662]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.938 247708 DEBUG oslo_concurrency.processutils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.965 247708 DEBUG oslo_concurrency.processutils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.968 247708 DEBUG os_brick.initiator.connectors.lightos [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.968 247708 DEBUG os_brick.initiator.connectors.lightos [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.968 247708 DEBUG os_brick.initiator.connectors.lightos [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.968 247708 DEBUG os_brick.utils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:41:32 compute-0 nova_compute[247704]: 2026-01-31 08:41:32.969 247708 DEBUG nova.virt.block_device [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating existing volume attachment record: 129c0381-e5b0-416a-8a82-a0a26dc055b8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:41:33 compute-0 ceph-mon[74496]: pgmap v3260: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 176 KiB/s wr, 13 op/s
Jan 31 08:41:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:34 compute-0 nova_compute[247704]: 2026-01-31 08:41:34.258 247708 DEBUG nova.objects.instance [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 5f00cd9b-b5f3-4eb6-ab53-387687853c27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:41:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:34.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:34 compute-0 nova_compute[247704]: 2026-01-31 08:41:34.461 247708 DEBUG nova.virt.libvirt.driver [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Attempting to attach volume 273b2ec0-261d-40ed-887b-96124699257b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:41:34 compute-0 nova_compute[247704]: 2026-01-31 08:41:34.465 247708 DEBUG nova.virt.libvirt.guest [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:41:34 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:41:34 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-273b2ec0-261d-40ed-887b-96124699257b">
Jan 31 08:41:34 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:41:34 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:41:34 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:41:34 compute-0 nova_compute[247704]:   </source>
Jan 31 08:41:34 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:41:34 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:41:34 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:41:34 compute-0 nova_compute[247704]:   <target dev="vdc" bus="virtio"/>
Jan 31 08:41:34 compute-0 nova_compute[247704]:   <serial>273b2ec0-261d-40ed-887b-96124699257b</serial>
Jan 31 08:41:34 compute-0 nova_compute[247704]: </disk>
Jan 31 08:41:34 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:41:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:34.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 188 KiB/s wr, 15 op/s
Jan 31 08:41:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3396622110' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:41:35 compute-0 nova_compute[247704]: 2026-01-31 08:41:35.569 247708 DEBUG nova.virt.libvirt.driver [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:35 compute-0 nova_compute[247704]: 2026-01-31 08:41:35.569 247708 DEBUG nova.virt.libvirt.driver [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:35 compute-0 nova_compute[247704]: 2026-01-31 08:41:35.569 247708 DEBUG nova.virt.libvirt.driver [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:35 compute-0 nova_compute[247704]: 2026-01-31 08:41:35.570 247708 DEBUG nova.virt.libvirt.driver [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:41:35 compute-0 nova_compute[247704]: 2026-01-31 08:41:35.570 247708 DEBUG nova.virt.libvirt.driver [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] No VIF found with MAC fa:16:3e:4b:6c:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010163665549972082 of space, bias 1.0, pg target 0.30490996649916247 quantized to 32 (current 32)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00866892332030523 of space, bias 1.0, pg target 2.600676996091569 quantized to 32 (current 32)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:41:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:41:36 compute-0 ceph-mon[74496]: pgmap v3261: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 188 KiB/s wr, 15 op/s
Jan 31 08:41:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:36.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:36.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 KiB/s rd, 13 KiB/s wr, 6 op/s
Jan 31 08:41:36 compute-0 nova_compute[247704]: 2026-01-31 08:41:36.663 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:36 compute-0 nova_compute[247704]: 2026-01-31 08:41:36.926 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:37 compute-0 sudo[377416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:37 compute-0 sudo[377416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:37 compute-0 sudo[377416]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:37 compute-0 sudo[377447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:37 compute-0 sudo[377447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:37 compute-0 sudo[377447]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:37 compute-0 podman[377440]: 2026-01-31 08:41:37.309166662 +0000 UTC m=+0.090613416 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 31 08:41:38 compute-0 sshd-session[377492]: Invalid user ubuntu from 45.148.10.240 port 56410
Jan 31 08:41:38 compute-0 sshd-session[377492]: Connection closed by invalid user ubuntu 45.148.10.240 port 56410 [preauth]
Jan 31 08:41:38 compute-0 nova_compute[247704]: 2026-01-31 08:41:38.273 247708 DEBUG oslo_concurrency.lockutils [None req-6347fbce-1d40-4b80-8924-230d3e3a28c8 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:41:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:38.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:41:38 compute-0 ceph-mon[74496]: pgmap v3262: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 KiB/s rd, 13 KiB/s wr, 6 op/s
Jan 31 08:41:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:38.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 138 KiB/s rd, 14 KiB/s wr, 14 op/s
Jan 31 08:41:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:39 compute-0 ceph-mon[74496]: pgmap v3263: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 138 KiB/s rd, 14 KiB/s wr, 14 op/s
Jan 31 08:41:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:40.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:40.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:41 compute-0 nova_compute[247704]: 2026-01-31 08:41:41.667 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:41 compute-0 nova_compute[247704]: 2026-01-31 08:41:41.928 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:42 compute-0 ceph-mon[74496]: pgmap v3264: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:42.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:42 compute-0 nova_compute[247704]: 2026-01-31 08:41:42.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:42.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:43.099 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:41:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:43.100 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:41:43 compute-0 nova_compute[247704]: 2026-01-31 08:41:43.100 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:43 compute-0 nova_compute[247704]: 2026-01-31 08:41:43.796 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:44 compute-0 nova_compute[247704]: 2026-01-31 08:41:44.072 247708 DEBUG nova.compute.manager [req-10104793-a4b8-4ad8-ac76-95196d26a54d req-6b480711-cabd-4fac-aae5-33c4b98b0ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-changed-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:44 compute-0 nova_compute[247704]: 2026-01-31 08:41:44.072 247708 DEBUG nova.compute.manager [req-10104793-a4b8-4ad8-ac76-95196d26a54d req-6b480711-cabd-4fac-aae5-33c4b98b0ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing instance network info cache due to event network-changed-637c7294-89aa-4b8f-8485-df3303e84675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:41:44 compute-0 nova_compute[247704]: 2026-01-31 08:41:44.073 247708 DEBUG oslo_concurrency.lockutils [req-10104793-a4b8-4ad8-ac76-95196d26a54d req-6b480711-cabd-4fac-aae5-33c4b98b0ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:44 compute-0 nova_compute[247704]: 2026-01-31 08:41:44.073 247708 DEBUG oslo_concurrency.lockutils [req-10104793-a4b8-4ad8-ac76-95196d26a54d req-6b480711-cabd-4fac-aae5-33c4b98b0ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:44 compute-0 nova_compute[247704]: 2026-01-31 08:41:44.073 247708 DEBUG nova.network.neutron [req-10104793-a4b8-4ad8-ac76-95196d26a54d req-6b480711-cabd-4fac-aae5-33c4b98b0ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:41:44 compute-0 ceph-mon[74496]: pgmap v3265: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:44.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:44.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:45 compute-0 ceph-mon[74496]: pgmap v3266: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Jan 31 08:41:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:46.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:46.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 74 op/s
Jan 31 08:41:46 compute-0 nova_compute[247704]: 2026-01-31 08:41:46.668 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:46 compute-0 nova_compute[247704]: 2026-01-31 08:41:46.930 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:47 compute-0 nova_compute[247704]: 2026-01-31 08:41:47.270 247708 DEBUG nova.network.neutron [req-10104793-a4b8-4ad8-ac76-95196d26a54d req-6b480711-cabd-4fac-aae5-33c4b98b0ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updated VIF entry in instance network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:41:47 compute-0 nova_compute[247704]: 2026-01-31 08:41:47.271 247708 DEBUG nova.network.neutron [req-10104793-a4b8-4ad8-ac76-95196d26a54d req-6b480711-cabd-4fac-aae5-33c4b98b0ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:47 compute-0 nova_compute[247704]: 2026-01-31 08:41:47.449 247708 DEBUG oslo_concurrency.lockutils [req-10104793-a4b8-4ad8-ac76-95196d26a54d req-6b480711-cabd-4fac-aae5-33c4b98b0ba7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:47 compute-0 ceph-mon[74496]: pgmap v3267: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 74 op/s
Jan 31 08:41:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:48 compute-0 nova_compute[247704]: 2026-01-31 08:41:48.541 247708 DEBUG nova.compute.manager [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-changed-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:48 compute-0 nova_compute[247704]: 2026-01-31 08:41:48.542 247708 DEBUG nova.compute.manager [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing instance network info cache due to event network-changed-637c7294-89aa-4b8f-8485-df3303e84675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:41:48 compute-0 nova_compute[247704]: 2026-01-31 08:41:48.542 247708 DEBUG oslo_concurrency.lockutils [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:48 compute-0 nova_compute[247704]: 2026-01-31 08:41:48.542 247708 DEBUG oslo_concurrency.lockutils [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:48 compute-0 nova_compute[247704]: 2026-01-31 08:41:48.543 247708 DEBUG nova.network.neutron [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:41:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:48.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 KiB/s wr, 76 op/s
Jan 31 08:41:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:41:49.103 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:41:49 compute-0 nova_compute[247704]: 2026-01-31 08:41:49.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:50 compute-0 ceph-mon[74496]: pgmap v3268: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 KiB/s wr, 76 op/s
Jan 31 08:41:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:41:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:50.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:41:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:50.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 431 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 512 KiB/s wr, 75 op/s
Jan 31 08:41:51 compute-0 sudo[377502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:51 compute-0 sudo[377502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-0 sudo[377502]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.371 247708 DEBUG nova.network.neutron [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updated VIF entry in instance network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.372 247708 DEBUG nova.network.neutron [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:51 compute-0 sudo[377527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:51 compute-0 sudo[377527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-0 sudo[377527]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.446 247708 DEBUG oslo_concurrency.lockutils [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.447 247708 DEBUG nova.compute.manager [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-changed-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.447 247708 DEBUG nova.compute.manager [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing instance network info cache due to event network-changed-637c7294-89aa-4b8f-8485-df3303e84675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.447 247708 DEBUG oslo_concurrency.lockutils [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.447 247708 DEBUG oslo_concurrency.lockutils [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.448 247708 DEBUG nova.network.neutron [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:41:51 compute-0 sudo[377552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:51 compute-0 sudo[377552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-0 sudo[377552]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-0 ceph-mon[74496]: pgmap v3269: 305 pgs: 305 active+clean; 431 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 512 KiB/s wr, 75 op/s
Jan 31 08:41:51 compute-0 sudo[377577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:41:51 compute-0 sudo[377577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.677 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:41:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:41:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:41:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:51 compute-0 sudo[377577]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:51 compute-0 nova_compute[247704]: 2026-01-31 08:41:51.932 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:41:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 08:41:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:41:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:52.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 431 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 512 KiB/s wr, 13 op/s
Jan 31 08:41:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 08:41:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:41:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:41:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:41:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:41:52 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:41:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:41:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:41:52 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2637242050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:41:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:53 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 63da7949-f45b-41cb-aa2f-c4eb093ad930 does not exist
Jan 31 08:41:53 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6b946668-58a0-4386-aa66-49f7056248e4 does not exist
Jan 31 08:41:53 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 561cfb7e-3dd8-4f6d-b7a0-0214f81d9bd0 does not exist
Jan 31 08:41:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:41:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:41:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:41:53 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:41:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:41:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:41:53 compute-0 sudo[377634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:53 compute-0 sudo[377634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:53 compute-0 sudo[377634]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:53 compute-0 sudo[377659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:53 compute-0 sudo[377659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:53 compute-0 sudo[377659]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:53 compute-0 sudo[377684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:53 compute-0 sudo[377684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:53 compute-0 sudo[377684]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:53 compute-0 sudo[377709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:41:53 compute-0 sudo[377709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:53 compute-0 nova_compute[247704]: 2026-01-31 08:41:53.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:53 compute-0 podman[377773]: 2026-01-31 08:41:53.589429516 +0000 UTC m=+0.059476089 container create 530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swanson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:41:53 compute-0 systemd[1]: Started libpod-conmon-530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16.scope.
Jan 31 08:41:53 compute-0 podman[377773]: 2026-01-31 08:41:53.555994762 +0000 UTC m=+0.026041355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:41:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:53 compute-0 podman[377773]: 2026-01-31 08:41:53.704729101 +0000 UTC m=+0.174775714 container init 530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swanson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:41:53 compute-0 podman[377773]: 2026-01-31 08:41:53.712249364 +0000 UTC m=+0.182295947 container start 530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swanson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 08:41:53 compute-0 sharp_swanson[377789]: 167 167
Jan 31 08:41:53 compute-0 systemd[1]: libpod-530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16.scope: Deactivated successfully.
Jan 31 08:41:53 compute-0 conmon[377789]: conmon 530709f86294719facdb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16.scope/container/memory.events
Jan 31 08:41:53 compute-0 podman[377773]: 2026-01-31 08:41:53.726209454 +0000 UTC m=+0.196256127 container attach 530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swanson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:41:53 compute-0 podman[377773]: 2026-01-31 08:41:53.726554903 +0000 UTC m=+0.196601496 container died 530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swanson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac2da006352e912e98125665593c7919f8d012ed830a8ea7e746d827ecd1ae7-merged.mount: Deactivated successfully.
Jan 31 08:41:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:54 compute-0 ceph-mon[74496]: pgmap v3270: 305 pgs: 305 active+clean; 431 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 512 KiB/s wr, 13 op/s
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/537536257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:41:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/537536257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:41:54 compute-0 podman[377773]: 2026-01-31 08:41:54.039107498 +0000 UTC m=+0.509154101 container remove 530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_swanson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:41:54 compute-0 systemd[1]: libpod-conmon-530709f86294719facdb5db96177957425c036e50b66c72f74f0e20a6c6c3c16.scope: Deactivated successfully.
Jan 31 08:41:54 compute-0 podman[377813]: 2026-01-31 08:41:54.30262053 +0000 UTC m=+0.101144352 container create f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:41:54 compute-0 podman[377813]: 2026-01-31 08:41:54.237726001 +0000 UTC m=+0.036249853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:41:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:54.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:54 compute-0 systemd[1]: Started libpod-conmon-f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a.scope.
Jan 31 08:41:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173d88bb3c3f303d02c3ae069fd2e3030f0aa10d61f9a8dc2730f2d12c4528ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173d88bb3c3f303d02c3ae069fd2e3030f0aa10d61f9a8dc2730f2d12c4528ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173d88bb3c3f303d02c3ae069fd2e3030f0aa10d61f9a8dc2730f2d12c4528ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173d88bb3c3f303d02c3ae069fd2e3030f0aa10d61f9a8dc2730f2d12c4528ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173d88bb3c3f303d02c3ae069fd2e3030f0aa10d61f9a8dc2730f2d12c4528ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:54 compute-0 podman[377813]: 2026-01-31 08:41:54.542865166 +0000 UTC m=+0.341389068 container init f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:41:54 compute-0 podman[377813]: 2026-01-31 08:41:54.550849429 +0000 UTC m=+0.349373281 container start f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:41:54 compute-0 podman[377813]: 2026-01-31 08:41:54.618767042 +0000 UTC m=+0.417290894 container attach f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:41:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:54.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 446 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 187 KiB/s rd, 1.9 MiB/s wr, 44 op/s
Jan 31 08:41:55 compute-0 youthful_torvalds[377830]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:41:55 compute-0 youthful_torvalds[377830]: --> relative data size: 1.0
Jan 31 08:41:55 compute-0 youthful_torvalds[377830]: --> All data devices are unavailable
Jan 31 08:41:55 compute-0 systemd[1]: libpod-f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a.scope: Deactivated successfully.
Jan 31 08:41:55 compute-0 podman[377813]: 2026-01-31 08:41:55.404024669 +0000 UTC m=+1.202548491 container died f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 31 08:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-173d88bb3c3f303d02c3ae069fd2e3030f0aa10d61f9a8dc2730f2d12c4528ea-merged.mount: Deactivated successfully.
Jan 31 08:41:56 compute-0 ceph-mon[74496]: pgmap v3271: 305 pgs: 305 active+clean; 446 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 187 KiB/s rd, 1.9 MiB/s wr, 44 op/s
Jan 31 08:41:56 compute-0 podman[377813]: 2026-01-31 08:41:56.062752898 +0000 UTC m=+1.861276710 container remove f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:41:56 compute-0 systemd[1]: libpod-conmon-f3519964137f34b3549ca0e88004e86d837aaaa097faab906a9e47d090ca690a.scope: Deactivated successfully.
Jan 31 08:41:56 compute-0 nova_compute[247704]: 2026-01-31 08:41:56.071 247708 DEBUG nova.network.neutron [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updated VIF entry in instance network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:41:56 compute-0 nova_compute[247704]: 2026-01-31 08:41:56.072 247708 DEBUG nova.network.neutron [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:41:56 compute-0 sudo[377709]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:56 compute-0 podman[377845]: 2026-01-31 08:41:56.127032452 +0000 UTC m=+0.688279529 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 08:41:56 compute-0 sudo[377876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:56 compute-0 sudo[377876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:56 compute-0 sudo[377876]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:56 compute-0 sudo[377901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:56 compute-0 sudo[377901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:56 compute-0 sudo[377901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:56 compute-0 sudo[377926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:56 compute-0 sudo[377926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:56 compute-0 sudo[377926]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:56 compute-0 nova_compute[247704]: 2026-01-31 08:41:56.276 247708 DEBUG oslo_concurrency.lockutils [req-a8bee915-49ff-4e83-9f48-062b7fb2173c req-2118ffbc-c378-4913-a246-78cdc14404c7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:41:56 compute-0 sudo[377952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:41:56 compute-0 sudo[377952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:41:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:56.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:41:56 compute-0 nova_compute[247704]: 2026-01-31 08:41:56.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:56 compute-0 nova_compute[247704]: 2026-01-31 08:41:56.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:41:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:41:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:56.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:41:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 31 08:41:56 compute-0 nova_compute[247704]: 2026-01-31 08:41:56.674 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:56 compute-0 podman[378019]: 2026-01-31 08:41:56.740074729 +0000 UTC m=+0.111488025 container create 0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:41:56 compute-0 podman[378019]: 2026-01-31 08:41:56.66205464 +0000 UTC m=+0.033467956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:41:56 compute-0 nova_compute[247704]: 2026-01-31 08:41:56.935 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:41:57 compute-0 systemd[1]: Started libpod-conmon-0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261.scope.
Jan 31 08:41:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:57 compute-0 podman[378019]: 2026-01-31 08:41:57.20369812 +0000 UTC m=+0.575111376 container init 0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_swirles, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:41:57 compute-0 podman[378019]: 2026-01-31 08:41:57.21478421 +0000 UTC m=+0.586197466 container start 0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 08:41:57 compute-0 xenodochial_swirles[378035]: 167 167
Jan 31 08:41:57 compute-0 systemd[1]: libpod-0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261.scope: Deactivated successfully.
Jan 31 08:41:57 compute-0 podman[378019]: 2026-01-31 08:41:57.239400398 +0000 UTC m=+0.610813654 container attach 0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_swirles, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:41:57 compute-0 podman[378019]: 2026-01-31 08:41:57.240001273 +0000 UTC m=+0.611414529 container died 0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:41:57 compute-0 sudo[378052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:57 compute-0 sudo[378052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:57 compute-0 sudo[378052]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce896d508ad6cd1f00793b9236084ff995a0574d0019f30f6fd070172713214b-merged.mount: Deactivated successfully.
Jan 31 08:41:57 compute-0 sudo[378078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:57 compute-0 sudo[378078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:57 compute-0 sudo[378078]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:57 compute-0 podman[378019]: 2026-01-31 08:41:57.69813904 +0000 UTC m=+1.069552326 container remove 0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_swirles, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:41:57 compute-0 systemd[1]: libpod-conmon-0da8a44711f0a7c20c5ce41d673c7153bd4b3bc83975267184fb82f6d4b03261.scope: Deactivated successfully.
Jan 31 08:41:57 compute-0 podman[378110]: 2026-01-31 08:41:57.843129018 +0000 UTC m=+0.028375152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:41:57 compute-0 podman[378110]: 2026-01-31 08:41:57.978678006 +0000 UTC m=+0.163924090 container create 6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:41:58 compute-0 systemd[1]: Started libpod-conmon-6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba.scope.
Jan 31 08:41:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/499cf274c5c0679546c96731628861dfd1e75cdcdc1b53d016c7cc041b4cacea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/499cf274c5c0679546c96731628861dfd1e75cdcdc1b53d016c7cc041b4cacea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/499cf274c5c0679546c96731628861dfd1e75cdcdc1b53d016c7cc041b4cacea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/499cf274c5c0679546c96731628861dfd1e75cdcdc1b53d016c7cc041b4cacea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:58 compute-0 podman[378110]: 2026-01-31 08:41:58.21230007 +0000 UTC m=+0.397546144 container init 6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 08:41:58 compute-0 podman[378110]: 2026-01-31 08:41:58.220335346 +0000 UTC m=+0.405581390 container start 6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:41:58 compute-0 podman[378110]: 2026-01-31 08:41:58.311441493 +0000 UTC m=+0.496687627 container attach 6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:41:58 compute-0 ceph-mon[74496]: pgmap v3272: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 31 08:41:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:58.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:41:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:41:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:58.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:41:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 456 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 227 KiB/s rd, 2.2 MiB/s wr, 52 op/s
Jan 31 08:41:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:59 compute-0 silly_dhawan[378127]: {
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:     "0": [
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:         {
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "devices": [
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "/dev/loop3"
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             ],
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "lv_name": "ceph_lv0",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "lv_size": "7511998464",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "name": "ceph_lv0",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "tags": {
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.cluster_name": "ceph",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.crush_device_class": "",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.encrypted": "0",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.osd_id": "0",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.type": "block",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:                 "ceph.vdo": "0"
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             },
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "type": "block",
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:             "vg_name": "ceph_vg0"
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:         }
Jan 31 08:41:59 compute-0 silly_dhawan[378127]:     ]
Jan 31 08:41:59 compute-0 silly_dhawan[378127]: }
Jan 31 08:41:59 compute-0 systemd[1]: libpod-6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba.scope: Deactivated successfully.
Jan 31 08:41:59 compute-0 podman[378110]: 2026-01-31 08:41:59.092201501 +0000 UTC m=+1.277447635 container died 6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:41:59 compute-0 nova_compute[247704]: 2026-01-31 08:41:59.113 247708 DEBUG nova.compute.manager [req-4ca54db4-5a1e-4ed6-9947-0c01285e1680 req-f2751b33-32a4-4e97-8dcc-14fac1a5a409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-changed-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:41:59 compute-0 nova_compute[247704]: 2026-01-31 08:41:59.114 247708 DEBUG nova.compute.manager [req-4ca54db4-5a1e-4ed6-9947-0c01285e1680 req-f2751b33-32a4-4e97-8dcc-14fac1a5a409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing instance network info cache due to event network-changed-637c7294-89aa-4b8f-8485-df3303e84675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:41:59 compute-0 nova_compute[247704]: 2026-01-31 08:41:59.115 247708 DEBUG oslo_concurrency.lockutils [req-4ca54db4-5a1e-4ed6-9947-0c01285e1680 req-f2751b33-32a4-4e97-8dcc-14fac1a5a409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:41:59 compute-0 nova_compute[247704]: 2026-01-31 08:41:59.115 247708 DEBUG oslo_concurrency.lockutils [req-4ca54db4-5a1e-4ed6-9947-0c01285e1680 req-f2751b33-32a4-4e97-8dcc-14fac1a5a409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:41:59 compute-0 nova_compute[247704]: 2026-01-31 08:41:59.115 247708 DEBUG nova.network.neutron [req-4ca54db4-5a1e-4ed6-9947-0c01285e1680 req-f2751b33-32a4-4e97-8dcc-14fac1a5a409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:41:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-499cf274c5c0679546c96731628861dfd1e75cdcdc1b53d016c7cc041b4cacea-merged.mount: Deactivated successfully.
Jan 31 08:41:59 compute-0 ceph-mon[74496]: pgmap v3273: 305 pgs: 305 active+clean; 456 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 227 KiB/s rd, 2.2 MiB/s wr, 52 op/s
Jan 31 08:41:59 compute-0 podman[378110]: 2026-01-31 08:41:59.805137578 +0000 UTC m=+1.990383682 container remove 6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:41:59 compute-0 systemd[1]: libpod-conmon-6ae9762e4376288b5bf8b9ab444fbfe45aa5661c86ea370154b9a8b7fff2b1ba.scope: Deactivated successfully.
Jan 31 08:41:59 compute-0 sudo[377952]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:59 compute-0 sudo[378151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:41:59 compute-0 sudo[378151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:59 compute-0 sudo[378151]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:59 compute-0 sudo[378176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:59 compute-0 sudo[378176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:59 compute-0 sudo[378176]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:00 compute-0 sudo[378201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:00 compute-0 sudo[378201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:00 compute-0 sudo[378201]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:00 compute-0 sudo[378226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:42:00 compute-0 sudo[378226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:00.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:00 compute-0 podman[378293]: 2026-01-31 08:42:00.482644474 +0000 UTC m=+0.056648240 container create 156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:42:00 compute-0 systemd[1]: Started libpod-conmon-156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586.scope.
Jan 31 08:42:00 compute-0 podman[378293]: 2026-01-31 08:42:00.448696448 +0000 UTC m=+0.022700234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:42:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:00 compute-0 podman[378293]: 2026-01-31 08:42:00.565230803 +0000 UTC m=+0.139234589 container init 156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 08:42:00 compute-0 podman[378293]: 2026-01-31 08:42:00.572287885 +0000 UTC m=+0.146291651 container start 156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:42:00 compute-0 podman[378293]: 2026-01-31 08:42:00.577249845 +0000 UTC m=+0.151253661 container attach 156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:42:00 compute-0 dreamy_swanson[378309]: 167 167
Jan 31 08:42:00 compute-0 systemd[1]: libpod-156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586.scope: Deactivated successfully.
Jan 31 08:42:00 compute-0 podman[378293]: 2026-01-31 08:42:00.580697459 +0000 UTC m=+0.154701225 container died 156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:42:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bcd2a85abe5c7e80319873b9e125a1976d43fe4148f42c663ba9e5046547fda-merged.mount: Deactivated successfully.
Jan 31 08:42:00 compute-0 podman[378293]: 2026-01-31 08:42:00.622218779 +0000 UTC m=+0.196222545 container remove 156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:42:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:42:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:00.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:42:00 compute-0 systemd[1]: libpod-conmon-156ea19e2e93e4db6453cee35518ea9d889df2aa9d8c61e7cc3c7423f3b90586.scope: Deactivated successfully.
Jan 31 08:42:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 276 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Jan 31 08:42:00 compute-0 podman[378335]: 2026-01-31 08:42:00.77719733 +0000 UTC m=+0.058528835 container create 6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:42:00 compute-0 systemd[1]: Started libpod-conmon-6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4.scope.
Jan 31 08:42:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b1c4013bc90bf8b9db4ba3507d3e1615ee0eadf793877d4d7f52123cf77c56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b1c4013bc90bf8b9db4ba3507d3e1615ee0eadf793877d4d7f52123cf77c56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b1c4013bc90bf8b9db4ba3507d3e1615ee0eadf793877d4d7f52123cf77c56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b1c4013bc90bf8b9db4ba3507d3e1615ee0eadf793877d4d7f52123cf77c56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:00 compute-0 podman[378335]: 2026-01-31 08:42:00.756986049 +0000 UTC m=+0.038317554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:42:00 compute-0 podman[378335]: 2026-01-31 08:42:00.860748974 +0000 UTC m=+0.142080569 container init 6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:42:00 compute-0 podman[378335]: 2026-01-31 08:42:00.871660809 +0000 UTC m=+0.152992354 container start 6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:42:00 compute-0 podman[378335]: 2026-01-31 08:42:00.875388969 +0000 UTC m=+0.156720474 container attach 6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:42:01 compute-0 nova_compute[247704]: 2026-01-31 08:42:01.676 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]: {
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]:         "osd_id": 0,
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]:         "type": "bluestore"
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]:     }
Jan 31 08:42:01 compute-0 amazing_keldysh[378352]: }
Jan 31 08:42:01 compute-0 systemd[1]: libpod-6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4.scope: Deactivated successfully.
Jan 31 08:42:01 compute-0 conmon[378352]: conmon 6717f24a4cade081cdd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4.scope/container/memory.events
Jan 31 08:42:01 compute-0 podman[378335]: 2026-01-31 08:42:01.837778117 +0000 UTC m=+1.119109642 container died 6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:42:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-69b1c4013bc90bf8b9db4ba3507d3e1615ee0eadf793877d4d7f52123cf77c56-merged.mount: Deactivated successfully.
Jan 31 08:42:01 compute-0 podman[378335]: 2026-01-31 08:42:01.906343535 +0000 UTC m=+1.187675040 container remove 6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:42:01 compute-0 systemd[1]: libpod-conmon-6717f24a4cade081cdd22460c24372dfba563a69c31c1f9b85b6b7db87f1e0c4.scope: Deactivated successfully.
Jan 31 08:42:01 compute-0 nova_compute[247704]: 2026-01-31 08:42:01.938 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:01 compute-0 sudo[378226]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:42:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:42:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:42:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:42:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0fd0fab0-888c-457f-8bba-30172481565e does not exist
Jan 31 08:42:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0f679f22-79ba-4914-868d-cd2b47341bf7 does not exist
Jan 31 08:42:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bf7e3fdf-c49b-4583-b59f-a50da2eb2a5b does not exist
Jan 31 08:42:02 compute-0 sudo[378385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:02 compute-0 sudo[378385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:02 compute-0 sudo[378385]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:02.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:02 compute-0 sudo[378411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:42:02 compute-0 sudo[378411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:02 compute-0 sudo[378411]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:02 compute-0 ceph-mon[74496]: pgmap v3274: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 276 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Jan 31 08:42:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:42:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:02.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 3.4 MiB/s wr, 74 op/s
Jan 31 08:42:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:42:03 compute-0 ceph-mon[74496]: pgmap v3275: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 3.4 MiB/s wr, 74 op/s
Jan 31 08:42:03 compute-0 nova_compute[247704]: 2026-01-31 08:42:03.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:03 compute-0 nova_compute[247704]: 2026-01-31 08:42:03.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:42:03 compute-0 nova_compute[247704]: 2026-01-31 08:42:03.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:42:03 compute-0 nova_compute[247704]: 2026-01-31 08:42:03.703 247708 DEBUG nova.network.neutron [req-4ca54db4-5a1e-4ed6-9947-0c01285e1680 req-f2751b33-32a4-4e97-8dcc-14fac1a5a409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updated VIF entry in instance network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:42:03 compute-0 nova_compute[247704]: 2026-01-31 08:42:03.703 247708 DEBUG nova.network.neutron [req-4ca54db4-5a1e-4ed6-9947-0c01285e1680 req-f2751b33-32a4-4e97-8dcc-14fac1a5a409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:42:03 compute-0 nova_compute[247704]: 2026-01-31 08:42:03.836 247708 DEBUG oslo_concurrency.lockutils [req-4ca54db4-5a1e-4ed6-9947-0c01285e1680 req-f2751b33-32a4-4e97-8dcc-14fac1a5a409 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:42:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:04 compute-0 nova_compute[247704]: 2026-01-31 08:42:04.273 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:42:04 compute-0 nova_compute[247704]: 2026-01-31 08:42:04.273 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:42:04 compute-0 nova_compute[247704]: 2026-01-31 08:42:04.273 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:42:04 compute-0 nova_compute[247704]: 2026-01-31 08:42:04.274 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:42:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:04.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:04.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 257 KiB/s rd, 3.4 MiB/s wr, 77 op/s
Jan 31 08:42:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2310673947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:42:06 compute-0 ceph-mon[74496]: pgmap v3276: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 257 KiB/s rd, 3.4 MiB/s wr, 77 op/s
Jan 31 08:42:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3250871770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:42:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/119435219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/471124134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:42:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:06.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:42:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:06.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Jan 31 08:42:06 compute-0 nova_compute[247704]: 2026-01-31 08:42:06.679 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:06 compute-0 nova_compute[247704]: 2026-01-31 08:42:06.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:06 compute-0 nova_compute[247704]: 2026-01-31 08:42:06.946 247708 DEBUG oslo_concurrency.lockutils [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:06 compute-0 nova_compute[247704]: 2026-01-31 08:42:06.948 247708 DEBUG oslo_concurrency.lockutils [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.072 247708 INFO nova.compute.manager [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Detaching volume f0bb0c0d-db6f-42f3-aad1-86e70cc695a6
Jan 31 08:42:07 compute-0 ceph-mon[74496]: pgmap v3277: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 2.0 MiB/s wr, 47 op/s
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.535 247708 INFO nova.virt.block_device [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Attempting to driver detach volume f0bb0c0d-db6f-42f3-aad1-86e70cc695a6 from mountpoint /dev/vdb
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.549 247708 DEBUG nova.virt.libvirt.driver [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Attempting to detach device vdb from instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.549 247708 DEBUG nova.virt.libvirt.guest [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-f0bb0c0d-db6f-42f3-aad1-86e70cc695a6">
Jan 31 08:42:07 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   </source>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <serial>f0bb0c0d-db6f-42f3-aad1-86e70cc695a6</serial>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]: </disk>
Jan 31 08:42:07 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.561 247708 INFO nova.virt.libvirt.driver [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdb from instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 from the persistent domain config.
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.561 247708 DEBUG nova.virt.libvirt.driver [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.562 247708 DEBUG nova.virt.libvirt.guest [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-f0bb0c0d-db6f-42f3-aad1-86e70cc695a6">
Jan 31 08:42:07 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   </source>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <serial>f0bb0c0d-db6f-42f3-aad1-86e70cc695a6</serial>
Jan 31 08:42:07 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:42:07 compute-0 nova_compute[247704]: </disk>
Jan 31 08:42:07 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.674 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769848927.6738605, 5f00cd9b-b5f3-4eb6-ab53-387687853c27 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.676 247708 DEBUG nova.virt.libvirt.driver [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:42:07 compute-0 nova_compute[247704]: 2026-01-31 08:42:07.680 247708 INFO nova.virt.libvirt.driver [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdb from instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 from the live domain config.
Jan 31 08:42:07 compute-0 podman[378440]: 2026-01-31 08:42:07.944289001 +0000 UTC m=+0.104591786 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.103 247708 DEBUG nova.objects.instance [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 5f00cd9b-b5f3-4eb6-ab53-387687853c27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.190 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.274 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.274 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.275 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.275 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.275 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.332 247708 DEBUG oslo_concurrency.lockutils [None req-bd7d56c8-6d87-4fb9-8201-7df2dce9cab9 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.382 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.383 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.383 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.383 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.384 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:08.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3452087438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/579147508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:08.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 506 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 31 08:42:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:42:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503004743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:08 compute-0 nova_compute[247704]: 2026-01-31 08:42:08.805 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.038 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.039 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.045 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.045 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.045 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.248 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.249 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3810MB free_disk=20.921566009521484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.250 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.250 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.458 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 9e883c68-083a-45ab-81fe-942de74e50ef actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.458 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.459 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.459 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:42:09 compute-0 nova_compute[247704]: 2026-01-31 08:42:09.552 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:09 compute-0 ceph-mon[74496]: pgmap v3278: 305 pgs: 305 active+clean; 506 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 31 08:42:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2503004743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:42:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186885530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:10 compute-0 nova_compute[247704]: 2026-01-31 08:42:10.020 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:10 compute-0 nova_compute[247704]: 2026-01-31 08:42:10.028 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:42:10 compute-0 nova_compute[247704]: 2026-01-31 08:42:10.222 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:42:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:10 compute-0 nova_compute[247704]: 2026-01-31 08:42:10.607 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:42:10 compute-0 nova_compute[247704]: 2026-01-31 08:42:10.608 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:10.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 83 KiB/s rd, 1.9 MiB/s wr, 69 op/s
Jan 31 08:42:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4186885530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:42:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2291142291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:42:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:42:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2291142291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:42:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:42:11.211 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:42:11.212 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:42:11.213 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:11 compute-0 nova_compute[247704]: 2026-01-31 08:42:11.681 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:11 compute-0 ceph-mon[74496]: pgmap v3279: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 83 KiB/s rd, 1.9 MiB/s wr, 69 op/s
Jan 31 08:42:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2291142291' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:42:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2291142291' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:42:11 compute-0 nova_compute[247704]: 2026-01-31 08:42:11.943 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:12.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 216 KiB/s wr, 33 op/s
Jan 31 08:42:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.025 247708 DEBUG oslo_concurrency.lockutils [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.026 247708 DEBUG oslo_concurrency.lockutils [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:14 compute-0 ceph-mon[74496]: pgmap v3280: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 216 KiB/s wr, 33 op/s
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.392 247708 INFO nova.compute.manager [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Detaching volume 273b2ec0-261d-40ed-887b-96124699257b
Jan 31 08:42:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:42:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:14.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.602 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.642 247708 INFO nova.virt.block_device [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Attempting to driver detach volume 273b2ec0-261d-40ed-887b-96124699257b from mountpoint /dev/vdc
Jan 31 08:42:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 390 KiB/s wr, 90 op/s
Jan 31 08:42:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:14.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.655 247708 DEBUG nova.virt.libvirt.driver [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Attempting to detach device vdc from instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.656 247708 DEBUG nova.virt.libvirt.guest [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-273b2ec0-261d-40ed-887b-96124699257b">
Jan 31 08:42:14 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   </source>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <target dev="vdc" bus="virtio"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <serial>273b2ec0-261d-40ed-887b-96124699257b</serial>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]: </disk>
Jan 31 08:42:14 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.667 247708 INFO nova.virt.libvirt.driver [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdc from instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 from the persistent domain config.
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.668 247708 DEBUG nova.virt.libvirt.driver [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.668 247708 DEBUG nova.virt.libvirt.guest [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-273b2ec0-261d-40ed-887b-96124699257b">
Jan 31 08:42:14 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   </source>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <target dev="vdc" bus="virtio"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <serial>273b2ec0-261d-40ed-887b-96124699257b</serial>
Jan 31 08:42:14 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 08:42:14 compute-0 nova_compute[247704]: </disk>
Jan 31 08:42:14 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.800 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769848934.800458, 5f00cd9b-b5f3-4eb6-ab53-387687853c27 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.802 247708 DEBUG nova.virt.libvirt.driver [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:42:14 compute-0 nova_compute[247704]: 2026-01-31 08:42:14.804 247708 INFO nova.virt.libvirt.driver [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully detached device vdc from instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 from the live domain config.
Jan 31 08:42:15 compute-0 nova_compute[247704]: 2026-01-31 08:42:15.578 247708 DEBUG nova.objects.instance [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'flavor' on Instance uuid 5f00cd9b-b5f3-4eb6-ab53-387687853c27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:42:15 compute-0 ceph-mon[74496]: pgmap v3281: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 390 KiB/s wr, 90 op/s
Jan 31 08:42:16 compute-0 nova_compute[247704]: 2026-01-31 08:42:16.194 247708 DEBUG oslo_concurrency.lockutils [None req-3cd3041a-81e4-44f1-a227-52868c71de6a cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:16.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:16 compute-0 nova_compute[247704]: 2026-01-31 08:42:16.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:16 compute-0 nova_compute[247704]: 2026-01-31 08:42:16.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:42:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 427 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 377 KiB/s wr, 119 op/s
Jan 31 08:42:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:16.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:16 compute-0 nova_compute[247704]: 2026-01-31 08:42:16.684 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:16 compute-0 nova_compute[247704]: 2026-01-31 08:42:16.945 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:17 compute-0 sudo[378519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:17 compute-0 sudo[378519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:17 compute-0 sudo[378519]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:17 compute-0 sudo[378544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:17 compute-0 sudo[378544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:17 compute-0 sudo[378544]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:17 compute-0 ceph-mon[74496]: pgmap v3282: 305 pgs: 305 active+clean; 427 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 377 KiB/s wr, 119 op/s
Jan 31 08:42:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:18.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 369 KiB/s wr, 134 op/s
Jan 31 08:42:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:18.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:20 compute-0 ceph-mon[74496]: pgmap v3283: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 369 KiB/s wr, 134 op/s
Jan 31 08:42:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1667139749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:42:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1667139749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:42:20
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.log', 'backups', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:42:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 199 KiB/s wr, 137 op/s
Jan 31 08:42:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:20.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:20 compute-0 nova_compute[247704]: 2026-01-31 08:42:20.678 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:20 compute-0 nova_compute[247704]: 2026-01-31 08:42:20.679 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:42:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:42:21 compute-0 nova_compute[247704]: 2026-01-31 08:42:21.182 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:42:21 compute-0 nova_compute[247704]: 2026-01-31 08:42:21.685 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:21 compute-0 nova_compute[247704]: 2026-01-31 08:42:21.947 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:22 compute-0 ceph-mon[74496]: pgmap v3284: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 199 KiB/s wr, 137 op/s
Jan 31 08:42:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 182 KiB/s wr, 112 op/s
Jan 31 08:42:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:22.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:23 compute-0 nova_compute[247704]: 2026-01-31 08:42:23.530 247708 DEBUG nova.compute.manager [req-0851517a-224c-4c2c-9298-f7fd6ec42122 req-94b202bf-e6a7-4481-a815-ad6edabd8f38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-changed-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:42:23 compute-0 nova_compute[247704]: 2026-01-31 08:42:23.530 247708 DEBUG nova.compute.manager [req-0851517a-224c-4c2c-9298-f7fd6ec42122 req-94b202bf-e6a7-4481-a815-ad6edabd8f38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing instance network info cache due to event network-changed-637c7294-89aa-4b8f-8485-df3303e84675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:42:23 compute-0 nova_compute[247704]: 2026-01-31 08:42:23.530 247708 DEBUG oslo_concurrency.lockutils [req-0851517a-224c-4c2c-9298-f7fd6ec42122 req-94b202bf-e6a7-4481-a815-ad6edabd8f38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:42:23 compute-0 nova_compute[247704]: 2026-01-31 08:42:23.530 247708 DEBUG oslo_concurrency.lockutils [req-0851517a-224c-4c2c-9298-f7fd6ec42122 req-94b202bf-e6a7-4481-a815-ad6edabd8f38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:42:23 compute-0 nova_compute[247704]: 2026-01-31 08:42:23.530 247708 DEBUG nova.network.neutron [req-0851517a-224c-4c2c-9298-f7fd6ec42122 req-94b202bf-e6a7-4481-a815-ad6edabd8f38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:42:23.795 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:42:23 compute-0 nova_compute[247704]: 2026-01-31 08:42:23.795 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:42:23.796 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:42:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:42:23.797 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:42:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:24 compute-0 ceph-mon[74496]: pgmap v3285: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 182 KiB/s wr, 112 op/s
Jan 31 08:42:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/445970851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:42:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 436 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 129 op/s
Jan 31 08:42:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:42:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:24.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:42:25 compute-0 ceph-mon[74496]: pgmap v3286: 305 pgs: 305 active+clean; 436 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 129 op/s
Jan 31 08:42:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 970 KiB/s rd, 2.0 MiB/s wr, 86 op/s
Jan 31 08:42:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:26.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:26 compute-0 nova_compute[247704]: 2026-01-31 08:42:26.689 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:26 compute-0 podman[378575]: 2026-01-31 08:42:26.89696389 +0000 UTC m=+0.068509878 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 08:42:26 compute-0 nova_compute[247704]: 2026-01-31 08:42:26.951 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:27 compute-0 nova_compute[247704]: 2026-01-31 08:42:27.958 247708 DEBUG nova.network.neutron [req-0851517a-224c-4c2c-9298-f7fd6ec42122 req-94b202bf-e6a7-4481-a815-ad6edabd8f38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updated VIF entry in instance network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:42:27 compute-0 nova_compute[247704]: 2026-01-31 08:42:27.959 247708 DEBUG nova.network.neutron [req-0851517a-224c-4c2c-9298-f7fd6ec42122 req-94b202bf-e6a7-4481-a815-ad6edabd8f38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:42:28 compute-0 ceph-mon[74496]: pgmap v3287: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 970 KiB/s rd, 2.0 MiB/s wr, 86 op/s
Jan 31 08:42:28 compute-0 nova_compute[247704]: 2026-01-31 08:42:28.242 247708 DEBUG oslo_concurrency.lockutils [req-0851517a-224c-4c2c-9298-f7fd6ec42122 req-94b202bf-e6a7-4481-a815-ad6edabd8f38 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:42:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:42:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:28.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:42:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 448 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 31 08:42:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:42:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:28.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:42:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:30 compute-0 ceph-mon[74496]: pgmap v3288: 305 pgs: 305 active+clean; 448 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 31 08:42:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:30.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 491 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 08:42:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:42:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:30.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:42:31 compute-0 nova_compute[247704]: 2026-01-31 08:42:31.354 247708 DEBUG nova.compute.manager [req-b01e867f-53aa-49ee-abc3-909ec03c4f23 req-d66ed11e-9980-4adb-9c82-c8f98e506d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-changed-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:42:31 compute-0 nova_compute[247704]: 2026-01-31 08:42:31.354 247708 DEBUG nova.compute.manager [req-b01e867f-53aa-49ee-abc3-909ec03c4f23 req-d66ed11e-9980-4adb-9c82-c8f98e506d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing instance network info cache due to event network-changed-637c7294-89aa-4b8f-8485-df3303e84675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:42:31 compute-0 nova_compute[247704]: 2026-01-31 08:42:31.354 247708 DEBUG oslo_concurrency.lockutils [req-b01e867f-53aa-49ee-abc3-909ec03c4f23 req-d66ed11e-9980-4adb-9c82-c8f98e506d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:42:31 compute-0 nova_compute[247704]: 2026-01-31 08:42:31.355 247708 DEBUG oslo_concurrency.lockutils [req-b01e867f-53aa-49ee-abc3-909ec03c4f23 req-d66ed11e-9980-4adb-9c82-c8f98e506d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:42:31 compute-0 nova_compute[247704]: 2026-01-31 08:42:31.355 247708 DEBUG nova.network.neutron [req-b01e867f-53aa-49ee-abc3-909ec03c4f23 req-d66ed11e-9980-4adb-9c82-c8f98e506d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Refreshing network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:42:31 compute-0 nova_compute[247704]: 2026-01-31 08:42:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:31 compute-0 nova_compute[247704]: 2026-01-31 08:42:31.691 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:31 compute-0 nova_compute[247704]: 2026-01-31 08:42:31.954 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:32 compute-0 ceph-mon[74496]: pgmap v3289: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 491 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 08:42:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:32.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 487 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 31 08:42:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:32.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:33 compute-0 ovn_controller[149457]: 2026-01-31T08:42:33Z|00804|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:42:33 compute-0 nova_compute[247704]: 2026-01-31 08:42:33.083 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:33 compute-0 ovn_controller[149457]: 2026-01-31T08:42:33Z|00805|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:42:33 compute-0 nova_compute[247704]: 2026-01-31 08:42:33.136 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:33 compute-0 ceph-mon[74496]: pgmap v3290: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 487 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 31 08:42:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:33 compute-0 nova_compute[247704]: 2026-01-31 08:42:33.969 247708 DEBUG nova.network.neutron [req-b01e867f-53aa-49ee-abc3-909ec03c4f23 req-d66ed11e-9980-4adb-9c82-c8f98e506d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updated VIF entry in instance network info cache for port 637c7294-89aa-4b8f-8485-df3303e84675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:42:33 compute-0 nova_compute[247704]: 2026-01-31 08:42:33.970 247708 DEBUG nova.network.neutron [req-b01e867f-53aa-49ee-abc3-909ec03c4f23 req-d66ed11e-9980-4adb-9c82-c8f98e506d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [{"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:42:34 compute-0 nova_compute[247704]: 2026-01-31 08:42:34.208 247708 DEBUG oslo_concurrency.lockutils [req-b01e867f-53aa-49ee-abc3-909ec03c4f23 req-d66ed11e-9980-4adb-9c82-c8f98e506d2d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-5f00cd9b-b5f3-4eb6-ab53-387687853c27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:42:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:34.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 490 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 31 08:42:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:34.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:35 compute-0 ceph-mon[74496]: pgmap v3291: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 490 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002183588660897078 of space, bias 1.0, pg target 0.6550765982691235 quantized to 32 (current 32)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00867637522101247 of space, bias 1.0, pg target 2.602912566303741 quantized to 32 (current 32)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:42:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:42:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:42:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:36.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:42:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 459 KiB/s rd, 1.0 MiB/s wr, 55 op/s
Jan 31 08:42:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:36.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:36 compute-0 nova_compute[247704]: 2026-01-31 08:42:36.693 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:36 compute-0 nova_compute[247704]: 2026-01-31 08:42:36.956 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:37 compute-0 sudo[378602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:37 compute-0 sudo[378602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:37 compute-0 sudo[378602]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:37 compute-0 sudo[378627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:37 compute-0 sudo[378627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:37 compute-0 sudo[378627]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:37 compute-0 ceph-mon[74496]: pgmap v3292: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 459 KiB/s rd, 1.0 MiB/s wr, 55 op/s
Jan 31 08:42:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:38.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 157 KiB/s wr, 40 op/s
Jan 31 08:42:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:38.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:38 compute-0 podman[378653]: 2026-01-31 08:42:38.943230544 +0000 UTC m=+0.106486372 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:42:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:40 compute-0 ceph-mon[74496]: pgmap v3293: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 157 KiB/s wr, 40 op/s
Jan 31 08:42:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:40.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 243 KiB/s rd, 122 KiB/s wr, 35 op/s
Jan 31 08:42:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:40.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:40 compute-0 nova_compute[247704]: 2026-01-31 08:42:40.727 247708 DEBUG nova.compute.manager [req-95ad12a7-39bf-4463-beb9-b2283541522c req-1538bc6c-cb02-486f-b054-8bd8e9f0aa80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:42:40 compute-0 nova_compute[247704]: 2026-01-31 08:42:40.728 247708 DEBUG nova.compute.manager [req-95ad12a7-39bf-4463-beb9-b2283541522c req-1538bc6c-cb02-486f-b054-8bd8e9f0aa80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing instance network info cache due to event network-changed-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:42:40 compute-0 nova_compute[247704]: 2026-01-31 08:42:40.728 247708 DEBUG oslo_concurrency.lockutils [req-95ad12a7-39bf-4463-beb9-b2283541522c req-1538bc6c-cb02-486f-b054-8bd8e9f0aa80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:42:40 compute-0 nova_compute[247704]: 2026-01-31 08:42:40.729 247708 DEBUG oslo_concurrency.lockutils [req-95ad12a7-39bf-4463-beb9-b2283541522c req-1538bc6c-cb02-486f-b054-8bd8e9f0aa80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:42:40 compute-0 nova_compute[247704]: 2026-01-31 08:42:40.729 247708 DEBUG nova.network.neutron [req-95ad12a7-39bf-4463-beb9-b2283541522c req-1538bc6c-cb02-486f-b054-8bd8e9f0aa80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Refreshing network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:42:41 compute-0 nova_compute[247704]: 2026-01-31 08:42:41.696 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:41 compute-0 nova_compute[247704]: 2026-01-31 08:42:41.958 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:42 compute-0 ceph-mon[74496]: pgmap v3294: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 243 KiB/s rd, 122 KiB/s wr, 35 op/s
Jan 31 08:42:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:42.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 29 KiB/s wr, 4 op/s
Jan 31 08:42:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:42.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:44 compute-0 ceph-mon[74496]: pgmap v3295: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 29 KiB/s wr, 4 op/s
Jan 31 08:42:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:44.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:44 compute-0 nova_compute[247704]: 2026-01-31 08:42:44.602 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 29 KiB/s wr, 4 op/s
Jan 31 08:42:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:44.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:46 compute-0 ceph-mon[74496]: pgmap v3296: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 29 KiB/s wr, 4 op/s
Jan 31 08:42:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:46.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 852 B/s rd, 19 KiB/s wr, 1 op/s
Jan 31 08:42:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:46.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:46 compute-0 nova_compute[247704]: 2026-01-31 08:42:46.698 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:46 compute-0 nova_compute[247704]: 2026-01-31 08:42:46.961 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:48 compute-0 ceph-mon[74496]: pgmap v3297: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 852 B/s rd, 19 KiB/s wr, 1 op/s
Jan 31 08:42:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:48.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:42:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:48.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:50 compute-0 ceph-mon[74496]: pgmap v3298: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:42:50 compute-0 nova_compute[247704]: 2026-01-31 08:42:50.357 247708 DEBUG nova.network.neutron [req-95ad12a7-39bf-4463-beb9-b2283541522c req-1538bc6c-cb02-486f-b054-8bd8e9f0aa80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updated VIF entry in instance network info cache for port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:42:50 compute-0 nova_compute[247704]: 2026-01-31 08:42:50.358 247708 DEBUG nova.network.neutron [req-95ad12a7-39bf-4463-beb9-b2283541522c req-1538bc6c-cb02-486f-b054-8bd8e9f0aa80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [{"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:42:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:50.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:50 compute-0 nova_compute[247704]: 2026-01-31 08:42:50.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:42:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:50.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:50 compute-0 nova_compute[247704]: 2026-01-31 08:42:50.713 247708 DEBUG oslo_concurrency.lockutils [req-95ad12a7-39bf-4463-beb9-b2283541522c req-1538bc6c-cb02-486f-b054-8bd8e9f0aa80 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-9e883c68-083a-45ab-81fe-942de74e50ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:42:51 compute-0 nova_compute[247704]: 2026-01-31 08:42:51.700 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:51 compute-0 nova_compute[247704]: 2026-01-31 08:42:51.963 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:52 compute-0 ceph-mon[74496]: pgmap v3299: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:42:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:52.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:42:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:52.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:54 compute-0 ceph-mon[74496]: pgmap v3300: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:42:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4045926153' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:42:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4045926153' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:42:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:54.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:54 compute-0 nova_compute[247704]: 2026-01-31 08:42:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:42:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:54.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2214913132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:42:56 compute-0 ceph-mon[74496]: pgmap v3301: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:42:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3260758072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:42:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:42:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:42:56 compute-0 nova_compute[247704]: 2026-01-31 08:42:56.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:56 compute-0 nova_compute[247704]: 2026-01-31 08:42:56.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:42:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 2 op/s
Jan 31 08:42:56 compute-0 nova_compute[247704]: 2026-01-31 08:42:56.702 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:56.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:56 compute-0 nova_compute[247704]: 2026-01-31 08:42:56.966 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:42:57 compute-0 sudo[378690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:57 compute-0 sudo[378690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:57 compute-0 sudo[378690]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:57 compute-0 sudo[378716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:42:57 compute-0 sudo[378716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:57 compute-0 sudo[378716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:57 compute-0 podman[378714]: 2026-01-31 08:42:57.860703629 +0000 UTC m=+0.047093027 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 08:42:58 compute-0 ceph-mon[74496]: pgmap v3302: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 KiB/s rd, 2 op/s
Jan 31 08:42:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 2 op/s
Jan 31 08:42:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:42:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:42:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:58.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:42:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:59 compute-0 ceph-mon[74496]: pgmap v3303: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 2 op/s
Jan 31 08:43:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:00.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 31 08:43:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:00.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:01 compute-0 nova_compute[247704]: 2026-01-31 08:43:01.705 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:01 compute-0 ceph-mon[74496]: pgmap v3304: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 31 08:43:01 compute-0 nova_compute[247704]: 2026-01-31 08:43:01.969 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.276 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.277 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.277 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.278 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.278 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.279 247708 INFO nova.compute.manager [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Terminating instance
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.280 247708 DEBUG nova.compute.manager [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:43:02 compute-0 kernel: tap637c7294-89 (unregistering): left promiscuous mode
Jan 31 08:43:02 compute-0 NetworkManager[49108]: <info>  [1769848982.3321] device (tap637c7294-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.339 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 ovn_controller[149457]: 2026-01-31T08:43:02Z|00806|binding|INFO|Releasing lport 637c7294-89aa-4b8f-8485-df3303e84675 from this chassis (sb_readonly=0)
Jan 31 08:43:02 compute-0 ovn_controller[149457]: 2026-01-31T08:43:02Z|00807|binding|INFO|Setting lport 637c7294-89aa-4b8f-8485-df3303e84675 down in Southbound
Jan 31 08:43:02 compute-0 ovn_controller[149457]: 2026-01-31T08:43:02Z|00808|binding|INFO|Removing iface tap637c7294-89 ovn-installed in OVS
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.343 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.349 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Jan 31 08:43:02 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b9.scope: Consumed 23.474s CPU time.
Jan 31 08:43:02 compute-0 systemd-machined[214448]: Machine qemu-85-instance-000000b9 terminated.
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.431 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:6c:09 10.100.0.14'], port_security=['fa:16:3e:4b:6c:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5f00cd9b-b5f3-4eb6-ab53-387687853c27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-090376c2-ac34-46f0-acd4-344bb2bc1154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b38141686534a0fb9b947a7886cd4b6', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f827482b-eee1-43ff-a797-1ec84e5a6d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05d66413-db50-49eb-973e-490542297b8d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=637c7294-89aa-4b8f-8485-df3303e84675) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.433 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 637c7294-89aa-4b8f-8485-df3303e84675 in datapath 090376c2-ac34-46f0-acd4-344bb2bc1154 unbound from our chassis
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.434 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 090376c2-ac34-46f0-acd4-344bb2bc1154
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.452 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[98c49b79-3957-4e2c-ba88-226fa5cc9f2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:02.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.488 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[64aa0afe-12b8-4dc5-a5d2-1f5bc5c8ef94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.493 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1e5a6775-2557-45f5-994e-cee42ba70074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.502 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.507 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.521 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0418fb72-52eb-4fde-9ab9-9d92a509c303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.538 247708 INFO nova.virt.libvirt.driver [-] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Instance destroyed successfully.
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.538 247708 DEBUG nova.objects.instance [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'resources' on Instance uuid 5f00cd9b-b5f3-4eb6-ab53-387687853c27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.546 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1d54d91d-9dff-44b0-91c4-20d02a2e1884]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap090376c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:64:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897984, 'reachable_time': 30594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378784, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.563 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8d094780-a478-4f6c-af3c-82bb9923ad9e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap090376c2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 897994, 'tstamp': 897994}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378785, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap090376c2-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 897997, 'tstamp': 897997}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378785, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.565 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap090376c2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.600 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.606 247708 DEBUG nova.virt.libvirt.vif [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1174976220',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1174976220',id=185,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:39:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-kh6rav0r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:39:09Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=5f00cd9b-b5f3-4eb6-ab53-387687853c27,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.606 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap090376c2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.606 247708 DEBUG nova.network.os_vif_util [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "637c7294-89aa-4b8f-8485-df3303e84675", "address": "fa:16:3e:4b:6c:09", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap637c7294-89", "ovs_interfaceid": "637c7294-89aa-4b8f-8485-df3303e84675", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.607 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.607 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap090376c2-a0, col_values=(('external_ids', {'iface-id': 'c8a7eefb-b644-411b-b95a-f875570edfa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.607 247708 DEBUG nova.network.os_vif_util [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:6c:09,bridge_name='br-int',has_traffic_filtering=True,id=637c7294-89aa-4b8f-8485-df3303e84675,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637c7294-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:43:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:02.607 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.607 247708 DEBUG os_vif [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:6c:09,bridge_name='br-int',has_traffic_filtering=True,id=637c7294-89aa-4b8f-8485-df3303e84675,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637c7294-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.609 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.609 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap637c7294-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.610 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.612 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.613 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.615 247708 INFO os_vif [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:6c:09,bridge_name='br-int',has_traffic_filtering=True,id=637c7294-89aa-4b8f-8485-df3303e84675,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap637c7294-89')
Jan 31 08:43:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 31 08:43:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:02.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:02 compute-0 sudo[378805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:02 compute-0 sudo[378805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:02 compute-0 sudo[378805]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:02 compute-0 sudo[378830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:02 compute-0 sudo[378830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:02 compute-0 sudo[378830]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.828 247708 INFO nova.virt.libvirt.driver [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Deleting instance files /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27_del
Jan 31 08:43:02 compute-0 nova_compute[247704]: 2026-01-31 08:43:02.830 247708 INFO nova.virt.libvirt.driver [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Deletion of /var/lib/nova/instances/5f00cd9b-b5f3-4eb6-ab53-387687853c27_del complete
Jan 31 08:43:02 compute-0 sudo[378855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:02 compute-0 sudo[378855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:02 compute-0 sudo[378855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:02 compute-0 sudo[378881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:43:02 compute-0 sudo[378881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:03 compute-0 sudo[378881]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 08:43:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:43:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:43:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:43:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 786d7079-4432-46b4-bb40-b9d6ad4fbb2c does not exist
Jan 31 08:43:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4a0a8728-1bdb-43c9-8b60-12279e16d157 does not exist
Jan 31 08:43:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d7ebc31c-7646-4ff5-b6c6-bf5d3380f1ed does not exist
Jan 31 08:43:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:43:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:43:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:43:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:43:03 compute-0 sudo[378937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:03 compute-0 sudo[378937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:03 compute-0 sudo[378937]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:03 compute-0 sudo[378962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:03 compute-0 sudo[378962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:03 compute-0 sudo[378962]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:03 compute-0 nova_compute[247704]: 2026-01-31 08:43:03.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:03 compute-0 nova_compute[247704]: 2026-01-31 08:43:03.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:43:03 compute-0 nova_compute[247704]: 2026-01-31 08:43:03.570 247708 INFO nova.compute.manager [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Took 1.29 seconds to destroy the instance on the hypervisor.
Jan 31 08:43:03 compute-0 nova_compute[247704]: 2026-01-31 08:43:03.571 247708 DEBUG oslo.service.loopingcall [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:43:03 compute-0 nova_compute[247704]: 2026-01-31 08:43:03.571 247708 DEBUG nova.compute.manager [-] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:43:03 compute-0 nova_compute[247704]: 2026-01-31 08:43:03.572 247708 DEBUG nova.network.neutron [-] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:43:03 compute-0 sudo[378987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:03 compute-0 sudo[378987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:03 compute-0 sudo[378987]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:03 compute-0 sudo[379012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:43:03 compute-0 sudo[379012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:03 compute-0 ceph-mon[74496]: pgmap v3305: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 31 08:43:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:43:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:04 compute-0 podman[379078]: 2026-01-31 08:43:04.00323515 +0000 UTC m=+0.054781954 container create 1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rhodes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.021 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.021 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:43:04 compute-0 systemd[1]: Started libpod-conmon-1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281.scope.
Jan 31 08:43:04 compute-0 podman[379078]: 2026-01-31 08:43:03.976138561 +0000 UTC m=+0.027685335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:43:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:04 compute-0 podman[379078]: 2026-01-31 08:43:04.096582522 +0000 UTC m=+0.148129316 container init 1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:43:04 compute-0 podman[379078]: 2026-01-31 08:43:04.10350066 +0000 UTC m=+0.155047424 container start 1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:43:04 compute-0 podman[379078]: 2026-01-31 08:43:04.107377134 +0000 UTC m=+0.158923988 container attach 1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 08:43:04 compute-0 fervent_rhodes[379094]: 167 167
Jan 31 08:43:04 compute-0 systemd[1]: libpod-1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281.scope: Deactivated successfully.
Jan 31 08:43:04 compute-0 conmon[379094]: conmon 1335be779585965062ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281.scope/container/memory.events
Jan 31 08:43:04 compute-0 podman[379078]: 2026-01-31 08:43:04.112088139 +0000 UTC m=+0.163634943 container died 1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.119 247708 DEBUG nova.compute.manager [req-04133040-3634-42e7-9443-81d30e31b21d req-13963ce0-b898-4db0-8ee9-03ac3eece3fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-vif-unplugged-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.120 247708 DEBUG oslo_concurrency.lockutils [req-04133040-3634-42e7-9443-81d30e31b21d req-13963ce0-b898-4db0-8ee9-03ac3eece3fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.120 247708 DEBUG oslo_concurrency.lockutils [req-04133040-3634-42e7-9443-81d30e31b21d req-13963ce0-b898-4db0-8ee9-03ac3eece3fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.120 247708 DEBUG oslo_concurrency.lockutils [req-04133040-3634-42e7-9443-81d30e31b21d req-13963ce0-b898-4db0-8ee9-03ac3eece3fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.120 247708 DEBUG nova.compute.manager [req-04133040-3634-42e7-9443-81d30e31b21d req-13963ce0-b898-4db0-8ee9-03ac3eece3fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] No waiting events found dispatching network-vif-unplugged-637c7294-89aa-4b8f-8485-df3303e84675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.121 247708 DEBUG nova.compute.manager [req-04133040-3634-42e7-9443-81d30e31b21d req-13963ce0-b898-4db0-8ee9-03ac3eece3fc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-vif-unplugged-637c7294-89aa-4b8f-8485-df3303e84675 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e774506690a26e30e06912238a8b699ab7320eda4143e14248eff19da79d734-merged.mount: Deactivated successfully.
Jan 31 08:43:04 compute-0 podman[379078]: 2026-01-31 08:43:04.164565336 +0000 UTC m=+0.216112130 container remove 1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:43:04 compute-0 systemd[1]: libpod-conmon-1335be779585965062ed68fd7879e36dc4027511143cee5472b4155a7477c281.scope: Deactivated successfully.
Jan 31 08:43:04 compute-0 podman[379117]: 2026-01-31 08:43:04.308442817 +0000 UTC m=+0.049203408 container create b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:43:04 compute-0 systemd[1]: Started libpod-conmon-b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18.scope.
Jan 31 08:43:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21288515c64e2a911066df9758bb6b2558533db697291f4d0dcbd16317545cc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21288515c64e2a911066df9758bb6b2558533db697291f4d0dcbd16317545cc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21288515c64e2a911066df9758bb6b2558533db697291f4d0dcbd16317545cc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21288515c64e2a911066df9758bb6b2558533db697291f4d0dcbd16317545cc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21288515c64e2a911066df9758bb6b2558533db697291f4d0dcbd16317545cc2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:04 compute-0 podman[379117]: 2026-01-31 08:43:04.28556067 +0000 UTC m=+0.026321311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:43:04 compute-0 podman[379117]: 2026-01-31 08:43:04.393780133 +0000 UTC m=+0.134540774 container init b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:43:04 compute-0 podman[379117]: 2026-01-31 08:43:04.405992751 +0000 UTC m=+0.146753362 container start b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_engelbart, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:43:04 compute-0 podman[379117]: 2026-01-31 08:43:04.411196147 +0000 UTC m=+0.151956758 container attach b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_engelbart, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:43:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:04.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:04 compute-0 nova_compute[247704]: 2026-01-31 08:43:04.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 82 op/s
Jan 31 08:43:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:04.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:05 compute-0 cranky_engelbart[379135]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:43:05 compute-0 cranky_engelbart[379135]: --> relative data size: 1.0
Jan 31 08:43:05 compute-0 cranky_engelbart[379135]: --> All data devices are unavailable
Jan 31 08:43:05 compute-0 systemd[1]: libpod-b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18.scope: Deactivated successfully.
Jan 31 08:43:05 compute-0 podman[379117]: 2026-01-31 08:43:05.273796866 +0000 UTC m=+1.014557467 container died b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-21288515c64e2a911066df9758bb6b2558533db697291f4d0dcbd16317545cc2-merged.mount: Deactivated successfully.
Jan 31 08:43:05 compute-0 podman[379117]: 2026-01-31 08:43:05.329138992 +0000 UTC m=+1.069899613 container remove b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:43:05 compute-0 systemd[1]: libpod-conmon-b7c2d2606d481190d42444cfb980d7db83bb135c5a2de824e9440b88f637ba18.scope: Deactivated successfully.
Jan 31 08:43:05 compute-0 sudo[379012]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:05 compute-0 sudo[379162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:05 compute-0 sudo[379162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:05 compute-0 sudo[379162]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:05 compute-0 sudo[379187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:05 compute-0 sudo[379187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:05 compute-0 sudo[379187]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:05 compute-0 sudo[379212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:05 compute-0 sudo[379212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:05 compute-0 sudo[379212]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:05 compute-0 nova_compute[247704]: 2026-01-31 08:43:05.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:05 compute-0 nova_compute[247704]: 2026-01-31 08:43:05.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:05 compute-0 sudo[379237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:43:05 compute-0 sudo[379237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:05 compute-0 ceph-mon[74496]: pgmap v3306: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 82 op/s
Jan 31 08:43:05 compute-0 podman[379301]: 2026-01-31 08:43:05.900624088 +0000 UTC m=+0.041840180 container create 208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:43:05 compute-0 systemd[1]: Started libpod-conmon-208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980.scope.
Jan 31 08:43:05 compute-0 podman[379301]: 2026-01-31 08:43:05.881693858 +0000 UTC m=+0.022909950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:43:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:06 compute-0 podman[379301]: 2026-01-31 08:43:06.006639388 +0000 UTC m=+0.147855470 container init 208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:43:06 compute-0 podman[379301]: 2026-01-31 08:43:06.013534985 +0000 UTC m=+0.154751067 container start 208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:43:06 compute-0 podman[379301]: 2026-01-31 08:43:06.016524608 +0000 UTC m=+0.157740720 container attach 208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:43:06 compute-0 lucid_pascal[379317]: 167 167
Jan 31 08:43:06 compute-0 systemd[1]: libpod-208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980.scope: Deactivated successfully.
Jan 31 08:43:06 compute-0 podman[379301]: 2026-01-31 08:43:06.019689235 +0000 UTC m=+0.160905337 container died 208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa4c236ce5fea17e30ae616b498c159722e8600a5b3e3ef8e872c2a4e04193f3-merged.mount: Deactivated successfully.
Jan 31 08:43:06 compute-0 podman[379301]: 2026-01-31 08:43:06.063681626 +0000 UTC m=+0.204897708 container remove 208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pascal, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:43:06 compute-0 systemd[1]: libpod-conmon-208568f556d148c7a9ed83ebc83b01a0bdb63f857c663af6617784301ee99980.scope: Deactivated successfully.
Jan 31 08:43:06 compute-0 podman[379341]: 2026-01-31 08:43:06.218853912 +0000 UTC m=+0.048062312 container create ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:43:06 compute-0 systemd[1]: Started libpod-conmon-ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8.scope.
Jan 31 08:43:06 compute-0 podman[379341]: 2026-01-31 08:43:06.201440307 +0000 UTC m=+0.030648727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:43:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775981fb7be21d02a983ba82763f39e3bac2df5444b235203fc3f4799262d02f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775981fb7be21d02a983ba82763f39e3bac2df5444b235203fc3f4799262d02f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775981fb7be21d02a983ba82763f39e3bac2df5444b235203fc3f4799262d02f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/775981fb7be21d02a983ba82763f39e3bac2df5444b235203fc3f4799262d02f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:06 compute-0 podman[379341]: 2026-01-31 08:43:06.339016925 +0000 UTC m=+0.168225395 container init ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 08:43:06 compute-0 podman[379341]: 2026-01-31 08:43:06.346173309 +0000 UTC m=+0.175381709 container start ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:43:06 compute-0 podman[379341]: 2026-01-31 08:43:06.350291009 +0000 UTC m=+0.179499499 container attach ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.409 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.411 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.412 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.412 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.413 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:06.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.586 247708 DEBUG nova.compute.manager [req-18b85c90-83e7-457a-9062-67f41584a275 req-0865e4b9-d329-4ade-9b23-4400d0b69bea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.587 247708 DEBUG oslo_concurrency.lockutils [req-18b85c90-83e7-457a-9062-67f41584a275 req-0865e4b9-d329-4ade-9b23-4400d0b69bea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.587 247708 DEBUG oslo_concurrency.lockutils [req-18b85c90-83e7-457a-9062-67f41584a275 req-0865e4b9-d329-4ade-9b23-4400d0b69bea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.587 247708 DEBUG oslo_concurrency.lockutils [req-18b85c90-83e7-457a-9062-67f41584a275 req-0865e4b9-d329-4ade-9b23-4400d0b69bea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.588 247708 DEBUG nova.compute.manager [req-18b85c90-83e7-457a-9062-67f41584a275 req-0865e4b9-d329-4ade-9b23-4400d0b69bea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] No waiting events found dispatching network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.588 247708 WARNING nova.compute.manager [req-18b85c90-83e7-457a-9062-67f41584a275 req-0865e4b9-d329-4ade-9b23-4400d0b69bea 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received unexpected event network-vif-plugged-637c7294-89aa-4b8f-8485-df3303e84675 for instance with vm_state active and task_state deleting.
Jan 31 08:43:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 84 op/s
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.707 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:06.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:43:06 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758710956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:06 compute-0 nova_compute[247704]: 2026-01-31 08:43:06.882 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]: {
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:     "0": [
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:         {
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "devices": [
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "/dev/loop3"
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             ],
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "lv_name": "ceph_lv0",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "lv_size": "7511998464",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "name": "ceph_lv0",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "tags": {
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.cluster_name": "ceph",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.crush_device_class": "",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.encrypted": "0",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.osd_id": "0",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.type": "block",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:                 "ceph.vdo": "0"
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             },
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "type": "block",
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:             "vg_name": "ceph_vg0"
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:         }
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]:     ]
Jan 31 08:43:07 compute-0 interesting_bhabha[379358]: }
Jan 31 08:43:07 compute-0 systemd[1]: libpod-ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8.scope: Deactivated successfully.
Jan 31 08:43:07 compute-0 podman[379341]: 2026-01-31 08:43:07.114081944 +0000 UTC m=+0.943290344 container died ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-775981fb7be21d02a983ba82763f39e3bac2df5444b235203fc3f4799262d02f-merged.mount: Deactivated successfully.
Jan 31 08:43:07 compute-0 podman[379341]: 2026-01-31 08:43:07.167277348 +0000 UTC m=+0.996485748 container remove ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:43:07 compute-0 systemd[1]: libpod-conmon-ba3256cf7b1865ec8349d5bc744815181506ce5bdbca6a6d2384f6b3998656f8.scope: Deactivated successfully.
Jan 31 08:43:07 compute-0 sudo[379237]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:07 compute-0 sudo[379405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:07 compute-0 sudo[379405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:07 compute-0 sudo[379405]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:07 compute-0 sudo[379430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:07 compute-0 sudo[379430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:07 compute-0 sudo[379430]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:07 compute-0 sudo[379455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:07 compute-0 sudo[379455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:07 compute-0 sudo[379455]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:07 compute-0 sudo[379480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:43:07 compute-0 sudo[379480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.413 247708 DEBUG nova.compute.manager [req-355aa029-fe1c-4a22-90da-9a570a24bfbe req-7b5f5854-8cb2-4f10-b2b2-11a157ef537e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Received event network-vif-deleted-637c7294-89aa-4b8f-8485-df3303e84675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.413 247708 INFO nova.compute.manager [req-355aa029-fe1c-4a22-90da-9a570a24bfbe req-7b5f5854-8cb2-4f10-b2b2-11a157ef537e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Neutron deleted interface 637c7294-89aa-4b8f-8485-df3303e84675; detaching it from the instance and deleting it from the info cache
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.413 247708 DEBUG nova.network.neutron [req-355aa029-fe1c-4a22-90da-9a570a24bfbe req-7b5f5854-8cb2-4f10-b2b2-11a157ef537e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.566 247708 DEBUG nova.network.neutron [-] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:07 compute-0 podman[379542]: 2026-01-31 08:43:07.701310592 +0000 UTC m=+0.035666538 container create 12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:43:07 compute-0 systemd[1]: Started libpod-conmon-12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c.scope.
Jan 31 08:43:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.779 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.780 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:43:07 compute-0 podman[379542]: 2026-01-31 08:43:07.68639846 +0000 UTC m=+0.020754436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:43:07 compute-0 podman[379542]: 2026-01-31 08:43:07.788852633 +0000 UTC m=+0.123208599 container init 12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 08:43:07 compute-0 podman[379542]: 2026-01-31 08:43:07.794987552 +0000 UTC m=+0.129343498 container start 12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:43:07 compute-0 podman[379542]: 2026-01-31 08:43:07.798960568 +0000 UTC m=+0.133316514 container attach 12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 08:43:07 compute-0 flamboyant_nobel[379558]: 167 167
Jan 31 08:43:07 compute-0 systemd[1]: libpod-12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c.scope: Deactivated successfully.
Jan 31 08:43:07 compute-0 conmon[379558]: conmon 12dcb9c1407758deced7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c.scope/container/memory.events
Jan 31 08:43:07 compute-0 podman[379542]: 2026-01-31 08:43:07.803211102 +0000 UTC m=+0.137567058 container died 12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 08:43:07 compute-0 ceph-mon[74496]: pgmap v3307: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 84 op/s
Jan 31 08:43:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2758710956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3970953333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-88a282336b479bf0cd1fdd4c29e59c9248bb3342e743865b6e2f675535b544eb-merged.mount: Deactivated successfully.
Jan 31 08:43:07 compute-0 podman[379542]: 2026-01-31 08:43:07.845927912 +0000 UTC m=+0.180283858 container remove 12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 08:43:07 compute-0 systemd[1]: libpod-conmon-12dcb9c1407758deced75d6abf4b86d4f6ec4b38cbccc7908fdd56750104a24c.scope: Deactivated successfully.
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.945 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.947 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3928MB free_disk=20.942401885986328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.947 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:07 compute-0 nova_compute[247704]: 2026-01-31 08:43:07.947 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:07 compute-0 podman[379581]: 2026-01-31 08:43:07.992125059 +0000 UTC m=+0.037583976 container create 879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:43:08 compute-0 systemd[1]: Started libpod-conmon-879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224.scope.
Jan 31 08:43:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf342269f929b83aa88ac63d410e231c764a191ad31456783b86d908f994de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf342269f929b83aa88ac63d410e231c764a191ad31456783b86d908f994de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf342269f929b83aa88ac63d410e231c764a191ad31456783b86d908f994de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5cf342269f929b83aa88ac63d410e231c764a191ad31456783b86d908f994de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:08 compute-0 podman[379581]: 2026-01-31 08:43:07.97576226 +0000 UTC m=+0.021221197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:43:08 compute-0 podman[379581]: 2026-01-31 08:43:08.080635182 +0000 UTC m=+0.126094119 container init 879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:43:08 compute-0 podman[379581]: 2026-01-31 08:43:08.086125416 +0000 UTC m=+0.131584343 container start 879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:43:08 compute-0 podman[379581]: 2026-01-31 08:43:08.089623461 +0000 UTC m=+0.135082378 container attach 879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:43:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:08.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:08 compute-0 nova_compute[247704]: 2026-01-31 08:43:08.568 247708 INFO nova.compute.manager [-] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Took 5.00 seconds to deallocate network for instance.
Jan 31 08:43:08 compute-0 nova_compute[247704]: 2026-01-31 08:43:08.575 247708 DEBUG nova.compute.manager [req-355aa029-fe1c-4a22-90da-9a570a24bfbe req-7b5f5854-8cb2-4f10-b2b2-11a157ef537e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Detach interface failed, port_id=637c7294-89aa-4b8f-8485-df3303e84675, reason: Instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:43:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 682 B/s wr, 87 op/s
Jan 31 08:43:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:08.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:08 compute-0 nova_compute[247704]: 2026-01-31 08:43:08.725 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 9e883c68-083a-45ab-81fe-942de74e50ef actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:43:08 compute-0 nova_compute[247704]: 2026-01-31 08:43:08.725 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:43:08 compute-0 nova_compute[247704]: 2026-01-31 08:43:08.726 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:43:08 compute-0 nova_compute[247704]: 2026-01-31 08:43:08.726 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:43:08 compute-0 nova_compute[247704]: 2026-01-31 08:43:08.872 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:08 compute-0 eager_bohr[379597]: {
Jan 31 08:43:08 compute-0 eager_bohr[379597]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:43:08 compute-0 eager_bohr[379597]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:43:08 compute-0 eager_bohr[379597]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:43:08 compute-0 eager_bohr[379597]:         "osd_id": 0,
Jan 31 08:43:08 compute-0 eager_bohr[379597]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:43:08 compute-0 eager_bohr[379597]:         "type": "bluestore"
Jan 31 08:43:08 compute-0 eager_bohr[379597]:     }
Jan 31 08:43:08 compute-0 eager_bohr[379597]: }
Jan 31 08:43:08 compute-0 systemd[1]: libpod-879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224.scope: Deactivated successfully.
Jan 31 08:43:08 compute-0 podman[379581]: 2026-01-31 08:43:08.922682631 +0000 UTC m=+0.968141548 container died 879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5cf342269f929b83aa88ac63d410e231c764a191ad31456783b86d908f994de-merged.mount: Deactivated successfully.
Jan 31 08:43:08 compute-0 podman[379581]: 2026-01-31 08:43:08.982724663 +0000 UTC m=+1.028183590 container remove 879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:43:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:08 compute-0 systemd[1]: libpod-conmon-879f40c4387762454c57b3d52e1a811eeb02759428af298d921da97cf844e224.scope: Deactivated successfully.
Jan 31 08:43:09 compute-0 sudo[379480]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:43:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:43:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 71e7578e-ea7e-471d-a90c-dbc28dc831fd does not exist
Jan 31 08:43:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 92ebdb33-4d4e-4dc1-9bfd-189fadf079fd does not exist
Jan 31 08:43:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 519ffd4e-5dfb-408a-97a5-46c9df426a62 does not exist
Jan 31 08:43:09 compute-0 podman[379632]: 2026-01-31 08:43:09.098375686 +0000 UTC m=+0.114367664 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 08:43:09 compute-0 sudo[379673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:09 compute-0 sudo[379673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:09 compute-0 sudo[379673]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:09 compute-0 nova_compute[247704]: 2026-01-31 08:43:09.120 247708 INFO nova.compute.manager [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Took 0.55 seconds to detach 1 volumes for instance.
Jan 31 08:43:09 compute-0 sudo[379701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:43:09 compute-0 sudo[379701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:09 compute-0 sudo[379701]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:43:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257022816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:09 compute-0 nova_compute[247704]: 2026-01-31 08:43:09.348 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:09 compute-0 nova_compute[247704]: 2026-01-31 08:43:09.356 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:43:09 compute-0 nova_compute[247704]: 2026-01-31 08:43:09.768 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:43:09 compute-0 ceph-mon[74496]: pgmap v3308: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 682 B/s wr, 87 op/s
Jan 31 08:43:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1569233818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:09 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:43:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2257022816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:10 compute-0 nova_compute[247704]: 2026-01-31 08:43:10.082 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:10.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:10 compute-0 nova_compute[247704]: 2026-01-31 08:43:10.550 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:43:10 compute-0 nova_compute[247704]: 2026-01-31 08:43:10.551 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:10 compute-0 nova_compute[247704]: 2026-01-31 08:43:10.551 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:10 compute-0 nova_compute[247704]: 2026-01-31 08:43:10.622 247708 DEBUG oslo_concurrency.processutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 123 op/s
Jan 31 08:43:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:10.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:43:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2965992183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:11 compute-0 nova_compute[247704]: 2026-01-31 08:43:11.061 247708 DEBUG oslo_concurrency.processutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:11 compute-0 nova_compute[247704]: 2026-01-31 08:43:11.068 247708 DEBUG nova.compute.provider_tree [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:43:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:11.212 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:11.213 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:11.213 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:11 compute-0 nova_compute[247704]: 2026-01-31 08:43:11.412 247708 DEBUG nova.scheduler.client.report [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:43:11 compute-0 nova_compute[247704]: 2026-01-31 08:43:11.628 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:11 compute-0 nova_compute[247704]: 2026-01-31 08:43:11.710 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:11 compute-0 nova_compute[247704]: 2026-01-31 08:43:11.820 247708 INFO nova.scheduler.client.report [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Deleted allocations for instance 5f00cd9b-b5f3-4eb6-ab53-387687853c27
Jan 31 08:43:11 compute-0 ceph-mon[74496]: pgmap v3309: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 123 op/s
Jan 31 08:43:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2965992183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1324190554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:12 compute-0 nova_compute[247704]: 2026-01-31 08:43:12.613 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 13 KiB/s wr, 55 op/s
Jan 31 08:43:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:12.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:12 compute-0 nova_compute[247704]: 2026-01-31 08:43:12.821 247708 DEBUG oslo_concurrency.lockutils [None req-e550cfb0-d99f-4ab0-9013-53a189d47d4f cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "5f00cd9b-b5f3-4eb6-ab53-387687853c27" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:13 compute-0 ceph-mon[74496]: pgmap v3310: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 13 KiB/s wr, 55 op/s
Jan 31 08:43:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3180732374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:14 compute-0 nova_compute[247704]: 2026-01-31 08:43:14.546 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 533 KiB/s rd, 23 KiB/s wr, 56 op/s
Jan 31 08:43:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:43:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:14.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.174 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.175 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.176 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.176 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.177 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.179 247708 INFO nova.compute.manager [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Terminating instance
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.182 247708 DEBUG nova.compute.manager [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:43:15 compute-0 kernel: tapa9f7b34e-c5 (unregistering): left promiscuous mode
Jan 31 08:43:15 compute-0 NetworkManager[49108]: <info>  [1769848995.3257] device (tapa9f7b34e-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:43:15 compute-0 ovn_controller[149457]: 2026-01-31T08:43:15Z|00809|binding|INFO|Releasing lport a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f from this chassis (sb_readonly=0)
Jan 31 08:43:15 compute-0 ovn_controller[149457]: 2026-01-31T08:43:15Z|00810|binding|INFO|Setting lport a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f down in Southbound
Jan 31 08:43:15 compute-0 ovn_controller[149457]: 2026-01-31T08:43:15Z|00811|binding|INFO|Removing iface tapa9f7b34e-c5 ovn-installed in OVS
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.385 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.388 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.390 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Jan 31 08:43:15 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b8.scope: Consumed 25.099s CPU time.
Jan 31 08:43:15 compute-0 systemd-machined[214448]: Machine qemu-84-instance-000000b8 terminated.
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.604 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.608 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.618 247708 INFO nova.virt.libvirt.driver [-] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Instance destroyed successfully.
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.618 247708 DEBUG nova.objects.instance [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lazy-loading 'resources' on Instance uuid 9e883c68-083a-45ab-81fe-942de74e50ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:43:15 compute-0 ovn_controller[149457]: 2026-01-31T08:43:15Z|00812|binding|INFO|Releasing lport c8a7eefb-b644-411b-b95a-f875570edfa9 from this chassis (sb_readonly=0)
Jan 31 08:43:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:15.775 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:f7:bb 10.100.0.8'], port_security=['fa:16:3e:71:f7:bb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9e883c68-083a-45ab-81fe-942de74e50ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-090376c2-ac34-46f0-acd4-344bb2bc1154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b38141686534a0fb9b947a7886cd4b6', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f827482b-eee1-43ff-a797-1ec84e5a6d1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05d66413-db50-49eb-973e-490542297b8d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:43:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:15.776 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f in datapath 090376c2-ac34-46f0-acd4-344bb2bc1154 unbound from our chassis
Jan 31 08:43:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:15.779 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 090376c2-ac34-46f0-acd4-344bb2bc1154, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:43:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:15.780 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a29f3ef6-8f3d-46d7-bc61-2e6f9bff9f86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:15.781 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 namespace which is not needed anymore
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-0 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[374484]: [NOTICE]   (374488) : haproxy version is 2.8.14-c23fe91
Jan 31 08:43:15 compute-0 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[374484]: [NOTICE]   (374488) : path to executable is /usr/sbin/haproxy
Jan 31 08:43:15 compute-0 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[374484]: [WARNING]  (374488) : Exiting Master process...
Jan 31 08:43:15 compute-0 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[374484]: [ALERT]    (374488) : Current worker (374490) exited with code 143 (Terminated)
Jan 31 08:43:15 compute-0 neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154[374484]: [WARNING]  (374488) : All workers exited. Exiting... (0)
Jan 31 08:43:15 compute-0 systemd[1]: libpod-358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396.scope: Deactivated successfully.
Jan 31 08:43:15 compute-0 podman[379788]: 2026-01-31 08:43:15.914991261 +0000 UTC m=+0.049793333 container died 358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:43:15 compute-0 ceph-mon[74496]: pgmap v3311: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 533 KiB/s rd, 23 KiB/s wr, 56 op/s
Jan 31 08:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396-userdata-shm.mount: Deactivated successfully.
Jan 31 08:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e874a6bc25a888a6b7a755bf0e157ffd73e9face97c7407632a65d8f5ef9293a-merged.mount: Deactivated successfully.
Jan 31 08:43:15 compute-0 podman[379788]: 2026-01-31 08:43:15.956243844 +0000 UTC m=+0.091045926 container cleanup 358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 08:43:15 compute-0 systemd[1]: libpod-conmon-358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396.scope: Deactivated successfully.
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.982 247708 DEBUG nova.virt.libvirt.vif [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-864114139',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-864114139',id=184,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKzg6Lk8OuDDWEBQQtNVGTD92uncKX4uuGYvXITCu78FVc0dCeMJjMpvMnamF80j6P2vfKzi9siS1JCEwYFhLgZ6vk2tD+oJq2pafl3D7QkbaZkrlvSItHgJLM4cymh3Sg==',key_name='tempest-TestInstancesWithCinderVolumes-232350541',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:39:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b38141686534a0fb9b947a7886cd4b6',ramdisk_id='',reservation_id='r-y7du7vy3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-791993230',owner_user_name='tempest-TestInstancesWithCinderVolumes-791993230-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:39:06Z,user_data=None,user_id='cfc8a271e75e4a92b16ee6b5da9cfc9f',uuid=9e883c68-083a-45ab-81fe-942de74e50ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.983 247708 DEBUG nova.network.os_vif_util [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converting VIF {"id": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "address": "fa:16:3e:71:f7:bb", "network": {"id": "090376c2-ac34-46f0-acd4-344bb2bc1154", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-919045303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b38141686534a0fb9b947a7886cd4b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9f7b34e-c5", "ovs_interfaceid": "a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.984 247708 DEBUG nova.network.os_vif_util [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:f7:bb,bridge_name='br-int',has_traffic_filtering=True,id=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9f7b34e-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.984 247708 DEBUG os_vif [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:f7:bb,bridge_name='br-int',has_traffic_filtering=True,id=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9f7b34e-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.986 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.986 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9f7b34e-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.988 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.990 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:43:15 compute-0 nova_compute[247704]: 2026-01-31 08:43:15.992 247708 INFO os_vif [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:f7:bb,bridge_name='br-int',has_traffic_filtering=True,id=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f,network=Network(090376c2-ac34-46f0-acd4-344bb2bc1154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9f7b34e-c5')
Jan 31 08:43:16 compute-0 podman[379819]: 2026-01-31 08:43:16.017467004 +0000 UTC m=+0.044303570 container remove 358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.022 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3198fc7c-c8cc-445e-8abb-55f329424a4a]: (4, ('Sat Jan 31 08:43:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 (358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396)\n358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396\nSat Jan 31 08:43:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 (358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396)\n358d9549e073839f94dd13e9f2b6b4ff8d9ddeb176b3ec49400b37dc74405396\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.024 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c9e700-e763-4a36-81ff-a787a730cc69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.025 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap090376c2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:16 compute-0 kernel: tap090376c2-a0: left promiscuous mode
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.030 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.034 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.037 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc1c82c-8416-4a46-9d6d-a06060717381]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.052 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a20159e8-8dc0-46cf-a658-b2a5d9f9c0f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.053 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[585e5522-561e-4ad7-a701-1259c78b4b02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.067 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b13193de-d364-4183-b829-60b74281bb19]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897977, 'reachable_time': 28235, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379853, 'error': None, 'target': 'ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d090376c2\x2dac34\x2d46f0\x2dacd4\x2d344bb2bc1154.mount: Deactivated successfully.
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.071 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-090376c2-ac34-46f0-acd4-344bb2bc1154 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:43:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:16.071 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[07db0a90-ea28-4f9c-8aeb-fd55ac302ba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.246 247708 INFO nova.virt.libvirt.driver [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Deleting instance files /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef_del
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.247 247708 INFO nova.virt.libvirt.driver [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Deletion of /var/lib/nova/instances/9e883c68-083a-45ab-81fe-942de74e50ef_del complete
Jan 31 08:43:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:16.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 23 KiB/s wr, 45 op/s
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.713 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:16.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.989 247708 INFO nova.compute.manager [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Took 1.81 seconds to destroy the instance on the hypervisor.
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.990 247708 DEBUG oslo.service.loopingcall [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.990 247708 DEBUG nova.compute.manager [-] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:43:16 compute-0 nova_compute[247704]: 2026-01-31 08:43:16.991 247708 DEBUG nova.network.neutron [-] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:43:17 compute-0 nova_compute[247704]: 2026-01-31 08:43:17.532 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848982.5310013, 5f00cd9b-b5f3-4eb6-ab53-387687853c27 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:43:17 compute-0 nova_compute[247704]: 2026-01-31 08:43:17.533 247708 INFO nova.compute.manager [-] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] VM Stopped (Lifecycle Event)
Jan 31 08:43:17 compute-0 nova_compute[247704]: 2026-01-31 08:43:17.942 247708 DEBUG nova.compute.manager [None req-9a0a4469-fdb2-45cf-9897-e87b16626aaa - - - - - -] [instance: 5f00cd9b-b5f3-4eb6-ab53-387687853c27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:43:17 compute-0 sudo[379856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:17 compute-0 sudo[379856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:17 compute-0 ceph-mon[74496]: pgmap v3312: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 23 KiB/s wr, 45 op/s
Jan 31 08:43:17 compute-0 sudo[379856]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:18 compute-0 sudo[379881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:18 compute-0 sudo[379881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:18 compute-0 sudo[379881]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:43:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:18.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:43:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 526 KiB/s rd, 23 KiB/s wr, 46 op/s
Jan 31 08:43:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:18.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:19 compute-0 ceph-mon[74496]: pgmap v3313: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 526 KiB/s rd, 23 KiB/s wr, 46 op/s
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:43:20
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'vms', '.rgw.root', 'images', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:43:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:20.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 502 KiB/s rd, 23 KiB/s wr, 51 op/s
Jan 31 08:43:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:20.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:43:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:43:20 compute-0 nova_compute[247704]: 2026-01-31 08:43:20.921 247708 DEBUG nova.compute.manager [req-092f4f62-f6b8-44c1-b3f8-ff9236e3ccd8 req-eecd3b4d-8156-4c12-815a-41d93cb0600c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-vif-unplugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:43:20 compute-0 nova_compute[247704]: 2026-01-31 08:43:20.921 247708 DEBUG oslo_concurrency.lockutils [req-092f4f62-f6b8-44c1-b3f8-ff9236e3ccd8 req-eecd3b4d-8156-4c12-815a-41d93cb0600c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:20 compute-0 nova_compute[247704]: 2026-01-31 08:43:20.921 247708 DEBUG oslo_concurrency.lockutils [req-092f4f62-f6b8-44c1-b3f8-ff9236e3ccd8 req-eecd3b4d-8156-4c12-815a-41d93cb0600c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:20 compute-0 nova_compute[247704]: 2026-01-31 08:43:20.921 247708 DEBUG oslo_concurrency.lockutils [req-092f4f62-f6b8-44c1-b3f8-ff9236e3ccd8 req-eecd3b4d-8156-4c12-815a-41d93cb0600c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:20 compute-0 nova_compute[247704]: 2026-01-31 08:43:20.922 247708 DEBUG nova.compute.manager [req-092f4f62-f6b8-44c1-b3f8-ff9236e3ccd8 req-eecd3b4d-8156-4c12-815a-41d93cb0600c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] No waiting events found dispatching network-vif-unplugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:43:20 compute-0 nova_compute[247704]: 2026-01-31 08:43:20.922 247708 DEBUG nova.compute.manager [req-092f4f62-f6b8-44c1-b3f8-ff9236e3ccd8 req-eecd3b4d-8156-4c12-815a-41d93cb0600c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-vif-unplugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:43:21 compute-0 nova_compute[247704]: 2026-01-31 08:43:21.045 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:21 compute-0 nova_compute[247704]: 2026-01-31 08:43:21.714 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:22 compute-0 ceph-mon[74496]: pgmap v3314: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 502 KiB/s rd, 23 KiB/s wr, 51 op/s
Jan 31 08:43:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:22.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 14 op/s
Jan 31 08:43:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:22.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:23 compute-0 nova_compute[247704]: 2026-01-31 08:43:23.980 247708 DEBUG nova.compute.manager [req-d0a40698-0cd9-40ec-a90f-d17932748202 req-5fef3948-9946-4a73-b1a9-12e7248b641d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:43:23 compute-0 nova_compute[247704]: 2026-01-31 08:43:23.981 247708 DEBUG oslo_concurrency.lockutils [req-d0a40698-0cd9-40ec-a90f-d17932748202 req-5fef3948-9946-4a73-b1a9-12e7248b641d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:23 compute-0 nova_compute[247704]: 2026-01-31 08:43:23.981 247708 DEBUG oslo_concurrency.lockutils [req-d0a40698-0cd9-40ec-a90f-d17932748202 req-5fef3948-9946-4a73-b1a9-12e7248b641d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:23 compute-0 nova_compute[247704]: 2026-01-31 08:43:23.982 247708 DEBUG oslo_concurrency.lockutils [req-d0a40698-0cd9-40ec-a90f-d17932748202 req-5fef3948-9946-4a73-b1a9-12e7248b641d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:23 compute-0 nova_compute[247704]: 2026-01-31 08:43:23.982 247708 DEBUG nova.compute.manager [req-d0a40698-0cd9-40ec-a90f-d17932748202 req-5fef3948-9946-4a73-b1a9-12e7248b641d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] No waiting events found dispatching network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:43:23 compute-0 nova_compute[247704]: 2026-01-31 08:43:23.982 247708 WARNING nova.compute.manager [req-d0a40698-0cd9-40ec-a90f-d17932748202 req-5fef3948-9946-4a73-b1a9-12e7248b641d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received unexpected event network-vif-plugged-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f for instance with vm_state active and task_state deleting.
Jan 31 08:43:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:24 compute-0 ceph-mon[74496]: pgmap v3315: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 14 op/s
Jan 31 08:43:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:24.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 14 op/s
Jan 31 08:43:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:24.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:25.121 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:43:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:25.122 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:43:25 compute-0 nova_compute[247704]: 2026-01-31 08:43:25.121 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:26 compute-0 nova_compute[247704]: 2026-01-31 08:43:26.047 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:26 compute-0 ceph-mon[74496]: pgmap v3316: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 14 op/s
Jan 31 08:43:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:26.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 3.2 KiB/s wr, 13 op/s
Jan 31 08:43:26 compute-0 nova_compute[247704]: 2026-01-31 08:43:26.717 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:26.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:27 compute-0 nova_compute[247704]: 2026-01-31 08:43:27.553 247708 DEBUG nova.compute.manager [req-b2da1967-9a84-470f-8bf1-13ec0cdf57a4 req-4dc11bfc-f645-4e72-8384-91688a4ffd53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Received event network-vif-deleted-a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:43:27 compute-0 nova_compute[247704]: 2026-01-31 08:43:27.554 247708 INFO nova.compute.manager [req-b2da1967-9a84-470f-8bf1-13ec0cdf57a4 req-4dc11bfc-f645-4e72-8384-91688a4ffd53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Neutron deleted interface a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f; detaching it from the instance and deleting it from the info cache
Jan 31 08:43:27 compute-0 nova_compute[247704]: 2026-01-31 08:43:27.554 247708 DEBUG nova.network.neutron [req-b2da1967-9a84-470f-8bf1-13ec0cdf57a4 req-4dc11bfc-f645-4e72-8384-91688a4ffd53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:43:27 compute-0 nova_compute[247704]: 2026-01-31 08:43:27.616 247708 DEBUG nova.network.neutron [-] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:43:27 compute-0 nova_compute[247704]: 2026-01-31 08:43:27.887 247708 INFO nova.compute.manager [-] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Took 10.90 seconds to deallocate network for instance.
Jan 31 08:43:27 compute-0 nova_compute[247704]: 2026-01-31 08:43:27.890 247708 DEBUG nova.compute.manager [req-b2da1967-9a84-470f-8bf1-13ec0cdf57a4 req-4dc11bfc-f645-4e72-8384-91688a4ffd53 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Detach interface failed, port_id=a9f7b34e-c576-4f4c-a72f-cfc875e5fa3f, reason: Instance 9e883c68-083a-45ab-81fe-942de74e50ef could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:43:28 compute-0 ceph-mon[74496]: pgmap v3317: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 3.2 KiB/s wr, 13 op/s
Jan 31 08:43:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:28.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:28 compute-0 nova_compute[247704]: 2026-01-31 08:43:28.630 247708 INFO nova.compute.manager [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Took 0.74 seconds to detach 1 volumes for instance.
Jan 31 08:43:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 453 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 13 op/s
Jan 31 08:43:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:28 compute-0 podman[379913]: 2026-01-31 08:43:28.889403297 +0000 UTC m=+0.054530488 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:43:28 compute-0 nova_compute[247704]: 2026-01-31 08:43:28.940 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:28 compute-0 nova_compute[247704]: 2026-01-31 08:43:28.940 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:29 compute-0 nova_compute[247704]: 2026-01-31 08:43:29.009 247708 DEBUG oslo_concurrency.processutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:43:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817827549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:29 compute-0 nova_compute[247704]: 2026-01-31 08:43:29.469 247708 DEBUG oslo_concurrency.processutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:29 compute-0 nova_compute[247704]: 2026-01-31 08:43:29.476 247708 DEBUG nova.compute.provider_tree [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:43:29 compute-0 nova_compute[247704]: 2026-01-31 08:43:29.602 247708 DEBUG nova.scheduler.client.report [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:43:29 compute-0 nova_compute[247704]: 2026-01-31 08:43:29.813 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:29 compute-0 nova_compute[247704]: 2026-01-31 08:43:29.989 247708 INFO nova.scheduler.client.report [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Deleted allocations for instance 9e883c68-083a-45ab-81fe-942de74e50ef
Jan 31 08:43:30 compute-0 ceph-mon[74496]: pgmap v3318: 305 pgs: 305 active+clean; 453 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 13 op/s
Jan 31 08:43:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/817827549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:30 compute-0 nova_compute[247704]: 2026-01-31 08:43:30.433 247708 DEBUG oslo_concurrency.lockutils [None req-72295f91-8474-44ac-a3b7-9c6295e87105 cfc8a271e75e4a92b16ee6b5da9cfc9f 4b38141686534a0fb9b947a7886cd4b6 - - default default] Lock "9e883c68-083a-45ab-81fe-942de74e50ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:30.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:30 compute-0 nova_compute[247704]: 2026-01-31 08:43:30.616 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848995.615981, 9e883c68-083a-45ab-81fe-942de74e50ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:43:30 compute-0 nova_compute[247704]: 2026-01-31 08:43:30.618 247708 INFO nova.compute.manager [-] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] VM Stopped (Lifecycle Event)
Jan 31 08:43:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 4.1 KiB/s wr, 37 op/s
Jan 31 08:43:30 compute-0 nova_compute[247704]: 2026-01-31 08:43:30.700 247708 DEBUG nova.compute.manager [None req-f21f9772-8a56-493b-ac1a-948294de70b3 - - - - - -] [instance: 9e883c68-083a-45ab-81fe-942de74e50ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:43:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:30.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:31 compute-0 nova_compute[247704]: 2026-01-31 08:43:31.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1048975873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:31 compute-0 nova_compute[247704]: 2026-01-31 08:43:31.719 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:32 compute-0 ceph-mon[74496]: pgmap v3319: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 4.1 KiB/s wr, 37 op/s
Jan 31 08:43:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Jan 31 08:43:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:32.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:34 compute-0 ceph-mon[74496]: pgmap v3320: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.8 KiB/s wr, 27 op/s
Jan 31 08:43:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:34.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 31 op/s
Jan 31 08:43:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:35.124 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:35 compute-0 ceph-mon[74496]: pgmap v3321: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 31 op/s
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010905220547180347 quantized to 32 (current 32)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00867637522101247 of space, bias 1.0, pg target 2.602912566303741 quantized to 32 (current 32)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:43:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:43:36 compute-0 nova_compute[247704]: 2026-01-31 08:43:36.056 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:36.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:36 compute-0 nova_compute[247704]: 2026-01-31 08:43:36.721 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:36.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:37 compute-0 ceph-mon[74496]: pgmap v3322: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:38 compute-0 sudo[379960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:38 compute-0 sudo[379960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:38 compute-0 sudo[379960]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:38 compute-0 nova_compute[247704]: 2026-01-31 08:43:38.117 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:38 compute-0 nova_compute[247704]: 2026-01-31 08:43:38.118 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:38 compute-0 sudo[379985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:38 compute-0 sudo[379985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:38 compute-0 sudo[379985]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:38 compute-0 nova_compute[247704]: 2026-01-31 08:43:38.341 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:43:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:38.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:38 compute-0 nova_compute[247704]: 2026-01-31 08:43:38.693 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:38 compute-0 nova_compute[247704]: 2026-01-31 08:43:38.694 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:38 compute-0 nova_compute[247704]: 2026-01-31 08:43:38.702 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:43:38 compute-0 nova_compute[247704]: 2026-01-31 08:43:38.702 247708 INFO nova.compute.claims [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:43:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:38.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:39 compute-0 nova_compute[247704]: 2026-01-31 08:43:39.770 247708 DEBUG nova.scheduler.client.report [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:43:39 compute-0 ceph-mon[74496]: pgmap v3323: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2520344341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:39 compute-0 podman[380011]: 2026-01-31 08:43:39.94398921 +0000 UTC m=+0.118686938 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:43:40 compute-0 nova_compute[247704]: 2026-01-31 08:43:40.264 247708 DEBUG nova.scheduler.client.report [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:43:40 compute-0 nova_compute[247704]: 2026-01-31 08:43:40.265 247708 DEBUG nova.compute.provider_tree [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:43:40 compute-0 nova_compute[247704]: 2026-01-31 08:43:40.290 247708 DEBUG nova.scheduler.client.report [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:43:40 compute-0 nova_compute[247704]: 2026-01-31 08:43:40.329 247708 DEBUG nova.scheduler.client.report [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:43:40 compute-0 nova_compute[247704]: 2026-01-31 08:43:40.389 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:43:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919391643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:40 compute-0 nova_compute[247704]: 2026-01-31 08:43:40.835 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:40 compute-0 nova_compute[247704]: 2026-01-31 08:43:40.841 247708 DEBUG nova.compute.provider_tree [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:43:41 compute-0 nova_compute[247704]: 2026-01-31 08:43:41.022 247708 DEBUG nova.scheduler.client.report [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:43:41 compute-0 nova_compute[247704]: 2026-01-31 08:43:41.058 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:41 compute-0 nova_compute[247704]: 2026-01-31 08:43:41.724 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:41 compute-0 ceph-mon[74496]: pgmap v3324: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 31 08:43:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1919391643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:43:42 compute-0 nova_compute[247704]: 2026-01-31 08:43:42.480 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:42 compute-0 nova_compute[247704]: 2026-01-31 08:43:42.482 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:43:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:43:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:42.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:43:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 08:43:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:42.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:43 compute-0 nova_compute[247704]: 2026-01-31 08:43:43.372 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:43:43 compute-0 nova_compute[247704]: 2026-01-31 08:43:43.372 247708 DEBUG nova.network.neutron [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:43:43 compute-0 nova_compute[247704]: 2026-01-31 08:43:43.820 247708 INFO nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:43:43 compute-0 ceph-mon[74496]: pgmap v3325: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 08:43:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:44 compute-0 nova_compute[247704]: 2026-01-31 08:43:44.059 247708 DEBUG nova.policy [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6968a1ee10e4e3b8651ffe0240a7e46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:43:44 compute-0 nova_compute[247704]: 2026-01-31 08:43:44.158 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:43:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:43:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:44.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:43:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 08:43:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:44.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.328 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.330 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.330 247708 INFO nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Creating image(s)
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.373 247708 DEBUG nova.storage.rbd_utils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.420 247708 DEBUG nova.storage.rbd_utils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.460 247708 DEBUG nova.storage.rbd_utils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.465 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.533 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.534 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.534 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.534 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.560 247708 DEBUG nova.storage.rbd_utils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.563 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.593 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.858 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:45 compute-0 ceph-mon[74496]: pgmap v3326: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 08:43:45 compute-0 nova_compute[247704]: 2026-01-31 08:43:45.956 247708 DEBUG nova.storage.rbd_utils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] resizing rbd image bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:43:46 compute-0 nova_compute[247704]: 2026-01-31 08:43:46.071 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:46 compute-0 nova_compute[247704]: 2026-01-31 08:43:46.080 247708 DEBUG nova.objects.instance [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'migration_context' on Instance uuid bcdf8b78-95f0-4cc3-9071-7deaa5a85eee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:43:46 compute-0 nova_compute[247704]: 2026-01-31 08:43:46.157 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:43:46 compute-0 nova_compute[247704]: 2026-01-31 08:43:46.158 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Ensure instance console log exists: /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:43:46 compute-0 nova_compute[247704]: 2026-01-31 08:43:46.159 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:46 compute-0 nova_compute[247704]: 2026-01-31 08:43:46.159 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:46 compute-0 nova_compute[247704]: 2026-01-31 08:43:46.160 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:46.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.9 KiB/s rd, 255 B/s wr, 9 op/s
Jan 31 08:43:46 compute-0 nova_compute[247704]: 2026-01-31 08:43:46.726 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:46.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:43:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650116221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:43:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650116221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:47 compute-0 nova_compute[247704]: 2026-01-31 08:43:47.277 247708 DEBUG nova.network.neutron [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Successfully created port: 23e23487-5e42-43f0-8a71-74a84e458eac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:43:47 compute-0 ceph-mon[74496]: pgmap v3327: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.9 KiB/s rd, 255 B/s wr, 9 op/s
Jan 31 08:43:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1650116221' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1650116221' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:48.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 341 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 562 KiB/s wr, 12 op/s
Jan 31 08:43:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:48.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1500088091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1500088091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:49 compute-0 ceph-mon[74496]: pgmap v3328: 305 pgs: 305 active+clean; 341 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 562 KiB/s wr, 12 op/s
Jan 31 08:43:50 compute-0 nova_compute[247704]: 2026-01-31 08:43:50.020 247708 DEBUG nova.network.neutron [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Successfully updated port: 23e23487-5e42-43f0-8a71-74a84e458eac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:50 compute-0 nova_compute[247704]: 2026-01-31 08:43:50.147 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:43:50 compute-0 nova_compute[247704]: 2026-01-31 08:43:50.147 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquired lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:43:50 compute-0 nova_compute[247704]: 2026-01-31 08:43:50.148 247708 DEBUG nova.network.neutron [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:43:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:43:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:50.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:43:50 compute-0 nova_compute[247704]: 2026-01-31 08:43:50.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 31 08:43:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:50.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4162508920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4162508920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:51 compute-0 nova_compute[247704]: 2026-01-31 08:43:51.014 247708 DEBUG nova.compute.manager [req-863a95ef-d6da-46ef-bea1-278723ea6232 req-62305b0e-658b-4676-9317-77f3923694cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-changed-23e23487-5e42-43f0-8a71-74a84e458eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:43:51 compute-0 nova_compute[247704]: 2026-01-31 08:43:51.014 247708 DEBUG nova.compute.manager [req-863a95ef-d6da-46ef-bea1-278723ea6232 req-62305b0e-658b-4676-9317-77f3923694cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Refreshing instance network info cache due to event network-changed-23e23487-5e42-43f0-8a71-74a84e458eac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:43:51 compute-0 nova_compute[247704]: 2026-01-31 08:43:51.015 247708 DEBUG oslo_concurrency.lockutils [req-863a95ef-d6da-46ef-bea1-278723ea6232 req-62305b0e-658b-4676-9317-77f3923694cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:43:51 compute-0 nova_compute[247704]: 2026-01-31 08:43:51.073 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:51 compute-0 nova_compute[247704]: 2026-01-31 08:43:51.728 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:51 compute-0 ceph-mon[74496]: pgmap v3329: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 31 08:43:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:52.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 31 08:43:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:52.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:53 compute-0 nova_compute[247704]: 2026-01-31 08:43:53.252 247708 DEBUG nova.network.neutron [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:43:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:54 compute-0 ceph-mon[74496]: pgmap v3330: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 31 08:43:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1823716638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:43:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1823716638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:43:54 compute-0 nova_compute[247704]: 2026-01-31 08:43:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:54.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 08:43:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.540 247708 DEBUG nova.network.neutron [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updating instance_info_cache with network_info: [{"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.680 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Releasing lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.681 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Instance network_info: |[{"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.687 247708 DEBUG oslo_concurrency.lockutils [req-863a95ef-d6da-46ef-bea1-278723ea6232 req-62305b0e-658b-4676-9317-77f3923694cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.688 247708 DEBUG nova.network.neutron [req-863a95ef-d6da-46ef-bea1-278723ea6232 req-62305b0e-658b-4676-9317-77f3923694cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Refreshing network info cache for port 23e23487-5e42-43f0-8a71-74a84e458eac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.693 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Start _get_guest_xml network_info=[{"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.701 247708 WARNING nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.722 247708 DEBUG nova.virt.libvirt.host [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.723 247708 DEBUG nova.virt.libvirt.host [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.731 247708 DEBUG nova.virt.libvirt.host [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.732 247708 DEBUG nova.virt.libvirt.host [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.733 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.734 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.734 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.735 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.735 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.735 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.735 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.736 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.736 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.736 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.736 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.736 247708 DEBUG nova.virt.hardware [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:43:55 compute-0 nova_compute[247704]: 2026-01-31 08:43:55.739 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.075 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:43:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416574134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.262 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.299 247708 DEBUG nova.storage.rbd_utils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.306 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:56.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.731 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:43:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2555142641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.757 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.759 247708 DEBUG nova.virt.libvirt.vif [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-245495987',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-245495987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=190,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+IEKLcDIGbeOvOdAVtorZ1xtiCqnJ7fs7G+aYTHXv48LqaidcMSGgy+Nrfu6X80mnMDyQMW/ANMH0isk5utMRMD3EHvSyRl+Xh4xqHrF93AhlQmH4UiDGLeTiTMaGHCw==',key_name='tempest-TestSecurityGroupsBasicOps-342550486',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-mkjmkem9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:43:44Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=bcdf8b78-95f0-4cc3-9071-7deaa5a85eee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.760 247708 DEBUG nova.network.os_vif_util [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.761 247708 DEBUG nova.network.os_vif_util [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:f9:2f,bridge_name='br-int',has_traffic_filtering=True,id=23e23487-5e42-43f0-8a71-74a84e458eac,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23e23487-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.763 247708 DEBUG nova.objects.instance [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'pci_devices' on Instance uuid bcdf8b78-95f0-4cc3-9071-7deaa5a85eee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:43:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:56.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.851 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <uuid>bcdf8b78-95f0-4cc3-9071-7deaa5a85eee</uuid>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <name>instance-000000be</name>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-245495987</nova:name>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:43:55</nova:creationTime>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <nova:user uuid="c6968a1ee10e4e3b8651ffe0240a7e46">tempest-TestSecurityGroupsBasicOps-1014068786-project-member</nova:user>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <nova:project uuid="ba35ae24dbf3443e8a526dce39c6793b">tempest-TestSecurityGroupsBasicOps-1014068786</nova:project>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <nova:port uuid="23e23487-5e42-43f0-8a71-74a84e458eac">
Jan 31 08:43:56 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <system>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <entry name="serial">bcdf8b78-95f0-4cc3-9071-7deaa5a85eee</entry>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <entry name="uuid">bcdf8b78-95f0-4cc3-9071-7deaa5a85eee</entry>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </system>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <os>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   </os>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <features>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   </features>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk">
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       </source>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk.config">
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       </source>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:43:56 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:eb:f9:2f"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <target dev="tap23e23487-5e"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee/console.log" append="off"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <video>
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </video>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:43:56 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:43:56 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:43:56 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:43:56 compute-0 nova_compute[247704]: </domain>
Jan 31 08:43:56 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.853 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Preparing to wait for external event network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.854 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.855 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.855 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.856 247708 DEBUG nova.virt.libvirt.vif [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-245495987',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-245495987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=190,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+IEKLcDIGbeOvOdAVtorZ1xtiCqnJ7fs7G+aYTHXv48LqaidcMSGgy+Nrfu6X80mnMDyQMW/ANMH0isk5utMRMD3EHvSyRl+Xh4xqHrF93AhlQmH4UiDGLeTiTMaGHCw==',key_name='tempest-TestSecurityGroupsBasicOps-342550486',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-mkjmkem9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:43:44Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=bcdf8b78-95f0-4cc3-9071-7deaa5a85eee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.856 247708 DEBUG nova.network.os_vif_util [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.857 247708 DEBUG nova.network.os_vif_util [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:f9:2f,bridge_name='br-int',has_traffic_filtering=True,id=23e23487-5e42-43f0-8a71-74a84e458eac,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23e23487-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.857 247708 DEBUG os_vif [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:f9:2f,bridge_name='br-int',has_traffic_filtering=True,id=23e23487-5e42-43f0-8a71-74a84e458eac,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23e23487-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.858 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.859 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.859 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.862 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.863 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23e23487-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.863 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23e23487-5e, col_values=(('external_ids', {'iface-id': '23e23487-5e42-43f0-8a71-74a84e458eac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:f9:2f', 'vm-uuid': 'bcdf8b78-95f0-4cc3-9071-7deaa5a85eee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.865 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:56 compute-0 NetworkManager[49108]: <info>  [1769849036.8664] manager: (tap23e23487-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.868 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.871 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:56 compute-0 nova_compute[247704]: 2026-01-31 08:43:56.872 247708 INFO os_vif [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:f9:2f,bridge_name='br-int',has_traffic_filtering=True,id=23e23487-5e42-43f0-8a71-74a84e458eac,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23e23487-5e')
Jan 31 08:43:56 compute-0 ceph-mon[74496]: pgmap v3331: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 08:43:56 compute-0 sshd-session[380297]: Invalid user ubuntu from 45.148.10.240 port 45846
Jan 31 08:43:57 compute-0 sshd-session[380297]: Connection closed by invalid user ubuntu 45.148.10.240 port 45846 [preauth]
Jan 31 08:43:57 compute-0 nova_compute[247704]: 2026-01-31 08:43:57.105 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:43:57 compute-0 nova_compute[247704]: 2026-01-31 08:43:57.106 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:43:57 compute-0 nova_compute[247704]: 2026-01-31 08:43:57.106 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No VIF found with MAC fa:16:3e:eb:f9:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:43:57 compute-0 nova_compute[247704]: 2026-01-31 08:43:57.107 247708 INFO nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Using config drive
Jan 31 08:43:57 compute-0 nova_compute[247704]: 2026-01-31 08:43:57.134 247708 DEBUG nova.storage.rbd_utils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:43:57 compute-0 nova_compute[247704]: 2026-01-31 08:43:57.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:57 compute-0 nova_compute[247704]: 2026-01-31 08:43:57.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:43:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/416574134' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:43:57 compute-0 ceph-mon[74496]: pgmap v3332: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 08:43:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2555142641' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:43:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Jan 31 08:43:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Jan 31 08:43:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.025 247708 DEBUG nova.network.neutron [req-863a95ef-d6da-46ef-bea1-278723ea6232 req-62305b0e-658b-4676-9317-77f3923694cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updated VIF entry in instance network info cache for port 23e23487-5e42-43f0-8a71-74a84e458eac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.025 247708 DEBUG nova.network.neutron [req-863a95ef-d6da-46ef-bea1-278723ea6232 req-62305b0e-658b-4676-9317-77f3923694cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updating instance_info_cache with network_info: [{"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.076 247708 DEBUG oslo_concurrency.lockutils [req-863a95ef-d6da-46ef-bea1-278723ea6232 req-62305b0e-658b-4676-9317-77f3923694cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:43:58 compute-0 sudo[380322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:58 compute-0 sudo[380322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:58 compute-0 sudo[380322]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:58 compute-0 sudo[380347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:43:58 compute-0 sudo[380347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:58 compute-0 sudo[380347]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.284 247708 INFO nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Creating config drive at /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee/disk.config
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.288 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2y3b33go execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.415 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp2y3b33go" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.465 247708 DEBUG nova.storage.rbd_utils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.471 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee/disk.config bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:43:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:58.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:43:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.5 MiB/s wr, 81 op/s
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.760 247708 DEBUG oslo_concurrency.processutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee/disk.config bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.762 247708 INFO nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Deleting local config drive /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee/disk.config because it was imported into RBD.
Jan 31 08:43:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:43:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:43:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:58.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:43:58 compute-0 kernel: tap23e23487-5e: entered promiscuous mode
Jan 31 08:43:58 compute-0 ovn_controller[149457]: 2026-01-31T08:43:58Z|00813|binding|INFO|Claiming lport 23e23487-5e42-43f0-8a71-74a84e458eac for this chassis.
Jan 31 08:43:58 compute-0 ovn_controller[149457]: 2026-01-31T08:43:58Z|00814|binding|INFO|23e23487-5e42-43f0-8a71-74a84e458eac: Claiming fa:16:3e:eb:f9:2f 10.100.0.10
Jan 31 08:43:58 compute-0 NetworkManager[49108]: <info>  [1769849038.8295] manager: (tap23e23487-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.828 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:58 compute-0 systemd-machined[214448]: New machine qemu-86-instance-000000be.
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.870 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.874 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:f9:2f 10.100.0.10'], port_security=['fa:16:3e:eb:f9:2f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bcdf8b78-95f0-4cc3-9071-7deaa5a85eee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa8535db-1bf5-453e-8521-d36054020c47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '392901e2-d98d-4afe-a0d0-ff202a839d60 77ec18db-6d74-4aac-9268-2e86e3cdfbe8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ebb0cf2-f8f8-4f5f-9f1c-79de32e76bba, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=23e23487-5e42-43f0-8a71-74a84e458eac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.877 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 23e23487-5e42-43f0-8a71-74a84e458eac in datapath aa8535db-1bf5-453e-8521-d36054020c47 bound to our chassis
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.881 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa8535db-1bf5-453e-8521-d36054020c47
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.881 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:58 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-000000be.
Jan 31 08:43:58 compute-0 ovn_controller[149457]: 2026-01-31T08:43:58Z|00815|binding|INFO|Setting lport 23e23487-5e42-43f0-8a71-74a84e458eac ovn-installed in OVS
Jan 31 08:43:58 compute-0 ovn_controller[149457]: 2026-01-31T08:43:58Z|00816|binding|INFO|Setting lport 23e23487-5e42-43f0-8a71-74a84e458eac up in Southbound
Jan 31 08:43:58 compute-0 nova_compute[247704]: 2026-01-31 08:43:58.886 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.895 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a19d1689-0ee9-4a1c-bba1-a641558a1be8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.896 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa8535db-11 in ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.898 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa8535db-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.899 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1931bc08-7ded-4fa7-a828-1622d6af76dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.899 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e5c19b-cb3c-44aa-92d2-011f1572e19a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:58 compute-0 systemd-udevd[380427]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.916 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[e0cf3d9c-864f-4bf4-a4fb-e9e47ef73be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:58 compute-0 NetworkManager[49108]: <info>  [1769849038.9245] device (tap23e23487-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:43:58 compute-0 NetworkManager[49108]: <info>  [1769849038.9259] device (tap23e23487-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.933 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d5acf3e8-1dcb-4784-99de-5bcc08c458d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.964 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[e8547740-0ad6-49f1-ba5c-fd7c64ec26f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:58.970 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[181a05f8-8a14-40e3-869d-632b129181a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:58 compute-0 NetworkManager[49108]: <info>  [1769849038.9717] manager: (tapaa8535db-10): new Veth device (/org/freedesktop/NetworkManager/Devices/353)
Jan 31 08:43:58 compute-0 ceph-mon[74496]: osdmap e383: 3 total, 3 up, 3 in
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.008 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8de24366-f5b0-49a7-9872-8196a92d6d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.011 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[b015fe16-9824-4c33-8abb-e0afb9826155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 podman[380428]: 2026-01-31 08:43:59.017172033 +0000 UTC m=+0.082464188 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:43:59 compute-0 NetworkManager[49108]: <info>  [1769849039.0406] device (tapaa8535db-10): carrier: link connected
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.047 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[20a9e1ae-5c07-440b-9fd5-01a070cb059a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.064 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8259af9d-881d-41d1-8b52-b6a225f9e027]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa8535db-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:1b:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928165, 'reachable_time': 30440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380477, 'error': None, 'target': 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.079 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7391ffdc-46ed-4add-88f9-85361f643eaf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:1b7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 928165, 'tstamp': 928165}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380478, 'error': None, 'target': 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.099 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2f169e-77e6-4734-9350-87dd9a9412ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa8535db-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:1b:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928165, 'reachable_time': 30440, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380479, 'error': None, 'target': 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.120 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bce833-5963-4227-9b12-280dc0c12b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.179 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[601350ca-1459-4bfa-aae8-c984ffbb046b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.181 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa8535db-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.182 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.183 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa8535db-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:59 compute-0 NetworkManager[49108]: <info>  [1769849039.1866] manager: (tapaa8535db-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Jan 31 08:43:59 compute-0 kernel: tapaa8535db-10: entered promiscuous mode
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.185 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.192 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa8535db-10, col_values=(('external_ids', {'iface-id': '5305ef22-1d04-4b5f-9e47-65b8bd8d2725'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.194 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:59 compute-0 ovn_controller[149457]: 2026-01-31T08:43:59Z|00817|binding|INFO|Releasing lport 5305ef22-1d04-4b5f-9e47-65b8bd8d2725 from this chassis (sb_readonly=0)
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.196 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa8535db-1bf5-453e-8521-d36054020c47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa8535db-1bf5-453e-8521-d36054020c47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.197 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[67620cde-120b-4387-a3b8-c8fcbf7d0dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.198 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-aa8535db-1bf5-453e-8521-d36054020c47
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/aa8535db-1bf5-453e-8521-d36054020c47.pid.haproxy
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID aa8535db-1bf5-453e-8521-d36054020c47
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:43:59 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:43:59.199 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'env', 'PROCESS_TAG=haproxy-aa8535db-1bf5-453e-8521-d36054020c47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa8535db-1bf5-453e-8521-d36054020c47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.201 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.442 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849039.4415581, bcdf8b78-95f0-4cc3-9071-7deaa5a85eee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.442 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] VM Started (Lifecycle Event)
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.603 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:43:59 compute-0 podman[380553]: 2026-01-31 08:43:59.60515428 +0000 UTC m=+0.054217191 container create 231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.608 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849039.441728, bcdf8b78-95f0-4cc3-9071-7deaa5a85eee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.609 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] VM Paused (Lifecycle Event)
Jan 31 08:43:59 compute-0 systemd[1]: Started libpod-conmon-231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7.scope.
Jan 31 08:43:59 compute-0 podman[380553]: 2026-01-31 08:43:59.574542515 +0000 UTC m=+0.023605456 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:43:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35a1edbb2353bb63ef63704419f5ec55f914ef92b7d6647c0c811ca5d1d4d05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:43:59 compute-0 podman[380553]: 2026-01-31 08:43:59.699459354 +0000 UTC m=+0.148522285 container init 231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:43:59 compute-0 podman[380553]: 2026-01-31 08:43:59.708404852 +0000 UTC m=+0.157467783 container start 231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:43:59 compute-0 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[380569]: [NOTICE]   (380573) : New worker (380575) forked
Jan 31 08:43:59 compute-0 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[380569]: [NOTICE]   (380573) : Loading success.
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.861 247708 DEBUG nova.compute.manager [req-55617c0c-9ea0-4fca-83d9-7760c19622cb req-aaa965d2-9651-41ac-b51b-2c8e20b05b1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.863 247708 DEBUG oslo_concurrency.lockutils [req-55617c0c-9ea0-4fca-83d9-7760c19622cb req-aaa965d2-9651-41ac-b51b-2c8e20b05b1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.864 247708 DEBUG oslo_concurrency.lockutils [req-55617c0c-9ea0-4fca-83d9-7760c19622cb req-aaa965d2-9651-41ac-b51b-2c8e20b05b1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.865 247708 DEBUG oslo_concurrency.lockutils [req-55617c0c-9ea0-4fca-83d9-7760c19622cb req-aaa965d2-9651-41ac-b51b-2c8e20b05b1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.865 247708 DEBUG nova.compute.manager [req-55617c0c-9ea0-4fca-83d9-7760c19622cb req-aaa965d2-9651-41ac-b51b-2c8e20b05b1c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Processing event network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.866 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.873 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.876 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:43:59 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.880 247708 INFO nova.virt.libvirt.driver [-] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Instance spawned successfully.
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.880 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.883 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849039.8721316, bcdf8b78-95f0-4cc3-9071-7deaa5a85eee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:43:59 compute-0 nova_compute[247704]: 2026-01-31 08:43:59.883 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] VM Resumed (Lifecycle Event)
Jan 31 08:44:00 compute-0 ceph-mon[74496]: pgmap v3334: 305 pgs: 305 active+clean; 187 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.5 MiB/s wr, 81 op/s
Jan 31 08:44:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:00.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.587 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.592 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.592 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.592 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.593 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.593 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.593 247708 DEBUG nova.virt.libvirt.driver [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.598 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:44:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 08:44:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:00 compute-0 nova_compute[247704]: 2026-01-31 08:44:00.976 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:44:01 compute-0 nova_compute[247704]: 2026-01-31 08:44:01.220 247708 INFO nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Took 15.89 seconds to spawn the instance on the hypervisor.
Jan 31 08:44:01 compute-0 nova_compute[247704]: 2026-01-31 08:44:01.221 247708 DEBUG nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:44:01 compute-0 nova_compute[247704]: 2026-01-31 08:44:01.596 247708 INFO nova.compute.manager [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Took 22.93 seconds to build instance.
Jan 31 08:44:01 compute-0 nova_compute[247704]: 2026-01-31 08:44:01.736 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:01 compute-0 nova_compute[247704]: 2026-01-31 08:44:01.865 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:01 compute-0 nova_compute[247704]: 2026-01-31 08:44:01.905 247708 DEBUG oslo_concurrency.lockutils [None req-e68cde3a-81ba-4fe9-8b2e-577d11a3964f c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:02 compute-0 ceph-mon[74496]: pgmap v3335: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 08:44:02 compute-0 nova_compute[247704]: 2026-01-31 08:44:02.512 247708 DEBUG nova.compute.manager [req-2b30d547-0d9b-4bd9-b93e-916f965b5e1e req-470c9b80-f823-4e90-8a38-59d0b331c096 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:44:02 compute-0 nova_compute[247704]: 2026-01-31 08:44:02.513 247708 DEBUG oslo_concurrency.lockutils [req-2b30d547-0d9b-4bd9-b93e-916f965b5e1e req-470c9b80-f823-4e90-8a38-59d0b331c096 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:02 compute-0 nova_compute[247704]: 2026-01-31 08:44:02.514 247708 DEBUG oslo_concurrency.lockutils [req-2b30d547-0d9b-4bd9-b93e-916f965b5e1e req-470c9b80-f823-4e90-8a38-59d0b331c096 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:02 compute-0 nova_compute[247704]: 2026-01-31 08:44:02.515 247708 DEBUG oslo_concurrency.lockutils [req-2b30d547-0d9b-4bd9-b93e-916f965b5e1e req-470c9b80-f823-4e90-8a38-59d0b331c096 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:02 compute-0 nova_compute[247704]: 2026-01-31 08:44:02.515 247708 DEBUG nova.compute.manager [req-2b30d547-0d9b-4bd9-b93e-916f965b5e1e req-470c9b80-f823-4e90-8a38-59d0b331c096 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] No waiting events found dispatching network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:44:02 compute-0 nova_compute[247704]: 2026-01-31 08:44:02.515 247708 WARNING nova.compute.manager [req-2b30d547-0d9b-4bd9-b93e-916f965b5e1e req-470c9b80-f823-4e90-8a38-59d0b331c096 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received unexpected event network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac for instance with vm_state active and task_state None.
Jan 31 08:44:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:02.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 08:44:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/783645137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:03 compute-0 nova_compute[247704]: 2026-01-31 08:44:03.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:03 compute-0 nova_compute[247704]: 2026-01-31 08:44:03.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:44:03 compute-0 nova_compute[247704]: 2026-01-31 08:44:03.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:44:03 compute-0 nova_compute[247704]: 2026-01-31 08:44:03.978 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:44:03 compute-0 nova_compute[247704]: 2026-01-31 08:44:03.979 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:44:03 compute-0 nova_compute[247704]: 2026-01-31 08:44:03.980 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:44:03 compute-0 nova_compute[247704]: 2026-01-31 08:44:03.980 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bcdf8b78-95f0-4cc3-9071-7deaa5a85eee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:44:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Jan 31 08:44:04 compute-0 ceph-mon[74496]: pgmap v3336: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.5 KiB/s wr, 30 op/s
Jan 31 08:44:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Jan 31 08:44:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Jan 31 08:44:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:04.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 20 KiB/s wr, 116 op/s
Jan 31 08:44:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:05 compute-0 ceph-mon[74496]: osdmap e384: 3 total, 3 up, 3 in
Jan 31 08:44:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3149711549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.230219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045230266, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2145, "num_deletes": 253, "total_data_size": 3860212, "memory_usage": 3916960, "flush_reason": "Manual Compaction"}
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045277601, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 3781573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71186, "largest_seqno": 73330, "table_properties": {"data_size": 3771879, "index_size": 6123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20388, "raw_average_key_size": 20, "raw_value_size": 3752409, "raw_average_value_size": 3801, "num_data_blocks": 266, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848834, "oldest_key_time": 1769848834, "file_creation_time": 1769849045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 47494 microseconds, and 9817 cpu microseconds.
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.277697) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 3781573 bytes OK
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.277740) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.279360) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.279385) EVENT_LOG_v1 {"time_micros": 1769849045279376, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.279412) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3851508, prev total WAL file size 3851508, number of live WAL files 2.
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.280481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(3692KB)], [164(9970KB)]
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045280558, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 13991573, "oldest_snapshot_seqno": -1}
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 10030 keys, 12039729 bytes, temperature: kUnknown
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045375367, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 12039729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11976552, "index_size": 37016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 264787, "raw_average_key_size": 26, "raw_value_size": 11802528, "raw_average_value_size": 1176, "num_data_blocks": 1405, "num_entries": 10030, "num_filter_entries": 10030, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849045, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.375799) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12039729 bytes
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.377644) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.4 rd, 126.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 10558, records dropped: 528 output_compression: NoCompression
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.377676) EVENT_LOG_v1 {"time_micros": 1769849045377661, "job": 102, "event": "compaction_finished", "compaction_time_micros": 94948, "compaction_time_cpu_micros": 22104, "output_level": 6, "num_output_files": 1, "total_output_size": 12039729, "num_input_records": 10558, "num_output_records": 10030, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045378429, "job": 102, "event": "table_file_deletion", "file_number": 166}
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849045379828, "job": 102, "event": "table_file_deletion", "file_number": 164}
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.280348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.380033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.380046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.380051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.380055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:44:05.380060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:44:06 compute-0 ceph-mon[74496]: pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 20 KiB/s wr, 116 op/s
Jan 31 08:44:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:06.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 130 op/s
Jan 31 08:44:06 compute-0 nova_compute[247704]: 2026-01-31 08:44:06.737 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:06.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:06 compute-0 nova_compute[247704]: 2026-01-31 08:44:06.867 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:08 compute-0 ceph-mon[74496]: pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 130 op/s
Jan 31 08:44:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/197898601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 110 op/s
Jan 31 08:44:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:08.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:09 compute-0 sudo[380590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:09 compute-0 sudo[380590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:09 compute-0 sudo[380590]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:09 compute-0 sudo[380615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:09 compute-0 sudo[380615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:09 compute-0 sudo[380615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:09 compute-0 sudo[380640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:09 compute-0 sudo[380640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:09 compute-0 sudo[380640]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:09 compute-0 sudo[380665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:44:09 compute-0 sudo[380665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:10 compute-0 sudo[380665]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:44:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:44:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:44:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2951890a-6556-4290-bcf7-7f955ca3bf74 does not exist
Jan 31 08:44:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c4aa89c6-494a-4972-963b-7a8173c6dd76 does not exist
Jan 31 08:44:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 075ee88a-7074-467d-8228-f2d133505dbf does not exist
Jan 31 08:44:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:44:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:44:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:44:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:44:10 compute-0 sudo[380721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:10 compute-0 sudo[380721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:10 compute-0 sudo[380721]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:10 compute-0 ceph-mon[74496]: pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 110 op/s
Jan 31 08:44:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/98158991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:44:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:44:10 compute-0 sudo[380752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:10 compute-0 sudo[380752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:10 compute-0 sudo[380752]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:10 compute-0 podman[380745]: 2026-01-31 08:44:10.353139892 +0000 UTC m=+0.113162385 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:10 compute-0 sudo[380797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:10 compute-0 sudo[380797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:10 compute-0 sudo[380797]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:10 compute-0 sudo[380823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:44:10 compute-0 sudo[380823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 08:44:10 compute-0 podman[380887]: 2026-01-31 08:44:10.728053924 +0000 UTC m=+0.055985813 container create aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tharp, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:44:10 compute-0 systemd[1]: Started libpod-conmon-aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35.scope.
Jan 31 08:44:10 compute-0 podman[380887]: 2026-01-31 08:44:10.697763587 +0000 UTC m=+0.025695526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:44:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:10.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:10 compute-0 podman[380887]: 2026-01-31 08:44:10.824615564 +0000 UTC m=+0.152547403 container init aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tharp, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:44:10 compute-0 podman[380887]: 2026-01-31 08:44:10.834471574 +0000 UTC m=+0.162403423 container start aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:10 compute-0 podman[380887]: 2026-01-31 08:44:10.83803644 +0000 UTC m=+0.165968279 container attach aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:44:10 compute-0 systemd[1]: libpod-aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35.scope: Deactivated successfully.
Jan 31 08:44:10 compute-0 quizzical_tharp[380903]: 167 167
Jan 31 08:44:10 compute-0 conmon[380903]: conmon aa9c071f67f1d5d58cc0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35.scope/container/memory.events
Jan 31 08:44:10 compute-0 podman[380887]: 2026-01-31 08:44:10.845067271 +0000 UTC m=+0.172999120 container died aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a56c36f0a81cf837b1b3cc1053d538b05de9bfd232849270657bf6e9d8ba3fe8-merged.mount: Deactivated successfully.
Jan 31 08:44:10 compute-0 podman[380887]: 2026-01-31 08:44:10.889323608 +0000 UTC m=+0.217255447 container remove aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_tharp, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:10 compute-0 systemd[1]: libpod-conmon-aa9c071f67f1d5d58cc07f80aa460f8466a847c6aec83c552ab642a5963f1c35.scope: Deactivated successfully.
Jan 31 08:44:11 compute-0 podman[380928]: 2026-01-31 08:44:11.082352185 +0000 UTC m=+0.066179091 container create bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:44:11 compute-0 systemd[1]: Started libpod-conmon-bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282.scope.
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.130 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updating instance_info_cache with network_info: [{"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:44:11 compute-0 podman[380928]: 2026-01-31 08:44:11.043134411 +0000 UTC m=+0.026961347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:44:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89c2fbfcfa81d0b3db5552330ff252e8b9d809d9f038eb4d3dd461abc6469890/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89c2fbfcfa81d0b3db5552330ff252e8b9d809d9f038eb4d3dd461abc6469890/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89c2fbfcfa81d0b3db5552330ff252e8b9d809d9f038eb4d3dd461abc6469890/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89c2fbfcfa81d0b3db5552330ff252e8b9d809d9f038eb4d3dd461abc6469890/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89c2fbfcfa81d0b3db5552330ff252e8b9d809d9f038eb4d3dd461abc6469890/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:11 compute-0 podman[380928]: 2026-01-31 08:44:11.171109304 +0000 UTC m=+0.154936230 container init bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:44:11 compute-0 podman[380928]: 2026-01-31 08:44:11.17873293 +0000 UTC m=+0.162559836 container start bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:44:11 compute-0 podman[380928]: 2026-01-31 08:44:11.183126927 +0000 UTC m=+0.166953833 container attach bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:44:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:44:11.214 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:44:11.216 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:44:11.217 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.592 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.593 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.593 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.593 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.593 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.651 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.651 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.652 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.652 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.653 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:11 compute-0 nova_compute[247704]: 2026-01-31 08:44:11.869 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:12 compute-0 elated_matsumoto[380944]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:44:12 compute-0 elated_matsumoto[380944]: --> relative data size: 1.0
Jan 31 08:44:12 compute-0 elated_matsumoto[380944]: --> All data devices are unavailable
Jan 31 08:44:12 compute-0 systemd[1]: libpod-bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282.scope: Deactivated successfully.
Jan 31 08:44:12 compute-0 podman[380928]: 2026-01-31 08:44:12.104054806 +0000 UTC m=+1.087881722 container died bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:44:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:44:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898821738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:12 compute-0 nova_compute[247704]: 2026-01-31 08:44:12.143 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-89c2fbfcfa81d0b3db5552330ff252e8b9d809d9f038eb4d3dd461abc6469890-merged.mount: Deactivated successfully.
Jan 31 08:44:12 compute-0 podman[380928]: 2026-01-31 08:44:12.215589079 +0000 UTC m=+1.199415985 container remove bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:44:12 compute-0 systemd[1]: libpod-conmon-bbcc02776b949dfdbcbd1d9a821ea8dbadc0a8ff39dd31aa53058942e9f6b282.scope: Deactivated successfully.
Jan 31 08:44:12 compute-0 sudo[380823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:12 compute-0 sudo[380996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:12 compute-0 sudo[380996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:12 compute-0 sudo[380996]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:12 compute-0 ceph-mon[74496]: pgmap v3341: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 08:44:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/898821738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:12 compute-0 sudo[381021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:12 compute-0 sudo[381021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:12 compute-0 sudo[381021]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:12 compute-0 sudo[381046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:12 compute-0 sudo[381046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:12 compute-0 sudo[381046]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:12 compute-0 sudo[381071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:44:12 compute-0 sudo[381071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:12.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 08:44:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:12.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:12 compute-0 podman[381137]: 2026-01-31 08:44:12.865699548 +0000 UTC m=+0.087436159 container create 69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:12 compute-0 podman[381137]: 2026-01-31 08:44:12.799149468 +0000 UTC m=+0.020886059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:44:12 compute-0 systemd[1]: Started libpod-conmon-69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031.scope.
Jan 31 08:44:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:12 compute-0 podman[381137]: 2026-01-31 08:44:12.994517003 +0000 UTC m=+0.216253624 container init 69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:44:13 compute-0 podman[381137]: 2026-01-31 08:44:13.002124607 +0000 UTC m=+0.223861188 container start 69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:44:13 compute-0 great_hypatia[381153]: 167 167
Jan 31 08:44:13 compute-0 systemd[1]: libpod-69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031.scope: Deactivated successfully.
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.038 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.038 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:44:13 compute-0 podman[381137]: 2026-01-31 08:44:13.045332359 +0000 UTC m=+0.267068960 container attach 69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:44:13 compute-0 podman[381137]: 2026-01-31 08:44:13.045778809 +0000 UTC m=+0.267515380 container died 69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:44:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ccc2d8b8a477525d51fa1d53fd943e257426f381807daefe5ce46abbef24d22-merged.mount: Deactivated successfully.
Jan 31 08:44:13 compute-0 podman[381137]: 2026-01-31 08:44:13.173012095 +0000 UTC m=+0.394748676 container remove 69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.224 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.228 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3973MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.229 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.229 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:13 compute-0 systemd[1]: libpod-conmon-69deee8ebcde70208f2a9d03c55f4ab72df9ece84a17fed7a0548ff14ea66031.scope: Deactivated successfully.
Jan 31 08:44:13 compute-0 podman[381177]: 2026-01-31 08:44:13.360847905 +0000 UTC m=+0.078107051 container create c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:44:13 compute-0 ceph-mon[74496]: pgmap v3342: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 84 op/s
Jan 31 08:44:13 compute-0 podman[381177]: 2026-01-31 08:44:13.312682874 +0000 UTC m=+0.029942110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:44:13 compute-0 systemd[1]: Started libpod-conmon-c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba.scope.
Jan 31 08:44:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fc42b84c8e65e26af155901352b7c9fbe29e91780726b714f6581453d715f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fc42b84c8e65e26af155901352b7c9fbe29e91780726b714f6581453d715f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fc42b84c8e65e26af155901352b7c9fbe29e91780726b714f6581453d715f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25fc42b84c8e65e26af155901352b7c9fbe29e91780726b714f6581453d715f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:13 compute-0 podman[381177]: 2026-01-31 08:44:13.53897503 +0000 UTC m=+0.256234276 container init c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:44:13 compute-0 podman[381177]: 2026-01-31 08:44:13.550133411 +0000 UTC m=+0.267392557 container start c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:13 compute-0 podman[381177]: 2026-01-31 08:44:13.584482497 +0000 UTC m=+0.301741723 container attach c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.655 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance bcdf8b78-95f0-4cc3-9071-7deaa5a85eee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.657 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.658 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:44:13 compute-0 nova_compute[247704]: 2026-01-31 08:44:13.874 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:14 compute-0 ovn_controller[149457]: 2026-01-31T08:44:14Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:f9:2f 10.100.0.10
Jan 31 08:44:14 compute-0 ovn_controller[149457]: 2026-01-31T08:44:14Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:f9:2f 10.100.0.10
Jan 31 08:44:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:44:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985699015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:14 compute-0 nova_compute[247704]: 2026-01-31 08:44:14.329 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:14 compute-0 nova_compute[247704]: 2026-01-31 08:44:14.338 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]: {
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:     "0": [
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:         {
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "devices": [
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "/dev/loop3"
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             ],
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "lv_name": "ceph_lv0",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "lv_size": "7511998464",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "name": "ceph_lv0",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "tags": {
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.cluster_name": "ceph",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.crush_device_class": "",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.encrypted": "0",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.osd_id": "0",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.type": "block",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:                 "ceph.vdo": "0"
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             },
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "type": "block",
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:             "vg_name": "ceph_vg0"
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:         }
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]:     ]
Jan 31 08:44:14 compute-0 ecstatic_noyce[381193]: }
Jan 31 08:44:14 compute-0 systemd[1]: libpod-c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba.scope: Deactivated successfully.
Jan 31 08:44:14 compute-0 podman[381177]: 2026-01-31 08:44:14.405183197 +0000 UTC m=+1.122442343 container died c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:14 compute-0 nova_compute[247704]: 2026-01-31 08:44:14.487 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:44:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3985699015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:14.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-25fc42b84c8e65e26af155901352b7c9fbe29e91780726b714f6581453d715f3-merged.mount: Deactivated successfully.
Jan 31 08:44:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 180 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 85 op/s
Jan 31 08:44:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:14.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:14 compute-0 podman[381177]: 2026-01-31 08:44:14.996151366 +0000 UTC m=+1.713410552 container remove c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:44:15 compute-0 nova_compute[247704]: 2026-01-31 08:44:15.018 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:44:15 compute-0 nova_compute[247704]: 2026-01-31 08:44:15.019 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:15 compute-0 sudo[381071]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:15 compute-0 systemd[1]: libpod-conmon-c90b69bd0c8dc4963a049cd00a39c852c3e4b5046d2669dcf88cda1882a690ba.scope: Deactivated successfully.
Jan 31 08:44:15 compute-0 sudo[381236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:15 compute-0 sudo[381236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:15 compute-0 sudo[381236]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:15 compute-0 nova_compute[247704]: 2026-01-31 08:44:15.147 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:15 compute-0 NetworkManager[49108]: <info>  [1769849055.1485] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Jan 31 08:44:15 compute-0 NetworkManager[49108]: <info>  [1769849055.1498] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Jan 31 08:44:15 compute-0 nova_compute[247704]: 2026-01-31 08:44:15.173 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:15 compute-0 ovn_controller[149457]: 2026-01-31T08:44:15Z|00818|binding|INFO|Releasing lport 5305ef22-1d04-4b5f-9e47-65b8bd8d2725 from this chassis (sb_readonly=0)
Jan 31 08:44:15 compute-0 nova_compute[247704]: 2026-01-31 08:44:15.187 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:15 compute-0 sudo[381261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:15 compute-0 sudo[381261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:15 compute-0 sudo[381261]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:15 compute-0 sudo[381287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:15 compute-0 sudo[381287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:15 compute-0 sudo[381287]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:15 compute-0 sudo[381312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:44:15 compute-0 sudo[381312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:15 compute-0 ceph-mon[74496]: pgmap v3343: 305 pgs: 305 active+clean; 180 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 85 op/s
Jan 31 08:44:15 compute-0 podman[381378]: 2026-01-31 08:44:15.728782812 +0000 UTC m=+0.110168740 container create 53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:15 compute-0 podman[381378]: 2026-01-31 08:44:15.642973555 +0000 UTC m=+0.024359463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:44:15 compute-0 systemd[1]: Started libpod-conmon-53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc.scope.
Jan 31 08:44:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:16 compute-0 podman[381378]: 2026-01-31 08:44:16.011955803 +0000 UTC m=+0.393341771 container init 53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:44:16 compute-0 podman[381378]: 2026-01-31 08:44:16.020748797 +0000 UTC m=+0.402134705 container start 53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:16 compute-0 compassionate_moore[381395]: 167 167
Jan 31 08:44:16 compute-0 systemd[1]: libpod-53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc.scope: Deactivated successfully.
Jan 31 08:44:16 compute-0 podman[381378]: 2026-01-31 08:44:16.115940173 +0000 UTC m=+0.497326191 container attach 53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_moore, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:16 compute-0 podman[381378]: 2026-01-31 08:44:16.117680095 +0000 UTC m=+0.499066023 container died 53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:44:16.189 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:44:16 compute-0 nova_compute[247704]: 2026-01-31 08:44:16.189 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:44:16.193 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:44:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:44:16.194 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dc04fce48accc7850a50c37603c51303d6d2db0cf436c12f20d43cd5133bc60-merged.mount: Deactivated successfully.
Jan 31 08:44:16 compute-0 podman[381378]: 2026-01-31 08:44:16.555332105 +0000 UTC m=+0.936717993 container remove 53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_moore, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:16 compute-0 systemd[1]: libpod-conmon-53acefa379aaa42b0e08a5fc7c41927c559f5c9226c10a5c0a21e3d6e456ffcc.scope: Deactivated successfully.
Jan 31 08:44:16 compute-0 nova_compute[247704]: 2026-01-31 08:44:16.598 247708 DEBUG nova.compute.manager [req-db7d32cb-ae03-49d0-9aaf-4851bd500c04 req-665bfacf-379a-46f5-b6d5-5c0b16143d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-changed-23e23487-5e42-43f0-8a71-74a84e458eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:44:16 compute-0 nova_compute[247704]: 2026-01-31 08:44:16.598 247708 DEBUG nova.compute.manager [req-db7d32cb-ae03-49d0-9aaf-4851bd500c04 req-665bfacf-379a-46f5-b6d5-5c0b16143d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Refreshing instance network info cache due to event network-changed-23e23487-5e42-43f0-8a71-74a84e458eac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:44:16 compute-0 nova_compute[247704]: 2026-01-31 08:44:16.599 247708 DEBUG oslo_concurrency.lockutils [req-db7d32cb-ae03-49d0-9aaf-4851bd500c04 req-665bfacf-379a-46f5-b6d5-5c0b16143d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:44:16 compute-0 nova_compute[247704]: 2026-01-31 08:44:16.599 247708 DEBUG oslo_concurrency.lockutils [req-db7d32cb-ae03-49d0-9aaf-4851bd500c04 req-665bfacf-379a-46f5-b6d5-5c0b16143d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:44:16 compute-0 nova_compute[247704]: 2026-01-31 08:44:16.599 247708 DEBUG nova.network.neutron [req-db7d32cb-ae03-49d0-9aaf-4851bd500c04 req-665bfacf-379a-46f5-b6d5-5c0b16143d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Refreshing network info cache for port 23e23487-5e42-43f0-8a71-74a84e458eac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:44:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:16.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 193 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 803 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 08:44:16 compute-0 nova_compute[247704]: 2026-01-31 08:44:16.760 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:16 compute-0 podman[381421]: 2026-01-31 08:44:16.696913819 +0000 UTC m=+0.025447750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:44:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:16.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:16 compute-0 podman[381421]: 2026-01-31 08:44:16.829194748 +0000 UTC m=+0.157728669 container create 03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:44:16 compute-0 nova_compute[247704]: 2026-01-31 08:44:16.871 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:16 compute-0 systemd[1]: Started libpod-conmon-03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b.scope.
Jan 31 08:44:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d043e0f66689c9954b5bed31be80d6f9caf4f6653769f565a8a33c191f53327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d043e0f66689c9954b5bed31be80d6f9caf4f6653769f565a8a33c191f53327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d043e0f66689c9954b5bed31be80d6f9caf4f6653769f565a8a33c191f53327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d043e0f66689c9954b5bed31be80d6f9caf4f6653769f565a8a33c191f53327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:17 compute-0 podman[381421]: 2026-01-31 08:44:17.037981079 +0000 UTC m=+0.366515010 container init 03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 08:44:17 compute-0 podman[381421]: 2026-01-31 08:44:17.045915611 +0000 UTC m=+0.374449522 container start 03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yonath, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:44:17 compute-0 podman[381421]: 2026-01-31 08:44:17.086433317 +0000 UTC m=+0.414967318 container attach 03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yonath, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:44:17 compute-0 determined_yonath[381437]: {
Jan 31 08:44:17 compute-0 determined_yonath[381437]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:44:17 compute-0 determined_yonath[381437]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:44:17 compute-0 determined_yonath[381437]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:44:17 compute-0 determined_yonath[381437]:         "osd_id": 0,
Jan 31 08:44:17 compute-0 determined_yonath[381437]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:44:17 compute-0 determined_yonath[381437]:         "type": "bluestore"
Jan 31 08:44:17 compute-0 determined_yonath[381437]:     }
Jan 31 08:44:17 compute-0 determined_yonath[381437]: }
Jan 31 08:44:17 compute-0 systemd[1]: libpod-03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b.scope: Deactivated successfully.
Jan 31 08:44:17 compute-0 conmon[381437]: conmon 03b79efe47f9cb037052 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b.scope/container/memory.events
Jan 31 08:44:17 compute-0 podman[381421]: 2026-01-31 08:44:17.861128847 +0000 UTC m=+1.189662758 container died 03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d043e0f66689c9954b5bed31be80d6f9caf4f6653769f565a8a33c191f53327-merged.mount: Deactivated successfully.
Jan 31 08:44:18 compute-0 ceph-mon[74496]: pgmap v3344: 305 pgs: 305 active+clean; 193 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 803 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 08:44:18 compute-0 sudo[381472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:18 compute-0 sudo[381472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:18 compute-0 sudo[381472]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:18 compute-0 sudo[381497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:18 compute-0 sudo[381497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:18 compute-0 sudo[381497]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:18 compute-0 podman[381421]: 2026-01-31 08:44:18.531277144 +0000 UTC m=+1.859811055 container remove 03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:44:18 compute-0 sudo[381312]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:44:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:18.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:18 compute-0 systemd[1]: libpod-conmon-03b79efe47f9cb037052d301fecfb54b76eb43cca90bb605b66cf66da3f08e3b.scope: Deactivated successfully.
Jan 31 08:44:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:44:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 195 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 08:44:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 14de0c1b-0526-4177-a0ac-9333ae70e9ed does not exist
Jan 31 08:44:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e86fbe28-d30b-4ce5-baaf-1e789df5fb5b does not exist
Jan 31 08:44:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2e88bcce-fe1e-4c14-915e-c603e33a0707 does not exist
Jan 31 08:44:18 compute-0 sudo[381522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:18 compute-0 sudo[381522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:18 compute-0 sudo[381522]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:18.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:18 compute-0 sudo[381547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:44:18 compute-0 sudo[381547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:18 compute-0 sudo[381547]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:19 compute-0 ceph-mon[74496]: pgmap v3345: 305 pgs: 305 active+clean; 195 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 08:44:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:44:20
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'images', 'default.rgw.control', 'vms']
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:44:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:20.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:44:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:20.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:44:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:44:21 compute-0 nova_compute[247704]: 2026-01-31 08:44:21.762 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:21 compute-0 ceph-mon[74496]: pgmap v3346: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:21 compute-0 nova_compute[247704]: 2026-01-31 08:44:21.873 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:22 compute-0 nova_compute[247704]: 2026-01-31 08:44:22.013 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:22.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:23 compute-0 ceph-mon[74496]: pgmap v3347: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:25 compute-0 nova_compute[247704]: 2026-01-31 08:44:25.033 247708 DEBUG nova.network.neutron [req-db7d32cb-ae03-49d0-9aaf-4851bd500c04 req-665bfacf-379a-46f5-b6d5-5c0b16143d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updated VIF entry in instance network info cache for port 23e23487-5e42-43f0-8a71-74a84e458eac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:44:25 compute-0 nova_compute[247704]: 2026-01-31 08:44:25.034 247708 DEBUG nova.network.neutron [req-db7d32cb-ae03-49d0-9aaf-4851bd500c04 req-665bfacf-379a-46f5-b6d5-5c0b16143d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updating instance_info_cache with network_info: [{"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:44:25 compute-0 nova_compute[247704]: 2026-01-31 08:44:25.358 247708 DEBUG oslo_concurrency.lockutils [req-db7d32cb-ae03-49d0-9aaf-4851bd500c04 req-665bfacf-379a-46f5-b6d5-5c0b16143d11 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:44:25 compute-0 ceph-mon[74496]: pgmap v3348: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:44:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:26.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 762 KiB/s wr, 31 op/s
Jan 31 08:44:26 compute-0 nova_compute[247704]: 2026-01-31 08:44:26.767 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:26.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:26 compute-0 nova_compute[247704]: 2026-01-31 08:44:26.875 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:27 compute-0 ceph-mon[74496]: pgmap v3349: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 762 KiB/s wr, 31 op/s
Jan 31 08:44:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:28.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 92 KiB/s wr, 16 op/s
Jan 31 08:44:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:28.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:29 compute-0 podman[381578]: 2026-01-31 08:44:29.890718545 +0000 UTC m=+0.054204810 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Jan 31 08:44:29 compute-0 ceph-mon[74496]: pgmap v3350: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 92 KiB/s wr, 16 op/s
Jan 31 08:44:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:30.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 65 KiB/s wr, 10 op/s
Jan 31 08:44:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:30.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:31 compute-0 nova_compute[247704]: 2026-01-31 08:44:31.768 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:31 compute-0 nova_compute[247704]: 2026-01-31 08:44:31.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:31 compute-0 ceph-mon[74496]: pgmap v3351: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 65 KiB/s wr, 10 op/s
Jan 31 08:44:32 compute-0 ovn_controller[149457]: 2026-01-31T08:44:32Z|00819|binding|INFO|Releasing lport 5305ef22-1d04-4b5f-9e47-65b8bd8d2725 from this chassis (sb_readonly=0)
Jan 31 08:44:32 compute-0 nova_compute[247704]: 2026-01-31 08:44:32.129 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:32.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Jan 31 08:44:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:32.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:32 compute-0 nova_compute[247704]: 2026-01-31 08:44:32.931 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:33 compute-0 ceph-mon[74496]: pgmap v3352: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Jan 31 08:44:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:44:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 73K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1554 writes, 6631 keys, 1552 commit groups, 1.0 writes per commit group, ingest: 10.59 MB, 0.02 MB/s
                                           Interval WAL: 1554 writes, 1552 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     41.6      2.36              0.26        51    0.046       0      0       0.0       0.0
                                             L6      1/0   11.48 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2     60.4     51.5      9.82              1.31        50    0.196    372K    27K       0.0       0.0
                                            Sum      1/0   11.48 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     48.7     49.6     12.18              1.57       101    0.121    372K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.5     32.8     33.5      2.07              0.20        10    0.207     51K   2591       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     60.4     51.5      9.82              1.31        50    0.196    372K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     41.6      2.36              0.26        50    0.047       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.096, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.59 GB write, 0.10 MB/s write, 0.58 GB read, 0.10 MB/s read, 12.2 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 2.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 65.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000564 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3816,62.63 MB,20.6009%) FilterBlock(102,1.00 MB,0.329344%) IndexBlock(102,1.68 MB,0.551284%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:44:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:34.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:44:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021699571352131026 of space, bias 1.0, pg target 0.6509871405639308 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:44:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:44:35 compute-0 ceph-mon[74496]: pgmap v3353: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:44:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:36.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:44:36 compute-0 nova_compute[247704]: 2026-01-31 08:44:36.771 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:36.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:36 compute-0 nova_compute[247704]: 2026-01-31 08:44:36.880 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/225521445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:38 compute-0 ceph-mon[74496]: pgmap v3354: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:44:38 compute-0 sudo[381603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:38 compute-0 sudo[381603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:38 compute-0 sudo[381603]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:38 compute-0 sudo[381628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:38 compute-0 sudo[381628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:38 compute-0 sudo[381628]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:38.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:40 compute-0 ceph-mon[74496]: pgmap v3355: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:40.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:40.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:40 compute-0 podman[381654]: 2026-01-31 08:44:40.929541934 +0000 UTC m=+0.094481360 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:41 compute-0 nova_compute[247704]: 2026-01-31 08:44:41.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:41 compute-0 nova_compute[247704]: 2026-01-31 08:44:41.882 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:42 compute-0 ceph-mon[74496]: pgmap v3356: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:42.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:43 compute-0 ceph-mon[74496]: pgmap v3357: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:44:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:44.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 229 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 867 KiB/s wr, 14 op/s
Jan 31 08:44:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:45 compute-0 ceph-mon[74496]: pgmap v3358: 305 pgs: 305 active+clean; 229 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 867 KiB/s wr, 14 op/s
Jan 31 08:44:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:46.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:46 compute-0 nova_compute[247704]: 2026-01-31 08:44:46.778 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:46.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:46 compute-0 nova_compute[247704]: 2026-01-31 08:44:46.884 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:47 compute-0 nova_compute[247704]: 2026-01-31 08:44:47.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:47 compute-0 ceph-mon[74496]: pgmap v3359: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:49 compute-0 ceph-mon[74496]: pgmap v3360: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:50.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:50.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:51 compute-0 nova_compute[247704]: 2026-01-31 08:44:51.782 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:51 compute-0 nova_compute[247704]: 2026-01-31 08:44:51.886 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:51 compute-0 ceph-mon[74496]: pgmap v3361: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:52 compute-0 nova_compute[247704]: 2026-01-31 08:44:52.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:52.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:44:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:44:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:44:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3864210388' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:44:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:44:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3864210388' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:44:53 compute-0 ceph-mon[74496]: pgmap v3362: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3864210388' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:44:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3864210388' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:44:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3662965045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:44:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:54.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:44:54.692 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:44:54 compute-0 nova_compute[247704]: 2026-01-31 08:44:54.693 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:44:54.693 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:44:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:55 compute-0 ceph-mon[74496]: pgmap v3363: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:44:56 compute-0 nova_compute[247704]: 2026-01-31 08:44:56.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 950 KiB/s wr, 13 op/s
Jan 31 08:44:56 compute-0 nova_compute[247704]: 2026-01-31 08:44:56.784 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:44:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:44:56 compute-0 nova_compute[247704]: 2026-01-31 08:44:56.887 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:44:58 compute-0 ceph-mon[74496]: pgmap v3364: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 950 KiB/s wr, 13 op/s
Jan 31 08:44:58 compute-0 nova_compute[247704]: 2026-01-31 08:44:58.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:58 compute-0 nova_compute[247704]: 2026-01-31 08:44:58.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:44:58 compute-0 sudo[381690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:58 compute-0 sudo[381690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:58 compute-0 sudo[381690]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:58 compute-0 sudo[381715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:44:58 compute-0 sudo[381715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:58 compute-0 sudo[381715]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:58.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 254 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 386 KiB/s wr, 13 op/s
Jan 31 08:44:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:44:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:44:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:58.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:44:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:00 compute-0 ceph-mon[74496]: pgmap v3365: 305 pgs: 305 active+clean; 254 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 386 KiB/s wr, 13 op/s
Jan 31 08:45:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:45:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:00.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:00 compute-0 podman[381741]: 2026-01-31 08:45:00.89294306 +0000 UTC m=+0.060941633 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 08:45:01 compute-0 nova_compute[247704]: 2026-01-31 08:45:01.786 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:01 compute-0 nova_compute[247704]: 2026-01-31 08:45:01.889 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:02 compute-0 ceph-mon[74496]: pgmap v3366: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:45:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:02.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:45:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:02.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:04 compute-0 ceph-mon[74496]: pgmap v3367: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:45:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2843535445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:04.695 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:04.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:04.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1282128364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:05 compute-0 nova_compute[247704]: 2026-01-31 08:45:05.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:05 compute-0 nova_compute[247704]: 2026-01-31 08:45:05.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:45:05 compute-0 nova_compute[247704]: 2026-01-31 08:45:05.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:45:06 compute-0 ceph-mon[74496]: pgmap v3368: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:06.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:06 compute-0 nova_compute[247704]: 2026-01-31 08:45:06.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:06 compute-0 nova_compute[247704]: 2026-01-31 08:45:06.891 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:07 compute-0 nova_compute[247704]: 2026-01-31 08:45:07.418 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:45:07 compute-0 nova_compute[247704]: 2026-01-31 08:45:07.419 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:45:07 compute-0 nova_compute[247704]: 2026-01-31 08:45:07.419 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:45:07 compute-0 nova_compute[247704]: 2026-01-31 08:45:07.420 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bcdf8b78-95f0-4cc3-9071-7deaa5a85eee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:45:08 compute-0 ceph-mon[74496]: pgmap v3369: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4221698089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:08.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:08.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3818538128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:10 compute-0 ceph-mon[74496]: pgmap v3370: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:45:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2327328053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:10.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 31 08:45:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:10.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:11.216 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:11.216 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:11.217 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:11 compute-0 nova_compute[247704]: 2026-01-31 08:45:11.793 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:11 compute-0 nova_compute[247704]: 2026-01-31 08:45:11.893 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:11 compute-0 podman[381766]: 2026-01-31 08:45:11.928221702 +0000 UTC m=+0.094362396 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:45:12 compute-0 ceph-mon[74496]: pgmap v3371: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.4 MiB/s wr, 15 op/s
Jan 31 08:45:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:12.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 31 08:45:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:13 compute-0 ceph-mon[74496]: pgmap v3372: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.3 KiB/s wr, 1 op/s
Jan 31 08:45:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/337564026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:13 compute-0 nova_compute[247704]: 2026-01-31 08:45:13.761 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updating instance_info_cache with network_info: [{"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.065 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.066 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.066 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.066 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.067 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.173 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.174 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.174 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.174 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.174 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:45:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664369261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.597 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3664369261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:14.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 8.4 KiB/s wr, 1 op/s
Jan 31 08:45:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.977 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:45:14 compute-0 nova_compute[247704]: 2026-01-31 08:45:14.978 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:45:15 compute-0 nova_compute[247704]: 2026-01-31 08:45:15.134 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:45:15 compute-0 nova_compute[247704]: 2026-01-31 08:45:15.136 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4089MB free_disk=20.901229858398438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:45:15 compute-0 nova_compute[247704]: 2026-01-31 08:45:15.136 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:15 compute-0 nova_compute[247704]: 2026-01-31 08:45:15.136 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:15 compute-0 ceph-mon[74496]: pgmap v3373: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 8.4 KiB/s wr, 1 op/s
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.164 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance bcdf8b78-95f0-4cc3-9071-7deaa5a85eee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.165 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.165 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.234 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:45:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3680370433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.666 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.672 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.701 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.703 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.704 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:16.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3680370433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.796 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:16.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:16 compute-0 nova_compute[247704]: 2026-01-31 08:45:16.894 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:17 compute-0 ceph-mon[74496]: pgmap v3374: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 31 08:45:18 compute-0 sudo[381842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:18 compute-0 sudo[381842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:18 compute-0 sudo[381842]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:18.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:45:18 compute-0 sudo[381867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:18 compute-0 sudo[381867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:18 compute-0 sudo[381867]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:19 compute-0 sudo[381892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:19 compute-0 sudo[381892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:19 compute-0 sudo[381892]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:19 compute-0 sudo[381917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:19 compute-0 sudo[381917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:19 compute-0 sudo[381917]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:19 compute-0 sudo[381942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:19 compute-0 sudo[381942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:19 compute-0 sudo[381942]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:19 compute-0 sudo[381967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:45:19 compute-0 sudo[381967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:19 compute-0 sudo[381967]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:19 compute-0 ceph-mon[74496]: pgmap v3375: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:45:20
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:45:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:45:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:45:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:20.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 45 op/s
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:45:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:45:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:20.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:45:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:45:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:45:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b8e76d14-5e47-4dff-ba81-1154f83d2bec does not exist
Jan 31 08:45:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4c73a079-b819-4c76-aead-1cb7a1753ab2 does not exist
Jan 31 08:45:21 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5e475a07-2885-46cd-9d34-f574fc1108c0 does not exist
Jan 31 08:45:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:45:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:45:21 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:45:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:45:21 compute-0 sudo[382026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:21 compute-0 sudo[382026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:21 compute-0 sudo[382026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:21 compute-0 sudo[382051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:21 compute-0 sudo[382051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:21 compute-0 sudo[382051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:21 compute-0 sudo[382076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:21 compute-0 sudo[382076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:21 compute-0 sudo[382076]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:21 compute-0 sudo[382101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:45:21 compute-0 sudo[382101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:21 compute-0 ceph-mon[74496]: pgmap v3376: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 45 op/s
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3518480498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:45:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:45:21 compute-0 nova_compute[247704]: 2026-01-31 08:45:21.698 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:21 compute-0 nova_compute[247704]: 2026-01-31 08:45:21.698 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:21 compute-0 podman[382167]: 2026-01-31 08:45:21.753543723 +0000 UTC m=+0.049305980 container create 7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:21 compute-0 nova_compute[247704]: 2026-01-31 08:45:21.798 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:21 compute-0 systemd[1]: Started libpod-conmon-7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90.scope.
Jan 31 08:45:21 compute-0 podman[382167]: 2026-01-31 08:45:21.725491801 +0000 UTC m=+0.021254148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:45:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:21 compute-0 podman[382167]: 2026-01-31 08:45:21.850830621 +0000 UTC m=+0.146592958 container init 7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:45:21 compute-0 podman[382167]: 2026-01-31 08:45:21.861780948 +0000 UTC m=+0.157543205 container start 7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Jan 31 08:45:21 compute-0 podman[382167]: 2026-01-31 08:45:21.865304153 +0000 UTC m=+0.161066490 container attach 7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:21 compute-0 suspicious_swanson[382183]: 167 167
Jan 31 08:45:21 compute-0 systemd[1]: libpod-7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90.scope: Deactivated successfully.
Jan 31 08:45:21 compute-0 podman[382167]: 2026-01-31 08:45:21.869882964 +0000 UTC m=+0.165645251 container died 7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-327c2ec10065766394018d5ad486514548e696c10aeeea434549a94a6920b96b-merged.mount: Deactivated successfully.
Jan 31 08:45:21 compute-0 nova_compute[247704]: 2026-01-31 08:45:21.896 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:21 compute-0 podman[382167]: 2026-01-31 08:45:21.91163747 +0000 UTC m=+0.207399757 container remove 7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:45:21 compute-0 systemd[1]: libpod-conmon-7f3f860ddbcb983571bae1697c608134310d2c357e53820b9970caa4928d6a90.scope: Deactivated successfully.
Jan 31 08:45:22 compute-0 podman[382208]: 2026-01-31 08:45:22.115512831 +0000 UTC m=+0.057490320 container create d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:22 compute-0 systemd[1]: Started libpod-conmon-d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae.scope.
Jan 31 08:45:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:22 compute-0 podman[382208]: 2026-01-31 08:45:22.095063733 +0000 UTC m=+0.037041242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c449eea68a34bfb9c7dd056b9b77845a1c7c16532fe3c28dee074cc60a3c00f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c449eea68a34bfb9c7dd056b9b77845a1c7c16532fe3c28dee074cc60a3c00f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c449eea68a34bfb9c7dd056b9b77845a1c7c16532fe3c28dee074cc60a3c00f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c449eea68a34bfb9c7dd056b9b77845a1c7c16532fe3c28dee074cc60a3c00f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c449eea68a34bfb9c7dd056b9b77845a1c7c16532fe3c28dee074cc60a3c00f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:22 compute-0 podman[382208]: 2026-01-31 08:45:22.218709843 +0000 UTC m=+0.160687302 container init d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:45:22 compute-0 podman[382208]: 2026-01-31 08:45:22.230319005 +0000 UTC m=+0.172296454 container start d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:45:22 compute-0 podman[382208]: 2026-01-31 08:45:22.233733477 +0000 UTC m=+0.175710926 container attach d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:45:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3257554958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:45:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 45 op/s
Jan 31 08:45:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:22.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:23 compute-0 stupefied_wu[382224]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:45:23 compute-0 stupefied_wu[382224]: --> relative data size: 1.0
Jan 31 08:45:23 compute-0 stupefied_wu[382224]: --> All data devices are unavailable
Jan 31 08:45:23 compute-0 systemd[1]: libpod-d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae.scope: Deactivated successfully.
Jan 31 08:45:23 compute-0 podman[382208]: 2026-01-31 08:45:23.069184686 +0000 UTC m=+1.011162135 container died d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c449eea68a34bfb9c7dd056b9b77845a1c7c16532fe3c28dee074cc60a3c00f-merged.mount: Deactivated successfully.
Jan 31 08:45:23 compute-0 podman[382208]: 2026-01-31 08:45:23.122504123 +0000 UTC m=+1.064481572 container remove d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:45:23 compute-0 systemd[1]: libpod-conmon-d19500533abec2e5c6360a9cb16cf563bdafca9be071768788604ab24c4bd6ae.scope: Deactivated successfully.
Jan 31 08:45:23 compute-0 sudo[382101]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:23 compute-0 sudo[382255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:23 compute-0 sudo[382255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:23 compute-0 sudo[382255]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:23 compute-0 sudo[382280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:23 compute-0 sudo[382280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:23 compute-0 sudo[382280]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:23 compute-0 sudo[382305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:23 compute-0 sudo[382305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:23 compute-0 sudo[382305]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:23 compute-0 sudo[382330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:45:23 compute-0 sudo[382330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:23 compute-0 ceph-mon[74496]: pgmap v3377: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 45 op/s
Jan 31 08:45:23 compute-0 podman[382396]: 2026-01-31 08:45:23.800857879 +0000 UTC m=+0.044558685 container create f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:45:23 compute-0 systemd[1]: Started libpod-conmon-f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6.scope.
Jan 31 08:45:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:23 compute-0 podman[382396]: 2026-01-31 08:45:23.782139783 +0000 UTC m=+0.025840599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:45:23 compute-0 podman[382396]: 2026-01-31 08:45:23.880263151 +0000 UTC m=+0.123963967 container init f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:45:23 compute-0 podman[382396]: 2026-01-31 08:45:23.885770645 +0000 UTC m=+0.129471431 container start f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:45:23 compute-0 podman[382396]: 2026-01-31 08:45:23.889292741 +0000 UTC m=+0.132993537 container attach f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 08:45:23 compute-0 adoring_keldysh[382412]: 167 167
Jan 31 08:45:23 compute-0 systemd[1]: libpod-f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6.scope: Deactivated successfully.
Jan 31 08:45:23 compute-0 podman[382396]: 2026-01-31 08:45:23.893059302 +0000 UTC m=+0.136760098 container died f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-303c818fa1ed36242bf822aa0615baed61802e0168fbf558f47ad015f1ade81a-merged.mount: Deactivated successfully.
Jan 31 08:45:23 compute-0 podman[382396]: 2026-01-31 08:45:23.930051883 +0000 UTC m=+0.173752679 container remove f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_keldysh, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:45:23 compute-0 systemd[1]: libpod-conmon-f49b5344f60f60720507ca0e63ef6efda6cf75c4d9320f8c6a9dec947fd1a7b6.scope: Deactivated successfully.
Jan 31 08:45:24 compute-0 podman[382436]: 2026-01-31 08:45:24.083163328 +0000 UTC m=+0.041223584 container create fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:45:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:24 compute-0 systemd[1]: Started libpod-conmon-fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1.scope.
Jan 31 08:45:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86acd610741d4fe60fabfcd7862a7c63ca44af22546c5b8ca7d5b21b18e52f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86acd610741d4fe60fabfcd7862a7c63ca44af22546c5b8ca7d5b21b18e52f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86acd610741d4fe60fabfcd7862a7c63ca44af22546c5b8ca7d5b21b18e52f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86acd610741d4fe60fabfcd7862a7c63ca44af22546c5b8ca7d5b21b18e52f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:24 compute-0 podman[382436]: 2026-01-31 08:45:24.149660196 +0000 UTC m=+0.107720542 container init fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:45:24 compute-0 podman[382436]: 2026-01-31 08:45:24.157033995 +0000 UTC m=+0.115094251 container start fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:45:24 compute-0 podman[382436]: 2026-01-31 08:45:24.063240593 +0000 UTC m=+0.021300879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:45:24 compute-0 podman[382436]: 2026-01-31 08:45:24.160952991 +0000 UTC m=+0.119013307 container attach fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:45:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:24.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Jan 31 08:45:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:24.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:24 compute-0 friendly_gould[382452]: {
Jan 31 08:45:24 compute-0 friendly_gould[382452]:     "0": [
Jan 31 08:45:24 compute-0 friendly_gould[382452]:         {
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "devices": [
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "/dev/loop3"
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             ],
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "lv_name": "ceph_lv0",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "lv_size": "7511998464",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "name": "ceph_lv0",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "tags": {
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.cluster_name": "ceph",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.crush_device_class": "",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.encrypted": "0",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.osd_id": "0",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.type": "block",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:                 "ceph.vdo": "0"
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             },
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "type": "block",
Jan 31 08:45:24 compute-0 friendly_gould[382452]:             "vg_name": "ceph_vg0"
Jan 31 08:45:24 compute-0 friendly_gould[382452]:         }
Jan 31 08:45:24 compute-0 friendly_gould[382452]:     ]
Jan 31 08:45:24 compute-0 friendly_gould[382452]: }
Jan 31 08:45:24 compute-0 systemd[1]: libpod-fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1.scope: Deactivated successfully.
Jan 31 08:45:24 compute-0 conmon[382452]: conmon fc9467c8d512396c7c74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1.scope/container/memory.events
Jan 31 08:45:24 compute-0 podman[382436]: 2026-01-31 08:45:24.943371089 +0000 UTC m=+0.901431345 container died fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:45:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f86acd610741d4fe60fabfcd7862a7c63ca44af22546c5b8ca7d5b21b18e52f4-merged.mount: Deactivated successfully.
Jan 31 08:45:25 compute-0 podman[382436]: 2026-01-31 08:45:25.008192326 +0000 UTC m=+0.966252602 container remove fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:45:25 compute-0 systemd[1]: libpod-conmon-fc9467c8d512396c7c74362eda5eae8ccc3d0d3ebe3b4a9d9c9646d5f79102e1.scope: Deactivated successfully.
Jan 31 08:45:25 compute-0 sudo[382330]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:25 compute-0 sudo[382476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:25 compute-0 sudo[382476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:25 compute-0 sudo[382476]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:25 compute-0 sudo[382501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:25 compute-0 sudo[382501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:25 compute-0 sudo[382501]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:25 compute-0 sudo[382527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:25 compute-0 sudo[382527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:25 compute-0 sudo[382527]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:25 compute-0 sudo[382552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:45:25 compute-0 sudo[382552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:25 compute-0 podman[382616]: 2026-01-31 08:45:25.583241638 +0000 UTC m=+0.048566843 container create fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:45:25 compute-0 systemd[1]: Started libpod-conmon-fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2.scope.
Jan 31 08:45:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:25 compute-0 podman[382616]: 2026-01-31 08:45:25.55866465 +0000 UTC m=+0.023990095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:45:25 compute-0 podman[382616]: 2026-01-31 08:45:25.661268046 +0000 UTC m=+0.126593261 container init fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:45:25 compute-0 podman[382616]: 2026-01-31 08:45:25.667237641 +0000 UTC m=+0.132562846 container start fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:45:25 compute-0 podman[382616]: 2026-01-31 08:45:25.670070971 +0000 UTC m=+0.135396176 container attach fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:45:25 compute-0 fervent_shirley[382632]: 167 167
Jan 31 08:45:25 compute-0 systemd[1]: libpod-fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2.scope: Deactivated successfully.
Jan 31 08:45:25 compute-0 podman[382616]: 2026-01-31 08:45:25.672362436 +0000 UTC m=+0.137687701 container died fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-27a37d112bee75f2f13ce32b8a7adaa47a0fb2add8a2fee0e54eac277e47992f-merged.mount: Deactivated successfully.
Jan 31 08:45:25 compute-0 podman[382616]: 2026-01-31 08:45:25.708100606 +0000 UTC m=+0.173425811 container remove fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:45:25 compute-0 systemd[1]: libpod-conmon-fd21b8a4132e0bc9c4ac1136429de9c79f630777f73698a1ababa2186bb8dfc2.scope: Deactivated successfully.
Jan 31 08:45:25 compute-0 podman[382654]: 2026-01-31 08:45:25.870933828 +0000 UTC m=+0.059616722 container create 0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:45:25 compute-0 systemd[1]: Started libpod-conmon-0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f.scope.
Jan 31 08:45:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:25 compute-0 podman[382654]: 2026-01-31 08:45:25.849830505 +0000 UTC m=+0.038513439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d280abdd727b48929381a346eea43def200e5a210166f956119e54b5949caa28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d280abdd727b48929381a346eea43def200e5a210166f956119e54b5949caa28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d280abdd727b48929381a346eea43def200e5a210166f956119e54b5949caa28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d280abdd727b48929381a346eea43def200e5a210166f956119e54b5949caa28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:25 compute-0 ceph-mon[74496]: pgmap v3378: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Jan 31 08:45:25 compute-0 podman[382654]: 2026-01-31 08:45:25.972197362 +0000 UTC m=+0.160880326 container init 0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 08:45:25 compute-0 podman[382654]: 2026-01-31 08:45:25.979404187 +0000 UTC m=+0.168087101 container start 0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:45:25 compute-0 podman[382654]: 2026-01-31 08:45:25.983045726 +0000 UTC m=+0.171728650 container attach 0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:45:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:26.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 80 op/s
Jan 31 08:45:26 compute-0 nova_compute[247704]: 2026-01-31 08:45:26.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:26 compute-0 jolly_lamport[382671]: {
Jan 31 08:45:26 compute-0 jolly_lamport[382671]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:45:26 compute-0 jolly_lamport[382671]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:45:26 compute-0 jolly_lamport[382671]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:45:26 compute-0 jolly_lamport[382671]:         "osd_id": 0,
Jan 31 08:45:26 compute-0 jolly_lamport[382671]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:45:26 compute-0 jolly_lamport[382671]:         "type": "bluestore"
Jan 31 08:45:26 compute-0 jolly_lamport[382671]:     }
Jan 31 08:45:26 compute-0 jolly_lamport[382671]: }
Jan 31 08:45:26 compute-0 systemd[1]: libpod-0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f.scope: Deactivated successfully.
Jan 31 08:45:26 compute-0 podman[382654]: 2026-01-31 08:45:26.890024245 +0000 UTC m=+1.078707149 container died 0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:45:26 compute-0 nova_compute[247704]: 2026-01-31 08:45:26.898 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d280abdd727b48929381a346eea43def200e5a210166f956119e54b5949caa28-merged.mount: Deactivated successfully.
Jan 31 08:45:26 compute-0 podman[382654]: 2026-01-31 08:45:26.950044516 +0000 UTC m=+1.138727400 container remove 0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lamport, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:45:26 compute-0 systemd[1]: libpod-conmon-0d54d4fdc6259f8371fd52991762fb7870e9b328116fbedd892ea8e968a7d91f.scope: Deactivated successfully.
Jan 31 08:45:26 compute-0 sudo[382552]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:45:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:45:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6411e096-2601-4ffd-b3cd-75b6284390ca does not exist
Jan 31 08:45:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 02cd871c-4115-4901-995d-5e7142a80b3f does not exist
Jan 31 08:45:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 965a462c-0c4d-464a-aa19-0d942da05602 does not exist
Jan 31 08:45:27 compute-0 sudo[382708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:27 compute-0 sudo[382708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:27 compute-0 sudo[382708]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:27 compute-0 sudo[382733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:45:27 compute-0 sudo[382733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:27 compute-0 sudo[382733]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:27 compute-0 nova_compute[247704]: 2026-01-31 08:45:27.537 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:28 compute-0 ceph-mon[74496]: pgmap v3379: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 80 op/s
Jan 31 08:45:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:45:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:28.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 87 op/s
Jan 31 08:45:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:28.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:30 compute-0 ceph-mon[74496]: pgmap v3380: 305 pgs: 305 active+clean; 293 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 87 op/s
Jan 31 08:45:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:30.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 296 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 352 KiB/s wr, 144 op/s
Jan 31 08:45:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:30.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:31 compute-0 nova_compute[247704]: 2026-01-31 08:45:31.803 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:31 compute-0 nova_compute[247704]: 2026-01-31 08:45:31.900 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:31 compute-0 podman[382760]: 2026-01-31 08:45:31.909048577 +0000 UTC m=+0.082059868 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:45:32 compute-0 ceph-mon[74496]: pgmap v3381: 305 pgs: 305 active+clean; 296 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 352 KiB/s wr, 144 op/s
Jan 31 08:45:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:32.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 296 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 352 KiB/s wr, 109 op/s
Jan 31 08:45:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:32.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:34 compute-0 ceph-mon[74496]: pgmap v3382: 305 pgs: 305 active+clean; 296 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 352 KiB/s wr, 109 op/s
Jan 31 08:45:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:34.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 324 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Jan 31 08:45:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:34.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005329563035082822 of space, bias 1.0, pg target 1.5988689105248466 quantized to 32 (current 32)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:45:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:45:36 compute-0 ceph-mon[74496]: pgmap v3383: 305 pgs: 305 active+clean; 324 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 154 op/s
Jan 31 08:45:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:36.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:45:36 compute-0 nova_compute[247704]: 2026-01-31 08:45:36.804 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:36 compute-0 nova_compute[247704]: 2026-01-31 08:45:36.902 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:36.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:38 compute-0 ceph-mon[74496]: pgmap v3384: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 08:45:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:38.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 337 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 135 op/s
Jan 31 08:45:38 compute-0 sudo[382781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:38 compute-0 sudo[382781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:38 compute-0 sudo[382781]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:38 compute-0 sudo[382806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:38 compute-0 sudo[382806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:38 compute-0 sudo[382806]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:38.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:40 compute-0 ceph-mon[74496]: pgmap v3385: 305 pgs: 305 active+clean; 337 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 135 op/s
Jan 31 08:45:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:40.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 311 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Jan 31 08:45:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:40.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:41 compute-0 ceph-mon[74496]: pgmap v3386: 305 pgs: 305 active+clean; 311 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 200 op/s
Jan 31 08:45:41 compute-0 nova_compute[247704]: 2026-01-31 08:45:41.807 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:41 compute-0 nova_compute[247704]: 2026-01-31 08:45:41.904 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:42 compute-0 nova_compute[247704]: 2026-01-31 08:45:42.273 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:42.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 311 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 634 KiB/s rd, 3.9 MiB/s wr, 142 op/s
Jan 31 08:45:42 compute-0 podman[382833]: 2026-01-31 08:45:42.919678858 +0000 UTC m=+0.095057233 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:45:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:45:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:42.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:45:43 compute-0 ceph-mon[74496]: pgmap v3387: 305 pgs: 305 active+clean; 311 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 634 KiB/s rd, 3.9 MiB/s wr, 142 op/s
Jan 31 08:45:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:44 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:45:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 638 KiB/s rd, 4.0 MiB/s wr, 149 op/s
Jan 31 08:45:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:44.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:44.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:46 compute-0 ceph-mon[74496]: pgmap v3388: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 638 KiB/s rd, 4.0 MiB/s wr, 149 op/s
Jan 31 08:45:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3035427977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:45:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Jan 31 08:45:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:46.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:46 compute-0 nova_compute[247704]: 2026-01-31 08:45:46.810 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:46 compute-0 nova_compute[247704]: 2026-01-31 08:45:46.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:46.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:47 compute-0 nova_compute[247704]: 2026-01-31 08:45:47.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:47 compute-0 ceph-mon[74496]: pgmap v3389: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Jan 31 08:45:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Jan 31 08:45:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:49 compute-0 ceph-mon[74496]: pgmap v3390: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Jan 31 08:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Jan 31 08:45:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:50.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:51 compute-0 nova_compute[247704]: 2026-01-31 08:45:51.811 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:51 compute-0 ceph-mon[74496]: pgmap v3391: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Jan 31 08:45:51 compute-0 nova_compute[247704]: 2026-01-31 08:45:51.909 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:52 compute-0 nova_compute[247704]: 2026-01-31 08:45:52.220 247708 DEBUG nova.compute.manager [req-0dec66e4-b3a5-4fbf-bc1a-b702e02c9165 req-404ff297-3143-48c5-90d6-5f42320ebbb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-changed-23e23487-5e42-43f0-8a71-74a84e458eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:52 compute-0 nova_compute[247704]: 2026-01-31 08:45:52.221 247708 DEBUG nova.compute.manager [req-0dec66e4-b3a5-4fbf-bc1a-b702e02c9165 req-404ff297-3143-48c5-90d6-5f42320ebbb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Refreshing instance network info cache due to event network-changed-23e23487-5e42-43f0-8a71-74a84e458eac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:45:52 compute-0 nova_compute[247704]: 2026-01-31 08:45:52.222 247708 DEBUG oslo_concurrency.lockutils [req-0dec66e4-b3a5-4fbf-bc1a-b702e02c9165 req-404ff297-3143-48c5-90d6-5f42320ebbb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:45:52 compute-0 nova_compute[247704]: 2026-01-31 08:45:52.222 247708 DEBUG oslo_concurrency.lockutils [req-0dec66e4-b3a5-4fbf-bc1a-b702e02c9165 req-404ff297-3143-48c5-90d6-5f42320ebbb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:45:52 compute-0 nova_compute[247704]: 2026-01-31 08:45:52.223 247708 DEBUG nova.network.neutron [req-0dec66e4-b3a5-4fbf-bc1a-b702e02c9165 req-404ff297-3143-48c5-90d6-5f42320ebbb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Refreshing network info cache for port 23e23487-5e42-43f0-8a71-74a84e458eac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:45:52 compute-0 nova_compute[247704]: 2026-01-31 08:45:52.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 14 KiB/s wr, 7 op/s
Jan 31 08:45:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:52.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:52.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:53 compute-0 ceph-mon[74496]: pgmap v3392: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 14 KiB/s wr, 7 op/s
Jan 31 08:45:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2695108008' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:45:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2695108008' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:45:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.514 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.514 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.514 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.515 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.515 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.516 247708 INFO nova.compute.manager [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Terminating instance
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.517 247708 DEBUG nova.compute.manager [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:45:54 compute-0 kernel: tap23e23487-5e (unregistering): left promiscuous mode
Jan 31 08:45:54 compute-0 NetworkManager[49108]: <info>  [1769849154.5861] device (tap23e23487-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:45:54 compute-0 ovn_controller[149457]: 2026-01-31T08:45:54Z|00820|binding|INFO|Releasing lport 23e23487-5e42-43f0-8a71-74a84e458eac from this chassis (sb_readonly=0)
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.595 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:54 compute-0 ovn_controller[149457]: 2026-01-31T08:45:54Z|00821|binding|INFO|Setting lport 23e23487-5e42-43f0-8a71-74a84e458eac down in Southbound
Jan 31 08:45:54 compute-0 ovn_controller[149457]: 2026-01-31T08:45:54Z|00822|binding|INFO|Removing iface tap23e23487-5e ovn-installed in OVS
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.598 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.605 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:54 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000be.scope: Deactivated successfully.
Jan 31 08:45:54 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000be.scope: Consumed 19.032s CPU time.
Jan 31 08:45:54 compute-0 systemd-machined[214448]: Machine qemu-86-instance-000000be terminated.
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.765 247708 INFO nova.virt.libvirt.driver [-] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Instance destroyed successfully.
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.766 247708 DEBUG nova.objects.instance [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'resources' on Instance uuid bcdf8b78-95f0-4cc3-9071-7deaa5a85eee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:45:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 14 KiB/s wr, 7 op/s
Jan 31 08:45:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:45:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:45:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:54.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:54.964 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:f9:2f 10.100.0.10'], port_security=['fa:16:3e:eb:f9:2f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bcdf8b78-95f0-4cc3-9071-7deaa5a85eee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa8535db-1bf5-453e-8521-d36054020c47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '392901e2-d98d-4afe-a0d0-ff202a839d60 77ec18db-6d74-4aac-9268-2e86e3cdfbe8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0ebb0cf2-f8f8-4f5f-9f1c-79de32e76bba, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=23e23487-5e42-43f0-8a71-74a84e458eac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:45:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:54.966 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 23e23487-5e42-43f0-8a71-74a84e458eac in datapath aa8535db-1bf5-453e-8521-d36054020c47 unbound from our chassis
Jan 31 08:45:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:54.968 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa8535db-1bf5-453e-8521-d36054020c47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:45:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:54.969 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4e15a3-27a9-4641-9434-092e190d9a74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:54.970 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 namespace which is not needed anymore
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.996 247708 DEBUG nova.virt.libvirt.vif [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-245495987',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-245495987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=190,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+IEKLcDIGbeOvOdAVtorZ1xtiCqnJ7fs7G+aYTHXv48LqaidcMSGgy+Nrfu6X80mnMDyQMW/ANMH0isk5utMRMD3EHvSyRl+Xh4xqHrF93AhlQmH4UiDGLeTiTMaGHCw==',key_name='tempest-TestSecurityGroupsBasicOps-342550486',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:44:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-mkjmkem9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:44:01Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=bcdf8b78-95f0-4cc3-9071-7deaa5a85eee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.997 247708 DEBUG nova.network.os_vif_util [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.998 247708 DEBUG nova.network.os_vif_util [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:f9:2f,bridge_name='br-int',has_traffic_filtering=True,id=23e23487-5e42-43f0-8a71-74a84e458eac,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23e23487-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:45:54 compute-0 nova_compute[247704]: 2026-01-31 08:45:54.998 247708 DEBUG os_vif [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:f9:2f,bridge_name='br-int',has_traffic_filtering=True,id=23e23487-5e42-43f0-8a71-74a84e458eac,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23e23487-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.001 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.001 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23e23487-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.003 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.005 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.008 247708 INFO os_vif [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:f9:2f,bridge_name='br-int',has_traffic_filtering=True,id=23e23487-5e42-43f0-8a71-74a84e458eac,network=Network(aa8535db-1bf5-453e-8521-d36054020c47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23e23487-5e')
Jan 31 08:45:55 compute-0 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[380569]: [NOTICE]   (380573) : haproxy version is 2.8.14-c23fe91
Jan 31 08:45:55 compute-0 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[380569]: [NOTICE]   (380573) : path to executable is /usr/sbin/haproxy
Jan 31 08:45:55 compute-0 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[380569]: [WARNING]  (380573) : Exiting Master process...
Jan 31 08:45:55 compute-0 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[380569]: [WARNING]  (380573) : Exiting Master process...
Jan 31 08:45:55 compute-0 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[380569]: [ALERT]    (380573) : Current worker (380575) exited with code 143 (Terminated)
Jan 31 08:45:55 compute-0 neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47[380569]: [WARNING]  (380573) : All workers exited. Exiting... (0)
Jan 31 08:45:55 compute-0 systemd[1]: libpod-231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7.scope: Deactivated successfully.
Jan 31 08:45:55 compute-0 podman[382916]: 2026-01-31 08:45:55.093798691 +0000 UTC m=+0.046861190 container died 231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 08:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7-userdata-shm.mount: Deactivated successfully.
Jan 31 08:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f35a1edbb2353bb63ef63704419f5ec55f914ef92b7d6647c0c811ca5d1d4d05-merged.mount: Deactivated successfully.
Jan 31 08:45:55 compute-0 podman[382916]: 2026-01-31 08:45:55.138382816 +0000 UTC m=+0.091445315 container cleanup 231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 08:45:55 compute-0 systemd[1]: libpod-conmon-231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7.scope: Deactivated successfully.
Jan 31 08:45:55 compute-0 podman[382947]: 2026-01-31 08:45:55.194453691 +0000 UTC m=+0.039232286 container remove 231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.199 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d652d9f7-e751-4fed-b5fc-2f7050ab0a12]: (4, ('Sat Jan 31 08:45:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 (231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7)\n231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7\nSat Jan 31 08:45:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 (231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7)\n231db1217cb8108b2eb0b9f6486c91571786077dbeeaf254e485491f8fed95f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.202 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8953dc9c-1340-4c6f-834f-a32e1a23efc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.203 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa8535db-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.206 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:55 compute-0 kernel: tapaa8535db-10: left promiscuous mode
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.211 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.212 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.215 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7fc9f2-34eb-496e-9c6c-b3b2d9d45cd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.233 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc85075-de51-47a8-8623-9c1b1769daca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.235 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd910f4-de40-4100-a3a4-5dfa5aba8898]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.248 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fdb222-d3d3-4879-94d7-bfd67d3fc1ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 928157, 'reachable_time': 24877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382962, 'error': None, 'target': 'ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:55 compute-0 systemd[1]: run-netns-ovnmeta\x2daa8535db\x2d1bf5\x2d453e\x2d8521\x2dd36054020c47.mount: Deactivated successfully.
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.253 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa8535db-1bf5-453e-8521-d36054020c47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:45:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:55.253 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[827a0aa1-791c-40fb-821e-59e6843d3a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.603 247708 INFO nova.virt.libvirt.driver [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Deleting instance files /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_del
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.604 247708 INFO nova.virt.libvirt.driver [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Deletion of /var/lib/nova/instances/bcdf8b78-95f0-4cc3-9071-7deaa5a85eee_del complete
Jan 31 08:45:55 compute-0 ceph-mon[74496]: pgmap v3393: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 14 KiB/s wr, 7 op/s
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.931 247708 INFO nova.compute.manager [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Took 1.41 seconds to destroy the instance on the hypervisor.
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.932 247708 DEBUG oslo.service.loopingcall [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.932 247708 DEBUG nova.compute.manager [-] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.932 247708 DEBUG nova.network.neutron [-] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.991 247708 DEBUG nova.compute.manager [req-7a5ce24e-6ea3-4947-9d53-41b86ebdc70b req-07121b30-0458-4f73-a5fc-5c3148de257b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-vif-unplugged-23e23487-5e42-43f0-8a71-74a84e458eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.991 247708 DEBUG oslo_concurrency.lockutils [req-7a5ce24e-6ea3-4947-9d53-41b86ebdc70b req-07121b30-0458-4f73-a5fc-5c3148de257b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.991 247708 DEBUG oslo_concurrency.lockutils [req-7a5ce24e-6ea3-4947-9d53-41b86ebdc70b req-07121b30-0458-4f73-a5fc-5c3148de257b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.991 247708 DEBUG oslo_concurrency.lockutils [req-7a5ce24e-6ea3-4947-9d53-41b86ebdc70b req-07121b30-0458-4f73-a5fc-5c3148de257b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.992 247708 DEBUG nova.compute.manager [req-7a5ce24e-6ea3-4947-9d53-41b86ebdc70b req-07121b30-0458-4f73-a5fc-5c3148de257b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] No waiting events found dispatching network-vif-unplugged-23e23487-5e42-43f0-8a71-74a84e458eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:45:55 compute-0 nova_compute[247704]: 2026-01-31 08:45:55.992 247708 DEBUG nova.compute.manager [req-7a5ce24e-6ea3-4947-9d53-41b86ebdc70b req-07121b30-0458-4f73-a5fc-5c3148de257b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-vif-unplugged-23e23487-5e42-43f0-8a71-74a84e458eac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:45:56 compute-0 nova_compute[247704]: 2026-01-31 08:45:56.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 252 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 2.6 KiB/s wr, 14 op/s
Jan 31 08:45:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:56.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:56 compute-0 nova_compute[247704]: 2026-01-31 08:45:56.812 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:56.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:57 compute-0 ceph-mon[74496]: pgmap v3394: 305 pgs: 305 active+clean; 252 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 2.6 KiB/s wr, 14 op/s
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.432 247708 DEBUG nova.network.neutron [req-0dec66e4-b3a5-4fbf-bc1a-b702e02c9165 req-404ff297-3143-48c5-90d6-5f42320ebbb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updated VIF entry in instance network info cache for port 23e23487-5e42-43f0-8a71-74a84e458eac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.432 247708 DEBUG nova.network.neutron [req-0dec66e4-b3a5-4fbf-bc1a-b702e02c9165 req-404ff297-3143-48c5-90d6-5f42320ebbb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updating instance_info_cache with network_info: [{"id": "23e23487-5e42-43f0-8a71-74a84e458eac", "address": "fa:16:3e:eb:f9:2f", "network": {"id": "aa8535db-1bf5-453e-8521-d36054020c47", "bridge": "br-int", "label": "tempest-network-smoke--713413715", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23e23487-5e", "ovs_interfaceid": "23e23487-5e42-43f0-8a71-74a84e458eac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.557 247708 DEBUG nova.compute.manager [req-524aaeb2-fc17-4555-a6e5-012bd038a2a0 req-df945dba-6926-4ca8-b990-92a99e4638a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.558 247708 DEBUG oslo_concurrency.lockutils [req-524aaeb2-fc17-4555-a6e5-012bd038a2a0 req-df945dba-6926-4ca8-b990-92a99e4638a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.558 247708 DEBUG oslo_concurrency.lockutils [req-524aaeb2-fc17-4555-a6e5-012bd038a2a0 req-df945dba-6926-4ca8-b990-92a99e4638a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.558 247708 DEBUG oslo_concurrency.lockutils [req-524aaeb2-fc17-4555-a6e5-012bd038a2a0 req-df945dba-6926-4ca8-b990-92a99e4638a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.558 247708 DEBUG nova.compute.manager [req-524aaeb2-fc17-4555-a6e5-012bd038a2a0 req-df945dba-6926-4ca8-b990-92a99e4638a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] No waiting events found dispatching network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.558 247708 WARNING nova.compute.manager [req-524aaeb2-fc17-4555-a6e5-012bd038a2a0 req-df945dba-6926-4ca8-b990-92a99e4638a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received unexpected event network-vif-plugged-23e23487-5e42-43f0-8a71-74a84e458eac for instance with vm_state active and task_state deleting.
Jan 31 08:45:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Jan 31 08:45:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:45:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:58.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:58.861 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:45:58 compute-0 nova_compute[247704]: 2026-01-31 08:45:58.861 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:58.862 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:45:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:45:58.864 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:45:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:45:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:58.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:45:58 compute-0 sudo[382966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:58 compute-0 sudo[382966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:58 compute-0 sudo[382966]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:59 compute-0 sudo[382991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:45:59 compute-0 sudo[382991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:59 compute-0 sudo[382991]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.108 247708 DEBUG oslo_concurrency.lockutils [req-0dec66e4-b3a5-4fbf-bc1a-b702e02c9165 req-404ff297-3143-48c5-90d6-5f42320ebbb3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.372 247708 DEBUG nova.network.neutron [-] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.503 247708 DEBUG nova.compute.manager [req-d508144e-8770-48b7-bb01-e804b65c18e2 req-f744d30f-6b20-4b90-a3f7-c5b4adba1ad5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Received event network-vif-deleted-23e23487-5e42-43f0-8a71-74a84e458eac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.504 247708 INFO nova.compute.manager [req-d508144e-8770-48b7-bb01-e804b65c18e2 req-f744d30f-6b20-4b90-a3f7-c5b4adba1ad5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Neutron deleted interface 23e23487-5e42-43f0-8a71-74a84e458eac; detaching it from the instance and deleting it from the info cache
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.504 247708 DEBUG nova.network.neutron [req-d508144e-8770-48b7-bb01-e804b65c18e2 req-f744d30f-6b20-4b90-a3f7-c5b4adba1ad5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.642 247708 INFO nova.compute.manager [-] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Took 3.71 seconds to deallocate network for instance.
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.784 247708 DEBUG nova.compute.manager [req-d508144e-8770-48b7-bb01-e804b65c18e2 req-f744d30f-6b20-4b90-a3f7-c5b4adba1ad5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Detach interface failed, port_id=23e23487-5e42-43f0-8a71-74a84e458eac, reason: Instance bcdf8b78-95f0-4cc3-9071-7deaa5a85eee could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:45:59 compute-0 ceph-mon[74496]: pgmap v3395: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.953 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:59 compute-0 nova_compute[247704]: 2026-01-31 08:45:59.954 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:00 compute-0 nova_compute[247704]: 2026-01-31 08:46:00.004 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:00 compute-0 nova_compute[247704]: 2026-01-31 08:46:00.042 247708 DEBUG oslo_concurrency.processutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:46:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3128772206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:00 compute-0 nova_compute[247704]: 2026-01-31 08:46:00.509 247708 DEBUG oslo_concurrency.processutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:00 compute-0 nova_compute[247704]: 2026-01-31 08:46:00.519 247708 DEBUG nova.compute.provider_tree [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:46:00 compute-0 nova_compute[247704]: 2026-01-31 08:46:00.683 247708 DEBUG nova.scheduler.client.report [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:46:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:46:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:00.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:00 compute-0 nova_compute[247704]: 2026-01-31 08:46:00.853 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3128772206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:00.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:01 compute-0 nova_compute[247704]: 2026-01-31 08:46:01.203 247708 INFO nova.scheduler.client.report [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Deleted allocations for instance bcdf8b78-95f0-4cc3-9071-7deaa5a85eee
Jan 31 08:46:01 compute-0 nova_compute[247704]: 2026-01-31 08:46:01.746 247708 DEBUG oslo_concurrency.lockutils [None req-89f38619-adec-44c4-b396-d320f34ba5b3 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "bcdf8b78-95f0-4cc3-9071-7deaa5a85eee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:01 compute-0 nova_compute[247704]: 2026-01-31 08:46:01.815 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:01 compute-0 ceph-mon[74496]: pgmap v3396: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:46:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:46:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:02.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:02 compute-0 podman[383040]: 2026-01-31 08:46:02.921142407 +0000 UTC m=+0.083469522 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:46:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:02.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:03 compute-0 ceph-mon[74496]: pgmap v3397: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:46:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Jan 31 08:46:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:46:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:46:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:04.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:05 compute-0 nova_compute[247704]: 2026-01-31 08:46:05.008 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:05 compute-0 ceph-mon[74496]: pgmap v3398: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.0 KiB/s wr, 30 op/s
Jan 31 08:46:06 compute-0 nova_compute[247704]: 2026-01-31 08:46:06.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:06 compute-0 nova_compute[247704]: 2026-01-31 08:46:06.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:46:06 compute-0 nova_compute[247704]: 2026-01-31 08:46:06.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:46:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Jan 31 08:46:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:06.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:06 compute-0 nova_compute[247704]: 2026-01-31 08:46:06.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:06.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:07 compute-0 nova_compute[247704]: 2026-01-31 08:46:07.516 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:46:07 compute-0 nova_compute[247704]: 2026-01-31 08:46:07.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:07 compute-0 ceph-mon[74496]: pgmap v3399: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Jan 31 08:46:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 511 B/s wr, 19 op/s
Jan 31 08:46:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:08.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:09 compute-0 nova_compute[247704]: 2026-01-31 08:46:09.763 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849154.7617967, bcdf8b78-95f0-4cc3-9071-7deaa5a85eee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:46:09 compute-0 nova_compute[247704]: 2026-01-31 08:46:09.764 247708 INFO nova.compute.manager [-] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] VM Stopped (Lifecycle Event)
Jan 31 08:46:09 compute-0 ceph-mon[74496]: pgmap v3400: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 511 B/s wr, 19 op/s
Jan 31 08:46:10 compute-0 nova_compute[247704]: 2026-01-31 08:46:10.010 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 KiB/s rd, 2.5 KiB/s wr, 11 op/s
Jan 31 08:46:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:10.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:10.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:46:11.217 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:46:11.217 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:46:11.217 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:11 compute-0 nova_compute[247704]: 2026-01-31 08:46:11.821 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:12 compute-0 ceph-mon[74496]: pgmap v3401: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 KiB/s rd, 2.5 KiB/s wr, 11 op/s
Jan 31 08:46:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 KiB/s rd, 2.2 KiB/s wr, 5 op/s
Jan 31 08:46:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:12.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:13 compute-0 podman[383066]: 2026-01-31 08:46:13.904105097 +0000 UTC m=+0.081331100 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 08:46:14 compute-0 ceph-mon[74496]: pgmap v3402: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 KiB/s rd, 2.2 KiB/s wr, 5 op/s
Jan 31 08:46:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 KiB/s rd, 3.2 KiB/s wr, 6 op/s
Jan 31 08:46:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:14.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:46:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:46:15 compute-0 nova_compute[247704]: 2026-01-31 08:46:15.012 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:15 compute-0 sshd-session[383094]: Invalid user ubuntu from 45.148.10.240 port 52214
Jan 31 08:46:15 compute-0 sshd-session[383094]: Connection closed by invalid user ubuntu 45.148.10.240 port 52214 [preauth]
Jan 31 08:46:15 compute-0 nova_compute[247704]: 2026-01-31 08:46:15.595 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:15 compute-0 nova_compute[247704]: 2026-01-31 08:46:15.595 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:15 compute-0 nova_compute[247704]: 2026-01-31 08:46:15.595 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:15 compute-0 nova_compute[247704]: 2026-01-31 08:46:15.596 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:46:15 compute-0 nova_compute[247704]: 2026-01-31 08:46:15.596 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:15 compute-0 nova_compute[247704]: 2026-01-31 08:46:15.971 247708 DEBUG nova.compute.manager [None req-30b3f33c-c715-47ff-a831-c89f0fa5cd2b - - - - - -] [instance: bcdf8b78-95f0-4cc3-9071-7deaa5a85eee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:46:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:46:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2231380898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.039 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:16 compute-0 ceph-mon[74496]: pgmap v3403: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 KiB/s rd, 3.2 KiB/s wr, 6 op/s
Jan 31 08:46:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2472085658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/33009808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2231380898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.207 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.209 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4296MB free_disk=20.942718505859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.209 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.209 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.785 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.786 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:46:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.1 KiB/s wr, 3 op/s
Jan 31 08:46:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.822 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:16 compute-0 nova_compute[247704]: 2026-01-31 08:46:16.846 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:16.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:46:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/584075436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:17 compute-0 nova_compute[247704]: 2026-01-31 08:46:17.277 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:17 compute-0 nova_compute[247704]: 2026-01-31 08:46:17.282 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:46:17 compute-0 nova_compute[247704]: 2026-01-31 08:46:17.470 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:46:18 compute-0 ceph-mon[74496]: pgmap v3404: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 3.1 KiB/s wr, 3 op/s
Jan 31 08:46:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3721907473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/584075436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/934742884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:18 compute-0 nova_compute[247704]: 2026-01-31 08:46:18.146 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:46:18 compute-0 nova_compute[247704]: 2026-01-31 08:46:18.147 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 3.1 KiB/s wr, 1 op/s
Jan 31 08:46:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:19 compute-0 sudo[383143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:19 compute-0 sudo[383143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:19 compute-0 sudo[383143]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:19 compute-0 sudo[383168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:19 compute-0 sudo[383168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:19 compute-0 sudo[383168]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:19 compute-0 nova_compute[247704]: 2026-01-31 08:46:19.148 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:19 compute-0 nova_compute[247704]: 2026-01-31 08:46:19.149 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:19 compute-0 nova_compute[247704]: 2026-01-31 08:46:19.149 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:20 compute-0 nova_compute[247704]: 2026-01-31 08:46:20.014 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:20 compute-0 ceph-mon[74496]: pgmap v3405: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 3.1 KiB/s wr, 1 op/s
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:46:20
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'default.rgw.control']
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 31 08:46:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:20.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:46:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:46:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:20.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:21 compute-0 nova_compute[247704]: 2026-01-31 08:46:21.824 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:22 compute-0 ceph-mon[74496]: pgmap v3406: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 31 08:46:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1636768766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:46:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:22.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:22.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:24 compute-0 ceph-mon[74496]: pgmap v3407: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 08:46:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.7 KiB/s wr, 17 op/s
Jan 31 08:46:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:24.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:24.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:25 compute-0 nova_compute[247704]: 2026-01-31 08:46:25.016 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:26 compute-0 ceph-mon[74496]: pgmap v3408: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.7 KiB/s wr, 17 op/s
Jan 31 08:46:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:46:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:26.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:26 compute-0 nova_compute[247704]: 2026-01-31 08:46:26.826 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:26.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:27 compute-0 sudo[383198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:27 compute-0 sudo[383198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:27 compute-0 sudo[383198]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:27 compute-0 sudo[383223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:27 compute-0 sudo[383223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:27 compute-0 sudo[383223]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:27 compute-0 sudo[383248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:27 compute-0 sudo[383248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:27 compute-0 sudo[383248]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:27 compute-0 sudo[383273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:46:27 compute-0 sudo[383273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:28 compute-0 podman[383370]: 2026-01-31 08:46:28.148271648 +0000 UTC m=+0.068801084 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:46:28 compute-0 podman[383370]: 2026-01-31 08:46:28.239600481 +0000 UTC m=+0.160129927 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:46:28 compute-0 ceph-mon[74496]: pgmap v3409: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:46:28 compute-0 nova_compute[247704]: 2026-01-31 08:46:28.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:28 compute-0 nova_compute[247704]: 2026-01-31 08:46:28.563 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:28 compute-0 nova_compute[247704]: 2026-01-31 08:46:28.564 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:28 compute-0 nova_compute[247704]: 2026-01-31 08:46:28.564 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:28 compute-0 nova_compute[247704]: 2026-01-31 08:46:28.565 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:28 compute-0 nova_compute[247704]: 2026-01-31 08:46:28.565 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:28 compute-0 nova_compute[247704]: 2026-01-31 08:46:28.565 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:46:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:28.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:28 compute-0 podman[383523]: 2026-01-31 08:46:28.928115094 +0000 UTC m=+0.115663565 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:46:28 compute-0 podman[383523]: 2026-01-31 08:46:28.938588869 +0000 UTC m=+0.126137330 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:46:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:28.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:29 compute-0 podman[383588]: 2026-01-31 08:46:29.159781581 +0000 UTC m=+0.056806754 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 08:46:29 compute-0 podman[383588]: 2026-01-31 08:46:29.199066277 +0000 UTC m=+0.096091440 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, name=keepalived, distribution-scope=public, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 08:46:29 compute-0 sudo[383273]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:46:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:46:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:29 compute-0 sudo[383618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:29 compute-0 sudo[383618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:29 compute-0 sudo[383618]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:46:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 54K writes, 205K keys, 54K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 54K writes, 19K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3480 writes, 13K keys, 3480 commit groups, 1.0 writes per commit group, ingest: 14.24 MB, 0.02 MB/s
                                           Interval WAL: 3480 writes, 1349 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:46:29 compute-0 sudo[383643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:29 compute-0 sudo[383643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:29 compute-0 sudo[383643]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:29 compute-0 sudo[383668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:29 compute-0 sudo[383668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:29 compute-0 sudo[383668]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:29 compute-0 sudo[383693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:46:29 compute-0 sudo[383693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:29 compute-0 sudo[383693]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:46:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:46:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:46:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 71f5e01e-51bc-4e72-a0a4-1df1d092dd69 does not exist
Jan 31 08:46:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 51254832-e9a0-456f-b071-80a401fd383b does not exist
Jan 31 08:46:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8ed90a30-8517-41df-a79c-3b16abe6cbb1 does not exist
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:46:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:46:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:46:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:46:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:46:30 compute-0 sudo[383750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:30 compute-0 sudo[383750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.018 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:30 compute-0 sudo[383750]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:30 compute-0 sudo[383775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:30 compute-0 sudo[383775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:30 compute-0 sudo[383775]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:30 compute-0 sudo[383800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:30 compute-0 sudo[383800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:30 compute-0 sudo[383800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:30 compute-0 sudo[383825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:46:30 compute-0 sudo[383825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.220 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.221 247708 WARNING nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.221 247708 WARNING nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.221 247708 WARNING nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.221 247708 WARNING nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.221 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Removable base files: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.222 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.222 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/b9f2679fd6c29acf70c52ec6988a633671574c3f
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.222 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8c488581cdd7eb690478040e04ee9da4cb107c7c
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.222 247708 INFO nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/9e1a9e45ead02c86d01a9ee8ffca1eed876d06d9
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.222 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.222 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 31 08:46:30 compute-0 nova_compute[247704]: 2026-01-31 08:46:30.222 247708 DEBUG nova.virt.libvirt.imagecache [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 31 08:46:30 compute-0 ceph-mon[74496]: pgmap v3410: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:46:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:46:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:46:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:46:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:46:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:46:30 compute-0 podman[383891]: 2026-01-31 08:46:30.502429871 +0000 UTC m=+0.048955613 container create fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_snyder, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:46:30 compute-0 systemd[1]: Started libpod-conmon-fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49.scope.
Jan 31 08:46:30 compute-0 podman[383891]: 2026-01-31 08:46:30.480343233 +0000 UTC m=+0.026869005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:46:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:30 compute-0 podman[383891]: 2026-01-31 08:46:30.601584533 +0000 UTC m=+0.148110305 container init fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_snyder, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:46:30 compute-0 podman[383891]: 2026-01-31 08:46:30.611897104 +0000 UTC m=+0.158422826 container start fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_snyder, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:46:30 compute-0 podman[383891]: 2026-01-31 08:46:30.615228735 +0000 UTC m=+0.161754477 container attach fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_snyder, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 08:46:30 compute-0 hardcore_snyder[383907]: 167 167
Jan 31 08:46:30 compute-0 systemd[1]: libpod-fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49.scope: Deactivated successfully.
Jan 31 08:46:30 compute-0 podman[383891]: 2026-01-31 08:46:30.619995241 +0000 UTC m=+0.166520963 container died fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:46:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-538b6f86685910e214a9a44b3b4b78f1f423298ad171701f5cc04f75a7b3f36e-merged.mount: Deactivated successfully.
Jan 31 08:46:30 compute-0 podman[383891]: 2026-01-31 08:46:30.660302861 +0000 UTC m=+0.206828593 container remove fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_snyder, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:46:30 compute-0 systemd[1]: libpod-conmon-fa9ef309f85573ab70c879ce9f4cfe9a5b2823f45a64a9d0d142017dc9e14e49.scope: Deactivated successfully.
Jan 31 08:46:30 compute-0 podman[383930]: 2026-01-31 08:46:30.792904488 +0000 UTC m=+0.038470377 container create 358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:46:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 124 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 260 KiB/s wr, 50 op/s
Jan 31 08:46:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:30.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:30 compute-0 systemd[1]: Started libpod-conmon-358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5.scope.
Jan 31 08:46:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fbf4cebb79d1469566fe78c065a6edca33290d2ba5689452856b65d149b04f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fbf4cebb79d1469566fe78c065a6edca33290d2ba5689452856b65d149b04f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fbf4cebb79d1469566fe78c065a6edca33290d2ba5689452856b65d149b04f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fbf4cebb79d1469566fe78c065a6edca33290d2ba5689452856b65d149b04f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fbf4cebb79d1469566fe78c065a6edca33290d2ba5689452856b65d149b04f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:30 compute-0 podman[383930]: 2026-01-31 08:46:30.774725116 +0000 UTC m=+0.020290995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:46:30 compute-0 podman[383930]: 2026-01-31 08:46:30.887725565 +0000 UTC m=+0.133291444 container init 358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:46:30 compute-0 podman[383930]: 2026-01-31 08:46:30.896632982 +0000 UTC m=+0.142198831 container start 358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:46:30 compute-0 podman[383930]: 2026-01-31 08:46:30.900707551 +0000 UTC m=+0.146273410 container attach 358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:46:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:30.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:31 compute-0 admiring_wilbur[383946]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:46:31 compute-0 admiring_wilbur[383946]: --> relative data size: 1.0
Jan 31 08:46:31 compute-0 admiring_wilbur[383946]: --> All data devices are unavailable
Jan 31 08:46:31 compute-0 systemd[1]: libpod-358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5.scope: Deactivated successfully.
Jan 31 08:46:31 compute-0 podman[383930]: 2026-01-31 08:46:31.700924443 +0000 UTC m=+0.946490302 container died 358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wilbur, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:46:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-07fbf4cebb79d1469566fe78c065a6edca33290d2ba5689452856b65d149b04f-merged.mount: Deactivated successfully.
Jan 31 08:46:31 compute-0 podman[383930]: 2026-01-31 08:46:31.768594299 +0000 UTC m=+1.014160158 container remove 358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_wilbur, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:46:31 compute-0 systemd[1]: libpod-conmon-358f51ba3ac3b9cdf2714a97ce3789e1a8557eadda390edda97eaa7b729986d5.scope: Deactivated successfully.
Jan 31 08:46:31 compute-0 sudo[383825]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:31 compute-0 nova_compute[247704]: 2026-01-31 08:46:31.828 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:31 compute-0 sudo[383973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:31 compute-0 sudo[383973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:31 compute-0 sudo[383973]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:31 compute-0 sudo[383998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:31 compute-0 sudo[383998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:31 compute-0 sudo[383998]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:31 compute-0 sudo[384023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:31 compute-0 sudo[384023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:31 compute-0 sudo[384023]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:32 compute-0 sudo[384048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:46:32 compute-0 sudo[384048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:32 compute-0 podman[384114]: 2026-01-31 08:46:32.345364843 +0000 UTC m=+0.020227114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:46:32 compute-0 podman[384114]: 2026-01-31 08:46:32.558013057 +0000 UTC m=+0.232875308 container create b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:46:32 compute-0 ceph-mon[74496]: pgmap v3411: 305 pgs: 305 active+clean; 124 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 260 KiB/s wr, 50 op/s
Jan 31 08:46:32 compute-0 systemd[1]: Started libpod-conmon-b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60.scope.
Jan 31 08:46:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 124 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 260 KiB/s wr, 50 op/s
Jan 31 08:46:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:32.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:32 compute-0 podman[384114]: 2026-01-31 08:46:32.884310157 +0000 UTC m=+0.559172438 container init b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:46:32 compute-0 podman[384114]: 2026-01-31 08:46:32.893346716 +0000 UTC m=+0.568209007 container start b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:46:32 compute-0 bold_lamport[384130]: 167 167
Jan 31 08:46:32 compute-0 systemd[1]: libpod-b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60.scope: Deactivated successfully.
Jan 31 08:46:32 compute-0 podman[384114]: 2026-01-31 08:46:32.909895029 +0000 UTC m=+0.584757330 container attach b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 31 08:46:32 compute-0 podman[384114]: 2026-01-31 08:46:32.910778941 +0000 UTC m=+0.585641222 container died b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:46:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:33.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dfd95e4cfb290f4c8202bbf3b5ad452842db28a8e42678cde2ab08519307d03-merged.mount: Deactivated successfully.
Jan 31 08:46:33 compute-0 podman[384114]: 2026-01-31 08:46:33.355308387 +0000 UTC m=+1.030170638 container remove b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lamport, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:46:33 compute-0 systemd[1]: libpod-conmon-b5d0f5d09d39cf00a1f71a4c703af23d489311027efdd254d185c52998359c60.scope: Deactivated successfully.
Jan 31 08:46:33 compute-0 podman[384148]: 2026-01-31 08:46:33.392160614 +0000 UTC m=+0.083296138 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:46:33 compute-0 podman[384173]: 2026-01-31 08:46:33.516202092 +0000 UTC m=+0.048401058 container create 8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_payne, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:46:33 compute-0 systemd[1]: Started libpod-conmon-8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a.scope.
Jan 31 08:46:33 compute-0 ceph-mon[74496]: pgmap v3412: 305 pgs: 305 active+clean; 124 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 260 KiB/s wr, 50 op/s
Jan 31 08:46:33 compute-0 podman[384173]: 2026-01-31 08:46:33.495130169 +0000 UTC m=+0.027329155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:46:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc9132d792782b74dc455b3256e60f27b49f89e3ec7e2c48e9bd63014fc3f88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc9132d792782b74dc455b3256e60f27b49f89e3ec7e2c48e9bd63014fc3f88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc9132d792782b74dc455b3256e60f27b49f89e3ec7e2c48e9bd63014fc3f88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bc9132d792782b74dc455b3256e60f27b49f89e3ec7e2c48e9bd63014fc3f88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:33 compute-0 podman[384173]: 2026-01-31 08:46:33.617879366 +0000 UTC m=+0.150078332 container init 8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_payne, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:46:33 compute-0 podman[384173]: 2026-01-31 08:46:33.624719423 +0000 UTC m=+0.156918389 container start 8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:46:33 compute-0 podman[384173]: 2026-01-31 08:46:33.632263176 +0000 UTC m=+0.164462162 container attach 8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:46:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:34 compute-0 musing_payne[384190]: {
Jan 31 08:46:34 compute-0 musing_payne[384190]:     "0": [
Jan 31 08:46:34 compute-0 musing_payne[384190]:         {
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "devices": [
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "/dev/loop3"
Jan 31 08:46:34 compute-0 musing_payne[384190]:             ],
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "lv_name": "ceph_lv0",
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "lv_size": "7511998464",
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "name": "ceph_lv0",
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "tags": {
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.cluster_name": "ceph",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.crush_device_class": "",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.encrypted": "0",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.osd_id": "0",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.type": "block",
Jan 31 08:46:34 compute-0 musing_payne[384190]:                 "ceph.vdo": "0"
Jan 31 08:46:34 compute-0 musing_payne[384190]:             },
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "type": "block",
Jan 31 08:46:34 compute-0 musing_payne[384190]:             "vg_name": "ceph_vg0"
Jan 31 08:46:34 compute-0 musing_payne[384190]:         }
Jan 31 08:46:34 compute-0 musing_payne[384190]:     ]
Jan 31 08:46:34 compute-0 musing_payne[384190]: }
Jan 31 08:46:34 compute-0 systemd[1]: libpod-8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a.scope: Deactivated successfully.
Jan 31 08:46:34 compute-0 podman[384173]: 2026-01-31 08:46:34.414800097 +0000 UTC m=+0.946999083 container died 8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:46:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bc9132d792782b74dc455b3256e60f27b49f89e3ec7e2c48e9bd63014fc3f88-merged.mount: Deactivated successfully.
Jan 31 08:46:34 compute-0 podman[384173]: 2026-01-31 08:46:34.483867037 +0000 UTC m=+1.016066023 container remove 8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_payne, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:46:34 compute-0 systemd[1]: libpod-conmon-8c25f2b5d697d51d5a608f0ede98e8b723f229e71cdf2f206eecddf8046b246a.scope: Deactivated successfully.
Jan 31 08:46:34 compute-0 sudo[384048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:34 compute-0 sudo[384213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:34 compute-0 sudo[384213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:34 compute-0 sudo[384213]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:34 compute-0 sudo[384238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:34 compute-0 sudo[384238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:34 compute-0 sudo[384238]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:34 compute-0 sudo[384263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:34 compute-0 sudo[384263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:34 compute-0 sudo[384263]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:34 compute-0 sudo[384288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:46:34 compute-0 sudo[384288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:46:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:34.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:35.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:35 compute-0 nova_compute[247704]: 2026-01-31 08:46:35.020 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:35 compute-0 podman[384352]: 2026-01-31 08:46:35.092923347 +0000 UTC m=+0.045117739 container create a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_carver, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:46:35 compute-0 systemd[1]: Started libpod-conmon-a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984.scope.
Jan 31 08:46:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:35 compute-0 podman[384352]: 2026-01-31 08:46:35.073882974 +0000 UTC m=+0.026077386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:46:35 compute-0 podman[384352]: 2026-01-31 08:46:35.173830495 +0000 UTC m=+0.126024907 container init a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 08:46:35 compute-0 podman[384352]: 2026-01-31 08:46:35.18060437 +0000 UTC m=+0.132798762 container start a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_carver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:46:35 compute-0 podman[384352]: 2026-01-31 08:46:35.183987652 +0000 UTC m=+0.136182064 container attach a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:46:35 compute-0 practical_carver[384368]: 167 167
Jan 31 08:46:35 compute-0 systemd[1]: libpod-a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984.scope: Deactivated successfully.
Jan 31 08:46:35 compute-0 conmon[384368]: conmon a6f8423ceb7009357c0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984.scope/container/memory.events
Jan 31 08:46:35 compute-0 podman[384352]: 2026-01-31 08:46:35.187470248 +0000 UTC m=+0.139664660 container died a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 08:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-997b2ac2e9f6cd2e53610ad5658516c68b4c764a8212864976dae23ed218b3e6-merged.mount: Deactivated successfully.
Jan 31 08:46:35 compute-0 podman[384352]: 2026-01-31 08:46:35.225360049 +0000 UTC m=+0.177554441 container remove a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_carver, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:46:35 compute-0 systemd[1]: libpod-conmon-a6f8423ceb7009357c0c0d7a7081d52687fb05efcd0f2148dc4d7659fcb0a984.scope: Deactivated successfully.
Jan 31 08:46:35 compute-0 podman[384392]: 2026-01-31 08:46:35.378300791 +0000 UTC m=+0.060483553 container create c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:46:35 compute-0 systemd[1]: Started libpod-conmon-c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7.scope.
Jan 31 08:46:35 compute-0 podman[384392]: 2026-01-31 08:46:35.351324205 +0000 UTC m=+0.033507037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:46:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a771477fafadea4a75bf4e0ff5d0e3e8a0d0fc7b51fce847f2394fd3da5b2285/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a771477fafadea4a75bf4e0ff5d0e3e8a0d0fc7b51fce847f2394fd3da5b2285/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a771477fafadea4a75bf4e0ff5d0e3e8a0d0fc7b51fce847f2394fd3da5b2285/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a771477fafadea4a75bf4e0ff5d0e3e8a0d0fc7b51fce847f2394fd3da5b2285/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:35 compute-0 podman[384392]: 2026-01-31 08:46:35.464107898 +0000 UTC m=+0.146290650 container init c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:46:35 compute-0 podman[384392]: 2026-01-31 08:46:35.469244884 +0000 UTC m=+0.151427616 container start c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:46:35 compute-0 podman[384392]: 2026-01-31 08:46:35.472336869 +0000 UTC m=+0.154519611 container attach c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:46:35 compute-0 ceph-mon[74496]: pgmap v3413: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:46:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2492724714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:46:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:46:36 compute-0 tender_hamilton[384407]: {
Jan 31 08:46:36 compute-0 tender_hamilton[384407]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:46:36 compute-0 tender_hamilton[384407]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:46:36 compute-0 tender_hamilton[384407]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:46:36 compute-0 tender_hamilton[384407]:         "osd_id": 0,
Jan 31 08:46:36 compute-0 tender_hamilton[384407]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:46:36 compute-0 tender_hamilton[384407]:         "type": "bluestore"
Jan 31 08:46:36 compute-0 tender_hamilton[384407]:     }
Jan 31 08:46:36 compute-0 tender_hamilton[384407]: }
Jan 31 08:46:36 compute-0 systemd[1]: libpod-c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7.scope: Deactivated successfully.
Jan 31 08:46:36 compute-0 conmon[384407]: conmon c38d313a082e4fbb20eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7.scope/container/memory.events
Jan 31 08:46:36 compute-0 podman[384429]: 2026-01-31 08:46:36.345105506 +0000 UTC m=+0.030225937 container died c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:46:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a771477fafadea4a75bf4e0ff5d0e3e8a0d0fc7b51fce847f2394fd3da5b2285-merged.mount: Deactivated successfully.
Jan 31 08:46:36 compute-0 podman[384429]: 2026-01-31 08:46:36.403857115 +0000 UTC m=+0.088977526 container remove c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:46:36 compute-0 systemd[1]: libpod-conmon-c38d313a082e4fbb20ebe6880117e118089dd04c7be6acaaf084ca5df273a7c7.scope: Deactivated successfully.
Jan 31 08:46:36 compute-0 sudo[384288]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:46:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:46:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d92b22d9-ea27-49a7-88ab-371f265c9e5c does not exist
Jan 31 08:46:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev aa3bcd4b-d8e1-49a3-9ffc-1e2709cccd4d does not exist
Jan 31 08:46:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev dc39c3ac-bc5a-4bf9-ac35-2b9c2035743d does not exist
Jan 31 08:46:36 compute-0 sudo[384444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:36 compute-0 sudo[384444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:36 compute-0 sudo[384444]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:36 compute-0 sudo[384469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:46:36 compute-0 sudo[384469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:36 compute-0 sudo[384469]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 08:46:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:36.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:36 compute-0 nova_compute[247704]: 2026-01-31 08:46:36.856 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:37.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:46:37 compute-0 ceph-mon[74496]: pgmap v3414: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 08:46:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:46:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:38.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:39 compute-0 sudo[384495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:39 compute-0 sudo[384495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:39 compute-0 sudo[384495]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:39 compute-0 sudo[384520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:39 compute-0 sudo[384520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:39 compute-0 sudo[384520]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:39 compute-0 ceph-mon[74496]: pgmap v3415: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:46:40 compute-0 nova_compute[247704]: 2026-01-31 08:46:40.023 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:46:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:40.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:41.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 08:46:41 compute-0 nova_compute[247704]: 2026-01-31 08:46:41.859 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:41 compute-0 ceph-mon[74496]: pgmap v3416: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:46:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 1.5 MiB/s wr, 4 op/s
Jan 31 08:46:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:42.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:43.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:43 compute-0 nova_compute[247704]: 2026-01-31 08:46:43.118 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:43 compute-0 nova_compute[247704]: 2026-01-31 08:46:43.196 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:43 compute-0 ceph-mon[74496]: pgmap v3417: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 1.5 MiB/s wr, 4 op/s
Jan 31 08:46:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 1.5 MiB/s wr, 4 op/s
Jan 31 08:46:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:44.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:44 compute-0 podman[384549]: 2026-01-31 08:46:44.938697806 +0000 UTC m=+0.107516838 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:46:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:45.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:45 compute-0 nova_compute[247704]: 2026-01-31 08:46:45.025 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:45 compute-0 ceph-mon[74496]: pgmap v3418: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 1.5 MiB/s wr, 4 op/s
Jan 31 08:46:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:46.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:46 compute-0 nova_compute[247704]: 2026-01-31 08:46:46.861 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:47 compute-0 ceph-mon[74496]: pgmap v3419: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:48.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:49.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:49 compute-0 ceph-mon[74496]: pgmap v3420: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:50 compute-0 nova_compute[247704]: 2026-01-31 08:46:50.028 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:50 compute-0 nova_compute[247704]: 2026-01-31 08:46:50.222 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:50.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:46:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:46:51 compute-0 nova_compute[247704]: 2026-01-31 08:46:51.863 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:52 compute-0 ceph-mon[74496]: pgmap v3421: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:52 compute-0 nova_compute[247704]: 2026-01-31 08:46:52.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:52.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/604803900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:46:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1400578424' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:46:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:53.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:54 compute-0 ceph-mon[74496]: pgmap v3422: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:46:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2766823848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:46:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2766823848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:46:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.137239) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214137285, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1689, "num_deletes": 256, "total_data_size": 3017645, "memory_usage": 3062080, "flush_reason": "Manual Compaction"}
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214155290, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 2949889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73331, "largest_seqno": 75019, "table_properties": {"data_size": 2942105, "index_size": 4662, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16009, "raw_average_key_size": 19, "raw_value_size": 2926626, "raw_average_value_size": 3644, "num_data_blocks": 205, "num_entries": 803, "num_filter_entries": 803, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849046, "oldest_key_time": 1769849046, "file_creation_time": 1769849214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 18097 microseconds, and 5902 cpu microseconds.
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.155334) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 2949889 bytes OK
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.155358) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.157676) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.157700) EVENT_LOG_v1 {"time_micros": 1769849214157694, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.157722) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3010582, prev total WAL file size 3010582, number of live WAL files 2.
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.158339) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323636' seq:0, type:0; will stop at (end)
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(2880KB)], [167(11MB)]
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214158371, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 14989618, "oldest_snapshot_seqno": -1}
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 10306 keys, 14840346 bytes, temperature: kUnknown
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214242561, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 14840346, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14772401, "index_size": 41080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 271509, "raw_average_key_size": 26, "raw_value_size": 14590720, "raw_average_value_size": 1415, "num_data_blocks": 1577, "num_entries": 10306, "num_filter_entries": 10306, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849214, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.243250) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14840346 bytes
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.244700) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.7 rd, 175.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.5 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(10.1) write-amplify(5.0) OK, records in: 10833, records dropped: 527 output_compression: NoCompression
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.244735) EVENT_LOG_v1 {"time_micros": 1769849214244719, "job": 104, "event": "compaction_finished", "compaction_time_micros": 84346, "compaction_time_cpu_micros": 28839, "output_level": 6, "num_output_files": 1, "total_output_size": 14840346, "num_input_records": 10833, "num_output_records": 10306, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214245369, "job": 104, "event": "table_file_deletion", "file_number": 169}
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849214248837, "job": 104, "event": "table_file_deletion", "file_number": 167}
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.158258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.248922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.248930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.248931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.248933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:46:54.248935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:46:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Jan 31 08:46:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:54.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:55 compute-0 nova_compute[247704]: 2026-01-31 08:46:55.030 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:55.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:56 compute-0 ceph-mon[74496]: pgmap v3423: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Jan 31 08:46:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 08:46:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:56.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:56 compute-0 nova_compute[247704]: 2026-01-31 08:46:56.865 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:46:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:57.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:58 compute-0 ceph-mon[74496]: pgmap v3424: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 08:46:58 compute-0 nova_compute[247704]: 2026-01-31 08:46:58.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 12 KiB/s wr, 34 op/s
Jan 31 08:46:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:46:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:58.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:46:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:46:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:46:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:59.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:46:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:59 compute-0 sudo[384583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:59 compute-0 sudo[384583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:59 compute-0 sudo[384583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:59 compute-0 sudo[384608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:46:59 compute-0 sudo[384608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:59 compute-0 sudo[384608]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:00 compute-0 nova_compute[247704]: 2026-01-31 08:47:00.033 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:00 compute-0 ceph-mon[74496]: pgmap v3425: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 797 KiB/s rd, 12 KiB/s wr, 34 op/s
Jan 31 08:47:00 compute-0 nova_compute[247704]: 2026-01-31 08:47:00.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:00 compute-0 nova_compute[247704]: 2026-01-31 08:47:00.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:47:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:00.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:01.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:01 compute-0 nova_compute[247704]: 2026-01-31 08:47:01.867 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:02 compute-0 ceph-mon[74496]: pgmap v3426: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:02.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:03.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:03 compute-0 podman[384635]: 2026-01-31 08:47:03.895912978 +0000 UTC m=+0.064115411 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:47:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:04 compute-0 ceph-mon[74496]: pgmap v3427: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:04.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:05 compute-0 nova_compute[247704]: 2026-01-31 08:47:05.036 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:05.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:06 compute-0 ceph-mon[74496]: pgmap v3428: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:47:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Jan 31 08:47:06 compute-0 nova_compute[247704]: 2026-01-31 08:47:06.868 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:06.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:07.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:07 compute-0 nova_compute[247704]: 2026-01-31 08:47:07.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:07 compute-0 nova_compute[247704]: 2026-01-31 08:47:07.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:47:07 compute-0 nova_compute[247704]: 2026-01-31 08:47:07.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:47:08 compute-0 ceph-mon[74496]: pgmap v3429: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Jan 31 08:47:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3314652597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:08 compute-0 nova_compute[247704]: 2026-01-31 08:47:08.476 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:47:08 compute-0 nova_compute[247704]: 2026-01-31 08:47:08.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:08 compute-0 nova_compute[247704]: 2026-01-31 08:47:08.635 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:08 compute-0 nova_compute[247704]: 2026-01-31 08:47:08.636 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:08 compute-0 nova_compute[247704]: 2026-01-31 08:47:08.636 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:08 compute-0 nova_compute[247704]: 2026-01-31 08:47:08.636 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:47:08 compute-0 nova_compute[247704]: 2026-01-31 08:47:08.636 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 602 KiB/s wr, 77 op/s
Jan 31 08:47:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:08.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:09.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:47:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/393711494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.156 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/393711494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.315 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.316 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4286MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.316 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.317 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:47:09.318 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.319 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:47:09.319 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.481 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.482 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:47:09 compute-0 nova_compute[247704]: 2026-01-31 08:47:09.545 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:47:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264906209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:10 compute-0 nova_compute[247704]: 2026-01-31 08:47:10.000 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:10 compute-0 nova_compute[247704]: 2026-01-31 08:47:10.007 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:47:10 compute-0 nova_compute[247704]: 2026-01-31 08:47:10.038 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:10 compute-0 nova_compute[247704]: 2026-01-31 08:47:10.094 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:47:10 compute-0 nova_compute[247704]: 2026-01-31 08:47:10.096 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:47:10 compute-0 nova_compute[247704]: 2026-01-31 08:47:10.096 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:10 compute-0 ceph-mon[74496]: pgmap v3430: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 602 KiB/s wr, 77 op/s
Jan 31 08:47:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/534509877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2264906209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 31 08:47:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:10.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:11.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:11 compute-0 nova_compute[247704]: 2026-01-31 08:47:11.097 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:11 compute-0 nova_compute[247704]: 2026-01-31 08:47:11.098 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:47:11.218 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:47:11.218 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:47:11.218 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:11 compute-0 nova_compute[247704]: 2026-01-31 08:47:11.869 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:12 compute-0 ceph-mon[74496]: pgmap v3431: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 31 08:47:12 compute-0 nova_compute[247704]: 2026-01-31 08:47:12.559 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 08:47:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:12.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:13.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:14 compute-0 ceph-mon[74496]: pgmap v3432: 305 pgs: 305 active+clean; 198 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 08:47:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:14.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:15 compute-0 nova_compute[247704]: 2026-01-31 08:47:15.042 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:15.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3370830284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:15 compute-0 podman[384704]: 2026-01-31 08:47:15.920956475 +0000 UTC m=+0.097261877 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:47:16 compute-0 ceph-mon[74496]: pgmap v3433: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:16 compute-0 nova_compute[247704]: 2026-01-31 08:47:16.870 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:16.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:17.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:47:17.322 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:47:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1983775073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:17 compute-0 ceph-mon[74496]: pgmap v3434: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:18 compute-0 nova_compute[247704]: 2026-01-31 08:47:18.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:18.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:19.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:19 compute-0 ceph-mon[74496]: pgmap v3435: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:47:20 compute-0 sudo[384732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:20 compute-0 sudo[384732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:20 compute-0 sudo[384732]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:20 compute-0 nova_compute[247704]: 2026-01-31 08:47:20.043 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:20 compute-0 sudo[384757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:20 compute-0 sudo[384757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:20 compute-0 sudo[384757]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:47:20
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes']
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 1.6 MiB/s wr, 55 op/s
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:47:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:47:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:20.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:21.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:21 compute-0 nova_compute[247704]: 2026-01-31 08:47:21.872 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:21 compute-0 ceph-mon[74496]: pgmap v3436: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 220 KiB/s rd, 1.6 MiB/s wr, 55 op/s
Jan 31 08:47:22 compute-0 nova_compute[247704]: 2026-01-31 08:47:22.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:22 compute-0 nova_compute[247704]: 2026-01-31 08:47:22.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:47:22 compute-0 nova_compute[247704]: 2026-01-31 08:47:22.592 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:47:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Jan 31 08:47:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:22.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:23.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:23 compute-0 ceph-mon[74496]: pgmap v3437: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 17 KiB/s wr, 1 op/s
Jan 31 08:47:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 21 KiB/s wr, 2 op/s
Jan 31 08:47:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:24.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:25 compute-0 nova_compute[247704]: 2026-01-31 08:47:25.067 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:25.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:25 compute-0 ceph-mon[74496]: pgmap v3438: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 21 KiB/s wr, 2 op/s
Jan 31 08:47:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 08:47:26 compute-0 nova_compute[247704]: 2026-01-31 08:47:26.876 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:26.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:27.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:27 compute-0 ceph-mon[74496]: pgmap v3439: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 08:47:28 compute-0 nova_compute[247704]: 2026-01-31 08:47:28.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:28 compute-0 nova_compute[247704]: 2026-01-31 08:47:28.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:47:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 08:47:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:28.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:29.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:30 compute-0 ceph-mon[74496]: pgmap v3440: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 4.7 KiB/s wr, 0 op/s
Jan 31 08:47:30 compute-0 nova_compute[247704]: 2026-01-31 08:47:30.069 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:30.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:31.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:31 compute-0 nova_compute[247704]: 2026-01-31 08:47:31.877 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:32 compute-0 ceph-mon[74496]: pgmap v3441: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:32.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:33.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:34 compute-0 ceph-mon[74496]: pgmap v3442: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:34 compute-0 podman[384791]: 2026-01-31 08:47:34.902992588 +0000 UTC m=+0.075241601 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 08:47:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:34.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:35 compute-0 nova_compute[247704]: 2026-01-31 08:47:35.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:35.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002172501686674111 of space, bias 1.0, pg target 0.6517505060022333 quantized to 32 (current 32)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:47:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:47:36 compute-0 ceph-mon[74496]: pgmap v3443: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 08:47:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:47:36 compute-0 nova_compute[247704]: 2026-01-31 08:47:36.879 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:36.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:36 compute-0 sudo[384811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:36 compute-0 sudo[384811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:36 compute-0 sudo[384811]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:36 compute-0 sudo[384836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:36 compute-0 sudo[384836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:36 compute-0 sudo[384836]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-0 sudo[384861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:37 compute-0 sudo[384861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:37 compute-0 sudo[384861]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-0 sudo[384886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:47:37 compute-0 sudo[384886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:37.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:37 compute-0 sudo[384886]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:47:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:47:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:47:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:47:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:47:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b6c4a316-28be-4be2-92d1-5069cf7461d0 does not exist
Jan 31 08:47:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5d2440d4-34b7-4b8f-bc3d-a032dc7fa523 does not exist
Jan 31 08:47:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 128007aa-5e7e-4b90-8657-f479dae82b22 does not exist
Jan 31 08:47:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:47:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:47:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:47:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:47:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:47:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:47:37 compute-0 sudo[384942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:37 compute-0 sudo[384942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:37 compute-0 sudo[384942]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-0 sudo[384967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:37 compute-0 sudo[384967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:37 compute-0 sudo[384967]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-0 sudo[384992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:37 compute-0 sudo[384992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:37 compute-0 sudo[384992]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:37 compute-0 sudo[385017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:47:37 compute-0 sudo[385017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:47:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:38.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:38 compute-0 ceph-mon[74496]: pgmap v3444: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:47:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:47:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:47:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:47:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:47:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:47:38 compute-0 podman[385084]: 2026-01-31 08:47:38.993215642 +0000 UTC m=+0.059877408 container create a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:47:39 compute-0 systemd[1]: Started libpod-conmon-a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c.scope.
Jan 31 08:47:39 compute-0 podman[385084]: 2026-01-31 08:47:38.959206054 +0000 UTC m=+0.025867850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:47:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:39.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:39 compute-0 podman[385084]: 2026-01-31 08:47:39.120489189 +0000 UTC m=+0.187150985 container init a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:47:39 compute-0 podman[385084]: 2026-01-31 08:47:39.127716095 +0000 UTC m=+0.194377861 container start a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:47:39 compute-0 nostalgic_cartwright[385100]: 167 167
Jan 31 08:47:39 compute-0 systemd[1]: libpod-a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c.scope: Deactivated successfully.
Jan 31 08:47:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:39 compute-0 podman[385084]: 2026-01-31 08:47:39.144297878 +0000 UTC m=+0.210959674 container attach a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:47:39 compute-0 podman[385084]: 2026-01-31 08:47:39.146048851 +0000 UTC m=+0.212710637 container died a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a243efdeba0b82c6e3939c997643466592718a1a70d757b99008a74a96ebcf63-merged.mount: Deactivated successfully.
Jan 31 08:47:39 compute-0 podman[385084]: 2026-01-31 08:47:39.264284667 +0000 UTC m=+0.330946433 container remove a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:39 compute-0 systemd[1]: libpod-conmon-a944b4dcd325de322fc5b68444a59fdd05327221aee9c9d934b4c8aceafdc79c.scope: Deactivated successfully.
Jan 31 08:47:39 compute-0 podman[385127]: 2026-01-31 08:47:39.406033527 +0000 UTC m=+0.055522002 container create 5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kilby, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:47:39 compute-0 systemd[1]: Started libpod-conmon-5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62.scope.
Jan 31 08:47:39 compute-0 podman[385127]: 2026-01-31 08:47:39.379253865 +0000 UTC m=+0.028742380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:47:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3d91248362ad4cea529a54385289a05c8ac9d7c9a360bb7240f06e0a4a1858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3d91248362ad4cea529a54385289a05c8ac9d7c9a360bb7240f06e0a4a1858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3d91248362ad4cea529a54385289a05c8ac9d7c9a360bb7240f06e0a4a1858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3d91248362ad4cea529a54385289a05c8ac9d7c9a360bb7240f06e0a4a1858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f3d91248362ad4cea529a54385289a05c8ac9d7c9a360bb7240f06e0a4a1858/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:39 compute-0 podman[385127]: 2026-01-31 08:47:39.520200665 +0000 UTC m=+0.169689160 container init 5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:47:39 compute-0 podman[385127]: 2026-01-31 08:47:39.527902583 +0000 UTC m=+0.177391058 container start 5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 08:47:39 compute-0 podman[385127]: 2026-01-31 08:47:39.538049499 +0000 UTC m=+0.187537974 container attach 5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:47:40 compute-0 ceph-mon[74496]: pgmap v3445: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 31 08:47:40 compute-0 nova_compute[247704]: 2026-01-31 08:47:40.075 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:40 compute-0 sudo[385148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:40 compute-0 sudo[385148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:40 compute-0 sudo[385148]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:40 compute-0 sudo[385173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:40 compute-0 sudo[385173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:40 compute-0 sudo[385173]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:40 compute-0 beautiful_kilby[385143]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:47:40 compute-0 beautiful_kilby[385143]: --> relative data size: 1.0
Jan 31 08:47:40 compute-0 beautiful_kilby[385143]: --> All data devices are unavailable
Jan 31 08:47:40 compute-0 podman[385127]: 2026-01-31 08:47:40.451390412 +0000 UTC m=+1.100878897 container died 5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kilby, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:47:40 compute-0 systemd[1]: libpod-5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62.scope: Deactivated successfully.
Jan 31 08:47:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f3d91248362ad4cea529a54385289a05c8ac9d7c9a360bb7240f06e0a4a1858-merged.mount: Deactivated successfully.
Jan 31 08:47:40 compute-0 podman[385127]: 2026-01-31 08:47:40.51376398 +0000 UTC m=+1.163252455 container remove 5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:47:40 compute-0 systemd[1]: libpod-conmon-5e164db18882753e2af991843341eb8220dc0fbf68f14fd213fea19c39527a62.scope: Deactivated successfully.
Jan 31 08:47:40 compute-0 sudo[385017]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:40 compute-0 sudo[385223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:40 compute-0 sudo[385223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:40 compute-0 sudo[385223]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:40 compute-0 sudo[385248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:40 compute-0 sudo[385248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:40 compute-0 sudo[385248]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:40 compute-0 sudo[385273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:40 compute-0 sudo[385273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:40 compute-0 sudo[385273]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:40 compute-0 sudo[385298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:47:40 compute-0 sudo[385298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:47:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:40.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:41 compute-0 podman[385364]: 2026-01-31 08:47:41.090536585 +0000 UTC m=+0.033874975 container create 597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:47:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:41.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:41 compute-0 systemd[1]: Started libpod-conmon-597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c.scope.
Jan 31 08:47:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:41 compute-0 podman[385364]: 2026-01-31 08:47:41.147256594 +0000 UTC m=+0.090595014 container init 597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:47:41 compute-0 podman[385364]: 2026-01-31 08:47:41.156423468 +0000 UTC m=+0.099761868 container start 597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:47:41 compute-0 podman[385364]: 2026-01-31 08:47:41.160547768 +0000 UTC m=+0.103886188 container attach 597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:47:41 compute-0 pensive_swirles[385380]: 167 167
Jan 31 08:47:41 compute-0 systemd[1]: libpod-597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c.scope: Deactivated successfully.
Jan 31 08:47:41 compute-0 podman[385364]: 2026-01-31 08:47:41.162037884 +0000 UTC m=+0.105376274 container died 597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:47:41 compute-0 podman[385364]: 2026-01-31 08:47:41.075793595 +0000 UTC m=+0.019132005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:47:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3473b2d884c71b345ecb98361ea7974864e3f9f7ad5f40aaf4eca58bf08e9637-merged.mount: Deactivated successfully.
Jan 31 08:47:41 compute-0 podman[385364]: 2026-01-31 08:47:41.193818907 +0000 UTC m=+0.137157297 container remove 597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_swirles, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:47:41 compute-0 systemd[1]: libpod-conmon-597832cda4468ee9bd1599771a284bb8e6e6f83b0995dce288be8ba9fa998d1c.scope: Deactivated successfully.
Jan 31 08:47:41 compute-0 podman[385403]: 2026-01-31 08:47:41.318222774 +0000 UTC m=+0.041108241 container create 116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermi, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 08:47:41 compute-0 systemd[1]: Started libpod-conmon-116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01.scope.
Jan 31 08:47:41 compute-0 podman[385403]: 2026-01-31 08:47:41.299843427 +0000 UTC m=+0.022728914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:47:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81145ab660637e9b429192887bbe384179bc1d672f7b161a58045afa874387f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81145ab660637e9b429192887bbe384179bc1d672f7b161a58045afa874387f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81145ab660637e9b429192887bbe384179bc1d672f7b161a58045afa874387f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81145ab660637e9b429192887bbe384179bc1d672f7b161a58045afa874387f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:41 compute-0 podman[385403]: 2026-01-31 08:47:41.428139459 +0000 UTC m=+0.151024946 container init 116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:47:41 compute-0 podman[385403]: 2026-01-31 08:47:41.436311958 +0000 UTC m=+0.159197445 container start 116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 31 08:47:41 compute-0 podman[385403]: 2026-01-31 08:47:41.442489988 +0000 UTC m=+0.165375485 container attach 116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:47:41 compute-0 nova_compute[247704]: 2026-01-31 08:47:41.880 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:42 compute-0 ceph-mon[74496]: pgmap v3446: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 31 08:47:42 compute-0 cool_fermi[385420]: {
Jan 31 08:47:42 compute-0 cool_fermi[385420]:     "0": [
Jan 31 08:47:42 compute-0 cool_fermi[385420]:         {
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "devices": [
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "/dev/loop3"
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             ],
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "lv_name": "ceph_lv0",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "lv_size": "7511998464",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "name": "ceph_lv0",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "tags": {
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.cluster_name": "ceph",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.crush_device_class": "",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.encrypted": "0",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.osd_id": "0",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.type": "block",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:                 "ceph.vdo": "0"
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             },
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "type": "block",
Jan 31 08:47:42 compute-0 cool_fermi[385420]:             "vg_name": "ceph_vg0"
Jan 31 08:47:42 compute-0 cool_fermi[385420]:         }
Jan 31 08:47:42 compute-0 cool_fermi[385420]:     ]
Jan 31 08:47:42 compute-0 cool_fermi[385420]: }
Jan 31 08:47:42 compute-0 systemd[1]: libpod-116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01.scope: Deactivated successfully.
Jan 31 08:47:42 compute-0 podman[385403]: 2026-01-31 08:47:42.278782077 +0000 UTC m=+1.001667544 container died 116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:47:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-81145ab660637e9b429192887bbe384179bc1d672f7b161a58045afa874387f3-merged.mount: Deactivated successfully.
Jan 31 08:47:42 compute-0 podman[385403]: 2026-01-31 08:47:42.466917795 +0000 UTC m=+1.189803262 container remove 116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 08:47:42 compute-0 systemd[1]: libpod-conmon-116103c1442a26195947dbeb867e15e85f2c5f5a6a6def7c884399ff9b0ffb01.scope: Deactivated successfully.
Jan 31 08:47:42 compute-0 sudo[385298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:42 compute-0 sudo[385442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:42 compute-0 sudo[385442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:42 compute-0 sudo[385442]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:42 compute-0 sudo[385467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:42 compute-0 sudo[385467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:42 compute-0 sudo[385467]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:42 compute-0 sudo[385492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:42 compute-0 sudo[385492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:42 compute-0 sudo[385492]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:42 compute-0 sudo[385517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:47:42 compute-0 sudo[385517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Jan 31 08:47:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:43.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:43 compute-0 podman[385583]: 2026-01-31 08:47:43.106793185 +0000 UTC m=+0.042164107 container create 94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:47:43 compute-0 systemd[1]: Started libpod-conmon-94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2.scope.
Jan 31 08:47:43 compute-0 podman[385583]: 2026-01-31 08:47:43.087898004 +0000 UTC m=+0.023268936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:47:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:43 compute-0 podman[385583]: 2026-01-31 08:47:43.199964811 +0000 UTC m=+0.135335743 container init 94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:47:43 compute-0 podman[385583]: 2026-01-31 08:47:43.209332689 +0000 UTC m=+0.144703601 container start 94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:47:43 compute-0 wonderful_zhukovsky[385600]: 167 167
Jan 31 08:47:43 compute-0 systemd[1]: libpod-94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2.scope: Deactivated successfully.
Jan 31 08:47:43 compute-0 podman[385583]: 2026-01-31 08:47:43.21636032 +0000 UTC m=+0.151731262 container attach 94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_zhukovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:47:43 compute-0 podman[385583]: 2026-01-31 08:47:43.217110779 +0000 UTC m=+0.152481691 container died 94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_zhukovsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0036a4fde557958c9745c5d6b0cd28d9a7489218900b2fcdce7127c10b9aca3e-merged.mount: Deactivated successfully.
Jan 31 08:47:43 compute-0 podman[385583]: 2026-01-31 08:47:43.257598833 +0000 UTC m=+0.192969745 container remove 94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:47:43 compute-0 systemd[1]: libpod-conmon-94d1fcd9487e06e9344d7dfba9cffe5b8322d91dc49b98855be5eade86c712c2.scope: Deactivated successfully.
Jan 31 08:47:43 compute-0 podman[385625]: 2026-01-31 08:47:43.424556837 +0000 UTC m=+0.055323838 container create 963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yalow, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 08:47:43 compute-0 systemd[1]: Started libpod-conmon-963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20.scope.
Jan 31 08:47:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:43 compute-0 podman[385625]: 2026-01-31 08:47:43.39884484 +0000 UTC m=+0.029611921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34429a55b1c08d8e1e6129b6501a54747a256e80ab1d65cca4b17470f3b08680/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34429a55b1c08d8e1e6129b6501a54747a256e80ab1d65cca4b17470f3b08680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34429a55b1c08d8e1e6129b6501a54747a256e80ab1d65cca4b17470f3b08680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34429a55b1c08d8e1e6129b6501a54747a256e80ab1d65cca4b17470f3b08680/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:43 compute-0 podman[385625]: 2026-01-31 08:47:43.51473866 +0000 UTC m=+0.145505681 container init 963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:47:43 compute-0 podman[385625]: 2026-01-31 08:47:43.521602798 +0000 UTC m=+0.152369799 container start 963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:47:43 compute-0 podman[385625]: 2026-01-31 08:47:43.528049575 +0000 UTC m=+0.158816586 container attach 963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:47:44 compute-0 ceph-mon[74496]: pgmap v3447: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Jan 31 08:47:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]: {
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]:         "osd_id": 0,
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]:         "type": "bluestore"
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]:     }
Jan 31 08:47:44 compute-0 vigilant_yalow[385641]: }
Jan 31 08:47:44 compute-0 systemd[1]: libpod-963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20.scope: Deactivated successfully.
Jan 31 08:47:44 compute-0 podman[385625]: 2026-01-31 08:47:44.357177619 +0000 UTC m=+0.987944650 container died 963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-34429a55b1c08d8e1e6129b6501a54747a256e80ab1d65cca4b17470f3b08680-merged.mount: Deactivated successfully.
Jan 31 08:47:44 compute-0 podman[385625]: 2026-01-31 08:47:44.431728693 +0000 UTC m=+1.062495694 container remove 963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 08:47:44 compute-0 systemd[1]: libpod-conmon-963a181dd1e49bed74407b6fb5cf3b6c82d84c896bd735cb5f798e0da0716b20.scope: Deactivated successfully.
Jan 31 08:47:44 compute-0 sudo[385517]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:47:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:47:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ea946d62-5722-4a69-b4a4-d34fc19b41f4 does not exist
Jan 31 08:47:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fceb5eec-a346-4c84-8ea5-a898e6dabda9 does not exist
Jan 31 08:47:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 480e15ad-bb02-4eb3-b7a7-15a976cb4f59 does not exist
Jan 31 08:47:44 compute-0 nova_compute[247704]: 2026-01-31 08:47:44.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:44 compute-0 nova_compute[247704]: 2026-01-31 08:47:44.570 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:44 compute-0 nova_compute[247704]: 2026-01-31 08:47:44.571 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:44 compute-0 sudo[385676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:47:44 compute-0 sudo[385676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:44 compute-0 sudo[385676]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:44 compute-0 sudo[385701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:47:44 compute-0 sudo[385701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:44 compute-0 sudo[385701]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:44 compute-0 nova_compute[247704]: 2026-01-31 08:47:44.787 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:47:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:44.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:44 compute-0 nova_compute[247704]: 2026-01-31 08:47:44.997 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:44 compute-0 nova_compute[247704]: 2026-01-31 08:47:44.997 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:45 compute-0 nova_compute[247704]: 2026-01-31 08:47:45.005 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:47:45 compute-0 nova_compute[247704]: 2026-01-31 08:47:45.005 247708 INFO nova.compute.claims [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:47:45 compute-0 nova_compute[247704]: 2026-01-31 08:47:45.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:45.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:45 compute-0 nova_compute[247704]: 2026-01-31 08:47:45.325 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:47:45 compute-0 ceph-mon[74496]: pgmap v3448: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:47:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/480750745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:45 compute-0 nova_compute[247704]: 2026-01-31 08:47:45.800 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:45 compute-0 nova_compute[247704]: 2026-01-31 08:47:45.806 247708 DEBUG nova.compute.provider_tree [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:47:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/480750745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.577737) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266577803, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 686, "num_deletes": 251, "total_data_size": 917376, "memory_usage": 931224, "flush_reason": "Manual Compaction"}
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266589048, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 907419, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75020, "largest_seqno": 75705, "table_properties": {"data_size": 903809, "index_size": 1453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8242, "raw_average_key_size": 19, "raw_value_size": 896596, "raw_average_value_size": 2119, "num_data_blocks": 64, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849215, "oldest_key_time": 1769849215, "file_creation_time": 1769849266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 11435 microseconds, and 5041 cpu microseconds.
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.589164) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 907419 bytes OK
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.589201) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.592191) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.592223) EVENT_LOG_v1 {"time_micros": 1769849266592213, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.592251) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 913859, prev total WAL file size 913859, number of live WAL files 2.
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.592857) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(886KB)], [170(14MB)]
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266592920, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 15747765, "oldest_snapshot_seqno": -1}
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 10214 keys, 13742712 bytes, temperature: kUnknown
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266699410, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 13742712, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13676394, "index_size": 39709, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 270289, "raw_average_key_size": 26, "raw_value_size": 13497139, "raw_average_value_size": 1321, "num_data_blocks": 1514, "num_entries": 10214, "num_filter_entries": 10214, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849266, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.700016) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13742712 bytes
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.702559) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.5 rd, 128.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.2 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(32.5) write-amplify(15.1) OK, records in: 10729, records dropped: 515 output_compression: NoCompression
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.702605) EVENT_LOG_v1 {"time_micros": 1769849266702586, "job": 106, "event": "compaction_finished", "compaction_time_micros": 106764, "compaction_time_cpu_micros": 33034, "output_level": 6, "num_output_files": 1, "total_output_size": 13742712, "num_input_records": 10729, "num_output_records": 10214, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266703001, "job": 106, "event": "table_file_deletion", "file_number": 172}
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849266706401, "job": 106, "event": "table_file_deletion", "file_number": 170}
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.592731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.706613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.706623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.706628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.706632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:47:46.706637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:46 compute-0 nova_compute[247704]: 2026-01-31 08:47:46.883 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:46 compute-0 nova_compute[247704]: 2026-01-31 08:47:46.912 247708 DEBUG nova.scheduler.client.report [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:47:46 compute-0 podman[385749]: 2026-01-31 08:47:46.919592728 +0000 UTC m=+0.087144001 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 08:47:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:46.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:47 compute-0 nova_compute[247704]: 2026-01-31 08:47:47.382 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:47 compute-0 nova_compute[247704]: 2026-01-31 08:47:47.383 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:47:47 compute-0 ceph-mon[74496]: pgmap v3449: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:47 compute-0 nova_compute[247704]: 2026-01-31 08:47:47.839 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:47:47 compute-0 nova_compute[247704]: 2026-01-31 08:47:47.840 247708 DEBUG nova.network.neutron [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:47:48 compute-0 nova_compute[247704]: 2026-01-31 08:47:48.272 247708 INFO nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:47:48 compute-0 nova_compute[247704]: 2026-01-31 08:47:48.613 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:47:48 compute-0 nova_compute[247704]: 2026-01-31 08:47:48.772 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:48.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.102 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.105 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.105 247708 INFO nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Creating image(s)
Jan 31 08:47:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:49.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.149 247708 DEBUG nova.storage.rbd_utils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 1fff7e98-de8b-44f9-b725-55c12d40b395_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.196 247708 DEBUG nova.storage.rbd_utils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 1fff7e98-de8b-44f9-b725-55c12d40b395_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.242 247708 DEBUG nova.storage.rbd_utils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 1fff7e98-de8b-44f9-b725-55c12d40b395_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.249 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.283 247708 DEBUG nova.policy [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c6968a1ee10e4e3b8651ffe0240a7e46', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.320 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.321 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.322 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.322 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.357 247708 DEBUG nova.storage.rbd_utils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 1fff7e98-de8b-44f9-b725-55c12d40b395_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.363 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1fff7e98-de8b-44f9-b725-55c12d40b395_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.784 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1fff7e98-de8b-44f9-b725-55c12d40b395_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.854 247708 DEBUG nova.storage.rbd_utils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] resizing rbd image 1fff7e98-de8b-44f9-b725-55c12d40b395_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:47:49 compute-0 nova_compute[247704]: 2026-01-31 08:47:49.984 247708 DEBUG nova.objects.instance [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'migration_context' on Instance uuid 1fff7e98-de8b-44f9-b725-55c12d40b395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:47:50 compute-0 ceph-mon[74496]: pgmap v3450: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.4 KiB/s wr, 0 op/s
Jan 31 08:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:50 compute-0 nova_compute[247704]: 2026-01-31 08:47:50.132 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:47:50 compute-0 nova_compute[247704]: 2026-01-31 08:47:50.134 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Ensure instance console log exists: /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:47:50 compute-0 nova_compute[247704]: 2026-01-31 08:47:50.135 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:50 compute-0 nova_compute[247704]: 2026-01-31 08:47:50.136 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:50 compute-0 nova_compute[247704]: 2026-01-31 08:47:50.137 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:50 compute-0 nova_compute[247704]: 2026-01-31 08:47:50.138 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:50 compute-0 ovn_controller[149457]: 2026-01-31T08:47:50Z|00823|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 31 08:47:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 221 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 674 KiB/s wr, 15 op/s
Jan 31 08:47:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:47:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:47:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:51.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:51 compute-0 nova_compute[247704]: 2026-01-31 08:47:51.885 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:52 compute-0 ceph-mon[74496]: pgmap v3451: 305 pgs: 305 active+clean; 221 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 674 KiB/s wr, 15 op/s
Jan 31 08:47:52 compute-0 nova_compute[247704]: 2026-01-31 08:47:52.464 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 221 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 673 KiB/s wr, 14 op/s
Jan 31 08:47:52 compute-0 nova_compute[247704]: 2026-01-31 08:47:52.845 247708 WARNING nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 31 08:47:52 compute-0 nova_compute[247704]: 2026-01-31 08:47:52.845 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid 1fff7e98-de8b-44f9-b725-55c12d40b395 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 08:47:52 compute-0 nova_compute[247704]: 2026-01-31 08:47:52.845 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:52.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:53.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:54 compute-0 ceph-mon[74496]: pgmap v3452: 305 pgs: 305 active+clean; 221 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 673 KiB/s wr, 14 op/s
Jan 31 08:47:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/182738076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:47:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/182738076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:47:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:47:54 compute-0 nova_compute[247704]: 2026-01-31 08:47:54.943 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:47:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:54.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:47:54 compute-0 nova_compute[247704]: 2026-01-31 08:47:54.961 247708 DEBUG nova.network.neutron [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Successfully created port: 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:47:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:55.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:55 compute-0 nova_compute[247704]: 2026-01-31 08:47:55.141 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:56 compute-0 ceph-mon[74496]: pgmap v3453: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:47:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:47:56 compute-0 nova_compute[247704]: 2026-01-31 08:47:56.888 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:47:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:56.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:57.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:57 compute-0 ceph-mon[74496]: pgmap v3454: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:47:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:47:58 compute-0 nova_compute[247704]: 2026-01-31 08:47:58.874 247708 DEBUG nova.network.neutron [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Successfully updated port: 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:47:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:47:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:58.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:47:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:47:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:47:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:59.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:47:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:59 compute-0 nova_compute[247704]: 2026-01-31 08:47:59.327 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:47:59 compute-0 nova_compute[247704]: 2026-01-31 08:47:59.328 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquired lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:47:59 compute-0 nova_compute[247704]: 2026-01-31 08:47:59.328 247708 DEBUG nova.network.neutron [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:47:59 compute-0 nova_compute[247704]: 2026-01-31 08:47:59.390 247708 DEBUG nova.compute.manager [req-bfa5800b-01cf-4982-a664-98ce658d61d6 req-9c60f1a4-ddd7-4c66-82f8-2b7c4b25a8b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-changed-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:47:59 compute-0 nova_compute[247704]: 2026-01-31 08:47:59.390 247708 DEBUG nova.compute.manager [req-bfa5800b-01cf-4982-a664-98ce658d61d6 req-9c60f1a4-ddd7-4c66-82f8-2b7c4b25a8b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Refreshing instance network info cache due to event network-changed-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:47:59 compute-0 nova_compute[247704]: 2026-01-31 08:47:59.391 247708 DEBUG oslo_concurrency.lockutils [req-bfa5800b-01cf-4982-a664-98ce658d61d6 req-9c60f1a4-ddd7-4c66-82f8-2b7c4b25a8b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:48:00 compute-0 nova_compute[247704]: 2026-01-31 08:48:00.144 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:00 compute-0 sudo[385947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:00 compute-0 sudo[385947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:00 compute-0 sudo[385947]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:00 compute-0 sudo[385973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:00 compute-0 sudo[385973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:00 compute-0 sudo[385973]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:00 compute-0 ceph-mon[74496]: pgmap v3455: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:48:00 compute-0 nova_compute[247704]: 2026-01-31 08:48:00.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:00 compute-0 nova_compute[247704]: 2026-01-31 08:48:00.761 247708 DEBUG nova.network.neutron [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:48:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:48:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:00.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:01.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:01 compute-0 nova_compute[247704]: 2026-01-31 08:48:01.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:01 compute-0 nova_compute[247704]: 2026-01-31 08:48:01.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:48:01 compute-0 ceph-mon[74496]: pgmap v3456: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:48:01 compute-0 nova_compute[247704]: 2026-01-31 08:48:01.889 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Jan 31 08:48:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1146695923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:02.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:03.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.190 247708 DEBUG nova.network.neutron [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updating instance_info_cache with network_info: [{"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.284 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Releasing lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.285 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Instance network_info: |[{"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.286 247708 DEBUG oslo_concurrency.lockutils [req-bfa5800b-01cf-4982-a664-98ce658d61d6 req-9c60f1a4-ddd7-4c66-82f8-2b7c4b25a8b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.287 247708 DEBUG nova.network.neutron [req-bfa5800b-01cf-4982-a664-98ce658d61d6 req-9c60f1a4-ddd7-4c66-82f8-2b7c4b25a8b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Refreshing network info cache for port 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.291 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Start _get_guest_xml network_info=[{"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.297 247708 WARNING nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.306 247708 DEBUG nova.virt.libvirt.host [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.307 247708 DEBUG nova.virt.libvirt.host [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.325 247708 DEBUG nova.virt.libvirt.host [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.326 247708 DEBUG nova.virt.libvirt.host [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.328 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.328 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.329 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.329 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.330 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.330 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.331 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.331 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.331 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.332 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.332 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.333 247708 DEBUG nova.virt.hardware [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.338 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:48:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3499755342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.801 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.836 247708 DEBUG nova.storage.rbd_utils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 1fff7e98-de8b-44f9-b725-55c12d40b395_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:48:03 compute-0 nova_compute[247704]: 2026-01-31 08:48:03.842 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:04 compute-0 ceph-mon[74496]: pgmap v3457: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.1 KiB/s rd, 1.1 MiB/s wr, 13 op/s
Jan 31 08:48:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3678183900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3499755342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:48:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852161205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.384 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.386 247708 DEBUG nova.virt.libvirt.vif [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:47:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1746647710',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1746647710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=194,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEVRXCOmyKSdhlo9l0WxIRbGJsQf5I7xnoqbiNS2WMkfr+PHUSU3MjMN6e2reAurOFdIo1aIllo1la3phc66kwPrjNj65wq6jckJq8FKFDC0nYvOhxhwR1EYQGTcrj65g==',key_name='tempest-TestSecurityGroupsBasicOps-681515177',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-1fepqj6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:47:48Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=1fff7e98-de8b-44f9-b725-55c12d40b395,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.386 247708 DEBUG nova.network.os_vif_util [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.387 247708 DEBUG nova.network.os_vif_util [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:df:e9,bridge_name='br-int',has_traffic_filtering=True,id=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b62b7b3-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.388 247708 DEBUG nova.objects.instance [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fff7e98-de8b-44f9-b725-55c12d40b395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.411 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <uuid>1fff7e98-de8b-44f9-b725-55c12d40b395</uuid>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <name>instance-000000c2</name>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1746647710</nova:name>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:48:03</nova:creationTime>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <nova:user uuid="c6968a1ee10e4e3b8651ffe0240a7e46">tempest-TestSecurityGroupsBasicOps-1014068786-project-member</nova:user>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <nova:project uuid="ba35ae24dbf3443e8a526dce39c6793b">tempest-TestSecurityGroupsBasicOps-1014068786</nova:project>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <nova:port uuid="1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94">
Jan 31 08:48:04 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <system>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <entry name="serial">1fff7e98-de8b-44f9-b725-55c12d40b395</entry>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <entry name="uuid">1fff7e98-de8b-44f9-b725-55c12d40b395</entry>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </system>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <os>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   </os>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <features>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   </features>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/1fff7e98-de8b-44f9-b725-55c12d40b395_disk">
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       </source>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/1fff7e98-de8b-44f9-b725-55c12d40b395_disk.config">
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       </source>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:48:04 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:09:df:e9"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <target dev="tap1b62b7b3-9c"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395/console.log" append="off"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <video>
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </video>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:48:04 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:48:04 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:48:04 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:48:04 compute-0 nova_compute[247704]: </domain>
Jan 31 08:48:04 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.412 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Preparing to wait for external event network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.413 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.413 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.413 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.414 247708 DEBUG nova.virt.libvirt.vif [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:47:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1746647710',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1746647710',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=194,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEVRXCOmyKSdhlo9l0WxIRbGJsQf5I7xnoqbiNS2WMkfr+PHUSU3MjMN6e2reAurOFdIo1aIllo1la3phc66kwPrjNj65wq6jckJq8FKFDC0nYvOhxhwR1EYQGTcrj65g==',key_name='tempest-TestSecurityGroupsBasicOps-681515177',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-1fepqj6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:47:48Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=1fff7e98-de8b-44f9-b725-55c12d40b395,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.414 247708 DEBUG nova.network.os_vif_util [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.415 247708 DEBUG nova.network.os_vif_util [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:df:e9,bridge_name='br-int',has_traffic_filtering=True,id=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b62b7b3-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.415 247708 DEBUG os_vif [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:df:e9,bridge_name='br-int',has_traffic_filtering=True,id=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b62b7b3-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.416 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.416 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.417 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.421 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.421 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b62b7b3-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.422 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b62b7b3-9c, col_values=(('external_ids', {'iface-id': '1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:df:e9', 'vm-uuid': '1fff7e98-de8b-44f9-b725-55c12d40b395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.426 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:04 compute-0 NetworkManager[49108]: <info>  [1769849284.4262] manager: (tap1b62b7b3-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.428 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.433 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.436 247708 INFO os_vif [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:df:e9,bridge_name='br-int',has_traffic_filtering=True,id=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b62b7b3-9c')
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.638 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.639 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.639 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] No VIF found with MAC fa:16:3e:09:df:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.640 247708 INFO nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Using config drive
Jan 31 08:48:04 compute-0 nova_compute[247704]: 2026-01-31 08:48:04.671 247708 DEBUG nova.storage.rbd_utils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 1fff7e98-de8b-44f9-b725-55c12d40b395_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:48:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 3.2 MiB/s wr, 40 op/s
Jan 31 08:48:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:04.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:05.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3852161205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:05 compute-0 podman[386082]: 2026-01-31 08:48:05.911422479 +0000 UTC m=+0.067673277 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:48:06 compute-0 nova_compute[247704]: 2026-01-31 08:48:06.485 247708 INFO nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Creating config drive at /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395/disk.config
Jan 31 08:48:06 compute-0 nova_compute[247704]: 2026-01-31 08:48:06.490 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpq2nyx1x3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:06 compute-0 nova_compute[247704]: 2026-01-31 08:48:06.615 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpq2nyx1x3" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:06 compute-0 nova_compute[247704]: 2026-01-31 08:48:06.686 247708 DEBUG nova.storage.rbd_utils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] rbd image 1fff7e98-de8b-44f9-b725-55c12d40b395_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:48:06 compute-0 nova_compute[247704]: 2026-01-31 08:48:06.691 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395/disk.config 1fff7e98-de8b-44f9-b725-55c12d40b395_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:06 compute-0 ceph-mon[74496]: pgmap v3458: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 3.2 MiB/s wr, 40 op/s
Jan 31 08:48:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/280270877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1441525110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 3.2 MiB/s wr, 49 op/s
Jan 31 08:48:06 compute-0 nova_compute[247704]: 2026-01-31 08:48:06.899 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:06.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:07.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.216 247708 DEBUG oslo_concurrency.processutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395/disk.config 1fff7e98-de8b-44f9-b725-55c12d40b395_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.217 247708 INFO nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Deleting local config drive /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395/disk.config because it was imported into RBD.
Jan 31 08:48:07 compute-0 kernel: tap1b62b7b3-9c: entered promiscuous mode
Jan 31 08:48:07 compute-0 NetworkManager[49108]: <info>  [1769849287.2841] manager: (tap1b62b7b3-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.286 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:07 compute-0 ovn_controller[149457]: 2026-01-31T08:48:07Z|00824|binding|INFO|Claiming lport 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 for this chassis.
Jan 31 08:48:07 compute-0 ovn_controller[149457]: 2026-01-31T08:48:07Z|00825|binding|INFO|1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94: Claiming fa:16:3e:09:df:e9 10.100.0.5
Jan 31 08:48:07 compute-0 systemd-udevd[386153]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:48:07 compute-0 NetworkManager[49108]: <info>  [1769849287.3241] device (tap1b62b7b3-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:48:07 compute-0 NetworkManager[49108]: <info>  [1769849287.3251] device (tap1b62b7b3-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.330 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:07 compute-0 systemd-machined[214448]: New machine qemu-87-instance-000000c2.
Jan 31 08:48:07 compute-0 ovn_controller[149457]: 2026-01-31T08:48:07Z|00826|binding|INFO|Setting lport 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 ovn-installed in OVS
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.339 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:07 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-000000c2.
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.351 247708 DEBUG nova.network.neutron [req-bfa5800b-01cf-4982-a664-98ce658d61d6 req-9c60f1a4-ddd7-4c66-82f8-2b7c4b25a8b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updated VIF entry in instance network info cache for port 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.352 247708 DEBUG nova.network.neutron [req-bfa5800b-01cf-4982-a664-98ce658d61d6 req-9c60f1a4-ddd7-4c66-82f8-2b7c4b25a8b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updating instance_info_cache with network_info: [{"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:48:07 compute-0 ovn_controller[149457]: 2026-01-31T08:48:07Z|00827|binding|INFO|Setting lport 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 up in Southbound
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.387 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:df:e9 10.100.0.5'], port_security=['fa:16:3e:09:df:e9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1fff7e98-de8b-44f9-b725-55c12d40b395', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1594b23b-251f-4e04-9498-d13079f70afc 8ba5f6d0-b888-4df9-bd08-60927f70df0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81b33545-8c35-4640-a006-3779e6b65cfc, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.389 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 in datapath 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 bound to our chassis
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.391 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.405 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ddce5f2c-4828-4378-a8c3-53876f107e48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.407 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93bea2b0-d1 in ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.409 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93bea2b0-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.410 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7c87c2-73b5-4fc5-b320-2f6c3fcf0baa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.411 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[69b95a23-b085-490e-9139-e28d9caf0b79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.428 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[01386d47-a06e-41bb-b4ca-b92a1ae19ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.440 247708 DEBUG oslo_concurrency.lockutils [req-bfa5800b-01cf-4982-a664-98ce658d61d6 req-9c60f1a4-ddd7-4c66-82f8-2b7c4b25a8b5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.443 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[995af1fc-8fb2-4cad-a252-4d268faf4dcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.470 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0097f8c8-3fa4-41ac-b07d-1be686e3b160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 NetworkManager[49108]: <info>  [1769849287.4778] manager: (tap93bea2b0-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.476 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4c47ad-d126-42fd-8bb0-964f10b193bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.512 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb409df-2b5f-4f90-93d9-4a279540e91a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.516 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3e0807-b314-4342-9e2c-4ad53bf18d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 NetworkManager[49108]: <info>  [1769849287.5360] device (tap93bea2b0-d0): carrier: link connected
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.540 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[15da1029-6c34-4fe1-bd72-46d09fb9d62a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.559 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbdb892-9e19-4225-9326-91e94731acea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93bea2b0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:5d:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953015, 'reachable_time': 22153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386189, 'error': None, 'target': 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.578 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbc1f7c-fdbf-4cc2-a41c-074943d92066]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe99:5dce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 953015, 'tstamp': 953015}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386190, 'error': None, 'target': 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.638 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.639 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.717 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f9db5cdd-9cac-4578-9910-2469a876dc0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93bea2b0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:99:5d:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953015, 'reachable_time': 22153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386191, 'error': None, 'target': 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.749 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e673d6e6-d294-4f36-84f8-944ccbbd1db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.809 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc5843b-0753-4795-bc3d-71200bd32eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.810 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93bea2b0-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.811 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.811 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93bea2b0-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.813 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:07 compute-0 NetworkManager[49108]: <info>  [1769849287.8147] manager: (tap93bea2b0-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Jan 31 08:48:07 compute-0 kernel: tap93bea2b0-d0: entered promiscuous mode
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.821 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93bea2b0-d0, col_values=(('external_ids', {'iface-id': 'ccf3782f-9c12-4288-9bff-27ab514c5f7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:48:07 compute-0 ovn_controller[149457]: 2026-01-31T08:48:07Z|00828|binding|INFO|Releasing lport ccf3782f-9c12-4288-9bff-27ab514c5f7a from this chassis (sb_readonly=0)
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.826 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:48:07 compute-0 nova_compute[247704]: 2026-01-31 08:48:07.828 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.827 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4850b93a-02f4-4f4a-a187-0f640a828828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.829 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234.pid.haproxy
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:48:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:07.830 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'env', 'PROCESS_TAG=haproxy-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:48:07 compute-0 ceph-mon[74496]: pgmap v3459: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 3.2 MiB/s wr, 49 op/s
Jan 31 08:48:08 compute-0 podman[386223]: 2026-01-31 08:48:08.210167043 +0000 UTC m=+0.022653042 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:48:08 compute-0 nova_compute[247704]: 2026-01-31 08:48:08.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:08 compute-0 podman[386223]: 2026-01-31 08:48:08.582535153 +0000 UTC m=+0.395021102 container create 07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:48:08 compute-0 nova_compute[247704]: 2026-01-31 08:48:08.653 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:08 compute-0 nova_compute[247704]: 2026-01-31 08:48:08.655 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:08 compute-0 nova_compute[247704]: 2026-01-31 08:48:08.655 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:08 compute-0 nova_compute[247704]: 2026-01-31 08:48:08.655 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:48:08 compute-0 nova_compute[247704]: 2026-01-31 08:48:08.656 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:08 compute-0 systemd[1]: Started libpod-conmon-07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2.scope.
Jan 31 08:48:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e517803cf4f63e4b88fdde27124ba5046788c1991c06668e13d9de375ddc8e12/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:08 compute-0 podman[386223]: 2026-01-31 08:48:08.814937178 +0000 UTC m=+0.627423157 container init 07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:48:08 compute-0 podman[386223]: 2026-01-31 08:48:08.821005655 +0000 UTC m=+0.633491614 container start 07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:48:08 compute-0 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[386240]: [NOTICE]   (386263) : New worker (386265) forked
Jan 31 08:48:08 compute-0 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[386240]: [NOTICE]   (386263) : Loading success.
Jan 31 08:48:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 50 op/s
Jan 31 08:48:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:08.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:48:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553671827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.157 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.267 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849289.2662811, 1fff7e98-de8b-44f9-b725-55c12d40b395 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.267 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] VM Started (Lifecycle Event)
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.311 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.315 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849289.266557, 1fff7e98-de8b-44f9-b725-55c12d40b395 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.315 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] VM Paused (Lifecycle Event)
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.325 247708 DEBUG nova.compute.manager [req-2ad3fa2c-082c-4e22-9129-29e47f9afd29 req-2b55fe2b-553e-465c-88dc-a655a4a2e6ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.326 247708 DEBUG oslo_concurrency.lockutils [req-2ad3fa2c-082c-4e22-9129-29e47f9afd29 req-2b55fe2b-553e-465c-88dc-a655a4a2e6ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.326 247708 DEBUG oslo_concurrency.lockutils [req-2ad3fa2c-082c-4e22-9129-29e47f9afd29 req-2b55fe2b-553e-465c-88dc-a655a4a2e6ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.326 247708 DEBUG oslo_concurrency.lockutils [req-2ad3fa2c-082c-4e22-9129-29e47f9afd29 req-2b55fe2b-553e-465c-88dc-a655a4a2e6ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.327 247708 DEBUG nova.compute.manager [req-2ad3fa2c-082c-4e22-9129-29e47f9afd29 req-2b55fe2b-553e-465c-88dc-a655a4a2e6ee 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Processing event network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.327 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.333 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.338 247708 INFO nova.virt.libvirt.driver [-] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Instance spawned successfully.
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.339 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.377 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.378 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.383 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.388 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849289.3316453, 1fff7e98-de8b-44f9-b725-55c12d40b395 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.388 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] VM Resumed (Lifecycle Event)
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.423 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.436 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.436 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.437 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.437 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.438 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.438 247708 DEBUG nova.virt.libvirt.driver [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.552 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.553 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4155MB free_disk=20.880420684814453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.553 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.553 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.557 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.559 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.757 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.824 247708 INFO nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Took 20.72 seconds to spawn the instance on the hypervisor.
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.824 247708 DEBUG nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.992 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1fff7e98-de8b-44f9-b725-55c12d40b395 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.993 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:48:09 compute-0 nova_compute[247704]: 2026-01-31 08:48:09.993 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.046 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.218 247708 INFO nova.compute.manager [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Took 25.26 seconds to build instance.
Jan 31 08:48:10 compute-0 ceph-mon[74496]: pgmap v3460: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 3.5 MiB/s wr, 50 op/s
Jan 31 08:48:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2553671827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.284 247708 DEBUG oslo_concurrency.lockutils [None req-9a1bfdb9-67d2-4439-8cc5-54a878aa07d4 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.285 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 17.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.285 247708 INFO nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.285 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:48:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835481311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.489 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.495 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.537 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.653 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:48:10 compute-0 nova_compute[247704]: 2026-01-31 08:48:10.654 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 564 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Jan 31 08:48:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:10.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:11.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:11.219 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:11.220 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:11.220 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3835481311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1540000744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1385660829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:11 compute-0 nova_compute[247704]: 2026-01-31 08:48:11.847 247708 DEBUG nova.compute.manager [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:48:11 compute-0 nova_compute[247704]: 2026-01-31 08:48:11.848 247708 DEBUG oslo_concurrency.lockutils [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:11 compute-0 nova_compute[247704]: 2026-01-31 08:48:11.848 247708 DEBUG oslo_concurrency.lockutils [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:11 compute-0 nova_compute[247704]: 2026-01-31 08:48:11.848 247708 DEBUG oslo_concurrency.lockutils [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:11 compute-0 nova_compute[247704]: 2026-01-31 08:48:11.848 247708 DEBUG nova.compute.manager [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] No waiting events found dispatching network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:48:11 compute-0 nova_compute[247704]: 2026-01-31 08:48:11.849 247708 WARNING nova.compute.manager [req-d52d47c1-5c09-4f8d-9c7b-57e4ade39d0c req-ab45fc6a-ed06-41c8-a9c3-0d9bac88f678 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received unexpected event network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 for instance with vm_state active and task_state None.
Jan 31 08:48:11 compute-0 nova_compute[247704]: 2026-01-31 08:48:11.897 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:12 compute-0 ceph-mon[74496]: pgmap v3461: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 564 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Jan 31 08:48:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 564 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Jan 31 08:48:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:12.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:13.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:13 compute-0 ceph-mon[74496]: pgmap v3462: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 564 KiB/s rd, 3.6 MiB/s wr, 81 op/s
Jan 31 08:48:13 compute-0 nova_compute[247704]: 2026-01-31 08:48:13.655 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:13 compute-0 nova_compute[247704]: 2026-01-31 08:48:13.656 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:13 compute-0 nova_compute[247704]: 2026-01-31 08:48:13.656 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:14 compute-0 nova_compute[247704]: 2026-01-31 08:48:14.425 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/759972552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 31 08:48:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:14.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:15.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:16 compute-0 ceph-mon[74496]: pgmap v3463: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 31 08:48:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2196161786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:48:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2098039222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 111 op/s
Jan 31 08:48:16 compute-0 nova_compute[247704]: 2026-01-31 08:48:16.900 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:16.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:17.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2983306795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:17 compute-0 podman[386345]: 2026-01-31 08:48:17.905144613 +0000 UTC m=+0.079123806 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 08:48:18 compute-0 ceph-mon[74496]: pgmap v3464: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 111 op/s
Jan 31 08:48:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 412 KiB/s wr, 91 op/s
Jan 31 08:48:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:19 compute-0 NetworkManager[49108]: <info>  [1769849299.3661] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Jan 31 08:48:19 compute-0 nova_compute[247704]: 2026-01-31 08:48:19.364 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:19 compute-0 NetworkManager[49108]: <info>  [1769849299.3680] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 31 08:48:19 compute-0 nova_compute[247704]: 2026-01-31 08:48:19.391 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:19 compute-0 ovn_controller[149457]: 2026-01-31T08:48:19Z|00829|binding|INFO|Releasing lport ccf3782f-9c12-4288-9bff-27ab514c5f7a from this chassis (sb_readonly=0)
Jan 31 08:48:19 compute-0 nova_compute[247704]: 2026-01-31 08:48:19.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:19 compute-0 nova_compute[247704]: 2026-01-31 08:48:19.427 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:20.094 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:48:20 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:20.095 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:48:20 compute-0 nova_compute[247704]: 2026-01-31 08:48:20.095 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:48:20
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:48:20 compute-0 ceph-mon[74496]: pgmap v3465: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 412 KiB/s wr, 91 op/s
Jan 31 08:48:20 compute-0 sudo[386374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:20 compute-0 sudo[386374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:20 compute-0 sudo[386374]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:20 compute-0 sudo[386399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:20 compute-0 sudo[386399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:20 compute-0 sudo[386399]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 37 KiB/s wr, 178 op/s
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:48:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:48:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:21.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:21 compute-0 ceph-mon[74496]: pgmap v3466: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 37 KiB/s wr, 178 op/s
Jan 31 08:48:21 compute-0 nova_compute[247704]: 2026-01-31 08:48:21.901 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:22 compute-0 ovn_controller[149457]: 2026-01-31T08:48:22Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:df:e9 10.100.0.5
Jan 31 08:48:22 compute-0 ovn_controller[149457]: 2026-01-31T08:48:22Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:df:e9 10.100.0.5
Jan 31 08:48:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 31 08:48:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:22.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:23.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:23 compute-0 nova_compute[247704]: 2026-01-31 08:48:23.741 247708 DEBUG nova.compute.manager [req-2f0b4d3b-df6a-4f14-b43b-5770eaa14103 req-fadef004-25ce-4e0d-a292-df80bb907450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-changed-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:48:23 compute-0 nova_compute[247704]: 2026-01-31 08:48:23.741 247708 DEBUG nova.compute.manager [req-2f0b4d3b-df6a-4f14-b43b-5770eaa14103 req-fadef004-25ce-4e0d-a292-df80bb907450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Refreshing instance network info cache due to event network-changed-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:48:23 compute-0 nova_compute[247704]: 2026-01-31 08:48:23.741 247708 DEBUG oslo_concurrency.lockutils [req-2f0b4d3b-df6a-4f14-b43b-5770eaa14103 req-fadef004-25ce-4e0d-a292-df80bb907450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:48:23 compute-0 nova_compute[247704]: 2026-01-31 08:48:23.742 247708 DEBUG oslo_concurrency.lockutils [req-2f0b4d3b-df6a-4f14-b43b-5770eaa14103 req-fadef004-25ce-4e0d-a292-df80bb907450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:48:23 compute-0 nova_compute[247704]: 2026-01-31 08:48:23.742 247708 DEBUG nova.network.neutron [req-2f0b4d3b-df6a-4f14-b43b-5770eaa14103 req-fadef004-25ce-4e0d-a292-df80bb907450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Refreshing network info cache for port 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:48:23 compute-0 ceph-mon[74496]: pgmap v3467: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 31 08:48:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:24 compute-0 nova_compute[247704]: 2026-01-31 08:48:24.428 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.3 MiB/s wr, 327 op/s
Jan 31 08:48:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:24.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:25.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:25 compute-0 ceph-mon[74496]: pgmap v3468: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.3 MiB/s wr, 327 op/s
Jan 31 08:48:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:48:26.099 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:48:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 350 op/s
Jan 31 08:48:26 compute-0 nova_compute[247704]: 2026-01-31 08:48:26.903 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:26.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:27.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:27 compute-0 nova_compute[247704]: 2026-01-31 08:48:27.215 247708 DEBUG nova.network.neutron [req-2f0b4d3b-df6a-4f14-b43b-5770eaa14103 req-fadef004-25ce-4e0d-a292-df80bb907450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updated VIF entry in instance network info cache for port 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:48:27 compute-0 nova_compute[247704]: 2026-01-31 08:48:27.216 247708 DEBUG nova.network.neutron [req-2f0b4d3b-df6a-4f14-b43b-5770eaa14103 req-fadef004-25ce-4e0d-a292-df80bb907450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updating instance_info_cache with network_info: [{"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:48:27 compute-0 nova_compute[247704]: 2026-01-31 08:48:27.327 247708 DEBUG oslo_concurrency.lockutils [req-2f0b4d3b-df6a-4f14-b43b-5770eaa14103 req-fadef004-25ce-4e0d-a292-df80bb907450 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:48:28 compute-0 ceph-mon[74496]: pgmap v3469: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 350 op/s
Jan 31 08:48:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 379 op/s
Jan 31 08:48:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:28.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:29.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:29 compute-0 nova_compute[247704]: 2026-01-31 08:48:29.430 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:30 compute-0 ceph-mon[74496]: pgmap v3470: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 379 op/s
Jan 31 08:48:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 377 op/s
Jan 31 08:48:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.003000071s ======
Jan 31 08:48:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:31.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Jan 31 08:48:31 compute-0 nova_compute[247704]: 2026-01-31 08:48:31.905 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:32 compute-0 ceph-mon[74496]: pgmap v3471: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 377 op/s
Jan 31 08:48:32 compute-0 nova_compute[247704]: 2026-01-31 08:48:32.650 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:32 compute-0 sshd-session[386430]: Invalid user ubuntu from 45.148.10.240 port 43618
Jan 31 08:48:32 compute-0 sshd-session[386430]: Connection closed by invalid user ubuntu 45.148.10.240 port 43618 [preauth]
Jan 31 08:48:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 289 op/s
Jan 31 08:48:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:33.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:33.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:34 compute-0 ceph-mon[74496]: pgmap v3472: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 289 op/s
Jan 31 08:48:34 compute-0 nova_compute[247704]: 2026-01-31 08:48:34.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 388 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 337 op/s
Jan 31 08:48:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:35.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:35.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:35 compute-0 ceph-mon[74496]: pgmap v3473: 305 pgs: 305 active+clean; 388 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 337 op/s
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006998243532477201 of space, bias 1.0, pg target 2.0994730597431603 quantized to 32 (current 32)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6442640720965941 quantized to 32 (current 32)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:48:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:48:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 600 KiB/s rd, 3.0 MiB/s wr, 170 op/s
Jan 31 08:48:36 compute-0 nova_compute[247704]: 2026-01-31 08:48:36.908 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:36 compute-0 podman[386437]: 2026-01-31 08:48:36.938402856 +0000 UTC m=+0.098806945 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 08:48:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:37.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:37.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:37 compute-0 sshd-session[386435]: Connection closed by authenticating user root 123.54.197.60 port 45042 [preauth]
Jan 31 08:48:38 compute-0 ceph-mon[74496]: pgmap v3474: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 600 KiB/s rd, 3.0 MiB/s wr, 170 op/s
Jan 31 08:48:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Jan 31 08:48:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:39.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:39.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:39 compute-0 sshd-session[386456]: Connection closed by authenticating user root 123.54.197.60 port 45050 [preauth]
Jan 31 08:48:39 compute-0 nova_compute[247704]: 2026-01-31 08:48:39.434 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:40 compute-0 ceph-mon[74496]: pgmap v3475: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Jan 31 08:48:40 compute-0 sudo[386462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:40 compute-0 sudo[386462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:40 compute-0 sudo[386462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:40 compute-0 sudo[386487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:40 compute-0 sudo[386487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:40 compute-0 sudo[386487]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:40 compute-0 sshd-session[386459]: Connection closed by authenticating user root 123.54.197.60 port 47724 [preauth]
Jan 31 08:48:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:48:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:41.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:41 compute-0 nova_compute[247704]: 2026-01-31 08:48:41.909 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:42 compute-0 ceph-mon[74496]: pgmap v3476: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:48:42 compute-0 sshd-session[386512]: Connection closed by authenticating user root 123.54.197.60 port 47736 [preauth]
Jan 31 08:48:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:48:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:43.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:44 compute-0 sshd-session[386515]: Connection closed by authenticating user root 123.54.197.60 port 47750 [preauth]
Jan 31 08:48:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:44 compute-0 nova_compute[247704]: 2026-01-31 08:48:44.437 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.529847) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324529892, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 714, "num_deletes": 250, "total_data_size": 970463, "memory_usage": 983168, "flush_reason": "Manual Compaction"}
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324549858, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 607509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75706, "largest_seqno": 76419, "table_properties": {"data_size": 604467, "index_size": 949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8325, "raw_average_key_size": 20, "raw_value_size": 598045, "raw_average_value_size": 1473, "num_data_blocks": 43, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849267, "oldest_key_time": 1769849267, "file_creation_time": 1769849324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 20058 microseconds, and 2902 cpu microseconds.
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.549904) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 607509 bytes OK
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.549926) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.555596) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.555627) EVENT_LOG_v1 {"time_micros": 1769849324555618, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.555652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 966871, prev total WAL file size 967584, number of live WAL files 2.
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.556420) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373630' seq:72057594037927935, type:22 .. '6D6772737461740033303131' seq:0, type:0; will stop at (end)
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(593KB)], [173(13MB)]
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324556556, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 14350221, "oldest_snapshot_seqno": -1}
Jan 31 08:48:44 compute-0 ceph-mon[74496]: pgmap v3477: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10134 keys, 10877003 bytes, temperature: kUnknown
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324748026, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10877003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10815298, "index_size": 35288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 268830, "raw_average_key_size": 26, "raw_value_size": 10641606, "raw_average_value_size": 1050, "num_data_blocks": 1330, "num_entries": 10134, "num_filter_entries": 10134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.748472) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10877003 bytes
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.752166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.9 rd, 56.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 13.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(41.5) write-amplify(17.9) OK, records in: 10620, records dropped: 486 output_compression: NoCompression
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.752245) EVENT_LOG_v1 {"time_micros": 1769849324752215, "job": 108, "event": "compaction_finished", "compaction_time_micros": 191639, "compaction_time_cpu_micros": 33586, "output_level": 6, "num_output_files": 1, "total_output_size": 10877003, "num_input_records": 10620, "num_output_records": 10134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324753019, "job": 108, "event": "table_file_deletion", "file_number": 175}
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849324756030, "job": 108, "event": "table_file_deletion", "file_number": 173}
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.556230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.756198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.756207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.756210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.756213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:48:44.756216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:48:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 2.3 MiB/s wr, 77 op/s
Jan 31 08:48:44 compute-0 sudo[386521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:44 compute-0 sudo[386521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:44 compute-0 sudo[386521]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:45 compute-0 sudo[386546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:45 compute-0 sudo[386546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:45 compute-0 sudo[386546]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:45.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:45 compute-0 sudo[386571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:45 compute-0 sudo[386571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:45 compute-0 sudo[386571]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:45 compute-0 sudo[386596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:48:45 compute-0 sudo[386596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:45 compute-0 sudo[386596]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:45 compute-0 sshd-session[386519]: Connection closed by authenticating user root 123.54.197.60 port 47754 [preauth]
Jan 31 08:48:45 compute-0 ceph-mon[74496]: pgmap v3478: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 2.3 MiB/s wr, 77 op/s
Jan 31 08:48:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 156 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Jan 31 08:48:46 compute-0 nova_compute[247704]: 2026-01-31 08:48:46.911 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:47.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:47 compute-0 sshd-session[386652]: Connection closed by authenticating user root 123.54.197.60 port 47760 [preauth]
Jan 31 08:48:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:48:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:48:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:48:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:48:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:48:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9d794c31-315a-4184-b65f-9bb2df5eb340 does not exist
Jan 31 08:48:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c2873e9b-1651-4e6c-bd75-92e288b63894 does not exist
Jan 31 08:48:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 48729ba8-5e07-446f-aefc-9e22cdd48e8f does not exist
Jan 31 08:48:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:48:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:48:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:48:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: pgmap v3479: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 156 KiB/s rd, 1.5 MiB/s wr, 36 op/s
Jan 31 08:48:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:48:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:48:48 compute-0 sudo[386657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:48 compute-0 sudo[386657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:48 compute-0 sudo[386657]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:48 compute-0 sudo[386688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:48 compute-0 sudo[386688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:48 compute-0 sudo[386688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:48 compute-0 podman[386681]: 2026-01-31 08:48:48.32184737 +0000 UTC m=+0.138706336 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:48:48 compute-0 sudo[386730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:48 compute-0 sudo[386730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:48 compute-0 sudo[386730]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:48 compute-0 sudo[386759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:48:48 compute-0 sudo[386759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:48 compute-0 nova_compute[247704]: 2026-01-31 08:48:48.565 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:48 compute-0 podman[386820]: 2026-01-31 08:48:48.768305793 +0000 UTC m=+0.042077375 container create fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_solomon, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:48:48 compute-0 systemd[1]: Started libpod-conmon-fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570.scope.
Jan 31 08:48:48 compute-0 sshd-session[386655]: Connection closed by authenticating user root 123.54.197.60 port 47762 [preauth]
Jan 31 08:48:48 compute-0 podman[386820]: 2026-01-31 08:48:48.749189488 +0000 UTC m=+0.022961090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:48:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 944 KiB/s wr, 26 op/s
Jan 31 08:48:48 compute-0 podman[386820]: 2026-01-31 08:48:48.870659663 +0000 UTC m=+0.144431355 container init fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 08:48:48 compute-0 podman[386820]: 2026-01-31 08:48:48.880033752 +0000 UTC m=+0.153805354 container start fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:48:48 compute-0 podman[386820]: 2026-01-31 08:48:48.884342597 +0000 UTC m=+0.158114199 container attach fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_solomon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:48:48 compute-0 upbeat_solomon[386837]: 167 167
Jan 31 08:48:48 compute-0 systemd[1]: libpod-fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570.scope: Deactivated successfully.
Jan 31 08:48:48 compute-0 podman[386820]: 2026-01-31 08:48:48.888710003 +0000 UTC m=+0.162481615 container died fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_solomon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:48:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-822a942d1648ca5d194fe032cda21294a7b79ee93fcb8035b93a6c9a1a21bc10-merged.mount: Deactivated successfully.
Jan 31 08:48:48 compute-0 podman[386820]: 2026-01-31 08:48:48.936463225 +0000 UTC m=+0.210234807 container remove fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_solomon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 08:48:48 compute-0 systemd[1]: libpod-conmon-fa6e218706e99454c08e02cb40451df8278b6a5be4e71fe7043648cc61c21570.scope: Deactivated successfully.
Jan 31 08:48:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:49.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:49 compute-0 podman[386860]: 2026-01-31 08:48:49.134391671 +0000 UTC m=+0.050370907 container create a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:48:49 compute-0 systemd[1]: Started libpod-conmon-a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34.scope.
Jan 31 08:48:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:49.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ac6aa3e31889485994ea6b70d2b1aec572f1d557b5289e8dde53c416b6bd79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ac6aa3e31889485994ea6b70d2b1aec572f1d557b5289e8dde53c416b6bd79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ac6aa3e31889485994ea6b70d2b1aec572f1d557b5289e8dde53c416b6bd79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ac6aa3e31889485994ea6b70d2b1aec572f1d557b5289e8dde53c416b6bd79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ac6aa3e31889485994ea6b70d2b1aec572f1d557b5289e8dde53c416b6bd79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:49 compute-0 podman[386860]: 2026-01-31 08:48:49.113680927 +0000 UTC m=+0.029660193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:48:49 compute-0 podman[386860]: 2026-01-31 08:48:49.218197919 +0000 UTC m=+0.134177175 container init a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:48:49 compute-0 podman[386860]: 2026-01-31 08:48:49.230597471 +0000 UTC m=+0.146576707 container start a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:48:49 compute-0 podman[386860]: 2026-01-31 08:48:49.234829064 +0000 UTC m=+0.150808510 container attach a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:48:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:49 compute-0 nova_compute[247704]: 2026-01-31 08:48:49.439 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:50 compute-0 elegant_varahamihira[386879]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:48:50 compute-0 elegant_varahamihira[386879]: --> relative data size: 1.0
Jan 31 08:48:50 compute-0 elegant_varahamihira[386879]: --> All data devices are unavailable
Jan 31 08:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:50 compute-0 systemd[1]: libpod-a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34.scope: Deactivated successfully.
Jan 31 08:48:50 compute-0 podman[386860]: 2026-01-31 08:48:50.144015267 +0000 UTC m=+1.059994493 container died a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4ac6aa3e31889485994ea6b70d2b1aec572f1d557b5289e8dde53c416b6bd79-merged.mount: Deactivated successfully.
Jan 31 08:48:50 compute-0 ceph-mon[74496]: pgmap v3480: 305 pgs: 305 active+clean; 409 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 944 KiB/s wr, 26 op/s
Jan 31 08:48:50 compute-0 podman[386860]: 2026-01-31 08:48:50.210767981 +0000 UTC m=+1.126747217 container remove a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:48:50 compute-0 systemd[1]: libpod-conmon-a9bbcf081b0515cd169774a70b405e8507f189ea5d4734521e43a363113d4f34.scope: Deactivated successfully.
Jan 31 08:48:50 compute-0 sudo[386759]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:50 compute-0 sudo[386905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:50 compute-0 sudo[386905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:50 compute-0 sudo[386905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:50 compute-0 sshd-session[386862]: Connection closed by authenticating user root 123.54.197.60 port 47768 [preauth]
Jan 31 08:48:50 compute-0 sudo[386931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:50 compute-0 sudo[386931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:50 compute-0 sudo[386931]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:50 compute-0 sudo[386956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:50 compute-0 sudo[386956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:50 compute-0 sudo[386956]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:50 compute-0 sudo[386981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:48:50 compute-0 sudo[386981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:50 compute-0 podman[387049]: 2026-01-31 08:48:50.848685423 +0000 UTC m=+0.042518365 container create 6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:48:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Jan 31 08:48:50 compute-0 systemd[1]: Started libpod-conmon-6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7.scope.
Jan 31 08:48:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:50 compute-0 podman[387049]: 2026-01-31 08:48:50.831451944 +0000 UTC m=+0.025284906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:48:50 compute-0 podman[387049]: 2026-01-31 08:48:50.927627914 +0000 UTC m=+0.121460876 container init 6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:48:50 compute-0 podman[387049]: 2026-01-31 08:48:50.933309532 +0000 UTC m=+0.127142464 container start 6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 08:48:50 compute-0 podman[387049]: 2026-01-31 08:48:50.936787367 +0000 UTC m=+0.130620329 container attach 6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:48:50 compute-0 angry_faraday[387065]: 167 167
Jan 31 08:48:50 compute-0 systemd[1]: libpod-6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7.scope: Deactivated successfully.
Jan 31 08:48:50 compute-0 conmon[387065]: conmon 6e9bcfe8369e92a6342c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7.scope/container/memory.events
Jan 31 08:48:50 compute-0 podman[387049]: 2026-01-31 08:48:50.938657762 +0000 UTC m=+0.132490694 container died 6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:48:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9596ec58fe38e5a8ba3671ea1d1ef9a0952581a99448c9cc8a60b266cb5a70ee-merged.mount: Deactivated successfully.
Jan 31 08:48:50 compute-0 podman[387049]: 2026-01-31 08:48:50.989317325 +0000 UTC m=+0.183150257 container remove 6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_faraday, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:48:51 compute-0 systemd[1]: libpod-conmon-6e9bcfe8369e92a6342c0ba09ddf718cce896f0ccea0190053ab971a64320ed7.scope: Deactivated successfully.
Jan 31 08:48:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:51.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:51 compute-0 podman[387090]: 2026-01-31 08:48:51.138441754 +0000 UTC m=+0.044163726 container create 087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:48:51 compute-0 systemd[1]: Started libpod-conmon-087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575.scope.
Jan 31 08:48:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848048920b1df42b8e9e70dfb74eee518f851bcf724cad0636637ea1796d4d8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848048920b1df42b8e9e70dfb74eee518f851bcf724cad0636637ea1796d4d8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848048920b1df42b8e9e70dfb74eee518f851bcf724cad0636637ea1796d4d8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/848048920b1df42b8e9e70dfb74eee518f851bcf724cad0636637ea1796d4d8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:51 compute-0 podman[387090]: 2026-01-31 08:48:51.119833781 +0000 UTC m=+0.025555783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:48:51 compute-0 podman[387090]: 2026-01-31 08:48:51.225983953 +0000 UTC m=+0.131705955 container init 087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:48:51 compute-0 podman[387090]: 2026-01-31 08:48:51.231799856 +0000 UTC m=+0.137521828 container start 087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:48:51 compute-0 podman[387090]: 2026-01-31 08:48:51.235058055 +0000 UTC m=+0.140780057 container attach 087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:48:51 compute-0 nova_compute[247704]: 2026-01-31 08:48:51.913 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:51 compute-0 sshd-session[387006]: Connection closed by authenticating user root 123.54.197.60 port 35328 [preauth]
Jan 31 08:48:51 compute-0 nifty_haibt[387106]: {
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:     "0": [
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:         {
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "devices": [
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "/dev/loop3"
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             ],
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "lv_name": "ceph_lv0",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "lv_size": "7511998464",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "name": "ceph_lv0",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "tags": {
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.cluster_name": "ceph",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.crush_device_class": "",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.encrypted": "0",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.osd_id": "0",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.type": "block",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:                 "ceph.vdo": "0"
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             },
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "type": "block",
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:             "vg_name": "ceph_vg0"
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:         }
Jan 31 08:48:51 compute-0 nifty_haibt[387106]:     ]
Jan 31 08:48:51 compute-0 nifty_haibt[387106]: }
Jan 31 08:48:52 compute-0 systemd[1]: libpod-087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575.scope: Deactivated successfully.
Jan 31 08:48:52 compute-0 podman[387090]: 2026-01-31 08:48:52.010638396 +0000 UTC m=+0.916360388 container died 087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-848048920b1df42b8e9e70dfb74eee518f851bcf724cad0636637ea1796d4d8d-merged.mount: Deactivated successfully.
Jan 31 08:48:52 compute-0 podman[387090]: 2026-01-31 08:48:52.063664347 +0000 UTC m=+0.969386329 container remove 087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haibt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:48:52 compute-0 systemd[1]: libpod-conmon-087d61e9aaab2c50390f69be7ec03c09a385e7b6734740ba3061a49b6a648575.scope: Deactivated successfully.
Jan 31 08:48:52 compute-0 sudo[386981]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:52 compute-0 ceph-mon[74496]: pgmap v3481: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Jan 31 08:48:52 compute-0 sudo[387128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:52 compute-0 sudo[387128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:52 compute-0 sudo[387128]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:52 compute-0 sudo[387153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:52 compute-0 sudo[387153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:52 compute-0 sudo[387153]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:52 compute-0 sudo[387180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:52 compute-0 sudo[387180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:52 compute-0 sudo[387180]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:52 compute-0 sudo[387206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:48:52 compute-0 sudo[387206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:52 compute-0 podman[387269]: 2026-01-31 08:48:52.66997889 +0000 UTC m=+0.021341722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:48:52 compute-0 podman[387269]: 2026-01-31 08:48:52.786810712 +0000 UTC m=+0.138173524 container create d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swartz, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 08:48:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Jan 31 08:48:52 compute-0 systemd[1]: Started libpod-conmon-d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de.scope.
Jan 31 08:48:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:52 compute-0 podman[387269]: 2026-01-31 08:48:52.984057221 +0000 UTC m=+0.335420053 container init d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swartz, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:48:52 compute-0 podman[387269]: 2026-01-31 08:48:52.997218571 +0000 UTC m=+0.348581423 container start d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:48:53 compute-0 pedantic_swartz[387285]: 167 167
Jan 31 08:48:53 compute-0 systemd[1]: libpod-d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de.scope: Deactivated successfully.
Jan 31 08:48:53 compute-0 conmon[387285]: conmon d55ea8fcd941e6207379 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de.scope/container/memory.events
Jan 31 08:48:53 compute-0 podman[387269]: 2026-01-31 08:48:53.016988253 +0000 UTC m=+0.368351185 container attach d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swartz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:48:53 compute-0 podman[387269]: 2026-01-31 08:48:53.017576627 +0000 UTC m=+0.368939449 container died d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swartz, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:48:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:53.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-25d99e3c48021f70817a8eb75d04d2ca0716a17da6d0ddf950773f1e0b49dea4-merged.mount: Deactivated successfully.
Jan 31 08:48:53 compute-0 podman[387269]: 2026-01-31 08:48:53.265488189 +0000 UTC m=+0.616851001 container remove d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:48:53 compute-0 systemd[1]: libpod-conmon-d55ea8fcd941e6207379aa330a1c96600d5c97b113f86421740bdaa27f8b77de.scope: Deactivated successfully.
Jan 31 08:48:53 compute-0 podman[387309]: 2026-01-31 08:48:53.438635652 +0000 UTC m=+0.082193061 container create 13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 08:48:53 compute-0 podman[387309]: 2026-01-31 08:48:53.386442762 +0000 UTC m=+0.030000191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:48:53 compute-0 sshd-session[387171]: Connection closed by authenticating user root 123.54.197.60 port 35340 [preauth]
Jan 31 08:48:53 compute-0 systemd[1]: Started libpod-conmon-13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1.scope.
Jan 31 08:48:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc5a4d6daaa4e1e563d7bac8ab31bbc10764a3e1ad6fc807d06ddee60003574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc5a4d6daaa4e1e563d7bac8ab31bbc10764a3e1ad6fc807d06ddee60003574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc5a4d6daaa4e1e563d7bac8ab31bbc10764a3e1ad6fc807d06ddee60003574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc5a4d6daaa4e1e563d7bac8ab31bbc10764a3e1ad6fc807d06ddee60003574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:53 compute-0 podman[387309]: 2026-01-31 08:48:53.662422928 +0000 UTC m=+0.305980337 container init 13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:48:53 compute-0 podman[387309]: 2026-01-31 08:48:53.67075438 +0000 UTC m=+0.314311799 container start 13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:48:53 compute-0 podman[387309]: 2026-01-31 08:48:53.72788291 +0000 UTC m=+0.371440329 container attach 13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 08:48:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:54 compute-0 nova_compute[247704]: 2026-01-31 08:48:54.441 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:54 compute-0 dreamy_carver[387325]: {
Jan 31 08:48:54 compute-0 dreamy_carver[387325]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:48:54 compute-0 dreamy_carver[387325]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:48:54 compute-0 dreamy_carver[387325]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:48:54 compute-0 dreamy_carver[387325]:         "osd_id": 0,
Jan 31 08:48:54 compute-0 dreamy_carver[387325]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:48:54 compute-0 dreamy_carver[387325]:         "type": "bluestore"
Jan 31 08:48:54 compute-0 dreamy_carver[387325]:     }
Jan 31 08:48:54 compute-0 dreamy_carver[387325]: }
Jan 31 08:48:54 compute-0 systemd[1]: libpod-13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1.scope: Deactivated successfully.
Jan 31 08:48:54 compute-0 podman[387309]: 2026-01-31 08:48:54.544584263 +0000 UTC m=+1.188141692 container died 13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:48:54 compute-0 nova_compute[247704]: 2026-01-31 08:48:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:54 compute-0 ceph-mon[74496]: pgmap v3482: 305 pgs: 305 active+clean; 359 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Jan 31 08:48:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/57302897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/769013742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:48:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/769013742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bc5a4d6daaa4e1e563d7bac8ab31bbc10764a3e1ad6fc807d06ddee60003574-merged.mount: Deactivated successfully.
Jan 31 08:48:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 1.5 MiB/s wr, 51 op/s
Jan 31 08:48:55 compute-0 podman[387309]: 2026-01-31 08:48:55.017493939 +0000 UTC m=+1.661051358 container remove 13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_carver, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 31 08:48:55 compute-0 systemd[1]: libpod-conmon-13967e574dce065e20e04a53b8adc49b2a324635449acb4cf843b56a762139c1.scope: Deactivated successfully.
Jan 31 08:48:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:55.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:55 compute-0 sudo[387206]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:48:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:48:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:55.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev dc64cf19-a716-4899-8487-00fe07ce2c39 does not exist
Jan 31 08:48:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4cd5a30b-5928-4284-a903-d1d0ad39e546 does not exist
Jan 31 08:48:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 30258a78-0078-422e-b31a-d8c2427a9666 does not exist
Jan 31 08:48:55 compute-0 sshd-session[387330]: Connection closed by authenticating user root 123.54.197.60 port 35346 [preauth]
Jan 31 08:48:55 compute-0 sudo[387363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:48:55 compute-0 sudo[387363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:55 compute-0 sudo[387363]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:55 compute-0 sudo[387388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:48:55 compute-0 sudo[387388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:55 compute-0 sudo[387388]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2202804351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:48:55 compute-0 ceph-mon[74496]: pgmap v3483: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 1.5 MiB/s wr, 51 op/s
Jan 31 08:48:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:48:56 compute-0 sshd-session[387413]: Connection closed by authenticating user root 123.54.197.60 port 35354 [preauth]
Jan 31 08:48:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 347 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Jan 31 08:48:56 compute-0 nova_compute[247704]: 2026-01-31 08:48:56.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:48:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:57.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:48:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:57.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:58 compute-0 sshd-session[387417]: Connection closed by authenticating user root 123.54.197.60 port 35370 [preauth]
Jan 31 08:48:58 compute-0 ceph-mon[74496]: pgmap v3484: 305 pgs: 305 active+clean; 347 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Jan 31 08:48:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 367 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 31 08:48:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:48:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:59.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:48:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:48:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:48:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:59.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:48:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:59 compute-0 nova_compute[247704]: 2026-01-31 08:48:59.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:48:59 compute-0 sshd-session[387420]: Connection closed by authenticating user root 123.54.197.60 port 35376 [preauth]
Jan 31 08:49:00 compute-0 sudo[387423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:00 compute-0 sudo[387423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:00 compute-0 sudo[387423]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:00 compute-0 sudo[387448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:00 compute-0 sudo[387448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:00 compute-0 sudo[387448]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 286 KiB/s rd, 3.0 MiB/s wr, 130 op/s
Jan 31 08:49:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:01.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:01 compute-0 ceph-mon[74496]: pgmap v3485: 305 pgs: 305 active+clean; 367 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 31 08:49:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:01.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:01 compute-0 nova_compute[247704]: 2026-01-31 08:49:01.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:01 compute-0 nova_compute[247704]: 2026-01-31 08:49:01.917 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:02 compute-0 ceph-mon[74496]: pgmap v3486: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 286 KiB/s rd, 3.0 MiB/s wr, 130 op/s
Jan 31 08:49:02 compute-0 nova_compute[247704]: 2026-01-31 08:49:02.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:02 compute-0 nova_compute[247704]: 2026-01-31 08:49:02.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:49:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 31 08:49:03 compute-0 sshd-session[387473]: Connection closed by authenticating user root 123.54.197.60 port 48888 [preauth]
Jan 31 08:49:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:03.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:04 compute-0 ceph-mon[74496]: pgmap v3487: 305 pgs: 305 active+clean; 333 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 31 08:49:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:04 compute-0 nova_compute[247704]: 2026-01-31 08:49:04.446 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 31 08:49:04 compute-0 ovn_controller[149457]: 2026-01-31T08:49:04Z|00830|binding|INFO|Releasing lport ccf3782f-9c12-4288-9bff-27ab514c5f7a from this chassis (sb_readonly=0)
Jan 31 08:49:05 compute-0 nova_compute[247704]: 2026-01-31 08:49:05.018 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:05.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1274297397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:05.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:06 compute-0 sshd-session[387476]: Connection closed by authenticating user root 123.54.197.60 port 48896 [preauth]
Jan 31 08:49:06 compute-0 ceph-mon[74496]: pgmap v3488: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 2.6 MiB/s wr, 107 op/s
Jan 31 08:49:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 2.4 MiB/s wr, 96 op/s
Jan 31 08:49:06 compute-0 nova_compute[247704]: 2026-01-31 08:49:06.919 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:07.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/902395460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3995720161' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:49:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:07.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:07 compute-0 sshd-session[387479]: Connection closed by authenticating user root 123.54.197.60 port 48902 [preauth]
Jan 31 08:49:07 compute-0 podman[387484]: 2026-01-31 08:49:07.906310074 +0000 UTC m=+0.076484522 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:49:08 compute-0 ceph-mon[74496]: pgmap v3489: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 2.4 MiB/s wr, 96 op/s
Jan 31 08:49:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1381336499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/414552168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:49:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 261 KiB/s rd, 1.9 MiB/s wr, 88 op/s
Jan 31 08:49:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:09.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:09.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:09 compute-0 sshd-session[387482]: Connection closed by authenticating user root 123.54.197.60 port 48910 [preauth]
Jan 31 08:49:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:09 compute-0 nova_compute[247704]: 2026-01-31 08:49:09.448 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:09 compute-0 nova_compute[247704]: 2026-01-31 08:49:09.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:09 compute-0 nova_compute[247704]: 2026-01-31 08:49:09.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:49:09 compute-0 nova_compute[247704]: 2026-01-31 08:49:09.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:49:10 compute-0 sshd-session[387507]: Connection closed by authenticating user root 123.54.197.60 port 48926 [preauth]
Jan 31 08:49:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 246 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Jan 31 08:49:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:11.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:11.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:11.219 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:11.220 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:11.221 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:11 compute-0 nova_compute[247704]: 2026-01-31 08:49:11.921 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:12 compute-0 sshd-session[387510]: Connection closed by authenticating user root 123.54.197.60 port 35090 [preauth]
Jan 31 08:49:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 6 op/s
Jan 31 08:49:12 compute-0 nova_compute[247704]: 2026-01-31 08:49:12.928 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:49:12 compute-0 nova_compute[247704]: 2026-01-31 08:49:12.929 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:49:12 compute-0 nova_compute[247704]: 2026-01-31 08:49:12.929 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:49:12 compute-0 nova_compute[247704]: 2026-01-31 08:49:12.929 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1fff7e98-de8b-44f9-b725-55c12d40b395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:49:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:13.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:13.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:14 compute-0 sshd-session[387513]: Connection closed by authenticating user root 123.54.197.60 port 35102 [preauth]
Jan 31 08:49:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:14 compute-0 nova_compute[247704]: 2026-01-31 08:49:14.450 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 85 B/s wr, 7 op/s
Jan 31 08:49:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:15.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:15.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:15 compute-0 ceph-mon[74496]: pgmap v3490: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 261 KiB/s rd, 1.9 MiB/s wr, 88 op/s
Jan 31 08:49:16 compute-0 sshd-session[387516]: Connection closed by authenticating user root 123.54.197.60 port 35108 [preauth]
Jan 31 08:49:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.9 KiB/s rd, 170 B/s wr, 7 op/s
Jan 31 08:49:16 compute-0 ceph-mon[74496]: pgmap v3491: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 246 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Jan 31 08:49:16 compute-0 ceph-mon[74496]: pgmap v3492: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 6 op/s
Jan 31 08:49:16 compute-0 ceph-mon[74496]: pgmap v3493: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 85 B/s wr, 7 op/s
Jan 31 08:49:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4256285338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:16 compute-0 nova_compute[247704]: 2026-01-31 08:49:16.924 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:17.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:17.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:17 compute-0 sshd-session[387519]: Connection closed by authenticating user root 123.54.197.60 port 35114 [preauth]
Jan 31 08:49:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2114223668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:17 compute-0 ceph-mon[74496]: pgmap v3494: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.9 KiB/s rd, 170 B/s wr, 7 op/s
Jan 31 08:49:18 compute-0 nova_compute[247704]: 2026-01-31 08:49:18.820 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updating instance_info_cache with network_info: [{"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 304 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 24 op/s
Jan 31 08:49:18 compute-0 podman[387524]: 2026-01-31 08:49:18.938067511 +0000 UTC m=+0.099556863 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:49:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:19.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.199 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.200 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.201 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.201 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.201 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:19.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:19 compute-0 sshd-session[387521]: Connection closed by authenticating user root 123.54.197.60 port 35130 [preauth]
Jan 31 08:49:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.455 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.456 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.456 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.457 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.457 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.487 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:49:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250200569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:19 compute-0 nova_compute[247704]: 2026-01-31 08:49:19.938 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:19 compute-0 ceph-mon[74496]: pgmap v3495: 305 pgs: 305 active+clean; 304 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 24 op/s
Jan 31 08:49:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1250200569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:49:20
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'volumes', '.mgr', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:49:20 compute-0 sudo[387576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:20 compute-0 sudo[387576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:20 compute-0 sudo[387576]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:49:20 compute-0 sudo[387601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:20 compute-0 sudo[387601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:20 compute-0 sudo[387601]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 114 KiB/s rd, 14 KiB/s wr, 41 op/s
Jan 31 08:49:21 compute-0 nova_compute[247704]: 2026-01-31 08:49:21.025 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:49:21 compute-0 nova_compute[247704]: 2026-01-31 08:49:21.026 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:49:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:21.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:21 compute-0 sshd-session[387571]: Connection closed by authenticating user root 123.54.197.60 port 35136 [preauth]
Jan 31 08:49:21 compute-0 nova_compute[247704]: 2026-01-31 08:49:21.218 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:49:21 compute-0 nova_compute[247704]: 2026-01-31 08:49:21.220 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4051MB free_disk=20.891765594482422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:49:21 compute-0 nova_compute[247704]: 2026-01-31 08:49:21.220 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:21 compute-0 nova_compute[247704]: 2026-01-31 08:49:21.221 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:21.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:21.348 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:49:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:21.349 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:49:21 compute-0 nova_compute[247704]: 2026-01-31 08:49:21.348 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:21 compute-0 nova_compute[247704]: 2026-01-31 08:49:21.925 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:21 compute-0 ceph-mon[74496]: pgmap v3496: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 114 KiB/s rd, 14 KiB/s wr, 41 op/s
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.088 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1fff7e98-de8b-44f9-b725-55c12d40b395 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.089 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.089 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.144 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.293 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.294 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.310 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.330 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.381 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:49:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/244133078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.831 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.837 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:49:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 108 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 31 08:49:22 compute-0 nova_compute[247704]: 2026-01-31 08:49:22.933 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:49:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/244133078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2868309194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:23.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:23 compute-0 nova_compute[247704]: 2026-01-31 08:49:23.371 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:49:23 compute-0 nova_compute[247704]: 2026-01-31 08:49:23.371 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:23 compute-0 sshd-session[387626]: Connection closed by authenticating user root 123.54.197.60 port 43066 [preauth]
Jan 31 08:49:24 compute-0 ceph-mon[74496]: pgmap v3497: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 108 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 31 08:49:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:24 compute-0 nova_compute[247704]: 2026-01-31 08:49:24.490 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:25.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:25.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:25 compute-0 sshd-session[387651]: Connection closed by authenticating user root 123.54.197.60 port 43068 [preauth]
Jan 31 08:49:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:25.351 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:26 compute-0 ceph-mon[74496]: pgmap v3498: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:26 compute-0 nova_compute[247704]: 2026-01-31 08:49:26.927 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:26 compute-0 sshd-session[387655]: Connection closed by authenticating user root 123.54.197.60 port 43084 [preauth]
Jan 31 08:49:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:27.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:27 compute-0 nova_compute[247704]: 2026-01-31 08:49:27.366 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:27 compute-0 nova_compute[247704]: 2026-01-31 08:49:27.366 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:28 compute-0 ceph-mon[74496]: pgmap v3499: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:29.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:29 compute-0 sshd-session[387658]: Connection closed by authenticating user root 123.54.197.60 port 43096 [preauth]
Jan 31 08:49:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:29.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:29 compute-0 nova_compute[247704]: 2026-01-31 08:49:29.493 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:30 compute-0 ceph-mon[74496]: pgmap v3500: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 95 op/s
Jan 31 08:49:30 compute-0 sshd-session[387661]: Connection closed by authenticating user root 123.54.197.60 port 43102 [preauth]
Jan 31 08:49:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 81 op/s
Jan 31 08:49:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:31.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:31 compute-0 nova_compute[247704]: 2026-01-31 08:49:31.930 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:32 compute-0 ceph-mon[74496]: pgmap v3501: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 81 op/s
Jan 31 08:49:32 compute-0 sshd-session[387664]: Connection closed by authenticating user root 123.54.197.60 port 38458 [preauth]
Jan 31 08:49:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 63 op/s
Jan 31 08:49:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:33.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:33.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:34 compute-0 ceph-mon[74496]: pgmap v3502: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 63 op/s
Jan 31 08:49:34 compute-0 sshd-session[387667]: Connection closed by authenticating user root 123.54.197.60 port 38466 [preauth]
Jan 31 08:49:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:34 compute-0 nova_compute[247704]: 2026-01-31 08:49:34.495 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Jan 31 08:49:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:35.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:35.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:35 compute-0 sshd-session[387670]: Connection closed by authenticating user root 123.54.197.60 port 38472 [preauth]
Jan 31 08:49:35 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004275209961846269 of space, bias 1.0, pg target 1.2825629885538805 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6464260320700727 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:49:36 compute-0 ceph-mon[74496]: pgmap v3503: 305 pgs: 305 active+clean; 276 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Jan 31 08:49:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:49:36 compute-0 nova_compute[247704]: 2026-01-31 08:49:36.933 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:49:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:37.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:49:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:37.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:38 compute-0 ceph-mon[74496]: pgmap v3504: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:49:38 compute-0 sshd-session[387672]: Connection closed by authenticating user root 123.54.197.60 port 38488 [preauth]
Jan 31 08:49:38 compute-0 podman[387678]: 2026-01-31 08:49:38.876447187 +0000 UTC m=+0.052122649 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:49:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:49:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:39.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:39.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:39 compute-0 nova_compute[247704]: 2026-01-31 08:49:39.497 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:40 compute-0 ceph-mon[74496]: pgmap v3505: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:49:40 compute-0 ovn_controller[149457]: 2026-01-31T08:49:40Z|00831|binding|INFO|Releasing lport ccf3782f-9c12-4288-9bff-27ab514c5f7a from this chassis (sb_readonly=0)
Jan 31 08:49:40 compute-0 nova_compute[247704]: 2026-01-31 08:49:40.355 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:40 compute-0 sshd-session[387676]: Connection closed by authenticating user root 123.54.197.60 port 38492 [preauth]
Jan 31 08:49:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 230 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 08:49:40 compute-0 sudo[387701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:40 compute-0 sudo[387701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:40 compute-0 sudo[387701]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:41 compute-0 sudo[387726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:41 compute-0 sudo[387726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:41 compute-0 sudo[387726]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:41.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:41.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:41 compute-0 nova_compute[247704]: 2026-01-31 08:49:41.935 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:42 compute-0 sshd-session[387699]: Connection closed by authenticating user root 123.54.197.60 port 40754 [preauth]
Jan 31 08:49:42 compute-0 ceph-mon[74496]: pgmap v3506: 305 pgs: 305 active+clean; 230 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 08:49:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 230 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 31 08:49:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:43.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:43.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:43 compute-0 sshd-session[387752]: Connection closed by authenticating user root 123.54.197.60 port 40756 [preauth]
Jan 31 08:49:44 compute-0 ceph-mon[74496]: pgmap v3507: 305 pgs: 305 active+clean; 230 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 31 08:49:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:44 compute-0 nova_compute[247704]: 2026-01-31 08:49:44.499 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 31 08:49:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:45.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:45.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:45 compute-0 sshd-session[387754]: Connection closed by authenticating user root 123.54.197.60 port 40762 [preauth]
Jan 31 08:49:46 compute-0 ceph-mon[74496]: pgmap v3508: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 31 08:49:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 127 KiB/s wr, 36 op/s
Jan 31 08:49:46 compute-0 nova_compute[247704]: 2026-01-31 08:49:46.938 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:47.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:47.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:48 compute-0 sshd-session[387758]: Connection closed by authenticating user root 123.54.197.60 port 40770 [preauth]
Jan 31 08:49:48 compute-0 ceph-mon[74496]: pgmap v3509: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 127 KiB/s wr, 36 op/s
Jan 31 08:49:48 compute-0 nova_compute[247704]: 2026-01-31 08:49:48.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 31 08:49:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:49.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:49:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:49.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:49:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3491448071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:49:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:49 compute-0 sshd-session[387761]: Connection closed by authenticating user root 123.54.197.60 port 40784 [preauth]
Jan 31 08:49:49 compute-0 nova_compute[247704]: 2026-01-31 08:49:49.501 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:49 compute-0 podman[387765]: 2026-01-31 08:49:49.91163726 +0000 UTC m=+0.082892869 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:50 compute-0 ceph-mon[74496]: pgmap v3510: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 31 08:49:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 31 08:49:51 compute-0 sshd-session[387763]: Connection closed by authenticating user root 123.54.197.60 port 53854 [preauth]
Jan 31 08:49:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:51.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:51.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:51 compute-0 nova_compute[247704]: 2026-01-31 08:49:51.941 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:52 compute-0 ceph-mon[74496]: pgmap v3511: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 31 08:49:52 compute-0 sshd-session[387792]: Connection closed by authenticating user root 123.54.197.60 port 53866 [preauth]
Jan 31 08:49:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 19 op/s
Jan 31 08:49:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:53.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:53.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:49:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2800592308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:49:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:49:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2800592308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:49:53 compute-0 ceph-mon[74496]: pgmap v3512: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 19 op/s
Jan 31 08:49:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2800592308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:49:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2800592308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:49:54 compute-0 sshd-session[387795]: Connection closed by authenticating user root 123.54.197.60 port 53880 [preauth]
Jan 31 08:49:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:54 compute-0 nova_compute[247704]: 2026-01-31 08:49:54.502 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:54 compute-0 nova_compute[247704]: 2026-01-31 08:49:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 19 op/s
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.126 247708 DEBUG nova.compute.manager [req-f101112d-588f-4004-84e8-1040d5820f02 req-4014cc7e-39f6-40ef-baf0-55f8508e90cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-changed-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.126 247708 DEBUG nova.compute.manager [req-f101112d-588f-4004-84e8-1040d5820f02 req-4014cc7e-39f6-40ef-baf0-55f8508e90cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Refreshing instance network info cache due to event network-changed-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.127 247708 DEBUG oslo_concurrency.lockutils [req-f101112d-588f-4004-84e8-1040d5820f02 req-4014cc7e-39f6-40ef-baf0-55f8508e90cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.127 247708 DEBUG oslo_concurrency.lockutils [req-f101112d-588f-4004-84e8-1040d5820f02 req-4014cc7e-39f6-40ef-baf0-55f8508e90cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.127 247708 DEBUG nova.network.neutron [req-f101112d-588f-4004-84e8-1040d5820f02 req-4014cc7e-39f6-40ef-baf0-55f8508e90cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Refreshing network info cache for port 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:49:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:55.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:55.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.647 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:55 compute-0 sshd-session[387798]: Connection closed by authenticating user root 123.54.197.60 port 53886 [preauth]
Jan 31 08:49:55 compute-0 sudo[387800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:55 compute-0 sudo[387800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.778 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.778 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.779 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.779 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:55 compute-0 sudo[387800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.779 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.781 247708 INFO nova.compute.manager [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Terminating instance
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.782 247708 DEBUG nova.compute.manager [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:49:55 compute-0 sudo[387825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:55 compute-0 sudo[387825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:55 compute-0 sudo[387825]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:55 compute-0 sudo[387850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:55 compute-0 sudo[387850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:55 compute-0 sudo[387850]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:55 compute-0 kernel: tap1b62b7b3-9c (unregistering): left promiscuous mode
Jan 31 08:49:55 compute-0 NetworkManager[49108]: <info>  [1769849395.9626] device (tap1b62b7b3-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.970 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:55 compute-0 ovn_controller[149457]: 2026-01-31T08:49:55Z|00832|binding|INFO|Releasing lport 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 from this chassis (sb_readonly=0)
Jan 31 08:49:55 compute-0 ovn_controller[149457]: 2026-01-31T08:49:55Z|00833|binding|INFO|Setting lport 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 down in Southbound
Jan 31 08:49:55 compute-0 ovn_controller[149457]: 2026-01-31T08:49:55Z|00834|binding|INFO|Removing iface tap1b62b7b3-9c ovn-installed in OVS
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.972 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:55 compute-0 ceph-mon[74496]: pgmap v3513: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 KiB/s wr, 19 op/s
Jan 31 08:49:55 compute-0 nova_compute[247704]: 2026-01-31 08:49:55.985 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:56 compute-0 sudo[387875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:49:56 compute-0 sudo[387875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:56 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Jan 31 08:49:56 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000c2.scope: Consumed 19.048s CPU time.
Jan 31 08:49:56 compute-0 systemd-machined[214448]: Machine qemu-87-instance-000000c2 terminated.
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.217 247708 INFO nova.virt.libvirt.driver [-] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Instance destroyed successfully.
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.218 247708 DEBUG nova.objects.instance [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lazy-loading 'resources' on Instance uuid 1fff7e98-de8b-44f9-b725-55c12d40b395 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.257 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:df:e9 10.100.0.5'], port_security=['fa:16:3e:09:df:e9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1fff7e98-de8b-44f9-b725-55c12d40b395', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba35ae24dbf3443e8a526dce39c6793b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1594b23b-251f-4e04-9498-d13079f70afc 8ba5f6d0-b888-4df9-bd08-60927f70df0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81b33545-8c35-4640-a006-3779e6b65cfc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.259 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 in datapath 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 unbound from our chassis
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.261 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.263 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[13380a0a-7e5a-4aa1-811d-4122c278e031]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.264 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 namespace which is not needed anymore
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.288 247708 DEBUG nova.virt.libvirt.vif [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1746647710',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1014068786-access_point-1746647710',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1014068786-ac',id=194,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGEVRXCOmyKSdhlo9l0WxIRbGJsQf5I7xnoqbiNS2WMkfr+PHUSU3MjMN6e2reAurOFdIo1aIllo1la3phc66kwPrjNj65wq6jckJq8FKFDC0nYvOhxhwR1EYQGTcrj65g==',key_name='tempest-TestSecurityGroupsBasicOps-681515177',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba35ae24dbf3443e8a526dce39c6793b',ramdisk_id='',reservation_id='r-1fepqj6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1014068786',owner_user_name='tempest-TestSecurityGroupsBasicOps-1014068786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:48:10Z,user_data=None,user_id='c6968a1ee10e4e3b8651ffe0240a7e46',uuid=1fff7e98-de8b-44f9-b725-55c12d40b395,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.289 247708 DEBUG nova.network.os_vif_util [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converting VIF {"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.290 247708 DEBUG nova.network.os_vif_util [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:df:e9,bridge_name='br-int',has_traffic_filtering=True,id=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b62b7b3-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.290 247708 DEBUG os_vif [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:df:e9,bridge_name='br-int',has_traffic_filtering=True,id=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b62b7b3-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.291 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.292 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b62b7b3-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.294 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.297 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.300 247708 INFO os_vif [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:df:e9,bridge_name='br-int',has_traffic_filtering=True,id=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94,network=Network(93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b62b7b3-9c')
Jan 31 08:49:56 compute-0 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[386240]: [NOTICE]   (386263) : haproxy version is 2.8.14-c23fe91
Jan 31 08:49:56 compute-0 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[386240]: [NOTICE]   (386263) : path to executable is /usr/sbin/haproxy
Jan 31 08:49:56 compute-0 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[386240]: [ALERT]    (386263) : Current worker (386265) exited with code 143 (Terminated)
Jan 31 08:49:56 compute-0 neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234[386240]: [WARNING]  (386263) : All workers exited. Exiting... (0)
Jan 31 08:49:56 compute-0 systemd[1]: libpod-07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2.scope: Deactivated successfully.
Jan 31 08:49:56 compute-0 podman[387974]: 2026-01-31 08:49:56.415852501 +0000 UTC m=+0.053143144 container died 07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:49:56 compute-0 sudo[387875]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:49:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:49:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:49:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:49:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:49:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:49:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7e214124-684e-487e-a7a6-e295e2ef09cf does not exist
Jan 31 08:49:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 89473c98-4124-46d7-a478-d01a11e16965 does not exist
Jan 31 08:49:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b6da877f-d7c5-475a-917f-d186a18e17d1 does not exist
Jan 31 08:49:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:49:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:49:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:49:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:49:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:49:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2-userdata-shm.mount: Deactivated successfully.
Jan 31 08:49:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e517803cf4f63e4b88fdde27124ba5046788c1991c06668e13d9de375ddc8e12-merged.mount: Deactivated successfully.
Jan 31 08:49:56 compute-0 sudo[388012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:56 compute-0 podman[387974]: 2026-01-31 08:49:56.526221347 +0000 UTC m=+0.163512000 container cleanup 07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:49:56 compute-0 sudo[388012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:56 compute-0 sudo[388012]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:56 compute-0 systemd[1]: libpod-conmon-07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2.scope: Deactivated successfully.
Jan 31 08:49:56 compute-0 sudo[388039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:56 compute-0 sudo[388039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:56 compute-0 sudo[388039]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:56 compute-0 podman[388038]: 2026-01-31 08:49:56.604685756 +0000 UTC m=+0.060589475 container remove 07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.609 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[80e7f114-a965-4043-a87b-665131c411ba]: (4, ('Sat Jan 31 08:49:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 (07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2)\n07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2\nSat Jan 31 08:49:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 (07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2)\n07865cf84e39a9bb7d9bc5142101435825f6c09ed86a36203955b7b5ebd7d4d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.611 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1281f98c-018f-40ea-b141-a8ca2e6f0fe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.613 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93bea2b0-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:49:56 compute-0 sudo[388073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.650 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:56 compute-0 sudo[388073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:56 compute-0 kernel: tap93bea2b0-d0: left promiscuous mode
Jan 31 08:49:56 compute-0 sudo[388073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.656 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.658 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[408b08b2-f8f5-491e-aa1e-4ab2836e17b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.679 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0f1d2f-ad89-4931-a63f-0fb07f3dd797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.681 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8f77b711-fb15-4cee-97d4-15809a69ac08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.694 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[aab6749b-0393-4f0f-b542-b48db33d2ab7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953008, 'reachable_time': 29670, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388122, 'error': None, 'target': 'ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.698 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:49:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:49:56.698 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b51740-8f3c-4337-a8f3-7bbb20e27967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:49:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d93bea2b0\x2dd2f6\x2d4f71\x2d8f0c\x2d6b8a51ad6234.mount: Deactivated successfully.
Jan 31 08:49:56 compute-0 sudo[388099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:49:56 compute-0 sudo[388099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.943 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.948 247708 INFO nova.virt.libvirt.driver [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Deleting instance files /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395_del
Jan 31 08:49:56 compute-0 nova_compute[247704]: 2026-01-31 08:49:56.949 247708 INFO nova.virt.libvirt.driver [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Deletion of /var/lib/nova/instances/1fff7e98-de8b-44f9-b725-55c12d40b395_del complete
Jan 31 08:49:56 compute-0 podman[388167]: 2026-01-31 08:49:56.968428017 +0000 UTC m=+0.037282309 container create d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:49:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:49:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:49:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:49:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:49:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:49:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:49:57 compute-0 systemd[1]: Started libpod-conmon-d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f.scope.
Jan 31 08:49:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:57 compute-0 podman[388167]: 2026-01-31 08:49:56.950963961 +0000 UTC m=+0.019818283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:49:57 compute-0 podman[388167]: 2026-01-31 08:49:57.059221075 +0000 UTC m=+0.128075397 container init d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:49:57 compute-0 podman[388167]: 2026-01-31 08:49:57.069924516 +0000 UTC m=+0.138778828 container start d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:49:57 compute-0 podman[388167]: 2026-01-31 08:49:57.073853312 +0000 UTC m=+0.142707614 container attach d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:49:57 compute-0 dreamy_bhaskara[388184]: 167 167
Jan 31 08:49:57 compute-0 systemd[1]: libpod-d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f.scope: Deactivated successfully.
Jan 31 08:49:57 compute-0 podman[388167]: 2026-01-31 08:49:57.077375537 +0000 UTC m=+0.146229839 container died d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:49:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-235a69559d119de22aaa42de54800663329718159f46617bfc4e4f103105bb77-merged.mount: Deactivated successfully.
Jan 31 08:49:57 compute-0 podman[388167]: 2026-01-31 08:49:57.132707694 +0000 UTC m=+0.201561996 container remove d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:49:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:57.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:57 compute-0 systemd[1]: libpod-conmon-d880b1970e4f967f2ead69515571d9a83fc0086edf72ecc734c65a27d1cf9f7f.scope: Deactivated successfully.
Jan 31 08:49:57 compute-0 podman[388210]: 2026-01-31 08:49:57.251356161 +0000 UTC m=+0.041426379 container create 8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.267 247708 INFO nova.compute.manager [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Took 1.48 seconds to destroy the instance on the hypervisor.
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.268 247708 DEBUG oslo.service.loopingcall [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.269 247708 DEBUG nova.compute.manager [-] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.269 247708 DEBUG nova.network.neutron [-] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:49:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:49:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:57.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:49:57 compute-0 sshd-session[387904]: Connection closed by authenticating user root 123.54.197.60 port 53902 [preauth]
Jan 31 08:49:57 compute-0 systemd[1]: Started libpod-conmon-8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e.scope.
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.318 247708 DEBUG nova.compute.manager [req-f7ee554f-24aa-4acb-8c52-abb1d1c2e439 req-721f2db7-f254-4e34-b673-adf38dacc25a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-vif-unplugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.318 247708 DEBUG oslo_concurrency.lockutils [req-f7ee554f-24aa-4acb-8c52-abb1d1c2e439 req-721f2db7-f254-4e34-b673-adf38dacc25a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.319 247708 DEBUG oslo_concurrency.lockutils [req-f7ee554f-24aa-4acb-8c52-abb1d1c2e439 req-721f2db7-f254-4e34-b673-adf38dacc25a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.319 247708 DEBUG oslo_concurrency.lockutils [req-f7ee554f-24aa-4acb-8c52-abb1d1c2e439 req-721f2db7-f254-4e34-b673-adf38dacc25a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.319 247708 DEBUG nova.compute.manager [req-f7ee554f-24aa-4acb-8c52-abb1d1c2e439 req-721f2db7-f254-4e34-b673-adf38dacc25a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] No waiting events found dispatching network-vif-unplugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.319 247708 DEBUG nova.compute.manager [req-f7ee554f-24aa-4acb-8c52-abb1d1c2e439 req-721f2db7-f254-4e34-b673-adf38dacc25a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-vif-unplugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:49:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:57 compute-0 podman[388210]: 2026-01-31 08:49:57.231719773 +0000 UTC m=+0.021790021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425fd9ca3f0e52524e35e6e537dcaf1f950ccdc497f80cf1a30e681c9b0961b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425fd9ca3f0e52524e35e6e537dcaf1f950ccdc497f80cf1a30e681c9b0961b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425fd9ca3f0e52524e35e6e537dcaf1f950ccdc497f80cf1a30e681c9b0961b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425fd9ca3f0e52524e35e6e537dcaf1f950ccdc497f80cf1a30e681c9b0961b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/425fd9ca3f0e52524e35e6e537dcaf1f950ccdc497f80cf1a30e681c9b0961b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:57 compute-0 podman[388210]: 2026-01-31 08:49:57.349711214 +0000 UTC m=+0.139781452 container init 8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:49:57 compute-0 podman[388210]: 2026-01-31 08:49:57.358428316 +0000 UTC m=+0.148498534 container start 8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:49:57 compute-0 podman[388210]: 2026-01-31 08:49:57.362833893 +0000 UTC m=+0.152904121 container attach 8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 31 08:49:57 compute-0 nova_compute[247704]: 2026-01-31 08:49:57.832 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:49:58 compute-0 ceph-mon[74496]: pgmap v3514: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 31 08:49:58 compute-0 competent_wu[388226]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:49:58 compute-0 competent_wu[388226]: --> relative data size: 1.0
Jan 31 08:49:58 compute-0 competent_wu[388226]: --> All data devices are unavailable
Jan 31 08:49:58 compute-0 systemd[1]: libpod-8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e.scope: Deactivated successfully.
Jan 31 08:49:58 compute-0 podman[388210]: 2026-01-31 08:49:58.108551308 +0000 UTC m=+0.898621546 container died 8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-425fd9ca3f0e52524e35e6e537dcaf1f950ccdc497f80cf1a30e681c9b0961b8-merged.mount: Deactivated successfully.
Jan 31 08:49:58 compute-0 podman[388210]: 2026-01-31 08:49:58.161134777 +0000 UTC m=+0.951205005 container remove 8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_wu, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:49:58 compute-0 systemd[1]: libpod-conmon-8f995e986a6bfb6cb28231961475d967f7b5520273eb52b75e403daa32fdf86e.scope: Deactivated successfully.
Jan 31 08:49:58 compute-0 sudo[388099]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:58 compute-0 sudo[388257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:58 compute-0 sudo[388257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:58 compute-0 sudo[388257]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:58 compute-0 sudo[388282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:58 compute-0 nova_compute[247704]: 2026-01-31 08:49:58.292 247708 DEBUG nova.network.neutron [req-f101112d-588f-4004-84e8-1040d5820f02 req-4014cc7e-39f6-40ef-baf0-55f8508e90cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updated VIF entry in instance network info cache for port 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:49:58 compute-0 nova_compute[247704]: 2026-01-31 08:49:58.293 247708 DEBUG nova.network.neutron [req-f101112d-588f-4004-84e8-1040d5820f02 req-4014cc7e-39f6-40ef-baf0-55f8508e90cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updating instance_info_cache with network_info: [{"id": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "address": "fa:16:3e:09:df:e9", "network": {"id": "93bea2b0-d2f6-4f71-8f0c-6b8a51ad6234", "bridge": "br-int", "label": "tempest-network-smoke--1993688235", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba35ae24dbf3443e8a526dce39c6793b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b62b7b3-9c", "ovs_interfaceid": "1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:49:58 compute-0 sudo[388282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:58 compute-0 sudo[388282]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:58 compute-0 sudo[388307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:49:58 compute-0 sudo[388307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:58 compute-0 sudo[388307]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:58 compute-0 sudo[388333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:49:58 compute-0 sudo[388333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:58 compute-0 nova_compute[247704]: 2026-01-31 08:49:58.679 247708 DEBUG oslo_concurrency.lockutils [req-f101112d-588f-4004-84e8-1040d5820f02 req-4014cc7e-39f6-40ef-baf0-55f8508e90cb 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1fff7e98-de8b-44f9-b725-55c12d40b395" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:49:58 compute-0 podman[388399]: 2026-01-31 08:49:58.697725224 +0000 UTC m=+0.039170784 container create e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:49:58 compute-0 systemd[1]: Started libpod-conmon-e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064.scope.
Jan 31 08:49:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:58 compute-0 podman[388399]: 2026-01-31 08:49:58.768773102 +0000 UTC m=+0.110218662 container init e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:49:58 compute-0 podman[388399]: 2026-01-31 08:49:58.777579407 +0000 UTC m=+0.119024967 container start e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:49:58 compute-0 podman[388399]: 2026-01-31 08:49:58.681303414 +0000 UTC m=+0.022749004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:49:58 compute-0 compassionate_franklin[388416]: 167 167
Jan 31 08:49:58 compute-0 podman[388399]: 2026-01-31 08:49:58.782265581 +0000 UTC m=+0.123711161 container attach e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:49:58 compute-0 systemd[1]: libpod-e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064.scope: Deactivated successfully.
Jan 31 08:49:58 compute-0 podman[388399]: 2026-01-31 08:49:58.78301476 +0000 UTC m=+0.124460310 container died e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:49:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-801e0c27afceb31459947ac44bb31d3b0c110c767153fa991b0c64dea56a549c-merged.mount: Deactivated successfully.
Jan 31 08:49:58 compute-0 sshd-session[388231]: Connection closed by authenticating user root 123.54.197.60 port 53916 [preauth]
Jan 31 08:49:58 compute-0 podman[388399]: 2026-01-31 08:49:58.827096312 +0000 UTC m=+0.168541902 container remove e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:49:58 compute-0 systemd[1]: libpod-conmon-e4fc8ea90f20a0d9457b52cde191e7242c5f86a009fa366146d7e8cca0017064.scope: Deactivated successfully.
Jan 31 08:49:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s wr, 0 op/s
Jan 31 08:49:59 compute-0 podman[388441]: 2026-01-31 08:49:59.008256189 +0000 UTC m=+0.056241449 container create 24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:49:59 compute-0 systemd[1]: Started libpod-conmon-24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77.scope.
Jan 31 08:49:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131e5147883b0e6f5907c6d3ad99fd799f02f602ee4cb342df8e9470a4001586/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131e5147883b0e6f5907c6d3ad99fd799f02f602ee4cb342df8e9470a4001586/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131e5147883b0e6f5907c6d3ad99fd799f02f602ee4cb342df8e9470a4001586/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131e5147883b0e6f5907c6d3ad99fd799f02f602ee4cb342df8e9470a4001586/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:59 compute-0 podman[388441]: 2026-01-31 08:49:58.987541456 +0000 UTC m=+0.035526736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:49:59 compute-0 podman[388441]: 2026-01-31 08:49:59.087447027 +0000 UTC m=+0.135432307 container init 24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:49:59 compute-0 podman[388441]: 2026-01-31 08:49:59.094308504 +0000 UTC m=+0.142293744 container start 24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:49:59 compute-0 podman[388441]: 2026-01-31 08:49:59.098169508 +0000 UTC m=+0.146154788 container attach 24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:49:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:59.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:49:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:49:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:59.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:49:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]: {
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:     "0": [
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:         {
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "devices": [
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "/dev/loop3"
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             ],
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "lv_name": "ceph_lv0",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "lv_size": "7511998464",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "name": "ceph_lv0",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "tags": {
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.cluster_name": "ceph",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.crush_device_class": "",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.encrypted": "0",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.osd_id": "0",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.type": "block",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:                 "ceph.vdo": "0"
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             },
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "type": "block",
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:             "vg_name": "ceph_vg0"
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:         }
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]:     ]
Jan 31 08:49:59 compute-0 fervent_goldberg[388457]: }
Jan 31 08:49:59 compute-0 systemd[1]: libpod-24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77.scope: Deactivated successfully.
Jan 31 08:49:59 compute-0 podman[388441]: 2026-01-31 08:49:59.849869868 +0000 UTC m=+0.897855108 container died 24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:49:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-131e5147883b0e6f5907c6d3ad99fd799f02f602ee4cb342df8e9470a4001586-merged.mount: Deactivated successfully.
Jan 31 08:49:59 compute-0 podman[388441]: 2026-01-31 08:49:59.970601186 +0000 UTC m=+1.018586426 container remove 24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:49:59 compute-0 systemd[1]: libpod-conmon-24e31581035e2b10865af80bb1d6cf92b8deeb88c4941f4866235f265b3c0c77.scope: Deactivated successfully.
Jan 31 08:49:59 compute-0 sudo[388333]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 08:50:00 compute-0 ceph-mon[74496]: pgmap v3515: 305 pgs: 305 active+clean; 183 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s wr, 0 op/s
Jan 31 08:50:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 08:50:00 compute-0 sudo[388480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:00 compute-0 sudo[388480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:00 compute-0 sudo[388480]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:00 compute-0 sudo[388505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:50:00 compute-0 sudo[388505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:00 compute-0 sudo[388505]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:00 compute-0 sudo[388530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:00 compute-0 sudo[388530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:00 compute-0 sudo[388530]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:00 compute-0 sudo[388555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:50:00 compute-0 sudo[388555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:00 compute-0 sshd-session[388460]: Connection closed by authenticating user root 123.54.197.60 port 53924 [preauth]
Jan 31 08:50:00 compute-0 podman[388621]: 2026-01-31 08:50:00.474407495 +0000 UTC m=+0.040156369 container create 5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_leavitt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 08:50:00 compute-0 systemd[1]: Started libpod-conmon-5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381.scope.
Jan 31 08:50:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:50:00 compute-0 podman[388621]: 2026-01-31 08:50:00.548469876 +0000 UTC m=+0.114218770 container init 5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_leavitt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:50:00 compute-0 podman[388621]: 2026-01-31 08:50:00.45610455 +0000 UTC m=+0.021853444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:50:00 compute-0 podman[388621]: 2026-01-31 08:50:00.556809879 +0000 UTC m=+0.122558753 container start 5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:50:00 compute-0 podman[388621]: 2026-01-31 08:50:00.561039682 +0000 UTC m=+0.126788556 container attach 5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_leavitt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:50:00 compute-0 loving_leavitt[388638]: 167 167
Jan 31 08:50:00 compute-0 systemd[1]: libpod-5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381.scope: Deactivated successfully.
Jan 31 08:50:00 compute-0 podman[388621]: 2026-01-31 08:50:00.563268976 +0000 UTC m=+0.129017840 container died 5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_leavitt, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fc61ab2014cbd20ab19530a83ebf7b6a0e6186e533beb3b4648e95b5239775f-merged.mount: Deactivated successfully.
Jan 31 08:50:00 compute-0 podman[388621]: 2026-01-31 08:50:00.596584047 +0000 UTC m=+0.162332921 container remove 5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_leavitt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:50:00 compute-0 systemd[1]: libpod-conmon-5c8a1d28ecbfc8d57da42d341860ca9b3c8d496c751d381460543cc193505381.scope: Deactivated successfully.
Jan 31 08:50:00 compute-0 podman[388663]: 2026-01-31 08:50:00.746239599 +0000 UTC m=+0.057561212 container create 9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:50:00 compute-0 systemd[1]: Started libpod-conmon-9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b.scope.
Jan 31 08:50:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f607d4c2ef89fd319d1f40fb169503f67901379e146c6b6242eeaeabd50975d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f607d4c2ef89fd319d1f40fb169503f67901379e146c6b6242eeaeabd50975d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f607d4c2ef89fd319d1f40fb169503f67901379e146c6b6242eeaeabd50975d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f607d4c2ef89fd319d1f40fb169503f67901379e146c6b6242eeaeabd50975d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:00 compute-0 podman[388663]: 2026-01-31 08:50:00.725603827 +0000 UTC m=+0.036925480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:50:00 compute-0 podman[388663]: 2026-01-31 08:50:00.849916271 +0000 UTC m=+0.161237914 container init 9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:50:00 compute-0 podman[388663]: 2026-01-31 08:50:00.855872816 +0000 UTC m=+0.167194449 container start 9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shirley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 08:50:00 compute-0 podman[388663]: 2026-01-31 08:50:00.859665879 +0000 UTC m=+0.170987492 container attach 9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shirley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:50:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:01 compute-0 sudo[388686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:01 compute-0 sudo[388686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:01 compute-0 sudo[388686]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:01.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:01 compute-0 sudo[388711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:01 compute-0 sudo[388711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:01 compute-0 sudo[388711]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:01.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:01 compute-0 nova_compute[247704]: 2026-01-31 08:50:01.336 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:01 compute-0 nova_compute[247704]: 2026-01-31 08:50:01.674 247708 DEBUG nova.compute.manager [req-6bb1325f-2674-4b21-91f0-c1332cad60df req-f0f10c46-b5e7-4d2e-839e-91878239d254 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:50:01 compute-0 nova_compute[247704]: 2026-01-31 08:50:01.675 247708 DEBUG oslo_concurrency.lockutils [req-6bb1325f-2674-4b21-91f0-c1332cad60df req-f0f10c46-b5e7-4d2e-839e-91878239d254 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:01 compute-0 nova_compute[247704]: 2026-01-31 08:50:01.675 247708 DEBUG oslo_concurrency.lockutils [req-6bb1325f-2674-4b21-91f0-c1332cad60df req-f0f10c46-b5e7-4d2e-839e-91878239d254 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:01 compute-0 nova_compute[247704]: 2026-01-31 08:50:01.675 247708 DEBUG oslo_concurrency.lockutils [req-6bb1325f-2674-4b21-91f0-c1332cad60df req-f0f10c46-b5e7-4d2e-839e-91878239d254 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:01 compute-0 nova_compute[247704]: 2026-01-31 08:50:01.676 247708 DEBUG nova.compute.manager [req-6bb1325f-2674-4b21-91f0-c1332cad60df req-f0f10c46-b5e7-4d2e-839e-91878239d254 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] No waiting events found dispatching network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:50:01 compute-0 nova_compute[247704]: 2026-01-31 08:50:01.676 247708 WARNING nova.compute.manager [req-6bb1325f-2674-4b21-91f0-c1332cad60df req-f0f10c46-b5e7-4d2e-839e-91878239d254 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received unexpected event network-vif-plugged-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 for instance with vm_state active and task_state deleting.
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]: {
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]:         "osd_id": 0,
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]:         "type": "bluestore"
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]:     }
Jan 31 08:50:01 compute-0 xenodochial_shirley[388681]: }
Jan 31 08:50:01 compute-0 systemd[1]: libpod-9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b.scope: Deactivated successfully.
Jan 31 08:50:01 compute-0 podman[388663]: 2026-01-31 08:50:01.714754504 +0000 UTC m=+1.026076147 container died 9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f607d4c2ef89fd319d1f40fb169503f67901379e146c6b6242eeaeabd50975d-merged.mount: Deactivated successfully.
Jan 31 08:50:01 compute-0 podman[388663]: 2026-01-31 08:50:01.781271713 +0000 UTC m=+1.092593336 container remove 9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:50:01 compute-0 systemd[1]: libpod-conmon-9ae7ace0fc6a4a7eb412f82d62efa64e80524a0931119113a82b795a2719997b.scope: Deactivated successfully.
Jan 31 08:50:01 compute-0 sudo[388555]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:50:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:50:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:50:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:50:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 293af583-4a2f-4c4a-a1b0-a01cb356999c does not exist
Jan 31 08:50:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 416042fe-3d2e-45bc-a624-b558c9ba1bc9 does not exist
Jan 31 08:50:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2386dfa3-9ac4-43f4-b501-e25fee431003 does not exist
Jan 31 08:50:01 compute-0 sudo[388765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:01 compute-0 sudo[388765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:01 compute-0 sudo[388765]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:01 compute-0 nova_compute[247704]: 2026-01-31 08:50:01.945 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:01 compute-0 sudo[388790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:50:01 compute-0 sudo[388790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:01 compute-0 sudo[388790]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:02 compute-0 ceph-mon[74496]: pgmap v3516: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:50:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:50:02 compute-0 sshd-session[388677]: Connection closed by authenticating user root 123.54.197.60 port 53384 [preauth]
Jan 31 08:50:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:03.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:03.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:03 compute-0 nova_compute[247704]: 2026-01-31 08:50:03.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:03 compute-0 nova_compute[247704]: 2026-01-31 08:50:03.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:03 compute-0 nova_compute[247704]: 2026-01-31 08:50:03.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:50:03 compute-0 sshd-session[388816]: Connection closed by authenticating user root 123.54.197.60 port 53386 [preauth]
Jan 31 08:50:04 compute-0 ceph-mon[74496]: pgmap v3517: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:04 compute-0 nova_compute[247704]: 2026-01-31 08:50:04.893 247708 DEBUG nova.network.neutron [-] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:50:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:05.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:05.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:05 compute-0 nova_compute[247704]: 2026-01-31 08:50:05.358 247708 INFO nova.compute.manager [-] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Took 8.09 seconds to deallocate network for instance.
Jan 31 08:50:05 compute-0 nova_compute[247704]: 2026-01-31 08:50:05.443 247708 DEBUG nova.compute.manager [req-c5e66a25-a9d6-425b-939f-7c0abfe3b831 req-d2743a88-725b-450e-83a7-3d3482579579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Received event network-vif-deleted-1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:50:05 compute-0 nova_compute[247704]: 2026-01-31 08:50:05.443 247708 INFO nova.compute.manager [req-c5e66a25-a9d6-425b-939f-7c0abfe3b831 req-d2743a88-725b-450e-83a7-3d3482579579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Neutron deleted interface 1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94; detaching it from the instance and deleting it from the info cache
Jan 31 08:50:05 compute-0 nova_compute[247704]: 2026-01-31 08:50:05.443 247708 DEBUG nova.network.neutron [req-c5e66a25-a9d6-425b-939f-7c0abfe3b831 req-d2743a88-725b-450e-83a7-3d3482579579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:50:05 compute-0 sshd-session[388818]: Connection closed by authenticating user root 123.54.197.60 port 53392 [preauth]
Jan 31 08:50:05 compute-0 nova_compute[247704]: 2026-01-31 08:50:05.846 247708 DEBUG nova.compute.manager [req-c5e66a25-a9d6-425b-939f-7c0abfe3b831 req-d2743a88-725b-450e-83a7-3d3482579579 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Detach interface failed, port_id=1b62b7b3-9c42-4a24-b6d7-4085e6a8bd94, reason: Instance 1fff7e98-de8b-44f9-b725-55c12d40b395 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:50:06 compute-0 ceph-mon[74496]: pgmap v3518: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:06 compute-0 nova_compute[247704]: 2026-01-31 08:50:06.186 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:06 compute-0 nova_compute[247704]: 2026-01-31 08:50:06.187 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:06 compute-0 nova_compute[247704]: 2026-01-31 08:50:06.269 247708 DEBUG oslo_concurrency.processutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:50:06 compute-0 nova_compute[247704]: 2026-01-31 08:50:06.339 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:50:06 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2492933013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:06 compute-0 nova_compute[247704]: 2026-01-31 08:50:06.698 247708 DEBUG oslo_concurrency.processutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:50:06 compute-0 nova_compute[247704]: 2026-01-31 08:50:06.705 247708 DEBUG nova.compute.provider_tree [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:50:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:06 compute-0 nova_compute[247704]: 2026-01-31 08:50:06.949 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/71149161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2492933013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:07 compute-0 nova_compute[247704]: 2026-01-31 08:50:07.120 247708 DEBUG nova.scheduler.client.report [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:50:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:50:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:07.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:50:07 compute-0 nova_compute[247704]: 2026-01-31 08:50:07.362 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:07 compute-0 nova_compute[247704]: 2026-01-31 08:50:07.432 247708 INFO nova.scheduler.client.report [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Deleted allocations for instance 1fff7e98-de8b-44f9-b725-55c12d40b395
Jan 31 08:50:07 compute-0 nova_compute[247704]: 2026-01-31 08:50:07.810 247708 DEBUG oslo_concurrency.lockutils [None req-c44cf7b4-2d4b-4a40-bc02-4046de3a7c85 c6968a1ee10e4e3b8651ffe0240a7e46 ba35ae24dbf3443e8a526dce39c6793b - - default default] Lock "1fff7e98-de8b-44f9-b725-55c12d40b395" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:07 compute-0 sshd-session[388821]: Connection closed by authenticating user root 123.54.197.60 port 53394 [preauth]
Jan 31 08:50:08 compute-0 ceph-mon[74496]: pgmap v3519: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 28 op/s
Jan 31 08:50:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2986454865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:50:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:09.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:09 compute-0 sshd-session[388846]: Connection closed by authenticating user root 123.54.197.60 port 53408 [preauth]
Jan 31 08:50:09 compute-0 nova_compute[247704]: 2026-01-31 08:50:09.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:09 compute-0 nova_compute[247704]: 2026-01-31 08:50:09.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:50:09 compute-0 nova_compute[247704]: 2026-01-31 08:50:09.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:50:09 compute-0 nova_compute[247704]: 2026-01-31 08:50:09.748 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:50:09 compute-0 podman[388851]: 2026-01-31 08:50:09.90534531 +0000 UTC m=+0.070929316 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 08:50:10 compute-0 ceph-mon[74496]: pgmap v3520: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:50:10 compute-0 nova_compute[247704]: 2026-01-31 08:50:10.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:10 compute-0 nova_compute[247704]: 2026-01-31 08:50:10.719 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:10 compute-0 nova_compute[247704]: 2026-01-31 08:50:10.719 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:10 compute-0 nova_compute[247704]: 2026-01-31 08:50:10.719 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:10 compute-0 nova_compute[247704]: 2026-01-31 08:50:10.719 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:50:10 compute-0 nova_compute[247704]: 2026-01-31 08:50:10.720 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:50:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:50:11 compute-0 sshd-session[388849]: Connection closed by authenticating user root 123.54.197.60 port 60902 [preauth]
Jan 31 08:50:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:50:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128509544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.184 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.218 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849396.2150078, 1fff7e98-de8b-44f9-b725-55c12d40b395 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.218 247708 INFO nova.compute.manager [-] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] VM Stopped (Lifecycle Event)
Jan 31 08:50:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:50:11.220 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:50:11.220 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:50:11.220 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:11.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.343 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.358 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.359 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4257MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.359 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.360 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.504 247708 DEBUG nova.compute.manager [None req-2da7404a-6fb1-4c46-a8b5-753ce7643066 - - - - - -] [instance: 1fff7e98-de8b-44f9-b725-55c12d40b395] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:50:11 compute-0 nova_compute[247704]: 2026-01-31 08:50:11.974 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:12 compute-0 ceph-mon[74496]: pgmap v3521: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 08:50:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4128509544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:12 compute-0 nova_compute[247704]: 2026-01-31 08:50:12.199 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:50:12 compute-0 nova_compute[247704]: 2026-01-31 08:50:12.199 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:50:12 compute-0 nova_compute[247704]: 2026-01-31 08:50:12.247 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:50:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:50:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1541487575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:12 compute-0 nova_compute[247704]: 2026-01-31 08:50:12.653 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:50:12 compute-0 nova_compute[247704]: 2026-01-31 08:50:12.658 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:50:12 compute-0 nova_compute[247704]: 2026-01-31 08:50:12.867 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:50:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:13 compute-0 nova_compute[247704]: 2026-01-31 08:50:13.126 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1541487575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:13 compute-0 nova_compute[247704]: 2026-01-31 08:50:13.204 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:50:13 compute-0 nova_compute[247704]: 2026-01-31 08:50:13.204 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:13.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:13 compute-0 sshd-session[388894]: Connection closed by authenticating user root 123.54.197.60 port 60904 [preauth]
Jan 31 08:50:13 compute-0 nova_compute[247704]: 2026-01-31 08:50:13.551 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:14 compute-0 ceph-mon[74496]: pgmap v3522: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:14 compute-0 sshd-session[388919]: Connection closed by authenticating user root 123.54.197.60 port 60920 [preauth]
Jan 31 08:50:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1129069942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:15.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:16 compute-0 nova_compute[247704]: 2026-01-31 08:50:16.199 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:16 compute-0 nova_compute[247704]: 2026-01-31 08:50:16.200 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:16 compute-0 ceph-mon[74496]: pgmap v3523: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:16 compute-0 sshd-session[388922]: Connection closed by authenticating user root 123.54.197.60 port 60922 [preauth]
Jan 31 08:50:16 compute-0 nova_compute[247704]: 2026-01-31 08:50:16.346 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:16 compute-0 nova_compute[247704]: 2026-01-31 08:50:16.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:16 compute-0 nova_compute[247704]: 2026-01-31 08:50:16.976 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:17.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1183231244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:17.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:17 compute-0 sshd-session[388925]: Connection closed by authenticating user root 123.54.197.60 port 60932 [preauth]
Jan 31 08:50:18 compute-0 ceph-mon[74496]: pgmap v3524: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:19.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:19.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:19 compute-0 sshd-session[388927]: Connection closed by authenticating user root 123.54.197.60 port 60948 [preauth]
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:20 compute-0 ceph-mon[74496]: pgmap v3525: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:50:20
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'images', 'volumes', 'backups', 'default.rgw.log']
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:50:20 compute-0 podman[388933]: 2026-01-31 08:50:20.913406438 +0000 UTC m=+0.084689751 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:20 compute-0 ceph-mgr[74791]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 08:50:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:21.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:21 compute-0 sshd-session[388930]: Connection closed by authenticating user root 123.54.197.60 port 38588 [preauth]
Jan 31 08:50:21 compute-0 sudo[388960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:21 compute-0 sudo[388960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:21 compute-0 sudo[388960]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:21.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:21 compute-0 sudo[388985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:21 compute-0 sudo[388985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:21 compute-0 sudo[388985]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:21 compute-0 nova_compute[247704]: 2026-01-31 08:50:21.348 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:21 compute-0 nova_compute[247704]: 2026-01-31 08:50:21.978 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:22 compute-0 ceph-mon[74496]: pgmap v3526: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:23 compute-0 sshd-session[389010]: Connection closed by authenticating user root 123.54.197.60 port 38598 [preauth]
Jan 31 08:50:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:23.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:23.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:24 compute-0 ceph-mon[74496]: pgmap v3527: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:24 compute-0 nova_compute[247704]: 2026-01-31 08:50:24.450 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:24 compute-0 nova_compute[247704]: 2026-01-31 08:50:24.496 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:24 compute-0 sshd-session[389013]: Connection closed by authenticating user root 123.54.197.60 port 38608 [preauth]
Jan 31 08:50:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:25.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:25.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:26 compute-0 sshd-session[389017]: Connection closed by authenticating user root 123.54.197.60 port 38610 [preauth]
Jan 31 08:50:26 compute-0 nova_compute[247704]: 2026-01-31 08:50:26.352 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:26 compute-0 ceph-mon[74496]: pgmap v3528: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:26 compute-0 nova_compute[247704]: 2026-01-31 08:50:26.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:27.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:27 compute-0 sshd-session[389020]: Connection closed by authenticating user root 123.54.197.60 port 38622 [preauth]
Jan 31 08:50:28 compute-0 ceph-mon[74496]: pgmap v3529: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:50:28.726 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:50:28 compute-0 nova_compute[247704]: 2026-01-31 08:50:28.726 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:50:28.728 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:50:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:29.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:29.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:29 compute-0 ceph-mon[74496]: pgmap v3530: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:30 compute-0 sshd-session[389022]: Connection closed by authenticating user root 123.54.197.60 port 38634 [preauth]
Jan 31 08:50:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:31.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:31.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:31 compute-0 nova_compute[247704]: 2026-01-31 08:50:31.356 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:31 compute-0 sshd-session[389026]: Connection closed by authenticating user root 123.54.197.60 port 48068 [preauth]
Jan 31 08:50:31 compute-0 ceph-mon[74496]: pgmap v3531: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:32 compute-0 nova_compute[247704]: 2026-01-31 08:50:32.029 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:33.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:33.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:33 compute-0 sshd-session[389028]: Connection closed by authenticating user root 123.54.197.60 port 48074 [preauth]
Jan 31 08:50:34 compute-0 ceph-mon[74496]: pgmap v3532: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:35 compute-0 sshd-session[389031]: Connection closed by authenticating user root 123.54.197.60 port 48080 [preauth]
Jan 31 08:50:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:35.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:35.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:50:36 compute-0 ceph-mon[74496]: pgmap v3533: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:36 compute-0 nova_compute[247704]: 2026-01-31 08:50:36.360 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:37 compute-0 nova_compute[247704]: 2026-01-31 08:50:37.031 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:37.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:37.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:37 compute-0 sshd-session[389034]: Connection closed by authenticating user root 123.54.197.60 port 48088 [preauth]
Jan 31 08:50:38 compute-0 ceph-mon[74496]: pgmap v3534: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:50:38.732 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:50:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:38 compute-0 sshd-session[389037]: Connection closed by authenticating user root 123.54.197.60 port 48104 [preauth]
Jan 31 08:50:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:39.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:39.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:40 compute-0 ceph-mon[74496]: pgmap v3535: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:40 compute-0 podman[389043]: 2026-01-31 08:50:40.922467653 +0000 UTC m=+0.077584528 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:50:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:41.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:41.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:41 compute-0 nova_compute[247704]: 2026-01-31 08:50:41.363 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:41 compute-0 sudo[389062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:41 compute-0 sudo[389062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:41 compute-0 sudo[389062]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:41 compute-0 sudo[389087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:50:41 compute-0 sudo[389087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:41 compute-0 sudo[389087]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:42 compute-0 nova_compute[247704]: 2026-01-31 08:50:42.034 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:42 compute-0 sshd-session[389040]: Connection closed by authenticating user root 123.54.197.60 port 48106 [preauth]
Jan 31 08:50:42 compute-0 ceph-mon[74496]: pgmap v3536: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:43.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:43.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:43 compute-0 ceph-mon[74496]: pgmap v3537: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:44 compute-0 sshd-session[389113]: Connection closed by authenticating user root 123.54.197.60 port 53346 [preauth]
Jan 31 08:50:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/20990812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:50:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:45.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:45.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:45 compute-0 sshd-session[389116]: Connection closed by authenticating user root 123.54.197.60 port 53352 [preauth]
Jan 31 08:50:46 compute-0 ceph-mon[74496]: pgmap v3538: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:46 compute-0 nova_compute[247704]: 2026-01-31 08:50:46.366 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:47 compute-0 nova_compute[247704]: 2026-01-31 08:50:47.036 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:47.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:47 compute-0 sshd-session[389118]: Connection closed by authenticating user root 123.54.197.60 port 53366 [preauth]
Jan 31 08:50:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:47.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:48 compute-0 ceph-mon[74496]: pgmap v3539: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:50:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.6 KiB/s rd, 602 KiB/s wr, 12 op/s
Jan 31 08:50:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:49.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:49.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:49 compute-0 sshd-session[389121]: Connection closed by authenticating user root 123.54.197.60 port 53376 [preauth]
Jan 31 08:50:50 compute-0 ceph-mon[74496]: pgmap v3540: 305 pgs: 305 active+clean; 137 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.6 KiB/s rd, 602 KiB/s wr, 12 op/s
Jan 31 08:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:50 compute-0 nova_compute[247704]: 2026-01-31 08:50:50.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:51.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:51 compute-0 nova_compute[247704]: 2026-01-31 08:50:51.371 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:51 compute-0 sshd-session[389124]: Connection closed by authenticating user root 123.54.197.60 port 40470 [preauth]
Jan 31 08:50:51 compute-0 podman[389131]: 2026-01-31 08:50:51.940073585 +0000 UTC m=+0.118646328 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:50:52 compute-0 nova_compute[247704]: 2026-01-31 08:50:52.038 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:52 compute-0 ceph-mon[74496]: pgmap v3541: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:52 compute-0 sshd-session[389129]: Invalid user ubuntu from 45.148.10.240 port 43846
Jan 31 08:50:52 compute-0 sshd-session[389129]: Connection closed by invalid user ubuntu 45.148.10.240 port 43846 [preauth]
Jan 31 08:50:52 compute-0 sshd-session[389127]: Connection closed by authenticating user root 123.54.197.60 port 40484 [preauth]
Jan 31 08:50:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:53.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:53.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:54 compute-0 ceph-mon[74496]: pgmap v3542: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/709796620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:50:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/709796620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:50:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:55.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:55 compute-0 nova_compute[247704]: 2026-01-31 08:50:55.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:56 compute-0 ceph-mon[74496]: pgmap v3543: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:56 compute-0 nova_compute[247704]: 2026-01-31 08:50:56.407 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:57 compute-0 nova_compute[247704]: 2026-01-31 08:50:57.040 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:50:57 compute-0 sshd-session[389159]: Connection closed by authenticating user root 123.54.197.60 port 40492 [preauth]
Jan 31 08:50:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:50:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:57.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:50:58 compute-0 ceph-mon[74496]: pgmap v3544: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:58 compute-0 sshd-session[389163]: Connection closed by authenticating user root 123.54.197.60 port 40508 [preauth]
Jan 31 08:50:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:50:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:50:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:50:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:50:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:50:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:59.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:50:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:00 compute-0 ceph-mon[74496]: pgmap v3545: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:51:00 compute-0 sshd-session[389167]: Connection closed by authenticating user root 123.54.197.60 port 40524 [preauth]
Jan 31 08:51:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 31 08:51:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:01.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:01 compute-0 nova_compute[247704]: 2026-01-31 08:51:01.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:01 compute-0 sudo[389172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:01 compute-0 sudo[389172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:01 compute-0 sudo[389172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:01 compute-0 sudo[389197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:01 compute-0 sudo[389197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:01 compute-0 sudo[389197]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-0 nova_compute[247704]: 2026-01-31 08:51:02.042 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:02 compute-0 sshd-session[389170]: Connection closed by authenticating user root 123.54.197.60 port 53490 [preauth]
Jan 31 08:51:02 compute-0 sudo[389222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:02 compute-0 sudo[389222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-0 sudo[389222]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-0 sudo[389247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:02 compute-0 sudo[389247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-0 sudo[389247]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-0 sudo[389273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:02 compute-0 sudo[389273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-0 ceph-mon[74496]: pgmap v3546: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.2 MiB/s wr, 14 op/s
Jan 31 08:51:02 compute-0 sudo[389273]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-0 sudo[389298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 08:51:02 compute-0 sudo[389298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-0 sudo[389298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:51:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:51:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:02 compute-0 sudo[389345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:02 compute-0 sudo[389345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-0 sudo[389345]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-0 sudo[389370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:02 compute-0 sudo[389370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-0 sudo[389370]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:51:02 compute-0 sudo[389395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:02 compute-0 sudo[389395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:02 compute-0 sudo[389395]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:03 compute-0 sudo[389420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:51:03 compute-0 sudo[389420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:51:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:51:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:51:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:03.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:51:03 compute-0 sudo[389420]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:51:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:51:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:51:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3fb8e0a0-7a76-44cf-b5ed-15c51ae4c2b3 does not exist
Jan 31 08:51:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev def44ccf-b494-405c-8b4a-daf13a86448e does not exist
Jan 31 08:51:03 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0fb7b10b-ab37-4ec4-b220-a3d33cc14517 does not exist
Jan 31 08:51:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:51:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:51:03 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:51:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:51:03 compute-0 sudo[389476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:03 compute-0 sudo[389476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:03 compute-0 sudo[389476]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:03 compute-0 sudo[389501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:03 compute-0 sudo[389501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:03 compute-0 sudo[389501]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:03 compute-0 sudo[389526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:03 compute-0 sudo[389526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:03 compute-0 sudo[389526]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:03 compute-0 sudo[389551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:51:03 compute-0 sudo[389551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:03 compute-0 ceph-mon[74496]: pgmap v3547: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/802492447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/533740819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:51:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:51:03 compute-0 sshd-session[389323]: Connection closed by authenticating user root 123.54.197.60 port 53496 [preauth]
Jan 31 08:51:03 compute-0 podman[389616]: 2026-01-31 08:51:03.937675813 +0000 UTC m=+0.046695787 container create dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:51:03 compute-0 systemd[1]: Started libpod-conmon-dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f.scope.
Jan 31 08:51:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:04 compute-0 podman[389616]: 2026-01-31 08:51:03.917028571 +0000 UTC m=+0.026048625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:51:04 compute-0 podman[389616]: 2026-01-31 08:51:04.023189344 +0000 UTC m=+0.132209338 container init dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:51:04 compute-0 podman[389616]: 2026-01-31 08:51:04.030050891 +0000 UTC m=+0.139070855 container start dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:51:04 compute-0 podman[389616]: 2026-01-31 08:51:04.033803182 +0000 UTC m=+0.142823186 container attach dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:51:04 compute-0 nice_noether[389632]: 167 167
Jan 31 08:51:04 compute-0 systemd[1]: libpod-dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f.scope: Deactivated successfully.
Jan 31 08:51:04 compute-0 podman[389616]: 2026-01-31 08:51:04.037682496 +0000 UTC m=+0.146702490 container died dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 31 08:51:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-097151b1f911d4ea27fba44b9c1d1a96da94d0e97438c35682e58ff4d9d5048a-merged.mount: Deactivated successfully.
Jan 31 08:51:04 compute-0 podman[389616]: 2026-01-31 08:51:04.077803543 +0000 UTC m=+0.186823517 container remove dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:51:04 compute-0 systemd[1]: libpod-conmon-dea6a4e7154d8fb2391de4c08e7de08b186360432f8f0af9abbaaff25d89b62f.scope: Deactivated successfully.
Jan 31 08:51:04 compute-0 podman[389655]: 2026-01-31 08:51:04.205325615 +0000 UTC m=+0.043170511 container create 4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:51:04 compute-0 systemd[1]: Started libpod-conmon-4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4.scope.
Jan 31 08:51:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b748bf4fa7eff3de3e1867450fe47344b1495710dfda8aa8e63a180228a5213d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b748bf4fa7eff3de3e1867450fe47344b1495710dfda8aa8e63a180228a5213d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b748bf4fa7eff3de3e1867450fe47344b1495710dfda8aa8e63a180228a5213d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b748bf4fa7eff3de3e1867450fe47344b1495710dfda8aa8e63a180228a5213d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b748bf4fa7eff3de3e1867450fe47344b1495710dfda8aa8e63a180228a5213d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:04 compute-0 podman[389655]: 2026-01-31 08:51:04.278585508 +0000 UTC m=+0.116430434 container init 4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_stonebraker, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 08:51:04 compute-0 podman[389655]: 2026-01-31 08:51:04.185267298 +0000 UTC m=+0.023112214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:51:04 compute-0 podman[389655]: 2026-01-31 08:51:04.286766307 +0000 UTC m=+0.124611203 container start 4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:51:04 compute-0 podman[389655]: 2026-01-31 08:51:04.306033676 +0000 UTC m=+0.143878652 container attach 4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:51:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:04 compute-0 nova_compute[247704]: 2026-01-31 08:51:04.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:04 compute-0 nova_compute[247704]: 2026-01-31 08:51:04.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:51:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Jan 31 08:51:05 compute-0 cool_stonebraker[389671]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:51:05 compute-0 cool_stonebraker[389671]: --> relative data size: 1.0
Jan 31 08:51:05 compute-0 cool_stonebraker[389671]: --> All data devices are unavailable
Jan 31 08:51:05 compute-0 systemd[1]: libpod-4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4.scope: Deactivated successfully.
Jan 31 08:51:05 compute-0 podman[389655]: 2026-01-31 08:51:05.124464411 +0000 UTC m=+0.962309307 container died 4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_stonebraker, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 08:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b748bf4fa7eff3de3e1867450fe47344b1495710dfda8aa8e63a180228a5213d-merged.mount: Deactivated successfully.
Jan 31 08:51:05 compute-0 podman[389655]: 2026-01-31 08:51:05.175594684 +0000 UTC m=+1.013439580 container remove 4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_stonebraker, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:51:05 compute-0 systemd[1]: libpod-conmon-4f793f36dd0385092f229b16b51756f36d80f7bd8996efd4c1466156ce7212f4.scope: Deactivated successfully.
Jan 31 08:51:05 compute-0 sudo[389551]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:05.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:05 compute-0 sudo[389703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:05 compute-0 sudo[389703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:05 compute-0 sudo[389703]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:05 compute-0 sudo[389728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:05 compute-0 sudo[389728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:05 compute-0 sudo[389728]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:05 compute-0 sudo[389753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:05 compute-0 sudo[389753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:05 compute-0 sudo[389753]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:05 compute-0 sudo[389778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:51:05 compute-0 sudo[389778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:05 compute-0 sshd-session[389676]: Invalid user user from 123.54.197.60 port 53500
Jan 31 08:51:05 compute-0 nova_compute[247704]: 2026-01-31 08:51:05.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:05 compute-0 sshd-session[389676]: Connection closed by invalid user user 123.54.197.60 port 53500 [preauth]
Jan 31 08:51:05 compute-0 podman[389843]: 2026-01-31 08:51:05.723952717 +0000 UTC m=+0.052673013 container create ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:51:05 compute-0 systemd[1]: Started libpod-conmon-ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec.scope.
Jan 31 08:51:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:05 compute-0 podman[389843]: 2026-01-31 08:51:05.706184295 +0000 UTC m=+0.034904581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:51:05 compute-0 podman[389843]: 2026-01-31 08:51:05.813571338 +0000 UTC m=+0.142291644 container init ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 08:51:05 compute-0 podman[389843]: 2026-01-31 08:51:05.821607053 +0000 UTC m=+0.150327309 container start ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:51:05 compute-0 podman[389843]: 2026-01-31 08:51:05.825498688 +0000 UTC m=+0.154218964 container attach ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 31 08:51:05 compute-0 cranky_margulis[389859]: 167 167
Jan 31 08:51:05 compute-0 systemd[1]: libpod-ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec.scope: Deactivated successfully.
Jan 31 08:51:05 compute-0 podman[389843]: 2026-01-31 08:51:05.829440264 +0000 UTC m=+0.158160530 container died ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:51:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-90b5794b533d20a6710e2151012ef78593683a4728059a64dedf46418089410f-merged.mount: Deactivated successfully.
Jan 31 08:51:05 compute-0 podman[389843]: 2026-01-31 08:51:05.872069042 +0000 UTC m=+0.200789298 container remove ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:51:05 compute-0 systemd[1]: libpod-conmon-ea55a45dcd7606a04f87981ef37e23ad92f780ec0e18a3067f7b0acda2a2dcec.scope: Deactivated successfully.
Jan 31 08:51:06 compute-0 podman[389884]: 2026-01-31 08:51:06.014431705 +0000 UTC m=+0.046784359 container create da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:51:06 compute-0 ceph-mon[74496]: pgmap v3548: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 12 KiB/s wr, 0 op/s
Jan 31 08:51:06 compute-0 systemd[1]: Started libpod-conmon-da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c.scope.
Jan 31 08:51:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:06 compute-0 podman[389884]: 2026-01-31 08:51:05.993185288 +0000 UTC m=+0.025537962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3cdf87f4ac89c33cdb3adf3705b60d2622da0a3a4ac99f0098aed983249349/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3cdf87f4ac89c33cdb3adf3705b60d2622da0a3a4ac99f0098aed983249349/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3cdf87f4ac89c33cdb3adf3705b60d2622da0a3a4ac99f0098aed983249349/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3cdf87f4ac89c33cdb3adf3705b60d2622da0a3a4ac99f0098aed983249349/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:06 compute-0 podman[389884]: 2026-01-31 08:51:06.113291781 +0000 UTC m=+0.145644455 container init da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:51:06 compute-0 podman[389884]: 2026-01-31 08:51:06.119901482 +0000 UTC m=+0.152254126 container start da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:51:06 compute-0 podman[389884]: 2026-01-31 08:51:06.123422657 +0000 UTC m=+0.155775311 container attach da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:51:06 compute-0 nova_compute[247704]: 2026-01-31 08:51:06.415 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]: {
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:     "0": [
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:         {
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "devices": [
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "/dev/loop3"
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             ],
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "lv_name": "ceph_lv0",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "lv_size": "7511998464",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "name": "ceph_lv0",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "tags": {
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.cluster_name": "ceph",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.crush_device_class": "",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.encrypted": "0",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.osd_id": "0",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.type": "block",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:                 "ceph.vdo": "0"
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             },
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "type": "block",
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:             "vg_name": "ceph_vg0"
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:         }
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]:     ]
Jan 31 08:51:06 compute-0 sharp_stonebraker[389903]: }
Jan 31 08:51:06 compute-0 systemd[1]: libpod-da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c.scope: Deactivated successfully.
Jan 31 08:51:06 compute-0 podman[389884]: 2026-01-31 08:51:06.910273313 +0000 UTC m=+0.942625967 container died da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:51:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 31 08:51:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f3cdf87f4ac89c33cdb3adf3705b60d2622da0a3a4ac99f0098aed983249349-merged.mount: Deactivated successfully.
Jan 31 08:51:06 compute-0 podman[389884]: 2026-01-31 08:51:06.968026058 +0000 UTC m=+1.000378712 container remove da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 31 08:51:06 compute-0 systemd[1]: libpod-conmon-da5492c86bf31a63c3b6a431f9433c32deaf43b14a833ce0990b9fd653baea4c.scope: Deactivated successfully.
Jan 31 08:51:06 compute-0 sudo[389778]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/900598782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:07 compute-0 nova_compute[247704]: 2026-01-31 08:51:07.044 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:07 compute-0 sudo[389924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:07 compute-0 sudo[389924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:07 compute-0 sudo[389924]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:07 compute-0 sudo[389949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:07 compute-0 sudo[389949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:07 compute-0 sudo[389949]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:07 compute-0 sudo[389974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:07 compute-0 sudo[389974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:07 compute-0 sudo[389974]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:07 compute-0 sshd-session[389898]: Invalid user user from 123.54.197.60 port 53510
Jan 31 08:51:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:07 compute-0 sudo[389999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:51:07 compute-0 sudo[389999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:07.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:07 compute-0 sshd-session[389898]: Connection closed by invalid user user 123.54.197.60 port 53510 [preauth]
Jan 31 08:51:07 compute-0 podman[390063]: 2026-01-31 08:51:07.665268073 +0000 UTC m=+0.115623074 container create eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:51:07 compute-0 podman[390063]: 2026-01-31 08:51:07.573685715 +0000 UTC m=+0.024040696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:51:07 compute-0 systemd[1]: Started libpod-conmon-eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36.scope.
Jan 31 08:51:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:07 compute-0 podman[390063]: 2026-01-31 08:51:07.885976664 +0000 UTC m=+0.336331705 container init eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:51:07 compute-0 podman[390063]: 2026-01-31 08:51:07.895395913 +0000 UTC m=+0.345750904 container start eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:51:07 compute-0 festive_swirles[390079]: 167 167
Jan 31 08:51:07 compute-0 systemd[1]: libpod-eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36.scope: Deactivated successfully.
Jan 31 08:51:07 compute-0 podman[390063]: 2026-01-31 08:51:07.900558768 +0000 UTC m=+0.350913769 container attach eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:51:07 compute-0 podman[390063]: 2026-01-31 08:51:07.901073751 +0000 UTC m=+0.351428742 container died eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ea65fce1cfb8bab6dafc24ded59219009aeb95aee73b8136a38ee3a2f16075b-merged.mount: Deactivated successfully.
Jan 31 08:51:08 compute-0 ceph-mon[74496]: pgmap v3549: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s rd, 12 KiB/s wr, 2 op/s
Jan 31 08:51:08 compute-0 podman[390063]: 2026-01-31 08:51:08.144798731 +0000 UTC m=+0.595153722 container remove eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:51:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2677413607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:08 compute-0 systemd[1]: libpod-conmon-eb537a867f54a252276abdd61a820ccacb2c3c49ad4af884dbea141be7b70e36.scope: Deactivated successfully.
Jan 31 08:51:08 compute-0 podman[390103]: 2026-01-31 08:51:08.278039033 +0000 UTC m=+0.047319582 container create dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Jan 31 08:51:08 compute-0 systemd[1]: Started libpod-conmon-dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372.scope.
Jan 31 08:51:08 compute-0 podman[390103]: 2026-01-31 08:51:08.256759426 +0000 UTC m=+0.026040075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:51:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1fc5ab097bd29bc45e978baa9e2b4e6c0682b1914b6245bd589768670e16d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1fc5ab097bd29bc45e978baa9e2b4e6c0682b1914b6245bd589768670e16d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1fc5ab097bd29bc45e978baa9e2b4e6c0682b1914b6245bd589768670e16d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1fc5ab097bd29bc45e978baa9e2b4e6c0682b1914b6245bd589768670e16d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:08 compute-0 podman[390103]: 2026-01-31 08:51:08.376152931 +0000 UTC m=+0.145433470 container init dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:51:08 compute-0 podman[390103]: 2026-01-31 08:51:08.384243448 +0000 UTC m=+0.153524017 container start dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:51:08 compute-0 podman[390103]: 2026-01-31 08:51:08.387789423 +0000 UTC m=+0.157069992 container attach dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:51:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 12 KiB/s wr, 30 op/s
Jan 31 08:51:09 compute-0 gifted_panini[390120]: {
Jan 31 08:51:09 compute-0 gifted_panini[390120]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:51:09 compute-0 gifted_panini[390120]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:51:09 compute-0 gifted_panini[390120]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:51:09 compute-0 gifted_panini[390120]:         "osd_id": 0,
Jan 31 08:51:09 compute-0 gifted_panini[390120]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:51:09 compute-0 gifted_panini[390120]:         "type": "bluestore"
Jan 31 08:51:09 compute-0 gifted_panini[390120]:     }
Jan 31 08:51:09 compute-0 gifted_panini[390120]: }
Jan 31 08:51:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:09 compute-0 systemd[1]: libpod-dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372.scope: Deactivated successfully.
Jan 31 08:51:09 compute-0 podman[390103]: 2026-01-31 08:51:09.266046674 +0000 UTC m=+1.035327273 container died dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:51:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7be1fc5ab097bd29bc45e978baa9e2b4e6c0682b1914b6245bd589768670e16d-merged.mount: Deactivated successfully.
Jan 31 08:51:09 compute-0 podman[390103]: 2026-01-31 08:51:09.554247925 +0000 UTC m=+1.323528474 container remove dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 31 08:51:09 compute-0 systemd[1]: libpod-conmon-dd24197292b6b2cd0cd85c5da96baa4736d2525bc92dd461ccceb870c9da2372.scope: Deactivated successfully.
Jan 31 08:51:09 compute-0 nova_compute[247704]: 2026-01-31 08:51:09.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:09 compute-0 nova_compute[247704]: 2026-01-31 08:51:09.567 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:51:09 compute-0 nova_compute[247704]: 2026-01-31 08:51:09.568 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:51:09 compute-0 sudo[389999]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:51:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:51:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2517ac12-9532-40c5-b579-026f33767ca2 does not exist
Jan 31 08:51:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9f220740-8517-401d-910b-a640cb57ccf1 does not exist
Jan 31 08:51:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 597e0427-2ccb-4f24-b201-ad226c852c2d does not exist
Jan 31 08:51:09 compute-0 sshd-session[390125]: Invalid user user from 123.54.197.60 port 53520
Jan 31 08:51:09 compute-0 sudo[390157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:09 compute-0 sudo[390157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:09 compute-0 sudo[390157]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:09 compute-0 sudo[390182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:51:09 compute-0 sudo[390182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:09 compute-0 sudo[390182]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:10 compute-0 nova_compute[247704]: 2026-01-31 08:51:10.005 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:51:10 compute-0 sshd-session[390125]: Connection closed by invalid user user 123.54.197.60 port 53520 [preauth]
Jan 31 08:51:10 compute-0 ceph-mon[74496]: pgmap v3550: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 12 KiB/s wr, 30 op/s
Jan 31 08:51:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:51:10 compute-0 nova_compute[247704]: 2026-01-31 08:51:10.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.088 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.088 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.089 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.089 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.089 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:11.221 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:11.222 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:11.222 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:11.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:11.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:11 compute-0 sshd-session[390208]: Invalid user user from 123.54.197.60 port 51154
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.418 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:11 compute-0 podman[390230]: 2026-01-31 08:51:11.478364744 +0000 UTC m=+0.062671656 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 31 08:51:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:51:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520348176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.572 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:11 compute-0 sshd-session[390208]: Connection closed by invalid user user 123.54.197.60 port 51154 [preauth]
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.753 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.755 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4262MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.755 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:11 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.755 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:11 compute-0 ceph-mon[74496]: pgmap v3551: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3520348176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:11.999 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:12.000 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:12.020 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:12.045 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:51:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1222370889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:12.500 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:12.506 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:12.546 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:12.549 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:51:12 compute-0 nova_compute[247704]: 2026-01-31 08:51:12.549 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:12 compute-0 sshd-session[390252]: Invalid user user from 123.54.197.60 port 51164
Jan 31 08:51:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1222370889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:13.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:13 compute-0 sshd-session[390252]: Connection closed by invalid user user 123.54.197.60 port 51164 [preauth]
Jan 31 08:51:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:14 compute-0 ceph-mon[74496]: pgmap v3552: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:15.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:15.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:15 compute-0 ceph-mon[74496]: pgmap v3553: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 08:51:16 compute-0 sshd-session[390278]: Invalid user user from 123.54.197.60 port 51166
Jan 31 08:51:16 compute-0 sshd-session[390278]: Connection closed by invalid user user 123.54.197.60 port 51166 [preauth]
Jan 31 08:51:16 compute-0 nova_compute[247704]: 2026-01-31 08:51:16.457 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/971336384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:16 compute-0 nova_compute[247704]: 2026-01-31 08:51:16.550 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:16 compute-0 nova_compute[247704]: 2026-01-31 08:51:16.551 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:16 compute-0 nova_compute[247704]: 2026-01-31 08:51:16.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Jan 31 08:51:17 compute-0 nova_compute[247704]: 2026-01-31 08:51:17.047 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:17.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:17.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:17 compute-0 ceph-mon[74496]: pgmap v3554: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Jan 31 08:51:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3154236841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:18 compute-0 sshd-session[390281]: Invalid user user from 123.54.197.60 port 51170
Jan 31 08:51:18 compute-0 sshd-session[390281]: Connection closed by invalid user user 123.54.197.60 port 51170 [preauth]
Jan 31 08:51:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 279 KiB/s wr, 97 op/s
Jan 31 08:51:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:19.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:19 compute-0 nova_compute[247704]: 2026-01-31 08:51:19.488 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "e2e8918c-a2d2-4778-87d0-0988977e7188" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:19 compute-0 nova_compute[247704]: 2026-01-31 08:51:19.488 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:19 compute-0 nova_compute[247704]: 2026-01-31 08:51:19.540 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:51:19 compute-0 nova_compute[247704]: 2026-01-31 08:51:19.850 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:19 compute-0 nova_compute[247704]: 2026-01-31 08:51:19.851 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:19 compute-0 nova_compute[247704]: 2026-01-31 08:51:19.857 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:51:19 compute-0 nova_compute[247704]: 2026-01-31 08:51:19.858 247708 INFO nova.compute.claims [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:51:20 compute-0 ceph-mon[74496]: pgmap v3555: 305 pgs: 305 active+clean; 171 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 279 KiB/s wr, 97 op/s
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.058 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:20 compute-0 sshd-session[390284]: Invalid user user from 123.54.197.60 port 51178
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:51:20
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'backups', 'vms']
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:51:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:51:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246006372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.506 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.512 247708 DEBUG nova.compute.provider_tree [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:51:20 compute-0 sshd-session[390284]: Connection closed by invalid user user 123.54.197.60 port 51178 [preauth]
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.572 247708 DEBUG nova.scheduler.client.report [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.683 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.684 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.835 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.836 247708 DEBUG nova.network.neutron [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.901 247708 INFO nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:51:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 31 08:51:20 compute-0 nova_compute[247704]: 2026-01-31 08:51:20.981 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:51:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2246006372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.209 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.211 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.211 247708 INFO nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Creating image(s)
Jan 31 08:51:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.243 247708 DEBUG nova.storage.rbd_utils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] rbd image e2e8918c-a2d2-4778-87d0-0988977e7188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.289 247708 DEBUG nova.storage.rbd_utils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] rbd image e2e8918c-a2d2-4778-87d0-0988977e7188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.333 247708 DEBUG nova.storage.rbd_utils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] rbd image e2e8918c-a2d2-4778-87d0-0988977e7188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.338 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.405 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.406 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.407 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.407 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.437 247708 DEBUG nova.storage.rbd_utils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] rbd image e2e8918c-a2d2-4778-87d0-0988977e7188_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.441 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e2e8918c-a2d2-4778-87d0-0988977e7188_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.463 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.597 247708 DEBUG nova.policy [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '76212a9fc119475bb419a1bdb1e55e2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9141737aac2e46fca7bc57616fd70c44', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:51:21 compute-0 sudo[390405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:21 compute-0 sudo[390405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:21 compute-0 sudo[390405]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:21 compute-0 sudo[390430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:21 compute-0 sudo[390430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:21 compute-0 sudo[390430]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:21 compute-0 nova_compute[247704]: 2026-01-31 08:51:21.928 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 e2e8918c-a2d2-4778-87d0-0988977e7188_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.010 247708 DEBUG nova.storage.rbd_utils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] resizing rbd image e2e8918c-a2d2-4778-87d0-0988977e7188_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.105 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:22 compute-0 ceph-mon[74496]: pgmap v3556: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.222 247708 DEBUG nova.objects.instance [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lazy-loading 'migration_context' on Instance uuid e2e8918c-a2d2-4778-87d0-0988977e7188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.271 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.271 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Ensure instance console log exists: /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.272 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.273 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.274 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:22 compute-0 sshd-session[390309]: Invalid user user from 123.54.197.60 port 60106
Jan 31 08:51:22 compute-0 nova_compute[247704]: 2026-01-31 08:51:22.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:22 compute-0 podman[390528]: 2026-01-31 08:51:22.594248939 +0000 UTC m=+0.105032716 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:51:22 compute-0 sshd-session[390309]: Connection closed by invalid user user 123.54.197.60 port 60106 [preauth]
Jan 31 08:51:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:51:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:51:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:23.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:51:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:23 compute-0 sshd-session[390556]: Invalid user user from 123.54.197.60 port 60118
Jan 31 08:51:24 compute-0 nova_compute[247704]: 2026-01-31 08:51:24.054 247708 DEBUG nova.network.neutron [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Successfully created port: e568c850-1847-4743-bc28-5e4dce2179a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:51:24 compute-0 ceph-mon[74496]: pgmap v3557: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:51:24 compute-0 sshd-session[390556]: Connection closed by invalid user user 123.54.197.60 port 60118 [preauth]
Jan 31 08:51:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 238 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.8 MiB/s wr, 88 op/s
Jan 31 08:51:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:25.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:25 compute-0 sshd-session[390559]: Invalid user user from 123.54.197.60 port 60130
Jan 31 08:51:25 compute-0 sshd-session[390559]: Connection closed by invalid user user 123.54.197.60 port 60130 [preauth]
Jan 31 08:51:26 compute-0 ceph-mon[74496]: pgmap v3558: 305 pgs: 305 active+clean; 238 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.8 MiB/s wr, 88 op/s
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.307 247708 DEBUG nova.network.neutron [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Successfully updated port: e568c850-1847-4743-bc28-5e4dce2179a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.467 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.566 247708 DEBUG nova.compute.manager [req-6f7e19c6-b6e0-4390-a658-30b371487abc req-7c6437c1-9964-4e7e-9150-d5f3352c0614 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received event network-changed-e568c850-1847-4743-bc28-5e4dce2179a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.566 247708 DEBUG nova.compute.manager [req-6f7e19c6-b6e0-4390-a658-30b371487abc req-7c6437c1-9964-4e7e-9150-d5f3352c0614 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Refreshing instance network info cache due to event network-changed-e568c850-1847-4743-bc28-5e4dce2179a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.566 247708 DEBUG oslo_concurrency.lockutils [req-6f7e19c6-b6e0-4390-a658-30b371487abc req-7c6437c1-9964-4e7e-9150-d5f3352c0614 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.566 247708 DEBUG oslo_concurrency.lockutils [req-6f7e19c6-b6e0-4390-a658-30b371487abc req-7c6437c1-9964-4e7e-9150-d5f3352c0614 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.566 247708 DEBUG nova.network.neutron [req-6f7e19c6-b6e0-4390-a658-30b371487abc req-7c6437c1-9964-4e7e-9150-d5f3352c0614 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Refreshing network info cache for port e568c850-1847-4743-bc28-5e4dce2179a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.609 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:51:26 compute-0 nova_compute[247704]: 2026-01-31 08:51:26.858 247708 DEBUG nova.network.neutron [req-6f7e19c6-b6e0-4390-a658-30b371487abc req-7c6437c1-9964-4e7e-9150-d5f3352c0614 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:51:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 31 08:51:27 compute-0 nova_compute[247704]: 2026-01-31 08:51:27.051 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:27 compute-0 sshd-session[390561]: Invalid user user from 123.54.197.60 port 60138
Jan 31 08:51:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:27 compute-0 nova_compute[247704]: 2026-01-31 08:51:27.312 247708 DEBUG nova.network.neutron [req-6f7e19c6-b6e0-4390-a658-30b371487abc req-7c6437c1-9964-4e7e-9150-d5f3352c0614 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:51:27 compute-0 nova_compute[247704]: 2026-01-31 08:51:27.392 247708 DEBUG oslo_concurrency.lockutils [req-6f7e19c6-b6e0-4390-a658-30b371487abc req-7c6437c1-9964-4e7e-9150-d5f3352c0614 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:51:27 compute-0 nova_compute[247704]: 2026-01-31 08:51:27.394 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquired lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:51:27 compute-0 nova_compute[247704]: 2026-01-31 08:51:27.394 247708 DEBUG nova.network.neutron [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:51:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:51:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:27.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:51:27 compute-0 sshd-session[390561]: Connection closed by invalid user user 123.54.197.60 port 60138 [preauth]
Jan 31 08:51:27 compute-0 nova_compute[247704]: 2026-01-31 08:51:27.690 247708 DEBUG nova.network.neutron [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:51:28 compute-0 ceph-mon[74496]: pgmap v3559: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.211755) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488211797, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1652, "num_deletes": 251, "total_data_size": 2868772, "memory_usage": 2915216, "flush_reason": "Manual Compaction"}
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488241525, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 2801808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76420, "largest_seqno": 78071, "table_properties": {"data_size": 2794290, "index_size": 4460, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15999, "raw_average_key_size": 20, "raw_value_size": 2779136, "raw_average_value_size": 3517, "num_data_blocks": 196, "num_entries": 790, "num_filter_entries": 790, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849324, "oldest_key_time": 1769849324, "file_creation_time": 1769849488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 29862 microseconds, and 6377 cpu microseconds.
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.241607) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 2801808 bytes OK
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.241641) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.249463) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.249523) EVENT_LOG_v1 {"time_micros": 1769849488249511, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.249554) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 2861853, prev total WAL file size 2861853, number of live WAL files 2.
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.250635) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(2736KB)], [176(10MB)]
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488250731, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 13678811, "oldest_snapshot_seqno": -1}
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10407 keys, 11723528 bytes, temperature: kUnknown
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488398681, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 11723528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11659296, "index_size": 37111, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 275259, "raw_average_key_size": 26, "raw_value_size": 11480190, "raw_average_value_size": 1103, "num_data_blocks": 1403, "num_entries": 10407, "num_filter_entries": 10407, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849488, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.399053) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11723528 bytes
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.400553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.4 rd, 79.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.4 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(9.1) write-amplify(4.2) OK, records in: 10924, records dropped: 517 output_compression: NoCompression
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.400576) EVENT_LOG_v1 {"time_micros": 1769849488400564, "job": 110, "event": "compaction_finished", "compaction_time_micros": 148063, "compaction_time_cpu_micros": 26951, "output_level": 6, "num_output_files": 1, "total_output_size": 11723528, "num_input_records": 10924, "num_output_records": 10407, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488401331, "job": 110, "event": "table_file_deletion", "file_number": 178}
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849488403421, "job": 110, "event": "table_file_deletion", "file_number": 176}
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.250481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.403502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.403515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.403517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.403519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:51:28.403521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:28 compute-0 sshd-session[390564]: Invalid user user from 123.54.197.60 port 60146
Jan 31 08:51:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 31 08:51:29 compute-0 sshd-session[390564]: Connection closed by invalid user user 123.54.197.60 port 60146 [preauth]
Jan 31 08:51:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:29.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:30 compute-0 ceph-mon[74496]: pgmap v3560: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 31 08:51:30 compute-0 nova_compute[247704]: 2026-01-31 08:51:30.403 247708 DEBUG nova.network.neutron [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Updating instance_info_cache with network_info: [{"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:51:30 compute-0 sshd-session[390567]: Invalid user user from 123.54.197.60 port 60158
Jan 31 08:51:30 compute-0 sshd-session[390567]: Connection closed by invalid user user 123.54.197.60 port 60158 [preauth]
Jan 31 08:51:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.155 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Releasing lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.155 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Instance network_info: |[{"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.157 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Start _get_guest_xml network_info=[{"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.162 247708 WARNING nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.187 247708 DEBUG nova.virt.libvirt.host [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.188 247708 DEBUG nova.virt.libvirt.host [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.194 247708 DEBUG nova.virt.libvirt.host [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.194 247708 DEBUG nova.virt.libvirt.host [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.195 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.196 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.196 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.196 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.196 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.197 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.197 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.197 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.197 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.197 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.197 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.197 247708 DEBUG nova.virt.hardware [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.200 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:31.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.471 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:51:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942672263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.696 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.727 247708 DEBUG nova.storage.rbd_utils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] rbd image e2e8918c-a2d2-4778-87d0-0988977e7188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:31 compute-0 nova_compute[247704]: 2026-01-31 08:51:31.732 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.053 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:51:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2152520865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.176 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.179 247708 DEBUG nova.virt.libvirt.vif [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-151715943',display_name='tempest-TestServerBasicOps-server-151715943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-151715943',id=199,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqw4bSBAebeRccUYRPmYE5U96xntvEajFi9bMkUSrh/sJnHOg/rE8lH50fMxtlE9t+djC4kjKRQ2i5sqfQ+G7Z27M4XKjPpLvlbSo2rEMSM1fGRXMlNZzQWpKs9h7S9Yw==',key_name='tempest-TestServerBasicOps-529859631',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9141737aac2e46fca7bc57616fd70c44',ramdisk_id='',reservation_id='r-arga7wb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1918636746',owner_user_name='tempest-TestServerBasicOps-1918636746-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:51:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='76212a9fc119475bb419a1bdb1e55e2e',uuid=e2e8918c-a2d2-4778-87d0-0988977e7188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.180 247708 DEBUG nova.network.os_vif_util [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Converting VIF {"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.181 247708 DEBUG nova.network.os_vif_util [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:4c:dd,bridge_name='br-int',has_traffic_filtering=True,id=e568c850-1847-4743-bc28-5e4dce2179a9,network=Network(fff4a957-b1af-42da-9138-78a53cd3f9b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape568c850-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.183 247708 DEBUG nova.objects.instance [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2e8918c-a2d2-4778-87d0-0988977e7188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.208 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <uuid>e2e8918c-a2d2-4778-87d0-0988977e7188</uuid>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <name>instance-000000c7</name>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <nova:name>tempest-TestServerBasicOps-server-151715943</nova:name>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:51:31</nova:creationTime>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <nova:user uuid="76212a9fc119475bb419a1bdb1e55e2e">tempest-TestServerBasicOps-1918636746-project-member</nova:user>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <nova:project uuid="9141737aac2e46fca7bc57616fd70c44">tempest-TestServerBasicOps-1918636746</nova:project>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <nova:port uuid="e568c850-1847-4743-bc28-5e4dce2179a9">
Jan 31 08:51:32 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <system>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <entry name="serial">e2e8918c-a2d2-4778-87d0-0988977e7188</entry>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <entry name="uuid">e2e8918c-a2d2-4778-87d0-0988977e7188</entry>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </system>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <os>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   </os>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <features>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   </features>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/e2e8918c-a2d2-4778-87d0-0988977e7188_disk">
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       </source>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/e2e8918c-a2d2-4778-87d0-0988977e7188_disk.config">
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       </source>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:51:32 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:fa:4c:dd"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <target dev="tape568c850-18"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188/console.log" append="off"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <video>
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </video>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:51:32 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:51:32 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:51:32 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:51:32 compute-0 nova_compute[247704]: </domain>
Jan 31 08:51:32 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.210 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Preparing to wait for external event network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.211 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.211 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.211 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.212 247708 DEBUG nova.virt.libvirt.vif [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-151715943',display_name='tempest-TestServerBasicOps-server-151715943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-151715943',id=199,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqw4bSBAebeRccUYRPmYE5U96xntvEajFi9bMkUSrh/sJnHOg/rE8lH50fMxtlE9t+djC4kjKRQ2i5sqfQ+G7Z27M4XKjPpLvlbSo2rEMSM1fGRXMlNZzQWpKs9h7S9Yw==',key_name='tempest-TestServerBasicOps-529859631',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9141737aac2e46fca7bc57616fd70c44',ramdisk_id='',reservation_id='r-arga7wb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1918636746',owner_user_name='tempest-TestServerBasicOps-1918636746-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:51:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='76212a9fc119475bb419a1bdb1e55e2e',uuid=e2e8918c-a2d2-4778-87d0-0988977e7188,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.213 247708 DEBUG nova.network.os_vif_util [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Converting VIF {"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.213 247708 DEBUG nova.network.os_vif_util [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:4c:dd,bridge_name='br-int',has_traffic_filtering=True,id=e568c850-1847-4743-bc28-5e4dce2179a9,network=Network(fff4a957-b1af-42da-9138-78a53cd3f9b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape568c850-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.214 247708 DEBUG os_vif [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:4c:dd,bridge_name='br-int',has_traffic_filtering=True,id=e568c850-1847-4743-bc28-5e4dce2179a9,network=Network(fff4a957-b1af-42da-9138-78a53cd3f9b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape568c850-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.214 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.215 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.215 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.219 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.219 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape568c850-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.219 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape568c850-18, col_values=(('external_ids', {'iface-id': 'e568c850-1847-4743-bc28-5e4dce2179a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:4c:dd', 'vm-uuid': 'e2e8918c-a2d2-4778-87d0-0988977e7188'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:51:32 compute-0 NetworkManager[49108]: <info>  [1769849492.2222] manager: (tape568c850-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.223 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.228 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.229 247708 INFO os_vif [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:4c:dd,bridge_name='br-int',has_traffic_filtering=True,id=e568c850-1847-4743-bc28-5e4dce2179a9,network=Network(fff4a957-b1af-42da-9138-78a53cd3f9b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape568c850-18')
Jan 31 08:51:32 compute-0 ceph-mon[74496]: pgmap v3561: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 3.6 MiB/s wr, 65 op/s
Jan 31 08:51:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2942672263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2152520865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:51:32 compute-0 sshd-session[390570]: Invalid user user from 123.54.197.60 port 37268
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.342 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.343 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.344 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] No VIF found with MAC fa:16:3e:fa:4c:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.345 247708 INFO nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Using config drive
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.375 247708 DEBUG nova.storage.rbd_utils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] rbd image e2e8918c-a2d2-4778-87d0-0988977e7188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:32 compute-0 sshd-session[390570]: Connection closed by invalid user user 123.54.197.60 port 37268 [preauth]
Jan 31 08:51:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:32.699 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:51:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:32.700 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:51:32 compute-0 nova_compute[247704]: 2026-01-31 08:51:32.700 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:51:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:51:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:33.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:51:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:33.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:33 compute-0 sshd-session[390655]: Invalid user user from 123.54.197.60 port 37272
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.035 247708 INFO nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Creating config drive at /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188/disk.config
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.040 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp98vsctan execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:34 compute-0 sshd-session[390655]: Connection closed by invalid user user 123.54.197.60 port 37272 [preauth]
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.172 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp98vsctan" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.206 247708 DEBUG nova.storage.rbd_utils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] rbd image e2e8918c-a2d2-4778-87d0-0988977e7188_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.211 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188/disk.config e2e8918c-a2d2-4778-87d0-0988977e7188_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:34 compute-0 ceph-mon[74496]: pgmap v3562: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.378 247708 DEBUG oslo_concurrency.processutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188/disk.config e2e8918c-a2d2-4778-87d0-0988977e7188_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.379 247708 INFO nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Deleting local config drive /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188/disk.config because it was imported into RBD.
Jan 31 08:51:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:34 compute-0 kernel: tape568c850-18: entered promiscuous mode
Jan 31 08:51:34 compute-0 NetworkManager[49108]: <info>  [1769849494.4434] manager: (tape568c850-18): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Jan 31 08:51:34 compute-0 ovn_controller[149457]: 2026-01-31T08:51:34Z|00835|binding|INFO|Claiming lport e568c850-1847-4743-bc28-5e4dce2179a9 for this chassis.
Jan 31 08:51:34 compute-0 ovn_controller[149457]: 2026-01-31T08:51:34Z|00836|binding|INFO|e568c850-1847-4743-bc28-5e4dce2179a9: Claiming fa:16:3e:fa:4c:dd 10.100.0.5
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.464 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:4c:dd 10.100.0.5'], port_security=['fa:16:3e:fa:4c:dd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e2e8918c-a2d2-4778-87d0-0988977e7188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff4a957-b1af-42da-9138-78a53cd3f9b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9141737aac2e46fca7bc57616fd70c44', 'neutron:revision_number': '2', 'neutron:security_group_ids': '861cf335-10f6-4228-a2b0-e654edc6a8f0 e5304a9e-f9c3-48a7-be8e-a58f025dc869', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbf56355-1cb0-4797-b3ef-475b4d16216d, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e568c850-1847-4743-bc28-5e4dce2179a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.465 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e568c850-1847-4743-bc28-5e4dce2179a9 in datapath fff4a957-b1af-42da-9138-78a53cd3f9b7 bound to our chassis
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.467 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fff4a957-b1af-42da-9138-78a53cd3f9b7
Jan 31 08:51:34 compute-0 systemd-udevd[390712]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.475 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bab75b30-2723-4f30-944c-e1a3301e262c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.476 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfff4a957-b1 in ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:51:34 compute-0 systemd-machined[214448]: New machine qemu-88-instance-000000c7.
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.480 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfff4a957-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.481 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[51ea19d7-0388-4861-a7c0-696f10bf891c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.481 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[184e6413-19ea-472b-8ffd-5a94722ede4b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 NetworkManager[49108]: <info>  [1769849494.4879] device (tape568c850-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:51:34 compute-0 NetworkManager[49108]: <info>  [1769849494.4892] device (tape568c850-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.488 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:34 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-000000c7.
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.496 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[623181b5-5be4-41bb-8513-b0a57a620bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_controller[149457]: 2026-01-31T08:51:34Z|00837|binding|INFO|Setting lport e568c850-1847-4743-bc28-5e4dce2179a9 ovn-installed in OVS
Jan 31 08:51:34 compute-0 ovn_controller[149457]: 2026-01-31T08:51:34Z|00838|binding|INFO|Setting lport e568c850-1847-4743-bc28-5e4dce2179a9 up in Southbound
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.501 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.508 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[95cef1a4-3665-4c94-a5e5-dbd7d3b82534]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.530 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ec74a166-32ed-4d11-824e-157b5f50ca4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.534 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[34f47d33-5e29-4b7f-9078-35e69aa77928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 NetworkManager[49108]: <info>  [1769849494.5355] manager: (tapfff4a957-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Jan 31 08:51:34 compute-0 systemd-udevd[390717]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.554 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[167e0bd2-ad52-45b0-b0ab-8782f434591a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.559 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[3c992b79-93ea-4920-bd70-198bb77901bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 NetworkManager[49108]: <info>  [1769849494.5749] device (tapfff4a957-b0): carrier: link connected
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.576 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c2257a-8e28-433e-85cd-233eafc80add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.588 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0e1406-6450-4a78-9d1f-de514e846c1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff4a957-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:bf:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 973719, 'reachable_time': 41818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390746, 'error': None, 'target': 'ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.596 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[719b4c35-945d-4fa1-9f64-73b0b33562b9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:bff1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 973719, 'tstamp': 973719}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390747, 'error': None, 'target': 'ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.604 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2deaff-b2f5-4145-a932-018aab13f4c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfff4a957-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:bf:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 973719, 'reachable_time': 41818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390748, 'error': None, 'target': 'ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.619 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c2256d0b-2493-4ca3-88aa-2a5175df0618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.653 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[aa08b5ee-e1ef-44b8-bcf1-38e71315c471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.654 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff4a957-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.654 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.655 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfff4a957-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:51:34 compute-0 kernel: tapfff4a957-b0: entered promiscuous mode
Jan 31 08:51:34 compute-0 NetworkManager[49108]: <info>  [1769849494.6619] manager: (tapfff4a957-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.663 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfff4a957-b0, col_values=(('external_ids', {'iface-id': '051c78c0-47ef-41a8-9cb5-2b35e2f8924e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:34 compute-0 ovn_controller[149457]: 2026-01-31T08:51:34Z|00839|binding|INFO|Releasing lport 051c78c0-47ef-41a8-9cb5-2b35e2f8924e from this chassis (sb_readonly=0)
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.676 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:34 compute-0 nova_compute[247704]: 2026-01-31 08:51:34.679 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.681 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fff4a957-b1af-42da-9138-78a53cd3f9b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fff4a957-b1af-42da-9138-78a53cd3f9b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.682 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3018110e-caea-4e55-9547-f5d9cf9b7020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.684 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-fff4a957-b1af-42da-9138-78a53cd3f9b7
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/fff4a957-b1af-42da-9138-78a53cd3f9b7.pid.haproxy
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID fff4a957-b1af-42da-9138-78a53cd3f9b7
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:51:34 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:34.687 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7', 'env', 'PROCESS_TAG=haproxy-fff4a957-b1af-42da-9138-78a53cd3f9b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fff4a957-b1af-42da-9138-78a53cd3f9b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:51:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 31 08:51:35 compute-0 podman[390780]: 2026-01-31 08:51:35.058997943 +0000 UTC m=+0.068413405 container create de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:51:35 compute-0 systemd[1]: Started libpod-conmon-de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2.scope.
Jan 31 08:51:35 compute-0 podman[390780]: 2026-01-31 08:51:35.025901019 +0000 UTC m=+0.035316521 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:51:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b6dd7faffceebca376503b679f6d15c9af4d74c60dad04b2f3fd345be0924f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:35 compute-0 podman[390780]: 2026-01-31 08:51:35.165786542 +0000 UTC m=+0.175202034 container init de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:51:35 compute-0 podman[390780]: 2026-01-31 08:51:35.174141165 +0000 UTC m=+0.183556627 container start de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:51:35 compute-0 neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7[390796]: [NOTICE]   (390800) : New worker (390806) forked
Jan 31 08:51:35 compute-0 neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7[390796]: [NOTICE]   (390800) : Loading success.
Jan 31 08:51:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:35.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:35 compute-0 nova_compute[247704]: 2026-01-31 08:51:35.365 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849495.3652256, e2e8918c-a2d2-4778-87d0-0988977e7188 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:51:35 compute-0 nova_compute[247704]: 2026-01-31 08:51:35.366 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] VM Started (Lifecycle Event)
Jan 31 08:51:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:35.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:35 compute-0 nova_compute[247704]: 2026-01-31 08:51:35.490 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:51:35 compute-0 nova_compute[247704]: 2026-01-31 08:51:35.494 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849495.365946, e2e8918c-a2d2-4778-87d0-0988977e7188 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:51:35 compute-0 nova_compute[247704]: 2026-01-31 08:51:35.495 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] VM Paused (Lifecycle Event)
Jan 31 08:51:35 compute-0 sshd-session[390698]: Invalid user user from 123.54.197.60 port 37288
Jan 31 08:51:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:51:35.703 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:51:35 compute-0 sshd-session[390698]: Connection closed by invalid user user 123.54.197.60 port 37288 [preauth]
Jan 31 08:51:35 compute-0 nova_compute[247704]: 2026-01-31 08:51:35.941 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:51:35 compute-0 nova_compute[247704]: 2026-01-31 08:51:35.946 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:51:36 compute-0 nova_compute[247704]: 2026-01-31 08:51:36.023 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031679665689558906 of space, bias 1.0, pg target 0.9503899706867671 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:51:36 compute-0 ceph-mon[74496]: pgmap v3563: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 31 08:51:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 113 KiB/s wr, 6 op/s
Jan 31 08:51:37 compute-0 nova_compute[247704]: 2026-01-31 08:51:37.056 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:37 compute-0 nova_compute[247704]: 2026-01-31 08:51:37.221 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:37.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:37.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:38 compute-0 ceph-mon[74496]: pgmap v3564: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 113 KiB/s wr, 6 op/s
Jan 31 08:51:38 compute-0 sshd-session[390853]: Invalid user user from 123.54.197.60 port 37300
Jan 31 08:51:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s rd, 19 KiB/s wr, 10 op/s
Jan 31 08:51:39 compute-0 sshd-session[390853]: Connection closed by invalid user user 123.54.197.60 port 37300 [preauth]
Jan 31 08:51:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:39.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:40 compute-0 ceph-mon[74496]: pgmap v3565: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s rd, 19 KiB/s wr, 10 op/s
Jan 31 08:51:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 20 KiB/s wr, 10 op/s
Jan 31 08:51:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:41.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:41 compute-0 sshd-session[390858]: Invalid user user from 123.54.197.60 port 37312
Jan 31 08:51:41 compute-0 podman[390860]: 2026-01-31 08:51:41.6193174 +0000 UTC m=+0.084391575 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:51:41 compute-0 ceph-mon[74496]: pgmap v3566: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 20 KiB/s wr, 10 op/s
Jan 31 08:51:41 compute-0 sudo[390879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:41 compute-0 sudo[390879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:41 compute-0 sudo[390879]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:41 compute-0 sshd-session[390858]: Connection closed by invalid user user 123.54.197.60 port 37312 [preauth]
Jan 31 08:51:41 compute-0 sudo[390904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:51:41 compute-0 sudo[390904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:41 compute-0 sudo[390904]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:42 compute-0 nova_compute[247704]: 2026-01-31 08:51:42.058 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:42 compute-0 nova_compute[247704]: 2026-01-31 08:51:42.223 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 15 KiB/s wr, 10 op/s
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.040 247708 DEBUG nova.compute.manager [req-51d3c3aa-7ad8-4734-939d-a596c0393475 req-24841c4a-db6c-4bd5-8b68-e7770a41fb2f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received event network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.041 247708 DEBUG oslo_concurrency.lockutils [req-51d3c3aa-7ad8-4734-939d-a596c0393475 req-24841c4a-db6c-4bd5-8b68-e7770a41fb2f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.041 247708 DEBUG oslo_concurrency.lockutils [req-51d3c3aa-7ad8-4734-939d-a596c0393475 req-24841c4a-db6c-4bd5-8b68-e7770a41fb2f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.042 247708 DEBUG oslo_concurrency.lockutils [req-51d3c3aa-7ad8-4734-939d-a596c0393475 req-24841c4a-db6c-4bd5-8b68-e7770a41fb2f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.042 247708 DEBUG nova.compute.manager [req-51d3c3aa-7ad8-4734-939d-a596c0393475 req-24841c4a-db6c-4bd5-8b68-e7770a41fb2f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Processing event network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.043 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.049 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849503.0486698, e2e8918c-a2d2-4778-87d0-0988977e7188 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.049 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] VM Resumed (Lifecycle Event)
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.053 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.057 247708 INFO nova.virt.libvirt.driver [-] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Instance spawned successfully.
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.058 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.092 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.099 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.103 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.103 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.104 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.104 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.105 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.105 247708 DEBUG nova.virt.libvirt.driver [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.158 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:51:43 compute-0 sshd-session[390929]: Invalid user user from 123.54.197.60 port 36496
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.238 247708 INFO nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Took 22.03 seconds to spawn the instance on the hypervisor.
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.239 247708 DEBUG nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:51:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:43.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.408 247708 INFO nova.compute.manager [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Took 23.68 seconds to build instance.
Jan 31 08:51:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:43 compute-0 nova_compute[247704]: 2026-01-31 08:51:43.508 247708 DEBUG oslo_concurrency.lockutils [None req-062c333f-4694-4865-ba60-fc5708273a40 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:43 compute-0 sshd-session[390929]: Connection closed by invalid user user 123.54.197.60 port 36496 [preauth]
Jan 31 08:51:44 compute-0 ceph-mon[74496]: pgmap v3567: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 15 KiB/s wr, 10 op/s
Jan 31 08:51:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:44 compute-0 sshd-session[390932]: Invalid user user from 123.54.197.60 port 36512
Jan 31 08:51:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 15 KiB/s wr, 46 op/s
Jan 31 08:51:45 compute-0 sshd-session[390932]: Connection closed by invalid user user 123.54.197.60 port 36512 [preauth]
Jan 31 08:51:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:45.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:45 compute-0 nova_compute[247704]: 2026-01-31 08:51:45.651 247708 DEBUG nova.compute.manager [req-033bd64f-32ca-4896-a2b6-9545376836cf req-36ab21f4-54c4-43da-9e93-322feb0ac98c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received event network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:51:45 compute-0 nova_compute[247704]: 2026-01-31 08:51:45.652 247708 DEBUG oslo_concurrency.lockutils [req-033bd64f-32ca-4896-a2b6-9545376836cf req-36ab21f4-54c4-43da-9e93-322feb0ac98c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:45 compute-0 nova_compute[247704]: 2026-01-31 08:51:45.652 247708 DEBUG oslo_concurrency.lockutils [req-033bd64f-32ca-4896-a2b6-9545376836cf req-36ab21f4-54c4-43da-9e93-322feb0ac98c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:45 compute-0 nova_compute[247704]: 2026-01-31 08:51:45.652 247708 DEBUG oslo_concurrency.lockutils [req-033bd64f-32ca-4896-a2b6-9545376836cf req-36ab21f4-54c4-43da-9e93-322feb0ac98c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:45 compute-0 nova_compute[247704]: 2026-01-31 08:51:45.653 247708 DEBUG nova.compute.manager [req-033bd64f-32ca-4896-a2b6-9545376836cf req-36ab21f4-54c4-43da-9e93-322feb0ac98c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] No waiting events found dispatching network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:51:45 compute-0 nova_compute[247704]: 2026-01-31 08:51:45.653 247708 WARNING nova.compute.manager [req-033bd64f-32ca-4896-a2b6-9545376836cf req-36ab21f4-54c4-43da-9e93-322feb0ac98c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received unexpected event network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 for instance with vm_state active and task_state None.
Jan 31 08:51:46 compute-0 ceph-mon[74496]: pgmap v3568: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 15 KiB/s wr, 46 op/s
Jan 31 08:51:46 compute-0 sshd-session[390936]: Invalid user user from 123.54.197.60 port 36526
Jan 31 08:51:46 compute-0 sshd-session[390936]: Connection closed by invalid user user 123.54.197.60 port 36526 [preauth]
Jan 31 08:51:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 KiB/s wr, 50 op/s
Jan 31 08:51:47 compute-0 nova_compute[247704]: 2026-01-31 08:51:47.060 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:47 compute-0 nova_compute[247704]: 2026-01-31 08:51:47.225 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:47.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:47.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:48 compute-0 ceph-mon[74496]: pgmap v3569: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 KiB/s wr, 50 op/s
Jan 31 08:51:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 KiB/s wr, 71 op/s
Jan 31 08:51:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:49.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:49 compute-0 sshd-session[390939]: Invalid user user from 123.54.197.60 port 36538
Jan 31 08:51:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:49.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:49 compute-0 NetworkManager[49108]: <info>  [1769849509.5706] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Jan 31 08:51:49 compute-0 nova_compute[247704]: 2026-01-31 08:51:49.569 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:49 compute-0 NetworkManager[49108]: <info>  [1769849509.5721] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Jan 31 08:51:49 compute-0 sshd-session[390939]: Connection closed by invalid user user 123.54.197.60 port 36538 [preauth]
Jan 31 08:51:49 compute-0 nova_compute[247704]: 2026-01-31 08:51:49.620 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:49 compute-0 ovn_controller[149457]: 2026-01-31T08:51:49Z|00840|binding|INFO|Releasing lport 051c78c0-47ef-41a8-9cb5-2b35e2f8924e from this chassis (sb_readonly=0)
Jan 31 08:51:49 compute-0 nova_compute[247704]: 2026-01-31 08:51:49.646 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:50 compute-0 nova_compute[247704]: 2026-01-31 08:51:50.258 247708 DEBUG nova.compute.manager [req-485660b7-4f91-4814-929c-a8213f772885 req-f6ac6f65-f51b-4690-ab58-768bd0f56cd5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received event network-changed-e568c850-1847-4743-bc28-5e4dce2179a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:51:50 compute-0 nova_compute[247704]: 2026-01-31 08:51:50.259 247708 DEBUG nova.compute.manager [req-485660b7-4f91-4814-929c-a8213f772885 req-f6ac6f65-f51b-4690-ab58-768bd0f56cd5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Refreshing instance network info cache due to event network-changed-e568c850-1847-4743-bc28-5e4dce2179a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:51:50 compute-0 nova_compute[247704]: 2026-01-31 08:51:50.259 247708 DEBUG oslo_concurrency.lockutils [req-485660b7-4f91-4814-929c-a8213f772885 req-f6ac6f65-f51b-4690-ab58-768bd0f56cd5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:51:50 compute-0 nova_compute[247704]: 2026-01-31 08:51:50.259 247708 DEBUG oslo_concurrency.lockutils [req-485660b7-4f91-4814-929c-a8213f772885 req-f6ac6f65-f51b-4690-ab58-768bd0f56cd5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:51:50 compute-0 nova_compute[247704]: 2026-01-31 08:51:50.259 247708 DEBUG nova.network.neutron [req-485660b7-4f91-4814-929c-a8213f772885 req-f6ac6f65-f51b-4690-ab58-768bd0f56cd5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Refreshing network info cache for port e568c850-1847-4743-bc28-5e4dce2179a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:51:50 compute-0 sshd-session[390943]: Invalid user user from 123.54.197.60 port 44632
Jan 31 08:51:50 compute-0 ceph-mon[74496]: pgmap v3570: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 KiB/s wr, 71 op/s
Jan 31 08:51:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Jan 31 08:51:51 compute-0 sshd-session[390943]: Connection closed by invalid user user 123.54.197.60 port 44632 [preauth]
Jan 31 08:51:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:51.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:51.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:51 compute-0 nova_compute[247704]: 2026-01-31 08:51:51.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:52 compute-0 nova_compute[247704]: 2026-01-31 08:51:52.061 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:52 compute-0 nova_compute[247704]: 2026-01-31 08:51:52.227 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:52 compute-0 sshd-session[390946]: Invalid user user from 123.54.197.60 port 44648
Jan 31 08:51:52 compute-0 sshd-session[390946]: Connection closed by invalid user user 123.54.197.60 port 44648 [preauth]
Jan 31 08:51:52 compute-0 podman[390949]: 2026-01-31 08:51:52.926365544 +0000 UTC m=+0.100930327 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true)
Jan 31 08:51:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 64 op/s
Jan 31 08:51:52 compute-0 ceph-mon[74496]: pgmap v3571: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 KiB/s wr, 64 op/s
Jan 31 08:51:53 compute-0 nova_compute[247704]: 2026-01-31 08:51:53.195 247708 DEBUG nova.network.neutron [req-485660b7-4f91-4814-929c-a8213f772885 req-f6ac6f65-f51b-4690-ab58-768bd0f56cd5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Updated VIF entry in instance network info cache for port e568c850-1847-4743-bc28-5e4dce2179a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:51:53 compute-0 nova_compute[247704]: 2026-01-31 08:51:53.196 247708 DEBUG nova.network.neutron [req-485660b7-4f91-4814-929c-a8213f772885 req-f6ac6f65-f51b-4690-ab58-768bd0f56cd5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Updating instance_info_cache with network_info: [{"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:51:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:51:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:51:53 compute-0 nova_compute[247704]: 2026-01-31 08:51:53.387 247708 DEBUG oslo_concurrency.lockutils [req-485660b7-4f91-4814-929c-a8213f772885 req-f6ac6f65-f51b-4690-ab58-768bd0f56cd5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:51:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:51:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:53.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:51:54 compute-0 ceph-mon[74496]: pgmap v3572: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 64 op/s
Jan 31 08:51:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2136031882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:51:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2136031882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:51:54 compute-0 sshd-session[390976]: Invalid user user from 123.54.197.60 port 44654
Jan 31 08:51:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:54 compute-0 sshd-session[390976]: Connection closed by invalid user user 123.54.197.60 port 44654 [preauth]
Jan 31 08:51:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 66 op/s
Jan 31 08:51:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:51:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:51:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:55.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:56 compute-0 ceph-mon[74496]: pgmap v3573: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 66 op/s
Jan 31 08:51:56 compute-0 sshd-session[390979]: Invalid user user from 123.54.197.60 port 44656
Jan 31 08:51:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 251 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 960 KiB/s rd, 443 KiB/s wr, 40 op/s
Jan 31 08:51:57 compute-0 sshd-session[390979]: Connection closed by invalid user user 123.54.197.60 port 44656 [preauth]
Jan 31 08:51:57 compute-0 nova_compute[247704]: 2026-01-31 08:51:57.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:57 compute-0 nova_compute[247704]: 2026-01-31 08:51:57.229 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:51:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:57.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:51:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:57.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:57 compute-0 nova_compute[247704]: 2026-01-31 08:51:57.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:57 compute-0 ovn_controller[149457]: 2026-01-31T08:51:57Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:4c:dd 10.100.0.5
Jan 31 08:51:57 compute-0 ovn_controller[149457]: 2026-01-31T08:51:57Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:4c:dd 10.100.0.5
Jan 31 08:51:58 compute-0 ceph-mon[74496]: pgmap v3574: 305 pgs: 305 active+clean; 251 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 960 KiB/s rd, 443 KiB/s wr, 40 op/s
Jan 31 08:51:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/82887673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:51:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 842 KiB/s rd, 868 KiB/s wr, 54 op/s
Jan 31 08:51:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:51:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:59.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:51:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:51:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:51:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:59.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:00 compute-0 ceph-mon[74496]: pgmap v3575: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 842 KiB/s rd, 868 KiB/s wr, 54 op/s
Jan 31 08:52:00 compute-0 sshd-session[390982]: Invalid user user from 123.54.197.60 port 44668
Jan 31 08:52:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 3.5 MiB/s wr, 105 op/s
Jan 31 08:52:01 compute-0 sshd-session[390982]: Connection closed by invalid user user 123.54.197.60 port 44668 [preauth]
Jan 31 08:52:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:52:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:01.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:52:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:01.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:01 compute-0 sudo[390988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:01 compute-0 sudo[390988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:01 compute-0 sudo[390988]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:01 compute-0 sudo[391013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:01 compute-0 sudo[391013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:01 compute-0 sudo[391013]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:02 compute-0 nova_compute[247704]: 2026-01-31 08:52:02.066 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:02 compute-0 nova_compute[247704]: 2026-01-31 08:52:02.231 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:02 compute-0 ceph-mon[74496]: pgmap v3576: 305 pgs: 305 active+clean; 259 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 3.5 MiB/s wr, 105 op/s
Jan 31 08:52:02 compute-0 sshd-session[390986]: Invalid user user from 123.54.197.60 port 59054
Jan 31 08:52:02 compute-0 sshd-session[390986]: Connection closed by invalid user user 123.54.197.60 port 59054 [preauth]
Jan 31 08:52:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 31 08:52:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:52:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:03.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:52:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1349914857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:03.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:04 compute-0 sshd-session[391039]: Invalid user user from 123.54.197.60 port 59058
Jan 31 08:52:04 compute-0 sshd-session[391039]: Connection closed by invalid user user 123.54.197.60 port 59058 [preauth]
Jan 31 08:52:04 compute-0 nova_compute[247704]: 2026-01-31 08:52:04.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:04 compute-0 nova_compute[247704]: 2026-01-31 08:52:04.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:52:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:04 compute-0 ceph-mon[74496]: pgmap v3577: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 31 08:52:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 31 08:52:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:05.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:05 compute-0 nova_compute[247704]: 2026-01-31 08:52:05.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:05 compute-0 ceph-mon[74496]: pgmap v3578: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 121 op/s
Jan 31 08:52:06 compute-0 sshd-session[391042]: Invalid user user from 123.54.197.60 port 59072
Jan 31 08:52:06 compute-0 sshd-session[391042]: Connection closed by invalid user user 123.54.197.60 port 59072 [preauth]
Jan 31 08:52:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/711020738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3662488891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Jan 31 08:52:07 compute-0 nova_compute[247704]: 2026-01-31 08:52:07.068 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:07 compute-0 nova_compute[247704]: 2026-01-31 08:52:07.233 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:52:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:52:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:07.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:07 compute-0 sshd-session[391045]: Invalid user user from 123.54.197.60 port 59082
Jan 31 08:52:07 compute-0 ceph-mon[74496]: pgmap v3579: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:08.463 160292 DEBUG eventlet.wsgi.server [-] (160292) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:08.464 160292 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: Accept: */*
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: Connection: close
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: Content-Type: text/plain
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: Host: 169.254.169.254
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: User-Agent: curl/7.84.0
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: X-Forwarded-For: 10.100.0.5
Jan 31 08:52:08 compute-0 ovn_metadata_agent[160021]: X-Ovn-Network-Id: fff4a957-b1af-42da-9138-78a53cd3f9b7 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 31 08:52:08 compute-0 sshd-session[391045]: Connection closed by invalid user user 123.54.197.60 port 59082 [preauth]
Jan 31 08:52:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 3.5 MiB/s wr, 109 op/s
Jan 31 08:52:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2680479719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:52:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3279264857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:52:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:09.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:09.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:09 compute-0 nova_compute[247704]: 2026-01-31 08:52:09.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:09 compute-0 nova_compute[247704]: 2026-01-31 08:52:09.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:52:09 compute-0 nova_compute[247704]: 2026-01-31 08:52:09.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:52:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:10 compute-0 sshd-session[391048]: Invalid user user from 123.54.197.60 port 59098
Jan 31 08:52:10 compute-0 ceph-mon[74496]: pgmap v3580: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 3.5 MiB/s wr, 109 op/s
Jan 31 08:52:10 compute-0 sudo[391050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:10 compute-0 sudo[391050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-0 sudo[391050]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-0 sshd-session[391048]: Connection closed by invalid user user 123.54.197.60 port 59098 [preauth]
Jan 31 08:52:10 compute-0 sudo[391075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:52:10 compute-0 sudo[391075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-0 sudo[391075]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-0 sudo[391100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:10 compute-0 sudo[391100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-0 sudo[391100]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-0 sudo[391126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:52:10 compute-0 sudo[391126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:52:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:52:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:52:10 compute-0 nova_compute[247704]: 2026-01-31 08:52:10.642 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:52:10 compute-0 nova_compute[247704]: 2026-01-31 08:52:10.643 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:52:10 compute-0 nova_compute[247704]: 2026-01-31 08:52:10.643 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:52:10 compute-0 nova_compute[247704]: 2026-01-31 08:52:10.643 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e2e8918c-a2d2-4778-87d0-0988977e7188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:52:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:52:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:10.680 160292 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:10.680 160292 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 2.2161374
Jan 31 08:52:10 compute-0 haproxy-metadata-proxy-fff4a957-b1af-42da-9138-78a53cd3f9b7[390806]: 10.100.0.5:47112 [31/Jan/2026:08:52:08.463] listener listener/metadata 0/0/0/2217/2217 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 31 08:52:10 compute-0 sudo[391126]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:10.796 160292 DEBUG eventlet.wsgi.server [-] (160292) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:10.798 160292 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: Accept: */*
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: Connection: close
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: Content-Length: 100
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: Content-Type: application/x-www-form-urlencoded
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: Host: 169.254.169.254
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: User-Agent: curl/7.84.0
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: X-Forwarded-For: 10.100.0.5
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: X-Ovn-Network-Id: fff4a957-b1af-42da-9138-78a53cd3f9b7
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:52:10 compute-0 ovn_metadata_agent[160021]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 31 08:52:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 08:52:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:52:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Jan 31 08:52:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:11.204 160292 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 31 08:52:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:11.204 160292 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.4066231
Jan 31 08:52:11 compute-0 haproxy-metadata-proxy-fff4a957-b1af-42da-9138-78a53cd3f9b7[390806]: 10.100.0.5:47116 [31/Jan/2026:08:52:10.795] listener listener/metadata 0/0/0/409/409 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 31 08:52:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:11.222 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:11.223 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:11.223 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 08:52:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:52:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:52:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:52:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:52:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:11.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:52:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b20b8435-edea-45c0-be5f-baafd15cbf90 does not exist
Jan 31 08:52:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d48d1e33-f2a5-4039-a518-e2e33709b3f4 does not exist
Jan 31 08:52:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 45942ee7-5c33-40d6-9fb4-483ed4da5750 does not exist
Jan 31 08:52:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:52:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:52:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:52:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:52:11 compute-0 sudo[391183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:11 compute-0 sudo[391183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:11 compute-0 sudo[391183]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:11 compute-0 sudo[391208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:52:11 compute-0 sudo[391208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:11 compute-0 sudo[391208]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:52:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:11.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:52:11 compute-0 sudo[391233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:11 compute-0 sudo[391233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:11 compute-0 sudo[391233]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:11 compute-0 sudo[391258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:52:11 compute-0 sudo[391258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: pgmap v3581: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:52:11 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:52:11 compute-0 sshd-session[391162]: Invalid user user from 123.54.197.60 port 60500
Jan 31 08:52:11 compute-0 sshd-session[391162]: Connection closed by invalid user user 123.54.197.60 port 60500 [preauth]
Jan 31 08:52:11 compute-0 podman[391321]: 2026-01-31 08:52:11.899931284 +0000 UTC m=+0.068796486 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:52:11 compute-0 podman[391327]: 2026-01-31 08:52:11.883670898 +0000 UTC m=+0.027078940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:52:12 compute-0 ovn_controller[149457]: 2026-01-31T08:52:12Z|00841|binding|INFO|Releasing lport 051c78c0-47ef-41a8-9cb5-2b35e2f8924e from this chassis (sb_readonly=0)
Jan 31 08:52:12 compute-0 nova_compute[247704]: 2026-01-31 08:52:12.063 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:12 compute-0 nova_compute[247704]: 2026-01-31 08:52:12.069 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:12 compute-0 podman[391327]: 2026-01-31 08:52:12.087371954 +0000 UTC m=+0.230779966 container create 76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 08:52:12 compute-0 systemd[1]: Started libpod-conmon-76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9.scope.
Jan 31 08:52:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:52:12 compute-0 podman[391327]: 2026-01-31 08:52:12.231684845 +0000 UTC m=+0.375092877 container init 76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:52:12 compute-0 nova_compute[247704]: 2026-01-31 08:52:12.235 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:12 compute-0 podman[391327]: 2026-01-31 08:52:12.239881074 +0000 UTC m=+0.383289086 container start 76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:52:12 compute-0 affectionate_ritchie[391356]: 167 167
Jan 31 08:52:12 compute-0 systemd[1]: libpod-76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9.scope: Deactivated successfully.
Jan 31 08:52:12 compute-0 podman[391327]: 2026-01-31 08:52:12.246950136 +0000 UTC m=+0.390358168 container attach 76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:52:12 compute-0 podman[391327]: 2026-01-31 08:52:12.247415748 +0000 UTC m=+0.390823760 container died 76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7e025a160eeae5c0bd002a989cd1433e72b553576c96d50139a75256b07d78a-merged.mount: Deactivated successfully.
Jan 31 08:52:12 compute-0 podman[391327]: 2026-01-31 08:52:12.316641722 +0000 UTC m=+0.460049734 container remove 76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_ritchie, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:52:12 compute-0 systemd[1]: libpod-conmon-76c4726a5ed0768c2e0c675a1e0851fbc8d60d0b277fe2b8eb54bedfb7f8dff9.scope: Deactivated successfully.
Jan 31 08:52:12 compute-0 podman[391384]: 2026-01-31 08:52:12.456959856 +0000 UTC m=+0.040766152 container create 29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:52:12 compute-0 systemd[1]: Started libpod-conmon-29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe.scope.
Jan 31 08:52:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5517ec194a125a7ed2b0aa6551d1b466df5bf73b8d3b2cbecb51c8933f8c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5517ec194a125a7ed2b0aa6551d1b466df5bf73b8d3b2cbecb51c8933f8c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5517ec194a125a7ed2b0aa6551d1b466df5bf73b8d3b2cbecb51c8933f8c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5517ec194a125a7ed2b0aa6551d1b466df5bf73b8d3b2cbecb51c8933f8c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cc5517ec194a125a7ed2b0aa6551d1b466df5bf73b8d3b2cbecb51c8933f8c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:12 compute-0 podman[391384]: 2026-01-31 08:52:12.43652911 +0000 UTC m=+0.020335426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:52:12 compute-0 podman[391384]: 2026-01-31 08:52:12.54010126 +0000 UTC m=+0.123907586 container init 29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:52:12 compute-0 podman[391384]: 2026-01-31 08:52:12.548541715 +0000 UTC m=+0.132348011 container start 29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:52:12 compute-0 podman[391384]: 2026-01-31 08:52:12.551876886 +0000 UTC m=+0.135683202 container attach 29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dhawan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:52:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 425 KiB/s wr, 17 op/s
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.102 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "e2e8918c-a2d2-4778-87d0-0988977e7188" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.105 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.105 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.105 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.106 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.107 247708 INFO nova.compute.manager [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Terminating instance
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.108 247708 DEBUG nova.compute.manager [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:52:13 compute-0 kernel: tape568c850-18 (unregistering): left promiscuous mode
Jan 31 08:52:13 compute-0 NetworkManager[49108]: <info>  [1769849533.1810] device (tape568c850-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:52:13 compute-0 sshd-session[391352]: Invalid user user from 123.54.197.60 port 60516
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.190 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:13 compute-0 ovn_controller[149457]: 2026-01-31T08:52:13Z|00842|binding|INFO|Releasing lport e568c850-1847-4743-bc28-5e4dce2179a9 from this chassis (sb_readonly=0)
Jan 31 08:52:13 compute-0 ovn_controller[149457]: 2026-01-31T08:52:13Z|00843|binding|INFO|Setting lport e568c850-1847-4743-bc28-5e4dce2179a9 down in Southbound
Jan 31 08:52:13 compute-0 ovn_controller[149457]: 2026-01-31T08:52:13Z|00844|binding|INFO|Removing iface tape568c850-18 ovn-installed in OVS
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.193 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.199 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.205 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:4c:dd 10.100.0.5'], port_security=['fa:16:3e:fa:4c:dd 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e2e8918c-a2d2-4778-87d0-0988977e7188', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fff4a957-b1af-42da-9138-78a53cd3f9b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9141737aac2e46fca7bc57616fd70c44', 'neutron:revision_number': '4', 'neutron:security_group_ids': '861cf335-10f6-4228-a2b0-e654edc6a8f0 e5304a9e-f9c3-48a7-be8e-a58f025dc869', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbf56355-1cb0-4797-b3ef-475b4d16216d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=e568c850-1847-4743-bc28-5e4dce2179a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.206 160028 INFO neutron.agent.ovn.metadata.agent [-] Port e568c850-1847-4743-bc28-5e4dce2179a9 in datapath fff4a957-b1af-42da-9138-78a53cd3f9b7 unbound from our chassis
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.207 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fff4a957-b1af-42da-9138-78a53cd3f9b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.212 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[72dfd336-d6ec-432e-81cc-c1145ba07941]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.213 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7 namespace which is not needed anymore
Jan 31 08:52:13 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c7.scope: Deactivated successfully.
Jan 31 08:52:13 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c7.scope: Consumed 14.397s CPU time.
Jan 31 08:52:13 compute-0 systemd-machined[214448]: Machine qemu-88-instance-000000c7 terminated.
Jan 31 08:52:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:13.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:13 compute-0 neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7[390796]: [NOTICE]   (390800) : haproxy version is 2.8.14-c23fe91
Jan 31 08:52:13 compute-0 neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7[390796]: [NOTICE]   (390800) : path to executable is /usr/sbin/haproxy
Jan 31 08:52:13 compute-0 neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7[390796]: [WARNING]  (390800) : Exiting Master process...
Jan 31 08:52:13 compute-0 neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7[390796]: [ALERT]    (390800) : Current worker (390806) exited with code 143 (Terminated)
Jan 31 08:52:13 compute-0 neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7[390796]: [WARNING]  (390800) : All workers exited. Exiting... (0)
Jan 31 08:52:13 compute-0 systemd[1]: libpod-de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2.scope: Deactivated successfully.
Jan 31 08:52:13 compute-0 podman[391436]: 2026-01-31 08:52:13.339583613 +0000 UTC m=+0.051238728 container died de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:52:13 compute-0 confident_dhawan[391400]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:52:13 compute-0 confident_dhawan[391400]: --> relative data size: 1.0
Jan 31 08:52:13 compute-0 confident_dhawan[391400]: --> All data devices are unavailable
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.347 247708 INFO nova.virt.libvirt.driver [-] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Instance destroyed successfully.
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.347 247708 DEBUG nova.objects.instance [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lazy-loading 'resources' on Instance uuid e2e8918c-a2d2-4778-87d0-0988977e7188 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.364 247708 DEBUG nova.virt.libvirt.vif [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:51:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-151715943',display_name='tempest-TestServerBasicOps-server-151715943',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-151715943',id=199,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGqw4bSBAebeRccUYRPmYE5U96xntvEajFi9bMkUSrh/sJnHOg/rE8lH50fMxtlE9t+djC4kjKRQ2i5sqfQ+G7Z27M4XKjPpLvlbSo2rEMSM1fGRXMlNZzQWpKs9h7S9Yw==',key_name='tempest-TestServerBasicOps-529859631',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9141737aac2e46fca7bc57616fd70c44',ramdisk_id='',reservation_id='r-arga7wb9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1918636746',owner_user_name='tempest-TestServerBasicOps-1918636746-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:52:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='76212a9fc119475bb419a1bdb1e55e2e',uuid=e2e8918c-a2d2-4778-87d0-0988977e7188,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": 
"fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.365 247708 DEBUG nova.network.os_vif_util [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Converting VIF {"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.366 247708 DEBUG nova.network.os_vif_util [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:4c:dd,bridge_name='br-int',has_traffic_filtering=True,id=e568c850-1847-4743-bc28-5e4dce2179a9,network=Network(fff4a957-b1af-42da-9138-78a53cd3f9b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape568c850-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.366 247708 DEBUG os_vif [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:4c:dd,bridge_name='br-int',has_traffic_filtering=True,id=e568c850-1847-4743-bc28-5e4dce2179a9,network=Network(fff4a957-b1af-42da-9138-78a53cd3f9b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape568c850-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.370 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape568c850-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.373 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.376 247708 INFO os_vif [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:4c:dd,bridge_name='br-int',has_traffic_filtering=True,id=e568c850-1847-4743-bc28-5e4dce2179a9,network=Network(fff4a957-b1af-42da-9138-78a53cd3f9b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape568c850-18')
Jan 31 08:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2-userdata-shm.mount: Deactivated successfully.
Jan 31 08:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8b6dd7faffceebca376503b679f6d15c9af4d74c60dad04b2f3fd345be0924f-merged.mount: Deactivated successfully.
Jan 31 08:52:13 compute-0 systemd[1]: libpod-29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe.scope: Deactivated successfully.
Jan 31 08:52:13 compute-0 podman[391384]: 2026-01-31 08:52:13.393031753 +0000 UTC m=+0.976838069 container died 29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:52:13 compute-0 sshd-session[391352]: Connection closed by invalid user user 123.54.197.60 port 60516 [preauth]
Jan 31 08:52:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:52:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:13.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:52:13 compute-0 podman[391436]: 2026-01-31 08:52:13.465279732 +0000 UTC m=+0.176934847 container cleanup de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:52:13 compute-0 systemd[1]: libpod-conmon-de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2.scope: Deactivated successfully.
Jan 31 08:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cc5517ec194a125a7ed2b0aa6551d1b466df5bf73b8d3b2cbecb51c8933f8c5-merged.mount: Deactivated successfully.
Jan 31 08:52:13 compute-0 podman[391384]: 2026-01-31 08:52:13.545296709 +0000 UTC m=+1.129102995 container remove 29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dhawan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:52:13 compute-0 systemd[1]: libpod-conmon-29b5326e23e07213c9e7722ab701758033f3e0a9fbabfbee28f97a5ca95a8cfe.scope: Deactivated successfully.
Jan 31 08:52:13 compute-0 podman[391512]: 2026-01-31 08:52:13.564506315 +0000 UTC m=+0.074472103 container remove de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.571 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc022f3-06dd-47c2-a677-28e365a19a38]: (4, ('Sat Jan 31 08:52:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7 (de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2)\nde3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2\nSat Jan 31 08:52:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7 (de3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2)\nde3461c38b43570db1cc624b939d905cd1b5ede15c5518344f8b50e55d5ce1a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:13 compute-0 sudo[391258]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.573 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[9c809a08-c9f8-4c6b-9747-f2efe1bf97e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.574 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfff4a957-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:13 compute-0 kernel: tapfff4a957-b0: left promiscuous mode
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.613 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.618 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.618 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4a93e831-d3d5-4e01-b8bf-a88029a90375]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.636 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[736db72c-8710-459c-9499-ee9c7d15bd78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.638 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7d3d04-7cd0-4563-b929-e3f22ff02454]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.650 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e8068fce-a1c8-4098-b0a0-0b12ea770c62]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 973714, 'reachable_time': 44462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391547, 'error': None, 'target': 'ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.653 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fff4a957-b1af-42da-9138-78a53cd3f9b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:52:13 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:13.653 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[55bea96f-63db-4149-a705-9ba2baa46b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:52:13 compute-0 systemd[1]: run-netns-ovnmeta\x2dfff4a957\x2db1af\x2d42da\x2d9138\x2d78a53cd3f9b7.mount: Deactivated successfully.
Jan 31 08:52:13 compute-0 sudo[391526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:13 compute-0 sudo[391526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:13 compute-0 sudo[391526]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:13 compute-0 sudo[391554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:52:13 compute-0 sudo[391554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:13 compute-0 sudo[391554]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:13 compute-0 sudo[391581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:13 compute-0 sudo[391581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:13 compute-0 sudo[391581]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:13 compute-0 sudo[391607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:52:13 compute-0 sudo[391607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.943 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Updating instance_info_cache with network_info: [{"id": "e568c850-1847-4743-bc28-5e4dce2179a9", "address": "fa:16:3e:fa:4c:dd", "network": {"id": "fff4a957-b1af-42da-9138-78a53cd3f9b7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-215524522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9141737aac2e46fca7bc57616fd70c44", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape568c850-18", "ovs_interfaceid": "e568c850-1847-4743-bc28-5e4dce2179a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.948 247708 DEBUG nova.compute.manager [req-7fba3330-5977-4abc-a981-0070f59fc219 req-a95c53fa-89c7-4c37-b25a-06e579b1a5d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received event network-vif-unplugged-e568c850-1847-4743-bc28-5e4dce2179a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.948 247708 DEBUG oslo_concurrency.lockutils [req-7fba3330-5977-4abc-a981-0070f59fc219 req-a95c53fa-89c7-4c37-b25a-06e579b1a5d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.948 247708 DEBUG oslo_concurrency.lockutils [req-7fba3330-5977-4abc-a981-0070f59fc219 req-a95c53fa-89c7-4c37-b25a-06e579b1a5d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.949 247708 DEBUG oslo_concurrency.lockutils [req-7fba3330-5977-4abc-a981-0070f59fc219 req-a95c53fa-89c7-4c37-b25a-06e579b1a5d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.950 247708 DEBUG nova.compute.manager [req-7fba3330-5977-4abc-a981-0070f59fc219 req-a95c53fa-89c7-4c37-b25a-06e579b1a5d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] No waiting events found dispatching network-vif-unplugged-e568c850-1847-4743-bc28-5e4dce2179a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.950 247708 DEBUG nova.compute.manager [req-7fba3330-5977-4abc-a981-0070f59fc219 req-a95c53fa-89c7-4c37-b25a-06e579b1a5d9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received event network-vif-unplugged-e568c850-1847-4743-bc28-5e4dce2179a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.972 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-e2e8918c-a2d2-4778-87d0-0988977e7188" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.972 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.973 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.983 247708 INFO nova.virt.libvirt.driver [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Deleting instance files /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188_del
Jan 31 08:52:13 compute-0 nova_compute[247704]: 2026-01-31 08:52:13.984 247708 INFO nova.virt.libvirt.driver [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Deletion of /var/lib/nova/instances/e2e8918c-a2d2-4778-87d0-0988977e7188_del complete
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.022 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.023 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.023 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.023 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.023 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:14 compute-0 ceph-mon[74496]: pgmap v3582: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 425 KiB/s wr, 17 op/s
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.066 247708 INFO nova.compute.manager [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.067 247708 DEBUG oslo.service.loopingcall [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.067 247708 DEBUG nova.compute.manager [-] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.067 247708 DEBUG nova.network.neutron [-] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:52:14 compute-0 podman[391674]: 2026-01-31 08:52:14.116860026 +0000 UTC m=+0.036837947 container create ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:52:14 compute-0 systemd[1]: Started libpod-conmon-ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b.scope.
Jan 31 08:52:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:52:14 compute-0 podman[391674]: 2026-01-31 08:52:14.178737941 +0000 UTC m=+0.098715892 container init ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:52:14 compute-0 podman[391674]: 2026-01-31 08:52:14.18567763 +0000 UTC m=+0.105655551 container start ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:52:14 compute-0 strange_montalcini[391708]: 167 167
Jan 31 08:52:14 compute-0 podman[391674]: 2026-01-31 08:52:14.1914261 +0000 UTC m=+0.111404021 container attach ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 31 08:52:14 compute-0 systemd[1]: libpod-ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b.scope: Deactivated successfully.
Jan 31 08:52:14 compute-0 podman[391674]: 2026-01-31 08:52:14.19185852 +0000 UTC m=+0.111836441 container died ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:52:14 compute-0 podman[391674]: 2026-01-31 08:52:14.101604724 +0000 UTC m=+0.021582675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:52:14 compute-0 podman[391674]: 2026-01-31 08:52:14.238343172 +0000 UTC m=+0.158321093 container remove ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:52:14 compute-0 systemd[1]: libpod-conmon-ba939618eeeeb829570af43705a706979bdc26b6d7231071489b24bb4bf4df8b.scope: Deactivated successfully.
Jan 31 08:52:14 compute-0 podman[391732]: 2026-01-31 08:52:14.367936765 +0000 UTC m=+0.041608984 container create d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf36da82d5a51bf8ec6f09d03719c0869cf5d30cabc2c5a58ed97b80fef444f1-merged.mount: Deactivated successfully.
Jan 31 08:52:14 compute-0 systemd[1]: Started libpod-conmon-d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847.scope.
Jan 31 08:52:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4c4b03a433f9c4fbf6a8aa02571e70306b7b4f21ddeb67d84461f20e8c52b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4c4b03a433f9c4fbf6a8aa02571e70306b7b4f21ddeb67d84461f20e8c52b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4c4b03a433f9c4fbf6a8aa02571e70306b7b4f21ddeb67d84461f20e8c52b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4c4b03a433f9c4fbf6a8aa02571e70306b7b4f21ddeb67d84461f20e8c52b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:14 compute-0 podman[391732]: 2026-01-31 08:52:14.349323192 +0000 UTC m=+0.022995431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:52:14 compute-0 podman[391732]: 2026-01-31 08:52:14.449399887 +0000 UTC m=+0.123072126 container init d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 08:52:14 compute-0 podman[391732]: 2026-01-31 08:52:14.455218199 +0000 UTC m=+0.128890408 container start d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:52:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:52:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006365587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:14 compute-0 podman[391732]: 2026-01-31 08:52:14.458291074 +0000 UTC m=+0.131963313 container attach d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.476 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.639 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.640 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4219MB free_disk=20.921966552734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.640 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.641 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:14 compute-0 sshd-session[391578]: Invalid user user from 123.54.197.60 port 60524
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.764 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance e2e8918c-a2d2-4778-87d0-0988977e7188 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.764 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.764 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:52:14 compute-0 nova_compute[247704]: 2026-01-31 08:52:14.822 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 201 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 27 KiB/s wr, 39 op/s
Jan 31 08:52:15 compute-0 sshd-session[391578]: Connection closed by invalid user user 123.54.197.60 port 60524 [preauth]
Jan 31 08:52:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3006365587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:52:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460422966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]: {
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:     "0": [
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:         {
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "devices": [
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "/dev/loop3"
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             ],
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "lv_name": "ceph_lv0",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "lv_size": "7511998464",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "name": "ceph_lv0",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "tags": {
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.cluster_name": "ceph",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.crush_device_class": "",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.encrypted": "0",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.osd_id": "0",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.type": "block",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:                 "ceph.vdo": "0"
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             },
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "type": "block",
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:             "vg_name": "ceph_vg0"
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:         }
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]:     ]
Jan 31 08:52:15 compute-0 nostalgic_goldberg[391749]: }
Jan 31 08:52:15 compute-0 nova_compute[247704]: 2026-01-31 08:52:15.262 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:15 compute-0 nova_compute[247704]: 2026-01-31 08:52:15.268 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:52:15 compute-0 systemd[1]: libpod-d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847.scope: Deactivated successfully.
Jan 31 08:52:15 compute-0 podman[391732]: 2026-01-31 08:52:15.279539516 +0000 UTC m=+0.953211735 container died d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:52:15 compute-0 nova_compute[247704]: 2026-01-31 08:52:15.300 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a4c4b03a433f9c4fbf6a8aa02571e70306b7b4f21ddeb67d84461f20e8c52b6-merged.mount: Deactivated successfully.
Jan 31 08:52:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:15.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:15 compute-0 podman[391732]: 2026-01-31 08:52:15.334285528 +0000 UTC m=+1.007957747 container remove d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_goldberg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 08:52:15 compute-0 nova_compute[247704]: 2026-01-31 08:52:15.336 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:52:15 compute-0 nova_compute[247704]: 2026-01-31 08:52:15.336 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:15 compute-0 systemd[1]: libpod-conmon-d16c660a25c41129c479b7c6b6cd6619eedd5b40541d63476a341f3bfb5a9847.scope: Deactivated successfully.
Jan 31 08:52:15 compute-0 sudo[391607]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:15 compute-0 sudo[391795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:15 compute-0 sudo[391795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:15 compute-0 sudo[391795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:15.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:15 compute-0 sudo[391820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:52:15 compute-0 sudo[391820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:15 compute-0 sudo[391820]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:15 compute-0 sudo[391845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:15 compute-0 sudo[391845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:15 compute-0 sudo[391845]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:15 compute-0 sudo[391870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:52:15 compute-0 sudo[391870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:15 compute-0 podman[391935]: 2026-01-31 08:52:15.845512788 +0000 UTC m=+0.037973656 container create 69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:52:15 compute-0 systemd[1]: Started libpod-conmon-69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2.scope.
Jan 31 08:52:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:52:15 compute-0 podman[391935]: 2026-01-31 08:52:15.829441536 +0000 UTC m=+0.021902424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:52:15 compute-0 podman[391935]: 2026-01-31 08:52:15.936554933 +0000 UTC m=+0.129015831 container init 69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 08:52:15 compute-0 podman[391935]: 2026-01-31 08:52:15.943683707 +0000 UTC m=+0.136144585 container start 69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:52:15 compute-0 festive_lalande[391951]: 167 167
Jan 31 08:52:15 compute-0 systemd[1]: libpod-69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2.scope: Deactivated successfully.
Jan 31 08:52:15 compute-0 podman[391935]: 2026-01-31 08:52:15.957281097 +0000 UTC m=+0.149741965 container attach 69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 08:52:15 compute-0 podman[391935]: 2026-01-31 08:52:15.957945573 +0000 UTC m=+0.150406441 container died 69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-90dcc57e6974542547f8fb7c5a2b4f030fa367634404effb196899181ccd5929-merged.mount: Deactivated successfully.
Jan 31 08:52:16 compute-0 podman[391935]: 2026-01-31 08:52:16.023258742 +0000 UTC m=+0.215719610 container remove 69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:52:16 compute-0 systemd[1]: libpod-conmon-69144d61e656339d31305392f7ed88df327b073c138f9a8abede1f170fb914e2.scope: Deactivated successfully.
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.069 247708 DEBUG nova.network.neutron [-] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:52:16 compute-0 ceph-mon[74496]: pgmap v3583: 305 pgs: 305 active+clean; 201 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 27 KiB/s wr, 39 op/s
Jan 31 08:52:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2460422966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4071357891' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.109 247708 INFO nova.compute.manager [-] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Took 2.04 seconds to deallocate network for instance.
Jan 31 08:52:16 compute-0 podman[391975]: 2026-01-31 08:52:16.152339084 +0000 UTC m=+0.040095647 container create 50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.177 247708 DEBUG nova.compute.manager [req-e568561f-16a3-47aa-8dd4-1135a2ac9d22 req-3952c24a-8cbd-494d-abfa-0a3024608d6d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received event network-vif-deleted-e568c850-1847-4743-bc28-5e4dce2179a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:52:16 compute-0 systemd[1]: Started libpod-conmon-50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038.scope.
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.204 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.205 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.213 247708 DEBUG nova.compute.manager [req-29e13de2-3c86-47ab-9160-0823ea0040ae req-17502492-7c82-4db1-86b4-d97a5dbb6de9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received event network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.214 247708 DEBUG oslo_concurrency.lockutils [req-29e13de2-3c86-47ab-9160-0823ea0040ae req-17502492-7c82-4db1-86b4-d97a5dbb6de9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.214 247708 DEBUG oslo_concurrency.lockutils [req-29e13de2-3c86-47ab-9160-0823ea0040ae req-17502492-7c82-4db1-86b4-d97a5dbb6de9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.214 247708 DEBUG oslo_concurrency.lockutils [req-29e13de2-3c86-47ab-9160-0823ea0040ae req-17502492-7c82-4db1-86b4-d97a5dbb6de9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.214 247708 DEBUG nova.compute.manager [req-29e13de2-3c86-47ab-9160-0823ea0040ae req-17502492-7c82-4db1-86b4-d97a5dbb6de9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] No waiting events found dispatching network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.215 247708 WARNING nova.compute.manager [req-29e13de2-3c86-47ab-9160-0823ea0040ae req-17502492-7c82-4db1-86b4-d97a5dbb6de9 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Received unexpected event network-vif-plugged-e568c850-1847-4743-bc28-5e4dce2179a9 for instance with vm_state deleted and task_state None.
Jan 31 08:52:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b2e89af6b844c7d4041d81f95efb4daf3f3c39d0e4f2636f38c7c10d6952ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b2e89af6b844c7d4041d81f95efb4daf3f3c39d0e4f2636f38c7c10d6952ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b2e89af6b844c7d4041d81f95efb4daf3f3c39d0e4f2636f38c7c10d6952ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b2e89af6b844c7d4041d81f95efb4daf3f3c39d0e4f2636f38c7c10d6952ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:52:16 compute-0 podman[391975]: 2026-01-31 08:52:16.13451029 +0000 UTC m=+0.022266883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:52:16 compute-0 podman[391975]: 2026-01-31 08:52:16.243335948 +0000 UTC m=+0.131092541 container init 50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:52:16 compute-0 podman[391975]: 2026-01-31 08:52:16.250276697 +0000 UTC m=+0.138033270 container start 50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 08:52:16 compute-0 podman[391975]: 2026-01-31 08:52:16.257752158 +0000 UTC m=+0.145508761 container attach 50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.277 247708 DEBUG oslo_concurrency.processutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:16 compute-0 sshd-session[391783]: Invalid user user from 123.54.197.60 port 60534
Jan 31 08:52:16 compute-0 sshd-session[391783]: Connection closed by invalid user user 123.54.197.60 port 60534 [preauth]
Jan 31 08:52:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:52:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732757244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.678 247708 DEBUG oslo_concurrency.processutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.687 247708 DEBUG nova.compute.provider_tree [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.729 247708 DEBUG nova.scheduler.client.report [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.759 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.849 247708 INFO nova.scheduler.client.report [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Deleted allocations for instance e2e8918c-a2d2-4778-87d0-0988977e7188
Jan 31 08:52:16 compute-0 nova_compute[247704]: 2026-01-31 08:52:16.942 247708 DEBUG oslo_concurrency.lockutils [None req-5f867d7c-95af-498d-8002-6acef06f7354 76212a9fc119475bb419a1bdb1e55e2e 9141737aac2e46fca7bc57616fd70c44 - - default default] Lock "e2e8918c-a2d2-4778-87d0-0988977e7188" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]: {
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]:         "osd_id": 0,
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]:         "type": "bluestore"
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]:     }
Jan 31 08:52:16 compute-0 affectionate_chebyshev[391991]: }
Jan 31 08:52:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 16 KiB/s wr, 71 op/s
Jan 31 08:52:16 compute-0 systemd[1]: libpod-50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038.scope: Deactivated successfully.
Jan 31 08:52:17 compute-0 podman[392037]: 2026-01-31 08:52:17.031647799 +0000 UTC m=+0.021059743 container died 50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:52:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-64b2e89af6b844c7d4041d81f95efb4daf3f3c39d0e4f2636f38c7c10d6952ff-merged.mount: Deactivated successfully.
Jan 31 08:52:17 compute-0 nova_compute[247704]: 2026-01-31 08:52:17.072 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:17 compute-0 podman[392037]: 2026-01-31 08:52:17.088235205 +0000 UTC m=+0.077647129 container remove 50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:52:17 compute-0 systemd[1]: libpod-conmon-50990a27ded7bfd60e284b67c9744784c456d0afabc9eac99ffc8e3e652b1038.scope: Deactivated successfully.
Jan 31 08:52:17 compute-0 sudo[391870]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:52:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2732757244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3314229693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:52:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ea070edf-a380-47c2-9793-e2e379c26ca8 does not exist
Jan 31 08:52:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 38b58fc8-d85b-4d93-92aa-c66e3c3cdf86 does not exist
Jan 31 08:52:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 25b5943e-46c2-4137-a1b3-c71a7e9f315d does not exist
Jan 31 08:52:17 compute-0 sudo[392052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:17 compute-0 sudo[392052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:17 compute-0 sudo[392052]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:17.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:17 compute-0 sudo[392077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:52:17 compute-0 sudo[392077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:17 compute-0 sudo[392077]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:17.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:17 compute-0 sshd-session[392019]: Invalid user user from 123.54.197.60 port 60540
Jan 31 08:52:17 compute-0 nova_compute[247704]: 2026-01-31 08:52:17.926 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:17 compute-0 nova_compute[247704]: 2026-01-31 08:52:17.928 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:17 compute-0 nova_compute[247704]: 2026-01-31 08:52:17.928 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:18 compute-0 nova_compute[247704]: 2026-01-31 08:52:18.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:18 compute-0 ceph-mon[74496]: pgmap v3584: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 16 KiB/s wr, 71 op/s
Jan 31 08:52:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:52:18 compute-0 nova_compute[247704]: 2026-01-31 08:52:18.372 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:18 compute-0 sshd-session[392019]: Connection closed by invalid user user 123.54.197.60 port 60540 [preauth]
Jan 31 08:52:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:19.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:19.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:52:20
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control']
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:52:20 compute-0 ceph-mon[74496]: pgmap v3585: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:20 compute-0 sshd-session[392103]: Invalid user user from 123.54.197.60 port 60546
Jan 31 08:52:20 compute-0 sshd-session[392103]: Connection closed by invalid user user 123.54.197.60 port 60546 [preauth]
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:52:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:21.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:21.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:22 compute-0 sudo[392108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:22 compute-0 sudo[392108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:22 compute-0 sudo[392108]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:22 compute-0 sudo[392133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:22 compute-0 sudo[392133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:22 compute-0 sudo[392133]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:22 compute-0 nova_compute[247704]: 2026-01-31 08:52:22.073 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:22 compute-0 sshd-session[392106]: Invalid user user from 123.54.197.60 port 52938
Jan 31 08:52:22 compute-0 ceph-mon[74496]: pgmap v3586: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:22 compute-0 sshd-session[392106]: Connection closed by invalid user user 123.54.197.60 port 52938 [preauth]
Jan 31 08:52:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:23 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 31 08:52:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:23.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:23 compute-0 nova_compute[247704]: 2026-01-31 08:52:23.374 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:23 compute-0 sshd-session[392159]: Invalid user user from 123.54.197.60 port 52954
Jan 31 08:52:23 compute-0 podman[392161]: 2026-01-31 08:52:23.959694671 +0000 UTC m=+0.122521952 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:52:24 compute-0 sshd-session[392159]: Connection closed by invalid user user 123.54.197.60 port 52954 [preauth]
Jan 31 08:52:24 compute-0 ceph-mon[74496]: pgmap v3587: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:52:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Jan 31 08:52:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:25.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:25.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:25 compute-0 sshd-session[392189]: Invalid user user from 123.54.197.60 port 52968
Jan 31 08:52:25 compute-0 sshd-session[392189]: Connection closed by invalid user user 123.54.197.60 port 52968 [preauth]
Jan 31 08:52:26 compute-0 nova_compute[247704]: 2026-01-31 08:52:26.187 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:26 compute-0 ceph-mon[74496]: pgmap v3588: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Jan 31 08:52:26 compute-0 nova_compute[247704]: 2026-01-31 08:52:26.507 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:26 compute-0 nova_compute[247704]: 2026-01-31 08:52:26.606 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 195 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Jan 31 08:52:27 compute-0 nova_compute[247704]: 2026-01-31 08:52:27.076 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:27.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:52:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:27.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:52:28 compute-0 sshd-session[392193]: Invalid user user from 123.54.197.60 port 52982
Jan 31 08:52:28 compute-0 nova_compute[247704]: 2026-01-31 08:52:28.346 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849533.3453052, e2e8918c-a2d2-4778-87d0-0988977e7188 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:52:28 compute-0 nova_compute[247704]: 2026-01-31 08:52:28.347 247708 INFO nova.compute.manager [-] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] VM Stopped (Lifecycle Event)
Jan 31 08:52:28 compute-0 ceph-mon[74496]: pgmap v3589: 305 pgs: 305 active+clean; 195 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Jan 31 08:52:28 compute-0 nova_compute[247704]: 2026-01-31 08:52:28.378 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:28 compute-0 nova_compute[247704]: 2026-01-31 08:52:28.418 247708 DEBUG nova.compute.manager [None req-ff55d215-aa0a-44cf-9945-72d050aed500 - - - - - -] [instance: e2e8918c-a2d2-4778-87d0-0988977e7188] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:52:28 compute-0 sshd-session[392193]: Connection closed by invalid user user 123.54.197.60 port 52982 [preauth]
Jan 31 08:52:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 773 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 31 08:52:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:29.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:29.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:29 compute-0 ceph-mon[74496]: pgmap v3590: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 773 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 31 08:52:29 compute-0 nova_compute[247704]: 2026-01-31 08:52:29.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:29 compute-0 nova_compute[247704]: 2026-01-31 08:52:29.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:52:29 compute-0 nova_compute[247704]: 2026-01-31 08:52:29.703 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:52:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:29 compute-0 sshd-session[392196]: Invalid user user from 123.54.197.60 port 52990
Jan 31 08:52:30 compute-0 sshd-session[392196]: Connection closed by invalid user user 123.54.197.60 port 52990 [preauth]
Jan 31 08:52:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:52:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:31.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:31 compute-0 sshd-session[392199]: Invalid user user from 123.54.197.60 port 46292
Jan 31 08:52:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 08:52:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:31.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 08:52:31 compute-0 nova_compute[247704]: 2026-01-31 08:52:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:31 compute-0 nova_compute[247704]: 2026-01-31 08:52:31.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:52:32 compute-0 ceph-mon[74496]: pgmap v3591: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:52:32 compute-0 nova_compute[247704]: 2026-01-31 08:52:32.078 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:32 compute-0 sshd-session[392199]: Connection closed by invalid user user 123.54.197.60 port 46292 [preauth]
Jan 31 08:52:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:52:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:52:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:33.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:52:33 compute-0 nova_compute[247704]: 2026-01-31 08:52:33.381 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:33.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:33 compute-0 sshd-session[392202]: Invalid user user from 123.54.197.60 port 46298
Jan 31 08:52:33 compute-0 sshd-session[392202]: Connection closed by invalid user user 123.54.197.60 port 46298 [preauth]
Jan 31 08:52:34 compute-0 ceph-mon[74496]: pgmap v3592: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:52:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:52:35 compute-0 sshd-session[392204]: Invalid user user from 123.54.197.60 port 46312
Jan 31 08:52:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:35.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:35 compute-0 sshd-session[392204]: Connection closed by invalid user user 123.54.197.60 port 46312 [preauth]
Jan 31 08:52:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:35.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:52:36 compute-0 ceph-mon[74496]: pgmap v3593: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:52:36 compute-0 sshd-session[392207]: Invalid user user from 123.54.197.60 port 46314
Jan 31 08:52:36 compute-0 sshd-session[392207]: Connection closed by invalid user user 123.54.197.60 port 46314 [preauth]
Jan 31 08:52:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 277 KiB/s rd, 1.1 MiB/s wr, 37 op/s
Jan 31 08:52:37 compute-0 nova_compute[247704]: 2026-01-31 08:52:37.080 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:37.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:37.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:37 compute-0 ceph-mon[74496]: pgmap v3594: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 277 KiB/s rd, 1.1 MiB/s wr, 37 op/s
Jan 31 08:52:38 compute-0 nova_compute[247704]: 2026-01-31 08:52:38.384 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 482 KiB/s wr, 8 op/s
Jan 31 08:52:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:39.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:39.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:39 compute-0 sshd-session[392210]: Invalid user user from 123.54.197.60 port 46316
Jan 31 08:52:40 compute-0 sshd-session[392210]: Connection closed by invalid user user 123.54.197.60 port 46316 [preauth]
Jan 31 08:52:40 compute-0 ceph-mon[74496]: pgmap v3595: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 482 KiB/s wr, 8 op/s
Jan 31 08:52:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s rd, 30 KiB/s wr, 5 op/s
Jan 31 08:52:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:41.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Jan 31 08:52:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Jan 31 08:52:41 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Jan 31 08:52:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:41.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:41 compute-0 ceph-mon[74496]: pgmap v3596: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.3 KiB/s rd, 30 KiB/s wr, 5 op/s
Jan 31 08:52:41 compute-0 ceph-mon[74496]: osdmap e385: 3 total, 3 up, 3 in
Jan 31 08:52:42 compute-0 nova_compute[247704]: 2026-01-31 08:52:42.147 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:42 compute-0 sshd-session[392214]: Invalid user user from 123.54.197.60 port 52100
Jan 31 08:52:42 compute-0 sudo[392216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:42 compute-0 sudo[392216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:42 compute-0 sudo[392216]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:42 compute-0 podman[392224]: 2026-01-31 08:52:42.241125478 +0000 UTC m=+0.055777398 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:52:42 compute-0 sudo[392251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:52:42 compute-0 sudo[392251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:52:42 compute-0 sudo[392251]: pam_unix(sudo:session): session closed for user root
Jan 31 08:52:42 compute-0 sshd-session[392214]: Connection closed by invalid user user 123.54.197.60 port 52100 [preauth]
Jan 31 08:52:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 34 KiB/s wr, 7 op/s
Jan 31 08:52:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:43.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:43 compute-0 nova_compute[247704]: 2026-01-31 08:52:43.406 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:43.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:43 compute-0 ceph-mon[74496]: pgmap v3598: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 34 KiB/s wr, 7 op/s
Jan 31 08:52:44 compute-0 sshd-session[392287]: Invalid user user from 123.54.197.60 port 52116
Jan 31 08:52:44 compute-0 sshd-session[392287]: Connection closed by invalid user user 123.54.197.60 port 52116 [preauth]
Jan 31 08:52:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 231 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 20 op/s
Jan 31 08:52:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Jan 31 08:52:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Jan 31 08:52:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Jan 31 08:52:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:45.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:45.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:46 compute-0 sshd-session[392291]: Invalid user user from 123.54.197.60 port 52132
Jan 31 08:52:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Jan 31 08:52:46 compute-0 ceph-mon[74496]: pgmap v3599: 305 pgs: 305 active+clean; 231 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 20 op/s
Jan 31 08:52:46 compute-0 ceph-mon[74496]: osdmap e386: 3 total, 3 up, 3 in
Jan 31 08:52:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Jan 31 08:52:46 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Jan 31 08:52:46 compute-0 sshd-session[392291]: Connection closed by invalid user user 123.54.197.60 port 52132 [preauth]
Jan 31 08:52:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.1 MiB/s wr, 139 op/s
Jan 31 08:52:46 compute-0 nova_compute[247704]: 2026-01-31 08:52:46.998 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:46 compute-0 nova_compute[247704]: 2026-01-31 08:52:46.998 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:47 compute-0 ceph-mon[74496]: osdmap e387: 3 total, 3 up, 3 in
Jan 31 08:52:47 compute-0 nova_compute[247704]: 2026-01-31 08:52:47.152 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:47 compute-0 nova_compute[247704]: 2026-01-31 08:52:47.279 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:52:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:47.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:47 compute-0 sshd-session[392294]: Invalid user user from 123.54.197.60 port 52148
Jan 31 08:52:47 compute-0 nova_compute[247704]: 2026-01-31 08:52:47.602 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:47 compute-0 nova_compute[247704]: 2026-01-31 08:52:47.603 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:47 compute-0 nova_compute[247704]: 2026-01-31 08:52:47.615 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:52:47 compute-0 nova_compute[247704]: 2026-01-31 08:52:47.616 247708 INFO nova.compute.claims [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:52:47 compute-0 sshd-session[392294]: Connection closed by invalid user user 123.54.197.60 port 52148 [preauth]
Jan 31 08:52:48 compute-0 ceph-mon[74496]: pgmap v3602: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.1 MiB/s wr, 139 op/s
Jan 31 08:52:48 compute-0 nova_compute[247704]: 2026-01-31 08:52:48.409 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:48 compute-0 nova_compute[247704]: 2026-01-31 08:52:48.977 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 274 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 115 op/s
Jan 31 08:52:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.003000071s ======
Jan 31 08:52:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:49.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000071s
Jan 31 08:52:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:52:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394808446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:49 compute-0 nova_compute[247704]: 2026-01-31 08:52:49.441 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:49 compute-0 nova_compute[247704]: 2026-01-31 08:52:49.446 247708 DEBUG nova.compute.provider_tree [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:52:49 compute-0 nova_compute[247704]: 2026-01-31 08:52:49.504 247708 DEBUG nova.scheduler.client.report [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:52:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:49.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:52:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:52:50 compute-0 sshd-session[392298]: Invalid user user from 123.54.197.60 port 52162
Jan 31 08:52:50 compute-0 ceph-mon[74496]: pgmap v3603: 305 pgs: 305 active+clean; 274 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 115 op/s
Jan 31 08:52:50 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3394808446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:52:50 compute-0 nova_compute[247704]: 2026-01-31 08:52:50.168 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:50 compute-0 nova_compute[247704]: 2026-01-31 08:52:50.169 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:52:50 compute-0 sshd-session[392298]: Connection closed by invalid user user 123.54.197.60 port 52162 [preauth]
Jan 31 08:52:50 compute-0 nova_compute[247704]: 2026-01-31 08:52:50.501 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:52:50 compute-0 nova_compute[247704]: 2026-01-31 08:52:50.501 247708 DEBUG nova.network.neutron [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:52:50 compute-0 nova_compute[247704]: 2026-01-31 08:52:50.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:50 compute-0 nova_compute[247704]: 2026-01-31 08:52:50.774 247708 INFO nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:52:50 compute-0 nova_compute[247704]: 2026-01-31 08:52:50.893 247708 DEBUG nova.policy [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:52:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Jan 31 08:52:51 compute-0 nova_compute[247704]: 2026-01-31 08:52:51.185 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:52:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:51.367 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:52:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:51.368 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:52:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:51.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:51 compute-0 nova_compute[247704]: 2026-01-31 08:52:51.373 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:51.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.154 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.163 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.165 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.166 247708 INFO nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Creating image(s)
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.207 247708 DEBUG nova.storage.rbd_utils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.249 247708 DEBUG nova.storage.rbd_utils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.291 247708 DEBUG nova.storage.rbd_utils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.297 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:52 compute-0 sshd-session[392322]: Invalid user user from 123.54.197.60 port 50168
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.387 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.389 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.390 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.390 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:52 compute-0 ceph-mon[74496]: pgmap v3604: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.479 247708 DEBUG nova.storage.rbd_utils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.486 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:52:52 compute-0 sshd-session[392322]: Connection closed by invalid user user 123.54.197.60 port 50168 [preauth]
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.798 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.811 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:52:52 compute-0 nova_compute[247704]: 2026-01-31 08:52:52.891 247708 DEBUG nova.storage.rbd_utils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:52:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.2 MiB/s wr, 112 op/s
Jan 31 08:52:53 compute-0 nova_compute[247704]: 2026-01-31 08:52:53.002 247708 DEBUG nova.objects.instance [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid d1eabae5-8bce-49d0-98bd-b5ce98f9a5de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:52:53 compute-0 nova_compute[247704]: 2026-01-31 08:52:53.056 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:52:53 compute-0 nova_compute[247704]: 2026-01-31 08:52:53.057 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Ensure instance console log exists: /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:52:53 compute-0 nova_compute[247704]: 2026-01-31 08:52:53.058 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:52:53 compute-0 nova_compute[247704]: 2026-01-31 08:52:53.058 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:52:53 compute-0 nova_compute[247704]: 2026-01-31 08:52:53.059 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:52:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:52:53.371 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:52:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:53.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:53 compute-0 nova_compute[247704]: 2026-01-31 08:52:53.412 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:53 compute-0 ceph-mon[74496]: pgmap v3605: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.2 MiB/s wr, 112 op/s
Jan 31 08:52:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/121000242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:52:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/121000242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:52:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:53.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:54 compute-0 nova_compute[247704]: 2026-01-31 08:52:54.751 247708 DEBUG nova.network.neutron [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Successfully created port: 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:52:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Jan 31 08:52:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Jan 31 08:52:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Jan 31 08:52:54 compute-0 sshd-session[392491]: Invalid user user from 123.54.197.60 port 50180
Jan 31 08:52:54 compute-0 podman[392494]: 2026-01-31 08:52:54.920498875 +0000 UTC m=+0.093806253 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:52:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 41 op/s
Jan 31 08:52:55 compute-0 sshd-session[392491]: Connection closed by invalid user user 123.54.197.60 port 50180 [preauth]
Jan 31 08:52:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:55.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:52:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:55.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:52:55 compute-0 ceph-mon[74496]: osdmap e388: 3 total, 3 up, 3 in
Jan 31 08:52:55 compute-0 ceph-mon[74496]: pgmap v3607: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 41 op/s
Jan 31 08:52:56 compute-0 sshd-session[392521]: Invalid user user from 123.54.197.60 port 50196
Jan 31 08:52:56 compute-0 sshd-session[392521]: Connection closed by invalid user user 123.54.197.60 port 50196 [preauth]
Jan 31 08:52:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 52 op/s
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.156 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.264 247708 DEBUG nova.network.neutron [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Successfully updated port: 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.309 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.309 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.310 247708 DEBUG nova.network.neutron [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:52:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:52:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:57.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:52:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:57.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.521 247708 DEBUG nova.compute.manager [req-5342049e-7ae4-411d-9232-b438ec0b0b53 req-57d77f14-9941-4b29-8f55-13e68f18b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-changed-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.522 247708 DEBUG nova.compute.manager [req-5342049e-7ae4-411d-9232-b438ec0b0b53 req-57d77f14-9941-4b29-8f55-13e68f18b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Refreshing instance network info cache due to event network-changed-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.522 247708 DEBUG oslo_concurrency.lockutils [req-5342049e-7ae4-411d-9232-b438ec0b0b53 req-57d77f14-9941-4b29-8f55-13e68f18b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:52:57 compute-0 nova_compute[247704]: 2026-01-31 08:52:57.670 247708 DEBUG nova.network.neutron [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:52:58 compute-0 ceph-mon[74496]: pgmap v3608: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 52 op/s
Jan 31 08:52:58 compute-0 sshd-session[392524]: Invalid user user from 123.54.197.60 port 50208
Jan 31 08:52:58 compute-0 sshd-session[392524]: Connection closed by invalid user user 123.54.197.60 port 50208 [preauth]
Jan 31 08:52:58 compute-0 nova_compute[247704]: 2026-01-31 08:52:58.415 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:52:58 compute-0 nova_compute[247704]: 2026-01-31 08:52:58.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:52:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 31 08:52:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:59.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:52:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:52:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:59.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:52:59 compute-0 nova_compute[247704]: 2026-01-31 08:52:59.689 247708 DEBUG nova.network.neutron [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updating instance_info_cache with network_info: [{"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:52:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:52:59 compute-0 sshd-session[392527]: Invalid user user from 123.54.197.60 port 50222
Jan 31 08:53:00 compute-0 sshd-session[392527]: Connection closed by invalid user user 123.54.197.60 port 50222 [preauth]
Jan 31 08:53:00 compute-0 ceph-mon[74496]: pgmap v3609: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 31 08:53:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3004284897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.311 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.312 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Instance network_info: |[{"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.313 247708 DEBUG oslo_concurrency.lockutils [req-5342049e-7ae4-411d-9232-b438ec0b0b53 req-57d77f14-9941-4b29-8f55-13e68f18b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.314 247708 DEBUG nova.network.neutron [req-5342049e-7ae4-411d-9232-b438ec0b0b53 req-57d77f14-9941-4b29-8f55-13e68f18b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Refreshing network info cache for port 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.319 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Start _get_guest_xml network_info=[{"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.327 247708 WARNING nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.333 247708 DEBUG nova.virt.libvirt.host [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.334 247708 DEBUG nova.virt.libvirt.host [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.340 247708 DEBUG nova.virt.libvirt.host [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.340 247708 DEBUG nova.virt.libvirt.host [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.342 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.342 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.343 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.343 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.344 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.344 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.344 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.344 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.345 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.345 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.345 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.346 247708 DEBUG nova.virt.hardware [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.349 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:53:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028781581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.839 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.872 247708 DEBUG nova.storage.rbd_utils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:53:00 compute-0 nova_compute[247704]: 2026-01-31 08:53:00.878 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 31 08:53:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2028781581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:53:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139870674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.334 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.336 247708 DEBUG nova.virt.libvirt.vif [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:52:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1760441557',display_name='tempest-TestNetworkBasicOps-server-1760441557',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1760441557',id=201,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6Z01fJYkNm70ZQMs14pyH7TFBbmUxC/U4c2fOKOYLhx96B5eavINTvKYBJunlvKZ2TLiPoDy1LSL4O1mP5l+1spaDw3Xi5q2QdWKbIGlId67kn2ZWR46Izg5To1jn3BA==',key_name='tempest-TestNetworkBasicOps-1035394344',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-7pn7grxe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:52:51Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=d1eabae5-8bce-49d0-98bd-b5ce98f9a5de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.337 247708 DEBUG nova.network.os_vif_util [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.338 247708 DEBUG nova.network.os_vif_util [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:00:a8,bridge_name='br-int',has_traffic_filtering=True,id=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0,network=Network(8393a77f-db62-4b73-9860-21350fe1e354),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44e2cb6c-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.339 247708 DEBUG nova.objects.instance [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid d1eabae5-8bce-49d0-98bd-b5ce98f9a5de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.365 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <uuid>d1eabae5-8bce-49d0-98bd-b5ce98f9a5de</uuid>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <name>instance-000000c9</name>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <nova:name>tempest-TestNetworkBasicOps-server-1760441557</nova:name>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:53:00</nova:creationTime>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <nova:port uuid="44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0">
Jan 31 08:53:01 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <system>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <entry name="serial">d1eabae5-8bce-49d0-98bd-b5ce98f9a5de</entry>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <entry name="uuid">d1eabae5-8bce-49d0-98bd-b5ce98f9a5de</entry>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </system>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <os>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   </os>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <features>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   </features>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk">
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       </source>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk.config">
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       </source>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:53:01 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:f6:00:a8"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <target dev="tap44e2cb6c-c4"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de/console.log" append="off"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <video>
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </video>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:53:01 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:53:01 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:53:01 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:53:01 compute-0 nova_compute[247704]: </domain>
Jan 31 08:53:01 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.367 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Preparing to wait for external event network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.367 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.367 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.368 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.368 247708 DEBUG nova.virt.libvirt.vif [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:52:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1760441557',display_name='tempest-TestNetworkBasicOps-server-1760441557',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1760441557',id=201,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6Z01fJYkNm70ZQMs14pyH7TFBbmUxC/U4c2fOKOYLhx96B5eavINTvKYBJunlvKZ2TLiPoDy1LSL4O1mP5l+1spaDw3Xi5q2QdWKbIGlId67kn2ZWR46Izg5To1jn3BA==',key_name='tempest-TestNetworkBasicOps-1035394344',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-7pn7grxe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:52:51Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=d1eabae5-8bce-49d0-98bd-b5ce98f9a5de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.369 247708 DEBUG nova.network.os_vif_util [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.369 247708 DEBUG nova.network.os_vif_util [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:00:a8,bridge_name='br-int',has_traffic_filtering=True,id=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0,network=Network(8393a77f-db62-4b73-9860-21350fe1e354),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44e2cb6c-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.369 247708 DEBUG os_vif [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:00:a8,bridge_name='br-int',has_traffic_filtering=True,id=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0,network=Network(8393a77f-db62-4b73-9860-21350fe1e354),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44e2cb6c-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.370 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.370 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.371 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.374 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.374 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44e2cb6c-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.375 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44e2cb6c-c4, col_values=(('external_ids', {'iface-id': '44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:00:a8', 'vm-uuid': 'd1eabae5-8bce-49d0-98bd-b5ce98f9a5de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.376 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:01 compute-0 NetworkManager[49108]: <info>  [1769849581.3781] manager: (tap44e2cb6c-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.379 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.384 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.385 247708 INFO os_vif [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:00:a8,bridge_name='br-int',has_traffic_filtering=True,id=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0,network=Network(8393a77f-db62-4b73-9860-21350fe1e354),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44e2cb6c-c4')
Jan 31 08:53:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:01.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:01 compute-0 sshd-session[392529]: Invalid user user from 123.54.197.60 port 42972
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.472 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.473 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.473 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:f6:00:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.474 247708 INFO nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Using config drive
Jan 31 08:53:01 compute-0 nova_compute[247704]: 2026-01-31 08:53:01.499 247708 DEBUG nova.storage.rbd_utils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:53:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:01.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:01 compute-0 sshd-session[392529]: Connection closed by invalid user user 123.54.197.60 port 42972 [preauth]
Jan 31 08:53:02 compute-0 ceph-mon[74496]: pgmap v3610: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 31 08:53:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3139870674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:02 compute-0 nova_compute[247704]: 2026-01-31 08:53:02.158 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:02 compute-0 sudo[392616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:02 compute-0 sudo[392616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:02 compute-0 sudo[392616]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:02 compute-0 sudo[392642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:02 compute-0 sudo[392642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:02 compute-0 sudo[392642]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:02 compute-0 sshd-session[392614]: Invalid user user from 123.54.197.60 port 42982
Jan 31 08:53:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.169 247708 INFO nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Creating config drive at /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de/disk.config
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.174 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp28uwbt_4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:03 compute-0 sshd-session[392614]: Connection closed by invalid user user 123.54.197.60 port 42982 [preauth]
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.307 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp28uwbt_4" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.356 247708 DEBUG nova.storage.rbd_utils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.362 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de/disk.config d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:03.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:03.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.536 247708 DEBUG oslo_concurrency.processutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de/disk.config d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.537 247708 INFO nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Deleting local config drive /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de/disk.config because it was imported into RBD.
Jan 31 08:53:03 compute-0 kernel: tap44e2cb6c-c4: entered promiscuous mode
Jan 31 08:53:03 compute-0 NetworkManager[49108]: <info>  [1769849583.5925] manager: (tap44e2cb6c-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Jan 31 08:53:03 compute-0 ovn_controller[149457]: 2026-01-31T08:53:03Z|00845|binding|INFO|Claiming lport 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 for this chassis.
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.591 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:03 compute-0 ovn_controller[149457]: 2026-01-31T08:53:03Z|00846|binding|INFO|44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0: Claiming fa:16:3e:f6:00:a8 10.100.0.4
Jan 31 08:53:03 compute-0 systemd-udevd[392722]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.624 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:00:a8 10.100.0.4'], port_security=['fa:16:3e:f6:00:a8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd1eabae5-8bce-49d0-98bd-b5ce98f9a5de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8393a77f-db62-4b73-9860-21350fe1e354', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c5fa2e4-0deb-474b-8ddd-80ede5beb78d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5aa0dfdc-efed-4f7a-bc4e-a9890fbf83ea, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.625 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:03 compute-0 systemd-machined[214448]: New machine qemu-89-instance-000000c9.
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.626 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 in datapath 8393a77f-db62-4b73-9860-21350fe1e354 bound to our chassis
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.628 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8393a77f-db62-4b73-9860-21350fe1e354
Jan 31 08:53:03 compute-0 ovn_controller[149457]: 2026-01-31T08:53:03Z|00847|binding|INFO|Setting lport 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 ovn-installed in OVS
Jan 31 08:53:03 compute-0 ovn_controller[149457]: 2026-01-31T08:53:03Z|00848|binding|INFO|Setting lport 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 up in Southbound
Jan 31 08:53:03 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-000000c9.
Jan 31 08:53:03 compute-0 NetworkManager[49108]: <info>  [1769849583.6364] device (tap44e2cb6c-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:53:03 compute-0 NetworkManager[49108]: <info>  [1769849583.6386] device (tap44e2cb6c-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.640 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.640 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6cc25b-70e0-4b12-a59e-3ad1094db555]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.641 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8393a77f-d1 in ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.645 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8393a77f-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.645 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0e36c4-d875-4bd9-a157-50a9aeb08cdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.646 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[44526a28-0c84-4a05-a34a-f541a6854c0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.654 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[37d5f535-af08-4b92-a1ed-a8e1f946028f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.664 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3b21e39a-b4be-4ad9-81c2-716afcc1be40]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.690 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc1bbac-9709-46a1-a8a9-6b18962b6d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.695 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[92283d3b-037e-4b3c-a1c3-8878835c59d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 NetworkManager[49108]: <info>  [1769849583.6966] manager: (tap8393a77f-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.724 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c5caf8-4166-4495-99cb-c962f994658a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.728 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccda95b-4b43-45a6-a5a1-051ade879084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 NetworkManager[49108]: <info>  [1769849583.7446] device (tap8393a77f-d0): carrier: link connected
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.750 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[62cc02f2-47bc-440e-8e69-42133ba00f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.767 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4a2c18-55b8-4c7c-ac20-b18ef37b103c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8393a77f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:b5:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 982636, 'reachable_time': 34401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392755, 'error': None, 'target': 'ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.784 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2d27dd-b4e2-42e8-a1b4-76414d4c9bc2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe57:b51a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 982636, 'tstamp': 982636}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392756, 'error': None, 'target': 'ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.801 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[edb02e36-8503-47e9-b214-fe232db59d93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8393a77f-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:57:b5:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 250], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 982636, 'reachable_time': 34401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392757, 'error': None, 'target': 'ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.834 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[09503721-8111-4f62-97a2-7c1d95016d0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.888 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[34dace7f-3f9d-40af-b1fb-6ad55dec1b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.893 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8393a77f-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.893 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.893 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8393a77f-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:53:03 compute-0 NetworkManager[49108]: <info>  [1769849583.8964] manager: (tap8393a77f-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Jan 31 08:53:03 compute-0 kernel: tap8393a77f-d0: entered promiscuous mode
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.895 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.898 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.899 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8393a77f-d0, col_values=(('external_ids', {'iface-id': '5683a43e-6622-446f-bc3f-7627d5212848'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.900 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:03 compute-0 ovn_controller[149457]: 2026-01-31T08:53:03Z|00849|binding|INFO|Releasing lport 5683a43e-6622-446f-bc3f-7627d5212848 from this chassis (sb_readonly=0)
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.901 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.902 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8393a77f-db62-4b73-9860-21350fe1e354.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8393a77f-db62-4b73-9860-21350fe1e354.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.903 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cff887e2-d7ec-4454-8024-1f43c46cd35f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.904 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-8393a77f-db62-4b73-9860-21350fe1e354
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/8393a77f-db62-4b73-9860-21350fe1e354.pid.haproxy
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 8393a77f-db62-4b73-9860-21350fe1e354
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:53:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:03.904 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354', 'env', 'PROCESS_TAG=haproxy-8393a77f-db62-4b73-9860-21350fe1e354', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8393a77f-db62-4b73-9860-21350fe1e354.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:53:03 compute-0 nova_compute[247704]: 2026-01-31 08:53:03.905 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:04 compute-0 ceph-mon[74496]: pgmap v3611: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.206 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849584.2053337, d1eabae5-8bce-49d0-98bd-b5ce98f9a5de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.207 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] VM Started (Lifecycle Event)
Jan 31 08:53:04 compute-0 podman[392831]: 2026-01-31 08:53:04.294437912 +0000 UTC m=+0.066046318 container create b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 08:53:04 compute-0 systemd[1]: Started libpod-conmon-b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177.scope.
Jan 31 08:53:04 compute-0 podman[392831]: 2026-01-31 08:53:04.256565441 +0000 UTC m=+0.028173947 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:53:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259dc58ef5be0452f28c0ec393522b6b3c824a297557c3c85cc0d27dc712367/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:04 compute-0 podman[392831]: 2026-01-31 08:53:04.396526886 +0000 UTC m=+0.168135372 container init b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:53:04 compute-0 podman[392831]: 2026-01-31 08:53:04.404206844 +0000 UTC m=+0.175815270 container start b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:53:04 compute-0 neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354[392847]: [NOTICE]   (392851) : New worker (392853) forked
Jan 31 08:53:04 compute-0 neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354[392847]: [NOTICE]   (392851) : Loading success.
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.442 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.449 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849584.2056677, d1eabae5-8bce-49d0-98bd-b5ce98f9a5de => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.449 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] VM Paused (Lifecycle Event)
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.494 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.499 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.539 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:53:04 compute-0 sshd-session[392707]: Invalid user user from 123.54.197.60 port 42986
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.609 247708 DEBUG nova.network.neutron [req-5342049e-7ae4-411d-9232-b438ec0b0b53 req-57d77f14-9941-4b29-8f55-13e68f18b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updated VIF entry in instance network info cache for port 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.611 247708 DEBUG nova.network.neutron [req-5342049e-7ae4-411d-9232-b438ec0b0b53 req-57d77f14-9941-4b29-8f55-13e68f18b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updating instance_info_cache with network_info: [{"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:53:04 compute-0 nova_compute[247704]: 2026-01-31 08:53:04.674 247708 DEBUG oslo_concurrency.lockutils [req-5342049e-7ae4-411d-9232-b438ec0b0b53 req-57d77f14-9941-4b29-8f55-13e68f18b63f 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:53:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:04 compute-0 sshd-session[392707]: Connection closed by invalid user user 123.54.197.60 port 42986 [preauth]
Jan 31 08:53:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Jan 31 08:53:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:05.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:05.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.796 247708 DEBUG nova.compute.manager [req-4c3de7b9-a938-4d7c-8947-9a2005de8bbd req-9969e9b0-6301-4759-834b-fbd200753a8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.797 247708 DEBUG oslo_concurrency.lockutils [req-4c3de7b9-a938-4d7c-8947-9a2005de8bbd req-9969e9b0-6301-4759-834b-fbd200753a8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.798 247708 DEBUG oslo_concurrency.lockutils [req-4c3de7b9-a938-4d7c-8947-9a2005de8bbd req-9969e9b0-6301-4759-834b-fbd200753a8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.798 247708 DEBUG oslo_concurrency.lockutils [req-4c3de7b9-a938-4d7c-8947-9a2005de8bbd req-9969e9b0-6301-4759-834b-fbd200753a8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.798 247708 DEBUG nova.compute.manager [req-4c3de7b9-a938-4d7c-8947-9a2005de8bbd req-9969e9b0-6301-4759-834b-fbd200753a8c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Processing event network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.799 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.804 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849585.803887, d1eabae5-8bce-49d0-98bd-b5ce98f9a5de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.804 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] VM Resumed (Lifecycle Event)
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.809 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.815 247708 INFO nova.virt.libvirt.driver [-] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Instance spawned successfully.
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.817 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.858 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.859 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.860 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.860 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.861 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.861 247708 DEBUG nova.virt.libvirt.driver [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.869 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.873 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.920 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.988 247708 INFO nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Took 13.82 seconds to spawn the instance on the hypervisor.
Jan 31 08:53:05 compute-0 nova_compute[247704]: 2026-01-31 08:53:05.988 247708 DEBUG nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:53:06 compute-0 nova_compute[247704]: 2026-01-31 08:53:06.092 247708 INFO nova.compute.manager [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Took 18.52 seconds to build instance.
Jan 31 08:53:06 compute-0 nova_compute[247704]: 2026-01-31 08:53:06.186 247708 DEBUG oslo_concurrency.lockutils [None req-ea98ef36-77f5-4405-974f-e88f6a70760b 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:06 compute-0 ceph-mon[74496]: pgmap v3612: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Jan 31 08:53:06 compute-0 nova_compute[247704]: 2026-01-31 08:53:06.378 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 1001 KiB/s wr, 49 op/s
Jan 31 08:53:07 compute-0 nova_compute[247704]: 2026-01-31 08:53:07.162 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:07.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:07.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:07 compute-0 sshd-session[392862]: Invalid user user from 123.54.197.60 port 42996
Jan 31 08:53:07 compute-0 sshd-session[392862]: Connection closed by invalid user user 123.54.197.60 port 42996 [preauth]
Jan 31 08:53:07 compute-0 nova_compute[247704]: 2026-01-31 08:53:07.924 247708 DEBUG nova.compute.manager [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:53:07 compute-0 nova_compute[247704]: 2026-01-31 08:53:07.925 247708 DEBUG oslo_concurrency.lockutils [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:07 compute-0 nova_compute[247704]: 2026-01-31 08:53:07.925 247708 DEBUG oslo_concurrency.lockutils [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:07 compute-0 nova_compute[247704]: 2026-01-31 08:53:07.925 247708 DEBUG oslo_concurrency.lockutils [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:07 compute-0 nova_compute[247704]: 2026-01-31 08:53:07.925 247708 DEBUG nova.compute.manager [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] No waiting events found dispatching network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:53:07 compute-0 nova_compute[247704]: 2026-01-31 08:53:07.926 247708 WARNING nova.compute.manager [req-8c0cc1f0-2590-4e8f-ac3b-979a629e1610 req-1788a20d-4c1c-495b-a26d-1ef595c5a516 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received unexpected event network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 for instance with vm_state active and task_state None.
Jan 31 08:53:08 compute-0 ceph-mon[74496]: pgmap v3613: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 1001 KiB/s wr, 49 op/s
Jan 31 08:53:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1997707929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 481 KiB/s wr, 53 op/s
Jan 31 08:53:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3637733675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:09.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:09.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:09 compute-0 nova_compute[247704]: 2026-01-31 08:53:09.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:09 compute-0 nova_compute[247704]: 2026-01-31 08:53:09.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:53:09 compute-0 nova_compute[247704]: 2026-01-31 08:53:09.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:53:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:10 compute-0 nova_compute[247704]: 2026-01-31 08:53:10.052 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:53:10 compute-0 nova_compute[247704]: 2026-01-31 08:53:10.052 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:53:10 compute-0 nova_compute[247704]: 2026-01-31 08:53:10.053 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:53:10 compute-0 nova_compute[247704]: 2026-01-31 08:53:10.053 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d1eabae5-8bce-49d0-98bd-b5ce98f9a5de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:53:10 compute-0 ceph-mon[74496]: pgmap v3614: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 481 KiB/s wr, 53 op/s
Jan 31 08:53:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3390267310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3709395865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:53:10 compute-0 sshd-session[392866]: Invalid user user from 123.54.197.60 port 43006
Jan 31 08:53:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Jan 31 08:53:11 compute-0 sshd-session[392866]: Connection closed by invalid user user 123.54.197.60 port 43006 [preauth]
Jan 31 08:53:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:11.224 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:11.224 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:11.225 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:11 compute-0 nova_compute[247704]: 2026-01-31 08:53:11.263 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:11 compute-0 NetworkManager[49108]: <info>  [1769849591.2749] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Jan 31 08:53:11 compute-0 NetworkManager[49108]: <info>  [1769849591.2758] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Jan 31 08:53:11 compute-0 nova_compute[247704]: 2026-01-31 08:53:11.303 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:11 compute-0 ovn_controller[149457]: 2026-01-31T08:53:11Z|00850|binding|INFO|Releasing lport 5683a43e-6622-446f-bc3f-7627d5212848 from this chassis (sb_readonly=0)
Jan 31 08:53:11 compute-0 nova_compute[247704]: 2026-01-31 08:53:11.323 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:11 compute-0 nova_compute[247704]: 2026-01-31 08:53:11.379 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:11.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:11.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:12 compute-0 nova_compute[247704]: 2026-01-31 08:53:12.051 247708 DEBUG nova.compute.manager [req-b7ac9759-b9db-4a1b-a44f-fef46d7d3c76 req-f3779d5c-82a6-492a-81f1-2c5de58ba5f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-changed-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:53:12 compute-0 nova_compute[247704]: 2026-01-31 08:53:12.051 247708 DEBUG nova.compute.manager [req-b7ac9759-b9db-4a1b-a44f-fef46d7d3c76 req-f3779d5c-82a6-492a-81f1-2c5de58ba5f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Refreshing instance network info cache due to event network-changed-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:53:12 compute-0 nova_compute[247704]: 2026-01-31 08:53:12.051 247708 DEBUG oslo_concurrency.lockutils [req-b7ac9759-b9db-4a1b-a44f-fef46d7d3c76 req-f3779d5c-82a6-492a-81f1-2c5de58ba5f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:53:12 compute-0 nova_compute[247704]: 2026-01-31 08:53:12.164 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:12 compute-0 sshd-session[392872]: Invalid user ubuntu from 45.148.10.240 port 33228
Jan 31 08:53:12 compute-0 ceph-mon[74496]: pgmap v3615: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Jan 31 08:53:12 compute-0 sshd-session[392870]: Invalid user user from 123.54.197.60 port 43986
Jan 31 08:53:12 compute-0 sshd-session[392872]: Connection closed by invalid user ubuntu 45.148.10.240 port 33228 [preauth]
Jan 31 08:53:12 compute-0 podman[392875]: 2026-01-31 08:53:12.627764611 +0000 UTC m=+0.071817908 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 08:53:12 compute-0 sshd-session[392870]: Connection closed by invalid user user 123.54.197.60 port 43986 [preauth]
Jan 31 08:53:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.102 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updating instance_info_cache with network_info: [{"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.134 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.135 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.135 247708 DEBUG oslo_concurrency.lockutils [req-b7ac9759-b9db-4a1b-a44f-fef46d7d3c76 req-f3779d5c-82a6-492a-81f1-2c5de58ba5f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.135 247708 DEBUG nova.network.neutron [req-b7ac9759-b9db-4a1b-a44f-fef46d7d3c76 req-f3779d5c-82a6-492a-81f1-2c5de58ba5f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Refreshing network info cache for port 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.136 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.252 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.253 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.253 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.253 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.254 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:13.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:13.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:53:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425752531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.679 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.871 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:53:13 compute-0 nova_compute[247704]: 2026-01-31 08:53:13.871 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000c9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.120 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.122 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4059MB free_disk=20.9217529296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.122 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.123 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.239 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance d1eabae5-8bce-49d0-98bd-b5ce98f9a5de actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.239 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.239 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.311 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:14 compute-0 ceph-mon[74496]: pgmap v3616: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 104 op/s
Jan 31 08:53:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2425752531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:53:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783711535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.779 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.785 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:53:14 compute-0 nova_compute[247704]: 2026-01-31 08:53:14.959 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:53:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 15 KiB/s wr, 128 op/s
Jan 31 08:53:15 compute-0 nova_compute[247704]: 2026-01-31 08:53:15.196 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:53:15 compute-0 nova_compute[247704]: 2026-01-31 08:53:15.197 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1783711535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:15.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:15.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:15 compute-0 nova_compute[247704]: 2026-01-31 08:53:15.765 247708 DEBUG nova.network.neutron [req-b7ac9759-b9db-4a1b-a44f-fef46d7d3c76 req-f3779d5c-82a6-492a-81f1-2c5de58ba5f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updated VIF entry in instance network info cache for port 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:53:15 compute-0 nova_compute[247704]: 2026-01-31 08:53:15.766 247708 DEBUG nova.network.neutron [req-b7ac9759-b9db-4a1b-a44f-fef46d7d3c76 req-f3779d5c-82a6-492a-81f1-2c5de58ba5f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updating instance_info_cache with network_info: [{"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:53:15 compute-0 nova_compute[247704]: 2026-01-31 08:53:15.836 247708 DEBUG oslo_concurrency.lockutils [req-b7ac9759-b9db-4a1b-a44f-fef46d7d3c76 req-f3779d5c-82a6-492a-81f1-2c5de58ba5f8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:53:15 compute-0 sshd-session[392918]: Invalid user user from 123.54.197.60 port 43998
Jan 31 08:53:16 compute-0 sshd-session[392918]: Connection closed by invalid user user 123.54.197.60 port 43998 [preauth]
Jan 31 08:53:16 compute-0 nova_compute[247704]: 2026-01-31 08:53:16.383 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:16 compute-0 ceph-mon[74496]: pgmap v3617: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 15 KiB/s wr, 128 op/s
Jan 31 08:53:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 163 op/s
Jan 31 08:53:17 compute-0 nova_compute[247704]: 2026-01-31 08:53:17.166 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2347719483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:17.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:17.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:17 compute-0 sudo[392946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:17 compute-0 sudo[392946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:17 compute-0 sudo[392946]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:17 compute-0 sudo[392971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:53:17 compute-0 sudo[392971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:17 compute-0 sudo[392971]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:17 compute-0 sudo[392996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:17 compute-0 sudo[392996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:17 compute-0 sudo[392996]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:17 compute-0 sudo[393021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:53:17 compute-0 sudo[393021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:18 compute-0 sudo[393021]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 08:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5c0b2d0e-5a33-4365-8126-ec432b7d7d34 does not exist
Jan 31 08:53:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f157d5df-4fd9-4829-8009-ae4d109720e3 does not exist
Jan 31 08:53:18 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3228e308-5de4-4912-8ad1-1061ad6a6319 does not exist
Jan 31 08:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:53:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:53:18 compute-0 sudo[393079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:18 compute-0 sudo[393079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:18 compute-0 sudo[393079]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:18 compute-0 sudo[393104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:53:18 compute-0 sudo[393104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:18 compute-0 sudo[393104]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:18 compute-0 ceph-mon[74496]: pgmap v3618: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 163 op/s
Jan 31 08:53:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:53:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:53:18 compute-0 sudo[393129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:18 compute-0 sudo[393129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:18 compute-0 sudo[393129]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:18 compute-0 sshd-session[392944]: Invalid user user from 123.54.197.60 port 44000
Jan 31 08:53:18 compute-0 sudo[393154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:53:18 compute-0 sudo[393154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:18 compute-0 sshd-session[392944]: Connection closed by invalid user user 123.54.197.60 port 44000 [preauth]
Jan 31 08:53:18 compute-0 podman[393220]: 2026-01-31 08:53:18.846726072 +0000 UTC m=+0.048090250 container create c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:53:18 compute-0 systemd[1]: Started libpod-conmon-c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08.scope.
Jan 31 08:53:18 compute-0 ovn_controller[149457]: 2026-01-31T08:53:18Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:00:a8 10.100.0.4
Jan 31 08:53:18 compute-0 ovn_controller[149457]: 2026-01-31T08:53:18Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:00:a8 10.100.0.4
Jan 31 08:53:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:53:18 compute-0 podman[393220]: 2026-01-31 08:53:18.82652567 +0000 UTC m=+0.027889858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:53:18 compute-0 podman[393220]: 2026-01-31 08:53:18.938343251 +0000 UTC m=+0.139707519 container init c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 08:53:18 compute-0 podman[393220]: 2026-01-31 08:53:18.947167157 +0000 UTC m=+0.148531325 container start c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:53:18 compute-0 podman[393220]: 2026-01-31 08:53:18.951453721 +0000 UTC m=+0.152817989 container attach c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:53:18 compute-0 angry_sammet[393236]: 167 167
Jan 31 08:53:18 compute-0 systemd[1]: libpod-c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08.scope: Deactivated successfully.
Jan 31 08:53:18 compute-0 conmon[393236]: conmon c1f39a6f2b3fbd338df3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08.scope/container/memory.events
Jan 31 08:53:18 compute-0 podman[393220]: 2026-01-31 08:53:18.956290128 +0000 UTC m=+0.157654316 container died c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-6706efd8825955710f345a4ab19183a7329f45a658195595036475e1874aec47-merged.mount: Deactivated successfully.
Jan 31 08:53:19 compute-0 podman[393220]: 2026-01-31 08:53:19.001109899 +0000 UTC m=+0.202474067 container remove c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:53:19 compute-0 systemd[1]: libpod-conmon-c1f39a6f2b3fbd338df3de3c660172eb76f87b9f18f64a38d8f38f762d1d8f08.scope: Deactivated successfully.
Jan 31 08:53:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 140 op/s
Jan 31 08:53:19 compute-0 podman[393262]: 2026-01-31 08:53:19.164492474 +0000 UTC m=+0.047792453 container create df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:53:19 compute-0 systemd[1]: Started libpod-conmon-df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7.scope.
Jan 31 08:53:19 compute-0 podman[393262]: 2026-01-31 08:53:19.140006028 +0000 UTC m=+0.023306047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:53:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589ac6f2ba14ee25fdc5bdc313f484de9b80533779837586eb314971b024f015/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589ac6f2ba14ee25fdc5bdc313f484de9b80533779837586eb314971b024f015/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589ac6f2ba14ee25fdc5bdc313f484de9b80533779837586eb314971b024f015/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589ac6f2ba14ee25fdc5bdc313f484de9b80533779837586eb314971b024f015/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589ac6f2ba14ee25fdc5bdc313f484de9b80533779837586eb314971b024f015/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:19 compute-0 podman[393262]: 2026-01-31 08:53:19.258802209 +0000 UTC m=+0.142102228 container init df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:53:19 compute-0 podman[393262]: 2026-01-31 08:53:19.269746845 +0000 UTC m=+0.153046824 container start df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:53:19 compute-0 podman[393262]: 2026-01-31 08:53:19.276388507 +0000 UTC m=+0.159688476 container attach df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:53:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:19.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2000871693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 08:53:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 08:53:19 compute-0 nova_compute[247704]: 2026-01-31 08:53:19.622 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:19 compute-0 nova_compute[247704]: 2026-01-31 08:53:19.624 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:19 compute-0 nova_compute[247704]: 2026-01-31 08:53:19.624 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:19 compute-0 nova_compute[247704]: 2026-01-31 08:53:19.864 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:20 compute-0 upbeat_brown[393278]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:53:20 compute-0 upbeat_brown[393278]: --> relative data size: 1.0
Jan 31 08:53:20 compute-0 upbeat_brown[393278]: --> All data devices are unavailable
Jan 31 08:53:20 compute-0 systemd[1]: libpod-df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7.scope: Deactivated successfully.
Jan 31 08:53:20 compute-0 podman[393262]: 2026-01-31 08:53:20.054671855 +0000 UTC m=+0.937971854 container died df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 08:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-589ac6f2ba14ee25fdc5bdc313f484de9b80533779837586eb314971b024f015-merged.mount: Deactivated successfully.
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:53:20 compute-0 podman[393262]: 2026-01-31 08:53:20.139502598 +0000 UTC m=+1.022802587 container remove df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:53:20 compute-0 systemd[1]: libpod-conmon-df7fdf1e44e92339ba1d7df5853aeff0097e392961beba053f3ba9ea5462aee7.scope: Deactivated successfully.
Jan 31 08:53:20 compute-0 sshd-session[393255]: Invalid user user from 123.54.197.60 port 44004
Jan 31 08:53:20 compute-0 sudo[393154]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:20 compute-0 sudo[393307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:20 compute-0 sudo[393307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:20 compute-0 sudo[393307]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:53:20
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'backups']
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:53:20 compute-0 sudo[393332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:53:20 compute-0 sudo[393332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:20 compute-0 sudo[393332]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:20 compute-0 sudo[393358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:20 compute-0 sudo[393358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:20 compute-0 sudo[393358]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:20 compute-0 sudo[393383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:53:20 compute-0 sudo[393383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:20 compute-0 sshd-session[393255]: Connection closed by invalid user user 123.54.197.60 port 44004 [preauth]
Jan 31 08:53:20 compute-0 ceph-mon[74496]: pgmap v3619: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 14 KiB/s wr, 140 op/s
Jan 31 08:53:20 compute-0 podman[393447]: 2026-01-31 08:53:20.748364583 +0000 UTC m=+0.052910768 container create 59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:53:20 compute-0 systemd[1]: Started libpod-conmon-59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a.scope.
Jan 31 08:53:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:53:20 compute-0 podman[393447]: 2026-01-31 08:53:20.72647327 +0000 UTC m=+0.031019515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:53:20 compute-0 podman[393447]: 2026-01-31 08:53:20.827110969 +0000 UTC m=+0.131657184 container init 59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 08:53:20 compute-0 podman[393447]: 2026-01-31 08:53:20.834710344 +0000 UTC m=+0.139256539 container start 59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:53:20 compute-0 podman[393447]: 2026-01-31 08:53:20.839389768 +0000 UTC m=+0.143936003 container attach 59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 08:53:20 compute-0 serene_montalcini[393465]: 167 167
Jan 31 08:53:20 compute-0 systemd[1]: libpod-59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a.scope: Deactivated successfully.
Jan 31 08:53:20 compute-0 podman[393447]: 2026-01-31 08:53:20.841718795 +0000 UTC m=+0.146265000 container died 59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:53:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8638b4956d4ebe840cc9e9310e991aa340a1f38e611f5f4728058be6d28ef7f2-merged.mount: Deactivated successfully.
Jan 31 08:53:20 compute-0 podman[393447]: 2026-01-31 08:53:20.954836118 +0000 UTC m=+0.259382313 container remove 59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:53:20 compute-0 systemd[1]: libpod-conmon-59c8978df7f22f68698d4577d570e38a06998cef931871fb92ac49d2fd4c0d2a.scope: Deactivated successfully.
Jan 31 08:53:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 345 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 171 op/s
Jan 31 08:53:21 compute-0 podman[393490]: 2026-01-31 08:53:21.127440317 +0000 UTC m=+0.067561804 container create fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:53:21 compute-0 podman[393490]: 2026-01-31 08:53:21.082243978 +0000 UTC m=+0.022365495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:53:21 compute-0 systemd[1]: Started libpod-conmon-fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a.scope.
Jan 31 08:53:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/545dd26e9ae0d3ae5f6021f903a50dc36fbec4af0bd23a8a9e3bfc1d86fe85d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/545dd26e9ae0d3ae5f6021f903a50dc36fbec4af0bd23a8a9e3bfc1d86fe85d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/545dd26e9ae0d3ae5f6021f903a50dc36fbec4af0bd23a8a9e3bfc1d86fe85d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/545dd26e9ae0d3ae5f6021f903a50dc36fbec4af0bd23a8a9e3bfc1d86fe85d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:21 compute-0 podman[393490]: 2026-01-31 08:53:21.260256239 +0000 UTC m=+0.200377756 container init fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:53:21 compute-0 podman[393490]: 2026-01-31 08:53:21.267263509 +0000 UTC m=+0.207385036 container start fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:53:21 compute-0 podman[393490]: 2026-01-31 08:53:21.271312418 +0000 UTC m=+0.211433955 container attach fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:53:21 compute-0 nova_compute[247704]: 2026-01-31 08:53:21.387 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:21 compute-0 ceph-mon[74496]: pgmap v3620: 305 pgs: 305 active+clean; 345 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 171 op/s
Jan 31 08:53:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:21.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:21 compute-0 sshd-session[393461]: Invalid user user from 123.54.197.60 port 35950
Jan 31 08:53:22 compute-0 cranky_diffie[393506]: {
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:     "0": [
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:         {
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "devices": [
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "/dev/loop3"
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             ],
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "lv_name": "ceph_lv0",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "lv_size": "7511998464",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "name": "ceph_lv0",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "tags": {
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.cluster_name": "ceph",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.crush_device_class": "",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.encrypted": "0",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.osd_id": "0",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.type": "block",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:                 "ceph.vdo": "0"
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             },
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "type": "block",
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:             "vg_name": "ceph_vg0"
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:         }
Jan 31 08:53:22 compute-0 cranky_diffie[393506]:     ]
Jan 31 08:53:22 compute-0 cranky_diffie[393506]: }
Jan 31 08:53:22 compute-0 systemd[1]: libpod-fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a.scope: Deactivated successfully.
Jan 31 08:53:22 compute-0 podman[393490]: 2026-01-31 08:53:22.10488048 +0000 UTC m=+1.045001987 container died fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-545dd26e9ae0d3ae5f6021f903a50dc36fbec4af0bd23a8a9e3bfc1d86fe85d6-merged.mount: Deactivated successfully.
Jan 31 08:53:22 compute-0 podman[393490]: 2026-01-31 08:53:22.163566248 +0000 UTC m=+1.103687735 container remove fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:53:22 compute-0 nova_compute[247704]: 2026-01-31 08:53:22.168 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:22 compute-0 systemd[1]: libpod-conmon-fe4da51e96b0debcdf48d1a73c6b7bfdd9cd83a1641296aac7b2ce3e2dcec03a.scope: Deactivated successfully.
Jan 31 08:53:22 compute-0 sudo[393383]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:22 compute-0 sshd-session[393461]: Connection closed by invalid user user 123.54.197.60 port 35950 [preauth]
Jan 31 08:53:22 compute-0 sudo[393528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:22 compute-0 sudo[393528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:22 compute-0 sudo[393528]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:22 compute-0 sudo[393553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:53:22 compute-0 sudo[393553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:22 compute-0 sudo[393553]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:22 compute-0 sudo[393579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:22 compute-0 sudo[393579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:22 compute-0 sudo[393579]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:22 compute-0 sudo[393602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:22 compute-0 sudo[393602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:22 compute-0 sudo[393602]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:22 compute-0 sudo[393622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:53:22 compute-0 sudo[393622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:22 compute-0 sudo[393652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:22 compute-0 sudo[393652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:22 compute-0 sudo[393652]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:22 compute-0 podman[393720]: 2026-01-31 08:53:22.830399493 +0000 UTC m=+0.055669025 container create 4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:53:22 compute-0 systemd[1]: Started libpod-conmon-4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352.scope.
Jan 31 08:53:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:53:22 compute-0 podman[393720]: 2026-01-31 08:53:22.810885979 +0000 UTC m=+0.036155551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:53:22 compute-0 podman[393720]: 2026-01-31 08:53:22.910157854 +0000 UTC m=+0.135427446 container init 4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:53:22 compute-0 podman[393720]: 2026-01-31 08:53:22.921235834 +0000 UTC m=+0.146505376 container start 4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:53:22 compute-0 sad_shtern[393736]: 167 167
Jan 31 08:53:22 compute-0 podman[393720]: 2026-01-31 08:53:22.925992999 +0000 UTC m=+0.151262601 container attach 4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 31 08:53:22 compute-0 systemd[1]: libpod-4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352.scope: Deactivated successfully.
Jan 31 08:53:22 compute-0 podman[393720]: 2026-01-31 08:53:22.927967118 +0000 UTC m=+0.153236680 container died 4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:53:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-06a939bb9c3c700034af6765867313821fd916f69c2234293155fb28f257f79c-merged.mount: Deactivated successfully.
Jan 31 08:53:22 compute-0 podman[393720]: 2026-01-31 08:53:22.968234638 +0000 UTC m=+0.193504180 container remove 4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:53:22 compute-0 systemd[1]: libpod-conmon-4437b5f1a5427c09ee27124d82d2e3bc46e9130a98f15140bb2c75e503be9352.scope: Deactivated successfully.
Jan 31 08:53:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Jan 31 08:53:23 compute-0 podman[393759]: 2026-01-31 08:53:23.105400674 +0000 UTC m=+0.040523356 container create ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:53:23 compute-0 systemd[1]: Started libpod-conmon-ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1.scope.
Jan 31 08:53:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a50f2f3dcaadfe48e79b0c9c48ba407a8a7fb3a25d5d93320429e5d8eab7c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a50f2f3dcaadfe48e79b0c9c48ba407a8a7fb3a25d5d93320429e5d8eab7c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a50f2f3dcaadfe48e79b0c9c48ba407a8a7fb3a25d5d93320429e5d8eab7c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1a50f2f3dcaadfe48e79b0c9c48ba407a8a7fb3a25d5d93320429e5d8eab7c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:53:23 compute-0 podman[393759]: 2026-01-31 08:53:23.085664074 +0000 UTC m=+0.020786806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:53:23 compute-0 podman[393759]: 2026-01-31 08:53:23.184034459 +0000 UTC m=+0.119157161 container init ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:53:23 compute-0 podman[393759]: 2026-01-31 08:53:23.189758048 +0000 UTC m=+0.124880730 container start ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 08:53:23 compute-0 podman[393759]: 2026-01-31 08:53:23.192864013 +0000 UTC m=+0.127986715 container attach ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:53:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:23.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:23.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:23 compute-0 sweet_ellis[393775]: {
Jan 31 08:53:23 compute-0 sweet_ellis[393775]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:53:23 compute-0 sweet_ellis[393775]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:53:23 compute-0 sweet_ellis[393775]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:53:23 compute-0 sweet_ellis[393775]:         "osd_id": 0,
Jan 31 08:53:23 compute-0 sweet_ellis[393775]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:53:23 compute-0 sweet_ellis[393775]:         "type": "bluestore"
Jan 31 08:53:23 compute-0 sweet_ellis[393775]:     }
Jan 31 08:53:23 compute-0 sweet_ellis[393775]: }
Jan 31 08:53:24 compute-0 systemd[1]: libpod-ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1.scope: Deactivated successfully.
Jan 31 08:53:24 compute-0 podman[393798]: 2026-01-31 08:53:24.06112321 +0000 UTC m=+0.025945412 container died ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 08:53:24 compute-0 ceph-mon[74496]: pgmap v3621: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Jan 31 08:53:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a50f2f3dcaadfe48e79b0c9c48ba407a8a7fb3a25d5d93320429e5d8eab7c4-merged.mount: Deactivated successfully.
Jan 31 08:53:24 compute-0 podman[393798]: 2026-01-31 08:53:24.117615124 +0000 UTC m=+0.082437246 container remove ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:53:24 compute-0 systemd[1]: libpod-conmon-ba19fc941cce43981e571dff29d1f03e041d244d0a909cedd49d187b15de79e1.scope: Deactivated successfully.
Jan 31 08:53:24 compute-0 sudo[393622]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:53:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:53:24 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9ae35761-f542-4154-8160-6c8ea1202ddc does not exist
Jan 31 08:53:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 569efd38-eb99-4b85-8976-6a2fb1bc1f7c does not exist
Jan 31 08:53:24 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f7428eae-39b9-4379-af2e-936077ce9e6d does not exist
Jan 31 08:53:24 compute-0 sudo[393813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:24 compute-0 sudo[393813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:24 compute-0 sudo[393813]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:24 compute-0 sudo[393838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:53:24 compute-0 sudo[393838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:24 compute-0 sudo[393838]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:24 compute-0 sshd-session[393780]: Invalid user user from 123.54.197.60 port 35956
Jan 31 08:53:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Jan 31 08:53:25 compute-0 sshd-session[393780]: Connection closed by invalid user user 123.54.197.60 port 35956 [preauth]
Jan 31 08:53:25 compute-0 nova_compute[247704]: 2026-01-31 08:53:25.125 247708 INFO nova.compute.manager [None req-bb64a89b-d6e1-4538-afce-0047dbe4670f 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Get console output
Jan 31 08:53:25 compute-0 nova_compute[247704]: 2026-01-31 08:53:25.134 315733 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:53:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:53:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.004000095s ======
Jan 31 08:53:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:25.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000095s
Jan 31 08:53:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:25.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:25 compute-0 podman[393866]: 2026-01-31 08:53:25.944339582 +0000 UTC m=+0.096863227 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 31 08:53:26 compute-0 nova_compute[247704]: 2026-01-31 08:53:26.011 247708 INFO nova.compute.manager [None req-82d12dab-e864-4511-82d1-825837694e62 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Get console output
Jan 31 08:53:26 compute-0 nova_compute[247704]: 2026-01-31 08:53:26.017 315733 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 08:53:26 compute-0 sshd-session[393864]: Invalid user user from 123.54.197.60 port 35972
Jan 31 08:53:26 compute-0 ceph-mon[74496]: pgmap v3622: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Jan 31 08:53:26 compute-0 nova_compute[247704]: 2026-01-31 08:53:26.390 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:26 compute-0 sshd-session[393864]: Connection closed by invalid user user 123.54.197.60 port 35972 [preauth]
Jan 31 08:53:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 362 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Jan 31 08:53:27 compute-0 nova_compute[247704]: 2026-01-31 08:53:27.172 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:27.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:27 compute-0 nova_compute[247704]: 2026-01-31 08:53:27.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:27.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:28 compute-0 ceph-mon[74496]: pgmap v3623: 305 pgs: 305 active+clean; 362 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Jan 31 08:53:28 compute-0 sshd-session[393896]: Invalid user user from 123.54.197.60 port 35984
Jan 31 08:53:28 compute-0 sshd-session[393896]: Connection closed by invalid user user 123.54.197.60 port 35984 [preauth]
Jan 31 08:53:28 compute-0 nova_compute[247704]: 2026-01-31 08:53:28.987 247708 DEBUG nova.compute.manager [req-8870b249-4d3c-418e-9187-327974997210 req-5d390dea-0680-452b-affb-34a781a1b85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-changed-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:53:28 compute-0 nova_compute[247704]: 2026-01-31 08:53:28.988 247708 DEBUG nova.compute.manager [req-8870b249-4d3c-418e-9187-327974997210 req-5d390dea-0680-452b-affb-34a781a1b85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Refreshing instance network info cache due to event network-changed-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:53:28 compute-0 nova_compute[247704]: 2026-01-31 08:53:28.988 247708 DEBUG oslo_concurrency.lockutils [req-8870b249-4d3c-418e-9187-327974997210 req-5d390dea-0680-452b-affb-34a781a1b85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:53:28 compute-0 nova_compute[247704]: 2026-01-31 08:53:28.988 247708 DEBUG oslo_concurrency.lockutils [req-8870b249-4d3c-418e-9187-327974997210 req-5d390dea-0680-452b-affb-34a781a1b85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:53:28 compute-0 nova_compute[247704]: 2026-01-31 08:53:28.988 247708 DEBUG nova.network.neutron [req-8870b249-4d3c-418e-9187-327974997210 req-5d390dea-0680-452b-affb-34a781a1b85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Refreshing network info cache for port 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:53:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 367 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 944 KiB/s rd, 2.3 MiB/s wr, 92 op/s
Jan 31 08:53:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:29.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.521 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.521 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.522 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.523 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.523 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.525 247708 INFO nova.compute.manager [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Terminating instance
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.527 247708 DEBUG nova.compute.manager [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:53:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:29.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:29 compute-0 kernel: tap44e2cb6c-c4 (unregistering): left promiscuous mode
Jan 31 08:53:29 compute-0 NetworkManager[49108]: <info>  [1769849609.5948] device (tap44e2cb6c-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:53:29 compute-0 ovn_controller[149457]: 2026-01-31T08:53:29Z|00851|binding|INFO|Releasing lport 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 from this chassis (sb_readonly=0)
Jan 31 08:53:29 compute-0 ovn_controller[149457]: 2026-01-31T08:53:29Z|00852|binding|INFO|Setting lport 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 down in Southbound
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.614 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:29 compute-0 ovn_controller[149457]: 2026-01-31T08:53:29Z|00853|binding|INFO|Removing iface tap44e2cb6c-c4 ovn-installed in OVS
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.620 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.631 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:29 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c9.scope: Deactivated successfully.
Jan 31 08:53:29 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c9.scope: Consumed 14.280s CPU time.
Jan 31 08:53:29 compute-0 systemd-machined[214448]: Machine qemu-89-instance-000000c9 terminated.
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.766 247708 INFO nova.virt.libvirt.driver [-] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Instance destroyed successfully.
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.767 247708 DEBUG nova.objects.instance [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid d1eabae5-8bce-49d0-98bd-b5ce98f9a5de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:53:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:29.858 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:00:a8 10.100.0.4'], port_security=['fa:16:3e:f6:00:a8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd1eabae5-8bce-49d0-98bd-b5ce98f9a5de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8393a77f-db62-4b73-9860-21350fe1e354', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c5fa2e4-0deb-474b-8ddd-80ede5beb78d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5aa0dfdc-efed-4f7a-bc4e-a9890fbf83ea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:53:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:29.860 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 in datapath 8393a77f-db62-4b73-9860-21350fe1e354 unbound from our chassis
Jan 31 08:53:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:29.862 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8393a77f-db62-4b73-9860-21350fe1e354, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:53:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:29.863 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[21115e78-3221-4db7-804e-0f2fc39bfdb9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:29.864 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354 namespace which is not needed anymore
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.974 247708 DEBUG nova.virt.libvirt.vif [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:52:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1760441557',display_name='tempest-TestNetworkBasicOps-server-1760441557',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1760441557',id=201,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6Z01fJYkNm70ZQMs14pyH7TFBbmUxC/U4c2fOKOYLhx96B5eavINTvKYBJunlvKZ2TLiPoDy1LSL4O1mP5l+1spaDw3Xi5q2QdWKbIGlId67kn2ZWR46Izg5To1jn3BA==',key_name='tempest-TestNetworkBasicOps-1035394344',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:53:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-7pn7grxe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:53:06Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=d1eabae5-8bce-49d0-98bd-b5ce98f9a5de,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.975 247708 DEBUG nova.network.os_vif_util [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.976 247708 DEBUG nova.network.os_vif_util [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:00:a8,bridge_name='br-int',has_traffic_filtering=True,id=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0,network=Network(8393a77f-db62-4b73-9860-21350fe1e354),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44e2cb6c-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.977 247708 DEBUG os_vif [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:00:a8,bridge_name='br-int',has_traffic_filtering=True,id=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0,network=Network(8393a77f-db62-4b73-9860-21350fe1e354),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44e2cb6c-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.979 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.979 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44e2cb6c-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.982 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:29 compute-0 nova_compute[247704]: 2026-01-31 08:53:29.985 247708 INFO os_vif [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:00:a8,bridge_name='br-int',has_traffic_filtering=True,id=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0,network=Network(8393a77f-db62-4b73-9860-21350fe1e354),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44e2cb6c-c4')
Jan 31 08:53:30 compute-0 neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354[392847]: [NOTICE]   (392851) : haproxy version is 2.8.14-c23fe91
Jan 31 08:53:30 compute-0 neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354[392847]: [NOTICE]   (392851) : path to executable is /usr/sbin/haproxy
Jan 31 08:53:30 compute-0 neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354[392847]: [WARNING]  (392851) : Exiting Master process...
Jan 31 08:53:30 compute-0 neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354[392847]: [ALERT]    (392851) : Current worker (392853) exited with code 143 (Terminated)
Jan 31 08:53:30 compute-0 neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354[392847]: [WARNING]  (392851) : All workers exited. Exiting... (0)
Jan 31 08:53:30 compute-0 systemd[1]: libpod-b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177.scope: Deactivated successfully.
Jan 31 08:53:30 compute-0 podman[393934]: 2026-01-31 08:53:30.026279565 +0000 UTC m=+0.049470844 container died b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-2259dc58ef5be0452f28c0ec393522b6b3c824a297557c3c85cc0d27dc712367-merged.mount: Deactivated successfully.
Jan 31 08:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177-userdata-shm.mount: Deactivated successfully.
Jan 31 08:53:30 compute-0 podman[393934]: 2026-01-31 08:53:30.101036695 +0000 UTC m=+0.124227934 container cleanup b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:53:30 compute-0 systemd[1]: libpod-conmon-b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177.scope: Deactivated successfully.
Jan 31 08:53:30 compute-0 sshd-session[393899]: Invalid user user from 123.54.197.60 port 35998
Jan 31 08:53:30 compute-0 podman[393982]: 2026-01-31 08:53:30.183949832 +0000 UTC m=+0.060390810 container remove b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.188 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[735a90ab-430d-4573-8c8c-0b203214f789]: (4, ('Sat Jan 31 08:53:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354 (b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177)\nb8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177\nSat Jan 31 08:53:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354 (b8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177)\nb8c69538d629bf4b5413fbe157f6f69e1bc0218a7f6e66d967eac0e2da86d177\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.191 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[980e5377-11e3-46e5-9c3e-4460ec54fb47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.192 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8393a77f-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:53:30 compute-0 nova_compute[247704]: 2026-01-31 08:53:30.194 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:30 compute-0 kernel: tap8393a77f-d0: left promiscuous mode
Jan 31 08:53:30 compute-0 nova_compute[247704]: 2026-01-31 08:53:30.200 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.202 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[29ac9723-f2b7-4cee-9367-380b873e3dab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.224 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[10ce92ef-4e09-4f4a-9b8b-2b27d57fe7f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.225 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[52e8302c-d68a-4ab9-9797-47351219563c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.238 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[607000ab-e268-4cad-8b94-506654874ebe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 982630, 'reachable_time': 17232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393997, 'error': None, 'target': 'ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d8393a77f\x2ddb62\x2d4b73\x2d9860\x2d21350fe1e354.mount: Deactivated successfully.
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.242 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8393a77f-db62-4b73-9860-21350fe1e354 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:53:30 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:30.243 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c6a830-1708-45d2-a1a0-1f2ae2132aa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:53:30 compute-0 ceph-mon[74496]: pgmap v3624: 305 pgs: 305 active+clean; 367 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 944 KiB/s rd, 2.3 MiB/s wr, 92 op/s
Jan 31 08:53:30 compute-0 sshd-session[393899]: Connection closed by invalid user user 123.54.197.60 port 35998 [preauth]
Jan 31 08:53:30 compute-0 nova_compute[247704]: 2026-01-31 08:53:30.673 247708 INFO nova.virt.libvirt.driver [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Deleting instance files /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_del
Jan 31 08:53:30 compute-0 nova_compute[247704]: 2026-01-31 08:53:30.674 247708 INFO nova.virt.libvirt.driver [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Deletion of /var/lib/nova/instances/d1eabae5-8bce-49d0-98bd-b5ce98f9a5de_del complete
Jan 31 08:53:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 345 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 31 08:53:31 compute-0 nova_compute[247704]: 2026-01-31 08:53:31.214 247708 INFO nova.compute.manager [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Took 1.69 seconds to destroy the instance on the hypervisor.
Jan 31 08:53:31 compute-0 nova_compute[247704]: 2026-01-31 08:53:31.215 247708 DEBUG oslo.service.loopingcall [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:53:31 compute-0 nova_compute[247704]: 2026-01-31 08:53:31.216 247708 DEBUG nova.compute.manager [-] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:53:31 compute-0 nova_compute[247704]: 2026-01-31 08:53:31.216 247708 DEBUG nova.network.neutron [-] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:53:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:31.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:31.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:32 compute-0 nova_compute[247704]: 2026-01-31 08:53:32.174 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:32 compute-0 sshd-session[394000]: Invalid user user from 123.54.197.60 port 37490
Jan 31 08:53:32 compute-0 ceph-mon[74496]: pgmap v3625: 305 pgs: 305 active+clean; 345 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 141 op/s
Jan 31 08:53:32 compute-0 sshd-session[394000]: Connection closed by invalid user user 123.54.197.60 port 37490 [preauth]
Jan 31 08:53:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 323 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 95 op/s
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.270 247708 DEBUG nova.compute.manager [req-420b2715-cba5-457a-b685-8adeafb91e40 req-aa971cdc-33af-4fb8-ad99-ac91069c56ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-vif-unplugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.271 247708 DEBUG oslo_concurrency.lockutils [req-420b2715-cba5-457a-b685-8adeafb91e40 req-aa971cdc-33af-4fb8-ad99-ac91069c56ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.271 247708 DEBUG oslo_concurrency.lockutils [req-420b2715-cba5-457a-b685-8adeafb91e40 req-aa971cdc-33af-4fb8-ad99-ac91069c56ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.271 247708 DEBUG oslo_concurrency.lockutils [req-420b2715-cba5-457a-b685-8adeafb91e40 req-aa971cdc-33af-4fb8-ad99-ac91069c56ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.272 247708 DEBUG nova.compute.manager [req-420b2715-cba5-457a-b685-8adeafb91e40 req-aa971cdc-33af-4fb8-ad99-ac91069c56ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] No waiting events found dispatching network-vif-unplugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.272 247708 DEBUG nova.compute.manager [req-420b2715-cba5-457a-b685-8adeafb91e40 req-aa971cdc-33af-4fb8-ad99-ac91069c56ac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-vif-unplugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:53:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:33.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:33.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.632 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.792 247708 DEBUG nova.network.neutron [req-8870b249-4d3c-418e-9187-327974997210 req-5d390dea-0680-452b-affb-34a781a1b85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updated VIF entry in instance network info cache for port 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:53:33 compute-0 nova_compute[247704]: 2026-01-31 08:53:33.793 247708 DEBUG nova.network.neutron [req-8870b249-4d3c-418e-9187-327974997210 req-5d390dea-0680-452b-affb-34a781a1b85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updating instance_info_cache with network_info: [{"id": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "address": "fa:16:3e:f6:00:a8", "network": {"id": "8393a77f-db62-4b73-9860-21350fe1e354", "bridge": "br-int", "label": "tempest-network-smoke--77904780", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44e2cb6c-c4", "ovs_interfaceid": "44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:53:33 compute-0 sshd-session[394003]: Invalid user user from 123.54.197.60 port 37506
Jan 31 08:53:34 compute-0 sshd-session[394003]: Connection closed by invalid user user 123.54.197.60 port 37506 [preauth]
Jan 31 08:53:34 compute-0 nova_compute[247704]: 2026-01-31 08:53:34.329 247708 DEBUG oslo_concurrency.lockutils [req-8870b249-4d3c-418e-9187-327974997210 req-5d390dea-0680-452b-affb-34a781a1b85b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:53:34 compute-0 ceph-mon[74496]: pgmap v3626: 305 pgs: 305 active+clean; 323 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 95 op/s
Jan 31 08:53:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:34 compute-0 nova_compute[247704]: 2026-01-31 08:53:34.982 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 548 KiB/s wr, 81 op/s
Jan 31 08:53:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:35.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:35 compute-0 sshd-session[394006]: Invalid user user from 123.54.197.60 port 37518
Jan 31 08:53:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:35.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:35 compute-0 sshd-session[394006]: Connection closed by invalid user user 123.54.197.60 port 37518 [preauth]
Jan 31 08:53:35 compute-0 nova_compute[247704]: 2026-01-31 08:53:35.997 247708 DEBUG nova.network.neutron [-] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0024711229759910663 of space, bias 1.0, pg target 0.7413368927973198 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0040689195398287735 of space, bias 1.0, pg target 1.220675861948632 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:53:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.218 247708 DEBUG nova.compute.manager [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.218 247708 DEBUG oslo_concurrency.lockutils [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.219 247708 DEBUG oslo_concurrency.lockutils [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.219 247708 DEBUG oslo_concurrency.lockutils [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.220 247708 DEBUG nova.compute.manager [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] No waiting events found dispatching network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.220 247708 WARNING nova.compute.manager [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received unexpected event network-vif-plugged-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 for instance with vm_state active and task_state deleting.
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.221 247708 DEBUG nova.compute.manager [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Received event network-vif-deleted-44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.222 247708 INFO nova.compute.manager [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Neutron deleted interface 44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0; detaching it from the instance and deleting it from the info cache
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.222 247708 DEBUG nova.network.neutron [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:53:36 compute-0 ceph-mon[74496]: pgmap v3627: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 548 KiB/s wr, 81 op/s
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.537 247708 DEBUG nova.compute.manager [req-a1f8e58b-8aeb-475b-9a95-47141bfd50d7 req-c60e48c8-355d-49e1-bb41-1511c38253bf 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Detach interface failed, port_id=44e2cb6c-c4d7-4bda-ac28-2fea02e1a1e0, reason: Instance d1eabae5-8bce-49d0-98bd-b5ce98f9a5de could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:53:36 compute-0 nova_compute[247704]: 2026-01-31 08:53:36.540 247708 INFO nova.compute.manager [-] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Took 5.32 seconds to deallocate network for instance.
Jan 31 08:53:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 558 KiB/s wr, 81 op/s
Jan 31 08:53:37 compute-0 nova_compute[247704]: 2026-01-31 08:53:37.047 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:53:37 compute-0 nova_compute[247704]: 2026-01-31 08:53:37.048 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:53:37 compute-0 nova_compute[247704]: 2026-01-31 08:53:37.176 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:37 compute-0 nova_compute[247704]: 2026-01-31 08:53:37.195 247708 DEBUG oslo_concurrency.processutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:53:37 compute-0 sshd-session[394008]: Invalid user user from 123.54.197.60 port 37530
Jan 31 08:53:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:37.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:37 compute-0 ceph-mon[74496]: pgmap v3628: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 558 KiB/s wr, 81 op/s
Jan 31 08:53:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:37.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:37 compute-0 sshd-session[394008]: Connection closed by invalid user user 123.54.197.60 port 37530 [preauth]
Jan 31 08:53:37 compute-0 nova_compute[247704]: 2026-01-31 08:53:37.703 247708 DEBUG oslo_concurrency.processutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:53:37 compute-0 nova_compute[247704]: 2026-01-31 08:53:37.710 247708 DEBUG nova.compute.provider_tree [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:53:37 compute-0 nova_compute[247704]: 2026-01-31 08:53:37.856 247708 DEBUG nova.scheduler.client.report [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:53:38 compute-0 nova_compute[247704]: 2026-01-31 08:53:38.036 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:38 compute-0 nova_compute[247704]: 2026-01-31 08:53:38.251 247708 INFO nova.scheduler.client.report [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance d1eabae5-8bce-49d0-98bd-b5ce98f9a5de
Jan 31 08:53:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1264266940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:53:38 compute-0 sshd-session[394033]: Invalid user user from 123.54.197.60 port 37540
Jan 31 08:53:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 950 KiB/s rd, 497 KiB/s wr, 71 op/s
Jan 31 08:53:39 compute-0 nova_compute[247704]: 2026-01-31 08:53:39.089 247708 DEBUG oslo_concurrency.lockutils [None req-b84fb077-e3f3-4511-91eb-819b82821bf5 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "d1eabae5-8bce-49d0-98bd-b5ce98f9a5de" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:53:39 compute-0 sshd-session[394033]: Connection closed by invalid user user 123.54.197.60 port 37540 [preauth]
Jan 31 08:53:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:39.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:39 compute-0 ceph-mon[74496]: pgmap v3629: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 950 KiB/s rd, 497 KiB/s wr, 71 op/s
Jan 31 08:53:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:39.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:39 compute-0 nova_compute[247704]: 2026-01-31 08:53:39.984 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:40 compute-0 sshd-session[394036]: Invalid user user from 123.54.197.60 port 37554
Jan 31 08:53:40 compute-0 sshd-session[394036]: Connection closed by invalid user user 123.54.197.60 port 37554 [preauth]
Jan 31 08:53:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 347 KiB/s wr, 70 op/s
Jan 31 08:53:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:41.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:41.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:42 compute-0 nova_compute[247704]: 2026-01-31 08:53:42.178 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Jan 31 08:53:42 compute-0 ceph-mon[74496]: pgmap v3630: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 347 KiB/s wr, 70 op/s
Jan 31 08:53:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Jan 31 08:53:42 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Jan 31 08:53:42 compute-0 sudo[394042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:42 compute-0 sudo[394042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:42 compute-0 sudo[394042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:42 compute-0 sudo[394067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:53:42 compute-0 sudo[394067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:53:42 compute-0 sudo[394067]: pam_unix(sudo:session): session closed for user root
Jan 31 08:53:42 compute-0 podman[394091]: 2026-01-31 08:53:42.778502145 +0000 UTC m=+0.097000351 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 08:53:42 compute-0 sshd-session[394039]: Invalid user user from 123.54.197.60 port 47450
Jan 31 08:53:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 295 KiB/s rd, 17 KiB/s wr, 14 op/s
Jan 31 08:53:43 compute-0 nova_compute[247704]: 2026-01-31 08:53:43.091 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:43 compute-0 sshd-session[394039]: Connection closed by invalid user user 123.54.197.60 port 47450 [preauth]
Jan 31 08:53:43 compute-0 ceph-mon[74496]: osdmap e389: 3 total, 3 up, 3 in
Jan 31 08:53:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:43.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:43.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Jan 31 08:53:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Jan 31 08:53:44 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Jan 31 08:53:44 compute-0 ceph-mon[74496]: pgmap v3632: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 295 KiB/s rd, 17 KiB/s wr, 14 op/s
Jan 31 08:53:44 compute-0 nova_compute[247704]: 2026-01-31 08:53:44.764 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849609.7636452, d1eabae5-8bce-49d0-98bd-b5ce98f9a5de => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:53:44 compute-0 nova_compute[247704]: 2026-01-31 08:53:44.765 247708 INFO nova.compute.manager [-] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] VM Stopped (Lifecycle Event)
Jan 31 08:53:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:44 compute-0 nova_compute[247704]: 2026-01-31 08:53:44.986 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 20 op/s
Jan 31 08:53:45 compute-0 nova_compute[247704]: 2026-01-31 08:53:45.108 247708 DEBUG nova.compute.manager [None req-901510c7-c19a-42e7-bb9c-b5a68325989d - - - - - -] [instance: d1eabae5-8bce-49d0-98bd-b5ce98f9a5de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:53:45 compute-0 sshd-session[394111]: Invalid user user from 123.54.197.60 port 47452
Jan 31 08:53:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:45.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:45 compute-0 sshd-session[394111]: Connection closed by invalid user user 123.54.197.60 port 47452 [preauth]
Jan 31 08:53:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Jan 31 08:53:45 compute-0 ceph-mon[74496]: osdmap e390: 3 total, 3 up, 3 in
Jan 31 08:53:45 compute-0 ceph-mon[74496]: pgmap v3634: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 20 op/s
Jan 31 08:53:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:45.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Jan 31 08:53:45 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Jan 31 08:53:46 compute-0 ceph-mon[74496]: osdmap e391: 3 total, 3 up, 3 in
Jan 31 08:53:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 11 MiB/s wr, 102 op/s
Jan 31 08:53:47 compute-0 nova_compute[247704]: 2026-01-31 08:53:47.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:47 compute-0 sshd-session[394115]: Invalid user user from 123.54.197.60 port 47458
Jan 31 08:53:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:47.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:47.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:47 compute-0 ceph-mon[74496]: pgmap v3636: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 11 MiB/s wr, 102 op/s
Jan 31 08:53:48 compute-0 sshd-session[394115]: Connection closed by invalid user user 123.54.197.60 port 47458 [preauth]
Jan 31 08:53:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 398 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 14 MiB/s wr, 163 op/s
Jan 31 08:53:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Jan 31 08:53:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Jan 31 08:53:49 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Jan 31 08:53:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:49.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:49.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:49 compute-0 nova_compute[247704]: 2026-01-31 08:53:49.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:50 compute-0 ceph-mon[74496]: pgmap v3637: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 398 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 14 MiB/s wr, 163 op/s
Jan 31 08:53:50 compute-0 ceph-mon[74496]: osdmap e392: 3 total, 3 up, 3 in
Jan 31 08:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:53:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:53:50 compute-0 sshd-session[394119]: Invalid user ubuntu from 123.54.197.60 port 47466
Jan 31 08:53:50 compute-0 sshd-session[394119]: Connection closed by invalid user ubuntu 123.54.197.60 port 47466 [preauth]
Jan 31 08:53:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 321 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 14 MiB/s wr, 174 op/s
Jan 31 08:53:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:51.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:51.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:51 compute-0 sshd-session[394122]: Invalid user ubuntu from 123.54.197.60 port 55600
Jan 31 08:53:52 compute-0 ceph-mon[74496]: pgmap v3639: 305 pgs: 305 active+clean; 321 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 14 MiB/s wr, 174 op/s
Jan 31 08:53:52 compute-0 sshd-session[394122]: Connection closed by invalid user ubuntu 123.54.197.60 port 55600 [preauth]
Jan 31 08:53:52 compute-0 nova_compute[247704]: 2026-01-31 08:53:52.183 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:52 compute-0 nova_compute[247704]: 2026-01-31 08:53:52.419 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:52 compute-0 nova_compute[247704]: 2026-01-31 08:53:52.529 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 8.8 MiB/s wr, 163 op/s
Jan 31 08:53:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:53:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3588722987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:53:53 compute-0 sshd-session[394125]: Invalid user ubuntu from 123.54.197.60 port 55614
Jan 31 08:53:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:53:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3588722987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:53:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:53.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:53 compute-0 sshd-session[394125]: Connection closed by invalid user ubuntu 123.54.197.60 port 55614 [preauth]
Jan 31 08:53:54 compute-0 ceph-mon[74496]: pgmap v3640: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 8.8 MiB/s wr, 163 op/s
Jan 31 08:53:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3588722987' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:53:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3588722987' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:53:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:54.328 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:53:54 compute-0 nova_compute[247704]: 2026-01-31 08:53:54.329 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:53:54.329 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:53:54 compute-0 nova_compute[247704]: 2026-01-31 08:53:54.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:53:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:53:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Jan 31 08:53:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Jan 31 08:53:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Jan 31 08:53:54 compute-0 nova_compute[247704]: 2026-01-31 08:53:54.991 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 797 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Jan 31 08:53:55 compute-0 sshd-session[394128]: Invalid user ubuntu from 123.54.197.60 port 55622
Jan 31 08:53:55 compute-0 sshd-session[394128]: Connection closed by invalid user ubuntu 123.54.197.60 port 55622 [preauth]
Jan 31 08:53:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:55.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:56 compute-0 ceph-mon[74496]: osdmap e393: 3 total, 3 up, 3 in
Jan 31 08:53:56 compute-0 ceph-mon[74496]: pgmap v3642: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 797 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Jan 31 08:53:56 compute-0 sshd-session[394131]: Invalid user ubuntu from 123.54.197.60 port 55638
Jan 31 08:53:56 compute-0 podman[394134]: 2026-01-31 08:53:56.950212399 +0000 UTC m=+0.111543053 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:53:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 280 KiB/s wr, 46 op/s
Jan 31 08:53:57 compute-0 sshd-session[394131]: Connection closed by invalid user ubuntu 123.54.197.60 port 55638 [preauth]
Jan 31 08:53:57 compute-0 nova_compute[247704]: 2026-01-31 08:53:57.190 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:53:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:53:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:57.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:53:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:58 compute-0 ceph-mon[74496]: pgmap v3643: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 280 KiB/s wr, 46 op/s
Jan 31 08:53:58 compute-0 sshd-session[394161]: Invalid user ubuntu from 123.54.197.60 port 55648
Jan 31 08:53:58 compute-0 sshd-session[394161]: Connection closed by invalid user ubuntu 123.54.197.60 port 55648 [preauth]
Jan 31 08:53:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 227 KiB/s wr, 41 op/s
Jan 31 08:53:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:53:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:59.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:53:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:53:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:53:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:53:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:00 compute-0 nova_compute[247704]: 2026-01-31 08:54:00.037 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:00 compute-0 sshd-session[394164]: Invalid user ubuntu from 123.54.197.60 port 55654
Jan 31 08:54:00 compute-0 ceph-mon[74496]: pgmap v3644: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 227 KiB/s wr, 41 op/s
Jan 31 08:54:00 compute-0 sshd-session[394164]: Connection closed by invalid user ubuntu 123.54.197.60 port 55654 [preauth]
Jan 31 08:54:00 compute-0 nova_compute[247704]: 2026-01-31 08:54:00.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 267 KiB/s rd, 229 KiB/s wr, 64 op/s
Jan 31 08:54:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/243364613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:01.332 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:01.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:02 compute-0 nova_compute[247704]: 2026-01-31 08:54:02.187 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Jan 31 08:54:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Jan 31 08:54:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Jan 31 08:54:02 compute-0 ceph-mon[74496]: pgmap v3645: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 267 KiB/s rd, 229 KiB/s wr, 64 op/s
Jan 31 08:54:02 compute-0 sshd-session[394167]: Invalid user ubuntu from 123.54.197.60 port 33614
Jan 31 08:54:02 compute-0 sshd-session[394167]: Connection closed by invalid user ubuntu 123.54.197.60 port 33614 [preauth]
Jan 31 08:54:02 compute-0 sudo[394170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:02 compute-0 sudo[394170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:02 compute-0 sudo[394170]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:02 compute-0 sudo[394195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:02 compute-0 sudo[394195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:02 compute-0 sudo[394195]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:02 compute-0 nova_compute[247704]: 2026-01-31 08:54:02.920 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:02 compute-0 nova_compute[247704]: 2026-01-31 08:54:02.921 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:03 compute-0 nova_compute[247704]: 2026-01-31 08:54:03.021 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:54:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 276 KiB/s wr, 46 op/s
Jan 31 08:54:03 compute-0 nova_compute[247704]: 2026-01-31 08:54:03.276 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:03 compute-0 nova_compute[247704]: 2026-01-31 08:54:03.277 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:03 compute-0 nova_compute[247704]: 2026-01-31 08:54:03.291 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:54:03 compute-0 nova_compute[247704]: 2026-01-31 08:54:03.292 247708 INFO nova.compute.claims [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:54:03 compute-0 ceph-mon[74496]: osdmap e394: 3 total, 3 up, 3 in
Jan 31 08:54:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:03.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:03.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:03 compute-0 nova_compute[247704]: 2026-01-31 08:54:03.615 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:04 compute-0 nova_compute[247704]: 2026-01-31 08:54:04.080 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:04 compute-0 nova_compute[247704]: 2026-01-31 08:54:04.090 247708 DEBUG nova.compute.provider_tree [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:54:04 compute-0 sshd-session[394220]: Invalid user ubuntu from 123.54.197.60 port 33630
Jan 31 08:54:04 compute-0 nova_compute[247704]: 2026-01-31 08:54:04.313 247708 DEBUG nova.scheduler.client.report [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:54:04 compute-0 ceph-mon[74496]: pgmap v3647: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 276 KiB/s wr, 46 op/s
Jan 31 08:54:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2018586957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:04 compute-0 sshd-session[394220]: Connection closed by invalid user ubuntu 123.54.197.60 port 33630 [preauth]
Jan 31 08:54:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.797595) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644797635, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1731, "num_deletes": 259, "total_data_size": 2968812, "memory_usage": 3019920, "flush_reason": "Manual Compaction"}
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Jan 31 08:54:04 compute-0 nova_compute[247704]: 2026-01-31 08:54:04.819 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:04 compute-0 nova_compute[247704]: 2026-01-31 08:54:04.820 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644828733, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 2922273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78072, "largest_seqno": 79802, "table_properties": {"data_size": 2914231, "index_size": 4855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16748, "raw_average_key_size": 20, "raw_value_size": 2898148, "raw_average_value_size": 3508, "num_data_blocks": 210, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849489, "oldest_key_time": 1769849489, "file_creation_time": 1769849644, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 31218 microseconds, and 4811 cpu microseconds.
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.828803) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 2922273 bytes OK
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.828834) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.830974) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.830998) EVENT_LOG_v1 {"time_micros": 1769849644830989, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.831025) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 2961536, prev total WAL file size 2961536, number of live WAL files 2.
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.831944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323635' seq:72057594037927935, type:22 .. '6C6F676D0033353138' seq:0, type:0; will stop at (end)
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(2853KB)], [179(11MB)]
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644832054, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 14645801, "oldest_snapshot_seqno": -1}
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 10696 keys, 14495886 bytes, temperature: kUnknown
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644972045, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 14495886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14426663, "index_size": 41398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26757, "raw_key_size": 282326, "raw_average_key_size": 26, "raw_value_size": 14239489, "raw_average_value_size": 1331, "num_data_blocks": 1583, "num_entries": 10696, "num_filter_entries": 10696, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849644, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.972623) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14495886 bytes
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.977808) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.5 rd, 103.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.2 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 11233, records dropped: 537 output_compression: NoCompression
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.977844) EVENT_LOG_v1 {"time_micros": 1769849644977827, "job": 112, "event": "compaction_finished", "compaction_time_micros": 140190, "compaction_time_cpu_micros": 35939, "output_level": 6, "num_output_files": 1, "total_output_size": 14495886, "num_input_records": 11233, "num_output_records": 10696, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644978744, "job": 112, "event": "table_file_deletion", "file_number": 181}
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849644980930, "job": 112, "event": "table_file_deletion", "file_number": 179}
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.831779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.981254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.981265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.981269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.981272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:04 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:04.981276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:05 compute-0 nova_compute[247704]: 2026-01-31 08:54:05.039 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 243 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 249 KiB/s rd, 227 KiB/s wr, 41 op/s
Jan 31 08:54:05 compute-0 nova_compute[247704]: 2026-01-31 08:54:05.355 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:54:05 compute-0 nova_compute[247704]: 2026-01-31 08:54:05.355 247708 DEBUG nova.network.neutron [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:54:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:05.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:05.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:05 compute-0 nova_compute[247704]: 2026-01-31 08:54:05.651 247708 INFO nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:54:05 compute-0 ceph-mon[74496]: pgmap v3648: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 243 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 249 KiB/s rd, 227 KiB/s wr, 41 op/s
Jan 31 08:54:05 compute-0 nova_compute[247704]: 2026-01-31 08:54:05.944 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.545 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.546 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.547 247708 INFO nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Creating image(s)
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.571 247708 DEBUG nova.storage.rbd_utils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.600 247708 DEBUG nova.storage.rbd_utils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.627 247708 DEBUG nova.storage.rbd_utils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.632 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.659 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.718 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.719 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.720 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.720 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:06 compute-0 sshd-session[394245]: Invalid user ubuntu from 123.54.197.60 port 33632
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.747 247708 DEBUG nova.storage.rbd_utils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:54:06 compute-0 nova_compute[247704]: 2026-01-31 08:54:06.752 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:07 compute-0 sshd-session[394245]: Connection closed by invalid user ubuntu 123.54.197.60 port 33632 [preauth]
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.038 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 208 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 6.1 KiB/s wr, 60 op/s
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.128 247708 DEBUG nova.storage.rbd_utils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] resizing rbd image da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.188 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.261 247708 DEBUG nova.objects.instance [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'migration_context' on Instance uuid da90f1fb-9090-49b5-a510-d7e6ac7a30d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.348 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.349 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Ensure instance console log exists: /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.350 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.350 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.351 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:07 compute-0 nova_compute[247704]: 2026-01-31 08:54:07.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:54:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:07.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:08 compute-0 ceph-mon[74496]: pgmap v3649: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 208 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 6.1 KiB/s wr, 60 op/s
Jan 31 08:54:08 compute-0 nova_compute[247704]: 2026-01-31 08:54:08.201 247708 DEBUG nova.policy [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a798fdf6d13d4af4b166dd94b5cea7cc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6e96de7b2784be1adce763bc9c9adc5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:54:08 compute-0 sshd-session[394414]: Invalid user ubuntu from 123.54.197.60 port 33644
Jan 31 08:54:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 263 KiB/s rd, 390 KiB/s wr, 58 op/s
Jan 31 08:54:09 compute-0 sshd-session[394414]: Connection closed by invalid user ubuntu 123.54.197.60 port 33644 [preauth]
Jan 31 08:54:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:09.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Jan 31 08:54:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Jan 31 08:54:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Jan 31 08:54:10 compute-0 nova_compute[247704]: 2026-01-31 08:54:10.042 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:10 compute-0 ceph-mon[74496]: pgmap v3650: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 263 KiB/s rd, 390 KiB/s wr, 58 op/s
Jan 31 08:54:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2686569800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:10 compute-0 ceph-mon[74496]: osdmap e395: 3 total, 3 up, 3 in
Jan 31 08:54:10 compute-0 sshd-session[394417]: Invalid user ubuntu from 123.54.197.60 port 33652
Jan 31 08:54:10 compute-0 nova_compute[247704]: 2026-01-31 08:54:10.598 247708 DEBUG nova.network.neutron [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Successfully created port: 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:54:10 compute-0 sshd-session[394417]: Connection closed by invalid user ubuntu 123.54.197.60 port 33652 [preauth]
Jan 31 08:54:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 219 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 2.4 MiB/s wr, 96 op/s
Jan 31 08:54:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/127726543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:11.225 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:11.225 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:11.226 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:11.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:11 compute-0 nova_compute[247704]: 2026-01-31 08:54:11.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:11 compute-0 nova_compute[247704]: 2026-01-31 08:54:11.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:54:11 compute-0 nova_compute[247704]: 2026-01-31 08:54:11.565 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:54:11 compute-0 nova_compute[247704]: 2026-01-31 08:54:11.594 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 08:54:11 compute-0 nova_compute[247704]: 2026-01-31 08:54:11.594 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:54:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:11.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:12 compute-0 ceph-mon[74496]: pgmap v3652: 305 pgs: 305 active+clean; 219 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 2.4 MiB/s wr, 96 op/s
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.191 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:12 compute-0 sshd-session[394420]: Invalid user ubuntu from 123.54.197.60 port 42036
Jan 31 08:54:12 compute-0 sshd-session[394420]: Connection closed by invalid user ubuntu 123.54.197.60 port 42036 [preauth]
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.715 247708 DEBUG nova.network.neutron [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Successfully updated port: 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.740 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.740 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquired lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.740 247708 DEBUG nova.network.neutron [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.854 247708 DEBUG nova.compute.manager [req-2e844565-fbb3-40f2-a04c-3690b33e6d81 req-ddcc674d-71ed-4712-9ef2-f7e1cc7daab0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-changed-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.854 247708 DEBUG nova.compute.manager [req-2e844565-fbb3-40f2-a04c-3690b33e6d81 req-ddcc674d-71ed-4712-9ef2-f7e1cc7daab0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Refreshing instance network info cache due to event network-changed-3b64ce10-dfce-4ef5-afaa-7985dab00bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.854 247708 DEBUG oslo_concurrency.lockutils [req-2e844565-fbb3-40f2-a04c-3690b33e6d81 req-ddcc674d-71ed-4712-9ef2-f7e1cc7daab0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:54:12 compute-0 podman[394423]: 2026-01-31 08:54:12.903240723 +0000 UTC m=+0.070972347 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:54:12 compute-0 nova_compute[247704]: 2026-01-31 08:54:12.975 247708 DEBUG nova.network.neutron [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:54:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 207 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 31 08:54:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:13 compute-0 nova_compute[247704]: 2026-01-31 08:54:13.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:13 compute-0 nova_compute[247704]: 2026-01-31 08:54:13.621 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:13 compute-0 nova_compute[247704]: 2026-01-31 08:54:13.621 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:13 compute-0 nova_compute[247704]: 2026-01-31 08:54:13.622 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:13 compute-0 nova_compute[247704]: 2026-01-31 08:54:13.622 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:54:13 compute-0 nova_compute[247704]: 2026-01-31 08:54:13.622 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:13.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:13 compute-0 sshd-session[394429]: Invalid user ubuntu from 123.54.197.60 port 42040
Jan 31 08:54:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:54:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1001679988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.079 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:14 compute-0 ceph-mon[74496]: pgmap v3653: 305 pgs: 305 active+clean; 207 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 31 08:54:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3785438324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1001679988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:14 compute-0 sshd-session[394429]: Connection closed by invalid user ubuntu 123.54.197.60 port 42040 [preauth]
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.270 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.271 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4241MB free_disk=20.94586181640625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.272 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.272 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.437 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.438 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.438 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.480 247708 DEBUG nova.network.neutron [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating instance_info_cache with network_info: [{"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.528 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.544 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Releasing lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.545 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Instance network_info: |[{"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.546 247708 DEBUG oslo_concurrency.lockutils [req-2e844565-fbb3-40f2-a04c-3690b33e6d81 req-ddcc674d-71ed-4712-9ef2-f7e1cc7daab0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.547 247708 DEBUG nova.network.neutron [req-2e844565-fbb3-40f2-a04c-3690b33e6d81 req-ddcc674d-71ed-4712-9ef2-f7e1cc7daab0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Refreshing network info cache for port 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.549 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Start _get_guest_xml network_info=[{"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.555 247708 WARNING nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.561 247708 DEBUG nova.virt.libvirt.host [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.562 247708 DEBUG nova.virt.libvirt.host [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.568 247708 DEBUG nova.virt.libvirt.host [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.569 247708 DEBUG nova.virt.libvirt.host [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.571 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.571 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.572 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.572 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.572 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.572 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.573 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.573 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.573 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.574 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.574 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.574 247708 DEBUG nova.virt.hardware [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.579 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:54:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2646746360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.922 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.927 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:54:14 compute-0 nova_compute[247704]: 2026-01-31 08:54:14.972 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:54:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:54:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490158924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.044 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.049 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.085 247708 DEBUG nova.storage.rbd_utils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.090 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.107 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.108 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2646746360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/490158924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:54:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3574426144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.495 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.498 247708 DEBUG nova.virt.libvirt.vif [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:54:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1386444972',display_name='tempest-TestStampPattern-server-1386444972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1386444972',id=203,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfz5EJ1EvVhkopw671xsxq9cmxCD9AJZYRdx1fZqnsJKH4HE3ct43AahyjBDecQtfre/K2oZ3kPMxp5bbpWjZgXwmif2lJfZCK32Cd1YqdcHbaKXFc2nUgzqikPeTQnpA==',key_name='tempest-TestStampPattern-1030074900',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6e96de7b2784be1adce763bc9c9adc5',ramdisk_id='',reservation_id='r-or1ep324',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-23409568',owner_user_name='tempest-TestStampPattern-23409568-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:54:06Z,user_data=None,user_id='a798fdf6d13d4af4b166dd94b5cea7cc',uuid=da90f1fb-9090-49b5-a510-d7e6ac7a30d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.499 247708 DEBUG nova.network.os_vif_util [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converting VIF {"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.500 247708 DEBUG nova.network.os_vif_util [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3b64ce10-dfce-4ef5-afaa-7985dab00bc6,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b64ce10-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.502 247708 DEBUG nova.objects.instance [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid da90f1fb-9090-49b5-a510-d7e6ac7a30d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:54:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:15.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.544 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <uuid>da90f1fb-9090-49b5-a510-d7e6ac7a30d6</uuid>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <name>instance-000000cb</name>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <nova:name>tempest-TestStampPattern-server-1386444972</nova:name>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:54:14</nova:creationTime>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <nova:user uuid="a798fdf6d13d4af4b166dd94b5cea7cc">tempest-TestStampPattern-23409568-project-member</nova:user>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <nova:project uuid="e6e96de7b2784be1adce763bc9c9adc5">tempest-TestStampPattern-23409568</nova:project>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <nova:port uuid="3b64ce10-dfce-4ef5-afaa-7985dab00bc6">
Jan 31 08:54:15 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <system>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <entry name="serial">da90f1fb-9090-49b5-a510-d7e6ac7a30d6</entry>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <entry name="uuid">da90f1fb-9090-49b5-a510-d7e6ac7a30d6</entry>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </system>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <os>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   </os>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <features>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   </features>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk">
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       </source>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk.config">
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       </source>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:54:15 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:d6:93:84"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <target dev="tap3b64ce10-df"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6/console.log" append="off"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <video>
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </video>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:54:15 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:54:15 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:54:15 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:54:15 compute-0 nova_compute[247704]: </domain>
Jan 31 08:54:15 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.545 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Preparing to wait for external event network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.545 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.546 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.546 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.546 247708 DEBUG nova.virt.libvirt.vif [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:54:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1386444972',display_name='tempest-TestStampPattern-server-1386444972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1386444972',id=203,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfz5EJ1EvVhkopw671xsxq9cmxCD9AJZYRdx1fZqnsJKH4HE3ct43AahyjBDecQtfre/K2oZ3kPMxp5bbpWjZgXwmif2lJfZCK32Cd1YqdcHbaKXFc2nUgzqikPeTQnpA==',key_name='tempest-TestStampPattern-1030074900',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6e96de7b2784be1adce763bc9c9adc5',ramdisk_id='',reservation_id='r-or1ep324',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-23409568',owner_user_name='tempest-TestStampPattern-23409568-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:54:06Z,user_data=None,user_id='a798fdf6d13d4af4b166dd94b5cea7cc',uuid=da90f1fb-9090-49b5-a510-d7e6ac7a30d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.547 247708 DEBUG nova.network.os_vif_util [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converting VIF {"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.547 247708 DEBUG nova.network.os_vif_util [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3b64ce10-dfce-4ef5-afaa-7985dab00bc6,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b64ce10-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.548 247708 DEBUG os_vif [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3b64ce10-dfce-4ef5-afaa-7985dab00bc6,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b64ce10-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.548 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.549 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.549 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.552 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.552 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b64ce10-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.553 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b64ce10-df, col_values=(('external_ids', {'iface-id': '3b64ce10-dfce-4ef5-afaa-7985dab00bc6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:93:84', 'vm-uuid': 'da90f1fb-9090-49b5-a510-d7e6ac7a30d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.554 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:15 compute-0 NetworkManager[49108]: <info>  [1769849655.5558] manager: (tap3b64ce10-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.557 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.563 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.564 247708 INFO os_vif [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3b64ce10-dfce-4ef5-afaa-7985dab00bc6,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b64ce10-df')
Jan 31 08:54:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:15.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.627 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.627 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.628 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No VIF found with MAC fa:16:3e:d6:93:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.628 247708 INFO nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Using config drive
Jan 31 08:54:15 compute-0 nova_compute[247704]: 2026-01-31 08:54:15.658 247708 DEBUG nova.storage.rbd_utils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:54:16 compute-0 ceph-mon[74496]: pgmap v3654: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 31 08:54:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3574426144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:16 compute-0 sshd-session[394469]: Invalid user ubuntu from 123.54.197.60 port 42054
Jan 31 08:54:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.139 247708 INFO nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Creating config drive at /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6/disk.config
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.143 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptgr0p2xq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:17 compute-0 sshd-session[394469]: Connection closed by invalid user ubuntu 123.54.197.60 port 42054 [preauth]
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.193 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.267 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptgr0p2xq" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.294 247708 DEBUG nova.storage.rbd_utils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] rbd image da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.298 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6/disk.config da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.482 247708 DEBUG oslo_concurrency.processutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6/disk.config da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.483 247708 INFO nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Deleting local config drive /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6/disk.config because it was imported into RBD.
Jan 31 08:54:17 compute-0 virtqemud[247621]: End of file while reading data: Input/output error
Jan 31 08:54:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:17.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:17 compute-0 kernel: tap3b64ce10-df: entered promiscuous mode
Jan 31 08:54:17 compute-0 NetworkManager[49108]: <info>  [1769849657.5436] manager: (tap3b64ce10-df): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Jan 31 08:54:17 compute-0 ovn_controller[149457]: 2026-01-31T08:54:17Z|00854|binding|INFO|Claiming lport 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 for this chassis.
Jan 31 08:54:17 compute-0 ovn_controller[149457]: 2026-01-31T08:54:17Z|00855|binding|INFO|3b64ce10-dfce-4ef5-afaa-7985dab00bc6: Claiming fa:16:3e:d6:93:84 10.100.0.14
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.543 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.550 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.554 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.565 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:93:84 10.100.0.14'], port_security=['fa:16:3e:d6:93:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'da90f1fb-9090-49b5-a510-d7e6ac7a30d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53eebd24-3b32-4949-827a-524f9e042652', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6e96de7b2784be1adce763bc9c9adc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e34ba2ee-7d71-4f69-8288-b62c847fa225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fa4ba9c-944f-4ccd-90bc-07135c4442c5, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3b64ce10-dfce-4ef5-afaa-7985dab00bc6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.566 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 in datapath 53eebd24-3b32-4949-827a-524f9e042652 bound to our chassis
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.568 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 53eebd24-3b32-4949-827a-524f9e042652
Jan 31 08:54:17 compute-0 systemd-machined[214448]: New machine qemu-90-instance-000000cb.
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.584 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.584 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cf05f499-1b92-4ea3-af9c-64d1e5ad91e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.586 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap53eebd24-31 in ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.589 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap53eebd24-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.589 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[342e0b73-3007-4023-8ccc-35ff75aef324]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.590 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[54cd2a2f-aba3-4e98-959d-068225e030d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_controller[149457]: 2026-01-31T08:54:17Z|00856|binding|INFO|Setting lport 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 ovn-installed in OVS
Jan 31 08:54:17 compute-0 ovn_controller[149457]: 2026-01-31T08:54:17Z|00857|binding|INFO|Setting lport 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 up in Southbound
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.594 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-000000cb.
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.604 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[5e068577-5741-430a-bce8-961985d21381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 systemd-udevd[394632]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:54:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:17 compute-0 NetworkManager[49108]: <info>  [1769849657.6338] device (tap3b64ce10-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:54:17 compute-0 NetworkManager[49108]: <info>  [1769849657.6343] device (tap3b64ce10-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.633 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b905f321-694d-4fb2-920a-b1e96abda86e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.661 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[22217301-5e94-4af5-9c59-517f0ecc6201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.666 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9ce23b-bf7a-4ff9-a7ad-87d49426d9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 NetworkManager[49108]: <info>  [1769849657.6669] manager: (tap53eebd24-30): new Veth device (/org/freedesktop/NetworkManager/Devices/377)
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.694 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[2859c095-35b9-4ce3-b88b-fda0fbc32385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.697 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7ca421-56ea-449e-b9b8-fc902294bfba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 NetworkManager[49108]: <info>  [1769849657.7174] device (tap53eebd24-30): carrier: link connected
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.723 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[66a83eae-50c8-4397-b91b-6daa3405ba46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.734 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3121c9-16f5-43b4-a423-dc95f95e5c3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53eebd24-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 990033, 'reachable_time': 26289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394662, 'error': None, 'target': 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.746 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc487f2-1e8f-4f59-b40a-20ae9e6e89a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:bd43'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 990033, 'tstamp': 990033}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394663, 'error': None, 'target': 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.762 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[512285b8-014e-4019-88e0-45700f7eeea4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap53eebd24-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:bd:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 990033, 'reachable_time': 26289, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394664, 'error': None, 'target': 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.795 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2655bce8-92be-46e5-bc2f-3fd82fdf807c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.857 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b434a63e-52a7-4ffb-b546-64425b44e7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.859 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53eebd24-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.859 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.860 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap53eebd24-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.862 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 kernel: tap53eebd24-30: entered promiscuous mode
Jan 31 08:54:17 compute-0 NetworkManager[49108]: <info>  [1769849657.8639] manager: (tap53eebd24-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.866 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap53eebd24-30, col_values=(('external_ids', {'iface-id': '62efe4e7-cbd5-44c6-8fac-7cb5fe1c3604'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.868 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 ovn_controller[149457]: 2026-01-31T08:54:17Z|00858|binding|INFO|Releasing lport 62efe4e7-cbd5-44c6-8fac-7cb5fe1c3604 from this chassis (sb_readonly=0)
Jan 31 08:54:17 compute-0 nova_compute[247704]: 2026-01-31 08:54:17.878 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.880 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/53eebd24-3b32-4949-827a-524f9e042652.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/53eebd24-3b32-4949-827a-524f9e042652.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.881 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0977be48-636d-4d57-93aa-0fcf542e015c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.882 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-53eebd24-3b32-4949-827a-524f9e042652
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/53eebd24-3b32-4949-827a-524f9e042652.pid.haproxy
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 53eebd24-3b32-4949-827a-524f9e042652
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:54:17 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:17.883 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'env', 'PROCESS_TAG=haproxy-53eebd24-3b32-4949-827a-524f9e042652', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/53eebd24-3b32-4949-827a-524f9e042652.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.092 247708 DEBUG nova.compute.manager [req-64739fb5-9cc4-4792-94e4-011d63407b37 req-d7d5603b-255d-4cce-b12d-7c3d0b92e3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.093 247708 DEBUG oslo_concurrency.lockutils [req-64739fb5-9cc4-4792-94e4-011d63407b37 req-d7d5603b-255d-4cce-b12d-7c3d0b92e3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.093 247708 DEBUG oslo_concurrency.lockutils [req-64739fb5-9cc4-4792-94e4-011d63407b37 req-d7d5603b-255d-4cce-b12d-7c3d0b92e3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.094 247708 DEBUG oslo_concurrency.lockutils [req-64739fb5-9cc4-4792-94e4-011d63407b37 req-d7d5603b-255d-4cce-b12d-7c3d0b92e3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.094 247708 DEBUG nova.compute.manager [req-64739fb5-9cc4-4792-94e4-011d63407b37 req-d7d5603b-255d-4cce-b12d-7c3d0b92e3f7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Processing event network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.161 247708 DEBUG nova.network.neutron [req-2e844565-fbb3-40f2-a04c-3690b33e6d81 req-ddcc674d-71ed-4712-9ef2-f7e1cc7daab0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updated VIF entry in instance network info cache for port 3b64ce10-dfce-4ef5-afaa-7985dab00bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.163 247708 DEBUG nova.network.neutron [req-2e844565-fbb3-40f2-a04c-3690b33e6d81 req-ddcc674d-71ed-4712-9ef2-f7e1cc7daab0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating instance_info_cache with network_info: [{"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:54:18 compute-0 ceph-mon[74496]: pgmap v3655: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 08:54:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3250238092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.216 247708 DEBUG oslo_concurrency.lockutils [req-2e844565-fbb3-40f2-a04c-3690b33e6d81 req-ddcc674d-71ed-4712-9ef2-f7e1cc7daab0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:54:18 compute-0 podman[394696]: 2026-01-31 08:54:18.24483236 +0000 UTC m=+0.059454787 container create 1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 08:54:18 compute-0 systemd[1]: Started libpod-conmon-1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855.scope.
Jan 31 08:54:18 compute-0 podman[394696]: 2026-01-31 08:54:18.21272675 +0000 UTC m=+0.027349217 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:54:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:54:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9de26101b1d497cc742cd5422f2bec748d01ba25d899c38998bfd782c0bc4d77/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:18 compute-0 podman[394696]: 2026-01-31 08:54:18.356106406 +0000 UTC m=+0.170728843 container init 1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 08:54:18 compute-0 podman[394696]: 2026-01-31 08:54:18.361611799 +0000 UTC m=+0.176234236 container start 1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:54:18 compute-0 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[394711]: [NOTICE]   (394716) : New worker (394718) forked
Jan 31 08:54:18 compute-0 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[394711]: [NOTICE]   (394716) : Loading success.
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.996 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849658.9955482, da90f1fb-9090-49b5-a510-d7e6ac7a30d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:54:18 compute-0 nova_compute[247704]: 2026-01-31 08:54:18.998 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] VM Started (Lifecycle Event)
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.001 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.006 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.009 247708 INFO nova.virt.libvirt.driver [-] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Instance spawned successfully.
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.009 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.026 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.031 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.034 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.034 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.034 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.035 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.035 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.035 247708 DEBUG nova.virt.libvirt.driver [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:54:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 1.7 MiB/s wr, 65 op/s
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.060 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.060 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849658.995843, da90f1fb-9090-49b5-a510-d7e6ac7a30d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.060 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] VM Paused (Lifecycle Event)
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.087 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.091 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849659.0050719, da90f1fb-9090-49b5-a510-d7e6ac7a30d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.091 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] VM Resumed (Lifecycle Event)
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.114 247708 INFO nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Took 12.57 seconds to spawn the instance on the hypervisor.
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.115 247708 DEBUG nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.118 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.124 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.157 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.205 247708 INFO nova.compute.manager [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Took 15.97 seconds to build instance.
Jan 31 08:54:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3882194921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:19 compute-0 nova_compute[247704]: 2026-01-31 08:54:19.221 247708 DEBUG oslo_concurrency.lockutils [None req-86ea31c9-4ce6-4fd6-9a75-a9bbba77b51f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:19.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:19.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:19 compute-0 sshd-session[394727]: Invalid user ubuntu from 123.54.197.60 port 42062
Jan 31 08:54:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:19 compute-0 sshd-session[394727]: Connection closed by invalid user ubuntu 123.54.197.60 port 42062 [preauth]
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.102 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.103 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.103 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.205 247708 DEBUG nova.compute.manager [req-67c6e789-92f9-49b8-b184-cc05a37cec9c req-12a0bd85-c5ed-4736-a2ba-fb465449ff17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.205 247708 DEBUG oslo_concurrency.lockutils [req-67c6e789-92f9-49b8-b184-cc05a37cec9c req-12a0bd85-c5ed-4736-a2ba-fb465449ff17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.206 247708 DEBUG oslo_concurrency.lockutils [req-67c6e789-92f9-49b8-b184-cc05a37cec9c req-12a0bd85-c5ed-4736-a2ba-fb465449ff17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.206 247708 DEBUG oslo_concurrency.lockutils [req-67c6e789-92f9-49b8-b184-cc05a37cec9c req-12a0bd85-c5ed-4736-a2ba-fb465449ff17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.206 247708 DEBUG nova.compute.manager [req-67c6e789-92f9-49b8-b184-cc05a37cec9c req-12a0bd85-c5ed-4736-a2ba-fb465449ff17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] No waiting events found dispatching network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.206 247708 WARNING nova.compute.manager [req-67c6e789-92f9-49b8-b184-cc05a37cec9c req-12a0bd85-c5ed-4736-a2ba-fb465449ff17 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received unexpected event network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 for instance with vm_state active and task_state None.
Jan 31 08:54:20 compute-0 ceph-mon[74496]: pgmap v3656: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 1.7 MiB/s wr, 65 op/s
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:54:20
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'vms', 'backups']
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:54:20 compute-0 nova_compute[247704]: 2026-01-31 08:54:20.555 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:54:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:54:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 770 KiB/s rd, 712 KiB/s wr, 77 op/s
Jan 31 08:54:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:21.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:21.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:21 compute-0 sshd-session[394771]: Invalid user ubuntu from 123.54.197.60 port 53846
Jan 31 08:54:22 compute-0 sshd-session[394771]: Connection closed by invalid user ubuntu 123.54.197.60 port 53846 [preauth]
Jan 31 08:54:22 compute-0 nova_compute[247704]: 2026-01-31 08:54:22.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:22 compute-0 ceph-mon[74496]: pgmap v3657: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 770 KiB/s rd, 712 KiB/s wr, 77 op/s
Jan 31 08:54:22 compute-0 sudo[394777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:22 compute-0 sudo[394777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:22 compute-0 sudo[394777]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:22 compute-0 sudo[394802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:22 compute-0 sudo[394802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:22 compute-0 sudo[394802]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 53 op/s
Jan 31 08:54:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:23.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:23 compute-0 sshd-session[394775]: Invalid user ubuntu from 123.54.197.60 port 53858
Jan 31 08:54:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:23.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:23 compute-0 sshd-session[394775]: Connection closed by invalid user ubuntu 123.54.197.60 port 53858 [preauth]
Jan 31 08:54:24 compute-0 ceph-mon[74496]: pgmap v3658: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 53 op/s
Jan 31 08:54:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/658489239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:54:24 compute-0 sudo[394830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:24 compute-0 sudo[394830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:24 compute-0 sudo[394830]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:24 compute-0 sudo[394855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:54:24 compute-0 sudo[394855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:24 compute-0 sudo[394855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:24 compute-0 sudo[394880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:24 compute-0 sudo[394880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:24 compute-0 sudo[394880]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:24 compute-0 sudo[394905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:54:24 compute-0 sudo[394905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 81 op/s
Jan 31 08:54:25 compute-0 sudo[394905]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:54:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:54:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:54:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 48b594b3-93b6-4ece-82f5-26069e7bfb0c does not exist
Jan 31 08:54:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f96b08f5-7350-4ceb-ab34-4caaf48381a0 does not exist
Jan 31 08:54:25 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9df13275-05c2-4731-b744-035806328e97 does not exist
Jan 31 08:54:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:54:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:54:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:54:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:54:25 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:54:25 compute-0 sudo[394961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:25 compute-0 sudo[394961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:25 compute-0 sudo[394961]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:25 compute-0 sudo[394986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:54:25 compute-0 sudo[394986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:25 compute-0 sudo[394986]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:25 compute-0 NetworkManager[49108]: <info>  [1769849665.4239] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Jan 31 08:54:25 compute-0 NetworkManager[49108]: <info>  [1769849665.4249] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Jan 31 08:54:25 compute-0 nova_compute[247704]: 2026-01-31 08:54:25.423 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:25 compute-0 sudo[395011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:25 compute-0 sudo[395011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:25 compute-0 sudo[395011]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:25 compute-0 nova_compute[247704]: 2026-01-31 08:54:25.466 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:25 compute-0 ovn_controller[149457]: 2026-01-31T08:54:25Z|00859|binding|INFO|Releasing lport 62efe4e7-cbd5-44c6-8fac-7cb5fe1c3604 from this chassis (sb_readonly=0)
Jan 31 08:54:25 compute-0 nova_compute[247704]: 2026-01-31 08:54:25.482 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:25 compute-0 sudo[395036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:54:25 compute-0 sudo[395036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:25.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:25 compute-0 nova_compute[247704]: 2026-01-31 08:54:25.558 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:25 compute-0 podman[395104]: 2026-01-31 08:54:25.787862367 +0000 UTC m=+0.043179942 container create 02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:54:25 compute-0 systemd[1]: Started libpod-conmon-02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30.scope.
Jan 31 08:54:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:54:25 compute-0 podman[395104]: 2026-01-31 08:54:25.765851651 +0000 UTC m=+0.021169326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:54:25 compute-0 podman[395104]: 2026-01-31 08:54:25.8739402 +0000 UTC m=+0.129257805 container init 02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:54:25 compute-0 podman[395104]: 2026-01-31 08:54:25.881169596 +0000 UTC m=+0.136487171 container start 02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:54:25 compute-0 podman[395104]: 2026-01-31 08:54:25.886181007 +0000 UTC m=+0.141498632 container attach 02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:54:25 compute-0 kind_hofstadter[395120]: 167 167
Jan 31 08:54:25 compute-0 systemd[1]: libpod-02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30.scope: Deactivated successfully.
Jan 31 08:54:25 compute-0 podman[395104]: 2026-01-31 08:54:25.887653883 +0000 UTC m=+0.142971478 container died 02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:54:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-46c088a515b7ac52ce48575db52f8ab541d0efebeb8f648efa8a8e1c0e861e1b-merged.mount: Deactivated successfully.
Jan 31 08:54:25 compute-0 podman[395104]: 2026-01-31 08:54:25.934882502 +0000 UTC m=+0.190200087 container remove 02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:54:25 compute-0 systemd[1]: libpod-conmon-02b2183e8b2302043ff199ea38cc184f8e4497982cdb08f0717f8db142c16b30.scope: Deactivated successfully.
Jan 31 08:54:26 compute-0 podman[395146]: 2026-01-31 08:54:26.080506533 +0000 UTC m=+0.036031958 container create 68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:54:26 compute-0 systemd[1]: Started libpod-conmon-68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0.scope.
Jan 31 08:54:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08ed0fefd75bf7c33a6a27e3365be14aef4f26112647bf305b39b1e11f958cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08ed0fefd75bf7c33a6a27e3365be14aef4f26112647bf305b39b1e11f958cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08ed0fefd75bf7c33a6a27e3365be14aef4f26112647bf305b39b1e11f958cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08ed0fefd75bf7c33a6a27e3365be14aef4f26112647bf305b39b1e11f958cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08ed0fefd75bf7c33a6a27e3365be14aef4f26112647bf305b39b1e11f958cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:26 compute-0 podman[395146]: 2026-01-31 08:54:26.160557489 +0000 UTC m=+0.116082964 container init 68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:54:26 compute-0 podman[395146]: 2026-01-31 08:54:26.065165959 +0000 UTC m=+0.020691404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:54:26 compute-0 podman[395146]: 2026-01-31 08:54:26.173320779 +0000 UTC m=+0.128846224 container start 68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_allen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 08:54:26 compute-0 podman[395146]: 2026-01-31 08:54:26.177910561 +0000 UTC m=+0.133436006 container attach 68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:54:26 compute-0 ceph-mon[74496]: pgmap v3659: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 81 op/s
Jan 31 08:54:26 compute-0 sshd-session[394827]: Invalid user ubuntu from 123.54.197.60 port 53880
Jan 31 08:54:26 compute-0 sshd-session[394827]: Connection closed by invalid user ubuntu 123.54.197.60 port 53880 [preauth]
Jan 31 08:54:26 compute-0 nova_compute[247704]: 2026-01-31 08:54:26.965 247708 DEBUG nova.compute.manager [req-e8f1c830-6103-4038-ab12-93d0b3c30e08 req-e9d2877c-8433-46fe-b33f-a857d1035428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-changed-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:54:26 compute-0 nova_compute[247704]: 2026-01-31 08:54:26.967 247708 DEBUG nova.compute.manager [req-e8f1c830-6103-4038-ab12-93d0b3c30e08 req-e9d2877c-8433-46fe-b33f-a857d1035428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Refreshing instance network info cache due to event network-changed-3b64ce10-dfce-4ef5-afaa-7985dab00bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:54:26 compute-0 nova_compute[247704]: 2026-01-31 08:54:26.967 247708 DEBUG oslo_concurrency.lockutils [req-e8f1c830-6103-4038-ab12-93d0b3c30e08 req-e9d2877c-8433-46fe-b33f-a857d1035428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:54:26 compute-0 nova_compute[247704]: 2026-01-31 08:54:26.967 247708 DEBUG oslo_concurrency.lockutils [req-e8f1c830-6103-4038-ab12-93d0b3c30e08 req-e9d2877c-8433-46fe-b33f-a857d1035428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:54:26 compute-0 nova_compute[247704]: 2026-01-31 08:54:26.967 247708 DEBUG nova.network.neutron [req-e8f1c830-6103-4038-ab12-93d0b3c30e08 req-e9d2877c-8433-46fe-b33f-a857d1035428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Refreshing network info cache for port 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:54:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 512 KiB/s wr, 75 op/s
Jan 31 08:54:27 compute-0 dazzling_allen[395162]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:54:27 compute-0 dazzling_allen[395162]: --> relative data size: 1.0
Jan 31 08:54:27 compute-0 dazzling_allen[395162]: --> All data devices are unavailable
Jan 31 08:54:27 compute-0 systemd[1]: libpod-68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0.scope: Deactivated successfully.
Jan 31 08:54:27 compute-0 podman[395146]: 2026-01-31 08:54:27.099134342 +0000 UTC m=+1.054659777 container died 68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:54:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d08ed0fefd75bf7c33a6a27e3365be14aef4f26112647bf305b39b1e11f958cd-merged.mount: Deactivated successfully.
Jan 31 08:54:27 compute-0 podman[395146]: 2026-01-31 08:54:27.154718333 +0000 UTC m=+1.110243758 container remove 68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:54:27 compute-0 systemd[1]: libpod-conmon-68a0461af9e0c9cb7494dd34958dbf18554109238ac572f8105d66b3474ab3a0.scope: Deactivated successfully.
Jan 31 08:54:27 compute-0 sudo[395036]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:27 compute-0 nova_compute[247704]: 2026-01-31 08:54:27.197 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:27 compute-0 podman[395180]: 2026-01-31 08:54:27.237922157 +0000 UTC m=+0.101549871 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:54:27 compute-0 sudo[395206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:27 compute-0 sudo[395206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:27 compute-0 sudo[395206]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:27 compute-0 sudo[395237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:54:27 compute-0 sudo[395237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:27 compute-0 sudo[395237]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:27 compute-0 sudo[395262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:27 compute-0 sudo[395262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:27 compute-0 sudo[395262]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:27 compute-0 sudo[395287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:54:27 compute-0 sudo[395287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:27.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:27.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:27 compute-0 podman[395349]: 2026-01-31 08:54:27.823754111 +0000 UTC m=+0.042986715 container create be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:54:27 compute-0 systemd[1]: Started libpod-conmon-be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b.scope.
Jan 31 08:54:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:54:27 compute-0 ovn_controller[149457]: 2026-01-31T08:54:27Z|00860|binding|INFO|Releasing lport 62efe4e7-cbd5-44c6-8fac-7cb5fe1c3604 from this chassis (sb_readonly=0)
Jan 31 08:54:27 compute-0 podman[395349]: 2026-01-31 08:54:27.804565386 +0000 UTC m=+0.023797940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:54:27 compute-0 podman[395349]: 2026-01-31 08:54:27.913587866 +0000 UTC m=+0.132820460 container init be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:54:27 compute-0 nova_compute[247704]: 2026-01-31 08:54:27.917 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:27 compute-0 podman[395349]: 2026-01-31 08:54:27.924557663 +0000 UTC m=+0.143790217 container start be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:54:27 compute-0 podman[395349]: 2026-01-31 08:54:27.928446647 +0000 UTC m=+0.147679191 container attach be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:54:27 compute-0 blissful_germain[395365]: 167 167
Jan 31 08:54:27 compute-0 systemd[1]: libpod-be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b.scope: Deactivated successfully.
Jan 31 08:54:27 compute-0 conmon[395365]: conmon be5ba570d0fe10a0cb04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b.scope/container/memory.events
Jan 31 08:54:27 compute-0 podman[395349]: 2026-01-31 08:54:27.934842413 +0000 UTC m=+0.154074957 container died be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:54:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a399310ef92caddbc0df328d1364ecb07d2f9eef3053816755b4c02b34558598-merged.mount: Deactivated successfully.
Jan 31 08:54:27 compute-0 podman[395349]: 2026-01-31 08:54:27.981443886 +0000 UTC m=+0.200676440 container remove be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:54:27 compute-0 sshd-session[395174]: Invalid user ubuntu from 123.54.197.60 port 53886
Jan 31 08:54:27 compute-0 systemd[1]: libpod-conmon-be5ba570d0fe10a0cb04224185fa7113fe2a22d809a868652b1275450d44e96b.scope: Deactivated successfully.
Jan 31 08:54:28 compute-0 podman[395388]: 2026-01-31 08:54:28.159950756 +0000 UTC m=+0.051038861 container create 065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:54:28 compute-0 systemd[1]: Started libpod-conmon-065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113.scope.
Jan 31 08:54:28 compute-0 sshd-session[395174]: Connection closed by invalid user ubuntu 123.54.197.60 port 53886 [preauth]
Jan 31 08:54:28 compute-0 podman[395388]: 2026-01-31 08:54:28.137628463 +0000 UTC m=+0.028716598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:54:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41ad296ac48956337a6ef04f853466fe989292b649a6421c6a5c6460889ef6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41ad296ac48956337a6ef04f853466fe989292b649a6421c6a5c6460889ef6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41ad296ac48956337a6ef04f853466fe989292b649a6421c6a5c6460889ef6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e41ad296ac48956337a6ef04f853466fe989292b649a6421c6a5c6460889ef6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:28 compute-0 podman[395388]: 2026-01-31 08:54:28.283182313 +0000 UTC m=+0.174270428 container init 065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:54:28 compute-0 podman[395388]: 2026-01-31 08:54:28.29130485 +0000 UTC m=+0.182392955 container start 065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:54:28 compute-0 podman[395388]: 2026-01-31 08:54:28.301015127 +0000 UTC m=+0.192103242 container attach 065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:54:28 compute-0 ceph-mon[74496]: pgmap v3660: 305 pgs: 305 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 512 KiB/s wr, 75 op/s
Jan 31 08:54:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 76 op/s
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]: {
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:     "0": [
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:         {
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "devices": [
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "/dev/loop3"
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             ],
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "lv_name": "ceph_lv0",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "lv_size": "7511998464",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "name": "ceph_lv0",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "tags": {
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.cluster_name": "ceph",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.crush_device_class": "",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.encrypted": "0",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.osd_id": "0",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.type": "block",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:                 "ceph.vdo": "0"
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             },
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "type": "block",
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:             "vg_name": "ceph_vg0"
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:         }
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]:     ]
Jan 31 08:54:29 compute-0 dreamy_perlman[395405]: }
Jan 31 08:54:29 compute-0 systemd[1]: libpod-065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113.scope: Deactivated successfully.
Jan 31 08:54:29 compute-0 podman[395388]: 2026-01-31 08:54:29.115120843 +0000 UTC m=+1.006208948 container died 065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:54:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e41ad296ac48956337a6ef04f853466fe989292b649a6421c6a5c6460889ef6a-merged.mount: Deactivated successfully.
Jan 31 08:54:29 compute-0 podman[395388]: 2026-01-31 08:54:29.178232788 +0000 UTC m=+1.069320893 container remove 065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:54:29 compute-0 systemd[1]: libpod-conmon-065a73a278795ee32bdd860ec39857be53e3885ba4127158ce2486b1c1f0d113.scope: Deactivated successfully.
Jan 31 08:54:29 compute-0 sudo[395287]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:29 compute-0 sudo[395428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:29 compute-0 sudo[395428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:29 compute-0 sudo[395428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:29 compute-0 sudo[395453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:54:29 compute-0 sudo[395453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:29 compute-0 sudo[395453]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:29 compute-0 nova_compute[247704]: 2026-01-31 08:54:29.367 247708 DEBUG nova.network.neutron [req-e8f1c830-6103-4038-ab12-93d0b3c30e08 req-e9d2877c-8433-46fe-b33f-a857d1035428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updated VIF entry in instance network info cache for port 3b64ce10-dfce-4ef5-afaa-7985dab00bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:54:29 compute-0 nova_compute[247704]: 2026-01-31 08:54:29.369 247708 DEBUG nova.network.neutron [req-e8f1c830-6103-4038-ab12-93d0b3c30e08 req-e9d2877c-8433-46fe-b33f-a857d1035428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating instance_info_cache with network_info: [{"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:54:29 compute-0 sudo[395478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:29 compute-0 sudo[395478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:29 compute-0 sudo[395478]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:29 compute-0 nova_compute[247704]: 2026-01-31 08:54:29.439 247708 DEBUG oslo_concurrency.lockutils [req-e8f1c830-6103-4038-ab12-93d0b3c30e08 req-e9d2877c-8433-46fe-b33f-a857d1035428 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:54:29 compute-0 sudo[395503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:54:29 compute-0 sudo[395503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:29.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:29 compute-0 podman[395570]: 2026-01-31 08:54:29.757578424 +0000 UTC m=+0.043048727 container create 0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:54:29 compute-0 systemd[1]: Started libpod-conmon-0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c.scope.
Jan 31 08:54:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:54:29 compute-0 podman[395570]: 2026-01-31 08:54:29.828814286 +0000 UTC m=+0.114284639 container init 0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:54:29 compute-0 podman[395570]: 2026-01-31 08:54:29.739347251 +0000 UTC m=+0.024817574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:54:29 compute-0 podman[395570]: 2026-01-31 08:54:29.836449933 +0000 UTC m=+0.121920256 container start 0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 08:54:29 compute-0 podman[395570]: 2026-01-31 08:54:29.839540968 +0000 UTC m=+0.125011281 container attach 0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:54:29 compute-0 practical_maxwell[395586]: 167 167
Jan 31 08:54:29 compute-0 systemd[1]: libpod-0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c.scope: Deactivated successfully.
Jan 31 08:54:29 compute-0 podman[395570]: 2026-01-31 08:54:29.842061409 +0000 UTC m=+0.127531722 container died 0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:54:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-97d7aff50f0b4f82a2fdca1d12c17e723f3454062053a8b72c93a23c0a6cd64c-merged.mount: Deactivated successfully.
Jan 31 08:54:29 compute-0 podman[395570]: 2026-01-31 08:54:29.880314909 +0000 UTC m=+0.165785232 container remove 0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:54:29 compute-0 systemd[1]: libpod-conmon-0e5a2907c546ee5b3c40fbd864a3186f344818896b7406aef2c9c21ae2dc1e3c.scope: Deactivated successfully.
Jan 31 08:54:30 compute-0 podman[395609]: 2026-01-31 08:54:30.052319532 +0000 UTC m=+0.043091749 container create dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:54:30 compute-0 systemd[1]: Started libpod-conmon-dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671.scope.
Jan 31 08:54:30 compute-0 podman[395609]: 2026-01-31 08:54:30.031432364 +0000 UTC m=+0.022204561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:54:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e16f8707990d2a1229c7a2b4b14f04a1f2ac4e78e7271b4d42f09398d8f20c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e16f8707990d2a1229c7a2b4b14f04a1f2ac4e78e7271b4d42f09398d8f20c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e16f8707990d2a1229c7a2b4b14f04a1f2ac4e78e7271b4d42f09398d8f20c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e16f8707990d2a1229c7a2b4b14f04a1f2ac4e78e7271b4d42f09398d8f20c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:54:30 compute-0 podman[395609]: 2026-01-31 08:54:30.155581333 +0000 UTC m=+0.146353540 container init dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:54:30 compute-0 podman[395609]: 2026-01-31 08:54:30.162211483 +0000 UTC m=+0.152983680 container start dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:54:30 compute-0 podman[395609]: 2026-01-31 08:54:30.170936456 +0000 UTC m=+0.161708653 container attach dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 08:54:30 compute-0 sshd-session[395411]: Invalid user ubuntu from 123.54.197.60 port 53894
Jan 31 08:54:30 compute-0 ceph-mon[74496]: pgmap v3661: 305 pgs: 305 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 76 op/s
Jan 31 08:54:30 compute-0 nova_compute[247704]: 2026-01-31 08:54:30.561 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:30 compute-0 sshd-session[395411]: Connection closed by invalid user ubuntu 123.54.197.60 port 53894 [preauth]
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]: {
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]:         "osd_id": 0,
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]:         "type": "bluestore"
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]:     }
Jan 31 08:54:30 compute-0 condescending_torvalds[395626]: }
Jan 31 08:54:31 compute-0 systemd[1]: libpod-dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671.scope: Deactivated successfully.
Jan 31 08:54:31 compute-0 podman[395609]: 2026-01-31 08:54:31.014058057 +0000 UTC m=+1.004830244 container died dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_torvalds, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:54:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1e16f8707990d2a1229c7a2b4b14f04a1f2ac4e78e7271b4d42f09398d8f20c-merged.mount: Deactivated successfully.
Jan 31 08:54:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:54:31 compute-0 podman[395609]: 2026-01-31 08:54:31.07341979 +0000 UTC m=+1.064192007 container remove dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_torvalds, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 08:54:31 compute-0 systemd[1]: libpod-conmon-dd515865d8d574623b396b8b4d062041301cdefbd1e61bf141630ffa13786671.scope: Deactivated successfully.
Jan 31 08:54:31 compute-0 sudo[395503]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:54:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:54:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 77157d26-139d-4c8f-858d-3437fb4ba911 does not exist
Jan 31 08:54:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1b19091e-5e58-47b5-969d-e5d4075c7246 does not exist
Jan 31 08:54:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 78a89269-2b0b-44fc-9eca-1de71beab3dc does not exist
Jan 31 08:54:31 compute-0 sudo[395662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:31 compute-0 sudo[395662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:31 compute-0 sudo[395662]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:31 compute-0 sudo[395687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:54:31 compute-0 sudo[395687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:31 compute-0 sudo[395687]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1813203743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:31 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:54:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/925435894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:31.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:31.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:31 compute-0 sshd-session[395643]: Invalid user ubuntu from 123.54.197.60 port 35086
Jan 31 08:54:32 compute-0 nova_compute[247704]: 2026-01-31 08:54:32.202 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:32 compute-0 sshd-session[395643]: Connection closed by invalid user ubuntu 123.54.197.60 port 35086 [preauth]
Jan 31 08:54:32 compute-0 ovn_controller[149457]: 2026-01-31T08:54:32Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:93:84 10.100.0.14
Jan 31 08:54:32 compute-0 ovn_controller[149457]: 2026-01-31T08:54:32Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:93:84 10.100.0.14
Jan 31 08:54:32 compute-0 ceph-mon[74496]: pgmap v3662: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 08:54:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 229 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 78 op/s
Jan 31 08:54:33 compute-0 sshd-session[395713]: Invalid user ubuntu from 123.54.197.60 port 35096
Jan 31 08:54:33 compute-0 ceph-mon[74496]: pgmap v3663: 305 pgs: 305 active+clean; 229 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 78 op/s
Jan 31 08:54:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:33.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:33.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:33 compute-0 sshd-session[395713]: Connection closed by invalid user ubuntu 123.54.197.60 port 35096 [preauth]
Jan 31 08:54:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:54:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 79K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1532 writes, 6535 keys, 1532 commit groups, 1.0 writes per commit group, ingest: 10.29 MB, 0.02 MB/s
                                           Interval WAL: 1532 writes, 1532 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.6      2.47              0.29        56    0.044       0      0       0.0       0.0
                                             L6      1/0   13.82 MB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   5.3     63.2     54.2     10.49              1.46        55    0.191    426K    29K       0.0       0.0
                                            Sum      1/0   13.82 MB   0.0      0.6     0.1      0.5       0.7      0.1       0.0   6.3     51.1     52.2     12.96              1.75       111    0.117    426K    29K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4     89.6     92.6      0.78              0.18        10    0.078     54K   2582       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   0.0     63.2     54.2     10.49              1.46        55    0.191    426K    29K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.7      2.47              0.29        55    0.045       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.105, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.66 GB write, 0.10 MB/s write, 0.65 GB read, 0.10 MB/s read, 13.0 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 71.24 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.00076 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4141,68.19 MB,22.4325%) FilterBlock(112,1.14 MB,0.375943%) IndexBlock(112,1.90 MB,0.626383%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:54:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:34 compute-0 sshd-session[395715]: Invalid user ubuntu from 123.54.197.60 port 35100
Jan 31 08:54:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 236 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 122 op/s
Jan 31 08:54:35 compute-0 sshd-session[395715]: Connection closed by invalid user ubuntu 123.54.197.60 port 35100 [preauth]
Jan 31 08:54:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:35.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:35 compute-0 nova_compute[247704]: 2026-01-31 08:54:35.566 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:35.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029547695072585146 of space, bias 1.0, pg target 0.8864308521775544 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6486970442490229 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:54:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:54:36 compute-0 ceph-mon[74496]: pgmap v3664: 305 pgs: 305 active+clean; 236 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 122 op/s
Jan 31 08:54:36 compute-0 sshd-session[395718]: Invalid user ubuntu from 123.54.197.60 port 35116
Jan 31 08:54:36 compute-0 sshd-session[395718]: Connection closed by invalid user ubuntu 123.54.197.60 port 35116 [preauth]
Jan 31 08:54:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 245 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Jan 31 08:54:37 compute-0 nova_compute[247704]: 2026-01-31 08:54:37.205 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:37.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:37.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:37 compute-0 sshd-session[395721]: Invalid user ubuntu from 123.54.197.60 port 35122
Jan 31 08:54:38 compute-0 sshd-session[395721]: Connection closed by invalid user ubuntu 123.54.197.60 port 35122 [preauth]
Jan 31 08:54:38 compute-0 ceph-mon[74496]: pgmap v3665: 305 pgs: 305 active+clean; 245 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 132 op/s
Jan 31 08:54:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 163 op/s
Jan 31 08:54:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:39.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:39 compute-0 sshd-session[395724]: Invalid user ubuntu from 123.54.197.60 port 35138
Jan 31 08:54:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:39.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:39 compute-0 sshd-session[395724]: Connection closed by invalid user ubuntu 123.54.197.60 port 35138 [preauth]
Jan 31 08:54:40 compute-0 ceph-mon[74496]: pgmap v3666: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 163 op/s
Jan 31 08:54:40 compute-0 nova_compute[247704]: 2026-01-31 08:54:40.569 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 162 op/s
Jan 31 08:54:41 compute-0 sshd-session[395726]: Invalid user ubuntu from 123.54.197.60 port 41922
Jan 31 08:54:41 compute-0 sshd-session[395726]: Connection closed by invalid user ubuntu 123.54.197.60 port 41922 [preauth]
Jan 31 08:54:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:41.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:41.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:41 compute-0 ceph-mon[74496]: pgmap v3667: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 162 op/s
Jan 31 08:54:42 compute-0 nova_compute[247704]: 2026-01-31 08:54:42.208 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:42 compute-0 sshd-session[395729]: Invalid user ubuntu from 123.54.197.60 port 41928
Jan 31 08:54:42 compute-0 sshd-session[395729]: Connection closed by invalid user ubuntu 123.54.197.60 port 41928 [preauth]
Jan 31 08:54:42 compute-0 nova_compute[247704]: 2026-01-31 08:54:42.966 247708 DEBUG oslo_concurrency.lockutils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:42 compute-0 nova_compute[247704]: 2026-01-31 08:54:42.967 247708 DEBUG oslo_concurrency.lockutils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:43 compute-0 sudo[395732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.012 247708 DEBUG nova.objects.instance [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'flavor' on Instance uuid da90f1fb-9090-49b5-a510-d7e6ac7a30d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:54:43 compute-0 sudo[395732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:43 compute-0 sudo[395732]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 31 08:54:43 compute-0 sudo[395760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:54:43 compute-0 sudo[395760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:54:43 compute-0 sudo[395760]: pam_unix(sudo:session): session closed for user root
Jan 31 08:54:43 compute-0 podman[395756]: 2026-01-31 08:54:43.105406831 +0000 UTC m=+0.079060354 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.139 247708 DEBUG oslo_concurrency.lockutils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:43.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.579 247708 DEBUG oslo_concurrency.lockutils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.580 247708 DEBUG oslo_concurrency.lockutils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.580 247708 INFO nova.compute.manager [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Attaching volume 9ef5b6cb-4a21-4534-a367-ffc4848e8857 to /dev/vdb
Jan 31 08:54:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:43.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.884 247708 DEBUG os_brick.utils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.886 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.901 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.901 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[91c606e0-ec4e-4ce5-980c-1702efa173a3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.903 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.913 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.913 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5a1a6f-3920-40b5-a902-bb3c785c8f45]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.915 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.925 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.925 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[10cf540e-3c9a-444a-bc61-ccc4e20bc272]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.928 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[52886c07-94c9-4712-9dc0-e27332abe6fd]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.928 247708 DEBUG oslo_concurrency.processutils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.961 247708 DEBUG oslo_concurrency.processutils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.964 247708 DEBUG os_brick.initiator.connectors.lightos [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.964 247708 DEBUG os_brick.initiator.connectors.lightos [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.964 247708 DEBUG os_brick.initiator.connectors.lightos [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.965 247708 DEBUG os_brick.utils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:54:43 compute-0 nova_compute[247704]: 2026-01-31 08:54:43.965 247708 DEBUG nova.virt.block_device [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating existing volume attachment record: 0f8033ff-c269-4449-a084-9c11dbe7d6e2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:54:44 compute-0 ceph-mon[74496]: pgmap v3668: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 31 08:54:44 compute-0 sshd-session[395801]: Invalid user ubuntu from 123.54.197.60 port 41930
Jan 31 08:54:44 compute-0 sshd-session[395801]: Connection closed by invalid user ubuntu 123.54.197.60 port 41930 [preauth]
Jan 31 08:54:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 247 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Jan 31 08:54:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1109915705' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.242 247708 DEBUG nova.objects.instance [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'flavor' on Instance uuid da90f1fb-9090-49b5-a510-d7e6ac7a30d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.281 247708 DEBUG nova.virt.libvirt.driver [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Attempting to attach volume 9ef5b6cb-4a21-4534-a367-ffc4848e8857 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.286 247708 DEBUG nova.virt.libvirt.guest [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 08:54:45 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:54:45 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-9ef5b6cb-4a21-4534-a367-ffc4848e8857">
Jan 31 08:54:45 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:54:45 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:54:45 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:54:45 compute-0 nova_compute[247704]:   </source>
Jan 31 08:54:45 compute-0 nova_compute[247704]:   <auth username="openstack">
Jan 31 08:54:45 compute-0 nova_compute[247704]:     <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:54:45 compute-0 nova_compute[247704]:   </auth>
Jan 31 08:54:45 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:54:45 compute-0 nova_compute[247704]:   <serial>9ef5b6cb-4a21-4534-a367-ffc4848e8857</serial>
Jan 31 08:54:45 compute-0 nova_compute[247704]: </disk>
Jan 31 08:54:45 compute-0 nova_compute[247704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 31 08:54:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:45.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.560 247708 DEBUG nova.virt.libvirt.driver [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.561 247708 DEBUG nova.virt.libvirt.driver [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.561 247708 DEBUG nova.virt.libvirt.driver [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.562 247708 DEBUG nova.virt.libvirt.driver [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No VIF found with MAC fa:16:3e:d6:93:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.570 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:45.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:45 compute-0 sshd-session[395812]: Invalid user ubuntu from 123.54.197.60 port 41946
Jan 31 08:54:45 compute-0 nova_compute[247704]: 2026-01-31 08:54:45.982 247708 DEBUG oslo_concurrency.lockutils [None req-acfac217-0a52-49ea-a771-c299d4d55e29 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:46 compute-0 sshd-session[395812]: Connection closed by invalid user ubuntu 123.54.197.60 port 41946 [preauth]
Jan 31 08:54:46 compute-0 ceph-mon[74496]: pgmap v3669: 305 pgs: 305 active+clean; 247 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Jan 31 08:54:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 253 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 31 08:54:47 compute-0 nova_compute[247704]: 2026-01-31 08:54:47.210 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:47 compute-0 sshd-session[395835]: Invalid user ubuntu from 123.54.197.60 port 41950
Jan 31 08:54:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:47.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:47 compute-0 sshd-session[395835]: Connection closed by invalid user ubuntu 123.54.197.60 port 41950 [preauth]
Jan 31 08:54:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:47.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:48 compute-0 ceph-mon[74496]: pgmap v3670: 305 pgs: 305 active+clean; 253 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.349 247708 DEBUG oslo_concurrency.lockutils [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.351 247708 DEBUG oslo_concurrency.lockutils [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.400 247708 INFO nova.compute.manager [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Detaching volume 9ef5b6cb-4a21-4534-a367-ffc4848e8857
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.631 247708 INFO nova.virt.block_device [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Attempting to driver detach volume 9ef5b6cb-4a21-4534-a367-ffc4848e8857 from mountpoint /dev/vdb
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.646 247708 DEBUG nova.virt.libvirt.driver [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Attempting to detach device vdb from instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.647 247708 DEBUG nova.virt.libvirt.guest [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-9ef5b6cb-4a21-4534-a367-ffc4848e8857">
Jan 31 08:54:48 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   </source>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <serial>9ef5b6cb-4a21-4534-a367-ffc4848e8857</serial>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]: </disk>
Jan 31 08:54:48 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.656 247708 INFO nova.virt.libvirt.driver [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Successfully detached device vdb from instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 from the persistent domain config.
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.657 247708 DEBUG nova.virt.libvirt.driver [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.658 247708 DEBUG nova.virt.libvirt.guest [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <source protocol="rbd" name="volumes/volume-9ef5b6cb-4a21-4534-a367-ffc4848e8857">
Jan 31 08:54:48 compute-0 nova_compute[247704]:     <host name="192.168.122.100" port="6789"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:     <host name="192.168.122.102" port="6789"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:     <host name="192.168.122.101" port="6789"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   </source>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <target dev="vdb" bus="virtio"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <serial>9ef5b6cb-4a21-4534-a367-ffc4848e8857</serial>
Jan 31 08:54:48 compute-0 nova_compute[247704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 08:54:48 compute-0 nova_compute[247704]: </disk>
Jan 31 08:54:48 compute-0 nova_compute[247704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.790 247708 DEBUG nova.virt.libvirt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Received event <DeviceRemovedEvent: 1769849688.7900176, da90f1fb-9090-49b5-a510-d7e6ac7a30d6 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.792 247708 DEBUG nova.virt.libvirt.driver [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 08:54:48 compute-0 nova_compute[247704]: 2026-01-31 08:54:48.795 247708 INFO nova.virt.libvirt.driver [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Successfully detached device vdb from instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 from the live domain config.
Jan 31 08:54:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 75 op/s
Jan 31 08:54:49 compute-0 nova_compute[247704]: 2026-01-31 08:54:49.099 247708 DEBUG nova.objects.instance [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'flavor' on Instance uuid da90f1fb-9090-49b5-a510-d7e6ac7a30d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:54:49 compute-0 nova_compute[247704]: 2026-01-31 08:54:49.225 247708 DEBUG oslo_concurrency.lockutils [None req-799a184d-8e41-46ec-9a75-84d0ea5c9587 a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:54:49 compute-0 sshd-session[395837]: Invalid user ubuntu from 123.54.197.60 port 41964
Jan 31 08:54:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:49.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:49.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:49 compute-0 sshd-session[395837]: Connection closed by invalid user ubuntu 123.54.197.60 port 41964 [preauth]
Jan 31 08:54:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:54:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:54:50 compute-0 ceph-mon[74496]: pgmap v3671: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 75 op/s
Jan 31 08:54:50 compute-0 nova_compute[247704]: 2026-01-31 08:54:50.572 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Jan 31 08:54:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.211267) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691211312, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 704, "num_deletes": 252, "total_data_size": 904129, "memory_usage": 916792, "flush_reason": "Manual Compaction"}
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Jan 31 08:54:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691219348, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 894100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79803, "largest_seqno": 80506, "table_properties": {"data_size": 890372, "index_size": 1507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8760, "raw_average_key_size": 19, "raw_value_size": 882718, "raw_average_value_size": 1997, "num_data_blocks": 66, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849645, "oldest_key_time": 1769849645, "file_creation_time": 1769849691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 8144 microseconds, and 2928 cpu microseconds.
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:54:51 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.219403) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 894100 bytes OK
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.219432) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.225643) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.225665) EVENT_LOG_v1 {"time_micros": 1769849691225658, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.225689) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 900497, prev total WAL file size 900538, number of live WAL files 2.
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.226715) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(873KB)], [182(13MB)]
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691226799, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 15389986, "oldest_snapshot_seqno": -1}
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 10612 keys, 13406544 bytes, temperature: kUnknown
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691347500, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 13406544, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13338782, "index_size": 40129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 281278, "raw_average_key_size": 26, "raw_value_size": 13153930, "raw_average_value_size": 1239, "num_data_blocks": 1523, "num_entries": 10612, "num_filter_entries": 10612, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849691, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.347841) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13406544 bytes
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.352839) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.4 rd, 111.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(32.2) write-amplify(15.0) OK, records in: 11138, records dropped: 526 output_compression: NoCompression
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.352899) EVENT_LOG_v1 {"time_micros": 1769849691352878, "job": 114, "event": "compaction_finished", "compaction_time_micros": 120784, "compaction_time_cpu_micros": 33441, "output_level": 6, "num_output_files": 1, "total_output_size": 13406544, "num_input_records": 11138, "num_output_records": 10612, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691353275, "job": 114, "event": "table_file_deletion", "file_number": 184}
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849691355312, "job": 114, "event": "table_file_deletion", "file_number": 182}
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.226132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.355381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.355388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.355390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.355393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:54:51.355395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:54:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:51.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:51.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:52 compute-0 sshd-session[395842]: Invalid user ubuntu from 123.54.197.60 port 48176
Jan 31 08:54:52 compute-0 nova_compute[247704]: 2026-01-31 08:54:52.213 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:52 compute-0 ceph-mon[74496]: pgmap v3672: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Jan 31 08:54:52 compute-0 ceph-mon[74496]: osdmap e396: 3 total, 3 up, 3 in
Jan 31 08:54:52 compute-0 sshd-session[395842]: Connection closed by invalid user ubuntu 123.54.197.60 port 48176 [preauth]
Jan 31 08:54:52 compute-0 nova_compute[247704]: 2026-01-31 08:54:52.379 247708 DEBUG nova.compute.manager [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:54:52 compute-0 nova_compute[247704]: 2026-01-31 08:54:52.428 247708 INFO nova.compute.manager [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] instance snapshotting
Jan 31 08:54:52 compute-0 nova_compute[247704]: 2026-01-31 08:54:52.783 247708 INFO nova.virt.libvirt.driver [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Beginning live snapshot process
Jan 31 08:54:52 compute-0 nova_compute[247704]: 2026-01-31 08:54:52.988 247708 DEBUG nova.virt.libvirt.imagebackend [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] No parent info for 7c23949f-bba8-4466-bb79-caf568852d38; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 08:54:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 408 KiB/s rd, 2.8 MiB/s wr, 94 op/s
Jan 31 08:54:53 compute-0 nova_compute[247704]: 2026-01-31 08:54:53.254 247708 DEBUG nova.storage.rbd_utils [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] creating snapshot(1406ee3f317345c0a14ea04e6180b23a) on rbd image(da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:54:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:53.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:53.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Jan 31 08:54:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Jan 31 08:54:54 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Jan 31 08:54:54 compute-0 ceph-mon[74496]: pgmap v3674: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 408 KiB/s rd, 2.8 MiB/s wr, 94 op/s
Jan 31 08:54:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1800567950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:54:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1800567950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:54:54 compute-0 nova_compute[247704]: 2026-01-31 08:54:54.339 247708 DEBUG nova.storage.rbd_utils [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] cloning vms/da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk@1406ee3f317345c0a14ea04e6180b23a to images/6827763f-c9c4-43fc-825d-2f9c946c4536 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 08:54:54 compute-0 nova_compute[247704]: 2026-01-31 08:54:54.484 247708 DEBUG nova.storage.rbd_utils [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] flattening images/6827763f-c9c4-43fc-825d-2f9c946c4536 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 08:54:54 compute-0 sshd-session[395897]: Invalid user ubuntu from 123.54.197.60 port 48186
Jan 31 08:54:54 compute-0 nova_compute[247704]: 2026-01-31 08:54:54.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:54:54 compute-0 sshd-session[395897]: Connection closed by invalid user ubuntu 123.54.197.60 port 48186 [preauth]
Jan 31 08:54:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:54:54 compute-0 nova_compute[247704]: 2026-01-31 08:54:54.876 247708 DEBUG nova.storage.rbd_utils [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] removing snapshot(1406ee3f317345c0a14ea04e6180b23a) on rbd image(da90f1fb-9090-49b5-a510-d7e6ac7a30d6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 08:54:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 286 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 102 op/s
Jan 31 08:54:55 compute-0 ceph-mon[74496]: osdmap e397: 3 total, 3 up, 3 in
Jan 31 08:54:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:55.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:55 compute-0 nova_compute[247704]: 2026-01-31 08:54:55.586 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:54:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:54:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Jan 31 08:54:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Jan 31 08:54:56 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Jan 31 08:54:56 compute-0 ceph-mon[74496]: pgmap v3676: 305 pgs: 305 active+clean; 286 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 102 op/s
Jan 31 08:54:56 compute-0 ceph-mon[74496]: osdmap e398: 3 total, 3 up, 3 in
Jan 31 08:54:56 compute-0 nova_compute[247704]: 2026-01-31 08:54:56.387 247708 DEBUG nova.storage.rbd_utils [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] creating snapshot(snap) on rbd image(6827763f-c9c4-43fc-825d-2f9c946c4536) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 08:54:56 compute-0 sshd-session[395972]: Invalid user ubuntu from 123.54.197.60 port 48200
Jan 31 08:54:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 319 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.2 MiB/s wr, 101 op/s
Jan 31 08:54:57 compute-0 sshd-session[395972]: Connection closed by invalid user ubuntu 123.54.197.60 port 48200 [preauth]
Jan 31 08:54:57 compute-0 nova_compute[247704]: 2026-01-31 08:54:57.216 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Jan 31 08:54:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Jan 31 08:54:57 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Jan 31 08:54:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:57.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:57 compute-0 podman[395995]: 2026-01-31 08:54:57.968972593 +0000 UTC m=+0.132254366 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:54:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:58.041 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:54:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:54:58.043 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:54:58 compute-0 nova_compute[247704]: 2026-01-31 08:54:58.043 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:54:58 compute-0 ceph-mon[74496]: pgmap v3678: 305 pgs: 305 active+clean; 319 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.2 MiB/s wr, 101 op/s
Jan 31 08:54:58 compute-0 ceph-mon[74496]: osdmap e399: 3 total, 3 up, 3 in
Jan 31 08:54:58 compute-0 sshd-session[395993]: Invalid user ubuntu from 123.54.197.60 port 48210
Jan 31 08:54:58 compute-0 sshd-session[395993]: Connection closed by invalid user ubuntu 123.54.197.60 port 48210 [preauth]
Jan 31 08:54:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.1 MiB/s wr, 145 op/s
Jan 31 08:54:59 compute-0 nova_compute[247704]: 2026-01-31 08:54:59.558 247708 INFO nova.virt.libvirt.driver [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Snapshot image upload complete
Jan 31 08:54:59 compute-0 nova_compute[247704]: 2026-01-31 08:54:59.559 247708 INFO nova.compute.manager [None req-acfef235-abed-41fa-ac4a-58adf35a1e4f a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Took 7.13 seconds to snapshot the instance on the hypervisor.
Jan 31 08:54:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:54:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:59.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:54:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:54:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:54:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:59.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:54:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:00 compute-0 sshd-session[396022]: Invalid user ubuntu from 123.54.197.60 port 48224
Jan 31 08:55:00 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:55:00.045 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:55:00 compute-0 sshd-session[396022]: Connection closed by invalid user ubuntu 123.54.197.60 port 48224 [preauth]
Jan 31 08:55:00 compute-0 ceph-mon[74496]: pgmap v3680: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.1 MiB/s wr, 145 op/s
Jan 31 08:55:00 compute-0 nova_compute[247704]: 2026-01-31 08:55:00.589 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 162 op/s
Jan 31 08:55:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:01.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:01.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:01 compute-0 sshd-session[396025]: Invalid user ubuntu from 123.54.197.60 port 43730
Jan 31 08:55:02 compute-0 sshd-session[396025]: Connection closed by invalid user ubuntu 123.54.197.60 port 43730 [preauth]
Jan 31 08:55:02 compute-0 nova_compute[247704]: 2026-01-31 08:55:02.216 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:02 compute-0 ceph-mon[74496]: pgmap v3681: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 162 op/s
Jan 31 08:55:02 compute-0 nova_compute[247704]: 2026-01-31 08:55:02.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 151 op/s
Jan 31 08:55:03 compute-0 sudo[396030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:03 compute-0 sudo[396030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:03 compute-0 sudo[396030]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:03 compute-0 sudo[396055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:03 compute-0 sudo[396055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:03 compute-0 sudo[396055]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:03 compute-0 sshd-session[396027]: Invalid user ubuntu from 123.54.197.60 port 43732
Jan 31 08:55:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:03.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:03 compute-0 sshd-session[396027]: Connection closed by invalid user ubuntu 123.54.197.60 port 43732 [preauth]
Jan 31 08:55:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:03.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:04 compute-0 ceph-mon[74496]: pgmap v3682: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.3 MiB/s wr, 151 op/s
Jan 31 08:55:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Jan 31 08:55:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Jan 31 08:55:04 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Jan 31 08:55:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 93 op/s
Jan 31 08:55:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:05 compute-0 nova_compute[247704]: 2026-01-31 08:55:05.591 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:05.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:05 compute-0 ceph-mon[74496]: osdmap e400: 3 total, 3 up, 3 in
Jan 31 08:55:05 compute-0 ceph-mon[74496]: pgmap v3684: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 93 op/s
Jan 31 08:55:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2275626305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:05 compute-0 sshd-session[396080]: Invalid user ubuntu from 123.54.197.60 port 43740
Jan 31 08:55:06 compute-0 sshd-session[396080]: Connection closed by invalid user ubuntu 123.54.197.60 port 43740 [preauth]
Jan 31 08:55:06 compute-0 nova_compute[247704]: 2026-01-31 08:55:06.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/153569570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 77 op/s
Jan 31 08:55:07 compute-0 nova_compute[247704]: 2026-01-31 08:55:07.218 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:07 compute-0 nova_compute[247704]: 2026-01-31 08:55:07.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:07 compute-0 nova_compute[247704]: 2026-01-31 08:55:07.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:55:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:07.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:07 compute-0 ceph-mon[74496]: pgmap v3685: 305 pgs: 305 active+clean; 360 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 77 op/s
Jan 31 08:55:08 compute-0 sshd-session[396084]: Invalid user ubuntu from 123.54.197.60 port 43754
Jan 31 08:55:08 compute-0 sshd-session[396084]: Connection closed by invalid user ubuntu 123.54.197.60 port 43754 [preauth]
Jan 31 08:55:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 366 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Jan 31 08:55:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:09.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:09.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:09 compute-0 sshd-session[396087]: Invalid user ubuntu from 123.54.197.60 port 43762
Jan 31 08:55:10 compute-0 sshd-session[396087]: Connection closed by invalid user ubuntu 123.54.197.60 port 43762 [preauth]
Jan 31 08:55:10 compute-0 ceph-mon[74496]: pgmap v3686: 305 pgs: 305 active+clean; 366 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Jan 31 08:55:10 compute-0 nova_compute[247704]: 2026-01-31 08:55:10.594 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 31 08:55:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:55:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1320594743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:55:11.226 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:55:11.226 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:55:11.227 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:11 compute-0 sshd-session[396090]: Invalid user ubuntu from 123.54.197.60 port 60364
Jan 31 08:55:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:11.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:11 compute-0 sshd-session[396090]: Connection closed by invalid user ubuntu 123.54.197.60 port 60364 [preauth]
Jan 31 08:55:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:11.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:12 compute-0 ceph-mon[74496]: pgmap v3687: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 31 08:55:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1320594743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3939374713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:12 compute-0 nova_compute[247704]: 2026-01-31 08:55:12.220 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:12 compute-0 sshd-session[396092]: Invalid user ubuntu from 123.54.197.60 port 60378
Jan 31 08:55:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 08:55:13 compute-0 sshd-session[396092]: Connection closed by invalid user ubuntu 123.54.197.60 port 60378 [preauth]
Jan 31 08:55:13 compute-0 nova_compute[247704]: 2026-01-31 08:55:13.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:13 compute-0 nova_compute[247704]: 2026-01-31 08:55:13.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:55:13 compute-0 nova_compute[247704]: 2026-01-31 08:55:13.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:55:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:13.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:13.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:13 compute-0 podman[396097]: 2026-01-31 08:55:13.886957015 +0000 UTC m=+0.049137366 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:55:13 compute-0 nova_compute[247704]: 2026-01-31 08:55:13.970 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:55:13 compute-0 nova_compute[247704]: 2026-01-31 08:55:13.970 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:55:13 compute-0 nova_compute[247704]: 2026-01-31 08:55:13.970 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:55:13 compute-0 nova_compute[247704]: 2026-01-31 08:55:13.970 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid da90f1fb-9090-49b5-a510-d7e6ac7a30d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:55:14 compute-0 ceph-mon[74496]: pgmap v3688: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 08:55:14 compute-0 sshd-session[396095]: Invalid user ubuntu from 123.54.197.60 port 60382
Jan 31 08:55:14 compute-0 sshd-session[396095]: Connection closed by invalid user ubuntu 123.54.197.60 port 60382 [preauth]
Jan 31 08:55:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:55:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:15.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:15 compute-0 nova_compute[247704]: 2026-01-31 08:55:15.597 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:15.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:16 compute-0 sshd-session[396117]: Invalid user ubuntu from 123.54.197.60 port 60388
Jan 31 08:55:16 compute-0 ceph-mon[74496]: pgmap v3689: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 08:55:16 compute-0 sshd-session[396117]: Connection closed by invalid user ubuntu 123.54.197.60 port 60388 [preauth]
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.051 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating instance_info_cache with network_info: [{"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:55:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.082 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.082 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.082 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.120 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.120 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.120 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.120 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.121 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/773898932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2725231681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.223 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:55:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1522886520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.525 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:17.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.622 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.623 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:55:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:17.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.781 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.783 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3982MB free_disk=20.876312255859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.783 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:55:17 compute-0 nova_compute[247704]: 2026-01-31 08:55:17.783 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:55:18 compute-0 ceph-mon[74496]: pgmap v3690: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 08:55:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1228416371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/690692146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:55:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1522886520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3943486430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:18 compute-0 nova_compute[247704]: 2026-01-31 08:55:18.222 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:55:18 compute-0 nova_compute[247704]: 2026-01-31 08:55:18.222 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:55:18 compute-0 nova_compute[247704]: 2026-01-31 08:55:18.223 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:55:18 compute-0 nova_compute[247704]: 2026-01-31 08:55:18.472 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:55:18 compute-0 nova_compute[247704]: 2026-01-31 08:55:18.511 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:55:18 compute-0 nova_compute[247704]: 2026-01-31 08:55:18.512 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:55:18 compute-0 nova_compute[247704]: 2026-01-31 08:55:18.550 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:55:18 compute-0 nova_compute[247704]: 2026-01-31 08:55:18.580 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:55:19 compute-0 nova_compute[247704]: 2026-01-31 08:55:19.051 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:55:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 08:55:19 compute-0 sshd-session[396120]: Invalid user ubuntu from 123.54.197.60 port 60394
Jan 31 08:55:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:55:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398945549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:19 compute-0 nova_compute[247704]: 2026-01-31 08:55:19.473 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:55:19 compute-0 nova_compute[247704]: 2026-01-31 08:55:19.482 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:55:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:19.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:19.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:19 compute-0 sshd-session[396120]: Connection closed by invalid user ubuntu 123.54.197.60 port 60394 [preauth]
Jan 31 08:55:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:55:20 compute-0 ceph-mon[74496]: pgmap v3691: 305 pgs: 305 active+clean; 406 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 08:55:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/398945549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1654730449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:55:20
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups', 'default.rgw.control', '.mgr']
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:55:20 compute-0 nova_compute[247704]: 2026-01-31 08:55:20.601 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:55:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:55:21 compute-0 sshd-session[396168]: Invalid user ubuntu from 123.54.197.60 port 48840
Jan 31 08:55:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 1.5 MiB/s wr, 73 op/s
Jan 31 08:55:21 compute-0 nova_compute[247704]: 2026-01-31 08:55:21.259 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:55:21 compute-0 sshd-session[396168]: Connection closed by invalid user ubuntu 123.54.197.60 port 48840 [preauth]
Jan 31 08:55:21 compute-0 nova_compute[247704]: 2026-01-31 08:55:21.316 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:55:21 compute-0 nova_compute[247704]: 2026-01-31 08:55:21.317 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:55:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:21.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:21.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:22 compute-0 nova_compute[247704]: 2026-01-31 08:55:22.246 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:22 compute-0 ceph-mon[74496]: pgmap v3692: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 1.5 MiB/s wr, 73 op/s
Jan 31 08:55:22 compute-0 sshd-session[396171]: Invalid user ubuntu from 123.54.197.60 port 48844
Jan 31 08:55:22 compute-0 sshd-session[396171]: Connection closed by invalid user ubuntu 123.54.197.60 port 48844 [preauth]
Jan 31 08:55:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 557 KiB/s rd, 28 KiB/s wr, 43 op/s
Jan 31 08:55:23 compute-0 sudo[396176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:23 compute-0 sudo[396176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:23 compute-0 sudo[396176]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:23 compute-0 sudo[396201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:23 compute-0 sudo[396201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:23 compute-0 sudo[396201]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:23.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:23.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:24 compute-0 sshd-session[396174]: Invalid user ubuntu from 123.54.197.60 port 48860
Jan 31 08:55:24 compute-0 ceph-mon[74496]: pgmap v3693: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 557 KiB/s rd, 28 KiB/s wr, 43 op/s
Jan 31 08:55:24 compute-0 sshd-session[396174]: Connection closed by invalid user ubuntu 123.54.197.60 port 48860 [preauth]
Jan 31 08:55:24 compute-0 nova_compute[247704]: 2026-01-31 08:55:24.796 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:24 compute-0 nova_compute[247704]: 2026-01-31 08:55:24.797 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:24 compute-0 nova_compute[247704]: 2026-01-31 08:55:24.798 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 107 op/s
Jan 31 08:55:25 compute-0 nova_compute[247704]: 2026-01-31 08:55:25.605 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:25.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:25 compute-0 sshd-session[396227]: Invalid user ubuntu from 123.54.197.60 port 48864
Jan 31 08:55:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:25.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:25 compute-0 sshd-session[396227]: Connection closed by invalid user ubuntu 123.54.197.60 port 48864 [preauth]
Jan 31 08:55:26 compute-0 ceph-mon[74496]: pgmap v3694: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 28 KiB/s wr, 107 op/s
Jan 31 08:55:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 152 op/s
Jan 31 08:55:27 compute-0 nova_compute[247704]: 2026-01-31 08:55:27.249 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:27 compute-0 sshd-session[396229]: Invalid user ubuntu from 123.54.197.60 port 48868
Jan 31 08:55:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:27.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:27 compute-0 sshd-session[396229]: Connection closed by invalid user ubuntu 123.54.197.60 port 48868 [preauth]
Jan 31 08:55:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:27.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:28 compute-0 ceph-mon[74496]: pgmap v3695: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 152 op/s
Jan 31 08:55:28 compute-0 nova_compute[247704]: 2026-01-31 08:55:28.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:28 compute-0 podman[396235]: 2026-01-31 08:55:28.909007882 +0000 UTC m=+0.077747471 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 08:55:28 compute-0 sshd-session[396232]: Invalid user ubuntu from 123.54.197.60 port 48876
Jan 31 08:55:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 150 op/s
Jan 31 08:55:29 compute-0 sshd-session[396232]: Connection closed by invalid user ubuntu 123.54.197.60 port 48876 [preauth]
Jan 31 08:55:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:29.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:30 compute-0 ceph-mon[74496]: pgmap v3696: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 150 op/s
Jan 31 08:55:30 compute-0 nova_compute[247704]: 2026-01-31 08:55:30.607 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 150 op/s
Jan 31 08:55:31 compute-0 sshd-session[396263]: Invalid user ubuntu from 123.54.197.60 port 48884
Jan 31 08:55:31 compute-0 sudo[396266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:31 compute-0 sudo[396266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:31 compute-0 sudo[396266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:31 compute-0 sshd-session[396263]: Connection closed by invalid user ubuntu 123.54.197.60 port 48884 [preauth]
Jan 31 08:55:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:31.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:31 compute-0 sudo[396291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:55:31 compute-0 sudo[396291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:31 compute-0 sudo[396291]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:31 compute-0 sudo[396316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:31 compute-0 sudo[396316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:31 compute-0 sudo[396316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:31.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:31 compute-0 sudo[396341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:55:31 compute-0 sudo[396341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:32 compute-0 sudo[396341]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:32 compute-0 nova_compute[247704]: 2026-01-31 08:55:32.251 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:32 compute-0 ceph-mon[74496]: pgmap v3697: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 150 op/s
Jan 31 08:55:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 08:55:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 08:55:32 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:33 compute-0 sshd-session[396368]: Invalid user ubuntu from 123.54.197.60 port 39432
Jan 31 08:55:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 85 B/s wr, 132 op/s
Jan 31 08:55:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:55:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:55:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:55:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:55:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:55:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 502ffb49-eb02-49bf-b232-572246a4f940 does not exist
Jan 31 08:55:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 06cb4184-e617-463b-92f4-f4f7127cf240 does not exist
Jan 31 08:55:33 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 95509704-9f8c-455a-ab20-575517839b33 does not exist
Jan 31 08:55:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:55:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:55:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:55:33 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:55:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:55:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:55:33 compute-0 sudo[396400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:33 compute-0 sudo[396400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:33 compute-0 sudo[396400]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:33 compute-0 sshd-session[396368]: Connection closed by invalid user ubuntu 123.54.197.60 port 39432 [preauth]
Jan 31 08:55:33 compute-0 sudo[396425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:55:33 compute-0 sudo[396425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:33 compute-0 sudo[396425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:33 compute-0 sudo[396450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:33 compute-0 sudo[396450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:33 compute-0 sudo[396450]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:33 compute-0 sudo[396475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:55:33 compute-0 sudo[396475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:33 compute-0 ceph-mon[74496]: pgmap v3698: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 85 B/s wr, 132 op/s
Jan 31 08:55:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:55:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:55:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:55:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:55:33 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:55:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:33.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:33.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:33 compute-0 podman[396542]: 2026-01-31 08:55:33.775533027 +0000 UTC m=+0.058048443 container create c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feynman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:55:33 compute-0 systemd[1]: Started libpod-conmon-c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8.scope.
Jan 31 08:55:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:55:33 compute-0 podman[396542]: 2026-01-31 08:55:33.744070311 +0000 UTC m=+0.026585817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:55:33 compute-0 podman[396542]: 2026-01-31 08:55:33.856510936 +0000 UTC m=+0.139026372 container init c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:55:33 compute-0 podman[396542]: 2026-01-31 08:55:33.86573171 +0000 UTC m=+0.148247136 container start c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 31 08:55:33 compute-0 podman[396542]: 2026-01-31 08:55:33.869210365 +0000 UTC m=+0.151725781 container attach c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feynman, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:55:33 compute-0 strange_feynman[396559]: 167 167
Jan 31 08:55:33 compute-0 systemd[1]: libpod-c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8.scope: Deactivated successfully.
Jan 31 08:55:33 compute-0 podman[396542]: 2026-01-31 08:55:33.875660561 +0000 UTC m=+0.158175977 container died c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0f9c9a2bd0aacf421b103a54b538c8b57bb7527677f9373b75ca8c07d05711-merged.mount: Deactivated successfully.
Jan 31 08:55:33 compute-0 podman[396542]: 2026-01-31 08:55:33.930634009 +0000 UTC m=+0.213149465 container remove c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:55:33 compute-0 systemd[1]: libpod-conmon-c014c0ec65ee2e8a60320d1373939ebb8a99733c3e65ccb89e16637f21c786b8.scope: Deactivated successfully.
Jan 31 08:55:34 compute-0 podman[396585]: 2026-01-31 08:55:34.089659915 +0000 UTC m=+0.042732240 container create 820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:55:34 compute-0 systemd[1]: Started libpod-conmon-820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11.scope.
Jan 31 08:55:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bddce7e001c76088315394a765903205c4d2e6b67eea81bd9fbc1c2f5bdef07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bddce7e001c76088315394a765903205c4d2e6b67eea81bd9fbc1c2f5bdef07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bddce7e001c76088315394a765903205c4d2e6b67eea81bd9fbc1c2f5bdef07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bddce7e001c76088315394a765903205c4d2e6b67eea81bd9fbc1c2f5bdef07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bddce7e001c76088315394a765903205c4d2e6b67eea81bd9fbc1c2f5bdef07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:34 compute-0 podman[396585]: 2026-01-31 08:55:34.071821152 +0000 UTC m=+0.024893497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:55:34 compute-0 podman[396585]: 2026-01-31 08:55:34.182439962 +0000 UTC m=+0.135512337 container init 820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:55:34 compute-0 podman[396585]: 2026-01-31 08:55:34.190147399 +0000 UTC m=+0.143219714 container start 820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_benz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:55:34 compute-0 podman[396585]: 2026-01-31 08:55:34.1988572 +0000 UTC m=+0.151929525 container attach 820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_benz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:55:34 compute-0 sshd-session[396599]: Invalid user sdadmin from 45.148.10.240 port 58768
Jan 31 08:55:34 compute-0 sshd-session[396500]: Invalid user ubuntu from 123.54.197.60 port 39440
Jan 31 08:55:34 compute-0 sshd-session[396599]: Connection closed by invalid user sdadmin 45.148.10.240 port 58768 [preauth]
Jan 31 08:55:34 compute-0 sshd-session[396500]: Connection closed by invalid user ubuntu 123.54.197.60 port 39440 [preauth]
Jan 31 08:55:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:34 compute-0 upbeat_benz[396604]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:55:34 compute-0 upbeat_benz[396604]: --> relative data size: 1.0
Jan 31 08:55:34 compute-0 upbeat_benz[396604]: --> All data devices are unavailable
Jan 31 08:55:34 compute-0 systemd[1]: libpod-820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11.scope: Deactivated successfully.
Jan 31 08:55:34 compute-0 podman[396585]: 2026-01-31 08:55:34.983831168 +0000 UTC m=+0.936903493 container died 820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_benz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bddce7e001c76088315394a765903205c4d2e6b67eea81bd9fbc1c2f5bdef07-merged.mount: Deactivated successfully.
Jan 31 08:55:35 compute-0 podman[396585]: 2026-01-31 08:55:35.030919773 +0000 UTC m=+0.983992098 container remove 820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_benz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 08:55:35 compute-0 systemd[1]: libpod-conmon-820dcdfcc409f62b9c30a48f444b42b2e548d48fbcf6c9ae69738d585d093f11.scope: Deactivated successfully.
Jan 31 08:55:35 compute-0 sudo[396475]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 420 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Jan 31 08:55:35 compute-0 sudo[396631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:35 compute-0 sudo[396631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:35 compute-0 sudo[396631]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:35 compute-0 sudo[396658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:55:35 compute-0 sudo[396658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:35 compute-0 sudo[396658]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:35 compute-0 sudo[396683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:35 compute-0 sudo[396683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:35 compute-0 sudo[396683]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:35 compute-0 sudo[396708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:55:35 compute-0 sudo[396708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:35 compute-0 podman[396772]: 2026-01-31 08:55:35.58598827 +0000 UTC m=+0.046390139 container create d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:55:35 compute-0 nova_compute[247704]: 2026-01-31 08:55:35.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:35 compute-0 systemd[1]: Started libpod-conmon-d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb.scope.
Jan 31 08:55:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:35.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:55:35 compute-0 podman[396772]: 2026-01-31 08:55:35.666647202 +0000 UTC m=+0.127049091 container init d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 08:55:35 compute-0 podman[396772]: 2026-01-31 08:55:35.572470881 +0000 UTC m=+0.032872770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:55:35 compute-0 podman[396772]: 2026-01-31 08:55:35.673052637 +0000 UTC m=+0.133454506 container start d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:55:35 compute-0 podman[396772]: 2026-01-31 08:55:35.676894421 +0000 UTC m=+0.137296290 container attach d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:55:35 compute-0 romantic_chandrasekhar[396788]: 167 167
Jan 31 08:55:35 compute-0 systemd[1]: libpod-d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb.scope: Deactivated successfully.
Jan 31 08:55:35 compute-0 podman[396772]: 2026-01-31 08:55:35.680771715 +0000 UTC m=+0.141173594 container died d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4768d573932b9370cc51e1e9c5211640f4f9e1edf6c69d6dcb4cd91528d493e4-merged.mount: Deactivated successfully.
Jan 31 08:55:35 compute-0 podman[396772]: 2026-01-31 08:55:35.716822231 +0000 UTC m=+0.177224110 container remove d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:55:35 compute-0 systemd[1]: libpod-conmon-d019ecb49825ab21b1218d91bc3fde403c09cf33b8e475903278e4e6d65c0cdb.scope: Deactivated successfully.
Jan 31 08:55:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:35 compute-0 podman[396811]: 2026-01-31 08:55:35.844338592 +0000 UTC m=+0.037539093 container create 45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:55:35 compute-0 systemd[1]: Started libpod-conmon-45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444.scope.
Jan 31 08:55:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d599eb5bbc80011c030db036784d559a20a9b3697e2d506c24e892fc5af9cda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d599eb5bbc80011c030db036784d559a20a9b3697e2d506c24e892fc5af9cda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d599eb5bbc80011c030db036784d559a20a9b3697e2d506c24e892fc5af9cda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d599eb5bbc80011c030db036784d559a20a9b3697e2d506c24e892fc5af9cda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:35 compute-0 podman[396811]: 2026-01-31 08:55:35.905592351 +0000 UTC m=+0.098792882 container init 45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:55:35 compute-0 podman[396811]: 2026-01-31 08:55:35.910823949 +0000 UTC m=+0.104024450 container start 45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:55:35 compute-0 podman[396811]: 2026-01-31 08:55:35.914513969 +0000 UTC m=+0.107714490 container attach 45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 08:55:35 compute-0 podman[396811]: 2026-01-31 08:55:35.828734542 +0000 UTC m=+0.021935063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0059942362274334636 of space, bias 1.0, pg target 1.798270868230039 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002256290131211614 of space, bias 1.0, pg target 0.6746307492322725 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004070555322910851 of space, bias 1.0, pg target 1.2170960415503445 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:55:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 08:55:36 compute-0 sshd-session[396643]: Invalid user ubuntu from 123.54.197.60 port 39454
Jan 31 08:55:36 compute-0 ceph-mon[74496]: pgmap v3699: 305 pgs: 305 active+clean; 420 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Jan 31 08:55:36 compute-0 sshd-session[396643]: Connection closed by invalid user ubuntu 123.54.197.60 port 39454 [preauth]
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]: {
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:     "0": [
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:         {
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "devices": [
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "/dev/loop3"
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             ],
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "lv_name": "ceph_lv0",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "lv_size": "7511998464",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "name": "ceph_lv0",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "tags": {
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.cluster_name": "ceph",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.crush_device_class": "",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.encrypted": "0",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.osd_id": "0",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.type": "block",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:                 "ceph.vdo": "0"
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             },
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "type": "block",
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:             "vg_name": "ceph_vg0"
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:         }
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]:     ]
Jan 31 08:55:36 compute-0 affectionate_montalcini[396827]: }
Jan 31 08:55:36 compute-0 systemd[1]: libpod-45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444.scope: Deactivated successfully.
Jan 31 08:55:36 compute-0 podman[396811]: 2026-01-31 08:55:36.700286086 +0000 UTC m=+0.893486587 container died 45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:55:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d599eb5bbc80011c030db036784d559a20a9b3697e2d506c24e892fc5af9cda-merged.mount: Deactivated successfully.
Jan 31 08:55:36 compute-0 podman[396811]: 2026-01-31 08:55:36.756946113 +0000 UTC m=+0.950146634 container remove 45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:55:36 compute-0 systemd[1]: libpod-conmon-45e8ef25ddd24e7f887b7c889392dd24813c39f553b92f832f0efdd4cd184444.scope: Deactivated successfully.
Jan 31 08:55:36 compute-0 sudo[396708]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:36 compute-0 sudo[396851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:36 compute-0 sudo[396851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:36 compute-0 sudo[396851]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:36 compute-0 sudo[396876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:55:36 compute-0 sudo[396876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:36 compute-0 sudo[396876]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:36 compute-0 sudo[396901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:36 compute-0 sudo[396901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:36 compute-0 sudo[396901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:36 compute-0 sudo[396926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:55:36 compute-0 sudo[396926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 437 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 31 08:55:37 compute-0 nova_compute[247704]: 2026-01-31 08:55:37.253 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:37 compute-0 podman[396991]: 2026-01-31 08:55:37.283969618 +0000 UTC m=+0.046513342 container create 4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:55:37 compute-0 systemd[1]: Started libpod-conmon-4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00.scope.
Jan 31 08:55:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:55:37 compute-0 podman[396991]: 2026-01-31 08:55:37.346591411 +0000 UTC m=+0.109135175 container init 4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:55:37 compute-0 podman[396991]: 2026-01-31 08:55:37.353006007 +0000 UTC m=+0.115549731 container start 4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:55:37 compute-0 determined_black[397007]: 167 167
Jan 31 08:55:37 compute-0 podman[396991]: 2026-01-31 08:55:37.356237755 +0000 UTC m=+0.118781479 container attach 4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:55:37 compute-0 systemd[1]: libpod-4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00.scope: Deactivated successfully.
Jan 31 08:55:37 compute-0 podman[396991]: 2026-01-31 08:55:37.356725518 +0000 UTC m=+0.119269242 container died 4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:55:37 compute-0 podman[396991]: 2026-01-31 08:55:37.265719515 +0000 UTC m=+0.028263289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-19ade0c1f758afc57cf474bbc33dd660070fb380a5874e121da18b1213f64491-merged.mount: Deactivated successfully.
Jan 31 08:55:37 compute-0 podman[396991]: 2026-01-31 08:55:37.394985798 +0000 UTC m=+0.157529532 container remove 4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_black, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 08:55:37 compute-0 systemd[1]: libpod-conmon-4d69b834a6fe7384b0b4907990c88f7878afb6133f91bff3f80241af1f2aea00.scope: Deactivated successfully.
Jan 31 08:55:37 compute-0 podman[397030]: 2026-01-31 08:55:37.520625693 +0000 UTC m=+0.039316327 container create 4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:55:37 compute-0 systemd[1]: Started libpod-conmon-4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6.scope.
Jan 31 08:55:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4926d1ef92caa41338f88370aa84846f707320be514998f63f2e3b61dd65e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4926d1ef92caa41338f88370aa84846f707320be514998f63f2e3b61dd65e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4926d1ef92caa41338f88370aa84846f707320be514998f63f2e3b61dd65e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f4926d1ef92caa41338f88370aa84846f707320be514998f63f2e3b61dd65e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:55:37 compute-0 podman[397030]: 2026-01-31 08:55:37.504993663 +0000 UTC m=+0.023684317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:55:37 compute-0 podman[397030]: 2026-01-31 08:55:37.60644628 +0000 UTC m=+0.125137004 container init 4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:55:37 compute-0 podman[397030]: 2026-01-31 08:55:37.614011193 +0000 UTC m=+0.132701837 container start 4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:55:37 compute-0 podman[397030]: 2026-01-31 08:55:37.617668232 +0000 UTC m=+0.136359086 container attach 4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:55:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:37.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:37.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:38 compute-0 sshd-session[397052]: Invalid user ubuntu from 123.54.197.60 port 39458
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]: {
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]:         "osd_id": 0,
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]:         "type": "bluestore"
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]:     }
Jan 31 08:55:38 compute-0 compassionate_dhawan[397047]: }
Jan 31 08:55:39 compute-0 ceph-mon[74496]: pgmap v3700: 305 pgs: 305 active+clean; 437 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 31 08:55:39 compute-0 systemd[1]: libpod-4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6.scope: Deactivated successfully.
Jan 31 08:55:39 compute-0 podman[397030]: 2026-01-31 08:55:39.024626474 +0000 UTC m=+1.543317148 container died 4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 08:55:39 compute-0 systemd[1]: libpod-4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6.scope: Consumed 1.407s CPU time.
Jan 31 08:55:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f4926d1ef92caa41338f88370aa84846f707320be514998f63f2e3b61dd65e9-merged.mount: Deactivated successfully.
Jan 31 08:55:39 compute-0 podman[397030]: 2026-01-31 08:55:39.077223483 +0000 UTC m=+1.595914117 container remove 4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:55:39 compute-0 systemd[1]: libpod-conmon-4dee881fe7b5a75a8416925bf235d9420f98eb015c69ab3eb23e0d0e84fddde6.scope: Deactivated successfully.
Jan 31 08:55:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 447 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 87 op/s
Jan 31 08:55:39 compute-0 sudo[396926]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:55:39 compute-0 sshd-session[397052]: Connection closed by invalid user ubuntu 123.54.197.60 port 39458 [preauth]
Jan 31 08:55:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:55:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 40d657e9-069d-4a5e-9cbc-92397a66f7d4 does not exist
Jan 31 08:55:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 642d69c2-058b-4634-baac-aaddc64be337 does not exist
Jan 31 08:55:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bae79652-317c-4c40-b49c-0be7d74e972c does not exist
Jan 31 08:55:39 compute-0 sudo[397087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:39 compute-0 sudo[397087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:39 compute-0 sudo[397087]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:39 compute-0 sudo[397113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:55:39 compute-0 sudo[397113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:39 compute-0 sudo[397113]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:39.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:39.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:40 compute-0 ceph-mon[74496]: pgmap v3701: 305 pgs: 305 active+clean; 447 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 87 op/s
Jan 31 08:55:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:55:40 compute-0 sshd-session[397086]: Invalid user ubuntu from 123.54.197.60 port 39462
Jan 31 08:55:40 compute-0 nova_compute[247704]: 2026-01-31 08:55:40.615 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:40 compute-0 sshd-session[397086]: Connection closed by invalid user ubuntu 123.54.197.60 port 39462 [preauth]
Jan 31 08:55:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 454 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 116 op/s
Jan 31 08:55:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:41.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:41.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:42 compute-0 nova_compute[247704]: 2026-01-31 08:55:42.256 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:42 compute-0 ceph-mon[74496]: pgmap v3702: 305 pgs: 305 active+clean; 454 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 116 op/s
Jan 31 08:55:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 124 op/s
Jan 31 08:55:43 compute-0 sshd-session[397139]: Invalid user ubuntu from 123.54.197.60 port 35664
Jan 31 08:55:43 compute-0 sudo[397142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:43 compute-0 sudo[397142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:43 compute-0 sudo[397142]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:43 compute-0 sshd-session[397139]: Connection closed by invalid user ubuntu 123.54.197.60 port 35664 [preauth]
Jan 31 08:55:43 compute-0 sudo[397167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:55:43 compute-0 sudo[397167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:55:43 compute-0 sudo[397167]: pam_unix(sudo:session): session closed for user root
Jan 31 08:55:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:43.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:43.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:44 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 08:55:44 compute-0 ceph-mon[74496]: pgmap v3703: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 124 op/s
Jan 31 08:55:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:44 compute-0 podman[397196]: 2026-01-31 08:55:44.894902717 +0000 UTC m=+0.066912509 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:55:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 404 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Jan 31 08:55:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3174164384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:55:45 compute-0 sshd-session[397192]: Invalid user ubuntu from 123.54.197.60 port 35678
Jan 31 08:55:45 compute-0 nova_compute[247704]: 2026-01-31 08:55:45.618 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:45.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:45 compute-0 sshd-session[397192]: Connection closed by invalid user ubuntu 123.54.197.60 port 35678 [preauth]
Jan 31 08:55:46 compute-0 ceph-mon[74496]: pgmap v3704: 305 pgs: 305 active+clean; 404 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 135 op/s
Jan 31 08:55:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 110 op/s
Jan 31 08:55:47 compute-0 sshd-session[397215]: Invalid user ubuntu from 123.54.197.60 port 35680
Jan 31 08:55:47 compute-0 nova_compute[247704]: 2026-01-31 08:55:47.259 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:47 compute-0 sshd-session[397215]: Connection closed by invalid user ubuntu 123.54.197.60 port 35680 [preauth]
Jan 31 08:55:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:47.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:47.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:47 compute-0 ceph-mon[74496]: pgmap v3705: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 110 op/s
Jan 31 08:55:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 860 KiB/s rd, 538 KiB/s wr, 85 op/s
Jan 31 08:55:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:49.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:49 compute-0 sshd-session[397218]: Invalid user ubuntu from 123.54.197.60 port 35686
Jan 31 08:55:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:49.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:50 compute-0 sshd-session[397218]: Connection closed by invalid user ubuntu 123.54.197.60 port 35686 [preauth]
Jan 31 08:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:55:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:55:50 compute-0 ceph-mon[74496]: pgmap v3706: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 860 KiB/s rd, 538 KiB/s wr, 85 op/s
Jan 31 08:55:50 compute-0 nova_compute[247704]: 2026-01-31 08:55:50.656 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 262 KiB/s rd, 147 KiB/s wr, 60 op/s
Jan 31 08:55:51 compute-0 sshd-session[397221]: Invalid user ubuntu from 123.54.197.60 port 35056
Jan 31 08:55:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:51.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:51 compute-0 sshd-session[397221]: Connection closed by invalid user ubuntu 123.54.197.60 port 35056 [preauth]
Jan 31 08:55:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:55:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:51.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:55:52 compute-0 nova_compute[247704]: 2026-01-31 08:55:52.261 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:52 compute-0 ceph-mon[74496]: pgmap v3707: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 262 KiB/s rd, 147 KiB/s wr, 60 op/s
Jan 31 08:55:53 compute-0 sshd-session[397224]: Invalid user ubuntu from 123.54.197.60 port 35068
Jan 31 08:55:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 50 KiB/s wr, 30 op/s
Jan 31 08:55:53 compute-0 sshd-session[397224]: Connection closed by invalid user ubuntu 123.54.197.60 port 35068 [preauth]
Jan 31 08:55:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:55:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1149404289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:55:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:55:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1149404289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:55:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1149404289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:55:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1149404289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:55:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:53.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:53.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:54 compute-0 ceph-mon[74496]: pgmap v3708: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 50 KiB/s wr, 30 op/s
Jan 31 08:55:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:55:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 50 KiB/s wr, 28 op/s
Jan 31 08:55:55 compute-0 sshd-session[397227]: Invalid user ubuntu from 123.54.197.60 port 35074
Jan 31 08:55:55 compute-0 sshd-session[397227]: Connection closed by invalid user ubuntu 123.54.197.60 port 35074 [preauth]
Jan 31 08:55:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:55.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:55 compute-0 nova_compute[247704]: 2026-01-31 08:55:55.658 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:55.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:55 compute-0 ceph-mon[74496]: pgmap v3709: 305 pgs: 305 active+clean; 378 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 50 KiB/s wr, 28 op/s
Jan 31 08:55:56 compute-0 nova_compute[247704]: 2026-01-31 08:55:56.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:55:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 366 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 15 KiB/s wr, 20 op/s
Jan 31 08:55:57 compute-0 nova_compute[247704]: 2026-01-31 08:55:57.263 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:55:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:57.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:58 compute-0 ceph-mon[74496]: pgmap v3710: 305 pgs: 305 active+clean; 366 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 15 KiB/s wr, 20 op/s
Jan 31 08:55:58 compute-0 sshd-session[397231]: Invalid user ubuntu from 123.54.197.60 port 35078
Jan 31 08:55:59 compute-0 podman[397234]: 2026-01-31 08:55:59.034456834 +0000 UTC m=+0.076017450 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:55:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 342 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 17 op/s
Jan 31 08:55:59 compute-0 sshd-session[397231]: Connection closed by invalid user ubuntu 123.54.197.60 port 35078 [preauth]
Jan 31 08:55:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:55:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:59.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:55:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:55:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:55:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:55:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:00 compute-0 ceph-mon[74496]: pgmap v3711: 305 pgs: 305 active+clean; 342 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 17 op/s
Jan 31 08:56:00 compute-0 sshd-session[397260]: Invalid user ubuntu from 123.54.197.60 port 35088
Jan 31 08:56:00 compute-0 nova_compute[247704]: 2026-01-31 08:56:00.661 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:00 compute-0 sshd-session[397260]: Connection closed by invalid user ubuntu 123.54.197.60 port 35088 [preauth]
Jan 31 08:56:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 7.0 KiB/s wr, 29 op/s
Jan 31 08:56:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:01.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:02 compute-0 sshd-session[397263]: Invalid user ubuntu from 123.54.197.60 port 40438
Jan 31 08:56:02 compute-0 nova_compute[247704]: 2026-01-31 08:56:02.309 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:02 compute-0 sshd-session[397263]: Connection closed by invalid user ubuntu 123.54.197.60 port 40438 [preauth]
Jan 31 08:56:02 compute-0 ceph-mon[74496]: pgmap v3712: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 7.0 KiB/s wr, 29 op/s
Jan 31 08:56:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3242168337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Jan 31 08:56:03 compute-0 sudo[397268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:03 compute-0 sudo[397268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:03 compute-0 sudo[397268]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:03 compute-0 sshd-session[397266]: Invalid user ubuntu from 123.54.197.60 port 40448
Jan 31 08:56:03 compute-0 sudo[397293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:03 compute-0 sudo[397293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:03 compute-0 sudo[397293]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:03 compute-0 ceph-mon[74496]: pgmap v3713: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Jan 31 08:56:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:03.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:03 compute-0 sshd-session[397266]: Connection closed by invalid user ubuntu 123.54.197.60 port 40448 [preauth]
Jan 31 08:56:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:04.088 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:56:04 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:04.089 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:56:04 compute-0 nova_compute[247704]: 2026-01-31 08:56:04.089 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:04 compute-0 nova_compute[247704]: 2026-01-31 08:56:04.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1685330573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:56:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Jan 31 08:56:05 compute-0 sshd-session[397318]: Invalid user ubuntu from 123.54.197.60 port 40460
Jan 31 08:56:05 compute-0 sshd-session[397318]: Connection closed by invalid user ubuntu 123.54.197.60 port 40460 [preauth]
Jan 31 08:56:05 compute-0 nova_compute[247704]: 2026-01-31 08:56:05.709 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:05.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:05.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:06 compute-0 ceph-mon[74496]: pgmap v3714: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Jan 31 08:56:06 compute-0 sshd-session[397321]: Invalid user ubuntu from 123.54.197.60 port 40474
Jan 31 08:56:06 compute-0 sshd-session[397321]: Connection closed by invalid user ubuntu 123.54.197.60 port 40474 [preauth]
Jan 31 08:56:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 23 op/s
Jan 31 08:56:07 compute-0 nova_compute[247704]: 2026-01-31 08:56:07.312 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:07.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:07.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:08 compute-0 ceph-mon[74496]: pgmap v3715: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 23 op/s
Jan 31 08:56:08 compute-0 nova_compute[247704]: 2026-01-31 08:56:08.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:08 compute-0 sshd-session[397324]: Invalid user ubuntu from 123.54.197.60 port 40484
Jan 31 08:56:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 4.2 KiB/s wr, 27 op/s
Jan 31 08:56:09 compute-0 sshd-session[397324]: Connection closed by invalid user ubuntu 123.54.197.60 port 40484 [preauth]
Jan 31 08:56:09 compute-0 ceph-mon[74496]: pgmap v3716: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 4.2 KiB/s wr, 27 op/s
Jan 31 08:56:09 compute-0 nova_compute[247704]: 2026-01-31 08:56:09.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:09 compute-0 nova_compute[247704]: 2026-01-31 08:56:09.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:56:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:09.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:09.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:10 compute-0 sshd-session[397327]: Invalid user ubuntu from 123.54.197.60 port 40488
Jan 31 08:56:10 compute-0 nova_compute[247704]: 2026-01-31 08:56:10.711 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:10 compute-0 sshd-session[397327]: Connection closed by invalid user ubuntu 123.54.197.60 port 40488 [preauth]
Jan 31 08:56:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:11.093 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:56:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 580 KiB/s rd, 361 KiB/s wr, 29 op/s
Jan 31 08:56:11 compute-0 ovn_controller[149457]: 2026-01-31T08:56:11Z|00861|binding|INFO|Releasing lport 62efe4e7-cbd5-44c6-8fac-7cb5fe1c3604 from this chassis (sb_readonly=0)
Jan 31 08:56:11 compute-0 nova_compute[247704]: 2026-01-31 08:56:11.160 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:11.226 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:11.227 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:11.227 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:11.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:11.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:12 compute-0 ceph-mon[74496]: pgmap v3717: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 580 KiB/s rd, 361 KiB/s wr, 29 op/s
Jan 31 08:56:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2553713859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:12 compute-0 nova_compute[247704]: 2026-01-31 08:56:12.313 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:12 compute-0 sshd-session[397330]: Invalid user ubuntu from 123.54.197.60 port 55072
Jan 31 08:56:13 compute-0 sshd-session[397330]: Connection closed by invalid user ubuntu 123.54.197.60 port 55072 [preauth]
Jan 31 08:56:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 573 KiB/s rd, 360 KiB/s wr, 17 op/s
Jan 31 08:56:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/249953271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:13.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:13.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:14 compute-0 ceph-mon[74496]: pgmap v3718: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 573 KiB/s rd, 360 KiB/s wr, 17 op/s
Jan 31 08:56:14 compute-0 sshd-session[397333]: Invalid user ubuntu from 123.54.197.60 port 55074
Jan 31 08:56:14 compute-0 nova_compute[247704]: 2026-01-31 08:56:14.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:14 compute-0 nova_compute[247704]: 2026-01-31 08:56:14.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:56:14 compute-0 nova_compute[247704]: 2026-01-31 08:56:14.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:56:14 compute-0 sshd-session[397333]: Connection closed by invalid user ubuntu 123.54.197.60 port 55074 [preauth]
Jan 31 08:56:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 573 KiB/s rd, 361 KiB/s wr, 17 op/s
Jan 31 08:56:15 compute-0 nova_compute[247704]: 2026-01-31 08:56:15.715 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:15.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:15.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:15 compute-0 podman[397338]: 2026-01-31 08:56:15.887163375 +0000 UTC m=+0.059864197 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:56:15 compute-0 sshd-session[397336]: Invalid user ubuntu from 123.54.197.60 port 55078
Jan 31 08:56:15 compute-0 nova_compute[247704]: 2026-01-31 08:56:15.945 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:56:15 compute-0 nova_compute[247704]: 2026-01-31 08:56:15.946 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:56:15 compute-0 nova_compute[247704]: 2026-01-31 08:56:15.946 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 08:56:15 compute-0 nova_compute[247704]: 2026-01-31 08:56:15.946 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid da90f1fb-9090-49b5-a510-d7e6ac7a30d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:56:16 compute-0 sshd-session[397336]: Connection closed by invalid user ubuntu 123.54.197.60 port 55078 [preauth]
Jan 31 08:56:16 compute-0 ceph-mon[74496]: pgmap v3719: 305 pgs: 305 active+clean; 304 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 573 KiB/s rd, 361 KiB/s wr, 17 op/s
Jan 31 08:56:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 291 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 363 KiB/s wr, 29 op/s
Jan 31 08:56:17 compute-0 nova_compute[247704]: 2026-01-31 08:56:17.315 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:17.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:17.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:18 compute-0 sshd-session[397358]: Invalid user ubuntu from 123.54.197.60 port 55088
Jan 31 08:56:18 compute-0 ceph-mon[74496]: pgmap v3720: 305 pgs: 305 active+clean; 291 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 363 KiB/s wr, 29 op/s
Jan 31 08:56:18 compute-0 sshd-session[397358]: Connection closed by invalid user ubuntu 123.54.197.60 port 55088 [preauth]
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.014 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating instance_info_cache with network_info: [{"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.038 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.039 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.039 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.077 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.078 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.078 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.078 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.078 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 290 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 360 KiB/s wr, 28 op/s
Jan 31 08:56:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/593862730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:19 compute-0 sshd-session[397361]: Invalid user ubuntu from 123.54.197.60 port 55102
Jan 31 08:56:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:56:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3652238372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.529 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.624 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.624 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 08:56:19 compute-0 sshd-session[397361]: Connection closed by invalid user ubuntu 123.54.197.60 port 55102 [preauth]
Jan 31 08:56:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:19.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.776 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.777 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4026MB free_disk=20.93993377685547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.777 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.778 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:19.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.929 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.930 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:56:19 compute-0 nova_compute[247704]: 2026-01-31 08:56:19.930 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:56:20 compute-0 nova_compute[247704]: 2026-01-31 08:56:20.011 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:56:20
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.control']
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:56:20 compute-0 ceph-mon[74496]: pgmap v3721: 305 pgs: 305 active+clean; 290 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 581 KiB/s rd, 360 KiB/s wr, 28 op/s
Jan 31 08:56:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3753263597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3652238372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/553797308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:56:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369310111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:20 compute-0 nova_compute[247704]: 2026-01-31 08:56:20.455 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:20 compute-0 nova_compute[247704]: 2026-01-31 08:56:20.462 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:56:20 compute-0 nova_compute[247704]: 2026-01-31 08:56:20.525 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:56:20 compute-0 nova_compute[247704]: 2026-01-31 08:56:20.529 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:56:20 compute-0 nova_compute[247704]: 2026-01-31 08:56:20.530 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:20 compute-0 nova_compute[247704]: 2026-01-31 08:56:20.532 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:20 compute-0 nova_compute[247704]: 2026-01-31 08:56:20.717 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:56:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:56:21 compute-0 sshd-session[397385]: Invalid user ubuntu from 123.54.197.60 port 57332
Jan 31 08:56:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 283 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 394 KiB/s rd, 360 KiB/s wr, 38 op/s
Jan 31 08:56:21 compute-0 sshd-session[397385]: Connection closed by invalid user ubuntu 123.54.197.60 port 57332 [preauth]
Jan 31 08:56:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3369310111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:22 compute-0 nova_compute[247704]: 2026-01-31 08:56:22.056 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:22 compute-0 nova_compute[247704]: 2026-01-31 08:56:22.360 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:22 compute-0 ceph-mon[74496]: pgmap v3722: 305 pgs: 305 active+clean; 283 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 394 KiB/s rd, 360 KiB/s wr, 38 op/s
Jan 31 08:56:22 compute-0 nova_compute[247704]: 2026-01-31 08:56:22.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:22 compute-0 nova_compute[247704]: 2026-01-31 08:56:22.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:22 compute-0 sshd-session[397410]: Invalid user ubuntu from 123.54.197.60 port 57346
Jan 31 08:56:22 compute-0 sshd-session[397410]: Connection closed by invalid user ubuntu 123.54.197.60 port 57346 [preauth]
Jan 31 08:56:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 283 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 3.6 KiB/s wr, 41 op/s
Jan 31 08:56:23 compute-0 sudo[397415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:23 compute-0 sudo[397415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:23 compute-0 sudo[397415]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:23 compute-0 sudo[397440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:23 compute-0 sudo[397440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:23 compute-0 sudo[397440]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 08:56:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2899906367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:56:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 08:56:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2899906367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:56:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Jan 31 08:56:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Jan 31 08:56:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Jan 31 08:56:24 compute-0 ceph-mon[74496]: pgmap v3723: 305 pgs: 305 active+clean; 283 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 3.6 KiB/s wr, 41 op/s
Jan 31 08:56:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2899906367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:56:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2899906367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:56:24 compute-0 sshd-session[397413]: Invalid user debian from 123.54.197.60 port 57348
Jan 31 08:56:24 compute-0 sshd-session[397413]: Connection closed by invalid user debian 123.54.197.60 port 57348 [preauth]
Jan 31 08:56:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 4.3 KiB/s wr, 54 op/s
Jan 31 08:56:25 compute-0 ceph-mon[74496]: osdmap e401: 3 total, 3 up, 3 in
Jan 31 08:56:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:25.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:25 compute-0 nova_compute[247704]: 2026-01-31 08:56:25.773 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:25.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:26 compute-0 sshd-session[397466]: Invalid user debian from 123.54.197.60 port 57364
Jan 31 08:56:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Jan 31 08:56:26 compute-0 ceph-mon[74496]: pgmap v3725: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 4.3 KiB/s wr, 54 op/s
Jan 31 08:56:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Jan 31 08:56:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Jan 31 08:56:26 compute-0 sshd-session[397466]: Connection closed by invalid user debian 123.54.197.60 port 57364 [preauth]
Jan 31 08:56:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Jan 31 08:56:27 compute-0 nova_compute[247704]: 2026-01-31 08:56:27.363 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:27 compute-0 ceph-mon[74496]: osdmap e402: 3 total, 3 up, 3 in
Jan 31 08:56:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:27.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:28 compute-0 sshd-session[397469]: Invalid user debian from 123.54.197.60 port 57366
Jan 31 08:56:28 compute-0 sshd-session[397469]: Connection closed by invalid user debian 123.54.197.60 port 57366 [preauth]
Jan 31 08:56:28 compute-0 ceph-mon[74496]: pgmap v3727: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Jan 31 08:56:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 74 op/s
Jan 31 08:56:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:56:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 58K writes, 218K keys, 58K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 58K writes, 21K syncs, 2.74 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3506 writes, 12K keys, 3506 commit groups, 1.0 writes per commit group, ingest: 15.02 MB, 0.03 MB/s
                                           Interval WAL: 3506 writes, 1419 syncs, 2.47 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:56:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:29.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:29.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:29 compute-0 ceph-mon[74496]: pgmap v3728: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 74 op/s
Jan 31 08:56:29 compute-0 podman[397474]: 2026-01-31 08:56:29.939913971 +0000 UTC m=+0.105929696 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:56:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Jan 31 08:56:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Jan 31 08:56:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Jan 31 08:56:30 compute-0 sshd-session[397472]: Invalid user debian from 123.54.197.60 port 57372
Jan 31 08:56:30 compute-0 sshd-session[397472]: Connection closed by invalid user debian 123.54.197.60 port 57372 [preauth]
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.776 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.820 247708 DEBUG nova.compute.manager [req-ca8d84cc-1c78-403b-a362-1a9a280c806a req-1f0a309e-f34e-4104-9119-38d719550699 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-changed-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.820 247708 DEBUG nova.compute.manager [req-ca8d84cc-1c78-403b-a362-1a9a280c806a req-1f0a309e-f34e-4104-9119-38d719550699 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Refreshing instance network info cache due to event network-changed-3b64ce10-dfce-4ef5-afaa-7985dab00bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.821 247708 DEBUG oslo_concurrency.lockutils [req-ca8d84cc-1c78-403b-a362-1a9a280c806a req-1f0a309e-f34e-4104-9119-38d719550699 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.821 247708 DEBUG oslo_concurrency.lockutils [req-ca8d84cc-1c78-403b-a362-1a9a280c806a req-1f0a309e-f34e-4104-9119-38d719550699 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.821 247708 DEBUG nova.network.neutron [req-ca8d84cc-1c78-403b-a362-1a9a280c806a req-1f0a309e-f34e-4104-9119-38d719550699 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Refreshing network info cache for port 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.979 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.980 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.981 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.981 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.982 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.985 247708 INFO nova.compute.manager [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Terminating instance
Jan 31 08:56:30 compute-0 nova_compute[247704]: 2026-01-31 08:56:30.986 247708 DEBUG nova.compute.manager [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:56:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 4.2 KiB/s wr, 67 op/s
Jan 31 08:56:31 compute-0 kernel: tap3b64ce10-df (unregistering): left promiscuous mode
Jan 31 08:56:31 compute-0 NetworkManager[49108]: <info>  [1769849791.2207] device (tap3b64ce10-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:56:31 compute-0 ovn_controller[149457]: 2026-01-31T08:56:31Z|00862|binding|INFO|Releasing lport 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 from this chassis (sb_readonly=0)
Jan 31 08:56:31 compute-0 ovn_controller[149457]: 2026-01-31T08:56:31Z|00863|binding|INFO|Setting lport 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 down in Southbound
Jan 31 08:56:31 compute-0 ovn_controller[149457]: 2026-01-31T08:56:31Z|00864|binding|INFO|Removing iface tap3b64ce10-df ovn-installed in OVS
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.229 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.232 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.240 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:31.245 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:93:84 10.100.0.14'], port_security=['fa:16:3e:d6:93:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'da90f1fb-9090-49b5-a510-d7e6ac7a30d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-53eebd24-3b32-4949-827a-524f9e042652', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6e96de7b2784be1adce763bc9c9adc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e34ba2ee-7d71-4f69-8288-b62c847fa225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fa4ba9c-944f-4ccd-90bc-07135c4442c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=3b64ce10-dfce-4ef5-afaa-7985dab00bc6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:56:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:31.246 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 3b64ce10-dfce-4ef5-afaa-7985dab00bc6 in datapath 53eebd24-3b32-4949-827a-524f9e042652 unbound from our chassis
Jan 31 08:56:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:31.247 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 53eebd24-3b32-4949-827a-524f9e042652, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:56:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:31.249 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[52d5a55f-6f85-4694-bc10-f86c2469ac02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:31.249 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 namespace which is not needed anymore
Jan 31 08:56:31 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000cb.scope: Deactivated successfully.
Jan 31 08:56:31 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000cb.scope: Consumed 20.359s CPU time.
Jan 31 08:56:31 compute-0 systemd-machined[214448]: Machine qemu-90-instance-000000cb terminated.
Jan 31 08:56:31 compute-0 ceph-mon[74496]: osdmap e403: 3 total, 3 up, 3 in
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.424 247708 INFO nova.virt.libvirt.driver [-] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Instance destroyed successfully.
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.424 247708 DEBUG nova.objects.instance [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lazy-loading 'resources' on Instance uuid da90f1fb-9090-49b5-a510-d7e6ac7a30d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.498 247708 DEBUG nova.virt.libvirt.vif [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:54:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1386444972',display_name='tempest-TestStampPattern-server-1386444972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1386444972',id=203,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfz5EJ1EvVhkopw671xsxq9cmxCD9AJZYRdx1fZqnsJKH4HE3ct43AahyjBDecQtfre/K2oZ3kPMxp5bbpWjZgXwmif2lJfZCK32Cd1YqdcHbaKXFc2nUgzqikPeTQnpA==',key_name='tempest-TestStampPattern-1030074900',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:54:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6e96de7b2784be1adce763bc9c9adc5',ramdisk_id='',reservation_id='r-or1ep324',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-23409568',owner_user_name='tempest-TestStampPattern-23409568-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:54:59Z,user_data=None,user_id='a798fdf6d13d4af4b166dd94b5cea7cc',uuid=da90f1fb-9090-49b5-a510-d7e6ac7a30d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.498 247708 DEBUG nova.network.os_vif_util [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converting VIF {"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.499 247708 DEBUG nova.network.os_vif_util [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3b64ce10-dfce-4ef5-afaa-7985dab00bc6,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b64ce10-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.499 247708 DEBUG os_vif [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3b64ce10-dfce-4ef5-afaa-7985dab00bc6,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b64ce10-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.501 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.502 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b64ce10-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.503 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.506 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:31 compute-0 nova_compute[247704]: 2026-01-31 08:56:31.509 247708 INFO os_vif [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:93:84,bridge_name='br-int',has_traffic_filtering=True,id=3b64ce10-dfce-4ef5-afaa-7985dab00bc6,network=Network(53eebd24-3b32-4949-827a-524f9e042652),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b64ce10-df')
Jan 31 08:56:31 compute-0 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[394711]: [NOTICE]   (394716) : haproxy version is 2.8.14-c23fe91
Jan 31 08:56:31 compute-0 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[394711]: [NOTICE]   (394716) : path to executable is /usr/sbin/haproxy
Jan 31 08:56:31 compute-0 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[394711]: [WARNING]  (394716) : Exiting Master process...
Jan 31 08:56:31 compute-0 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[394711]: [WARNING]  (394716) : Exiting Master process...
Jan 31 08:56:31 compute-0 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[394711]: [ALERT]    (394716) : Current worker (394718) exited with code 143 (Terminated)
Jan 31 08:56:31 compute-0 neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652[394711]: [WARNING]  (394716) : All workers exited. Exiting... (0)
Jan 31 08:56:31 compute-0 systemd[1]: libpod-1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855.scope: Deactivated successfully.
Jan 31 08:56:31 compute-0 podman[397527]: 2026-01-31 08:56:31.570551862 +0000 UTC m=+0.222129922 container died 1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:56:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:31.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:31.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855-userdata-shm.mount: Deactivated successfully.
Jan 31 08:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9de26101b1d497cc742cd5422f2bec748d01ba25d899c38998bfd782c0bc4d77-merged.mount: Deactivated successfully.
Jan 31 08:56:32 compute-0 sshd-session[397501]: Invalid user debian from 123.54.197.60 port 43608
Jan 31 08:56:32 compute-0 podman[397527]: 2026-01-31 08:56:32.091389146 +0000 UTC m=+0.742967216 container cleanup 1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:56:32 compute-0 systemd[1]: libpod-conmon-1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855.scope: Deactivated successfully.
Jan 31 08:56:32 compute-0 sshd-session[397501]: Connection closed by invalid user debian 123.54.197.60 port 43608 [preauth]
Jan 31 08:56:32 compute-0 nova_compute[247704]: 2026-01-31 08:56:32.365 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:32 compute-0 podman[397588]: 2026-01-31 08:56:32.384291648 +0000 UTC m=+0.270820026 container remove 1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:56:32 compute-0 ceph-mon[74496]: pgmap v3730: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 4.2 KiB/s wr, 67 op/s
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.388 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e737bc-a1bb-4f52-ae78-46f8eff270c8]: (4, ('Sat Jan 31 08:56:31 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 (1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855)\n1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855\nSat Jan 31 08:56:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 (1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855)\n1bd5655670f0b5cc27a7042e36e2af512de83bc0df5de8c9e6731dc48b61b855\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.390 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[29bb311e-a02e-4b85-9676-6fd829776b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.391 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap53eebd24-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:56:32 compute-0 nova_compute[247704]: 2026-01-31 08:56:32.393 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:32 compute-0 kernel: tap53eebd24-30: left promiscuous mode
Jan 31 08:56:32 compute-0 nova_compute[247704]: 2026-01-31 08:56:32.399 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:32 compute-0 nova_compute[247704]: 2026-01-31 08:56:32.400 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.403 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[20be700c-9543-475b-b3bf-9c667d329d85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.415 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d04b1e1a-ffac-459b-88d8-d699877765b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.416 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[79fa5013-00d6-4f3b-9774-5e7c78332100]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.430 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[484796d3-993f-446c-b565-8188c11a18ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 990027, 'reachable_time': 26648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397604, 'error': None, 'target': 'ovnmeta-53eebd24-3b32-4949-827a-524f9e042652', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d53eebd24\x2d3b32\x2d4949\x2d827a\x2d524f9e042652.mount: Deactivated successfully.
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.434 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-53eebd24-3b32-4949-827a-524f9e042652 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:56:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:56:32.434 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[07f1567b-f33c-4ebd-851f-e8dd5639e398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:56:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 3.0 KiB/s wr, 49 op/s
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.200 247708 INFO nova.virt.libvirt.driver [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Deleting instance files /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6_del
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.201 247708 INFO nova.virt.libvirt.driver [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Deletion of /var/lib/nova/instances/da90f1fb-9090-49b5-a510-d7e6ac7a30d6_del complete
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.233 247708 DEBUG nova.compute.manager [req-a591b54a-16c0-4243-ad68-d37f0e52a908 req-54d3babe-2791-4778-885b-91202356ef01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-vif-unplugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.233 247708 DEBUG oslo_concurrency.lockutils [req-a591b54a-16c0-4243-ad68-d37f0e52a908 req-54d3babe-2791-4778-885b-91202356ef01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.233 247708 DEBUG oslo_concurrency.lockutils [req-a591b54a-16c0-4243-ad68-d37f0e52a908 req-54d3babe-2791-4778-885b-91202356ef01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.233 247708 DEBUG oslo_concurrency.lockutils [req-a591b54a-16c0-4243-ad68-d37f0e52a908 req-54d3babe-2791-4778-885b-91202356ef01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.234 247708 DEBUG nova.compute.manager [req-a591b54a-16c0-4243-ad68-d37f0e52a908 req-54d3babe-2791-4778-885b-91202356ef01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] No waiting events found dispatching network-vif-unplugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.234 247708 DEBUG nova.compute.manager [req-a591b54a-16c0-4243-ad68-d37f0e52a908 req-54d3babe-2791-4778-885b-91202356ef01 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-vif-unplugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.365 247708 INFO nova.compute.manager [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Took 2.38 seconds to destroy the instance on the hypervisor.
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.366 247708 DEBUG oslo.service.loopingcall [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.366 247708 DEBUG nova.compute.manager [-] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:56:33 compute-0 nova_compute[247704]: 2026-01-31 08:56:33.366 247708 DEBUG nova.network.neutron [-] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:56:33 compute-0 sshd-session[397605]: Invalid user debian from 123.54.197.60 port 43620
Jan 31 08:56:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:33 compute-0 sshd-session[397605]: Connection closed by invalid user debian 123.54.197.60 port 43620 [preauth]
Jan 31 08:56:34 compute-0 ceph-mon[74496]: pgmap v3731: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 3.0 KiB/s wr, 49 op/s
Jan 31 08:56:34 compute-0 nova_compute[247704]: 2026-01-31 08:56:34.735 247708 DEBUG nova.network.neutron [req-ca8d84cc-1c78-403b-a362-1a9a280c806a req-1f0a309e-f34e-4104-9119-38d719550699 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updated VIF entry in instance network info cache for port 3b64ce10-dfce-4ef5-afaa-7985dab00bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:56:34 compute-0 nova_compute[247704]: 2026-01-31 08:56:34.736 247708 DEBUG nova.network.neutron [req-ca8d84cc-1c78-403b-a362-1a9a280c806a req-1f0a309e-f34e-4104-9119-38d719550699 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating instance_info_cache with network_info: [{"id": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "address": "fa:16:3e:d6:93:84", "network": {"id": "53eebd24-3b32-4949-827a-524f9e042652", "bridge": "br-int", "label": "tempest-TestStampPattern-1663738459-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6e96de7b2784be1adce763bc9c9adc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b64ce10-df", "ovs_interfaceid": "3b64ce10-dfce-4ef5-afaa-7985dab00bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:56:34 compute-0 nova_compute[247704]: 2026-01-31 08:56:34.924 247708 DEBUG oslo_concurrency.lockutils [req-ca8d84cc-1c78-403b-a362-1a9a280c806a req-1f0a309e-f34e-4104-9119-38d719550699 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-da90f1fb-9090-49b5-a510-d7e6ac7a30d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.008 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.076 247708 DEBUG nova.network.neutron [-] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:56:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 173 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.211 247708 INFO nova.compute.manager [-] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Took 1.84 seconds to deallocate network for instance.
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.216 247708 DEBUG nova.compute.manager [req-46ea8c95-b84e-4305-8fd2-4616f7124a34 req-2d67877e-3084-4883-b544-a6e26858c225 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-vif-deleted-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.216 247708 INFO nova.compute.manager [req-46ea8c95-b84e-4305-8fd2-4616f7124a34 req-2d67877e-3084-4883-b544-a6e26858c225 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Neutron deleted interface 3b64ce10-dfce-4ef5-afaa-7985dab00bc6; detaching it from the instance and deleting it from the info cache
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.217 247708 DEBUG nova.network.neutron [req-46ea8c95-b84e-4305-8fd2-4616f7124a34 req-2d67877e-3084-4883-b544-a6e26858c225 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:56:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Jan 31 08:56:35 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.379 247708 DEBUG nova.compute.manager [req-46ea8c95-b84e-4305-8fd2-4616f7124a34 req-2d67877e-3084-4883-b544-a6e26858c225 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Detach interface failed, port_id=3b64ce10-dfce-4ef5-afaa-7985dab00bc6, reason: Instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.581 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.581 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.631 247708 DEBUG nova.compute.manager [req-3ed97c21-1c42-4653-ac38-42586cfea501 req-cc905130-c241-4f12-86e1-a248cf05adc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received event network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.632 247708 DEBUG oslo_concurrency.lockutils [req-3ed97c21-1c42-4653-ac38-42586cfea501 req-cc905130-c241-4f12-86e1-a248cf05adc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.632 247708 DEBUG oslo_concurrency.lockutils [req-3ed97c21-1c42-4653-ac38-42586cfea501 req-cc905130-c241-4f12-86e1-a248cf05adc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.632 247708 DEBUG oslo_concurrency.lockutils [req-3ed97c21-1c42-4653-ac38-42586cfea501 req-cc905130-c241-4f12-86e1-a248cf05adc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.632 247708 DEBUG nova.compute.manager [req-3ed97c21-1c42-4653-ac38-42586cfea501 req-cc905130-c241-4f12-86e1-a248cf05adc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] No waiting events found dispatching network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.633 247708 WARNING nova.compute.manager [req-3ed97c21-1c42-4653-ac38-42586cfea501 req-cc905130-c241-4f12-86e1-a248cf05adc2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Received unexpected event network-vif-plugged-3b64ce10-dfce-4ef5-afaa-7985dab00bc6 for instance with vm_state active and task_state deleting.
Jan 31 08:56:35 compute-0 nova_compute[247704]: 2026-01-31 08:56:35.642 247708 DEBUG oslo_concurrency.processutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:56:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:35.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:35.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:56:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846463667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:36 compute-0 nova_compute[247704]: 2026-01-31 08:56:36.055 247708 DEBUG oslo_concurrency.processutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:56:36 compute-0 nova_compute[247704]: 2026-01-31 08:56:36.062 247708 DEBUG nova.compute.provider_tree [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:56:36 compute-0 sshd-session[397608]: Invalid user debian from 123.54.197.60 port 43632
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013571546970965942 of space, bias 1.0, pg target 0.40714640912897826 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0022557448701842546 of space, bias 1.0, pg target 0.6767234610552764 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:56:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:56:36 compute-0 nova_compute[247704]: 2026-01-31 08:56:36.110 247708 DEBUG nova.scheduler.client.report [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:56:36 compute-0 nova_compute[247704]: 2026-01-31 08:56:36.274 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:36 compute-0 ceph-mon[74496]: pgmap v3732: 305 pgs: 305 active+clean; 173 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Jan 31 08:56:36 compute-0 ceph-mon[74496]: osdmap e404: 3 total, 3 up, 3 in
Jan 31 08:56:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2846463667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:56:36 compute-0 nova_compute[247704]: 2026-01-31 08:56:36.375 247708 INFO nova.scheduler.client.report [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Deleted allocations for instance da90f1fb-9090-49b5-a510-d7e6ac7a30d6
Jan 31 08:56:36 compute-0 sshd-session[397608]: Connection closed by invalid user debian 123.54.197.60 port 43632 [preauth]
Jan 31 08:56:36 compute-0 nova_compute[247704]: 2026-01-31 08:56:36.506 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:36 compute-0 nova_compute[247704]: 2026-01-31 08:56:36.890 247708 DEBUG oslo_concurrency.lockutils [None req-ccfa8405-a636-40c0-b083-dee5b8c7780e a798fdf6d13d4af4b166dd94b5cea7cc e6e96de7b2784be1adce763bc9c9adc5 - - default default] Lock "da90f1fb-9090-49b5-a510-d7e6ac7a30d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:56:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Jan 31 08:56:37 compute-0 nova_compute[247704]: 2026-01-31 08:56:37.379 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:37.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:37 compute-0 sshd-session[397634]: Invalid user debian from 123.54.197.60 port 43646
Jan 31 08:56:38 compute-0 sshd-session[397634]: Connection closed by invalid user debian 123.54.197.60 port 43646 [preauth]
Jan 31 08:56:38 compute-0 ceph-mon[74496]: pgmap v3734: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Jan 31 08:56:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.6 KiB/s wr, 37 op/s
Jan 31 08:56:39 compute-0 sshd-session[397637]: Invalid user debian from 123.54.197.60 port 43650
Jan 31 08:56:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:39.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:39 compute-0 sshd-session[397637]: Connection closed by invalid user debian 123.54.197.60 port 43650 [preauth]
Jan 31 08:56:39 compute-0 sudo[397639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:39 compute-0 sudo[397639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:39 compute-0 sudo[397639]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:39 compute-0 sudo[397664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:56:39 compute-0 sudo[397664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:39 compute-0 sudo[397664]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:39 compute-0 sudo[397689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:39 compute-0 sudo[397689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:39 compute-0 sudo[397689]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:39 compute-0 sudo[397714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 08:56:39 compute-0 sudo[397714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:40 compute-0 ceph-mon[74496]: pgmap v3735: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.6 KiB/s wr, 37 op/s
Jan 31 08:56:40 compute-0 podman[397811]: 2026-01-31 08:56:40.376126359 +0000 UTC m=+0.061597049 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:56:40 compute-0 podman[397811]: 2026-01-31 08:56:40.489443035 +0000 UTC m=+0.174913725 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:56:40 compute-0 sshd-session[397739]: Invalid user debian from 123.54.197.60 port 36252
Jan 31 08:56:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 31 08:56:41 compute-0 podman[397965]: 2026-01-31 08:56:41.129470717 +0000 UTC m=+0.135347152 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:56:41 compute-0 podman[397965]: 2026-01-31 08:56:41.137985364 +0000 UTC m=+0.143861739 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 08:56:41 compute-0 sshd-session[397739]: Connection closed by invalid user debian 123.54.197.60 port 36252 [preauth]
Jan 31 08:56:41 compute-0 podman[398031]: 2026-01-31 08:56:41.32534994 +0000 UTC m=+0.053055521 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.openshift.expose-services=, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.component=keepalived-container, vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64)
Jan 31 08:56:41 compute-0 podman[398031]: 2026-01-31 08:56:41.335430155 +0000 UTC m=+0.063135736 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.openshift.expose-services=, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, architecture=x86_64, name=keepalived, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 08:56:41 compute-0 sudo[397714]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:56:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:56:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:41 compute-0 sudo[398060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:41 compute-0 sudo[398060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:41 compute-0 sudo[398060]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:41 compute-0 nova_compute[247704]: 2026-01-31 08:56:41.509 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:41 compute-0 sudo[398085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:56:41 compute-0 sudo[398085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:41 compute-0 sudo[398085]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:41 compute-0 sudo[398111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:41 compute-0 sudo[398111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:41 compute-0 sudo[398111]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 08:56:41 compute-0 sudo[398137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:56:41 compute-0 sudo[398137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:41.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:41.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:42 compute-0 sudo[398137]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:56:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:56:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:56:42 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:56:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:56:42 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:42 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7b1380bb-ccfa-494c-a30a-5c2e8c0823df does not exist
Jan 31 08:56:42 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5ce309a6-08ec-4037-ae0a-14189e9ee54f does not exist
Jan 31 08:56:42 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a8521ab9-6818-4439-ad81-604999a7dcef does not exist
Jan 31 08:56:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:56:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:56:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:56:42 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:56:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:56:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:56:42 compute-0 sudo[398193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:42 compute-0 sudo[398193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:42 compute-0 sudo[398193]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:42 compute-0 sudo[398218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:56:42 compute-0 sudo[398218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:42 compute-0 sudo[398218]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:42 compute-0 sudo[398243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:42 compute-0 sudo[398243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:42 compute-0 sudo[398243]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:42 compute-0 sudo[398268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:56:42 compute-0 sudo[398268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:42 compute-0 nova_compute[247704]: 2026-01-31 08:56:42.380 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:42 compute-0 ceph-mon[74496]: pgmap v3736: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 31 08:56:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:56:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:56:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:56:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:56:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:56:42 compute-0 podman[398336]: 2026-01-31 08:56:42.578384409 +0000 UTC m=+0.052789175 container create 40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:56:42 compute-0 systemd[1]: Started libpod-conmon-40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7.scope.
Jan 31 08:56:42 compute-0 podman[398336]: 2026-01-31 08:56:42.55620473 +0000 UTC m=+0.030609536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:56:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:56:42 compute-0 podman[398336]: 2026-01-31 08:56:42.673205805 +0000 UTC m=+0.147610641 container init 40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:56:42 compute-0 podman[398336]: 2026-01-31 08:56:42.680888221 +0000 UTC m=+0.155292977 container start 40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:56:42 compute-0 funny_wiles[398354]: 167 167
Jan 31 08:56:42 compute-0 systemd[1]: libpod-40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7.scope: Deactivated successfully.
Jan 31 08:56:42 compute-0 podman[398336]: 2026-01-31 08:56:42.705373017 +0000 UTC m=+0.179777863 container attach 40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 08:56:42 compute-0 podman[398336]: 2026-01-31 08:56:42.705799447 +0000 UTC m=+0.180204243 container died 40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:56:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0a58e4a4d878508ca5f0c06d71ee7c8286fe84df7159e49efdc912e49e59df7-merged.mount: Deactivated successfully.
Jan 31 08:56:42 compute-0 podman[398336]: 2026-01-31 08:56:42.747564703 +0000 UTC m=+0.221969499 container remove 40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wiles, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:56:42 compute-0 systemd[1]: libpod-conmon-40aad80e0dc04a343f02b093ccf636201061e71fc2ae84d5285d3864ef4c17c7.scope: Deactivated successfully.
Jan 31 08:56:42 compute-0 podman[398378]: 2026-01-31 08:56:42.91442206 +0000 UTC m=+0.050500099 container create 97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:56:42 compute-0 systemd[1]: Started libpod-conmon-97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083.scope.
Jan 31 08:56:42 compute-0 podman[398378]: 2026-01-31 08:56:42.890174771 +0000 UTC m=+0.026252860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:56:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7bae63945672f7581342adea6f2a589facccc1371762386ac39993bf2fc61d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7bae63945672f7581342adea6f2a589facccc1371762386ac39993bf2fc61d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7bae63945672f7581342adea6f2a589facccc1371762386ac39993bf2fc61d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7bae63945672f7581342adea6f2a589facccc1371762386ac39993bf2fc61d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7bae63945672f7581342adea6f2a589facccc1371762386ac39993bf2fc61d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:43 compute-0 podman[398378]: 2026-01-31 08:56:43.019835523 +0000 UTC m=+0.155913572 container init 97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 08:56:43 compute-0 podman[398378]: 2026-01-31 08:56:43.026768212 +0000 UTC m=+0.162846221 container start 97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:56:43 compute-0 podman[398378]: 2026-01-31 08:56:43.02998454 +0000 UTC m=+0.166062549 container attach 97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:56:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Jan 31 08:56:43 compute-0 sshd-session[398110]: Invalid user debian from 123.54.197.60 port 36260
Jan 31 08:56:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:43 compute-0 nostalgic_meninsky[398395]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:56:43 compute-0 nostalgic_meninsky[398395]: --> relative data size: 1.0
Jan 31 08:56:43 compute-0 nostalgic_meninsky[398395]: --> All data devices are unavailable
Jan 31 08:56:43 compute-0 sshd-session[398110]: Connection closed by invalid user debian 123.54.197.60 port 36260 [preauth]
Jan 31 08:56:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:43 compute-0 sudo[398408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:43 compute-0 sudo[398408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:43 compute-0 sudo[398408]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:43 compute-0 systemd[1]: libpod-97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083.scope: Deactivated successfully.
Jan 31 08:56:43 compute-0 podman[398378]: 2026-01-31 08:56:43.839571047 +0000 UTC m=+0.975649046 container died 97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:56:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7bae63945672f7581342adea6f2a589facccc1371762386ac39993bf2fc61d3-merged.mount: Deactivated successfully.
Jan 31 08:56:43 compute-0 podman[398378]: 2026-01-31 08:56:43.888463145 +0000 UTC m=+1.024541144 container remove 97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 08:56:43 compute-0 systemd[1]: libpod-conmon-97a6add2c9907831baec134f28b6ebda2970d824bcf1919d5b172b9719ba0083.scope: Deactivated successfully.
Jan 31 08:56:43 compute-0 sudo[398436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:43 compute-0 sudo[398436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:43 compute-0 sudo[398436]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:43 compute-0 sudo[398268]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:43 compute-0 sudo[398471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:43 compute-0 sudo[398471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:43 compute-0 sudo[398471]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:44 compute-0 sudo[398496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:56:44 compute-0 sudo[398496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:44 compute-0 sudo[398496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:44 compute-0 sudo[398521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:44 compute-0 sudo[398521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:44 compute-0 sudo[398521]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:44 compute-0 sudo[398548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:56:44 compute-0 sudo[398548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:44 compute-0 ceph-mon[74496]: pgmap v3737: 305 pgs: 305 active+clean; 122 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 KiB/s wr, 48 op/s
Jan 31 08:56:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1492873453' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:56:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1492873453' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:56:44 compute-0 podman[398613]: 2026-01-31 08:56:44.443492451 +0000 UTC m=+0.047547447 container create da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:56:44 compute-0 systemd[1]: Started libpod-conmon-da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8.scope.
Jan 31 08:56:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:56:44 compute-0 podman[398613]: 2026-01-31 08:56:44.419510608 +0000 UTC m=+0.023565664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:56:44 compute-0 podman[398613]: 2026-01-31 08:56:44.530312752 +0000 UTC m=+0.134367758 container init da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:56:44 compute-0 podman[398613]: 2026-01-31 08:56:44.536897963 +0000 UTC m=+0.140952939 container start da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:56:44 compute-0 podman[398613]: 2026-01-31 08:56:44.54051031 +0000 UTC m=+0.144565286 container attach da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:56:44 compute-0 frosty_shannon[398629]: 167 167
Jan 31 08:56:44 compute-0 systemd[1]: libpod-da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8.scope: Deactivated successfully.
Jan 31 08:56:44 compute-0 podman[398613]: 2026-01-31 08:56:44.543684647 +0000 UTC m=+0.147739643 container died da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:56:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fc2c0c418e228a5ee461dcdcbc6455f1214ed37d02865d0fcad6326b86cccd5-merged.mount: Deactivated successfully.
Jan 31 08:56:44 compute-0 podman[398613]: 2026-01-31 08:56:44.577417488 +0000 UTC m=+0.181472454 container remove da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:56:44 compute-0 systemd[1]: libpod-conmon-da6fd52b72e48a9e0163c606d9884e1dfabbc74488d6f853c3ba38f479ffdce8.scope: Deactivated successfully.
Jan 31 08:56:44 compute-0 podman[398655]: 2026-01-31 08:56:44.73174505 +0000 UTC m=+0.058571465 container create 6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:56:44 compute-0 systemd[1]: Started libpod-conmon-6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e.scope.
Jan 31 08:56:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:56:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67be9a800cb8929824bbc90057e588f65d74f06cbbf357b6e04920c7276deaff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67be9a800cb8929824bbc90057e588f65d74f06cbbf357b6e04920c7276deaff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67be9a800cb8929824bbc90057e588f65d74f06cbbf357b6e04920c7276deaff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67be9a800cb8929824bbc90057e588f65d74f06cbbf357b6e04920c7276deaff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:44 compute-0 podman[398655]: 2026-01-31 08:56:44.710441643 +0000 UTC m=+0.037268148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:56:44 compute-0 podman[398655]: 2026-01-31 08:56:44.807155984 +0000 UTC m=+0.133982399 container init 6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:56:44 compute-0 podman[398655]: 2026-01-31 08:56:44.816344977 +0000 UTC m=+0.143171382 container start 6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:56:44 compute-0 podman[398655]: 2026-01-31 08:56:44.820463587 +0000 UTC m=+0.147289992 container attach 6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 08:56:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1023 B/s wr, 29 op/s
Jan 31 08:56:45 compute-0 sshd-session[398534]: Invalid user debian from 123.54.197.60 port 36268
Jan 31 08:56:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:45 compute-0 sshd-session[398534]: Connection closed by invalid user debian 123.54.197.60 port 36268 [preauth]
Jan 31 08:56:45 compute-0 agitated_mayer[398671]: {
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:     "0": [
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:         {
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "devices": [
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "/dev/loop3"
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             ],
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "lv_name": "ceph_lv0",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "lv_size": "7511998464",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "name": "ceph_lv0",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "tags": {
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.cluster_name": "ceph",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.crush_device_class": "",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.encrypted": "0",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.osd_id": "0",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.type": "block",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:                 "ceph.vdo": "0"
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             },
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "type": "block",
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:             "vg_name": "ceph_vg0"
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:         }
Jan 31 08:56:45 compute-0 agitated_mayer[398671]:     ]
Jan 31 08:56:45 compute-0 agitated_mayer[398671]: }
Jan 31 08:56:45 compute-0 systemd[1]: libpod-6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e.scope: Deactivated successfully.
Jan 31 08:56:45 compute-0 podman[398655]: 2026-01-31 08:56:45.596009136 +0000 UTC m=+0.922835541 container died 6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:56:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-67be9a800cb8929824bbc90057e588f65d74f06cbbf357b6e04920c7276deaff-merged.mount: Deactivated successfully.
Jan 31 08:56:45 compute-0 podman[398655]: 2026-01-31 08:56:45.64967088 +0000 UTC m=+0.976497285 container remove 6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:56:45 compute-0 systemd[1]: libpod-conmon-6a1603d9ff01f58a6f1613121a00bb43ec14a69299a124a99c71e1bdb92a205e.scope: Deactivated successfully.
Jan 31 08:56:45 compute-0 sudo[398548]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:45 compute-0 sudo[398695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:45 compute-0 sudo[398695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:45 compute-0 sudo[398695]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:45.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:45 compute-0 sudo[398720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:56:45 compute-0 sudo[398720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:45 compute-0 sudo[398720]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:45 compute-0 sudo[398745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:45 compute-0 sudo[398745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:45 compute-0 sudo[398745]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:45 compute-0 sudo[398770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:56:45 compute-0 sudo[398770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:46 compute-0 podman[398794]: 2026-01-31 08:56:46.001980547 +0000 UTC m=+0.084188728 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:56:46 compute-0 podman[398855]: 2026-01-31 08:56:46.213564042 +0000 UTC m=+0.038966169 container create 57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 08:56:46 compute-0 systemd[1]: Started libpod-conmon-57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6.scope.
Jan 31 08:56:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:56:46 compute-0 podman[398855]: 2026-01-31 08:56:46.197357109 +0000 UTC m=+0.022759256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:56:46 compute-0 podman[398855]: 2026-01-31 08:56:46.302777082 +0000 UTC m=+0.128179249 container init 57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:56:46 compute-0 podman[398855]: 2026-01-31 08:56:46.311856912 +0000 UTC m=+0.137259029 container start 57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 08:56:46 compute-0 podman[398855]: 2026-01-31 08:56:46.315815548 +0000 UTC m=+0.141217725 container attach 57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 08:56:46 compute-0 compassionate_banzai[398871]: 167 167
Jan 31 08:56:46 compute-0 systemd[1]: libpod-57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6.scope: Deactivated successfully.
Jan 31 08:56:46 compute-0 conmon[398871]: conmon 57c1b85a4b11e15e0b74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6.scope/container/memory.events
Jan 31 08:56:46 compute-0 podman[398855]: 2026-01-31 08:56:46.319592981 +0000 UTC m=+0.144995108 container died 57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 08:56:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0a94e4acd8bd32b1de5d1de8011c74cf8faeeaf325b9565715c0c798fcec7a9-merged.mount: Deactivated successfully.
Jan 31 08:56:46 compute-0 podman[398855]: 2026-01-31 08:56:46.359740277 +0000 UTC m=+0.185142434 container remove 57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_banzai, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:56:46 compute-0 systemd[1]: libpod-conmon-57c1b85a4b11e15e0b74f9a208f63d1fc13b74bff3c73a6ee1b6951fee0e42f6.scope: Deactivated successfully.
Jan 31 08:56:46 compute-0 nova_compute[247704]: 2026-01-31 08:56:46.423 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849791.422328, da90f1fb-9090-49b5-a510-d7e6ac7a30d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:56:46 compute-0 nova_compute[247704]: 2026-01-31 08:56:46.424 247708 INFO nova.compute.manager [-] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] VM Stopped (Lifecycle Event)
Jan 31 08:56:46 compute-0 ceph-mon[74496]: pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1023 B/s wr, 29 op/s
Jan 31 08:56:46 compute-0 nova_compute[247704]: 2026-01-31 08:56:46.512 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:46 compute-0 podman[398895]: 2026-01-31 08:56:46.54452584 +0000 UTC m=+0.054511646 container create 92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:56:46 compute-0 systemd[1]: Started libpod-conmon-92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04.scope.
Jan 31 08:56:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60708ebfb7575f1783f33d3f9b668a2af08fd9f0b3bf06dcfbfd6e202b34af7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60708ebfb7575f1783f33d3f9b668a2af08fd9f0b3bf06dcfbfd6e202b34af7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60708ebfb7575f1783f33d3f9b668a2af08fd9f0b3bf06dcfbfd6e202b34af7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60708ebfb7575f1783f33d3f9b668a2af08fd9f0b3bf06dcfbfd6e202b34af7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:56:46 compute-0 podman[398895]: 2026-01-31 08:56:46.524593665 +0000 UTC m=+0.034579491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:56:46 compute-0 podman[398895]: 2026-01-31 08:56:46.634486658 +0000 UTC m=+0.144472474 container init 92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 08:56:46 compute-0 nova_compute[247704]: 2026-01-31 08:56:46.636 247708 DEBUG nova.compute.manager [None req-bf5e358a-f6e1-46a0-a4be-c777a18043af - - - - - -] [instance: da90f1fb-9090-49b5-a510-d7e6ac7a30d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:56:46 compute-0 podman[398895]: 2026-01-31 08:56:46.640132485 +0000 UTC m=+0.150118281 container start 92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 08:56:46 compute-0 podman[398895]: 2026-01-31 08:56:46.643470156 +0000 UTC m=+0.153455962 container attach 92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 08:56:46 compute-0 sshd-session[398693]: Invalid user debian from 123.54.197.60 port 36280
Jan 31 08:56:46 compute-0 sshd-session[398693]: Connection closed by invalid user debian 123.54.197.60 port 36280 [preauth]
Jan 31 08:56:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.1 KiB/s wr, 27 op/s
Jan 31 08:56:47 compute-0 nova_compute[247704]: 2026-01-31 08:56:47.382 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:47 compute-0 sad_goldstine[398912]: {
Jan 31 08:56:47 compute-0 sad_goldstine[398912]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:56:47 compute-0 sad_goldstine[398912]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:56:47 compute-0 sad_goldstine[398912]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:56:47 compute-0 sad_goldstine[398912]:         "osd_id": 0,
Jan 31 08:56:47 compute-0 sad_goldstine[398912]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:56:47 compute-0 sad_goldstine[398912]:         "type": "bluestore"
Jan 31 08:56:47 compute-0 sad_goldstine[398912]:     }
Jan 31 08:56:47 compute-0 sad_goldstine[398912]: }
Jan 31 08:56:47 compute-0 systemd[1]: libpod-92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04.scope: Deactivated successfully.
Jan 31 08:56:47 compute-0 podman[398895]: 2026-01-31 08:56:47.46819807 +0000 UTC m=+0.978183876 container died 92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 08:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-60708ebfb7575f1783f33d3f9b668a2af08fd9f0b3bf06dcfbfd6e202b34af7f-merged.mount: Deactivated successfully.
Jan 31 08:56:47 compute-0 podman[398895]: 2026-01-31 08:56:47.518746719 +0000 UTC m=+1.028732525 container remove 92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_goldstine, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:56:47 compute-0 systemd[1]: libpod-conmon-92f8337c868885ef5432c1a06802a8f2f6e2b58b17c06dd336b40759c4c42c04.scope: Deactivated successfully.
Jan 31 08:56:47 compute-0 sudo[398770]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:56:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:56:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a0d340e7-25fc-474f-b6c3-4f53a25a0259 does not exist
Jan 31 08:56:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d36f0733-0c0a-4644-98b6-2dfedc90592d does not exist
Jan 31 08:56:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 31d12056-08e8-4c1a-85a3-036da4ab5e8c does not exist
Jan 31 08:56:47 compute-0 sudo[398949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:56:47 compute-0 sudo[398949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:47 compute-0 sudo[398949]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:47 compute-0 sudo[398974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:56:47 compute-0 sudo[398974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:56:47 compute-0 sudo[398974]: pam_unix(sudo:session): session closed for user root
Jan 31 08:56:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:47.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:48 compute-0 sshd-session[398917]: Invalid user debian from 123.54.197.60 port 36284
Jan 31 08:56:48 compute-0 ceph-mon[74496]: pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.1 KiB/s wr, 27 op/s
Jan 31 08:56:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:56:48 compute-0 sshd-session[398917]: Connection closed by invalid user debian 123.54.197.60 port 36284 [preauth]
Jan 31 08:56:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 15 op/s
Jan 31 08:56:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:49.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:49.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:56:50 compute-0 sshd-session[399000]: Invalid user debian from 123.54.197.60 port 36294
Jan 31 08:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:56:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:56:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:50 compute-0 sshd-session[399000]: Connection closed by invalid user debian 123.54.197.60 port 36294 [preauth]
Jan 31 08:56:50 compute-0 ceph-mon[74496]: pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 15 op/s
Jan 31 08:56:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Jan 31 08:56:51 compute-0 nova_compute[247704]: 2026-01-31 08:56:51.516 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:51.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:51 compute-0 sshd-session[399003]: Invalid user debian from 123.54.197.60 port 59054
Jan 31 08:56:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:51.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:52 compute-0 sshd-session[399003]: Connection closed by invalid user debian 123.54.197.60 port 59054 [preauth]
Jan 31 08:56:52 compute-0 nova_compute[247704]: 2026-01-31 08:56:52.385 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:52 compute-0 ceph-mon[74496]: pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Jan 31 08:56:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Jan 31 08:56:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2744281129' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:56:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2744281129' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:56:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:53.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:53.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:54 compute-0 sshd-session[399005]: Invalid user debian from 123.54.197.60 port 59060
Jan 31 08:56:54 compute-0 ceph-mon[74496]: pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 596 B/s wr, 14 op/s
Jan 31 08:56:54 compute-0 sshd-session[399005]: Connection closed by invalid user debian 123.54.197.60 port 59060 [preauth]
Jan 31 08:56:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 2 op/s
Jan 31 08:56:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:56:55 compute-0 ceph-mon[74496]: pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 255 B/s wr, 2 op/s
Jan 31 08:56:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:55.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:56:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:55.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:56:56 compute-0 sshd-session[399009]: Invalid user debian from 123.54.197.60 port 59076
Jan 31 08:56:56 compute-0 nova_compute[247704]: 2026-01-31 08:56:56.519 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:56 compute-0 sshd-session[399009]: Connection closed by invalid user debian 123.54.197.60 port 59076 [preauth]
Jan 31 08:56:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 31 08:56:57 compute-0 nova_compute[247704]: 2026-01-31 08:56:57.331 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:57 compute-0 nova_compute[247704]: 2026-01-31 08:56:57.333 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:57 compute-0 nova_compute[247704]: 2026-01-31 08:56:57.386 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:56:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:57.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:58 compute-0 sshd-session[399012]: Invalid user debian from 123.54.197.60 port 59080
Jan 31 08:56:58 compute-0 ceph-mon[74496]: pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 31 08:56:58 compute-0 sshd-session[399012]: Connection closed by invalid user debian 123.54.197.60 port 59080 [preauth]
Jan 31 08:56:58 compute-0 nova_compute[247704]: 2026-01-31 08:56:58.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:56:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:56:59 compute-0 sshd-session[399016]: Invalid user debian from 123.54.197.60 port 59096
Jan 31 08:56:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:56:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:59.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:56:59 compute-0 sshd-session[399016]: Connection closed by invalid user debian 123.54.197.60 port 59096 [preauth]
Jan 31 08:56:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:56:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:56:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:57:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:00 compute-0 ceph-mon[74496]: pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:00 compute-0 podman[399021]: 2026-01-31 08:57:00.930350736 +0000 UTC m=+0.092529901 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:57:01 compute-0 sshd-session[399018]: Invalid user debian from 123.54.197.60 port 42806
Jan 31 08:57:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:01 compute-0 sshd-session[399018]: Connection closed by invalid user debian 123.54.197.60 port 42806 [preauth]
Jan 31 08:57:01 compute-0 nova_compute[247704]: 2026-01-31 08:57:01.521 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:01.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:01.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:02 compute-0 ceph-mon[74496]: pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:02 compute-0 nova_compute[247704]: 2026-01-31 08:57:02.388 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:02 compute-0 sshd-session[399048]: Invalid user debian from 123.54.197.60 port 42816
Jan 31 08:57:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:03 compute-0 sshd-session[399048]: Connection closed by invalid user debian 123.54.197.60 port 42816 [preauth]
Jan 31 08:57:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:03.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:03.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:03 compute-0 sudo[399053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:03 compute-0 sudo[399053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:03 compute-0 sudo[399053]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:04 compute-0 sudo[399078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:04 compute-0 sudo[399078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:04 compute-0 sudo[399078]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:04 compute-0 ceph-mon[74496]: pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:04 compute-0 sshd-session[399051]: Invalid user debian from 123.54.197.60 port 42830
Jan 31 08:57:04 compute-0 sshd-session[399051]: Connection closed by invalid user debian 123.54.197.60 port 42830 [preauth]
Jan 31 08:57:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:05 compute-0 ceph-mon[74496]: pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:05.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:05.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:06 compute-0 nova_compute[247704]: 2026-01-31 08:57:06.525 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:06 compute-0 nova_compute[247704]: 2026-01-31 08:57:06.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:57:07.223 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:57:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:57:07.224 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:57:07 compute-0 nova_compute[247704]: 2026-01-31 08:57:07.223 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:07 compute-0 sshd-session[399104]: Invalid user debian from 123.54.197.60 port 42840
Jan 31 08:57:07 compute-0 nova_compute[247704]: 2026-01-31 08:57:07.390 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:07 compute-0 sshd-session[399104]: Connection closed by invalid user debian 123.54.197.60 port 42840 [preauth]
Jan 31 08:57:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:07.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:57:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:07.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:57:08 compute-0 ceph-mon[74496]: pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:08 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3311638129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:09 compute-0 sshd-session[399107]: Invalid user debian from 123.54.197.60 port 42846
Jan 31 08:57:09 compute-0 sshd-session[399107]: Connection closed by invalid user debian 123.54.197.60 port 42846 [preauth]
Jan 31 08:57:09 compute-0 nova_compute[247704]: 2026-01-31 08:57:09.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:09.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:09.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:10 compute-0 ceph-mon[74496]: pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.267588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830267679, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1561, "num_deletes": 253, "total_data_size": 2608977, "memory_usage": 2656168, "flush_reason": "Manual Compaction"}
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830278724, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1592728, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80508, "largest_seqno": 82067, "table_properties": {"data_size": 1587089, "index_size": 2778, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14888, "raw_average_key_size": 21, "raw_value_size": 1574680, "raw_average_value_size": 2255, "num_data_blocks": 124, "num_entries": 698, "num_filter_entries": 698, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849691, "oldest_key_time": 1769849691, "file_creation_time": 1769849830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 11170 microseconds, and 3725 cpu microseconds.
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.278770) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1592728 bytes OK
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.278790) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.282697) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.282712) EVENT_LOG_v1 {"time_micros": 1769849830282707, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.282733) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 2602333, prev total WAL file size 2602333, number of live WAL files 2.
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.283315) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303130' seq:72057594037927935, type:22 .. '6D6772737461740033323631' seq:0, type:0; will stop at (end)
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1555KB)], [185(12MB)]
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830283375, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 14999272, "oldest_snapshot_seqno": -1}
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 10848 keys, 12099418 bytes, temperature: kUnknown
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830378991, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 12099418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12032742, "index_size": 38467, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 286500, "raw_average_key_size": 26, "raw_value_size": 11846548, "raw_average_value_size": 1092, "num_data_blocks": 1457, "num_entries": 10848, "num_filter_entries": 10848, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.379388) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12099418 bytes
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.380948) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.6 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 12.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(17.0) write-amplify(7.6) OK, records in: 11310, records dropped: 462 output_compression: NoCompression
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.380979) EVENT_LOG_v1 {"time_micros": 1769849830380966, "job": 116, "event": "compaction_finished", "compaction_time_micros": 95755, "compaction_time_cpu_micros": 24550, "output_level": 6, "num_output_files": 1, "total_output_size": 12099418, "num_input_records": 11310, "num_output_records": 10848, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830381357, "job": 116, "event": "table_file_deletion", "file_number": 187}
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849830383890, "job": 116, "event": "table_file_deletion", "file_number": 185}
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.283201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.384023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.384031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.384032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.384034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:57:10.384036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:57:10 compute-0 nova_compute[247704]: 2026-01-31 08:57:10.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:10 compute-0 nova_compute[247704]: 2026-01-31 08:57:10.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:57:10 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:57:10 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:57:10 compute-0 sshd-session[399110]: Invalid user debian from 123.54.197.60 port 50588
Jan 31 08:57:10 compute-0 sshd-session[399110]: Connection closed by invalid user debian 123.54.197.60 port 50588 [preauth]
Jan 31 08:57:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 154 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 31 08:57:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:57:11.227 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:57:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:57:11.227 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:57:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:57:11.227 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:57:11 compute-0 nova_compute[247704]: 2026-01-31 08:57:11.528 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:11.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:57:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:57:12 compute-0 sshd-session[399114]: Invalid user debian from 123.54.197.60 port 50598
Jan 31 08:57:12 compute-0 ceph-mon[74496]: pgmap v3751: 305 pgs: 305 active+clean; 154 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Jan 31 08:57:12 compute-0 nova_compute[247704]: 2026-01-31 08:57:12.392 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:12 compute-0 sshd-session[399114]: Connection closed by invalid user debian 123.54.197.60 port 50598 [preauth]
Jan 31 08:57:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1209241508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:13.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:13.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:14 compute-0 sshd-session[399117]: Invalid user debian from 123.54.197.60 port 50614
Jan 31 08:57:14 compute-0 sshd-session[399117]: Connection closed by invalid user debian 123.54.197.60 port 50614 [preauth]
Jan 31 08:57:14 compute-0 ceph-mon[74496]: pgmap v3752: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:14 compute-0 nova_compute[247704]: 2026-01-31 08:57:14.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:14 compute-0 nova_compute[247704]: 2026-01-31 08:57:14.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:57:14 compute-0 nova_compute[247704]: 2026-01-31 08:57:14.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:57:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:15 compute-0 ceph-mon[74496]: pgmap v3753: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:15 compute-0 sshd-session[399120]: Invalid user debian from 123.54.197.60 port 50630
Jan 31 08:57:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:15.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:15.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:16 compute-0 sshd-session[399120]: Connection closed by invalid user debian 123.54.197.60 port 50630 [preauth]
Jan 31 08:57:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:57:16.226 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:57:16 compute-0 nova_compute[247704]: 2026-01-31 08:57:16.531 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:16 compute-0 podman[399125]: 2026-01-31 08:57:16.882788196 +0000 UTC m=+0.048386648 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
Jan 31 08:57:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:17 compute-0 sshd-session[399122]: Invalid user debian from 123.54.197.60 port 50632
Jan 31 08:57:17 compute-0 nova_compute[247704]: 2026-01-31 08:57:17.394 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:17 compute-0 sshd-session[399122]: Connection closed by invalid user debian 123.54.197.60 port 50632 [preauth]
Jan 31 08:57:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:17.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:18 compute-0 ceph-mon[74496]: pgmap v3754: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:18 compute-0 nova_compute[247704]: 2026-01-31 08:57:18.787 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:57:19 compute-0 sshd-session[399142]: Invalid user debian from 123.54.197.60 port 50640
Jan 31 08:57:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:19 compute-0 sshd-session[399142]: Connection closed by invalid user debian 123.54.197.60 port 50640 [preauth]
Jan 31 08:57:19 compute-0 nova_compute[247704]: 2026-01-31 08:57:19.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:19 compute-0 nova_compute[247704]: 2026-01-31 08:57:19.643 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:57:19 compute-0 nova_compute[247704]: 2026-01-31 08:57:19.643 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:57:19 compute-0 nova_compute[247704]: 2026-01-31 08:57:19.643 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:57:19 compute-0 nova_compute[247704]: 2026-01-31 08:57:19.644 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:57:19 compute-0 nova_compute[247704]: 2026-01-31 08:57:19.644 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:57:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:19.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:57:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395214702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:20 compute-0 nova_compute[247704]: 2026-01-31 08:57:20.092 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:57:20 compute-0 ceph-mon[74496]: pgmap v3755: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2896569779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/395214702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:20 compute-0 nova_compute[247704]: 2026-01-31 08:57:20.279 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:57:20 compute-0 nova_compute[247704]: 2026-01-31 08:57:20.281 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4232MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:57:20 compute-0 nova_compute[247704]: 2026-01-31 08:57:20.281 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:57:20 compute-0 nova_compute[247704]: 2026-01-31 08:57:20.281 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:57:20
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', '.rgw.root', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:57:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:57:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:21 compute-0 sshd-session[399145]: Invalid user debian from 123.54.197.60 port 50654
Jan 31 08:57:21 compute-0 nova_compute[247704]: 2026-01-31 08:57:21.535 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:21 compute-0 nova_compute[247704]: 2026-01-31 08:57:21.625 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:57:21 compute-0 nova_compute[247704]: 2026-01-31 08:57:21.626 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:57:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:21.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:22 compute-0 nova_compute[247704]: 2026-01-31 08:57:22.120 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:57:22 compute-0 sshd-session[399145]: Connection closed by invalid user debian 123.54.197.60 port 50654 [preauth]
Jan 31 08:57:22 compute-0 ceph-mon[74496]: pgmap v3756: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 08:57:22 compute-0 nova_compute[247704]: 2026-01-31 08:57:22.398 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:57:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2312063628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:22 compute-0 nova_compute[247704]: 2026-01-31 08:57:22.551 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:57:22 compute-0 nova_compute[247704]: 2026-01-31 08:57:22.557 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:57:22 compute-0 nova_compute[247704]: 2026-01-31 08:57:22.638 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:57:22 compute-0 nova_compute[247704]: 2026-01-31 08:57:22.841 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:57:22 compute-0 nova_compute[247704]: 2026-01-31 08:57:22.842 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:57:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 652 KiB/s wr, 12 op/s
Jan 31 08:57:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1652517659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2312063628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/666852656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:57:23 compute-0 sshd-session[399192]: Invalid user debian from 123.54.197.60 port 54838
Jan 31 08:57:23 compute-0 sshd-session[399192]: Connection closed by invalid user debian 123.54.197.60 port 54838 [preauth]
Jan 31 08:57:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:23.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:57:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:23.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:57:24 compute-0 sudo[399198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:24 compute-0 sudo[399198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:24 compute-0 sudo[399198]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:24 compute-0 sudo[399223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:24 compute-0 sudo[399223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:24 compute-0 sudo[399223]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:24 compute-0 ceph-mon[74496]: pgmap v3757: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 652 KiB/s wr, 12 op/s
Jan 31 08:57:25 compute-0 sshd-session[399196]: Invalid user debian from 123.54.197.60 port 54842
Jan 31 08:57:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:25 compute-0 sshd-session[399196]: Connection closed by invalid user debian 123.54.197.60 port 54842 [preauth]
Jan 31 08:57:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:25.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:25 compute-0 nova_compute[247704]: 2026-01-31 08:57:25.842 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:25 compute-0 nova_compute[247704]: 2026-01-31 08:57:25.843 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:25 compute-0 nova_compute[247704]: 2026-01-31 08:57:25.843 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:26 compute-0 ceph-mon[74496]: pgmap v3758: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:26 compute-0 nova_compute[247704]: 2026-01-31 08:57:26.538 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:26 compute-0 sshd-session[399249]: Invalid user debian from 123.54.197.60 port 54850
Jan 31 08:57:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:27 compute-0 sshd-session[399249]: Connection closed by invalid user debian 123.54.197.60 port 54850 [preauth]
Jan 31 08:57:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2818314949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:57:27 compute-0 nova_compute[247704]: 2026-01-31 08:57:27.399 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:57:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:27.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:57:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:27.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:28 compute-0 sshd-session[399252]: Invalid user debian from 123.54.197.60 port 54852
Jan 31 08:57:28 compute-0 ceph-mon[74496]: pgmap v3759: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2565149127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:57:28 compute-0 sshd-session[399252]: Connection closed by invalid user debian 123.54.197.60 port 54852 [preauth]
Jan 31 08:57:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:29 compute-0 nova_compute[247704]: 2026-01-31 08:57:29.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:29.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:29.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:30 compute-0 ceph-mon[74496]: pgmap v3760: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:57:30 compute-0 sshd-session[399255]: Invalid user debian from 123.54.197.60 port 54862
Jan 31 08:57:30 compute-0 sshd-session[399255]: Connection closed by invalid user debian 123.54.197.60 port 54862 [preauth]
Jan 31 08:57:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 170 B/s wr, 3 op/s
Jan 31 08:57:31 compute-0 nova_compute[247704]: 2026-01-31 08:57:31.543 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:31 compute-0 nova_compute[247704]: 2026-01-31 08:57:31.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:31 compute-0 nova_compute[247704]: 2026-01-31 08:57:31.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:57:31 compute-0 nova_compute[247704]: 2026-01-31 08:57:31.843 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:57:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:31.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:31.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:31 compute-0 podman[399260]: 2026-01-31 08:57:31.894954252 +0000 UTC m=+0.069504831 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:57:32 compute-0 sshd-session[399258]: Invalid user debian from 123.54.197.60 port 57122
Jan 31 08:57:32 compute-0 nova_compute[247704]: 2026-01-31 08:57:32.402 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:32 compute-0 ceph-mon[74496]: pgmap v3761: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 170 B/s wr, 3 op/s
Jan 31 08:57:32 compute-0 sshd-session[399258]: Connection closed by invalid user debian 123.54.197.60 port 57122 [preauth]
Jan 31 08:57:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 12 KiB/s wr, 18 op/s
Jan 31 08:57:33 compute-0 sshd-session[399287]: Invalid user debian from 123.54.197.60 port 57132
Jan 31 08:57:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:33.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:33.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:34 compute-0 sshd-session[399287]: Connection closed by invalid user debian 123.54.197.60 port 57132 [preauth]
Jan 31 08:57:34 compute-0 ceph-mon[74496]: pgmap v3762: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 12 KiB/s wr, 18 op/s
Jan 31 08:57:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Jan 31 08:57:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:35 compute-0 ceph-mon[74496]: pgmap v3763: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Jan 31 08:57:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:35.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:35.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:57:36 compute-0 sshd-session[399289]: Invalid user debian from 123.54.197.60 port 57146
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:57:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:57:36 compute-0 sshd-session[399289]: Connection closed by invalid user debian 123.54.197.60 port 57146 [preauth]
Jan 31 08:57:36 compute-0 nova_compute[247704]: 2026-01-31 08:57:36.545 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Jan 31 08:57:37 compute-0 nova_compute[247704]: 2026-01-31 08:57:37.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:37.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:37.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:38 compute-0 ceph-mon[74496]: pgmap v3764: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 59 op/s
Jan 31 08:57:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 31 08:57:39 compute-0 sshd-session[399293]: Invalid user debian from 123.54.197.60 port 57162
Jan 31 08:57:39 compute-0 sshd-session[399293]: Connection closed by invalid user debian 123.54.197.60 port 57162 [preauth]
Jan 31 08:57:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:39.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:57:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:39.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:57:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:40 compute-0 ceph-mon[74496]: pgmap v3765: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 31 08:57:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 31 08:57:41 compute-0 nova_compute[247704]: 2026-01-31 08:57:41.548 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:41 compute-0 sshd-session[399296]: Invalid user debian from 123.54.197.60 port 39838
Jan 31 08:57:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:41.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:41 compute-0 sshd-session[399296]: Connection closed by invalid user debian 123.54.197.60 port 39838 [preauth]
Jan 31 08:57:42 compute-0 ceph-mon[74496]: pgmap v3766: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 31 08:57:42 compute-0 nova_compute[247704]: 2026-01-31 08:57:42.405 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 31 08:57:43 compute-0 sshd-session[399299]: Invalid user debian from 123.54.197.60 port 39844
Jan 31 08:57:43 compute-0 sshd-session[399299]: Connection closed by invalid user debian 123.54.197.60 port 39844 [preauth]
Jan 31 08:57:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:43.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:44 compute-0 sudo[399304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:44 compute-0 sudo[399304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:44 compute-0 sudo[399304]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:44 compute-0 sudo[399329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:44 compute-0 sudo[399329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:44 compute-0 sudo[399329]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:44 compute-0 ceph-mon[74496]: pgmap v3767: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 31 08:57:44 compute-0 sshd-session[399302]: Invalid user debian from 123.54.197.60 port 39852
Jan 31 08:57:45 compute-0 sshd-session[399302]: Connection closed by invalid user debian 123.54.197.60 port 39852 [preauth]
Jan 31 08:57:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 78 op/s
Jan 31 08:57:45 compute-0 nova_compute[247704]: 2026-01-31 08:57:45.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:45 compute-0 nova_compute[247704]: 2026-01-31 08:57:45.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:57:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:45.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:45.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:46 compute-0 ceph-mon[74496]: pgmap v3768: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 78 op/s
Jan 31 08:57:46 compute-0 sshd-session[399356]: Invalid user debian from 123.54.197.60 port 39860
Jan 31 08:57:46 compute-0 nova_compute[247704]: 2026-01-31 08:57:46.552 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:46 compute-0 sshd-session[399356]: Connection closed by invalid user debian 123.54.197.60 port 39860 [preauth]
Jan 31 08:57:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 197 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 636 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:57:47 compute-0 nova_compute[247704]: 2026-01-31 08:57:47.407 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:47.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:47 compute-0 podman[399361]: 2026-01-31 08:57:47.885236593 +0000 UTC m=+0.062005718 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 08:57:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:47.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:48 compute-0 sudo[399381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:48 compute-0 sudo[399381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-0 sudo[399381]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-0 sudo[399406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:57:48 compute-0 sudo[399406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-0 sudo[399406]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-0 sudo[399431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:48 compute-0 sudo[399431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-0 sudo[399431]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-0 sudo[399456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:57:48 compute-0 sudo[399456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-0 ceph-mon[74496]: pgmap v3769: 305 pgs: 305 active+clean; 197 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 636 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 08:57:48 compute-0 sudo[399456]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:57:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:57:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:57:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:57:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:57:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e53438b3-fb7e-4b94-b31c-2ae9754b420a does not exist
Jan 31 08:57:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 3434722e-a671-4060-aea6-ff03509ca306 does not exist
Jan 31 08:57:48 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6a2869a4-63f2-4c9a-aa42-e5498252d6d6 does not exist
Jan 31 08:57:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:57:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:57:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:57:48 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:57:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:57:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:57:48 compute-0 sshd-session[399359]: Invalid user debian from 123.54.197.60 port 39876
Jan 31 08:57:48 compute-0 sudo[399513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:48 compute-0 sudo[399513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-0 sudo[399513]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-0 sudo[399538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:57:48 compute-0 sudo[399538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-0 sudo[399538]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-0 sudo[399563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:48 compute-0 sudo[399563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-0 sudo[399563]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:48 compute-0 sudo[399588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:57:48 compute-0 sudo[399588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:48 compute-0 sshd-session[399359]: Connection closed by invalid user debian 123.54.197.60 port 39876 [preauth]
Jan 31 08:57:49 compute-0 podman[399652]: 2026-01-31 08:57:49.148398719 +0000 UTC m=+0.036463478 container create 8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:57:49 compute-0 systemd[1]: Started libpod-conmon-8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59.scope.
Jan 31 08:57:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:57:49 compute-0 podman[399652]: 2026-01-31 08:57:49.224354555 +0000 UTC m=+0.112419354 container init 8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:57:49 compute-0 podman[399652]: 2026-01-31 08:57:49.132309648 +0000 UTC m=+0.020374427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:57:49 compute-0 podman[399652]: 2026-01-31 08:57:49.234266606 +0000 UTC m=+0.122331365 container start 8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kepler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:57:49 compute-0 podman[399652]: 2026-01-31 08:57:49.237641749 +0000 UTC m=+0.125706538 container attach 8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:57:49 compute-0 systemd[1]: libpod-8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59.scope: Deactivated successfully.
Jan 31 08:57:49 compute-0 zen_kepler[399670]: 167 167
Jan 31 08:57:49 compute-0 conmon[399670]: conmon 8ccbacafef5816471ad8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59.scope/container/memory.events
Jan 31 08:57:49 compute-0 podman[399652]: 2026-01-31 08:57:49.244100165 +0000 UTC m=+0.132164944 container died 8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kepler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 08:57:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-44bffb885705f0a6c5e10c979aff90f34d3ab8ff7bbead42a9cf64612232afb0-merged.mount: Deactivated successfully.
Jan 31 08:57:49 compute-0 podman[399652]: 2026-01-31 08:57:49.285735748 +0000 UTC m=+0.173800507 container remove 8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kepler, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:57:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 763 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 31 08:57:49 compute-0 systemd[1]: libpod-conmon-8ccbacafef5816471ad84b4359936833c9e219ca24c61267ef38175ee1e1ff59.scope: Deactivated successfully.
Jan 31 08:57:49 compute-0 podman[399693]: 2026-01-31 08:57:49.420657269 +0000 UTC m=+0.046705487 container create 5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:57:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:57:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:57:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:57:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:57:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:57:49 compute-0 systemd[1]: Started libpod-conmon-5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac.scope.
Jan 31 08:57:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:57:49 compute-0 podman[399693]: 2026-01-31 08:57:49.399677499 +0000 UTC m=+0.025725737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530a1471384e8204bceefff32452cec1f4da1f528372568807a02a19e75d14a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530a1471384e8204bceefff32452cec1f4da1f528372568807a02a19e75d14a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530a1471384e8204bceefff32452cec1f4da1f528372568807a02a19e75d14a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530a1471384e8204bceefff32452cec1f4da1f528372568807a02a19e75d14a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530a1471384e8204bceefff32452cec1f4da1f528372568807a02a19e75d14a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:49 compute-0 podman[399693]: 2026-01-31 08:57:49.512155934 +0000 UTC m=+0.138204182 container init 5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:57:49 compute-0 podman[399693]: 2026-01-31 08:57:49.51774043 +0000 UTC m=+0.143788648 container start 5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:57:49 compute-0 podman[399693]: 2026-01-31 08:57:49.526147764 +0000 UTC m=+0.152195982 container attach 5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:57:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:49.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:57:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:57:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:50 compute-0 funny_edison[399709]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:57:50 compute-0 funny_edison[399709]: --> relative data size: 1.0
Jan 31 08:57:50 compute-0 funny_edison[399709]: --> All data devices are unavailable
Jan 31 08:57:50 compute-0 systemd[1]: libpod-5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac.scope: Deactivated successfully.
Jan 31 08:57:50 compute-0 podman[399693]: 2026-01-31 08:57:50.339531483 +0000 UTC m=+0.965579731 container died 5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 08:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-530a1471384e8204bceefff32452cec1f4da1f528372568807a02a19e75d14a1-merged.mount: Deactivated successfully.
Jan 31 08:57:50 compute-0 podman[399693]: 2026-01-31 08:57:50.390235115 +0000 UTC m=+1.016283333 container remove 5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_edison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 08:57:50 compute-0 systemd[1]: libpod-conmon-5180704a3f7e7ec0f8913300615ec384342a5e52ded601afdbf4cdd5a998b6ac.scope: Deactivated successfully.
Jan 31 08:57:50 compute-0 sudo[399588]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:50 compute-0 ceph-mon[74496]: pgmap v3770: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 763 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 31 08:57:50 compute-0 sudo[399736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:50 compute-0 sudo[399736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:50 compute-0 sudo[399736]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:50 compute-0 sudo[399761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:57:50 compute-0 sudo[399761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:50 compute-0 sudo[399761]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:50 compute-0 sudo[399786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:50 compute-0 nova_compute[247704]: 2026-01-31 08:57:50.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:50 compute-0 sudo[399786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:50 compute-0 sudo[399786]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:50 compute-0 sudo[399811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:57:50 compute-0 sudo[399811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:50 compute-0 sshd-session[399666]: Invalid user debian from 123.54.197.60 port 39878
Jan 31 08:57:50 compute-0 podman[399876]: 2026-01-31 08:57:50.904586972 +0000 UTC m=+0.031857985 container create 166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:57:50 compute-0 systemd[1]: Started libpod-conmon-166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43.scope.
Jan 31 08:57:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:57:50 compute-0 podman[399876]: 2026-01-31 08:57:50.973348734 +0000 UTC m=+0.100619797 container init 166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 08:57:50 compute-0 podman[399876]: 2026-01-31 08:57:50.9809702 +0000 UTC m=+0.108241223 container start 166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:57:50 compute-0 podman[399876]: 2026-01-31 08:57:50.984787253 +0000 UTC m=+0.112058266 container attach 166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:57:50 compute-0 practical_yalow[399892]: 167 167
Jan 31 08:57:50 compute-0 podman[399876]: 2026-01-31 08:57:50.890341866 +0000 UTC m=+0.017612899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:57:50 compute-0 systemd[1]: libpod-166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43.scope: Deactivated successfully.
Jan 31 08:57:50 compute-0 podman[399876]: 2026-01-31 08:57:50.996819005 +0000 UTC m=+0.124090018 container died 166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-74e6d59aef59657fe7a98ae6fd8501ec652e375465bc576d3f906e0bfd5cf900-merged.mount: Deactivated successfully.
Jan 31 08:57:51 compute-0 podman[399876]: 2026-01-31 08:57:51.062235676 +0000 UTC m=+0.189506689 container remove 166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yalow, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 08:57:51 compute-0 systemd[1]: libpod-conmon-166d0f4b3a190f40a40f8add51513a5d69a86ccb185010edeb8a68ed8bc46c43.scope: Deactivated successfully.
Jan 31 08:57:51 compute-0 podman[399916]: 2026-01-31 08:57:51.207477518 +0000 UTC m=+0.063337262 container create c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:57:51 compute-0 systemd[1]: Started libpod-conmon-c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890.scope.
Jan 31 08:57:51 compute-0 podman[399916]: 2026-01-31 08:57:51.167054625 +0000 UTC m=+0.022914389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:57:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f164d974ef6d6ae047ec5fc6ad429afafeff804ac335913a4ed2ebd0409870/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f164d974ef6d6ae047ec5fc6ad429afafeff804ac335913a4ed2ebd0409870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f164d974ef6d6ae047ec5fc6ad429afafeff804ac335913a4ed2ebd0409870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f164d974ef6d6ae047ec5fc6ad429afafeff804ac335913a4ed2ebd0409870/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:51 compute-0 podman[399916]: 2026-01-31 08:57:51.314887149 +0000 UTC m=+0.170746923 container init c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:57:51 compute-0 podman[399916]: 2026-01-31 08:57:51.320836164 +0000 UTC m=+0.176695908 container start c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:57:51 compute-0 podman[399916]: 2026-01-31 08:57:51.326246236 +0000 UTC m=+0.182106040 container attach c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:57:51 compute-0 nova_compute[247704]: 2026-01-31 08:57:51.555 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:51 compute-0 sshd-session[399666]: Connection closed by invalid user debian 123.54.197.60 port 39878 [preauth]
Jan 31 08:57:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:51.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:52 compute-0 gracious_poitras[399933]: {
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:     "0": [
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:         {
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "devices": [
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "/dev/loop3"
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             ],
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "lv_name": "ceph_lv0",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "lv_size": "7511998464",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "name": "ceph_lv0",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "tags": {
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.cluster_name": "ceph",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.crush_device_class": "",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.encrypted": "0",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.osd_id": "0",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.type": "block",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:                 "ceph.vdo": "0"
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             },
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "type": "block",
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:             "vg_name": "ceph_vg0"
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:         }
Jan 31 08:57:52 compute-0 gracious_poitras[399933]:     ]
Jan 31 08:57:52 compute-0 gracious_poitras[399933]: }
Jan 31 08:57:52 compute-0 systemd[1]: libpod-c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890.scope: Deactivated successfully.
Jan 31 08:57:52 compute-0 podman[399916]: 2026-01-31 08:57:52.164860637 +0000 UTC m=+1.020720381 container died c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:57:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7f164d974ef6d6ae047ec5fc6ad429afafeff804ac335913a4ed2ebd0409870-merged.mount: Deactivated successfully.
Jan 31 08:57:52 compute-0 podman[399916]: 2026-01-31 08:57:52.237803801 +0000 UTC m=+1.093663555 container remove c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_poitras, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:57:52 compute-0 systemd[1]: libpod-conmon-c798e618b10be66b2792466b7df730e24cf2b31143df6c62b0f4e0ba6f2e6890.scope: Deactivated successfully.
Jan 31 08:57:52 compute-0 sudo[399811]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:52 compute-0 sudo[399958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:52 compute-0 sudo[399958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:52 compute-0 sudo[399958]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:52 compute-0 sudo[399983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:57:52 compute-0 sudo[399983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:52 compute-0 sudo[399983]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:52 compute-0 nova_compute[247704]: 2026-01-31 08:57:52.409 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:52 compute-0 sudo[400009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:52 compute-0 sudo[400009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:52 compute-0 sudo[400009]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:52 compute-0 ceph-mon[74496]: pgmap v3771: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:52 compute-0 sudo[400034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:57:52 compute-0 sudo[400034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:52 compute-0 podman[400099]: 2026-01-31 08:57:52.782163627 +0000 UTC m=+0.041776346 container create 6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:57:52 compute-0 systemd[1]: Started libpod-conmon-6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b.scope.
Jan 31 08:57:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:57:52 compute-0 podman[400099]: 2026-01-31 08:57:52.765274967 +0000 UTC m=+0.024887706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:57:52 compute-0 podman[400099]: 2026-01-31 08:57:52.861562538 +0000 UTC m=+0.121175277 container init 6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:57:52 compute-0 podman[400099]: 2026-01-31 08:57:52.869933722 +0000 UTC m=+0.129546451 container start 6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:57:52 compute-0 podman[400099]: 2026-01-31 08:57:52.874197115 +0000 UTC m=+0.133809854 container attach 6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:57:52 compute-0 zealous_morse[400116]: 167 167
Jan 31 08:57:52 compute-0 systemd[1]: libpod-6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b.scope: Deactivated successfully.
Jan 31 08:57:52 compute-0 podman[400099]: 2026-01-31 08:57:52.875578769 +0000 UTC m=+0.135191488 container died 6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:57:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0f78b7ba49c3f63f154ed4bc04c3472b1c0944f33f76170a0f2a29a763a4d9-merged.mount: Deactivated successfully.
Jan 31 08:57:52 compute-0 podman[400099]: 2026-01-31 08:57:52.91136634 +0000 UTC m=+0.170979059 container remove 6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:57:52 compute-0 systemd[1]: libpod-conmon-6a6f071cd6fc842a13483c8bc09bf078e661d35e83dd84f3d5fc9de0993b4e6b.scope: Deactivated successfully.
Jan 31 08:57:53 compute-0 sshd-session[399938]: Invalid user debian from 123.54.197.60 port 46662
Jan 31 08:57:53 compute-0 podman[400140]: 2026-01-31 08:57:53.039791142 +0000 UTC m=+0.034521320 container create 17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:57:53 compute-0 systemd[1]: Started libpod-conmon-17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f.scope.
Jan 31 08:57:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e948189e7dc5db1fdb5dc130000d68d68fe565afe04f52b568139872208cc5fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e948189e7dc5db1fdb5dc130000d68d68fe565afe04f52b568139872208cc5fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e948189e7dc5db1fdb5dc130000d68d68fe565afe04f52b568139872208cc5fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e948189e7dc5db1fdb5dc130000d68d68fe565afe04f52b568139872208cc5fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:57:53 compute-0 podman[400140]: 2026-01-31 08:57:53.106550405 +0000 UTC m=+0.101280633 container init 17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chatterjee, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:57:53 compute-0 podman[400140]: 2026-01-31 08:57:53.113121255 +0000 UTC m=+0.107851423 container start 17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 31 08:57:53 compute-0 podman[400140]: 2026-01-31 08:57:53.116819925 +0000 UTC m=+0.111550123 container attach 17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:57:53 compute-0 podman[400140]: 2026-01-31 08:57:53.025035543 +0000 UTC m=+0.019765721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:57:53 compute-0 sshd-session[399938]: Connection closed by invalid user debian 123.54.197.60 port 46662 [preauth]
Jan 31 08:57:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/361818439' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:57:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/361818439' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:57:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:53.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]: {
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]:         "osd_id": 0,
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]:         "type": "bluestore"
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]:     }
Jan 31 08:57:53 compute-0 fervent_chatterjee[400157]: }
Jan 31 08:57:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:53.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:53 compute-0 systemd[1]: libpod-17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f.scope: Deactivated successfully.
Jan 31 08:57:53 compute-0 podman[400140]: 2026-01-31 08:57:53.966309211 +0000 UTC m=+0.961039399 container died 17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:57:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e948189e7dc5db1fdb5dc130000d68d68fe565afe04f52b568139872208cc5fb-merged.mount: Deactivated successfully.
Jan 31 08:57:54 compute-0 podman[400140]: 2026-01-31 08:57:54.021335329 +0000 UTC m=+1.016065497 container remove 17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:57:54 compute-0 systemd[1]: libpod-conmon-17df6aea01672c3eb262482209a11f7f029db29b9f93dbc13f621407d53ad87f.scope: Deactivated successfully.
Jan 31 08:57:54 compute-0 sudo[400034]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:57:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:57:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 301e7f17-b9d2-46f3-b9eb-b82cd4308954 does not exist
Jan 31 08:57:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7d0ffe9a-4b7f-40f7-960d-938538dbe0b0 does not exist
Jan 31 08:57:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 17bf3c53-c49c-427a-8772-332faf332dd1 does not exist
Jan 31 08:57:54 compute-0 sudo[400194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:57:54 compute-0 sudo[400194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:54 compute-0 sudo[400194]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:54 compute-0 sudo[400219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:57:54 compute-0 sudo[400219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:57:54 compute-0 sudo[400219]: pam_unix(sudo:session): session closed for user root
Jan 31 08:57:54 compute-0 ceph-mon[74496]: pgmap v3772: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:54 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:57:54 compute-0 sshd-session[400162]: Invalid user debian from 123.54.197.60 port 46676
Jan 31 08:57:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:57:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:55 compute-0 sshd-session[400162]: Connection closed by invalid user debian 123.54.197.60 port 46676 [preauth]
Jan 31 08:57:55 compute-0 ceph-mon[74496]: pgmap v3773: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 08:57:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:55.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:57:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:55.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:56 compute-0 nova_compute[247704]: 2026-01-31 08:57:56.561 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:56 compute-0 sshd-session[400248]: Invalid user admin from 45.148.10.240 port 39880
Jan 31 08:57:56 compute-0 sshd-session[400248]: Connection closed by invalid user admin 45.148.10.240 port 39880 [preauth]
Jan 31 08:57:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 969 KiB/s wr, 39 op/s
Jan 31 08:57:57 compute-0 nova_compute[247704]: 2026-01-31 08:57:57.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:57:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:57.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:57:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:57.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:57:58 compute-0 sshd-session[400245]: Invalid user debian from 123.54.197.60 port 46684
Jan 31 08:57:58 compute-0 nova_compute[247704]: 2026-01-31 08:57:58.479 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:58 compute-0 ceph-mon[74496]: pgmap v3774: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 969 KiB/s wr, 39 op/s
Jan 31 08:57:58 compute-0 sshd-session[400245]: Connection closed by invalid user debian 123.54.197.60 port 46684 [preauth]
Jan 31 08:57:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 301 KiB/s wr, 24 op/s
Jan 31 08:57:59 compute-0 ceph-mon[74496]: pgmap v3775: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 301 KiB/s wr, 24 op/s
Jan 31 08:57:59 compute-0 nova_compute[247704]: 2026-01-31 08:57:59.747 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:57:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:57:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:59.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:57:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:57:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:57:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:00 compute-0 sshd-session[400251]: Invalid user debian from 123.54.197.60 port 46690
Jan 31 08:58:00 compute-0 sshd-session[400251]: Connection closed by invalid user debian 123.54.197.60 port 46690 [preauth]
Jan 31 08:58:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 KiB/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:58:01 compute-0 nova_compute[247704]: 2026-01-31 08:58:01.565 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:01.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:01.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:02 compute-0 sshd-session[400254]: Invalid user debian from 123.54.197.60 port 42888
Jan 31 08:58:02 compute-0 podman[400256]: 2026-01-31 08:58:02.183857549 +0000 UTC m=+0.088455751 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:58:02 compute-0 sshd-session[400254]: Connection closed by invalid user debian 123.54.197.60 port 42888 [preauth]
Jan 31 08:58:02 compute-0 ceph-mon[74496]: pgmap v3776: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 KiB/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:58:02 compute-0 nova_compute[247704]: 2026-01-31 08:58:02.444 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Jan 31 08:58:03 compute-0 sshd-session[400284]: Invalid user debian from 123.54.197.60 port 42892
Jan 31 08:58:03 compute-0 sshd-session[400284]: Connection closed by invalid user debian 123.54.197.60 port 42892 [preauth]
Jan 31 08:58:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:03.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:03.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:04 compute-0 sudo[400288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:04 compute-0 sudo[400288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:04 compute-0 sudo[400288]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:04 compute-0 sudo[400313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:04 compute-0 sudo[400313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:04 compute-0 sudo[400313]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:04 compute-0 ceph-mon[74496]: pgmap v3777: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Jan 31 08:58:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:58:05 compute-0 sshd-session[400286]: Invalid user debian from 123.54.197.60 port 42902
Jan 31 08:58:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:05.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:05.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:06 compute-0 ceph-mon[74496]: pgmap v3778: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 31 08:58:06 compute-0 nova_compute[247704]: 2026-01-31 08:58:06.569 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:06 compute-0 sshd-session[400286]: Connection closed by invalid user debian 123.54.197.60 port 42902 [preauth]
Jan 31 08:58:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 31 08:58:07 compute-0 nova_compute[247704]: 2026-01-31 08:58:07.446 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:07 compute-0 nova_compute[247704]: 2026-01-31 08:58:07.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:07.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:07.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:08 compute-0 sshd-session[400340]: Invalid user debian from 123.54.197.60 port 42904
Jan 31 08:58:08 compute-0 sshd-session[400340]: Connection closed by invalid user debian 123.54.197.60 port 42904 [preauth]
Jan 31 08:58:08 compute-0 ceph-mon[74496]: pgmap v3779: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 31 08:58:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 KiB/s rd, 3.1 KiB/s wr, 6 op/s
Jan 31 08:58:09 compute-0 sshd-session[400343]: Invalid user debian from 123.54.197.60 port 42906
Jan 31 08:58:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:09.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:09.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:09 compute-0 sshd-session[400343]: Connection closed by invalid user debian 123.54.197.60 port 42906 [preauth]
Jan 31 08:58:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:10 compute-0 ceph-mon[74496]: pgmap v3780: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 KiB/s rd, 3.1 KiB/s wr, 6 op/s
Jan 31 08:58:10 compute-0 nova_compute[247704]: 2026-01-31 08:58:10.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:10 compute-0 nova_compute[247704]: 2026-01-31 08:58:10.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:58:11 compute-0 sshd-session[400345]: Invalid user debian from 123.54.197.60 port 49350
Jan 31 08:58:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:11.227 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:11.228 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:11.228 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.3 KiB/s wr, 8 op/s
Jan 31 08:58:11 compute-0 sshd-session[400345]: Connection closed by invalid user debian 123.54.197.60 port 49350 [preauth]
Jan 31 08:58:11 compute-0 nova_compute[247704]: 2026-01-31 08:58:11.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:11 compute-0 nova_compute[247704]: 2026-01-31 08:58:11.571 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:11 compute-0 ceph-mon[74496]: pgmap v3781: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.3 KiB/s wr, 8 op/s
Jan 31 08:58:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:11.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:11.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:12 compute-0 ovn_controller[149457]: 2026-01-31T08:58:12Z|00865|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 31 08:58:12 compute-0 nova_compute[247704]: 2026-01-31 08:58:12.492 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:12 compute-0 sshd-session[400348]: Invalid user debian from 123.54.197.60 port 49356
Jan 31 08:58:13 compute-0 sshd-session[400348]: Connection closed by invalid user debian 123.54.197.60 port 49356 [preauth]
Jan 31 08:58:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 KiB/s wr, 8 op/s
Jan 31 08:58:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:13.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:13.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:14 compute-0 sshd-session[400351]: Invalid user debian from 123.54.197.60 port 49360
Jan 31 08:58:14 compute-0 ceph-mon[74496]: pgmap v3782: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 KiB/s wr, 8 op/s
Jan 31 08:58:14 compute-0 sshd-session[400351]: Connection closed by invalid user debian 123.54.197.60 port 49360 [preauth]
Jan 31 08:58:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:15.215 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:58:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:15.217 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:58:15 compute-0 nova_compute[247704]: 2026-01-31 08:58:15.217 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:15 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:15.218 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 207 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 485 KiB/s wr, 13 op/s
Jan 31 08:58:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3727841173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:15 compute-0 nova_compute[247704]: 2026-01-31 08:58:15.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:15 compute-0 nova_compute[247704]: 2026-01-31 08:58:15.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:58:15 compute-0 nova_compute[247704]: 2026-01-31 08:58:15.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:58:15 compute-0 nova_compute[247704]: 2026-01-31 08:58:15.617 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:58:15 compute-0 sshd-session[400354]: Invalid user debian from 123.54.197.60 port 49376
Jan 31 08:58:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:15.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:15.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:16 compute-0 sshd-session[400354]: Connection closed by invalid user debian 123.54.197.60 port 49376 [preauth]
Jan 31 08:58:16 compute-0 ceph-mon[74496]: pgmap v3783: 305 pgs: 305 active+clean; 207 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 485 KiB/s wr, 13 op/s
Jan 31 08:58:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1143733964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:16 compute-0 nova_compute[247704]: 2026-01-31 08:58:16.574 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 219 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 700 KiB/s wr, 28 op/s
Jan 31 08:58:17 compute-0 nova_compute[247704]: 2026-01-31 08:58:17.495 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:17 compute-0 sshd-session[400357]: Invalid user debian from 123.54.197.60 port 49390
Jan 31 08:58:17 compute-0 sshd-session[400357]: Connection closed by invalid user debian 123.54.197.60 port 49390 [preauth]
Jan 31 08:58:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:17.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:58:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:17.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:18 compute-0 ceph-mon[74496]: pgmap v3784: 305 pgs: 305 active+clean; 219 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 700 KiB/s wr, 28 op/s
Jan 31 08:58:18 compute-0 podman[400362]: 2026-01-31 08:58:18.872361598 +0000 UTC m=+0.045369214 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 08:58:19 compute-0 sshd-session[400359]: Invalid user debian from 123.54.197.60 port 49400
Jan 31 08:58:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 31 08:58:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:58:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1049638260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:19 compute-0 sshd-session[400359]: Connection closed by invalid user debian 123.54.197.60 port 49400 [preauth]
Jan 31 08:58:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:19.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:19.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:58:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:58:20
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'volumes']
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:58:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Jan 31 08:58:20 compute-0 ceph-mon[74496]: pgmap v3785: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 31 08:58:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1049638260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.443289) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900443335, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 845, "num_deletes": 251, "total_data_size": 1229908, "memory_usage": 1249344, "flush_reason": "Manual Compaction"}
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Jan 31 08:58:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900459203, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 1215924, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82069, "largest_seqno": 82912, "table_properties": {"data_size": 1211751, "index_size": 1888, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9550, "raw_average_key_size": 19, "raw_value_size": 1203218, "raw_average_value_size": 2480, "num_data_blocks": 84, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849831, "oldest_key_time": 1769849831, "file_creation_time": 1769849900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 15961 microseconds, and 3271 cpu microseconds.
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:58:20 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.459245) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 1215924 bytes OK
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.459274) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.465558) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.465667) EVENT_LOG_v1 {"time_micros": 1769849900465648, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.465708) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1225830, prev total WAL file size 1225871, number of live WAL files 2.
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.466658) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(1187KB)], [188(11MB)]
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900466705, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 13315342, "oldest_snapshot_seqno": -1}
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 10815 keys, 11380726 bytes, temperature: kUnknown
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900550625, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 11380726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11314828, "index_size": 37753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27077, "raw_key_size": 286506, "raw_average_key_size": 26, "raw_value_size": 11129515, "raw_average_value_size": 1029, "num_data_blocks": 1421, "num_entries": 10815, "num_filter_entries": 10815, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769849900, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.550934) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11380726 bytes
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.552489) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.5 rd, 135.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(20.3) write-amplify(9.4) OK, records in: 11333, records dropped: 518 output_compression: NoCompression
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.552513) EVENT_LOG_v1 {"time_micros": 1769849900552501, "job": 118, "event": "compaction_finished", "compaction_time_micros": 84012, "compaction_time_cpu_micros": 22330, "output_level": 6, "num_output_files": 1, "total_output_size": 11380726, "num_input_records": 11333, "num_output_records": 10815, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900552826, "job": 118, "event": "table_file_deletion", "file_number": 190}
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849900554653, "job": 118, "event": "table_file_deletion", "file_number": 188}
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.466572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.554856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.554865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.554867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.554869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-08:58:20.554871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:58:20 compute-0 nova_compute[247704]: 2026-01-31 08:58:20.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:20 compute-0 nova_compute[247704]: 2026-01-31 08:58:20.719 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:20 compute-0 nova_compute[247704]: 2026-01-31 08:58:20.719 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:20 compute-0 nova_compute[247704]: 2026-01-31 08:58:20.720 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:20 compute-0 nova_compute[247704]: 2026-01-31 08:58:20.720 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:58:20 compute-0 nova_compute[247704]: 2026-01-31 08:58:20.720 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:20 compute-0 sshd-session[400382]: Invalid user debian from 123.54.197.60 port 46318
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:58:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:58:21 compute-0 sshd-session[400382]: Connection closed by invalid user debian 123.54.197.60 port 46318 [preauth]
Jan 31 08:58:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:58:21 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151004839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.152 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.351 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.353 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4216MB free_disk=20.94265365600586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.354 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.354 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Jan 31 08:58:21 compute-0 ceph-mon[74496]: osdmap e405: 3 total, 3 up, 3 in
Jan 31 08:58:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2809828954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2151004839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Jan 31 08:58:21 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.577 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.713 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.714 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:58:21 compute-0 nova_compute[247704]: 2026-01-31 08:58:21.738 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:21.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:21.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:58:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977114274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:22 compute-0 nova_compute[247704]: 2026-01-31 08:58:22.170 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:22 compute-0 nova_compute[247704]: 2026-01-31 08:58:22.176 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:58:22 compute-0 nova_compute[247704]: 2026-01-31 08:58:22.252 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:58:22 compute-0 nova_compute[247704]: 2026-01-31 08:58:22.255 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:58:22 compute-0 nova_compute[247704]: 2026-01-31 08:58:22.255 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Jan 31 08:58:22 compute-0 ceph-mon[74496]: pgmap v3787: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 31 08:58:22 compute-0 ceph-mon[74496]: osdmap e406: 3 total, 3 up, 3 in
Jan 31 08:58:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2071452061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1977114274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Jan 31 08:58:22 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Jan 31 08:58:22 compute-0 nova_compute[247704]: 2026-01-31 08:58:22.497 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:23 compute-0 sshd-session[400407]: Invalid user debian from 123.54.197.60 port 46328
Jan 31 08:58:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 MiB/s wr, 50 op/s
Jan 31 08:58:23 compute-0 sshd-session[400407]: Connection closed by invalid user debian 123.54.197.60 port 46328 [preauth]
Jan 31 08:58:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Jan 31 08:58:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Jan 31 08:58:23 compute-0 ceph-mon[74496]: osdmap e407: 3 total, 3 up, 3 in
Jan 31 08:58:23 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Jan 31 08:58:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:23.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:23.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:24 compute-0 nova_compute[247704]: 2026-01-31 08:58:24.256 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:24 compute-0 nova_compute[247704]: 2026-01-31 08:58:24.257 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:24 compute-0 sudo[400435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:24 compute-0 sudo[400435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:24 compute-0 sudo[400435]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:24 compute-0 sudo[400460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:24 compute-0 sudo[400460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:24 compute-0 sudo[400460]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:24 compute-0 ceph-mon[74496]: pgmap v3790: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 MiB/s wr, 50 op/s
Jan 31 08:58:24 compute-0 ceph-mon[74496]: osdmap e408: 3 total, 3 up, 3 in
Jan 31 08:58:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2855477775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:24 compute-0 sshd-session[400432]: Invalid user debian from 123.54.197.60 port 46332
Jan 31 08:58:24 compute-0 sshd-session[400432]: Connection closed by invalid user debian 123.54.197.60 port 46332 [preauth]
Jan 31 08:58:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:58:25 compute-0 nova_compute[247704]: 2026-01-31 08:58:25.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:58:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:25.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:25.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:26 compute-0 sshd-session[400485]: Invalid user debian from 123.54.197.60 port 46346
Jan 31 08:58:26 compute-0 sshd-session[400485]: Connection closed by invalid user debian 123.54.197.60 port 46346 [preauth]
Jan 31 08:58:26 compute-0 ceph-mon[74496]: pgmap v3792: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 08:58:26 compute-0 nova_compute[247704]: 2026-01-31 08:58:26.580 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 287 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 251 op/s
Jan 31 08:58:27 compute-0 nova_compute[247704]: 2026-01-31 08:58:27.498 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:27 compute-0 ceph-mon[74496]: pgmap v3793: 305 pgs: 305 active+clean; 287 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.0 MiB/s wr, 251 op/s
Jan 31 08:58:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:27.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:27.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/51172579' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 307 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 320 op/s
Jan 31 08:58:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Jan 31 08:58:29 compute-0 ceph-mon[74496]: pgmap v3794: 305 pgs: 305 active+clean; 307 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 320 op/s
Jan 31 08:58:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Jan 31 08:58:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Jan 31 08:58:29 compute-0 sshd-session[400488]: Invalid user debian from 123.54.197.60 port 46358
Jan 31 08:58:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:29.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:29.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Jan 31 08:58:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Jan 31 08:58:30 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Jan 31 08:58:30 compute-0 sshd-session[400488]: Connection closed by invalid user debian 123.54.197.60 port 46358 [preauth]
Jan 31 08:58:30 compute-0 ceph-mon[74496]: osdmap e409: 3 total, 3 up, 3 in
Jan 31 08:58:30 compute-0 ceph-mon[74496]: osdmap e410: 3 total, 3 up, 3 in
Jan 31 08:58:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 361 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.0 MiB/s wr, 377 op/s
Jan 31 08:58:31 compute-0 nova_compute[247704]: 2026-01-31 08:58:31.584 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:31 compute-0 ceph-mon[74496]: pgmap v3797: 305 pgs: 305 active+clean; 361 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.0 MiB/s wr, 377 op/s
Jan 31 08:58:31 compute-0 sshd-session[400492]: Invalid user debian from 123.54.197.60 port 34884
Jan 31 08:58:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:31.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:31.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:32 compute-0 sshd-session[400492]: Connection closed by invalid user debian 123.54.197.60 port 34884 [preauth]
Jan 31 08:58:32 compute-0 nova_compute[247704]: 2026-01-31 08:58:32.522 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:32 compute-0 podman[400497]: 2026-01-31 08:58:32.90602711 +0000 UTC m=+0.076576292 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:58:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.6 MiB/s wr, 306 op/s
Jan 31 08:58:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:33.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:33.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:34 compute-0 sshd-session[400495]: Invalid user debian from 123.54.197.60 port 34900
Jan 31 08:58:34 compute-0 ceph-mon[74496]: pgmap v3798: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.6 MiB/s wr, 306 op/s
Jan 31 08:58:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3839769465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2286454376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:34 compute-0 sshd-session[400495]: Connection closed by invalid user debian 123.54.197.60 port 34900 [preauth]
Jan 31 08:58:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.7 MiB/s wr, 215 op/s
Jan 31 08:58:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:35.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:35.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003163422727061232 of space, bias 1.0, pg target 0.9490268181183695 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0041398034733854455 of space, bias 1.0, pg target 1.2419410420156336 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:58:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 08:58:36 compute-0 sshd-session[400525]: Invalid user debian from 123.54.197.60 port 34902
Jan 31 08:58:36 compute-0 ceph-mon[74496]: pgmap v3799: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.7 MiB/s wr, 215 op/s
Jan 31 08:58:36 compute-0 nova_compute[247704]: 2026-01-31 08:58:36.587 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:36 compute-0 sshd-session[400525]: Connection closed by invalid user debian 123.54.197.60 port 34902 [preauth]
Jan 31 08:58:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 95 op/s
Jan 31 08:58:37 compute-0 nova_compute[247704]: 2026-01-31 08:58:37.525 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:37 compute-0 ceph-mon[74496]: pgmap v3800: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 95 op/s
Jan 31 08:58:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:37.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:37.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 31 08:58:39 compute-0 sshd-session[400528]: Invalid user debian from 123.54.197.60 port 34916
Jan 31 08:58:39 compute-0 ceph-mon[74496]: pgmap v3801: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 31 08:58:39 compute-0 sshd-session[400528]: Connection closed by invalid user debian 123.54.197.60 port 34916 [preauth]
Jan 31 08:58:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:58:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:39.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:39.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Jan 31 08:58:41 compute-0 nova_compute[247704]: 2026-01-31 08:58:41.589 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:58:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:41.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:41.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.003 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquiring lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.004 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.034 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.125 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.125 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.133 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.133 247708 INFO nova.compute.claims [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Claim successful on node compute-0.ctlplane.example.com
Jan 31 08:58:42 compute-0 sshd-session[400531]: Invalid user debian from 123.54.197.60 port 57704
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.270 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:42 compute-0 ceph-mon[74496]: pgmap v3802: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Jan 31 08:58:42 compute-0 sshd-session[400531]: Connection closed by invalid user debian 123.54.197.60 port 57704 [preauth]
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.526 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:58:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012726985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.690 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.696 247708 DEBUG nova.compute.provider_tree [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.713 247708 DEBUG nova.scheduler.client.report [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.743 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.744 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.799 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.800 247708 DEBUG nova.network.neutron [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.829 247708 INFO nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.852 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 08:58:42 compute-0 nova_compute[247704]: 2026-01-31 08:58:42.906 247708 INFO nova.virt.block_device [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Booting with volume 4fa6de41-50fa-4a7f-9f53-f34c852aad03 at /dev/vda
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.058 247708 DEBUG os_brick.utils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.059 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.068 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.068 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1c0527-d741-41c6-94de-d8098fd91501]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.070 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.076 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.076 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[a293e428-705f-4a37-ba25-e6de0b021f6d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.077 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.084 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.085 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[d636e49e-7950-453d-883b-8bc82e5d85d3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.086 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[02eca5f0-39e3-46df-ae23-5b7b2599ef5b]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.086 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.109 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.111 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.112 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.112 247708 DEBUG os_brick.initiator.connectors.lightos [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.112 247708 DEBUG os_brick.utils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] <== get_connector_properties: return (54ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.113 247708 DEBUG nova.virt.block_device [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating existing volume attachment record: f9cb034f-03a7-4914-ba0e-94615d09a9f6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 08:58:43 compute-0 nova_compute[247704]: 2026-01-31 08:58:43.230 247708 DEBUG nova.policy [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fe599a5134944b9fbf952e83fdf41c55', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e458566e0de24b2fb797037d94d9014c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 08:58:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 793 KiB/s wr, 90 op/s
Jan 31 08:58:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2012726985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:58:43 compute-0 sshd-session[400557]: Invalid user debian from 123.54.197.60 port 57718
Jan 31 08:58:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:58:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860137983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:58:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:43 compute-0 sshd-session[400557]: Connection closed by invalid user debian 123.54.197.60 port 57718 [preauth]
Jan 31 08:58:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:58:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:43.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.137 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.139 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.139 247708 INFO nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Creating image(s)
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.139 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.140 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Ensure instance console log exists: /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.140 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.140 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.141 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:44 compute-0 nova_compute[247704]: 2026-01-31 08:58:44.217 247708 DEBUG nova.network.neutron [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Successfully created port: 406c4828-51a3-4b13-8dfa-189de772181c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 08:58:44 compute-0 ceph-mon[74496]: pgmap v3803: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 793 KiB/s wr, 90 op/s
Jan 31 08:58:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3860137983' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:44 compute-0 sudo[400569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:44 compute-0 sudo[400569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:44 compute-0 sudo[400569]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:44 compute-0 sudo[400594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:44 compute-0 sudo[400594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:44 compute-0 sudo[400594]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 79 op/s
Jan 31 08:58:45 compute-0 sshd-session[400566]: Invalid user debian from 123.54.197.60 port 57720
Jan 31 08:58:45 compute-0 nova_compute[247704]: 2026-01-31 08:58:45.370 247708 DEBUG nova.network.neutron [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Successfully updated port: 406c4828-51a3-4b13-8dfa-189de772181c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 08:58:45 compute-0 nova_compute[247704]: 2026-01-31 08:58:45.447 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquiring lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:58:45 compute-0 nova_compute[247704]: 2026-01-31 08:58:45.447 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquired lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:58:45 compute-0 nova_compute[247704]: 2026-01-31 08:58:45.447 247708 DEBUG nova.network.neutron [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 08:58:45 compute-0 nova_compute[247704]: 2026-01-31 08:58:45.486 247708 DEBUG nova.compute.manager [req-527d7047-5c46-4d29-a723-27562629a06e req-7d2d16bd-6ba0-4914-ac87-1aa4ae536a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-changed-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:58:45 compute-0 nova_compute[247704]: 2026-01-31 08:58:45.487 247708 DEBUG nova.compute.manager [req-527d7047-5c46-4d29-a723-27562629a06e req-7d2d16bd-6ba0-4914-ac87-1aa4ae536a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing instance network info cache due to event network-changed-406c4828-51a3-4b13-8dfa-189de772181c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:58:45 compute-0 nova_compute[247704]: 2026-01-31 08:58:45.487 247708 DEBUG oslo_concurrency.lockutils [req-527d7047-5c46-4d29-a723-27562629a06e req-7d2d16bd-6ba0-4914-ac87-1aa4ae536a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:58:45 compute-0 sshd-session[400566]: Connection closed by invalid user debian 123.54.197.60 port 57720 [preauth]
Jan 31 08:58:45 compute-0 nova_compute[247704]: 2026-01-31 08:58:45.656 247708 DEBUG nova.network.neutron [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 08:58:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:45.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:45.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:46 compute-0 ceph-mon[74496]: pgmap v3804: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 79 op/s
Jan 31 08:58:46 compute-0 nova_compute[247704]: 2026-01-31 08:58:46.592 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:47 compute-0 sshd-session[400619]: Invalid user debian from 123.54.197.60 port 57734
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.210 247708 DEBUG nova.network.neutron [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating instance_info_cache with network_info: [{"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.229 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Releasing lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.229 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Instance network_info: |[{"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.230 247708 DEBUG oslo_concurrency.lockutils [req-527d7047-5c46-4d29-a723-27562629a06e req-7d2d16bd-6ba0-4914-ac87-1aa4ae536a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.230 247708 DEBUG nova.network.neutron [req-527d7047-5c46-4d29-a723-27562629a06e req-7d2d16bd-6ba0-4914-ac87-1aa4ae536a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.232 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Start _get_guest_xml network_info=[{"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'f9cb034f-03a7-4914-ba0e-94615d09a9f6', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4fa6de41-50fa-4a7f-9f53-f34c852aad03', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4fa6de41-50fa-4a7f-9f53-f34c852aad03', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b5a9d1f1-65d6-4560-98fa-a2eb7858eb30', 'attached_at': '', 'detached_at': '', 'volume_id': '4fa6de41-50fa-4a7f-9f53-f34c852aad03', 'serial': '4fa6de41-50fa-4a7f-9f53-f34c852aad03'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.237 247708 WARNING nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.243 247708 DEBUG nova.virt.libvirt.host [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.243 247708 DEBUG nova.virt.libvirt.host [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.246 247708 DEBUG nova.virt.libvirt.host [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.246 247708 DEBUG nova.virt.libvirt.host [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.247 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.247 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.248 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.248 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.248 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.248 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.248 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.249 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.249 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.249 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.249 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.250 247708 DEBUG nova.virt.hardware [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.276 247708 DEBUG nova.storage.rbd_utils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] rbd image b5a9d1f1-65d6-4560-98fa-a2eb7858eb30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.282 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 31 08:58:47 compute-0 sshd-session[400619]: Connection closed by invalid user debian 123.54.197.60 port 57734 [preauth]
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.528 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 08:58:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254434377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.744 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.788 247708 DEBUG nova.virt.libvirt.vif [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:58:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-771034180',display_name='tempest-TestVolumeBackupRestore-server-771034180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-771034180',id=209,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSXWw6wYpaSHBo9ftzyLl5wm30t8j0U4CUEEnCFBkUAB1sIpDz9xWTf3w+oVpHCAnveGlte5XYp1YwuD2KjE34NLwsu/ET1dovu4bvOMwBBQ2NeXVxqzGRj8Sl1giWY/A==',key_name='tempest-TestVolumeBackupRestore-1915864449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e458566e0de24b2fb797037d94d9014c',ramdisk_id='',reservation_id='r-iqep79g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1557060049',owner_user_name='tempest-TestVolumeBackupRestore-1557060049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:58:42Z,user_data=None,user_id='fe599a5134944b9fbf952e83fdf41c55',uuid=b5a9d1f1-65d6-4560-98fa-a2eb7858eb30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.788 247708 DEBUG nova.network.os_vif_util [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Converting VIF {"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.789 247708 DEBUG nova.network.os_vif_util [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:60:b6,bridge_name='br-int',has_traffic_filtering=True,id=406c4828-51a3-4b13-8dfa-189de772181c,network=Network(7c5ee869-1ab4-41f3-b296-34f3ca0f4177),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406c4828-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.791 247708 DEBUG nova.objects.instance [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lazy-loading 'pci_devices' on Instance uuid b5a9d1f1-65d6-4560-98fa-a2eb7858eb30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.813 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] End _get_guest_xml xml=<domain type="kvm">
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <uuid>b5a9d1f1-65d6-4560-98fa-a2eb7858eb30</uuid>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <name>instance-000000d1</name>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <metadata>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <nova:name>tempest-TestVolumeBackupRestore-server-771034180</nova:name>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 08:58:47</nova:creationTime>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <nova:user uuid="fe599a5134944b9fbf952e83fdf41c55">tempest-TestVolumeBackupRestore-1557060049-project-member</nova:user>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <nova:project uuid="e458566e0de24b2fb797037d94d9014c">tempest-TestVolumeBackupRestore-1557060049</nova:project>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <nova:port uuid="406c4828-51a3-4b13-8dfa-189de772181c">
Jan 31 08:58:47 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   </metadata>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <system>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <entry name="serial">b5a9d1f1-65d6-4560-98fa-a2eb7858eb30</entry>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <entry name="uuid">b5a9d1f1-65d6-4560-98fa-a2eb7858eb30</entry>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </system>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <os>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   </os>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <features>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <apic/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   </features>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   </clock>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   </cpu>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   <devices>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30_disk.config">
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       </source>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-4fa6de41-50fa-4a7f-9f53-f34c852aad03">
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       </source>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 08:58:47 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       </auth>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <serial>4fa6de41-50fa-4a7f-9f53-f34c852aad03</serial>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </disk>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:57:60:b6"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <target dev="tap406c4828-51"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </interface>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30/console.log" append="off"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </serial>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <video>
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </video>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </rng>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 08:58:47 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 08:58:47 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 08:58:47 compute-0 nova_compute[247704]:   </devices>
Jan 31 08:58:47 compute-0 nova_compute[247704]: </domain>
Jan 31 08:58:47 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.815 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Preparing to wait for external event network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.815 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquiring lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.816 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.816 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.817 247708 DEBUG nova.virt.libvirt.vif [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:58:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-771034180',display_name='tempest-TestVolumeBackupRestore-server-771034180',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-771034180',id=209,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSXWw6wYpaSHBo9ftzyLl5wm30t8j0U4CUEEnCFBkUAB1sIpDz9xWTf3w+oVpHCAnveGlte5XYp1YwuD2KjE34NLwsu/ET1dovu4bvOMwBBQ2NeXVxqzGRj8Sl1giWY/A==',key_name='tempest-TestVolumeBackupRestore-1915864449',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e458566e0de24b2fb797037d94d9014c',ramdisk_id='',reservation_id='r-iqep79g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1557060049',owner_user_name='tempest-TestVolumeBackupRestore-1557060049-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:58:42Z,user_data=None,user_id='fe599a5134944b9fbf952e83fdf41c55',uuid=b5a9d1f1-65d6-4560-98fa-a2eb7858eb30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.817 247708 DEBUG nova.network.os_vif_util [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Converting VIF {"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.817 247708 DEBUG nova.network.os_vif_util [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:60:b6,bridge_name='br-int',has_traffic_filtering=True,id=406c4828-51a3-4b13-8dfa-189de772181c,network=Network(7c5ee869-1ab4-41f3-b296-34f3ca0f4177),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406c4828-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.818 247708 DEBUG os_vif [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:60:b6,bridge_name='br-int',has_traffic_filtering=True,id=406c4828-51a3-4b13-8dfa-189de772181c,network=Network(7c5ee869-1ab4-41f3-b296-34f3ca0f4177),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406c4828-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.819 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.819 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.823 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.824 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap406c4828-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.824 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap406c4828-51, col_values=(('external_ids', {'iface-id': '406c4828-51a3-4b13-8dfa-189de772181c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:60:b6', 'vm-uuid': 'b5a9d1f1-65d6-4560-98fa-a2eb7858eb30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:47 compute-0 NetworkManager[49108]: <info>  [1769849927.8286] manager: (tap406c4828-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.830 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.834 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.836 247708 INFO os_vif [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:60:b6,bridge_name='br-int',has_traffic_filtering=True,id=406c4828-51a3-4b13-8dfa-189de772181c,network=Network(7c5ee869-1ab4-41f3-b296-34f3ca0f4177),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406c4828-51')
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.900 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.901 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.901 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] No VIF found with MAC fa:16:3e:57:60:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.901 247708 INFO nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Using config drive
Jan 31 08:58:47 compute-0 nova_compute[247704]: 2026-01-31 08:58:47.931 247708 DEBUG nova.storage.rbd_utils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] rbd image b5a9d1f1-65d6-4560-98fa-a2eb7858eb30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:58:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:47.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:47.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.408 247708 INFO nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Creating config drive at /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30/disk.config
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.413 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppayx294f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:48 compute-0 ceph-mon[74496]: pgmap v3805: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 31 08:58:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1254434377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 08:58:48 compute-0 sshd-session[400660]: Invalid user debian from 123.54.197.60 port 57742
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.541 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppayx294f" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.570 247708 DEBUG nova.storage.rbd_utils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] rbd image b5a9d1f1-65d6-4560-98fa-a2eb7858eb30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.574 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30/disk.config b5a9d1f1-65d6-4560-98fa-a2eb7858eb30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:58:48 compute-0 sshd-session[400660]: Connection closed by invalid user debian 123.54.197.60 port 57742 [preauth]
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.760 247708 DEBUG oslo_concurrency.processutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30/disk.config b5a9d1f1-65d6-4560-98fa-a2eb7858eb30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.762 247708 INFO nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Deleting local config drive /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30/disk.config because it was imported into RBD.
Jan 31 08:58:48 compute-0 kernel: tap406c4828-51: entered promiscuous mode
Jan 31 08:58:48 compute-0 NetworkManager[49108]: <info>  [1769849928.8177] manager: (tap406c4828-51): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Jan 31 08:58:48 compute-0 ovn_controller[149457]: 2026-01-31T08:58:48Z|00866|binding|INFO|Claiming lport 406c4828-51a3-4b13-8dfa-189de772181c for this chassis.
Jan 31 08:58:48 compute-0 ovn_controller[149457]: 2026-01-31T08:58:48Z|00867|binding|INFO|406c4828-51a3-4b13-8dfa-189de772181c: Claiming fa:16:3e:57:60:b6 10.100.0.12
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.818 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.823 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.835 247708 DEBUG nova.network.neutron [req-527d7047-5c46-4d29-a723-27562629a06e req-7d2d16bd-6ba0-4914-ac87-1aa4ae536a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updated VIF entry in instance network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.837 247708 DEBUG nova.network.neutron [req-527d7047-5c46-4d29-a723-27562629a06e req-7d2d16bd-6ba0-4914-ac87-1aa4ae536a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating instance_info_cache with network_info: [{"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.839 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:60:b6 10.100.0.12'], port_security=['fa:16:3e:57:60:b6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b5a9d1f1-65d6-4560-98fa-a2eb7858eb30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c5ee869-1ab4-41f3-b296-34f3ca0f4177', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e458566e0de24b2fb797037d94d9014c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8f6ec03-7c86-4716-8c55-9383e1499cf3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a434d06e-fe73-474d-9955-8520021405a6, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=406c4828-51a3-4b13-8dfa-189de772181c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.840 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 406c4828-51a3-4b13-8dfa-189de772181c in datapath 7c5ee869-1ab4-41f3-b296-34f3ca0f4177 bound to our chassis
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.841 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c5ee869-1ab4-41f3-b296-34f3ca0f4177
Jan 31 08:58:48 compute-0 systemd-udevd[400737]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:58:48 compute-0 systemd-machined[214448]: New machine qemu-91-instance-000000d1.
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.852 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0705d38c-990a-45bc-9009-4121964cad29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.853 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c5ee869-11 in ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.854 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.855 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c5ee869-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.855 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bec12692-2bd2-48aa-a4c9-0e6fc90b95ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.857 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb33f74-1efa-4aeb-baf1-e0bcdefa9a88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 ovn_controller[149457]: 2026-01-31T08:58:48Z|00868|binding|INFO|Setting lport 406c4828-51a3-4b13-8dfa-189de772181c ovn-installed in OVS
Jan 31 08:58:48 compute-0 ovn_controller[149457]: 2026-01-31T08:58:48Z|00869|binding|INFO|Setting lport 406c4828-51a3-4b13-8dfa-189de772181c up in Southbound
Jan 31 08:58:48 compute-0 NetworkManager[49108]: <info>  [1769849928.8601] device (tap406c4828-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:58:48 compute-0 NetworkManager[49108]: <info>  [1769849928.8606] device (tap406c4828-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 08:58:48 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-000000d1.
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.861 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.867 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[a35e1f77-8849-4121-97da-f47f150b5c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 nova_compute[247704]: 2026-01-31 08:58:48.869 247708 DEBUG oslo_concurrency.lockutils [req-527d7047-5c46-4d29-a723-27562629a06e req-7d2d16bd-6ba0-4914-ac87-1aa4ae536a85 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.879 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dcfe55-2ab0-419c-ac4e-853829c85b5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.901 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[50822b0e-aa13-4cd8-994e-bf081ac35dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 systemd-udevd[400741]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.906 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a3034a28-e211-4b9a-a652-5baa56b85b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 NetworkManager[49108]: <info>  [1769849928.9078] manager: (tap7c5ee869-10): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.935 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[0298b04e-c0b0-4461-8c72-64ea0a05f041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.938 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c42d4b2c-dd4f-4ad2-be8c-bbc8dad6fe53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 NetworkManager[49108]: <info>  [1769849928.9563] device (tap7c5ee869-10): carrier: link connected
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.962 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[4c16bbfc-51d4-4b54-90f1-e6f113284e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.975 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b57e52b7-aca4-4992-8b36-883a352fb575]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c5ee869-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:65:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017157, 'reachable_time': 44077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400779, 'error': None, 'target': 'ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:48 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:48.987 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[29f00826-75e9-4da3-aa3d-ef00c527a6e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:6517'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1017157, 'tstamp': 1017157}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400785, 'error': None, 'target': 'ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:49 compute-0 podman[400764]: 2026-01-31 08:58:49.001258214 +0000 UTC m=+0.056091615 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.001 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6daa32e9-4659-45f6-8ce8-550632b9d7cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c5ee869-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:65:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017157, 'reachable_time': 44077, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400788, 'error': None, 'target': 'ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.028 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[0341d2c5-015c-445f-96a8-1f3db2c7c306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.074 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f4f6e1-140a-4990-9492-db44fccccb13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.075 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c5ee869-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.076 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.076 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c5ee869-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.078 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:49 compute-0 kernel: tap7c5ee869-10: entered promiscuous mode
Jan 31 08:58:49 compute-0 NetworkManager[49108]: <info>  [1769849929.0788] manager: (tap7c5ee869-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.080 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.082 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c5ee869-10, col_values=(('external_ids', {'iface-id': '88f71a67-9e7a-451f-9a9c-07d88ef8702d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.083 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:49 compute-0 ovn_controller[149457]: 2026-01-31T08:58:49Z|00870|binding|INFO|Releasing lport 88f71a67-9e7a-451f-9a9c-07d88ef8702d from this chassis (sb_readonly=0)
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.084 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.086 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c5ee869-1ab4-41f3-b296-34f3ca0f4177.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c5ee869-1ab4-41f3-b296-34f3ca0f4177.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.087 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[162ef7d9-2402-43f8-8db7-05b2c7d69efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.087 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: global
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-7c5ee869-1ab4-41f3-b296-34f3ca0f4177
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/7c5ee869-1ab4-41f3-b296-34f3ca0f4177.pid.haproxy
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 7c5ee869-1ab4-41f3-b296-34f3ca0f4177
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 08:58:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:58:49.088 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177', 'env', 'PROCESS_TAG=haproxy-7c5ee869-1ab4-41f3-b296-34f3ca0f4177', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c5ee869-1ab4-41f3-b296-34f3ca0f4177.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.089 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.276 247708 DEBUG nova.compute.manager [req-1681e996-474f-4a35-b3b8-029d3c0a4517 req-4fc4c97f-f744-4868-bf61-9cca46e0666c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.276 247708 DEBUG oslo_concurrency.lockutils [req-1681e996-474f-4a35-b3b8-029d3c0a4517 req-4fc4c97f-f744-4868-bf61-9cca46e0666c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.277 247708 DEBUG oslo_concurrency.lockutils [req-1681e996-474f-4a35-b3b8-029d3c0a4517 req-4fc4c97f-f744-4868-bf61-9cca46e0666c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.277 247708 DEBUG oslo_concurrency.lockutils [req-1681e996-474f-4a35-b3b8-029d3c0a4517 req-4fc4c97f-f744-4868-bf61-9cca46e0666c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.277 247708 DEBUG nova.compute.manager [req-1681e996-474f-4a35-b3b8-029d3c0a4517 req-4fc4c97f-f744-4868-bf61-9cca46e0666c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Processing event network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 08:58:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 386 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 205 KiB/s wr, 73 op/s
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.413 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849929.4130268, b5a9d1f1-65d6-4560-98fa-a2eb7858eb30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.414 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] VM Started (Lifecycle Event)
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.417 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.420 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.424 247708 INFO nova.virt.libvirt.driver [-] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Instance spawned successfully.
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.425 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 08:58:49 compute-0 podman[400864]: 2026-01-31 08:58:49.442768369 +0000 UTC m=+0.052140438 container create 406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.443 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.454 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:58:49 compute-0 systemd[1]: Started libpod-conmon-406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19.scope.
Jan 31 08:58:49 compute-0 podman[400864]: 2026-01-31 08:58:49.412242757 +0000 UTC m=+0.021614846 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:58:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:58:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/372dbb4f573351d93dfe2822925568b31f9de152753df983cd16777acece9abc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.531 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.532 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849929.4133203, b5a9d1f1-65d6-4560-98fa-a2eb7858eb30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.532 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] VM Paused (Lifecycle Event)
Jan 31 08:58:49 compute-0 podman[400864]: 2026-01-31 08:58:49.534421538 +0000 UTC m=+0.143793627 container init 406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.536 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.537 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.537 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.537 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.538 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.538 247708 DEBUG nova.virt.libvirt.driver [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 08:58:49 compute-0 podman[400864]: 2026-01-31 08:58:49.540705331 +0000 UTC m=+0.150077410 container start 406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:58:49 compute-0 neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177[400880]: [NOTICE]   (400884) : New worker (400886) forked
Jan 31 08:58:49 compute-0 neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177[400880]: [NOTICE]   (400884) : Loading success.
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.565 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.571 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769849929.4200606, b5a9d1f1-65d6-4560-98fa-a2eb7858eb30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.571 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] VM Resumed (Lifecycle Event)
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.600 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.604 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.611 247708 INFO nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Took 5.47 seconds to spawn the instance on the hypervisor.
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.611 247708 DEBUG nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.624 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.726 247708 INFO nova.compute.manager [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Took 7.64 seconds to build instance.
Jan 31 08:58:49 compute-0 nova_compute[247704]: 2026-01-31 08:58:49.761 247708 DEBUG oslo_concurrency.lockutils [None req-c4d45507-db5d-4694-b2f2-1c3652c46d4c fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:49.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:58:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:58:50 compute-0 sshd-session[400791]: Invalid user debian from 123.54.197.60 port 57746
Jan 31 08:58:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:50 compute-0 ceph-mon[74496]: pgmap v3806: 305 pgs: 305 active+clean; 386 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 205 KiB/s wr, 73 op/s
Jan 31 08:58:51 compute-0 sshd-session[400791]: Connection closed by invalid user debian 123.54.197.60 port 57746 [preauth]
Jan 31 08:58:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 414 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 124 op/s
Jan 31 08:58:51 compute-0 nova_compute[247704]: 2026-01-31 08:58:51.446 247708 DEBUG nova.compute.manager [req-dc0211ea-d8eb-4c85-9677-bbd4a7a7cef5 req-0fe44d82-2be1-44ee-8683-6464361134a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:58:51 compute-0 nova_compute[247704]: 2026-01-31 08:58:51.448 247708 DEBUG oslo_concurrency.lockutils [req-dc0211ea-d8eb-4c85-9677-bbd4a7a7cef5 req-0fe44d82-2be1-44ee-8683-6464361134a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:58:51 compute-0 nova_compute[247704]: 2026-01-31 08:58:51.449 247708 DEBUG oslo_concurrency.lockutils [req-dc0211ea-d8eb-4c85-9677-bbd4a7a7cef5 req-0fe44d82-2be1-44ee-8683-6464361134a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:58:51 compute-0 nova_compute[247704]: 2026-01-31 08:58:51.449 247708 DEBUG oslo_concurrency.lockutils [req-dc0211ea-d8eb-4c85-9677-bbd4a7a7cef5 req-0fe44d82-2be1-44ee-8683-6464361134a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:58:51 compute-0 nova_compute[247704]: 2026-01-31 08:58:51.449 247708 DEBUG nova.compute.manager [req-dc0211ea-d8eb-4c85-9677-bbd4a7a7cef5 req-0fe44d82-2be1-44ee-8683-6464361134a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] No waiting events found dispatching network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:58:51 compute-0 nova_compute[247704]: 2026-01-31 08:58:51.449 247708 WARNING nova.compute.manager [req-dc0211ea-d8eb-4c85-9677-bbd4a7a7cef5 req-0fe44d82-2be1-44ee-8683-6464361134a8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received unexpected event network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c for instance with vm_state active and task_state None.
Jan 31 08:58:51 compute-0 ceph-mon[74496]: pgmap v3807: 305 pgs: 305 active+clean; 414 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 124 op/s
Jan 31 08:58:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:58:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:51.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:51.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:52 compute-0 nova_compute[247704]: 2026-01-31 08:58:52.533 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:52 compute-0 sshd-session[400896]: Invalid user debian from 123.54.197.60 port 53692
Jan 31 08:58:52 compute-0 nova_compute[247704]: 2026-01-31 08:58:52.826 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:52 compute-0 sshd-session[400896]: Connection closed by invalid user debian 123.54.197.60 port 53692 [preauth]
Jan 31 08:58:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 31 08:58:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:53.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:54 compute-0 sshd-session[400899]: Invalid user debian from 123.54.197.60 port 53694
Jan 31 08:58:54 compute-0 sshd-session[400899]: Connection closed by invalid user debian 123.54.197.60 port 53694 [preauth]
Jan 31 08:58:54 compute-0 ceph-mon[74496]: pgmap v3808: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 31 08:58:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/602051847' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:58:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/602051847' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:58:54 compute-0 sudo[400902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:54 compute-0 sudo[400902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:54 compute-0 sudo[400902]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:54 compute-0 sudo[400927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:58:54 compute-0 sudo[400927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:54 compute-0 sudo[400927]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:54 compute-0 sudo[400952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:54 compute-0 sudo[400952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:54 compute-0 sudo[400952]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:54 compute-0 sudo[400979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 08:58:54 compute-0 sudo[400979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:54 compute-0 sudo[400979]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:58:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:58:55 compute-0 sshd-session[400964]: Invalid user debian from 123.54.197.60 port 53706
Jan 31 08:58:55 compute-0 sshd-session[400964]: Connection closed by invalid user debian 123.54.197.60 port 53706 [preauth]
Jan 31 08:58:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 08:58:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 08:58:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:55.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:58:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:58:56 compute-0 NetworkManager[49108]: <info>  [1769849936.2416] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Jan 31 08:58:56 compute-0 NetworkManager[49108]: <info>  [1769849936.2430] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Jan 31 08:58:56 compute-0 nova_compute[247704]: 2026-01-31 08:58:56.242 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:56 compute-0 nova_compute[247704]: 2026-01-31 08:58:56.248 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:56 compute-0 ovn_controller[149457]: 2026-01-31T08:58:56Z|00871|binding|INFO|Releasing lport 88f71a67-9e7a-451f-9a9c-07d88ef8702d from this chassis (sb_readonly=0)
Jan 31 08:58:56 compute-0 nova_compute[247704]: 2026-01-31 08:58:56.262 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:58:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:58:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 08:58:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:58:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 08:58:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5d23c5eb-9a15-4054-b7b0-e7268d8767dd does not exist
Jan 31 08:58:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 08:58:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:58:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ffc39667-caa9-4da1-a4be-1d4d1c953ad3 does not exist
Jan 31 08:58:56 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bfd0c982-7fc1-4b0b-8c45-a0db069c6d91 does not exist
Jan 31 08:58:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 08:58:56 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:58:56 compute-0 ceph-mon[74496]: pgmap v3809: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:58:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:58:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 08:58:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:58:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 08:58:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 08:58:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:58:56 compute-0 sudo[401039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:56 compute-0 sudo[401039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:56 compute-0 sudo[401039]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:56 compute-0 sudo[401064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:58:56 compute-0 sudo[401064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:56 compute-0 sudo[401064]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:56 compute-0 nova_compute[247704]: 2026-01-31 08:58:56.505 247708 DEBUG nova.compute.manager [req-87bd390f-0189-471c-b91b-48bb7440f695 req-88def414-749e-4109-996d-d9e0ff0ea9c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-changed-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:58:56 compute-0 nova_compute[247704]: 2026-01-31 08:58:56.506 247708 DEBUG nova.compute.manager [req-87bd390f-0189-471c-b91b-48bb7440f695 req-88def414-749e-4109-996d-d9e0ff0ea9c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing instance network info cache due to event network-changed-406c4828-51a3-4b13-8dfa-189de772181c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:58:56 compute-0 nova_compute[247704]: 2026-01-31 08:58:56.506 247708 DEBUG oslo_concurrency.lockutils [req-87bd390f-0189-471c-b91b-48bb7440f695 req-88def414-749e-4109-996d-d9e0ff0ea9c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:58:56 compute-0 nova_compute[247704]: 2026-01-31 08:58:56.507 247708 DEBUG oslo_concurrency.lockutils [req-87bd390f-0189-471c-b91b-48bb7440f695 req-88def414-749e-4109-996d-d9e0ff0ea9c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:58:56 compute-0 nova_compute[247704]: 2026-01-31 08:58:56.507 247708 DEBUG nova.network.neutron [req-87bd390f-0189-471c-b91b-48bb7440f695 req-88def414-749e-4109-996d-d9e0ff0ea9c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:58:56 compute-0 sudo[401089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:56 compute-0 sudo[401089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:56 compute-0 sudo[401089]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:56 compute-0 sudo[401114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 08:58:56 compute-0 sudo[401114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:56 compute-0 podman[401178]: 2026-01-31 08:58:56.90003701 +0000 UTC m=+0.039047420 container create bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rhodes, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:58:56 compute-0 systemd[1]: Started libpod-conmon-bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402.scope.
Jan 31 08:58:56 compute-0 sshd-session[401035]: Invalid user debian from 123.54.197.60 port 53716
Jan 31 08:58:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:58:56 compute-0 podman[401178]: 2026-01-31 08:58:56.967649125 +0000 UTC m=+0.106659535 container init bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:58:56 compute-0 podman[401178]: 2026-01-31 08:58:56.975321901 +0000 UTC m=+0.114332311 container start bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:58:56 compute-0 podman[401178]: 2026-01-31 08:58:56.881620783 +0000 UTC m=+0.020631213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:58:56 compute-0 podman[401178]: 2026-01-31 08:58:56.978815907 +0000 UTC m=+0.117826347 container attach bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rhodes, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:58:56 compute-0 relaxed_rhodes[401194]: 167 167
Jan 31 08:58:56 compute-0 systemd[1]: libpod-bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402.scope: Deactivated successfully.
Jan 31 08:58:56 compute-0 podman[401178]: 2026-01-31 08:58:56.982070356 +0000 UTC m=+0.121080766 container died bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rhodes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ccb4826eb563d6f4daa9dc2886f7d61257eb1960f579942c67107a03c8bbf0-merged.mount: Deactivated successfully.
Jan 31 08:58:57 compute-0 podman[401178]: 2026-01-31 08:58:57.021105005 +0000 UTC m=+0.160115415 container remove bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:58:57 compute-0 systemd[1]: libpod-conmon-bc3b9162610428f2125b6083ce7a8a62303c0135fa1959847b9e0a803183d402.scope: Deactivated successfully.
Jan 31 08:58:57 compute-0 podman[401218]: 2026-01-31 08:58:57.174188507 +0000 UTC m=+0.054701311 container create 79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_engelbart, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 31 08:58:57 compute-0 sshd-session[401035]: Connection closed by invalid user debian 123.54.197.60 port 53716 [preauth]
Jan 31 08:58:57 compute-0 systemd[1]: Started libpod-conmon-79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526.scope.
Jan 31 08:58:57 compute-0 podman[401218]: 2026-01-31 08:58:57.141726478 +0000 UTC m=+0.022239302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a8a48da0eb18f51b6edc802337bd57b5582d81abb039a7fc8dcc2feed44cbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a8a48da0eb18f51b6edc802337bd57b5582d81abb039a7fc8dcc2feed44cbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a8a48da0eb18f51b6edc802337bd57b5582d81abb039a7fc8dcc2feed44cbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a8a48da0eb18f51b6edc802337bd57b5582d81abb039a7fc8dcc2feed44cbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96a8a48da0eb18f51b6edc802337bd57b5582d81abb039a7fc8dcc2feed44cbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:57 compute-0 podman[401218]: 2026-01-31 08:58:57.267685691 +0000 UTC m=+0.148198545 container init 79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:58:57 compute-0 podman[401218]: 2026-01-31 08:58:57.274734302 +0000 UTC m=+0.155247106 container start 79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:58:57 compute-0 podman[401218]: 2026-01-31 08:58:57.279481057 +0000 UTC m=+0.159993861 container attach 79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 08:58:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:58:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 08:58:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 08:58:57 compute-0 nova_compute[247704]: 2026-01-31 08:58:57.535 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:57 compute-0 nova_compute[247704]: 2026-01-31 08:58:57.828 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:58:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:57.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:58:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:58:58 compute-0 quirky_engelbart[401235]: --> passed data devices: 0 physical, 1 LVM
Jan 31 08:58:58 compute-0 quirky_engelbart[401235]: --> relative data size: 1.0
Jan 31 08:58:58 compute-0 quirky_engelbart[401235]: --> All data devices are unavailable
Jan 31 08:58:58 compute-0 systemd[1]: libpod-79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526.scope: Deactivated successfully.
Jan 31 08:58:58 compute-0 podman[401218]: 2026-01-31 08:58:58.134241292 +0000 UTC m=+1.014754096 container died 79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_engelbart, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-96a8a48da0eb18f51b6edc802337bd57b5582d81abb039a7fc8dcc2feed44cbe-merged.mount: Deactivated successfully.
Jan 31 08:58:58 compute-0 podman[401218]: 2026-01-31 08:58:58.19875225 +0000 UTC m=+1.079265054 container remove 79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 08:58:58 compute-0 systemd[1]: libpod-conmon-79a117081d8db8e090001fd3926ea9bc5ca2170a64f9b747e394c8013ce19526.scope: Deactivated successfully.
Jan 31 08:58:58 compute-0 sudo[401114]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:58 compute-0 sudo[401264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:58 compute-0 sudo[401264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:58 compute-0 sudo[401264]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:58 compute-0 sudo[401289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:58:58 compute-0 sudo[401289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:58 compute-0 sudo[401289]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:58 compute-0 sudo[401314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:58:58 compute-0 sudo[401314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:58 compute-0 sudo[401314]: pam_unix(sudo:session): session closed for user root
Jan 31 08:58:58 compute-0 ceph-mon[74496]: pgmap v3810: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:58:58 compute-0 sudo[401340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 08:58:58 compute-0 sudo[401340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:58:58 compute-0 sshd-session[401240]: Invalid user debian from 123.54.197.60 port 53730
Jan 31 08:58:58 compute-0 nova_compute[247704]: 2026-01-31 08:58:58.598 247708 DEBUG nova.network.neutron [req-87bd390f-0189-471c-b91b-48bb7440f695 req-88def414-749e-4109-996d-d9e0ff0ea9c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updated VIF entry in instance network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:58:58 compute-0 nova_compute[247704]: 2026-01-31 08:58:58.601 247708 DEBUG nova.network.neutron [req-87bd390f-0189-471c-b91b-48bb7440f695 req-88def414-749e-4109-996d-d9e0ff0ea9c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating instance_info_cache with network_info: [{"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:58:58 compute-0 sshd-session[401240]: Connection closed by invalid user debian 123.54.197.60 port 53730 [preauth]
Jan 31 08:58:58 compute-0 podman[401404]: 2026-01-31 08:58:58.736862805 +0000 UTC m=+0.043894948 container create bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:58:58 compute-0 systemd[1]: Started libpod-conmon-bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a.scope.
Jan 31 08:58:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:58:58 compute-0 podman[401404]: 2026-01-31 08:58:58.803778022 +0000 UTC m=+0.110810205 container init bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 08:58:58 compute-0 podman[401404]: 2026-01-31 08:58:58.81067135 +0000 UTC m=+0.117703493 container start bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:58:58 compute-0 podman[401404]: 2026-01-31 08:58:58.814168464 +0000 UTC m=+0.121200607 container attach bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 08:58:58 compute-0 podman[401404]: 2026-01-31 08:58:58.719062252 +0000 UTC m=+0.026094415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:58:58 compute-0 eager_faraday[401421]: 167 167
Jan 31 08:58:58 compute-0 systemd[1]: libpod-bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a.scope: Deactivated successfully.
Jan 31 08:58:58 compute-0 podman[401404]: 2026-01-31 08:58:58.818636354 +0000 UTC m=+0.125668497 container died bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:58:58 compute-0 nova_compute[247704]: 2026-01-31 08:58:58.820 247708 DEBUG oslo_concurrency.lockutils [req-87bd390f-0189-471c-b91b-48bb7440f695 req-88def414-749e-4109-996d-d9e0ff0ea9c5 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-48415b73afb9bb77a99966db26a125f557c7540aa18c1ad34784f06fcf5e17a6-merged.mount: Deactivated successfully.
Jan 31 08:58:58 compute-0 podman[401404]: 2026-01-31 08:58:58.858864231 +0000 UTC m=+0.165896374 container remove bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:58:58 compute-0 systemd[1]: libpod-conmon-bfe0256e1e32e22ed4f045ba10bb299c9199b90bdd48b429cb364de4248c7a3a.scope: Deactivated successfully.
Jan 31 08:58:58 compute-0 podman[401445]: 2026-01-31 08:58:58.982806986 +0000 UTC m=+0.038165169 container create bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_golick, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:58:59 compute-0 systemd[1]: Started libpod-conmon-bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080.scope.
Jan 31 08:58:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bda8d3ebf193f675a9eec644857553c8a27950275af8f8adea8cac42ae3e65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bda8d3ebf193f675a9eec644857553c8a27950275af8f8adea8cac42ae3e65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bda8d3ebf193f675a9eec644857553c8a27950275af8f8adea8cac42ae3e65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88bda8d3ebf193f675a9eec644857553c8a27950275af8f8adea8cac42ae3e65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:58:59 compute-0 podman[401445]: 2026-01-31 08:58:58.966709384 +0000 UTC m=+0.022067597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:58:59 compute-0 podman[401445]: 2026-01-31 08:58:59.066992272 +0000 UTC m=+0.122350475 container init bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_golick, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 08:58:59 compute-0 podman[401445]: 2026-01-31 08:58:59.074788271 +0000 UTC m=+0.130146464 container start bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:58:59 compute-0 podman[401445]: 2026-01-31 08:58:59.079283102 +0000 UTC m=+0.134641295 container attach bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_golick, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 08:58:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:58:59 compute-0 nova_compute[247704]: 2026-01-31 08:58:59.716 247708 DEBUG nova.compute.manager [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-changed-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:58:59 compute-0 nova_compute[247704]: 2026-01-31 08:58:59.716 247708 DEBUG nova.compute.manager [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing instance network info cache due to event network-changed-406c4828-51a3-4b13-8dfa-189de772181c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:58:59 compute-0 nova_compute[247704]: 2026-01-31 08:58:59.717 247708 DEBUG oslo_concurrency.lockutils [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:58:59 compute-0 nova_compute[247704]: 2026-01-31 08:58:59.717 247708 DEBUG oslo_concurrency.lockutils [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:58:59 compute-0 nova_compute[247704]: 2026-01-31 08:58:59.717 247708 DEBUG nova.network.neutron [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:58:59 compute-0 practical_golick[401462]: {
Jan 31 08:58:59 compute-0 practical_golick[401462]:     "0": [
Jan 31 08:58:59 compute-0 practical_golick[401462]:         {
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "devices": [
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "/dev/loop3"
Jan 31 08:58:59 compute-0 practical_golick[401462]:             ],
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "lv_name": "ceph_lv0",
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "lv_size": "7511998464",
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "name": "ceph_lv0",
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "tags": {
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.cluster_name": "ceph",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.crush_device_class": "",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.encrypted": "0",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.osd_id": "0",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.type": "block",
Jan 31 08:58:59 compute-0 practical_golick[401462]:                 "ceph.vdo": "0"
Jan 31 08:58:59 compute-0 practical_golick[401462]:             },
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "type": "block",
Jan 31 08:58:59 compute-0 practical_golick[401462]:             "vg_name": "ceph_vg0"
Jan 31 08:58:59 compute-0 practical_golick[401462]:         }
Jan 31 08:58:59 compute-0 practical_golick[401462]:     ]
Jan 31 08:58:59 compute-0 practical_golick[401462]: }
Jan 31 08:58:59 compute-0 systemd[1]: libpod-bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080.scope: Deactivated successfully.
Jan 31 08:58:59 compute-0 podman[401445]: 2026-01-31 08:58:59.952072224 +0000 UTC m=+1.007430417 container died bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_golick, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:58:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:58:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:58:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:59.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-88bda8d3ebf193f675a9eec644857553c8a27950275af8f8adea8cac42ae3e65-merged.mount: Deactivated successfully.
Jan 31 08:59:00 compute-0 podman[401445]: 2026-01-31 08:59:00.005286618 +0000 UTC m=+1.060644811 container remove bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_golick, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:59:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:00 compute-0 systemd[1]: libpod-conmon-bfc28af1b8b7b6eae6877b679905479c49617a7697ce5ed9e071f93d41ac9080.scope: Deactivated successfully.
Jan 31 08:59:00 compute-0 sudo[401340]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:00 compute-0 sshd-session[401461]: Invalid user admin from 123.54.197.60 port 53746
Jan 31 08:59:00 compute-0 sudo[401483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:00 compute-0 sudo[401483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:00 compute-0 sudo[401483]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:00 compute-0 sudo[401508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:59:00 compute-0 sudo[401508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:00 compute-0 sudo[401508]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:00 compute-0 sudo[401533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:00 compute-0 sudo[401533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:00 compute-0 sudo[401533]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:00 compute-0 sudo[401558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 08:59:00 compute-0 sudo[401558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:00 compute-0 sshd-session[401461]: Connection closed by invalid user admin 123.54.197.60 port 53746 [preauth]
Jan 31 08:59:00 compute-0 ceph-mon[74496]: pgmap v3811: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 31 08:59:00 compute-0 podman[401623]: 2026-01-31 08:59:00.575791911 +0000 UTC m=+0.037639836 container create 00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:59:00 compute-0 systemd[1]: Started libpod-conmon-00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117.scope.
Jan 31 08:59:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:59:00 compute-0 podman[401623]: 2026-01-31 08:59:00.559610547 +0000 UTC m=+0.021458492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:59:00 compute-0 podman[401623]: 2026-01-31 08:59:00.657891427 +0000 UTC m=+0.119739382 container init 00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:59:00 compute-0 podman[401623]: 2026-01-31 08:59:00.665872111 +0000 UTC m=+0.127720036 container start 00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:59:00 compute-0 podman[401623]: 2026-01-31 08:59:00.669337355 +0000 UTC m=+0.131185300 container attach 00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:59:00 compute-0 awesome_cartwright[401641]: 167 167
Jan 31 08:59:00 compute-0 systemd[1]: libpod-00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117.scope: Deactivated successfully.
Jan 31 08:59:00 compute-0 podman[401623]: 2026-01-31 08:59:00.673560637 +0000 UTC m=+0.135408562 container died 00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-66d4067b44b877b70e79ce7fd202a0d5dca8cc09bbd874346184cac799c8a4c3-merged.mount: Deactivated successfully.
Jan 31 08:59:00 compute-0 podman[401623]: 2026-01-31 08:59:00.707752839 +0000 UTC m=+0.169600764 container remove 00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 08:59:00 compute-0 systemd[1]: libpod-conmon-00dea210a0fcdacf36e1f961db89ab9f13746ed33539934a6eed46e364371117.scope: Deactivated successfully.
Jan 31 08:59:00 compute-0 podman[401665]: 2026-01-31 08:59:00.85133668 +0000 UTC m=+0.044436240 container create 405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:59:00 compute-0 systemd[1]: Started libpod-conmon-405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818.scope.
Jan 31 08:59:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3421e8479b663ac497f6401b259cd6ac69df76e3c2afd24de997c344e6f8056/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3421e8479b663ac497f6401b259cd6ac69df76e3c2afd24de997c344e6f8056/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3421e8479b663ac497f6401b259cd6ac69df76e3c2afd24de997c344e6f8056/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3421e8479b663ac497f6401b259cd6ac69df76e3c2afd24de997c344e6f8056/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:59:00 compute-0 podman[401665]: 2026-01-31 08:59:00.831951439 +0000 UTC m=+0.025051019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 08:59:00 compute-0 podman[401665]: 2026-01-31 08:59:00.942648811 +0000 UTC m=+0.135748391 container init 405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:59:00 compute-0 podman[401665]: 2026-01-31 08:59:00.949221791 +0000 UTC m=+0.142321351 container start 405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:59:00 compute-0 podman[401665]: 2026-01-31 08:59:00.952327246 +0000 UTC m=+0.145426796 container attach 405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 08:59:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 137 op/s
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.427 247708 DEBUG nova.network.neutron [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updated VIF entry in instance network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.428 247708 DEBUG nova.network.neutron [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating instance_info_cache with network_info: [{"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.453 247708 DEBUG oslo_concurrency.lockutils [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.454 247708 DEBUG nova.compute.manager [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-changed-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.454 247708 DEBUG nova.compute.manager [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing instance network info cache due to event network-changed-406c4828-51a3-4b13-8dfa-189de772181c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.454 247708 DEBUG oslo_concurrency.lockutils [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.454 247708 DEBUG oslo_concurrency.lockutils [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.455 247708 DEBUG nova.network.neutron [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:59:01 compute-0 nova_compute[247704]: 2026-01-31 08:59:01.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:01 compute-0 elegant_payne[401681]: {
Jan 31 08:59:01 compute-0 elegant_payne[401681]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 08:59:01 compute-0 elegant_payne[401681]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 08:59:01 compute-0 elegant_payne[401681]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 08:59:01 compute-0 elegant_payne[401681]:         "osd_id": 0,
Jan 31 08:59:01 compute-0 elegant_payne[401681]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 08:59:01 compute-0 elegant_payne[401681]:         "type": "bluestore"
Jan 31 08:59:01 compute-0 elegant_payne[401681]:     }
Jan 31 08:59:01 compute-0 elegant_payne[401681]: }
Jan 31 08:59:01 compute-0 systemd[1]: libpod-405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818.scope: Deactivated successfully.
Jan 31 08:59:01 compute-0 podman[401703]: 2026-01-31 08:59:01.831638187 +0000 UTC m=+0.034189992 container died 405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:59:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3421e8479b663ac497f6401b259cd6ac69df76e3c2afd24de997c344e6f8056-merged.mount: Deactivated successfully.
Jan 31 08:59:01 compute-0 podman[401703]: 2026-01-31 08:59:01.891223756 +0000 UTC m=+0.093775551 container remove 405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 08:59:01 compute-0 systemd[1]: libpod-conmon-405d9c052be3fbddb689e62665291591693206e01bb8d650be78bb3c6e48b818.scope: Deactivated successfully.
Jan 31 08:59:01 compute-0 sudo[401558]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 08:59:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:59:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 08:59:01 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:59:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1adc7f7a-8877-4596-b515-f0638c05334c does not exist
Jan 31 08:59:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4f4609fa-1d2e-49c5-918a-8f1563d3f308 does not exist
Jan 31 08:59:01 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 48030120-4e4d-4d29-99db-62f784d584ba does not exist
Jan 31 08:59:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:01.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:02 compute-0 sudo[401716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:02 compute-0 sudo[401716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:02 compute-0 sudo[401716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:02 compute-0 sudo[401741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:59:02 compute-0 sudo[401741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:02 compute-0 sudo[401741]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:02 compute-0 sshd-session[401637]: Invalid user admin from 123.54.197.60 port 42348
Jan 31 08:59:02 compute-0 ceph-mon[74496]: pgmap v3812: 305 pgs: 305 active+clean; 418 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 137 op/s
Jan 31 08:59:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:59:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 08:59:02 compute-0 sshd-session[401637]: Connection closed by invalid user admin 123.54.197.60 port 42348 [preauth]
Jan 31 08:59:02 compute-0 nova_compute[247704]: 2026-01-31 08:59:02.536 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:02 compute-0 nova_compute[247704]: 2026-01-31 08:59:02.780 247708 DEBUG nova.network.neutron [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updated VIF entry in instance network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:59:02 compute-0 nova_compute[247704]: 2026-01-31 08:59:02.781 247708 DEBUG nova.network.neutron [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating instance_info_cache with network_info: [{"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:59:02 compute-0 nova_compute[247704]: 2026-01-31 08:59:02.830 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:03 compute-0 nova_compute[247704]: 2026-01-31 08:59:03.000 247708 DEBUG oslo_concurrency.lockutils [req-c09b7a14-c3a1-48ba-9639-3cf611342bfe req-12d2e395-343b-4ddb-affb-ebee702b276e 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:59:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 432 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 86 op/s
Jan 31 08:59:03 compute-0 ovn_controller[149457]: 2026-01-31T08:59:03Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:60:b6 10.100.0.12
Jan 31 08:59:03 compute-0 ovn_controller[149457]: 2026-01-31T08:59:03Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:60:b6 10.100.0.12
Jan 31 08:59:03 compute-0 sshd-session[401767]: Invalid user admin from 123.54.197.60 port 42360
Jan 31 08:59:03 compute-0 podman[401769]: 2026-01-31 08:59:03.918695617 +0000 UTC m=+0.084064286 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:59:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:59:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:03.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:59:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:04 compute-0 sshd-session[401767]: Connection closed by invalid user admin 123.54.197.60 port 42360 [preauth]
Jan 31 08:59:04 compute-0 ceph-mon[74496]: pgmap v3813: 305 pgs: 305 active+clean; 432 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 86 op/s
Jan 31 08:59:04 compute-0 sudo[401799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:04 compute-0 sudo[401799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:04 compute-0 sudo[401799]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:04 compute-0 sudo[401824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:04 compute-0 sudo[401824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:04 compute-0 sudo[401824]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 422 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Jan 31 08:59:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1734990595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:05 compute-0 sshd-session[401797]: Invalid user admin from 123.54.197.60 port 42362
Jan 31 08:59:05 compute-0 sshd-session[401797]: Connection closed by invalid user admin 123.54.197.60 port 42362 [preauth]
Jan 31 08:59:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:59:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:59:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:06.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:06 compute-0 ceph-mon[74496]: pgmap v3814: 305 pgs: 305 active+clean; 422 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Jan 31 08:59:06 compute-0 nova_compute[247704]: 2026-01-31 08:59:06.930 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 388 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 31 08:59:07 compute-0 nova_compute[247704]: 2026-01-31 08:59:07.566 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:07 compute-0 ceph-mon[74496]: pgmap v3815: 305 pgs: 305 active+clean; 388 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 31 08:59:07 compute-0 nova_compute[247704]: 2026-01-31 08:59:07.832 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:07 compute-0 nova_compute[247704]: 2026-01-31 08:59:07.923 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:07.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:08.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:08 compute-0 sshd-session[401850]: Invalid user admin from 123.54.197.60 port 42366
Jan 31 08:59:08 compute-0 sshd-session[401850]: Connection closed by invalid user admin 123.54.197.60 port 42366 [preauth]
Jan 31 08:59:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:09.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:10 compute-0 sshd-session[401853]: Invalid user admin from 123.54.197.60 port 42368
Jan 31 08:59:10 compute-0 ceph-mon[74496]: pgmap v3816: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:10 compute-0 nova_compute[247704]: 2026-01-31 08:59:10.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:10 compute-0 nova_compute[247704]: 2026-01-31 08:59:10.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:59:10 compute-0 sshd-session[401853]: Connection closed by invalid user admin 123.54.197.60 port 42368 [preauth]
Jan 31 08:59:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:11.228 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:11.228 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:11.229 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:11 compute-0 ceph-mon[74496]: pgmap v3817: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:11 compute-0 sshd-session[401856]: Invalid user admin from 123.54.197.60 port 60194
Jan 31 08:59:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:11.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:12.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:12 compute-0 sshd-session[401856]: Connection closed by invalid user admin 123.54.197.60 port 60194 [preauth]
Jan 31 08:59:12 compute-0 nova_compute[247704]: 2026-01-31 08:59:12.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:12 compute-0 nova_compute[247704]: 2026-01-31 08:59:12.570 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:12 compute-0 nova_compute[247704]: 2026-01-31 08:59:12.835 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:13 compute-0 sshd-session[401859]: Invalid user admin from 123.54.197.60 port 60202
Jan 31 08:59:13 compute-0 sshd-session[401859]: Connection closed by invalid user admin 123.54.197.60 port 60202 [preauth]
Jan 31 08:59:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:13.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:14.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.117 247708 DEBUG nova.compute.manager [req-013037fb-8904-44bc-8600-64308e32cfc7 req-33f15507-1999-46bc-a13a-9023916af783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-changed-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.117 247708 DEBUG nova.compute.manager [req-013037fb-8904-44bc-8600-64308e32cfc7 req-33f15507-1999-46bc-a13a-9023916af783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing instance network info cache due to event network-changed-406c4828-51a3-4b13-8dfa-189de772181c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.118 247708 DEBUG oslo_concurrency.lockutils [req-013037fb-8904-44bc-8600-64308e32cfc7 req-33f15507-1999-46bc-a13a-9023916af783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.118 247708 DEBUG oslo_concurrency.lockutils [req-013037fb-8904-44bc-8600-64308e32cfc7 req-33f15507-1999-46bc-a13a-9023916af783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.118 247708 DEBUG nova.network.neutron [req-013037fb-8904-44bc-8600-64308e32cfc7 req-33f15507-1999-46bc-a13a-9023916af783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Refreshing network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.342 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquiring lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.343 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.343 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquiring lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.343 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.344 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.345 247708 INFO nova.compute.manager [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Terminating instance
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.346 247708 DEBUG nova.compute.manager [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 08:59:14 compute-0 ceph-mon[74496]: pgmap v3818: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 08:59:14 compute-0 kernel: tap406c4828-51 (unregistering): left promiscuous mode
Jan 31 08:59:14 compute-0 NetworkManager[49108]: <info>  [1769849954.4030] device (tap406c4828-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.455 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 ovn_controller[149457]: 2026-01-31T08:59:14Z|00872|binding|INFO|Releasing lport 406c4828-51a3-4b13-8dfa-189de772181c from this chassis (sb_readonly=0)
Jan 31 08:59:14 compute-0 ovn_controller[149457]: 2026-01-31T08:59:14Z|00873|binding|INFO|Setting lport 406c4828-51a3-4b13-8dfa-189de772181c down in Southbound
Jan 31 08:59:14 compute-0 ovn_controller[149457]: 2026-01-31T08:59:14Z|00874|binding|INFO|Removing iface tap406c4828-51 ovn-installed in OVS
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.460 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.463 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000d1.scope: Deactivated successfully.
Jan 31 08:59:14 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000d1.scope: Consumed 13.870s CPU time.
Jan 31 08:59:14 compute-0 systemd-machined[214448]: Machine qemu-91-instance-000000d1 terminated.
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.552 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:60:b6 10.100.0.12'], port_security=['fa:16:3e:57:60:b6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b5a9d1f1-65d6-4560-98fa-a2eb7858eb30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c5ee869-1ab4-41f3-b296-34f3ca0f4177', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e458566e0de24b2fb797037d94d9014c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8f6ec03-7c86-4716-8c55-9383e1499cf3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a434d06e-fe73-474d-9955-8520021405a6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=406c4828-51a3-4b13-8dfa-189de772181c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.553 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 406c4828-51a3-4b13-8dfa-189de772181c in datapath 7c5ee869-1ab4-41f3-b296-34f3ca0f4177 unbound from our chassis
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.555 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c5ee869-1ab4-41f3-b296-34f3ca0f4177, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.556 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5662e007-83a2-46ca-9ad7-1ef6063bd1aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.557 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177 namespace which is not needed anymore
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.569 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.571 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.587 247708 INFO nova.virt.libvirt.driver [-] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Instance destroyed successfully.
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.588 247708 DEBUG nova.objects.instance [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lazy-loading 'resources' on Instance uuid b5a9d1f1-65d6-4560-98fa-a2eb7858eb30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 08:59:14 compute-0 neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177[400880]: [NOTICE]   (400884) : haproxy version is 2.8.14-c23fe91
Jan 31 08:59:14 compute-0 neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177[400880]: [NOTICE]   (400884) : path to executable is /usr/sbin/haproxy
Jan 31 08:59:14 compute-0 neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177[400880]: [WARNING]  (400884) : Exiting Master process...
Jan 31 08:59:14 compute-0 neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177[400880]: [ALERT]    (400884) : Current worker (400886) exited with code 143 (Terminated)
Jan 31 08:59:14 compute-0 neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177[400880]: [WARNING]  (400884) : All workers exited. Exiting... (0)
Jan 31 08:59:14 compute-0 systemd[1]: libpod-406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19.scope: Deactivated successfully.
Jan 31 08:59:14 compute-0 conmon[400880]: conmon 406327c3a4c2931fcbec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19.scope/container/memory.events
Jan 31 08:59:14 compute-0 podman[401899]: 2026-01-31 08:59:14.671513203 +0000 UTC m=+0.038194040 container died 406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19-userdata-shm.mount: Deactivated successfully.
Jan 31 08:59:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-372dbb4f573351d93dfe2822925568b31f9de152753df983cd16777acece9abc-merged.mount: Deactivated successfully.
Jan 31 08:59:14 compute-0 podman[401899]: 2026-01-31 08:59:14.704860714 +0000 UTC m=+0.071541531 container cleanup 406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:59:14 compute-0 systemd[1]: libpod-conmon-406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19.scope: Deactivated successfully.
Jan 31 08:59:14 compute-0 podman[401928]: 2026-01-31 08:59:14.760312512 +0000 UTC m=+0.040993648 container remove 406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.764 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9cab30-97ae-4016-ad64-d3b9eca81be1]: (4, ('Sat Jan 31 08:59:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177 (406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19)\n406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19\nSat Jan 31 08:59:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177 (406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19)\n406327c3a4c2931fcbecb941da45a5dc59a137689cb7ffa5262ab22b852f5d19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.766 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4eef86-e786-4c6a-9a5b-cb3e2c8d8ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.767 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c5ee869-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.768 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 kernel: tap7c5ee869-10: left promiscuous mode
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.777 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.780 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[61877309-f1e1-4e22-bc3e-88899c8973fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.796 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac40271-9e65-41da-a4da-5d0ddcafee6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.797 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[026233af-0cf4-468d-9f62-74ebc8e3dd6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.807 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[582f6d9d-798a-4e40-8755-c4e5a3f2f7f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1017151, 'reachable_time': 33049, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401947, 'error': None, 'target': 'ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.809 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c5ee869-1ab4-41f3-b296-34f3ca0f4177 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 08:59:14 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:14.809 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6220c9-37b2-47e7-b66e-0b19f4bfa077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:59:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c5ee869\x2d1ab4\x2d41f3\x2db296\x2d34f3ca0f4177.mount: Deactivated successfully.
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.882 247708 DEBUG nova.virt.libvirt.vif [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:58:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-771034180',display_name='tempest-TestVolumeBackupRestore-server-771034180',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-771034180',id=209,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSXWw6wYpaSHBo9ftzyLl5wm30t8j0U4CUEEnCFBkUAB1sIpDz9xWTf3w+oVpHCAnveGlte5XYp1YwuD2KjE34NLwsu/ET1dovu4bvOMwBBQ2NeXVxqzGRj8Sl1giWY/A==',key_name='tempest-TestVolumeBackupRestore-1915864449',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:58:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e458566e0de24b2fb797037d94d9014c',ramdisk_id='',reservation_id='r-iqep79g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1557060049',owner_user_name='tempest-TestVolumeBackupRestore-1557060049-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:58:49Z,user_data=None,user_id='fe599a5134944b9fbf952e83fdf41c55',uuid=b5a9d1f1-65d6-4560-98fa-a2eb7858eb30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.883 247708 DEBUG nova.network.os_vif_util [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Converting VIF {"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.884 247708 DEBUG nova.network.os_vif_util [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:60:b6,bridge_name='br-int',has_traffic_filtering=True,id=406c4828-51a3-4b13-8dfa-189de772181c,network=Network(7c5ee869-1ab4-41f3-b296-34f3ca0f4177),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406c4828-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.885 247708 DEBUG os_vif [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:60:b6,bridge_name='br-int',has_traffic_filtering=True,id=406c4828-51a3-4b13-8dfa-189de772181c,network=Network(7c5ee869-1ab4-41f3-b296-34f3ca0f4177),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406c4828-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.887 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.888 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap406c4828-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.892 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.896 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 08:59:14 compute-0 nova_compute[247704]: 2026-01-31 08:59:14.898 247708 INFO os_vif [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:60:b6,bridge_name='br-int',has_traffic_filtering=True,id=406c4828-51a3-4b13-8dfa-189de772181c,network=Network(7c5ee869-1ab4-41f3-b296-34f3ca0f4177),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap406c4828-51')
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.141 247708 INFO nova.virt.libvirt.driver [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Deleting instance files /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30_del
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.142 247708 INFO nova.virt.libvirt.driver [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Deletion of /var/lib/nova/instances/b5a9d1f1-65d6-4560-98fa-a2eb7858eb30_del complete
Jan 31 08:59:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:15 compute-0 sshd-session[401861]: Invalid user admin from 123.54.197.60 port 60210
Jan 31 08:59:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 220 KiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.550 247708 INFO nova.compute.manager [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Took 1.20 seconds to destroy the instance on the hypervisor.
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.550 247708 DEBUG oslo.service.loopingcall [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.551 247708 DEBUG nova.compute.manager [-] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.551 247708 DEBUG nova.network.neutron [-] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:59:15 compute-0 sshd-session[401861]: Connection closed by invalid user admin 123.54.197.60 port 60210 [preauth]
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.980 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 31 08:59:15 compute-0 nova_compute[247704]: 2026-01-31 08:59:15.980 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:59:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:16.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:16.002 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:59:16 compute-0 nova_compute[247704]: 2026-01-31 08:59:16.002 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:16.004 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:59:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:16 compute-0 nova_compute[247704]: 2026-01-31 08:59:16.383 247708 DEBUG nova.network.neutron [req-013037fb-8904-44bc-8600-64308e32cfc7 req-33f15507-1999-46bc-a13a-9023916af783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updated VIF entry in instance network info cache for port 406c4828-51a3-4b13-8dfa-189de772181c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 08:59:16 compute-0 nova_compute[247704]: 2026-01-31 08:59:16.384 247708 DEBUG nova.network.neutron [req-013037fb-8904-44bc-8600-64308e32cfc7 req-33f15507-1999-46bc-a13a-9023916af783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating instance_info_cache with network_info: [{"id": "406c4828-51a3-4b13-8dfa-189de772181c", "address": "fa:16:3e:57:60:b6", "network": {"id": "7c5ee869-1ab4-41f3-b296-34f3ca0f4177", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-82909361-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e458566e0de24b2fb797037d94d9014c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap406c4828-51", "ovs_interfaceid": "406c4828-51a3-4b13-8dfa-189de772181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:59:16 compute-0 ceph-mon[74496]: pgmap v3819: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 220 KiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 31 08:59:16 compute-0 nova_compute[247704]: 2026-01-31 08:59:16.671 247708 DEBUG oslo_concurrency.lockutils [req-013037fb-8904-44bc-8600-64308e32cfc7 req-33f15507-1999-46bc-a13a-9023916af783 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:59:16 compute-0 nova_compute[247704]: 2026-01-31 08:59:16.985 247708 DEBUG nova.network.neutron [-] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.052 247708 DEBUG nova.compute.manager [req-022de6c6-61ef-4df6-a7ab-72c03e4cd35a req-8381f0b5-886b-4daf-b45f-65670c9342a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-vif-deleted-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.053 247708 INFO nova.compute.manager [req-022de6c6-61ef-4df6-a7ab-72c03e4cd35a req-8381f0b5-886b-4daf-b45f-65670c9342a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Neutron deleted interface 406c4828-51a3-4b13-8dfa-189de772181c; detaching it from the instance and deleting it from the info cache
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.053 247708 DEBUG nova.network.neutron [req-022de6c6-61ef-4df6-a7ab-72c03e4cd35a req-8381f0b5-886b-4daf-b45f-65670c9342a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.107 247708 INFO nova.compute.manager [-] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Took 1.56 seconds to deallocate network for instance.
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.175 247708 DEBUG nova.compute.manager [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-vif-unplugged-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.175 247708 DEBUG oslo_concurrency.lockutils [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.176 247708 DEBUG oslo_concurrency.lockutils [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.176 247708 DEBUG oslo_concurrency.lockutils [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.176 247708 DEBUG nova.compute.manager [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] No waiting events found dispatching network-vif-unplugged-406c4828-51a3-4b13-8dfa-189de772181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.176 247708 DEBUG nova.compute.manager [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-vif-unplugged-406c4828-51a3-4b13-8dfa-189de772181c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.177 247708 DEBUG nova.compute.manager [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received event network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.177 247708 DEBUG oslo_concurrency.lockutils [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.177 247708 DEBUG oslo_concurrency.lockutils [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.178 247708 DEBUG oslo_concurrency.lockutils [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.178 247708 DEBUG nova.compute.manager [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] No waiting events found dispatching network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.178 247708 WARNING nova.compute.manager [req-6a58f819-774c-4148-a122-0e024f356ae3 req-444ebcfd-ee73-428c-b8bc-3f32ff95e657 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Received unexpected event network-vif-plugged-406c4828-51a3-4b13-8dfa-189de772181c for instance with vm_state active and task_state deleting.
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.192 247708 DEBUG nova.compute.manager [req-022de6c6-61ef-4df6-a7ab-72c03e4cd35a req-8381f0b5-886b-4daf-b45f-65670c9342a7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Detach interface failed, port_id=406c4828-51a3-4b13-8dfa-189de772181c, reason: Instance b5a9d1f1-65d6-4560-98fa-a2eb7858eb30 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 08:59:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 290 KiB/s wr, 39 op/s
Jan 31 08:59:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1437794474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:17 compute-0 sshd-session[401967]: Invalid user admin from 123.54.197.60 port 60222
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.572 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:17 compute-0 nova_compute[247704]: 2026-01-31 08:59:17.661 247708 INFO nova.compute.manager [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Took 0.55 seconds to detach 1 volumes for instance.
Jan 31 08:59:17 compute-0 sshd-session[401967]: Connection closed by invalid user admin 123.54.197.60 port 60222 [preauth]
Jan 31 08:59:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:18.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:18.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:18 compute-0 nova_compute[247704]: 2026-01-31 08:59:18.075 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:18 compute-0 nova_compute[247704]: 2026-01-31 08:59:18.076 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:18 compute-0 nova_compute[247704]: 2026-01-31 08:59:18.128 247708 DEBUG oslo_concurrency.processutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:59:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:59:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772980989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:18 compute-0 ceph-mon[74496]: pgmap v3820: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 68 KiB/s rd, 290 KiB/s wr, 39 op/s
Jan 31 08:59:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/929412592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1772980989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:18 compute-0 nova_compute[247704]: 2026-01-31 08:59:18.565 247708 DEBUG oslo_concurrency.processutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:59:18 compute-0 nova_compute[247704]: 2026-01-31 08:59:18.573 247708 DEBUG nova.compute.provider_tree [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:59:18 compute-0 nova_compute[247704]: 2026-01-31 08:59:18.807 247708 DEBUG nova.scheduler.client.report [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:59:19 compute-0 nova_compute[247704]: 2026-01-31 08:59:19.095 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:19 compute-0 sshd-session[401970]: Invalid user admin from 123.54.197.60 port 60230
Jan 31 08:59:19 compute-0 podman[401995]: 2026-01-31 08:59:19.24696046 +0000 UTC m=+0.057952441 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 08:59:19 compute-0 nova_compute[247704]: 2026-01-31 08:59:19.308 247708 INFO nova.scheduler.client.report [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Deleted allocations for instance b5a9d1f1-65d6-4560-98fa-a2eb7858eb30
Jan 31 08:59:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 88 KiB/s wr, 21 op/s
Jan 31 08:59:19 compute-0 sshd-session[401970]: Connection closed by invalid user admin 123.54.197.60 port 60230 [preauth]
Jan 31 08:59:19 compute-0 ceph-mon[74496]: pgmap v3821: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 88 KiB/s wr, 21 op/s
Jan 31 08:59:19 compute-0 nova_compute[247704]: 2026-01-31 08:59:19.927 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:19 compute-0 nova_compute[247704]: 2026-01-31 08:59:19.998 247708 DEBUG oslo_concurrency.lockutils [None req-5647334f-ca3b-494b-8e26-a685ffdca079 fe599a5134944b9fbf952e83fdf41c55 e458566e0de24b2fb797037d94d9014c - - default default] Lock "b5a9d1f1-65d6-4560-98fa-a2eb7858eb30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:20.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:20.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:59:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_08:59:20
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 08:59:20 compute-0 sshd-session[402015]: Invalid user admin from 123.54.197.60 port 42888
Jan 31 08:59:20 compute-0 sshd-session[402015]: Connection closed by invalid user admin 123.54.197.60 port 42888 [preauth]
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:59:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:59:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 317 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Jan 31 08:59:21 compute-0 nova_compute[247704]: 2026-01-31 08:59:21.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:22 compute-0 ovn_metadata_agent[160021]: 2026-01-31 08:59:22.007 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:59:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:22.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:22.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.100 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.100 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.101 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.101 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.101 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:59:22 compute-0 sshd-session[402018]: Invalid user admin from 123.54.197.60 port 42890
Jan 31 08:59:22 compute-0 ceph-mon[74496]: pgmap v3822: 305 pgs: 305 active+clean; 317 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 30 op/s
Jan 31 08:59:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/419757811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:22 compute-0 sshd-session[402018]: Connection closed by invalid user admin 123.54.197.60 port 42890 [preauth]
Jan 31 08:59:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:59:22 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/525278072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.519 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.574 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.669 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.671 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4204MB free_disk=20.9776611328125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.671 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:59:22 compute-0 nova_compute[247704]: 2026-01-31 08:59:22.671 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:59:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 41 op/s
Jan 31 08:59:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/525278072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:23 compute-0 nova_compute[247704]: 2026-01-31 08:59:23.826 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:59:23 compute-0 nova_compute[247704]: 2026-01-31 08:59:23.828 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:59:23 compute-0 sshd-session[402044]: Invalid user admin from 123.54.197.60 port 42906
Jan 31 08:59:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:24.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:24.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:24 compute-0 sshd-session[402044]: Connection closed by invalid user admin 123.54.197.60 port 42906 [preauth]
Jan 31 08:59:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Jan 31 08:59:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Jan 31 08:59:24 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Jan 31 08:59:24 compute-0 ceph-mon[74496]: pgmap v3823: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 13 KiB/s wr, 41 op/s
Jan 31 08:59:24 compute-0 sudo[402049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:24 compute-0 sudo[402049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:24 compute-0 sudo[402049]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:24 compute-0 sudo[402074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:24 compute-0 sudo[402074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:24 compute-0 sudo[402074]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:24 compute-0 nova_compute[247704]: 2026-01-31 08:59:24.930 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:25 compute-0 nova_compute[247704]: 2026-01-31 08:59:25.150 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:59:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 2.6 KiB/s wr, 67 op/s
Jan 31 08:59:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 08:59:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1769649985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:25 compute-0 nova_compute[247704]: 2026-01-31 08:59:25.654 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:59:25 compute-0 nova_compute[247704]: 2026-01-31 08:59:25.661 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:59:25 compute-0 ceph-mon[74496]: osdmap e411: 3 total, 3 up, 3 in
Jan 31 08:59:25 compute-0 nova_compute[247704]: 2026-01-31 08:59:25.884 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:59:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:26.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:59:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:26.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:59:26 compute-0 nova_compute[247704]: 2026-01-31 08:59:26.061 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:59:26 compute-0 nova_compute[247704]: 2026-01-31 08:59:26.062 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:59:26 compute-0 sshd-session[402046]: Invalid user admin from 123.54.197.60 port 42920
Jan 31 08:59:26 compute-0 sshd-session[402046]: Connection closed by invalid user admin 123.54.197.60 port 42920 [preauth]
Jan 31 08:59:26 compute-0 ceph-mon[74496]: pgmap v3825: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 2.6 KiB/s wr, 67 op/s
Jan 31 08:59:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1769649985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1659983838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Jan 31 08:59:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Jan 31 08:59:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Jan 31 08:59:27 compute-0 nova_compute[247704]: 2026-01-31 08:59:27.062 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:27 compute-0 nova_compute[247704]: 2026-01-31 08:59:27.063 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:27 compute-0 nova_compute[247704]: 2026-01-31 08:59:27.063 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 2.7 KiB/s wr, 69 op/s
Jan 31 08:59:27 compute-0 nova_compute[247704]: 2026-01-31 08:59:27.577 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/168774946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/168774946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:27 compute-0 ceph-mon[74496]: osdmap e412: 3 total, 3 up, 3 in
Jan 31 08:59:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2743009941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 08:59:27 compute-0 ceph-mon[74496]: pgmap v3827: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 2.7 KiB/s wr, 69 op/s
Jan 31 08:59:27 compute-0 sshd-session[402122]: Invalid user admin from 123.54.197.60 port 42926
Jan 31 08:59:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Jan 31 08:59:27 compute-0 sshd-session[402122]: Connection closed by invalid user admin 123.54.197.60 port 42926 [preauth]
Jan 31 08:59:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Jan 31 08:59:28 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Jan 31 08:59:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:59:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:28.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:59:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:28.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:29 compute-0 ceph-mon[74496]: osdmap e413: 3 total, 3 up, 3 in
Jan 31 08:59:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2432836408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2432836408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:29 compute-0 sshd-session[402125]: Invalid user admin from 123.54.197.60 port 42930
Jan 31 08:59:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 238 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 2.8 KiB/s wr, 73 op/s
Jan 31 08:59:29 compute-0 sshd-session[402125]: Connection closed by invalid user admin 123.54.197.60 port 42930 [preauth]
Jan 31 08:59:29 compute-0 nova_compute[247704]: 2026-01-31 08:59:29.586 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849954.584936, b5a9d1f1-65d6-4560-98fa-a2eb7858eb30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 08:59:29 compute-0 nova_compute[247704]: 2026-01-31 08:59:29.587 247708 INFO nova.compute.manager [-] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] VM Stopped (Lifecycle Event)
Jan 31 08:59:29 compute-0 nova_compute[247704]: 2026-01-31 08:59:29.674 247708 DEBUG nova.compute.manager [None req-eeeae58d-7c82-4264-9745-301cbecbade6 - - - - - -] [instance: b5a9d1f1-65d6-4560-98fa-a2eb7858eb30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 08:59:29 compute-0 nova_compute[247704]: 2026-01-31 08:59:29.932 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:30 compute-0 ceph-mon[74496]: pgmap v3829: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 238 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 2.8 KiB/s wr, 73 op/s
Jan 31 08:59:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:30.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:30.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 148 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 5.3 KiB/s wr, 110 op/s
Jan 31 08:59:31 compute-0 nova_compute[247704]: 2026-01-31 08:59:31.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:59:31 compute-0 sshd-session[402128]: Invalid user admin from 123.54.197.60 port 43382
Jan 31 08:59:31 compute-0 sshd-session[402128]: Connection closed by invalid user admin 123.54.197.60 port 43382 [preauth]
Jan 31 08:59:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:32.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:32.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:32 compute-0 ceph-mon[74496]: pgmap v3830: 305 pgs: 305 active+clean; 148 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 5.3 KiB/s wr, 110 op/s
Jan 31 08:59:32 compute-0 nova_compute[247704]: 2026-01-31 08:59:32.579 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 3.9 KiB/s wr, 71 op/s
Jan 31 08:59:33 compute-0 sshd-session[402131]: Invalid user admin from 123.54.197.60 port 43394
Jan 31 08:59:33 compute-0 sshd-session[402131]: Connection closed by invalid user admin 123.54.197.60 port 43394 [preauth]
Jan 31 08:59:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:34.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:34.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:34 compute-0 ceph-mon[74496]: pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 3.9 KiB/s wr, 71 op/s
Jan 31 08:59:34 compute-0 podman[402137]: 2026-01-31 08:59:34.931311871 +0000 UTC m=+0.107434404 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 31 08:59:34 compute-0 nova_compute[247704]: 2026-01-31 08:59:34.934 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:34 compute-0 sshd-session[402134]: Invalid user admin from 123.54.197.60 port 43406
Jan 31 08:59:35 compute-0 sshd-session[402134]: Connection closed by invalid user admin 123.54.197.60 port 43406 [preauth]
Jan 31 08:59:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Jan 31 08:59:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Jan 31 08:59:35 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Jan 31 08:59:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 68 op/s
Jan 31 08:59:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:59:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:36.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:59:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:36.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 08:59:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 08:59:36 compute-0 ceph-mon[74496]: osdmap e414: 3 total, 3 up, 3 in
Jan 31 08:59:36 compute-0 ceph-mon[74496]: pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 68 op/s
Jan 31 08:59:36 compute-0 sshd-session[402163]: Invalid user admin from 123.54.197.60 port 43420
Jan 31 08:59:36 compute-0 sshd-session[402163]: Connection closed by invalid user admin 123.54.197.60 port 43420 [preauth]
Jan 31 08:59:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 2.0 KiB/s wr, 34 op/s
Jan 31 08:59:37 compute-0 nova_compute[247704]: 2026-01-31 08:59:37.582 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:38.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:38.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:38 compute-0 ceph-mon[74496]: pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 2.0 KiB/s wr, 34 op/s
Jan 31 08:59:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 31 08:59:39 compute-0 nova_compute[247704]: 2026-01-31 08:59:39.936 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:40.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:40.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:40 compute-0 ceph-mon[74496]: pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 31 08:59:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 08:59:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:42.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:42.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:42 compute-0 ceph-mon[74496]: pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 08:59:42 compute-0 nova_compute[247704]: 2026-01-31 08:59:42.585 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:43 compute-0 nova_compute[247704]: 2026-01-31 08:59:43.314 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:43 compute-0 nova_compute[247704]: 2026-01-31 08:59:43.359 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:44 compute-0 ceph-mon[74496]: pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:44 compute-0 nova_compute[247704]: 2026-01-31 08:59:44.938 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:44 compute-0 sudo[402171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:44 compute-0 sudo[402171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:44 compute-0 sudo[402171]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:44 compute-0 sudo[402196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 08:59:44 compute-0 sudo[402196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:59:44 compute-0 sudo[402196]: pam_unix(sudo:session): session closed for user root
Jan 31 08:59:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:46.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:46 compute-0 ceph-mon[74496]: pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:47 compute-0 nova_compute[247704]: 2026-01-31 08:59:47.587 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:48.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:48 compute-0 ceph-mon[74496]: pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:49 compute-0 podman[402223]: 2026-01-31 08:59:49.865163083 +0000 UTC m=+0.043248523 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:59:49 compute-0 nova_compute[247704]: 2026-01-31 08:59:49.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:59:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:50.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:59:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:50.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:59:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:59:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:50 compute-0 ceph-mon[74496]: pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:52.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:52.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:52 compute-0 ceph-mon[74496]: pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:52 compute-0 nova_compute[247704]: 2026-01-31 08:59:52.587 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3594944823' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 08:59:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3594944823' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 08:59:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 08:59:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:54.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 08:59:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:54.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:54 compute-0 ceph-mon[74496]: pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:54 compute-0 nova_compute[247704]: 2026-01-31 08:59:54.942 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:59:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:55 compute-0 ceph-mon[74496]: pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:56.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:57 compute-0 nova_compute[247704]: 2026-01-31 08:59:57.590 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 08:59:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 08:59:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:58.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 08:59:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 08:59:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 08:59:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:58.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 08:59:58 compute-0 ceph-mon[74496]: pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 08:59:59 compute-0 nova_compute[247704]: 2026-01-31 08:59:59.944 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 09:00:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:00.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:00 compute-0 ceph-mon[74496]: pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 09:00:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:02.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:02.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:02 compute-0 sudo[402250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:02 compute-0 sudo[402250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-0 sudo[402250]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:02 compute-0 sudo[402275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:00:02 compute-0 sudo[402275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-0 sudo[402275]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:02 compute-0 ceph-mon[74496]: pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:02 compute-0 sudo[402301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:02 compute-0 sudo[402301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-0 sudo[402301]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:02 compute-0 sudo[402326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:00:02 compute-0 sudo[402326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-0 nova_compute[247704]: 2026-01-31 09:00:02.591 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:02 compute-0 sudo[402326]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:00:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:00:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:00:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:00:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:00:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1bac5d28-0d95-4bb8-8645-82a44fcb641c does not exist
Jan 31 09:00:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 54e5125a-a7f6-49ca-b9a8-84f6cc1dbed5 does not exist
Jan 31 09:00:02 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 23819f16-c754-4536-84c7-690ee977ab4a does not exist
Jan 31 09:00:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:00:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:00:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:00:02 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:00:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:00:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:00:02 compute-0 sudo[402382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:02 compute-0 sudo[402382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:02 compute-0 sudo[402382]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:03 compute-0 sudo[402407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:00:03 compute-0 sudo[402407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:03 compute-0 sudo[402407]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:03 compute-0 sudo[402432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:03 compute-0 sudo[402432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:03 compute-0 sudo[402432]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:03 compute-0 sudo[402457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:00:03 compute-0 sudo[402457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:03 compute-0 podman[402521]: 2026-01-31 09:00:03.431145304 +0000 UTC m=+0.039210395 container create 329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:00:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:00:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:00:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:00:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:00:03 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:00:03 compute-0 systemd[1]: Started libpod-conmon-329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a.scope.
Jan 31 09:00:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:00:03 compute-0 podman[402521]: 2026-01-31 09:00:03.412548331 +0000 UTC m=+0.020613432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:00:03 compute-0 podman[402521]: 2026-01-31 09:00:03.513619319 +0000 UTC m=+0.121684430 container init 329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:00:03 compute-0 podman[402521]: 2026-01-31 09:00:03.519502862 +0000 UTC m=+0.127567943 container start 329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_neumann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 09:00:03 compute-0 podman[402521]: 2026-01-31 09:00:03.523916129 +0000 UTC m=+0.131981230 container attach 329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:00:03 compute-0 gifted_neumann[402537]: 167 167
Jan 31 09:00:03 compute-0 systemd[1]: libpod-329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a.scope: Deactivated successfully.
Jan 31 09:00:03 compute-0 podman[402521]: 2026-01-31 09:00:03.525189401 +0000 UTC m=+0.133254482 container died 329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 09:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ab94b70241d603420833cb2da065642d9a070d8973460136eedd29c03831ddb-merged.mount: Deactivated successfully.
Jan 31 09:00:03 compute-0 nova_compute[247704]: 2026-01-31 09:00:03.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:03 compute-0 podman[402521]: 2026-01-31 09:00:03.568448613 +0000 UTC m=+0.176513694 container remove 329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_neumann, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 09:00:03 compute-0 systemd[1]: libpod-conmon-329db22ca3a0ebc26889591b2cae99a3d611d08bbc4b77714b4e1e4e9558475a.scope: Deactivated successfully.
Jan 31 09:00:03 compute-0 podman[402561]: 2026-01-31 09:00:03.689502866 +0000 UTC m=+0.038559938 container create 5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:00:03 compute-0 systemd[1]: Started libpod-conmon-5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7.scope.
Jan 31 09:00:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce37a3ec2b883e5d914e4a76a697ad8885a52f64c55eb3a7ff57b6bff2452b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce37a3ec2b883e5d914e4a76a697ad8885a52f64c55eb3a7ff57b6bff2452b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce37a3ec2b883e5d914e4a76a697ad8885a52f64c55eb3a7ff57b6bff2452b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce37a3ec2b883e5d914e4a76a697ad8885a52f64c55eb3a7ff57b6bff2452b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce37a3ec2b883e5d914e4a76a697ad8885a52f64c55eb3a7ff57b6bff2452b49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:03 compute-0 podman[402561]: 2026-01-31 09:00:03.672914052 +0000 UTC m=+0.021971144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:00:03 compute-0 podman[402561]: 2026-01-31 09:00:03.76781122 +0000 UTC m=+0.116868312 container init 5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 09:00:03 compute-0 podman[402561]: 2026-01-31 09:00:03.77724169 +0000 UTC m=+0.126298762 container start 5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:00:03 compute-0 podman[402561]: 2026-01-31 09:00:03.781298688 +0000 UTC m=+0.130355780 container attach 5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:00:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:04.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:04.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:04 compute-0 ceph-mon[74496]: pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:04 compute-0 agitated_allen[402578]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:00:04 compute-0 agitated_allen[402578]: --> relative data size: 1.0
Jan 31 09:00:04 compute-0 agitated_allen[402578]: --> All data devices are unavailable
Jan 31 09:00:04 compute-0 systemd[1]: libpod-5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7.scope: Deactivated successfully.
Jan 31 09:00:04 compute-0 podman[402561]: 2026-01-31 09:00:04.540304715 +0000 UTC m=+0.889361797 container died 5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 09:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce37a3ec2b883e5d914e4a76a697ad8885a52f64c55eb3a7ff57b6bff2452b49-merged.mount: Deactivated successfully.
Jan 31 09:00:04 compute-0 podman[402561]: 2026-01-31 09:00:04.588716161 +0000 UTC m=+0.937773223 container remove 5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:00:04 compute-0 systemd[1]: libpod-conmon-5b001c2284667b79e23520cbaad81cde8f6b6074f81c478e33341f33cdcf45e7.scope: Deactivated successfully.
Jan 31 09:00:04 compute-0 sudo[402457]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:04 compute-0 sudo[402605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:04 compute-0 sudo[402605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:04 compute-0 sudo[402605]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:04 compute-0 sudo[402630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:00:04 compute-0 sudo[402630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:04 compute-0 sudo[402630]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:04 compute-0 sudo[402655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:04 compute-0 sudo[402655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:04 compute-0 sudo[402655]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:04 compute-0 sudo[402680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:00:04 compute-0 sudo[402680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:04 compute-0 nova_compute[247704]: 2026-01-31 09:00:04.946 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:05 compute-0 sudo[402732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:05 compute-0 sudo[402732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:05 compute-0 sudo[402732]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:05 compute-0 podman[402771]: 2026-01-31 09:00:05.116948106 +0000 UTC m=+0.045662371 container create bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:00:05 compute-0 sudo[402785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:05 compute-0 sudo[402785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:05 compute-0 sudo[402785]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:05 compute-0 systemd[1]: Started libpod-conmon-bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4.scope.
Jan 31 09:00:05 compute-0 podman[402770]: 2026-01-31 09:00:05.157417101 +0000 UTC m=+0.083395170 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Jan 31 09:00:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:00:05 compute-0 podman[402771]: 2026-01-31 09:00:05.182695035 +0000 UTC m=+0.111409300 container init bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:00:05 compute-0 podman[402771]: 2026-01-31 09:00:05.191268003 +0000 UTC m=+0.119982268 container start bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 09:00:05 compute-0 podman[402771]: 2026-01-31 09:00:05.19483653 +0000 UTC m=+0.123550815 container attach bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 09:00:05 compute-0 gracious_feistel[402834]: 167 167
Jan 31 09:00:05 compute-0 systemd[1]: libpod-bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4.scope: Deactivated successfully.
Jan 31 09:00:05 compute-0 podman[402771]: 2026-01-31 09:00:05.103465588 +0000 UTC m=+0.032179873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:00:05 compute-0 podman[402771]: 2026-01-31 09:00:05.199019541 +0000 UTC m=+0.127733846 container died bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:00:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e31dda721f1b79b7c1ae1a8d20d52e8034773b898debba7a6592640fc36a7fb-merged.mount: Deactivated successfully.
Jan 31 09:00:05 compute-0 podman[402771]: 2026-01-31 09:00:05.237445626 +0000 UTC m=+0.166159891 container remove bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 09:00:05 compute-0 systemd[1]: libpod-conmon-bc35152e08af47297cff230d82d4878d505c0e560ba9c938c827df291d0f62a4.scope: Deactivated successfully.
Jan 31 09:00:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:05 compute-0 podman[402861]: 2026-01-31 09:00:05.404616431 +0000 UTC m=+0.060399370 container create cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_feistel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 09:00:05 compute-0 systemd[1]: Started libpod-conmon-cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814.scope.
Jan 31 09:00:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2acd0e87907fa469d602537048bfe892a8aadb239c4ab9bf1377144918b02d74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2acd0e87907fa469d602537048bfe892a8aadb239c4ab9bf1377144918b02d74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2acd0e87907fa469d602537048bfe892a8aadb239c4ab9bf1377144918b02d74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2acd0e87907fa469d602537048bfe892a8aadb239c4ab9bf1377144918b02d74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:05 compute-0 podman[402861]: 2026-01-31 09:00:05.474856079 +0000 UTC m=+0.130639048 container init cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:00:05 compute-0 podman[402861]: 2026-01-31 09:00:05.385283931 +0000 UTC m=+0.041066890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:00:05 compute-0 podman[402861]: 2026-01-31 09:00:05.480603678 +0000 UTC m=+0.136386637 container start cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_feistel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 09:00:05 compute-0 podman[402861]: 2026-01-31 09:00:05.484020842 +0000 UTC m=+0.139803811 container attach cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 09:00:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:06.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:06.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:06 compute-0 sweet_feistel[402877]: {
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:     "0": [
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:         {
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "devices": [
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "/dev/loop3"
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             ],
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "lv_name": "ceph_lv0",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "lv_size": "7511998464",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "name": "ceph_lv0",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "tags": {
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.cluster_name": "ceph",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.crush_device_class": "",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.encrypted": "0",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.osd_id": "0",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.type": "block",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:                 "ceph.vdo": "0"
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             },
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "type": "block",
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:             "vg_name": "ceph_vg0"
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:         }
Jan 31 09:00:06 compute-0 sweet_feistel[402877]:     ]
Jan 31 09:00:06 compute-0 sweet_feistel[402877]: }
Jan 31 09:00:06 compute-0 systemd[1]: libpod-cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814.scope: Deactivated successfully.
Jan 31 09:00:06 compute-0 podman[402861]: 2026-01-31 09:00:06.320773939 +0000 UTC m=+0.976556898 container died cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2acd0e87907fa469d602537048bfe892a8aadb239c4ab9bf1377144918b02d74-merged.mount: Deactivated successfully.
Jan 31 09:00:06 compute-0 ceph-mon[74496]: pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:06 compute-0 podman[402861]: 2026-01-31 09:00:06.529017882 +0000 UTC m=+1.184800821 container remove cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:00:06 compute-0 sudo[402680]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:06 compute-0 systemd[1]: libpod-conmon-cbd859cf832ca9549d61858de43351cafe8b48505b63381423002cc1e63ad814.scope: Deactivated successfully.
Jan 31 09:00:06 compute-0 sudo[402899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:06 compute-0 sudo[402899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:06 compute-0 sudo[402899]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:06 compute-0 sudo[402926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:00:06 compute-0 sudo[402926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:06 compute-0 sudo[402926]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:06 compute-0 sudo[402951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:06 compute-0 sudo[402951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:06 compute-0 sudo[402951]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:06 compute-0 sudo[402976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:00:06 compute-0 sudo[402976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:07 compute-0 podman[403039]: 2026-01-31 09:00:07.170933811 +0000 UTC m=+0.041351927 container create e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lichterman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:00:07 compute-0 systemd[1]: Started libpod-conmon-e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e.scope.
Jan 31 09:00:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:00:07 compute-0 podman[403039]: 2026-01-31 09:00:07.154482881 +0000 UTC m=+0.024901027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:00:07 compute-0 podman[403039]: 2026-01-31 09:00:07.251629373 +0000 UTC m=+0.122047519 container init e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lichterman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:00:07 compute-0 podman[403039]: 2026-01-31 09:00:07.259128146 +0000 UTC m=+0.129546262 container start e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lichterman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 09:00:07 compute-0 serene_lichterman[403056]: 167 167
Jan 31 09:00:07 compute-0 podman[403039]: 2026-01-31 09:00:07.264001964 +0000 UTC m=+0.134420110 container attach e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 09:00:07 compute-0 systemd[1]: libpod-e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e.scope: Deactivated successfully.
Jan 31 09:00:07 compute-0 podman[403039]: 2026-01-31 09:00:07.265145872 +0000 UTC m=+0.135563988 container died e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-900b83aafde53e98bf1c54e3b166e4bfca8bc77b51c903cd4dad305de55ada09-merged.mount: Deactivated successfully.
Jan 31 09:00:07 compute-0 podman[403039]: 2026-01-31 09:00:07.306616191 +0000 UTC m=+0.177034307 container remove e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lichterman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 09:00:07 compute-0 systemd[1]: libpod-conmon-e4733bd39cb6d66ae21a53eb65eacc14d46dcb9c890ee35fc54304334d7fdf9e.scope: Deactivated successfully.
Jan 31 09:00:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:07 compute-0 podman[403078]: 2026-01-31 09:00:07.448639874 +0000 UTC m=+0.044593586 container create ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:00:07 compute-0 systemd[1]: Started libpod-conmon-ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303.scope.
Jan 31 09:00:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f52231de90f653152ef045d48b88c33e5ff9c7c3e8eb3a58e5ef43582a6bdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f52231de90f653152ef045d48b88c33e5ff9c7c3e8eb3a58e5ef43582a6bdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f52231de90f653152ef045d48b88c33e5ff9c7c3e8eb3a58e5ef43582a6bdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f52231de90f653152ef045d48b88c33e5ff9c7c3e8eb3a58e5ef43582a6bdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:07 compute-0 podman[403078]: 2026-01-31 09:00:07.427391077 +0000 UTC m=+0.023344779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:00:07 compute-0 podman[403078]: 2026-01-31 09:00:07.545677713 +0000 UTC m=+0.141631445 container init ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 09:00:07 compute-0 podman[403078]: 2026-01-31 09:00:07.551449973 +0000 UTC m=+0.147403685 container start ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 09:00:07 compute-0 podman[403078]: 2026-01-31 09:00:07.555922633 +0000 UTC m=+0.151876335 container attach ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 09:00:07 compute-0 nova_compute[247704]: 2026-01-31 09:00:07.596 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:08.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:08.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:08 compute-0 bold_pare[403094]: {
Jan 31 09:00:08 compute-0 bold_pare[403094]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:00:08 compute-0 bold_pare[403094]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:00:08 compute-0 bold_pare[403094]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:00:08 compute-0 bold_pare[403094]:         "osd_id": 0,
Jan 31 09:00:08 compute-0 bold_pare[403094]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:00:08 compute-0 bold_pare[403094]:         "type": "bluestore"
Jan 31 09:00:08 compute-0 bold_pare[403094]:     }
Jan 31 09:00:08 compute-0 bold_pare[403094]: }
Jan 31 09:00:08 compute-0 systemd[1]: libpod-ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303.scope: Deactivated successfully.
Jan 31 09:00:08 compute-0 podman[403078]: 2026-01-31 09:00:08.299530284 +0000 UTC m=+0.895483966 container died ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pare, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 09:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f52231de90f653152ef045d48b88c33e5ff9c7c3e8eb3a58e5ef43582a6bdf-merged.mount: Deactivated successfully.
Jan 31 09:00:08 compute-0 podman[403078]: 2026-01-31 09:00:08.35366977 +0000 UTC m=+0.949623462 container remove ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_pare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 09:00:08 compute-0 systemd[1]: libpod-conmon-ac059065fbb17cd2d4c66d70542f948ead8dc63cbf3795a663758780d501e303.scope: Deactivated successfully.
Jan 31 09:00:08 compute-0 sudo[402976]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:00:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:00:08 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f5b624ea-194b-4643-a0a1-cf68edbc1ae5 does not exist
Jan 31 09:00:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 49982b22-5b96-4cfe-b255-ffbcdda97fb3 does not exist
Jan 31 09:00:08 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 031186af-3db6-48de-9e6d-9c3791b47e00 does not exist
Jan 31 09:00:08 compute-0 sudo[403130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:08 compute-0 sudo[403130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:08 compute-0 sudo[403130]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:08 compute-0 sudo[403155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:00:08 compute-0 sudo[403155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:08 compute-0 sudo[403155]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:08 compute-0 ceph-mon[74496]: pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:08 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:00:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:09 compute-0 nova_compute[247704]: 2026-01-31 09:00:09.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:09 compute-0 nova_compute[247704]: 2026-01-31 09:00:09.982 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:10.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:10.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:10 compute-0 ceph-mon[74496]: pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:10 compute-0 nova_compute[247704]: 2026-01-31 09:00:10.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:10 compute-0 nova_compute[247704]: 2026-01-31 09:00:10.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:00:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:11.229 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:11.229 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:11.229 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:12.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:12 compute-0 sshd-session[403181]: Invalid user admin from 45.148.10.240 port 43850
Jan 31 09:00:12 compute-0 sshd-session[403181]: Connection closed by invalid user admin 45.148.10.240 port 43850 [preauth]
Jan 31 09:00:12 compute-0 ceph-mon[74496]: pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:12 compute-0 nova_compute[247704]: 2026-01-31 09:00:12.597 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:14.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:14.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:14 compute-0 ceph-mon[74496]: pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:14 compute-0 nova_compute[247704]: 2026-01-31 09:00:14.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:15 compute-0 nova_compute[247704]: 2026-01-31 09:00:15.000 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:16.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.215 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.215 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.230 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.309 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.310 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.316 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.317 247708 INFO nova.compute.claims [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Claim successful on node compute-0.ctlplane.example.com
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.464 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:16 compute-0 ceph-mon[74496]: pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:16 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3029930112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:00:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032600505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.885 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.893 247708 DEBUG nova.compute.provider_tree [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.914 247708 DEBUG nova.scheduler.client.report [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.941 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:16 compute-0 nova_compute[247704]: 2026-01-31 09:00:16.942 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.001 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.001 247708 DEBUG nova.network.neutron [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.026 247708 INFO nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.054 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.149 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.150 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.151 247708 INFO nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Creating image(s)
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.181 247708 DEBUG nova.storage.rbd_utils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.214 247708 DEBUG nova.storage.rbd_utils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.247 247708 DEBUG nova.storage.rbd_utils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.253 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.335 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.336 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "ff90c10b8251df1dd96780c3025774cae23123c6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.337 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.337 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "ff90c10b8251df1dd96780c3025774cae23123c6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.369 247708 DEBUG nova.storage.rbd_utils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.377 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.584 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.585 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.598 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/33126419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3032600505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.636 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ff90c10b8251df1dd96780c3025774cae23123c6 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.726 247708 DEBUG nova.storage.rbd_utils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] resizing rbd image 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.847 247708 DEBUG nova.objects.instance [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ab6e2fe-f122-432a-a79e-3bba6e7a8603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.879 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.879 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Ensure instance console log exists: /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.880 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.881 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:17 compute-0 nova_compute[247704]: 2026-01-31 09:00:17.881 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:18.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:18 compute-0 nova_compute[247704]: 2026-01-31 09:00:18.265 247708 DEBUG nova.policy [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4a56abd8fdd341ae88a99e102ab399de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:00:18 compute-0 ceph-mon[74496]: pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 700 KiB/s wr, 3 op/s
Jan 31 09:00:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:19.529 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:00:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:19.529 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:00:19 compute-0 nova_compute[247704]: 2026-01-31 09:00:19.530 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:19 compute-0 ceph-mon[74496]: pgmap v3855: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 700 KiB/s wr, 3 op/s
Jan 31 09:00:20 compute-0 nova_compute[247704]: 2026-01-31 09:00:20.002 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:20.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:20.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:00:20
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'volumes']
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:00:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:20 compute-0 nova_compute[247704]: 2026-01-31 09:00:20.452 247708 DEBUG nova.network.neutron [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Successfully updated port: a9477ae0-b9b1-427b-b136-9017671bc84e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:00:20 compute-0 nova_compute[247704]: 2026-01-31 09:00:20.503 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:00:20 compute-0 nova_compute[247704]: 2026-01-31 09:00:20.503 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquired lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:00:20 compute-0 nova_compute[247704]: 2026-01-31 09:00:20.504 247708 DEBUG nova.network.neutron [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:00:20 compute-0 nova_compute[247704]: 2026-01-31 09:00:20.635 247708 DEBUG nova.compute.manager [req-f48901c5-448b-4421-b8fa-8cfbe3dcc39a req-9d244139-e508-47ed-ab3f-87a02d529a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received event network-changed-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:00:20 compute-0 nova_compute[247704]: 2026-01-31 09:00:20.638 247708 DEBUG nova.compute.manager [req-f48901c5-448b-4421-b8fa-8cfbe3dcc39a req-9d244139-e508-47ed-ab3f-87a02d529a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Refreshing instance network info cache due to event network-changed-a9477ae0-b9b1-427b-b136-9017671bc84e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:00:20 compute-0 nova_compute[247704]: 2026-01-31 09:00:20.639 247708 DEBUG oslo_concurrency.lockutils [req-f48901c5-448b-4421-b8fa-8cfbe3dcc39a req-9d244139-e508-47ed-ab3f-87a02d529a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:00:20 compute-0 podman[403376]: 2026-01-31 09:00:20.899821443 +0000 UTC m=+0.058234837 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:00:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:00:21 compute-0 nova_compute[247704]: 2026-01-31 09:00:21.241 247708 DEBUG nova.network.neutron [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:00:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.8 MiB/s wr, 18 op/s
Jan 31 09:00:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:22 compute-0 ceph-mon[74496]: pgmap v3856: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.8 MiB/s wr, 18 op/s
Jan 31 09:00:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1357204511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.609 247708 DEBUG nova.network.neutron [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Updating instance_info_cache with network_info: [{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.628 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.690 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Releasing lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.691 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Instance network_info: |[{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.691 247708 DEBUG oslo_concurrency.lockutils [req-f48901c5-448b-4421-b8fa-8cfbe3dcc39a req-9d244139-e508-47ed-ab3f-87a02d529a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.692 247708 DEBUG nova.network.neutron [req-f48901c5-448b-4421-b8fa-8cfbe3dcc39a req-9d244139-e508-47ed-ab3f-87a02d529a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Refreshing network info cache for port a9477ae0-b9b1-427b-b136-9017671bc84e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.695 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Start _get_guest_xml network_info=[{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'size': 0, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'image_id': '7c23949f-bba8-4466-bb79-caf568852d38'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.700 247708 WARNING nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.705 247708 DEBUG nova.virt.libvirt.host [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.706 247708 DEBUG nova.virt.libvirt.host [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.711 247708 DEBUG nova.virt.libvirt.host [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.712 247708 DEBUG nova.virt.libvirt.host [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.713 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.713 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:29:26Z,direct_url=<?>,disk_format='qcow2',id=7c23949f-bba8-4466-bb79-caf568852d38,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f1803bf3df964a3f90dda65daa6f9a53',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:29:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.714 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.714 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.714 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.715 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.715 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.715 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.715 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.716 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.716 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.716 247708 DEBUG nova.virt.hardware [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:00:22 compute-0 nova_compute[247704]: 2026-01-31 09:00:22.719 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:00:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664484667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.214 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.255 247708 DEBUG nova.storage.rbd_utils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.260 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:00:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4078823607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/664484667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.662 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.663 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.663 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.663 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.664 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:00:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/344523482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.689 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.691 247708 DEBUG nova.virt.libvirt.vif [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1860154085',display_name='tempest-TestNetworkBasicOps-server-1860154085',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1860154085',id=210,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO+E42CVHseO528bXptBdHbFUExjfcDKkMfNAoNtITX7nGTK7linVbZnHESjjlklVt+0E1mdM5qaYAJOpVr8pbc/19fpL4o7fiV2OvqhrvnSeYsn3T7kOazkaPErbck+DQ==',key_name='tempest-TestNetworkBasicOps-33976219',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-tp8kdiz8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:00:17Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=1ab6e2fe-f122-432a-a79e-3bba6e7a8603,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.692 247708 DEBUG nova.network.os_vif_util [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.693 247708 DEBUG nova.network.os_vif_util [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.695 247708 DEBUG nova.objects.instance [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ab6e2fe-f122-432a-a79e-3bba6e7a8603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.824 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <uuid>1ab6e2fe-f122-432a-a79e-3bba6e7a8603</uuid>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <name>instance-000000d2</name>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <metadata>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <nova:name>tempest-TestNetworkBasicOps-server-1860154085</nova:name>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 09:00:22</nova:creationTime>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <nova:user uuid="4a56abd8fdd341ae88a99e102ab399de">tempest-TestNetworkBasicOps-1691550221-project-member</nova:user>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <nova:project uuid="0d55ec1a5544450dba4e4fd1426395d7">tempest-TestNetworkBasicOps-1691550221</nova:project>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <nova:root type="image" uuid="7c23949f-bba8-4466-bb79-caf568852d38"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <nova:port uuid="a9477ae0-b9b1-427b-b136-9017671bc84e">
Jan 31 09:00:23 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   </metadata>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <system>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <entry name="serial">1ab6e2fe-f122-432a-a79e-3bba6e7a8603</entry>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <entry name="uuid">1ab6e2fe-f122-432a-a79e-3bba6e7a8603</entry>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </system>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <os>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   </os>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <features>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <apic/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   </features>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   </clock>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   </cpu>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   <devices>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk">
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       </source>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk.config">
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       </source>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:00:23 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:73:d1:e5"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <target dev="tapa9477ae0-b9"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </interface>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603/console.log" append="off"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </serial>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <video>
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </video>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </rng>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 09:00:23 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 09:00:23 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 09:00:23 compute-0 nova_compute[247704]:   </devices>
Jan 31 09:00:23 compute-0 nova_compute[247704]: </domain>
Jan 31 09:00:23 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.828 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Preparing to wait for external event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.828 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.829 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.829 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.830 247708 DEBUG nova.virt.libvirt.vif [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1860154085',display_name='tempest-TestNetworkBasicOps-server-1860154085',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1860154085',id=210,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO+E42CVHseO528bXptBdHbFUExjfcDKkMfNAoNtITX7nGTK7linVbZnHESjjlklVt+0E1mdM5qaYAJOpVr8pbc/19fpL4o7fiV2OvqhrvnSeYsn3T7kOazkaPErbck+DQ==',key_name='tempest-TestNetworkBasicOps-33976219',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-tp8kdiz8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:00:17Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=1ab6e2fe-f122-432a-a79e-3bba6e7a8603,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.830 247708 DEBUG nova.network.os_vif_util [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.831 247708 DEBUG nova.network.os_vif_util [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.831 247708 DEBUG os_vif [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.832 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.832 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.833 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.837 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.837 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9477ae0-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.837 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa9477ae0-b9, col_values=(('external_ids', {'iface-id': 'a9477ae0-b9b1-427b-b136-9017671bc84e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:d1:e5', 'vm-uuid': '1ab6e2fe-f122-432a-a79e-3bba6e7a8603'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.839 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:23 compute-0 NetworkManager[49108]: <info>  [1769850023.8403] manager: (tapa9477ae0-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.841 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.847 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.848 247708 INFO os_vif [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9')
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.989 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.989 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.989 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] No VIF found with MAC fa:16:3e:73:d1:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:00:23 compute-0 nova_compute[247704]: 2026-01-31 09:00:23.990 247708 INFO nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Using config drive
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.021 247708 DEBUG nova.storage.rbd_utils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:00:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:00:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3499201930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.087 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:24.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:24.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.324 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000d2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.325 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000d2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.420 247708 INFO nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Creating config drive at /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603/disk.config
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.428 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmparve27_u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.565 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmparve27_u" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:24 compute-0 ceph-mon[74496]: pgmap v3857: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:00:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/344523482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:00:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3499201930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.601 247708 DEBUG nova.storage.rbd_utils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] rbd image 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.606 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603/disk.config 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.646 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.647 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4143MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.647 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.648 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.760 247708 DEBUG nova.network.neutron [req-f48901c5-448b-4421-b8fa-8cfbe3dcc39a req-9d244139-e508-47ed-ab3f-87a02d529a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Updated VIF entry in instance network info cache for port a9477ae0-b9b1-427b-b136-9017671bc84e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.761 247708 DEBUG nova.network.neutron [req-f48901c5-448b-4421-b8fa-8cfbe3dcc39a req-9d244139-e508-47ed-ab3f-87a02d529a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Updating instance_info_cache with network_info: [{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.867 247708 DEBUG oslo_concurrency.lockutils [req-f48901c5-448b-4421-b8fa-8cfbe3dcc39a req-9d244139-e508-47ed-ab3f-87a02d529a2c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.909 247708 DEBUG oslo_concurrency.processutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603/disk.config 1ab6e2fe-f122-432a-a79e-3bba6e7a8603_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.910 247708 INFO nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Deleting local config drive /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603/disk.config because it was imported into RBD.
Jan 31 09:00:24 compute-0 kernel: tapa9477ae0-b9: entered promiscuous mode
Jan 31 09:00:24 compute-0 NetworkManager[49108]: <info>  [1769850024.9531] manager: (tapa9477ae0-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.955 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:24 compute-0 ovn_controller[149457]: 2026-01-31T09:00:24Z|00875|binding|INFO|Claiming lport a9477ae0-b9b1-427b-b136-9017671bc84e for this chassis.
Jan 31 09:00:24 compute-0 ovn_controller[149457]: 2026-01-31T09:00:24Z|00876|binding|INFO|a9477ae0-b9b1-427b-b136-9017671bc84e: Claiming fa:16:3e:73:d1:e5 10.100.0.5
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.961 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.965 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:24 compute-0 systemd-machined[214448]: New machine qemu-92-instance-000000d2.
Jan 31 09:00:24 compute-0 ovn_controller[149457]: 2026-01-31T09:00:24Z|00877|binding|INFO|Setting lport a9477ae0-b9b1-427b-b136-9017671bc84e ovn-installed in OVS
Jan 31 09:00:24 compute-0 systemd-udevd[403553]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:00:24 compute-0 nova_compute[247704]: 2026-01-31 09:00:24.985 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:24 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-000000d2.
Jan 31 09:00:24 compute-0 NetworkManager[49108]: <info>  [1769850024.9978] device (tapa9477ae0-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:00:24 compute-0 NetworkManager[49108]: <info>  [1769850024.9994] device (tapa9477ae0-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:00:25 compute-0 ovn_controller[149457]: 2026-01-31T09:00:25Z|00878|binding|INFO|Setting lport a9477ae0-b9b1-427b-b136-9017671bc84e up in Southbound
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.002 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d1:e5 10.100.0.5'], port_security=['fa:16:3e:73:d1:e5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1274203822', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1ab6e2fe-f122-432a-a79e-3bba6e7a8603', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1274203822', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc812d40-ff23-43ac-a90f-2ee695b7bd6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07659edf-94fb-4be4-adbf-22b52c034c40, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a9477ae0-b9b1-427b-b136-9017671bc84e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.004 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a9477ae0-b9b1-427b-b136-9017671bc84e in datapath 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 bound to our chassis
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.006 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.017 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b64246-8323-4254-bed0-2ecc130ebd3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.018 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4e170bdb-61 in ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.020 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4e170bdb-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.020 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[19256129-9b71-4b1f-842d-0ef9b80f1afb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.021 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[aaead22a-270f-4923-9110-3f43a1357d2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.035 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[37332c64-a4b8-4401-86cc-ec6ba570c654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.051 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[35af267b-b954-47b3-9276-4e4016340b13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.080 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 1ab6e2fe-f122-432a-a79e-3bba6e7a8603 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.080 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.080 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.085 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[79530639-4c4b-4302-b8a1-84913500166b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 NetworkManager[49108]: <info>  [1769850025.0932] manager: (tap4e170bdb-60): new Veth device (/org/freedesktop/NetworkManager/Devices/389)
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.092 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e8cf6f-df81-4df3-ad6c-6fb4551c4d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.122 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c15a3734-abb9-45a5-a210-06300286a049]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.126 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9849aabc-2664-4b2c-9de9-6a5a6c08eb98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 NetworkManager[49108]: <info>  [1769850025.1479] device (tap4e170bdb-60): carrier: link connected
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.154 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8b949d8b-6902-43ea-9eee-11a8aacb0eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.168 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a136b-c20d-46da-844e-c29b168b4a71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e170bdb-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:fc:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1026776, 'reachable_time': 28882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403594, 'error': None, 'target': 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.189 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.192 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b5751f91-15bb-4e03-89d5-c2998e278341]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:fcfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1026776, 'tstamp': 1026776}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 403608, 'error': None, 'target': 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 sudo[403586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:25 compute-0 sudo[403586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:25 compute-0 sudo[403586]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.207 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c390b5-f58d-4757-8939-bf4a76e671d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e170bdb-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:fc:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 259], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1026776, 'reachable_time': 28882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 403612, 'error': None, 'target': 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.231 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2437d920-942c-483a-834c-12add379f499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.252 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.253 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:00:25 compute-0 sudo[403615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:25 compute-0 sudo[403615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:25 compute-0 sudo[403615]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.280 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.284 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d64f99bf-15b2-45bb-9bf2-47c0d1f0c150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.286 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e170bdb-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.286 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.287 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e170bdb-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.289 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:25 compute-0 kernel: tap4e170bdb-60: entered promiscuous mode
Jan 31 09:00:25 compute-0 NetworkManager[49108]: <info>  [1769850025.2896] manager: (tap4e170bdb-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.291 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.292 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4e170bdb-60, col_values=(('external_ids', {'iface-id': '58845f69-5ae5-46ec-8d5b-7dca32eb756b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.293 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:25 compute-0 ovn_controller[149457]: 2026-01-31T09:00:25Z|00879|binding|INFO|Releasing lport 58845f69-5ae5-46ec-8d5b-7dca32eb756b from this chassis (sb_readonly=0)
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.299 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.300 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4e170bdb-6ef8-49b3-bd1f-9130dcc7a216.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4e170bdb-6ef8-49b3-bd1f-9130dcc7a216.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.302 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f05029ad-4126-4548-b499-f1bb608d1e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.302 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: global
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/4e170bdb-6ef8-49b3-bd1f-9130dcc7a216.pid.haproxy
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.303 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'env', 'PROCESS_TAG=haproxy-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4e170bdb-6ef8-49b3-bd1f-9130dcc7a216.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:00:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.314 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:00:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.374 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:25.532 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:25 compute-0 ceph-mon[74496]: pgmap v3858: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.643 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850025.6426332, 1ab6e2fe-f122-432a-a79e-3bba6e7a8603 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.644 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] VM Started (Lifecycle Event)
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.681 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.685 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850025.6429095, 1ab6e2fe-f122-432a-a79e-3bba6e7a8603 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.686 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] VM Paused (Lifecycle Event)
Jan 31 09:00:25 compute-0 podman[403731]: 2026-01-31 09:00:25.70847052 +0000 UTC m=+0.102326209 container create b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:00:25 compute-0 podman[403731]: 2026-01-31 09:00:25.651044584 +0000 UTC m=+0.044900303 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.747 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.754 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:00:25 compute-0 systemd[1]: Started libpod-conmon-b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7.scope.
Jan 31 09:00:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.785 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4df4bd645ec2b057c05c8f00a3c30e17f55fa4a72898086b9a175d80e7b2fcbe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:00:25 compute-0 podman[403731]: 2026-01-31 09:00:25.806684309 +0000 UTC m=+0.200539998 container init b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:00:25 compute-0 podman[403731]: 2026-01-31 09:00:25.812977601 +0000 UTC m=+0.206833300 container start b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 09:00:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:00:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/307220799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.835 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:25 compute-0 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[403748]: [NOTICE]   (403752) : New worker (403756) forked
Jan 31 09:00:25 compute-0 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[403748]: [NOTICE]   (403752) : Loading success.
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.844 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.884 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.973 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:00:25 compute-0 nova_compute[247704]: 2026-01-31 09:00:25.974 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:26.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:26.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/307220799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.477 247708 DEBUG nova.compute.manager [req-8a2212f3-a651-4379-b4ff-65ba7fb21b2e req-4be3e1a8-fa34-45d1-9c45-7442d4a15095 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.478 247708 DEBUG oslo_concurrency.lockutils [req-8a2212f3-a651-4379-b4ff-65ba7fb21b2e req-4be3e1a8-fa34-45d1-9c45-7442d4a15095 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.478 247708 DEBUG oslo_concurrency.lockutils [req-8a2212f3-a651-4379-b4ff-65ba7fb21b2e req-4be3e1a8-fa34-45d1-9c45-7442d4a15095 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.478 247708 DEBUG oslo_concurrency.lockutils [req-8a2212f3-a651-4379-b4ff-65ba7fb21b2e req-4be3e1a8-fa34-45d1-9c45-7442d4a15095 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.479 247708 DEBUG nova.compute.manager [req-8a2212f3-a651-4379-b4ff-65ba7fb21b2e req-4be3e1a8-fa34-45d1-9c45-7442d4a15095 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Processing event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.480 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.485 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850027.484673, 1ab6e2fe-f122-432a-a79e-3bba6e7a8603 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.486 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] VM Resumed (Lifecycle Event)
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.490 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.495 247708 INFO nova.virt.libvirt.driver [-] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Instance spawned successfully.
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.496 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.533 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.534 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.535 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.536 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.537 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.538 247708 DEBUG nova.virt.libvirt.driver [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.588 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.593 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.626 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.631 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.665 247708 INFO nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Took 10.52 seconds to spawn the instance on the hypervisor.
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.666 247708 DEBUG nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:00:27 compute-0 ceph-mon[74496]: pgmap v3859: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.757 247708 INFO nova.compute.manager [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Took 11.47 seconds to build instance.
Jan 31 09:00:27 compute-0 nova_compute[247704]: 2026-01-31 09:00:27.782 247708 DEBUG oslo_concurrency.lockutils [None req-819a2693-fbaa-426b-8537-c9cbb2fc6fa2 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:28.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:28.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:28 compute-0 nova_compute[247704]: 2026-01-31 09:00:28.839 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:28 compute-0 nova_compute[247704]: 2026-01-31 09:00:28.968 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:00:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 31 09:00:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:30.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:30 compute-0 nova_compute[247704]: 2026-01-31 09:00:30.482 247708 DEBUG nova.compute.manager [req-4945eeae-c854-4f95-b1ae-51750d69b051 req-5a6674ad-c50e-44d9-8151-4c46f1783b84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:00:30 compute-0 nova_compute[247704]: 2026-01-31 09:00:30.483 247708 DEBUG oslo_concurrency.lockutils [req-4945eeae-c854-4f95-b1ae-51750d69b051 req-5a6674ad-c50e-44d9-8151-4c46f1783b84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:30 compute-0 nova_compute[247704]: 2026-01-31 09:00:30.483 247708 DEBUG oslo_concurrency.lockutils [req-4945eeae-c854-4f95-b1ae-51750d69b051 req-5a6674ad-c50e-44d9-8151-4c46f1783b84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:30 compute-0 nova_compute[247704]: 2026-01-31 09:00:30.483 247708 DEBUG oslo_concurrency.lockutils [req-4945eeae-c854-4f95-b1ae-51750d69b051 req-5a6674ad-c50e-44d9-8151-4c46f1783b84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:30 compute-0 nova_compute[247704]: 2026-01-31 09:00:30.484 247708 DEBUG nova.compute.manager [req-4945eeae-c854-4f95-b1ae-51750d69b051 req-5a6674ad-c50e-44d9-8151-4c46f1783b84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] No waiting events found dispatching network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:00:30 compute-0 nova_compute[247704]: 2026-01-31 09:00:30.484 247708 WARNING nova.compute.manager [req-4945eeae-c854-4f95-b1ae-51750d69b051 req-5a6674ad-c50e-44d9-8151-4c46f1783b84 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received unexpected event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e for instance with vm_state active and task_state None.
Jan 31 09:00:30 compute-0 ceph-mon[74496]: pgmap v3860: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 31 09:00:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 71 op/s
Jan 31 09:00:31 compute-0 ceph-mon[74496]: pgmap v3861: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 71 op/s
Jan 31 09:00:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:32.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:32.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:32 compute-0 ovn_controller[149457]: 2026-01-31T09:00:32Z|00880|binding|INFO|Releasing lport 58845f69-5ae5-46ec-8d5b-7dca32eb756b from this chassis (sb_readonly=0)
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.162 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 NetworkManager[49108]: <info>  [1769850032.1656] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 31 09:00:32 compute-0 NetworkManager[49108]: <info>  [1769850032.1667] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Jan 31 09:00:32 compute-0 ovn_controller[149457]: 2026-01-31T09:00:32Z|00881|binding|INFO|Releasing lport 58845f69-5ae5-46ec-8d5b-7dca32eb756b from this chassis (sb_readonly=0)
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.172 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.175 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.528 247708 DEBUG nova.compute.manager [req-1eb5e14e-d40a-4463-9f51-049431495729 req-6f746a65-234a-40b3-a256-3b9a2af6d325 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received event network-changed-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.529 247708 DEBUG nova.compute.manager [req-1eb5e14e-d40a-4463-9f51-049431495729 req-6f746a65-234a-40b3-a256-3b9a2af6d325 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Refreshing instance network info cache due to event network-changed-a9477ae0-b9b1-427b-b136-9017671bc84e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.529 247708 DEBUG oslo_concurrency.lockutils [req-1eb5e14e-d40a-4463-9f51-049431495729 req-6f746a65-234a-40b3-a256-3b9a2af6d325 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.529 247708 DEBUG oslo_concurrency.lockutils [req-1eb5e14e-d40a-4463-9f51-049431495729 req-6f746a65-234a-40b3-a256-3b9a2af6d325 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.529 247708 DEBUG nova.network.neutron [req-1eb5e14e-d40a-4463-9f51-049431495729 req-6f746a65-234a-40b3-a256-3b9a2af6d325 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Refreshing network info cache for port a9477ae0-b9b1-427b-b136-9017671bc84e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.632 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.697 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.698 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.698 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.698 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.698 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.701 247708 INFO nova.compute.manager [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Terminating instance
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.702 247708 DEBUG nova.compute.manager [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:00:32 compute-0 kernel: tapa9477ae0-b9 (unregistering): left promiscuous mode
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.754 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 NetworkManager[49108]: <info>  [1769850032.7556] device (tapa9477ae0-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.763 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 ovn_controller[149457]: 2026-01-31T09:00:32Z|00882|binding|INFO|Releasing lport a9477ae0-b9b1-427b-b136-9017671bc84e from this chassis (sb_readonly=0)
Jan 31 09:00:32 compute-0 ovn_controller[149457]: 2026-01-31T09:00:32Z|00883|binding|INFO|Setting lport a9477ae0-b9b1-427b-b136-9017671bc84e down in Southbound
Jan 31 09:00:32 compute-0 ovn_controller[149457]: 2026-01-31T09:00:32Z|00884|binding|INFO|Removing iface tapa9477ae0-b9 ovn-installed in OVS
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.766 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:32.770 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:d1:e5 10.100.0.5'], port_security=['fa:16:3e:73:d1:e5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1274203822', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1ab6e2fe-f122-432a-a79e-3bba6e7a8603', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1274203822', 'neutron:project_id': '0d55ec1a5544450dba4e4fd1426395d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc812d40-ff23-43ac-a90f-2ee695b7bd6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07659edf-94fb-4be4-adbf-22b52c034c40, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=a9477ae0-b9b1-427b-b136-9017671bc84e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:32.773 160028 INFO neutron.agent.ovn.metadata.agent [-] Port a9477ae0-b9b1-427b-b136-9017671bc84e in datapath 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 unbound from our chassis
Jan 31 09:00:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:32.775 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:00:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:32.776 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6bc32d-6cf7-48ec-8fa2-923ad8290289]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:32 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:32.777 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 namespace which is not needed anymore
Jan 31 09:00:32 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000d2.scope: Deactivated successfully.
Jan 31 09:00:32 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000d2.scope: Consumed 6.014s CPU time.
Jan 31 09:00:32 compute-0 systemd-machined[214448]: Machine qemu-92-instance-000000d2 terminated.
Jan 31 09:00:32 compute-0 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[403748]: [NOTICE]   (403752) : haproxy version is 2.8.14-c23fe91
Jan 31 09:00:32 compute-0 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[403748]: [NOTICE]   (403752) : path to executable is /usr/sbin/haproxy
Jan 31 09:00:32 compute-0 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[403748]: [WARNING]  (403752) : Exiting Master process...
Jan 31 09:00:32 compute-0 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[403748]: [ALERT]    (403752) : Current worker (403756) exited with code 143 (Terminated)
Jan 31 09:00:32 compute-0 neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216[403748]: [WARNING]  (403752) : All workers exited. Exiting... (0)
Jan 31 09:00:32 compute-0 systemd[1]: libpod-b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7.scope: Deactivated successfully.
Jan 31 09:00:32 compute-0 podman[403793]: 2026-01-31 09:00:32.897584851 +0000 UTC m=+0.044693778 container died b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7-userdata-shm.mount: Deactivated successfully.
Jan 31 09:00:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df4bd645ec2b057c05c8f00a3c30e17f55fa4a72898086b9a175d80e7b2fcbe-merged.mount: Deactivated successfully.
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.925 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.935 247708 INFO nova.virt.libvirt.driver [-] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Instance destroyed successfully.
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.935 247708 DEBUG nova.objects.instance [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lazy-loading 'resources' on Instance uuid 1ab6e2fe-f122-432a-a79e-3bba6e7a8603 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:00:32 compute-0 podman[403793]: 2026-01-31 09:00:32.94688312 +0000 UTC m=+0.093992047 container cleanup b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:00:32 compute-0 systemd[1]: libpod-conmon-b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7.scope: Deactivated successfully.
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.981 247708 DEBUG nova.virt.libvirt.vif [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:00:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1860154085',display_name='tempest-TestNetworkBasicOps-server-1860154085',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1860154085',id=210,image_ref='7c23949f-bba8-4466-bb79-caf568852d38',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO+E42CVHseO528bXptBdHbFUExjfcDKkMfNAoNtITX7nGTK7linVbZnHESjjlklVt+0E1mdM5qaYAJOpVr8pbc/19fpL4o7fiV2OvqhrvnSeYsn3T7kOazkaPErbck+DQ==',key_name='tempest-TestNetworkBasicOps-33976219',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:00:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d55ec1a5544450dba4e4fd1426395d7',ramdisk_id='',reservation_id='r-tp8kdiz8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='7c23949f-bba8-4466-bb79-caf568852d38',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1691550221',owner_user_name='tempest-TestNetworkBasicOps-1691550221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:00:27Z,user_data=None,user_id='4a56abd8fdd341ae88a99e102ab399de',uuid=1ab6e2fe-f122-432a-a79e-3bba6e7a8603,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.982 247708 DEBUG nova.network.os_vif_util [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converting VIF {"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.983 247708 DEBUG nova.network.os_vif_util [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.983 247708 DEBUG os_vif [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.984 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.985 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9477ae0-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.986 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:00:32 compute-0 nova_compute[247704]: 2026-01-31 09:00:32.996 247708 INFO os_vif [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:d1:e5,bridge_name='br-int',has_traffic_filtering=True,id=a9477ae0-b9b1-427b-b136-9017671bc84e,network=Network(4e170bdb-6ef8-49b3-bd1f-9130dcc7a216),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa9477ae0-b9')
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.019 247708 DEBUG nova.compute.manager [req-0936a494-b611-4bf6-b41d-eaf8a9516d18 req-0293acb8-156d-479e-8404-2b948118572d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received event network-vif-unplugged-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.019 247708 DEBUG oslo_concurrency.lockutils [req-0936a494-b611-4bf6-b41d-eaf8a9516d18 req-0293acb8-156d-479e-8404-2b948118572d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.020 247708 DEBUG oslo_concurrency.lockutils [req-0936a494-b611-4bf6-b41d-eaf8a9516d18 req-0293acb8-156d-479e-8404-2b948118572d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.020 247708 DEBUG oslo_concurrency.lockutils [req-0936a494-b611-4bf6-b41d-eaf8a9516d18 req-0293acb8-156d-479e-8404-2b948118572d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.020 247708 DEBUG nova.compute.manager [req-0936a494-b611-4bf6-b41d-eaf8a9516d18 req-0293acb8-156d-479e-8404-2b948118572d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] No waiting events found dispatching network-vif-unplugged-a9477ae0-b9b1-427b-b136-9017671bc84e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.021 247708 DEBUG nova.compute.manager [req-0936a494-b611-4bf6-b41d-eaf8a9516d18 req-0293acb8-156d-479e-8404-2b948118572d 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received event network-vif-unplugged-a9477ae0-b9b1-427b-b136-9017671bc84e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:00:33 compute-0 podman[403832]: 2026-01-31 09:00:33.023071523 +0000 UTC m=+0.056323272 container remove b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.027 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[95b92cdb-ff14-4313-9ab7-6e69a73ce078]: (4, ('Sat Jan 31 09:00:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 (b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7)\nb8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7\nSat Jan 31 09:00:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 (b8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7)\nb8883b9864e8ae89e4ab4b8879c5482403eee89a434e010031f8e193502750b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.029 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b743a93a-a090-4f6b-afc6-30331015dcff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.030 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e170bdb-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:00:33 compute-0 kernel: tap4e170bdb-60: left promiscuous mode
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.031 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.038 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.041 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[adf5e04a-7e2d-4be0-a3e7-0634026dc735]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.054 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[78e423fa-7223-4561-90e2-c755a3af55fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.055 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ae87c420-4e6b-45a2-9b6f-6085f61e4d4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.069 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[878726d8-133c-4cb1-959c-519ae3f82ee0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1026769, 'reachable_time': 37644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403865, 'error': None, 'target': 'ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.072 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4e170bdb-6ef8-49b3-bd1f-9130dcc7a216 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:00:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:00:33.073 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[88b4999f-1e5c-4890-b3f9-5e591b0e8513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:00:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d4e170bdb\x2d6ef8\x2d49b3\x2dbd1f\x2d9130dcc7a216.mount: Deactivated successfully.
Jan 31 09:00:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.752 247708 INFO nova.virt.libvirt.driver [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Deleting instance files /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603_del
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.754 247708 INFO nova.virt.libvirt.driver [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Deletion of /var/lib/nova/instances/1ab6e2fe-f122-432a-a79e-3bba6e7a8603_del complete
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.837 247708 INFO nova.compute.manager [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Took 1.14 seconds to destroy the instance on the hypervisor.
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.838 247708 DEBUG oslo.service.loopingcall [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.838 247708 DEBUG nova.compute.manager [-] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:00:33 compute-0 nova_compute[247704]: 2026-01-31 09:00:33.838 247708 DEBUG nova.network.neutron [-] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:00:34 compute-0 nova_compute[247704]: 2026-01-31 09:00:34.112 247708 DEBUG nova.network.neutron [req-1eb5e14e-d40a-4463-9f51-049431495729 req-6f746a65-234a-40b3-a256-3b9a2af6d325 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Updated VIF entry in instance network info cache for port a9477ae0-b9b1-427b-b136-9017671bc84e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:00:34 compute-0 nova_compute[247704]: 2026-01-31 09:00:34.113 247708 DEBUG nova.network.neutron [req-1eb5e14e-d40a-4463-9f51-049431495729 req-6f746a65-234a-40b3-a256-3b9a2af6d325 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Updating instance_info_cache with network_info: [{"id": "a9477ae0-b9b1-427b-b136-9017671bc84e", "address": "fa:16:3e:73:d1:e5", "network": {"id": "4e170bdb-6ef8-49b3-bd1f-9130dcc7a216", "bridge": "br-int", "label": "tempest-network-smoke--726043769", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d55ec1a5544450dba4e4fd1426395d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9477ae0-b9", "ovs_interfaceid": "a9477ae0-b9b1-427b-b136-9017671bc84e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:00:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:34.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:34.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:34 compute-0 nova_compute[247704]: 2026-01-31 09:00:34.141 247708 DEBUG oslo_concurrency.lockutils [req-1eb5e14e-d40a-4463-9f51-049431495729 req-6f746a65-234a-40b3-a256-3b9a2af6d325 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-1ab6e2fe-f122-432a-a79e-3bba6e7a8603" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:00:34 compute-0 ceph-mon[74496]: pgmap v3862: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.157 247708 DEBUG nova.compute.manager [req-adb16fca-1fa2-4f91-8220-ab0c4aceb2d4 req-11a85187-a608-483c-b721-fc2d36943597 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.158 247708 DEBUG oslo_concurrency.lockutils [req-adb16fca-1fa2-4f91-8220-ab0c4aceb2d4 req-11a85187-a608-483c-b721-fc2d36943597 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.158 247708 DEBUG oslo_concurrency.lockutils [req-adb16fca-1fa2-4f91-8220-ab0c4aceb2d4 req-11a85187-a608-483c-b721-fc2d36943597 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.158 247708 DEBUG oslo_concurrency.lockutils [req-adb16fca-1fa2-4f91-8220-ab0c4aceb2d4 req-11a85187-a608-483c-b721-fc2d36943597 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.158 247708 DEBUG nova.compute.manager [req-adb16fca-1fa2-4f91-8220-ab0c4aceb2d4 req-11a85187-a608-483c-b721-fc2d36943597 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] No waiting events found dispatching network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.159 247708 WARNING nova.compute.manager [req-adb16fca-1fa2-4f91-8220-ab0c4aceb2d4 req-11a85187-a608-483c-b721-fc2d36943597 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Received unexpected event network-vif-plugged-a9477ae0-b9b1-427b-b136-9017671bc84e for instance with vm_state active and task_state deleting.
Jan 31 09:00:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.330 247708 DEBUG nova.network.neutron [-] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.362 247708 INFO nova.compute.manager [-] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Took 1.52 seconds to deallocate network for instance.
Jan 31 09:00:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 150 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.408 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.409 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.505 247708 DEBUG oslo_concurrency.processutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:00:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:00:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2431638206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:35 compute-0 podman[403888]: 2026-01-31 09:00:35.942500402 +0000 UTC m=+0.114749982 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.948 247708 DEBUG oslo_concurrency.processutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.955 247708 DEBUG nova.compute.provider_tree [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.974 247708 DEBUG nova.scheduler.client.report [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:00:35 compute-0 nova_compute[247704]: 2026-01-31 09:00:35.997 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:36 compute-0 nova_compute[247704]: 2026-01-31 09:00:36.030 247708 INFO nova.scheduler.client.report [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Deleted allocations for instance 1ab6e2fe-f122-432a-a79e-3bba6e7a8603
Jan 31 09:00:36 compute-0 nova_compute[247704]: 2026-01-31 09:00:36.111 247708 DEBUG oslo_concurrency.lockutils [None req-d3ac9e35-c417-4907-85fa-b3c7905b03ae 4a56abd8fdd341ae88a99e102ab399de 0d55ec1a5544450dba4e4fd1426395d7 - - default default] Lock "1ab6e2fe-f122-432a-a79e-3bba6e7a8603" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:00:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:36.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:36.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006546767401823934 of space, bias 1.0, pg target 0.19640302205471802 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:00:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:00:36 compute-0 ceph-mon[74496]: pgmap v3863: 305 pgs: 305 active+clean; 150 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 09:00:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2431638206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 09:00:37 compute-0 nova_compute[247704]: 2026-01-31 09:00:37.637 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:37 compute-0 ceph-mon[74496]: pgmap v3864: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 09:00:37 compute-0 nova_compute[247704]: 2026-01-31 09:00:37.987 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:38.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:38.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 31 09:00:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:00:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:40.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:00:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:40 compute-0 ceph-mon[74496]: pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 31 09:00:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 91 op/s
Jan 31 09:00:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:42.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:42.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:42 compute-0 nova_compute[247704]: 2026-01-31 09:00:42.637 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:42 compute-0 ceph-mon[74496]: pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 91 op/s
Jan 31 09:00:42 compute-0 nova_compute[247704]: 2026-01-31 09:00:42.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 835 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Jan 31 09:00:43 compute-0 ceph-mon[74496]: pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 835 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Jan 31 09:00:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:44.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:44.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:45 compute-0 sudo[403921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:45 compute-0 sudo[403921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:45 compute-0 sudo[403921]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:00:45 compute-0 sudo[403946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:00:45 compute-0 sudo[403946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:00:45 compute-0 sudo[403946]: pam_unix(sudo:session): session closed for user root
Jan 31 09:00:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:46.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:46.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:46 compute-0 ceph-mon[74496]: pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:00:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Jan 31 09:00:47 compute-0 nova_compute[247704]: 2026-01-31 09:00:47.639 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:47 compute-0 nova_compute[247704]: 2026-01-31 09:00:47.934 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850032.932545, 1ab6e2fe-f122-432a-a79e-3bba6e7a8603 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:00:47 compute-0 nova_compute[247704]: 2026-01-31 09:00:47.934 247708 INFO nova.compute.manager [-] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] VM Stopped (Lifecycle Event)
Jan 31 09:00:47 compute-0 nova_compute[247704]: 2026-01-31 09:00:47.956 247708 DEBUG nova.compute.manager [None req-70776d4d-7b6a-4951-8afb-99fcb8184023 - - - - - -] [instance: 1ab6e2fe-f122-432a-a79e-3bba6e7a8603] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:00:47 compute-0 nova_compute[247704]: 2026-01-31 09:00:47.991 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:48.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:48.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:48 compute-0 ceph-mon[74496]: pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Jan 31 09:00:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Jan 31 09:00:49 compute-0 ceph-mon[74496]: pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Jan 31 09:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:00:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:00:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:50.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:50.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:51 compute-0 ceph-mon[74496]: pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:51 compute-0 podman[403974]: 2026-01-31 09:00:51.889256383 +0000 UTC m=+0.062762227 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 09:00:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:52.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:52 compute-0 nova_compute[247704]: 2026-01-31 09:00:52.640 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:52 compute-0 nova_compute[247704]: 2026-01-31 09:00:52.993 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:00:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1275907244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:00:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:00:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1275907244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:00:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1275907244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:00:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1275907244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:00:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:00:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:54.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:54.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:54 compute-0 ceph-mon[74496]: pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:00:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:56 compute-0 ceph-mon[74496]: pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:56.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:00:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:00:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:56.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:00:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:57 compute-0 nova_compute[247704]: 2026-01-31 09:00:57.644 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:57 compute-0 ceph-mon[74496]: pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:00:57 compute-0 nova_compute[247704]: 2026-01-31 09:00:57.995 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:00:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:00:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:00:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:00:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:58.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:00:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:00:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:58.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:00:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2174453895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:00:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:00.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:00.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:00 compute-0 ceph-mon[74496]: pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:01 compute-0 CROND[404001]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 09:01:01 compute-0 run-parts[404004]: (/etc/cron.hourly) starting 0anacron
Jan 31 09:01:01 compute-0 run-parts[404010]: (/etc/cron.hourly) finished 0anacron
Jan 31 09:01:01 compute-0 CROND[404000]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 09:01:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:02.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:02.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:02 compute-0 nova_compute[247704]: 2026-01-31 09:01:02.645 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:02 compute-0 ceph-mon[74496]: pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:01:02.698 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:01:02 compute-0 nova_compute[247704]: 2026-01-31 09:01:02.699 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:02 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:01:02.700 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:01:02 compute-0 nova_compute[247704]: 2026-01-31 09:01:02.997 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 124 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 46 KiB/s wr, 10 op/s
Jan 31 09:01:03 compute-0 nova_compute[247704]: 2026-01-31 09:01:03.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:03 compute-0 ceph-mon[74496]: pgmap v3877: 305 pgs: 305 active+clean; 124 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 46 KiB/s wr, 10 op/s
Jan 31 09:01:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:04.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:04.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.373952) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065374024, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1773, "num_deletes": 261, "total_data_size": 3015029, "memory_usage": 3065640, "flush_reason": "Manual Compaction"}
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Jan 31 09:01:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 147 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Jan 31 09:01:05 compute-0 sudo[404013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:05 compute-0 sudo[404013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:05 compute-0 sudo[404013]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065488801, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 2967205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82915, "largest_seqno": 84685, "table_properties": {"data_size": 2959131, "index_size": 4887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17064, "raw_average_key_size": 20, "raw_value_size": 2942727, "raw_average_value_size": 3490, "num_data_blocks": 213, "num_entries": 843, "num_filter_entries": 843, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849900, "oldest_key_time": 1769849900, "file_creation_time": 1769850065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 114929 microseconds, and 8395 cpu microseconds.
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:01:05 compute-0 sudo[404038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:05 compute-0 sudo[404038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:05 compute-0 sudo[404038]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.488884) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 2967205 bytes OK
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.488916) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.567361) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.567432) EVENT_LOG_v1 {"time_micros": 1769850065567419, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.567466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 3007601, prev total WAL file size 3007601, number of live WAL files 2.
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.568693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353137' seq:72057594037927935, type:22 .. '6C6F676D0033373731' seq:0, type:0; will stop at (end)
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(2897KB)], [191(10MB)]
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065568769, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 14347931, "oldest_snapshot_seqno": -1}
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 11119 keys, 14206061 bytes, temperature: kUnknown
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065879914, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 14206061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14135054, "index_size": 42094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 293919, "raw_average_key_size": 26, "raw_value_size": 13941642, "raw_average_value_size": 1253, "num_data_blocks": 1603, "num_entries": 11119, "num_filter_entries": 11119, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850065, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.880252) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14206061 bytes
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.905415) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.1 rd, 45.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.9 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 11658, records dropped: 539 output_compression: NoCompression
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.905478) EVENT_LOG_v1 {"time_micros": 1769850065905446, "job": 120, "event": "compaction_finished", "compaction_time_micros": 311229, "compaction_time_cpu_micros": 25268, "output_level": 6, "num_output_files": 1, "total_output_size": 14206061, "num_input_records": 11658, "num_output_records": 11119, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065905893, "job": 120, "event": "table_file_deletion", "file_number": 193}
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850065906649, "job": 120, "event": "table_file_deletion", "file_number": 191}
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.568538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.906893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.906904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.906906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.906908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:05 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:05.906910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:06.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:06.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:06 compute-0 ceph-mon[74496]: pgmap v3878: 305 pgs: 305 active+clean; 147 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Jan 31 09:01:06 compute-0 podman[404064]: 2026-01-31 09:01:06.90474739 +0000 UTC m=+0.076906341 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 09:01:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 31 09:01:07 compute-0 nova_compute[247704]: 2026-01-31 09:01:07.646 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:07 compute-0 ceph-mon[74496]: pgmap v3879: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 31 09:01:08 compute-0 nova_compute[247704]: 2026-01-31 09:01:07.999 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:08.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:08.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:08 compute-0 sudo[404091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:08 compute-0 sudo[404091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:08 compute-0 sudo[404091]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:08 compute-0 sudo[404116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:01:08 compute-0 sudo[404116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:08 compute-0 sudo[404116]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:08 compute-0 sudo[404141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:08 compute-0 sudo[404141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:08 compute-0 sudo[404141]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:08 compute-0 sudo[404166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 09:01:08 compute-0 sudo[404166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:09 compute-0 sudo[404166]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:01:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:01:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:09 compute-0 sudo[404211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:09 compute-0 sudo[404211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:09 compute-0 sudo[404211]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:09 compute-0 sudo[404236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:01:09 compute-0 sudo[404236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:09 compute-0 sudo[404236]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:09 compute-0 sudo[404261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:09 compute-0 sudo[404261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:09 compute-0 sudo[404261]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:09 compute-0 sudo[404286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:01:09 compute-0 sudo[404286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:09 compute-0 nova_compute[247704]: 2026-01-31 09:01:09.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:09 compute-0 sudo[404286]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:01:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:01:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:01:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:01:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:01:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cf53e576-1e0f-4b8e-90a1-99da8d09eb43 does not exist
Jan 31 09:01:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 16e1446d-cb7d-4bf5-bd30-e3bda1fec713 does not exist
Jan 31 09:01:10 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7638db53-eb18-41eb-95b8-260206de66cd does not exist
Jan 31 09:01:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:01:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:01:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:01:10 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:01:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:01:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:01:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:10 compute-0 sudo[404342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:10 compute-0 sudo[404342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:10 compute-0 sudo[404342]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:10 compute-0 sudo[404367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:01:10 compute-0 sudo[404367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:10 compute-0 sudo[404367]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:10 compute-0 sudo[404392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:10 compute-0 ceph-mon[74496]: pgmap v3880: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3591830032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3119043435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:01:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:01:10 compute-0 sudo[404392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:10 compute-0 sudo[404392]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:10 compute-0 sudo[404417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:01:10 compute-0 sudo[404417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:10 compute-0 podman[404483]: 2026-01-31 09:01:10.687774118 +0000 UTC m=+0.022170670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:01:10 compute-0 podman[404483]: 2026-01-31 09:01:10.933396601 +0000 UTC m=+0.267793113 container create b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:01:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:01:11.229 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:01:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:01:11.230 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:01:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:01:11.230 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:01:11 compute-0 systemd[1]: Started libpod-conmon-b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628.scope.
Jan 31 09:01:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:01:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:11 compute-0 podman[404483]: 2026-01-31 09:01:11.422427862 +0000 UTC m=+0.756824384 container init b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:01:11 compute-0 podman[404483]: 2026-01-31 09:01:11.431305217 +0000 UTC m=+0.765701719 container start b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:01:11 compute-0 wizardly_curie[404500]: 167 167
Jan 31 09:01:11 compute-0 systemd[1]: libpod-b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628.scope: Deactivated successfully.
Jan 31 09:01:11 compute-0 nova_compute[247704]: 2026-01-31 09:01:11.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:11 compute-0 nova_compute[247704]: 2026-01-31 09:01:11.565 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:01:11 compute-0 podman[404483]: 2026-01-31 09:01:11.565280825 +0000 UTC m=+0.899677367 container attach b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 31 09:01:11 compute-0 podman[404483]: 2026-01-31 09:01:11.56711802 +0000 UTC m=+0.901514542 container died b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 09:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e1f5dc71b4af218b8bffaf782bf7b1a9626fe67ea493cb528409b1bed25c986-merged.mount: Deactivated successfully.
Jan 31 09:01:11 compute-0 ceph-mon[74496]: pgmap v3881: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:12.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:12.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:12 compute-0 podman[404483]: 2026-01-31 09:01:12.481901355 +0000 UTC m=+1.816297867 container remove b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_curie, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:01:12 compute-0 systemd[1]: libpod-conmon-b0c0b43ffb0d3a30d2bc02f43c068880eeb88eec31ca122423c7b345eab23628.scope: Deactivated successfully.
Jan 31 09:01:12 compute-0 nova_compute[247704]: 2026-01-31 09:01:12.648 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:12 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:01:12.702 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:01:12 compute-0 podman[404527]: 2026-01-31 09:01:12.619827978 +0000 UTC m=+0.044196945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:01:12 compute-0 podman[404527]: 2026-01-31 09:01:12.821866232 +0000 UTC m=+0.246235099 container create e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hermann, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 09:01:13 compute-0 nova_compute[247704]: 2026-01-31 09:01:13.002 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:13 compute-0 systemd[1]: Started libpod-conmon-e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef.scope.
Jan 31 09:01:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a6a6371932587451fe55d9e61bfc393fe1bc4044cd11450f0f0c8a6cd48cbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a6a6371932587451fe55d9e61bfc393fe1bc4044cd11450f0f0c8a6cd48cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a6a6371932587451fe55d9e61bfc393fe1bc4044cd11450f0f0c8a6cd48cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a6a6371932587451fe55d9e61bfc393fe1bc4044cd11450f0f0c8a6cd48cbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a6a6371932587451fe55d9e61bfc393fe1bc4044cd11450f0f0c8a6cd48cbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:13 compute-0 podman[404527]: 2026-01-31 09:01:13.188945347 +0000 UTC m=+0.613314214 container init e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:01:13 compute-0 podman[404527]: 2026-01-31 09:01:13.199564385 +0000 UTC m=+0.623933252 container start e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hermann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 09:01:13 compute-0 podman[404527]: 2026-01-31 09:01:13.219236503 +0000 UTC m=+0.643605370 container attach e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 09:01:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:14 compute-0 bold_hermann[404543]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:01:14 compute-0 bold_hermann[404543]: --> relative data size: 1.0
Jan 31 09:01:14 compute-0 bold_hermann[404543]: --> All data devices are unavailable
Jan 31 09:01:14 compute-0 systemd[1]: libpod-e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef.scope: Deactivated successfully.
Jan 31 09:01:14 compute-0 podman[404558]: 2026-01-31 09:01:14.079224196 +0000 UTC m=+0.023319179 container died e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:01:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:14.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:14.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3a6a6371932587451fe55d9e61bfc393fe1bc4044cd11450f0f0c8a6cd48cbb-merged.mount: Deactivated successfully.
Jan 31 09:01:15 compute-0 ceph-mon[74496]: pgmap v3882: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:01:15 compute-0 podman[404558]: 2026-01-31 09:01:15.298476632 +0000 UTC m=+1.242571635 container remove e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 09:01:15 compute-0 systemd[1]: libpod-conmon-e84b2dd7addaa3355cf8201721683a40ffbc51fcb62c813f9e20e56306b73cef.scope: Deactivated successfully.
Jan 31 09:01:15 compute-0 sudo[404417]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:15 compute-0 sudo[404574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:15 compute-0 sudo[404574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:15 compute-0 sudo[404574]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Jan 31 09:01:15 compute-0 sudo[404599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:01:15 compute-0 sudo[404599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:15 compute-0 sudo[404599]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:15 compute-0 sudo[404624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:15 compute-0 sudo[404624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:15 compute-0 sudo[404624]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:15 compute-0 sudo[404649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:01:15 compute-0 sudo[404649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:15 compute-0 podman[404714]: 2026-01-31 09:01:15.88150052 +0000 UTC m=+0.094936260 container create 281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 09:01:15 compute-0 podman[404714]: 2026-01-31 09:01:15.809890869 +0000 UTC m=+0.023326599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:01:15 compute-0 systemd[1]: Started libpod-conmon-281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84.scope.
Jan 31 09:01:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:01:16 compute-0 podman[404714]: 2026-01-31 09:01:16.060469461 +0000 UTC m=+0.273905221 container init 281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:01:16 compute-0 podman[404714]: 2026-01-31 09:01:16.065534104 +0000 UTC m=+0.278969834 container start 281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 09:01:16 compute-0 cranky_cray[404730]: 167 167
Jan 31 09:01:16 compute-0 systemd[1]: libpod-281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84.scope: Deactivated successfully.
Jan 31 09:01:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:16.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:16.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:16 compute-0 podman[404714]: 2026-01-31 09:01:16.241857852 +0000 UTC m=+0.455293582 container attach 281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 09:01:16 compute-0 podman[404714]: 2026-01-31 09:01:16.242450187 +0000 UTC m=+0.455885917 container died 281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:01:16 compute-0 ceph-mon[74496]: pgmap v3883: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Jan 31 09:01:16 compute-0 nova_compute[247704]: 2026-01-31 09:01:16.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3b7904f2333183a3bee86bc51c6e01ead148120ce675471f6df6a482d882b35-merged.mount: Deactivated successfully.
Jan 31 09:01:16 compute-0 podman[404714]: 2026-01-31 09:01:16.716027042 +0000 UTC m=+0.929462772 container remove 281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 09:01:16 compute-0 systemd[1]: libpod-conmon-281d900ea3df19bade49080baa2a5a7708228388f22ec462cb2d87dfea511a84.scope: Deactivated successfully.
Jan 31 09:01:16 compute-0 podman[404754]: 2026-01-31 09:01:16.909287381 +0000 UTC m=+0.083613894 container create ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bose, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 09:01:16 compute-0 podman[404754]: 2026-01-31 09:01:16.849738123 +0000 UTC m=+0.024064666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:01:16 compute-0 systemd[1]: Started libpod-conmon-ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c.scope.
Jan 31 09:01:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4b17e0842d80f3d93735edfa2bb457ae1888b18b6e3410ebc95a4728c0ec49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4b17e0842d80f3d93735edfa2bb457ae1888b18b6e3410ebc95a4728c0ec49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4b17e0842d80f3d93735edfa2bb457ae1888b18b6e3410ebc95a4728c0ec49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c4b17e0842d80f3d93735edfa2bb457ae1888b18b6e3410ebc95a4728c0ec49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:17 compute-0 podman[404754]: 2026-01-31 09:01:17.086231853 +0000 UTC m=+0.260558386 container init ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:01:17 compute-0 podman[404754]: 2026-01-31 09:01:17.093965062 +0000 UTC m=+0.268291575 container start ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 09:01:17 compute-0 podman[404754]: 2026-01-31 09:01:17.132189971 +0000 UTC m=+0.306516704 container attach ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bose, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:01:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 155 KiB/s rd, 669 KiB/s wr, 14 op/s
Jan 31 09:01:17 compute-0 nova_compute[247704]: 2026-01-31 09:01:17.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:17 compute-0 nova_compute[247704]: 2026-01-31 09:01:17.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:01:17 compute-0 nova_compute[247704]: 2026-01-31 09:01:17.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:01:17 compute-0 nova_compute[247704]: 2026-01-31 09:01:17.601 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:01:17 compute-0 nova_compute[247704]: 2026-01-31 09:01:17.669 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:17 compute-0 pedantic_bose[404771]: {
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:     "0": [
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:         {
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "devices": [
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "/dev/loop3"
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             ],
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "lv_name": "ceph_lv0",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "lv_size": "7511998464",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "name": "ceph_lv0",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "tags": {
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.cluster_name": "ceph",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.crush_device_class": "",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.encrypted": "0",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.osd_id": "0",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.type": "block",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:                 "ceph.vdo": "0"
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             },
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "type": "block",
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:             "vg_name": "ceph_vg0"
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:         }
Jan 31 09:01:17 compute-0 pedantic_bose[404771]:     ]
Jan 31 09:01:17 compute-0 pedantic_bose[404771]: }
Jan 31 09:01:17 compute-0 systemd[1]: libpod-ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c.scope: Deactivated successfully.
Jan 31 09:01:17 compute-0 podman[404754]: 2026-01-31 09:01:17.858847361 +0000 UTC m=+1.033173894 container died ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:01:17 compute-0 ceph-mon[74496]: pgmap v3884: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 155 KiB/s rd, 669 KiB/s wr, 14 op/s
Jan 31 09:01:18 compute-0 nova_compute[247704]: 2026-01-31 09:01:18.006 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c4b17e0842d80f3d93735edfa2bb457ae1888b18b6e3410ebc95a4728c0ec49-merged.mount: Deactivated successfully.
Jan 31 09:01:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:18.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:18 compute-0 podman[404754]: 2026-01-31 09:01:18.252780529 +0000 UTC m=+1.427107062 container remove ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:01:18 compute-0 systemd[1]: libpod-conmon-ae2dedfc3a42d3ab6fbbe65a10e61f924ef9a9a3c3a1b9c0ab8de16b78a4638c.scope: Deactivated successfully.
Jan 31 09:01:18 compute-0 sudo[404649]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:18 compute-0 sudo[404795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:18 compute-0 sudo[404795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:18 compute-0 sudo[404795]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:18 compute-0 sudo[404820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:01:18 compute-0 sudo[404820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:18 compute-0 sudo[404820]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:18 compute-0 sudo[404846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:18 compute-0 sudo[404846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:18 compute-0 sudo[404846]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:18 compute-0 sudo[404871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:01:18 compute-0 sudo[404871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:18 compute-0 podman[404933]: 2026-01-31 09:01:18.836396791 +0000 UTC m=+0.090057711 container create c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 09:01:18 compute-0 podman[404933]: 2026-01-31 09:01:18.770113669 +0000 UTC m=+0.023774619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:01:19 compute-0 systemd[1]: Started libpod-conmon-c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910.scope.
Jan 31 09:01:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:01:19 compute-0 podman[404933]: 2026-01-31 09:01:19.258784621 +0000 UTC m=+0.512445631 container init c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:01:19 compute-0 podman[404933]: 2026-01-31 09:01:19.267896552 +0000 UTC m=+0.521557482 container start c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:01:19 compute-0 naughty_chatelet[404949]: 167 167
Jan 31 09:01:19 compute-0 systemd[1]: libpod-c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910.scope: Deactivated successfully.
Jan 31 09:01:19 compute-0 podman[404933]: 2026-01-31 09:01:19.362221036 +0000 UTC m=+0.615881956 container attach c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:01:19 compute-0 podman[404933]: 2026-01-31 09:01:19.36282474 +0000 UTC m=+0.616485660 container died c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 09:01:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 12 KiB/s wr, 17 op/s
Jan 31 09:01:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d27d15f0e2a8b6817218870f25ee02c5ebe22fb616146d19c9280a951776383-merged.mount: Deactivated successfully.
Jan 31 09:01:19 compute-0 ceph-mon[74496]: pgmap v3885: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 206 KiB/s rd, 12 KiB/s wr, 17 op/s
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:01:20 compute-0 podman[404933]: 2026-01-31 09:01:20.171529966 +0000 UTC m=+1.425190896 container remove c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:01:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:20.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:20.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:20 compute-0 systemd[1]: libpod-conmon-c76dd9be126396adcf0e9b9b860c30eaaf65d0ccb2678c1d079d4a02b19e7910.scope: Deactivated successfully.
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:01:20
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.log']
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:01:20 compute-0 podman[404973]: 2026-01-31 09:01:20.299125759 +0000 UTC m=+0.025289686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:01:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:20 compute-0 podman[404973]: 2026-01-31 09:01:20.458489554 +0000 UTC m=+0.184653451 container create 9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:01:20 compute-0 systemd[1]: Started libpod-conmon-9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de.scope.
Jan 31 09:01:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4cc00201ae8550a7bee20f3c8ce6431f7bae8da079c1537dbc154337940b75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4cc00201ae8550a7bee20f3c8ce6431f7bae8da079c1537dbc154337940b75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4cc00201ae8550a7bee20f3c8ce6431f7bae8da079c1537dbc154337940b75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4cc00201ae8550a7bee20f3c8ce6431f7bae8da079c1537dbc154337940b75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:01:20 compute-0 podman[404973]: 2026-01-31 09:01:20.691616092 +0000 UTC m=+0.417780019 container init 9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 31 09:01:20 compute-0 podman[404973]: 2026-01-31 09:01:20.699217127 +0000 UTC m=+0.425381024 container start 9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:01:20 compute-0 podman[404973]: 2026-01-31 09:01:20.731552473 +0000 UTC m=+0.457716400 container attach 9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:01:20 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:01:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]: {
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]:         "osd_id": 0,
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]:         "type": "bluestore"
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]:     }
Jan 31 09:01:21 compute-0 kind_visvesvaraya[404991]: }
Jan 31 09:01:21 compute-0 systemd[1]: libpod-9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de.scope: Deactivated successfully.
Jan 31 09:01:21 compute-0 podman[404973]: 2026-01-31 09:01:21.600704677 +0000 UTC m=+1.326868574 container died 9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec4cc00201ae8550a7bee20f3c8ce6431f7bae8da079c1537dbc154337940b75-merged.mount: Deactivated successfully.
Jan 31 09:01:22 compute-0 podman[404973]: 2026-01-31 09:01:22.004860905 +0000 UTC m=+1.731024802 container remove 9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_visvesvaraya, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:01:22 compute-0 sudo[404871]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:01:22 compute-0 systemd[1]: libpod-conmon-9787d1bd28c01aedddbfe8c4adcaaab4da899fecc766e50343981bdce5e126de.scope: Deactivated successfully.
Jan 31 09:01:22 compute-0 podman[405024]: 2026-01-31 09:01:22.155825476 +0000 UTC m=+0.054116757 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 09:01:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:22.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:22 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:01:22 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fad0dfd2-b8c4-4e88-80ab-4d6a1773aab7 does not exist
Jan 31 09:01:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2af02194-8a9a-49b9-b0a9-0794b9034357 does not exist
Jan 31 09:01:22 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9f189fd5-2e38-46a9-8f89-4f083aa6a67c does not exist
Jan 31 09:01:22 compute-0 sudo[405044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:22 compute-0 sudo[405044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:22 compute-0 sudo[405044]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:22 compute-0 sudo[405070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:01:22 compute-0 sudo[405070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:22 compute-0 sudo[405070]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:22 compute-0 ceph-mon[74496]: pgmap v3886: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:01:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:22 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:01:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/681303664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:22 compute-0 nova_compute[247704]: 2026-01-31 09:01:22.671 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:23 compute-0 nova_compute[247704]: 2026-01-31 09:01:23.008 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:01:23 compute-0 nova_compute[247704]: 2026-01-31 09:01:23.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:23 compute-0 nova_compute[247704]: 2026-01-31 09:01:23.597 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:01:23 compute-0 nova_compute[247704]: 2026-01-31 09:01:23.598 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:01:23 compute-0 nova_compute[247704]: 2026-01-31 09:01:23.598 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:01:23 compute-0 nova_compute[247704]: 2026-01-31 09:01:23.598 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:01:23 compute-0 nova_compute[247704]: 2026-01-31 09:01:23.598 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:01:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1502050569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1841012567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:24 compute-0 ceph-mon[74496]: pgmap v3887: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:01:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3584786760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:01:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842686204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.114 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:01:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:24.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:24.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.256 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.257 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4167MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.258 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.258 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.354 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.355 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.380 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:01:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:01:24 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1794774602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.818 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.827 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.860 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.894 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:01:24 compute-0 nova_compute[247704]: 2026-01-31 09:01:24.895 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:01:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1842686204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1794774602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 83 op/s
Jan 31 09:01:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:25 compute-0 sudo[405141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:25 compute-0 sudo[405141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:25 compute-0 sudo[405141]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:25 compute-0 sudo[405166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:25 compute-0 sudo[405166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:25 compute-0 sudo[405166]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:25 compute-0 nova_compute[247704]: 2026-01-31 09:01:25.895 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:25 compute-0 nova_compute[247704]: 2026-01-31 09:01:25.895 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:26 compute-0 ceph-mon[74496]: pgmap v3888: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 83 op/s
Jan 31 09:01:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:26.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:26.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 124 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 86 op/s
Jan 31 09:01:27 compute-0 nova_compute[247704]: 2026-01-31 09:01:27.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:27 compute-0 nova_compute[247704]: 2026-01-31 09:01:27.673 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:28 compute-0 nova_compute[247704]: 2026-01-31 09:01:28.010 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:28.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:28 compute-0 ceph-mon[74496]: pgmap v3889: 305 pgs: 305 active+clean; 124 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 86 op/s
Jan 31 09:01:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 KiB/s wr, 88 op/s
Jan 31 09:01:29 compute-0 ceph-mon[74496]: pgmap v3890: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 KiB/s wr, 88 op/s
Jan 31 09:01:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/444433900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:01:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:30.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:30.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 82 op/s
Jan 31 09:01:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:32.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:32.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:32 compute-0 nova_compute[247704]: 2026-01-31 09:01:32.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:01:32 compute-0 ceph-mon[74496]: pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 82 op/s
Jan 31 09:01:32 compute-0 nova_compute[247704]: 2026-01-31 09:01:32.674 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:33 compute-0 nova_compute[247704]: 2026-01-31 09:01:33.012 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:01:33 compute-0 ceph-mon[74496]: pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:01:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:34.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:34.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:01:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:36 compute-0 ceph-mon[74496]: pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:01:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:01:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:36.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:36.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Jan 31 09:01:37 compute-0 nova_compute[247704]: 2026-01-31 09:01:37.677 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:37 compute-0 ceph-mon[74496]: pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Jan 31 09:01:37 compute-0 podman[405197]: 2026-01-31 09:01:37.895967395 +0000 UTC m=+0.069478941 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 09:01:38 compute-0 nova_compute[247704]: 2026-01-31 09:01:38.014 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:38.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:38.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:38 compute-0 nova_compute[247704]: 2026-01-31 09:01:38.609 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:38 compute-0 nova_compute[247704]: 2026-01-31 09:01:38.637 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Jan 31 09:01:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:40.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:40.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:40 compute-0 ceph-mon[74496]: pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Jan 31 09:01:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 09:01:41 compute-0 ceph-mon[74496]: pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 09:01:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:42.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:42.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:42 compute-0 nova_compute[247704]: 2026-01-31 09:01:42.678 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:43 compute-0 nova_compute[247704]: 2026-01-31 09:01:43.038 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:44.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:44.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:44 compute-0 ceph-mon[74496]: pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:45 compute-0 sudo[405228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:45 compute-0 sudo[405228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:45 compute-0 sudo[405228]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:45 compute-0 sudo[405253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:01:45 compute-0 sudo[405253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:01:45 compute-0 sudo[405253]: pam_unix(sudo:session): session closed for user root
Jan 31 09:01:45 compute-0 ceph-mon[74496]: pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:46.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:46.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:47 compute-0 nova_compute[247704]: 2026-01-31 09:01:47.679 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:48 compute-0 nova_compute[247704]: 2026-01-31 09:01:48.040 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:48.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:48.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:48 compute-0 ceph-mon[74496]: pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:01:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:01:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:50.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:50.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:50 compute-0 ceph-mon[74496]: pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.518283) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111518384, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 656, "num_deletes": 251, "total_data_size": 899575, "memory_usage": 911768, "flush_reason": "Manual Compaction"}
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111526125, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 879489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84686, "largest_seqno": 85341, "table_properties": {"data_size": 875920, "index_size": 1412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8076, "raw_average_key_size": 19, "raw_value_size": 868866, "raw_average_value_size": 2098, "num_data_blocks": 63, "num_entries": 414, "num_filter_entries": 414, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850066, "oldest_key_time": 1769850066, "file_creation_time": 1769850111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 7902 microseconds, and 3084 cpu microseconds.
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.526195) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 879489 bytes OK
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.526227) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.528477) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.528509) EVENT_LOG_v1 {"time_micros": 1769850111528500, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.528539) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 896136, prev total WAL file size 896136, number of live WAL files 2.
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.529185) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(858KB)], [194(13MB)]
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111529231, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 15085550, "oldest_snapshot_seqno": -1}
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 11018 keys, 13133221 bytes, temperature: kUnknown
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111673337, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 13133221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13063809, "index_size": 40755, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 292533, "raw_average_key_size": 26, "raw_value_size": 12872961, "raw_average_value_size": 1168, "num_data_blocks": 1541, "num_entries": 11018, "num_filter_entries": 11018, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.673672) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13133221 bytes
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.674694) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.6 rd, 91.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 13.5 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(32.1) write-amplify(14.9) OK, records in: 11533, records dropped: 515 output_compression: NoCompression
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.674714) EVENT_LOG_v1 {"time_micros": 1769850111674704, "job": 122, "event": "compaction_finished", "compaction_time_micros": 144196, "compaction_time_cpu_micros": 27773, "output_level": 6, "num_output_files": 1, "total_output_size": 13133221, "num_input_records": 11533, "num_output_records": 11018, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111674863, "job": 122, "event": "table_file_deletion", "file_number": 196}
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850111675898, "job": 122, "event": "table_file_deletion", "file_number": 194}
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.529046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.675939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.675944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.675945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.675947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:01:51.675949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:01:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:52.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:52.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:52 compute-0 ceph-mon[74496]: pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:52 compute-0 nova_compute[247704]: 2026-01-31 09:01:52.681 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:52 compute-0 podman[405282]: 2026-01-31 09:01:52.907131026 +0000 UTC m=+0.076585363 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:01:53 compute-0 nova_compute[247704]: 2026-01-31 09:01:53.041 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2693558940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:01:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2693558940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:01:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:54.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:01:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:54.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:01:54 compute-0 ceph-mon[74496]: pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:01:55 compute-0 ceph-mon[74496]: pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:01:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:01:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:56.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:56.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:01:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:57 compute-0 nova_compute[247704]: 2026-01-31 09:01:57.683 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:58 compute-0 nova_compute[247704]: 2026-01-31 09:01:58.043 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:01:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:58.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:01:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:01:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:58.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:01:58 compute-0 ceph-mon[74496]: pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:01:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:00.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:00.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:00 compute-0 ceph-mon[74496]: pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:02:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:02.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:02.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:02 compute-0 ceph-mon[74496]: pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:02 compute-0 nova_compute[247704]: 2026-01-31 09:02:02.685 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:03 compute-0 nova_compute[247704]: 2026-01-31 09:02:03.045 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:03 compute-0 nova_compute[247704]: 2026-01-31 09:02:03.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:02:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:04.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:04.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:04 compute-0 ceph-mon[74496]: pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:05 compute-0 sudo[405308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:05 compute-0 sudo[405308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:05 compute-0 sudo[405308]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:05 compute-0 sudo[405333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:05 compute-0 sudo[405333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:05 compute-0 sudo[405333]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:06.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:06.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:06 compute-0 ceph-mon[74496]: pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:07 compute-0 nova_compute[247704]: 2026-01-31 09:02:07.688 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/349253811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:07 compute-0 ceph-mon[74496]: pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:08 compute-0 nova_compute[247704]: 2026-01-31 09:02:08.048 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:08.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:02:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:08.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:08 compute-0 podman[405360]: 2026-01-31 09:02:08.892779064 +0000 UTC m=+0.070333461 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 09:02:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:10.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:02:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:10.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:10 compute-0 nova_compute[247704]: 2026-01-31 09:02:10.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:10 compute-0 ceph-mon[74496]: pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:02:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:02:11.230 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:02:11.231 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:02:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:02:11.231 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:02:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 907 KiB/s wr, 15 op/s
Jan 31 09:02:11 compute-0 nova_compute[247704]: 2026-01-31 09:02:11.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:11 compute-0 nova_compute[247704]: 2026-01-31 09:02:11.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:02:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:12.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:02:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:02:12 compute-0 nova_compute[247704]: 2026-01-31 09:02:12.690 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:12 compute-0 ceph-mon[74496]: pgmap v3911: 305 pgs: 305 active+clean; 146 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 907 KiB/s wr, 15 op/s
Jan 31 09:02:13 compute-0 nova_compute[247704]: 2026-01-31 09:02:13.050 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:14 compute-0 ceph-mon[74496]: pgmap v3912: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:14.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:14.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3830990779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:02:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2146965649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:02:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:16.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:16.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:16 compute-0 ceph-mon[74496]: pgmap v3913: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:02:16.969 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:02:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:02:16.970 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:02:16 compute-0 nova_compute[247704]: 2026-01-31 09:02:16.969 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:17 compute-0 nova_compute[247704]: 2026-01-31 09:02:17.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:17 compute-0 nova_compute[247704]: 2026-01-31 09:02:17.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:02:17 compute-0 nova_compute[247704]: 2026-01-31 09:02:17.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:02:17 compute-0 nova_compute[247704]: 2026-01-31 09:02:17.582 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:02:17 compute-0 nova_compute[247704]: 2026-01-31 09:02:17.694 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:17 compute-0 ceph-mon[74496]: pgmap v3914: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:02:17 compute-0 ovn_controller[149457]: 2026-01-31T09:02:17Z|00885|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 09:02:18 compute-0 nova_compute[247704]: 2026-01-31 09:02:18.052 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:18.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:18.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:18 compute-0 nova_compute[247704]: 2026-01-31 09:02:18.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:02:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:20.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a196f0 =====
Jan 31 09:02:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a196f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:20.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:02:20
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'volumes', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log']
Jan 31 09:02:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:02:20 compute-0 ceph-mon[74496]: pgmap v3915: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:02:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:02:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 31 09:02:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:22.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:22.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:22 compute-0 ceph-mon[74496]: pgmap v3916: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 31 09:02:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2934368418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:22 compute-0 nova_compute[247704]: 2026-01-31 09:02:22.697 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:22 compute-0 sudo[405394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:22 compute-0 sudo[405394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:22 compute-0 sudo[405394]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:22 compute-0 sudo[405419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:02:22 compute-0 sudo[405419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:22 compute-0 sudo[405419]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:22 compute-0 sudo[405444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:22 compute-0 sudo[405444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:22 compute-0 sudo[405444]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:22 compute-0 sudo[405469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:02:22 compute-0 sudo[405469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 09:02:23 compute-0 nova_compute[247704]: 2026-01-31 09:02:23.054 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 sudo[405469]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 920 KiB/s wr, 85 op/s
Jan 31 09:02:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1838854291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/279774204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2f7f4c44-feac-4c10-8c2d-5fcad9f4cfc0 does not exist
Jan 31 09:02:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7b4242e8-bad7-46ac-a755-9478de0511e8 does not exist
Jan 31 09:02:23 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9ae432b7-ab63-48ef-9cf6-8793a4b6fad2 does not exist
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:02:23 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:02:23 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:02:23 compute-0 sudo[405524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:23 compute-0 sudo[405524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-0 sudo[405524]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-0 sudo[405555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:02:23 compute-0 sudo[405555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-0 sudo[405555]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-0 podman[405548]: 2026-01-31 09:02:23.839317105 +0000 UTC m=+0.042066094 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 09:02:23 compute-0 sudo[405593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:23 compute-0 sudo[405593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-0 sudo[405593]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:23 compute-0 sudo[405618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:02:23 compute-0 sudo[405618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:02:23.972 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:02:24 compute-0 podman[405684]: 2026-01-31 09:02:24.193241251 +0000 UTC m=+0.037603535 container create 33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 09:02:24 compute-0 systemd[1]: Started libpod-conmon-33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5.scope.
Jan 31 09:02:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:02:24 compute-0 podman[405684]: 2026-01-31 09:02:24.175669374 +0000 UTC m=+0.020031658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:02:24 compute-0 podman[405684]: 2026-01-31 09:02:24.282535873 +0000 UTC m=+0.126898177 container init 33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 09:02:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:24.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:24.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:24 compute-0 podman[405684]: 2026-01-31 09:02:24.289870811 +0000 UTC m=+0.134233085 container start 33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 09:02:24 compute-0 podman[405684]: 2026-01-31 09:02:24.294411541 +0000 UTC m=+0.138773825 container attach 33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 09:02:24 compute-0 wizardly_germain[405700]: 167 167
Jan 31 09:02:24 compute-0 systemd[1]: libpod-33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5.scope: Deactivated successfully.
Jan 31 09:02:24 compute-0 podman[405684]: 2026-01-31 09:02:24.297972808 +0000 UTC m=+0.142335092 container died 33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 09:02:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffbb029a57fda5b9e90f4f0a028a86bc4631d389809878d814ee843cf4496f29-merged.mount: Deactivated successfully.
Jan 31 09:02:24 compute-0 podman[405684]: 2026-01-31 09:02:24.337193432 +0000 UTC m=+0.181555716 container remove 33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 09:02:24 compute-0 systemd[1]: libpod-conmon-33dc2a1a0906047f6302edefb6d7dd6159b1c4507108a430dadae9d6005757c5.scope: Deactivated successfully.
Jan 31 09:02:24 compute-0 podman[405724]: 2026-01-31 09:02:24.480757982 +0000 UTC m=+0.045172359 container create 36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cerf, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 09:02:24 compute-0 systemd[1]: Started libpod-conmon-36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5.scope.
Jan 31 09:02:24 compute-0 podman[405724]: 2026-01-31 09:02:24.461190916 +0000 UTC m=+0.025605333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:02:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf03ebd33c02ef5d2cfe837ca6dd9bdf0931a8732b492b6da905e99009c16bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf03ebd33c02ef5d2cfe837ca6dd9bdf0931a8732b492b6da905e99009c16bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf03ebd33c02ef5d2cfe837ca6dd9bdf0931a8732b492b6da905e99009c16bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf03ebd33c02ef5d2cfe837ca6dd9bdf0931a8732b492b6da905e99009c16bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf03ebd33c02ef5d2cfe837ca6dd9bdf0931a8732b492b6da905e99009c16bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:24 compute-0 nova_compute[247704]: 2026-01-31 09:02:24.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:24 compute-0 podman[405724]: 2026-01-31 09:02:24.574364478 +0000 UTC m=+0.138778855 container init 36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:02:24 compute-0 podman[405724]: 2026-01-31 09:02:24.580657242 +0000 UTC m=+0.145071619 container start 36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cerf, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 09:02:24 compute-0 podman[405724]: 2026-01-31 09:02:24.583642334 +0000 UTC m=+0.148056731 container attach 36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cerf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 09:02:24 compute-0 nova_compute[247704]: 2026-01-31 09:02:24.601 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:02:24 compute-0 nova_compute[247704]: 2026-01-31 09:02:24.601 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:02:24 compute-0 nova_compute[247704]: 2026-01-31 09:02:24.602 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:02:24 compute-0 nova_compute[247704]: 2026-01-31 09:02:24.602 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:02:24 compute-0 nova_compute[247704]: 2026-01-31 09:02:24.602 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:02:24 compute-0 ceph-mon[74496]: pgmap v3917: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 920 KiB/s wr, 85 op/s
Jan 31 09:02:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/859458309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 09:02:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:02:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:02:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:02:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:02:24 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:02:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:02:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888245081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.056 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.245 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.246 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4163MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.246 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.246 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.334 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.334 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.378 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:02:25 compute-0 lucid_cerf[405741]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:02:25 compute-0 lucid_cerf[405741]: --> relative data size: 1.0
Jan 31 09:02:25 compute-0 lucid_cerf[405741]: --> All data devices are unavailable
Jan 31 09:02:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:25 compute-0 systemd[1]: libpod-36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5.scope: Deactivated successfully.
Jan 31 09:02:25 compute-0 podman[405724]: 2026-01-31 09:02:25.42572205 +0000 UTC m=+0.990136427 container died 36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cerf, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf03ebd33c02ef5d2cfe837ca6dd9bdf0931a8732b492b6da905e99009c16bc-merged.mount: Deactivated successfully.
Jan 31 09:02:25 compute-0 podman[405724]: 2026-01-31 09:02:25.481270361 +0000 UTC m=+1.045684738 container remove 36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cerf, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:02:25 compute-0 systemd[1]: libpod-conmon-36b4c2fea8b1c33272270fb08fcde868cd8c357cf298a0fd1dd527a2c87941b5.scope: Deactivated successfully.
Jan 31 09:02:25 compute-0 sudo[405618]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:25 compute-0 sudo[405809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:25 compute-0 sudo[405809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:25 compute-0 sudo[405809]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:25 compute-0 sudo[405834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:02:25 compute-0 sudo[405834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:25 compute-0 sudo[405834]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3888245081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:25 compute-0 ceph-mon[74496]: pgmap v3918: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:25 compute-0 sudo[405859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:25 compute-0 sudo[405859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:25 compute-0 sudo[405859]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:25 compute-0 sudo[405884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:02:25 compute-0 sudo[405884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:02:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200959977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.815 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.822 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.842 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.844 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:02:25 compute-0 nova_compute[247704]: 2026-01-31 09:02:25.844 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:02:25 compute-0 sudo[405923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:25 compute-0 sudo[405923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:25 compute-0 sudo[405923]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:25 compute-0 sudo[405960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:25 compute-0 sudo[405960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:25 compute-0 sudo[405960]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:26 compute-0 podman[405995]: 2026-01-31 09:02:26.029379898 +0000 UTC m=+0.036313524 container create 9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 09:02:26 compute-0 systemd[1]: Started libpod-conmon-9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43.scope.
Jan 31 09:02:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:02:26 compute-0 podman[405995]: 2026-01-31 09:02:26.013994584 +0000 UTC m=+0.020928230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:02:26 compute-0 podman[405995]: 2026-01-31 09:02:26.112638014 +0000 UTC m=+0.119571660 container init 9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 09:02:26 compute-0 podman[405995]: 2026-01-31 09:02:26.118394053 +0000 UTC m=+0.125327679 container start 9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 09:02:26 compute-0 dazzling_brown[406013]: 167 167
Jan 31 09:02:26 compute-0 systemd[1]: libpod-9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43.scope: Deactivated successfully.
Jan 31 09:02:26 compute-0 podman[405995]: 2026-01-31 09:02:26.125518057 +0000 UTC m=+0.132451713 container attach 9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 09:02:26 compute-0 podman[405995]: 2026-01-31 09:02:26.126300776 +0000 UTC m=+0.133234422 container died 9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:02:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-098b7cb3c1e1379a64a3eb73637c2e7cbeeb3217a7a724bc48d9963d6e5ca165-merged.mount: Deactivated successfully.
Jan 31 09:02:26 compute-0 podman[405995]: 2026-01-31 09:02:26.165685394 +0000 UTC m=+0.172619020 container remove 9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:02:26 compute-0 systemd[1]: libpod-conmon-9c4f1c08cc2c83ddf5d04398226af7b90241ed065c2bdd504cb4c6ddcd75be43.scope: Deactivated successfully.
Jan 31 09:02:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:26.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:26.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:26 compute-0 podman[406038]: 2026-01-31 09:02:26.298652347 +0000 UTC m=+0.047916917 container create 116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:02:26 compute-0 systemd[1]: Started libpod-conmon-116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3.scope.
Jan 31 09:02:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:02:26 compute-0 podman[406038]: 2026-01-31 09:02:26.278366863 +0000 UTC m=+0.027631473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a6078e504106eb866413299ae08c2573ac60f8c6cf8fb9b5e3a93f838954a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a6078e504106eb866413299ae08c2573ac60f8c6cf8fb9b5e3a93f838954a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a6078e504106eb866413299ae08c2573ac60f8c6cf8fb9b5e3a93f838954a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a6078e504106eb866413299ae08c2573ac60f8c6cf8fb9b5e3a93f838954a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:26 compute-0 podman[406038]: 2026-01-31 09:02:26.434342436 +0000 UTC m=+0.183607036 container init 116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:02:26 compute-0 podman[406038]: 2026-01-31 09:02:26.440756472 +0000 UTC m=+0.190021042 container start 116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 09:02:26 compute-0 podman[406038]: 2026-01-31 09:02:26.496928377 +0000 UTC m=+0.246192977 container attach 116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldberg, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 09:02:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4200959977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]: {
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:     "0": [
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:         {
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "devices": [
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "/dev/loop3"
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             ],
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "lv_name": "ceph_lv0",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "lv_size": "7511998464",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "name": "ceph_lv0",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "tags": {
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.cluster_name": "ceph",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.crush_device_class": "",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.encrypted": "0",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.osd_id": "0",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.type": "block",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:                 "ceph.vdo": "0"
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             },
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "type": "block",
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:             "vg_name": "ceph_vg0"
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:         }
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]:     ]
Jan 31 09:02:27 compute-0 vigilant_goldberg[406054]: }
Jan 31 09:02:27 compute-0 podman[406038]: 2026-01-31 09:02:27.169988494 +0000 UTC m=+0.919253064 container died 116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:02:27 compute-0 systemd[1]: libpod-116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3.scope: Deactivated successfully.
Jan 31 09:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a6078e504106eb866413299ae08c2573ac60f8c6cf8fb9b5e3a93f838954a9-merged.mount: Deactivated successfully.
Jan 31 09:02:27 compute-0 podman[406038]: 2026-01-31 09:02:27.220704487 +0000 UTC m=+0.969969057 container remove 116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldberg, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:02:27 compute-0 systemd[1]: libpod-conmon-116fba74265d24d7ca3f1fc102a0af6074a546c41c5993940cb1e3cdb04733d3.scope: Deactivated successfully.
Jan 31 09:02:27 compute-0 sudo[405884]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:27 compute-0 sudo[406075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:27 compute-0 sudo[406075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:27 compute-0 sudo[406075]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:27 compute-0 sudo[406100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:02:27 compute-0 sudo[406100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:27 compute-0 sudo[406100]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:27 compute-0 sudo[406125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:27 compute-0 sudo[406125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:27 compute-0 sudo[406125]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:27 compute-0 sudo[406150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:02:27 compute-0 sudo[406150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:27 compute-0 nova_compute[247704]: 2026-01-31 09:02:27.731 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:27 compute-0 ceph-mon[74496]: pgmap v3919: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:27 compute-0 podman[406215]: 2026-01-31 09:02:27.784328183 +0000 UTC m=+0.039830891 container create b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 09:02:27 compute-0 systemd[1]: Started libpod-conmon-b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f.scope.
Jan 31 09:02:27 compute-0 nova_compute[247704]: 2026-01-31 09:02:27.844 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:27 compute-0 nova_compute[247704]: 2026-01-31 09:02:27.845 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:02:27 compute-0 podman[406215]: 2026-01-31 09:02:27.767005091 +0000 UTC m=+0.022507829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:02:27 compute-0 podman[406215]: 2026-01-31 09:02:27.865101416 +0000 UTC m=+0.120604154 container init b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:02:27 compute-0 podman[406215]: 2026-01-31 09:02:27.869827041 +0000 UTC m=+0.125329739 container start b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kirch, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 09:02:27 compute-0 gallant_kirch[406231]: 167 167
Jan 31 09:02:27 compute-0 podman[406215]: 2026-01-31 09:02:27.874694969 +0000 UTC m=+0.130197677 container attach b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kirch, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:02:27 compute-0 conmon[406231]: conmon b58af72bb5697d2f65e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f.scope/container/memory.events
Jan 31 09:02:27 compute-0 systemd[1]: libpod-b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f.scope: Deactivated successfully.
Jan 31 09:02:27 compute-0 podman[406215]: 2026-01-31 09:02:27.875931269 +0000 UTC m=+0.131433977 container died b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 09:02:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc1596f635002a30ffa0cfe73422d963608bd9ced570447d57687ce0341862a7-merged.mount: Deactivated successfully.
Jan 31 09:02:27 compute-0 podman[406215]: 2026-01-31 09:02:27.921095547 +0000 UTC m=+0.176598255 container remove b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kirch, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:02:27 compute-0 systemd[1]: libpod-conmon-b58af72bb5697d2f65e8686452ca16591fc19f6e4b80fc1a34d50d7225c96d5f.scope: Deactivated successfully.
Jan 31 09:02:28 compute-0 podman[406257]: 2026-01-31 09:02:28.042764666 +0000 UTC m=+0.038761673 container create 460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 09:02:28 compute-0 nova_compute[247704]: 2026-01-31 09:02:28.056 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:28 compute-0 systemd[1]: Started libpod-conmon-460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82.scope.
Jan 31 09:02:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61b1c4ef738588a9dd819d1cf191fe6098f9896ea6e94e9db1dda0eb8ab79a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61b1c4ef738588a9dd819d1cf191fe6098f9896ea6e94e9db1dda0eb8ab79a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61b1c4ef738588a9dd819d1cf191fe6098f9896ea6e94e9db1dda0eb8ab79a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61b1c4ef738588a9dd819d1cf191fe6098f9896ea6e94e9db1dda0eb8ab79a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:02:28 compute-0 podman[406257]: 2026-01-31 09:02:28.025450475 +0000 UTC m=+0.021447502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:02:28 compute-0 podman[406257]: 2026-01-31 09:02:28.12063423 +0000 UTC m=+0.116631257 container init 460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:02:28 compute-0 podman[406257]: 2026-01-31 09:02:28.129009864 +0000 UTC m=+0.125006871 container start 460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:02:28 compute-0 podman[406257]: 2026-01-31 09:02:28.133259537 +0000 UTC m=+0.129256554 container attach 460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 09:02:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:28.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:28.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:28 compute-0 nova_compute[247704]: 2026-01-31 09:02:28.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]: {
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]:         "osd_id": 0,
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]:         "type": "bluestore"
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]:     }
Jan 31 09:02:28 compute-0 wizardly_hawking[406273]: }
Jan 31 09:02:28 compute-0 systemd[1]: libpod-460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82.scope: Deactivated successfully.
Jan 31 09:02:28 compute-0 podman[406257]: 2026-01-31 09:02:28.894278021 +0000 UTC m=+0.890275028 container died 460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 09:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d61b1c4ef738588a9dd819d1cf191fe6098f9896ea6e94e9db1dda0eb8ab79a3-merged.mount: Deactivated successfully.
Jan 31 09:02:28 compute-0 podman[406257]: 2026-01-31 09:02:28.94231539 +0000 UTC m=+0.938312397 container remove 460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:02:28 compute-0 systemd[1]: libpod-conmon-460a442dcc1cde02f265eacce22bb358374b961eb1c5e6d7d0792e6d0c20cf82.scope: Deactivated successfully.
Jan 31 09:02:28 compute-0 sudo[406150]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:02:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:02:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1aee6fcc-ffb8-4e4b-ace4-090ed409e054 does not exist
Jan 31 09:02:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2a4c0fee-73c0-4633-b7d2-d056e15f1959 does not exist
Jan 31 09:02:29 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev db1e2050-ede2-4210-b82e-ca9ac889bb1a does not exist
Jan 31 09:02:29 compute-0 sudo[406307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:29 compute-0 sudo[406307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:29 compute-0 sudo[406307]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:29 compute-0 sudo[406332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:02:29 compute-0 sudo[406332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:29 compute-0 sudo[406332]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:02:29 compute-0 ceph-mon[74496]: pgmap v3920: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:02:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:30.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:30.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 189 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 102 op/s
Jan 31 09:02:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:32.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:02:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:32.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:02:32 compute-0 ceph-mon[74496]: pgmap v3921: 305 pgs: 305 active+clean; 189 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 102 op/s
Jan 31 09:02:32 compute-0 nova_compute[247704]: 2026-01-31 09:02:32.734 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:33 compute-0 nova_compute[247704]: 2026-01-31 09:02:33.059 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 31 09:02:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:02:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:34.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:02:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:34 compute-0 ceph-mon[74496]: pgmap v3922: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 31 09:02:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 09:02:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.6488060964544947 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:02:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:02:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:36.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:36.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:36 compute-0 ceph-mon[74496]: pgmap v3923: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 09:02:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 09:02:37 compute-0 nova_compute[247704]: 2026-01-31 09:02:37.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:37 compute-0 nova_compute[247704]: 2026-01-31 09:02:37.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:02:37 compute-0 nova_compute[247704]: 2026-01-31 09:02:37.585 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:02:37 compute-0 nova_compute[247704]: 2026-01-31 09:02:37.736 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:38 compute-0 nova_compute[247704]: 2026-01-31 09:02:38.061 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:02:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:38.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:02:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:38.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:38 compute-0 ceph-mon[74496]: pgmap v3924: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 09:02:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 09:02:39 compute-0 podman[406362]: 2026-01-31 09:02:39.902393926 +0000 UTC m=+0.077899346 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:02:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:02:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:40.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:02:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:02:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:40.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:02:40 compute-0 ceph-mon[74496]: pgmap v3925: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 09:02:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 09:02:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:42.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:42.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:42 compute-0 ceph-mon[74496]: pgmap v3926: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 09:02:42 compute-0 nova_compute[247704]: 2026-01-31 09:02:42.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:43 compute-0 nova_compute[247704]: 2026-01-31 09:02:43.088 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 168 KiB/s rd, 964 KiB/s wr, 33 op/s
Jan 31 09:02:43 compute-0 ceph-mon[74496]: pgmap v3927: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 168 KiB/s rd, 964 KiB/s wr, 33 op/s
Jan 31 09:02:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:44.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:44.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2159508482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:02:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 KiB/s rd, 18 KiB/s wr, 3 op/s
Jan 31 09:02:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:46 compute-0 sudo[406393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:46 compute-0 sudo[406393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:46 compute-0 sudo[406393]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:46 compute-0 sudo[406418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:02:46 compute-0 sudo[406418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:02:46 compute-0 sudo[406418]: pam_unix(sudo:session): session closed for user root
Jan 31 09:02:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:46.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:46.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:46 compute-0 ceph-mon[74496]: pgmap v3928: 305 pgs: 305 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 KiB/s rd, 18 KiB/s wr, 3 op/s
Jan 31 09:02:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 152 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 17 op/s
Jan 31 09:02:47 compute-0 nova_compute[247704]: 2026-01-31 09:02:47.741 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:48 compute-0 nova_compute[247704]: 2026-01-31 09:02:48.089 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:48.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:48 compute-0 ceph-mon[74496]: pgmap v3929: 305 pgs: 305 active+clean; 152 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 18 KiB/s wr, 17 op/s
Jan 31 09:02:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 18 KiB/s wr, 28 op/s
Jan 31 09:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:02:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:02:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:50.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:02:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:02:50 compute-0 ceph-mon[74496]: pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 18 KiB/s wr, 28 op/s
Jan 31 09:02:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 28 op/s
Jan 31 09:02:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:52.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:52 compute-0 ceph-mon[74496]: pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 28 op/s
Jan 31 09:02:52 compute-0 nova_compute[247704]: 2026-01-31 09:02:52.743 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:53 compute-0 nova_compute[247704]: 2026-01-31 09:02:53.092 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Jan 31 09:02:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1663783843' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:02:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1663783843' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:02:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:54.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:54.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:54 compute-0 nova_compute[247704]: 2026-01-31 09:02:54.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:54 compute-0 nova_compute[247704]: 2026-01-31 09:02:54.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:02:54 compute-0 ceph-mon[74496]: pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Jan 31 09:02:54 compute-0 podman[406448]: 2026-01-31 09:02:54.900968822 +0000 UTC m=+0.063255129 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 09:02:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 31 09:02:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:02:55 compute-0 ceph-mon[74496]: pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 31 09:02:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:02:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:56.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:02:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:02:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:56.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:02:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 31 09:02:57 compute-0 nova_compute[247704]: 2026-01-31 09:02:57.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:58 compute-0 nova_compute[247704]: 2026-01-31 09:02:58.094 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:02:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:02:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:58.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:02:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:02:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:02:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:02:58 compute-0 nova_compute[247704]: 2026-01-31 09:02:58.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:02:58 compute-0 ceph-mon[74496]: pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 31 09:02:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.1 KiB/s rd, 341 B/s wr, 11 op/s
Jan 31 09:03:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:00.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:00.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:00 compute-0 ceph-mon[74496]: pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.1 KiB/s rd, 341 B/s wr, 11 op/s
Jan 31 09:03:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:01 compute-0 ceph-mon[74496]: pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:02.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:03:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:02.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:03:02 compute-0 nova_compute[247704]: 2026-01-31 09:03:02.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:03 compute-0 nova_compute[247704]: 2026-01-31 09:03:03.094 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:04.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:04.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:04 compute-0 ceph-mon[74496]: pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:05 compute-0 nova_compute[247704]: 2026-01-31 09:03:05.613 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:06 compute-0 sudo[406471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:06 compute-0 sudo[406471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:06 compute-0 sudo[406471]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:06 compute-0 sudo[406496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:06 compute-0 sudo[406496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:06 compute-0 sudo[406496]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:06.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:06.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:06 compute-0 ceph-mon[74496]: pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:07 compute-0 ceph-mon[74496]: pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:07 compute-0 nova_compute[247704]: 2026-01-31 09:03:07.750 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:08 compute-0 nova_compute[247704]: 2026-01-31 09:03:08.096 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:08.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:08.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:10.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:10 compute-0 ceph-mon[74496]: pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:10 compute-0 podman[406524]: 2026-01-31 09:03:10.92203184 +0000 UTC m=+0.092545661 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 09:03:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:03:11.231 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:03:11.232 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:03:11.233 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:11 compute-0 nova_compute[247704]: 2026-01-31 09:03:11.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:11 compute-0 ceph-mon[74496]: pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:12.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:03:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1424942873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:12 compute-0 nova_compute[247704]: 2026-01-31 09:03:12.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1424942873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:13 compute-0 nova_compute[247704]: 2026-01-31 09:03:13.097 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:13 compute-0 nova_compute[247704]: 2026-01-31 09:03:13.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:13 compute-0 nova_compute[247704]: 2026-01-31 09:03:13.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:03:13 compute-0 ceph-mon[74496]: pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:03:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:14.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:14.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.9 KiB/s rd, 317 KiB/s wr, 9 op/s
Jan 31 09:03:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:16.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:16.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:16 compute-0 ceph-mon[74496]: pgmap v3943: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.9 KiB/s rd, 317 KiB/s wr, 9 op/s
Jan 31 09:03:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 150 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.0 MiB/s wr, 22 op/s
Jan 31 09:03:17 compute-0 ceph-mon[74496]: pgmap v3944: 305 pgs: 305 active+clean; 150 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.0 MiB/s wr, 22 op/s
Jan 31 09:03:17 compute-0 nova_compute[247704]: 2026-01-31 09:03:17.753 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:18 compute-0 nova_compute[247704]: 2026-01-31 09:03:18.098 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:03:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:18.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:03:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:18.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:18 compute-0 nova_compute[247704]: 2026-01-31 09:03:18.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:18 compute-0 nova_compute[247704]: 2026-01-31 09:03:18.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:03:18 compute-0 nova_compute[247704]: 2026-01-31 09:03:18.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:03:18 compute-0 nova_compute[247704]: 2026-01-31 09:03:18.583 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:03:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2283409527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 31 09:03:19 compute-0 nova_compute[247704]: 2026-01-31 09:03:19.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:03:20 compute-0 ceph-mon[74496]: pgmap v3945: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 31 09:03:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2384576305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/26884125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:03:20
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.control']
Jan 31 09:03:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:03:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:20.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:20.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:03:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:03:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3981064920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:22.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:22.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:22 compute-0 ceph-mon[74496]: pgmap v3946: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:03:22 compute-0 nova_compute[247704]: 2026-01-31 09:03:22.790 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:23 compute-0 nova_compute[247704]: 2026-01-31 09:03:23.100 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 09:03:23 compute-0 ceph-mon[74496]: pgmap v3947: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 09:03:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:24.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:24.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:24 compute-0 nova_compute[247704]: 2026-01-31 09:03:24.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:24 compute-0 nova_compute[247704]: 2026-01-31 09:03:24.585 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:24 compute-0 nova_compute[247704]: 2026-01-31 09:03:24.585 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:24 compute-0 nova_compute[247704]: 2026-01-31 09:03:24.586 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:24 compute-0 nova_compute[247704]: 2026-01-31 09:03:24.586 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:03:24 compute-0 nova_compute[247704]: 2026-01-31 09:03:24.586 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/500537900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3264900271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:03:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/485732851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.028 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.180 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.182 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4226MB free_disk=20.967517852783203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.182 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.182 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.256 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.256 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.280 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:03:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 31 09:03:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:03:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663870704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.710 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.716 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:03:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:03:25.737 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.737 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:03:25.739 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.741 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.743 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:03:25 compute-0 nova_compute[247704]: 2026-01-31 09:03:25.743 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:03:25 compute-0 podman[406602]: 2026-01-31 09:03:25.893812605 +0000 UTC m=+0.063770491 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 09:03:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/485732851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:25 compute-0 ceph-mon[74496]: pgmap v3948: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 31 09:03:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3663870704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:26 compute-0 sudo[406622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:26 compute-0 sudo[406622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:26 compute-0 sudo[406622]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:26 compute-0 sudo[406647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:26 compute-0 sudo[406647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:26 compute-0 sudo[406647]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:03:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:26.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:03:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:03:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:26.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:03:26 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:03:26.742 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:03:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 63 op/s
Jan 31 09:03:27 compute-0 nova_compute[247704]: 2026-01-31 09:03:27.745 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:27 compute-0 nova_compute[247704]: 2026-01-31 09:03:27.745 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:27 compute-0 nova_compute[247704]: 2026-01-31 09:03:27.792 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:28 compute-0 nova_compute[247704]: 2026-01-31 09:03:28.102 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:28.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:28.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:28 compute-0 ceph-mon[74496]: pgmap v3949: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 63 op/s
Jan 31 09:03:29 compute-0 sudo[406674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:29 compute-0 sudo[406674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:29 compute-0 sudo[406674]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:29 compute-0 sudo[406699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:03:29 compute-0 sudo[406699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:29 compute-0 sudo[406699]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:29 compute-0 sudo[406724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:29 compute-0 sudo[406724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:29 compute-0 sudo[406724]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 765 KiB/s wr, 77 op/s
Jan 31 09:03:29 compute-0 sudo[406749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:03:29 compute-0 sudo[406749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:29 compute-0 sudo[406749]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:29 compute-0 ceph-mon[74496]: pgmap v3950: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 765 KiB/s wr, 77 op/s
Jan 31 09:03:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 09:03:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 09:03:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:03:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:03:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:03:29 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:03:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:03:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 63a30ef9-fbee-486f-b989-789a367394a3 does not exist
Jan 31 09:03:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 06869d22-c186-4881-860d-040ee67fe1df does not exist
Jan 31 09:03:30 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ba715f6f-3e72-44f5-8e1f-978bf1503988 does not exist
Jan 31 09:03:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:03:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:03:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:03:30 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:03:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:03:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:03:30 compute-0 sudo[406805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:30 compute-0 sudo[406805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:30 compute-0 sudo[406805]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:30 compute-0 sudo[406830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:03:30 compute-0 sudo[406830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:30 compute-0 sudo[406830]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:30 compute-0 sudo[406855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:30 compute-0 sudo[406855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:30 compute-0 sudo[406855]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:30 compute-0 sudo[406880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:03:30 compute-0 sudo[406880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:30.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:30.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:30 compute-0 podman[406945]: 2026-01-31 09:03:30.495710895 +0000 UTC m=+0.036046468 container create 02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 09:03:30 compute-0 systemd[1]: Started libpod-conmon-02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a.scope.
Jan 31 09:03:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:03:30 compute-0 nova_compute[247704]: 2026-01-31 09:03:30.557 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:30 compute-0 podman[406945]: 2026-01-31 09:03:30.573792084 +0000 UTC m=+0.114127687 container init 02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:03:30 compute-0 podman[406945]: 2026-01-31 09:03:30.478503247 +0000 UTC m=+0.018838830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:03:30 compute-0 podman[406945]: 2026-01-31 09:03:30.580906367 +0000 UTC m=+0.121241930 container start 02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ganguly, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 09:03:30 compute-0 podman[406945]: 2026-01-31 09:03:30.584440212 +0000 UTC m=+0.124775805 container attach 02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ganguly, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 09:03:30 compute-0 systemd[1]: libpod-02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a.scope: Deactivated successfully.
Jan 31 09:03:30 compute-0 musing_ganguly[406962]: 167 167
Jan 31 09:03:30 compute-0 conmon[406962]: conmon 02ab69c256a55ffed205 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a.scope/container/memory.events
Jan 31 09:03:30 compute-0 podman[406945]: 2026-01-31 09:03:30.587295402 +0000 UTC m=+0.127630965 container died 02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ganguly, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 09:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-91d919c89a93f249fb96e77c31f2c88abae02965af9459e90dec7dc6aba98e79-merged.mount: Deactivated successfully.
Jan 31 09:03:30 compute-0 podman[406945]: 2026-01-31 09:03:30.629289553 +0000 UTC m=+0.169625106 container remove 02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:03:30 compute-0 systemd[1]: libpod-conmon-02ab69c256a55ffed205ebc525a91c3ef77a039926a1f9634df54ac2821c253a.scope: Deactivated successfully.
Jan 31 09:03:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:30 compute-0 podman[406985]: 2026-01-31 09:03:30.799261887 +0000 UTC m=+0.054019925 container create 9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 09:03:30 compute-0 systemd[1]: Started libpod-conmon-9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1.scope.
Jan 31 09:03:30 compute-0 podman[406985]: 2026-01-31 09:03:30.772954767 +0000 UTC m=+0.027712885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:03:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d2bf07613bfc7a0775f6707d8d7d0e5084876d3056e9d88d40eaadafd9d427/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d2bf07613bfc7a0775f6707d8d7d0e5084876d3056e9d88d40eaadafd9d427/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d2bf07613bfc7a0775f6707d8d7d0e5084876d3056e9d88d40eaadafd9d427/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d2bf07613bfc7a0775f6707d8d7d0e5084876d3056e9d88d40eaadafd9d427/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90d2bf07613bfc7a0775f6707d8d7d0e5084876d3056e9d88d40eaadafd9d427/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:30 compute-0 podman[406985]: 2026-01-31 09:03:30.909406155 +0000 UTC m=+0.164164213 container init 9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_stonebraker, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:03:30 compute-0 podman[406985]: 2026-01-31 09:03:30.918821983 +0000 UTC m=+0.173580031 container start 9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:03:30 compute-0 podman[406985]: 2026-01-31 09:03:30.924229505 +0000 UTC m=+0.178987543 container attach 9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 09:03:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 09:03:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:03:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:03:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:03:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:03:30 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:03:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 31 09:03:31 compute-0 sweet_stonebraker[407001]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:03:31 compute-0 sweet_stonebraker[407001]: --> relative data size: 1.0
Jan 31 09:03:31 compute-0 sweet_stonebraker[407001]: --> All data devices are unavailable
Jan 31 09:03:31 compute-0 systemd[1]: libpod-9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1.scope: Deactivated successfully.
Jan 31 09:03:31 compute-0 podman[406985]: 2026-01-31 09:03:31.730524361 +0000 UTC m=+0.985282419 container died 9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_stonebraker, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 09:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-90d2bf07613bfc7a0775f6707d8d7d0e5084876d3056e9d88d40eaadafd9d427-merged.mount: Deactivated successfully.
Jan 31 09:03:31 compute-0 podman[406985]: 2026-01-31 09:03:31.782492835 +0000 UTC m=+1.037250873 container remove 9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_stonebraker, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:03:31 compute-0 systemd[1]: libpod-conmon-9f8a4af48f00fe016e6b2331d08696c1819b6c9bf3c146c50dd362e391ad8df1.scope: Deactivated successfully.
Jan 31 09:03:31 compute-0 sudo[406880]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:31 compute-0 sudo[407031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:31 compute-0 sudo[407031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:31 compute-0 sudo[407031]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:31 compute-0 sudo[407056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:03:31 compute-0 sudo[407056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:31 compute-0 sudo[407056]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:31 compute-0 sudo[407081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:31 compute-0 sudo[407081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:31 compute-0 sudo[407081]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:32 compute-0 sudo[407106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:03:32 compute-0 sudo[407106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:32 compute-0 ceph-mon[74496]: pgmap v3951: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 31 09:03:32 compute-0 podman[407172]: 2026-01-31 09:03:32.348534608 +0000 UTC m=+0.050043857 container create a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:03:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:32.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:32.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:32 compute-0 systemd[1]: Started libpod-conmon-a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553.scope.
Jan 31 09:03:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:03:32 compute-0 podman[407172]: 2026-01-31 09:03:32.426935735 +0000 UTC m=+0.128445014 container init a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 31 09:03:32 compute-0 podman[407172]: 2026-01-31 09:03:32.334055767 +0000 UTC m=+0.035565046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:03:32 compute-0 podman[407172]: 2026-01-31 09:03:32.434494879 +0000 UTC m=+0.136004118 container start a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 09:03:32 compute-0 podman[407172]: 2026-01-31 09:03:32.437678986 +0000 UTC m=+0.139188255 container attach a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:03:32 compute-0 fervent_euler[407189]: 167 167
Jan 31 09:03:32 compute-0 systemd[1]: libpod-a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553.scope: Deactivated successfully.
Jan 31 09:03:32 compute-0 podman[407172]: 2026-01-31 09:03:32.439985243 +0000 UTC m=+0.141494492 container died a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-961b1fc4c29752331d4348538cb1149507ec7380e7308a2f0c491a84eef858c1-merged.mount: Deactivated successfully.
Jan 31 09:03:32 compute-0 podman[407172]: 2026-01-31 09:03:32.474703127 +0000 UTC m=+0.176212406 container remove a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:03:32 compute-0 systemd[1]: libpod-conmon-a91f68386efde4e74bbe8d2a95559e904bcb9acb85f7201b3739704312170553.scope: Deactivated successfully.
Jan 31 09:03:32 compute-0 podman[407213]: 2026-01-31 09:03:32.623537156 +0000 UTC m=+0.042631228 container create a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 09:03:32 compute-0 systemd[1]: Started libpod-conmon-a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd.scope.
Jan 31 09:03:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bcd75153239bff7c34632e9712eb0948e7238fc3d55097685d4a9d9016925f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bcd75153239bff7c34632e9712eb0948e7238fc3d55097685d4a9d9016925f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bcd75153239bff7c34632e9712eb0948e7238fc3d55097685d4a9d9016925f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bcd75153239bff7c34632e9712eb0948e7238fc3d55097685d4a9d9016925f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:32 compute-0 podman[407213]: 2026-01-31 09:03:32.697757701 +0000 UTC m=+0.116851783 container init a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 09:03:32 compute-0 podman[407213]: 2026-01-31 09:03:32.60806271 +0000 UTC m=+0.027156802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:03:32 compute-0 podman[407213]: 2026-01-31 09:03:32.705371455 +0000 UTC m=+0.124465527 container start a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 09:03:32 compute-0 podman[407213]: 2026-01-31 09:03:32.708614685 +0000 UTC m=+0.127708767 container attach a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:03:32 compute-0 nova_compute[247704]: 2026-01-31 09:03:32.795 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:33 compute-0 nova_compute[247704]: 2026-01-31 09:03:33.104 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:03:33 compute-0 charming_poitras[407227]: {
Jan 31 09:03:33 compute-0 charming_poitras[407227]:     "0": [
Jan 31 09:03:33 compute-0 charming_poitras[407227]:         {
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "devices": [
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "/dev/loop3"
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             ],
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "lv_name": "ceph_lv0",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "lv_size": "7511998464",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "name": "ceph_lv0",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "tags": {
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.cluster_name": "ceph",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.crush_device_class": "",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.encrypted": "0",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.osd_id": "0",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.type": "block",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:                 "ceph.vdo": "0"
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             },
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "type": "block",
Jan 31 09:03:33 compute-0 charming_poitras[407227]:             "vg_name": "ceph_vg0"
Jan 31 09:03:33 compute-0 charming_poitras[407227]:         }
Jan 31 09:03:33 compute-0 charming_poitras[407227]:     ]
Jan 31 09:03:33 compute-0 charming_poitras[407227]: }
Jan 31 09:03:33 compute-0 systemd[1]: libpod-a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd.scope: Deactivated successfully.
Jan 31 09:03:33 compute-0 conmon[407227]: conmon a504ebb08e0ddb413afd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd.scope/container/memory.events
Jan 31 09:03:33 compute-0 podman[407213]: 2026-01-31 09:03:33.501050914 +0000 UTC m=+0.920144996 container died a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bcd75153239bff7c34632e9712eb0948e7238fc3d55097685d4a9d9016925f8-merged.mount: Deactivated successfully.
Jan 31 09:03:33 compute-0 podman[407213]: 2026-01-31 09:03:33.568990755 +0000 UTC m=+0.988084837 container remove a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:03:33 compute-0 systemd[1]: libpod-conmon-a504ebb08e0ddb413afd9ca0f82216fa7a9b8ed261e56940c2649ca9263dcbdd.scope: Deactivated successfully.
Jan 31 09:03:33 compute-0 sudo[407106]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:33 compute-0 sudo[407250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:33 compute-0 sudo[407250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:33 compute-0 sudo[407250]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:33 compute-0 sudo[407275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:03:33 compute-0 sudo[407275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:33 compute-0 sudo[407275]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:33 compute-0 sudo[407300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:33 compute-0 sudo[407300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:33 compute-0 sudo[407300]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:33 compute-0 sudo[407325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:03:33 compute-0 sudo[407325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:34 compute-0 podman[407390]: 2026-01-31 09:03:34.118096507 +0000 UTC m=+0.039471010 container create f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_leavitt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:03:34 compute-0 systemd[1]: Started libpod-conmon-f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9.scope.
Jan 31 09:03:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:03:34 compute-0 podman[407390]: 2026-01-31 09:03:34.101708679 +0000 UTC m=+0.023083212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:03:34 compute-0 podman[407390]: 2026-01-31 09:03:34.200252915 +0000 UTC m=+0.121627438 container init f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_leavitt, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 09:03:34 compute-0 podman[407390]: 2026-01-31 09:03:34.205755569 +0000 UTC m=+0.127130062 container start f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:03:34 compute-0 recursing_leavitt[407407]: 167 167
Jan 31 09:03:34 compute-0 podman[407390]: 2026-01-31 09:03:34.211190421 +0000 UTC m=+0.132564934 container attach f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 09:03:34 compute-0 systemd[1]: libpod-f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9.scope: Deactivated successfully.
Jan 31 09:03:34 compute-0 podman[407390]: 2026-01-31 09:03:34.213377944 +0000 UTC m=+0.134752457 container died f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_leavitt, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 09:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae57909415f84aa53b623543b8a2c8d13f7b58756e736068f5294de46022f8c2-merged.mount: Deactivated successfully.
Jan 31 09:03:34 compute-0 podman[407390]: 2026-01-31 09:03:34.252115596 +0000 UTC m=+0.173490099 container remove f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:03:34 compute-0 systemd[1]: libpod-conmon-f9331a2319d8b06bd4fbbff0adafb00ec8ede1bc9cdbe7bf3579bf0af34797f9.scope: Deactivated successfully.
Jan 31 09:03:34 compute-0 podman[407432]: 2026-01-31 09:03:34.375403874 +0000 UTC m=+0.042509784 container create 95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_zhukovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:03:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:34.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:34.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:34 compute-0 systemd[1]: Started libpod-conmon-95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2.scope.
Jan 31 09:03:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fed53480d969bd46935adcb636975600c4b89c980e2aa6aa749258d5a7526f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fed53480d969bd46935adcb636975600c4b89c980e2aa6aa749258d5a7526f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fed53480d969bd46935adcb636975600c4b89c980e2aa6aa749258d5a7526f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fed53480d969bd46935adcb636975600c4b89c980e2aa6aa749258d5a7526f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:03:34 compute-0 podman[407432]: 2026-01-31 09:03:34.357518799 +0000 UTC m=+0.024624729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:03:34 compute-0 podman[407432]: 2026-01-31 09:03:34.453712178 +0000 UTC m=+0.120818128 container init 95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 09:03:34 compute-0 podman[407432]: 2026-01-31 09:03:34.461328373 +0000 UTC m=+0.128434283 container start 95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_zhukovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 09:03:34 compute-0 podman[407432]: 2026-01-31 09:03:34.464842089 +0000 UTC m=+0.131948039 container attach 95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_zhukovsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 09:03:34 compute-0 ceph-mon[74496]: pgmap v3952: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:03:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/123604271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]: {
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]:         "osd_id": 0,
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]:         "type": "bluestore"
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]:     }
Jan 31 09:03:35 compute-0 competent_zhukovsky[407449]: }
Jan 31 09:03:35 compute-0 systemd[1]: libpod-95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2.scope: Deactivated successfully.
Jan 31 09:03:35 compute-0 podman[407432]: 2026-01-31 09:03:35.305449139 +0000 UTC m=+0.972555079 container died 95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:03:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fed53480d969bd46935adcb636975600c4b89c980e2aa6aa749258d5a7526f9-merged.mount: Deactivated successfully.
Jan 31 09:03:35 compute-0 podman[407432]: 2026-01-31 09:03:35.362480416 +0000 UTC m=+1.029586326 container remove 95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 09:03:35 compute-0 systemd[1]: libpod-conmon-95183215ad64f87ab9acc5766247868ff360dea49696a83acb559a14c0b345b2.scope: Deactivated successfully.
Jan 31 09:03:35 compute-0 sudo[407325]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:03:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 183 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 80 op/s
Jan 31 09:03:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:03:35 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7c1be46c-bea0-49b0-9a8d-2b0a7fd3f211 does not exist
Jan 31 09:03:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 903de38e-cf1a-47eb-871d-b65ec8b6c6d8 does not exist
Jan 31 09:03:35 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b662008c-472d-40eb-aa50-d81eee74c9f9 does not exist
Jan 31 09:03:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:35 compute-0 sudo[407481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:35 compute-0 sudo[407481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:35 compute-0 sudo[407481]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:35 compute-0 sudo[407506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:03:35 compute-0 sudo[407506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:35 compute-0 sudo[407506]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015583560161920714 of space, bias 1.0, pg target 0.4675068048576214 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:03:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:03:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:36.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:36.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:36 compute-0 nova_compute[247704]: 2026-01-31 09:03:36.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:03:36 compute-0 ceph-mon[74496]: pgmap v3953: 305 pgs: 305 active+clean; 183 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 80 op/s
Jan 31 09:03:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:36 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:03:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 91 op/s
Jan 31 09:03:37 compute-0 nova_compute[247704]: 2026-01-31 09:03:37.833 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:37 compute-0 ceph-mon[74496]: pgmap v3954: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 91 op/s
Jan 31 09:03:38 compute-0 nova_compute[247704]: 2026-01-31 09:03:38.105 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:38.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1318176720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3775936680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:03:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 92 op/s
Jan 31 09:03:39 compute-0 ceph-mon[74496]: pgmap v3955: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 92 op/s
Jan 31 09:03:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:40.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 09:03:41 compute-0 ceph-mon[74496]: pgmap v3956: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 09:03:41 compute-0 podman[407534]: 2026-01-31 09:03:41.916135974 +0000 UTC m=+0.086940275 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 09:03:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:42.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:42 compute-0 nova_compute[247704]: 2026-01-31 09:03:42.836 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:43 compute-0 nova_compute[247704]: 2026-01-31 09:03:43.106 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 31 09:03:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:03:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:44.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:03:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:03:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:44.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:03:44 compute-0 ceph-mon[74496]: pgmap v3957: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Jan 31 09:03:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 31 09:03:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:46.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:46 compute-0 sudo[407561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:46 compute-0 sudo[407561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:46 compute-0 sudo[407561]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:46 compute-0 sudo[407587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:03:46 compute-0 sudo[407587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:03:46 compute-0 sudo[407587]: pam_unix(sudo:session): session closed for user root
Jan 31 09:03:46 compute-0 ceph-mon[74496]: pgmap v3958: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 31 09:03:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 147 op/s
Jan 31 09:03:47 compute-0 ceph-mon[74496]: pgmap v3959: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 147 op/s
Jan 31 09:03:47 compute-0 nova_compute[247704]: 2026-01-31 09:03:47.838 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:48 compute-0 nova_compute[247704]: 2026-01-31 09:03:48.108 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:03:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:48.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:03:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:03:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:48.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:03:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 124 op/s
Jan 31 09:03:49 compute-0 ceph-mon[74496]: pgmap v3960: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.7 MiB/s wr, 124 op/s
Jan 31 09:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:03:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:03:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:50.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:50.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 358 KiB/s wr, 97 op/s
Jan 31 09:03:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:52.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:52 compute-0 ceph-mon[74496]: pgmap v3961: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 358 KiB/s wr, 97 op/s
Jan 31 09:03:52 compute-0 nova_compute[247704]: 2026-01-31 09:03:52.841 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:53 compute-0 nova_compute[247704]: 2026-01-31 09:03:53.111 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 247 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 245 KiB/s wr, 74 op/s
Jan 31 09:03:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1116350203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:03:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1116350203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:03:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:54.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:54 compute-0 ceph-mon[74496]: pgmap v3962: 305 pgs: 305 active+clean; 247 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 245 KiB/s wr, 74 op/s
Jan 31 09:03:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 256 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 762 KiB/s wr, 60 op/s
Jan 31 09:03:55 compute-0 ceph-mon[74496]: pgmap v3963: 305 pgs: 305 active+clean; 256 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 762 KiB/s wr, 60 op/s
Jan 31 09:03:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:03:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:56.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:03:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:03:56 compute-0 podman[407617]: 2026-01-31 09:03:56.88504332 +0000 UTC m=+0.053995775 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:03:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 701 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Jan 31 09:03:57 compute-0 nova_compute[247704]: 2026-01-31 09:03:57.843 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:58 compute-0 nova_compute[247704]: 2026-01-31 09:03:58.111 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:03:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:03:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:58.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:03:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:03:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:03:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:03:58 compute-0 ceph-mon[74496]: pgmap v3964: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 701 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Jan 31 09:03:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:04:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:00.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:00.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:00 compute-0 ceph-mon[74496]: pgmap v3965: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:04:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:04:01 compute-0 ceph-mon[74496]: pgmap v3966: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:04:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:02.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:02.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:02 compute-0 nova_compute[247704]: 2026-01-31 09:04:02.846 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:03 compute-0 nova_compute[247704]: 2026-01-31 09:04:03.112 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:04:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:04.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:04.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:04 compute-0 ceph-mon[74496]: pgmap v3967: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:04:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 31 09:04:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:06.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:06 compute-0 sudo[407642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:06 compute-0 sudo[407642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:06 compute-0 sudo[407642]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:06 compute-0 sudo[407667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:06 compute-0 sudo[407667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:06 compute-0 sudo[407667]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:06 compute-0 ceph-mon[74496]: pgmap v3968: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 31 09:04:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 237 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Jan 31 09:04:07 compute-0 nova_compute[247704]: 2026-01-31 09:04:07.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:07 compute-0 ceph-mon[74496]: pgmap v3969: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 237 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Jan 31 09:04:07 compute-0 nova_compute[247704]: 2026-01-31 09:04:07.848 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:08 compute-0 nova_compute[247704]: 2026-01-31 09:04:08.114 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:08.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:08.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 761 KiB/s wr, 30 op/s
Jan 31 09:04:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:10.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:11 compute-0 ceph-mon[74496]: pgmap v3970: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 761 KiB/s wr, 30 op/s
Jan 31 09:04:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:04:11.233 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:04:11.233 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:04:11.234 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 233 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 31 KiB/s wr, 29 op/s
Jan 31 09:04:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3121644942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:12 compute-0 ceph-mon[74496]: pgmap v3971: 305 pgs: 305 active+clean; 233 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 31 KiB/s wr, 29 op/s
Jan 31 09:04:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:12.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:12.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:12 compute-0 nova_compute[247704]: 2026-01-31 09:04:12.850 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:12 compute-0 podman[407696]: 2026-01-31 09:04:12.892701673 +0000 UTC m=+0.067833321 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 31 09:04:13 compute-0 nova_compute[247704]: 2026-01-31 09:04:13.116 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 31 09:04:13 compute-0 nova_compute[247704]: 2026-01-31 09:04:13.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:13 compute-0 nova_compute[247704]: 2026-01-31 09:04:13.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:13 compute-0 nova_compute[247704]: 2026-01-31 09:04:13.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:04:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:14.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:14.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:14 compute-0 ceph-mon[74496]: pgmap v3972: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 31 09:04:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 8.2 KiB/s wr, 38 op/s
Jan 31 09:04:15 compute-0 ceph-mon[74496]: pgmap v3973: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 8.2 KiB/s wr, 38 op/s
Jan 31 09:04:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:16.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:16.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 138 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 8.7 KiB/s wr, 44 op/s
Jan 31 09:04:17 compute-0 nova_compute[247704]: 2026-01-31 09:04:17.851 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:18 compute-0 nova_compute[247704]: 2026-01-31 09:04:18.118 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:18.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:18.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:18 compute-0 ceph-mon[74496]: pgmap v3974: 305 pgs: 305 active+clean; 138 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 8.7 KiB/s wr, 44 op/s
Jan 31 09:04:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:04:19.208 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:04:19 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:04:19.209 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:04:19 compute-0 nova_compute[247704]: 2026-01-31 09:04:19.251 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 9.0 KiB/s wr, 55 op/s
Jan 31 09:04:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1720794115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:04:20
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'images']
Jan 31 09:04:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:04:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:20.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:20.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:20 compute-0 nova_compute[247704]: 2026-01-31 09:04:20.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:20 compute-0 nova_compute[247704]: 2026-01-31 09:04:20.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:04:20 compute-0 nova_compute[247704]: 2026-01-31 09:04:20.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:04:20 compute-0 nova_compute[247704]: 2026-01-31 09:04:20.578 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:04:20 compute-0 nova_compute[247704]: 2026-01-31 09:04:20.579 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:20 compute-0 ceph-mon[74496]: pgmap v3975: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 9.0 KiB/s wr, 55 op/s
Jan 31 09:04:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2909094674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:04:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Jan 31 09:04:21 compute-0 ceph-mon[74496]: pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 6.7 KiB/s wr, 55 op/s
Jan 31 09:04:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:22.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:22.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:22 compute-0 nova_compute[247704]: 2026-01-31 09:04:22.854 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:23 compute-0 nova_compute[247704]: 2026-01-31 09:04:23.119 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:04:23.211 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:04:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 31 op/s
Jan 31 09:04:24 compute-0 ceph-mon[74496]: pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 KiB/s wr, 31 op/s
Jan 31 09:04:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:24.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:24.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 09:04:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:26 compute-0 ceph-mon[74496]: pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Jan 31 09:04:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:26.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:26.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:26 compute-0 nova_compute[247704]: 2026-01-31 09:04:26.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:26 compute-0 nova_compute[247704]: 2026-01-31 09:04:26.596 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:26 compute-0 nova_compute[247704]: 2026-01-31 09:04:26.596 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:26 compute-0 nova_compute[247704]: 2026-01-31 09:04:26.597 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:26 compute-0 nova_compute[247704]: 2026-01-31 09:04:26.597 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:04:26 compute-0 nova_compute[247704]: 2026-01-31 09:04:26.597 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:26 compute-0 sudo[407730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:26 compute-0 sudo[407730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:26 compute-0 sudo[407730]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:26 compute-0 sudo[407755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:26 compute-0 sudo[407755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:26 compute-0 sudo[407755]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:04:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1397250279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.035 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.183 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.185 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4208MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.185 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.185 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.250 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.251 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.272 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:04:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3025758037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3759234867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1397250279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 31 09:04:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:04:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358740387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.731 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.737 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.754 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.757 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.757 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:04:27 compute-0 nova_compute[247704]: 2026-01-31 09:04:27.856 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:27 compute-0 podman[407823]: 2026-01-31 09:04:27.873040626 +0000 UTC m=+0.050642863 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 09:04:28 compute-0 nova_compute[247704]: 2026-01-31 09:04:28.121 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:28.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:28.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:28 compute-0 ceph-mon[74496]: pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 31 09:04:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2358740387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:28 compute-0 nova_compute[247704]: 2026-01-31 09:04:28.757 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:28 compute-0 nova_compute[247704]: 2026-01-31 09:04:28.757 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 341 B/s wr, 11 op/s
Jan 31 09:04:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:30.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:30.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:30 compute-0 ceph-mon[74496]: pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 341 B/s wr, 11 op/s
Jan 31 09:04:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:04:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 31 09:04:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:32.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 31 09:04:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:32.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:32 compute-0 nova_compute[247704]: 2026-01-31 09:04:32.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:04:32 compute-0 ceph-mon[74496]: pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:04:32 compute-0 nova_compute[247704]: 2026-01-31 09:04:32.857 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:33 compute-0 nova_compute[247704]: 2026-01-31 09:04:33.123 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:33 compute-0 ceph-mon[74496]: pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:04:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 19K writes, 86K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
                                           Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1557 writes, 6622 keys, 1557 commit groups, 1.0 writes per commit group, ingest: 10.40 MB, 0.02 MB/s
                                           Interval WAL: 1557 writes, 1557 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.7      2.63              0.31        61    0.043       0      0       0.0       0.0
                                             L6      1/0   12.52 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5     65.1     56.0     11.25              1.60        60    0.187    483K    32K       0.0       0.0
                                            Sum      1/0   12.52 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.5     52.8     53.7     13.88              1.90       121    0.115    483K    32K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.5     76.3     74.9      0.91              0.15        10    0.091     56K   2560       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     65.1     56.0     11.25              1.60        60    0.187    483K    32K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.8      2.62              0.31        60    0.044       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.112, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.73 GB write, 0.10 MB/s write, 0.72 GB read, 0.10 MB/s read, 13.9 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 79.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000557 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4591,75.78 MB,24.9285%) FilterBlock(122,1.30 MB,0.428044%) IndexBlock(122,2.12 MB,0.697548%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 09:04:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:34.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:34.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:36 compute-0 sudo[407846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:36 compute-0 sudo[407846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-0 sudo[407846]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-0 sudo[407871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:04:36 compute-0 sudo[407871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-0 sudo[407871]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-0 sudo[407896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:36 compute-0 sudo[407896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-0 sudo[407896]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-0 sudo[407921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:04:36 compute-0 sudo[407921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:04:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:36.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:36.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:36 compute-0 sudo[407921]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:04:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:04:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:04:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:04:36 compute-0 ceph-mon[74496]: pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:04:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0aecf073-cc5c-4452-ab6f-d8f3abab03d6 does not exist
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 328f2537-597f-4af7-ab7e-8d932cc87099 does not exist
Jan 31 09:04:36 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f7359127-ccc9-4bed-b281-1fa598f098fe does not exist
Jan 31 09:04:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:04:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:04:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:04:36 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:04:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:04:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:04:36 compute-0 sudo[407977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:36 compute-0 sudo[407977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-0 sudo[407977]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-0 sudo[408002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:04:36 compute-0 sudo[408002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-0 sudo[408002]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-0 sudo[408027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:36 compute-0 sudo[408027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:36 compute-0 sudo[408027]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:36 compute-0 sudo[408052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:04:36 compute-0 sudo[408052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:37 compute-0 podman[408117]: 2026-01-31 09:04:37.196002353 +0000 UTC m=+0.043992781 container create 948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:04:37 compute-0 systemd[1]: Started libpod-conmon-948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd.scope.
Jan 31 09:04:37 compute-0 podman[408117]: 2026-01-31 09:04:37.174784127 +0000 UTC m=+0.022774575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:04:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:04:37 compute-0 podman[408117]: 2026-01-31 09:04:37.291496345 +0000 UTC m=+0.139486803 container init 948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lewin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:04:37 compute-0 podman[408117]: 2026-01-31 09:04:37.299651293 +0000 UTC m=+0.147641721 container start 948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 09:04:37 compute-0 podman[408117]: 2026-01-31 09:04:37.303158909 +0000 UTC m=+0.151149357 container attach 948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:04:37 compute-0 systemd[1]: libpod-948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd.scope: Deactivated successfully.
Jan 31 09:04:37 compute-0 compassionate_lewin[408133]: 167 167
Jan 31 09:04:37 compute-0 conmon[408133]: conmon 948b9dc7a3af2dc2f4aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd.scope/container/memory.events
Jan 31 09:04:37 compute-0 podman[408117]: 2026-01-31 09:04:37.308903848 +0000 UTC m=+0.156894296 container died 948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:04:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-4efa87821ea87bc4d6ef56bd5fcc19accd19fe28b4001d6bcc598b64ec75be6d-merged.mount: Deactivated successfully.
Jan 31 09:04:37 compute-0 podman[408117]: 2026-01-31 09:04:37.345537119 +0000 UTC m=+0.193527547 container remove 948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 09:04:37 compute-0 systemd[1]: libpod-conmon-948b9dc7a3af2dc2f4aa295a3bae9868a23ea3788ab33f18ba2941fb0d08b1fd.scope: Deactivated successfully.
Jan 31 09:04:37 compute-0 podman[408155]: 2026-01-31 09:04:37.466405518 +0000 UTC m=+0.037885163 container create 4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:04:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:37 compute-0 systemd[1]: Started libpod-conmon-4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729.scope.
Jan 31 09:04:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe39379579b8d127afd84dc42c3fa9ef457f1c924e01c875e4b1d129093faff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe39379579b8d127afd84dc42c3fa9ef457f1c924e01c875e4b1d129093faff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe39379579b8d127afd84dc42c3fa9ef457f1c924e01c875e4b1d129093faff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe39379579b8d127afd84dc42c3fa9ef457f1c924e01c875e4b1d129093faff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fe39379579b8d127afd84dc42c3fa9ef457f1c924e01c875e4b1d129093faff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:37 compute-0 podman[408155]: 2026-01-31 09:04:37.546723731 +0000 UTC m=+0.118203396 container init 4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_diffie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 09:04:37 compute-0 podman[408155]: 2026-01-31 09:04:37.450200234 +0000 UTC m=+0.021679899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:04:37 compute-0 podman[408155]: 2026-01-31 09:04:37.554185842 +0000 UTC m=+0.125665487 container start 4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:04:37 compute-0 podman[408155]: 2026-01-31 09:04:37.558873487 +0000 UTC m=+0.130353212 container attach 4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Jan 31 09:04:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:04:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:04:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:04:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:04:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:04:37 compute-0 ceph-mon[74496]: pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:37 compute-0 nova_compute[247704]: 2026-01-31 09:04:37.900 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:38 compute-0 nova_compute[247704]: 2026-01-31 09:04:38.124 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:38 compute-0 reverent_diffie[408171]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:04:38 compute-0 reverent_diffie[408171]: --> relative data size: 1.0
Jan 31 09:04:38 compute-0 reverent_diffie[408171]: --> All data devices are unavailable
Jan 31 09:04:38 compute-0 systemd[1]: libpod-4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729.scope: Deactivated successfully.
Jan 31 09:04:38 compute-0 podman[408155]: 2026-01-31 09:04:38.343160148 +0000 UTC m=+0.914639833 container died 4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:04:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fe39379579b8d127afd84dc42c3fa9ef457f1c924e01c875e4b1d129093faff-merged.mount: Deactivated successfully.
Jan 31 09:04:38 compute-0 podman[408155]: 2026-01-31 09:04:38.39426768 +0000 UTC m=+0.965747325 container remove 4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:04:38 compute-0 systemd[1]: libpod-conmon-4e8917e2bc2f1d6bab01db0cf699a2cf90ae01f5e7974952348b0247a09fc729.scope: Deactivated successfully.
Jan 31 09:04:38 compute-0 sudo[408052]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:38.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:38 compute-0 sudo[408198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:38 compute-0 sudo[408198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:38 compute-0 sudo[408198]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:38.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:38 compute-0 sudo[408223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:04:38 compute-0 sudo[408223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:38 compute-0 sudo[408223]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:38 compute-0 sudo[408248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:38 compute-0 sudo[408248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:38 compute-0 sudo[408248]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:38 compute-0 sudo[408273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:04:38 compute-0 sudo[408273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:38 compute-0 podman[408339]: 2026-01-31 09:04:38.908612117 +0000 UTC m=+0.035303240 container create 553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:04:38 compute-0 systemd[1]: Started libpod-conmon-553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3.scope.
Jan 31 09:04:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:04:38 compute-0 podman[408339]: 2026-01-31 09:04:38.974672373 +0000 UTC m=+0.101363516 container init 553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:04:38 compute-0 podman[408339]: 2026-01-31 09:04:38.979984933 +0000 UTC m=+0.106676056 container start 553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:04:38 compute-0 awesome_swirles[408355]: 167 167
Jan 31 09:04:38 compute-0 systemd[1]: libpod-553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3.scope: Deactivated successfully.
Jan 31 09:04:38 compute-0 podman[408339]: 2026-01-31 09:04:38.985361033 +0000 UTC m=+0.112052176 container attach 553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 09:04:38 compute-0 podman[408339]: 2026-01-31 09:04:38.985949338 +0000 UTC m=+0.112640461 container died 553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:04:38 compute-0 podman[408339]: 2026-01-31 09:04:38.89351331 +0000 UTC m=+0.020204453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:04:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f907205b44f5e27c9e7f672353b174ae727aee5dc17b9c5e0b3eaeeacf195b8a-merged.mount: Deactivated successfully.
Jan 31 09:04:39 compute-0 podman[408339]: 2026-01-31 09:04:39.019020081 +0000 UTC m=+0.145711204 container remove 553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:04:39 compute-0 systemd[1]: libpod-conmon-553af0d6f8964e5c6ba167add0a86ea855938e18b6e876a9c81c7bf9689dccd3.scope: Deactivated successfully.
Jan 31 09:04:39 compute-0 podman[408380]: 2026-01-31 09:04:39.151477072 +0000 UTC m=+0.046052941 container create ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 09:04:39 compute-0 systemd[1]: Started libpod-conmon-ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f.scope.
Jan 31 09:04:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ec9d42c5432af8688273074794ee7b90a341874b7d202c47d5bd5401e46b58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ec9d42c5432af8688273074794ee7b90a341874b7d202c47d5bd5401e46b58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ec9d42c5432af8688273074794ee7b90a341874b7d202c47d5bd5401e46b58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ec9d42c5432af8688273074794ee7b90a341874b7d202c47d5bd5401e46b58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:39 compute-0 podman[408380]: 2026-01-31 09:04:39.129400965 +0000 UTC m=+0.023976874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:04:39 compute-0 podman[408380]: 2026-01-31 09:04:39.235019144 +0000 UTC m=+0.129595053 container init ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:04:39 compute-0 podman[408380]: 2026-01-31 09:04:39.241874291 +0000 UTC m=+0.136450170 container start ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 09:04:39 compute-0 podman[408380]: 2026-01-31 09:04:39.245045408 +0000 UTC m=+0.139621307 container attach ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:04:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:39 compute-0 angry_ride[408396]: {
Jan 31 09:04:39 compute-0 angry_ride[408396]:     "0": [
Jan 31 09:04:39 compute-0 angry_ride[408396]:         {
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "devices": [
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "/dev/loop3"
Jan 31 09:04:39 compute-0 angry_ride[408396]:             ],
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "lv_name": "ceph_lv0",
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "lv_size": "7511998464",
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "name": "ceph_lv0",
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "tags": {
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.cluster_name": "ceph",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.crush_device_class": "",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.encrypted": "0",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.osd_id": "0",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.type": "block",
Jan 31 09:04:39 compute-0 angry_ride[408396]:                 "ceph.vdo": "0"
Jan 31 09:04:39 compute-0 angry_ride[408396]:             },
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "type": "block",
Jan 31 09:04:39 compute-0 angry_ride[408396]:             "vg_name": "ceph_vg0"
Jan 31 09:04:39 compute-0 angry_ride[408396]:         }
Jan 31 09:04:39 compute-0 angry_ride[408396]:     ]
Jan 31 09:04:39 compute-0 angry_ride[408396]: }
Jan 31 09:04:39 compute-0 systemd[1]: libpod-ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f.scope: Deactivated successfully.
Jan 31 09:04:39 compute-0 podman[408380]: 2026-01-31 09:04:39.982052328 +0000 UTC m=+0.876628187 container died ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ec9d42c5432af8688273074794ee7b90a341874b7d202c47d5bd5401e46b58-merged.mount: Deactivated successfully.
Jan 31 09:04:40 compute-0 podman[408380]: 2026-01-31 09:04:40.091271564 +0000 UTC m=+0.985847423 container remove ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 09:04:40 compute-0 systemd[1]: libpod-conmon-ca937379d08e9100de7e73e08f49857f8e655cf1d7760af54e9083294ffb541f.scope: Deactivated successfully.
Jan 31 09:04:40 compute-0 sudo[408273]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:40 compute-0 sudo[408420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:40 compute-0 sudo[408420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:40 compute-0 sudo[408420]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:40 compute-0 sudo[408445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:04:40 compute-0 sudo[408445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:40 compute-0 sudo[408445]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:40 compute-0 sudo[408470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:40 compute-0 sudo[408470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:40 compute-0 sudo[408470]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:40 compute-0 sudo[408495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:04:40 compute-0 sudo[408495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:40.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:40.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:40 compute-0 ceph-mon[74496]: pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:40 compute-0 podman[408562]: 2026-01-31 09:04:40.589780046 +0000 UTC m=+0.041370397 container create 2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:04:40 compute-0 systemd[1]: Started libpod-conmon-2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9.scope.
Jan 31 09:04:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:04:40 compute-0 podman[408562]: 2026-01-31 09:04:40.568818986 +0000 UTC m=+0.020409277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:04:40 compute-0 podman[408562]: 2026-01-31 09:04:40.667956017 +0000 UTC m=+0.119546288 container init 2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jepsen, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:04:40 compute-0 podman[408562]: 2026-01-31 09:04:40.674914676 +0000 UTC m=+0.126504937 container start 2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jepsen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:04:40 compute-0 gracious_jepsen[408579]: 167 167
Jan 31 09:04:40 compute-0 podman[408562]: 2026-01-31 09:04:40.678365591 +0000 UTC m=+0.129955882 container attach 2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 09:04:40 compute-0 systemd[1]: libpod-2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9.scope: Deactivated successfully.
Jan 31 09:04:40 compute-0 podman[408562]: 2026-01-31 09:04:40.679423196 +0000 UTC m=+0.131013457 container died 2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:04:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0876bb44eeccb2f9d1bdaeb43edb11734d3afc7838d7d23bab36defebd0ca9-merged.mount: Deactivated successfully.
Jan 31 09:04:40 compute-0 podman[408562]: 2026-01-31 09:04:40.717180814 +0000 UTC m=+0.168771075 container remove 2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 31 09:04:40 compute-0 systemd[1]: libpod-conmon-2b323bb18a74a02996a0578955f5f9cc70abce8fe91e5ac91d54204487520cb9.scope: Deactivated successfully.
Jan 31 09:04:40 compute-0 podman[408603]: 2026-01-31 09:04:40.843382062 +0000 UTC m=+0.044244846 container create a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pare, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 09:04:40 compute-0 systemd[1]: Started libpod-conmon-a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5.scope.
Jan 31 09:04:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a58ffd6a82122fb7293623253fc7721ed92f32c962c1a2a32073fd5ff43b8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a58ffd6a82122fb7293623253fc7721ed92f32c962c1a2a32073fd5ff43b8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a58ffd6a82122fb7293623253fc7721ed92f32c962c1a2a32073fd5ff43b8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0a58ffd6a82122fb7293623253fc7721ed92f32c962c1a2a32073fd5ff43b8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:04:40 compute-0 podman[408603]: 2026-01-31 09:04:40.82475928 +0000 UTC m=+0.025622084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:04:40 compute-0 podman[408603]: 2026-01-31 09:04:40.930214644 +0000 UTC m=+0.131077448 container init a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 09:04:40 compute-0 podman[408603]: 2026-01-31 09:04:40.937452291 +0000 UTC m=+0.138315075 container start a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pare, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 09:04:40 compute-0 podman[408603]: 2026-01-31 09:04:40.941261253 +0000 UTC m=+0.142124067 container attach a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 09:04:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:41 compute-0 dreamy_pare[408619]: {
Jan 31 09:04:41 compute-0 dreamy_pare[408619]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:04:41 compute-0 dreamy_pare[408619]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:04:41 compute-0 dreamy_pare[408619]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:04:41 compute-0 dreamy_pare[408619]:         "osd_id": 0,
Jan 31 09:04:41 compute-0 dreamy_pare[408619]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:04:41 compute-0 dreamy_pare[408619]:         "type": "bluestore"
Jan 31 09:04:41 compute-0 dreamy_pare[408619]:     }
Jan 31 09:04:41 compute-0 dreamy_pare[408619]: }
Jan 31 09:04:41 compute-0 systemd[1]: libpod-a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5.scope: Deactivated successfully.
Jan 31 09:04:41 compute-0 podman[408603]: 2026-01-31 09:04:41.754847426 +0000 UTC m=+0.955710220 container died a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:04:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0a58ffd6a82122fb7293623253fc7721ed92f32c962c1a2a32073fd5ff43b8a-merged.mount: Deactivated successfully.
Jan 31 09:04:41 compute-0 podman[408603]: 2026-01-31 09:04:41.822208143 +0000 UTC m=+1.023070927 container remove a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 09:04:41 compute-0 systemd[1]: libpod-conmon-a607ae3b76cc408410e97e4af266b9f024d7e19cd71d176f31133339154443a5.scope: Deactivated successfully.
Jan 31 09:04:41 compute-0 sudo[408495]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:04:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:04:41 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c1923e4a-0674-4738-a9c7-f2fc95415b22 does not exist
Jan 31 09:04:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 4b30f69a-d9f9-48a5-941a-5be6e7a4cfed does not exist
Jan 31 09:04:41 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev da0a468d-ac20-4847-8c0d-a0dc4d745b40 does not exist
Jan 31 09:04:41 compute-0 sudo[408655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:41 compute-0 sudo[408655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:41 compute-0 sudo[408655]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:42 compute-0 sudo[408680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:04:42 compute-0 sudo[408680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:42 compute-0 sudo[408680]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:42.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:42.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:42 compute-0 ceph-mon[74496]: pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:42 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:04:42 compute-0 nova_compute[247704]: 2026-01-31 09:04:42.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:43 compute-0 nova_compute[247704]: 2026-01-31 09:04:43.127 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:43 compute-0 podman[408706]: 2026-01-31 09:04:43.990141369 +0000 UTC m=+0.161315253 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 09:04:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:44.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:44.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:45 compute-0 ceph-mon[74496]: pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:46 compute-0 ceph-mon[74496]: pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:46.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:46.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:46 compute-0 sudo[408737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:46 compute-0 sudo[408737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:46 compute-0 sudo[408737]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:46 compute-0 sudo[408762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:04:46 compute-0 sudo[408762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:04:46 compute-0 sudo[408762]: pam_unix(sudo:session): session closed for user root
Jan 31 09:04:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:47 compute-0 nova_compute[247704]: 2026-01-31 09:04:47.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:48 compute-0 nova_compute[247704]: 2026-01-31 09:04:48.128 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:48.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:48.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:48 compute-0 ceph-mon[74496]: pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2490033947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:04:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:04:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:04:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:50.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:50.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:50 compute-0 ceph-mon[74496]: pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:04:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 148 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Jan 31 09:04:51 compute-0 ceph-mon[74496]: pgmap v3991: 305 pgs: 305 active+clean; 148 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Jan 31 09:04:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:52.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:52.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:52 compute-0 nova_compute[247704]: 2026-01-31 09:04:52.909 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:53 compute-0 nova_compute[247704]: 2026-01-31 09:04:53.130 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:04:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2755407011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:04:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2455846117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:04:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2755407011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:04:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:54.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:54.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:54 compute-0 ceph-mon[74496]: pgmap v3992: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:04:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1592408549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:04:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:04:55 compute-0 ceph-mon[74496]: pgmap v3993: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:04:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:04:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:56.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:04:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:56.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:04:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 09:04:57 compute-0 nova_compute[247704]: 2026-01-31 09:04:57.912 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:58 compute-0 nova_compute[247704]: 2026-01-31 09:04:58.130 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:04:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:04:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:04:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:04:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:04:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:58.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:04:58 compute-0 ceph-mon[74496]: pgmap v3994: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 09:04:58 compute-0 podman[408793]: 2026-01-31 09:04:58.875898661 +0000 UTC m=+0.047091595 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 09:04:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 635 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 31 09:05:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:00.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:00.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:00 compute-0 ceph-mon[74496]: pgmap v3995: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 635 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 31 09:05:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 09:05:01 compute-0 ceph-mon[74496]: pgmap v3996: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 09:05:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:02.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:02.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:02 compute-0 nova_compute[247704]: 2026-01-31 09:05:02.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:03 compute-0 nova_compute[247704]: 2026-01-31 09:05:03.167 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 557 KiB/s wr, 85 op/s
Jan 31 09:05:03 compute-0 ceph-mon[74496]: pgmap v3997: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 557 KiB/s wr, 85 op/s
Jan 31 09:05:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:04.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:05:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:06.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:06 compute-0 ceph-mon[74496]: pgmap v3998: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:05:06 compute-0 sudo[408817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:06 compute-0 sudo[408817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:06 compute-0 sudo[408817]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:06 compute-0 sudo[408842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:06 compute-0 sudo[408842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:06 compute-0 sudo[408842]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:05:07 compute-0 nova_compute[247704]: 2026-01-31 09:05:07.917 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:08 compute-0 nova_compute[247704]: 2026-01-31 09:05:08.168 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:08 compute-0 ceph-mon[74496]: pgmap v3999: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:05:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:08.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:08.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 31 09:05:09 compute-0 nova_compute[247704]: 2026-01-31 09:05:09.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:10.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:10 compute-0 ceph-mon[74496]: pgmap v4000: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 31 09:05:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:05:11.234 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:05:11.235 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:05:11.235 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 82 op/s
Jan 31 09:05:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:12.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:12 compute-0 ceph-mon[74496]: pgmap v4001: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.5 MiB/s wr, 82 op/s
Jan 31 09:05:12 compute-0 nova_compute[247704]: 2026-01-31 09:05:12.918 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:13 compute-0 nova_compute[247704]: 2026-01-31 09:05:13.170 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 193 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 31 09:05:13 compute-0 nova_compute[247704]: 2026-01-31 09:05:13.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:13 compute-0 nova_compute[247704]: 2026-01-31 09:05:13.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:05:14 compute-0 ceph-mon[74496]: pgmap v4002: 305 pgs: 305 active+clean; 193 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 31 09:05:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:14.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:14.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:14 compute-0 podman[408871]: 2026-01-31 09:05:14.923493016 +0000 UTC m=+0.102840122 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:05:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:15 compute-0 nova_compute[247704]: 2026-01-31 09:05:15.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:16.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:16.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:16 compute-0 ceph-mon[74496]: pgmap v4003: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:17 compute-0 nova_compute[247704]: 2026-01-31 09:05:17.921 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:17 compute-0 ceph-mon[74496]: pgmap v4004: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:18 compute-0 nova_compute[247704]: 2026-01-31 09:05:18.171 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:18.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:18.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3269783881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:05:20 compute-0 ceph-mon[74496]: pgmap v4005: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2974409001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:05:20
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 31 09:05:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:05:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:20.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:20.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:20 compute-0 nova_compute[247704]: 2026-01-31 09:05:20.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:20 compute-0 nova_compute[247704]: 2026-01-31 09:05:20.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:05:20 compute-0 nova_compute[247704]: 2026-01-31 09:05:20.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:05:20 compute-0 nova_compute[247704]: 2026-01-31 09:05:20.578 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:05:20 compute-0 nova_compute[247704]: 2026-01-31 09:05:20.578 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:05:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:05:21.837 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:05:21 compute-0 nova_compute[247704]: 2026-01-31 09:05:21.838 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:21 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:05:21.839 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:05:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:22 compute-0 ceph-mon[74496]: pgmap v4006: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:05:22 compute-0 nova_compute[247704]: 2026-01-31 09:05:22.922 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:23 compute-0 nova_compute[247704]: 2026-01-31 09:05:23.173 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 707 KiB/s wr, 31 op/s
Jan 31 09:05:23 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:05:23.841 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:05:23 compute-0 ceph-mon[74496]: pgmap v4007: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 707 KiB/s wr, 31 op/s
Jan 31 09:05:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1473516798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 24 KiB/s wr, 27 op/s
Jan 31 09:05:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:25 compute-0 ceph-mon[74496]: pgmap v4008: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 24 KiB/s wr, 27 op/s
Jan 31 09:05:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/313106343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:26.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:26.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:26 compute-0 nova_compute[247704]: 2026-01-31 09:05:26.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:26 compute-0 nova_compute[247704]: 2026-01-31 09:05:26.640 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:26 compute-0 nova_compute[247704]: 2026-01-31 09:05:26.641 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:26 compute-0 nova_compute[247704]: 2026-01-31 09:05:26.641 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:26 compute-0 nova_compute[247704]: 2026-01-31 09:05:26.641 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:05:26 compute-0 nova_compute[247704]: 2026-01-31 09:05:26.641 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:05:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1755861136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:27 compute-0 sudo[408924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:27 compute-0 sudo[408924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:27 compute-0 sudo[408924]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:05:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3415702408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:27 compute-0 sudo[408949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:27 compute-0 sudo[408949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:27 compute-0 sudo[408949]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.066 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.206 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.207 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4194MB free_disk=20.97071075439453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.208 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.208 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.368 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.369 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.443 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:05:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 28 op/s
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.509 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.510 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.523 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.543 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.559 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.925 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:05:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534179813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:27 compute-0 nova_compute[247704]: 2026-01-31 09:05:27.999 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:05:28 compute-0 nova_compute[247704]: 2026-01-31 09:05:28.004 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:05:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3415702408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:28 compute-0 ceph-mon[74496]: pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 28 op/s
Jan 31 09:05:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2534179813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:05:28 compute-0 nova_compute[247704]: 2026-01-31 09:05:28.051 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.071800) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328071843, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2114, "num_deletes": 251, "total_data_size": 3812042, "memory_usage": 3880008, "flush_reason": "Manual Compaction"}
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Jan 31 09:05:28 compute-0 nova_compute[247704]: 2026-01-31 09:05:28.088 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:05:28 compute-0 nova_compute[247704]: 2026-01-31 09:05:28.088 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328124202, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3744645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85343, "largest_seqno": 87455, "table_properties": {"data_size": 3735148, "index_size": 5990, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19595, "raw_average_key_size": 20, "raw_value_size": 3716158, "raw_average_value_size": 3866, "num_data_blocks": 261, "num_entries": 961, "num_filter_entries": 961, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850112, "oldest_key_time": 1769850112, "file_creation_time": 1769850328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 52482 microseconds, and 6923 cpu microseconds.
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.124263) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3744645 bytes OK
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.124303) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.151537) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.151590) EVENT_LOG_v1 {"time_micros": 1769850328151578, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.151626) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3803545, prev total WAL file size 3803545, number of live WAL files 2.
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.152693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3656KB)], [197(12MB)]
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328152752, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 16877866, "oldest_snapshot_seqno": -1}
Jan 31 09:05:28 compute-0 nova_compute[247704]: 2026-01-31 09:05:28.176 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 11457 keys, 14922409 bytes, temperature: kUnknown
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328360244, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 14922409, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14848665, "index_size": 43999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28677, "raw_key_size": 302377, "raw_average_key_size": 26, "raw_value_size": 14648697, "raw_average_value_size": 1278, "num_data_blocks": 1675, "num_entries": 11457, "num_filter_entries": 11457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.360590) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 14922409 bytes
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.363071) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.3 rd, 71.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.5 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(8.5) write-amplify(4.0) OK, records in: 11979, records dropped: 522 output_compression: NoCompression
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.363108) EVENT_LOG_v1 {"time_micros": 1769850328363096, "job": 124, "event": "compaction_finished", "compaction_time_micros": 207615, "compaction_time_cpu_micros": 29941, "output_level": 6, "num_output_files": 1, "total_output_size": 14922409, "num_input_records": 11979, "num_output_records": 11457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328363715, "job": 124, "event": "table_file_deletion", "file_number": 199}
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850328365383, "job": 124, "event": "table_file_deletion", "file_number": 197}
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.152574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.365482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.365487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.365489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.365491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:28.365493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:28.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:28.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 28 op/s
Jan 31 09:05:29 compute-0 podman[408999]: 2026-01-31 09:05:29.879072985 +0000 UTC m=+0.050364895 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 09:05:30 compute-0 nova_compute[247704]: 2026-01-31 09:05:30.089 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:30.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:30 compute-0 nova_compute[247704]: 2026-01-31 09:05:30.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:30.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:30 compute-0 ceph-mon[74496]: pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 28 op/s
Jan 31 09:05:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:30.983565) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850330983653, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 269, "num_deletes": 250, "total_data_size": 46451, "memory_usage": 52704, "flush_reason": "Manual Compaction"}
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850330988853, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 45921, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87456, "largest_seqno": 87724, "table_properties": {"data_size": 44036, "index_size": 113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5305, "raw_average_key_size": 20, "raw_value_size": 40404, "raw_average_value_size": 153, "num_data_blocks": 5, "num_entries": 263, "num_filter_entries": 263, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850329, "oldest_key_time": 1769850329, "file_creation_time": 1769850330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 5436 microseconds, and 1116 cpu microseconds.
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:30.989003) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 45921 bytes OK
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:30.989059) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:30.997827) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:30.997872) EVENT_LOG_v1 {"time_micros": 1769850330997861, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:30.997902) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 44390, prev total WAL file size 44390, number of live WAL files 2.
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:30.998691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323630' seq:72057594037927935, type:22 .. '6D6772737461740033353131' seq:0, type:0; will stop at (end)
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(44KB)], [200(14MB)]
Jan 31 09:05:30 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850330998760, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 14968330, "oldest_snapshot_seqno": -1}
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 11213 keys, 11145223 bytes, temperature: kUnknown
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331167647, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 11145223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11077959, "index_size": 38095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 297497, "raw_average_key_size": 26, "raw_value_size": 10887095, "raw_average_value_size": 970, "num_data_blocks": 1428, "num_entries": 11213, "num_filter_entries": 11213, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:31.168490) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11145223 bytes
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:31.183779) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.6 rd, 66.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 14.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(568.7) write-amplify(242.7) OK, records in: 11720, records dropped: 507 output_compression: NoCompression
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:31.183834) EVENT_LOG_v1 {"time_micros": 1769850331183811, "job": 126, "event": "compaction_finished", "compaction_time_micros": 168978, "compaction_time_cpu_micros": 24012, "output_level": 6, "num_output_files": 1, "total_output_size": 11145223, "num_input_records": 11720, "num_output_records": 11213, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331184170, "job": 126, "event": "table_file_deletion", "file_number": 202}
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850331185740, "job": 126, "event": "table_file_deletion", "file_number": 200}
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:30.998594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:31.185871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:31.185877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:31.185879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:31.185882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:05:31.185884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:05:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Jan 31 09:05:31 compute-0 ceph-mon[74496]: pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Jan 31 09:05:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:32.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:32 compute-0 nova_compute[247704]: 2026-01-31 09:05:32.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:32.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:32 compute-0 nova_compute[247704]: 2026-01-31 09:05:32.927 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:33 compute-0 nova_compute[247704]: 2026-01-31 09:05:33.178 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 27 op/s
Jan 31 09:05:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:34.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:34 compute-0 ceph-mon[74496]: pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 27 op/s
Jan 31 09:05:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Jan 31 09:05:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:05:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:05:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:36.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:36 compute-0 nova_compute[247704]: 2026-01-31 09:05:36.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:05:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:36.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:36 compute-0 ceph-mon[74496]: pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Jan 31 09:05:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Jan 31 09:05:37 compute-0 nova_compute[247704]: 2026-01-31 09:05:37.929 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:38 compute-0 nova_compute[247704]: 2026-01-31 09:05:38.179 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:38.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:38.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:38 compute-0 ceph-mon[74496]: pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Jan 31 09:05:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 09:05:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:40.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:40 compute-0 ceph-mon[74496]: pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 31 09:05:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:40.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:42 compute-0 sudo[409025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:42 compute-0 sudo[409025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:42 compute-0 sudo[409025]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:42 compute-0 sudo[409050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:05:42 compute-0 sudo[409050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:42 compute-0 sudo[409050]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:42 compute-0 sudo[409075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:42 compute-0 sudo[409075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:42 compute-0 sudo[409075]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:42 compute-0 sudo[409100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:05:42 compute-0 sudo[409100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:42.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:42.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:42 compute-0 ceph-mon[74496]: pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:42 compute-0 sudo[409100]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:42 compute-0 nova_compute[247704]: 2026-01-31 09:05:42.931 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:43 compute-0 nova_compute[247704]: 2026-01-31 09:05:43.180 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 09:05:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 09:05:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:43 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:05:44 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 09:05:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:44.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d10b99c0-c730-4915-8b66-2059bc9b1645 does not exist
Jan 31 09:05:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bf4859b0-1641-4d62-917f-97f380804a8e does not exist
Jan 31 09:05:44 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2c6969c4-74ba-4a5c-a5bc-f1ef2da8e247 does not exist
Jan 31 09:05:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:44.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:05:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:05:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:05:44 compute-0 sudo[409156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:44 compute-0 sudo[409156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:44 compute-0 sudo[409156]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:44 compute-0 ceph-mon[74496]: pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:05:44 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:05:44 compute-0 sudo[409181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:05:44 compute-0 sudo[409181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:44 compute-0 sudo[409181]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:44 compute-0 sudo[409206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:44 compute-0 sudo[409206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:44 compute-0 sudo[409206]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:44 compute-0 sudo[409231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:05:44 compute-0 sudo[409231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:45 compute-0 podman[409294]: 2026-01-31 09:05:45.171536508 +0000 UTC m=+0.020974891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:05:45 compute-0 podman[409294]: 2026-01-31 09:05:45.445401097 +0000 UTC m=+0.294839470 container create af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dhawan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:05:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:45 compute-0 systemd[1]: Started libpod-conmon-af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8.scope.
Jan 31 09:05:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:05:45 compute-0 podman[409294]: 2026-01-31 09:05:45.815842445 +0000 UTC m=+0.665280838 container init af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 09:05:45 compute-0 podman[409307]: 2026-01-31 09:05:45.824068115 +0000 UTC m=+0.345864441 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:05:45 compute-0 podman[409294]: 2026-01-31 09:05:45.826845352 +0000 UTC m=+0.676283725 container start af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 09:05:45 compute-0 intelligent_dhawan[409324]: 167 167
Jan 31 09:05:45 compute-0 systemd[1]: libpod-af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8.scope: Deactivated successfully.
Jan 31 09:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:05:45 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:05:45 compute-0 ceph-mon[74496]: pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:45 compute-0 podman[409294]: 2026-01-31 09:05:45.971960181 +0000 UTC m=+0.821398554 container attach af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 09:05:45 compute-0 podman[409294]: 2026-01-31 09:05:45.973891088 +0000 UTC m=+0.823329481 container died af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 09:05:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c12ab827b65cdb6491d28312e33f55541f3ccfafe81169599660a3d51988baa-merged.mount: Deactivated successfully.
Jan 31 09:05:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:46.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:46.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:46 compute-0 podman[409294]: 2026-01-31 09:05:46.960452427 +0000 UTC m=+1.809890830 container remove af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 31 09:05:47 compute-0 systemd[1]: libpod-conmon-af1cce73b8d6f53e206aa0079e691e3060fa1a0b3746c0c5956387e48dc1c1c8.scope: Deactivated successfully.
Jan 31 09:05:47 compute-0 sudo[409369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:47 compute-0 sudo[409369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:47 compute-0 podman[409361]: 2026-01-31 09:05:47.119949106 +0000 UTC m=+0.063875935 container create ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_beaver, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:05:47 compute-0 sudo[409369]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:47 compute-0 podman[409361]: 2026-01-31 09:05:47.081582723 +0000 UTC m=+0.025509572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:05:47 compute-0 systemd[1]: Started libpod-conmon-ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b.scope.
Jan 31 09:05:47 compute-0 sudo[409400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:47 compute-0 sudo[409400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:47 compute-0 sudo[409400]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba26deb4ac5357aa941b4469fb3d9b7a0785ed69c43d5d7382d528a890417336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba26deb4ac5357aa941b4469fb3d9b7a0785ed69c43d5d7382d528a890417336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba26deb4ac5357aa941b4469fb3d9b7a0785ed69c43d5d7382d528a890417336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba26deb4ac5357aa941b4469fb3d9b7a0785ed69c43d5d7382d528a890417336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba26deb4ac5357aa941b4469fb3d9b7a0785ed69c43d5d7382d528a890417336/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:47 compute-0 podman[409361]: 2026-01-31 09:05:47.28378805 +0000 UTC m=+0.227714939 container init ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_beaver, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 09:05:47 compute-0 podman[409361]: 2026-01-31 09:05:47.291519298 +0000 UTC m=+0.235446127 container start ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_beaver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:05:47 compute-0 podman[409361]: 2026-01-31 09:05:47.344491775 +0000 UTC m=+0.288418614 container attach ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_beaver, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:05:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:47 compute-0 nova_compute[247704]: 2026-01-31 09:05:47.935 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:47 compute-0 ceph-mon[74496]: pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:48 compute-0 kind_beaver[409426]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:05:48 compute-0 kind_beaver[409426]: --> relative data size: 1.0
Jan 31 09:05:48 compute-0 kind_beaver[409426]: --> All data devices are unavailable
Jan 31 09:05:48 compute-0 systemd[1]: libpod-ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b.scope: Deactivated successfully.
Jan 31 09:05:48 compute-0 podman[409361]: 2026-01-31 09:05:48.146888416 +0000 UTC m=+1.090815255 container died ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:05:48 compute-0 nova_compute[247704]: 2026-01-31 09:05:48.182 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba26deb4ac5357aa941b4469fb3d9b7a0785ed69c43d5d7382d528a890417336-merged.mount: Deactivated successfully.
Jan 31 09:05:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:48.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:48.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:48 compute-0 podman[409361]: 2026-01-31 09:05:48.786559001 +0000 UTC m=+1.730485830 container remove ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:05:48 compute-0 sudo[409231]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:48 compute-0 systemd[1]: libpod-conmon-ff10c665ceb59595cc1fae8f083d7aa70fbd294db25ec7ebe8783d852017c60b.scope: Deactivated successfully.
Jan 31 09:05:48 compute-0 sudo[409454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:48 compute-0 sudo[409454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:48 compute-0 sudo[409454]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:48 compute-0 sudo[409479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:05:48 compute-0 sudo[409479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:48 compute-0 sudo[409479]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:48 compute-0 sudo[409504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:48 compute-0 sudo[409504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:48 compute-0 sudo[409504]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:49 compute-0 sudo[409529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:05:49 compute-0 sudo[409529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:49 compute-0 podman[409595]: 2026-01-31 09:05:49.328341145 +0000 UTC m=+0.027186812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:05:49 compute-0 podman[409595]: 2026-01-31 09:05:49.486581873 +0000 UTC m=+0.185427510 container create 0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_haslett, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 09:05:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:49 compute-0 systemd[1]: Started libpod-conmon-0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4.scope.
Jan 31 09:05:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:05:49 compute-0 podman[409595]: 2026-01-31 09:05:49.857184084 +0000 UTC m=+0.556029751 container init 0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:05:49 compute-0 podman[409595]: 2026-01-31 09:05:49.864201735 +0000 UTC m=+0.563047372 container start 0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_haslett, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 09:05:49 compute-0 dazzling_haslett[409612]: 167 167
Jan 31 09:05:49 compute-0 systemd[1]: libpod-0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4.scope: Deactivated successfully.
Jan 31 09:05:50 compute-0 ceph-mon[74496]: pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:50 compute-0 podman[409595]: 2026-01-31 09:05:50.019033781 +0000 UTC m=+0.717879438 container attach 0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 09:05:50 compute-0 podman[409595]: 2026-01-31 09:05:50.019672156 +0000 UTC m=+0.718517793 container died 0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Jan 31 09:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:05:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1f5f5e467d0359e48420a63f4e4444f3cef0dd2aa3df404a78aacb24f48575b-merged.mount: Deactivated successfully.
Jan 31 09:05:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:05:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:50.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:05:50 compute-0 podman[409595]: 2026-01-31 09:05:50.57090055 +0000 UTC m=+1.269746187 container remove 0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_haslett, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:05:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:50.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:50 compute-0 systemd[1]: libpod-conmon-0b78cfb96462b38a8d7fbbdb84c8cce097ba0168ae46ecd0742598214c8437a4.scope: Deactivated successfully.
Jan 31 09:05:50 compute-0 podman[409637]: 2026-01-31 09:05:50.675188995 +0000 UTC m=+0.018853529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:05:50 compute-0 podman[409637]: 2026-01-31 09:05:50.839169253 +0000 UTC m=+0.182833757 container create 980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:05:50 compute-0 systemd[1]: Started libpod-conmon-980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a.scope.
Jan 31 09:05:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d29527d6dd145b4a08c1d4e8655fc47ed3c5f8d481e11dee99a5bf1658a027/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d29527d6dd145b4a08c1d4e8655fc47ed3c5f8d481e11dee99a5bf1658a027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d29527d6dd145b4a08c1d4e8655fc47ed3c5f8d481e11dee99a5bf1658a027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d29527d6dd145b4a08c1d4e8655fc47ed3c5f8d481e11dee99a5bf1658a027/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:51 compute-0 podman[409637]: 2026-01-31 09:05:51.2231629 +0000 UTC m=+0.566827434 container init 980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 09:05:51 compute-0 podman[409637]: 2026-01-31 09:05:51.229928924 +0000 UTC m=+0.573593438 container start 980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:05:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:51 compute-0 podman[409637]: 2026-01-31 09:05:51.35024583 +0000 UTC m=+0.693910364 container attach 980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 09:05:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:51 compute-0 relaxed_buck[409653]: {
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:     "0": [
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:         {
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "devices": [
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "/dev/loop3"
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             ],
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "lv_name": "ceph_lv0",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "lv_size": "7511998464",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "name": "ceph_lv0",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "tags": {
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.cluster_name": "ceph",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.crush_device_class": "",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.encrypted": "0",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.osd_id": "0",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.type": "block",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:                 "ceph.vdo": "0"
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             },
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "type": "block",
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:             "vg_name": "ceph_vg0"
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:         }
Jan 31 09:05:51 compute-0 relaxed_buck[409653]:     ]
Jan 31 09:05:51 compute-0 relaxed_buck[409653]: }
Jan 31 09:05:51 compute-0 ceph-mon[74496]: pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:51 compute-0 systemd[1]: libpod-980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a.scope: Deactivated successfully.
Jan 31 09:05:51 compute-0 podman[409637]: 2026-01-31 09:05:51.986330997 +0000 UTC m=+1.329995511 container died 980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 31 09:05:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5d29527d6dd145b4a08c1d4e8655fc47ed3c5f8d481e11dee99a5bf1658a027-merged.mount: Deactivated successfully.
Jan 31 09:05:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:52.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:52.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:52 compute-0 podman[409637]: 2026-01-31 09:05:52.757692554 +0000 UTC m=+2.101357058 container remove 980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:05:52 compute-0 sudo[409529]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:52 compute-0 systemd[1]: libpod-conmon-980cf57d60ca821e3aa09d95141454b8d5d831489f4c395d3a81f82aef4b2c1a.scope: Deactivated successfully.
Jan 31 09:05:52 compute-0 sudo[409674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:52 compute-0 sudo[409674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:52 compute-0 sudo[409674]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:52 compute-0 sudo[409699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:05:52 compute-0 sudo[409699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:52 compute-0 sudo[409699]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:52 compute-0 nova_compute[247704]: 2026-01-31 09:05:52.963 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:52 compute-0 sudo[409724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:52 compute-0 sudo[409724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:52 compute-0 sudo[409724]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:53 compute-0 sudo[409749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:05:53 compute-0 sudo[409749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:53 compute-0 nova_compute[247704]: 2026-01-31 09:05:53.185 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:53 compute-0 podman[409814]: 2026-01-31 09:05:53.323596834 +0000 UTC m=+0.018484300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:05:53 compute-0 podman[409814]: 2026-01-31 09:05:53.425664266 +0000 UTC m=+0.120551692 container create 8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 09:05:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:53 compute-0 systemd[1]: Started libpod-conmon-8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d.scope.
Jan 31 09:05:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:05:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/825411166' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:05:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/825411166' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:05:53 compute-0 podman[409814]: 2026-01-31 09:05:53.707861788 +0000 UTC m=+0.402749244 container init 8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:05:53 compute-0 podman[409814]: 2026-01-31 09:05:53.713979677 +0000 UTC m=+0.408867133 container start 8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 09:05:53 compute-0 strange_hypatia[409830]: 167 167
Jan 31 09:05:53 compute-0 systemd[1]: libpod-8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d.scope: Deactivated successfully.
Jan 31 09:05:53 compute-0 podman[409814]: 2026-01-31 09:05:53.826837771 +0000 UTC m=+0.521725227 container attach 8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:05:53 compute-0 podman[409814]: 2026-01-31 09:05:53.827465536 +0000 UTC m=+0.522352972 container died 8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:05:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-06f297436a008d6a9eff93764c912d0147ad9f6a98498585b76cc7ecc3da1091-merged.mount: Deactivated successfully.
Jan 31 09:05:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:05:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:54.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:05:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:54.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:54 compute-0 podman[409814]: 2026-01-31 09:05:54.632933012 +0000 UTC m=+1.327820528 container remove 8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 09:05:54 compute-0 systemd[1]: libpod-conmon-8c58e53e7510d769e5d45ac05973a62d39b812061c590cf008eccbabe4d31f3d.scope: Deactivated successfully.
Jan 31 09:05:54 compute-0 podman[409856]: 2026-01-31 09:05:54.757744517 +0000 UTC m=+0.024083017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:05:55 compute-0 podman[409856]: 2026-01-31 09:05:55.039821805 +0000 UTC m=+0.306160255 container create 89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:05:55 compute-0 ceph-mon[74496]: pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:55 compute-0 systemd[1]: Started libpod-conmon-89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4.scope.
Jan 31 09:05:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9daa2f7a0cec1f96a5c01de2ba431a24a4514c3197a4096779334ea078e190b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9daa2f7a0cec1f96a5c01de2ba431a24a4514c3197a4096779334ea078e190b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9daa2f7a0cec1f96a5c01de2ba431a24a4514c3197a4096779334ea078e190b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9daa2f7a0cec1f96a5c01de2ba431a24a4514c3197a4096779334ea078e190b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:05:55 compute-0 podman[409856]: 2026-01-31 09:05:55.299441609 +0000 UTC m=+0.565780089 container init 89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:05:55 compute-0 podman[409856]: 2026-01-31 09:05:55.307947746 +0000 UTC m=+0.574286206 container start 89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 09:05:55 compute-0 podman[409856]: 2026-01-31 09:05:55.419798015 +0000 UTC m=+0.686136795 container attach 89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 09:05:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:56 compute-0 cool_engelbart[409872]: {
Jan 31 09:05:56 compute-0 cool_engelbart[409872]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:05:56 compute-0 cool_engelbart[409872]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:05:56 compute-0 cool_engelbart[409872]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:05:56 compute-0 cool_engelbart[409872]:         "osd_id": 0,
Jan 31 09:05:56 compute-0 cool_engelbart[409872]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:05:56 compute-0 cool_engelbart[409872]:         "type": "bluestore"
Jan 31 09:05:56 compute-0 cool_engelbart[409872]:     }
Jan 31 09:05:56 compute-0 cool_engelbart[409872]: }
Jan 31 09:05:56 compute-0 systemd[1]: libpod-89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4.scope: Deactivated successfully.
Jan 31 09:05:56 compute-0 podman[409856]: 2026-01-31 09:05:56.123649031 +0000 UTC m=+1.389987491 container died 89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 31 09:05:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:05:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:56.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:56 compute-0 ceph-mon[74496]: pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:56.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9daa2f7a0cec1f96a5c01de2ba431a24a4514c3197a4096779334ea078e190b9-merged.mount: Deactivated successfully.
Jan 31 09:05:57 compute-0 podman[409856]: 2026-01-31 09:05:57.354843857 +0000 UTC m=+2.621182307 container remove 89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:05:57 compute-0 sudo[409749]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:57 compute-0 systemd[1]: libpod-conmon-89a116da6ee965e65e618ba03bb9f6ac59c4e8ee138a6f0f59ceca4704df32a4.scope: Deactivated successfully.
Jan 31 09:05:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:05:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:05:57 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 18279567-bb26-402b-8b46-5e575e67e334 does not exist
Jan 31 09:05:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6239cadc-1643-4b42-863b-ea9f1f23ffbd does not exist
Jan 31 09:05:57 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 74c23849-3f71-4370-a013-f3e5d0bf42e9 does not exist
Jan 31 09:05:57 compute-0 sudo[409906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:05:57 compute-0 sudo[409906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:57 compute-0 sudo[409906]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:57 compute-0 sudo[409931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:05:57 compute-0 sudo[409931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:05:57 compute-0 sudo[409931]: pam_unix(sudo:session): session closed for user root
Jan 31 09:05:57 compute-0 nova_compute[247704]: 2026-01-31 09:05:57.964 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:58 compute-0 nova_compute[247704]: 2026-01-31 09:05:58.185 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:05:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:58.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:05:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:05:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:58.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:05:58 compute-0 ceph-mon[74496]: pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:58 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:05:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:05:59 compute-0 ceph-mon[74496]: pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:00.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:00.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:00 compute-0 podman[409958]: 2026-01-31 09:06:00.913049949 +0000 UTC m=+0.080698802 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 09:06:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:02 compute-0 ceph-mon[74496]: pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:02.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:02.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:02 compute-0 nova_compute[247704]: 2026-01-31 09:06:02.966 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:03 compute-0 nova_compute[247704]: 2026-01-31 09:06:03.187 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:03 compute-0 ceph-mon[74496]: pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:06:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:04.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:06:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:04.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:06.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:06.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:06 compute-0 ceph-mon[74496]: pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:07 compute-0 sudo[409981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:07 compute-0 sudo[409981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:07 compute-0 sudo[409981]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:07 compute-0 sudo[410006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:07 compute-0 sudo[410006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:07 compute-0 sudo[410006]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:07 compute-0 nova_compute[247704]: 2026-01-31 09:06:07.969 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:08 compute-0 ceph-mon[74496]: pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:08 compute-0 nova_compute[247704]: 2026-01-31 09:06:08.189 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:08.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:08.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:10 compute-0 nova_compute[247704]: 2026-01-31 09:06:10.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:10.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:10 compute-0 ceph-mon[74496]: pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:06:11.235 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:06:11.236 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:06:11.236 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:12 compute-0 ceph-mon[74496]: pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:06:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:06:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:12.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:12 compute-0 nova_compute[247704]: 2026-01-31 09:06:12.972 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:13 compute-0 nova_compute[247704]: 2026-01-31 09:06:13.191 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:13 compute-0 nova_compute[247704]: 2026-01-31 09:06:13.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:13 compute-0 nova_compute[247704]: 2026-01-31 09:06:13.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:06:14 compute-0 ceph-mon[74496]: pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:14.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:14.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:15 compute-0 ceph-mon[74496]: pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:06:16.290 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=106, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=105) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:06:16 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:06:16.292 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:06:16 compute-0 nova_compute[247704]: 2026-01-31 09:06:16.291 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:16 compute-0 nova_compute[247704]: 2026-01-31 09:06:16.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:16 compute-0 podman[410036]: 2026-01-31 09:06:16.91445458 +0000 UTC m=+0.091430915 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, 
container_name=ovn_controller, tcib_managed=true)
Jan 31 09:06:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:17 compute-0 nova_compute[247704]: 2026-01-31 09:06:17.974 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:18 compute-0 ceph-mon[74496]: pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:18 compute-0 nova_compute[247704]: 2026-01-31 09:06:18.192 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:06:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:18.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:06:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:18.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:19 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2691059760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:06:20
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes', '.rgw.root', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 31 09:06:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:06:20 compute-0 ceph-mon[74496]: pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:06:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:20.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:06:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:20.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:06:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:21 compute-0 nova_compute[247704]: 2026-01-31 09:06:21.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:21 compute-0 nova_compute[247704]: 2026-01-31 09:06:21.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:06:21 compute-0 nova_compute[247704]: 2026-01-31 09:06:21.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:06:21 compute-0 nova_compute[247704]: 2026-01-31 09:06:21.599 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:06:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/69904375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:22 compute-0 nova_compute[247704]: 2026-01-31 09:06:22.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:22.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:22 compute-0 nova_compute[247704]: 2026-01-31 09:06:22.976 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:23 compute-0 ceph-mon[74496]: pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:23 compute-0 nova_compute[247704]: 2026-01-31 09:06:23.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:24 compute-0 ceph-mon[74496]: pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:06:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:24.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:06:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:24.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:25 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:06:25.295 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '106'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:06:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:26 compute-0 nova_compute[247704]: 2026-01-31 09:06:26.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:26.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:26 compute-0 nova_compute[247704]: 2026-01-31 09:06:26.597 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:26 compute-0 nova_compute[247704]: 2026-01-31 09:06:26.597 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:26 compute-0 nova_compute[247704]: 2026-01-31 09:06:26.598 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:26 compute-0 nova_compute[247704]: 2026-01-31 09:06:26.598 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:06:26 compute-0 nova_compute[247704]: 2026-01-31 09:06:26.598 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:26.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:26 compute-0 ceph-mon[74496]: pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3771282403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:06:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/656439385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.012 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.143 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.144 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4219MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.144 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.145 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.238 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.239 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.256 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:06:27 compute-0 sudo[410090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:27 compute-0 sudo[410090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:27 compute-0 sudo[410090]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:27 compute-0 sudo[410134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:27 compute-0 sudo[410134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:27 compute-0 sudo[410134]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:06:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1687748023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.687 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.694 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.736 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.738 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.738 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:06:27 compute-0 nova_compute[247704]: 2026-01-31 09:06:27.978 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2809980560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/656439385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:28 compute-0 ceph-mon[74496]: pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:06:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/30593137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1687748023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:06:28 compute-0 nova_compute[247704]: 2026-01-31 09:06:28.196 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:06:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 61K writes, 228K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 61K writes, 22K syncs, 2.72 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2886 writes, 10K keys, 2886 commit groups, 1.0 writes per commit group, ingest: 10.81 MB, 0.02 MB/s
                                           Interval WAL: 2886 writes, 1146 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:06:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 86 KiB/s wr, 0 op/s
Jan 31 09:06:29 compute-0 nova_compute[247704]: 2026-01-31 09:06:29.739 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:29 compute-0 ceph-mon[74496]: pgmap v4040: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 86 KiB/s wr, 0 op/s
Jan 31 09:06:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:30.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:31 compute-0 ceph-mon[74496]: pgmap v4041: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:31 compute-0 podman[410163]: 2026-01-31 09:06:31.872956891 +0000 UTC m=+0.041802867 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 31 09:06:32 compute-0 nova_compute[247704]: 2026-01-31 09:06:32.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:32.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:32.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:32 compute-0 nova_compute[247704]: 2026-01-31 09:06:32.979 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:33 compute-0 nova_compute[247704]: 2026-01-31 09:06:33.198 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3575391715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:06:34 compute-0 nova_compute[247704]: 2026-01-31 09:06:34.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:06:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:34.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:34.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:34 compute-0 ceph-mon[74496]: pgmap v4042: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1424143602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:06:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:35 compute-0 ceph-mon[74496]: pgmap v4043: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:06:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:06:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:36.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:36.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:06:37 compute-0 nova_compute[247704]: 2026-01-31 09:06:37.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:38 compute-0 ceph-mon[74496]: pgmap v4044: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:06:38 compute-0 nova_compute[247704]: 2026-01-31 09:06:38.199 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:38.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:38.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:06:39 compute-0 ceph-mon[74496]: pgmap v4045: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:06:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:40.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:40.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Jan 31 09:06:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 09:06:41 compute-0 ceph-mon[74496]: pgmap v4046: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 100 op/s
Jan 31 09:06:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:42.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:42.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:42 compute-0 nova_compute[247704]: 2026-01-31 09:06:42.983 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:43 compute-0 nova_compute[247704]: 2026-01-31 09:06:43.203 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:43 compute-0 ceph-mon[74496]: pgmap v4047: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:44.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:45 compute-0 ceph-mon[74496]: pgmap v4048: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:06:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:06:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:46.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:47 compute-0 sudo[410189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:47 compute-0 sudo[410189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:47 compute-0 sudo[410189]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:47 compute-0 sudo[410219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:47 compute-0 sudo[410219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:47 compute-0 sudo[410219]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:47 compute-0 podman[410213]: 2026-01-31 09:06:47.62381353 +0000 UTC m=+0.111379669 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 09:06:47 compute-0 nova_compute[247704]: 2026-01-31 09:06:47.986 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:48 compute-0 nova_compute[247704]: 2026-01-31 09:06:48.205 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:48.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:06:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:48.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:06:48 compute-0 ceph-mon[74496]: pgmap v4049: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:06:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 85 B/s wr, 63 op/s
Jan 31 09:06:49 compute-0 ceph-mon[74496]: pgmap v4050: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 85 B/s wr, 63 op/s
Jan 31 09:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:06:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:06:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:50.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:50.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 185 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Jan 31 09:06:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:52.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:52.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:52 compute-0 ceph-mon[74496]: pgmap v4051: 305 pgs: 305 active+clean; 185 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Jan 31 09:06:52 compute-0 nova_compute[247704]: 2026-01-31 09:06:52.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:53 compute-0 nova_compute[247704]: 2026-01-31 09:06:53.207 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 193 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 31 09:06:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/77891392' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:06:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/77891392' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:06:53 compute-0 ceph-mon[74496]: pgmap v4052: 305 pgs: 305 active+clean; 193 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 31 09:06:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:54.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:06:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:06:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:56 compute-0 ceph-mon[74496]: pgmap v4053: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:06:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:06:57 compute-0 ceph-mon[74496]: pgmap v4054: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:06:57 compute-0 nova_compute[247704]: 2026-01-31 09:06:57.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:58 compute-0 sudo[410270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:58 compute-0 sudo[410270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:58 compute-0 sudo[410270]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:58 compute-0 sudo[410295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:06:58 compute-0 sudo[410295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:58 compute-0 sudo[410295]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:58 compute-0 sudo[410320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:58 compute-0 sudo[410320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:58 compute-0 sudo[410320]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:58 compute-0 sudo[410345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 09:06:58 compute-0 sudo[410345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:58 compute-0 nova_compute[247704]: 2026-01-31 09:06:58.208 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:06:58 compute-0 podman[410440]: 2026-01-31 09:06:58.588747744 +0000 UTC m=+0.082042056 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 09:06:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:06:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:58.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:06:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:06:58 compute-0 podman[410440]: 2026-01-31 09:06:58.706942398 +0000 UTC m=+0.200236730 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 09:06:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:06:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:58.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:06:59 compute-0 podman[410593]: 2026-01-31 09:06:59.312741609 +0000 UTC m=+0.051378101 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 09:06:59 compute-0 podman[410593]: 2026-01-31 09:06:59.350407445 +0000 UTC m=+0.089043947 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 09:06:59 compute-0 podman[410656]: 2026-01-31 09:06:59.526658541 +0000 UTC m=+0.047897986 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, version=2.2.4, io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, io.openshift.expose-services=)
Jan 31 09:06:59 compute-0 podman[410656]: 2026-01-31 09:06:59.540503967 +0000 UTC m=+0.061743392 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, release=1793, vcs-type=git, vendor=Red Hat, Inc., name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 09:06:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:06:59 compute-0 sudo[410345]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:06:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:06:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:06:59 compute-0 sudo[410684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:59 compute-0 sudo[410684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:59 compute-0 sudo[410684]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:59 compute-0 sudo[410709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:06:59 compute-0 sudo[410709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:59 compute-0 sudo[410709]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:59 compute-0 sudo[410734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:06:59 compute-0 sudo[410734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:06:59 compute-0 sudo[410734]: pam_unix(sudo:session): session closed for user root
Jan 31 09:06:59 compute-0 sudo[410759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:06:59 compute-0 sudo[410759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:00 compute-0 sudo[410759]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:07:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:07:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:07:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:07:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:07:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c525b535-1eb6-4529-aca1-5ec3187ea1c1 does not exist
Jan 31 09:07:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f9e677c3-301a-4f8f-b96d-afd55b8688ec does not exist
Jan 31 09:07:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b61faeb2-d158-469a-aaa1-fe180192dcc0 does not exist
Jan 31 09:07:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:07:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:07:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:07:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:07:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:07:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:07:00 compute-0 sudo[410815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:00 compute-0 sudo[410815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:00 compute-0 sudo[410815]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:00 compute-0 sudo[410840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:07:00 compute-0 sudo[410840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:00 compute-0 sudo[410840]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:00 compute-0 sudo[410865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:00 compute-0 sudo[410865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:00 compute-0 sudo[410865]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:00 compute-0 sudo[410890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:07:00 compute-0 sudo[410890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:00.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:00 compute-0 ceph-mon[74496]: pgmap v4055: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 09:07:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:07:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:07:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:07:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:07:00 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:07:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:00.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:00 compute-0 podman[410957]: 2026-01-31 09:07:00.731026006 +0000 UTC m=+0.110720163 container create 613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:07:00 compute-0 podman[410957]: 2026-01-31 09:07:00.640287729 +0000 UTC m=+0.019981906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:07:00 compute-0 systemd[1]: Started libpod-conmon-613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb.scope.
Jan 31 09:07:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:07:00 compute-0 podman[410957]: 2026-01-31 09:07:00.922290287 +0000 UTC m=+0.301984474 container init 613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 09:07:00 compute-0 podman[410957]: 2026-01-31 09:07:00.928684052 +0000 UTC m=+0.308378209 container start 613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 09:07:00 compute-0 nice_carson[410973]: 167 167
Jan 31 09:07:00 compute-0 systemd[1]: libpod-613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb.scope: Deactivated successfully.
Jan 31 09:07:00 compute-0 podman[410957]: 2026-01-31 09:07:00.95614664 +0000 UTC m=+0.335840817 container attach 613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:07:00 compute-0 podman[410957]: 2026-01-31 09:07:00.956609981 +0000 UTC m=+0.336304138 container died 613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2471b793f8ec25925bde3fbab30d99cc0e960ef362d184c75d52bc87e7963c1-merged.mount: Deactivated successfully.
Jan 31 09:07:01 compute-0 podman[410957]: 2026-01-31 09:07:01.128672475 +0000 UTC m=+0.508366632 container remove 613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:07:01 compute-0 systemd[1]: libpod-conmon-613f182753b57080444bdc3c029e744b1b90c4e3f6b4db3752cb9f92c20487cb.scope: Deactivated successfully.
Jan 31 09:07:01 compute-0 podman[410998]: 2026-01-31 09:07:01.286267017 +0000 UTC m=+0.068676390 container create 8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:07:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:01 compute-0 podman[410998]: 2026-01-31 09:07:01.238410823 +0000 UTC m=+0.020820216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:07:01 compute-0 systemd[1]: Started libpod-conmon-8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936.scope.
Jan 31 09:07:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c62344a7669f44a786aab30a67f701bafd96024718ac42df15fbeb0e5b6028a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c62344a7669f44a786aab30a67f701bafd96024718ac42df15fbeb0e5b6028a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c62344a7669f44a786aab30a67f701bafd96024718ac42df15fbeb0e5b6028a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c62344a7669f44a786aab30a67f701bafd96024718ac42df15fbeb0e5b6028a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c62344a7669f44a786aab30a67f701bafd96024718ac42df15fbeb0e5b6028a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:01 compute-0 podman[410998]: 2026-01-31 09:07:01.410011446 +0000 UTC m=+0.192420839 container init 8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:07:01 compute-0 podman[410998]: 2026-01-31 09:07:01.416873823 +0000 UTC m=+0.199283196 container start 8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 09:07:01 compute-0 podman[410998]: 2026-01-31 09:07:01.429575252 +0000 UTC m=+0.211984655 container attach 8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 09:07:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 09:07:01 compute-0 ceph-mon[74496]: pgmap v4056: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 09:07:02 compute-0 vibrant_taussig[411014]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:07:02 compute-0 vibrant_taussig[411014]: --> relative data size: 1.0
Jan 31 09:07:02 compute-0 vibrant_taussig[411014]: --> All data devices are unavailable
Jan 31 09:07:02 compute-0 systemd[1]: libpod-8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936.scope: Deactivated successfully.
Jan 31 09:07:02 compute-0 podman[410998]: 2026-01-31 09:07:02.178193796 +0000 UTC m=+0.960603199 container died 8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 09:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c62344a7669f44a786aab30a67f701bafd96024718ac42df15fbeb0e5b6028a-merged.mount: Deactivated successfully.
Jan 31 09:07:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:02.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:02.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:02 compute-0 podman[410998]: 2026-01-31 09:07:02.744642709 +0000 UTC m=+1.527052082 container remove 8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:07:02 compute-0 sudo[410890]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:02 compute-0 podman[411030]: 2026-01-31 09:07:02.822492032 +0000 UTC m=+0.612920345 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:07:02 compute-0 sudo[411061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:02 compute-0 sudo[411061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:02 compute-0 sudo[411061]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:02 compute-0 systemd[1]: libpod-conmon-8a9aa2ca862879ec1ad8e0b91f64385e47190e99634934f24be4edb168ecb936.scope: Deactivated successfully.
Jan 31 09:07:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Jan 31 09:07:02 compute-0 sudo[411088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:07:02 compute-0 sudo[411088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:02 compute-0 sudo[411088]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:02 compute-0 sudo[411113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:02 compute-0 sudo[411113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:02 compute-0 sudo[411113]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:02 compute-0 sudo[411138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:07:02 compute-0 sudo[411138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:02 compute-0 nova_compute[247704]: 2026-01-31 09:07:02.992 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Jan 31 09:07:03 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Jan 31 09:07:03 compute-0 nova_compute[247704]: 2026-01-31 09:07:03.210 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:03 compute-0 podman[411203]: 2026-01-31 09:07:03.238126039 +0000 UTC m=+0.018644145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:07:03 compute-0 podman[411203]: 2026-01-31 09:07:03.412598371 +0000 UTC m=+0.193116447 container create e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 09:07:03 compute-0 systemd[1]: Started libpod-conmon-e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1.scope.
Jan 31 09:07:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:07:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 145 KiB/s wr, 22 op/s
Jan 31 09:07:03 compute-0 podman[411203]: 2026-01-31 09:07:03.641279982 +0000 UTC m=+0.421798058 container init e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:07:03 compute-0 podman[411203]: 2026-01-31 09:07:03.646751575 +0000 UTC m=+0.427269651 container start e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:07:03 compute-0 flamboyant_goldberg[411219]: 167 167
Jan 31 09:07:03 compute-0 systemd[1]: libpod-e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1.scope: Deactivated successfully.
Jan 31 09:07:03 compute-0 podman[411203]: 2026-01-31 09:07:03.675525864 +0000 UTC m=+0.456043970 container attach e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 09:07:03 compute-0 podman[411203]: 2026-01-31 09:07:03.676228832 +0000 UTC m=+0.456746918 container died e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:07:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6171e52e147047dcb42534231b93ebe01250f5ea043493fe196cf1594d20728-merged.mount: Deactivated successfully.
Jan 31 09:07:04 compute-0 ceph-mon[74496]: osdmap e415: 3 total, 3 up, 3 in
Jan 31 09:07:04 compute-0 ceph-mon[74496]: pgmap v4058: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 145 KiB/s wr, 22 op/s
Jan 31 09:07:04 compute-0 podman[411203]: 2026-01-31 09:07:04.368353391 +0000 UTC m=+1.148871477 container remove e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:07:04 compute-0 systemd[1]: libpod-conmon-e4fc0abcae304d75092fd2d7032bdf86bc4c79903727dc85ede16da09524a8d1.scope: Deactivated successfully.
Jan 31 09:07:04 compute-0 podman[411243]: 2026-01-31 09:07:04.510116288 +0000 UTC m=+0.047537767 container create 6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_neumann, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 09:07:04 compute-0 systemd[1]: Started libpod-conmon-6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809.scope.
Jan 31 09:07:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348e81ff4d10659567497f221e1baf1ed49b6c69d7196175d311f6b6ee116f0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348e81ff4d10659567497f221e1baf1ed49b6c69d7196175d311f6b6ee116f0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348e81ff4d10659567497f221e1baf1ed49b6c69d7196175d311f6b6ee116f0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/348e81ff4d10659567497f221e1baf1ed49b6c69d7196175d311f6b6ee116f0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:04 compute-0 podman[411243]: 2026-01-31 09:07:04.486432292 +0000 UTC m=+0.023853791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:07:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:04.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:04 compute-0 podman[411243]: 2026-01-31 09:07:04.641904973 +0000 UTC m=+0.179326452 container init 6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:07:04 compute-0 podman[411243]: 2026-01-31 09:07:04.649647422 +0000 UTC m=+0.187068901 container start 6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 09:07:04 compute-0 podman[411243]: 2026-01-31 09:07:04.694258706 +0000 UTC m=+0.231680205 container attach 6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:07:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:04.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:05 compute-0 zealous_neumann[411259]: {
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:     "0": [
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:         {
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "devices": [
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "/dev/loop3"
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             ],
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "lv_name": "ceph_lv0",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "lv_size": "7511998464",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "name": "ceph_lv0",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "tags": {
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.cluster_name": "ceph",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.crush_device_class": "",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.encrypted": "0",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.osd_id": "0",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.type": "block",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:                 "ceph.vdo": "0"
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             },
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "type": "block",
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:             "vg_name": "ceph_vg0"
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:         }
Jan 31 09:07:05 compute-0 zealous_neumann[411259]:     ]
Jan 31 09:07:05 compute-0 zealous_neumann[411259]: }
Jan 31 09:07:05 compute-0 systemd[1]: libpod-6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809.scope: Deactivated successfully.
Jan 31 09:07:05 compute-0 podman[411243]: 2026-01-31 09:07:05.446966279 +0000 UTC m=+0.984387788 container died 6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_neumann, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:07:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 41 op/s
Jan 31 09:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-348e81ff4d10659567497f221e1baf1ed49b6c69d7196175d311f6b6ee116f0d-merged.mount: Deactivated successfully.
Jan 31 09:07:06 compute-0 podman[411243]: 2026-01-31 09:07:06.126168504 +0000 UTC m=+1.663589973 container remove 6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_neumann, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:07:06 compute-0 ceph-mon[74496]: pgmap v4059: 305 pgs: 305 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 41 op/s
Jan 31 09:07:06 compute-0 sudo[411138]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:06 compute-0 sudo[411280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:06 compute-0 sudo[411280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:06 compute-0 sudo[411280]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:06 compute-0 systemd[1]: libpod-conmon-6e509a6a85a40400bdb2352f89040e41ef3d92a2f6c27d496f5132cc2d7e0809.scope: Deactivated successfully.
Jan 31 09:07:06 compute-0 sudo[411305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:07:06 compute-0 sudo[411305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:06 compute-0 sudo[411305]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:06 compute-0 sudo[411330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:06 compute-0 sudo[411330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:06 compute-0 sudo[411330]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:06 compute-0 sudo[411355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:07:06 compute-0 sudo[411355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:06.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:06 compute-0 podman[411422]: 2026-01-31 09:07:06.685157837 +0000 UTC m=+0.091989468 container create e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:07:06 compute-0 podman[411422]: 2026-01-31 09:07:06.615063613 +0000 UTC m=+0.021895264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:07:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:06.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:06 compute-0 systemd[1]: Started libpod-conmon-e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079.scope.
Jan 31 09:07:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:07:06 compute-0 podman[411422]: 2026-01-31 09:07:06.791438771 +0000 UTC m=+0.198270422 container init e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 09:07:06 compute-0 podman[411422]: 2026-01-31 09:07:06.799523858 +0000 UTC m=+0.206355489 container start e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 09:07:06 compute-0 stupefied_saha[411438]: 167 167
Jan 31 09:07:06 compute-0 systemd[1]: libpod-e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079.scope: Deactivated successfully.
Jan 31 09:07:06 compute-0 conmon[411438]: conmon e56eaf200beac3cdce18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079.scope/container/memory.events
Jan 31 09:07:06 compute-0 podman[411422]: 2026-01-31 09:07:06.825334496 +0000 UTC m=+0.232166137 container attach e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:07:06 compute-0 podman[411422]: 2026-01-31 09:07:06.826176675 +0000 UTC m=+0.233008316 container died e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3776eee1e73be87174f33018aeb2d699e6bde217c53db79dd990948efa2962d9-merged.mount: Deactivated successfully.
Jan 31 09:07:07 compute-0 podman[411422]: 2026-01-31 09:07:07.01305919 +0000 UTC m=+0.419890821 container remove e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 09:07:07 compute-0 systemd[1]: libpod-conmon-e56eaf200beac3cdce18e625d5ccafcbf1b41f4b346790e204bccc0f0ea6d079.scope: Deactivated successfully.
Jan 31 09:07:07 compute-0 podman[411462]: 2026-01-31 09:07:07.171431451 +0000 UTC m=+0.077480485 container create 192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:07:07 compute-0 podman[411462]: 2026-01-31 09:07:07.114261931 +0000 UTC m=+0.020310995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:07:07 compute-0 systemd[1]: Started libpod-conmon-192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc.scope.
Jan 31 09:07:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1668d15e8a1af83b9d905203ff603a9da7078fd23ac1b03cbb2000196ef4d280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1668d15e8a1af83b9d905203ff603a9da7078fd23ac1b03cbb2000196ef4d280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1668d15e8a1af83b9d905203ff603a9da7078fd23ac1b03cbb2000196ef4d280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1668d15e8a1af83b9d905203ff603a9da7078fd23ac1b03cbb2000196ef4d280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:07:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Jan 31 09:07:07 compute-0 podman[411462]: 2026-01-31 09:07:07.32147904 +0000 UTC m=+0.227528094 container init 192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:07:07 compute-0 podman[411462]: 2026-01-31 09:07:07.329607657 +0000 UTC m=+0.235656691 container start 192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:07:07 compute-0 podman[411462]: 2026-01-31 09:07:07.370165394 +0000 UTC m=+0.276214458 container attach 192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 09:07:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Jan 31 09:07:07 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Jan 31 09:07:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.5 MiB/s wr, 89 op/s
Jan 31 09:07:07 compute-0 sudo[411484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:07 compute-0 sudo[411484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:07 compute-0 sudo[411484]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:07 compute-0 sudo[411509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:07 compute-0 sudo[411509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:07 compute-0 sudo[411509]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:07 compute-0 nova_compute[247704]: 2026-01-31 09:07:07.995 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]: {
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]:         "osd_id": 0,
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]:         "type": "bluestore"
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]:     }
Jan 31 09:07:08 compute-0 xenodochial_wescoff[411478]: }
Jan 31 09:07:08 compute-0 systemd[1]: libpod-192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc.scope: Deactivated successfully.
Jan 31 09:07:08 compute-0 podman[411462]: 2026-01-31 09:07:08.194907468 +0000 UTC m=+1.100956502 container died 192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:07:08 compute-0 nova_compute[247704]: 2026-01-31 09:07:08.213 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1668d15e8a1af83b9d905203ff603a9da7078fd23ac1b03cbb2000196ef4d280-merged.mount: Deactivated successfully.
Jan 31 09:07:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Jan 31 09:07:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:08.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:08 compute-0 ceph-mon[74496]: osdmap e416: 3 total, 3 up, 3 in
Jan 31 09:07:08 compute-0 ceph-mon[74496]: pgmap v4061: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.5 MiB/s wr, 89 op/s
Jan 31 09:07:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:08.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:08 compute-0 podman[411462]: 2026-01-31 09:07:08.848396389 +0000 UTC m=+1.754445463 container remove 192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:07:08 compute-0 systemd[1]: libpod-conmon-192bbf46cd5fb7ee48f613d25d8f2e85aef82d6d9f4f4db4b437f4aadedd32dc.scope: Deactivated successfully.
Jan 31 09:07:08 compute-0 sudo[411355]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:07:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Jan 31 09:07:09 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Jan 31 09:07:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.6 MiB/s wr, 102 op/s
Jan 31 09:07:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:07:09 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2758c713-422e-46f3-89cd-e498418b2d59 does not exist
Jan 31 09:07:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7c9c103a-f041-4266-9e87-b4c7bd094aab does not exist
Jan 31 09:07:09 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 79e300ef-cef4-4bd4-99a0-45ab4bb2b469 does not exist
Jan 31 09:07:09 compute-0 sudo[411564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:09 compute-0 sudo[411564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:09 compute-0 sudo[411564]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:09 compute-0 sudo[411589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:07:09 compute-0 sudo[411589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:09 compute-0 sudo[411589]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:10 compute-0 ceph-mon[74496]: osdmap e417: 3 total, 3 up, 3 in
Jan 31 09:07:10 compute-0 ceph-mon[74496]: pgmap v4063: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.6 MiB/s wr, 102 op/s
Jan 31 09:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:10 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:07:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:10.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:07:11.236 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:07:11.238 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:07:11.238 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 113 op/s
Jan 31 09:07:12 compute-0 nova_compute[247704]: 2026-01-31 09:07:12.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:12.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:12.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:12 compute-0 ceph-mon[74496]: pgmap v4064: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 113 op/s
Jan 31 09:07:12 compute-0 nova_compute[247704]: 2026-01-31 09:07:12.996 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:13 compute-0 nova_compute[247704]: 2026-01-31 09:07:13.215 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 72 op/s
Jan 31 09:07:13 compute-0 ceph-mon[74496]: pgmap v4065: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 72 op/s
Jan 31 09:07:14 compute-0 nova_compute[247704]: 2026-01-31 09:07:14.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:14 compute-0 nova_compute[247704]: 2026-01-31 09:07:14.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:07:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:14.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:14.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 31 09:07:15 compute-0 ceph-mon[74496]: pgmap v4066: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 84 op/s
Jan 31 09:07:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Jan 31 09:07:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Jan 31 09:07:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Jan 31 09:07:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:16.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:17 compute-0 ceph-mon[74496]: osdmap e418: 3 total, 3 up, 3 in
Jan 31 09:07:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1410142522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:17 compute-0 nova_compute[247704]: 2026-01-31 09:07:17.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Jan 31 09:07:17 compute-0 podman[411618]: 2026-01-31 09:07:17.900756037 +0000 UTC m=+0.068765533 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:07:17 compute-0 nova_compute[247704]: 2026-01-31 09:07:17.997 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:18 compute-0 nova_compute[247704]: 2026-01-31 09:07:18.216 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:18 compute-0 ceph-mon[74496]: pgmap v4068: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Jan 31 09:07:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:18.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 1.1 MiB/s wr, 51 op/s
Jan 31 09:07:20 compute-0 ceph-mon[74496]: pgmap v4069: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 1.1 MiB/s wr, 51 op/s
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:07:20
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Jan 31 09:07:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:07:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:20.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:20.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:07:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3941672233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 45 op/s
Jan 31 09:07:22 compute-0 ceph-mon[74496]: pgmap v4070: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 45 op/s
Jan 31 09:07:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1871789765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:22.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:22.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:23 compute-0 nova_compute[247704]: 2026-01-31 09:07:22.999 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:23 compute-0 nova_compute[247704]: 2026-01-31 09:07:23.217 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3951575014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:23 compute-0 nova_compute[247704]: 2026-01-31 09:07:23.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:23 compute-0 nova_compute[247704]: 2026-01-31 09:07:23.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:07:23 compute-0 nova_compute[247704]: 2026-01-31 09:07:23.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:07:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 43 op/s
Jan 31 09:07:23 compute-0 nova_compute[247704]: 2026-01-31 09:07:23.581 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:07:23 compute-0 nova_compute[247704]: 2026-01-31 09:07:23.581 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:24 compute-0 ceph-mon[74496]: pgmap v4071: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 43 op/s
Jan 31 09:07:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:24.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:24.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 31 09:07:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:26.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:26 compute-0 ceph-mon[74496]: pgmap v4072: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 31 09:07:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:26.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:27 compute-0 nova_compute[247704]: 2026-01-31 09:07:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 999 KiB/s wr, 56 op/s
Jan 31 09:07:27 compute-0 nova_compute[247704]: 2026-01-31 09:07:27.591 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:27 compute-0 nova_compute[247704]: 2026-01-31 09:07:27.592 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:27 compute-0 nova_compute[247704]: 2026-01-31 09:07:27.592 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:27 compute-0 nova_compute[247704]: 2026-01-31 09:07:27.592 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:07:27 compute-0 nova_compute[247704]: 2026-01-31 09:07:27.592 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:07:27 compute-0 sudo[411651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:27 compute-0 sudo[411651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:27 compute-0 sudo[411651]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1621740039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:07:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/993917815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:07:27 compute-0 sudo[411694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:27 compute-0 sudo[411694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:27 compute-0 sudo[411694]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.036 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:07:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441572639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.083 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.218 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.239 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.241 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4195MB free_disk=20.97762680053711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.241 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.241 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.355 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.355 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.377 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:07:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:28.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:28 compute-0 ceph-mon[74496]: pgmap v4073: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 999 KiB/s wr, 56 op/s
Jan 31 09:07:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1275098523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3441572639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1191431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:07:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1803889982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.822 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.828 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.845 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.847 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:07:28 compute-0 nova_compute[247704]: 2026-01-31 09:07:28.847 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:07:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:07:29.176 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=107, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=106) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:07:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:07:29.177 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:07:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:07:29.178 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '107'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:07:29 compute-0 nova_compute[247704]: 2026-01-31 09:07:29.224 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 935 KiB/s wr, 41 op/s
Jan 31 09:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Jan 31 09:07:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Jan 31 09:07:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1803889982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:07:29 compute-0 ceph-mon[74496]: pgmap v4074: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 935 KiB/s wr, 41 op/s
Jan 31 09:07:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Jan 31 09:07:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:30.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:30.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:30 compute-0 nova_compute[247704]: 2026-01-31 09:07:30.848 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:30 compute-0 ceph-mon[74496]: osdmap e419: 3 total, 3 up, 3 in
Jan 31 09:07:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 243 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 145 op/s
Jan 31 09:07:32 compute-0 ceph-mon[74496]: pgmap v4076: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 243 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 145 op/s
Jan 31 09:07:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:32.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:32.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:33 compute-0 nova_compute[247704]: 2026-01-31 09:07:33.039 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:33 compute-0 nova_compute[247704]: 2026-01-31 09:07:33.219 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:33 compute-0 nova_compute[247704]: 2026-01-31 09:07:33.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 217 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 169 op/s
Jan 31 09:07:33 compute-0 podman[411746]: 2026-01-31 09:07:33.87801339 +0000 UTC m=+0.053207585 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:07:33 compute-0 ceph-mon[74496]: pgmap v4077: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 217 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.7 MiB/s wr, 169 op/s
Jan 31 09:07:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:35 compute-0 nova_compute[247704]: 2026-01-31 09:07:35.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 31 09:07:35 compute-0 ceph-mon[74496]: pgmap v4078: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002175591499162479 of space, bias 1.0, pg target 0.6526774497487436 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:07:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:07:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Jan 31 09:07:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Jan 31 09:07:36 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Jan 31 09:07:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:36.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:36.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.5 MiB/s wr, 221 op/s
Jan 31 09:07:37 compute-0 ceph-mon[74496]: osdmap e420: 3 total, 3 up, 3 in
Jan 31 09:07:38 compute-0 nova_compute[247704]: 2026-01-31 09:07:38.040 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:38 compute-0 nova_compute[247704]: 2026-01-31 09:07:38.268 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:38.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:38 compute-0 ceph-mon[74496]: pgmap v4080: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.5 MiB/s wr, 221 op/s
Jan 31 09:07:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:38.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.7 MiB/s wr, 180 op/s
Jan 31 09:07:40 compute-0 ceph-mon[74496]: pgmap v4081: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.7 MiB/s wr, 180 op/s
Jan 31 09:07:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:40.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:40.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:41 compute-0 nova_compute[247704]: 2026-01-31 09:07:41.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.7 KiB/s wr, 63 op/s
Jan 31 09:07:42 compute-0 ceph-mon[74496]: pgmap v4082: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.7 KiB/s wr, 63 op/s
Jan 31 09:07:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:42.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:42.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:43 compute-0 nova_compute[247704]: 2026-01-31 09:07:43.043 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:43 compute-0 nova_compute[247704]: 2026-01-31 09:07:43.269 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 50 op/s
Jan 31 09:07:43 compute-0 ceph-mon[74496]: pgmap v4083: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 50 op/s
Jan 31 09:07:44 compute-0 nova_compute[247704]: 2026-01-31 09:07:44.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:07:44 compute-0 nova_compute[247704]: 2026-01-31 09:07:44.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:07:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:44.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:45 compute-0 nova_compute[247704]: 2026-01-31 09:07:45.294 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:07:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 16 KiB/s wr, 52 op/s
Jan 31 09:07:45 compute-0 ceph-mon[74496]: pgmap v4084: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 16 KiB/s wr, 52 op/s
Jan 31 09:07:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:46.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 583 KiB/s rd, 23 KiB/s wr, 47 op/s
Jan 31 09:07:47 compute-0 sudo[411772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:47 compute-0 sudo[411772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:47 compute-0 sudo[411772]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:47 compute-0 sudo[411797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:07:47 compute-0 sudo[411797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:07:47 compute-0 sudo[411797]: pam_unix(sudo:session): session closed for user root
Jan 31 09:07:47 compute-0 ceph-mon[74496]: pgmap v4085: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 583 KiB/s rd, 23 KiB/s wr, 47 op/s
Jan 31 09:07:48 compute-0 podman[411821]: 2026-01-31 09:07:48.003425405 +0000 UTC m=+0.076222506 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 09:07:48 compute-0 nova_compute[247704]: 2026-01-31 09:07:48.043 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:48 compute-0 nova_compute[247704]: 2026-01-31 09:07:48.271 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:48.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:48.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 603 KiB/s rd, 22 KiB/s wr, 47 op/s
Jan 31 09:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:07:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:07:50 compute-0 ceph-mon[74496]: pgmap v4086: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 603 KiB/s rd, 22 KiB/s wr, 47 op/s
Jan 31 09:07:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 24 KiB/s wr, 49 op/s
Jan 31 09:07:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:52.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:52.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:52 compute-0 ceph-mon[74496]: pgmap v4087: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 24 KiB/s wr, 49 op/s
Jan 31 09:07:53 compute-0 nova_compute[247704]: 2026-01-31 09:07:53.080 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:53 compute-0 nova_compute[247704]: 2026-01-31 09:07:53.272 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:07:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096231141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:07:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:07:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096231141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:07:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 748 KiB/s rd, 23 KiB/s wr, 49 op/s
Jan 31 09:07:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2096231141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:07:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2096231141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:07:54 compute-0 ceph-mon[74496]: pgmap v4088: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 748 KiB/s rd, 23 KiB/s wr, 49 op/s
Jan 31 09:07:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:07:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:07:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:54.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 174 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 12 KiB/s wr, 51 op/s
Jan 31 09:07:56 compute-0 ceph-mon[74496]: pgmap v4089: 305 pgs: 305 active+clean; 174 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 12 KiB/s wr, 51 op/s
Jan 31 09:07:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:07:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:07:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:07:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 151 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 11 KiB/s wr, 20 op/s
Jan 31 09:07:57 compute-0 ceph-mon[74496]: pgmap v4090: 305 pgs: 305 active+clean; 151 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 11 KiB/s wr, 20 op/s
Jan 31 09:07:58 compute-0 nova_compute[247704]: 2026-01-31 09:07:58.083 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:58 compute-0 nova_compute[247704]: 2026-01-31 09:07:58.274 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:07:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:07:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:07:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:58.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:07:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 2.9 KiB/s wr, 32 op/s
Jan 31 09:07:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2616368524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:00.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:00.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:01 compute-0 ceph-mon[74496]: pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 2.9 KiB/s wr, 32 op/s
Jan 31 09:08:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Jan 31 09:08:02 compute-0 ceph-mon[74496]: pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 3.2 KiB/s wr, 31 op/s
Jan 31 09:08:02 compute-0 nova_compute[247704]: 2026-01-31 09:08:02.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:02.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:03 compute-0 nova_compute[247704]: 2026-01-31 09:08:03.120 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:03 compute-0 nova_compute[247704]: 2026-01-31 09:08:03.275 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 75 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 31 09:08:03 compute-0 ceph-mon[74496]: pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 75 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 31 09:08:04 compute-0 nova_compute[247704]: 2026-01-31 09:08:04.581 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:04 compute-0 nova_compute[247704]: 2026-01-31 09:08:04.582 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:08:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:04.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:04.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:04 compute-0 podman[411857]: 2026-01-31 09:08:04.863300899 +0000 UTC m=+0.041168651 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 09:08:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:08:05 compute-0 ceph-mon[74496]: pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:08:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:06.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:06.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 09:08:07 compute-0 sudo[411877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:07 compute-0 sudo[411877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:07 compute-0 sudo[411877]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:08 compute-0 sudo[411902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:08 compute-0 sudo[411902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:08 compute-0 sudo[411902]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:08 compute-0 nova_compute[247704]: 2026-01-31 09:08:08.123 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:08 compute-0 nova_compute[247704]: 2026-01-31 09:08:08.277 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:08.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:08 compute-0 ceph-mon[74496]: pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 09:08:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 09:08:09 compute-0 ceph-mon[74496]: pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 09:08:10 compute-0 sudo[411928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:10 compute-0 sudo[411928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-0 sudo[411928]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-0 sudo[411953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:08:10 compute-0 sudo[411953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-0 sudo[411953]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-0 sudo[411978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:10 compute-0 sudo[411978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-0 sudo[411978]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-0 sudo[412003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:08:10 compute-0 sudo[412003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-0 sudo[412003]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:10.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:10 compute-0 sudo[412061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:10 compute-0 sudo[412061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-0 sudo[412061]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:10.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:10 compute-0 sudo[412086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:08:10 compute-0 sudo[412086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-0 sudo[412086]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-0 sudo[412111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:10 compute-0 sudo[412111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:10 compute-0 sudo[412111]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:10 compute-0 sudo[412136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Jan 31 09:08:10 compute-0 sudo[412136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:11 compute-0 sudo[412136]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:08:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:08:11.237 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:08:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:08:11.238 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:08:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:08:11.238 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 66379f56-a7e9-42ad-951a-770284eb88af does not exist
Jan 31 09:08:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 46f63ffe-f3f7-4939-8154-16a8bbf15070 does not exist
Jan 31 09:08:11 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0edfad55-7f53-49b6-b1d2-e6fd3ab88661 does not exist
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:08:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:08:11 compute-0 sudo[412180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:11 compute-0 sudo[412180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:11 compute-0 sudo[412180]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:11 compute-0 sudo[412205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:08:11 compute-0 sudo[412205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:11 compute-0 sudo[412205]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:11 compute-0 sudo[412230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:11 compute-0 sudo[412230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:11 compute-0 sudo[412230]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:11 compute-0 sudo[412255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:08:11 compute-0 sudo[412255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 31 09:08:11 compute-0 podman[412320]: 2026-01-31 09:08:11.794277353 +0000 UTC m=+0.037853471 container create 04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 09:08:11 compute-0 systemd[1]: Started libpod-conmon-04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc.scope.
Jan 31 09:08:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:08:11 compute-0 podman[412320]: 2026-01-31 09:08:11.775478616 +0000 UTC m=+0.019054764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:08:11 compute-0 podman[412320]: 2026-01-31 09:08:11.871877461 +0000 UTC m=+0.115453599 container init 04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 09:08:11 compute-0 podman[412320]: 2026-01-31 09:08:11.878232615 +0000 UTC m=+0.121808723 container start 04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:08:11 compute-0 podman[412320]: 2026-01-31 09:08:11.883437741 +0000 UTC m=+0.127013889 container attach 04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 09:08:11 compute-0 sad_buck[412337]: 167 167
Jan 31 09:08:11 compute-0 systemd[1]: libpod-04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc.scope: Deactivated successfully.
Jan 31 09:08:11 compute-0 podman[412320]: 2026-01-31 09:08:11.884529058 +0000 UTC m=+0.128105176 container died 04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 09:08:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-eefc5eebdfe6d9153aca81e0b42e25f5a5f0ff48b11cae7d6924480931787e22-merged.mount: Deactivated successfully.
Jan 31 09:08:11 compute-0 podman[412320]: 2026-01-31 09:08:11.928699532 +0000 UTC m=+0.172275640 container remove 04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 09:08:11 compute-0 systemd[1]: libpod-conmon-04062dab877388a32936eecb4fd7197dddbe9397196d8e234fa15f9c622240bc.scope: Deactivated successfully.
Jan 31 09:08:12 compute-0 podman[412361]: 2026-01-31 09:08:12.058690063 +0000 UTC m=+0.041319185 container create 7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 09:08:12 compute-0 systemd[1]: Started libpod-conmon-7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919.scope.
Jan 31 09:08:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a870d1c70ca75b9af5d0596228ea0e75aa7c92e83b05fb236c5e58ba8232756c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a870d1c70ca75b9af5d0596228ea0e75aa7c92e83b05fb236c5e58ba8232756c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a870d1c70ca75b9af5d0596228ea0e75aa7c92e83b05fb236c5e58ba8232756c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a870d1c70ca75b9af5d0596228ea0e75aa7c92e83b05fb236c5e58ba8232756c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a870d1c70ca75b9af5d0596228ea0e75aa7c92e83b05fb236c5e58ba8232756c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:12 compute-0 podman[412361]: 2026-01-31 09:08:12.039175438 +0000 UTC m=+0.021804590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:08:12 compute-0 podman[412361]: 2026-01-31 09:08:12.136913545 +0000 UTC m=+0.119542687 container init 7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 09:08:12 compute-0 podman[412361]: 2026-01-31 09:08:12.142735666 +0000 UTC m=+0.125364818 container start 7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:08:12 compute-0 podman[412361]: 2026-01-31 09:08:12.146438567 +0000 UTC m=+0.129067689 container attach 7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:08:12 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:08:12 compute-0 ceph-mon[74496]: pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 31 09:08:12 compute-0 nova_compute[247704]: 2026-01-31 09:08:12.487 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:12.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:12.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:12 compute-0 blissful_lichterman[412377]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:08:12 compute-0 blissful_lichterman[412377]: --> relative data size: 1.0
Jan 31 09:08:12 compute-0 blissful_lichterman[412377]: --> All data devices are unavailable
Jan 31 09:08:12 compute-0 systemd[1]: libpod-7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919.scope: Deactivated successfully.
Jan 31 09:08:12 compute-0 podman[412361]: 2026-01-31 09:08:12.944953233 +0000 UTC m=+0.927582365 container died 7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a870d1c70ca75b9af5d0596228ea0e75aa7c92e83b05fb236c5e58ba8232756c-merged.mount: Deactivated successfully.
Jan 31 09:08:13 compute-0 podman[412361]: 2026-01-31 09:08:13.008130029 +0000 UTC m=+0.990759151 container remove 7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 09:08:13 compute-0 systemd[1]: libpod-conmon-7ce72544cb314420d6a750ca85491d986566ad0d54f7f265704b17304b6d1919.scope: Deactivated successfully.
Jan 31 09:08:13 compute-0 sudo[412255]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:13 compute-0 sudo[412407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:13 compute-0 sudo[412407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:13 compute-0 sudo[412407]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:13 compute-0 sudo[412432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:08:13 compute-0 nova_compute[247704]: 2026-01-31 09:08:13.161 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:13 compute-0 sudo[412432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:13 compute-0 sudo[412432]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:13 compute-0 sudo[412457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:13 compute-0 sudo[412457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:13 compute-0 sudo[412457]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:13 compute-0 sudo[412482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:08:13 compute-0 sudo[412482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:13 compute-0 nova_compute[247704]: 2026-01-31 09:08:13.279 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:13 compute-0 podman[412546]: 2026-01-31 09:08:13.564027386 +0000 UTC m=+0.035870052 container create 67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:08:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 85 B/s wr, 5 op/s
Jan 31 09:08:13 compute-0 nova_compute[247704]: 2026-01-31 09:08:13.597 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:13 compute-0 systemd[1]: Started libpod-conmon-67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805.scope.
Jan 31 09:08:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:08:13 compute-0 podman[412546]: 2026-01-31 09:08:13.547526666 +0000 UTC m=+0.019369332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:08:13 compute-0 podman[412546]: 2026-01-31 09:08:13.655721187 +0000 UTC m=+0.127563873 container init 67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:08:13 compute-0 podman[412546]: 2026-01-31 09:08:13.664198603 +0000 UTC m=+0.136041269 container start 67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:08:13 compute-0 podman[412546]: 2026-01-31 09:08:13.667864312 +0000 UTC m=+0.139706998 container attach 67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 09:08:13 compute-0 eloquent_bartik[412562]: 167 167
Jan 31 09:08:13 compute-0 systemd[1]: libpod-67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805.scope: Deactivated successfully.
Jan 31 09:08:13 compute-0 conmon[412562]: conmon 67b161a684b13a9d20da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805.scope/container/memory.events
Jan 31 09:08:13 compute-0 podman[412546]: 2026-01-31 09:08:13.673578101 +0000 UTC m=+0.145420767 container died 67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 09:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4db8ca6da9fdf714e5d46d8bc8417fd566d661e17f0703b484c42111750313d0-merged.mount: Deactivated successfully.
Jan 31 09:08:13 compute-0 podman[412546]: 2026-01-31 09:08:13.708141951 +0000 UTC m=+0.179984617 container remove 67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:08:13 compute-0 systemd[1]: libpod-conmon-67b161a684b13a9d20dac46ea7e9a4795930ebb343dc2fbd08c005f5119b3805.scope: Deactivated successfully.
Jan 31 09:08:13 compute-0 podman[412585]: 2026-01-31 09:08:13.848578246 +0000 UTC m=+0.042348111 container create 434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_liskov, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:08:13 compute-0 systemd[1]: Started libpod-conmon-434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d.scope.
Jan 31 09:08:13 compute-0 podman[412585]: 2026-01-31 09:08:13.829119543 +0000 UTC m=+0.022889438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:08:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b06a8a4fe81075eba0a494ddc2fe69f21f2a8a77ac15eb4015ded4140db85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b06a8a4fe81075eba0a494ddc2fe69f21f2a8a77ac15eb4015ded4140db85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b06a8a4fe81075eba0a494ddc2fe69f21f2a8a77ac15eb4015ded4140db85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881b06a8a4fe81075eba0a494ddc2fe69f21f2a8a77ac15eb4015ded4140db85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:13 compute-0 podman[412585]: 2026-01-31 09:08:13.94909938 +0000 UTC m=+0.142869275 container init 434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 09:08:13 compute-0 podman[412585]: 2026-01-31 09:08:13.954323187 +0000 UTC m=+0.148093052 container start 434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 09:08:13 compute-0 podman[412585]: 2026-01-31 09:08:13.957635088 +0000 UTC m=+0.151404963 container attach 434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_liskov, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:08:14 compute-0 nova_compute[247704]: 2026-01-31 09:08:14.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:14 compute-0 nova_compute[247704]: 2026-01-31 09:08:14.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:08:14 compute-0 ceph-mon[74496]: pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 85 B/s wr, 5 op/s
Jan 31 09:08:14 compute-0 agitated_liskov[412601]: {
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:     "0": [
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:         {
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "devices": [
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "/dev/loop3"
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             ],
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "lv_name": "ceph_lv0",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "lv_size": "7511998464",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "name": "ceph_lv0",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "tags": {
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.cluster_name": "ceph",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.crush_device_class": "",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.encrypted": "0",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.osd_id": "0",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.type": "block",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:                 "ceph.vdo": "0"
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             },
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "type": "block",
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:             "vg_name": "ceph_vg0"
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:         }
Jan 31 09:08:14 compute-0 agitated_liskov[412601]:     ]
Jan 31 09:08:14 compute-0 agitated_liskov[412601]: }
Jan 31 09:08:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:14.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:14 compute-0 systemd[1]: libpod-434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d.scope: Deactivated successfully.
Jan 31 09:08:14 compute-0 podman[412585]: 2026-01-31 09:08:14.703637337 +0000 UTC m=+0.897407232 container died 434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 09:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-881b06a8a4fe81075eba0a494ddc2fe69f21f2a8a77ac15eb4015ded4140db85-merged.mount: Deactivated successfully.
Jan 31 09:08:14 compute-0 podman[412585]: 2026-01-31 09:08:14.786361829 +0000 UTC m=+0.980131694 container remove 434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 09:08:14 compute-0 systemd[1]: libpod-conmon-434476abf29cd416f63e80694f9d2c6f164e6386e0e7c94087f11b36a288b60d.scope: Deactivated successfully.
Jan 31 09:08:14 compute-0 sudo[412482]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:14.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:14 compute-0 sudo[412623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:14 compute-0 sudo[412623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:14 compute-0 sudo[412623]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:14 compute-0 sudo[412648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:08:14 compute-0 sudo[412648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:14 compute-0 sudo[412648]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:14 compute-0 sudo[412673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:14 compute-0 sudo[412673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:14 compute-0 sudo[412673]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:15 compute-0 sudo[412698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:08:15 compute-0 sudo[412698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:15 compute-0 podman[412763]: 2026-01-31 09:08:15.306263991 +0000 UTC m=+0.044755659 container create 98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 09:08:15 compute-0 systemd[1]: Started libpod-conmon-98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2.scope.
Jan 31 09:08:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:08:15 compute-0 podman[412763]: 2026-01-31 09:08:15.283251771 +0000 UTC m=+0.021743459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:08:15 compute-0 podman[412763]: 2026-01-31 09:08:15.380752553 +0000 UTC m=+0.119244221 container init 98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:08:15 compute-0 podman[412763]: 2026-01-31 09:08:15.386539783 +0000 UTC m=+0.125031451 container start 98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:08:15 compute-0 podman[412763]: 2026-01-31 09:08:15.389664969 +0000 UTC m=+0.128156637 container attach 98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 09:08:15 compute-0 elastic_bhaskara[412779]: 167 167
Jan 31 09:08:15 compute-0 systemd[1]: libpod-98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2.scope: Deactivated successfully.
Jan 31 09:08:15 compute-0 conmon[412779]: conmon 98c90047f4381f12929e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2.scope/container/memory.events
Jan 31 09:08:15 compute-0 podman[412763]: 2026-01-31 09:08:15.393014321 +0000 UTC m=+0.131505989 container died 98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 09:08:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-81cb4b8721a56a018ffc713f44e753c78112eaf71457f82a3464359429b5b5ed-merged.mount: Deactivated successfully.
Jan 31 09:08:15 compute-0 podman[412763]: 2026-01-31 09:08:15.43490316 +0000 UTC m=+0.173394828 container remove 98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 09:08:15 compute-0 systemd[1]: libpod-conmon-98c90047f4381f12929edb67b8fdb5d56a73935ec0dc778b2b1eb13a327f8be2.scope: Deactivated successfully.
Jan 31 09:08:15 compute-0 podman[412803]: 2026-01-31 09:08:15.548302707 +0000 UTC m=+0.034007109 container create 71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sutherland, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:08:15 compute-0 systemd[1]: Started libpod-conmon-71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9.scope.
Jan 31 09:08:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 9 op/s
Jan 31 09:08:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23bda18f72b0c907bb203d4f53caa159f6696ae9afef218d437d8585b64bd1d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23bda18f72b0c907bb203d4f53caa159f6696ae9afef218d437d8585b64bd1d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23bda18f72b0c907bb203d4f53caa159f6696ae9afef218d437d8585b64bd1d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23bda18f72b0c907bb203d4f53caa159f6696ae9afef218d437d8585b64bd1d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:08:15 compute-0 podman[412803]: 2026-01-31 09:08:15.627146454 +0000 UTC m=+0.112850876 container init 71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sutherland, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 09:08:15 compute-0 podman[412803]: 2026-01-31 09:08:15.53322513 +0000 UTC m=+0.018929552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:08:15 compute-0 podman[412803]: 2026-01-31 09:08:15.633270192 +0000 UTC m=+0.118974594 container start 71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:08:15 compute-0 podman[412803]: 2026-01-31 09:08:15.637344332 +0000 UTC m=+0.123048734 container attach 71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sutherland, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 09:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]: {
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]:         "osd_id": 0,
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]:         "type": "bluestore"
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]:     }
Jan 31 09:08:16 compute-0 intelligent_sutherland[412819]: }
Jan 31 09:08:16 compute-0 systemd[1]: libpod-71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9.scope: Deactivated successfully.
Jan 31 09:08:16 compute-0 podman[412803]: 2026-01-31 09:08:16.469216519 +0000 UTC m=+0.954920931 container died 71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sutherland, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-23bda18f72b0c907bb203d4f53caa159f6696ae9afef218d437d8585b64bd1d1-merged.mount: Deactivated successfully.
Jan 31 09:08:16 compute-0 podman[412803]: 2026-01-31 09:08:16.518235691 +0000 UTC m=+1.003940093 container remove 71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:08:16 compute-0 systemd[1]: libpod-conmon-71c255916f2914b2c39c8909518b9436c78692f0c40f22dbdf2646cca51bb7d9.scope: Deactivated successfully.
Jan 31 09:08:16 compute-0 sudo[412698]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:08:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 771d1433-a599-44fe-838f-1429885297ae does not exist
Jan 31 09:08:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 05b91f43-4bed-45f7-86aa-c5aab5f130fd does not exist
Jan 31 09:08:16 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b5e409b4-fe62-4d71-99b7-91cb1a4c2973 does not exist
Jan 31 09:08:16 compute-0 sudo[412855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:16 compute-0 sudo[412855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:16 compute-0 sudo[412855]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:16 compute-0 ceph-mon[74496]: pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 9 op/s
Jan 31 09:08:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:16 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:08:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:16.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:16 compute-0 sudo[412880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:08:16 compute-0 sudo[412880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:16 compute-0 sudo[412880]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:16.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:17 compute-0 nova_compute[247704]: 2026-01-31 09:08:17.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 134 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 402 KiB/s wr, 15 op/s
Jan 31 09:08:18 compute-0 nova_compute[247704]: 2026-01-31 09:08:18.163 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:18 compute-0 nova_compute[247704]: 2026-01-31 09:08:18.280 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:18.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:18 compute-0 ceph-mon[74496]: pgmap v4100: 305 pgs: 305 active+clean; 134 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 402 KiB/s wr, 15 op/s
Jan 31 09:08:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:18.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:18 compute-0 podman[412906]: 2026-01-31 09:08:18.945938973 +0000 UTC m=+0.117795195 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:08:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 151 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 994 KiB/s wr, 38 op/s
Jan 31 09:08:19 compute-0 ceph-mon[74496]: pgmap v4101: 305 pgs: 305 active+clean; 151 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 994 KiB/s wr, 38 op/s
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:08:20
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Jan 31 09:08:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:08:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:20.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:08:21 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1243239800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.405834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501405951, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1784, "num_deletes": 257, "total_data_size": 3198291, "memory_usage": 3246240, "flush_reason": "Manual Compaction"}
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501496069, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 3117258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87725, "largest_seqno": 89508, "table_properties": {"data_size": 3109048, "index_size": 5023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16910, "raw_average_key_size": 20, "raw_value_size": 3092580, "raw_average_value_size": 3681, "num_data_blocks": 220, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850331, "oldest_key_time": 1769850331, "file_creation_time": 1769850501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 90280 microseconds, and 6201 cpu microseconds.
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.496152) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 3117258 bytes OK
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.496174) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.507359) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.507402) EVENT_LOG_v1 {"time_micros": 1769850501507392, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.507430) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 3190850, prev total WAL file size 3190850, number of live WAL files 2.
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.508215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373730' seq:72057594037927935, type:22 .. '6C6F676D0034303231' seq:0, type:0; will stop at (end)
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(3044KB)], [203(10MB)]
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501508273, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 14262481, "oldest_snapshot_seqno": -1}
Jan 31 09:08:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 11520 keys, 14130852 bytes, temperature: kUnknown
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501666585, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 14130852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14058353, "index_size": 42626, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28805, "raw_key_size": 304858, "raw_average_key_size": 26, "raw_value_size": 13858858, "raw_average_value_size": 1203, "num_data_blocks": 1619, "num_entries": 11520, "num_filter_entries": 11520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.666904) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14130852 bytes
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.709174) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.0 rd, 89.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 12053, records dropped: 533 output_compression: NoCompression
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.709225) EVENT_LOG_v1 {"time_micros": 1769850501709197, "job": 128, "event": "compaction_finished", "compaction_time_micros": 158408, "compaction_time_cpu_micros": 25984, "output_level": 6, "num_output_files": 1, "total_output_size": 14130852, "num_input_records": 12053, "num_output_records": 11520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501709705, "job": 128, "event": "table_file_deletion", "file_number": 205}
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850501710661, "job": 128, "event": "table_file_deletion", "file_number": 203}
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.508035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.710695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.710699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.710701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.710702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:08:21.710704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:08:21 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 09:08:21 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 31 09:08:21 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 09:08:22 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 31 09:08:22 compute-0 ceph-mon[74496]: pgmap v4102: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 09:08:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3430568507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3031132125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:22.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:22.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:23 compute-0 nova_compute[247704]: 2026-01-31 09:08:23.211 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:23 compute-0 nova_compute[247704]: 2026-01-31 09:08:23.282 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1471333307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:08:23 compute-0 nova_compute[247704]: 2026-01-31 09:08:23.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 09:08:24 compute-0 ceph-mon[74496]: pgmap v4103: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 09:08:24 compute-0 nova_compute[247704]: 2026-01-31 09:08:24.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:24 compute-0 nova_compute[247704]: 2026-01-31 09:08:24.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:08:24 compute-0 nova_compute[247704]: 2026-01-31 09:08:24.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:08:24 compute-0 nova_compute[247704]: 2026-01-31 09:08:24.579 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:08:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:24.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 31 09:08:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:26 compute-0 ceph-mon[74496]: pgmap v4104: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 31 09:08:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:08:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:26.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:08:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:26.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 100 KiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 31 09:08:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3963900475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:08:28 compute-0 sudo[412937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:28 compute-0 sudo[412937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:28 compute-0 sudo[412937]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:28 compute-0 sudo[412962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:28 compute-0 sudo[412962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:28 compute-0 sudo[412962]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:28 compute-0 nova_compute[247704]: 2026-01-31 09:08:28.214 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:28 compute-0 nova_compute[247704]: 2026-01-31 09:08:28.282 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:28 compute-0 ceph-mon[74496]: pgmap v4105: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 100 KiB/s rd, 1.8 MiB/s wr, 164 op/s
Jan 31 09:08:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3877195061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4123028729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:28.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:29 compute-0 nova_compute[247704]: 2026-01-31 09:08:29.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 119 KiB/s rd, 1.4 MiB/s wr, 197 op/s
Jan 31 09:08:29 compute-0 nova_compute[247704]: 2026-01-31 09:08:29.714 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:08:29 compute-0 nova_compute[247704]: 2026-01-31 09:08:29.715 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:08:29 compute-0 nova_compute[247704]: 2026-01-31 09:08:29.715 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:08:29 compute-0 nova_compute[247704]: 2026-01-31 09:08:29.715 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:08:29 compute-0 nova_compute[247704]: 2026-01-31 09:08:29.716 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:08:29 compute-0 ceph-mon[74496]: pgmap v4106: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 119 KiB/s rd, 1.4 MiB/s wr, 197 op/s
Jan 31 09:08:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:08:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036767708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.125 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.269 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.270 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4195MB free_disk=20.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.270 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.271 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.385 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.385 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.409 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:08:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:30.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:08:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033565918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.840 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.845 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:08:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:30.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.903 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.904 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:08:30 compute-0 nova_compute[247704]: 2026-01-31 09:08:30.904 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:08:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2036767708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4033565918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:08:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 834 KiB/s wr, 245 op/s
Jan 31 09:08:31 compute-0 nova_compute[247704]: 2026-01-31 09:08:31.905 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:32 compute-0 ceph-mon[74496]: pgmap v4107: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 834 KiB/s wr, 245 op/s
Jan 31 09:08:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:32.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:32.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:33 compute-0 nova_compute[247704]: 2026-01-31 09:08:33.216 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:33 compute-0 nova_compute[247704]: 2026-01-31 09:08:33.284 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:33 compute-0 nova_compute[247704]: 2026-01-31 09:08:33.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 238 op/s
Jan 31 09:08:34 compute-0 ceph-mon[74496]: pgmap v4108: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 238 op/s
Jan 31 09:08:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:34.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:34.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 242 op/s
Jan 31 09:08:35 compute-0 ceph-mon[74496]: pgmap v4109: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 242 op/s
Jan 31 09:08:35 compute-0 podman[413035]: 2026-01-31 09:08:35.884113473 +0000 UTC m=+0.055641704 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.088393355667225e-06 of space, bias 1.0, pg target 0.0021265180067001677 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031506999697561884 of space, bias 1.0, pg target 0.9452099909268565 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:08:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:08:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:36 compute-0 nova_compute[247704]: 2026-01-31 09:08:36.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:08:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:36.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:36.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 210 op/s
Jan 31 09:08:38 compute-0 nova_compute[247704]: 2026-01-31 09:08:38.272 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:38 compute-0 nova_compute[247704]: 2026-01-31 09:08:38.286 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:38.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:38 compute-0 ceph-mon[74496]: pgmap v4110: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 210 op/s
Jan 31 09:08:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:38.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 146 op/s
Jan 31 09:08:39 compute-0 ceph-mon[74496]: pgmap v4111: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 146 op/s
Jan 31 09:08:40 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 31 09:08:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:40.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:40.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 344 KiB/s wr, 116 op/s
Jan 31 09:08:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:42.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:42 compute-0 ceph-mon[74496]: pgmap v4112: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 344 KiB/s wr, 116 op/s
Jan 31 09:08:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:42.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:43 compute-0 nova_compute[247704]: 2026-01-31 09:08:43.287 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:43 compute-0 nova_compute[247704]: 2026-01-31 09:08:43.289 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:43 compute-0 nova_compute[247704]: 2026-01-31 09:08:43.289 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:08:43 compute-0 nova_compute[247704]: 2026-01-31 09:08:43.289 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:43 compute-0 nova_compute[247704]: 2026-01-31 09:08:43.315 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:43 compute-0 nova_compute[247704]: 2026-01-31 09:08:43.316 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4113: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 792 KiB/s wr, 52 op/s
Jan 31 09:08:44 compute-0 ceph-mon[74496]: pgmap v4113: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 792 KiB/s wr, 52 op/s
Jan 31 09:08:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:08:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:44.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:08:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:44.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4114: 305 pgs: 305 active+clean; 194 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 998 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Jan 31 09:08:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:46.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:46.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:47 compute-0 ceph-mon[74496]: pgmap v4114: 305 pgs: 305 active+clean; 194 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 998 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Jan 31 09:08:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4115: 305 pgs: 305 active+clean; 199 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 09:08:48 compute-0 ceph-mon[74496]: pgmap v4115: 305 pgs: 305 active+clean; 199 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 09:08:48 compute-0 sudo[413060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:48 compute-0 sudo[413060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:48 compute-0 sudo[413060]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:48 compute-0 sudo[413085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:08:48 compute-0 sudo[413085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:08:48 compute-0 sudo[413085]: pam_unix(sudo:session): session closed for user root
Jan 31 09:08:48 compute-0 nova_compute[247704]: 2026-01-31 09:08:48.317 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:48 compute-0 nova_compute[247704]: 2026-01-31 09:08:48.320 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:48.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:48.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4116: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:08:49 compute-0 podman[413111]: 2026-01-31 09:08:49.916071245 +0000 UTC m=+0.079442483 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:08:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:08:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:50.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:50 compute-0 ceph-mon[74496]: pgmap v4116: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:08:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:50.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4117: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:08:51 compute-0 ceph-mon[74496]: pgmap v4117: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:08:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:52.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:08:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:52.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:08:53 compute-0 nova_compute[247704]: 2026-01-31 09:08:53.321 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:08:53 compute-0 nova_compute[247704]: 2026-01-31 09:08:53.322 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:53 compute-0 nova_compute[247704]: 2026-01-31 09:08:53.322 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:08:53 compute-0 nova_compute[247704]: 2026-01-31 09:08:53.322 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:53 compute-0 nova_compute[247704]: 2026-01-31 09:08:53.323 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:08:53 compute-0 nova_compute[247704]: 2026-01-31 09:08:53.324 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/616937897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:08:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/616937897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:08:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4118: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 352 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 09:08:54 compute-0 ceph-mon[74496]: pgmap v4118: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 352 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 09:08:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:08:54.655 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=108, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=107) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:08:54 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:08:54.655 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:08:54 compute-0 nova_compute[247704]: 2026-01-31 09:08:54.679 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:54.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4119: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 320 KiB/s rd, 1.4 MiB/s wr, 52 op/s
Jan 31 09:08:56 compute-0 ceph-mon[74496]: pgmap v4119: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 320 KiB/s rd, 1.4 MiB/s wr, 52 op/s
Jan 31 09:08:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:08:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:08:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:56.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:08:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:56.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4120: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 81 KiB/s wr, 13 op/s
Jan 31 09:08:58 compute-0 nova_compute[247704]: 2026-01-31 09:08:58.359 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:08:58 compute-0 ceph-mon[74496]: pgmap v4120: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 81 KiB/s wr, 13 op/s
Jan 31 09:08:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:58.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:08:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:08:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:58.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:08:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4121: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 62 KiB/s wr, 11 op/s
Jan 31 09:08:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4113501718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:00.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:00 compute-0 ceph-mon[74496]: pgmap v4121: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 62 KiB/s wr, 11 op/s
Jan 31 09:09:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:00.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4122: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 39 KiB/s wr, 18 op/s
Jan 31 09:09:01 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:09:01.658 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '108'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.810566) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541810628, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 597, "num_deletes": 251, "total_data_size": 717446, "memory_usage": 729752, "flush_reason": "Manual Compaction"}
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Jan 31 09:09:01 compute-0 ceph-mon[74496]: pgmap v4122: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 39 KiB/s wr, 18 op/s
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541822990, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 709428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89509, "largest_seqno": 90105, "table_properties": {"data_size": 706255, "index_size": 1080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7396, "raw_average_key_size": 19, "raw_value_size": 699981, "raw_average_value_size": 1808, "num_data_blocks": 48, "num_entries": 387, "num_filter_entries": 387, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850502, "oldest_key_time": 1769850502, "file_creation_time": 1769850541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 12518 microseconds, and 2486 cpu microseconds.
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.823055) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 709428 bytes OK
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.823109) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.831381) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.831413) EVENT_LOG_v1 {"time_micros": 1769850541831404, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.831438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 714240, prev total WAL file size 714240, number of live WAL files 2.
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.831891) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(692KB)], [206(13MB)]
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541831926, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 14840280, "oldest_snapshot_seqno": -1}
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 11396 keys, 12878592 bytes, temperature: kUnknown
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850541950505, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 12878592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12807979, "index_size": 40996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28549, "raw_key_size": 302976, "raw_average_key_size": 26, "raw_value_size": 12611699, "raw_average_value_size": 1106, "num_data_blocks": 1543, "num_entries": 11396, "num_filter_entries": 11396, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:09:01 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.950855) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12878592 bytes
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:02.022880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.0 rd, 108.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 13.5 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(39.1) write-amplify(18.2) OK, records in: 11907, records dropped: 511 output_compression: NoCompression
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:02.022933) EVENT_LOG_v1 {"time_micros": 1769850542022914, "job": 130, "event": "compaction_finished", "compaction_time_micros": 118720, "compaction_time_cpu_micros": 25329, "output_level": 6, "num_output_files": 1, "total_output_size": 12878592, "num_input_records": 11907, "num_output_records": 11396, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850542023431, "job": 130, "event": "table_file_deletion", "file_number": 208}
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850542024793, "job": 130, "event": "table_file_deletion", "file_number": 206}
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:01.831841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:02.024969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:02.024976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:02.024978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:02.024980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:09:02.024982) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:09:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:02.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:02.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2220800940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:09:03 compute-0 nova_compute[247704]: 2026-01-31 09:09:03.363 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:03 compute-0 nova_compute[247704]: 2026-01-31 09:09:03.368 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:03 compute-0 nova_compute[247704]: 2026-01-31 09:09:03.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5009 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:09:03 compute-0 nova_compute[247704]: 2026-01-31 09:09:03.369 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:09:03 compute-0 nova_compute[247704]: 2026-01-31 09:09:03.404 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:03 compute-0 nova_compute[247704]: 2026-01-31 09:09:03.405 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:09:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4123: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 39 KiB/s wr, 18 op/s
Jan 31 09:09:04 compute-0 ceph-mon[74496]: pgmap v4123: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 39 KiB/s wr, 18 op/s
Jan 31 09:09:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/524816306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:04.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:04.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4124: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 36 KiB/s wr, 17 op/s
Jan 31 09:09:05 compute-0 ceph-mon[74496]: pgmap v4124: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 36 KiB/s wr, 17 op/s
Jan 31 09:09:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:06.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:06 compute-0 podman[413146]: 2026-01-31 09:09:06.879083278 +0000 UTC m=+0.052466057 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 09:09:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:06.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4125: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 KiB/s rd, 596 B/s wr, 12 op/s
Jan 31 09:09:07 compute-0 ceph-mon[74496]: pgmap v4125: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.5 KiB/s rd, 596 B/s wr, 12 op/s
Jan 31 09:09:08 compute-0 sudo[413165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:08 compute-0 sudo[413165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:08 compute-0 sudo[413165]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:08 compute-0 sudo[413190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:08 compute-0 sudo[413190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:08 compute-0 sudo[413190]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:08 compute-0 nova_compute[247704]: 2026-01-31 09:09:08.406 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:08 compute-0 nova_compute[247704]: 2026-01-31 09:09:08.408 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:08.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:08.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4126: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 596 B/s wr, 12 op/s
Jan 31 09:09:10 compute-0 ceph-mon[74496]: pgmap v4126: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 596 B/s wr, 12 op/s
Jan 31 09:09:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:10.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:10.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:09:11.239 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:09:11.240 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:09:11.240 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4127: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s rd, 255 B/s wr, 9 op/s
Jan 31 09:09:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2197960158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:09:12 compute-0 ceph-mon[74496]: pgmap v4127: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s rd, 255 B/s wr, 9 op/s
Jan 31 09:09:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:12.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:12.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:13 compute-0 nova_compute[247704]: 2026-01-31 09:09:13.409 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:13 compute-0 nova_compute[247704]: 2026-01-31 09:09:13.410 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:13 compute-0 nova_compute[247704]: 2026-01-31 09:09:13.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:09:13 compute-0 nova_compute[247704]: 2026-01-31 09:09:13.411 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:09:13 compute-0 nova_compute[247704]: 2026-01-31 09:09:13.465 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:13 compute-0 nova_compute[247704]: 2026-01-31 09:09:13.466 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:09:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4128: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:09:13 compute-0 ceph-mon[74496]: pgmap v4128: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:09:14 compute-0 nova_compute[247704]: 2026-01-31 09:09:14.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:14 compute-0 nova_compute[247704]: 2026-01-31 09:09:14.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:09:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:14.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:15 compute-0 nova_compute[247704]: 2026-01-31 09:09:15.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4129: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 511 B/s wr, 37 op/s
Jan 31 09:09:15 compute-0 ceph-mon[74496]: pgmap v4129: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 511 B/s wr, 37 op/s
Jan 31 09:09:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:16.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:16.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:16 compute-0 sudo[413220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:16 compute-0 sudo[413220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:16 compute-0 sudo[413220]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:16 compute-0 sudo[413245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:09:16 compute-0 sudo[413245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:16 compute-0 sudo[413245]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:17 compute-0 sudo[413270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:17 compute-0 sudo[413270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:17 compute-0 sudo[413270]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:17 compute-0 sudo[413295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:09:17 compute-0 sudo[413295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:17 compute-0 sudo[413295]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:17 compute-0 nova_compute[247704]: 2026-01-31 09:09:17.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:17 compute-0 sudo[413351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:17 compute-0 sudo[413351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:17 compute-0 sudo[413351]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4130: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 50 op/s
Jan 31 09:09:17 compute-0 sudo[413376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:09:17 compute-0 sudo[413376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:17 compute-0 sudo[413376]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:17 compute-0 sudo[413401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:17 compute-0 sudo[413401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:17 compute-0 sudo[413401]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:17 compute-0 sudo[413426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- inventory --format=json-pretty --filter-for-batch
Jan 31 09:09:17 compute-0 sudo[413426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 09:09:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 09:09:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:18 compute-0 podman[413492]: 2026-01-31 09:09:17.974200887 +0000 UTC m=+0.016422491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:09:18 compute-0 podman[413492]: 2026-01-31 09:09:18.181357544 +0000 UTC m=+0.223579128 container create ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 09:09:18 compute-0 systemd[1]: Started libpod-conmon-ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8.scope.
Jan 31 09:09:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:09:18 compute-0 podman[413492]: 2026-01-31 09:09:18.274359125 +0000 UTC m=+0.316580729 container init ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 09:09:18 compute-0 podman[413492]: 2026-01-31 09:09:18.281280813 +0000 UTC m=+0.323502397 container start ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:18 compute-0 affectionate_lamarr[413508]: 167 167
Jan 31 09:09:18 compute-0 systemd[1]: libpod-ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8.scope: Deactivated successfully.
Jan 31 09:09:18 compute-0 nova_compute[247704]: 2026-01-31 09:09:18.466 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:18 compute-0 podman[413492]: 2026-01-31 09:09:18.484042154 +0000 UTC m=+0.526263758 container attach ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 09:09:18 compute-0 podman[413492]: 2026-01-31 09:09:18.484823433 +0000 UTC m=+0.527045017 container died ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:09:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 09:09:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:18.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:18.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 09:09:18 compute-0 ceph-mon[74496]: pgmap v4130: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 50 op/s
Jan 31 09:09:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:18 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9ceab3915197550a2ef72474932583a8d40d8b488134c800848ae030cd5a40b-merged.mount: Deactivated successfully.
Jan 31 09:09:18 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-0 podman[413492]: 2026-01-31 09:09:19.020993841 +0000 UTC m=+1.063215425 container remove ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 09:09:19 compute-0 systemd[1]: libpod-conmon-ac97ffd9987c3d219f1876763a993feab72ef524aaec1cdb729f715d1d4cf2c8.scope: Deactivated successfully.
Jan 31 09:09:19 compute-0 podman[413533]: 2026-01-31 09:09:19.135349711 +0000 UTC m=+0.036620881 container create 260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hamilton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:09:19 compute-0 systemd[1]: Started libpod-conmon-260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b.scope.
Jan 31 09:09:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4c8e914c1d0073bf05dde2b7afa899d0c9d3a84648c90de20c5317c6e0766/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4c8e914c1d0073bf05dde2b7afa899d0c9d3a84648c90de20c5317c6e0766/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4c8e914c1d0073bf05dde2b7afa899d0c9d3a84648c90de20c5317c6e0766/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f4c8e914c1d0073bf05dde2b7afa899d0c9d3a84648c90de20c5317c6e0766/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:19 compute-0 podman[413533]: 2026-01-31 09:09:19.1188161 +0000 UTC m=+0.020087290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:09:19 compute-0 podman[413533]: 2026-01-31 09:09:19.219797825 +0000 UTC m=+0.121068995 container init 260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hamilton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:19 compute-0 podman[413533]: 2026-01-31 09:09:19.228434985 +0000 UTC m=+0.129706155 container start 260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hamilton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:09:19 compute-0 podman[413533]: 2026-01-31 09:09:19.2327432 +0000 UTC m=+0.134014420 container attach 260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hamilton, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:09:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4131: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 09:09:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 09:09:19 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-0 ceph-mon[74496]: pgmap v4131: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:19 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]: [
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:     {
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         "available": false,
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         "ceph_device": false,
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         "lsm_data": {},
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         "lvs": [],
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         "path": "/dev/sr0",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         "rejected_reasons": [
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "Has a FileSystem",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "Insufficient space (<5GB)"
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         ],
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         "sys_api": {
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "actuators": null,
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "device_nodes": "sr0",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "devname": "sr0",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "human_readable_size": "482.00 KB",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "id_bus": "ata",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "model": "QEMU DVD-ROM",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "nr_requests": "2",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "parent": "/dev/sr0",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "partitions": {},
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "path": "/dev/sr0",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "removable": "1",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "rev": "2.5+",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "ro": "0",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "rotational": "1",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "sas_address": "",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "sas_device_handle": "",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "scheduler_mode": "mq-deadline",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "sectors": 0,
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "sectorsize": "2048",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "size": 493568.0,
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "support_discard": "2048",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "type": "disk",
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:             "vendor": "QEMU"
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:         }
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]:     }
Jan 31 09:09:20 compute-0 pensive_hamilton[413549]: ]
Jan 31 09:09:20 compute-0 systemd[1]: libpod-260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b.scope: Deactivated successfully.
Jan 31 09:09:20 compute-0 systemd[1]: libpod-260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b.scope: Consumed 1.030s CPU time.
Jan 31 09:09:20 compute-0 podman[413533]: 2026-01-31 09:09:20.289403714 +0000 UTC m=+1.190674894 container died 260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hamilton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 09:09:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0f4c8e914c1d0073bf05dde2b7afa899d0c9d3a84648c90de20c5317c6e0766-merged.mount: Deactivated successfully.
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:09:20
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data']
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:09:20 compute-0 podman[413533]: 2026-01-31 09:09:20.335501055 +0000 UTC m=+1.236772225 container remove 260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Jan 31 09:09:20 compute-0 systemd[1]: libpod-conmon-260f500dfbac678a24df2781f7386419c46d53c5f45e85a59815dc9ff85a216b.scope: Deactivated successfully.
Jan 31 09:09:20 compute-0 sudo[413426]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:09:20 compute-0 podman[414628]: 2026-01-31 09:09:20.398593159 +0000 UTC m=+0.084634770 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b3d11702-0b79-4c2d-9d04-4a7bb8ab8019 does not exist
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cdf4e8bd-9f48-4d43-a9e8-db6bef1a8ec6 does not exist
Jan 31 09:09:20 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5574e5d4-7d25-471c-a654-571bea1d1253 does not exist
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:09:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:09:20 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:09:20 compute-0 sudo[414668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:20 compute-0 sudo[414668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:20 compute-0 sudo[414668]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:20 compute-0 sudo[414693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:09:20 compute-0 sudo[414693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:20 compute-0 sudo[414693]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:20.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:20 compute-0 sudo[414718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:20 compute-0 sudo[414718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:20 compute-0 sudo[414718]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:20 compute-0 sudo[414743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:09:20 compute-0 sudo[414743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:20.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:09:21 compute-0 podman[414808]: 2026-01-31 09:09:21.086451755 +0000 UTC m=+0.040164408 container create 7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_benz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:21 compute-0 systemd[1]: Started libpod-conmon-7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec.scope.
Jan 31 09:09:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:09:21 compute-0 podman[414808]: 2026-01-31 09:09:21.06938512 +0000 UTC m=+0.023097803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:09:21 compute-0 podman[414808]: 2026-01-31 09:09:21.173611844 +0000 UTC m=+0.127324517 container init 7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 31 09:09:21 compute-0 podman[414808]: 2026-01-31 09:09:21.181526617 +0000 UTC m=+0.135239280 container start 7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:21 compute-0 dreamy_benz[414824]: 167 167
Jan 31 09:09:21 compute-0 podman[414808]: 2026-01-31 09:09:21.187292057 +0000 UTC m=+0.141004710 container attach 7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:21 compute-0 systemd[1]: libpod-7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec.scope: Deactivated successfully.
Jan 31 09:09:21 compute-0 podman[414808]: 2026-01-31 09:09:21.188913476 +0000 UTC m=+0.142626139 container died 7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_benz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-57015d9f52f7caa8c55086b141027aa7ccc366f21e68f98b7178757bb4611ff0-merged.mount: Deactivated successfully.
Jan 31 09:09:21 compute-0 podman[414808]: 2026-01-31 09:09:21.231306576 +0000 UTC m=+0.185019229 container remove 7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_benz, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:21 compute-0 systemd[1]: libpod-conmon-7fa5ae1e8d909f84ae9c72586906f86d341c831e5ff980272730858c8c0788ec.scope: Deactivated successfully.
Jan 31 09:09:21 compute-0 podman[414850]: 2026-01-31 09:09:21.367449368 +0000 UTC m=+0.038384235 container create 938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 09:09:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:21 compute-0 systemd[1]: Started libpod-conmon-938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99.scope.
Jan 31 09:09:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9adaab2a844dfa5afa5c4e8e2b2e38ee5d861736481c46bfbbb3f0c463c493/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9adaab2a844dfa5afa5c4e8e2b2e38ee5d861736481c46bfbbb3f0c463c493/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9adaab2a844dfa5afa5c4e8e2b2e38ee5d861736481c46bfbbb3f0c463c493/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9adaab2a844dfa5afa5c4e8e2b2e38ee5d861736481c46bfbbb3f0c463c493/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e9adaab2a844dfa5afa5c4e8e2b2e38ee5d861736481c46bfbbb3f0c463c493/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:21 compute-0 podman[414850]: 2026-01-31 09:09:21.435202935 +0000 UTC m=+0.106137832 container init 938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 09:09:21 compute-0 podman[414850]: 2026-01-31 09:09:21.440568295 +0000 UTC m=+0.111503162 container start 938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:21 compute-0 podman[414850]: 2026-01-31 09:09:21.34863592 +0000 UTC m=+0.019570807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:09:21 compute-0 podman[414850]: 2026-01-31 09:09:21.44442438 +0000 UTC m=+0.115359267 container attach 938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:09:21 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:09:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4132: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:22 compute-0 reverent_williams[414867]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:09:22 compute-0 reverent_williams[414867]: --> relative data size: 1.0
Jan 31 09:09:22 compute-0 reverent_williams[414867]: --> All data devices are unavailable
Jan 31 09:09:22 compute-0 systemd[1]: libpod-938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99.scope: Deactivated successfully.
Jan 31 09:09:22 compute-0 podman[414850]: 2026-01-31 09:09:22.18264688 +0000 UTC m=+0.853581767 container died 938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e9adaab2a844dfa5afa5c4e8e2b2e38ee5d861736481c46bfbbb3f0c463c493-merged.mount: Deactivated successfully.
Jan 31 09:09:22 compute-0 podman[414850]: 2026-01-31 09:09:22.239613254 +0000 UTC m=+0.910548121 container remove 938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_williams, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:09:22 compute-0 systemd[1]: libpod-conmon-938e1ad8107567f6da47462c70f0213a5c3db99da788a3a9d6d1352c8f6e7d99.scope: Deactivated successfully.
Jan 31 09:09:22 compute-0 sudo[414743]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:22 compute-0 sudo[414894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:22 compute-0 sudo[414894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:22 compute-0 sudo[414894]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:22 compute-0 sudo[414919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:09:22 compute-0 sudo[414919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:22 compute-0 sudo[414919]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:22 compute-0 sudo[414944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:22 compute-0 sudo[414944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:22 compute-0 sudo[414944]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:22 compute-0 sudo[414969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:09:22 compute-0 sudo[414969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:22 compute-0 ceph-mon[74496]: pgmap v4132: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:22 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4073190936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:22 compute-0 podman[415036]: 2026-01-31 09:09:22.705529734 +0000 UTC m=+0.033107766 container create 7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:09:22 compute-0 systemd[1]: Started libpod-conmon-7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28.scope.
Jan 31 09:09:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:22.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:09:22 compute-0 podman[415036]: 2026-01-31 09:09:22.691094863 +0000 UTC m=+0.018672925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:09:22 compute-0 podman[415036]: 2026-01-31 09:09:22.806858258 +0000 UTC m=+0.134436310 container init 7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:09:22 compute-0 podman[415036]: 2026-01-31 09:09:22.812041594 +0000 UTC m=+0.139619626 container start 7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:09:22 compute-0 dreamy_cohen[415053]: 167 167
Jan 31 09:09:22 compute-0 systemd[1]: libpod-7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28.scope: Deactivated successfully.
Jan 31 09:09:22 compute-0 conmon[415053]: conmon 7497acf4583d18ec68b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28.scope/container/memory.events
Jan 31 09:09:22 compute-0 podman[415036]: 2026-01-31 09:09:22.820452608 +0000 UTC m=+0.148030640 container attach 7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 09:09:22 compute-0 podman[415036]: 2026-01-31 09:09:22.820734546 +0000 UTC m=+0.148312578 container died 7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5e47e778ffa008d9c118cf13e0c9b2cf62a20ae8c76e4cc626156e7185a845f-merged.mount: Deactivated successfully.
Jan 31 09:09:22 compute-0 podman[415036]: 2026-01-31 09:09:22.864759876 +0000 UTC m=+0.192337908 container remove 7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:09:22 compute-0 systemd[1]: libpod-conmon-7497acf4583d18ec68b5b94b774eafea9541f01e1d650454c3e6a7cf3c25ee28.scope: Deactivated successfully.
Jan 31 09:09:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:22.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:22 compute-0 podman[415076]: 2026-01-31 09:09:22.991502857 +0000 UTC m=+0.042697249 container create 51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 09:09:23 compute-0 systemd[1]: Started libpod-conmon-51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0.scope.
Jan 31 09:09:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e834f300bc4b99215a907209ab6c2faffdc945d76558dc04d97cf08619ec7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e834f300bc4b99215a907209ab6c2faffdc945d76558dc04d97cf08619ec7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e834f300bc4b99215a907209ab6c2faffdc945d76558dc04d97cf08619ec7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75e834f300bc4b99215a907209ab6c2faffdc945d76558dc04d97cf08619ec7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:23 compute-0 podman[415076]: 2026-01-31 09:09:22.972294691 +0000 UTC m=+0.023489103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:09:23 compute-0 podman[415076]: 2026-01-31 09:09:23.085708859 +0000 UTC m=+0.136903271 container init 51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:09:23 compute-0 podman[415076]: 2026-01-31 09:09:23.091332535 +0000 UTC m=+0.142526927 container start 51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:09:23 compute-0 podman[415076]: 2026-01-31 09:09:23.10263906 +0000 UTC m=+0.153833452 container attach 51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 09:09:23 compute-0 nova_compute[247704]: 2026-01-31 09:09:23.467 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:23 compute-0 nova_compute[247704]: 2026-01-31 09:09:23.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2880264468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4133: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]: {
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:     "0": [
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:         {
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "devices": [
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "/dev/loop3"
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             ],
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "lv_name": "ceph_lv0",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "lv_size": "7511998464",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "name": "ceph_lv0",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "tags": {
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.cluster_name": "ceph",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.crush_device_class": "",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.encrypted": "0",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.osd_id": "0",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.type": "block",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:                 "ceph.vdo": "0"
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             },
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "type": "block",
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:             "vg_name": "ceph_vg0"
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:         }
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]:     ]
Jan 31 09:09:23 compute-0 wonderful_mccarthy[415092]: }
Jan 31 09:09:23 compute-0 systemd[1]: libpod-51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0.scope: Deactivated successfully.
Jan 31 09:09:23 compute-0 podman[415076]: 2026-01-31 09:09:23.832701042 +0000 UTC m=+0.883895434 container died 51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 09:09:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-75e834f300bc4b99215a907209ab6c2faffdc945d76558dc04d97cf08619ec7b-merged.mount: Deactivated successfully.
Jan 31 09:09:24 compute-0 nova_compute[247704]: 2026-01-31 09:09:24.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:24 compute-0 nova_compute[247704]: 2026-01-31 09:09:24.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:09:24 compute-0 nova_compute[247704]: 2026-01-31 09:09:24.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:09:24 compute-0 nova_compute[247704]: 2026-01-31 09:09:24.590 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:09:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:24 compute-0 ceph-mon[74496]: pgmap v4133: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:09:24 compute-0 podman[415076]: 2026-01-31 09:09:24.90185779 +0000 UTC m=+1.953052182 container remove 51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:09:24 compute-0 sudo[414969]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:24.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:24 compute-0 sudo[415115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:24 compute-0 sudo[415115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:24 compute-0 sudo[415115]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:24 compute-0 systemd[1]: libpod-conmon-51c4d614661086e53759bfc55604a99c8749a8cbb157cbf22f8c43fb2a022bc0.scope: Deactivated successfully.
Jan 31 09:09:25 compute-0 sudo[415140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:09:25 compute-0 sudo[415140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:25 compute-0 sudo[415140]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:25 compute-0 sudo[415165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:25 compute-0 sudo[415165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:25 compute-0 sudo[415165]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:25 compute-0 sudo[415190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:09:25 compute-0 sudo[415190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:25 compute-0 podman[415256]: 2026-01-31 09:09:25.575280885 +0000 UTC m=+0.118123544 container create 794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 09:09:25 compute-0 podman[415256]: 2026-01-31 09:09:25.480356147 +0000 UTC m=+0.023198786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:09:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4134: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 31 09:09:25 compute-0 systemd[1]: Started libpod-conmon-794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353.scope.
Jan 31 09:09:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:09:26 compute-0 podman[415256]: 2026-01-31 09:09:26.021555537 +0000 UTC m=+0.564398206 container init 794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:09:26 compute-0 podman[415256]: 2026-01-31 09:09:26.030272419 +0000 UTC m=+0.573115058 container start 794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:09:26 compute-0 dazzling_williamson[415272]: 167 167
Jan 31 09:09:26 compute-0 systemd[1]: libpod-794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353.scope: Deactivated successfully.
Jan 31 09:09:26 compute-0 podman[415256]: 2026-01-31 09:09:26.157855381 +0000 UTC m=+0.700698040 container attach 794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:09:26 compute-0 podman[415256]: 2026-01-31 09:09:26.158770633 +0000 UTC m=+0.701613272 container died 794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 09:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-67c04f02f801fc22bc93cdb501ba2bc8e59c4401f65372435dfbf527c083f1ee-merged.mount: Deactivated successfully.
Jan 31 09:09:26 compute-0 podman[415256]: 2026-01-31 09:09:26.27545432 +0000 UTC m=+0.818296959 container remove 794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:09:26 compute-0 systemd[1]: libpod-conmon-794bacde42150c9956fc33c507f93e038e2ff3b127510a42789b449570180353.scope: Deactivated successfully.
Jan 31 09:09:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:26 compute-0 ceph-mon[74496]: pgmap v4134: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 31 09:09:26 compute-0 podman[415297]: 2026-01-31 09:09:26.405052921 +0000 UTC m=+0.044478292 container create a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:09:26 compute-0 systemd[1]: Started libpod-conmon-a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20.scope.
Jan 31 09:09:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd28343d06698a7214c3130b098a1e152835dc802ed7b5d08dde7691b48b2f81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd28343d06698a7214c3130b098a1e152835dc802ed7b5d08dde7691b48b2f81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd28343d06698a7214c3130b098a1e152835dc802ed7b5d08dde7691b48b2f81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd28343d06698a7214c3130b098a1e152835dc802ed7b5d08dde7691b48b2f81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:09:26 compute-0 podman[415297]: 2026-01-31 09:09:26.379826928 +0000 UTC m=+0.019252319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:09:26 compute-0 podman[415297]: 2026-01-31 09:09:26.482805272 +0000 UTC m=+0.122230693 container init a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brattain, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 31 09:09:26 compute-0 podman[415297]: 2026-01-31 09:09:26.489378112 +0000 UTC m=+0.128803493 container start a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 09:09:26 compute-0 podman[415297]: 2026-01-31 09:09:26.501710502 +0000 UTC m=+0.141135873 container attach a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 09:09:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:26.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:26.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]: {
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]:         "osd_id": 0,
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]:         "type": "bluestore"
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]:     }
Jan 31 09:09:27 compute-0 wizardly_brattain[415315]: }
Jan 31 09:09:27 compute-0 systemd[1]: libpod-a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20.scope: Deactivated successfully.
Jan 31 09:09:27 compute-0 podman[415297]: 2026-01-31 09:09:27.315656024 +0000 UTC m=+0.955081395 container died a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brattain, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd28343d06698a7214c3130b098a1e152835dc802ed7b5d08dde7691b48b2f81-merged.mount: Deactivated successfully.
Jan 31 09:09:27 compute-0 podman[415297]: 2026-01-31 09:09:27.542401688 +0000 UTC m=+1.181827069 container remove a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_brattain, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 09:09:27 compute-0 systemd[1]: libpod-conmon-a17e760d9e6fbc8337e59905760e13a0ef02f13382c9ca7138d03c2a30e1be20.scope: Deactivated successfully.
Jan 31 09:09:27 compute-0 sudo[415190]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:09:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:09:27 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0c4f63fe-677a-40fc-9e92-c8414e3eb012 does not exist
Jan 31 09:09:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 408f8dd1-d8db-4901-9bf7-91cdffd972cb does not exist
Jan 31 09:09:27 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5f6ac7f2-f0eb-4f86-823d-13affe377cbe does not exist
Jan 31 09:09:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4135: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 53 op/s
Jan 31 09:09:27 compute-0 sudo[415349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:27 compute-0 sudo[415349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:27 compute-0 sudo[415349]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:27 compute-0 sudo[415374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:09:27 compute-0 sudo[415374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:27 compute-0 sudo[415374]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:28 compute-0 sudo[415399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:28 compute-0 sudo[415399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:28 compute-0 sudo[415399]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:28 compute-0 nova_compute[247704]: 2026-01-31 09:09:28.469 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:28 compute-0 sudo[415425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:28 compute-0 sudo[415425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:28 compute-0 sudo[415425]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:28 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:09:28 compute-0 ceph-mon[74496]: pgmap v4135: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 53 op/s
Jan 31 09:09:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:28.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:28.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:29 compute-0 nova_compute[247704]: 2026-01-31 09:09:29.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:29 compute-0 nova_compute[247704]: 2026-01-31 09:09:29.607 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:29 compute-0 nova_compute[247704]: 2026-01-31 09:09:29.608 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:29 compute-0 nova_compute[247704]: 2026-01-31 09:09:29.609 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:29 compute-0 nova_compute[247704]: 2026-01-31 09:09:29.609 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:09:29 compute-0 nova_compute[247704]: 2026-01-31 09:09:29.610 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4136: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.8 KiB/s wr, 55 op/s
Jan 31 09:09:29 compute-0 ceph-mon[74496]: pgmap v4136: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.8 KiB/s wr, 55 op/s
Jan 31 09:09:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2483571695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:09:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349455638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.084 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.215 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.216 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4138MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.217 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.217 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.311 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.312 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.332 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:09:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:09:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3723041403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.769 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:09:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:30.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.774 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.800 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.801 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:09:30 compute-0 nova_compute[247704]: 2026-01-31 09:09:30.802 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:09:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/349455638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/164479365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3723041403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:30.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4137: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 528 KiB/s rd, 13 KiB/s wr, 44 op/s
Jan 31 09:09:31 compute-0 ceph-mon[74496]: pgmap v4137: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 528 KiB/s rd, 13 KiB/s wr, 44 op/s
Jan 31 09:09:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:32.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:32.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:33 compute-0 nova_compute[247704]: 2026-01-31 09:09:33.505 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:33 compute-0 nova_compute[247704]: 2026-01-31 09:09:33.507 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:33 compute-0 nova_compute[247704]: 2026-01-31 09:09:33.508 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5037 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:09:33 compute-0 nova_compute[247704]: 2026-01-31 09:09:33.508 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:09:33 compute-0 nova_compute[247704]: 2026-01-31 09:09:33.508 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:33 compute-0 nova_compute[247704]: 2026-01-31 09:09:33.509 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:09:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4138: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 31 09:09:33 compute-0 nova_compute[247704]: 2026-01-31 09:09:33.801 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:34.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:34 compute-0 ceph-mon[74496]: pgmap v4138: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 31 09:09:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:35 compute-0 nova_compute[247704]: 2026-01-31 09:09:35.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4139: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 23 KiB/s wr, 44 op/s
Jan 31 09:09:35 compute-0 ceph-mon[74496]: pgmap v4139: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 23 KiB/s wr, 44 op/s
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.088393355667225e-06 of space, bias 1.0, pg target 0.0021265180067001677 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004331008340312675 of space, bias 1.0, pg target 1.2993025020938025 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:09:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:09:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:09:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:09:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:36.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:37 compute-0 nova_compute[247704]: 2026-01-31 09:09:37.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4140: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 527 KiB/s rd, 23 KiB/s wr, 44 op/s
Jan 31 09:09:37 compute-0 podman[415498]: 2026-01-31 09:09:37.905015815 +0000 UTC m=+0.074375059 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 09:09:38 compute-0 ceph-mon[74496]: pgmap v4140: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 527 KiB/s rd, 23 KiB/s wr, 44 op/s
Jan 31 09:09:38 compute-0 nova_compute[247704]: 2026-01-31 09:09:38.508 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:38 compute-0 nova_compute[247704]: 2026-01-31 09:09:38.509 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:38.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:38.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4141: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 22 KiB/s wr, 27 op/s
Jan 31 09:09:39 compute-0 ceph-mon[74496]: pgmap v4141: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 22 KiB/s wr, 27 op/s
Jan 31 09:09:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:40.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4142: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 21 KiB/s wr, 11 op/s
Jan 31 09:09:41 compute-0 ceph-mon[74496]: pgmap v4142: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 21 KiB/s wr, 11 op/s
Jan 31 09:09:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:42.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:43 compute-0 nova_compute[247704]: 2026-01-31 09:09:43.511 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:09:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4143: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 341 B/s rd, 13 KiB/s wr, 1 op/s
Jan 31 09:09:44 compute-0 nova_compute[247704]: 2026-01-31 09:09:44.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:09:44 compute-0 ceph-mon[74496]: pgmap v4143: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 341 B/s rd, 13 KiB/s wr, 1 op/s
Jan 31 09:09:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:44.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:44.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4144: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s wr, 0 op/s
Jan 31 09:09:45 compute-0 ceph-mon[74496]: pgmap v4144: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s wr, 0 op/s
Jan 31 09:09:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:46.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:46.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4145: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s wr, 0 op/s
Jan 31 09:09:48 compute-0 nova_compute[247704]: 2026-01-31 09:09:48.512 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:48 compute-0 sudo[415525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:48 compute-0 sudo[415525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:48 compute-0 sudo[415525]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:48 compute-0 sudo[415550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:09:48 compute-0 sudo[415550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:09:48 compute-0 sudo[415550]: pam_unix(sudo:session): session closed for user root
Jan 31 09:09:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:48.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:48 compute-0 ceph-mon[74496]: pgmap v4145: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s wr, 0 op/s
Jan 31 09:09:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:48.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4146: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 7.7 KiB/s wr, 3 op/s
Jan 31 09:09:49 compute-0 ceph-mon[74496]: pgmap v4146: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 7.7 KiB/s wr, 3 op/s
Jan 31 09:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:09:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:09:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:50.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:50 compute-0 podman[415576]: 2026-01-31 09:09:50.893691858 +0000 UTC m=+0.069883770 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 09:09:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:50.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4147: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 213 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 31 09:09:52 compute-0 ceph-mon[74496]: pgmap v4147: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 213 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 31 09:09:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:09:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:52.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:09:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:52.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:53 compute-0 nova_compute[247704]: 2026-01-31 09:09:53.514 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4148: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 213 KiB/s rd, 11 KiB/s wr, 8 op/s
Jan 31 09:09:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/669566419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:09:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/669566419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:09:53 compute-0 ceph-mon[74496]: pgmap v4148: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 213 KiB/s rd, 11 KiB/s wr, 8 op/s
Jan 31 09:09:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:09:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1728827525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:09:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2053799275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:09:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:09:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2053799275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:09:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:54.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:09:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:09:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1728827525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:09:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2053799275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:09:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2053799275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:09:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4149: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 10 KiB/s wr, 25 op/s
Jan 31 09:09:56 compute-0 ceph-mon[74496]: pgmap v4149: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 10 KiB/s wr, 25 op/s
Jan 31 09:09:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:09:56 compute-0 nova_compute[247704]: 2026-01-31 09:09:56.428 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:09:56.429 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=109, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=108) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:09:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:09:56.430 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:09:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:09:56.431 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '109'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:09:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:56.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:56.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4150: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 10 KiB/s wr, 25 op/s
Jan 31 09:09:58 compute-0 nova_compute[247704]: 2026-01-31 09:09:58.516 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:09:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:58.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:58 compute-0 ceph-mon[74496]: pgmap v4150: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 10 KiB/s wr, 25 op/s
Jan 31 09:09:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:09:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:09:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:58.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:09:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 233 KiB/s rd, 10 KiB/s wr, 37 op/s
Jan 31 09:09:59 compute-0 ceph-mon[74496]: pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 233 KiB/s rd, 10 KiB/s wr, 37 op/s
Jan 31 09:10:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 09:10:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:00.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:01 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 09:10:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 176 KiB/s rd, 5.2 KiB/s wr, 33 op/s
Jan 31 09:10:02 compute-0 ceph-mon[74496]: pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 176 KiB/s rd, 5.2 KiB/s wr, 33 op/s
Jan 31 09:10:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:03 compute-0 nova_compute[247704]: 2026-01-31 09:10:03.519 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 31 09:10:03 compute-0 ceph-mon[74496]: pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 31 09:10:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:04.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:04.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 31 09:10:05 compute-0 ceph-mon[74496]: pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 31 09:10:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:06.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:06.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 09:10:08 compute-0 nova_compute[247704]: 2026-01-31 09:10:08.520 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:08 compute-0 sudo[415612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:08 compute-0 sudo[415612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:08 compute-0 sudo[415612]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:08 compute-0 ceph-mon[74496]: pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 09:10:08 compute-0 podman[415636]: 2026-01-31 09:10:08.745108645 +0000 UTC m=+0.044352459 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 09:10:08 compute-0 sudo[415643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:08 compute-0 sudo[415643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:08 compute-0 sudo[415643]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:08.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:08.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 09:10:09 compute-0 ceph-mon[74496]: pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 KiB/s rd, 0 B/s wr, 11 op/s
Jan 31 09:10:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:10.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:11.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:10:11.241 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:10:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:10:11.242 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:10:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:10:11.242 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:10:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4157: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:11 compute-0 ceph-mon[74496]: pgmap v4157: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:12.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:13.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:13 compute-0 nova_compute[247704]: 2026-01-31 09:10:13.523 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:10:13 compute-0 nova_compute[247704]: 2026-01-31 09:10:13.525 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:10:13 compute-0 nova_compute[247704]: 2026-01-31 09:10:13.525 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:10:13 compute-0 nova_compute[247704]: 2026-01-31 09:10:13.525 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:10:13 compute-0 nova_compute[247704]: 2026-01-31 09:10:13.573 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:13 compute-0 nova_compute[247704]: 2026-01-31 09:10:13.574 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:10:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4158: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:14 compute-0 ceph-mon[74496]: pgmap v4158: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:14.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:15.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:15 compute-0 nova_compute[247704]: 2026-01-31 09:10:15.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:15 compute-0 nova_compute[247704]: 2026-01-31 09:10:15.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:10:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4159: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:16 compute-0 ceph-mon[74496]: pgmap v4159: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:16.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:17.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:17 compute-0 nova_compute[247704]: 2026-01-31 09:10:17.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:17 compute-0 nova_compute[247704]: 2026-01-31 09:10:17.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4160: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:18 compute-0 ceph-mon[74496]: pgmap v4160: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:18 compute-0 nova_compute[247704]: 2026-01-31 09:10:18.575 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:10:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:18.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:19.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4161: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:10:20
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'vms', 'backups', 'default.rgw.meta']
Jan 31 09:10:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:10:20 compute-0 ceph-mon[74496]: pgmap v4161: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:20.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:10:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4162: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:21 compute-0 podman[415685]: 2026-01-31 09:10:21.88973675 +0000 UTC m=+0.064997522 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 09:10:22 compute-0 ceph-mon[74496]: pgmap v4162: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:22.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:23.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:23 compute-0 nova_compute[247704]: 2026-01-31 09:10:23.578 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:10:23 compute-0 nova_compute[247704]: 2026-01-31 09:10:23.580 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:10:23 compute-0 nova_compute[247704]: 2026-01-31 09:10:23.580 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:10:23 compute-0 nova_compute[247704]: 2026-01-31 09:10:23.581 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:10:23 compute-0 nova_compute[247704]: 2026-01-31 09:10:23.610 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:23 compute-0 nova_compute[247704]: 2026-01-31 09:10:23.611 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:10:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4163: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:23 compute-0 ceph-mon[74496]: pgmap v4163: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:24.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:25.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/722512109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:25 compute-0 nova_compute[247704]: 2026-01-31 09:10:25.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4164: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/644742043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:26 compute-0 ceph-mon[74496]: pgmap v4164: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:26 compute-0 nova_compute[247704]: 2026-01-31 09:10:26.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:26 compute-0 nova_compute[247704]: 2026-01-31 09:10:26.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:10:26 compute-0 nova_compute[247704]: 2026-01-31 09:10:26.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:10:26 compute-0 nova_compute[247704]: 2026-01-31 09:10:26.665 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:10:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:26.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:10:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:27.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:10:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4165: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:27 compute-0 ceph-mon[74496]: pgmap v4165: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:27 compute-0 sudo[415714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:27 compute-0 sudo[415714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:27 compute-0 sudo[415714]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-0 sudo[415739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:10:28 compute-0 sudo[415739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-0 sudo[415739]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-0 sudo[415764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-0 sudo[415764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-0 sudo[415764]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-0 sudo[415789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:10:28 compute-0 sudo[415789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-0 sudo[415789]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-0 nova_compute[247704]: 2026-01-31 09:10:28.612 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:28 compute-0 sudo[415846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-0 sudo[415846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-0 sudo[415846]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:28 compute-0 sudo[415871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:28 compute-0 sudo[415871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:28 compute-0 sudo[415871]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:10:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:29.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:10:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3030456916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4166: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/398262343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:30 compute-0 ceph-mon[74496]: pgmap v4166: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:31.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:31 compute-0 nova_compute[247704]: 2026-01-31 09:10:31.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4167: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:31 compute-0 nova_compute[247704]: 2026-01-31 09:10:31.674 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:10:31 compute-0 nova_compute[247704]: 2026-01-31 09:10:31.675 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:10:31 compute-0 nova_compute[247704]: 2026-01-31 09:10:31.675 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:10:31 compute-0 nova_compute[247704]: 2026-01-31 09:10:31.675 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:10:31 compute-0 nova_compute[247704]: 2026-01-31 09:10:31.675 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 09:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 09:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d2281a1d-c8c7-449c-8c34-0b06ae664f58 does not exist
Jan 31 09:10:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0428179d-1883-4b7c-b9cd-4a6831f53623 does not exist
Jan 31 09:10:31 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6a9a4251-27ca-46ea-ae30-85552eebb833 does not exist
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:10:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:10:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:10:31 compute-0 sudo[415917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:31 compute-0 sudo[415917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:31 compute-0 sudo[415917]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:32 compute-0 sudo[415942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:10:32 compute-0 sudo[415942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:32 compute-0 sudo[415942]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:32 compute-0 sudo[415967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:32 compute-0 sudo[415967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:32 compute-0 sudo[415967]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:32 compute-0 sudo[415992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:10:32 compute-0 sudo[415992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:10:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149389464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:32 compute-0 nova_compute[247704]: 2026-01-31 09:10:32.145 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:10:32 compute-0 nova_compute[247704]: 2026-01-31 09:10:32.291 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:10:32 compute-0 nova_compute[247704]: 2026-01-31 09:10:32.294 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4201MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:10:32 compute-0 nova_compute[247704]: 2026-01-31 09:10:32.294 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:10:32 compute-0 nova_compute[247704]: 2026-01-31 09:10:32.294 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:10:32 compute-0 podman[416058]: 2026-01-31 09:10:32.38979603 +0000 UTC m=+0.030179655 container create dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 09:10:32 compute-0 systemd[1]: Started libpod-conmon-dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3.scope.
Jan 31 09:10:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:10:32 compute-0 podman[416058]: 2026-01-31 09:10:32.456643555 +0000 UTC m=+0.097027180 container init dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 31 09:10:32 compute-0 podman[416058]: 2026-01-31 09:10:32.461202566 +0000 UTC m=+0.101586191 container start dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 09:10:32 compute-0 podman[416058]: 2026-01-31 09:10:32.464164618 +0000 UTC m=+0.104548273 container attach dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:10:32 compute-0 frosty_yonath[416076]: 167 167
Jan 31 09:10:32 compute-0 systemd[1]: libpod-dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3.scope: Deactivated successfully.
Jan 31 09:10:32 compute-0 podman[416058]: 2026-01-31 09:10:32.465796418 +0000 UTC m=+0.106180043 container died dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 09:10:32 compute-0 podman[416058]: 2026-01-31 09:10:32.376613649 +0000 UTC m=+0.016997294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-20d4320f82f934c266ebbd410d13f111bf4d20191edc7257fc2ef7ce7c8c1ff6-merged.mount: Deactivated successfully.
Jan 31 09:10:32 compute-0 podman[416058]: 2026-01-31 09:10:32.497660963 +0000 UTC m=+0.138044588 container remove dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 09:10:32 compute-0 systemd[1]: libpod-conmon-dbe4b9f3f41bf031204472008835c8960c1e21e5c77e074d59f1a8f39893a6f3.scope: Deactivated successfully.
Jan 31 09:10:32 compute-0 podman[416099]: 2026-01-31 09:10:32.617477166 +0000 UTC m=+0.038937547 container create 164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:10:32 compute-0 systemd[1]: Started libpod-conmon-164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655.scope.
Jan 31 09:10:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c65ff3986813ccfda6bb5f58e5703b97afe3d8b3cff48d5c3259a9db0d544817/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c65ff3986813ccfda6bb5f58e5703b97afe3d8b3cff48d5c3259a9db0d544817/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c65ff3986813ccfda6bb5f58e5703b97afe3d8b3cff48d5c3259a9db0d544817/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c65ff3986813ccfda6bb5f58e5703b97afe3d8b3cff48d5c3259a9db0d544817/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c65ff3986813ccfda6bb5f58e5703b97afe3d8b3cff48d5c3259a9db0d544817/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:32 compute-0 podman[416099]: 2026-01-31 09:10:32.689445836 +0000 UTC m=+0.110906237 container init 164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 09:10:32 compute-0 podman[416099]: 2026-01-31 09:10:32.69457867 +0000 UTC m=+0.116039051 container start 164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 09:10:32 compute-0 podman[416099]: 2026-01-31 09:10:32.6011913 +0000 UTC m=+0.022651711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:10:32 compute-0 podman[416099]: 2026-01-31 09:10:32.697780509 +0000 UTC m=+0.119240910 container attach 164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:10:32 compute-0 ceph-mon[74496]: pgmap v4167: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:10:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1149389464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:32.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:32 compute-0 nova_compute[247704]: 2026-01-31 09:10:32.975 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:10:32 compute-0 nova_compute[247704]: 2026-01-31 09:10:32.977 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:10:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:33.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.053 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.145 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.146 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.169 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.202 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.268 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:10:33 compute-0 mystifying_meitner[416115]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:10:33 compute-0 mystifying_meitner[416115]: --> relative data size: 1.0
Jan 31 09:10:33 compute-0 mystifying_meitner[416115]: --> All data devices are unavailable
Jan 31 09:10:33 compute-0 systemd[1]: libpod-164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655.scope: Deactivated successfully.
Jan 31 09:10:33 compute-0 podman[416099]: 2026-01-31 09:10:33.416031014 +0000 UTC m=+0.837491405 container died 164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 09:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c65ff3986813ccfda6bb5f58e5703b97afe3d8b3cff48d5c3259a9db0d544817-merged.mount: Deactivated successfully.
Jan 31 09:10:33 compute-0 podman[416099]: 2026-01-31 09:10:33.477751505 +0000 UTC m=+0.899211886 container remove 164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_meitner, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:10:33 compute-0 systemd[1]: libpod-conmon-164cc96bdec51fb74c0a215f8fa2a6f6e4f5b132ea7f1d88a428f0e9fcbd9655.scope: Deactivated successfully.
Jan 31 09:10:33 compute-0 sudo[415992]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:33 compute-0 sudo[416163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:33 compute-0 sudo[416163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:33 compute-0 sudo[416163]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:33 compute-0 sudo[416188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:10:33 compute-0 sudo[416188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:33 compute-0 sudo[416188]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.613 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:33 compute-0 sudo[416213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:33 compute-0 sudo[416213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:33 compute-0 sudo[416213]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:10:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2222062800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4168: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.711 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.717 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:10:33 compute-0 sudo[416240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:10:33 compute-0 sudo[416240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.824 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:10:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2222062800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.827 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:10:33 compute-0 nova_compute[247704]: 2026-01-31 09:10:33.827 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:10:34 compute-0 podman[416306]: 2026-01-31 09:10:34.016349141 +0000 UTC m=+0.038557298 container create 6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 09:10:34 compute-0 systemd[1]: Started libpod-conmon-6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a.scope.
Jan 31 09:10:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:10:34 compute-0 podman[416306]: 2026-01-31 09:10:34.090115085 +0000 UTC m=+0.112323282 container init 6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:10:34 compute-0 podman[416306]: 2026-01-31 09:10:34.094699416 +0000 UTC m=+0.116907573 container start 6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 09:10:34 compute-0 musing_dijkstra[416323]: 167 167
Jan 31 09:10:34 compute-0 systemd[1]: libpod-6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a.scope: Deactivated successfully.
Jan 31 09:10:34 compute-0 podman[416306]: 2026-01-31 09:10:34.097957756 +0000 UTC m=+0.120165933 container attach 6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:10:34 compute-0 podman[416306]: 2026-01-31 09:10:34.002683669 +0000 UTC m=+0.024891846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:10:34 compute-0 podman[416328]: 2026-01-31 09:10:34.128933859 +0000 UTC m=+0.019043475 container died 6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c3dc892f81aabd40e8ef5ae867789e85581a325d82479a0fd98a89c3aa23957-merged.mount: Deactivated successfully.
Jan 31 09:10:34 compute-0 podman[416328]: 2026-01-31 09:10:34.162770061 +0000 UTC m=+0.052879677 container remove 6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:10:34 compute-0 systemd[1]: libpod-conmon-6e949c1b0e0a952eaa4a70fae4f35555943957fdcbddb9a179185730aec51c6a.scope: Deactivated successfully.
Jan 31 09:10:34 compute-0 podman[416350]: 2026-01-31 09:10:34.281317774 +0000 UTC m=+0.040295311 container create 0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 09:10:34 compute-0 systemd[1]: Started libpod-conmon-0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b.scope.
Jan 31 09:10:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88636333bbea0480a30283a5872a87e8f4e868ec477840b2bd3da944fed916d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88636333bbea0480a30283a5872a87e8f4e868ec477840b2bd3da944fed916d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88636333bbea0480a30283a5872a87e8f4e868ec477840b2bd3da944fed916d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88636333bbea0480a30283a5872a87e8f4e868ec477840b2bd3da944fed916d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:34 compute-0 podman[416350]: 2026-01-31 09:10:34.264152776 +0000 UTC m=+0.023130333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:10:34 compute-0 podman[416350]: 2026-01-31 09:10:34.364856245 +0000 UTC m=+0.123833802 container init 0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_heyrovsky, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:10:34 compute-0 podman[416350]: 2026-01-31 09:10:34.37079813 +0000 UTC m=+0.129775667 container start 0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:10:34 compute-0 podman[416350]: 2026-01-31 09:10:34.374574661 +0000 UTC m=+0.133552228 container attach 0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 09:10:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:34.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:34 compute-0 ceph-mon[74496]: pgmap v4168: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:35.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]: {
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:     "0": [
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:         {
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "devices": [
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "/dev/loop3"
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             ],
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "lv_name": "ceph_lv0",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "lv_size": "7511998464",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "name": "ceph_lv0",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "tags": {
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.cluster_name": "ceph",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.crush_device_class": "",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.encrypted": "0",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.osd_id": "0",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.type": "block",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:                 "ceph.vdo": "0"
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             },
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "type": "block",
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:             "vg_name": "ceph_vg0"
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:         }
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]:     ]
Jan 31 09:10:35 compute-0 pedantic_heyrovsky[416365]: }
Jan 31 09:10:35 compute-0 systemd[1]: libpod-0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b.scope: Deactivated successfully.
Jan 31 09:10:35 compute-0 podman[416350]: 2026-01-31 09:10:35.11393697 +0000 UTC m=+0.872914517 container died 0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 09:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-88636333bbea0480a30283a5872a87e8f4e868ec477840b2bd3da944fed916d3-merged.mount: Deactivated successfully.
Jan 31 09:10:35 compute-0 podman[416350]: 2026-01-31 09:10:35.160841561 +0000 UTC m=+0.919819088 container remove 0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_heyrovsky, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:10:35 compute-0 systemd[1]: libpod-conmon-0f19d3ec71933cf78d9fbbeeaf4959aca9993d0c89ba06520fd3e9fffec7913b.scope: Deactivated successfully.
Jan 31 09:10:35 compute-0 sudo[416240]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:35 compute-0 sudo[416391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:35 compute-0 sudo[416391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:35 compute-0 sudo[416391]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:35 compute-0 sudo[416416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:10:35 compute-0 sudo[416416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:35 compute-0 sudo[416416]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:35 compute-0 sudo[416441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:35 compute-0 sudo[416441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:35 compute-0 sudo[416441]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:35 compute-0 sudo[416466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:10:35 compute-0 sudo[416466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4169: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:35 compute-0 podman[416531]: 2026-01-31 09:10:35.736868827 +0000 UTC m=+0.050588711 container create 2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:10:35 compute-0 systemd[1]: Started libpod-conmon-2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd.scope.
Jan 31 09:10:35 compute-0 podman[416531]: 2026-01-31 09:10:35.706452448 +0000 UTC m=+0.020172342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:10:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:10:35 compute-0 podman[416531]: 2026-01-31 09:10:35.818988894 +0000 UTC m=+0.132708808 container init 2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 09:10:35 compute-0 podman[416531]: 2026-01-31 09:10:35.824409266 +0000 UTC m=+0.138129160 container start 2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 09:10:35 compute-0 podman[416531]: 2026-01-31 09:10:35.827617774 +0000 UTC m=+0.141337688 container attach 2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:10:35 compute-0 gracious_austin[416547]: 167 167
Jan 31 09:10:35 compute-0 systemd[1]: libpod-2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd.scope: Deactivated successfully.
Jan 31 09:10:35 compute-0 podman[416531]: 2026-01-31 09:10:35.830102094 +0000 UTC m=+0.143821988 container died 2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 09:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-28eef96f8d5e780b013c2ae833bbb71f8ddcfd75282918ffc3effd39bfc7051a-merged.mount: Deactivated successfully.
Jan 31 09:10:35 compute-0 podman[416531]: 2026-01-31 09:10:35.871291026 +0000 UTC m=+0.185010910 container remove 2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:10:35 compute-0 systemd[1]: libpod-conmon-2e9fd05f31237fd0e9c7a59e49c674bc42b10289a608e81f8446c4e193091dfd.scope: Deactivated successfully.
Jan 31 09:10:35 compute-0 ceph-mon[74496]: pgmap v4169: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:36 compute-0 podman[416570]: 2026-01-31 09:10:36.017999023 +0000 UTC m=+0.047618849 container create e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:10:36 compute-0 systemd[1]: Started libpod-conmon-e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97.scope.
Jan 31 09:10:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4c9eaf96a184392d0a439cfc03fd08bf5775badf4bf85ac04c94213454750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:36 compute-0 podman[416570]: 2026-01-31 09:10:35.998496739 +0000 UTC m=+0.028116595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4c9eaf96a184392d0a439cfc03fd08bf5775badf4bf85ac04c94213454750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4c9eaf96a184392d0a439cfc03fd08bf5775badf4bf85ac04c94213454750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a4c9eaf96a184392d0a439cfc03fd08bf5775badf4bf85ac04c94213454750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:10:36 compute-0 podman[416570]: 2026-01-31 09:10:36.12025939 +0000 UTC m=+0.149879236 container init e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:10:36 compute-0 podman[416570]: 2026-01-31 09:10:36.126773718 +0000 UTC m=+0.156393554 container start e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:10:36 compute-0 podman[416570]: 2026-01-31 09:10:36.130658462 +0000 UTC m=+0.160278298 container attach e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:10:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:10:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:36 compute-0 nova_compute[247704]: 2026-01-31 09:10:36.827 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:36.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]: {
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]:         "osd_id": 0,
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]:         "type": "bluestore"
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]:     }
Jan 31 09:10:36 compute-0 relaxed_hoover[416586]: }
Jan 31 09:10:36 compute-0 systemd[1]: libpod-e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97.scope: Deactivated successfully.
Jan 31 09:10:36 compute-0 podman[416570]: 2026-01-31 09:10:36.879219114 +0000 UTC m=+0.908838960 container died e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 09:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-17a4c9eaf96a184392d0a439cfc03fd08bf5775badf4bf85ac04c94213454750-merged.mount: Deactivated successfully.
Jan 31 09:10:36 compute-0 podman[416570]: 2026-01-31 09:10:36.940572757 +0000 UTC m=+0.970192583 container remove e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 09:10:36 compute-0 systemd[1]: libpod-conmon-e5c7db608c226a30edfa37cf23fd444b1aed329ed48f510496d7ecfbd03e9e97.scope: Deactivated successfully.
Jan 31 09:10:36 compute-0 sudo[416466]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:10:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:10:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b19d9c13-d71f-46af-8147-cce9cffcd96e does not exist
Jan 31 09:10:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 847e28ae-03c0-4a6f-8f94-0c371287ed01 does not exist
Jan 31 09:10:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 9f90c19b-827c-460a-af83-b2e26447b446 does not exist
Jan 31 09:10:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:37.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:37 compute-0 sudo[416621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:37 compute-0 sudo[416621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:37 compute-0 sudo[416621]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:37 compute-0 sudo[416646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:10:37 compute-0 sudo[416646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:37 compute-0 sudo[416646]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:37 compute-0 nova_compute[247704]: 2026-01-31 09:10:37.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4170: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:38 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:10:38 compute-0 ceph-mon[74496]: pgmap v4170: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:38 compute-0 nova_compute[247704]: 2026-01-31 09:10:38.617 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:38.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:39.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:39 compute-0 podman[416672]: 2026-01-31 09:10:39.407620565 +0000 UTC m=+0.569242913 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:10:39 compute-0 nova_compute[247704]: 2026-01-31 09:10:39.557 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:10:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4171: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:39 compute-0 ceph-mon[74496]: pgmap v4171: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:41.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4172: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:42 compute-0 ceph-mon[74496]: pgmap v4172: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:43.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:10:43.286 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=110, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=109) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:10:43 compute-0 nova_compute[247704]: 2026-01-31 09:10:43.286 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:10:43.288 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:10:43 compute-0 nova_compute[247704]: 2026-01-31 09:10:43.618 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4173: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:44 compute-0 ceph-mon[74496]: pgmap v4173: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:44.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4174: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:45 compute-0 ceph-mon[74496]: pgmap v4174: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:46.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:47.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4175: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:48 compute-0 nova_compute[247704]: 2026-01-31 09:10:48.619 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:48 compute-0 ceph-mon[74496]: pgmap v4175: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:10:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:48.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:10:48 compute-0 sudo[416696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:48 compute-0 sudo[416696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:48 compute-0 sudo[416696]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:48 compute-0 sudo[416721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:10:48 compute-0 sudo[416721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:10:48 compute-0 sudo[416721]: pam_unix(sudo:session): session closed for user root
Jan 31 09:10:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:49.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4176: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:49 compute-0 ceph-mon[74496]: pgmap v4176: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:10:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:10:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:50.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:51.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:10:51.290 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '110'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:10:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4177: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:52 compute-0 ceph-mon[74496]: pgmap v4177: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:52.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:52 compute-0 podman[416748]: 2026-01-31 09:10:52.910743088 +0000 UTC m=+0.082814255 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:10:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:53.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:53 compute-0 nova_compute[247704]: 2026-01-31 09:10:53.622 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4178: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3570653807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:10:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3570653807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:10:53 compute-0 ceph-mon[74496]: pgmap v4178: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:54.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:10:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:55.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:10:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4179: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:56 compute-0 ceph-mon[74496]: pgmap v4179: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:10:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:56.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:57.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4180: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:57 compute-0 ceph-mon[74496]: pgmap v4180: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:58 compute-0 nova_compute[247704]: 2026-01-31 09:10:58.624 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:10:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:10:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:58.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:10:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:10:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:10:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:59.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:10:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4181: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:10:59 compute-0 ceph-mon[74496]: pgmap v4181: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:00.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:01.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4182: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:01 compute-0 ceph-mon[74496]: pgmap v4182: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:02.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:03.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:03 compute-0 nova_compute[247704]: 2026-01-31 09:11:03.626 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4183: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:03 compute-0 ceph-mon[74496]: pgmap v4183: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:04.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:05.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4184: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:05 compute-0 ceph-mon[74496]: pgmap v4184: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:06.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:07.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4185: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:07 compute-0 ceph-mon[74496]: pgmap v4185: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:08 compute-0 nova_compute[247704]: 2026-01-31 09:11:08.629 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:11:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:08.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:09 compute-0 sudo[416783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:09 compute-0 sudo[416783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:09 compute-0 sudo[416783]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:09 compute-0 sudo[416808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:09 compute-0 sudo[416808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:09 compute-0 sudo[416808]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4186: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:09 compute-0 ceph-mon[74496]: pgmap v4186: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:09 compute-0 podman[416833]: 2026-01-31 09:11:09.872698946 +0000 UTC m=+0.045070008 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 09:11:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:10.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:11:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:11.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:11:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:11:11.243 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:11:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:11:11.243 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:11:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:11:11.243 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:11:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4187: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:12 compute-0 ceph-mon[74496]: pgmap v4187: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:12.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:13.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:13 compute-0 nova_compute[247704]: 2026-01-31 09:11:13.632 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:11:13 compute-0 nova_compute[247704]: 2026-01-31 09:11:13.633 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:11:13 compute-0 nova_compute[247704]: 2026-01-31 09:11:13.634 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:11:13 compute-0 nova_compute[247704]: 2026-01-31 09:11:13.634 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:11:13 compute-0 nova_compute[247704]: 2026-01-31 09:11:13.670 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:13 compute-0 nova_compute[247704]: 2026-01-31 09:11:13.671 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:11:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4188: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:14 compute-0 ceph-mon[74496]: pgmap v4188: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:14.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:15.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:15 compute-0 nova_compute[247704]: 2026-01-31 09:11:15.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:15 compute-0 nova_compute[247704]: 2026-01-31 09:11:15.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:11:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4189: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:15 compute-0 ceph-mon[74496]: pgmap v4189: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:16.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 09:11:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 09:11:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4190: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:17 compute-0 ceph-mon[74496]: pgmap v4190: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:18 compute-0 nova_compute[247704]: 2026-01-31 09:11:18.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:18 compute-0 nova_compute[247704]: 2026-01-31 09:11:18.671 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:11:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:18.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:11:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:19.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:11:19 compute-0 nova_compute[247704]: 2026-01-31 09:11:19.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4191: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Jan 31 09:11:19 compute-0 ceph-mon[74496]: pgmap v4191: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 22 KiB/s wr, 1 op/s
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:11:20
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'images', 'backups']
Jan 31 09:11:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:11:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:11:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:20.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:11:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:11:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:21.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:11:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4192: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:21 compute-0 ceph-mon[74496]: pgmap v4192: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:22.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:23 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3332761948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:23 compute-0 nova_compute[247704]: 2026-01-31 09:11:23.674 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4193: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:23 compute-0 podman[416859]: 2026-01-31 09:11:23.901298555 +0000 UTC m=+0.077946900 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:11:24 compute-0 ceph-mon[74496]: pgmap v4193: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1859109925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:24.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:25.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4194: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:26 compute-0 ceph-mon[74496]: pgmap v4194: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:11:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:27.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:11:27 compute-0 nova_compute[247704]: 2026-01-31 09:11:27.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4195: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:27 compute-0 ceph-mon[74496]: pgmap v4195: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:28 compute-0 nova_compute[247704]: 2026-01-31 09:11:28.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:28 compute-0 nova_compute[247704]: 2026-01-31 09:11:28.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:11:28 compute-0 nova_compute[247704]: 2026-01-31 09:11:28.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:11:28 compute-0 nova_compute[247704]: 2026-01-31 09:11:28.592 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:11:28 compute-0 nova_compute[247704]: 2026-01-31 09:11:28.676 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:28.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:29.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:29 compute-0 sudo[416889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:29 compute-0 sudo[416889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:29 compute-0 sudo[416889]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:29 compute-0 sudo[416914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:29 compute-0 sudo[416914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:29 compute-0 sudo[416914]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3425866928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4196: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2631700456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:30 compute-0 ceph-mon[74496]: pgmap v4196: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 31 09:11:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2909810650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:30.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:31.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:31 compute-0 nova_compute[247704]: 2026-01-31 09:11:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:31 compute-0 nova_compute[247704]: 2026-01-31 09:11:31.638 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:11:31 compute-0 nova_compute[247704]: 2026-01-31 09:11:31.639 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:11:31 compute-0 nova_compute[247704]: 2026-01-31 09:11:31.639 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:11:31 compute-0 nova_compute[247704]: 2026-01-31 09:11:31.639 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:11:31 compute-0 nova_compute[247704]: 2026-01-31 09:11:31.639 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:11:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4197: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s
Jan 31 09:11:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:11:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/686250951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.109 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:11:32 compute-0 ceph-mon[74496]: pgmap v4197: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.246 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.248 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4211MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.248 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.248 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.395 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.396 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.436 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:11:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:11:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/199953000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.840 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:11:32 compute-0 nova_compute[247704]: 2026-01-31 09:11:32.845 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:11:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:32.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:33 compute-0 nova_compute[247704]: 2026-01-31 09:11:33.067 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:11:33 compute-0 nova_compute[247704]: 2026-01-31 09:11:33.070 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:11:33 compute-0 nova_compute[247704]: 2026-01-31 09:11:33.070 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:11:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:33.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:11:33.213 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=111, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=110) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:11:33 compute-0 nova_compute[247704]: 2026-01-31 09:11:33.214 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:33 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:11:33.214 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:11:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/216516074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:11:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/686250951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/199953000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:11:33 compute-0 nova_compute[247704]: 2026-01-31 09:11:33.678 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4198: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:11:34 compute-0 ceph-mon[74496]: pgmap v4198: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:11:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:34.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:35.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4199: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:11:36 compute-0 nova_compute[247704]: 2026-01-31 09:11:36.070 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:36 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:11:36.216 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '111'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002173955716080402 of space, bias 1.0, pg target 0.6521867148241206 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:11:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:11:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:36 compute-0 ceph-mon[74496]: pgmap v4199: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:11:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:36.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:37.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:37 compute-0 sudo[416987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:37 compute-0 sudo[416987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:37 compute-0 sudo[416987]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:37 compute-0 sudo[417012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:11:37 compute-0 sudo[417012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:37 compute-0 sudo[417012]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:37 compute-0 sudo[417037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:37 compute-0 sudo[417037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:37 compute-0 sudo[417037]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:37 compute-0 sudo[417062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Jan 31 09:11:37 compute-0 sudo[417062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4200: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:37 compute-0 sudo[417062]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:11:38 compute-0 ceph-mon[74496]: pgmap v4200: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:11:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:38 compute-0 sudo[417107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:38 compute-0 sudo[417107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:38 compute-0 sudo[417107]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:38 compute-0 sudo[417132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:11:38 compute-0 sudo[417132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:38 compute-0 sudo[417132]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:38 compute-0 sudo[417158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:38 compute-0 sudo[417158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:38 compute-0 sudo[417158]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:38 compute-0 sudo[417183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:11:38 compute-0 sudo[417183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:38 compute-0 nova_compute[247704]: 2026-01-31 09:11:38.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:38 compute-0 nova_compute[247704]: 2026-01-31 09:11:38.680 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:11:38 compute-0 nova_compute[247704]: 2026-01-31 09:11:38.682 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:11:38 compute-0 nova_compute[247704]: 2026-01-31 09:11:38.682 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:11:38 compute-0 nova_compute[247704]: 2026-01-31 09:11:38.682 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:11:38 compute-0 nova_compute[247704]: 2026-01-31 09:11:38.718 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:38 compute-0 nova_compute[247704]: 2026-01-31 09:11:38.719 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:11:38 compute-0 sudo[417183]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:38.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:11:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:11:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:11:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:11:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:11:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:38 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6ade9ac0-f7ee-4597-9552-941dc4609ae7 does not exist
Jan 31 09:11:38 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b97431eb-5731-43a5-ae58-4a65196c46f4 does not exist
Jan 31 09:11:38 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev aca8a9ad-eba0-421e-b923-83853842c620 does not exist
Jan 31 09:11:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:11:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:11:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:11:38 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:11:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:11:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:11:39 compute-0 sudo[417239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:39 compute-0 sudo[417239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:39 compute-0 sudo[417239]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:39 compute-0 sudo[417264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:11:39 compute-0 sudo[417264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:39 compute-0 sudo[417264]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:39 compute-0 sudo[417289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:39 compute-0 sudo[417289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:39 compute-0 sudo[417289]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:39.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:39 compute-0 sudo[417314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:11:39 compute-0 sudo[417314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:11:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:11:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:11:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:11:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:11:39 compute-0 podman[417378]: 2026-01-31 09:11:39.476652611 +0000 UTC m=+0.077427007 container create 6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_feynman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 31 09:11:39 compute-0 podman[417378]: 2026-01-31 09:11:39.419571395 +0000 UTC m=+0.020345801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:11:39 compute-0 systemd[1]: Started libpod-conmon-6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6.scope.
Jan 31 09:11:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:11:39 compute-0 podman[417378]: 2026-01-31 09:11:39.566956457 +0000 UTC m=+0.167730853 container init 6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 09:11:39 compute-0 podman[417378]: 2026-01-31 09:11:39.572781548 +0000 UTC m=+0.173555914 container start 6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:11:39 compute-0 trusting_feynman[417395]: 167 167
Jan 31 09:11:39 compute-0 systemd[1]: libpod-6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6.scope: Deactivated successfully.
Jan 31 09:11:39 compute-0 podman[417378]: 2026-01-31 09:11:39.58034684 +0000 UTC m=+0.181121296 container attach 6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_feynman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 09:11:39 compute-0 podman[417378]: 2026-01-31 09:11:39.58160459 +0000 UTC m=+0.182378976 container died 6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_feynman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 09:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9285424e99d23ee7ca624b06866f64daf03462b1a44976882f7033a3d9acaf45-merged.mount: Deactivated successfully.
Jan 31 09:11:39 compute-0 podman[417378]: 2026-01-31 09:11:39.626722927 +0000 UTC m=+0.227497293 container remove 6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_feynman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 31 09:11:39 compute-0 systemd[1]: libpod-conmon-6bf5908fddd1706a40c0e96cb584946287a8673cd9888c95f3cf58b454170de6.scope: Deactivated successfully.
Jan 31 09:11:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4201: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:39 compute-0 podman[417419]: 2026-01-31 09:11:39.76333457 +0000 UTC m=+0.038659993 container create 10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:11:39 compute-0 systemd[1]: Started libpod-conmon-10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c.scope.
Jan 31 09:11:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb1d537b2b08f9af0cdabcd45df969ec16f37e89de973e8e5acd962bb8d30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb1d537b2b08f9af0cdabcd45df969ec16f37e89de973e8e5acd962bb8d30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb1d537b2b08f9af0cdabcd45df969ec16f37e89de973e8e5acd962bb8d30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb1d537b2b08f9af0cdabcd45df969ec16f37e89de973e8e5acd962bb8d30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccddb1d537b2b08f9af0cdabcd45df969ec16f37e89de973e8e5acd962bb8d30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:39 compute-0 podman[417419]: 2026-01-31 09:11:39.841828462 +0000 UTC m=+0.117153895 container init 10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:11:39 compute-0 podman[417419]: 2026-01-31 09:11:39.744845985 +0000 UTC m=+0.020171428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:11:39 compute-0 podman[417419]: 2026-01-31 09:11:39.849768993 +0000 UTC m=+0.125094416 container start 10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:11:39 compute-0 podman[417419]: 2026-01-31 09:11:39.853782569 +0000 UTC m=+0.129107992 container attach 10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:11:40 compute-0 nova_compute[247704]: 2026-01-31 09:11:40.557 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:40 compute-0 sharp_bardeen[417436]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:11:40 compute-0 sharp_bardeen[417436]: --> relative data size: 1.0
Jan 31 09:11:40 compute-0 sharp_bardeen[417436]: --> All data devices are unavailable
Jan 31 09:11:40 compute-0 systemd[1]: libpod-10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c.scope: Deactivated successfully.
Jan 31 09:11:40 compute-0 podman[417419]: 2026-01-31 09:11:40.593482267 +0000 UTC m=+0.868807700 container died 10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:11:40 compute-0 ceph-mon[74496]: pgmap v4201: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccddb1d537b2b08f9af0cdabcd45df969ec16f37e89de973e8e5acd962bb8d30-merged.mount: Deactivated successfully.
Jan 31 09:11:40 compute-0 podman[417419]: 2026-01-31 09:11:40.65377047 +0000 UTC m=+0.929095893 container remove 10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:11:40 compute-0 systemd[1]: libpod-conmon-10038ef191b080ae6983dbd0b172bd45cee1e169b0db5cab37dafd19cdef607c.scope: Deactivated successfully.
Jan 31 09:11:40 compute-0 sudo[417314]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:40 compute-0 podman[417453]: 2026-01-31 09:11:40.685775262 +0000 UTC m=+0.056556035 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:11:40 compute-0 sudo[417482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:40 compute-0 sudo[417482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:40 compute-0 sudo[417482]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:40 compute-0 sudo[417507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:11:40 compute-0 sudo[417507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:40 compute-0 sudo[417507]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:40 compute-0 sudo[417532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:40 compute-0 sudo[417532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:40 compute-0 sudo[417532]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:40 compute-0 sudo[417557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:11:40 compute-0 sudo[417557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:40.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:41.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:41 compute-0 podman[417621]: 2026-01-31 09:11:41.136600967 +0000 UTC m=+0.017638117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:11:41 compute-0 podman[417621]: 2026-01-31 09:11:41.27738756 +0000 UTC m=+0.158424690 container create d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 09:11:41 compute-0 systemd[1]: Started libpod-conmon-d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc.scope.
Jan 31 09:11:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:11:41 compute-0 podman[417621]: 2026-01-31 09:11:41.359027237 +0000 UTC m=+0.240064467 container init d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:11:41 compute-0 podman[417621]: 2026-01-31 09:11:41.370084543 +0000 UTC m=+0.251121673 container start d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:11:41 compute-0 podman[417621]: 2026-01-31 09:11:41.374341676 +0000 UTC m=+0.255378806 container attach d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:11:41 compute-0 amazing_varahamihira[417638]: 167 167
Jan 31 09:11:41 compute-0 systemd[1]: libpod-d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc.scope: Deactivated successfully.
Jan 31 09:11:41 compute-0 podman[417621]: 2026-01-31 09:11:41.377647126 +0000 UTC m=+0.258684256 container died d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 09:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-51aa278c17a6951ef638f0b714bbb490bdf27f75737264cdd97194d4608c716c-merged.mount: Deactivated successfully.
Jan 31 09:11:41 compute-0 podman[417621]: 2026-01-31 09:11:41.421518523 +0000 UTC m=+0.302555653 container remove d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 09:11:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:41 compute-0 systemd[1]: libpod-conmon-d0f1ce01ecb0541b561698369303883c38bd497af4d42a8f6d5082a0ebfef3fc.scope: Deactivated successfully.
Jan 31 09:11:41 compute-0 podman[417663]: 2026-01-31 09:11:41.557313476 +0000 UTC m=+0.041164283 container create 87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:11:41 compute-0 systemd[1]: Started libpod-conmon-87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29.scope.
Jan 31 09:11:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53087b680a38653a5333d3d44c6f0b245be973595e5707e8ff8a24060022160d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53087b680a38653a5333d3d44c6f0b245be973595e5707e8ff8a24060022160d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53087b680a38653a5333d3d44c6f0b245be973595e5707e8ff8a24060022160d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53087b680a38653a5333d3d44c6f0b245be973595e5707e8ff8a24060022160d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:41 compute-0 podman[417663]: 2026-01-31 09:11:41.540687025 +0000 UTC m=+0.024537852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:11:41 compute-0 podman[417663]: 2026-01-31 09:11:41.638497902 +0000 UTC m=+0.122348739 container init 87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carson, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Jan 31 09:11:41 compute-0 podman[417663]: 2026-01-31 09:11:41.64376892 +0000 UTC m=+0.127619727 container start 87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 09:11:41 compute-0 podman[417663]: 2026-01-31 09:11:41.646976867 +0000 UTC m=+0.130827694 container attach 87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 09:11:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4202: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:42 compute-0 ceph-mon[74496]: pgmap v4202: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:42 compute-0 quizzical_carson[417679]: {
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:     "0": [
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:         {
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "devices": [
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "/dev/loop3"
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             ],
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "lv_name": "ceph_lv0",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "lv_size": "7511998464",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "name": "ceph_lv0",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "tags": {
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.cluster_name": "ceph",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.crush_device_class": "",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.encrypted": "0",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.osd_id": "0",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.type": "block",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:                 "ceph.vdo": "0"
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             },
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "type": "block",
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:             "vg_name": "ceph_vg0"
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:         }
Jan 31 09:11:42 compute-0 quizzical_carson[417679]:     ]
Jan 31 09:11:42 compute-0 quizzical_carson[417679]: }
Jan 31 09:11:42 compute-0 systemd[1]: libpod-87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29.scope: Deactivated successfully.
Jan 31 09:11:42 compute-0 podman[417663]: 2026-01-31 09:11:42.405003956 +0000 UTC m=+0.888854763 container died 87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-53087b680a38653a5333d3d44c6f0b245be973595e5707e8ff8a24060022160d-merged.mount: Deactivated successfully.
Jan 31 09:11:42 compute-0 podman[417663]: 2026-01-31 09:11:42.893414846 +0000 UTC m=+1.377265663 container remove 87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:11:42 compute-0 systemd[1]: libpod-conmon-87e96de4bd09092052001bc5575b06e59a276d747557b0828bf54b0127c05c29.scope: Deactivated successfully.
Jan 31 09:11:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:42.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:42 compute-0 sudo[417557]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:42 compute-0 sudo[417702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:43 compute-0 sudo[417702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:43 compute-0 sudo[417702]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:43 compute-0 sudo[417727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:11:43 compute-0 sudo[417727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:43 compute-0 sudo[417727]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:43 compute-0 sudo[417752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:43 compute-0 sudo[417752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:43 compute-0 sudo[417752]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:43.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:43 compute-0 sudo[417777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:11:43 compute-0 sudo[417777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:43 compute-0 podman[417842]: 2026-01-31 09:11:43.500648392 +0000 UTC m=+0.045246792 container create 4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 09:11:43 compute-0 systemd[1]: Started libpod-conmon-4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b.scope.
Jan 31 09:11:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:11:43 compute-0 podman[417842]: 2026-01-31 09:11:43.480326472 +0000 UTC m=+0.024924892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:11:43 compute-0 podman[417842]: 2026-01-31 09:11:43.588693394 +0000 UTC m=+0.133291844 container init 4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 09:11:43 compute-0 podman[417842]: 2026-01-31 09:11:43.594921093 +0000 UTC m=+0.139519493 container start 4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermi, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:11:43 compute-0 loving_fermi[417859]: 167 167
Jan 31 09:11:43 compute-0 systemd[1]: libpod-4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b.scope: Deactivated successfully.
Jan 31 09:11:43 compute-0 podman[417842]: 2026-01-31 09:11:43.599293409 +0000 UTC m=+0.143891839 container attach 4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:11:43 compute-0 podman[417842]: 2026-01-31 09:11:43.599986696 +0000 UTC m=+0.144585106 container died 4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1dca4b60c32247cbc481d4b398dcc31fb6ed74538633ff7ffda56a29cbdccff-merged.mount: Deactivated successfully.
Jan 31 09:11:43 compute-0 podman[417842]: 2026-01-31 09:11:43.645328258 +0000 UTC m=+0.189926658 container remove 4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 09:11:43 compute-0 systemd[1]: libpod-conmon-4d481cb4cbc4ca44f9811672ff264eda5670f05661ec2b8afb918653c6da932b.scope: Deactivated successfully.
Jan 31 09:11:43 compute-0 nova_compute[247704]: 2026-01-31 09:11:43.720 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:11:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4203: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:43 compute-0 podman[417883]: 2026-01-31 09:11:43.800768484 +0000 UTC m=+0.052470455 container create 07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 09:11:43 compute-0 systemd[1]: Started libpod-conmon-07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95.scope.
Jan 31 09:11:43 compute-0 ceph-mon[74496]: pgmap v4203: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd88363e073a9e4e26adcdcebca4532fef36c9c96d398bbf59c5e01d1d5bf6f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd88363e073a9e4e26adcdcebca4532fef36c9c96d398bbf59c5e01d1d5bf6f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd88363e073a9e4e26adcdcebca4532fef36c9c96d398bbf59c5e01d1d5bf6f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd88363e073a9e4e26adcdcebca4532fef36c9c96d398bbf59c5e01d1d5bf6f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:11:43 compute-0 podman[417883]: 2026-01-31 09:11:43.781285636 +0000 UTC m=+0.032987627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:11:43 compute-0 podman[417883]: 2026-01-31 09:11:43.890626681 +0000 UTC m=+0.142328682 container init 07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 09:11:43 compute-0 podman[417883]: 2026-01-31 09:11:43.897957337 +0000 UTC m=+0.149659308 container start 07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:11:43 compute-0 podman[417883]: 2026-01-31 09:11:43.902774833 +0000 UTC m=+0.154476804 container attach 07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:11:44 compute-0 nova_compute[247704]: 2026-01-31 09:11:44.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]: {
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]:         "osd_id": 0,
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]:         "type": "bluestore"
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]:     }
Jan 31 09:11:44 compute-0 affectionate_kapitsa[417899]: }
Jan 31 09:11:44 compute-0 systemd[1]: libpod-07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95.scope: Deactivated successfully.
Jan 31 09:11:44 compute-0 podman[417883]: 2026-01-31 09:11:44.818442441 +0000 UTC m=+1.070144412 container died 07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 09:11:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:44.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd88363e073a9e4e26adcdcebca4532fef36c9c96d398bbf59c5e01d1d5bf6f3-merged.mount: Deactivated successfully.
Jan 31 09:11:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:45.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:45 compute-0 podman[417883]: 2026-01-31 09:11:45.318811611 +0000 UTC m=+1.570513612 container remove 07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 09:11:45 compute-0 systemd[1]: libpod-conmon-07b26e5dee82ac4233fe69cb213248acd823a06641e7ec78bafe3989a48bdc95.scope: Deactivated successfully.
Jan 31 09:11:45 compute-0 sudo[417777]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:11:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:11:45 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a110e70d-b87f-4f73-a99e-ec291c384a74 does not exist
Jan 31 09:11:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 8a51dbf4-8a2a-40d3-94a5-baa698b7d2a2 does not exist
Jan 31 09:11:45 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 289921a3-d29a-4e8c-a3fd-869b40478e8a does not exist
Jan 31 09:11:45 compute-0 sudo[417934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:45 compute-0 sudo[417934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:45 compute-0 sudo[417934]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:45 compute-0 sudo[417959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:11:45 compute-0 sudo[417959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:45 compute-0 sudo[417959]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4204: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:46 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:11:46 compute-0 ceph-mon[74496]: pgmap v4204: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3893306248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:11:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:46.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4205: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:48 compute-0 ceph-mon[74496]: pgmap v4205: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:48 compute-0 nova_compute[247704]: 2026-01-31 09:11:48.724 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:11:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:49.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:49 compute-0 sudo[417986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:49 compute-0 sudo[417986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:49 compute-0 sudo[417986]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:49 compute-0 sudo[418011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:11:49 compute-0 sudo[418011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:11:49 compute-0 sudo[418011]: pam_unix(sudo:session): session closed for user root
Jan 31 09:11:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4206: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:11:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:11:50 compute-0 ceph-mon[74496]: pgmap v4206: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:11:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:11:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:50.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:11:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:51.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4207: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 31 09:11:51 compute-0 ceph-mon[74496]: pgmap v4207: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 31 09:11:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:52.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4140901243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:11:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4140901243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:11:53 compute-0 nova_compute[247704]: 2026-01-31 09:11:53.726 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:53 compute-0 nova_compute[247704]: 2026-01-31 09:11:53.729 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4208: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:11:54 compute-0 ceph-mon[74496]: pgmap v4208: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:11:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:11:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:54.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:11:54 compute-0 podman[418040]: 2026-01-31 09:11:54.934793282 +0000 UTC m=+0.108349553 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:11:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:55.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4209: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:11:56 compute-0 ceph-mon[74496]: pgmap v4209: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:11:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:11:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:11:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:57.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:11:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4210: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:11:57 compute-0 ceph-mon[74496]: pgmap v4210: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:11:58 compute-0 nova_compute[247704]: 2026-01-31 09:11:58.731 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:11:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:11:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:11:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:11:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:11:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:59.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:11:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4211: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:11:59 compute-0 ceph-mon[74496]: pgmap v4211: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:12:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:12:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:01.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:12:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4212: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:12:02 compute-0 ceph-mon[74496]: pgmap v4212: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Jan 31 09:12:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:03.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:03 compute-0 nova_compute[247704]: 2026-01-31 09:12:03.732 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4213: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 17 op/s
Jan 31 09:12:03 compute-0 ceph-mon[74496]: pgmap v4213: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 12 KiB/s wr, 17 op/s
Jan 31 09:12:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:04.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3309363941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:05.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4214: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 17 op/s
Jan 31 09:12:06 compute-0 ceph-mon[74496]: pgmap v4214: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 13 KiB/s wr, 17 op/s
Jan 31 09:12:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:06.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:12:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:07.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:12:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4215: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 12 op/s
Jan 31 09:12:08 compute-0 ceph-mon[74496]: pgmap v4215: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 597 B/s wr, 12 op/s
Jan 31 09:12:08 compute-0 nova_compute[247704]: 2026-01-31 09:12:08.733 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:08 compute-0 nova_compute[247704]: 2026-01-31 09:12:08.736 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:08.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:09.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:09 compute-0 sudo[418072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:09 compute-0 sudo[418072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:09 compute-0 sudo[418072]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:09 compute-0 sudo[418097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:09 compute-0 sudo[418097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:09 compute-0 sudo[418097]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/734212161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:12:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/734212161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:12:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4216: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 09:12:10 compute-0 ceph-mon[74496]: pgmap v4216: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 938 B/s wr, 16 op/s
Jan 31 09:12:10 compute-0 podman[418123]: 2026-01-31 09:12:10.879814667 +0000 UTC m=+0.051725258 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 09:12:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:10 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:10 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:10.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:11.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:11.245 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:11.245 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:11.246 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4217: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:12:11 compute-0 ceph-mon[74496]: pgmap v4217: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:12:12 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:12 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:12 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:12.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:13.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:13 compute-0 nova_compute[247704]: 2026-01-31 09:12:13.735 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4218: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:12:14 compute-0 ceph-mon[74496]: pgmap v4218: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 09:12:14 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:14 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:14 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:14.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:15.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4219: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 697 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Jan 31 09:12:16 compute-0 ceph-mon[74496]: pgmap v4219: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 697 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Jan 31 09:12:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:16 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:16 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:16 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:16.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:17.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:17 compute-0 nova_compute[247704]: 2026-01-31 09:12:17.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:17 compute-0 nova_compute[247704]: 2026-01-31 09:12:17.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:12:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4220: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 09:12:18 compute-0 ceph-mon[74496]: pgmap v4220: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 694 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 09:12:18 compute-0 nova_compute[247704]: 2026-01-31 09:12:18.737 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:18 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:18 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:18 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:18.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:19.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:19 compute-0 nova_compute[247704]: 2026-01-31 09:12:19.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4221: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 767 B/s wr, 22 op/s
Jan 31 09:12:20 compute-0 ceph-mon[74496]: pgmap v4221: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 767 B/s wr, 22 op/s
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:12:20
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'vms', 'backups', '.mgr']
Jan 31 09:12:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:12:20 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:20 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:12:20 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:20.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:12:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:21.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:21 compute-0 nova_compute[247704]: 2026-01-31 09:12:21.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4222: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 19 op/s
Jan 31 09:12:22 compute-0 ceph-mon[74496]: pgmap v4222: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 19 op/s
Jan 31 09:12:22 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:22 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:22 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:22.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:23.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:23 compute-0 nova_compute[247704]: 2026-01-31 09:12:23.738 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4223: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 31 09:12:24 compute-0 ceph-mon[74496]: pgmap v4223: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 31 09:12:24 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:24 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:24 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:24.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1538677618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4224: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 767 B/s wr, 17 op/s
Jan 31 09:12:25 compute-0 podman[418149]: 2026-01-31 09:12:25.906432819 +0000 UTC m=+0.081078235 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:12:26 compute-0 ceph-mon[74496]: pgmap v4224: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 767 B/s wr, 17 op/s
Jan 31 09:12:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2139921524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:26 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:26 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:26 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:26.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:27.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4225: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 767 B/s wr, 17 op/s
Jan 31 09:12:28 compute-0 ceph-mon[74496]: pgmap v4225: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 767 B/s wr, 17 op/s
Jan 31 09:12:28 compute-0 nova_compute[247704]: 2026-01-31 09:12:28.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:28 compute-0 nova_compute[247704]: 2026-01-31 09:12:28.740 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:28 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:28 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:28 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:28.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Jan 31 09:12:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:29.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Jan 31 09:12:29 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Jan 31 09:12:29 compute-0 sudo[418178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:29 compute-0 sudo[418178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:29 compute-0 sudo[418178]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:29 compute-0 sudo[418203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:29 compute-0 sudo[418203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:29 compute-0 sudo[418203]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:29 compute-0 nova_compute[247704]: 2026-01-31 09:12:29.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:29 compute-0 nova_compute[247704]: 2026-01-31 09:12:29.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:12:29 compute-0 nova_compute[247704]: 2026-01-31 09:12:29.564 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:12:29 compute-0 nova_compute[247704]: 2026-01-31 09:12:29.589 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:12:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4227: 305 pgs: 305 active+clean; 148 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 847 KiB/s rd, 1.2 MiB/s wr, 42 op/s
Jan 31 09:12:30 compute-0 ceph-mon[74496]: osdmap e421: 3 total, 3 up, 3 in
Jan 31 09:12:30 compute-0 ceph-mon[74496]: pgmap v4227: 305 pgs: 305 active+clean; 148 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 847 KiB/s rd, 1.2 MiB/s wr, 42 op/s
Jan 31 09:12:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2901482307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:30 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:30 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:30 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:30.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:31.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:31 compute-0 nova_compute[247704]: 2026-01-31 09:12:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:31 compute-0 nova_compute[247704]: 2026-01-31 09:12:31.681 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:31 compute-0 nova_compute[247704]: 2026-01-31 09:12:31.682 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:31 compute-0 nova_compute[247704]: 2026-01-31 09:12:31.682 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:31 compute-0 nova_compute[247704]: 2026-01-31 09:12:31.682 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:12:31 compute-0 nova_compute[247704]: 2026-01-31 09:12:31.682 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4228: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 31 09:12:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3588450564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:12:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434427507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.089 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.159289) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752159405, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 251, "total_data_size": 3942045, "memory_usage": 4016304, "flush_reason": "Manual Compaction"}
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752206863, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 3821959, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90106, "largest_seqno": 92227, "table_properties": {"data_size": 3812383, "index_size": 6069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19564, "raw_average_key_size": 20, "raw_value_size": 3793277, "raw_average_value_size": 3943, "num_data_blocks": 265, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850542, "oldest_key_time": 1769850542, "file_creation_time": 1769850752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 47602 microseconds, and 8054 cpu microseconds.
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.206908) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 3821959 bytes OK
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.206929) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.210130) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.210162) EVENT_LOG_v1 {"time_micros": 1769850752210154, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.210185) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3933468, prev total WAL file size 3933468, number of live WAL files 2.
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.210870) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(3732KB)], [209(12MB)]
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752210908, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 16700551, "oldest_snapshot_seqno": -1}
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.244 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.245 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4203MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.246 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.246 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 11835 keys, 14760528 bytes, temperature: kUnknown
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752407782, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 14760528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14685442, "index_size": 44381, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29637, "raw_key_size": 312754, "raw_average_key_size": 26, "raw_value_size": 14479970, "raw_average_value_size": 1223, "num_data_blocks": 1684, "num_entries": 11835, "num_filter_entries": 11835, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.408153) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 14760528 bytes
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.411360) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.8 rd, 74.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.3 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(8.2) write-amplify(3.9) OK, records in: 12358, records dropped: 523 output_compression: NoCompression
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.411406) EVENT_LOG_v1 {"time_micros": 1769850752411379, "job": 132, "event": "compaction_finished", "compaction_time_micros": 196970, "compaction_time_cpu_micros": 31273, "output_level": 6, "num_output_files": 1, "total_output_size": 14760528, "num_input_records": 12358, "num_output_records": 11835, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752412109, "job": 132, "event": "table_file_deletion", "file_number": 211}
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850752413275, "job": 132, "event": "table_file_deletion", "file_number": 209}
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.210778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.413394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.413404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.413405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.413407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:12:32.413409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.453 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.454 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.482 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:12:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432717354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.883 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.889 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:12:32 compute-0 ceph-mon[74496]: pgmap v4228: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 31 09:12:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1434427507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1432717354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:32 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:32 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:32 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:32.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.975 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.978 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:12:32 compute-0 nova_compute[247704]: 2026-01-31 09:12:32.979 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:33.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:33 compute-0 nova_compute[247704]: 2026-01-31 09:12:33.742 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4229: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 31 09:12:34 compute-0 ceph-mon[74496]: pgmap v4229: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 31 09:12:34 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:34 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:12:34 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:34.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:12:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:35.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4230: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 09:12:35 compute-0 ceph-mon[74496]: pgmap v4230: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003150881723431975 of space, bias 1.0, pg target 0.9452645170295925 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:12:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:12:36 compute-0 nova_compute[247704]: 2026-01-31 09:12:36.358 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "66b31a08-89b1-4470-b197-7fd3bf1820c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:36 compute-0 nova_compute[247704]: 2026-01-31 09:12:36.358 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:36 compute-0 nova_compute[247704]: 2026-01-31 09:12:36.551 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:12:36 compute-0 nova_compute[247704]: 2026-01-31 09:12:36.873 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:36 compute-0 nova_compute[247704]: 2026-01-31 09:12:36.873 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:36 compute-0 nova_compute[247704]: 2026-01-31 09:12:36.880 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:12:36 compute-0 nova_compute[247704]: 2026-01-31 09:12:36.881 247708 INFO nova.compute.claims [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Claim successful on node compute-0.ctlplane.example.com
Jan 31 09:12:36 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:36 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:36 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:36.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:36 compute-0 nova_compute[247704]: 2026-01-31 09:12:36.979 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:37 compute-0 nova_compute[247704]: 2026-01-31 09:12:37.150 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:37.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4231: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 09:12:38 compute-0 ceph-mon[74496]: pgmap v4231: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Jan 31 09:12:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:12:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/130320623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.119 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.970s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.126 247708 DEBUG nova.compute.provider_tree [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.247 247708 DEBUG nova.scheduler.client.report [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.364 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.365 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.588 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.588 247708 DEBUG nova.network.neutron [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.665 247708 INFO nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.744 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.746 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.747 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:38 compute-0 nova_compute[247704]: 2026-01-31 09:12:38.888 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:12:38 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:38 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:38 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:38.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:39 compute-0 nova_compute[247704]: 2026-01-31 09:12:39.015 247708 INFO nova.virt.block_device [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Booting with volume snapshot 3d93d6e9-0ee1-4a3d-9ffd-2035ed6de999 at /dev/vda
Jan 31 09:12:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/130320623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:12:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:39.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:39 compute-0 nova_compute[247704]: 2026-01-31 09:12:39.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:39 compute-0 nova_compute[247704]: 2026-01-31 09:12:39.569 247708 DEBUG nova.policy [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecd39871d7fd438f88b36601f25d6eb6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98d10c0290e340a08e9d1726bf0066bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:12:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4232: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 925 KiB/s wr, 6 op/s
Jan 31 09:12:40 compute-0 ceph-mon[74496]: pgmap v4232: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 925 KiB/s wr, 6 op/s
Jan 31 09:12:40 compute-0 nova_compute[247704]: 2026-01-31 09:12:40.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:40 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:40 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:40 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:40.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:41 compute-0 nova_compute[247704]: 2026-01-31 09:12:41.159 247708 DEBUG nova.network.neutron [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Successfully created port: bf47178f-779f-480a-8138-c5e91ca8bea8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:12:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:41.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4233: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 806 KiB/s wr, 5 op/s
Jan 31 09:12:41 compute-0 podman[418300]: 2026-01-31 09:12:41.88933854 +0000 UTC m=+0.065569832 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 09:12:41 compute-0 ceph-mon[74496]: pgmap v4233: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 806 KiB/s wr, 5 op/s
Jan 31 09:12:42 compute-0 nova_compute[247704]: 2026-01-31 09:12:42.361 247708 DEBUG nova.network.neutron [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Successfully updated port: bf47178f-779f-480a-8138-c5e91ca8bea8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:12:42 compute-0 nova_compute[247704]: 2026-01-31 09:12:42.382 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "refresh_cache-66b31a08-89b1-4470-b197-7fd3bf1820c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:12:42 compute-0 nova_compute[247704]: 2026-01-31 09:12:42.382 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquired lock "refresh_cache-66b31a08-89b1-4470-b197-7fd3bf1820c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:12:42 compute-0 nova_compute[247704]: 2026-01-31 09:12:42.382 247708 DEBUG nova.network.neutron [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:12:42 compute-0 nova_compute[247704]: 2026-01-31 09:12:42.504 247708 DEBUG nova.compute.manager [req-05ef4c64-7487-4f9b-8c68-cda09cce97bf req-efbb0b91-c917-47b6-b600-f08a6821b97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received event network-changed-bf47178f-779f-480a-8138-c5e91ca8bea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:12:42 compute-0 nova_compute[247704]: 2026-01-31 09:12:42.504 247708 DEBUG nova.compute.manager [req-05ef4c64-7487-4f9b-8c68-cda09cce97bf req-efbb0b91-c917-47b6-b600-f08a6821b97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Refreshing instance network info cache due to event network-changed-bf47178f-779f-480a-8138-c5e91ca8bea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:12:42 compute-0 nova_compute[247704]: 2026-01-31 09:12:42.505 247708 DEBUG oslo_concurrency.lockutils [req-05ef4c64-7487-4f9b-8c68-cda09cce97bf req-efbb0b91-c917-47b6-b600-f08a6821b97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-66b31a08-89b1-4470-b197-7fd3bf1820c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:12:42 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:42 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:42 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:42.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:43.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:43 compute-0 nova_compute[247704]: 2026-01-31 09:12:43.390 247708 DEBUG nova.network.neutron [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:12:43 compute-0 nova_compute[247704]: 2026-01-31 09:12:43.746 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:43 compute-0 nova_compute[247704]: 2026-01-31 09:12:43.749 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4234: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 426 B/s wr, 4 op/s
Jan 31 09:12:43 compute-0 ceph-mon[74496]: pgmap v4234: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 426 B/s wr, 4 op/s
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.722 247708 DEBUG nova.network.neutron [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Updating instance_info_cache with network_info: [{"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.740 247708 DEBUG os_brick.utils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.743 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.754 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.754 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc6ffee-b1b5-4598-a721-73ed1586ca8b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.755 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.761 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.761 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc0a647-68bb-4604-88b2-f011430d539c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.762 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.765 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Releasing lock "refresh_cache-66b31a08-89b1-4470-b197-7fd3bf1820c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.765 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Instance network_info: |[{"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.766 247708 DEBUG oslo_concurrency.lockutils [req-05ef4c64-7487-4f9b-8c68-cda09cce97bf req-efbb0b91-c917-47b6-b600-f08a6821b97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-66b31a08-89b1-4470-b197-7fd3bf1820c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.766 247708 DEBUG nova.network.neutron [req-05ef4c64-7487-4f9b-8c68-cda09cce97bf req-efbb0b91-c917-47b6-b600-f08a6821b97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Refreshing network info cache for port bf47178f-779f-480a-8138-c5e91ca8bea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.771 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.771 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[51f37eaf-792d-48bf-af35-a9951647ab92]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.772 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4c2f12-c153-4555-b990-4ede04983f6c]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.772 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.798 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.800 247708 DEBUG os_brick.initiator.connectors.lightos [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.800 247708 DEBUG os_brick.initiator.connectors.lightos [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.800 247708 DEBUG os_brick.initiator.connectors.lightos [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.801 247708 DEBUG os_brick.utils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 09:12:44 compute-0 nova_compute[247704]: 2026-01-31 09:12:44.801 247708 DEBUG nova.virt.block_device [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Updating existing volume attachment record: 263fbd73-b688-40bf-9156-ed122af11c41 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 09:12:44 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:44 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:12:44 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:44.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:12:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:45.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2610765387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:12:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4235: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 14 op/s
Jan 31 09:12:45 compute-0 sudo[418328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:45 compute-0 sudo[418328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:45 compute-0 sudo[418328]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:46 compute-0 sudo[418353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:12:46 compute-0 sudo[418353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:46 compute-0 sudo[418353]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:46 compute-0 sudo[418378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:46 compute-0 sudo[418378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:46 compute-0 sudo[418378]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:46 compute-0 sudo[418403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:12:46 compute-0 sudo[418403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.160 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.162 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.162 247708 INFO nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Creating image(s)
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.163 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.163 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Ensure instance console log exists: /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.163 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.163 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.164 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.166 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Start _get_guest_xml network_info=[{"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '263fbd73-b688-40bf-9156-ed122af11c41', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d574299e-ff71-4c51-9235-0d741f4c633c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd574299e-ff71-4c51-9235-0d741f4c633c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '66b31a08-89b1-4470-b197-7fd3bf1820c4', 'attached_at': '', 'detached_at': '', 'volume_id': 'd574299e-ff71-4c51-9235-0d741f4c633c', 'serial': 'd574299e-ff71-4c51-9235-0d741f4c633c'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.169 247708 WARNING nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.175 247708 DEBUG nova.virt.libvirt.host [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.175 247708 DEBUG nova.virt.libvirt.host [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.177 247708 DEBUG nova.virt.libvirt.host [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.178 247708 DEBUG nova.virt.libvirt.host [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.179 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.179 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.180 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.180 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.180 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.180 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.180 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.180 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.181 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.181 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.181 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.181 247708 DEBUG nova.virt.hardware [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.213 247708 DEBUG nova.storage.rbd_utils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 66b31a08-89b1-4470-b197-7fd3bf1820c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.217 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 09:12:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 09:12:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 09:12:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:46 compute-0 sudo[418403]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 09:12:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 09:12:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:12:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114281962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.615 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.646 247708 DEBUG nova.virt.libvirt.vif [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1749459400',display_name='tempest-TestVolumeBootPattern-server-1749459400',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1749459400',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-72i51hhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2026-01-31T09:12:38Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=66b31a08-89b1-4470-b197-7fd3bf1820c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.646 247708 DEBUG nova.network.os_vif_util [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.647 247708 DEBUG nova.network.os_vif_util [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:75:f6,bridge_name='br-int',has_traffic_filtering=True,id=bf47178f-779f-480a-8138-c5e91ca8bea8,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf47178f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.648 247708 DEBUG nova.objects.instance [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 66b31a08-89b1-4470-b197-7fd3bf1820c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.666 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <uuid>66b31a08-89b1-4470-b197-7fd3bf1820c4</uuid>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <name>instance-000000db</name>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <metadata>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <nova:name>tempest-TestVolumeBootPattern-server-1749459400</nova:name>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 09:12:46</nova:creationTime>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <nova:user uuid="ecd39871d7fd438f88b36601f25d6eb6">tempest-TestVolumeBootPattern-1294459393-project-member</nova:user>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <nova:project uuid="98d10c0290e340a08e9d1726bf0066bf">tempest-TestVolumeBootPattern-1294459393</nova:project>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <nova:port uuid="bf47178f-779f-480a-8138-c5e91ca8bea8">
Jan 31 09:12:46 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   </metadata>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <system>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <entry name="serial">66b31a08-89b1-4470-b197-7fd3bf1820c4</entry>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <entry name="uuid">66b31a08-89b1-4470-b197-7fd3bf1820c4</entry>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </system>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <os>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   </os>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <features>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <apic/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   </features>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   </clock>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   </cpu>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   <devices>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/66b31a08-89b1-4470-b197-7fd3bf1820c4_disk.config">
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       </source>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-d574299e-ff71-4c51-9235-0d741f4c633c">
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       </source>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:12:46 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <serial>d574299e-ff71-4c51-9235-0d741f4c633c</serial>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:1e:75:f6"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <target dev="tapbf47178f-77"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </interface>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4/console.log" append="off"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </serial>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <video>
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </video>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </rng>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 09:12:46 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 09:12:46 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 09:12:46 compute-0 nova_compute[247704]:   </devices>
Jan 31 09:12:46 compute-0 nova_compute[247704]: </domain>
Jan 31 09:12:46 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.666 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Preparing to wait for external event network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.666 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.666 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.667 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.667 247708 DEBUG nova.virt.libvirt.vif [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1749459400',display_name='tempest-TestVolumeBootPattern-server-1749459400',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1749459400',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-72i51hhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:12:38Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=66b31a08-89b1-4470-b197-7fd3bf1820c4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.667 247708 DEBUG nova.network.os_vif_util [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.668 247708 DEBUG nova.network.os_vif_util [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:75:f6,bridge_name='br-int',has_traffic_filtering=True,id=bf47178f-779f-480a-8138-c5e91ca8bea8,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf47178f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.668 247708 DEBUG os_vif [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:75:f6,bridge_name='br-int',has_traffic_filtering=True,id=bf47178f-779f-480a-8138-c5e91ca8bea8,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf47178f-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.669 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.669 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.669 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.673 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.674 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf47178f-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.675 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf47178f-77, col_values=(('external_ids', {'iface-id': 'bf47178f-779f-480a-8138-c5e91ca8bea8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:75:f6', 'vm-uuid': '66b31a08-89b1-4470-b197-7fd3bf1820c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.677 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:46 compute-0 NetworkManager[49108]: <info>  [1769850766.6780] manager: (tapbf47178f-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.680 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.681 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.682 247708 INFO os_vif [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:75:f6,bridge_name='br-int',has_traffic_filtering=True,id=bf47178f-779f-480a-8138-c5e91ca8bea8,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf47178f-77')
Jan 31 09:12:46 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.729 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.729 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.730 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No VIF found with MAC fa:16:3e:1e:75:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.730 247708 INFO nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Using config drive
Jan 31 09:12:46 compute-0 nova_compute[247704]: 2026-01-31 09:12:46.755 247708 DEBUG nova.storage.rbd_utils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 66b31a08-89b1-4470-b197-7fd3bf1820c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:12:46 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:46 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:12:46 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:46.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:12:47 compute-0 ceph-mon[74496]: pgmap v4235: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 14 op/s
Jan 31 09:12:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:47 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 09:12:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/114281962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:47.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:47 compute-0 nova_compute[247704]: 2026-01-31 09:12:47.439 247708 DEBUG nova.network.neutron [req-05ef4c64-7487-4f9b-8c68-cda09cce97bf req-efbb0b91-c917-47b6-b600-f08a6821b97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Updated VIF entry in instance network info cache for port bf47178f-779f-480a-8138-c5e91ca8bea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:12:47 compute-0 nova_compute[247704]: 2026-01-31 09:12:47.439 247708 DEBUG nova.network.neutron [req-05ef4c64-7487-4f9b-8c68-cda09cce97bf req-efbb0b91-c917-47b6-b600-f08a6821b97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Updating instance_info_cache with network_info: [{"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:12:47 compute-0 nova_compute[247704]: 2026-01-31 09:12:47.514 247708 DEBUG oslo_concurrency.lockutils [req-05ef4c64-7487-4f9b-8c68-cda09cce97bf req-efbb0b91-c917-47b6-b600-f08a6821b97c 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-66b31a08-89b1-4470-b197-7fd3bf1820c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:12:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 09:12:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:12:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:12:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:12:47 compute-0 nova_compute[247704]: 2026-01-31 09:12:47.746 247708 INFO nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Creating config drive at /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4/disk.config
Jan 31 09:12:47 compute-0 nova_compute[247704]: 2026-01-31 09:12:47.755 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfw2xx1f1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4236: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Jan 31 09:12:47 compute-0 nova_compute[247704]: 2026-01-31 09:12:47.881 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpfw2xx1f1" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 865d11cf-88e3-42d6-b883-b6040f51b0aa does not exist
Jan 31 09:12:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6854a0d0-df73-4435-bb81-a1d34f88ac58 does not exist
Jan 31 09:12:47 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1742ffe3-a562-45f3-9378-c65c8827d09f does not exist
Jan 31 09:12:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:12:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:12:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:12:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:12:47 compute-0 nova_compute[247704]: 2026-01-31 09:12:47.928 247708 DEBUG nova.storage.rbd_utils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 66b31a08-89b1-4470-b197-7fd3bf1820c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:12:47 compute-0 nova_compute[247704]: 2026-01-31 09:12:47.931 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4/disk.config 66b31a08-89b1-4470-b197-7fd3bf1820c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:12:47 compute-0 sudo[418536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:47 compute-0 sudo[418536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:47 compute-0 sudo[418536]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:48 compute-0 sudo[418566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:12:48 compute-0 sudo[418566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:48 compute-0 sudo[418566]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:48 compute-0 sudo[418606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:48 compute-0 sudo[418606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:48 compute-0 sudo[418606]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:48 compute-0 sudo[418631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:12:48 compute-0 sudo[418631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:12:48 compute-0 ceph-mon[74496]: pgmap v4236: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 10 op/s
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:12:48 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:12:48 compute-0 podman[418697]: 2026-01-31 09:12:48.40536426 +0000 UTC m=+0.045493098 container create b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 09:12:48 compute-0 systemd[1]: Started libpod-conmon-b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1.scope.
Jan 31 09:12:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:12:48 compute-0 podman[418697]: 2026-01-31 09:12:48.466322599 +0000 UTC m=+0.106451457 container init b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_johnson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 09:12:48 compute-0 podman[418697]: 2026-01-31 09:12:48.471582266 +0000 UTC m=+0.111711104 container start b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_johnson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:12:48 compute-0 podman[418697]: 2026-01-31 09:12:48.474861926 +0000 UTC m=+0.114990764 container attach b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_johnson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:12:48 compute-0 wonderful_johnson[418714]: 167 167
Jan 31 09:12:48 compute-0 systemd[1]: libpod-b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1.scope: Deactivated successfully.
Jan 31 09:12:48 compute-0 podman[418697]: 2026-01-31 09:12:48.381358852 +0000 UTC m=+0.021487710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:12:48 compute-0 podman[418697]: 2026-01-31 09:12:48.477311944 +0000 UTC m=+0.117440782 container died b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_johnson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 09:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f40079b75d506d3e7ced00a61f921678f7a384cb0d136d79f3aa18b3b8d46bf-merged.mount: Deactivated successfully.
Jan 31 09:12:48 compute-0 podman[418697]: 2026-01-31 09:12:48.633857107 +0000 UTC m=+0.273985945 container remove b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_johnson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:12:48 compute-0 systemd[1]: libpod-conmon-b9bfaa974e2673e46618783ddf84f6338c83b49a88ad6028fe6f2f333416b9a1.scope: Deactivated successfully.
Jan 31 09:12:48 compute-0 nova_compute[247704]: 2026-01-31 09:12:48.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:48 compute-0 podman[418740]: 2026-01-31 09:12:48.790209396 +0000 UTC m=+0.061626827 container create 5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:12:48 compute-0 podman[418740]: 2026-01-31 09:12:48.75429402 +0000 UTC m=+0.025711471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:12:48 compute-0 systemd[1]: Started libpod-conmon-5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6.scope.
Jan 31 09:12:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f633e025f7f5ad2eebde44944f0ea133cc9c5db5fc8681fc67f485fba5abf02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f633e025f7f5ad2eebde44944f0ea133cc9c5db5fc8681fc67f485fba5abf02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f633e025f7f5ad2eebde44944f0ea133cc9c5db5fc8681fc67f485fba5abf02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f633e025f7f5ad2eebde44944f0ea133cc9c5db5fc8681fc67f485fba5abf02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f633e025f7f5ad2eebde44944f0ea133cc9c5db5fc8681fc67f485fba5abf02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:48 compute-0 podman[418740]: 2026-01-31 09:12:48.892541512 +0000 UTC m=+0.163958993 container init 5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 09:12:48 compute-0 podman[418740]: 2026-01-31 09:12:48.906198801 +0000 UTC m=+0.177616232 container start 5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 09:12:48 compute-0 podman[418740]: 2026-01-31 09:12:48.910599577 +0000 UTC m=+0.182017008 container attach 5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:12:48 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:48 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:48 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:48.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:49.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:49 compute-0 sudo[418766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:49 compute-0 sudo[418766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:49 compute-0 sudo[418766]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:49 compute-0 sudo[418795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:49 compute-0 sudo[418795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:49 compute-0 romantic_hofstadter[418756]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:12:49 compute-0 romantic_hofstadter[418756]: --> relative data size: 1.0
Jan 31 09:12:49 compute-0 sudo[418795]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:49 compute-0 romantic_hofstadter[418756]: --> All data devices are unavailable
Jan 31 09:12:49 compute-0 systemd[1]: libpod-5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6.scope: Deactivated successfully.
Jan 31 09:12:49 compute-0 podman[418740]: 2026-01-31 09:12:49.705598217 +0000 UTC m=+0.977015688 container died 5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:12:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f633e025f7f5ad2eebde44944f0ea133cc9c5db5fc8681fc67f485fba5abf02-merged.mount: Deactivated successfully.
Jan 31 09:12:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4237: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Jan 31 09:12:49 compute-0 podman[418740]: 2026-01-31 09:12:49.761090304 +0000 UTC m=+1.032507735 container remove 5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 09:12:49 compute-0 systemd[1]: libpod-conmon-5112d6ced0ddc31ba386b59e23cd9868370cd37c311ea07b44e5b4ce752156d6.scope: Deactivated successfully.
Jan 31 09:12:49 compute-0 sudo[418631]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:49 compute-0 sudo[418838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:49 compute-0 sudo[418838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:49 compute-0 sudo[418838]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:49 compute-0 nova_compute[247704]: 2026-01-31 09:12:49.877 247708 DEBUG oslo_concurrency.processutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4/disk.config 66b31a08-89b1-4470-b197-7fd3bf1820c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.946s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:12:49 compute-0 nova_compute[247704]: 2026-01-31 09:12:49.879 247708 INFO nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Deleting local config drive /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4/disk.config because it was imported into RBD.
Jan 31 09:12:49 compute-0 sudo[418863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:12:49 compute-0 sudo[418863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:49 compute-0 sudo[418863]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:49 compute-0 kernel: tapbf47178f-77: entered promiscuous mode
Jan 31 09:12:49 compute-0 NetworkManager[49108]: <info>  [1769850769.9189] manager: (tapbf47178f-77): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Jan 31 09:12:49 compute-0 ovn_controller[149457]: 2026-01-31T09:12:49Z|00886|binding|INFO|Claiming lport bf47178f-779f-480a-8138-c5e91ca8bea8 for this chassis.
Jan 31 09:12:49 compute-0 ovn_controller[149457]: 2026-01-31T09:12:49Z|00887|binding|INFO|bf47178f-779f-480a-8138-c5e91ca8bea8: Claiming fa:16:3e:1e:75:f6 10.100.0.9
Jan 31 09:12:49 compute-0 nova_compute[247704]: 2026-01-31 09:12:49.919 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:49 compute-0 nova_compute[247704]: 2026-01-31 09:12:49.923 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:49 compute-0 nova_compute[247704]: 2026-01-31 09:12:49.926 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:49 compute-0 sudo[418891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:49 compute-0 sudo[418891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:49 compute-0 sudo[418891]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:49 compute-0 systemd-udevd[418926]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.942 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:75:f6 10.100.0.9'], port_security=['fa:16:3e:1e:75:f6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '66b31a08-89b1-4470-b197-7fd3bf1820c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bffa9ac-c434-4d26-bfda-d8e35c99a8ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=bf47178f-779f-480a-8138-c5e91ca8bea8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.944 160028 INFO neutron.agent.ovn.metadata.agent [-] Port bf47178f-779f-480a-8138-c5e91ca8bea8 in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 bound to our chassis
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.945 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:12:49 compute-0 systemd-machined[214448]: New machine qemu-93-instance-000000db.
Jan 31 09:12:49 compute-0 nova_compute[247704]: 2026-01-31 09:12:49.948 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:49 compute-0 NetworkManager[49108]: <info>  [1769850769.9536] device (tapbf47178f-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:12:49 compute-0 NetworkManager[49108]: <info>  [1769850769.9542] device (tapbf47178f-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:12:49 compute-0 ovn_controller[149457]: 2026-01-31T09:12:49Z|00888|binding|INFO|Setting lport bf47178f-779f-480a-8138-c5e91ca8bea8 ovn-installed in OVS
Jan 31 09:12:49 compute-0 ovn_controller[149457]: 2026-01-31T09:12:49Z|00889|binding|INFO|Setting lport bf47178f-779f-480a-8138-c5e91ca8bea8 up in Southbound
Jan 31 09:12:49 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-000000db.
Jan 31 09:12:49 compute-0 ceph-mon[74496]: pgmap v4237: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Jan 31 09:12:49 compute-0 nova_compute[247704]: 2026-01-31 09:12:49.956 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.956 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ae878a29-cc98-477d-a294-0d8add4b25be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.957 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c9ca540-51 in ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.959 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c9ca540-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.959 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a62b14f7-6b26-45d8-be8d-8b5551c31436]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.960 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[11129b5b-e8a0-404f-aaab-bcb0ec8b50f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.970 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd9cf27-58fc-4e56-94a5-c36fa477e596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:49 compute-0 sudo[418928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:12:49 compute-0 sudo[418928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:49 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:49.991 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3417ad9f-25ae-4794-8ecc-92434bd193a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.015 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a10d5369-1dc7-4b17-afd5-2d59bd3cf9ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 NetworkManager[49108]: <info>  [1769850770.0229] manager: (tap5c9ca540-50): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.022 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd07f64-d051-48f9-9199-19de42550aa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.042 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[60a2ec4d-b6e5-4d6d-9c99-6fc7810d46eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.045 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8db269d2-6b23-4ede-acc0-621f2b4851c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 NetworkManager[49108]: <info>  [1769850770.0611] device (tap5c9ca540-50): carrier: link connected
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.065 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[983ce119-c8ef-4d4e-a8a7-255635265484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.078 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5b48be-23a2-49b3-8244-7ddd05544dba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1101267, 'reachable_time': 37429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418984, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.088 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4b22dd35-e8cd-4a66-88f3-e05a31183502]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:dcf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1101267, 'tstamp': 1101267}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418985, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.102 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2ec15f-61d0-4811-964d-5227d9af92be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1101267, 'reachable_time': 37429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 418986, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.124 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[4f135913-6f4e-47bb-861b-e2e19fca02a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:12:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.170 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[86175e1e-8e6a-44ed-abd5-3205e7d9b765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.171 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.171 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.172 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c9ca540-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:50 compute-0 kernel: tap5c9ca540-50: entered promiscuous mode
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.174 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:50 compute-0 NetworkManager[49108]: <info>  [1769850770.1761] manager: (tap5c9ca540-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.177 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.183 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c9ca540-50, col_values=(('external_ids', {'iface-id': '016c97be-36ee-470a-8bac-28db98577a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.184 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:50 compute-0 ovn_controller[149457]: 2026-01-31T09:12:50Z|00890|binding|INFO|Releasing lport 016c97be-36ee-470a-8bac-28db98577a8c from this chassis (sb_readonly=0)
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.192 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.194 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.195 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d30c870e-d400-48b9-adc2-c6f566bce2b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.196 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: global
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:12:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:50.197 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'env', 'PROCESS_TAG=haproxy-5c9ca540-57e7-412d-8ef3-af923db0a265', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c9ca540-57e7-412d-8ef3-af923db0a265.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:12:50 compute-0 podman[419033]: 2026-01-31 09:12:50.269901437 +0000 UTC m=+0.037953125 container create bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 09:12:50 compute-0 systemd[1]: Started libpod-conmon-bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079.scope.
Jan 31 09:12:50 compute-0 podman[419033]: 2026-01-31 09:12:50.252261872 +0000 UTC m=+0.020313580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:12:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:12:50 compute-0 podman[419033]: 2026-01-31 09:12:50.371518206 +0000 UTC m=+0.139569914 container init bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 09:12:50 compute-0 podman[419033]: 2026-01-31 09:12:50.379919098 +0000 UTC m=+0.147970786 container start bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:12:50 compute-0 podman[419033]: 2026-01-31 09:12:50.384389016 +0000 UTC m=+0.152440704 container attach bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 09:12:50 compute-0 jolly_ritchie[419052]: 167 167
Jan 31 09:12:50 compute-0 systemd[1]: libpod-bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079.scope: Deactivated successfully.
Jan 31 09:12:50 compute-0 podman[419033]: 2026-01-31 09:12:50.386062247 +0000 UTC m=+0.154113935 container died bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 09:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3e46eba8609cac59f30a23416c8e8063e51c822b474eb6c75d8345922cba2fb-merged.mount: Deactivated successfully.
Jan 31 09:12:50 compute-0 podman[419033]: 2026-01-31 09:12:50.430142359 +0000 UTC m=+0.198194047 container remove bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 09:12:50 compute-0 systemd[1]: libpod-conmon-bf20a71f570bb3deaf2e5afc6b160adcbc1fe23bb0540ac9d2f225d6faa9c079.scope: Deactivated successfully.
Jan 31 09:12:50 compute-0 podman[419128]: 2026-01-31 09:12:50.556856843 +0000 UTC m=+0.052238650 container create d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:12:50 compute-0 podman[419142]: 2026-01-31 09:12:50.591515518 +0000 UTC m=+0.052649139 container create 8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hertz, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 31 09:12:50 compute-0 systemd[1]: Started libpod-conmon-d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b.scope.
Jan 31 09:12:50 compute-0 podman[419128]: 2026-01-31 09:12:50.522898984 +0000 UTC m=+0.018280821 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:12:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:12:50 compute-0 systemd[1]: Started libpod-conmon-8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2.scope.
Jan 31 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767fa9badbbd01bdaf9fc606e8d87b1656c871bcd7241d8bcc2f7543a355f605/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:50 compute-0 podman[419128]: 2026-01-31 09:12:50.637262731 +0000 UTC m=+0.132644578 container init d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:12:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff6e52ef819e100aee0e80ccf51ef254218d5abb9d840ddb1d194f89b8a0ad6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff6e52ef819e100aee0e80ccf51ef254218d5abb9d840ddb1d194f89b8a0ad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff6e52ef819e100aee0e80ccf51ef254218d5abb9d840ddb1d194f89b8a0ad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ff6e52ef819e100aee0e80ccf51ef254218d5abb9d840ddb1d194f89b8a0ad6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:50 compute-0 podman[419128]: 2026-01-31 09:12:50.645350336 +0000 UTC m=+0.140732143 container start d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:12:50 compute-0 podman[419142]: 2026-01-31 09:12:50.568426282 +0000 UTC m=+0.029559933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:12:50 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[419158]: [NOTICE]   (419171) : New worker (419173) forked
Jan 31 09:12:50 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[419158]: [NOTICE]   (419171) : Loading success.
Jan 31 09:12:50 compute-0 podman[419142]: 2026-01-31 09:12:50.671899556 +0000 UTC m=+0.133033207 container init 8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hertz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 09:12:50 compute-0 podman[419142]: 2026-01-31 09:12:50.67956724 +0000 UTC m=+0.140700871 container start 8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.685 247708 DEBUG nova.compute.manager [req-d62787fb-46ee-4019-964d-a8868dd5325f req-95b5cb06-395d-4a80-805d-ac7af54b5ee7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received event network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.686 247708 DEBUG oslo_concurrency.lockutils [req-d62787fb-46ee-4019-964d-a8868dd5325f req-95b5cb06-395d-4a80-805d-ac7af54b5ee7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.686 247708 DEBUG oslo_concurrency.lockutils [req-d62787fb-46ee-4019-964d-a8868dd5325f req-95b5cb06-395d-4a80-805d-ac7af54b5ee7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.686 247708 DEBUG oslo_concurrency.lockutils [req-d62787fb-46ee-4019-964d-a8868dd5325f req-95b5cb06-395d-4a80-805d-ac7af54b5ee7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.687 247708 DEBUG nova.compute.manager [req-d62787fb-46ee-4019-964d-a8868dd5325f req-95b5cb06-395d-4a80-805d-ac7af54b5ee7 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Processing event network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:12:50 compute-0 podman[419142]: 2026-01-31 09:12:50.693591918 +0000 UTC m=+0.154725579 container attach 8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hertz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.776 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850770.7763228, 66b31a08-89b1-4470-b197-7fd3bf1820c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.777 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] VM Started (Lifecycle Event)
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.779 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.782 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.786 247708 INFO nova.virt.libvirt.driver [-] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Instance spawned successfully.
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.786 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.806 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.809 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.836 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.837 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850770.776508, 66b31a08-89b1-4470-b197-7fd3bf1820c4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.837 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] VM Paused (Lifecycle Event)
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.841 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.841 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.842 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.842 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.842 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.843 247708 DEBUG nova.virt.libvirt.driver [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.906 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.909 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850770.7821045, 66b31a08-89b1-4470-b197-7fd3bf1820c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.910 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] VM Resumed (Lifecycle Event)
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.953 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.957 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:12:50 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.979 247708 INFO nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Took 4.82 seconds to spawn the instance on the hypervisor.
Jan 31 09:12:50 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:50 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:50.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.980 247708 DEBUG nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:12:50 compute-0 nova_compute[247704]: 2026-01-31 09:12:50.992 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:12:50 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:12:50 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:12:51 compute-0 nova_compute[247704]: 2026-01-31 09:12:51.073 247708 INFO nova.compute.manager [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Took 14.23 seconds to build instance.
Jan 31 09:12:51 compute-0 nova_compute[247704]: 2026-01-31 09:12:51.112 247708 DEBUG oslo_concurrency.lockutils [None req-bb6d19b9-c733-4779-90b4-23bd1af5b6e9 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:51.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]: {
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:     "0": [
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:         {
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "devices": [
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "/dev/loop3"
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             ],
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "lv_name": "ceph_lv0",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "lv_size": "7511998464",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "name": "ceph_lv0",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "tags": {
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.cluster_name": "ceph",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.crush_device_class": "",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.encrypted": "0",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.osd_id": "0",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.type": "block",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:                 "ceph.vdo": "0"
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             },
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "type": "block",
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:             "vg_name": "ceph_vg0"
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:         }
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]:     ]
Jan 31 09:12:51 compute-0 heuristic_hertz[419167]: }
Jan 31 09:12:51 compute-0 systemd[1]: libpod-8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2.scope: Deactivated successfully.
Jan 31 09:12:51 compute-0 podman[419142]: 2026-01-31 09:12:51.474019537 +0000 UTC m=+0.935153168 container died 8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 09:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ff6e52ef819e100aee0e80ccf51ef254218d5abb9d840ddb1d194f89b8a0ad6-merged.mount: Deactivated successfully.
Jan 31 09:12:51 compute-0 podman[419142]: 2026-01-31 09:12:51.557954771 +0000 UTC m=+1.019088402 container remove 8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hertz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 31 09:12:51 compute-0 systemd[1]: libpod-conmon-8af76fb0d6db410efa332807e52b26b388bb26a1d1ffa34564b3f6580cf018b2.scope: Deactivated successfully.
Jan 31 09:12:51 compute-0 sudo[418928]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:51 compute-0 sudo[419207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:51 compute-0 sudo[419207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:51 compute-0 sudo[419207]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:51 compute-0 nova_compute[247704]: 2026-01-31 09:12:51.677 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:51 compute-0 sudo[419232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:12:51 compute-0 sudo[419232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:51 compute-0 sudo[419232]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:51.705 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=112, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=111) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:12:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:51.706 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:12:51 compute-0 nova_compute[247704]: 2026-01-31 09:12:51.707 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:51 compute-0 sudo[419257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:51 compute-0 sudo[419257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:51 compute-0 sudo[419257]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4238: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 31 09:12:51 compute-0 sudo[419282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:12:51 compute-0 sudo[419282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:52 compute-0 podman[419348]: 2026-01-31 09:12:52.123144841 +0000 UTC m=+0.031396277 container create e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:12:52 compute-0 systemd[1]: Started libpod-conmon-e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab.scope.
Jan 31 09:12:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:12:52 compute-0 ceph-mon[74496]: pgmap v4238: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Jan 31 09:12:52 compute-0 podman[419348]: 2026-01-31 09:12:52.10940614 +0000 UTC m=+0.017657596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:12:52 compute-0 podman[419348]: 2026-01-31 09:12:52.22847569 +0000 UTC m=+0.136727156 container init e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wiles, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:12:52 compute-0 podman[419348]: 2026-01-31 09:12:52.236847001 +0000 UTC m=+0.145098437 container start e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:12:52 compute-0 crazy_wiles[419364]: 167 167
Jan 31 09:12:52 compute-0 systemd[1]: libpod-e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab.scope: Deactivated successfully.
Jan 31 09:12:52 compute-0 conmon[419364]: conmon e34c4f5fdc24fe498ec5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab.scope/container/memory.events
Jan 31 09:12:52 compute-0 podman[419348]: 2026-01-31 09:12:52.246478264 +0000 UTC m=+0.154729700 container attach e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:12:52 compute-0 podman[419348]: 2026-01-31 09:12:52.246844723 +0000 UTC m=+0.155096159 container died e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wiles, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 09:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4d1f4bcc9bf8a84e8a8cfa665a710ab2e7d5ccf99489fe9bcd927f2b59788ab-merged.mount: Deactivated successfully.
Jan 31 09:12:52 compute-0 podman[419348]: 2026-01-31 09:12:52.297299739 +0000 UTC m=+0.205551175 container remove e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_wiles, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 09:12:52 compute-0 systemd[1]: libpod-conmon-e34c4f5fdc24fe498ec5ab629a887a6770d8cae735932bf2e6ca688ac67a8bab.scope: Deactivated successfully.
Jan 31 09:12:52 compute-0 podman[419385]: 2026-01-31 09:12:52.422720761 +0000 UTC m=+0.038043768 container create dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jang, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 31 09:12:52 compute-0 systemd[1]: Started libpod-conmon-dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4.scope.
Jan 31 09:12:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:12:52 compute-0 podman[419385]: 2026-01-31 09:12:52.406334606 +0000 UTC m=+0.021657623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a7716e01676f7a89a3f24b227bf14e315de4a3098a0ee2de55e3d562e121d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a7716e01676f7a89a3f24b227bf14e315de4a3098a0ee2de55e3d562e121d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a7716e01676f7a89a3f24b227bf14e315de4a3098a0ee2de55e3d562e121d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a7716e01676f7a89a3f24b227bf14e315de4a3098a0ee2de55e3d562e121d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:12:52 compute-0 podman[419385]: 2026-01-31 09:12:52.522877546 +0000 UTC m=+0.138200563 container init dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 09:12:52 compute-0 podman[419385]: 2026-01-31 09:12:52.527961737 +0000 UTC m=+0.143284734 container start dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jang, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 09:12:52 compute-0 podman[419385]: 2026-01-31 09:12:52.530880778 +0000 UTC m=+0.146203795 container attach dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jang, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:12:52 compute-0 nova_compute[247704]: 2026-01-31 09:12:52.855 247708 DEBUG nova.compute.manager [req-b60ddcfb-02c4-4beb-9009-2b2a4d868ffd req-2b04ad6d-4f11-4fd3-a7b3-97c113bf182a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received event network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:12:52 compute-0 nova_compute[247704]: 2026-01-31 09:12:52.858 247708 DEBUG oslo_concurrency.lockutils [req-b60ddcfb-02c4-4beb-9009-2b2a4d868ffd req-2b04ad6d-4f11-4fd3-a7b3-97c113bf182a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:52 compute-0 nova_compute[247704]: 2026-01-31 09:12:52.858 247708 DEBUG oslo_concurrency.lockutils [req-b60ddcfb-02c4-4beb-9009-2b2a4d868ffd req-2b04ad6d-4f11-4fd3-a7b3-97c113bf182a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:52 compute-0 nova_compute[247704]: 2026-01-31 09:12:52.858 247708 DEBUG oslo_concurrency.lockutils [req-b60ddcfb-02c4-4beb-9009-2b2a4d868ffd req-2b04ad6d-4f11-4fd3-a7b3-97c113bf182a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:52 compute-0 nova_compute[247704]: 2026-01-31 09:12:52.858 247708 DEBUG nova.compute.manager [req-b60ddcfb-02c4-4beb-9009-2b2a4d868ffd req-2b04ad6d-4f11-4fd3-a7b3-97c113bf182a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] No waiting events found dispatching network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:12:52 compute-0 nova_compute[247704]: 2026-01-31 09:12:52.859 247708 WARNING nova.compute.manager [req-b60ddcfb-02c4-4beb-9009-2b2a4d868ffd req-2b04ad6d-4f11-4fd3-a7b3-97c113bf182a 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received unexpected event network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 for instance with vm_state active and task_state None.
Jan 31 09:12:52 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:52 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:52 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:52.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:53.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:53 compute-0 optimistic_jang[419403]: {
Jan 31 09:12:53 compute-0 optimistic_jang[419403]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:12:53 compute-0 optimistic_jang[419403]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:12:53 compute-0 optimistic_jang[419403]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:12:53 compute-0 optimistic_jang[419403]:         "osd_id": 0,
Jan 31 09:12:53 compute-0 optimistic_jang[419403]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:12:53 compute-0 optimistic_jang[419403]:         "type": "bluestore"
Jan 31 09:12:53 compute-0 optimistic_jang[419403]:     }
Jan 31 09:12:53 compute-0 optimistic_jang[419403]: }
Jan 31 09:12:53 compute-0 systemd[1]: libpod-dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4.scope: Deactivated successfully.
Jan 31 09:12:53 compute-0 podman[419385]: 2026-01-31 09:12:53.380131626 +0000 UTC m=+0.995454623 container died dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:12:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:12:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/599817729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:12:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:12:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/599817729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-46a7716e01676f7a89a3f24b227bf14e315de4a3098a0ee2de55e3d562e121d4-merged.mount: Deactivated successfully.
Jan 31 09:12:53 compute-0 podman[419385]: 2026-01-31 09:12:53.631207656 +0000 UTC m=+1.246530663 container remove dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jang, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:12:53 compute-0 systemd[1]: libpod-conmon-dae278a6cac81e4b06493245718c251659ad4d332c3747b7f125be266a7a0aa4.scope: Deactivated successfully.
Jan 31 09:12:53 compute-0 sudo[419282]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:12:53 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:53.712 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '112'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:53 compute-0 nova_compute[247704]: 2026-01-31 09:12:53.749 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4239: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 720 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Jan 31 09:12:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/599817729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:12:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/599817729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:12:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:12:54 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev f52249cf-5197-44f5-92ae-61bd9b51722b does not exist
Jan 31 09:12:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 95e4510f-5e41-4b66-9f28-9782e915e5d9 does not exist
Jan 31 09:12:54 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 24c7a497-b985-4687-8357-26342c44ccbc does not exist
Jan 31 09:12:54 compute-0 sudo[419440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:12:54 compute-0 sudo[419440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:54 compute-0 sudo[419440]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:54 compute-0 sudo[419465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:12:54 compute-0 sudo[419465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:12:54 compute-0 sudo[419465]: pam_unix(sudo:session): session closed for user root
Jan 31 09:12:54 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:54 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:54 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:54.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:55 compute-0 ceph-mon[74496]: pgmap v4239: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 720 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Jan 31 09:12:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:55 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:12:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:12:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:55.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.486 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "66b31a08-89b1-4470-b197-7fd3bf1820c4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.487 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.487 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.487 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.487 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.488 247708 INFO nova.compute.manager [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Terminating instance
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.489 247708 DEBUG nova.compute.manager [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:12:55 compute-0 kernel: tapbf47178f-77 (unregistering): left promiscuous mode
Jan 31 09:12:55 compute-0 NetworkManager[49108]: <info>  [1769850775.5828] device (tapbf47178f-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.583 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 ovn_controller[149457]: 2026-01-31T09:12:55Z|00891|binding|INFO|Releasing lport bf47178f-779f-480a-8138-c5e91ca8bea8 from this chassis (sb_readonly=0)
Jan 31 09:12:55 compute-0 ovn_controller[149457]: 2026-01-31T09:12:55Z|00892|binding|INFO|Setting lport bf47178f-779f-480a-8138-c5e91ca8bea8 down in Southbound
Jan 31 09:12:55 compute-0 ovn_controller[149457]: 2026-01-31T09:12:55Z|00893|binding|INFO|Removing iface tapbf47178f-77 ovn-installed in OVS
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.591 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.597 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.605 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:75:f6 10.100.0.9'], port_security=['fa:16:3e:1e:75:f6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '66b31a08-89b1-4470-b197-7fd3bf1820c4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bffa9ac-c434-4d26-bfda-d8e35c99a8ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=bf47178f-779f-480a-8138-c5e91ca8bea8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.607 160028 INFO neutron.agent.ovn.metadata.agent [-] Port bf47178f-779f-480a-8138-c5e91ca8bea8 in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 unbound from our chassis
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.608 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c9ca540-57e7-412d-8ef3-af923db0a265, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.609 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[014b783d-a6da-426e-991a-f3eb9cc5186c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.609 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace which is not needed anymore
Jan 31 09:12:55 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000db.scope: Deactivated successfully.
Jan 31 09:12:55 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000db.scope: Consumed 5.493s CPU time.
Jan 31 09:12:55 compute-0 systemd-machined[214448]: Machine qemu-93-instance-000000db terminated.
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.712 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.717 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[419158]: [NOTICE]   (419171) : haproxy version is 2.8.14-c23fe91
Jan 31 09:12:55 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[419158]: [NOTICE]   (419171) : path to executable is /usr/sbin/haproxy
Jan 31 09:12:55 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[419158]: [WARNING]  (419171) : Exiting Master process...
Jan 31 09:12:55 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[419158]: [WARNING]  (419171) : Exiting Master process...
Jan 31 09:12:55 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[419158]: [ALERT]    (419171) : Current worker (419173) exited with code 143 (Terminated)
Jan 31 09:12:55 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[419158]: [WARNING]  (419171) : All workers exited. Exiting... (0)
Jan 31 09:12:55 compute-0 systemd[1]: libpod-d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b.scope: Deactivated successfully.
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.731 247708 INFO nova.virt.libvirt.driver [-] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Instance destroyed successfully.
Jan 31 09:12:55 compute-0 podman[419514]: 2026-01-31 09:12:55.73169119 +0000 UTC m=+0.044486493 container died d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.731 247708 DEBUG nova.objects.instance [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'resources' on Instance uuid 66b31a08-89b1-4470-b197-7fd3bf1820c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.748 247708 DEBUG nova.virt.libvirt.vif [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1749459400',display_name='tempest-TestVolumeBootPattern-server-1749459400',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1749459400',id=219,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:12:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-72i51hhe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:12:51Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=66b31a08-89b1-4470-b197-7fd3bf1820c4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.749 247708 DEBUG nova.network.os_vif_util [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "bf47178f-779f-480a-8138-c5e91ca8bea8", "address": "fa:16:3e:1e:75:f6", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf47178f-77", "ovs_interfaceid": "bf47178f-779f-480a-8138-c5e91ca8bea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.750 247708 DEBUG nova.network.os_vif_util [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:75:f6,bridge_name='br-int',has_traffic_filtering=True,id=bf47178f-779f-480a-8138-c5e91ca8bea8,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf47178f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.750 247708 DEBUG os_vif [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:75:f6,bridge_name='br-int',has_traffic_filtering=True,id=bf47178f-779f-480a-8138-c5e91ca8bea8,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf47178f-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.753 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.753 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf47178f-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.755 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.759 247708 INFO os_vif [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:75:f6,bridge_name='br-int',has_traffic_filtering=True,id=bf47178f-779f-480a-8138-c5e91ca8bea8,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf47178f-77')
Jan 31 09:12:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4240: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Jan 31 09:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b-userdata-shm.mount: Deactivated successfully.
Jan 31 09:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-767fa9badbbd01bdaf9fc606e8d87b1656c871bcd7241d8bcc2f7543a355f605-merged.mount: Deactivated successfully.
Jan 31 09:12:55 compute-0 podman[419514]: 2026-01-31 09:12:55.782036353 +0000 UTC m=+0.094831656 container cleanup d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:12:55 compute-0 systemd[1]: libpod-conmon-d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b.scope: Deactivated successfully.
Jan 31 09:12:55 compute-0 podman[419566]: 2026-01-31 09:12:55.84164628 +0000 UTC m=+0.045813295 container remove d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.845 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba80cff-2330-43ab-9734-68052269920a]: (4, ('Sat Jan 31 09:12:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b)\nd4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b\nSat Jan 31 09:12:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (d4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b)\nd4ad25ef0a76d4f2385ec6073fa7b134619cd7c4a64e6725b4c924ce9f9baa3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.846 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[6c16b3cf-9ee5-4d13-b3ad-4720aa0801bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.847 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:12:55 compute-0 kernel: tap5c9ca540-50: left promiscuous mode
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.850 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 nova_compute[247704]: 2026-01-31 09:12:55.855 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.857 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0743af-943e-4ef1-b221-04b8b4c1c1de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.877 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b52c09-799e-4ffb-a57e-85bfc59f78bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.879 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[328afb25-b00c-4628-a6d1-684cc10bee4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.891 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b64934a0-0088-4a18-8c4f-ad526b94258d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1101263, 'reachable_time': 30471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419585, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c9ca540\x2d57e7\x2d412d\x2d8ef3\x2daf923db0a265.mount: Deactivated successfully.
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.894 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:12:55 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:12:55.894 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[45a43357-2fc3-4ce5-8902-18b7a653a785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:12:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:12:56 compute-0 ceph-mon[74496]: pgmap v4240: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 87 op/s
Jan 31 09:12:56 compute-0 podman[419587]: 2026-01-31 09:12:56.882740702 +0000 UTC m=+0.062505588 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 09:12:56 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:56 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:56 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:56.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:57.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:57 compute-0 nova_compute[247704]: 2026-01-31 09:12:57.545 247708 DEBUG nova.compute.manager [req-7db28c8f-3d84-42c5-827e-7127ffebcbda req-b6db7d61-7d09-4585-8e1d-8c99ccf25639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received event network-vif-unplugged-bf47178f-779f-480a-8138-c5e91ca8bea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:12:57 compute-0 nova_compute[247704]: 2026-01-31 09:12:57.545 247708 DEBUG oslo_concurrency.lockutils [req-7db28c8f-3d84-42c5-827e-7127ffebcbda req-b6db7d61-7d09-4585-8e1d-8c99ccf25639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:57 compute-0 nova_compute[247704]: 2026-01-31 09:12:57.546 247708 DEBUG oslo_concurrency.lockutils [req-7db28c8f-3d84-42c5-827e-7127ffebcbda req-b6db7d61-7d09-4585-8e1d-8c99ccf25639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:57 compute-0 nova_compute[247704]: 2026-01-31 09:12:57.546 247708 DEBUG oslo_concurrency.lockutils [req-7db28c8f-3d84-42c5-827e-7127ffebcbda req-b6db7d61-7d09-4585-8e1d-8c99ccf25639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:57 compute-0 nova_compute[247704]: 2026-01-31 09:12:57.546 247708 DEBUG nova.compute.manager [req-7db28c8f-3d84-42c5-827e-7127ffebcbda req-b6db7d61-7d09-4585-8e1d-8c99ccf25639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] No waiting events found dispatching network-vif-unplugged-bf47178f-779f-480a-8138-c5e91ca8bea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:12:57 compute-0 nova_compute[247704]: 2026-01-31 09:12:57.546 247708 DEBUG nova.compute.manager [req-7db28c8f-3d84-42c5-827e-7127ffebcbda req-b6db7d61-7d09-4585-8e1d-8c99ccf25639 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received event network-vif-unplugged-bf47178f-779f-480a-8138-c5e91ca8bea8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:12:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4241: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Jan 31 09:12:58 compute-0 ceph-mon[74496]: pgmap v4241: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.588 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.674 247708 INFO nova.virt.libvirt.driver [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Deleting instance files /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4_del
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.675 247708 INFO nova.virt.libvirt.driver [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Deletion of /var/lib/nova/instances/66b31a08-89b1-4470-b197-7fd3bf1820c4_del complete
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.733 247708 INFO nova.compute.manager [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Took 3.24 seconds to destroy the instance on the hypervisor.
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.734 247708 DEBUG oslo.service.loopingcall [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.734 247708 DEBUG nova.compute.manager [-] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.735 247708 DEBUG nova.network.neutron [-] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:12:58 compute-0 nova_compute[247704]: 2026-01-31 09:12:58.791 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:12:58 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:58 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:12:58 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:58.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:12:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:12:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:12:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:59.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:12:59 compute-0 nova_compute[247704]: 2026-01-31 09:12:59.700 247708 DEBUG nova.compute.manager [req-ba83a235-dff2-47c3-84b5-a2fbc68b6b8e req-7db930fa-f367-4b0a-944b-023b15763e24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received event network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:12:59 compute-0 nova_compute[247704]: 2026-01-31 09:12:59.701 247708 DEBUG oslo_concurrency.lockutils [req-ba83a235-dff2-47c3-84b5-a2fbc68b6b8e req-7db930fa-f367-4b0a-944b-023b15763e24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:12:59 compute-0 nova_compute[247704]: 2026-01-31 09:12:59.702 247708 DEBUG oslo_concurrency.lockutils [req-ba83a235-dff2-47c3-84b5-a2fbc68b6b8e req-7db930fa-f367-4b0a-944b-023b15763e24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:12:59 compute-0 nova_compute[247704]: 2026-01-31 09:12:59.702 247708 DEBUG oslo_concurrency.lockutils [req-ba83a235-dff2-47c3-84b5-a2fbc68b6b8e req-7db930fa-f367-4b0a-944b-023b15763e24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:12:59 compute-0 nova_compute[247704]: 2026-01-31 09:12:59.703 247708 DEBUG nova.compute.manager [req-ba83a235-dff2-47c3-84b5-a2fbc68b6b8e req-7db930fa-f367-4b0a-944b-023b15763e24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] No waiting events found dispatching network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:12:59 compute-0 nova_compute[247704]: 2026-01-31 09:12:59.703 247708 WARNING nova.compute.manager [req-ba83a235-dff2-47c3-84b5-a2fbc68b6b8e req-7db930fa-f367-4b0a-944b-023b15763e24 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received unexpected event network-vif-plugged-bf47178f-779f-480a-8138-c5e91ca8bea8 for instance with vm_state active and task_state deleting.
Jan 31 09:12:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4242: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Jan 31 09:13:00 compute-0 ceph-mon[74496]: pgmap v4242: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Jan 31 09:13:00 compute-0 nova_compute[247704]: 2026-01-31 09:13:00.454 247708 DEBUG nova.network.neutron [-] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:13:00 compute-0 nova_compute[247704]: 2026-01-31 09:13:00.475 247708 INFO nova.compute.manager [-] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Took 1.74 seconds to deallocate network for instance.
Jan 31 09:13:00 compute-0 nova_compute[247704]: 2026-01-31 09:13:00.736 247708 INFO nova.compute.manager [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Took 0.26 seconds to detach 1 volumes for instance.
Jan 31 09:13:00 compute-0 nova_compute[247704]: 2026-01-31 09:13:00.737 247708 DEBUG nova.compute.manager [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Deleting volume: d574299e-ff71-4c51-9235-0d741f4c633c _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 31 09:13:00 compute-0 nova_compute[247704]: 2026-01-31 09:13:00.757 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:00 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:00 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:00 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:00.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.032 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.033 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.092 247708 DEBUG oslo_concurrency.processutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:01.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:13:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702245253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.552 247708 DEBUG oslo_concurrency.processutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.559 247708 DEBUG nova.compute.provider_tree [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.577 247708 DEBUG nova.scheduler.client.report [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.688 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.716 247708 INFO nova.scheduler.client.report [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Deleted allocations for instance 66b31a08-89b1-4470-b197-7fd3bf1820c4
Jan 31 09:13:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1702245253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4243: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.837 247708 DEBUG nova.compute.manager [req-2bc3102e-16fc-4898-9f85-df4fd0f79828 req-5af41a57-68b2-4136-b348-17f8d2345f48 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Received event network-vif-deleted-bf47178f-779f-480a-8138-c5e91ca8bea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:13:01 compute-0 nova_compute[247704]: 2026-01-31 09:13:01.866 247708 DEBUG oslo_concurrency.lockutils [None req-19b58ad4-2a48-43a9-9f3e-f5902ad3cd59 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "66b31a08-89b1-4470-b197-7fd3bf1820c4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:02.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:03.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:03 compute-0 ceph-mon[74496]: pgmap v4243: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 09:13:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4244: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 31 09:13:03 compute-0 nova_compute[247704]: 2026-01-31 09:13:03.793 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:04 compute-0 ceph-mon[74496]: pgmap v4244: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 31 09:13:04 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:04 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:04 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:04.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:05.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:05 compute-0 nova_compute[247704]: 2026-01-31 09:13:05.761 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4245: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 75 op/s
Jan 31 09:13:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2207572594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:13:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2207572594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:13:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:06 compute-0 ceph-mon[74496]: pgmap v4245: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 75 op/s
Jan 31 09:13:06 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:06 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:06 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:06.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Jan 31 09:13:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:07.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Jan 31 09:13:07 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Jan 31 09:13:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4247: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1023 B/s wr, 33 op/s
Jan 31 09:13:08 compute-0 ceph-mon[74496]: osdmap e422: 3 total, 3 up, 3 in
Jan 31 09:13:08 compute-0 ceph-mon[74496]: pgmap v4247: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1023 B/s wr, 33 op/s
Jan 31 09:13:08 compute-0 nova_compute[247704]: 2026-01-31 09:13:08.795 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:08 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:08 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:08 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:08.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:09.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:09 compute-0 nova_compute[247704]: 2026-01-31 09:13:09.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:09 compute-0 nova_compute[247704]: 2026-01-31 09:13:09.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:13:09 compute-0 sudo[419644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:09 compute-0 sudo[419644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:09 compute-0 sudo[419644]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4248: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.5 KiB/s wr, 43 op/s
Jan 31 09:13:09 compute-0 sudo[419669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:09 compute-0 sudo[419669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:09 compute-0 sudo[419669]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:10 compute-0 ceph-mon[74496]: pgmap v4248: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.5 KiB/s wr, 43 op/s
Jan 31 09:13:10 compute-0 nova_compute[247704]: 2026-01-31 09:13:10.729 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850775.7275233, 66b31a08-89b1-4470-b197-7fd3bf1820c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:13:10 compute-0 nova_compute[247704]: 2026-01-31 09:13:10.730 247708 INFO nova.compute.manager [-] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] VM Stopped (Lifecycle Event)
Jan 31 09:13:10 compute-0 nova_compute[247704]: 2026-01-31 09:13:10.761 247708 DEBUG nova.compute.manager [None req-bb987dd9-6472-42b2-a7b4-ec7b283356f3 - - - - - -] [instance: 66b31a08-89b1-4470-b197-7fd3bf1820c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:13:10 compute-0 nova_compute[247704]: 2026-01-31 09:13:10.763 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:10 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:10.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:11.246 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:11.247 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:11.247 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:11.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 09:13:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1948929932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:13:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 09:13:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1948929932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:13:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4249: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Jan 31 09:13:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1948929932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:13:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1948929932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:13:12 compute-0 podman[419696]: 2026-01-31 09:13:12.900891931 +0000 UTC m=+0.072460808 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 09:13:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:13.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:13 compute-0 ceph-mon[74496]: pgmap v4249: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Jan 31 09:13:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:13.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4250: 305 pgs: 305 active+clean; 154 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 36 op/s
Jan 31 09:13:13 compute-0 nova_compute[247704]: 2026-01-31 09:13:13.797 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:14 compute-0 ceph-mon[74496]: pgmap v4250: 305 pgs: 305 active+clean; 154 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 36 op/s
Jan 31 09:13:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:15.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:15.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:15 compute-0 nova_compute[247704]: 2026-01-31 09:13:15.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:15 compute-0 nova_compute[247704]: 2026-01-31 09:13:15.767 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Jan 31 09:13:16 compute-0 ceph-mon[74496]: pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Jan 31 09:13:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Jan 31 09:13:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Jan 31 09:13:16 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Jan 31 09:13:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:17.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:17.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:17 compute-0 nova_compute[247704]: 2026-01-31 09:13:17.574 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:17 compute-0 nova_compute[247704]: 2026-01-31 09:13:17.575 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:13:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Jan 31 09:13:17 compute-0 ceph-mon[74496]: osdmap e423: 3 total, 3 up, 3 in
Jan 31 09:13:18 compute-0 nova_compute[247704]: 2026-01-31 09:13:18.799 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:19.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:19 compute-0 ceph-mon[74496]: pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.6 KiB/s wr, 32 op/s
Jan 31 09:13:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:19.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 473 KiB/s rd, 921 B/s wr, 26 op/s
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:13:20 compute-0 ceph-mon[74496]: pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 473 KiB/s rd, 921 B/s wr, 26 op/s
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:13:20
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', 'images', 'vms', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.control']
Jan 31 09:13:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:13:20 compute-0 nova_compute[247704]: 2026-01-31 09:13:20.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:20 compute-0 nova_compute[247704]: 2026-01-31 09:13:20.771 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:21.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:13:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:21.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 921 B/s wr, 26 op/s
Jan 31 09:13:22 compute-0 ceph-mon[74496]: pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 921 B/s wr, 26 op/s
Jan 31 09:13:22 compute-0 nova_compute[247704]: 2026-01-31 09:13:22.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:23.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:23.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 614 B/s wr, 23 op/s
Jan 31 09:13:23 compute-0 nova_compute[247704]: 2026-01-31 09:13:23.801 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:23 compute-0 ceph-mon[74496]: pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 614 B/s wr, 23 op/s
Jan 31 09:13:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:25.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:25.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:25 compute-0 nova_compute[247704]: 2026-01-31 09:13:25.775 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4257: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 46 op/s
Jan 31 09:13:26 compute-0 ceph-mon[74496]: pgmap v4257: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 46 op/s
Jan 31 09:13:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:27.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2455795959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3879334382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4258: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 41 op/s
Jan 31 09:13:27 compute-0 podman[419723]: 2026-01-31 09:13:27.907321907 +0000 UTC m=+0.074355333 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:13:28 compute-0 ceph-mon[74496]: pgmap v4258: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 41 op/s
Jan 31 09:13:28 compute-0 nova_compute[247704]: 2026-01-31 09:13:28.804 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:29.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:29.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:29 compute-0 nova_compute[247704]: 2026-01-31 09:13:29.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4259: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 31 09:13:29 compute-0 sudo[419751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:29 compute-0 sudo[419751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:29 compute-0 sudo[419751]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:29 compute-0 sudo[419776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:29 compute-0 sudo[419776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:29 compute-0 sudo[419776]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:30 compute-0 ceph-mon[74496]: pgmap v4259: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 31 09:13:30 compute-0 ovn_controller[149457]: 2026-01-31T09:13:30Z|00894|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 09:13:30 compute-0 nova_compute[247704]: 2026-01-31 09:13:30.778 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:31.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:31.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.654 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.654 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2559977376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.761 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.761 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.761 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.761 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:13:31 compute-0 nova_compute[247704]: 2026-01-31 09:13:31.762 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4260: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 09:13:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:13:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401774579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.169 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.331 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.332 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4178MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.332 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.333 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.407 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.407 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.435 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.861 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.867 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.894 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.955 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:13:32 compute-0 nova_compute[247704]: 2026-01-31 09:13:32.956 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1909190248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:33 compute-0 ceph-mon[74496]: pgmap v4260: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 09:13:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2401774579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:33.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:33.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4261: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 09:13:33 compute-0 nova_compute[247704]: 2026-01-31 09:13:33.807 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3699418978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:34 compute-0 ceph-mon[74496]: pgmap v4261: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 09:13:34 compute-0 nova_compute[247704]: 2026-01-31 09:13:34.905 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:34 compute-0 nova_compute[247704]: 2026-01-31 09:13:34.906 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:34 compute-0 nova_compute[247704]: 2026-01-31 09:13:34.933 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:13:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:35.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.027 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.027 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.032 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.033 247708 INFO nova.compute.claims [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Claim successful on node compute-0.ctlplane.example.com
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.199 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:35.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:13:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/645523033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.631 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.636 247708 DEBUG nova.compute.provider_tree [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.654 247708 DEBUG nova.scheduler.client.report [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.690 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.690 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:13:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/645523033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.738 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.739 247708 DEBUG nova.network.neutron [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.764 247708 INFO nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:13:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4262: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.781 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.793 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:13:35 compute-0 nova_compute[247704]: 2026-01-31 09:13:35.935 247708 INFO nova.virt.block_device [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Booting with volume bef842a0-76ba-4e96-aba4-abce4554dd0c at /dev/vda
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.135 247708 DEBUG os_brick.utils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.137 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.146 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.147 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef58bde-a780-4244-af37-e5b5d4009dc8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.148 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.155 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.155 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[129bf595-91d2-4888-b560-a4d30254b45b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.156 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.163 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.164 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[5bac397c-ab7f-4752-9f12-dd44e2474348]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.165 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[7e519b62-018b-4c7b-80c3-801731bc7247]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.166 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.189 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.192 247708 DEBUG os_brick.initiator.connectors.lightos [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.192 247708 DEBUG os_brick.initiator.connectors.lightos [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.193 247708 DEBUG os_brick.initiator.connectors.lightos [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.193 247708 DEBUG os_brick.utils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.193 247708 DEBUG nova.virt.block_device [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updating existing volume attachment record: ed0090a8-6dee-4bb3-bf34-9b308526c7b0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031506999697561884 of space, bias 1.0, pg target 0.9452099909268565 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:13:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:13:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.561 247708 DEBUG nova.policy [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecd39871d7fd438f88b36601f25d6eb6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98d10c0290e340a08e9d1726bf0066bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:13:36 compute-0 nova_compute[247704]: 2026-01-31 09:13:36.864 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:37.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:37.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:37 compute-0 ceph-mon[74496]: pgmap v4262: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 09:13:37 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4263: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 760 KiB/s wr, 2 op/s
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.094 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.097 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.098 247708 INFO nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Creating image(s)
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.099 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.099 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Ensure instance console log exists: /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.100 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.101 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.102 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.324 247708 DEBUG nova.network.neutron [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Successfully created port: 055cb245-bd1c-48e0-85fa-1ff3257b1383 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:13:38 compute-0 nova_compute[247704]: 2026-01-31 09:13:38.808 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:39.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3936430947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:13:39 compute-0 ceph-mon[74496]: pgmap v4263: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 760 KiB/s wr, 2 op/s
Jan 31 09:13:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:39.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:39 compute-0 nova_compute[247704]: 2026-01-31 09:13:39.771 247708 DEBUG nova.network.neutron [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Successfully updated port: 055cb245-bd1c-48e0-85fa-1ff3257b1383 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:13:39 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4264: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 596 B/s rd, 760 KiB/s wr, 2 op/s
Jan 31 09:13:39 compute-0 nova_compute[247704]: 2026-01-31 09:13:39.793 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:13:39 compute-0 nova_compute[247704]: 2026-01-31 09:13:39.793 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquired lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:13:39 compute-0 nova_compute[247704]: 2026-01-31 09:13:39.793 247708 DEBUG nova.network.neutron [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:13:39 compute-0 nova_compute[247704]: 2026-01-31 09:13:39.936 247708 DEBUG nova.compute.manager [req-790c77a6-e62b-4124-a0c4-c9724b0c653c req-1be193f5-0b23-4a32-89fb-5c97fa553826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received event network-changed-055cb245-bd1c-48e0-85fa-1ff3257b1383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:13:39 compute-0 nova_compute[247704]: 2026-01-31 09:13:39.937 247708 DEBUG nova.compute.manager [req-790c77a6-e62b-4124-a0c4-c9724b0c653c req-1be193f5-0b23-4a32-89fb-5c97fa553826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Refreshing instance network info cache due to event network-changed-055cb245-bd1c-48e0-85fa-1ff3257b1383. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:13:39 compute-0 nova_compute[247704]: 2026-01-31 09:13:39.937 247708 DEBUG oslo_concurrency.lockutils [req-790c77a6-e62b-4124-a0c4-c9724b0c653c req-1be193f5-0b23-4a32-89fb-5c97fa553826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:13:40 compute-0 ceph-mon[74496]: pgmap v4264: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 596 B/s rd, 760 KiB/s wr, 2 op/s
Jan 31 09:13:40 compute-0 nova_compute[247704]: 2026-01-31 09:13:40.444 247708 DEBUG nova.network.neutron [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:13:40 compute-0 nova_compute[247704]: 2026-01-31 09:13:40.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:40 compute-0 nova_compute[247704]: 2026-01-31 09:13:40.786 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:41.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:41.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.709 247708 DEBUG nova.network.neutron [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updating instance_info_cache with network_info: [{"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.734 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Releasing lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.735 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Instance network_info: |[{"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.736 247708 DEBUG oslo_concurrency.lockutils [req-790c77a6-e62b-4124-a0c4-c9724b0c653c req-1be193f5-0b23-4a32-89fb-5c97fa553826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.737 247708 DEBUG nova.network.neutron [req-790c77a6-e62b-4124-a0c4-c9724b0c653c req-1be193f5-0b23-4a32-89fb-5c97fa553826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Refreshing network info cache for port 055cb245-bd1c-48e0-85fa-1ff3257b1383 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.742 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Start _get_guest_xml network_info=[{"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': True, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': 'ed0090a8-6dee-4bb3-bf34-9b308526c7b0', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bef842a0-76ba-4e96-aba4-abce4554dd0c', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bef842a0-76ba-4e96-aba4-abce4554dd0c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f82c07b4-3acf-4385-9b7a-8a49da3cd55a', 'attached_at': '', 'detached_at': '', 'volume_id': 'bef842a0-76ba-4e96-aba4-abce4554dd0c', 'serial': 'bef842a0-76ba-4e96-aba4-abce4554dd0c'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.748 247708 WARNING nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.758 247708 DEBUG nova.virt.libvirt.host [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.760 247708 DEBUG nova.virt.libvirt.host [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.763 247708 DEBUG nova.virt.libvirt.host [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.764 247708 DEBUG nova.virt.libvirt.host [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.766 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.767 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.767 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.768 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.768 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.769 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.770 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.770 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.771 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.771 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.771 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.772 247708 DEBUG nova.virt.hardware [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:13:41 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4265: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.807 247708 DEBUG nova.storage.rbd_utils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image f82c07b4-3acf-4385-9b7a-8a49da3cd55a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:13:41 compute-0 nova_compute[247704]: 2026-01-31 09:13:41.812 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:42 compute-0 ceph-mon[74496]: pgmap v4265: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:13:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:13:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446852087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.351 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.604 247708 DEBUG nova.virt.libvirt.vif [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:13:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1125575369',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1125575369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1125575369',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKWLSilDSatFdj+l72R7r4tT2LeUF3x3W2K5KxFW3XyiJuP7GkF+kAirfxylqPC6NcjBeQJfMv2H1rzxH82W47myV0/TExevXJFFnepwN96KbWJG7mo4f6FB99ScIugOCw==',key_name='tempest-keypair-1288716286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-u3vgr0xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:13:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=f82c07b4-3acf-4385-9b7a-8a49da3cd55a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.605 247708 DEBUG nova.network.os_vif_util [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.606 247708 DEBUG nova.network.os_vif_util [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:d9:d5,bridge_name='br-int',has_traffic_filtering=True,id=055cb245-bd1c-48e0-85fa-1ff3257b1383,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap055cb245-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.608 247708 DEBUG nova.objects.instance [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'pci_devices' on Instance uuid f82c07b4-3acf-4385-9b7a-8a49da3cd55a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.626 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <uuid>f82c07b4-3acf-4385-9b7a-8a49da3cd55a</uuid>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <name>instance-000000dc</name>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <metadata>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-1125575369</nova:name>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 09:13:41</nova:creationTime>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <nova:user uuid="ecd39871d7fd438f88b36601f25d6eb6">tempest-TestVolumeBootPattern-1294459393-project-member</nova:user>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <nova:project uuid="98d10c0290e340a08e9d1726bf0066bf">tempest-TestVolumeBootPattern-1294459393</nova:project>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <nova:port uuid="055cb245-bd1c-48e0-85fa-1ff3257b1383">
Jan 31 09:13:42 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   </metadata>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <system>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <entry name="serial">f82c07b4-3acf-4385-9b7a-8a49da3cd55a</entry>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <entry name="uuid">f82c07b4-3acf-4385-9b7a-8a49da3cd55a</entry>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </system>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <os>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   </os>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <features>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <apic/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   </features>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   </clock>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   </cpu>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   <devices>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/f82c07b4-3acf-4385-9b7a-8a49da3cd55a_disk.config">
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       </source>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-bef842a0-76ba-4e96-aba4-abce4554dd0c">
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       </source>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:13:42 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <serial>bef842a0-76ba-4e96-aba4-abce4554dd0c</serial>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:57:d9:d5"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <target dev="tap055cb245-bd"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </interface>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a/console.log" append="off"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </serial>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <video>
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </video>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </rng>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 09:13:42 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 09:13:42 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 09:13:42 compute-0 nova_compute[247704]:   </devices>
Jan 31 09:13:42 compute-0 nova_compute[247704]: </domain>
Jan 31 09:13:42 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.627 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Preparing to wait for external event network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.627 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.628 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.628 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.629 247708 DEBUG nova.virt.libvirt.vif [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:13:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1125575369',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1125575369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1125575369',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKWLSilDSatFdj+l72R7r4tT2LeUF3x3W2K5KxFW3XyiJuP7GkF+kAirfxylqPC6NcjBeQJfMv2H1rzxH82W47myV0/TExevXJFFnepwN96KbWJG7mo4f6FB99ScIugOCw==',key_name='tempest-keypair-1288716286',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-u3vgr0xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:13:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=f82c07b4-3acf-4385-9b7a-8a49da3cd55a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.629 247708 DEBUG nova.network.os_vif_util [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.629 247708 DEBUG nova.network.os_vif_util [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:d9:d5,bridge_name='br-int',has_traffic_filtering=True,id=055cb245-bd1c-48e0-85fa-1ff3257b1383,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap055cb245-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.630 247708 DEBUG os_vif [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:d9:d5,bridge_name='br-int',has_traffic_filtering=True,id=055cb245-bd1c-48e0-85fa-1ff3257b1383,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap055cb245-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.630 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.631 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.631 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.634 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.634 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap055cb245-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.635 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap055cb245-bd, col_values=(('external_ids', {'iface-id': '055cb245-bd1c-48e0-85fa-1ff3257b1383', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:d9:d5', 'vm-uuid': 'f82c07b4-3acf-4385-9b7a-8a49da3cd55a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:13:42 compute-0 NetworkManager[49108]: <info>  [1769850822.6372] manager: (tap055cb245-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.639 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.642 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.643 247708 INFO os_vif [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:d9:d5,bridge_name='br-int',has_traffic_filtering=True,id=055cb245-bd1c-48e0-85fa-1ff3257b1383,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap055cb245-bd')
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.703 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.704 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.704 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No VIF found with MAC fa:16:3e:57:d9:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.705 247708 INFO nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Using config drive
Jan 31 09:13:42 compute-0 nova_compute[247704]: 2026-01-31 09:13:42.728 247708 DEBUG nova.storage.rbd_utils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image f82c07b4-3acf-4385-9b7a-8a49da3cd55a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:13:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:43.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2446852087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:13:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:43.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.366 247708 INFO nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Creating config drive at /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a/disk.config
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.370 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsmaste9y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.494 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsmaste9y" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.517 247708 DEBUG nova.storage.rbd_utils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image f82c07b4-3acf-4385-9b7a-8a49da3cd55a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.520 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a/disk.config f82c07b4-3acf-4385-9b7a-8a49da3cd55a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:13:43 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4266: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.801 247708 DEBUG nova.network.neutron [req-790c77a6-e62b-4124-a0c4-c9724b0c653c req-1be193f5-0b23-4a32-89fb-5c97fa553826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updated VIF entry in instance network info cache for port 055cb245-bd1c-48e0-85fa-1ff3257b1383. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.802 247708 DEBUG nova.network.neutron [req-790c77a6-e62b-4124-a0c4-c9724b0c653c req-1be193f5-0b23-4a32-89fb-5c97fa553826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updating instance_info_cache with network_info: [{"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.809 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:43 compute-0 nova_compute[247704]: 2026-01-31 09:13:43.842 247708 DEBUG oslo_concurrency.lockutils [req-790c77a6-e62b-4124-a0c4-c9724b0c653c req-1be193f5-0b23-4a32-89fb-5c97fa553826 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:13:43 compute-0 podman[419979]: 2026-01-31 09:13:43.925974478 +0000 UTC m=+0.096215380 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 09:13:44 compute-0 ceph-mon[74496]: pgmap v4266: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.596 247708 DEBUG oslo_concurrency.processutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a/disk.config f82c07b4-3acf-4385-9b7a-8a49da3cd55a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.596 247708 INFO nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Deleting local config drive /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a/disk.config because it was imported into RBD.
Jan 31 09:13:44 compute-0 kernel: tap055cb245-bd: entered promiscuous mode
Jan 31 09:13:44 compute-0 NetworkManager[49108]: <info>  [1769850824.6440] manager: (tap055cb245-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Jan 31 09:13:44 compute-0 systemd-udevd[420012]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:13:44 compute-0 ovn_controller[149457]: 2026-01-31T09:13:44Z|00895|binding|INFO|Claiming lport 055cb245-bd1c-48e0-85fa-1ff3257b1383 for this chassis.
Jan 31 09:13:44 compute-0 ovn_controller[149457]: 2026-01-31T09:13:44Z|00896|binding|INFO|055cb245-bd1c-48e0-85fa-1ff3257b1383: Claiming fa:16:3e:57:d9:d5 10.100.0.9
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.696 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:44 compute-0 ovn_controller[149457]: 2026-01-31T09:13:44Z|00897|binding|INFO|Setting lport 055cb245-bd1c-48e0-85fa-1ff3257b1383 ovn-installed in OVS
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.704 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:44 compute-0 ovn_controller[149457]: 2026-01-31T09:13:44Z|00898|binding|INFO|Setting lport 055cb245-bd1c-48e0-85fa-1ff3257b1383 up in Southbound
Jan 31 09:13:44 compute-0 NetworkManager[49108]: <info>  [1769850824.7061] device (tap055cb245-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:13:44 compute-0 NetworkManager[49108]: <info>  [1769850824.7066] device (tap055cb245-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.706 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.706 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:d9:d5 10.100.0.9'], port_security=['fa:16:3e:57:d9:d5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f82c07b4-3acf-4385-9b7a-8a49da3cd55a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f1b11aa-db66-4189-a249-11f4c2357637', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=055cb245-bd1c-48e0-85fa-1ff3257b1383) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.707 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 055cb245-bd1c-48e0-85fa-1ff3257b1383 in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 bound to our chassis
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.708 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:13:44 compute-0 systemd-machined[214448]: New machine qemu-94-instance-000000dc.
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.715 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[bd975c34-bbb4-4902-bde8-90fa921d0a3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.716 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c9ca540-51 in ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.718 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c9ca540-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.718 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e8abc0-55e3-4298-b211-25daf54574fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.719 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f74c1be4-30a3-4ca4-8ac1-b64368557eb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-000000dc.
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.730 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[e72cd07f-7a3f-47ce-bea3-11ac6ba4dc01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.739 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[7612fc4a-f18f-4f5d-810d-6eb954370e51]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.761 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a31ab23b-aec3-4586-8cda-f300b9f16ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 NetworkManager[49108]: <info>  [1769850824.7672] manager: (tap5c9ca540-50): new Veth device (/org/freedesktop/NetworkManager/Devices/399)
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.766 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ab4385-6389-40ea-89d1-0d312f9cb5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 systemd-udevd[420017]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.793 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[1d76b3a2-0399-4fea-9c80-6b83613f9b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.796 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[a21d677f-33ad-4ec3-a03f-a011d35efc68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 NetworkManager[49108]: <info>  [1769850824.8104] device (tap5c9ca540-50): carrier: link connected
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.814 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f42d7b57-9b86-49b3-bfd4-d9dd31cc10d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.828 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[713ab04f-9890-4b3a-a335-815b9b3bdec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1106742, 'reachable_time': 19921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420048, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.842 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f20c2e37-dcba-4712-83f0-354fb689420a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:dcf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1106742, 'tstamp': 1106742}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420049, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.859 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[72012cef-051e-46c5-a8c1-3acba31a6600]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1106742, 'reachable_time': 19921, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 420050, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.879 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c88d2328-45bc-4657-b93c-26964fd1e3bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.932 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3b429f98-6b8f-449b-81b6-d9d91a2bcad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.933 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.933 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.934 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c9ca540-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.936 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:44 compute-0 NetworkManager[49108]: <info>  [1769850824.9369] manager: (tap5c9ca540-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Jan 31 09:13:44 compute-0 kernel: tap5c9ca540-50: entered promiscuous mode
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.939 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c9ca540-50, col_values=(('external_ids', {'iface-id': '016c97be-36ee-470a-8bac-28db98577a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.938 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:44 compute-0 ovn_controller[149457]: 2026-01-31T09:13:44Z|00899|binding|INFO|Releasing lport 016c97be-36ee-470a-8bac-28db98577a8c from this chassis (sb_readonly=0)
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.940 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:44 compute-0 nova_compute[247704]: 2026-01-31 09:13:44.947 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.947 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.948 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[94f2b3f5-b499-417b-841b-ba645f37e79f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.949 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: global
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:13:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:44.950 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'env', 'PROCESS_TAG=haproxy-5c9ca540-57e7-412d-8ef3-af923db0a265', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c9ca540-57e7-412d-8ef3-af923db0a265.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:13:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:45.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:45.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:45 compute-0 podman[420118]: 2026-01-31 09:13:45.278362361 +0000 UTC m=+0.033279202 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.377 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850825.377356, f82c07b4-3acf-4385-9b7a-8a49da3cd55a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.378 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] VM Started (Lifecycle Event)
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.409 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.415 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850825.3782978, f82c07b4-3acf-4385-9b7a-8a49da3cd55a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.415 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] VM Paused (Lifecycle Event)
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.439 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.445 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.467 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:13:45 compute-0 podman[420118]: 2026-01-31 09:13:45.593192899 +0000 UTC m=+0.348109690 container create d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 09:13:45 compute-0 systemd[1]: Started libpod-conmon-d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400.scope.
Jan 31 09:13:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:13:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808f4514e732753f9df0a9a187991f6a60110a333e7091719e9c6e313b6ff57c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:13:45 compute-0 podman[420118]: 2026-01-31 09:13:45.739015913 +0000 UTC m=+0.493932764 container init d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 09:13:45 compute-0 podman[420118]: 2026-01-31 09:13:45.743879441 +0000 UTC m=+0.498796232 container start d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 09:13:45 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[420139]: [NOTICE]   (420143) : New worker (420145) forked
Jan 31 09:13:45 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[420139]: [NOTICE]   (420143) : Loading success.
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.783 247708 DEBUG nova.compute.manager [req-1c61b710-4fcc-476f-b0d6-3e01fa0e36d6 req-cee4d8cd-2cd5-45e3-9ca7-3bc089151e28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received event network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.783 247708 DEBUG oslo_concurrency.lockutils [req-1c61b710-4fcc-476f-b0d6-3e01fa0e36d6 req-cee4d8cd-2cd5-45e3-9ca7-3bc089151e28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.784 247708 DEBUG oslo_concurrency.lockutils [req-1c61b710-4fcc-476f-b0d6-3e01fa0e36d6 req-cee4d8cd-2cd5-45e3-9ca7-3bc089151e28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.784 247708 DEBUG oslo_concurrency.lockutils [req-1c61b710-4fcc-476f-b0d6-3e01fa0e36d6 req-cee4d8cd-2cd5-45e3-9ca7-3bc089151e28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.784 247708 DEBUG nova.compute.manager [req-1c61b710-4fcc-476f-b0d6-3e01fa0e36d6 req-cee4d8cd-2cd5-45e3-9ca7-3bc089151e28 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Processing event network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.785 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:13:45 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4267: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.789 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769850825.7891312, f82c07b4-3acf-4385-9b7a-8a49da3cd55a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.789 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] VM Resumed (Lifecycle Event)
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.791 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.794 247708 INFO nova.virt.libvirt.driver [-] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Instance spawned successfully.
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.794 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.834 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.837 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.869 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.870 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.870 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.871 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.871 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.872 247708 DEBUG nova.virt.libvirt.driver [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.876 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.952 247708 INFO nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Took 7.86 seconds to spawn the instance on the hypervisor.
Jan 31 09:13:45 compute-0 nova_compute[247704]: 2026-01-31 09:13:45.952 247708 DEBUG nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:13:45 compute-0 ceph-mon[74496]: pgmap v4267: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:13:46 compute-0 nova_compute[247704]: 2026-01-31 09:13:46.009 247708 INFO nova.compute.manager [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Took 11.02 seconds to build instance.
Jan 31 09:13:46 compute-0 nova_compute[247704]: 2026-01-31 09:13:46.027 247708 DEBUG oslo_concurrency.lockutils [None req-39bd4a2e-d55a-4420-b5ae-860053edcc85 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:47.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:47.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:47 compute-0 nova_compute[247704]: 2026-01-31 09:13:47.638 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:47 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4268: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:13:47 compute-0 ceph-mon[74496]: pgmap v4268: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 170 B/s wr, 5 op/s
Jan 31 09:13:47 compute-0 nova_compute[247704]: 2026-01-31 09:13:47.980 247708 DEBUG nova.compute.manager [req-72f24807-daa4-47cf-aa94-7266df28246f req-b6424feb-a048-4c6e-a3a9-4fa0c6fbbfad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received event network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:13:47 compute-0 nova_compute[247704]: 2026-01-31 09:13:47.984 247708 DEBUG oslo_concurrency.lockutils [req-72f24807-daa4-47cf-aa94-7266df28246f req-b6424feb-a048-4c6e-a3a9-4fa0c6fbbfad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:13:47 compute-0 nova_compute[247704]: 2026-01-31 09:13:47.985 247708 DEBUG oslo_concurrency.lockutils [req-72f24807-daa4-47cf-aa94-7266df28246f req-b6424feb-a048-4c6e-a3a9-4fa0c6fbbfad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:13:47 compute-0 nova_compute[247704]: 2026-01-31 09:13:47.986 247708 DEBUG oslo_concurrency.lockutils [req-72f24807-daa4-47cf-aa94-7266df28246f req-b6424feb-a048-4c6e-a3a9-4fa0c6fbbfad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:13:47 compute-0 nova_compute[247704]: 2026-01-31 09:13:47.986 247708 DEBUG nova.compute.manager [req-72f24807-daa4-47cf-aa94-7266df28246f req-b6424feb-a048-4c6e-a3a9-4fa0c6fbbfad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] No waiting events found dispatching network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:13:47 compute-0 nova_compute[247704]: 2026-01-31 09:13:47.986 247708 WARNING nova.compute.manager [req-72f24807-daa4-47cf-aa94-7266df28246f req-b6424feb-a048-4c6e-a3a9-4fa0c6fbbfad 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received unexpected event network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 for instance with vm_state active and task_state None.
Jan 31 09:13:48 compute-0 nova_compute[247704]: 2026-01-31 09:13:48.811 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:13:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:49.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:13:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:13:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:49.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:13:49 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4269: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 776 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 31 09:13:50 compute-0 sudo[420156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:50 compute-0 sudo[420156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:50 compute-0 sudo[420156]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:50 compute-0 NetworkManager[49108]: <info>  [1769850830.0381] manager: (patch-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Jan 31 09:13:50 compute-0 NetworkManager[49108]: <info>  [1769850830.0388] manager: (patch-br-int-to-provnet-e517dec2-a64c-4b9e-b50d-187fb8da8ba1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Jan 31 09:13:50 compute-0 nova_compute[247704]: 2026-01-31 09:13:50.037 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:50 compute-0 nova_compute[247704]: 2026-01-31 09:13:50.058 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:50 compute-0 ovn_controller[149457]: 2026-01-31T09:13:50Z|00900|binding|INFO|Releasing lport 016c97be-36ee-470a-8bac-28db98577a8c from this chassis (sb_readonly=0)
Jan 31 09:13:50 compute-0 nova_compute[247704]: 2026-01-31 09:13:50.067 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:50 compute-0 sudo[420181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:50 compute-0 sudo[420181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:50 compute-0 sudo[420181]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:13:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:13:50 compute-0 ceph-mon[74496]: pgmap v4269: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 776 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 31 09:13:50 compute-0 nova_compute[247704]: 2026-01-31 09:13:50.825 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:50.824 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=113, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=112) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:13:50 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:50.826 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:13:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:51.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:51 compute-0 nova_compute[247704]: 2026-01-31 09:13:51.062 247708 DEBUG nova.compute.manager [req-cd0b0120-41c6-41cf-a3c3-626d705fe67d req-6a10b481-fa29-490a-9b24-bd73dabcf860 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received event network-changed-055cb245-bd1c-48e0-85fa-1ff3257b1383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:13:51 compute-0 nova_compute[247704]: 2026-01-31 09:13:51.064 247708 DEBUG nova.compute.manager [req-cd0b0120-41c6-41cf-a3c3-626d705fe67d req-6a10b481-fa29-490a-9b24-bd73dabcf860 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Refreshing instance network info cache due to event network-changed-055cb245-bd1c-48e0-85fa-1ff3257b1383. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:13:51 compute-0 nova_compute[247704]: 2026-01-31 09:13:51.064 247708 DEBUG oslo_concurrency.lockutils [req-cd0b0120-41c6-41cf-a3c3-626d705fe67d req-6a10b481-fa29-490a-9b24-bd73dabcf860 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:13:51 compute-0 nova_compute[247704]: 2026-01-31 09:13:51.065 247708 DEBUG oslo_concurrency.lockutils [req-cd0b0120-41c6-41cf-a3c3-626d705fe67d req-6a10b481-fa29-490a-9b24-bd73dabcf860 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:13:51 compute-0 nova_compute[247704]: 2026-01-31 09:13:51.065 247708 DEBUG nova.network.neutron [req-cd0b0120-41c6-41cf-a3c3-626d705fe67d req-6a10b481-fa29-490a-9b24-bd73dabcf860 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Refreshing network info cache for port 055cb245-bd1c-48e0-85fa-1ff3257b1383 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:13:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:51.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:51.703839) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850831703906, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 990, "num_deletes": 251, "total_data_size": 1478397, "memory_usage": 1502528, "flush_reason": "Manual Compaction"}
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Jan 31 09:13:51 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4270: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850831821755, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 956587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92228, "largest_seqno": 93217, "table_properties": {"data_size": 952499, "index_size": 1675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10982, "raw_average_key_size": 21, "raw_value_size": 943687, "raw_average_value_size": 1825, "num_data_blocks": 72, "num_entries": 517, "num_filter_entries": 517, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850752, "oldest_key_time": 1769850752, "file_creation_time": 1769850831, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 117959 microseconds, and 3174 cpu microseconds.
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:51.821794) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 956587 bytes OK
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:51.821824) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:51.917160) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:51.917204) EVENT_LOG_v1 {"time_micros": 1769850831917194, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:51.917228) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 1473760, prev total WAL file size 1473760, number of live WAL files 2.
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:51.918581) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353130' seq:72057594037927935, type:22 .. '6D6772737461740033373631' seq:0, type:0; will stop at (end)
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(934KB)], [212(14MB)]
Jan 31 09:13:51 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850831918641, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 15717115, "oldest_snapshot_seqno": -1}
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 11862 keys, 12350344 bytes, temperature: kUnknown
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832420917, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 12350344, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12278798, "index_size": 40777, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 313550, "raw_average_key_size": 26, "raw_value_size": 12076802, "raw_average_value_size": 1018, "num_data_blocks": 1535, "num_entries": 11862, "num_filter_entries": 11862, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850831, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:52.421219) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12350344 bytes
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:52.533054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 31.3 rd, 24.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.1 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(29.3) write-amplify(12.9) OK, records in: 12352, records dropped: 490 output_compression: NoCompression
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:52.533152) EVENT_LOG_v1 {"time_micros": 1769850832533114, "job": 134, "event": "compaction_finished", "compaction_time_micros": 502373, "compaction_time_cpu_micros": 25572, "output_level": 6, "num_output_files": 1, "total_output_size": 12350344, "num_input_records": 12352, "num_output_records": 11862, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832533705, "job": 134, "event": "table_file_deletion", "file_number": 214}
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850832536225, "job": 134, "event": "table_file_deletion", "file_number": 212}
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:51.918485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:52.536450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:52.536459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:52.536463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:52.536467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:13:52.536471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:13:52 compute-0 nova_compute[247704]: 2026-01-31 09:13:52.641 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:53 compute-0 nova_compute[247704]: 2026-01-31 09:13:52.999 247708 DEBUG nova.network.neutron [req-cd0b0120-41c6-41cf-a3c3-626d705fe67d req-6a10b481-fa29-490a-9b24-bd73dabcf860 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updated VIF entry in instance network info cache for port 055cb245-bd1c-48e0-85fa-1ff3257b1383. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:13:53 compute-0 nova_compute[247704]: 2026-01-31 09:13:53.000 247708 DEBUG nova.network.neutron [req-cd0b0120-41c6-41cf-a3c3-626d705fe67d req-6a10b481-fa29-490a-9b24-bd73dabcf860 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updating instance_info_cache with network_info: [{"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:13:53 compute-0 nova_compute[247704]: 2026-01-31 09:13:53.030 247708 DEBUG oslo_concurrency.lockutils [req-cd0b0120-41c6-41cf-a3c3-626d705fe67d req-6a10b481-fa29-490a-9b24-bd73dabcf860 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:13:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:53.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:53 compute-0 ceph-mon[74496]: pgmap v4270: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:53.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:53 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4271: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:53 compute-0 nova_compute[247704]: 2026-01-31 09:13:53.813 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/455054707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:13:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/455054707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:13:54 compute-0 ceph-mon[74496]: pgmap v4271: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:54 compute-0 sudo[420210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:54 compute-0 sudo[420210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:54 compute-0 sudo[420210]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:54 compute-0 sudo[420235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:13:54 compute-0 sudo[420235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:54 compute-0 sudo[420235]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:54 compute-0 sudo[420260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:54 compute-0 sudo[420260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:54 compute-0 sudo[420260]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:55 compute-0 sudo[420285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:13:55 compute-0 sudo[420285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:55.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:55.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:55 compute-0 sudo[420285]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 09:13:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 09:13:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:13:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:13:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:13:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:13:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:13:55 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4272: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:13:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev bbe11e74-18e8-4fb0-9fb3-0e735b3b856a does not exist
Jan 31 09:13:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1656e965-c38a-4679-b0f0-545450f94c97 does not exist
Jan 31 09:13:55 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b2c7db85-2235-4772-9513-a14ebcdcff62 does not exist
Jan 31 09:13:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:13:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:13:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:13:55 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:13:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:13:55 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:13:55 compute-0 sudo[420342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:55 compute-0 sudo[420342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:55 compute-0 sudo[420342]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:56 compute-0 sudo[420367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:13:56 compute-0 sudo[420367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:56 compute-0 sudo[420367]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:56 compute-0 sudo[420392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:56 compute-0 sudo[420392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:56 compute-0 sudo[420392]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:56 compute-0 sudo[420417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:13:56 compute-0 sudo[420417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 09:13:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:13:56 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:13:56 compute-0 ceph-mon[74496]: pgmap v4272: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 09:13:56 compute-0 podman[420482]: 2026-01-31 09:13:56.42157911 +0000 UTC m=+0.110087674 container create 35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ellis, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:13:56 compute-0 podman[420482]: 2026-01-31 09:13:56.33317801 +0000 UTC m=+0.021686624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:13:56 compute-0 systemd[1]: Started libpod-conmon-35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc.scope.
Jan 31 09:13:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:13:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:13:56 compute-0 podman[420482]: 2026-01-31 09:13:56.647247329 +0000 UTC m=+0.335755923 container init 35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:13:56 compute-0 podman[420482]: 2026-01-31 09:13:56.655207911 +0000 UTC m=+0.343716485 container start 35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 09:13:56 compute-0 vibrant_ellis[420500]: 167 167
Jan 31 09:13:56 compute-0 systemd[1]: libpod-35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc.scope: Deactivated successfully.
Jan 31 09:13:56 compute-0 podman[420482]: 2026-01-31 09:13:56.774475845 +0000 UTC m=+0.462984419 container attach 35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 09:13:56 compute-0 podman[420482]: 2026-01-31 09:13:56.776402921 +0000 UTC m=+0.464911525 container died 35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:13:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:57.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-72b81edaff5f910a814e689d2c58a7772290fe1bfa834af118f044798157790b-merged.mount: Deactivated successfully.
Jan 31 09:13:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:57.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:13:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:13:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:13:57 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:13:57 compute-0 podman[420482]: 2026-01-31 09:13:57.607745058 +0000 UTC m=+1.296253632 container remove 35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:13:57 compute-0 systemd[1]: libpod-conmon-35ebed03ba4efc80b86ba89dd781b95375e2fb0c3f3cdca4a0fbdbf16d5938cc.scope: Deactivated successfully.
Jan 31 09:13:57 compute-0 nova_compute[247704]: 2026-01-31 09:13:57.644 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:57 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4273: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 68 op/s
Jan 31 09:13:57 compute-0 podman[420523]: 2026-01-31 09:13:57.812653957 +0000 UTC m=+0.097092012 container create b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 09:13:57 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:13:57.829 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '113'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:13:57 compute-0 podman[420523]: 2026-01-31 09:13:57.742878125 +0000 UTC m=+0.027316200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:13:57 compute-0 systemd[1]: Started libpod-conmon-b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852.scope.
Jan 31 09:13:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff4db115103390a5844df6823af64c8a40961f3db42457d545caa3054f64c67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff4db115103390a5844df6823af64c8a40961f3db42457d545caa3054f64c67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff4db115103390a5844df6823af64c8a40961f3db42457d545caa3054f64c67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff4db115103390a5844df6823af64c8a40961f3db42457d545caa3054f64c67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff4db115103390a5844df6823af64c8a40961f3db42457d545caa3054f64c67/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:13:57 compute-0 podman[420523]: 2026-01-31 09:13:57.975463681 +0000 UTC m=+0.259901756 container init b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:13:57 compute-0 podman[420523]: 2026-01-31 09:13:57.983174156 +0000 UTC m=+0.267612181 container start b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:13:58 compute-0 podman[420523]: 2026-01-31 09:13:58.169430295 +0000 UTC m=+0.453868340 container attach b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:13:58 compute-0 podman[420543]: 2026-01-31 09:13:58.271200648 +0000 UTC m=+0.317616606 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 09:13:58 compute-0 ceph-mon[74496]: pgmap v4273: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 68 op/s
Jan 31 09:13:58 compute-0 focused_galois[420540]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:13:58 compute-0 focused_galois[420540]: --> relative data size: 1.0
Jan 31 09:13:58 compute-0 focused_galois[420540]: --> All data devices are unavailable
Jan 31 09:13:58 compute-0 systemd[1]: libpod-b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852.scope: Deactivated successfully.
Jan 31 09:13:58 compute-0 podman[420523]: 2026-01-31 09:13:58.738382797 +0000 UTC m=+1.022820842 container died b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:13:58 compute-0 nova_compute[247704]: 2026-01-31 09:13:58.816 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ff4db115103390a5844df6823af64c8a40961f3db42457d545caa3054f64c67-merged.mount: Deactivated successfully.
Jan 31 09:13:58 compute-0 podman[420523]: 2026-01-31 09:13:58.969294443 +0000 UTC m=+1.253732498 container remove b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:13:58 compute-0 systemd[1]: libpod-conmon-b70e54b67200b3a8818c0b795f3271fb08d4a853f61a069d6628e33868942852.scope: Deactivated successfully.
Jan 31 09:13:59 compute-0 sudo[420417]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:59.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:59 compute-0 sudo[420597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:59 compute-0 sudo[420597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:59 compute-0 sudo[420597]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:59 compute-0 sudo[420622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:13:59 compute-0 sudo[420622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:59 compute-0 sudo[420622]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:59 compute-0 sudo[420647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:13:59 compute-0 sudo[420647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:59 compute-0 sudo[420647]: pam_unix(sudo:session): session closed for user root
Jan 31 09:13:59 compute-0 sudo[420672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:13:59 compute-0 sudo[420672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:13:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:13:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:13:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:13:59 compute-0 podman[420734]: 2026-01-31 09:13:59.545697874 +0000 UTC m=+0.042142077 container create 1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:13:59 compute-0 systemd[1]: Started libpod-conmon-1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1.scope.
Jan 31 09:13:59 compute-0 podman[420734]: 2026-01-31 09:13:59.525554769 +0000 UTC m=+0.021999012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:13:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:13:59 compute-0 podman[420734]: 2026-01-31 09:13:59.640495909 +0000 UTC m=+0.136940122 container init 1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 09:13:59 compute-0 podman[420734]: 2026-01-31 09:13:59.646192596 +0000 UTC m=+0.142636799 container start 1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:13:59 compute-0 podman[420734]: 2026-01-31 09:13:59.650423008 +0000 UTC m=+0.146867201 container attach 1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:13:59 compute-0 loving_faraday[420750]: 167 167
Jan 31 09:13:59 compute-0 systemd[1]: libpod-1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1.scope: Deactivated successfully.
Jan 31 09:13:59 compute-0 podman[420734]: 2026-01-31 09:13:59.651727489 +0000 UTC m=+0.148171702 container died 1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:13:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f80f362e11c6bc3908d6291f951cd79cffe39e9bd209e17a73a4c0d65628f03-merged.mount: Deactivated successfully.
Jan 31 09:13:59 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4274: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 517 KiB/s wr, 76 op/s
Jan 31 09:13:59 compute-0 podman[420734]: 2026-01-31 09:13:59.886435016 +0000 UTC m=+0.382879229 container remove 1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 09:13:59 compute-0 systemd[1]: libpod-conmon-1f090d7c1188643b2546ec1e6efee788af8f275d773a4bdf5f0fde7d3bebe2d1.scope: Deactivated successfully.
Jan 31 09:14:00 compute-0 podman[420775]: 2026-01-31 09:14:00.095620638 +0000 UTC m=+0.063776018 container create 3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 09:14:00 compute-0 ceph-mon[74496]: pgmap v4274: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 517 KiB/s wr, 76 op/s
Jan 31 09:14:00 compute-0 podman[420775]: 2026-01-31 09:14:00.065737858 +0000 UTC m=+0.033893288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:14:00 compute-0 systemd[1]: Started libpod-conmon-3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318.scope.
Jan 31 09:14:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a63a9a4f6dd1702e358ed6a07cdb98663ca5c7967d0c5340e2dab5eafe888eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a63a9a4f6dd1702e358ed6a07cdb98663ca5c7967d0c5340e2dab5eafe888eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a63a9a4f6dd1702e358ed6a07cdb98663ca5c7967d0c5340e2dab5eafe888eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a63a9a4f6dd1702e358ed6a07cdb98663ca5c7967d0c5340e2dab5eafe888eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:00 compute-0 podman[420775]: 2026-01-31 09:14:00.276961038 +0000 UTC m=+0.245116418 container init 3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_maxwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:14:00 compute-0 podman[420775]: 2026-01-31 09:14:00.286698213 +0000 UTC m=+0.254853613 container start 3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_maxwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 09:14:00 compute-0 podman[420775]: 2026-01-31 09:14:00.346692588 +0000 UTC m=+0.314847988 container attach 3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]: {
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:     "0": [
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:         {
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "devices": [
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "/dev/loop3"
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             ],
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "lv_name": "ceph_lv0",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "lv_size": "7511998464",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "name": "ceph_lv0",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "tags": {
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.cluster_name": "ceph",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.crush_device_class": "",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.encrypted": "0",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.osd_id": "0",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.type": "block",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:                 "ceph.vdo": "0"
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             },
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "type": "block",
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:             "vg_name": "ceph_vg0"
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:         }
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]:     ]
Jan 31 09:14:00 compute-0 beautiful_maxwell[420792]: }
Jan 31 09:14:01 compute-0 systemd[1]: libpod-3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318.scope: Deactivated successfully.
Jan 31 09:14:01 compute-0 podman[420775]: 2026-01-31 09:14:01.017552097 +0000 UTC m=+0.985707457 container died 3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 09:14:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:01.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a63a9a4f6dd1702e358ed6a07cdb98663ca5c7967d0c5340e2dab5eafe888eb-merged.mount: Deactivated successfully.
Jan 31 09:14:01 compute-0 podman[420775]: 2026-01-31 09:14:01.189648184 +0000 UTC m=+1.157803554 container remove 3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_maxwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:14:01 compute-0 systemd[1]: libpod-conmon-3811002c311d59440bbcc6e4e27d5d7a597e43aea8da3a6109d067689eb5f318.scope: Deactivated successfully.
Jan 31 09:14:01 compute-0 sudo[420672]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:01 compute-0 sudo[420817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:01 compute-0 sudo[420817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:01 compute-0 sudo[420817]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:01 compute-0 sudo[420842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:14:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:01.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:01 compute-0 sudo[420842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:01 compute-0 sudo[420842]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:01 compute-0 ovn_controller[149457]: 2026-01-31T09:14:01Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:d9:d5 10.100.0.9
Jan 31 09:14:01 compute-0 ovn_controller[149457]: 2026-01-31T09:14:01Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:d9:d5 10.100.0.9
Jan 31 09:14:01 compute-0 sudo[420867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:01 compute-0 sudo[420867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:01 compute-0 sudo[420867]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:01 compute-0 sudo[420892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:14:01 compute-0 sudo[420892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:01 compute-0 podman[420955]: 2026-01-31 09:14:01.791723855 +0000 UTC m=+0.083915123 container create 5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:14:01 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4275: 305 pgs: 305 active+clean; 184 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.6 MiB/s wr, 67 op/s
Jan 31 09:14:01 compute-0 podman[420955]: 2026-01-31 09:14:01.731132374 +0000 UTC m=+0.023323662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:14:01 compute-0 systemd[1]: Started libpod-conmon-5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894.scope.
Jan 31 09:14:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:14:01 compute-0 podman[420955]: 2026-01-31 09:14:01.923820739 +0000 UTC m=+0.216012037 container init 5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:14:01 compute-0 podman[420955]: 2026-01-31 09:14:01.929742411 +0000 UTC m=+0.221933679 container start 5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 09:14:01 compute-0 nostalgic_tharp[420972]: 167 167
Jan 31 09:14:01 compute-0 systemd[1]: libpod-5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894.scope: Deactivated successfully.
Jan 31 09:14:01 compute-0 ceph-mon[74496]: pgmap v4275: 305 pgs: 305 active+clean; 184 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.6 MiB/s wr, 67 op/s
Jan 31 09:14:01 compute-0 podman[420955]: 2026-01-31 09:14:01.973103596 +0000 UTC m=+0.265294914 container attach 5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:14:01 compute-0 podman[420955]: 2026-01-31 09:14:01.973781383 +0000 UTC m=+0.265972681 container died 5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccf3f7036f22be8256a4ef426db1162eefc122270bf248e3e1a8862084e5e617-merged.mount: Deactivated successfully.
Jan 31 09:14:02 compute-0 nova_compute[247704]: 2026-01-31 09:14:02.646 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:02 compute-0 podman[420955]: 2026-01-31 09:14:02.975531016 +0000 UTC m=+1.267722334 container remove 5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 09:14:03 compute-0 systemd[1]: libpod-conmon-5e5df91c0d74ad6eded7454decf3378cd55eddbf0a9b91ea70bb5226e0c14894.scope: Deactivated successfully.
Jan 31 09:14:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:03.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:03 compute-0 podman[420997]: 2026-01-31 09:14:03.128779759 +0000 UTC m=+0.042311951 container create 72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:14:03 compute-0 systemd[1]: Started libpod-conmon-72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f.scope.
Jan 31 09:14:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:14:03 compute-0 podman[420997]: 2026-01-31 09:14:03.109484914 +0000 UTC m=+0.023017096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47f2aa8f4b4f61acf85f3364a448c003f38f89c09252f9bc0fab49cac30f247/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47f2aa8f4b4f61acf85f3364a448c003f38f89c09252f9bc0fab49cac30f247/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47f2aa8f4b4f61acf85f3364a448c003f38f89c09252f9bc0fab49cac30f247/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47f2aa8f4b4f61acf85f3364a448c003f38f89c09252f9bc0fab49cac30f247/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:14:03 compute-0 podman[420997]: 2026-01-31 09:14:03.243482803 +0000 UTC m=+0.157014995 container init 72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 09:14:03 compute-0 podman[420997]: 2026-01-31 09:14:03.24917139 +0000 UTC m=+0.162703542 container start 72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 09:14:03 compute-0 podman[420997]: 2026-01-31 09:14:03.285412624 +0000 UTC m=+0.198944776 container attach 72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:14:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:03 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4276: 305 pgs: 305 active+clean; 195 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 31 09:14:03 compute-0 nova_compute[247704]: 2026-01-31 09:14:03.819 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:03 compute-0 ceph-mon[74496]: pgmap v4276: 305 pgs: 305 active+clean; 195 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 31 09:14:04 compute-0 sleepy_noether[421013]: {
Jan 31 09:14:04 compute-0 sleepy_noether[421013]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:14:04 compute-0 sleepy_noether[421013]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:14:04 compute-0 sleepy_noether[421013]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:14:04 compute-0 sleepy_noether[421013]:         "osd_id": 0,
Jan 31 09:14:04 compute-0 sleepy_noether[421013]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:14:04 compute-0 sleepy_noether[421013]:         "type": "bluestore"
Jan 31 09:14:04 compute-0 sleepy_noether[421013]:     }
Jan 31 09:14:04 compute-0 sleepy_noether[421013]: }
Jan 31 09:14:04 compute-0 systemd[1]: libpod-72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f.scope: Deactivated successfully.
Jan 31 09:14:04 compute-0 podman[420997]: 2026-01-31 09:14:04.125404928 +0000 UTC m=+1.038937090 container died 72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 09:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47f2aa8f4b4f61acf85f3364a448c003f38f89c09252f9bc0fab49cac30f247-merged.mount: Deactivated successfully.
Jan 31 09:14:04 compute-0 podman[420997]: 2026-01-31 09:14:04.180803954 +0000 UTC m=+1.094336106 container remove 72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 09:14:04 compute-0 systemd[1]: libpod-conmon-72a25c1f0f6ce4bda1698a36a0625d1da9f27ec4b39313f894a62af1170fe27f.scope: Deactivated successfully.
Jan 31 09:14:04 compute-0 sudo[420892]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:14:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:14:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:14:04 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:14:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev c2d8886e-0981-448c-8575-ea2c3f304cbb does not exist
Jan 31 09:14:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 174ed22a-9aa0-4313-9ca2-fc2715fce97b does not exist
Jan 31 09:14:04 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e3f52d17-3511-4641-a352-113cff60b9fe does not exist
Jan 31 09:14:04 compute-0 sudo[421047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:04 compute-0 sudo[421047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:04 compute-0 sudo[421047]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:04 compute-0 sudo[421073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:14:04 compute-0 sudo[421073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:04 compute-0 sudo[421073]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:05.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:14:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:14:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:05.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:05 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4277: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 09:14:06 compute-0 ceph-mon[74496]: pgmap v4277: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 09:14:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:07.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:07.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:07 compute-0 nova_compute[247704]: 2026-01-31 09:14:07.650 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:07 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4278: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 09:14:08 compute-0 ceph-mon[74496]: pgmap v4278: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 09:14:08 compute-0 nova_compute[247704]: 2026-01-31 09:14:08.822 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:09.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:09 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4279: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:14:09 compute-0 ceph-mon[74496]: pgmap v4279: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 264 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 09:14:10 compute-0 sudo[421100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:10 compute-0 sudo[421100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:10 compute-0 sudo[421100]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:10 compute-0 sudo[421125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:10 compute-0 sudo[421125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:10 compute-0 sudo[421125]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:11.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:14:11.248 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:14:11.249 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:14:11.250 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:11.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:11 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4280: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 1.7 MiB/s wr, 52 op/s
Jan 31 09:14:11 compute-0 ceph-mon[74496]: pgmap v4280: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 1.7 MiB/s wr, 52 op/s
Jan 31 09:14:12 compute-0 nova_compute[247704]: 2026-01-31 09:14:12.652 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:13.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:13.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:13 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4281: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 125 KiB/s rd, 607 KiB/s wr, 31 op/s
Jan 31 09:14:13 compute-0 nova_compute[247704]: 2026-01-31 09:14:13.824 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:14 compute-0 ceph-mon[74496]: pgmap v4281: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 125 KiB/s rd, 607 KiB/s wr, 31 op/s
Jan 31 09:14:14 compute-0 podman[421153]: 2026-01-31 09:14:14.901342536 +0000 UTC m=+0.075016889 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:14:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:15.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:15.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:15 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4282: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 74 KiB/s rd, 48 KiB/s wr, 11 op/s
Jan 31 09:14:15 compute-0 ceph-mon[74496]: pgmap v4282: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 74 KiB/s rd, 48 KiB/s wr, 11 op/s
Jan 31 09:14:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:17.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:17 compute-0 nova_compute[247704]: 2026-01-31 09:14:17.655 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:17 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4283: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 31 09:14:17 compute-0 ceph-mon[74496]: pgmap v4283: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 31 09:14:18 compute-0 nova_compute[247704]: 2026-01-31 09:14:18.826 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:19.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:19.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:19 compute-0 nova_compute[247704]: 2026-01-31 09:14:19.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:19 compute-0 nova_compute[247704]: 2026-01-31 09:14:19.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:14:19 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4284: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 31 09:14:20 compute-0 ovn_controller[149457]: 2026-01-31T09:14:20Z|00901|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:14:20
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images']
Jan 31 09:14:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:14:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:21.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:14:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:21.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:21 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:21 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4285: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:14:21 compute-0 ceph-mon[74496]: pgmap v4284: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 31 09:14:22 compute-0 nova_compute[247704]: 2026-01-31 09:14:22.564 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:22 compute-0 nova_compute[247704]: 2026-01-31 09:14:22.658 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:23.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:23 compute-0 ceph-mon[74496]: pgmap v4285: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:14:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:23.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:23 compute-0 nova_compute[247704]: 2026-01-31 09:14:23.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:23 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4286: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 09:14:23 compute-0 nova_compute[247704]: 2026-01-31 09:14:23.829 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:24 compute-0 ceph-mon[74496]: pgmap v4286: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 31 09:14:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:25.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:25.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Jan 31 09:14:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Jan 31 09:14:25 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4287: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 31 09:14:25 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Jan 31 09:14:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Jan 31 09:14:26 compute-0 ceph-mon[74496]: pgmap v4287: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 31 09:14:26 compute-0 ceph-mon[74496]: osdmap e424: 3 total, 3 up, 3 in
Jan 31 09:14:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/620014776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Jan 31 09:14:26 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Jan 31 09:14:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:27.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:27 compute-0 nova_compute[247704]: 2026-01-31 09:14:27.660 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:27 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4290: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s
Jan 31 09:14:28 compute-0 ceph-mon[74496]: osdmap e425: 3 total, 3 up, 3 in
Jan 31 09:14:28 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3477090387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:28 compute-0 ceph-mon[74496]: pgmap v4290: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 4 op/s
Jan 31 09:14:28 compute-0 nova_compute[247704]: 2026-01-31 09:14:28.832 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:28 compute-0 podman[421180]: 2026-01-31 09:14:28.923139879 +0000 UTC m=+0.095005521 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:14:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:14:28.943 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=114, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=113) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:14:28 compute-0 nova_compute[247704]: 2026-01-31 09:14:28.943 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:28 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:14:28.945 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:14:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:29.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:29.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:29 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4291: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 2.5 KiB/s wr, 13 op/s
Jan 31 09:14:29 compute-0 ceph-mon[74496]: pgmap v4291: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 2.5 KiB/s wr, 13 op/s
Jan 31 09:14:30 compute-0 sudo[421207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:30 compute-0 sudo[421207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:30 compute-0 sudo[421207]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:30 compute-0 sudo[421232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:30 compute-0 sudo[421232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:30 compute-0 sudo[421232]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/658694514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:31.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:31.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:31 compute-0 nova_compute[247704]: 2026-01-31 09:14:31.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:31 compute-0 nova_compute[247704]: 2026-01-31 09:14:31.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:14:31 compute-0 nova_compute[247704]: 2026-01-31 09:14:31.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:14:31 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4292: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 2.0 KiB/s wr, 24 op/s
Jan 31 09:14:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3177446391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:32 compute-0 ceph-mon[74496]: pgmap v4292: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 2.0 KiB/s wr, 24 op/s
Jan 31 09:14:32 compute-0 nova_compute[247704]: 2026-01-31 09:14:32.160 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:14:32 compute-0 nova_compute[247704]: 2026-01-31 09:14:32.161 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:14:32 compute-0 nova_compute[247704]: 2026-01-31 09:14:32.161 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 09:14:32 compute-0 nova_compute[247704]: 2026-01-31 09:14:32.161 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f82c07b4-3acf-4385-9b7a-8a49da3cd55a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:14:32 compute-0 nova_compute[247704]: 2026-01-31 09:14:32.663 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:33.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:33.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:33 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4293: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 5.7 KiB/s wr, 20 op/s
Jan 31 09:14:33 compute-0 nova_compute[247704]: 2026-01-31 09:14:33.833 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.117 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updating instance_info_cache with network_info: [{"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.143 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.144 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.144 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.144 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:34 compute-0 ceph-mon[74496]: pgmap v4293: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 5.7 KiB/s wr, 20 op/s
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.175 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.176 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.176 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.176 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.176 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:14:34 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Cumulative writes: 21K writes, 93K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 21K writes, 21K syncs, 1.00 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1554 writes, 6866 keys, 1553 commit groups, 1.0 writes per commit group, ingest: 10.60 MB, 0.02 MB/s
                                           Interval WAL: 1554 writes, 1553 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     42.9      2.95              0.33        67    0.044       0      0       0.0       0.0
                                             L6      1/0   11.78 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6     65.2     56.1     12.60              1.76        66    0.191    555K    35K       0.0       0.0
                                            Sum      1/0   11.78 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     52.8     53.6     15.56              2.09       133    0.117    555K    35K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     53.0     52.6      1.68              0.19        12    0.140     72K   3086       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0     65.2     56.1     12.60              1.76        66    0.191    555K    35K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.0      2.95              0.33        66    0.045       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.1      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.124, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.81 GB write, 0.11 MB/s write, 0.80 GB read, 0.11 MB/s read, 15.6 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5635b3b9d1f0#2 capacity: 304.00 MB usage: 87.47 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000528 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5061,83.56 MB,27.4867%) FilterBlock(134,1.50 MB,0.493215%) IndexBlock(134,2.41 MB,0.79165%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 09:14:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:14:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583078624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.584 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.902 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:14:34 compute-0 nova_compute[247704]: 2026-01-31 09:14:34.903 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.041 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.042 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3979MB free_disk=20.988109588623047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.043 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.043 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:14:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.113 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance f82c07b4-3acf-4385-9b7a-8a49da3cd55a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.113 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.113 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.149 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:14:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1583078624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:35.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:14:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282541616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.569 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.575 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.594 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.621 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:14:35 compute-0 nova_compute[247704]: 2026-01-31 09:14:35.621 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:14:35 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4294: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 6.9 KiB/s wr, 19 op/s
Jan 31 09:14:35 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:14:35.947 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '114'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:14:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3282541616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:36 compute-0 ceph-mon[74496]: pgmap v4294: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 6.9 KiB/s wr, 19 op/s
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.178915410385259e-06 of space, bias 1.0, pg target 0.0024536746231155777 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432773677414852 of space, bias 1.0, pg target 1.298321032244556 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:14:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:14:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:37.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:37 compute-0 nova_compute[247704]: 2026-01-31 09:14:37.665 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4295: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 6.1 KiB/s wr, 17 op/s
Jan 31 09:14:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1936571576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:14:38 compute-0 nova_compute[247704]: 2026-01-31 09:14:38.836 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:39 compute-0 nova_compute[247704]: 2026-01-31 09:14:39.038 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:39 compute-0 ceph-mon[74496]: pgmap v4295: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 6.1 KiB/s wr, 17 op/s
Jan 31 09:14:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:39.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4296: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 5.5 KiB/s wr, 16 op/s
Jan 31 09:14:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:41 compute-0 ceph-mon[74496]: pgmap v4296: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 5.5 KiB/s wr, 16 op/s
Jan 31 09:14:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:41.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:41 compute-0 nova_compute[247704]: 2026-01-31 09:14:41.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4297: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 5.8 KiB/s wr, 19 op/s
Jan 31 09:14:42 compute-0 nova_compute[247704]: 2026-01-31 09:14:42.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:14:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:14:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1294371103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:14:42 compute-0 nova_compute[247704]: 2026-01-31 09:14:42.668 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:43.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:43 compute-0 ceph-mon[74496]: pgmap v4297: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 5.8 KiB/s wr, 19 op/s
Jan 31 09:14:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1294371103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:14:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:43.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:43 compute-0 nova_compute[247704]: 2026-01-31 09:14:43.839 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4298: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 5.3 KiB/s wr, 13 op/s
Jan 31 09:14:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:45.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:45.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:45 compute-0 ceph-mon[74496]: pgmap v4298: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 5.3 KiB/s wr, 13 op/s
Jan 31 09:14:45 compute-0 podman[421310]: 2026-01-31 09:14:45.906709236 +0000 UTC m=+0.070159952 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 09:14:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4299: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Jan 31 09:14:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2205534106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:14:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:47.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:47.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:47 compute-0 nova_compute[247704]: 2026-01-31 09:14:47.671 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4300: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Jan 31 09:14:48 compute-0 ceph-mon[74496]: pgmap v4299: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 2.7 KiB/s wr, 12 op/s
Jan 31 09:14:48 compute-0 nova_compute[247704]: 2026-01-31 09:14:48.842 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.005000117s ======
Jan 31 09:14:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:49.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000117s
Jan 31 09:14:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:14:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:49.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:14:49 compute-0 ceph-mon[74496]: pgmap v4300: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Jan 31 09:14:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4301: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 15 KiB/s wr, 16 op/s
Jan 31 09:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:14:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:14:50 compute-0 sshd-session[421331]: Accepted publickey for zuul from 192.168.122.10 port 49084 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 09:14:50 compute-0 systemd-logind[816]: New session 64 of user zuul.
Jan 31 09:14:50 compute-0 systemd[1]: Started Session 64 of User zuul.
Jan 31 09:14:50 compute-0 sshd-session[421331]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:14:50 compute-0 sudo[421334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:50 compute-0 sudo[421334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:50 compute-0 sudo[421334]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:50 compute-0 sudo[421360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:14:50 compute-0 sudo[421360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:14:50 compute-0 sudo[421360]: pam_unix(sudo:session): session closed for user root
Jan 31 09:14:50 compute-0 sudo[421366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 09:14:50 compute-0 sudo[421366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:14:50 compute-0 ceph-mon[74496]: pgmap v4301: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 15 KiB/s wr, 16 op/s
Jan 31 09:14:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:51.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:51.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4302: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 24 KiB/s wr, 32 op/s
Jan 31 09:14:52 compute-0 nova_compute[247704]: 2026-01-31 09:14:52.675 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:53 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37467 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:53.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:53 compute-0 ceph-mon[74496]: pgmap v4302: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 24 KiB/s wr, 32 op/s
Jan 31 09:14:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:53 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51443 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:53 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37482 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:53 compute-0 nova_compute[247704]: 2026-01-31 09:14:53.843 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:54 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51467 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 09:14:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176635303' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4303: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 23 KiB/s wr, 49 op/s
Jan 31 09:14:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 09:14:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3681007901' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:54 compute-0 ceph-mon[74496]: from='client.37467 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2147898005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:14:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2147898005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:14:54 compute-0 ceph-mon[74496]: from='client.51443 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:54 compute-0 ceph-mon[74496]: from='client.37482 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/176635303' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:55.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:55.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:55 compute-0 ceph-mon[74496]: from='client.51467 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:55 compute-0 ceph-mon[74496]: pgmap v4303: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 23 KiB/s wr, 49 op/s
Jan 31 09:14:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3681007901' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4304: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:14:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:14:57 compute-0 ovs-vsctl[421682]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 09:14:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:57.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:57 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37500 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:57.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:57 compute-0 nova_compute[247704]: 2026-01-31 09:14:57.679 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:57 compute-0 ceph-mon[74496]: pgmap v4304: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:14:57 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46444 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:57 compute-0 virtqemud[247621]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 09:14:57 compute-0 virtqemud[247621]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 09:14:57 compute-0 virtqemud[247621]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 09:14:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4305: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:14:58 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: cache status {prefix=cache status} (starting...)
Jan 31 09:14:58 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:14:58 compute-0 lvm[422028]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 09:14:58 compute-0 lvm[422028]: VG ceph_vg0 finished
Jan 31 09:14:58 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: client ls {prefix=client ls} (starting...)
Jan 31 09:14:58 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:14:58 compute-0 nova_compute[247704]: 2026-01-31 09:14:58.844 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:14:58 compute-0 ceph-mon[74496]: from='client.37500 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:58 compute-0 ceph-mon[74496]: from='client.46444 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/840669752' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37506 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51488 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:14:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:59.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:14:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:14:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:14:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:59.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:14:59 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51506 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 09:14:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613753304' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37518 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:14:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 09:14:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:14:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:14:59 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1389096197' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:14:59 compute-0 ceph-mon[74496]: pgmap v4305: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:14:59 compute-0 ceph-mon[74496]: from='client.37506 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mon[74496]: from='client.51488 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mon[74496]: from='client.51506 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2613753304' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mon[74496]: from='client.37518 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mon[74496]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1754209629' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:14:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1389096197' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:14:59 compute-0 podman[422193]: 2026-01-31 09:14:59.936732358 +0000 UTC m=+0.103898915 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 09:14:59 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:15:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4306: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:15:00 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 09:15:00 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:15:00 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51533 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:00 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:00 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:00.243+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:00 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37554 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:00 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:00 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:00.326+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:00 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 09:15:00 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:15:00 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: ops {prefix=ops} (starting...)
Jan 31 09:15:00 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:15:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 09:15:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347420494' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:15:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 09:15:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990179853' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3081761994' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: pgmap v4306: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 78 op/s
Jan 31 09:15:01 compute-0 ceph-mon[74496]: from='client.51533 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3338682382' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: from='client.37554 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2102815293' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/347420494' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1846108260' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2990179853' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:15:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:01.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:01 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37584 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 09:15:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292535670' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: session ls {prefix=session ls} (starting...)
Jan 31 09:15:01 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:15:01 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: status {prefix=status} (starting...)
Jan 31 09:15:01 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51581 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:01.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:01 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37596 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 09:15:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341601396' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:01 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51602 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 09:15:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2703213647' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4307: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.1 KiB/s wr, 72 op/s
Jan 31 09:15:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 09:15:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3053518246' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2417167780' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.37584 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/109357756' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1292535670' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.51581 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.37596 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2668804060' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3341601396' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.51602 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2703213647' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1922402219' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 09:15:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 09:15:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885162426' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:15:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 09:15:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3994025984' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:02 compute-0 nova_compute[247704]: 2026-01-31 09:15:02.680 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:02 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37644 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:02 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:02.866+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:15:02 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:15:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 09:15:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604331417' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51668 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:03 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:03.053+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:15:03 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:15:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:03.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:03 compute-0 ceph-mon[74496]: pgmap v4307: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.1 KiB/s wr, 72 op/s
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3053518246' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3047504095' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/885162426' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3175809376' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3994025984' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1890310869' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/549006704' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2604331417' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 09:15:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840059746' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 09:15:03 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183632181' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 09:15:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:03.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 09:15:03 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51692 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:03 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37686 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:03 compute-0 nova_compute[247704]: 2026-01-31 09:15:03.847 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:04 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46510 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51716 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4308: 305 pgs: 305 active+clean; 213 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 470 KiB/s wr, 90 op/s
Jan 31 09:15:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 09:15:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2641837516' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37695 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46519 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37710 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.37644 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.51668 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3547050634' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/840059746' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/183632181' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3062583683' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.51692 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/488891734' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4009751160' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 09:15:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:15:04 compute-0 sudo[422955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:04 compute-0 sudo[422955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:04 compute-0 sudo[422955]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:04 compute-0 sudo[422990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:15:04 compute-0 sudo[422990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:04 compute-0 sudo[422990]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:04 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51740 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37722 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:04 compute-0 sudo[423023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:04 compute-0 sudo[423023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:04 compute-0 sudo[423023]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 09:15:04 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476336237' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:04 compute-0 sudo[423054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:15:04 compute-0 sudo[423054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379895808 unmapped: 52887552 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a64f5000/0x0/0x1bfc00000, data 0x3149687/0x3343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:49.954183+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:50.954365+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:51.954526+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a64f5000/0x0/0x1bfc00000, data 0x3149687/0x3343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:52.954672+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:53.954843+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4177149 data_alloc: 218103808 data_used: 18423808
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a64f5000/0x0/0x1bfc00000, data 0x3149687/0x3343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:54.955074+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:55.955240+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:56.955394+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:57.955560+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a64f5000/0x0/0x1bfc00000, data 0x3149687/0x3343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:58.955760+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4177149 data_alloc: 218103808 data_used: 18423808
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:59.955970+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:00.956170+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:01.956345+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:02.956513+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:03.956695+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4177149 data_alloc: 218103808 data_used: 18423808
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a64f5000/0x0/0x1bfc00000, data 0x3149687/0x3343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:04.956906+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:05.957225+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:06.957419+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a64f5000/0x0/0x1bfc00000, data 0x3149687/0x3343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:07.957586+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:08.957728+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4177149 data_alloc: 218103808 data_used: 18423808
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:09.957920+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:10.958098+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a64f5000/0x0/0x1bfc00000, data 0x3149687/0x3343000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379904000 unmapped: 52879360 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9da029c20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deb9a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9deb9a000 session 0x55f9dcc510e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9da6f2f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9dcc77e00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.734128952s of 29.934331894s, submitted: 47
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae25a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:11.959297+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9db568f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:12.960222+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:13.961136+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5f7e000/0x0/0x1bfc00000, data 0x36c56e9/0x38c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226612 data_alloc: 218103808 data_used: 18423808
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5f7e000/0x0/0x1bfc00000, data 0x36c56e9/0x38c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:14.962219+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:15.963000+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:16.963700+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:17.964213+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:18.964627+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226612 data_alloc: 218103808 data_used: 18423808
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5f7e000/0x0/0x1bfc00000, data 0x36c56e9/0x38c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:19.965035+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:20.965372+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:21.965744+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:22.965980+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5f7e000/0x0/0x1bfc00000, data 0x36c56e9/0x38c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:23.966219+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4226612 data_alloc: 218103808 data_used: 18423808
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:24.966485+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:25.966671+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9dcd665a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9dcce9c20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9dc3734a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:26.966883+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deb9a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9deb9a000 session 0x55f9dcd670e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.454031944s of 15.581809044s, submitted: 34
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380043264 unmapped: 52740096 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc685860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9dcae34a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9dcae3a40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:27.967020+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9dc376960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f5000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9db4f5000 session 0x55f9da74e000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5f7e000/0x0/0x1bfc00000, data 0x36c56e9/0x38c0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:28.967209+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4261805 data_alloc: 218103808 data_used: 18427904
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:29.967434+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:30.967653+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:31.967884+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afd000/0x0/0x1bfc00000, data 0x3b456f8/0x3d41000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:32.968030+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:33.968230+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4264269 data_alloc: 218103808 data_used: 18415616
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:34.968491+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:35.968673+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:36.968865+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380059648 unmapped: 52723712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:37.969011+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afd000/0x0/0x1bfc00000, data 0x3b456f8/0x3d41000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380067840 unmapped: 52715520 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:38.969177+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4264269 data_alloc: 218103808 data_used: 18415616
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 52707328 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:39.969434+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 52707328 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:40.969613+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 52707328 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afd000/0x0/0x1bfc00000, data 0x3b456f8/0x3d41000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:41.969810+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afd000/0x0/0x1bfc00000, data 0x3b456f8/0x3d41000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 52707328 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:42.969958+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380076032 unmapped: 52707328 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.132957458s of 16.235628128s, submitted: 26
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd0e5a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:43.970305+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afc000/0x0/0x1bfc00000, data 0x3b4571b/0x3d42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4263634 data_alloc: 218103808 data_used: 18415616
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379977728 unmapped: 52805632 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:44.970825+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379977728 unmapped: 52805632 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:45.971264+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379977728 unmapped: 52805632 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:46.971455+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 52797440 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afc000/0x0/0x1bfc00000, data 0x3b4571b/0x3d42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:47.971639+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379985920 unmapped: 52797440 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:48.971831+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4279954 data_alloc: 218103808 data_used: 20635648
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52789248 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:49.971990+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52789248 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9dcc5d4a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:50.972143+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52789248 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:51.972316+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9dcc61680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 379994112 unmapped: 52789248 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:52.972563+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afc000/0x0/0x1bfc00000, data 0x3b4571b/0x3d42000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dccf4f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc30c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcc30c00 session 0x55f9dcae23c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380010496 unmapped: 52772864 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:53.972725+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.630733490s of 10.655060768s, submitted: 5
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4297076 data_alloc: 234881024 data_used: 22753280
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380018688 unmapped: 52764672 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:54.972940+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380018688 unmapped: 52764672 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:55.973119+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 381337600 unmapped: 51445760 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:56.973312+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 381337600 unmapped: 51445760 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:57.973452+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afb000/0x0/0x1bfc00000, data 0x3b4572b/0x3d43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 381337600 unmapped: 51445760 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:58.973730+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4330676 data_alloc: 234881024 data_used: 26546176
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afb000/0x0/0x1bfc00000, data 0x3b4572b/0x3d43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 381337600 unmapped: 51445760 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:59.973894+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5afb000/0x0/0x1bfc00000, data 0x3b4572b/0x3d43000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [0,0,0,0,3,6])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383131648 unmapped: 49651712 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:00.974106+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382517248 unmapped: 50266112 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:01.974328+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382533632 unmapped: 50249728 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:02.974492+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382533632 unmapped: 50249728 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:03.974773+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a4e5b000/0x0/0x1bfc00000, data 0x47e472b/0x49e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4431288 data_alloc: 234881024 data_used: 26603520
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382533632 unmapped: 50249728 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:04.975036+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382533632 unmapped: 50249728 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:05.975207+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.633468628s of 12.290964127s, submitted: 119
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383754240 unmapped: 49029120 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:06.975442+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384688128 unmapped: 48095232 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9dc8c12c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:07.975601+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a460b000/0x0/0x1bfc00000, data 0x503572b/0x5233000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384688128 unmapped: 48095232 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a460b000/0x0/0x1bfc00000, data 0x503572b/0x5233000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [0,0,0,0,0,1])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:08.975787+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a45ff000/0x0/0x1bfc00000, data 0x504172b/0x523f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9dc86bc20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4324472 data_alloc: 234881024 data_used: 22544384
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384704512 unmapped: 48078848 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:09.975949+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384704512 unmapped: 48078848 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:10.976217+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384704512 unmapped: 48078848 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:11.976488+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384704512 unmapped: 48078848 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:12.976702+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384704512 unmapped: 48078848 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:13.977254+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4324472 data_alloc: 234881024 data_used: 22544384
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a45ff000/0x0/0x1bfc00000, data 0x3e026a6/0x3ffe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384704512 unmapped: 48078848 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:14.977478+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9d9e3c3c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9dccf45a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:15.977717+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:16.977915+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:17.978122+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5840000/0x0/0x1bfc00000, data 0x3e026a6/0x3ffe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:18.978364+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321208 data_alloc: 234881024 data_used: 22544384
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:19.978529+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:20.978752+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:21.978924+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5840000/0x0/0x1bfc00000, data 0x3e026a6/0x3ffe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:22.979164+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:23.979436+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a000 session 0x55f9dc685e00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.509607315s of 18.075372696s, submitted: 74
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc3763c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4321076 data_alloc: 234881024 data_used: 22544384
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384638976 unmapped: 48144384 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:24.979712+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae25a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5840000/0x0/0x1bfc00000, data 0x3e026a6/0x3ffe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 54910976 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:25.979924+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 54910976 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:26.980200+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 54910976 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:27.980485+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 54910976 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:28.980645+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4169326 data_alloc: 218103808 data_used: 15421440
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 54910976 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:29.980819+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 54910976 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:30.981130+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 54910976 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:31.981323+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 54894592 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:32.981603+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 54894592 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:33.981786+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4169138 data_alloc: 218103808 data_used: 15040512
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 54894592 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:34.982066+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 54894592 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:35.982248+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 54894592 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:36.982460+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.392737389s of 12.475758553s, submitted: 28
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 54894592 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:37.982600+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 54894592 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:38.982847+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4169230 data_alloc: 218103808 data_used: 15040512
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 54886400 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:39.983049+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 54886400 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:40.983199+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 54886400 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:41.983347+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 54886400 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:42.983536+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 54886400 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:43.983706+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4169230 data_alloc: 218103808 data_used: 15040512
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 54878208 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:44.983982+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 54878208 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:45.984159+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 54878208 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:46.984326+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 54878208 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:47.984482+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 54878208 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:48.984669+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4169230 data_alloc: 218103808 data_used: 15040512
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 54878208 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:49.984826+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377913344 unmapped: 54870016 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.814249039s of 13.821798325s, submitted: 2
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9dc91e780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9dc3772c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:50.984961+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a658b000/0x0/0x1bfc00000, data 0x30b76a6/0x32b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 374497280 unmapped: 58286080 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:51.985168+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9da6f2f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57204736 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:52.985341+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57204736 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:53.985478+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4042824 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57204736 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:54.985771+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57204736 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:55.985953+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7246000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57204736 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:56.986134+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57204736 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:57.986290+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57204736 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:58.986437+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 54K writes, 205K keys, 54K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 54K writes, 19K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3480 writes, 13K keys, 3480 commit groups, 1.0 writes per commit group, ingest: 14.24 MB, 0.02 MB/s
                                           Interval WAL: 3480 writes, 1349 syncs, 2.58 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55f9d8323610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4042824 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57204736 heap: 432783360 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:59.986599+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dcca4f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7246000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 60948480 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:00.986784+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 60948480 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:01.986974+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 60948480 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:02.987186+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 60948480 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:03.987398+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4114096 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 60948480 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:04.987629+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 60948480 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:05.987794+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376045568 unmapped: 60940288 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:06.987959+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376045568 unmapped: 60940288 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:07.988171+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376053760 unmapped: 60932096 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:08.988338+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4114096 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376053760 unmapped: 60932096 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:09.988545+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376053760 unmapped: 60932096 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:10.988744+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376053760 unmapped: 60932096 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:11.988924+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376053760 unmapped: 60932096 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:12.989154+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376053760 unmapped: 60932096 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:13.989331+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4114096 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 60923904 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:14.989506+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 60923904 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:15.989725+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 60923904 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:16.989888+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 60923904 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:17.990225+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 60923904 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:18.990503+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4114096 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 60923904 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:19.990698+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 60923904 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:20.990937+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376061952 unmapped: 60923904 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:21.991564+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 60915712 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:22.992273+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376070144 unmapped: 60915712 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:23.992499+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.243030548s of 33.400207520s, submitted: 38
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc74bc20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4115549 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 60907520 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:24.993131+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 60907520 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:25.993545+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [1])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:26.993738+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:27.994132+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:28.994524+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4181469 data_alloc: 218103808 data_used: 20238336
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:29.994838+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:30.995192+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:31.995492+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:32.995823+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:33.996213+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4181469 data_alloc: 218103808 data_used: 20238336
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:34.996805+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a696f000/0x0/0x1bfc00000, data 0x2cd5687/0x2ecf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:35.997223+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 377298944 unmapped: 59686912 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.596060753s of 12.614318848s, submitted: 6
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:36.997584+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 56877056 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:37.997854+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380403712 unmapped: 56582144 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6260000/0x0/0x1bfc00000, data 0x33e4687/0x35de000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:38.998126+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380411904 unmapped: 56573952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240835 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:39.998258+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380411904 unmapped: 56573952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:40.998636+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380411904 unmapped: 56573952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:41.998786+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380411904 unmapped: 56573952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:42.999024+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380411904 unmapped: 56573952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:43.999230+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380411904 unmapped: 56573952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240835 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:44.999558+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380411904 unmapped: 56573952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:45.999769+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380411904 unmapped: 56573952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:47.000123+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380420096 unmapped: 56565760 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:48.000363+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380420096 unmapped: 56565760 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:49.000639+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380420096 unmapped: 56565760 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240835 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:50.000768+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380420096 unmapped: 56565760 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:51.001159+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380420096 unmapped: 56565760 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:52.001428+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380420096 unmapped: 56565760 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:53.001648+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380420096 unmapped: 56565760 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:54.001968+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380420096 unmapped: 56565760 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240835 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:55.002271+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380428288 unmapped: 56557568 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:56.002517+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380428288 unmapped: 56557568 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:57.002759+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380428288 unmapped: 56557568 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:58.003043+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380428288 unmapped: 56557568 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:59.003211+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380428288 unmapped: 56557568 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240835 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:00.003412+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380428288 unmapped: 56557568 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:01.003724+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380428288 unmapped: 56557568 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:02.003897+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380428288 unmapped: 56557568 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:03.004189+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380436480 unmapped: 56549376 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:04.004405+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380436480 unmapped: 56549376 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240835 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:05.004604+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380436480 unmapped: 56549376 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:06.004805+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380436480 unmapped: 56549376 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:07.004976+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380436480 unmapped: 56549376 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:08.005142+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380436480 unmapped: 56549376 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:09.005292+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.019004822s of 32.221416473s, submitted: 67
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380436480 unmapped: 56549376 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:10.005592+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4237203 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380436480 unmapped: 56549376 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:11.005718+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380444672 unmapped: 56541184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:12.005969+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380444672 unmapped: 56541184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:13.006160+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380444672 unmapped: 56541184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:14.006354+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380444672 unmapped: 56541184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:15.006578+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4237203 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380444672 unmapped: 56541184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:16.006813+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a6254000/0x0/0x1bfc00000, data 0x33f0687/0x35ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380444672 unmapped: 56541184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:17.007001+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380444672 unmapped: 56541184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:18.007147+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380452864 unmapped: 56532992 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:19.007288+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380452864 unmapped: 56532992 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.418061256s of 10.456464767s, submitted: 2
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9dcc470e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:20.007515+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285989 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380641280 unmapped: 56344576 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5c73000/0x0/0x1bfc00000, data 0x39d1687/0x3bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:21.007769+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380641280 unmapped: 56344576 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:22.007976+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380641280 unmapped: 56344576 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:23.008213+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380641280 unmapped: 56344576 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:24.008479+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380641280 unmapped: 56344576 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:25.008711+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285989 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380641280 unmapped: 56344576 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:26.008900+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380641280 unmapped: 56344576 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5c73000/0x0/0x1bfc00000, data 0x39d1687/0x3bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:27.009149+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380649472 unmapped: 56336384 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:28.009448+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380649472 unmapped: 56336384 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:29.009614+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380649472 unmapped: 56336384 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:30.009770+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285989 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380649472 unmapped: 56336384 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5c73000/0x0/0x1bfc00000, data 0x39d1687/0x3bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:31.009984+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380649472 unmapped: 56336384 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:32.010201+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380649472 unmapped: 56336384 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.216380119s of 13.434361458s, submitted: 11
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:33.010471+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcc60000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380665856 unmapped: 56320000 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5c73000/0x0/0x1bfc00000, data 0x39d1687/0x3bcb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db9ab400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9db9ab400 session 0x55f9dc9c3860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:34.010730+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380665856 unmapped: 56320000 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db9ab400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dcd11e00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5b71000/0x0/0x1bfc00000, data 0x3ad3687/0x3ccd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9db9ab400 session 0x55f9dc6ff2c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:35.011158+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336445 data_alloc: 234881024 data_used: 21000192
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380887040 unmapped: 56098816 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:36.011486+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380887040 unmapped: 56098816 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc774a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:37.011754+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a570c000/0x0/0x1bfc00000, data 0x3f37697/0x4132000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9dcbb01e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380887040 unmapped: 56098816 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:38.011962+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380887040 unmapped: 56098816 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:39.012129+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 380887040 unmapped: 56098816 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc3b1c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:40.012260+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a570c000/0x0/0x1bfc00000, data 0x3f37697/0x4132000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4347403 data_alloc: 234881024 data_used: 22061056
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 381026304 unmapped: 55959552 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:41.012480+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382238720 unmapped: 54747136 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:42.012670+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382238720 unmapped: 54747136 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:43.012836+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382238720 unmapped: 54747136 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a570c000/0x0/0x1bfc00000, data 0x3f37697/0x4132000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:44.013047+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382238720 unmapped: 54747136 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:45.013353+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4377483 data_alloc: 234881024 data_used: 23666688
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382238720 unmapped: 54747136 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:46.013542+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382238720 unmapped: 54747136 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:47.013754+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382238720 unmapped: 54747136 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.091793060s of 14.538480759s, submitted: 12
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9db568780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:48.021208+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382304256 unmapped: 54681600 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:49.021403+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382304256 unmapped: 54681600 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a56e8000/0x0/0x1bfc00000, data 0x3f5b697/0x4156000, compress 0x0/0x0/0x0, omap 0x639, meta 0x163bf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:50.021600+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4387679 data_alloc: 234881024 data_used: 24727552
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 382337024 unmapped: 54648832 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:51.021777+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 53592064 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:52.021995+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 49627136 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:53.022148+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388481024 unmapped: 48504832 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:54.022306+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5c93000/0x0/0x1bfc00000, data 0x45e0697/0x47db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388595712 unmapped: 48390144 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:55.022641+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4463139 data_alloc: 234881024 data_used: 27070464
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388595712 unmapped: 48390144 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74be00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:56.022850+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388595712 unmapped: 48390144 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:57.023029+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388595712 unmapped: 48390144 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:58.023229+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5c93000/0x0/0x1bfc00000, data 0x45e0697/0x47db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388595712 unmapped: 48390144 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:59.023427+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388603904 unmapped: 48381952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:00.023572+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4463139 data_alloc: 234881024 data_used: 27070464
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388603904 unmapped: 48381952 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5c93000/0x0/0x1bfc00000, data 0x45e0697/0x47db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:01.023803+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.992303848s of 13.684939384s, submitted: 429
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388571136 unmapped: 48414720 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:02.023991+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388325376 unmapped: 48660480 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:03.024189+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388333568 unmapped: 48652288 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:04.024339+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388571136 unmapped: 48414720 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:05.024609+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4554961 data_alloc: 234881024 data_used: 27459584
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388571136 unmapped: 48414720 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a52e6000/0x0/0x1bfc00000, data 0x4f8d697/0x5188000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:06.024753+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388571136 unmapped: 48414720 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a52e6000/0x0/0x1bfc00000, data 0x4f8d697/0x5188000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:07.024918+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388571136 unmapped: 48414720 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:08.025067+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388571136 unmapped: 48414720 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:09.025280+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a52e6000/0x0/0x1bfc00000, data 0x4f8d697/0x5188000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388505600 unmapped: 48480256 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:10.025458+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4552069 data_alloc: 234881024 data_used: 27467776
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388505600 unmapped: 48480256 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:11.025634+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388505600 unmapped: 48480256 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:12.025823+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388505600 unmapped: 48480256 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a52c5000/0x0/0x1bfc00000, data 0x4fae697/0x51a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:13.026001+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388505600 unmapped: 48480256 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:14.026157+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db9ab400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.098273277s of 12.914017677s, submitted: 85
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392282112 unmapped: 44703744 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a52c5000/0x0/0x1bfc00000, data 0x4fae697/0x51a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,3,29])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:15.026357+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4580889 data_alloc: 234881024 data_used: 27467776
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 45907968 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9db9ab400 session 0x55f9dcc5c960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:16.026536+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 45907968 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:17.026675+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 45907968 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:18.026895+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 45907968 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc8c0000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dcc47680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:19.026998+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 45907968 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a4e93000/0x0/0x1bfc00000, data 0x53e0697/0x55db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc2f9c00 session 0x55f9dc74af00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:20.027219+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4424255 data_alloc: 234881024 data_used: 21798912
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383090688 unmapped: 53895168 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:21.027386+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383090688 unmapped: 53895168 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:22.027538+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5ce9000/0x0/0x1bfc00000, data 0x458a697/0x4785000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383090688 unmapped: 53895168 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:23.027692+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383090688 unmapped: 53895168 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:24.027870+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383090688 unmapped: 53895168 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:25.028091+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4424607 data_alloc: 234881024 data_used: 21798912
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383090688 unmapped: 53895168 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:26.028262+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5ce9000/0x0/0x1bfc00000, data 0x458a697/0x4785000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383090688 unmapped: 53895168 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.796846390s of 12.434602737s, submitted: 45
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:27.028454+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x4b13697/0x4d0e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,4])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd110e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcca4780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db9ab400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9db9ab400 session 0x55f9dc9050e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5760000/0x0/0x1bfc00000, data 0x4b13697/0x4d0e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [0,0,1])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc86ab40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e095c400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9e095c400 session 0x55f9da029c20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9db569a40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:28.028600+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:29.028805+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6fe960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:30.029067+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447936 data_alloc: 218103808 data_used: 20742144
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:31.029322+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a0a000/0x0/0x1bfc00000, data 0x4869697/0x4a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:32.029469+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:33.029605+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:34.029785+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:35.030005+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447936 data_alloc: 218103808 data_used: 20742144
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a0a000/0x0/0x1bfc00000, data 0x4869697/0x4a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:36.030190+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a0a000/0x0/0x1bfc00000, data 0x4869697/0x4a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:37.030378+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9dcd66780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.626328468s of 10.638116837s, submitted: 28
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9da74f860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:38.030545+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:39.030683+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:40.031188+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447804 data_alloc: 218103808 data_used: 20742144
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a0a000/0x0/0x1bfc00000, data 0x4869697/0x4a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:41.031365+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:42.031508+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:43.031677+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:44.031889+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:45.032178+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447804 data_alloc: 218103808 data_used: 20742144
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:46.032386+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a0a000/0x0/0x1bfc00000, data 0x4869697/0x4a64000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 53583872 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db9ab400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9db9ab400 session 0x55f9dcbb01e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:47.032580+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.598676682s of 10.011111259s, submitted: 29
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd66000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384614400 unmapped: 52371456 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:48.032881+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384622592 unmapped: 52363264 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:49.033184+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384622592 unmapped: 52363264 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:50.033343+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4333104 data_alloc: 234881024 data_used: 24563712
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:51.033544+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a69d7000/0x0/0x1bfc00000, data 0x389b6ba/0x3a97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:52.033739+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:53.033975+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:54.034227+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:55.034434+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4353584 data_alloc: 234881024 data_used: 27435008
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:56.034625+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a69d7000/0x0/0x1bfc00000, data 0x389b6ba/0x3a97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:57.034743+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a69d7000/0x0/0x1bfc00000, data 0x389b6ba/0x3a97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:58.034945+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:59.035198+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 384720896 unmapped: 52264960 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:00.035389+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4353584 data_alloc: 234881024 data_used: 27435008
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.325990677s of 13.349758148s, submitted: 7
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 385982464 unmapped: 51003392 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:01.035562+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a69d7000/0x0/0x1bfc00000, data 0x389b6ba/0x3a97000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 44785664 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:02.035784+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 45400064 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:03.035996+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 45400064 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:04.036242+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 45400064 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:05.036543+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5e29000/0x0/0x1bfc00000, data 0x44416ba/0x463d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4461938 data_alloc: 234881024 data_used: 29429760
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 45400064 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:06.036719+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 45400064 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:07.036902+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5e29000/0x0/0x1bfc00000, data 0x44416ba/0x463d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 45400064 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:08.037127+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dcae2000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc470e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391585792 unmapped: 45400064 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:09.037279+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5e29000/0x0/0x1bfc00000, data 0x44416ba/0x463d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9dcc5d4a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:10.037494+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a71ea000/0x0/0x1bfc00000, data 0x3064697/0x325f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230572 data_alloc: 218103808 data_used: 19140608
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:11.037649+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:12.037778+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:13.037922+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a71ea000/0x0/0x1bfc00000, data 0x3064697/0x325f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:14.038039+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:15.038249+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230572 data_alloc: 218103808 data_used: 19140608
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:16.038433+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:17.038696+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a71ea000/0x0/0x1bfc00000, data 0x3064697/0x325f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:18.038827+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a71ea000/0x0/0x1bfc00000, data 0x3064697/0x325f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:19.038957+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:20.039321+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230572 data_alloc: 218103808 data_used: 19140608
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:21.039534+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:22.039686+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:23.039845+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:24.040053+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a71ea000/0x0/0x1bfc00000, data 0x3064697/0x325f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:25.040310+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230572 data_alloc: 218103808 data_used: 19140608
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc3b1c00 session 0x55f9d97b0780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc8c03c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:26.040458+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.470623016s of 25.994760513s, submitted: 164
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc6fe3c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:27.040608+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:28.040812+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:29.040978+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:30.041142+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:31.041291+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:32.041461+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:33.041620+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:34.041825+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:35.042041+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:36.042168+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:37.042353+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:38.042558+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:39.042791+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:40.042989+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:41.043147+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:42.043333+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:43.043504+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:44.043675+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:45.043909+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:46.044151+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:47.044346+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:48.044533+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:49.044716+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:50.044912+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:51.045068+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:52.045283+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:53.045425+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:54.045573+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:55.045750+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:56.045938+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:57.046160+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:58.046351+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:59.046527+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:00.046711+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:01.046921+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:02.047160+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:03.047385+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:04.047677+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387129344 unmapped: 49856512 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:05.048142+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387129344 unmapped: 49856512 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:06.048613+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:07.049153+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:08.049415+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:09.049752+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:10.050224+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:11.050414+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:12.050669+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:13.050939+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 49840128 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:14.051300+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 49840128 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:15.051707+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.313423157s of 49.348728180s, submitted: 8
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387153920 unmapped: 49831936 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:16.051853+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9d9cf0780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc774a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9dc6ff2c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc74be00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc46f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:17.051989+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:18.052155+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:19.052346+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:20.052492+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4141934 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:21.052691+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:22.052963+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:23.053310+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:24.053641+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:25.054021+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4141934 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:26.054222+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:27.054388+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:28.054702+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:29.055044+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:30.055263+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4141934 data_alloc: 218103808 data_used: 11055104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 53305344 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:31.055507+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 53305344 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:32.055679+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 53305344 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:33.055925+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 53305344 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:34.056101+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.576404572s of 18.677923203s, submitted: 25
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc2c21e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387366912 unmapped: 53297152 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:35.056956+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4142447 data_alloc: 218103808 data_used: 11059200
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387366912 unmapped: 53297152 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:36.057219+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387629056 unmapped: 53035008 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:37.057369+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:38.057568+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:39.057935+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:40.058147+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186127 data_alloc: 218103808 data_used: 16252928
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:41.058316+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:42.058472+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:43.058675+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:44.058844+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:45.059146+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186127 data_alloc: 218103808 data_used: 16252928
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:46.059408+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:47.059542+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.483976364s of 12.510684013s, submitted: 7
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [1])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:48.059724+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390914048 unmapped: 49750016 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:49.059883+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 49586176 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:50.060020+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 49586176 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a62d7000/0x0/0x1bfc00000, data 0x2de6687/0x2fe0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231003 data_alloc: 218103808 data_used: 16994304
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:51.060251+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 49586176 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd103c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9deafac00 session 0x55f9dcd105a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9deafac00 session 0x55f9dcd11c20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9d97b0d20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9da73af00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dba3f860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc9f52c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:52.060494+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390512640 unmapped: 53829632 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd661e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dab14000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:53.060683+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:54.060870+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:55.061164+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292355 data_alloc: 218103808 data_used: 17002496
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:56.061506+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:57.061678+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:58.061846+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:59.062133+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:00.062364+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292355 data_alloc: 218103808 data_used: 17002496
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:01.062564+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:02.062736+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:03.062985+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:04.063173+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.550928116s of 17.227769852s, submitted: 100
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9da6f2780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:05.063350+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292580 data_alloc: 218103808 data_used: 17006592
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:06.063564+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:07.063779+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: mgrc ms_handle_reset ms_handle_reset con 0x55f9dcad4000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3465938080
Jan 31 09:15:04 compute-0 ceph-osd[84816]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3465938080,v1:192.168.122.100:6801/3465938080]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: get_auth_request con 0x55f9dc3b1c00 auth_method 0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: mgrc handle_mgr_configure stats_period=5
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:08.064009+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:09.064166+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:10.064411+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292580 data_alloc: 218103808 data_used: 17006592
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:11.064724+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9db87bc00 session 0x55f9dcca5e00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0186800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:12.064901+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:13.065034+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:14.065179+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392118272 unmapped: 52224000 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:15.065465+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4347300 data_alloc: 234881024 data_used: 24342528
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:16.065631+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:17.065812+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:18.066017+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.694046974s of 14.721753120s, submitted: 7
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:19.066174+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8b000/0x0/0x1bfc00000, data 0x36476e9/0x3842000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:20.066364+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4347608 data_alloc: 234881024 data_used: 24342528
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:21.066543+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:22.066697+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:23.066902+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8b000/0x0/0x1bfc00000, data 0x36476e9/0x3842000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:24.067161+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:25.067377+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395362304 unmapped: 48979968 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:26.067589+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4389566 data_alloc: 234881024 data_used: 24653824
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395370496 unmapped: 48971776 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:27.067730+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 48930816 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:28.067887+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcad5c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcad5c00 session 0x55f9d97f85a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9835000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9835000 session 0x55f9daa22960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dcd10b40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 48930816 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc501e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcbb10e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a559a000/0x0/0x1bfc00000, data 0x3b386e9/0x3d33000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc91e960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9dcae21e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:29.068121+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395722752 unmapped: 48619520 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.719305992s of 10.298625946s, submitted: 70
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:30.068375+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc5df680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395747328 unmapped: 48594944 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:31.068612+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291518 data_alloc: 218103808 data_used: 18628608
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395747328 unmapped: 48594944 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:32.068761+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395747328 unmapped: 48594944 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:33.069017+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:34.069263+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:35.069477+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:36.069631+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291518 data_alloc: 218103808 data_used: 18628608
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:37.069824+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:38.070012+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:39.070137+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:40.070329+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:41.070528+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291518 data_alloc: 218103808 data_used: 18628608
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:42.070705+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:43.070838+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.467213631s of 13.787834167s, submitted: 37
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dccf5860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9deafac00 session 0x55f9dc684f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395829248 unmapped: 48513024 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9d97f61e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:44.071005+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392740864 unmapped: 51601408 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:45.071147+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:46.071282+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a65b1000/0x0/0x1bfc00000, data 0x28d3687/0x2acd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4171022 data_alloc: 218103808 data_used: 16035840
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:47.071418+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:48.071537+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:49.071684+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:50.071862+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a65b1000/0x0/0x1bfc00000, data 0x28d3687/0x2acd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:51.072045+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4171022 data_alloc: 218103808 data_used: 16035840
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:52.072127+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:53.072745+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.610352516s of 10.054381371s, submitted: 81
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395640832 unmapped: 48701440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:54.072879+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:55.073108+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:56.073305+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4214370 data_alloc: 218103808 data_used: 17145856
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:57.073520+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:58.073731+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:59.073970+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:00.074187+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:01.074424+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4214370 data_alloc: 218103808 data_used: 17145856
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:02.074609+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:03.074788+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:04.074935+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:05.075181+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:06.075347+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4214370 data_alloc: 218103808 data_used: 17145856
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.801712036s of 13.848567963s, submitted: 17
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:07.075568+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:08.075731+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:09.075933+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:10.076222+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:11.076344+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213842 data_alloc: 218103808 data_used: 17149952
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcad5c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50e2000/0x0/0x1bfc00000, data 0x2e4f342/0x304b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:12.076659+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50e2000/0x0/0x1bfc00000, data 0x2e4f342/0x304b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 ms_handle_reset con 0x55f9dcad5c00 session 0x55f9dcae3a40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dccf54a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:13.076854+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50e2000/0x0/0x1bfc00000, data 0x2e4f342/0x304b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a4d26000/0x0/0x1bfc00000, data 0x320b342/0x3407000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,2])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:14.077042+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 44318720 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 ms_handle_reset con 0x55f9d9834000 session 0x55f9dccf41e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:15.077349+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9d9f1a1e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 47939584 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:16.077583+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 385 handle_osd_map epochs [386,387], i have 385, src has [1,387]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4263676 data_alloc: 218103808 data_used: 20717568
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 47939584 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:17.077782+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 47939584 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:18.078059+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a4d19000/0x0/0x1bfc00000, data 0x3216c02/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:19.078309+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:20.078561+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:21.078760+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4263676 data_alloc: 218103808 data_used: 20717568
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:22.078950+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.687474251s of 15.662240028s, submitted: 20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 387 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd11860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:23.079153+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a479d000/0x0/0x1bfc00000, data 0x3793c02/0x3991000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396861440 unmapped: 47480832 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:24.079305+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396861440 unmapped: 47480832 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:25.079519+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396869632 unmapped: 47472640 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:26.079797+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310978 data_alloc: 218103808 data_used: 20725760
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396869632 unmapped: 47472640 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4799000/0x0/0x1bfc00000, data 0x3795741/0x3994000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:27.080014+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396869632 unmapped: 47472640 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:28.080219+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:29.080419+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:30.080601+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4799000/0x0/0x1bfc00000, data 0x3795741/0x3994000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9deafac00 session 0x55f9dcc474a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:31.080812+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310978 data_alloc: 218103808 data_used: 20725760
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc8c1860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:32.081043+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:33.081282+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd665a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.675912857s of 10.768301010s, submitted: 19
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc371a40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:34.081464+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4798000/0x0/0x1bfc00000, data 0x3795751/0x3995000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:35.081719+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:36.081892+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4312816 data_alloc: 218103808 data_used: 20725760
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:37.082125+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4798000/0x0/0x1bfc00000, data 0x3795751/0x3995000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:38.082311+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:39.082493+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84dc00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9dc84dc00 session 0x55f9dc9c2780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db839c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9db839c00 session 0x55f9dc685860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:40.082649+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:41.082824+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4341456 data_alloc: 234881024 data_used: 24690688
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9d9834000 session 0x55f9dcd10f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcce8000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:42.083015+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:43.083216+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9dc3ad800 session 0x55f9da6f2b40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84dc00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4797000/0x0/0x1bfc00000, data 0x3795783/0x3997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:44.083388+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396894208 unmapped: 47448064 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:45.083637+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396902400 unmapped: 47439872 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:46.083803+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4360225 data_alloc: 234881024 data_used: 26959872
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396902400 unmapped: 47439872 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:47.084003+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4797000/0x0/0x1bfc00000, data 0x3795783/0x3997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396902400 unmapped: 47439872 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.298404694s of 14.317409515s, submitted: 6
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:48.084138+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402595840 unmapped: 41746432 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:49.084208+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:50.084387+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:51.084531+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4469349 data_alloc: 234881024 data_used: 28704768
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:52.084685+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:53.084821+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a3c9c000/0x0/0x1bfc00000, data 0x4288783/0x448a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:54.084968+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:55.085162+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a3c9c000/0x0/0x1bfc00000, data 0x4288783/0x448a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a3ad8000/0x0/0x1bfc00000, data 0x4454783/0x4656000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:56.085321+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4479731 data_alloc: 234881024 data_used: 29097984
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:57.085685+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:58.085856+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:59.086054+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9deafac00 session 0x55f9dcbb0f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcc770e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafb400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:00.086438+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.952914238s of 12.495125771s, submitted: 104
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398991360 unmapped: 45350912 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9deafb400 session 0x55f9dcc512c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4b4c000/0x0/0x1bfc00000, data 0x33e1773/0x35e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:01.086528+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4311306 data_alloc: 234881024 data_used: 21577728
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:02.086727+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:03.086871+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:04.087143+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:05.087350+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:06.087494+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4311306 data_alloc: 234881024 data_used: 21577728
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4ade000/0x0/0x1bfc00000, data 0x344f773/0x3650000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4ade000/0x0/0x1bfc00000, data 0x344f773/0x3650000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:07.418945+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:08.419185+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4ade000/0x0/0x1bfc00000, data 0x344f773/0x3650000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:09.419349+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:10.419572+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:11.419799+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4311946 data_alloc: 234881024 data_used: 21594112
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.308907509s of 11.493408203s, submitted: 17
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafb400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9deafb400 session 0x55f9dc9c3680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:12.420009+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400089088 unmapped: 44253184 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 389 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9d97b12c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 389 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc9c3c20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 389 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc9c2b40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:13.420218+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 389 heartbeat osd_stat(store_statfs(0x1a4ad7000/0x0/0x1bfc00000, data 0x3452490/0x3656000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402612224 unmapped: 50135040 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:14.420429+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 390 ms_handle_reset con 0x55f9deafac00 session 0x55f9dccf45a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402628608 unmapped: 50118656 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:15.420654+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 50110464 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 391 ms_handle_reset con 0x55f9d9834000 session 0x55f9dccf50e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:16.420896+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489310 data_alloc: 234881024 data_used: 25178112
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 50110464 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 391 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc6854a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:17.421136+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 391 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcbb05a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafb400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 391 ms_handle_reset con 0x55f9deafb400 session 0x55f9da6f32c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 50110464 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:18.421476+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 50110464 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a350e000/0x0/0x1bfc00000, data 0x4a19d50/0x4c1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:19.421643+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 392 ms_handle_reset con 0x55f9dc2f9000 session 0x55f9dcd0ef00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:20.422047+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a4acf000/0x0/0x1bfc00000, data 0x34579b7/0x365d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a4acf000/0x0/0x1bfc00000, data 0x34579b7/0x365d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:21.422297+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4344236 data_alloc: 234881024 data_used: 25186304
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:22.422500+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:23.422697+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:24.423046+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 392 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc6fed20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.286382675s of 13.420795441s, submitted: 131
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:25.423351+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402702336 unmapped: 50044928 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 393 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dccf5e00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 393 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc3f1c20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:26.423569+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4347550 data_alloc: 234881024 data_used: 25178112
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402710528 unmapped: 50036736 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 393 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd67e00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a4ac7000/0x0/0x1bfc00000, data 0x322152e/0x3428000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:27.423784+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:28.424001+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 393 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcd105a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:29.424175+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:30.424366+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:31.424529+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316992 data_alloc: 234881024 data_used: 22863872
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafb400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a4d06000/0x0/0x1bfc00000, data 0x322155e/0x3427000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:32.424784+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 394 ms_handle_reset con 0x55f9deafb400 session 0x55f9dcae32c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:33.424992+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:34.425232+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a50c8000/0x0/0x1bfc00000, data 0x2e5f20b/0x3066000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:35.425523+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:36.425785+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267908 data_alloc: 218103808 data_used: 18927616
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.548620224s of 11.714035988s, submitted: 59
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 394 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc50000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:37.425962+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a4d00000/0x0/0x1bfc00000, data 0x322720b/0x342e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:38.426176+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 394 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9db568960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:39.426352+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 ms_handle_reset con 0x55f9dc317c00 session 0x55f9d97f6000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:40.426740+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:41.426974+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179262 data_alloc: 218103808 data_used: 11120640
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:42.427200+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:43.427366+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:44.427580+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:45.427791+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:46.428004+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179262 data_alloc: 218103808 data_used: 11120640
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:47.428205+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:48.428407+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:49.428612+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393306112 unmapped: 59441152 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:50.428787+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:51.428974+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4206142 data_alloc: 218103808 data_used: 14884864
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:52.429143+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:53.429502+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:54.429891+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.022670746s of 18.103244781s, submitted: 22
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dccf4960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:55.430377+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:56.430549+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280768 data_alloc: 218103808 data_used: 14884864
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:57.430917+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:58.431143+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a4dc0000/0x0/0x1bfc00000, data 0x3165d4a/0x336e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:59.431286+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:00.431432+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:01.431650+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4374296 data_alloc: 218103808 data_used: 14958592
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395362304 unmapped: 57384960 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:02.431802+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9dcbb0f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396664832 unmapped: 56082432 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a3c77000/0x0/0x1bfc00000, data 0x3e9ed4a/0x40a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:03.432032+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396664832 unmapped: 56082432 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:04.432230+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:05.432468+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:06.432632+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444711 data_alloc: 218103808 data_used: 19910656
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:07.432855+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a3c3f000/0x0/0x1bfc00000, data 0x3ed6d4a/0x40df000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:08.433046+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.094612122s of 13.529434204s, submitted: 93
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:09.433278+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:10.433619+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:11.434175+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441147 data_alloc: 218103808 data_used: 19910656
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:12.434386+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:13.434694+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a3c1e000/0x0/0x1bfc00000, data 0x3ef7d4a/0x4100000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:14.434950+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 53567488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:15.435176+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 54124544 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:16.435350+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4501453 data_alloc: 218103808 data_used: 20688896
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 54124544 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:17.435540+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399777792 unmapped: 52969472 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:18.435742+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.956391335s of 10.231409073s, submitted: 77
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 ms_handle_reset con 0x55f9dc317c00 session 0x55f9d90fd4a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399777792 unmapped: 52969472 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:19.436048+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399777792 unmapped: 52969472 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a3689000/0x0/0x1bfc00000, data 0x448cd4a/0x4695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:20.436258+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399777792 unmapped: 52969472 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 396 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcc47860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:21.436460+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4501409 data_alloc: 218103808 data_used: 20697088
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399785984 unmapped: 52961280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:22.436669+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399785984 unmapped: 52961280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:23.436866+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399785984 unmapped: 52961280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3685000/0x0/0x1bfc00000, data 0x448e9a3/0x4698000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,1])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 396 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 396 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:24.437097+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 397 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9d9adb4a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb5400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 397 ms_handle_reset con 0x55f9deeb5400 session 0x55f9dc8c05a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c7400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 41361408 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:25.437383+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 41361408 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 398 ms_handle_reset con 0x55f9dc5c7400 session 0x55f9dc9063c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:26.437549+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4604806 data_alloc: 234881024 data_used: 28688384
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 41353216 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:27.437745+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dcce90e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a2b0c000/0x0/0x1bfc00000, data 0x500331a/0x5211000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 41353216 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:28.437991+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc6fe960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9dc6ff680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb5400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9deeb5400 session 0x55f9daa22d20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411344896 unmapped: 41402368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:29.438176+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411344896 unmapped: 41402368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:30.438462+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f8000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.656889915s of 12.057048798s, submitted: 74
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dc2f8000 session 0x55f9dc6ffc20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dccf4f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dcca5c20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:31.438717+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4597582 data_alloc: 234881024 data_used: 28688384
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:32.438948+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a2b0a000/0x0/0x1bfc00000, data 0x5004f8f/0x5214000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:33.439147+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:34.439280+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:35.439548+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408256512 unmapped: 44490752 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:36.439779+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc6ffe00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601756 data_alloc: 234881024 data_used: 28696576
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408346624 unmapped: 44400640 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9db870960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:37.439944+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408354816 unmapped: 44392448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f3000/0x0/0x1bfc00000, data 0x541aace/0x562b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:38.440218+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 44384256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:39.440387+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f3000/0x0/0x1bfc00000, data 0x541aace/0x562b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 44384256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:40.440526+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 44384256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:41.440668+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4631664 data_alloc: 234881024 data_used: 28696576
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 44384256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:42.440809+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f3000/0x0/0x1bfc00000, data 0x541aace/0x562b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408371200 unmapped: 44376064 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:43.440946+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:44.441164+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:45.441356+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:46.441483+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.727028847s of 15.821485519s, submitted: 23
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4631972 data_alloc: 234881024 data_used: 28696576
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb5400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9deeb5400 session 0x55f9dcae3c20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:47.441631+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dcd0f860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f1000/0x0/0x1bfc00000, data 0x541bace/0x562c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:48.441816+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc86b0e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9da74ef00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dd69a000 session 0x55f9daa234a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db87a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:49.442009+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0f04000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:50.442205+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:51.442353+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4643839 data_alloc: 234881024 data_used: 29982720
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408387584 unmapped: 44359680 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:52.442546+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:53.442690+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f1000/0x0/0x1bfc00000, data 0x541bace/0x562c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:54.442841+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f1000/0x0/0x1bfc00000, data 0x541bace/0x562c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:55.443009+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:56.443167+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671199 data_alloc: 234881024 data_used: 33910784
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:57.443352+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:58.443497+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f1000/0x0/0x1bfc00000, data 0x541bace/0x562c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:59.443660+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:00.443831+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:01.443969+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671199 data_alloc: 234881024 data_used: 33910784
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:02.444202+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 44253184 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:03.444383+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.425395966s of 17.477115631s, submitted: 7
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408764416 unmapped: 43982848 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:04.444547+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411484160 unmapped: 41263104 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a22b0000/0x0/0x1bfc00000, data 0x585dace/0x5a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:05.444741+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:06.444883+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4737281 data_alloc: 234881024 data_used: 37543936
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:07.445151+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2257000/0x0/0x1bfc00000, data 0x58b5ace/0x5ac6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2257000/0x0/0x1bfc00000, data 0x58b5ace/0x5ac6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:08.445307+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:09.445661+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2257000/0x0/0x1bfc00000, data 0x58b5ace/0x5ac6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:10.445857+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9db87a800 session 0x55f9dcc472c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc74ba40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:11.446011+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dc317c00 session 0x55f9db568f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4647436 data_alloc: 234881024 data_used: 34361344
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:12.446212+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:13.446349+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:14.446496+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:15.446706+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:16.446895+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4647436 data_alloc: 234881024 data_used: 34361344
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.982387543s of 13.243474007s, submitted: 56
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:17.447094+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:18.447253+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:19.447483+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:20.447761+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:21.447923+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649900 data_alloc: 234881024 data_used: 34349056
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:22.448085+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd10f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:23.448333+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc3734a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:24.448500+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:25.448768+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dccf5860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:26.448942+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4511636 data_alloc: 234881024 data_used: 31846400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:27.449194+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:28.449472+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:29.449663+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:30.449842+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:31.450035+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4511636 data_alloc: 234881024 data_used: 31846400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:32.450216+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:33.450403+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:34.450669+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.793992996s of 17.461553574s, submitted: 38
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:35.450910+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:36.451202+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4517643 data_alloc: 234881024 data_used: 33685504
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:37.451410+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:38.451612+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:39.451756+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:40.452066+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6854a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:41.452367+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4517419 data_alloc: 234881024 data_used: 33685504
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413851648 unmapped: 38895616 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:42.452517+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413851648 unmapped: 38895616 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:43.452654+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 38887424 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:44.452862+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9e0f04000 session 0x55f9dcc5cb40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.198714256s of 10.470458031s, submitted: 7
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 38887424 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:45.453056+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd66960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 38887424 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:46.453367+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4512655 data_alloc: 234881024 data_used: 33579008
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 38887424 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:47.453595+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a3a0f000/0x0/0x1bfc00000, data 0x40feace/0x430f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 38879232 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:48.453776+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 38879232 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:49.453946+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a3a0f000/0x0/0x1bfc00000, data 0x40feace/0x430f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 38879232 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:50.454120+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 38871040 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:51.454279+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9d97f85a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4512787 data_alloc: 234881024 data_used: 33579008
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 40599552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:52.454739+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dc6ffc20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a3a0f000/0x0/0x1bfc00000, data 0x40feace/0x430f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 40599552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:53.455014+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 40599552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:54.455144+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 401 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd0fa40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:55.455305+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.594058037s of 10.206480026s, submitted: 59
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 401 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae2780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 401 heartbeat osd_stat(store_statfs(0x1a457c000/0x0/0x1bfc00000, data 0x359270a/0x37a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:56.455455+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 402 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc8c0960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4387946 data_alloc: 234881024 data_used: 24760320
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:57.455617+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:58.455833+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 58K writes, 218K keys, 58K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 58K writes, 21K syncs, 2.74 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3506 writes, 12K keys, 3506 commit groups, 1.0 writes per commit group, ingest: 15.02 MB, 0.03 MB/s
                                           Interval WAL: 3506 writes, 1419 syncs, 2.47 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:59.456003+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:00.456257+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409452544 unmapped: 43294720 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a4574000/0x0/0x1bfc00000, data 0x3595f2e/0x37a8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 403 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd67680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:01.456420+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409452544 unmapped: 43294720 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4390744 data_alloc: 234881024 data_used: 24760320
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:02.456578+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409452544 unmapped: 43294720 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 403 ms_handle_reset con 0x55f9dc317c00 session 0x55f9d978c3c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:03.456770+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:04.457062+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:05.457331+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56ec000/0x0/0x1bfc00000, data 0x241ff2e/0x2632000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:06.457554+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213800 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:07.457803+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:08.457948+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:09.458169+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e8000/0x0/0x1bfc00000, data 0x2421a6d/0x2635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:10.458357+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:11.458513+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.421072960s of 16.197496414s, submitted: 62
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6850e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e8000/0x0/0x1bfc00000, data 0x2421a6d/0x2635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd10000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:12.458660+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397131776 unmapped: 55615488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:13.458929+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:14.459101+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:15.459259+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:16.459425+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:17.459575+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:18.459866+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:19.460172+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:20.460450+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:21.460668+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:22.460805+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:23.460951+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:24.461179+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:25.461600+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:26.461814+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:27.462153+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:28.462323+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:29.462547+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:30.462824+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:31.463176+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:32.463471+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:33.463688+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:34.463926+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:35.464156+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:36.464356+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:37.464526+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:38.464781+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.720899582s of 26.808219910s, submitted: 26
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 48365568 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc74b680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd10d20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0f04000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9e0f04000 session 0x55f9dccf5a40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dba3ef00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc376960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:39.464945+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:40.465118+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:41.465385+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273449 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:42.465583+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:43.465736+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:44.465968+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:45.466150+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:46.466314+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:47.466528+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273449 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:48.466754+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:49.466923+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:50.467113+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:51.467293+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:52.467430+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273449 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:53.467605+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:54.467777+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397320192 unmapped: 55427072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:55.467949+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397320192 unmapped: 55427072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:56.468260+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397320192 unmapped: 55427072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dcd10b40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:57.468426+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273449 data_alloc: 218103808 data_used: 11169792
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397320192 unmapped: 55427072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcc501e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:58.468663+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397336576 unmapped: 55410688 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9dc74b860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.606311798s of 20.880384445s, submitted: 23
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dccf45a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:59.468880+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397647872 unmapped: 55099392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:00.469037+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397647872 unmapped: 55099392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:01.469223+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397647872 unmapped: 55099392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:02.469486+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4293076 data_alloc: 218103808 data_used: 12902400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:03.469729+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:04.469912+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:05.470173+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:06.470332+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:07.470502+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 19103744
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:08.470671+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:09.470900+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:10.471160+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:11.471420+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:12.471690+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 19103744
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.583962440s of 13.613166809s, submitted: 7
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:13.471812+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401907712 unmapped: 50839552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:14.471969+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c1d000/0x0/0x1bfc00000, data 0x2ee6a3e/0x30fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:15.472237+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:16.472419+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:17.472678+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373760 data_alloc: 218103808 data_used: 20262912
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:18.472876+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:19.473132+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:20.473275+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:21.473466+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:22.473626+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373760 data_alloc: 218103808 data_used: 20262912
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:23.473799+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:24.474007+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:25.474256+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:26.474481+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:27.474672+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373760 data_alloc: 218103808 data_used: 20262912
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:28.474824+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:29.475057+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:30.475395+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:31.475646+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:32.475812+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373760 data_alloc: 218103808 data_used: 20262912
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:33.476035+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:34.476236+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:35.476432+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.823360443s of 23.011190414s, submitted: 63
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc8c14a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dcc46f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:36.476619+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401956864 unmapped: 50790400 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:37.476830+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4368352 data_alloc: 218103808 data_used: 20267008
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401956864 unmapped: 50790400 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:38.476996+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dccae000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dccae000 session 0x55f9dcce9a40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 50782208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:39.477275+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 50782208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:40.477495+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 50782208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c0b000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:41.477725+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 50782208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:42.491064+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369826 data_alloc: 218103808 data_used: 20267008
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dcd66000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401981440 unmapped: 50765824 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9da6f2b40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:43.491361+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401989632 unmapped: 50757632 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:44.491507+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dc74b680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:45.491710+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a3ee7000/0x0/0x1bfc00000, data 0x3c22a3e/0x3e37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:46.491957+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:47.492165+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475235 data_alloc: 218103808 data_used: 20267008
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:48.492410+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:49.492608+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a3ee7000/0x0/0x1bfc00000, data 0x3c22a3e/0x3e37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.249938011s of 14.454350471s, submitted: 45
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:50.492778+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402333696 unmapped: 50413568 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:51.492895+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 406 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcbb0f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402374656 unmapped: 50372608 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:52.493000+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dccae000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490773 data_alloc: 218103808 data_used: 20275200
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 54059008 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 407 ms_handle_reset con 0x55f9dd69a000 session 0x55f9d9f1a5a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:53.493142+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dccae000 session 0x55f9dc8c01e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc474a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399048704 unmapped: 53698560 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:54.493334+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399147008 unmapped: 53600256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:55.493514+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9d97eed20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc9c2960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dd69a000 session 0x55f9da74fa40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a3b6a000/0x0/0x1bfc00000, data 0x3f96347/0x41b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc30400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dcc30400 session 0x55f9dcc47a40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc5d680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dc684780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcae2780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 53567488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dc6843c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc317400 session 0x55f9dc684000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:56.493708+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 53567488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:57.493869+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4565685 data_alloc: 218103808 data_used: 20291584
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 53567488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6845a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:58.494044+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc848400 session 0x55f9da9b5860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400121856 unmapped: 52625408 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:59.494255+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 409 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dcd0f860
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398974976 unmapped: 53772288 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 409 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcc505a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc28c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:00.494408+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dcc28c00 session 0x55f9dcc50000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9d97b0780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:01.494554+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9d000/0x0/0x1bfc00000, data 0x4d5db6d/0x4f7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:02.494710+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651001 data_alloc: 218103808 data_used: 20307968
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:03.494868+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9d9f1ab40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dc92a1e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:04.495170+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc848400 session 0x55f9d97eeb40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc28c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.712867737s of 14.574018478s, submitted: 453
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dcc28c00 session 0x55f9dc6fe5a0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:05.495380+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9c000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:06.495589+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 52707328 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:07.495840+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4687399 data_alloc: 234881024 data_used: 24625152
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 51560448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:08.496125+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9c000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 51560448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:09.496366+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 51560448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:10.496539+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 51560448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:11.496765+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 51552256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:12.496954+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9e000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4685671 data_alloc: 234881024 data_used: 24625152
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 51552256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:13.497164+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 51552256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:14.497360+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 51552256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:15.497615+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9e000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.731956482s of 10.848281860s, submitted: 2
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9e000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 51470336 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:16.497853+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 51470336 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:17.498006+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9e000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4730091 data_alloc: 234881024 data_used: 25403392
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402014208 unmapped: 50733056 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:18.498186+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402055168 unmapped: 50692096 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:19.498320+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 50487296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:20.498462+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408256512 unmapped: 44490752 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a27ba000/0x0/0x1bfc00000, data 0x5341b7d/0x5564000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:21.498643+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 44482560 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:22.498831+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803557 data_alloc: 234881024 data_used: 34942976
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 44482560 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:23.499029+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a27ba000/0x0/0x1bfc00000, data 0x5341b7d/0x5564000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 44482560 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:24.499146+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:25.499366+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a27ba000/0x0/0x1bfc00000, data 0x5341b7d/0x5564000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:26.499505+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:27.499672+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803557 data_alloc: 234881024 data_used: 34942976
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:28.499810+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:29.500022+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:30.500179+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408281088 unmapped: 44466176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a27ba000/0x0/0x1bfc00000, data 0x5341b7d/0x5564000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.195389748s of 15.360485077s, submitted: 57
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:31.500312+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 40067072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc50960
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcd67a40
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:32.500531+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4828177 data_alloc: 234881024 data_used: 31006720
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc848400 session 0x55f9dccf4d20
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410935296 unmapped: 41811968 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:33.500698+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:34.500855+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:35.501111+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:36.501252+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a257a000/0x0/0x1bfc00000, data 0x5a99b5d/0x579c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:37.501427+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4841694 data_alloc: 234881024 data_used: 30908416
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:38.501598+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:39.501764+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:40.501961+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:41.502147+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.386196136s of 10.751989365s, submitted: 135
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:42.502322+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4835634 data_alloc: 234881024 data_used: 30908416
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2561000/0x0/0x1bfc00000, data 0x5abab5d/0x57bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2561000/0x0/0x1bfc00000, data 0x5abab5d/0x57bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:43.502535+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dc8c01e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:44.502705+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dcbb01e0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:45.502954+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:46.503170+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2561000/0x0/0x1bfc00000, data 0x5abab5d/0x57bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410976256 unmapped: 41771008 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2561000/0x0/0x1bfc00000, data 0x5abab5d/0x57bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:47.503391+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4835634 data_alloc: 234881024 data_used: 30908416
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410976256 unmapped: 41771008 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:48.503542+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc3763c0
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dcca4f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407257088 unmapped: 45490176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc50f00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:49.503725+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:50.503876+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:51.504045+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a303f000/0x0/0x1bfc00000, data 0x4fddb2a/0x4cde000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:52.504142+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:04 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4688411 data_alloc: 234881024 data_used: 21827584
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:53.504284+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:15:04 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.301034927s of 11.727187157s, submitted: 38
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:54.504566+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 411 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcc5d680
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 411 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dcae2780
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400760832 unmapped: 51986432 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:55.504905+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a4640000/0x0/0x1bfc00000, data 0x34be775/0x36dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a4640000/0x0/0x1bfc00000, data 0x34be775/0x36dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,0,0,2])
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400769024 unmapped: 51978240 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:56.505194+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 31 09:15:04 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 50880512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:04 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:57.505371+0000)
Jan 31 09:15:04 compute-0 ceph-osd[84816]: osd.0 412 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd0f860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 412 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9db86eb40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386451 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400818176 unmapped: 51929088 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 413 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dba3ef00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:58.505577+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400818176 unmapped: 51929088 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:59.505816+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400818176 unmapped: 51929088 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:00.506045+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52bc000/0x0/0x1bfc00000, data 0x24319ee/0x264f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400826368 unmapped: 51920896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:01.506253+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400826368 unmapped: 51920896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:02.506435+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52bc000/0x0/0x1bfc00000, data 0x24319ee/0x264f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292223 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:03.506677+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:04.506956+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52bc000/0x0/0x1bfc00000, data 0x24319ee/0x264f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.946926117s of 11.583917618s, submitted: 110
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bc000/0x0/0x1bfc00000, data 0x24319ee/0x264f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:05.507169+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:06.507316+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:07.507494+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:08.507750+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:09.507933+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:10.508164+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:11.508336+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:12.508581+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:13.508784+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:14.508956+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:15.509193+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:16.509368+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:17.509542+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400850944 unmapped: 51896320 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:18.509709+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400850944 unmapped: 51896320 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:19.509869+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:20.510155+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:21.510355+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:22.510590+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:23.510799+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:24.510988+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:25.511227+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:26.511517+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400875520 unmapped: 51871744 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:27.511710+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400875520 unmapped: 51871744 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:28.511958+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400875520 unmapped: 51871744 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:29.512154+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:30.512348+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:31.512589+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:32.512762+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:33.513461+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:34.513613+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:35.513820+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:36.514041+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:37.514225+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:38.514379+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:39.514557+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:40.514742+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:41.514963+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:42.515209+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:43.515357+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:44.515557+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:45.515797+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:46.515975+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.160724640s of 42.177688599s, submitted: 25
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:47.516148+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcc774a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcc474a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dccf4780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd0e780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dcae21e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4357916 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:48.516318+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4a88000/0x0/0x1bfc00000, data 0x2c67565/0x2e86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:49.516477+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 57139200 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:50.516632+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 57139200 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:51.516809+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 57139200 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:52.516969+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4357916 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 57139200 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:53.517121+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4a88000/0x0/0x1bfc00000, data 0x2c67565/0x2e86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:54.517251+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dc6854a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb76400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:55.517421+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:56.517671+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:57.517887+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360163 data_alloc: 218103808 data_used: 11268096
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:58.518067+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4a87000/0x0/0x1bfc00000, data 0x2c67588/0x2e87000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:59.518218+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:00.518361+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:01.518479+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:02.518628+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.062685013s of 15.225501060s, submitted: 23
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dbb76400 session 0x55f9dcae3e00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301889 data_alloc: 218103808 data_used: 11214848
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:03.518798+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc76780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:04.519029+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:05.519356+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:06.519569+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:07.519793+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:08.520008+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:09.520202+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:10.520438+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:11.520621+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:12.520837+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:13.521006+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:14.521133+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:15.521287+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:16.521501+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:17.521758+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:18.521951+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:19.522153+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:20.522387+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:21.522588+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:22.522865+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:23.523044+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:24.523299+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:25.523512+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:26.523732+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:27.523930+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:28.524065+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:29.524435+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:30.524726+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9d97b0780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9db86fe00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcc463c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0f04400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9e0f04400 session 0x55f9daa23860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.561090469s of 28.726909637s, submitted: 31
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:31.524929+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc512c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:32.525177+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae2000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4309229 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:33.525365+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:34.525581+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:35.525829+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:36.526034+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:37.526249+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4309229 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:38.526479+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:39.526673+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:40.526817+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:41.527042+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:42.527206+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.237578392s of 11.162384033s, submitted: 22
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310134 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc684f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:43.527361+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:44.527551+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:45.527769+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:46.527949+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 57114624 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:47.528178+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4314202 data_alloc: 218103808 data_used: 11743232
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:48.528439+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:49.528611+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:50.528806+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:51.528958+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc8c0d20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dc685860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:52.529174+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 57098240 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.558219910s of 10.067571640s, submitted: 7
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4314230 data_alloc: 218103808 data_used: 11747328
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:53.530055+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 57098240 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:54.530311+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400982016 unmapped: 57090048 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:55.530486+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd103c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:56.530763+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:57.530948+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:58.531174+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:59.531431+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:00.531640+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:01.531793+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:02.532137+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:03.532334+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:04.532489+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:05.532743+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:06.533031+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:07.533296+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 57065472 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:08.533631+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 57065472 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:09.533928+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 57065472 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:10.534235+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 57065472 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:11.534418+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:12.534692+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:13.534934+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:14.535129+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:15.535377+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:16.535641+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:17.535902+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:18.536167+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:19.536366+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:20.536562+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:21.536730+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:22.536931+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:23.537169+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:24.537423+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:25.537674+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:26.537864+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:27.538103+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:28.538358+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:29.538538+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:30.538689+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:31.538869+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:32.539048+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:33.539245+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:34.539419+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401039360 unmapped: 57032704 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:35.539618+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 57024512 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:36.539883+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 57024512 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:37.540035+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.556964874s of 45.118488312s, submitted: 26
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 57024512 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc8c0000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:38.540187+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387129 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 58638336 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc8c03c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:39.540384+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 58638336 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:40.540531+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 58638336 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:41.540689+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 58638336 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:42.540839+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401063936 unmapped: 58630144 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a481f000/0x0/0x1bfc00000, data 0x2ecf5c7/0x30ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:43.540982+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387073 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401063936 unmapped: 58630144 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:44.541148+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401063936 unmapped: 58630144 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:45.541382+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401063936 unmapped: 58630144 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9d97f61e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:46.541656+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a481f000/0x0/0x1bfc00000, data 0x2ecf5c7/0x30ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 58482688 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:47.541844+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2efc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402071552 unmapped: 57622528 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:48.542008+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4470002 data_alloc: 234881024 data_used: 22339584
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:49.542166+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:50.542317+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:51.542534+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a47fb000/0x0/0x1bfc00000, data 0x2ef35c7/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:52.542759+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:53.543153+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a47fb000/0x0/0x1bfc00000, data 0x2ef35c7/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4470002 data_alloc: 234881024 data_used: 22339584
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a47fb000/0x0/0x1bfc00000, data 0x2ef35c7/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:54.543302+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:55.543447+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:56.543617+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:57.543813+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.682670593s of 20.474929810s, submitted: 36
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:58.543976+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4561438 data_alloc: 234881024 data_used: 22896640
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a47fb000/0x0/0x1bfc00000, data 0x2ef35c7/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [1])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410664960 unmapped: 49029120 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:59.544110+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3cdb000/0x0/0x1bfc00000, data 0x3a135c7/0x3c33000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:00.544224+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:01.544409+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:02.544648+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3cdb000/0x0/0x1bfc00000, data 0x3a135c7/0x3c33000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:03.544876+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4577302 data_alloc: 234881024 data_used: 24141824
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:04.545040+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:05.545341+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:06.545525+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:07.545751+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3cd8000/0x0/0x1bfc00000, data 0x3a165c7/0x3c36000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:08.545974+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4578070 data_alloc: 234881024 data_used: 24170496
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:09.546235+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:10.546467+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.194355011s of 12.445869446s, submitted: 104
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:11.546697+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:12.546927+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3cd6000/0x0/0x1bfc00000, data 0x3a185c7/0x3c38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2efc00 session 0x55f9dc376000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9dccf45a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410370048 unmapped: 49324032 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:13.547033+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74a1e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:14.547270+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:15.547570+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:16.547763+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:17.547946+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:18.548212+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:19.548341+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:20.548582+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:21.548841+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:22.548986+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:23.549200+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:24.549349+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:25.549521+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:26.549672+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:27.549975+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:28.550186+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:29.550410+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:30.550558+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:31.550798+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:32.551039+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:33.551185+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:34.551328+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:35.551666+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:36.551882+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:37.552142+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:38.552329+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:39.552513+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:40.552660+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:41.552795+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:42.552977+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcce8f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc74be00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc6fef00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcce9680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.127571106s of 32.264827728s, submitted: 39
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:43.553149+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52ba000/0x0/0x1bfc00000, data 0x2433575/0x2653000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387796 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405553152 unmapped: 62013440 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e9000/0x0/0x1bfc00000, data 0x2d05575/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc5c3c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc5de00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9d97b0d20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c5c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:44.553362+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc5c5c00 session 0x55f9daa22960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc2c21e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e9000/0x0/0x1bfc00000, data 0x2d05575/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:45.553576+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:46.553788+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:47.553992+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e9000/0x0/0x1bfc00000, data 0x2d05575/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:48.554197+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387796 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:49.554416+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcbb0960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e9000/0x0/0x1bfc00000, data 0x2d05575/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d978e960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:50.554549+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9d978c3c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:51.554730+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2800 session 0x55f9dcc465a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405577728 unmapped: 61988864 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:52.554938+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405938176 unmapped: 61628416 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:53.555399+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458772 data_alloc: 218103808 data_used: 18571264
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:54.555603+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e7000/0x0/0x1bfc00000, data 0x2d055a8/0x2f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:55.555801+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e7000/0x0/0x1bfc00000, data 0x2d055a8/0x2f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:56.555990+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e7000/0x0/0x1bfc00000, data 0x2d055a8/0x2f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:57.556140+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:58.556306+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458772 data_alloc: 218103808 data_used: 18571264
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:59.556560+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:00.556792+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:01.557023+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:02.557266+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e7000/0x0/0x1bfc00000, data 0x2d055a8/0x2f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.095100403s of 20.558338165s, submitted: 21
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:03.557427+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4487928 data_alloc: 218103808 data_used: 18571264
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409821184 unmapped: 57745408 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:04.557555+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcae32c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9dcae23c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc5c6800 session 0x55f9dc74a3c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316400 session 0x55f9dc684f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae8400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409821184 unmapped: 57745408 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:05.557760+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411303936 unmapped: 60465152 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:06.558290+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dcae8400 session 0x55f9daa23860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc774a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9db86eb40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 60342272 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:07.558426+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316400 session 0x55f9dcae2780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc5c6800 session 0x55f9dcc50f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 60268544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:08.558624+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4592091 data_alloc: 218103808 data_used: 18952192
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 60268544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:09.558802+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc3adc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 60268544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:10.559042+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 60268544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:11.559264+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:12.559413+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:13.559568+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4640731 data_alloc: 234881024 data_used: 24547328
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:14.559698+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:15.559943+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:16.560228+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:17.560380+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:18.560607+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4640731 data_alloc: 234881024 data_used: 24547328
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:19.560880+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:20.561181+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:21.561407+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.450225830s of 18.951761246s, submitted: 87
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:22.561562+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 54599680 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:23.561747+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4729823 data_alloc: 234881024 data_used: 25812992
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 54599680 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:24.561965+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2f04000/0x0/0x1bfc00000, data 0x47e561a/0x4a09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e68000/0x0/0x1bfc00000, data 0x488261a/0x4aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:25.562222+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:26.562493+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:27.562702+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e68000/0x0/0x1bfc00000, data 0x488261a/0x4aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e68000/0x0/0x1bfc00000, data 0x488261a/0x4aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:28.562913+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4732831 data_alloc: 234881024 data_used: 25894912
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:29.563145+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e44000/0x0/0x1bfc00000, data 0x48a661a/0x4aca000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:30.563314+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:31.563454+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:32.563658+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:33.563831+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4731187 data_alloc: 234881024 data_used: 25903104
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:34.564019+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:35.564268+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e44000/0x0/0x1bfc00000, data 0x48a661a/0x4aca000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.692587852s of 13.324972153s, submitted: 124
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:36.564406+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc3adc00 session 0x55f9dc3763c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:37.564785+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:38.564979+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475045 data_alloc: 218103808 data_used: 13631488
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc5d4a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:39.565118+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:40.565348+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:41.565612+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4269000/0x0/0x1bfc00000, data 0x31775a8/0x3399000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:42.565803+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:43.565967+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc60780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcbb0b40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473859 data_alloc: 218103808 data_used: 13627392
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411877376 unmapped: 59891712 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:44.566139+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4576000/0x0/0x1bfc00000, data 0x3177598/0x3398000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9dccf4b40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:45.566353+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:46.566515+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:47.566757+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:48.567015+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:49.567209+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:50.567463+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:51.567721+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:52.567868+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:53.568155+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:54.568402+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:55.568656+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:56.568842+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:57.569054+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:58.569296+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:59.569478+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:00.570216+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:01.570448+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:02.570604+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:03.570759+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:04.570905+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:05.571130+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:06.571301+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:07.571454+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:08.571793+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:09.571925+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:10.572062+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:11.572245+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:12.572461+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:13.572631+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:14.572759+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:15.572991+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:16.573151+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:17.573310+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316400 session 0x55f9dc74a3c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcae23c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae32c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d978c3c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.101123810s of 42.489017487s, submitted: 57
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9d978e960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413040640 unmapped: 58728448 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:18.573469+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc5c6800 session 0x55f9dcc5c3c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4390501 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:19.573617+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:20.573773+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:21.574009+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:22.574208+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c61000/0x0/0x1bfc00000, data 0x2a8d5c7/0x2cad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74be00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:23.574385+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcce8f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4390501 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c61000/0x0/0x1bfc00000, data 0x2a8d5c7/0x2cad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:24.574534+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc74a1e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9dccf45a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:25.574705+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc26800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db9aac00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:26.574834+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c60000/0x0/0x1bfc00000, data 0x2a8d5d7/0x2cae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:27.574961+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:28.575181+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4438739 data_alloc: 218103808 data_used: 17670144
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:29.575330+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c60000/0x0/0x1bfc00000, data 0x2a8d5d7/0x2cae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:30.575474+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:31.575631+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-31T09:04:32.575776+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _finish_auth 0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:32.576525+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:33.575969+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 58834944 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c60000/0x0/0x1bfc00000, data 0x2a8d5d7/0x2cae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4438739 data_alloc: 218103808 data_used: 17670144
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:34.576168+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 58834944 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:35.576339+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 58834944 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:36.576490+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 58834944 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.828681946s of 18.997266769s, submitted: 30
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:37.576675+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413138944 unmapped: 58630144 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:38.576826+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 57638912 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490041 data_alloc: 218103808 data_used: 17670144
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:39.577047+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 57638912 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a457d000/0x0/0x1bfc00000, data 0x31705d7/0x3391000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:40.577224+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 57638912 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:41.577553+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4467000/0x0/0x1bfc00000, data 0x32865d7/0x34a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:42.577818+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:43.578184+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4508283 data_alloc: 218103808 data_used: 18108416
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:44.578473+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:45.578906+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:46.579056+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:47.579252+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4446000/0x0/0x1bfc00000, data 0x32a75d7/0x34c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:48.579434+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506303 data_alloc: 218103808 data_used: 18112512
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:49.579570+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:50.579706+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.345264435s of 14.012647629s, submitted: 88
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:51.579859+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db9aac00 session 0x55f9dcae3c20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dcc26800 session 0x55f9d97f61e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:52.580013+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc684f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:53.580271+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:54.580512+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:55.580819+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:56.580963+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:57.581211+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:58.581511+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:59.581693+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:00.581874+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:01.582182+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:02.582411+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:03.582569+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:04.582757+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:05.583896+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:06.584312+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:07.584981+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:08.585331+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:09.586038+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:10.586402+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:11.586583+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:12.586743+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:13.587233+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:14.587402+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:15.587618+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:16.587849+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:17.588038+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:18.588318+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:19.588582+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:20.588736+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:21.589047+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:22.589371+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:23.589543+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:24.589790+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:25.590176+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:26.590443+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:27.590777+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:28.591005+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:29.591319+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:30.591590+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:31.591795+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:32.592058+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:33.592350+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:34.592612+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412540928 unmapped: 59228160 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:35.592886+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:36.593235+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:37.593880+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:38.594135+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:39.594261+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [1])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:40.594593+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:41.595034+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:42.595430+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:43.595703+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:44.595877+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:45.596144+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:46.596333+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:47.596692+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412565504 unmapped: 59203584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:48.597027+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412565504 unmapped: 59203584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:49.597183+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412565504 unmapped: 59203584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:50.597379+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:51.597631+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:52.597736+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:53.597930+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:54.598217+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:55.598505+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc76b40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dccf5680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9db568f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd0f4a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:56.599024+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 65.396324158s of 65.480628967s, submitted: 24
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 58662912 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc772c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:57.599179+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412581888 unmapped: 59187200 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:58.599493+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 61K writes, 228K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 61K writes, 22K syncs, 2.72 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2886 writes, 10K keys, 2886 commit groups, 1.0 writes per commit group, ingest: 10.81 MB, 0.02 MB/s
                                           Interval WAL: 2886 writes, 1146 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:59.599696+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4359000 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:00.599925+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51ba000/0x0/0x1bfc00000, data 0x2535565/0x2754000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:01.600143+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:02.600305+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51ba000/0x0/0x1bfc00000, data 0x2535565/0x2754000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [2])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:03.600538+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d97bb0e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412598272 unmapped: 59170816 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc26800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dcc26800 session 0x55f9dcca5e00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:04.600721+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4359000 data_alloc: 218103808 data_used: 11210752
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc28800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dcc28800 session 0x55f9dccf43c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412606464 unmapped: 59162624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dccf5860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:05.600933+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412614656 unmapped: 59154432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:06.601135+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412614656 unmapped: 59154432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51b8000/0x0/0x1bfc00000, data 0x2535598/0x2756000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: mgrc ms_handle_reset ms_handle_reset con 0x55f9dc3b1c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3465938080
Jan 31 09:15:05 compute-0 ceph-osd[84816]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3465938080,v1:192.168.122.100:6801/3465938080]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: get_auth_request con 0x55f9dc2ee800 auth_method 0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: mgrc handle_mgr_configure stats_period=5
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:07.601263+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:08.601472+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:09.601654+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4372120 data_alloc: 218103808 data_used: 12267520
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:10.602554+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51b8000/0x0/0x1bfc00000, data 0x2535598/0x2756000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9e0186800 session 0x55f9dc6ffa40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc26800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:11.603602+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:12.603982+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:13.604800+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:14.605062+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4372120 data_alloc: 218103808 data_used: 12267520
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:15.605702+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51b8000/0x0/0x1bfc00000, data 0x2535598/0x2756000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 59088896 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:16.606167+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 59088896 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:17.606448+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.013828278s of 21.077655792s, submitted: 14
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 56639488 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:18.606608+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 58138624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:19.607202+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444560 data_alloc: 218103808 data_used: 12468224
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 58138624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:20.607748+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 58138624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:21.608462+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a35ed000/0x0/0x1bfc00000, data 0x2f60598/0x3181000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 58138624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:22.608890+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 58130432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:23.609173+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 58130432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:24.609362+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444560 data_alloc: 218103808 data_used: 12468224
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 58130432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:25.609658+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a35ed000/0x0/0x1bfc00000, data 0x2f60598/0x3181000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:26.609905+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:27.610169+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a35ed000/0x0/0x1bfc00000, data 0x2f60598/0x3181000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:28.610300+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:29.610466+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444560 data_alloc: 218103808 data_used: 12468224
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:30.610597+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc50960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.481883049s of 13.268753052s, submitted: 40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9da9b7680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413655040 unmapped: 58114048 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:31.610776+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413655040 unmapped: 58114048 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:32.610882+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f5c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 414 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413663232 unmapped: 58105856 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a35ed000/0x0/0x1bfc00000, data 0x2f60598/0x3181000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:33.611026+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 415 ms_handle_reset con 0x55f9db4f5c00 session 0x55f9dcc46780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc50d000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413679616 unmapped: 58089472 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 415 ms_handle_reset con 0x55f9dc50d000 session 0x55f9dcc77860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 415 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc91e780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:34.611161+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480537 data_alloc: 218103808 data_used: 14368768
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 428646400 unmapped: 43122688 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:35.611364+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 56844288 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 415 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74b2c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:36.611504+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 56844288 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 416 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcca5c20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:37.611644+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 56836096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:38.612006+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 56827904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a28b8000/0x0/0x1bfc00000, data 0x3c90f00/0x3eb5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:39.612243+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4597081 data_alloc: 234881024 data_used: 22953984
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc8c03c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:40.612601+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f5c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9db4f5c00 session 0x55f9dba3f860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:41.612915+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc461e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcca4d20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:42.613107+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:43.613282+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a28b4000/0x0/0x1bfc00000, data 0x3c92bd7/0x3eb9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:44.613491+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4597025 data_alloc: 234881024 data_used: 22949888
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:45.613703+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.845607758s of 14.748682022s, submitted: 86
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a28b4000/0x0/0x1bfc00000, data 0x3c92bd7/0x3eb9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [0,0,0,1,0,1])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 61841408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc685860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:46.613919+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:47.614095+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:48.614231+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:49.614387+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482406 data_alloc: 218103808 data_used: 11235328
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a33df000/0x0/0x1bfc00000, data 0x31676e3/0x338d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:50.614474+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:51.614639+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:52.614783+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:53.614989+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:54.615213+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482406 data_alloc: 218103808 data_used: 11235328
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a33df000/0x0/0x1bfc00000, data 0x31676e3/0x338d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9deeb4000 session 0x55f9da74fa40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:55.615426+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc3b0000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9dc3b0000 session 0x55f9dc74b2c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc91e780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc77860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.986372948s of 10.231092453s, submitted: 41
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc6ff4a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc46780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c5c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9dc5c5c00 session 0x55f9dcc50960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6ffa40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc9c3860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dccf5860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:56.615572+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dcca5e00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9deeb4800 session 0x55f9dccf41e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 53477376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a28f9000/0x0/0x1bfc00000, data 0x3c4e745/0x3e75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:57.615685+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd112c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 53477376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd0f4a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9db4f2400 session 0x55f9db568f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:58.615855+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 53477376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0f05000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:59.616015+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651220 data_alloc: 234881024 data_used: 28344320
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 419 ms_handle_reset con 0x55f9deeb4800 session 0x55f9dcae25a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:00.616165+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:01.616354+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a361d000/0x0/0x1bfc00000, data 0x2f2333e/0x314a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:02.616577+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a361d000/0x0/0x1bfc00000, data 0x2f2333e/0x314a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:03.616751+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:04.616896+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512892 data_alloc: 218103808 data_used: 14528512
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:05.617165+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:06.617307+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.177050591s of 10.527448654s, submitted: 78
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 62357504 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:07.617421+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 62357504 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:08.617572+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3620000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 62357504 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:09.617850+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506170 data_alloc: 218103808 data_used: 14741504
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 62349312 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3620000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [0,0,0,0,10,2])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:10.618010+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3620000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:11.618173+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:12.618347+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:13.618458+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:14.618597+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515714 data_alloc: 218103808 data_used: 15527936
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:15.618825+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3621000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:16.618987+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:17.619200+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3621000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:18.619385+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:19.619558+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3621000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517122 data_alloc: 218103808 data_used: 16220160
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:20.624052+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.357772827s of 14.491651535s, submitted: 33
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:21.624310+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9e0f05000 session 0x55f9dc684f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dcc76b40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:22.624493+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 62185472 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:23.624752+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3621000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 62185472 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:24.624894+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395578 data_alloc: 218103808 data_used: 11243520
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 62201856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:25.625116+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74bc20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:26.625235+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:27.625447+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4105000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:28.625602+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:29.625768+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394093 data_alloc: 218103808 data_used: 11243520
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:30.625956+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:31.626123+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:32.626301+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:33.626557+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4105000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:34.626819+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394093 data_alloc: 218103808 data_used: 11243520
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:35.627160+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:36.627457+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:37.627618+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4105000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:38.627763+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:39.627923+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4105000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394093 data_alloc: 218103808 data_used: 11243520
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.817047119s of 19.326761246s, submitted: 21
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414474240 unmapped: 61145088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:40.628064+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dab334a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcd11e00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:41.628305+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:42.628476+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dc84dc00 session 0x55f9dcc77a40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:43.628628+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:44.628778+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a410a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4393917 data_alloc: 218103808 data_used: 11243520
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:45.628939+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd110e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a410a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dccf52c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 61054976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dab33680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:46.629134+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 61046784 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:47.629309+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 61046784 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:48.629473+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414531584 unmapped: 61087744 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:49.629614+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4436028 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 61054976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:50.629807+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.915438652s of 10.848832130s, submitted: 85
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 61054976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:51.629953+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 61038592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:52.630178+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 61038592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:53.630322+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 61038592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:54.630478+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4436100 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 60981248 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:55.630625+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414711808 unmapped: 60907520 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:56.630775+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 60874752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:57.630989+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 60874752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:58.631157+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 60874752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:59.631279+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462880 data_alloc: 218103808 data_used: 14966784
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:00.631442+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:01.631588+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:02.631783+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:03.631934+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:04.632095+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462880 data_alloc: 218103808 data_used: 14966784
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:05.632270+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:06.632482+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:07.632618+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414760960 unmapped: 60858368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:08.632776+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414760960 unmapped: 60858368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:09.632884+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.021949768s of 18.781620026s, submitted: 346
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3bb8000/0x0/0x1bfc00000, data 0x298fe0b/0x2bb6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [0,0,0,0,0,14])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4492144 data_alloc: 218103808 data_used: 14999552
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418996224 unmapped: 56623104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:10.633035+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:11.633235+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:12.633401+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:13.633607+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:14.633729+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517650 data_alloc: 218103808 data_used: 15253504
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:15.633943+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2128000/0x0/0x1bfc00000, data 0x2e6ee0b/0x3095000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:16.634150+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:17.634280+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2128000/0x0/0x1bfc00000, data 0x2e6ee0b/0x3095000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:18.634514+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:19.635013+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517650 data_alloc: 218103808 data_used: 15253504
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:20.635657+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.549825668s of 10.879747391s, submitted: 70
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:21.635835+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:22.636002+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:23.636668+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4800 session 0x55f9da9b74a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:24.636817+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515310 data_alloc: 218103808 data_used: 15253504
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:25.637159+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:26.637904+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcad5000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcad5000 session 0x55f9dcd0f0e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:27.638232+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:28.638875+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:29.639141+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515310 data_alloc: 218103808 data_used: 15253504
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:30.639627+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:31.639825+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:32.640126+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:33.640276+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:34.640655+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515310 data_alloc: 218103808 data_used: 15253504
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:35.640886+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:36.641194+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:37.641499+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:38.641807+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:39.641996+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515310 data_alloc: 218103808 data_used: 15253504
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:40.642223+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:41.642421+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:42.642597+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.457544327s of 21.756736755s, submitted: 12
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9da9b74a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:43.642838+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:44.643020+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419577856 unmapped: 56041472 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518790 data_alloc: 218103808 data_used: 15290368
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:45.643339+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419586048 unmapped: 56033280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:46.643668+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:47.643858+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:48.644016+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:49.644199+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518790 data_alloc: 218103808 data_used: 15290368
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:50.644347+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:51.644528+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:52.644666+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:53.644816+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:54.644972+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419602432 unmapped: 56016896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.232875824s of 12.288291931s, submitted: 2
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519590 data_alloc: 218103808 data_used: 15310848
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:55.645126+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419610624 unmapped: 56008704 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:56.645239+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:57.645360+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:58.645508+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20fd000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:59.645665+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:00.645824+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527346 data_alloc: 218103808 data_used: 15941632
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:01.645986+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:02.646201+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419667968 unmapped: 55951360 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:03.646353+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 55943168 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:04.646525+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 55943168 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:05.646728+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527698 data_alloc: 218103808 data_used: 15937536
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 55943168 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:06.646951+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:07.647102+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.876531601s of 13.426343918s, submitted: 10
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:08.647248+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:09.647408+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:10.647571+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527346 data_alloc: 218103808 data_used: 15937536
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:11.647694+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:12.647852+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:13.648000+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:14.648161+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:15.648339+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527170 data_alloc: 218103808 data_used: 15937536
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:16.648484+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:17.648639+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:18.648791+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:19.648919+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dab33680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.516279221s of 11.650573730s, submitted: 6
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9da73bc20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20f7000/0x0/0x1bfc00000, data 0x2e9de0b/0x30c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:20.649168+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4529174 data_alloc: 218103808 data_used: 16519168
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20fa000/0x0/0x1bfc00000, data 0x2e9de0b/0x30c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4800 session 0x55f9dcc77860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:21.649302+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:22.649452+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a211e000/0x0/0x1bfc00000, data 0x2e79e0b/0x30a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a211e000/0x0/0x1bfc00000, data 0x2e79e0b/0x30a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:23.649610+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dc74b2c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcca4d20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:24.649773+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419700736 unmapped: 55918592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:25.650024+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419700736 unmapped: 55918592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:26.650146+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419700736 unmapped: 55918592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:27.650287+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:28.650504+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:29.650693+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:30.650880+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:31.651034+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:32.651191+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:33.651367+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:34.651510+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:35.651698+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:36.651866+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:37.652126+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:38.652253+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:39.652418+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:40.652571+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:41.652731+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:42.652870+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:43.652967+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:44.653053+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:45.653198+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:46.653320+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:47.653433+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:48.653572+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:49.653747+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:50.653890+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419741696 unmapped: 55877632 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:51.654025+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:52.654213+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:53.654379+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:54.654537+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:55.655441+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:56.657568+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:57.658113+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:58.658675+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:59.659606+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:00.661582+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:01.662285+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:02.662422+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:03.662573+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:04.662719+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:05.662955+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:06.663235+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:07.663504+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:08.663774+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:09.664035+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:10.664473+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:11.664667+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:12.665138+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:13.665510+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:14.666179+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:15.666723+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:16.666929+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:17.667158+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:18.667328+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:19.667511+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:20.668006+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:21.668199+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:22.668464+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419799040 unmapped: 55820288 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:23.668637+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419807232 unmapped: 55812096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:24.668849+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419807232 unmapped: 55812096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:25.669133+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:26.669447+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:27.669621+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:28.669785+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:29.670602+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:30.671436+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:31.672157+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:32.672373+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:33.672727+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:34.672950+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:35.673184+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:36.673357+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:37.673514+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:38.673676+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:39.673858+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:40.673996+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:41.674280+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:42.674585+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:43.674734+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:44.674912+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:45.675201+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419848192 unmapped: 55771136 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:46.675442+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419848192 unmapped: 55771136 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:47.675652+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419848192 unmapped: 55771136 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:48.675809+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419848192 unmapped: 55771136 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:49.676053+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9da9b6f00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:50.676339+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:51.676569+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:52.676755+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:53.676917+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:54.677114+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419864576 unmapped: 55754752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:55.677378+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419864576 unmapped: 55754752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:56.677857+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419864576 unmapped: 55754752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:57.678168+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:58.678312+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:59.678521+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:00.678674+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:01.678839+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:02.679036+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:03.679188+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:04.679403+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:05.679648+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:06.679840+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:07.679970+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:08.717316+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:09.717469+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419889152 unmapped: 55730176 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:10.717618+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419897344 unmapped: 55721984 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:11.717827+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419905536 unmapped: 55713792 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:12.718021+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419905536 unmapped: 55713792 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:13.718196+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419905536 unmapped: 55713792 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:14.718367+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dc9c32c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:15.718614+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:16.718765+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:17.719014+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc74a780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:18.719168+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4800 session 0x55f9daa22000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:19.719311+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.491500854s of 119.652130127s, submitted: 30
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:20.719464+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9db870960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408173 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:21.719738+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:22.719981+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x243de1b/0x2665000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:23.720157+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:24.720300+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:25.720516+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408305 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:26.720827+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 55689216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:27.720991+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x243de1b/0x2665000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 55689216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:28.721206+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 55689216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:29.721426+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 55689216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.795152664s of 10.765261650s, submitted: 7
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcd11a40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:30.721581+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcce9680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x243de1b/0x2665000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:31.721771+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:32.721978+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:33.722223+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:34.722361+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419946496 unmapped: 55672832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:35.722630+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419946496 unmapped: 55672832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:36.722816+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419946496 unmapped: 55672832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:37.723003+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 55664640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:38.723147+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc74a3c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 55664640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:39.723284+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 55664640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:40.723488+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 55664640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:41.723649+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:42.723797+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:43.723926+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:44.724108+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc27000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.601638794s of 14.800266266s, submitted: 18
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcc27000 session 0x55f9dc8c1a40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:45.724361+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9daa234a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:46.724503+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:47.724649+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419971072 unmapped: 55648256 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:48.724820+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9da029680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419971072 unmapped: 55648256 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:49.724987+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419979264 unmapped: 55640064 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:50.725162+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408462 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419979264 unmapped: 55640064 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:51.725328+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419979264 unmapped: 55640064 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:52.725520+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x243de6d/0x2665000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 55631872 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:53.725661+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 55631872 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:54.725813+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcc770e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419995648 unmapped: 55623680 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.067562103s of 10.347117424s, submitted: 14
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:55.726030+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4469680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420200448 unmapped: 55418880 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:56.726190+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9d9f1be00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420200448 unmapped: 55418880 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:57.726343+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dccf5e00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2375000/0x0/0x1bfc00000, data 0x2c21e6d/0x2e49000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420200448 unmapped: 55418880 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:58.726527+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420216832 unmapped: 55402496 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:59.726712+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd67e00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:00.726867+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473158 data_alloc: 218103808 data_used: 11259904
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:01.727056+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:02.727217+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:03.727457+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:04.727677+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:05.728167+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473158 data_alloc: 218103808 data_used: 11259904
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:06.728373+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:07.728545+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:08.728707+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:09.728961+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:10.729156+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.737280846s of 15.252003670s, submitted: 30
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9db4f2400 session 0x55f9da9b5860
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474940 data_alloc: 218103808 data_used: 11259904
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:11.729351+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:12.729585+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:13.729818+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:14.729955+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:15.730191+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dc6850e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2370000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474940 data_alloc: 218103808 data_used: 11259904
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:16.730321+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dcc510e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:17.730521+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc685a40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:18.730665+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2370000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:19.730829+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc6fe960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2370000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:20.730971+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474940 data_alloc: 218103808 data_used: 11259904
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420265984 unmapped: 55353344 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:21.731138+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:22.731268+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:23.731396+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:24.731538+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2370000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d97f9c20
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:25.731690+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533820 data_alloc: 218103808 data_used: 19533824
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:26.731898+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:27.732150+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.701858521s of 17.707544327s, submitted: 1
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9dcae9800 session 0x55f9da74fa40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:28.732331+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:29.732536+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:30.732722+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4532764 data_alloc: 218103808 data_used: 19533824
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:31.732876+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:32.733031+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:33.733249+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dcc77a40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:34.733472+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:35.733686+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2372000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dcc470e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4531891 data_alloc: 218103808 data_used: 19533824
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2372000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:36.733856+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421543936 unmapped: 54075392 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:37.733983+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 422 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae25a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421543936 unmapped: 54075392 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:38.734206+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.901397705s of 10.784977913s, submitted: 26
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:39.734365+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421552128 unmapped: 54067200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a236f000/0x0/0x1bfc00000, data 0x2c25773/0x2e4f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:40.734492+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421552128 unmapped: 54067200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 422 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d978c3c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420203 data_alloc: 218103808 data_used: 11268096
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:41.734664+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:42.734844+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2ad3000/0x0/0x1bfc00000, data 0x2441711/0x266a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:43.735062+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:44.735295+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:45.735564+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 422 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcc77a40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2ad3000/0x0/0x1bfc00000, data 0x2441711/0x266a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420027 data_alloc: 218103808 data_used: 11268096
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 422 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dc6fe960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:46.735707+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421584896 unmapped: 54034432 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:47.735870+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:48.736184+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c5000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9dc5c5000 session 0x55f9dc6850e0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:49.736397+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:50.736582+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2b4f000/0x0/0x1bfc00000, data 0x24432b2/0x266e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425983 data_alloc: 218103808 data_used: 11276288
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:51.736731+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.769523621s of 12.610586166s, submitted: 33
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2b4f000/0x0/0x1bfc00000, data 0x24432b2/0x266e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:52.736859+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:53.736979+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd67e00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:54.737142+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9db4f2400 session 0x55f9da029680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:55.737323+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442164 data_alloc: 218103808 data_used: 11276288
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:56.737461+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x25bd250/0x27e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:57.737646+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:58.737810+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:59.738140+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:00.738357+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x25bd250/0x27e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442164 data_alloc: 218103808 data_used: 11276288
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:01.738620+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:02.738759+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:03.738918+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x25bd250/0x27e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:04.739117+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:05.739513+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442164 data_alloc: 218103808 data_used: 11276288
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:06.739655+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:07.739851+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x25bd250/0x27e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:08.740045+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:09.740980+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:10.744262+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442164 data_alloc: 218103808 data_used: 11276288
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:11.744819+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcd11a40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9db870960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:12.745303+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9dcae7800 session 0x55f9daa22000
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.496446609s of 21.687971115s, submitted: 18
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:13.745716+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d5000/0x0/0x1bfc00000, data 0x25bd283/0x27e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc9c32c0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:14.745849+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:15.746653+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449537 data_alloc: 218103808 data_used: 11407360
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:16.746835+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:17.747122+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:18.747628+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:19.747965+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29ab000/0x0/0x1bfc00000, data 0x25e7283/0x2813000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:20.748282+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449697 data_alloc: 218103808 data_used: 11415552
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:21.748608+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:22.748784+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:23.748941+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 55484416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29ab000/0x0/0x1bfc00000, data 0x25e7283/0x2813000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:24.749124+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 55484416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:25.749387+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 55484416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:26.749535+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449697 data_alloc: 218103808 data_used: 11415552
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 55484416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.650345802s of 14.049152374s, submitted: 6
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:27.749677+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421191680 unmapped: 54427648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:05.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:28.749831+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423395328 unmapped: 52224000 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a289f000/0x0/0x1bfc00000, data 0x26eb283/0x2917000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:29.750011+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422731776 unmapped: 52887552 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a23e9000/0x0/0x1bfc00000, data 0x2ba9283/0x2dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:30.750152+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422731776 unmapped: 52887552 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:31.750259+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512563 data_alloc: 218103808 data_used: 11608064
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:32.750411+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2345000/0x0/0x1bfc00000, data 0x2c4c283/0x2e78000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:33.750587+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:34.750770+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:35.750941+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:36.751131+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512723 data_alloc: 218103808 data_used: 11612160
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2345000/0x0/0x1bfc00000, data 0x2c4c283/0x2e78000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:37.751297+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:38.751477+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:39.751697+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:40.751877+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:41.752152+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4510851 data_alloc: 218103808 data_used: 11612160
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:42.752469+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:43.752980+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:44.753759+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:45.753957+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:46.754253+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4510851 data_alloc: 218103808 data_used: 11612160
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:47.754492+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.587087631s of 20.651416779s, submitted: 74
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:48.754970+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:49.755112+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a231e000/0x0/0x1bfc00000, data 0x2c74283/0x2ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:50.755220+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a231e000/0x0/0x1bfc00000, data 0x2c74283/0x2ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:51.755464+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4510543 data_alloc: 218103808 data_used: 11612160
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:52.755626+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:53.755755+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:54.755963+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:55.756231+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a231e000/0x0/0x1bfc00000, data 0x2c74283/0x2ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423034880 unmapped: 52584448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:56.756547+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4516181 data_alloc: 218103808 data_used: 11620352
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dcc76780
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:57.756703+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:58.756866+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.948966980s of 10.389080048s, submitted: 6
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:59.757070+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:00.757296+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2310000/0x0/0x1bfc00000, data 0x2e35bb5/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:01.757514+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4535043 data_alloc: 218103808 data_used: 11620352
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:02.757713+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230e000/0x0/0x1bfc00000, data 0x3060bb5/0x2eb0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:03.757958+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:04.758280+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230e000/0x0/0x1bfc00000, data 0x3060bb5/0x2eb0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:05.758604+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:06.758794+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4553405 data_alloc: 218103808 data_used: 11620352
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 52518912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:07.759023+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 52518912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb6f800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84c400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dc84c400 session 0x55f9dc8c1a40
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dbb6f800 session 0x55f9d9f1b680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:08.759179+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 52518912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:09.759317+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 52518912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:10.759446+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060c17/0x2eb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423108608 unmapped: 52510720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:11.759638+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4555587 data_alloc: 218103808 data_used: 11624448
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060c17/0x2eb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423116800 unmapped: 52502528 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060c17/0x2eb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:12.759818+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423116800 unmapped: 52502528 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:13.759964+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423116800 unmapped: 52502528 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:14.760157+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9dc74a5a0
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 52494336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:15.760420+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9dc8c1680
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 52494336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc9c2960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb6f800
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.692956924s of 17.884611130s, submitted: 6
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:16.760706+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4557425 data_alloc: 218103808 data_used: 11624448
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 52494336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060c17/0x2eb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:17.760954+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dbb6f800 session 0x55f9dccf4960
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:18.761165+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84c400
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:19.761362+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:20.761547+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:21.761726+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559652 data_alloc: 218103808 data_used: 11677696
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:22.761928+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:23.762147+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:24.762293+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:25.762458+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:26.762599+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559652 data_alloc: 218103808 data_used: 11677696
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:27.762759+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:28.762885+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:29.762998+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:30.763150+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:31.763291+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.835231781s of 15.268264771s, submitted: 5
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:15:05 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:15:05 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4566379 data_alloc: 218103808 data_used: 13340672
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 52494336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'config diff' '{prefix=config diff}'
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'config show' '{prefix=config show}'
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:32.763425+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422379520 unmapped: 53239808 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:33.763594+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:15:05 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:34.763749+0000)
Jan 31 09:15:05 compute-0 ceph-osd[84816]: do_command 'log dump' '{prefix=log dump}'
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46558 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:05 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:05.257+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51755 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37731 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 sudo[423054]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/816330882' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:05.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b99150f6-b031-4237-bf6b-809f63a223ba does not exist
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev fea03731-4721-40d6-bd79-eb00be8d231e does not exist
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1b599902-df1d-4df5-94c8-b5553d3a208c does not exist
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.37686 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.46510 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.51716 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: pgmap v4308: 305 pgs: 305 active+clean; 213 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 470 KiB/s wr, 90 op/s
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2641837516' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.37695 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.46519 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.37710 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3646074436' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4270911575' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.51740 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.37722 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2476336237' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2823572616' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2682141508' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/816330882' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/758987683' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3690361327' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:15:05 compute-0 sudo[423275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:05 compute-0 sudo[423275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:05 compute-0 sudo[423275]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51782 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:05 compute-0 sudo[423300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:15:05 compute-0 sudo[423300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:05 compute-0 sudo[423300]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3864898821' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:05 compute-0 sudo[423325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:05 compute-0 sudo[423325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:05 compute-0 sudo[423325]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:05 compute-0 sudo[423355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:15:05 compute-0 sudo[423355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 09:15:05 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2887442106' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:15:05 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37755 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51803 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4309: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 494 KiB/s wr, 78 op/s
Jan 31 09:15:06 compute-0 podman[423465]: 2026-01-31 09:15:06.15024146 +0000 UTC m=+0.076468033 container create 734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_morse, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 09:15:06 compute-0 systemd[1]: Started libpod-conmon-734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689.scope.
Jan 31 09:15:06 compute-0 podman[423465]: 2026-01-31 09:15:06.095685425 +0000 UTC m=+0.021912018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:15:06 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46591 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:15:06 compute-0 podman[423465]: 2026-01-31 09:15:06.254094713 +0000 UTC m=+0.180321296 container init 734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_morse, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 09:15:06 compute-0 podman[423465]: 2026-01-31 09:15:06.263053149 +0000 UTC m=+0.189279722 container start 734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_morse, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:15:06 compute-0 podman[423465]: 2026-01-31 09:15:06.269017813 +0000 UTC m=+0.195244406 container attach 734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_morse, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:15:06 compute-0 youthful_morse[423484]: 167 167
Jan 31 09:15:06 compute-0 systemd[1]: libpod-734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689.scope: Deactivated successfully.
Jan 31 09:15:06 compute-0 podman[423465]: 2026-01-31 09:15:06.271524874 +0000 UTC m=+0.197751447 container died 734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_morse, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 09:15:06 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37782 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 09:15:06 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531390572' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1eebd164ac81d713ace26cb10f8cbc2f77cd97ea3114f58e7beb480ac5e2c195-merged.mount: Deactivated successfully.
Jan 31 09:15:06 compute-0 podman[423465]: 2026-01-31 09:15:06.335163147 +0000 UTC m=+0.261389720 container remove 734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_morse, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 09:15:06 compute-0 systemd[1]: libpod-conmon-734aaabc1c65bc7d87e725fcdec16c6f6f459d0391efd132763da072d6835689.scope: Deactivated successfully.
Jan 31 09:15:06 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51821 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:06 compute-0 podman[423551]: 2026-01-31 09:15:06.527940483 +0000 UTC m=+0.077335405 container create 4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 09:15:06 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46609 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:06 compute-0 systemd[1]: Started libpod-conmon-4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3.scope.
Jan 31 09:15:06 compute-0 podman[423551]: 2026-01-31 09:15:06.48175212 +0000 UTC m=+0.031147062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:15:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7973f3d7902a408e98bb907211cd47da426b7b0747933af65aba089e85ac6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7973f3d7902a408e98bb907211cd47da426b7b0747933af65aba089e85ac6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7973f3d7902a408e98bb907211cd47da426b7b0747933af65aba089e85ac6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7973f3d7902a408e98bb907211cd47da426b7b0747933af65aba089e85ac6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f7973f3d7902a408e98bb907211cd47da426b7b0747933af65aba089e85ac6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:06 compute-0 podman[423551]: 2026-01-31 09:15:06.63778818 +0000 UTC m=+0.187183162 container init 4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:15:06 compute-0 podman[423551]: 2026-01-31 09:15:06.647130875 +0000 UTC m=+0.196525807 container start 4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:15:06 compute-0 podman[423551]: 2026-01-31 09:15:06.676222227 +0000 UTC m=+0.225617149 container attach 4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 31 09:15:06 compute-0 crontab[423622]: (root) LIST (root)
Jan 31 09:15:06 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37791 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.46558 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.51755 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.37731 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2139579517' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.51782 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3864898821' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2887442106' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2939993999' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/345464744' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2531390572' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3362502088' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1889120454' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 09:15:06 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926247127' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:15:06 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51842 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 09:15:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37803 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:07.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:07 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37818 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:07 compute-0 sad_northcutt[423607]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:15:07 compute-0 sad_northcutt[423607]: --> relative data size: 1.0
Jan 31 09:15:07 compute-0 sad_northcutt[423607]: --> All data devices are unavailable
Jan 31 09:15:07 compute-0 systemd[1]: libpod-4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3.scope: Deactivated successfully.
Jan 31 09:15:07 compute-0 podman[423551]: 2026-01-31 09:15:07.592730846 +0000 UTC m=+1.142125768 container died 4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:15:07 compute-0 nova_compute[247704]: 2026-01-31 09:15:07.683 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:07 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.51878 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:07.761+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:07 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:07 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 09:15:07 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523129022' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46657 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:07.786+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:15:07 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-86f7973f3d7902a408e98bb907211cd47da426b7b0747933af65aba089e85ac6-merged.mount: Deactivated successfully.
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.37755 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.51803 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: pgmap v4309: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 494 KiB/s wr, 78 op/s
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.46591 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.37782 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.51821 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.46609 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.37791 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2656656714' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2926247127' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1002175140' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1470515231' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/658685375' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2544725457' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1012007661' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:15:07 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4287032663' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:15:08 compute-0 podman[423551]: 2026-01-31 09:15:08.030478555 +0000 UTC m=+1.579873477 container remove 4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:15:08 compute-0 systemd[1]: libpod-conmon-4b1a3709ce9e86a3cf617f061f88f56675ab9422f86d37281e4ef22d254aefe3.scope: Deactivated successfully.
Jan 31 09:15:08 compute-0 sudo[423355]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:08 compute-0 sudo[423845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:08 compute-0 sudo[423845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:08 compute-0 sudo[423845]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4310: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 494 KiB/s wr, 52 op/s
Jan 31 09:15:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 09:15:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401814255' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:15:08 compute-0 sudo[423870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:15:08 compute-0 sudo[423870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:08 compute-0 sudo[423870]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:08 compute-0 sudo[423899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:08 compute-0 sudo[423899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:08 compute-0 sudo[423899]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:08 compute-0 sudo[423924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:15:08 compute-0 sudo[423924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:08 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37854 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:08 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:08.379+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:08 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:08 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46684 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 09:15:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1731634458' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:15:08 compute-0 podman[424011]: 2026-01-31 09:15:08.582852758 +0000 UTC m=+0.061003472 container create 5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galois, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:15:08 compute-0 podman[424011]: 2026-01-31 09:15:08.555877647 +0000 UTC m=+0.034028381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:15:08 compute-0 systemd[1]: Started libpod-conmon-5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e.scope.
Jan 31 09:15:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:15:08 compute-0 podman[424011]: 2026-01-31 09:15:08.753264444 +0000 UTC m=+0.231415188 container init 5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galois, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 09:15:08 compute-0 podman[424011]: 2026-01-31 09:15:08.761656197 +0000 UTC m=+0.239806911 container start 5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 31 09:15:08 compute-0 blissful_galois[424051]: 167 167
Jan 31 09:15:08 compute-0 systemd[1]: libpod-5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e.scope: Deactivated successfully.
Jan 31 09:15:08 compute-0 podman[424011]: 2026-01-31 09:15:08.799594131 +0000 UTC m=+0.277744885 container attach 5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 09:15:08 compute-0 podman[424011]: 2026-01-31 09:15:08.80079472 +0000 UTC m=+0.278945434 container died 5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galois, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:15:08 compute-0 nova_compute[247704]: 2026-01-31 09:15:08.848 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 09:15:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949643787' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:15:08 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46699 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:08 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 09:15:08 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327770669' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea842a955ac22aeec34a426264d1bc5d58bde4e4fb7c6ffa93101fbce5c0957a-merged.mount: Deactivated successfully.
Jan 31 09:15:09 compute-0 podman[424011]: 2026-01-31 09:15:09.073057972 +0000 UTC m=+0.551208686 container remove 5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:15:09 compute-0 systemd[1]: libpod-conmon-5de81126c6ff34c2465e7a720d20491673924dccb3de4f3c1cfe2b4605d3ac1e.scope: Deactivated successfully.
Jan 31 09:15:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:09.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.51842 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.37803 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.37818 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.51878 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/523129022' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.46657 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2079264228' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: pgmap v4310: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 494 KiB/s wr, 52 op/s
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3401814255' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4269053121' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4249357889' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3246896467' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.37854 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.46684 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1731634458' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/366387459' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3684087532' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3227798696' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/949643787' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4091371072' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1327770669' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46711 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:09 compute-0 podman[424168]: 2026-01-31 09:15:09.273937503 +0000 UTC m=+0.109680754 container create f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:15:09 compute-0 podman[424168]: 2026-01-31 09:15:09.191028205 +0000 UTC m=+0.026771486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:15:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 09:15:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3770759324' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 09:15:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944244339' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:09 compute-0 systemd[1]: Started libpod-conmon-f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e.scope.
Jan 31 09:15:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094702ca2c7b862525198f57ada448df6c06853bb0d4f8447512f0679977949a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502968145' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094702ca2c7b862525198f57ada448df6c06853bb0d4f8447512f0679977949a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094702ca2c7b862525198f57ada448df6c06853bb0d4f8447512f0679977949a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094702ca2c7b862525198f57ada448df6c06853bb0d4f8447512f0679977949a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:09 compute-0 podman[424168]: 2026-01-31 09:15:09.413402225 +0000 UTC m=+0.249145496 container init f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 09:15:09 compute-0 podman[424168]: 2026-01-31 09:15:09.426169412 +0000 UTC m=+0.261912663 container start f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:15:09 compute-0 podman[424168]: 2026-01-31 09:15:09.45511721 +0000 UTC m=+0.290860461 container attach f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elion, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:15:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:09.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:09 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46720 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 09:15:09 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3608176849' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:15:09 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46735 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4311: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 504 KiB/s wr, 53 op/s
Jan 31 09:15:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 09:15:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781276104' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:15:10 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.46699 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1762674557' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/278558995' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.46711 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3770759324' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/944244339' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2502968145' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.46720 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/786834206' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/367686405' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4037258470' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3608176849' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1255271531' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2280146538' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2781276104' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/72351150' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/44827191' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 09:15:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739556914' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-0 systemd[1]: Started Hostname Service.
Jan 31 09:15:10 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46750 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-0 hardcore_elion[424194]: {
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:     "0": [
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:         {
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "devices": [
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "/dev/loop3"
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             ],
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "lv_name": "ceph_lv0",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "lv_size": "7511998464",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "name": "ceph_lv0",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "tags": {
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.cluster_name": "ceph",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.crush_device_class": "",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.encrypted": "0",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.osd_id": "0",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.type": "block",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:                 "ceph.vdo": "0"
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             },
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "type": "block",
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:             "vg_name": "ceph_vg0"
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:         }
Jan 31 09:15:10 compute-0 hardcore_elion[424194]:     ]
Jan 31 09:15:10 compute-0 hardcore_elion[424194]: }
Jan 31 09:15:10 compute-0 systemd[1]: libpod-f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e.scope: Deactivated successfully.
Jan 31 09:15:10 compute-0 podman[424168]: 2026-01-31 09:15:10.419922473 +0000 UTC m=+1.255665724 container died f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-094702ca2c7b862525198f57ada448df6c06853bb0d4f8447512f0679977949a-merged.mount: Deactivated successfully.
Jan 31 09:15:10 compute-0 podman[424168]: 2026-01-31 09:15:10.477802897 +0000 UTC m=+1.313546158 container remove f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elion, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 09:15:10 compute-0 systemd[1]: libpod-conmon-f2191e06971047fae1199fe7906477c45e8364835b521ec9751c2fa4a338765e.scope: Deactivated successfully.
Jan 31 09:15:10 compute-0 sudo[423924]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:10 compute-0 sudo[424378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:10 compute-0 sudo[424378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:10 compute-0 sudo[424378]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 09:15:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058079886' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:15:10 compute-0 sudo[424410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:10 compute-0 sudo[424410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:10 compute-0 sudo[424410]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:10 compute-0 sudo[424431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:10 compute-0 sudo[424431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:10 compute-0 sudo[424431]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:10 compute-0 sudo[424464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:15:10 compute-0 sudo[424464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:10 compute-0 sudo[424464]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:10 compute-0 sudo[424500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:10 compute-0 sudo[424500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:10 compute-0 sudo[424500]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:10 compute-0 sudo[424544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:15:10 compute-0 sudo[424544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 09:15:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1159161859' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46762 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 09:15:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193824802' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:15:11 compute-0 podman[424655]: 2026-01-31 09:15:11.10567001 +0000 UTC m=+0.043139981 container create b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 09:15:11 compute-0 systemd[1]: Started libpod-conmon-b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a.scope.
Jan 31 09:15:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:11.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:15:11 compute-0 podman[424655]: 2026-01-31 09:15:11.086952608 +0000 UTC m=+0.024422569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:15:11 compute-0 podman[424655]: 2026-01-31 09:15:11.185830552 +0000 UTC m=+0.123300533 container init b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 09:15:11 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46768 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 podman[424655]: 2026-01-31 09:15:11.200667009 +0000 UTC m=+0.138136970 container start b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 09:15:11 compute-0 amazing_khayyam[424700]: 167 167
Jan 31 09:15:11 compute-0 podman[424655]: 2026-01-31 09:15:11.203877657 +0000 UTC m=+0.141347628 container attach b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 31 09:15:11 compute-0 systemd[1]: libpod-b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a.scope: Deactivated successfully.
Jan 31 09:15:11 compute-0 podman[424655]: 2026-01-31 09:15:11.206375986 +0000 UTC m=+0.143845967 container died b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 09:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5e0bb123d9391802fba6b59b7d56018e3dd6b293467370038f078b38cda3c69-merged.mount: Deactivated successfully.
Jan 31 09:15:11 compute-0 podman[424655]: 2026-01-31 09:15:11.241989175 +0000 UTC m=+0.179459136 container remove b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:15:11 compute-0 systemd[1]: libpod-conmon-b15719fe9f586582d2caf131bee7f88f0824bf3b6f34484fc807243706fdef0a.scope: Deactivated successfully.
Jan 31 09:15:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:11.248 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:11.250 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:11.250 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 09:15:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3682084879' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.46735 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: pgmap v4311: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 504 KiB/s wr, 53 op/s
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2739556914' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.46750 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1117598229' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3058079886' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/859707252' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1159161859' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.46762 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/644624968' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4193824802' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2195936456' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2741872118' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52031 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 podman[424760]: 2026-01-31 09:15:11.402221346 +0000 UTC m=+0.069280270 container create 3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_northcutt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:15:11 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52022 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-0 systemd[1]: Started libpod-conmon-3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98.scope.
Jan 31 09:15:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e71bab1b26b33f4556b12307904cc5f7438b55b70efcd570949c9d41e6844ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e71bab1b26b33f4556b12307904cc5f7438b55b70efcd570949c9d41e6844ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e71bab1b26b33f4556b12307904cc5f7438b55b70efcd570949c9d41e6844ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e71bab1b26b33f4556b12307904cc5f7438b55b70efcd570949c9d41e6844ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:15:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:11.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:11 compute-0 podman[424760]: 2026-01-31 09:15:11.382909142 +0000 UTC m=+0.049968096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:15:11 compute-0 podman[424760]: 2026-01-31 09:15:11.481444736 +0000 UTC m=+0.148503680 container init 3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_northcutt, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 09:15:11 compute-0 podman[424760]: 2026-01-31 09:15:11.486888287 +0000 UTC m=+0.153947211 container start 3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_northcutt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 09:15:11 compute-0 podman[424760]: 2026-01-31 09:15:11.490265078 +0000 UTC m=+0.157324002 container attach 3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:15:11 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46780 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.595873) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911595905, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1131, "num_deletes": 257, "total_data_size": 1575551, "memory_usage": 1597688, "flush_reason": "Manual Compaction"}
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Jan 31 09:15:11 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46786 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911611448, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 1557821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93218, "largest_seqno": 94348, "table_properties": {"data_size": 1552111, "index_size": 2913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14514, "raw_average_key_size": 21, "raw_value_size": 1539829, "raw_average_value_size": 2228, "num_data_blocks": 127, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850832, "oldest_key_time": 1769850832, "file_creation_time": 1769850911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 15621 microseconds, and 2943 cpu microseconds.
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.611490) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 1557821 bytes OK
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.611511) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.614182) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.614211) EVENT_LOG_v1 {"time_micros": 1769850911614203, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.614231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1569921, prev total WAL file size 1569921, number of live WAL files 2.
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.614778) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303230' seq:72057594037927935, type:22 .. '6C6F676D0034323733' seq:0, type:0; will stop at (end)
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(1521KB)], [215(11MB)]
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911614833, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 13908165, "oldest_snapshot_seqno": -1}
Jan 31 09:15:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 09:15:11 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143707149' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 12022 keys, 13784970 bytes, temperature: kUnknown
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911777203, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 13784970, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13710621, "index_size": 43196, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 318743, "raw_average_key_size": 26, "raw_value_size": 13503869, "raw_average_value_size": 1123, "num_data_blocks": 1635, "num_entries": 12022, "num_filter_entries": 12022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.777772) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 13784970 bytes
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.781383) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.7 rd, 84.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.8 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(17.8) write-amplify(8.8) OK, records in: 12553, records dropped: 531 output_compression: NoCompression
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.781424) EVENT_LOG_v1 {"time_micros": 1769850911781408, "job": 136, "event": "compaction_finished", "compaction_time_micros": 162352, "compaction_time_cpu_micros": 24789, "output_level": 6, "num_output_files": 1, "total_output_size": 13784970, "num_input_records": 12553, "num_output_records": 12022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911782056, "job": 136, "event": "table_file_deletion", "file_number": 217}
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850911783207, "job": 136, "event": "table_file_deletion", "file_number": 215}
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.614693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.783300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.783306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.783308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.783309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:15:11.783311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:15:11 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37974 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52043 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:11 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37980 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4312: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 504 KiB/s wr, 53 op/s
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52055 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.37989 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46819 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:15:12.360+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:15:12 compute-0 happy_northcutt[424802]: {
Jan 31 09:15:12 compute-0 happy_northcutt[424802]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:15:12 compute-0 happy_northcutt[424802]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:15:12 compute-0 happy_northcutt[424802]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:15:12 compute-0 happy_northcutt[424802]:         "osd_id": 0,
Jan 31 09:15:12 compute-0 happy_northcutt[424802]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:15:12 compute-0 happy_northcutt[424802]:         "type": "bluestore"
Jan 31 09:15:12 compute-0 happy_northcutt[424802]:     }
Jan 31 09:15:12 compute-0 happy_northcutt[424802]: }
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.46768 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3682084879' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/229452426' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.52031 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.52022 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.46780 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.46786 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3143707149' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3506100947' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mon[74496]: from='client.37974 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 systemd[1]: libpod-3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98.scope: Deactivated successfully.
Jan 31 09:15:12 compute-0 podman[424760]: 2026-01-31 09:15:12.426188405 +0000 UTC m=+1.093247319 container died 3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e71bab1b26b33f4556b12307904cc5f7438b55b70efcd570949c9d41e6844ae-merged.mount: Deactivated successfully.
Jan 31 09:15:12 compute-0 podman[424760]: 2026-01-31 09:15:12.489624524 +0000 UTC m=+1.156683448 container remove 3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_northcutt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:15:12 compute-0 systemd[1]: libpod-conmon-3447c60d621c89d0b9dde7b04caf300e15a3d899632ee4e1b864970d5b2ebb98.scope: Deactivated successfully.
Jan 31 09:15:12 compute-0 sudo[424544]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:15:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:15:12 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0cd8ac10-85bd-4f49-90a8-f2fcac53924c does not exist
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev d8748bac-26f0-400f-9a4b-3a657385dcc4 does not exist
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 2099fc5b-67c9-492f-bc69-448e33941d34 does not exist
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52073 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38001 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 sudo[425022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:12 compute-0 sudo[425022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:12 compute-0 sudo[425022]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:12 compute-0 sudo[425065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:15:12 compute-0 sudo[425065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:12 compute-0 sudo[425065]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:12 compute-0 nova_compute[247704]: 2026-01-31 09:15:12.688 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:12 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 09:15:12 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/390468352' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52091 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:12 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38019 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:13.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 09:15:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679429532' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52112 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38040 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:13.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.52043 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.37980 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: pgmap v4312: 305 pgs: 305 active+clean; 214 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 504 KiB/s wr, 53 op/s
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.52055 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.37989 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.46819 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.52073 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.38001 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/784799401' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3811429950' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3123822868' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/390468352' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/727621978' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2630241147' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1680094024' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3679429532' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1228540386' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52127 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38055 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:13 compute-0 nova_compute[247704]: 2026-01-31 09:15:13.850 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:13 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 09:15:13 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356496342' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4313: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 541 KiB/s wr, 55 op/s
Jan 31 09:15:14 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38064 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2399550788' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:14 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38091 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.52091 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.38019 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.52112 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.38040 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3907526084' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.52127 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3985781755' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1591563330' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.38055 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1356496342' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1669534733' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2454191839' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4113116959' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2399550788' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2745963781' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:14 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1497551720' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:15:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:15.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 09:15:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3326806464' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46981 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: pgmap v4313: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 541 KiB/s wr, 55 op/s
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.38064 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.38091 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3118020784' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3308132613' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4007122892' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/183049023' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1026374254' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1314856845' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3326806464' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52238 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:15 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:16 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46990 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4314: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 292 KiB/s rd, 71 KiB/s wr, 22 op/s
Jan 31 09:15:16 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.46996 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 31 09:15:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2534597622' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:15:16 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52244 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:16 compute-0 podman[425582]: 2026-01-31 09:15:16.886794078 +0000 UTC m=+0.063474681 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:15:16 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47008 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 31 09:15:16 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791200665' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:15:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:17.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:17 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47020 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:17.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:17 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47038 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:17 compute-0 nova_compute[247704]: 2026-01-31 09:15:17.690 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 31 09:15:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/161788473' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:15:17 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47044 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:18 compute-0 ceph-mon[74496]: from='client.46981 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:18 compute-0 ceph-mon[74496]: from='client.52238 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2534597622' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:15:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2530731165' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:15:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4315: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 86 KiB/s rd, 47 KiB/s wr, 6 op/s
Jan 31 09:15:18 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47056 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:18 compute-0 nova_compute[247704]: 2026-01-31 09:15:18.853 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:18 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 31 09:15:18 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778064777' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:15:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:19.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:19.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:19 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38226 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 31 09:15:19 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461464631' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4316: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 50 KiB/s wr, 3 op/s
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.38178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.46990 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: pgmap v4314: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 292 KiB/s rd, 71 KiB/s wr, 22 op/s
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.46996 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.52244 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.47008 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1791200665' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3364397964' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3756849655' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.47020 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.47038 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1638522012' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/161788473' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/422475968' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.47044 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2729769620' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: pgmap v4315: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 86 KiB/s rd, 47 KiB/s wr, 6 op/s
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.47056 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3196856528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2778064777' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:15:20
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups']
Jan 31 09:15:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:15:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:15:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:21.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:21.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:21 compute-0 nova_compute[247704]: 2026-01-31 09:15:21.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:21 compute-0 nova_compute[247704]: 2026-01-31 09:15:21.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:15:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4317: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 40 KiB/s wr, 2 op/s
Jan 31 09:15:22 compute-0 nova_compute[247704]: 2026-01-31 09:15:22.694 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:22 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52319 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:22 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47125 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:23.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:23.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:23 compute-0 nova_compute[247704]: 2026-01-31 09:15:23.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:23 compute-0 nova_compute[247704]: 2026-01-31 09:15:23.854 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:24 compute-0 ovn_controller[149457]: 2026-01-31T09:15:24Z|00902|memory_trim|INFO|Detected inactivity (last active 30024 ms ago): trimming memory
Jan 31 09:15:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4318: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s
Jan 31 09:15:24 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52337 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:24 compute-0 nova_compute[247704]: 2026-01-31 09:15:24.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:24 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47155 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:25 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38268 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:25.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Jan 31 09:15:25 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/693196906' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 09:15:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:25.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:25 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:25 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:25 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:25 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:25 compute-0 ceph-mon[74496]: from='client.38226 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:25 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1461464631' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:15:25 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38286 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4319: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 09:15:26 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38292 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:26 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47176 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:26 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52358 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:26 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Jan 31 09:15:26 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236107094' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52370 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Jan 31 09:15:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1189658548' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 09:15:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:27.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3083985766' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:15:27 compute-0 ceph-mon[74496]: pgmap v4316: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 50 KiB/s wr, 3 op/s
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3875175616' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/772685924' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: pgmap v4317: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 40 KiB/s wr, 2 op/s
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.52319 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.47125 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3135131644' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3639415667' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3688358254' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/19574536' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4078247171' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: pgmap v4318: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.52337 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/538974174' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.47155 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.38268 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2338748616' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/833197903' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/693196906' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.38286 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3773263261' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47188 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38313 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:27.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47194 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 nova_compute[247704]: 2026-01-31 09:15:27.695 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:27 compute-0 ovs-appctl[427358]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 09:15:27 compute-0 ovs-appctl[427366]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 09:15:27 compute-0 ovs-appctl[427376]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38325 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.617607714498418e-05 of space, bias 1.0, pg target 0.004852823143495254 quantized to 32 (current 32)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0046216324678950305 of space, bias 1.0, pg target 1.3864897403685092 quantized to 32 (current 32)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:27 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:15:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4320: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 09:15:28 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52403 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Jan 31 09:15:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4124568240' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 09:15:28 compute-0 nova_compute[247704]: 2026-01-31 09:15:28.855 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:29 compute-0 ceph-mon[74496]: pgmap v4319: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.38292 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.47176 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.52358 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1092119460' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/236107094' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.52370 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1189658548' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/162892977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.47188 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.38313 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/703663555' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.47194 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4153647404' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2469722022' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 09:15:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:29.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Jan 31 09:15:29 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1815982370' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 09:15:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:29.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:29 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38355 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:29 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47215 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38361 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52418 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.617607714498418e-05 of space, bias 1.0, pg target 0.004852823143495254 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0046216324678950305 of space, bias 1.0, pg target 1.3864897403685092 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4321: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 6.7 KiB/s wr, 0 op/s
Jan 31 09:15:30 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:15:30 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47221 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.617607714498418e-05 of space, bias 1.0, pg target 0.004852823143495254 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0046216324678950305 of space, bias 1.0, pg target 1.3864897403685092 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:30 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:15:30 compute-0 ceph-mon[74496]: from='client.38325 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mon[74496]: pgmap v4320: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 0 B/s rd, 7.0 KiB/s wr, 0 op/s
Jan 31 09:15:30 compute-0 ceph-mon[74496]: from='client.52403 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4124568240' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2043893363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2314985797' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1815982370' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mon[74496]: from='client.38355 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:30 compute-0 podman[428347]: 2026-01-31 09:15:30.474512571 +0000 UTC m=+0.094643632 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 09:15:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 09:15:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472881809' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:15:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Jan 31 09:15:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4117241560' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 09:15:30 compute-0 sudo[428384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:30 compute-0 sudo[428384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:30 compute-0 sudo[428384]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:30 compute-0 sudo[428409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:30 compute-0 sudo[428409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:30 compute-0 sudo[428409]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:31 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Jan 31 09:15:31 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1374204682' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 09:15:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:31.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:31.207 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=115, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=114) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:15:31 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:31.208 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:15:31 compute-0 nova_compute[247704]: 2026-01-31 09:15:31.247 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:31 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47245 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:31 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52442 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:31.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:31 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47254 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:31 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52448 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4322: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 31 09:15:32 compute-0 nova_compute[247704]: 2026-01-31 09:15:32.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:32 compute-0 nova_compute[247704]: 2026-01-31 09:15:32.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:15:32 compute-0 nova_compute[247704]: 2026-01-31 09:15:32.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:15:32 compute-0 nova_compute[247704]: 2026-01-31 09:15:32.697 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:32 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:32 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200454611' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:33.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.47215 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.38361 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.52418 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: pgmap v4321: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 6.7 KiB/s wr, 0 op/s
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.47221 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1472881809' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4117241560' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4126448503' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2412532984' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1374204682' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38415 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:33 compute-0 nova_compute[247704]: 2026-01-31 09:15:33.405 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:15:33 compute-0 nova_compute[247704]: 2026-01-31 09:15:33.405 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:15:33 compute-0 nova_compute[247704]: 2026-01-31 09:15:33.405 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 09:15:33 compute-0 nova_compute[247704]: 2026-01-31 09:15:33.406 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f82c07b4-3acf-4385-9b7a-8a49da3cd55a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:15:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:33.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:33 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52478 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47293 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:33 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 09:15:33 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1158303249' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:33 compute-0 nova_compute[247704]: 2026-01-31 09:15:33.907 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4323: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 3.3 KiB/s wr, 9 op/s
Jan 31 09:15:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Jan 31 09:15:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2205925257' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2428046179' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.47245 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.52442 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.47254 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.52448 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3354097032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3737881186' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1597342083' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: pgmap v4322: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3060654967' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2724833909' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2246476096' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4018119954' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4200454611' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2791884155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.38415 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.52478 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.47293 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1158303249' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1832749955' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4247018269' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2205925257' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:34 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614475393' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Jan 31 09:15:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3551598919' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.390 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updating instance_info_cache with network_info: [{"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.412 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-f82c07b4-3acf-4385-9b7a-8a49da3cd55a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.413 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.413 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.413 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:35 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38466 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.434 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.434 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.435 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.435 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.435 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:15:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:35.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:35 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47335 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: pgmap v4323: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 3.3 KiB/s wr, 9 op/s
Jan 31 09:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1105557315' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3223520930' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2614475393' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3283029549' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3675144590' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3551598919' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/766298541' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/329207965' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Jan 31 09:15:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160169199' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:15:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063524191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.888 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.958 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:15:35 compute-0 nova_compute[247704]: 2026-01-31 09:15:35.959 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52523 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.107 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.108 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3726MB free_disk=20.988101959228516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.108 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.108 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4324: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 255 B/s wr, 10 op/s
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.314 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance f82c07b4-3acf-4385-9b7a-8a49da3cd55a actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.314 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.314 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.542422761957937e-06 of space, bias 1.0, pg target 0.002562726828587381 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0046216324678950305 of space, bias 1.0, pg target 1.3864897403685092 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.390 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:15:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343188959' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.473 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.473 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.494 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.521 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:15:36 compute-0 nova_compute[247704]: 2026-01-31 09:15:36.575 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:15:36 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38508 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.38466 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.47335 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2160169199' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4063524191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1020822950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3445546757' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1343188959' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/251849106' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2421434504' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:15:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141514078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:37 compute-0 nova_compute[247704]: 2026-01-31 09:15:37.003 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:15:37 compute-0 nova_compute[247704]: 2026-01-31 09:15:37.009 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:15:37 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47374 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:37 compute-0 nova_compute[247704]: 2026-01-31 09:15:37.032 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:15:37 compute-0 nova_compute[247704]: 2026-01-31 09:15:37.034 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:15:37 compute-0 nova_compute[247704]: 2026-01-31 09:15:37.034 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Jan 31 09:15:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/102671148' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:37.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:37 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47383 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:37 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38532 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:37.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:37 compute-0 nova_compute[247704]: 2026-01-31 09:15:37.724 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:37 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47392 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:37 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38538 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:38.210 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '115'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:15:38 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52595 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4325: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 09:15:38 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47398 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52601 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.52523 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: pgmap v4324: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 255 B/s wr, 10 op/s
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.38508 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3141514078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.47374 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3073995165' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/102671148' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1106973295' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.47383 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.38532 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.47392 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/17899084' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Jan 31 09:15:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1006415844' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:38 compute-0 nova_compute[247704]: 2026-01-31 09:15:38.951 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Jan 31 09:15:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190154118' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:39.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38562 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47416 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52625 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.38538 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.52595 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: pgmap v4325: 305 pgs: 305 active+clean; 218 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.47398 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.52601 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1773245035' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1006415844' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/730697016' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2045968433' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/190154118' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2699539177' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/627123147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/627123147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38580 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.178915410385259e-06 of space, bias 1.0, pg target 0.0024536746231155777 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0046216324678950305 of space, bias 1.0, pg target 1.3864897403685092 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47431 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52637 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.178915410385259e-06 of space, bias 1.0, pg target 0.0024536746231155777 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0046216324678950305 of space, bias 1.0, pg target 1.3864897403685092 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.178915410385259e-06 of space, bias 1.0, pg target 0.0024536746231155777 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0046216324678950305 of space, bias 1.0, pg target 1.3864897403685092 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:15:39 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:15:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4326: 305 pgs: 305 active+clean; 205 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 31 09:15:40 compute-0 nova_compute[247704]: 2026-01-31 09:15:40.183 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 09:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113027999' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 09:15:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Jan 31 09:15:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/555442405' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.38562 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.47416 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.52625 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.38580 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1441105675' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4113027999' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3189746421' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2542830996' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3502716840' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/555442405' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52673 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47458 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:40 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38619 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:41 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.52679 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47464 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.38625 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 virtqemud[247621]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 09:15:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:15:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:15:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 09:15:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334200316' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Jan 31 09:15:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Jan 31 09:15:41 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Jan 31 09:15:41 compute-0 ceph-mon[74496]: from='client.47431 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 ceph-mon[74496]: from='client.52637 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 ceph-mon[74496]: pgmap v4326: 305 pgs: 305 active+clean; 205 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 31 09:15:41 compute-0 ceph-mon[74496]: from='client.52673 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3260021682' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1515151025' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:41 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/334200316' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4328: 305 pgs: 305 active+clean; 203 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.4 KiB/s wr, 43 op/s
Jan 31 09:15:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Jan 31 09:15:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3385107240' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:42 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 09:15:42 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 09:15:42 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 09:15:42 compute-0 systemd[1]: Started Hostname Service.
Jan 31 09:15:42 compute-0 nova_compute[247704]: 2026-01-31 09:15:42.703 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:42 compute-0 nova_compute[247704]: 2026-01-31 09:15:42.704 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:42 compute-0 nova_compute[247704]: 2026-01-31 09:15:42.704 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:42 compute-0 nova_compute[247704]: 2026-01-31 09:15:42.704 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:42 compute-0 nova_compute[247704]: 2026-01-31 09:15:42.705 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:42 compute-0 nova_compute[247704]: 2026-01-31 09:15:42.706 247708 INFO nova.compute.manager [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Terminating instance
Jan 31 09:15:42 compute-0 nova_compute[247704]: 2026-01-31 09:15:42.707 247708 DEBUG nova.compute.manager [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:15:42 compute-0 nova_compute[247704]: 2026-01-31 09:15:42.727 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:43 compute-0 ceph-mon[74496]: from='client.47458 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-0 ceph-mon[74496]: from='client.38619 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-0 ceph-mon[74496]: from='client.52679 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-0 ceph-mon[74496]: from='client.47464 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-0 ceph-mon[74496]: from='client.38625 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-0 ceph-mon[74496]: osdmap e426: 3 total, 3 up, 3 in
Jan 31 09:15:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4210385020' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/67754840' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-0 ceph-mon[74496]: pgmap v4328: 305 pgs: 305 active+clean; 203 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.4 KiB/s wr, 43 op/s
Jan 31 09:15:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3385107240' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 09:15:43 compute-0 kernel: tap055cb245-bd (unregistering): left promiscuous mode
Jan 31 09:15:43 compute-0 NetworkManager[49108]: <info>  [1769850943.4778] device (tap055cb245-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:15:43 compute-0 ovn_controller[149457]: 2026-01-31T09:15:43Z|00903|binding|INFO|Releasing lport 055cb245-bd1c-48e0-85fa-1ff3257b1383 from this chassis (sb_readonly=0)
Jan 31 09:15:43 compute-0 ovn_controller[149457]: 2026-01-31T09:15:43Z|00904|binding|INFO|Setting lport 055cb245-bd1c-48e0-85fa-1ff3257b1383 down in Southbound
Jan 31 09:15:43 compute-0 ovn_controller[149457]: 2026-01-31T09:15:43Z|00905|binding|INFO|Removing iface tap055cb245-bd ovn-installed in OVS
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.488 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:43.499 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:d9:d5 10.100.0.9'], port_security=['fa:16:3e:57:d9:d5 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'f82c07b4-3acf-4385-9b7a-8a49da3cd55a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f1b11aa-db66-4189-a249-11f4c2357637', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=055cb245-bd1c-48e0-85fa-1ff3257b1383) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:15:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:43.500 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 055cb245-bd1c-48e0-85fa-1ff3257b1383 in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 unbound from our chassis
Jan 31 09:15:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:43.502 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c9ca540-57e7-412d-8ef3-af923db0a265, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.504 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:43.503 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f03bd293-5854-4dee-a052-b8803bf24618]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:43.504 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace which is not needed anymore
Jan 31 09:15:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:43.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:43 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Jan 31 09:15:43 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000dc.scope: Consumed 18.040s CPU time, 207.9M memory peak, no IO.
Jan 31 09:15:43 compute-0 systemd-machined[214448]: Machine qemu-94-instance-000000dc terminated.
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:43 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[420139]: [NOTICE]   (420143) : haproxy version is 2.8.14-c23fe91
Jan 31 09:15:43 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[420139]: [NOTICE]   (420143) : path to executable is /usr/sbin/haproxy
Jan 31 09:15:43 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[420139]: [WARNING]  (420143) : Exiting Master process...
Jan 31 09:15:43 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[420139]: [ALERT]    (420143) : Current worker (420145) exited with code 143 (Terminated)
Jan 31 09:15:43 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[420139]: [WARNING]  (420143) : All workers exited. Exiting... (0)
Jan 31 09:15:43 compute-0 systemd[1]: libpod-d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400.scope: Deactivated successfully.
Jan 31 09:15:43 compute-0 podman[429642]: 2026-01-31 09:15:43.614694459 +0000 UTC m=+0.041967213 container died d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 09:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400-userdata-shm.mount: Deactivated successfully.
Jan 31 09:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-808f4514e732753f9df0a9a187991f6a60110a333e7091719e9c6e313b6ff57c-merged.mount: Deactivated successfully.
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.740 247708 INFO nova.virt.libvirt.driver [-] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Instance destroyed successfully.
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.741 247708 DEBUG nova.objects.instance [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'resources' on Instance uuid f82c07b4-3acf-4385-9b7a-8a49da3cd55a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.825 247708 DEBUG nova.virt.libvirt.vif [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:13:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1125575369',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1125575369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1125575369',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKWLSilDSatFdj+l72R7r4tT2LeUF3x3W2K5KxFW3XyiJuP7GkF+kAirfxylqPC6NcjBeQJfMv2H1rzxH82W47myV0/TExevXJFFnepwN96KbWJG7mo4f6FB99ScIugOCw==',key_name='tempest-keypair-1288716286',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:13:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-u3vgr0xu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:13:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=f82c07b4-3acf-4385-9b7a-8a49da3cd55a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 
4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.825 247708 DEBUG nova.network.os_vif_util [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "address": "fa:16:3e:57:d9:d5", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap055cb245-bd", "ovs_interfaceid": "055cb245-bd1c-48e0-85fa-1ff3257b1383", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.826 247708 DEBUG nova.network.os_vif_util [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:d9:d5,bridge_name='br-int',has_traffic_filtering=True,id=055cb245-bd1c-48e0-85fa-1ff3257b1383,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap055cb245-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.827 247708 DEBUG os_vif [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:d9:d5,bridge_name='br-int',has_traffic_filtering=True,id=055cb245-bd1c-48e0-85fa-1ff3257b1383,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap055cb245-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.829 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.829 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap055cb245-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.831 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.833 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.847 247708 INFO os_vif [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:d9:d5,bridge_name='br-int',has_traffic_filtering=True,id=055cb245-bd1c-48e0-85fa-1ff3257b1383,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap055cb245-bd')
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.953 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.967 247708 DEBUG nova.compute.manager [req-adbf7cf3-64a0-481e-ac10-3bdf2fe11ee0 req-c232618b-e5ce-4650-8cbc-26fd4e5a83a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received event network-vif-unplugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.967 247708 DEBUG oslo_concurrency.lockutils [req-adbf7cf3-64a0-481e-ac10-3bdf2fe11ee0 req-c232618b-e5ce-4650-8cbc-26fd4e5a83a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.968 247708 DEBUG oslo_concurrency.lockutils [req-adbf7cf3-64a0-481e-ac10-3bdf2fe11ee0 req-c232618b-e5ce-4650-8cbc-26fd4e5a83a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.968 247708 DEBUG oslo_concurrency.lockutils [req-adbf7cf3-64a0-481e-ac10-3bdf2fe11ee0 req-c232618b-e5ce-4650-8cbc-26fd4e5a83a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.968 247708 DEBUG nova.compute.manager [req-adbf7cf3-64a0-481e-ac10-3bdf2fe11ee0 req-c232618b-e5ce-4650-8cbc-26fd4e5a83a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] No waiting events found dispatching network-vif-unplugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:15:43 compute-0 nova_compute[247704]: 2026-01-31 09:15:43.968 247708 DEBUG nova.compute.manager [req-adbf7cf3-64a0-481e-ac10-3bdf2fe11ee0 req-c232618b-e5ce-4650-8cbc-26fd4e5a83a3 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received event network-vif-unplugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:15:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4329: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 1.8 KiB/s wr, 39 op/s
Jan 31 09:15:44 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-crash-compute-0[81645]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 09:15:44 compute-0 podman[429642]: 2026-01-31 09:15:44.663757522 +0000 UTC m=+1.091030276 container cleanup d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:15:44 compute-0 systemd[1]: libpod-conmon-d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400.scope: Deactivated successfully.
Jan 31 09:15:44 compute-0 podman[429703]: 2026-01-31 09:15:44.729383023 +0000 UTC m=+0.048569391 container remove d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.743 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[84e13a89-5191-4937-a7ec-5c7df62ca0b0]: (4, ('Sat Jan 31 09:15:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400)\nd7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400\nSat Jan 31 09:15:44 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (d7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400)\nd7d0453b3a1e5d28f73fd5b1614735bea120e536d89a5fc89e2c390ca7a6d400\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.746 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d19410aa-9c6f-4f98-b032-6f25a5732243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.747 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.748 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:44 compute-0 kernel: tap5c9ca540-50: left promiscuous mode
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.751 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.754 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[f8efeafb-d88f-4a95-82d7-44d3d2049e5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.759 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.768 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c1122fb2-be5d-4566-a040-e2239ea424a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.769 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[8fef4f0c-4c15-4dfb-8cba-9f5d844cda70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.781 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[5dda4c62-c01e-46de-acce-0c68a75b8a15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1106737, 'reachable_time': 39369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 429719, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c9ca540\x2d57e7\x2d412d\x2d8ef3\x2daf923db0a265.mount: Deactivated successfully.
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.784 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:15:44 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:15:44.784 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[a28b0da3-9cc5-43de-a4f8-25d550cd070d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:15:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.883 247708 INFO nova.virt.libvirt.driver [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Deleting instance files /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a_del
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.884 247708 INFO nova.virt.libvirt.driver [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Deletion of /var/lib/nova/instances/f82c07b4-3acf-4385-9b7a-8a49da3cd55a_del complete
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.956 247708 INFO nova.compute.manager [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Took 2.25 seconds to destroy the instance on the hypervisor.
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.957 247708 DEBUG oslo.service.loopingcall [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.957 247708 DEBUG nova.compute.manager [-] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:15:44 compute-0 nova_compute[247704]: 2026-01-31 09:15:44.957 247708 DEBUG nova.network.neutron [-] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:15:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:45.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:45 compute-0 ceph-mon[74496]: pgmap v4329: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 1.8 KiB/s wr, 39 op/s
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.088 247708 DEBUG nova.compute.manager [req-c5177481-6ba8-44f7-8c19-eb8294fa219c req-3eaee8af-e7b1-4814-8038-44b61af6db83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received event network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.088 247708 DEBUG oslo_concurrency.lockutils [req-c5177481-6ba8-44f7-8c19-eb8294fa219c req-3eaee8af-e7b1-4814-8038-44b61af6db83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.089 247708 DEBUG oslo_concurrency.lockutils [req-c5177481-6ba8-44f7-8c19-eb8294fa219c req-3eaee8af-e7b1-4814-8038-44b61af6db83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.089 247708 DEBUG oslo_concurrency.lockutils [req-c5177481-6ba8-44f7-8c19-eb8294fa219c req-3eaee8af-e7b1-4814-8038-44b61af6db83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.089 247708 DEBUG nova.compute.manager [req-c5177481-6ba8-44f7-8c19-eb8294fa219c req-3eaee8af-e7b1-4814-8038-44b61af6db83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] No waiting events found dispatching network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.090 247708 WARNING nova.compute.manager [req-c5177481-6ba8-44f7-8c19-eb8294fa219c req-3eaee8af-e7b1-4814-8038-44b61af6db83 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received unexpected event network-vif-plugged-055cb245-bd1c-48e0-85fa-1ff3257b1383 for instance with vm_state active and task_state deleting.
Jan 31 09:15:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4330: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.431 247708 DEBUG nova.network.neutron [-] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.449 247708 INFO nova.compute.manager [-] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Took 1.49 seconds to deallocate network for instance.
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.564 247708 DEBUG nova.compute.manager [req-3752b72e-4457-4191-bef6-7928bd543491 req-fce73f63-d2df-420b-8eee-0154487ee1ff 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Received event network-vif-deleted-055cb245-bd1c-48e0-85fa-1ff3257b1383 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.694 247708 INFO nova.compute.manager [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Took 0.24 seconds to detach 1 volumes for instance.
Jan 31 09:15:46 compute-0 nova_compute[247704]: 2026-01-31 09:15:46.695 247708 DEBUG nova.compute.manager [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Deleting volume: bef842a0-76ba-4e96-aba4-abce4554dd0c _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.038 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.038 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.109 247708 DEBUG oslo_concurrency.processutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:15:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:47.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:47.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:15:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7349103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.557 247708 DEBUG oslo_concurrency.processutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.564 247708 DEBUG nova.compute.provider_tree [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.588 247708 DEBUG nova.scheduler.client.report [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.629 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.670 247708 INFO nova.scheduler.client.report [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Deleted allocations for instance f82c07b4-3acf-4385-9b7a-8a49da3cd55a
Jan 31 09:15:47 compute-0 ceph-mon[74496]: pgmap v4330: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Jan 31 09:15:47 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/7349103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:15:47 compute-0 nova_compute[247704]: 2026-01-31 09:15:47.774 247708 DEBUG oslo_concurrency.lockutils [None req-f1be8337-f367-4806-aeb1-08e21004608b ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "f82c07b4-3acf-4385-9b7a-8a49da3cd55a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:15:47 compute-0 podman[429744]: 2026-01-31 09:15:47.869548024 +0000 UTC m=+0.043776426 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:15:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4331: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Jan 31 09:15:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4058064785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:15:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4058064785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:15:48 compute-0 nova_compute[247704]: 2026-01-31 09:15:48.834 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:48 compute-0 nova_compute[247704]: 2026-01-31 09:15:48.957 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:49.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:49 compute-0 nova_compute[247704]: 2026-01-31 09:15:49.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:15:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Jan 31 09:15:49 compute-0 ceph-mon[74496]: pgmap v4331: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Jan 31 09:15:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Jan 31 09:15:49 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Jan 31 09:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:15:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:15:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4333: 305 pgs: 305 active+clean; 163 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 31 09:15:50 compute-0 sudo[429765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:50 compute-0 sudo[429765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:50 compute-0 sudo[429765]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:50 compute-0 sudo[429790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:15:50 compute-0 sudo[429790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:15:50 compute-0 sudo[429790]: pam_unix(sudo:session): session closed for user root
Jan 31 09:15:51 compute-0 ceph-mon[74496]: osdmap e427: 3 total, 3 up, 3 in
Jan 31 09:15:51 compute-0 ceph-mon[74496]: pgmap v4333: 305 pgs: 305 active+clean; 163 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.9 KiB/s wr, 31 op/s
Jan 31 09:15:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:51.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:51.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Jan 31 09:15:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Jan 31 09:15:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Jan 31 09:15:52 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Jan 31 09:15:52 compute-0 ceph-mon[74496]: pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.3 KiB/s wr, 44 op/s
Jan 31 09:15:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:53.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:15:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:53.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:15:53 compute-0 nova_compute[247704]: 2026-01-31 09:15:53.837 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:53 compute-0 nova_compute[247704]: 2026-01-31 09:15:53.959 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 2.1 KiB/s wr, 57 op/s
Jan 31 09:15:54 compute-0 ceph-mon[74496]: osdmap e428: 3 total, 3 up, 3 in
Jan 31 09:15:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2514494485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:15:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2514494485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:15:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:55.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:15:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:55.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:15:55 compute-0 ceph-mon[74496]: pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 2.1 KiB/s wr, 57 op/s
Jan 31 09:15:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 54 op/s
Jan 31 09:15:56 compute-0 ceph-mon[74496]: pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 54 op/s
Jan 31 09:15:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:57.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:57.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 31 09:15:58 compute-0 nova_compute[247704]: 2026-01-31 09:15:58.739 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850943.7376797, f82c07b4-3acf-4385-9b7a-8a49da3cd55a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:15:58 compute-0 nova_compute[247704]: 2026-01-31 09:15:58.740 247708 INFO nova.compute.manager [-] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] VM Stopped (Lifecycle Event)
Jan 31 09:15:58 compute-0 nova_compute[247704]: 2026-01-31 09:15:58.773 247708 DEBUG nova.compute.manager [None req-45157bd3-89d9-416e-a7c4-ceb96bc1b2d7 - - - - - -] [instance: f82c07b4-3acf-4385-9b7a-8a49da3cd55a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:15:58 compute-0 nova_compute[247704]: 2026-01-31 09:15:58.840 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:58 compute-0 nova_compute[247704]: 2026-01-31 09:15:58.961 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:15:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:59 compute-0 ceph-mon[74496]: pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 31 09:15:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:15:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:15:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:59.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:15:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:15:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Jan 31 09:15:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Jan 31 09:15:59 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Jan 31 09:16:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.1 KiB/s wr, 27 op/s
Jan 31 09:16:00 compute-0 podman[429821]: 2026-01-31 09:16:00.906312709 +0000 UTC m=+0.082709835 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 31 09:16:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:01.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:01 compute-0 ceph-mon[74496]: osdmap e429: 3 total, 3 up, 3 in
Jan 31 09:16:01 compute-0 ceph-mon[74496]: pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.1 KiB/s wr, 27 op/s
Jan 31 09:16:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:16:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:01.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:16:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 992 B/s wr, 24 op/s
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.433379) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850962433418, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 936, "num_deletes": 253, "total_data_size": 1107070, "memory_usage": 1133864, "flush_reason": "Manual Compaction"}
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850962634910, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 1095439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94349, "largest_seqno": 95284, "table_properties": {"data_size": 1090209, "index_size": 2433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15100, "raw_average_key_size": 22, "raw_value_size": 1078572, "raw_average_value_size": 1600, "num_data_blocks": 104, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850912, "oldest_key_time": 1769850912, "file_creation_time": 1769850962, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 201593 microseconds, and 3648 cpu microseconds.
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.634964) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 1095439 bytes OK
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.634992) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.673055) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.673097) EVENT_LOG_v1 {"time_micros": 1769850962673091, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.673119) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 1101789, prev total WAL file size 1101789, number of live WAL files 2.
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.673533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(1069KB)], [218(13MB)]
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850962673563, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 14880409, "oldest_snapshot_seqno": -1}
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 12174 keys, 12881723 bytes, temperature: kUnknown
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850962940511, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 12881723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12807243, "index_size": 42943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 324136, "raw_average_key_size": 26, "raw_value_size": 12598880, "raw_average_value_size": 1034, "num_data_blocks": 1615, "num_entries": 12174, "num_filter_entries": 12174, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769850962, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.940844) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 12881723 bytes
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.984360) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.7 rd, 48.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(25.3) write-amplify(11.8) OK, records in: 12696, records dropped: 522 output_compression: NoCompression
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.984411) EVENT_LOG_v1 {"time_micros": 1769850962984391, "job": 138, "event": "compaction_finished", "compaction_time_micros": 267047, "compaction_time_cpu_micros": 22787, "output_level": 6, "num_output_files": 1, "total_output_size": 12881723, "num_input_records": 12696, "num_output_records": 12174, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850962984828, "job": 138, "event": "table_file_deletion", "file_number": 220}
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850962986662, "job": 138, "event": "table_file_deletion", "file_number": 218}
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.673445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.986722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.986728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.986730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.986731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:02 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:16:02.986733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:16:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:03.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:03 compute-0 nova_compute[247704]: 2026-01-31 09:16:03.844 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:03 compute-0 nova_compute[247704]: 2026-01-31 09:16:03.963 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 8 op/s
Jan 31 09:16:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:05.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:05.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:05 compute-0 ceph-mon[74496]: pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 992 B/s wr, 24 op/s
Jan 31 09:16:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 KiB/s rd, 307 B/s wr, 5 op/s
Jan 31 09:16:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:07.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:07.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 09:16:08 compute-0 ceph-mon[74496]: pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 8 op/s
Jan 31 09:16:08 compute-0 ceph-mon[74496]: pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 KiB/s rd, 307 B/s wr, 5 op/s
Jan 31 09:16:08 compute-0 nova_compute[247704]: 2026-01-31 09:16:08.849 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:08 compute-0 nova_compute[247704]: 2026-01-31 09:16:08.964 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:09.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:09.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:09 compute-0 ceph-mon[74496]: pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 09:16:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 447 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 09:16:10 compute-0 sudo[429850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:10 compute-0 sudo[429850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:10 compute-0 sudo[429850]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:10 compute-0 sudo[429875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:10 compute-0 sudo[429875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:10 compute-0 sudo[429875]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:11.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:11.249 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:11.250 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:11.250 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:11.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:12 compute-0 ceph-mon[74496]: pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 447 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 09:16:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 0 B/s wr, 4 op/s
Jan 31 09:16:12 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 09:16:12 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 09:16:12 compute-0 sudo[429905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:12 compute-0 sudo[429905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:12 compute-0 sudo[429905]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:13 compute-0 sudo[429930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:16:13 compute-0 sudo[429930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:13 compute-0 sudo[429930]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:13 compute-0 sudo[429955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:13 compute-0 sudo[429955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:13 compute-0 sudo[429955]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:13 compute-0 sudo[429980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:16:13 compute-0 sudo[429980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:13 compute-0 ceph-mon[74496]: pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 0 B/s wr, 4 op/s
Jan 31 09:16:13 compute-0 sudo[429980]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:13.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:13 compute-0 nova_compute[247704]: 2026-01-31 09:16:13.850 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:13 compute-0 nova_compute[247704]: 2026-01-31 09:16:13.966 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 31 09:16:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:15.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:15.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:16 compute-0 ceph-mon[74496]: pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Jan 31 09:16:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 31 09:16:16 compute-0 ovn_controller[149457]: 2026-01-31T09:16:16Z|00906|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Jan 31 09:16:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 09:16:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:16 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 09:16:16 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:16:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:16:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:16:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:16:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:16:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev e29fff5d-325c-4395-8b91-5b7e740fa455 does not exist
Jan 31 09:16:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ff8d19e8-cfdd-449d-b500-91c2e0b1234b does not exist
Jan 31 09:16:17 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev cdf6e18d-f5f7-4648-8d21-cc61bf691aee does not exist
Jan 31 09:16:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:16:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:16:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:16:17 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:16:17 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:16:17 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:16:17 compute-0 sudo[430038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:17 compute-0 sudo[430038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:17 compute-0 sudo[430038]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:17 compute-0 sudo[430063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:16:17 compute-0 sudo[430063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:17 compute-0 sudo[430063]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:17 compute-0 sudo[430088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:17 compute-0 sudo[430088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:17 compute-0 sudo[430088]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:17 compute-0 sudo[430113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:16:17 compute-0 sudo[430113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:17.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:17 compute-0 ceph-mon[74496]: pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 31 09:16:17 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:17 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:17 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:16:17 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:16:17 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:17 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:16:17 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:16:17 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:16:17 compute-0 podman[430179]: 2026-01-31 09:16:17.791014071 +0000 UTC m=+0.026097941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:16:17 compute-0 podman[430179]: 2026-01-31 09:16:17.886854271 +0000 UTC m=+0.121938111 container create a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:16:17 compute-0 systemd[1]: Started libpod-conmon-a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da.scope.
Jan 31 09:16:18 compute-0 podman[430193]: 2026-01-31 09:16:18.010245735 +0000 UTC m=+0.085725998 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 09:16:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:16:18 compute-0 podman[430179]: 2026-01-31 09:16:18.119417586 +0000 UTC m=+0.354501446 container init a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:16:18 compute-0 podman[430179]: 2026-01-31 09:16:18.128348311 +0000 UTC m=+0.363432151 container start a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 09:16:18 compute-0 brave_chandrasekhar[430213]: 167 167
Jan 31 09:16:18 compute-0 systemd[1]: libpod-a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da.scope: Deactivated successfully.
Jan 31 09:16:18 compute-0 conmon[430213]: conmon a50d93c997448cbb4666 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da.scope/container/memory.events
Jan 31 09:16:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 31 09:16:18 compute-0 podman[430179]: 2026-01-31 09:16:18.276704786 +0000 UTC m=+0.511788636 container attach a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 09:16:18 compute-0 podman[430179]: 2026-01-31 09:16:18.277568078 +0000 UTC m=+0.512651918 container died a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 09:16:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-714e951ee14c3f414bc53a21ec7b2d277dae752d9d78581dde5c2ef71dbc3fee-merged.mount: Deactivated successfully.
Jan 31 09:16:18 compute-0 podman[430179]: 2026-01-31 09:16:18.747709758 +0000 UTC m=+0.982793608 container remove a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 09:16:18 compute-0 systemd[1]: libpod-conmon-a50d93c997448cbb4666207495ead4bae4fe1caf01097ce591198c65d6a6f4da.scope: Deactivated successfully.
Jan 31 09:16:18 compute-0 nova_compute[247704]: 2026-01-31 09:16:18.904 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:18 compute-0 nova_compute[247704]: 2026-01-31 09:16:18.968 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:18 compute-0 ceph-mon[74496]: pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 9 op/s
Jan 31 09:16:19 compute-0 podman[430241]: 2026-01-31 09:16:19.00213622 +0000 UTC m=+0.162529018 container create 5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mahavira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:16:19 compute-0 podman[430241]: 2026-01-31 09:16:18.912427358 +0000 UTC m=+0.072820206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:16:19 compute-0 systemd[1]: Started libpod-conmon-5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3.scope.
Jan 31 09:16:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783716f660416479ed50f97507f9ee878fef45d006866de624b7ce8b19420da8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783716f660416479ed50f97507f9ee878fef45d006866de624b7ce8b19420da8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783716f660416479ed50f97507f9ee878fef45d006866de624b7ce8b19420da8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783716f660416479ed50f97507f9ee878fef45d006866de624b7ce8b19420da8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783716f660416479ed50f97507f9ee878fef45d006866de624b7ce8b19420da8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:19 compute-0 podman[430241]: 2026-01-31 09:16:19.134660314 +0000 UTC m=+0.295053212 container init 5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mahavira, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:16:19 compute-0 podman[430241]: 2026-01-31 09:16:19.143396674 +0000 UTC m=+0.303789502 container start 5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 09:16:19 compute-0 podman[430241]: 2026-01-31 09:16:19.162819663 +0000 UTC m=+0.323212551 container attach 5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:16:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:19.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:19.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:19 compute-0 friendly_mahavira[430258]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:16:19 compute-0 friendly_mahavira[430258]: --> relative data size: 1.0
Jan 31 09:16:19 compute-0 friendly_mahavira[430258]: --> All data devices are unavailable
Jan 31 09:16:19 compute-0 systemd[1]: libpod-5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3.scope: Deactivated successfully.
Jan 31 09:16:19 compute-0 podman[430241]: 2026-01-31 09:16:19.946327985 +0000 UTC m=+1.106720803 container died 5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mahavira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-783716f660416479ed50f97507f9ee878fef45d006866de624b7ce8b19420da8-merged.mount: Deactivated successfully.
Jan 31 09:16:20 compute-0 podman[430241]: 2026-01-31 09:16:20.059336979 +0000 UTC m=+1.219729777 container remove 5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 09:16:20 compute-0 systemd[1]: libpod-conmon-5b4109ef3e7f292678152a64cb1917a5005bc71b9fb3dd34eb3c2a29ccb756a3.scope: Deactivated successfully.
Jan 31 09:16:20 compute-0 sudo[430113]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:16:20 compute-0 sudo[430287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:20 compute-0 sudo[430287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:16:20 compute-0 sudo[430287]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4350: 305 pgs: 305 active+clean; 133 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 312 KiB/s wr, 9 op/s
Jan 31 09:16:20 compute-0 sudo[430312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:16:20 compute-0 sudo[430312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:20 compute-0 sudo[430312]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:20 compute-0 sudo[430337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:20 compute-0 sudo[430337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:20 compute-0 sudo[430337]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:20 compute-0 sudo[430362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:16:20 compute-0 sudo[430362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:16:20
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Jan 31 09:16:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:16:20 compute-0 podman[430428]: 2026-01-31 09:16:20.597210222 +0000 UTC m=+0.055735634 container create 123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 09:16:20 compute-0 systemd[1]: Started libpod-conmon-123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85.scope.
Jan 31 09:16:20 compute-0 podman[430428]: 2026-01-31 09:16:20.561108112 +0000 UTC m=+0.019633544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:16:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:16:20 compute-0 podman[430428]: 2026-01-31 09:16:20.730215237 +0000 UTC m=+0.188740669 container init 123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poincare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:16:20 compute-0 podman[430428]: 2026-01-31 09:16:20.736446648 +0000 UTC m=+0.194972060 container start 123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poincare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 09:16:20 compute-0 vigilant_poincare[430445]: 167 167
Jan 31 09:16:20 compute-0 systemd[1]: libpod-123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85.scope: Deactivated successfully.
Jan 31 09:16:20 compute-0 podman[430428]: 2026-01-31 09:16:20.745571117 +0000 UTC m=+0.204096529 container attach 123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:16:20 compute-0 podman[430428]: 2026-01-31 09:16:20.745957128 +0000 UTC m=+0.204482540 container died 123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 09:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac6d408078d952f20f911b035880609f29934dd89ebfa545b557eb695feab90-merged.mount: Deactivated successfully.
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:16:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:16:21 compute-0 podman[430428]: 2026-01-31 09:16:21.208449364 +0000 UTC m=+0.666974776 container remove 123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 09:16:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:21.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:21 compute-0 systemd[1]: libpod-conmon-123cc9ebbc7c9a3bfeea33d1ffc16973938c4edd0c985f28db2a3054a0a1bc85.scope: Deactivated successfully.
Jan 31 09:16:21 compute-0 podman[430470]: 2026-01-31 09:16:21.314997311 +0000 UTC m=+0.022272298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:16:21 compute-0 podman[430470]: 2026-01-31 09:16:21.46138508 +0000 UTC m=+0.168660087 container create 9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:16:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:21.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:21 compute-0 ceph-mon[74496]: pgmap v4350: 305 pgs: 305 active+clean; 133 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 312 KiB/s wr, 9 op/s
Jan 31 09:16:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4351: 305 pgs: 305 active+clean; 155 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 31 09:16:22 compute-0 systemd[1]: Started libpod-conmon-9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05.scope.
Jan 31 09:16:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb581fed9291e75d7ff8dae7897570b9dc9ad7fad912c36aca160aed05f937e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb581fed9291e75d7ff8dae7897570b9dc9ad7fad912c36aca160aed05f937e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb581fed9291e75d7ff8dae7897570b9dc9ad7fad912c36aca160aed05f937e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb581fed9291e75d7ff8dae7897570b9dc9ad7fad912c36aca160aed05f937e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:22 compute-0 podman[430470]: 2026-01-31 09:16:22.406913758 +0000 UTC m=+1.114188745 container init 9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 09:16:22 compute-0 podman[430470]: 2026-01-31 09:16:22.41488543 +0000 UTC m=+1.122160397 container start 9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:16:22 compute-0 podman[430470]: 2026-01-31 09:16:22.422538924 +0000 UTC m=+1.129813911 container attach 9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 09:16:22 compute-0 ceph-mon[74496]: pgmap v4351: 305 pgs: 305 active+clean; 155 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.4 MiB/s wr, 36 op/s
Jan 31 09:16:23 compute-0 fervent_beaver[430486]: {
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:     "0": [
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:         {
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "devices": [
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "/dev/loop3"
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             ],
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "lv_name": "ceph_lv0",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "lv_size": "7511998464",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "name": "ceph_lv0",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "tags": {
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.cluster_name": "ceph",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.crush_device_class": "",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.encrypted": "0",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.osd_id": "0",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.type": "block",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:                 "ceph.vdo": "0"
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             },
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "type": "block",
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:             "vg_name": "ceph_vg0"
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:         }
Jan 31 09:16:23 compute-0 fervent_beaver[430486]:     ]
Jan 31 09:16:23 compute-0 fervent_beaver[430486]: }
Jan 31 09:16:23 compute-0 systemd[1]: libpod-9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05.scope: Deactivated successfully.
Jan 31 09:16:23 compute-0 podman[430470]: 2026-01-31 09:16:23.23854863 +0000 UTC m=+1.945823597 container died 9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 09:16:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:23.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb581fed9291e75d7ff8dae7897570b9dc9ad7fad912c36aca160aed05f937e-merged.mount: Deactivated successfully.
Jan 31 09:16:23 compute-0 podman[430470]: 2026-01-31 09:16:23.300434182 +0000 UTC m=+2.007709149 container remove 9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 09:16:23 compute-0 systemd[1]: libpod-conmon-9be685ee22efd48ba77b13f2192bb3ec2a14d53633f8c8413b355949193d3f05.scope: Deactivated successfully.
Jan 31 09:16:23 compute-0 sudo[430362]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:23 compute-0 sudo[430510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:23 compute-0 sudo[430510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:23 compute-0 sudo[430510]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:23 compute-0 sudo[430535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:16:23 compute-0 sudo[430535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:23 compute-0 sudo[430535]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:23 compute-0 sudo[430560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:23 compute-0 sudo[430560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:23 compute-0 sudo[430560]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:23 compute-0 sudo[430585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:16:23 compute-0 nova_compute[247704]: 2026-01-31 09:16:23.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:23 compute-0 nova_compute[247704]: 2026-01-31 09:16:23.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:16:23 compute-0 sudo[430585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:23.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:23 compute-0 nova_compute[247704]: 2026-01-31 09:16:23.912 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:23 compute-0 podman[430650]: 2026-01-31 09:16:23.946196555 +0000 UTC m=+0.089383215 container create 226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:16:23 compute-0 nova_compute[247704]: 2026-01-31 09:16:23.971 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:23 compute-0 podman[430650]: 2026-01-31 09:16:23.885897662 +0000 UTC m=+0.029084342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:16:23 compute-0 systemd[1]: Started libpod-conmon-226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9.scope.
Jan 31 09:16:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:16:24 compute-0 podman[430650]: 2026-01-31 09:16:24.036728267 +0000 UTC m=+0.179914957 container init 226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 09:16:24 compute-0 podman[430650]: 2026-01-31 09:16:24.049970546 +0000 UTC m=+0.193157236 container start 226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 09:16:24 compute-0 podman[430650]: 2026-01-31 09:16:24.055166521 +0000 UTC m=+0.198353211 container attach 226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:16:24 compute-0 lucid_williams[430667]: 167 167
Jan 31 09:16:24 compute-0 systemd[1]: libpod-226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9.scope: Deactivated successfully.
Jan 31 09:16:24 compute-0 podman[430650]: 2026-01-31 09:16:24.056522474 +0000 UTC m=+0.199709164 container died 226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 09:16:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0224d263e2498072c71660fa8ada3effcbf1613c157c8d1e4fcf654322eef04e-merged.mount: Deactivated successfully.
Jan 31 09:16:24 compute-0 podman[430650]: 2026-01-31 09:16:24.099975331 +0000 UTC m=+0.243162001 container remove 226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:16:24 compute-0 systemd[1]: libpod-conmon-226dc949fd8fed82715c7f24bb7def4c66ef86f1e33823e0d872055c22d6b7c9.scope: Deactivated successfully.
Jan 31 09:16:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4352: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:16:24 compute-0 podman[430691]: 2026-01-31 09:16:24.235198811 +0000 UTC m=+0.051841261 container create 37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dhawan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 09:16:24 compute-0 systemd[1]: Started libpod-conmon-37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa.scope.
Jan 31 09:16:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d818d845ea636740323d15abc1383aa821f9c11c9b778f87470e07183adfd1bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d818d845ea636740323d15abc1383aa821f9c11c9b778f87470e07183adfd1bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d818d845ea636740323d15abc1383aa821f9c11c9b778f87470e07183adfd1bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d818d845ea636740323d15abc1383aa821f9c11c9b778f87470e07183adfd1bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:24 compute-0 podman[430691]: 2026-01-31 09:16:24.212491053 +0000 UTC m=+0.029133513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:16:24 compute-0 podman[430691]: 2026-01-31 09:16:24.309533932 +0000 UTC m=+0.126176402 container init 37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dhawan, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 09:16:24 compute-0 podman[430691]: 2026-01-31 09:16:24.31525082 +0000 UTC m=+0.131893270 container start 37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 09:16:24 compute-0 podman[430691]: 2026-01-31 09:16:24.329863261 +0000 UTC m=+0.146505791 container attach 37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dhawan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 09:16:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:25 compute-0 practical_dhawan[430707]: {
Jan 31 09:16:25 compute-0 practical_dhawan[430707]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:16:25 compute-0 practical_dhawan[430707]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:16:25 compute-0 practical_dhawan[430707]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:16:25 compute-0 practical_dhawan[430707]:         "osd_id": 0,
Jan 31 09:16:25 compute-0 practical_dhawan[430707]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:16:25 compute-0 practical_dhawan[430707]:         "type": "bluestore"
Jan 31 09:16:25 compute-0 practical_dhawan[430707]:     }
Jan 31 09:16:25 compute-0 practical_dhawan[430707]: }
Jan 31 09:16:25 compute-0 systemd[1]: libpod-37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa.scope: Deactivated successfully.
Jan 31 09:16:25 compute-0 podman[430691]: 2026-01-31 09:16:25.17125981 +0000 UTC m=+0.987902260 container died 37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 09:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d818d845ea636740323d15abc1383aa821f9c11c9b778f87470e07183adfd1bb-merged.mount: Deactivated successfully.
Jan 31 09:16:25 compute-0 podman[430691]: 2026-01-31 09:16:25.243989973 +0000 UTC m=+1.060632443 container remove 37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:16:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:25 compute-0 systemd[1]: libpod-conmon-37e809f36f042497cdcaa015aa771c4bf5a47fea58d11650bf18f2dcc3166bfa.scope: Deactivated successfully.
Jan 31 09:16:25 compute-0 sudo[430585]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:16:25 compute-0 nova_compute[247704]: 2026-01-31 09:16:25.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:25.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:16:25 compute-0 ceph-mon[74496]: pgmap v4352: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:16:25 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 7abf5731-a191-4840-a3cc-d80608e94903 does not exist
Jan 31 09:16:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 66ad0c13-6182-471e-8458-e5564679899f does not exist
Jan 31 09:16:26 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 72da0dfe-c3be-4d52-a1bb-82d38df1693c does not exist
Jan 31 09:16:26 compute-0 sudo[430743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:26 compute-0 sudo[430743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:26 compute-0 sudo[430743]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:26 compute-0 sudo[430768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:16:26 compute-0 sudo[430768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:26 compute-0 sudo[430768]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4353: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 09:16:26 compute-0 nova_compute[247704]: 2026-01-31 09:16:26.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:26 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:16:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:27.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:27.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:27 compute-0 ceph-mon[74496]: pgmap v4353: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 09:16:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4354: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 09:16:28 compute-0 nova_compute[247704]: 2026-01-31 09:16:28.915 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:28 compute-0 ceph-mon[74496]: pgmap v4354: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 09:16:28 compute-0 nova_compute[247704]: 2026-01-31 09:16:28.973 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:29.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:29 compute-0 nova_compute[247704]: 2026-01-31 09:16:29.327 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "6b297264-ad85-4d75-97eb-afe8844db41c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:29 compute-0 nova_compute[247704]: 2026-01-31 09:16:29.328 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:29 compute-0 nova_compute[247704]: 2026-01-31 09:16:29.345 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:16:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:16:29 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 63K writes, 235K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 63K writes, 23K syncs, 2.71 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2203 writes, 6446 keys, 2203 commit groups, 1.0 writes per commit group, ingest: 5.30 MB, 0.01 MB/s
                                           Interval WAL: 2203 writes, 956 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:16:29 compute-0 nova_compute[247704]: 2026-01-31 09:16:29.423 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:29 compute-0 nova_compute[247704]: 2026-01-31 09:16:29.424 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:29 compute-0 nova_compute[247704]: 2026-01-31 09:16:29.431 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:16:29 compute-0 nova_compute[247704]: 2026-01-31 09:16:29.431 247708 INFO nova.compute.claims [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Claim successful on node compute-0.ctlplane.example.com
Jan 31 09:16:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:29.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:29 compute-0 nova_compute[247704]: 2026-01-31 09:16:29.616 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/305204523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:29 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3815745731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:16:30 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4027901827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.030 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.037 247708 DEBUG nova.compute.provider_tree [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.050 247708 DEBUG nova.scheduler.client.report [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.083 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.084 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.139 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.139 247708 DEBUG nova.network.neutron [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:16:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4355: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.175 247708 INFO nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.193 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.312 247708 INFO nova.virt.block_device [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Booting with volume 16971cc6-6c7b-47eb-9656-600213b7ac4e at /dev/vda
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.511 247708 DEBUG os_brick.utils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.513 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.521 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.521 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[9aadd9a7-7b24-4d46-9a56-37aaf2d117b9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.522 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.527 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.527 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[dabb95ec-5b72-4144-9a24-4b4974468459]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.529 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.536 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.536 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[a67c2388-d0e2-4d23-a23e-cf80c1f297bf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.537 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[919c6abe-eb1c-40b6-8560-ccbd9b5cda5c]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.538 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.556 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.558 247708 DEBUG os_brick.initiator.connectors.lightos [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.558 247708 DEBUG os_brick.initiator.connectors.lightos [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.559 247708 DEBUG os_brick.initiator.connectors.lightos [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.559 247708 DEBUG os_brick.utils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.559 247708 DEBUG nova.virt.block_device [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Updating existing volume attachment record: 6a3470bf-28e7-4cb3-86a3-aead50eba653 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 09:16:30 compute-0 nova_compute[247704]: 2026-01-31 09:16:30.781 247708 DEBUG nova.policy [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecd39871d7fd438f88b36601f25d6eb6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98d10c0290e340a08e9d1726bf0066bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:16:30 compute-0 sudo[421366]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:30 compute-0 sshd-session[421345]: Received disconnect from 192.168.122.10 port 49084:11: disconnected by user
Jan 31 09:16:30 compute-0 sshd-session[421345]: Disconnected from user zuul 192.168.122.10 port 49084
Jan 31 09:16:30 compute-0 sshd-session[421331]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:16:30 compute-0 systemd[1]: session-64.scope: Deactivated successfully.
Jan 31 09:16:30 compute-0 systemd[1]: session-64.scope: Consumed 2min 53.250s CPU time, 1.0G memory peak, read 346.0M from disk, written 454.2M to disk.
Jan 31 09:16:30 compute-0 systemd-logind[816]: Session 64 logged out. Waiting for processes to exit.
Jan 31 09:16:30 compute-0 systemd-logind[816]: Removed session 64.
Jan 31 09:16:31 compute-0 sudo[430827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:31 compute-0 sudo[430827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:31 compute-0 sudo[430827]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:31 compute-0 sshd-session[430825]: Accepted publickey for zuul from 192.168.122.10 port 40202 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 09:16:31 compute-0 systemd-logind[816]: New session 65 of user zuul.
Jan 31 09:16:31 compute-0 systemd[1]: Started Session 65 of User zuul.
Jan 31 09:16:31 compute-0 sshd-session[430825]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:16:31 compute-0 sudo[430853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:31 compute-0 sudo[430853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:31 compute-0 sudo[430853]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:31 compute-0 sudo[430896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-31-bzgtmcg.tar.xz
Jan 31 09:16:31 compute-0 sudo[430896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:16:31 compute-0 podman[430851]: 2026-01-31 09:16:31.220538472 +0000 UTC m=+0.182939350 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true)
Jan 31 09:16:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4027901827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:31 compute-0 ceph-mon[74496]: pgmap v4355: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 31 09:16:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:31.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:31 compute-0 sudo[430896]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:31 compute-0 sshd-session[430889]: Received disconnect from 192.168.122.10 port 40202:11: disconnected by user
Jan 31 09:16:31 compute-0 sshd-session[430889]: Disconnected from user zuul 192.168.122.10 port 40202
Jan 31 09:16:31 compute-0 sshd-session[430825]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:16:31 compute-0 systemd[1]: session-65.scope: Deactivated successfully.
Jan 31 09:16:31 compute-0 systemd-logind[816]: Session 65 logged out. Waiting for processes to exit.
Jan 31 09:16:31 compute-0 systemd-logind[816]: Removed session 65.
Jan 31 09:16:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:31.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:31 compute-0 sshd-session[430930]: Accepted publickey for zuul from 192.168.122.10 port 40214 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 09:16:31 compute-0 systemd-logind[816]: New session 66 of user zuul.
Jan 31 09:16:31 compute-0 systemd[1]: Started Session 66 of User zuul.
Jan 31 09:16:31 compute-0 sshd-session[430930]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:16:31 compute-0 sudo[430934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 31 09:16:31 compute-0 sudo[430934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:16:31 compute-0 sudo[430934]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:31 compute-0 sshd-session[430933]: Received disconnect from 192.168.122.10 port 40214:11: disconnected by user
Jan 31 09:16:31 compute-0 sshd-session[430933]: Disconnected from user zuul 192.168.122.10 port 40214
Jan 31 09:16:31 compute-0 sshd-session[430930]: pam_unix(sshd:session): session closed for user zuul
Jan 31 09:16:31 compute-0 systemd[1]: session-66.scope: Deactivated successfully.
Jan 31 09:16:31 compute-0 systemd-logind[816]: Session 66 logged out. Waiting for processes to exit.
Jan 31 09:16:31 compute-0 systemd-logind[816]: Removed session 66.
Jan 31 09:16:31 compute-0 nova_compute[247704]: 2026-01-31 09:16:31.811 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:16:31 compute-0 nova_compute[247704]: 2026-01-31 09:16:31.813 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:16:31 compute-0 nova_compute[247704]: 2026-01-31 09:16:31.813 247708 INFO nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Creating image(s)
Jan 31 09:16:31 compute-0 nova_compute[247704]: 2026-01-31 09:16:31.813 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 09:16:31 compute-0 nova_compute[247704]: 2026-01-31 09:16:31.814 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Ensure instance console log exists: /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:16:31 compute-0 nova_compute[247704]: 2026-01-31 09:16:31.814 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:31 compute-0 nova_compute[247704]: 2026-01-31 09:16:31.814 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:31 compute-0 nova_compute[247704]: 2026-01-31 09:16:31.815 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4356: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 MiB/s wr, 28 op/s
Jan 31 09:16:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3152122720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:16:32 compute-0 nova_compute[247704]: 2026-01-31 09:16:32.330 247708 DEBUG nova.network.neutron [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Successfully created port: 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:16:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:33.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:33 compute-0 ceph-mon[74496]: pgmap v4356: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.5 MiB/s wr, 28 op/s
Jan 31 09:16:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1902503427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:33 compute-0 nova_compute[247704]: 2026-01-31 09:16:33.920 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:33 compute-0 nova_compute[247704]: 2026-01-31 09:16:33.974 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.046 247708 DEBUG nova.network.neutron [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Successfully updated port: 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.061 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.062 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquired lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.062 247708 DEBUG nova.network.neutron [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:16:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4357: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s wr, 0 op/s
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.313 247708 DEBUG nova.compute.manager [req-d4becdb9-6385-40b6-bf4c-93420111eb2b req-5f287fbf-fb9e-4e5d-94eb-942f2456e60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received event network-changed-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.313 247708 DEBUG nova.compute.manager [req-d4becdb9-6385-40b6-bf4c-93420111eb2b req-5f287fbf-fb9e-4e5d-94eb-942f2456e60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Refreshing instance network info cache due to event network-changed-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.313 247708 DEBUG oslo_concurrency.lockutils [req-d4becdb9-6385-40b6-bf4c-93420111eb2b req-5f287fbf-fb9e-4e5d-94eb-942f2456e60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:16:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4121527407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.418 247708 DEBUG nova.network.neutron [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.598 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.598 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.599 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.599 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.638 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.639 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.639 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.639 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:16:34 compute-0 nova_compute[247704]: 2026-01-31 09:16:34.639 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:16:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3800926238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.051 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.196 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.197 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3940MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.198 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.198 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:35.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.345 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance 6b297264-ad85-4d75-97eb-afe8844db41c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.347 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.347 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:16:35 compute-0 ceph-mon[74496]: pgmap v4357: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s wr, 0 op/s
Jan 31 09:16:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3800926238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.426 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:35.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:16:35 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599360944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.824 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.828 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.864 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.930 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:16:35 compute-0 nova_compute[247704]: 2026-01-31 09:16:35.931 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.151 247708 DEBUG nova.network.neutron [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Updating instance_info_cache with network_info: [{"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4358: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.227 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Releasing lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.227 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Instance network_info: |[{"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.228 247708 DEBUG oslo_concurrency.lockutils [req-d4becdb9-6385-40b6-bf4c-93420111eb2b req-5f287fbf-fb9e-4e5d-94eb-942f2456e60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.228 247708 DEBUG nova.network.neutron [req-d4becdb9-6385-40b6-bf4c-93420111eb2b req-5f287fbf-fb9e-4e5d-94eb-942f2456e60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Refreshing network info cache for port 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.231 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Start _get_guest_xml network_info=[{"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '6a3470bf-28e7-4cb3-86a3-aead50eba653', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-16971cc6-6c7b-47eb-9656-600213b7ac4e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '16971cc6-6c7b-47eb-9656-600213b7ac4e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6b297264-ad85-4d75-97eb-afe8844db41c', 'attached_at': '', 'detached_at': '', 'volume_id': '16971cc6-6c7b-47eb-9656-600213b7ac4e', 'serial': '16971cc6-6c7b-47eb-9656-600213b7ac4e'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.235 247708 WARNING nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.240 247708 DEBUG nova.virt.libvirt.host [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.241 247708 DEBUG nova.virt.libvirt.host [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.245 247708 DEBUG nova.virt.libvirt.host [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.246 247708 DEBUG nova.virt.libvirt.host [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.247 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.247 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.248 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.248 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.248 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.248 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.249 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.249 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.249 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.250 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.250 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.250 247708 DEBUG nova.virt.hardware [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.280 247708 DEBUG nova.storage.rbd_utils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 6b297264-ad85-4d75-97eb-afe8844db41c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.284 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031506999697561884 of space, bias 1.0, pg target 0.9452099909268565 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:16:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:16:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2599360944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:16:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:16:36 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766357343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.685 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.721 247708 DEBUG nova.virt.libvirt.vif [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:16:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1369053659',display_name='tempest-TestVolumeBootPattern-server-1369053659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1369053659',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-3fw9pso4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:16:30Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=6b297264-ad85-4d75-97eb-afe8844db41c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.722 247708 DEBUG nova.network.os_vif_util [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.722 247708 DEBUG nova.network.os_vif_util [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:08:15,bridge_name='br-int',has_traffic_filtering=True,id=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df79f25-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.723 247708 DEBUG nova.objects.instance [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b297264-ad85-4d75-97eb-afe8844db41c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.755 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <uuid>6b297264-ad85-4d75-97eb-afe8844db41c</uuid>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <name>instance-000000de</name>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <metadata>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <nova:name>tempest-TestVolumeBootPattern-server-1369053659</nova:name>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 09:16:36</nova:creationTime>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <nova:user uuid="ecd39871d7fd438f88b36601f25d6eb6">tempest-TestVolumeBootPattern-1294459393-project-member</nova:user>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <nova:project uuid="98d10c0290e340a08e9d1726bf0066bf">tempest-TestVolumeBootPattern-1294459393</nova:project>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <nova:port uuid="9df79f25-f35d-48f6-83e4-cd77dc9ca7ad">
Jan 31 09:16:36 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   </metadata>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <system>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <entry name="serial">6b297264-ad85-4d75-97eb-afe8844db41c</entry>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <entry name="uuid">6b297264-ad85-4d75-97eb-afe8844db41c</entry>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </system>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <os>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   </os>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <features>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <apic/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   </features>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   </clock>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   </cpu>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   <devices>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/6b297264-ad85-4d75-97eb-afe8844db41c_disk.config">
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       </source>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-16971cc6-6c7b-47eb-9656-600213b7ac4e">
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       </source>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:16:36 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <serial>16971cc6-6c7b-47eb-9656-600213b7ac4e</serial>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:26:08:15"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <target dev="tap9df79f25-f3"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </interface>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c/console.log" append="off"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </serial>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <video>
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </video>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </rng>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 09:16:36 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 09:16:36 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 09:16:36 compute-0 nova_compute[247704]:   </devices>
Jan 31 09:16:36 compute-0 nova_compute[247704]: </domain>
Jan 31 09:16:36 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.756 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Preparing to wait for external event network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.756 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.756 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.757 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.758 247708 DEBUG nova.virt.libvirt.vif [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:16:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1369053659',display_name='tempest-TestVolumeBootPattern-server-1369053659',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1369053659',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-3fw9pso4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:16:30Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=6b297264-ad85-4d75-97eb-afe8844db41c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.758 247708 DEBUG nova.network.os_vif_util [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.758 247708 DEBUG nova.network.os_vif_util [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:08:15,bridge_name='br-int',has_traffic_filtering=True,id=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df79f25-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.759 247708 DEBUG os_vif [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:08:15,bridge_name='br-int',has_traffic_filtering=True,id=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df79f25-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.759 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.760 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.760 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.763 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.764 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9df79f25-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.764 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9df79f25-f3, col_values=(('external_ids', {'iface-id': '9df79f25-f35d-48f6-83e4-cd77dc9ca7ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:08:15', 'vm-uuid': '6b297264-ad85-4d75-97eb-afe8844db41c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.766 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:36 compute-0 NetworkManager[49108]: <info>  [1769850996.7670] manager: (tap9df79f25-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.769 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.773 247708 INFO os_vif [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:08:15,bridge_name='br-int',has_traffic_filtering=True,id=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df79f25-f3')
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.909 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.909 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.910 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No VIF found with MAC fa:16:3e:26:08:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.910 247708 INFO nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Using config drive
Jan 31 09:16:36 compute-0 nova_compute[247704]: 2026-01-31 09:16:36.929 247708 DEBUG nova.storage.rbd_utils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 6b297264-ad85-4d75-97eb-afe8844db41c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:16:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:37 compute-0 ceph-mon[74496]: pgmap v4358: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:16:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2766357343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:16:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:37.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4359: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:16:38 compute-0 nova_compute[247704]: 2026-01-31 09:16:38.761 247708 INFO nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Creating config drive at /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c/disk.config
Jan 31 09:16:38 compute-0 nova_compute[247704]: 2026-01-31 09:16:38.765 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsarcms37 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:38 compute-0 nova_compute[247704]: 2026-01-31 09:16:38.891 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsarcms37" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:38 compute-0 nova_compute[247704]: 2026-01-31 09:16:38.916 247708 DEBUG nova.storage.rbd_utils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image 6b297264-ad85-4d75-97eb-afe8844db41c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:16:38 compute-0 nova_compute[247704]: 2026-01-31 09:16:38.920 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c/disk.config 6b297264-ad85-4d75-97eb-afe8844db41c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:16:38 compute-0 nova_compute[247704]: 2026-01-31 09:16:38.975 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.125 247708 DEBUG oslo_concurrency.processutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c/disk.config 6b297264-ad85-4d75-97eb-afe8844db41c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.126 247708 INFO nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Deleting local config drive /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c/disk.config because it was imported into RBD.
Jan 31 09:16:39 compute-0 kernel: tap9df79f25-f3: entered promiscuous mode
Jan 31 09:16:39 compute-0 NetworkManager[49108]: <info>  [1769850999.1731] manager: (tap9df79f25-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Jan 31 09:16:39 compute-0 ovn_controller[149457]: 2026-01-31T09:16:39Z|00907|binding|INFO|Claiming lport 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad for this chassis.
Jan 31 09:16:39 compute-0 ovn_controller[149457]: 2026-01-31T09:16:39Z|00908|binding|INFO|9df79f25-f35d-48f6-83e4-cd77dc9ca7ad: Claiming fa:16:3e:26:08:15 10.100.0.4
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.174 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:39 compute-0 ovn_controller[149457]: 2026-01-31T09:16:39Z|00909|binding|INFO|Setting lport 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad ovn-installed in OVS
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.183 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.184 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:39 compute-0 ovn_controller[149457]: 2026-01-31T09:16:39Z|00910|binding|INFO|Setting lport 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad up in Southbound
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.187 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:08:15 10.100.0.4'], port_security=['fa:16:3e:26:08:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6b297264-ad85-4d75-97eb-afe8844db41c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11348f26-2c0a-4b92-a927-856bca145e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.189 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 bound to our chassis
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.191 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:16:39 compute-0 systemd-udevd[431120]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:16:39 compute-0 systemd-machined[214448]: New machine qemu-95-instance-000000de.
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.200 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[59bbcf40-d790-4b57-a1c2-369b62f10924]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.201 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c9ca540-51 in ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.203 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c9ca540-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.203 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[624b974e-d464-4c0e-b9b4-a9f44e06672f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.204 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e6973d66-7816-4933-8c3a-a208f1054aff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 NetworkManager[49108]: <info>  [1769850999.2074] device (tap9df79f25-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:16:39 compute-0 NetworkManager[49108]: <info>  [1769850999.2079] device (tap9df79f25-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.214 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[49cc25ab-3182-46a0-8103-b57cf4a8ffba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-000000de.
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.235 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfe4a1f-eccc-4683-944e-7c8c0277a36d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.260 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d780ae-21ae-4bef-b7be-929251a45430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 NetworkManager[49108]: <info>  [1769850999.2650] manager: (tap5c9ca540-50): new Veth device (/org/freedesktop/NetworkManager/Devices/405)
Jan 31 09:16:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:39.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.265 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[88bffce1-5570-49eb-a0e5-c4021670dc03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.296 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[c328bd04-c59c-43d9-9781-5f06a0cd97a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.300 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3fc36d-b59e-4a47-93fb-fcec1e5d3bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 NetworkManager[49108]: <info>  [1769850999.3153] device (tap5c9ca540-50): carrier: link connected
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.320 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[ab474ead-401a-4d5f-8196-6c838ea08c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.333 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[443e77da-3a4f-4f99-94db-126efee4f34f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1124193, 'reachable_time': 16495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431154, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.344 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[027e9c7f-95b3-44ca-8886-8cdc7f491b82]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:dcf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1124193, 'tstamp': 1124193}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431155, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.356 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[303101df-bac0-4c00-88a1-9986f5b19f14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1124193, 'reachable_time': 16495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 431156, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.377 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ab7bbb-0b14-4cc7-a43d-4ef68c43617a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.421 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a8553a15-be52-4dde-9675-25c1e45eec89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.422 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.423 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.423 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c9ca540-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:16:39 compute-0 NetworkManager[49108]: <info>  [1769850999.4253] manager: (tap5c9ca540-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.425 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:39 compute-0 kernel: tap5c9ca540-50: entered promiscuous mode
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.428 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c9ca540-50, col_values=(('external_ids', {'iface-id': '016c97be-36ee-470a-8bac-28db98577a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.429 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:39 compute-0 ovn_controller[149457]: 2026-01-31T09:16:39Z|00911|binding|INFO|Releasing lport 016c97be-36ee-470a-8bac-28db98577a8c from this chassis (sb_readonly=0)
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.436 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.436 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.437 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[df283f65-7dcc-4807-aa2b-5c8f75f959b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.438 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: global
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:16:39 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:39.438 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'env', 'PROCESS_TAG=haproxy-5c9ca540-57e7-412d-8ef3-af923db0a265', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c9ca540-57e7-412d-8ef3-af923db0a265.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:16:39 compute-0 ceph-mon[74496]: pgmap v4359: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:16:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:39.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.825 247708 DEBUG nova.compute.manager [req-b1592ee1-65c5-41b8-aa8f-6484ee080547 req-9e0df0bd-3b73-44e3-83ab-c9938c58c9dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received event network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:16:39 compute-0 podman[431188]: 2026-01-31 09:16:39.827214789 +0000 UTC m=+0.063035550 container create 620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.828 247708 DEBUG oslo_concurrency.lockutils [req-b1592ee1-65c5-41b8-aa8f-6484ee080547 req-9e0df0bd-3b73-44e3-83ab-c9938c58c9dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.828 247708 DEBUG oslo_concurrency.lockutils [req-b1592ee1-65c5-41b8-aa8f-6484ee080547 req-9e0df0bd-3b73-44e3-83ab-c9938c58c9dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.828 247708 DEBUG oslo_concurrency.lockutils [req-b1592ee1-65c5-41b8-aa8f-6484ee080547 req-9e0df0bd-3b73-44e3-83ab-c9938c58c9dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.829 247708 DEBUG nova.compute.manager [req-b1592ee1-65c5-41b8-aa8f-6484ee080547 req-9e0df0bd-3b73-44e3-83ab-c9938c58c9dd 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Processing event network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:16:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:39 compute-0 systemd[1]: Started libpod-conmon-620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf.scope.
Jan 31 09:16:39 compute-0 podman[431188]: 2026-01-31 09:16:39.789697824 +0000 UTC m=+0.025518605 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:16:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:16:39 compute-0 nova_compute[247704]: 2026-01-31 09:16:39.893 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc350db11948dbc1ce7b4b3ce4ffc290a550b7a935747120f2d96dd6ff5563c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:16:39 compute-0 podman[431188]: 2026-01-31 09:16:39.912408142 +0000 UTC m=+0.148228933 container init 620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 09:16:39 compute-0 podman[431188]: 2026-01-31 09:16:39.91731153 +0000 UTC m=+0.153132291 container start 620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:16:39 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[431203]: [NOTICE]   (431207) : New worker (431209) forked
Jan 31 09:16:39 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[431203]: [NOTICE]   (431207) : Loading success.
Jan 31 09:16:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4360: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.195 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.196 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769851000.1950462, 6b297264-ad85-4d75-97eb-afe8844db41c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.196 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] VM Started (Lifecycle Event)
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.199 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.201 247708 INFO nova.virt.libvirt.driver [-] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Instance spawned successfully.
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.202 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.227 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.230 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.231 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.231 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.232 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.232 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.232 247708 DEBUG nova.virt.libvirt.driver [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.236 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.315 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.316 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769851000.1959403, 6b297264-ad85-4d75-97eb-afe8844db41c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.316 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] VM Paused (Lifecycle Event)
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.361 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.365 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769851000.1986396, 6b297264-ad85-4d75-97eb-afe8844db41c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.365 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] VM Resumed (Lifecycle Event)
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.392 247708 INFO nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Took 8.58 seconds to spawn the instance on the hypervisor.
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.392 247708 DEBUG nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.394 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.401 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.441 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.482 247708 INFO nova.compute.manager [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Took 11.08 seconds to build instance.
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.538 247708 DEBUG oslo_concurrency.lockutils [None req-41e205ac-be13-497c-83bc-0138cf5bae7f ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.838 247708 DEBUG nova.network.neutron [req-d4becdb9-6385-40b6-bf4c-93420111eb2b req-5f287fbf-fb9e-4e5d-94eb-942f2456e60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Updated VIF entry in instance network info cache for port 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.840 247708 DEBUG nova.network.neutron [req-d4becdb9-6385-40b6-bf4c-93420111eb2b req-5f287fbf-fb9e-4e5d-94eb-942f2456e60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Updating instance_info_cache with network_info: [{"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:16:40 compute-0 nova_compute[247704]: 2026-01-31 09:16:40.864 247708 DEBUG oslo_concurrency.lockutils [req-d4becdb9-6385-40b6-bf4c-93420111eb2b req-5f287fbf-fb9e-4e5d-94eb-942f2456e60b 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:16:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:41 compute-0 ceph-mgr[74791]: [devicehealth INFO root] Check health
Jan 31 09:16:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:41.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:41 compute-0 ceph-mon[74496]: pgmap v4360: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 09:16:41 compute-0 nova_compute[247704]: 2026-01-31 09:16:41.766 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:42 compute-0 nova_compute[247704]: 2026-01-31 09:16:42.123 247708 DEBUG nova.compute.manager [req-65653e32-ddac-4d60-ab60-ba1756fdb5cc req-8eba5e3b-3a45-4da6-be9a-1a28d20371bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received event network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:16:42 compute-0 nova_compute[247704]: 2026-01-31 09:16:42.123 247708 DEBUG oslo_concurrency.lockutils [req-65653e32-ddac-4d60-ab60-ba1756fdb5cc req-8eba5e3b-3a45-4da6-be9a-1a28d20371bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:16:42 compute-0 nova_compute[247704]: 2026-01-31 09:16:42.123 247708 DEBUG oslo_concurrency.lockutils [req-65653e32-ddac-4d60-ab60-ba1756fdb5cc req-8eba5e3b-3a45-4da6-be9a-1a28d20371bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:16:42 compute-0 nova_compute[247704]: 2026-01-31 09:16:42.123 247708 DEBUG oslo_concurrency.lockutils [req-65653e32-ddac-4d60-ab60-ba1756fdb5cc req-8eba5e3b-3a45-4da6-be9a-1a28d20371bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:16:42 compute-0 nova_compute[247704]: 2026-01-31 09:16:42.123 247708 DEBUG nova.compute.manager [req-65653e32-ddac-4d60-ab60-ba1756fdb5cc req-8eba5e3b-3a45-4da6-be9a-1a28d20371bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] No waiting events found dispatching network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:16:42 compute-0 nova_compute[247704]: 2026-01-31 09:16:42.124 247708 WARNING nova.compute.manager [req-65653e32-ddac-4d60-ab60-ba1756fdb5cc req-8eba5e3b-3a45-4da6-be9a-1a28d20371bc 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received unexpected event network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad for instance with vm_state active and task_state None.
Jan 31 09:16:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4361: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Jan 31 09:16:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:43.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:43 compute-0 nova_compute[247704]: 2026-01-31 09:16:43.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:43.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:43 compute-0 ceph-mon[74496]: pgmap v4361: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 KiB/s rd, 12 KiB/s wr, 7 op/s
Jan 31 09:16:43 compute-0 nova_compute[247704]: 2026-01-31 09:16:43.976 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4362: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 869 KiB/s rd, 12 KiB/s wr, 37 op/s
Jan 31 09:16:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:45.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:45 compute-0 nova_compute[247704]: 2026-01-31 09:16:45.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:16:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:45.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:45 compute-0 ceph-mon[74496]: pgmap v4362: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 869 KiB/s rd, 12 KiB/s wr, 37 op/s
Jan 31 09:16:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4363: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:46 compute-0 nova_compute[247704]: 2026-01-31 09:16:46.768 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:46 compute-0 ceph-mon[74496]: pgmap v4363: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:47.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:47.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4364: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:48 compute-0 podman[431265]: 2026-01-31 09:16:48.875647171 +0000 UTC m=+0.050958338 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 09:16:48 compute-0 nova_compute[247704]: 2026-01-31 09:16:48.978 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:49 compute-0 ceph-mon[74496]: pgmap v4364: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:49.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:16:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:16:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4365: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:51.006 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=116, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=115) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:16:51 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:51.007 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:16:51 compute-0 nova_compute[247704]: 2026-01-31 09:16:51.007 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:51 compute-0 sudo[431283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:51 compute-0 sudo[431283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:51 compute-0 sudo[431283]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:51 compute-0 sudo[431308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:16:51 compute-0 sudo[431308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:16:51 compute-0 sudo[431308]: pam_unix(sudo:session): session closed for user root
Jan 31 09:16:51 compute-0 nova_compute[247704]: 2026-01-31 09:16:51.265 247708 DEBUG nova.compute.manager [req-50647631-1757-4961-952f-06117d91c935 req-6f7851e5-e08f-42ea-a95f-e4c45e173643 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received event network-changed-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:16:51 compute-0 nova_compute[247704]: 2026-01-31 09:16:51.266 247708 DEBUG nova.compute.manager [req-50647631-1757-4961-952f-06117d91c935 req-6f7851e5-e08f-42ea-a95f-e4c45e173643 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Refreshing instance network info cache due to event network-changed-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:16:51 compute-0 nova_compute[247704]: 2026-01-31 09:16:51.266 247708 DEBUG oslo_concurrency.lockutils [req-50647631-1757-4961-952f-06117d91c935 req-6f7851e5-e08f-42ea-a95f-e4c45e173643 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:16:51 compute-0 nova_compute[247704]: 2026-01-31 09:16:51.266 247708 DEBUG oslo_concurrency.lockutils [req-50647631-1757-4961-952f-06117d91c935 req-6f7851e5-e08f-42ea-a95f-e4c45e173643 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:16:51 compute-0 nova_compute[247704]: 2026-01-31 09:16:51.266 247708 DEBUG nova.network.neutron [req-50647631-1757-4961-952f-06117d91c935 req-6f7851e5-e08f-42ea-a95f-e4c45e173643 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Refreshing network info cache for port 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:16:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:51.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:51 compute-0 ceph-mon[74496]: pgmap v4365: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 09:16:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:51.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:51 compute-0 nova_compute[247704]: 2026-01-31 09:16:51.770 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:52 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:16:52.009 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '116'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:16:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4366: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Jan 31 09:16:53 compute-0 nova_compute[247704]: 2026-01-31 09:16:53.199 247708 DEBUG nova.network.neutron [req-50647631-1757-4961-952f-06117d91c935 req-6f7851e5-e08f-42ea-a95f-e4c45e173643 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Updated VIF entry in instance network info cache for port 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:16:53 compute-0 nova_compute[247704]: 2026-01-31 09:16:53.200 247708 DEBUG nova.network.neutron [req-50647631-1757-4961-952f-06117d91c935 req-6f7851e5-e08f-42ea-a95f-e4c45e173643 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Updating instance_info_cache with network_info: [{"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:16:53 compute-0 nova_compute[247704]: 2026-01-31 09:16:53.221 247708 DEBUG oslo_concurrency.lockutils [req-50647631-1757-4961-952f-06117d91c935 req-6f7851e5-e08f-42ea-a95f-e4c45e173643 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-6b297264-ad85-4d75-97eb-afe8844db41c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:16:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:53.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:53.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:53 compute-0 ceph-mon[74496]: pgmap v4366: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Jan 31 09:16:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3871755717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:16:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/3871755717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:16:53 compute-0 nova_compute[247704]: 2026-01-31 09:16:53.981 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4367: 305 pgs: 305 active+clean; 177 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Jan 31 09:16:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:16:55 compute-0 ceph-mon[74496]: pgmap v4367: 305 pgs: 305 active+clean; 177 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 79 op/s
Jan 31 09:16:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:55.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:16:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:55.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:16:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4368: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 60 op/s
Jan 31 09:16:56 compute-0 ovn_controller[149457]: 2026-01-31T09:16:56Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:08:15 10.100.0.4
Jan 31 09:16:56 compute-0 ovn_controller[149457]: 2026-01-31T09:16:56Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:08:15 10.100.0.4
Jan 31 09:16:56 compute-0 nova_compute[247704]: 2026-01-31 09:16:56.771 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:57.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:57 compute-0 ceph-mon[74496]: pgmap v4368: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 MiB/s wr, 60 op/s
Jan 31 09:16:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:57.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4369: 305 pgs: 305 active+clean; 186 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 210 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:16:58 compute-0 nova_compute[247704]: 2026-01-31 09:16:58.983 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:16:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:16:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:59.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:16:59 compute-0 ceph-mon[74496]: pgmap v4369: 305 pgs: 305 active+clean; 186 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 210 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 31 09:16:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:16:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:16:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:16:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4370: 305 pgs: 305 active+clean; 191 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Jan 31 09:17:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:01.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:01.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:01 compute-0 ceph-mon[74496]: pgmap v4370: 305 pgs: 305 active+clean; 191 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Jan 31 09:17:01 compute-0 nova_compute[247704]: 2026-01-31 09:17:01.772 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:01 compute-0 podman[431339]: 2026-01-31 09:17:01.909293693 +0000 UTC m=+0.084096027 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 09:17:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4371: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:17:03 compute-0 ceph-mon[74496]: pgmap v4371: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:17:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:03.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:03 compute-0 nova_compute[247704]: 2026-01-31 09:17:03.985 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4372: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:17:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:05.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:05 compute-0 ceph-mon[74496]: pgmap v4372: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 09:17:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:05.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4373: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 290 KiB/s rd, 1.1 MiB/s wr, 51 op/s
Jan 31 09:17:06 compute-0 nova_compute[247704]: 2026-01-31 09:17:06.774 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.177 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "6b297264-ad85-4d75-97eb-afe8844db41c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.178 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.179 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.179 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.179 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.180 247708 INFO nova.compute.manager [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Terminating instance
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.182 247708 DEBUG nova.compute.manager [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:17:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:07.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:07.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:07 compute-0 kernel: tap9df79f25-f3 (unregistering): left promiscuous mode
Jan 31 09:17:07 compute-0 NetworkManager[49108]: <info>  [1769851027.7349] device (tap9df79f25-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.734 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:07 compute-0 ovn_controller[149457]: 2026-01-31T09:17:07Z|00912|binding|INFO|Releasing lport 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad from this chassis (sb_readonly=0)
Jan 31 09:17:07 compute-0 ovn_controller[149457]: 2026-01-31T09:17:07Z|00913|binding|INFO|Setting lport 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad down in Southbound
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.745 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:07 compute-0 ovn_controller[149457]: 2026-01-31T09:17:07Z|00914|binding|INFO|Removing iface tap9df79f25-f3 ovn-installed in OVS
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.749 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:07 compute-0 nova_compute[247704]: 2026-01-31 09:17:07.756 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:07.765 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:08:15 10.100.0.4'], port_security=['fa:16:3e:26:08:15 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '6b297264-ad85-4d75-97eb-afe8844db41c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11348f26-2c0a-4b92-a927-856bca145e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:17:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:07.768 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 9df79f25-f35d-48f6-83e4-cd77dc9ca7ad in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 unbound from our chassis
Jan 31 09:17:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:07.770 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c9ca540-57e7-412d-8ef3-af923db0a265, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:17:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:07.772 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[36a38613-c986-48ce-b21a-daf123e502c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:07 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:07.773 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace which is not needed anymore
Jan 31 09:17:07 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000de.scope: Deactivated successfully.
Jan 31 09:17:07 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000de.scope: Consumed 13.916s CPU time.
Jan 31 09:17:07 compute-0 systemd-machined[214448]: Machine qemu-95-instance-000000de terminated.
Jan 31 09:17:07 compute-0 ceph-mon[74496]: pgmap v4373: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 290 KiB/s rd, 1.1 MiB/s wr, 51 op/s
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.000 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.004 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.015 247708 INFO nova.virt.libvirt.driver [-] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Instance destroyed successfully.
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.016 247708 DEBUG nova.objects.instance [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'resources' on Instance uuid 6b297264-ad85-4d75-97eb-afe8844db41c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.033 247708 DEBUG nova.virt.libvirt.vif [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:16:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1369053659',display_name='tempest-TestVolumeBootPattern-server-1369053659',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1369053659',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:16:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-3fw9pso4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:16:40Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=6b297264-ad85-4d75-97eb-afe8844db41c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.034 247708 DEBUG nova.network.os_vif_util [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "address": "fa:16:3e:26:08:15", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9df79f25-f3", "ovs_interfaceid": "9df79f25-f35d-48f6-83e4-cd77dc9ca7ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.035 247708 DEBUG nova.network.os_vif_util [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:08:15,bridge_name='br-int',has_traffic_filtering=True,id=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df79f25-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.036 247708 DEBUG os_vif [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:08:15,bridge_name='br-int',has_traffic_filtering=True,id=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df79f25-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.039 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.041 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9df79f25-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:08 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[431203]: [NOTICE]   (431207) : haproxy version is 2.8.14-c23fe91
Jan 31 09:17:08 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[431203]: [NOTICE]   (431207) : path to executable is /usr/sbin/haproxy
Jan 31 09:17:08 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[431203]: [WARNING]  (431207) : Exiting Master process...
Jan 31 09:17:08 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[431203]: [ALERT]    (431207) : Current worker (431209) exited with code 143 (Terminated)
Jan 31 09:17:08 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[431203]: [WARNING]  (431207) : All workers exited. Exiting... (0)
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.045 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.046 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:08 compute-0 systemd[1]: libpod-620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf.scope: Deactivated successfully.
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.049 247708 INFO os_vif [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:08:15,bridge_name='br-int',has_traffic_filtering=True,id=9df79f25-f35d-48f6-83e4-cd77dc9ca7ad,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9df79f25-f3')
Jan 31 09:17:08 compute-0 podman[431391]: 2026-01-31 09:17:08.054442216 +0000 UTC m=+0.197695166 container died 620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:17:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4374: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 610 KiB/s wr, 39 op/s
Jan 31 09:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf-userdata-shm.mount: Deactivated successfully.
Jan 31 09:17:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc350db11948dbc1ce7b4b3ce4ffc290a550b7a935747120f2d96dd6ff5563c9-merged.mount: Deactivated successfully.
Jan 31 09:17:08 compute-0 podman[431391]: 2026-01-31 09:17:08.71890501 +0000 UTC m=+0.862158000 container cleanup 620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 09:17:08 compute-0 systemd[1]: libpod-conmon-620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf.scope: Deactivated successfully.
Jan 31 09:17:08 compute-0 nova_compute[247704]: 2026-01-31 09:17:08.989 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:08 compute-0 podman[431449]: 2026-01-31 09:17:08.989968223 +0000 UTC m=+0.251896292 container remove 620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 09:17:08 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:08.998 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b721cddb-0bcb-4f81-ae07-f21e2eeb5048]: (4, ('Sat Jan 31 09:17:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf)\n620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf\nSat Jan 31 09:17:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf)\n620fc80f3173422c302863e47f24236ddc663169cb170190fbfda05830d308cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:09.000 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[fa92c987-a28e-4177-967f-4a5526e9e76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:09.002 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:09 compute-0 nova_compute[247704]: 2026-01-31 09:17:09.005 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:09 compute-0 kernel: tap5c9ca540-50: left promiscuous mode
Jan 31 09:17:09 compute-0 nova_compute[247704]: 2026-01-31 09:17:09.006 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:09 compute-0 nova_compute[247704]: 2026-01-31 09:17:09.012 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:09.012 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3aee4e99-66b5-4c16-ab13-b8decd6585af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:09.029 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b2587602-1152-4061-b7ed-b12a1b5fb4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:09.031 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[086a2fbe-c197-40ec-8b16-1fbcaaac9ce4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:09.042 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[12d6434f-7d54-4461-927a-c7ec2ffcbaf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1124187, 'reachable_time': 18284, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431465, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c9ca540\x2d57e7\x2d412d\x2d8ef3\x2daf923db0a265.mount: Deactivated successfully.
Jan 31 09:17:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:09.046 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:17:09 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:09.046 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[92d94c12-421b-4ed0-8271-4631c1790a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:09.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:09 compute-0 ceph-mon[74496]: pgmap v4374: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 610 KiB/s wr, 39 op/s
Jan 31 09:17:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:09.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:10 compute-0 nova_compute[247704]: 2026-01-31 09:17:10.113 247708 DEBUG nova.compute.manager [req-ecacff90-f1c4-44e4-84f3-e1109bf154b2 req-e62d2211-027d-448d-ab98-213b31b13573 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received event network-vif-unplugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:17:10 compute-0 nova_compute[247704]: 2026-01-31 09:17:10.114 247708 DEBUG oslo_concurrency.lockutils [req-ecacff90-f1c4-44e4-84f3-e1109bf154b2 req-e62d2211-027d-448d-ab98-213b31b13573 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:10 compute-0 nova_compute[247704]: 2026-01-31 09:17:10.115 247708 DEBUG oslo_concurrency.lockutils [req-ecacff90-f1c4-44e4-84f3-e1109bf154b2 req-e62d2211-027d-448d-ab98-213b31b13573 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:10 compute-0 nova_compute[247704]: 2026-01-31 09:17:10.116 247708 DEBUG oslo_concurrency.lockutils [req-ecacff90-f1c4-44e4-84f3-e1109bf154b2 req-e62d2211-027d-448d-ab98-213b31b13573 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:10 compute-0 nova_compute[247704]: 2026-01-31 09:17:10.116 247708 DEBUG nova.compute.manager [req-ecacff90-f1c4-44e4-84f3-e1109bf154b2 req-e62d2211-027d-448d-ab98-213b31b13573 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] No waiting events found dispatching network-vif-unplugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:17:10 compute-0 nova_compute[247704]: 2026-01-31 09:17:10.117 247708 DEBUG nova.compute.manager [req-ecacff90-f1c4-44e4-84f3-e1109bf154b2 req-e62d2211-027d-448d-ab98-213b31b13573 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received event network-vif-unplugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:17:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4375: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 330 KiB/s wr, 34 op/s
Jan 31 09:17:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:11.250 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:11.251 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:11.251 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:11 compute-0 sudo[431467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:11 compute-0 sudo[431467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:11 compute-0 sudo[431467]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:11 compute-0 sudo[431493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:11 compute-0 sudo[431493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:11 compute-0 sudo[431493]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:11 compute-0 nova_compute[247704]: 2026-01-31 09:17:11.554 247708 INFO nova.virt.libvirt.driver [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Deleting instance files /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c_del
Jan 31 09:17:11 compute-0 nova_compute[247704]: 2026-01-31 09:17:11.555 247708 INFO nova.virt.libvirt.driver [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Deletion of /var/lib/nova/instances/6b297264-ad85-4d75-97eb-afe8844db41c_del complete
Jan 31 09:17:11 compute-0 nova_compute[247704]: 2026-01-31 09:17:11.641 247708 INFO nova.compute.manager [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Took 4.46 seconds to destroy the instance on the hypervisor.
Jan 31 09:17:11 compute-0 nova_compute[247704]: 2026-01-31 09:17:11.642 247708 DEBUG oslo.service.loopingcall [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:17:11 compute-0 nova_compute[247704]: 2026-01-31 09:17:11.642 247708 DEBUG nova.compute.manager [-] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:17:11 compute-0 nova_compute[247704]: 2026-01-31 09:17:11.642 247708 DEBUG nova.network.neutron [-] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:17:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:11.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:11 compute-0 ceph-mon[74496]: pgmap v4375: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 330 KiB/s wr, 34 op/s
Jan 31 09:17:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4376: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 92 KiB/s wr, 21 op/s
Jan 31 09:17:13 compute-0 nova_compute[247704]: 2026-01-31 09:17:13.046 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:13 compute-0 ceph-mon[74496]: pgmap v4376: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 92 KiB/s wr, 21 op/s
Jan 31 09:17:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:13 compute-0 nova_compute[247704]: 2026-01-31 09:17:13.761 247708 DEBUG nova.compute.manager [req-9145c0d1-7f8f-40aa-9f6c-3bcc654ffc10 req-e34b6cfe-998d-456b-83da-521f6dc5acac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received event network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:17:13 compute-0 nova_compute[247704]: 2026-01-31 09:17:13.762 247708 DEBUG oslo_concurrency.lockutils [req-9145c0d1-7f8f-40aa-9f6c-3bcc654ffc10 req-e34b6cfe-998d-456b-83da-521f6dc5acac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:13 compute-0 nova_compute[247704]: 2026-01-31 09:17:13.762 247708 DEBUG oslo_concurrency.lockutils [req-9145c0d1-7f8f-40aa-9f6c-3bcc654ffc10 req-e34b6cfe-998d-456b-83da-521f6dc5acac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:13 compute-0 nova_compute[247704]: 2026-01-31 09:17:13.763 247708 DEBUG oslo_concurrency.lockutils [req-9145c0d1-7f8f-40aa-9f6c-3bcc654ffc10 req-e34b6cfe-998d-456b-83da-521f6dc5acac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:13 compute-0 nova_compute[247704]: 2026-01-31 09:17:13.763 247708 DEBUG nova.compute.manager [req-9145c0d1-7f8f-40aa-9f6c-3bcc654ffc10 req-e34b6cfe-998d-456b-83da-521f6dc5acac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] No waiting events found dispatching network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:17:13 compute-0 nova_compute[247704]: 2026-01-31 09:17:13.764 247708 WARNING nova.compute.manager [req-9145c0d1-7f8f-40aa-9f6c-3bcc654ffc10 req-e34b6cfe-998d-456b-83da-521f6dc5acac 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received unexpected event network-vif-plugged-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad for instance with vm_state active and task_state deleting.
Jan 31 09:17:13 compute-0 nova_compute[247704]: 2026-01-31 09:17:13.992 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4377: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 30 KiB/s wr, 16 op/s
Jan 31 09:17:14 compute-0 nova_compute[247704]: 2026-01-31 09:17:14.528 247708 DEBUG nova.network.neutron [-] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:17:14 compute-0 nova_compute[247704]: 2026-01-31 09:17:14.555 247708 INFO nova.compute.manager [-] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Took 2.91 seconds to deallocate network for instance.
Jan 31 09:17:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:15 compute-0 nova_compute[247704]: 2026-01-31 09:17:15.029 247708 INFO nova.compute.manager [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Took 0.47 seconds to detach 1 volumes for instance.
Jan 31 09:17:15 compute-0 nova_compute[247704]: 2026-01-31 09:17:15.141 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:15 compute-0 nova_compute[247704]: 2026-01-31 09:17:15.142 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:15 compute-0 nova_compute[247704]: 2026-01-31 09:17:15.215 247708 DEBUG oslo_concurrency.processutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:15 compute-0 ceph-mon[74496]: pgmap v4377: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 30 KiB/s wr, 16 op/s
Jan 31 09:17:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:17:15 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946098308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:15 compute-0 nova_compute[247704]: 2026-01-31 09:17:15.746 247708 DEBUG oslo_concurrency.processutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:15 compute-0 nova_compute[247704]: 2026-01-31 09:17:15.753 247708 DEBUG nova.compute.provider_tree [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:17:15 compute-0 nova_compute[247704]: 2026-01-31 09:17:15.957 247708 DEBUG nova.scheduler.client.report [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:17:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4378: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 30 KiB/s wr, 16 op/s
Jan 31 09:17:16 compute-0 nova_compute[247704]: 2026-01-31 09:17:16.333 247708 DEBUG nova.compute.manager [req-33b7067d-b1e7-4161-8e95-b9c15bfcc45a req-67eb8744-df7e-451c-a252-a9e9892e04c8 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Received event network-vif-deleted-9df79f25-f35d-48f6-83e4-cd77dc9ca7ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:17:16 compute-0 nova_compute[247704]: 2026-01-31 09:17:16.397 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:16 compute-0 nova_compute[247704]: 2026-01-31 09:17:16.479 247708 INFO nova.scheduler.client.report [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Deleted allocations for instance 6b297264-ad85-4d75-97eb-afe8844db41c
Jan 31 09:17:16 compute-0 nova_compute[247704]: 2026-01-31 09:17:16.594 247708 DEBUG oslo_concurrency.lockutils [None req-b70c1ba2-7ede-42df-b372-3523e354d971 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "6b297264-ad85-4d75-97eb-afe8844db41c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:17 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/946098308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:17.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:17.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:18 compute-0 nova_compute[247704]: 2026-01-31 09:17:18.049 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:18 compute-0 ceph-mon[74496]: pgmap v4378: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 30 KiB/s wr, 16 op/s
Jan 31 09:17:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4379: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 16 op/s
Jan 31 09:17:18 compute-0 nova_compute[247704]: 2026-01-31 09:17:18.993 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:19.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:19 compute-0 ceph-mon[74496]: pgmap v4379: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 16 op/s
Jan 31 09:17:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:19 compute-0 podman[431544]: 2026-01-31 09:17:19.923501458 +0000 UTC m=+0.096040076 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4380: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 17 KiB/s wr, 15 op/s
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:17:20
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.data']
Jan 31 09:17:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:17:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:17:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:21.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:21 compute-0 ceph-mon[74496]: pgmap v4380: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 17 KiB/s wr, 15 op/s
Jan 31 09:17:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:21.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4381: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 8.2 KiB/s wr, 8 op/s
Jan 31 09:17:23 compute-0 nova_compute[247704]: 2026-01-31 09:17:23.013 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769851028.011268, 6b297264-ad85-4d75-97eb-afe8844db41c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:17:23 compute-0 nova_compute[247704]: 2026-01-31 09:17:23.014 247708 INFO nova.compute.manager [-] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] VM Stopped (Lifecycle Event)
Jan 31 09:17:23 compute-0 nova_compute[247704]: 2026-01-31 09:17:23.047 247708 DEBUG nova.compute.manager [None req-38808fa4-9b25-4d80-9b2f-b161ff078051 - - - - - -] [instance: 6b297264-ad85-4d75-97eb-afe8844db41c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:17:23 compute-0 nova_compute[247704]: 2026-01-31 09:17:23.052 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:23.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:23 compute-0 ceph-mon[74496]: pgmap v4381: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 8.2 KiB/s wr, 8 op/s
Jan 31 09:17:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:23.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:23 compute-0 nova_compute[247704]: 2026-01-31 09:17:23.996 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4382: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 341 B/s wr, 6 op/s
Jan 31 09:17:24 compute-0 nova_compute[247704]: 2026-01-31 09:17:24.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:24 compute-0 nova_compute[247704]: 2026-01-31 09:17:24.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:17:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:25.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:25 compute-0 ceph-mon[74496]: pgmap v4382: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 KiB/s rd, 341 B/s wr, 6 op/s
Jan 31 09:17:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4383: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:17:26 compute-0 sudo[431566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:26 compute-0 sudo[431566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:26 compute-0 sudo[431566]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:26 compute-0 sudo[431591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:17:26 compute-0 sudo[431591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:26 compute-0 sudo[431591]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:26 compute-0 nova_compute[247704]: 2026-01-31 09:17:26.480 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:26 compute-0 nova_compute[247704]: 2026-01-31 09:17:26.480 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:26 compute-0 sudo[431617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:26 compute-0 sudo[431617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:26 compute-0 sudo[431617]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:26 compute-0 nova_compute[247704]: 2026-01-31 09:17:26.509 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 09:17:26 compute-0 sudo[431642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Jan 31 09:17:26 compute-0 sudo[431642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:26 compute-0 nova_compute[247704]: 2026-01-31 09:17:26.648 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:26 compute-0 nova_compute[247704]: 2026-01-31 09:17:26.649 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:26 compute-0 nova_compute[247704]: 2026-01-31 09:17:26.660 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 09:17:26 compute-0 nova_compute[247704]: 2026-01-31 09:17:26.661 247708 INFO nova.compute.claims [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Claim successful on node compute-0.ctlplane.example.com
Jan 31 09:17:26 compute-0 nova_compute[247704]: 2026-01-31 09:17:26.871 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:26 compute-0 podman[431738]: 2026-01-31 09:17:26.973444426 +0000 UTC m=+0.066847212 container exec c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:17:27 compute-0 podman[431738]: 2026-01-31 09:17:27.075550337 +0000 UTC m=+0.168953093 container exec_died c6500841c07b36a8c971bbdcb750ce9cda5744cce00d5b6822bb699bead089c7 (image=quay.io/ceph/ceph:v18, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 09:17:27 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:17:27 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735691117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.323 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:27.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.330 247708 DEBUG nova.compute.provider_tree [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.350 247708 DEBUG nova.scheduler.client.report [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.382 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.383 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.481 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.481 247708 DEBUG nova.network.neutron [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.527 247708 INFO nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.557 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:27 compute-0 podman[431911]: 2026-01-31 09:17:27.603500291 +0000 UTC m=+0.057713422 container exec b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.632 247708 INFO nova.virt.block_device [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Booting with volume 16971cc6-6c7b-47eb-9656-600213b7ac4e at /dev/vda
Jan 31 09:17:27 compute-0 ceph-mon[74496]: pgmap v4383: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:17:27 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3735691117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:27 compute-0 podman[431933]: 2026-01-31 09:17:27.670380983 +0000 UTC m=+0.052812504 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 09:17:27 compute-0 podman[431911]: 2026-01-31 09:17:27.680854135 +0000 UTC m=+0.135067266 container exec_died b160f7ad791bcb51d7340140782e9b69931fca86a37ca28ddd2321bd134bedd3 (image=quay.io/ceph/haproxy:2.3, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-haproxy-rgw-default-compute-0-cwtxbj)
Jan 31 09:17:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:27.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.862 247708 DEBUG nova.policy [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ecd39871d7fd438f88b36601f25d6eb6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98d10c0290e340a08e9d1726bf0066bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 09:17:27 compute-0 podman[431978]: 2026-01-31 09:17:27.911798031 +0000 UTC m=+0.062516077 container exec 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, architecture=x86_64, name=keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Jan 31 09:17:27 compute-0 podman[431978]: 2026-01-31 09:17:27.925544032 +0000 UTC m=+0.076262078 container exec_died 37a60ce08c4c74b1713cee8109229a211cbb75cdd957895056cf506b036b5f51 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, release=1793)
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.972 247708 DEBUG os_brick.utils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.974 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.987 255323 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.987 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[56aed841-387d-4050-ad0e-0943f538712c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.989 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.996 255323 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.996 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9acf81-93af-4871-a1c6-faf74134bfbe]: (4, ('InitiatorName=iqn.1994-05.com.redhat:8e229365167', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:27 compute-0 nova_compute[247704]: 2026-01-31 09:17:27.997 255323 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.003 255323 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.003 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[61317728-f32e-4b63-b762-1244d8b641d5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.004 255323 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0ee503-c87e-41a7-b66e-d25c1c804902]: (4, '8f281c2a-1a44-41ea-8268-6c420f002b7f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.004 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:28 compute-0 sudo[431642]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.025 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.030 247708 DEBUG os_brick.initiator.connectors.lightos [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.031 247708 DEBUG os_brick.initiator.connectors.lightos [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.031 247708 DEBUG os_brick.initiator.connectors.lightos [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.031 247708 DEBUG os_brick.utils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:8e229365167', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '8f281c2a-1a44-41ea-8268-6c420f002b7f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.031 247708 DEBUG nova.virt.block_device [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updating existing volume attachment record: 13d7a7e2-35f7-4e10-b402-b0425f10344f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 09:17:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:17:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:17:28 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.053 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:28 compute-0 sudo[432016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:28 compute-0 sudo[432016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:28 compute-0 sudo[432016]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:28 compute-0 sudo[432041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:17:28 compute-0 sudo[432041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:28 compute-0 sudo[432041]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4384: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:28 compute-0 sudo[432066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:28 compute-0 sudo[432066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:28 compute-0 sudo[432066]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:28 compute-0 sudo[432091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:17:28 compute-0 sudo[432091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:28 compute-0 sudo[432091]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:17:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:17:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:17:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:17:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:17:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 73ffff60-6fa1-4ae5-8390-1483cb891897 does not exist
Jan 31 09:17:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 23fbc3f1-e902-4216-841d-33c4b77924ee does not exist
Jan 31 09:17:28 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 602c1188-2f9c-46da-80a2-f69dc3b22de0 does not exist
Jan 31 09:17:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:17:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:17:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:17:28 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:17:28 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:17:28 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:17:28 compute-0 sudo[432146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:28 compute-0 sudo[432146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:28 compute-0 sudo[432146]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:28 compute-0 sudo[432171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:17:28 compute-0 sudo[432171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:28 compute-0 sudo[432171]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:28 compute-0 sudo[432196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:28 compute-0 sudo[432196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:28 compute-0 sudo[432196]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:28 compute-0 sudo[432221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:17:28 compute-0 sudo[432221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:29 compute-0 nova_compute[247704]: 2026-01-31 09:17:28.999 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:29 compute-0 ceph-mon[74496]: pgmap v4384: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:17:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:17:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:17:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:17:29 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:17:29 compute-0 podman[432285]: 2026-01-31 09:17:29.183287975 +0000 UTC m=+0.038590031 container create 226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 09:17:29 compute-0 systemd[1]: Started libpod-conmon-226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0.scope.
Jan 31 09:17:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:17:29 compute-0 podman[432285]: 2026-01-31 09:17:29.252052323 +0000 UTC m=+0.107354399 container init 226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:17:29 compute-0 podman[432285]: 2026-01-31 09:17:29.258821115 +0000 UTC m=+0.114123171 container start 226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:17:29 compute-0 happy_haibt[432301]: 167 167
Jan 31 09:17:29 compute-0 podman[432285]: 2026-01-31 09:17:29.166638054 +0000 UTC m=+0.021940130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:17:29 compute-0 podman[432285]: 2026-01-31 09:17:29.263318724 +0000 UTC m=+0.118620800 container attach 226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:17:29 compute-0 systemd[1]: libpod-226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0.scope: Deactivated successfully.
Jan 31 09:17:29 compute-0 podman[432285]: 2026-01-31 09:17:29.265378213 +0000 UTC m=+0.120680269 container died 226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:17:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b19e2f512216868e41e0957c8b630989588229668254faa271a1121239fb4ab8-merged.mount: Deactivated successfully.
Jan 31 09:17:29 compute-0 podman[432285]: 2026-01-31 09:17:29.307490638 +0000 UTC m=+0.162792704 container remove 226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 31 09:17:29 compute-0 systemd[1]: libpod-conmon-226b441270238f09d786c6915d119686e5ad6bd621859f71acaa6d78c1e8e7e0.scope: Deactivated successfully.
Jan 31 09:17:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:29.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:29.384 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=117, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=116) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:17:29 compute-0 nova_compute[247704]: 2026-01-31 09:17:29.385 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:29 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:29.386 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:17:29 compute-0 podman[432325]: 2026-01-31 09:17:29.449023929 +0000 UTC m=+0.050316254 container create 053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:17:29 compute-0 systemd[1]: Started libpod-conmon-053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6.scope.
Jan 31 09:17:29 compute-0 podman[432325]: 2026-01-31 09:17:29.425912363 +0000 UTC m=+0.027204728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:17:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e9aa031592e65ccf2deda5d49896cd3309fe6038aad9189911c4551ed4c2d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e9aa031592e65ccf2deda5d49896cd3309fe6038aad9189911c4551ed4c2d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e9aa031592e65ccf2deda5d49896cd3309fe6038aad9189911c4551ed4c2d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e9aa031592e65ccf2deda5d49896cd3309fe6038aad9189911c4551ed4c2d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e9aa031592e65ccf2deda5d49896cd3309fe6038aad9189911c4551ed4c2d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:29 compute-0 podman[432325]: 2026-01-31 09:17:29.545276609 +0000 UTC m=+0.146568944 container init 053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 09:17:29 compute-0 podman[432325]: 2026-01-31 09:17:29.550929665 +0000 UTC m=+0.152221990 container start 053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:17:29 compute-0 podman[432325]: 2026-01-31 09:17:29.554366889 +0000 UTC m=+0.155659244 container attach 053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 09:17:29 compute-0 nova_compute[247704]: 2026-01-31 09:17:29.615 247708 DEBUG nova.network.neutron [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Successfully created port: 9d925d3a-15af-4795-b206-2c45063bc1f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 09:17:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/695105816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:30 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/35238300' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:17:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4385: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:30 compute-0 pedantic_hertz[432342]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:17:30 compute-0 pedantic_hertz[432342]: --> relative data size: 1.0
Jan 31 09:17:30 compute-0 pedantic_hertz[432342]: --> All data devices are unavailable
Jan 31 09:17:30 compute-0 systemd[1]: libpod-053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6.scope: Deactivated successfully.
Jan 31 09:17:30 compute-0 podman[432325]: 2026-01-31 09:17:30.367109886 +0000 UTC m=+0.968402231 container died 053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 09:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-85e9aa031592e65ccf2deda5d49896cd3309fe6038aad9189911c4551ed4c2d3-merged.mount: Deactivated successfully.
Jan 31 09:17:30 compute-0 podman[432325]: 2026-01-31 09:17:30.422131232 +0000 UTC m=+1.023423557 container remove 053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 09:17:30 compute-0 systemd[1]: libpod-conmon-053031b489626d79c74f2270b7f3da028c98c84a871f846f7d7cae9930258da6.scope: Deactivated successfully.
Jan 31 09:17:30 compute-0 nova_compute[247704]: 2026-01-31 09:17:30.438 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 09:17:30 compute-0 nova_compute[247704]: 2026-01-31 09:17:30.440 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 09:17:30 compute-0 nova_compute[247704]: 2026-01-31 09:17:30.442 247708 INFO nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Creating image(s)
Jan 31 09:17:30 compute-0 nova_compute[247704]: 2026-01-31 09:17:30.443 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 09:17:30 compute-0 nova_compute[247704]: 2026-01-31 09:17:30.443 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Ensure instance console log exists: /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 09:17:30 compute-0 nova_compute[247704]: 2026-01-31 09:17:30.444 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:30 compute-0 nova_compute[247704]: 2026-01-31 09:17:30.445 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:30 compute-0 nova_compute[247704]: 2026-01-31 09:17:30.445 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:30 compute-0 sudo[432221]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:30 compute-0 sudo[432369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:30 compute-0 sudo[432369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:30 compute-0 sudo[432369]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:30 compute-0 sudo[432394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:17:30 compute-0 sudo[432394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:30 compute-0 sudo[432394]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:30 compute-0 sudo[432419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:30 compute-0 sudo[432419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:30 compute-0 sudo[432419]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:30 compute-0 sudo[432444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:17:30 compute-0 sudo[432444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:30 compute-0 podman[432510]: 2026-01-31 09:17:30.941841927 +0000 UTC m=+0.037004452 container create 0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 09:17:30 compute-0 systemd[1]: Started libpod-conmon-0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f.scope.
Jan 31 09:17:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:17:30 compute-0 podman[432510]: 2026-01-31 09:17:30.991471504 +0000 UTC m=+0.086634049 container init 0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:17:30 compute-0 podman[432510]: 2026-01-31 09:17:30.997524159 +0000 UTC m=+0.092686684 container start 0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:17:31 compute-0 happy_borg[432526]: 167 167
Jan 31 09:17:31 compute-0 systemd[1]: libpod-0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f.scope: Deactivated successfully.
Jan 31 09:17:31 compute-0 podman[432510]: 2026-01-31 09:17:31.004495917 +0000 UTC m=+0.099658532 container attach 0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:17:31 compute-0 podman[432510]: 2026-01-31 09:17:31.004812085 +0000 UTC m=+0.099974610 container died 0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 09:17:31 compute-0 podman[432510]: 2026-01-31 09:17:30.924409897 +0000 UTC m=+0.019572442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a4b173a135672d568e492de5158669211bf5daa4c323392324e75576a7eb339-merged.mount: Deactivated successfully.
Jan 31 09:17:31 compute-0 podman[432510]: 2026-01-31 09:17:31.044276307 +0000 UTC m=+0.139438832 container remove 0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 09:17:31 compute-0 systemd[1]: libpod-conmon-0ae4a4c4072d37ac0e19c46c5593ae5947addb9001008355b57fc2a921cc412f.scope: Deactivated successfully.
Jan 31 09:17:31 compute-0 ceph-mon[74496]: pgmap v4385: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:31 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2282457959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:31 compute-0 podman[432550]: 2026-01-31 09:17:31.158034918 +0000 UTC m=+0.032573986 container create d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 09:17:31 compute-0 systemd[1]: Started libpod-conmon-d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0.scope.
Jan 31 09:17:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15090245f43210ac28e2e5a3843172b11d1b922a085d1278d724d4dea654d3da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15090245f43210ac28e2e5a3843172b11d1b922a085d1278d724d4dea654d3da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15090245f43210ac28e2e5a3843172b11d1b922a085d1278d724d4dea654d3da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15090245f43210ac28e2e5a3843172b11d1b922a085d1278d724d4dea654d3da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:31 compute-0 podman[432550]: 2026-01-31 09:17:31.143607161 +0000 UTC m=+0.018146249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:17:31 compute-0 podman[432550]: 2026-01-31 09:17:31.246839768 +0000 UTC m=+0.121378866 container init d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:17:31 compute-0 podman[432550]: 2026-01-31 09:17:31.251924841 +0000 UTC m=+0.126463909 container start d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 09:17:31 compute-0 podman[432550]: 2026-01-31 09:17:31.25524521 +0000 UTC m=+0.129784308 container attach d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 09:17:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:31.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:31 compute-0 sudo[432571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:31 compute-0 sudo[432571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:31 compute-0 sudo[432571]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:31 compute-0 sudo[432596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:31 compute-0 sudo[432596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:31 compute-0 sudo[432596]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:31.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:31 compute-0 intelligent_colden[432566]: {
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:     "0": [
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:         {
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "devices": [
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "/dev/loop3"
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             ],
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "lv_name": "ceph_lv0",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "lv_size": "7511998464",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "name": "ceph_lv0",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "tags": {
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.cluster_name": "ceph",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.crush_device_class": "",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.encrypted": "0",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.osd_id": "0",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.type": "block",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:                 "ceph.vdo": "0"
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             },
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "type": "block",
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:             "vg_name": "ceph_vg0"
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:         }
Jan 31 09:17:31 compute-0 intelligent_colden[432566]:     ]
Jan 31 09:17:31 compute-0 intelligent_colden[432566]: }
Jan 31 09:17:32 compute-0 systemd[1]: libpod-d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0.scope: Deactivated successfully.
Jan 31 09:17:32 compute-0 podman[432550]: 2026-01-31 09:17:32.004570349 +0000 UTC m=+0.879109437 container died d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:17:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4386: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-15090245f43210ac28e2e5a3843172b11d1b922a085d1278d724d4dea654d3da-merged.mount: Deactivated successfully.
Jan 31 09:17:32 compute-0 podman[432550]: 2026-01-31 09:17:32.372259411 +0000 UTC m=+1.246798479 container remove d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:17:32 compute-0 systemd[1]: libpod-conmon-d89a2eb576b49d291b840ffa113ae00ee74605c158d4940aff4abcd88f1874c0.scope: Deactivated successfully.
Jan 31 09:17:32 compute-0 sudo[432444]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:32 compute-0 podman[432626]: 2026-01-31 09:17:32.44521333 +0000 UTC m=+0.414994533 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 31 09:17:32 compute-0 sudo[432660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:32 compute-0 sudo[432660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:32 compute-0 sudo[432660]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:32 compute-0 sudo[432690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:17:32 compute-0 sudo[432690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:32 compute-0 sudo[432690]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:32 compute-0 sudo[432715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:32 compute-0 sudo[432715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:32 compute-0 sudo[432715]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:32 compute-0 sudo[432740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:17:32 compute-0 sudo[432740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:32 compute-0 podman[432804]: 2026-01-31 09:17:32.921779006 +0000 UTC m=+0.075737607 container create f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:17:32 compute-0 podman[432804]: 2026-01-31 09:17:32.871489864 +0000 UTC m=+0.025448495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:17:33 compute-0 systemd[1]: Started libpod-conmon-f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6.scope.
Jan 31 09:17:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.055 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:33 compute-0 podman[432804]: 2026-01-31 09:17:33.168503571 +0000 UTC m=+0.322462182 container init f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hermann, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:17:33 compute-0 podman[432804]: 2026-01-31 09:17:33.177691832 +0000 UTC m=+0.331650433 container start f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:17:33 compute-0 laughing_hermann[432820]: 167 167
Jan 31 09:17:33 compute-0 systemd[1]: libpod-f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6.scope: Deactivated successfully.
Jan 31 09:17:33 compute-0 conmon[432820]: conmon f06da97408feb06f3359 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6.scope/container/memory.events
Jan 31 09:17:33 compute-0 podman[432804]: 2026-01-31 09:17:33.255790785 +0000 UTC m=+0.409749386 container attach f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 09:17:33 compute-0 podman[432804]: 2026-01-31 09:17:33.256265996 +0000 UTC m=+0.410224597 container died f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:17:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:33.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:33 compute-0 ceph-mon[74496]: pgmap v4386: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.478 247708 DEBUG nova.network.neutron [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Successfully updated port: 9d925d3a-15af-4795-b206-2c45063bc1f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 09:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f4198438a4a13515c7596e4211afeee94378db112ba48e846961265f2a31c22-merged.mount: Deactivated successfully.
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.548 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.548 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquired lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.548 247708 DEBUG nova.network.neutron [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 09:17:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.734 247708 DEBUG nova.compute.manager [req-af4e0f80-0487-4952-97df-f66777fc64f1 req-44b00462-a941-401b-87c9-90e966097f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-changed-9d925d3a-15af-4795-b206-2c45063bc1f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.735 247708 DEBUG nova.compute.manager [req-af4e0f80-0487-4952-97df-f66777fc64f1 req-44b00462-a941-401b-87c9-90e966097f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Refreshing instance network info cache due to event network-changed-9d925d3a-15af-4795-b206-2c45063bc1f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.735 247708 DEBUG oslo_concurrency.lockutils [req-af4e0f80-0487-4952-97df-f66777fc64f1 req-44b00462-a941-401b-87c9-90e966097f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:17:33 compute-0 podman[432804]: 2026-01-31 09:17:33.782176841 +0000 UTC m=+0.936135442 container remove f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 09:17:33 compute-0 systemd[1]: libpod-conmon-f06da97408feb06f335986e8fe52f8de60660fd6d39bf2693769722e10ac61b6.scope: Deactivated successfully.
Jan 31 09:17:33 compute-0 nova_compute[247704]: 2026-01-31 09:17:33.924 247708 DEBUG nova.network.neutron [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 09:17:33 compute-0 podman[432844]: 2026-01-31 09:17:33.888743289 +0000 UTC m=+0.023315572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:17:34 compute-0 nova_compute[247704]: 2026-01-31 09:17:34.000 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:34 compute-0 podman[432844]: 2026-01-31 09:17:34.001563309 +0000 UTC m=+0.136135572 container create 66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:17:34 compute-0 systemd[1]: Started libpod-conmon-66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617.scope.
Jan 31 09:17:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20be1a91c5a525bdb5e271672bb84580773099e2590a643018bd587da9066de4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20be1a91c5a525bdb5e271672bb84580773099e2590a643018bd587da9066de4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20be1a91c5a525bdb5e271672bb84580773099e2590a643018bd587da9066de4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20be1a91c5a525bdb5e271672bb84580773099e2590a643018bd587da9066de4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4387: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:34 compute-0 podman[432844]: 2026-01-31 09:17:34.297940651 +0000 UTC m=+0.432512954 container init 66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:17:34 compute-0 podman[432844]: 2026-01-31 09:17:34.309837718 +0000 UTC m=+0.444409991 container start 66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 09:17:34 compute-0 podman[432844]: 2026-01-31 09:17:34.516344384 +0000 UTC m=+0.650916677 container attach 66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 09:17:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1297135005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:34 compute-0 nova_compute[247704]: 2026-01-31 09:17:34.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:34 compute-0 nova_compute[247704]: 2026-01-31 09:17:34.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:17:34 compute-0 nova_compute[247704]: 2026-01-31 09:17:34.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:17:34 compute-0 nova_compute[247704]: 2026-01-31 09:17:34.594 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 31 09:17:34 compute-0 nova_compute[247704]: 2026-01-31 09:17:34.594 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:17:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:35 compute-0 reverent_easley[432861]: {
Jan 31 09:17:35 compute-0 reverent_easley[432861]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:17:35 compute-0 reverent_easley[432861]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:17:35 compute-0 reverent_easley[432861]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:17:35 compute-0 reverent_easley[432861]:         "osd_id": 0,
Jan 31 09:17:35 compute-0 reverent_easley[432861]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:17:35 compute-0 reverent_easley[432861]:         "type": "bluestore"
Jan 31 09:17:35 compute-0 reverent_easley[432861]:     }
Jan 31 09:17:35 compute-0 reverent_easley[432861]: }
Jan 31 09:17:35 compute-0 systemd[1]: libpod-66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617.scope: Deactivated successfully.
Jan 31 09:17:35 compute-0 podman[432844]: 2026-01-31 09:17:35.12404779 +0000 UTC m=+1.258620053 container died 66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:17:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:35.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-20be1a91c5a525bdb5e271672bb84580773099e2590a643018bd587da9066de4-merged.mount: Deactivated successfully.
Jan 31 09:17:36 compute-0 ceph-mon[74496]: pgmap v4387: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2126205365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4388: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004327373266796948 of space, bias 1.0, pg target 1.2982119800390843 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:17:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:17:36 compute-0 nova_compute[247704]: 2026-01-31 09:17:36.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:36 compute-0 nova_compute[247704]: 2026-01-31 09:17:36.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:36 compute-0 nova_compute[247704]: 2026-01-31 09:17:36.616 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:36 compute-0 nova_compute[247704]: 2026-01-31 09:17:36.617 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:36 compute-0 nova_compute[247704]: 2026-01-31 09:17:36.617 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:36 compute-0 nova_compute[247704]: 2026-01-31 09:17:36.618 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:17:36 compute-0 nova_compute[247704]: 2026-01-31 09:17:36.618 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:36 compute-0 podman[432844]: 2026-01-31 09:17:36.769057707 +0000 UTC m=+2.903630000 container remove 66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:17:36 compute-0 sudo[432740]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:36 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:17:36 compute-0 systemd[1]: libpod-conmon-66fb9a6dd442d88f719dd5c5e7110bbb6f15c1fd2ca2a453dc661e8826a55617.scope: Deactivated successfully.
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.007 247708 DEBUG nova.network.neutron [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updating instance_info_cache with network_info: [{"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:17:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.074 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Releasing lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.075 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Instance network_info: |[{"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.077 247708 DEBUG oslo_concurrency.lockutils [req-af4e0f80-0487-4952-97df-f66777fc64f1 req-44b00462-a941-401b-87c9-90e966097f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.077 247708 DEBUG nova.network.neutron [req-af4e0f80-0487-4952-97df-f66777fc64f1 req-44b00462-a941-401b-87c9-90e966097f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Refreshing network info cache for port 9d925d3a-15af-4795-b206-2c45063bc1f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.084 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Start _get_guest_xml network_info=[{"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'device_type': 'disk', 'delete_on_termination': False, 'guest_format': None, 'disk_bus': 'virtio', 'attachment_id': '13d7a7e2-35f7-4e10-b402-b0425f10344f', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-16971cc6-6c7b-47eb-9656-600213b7ac4e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '16971cc6-6c7b-47eb-9656-600213b7ac4e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'dcc57ec3-f59a-4574-9cd4-389ae92c9e11', 'attached_at': '', 'detached_at': '', 'volume_id': '16971cc6-6c7b-47eb-9656-600213b7ac4e', 'serial': '16971cc6-6c7b-47eb-9656-600213b7ac4e'}, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.090 247708 WARNING nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.095 247708 DEBUG nova.virt.libvirt.host [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.096 247708 DEBUG nova.virt.libvirt.host [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.099 247708 DEBUG nova.virt.libvirt.host [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.100 247708 DEBUG nova.virt.libvirt.host [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.101 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.101 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:29:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fea01737-128b-41fa-a695-aaaa6e96e4b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.102 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.102 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.102 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.102 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.102 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.103 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.103 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.103 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.103 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.103 247708 DEBUG nova.virt.hardware [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 09:17:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:17:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843149169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.166 247708 DEBUG nova.storage.rbd_utils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image dcc57ec3-f59a-4574-9cd4-389ae92c9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.171 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.199 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:37 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 61673173-bf86-4c22-938d-267adc25a90e does not exist
Jan 31 09:17:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 64c29fbe-78ca-40ba-b22a-5641d2a2e046 does not exist
Jan 31 09:17:37 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1bfc5da4-38df-4705-8914-2b279da6532b does not exist
Jan 31 09:17:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:37 compute-0 sudo[432958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:37 compute-0 sudo[432958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:37 compute-0 sudo[432958]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.422 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.423 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4035MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.424 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.424 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:37 compute-0 sudo[432983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:17:37 compute-0 sudo[432983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:37 compute-0 sudo[432983]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:37 compute-0 ceph-mon[74496]: pgmap v4388: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:37 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.658 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance dcc57ec3-f59a-4574-9cd4-389ae92c9e11 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.659 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.659 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:17:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:37.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:37 compute-0 nova_compute[247704]: 2026-01-31 09:17:37.863 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 09:17:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025134325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.006 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.836s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.059 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.111 247708 DEBUG nova.virt.libvirt.vif [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2109844157',display_name='tempest-TestVolumeBootPattern-server-2109844157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2109844157',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-5956ry02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:17:27Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=dcc57ec3-f59a-4574-9cd4-389ae92c9e11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.112 247708 DEBUG nova.network.os_vif_util [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.113 247708 DEBUG nova.network.os_vif_util [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:14:74,bridge_name='br-int',has_traffic_filtering=True,id=9d925d3a-15af-4795-b206-2c45063bc1f7,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d925d3a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.114 247708 DEBUG nova.objects.instance [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'pci_devices' on Instance uuid dcc57ec3-f59a-4574-9cd4-389ae92c9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.138 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] End _get_guest_xml xml=<domain type="kvm">
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <uuid>dcc57ec3-f59a-4574-9cd4-389ae92c9e11</uuid>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <name>instance-000000df</name>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <memory>131072</memory>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <vcpu>1</vcpu>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <metadata>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <nova:name>tempest-TestVolumeBootPattern-server-2109844157</nova:name>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <nova:creationTime>2026-01-31 09:17:37</nova:creationTime>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <nova:flavor name="m1.nano">
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <nova:memory>128</nova:memory>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <nova:disk>1</nova:disk>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <nova:swap>0</nova:swap>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <nova:ephemeral>0</nova:ephemeral>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <nova:vcpus>1</nova:vcpus>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       </nova:flavor>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <nova:owner>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <nova:user uuid="ecd39871d7fd438f88b36601f25d6eb6">tempest-TestVolumeBootPattern-1294459393-project-member</nova:user>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <nova:project uuid="98d10c0290e340a08e9d1726bf0066bf">tempest-TestVolumeBootPattern-1294459393</nova:project>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       </nova:owner>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <nova:ports>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <nova:port uuid="9d925d3a-15af-4795-b206-2c45063bc1f7">
Jan 31 09:17:38 compute-0 nova_compute[247704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         </nova:port>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       </nova:ports>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </nova:instance>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   </metadata>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <sysinfo type="smbios">
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <system>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <entry name="manufacturer">RDO</entry>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <entry name="product">OpenStack Compute</entry>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <entry name="serial">dcc57ec3-f59a-4574-9cd4-389ae92c9e11</entry>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <entry name="uuid">dcc57ec3-f59a-4574-9cd4-389ae92c9e11</entry>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <entry name="family">Virtual Machine</entry>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </system>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   </sysinfo>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <os>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <boot dev="hd"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <smbios mode="sysinfo"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   </os>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <features>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <acpi/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <apic/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <vmcoreinfo/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   </features>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <clock offset="utc">
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <timer name="pit" tickpolicy="delay"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <timer name="hpet" present="no"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   </clock>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <cpu mode="custom" match="exact">
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <model>Nehalem</model>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <topology sockets="1" cores="1" threads="1"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   </cpu>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   <devices>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <disk type="network" device="cdrom">
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <driver type="raw" cache="none"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <source protocol="rbd" name="vms/dcc57ec3-f59a-4574-9cd4-389ae92c9e11_disk.config">
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       </source>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <target dev="sda" bus="sata"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <disk type="network" device="disk">
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <source protocol="rbd" name="volumes/volume-16971cc6-6c7b-47eb-9656-600213b7ac4e">
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <host name="192.168.122.100" port="6789"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <host name="192.168.122.102" port="6789"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <host name="192.168.122.101" port="6789"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       </source>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <auth username="openstack">
Jan 31 09:17:38 compute-0 nova_compute[247704]:         <secret type="ceph" uuid="f70fcd2a-dcb4-5f89-a4ba-79a09959083b"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       </auth>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <target dev="vda" bus="virtio"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <serial>16971cc6-6c7b-47eb-9656-600213b7ac4e</serial>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </disk>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <interface type="ethernet">
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <mac address="fa:16:3e:d8:14:74"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <driver name="vhost" rx_queue_size="512"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <mtu size="1442"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <target dev="tap9d925d3a-15"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </interface>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <serial type="pty">
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <log file="/var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11/console.log" append="off"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </serial>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <video>
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <model type="virtio"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </video>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <input type="tablet" bus="usb"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <rng model="virtio">
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <backend model="random">/dev/urandom</backend>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </rng>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="pci" model="pcie-root-port"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <controller type="usb" index="0"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     <memballoon model="virtio">
Jan 31 09:17:38 compute-0 nova_compute[247704]:       <stats period="10"/>
Jan 31 09:17:38 compute-0 nova_compute[247704]:     </memballoon>
Jan 31 09:17:38 compute-0 nova_compute[247704]:   </devices>
Jan 31 09:17:38 compute-0 nova_compute[247704]: </domain>
Jan 31 09:17:38 compute-0 nova_compute[247704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.138 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Preparing to wait for external event network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.138 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.139 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.139 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.139 247708 DEBUG nova.virt.libvirt.vif [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2109844157',display_name='tempest-TestVolumeBootPattern-server-2109844157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2109844157',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-5956ry02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:17:27Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=dcc57ec3-f59a-4574-9cd4-389ae92c9e11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.140 247708 DEBUG nova.network.os_vif_util [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.140 247708 DEBUG nova.network.os_vif_util [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:14:74,bridge_name='br-int',has_traffic_filtering=True,id=9d925d3a-15af-4795-b206-2c45063bc1f7,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d925d3a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.140 247708 DEBUG os_vif [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:14:74,bridge_name='br-int',has_traffic_filtering=True,id=9d925d3a-15af-4795-b206-2c45063bc1f7,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d925d3a-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.141 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.141 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.142 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.144 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.145 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d925d3a-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.145 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d925d3a-15, col_values=(('external_ids', {'iface-id': '9d925d3a-15af-4795-b206-2c45063bc1f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:14:74', 'vm-uuid': 'dcc57ec3-f59a-4574-9cd4-389ae92c9e11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.147 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:38 compute-0 NetworkManager[49108]: <info>  [1769851058.1481] manager: (tap9d925d3a-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.150 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.153 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.154 247708 INFO os_vif [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:14:74,bridge_name='br-int',has_traffic_filtering=True,id=9d925d3a-15af-4795-b206-2c45063bc1f7,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d925d3a-15')
Jan 31 09:17:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4389: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:38 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:38.389 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '117'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:17:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1095028618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.477 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.483 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.532 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.576 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.577 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.577 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] No VIF found with MAC fa:16:3e:d8:14:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.577 247708 INFO nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Using config drive
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.660 247708 DEBUG nova.storage.rbd_utils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image dcc57ec3-f59a-4574-9cd4-389ae92c9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.669 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:17:38 compute-0 nova_compute[247704]: 2026-01-31 09:17:38.670 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:39 compute-0 nova_compute[247704]: 2026-01-31 09:17:39.002 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3843149169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:39 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:17:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3025134325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:17:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1095028618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:17:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:39.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4390: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:40 compute-0 nova_compute[247704]: 2026-01-31 09:17:40.879 247708 INFO nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Creating config drive at /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11/disk.config
Jan 31 09:17:40 compute-0 nova_compute[247704]: 2026-01-31 09:17:40.882 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpodlp8coy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:40 compute-0 ceph-mon[74496]: pgmap v4389: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:41 compute-0 nova_compute[247704]: 2026-01-31 09:17:41.003 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpodlp8coy" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:41 compute-0 nova_compute[247704]: 2026-01-31 09:17:41.125 247708 DEBUG nova.storage.rbd_utils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] rbd image dcc57ec3-f59a-4574-9cd4-389ae92c9e11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 09:17:41 compute-0 nova_compute[247704]: 2026-01-31 09:17:41.130 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11/disk.config dcc57ec3-f59a-4574-9cd4-389ae92c9e11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:17:41 compute-0 nova_compute[247704]: 2026-01-31 09:17:41.158 247708 DEBUG nova.network.neutron [req-af4e0f80-0487-4952-97df-f66777fc64f1 req-44b00462-a941-401b-87c9-90e966097f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updated VIF entry in instance network info cache for port 9d925d3a-15af-4795-b206-2c45063bc1f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:17:41 compute-0 nova_compute[247704]: 2026-01-31 09:17:41.159 247708 DEBUG nova.network.neutron [req-af4e0f80-0487-4952-97df-f66777fc64f1 req-44b00462-a941-401b-87c9-90e966097f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updating instance_info_cache with network_info: [{"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:17:41 compute-0 nova_compute[247704]: 2026-01-31 09:17:41.205 247708 DEBUG oslo_concurrency.lockutils [req-af4e0f80-0487-4952-97df-f66777fc64f1 req-44b00462-a941-401b-87c9-90e966097f08 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:17:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:41 compute-0 nova_compute[247704]: 2026-01-31 09:17:41.669 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:41.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4391: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:42 compute-0 ceph-mon[74496]: pgmap v4390: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:42 compute-0 nova_compute[247704]: 2026-01-31 09:17:42.942 247708 DEBUG oslo_concurrency.processutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11/disk.config dcc57ec3-f59a-4574-9cd4-389ae92c9e11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.812s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:17:42 compute-0 nova_compute[247704]: 2026-01-31 09:17:42.942 247708 INFO nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Deleting local config drive /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11/disk.config because it was imported into RBD.
Jan 31 09:17:42 compute-0 kernel: tap9d925d3a-15: entered promiscuous mode
Jan 31 09:17:42 compute-0 NetworkManager[49108]: <info>  [1769851062.9952] manager: (tap9d925d3a-15): new Tun device (/org/freedesktop/NetworkManager/Devices/408)
Jan 31 09:17:42 compute-0 ovn_controller[149457]: 2026-01-31T09:17:42Z|00915|binding|INFO|Claiming lport 9d925d3a-15af-4795-b206-2c45063bc1f7 for this chassis.
Jan 31 09:17:42 compute-0 ovn_controller[149457]: 2026-01-31T09:17:42Z|00916|binding|INFO|9d925d3a-15af-4795-b206-2c45063bc1f7: Claiming fa:16:3e:d8:14:74 10.100.0.6
Jan 31 09:17:42 compute-0 nova_compute[247704]: 2026-01-31 09:17:42.995 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:43 compute-0 ovn_controller[149457]: 2026-01-31T09:17:43Z|00917|binding|INFO|Setting lport 9d925d3a-15af-4795-b206-2c45063bc1f7 ovn-installed in OVS
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.003 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:43 compute-0 ovn_controller[149457]: 2026-01-31T09:17:43Z|00918|binding|INFO|Setting lport 9d925d3a-15af-4795-b206-2c45063bc1f7 up in Southbound
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.012 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:14:74 10.100.0.6'], port_security=['fa:16:3e:d8:14:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dcc57ec3-f59a-4574-9cd4-389ae92c9e11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11348f26-2c0a-4b92-a927-856bca145e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=9d925d3a-15af-4795-b206-2c45063bc1f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.013 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 9d925d3a-15af-4795-b206-2c45063bc1f7 in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 bound to our chassis
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.014 160028 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:17:43 compute-0 systemd-machined[214448]: New machine qemu-96-instance-000000df.
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.027 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0d306e-acce-464b-b305-5336da703403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.027 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c9ca540-51 in ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.030 254935 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c9ca540-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.030 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[16d4d4e6-d113-4360-a887-efd3c9a33ba2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.031 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[ecdd7213-1907-4ebe-b313-2da37e41ee8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.039 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[da7fc6e2-10f5-4fb5-9dd6-8900f31533c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-000000df.
Jan 31 09:17:43 compute-0 systemd-udevd[433111]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:17:43 compute-0 NetworkManager[49108]: <info>  [1769851063.0575] device (tap9d925d3a-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 09:17:43 compute-0 NetworkManager[49108]: <info>  [1769851063.0587] device (tap9d925d3a-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.060 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b4faee7f-29a6-4601-875c-a36ee898e664]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.090 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[61ddccff-c52b-4735-9efc-7a794d1013b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.095 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8d858f-0efd-4313-a213-d3bc30566671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 NetworkManager[49108]: <info>  [1769851063.0971] manager: (tap5c9ca540-50): new Veth device (/org/freedesktop/NetworkManager/Devices/409)
Jan 31 09:17:43 compute-0 systemd-udevd[433113]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.119 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[eec831f8-cce4-40b7-b5df-01c34f1dd86e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.123 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[946d121d-18fe-47e4-b26b-07fbf4fd0772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 NetworkManager[49108]: <info>  [1769851063.1388] device (tap5c9ca540-50): carrier: link connected
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.141 255113 DEBUG oslo.privsep.daemon [-] privsep: reply[9866a14d-00fe-4fe9-9522-3850a6b391ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.147 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.152 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa31202-89d5-4a90-a976-12e4bc729c6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1130575, 'reachable_time': 20676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 433141, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.161 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a721deac-e0cd-4028-b0db-c91807dd8cbe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:dcf7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1130575, 'tstamp': 1130575}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 433142, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.174 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[617998ac-c43e-4c0f-96ac-5cb54a349ed5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c9ca540-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:dc:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 271], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1130575, 'reachable_time': 20676, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 433143, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.197 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[2377c132-d707-400e-ae73-772daa4f59b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.239 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[00aa14d8-68b5-4dad-863b-3940fcecc569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.241 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.241 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.241 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c9ca540-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.243 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:43 compute-0 NetworkManager[49108]: <info>  [1769851063.2438] manager: (tap5c9ca540-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Jan 31 09:17:43 compute-0 kernel: tap5c9ca540-50: entered promiscuous mode
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.245 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.246 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c9ca540-50, col_values=(('external_ids', {'iface-id': '016c97be-36ee-470a-8bac-28db98577a8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.248 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:43 compute-0 ovn_controller[149457]: 2026-01-31T09:17:43Z|00919|binding|INFO|Releasing lport 016c97be-36ee-470a-8bac-28db98577a8c from this chassis (sb_readonly=0)
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.249 160028 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.249 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d5011c93-9a2d-4c53-bc38-c73ba158c4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.250 160028 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: global
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     log         /dev/log local0 debug
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     log-tag     haproxy-metadata-proxy-5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     user        root
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     group       root
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     maxconn     1024
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     pidfile     /var/lib/neutron/external/pids/5c9ca540-57e7-412d-8ef3-af923db0a265.pid.haproxy
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     daemon
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: defaults
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     log global
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     mode http
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     option httplog
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     option dontlognull
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     option http-server-close
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     option forwardfor
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     retries                 3
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     timeout http-request    30s
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     timeout connect         30s
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     timeout client          32s
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     timeout server          32s
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     timeout http-keep-alive 30s
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: listen listener
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     bind 169.254.169.254:80
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     server metadata /var/lib/neutron/metadata_proxy
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:     http-request add-header X-OVN-Network-ID 5c9ca540-57e7-412d-8ef3-af923db0a265
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 09:17:43 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:17:43.251 160028 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'env', 'PROCESS_TAG=haproxy-5c9ca540-57e7-412d-8ef3-af923db0a265', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c9ca540-57e7-412d-8ef3-af923db0a265.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.253 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.476 247708 DEBUG nova.compute.manager [req-36abe724-b953-489b-966f-d99aa1d29a3d req-14260b59-c32f-465b-abc9-8d783f1770c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.478 247708 DEBUG oslo_concurrency.lockutils [req-36abe724-b953-489b-966f-d99aa1d29a3d req-14260b59-c32f-465b-abc9-8d783f1770c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.479 247708 DEBUG oslo_concurrency.lockutils [req-36abe724-b953-489b-966f-d99aa1d29a3d req-14260b59-c32f-465b-abc9-8d783f1770c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.479 247708 DEBUG oslo_concurrency.lockutils [req-36abe724-b953-489b-966f-d99aa1d29a3d req-14260b59-c32f-465b-abc9-8d783f1770c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.479 247708 DEBUG nova.compute.manager [req-36abe724-b953-489b-966f-d99aa1d29a3d req-14260b59-c32f-465b-abc9-8d783f1770c0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Processing event network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 09:17:43 compute-0 podman[433193]: 2026-01-31 09:17:43.598812978 +0000 UTC m=+0.028846105 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 09:17:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:43.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.996 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.997 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769851063.9956286, dcc57ec3-f59a-4574-9cd4-389ae92c9e11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:17:43 compute-0 nova_compute[247704]: 2026-01-31 09:17:43.997 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] VM Started (Lifecycle Event)
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.002 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.005 247708 INFO nova.virt.libvirt.driver [-] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Instance spawned successfully.
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.005 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.007 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.036 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.042 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.074 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.075 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.076 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.077 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.078 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.078 247708 DEBUG nova.virt.libvirt.driver [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.202 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.203 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769851063.9991372, dcc57ec3-f59a-4574-9cd4-389ae92c9e11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:17:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4392: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.204 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] VM Paused (Lifecycle Event)
Jan 31 09:17:44 compute-0 ceph-mon[74496]: pgmap v4391: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.390 247708 INFO nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Took 13.95 seconds to spawn the instance on the hypervisor.
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.390 247708 DEBUG nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:17:44 compute-0 podman[433193]: 2026-01-31 09:17:44.409015955 +0000 UTC m=+0.839049072 container create 769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.622 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.626 247708 DEBUG nova.virt.driver [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] Emitting event <LifecycleEvent: 1769851064.0015855, dcc57ec3-f59a-4574-9cd4-389ae92c9e11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.627 247708 INFO nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] VM Resumed (Lifecycle Event)
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.704 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.706 247708 INFO nova.compute.manager [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Took 18.12 seconds to build instance.
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.711 247708 DEBUG nova.compute.manager [None req-faa8023e-46b8-4e3c-92f0-9e75e3cf6e45 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 09:17:44 compute-0 systemd[1]: Started libpod-conmon-769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177.scope.
Jan 31 09:17:44 compute-0 nova_compute[247704]: 2026-01-31 09:17:44.744 247708 DEBUG oslo_concurrency.lockutils [None req-27f61439-495a-4ced-918e-d4a3c07031f5 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:17:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1026b9e3d120ee123bf3ab4c21e3a16b8f568de4b13f5b56edcfca63ae7fdc7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 09:17:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:45 compute-0 podman[433193]: 2026-01-31 09:17:45.177890455 +0000 UTC m=+1.607923642 container init 769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:17:45 compute-0 podman[433193]: 2026-01-31 09:17:45.184895734 +0000 UTC m=+1.614928881 container start 769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 09:17:45 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[433234]: [NOTICE]   (433238) : New worker (433240) forked
Jan 31 09:17:45 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[433234]: [NOTICE]   (433238) : Loading success.
Jan 31 09:17:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:45.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:45 compute-0 nova_compute[247704]: 2026-01-31 09:17:45.662 247708 DEBUG nova.compute.manager [req-2e45df79-5efa-44f4-a9cd-f54098946cb4 req-07fd31e1-9d8c-45ab-8b39-0e6121717da0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:17:45 compute-0 nova_compute[247704]: 2026-01-31 09:17:45.662 247708 DEBUG oslo_concurrency.lockutils [req-2e45df79-5efa-44f4-a9cd-f54098946cb4 req-07fd31e1-9d8c-45ab-8b39-0e6121717da0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:17:45 compute-0 nova_compute[247704]: 2026-01-31 09:17:45.663 247708 DEBUG oslo_concurrency.lockutils [req-2e45df79-5efa-44f4-a9cd-f54098946cb4 req-07fd31e1-9d8c-45ab-8b39-0e6121717da0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:17:45 compute-0 nova_compute[247704]: 2026-01-31 09:17:45.663 247708 DEBUG oslo_concurrency.lockutils [req-2e45df79-5efa-44f4-a9cd-f54098946cb4 req-07fd31e1-9d8c-45ab-8b39-0e6121717da0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:17:45 compute-0 nova_compute[247704]: 2026-01-31 09:17:45.663 247708 DEBUG nova.compute.manager [req-2e45df79-5efa-44f4-a9cd-f54098946cb4 req-07fd31e1-9d8c-45ab-8b39-0e6121717da0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] No waiting events found dispatching network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:17:45 compute-0 nova_compute[247704]: 2026-01-31 09:17:45.664 247708 WARNING nova.compute.manager [req-2e45df79-5efa-44f4-a9cd-f54098946cb4 req-07fd31e1-9d8c-45ab-8b39-0e6121717da0 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received unexpected event network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 for instance with vm_state active and task_state None.
Jan 31 09:17:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:45.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:46 compute-0 ceph-mon[74496]: pgmap v4392: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 12 KiB/s wr, 4 op/s
Jan 31 09:17:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4393: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 43 op/s
Jan 31 09:17:47 compute-0 ceph-mon[74496]: pgmap v4393: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 43 op/s
Jan 31 09:17:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:47.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:47 compute-0 nova_compute[247704]: 2026-01-31 09:17:47.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:47.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:48 compute-0 nova_compute[247704]: 2026-01-31 09:17:48.149 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4394: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 60 op/s
Jan 31 09:17:49 compute-0 nova_compute[247704]: 2026-01-31 09:17:49.008 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000046s ======
Jan 31 09:17:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:49.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Jan 31 09:17:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:49.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:17:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:17:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4395: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:50 compute-0 ceph-mon[74496]: pgmap v4394: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 60 op/s
Jan 31 09:17:50 compute-0 podman[433252]: 2026-01-31 09:17:50.87089084 +0000 UTC m=+0.041318316 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:17:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:51.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:51 compute-0 sudo[433272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:51 compute-0 sudo[433272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:51 compute-0 sudo[433272]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:51 compute-0 sudo[433297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:17:51 compute-0 sudo[433297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:17:51 compute-0 sudo[433297]: pam_unix(sudo:session): session closed for user root
Jan 31 09:17:51 compute-0 nova_compute[247704]: 2026-01-31 09:17:51.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:17:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:51.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:51 compute-0 ceph-mon[74496]: pgmap v4395: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4396: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:53 compute-0 nova_compute[247704]: 2026-01-31 09:17:53.152 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:53.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:53 compute-0 ceph-mon[74496]: pgmap v4396: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:53.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:54 compute-0 nova_compute[247704]: 2026-01-31 09:17:54.011 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:54 compute-0 nova_compute[247704]: 2026-01-31 09:17:54.060 247708 DEBUG nova.compute.manager [req-0ef443da-70e9-43ac-872a-558092990ebc req-a07e341e-998e-447a-a5b3-475d6e982869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-changed-9d925d3a-15af-4795-b206-2c45063bc1f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:17:54 compute-0 nova_compute[247704]: 2026-01-31 09:17:54.062 247708 DEBUG nova.compute.manager [req-0ef443da-70e9-43ac-872a-558092990ebc req-a07e341e-998e-447a-a5b3-475d6e982869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Refreshing instance network info cache due to event network-changed-9d925d3a-15af-4795-b206-2c45063bc1f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:17:54 compute-0 nova_compute[247704]: 2026-01-31 09:17:54.063 247708 DEBUG oslo_concurrency.lockutils [req-0ef443da-70e9-43ac-872a-558092990ebc req-a07e341e-998e-447a-a5b3-475d6e982869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:17:54 compute-0 nova_compute[247704]: 2026-01-31 09:17:54.063 247708 DEBUG oslo_concurrency.lockutils [req-0ef443da-70e9-43ac-872a-558092990ebc req-a07e341e-998e-447a-a5b3-475d6e982869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:17:54 compute-0 nova_compute[247704]: 2026-01-31 09:17:54.064 247708 DEBUG nova.network.neutron [req-0ef443da-70e9-43ac-872a-558092990ebc req-a07e341e-998e-447a-a5b3-475d6e982869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Refreshing network info cache for port 9d925d3a-15af-4795-b206-2c45063bc1f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:17:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4397: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:17:55 compute-0 ceph-mon[74496]: pgmap v4397: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 31 09:17:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:55.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:55.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4398: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 341 B/s wr, 74 op/s
Jan 31 09:17:56 compute-0 nova_compute[247704]: 2026-01-31 09:17:56.265 247708 DEBUG nova.network.neutron [req-0ef443da-70e9-43ac-872a-558092990ebc req-a07e341e-998e-447a-a5b3-475d6e982869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updated VIF entry in instance network info cache for port 9d925d3a-15af-4795-b206-2c45063bc1f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:17:56 compute-0 nova_compute[247704]: 2026-01-31 09:17:56.266 247708 DEBUG nova.network.neutron [req-0ef443da-70e9-43ac-872a-558092990ebc req-a07e341e-998e-447a-a5b3-475d6e982869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updating instance_info_cache with network_info: [{"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:17:56 compute-0 nova_compute[247704]: 2026-01-31 09:17:56.304 247708 DEBUG oslo_concurrency.lockutils [req-0ef443da-70e9-43ac-872a-558092990ebc req-a07e341e-998e-447a-a5b3-475d6e982869 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:17:57 compute-0 ovn_controller[149457]: 2026-01-31T09:17:57Z|00118|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.6
Jan 31 09:17:57 compute-0 ovn_controller[149457]: 2026-01-31T09:17:57Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d8:14:74 10.100.0.6
Jan 31 09:17:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:17:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:17:57 compute-0 ceph-mon[74496]: pgmap v4398: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 341 B/s wr, 74 op/s
Jan 31 09:17:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:57.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:58 compute-0 nova_compute[247704]: 2026-01-31 09:17:58.155 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4399: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 682 B/s wr, 46 op/s
Jan 31 09:17:59 compute-0 nova_compute[247704]: 2026-01-31 09:17:59.014 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:17:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:17:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:17:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:17:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:17:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:59.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:17:59 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:00 compute-0 ceph-mon[74496]: pgmap v4399: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 682 B/s wr, 46 op/s
Jan 31 09:18:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4400: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 829 KiB/s rd, 11 KiB/s wr, 46 op/s
Jan 31 09:18:00 compute-0 nova_compute[247704]: 2026-01-31 09:18:00.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:00 compute-0 nova_compute[247704]: 2026-01-31 09:18:00.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 09:18:00 compute-0 nova_compute[247704]: 2026-01-31 09:18:00.580 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 09:18:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:01.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:01 compute-0 ceph-mon[74496]: pgmap v4400: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 829 KiB/s rd, 11 KiB/s wr, 46 op/s
Jan 31 09:18:01 compute-0 ovn_controller[149457]: 2026-01-31T09:18:01Z|00120|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.6
Jan 31 09:18:01 compute-0 ovn_controller[149457]: 2026-01-31T09:18:01Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d8:14:74 10.100.0.6
Jan 31 09:18:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:01.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:02 compute-0 ovn_controller[149457]: 2026-01-31T09:18:02Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:14:74 10.100.0.6
Jan 31 09:18:02 compute-0 ovn_controller[149457]: 2026-01-31T09:18:02Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:14:74 10.100.0.6
Jan 31 09:18:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4401: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 12 KiB/s wr, 44 op/s
Jan 31 09:18:02 compute-0 podman[433329]: 2026-01-31 09:18:02.925974957 +0000 UTC m=+0.101106368 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 09:18:03 compute-0 nova_compute[247704]: 2026-01-31 09:18:03.157 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:03 compute-0 ceph-mon[74496]: pgmap v4401: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 12 KiB/s wr, 44 op/s
Jan 31 09:18:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:03.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:04 compute-0 nova_compute[247704]: 2026-01-31 09:18:04.017 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4402: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 31 09:18:04 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:05 compute-0 ceph-mon[74496]: pgmap v4402: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 31 09:18:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:05.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4403: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 31 09:18:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:07.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:07 compute-0 ceph-mon[74496]: pgmap v4403: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 31 09:18:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:07.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:08 compute-0 nova_compute[247704]: 2026-01-31 09:18:08.161 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4404: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 26 KiB/s wr, 40 op/s
Jan 31 09:18:09 compute-0 nova_compute[247704]: 2026-01-31 09:18:09.019 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:09 compute-0 ceph-mon[74496]: pgmap v4404: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 26 KiB/s wr, 40 op/s
Jan 31 09:18:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:09.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:09 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4405: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 29 KiB/s wr, 28 op/s
Jan 31 09:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:18:11.251 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:18:11.252 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:18:11.252 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:11 compute-0 sudo[433361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:11 compute-0 sudo[433361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:11 compute-0 sudo[433361]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:11 compute-0 sudo[433386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:11 compute-0 sudo[433386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:11 compute-0 sudo[433386]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Jan 31 09:18:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:11.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:11 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Jan 31 09:18:11 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Jan 31 09:18:11 compute-0 ceph-mon[74496]: pgmap v4405: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 29 KiB/s wr, 28 op/s
Jan 31 09:18:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4407: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 20 KiB/s wr, 5 op/s
Jan 31 09:18:13 compute-0 nova_compute[247704]: 2026-01-31 09:18:13.163 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:13 compute-0 ceph-mon[74496]: osdmap e430: 3 total, 3 up, 3 in
Jan 31 09:18:13 compute-0 ceph-mon[74496]: pgmap v4407: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 20 KiB/s wr, 5 op/s
Jan 31 09:18:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:13 compute-0 nova_compute[247704]: 2026-01-31 09:18:13.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:13 compute-0 nova_compute[247704]: 2026-01-31 09:18:13.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 09:18:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:13.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:14 compute-0 nova_compute[247704]: 2026-01-31 09:18:14.023 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4408: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 6.8 KiB/s wr, 10 op/s
Jan 31 09:18:14 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:15 compute-0 ceph-mon[74496]: pgmap v4408: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 6.8 KiB/s wr, 10 op/s
Jan 31 09:18:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:15.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4409: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 8.6 KiB/s wr, 30 op/s
Jan 31 09:18:16 compute-0 nova_compute[247704]: 2026-01-31 09:18:16.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:17 compute-0 ceph-mon[74496]: pgmap v4409: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 8.6 KiB/s wr, 30 op/s
Jan 31 09:18:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:18:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:17.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:18:18 compute-0 nova_compute[247704]: 2026-01-31 09:18:18.166 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4410: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 9.2 KiB/s wr, 31 op/s
Jan 31 09:18:19 compute-0 nova_compute[247704]: 2026-01-31 09:18:19.024 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:18:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:18:19 compute-0 ceph-mon[74496]: pgmap v4410: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 9.2 KiB/s wr, 31 op/s
Jan 31 09:18:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:19.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:19 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4411: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:18:20
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.mgr', 'vms']
Jan 31 09:18:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:18:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:18:21 compute-0 ceph-mon[74496]: pgmap v4411: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 271 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Jan 31 09:18:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:21.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:21 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 31 09:18:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:21.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:21 compute-0 podman[433416]: 2026-01-31 09:18:21.882056 +0000 UTC m=+0.057683152 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 09:18:22 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 31 09:18:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4412: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Jan 31 09:18:22 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 09:18:22 compute-0 radosgw[94239]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 09:18:23 compute-0 ovn_controller[149457]: 2026-01-31T09:18:23Z|00920|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Jan 31 09:18:23 compute-0 nova_compute[247704]: 2026-01-31 09:18:23.169 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:23.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:23 compute-0 ceph-mon[74496]: pgmap v4412: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Jan 31 09:18:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:18:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:23.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:18:24 compute-0 nova_compute[247704]: 2026-01-31 09:18:24.026 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4413: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 184 KiB/s rd, 4.3 KiB/s wr, 37 op/s
Jan 31 09:18:24 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3716107651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:24 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:25.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:25 compute-0 nova_compute[247704]: 2026-01-31 09:18:25.600 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:25 compute-0 nova_compute[247704]: 2026-01-31 09:18:25.600 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:18:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:25.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:26 compute-0 ceph-mon[74496]: pgmap v4413: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 184 KiB/s rd, 4.3 KiB/s wr, 37 op/s
Jan 31 09:18:26 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1731910585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:18:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4414: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.0 KiB/s wr, 42 op/s
Jan 31 09:18:27 compute-0 ceph-mon[74496]: pgmap v4414: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.0 KiB/s wr, 42 op/s
Jan 31 09:18:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:18:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:27.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:18:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:27.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:28 compute-0 nova_compute[247704]: 2026-01-31 09:18:28.172 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4415: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 511 B/s wr, 40 op/s
Jan 31 09:18:28 compute-0 nova_compute[247704]: 2026-01-31 09:18:28.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:29 compute-0 nova_compute[247704]: 2026-01-31 09:18:29.030 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:29.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:29 compute-0 ceph-mon[74496]: pgmap v4415: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 511 B/s wr, 40 op/s
Jan 31 09:18:29 compute-0 nova_compute[247704]: 2026-01-31 09:18:29.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:29.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:29 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4416: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 31 09:18:31 compute-0 ceph-mon[74496]: pgmap v4416: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 31 09:18:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:31.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:31 compute-0 sudo[433442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:31 compute-0 sudo[433442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:31 compute-0 sudo[433442]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:31.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:31 compute-0 sudo[433467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:31 compute-0 sudo[433467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:31 compute-0 sudo[433467]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4417: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Jan 31 09:18:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2472609812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 09:18:32 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3661090237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:33 compute-0 nova_compute[247704]: 2026-01-31 09:18:33.175 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:18:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:33.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:18:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/668408901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:33 compute-0 ceph-mon[74496]: pgmap v4417: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Jan 31 09:18:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:33.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:33 compute-0 podman[433493]: 2026-01-31 09:18:33.917897683 +0000 UTC m=+0.093689909 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 09:18:34 compute-0 nova_compute[247704]: 2026-01-31 09:18:34.032 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4418: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 82 op/s
Jan 31 09:18:34 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:35.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:35 compute-0 ceph-mon[74496]: pgmap v4418: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 15 KiB/s wr, 82 op/s
Jan 31 09:18:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:35.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4419: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 110 KiB/s rd, 15 KiB/s wr, 92 op/s
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.399503303554811e-05 of space, bias 1.0, pg target 0.004198509910664433 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004333007630746324 of space, bias 1.0, pg target 1.2999022892238972 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:18:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 09:18:36 compute-0 nova_compute[247704]: 2026-01-31 09:18:36.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:36 compute-0 nova_compute[247704]: 2026-01-31 09:18:36.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:18:36 compute-0 nova_compute[247704]: 2026-01-31 09:18:36.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:18:36 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3402391468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:36 compute-0 nova_compute[247704]: 2026-01-31 09:18:36.960 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:18:36 compute-0 nova_compute[247704]: 2026-01-31 09:18:36.961 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquired lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:18:36 compute-0 nova_compute[247704]: 2026-01-31 09:18:36.961 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 09:18:36 compute-0 nova_compute[247704]: 2026-01-31 09:18:36.961 247708 DEBUG nova.objects.instance [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dcc57ec3-f59a-4574-9cd4-389ae92c9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:18:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:37.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:37 compute-0 sudo[433521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:37 compute-0 sudo[433521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:37 compute-0 sudo[433521]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:37.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:37 compute-0 sudo[433546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:18:37 compute-0 sudo[433546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:37 compute-0 sudo[433546]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:37 compute-0 sudo[433571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:37 compute-0 sudo[433571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:37 compute-0 sudo[433571]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:37 compute-0 sudo[433596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:18:37 compute-0 sudo[433596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:38 compute-0 nova_compute[247704]: 2026-01-31 09:18:38.181 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4420: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 571 KiB/s rd, 15 KiB/s wr, 123 op/s
Jan 31 09:18:38 compute-0 nova_compute[247704]: 2026-01-31 09:18:38.712 247708 DEBUG nova.network.neutron [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updating instance_info_cache with network_info: [{"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:18:38 compute-0 nova_compute[247704]: 2026-01-31 09:18:38.731 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Releasing lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:18:38 compute-0 nova_compute[247704]: 2026-01-31 09:18:38.731 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 09:18:38 compute-0 nova_compute[247704]: 2026-01-31 09:18:38.731 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:38 compute-0 nova_compute[247704]: 2026-01-31 09:18:38.732 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:39 compute-0 nova_compute[247704]: 2026-01-31 09:18:39.034 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:39 compute-0 ceph-mon[74496]: pgmap v4419: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 110 KiB/s rd, 15 KiB/s wr, 92 op/s
Jan 31 09:18:39 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-keepalived-rgw-default-compute-0-rwjfwq[96432]: Sat Jan 31 09:18:39 2026: A thread timer expired 1.250529 seconds ago
Jan 31 09:18:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2063266035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:39 compute-0 nova_compute[247704]: 2026-01-31 09:18:39.283 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:39 compute-0 nova_compute[247704]: 2026-01-31 09:18:39.283 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:39 compute-0 nova_compute[247704]: 2026-01-31 09:18:39.283 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:39 compute-0 nova_compute[247704]: 2026-01-31 09:18:39.283 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:18:39 compute-0 nova_compute[247704]: 2026-01-31 09:18:39.284 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:39 compute-0 sudo[433596]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:39.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:18:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:18:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:18:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:18:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:18:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 257ca5f1-444c-446c-9928-824cf04acc97 does not exist
Jan 31 09:18:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 1fbc400b-a4cc-45cb-a9bc-0d820cf2090c does not exist
Jan 31 09:18:39 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b081891f-25df-4314-a2ff-5f279d743c5a does not exist
Jan 31 09:18:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:18:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:18:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:18:39 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:18:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:18:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:18:39 compute-0 sudo[433673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:39 compute-0 sudo[433673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:39 compute-0 sudo[433673]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:18:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:39.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:18:39 compute-0 sudo[433698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:18:39 compute-0 sudo[433698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:39 compute-0 sudo[433698]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:39 compute-0 sudo[433723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:39 compute-0 sudo[433723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:39 compute-0 sudo[433723]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:39 compute-0 sudo[433748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:18:39 compute-0 sudo[433748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:39 compute-0 nova_compute[247704]: 2026-01-31 09:18:39.992 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.095 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.095 247708 DEBUG nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.227 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.228 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3894MB free_disk=20.987987518310547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.228 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.228 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4421: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 15 KiB/s wr, 136 op/s
Jan 31 09:18:40 compute-0 podman[433816]: 2026-01-31 09:18:40.264719195 +0000 UTC m=+0.081725941 container create f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 09:18:40 compute-0 podman[433816]: 2026-01-31 09:18:40.203997631 +0000 UTC m=+0.021004397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:18:40 compute-0 ceph-mon[74496]: pgmap v4420: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 571 KiB/s rd, 15 KiB/s wr, 123 op/s
Jan 31 09:18:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:18:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:18:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:18:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:18:40 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:18:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1275901134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:40 compute-0 systemd[1]: Started libpod-conmon-f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82.scope.
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.362 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Instance dcc57ec3-f59a-4574-9cd4-389ae92c9e11 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.363 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.363 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:18:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:18:40 compute-0 podman[433816]: 2026-01-31 09:18:40.446605449 +0000 UTC m=+0.263612235 container init f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.449 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:18:40 compute-0 podman[433816]: 2026-01-31 09:18:40.45578093 +0000 UTC m=+0.272787716 container start f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 09:18:40 compute-0 funny_bartik[433832]: 167 167
Jan 31 09:18:40 compute-0 systemd[1]: libpod-f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82.scope: Deactivated successfully.
Jan 31 09:18:40 compute-0 conmon[433832]: conmon f261b6dc652d0a1073dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82.scope/container/memory.events
Jan 31 09:18:40 compute-0 podman[433816]: 2026-01-31 09:18:40.472732028 +0000 UTC m=+0.289738814 container attach f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:18:40 compute-0 podman[433816]: 2026-01-31 09:18:40.473263961 +0000 UTC m=+0.290270717 container died f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 09:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f35acaf2a80d448af8827ba59e2666e302902666e6e33dd0a169f9e9fc0d6592-merged.mount: Deactivated successfully.
Jan 31 09:18:40 compute-0 podman[433816]: 2026-01-31 09:18:40.811731099 +0000 UTC m=+0.628737885 container remove f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:18:40 compute-0 systemd[1]: libpod-conmon-f261b6dc652d0a1073dc461fde1bb34c2c99beed4679daa7be4b50c786b9ab82.scope: Deactivated successfully.
Jan 31 09:18:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:18:40 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447361038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.915 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.921 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.957 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.984 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:18:40 compute-0 nova_compute[247704]: 2026-01-31 09:18:40.985 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:40 compute-0 podman[433879]: 2026-01-31 09:18:40.987611847 +0000 UTC m=+0.065399257 container create 52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:18:41 compute-0 podman[433879]: 2026-01-31 09:18:40.941544847 +0000 UTC m=+0.019332257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:18:41 compute-0 systemd[1]: Started libpod-conmon-52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0.scope.
Jan 31 09:18:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c4b22467f514e4fbe97886014623b31ecc1066a8cc372361a68f174a835c24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c4b22467f514e4fbe97886014623b31ecc1066a8cc372361a68f174a835c24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c4b22467f514e4fbe97886014623b31ecc1066a8cc372361a68f174a835c24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c4b22467f514e4fbe97886014623b31ecc1066a8cc372361a68f174a835c24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c4b22467f514e4fbe97886014623b31ecc1066a8cc372361a68f174a835c24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:41 compute-0 podman[433879]: 2026-01-31 09:18:41.177797191 +0000 UTC m=+0.255584621 container init 52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:18:41 compute-0 podman[433879]: 2026-01-31 09:18:41.185783424 +0000 UTC m=+0.263570864 container start 52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:18:41 compute-0 podman[433879]: 2026-01-31 09:18:41.253650039 +0000 UTC m=+0.331437469 container attach 52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:18:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:41.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:41.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:41 compute-0 quirky_hamilton[433896]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:18:41 compute-0 quirky_hamilton[433896]: --> relative data size: 1.0
Jan 31 09:18:41 compute-0 quirky_hamilton[433896]: --> All data devices are unavailable
Jan 31 09:18:41 compute-0 systemd[1]: libpod-52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0.scope: Deactivated successfully.
Jan 31 09:18:42 compute-0 podman[433912]: 2026-01-31 09:18:42.019965608 +0000 UTC m=+0.024715546 container died 52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:18:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4422: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 176 op/s
Jan 31 09:18:42 compute-0 ceph-mon[74496]: pgmap v4421: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 15 KiB/s wr, 136 op/s
Jan 31 09:18:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3447361038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-41c4b22467f514e4fbe97886014623b31ecc1066a8cc372361a68f174a835c24-merged.mount: Deactivated successfully.
Jan 31 09:18:42 compute-0 nova_compute[247704]: 2026-01-31 09:18:42.717 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:42 compute-0 nova_compute[247704]: 2026-01-31 09:18:42.719 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:42 compute-0 nova_compute[247704]: 2026-01-31 09:18:42.746 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Triggering sync for uuid dcc57ec3-f59a-4574-9cd4-389ae92c9e11 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 31 09:18:42 compute-0 nova_compute[247704]: 2026-01-31 09:18:42.747 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:18:42 compute-0 nova_compute[247704]: 2026-01-31 09:18:42.747 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:18:42 compute-0 nova_compute[247704]: 2026-01-31 09:18:42.773 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:18:42 compute-0 podman[433912]: 2026-01-31 09:18:42.905553721 +0000 UTC m=+0.910303639 container remove 52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hamilton, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:18:42 compute-0 systemd[1]: libpod-conmon-52dda2e9fba2b486a49dfba65d97cb3352d4f59335764d585a233bc8c4d93fc0.scope: Deactivated successfully.
Jan 31 09:18:42 compute-0 sudo[433748]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:43 compute-0 sudo[433928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:43 compute-0 sudo[433928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:43 compute-0 sudo[433928]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:43 compute-0 sudo[433953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:18:43 compute-0 sudo[433953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:43 compute-0 sudo[433953]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:43 compute-0 sudo[433978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:43 compute-0 sudo[433978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:43 compute-0 sudo[433978]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:43 compute-0 nova_compute[247704]: 2026-01-31 09:18:43.183 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:43 compute-0 sudo[434003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:18:43 compute-0 sudo[434003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:43.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:43 compute-0 podman[434070]: 2026-01-31 09:18:43.550201037 +0000 UTC m=+0.074680500 container create c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 09:18:43 compute-0 podman[434070]: 2026-01-31 09:18:43.49386856 +0000 UTC m=+0.018348053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:18:43 compute-0 systemd[1]: Started libpod-conmon-c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1.scope.
Jan 31 09:18:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:18:43 compute-0 podman[434070]: 2026-01-31 09:18:43.67933027 +0000 UTC m=+0.203809753 container init c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 09:18:43 compute-0 podman[434070]: 2026-01-31 09:18:43.6851401 +0000 UTC m=+0.209619563 container start c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Jan 31 09:18:43 compute-0 naughty_sutherland[434086]: 167 167
Jan 31 09:18:43 compute-0 systemd[1]: libpod-c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1.scope: Deactivated successfully.
Jan 31 09:18:43 compute-0 podman[434070]: 2026-01-31 09:18:43.692430056 +0000 UTC m=+0.216909549 container attach c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 09:18:43 compute-0 podman[434070]: 2026-01-31 09:18:43.693202605 +0000 UTC m=+0.217682068 container died c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a53eb3ba67633186b2901713b64e145b90a71bb81f596ea9640e863beca938b-merged.mount: Deactivated successfully.
Jan 31 09:18:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:43.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:43 compute-0 podman[434070]: 2026-01-31 09:18:43.80588431 +0000 UTC m=+0.330363773 container remove c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 09:18:43 compute-0 systemd[1]: libpod-conmon-c81ce845c846f00b27f12bc3a1766e0ec1ba05dd1f705734dae6185a58080ed1.scope: Deactivated successfully.
Jan 31 09:18:44 compute-0 ceph-mon[74496]: pgmap v4422: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 176 op/s
Jan 31 09:18:44 compute-0 podman[434112]: 2026-01-31 09:18:43.937280407 +0000 UTC m=+0.025023874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:18:44 compute-0 nova_compute[247704]: 2026-01-31 09:18:44.068 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:44 compute-0 podman[434112]: 2026-01-31 09:18:44.079916555 +0000 UTC m=+0.167659972 container create c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 09:18:44 compute-0 systemd[1]: Started libpod-conmon-c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3.scope.
Jan 31 09:18:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4423: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 172 op/s
Jan 31 09:18:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b27e1190f5e1cfdfe0419d0563b95d99987d40fb3c4808c1bc2e23cc07747d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b27e1190f5e1cfdfe0419d0563b95d99987d40fb3c4808c1bc2e23cc07747d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b27e1190f5e1cfdfe0419d0563b95d99987d40fb3c4808c1bc2e23cc07747d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b27e1190f5e1cfdfe0419d0563b95d99987d40fb3c4808c1bc2e23cc07747d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:44 compute-0 podman[434112]: 2026-01-31 09:18:44.319769625 +0000 UTC m=+0.407513062 container init c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leakey, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:18:44 compute-0 podman[434112]: 2026-01-31 09:18:44.328922426 +0000 UTC m=+0.416665903 container start c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leakey, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 09:18:44 compute-0 podman[434112]: 2026-01-31 09:18:44.436129969 +0000 UTC m=+0.523873496 container attach c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leakey, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:18:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:45 compute-0 determined_leakey[434130]: {
Jan 31 09:18:45 compute-0 determined_leakey[434130]:     "0": [
Jan 31 09:18:45 compute-0 determined_leakey[434130]:         {
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "devices": [
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "/dev/loop3"
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             ],
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "lv_name": "ceph_lv0",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "lv_size": "7511998464",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "name": "ceph_lv0",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "tags": {
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.cluster_name": "ceph",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.crush_device_class": "",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.encrypted": "0",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.osd_id": "0",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.type": "block",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:                 "ceph.vdo": "0"
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             },
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "type": "block",
Jan 31 09:18:45 compute-0 determined_leakey[434130]:             "vg_name": "ceph_vg0"
Jan 31 09:18:45 compute-0 determined_leakey[434130]:         }
Jan 31 09:18:45 compute-0 determined_leakey[434130]:     ]
Jan 31 09:18:45 compute-0 determined_leakey[434130]: }
Jan 31 09:18:45 compute-0 systemd[1]: libpod-c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3.scope: Deactivated successfully.
Jan 31 09:18:45 compute-0 podman[434139]: 2026-01-31 09:18:45.102257984 +0000 UTC m=+0.028702104 container died c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leakey, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:18:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:18:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:45.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:18:45 compute-0 ceph-mon[74496]: pgmap v4423: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 172 op/s
Jan 31 09:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b27e1190f5e1cfdfe0419d0563b95d99987d40fb3c4808c1bc2e23cc07747d-merged.mount: Deactivated successfully.
Jan 31 09:18:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:45.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:45 compute-0 podman[434139]: 2026-01-31 09:18:45.981837243 +0000 UTC m=+0.908281353 container remove c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leakey, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 09:18:45 compute-0 systemd[1]: libpod-conmon-c9bb3a1e508559f0529f60b3f1b15bb672553beb5bd5bd7a41a0aa1568045ed3.scope: Deactivated successfully.
Jan 31 09:18:46 compute-0 sudo[434003]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:46 compute-0 sudo[434155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:46 compute-0 sudo[434155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:46 compute-0 sudo[434155]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:46 compute-0 sudo[434180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:18:46 compute-0 sudo[434180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:46 compute-0 sudo[434180]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:46 compute-0 sudo[434205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:46 compute-0 sudo[434205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:46 compute-0 sudo[434205]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:46 compute-0 sudo[434230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:18:46 compute-0 sudo[434230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4424: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 597 B/s wr, 146 op/s
Jan 31 09:18:46 compute-0 podman[434293]: 2026-01-31 09:18:46.565454069 +0000 UTC m=+0.083491284 container create 3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:18:46 compute-0 nova_compute[247704]: 2026-01-31 09:18:46.591 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:46 compute-0 podman[434293]: 2026-01-31 09:18:46.500520804 +0000 UTC m=+0.018558069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:18:46 compute-0 systemd[1]: Started libpod-conmon-3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713.scope.
Jan 31 09:18:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:18:47 compute-0 podman[434293]: 2026-01-31 09:18:47.000622097 +0000 UTC m=+0.518659342 container init 3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 09:18:47 compute-0 podman[434293]: 2026-01-31 09:18:47.006182811 +0000 UTC m=+0.524220026 container start 3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 09:18:47 compute-0 busy_haibt[434311]: 167 167
Jan 31 09:18:47 compute-0 systemd[1]: libpod-3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713.scope: Deactivated successfully.
Jan 31 09:18:47 compute-0 podman[434293]: 2026-01-31 09:18:47.211872238 +0000 UTC m=+0.729909453 container attach 3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 09:18:47 compute-0 podman[434293]: 2026-01-31 09:18:47.213361604 +0000 UTC m=+0.731398819 container died 3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 09:18:47 compute-0 ceph-mon[74496]: pgmap v4424: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 597 B/s wr, 146 op/s
Jan 31 09:18:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:47.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-44cf9cdc65b0eafc05eedda0ada497d6a9979d2c3271f7fde11d66a4ae54d9f1-merged.mount: Deactivated successfully.
Jan 31 09:18:47 compute-0 podman[434293]: 2026-01-31 09:18:47.793331812 +0000 UTC m=+1.311369027 container remove 3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:18:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:47.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:47 compute-0 systemd[1]: libpod-conmon-3ad8fe70ff09f9bffec14251805efc9d947fe72b0b3435809145f154552d8713.scope: Deactivated successfully.
Jan 31 09:18:47 compute-0 podman[434337]: 2026-01-31 09:18:47.93722758 +0000 UTC m=+0.045131700 container create 8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mestorf, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 09:18:47 compute-0 systemd[1]: Started libpod-conmon-8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89.scope.
Jan 31 09:18:48 compute-0 podman[434337]: 2026-01-31 09:18:47.913552489 +0000 UTC m=+0.021456639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:18:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18350ceca371fa751d5ca9c123aeb21fe337d78a5d18b590e82e1843cfef77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18350ceca371fa751d5ca9c123aeb21fe337d78a5d18b590e82e1843cfef77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18350ceca371fa751d5ca9c123aeb21fe337d78a5d18b590e82e1843cfef77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e18350ceca371fa751d5ca9c123aeb21fe337d78a5d18b590e82e1843cfef77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:18:48 compute-0 podman[434337]: 2026-01-31 09:18:48.051506063 +0000 UTC m=+0.159410203 container init 8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 09:18:48 compute-0 podman[434337]: 2026-01-31 09:18:48.057495178 +0000 UTC m=+0.165399298 container start 8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mestorf, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:18:48 compute-0 podman[434337]: 2026-01-31 09:18:48.06796975 +0000 UTC m=+0.175873870 container attach 8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 09:18:48 compute-0 nova_compute[247704]: 2026-01-31 09:18:48.186 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4425: 305 pgs: 305 active+clean; 206 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 62 KiB/s wr, 126 op/s
Jan 31 09:18:48 compute-0 kind_mestorf[434354]: {
Jan 31 09:18:48 compute-0 kind_mestorf[434354]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:18:48 compute-0 kind_mestorf[434354]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:18:48 compute-0 kind_mestorf[434354]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:18:48 compute-0 kind_mestorf[434354]:         "osd_id": 0,
Jan 31 09:18:48 compute-0 kind_mestorf[434354]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:18:48 compute-0 kind_mestorf[434354]:         "type": "bluestore"
Jan 31 09:18:48 compute-0 kind_mestorf[434354]:     }
Jan 31 09:18:48 compute-0 kind_mestorf[434354]: }
Jan 31 09:18:48 compute-0 systemd[1]: libpod-8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89.scope: Deactivated successfully.
Jan 31 09:18:48 compute-0 podman[434337]: 2026-01-31 09:18:48.858622056 +0000 UTC m=+0.966526176 container died 8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mestorf, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e18350ceca371fa751d5ca9c123aeb21fe337d78a5d18b590e82e1843cfef77-merged.mount: Deactivated successfully.
Jan 31 09:18:49 compute-0 podman[434337]: 2026-01-31 09:18:49.036655007 +0000 UTC m=+1.144559147 container remove 8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mestorf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 09:18:49 compute-0 systemd[1]: libpod-conmon-8883dc271e387fc82046c90c00e310c7b3525d2e8d660bdcffcfdba67c22bf89.scope: Deactivated successfully.
Jan 31 09:18:49 compute-0 sudo[434230]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:18:49 compute-0 nova_compute[247704]: 2026-01-31 09:18:49.070 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:18:49 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:49 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 5282bfb0-8b0e-48ca-aaf2-eccc884ecf8e does not exist
Jan 31 09:18:49 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 6f2d2adf-a005-46cf-93d7-535b8fcbe850 does not exist
Jan 31 09:18:49 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 0b868119-7877-4016-b80a-b92b11d5447a does not exist
Jan 31 09:18:49 compute-0 sudo[434389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:49 compute-0 sudo[434389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:49 compute-0 sudo[434389]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:49 compute-0 sudo[434414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:18:49 compute-0 sudo[434414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:49 compute-0 sudo[434414]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:49.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:49 compute-0 ceph-mon[74496]: pgmap v4425: 305 pgs: 305 active+clean; 206 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 62 KiB/s wr, 126 op/s
Jan 31 09:18:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:49 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:18:49 compute-0 nova_compute[247704]: 2026-01-31 09:18:49.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:18:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:49.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:18:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:18:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4426: 305 pgs: 305 active+clean; 208 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 89 KiB/s wr, 105 op/s
Jan 31 09:18:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:51.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:51 compute-0 ceph-mon[74496]: pgmap v4426: 305 pgs: 305 active+clean; 208 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 89 KiB/s wr, 105 op/s
Jan 31 09:18:51 compute-0 sudo[434441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:51 compute-0 sudo[434441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:51 compute-0 sudo[434441]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:51 compute-0 sudo[434466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:18:51 compute-0 sudo[434466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:18:51 compute-0 sudo[434466]: pam_unix(sudo:session): session closed for user root
Jan 31 09:18:51 compute-0 podman[434490]: 2026-01-31 09:18:51.971838596 +0000 UTC m=+0.047474365 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:18:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4427: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 499 KiB/s wr, 98 op/s
Jan 31 09:18:52 compute-0 ceph-mon[74496]: pgmap v4427: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 499 KiB/s wr, 98 op/s
Jan 31 09:18:53 compute-0 nova_compute[247704]: 2026-01-31 09:18:53.191 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:53.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:18:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1372370753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:18:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/1372370753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:18:54 compute-0 nova_compute[247704]: 2026-01-31 09:18:54.073 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4428: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 53 op/s
Jan 31 09:18:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:18:55 compute-0 ceph-mon[74496]: pgmap v4428: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 53 op/s
Jan 31 09:18:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:55.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:55.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4429: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 499 KiB/s wr, 54 op/s
Jan 31 09:18:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:18:56.314 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=118, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=117) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:18:56 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:18:56.315 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:18:56 compute-0 nova_compute[247704]: 2026-01-31 09:18:56.354 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:57 compute-0 ceph-mon[74496]: pgmap v4429: 305 pgs: 305 active+clean; 216 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 499 KiB/s wr, 54 op/s
Jan 31 09:18:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:18:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:57.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:18:58 compute-0 nova_compute[247704]: 2026-01-31 09:18:58.193 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4430: 305 pgs: 305 active+clean; 220 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 581 KiB/s wr, 60 op/s
Jan 31 09:18:59 compute-0 nova_compute[247704]: 2026-01-31 09:18:59.074 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:18:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:18:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:59.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:18:59 compute-0 ceph-mon[74496]: pgmap v4430: 305 pgs: 305 active+clean; 220 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 581 KiB/s wr, 60 op/s
Jan 31 09:18:59 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1164652490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:18:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:18:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:18:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4431: 305 pgs: 305 active+clean; 220 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 523 KiB/s wr, 61 op/s
Jan 31 09:19:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:01.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:01 compute-0 ceph-mon[74496]: pgmap v4431: 305 pgs: 305 active+clean; 220 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 523 KiB/s wr, 61 op/s
Jan 31 09:19:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/822958513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:19:01 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/822958513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:19:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:01.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4432: 305 pgs: 305 active+clean; 212 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 815 KiB/s rd, 498 KiB/s wr, 68 op/s
Jan 31 09:19:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Jan 31 09:19:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Jan 31 09:19:02 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Jan 31 09:19:03 compute-0 ceph-mon[74496]: pgmap v4432: 305 pgs: 305 active+clean; 212 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 815 KiB/s rd, 498 KiB/s wr, 68 op/s
Jan 31 09:19:03 compute-0 ceph-mon[74496]: osdmap e431: 3 total, 3 up, 3 in
Jan 31 09:19:03 compute-0 nova_compute[247704]: 2026-01-31 09:19:03.195 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:03 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:03.316 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=5c307474-e9ec-4d19-9f52-463eb0ff26d1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '118'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:19:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:03.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:03.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.077 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4434: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 363 KiB/s rd, 106 KiB/s wr, 53 op/s
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.400 247708 DEBUG nova.compute.manager [req-5ab0ec6d-6474-49e5-9649-2b8e04c058b9 req-5c3bff2f-344c-4a57-9aba-ab62df32c3ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-changed-9d925d3a-15af-4795-b206-2c45063bc1f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.401 247708 DEBUG nova.compute.manager [req-5ab0ec6d-6474-49e5-9649-2b8e04c058b9 req-5c3bff2f-344c-4a57-9aba-ab62df32c3ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Refreshing instance network info cache due to event network-changed-9d925d3a-15af-4795-b206-2c45063bc1f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.401 247708 DEBUG oslo_concurrency.lockutils [req-5ab0ec6d-6474-49e5-9649-2b8e04c058b9 req-5c3bff2f-344c-4a57-9aba-ab62df32c3ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.401 247708 DEBUG oslo_concurrency.lockutils [req-5ab0ec6d-6474-49e5-9649-2b8e04c058b9 req-5c3bff2f-344c-4a57-9aba-ab62df32c3ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquired lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.401 247708 DEBUG nova.network.neutron [req-5ab0ec6d-6474-49e5-9649-2b8e04c058b9 req-5c3bff2f-344c-4a57-9aba-ab62df32c3ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Refreshing network info cache for port 9d925d3a-15af-4795-b206-2c45063bc1f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.504 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.505 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.505 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.505 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.506 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.507 247708 INFO nova.compute.manager [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Terminating instance
Jan 31 09:19:04 compute-0 nova_compute[247704]: 2026-01-31 09:19:04.509 247708 DEBUG nova.compute.manager [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 09:19:04 compute-0 podman[434519]: 2026-01-31 09:19:04.920987001 +0000 UTC m=+0.088607627 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:19:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:05 compute-0 kernel: tap9d925d3a-15 (unregistering): left promiscuous mode
Jan 31 09:19:05 compute-0 NetworkManager[49108]: <info>  [1769851145.1352] device (tap9d925d3a-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.145 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:05 compute-0 ovn_controller[149457]: 2026-01-31T09:19:05Z|00921|binding|INFO|Releasing lport 9d925d3a-15af-4795-b206-2c45063bc1f7 from this chassis (sb_readonly=0)
Jan 31 09:19:05 compute-0 ovn_controller[149457]: 2026-01-31T09:19:05Z|00922|binding|INFO|Setting lport 9d925d3a-15af-4795-b206-2c45063bc1f7 down in Southbound
Jan 31 09:19:05 compute-0 ovn_controller[149457]: 2026-01-31T09:19:05Z|00923|binding|INFO|Removing iface tap9d925d3a-15 ovn-installed in OVS
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.149 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.156 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:05.155 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:14:74 10.100.0.6'], port_security=['fa:16:3e:d8:14:74 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dcc57ec3-f59a-4574-9cd4-389ae92c9e11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c9ca540-57e7-412d-8ef3-af923db0a265', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98d10c0290e340a08e9d1726bf0066bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11348f26-2c0a-4b92-a927-856bca145e48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e88fefe4-17cc-4664-bc86-8614a5f025ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>], logical_port=9d925d3a-15af-4795-b206-2c45063bc1f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fb99a246b20>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:19:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:05.208 160028 INFO neutron.agent.ovn.metadata.agent [-] Port 9d925d3a-15af-4795-b206-2c45063bc1f7 in datapath 5c9ca540-57e7-412d-8ef3-af923db0a265 unbound from our chassis
Jan 31 09:19:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:05.209 160028 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c9ca540-57e7-412d-8ef3-af923db0a265, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 09:19:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:05.210 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d163cd6d-64be-44f4-afaf-ca122909618f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:19:05 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:05.210 160028 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 namespace which is not needed anymore
Jan 31 09:19:05 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000df.scope: Deactivated successfully.
Jan 31 09:19:05 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000df.scope: Consumed 15.605s CPU time.
Jan 31 09:19:05 compute-0 systemd-machined[214448]: Machine qemu-96-instance-000000df terminated.
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.341 247708 INFO nova.virt.libvirt.driver [-] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Instance destroyed successfully.
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.342 247708 DEBUG nova.objects.instance [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lazy-loading 'resources' on Instance uuid dcc57ec3-f59a-4574-9cd4-389ae92c9e11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.373 247708 DEBUG nova.virt.libvirt.vif [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:17:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2109844157',display_name='tempest-TestVolumeBootPattern-server-2109844157',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2109844157',id=223,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDAfqvbo84AGo7M8dHeUcfV7f8XDMsbbH3qvJ7QZYUuK8Mi+q4nVx9EyFqWsp6cOXT2AG4HbgkO3dUTMLlMtCu+TvTdRzopwn8vz5la3KIOsONTeEClwFEs29TOnQ3Rwg==',key_name='tempest-TestVolumeBootPattern-498126782',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:17:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98d10c0290e340a08e9d1726bf0066bf',ramdisk_id='',reservation_id='r-5956ry02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1294459393',owner_user_name='tempest-TestVolumeBootPattern-1294459393-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:17:44Z,user_data=None,user_id='ecd39871d7fd438f88b36601f25d6eb6',uuid=dcc57ec3-f59a-4574-9cd4-389ae92c9e11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.373 247708 DEBUG nova.network.os_vif_util [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converting VIF {"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.374 247708 DEBUG nova.network.os_vif_util [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:14:74,bridge_name='br-int',has_traffic_filtering=True,id=9d925d3a-15af-4795-b206-2c45063bc1f7,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d925d3a-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.375 247708 DEBUG os_vif [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:14:74,bridge_name='br-int',has_traffic_filtering=True,id=9d925d3a-15af-4795-b206-2c45063bc1f7,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d925d3a-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.377 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.377 247708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d925d3a-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.379 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.382 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 09:19:05 compute-0 nova_compute[247704]: 2026-01-31 09:19:05.384 247708 INFO os_vif [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:14:74,bridge_name='br-int',has_traffic_filtering=True,id=9d925d3a-15af-4795-b206-2c45063bc1f7,network=Network(5c9ca540-57e7-412d-8ef3-af923db0a265),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d925d3a-15')
Jan 31 09:19:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:05.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:05 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[433234]: [NOTICE]   (433238) : haproxy version is 2.8.14-c23fe91
Jan 31 09:19:05 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[433234]: [NOTICE]   (433238) : path to executable is /usr/sbin/haproxy
Jan 31 09:19:05 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[433234]: [WARNING]  (433238) : Exiting Master process...
Jan 31 09:19:05 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[433234]: [ALERT]    (433238) : Current worker (433240) exited with code 143 (Terminated)
Jan 31 09:19:05 compute-0 neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265[433234]: [WARNING]  (433238) : All workers exited. Exiting... (0)
Jan 31 09:19:05 compute-0 systemd[1]: libpod-769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177.scope: Deactivated successfully.
Jan 31 09:19:05 compute-0 podman[434572]: 2026-01-31 09:19:05.477635656 +0000 UTC m=+0.197603523 container died 769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177-userdata-shm.mount: Deactivated successfully.
Jan 31 09:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1026b9e3d120ee123bf3ab4c21e3a16b8f568de4b13f5b56edcfca63ae7fdc7f-merged.mount: Deactivated successfully.
Jan 31 09:19:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:05.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:05 compute-0 ceph-mon[74496]: pgmap v4434: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 363 KiB/s rd, 106 KiB/s wr, 53 op/s
Jan 31 09:19:06 compute-0 podman[434572]: 2026-01-31 09:19:06.002316882 +0000 UTC m=+0.722284749 container cleanup 769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:19:06 compute-0 systemd[1]: libpod-conmon-769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177.scope: Deactivated successfully.
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.189 247708 DEBUG nova.network.neutron [req-5ab0ec6d-6474-49e5-9649-2b8e04c058b9 req-5c3bff2f-344c-4a57-9aba-ab62df32c3ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updated VIF entry in instance network info cache for port 9d925d3a-15af-4795-b206-2c45063bc1f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.189 247708 DEBUG nova.network.neutron [req-5ab0ec6d-6474-49e5-9649-2b8e04c058b9 req-5c3bff2f-344c-4a57-9aba-ab62df32c3ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updating instance_info_cache with network_info: [{"id": "9d925d3a-15af-4795-b206-2c45063bc1f7", "address": "fa:16:3e:d8:14:74", "network": {"id": "5c9ca540-57e7-412d-8ef3-af923db0a265", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-547475823-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98d10c0290e340a08e9d1726bf0066bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d925d3a-15", "ovs_interfaceid": "9d925d3a-15af-4795-b206-2c45063bc1f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:19:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4435: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 107 KiB/s wr, 65 op/s
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.282 247708 DEBUG oslo_concurrency.lockutils [req-5ab0ec6d-6474-49e5-9649-2b8e04c058b9 req-5c3bff2f-344c-4a57-9aba-ab62df32c3ec 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Releasing lock "refresh_cache-dcc57ec3-f59a-4574-9cd4-389ae92c9e11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.545 247708 DEBUG nova.compute.manager [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-vif-unplugged-9d925d3a-15af-4795-b206-2c45063bc1f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.546 247708 DEBUG oslo_concurrency.lockutils [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.546 247708 DEBUG oslo_concurrency.lockutils [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.546 247708 DEBUG oslo_concurrency.lockutils [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.547 247708 DEBUG nova.compute.manager [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] No waiting events found dispatching network-vif-unplugged-9d925d3a-15af-4795-b206-2c45063bc1f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.547 247708 DEBUG nova.compute.manager [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-vif-unplugged-9d925d3a-15af-4795-b206-2c45063bc1f7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.547 247708 DEBUG nova.compute.manager [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.547 247708 DEBUG oslo_concurrency.lockutils [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Acquiring lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.547 247708 DEBUG oslo_concurrency.lockutils [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.548 247708 DEBUG oslo_concurrency.lockutils [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.548 247708 DEBUG nova.compute.manager [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] No waiting events found dispatching network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.548 247708 WARNING nova.compute.manager [req-91d049c6-860a-4da5-bdca-ad2e79637796 req-1a205055-663f-400c-b3ea-6c8b7f1834e2 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received unexpected event network-vif-plugged-9d925d3a-15af-4795-b206-2c45063bc1f7 for instance with vm_state active and task_state deleting.
Jan 31 09:19:06 compute-0 podman[434632]: 2026-01-31 09:19:06.661591251 +0000 UTC m=+0.638569461 container remove 769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.668 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e884a4-fed2-4e3e-9950-3554df79ffbc]: (4, ('Sat Jan 31 09:19:05 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177)\n769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177\nSat Jan 31 09:19:06 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 (769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177)\n769d242170b07fec71176f02c997f29b8b85d704eb75022eb864c5ed99121177\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.669 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b29197cb-1a9c-47be-a009-7ff7885fff52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.671 160028 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c9ca540-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.672 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:06 compute-0 kernel: tap5c9ca540-50: left promiscuous mode
Jan 31 09:19:06 compute-0 nova_compute[247704]: 2026-01-31 09:19:06.680 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.683 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[d40d13a7-dfdf-45db-8988-55a2669ff116]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.698 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[a8df6c3b-d763-4206-844d-ece27f8d987e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.700 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a42f1b-9ee5-4160-bd1a-7eeed9f6f6f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.711 254935 DEBUG oslo.privsep.daemon [-] privsep: reply[82160018-e186-4010-8e43-b60118b81c1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1130570, 'reachable_time': 43264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 434648, 'error': None, 'target': 'ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.714 160297 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c9ca540-57e7-412d-8ef3-af923db0a265 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 31 09:19:06 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:06.714 160297 DEBUG oslo.privsep.daemon [-] privsep: reply[39895eb9-d50e-4fa3-a274-6ea064ed9ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 09:19:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c9ca540\x2d57e7\x2d412d\x2d8ef3\x2daf923db0a265.mount: Deactivated successfully.
Jan 31 09:19:07 compute-0 ceph-mon[74496]: pgmap v4435: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 107 KiB/s wr, 65 op/s
Jan 31 09:19:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:07.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:07.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4436: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 9.4 KiB/s wr, 67 op/s
Jan 31 09:19:08 compute-0 nova_compute[247704]: 2026-01-31 09:19:08.788 247708 INFO nova.virt.libvirt.driver [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Deleting instance files /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11_del
Jan 31 09:19:08 compute-0 nova_compute[247704]: 2026-01-31 09:19:08.789 247708 INFO nova.virt.libvirt.driver [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Deletion of /var/lib/nova/instances/dcc57ec3-f59a-4574-9cd4-389ae92c9e11_del complete
Jan 31 09:19:08 compute-0 nova_compute[247704]: 2026-01-31 09:19:08.872 247708 INFO nova.compute.manager [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Took 4.36 seconds to destroy the instance on the hypervisor.
Jan 31 09:19:08 compute-0 nova_compute[247704]: 2026-01-31 09:19:08.873 247708 DEBUG oslo.service.loopingcall [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 09:19:08 compute-0 nova_compute[247704]: 2026-01-31 09:19:08.873 247708 DEBUG nova.compute.manager [-] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 09:19:08 compute-0 nova_compute[247704]: 2026-01-31 09:19:08.873 247708 DEBUG nova.network.neutron [-] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 09:19:09 compute-0 nova_compute[247704]: 2026-01-31 09:19:09.079 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:09.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:09 compute-0 nova_compute[247704]: 2026-01-31 09:19:09.507 247708 DEBUG nova.network.neutron [-] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 09:19:09 compute-0 nova_compute[247704]: 2026-01-31 09:19:09.524 247708 INFO nova.compute.manager [-] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Took 0.65 seconds to deallocate network for instance.
Jan 31 09:19:09 compute-0 nova_compute[247704]: 2026-01-31 09:19:09.612 247708 DEBUG nova.compute.manager [req-56acbab8-16ed-4eba-9491-88ab1b88c9ab req-f2e91ea5-32ff-4eb8-ac19-a0a3d707ff23 59630ee0089643e78d944136d6bced30 8aa8accd246f4c91857447e1cc9391b2 - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Received event network-vif-deleted-9d925d3a-15af-4795-b206-2c45063bc1f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 09:19:09 compute-0 nova_compute[247704]: 2026-01-31 09:19:09.723 247708 INFO nova.compute.manager [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Took 0.20 seconds to detach 1 volumes for instance.
Jan 31 09:19:09 compute-0 nova_compute[247704]: 2026-01-31 09:19:09.805 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:09 compute-0 nova_compute[247704]: 2026-01-31 09:19:09.805 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:09.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:09 compute-0 ceph-mon[74496]: pgmap v4436: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 129 KiB/s rd, 9.4 KiB/s wr, 67 op/s
Jan 31 09:19:09 compute-0 nova_compute[247704]: 2026-01-31 09:19:09.883 247708 DEBUG oslo_concurrency.processutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:19:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Jan 31 09:19:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4437: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 4.2 KiB/s wr, 62 op/s
Jan 31 09:19:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Jan 31 09:19:10 compute-0 ceph-mon[74496]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Jan 31 09:19:10 compute-0 nova_compute[247704]: 2026-01-31 09:19:10.380 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:19:10 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503540223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:10 compute-0 nova_compute[247704]: 2026-01-31 09:19:10.421 247708 DEBUG oslo_concurrency.processutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:19:10 compute-0 nova_compute[247704]: 2026-01-31 09:19:10.427 247708 DEBUG nova.compute.provider_tree [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:19:10 compute-0 nova_compute[247704]: 2026-01-31 09:19:10.455 247708 DEBUG nova.scheduler.client.report [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:19:10 compute-0 nova_compute[247704]: 2026-01-31 09:19:10.480 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:10 compute-0 nova_compute[247704]: 2026-01-31 09:19:10.509 247708 INFO nova.scheduler.client.report [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Deleted allocations for instance dcc57ec3-f59a-4574-9cd4-389ae92c9e11
Jan 31 09:19:10 compute-0 nova_compute[247704]: 2026-01-31 09:19:10.591 247708 DEBUG oslo_concurrency.lockutils [None req-e472bc7d-09fe-429f-b45d-1d6ca1daf7e0 ecd39871d7fd438f88b36601f25d6eb6 98d10c0290e340a08e9d1726bf0066bf - - default default] Lock "dcc57ec3-f59a-4574-9cd4-389ae92c9e11" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:11.252 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:11.252 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:19:11.252 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:11 compute-0 ceph-mon[74496]: pgmap v4437: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 4.2 KiB/s wr, 62 op/s
Jan 31 09:19:11 compute-0 ceph-mon[74496]: osdmap e432: 3 total, 3 up, 3 in
Jan 31 09:19:11 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3503540223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:11.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:11.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:11 compute-0 sudo[434675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:11 compute-0 sudo[434675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:11 compute-0 sudo[434675]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:12 compute-0 sudo[434700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:12 compute-0 sudo[434700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:12 compute-0 sudo[434700]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4439: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Jan 31 09:19:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:13.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:13.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:13 compute-0 ceph-mon[74496]: pgmap v4439: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Jan 31 09:19:14 compute-0 nova_compute[247704]: 2026-01-31 09:19:14.081 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4440: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Jan 31 09:19:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:15 compute-0 ceph-mon[74496]: pgmap v4440: 305 pgs: 305 active+clean; 201 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Jan 31 09:19:15 compute-0 nova_compute[247704]: 2026-01-31 09:19:15.383 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:15.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:15.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4441: 305 pgs: 305 active+clean; 168 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 31 09:19:17 compute-0 ceph-mon[74496]: pgmap v4441: 305 pgs: 305 active+clean; 168 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 31 09:19:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:17.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:17.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4442: 305 pgs: 305 active+clean; 142 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 31 09:19:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/314481979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:19:18 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/314481979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:19:19 compute-0 nova_compute[247704]: 2026-01-31 09:19:19.083 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:19.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.002000047s ======
Jan 31 09:19:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:19.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Jan 31 09:19:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:20 compute-0 ceph-mon[74496]: pgmap v4442: 305 pgs: 305 active+clean; 142 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 31 09:19:20 compute-0 nova_compute[247704]: 2026-01-31 09:19:20.341 247708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769851145.3389919, dcc57ec3-f59a-4574-9cd4-389ae92c9e11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 09:19:20 compute-0 nova_compute[247704]: 2026-01-31 09:19:20.342 247708 INFO nova.compute.manager [-] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] VM Stopped (Lifecycle Event)
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:19:20
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['backups', 'images', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Jan 31 09:19:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:19:20 compute-0 nova_compute[247704]: 2026-01-31 09:19:20.374 247708 DEBUG nova.compute.manager [None req-2f1be746-492d-401f-af34-88c5cff7ff73 - - - - - -] [instance: dcc57ec3-f59a-4574-9cd4-389ae92c9e11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 09:19:20 compute-0 nova_compute[247704]: 2026-01-31 09:19:20.386 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:19:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:19:21 compute-0 ceph-mon[74496]: pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 31 09:19:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:21.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:21.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 31 09:19:22 compute-0 podman[434730]: 2026-01-31 09:19:22.880387652 +0000 UTC m=+0.047464175 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 09:19:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:23 compute-0 ceph-mon[74496]: pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 31 09:19:24 compute-0 nova_compute[247704]: 2026-01-31 09:19:24.084 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Jan 31 09:19:24 compute-0 nova_compute[247704]: 2026-01-31 09:19:24.276 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:24 compute-0 nova_compute[247704]: 2026-01-31 09:19:24.310 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:25 compute-0 ceph-mon[74496]: pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Jan 31 09:19:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:25 compute-0 nova_compute[247704]: 2026-01-31 09:19:25.388 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:25.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:25.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Jan 31 09:19:26 compute-0 nova_compute[247704]: 2026-01-31 09:19:26.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:26 compute-0 nova_compute[247704]: 2026-01-31 09:19:26.561 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:19:27 compute-0 ceph-mon[74496]: pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Jan 31 09:19:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:27.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:27.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 1 op/s
Jan 31 09:19:28 compute-0 nova_compute[247704]: 2026-01-31 09:19:28.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:29 compute-0 nova_compute[247704]: 2026-01-31 09:19:29.086 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:29 compute-0 ceph-mon[74496]: pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 1 op/s
Jan 31 09:19:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:29.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:29.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 1 op/s
Jan 31 09:19:30 compute-0 nova_compute[247704]: 2026-01-31 09:19:30.391 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:31.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:31 compute-0 nova_compute[247704]: 2026-01-31 09:19:31.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:31 compute-0 ceph-mon[74496]: pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 1 op/s
Jan 31 09:19:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:31.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:32 compute-0 sudo[434756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:32 compute-0 sudo[434756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:32 compute-0 sudo[434756]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:32 compute-0 sudo[434781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:32 compute-0 sudo[434781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:32 compute-0 sudo[434781]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:19:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:33.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:33 compute-0 ceph-mon[74496]: pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 31 09:19:33 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4079400756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:33.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:34 compute-0 nova_compute[247704]: 2026-01-31 09:19:34.087 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:34 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2578349354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:35 compute-0 nova_compute[247704]: 2026-01-31 09:19:35.394 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:35.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:35 compute-0 ceph-mon[74496]: pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:35.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:35 compute-0 podman[434808]: 2026-01-31 09:19:35.913047568 +0000 UTC m=+0.082408047 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:19:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:19:36 compute-0 nova_compute[247704]: 2026-01-31 09:19:36.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:36 compute-0 nova_compute[247704]: 2026-01-31 09:19:36.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:19:36 compute-0 nova_compute[247704]: 2026-01-31 09:19:36.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:19:36 compute-0 nova_compute[247704]: 2026-01-31 09:19:36.578 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:19:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:37.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:37 compute-0 nova_compute[247704]: 2026-01-31 09:19:37.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:37 compute-0 nova_compute[247704]: 2026-01-31 09:19:37.595 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:37 compute-0 nova_compute[247704]: 2026-01-31 09:19:37.596 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:37 compute-0 nova_compute[247704]: 2026-01-31 09:19:37.596 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:37 compute-0 nova_compute[247704]: 2026-01-31 09:19:37.596 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:19:37 compute-0 nova_compute[247704]: 2026-01-31 09:19:37.596 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:19:37 compute-0 ceph-mon[74496]: pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:37.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:37 compute-0 nova_compute[247704]: 2026-01-31 09:19:37.999 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.158 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.159 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4111MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.159 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.160 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.227 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.227 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.253 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:19:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:19:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676889966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.660 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.666 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.687 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.722 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:19:38 compute-0 nova_compute[247704]: 2026-01-31 09:19:38.723 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:19:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/766887324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3719793373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/676889966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:39 compute-0 nova_compute[247704]: 2026-01-31 09:19:39.090 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:39.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:39 compute-0 nova_compute[247704]: 2026-01-31 09:19:39.725 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:39.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:39 compute-0 ceph-mon[74496]: pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2518777605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:19:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:40 compute-0 nova_compute[247704]: 2026-01-31 09:19:40.397 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:41 compute-0 ceph-mon[74496]: pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:41.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:41 compute-0 nova_compute[247704]: 2026-01-31 09:19:41.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:41.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #222. Immutable memtables: 0.
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.364359) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 222
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182364398, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 2130, "num_deletes": 252, "total_data_size": 3915807, "memory_usage": 3976280, "flush_reason": "Manual Compaction"}
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #223: started
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182409353, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 223, "file_size": 3837073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95285, "largest_seqno": 97414, "table_properties": {"data_size": 3827441, "index_size": 6125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19746, "raw_average_key_size": 20, "raw_value_size": 3808261, "raw_average_value_size": 3946, "num_data_blocks": 267, "num_entries": 965, "num_filter_entries": 965, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850963, "oldest_key_time": 1769850963, "file_creation_time": 1769851182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 223, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 45057 microseconds, and 6040 cpu microseconds.
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.409409) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #223: 3837073 bytes OK
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.409435) [db/memtable_list.cc:519] [default] Level-0 commit table #223 started
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.418579) [db/memtable_list.cc:722] [default] Level-0 commit table #223: memtable #1 done
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.418606) EVENT_LOG_v1 {"time_micros": 1769851182418599, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.418629) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 3907233, prev total WAL file size 3907233, number of live WAL files 2.
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000219.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.419315) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [223(3747KB)], [221(12MB)]
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182419347, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [223], "files_L6": [221], "score": -1, "input_data_size": 16718796, "oldest_snapshot_seqno": -1}
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #224: 12616 keys, 14724726 bytes, temperature: kUnknown
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182647703, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 224, "file_size": 14724726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14645626, "index_size": 46475, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31557, "raw_key_size": 334110, "raw_average_key_size": 26, "raw_value_size": 14427912, "raw_average_value_size": 1143, "num_data_blocks": 1760, "num_entries": 12616, "num_filter_entries": 12616, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843071, "oldest_key_time": 0, "file_creation_time": 1769851182, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "68bfdccf-b92f-403c-86dd-82f34f44773c", "db_session_id": "DMSZC7CZYI433L98KJVN", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.647970) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 14724726 bytes
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.650750) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.2 rd, 64.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 13139, records dropped: 523 output_compression: NoCompression
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.650771) EVENT_LOG_v1 {"time_micros": 1769851182650761, "job": 140, "event": "compaction_finished", "compaction_time_micros": 228432, "compaction_time_cpu_micros": 28029, "output_level": 6, "num_output_files": 1, "total_output_size": 14724726, "num_input_records": 13139, "num_output_records": 12616, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000223.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182651377, "job": 140, "event": "table_file_deletion", "file_number": 223}
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851182653095, "job": 140, "event": "table_file_deletion", "file_number": 221}
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.419206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.653186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.653191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.653193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.653195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:42 compute-0 ceph-mon[74496]: rocksdb: (Original Log Time 2026/01/31-09:19:42.653197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 09:19:43 compute-0 ceph-mon[74496]: pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:43.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:43.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:44 compute-0 nova_compute[247704]: 2026-01-31 09:19:44.091 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:45 compute-0 nova_compute[247704]: 2026-01-31 09:19:45.400 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:45.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:45 compute-0 ceph-mon[74496]: pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:45.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:47.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:47 compute-0 nova_compute[247704]: 2026-01-31 09:19:47.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:47 compute-0 ceph-mon[74496]: pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:47.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:49 compute-0 nova_compute[247704]: 2026-01-31 09:19:49.095 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:49 compute-0 sudo[434886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:49 compute-0 sudo[434886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:49 compute-0 sudo[434886]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:49 compute-0 sudo[434911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:19:49 compute-0 sudo[434911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:49 compute-0 sudo[434911]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:49.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:49 compute-0 sudo[434936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:49 compute-0 sudo[434936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:49 compute-0 sudo[434936]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:49 compute-0 nova_compute[247704]: 2026-01-31 09:19:49.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:49 compute-0 sudo[434961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:19:49 compute-0 sudo[434961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:49 compute-0 ceph-mon[74496]: pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:49.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:49 compute-0 sudo[434961]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:19:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:19:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:50 compute-0 nova_compute[247704]: 2026-01-31 09:19:50.403 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 09:19:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 09:19:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:51.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:19:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:19:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:19:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:19:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:19:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:51.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:51 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 64510703-c0cd-407c-9f62-17124385880e does not exist
Jan 31 09:19:51 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 86f5c9ec-ba77-4018-a90c-5f0af8f526d5 does not exist
Jan 31 09:19:51 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 614eb1ab-7241-4250-81cd-82b24c684ec8 does not exist
Jan 31 09:19:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:19:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:19:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:19:51 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:19:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:19:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:19:51 compute-0 ceph-mon[74496]: pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:19:51 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:19:52 compute-0 sudo[435018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:52 compute-0 sudo[435018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:52 compute-0 sudo[435018]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:52 compute-0 sudo[435043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:19:52 compute-0 sudo[435043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:52 compute-0 sudo[435043]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:52 compute-0 sudo[435068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:52 compute-0 sudo[435068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:52 compute-0 sudo[435068]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:52 compute-0 sudo[435093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:19:52 compute-0 sudo[435093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:52 compute-0 sudo[435118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:52 compute-0 sudo[435118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:52 compute-0 sudo[435118]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:52 compute-0 sudo[435143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:52 compute-0 sudo[435143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:52 compute-0 sudo[435143]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:52 compute-0 podman[435208]: 2026-01-31 09:19:52.441476923 +0000 UTC m=+0.019872690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:19:52 compute-0 podman[435208]: 2026-01-31 09:19:52.592507413 +0000 UTC m=+0.170903090 container create cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:19:52 compute-0 systemd[1]: Started libpod-conmon-cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254.scope.
Jan 31 09:19:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:19:53 compute-0 podman[435208]: 2026-01-31 09:19:53.084302506 +0000 UTC m=+0.662698243 container init cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_aryabhata, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 09:19:53 compute-0 podman[435208]: 2026-01-31 09:19:53.090954486 +0000 UTC m=+0.669350173 container start cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 09:19:53 compute-0 elegant_aryabhata[435225]: 167 167
Jan 31 09:19:53 compute-0 systemd[1]: libpod-cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254.scope: Deactivated successfully.
Jan 31 09:19:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:19:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:19:53 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:19:53 compute-0 ceph-mon[74496]: pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:53 compute-0 podman[435208]: 2026-01-31 09:19:53.153603136 +0000 UTC m=+0.731998823 container attach cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 09:19:53 compute-0 podman[435208]: 2026-01-31 09:19:53.155236605 +0000 UTC m=+0.733632322 container died cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_aryabhata, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 09:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4943e31040fa966fca1901d529f17326bb0bd99dc9d60fa8bda254bfbd1dc7b-merged.mount: Deactivated successfully.
Jan 31 09:19:53 compute-0 podman[435208]: 2026-01-31 09:19:53.386108519 +0000 UTC m=+0.964504206 container remove cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:19:53 compute-0 systemd[1]: libpod-conmon-cd7138465230fda9427f6cf16e8432ce04822de648e93bfc9526b58ffd883254.scope: Deactivated successfully.
Jan 31 09:19:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:53 compute-0 podman[435260]: 2026-01-31 09:19:53.519022863 +0000 UTC m=+0.054689199 container create f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 09:19:53 compute-0 podman[435230]: 2026-01-31 09:19:53.538546593 +0000 UTC m=+0.414343107 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 09:19:53 compute-0 systemd[1]: Started libpod-conmon-f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6.scope.
Jan 31 09:19:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71a9402982868a4c292e01f47426e82ed130d044b7a326fb01c8e9919c80fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71a9402982868a4c292e01f47426e82ed130d044b7a326fb01c8e9919c80fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:53 compute-0 podman[435260]: 2026-01-31 09:19:53.492425782 +0000 UTC m=+0.028092138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71a9402982868a4c292e01f47426e82ed130d044b7a326fb01c8e9919c80fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71a9402982868a4c292e01f47426e82ed130d044b7a326fb01c8e9919c80fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71a9402982868a4c292e01f47426e82ed130d044b7a326fb01c8e9919c80fd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:53 compute-0 podman[435260]: 2026-01-31 09:19:53.61477772 +0000 UTC m=+0.150444076 container init f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 09:19:53 compute-0 podman[435260]: 2026-01-31 09:19:53.621122993 +0000 UTC m=+0.156789329 container start f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 09:19:53 compute-0 podman[435260]: 2026-01-31 09:19:53.652858439 +0000 UTC m=+0.188524795 container attach f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 09:19:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:53.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:54 compute-0 nova_compute[247704]: 2026-01-31 09:19:54.095 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:54 compute-0 dreamy_johnson[435284]: --> passed data devices: 0 physical, 1 LVM
Jan 31 09:19:54 compute-0 dreamy_johnson[435284]: --> relative data size: 1.0
Jan 31 09:19:54 compute-0 dreamy_johnson[435284]: --> All data devices are unavailable
Jan 31 09:19:54 compute-0 systemd[1]: libpod-f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6.scope: Deactivated successfully.
Jan 31 09:19:54 compute-0 podman[435260]: 2026-01-31 09:19:54.349711942 +0000 UTC m=+0.885378278 container died f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 09:19:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2690524856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:19:54 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/2690524856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea71a9402982868a4c292e01f47426e82ed130d044b7a326fb01c8e9919c80fd-merged.mount: Deactivated successfully.
Jan 31 09:19:54 compute-0 podman[435260]: 2026-01-31 09:19:54.468482985 +0000 UTC m=+1.004149321 container remove f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 09:19:54 compute-0 systemd[1]: libpod-conmon-f5657e321f545a2cd247d0f9c4ba1e94c6ba180eedf73158697860126bc7c0f6.scope: Deactivated successfully.
Jan 31 09:19:54 compute-0 sudo[435093]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:54 compute-0 sudo[435314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:54 compute-0 sudo[435314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:54 compute-0 sudo[435314]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:54 compute-0 sudo[435339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:19:54 compute-0 sudo[435339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:54 compute-0 sudo[435339]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:54 compute-0 sudo[435364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:54 compute-0 sudo[435364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:54 compute-0 sudo[435364]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:54 compute-0 sudo[435389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- lvm list --format json
Jan 31 09:19:54 compute-0 sudo[435389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:54 compute-0 podman[435451]: 2026-01-31 09:19:54.982136705 +0000 UTC m=+0.036616734 container create a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_knuth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:19:55 compute-0 systemd[1]: Started libpod-conmon-a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273.scope.
Jan 31 09:19:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:19:55 compute-0 podman[435451]: 2026-01-31 09:19:54.964429688 +0000 UTC m=+0.018909737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:19:55 compute-0 podman[435451]: 2026-01-31 09:19:55.074789988 +0000 UTC m=+0.129270027 container init a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_knuth, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 09:19:55 compute-0 podman[435451]: 2026-01-31 09:19:55.081040329 +0000 UTC m=+0.135520358 container start a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 09:19:55 compute-0 compassionate_knuth[435468]: 167 167
Jan 31 09:19:55 compute-0 systemd[1]: libpod-a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273.scope: Deactivated successfully.
Jan 31 09:19:55 compute-0 podman[435451]: 2026-01-31 09:19:55.08859652 +0000 UTC m=+0.143076569 container attach a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 09:19:55 compute-0 podman[435451]: 2026-01-31 09:19:55.088985099 +0000 UTC m=+0.143465158 container died a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:19:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d0ad5e962bc53375875fa94d0698d3338ea445c46fe9d9a1187eaf5d9c97c2a-merged.mount: Deactivated successfully.
Jan 31 09:19:55 compute-0 podman[435451]: 2026-01-31 09:19:55.172757179 +0000 UTC m=+0.227237208 container remove a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_knuth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 09:19:55 compute-0 systemd[1]: libpod-conmon-a87da1300ac754643ae67ec63842851635a897cfde69282cf480b63e42a49273.scope: Deactivated successfully.
Jan 31 09:19:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:19:55 compute-0 podman[435495]: 2026-01-31 09:19:55.320336126 +0000 UTC m=+0.055253313 container create b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 09:19:55 compute-0 systemd[1]: Started libpod-conmon-b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11.scope.
Jan 31 09:19:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ab51c33aba2bc39109fa0eebb0c68b4ccbe366e1236611cc88158c257dd283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:55 compute-0 podman[435495]: 2026-01-31 09:19:55.287880703 +0000 UTC m=+0.022779000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ab51c33aba2bc39109fa0eebb0c68b4ccbe366e1236611cc88158c257dd283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ab51c33aba2bc39109fa0eebb0c68b4ccbe366e1236611cc88158c257dd283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ab51c33aba2bc39109fa0eebb0c68b4ccbe366e1236611cc88158c257dd283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:55 compute-0 nova_compute[247704]: 2026-01-31 09:19:55.406 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:55 compute-0 podman[435495]: 2026-01-31 09:19:55.444088768 +0000 UTC m=+0.178987035 container init b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_noyce, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 09:19:55 compute-0 podman[435495]: 2026-01-31 09:19:55.450695457 +0000 UTC m=+0.185593724 container start b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:19:55 compute-0 podman[435495]: 2026-01-31 09:19:55.460262638 +0000 UTC m=+0.195160905 container attach b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 09:19:55 compute-0 ceph-mon[74496]: pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:55.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:55 compute-0 nova_compute[247704]: 2026-01-31 09:19:55.555 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:19:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:55.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]: {
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:     "0": [
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:         {
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "devices": [
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "/dev/loop3"
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             ],
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "lv_name": "ceph_lv0",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "lv_size": "7511998464",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=f70fcd2a-dcb4-5f89-a4ba-79a09959083b,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d19aa227-e399-4341-9824-b20a6ddbc903,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "lv_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "name": "ceph_lv0",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "tags": {
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.block_uuid": "Yax8Ec-hteU-k4Ux-8Rk3-1oCh-XAAN-r8dowq",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.cluster_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.cluster_name": "ceph",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.crush_device_class": "",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.encrypted": "0",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.osd_fsid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.osd_id": "0",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.type": "block",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:                 "ceph.vdo": "0"
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             },
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "type": "block",
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:             "vg_name": "ceph_vg0"
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:         }
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]:     ]
Jan 31 09:19:56 compute-0 inspiring_noyce[435511]: }
Jan 31 09:19:56 compute-0 systemd[1]: libpod-b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11.scope: Deactivated successfully.
Jan 31 09:19:56 compute-0 podman[435495]: 2026-01-31 09:19:56.220319445 +0000 UTC m=+0.955217712 container died b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 09:19:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-07ab51c33aba2bc39109fa0eebb0c68b4ccbe366e1236611cc88158c257dd283-merged.mount: Deactivated successfully.
Jan 31 09:19:56 compute-0 podman[435495]: 2026-01-31 09:19:56.403347677 +0000 UTC m=+1.138245944 container remove b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 09:19:56 compute-0 systemd[1]: libpod-conmon-b9f28d68659c0b73a27b9ed4cff86db3f43029102926433ea9982faaa4430a11.scope: Deactivated successfully.
Jan 31 09:19:56 compute-0 sudo[435389]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:56 compute-0 sudo[435534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:56 compute-0 sudo[435534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:56 compute-0 sudo[435534]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:56 compute-0 sudo[435559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:19:56 compute-0 sudo[435559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:56 compute-0 sudo[435559]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:56 compute-0 sudo[435584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:56 compute-0 sudo[435584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:56 compute-0 sudo[435584]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:56 compute-0 sudo[435609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b -- raw list --format json
Jan 31 09:19:56 compute-0 sudo[435609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:56 compute-0 podman[435673]: 2026-01-31 09:19:56.952220135 +0000 UTC m=+0.064421214 container create d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kalam, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 09:19:57 compute-0 podman[435673]: 2026-01-31 09:19:56.907314763 +0000 UTC m=+0.019515862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:19:57 compute-0 systemd[1]: Started libpod-conmon-d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730.scope.
Jan 31 09:19:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:19:57 compute-0 podman[435673]: 2026-01-31 09:19:57.116466413 +0000 UTC m=+0.228667492 container init d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kalam, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 09:19:57 compute-0 podman[435673]: 2026-01-31 09:19:57.124795744 +0000 UTC m=+0.236996813 container start d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 09:19:57 compute-0 laughing_kalam[435689]: 167 167
Jan 31 09:19:57 compute-0 systemd[1]: libpod-d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730.scope: Deactivated successfully.
Jan 31 09:19:57 compute-0 conmon[435689]: conmon d005b118107bbfd520b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730.scope/container/memory.events
Jan 31 09:19:57 compute-0 podman[435673]: 2026-01-31 09:19:57.183852198 +0000 UTC m=+0.296053277 container attach d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kalam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 09:19:57 compute-0 podman[435673]: 2026-01-31 09:19:57.184650647 +0000 UTC m=+0.296851726 container died d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:19:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-def2700dfdb653178ba9c01b42cd3dae0b36ecea259b141c3cb1ff4e8dec5b43-merged.mount: Deactivated successfully.
Jan 31 09:19:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:19:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:19:57 compute-0 podman[435673]: 2026-01-31 09:19:57.547317617 +0000 UTC m=+0.659518706 container remove d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:19:57 compute-0 systemd[1]: libpod-conmon-d005b118107bbfd520b54e1d99ea4ec90613f8c85c08ca3b004396b81139b730.scope: Deactivated successfully.
Jan 31 09:19:57 compute-0 ceph-mon[74496]: pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:57 compute-0 podman[435714]: 2026-01-31 09:19:57.701264747 +0000 UTC m=+0.059774282 container create c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 09:19:57 compute-0 podman[435714]: 2026-01-31 09:19:57.665339701 +0000 UTC m=+0.023849266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:19:57 compute-0 systemd[1]: Started libpod-conmon-c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a.scope.
Jan 31 09:19:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:19:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3ee34e9610789562403cc31f2d2554c09b489dbfde1bacdd0601e44d9666ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3ee34e9610789562403cc31f2d2554c09b489dbfde1bacdd0601e44d9666ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3ee34e9610789562403cc31f2d2554c09b489dbfde1bacdd0601e44d9666ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a3ee34e9610789562403cc31f2d2554c09b489dbfde1bacdd0601e44d9666ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 09:19:57 compute-0 podman[435714]: 2026-01-31 09:19:57.887857915 +0000 UTC m=+0.246367460 container init c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 09:19:57 compute-0 podman[435714]: 2026-01-31 09:19:57.89428748 +0000 UTC m=+0.252797005 container start c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 09:19:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:57.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:19:57 compute-0 podman[435714]: 2026-01-31 09:19:57.920129142 +0000 UTC m=+0.278638657 container attach c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 09:19:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:58 compute-0 elated_shockley[435730]: {
Jan 31 09:19:58 compute-0 elated_shockley[435730]:     "d19aa227-e399-4341-9824-b20a6ddbc903": {
Jan 31 09:19:58 compute-0 elated_shockley[435730]:         "ceph_fsid": "f70fcd2a-dcb4-5f89-a4ba-79a09959083b",
Jan 31 09:19:58 compute-0 elated_shockley[435730]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 09:19:58 compute-0 elated_shockley[435730]:         "osd_id": 0,
Jan 31 09:19:58 compute-0 elated_shockley[435730]:         "osd_uuid": "d19aa227-e399-4341-9824-b20a6ddbc903",
Jan 31 09:19:58 compute-0 elated_shockley[435730]:         "type": "bluestore"
Jan 31 09:19:58 compute-0 elated_shockley[435730]:     }
Jan 31 09:19:58 compute-0 elated_shockley[435730]: }
Jan 31 09:19:58 compute-0 systemd[1]: libpod-c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a.scope: Deactivated successfully.
Jan 31 09:19:58 compute-0 podman[435714]: 2026-01-31 09:19:58.70173798 +0000 UTC m=+1.060247495 container died c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:19:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a3ee34e9610789562403cc31f2d2554c09b489dbfde1bacdd0601e44d9666ee-merged.mount: Deactivated successfully.
Jan 31 09:19:58 compute-0 podman[435714]: 2026-01-31 09:19:58.84615797 +0000 UTC m=+1.204667495 container remove c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:19:58 compute-0 systemd[1]: libpod-conmon-c924921fd854d479adef941d1a74d3c3778483fc32398b177eef37332c532d3a.scope: Deactivated successfully.
Jan 31 09:19:58 compute-0 sudo[435609]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 09:19:58 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 09:19:59 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 02123ba1-935a-4f22-8499-801a7d08cb01 does not exist
Jan 31 09:19:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev b92351b5-1d26-471b-b6ba-e8e50b712fcf does not exist
Jan 31 09:19:59 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ecfc7d0f-8a65-4de8-8f0f-d2ccf4bb2e0d does not exist
Jan 31 09:19:59 compute-0 nova_compute[247704]: 2026-01-31 09:19:59.098 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:19:59 compute-0 sudo[435763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:19:59 compute-0 sudo[435763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:59 compute-0 sudo[435763]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:59 compute-0 sudo[435788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 09:19:59 compute-0 sudo[435788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:19:59 compute-0 sudo[435788]: pam_unix(sudo:session): session closed for user root
Jan 31 09:19:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:19:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:19:59 compute-0 ceph-mon[74496]: pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:19:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:59 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:19:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:19:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:19:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:59.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:00 compute-0 ceph-mon[74496]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 09:20:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:00 compute-0 nova_compute[247704]: 2026-01-31 09:20:00.410 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:00 compute-0 ceph-mon[74496]: overall HEALTH_OK
Jan 31 09:20:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:01.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:01 compute-0 ovn_controller[149457]: 2026-01-31T09:20:01Z|00924|memory_trim|INFO|Detected inactivity (last active 30027 ms ago): trimming memory
Jan 31 09:20:01 compute-0 ceph-mon[74496]: pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:01.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:03 compute-0 ceph-mon[74496]: pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:03 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:03 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:03 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:03.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:04 compute-0 nova_compute[247704]: 2026-01-31 09:20:04.100 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:04 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:05 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:05 compute-0 nova_compute[247704]: 2026-01-31 09:20:05.413 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:05 compute-0 ceph-mon[74496]: pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:05 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:05 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:05 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:05.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:06 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:06 compute-0 podman[435818]: 2026-01-31 09:20:06.908121449 +0000 UTC m=+0.082583601 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 09:20:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:07.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:07 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:07 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:07 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:07.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:07 compute-0 ceph-mon[74496]: pgmap v4466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:08 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:09 compute-0 ceph-mon[74496]: pgmap v4467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:09 compute-0 nova_compute[247704]: 2026-01-31 09:20:09.101 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:09.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:09 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:09 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:09 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:09.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:10 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:10 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4468: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:10 compute-0 nova_compute[247704]: 2026-01-31 09:20:10.415 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:20:11.253 160028 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:20:11.253 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:20:11 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:20:11.253 160028 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:20:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:11 compute-0 ceph-mon[74496]: pgmap v4468: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:11 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:11 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:11 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:11.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:12 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4469: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:12 compute-0 sudo[435850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:12 compute-0 sudo[435850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:12 compute-0 sudo[435850]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:12 compute-0 sudo[435875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:12 compute-0 sudo[435875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:12 compute-0 sudo[435875]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:13.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:13 compute-0 ceph-mon[74496]: pgmap v4469: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:13 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:13 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:13 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:13.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:14 compute-0 nova_compute[247704]: 2026-01-31 09:20:14.103 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:14 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4470: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:15 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:15 compute-0 nova_compute[247704]: 2026-01-31 09:20:15.418 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:15.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:15 compute-0 ceph-mon[74496]: pgmap v4470: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:15 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:15 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:15 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:15.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:16 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4471: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:17.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:17 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:17 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:17 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:17.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:17 compute-0 ceph-mon[74496]: pgmap v4471: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:18 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:18 compute-0 ceph-mon[74496]: pgmap v4472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:19 compute-0 nova_compute[247704]: 2026-01-31 09:20:19.105 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:19.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:19 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:19 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:19 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:19.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:20:20 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4473: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Optimize plan auto_2026-01-31_09:20:20
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] do_upmap
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'backups', 'vms', 'default.rgw.meta', '.mgr', 'images']
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: [balancer INFO root] prepared 0/10 changes
Jan 31 09:20:20 compute-0 nova_compute[247704]: 2026-01-31 09:20:20.421 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:20 compute-0 ceph-mgr[74791]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3465938080
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 09:20:21 compute-0 ceph-mgr[74791]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 09:20:21 compute-0 ceph-mon[74496]: pgmap v4473: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:21.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:21 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:21 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:21 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:21.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:22 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:23 compute-0 ceph-mon[74496]: pgmap v4474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:23.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:23 compute-0 podman[435906]: 2026-01-31 09:20:23.888957511 +0000 UTC m=+0.060317655 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 09:20:23 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:23 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:23 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:23.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:24 compute-0 nova_compute[247704]: 2026-01-31 09:20:24.107 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:24 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4475: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:25 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:25 compute-0 nova_compute[247704]: 2026-01-31 09:20:25.424 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:25 compute-0 ceph-mon[74496]: pgmap v4475: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:25.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:25 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:25 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:25 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:26 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4476: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:26 compute-0 nova_compute[247704]: 2026-01-31 09:20:26.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:26 compute-0 nova_compute[247704]: 2026-01-31 09:20:26.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 09:20:27 compute-0 ceph-mon[74496]: pgmap v4476: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:27 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:27 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:27 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:28 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:29 compute-0 nova_compute[247704]: 2026-01-31 09:20:29.108 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:29.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:29 compute-0 ceph-mon[74496]: pgmap v4477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:29 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:29 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:29 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:29.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:30 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:30 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:30 compute-0 nova_compute[247704]: 2026-01-31 09:20:30.427 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:30 compute-0 nova_compute[247704]: 2026-01-31 09:20:30.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:31.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:31 compute-0 ceph-mon[74496]: pgmap v4478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:31 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:31 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:31 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:31.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:32 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:32 compute-0 sudo[435929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:32 compute-0 sudo[435929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:32 compute-0 sudo[435929]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:32 compute-0 sudo[435954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:32 compute-0 sudo[435954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:32 compute-0 sudo[435954]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:32 compute-0 nova_compute[247704]: 2026-01-31 09:20:32.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:33.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:33 compute-0 ceph-mon[74496]: pgmap v4479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:33 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:33 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:33 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:33.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:34 compute-0 nova_compute[247704]: 2026-01-31 09:20:34.110 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:34 compute-0 sshd-session[435980]: Accepted publickey for zuul from 192.168.122.10 port 60456 ssh2: ECDSA SHA256:/XjW4njRnFkaMo3aYOSKPaOEQq6UYC1L631cF4V0Rd4
Jan 31 09:20:34 compute-0 systemd-logind[816]: New session 67 of user zuul.
Jan 31 09:20:34 compute-0 systemd[1]: Started Session 67 of User zuul.
Jan 31 09:20:34 compute-0 sshd-session[435980]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 09:20:34 compute-0 sudo[435984]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 09:20:34 compute-0 sudo[435984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 09:20:34 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:35 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:35 compute-0 nova_compute[247704]: 2026-01-31 09:20:35.430 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:35.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:35 compute-0 ceph-mon[74496]: pgmap v4480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:35 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2450555092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:35 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:35 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:35 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:35.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53033 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39042 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021619599734785037 of space, bias 1.0, pg target 0.6485879920435511 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 09:20:36 compute-0 nova_compute[247704]: 2026-01-31 09:20:36.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:36 compute-0 nova_compute[247704]: 2026-01-31 09:20:36.562 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 09:20:36 compute-0 nova_compute[247704]: 2026-01-31 09:20:36.563 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53042 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:36 compute-0 nova_compute[247704]: 2026-01-31 09:20:36.690 247708 DEBUG nova.compute.manager [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 09:20:36 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39048 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:37 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2351824196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:37 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 09:20:37 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378316414' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:20:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:37.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:37 compute-0 nova_compute[247704]: 2026-01-31 09:20:37.563 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:37 compute-0 nova_compute[247704]: 2026-01-31 09:20:37.693 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:20:37 compute-0 nova_compute[247704]: 2026-01-31 09:20:37.693 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:20:37 compute-0 nova_compute[247704]: 2026-01-31 09:20:37.693 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:20:37 compute-0 nova_compute[247704]: 2026-01-31 09:20:37.694 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 09:20:37 compute-0 nova_compute[247704]: 2026-01-31 09:20:37.694 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:20:37 compute-0 podman[436239]: 2026-01-31 09:20:37.759373327 +0000 UTC m=+0.097513941 container health_status 1e25cafb71c13c532a40aebd7c2342bf479b9dc985b78d8189396343297af238 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 09:20:37 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:37 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:37 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:37.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:38 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:20:38 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797563121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.196 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:20:38 compute-0 ceph-mon[74496]: from='client.53033 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-0 ceph-mon[74496]: pgmap v4481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:38 compute-0 ceph-mon[74496]: from='client.39042 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-0 ceph-mon[74496]: from='client.53042 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-0 ceph-mon[74496]: from='client.39048 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1860320840' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:20:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/378316414' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:20:38 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2864978510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:38 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4482: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.347 247708 WARNING nova.virt.libvirt.driver [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.348 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3937MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.348 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.348 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.668 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.669 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.836 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing inventories for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 09:20:38 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47830 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.951 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating ProviderTree inventory for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.952 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Updating inventory in ProviderTree for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 09:20:38 compute-0 nova_compute[247704]: 2026-01-31 09:20:38.974 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing aggregate associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 09:20:39 compute-0 nova_compute[247704]: 2026-01-31 09:20:39.047 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Refreshing trait associations for resource provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 09:20:39 compute-0 nova_compute[247704]: 2026-01-31 09:20:39.080 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:20:39 compute-0 nova_compute[247704]: 2026-01-31 09:20:39.111 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:39 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47842 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:39 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 09:20:39 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841653288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:39 compute-0 nova_compute[247704]: 2026-01-31 09:20:39.514 247708 DEBUG oslo_concurrency.processutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:20:39 compute-0 nova_compute[247704]: 2026-01-31 09:20:39.520 247708 DEBUG nova.compute.provider_tree [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed in ProviderTree for provider: 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 09:20:39 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/797563121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:39 compute-0 ceph-mon[74496]: pgmap v4482: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:39 compute-0 ceph-mon[74496]: from='client.47830 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:39 compute-0 nova_compute[247704]: 2026-01-31 09:20:39.626 247708 DEBUG nova.scheduler.client.report [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Inventory has not changed for provider 39dae8fb-a3d6-4f01-ab04-67eb06f4b735 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 09:20:39 compute-0 nova_compute[247704]: 2026-01-31 09:20:39.628 247708 DEBUG nova.compute.resource_tracker [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 09:20:39 compute-0 nova_compute[247704]: 2026-01-31 09:20:39.629 247708 DEBUG oslo_concurrency.lockutils [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 09:20:39 compute-0 ovs-vsctl[436342]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 09:20:39 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:39 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:39 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:39.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:40 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:40 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4483: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:40 compute-0 nova_compute[247704]: 2026-01-31 09:20:40.432 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3307499885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:40 compute-0 ceph-mon[74496]: from='client.47842 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/841653288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 09:20:40 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2043660081' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 09:20:40 compute-0 virtqemud[247621]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 09:20:40 compute-0 virtqemud[247621]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 09:20:40 compute-0 virtqemud[247621]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 09:20:41 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: cache status {prefix=cache status} (starting...)
Jan 31 09:20:41 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:41 compute-0 lvm[436688]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 09:20:41 compute-0 lvm[436688]: VG ceph_vg0 finished
Jan 31 09:20:41 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: client ls {prefix=client ls} (starting...)
Jan 31 09:20:41 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:41 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53066 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:41.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:41 compute-0 ceph-mon[74496]: pgmap v4483: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:41 compute-0 nova_compute[247704]: 2026-01-31 09:20:41.628 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:41 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:41 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53084 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:41 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 09:20:41 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:41 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:41 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:41 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:41.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 09:20:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860847577' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:42 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39087 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:42 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4484: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:20:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357587613' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:42 compute-0 nova_compute[247704]: 2026-01-31 09:20:42.562 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:42 compute-0 ceph-mon[74496]: from='client.53066 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mon[74496]: from='client.39075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mon[74496]: from='client.53084 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mon[74496]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3750243798' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1860847577' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/265012548' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3357587613' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:42 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53111 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:42.678+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:42 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:42 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 09:20:42 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3373967391' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 09:20:42 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:42 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39117 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:42 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:42.937+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:42 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:43 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 09:20:43 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:43 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: ops {prefix=ops} (starting...)
Jan 31 09:20:43 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 09:20:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2560769704' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 09:20:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3084447510' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:43.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:43 compute-0 ceph-mon[74496]: from='client.39087 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: pgmap v4484: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:43 compute-0 ceph-mon[74496]: from='client.53111 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1582570693' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3373967391' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/557018449' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1224619771' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2560769704' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3084447510' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39135 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 09:20:43 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061422320' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47872 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53162 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:43 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: session ls {prefix=session ls} (starting...)
Jan 31 09:20:43 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui Can't run that command on an inactive MDS!
Jan 31 09:20:43 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:43 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:43 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:43.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:44 compute-0 ceph-mds[94769]: mds.cephfs.compute-0.voybui asok_command: status {prefix=status} (starting...)
Jan 31 09:20:44 compute-0 nova_compute[247704]: 2026-01-31 09:20:44.115 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:44 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53177 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47887 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4485: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 09:20:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 09:20:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4282404701' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 09:20:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 09:20:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534305306' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 09:20:44 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691839288' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.39117 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.53147 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3586976088' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.39135 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4061422320' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.47872 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.53162 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/212560233' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4051288875' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4282404701' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2415556615' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3362579220' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1534305306' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/4217219295' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47908 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:44 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:44 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:44.933+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 09:20:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2360045533' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53213 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:20:45 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:45.166+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:20:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 09:20:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133334762' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:45 compute-0 nova_compute[247704]: 2026-01-31 09:20:45.435 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:45 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39201 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:45 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:45.496+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:20:45 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:20:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 09:20:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660041370' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47947 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.53177 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.47887 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: pgmap v4485: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1256069273' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2691839288' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3537338723' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/662583840' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2360045533' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1133334762' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3556641598' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2502158679' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3302768494' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3709068946' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/41175167' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2660041370' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:20:45 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 09:20:45 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1562770847' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:20:45 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:45 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:45 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:45.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:46 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.47959 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 09:20:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171911681' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4486: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:46 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53255 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 09:20:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2189887801' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39231 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 09:20:46 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53267 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39246 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.47908 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.53213 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.39201 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.47947 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2386080269' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1562770847' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1149910150' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/480645662' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4171911681' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2189887801' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/361904903' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/358223706' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 09:20:46 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3387722562' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:47 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53291 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:47 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:47 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48001 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:47 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:47.351+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:20:47 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 09:20:47 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 09:20:47 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2480106286' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:21.039534+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:22.039686+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:23.039845+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:24.040053+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a71ea000/0x0/0x1bfc00000, data 0x3064697/0x325f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:25.040310+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230572 data_alloc: 218103808 data_used: 19140608
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 388636672 unmapped: 48349184 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc3b1c00 session 0x55f9d97b0780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc8c03c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:26.040458+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.470623016s of 25.994760513s, submitted: 164
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc6fe3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:27.040608+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:28.040812+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:29.040978+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:30.041142+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:31.041291+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:32.041461+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:33.041620+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:34.041825+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:35.042041+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:36.042168+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:37.042353+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:38.042558+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:39.042791+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:40.042989+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:41.043147+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:42.043333+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:43.043504+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:44.043675+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:45.043909+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:46.044151+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:47.044346+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:48.044533+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:49.044716+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387104768 unmapped: 49881088 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:50.044912+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:51.045068+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:52.045283+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:53.045425+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:54.045573+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:55.045750+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:56.045938+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:57.046160+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 49872896 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:58.046351+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:59.046527+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:00.046711+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:01.046921+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:02.047160+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:03.047385+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 49864704 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:04.047677+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387129344 unmapped: 49856512 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:05.048142+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387129344 unmapped: 49856512 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:06.048613+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:07.049153+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:08.049415+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:09.049752+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:10.050224+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:11.050414+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:12.050669+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 49848320 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:13.050939+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 49840128 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:14.051300+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 49840128 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:15.051707+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7e76000/0x0/0x1bfc00000, data 0x23fe687/0x25f8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4079321 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.313423157s of 49.348728180s, submitted: 8
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387153920 unmapped: 49831936 heap: 436985856 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:16.051853+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9d9cf0780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc774a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb71400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb71400 session 0x55f9dc6ff2c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc74be00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc46f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:17.051989+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:18.052155+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:19.052346+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:20.052492+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4141934 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:21.052691+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:22.052963+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:23.053310+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:24.053641+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:25.054021+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4141934 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:26.054222+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:27.054388+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:28.054702+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:29.055044+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387350528 unmapped: 53313536 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:30.055263+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4141934 data_alloc: 218103808 data_used: 11055104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 53305344 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:31.055507+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 53305344 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:32.055679+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 53305344 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:33.055925+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387358720 unmapped: 53305344 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:34.056101+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.576404572s of 18.677923203s, submitted: 25
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc2c21e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387366912 unmapped: 53297152 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb77800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:35.056956+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4142447 data_alloc: 218103808 data_used: 11059200
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387366912 unmapped: 53297152 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:36.057219+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387629056 unmapped: 53035008 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:37.057369+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:38.057568+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:39.057935+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:40.058147+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186127 data_alloc: 218103808 data_used: 16252928
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:41.058316+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:42.058472+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:43.058675+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:44.058844+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:45.059146+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4186127 data_alloc: 218103808 data_used: 16252928
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:46.059408+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:47.059542+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 387981312 unmapped: 52682752 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.483976364s of 12.510684013s, submitted: 7
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a7767000/0x0/0x1bfc00000, data 0x2b0d687/0x2d07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1578f9c7), peers [1,2] op hist [1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:48.059724+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390914048 unmapped: 49750016 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:49.059883+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 49586176 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:50.060020+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 49586176 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a62d7000/0x0/0x1bfc00000, data 0x2de6687/0x2fe0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231003 data_alloc: 218103808 data_used: 16994304
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:51.060251+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 391077888 unmapped: 49586176 heap: 440664064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd103c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9deafac00 session 0x55f9dcd105a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9deafac00 session 0x55f9dcd11c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9d97b0d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9da73af00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dba3f860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc9f52c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:52.060494+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390512640 unmapped: 53829632 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd661e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dab14000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:53.060683+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:54.060870+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:55.061164+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292355 data_alloc: 218103808 data_used: 17002496
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:56.061506+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:57.061678+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:58.061846+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:59.062133+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:00.062364+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292355 data_alloc: 218103808 data_used: 17002496
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:01.062564+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:02.062736+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:03.062985+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:04.063173+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.550928116s of 17.227769852s, submitted: 100
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9da6f2780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:05.063350+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292580 data_alloc: 218103808 data_used: 17006592
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:06.063564+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:07.063779+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: mgrc ms_handle_reset ms_handle_reset con 0x55f9dcad4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3465938080
Jan 31 09:20:47 compute-0 ceph-osd[84816]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3465938080,v1:192.168.122.100:6801/3465938080]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: get_auth_request con 0x55f9dc3b1c00 auth_method 0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: mgrc handle_mgr_configure stats_period=5
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:08.064009+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:09.064166+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:10.064411+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292580 data_alloc: 218103808 data_used: 17006592
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:11.064724+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9db87bc00 session 0x55f9dcca5e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0186800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:12.064901+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:13.065034+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 390520832 unmapped: 53821440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:14.065179+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392118272 unmapped: 52224000 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:15.065465+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4347300 data_alloc: 234881024 data_used: 24342528
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:16.065631+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8d000/0x0/0x1bfc00000, data 0x36466e9/0x3841000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:17.065812+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:18.066017+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.694046974s of 14.721753120s, submitted: 7
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:19.066174+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8b000/0x0/0x1bfc00000, data 0x36476e9/0x3842000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:20.066364+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4347608 data_alloc: 234881024 data_used: 24342528
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:21.066543+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:22.066697+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:23.066902+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5a8b000/0x0/0x1bfc00000, data 0x36476e9/0x3842000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:24.067161+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393625600 unmapped: 50716672 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:25.067377+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395362304 unmapped: 48979968 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:26.067589+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4389566 data_alloc: 234881024 data_used: 24653824
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395370496 unmapped: 48971776 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:27.067730+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 48930816 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:28.067887+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcad5c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcad5c00 session 0x55f9d97f85a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9835000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9835000 session 0x55f9daa22960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dcd10b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395411456 unmapped: 48930816 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc501e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcbb10e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a559a000/0x0/0x1bfc00000, data 0x3b386e9/0x3d33000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc91e960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dbb77800 session 0x55f9dcae21e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:29.068121+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395722752 unmapped: 48619520 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.719305992s of 10.298625946s, submitted: 70
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:30.068375+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc5df680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395747328 unmapped: 48594944 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:31.068612+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291518 data_alloc: 218103808 data_used: 18628608
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395747328 unmapped: 48594944 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:32.068761+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395747328 unmapped: 48594944 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:33.069017+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:34.069263+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:35.069477+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:36.069631+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291518 data_alloc: 218103808 data_used: 18628608
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:37.069824+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:38.070012+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:39.070137+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:40.070329+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:41.070528+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4291518 data_alloc: 218103808 data_used: 18628608
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a5aaf000/0x0/0x1bfc00000, data 0x36246e9/0x381f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:42.070705+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395755520 unmapped: 48586752 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:43.070838+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.467213631s of 13.787834167s, submitted: 37
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dccf5860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9deafac00 session 0x55f9dc684f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395829248 unmapped: 48513024 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 ms_handle_reset con 0x55f9dc848400 session 0x55f9d97f61e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:44.071005+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392740864 unmapped: 51601408 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:45.071147+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:46.071282+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a65b1000/0x0/0x1bfc00000, data 0x28d3687/0x2acd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4171022 data_alloc: 218103808 data_used: 16035840
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:47.071418+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:48.071537+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:49.071684+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:50.071862+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a65b1000/0x0/0x1bfc00000, data 0x28d3687/0x2acd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1692f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:51.072045+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4171022 data_alloc: 218103808 data_used: 16035840
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:52.072127+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 392749056 unmapped: 51593216 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:53.072745+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.610352516s of 10.054381371s, submitted: 81
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395640832 unmapped: 48701440 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:54.072879+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:55.073108+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:56.073305+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4214370 data_alloc: 218103808 data_used: 17145856
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:57.073520+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:58.073731+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:59.073970+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:00.074187+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:01.074424+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4214370 data_alloc: 218103808 data_used: 17145856
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:02.074609+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:03.074788+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:04.074935+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:05.075181+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:06.075347+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4214370 data_alloc: 218103808 data_used: 17145856
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.801712036s of 13.848567963s, submitted: 17
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:07.075568+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:08.075731+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a50e7000/0x0/0x1bfc00000, data 0x2e4d687/0x3047000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:09.075933+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:10.076222+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:11.076344+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213842 data_alloc: 218103808 data_used: 17149952
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcad5c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50e2000/0x0/0x1bfc00000, data 0x2e4f342/0x304b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:12.076659+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50e2000/0x0/0x1bfc00000, data 0x2e4f342/0x304b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 ms_handle_reset con 0x55f9dcad5c00 session 0x55f9dcae3a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dccf54a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:13.076854+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50e2000/0x0/0x1bfc00000, data 0x2e4f342/0x304b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395575296 unmapped: 48766976 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a4d26000/0x0/0x1bfc00000, data 0x320b342/0x3407000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,2])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:14.077042+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 44318720 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 ms_handle_reset con 0x55f9d9834000 session 0x55f9dccf41e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:15.077349+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9d9f1a1e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 47939584 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:16.077583+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 385 handle_osd_map epochs [386,387], i have 385, src has [1,387]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4263676 data_alloc: 218103808 data_used: 20717568
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 47939584 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:17.077782+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396402688 unmapped: 47939584 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:18.078059+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a4d19000/0x0/0x1bfc00000, data 0x3216c02/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:19.078309+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:20.078561+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:21.078760+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4263676 data_alloc: 218103808 data_used: 20717568
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:22.078950+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396410880 unmapped: 47931392 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.687474251s of 15.662240028s, submitted: 20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 387 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd11860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:23.079153+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a479d000/0x0/0x1bfc00000, data 0x3793c02/0x3991000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396861440 unmapped: 47480832 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:24.079305+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396861440 unmapped: 47480832 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:25.079519+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396869632 unmapped: 47472640 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:26.079797+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310978 data_alloc: 218103808 data_used: 20725760
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396869632 unmapped: 47472640 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4799000/0x0/0x1bfc00000, data 0x3795741/0x3994000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:27.080014+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396869632 unmapped: 47472640 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:28.080219+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:29.080419+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:30.080601+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4799000/0x0/0x1bfc00000, data 0x3795741/0x3994000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9deafac00 session 0x55f9dcc474a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:31.080812+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310978 data_alloc: 218103808 data_used: 20725760
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc8c1860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:32.081043+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:33.081282+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd665a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.675912857s of 10.768301010s, submitted: 19
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc371a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:34.081464+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4798000/0x0/0x1bfc00000, data 0x3795751/0x3995000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:35.081719+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:36.081892+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4312816 data_alloc: 218103808 data_used: 20725760
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:37.082125+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4798000/0x0/0x1bfc00000, data 0x3795751/0x3995000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:38.082311+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:39.082493+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84dc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9dc84dc00 session 0x55f9dc9c2780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db839c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9db839c00 session 0x55f9dc685860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:40.082649+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:41.082824+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4341456 data_alloc: 234881024 data_used: 24690688
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9d9834000 session 0x55f9dcd10f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcce8000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:42.083015+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:43.083216+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9dc3ad800 session 0x55f9da6f2b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84dc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4797000/0x0/0x1bfc00000, data 0x3795783/0x3997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396877824 unmapped: 47464448 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:44.083388+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396894208 unmapped: 47448064 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:45.083637+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396902400 unmapped: 47439872 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:46.083803+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4360225 data_alloc: 234881024 data_used: 26959872
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396902400 unmapped: 47439872 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:47.084003+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4797000/0x0/0x1bfc00000, data 0x3795783/0x3997000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396902400 unmapped: 47439872 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.298404694s of 14.317409515s, submitted: 6
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:48.084138+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402595840 unmapped: 41746432 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:49.084208+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:50.084387+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:51.084531+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4469349 data_alloc: 234881024 data_used: 28704768
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:52.084685+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:53.084821+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a3c9c000/0x0/0x1bfc00000, data 0x4288783/0x448a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:54.084968+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 41459712 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:55.085162+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a3c9c000/0x0/0x1bfc00000, data 0x4288783/0x448a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a3ad8000/0x0/0x1bfc00000, data 0x4454783/0x4656000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:56.085321+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4479731 data_alloc: 234881024 data_used: 29097984
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:57.085685+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:58.085856+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:52:59.086054+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9deafac00 session 0x55f9dcbb0f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcc770e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 43425792 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafb400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:00.086438+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.952914238s of 12.495125771s, submitted: 104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398991360 unmapped: 45350912 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9deafb400 session 0x55f9dcc512c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4b4c000/0x0/0x1bfc00000, data 0x33e1773/0x35e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:01.086528+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4311306 data_alloc: 234881024 data_used: 21577728
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:02.086727+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:03.086871+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:04.087143+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:05.087350+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:06.087494+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4311306 data_alloc: 234881024 data_used: 21577728
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4ade000/0x0/0x1bfc00000, data 0x344f773/0x3650000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4ade000/0x0/0x1bfc00000, data 0x344f773/0x3650000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:07.418945+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:08.419185+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4ade000/0x0/0x1bfc00000, data 0x344f773/0x3650000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:09.419349+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:10.419572+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:11.419799+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4311946 data_alloc: 234881024 data_used: 21594112
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 44941312 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.308907509s of 11.493408203s, submitted: 17
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafb400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 ms_handle_reset con 0x55f9deafb400 session 0x55f9dc9c3680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:12.420009+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400089088 unmapped: 44253184 heap: 444342272 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 389 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9d97b12c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 389 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc9c3c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 389 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc9c2b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafac00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:13.420218+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 389 heartbeat osd_stat(store_statfs(0x1a4ad7000/0x0/0x1bfc00000, data 0x3452490/0x3656000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402612224 unmapped: 50135040 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:14.420429+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 390 ms_handle_reset con 0x55f9deafac00 session 0x55f9dccf45a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402628608 unmapped: 50118656 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:15.420654+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 50110464 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 391 ms_handle_reset con 0x55f9d9834000 session 0x55f9dccf50e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:16.420896+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489310 data_alloc: 234881024 data_used: 25178112
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 50110464 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 391 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc6854a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:17.421136+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 391 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcbb05a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafb400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 391 ms_handle_reset con 0x55f9deafb400 session 0x55f9da6f32c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 50110464 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:18.421476+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402636800 unmapped: 50110464 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a350e000/0x0/0x1bfc00000, data 0x4a19d50/0x4c1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f9000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:19.421643+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 392 ms_handle_reset con 0x55f9dc2f9000 session 0x55f9dcd0ef00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:20.422047+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a4acf000/0x0/0x1bfc00000, data 0x34579b7/0x365d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a4acf000/0x0/0x1bfc00000, data 0x34579b7/0x365d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:21.422297+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4344236 data_alloc: 234881024 data_used: 25186304
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:22.422500+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:23.422697+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:24.423046+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402677760 unmapped: 50069504 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9834000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 392 ms_handle_reset con 0x55f9d9834000 session 0x55f9dc6fed20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.286382675s of 13.420795441s, submitted: 131
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:25.423351+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402702336 unmapped: 50044928 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 393 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dccf5e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 393 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc3f1c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:26.423569+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4347550 data_alloc: 234881024 data_used: 25178112
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402710528 unmapped: 50036736 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 393 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd67e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a4ac7000/0x0/0x1bfc00000, data 0x322152e/0x3428000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:27.423784+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:28.424001+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 393 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcd105a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:29.424175+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:30.424366+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:31.424529+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316992 data_alloc: 234881024 data_used: 22863872
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402726912 unmapped: 50020352 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deafb400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a4d06000/0x0/0x1bfc00000, data 0x322155e/0x3427000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:32.424784+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 394 ms_handle_reset con 0x55f9deafb400 session 0x55f9dcae32c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:33.424992+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:34.425232+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a50c8000/0x0/0x1bfc00000, data 0x2e5f20b/0x3066000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:35.425523+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:36.425785+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4267908 data_alloc: 218103808 data_used: 18927616
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.548620224s of 11.714035988s, submitted: 59
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 394 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc50000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:37.425962+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a4d00000/0x0/0x1bfc00000, data 0x322720b/0x342e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:38.426176+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397459456 unmapped: 55287808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 394 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9db568960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:39.426352+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 ms_handle_reset con 0x55f9dc317c00 session 0x55f9d97f6000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:40.426740+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:41.426974+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179262 data_alloc: 218103808 data_used: 11120640
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:42.427200+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:43.427366+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:44.427580+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:45.427791+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:46.428004+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4179262 data_alloc: 218103808 data_used: 11120640
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:47.428205+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:48.428407+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393297920 unmapped: 59449344 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:49.428612+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393306112 unmapped: 59441152 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:50.428787+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:51.428974+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4206142 data_alloc: 218103808 data_used: 14884864
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:52.429143+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:53.429502+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:54.429891+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393641984 unmapped: 59105280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a574b000/0x0/0x1bfc00000, data 0x27d9d4a/0x29e2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.022670746s of 18.103244781s, submitted: 22
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dccf4960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:55.430377+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:56.430549+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280768 data_alloc: 218103808 data_used: 14884864
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:57.430917+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:58.431143+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a4dc0000/0x0/0x1bfc00000, data 0x3165d4a/0x336e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17acf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:53:59.431286+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:00.431432+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 393658368 unmapped: 59088896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:01.431650+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4374296 data_alloc: 218103808 data_used: 14958592
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 395362304 unmapped: 57384960 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:02.431802+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9dcbb0f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396664832 unmapped: 56082432 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a3c77000/0x0/0x1bfc00000, data 0x3e9ed4a/0x40a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:03.432032+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396664832 unmapped: 56082432 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:04.432230+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:05.432468+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:06.432632+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444711 data_alloc: 218103808 data_used: 19910656
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:07.432855+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a3c3f000/0x0/0x1bfc00000, data 0x3ed6d4a/0x40df000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:08.433046+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.094612122s of 13.529434204s, submitted: 93
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:09.433278+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:10.433619+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:11.434175+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441147 data_alloc: 218103808 data_used: 19910656
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:12.434386+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:13.434694+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 396845056 unmapped: 55902208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a3c1e000/0x0/0x1bfc00000, data 0x3ef7d4a/0x4100000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:14.434950+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 53567488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:15.435176+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 54124544 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:16.435350+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4501453 data_alloc: 218103808 data_used: 20688896
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 54124544 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:17.435540+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399777792 unmapped: 52969472 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:18.435742+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.956391335s of 10.231409073s, submitted: 77
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 ms_handle_reset con 0x55f9dc317c00 session 0x55f9d90fd4a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399777792 unmapped: 52969472 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:19.436048+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399777792 unmapped: 52969472 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a3689000/0x0/0x1bfc00000, data 0x448cd4a/0x4695000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:20.436258+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399777792 unmapped: 52969472 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 396 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dcc47860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:21.436460+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4501409 data_alloc: 218103808 data_used: 20697088
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399785984 unmapped: 52961280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:22.436669+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399785984 unmapped: 52961280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:23.436866+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399785984 unmapped: 52961280 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a3685000/0x0/0x1bfc00000, data 0x448e9a3/0x4698000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 396 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 396 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:24.437097+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 397 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9d9adb4a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb5400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 397 ms_handle_reset con 0x55f9deeb5400 session 0x55f9dc8c05a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c7400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 41361408 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:25.437383+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 41361408 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 398 ms_handle_reset con 0x55f9dc5c7400 session 0x55f9dc9063c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:26.437549+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4604806 data_alloc: 234881024 data_used: 28688384
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 41353216 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:27.437745+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dcce90e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a2b0c000/0x0/0x1bfc00000, data 0x500331a/0x5211000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 41353216 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:28.437991+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc6fe960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9dc6ff680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb5400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9deeb5400 session 0x55f9daa22d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411344896 unmapped: 41402368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:29.438176+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411344896 unmapped: 41402368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:30.438462+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f8000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.656889915s of 12.057048798s, submitted: 74
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dc2f8000 session 0x55f9dc6ffc20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dccf4f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dcca5c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:31.438717+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4597582 data_alloc: 234881024 data_used: 28688384
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:32.438948+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a2b0a000/0x0/0x1bfc00000, data 0x5004f8f/0x5214000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:33.439147+0000)
Jan 31 09:20:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:34.439280+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408248320 unmapped: 44498944 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:35.439548+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408256512 unmapped: 44490752 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:36.439779+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc6ffe00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601756 data_alloc: 234881024 data_used: 28696576
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408346624 unmapped: 44400640 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9db870960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:37.439944+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408354816 unmapped: 44392448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f3000/0x0/0x1bfc00000, data 0x541aace/0x562b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:38.440218+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 44384256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:39.440387+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f3000/0x0/0x1bfc00000, data 0x541aace/0x562b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 44384256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:40.440526+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 44384256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:41.440668+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4631664 data_alloc: 234881024 data_used: 28696576
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408363008 unmapped: 44384256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:42.440809+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f3000/0x0/0x1bfc00000, data 0x541aace/0x562b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408371200 unmapped: 44376064 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:43.440946+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:44.441164+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:45.441356+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:46.441483+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.727028847s of 15.821485519s, submitted: 23
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4631972 data_alloc: 234881024 data_used: 28696576
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb5400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9deeb5400 session 0x55f9dcae3c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:47.441631+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dcd0f860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f1000/0x0/0x1bfc00000, data 0x541bace/0x562c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:48.441816+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dc86b0e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9da74ef00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dd69a000 session 0x55f9daa234a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db87a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:49.442009+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0f04000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:50.442205+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408379392 unmapped: 44367872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:51.442353+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4643839 data_alloc: 234881024 data_used: 29982720
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408387584 unmapped: 44359680 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:52.442546+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:53.442690+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f1000/0x0/0x1bfc00000, data 0x541bace/0x562c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:54.442841+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f1000/0x0/0x1bfc00000, data 0x541bace/0x562c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:55.443009+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:56.443167+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671199 data_alloc: 234881024 data_used: 33910784
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:57.443352+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:58.443497+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a26f1000/0x0/0x1bfc00000, data 0x541bace/0x562c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:54:59.443660+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:00.443831+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:01.443969+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671199 data_alloc: 234881024 data_used: 33910784
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 44261376 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:02.444202+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 44253184 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:03.444383+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.425395966s of 17.477115631s, submitted: 7
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408764416 unmapped: 43982848 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:04.444547+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411484160 unmapped: 41263104 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a22b0000/0x0/0x1bfc00000, data 0x585dace/0x5a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:05.444741+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:06.444883+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4737281 data_alloc: 234881024 data_used: 37543936
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:07.445151+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2257000/0x0/0x1bfc00000, data 0x58b5ace/0x5ac6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2257000/0x0/0x1bfc00000, data 0x58b5ace/0x5ac6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:08.445307+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:09.445661+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2257000/0x0/0x1bfc00000, data 0x58b5ace/0x5ac6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:10.445857+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9db87a800 session 0x55f9dcc472c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc74ba40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412491776 unmapped: 40255488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:11.446011+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dc317c00 session 0x55f9db568f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4647436 data_alloc: 234881024 data_used: 34361344
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:12.446212+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:13.446349+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:14.446496+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:15.446706+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:16.446895+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4647436 data_alloc: 234881024 data_used: 34361344
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412368896 unmapped: 40378368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.982387543s of 13.243474007s, submitted: 56
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:17.447094+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:18.447253+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:19.447483+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:20.447761+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:21.447923+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649900 data_alloc: 234881024 data_used: 34349056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:22.448085+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd10f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:23.448333+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc3734a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:24.448500+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a2ac0000/0x0/0x1bfc00000, data 0x504dace/0x525e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 40370176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:25.448768+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dcae7000 session 0x55f9dccf5860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:26.448942+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4511636 data_alloc: 234881024 data_used: 31846400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:27.449194+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:28.449472+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:29.449663+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:30.449842+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:31.450035+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4511636 data_alloc: 234881024 data_used: 31846400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:32.450216+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:33.450403+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 40361984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:34.450669+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.793992996s of 17.461553574s, submitted: 38
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:35.450910+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:36.451202+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4517643 data_alloc: 234881024 data_used: 33685504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:37.451410+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:38.451612+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:39.451756+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:40.452066+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6854a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413843456 unmapped: 38903808 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:41.452367+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4517419 data_alloc: 234881024 data_used: 33685504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413851648 unmapped: 38895616 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:42.452517+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413851648 unmapped: 38895616 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:43.452654+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a39c9000/0x0/0x1bfc00000, data 0x4144ace/0x4355000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 38887424 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:44.452862+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9e0f04000 session 0x55f9dcc5cb40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.198714256s of 10.470458031s, submitted: 7
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 38887424 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:45.453056+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd66960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 38887424 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:46.453367+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4512655 data_alloc: 234881024 data_used: 33579008
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413859840 unmapped: 38887424 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:47.453595+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a3a0f000/0x0/0x1bfc00000, data 0x40feace/0x430f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 38879232 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:48.453776+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 38879232 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:49.453946+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a3a0f000/0x0/0x1bfc00000, data 0x40feace/0x430f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413868032 unmapped: 38879232 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:50.454120+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413876224 unmapped: 38871040 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:51.454279+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9d97f85a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4512787 data_alloc: 234881024 data_used: 33579008
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 40599552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:52.454739+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 ms_handle_reset con 0x55f9dc317c00 session 0x55f9dc6ffc20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a3a0f000/0x0/0x1bfc00000, data 0x40feace/0x430f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 40599552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:53.455014+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 40599552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:54.455144+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 401 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd0fa40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:55.455305+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.594058037s of 10.206480026s, submitted: 59
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 401 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae2780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 401 heartbeat osd_stat(store_statfs(0x1a457c000/0x0/0x1bfc00000, data 0x359270a/0x37a2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:56.455455+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 402 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc8c0960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4387946 data_alloc: 234881024 data_used: 24760320
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:57.455617+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:58.455833+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 58K writes, 218K keys, 58K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 58K writes, 21K syncs, 2.74 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3506 writes, 12K keys, 3506 commit groups, 1.0 writes per commit group, ingest: 15.02 MB, 0.03 MB/s
                                           Interval WAL: 3506 writes, 1419 syncs, 2.47 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:55:59.456003+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409444352 unmapped: 43302912 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:00.456257+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409452544 unmapped: 43294720 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a4574000/0x0/0x1bfc00000, data 0x3595f2e/0x37a8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 403 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd67680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:01.456420+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409452544 unmapped: 43294720 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4390744 data_alloc: 234881024 data_used: 24760320
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:02.456578+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409452544 unmapped: 43294720 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 403 ms_handle_reset con 0x55f9dc317c00 session 0x55f9d978c3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:03.456770+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:04.457062+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:05.457331+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56ec000/0x0/0x1bfc00000, data 0x241ff2e/0x2632000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:06.457554+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213800 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:07.457803+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:08.457948+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:09.458169+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e8000/0x0/0x1bfc00000, data 0x2421a6d/0x2635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:10.458357+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:11.458513+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397115392 unmapped: 55631872 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.421072960s of 16.197496414s, submitted: 62
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6850e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e8000/0x0/0x1bfc00000, data 0x2421a6d/0x2635000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd10000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:12.458660+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397131776 unmapped: 55615488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:13.458929+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:14.459101+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:15.459259+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:16.459425+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:17.459575+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:18.459866+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:19.460172+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:20.460450+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:21.460668+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:22.460805+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:23.460951+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:24.461179+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:25.461600+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:26.461814+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:27.462153+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:28.462323+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:29.462547+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:30.462824+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:31.463176+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:32.463471+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:33.463688+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:34.463926+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:35.464156+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:36.464356+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211794 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:37.464526+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 55607296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a56e9000/0x0/0x1bfc00000, data 0x2421a0b/0x2634000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:38.464781+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.720899582s of 26.808219910s, submitted: 26
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 48365568 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc74b680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcd10d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0f04000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9e0f04000 session 0x55f9dccf5a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dba3ef00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc376960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:39.464945+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:40.465118+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:41.465385+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273449 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:42.465583+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:43.465736+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:44.465968+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:45.466150+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:46.466314+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397303808 unmapped: 55443456 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:47.466528+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273449 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:48.466754+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:49.466923+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:50.467113+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:51.467293+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:52.467430+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273449 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:53.467605+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397312000 unmapped: 55435264 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:54.467777+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397320192 unmapped: 55427072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:55.467949+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397320192 unmapped: 55427072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:56.468260+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397320192 unmapped: 55427072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dcd10b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:57.468426+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4273449 data_alloc: 218103808 data_used: 11169792
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397320192 unmapped: 55427072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcc501e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:58.468663+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397336576 unmapped: 55410688 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dcae7c00 session 0x55f9dc74b860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.606311798s of 20.880384445s, submitted: 23
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dccf45a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:56:59.468880+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f45000/0x0/0x1bfc00000, data 0x2bc6a0b/0x2dd9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397647872 unmapped: 55099392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:00.469037+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397647872 unmapped: 55099392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:01.469223+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 397647872 unmapped: 55099392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:02.469486+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4293076 data_alloc: 218103808 data_used: 12902400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:03.469729+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:04.469912+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:05.470173+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:06.470332+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:07.470502+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 19103744
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:08.470671+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:09.470900+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:10.471160+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4f1f000/0x0/0x1bfc00000, data 0x2beaa3e/0x2dff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:11.471420+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:12.471690+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4336756 data_alloc: 218103808 data_used: 19103744
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 53886976 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.583962440s of 13.613166809s, submitted: 7
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:13.471812+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401907712 unmapped: 50839552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:14.471969+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c1d000/0x0/0x1bfc00000, data 0x2ee6a3e/0x30fb000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:15.472237+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:16.472419+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:17.472678+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373760 data_alloc: 218103808 data_used: 20262912
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:18.472876+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:19.473132+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:20.473275+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:21.473466+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:22.473626+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373760 data_alloc: 218103808 data_used: 20262912
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:23.473799+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:24.474007+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:25.474256+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:26.474481+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:27.474672+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373760 data_alloc: 218103808 data_used: 20262912
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:28.474824+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:29.475057+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:30.475395+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:31.475646+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:32.475812+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373760 data_alloc: 218103808 data_used: 20262912
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:33.476035+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:34.476236+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:35.476432+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c03000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 50798592 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.823360443s of 23.011190414s, submitted: 63
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc8c14a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dcc46f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:36.476619+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401956864 unmapped: 50790400 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:37.476830+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4368352 data_alloc: 218103808 data_used: 20267008
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401956864 unmapped: 50790400 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:38.476996+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dccae000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dccae000 session 0x55f9dcce9a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 50782208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:39.477275+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 50782208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:40.477495+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 50782208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a4c0b000/0x0/0x1bfc00000, data 0x2efea3e/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:41.477725+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 50782208 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:42.491064+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369826 data_alloc: 218103808 data_used: 20267008
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dcd66000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401981440 unmapped: 50765824 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9da6f2b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:43.491361+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401989632 unmapped: 50757632 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:44.491507+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dc74b680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:45.491710+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a3ee7000/0x0/0x1bfc00000, data 0x3c22a3e/0x3e37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:46.491957+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:47.492165+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475235 data_alloc: 218103808 data_used: 20267008
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:48.492410+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:49.492608+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402309120 unmapped: 50438144 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a3ee7000/0x0/0x1bfc00000, data 0x3c22a3e/0x3e37000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.249938011s of 14.454350471s, submitted: 45
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:50.492778+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402333696 unmapped: 50413568 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:51.492895+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 406 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcbb0f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402374656 unmapped: 50372608 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:52.493000+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dccae000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490773 data_alloc: 218103808 data_used: 20275200
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 54059008 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 407 ms_handle_reset con 0x55f9dd69a000 session 0x55f9d9f1a5a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:53.493142+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dccae000 session 0x55f9dc8c01e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc474a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399048704 unmapped: 53698560 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:54.493334+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399147008 unmapped: 53600256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:55.493514+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9d97eed20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc848400 session 0x55f9dc9c2960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dd69a000 session 0x55f9da74fa40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a3b6a000/0x0/0x1bfc00000, data 0x3f96347/0x41b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc30400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dcc30400 session 0x55f9dcc47a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc5d680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dc684780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcae2780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 53567488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dc6843c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc317400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc317400 session 0x55f9dc684000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:56.493708+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 53567488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:57.493869+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4565685 data_alloc: 218103808 data_used: 20291584
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 53567488 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6845a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:58.494044+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 ms_handle_reset con 0x55f9dc848400 session 0x55f9da9b5860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400121856 unmapped: 52625408 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:57:59.494255+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 409 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dcd0f860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 398974976 unmapped: 53772288 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 409 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcc505a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc28c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:00.494408+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dcc28c00 session 0x55f9dcc50000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9d97b0780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:01.494554+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9d000/0x0/0x1bfc00000, data 0x4d5db6d/0x4f7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:02.494710+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4651001 data_alloc: 218103808 data_used: 20307968
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:03.494868+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9d9f1ab40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dc92a1e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:04.495170+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc848400 session 0x55f9d97eeb40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc28c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.712867737s of 14.574018478s, submitted: 453
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dcc28c00 session 0x55f9dc6fe5a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:05.495380+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 52715520 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9c000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:06.495589+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 52707328 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:07.495840+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4687399 data_alloc: 234881024 data_used: 24625152
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 51560448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:08.496125+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9c000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 51560448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:09.496366+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 51560448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:10.496539+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 51560448 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:11.496765+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 51552256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:12.496954+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9e000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4685671 data_alloc: 234881024 data_used: 24625152
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 51552256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:13.497164+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 51552256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:14.497360+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 51552256 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:15.497615+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9e000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.731956482s of 10.848281860s, submitted: 2
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9e000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 51470336 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:16.497853+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 51470336 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:17.498006+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2d9e000/0x0/0x1bfc00000, data 0x4d5db7d/0x4f80000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4730091 data_alloc: 234881024 data_used: 25403392
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402014208 unmapped: 50733056 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:18.498186+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402055168 unmapped: 50692096 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:19.498320+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 50487296 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:20.498462+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408256512 unmapped: 44490752 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a27ba000/0x0/0x1bfc00000, data 0x5341b7d/0x5564000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:21.498643+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 44482560 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:22.498831+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803557 data_alloc: 234881024 data_used: 34942976
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 44482560 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:23.499029+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a27ba000/0x0/0x1bfc00000, data 0x5341b7d/0x5564000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408264704 unmapped: 44482560 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:24.499146+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:25.499366+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a27ba000/0x0/0x1bfc00000, data 0x5341b7d/0x5564000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:26.499505+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:27.499672+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4803557 data_alloc: 234881024 data_used: 34942976
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:28.499810+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:29.500022+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408272896 unmapped: 44474368 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:30.500179+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408281088 unmapped: 44466176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a27ba000/0x0/0x1bfc00000, data 0x5341b7d/0x5564000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.195389748s of 15.360485077s, submitted: 57
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:31.500312+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 40067072 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc50960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcd67a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:32.500531+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4828177 data_alloc: 234881024 data_used: 31006720
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc848400 session 0x55f9dccf4d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410935296 unmapped: 41811968 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:33.500698+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:34.500855+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:35.501111+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:36.501252+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a257a000/0x0/0x1bfc00000, data 0x5a99b5d/0x579c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:37.501427+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4841694 data_alloc: 234881024 data_used: 30908416
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:38.501598+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:39.501764+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:40.501961+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:41.502147+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.386196136s of 10.751989365s, submitted: 135
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:42.502322+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4835634 data_alloc: 234881024 data_used: 30908416
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2561000/0x0/0x1bfc00000, data 0x5abab5d/0x57bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2561000/0x0/0x1bfc00000, data 0x5abab5d/0x57bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:43.502535+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dc8c01e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:44.502705+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9dd69a000 session 0x55f9dcbb01e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:45.502954+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410959872 unmapped: 41787392 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:46.503170+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2561000/0x0/0x1bfc00000, data 0x5abab5d/0x57bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410976256 unmapped: 41771008 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2561000/0x0/0x1bfc00000, data 0x5abab5d/0x57bd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:47.503391+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4835634 data_alloc: 234881024 data_used: 30908416
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410976256 unmapped: 41771008 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:48.503542+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc3763c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dcca4f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407257088 unmapped: 45490176 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc50f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:49.503725+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:50.503876+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:51.504045+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a303f000/0x0/0x1bfc00000, data 0x4fddb2a/0x4cde000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:52.504142+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4688411 data_alloc: 234881024 data_used: 21827584
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:53.504284+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.301034927s of 11.727187157s, submitted: 38
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 45481984 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:54.504566+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 411 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcc5d680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 411 ms_handle_reset con 0x55f9dc5c6400 session 0x55f9dcae2780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400760832 unmapped: 51986432 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:55.504905+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a4640000/0x0/0x1bfc00000, data 0x34be775/0x36dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a4640000/0x0/0x1bfc00000, data 0x34be775/0x36dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x17edf9c7), peers [1,2] op hist [0,0,0,0,2])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400769024 unmapped: 51978240 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:56.505194+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 50880512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:57.505371+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 412 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd0f860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 412 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9db86eb40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386451 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400818176 unmapped: 51929088 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 413 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dba3ef00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:58.505577+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400818176 unmapped: 51929088 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:58:59.505816+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400818176 unmapped: 51929088 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:00.506045+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52bc000/0x0/0x1bfc00000, data 0x24319ee/0x264f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400826368 unmapped: 51920896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:01.506253+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400826368 unmapped: 51920896 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:02.506435+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52bc000/0x0/0x1bfc00000, data 0x24319ee/0x264f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292223 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:03.506677+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:04.506956+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52bc000/0x0/0x1bfc00000, data 0x24319ee/0x264f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.946926117s of 11.583917618s, submitted: 110
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bc000/0x0/0x1bfc00000, data 0x24319ee/0x264f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:05.507169+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:06.507316+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:07.507494+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:08.507750+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400834560 unmapped: 51912704 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:09.507933+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:10.508164+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:11.508336+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:12.508581+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:13.508784+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:14.508956+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:15.509193+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:16.509368+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400842752 unmapped: 51904512 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:17.509542+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400850944 unmapped: 51896320 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:18.509709+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400850944 unmapped: 51896320 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:19.509869+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:20.510155+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:21.510355+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:22.510590+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:23.510799+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:24.510988+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:25.511227+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400867328 unmapped: 51879936 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:26.511517+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400875520 unmapped: 51871744 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:27.511710+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400875520 unmapped: 51871744 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:28.511958+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400875520 unmapped: 51871744 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:29.512154+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:30.512348+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:31.512589+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:32.512762+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:33.513461+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400883712 unmapped: 51863552 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:34.513613+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:35.513820+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:36.514041+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:37.514225+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:38.514379+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:39.514557+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:40.514742+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 51855360 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:41.514963+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:42.515209+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4294349 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:43.515357+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:44.515557+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:45.515797+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:46.515975+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 51847168 heap: 452747264 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.160724640s of 42.177688599s, submitted: 25
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:47.516148+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcc774a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc848400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc848400 session 0x55f9dcc474a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dccf4780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd0e780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dcae21e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4357916 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:48.516318+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4a88000/0x0/0x1bfc00000, data 0x2c67565/0x2e86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:49.516477+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 57139200 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:50.516632+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 57139200 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:51.516809+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 57139200 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:52.516969+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4357916 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 57139200 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:53.517121+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4a88000/0x0/0x1bfc00000, data 0x2c67565/0x2e86000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:54.517251+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dc6854a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb76400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:55.517421+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:56.517671+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:57.517887+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360163 data_alloc: 218103808 data_used: 11268096
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:58.518067+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4a87000/0x0/0x1bfc00000, data 0x2c67588/0x2e87000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:59:59.518218+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:00.518361+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:01.518479+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 57163776 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:02.518628+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.062685013s of 15.225501060s, submitted: 23
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dbb76400 session 0x55f9dcae3e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301889 data_alloc: 218103808 data_used: 11214848
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:03.518798+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc76780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:04.519029+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:05.519356+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:06.519569+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:07.519793+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:08.520008+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:09.520202+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:10.520438+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:11.520621+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:12.520837+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:13.521006+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:14.521133+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:15.521287+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:16.521501+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:17.521758+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:18.521951+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:19.522153+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:20.522387+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:21.522588+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:22.522865+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400916480 unmapped: 57155584 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:23.523044+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:24.523299+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:25.523512+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:26.523732+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:27.523930+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300378 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:28.524065+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:29.524435+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:30.524726+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400924672 unmapped: 57147392 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9d97b0780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9db86fe00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dcc463c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0f04400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9e0f04400 session 0x55f9daa23860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.561090469s of 28.726909637s, submitted: 31
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:31.524929+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc512c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:32.525177+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae2000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4309229 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:33.525365+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:34.525581+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:35.525829+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:36.526034+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:37.526249+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4309229 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:38.526479+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400941056 unmapped: 57131008 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:39.526673+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:40.526817+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:41.527042+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:42.527206+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.237578392s of 11.162384033s, submitted: 22
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310134 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc684f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:43.527361+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:44.527551+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:45.527769+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 57122816 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:46.527949+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 57114624 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:47.528178+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4314202 data_alloc: 218103808 data_used: 11743232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:48.528439+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:49.528611+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:50.528806+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:51.528958+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 57106432 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a5239000/0x0/0x1bfc00000, data 0x24b55c7/0x26d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc8c0d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316c00 session 0x55f9dc685860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:52.529174+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 57098240 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.558219910s of 10.067571640s, submitted: 7
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4314230 data_alloc: 218103808 data_used: 11747328
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:53.530055+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 57098240 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:54.530311+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400982016 unmapped: 57090048 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:55.530486+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd103c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:56.530763+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:57.530948+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:58.531174+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:00:59.531431+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:00.531640+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:01.531793+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 57081856 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:02.532137+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:03.532334+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:04.532489+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:05.532743+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:06.533031+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 400998400 unmapped: 57073664 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:07.533296+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 57065472 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:08.533631+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 57065472 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:09.533928+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 57065472 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:10.534235+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 57065472 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:11.534418+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:12.534692+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:13.534934+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:14.535129+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:15.535377+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:16.535641+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:17.535902+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:18.536167+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 57057280 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:19.536366+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:20.536562+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:21.536730+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:22.536931+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:23.537169+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:24.537423+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:25.537674+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:26.537864+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401022976 unmapped: 57049088 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:27.538103+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:28.538358+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:29.538538+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:30.538689+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:31.538869+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:32.539048+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:33.539245+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303112 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 57040896 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:34.539419+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401039360 unmapped: 57032704 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:35.539618+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 57024512 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:36.539883+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 57024512 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:37.540035+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.556964874s of 45.118488312s, submitted: 26
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 57024512 heap: 458072064 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc8c0000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:38.540187+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387129 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 58638336 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc8c03c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:39.540384+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 58638336 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:40.540531+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 58638336 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:41.540689+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 58638336 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:42.540839+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401063936 unmapped: 58630144 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a481f000/0x0/0x1bfc00000, data 0x2ecf5c7/0x30ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:43.540982+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387073 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401063936 unmapped: 58630144 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:44.541148+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401063936 unmapped: 58630144 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:45.541382+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401063936 unmapped: 58630144 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9d97f61e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:46.541656+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a481f000/0x0/0x1bfc00000, data 0x2ecf5c7/0x30ef000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 58482688 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:47.541844+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2efc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 402071552 unmapped: 57622528 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:48.542008+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4470002 data_alloc: 234881024 data_used: 22339584
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:49.542166+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:50.542317+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:51.542534+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a47fb000/0x0/0x1bfc00000, data 0x2ef35c7/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:52.542759+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:53.543153+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a47fb000/0x0/0x1bfc00000, data 0x2ef35c7/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4470002 data_alloc: 234881024 data_used: 22339584
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a47fb000/0x0/0x1bfc00000, data 0x2ef35c7/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:54.543302+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:55.543447+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:56.543617+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:57.543813+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 404168704 unmapped: 55525376 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.682670593s of 20.474929810s, submitted: 36
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:58.543976+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4561438 data_alloc: 234881024 data_used: 22896640
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a47fb000/0x0/0x1bfc00000, data 0x2ef35c7/0x3113000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410664960 unmapped: 49029120 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:01:59.544110+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3cdb000/0x0/0x1bfc00000, data 0x3a135c7/0x3c33000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:00.544224+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:01.544409+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:02.544648+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3cdb000/0x0/0x1bfc00000, data 0x3a135c7/0x3c33000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:03.544876+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4577302 data_alloc: 234881024 data_used: 24141824
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:04.545040+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:05.545341+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:06.545525+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:07.545751+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3cd8000/0x0/0x1bfc00000, data 0x3a165c7/0x3c36000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:08.545974+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4578070 data_alloc: 234881024 data_used: 24170496
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:09.546235+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:10.546467+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.194355011s of 12.445869446s, submitted: 104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:11.546697+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410361856 unmapped: 49332224 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:12.546927+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a3cd6000/0x0/0x1bfc00000, data 0x3a185c7/0x3c38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2efc00 session 0x55f9dc376000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9dccf45a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 410370048 unmapped: 49324032 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:13.547033+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74a1e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:14.547270+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:15.547570+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:16.547763+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:17.547946+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 54165504 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:18.548212+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:19.548341+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:20.548582+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:21.548841+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:22.548986+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:23.549200+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:24.549349+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:25.549521+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:26.549672+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:27.549975+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:28.550186+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:29.550410+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:30.550558+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:31.550798+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:32.551039+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:33.551185+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:34.551328+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:35.551666+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:36.551882+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:37.552142+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:38.552329+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4315958 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:39.552513+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:40.552660+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:41.552795+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:42.552977+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcce8f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc74be00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f3c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f3c00 session 0x55f9dc6fef00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcce9680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.127571106s of 32.264827728s, submitted: 39
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405536768 unmapped: 54157312 heap: 459694080 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:43.553149+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52ba000/0x0/0x1bfc00000, data 0x2433575/0x2653000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387796 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405553152 unmapped: 62013440 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e9000/0x0/0x1bfc00000, data 0x2d05575/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc5c3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc5de00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9d97b0d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c5c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:44.553362+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc5c5c00 session 0x55f9daa22960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc2c21e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e9000/0x0/0x1bfc00000, data 0x2d05575/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:45.553576+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:46.553788+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:47.553992+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e9000/0x0/0x1bfc00000, data 0x2d05575/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:48.554197+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4387796 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:49.554416+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcbb0960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e9000/0x0/0x1bfc00000, data 0x2d05575/0x2f25000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d978e960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:50.554549+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9d978c3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405561344 unmapped: 62005248 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:51.554730+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2800 session 0x55f9dcc465a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405577728 unmapped: 61988864 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:52.554938+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 405938176 unmapped: 61628416 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:53.555399+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458772 data_alloc: 218103808 data_used: 18571264
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:54.555603+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e7000/0x0/0x1bfc00000, data 0x2d055a8/0x2f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:55.555801+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e7000/0x0/0x1bfc00000, data 0x2d055a8/0x2f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:56.555990+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e7000/0x0/0x1bfc00000, data 0x2d055a8/0x2f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:57.556140+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:58.556306+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458772 data_alloc: 218103808 data_used: 18571264
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:02:59.556560+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:00.556792+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:01.557023+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:02.557266+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 408494080 unmapped: 59072512 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a49e7000/0x0/0x1bfc00000, data 0x2d055a8/0x2f27000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.095100403s of 20.558338165s, submitted: 21
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:03.557427+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4487928 data_alloc: 218103808 data_used: 18571264
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409821184 unmapped: 57745408 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:04.557555+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcae32c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9dcae23c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc5c6800 session 0x55f9dc74a3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316400 session 0x55f9dc684f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae8400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 409821184 unmapped: 57745408 heap: 467566592 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:05.557760+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411303936 unmapped: 60465152 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:06.558290+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dcae8400 session 0x55f9daa23860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc774a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9db86eb40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 60342272 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:07.558426+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316400 session 0x55f9dcae2780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc5c6800 session 0x55f9dcc50f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 60268544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:08.558624+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4592091 data_alloc: 218103808 data_used: 18952192
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 60268544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:09.558802+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc3adc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 60268544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:10.559042+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 60268544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:11.559264+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:12.559413+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:13.559568+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4640731 data_alloc: 234881024 data_used: 24547328
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:14.559698+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:15.559943+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:16.560228+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:17.560380+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:18.560607+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4640731 data_alloc: 234881024 data_used: 24547328
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:19.560880+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:20.561181+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:21.561407+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a39a9000/0x0/0x1bfc00000, data 0x3d3b61a/0x3f5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413253632 unmapped: 58515456 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.450225830s of 18.951761246s, submitted: 87
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:22.561562+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 54599680 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:23.561747+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4729823 data_alloc: 234881024 data_used: 25812992
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417169408 unmapped: 54599680 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:24.561965+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2f04000/0x0/0x1bfc00000, data 0x47e561a/0x4a09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e68000/0x0/0x1bfc00000, data 0x488261a/0x4aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:25.562222+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:26.562493+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:27.562702+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e68000/0x0/0x1bfc00000, data 0x488261a/0x4aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e68000/0x0/0x1bfc00000, data 0x488261a/0x4aa6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:28.562913+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4732831 data_alloc: 234881024 data_used: 25894912
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 54083584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:29.563145+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e44000/0x0/0x1bfc00000, data 0x48a661a/0x4aca000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:30.563314+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:31.563454+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:32.563658+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:33.563831+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4731187 data_alloc: 234881024 data_used: 25903104
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:34.564019+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:35.564268+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2e44000/0x0/0x1bfc00000, data 0x48a661a/0x4aca000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.692587852s of 13.324972153s, submitted: 124
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:36.564406+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc3adc00 session 0x55f9dc3763c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:37.564785+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 54075392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:38.564979+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475045 data_alloc: 218103808 data_used: 13631488
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc5d4a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:39.565118+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:40.565348+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:41.565612+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4269000/0x0/0x1bfc00000, data 0x31775a8/0x3399000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:42.565803+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415490048 unmapped: 56279040 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:43.565967+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc60780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcbb0b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473859 data_alloc: 218103808 data_used: 13627392
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411877376 unmapped: 59891712 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:44.566139+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4576000/0x0/0x1bfc00000, data 0x3177598/0x3398000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9dccf4b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:45.566353+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:46.566515+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:47.566757+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:48.567015+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:49.567209+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:50.567463+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:51.567721+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:52.567868+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:53.568155+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:54.568402+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:55.568656+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:56.568842+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:57.569054+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:58.569296+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:03:59.569478+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:00.570216+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:01.570448+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:02.570604+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:03.570759+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:04.570905+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:05.571130+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:06.571301+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:07.571454+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:08.571793+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:09.571925+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:10.572062+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:11.572245+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:12.572461+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:13.572631+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4335878 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:14.572759+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:15.572991+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:16.573151+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a52bb000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 411893760 unmapped: 59875328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:17.573310+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc316400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc316400 session 0x55f9dc74a3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcae23c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae32c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d978c3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.101123810s of 42.489017487s, submitted: 57
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9d978e960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413040640 unmapped: 58728448 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:18.573469+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c6800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc5c6800 session 0x55f9dcc5c3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4390501 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:19.573617+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:20.573773+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:21.574009+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:22.574208+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c61000/0x0/0x1bfc00000, data 0x2a8d5c7/0x2cad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74be00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:23.574385+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcce8f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4390501 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c61000/0x0/0x1bfc00000, data 0x2a8d5c7/0x2cad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:24.574534+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc74a1e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9dccf45a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:25.574705+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc26800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db9aac00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 58851328 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:26.574834+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c60000/0x0/0x1bfc00000, data 0x2a8d5d7/0x2cae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:27.574961+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:28.575181+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4438739 data_alloc: 218103808 data_used: 17670144
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:29.575330+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c60000/0x0/0x1bfc00000, data 0x2a8d5d7/0x2cae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:30.575474+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:31.575631+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-31T09:04:32.575776+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _finish_auth 0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:32.576525+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 58843136 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:33.575969+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 58834944 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4c60000/0x0/0x1bfc00000, data 0x2a8d5d7/0x2cae000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4438739 data_alloc: 218103808 data_used: 17670144
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:34.576168+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 58834944 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:35.576339+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 58834944 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:36.576490+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 58834944 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.828681946s of 18.997266769s, submitted: 30
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:37.576675+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413138944 unmapped: 58630144 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:38.576826+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 57638912 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490041 data_alloc: 218103808 data_used: 17670144
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:39.577047+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 57638912 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a457d000/0x0/0x1bfc00000, data 0x31705d7/0x3391000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:40.577224+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 57638912 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:41.577553+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4467000/0x0/0x1bfc00000, data 0x32865d7/0x34a7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:42.577818+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:43.578184+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4508283 data_alloc: 218103808 data_used: 18108416
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:44.578473+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:45.578906+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414343168 unmapped: 57425920 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:46.579056+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:47.579252+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4446000/0x0/0x1bfc00000, data 0x32a75d7/0x34c8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:48.579434+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506303 data_alloc: 218103808 data_used: 18112512
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:49.579570+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:50.579706+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.345264435s of 14.012647629s, submitted: 88
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:51.579859+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414351360 unmapped: 57417728 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db9aac00 session 0x55f9dcae3c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dcc26800 session 0x55f9d97f61e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:52.580013+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc684f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:53.580271+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:54.580512+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:55.580819+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:56.580963+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:57.581211+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:58.581511+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:04:59.581693+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412508160 unmapped: 59260928 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:00.581874+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:01.582182+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:02.582411+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:03.582569+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:04.582757+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:05.583896+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:06.584312+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:07.584981+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:08.585331+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:09.586038+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:10.586402+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:11.586583+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:12.586743+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412516352 unmapped: 59252736 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:13.587233+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:14.587402+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:15.587618+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:16.587849+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:17.588038+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:18.588318+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:19.588582+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:20.588736+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:21.589047+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:22.589371+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:23.589543+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:24.589790+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:25.590176+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412524544 unmapped: 59244544 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:26.590443+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:27.590777+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:28.591005+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:29.591319+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:30.591590+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:31.591795+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:32.592058+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:33.592350+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412532736 unmapped: 59236352 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:34.592612+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412540928 unmapped: 59228160 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:35.592886+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:36.593235+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:37.593880+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:38.594135+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:39.594261+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:40.594593+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:41.595034+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412549120 unmapped: 59219968 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:42.595430+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:43.595703+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:44.595877+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:45.596144+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:46.596333+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59211776 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:47.596692+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412565504 unmapped: 59203584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:48.597027+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412565504 unmapped: 59203584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:49.597183+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412565504 unmapped: 59203584 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:50.597379+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:51.597631+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:52.597736+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:53.597930+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:54.598217+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4346836 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:55.598505+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412573696 unmapped: 59195392 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc76b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dccf5680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2ee800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dc2ee800 session 0x55f9db568f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd0f4a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:56.599024+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 65.396324158s of 65.480628967s, submitted: 24
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 58662912 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4857000/0x0/0x1bfc00000, data 0x2433565/0x2652000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc772c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:57.599179+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412581888 unmapped: 59187200 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:58.599493+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 61K writes, 228K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 61K writes, 22K syncs, 2.72 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2886 writes, 10K keys, 2886 commit groups, 1.0 writes per commit group, ingest: 10.81 MB, 0.02 MB/s
                                           Interval WAL: 2886 writes, 1146 syncs, 2.52 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:05:59.599696+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4359000 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:00.599925+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51ba000/0x0/0x1bfc00000, data 0x2535565/0x2754000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:01.600143+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:02.600305+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412590080 unmapped: 59179008 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51ba000/0x0/0x1bfc00000, data 0x2535565/0x2754000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [2])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:03.600538+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d97bb0e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412598272 unmapped: 59170816 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc26800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dcc26800 session 0x55f9dcca5e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:04.600721+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4359000 data_alloc: 218103808 data_used: 11210752
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc28800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9dcc28800 session 0x55f9dccf43c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412606464 unmapped: 59162624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dccf5860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:05.600933+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412614656 unmapped: 59154432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:06.601135+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412614656 unmapped: 59154432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51b8000/0x0/0x1bfc00000, data 0x2535598/0x2756000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: mgrc ms_handle_reset ms_handle_reset con 0x55f9dc3b1c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3465938080
Jan 31 09:20:47 compute-0 ceph-osd[84816]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3465938080,v1:192.168.122.100:6801/3465938080]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: get_auth_request con 0x55f9dc2ee800 auth_method 0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: mgrc handle_mgr_configure stats_period=5
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:07.601263+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:08.601472+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:09.601654+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4372120 data_alloc: 218103808 data_used: 12267520
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:10.602554+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51b8000/0x0/0x1bfc00000, data 0x2535598/0x2756000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9e0186800 session 0x55f9dc6ffa40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc26800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:11.603602+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:12.603982+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:13.604800+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:14.605062+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4372120 data_alloc: 218103808 data_used: 12267520
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412672000 unmapped: 59097088 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:15.605702+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a51b8000/0x0/0x1bfc00000, data 0x2535598/0x2756000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 59088896 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:16.606167+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 412680192 unmapped: 59088896 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:17.606448+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.013828278s of 21.077655792s, submitted: 14
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 56639488 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:18.606608+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 58138624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:19.607202+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444560 data_alloc: 218103808 data_used: 12468224
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 58138624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:20.607748+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 58138624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:21.608462+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a35ed000/0x0/0x1bfc00000, data 0x2f60598/0x3181000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 58138624 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:22.608890+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 58130432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53315 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:23.609173+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 58130432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:24.609362+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444560 data_alloc: 218103808 data_used: 12468224
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 58130432 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:25.609658+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a35ed000/0x0/0x1bfc00000, data 0x2f60598/0x3181000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:26.609905+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:27.610169+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a35ed000/0x0/0x1bfc00000, data 0x2f60598/0x3181000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:28.610300+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:29.610466+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444560 data_alloc: 218103808 data_used: 12468224
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413646848 unmapped: 58122240 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:30.610597+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc50960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.481883049s of 13.268753052s, submitted: 40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9da9b7680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413655040 unmapped: 58114048 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:31.610776+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413655040 unmapped: 58114048 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:32.610882+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f5c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 414 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413663232 unmapped: 58105856 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a35ed000/0x0/0x1bfc00000, data 0x2f60598/0x3181000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:33.611026+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 415 ms_handle_reset con 0x55f9db4f5c00 session 0x55f9dcc46780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc50d000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413679616 unmapped: 58089472 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 415 ms_handle_reset con 0x55f9dc50d000 session 0x55f9dcc77860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 415 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc91e780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:34.611161+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480537 data_alloc: 218103808 data_used: 14368768
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 428646400 unmapped: 43122688 heap: 471769088 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:35.611364+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 56844288 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 415 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74b2c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:36.611504+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 56844288 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 416 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcca5c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:37.611644+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 56836096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:38.612006+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418791424 unmapped: 56827904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a28b8000/0x0/0x1bfc00000, data 0x3c90f00/0x3eb5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:39.612243+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4597081 data_alloc: 234881024 data_used: 22953984
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc8c03c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:40.612601+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f5c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9db4f5c00 session 0x55f9dba3f860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:41.612915+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcc461e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcca4d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:42.613107+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:43.613282+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a28b4000/0x0/0x1bfc00000, data 0x3c92bd7/0x3eb9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:44.613491+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4597025 data_alloc: 234881024 data_used: 22949888
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 56868864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:45.613703+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.845607758s of 14.748682022s, submitted: 86
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a28b4000/0x0/0x1bfc00000, data 0x3c92bd7/0x3eb9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [0,0,0,1,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413777920 unmapped: 61841408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 417 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dc685860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:46.613919+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:47.614095+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:48.614231+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:49.614387+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482406 data_alloc: 218103808 data_used: 11235328
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a33df000/0x0/0x1bfc00000, data 0x31676e3/0x338d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:50.614474+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:51.614639+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:52.614783+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:53.614989+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:54.615213+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482406 data_alloc: 218103808 data_used: 11235328
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a33df000/0x0/0x1bfc00000, data 0x31676e3/0x338d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9deeb4000 session 0x55f9da74fa40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:55.615426+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc3b0000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9dc3b0000 session 0x55f9dc74b2c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc91e780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc77860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.986372948s of 10.231092453s, submitted: 41
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc6ff4a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcc46780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c5c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9dc5c5c00 session 0x55f9dcc50960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc6ffa40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413794304 unmapped: 61825024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc9c3860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dccf5860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:56.615572+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dcca5e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9deeb4800 session 0x55f9dccf41e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 53477376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a28f9000/0x0/0x1bfc00000, data 0x3c4e745/0x3e75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:57.615685+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dcd112c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 53477376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd0f4a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 ms_handle_reset con 0x55f9db4f2400 session 0x55f9db568f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:58.615855+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422141952 unmapped: 53477376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9e0f05000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:06:59.616015+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651220 data_alloc: 234881024 data_used: 28344320
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 419 ms_handle_reset con 0x55f9deeb4800 session 0x55f9dcae25a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:00.616165+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:01.616354+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a361d000/0x0/0x1bfc00000, data 0x2f2333e/0x314a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:02.616577+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a361d000/0x0/0x1bfc00000, data 0x2f2333e/0x314a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:03.616751+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:04.616896+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512892 data_alloc: 218103808 data_used: 14528512
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:05.617165+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 416432128 unmapped: 59187200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:06.617307+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.177050591s of 10.527448654s, submitted: 78
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 62357504 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:07.617421+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 62357504 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:08.617572+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3620000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 62357504 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:09.617850+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4506170 data_alloc: 218103808 data_used: 14741504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 62349312 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3620000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [0,0,0,0,10,2])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:10.618010+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3620000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:11.618173+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:12.618347+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:13.618458+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:14.618597+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515714 data_alloc: 218103808 data_used: 15527936
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413327360 unmapped: 62291968 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:15.618825+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3621000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:16.618987+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:17.619200+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3621000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:18.619385+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:19.619558+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3621000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517122 data_alloc: 218103808 data_used: 16220160
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:20.624052+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.357772827s of 14.491651535s, submitted: 33
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:21.624310+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9e0f05000 session 0x55f9dc684f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dcc76b40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413425664 unmapped: 62193664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:22.624493+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 62185472 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:23.624752+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3621000/0x0/0x1bfc00000, data 0x2f24e7d/0x314d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413433856 unmapped: 62185472 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:24.624894+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395578 data_alloc: 218103808 data_used: 11243520
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 413417472 unmapped: 62201856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:25.625116+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9c8a800 session 0x55f9dc74bc20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:26.625235+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:27.625447+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4105000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:28.625602+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:29.625768+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394093 data_alloc: 218103808 data_used: 11243520
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:30.625956+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:31.626123+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:32.626301+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:33.626557+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4105000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:34.626819+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394093 data_alloc: 218103808 data_used: 11243520
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:35.627160+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:36.627457+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:37.627618+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4105000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:38.627763+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 61153280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:39.627923+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a4105000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394093 data_alloc: 218103808 data_used: 11243520
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.817047119s of 19.326761246s, submitted: 21
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414474240 unmapped: 61145088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:40.628064+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dab334a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcd11e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:41.628305+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:42.628476+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dc84dc00 session 0x55f9dcc77a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9c8a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:43.628628+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:44.628778+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a410a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4393917 data_alloc: 218103808 data_used: 11243520
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414482432 unmapped: 61136896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:45.628939+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd110e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a410a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dccf52c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 61054976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dab33680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:46.629134+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 61046784 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:47.629309+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 61046784 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:48.629473+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414531584 unmapped: 61087744 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:49.629614+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4436028 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 61054976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:50.629807+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.915438652s of 10.848832130s, submitted: 85
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 61054976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:51.629953+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 61038592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:52.630178+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 61038592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:53.630322+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 61038592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:54.630478+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4436100 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 60981248 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:55.630625+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414711808 unmapped: 60907520 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:56.630775+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 60874752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:57.630989+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 60874752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:58.631157+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 60874752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:07:59.631279+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462880 data_alloc: 218103808 data_used: 14966784
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:00.631442+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:01.631588+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:02.631783+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:03.631934+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:04.632095+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462880 data_alloc: 218103808 data_used: 14966784
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:05.632270+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:06.632482+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 60866560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:07.632618+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414760960 unmapped: 60858368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3c56000/0x0/0x1bfc00000, data 0x28f1e0b/0x2b18000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:08.632776+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 414760960 unmapped: 60858368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:09.632884+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.021949768s of 18.781620026s, submitted: 346
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3bb8000/0x0/0x1bfc00000, data 0x298fe0b/0x2bb6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1948f9c7), peers [1,2] op hist [0,0,0,0,0,14])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4492144 data_alloc: 218103808 data_used: 14999552
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 418996224 unmapped: 56623104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:10.633035+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:11.633235+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:12.633401+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:13.633607+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:14.633729+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517650 data_alloc: 218103808 data_used: 15253504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:15.633943+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2128000/0x0/0x1bfc00000, data 0x2e6ee0b/0x3095000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:16.634150+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:17.634280+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2128000/0x0/0x1bfc00000, data 0x2e6ee0b/0x3095000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:18.634514+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:19.635013+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4517650 data_alloc: 218103808 data_used: 15253504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:20.635657+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.549825668s of 10.879747391s, submitted: 70
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:21.635835+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:22.636002+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:23.636668+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4800 session 0x55f9da9b74a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:24.636817+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515310 data_alloc: 218103808 data_used: 15253504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:25.637159+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:26.637904+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcad5000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcad5000 session 0x55f9dcd0f0e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:27.638232+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:28.638875+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:29.639141+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515310 data_alloc: 218103808 data_used: 15253504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:30.639627+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:31.639825+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:32.640126+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:33.640276+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:34.640655+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419561472 unmapped: 56057856 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515310 data_alloc: 218103808 data_used: 15253504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:35.640886+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:36.641194+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:37.641499+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:38.641807+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:39.641996+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4515310 data_alloc: 218103808 data_used: 15253504
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:40.642223+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:41.642421+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2123000/0x0/0x1bfc00000, data 0x2e74e0b/0x309b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:42.642597+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.457544327s of 21.756736755s, submitted: 12
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9da9b74a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:43.642838+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419569664 unmapped: 56049664 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:44.643020+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419577856 unmapped: 56041472 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518790 data_alloc: 218103808 data_used: 15290368
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:45.643339+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419586048 unmapped: 56033280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:46.643668+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:47.643858+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:48.644016+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:49.644199+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4518790 data_alloc: 218103808 data_used: 15290368
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:50.644347+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:51.644528+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:52.644666+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:53.644816+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419594240 unmapped: 56025088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:54.644972+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419602432 unmapped: 56016896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.232875824s of 12.288291931s, submitted: 2
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519590 data_alloc: 218103808 data_used: 15310848
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:55.645126+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419610624 unmapped: 56008704 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:56.645239+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:57.645360+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:58.645508+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20fd000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:08:59.645665+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:00.645824+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527346 data_alloc: 218103808 data_used: 15941632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:01.645986+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419618816 unmapped: 56000512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:02.646201+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419667968 unmapped: 55951360 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:03.646353+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 55943168 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:04.646525+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 55943168 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:05.646728+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527698 data_alloc: 218103808 data_used: 15937536
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 55943168 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:06.646951+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:07.647102+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.876531601s of 13.426343918s, submitted: 10
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:08.647248+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:09.647408+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:10.647571+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527346 data_alloc: 218103808 data_used: 15937536
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:11.647694+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:12.647852+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:13.648000+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20ff000/0x0/0x1bfc00000, data 0x2e98e0b/0x30bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:14.648161+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:15.648339+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4527170 data_alloc: 218103808 data_used: 15937536
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:16.648484+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:17.648639+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:18.648791+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:19.648919+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dab33680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.516279221s of 11.650573730s, submitted: 6
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9da73bc20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20f7000/0x0/0x1bfc00000, data 0x2e9de0b/0x30c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:20.649168+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4529174 data_alloc: 218103808 data_used: 16519168
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a20fa000/0x0/0x1bfc00000, data 0x2e9de0b/0x30c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4800 session 0x55f9dcc77860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:21.649302+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:22.649452+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a211e000/0x0/0x1bfc00000, data 0x2e79e0b/0x30a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a211e000/0x0/0x1bfc00000, data 0x2e79e0b/0x30a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:23.649610+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dc74b2c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419684352 unmapped: 55934976 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcca4d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:24.649773+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419700736 unmapped: 55918592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:25.650024+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419700736 unmapped: 55918592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:26.650146+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419700736 unmapped: 55918592 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:27.650287+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:28.650504+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:29.650693+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:30.650880+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:31.651034+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:32.651191+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:33.651367+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419708928 unmapped: 55910400 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:34.651510+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:35.651698+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:36.651866+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:37.652126+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:38.652253+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:39.652418+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:40.652571+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:41.652731+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419717120 unmapped: 55902208 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:42.652870+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:43.652967+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:44.653053+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:45.653198+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:46.653320+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:47.653433+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:48.653572+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:49.653747+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419725312 unmapped: 55894016 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:50.653890+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419741696 unmapped: 55877632 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:51.654025+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:52.654213+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:53.654379+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:54.654537+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:55.655441+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:56.657568+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:57.658113+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419749888 unmapped: 55869440 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:58.658675+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:09:59.659606+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:00.661582+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:01.662285+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:02.662422+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:03.662573+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:04.662719+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:05.662955+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419766272 unmapped: 55853056 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:06.663235+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:07.663504+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:08.663774+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:09.664035+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:10.664473+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:11.664667+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:12.665138+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:13.665510+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419774464 unmapped: 55844864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:14.666179+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:15.666723+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:16.666929+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:17.667158+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:18.667328+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:19.667511+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:20.668006+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:21.668199+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419790848 unmapped: 55828480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:22.668464+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419799040 unmapped: 55820288 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:23.668637+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419807232 unmapped: 55812096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:24.668849+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419807232 unmapped: 55812096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:25.669133+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:26.669447+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:27.669621+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:28.669785+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:29.670602+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:30.671436+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:31.672157+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:32.672373+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419815424 unmapped: 55803904 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:33.672727+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:34.672950+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:35.673184+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:36.673357+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:37.673514+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419823616 unmapped: 55795712 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:38.673676+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:39.673858+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:40.673996+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:41.674280+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:42.674585+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:43.674734+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:44.674912+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419840000 unmapped: 55779328 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:45.675201+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419848192 unmapped: 55771136 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:46.675442+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419848192 unmapped: 55771136 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:47.675652+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419848192 unmapped: 55771136 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:48.675809+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419848192 unmapped: 55771136 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:49.676053+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9da9b6f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:50.676339+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:51.676569+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:52.676755+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:53.676917+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419856384 unmapped: 55762944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:54.677114+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419864576 unmapped: 55754752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:55.677378+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419864576 unmapped: 55754752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:56.677857+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419864576 unmapped: 55754752 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:57.678168+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:58.678312+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:10:59.678521+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:00.678674+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:01.678839+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419872768 unmapped: 55746560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:02.679036+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:03.679188+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:04.679403+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:05.679648+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:06.679840+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:07.679970+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:08.717316+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419880960 unmapped: 55738368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:09.717469+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419889152 unmapped: 55730176 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:10.717618+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419897344 unmapped: 55721984 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:11.717827+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419905536 unmapped: 55713792 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:12.718021+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419905536 unmapped: 55713792 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:13.718196+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419905536 unmapped: 55713792 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:14.718367+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dc9c32c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:15.718614+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404838 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:16.718765+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:17.719014+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc74a780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:18.719168+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4800 session 0x55f9daa22000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:19.719311+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 119.491500854s of 119.652130127s, submitted: 30
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419913728 unmapped: 55705600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:20.719464+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9db870960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408173 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:21.719738+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:22.719981+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x243de1b/0x2665000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:23.720157+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:24.720300+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:25.720516+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408305 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419921920 unmapped: 55697408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:26.720827+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 55689216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:27.720991+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x243de1b/0x2665000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 55689216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:28.721206+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 55689216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:29.721426+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 55689216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.795152664s of 10.765261650s, submitted: 7
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcd11a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:30.721581+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcce9680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x243de1b/0x2665000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:31.721771+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:32.721978+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:33.722223+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:34.722361+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419946496 unmapped: 55672832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:35.722630+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419946496 unmapped: 55672832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:36.722816+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419946496 unmapped: 55672832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:37.723003+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 55664640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:38.723147+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc74a3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 55664640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:39.723284+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 55664640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:40.723488+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419954688 unmapped: 55664640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:41.723649+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:42.723797+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:43.723926+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:44.724108+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc27000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.601638794s of 14.800266266s, submitted: 18
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcc27000 session 0x55f9dc8c1a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:45.724361+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9daa234a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:46.724503+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 55656448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:47.724649+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419971072 unmapped: 55648256 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b5a000/0x0/0x1bfc00000, data 0x243de0b/0x2664000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:48.724820+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9db4f2400 session 0x55f9da029680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419971072 unmapped: 55648256 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:49.724987+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419979264 unmapped: 55640064 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:50.725162+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4408462 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419979264 unmapped: 55640064 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:51.725328+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419979264 unmapped: 55640064 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:52.725520+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2b59000/0x0/0x1bfc00000, data 0x243de6d/0x2665000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 55631872 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:53.725661+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419987456 unmapped: 55631872 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:54.725813+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcc770e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419995648 unmapped: 55623680 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.067562103s of 10.347117424s, submitted: 14
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:55.726030+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4469680 data_alloc: 218103808 data_used: 11251712
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420200448 unmapped: 55418880 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:56.726190+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9deeb4000 session 0x55f9d9f1be00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420200448 unmapped: 55418880 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:57.726343+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dccf5e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2375000/0x0/0x1bfc00000, data 0x2c21e6d/0x2e49000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420200448 unmapped: 55418880 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:58.726527+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420216832 unmapped: 55402496 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:11:59.726712+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd67e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:00.726867+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473158 data_alloc: 218103808 data_used: 11259904
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:01.727056+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420225024 unmapped: 55394304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:02.727217+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:03.727457+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:04.727677+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:05.728167+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473158 data_alloc: 218103808 data_used: 11259904
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:06.728373+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 55386112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:07.728545+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:08.728707+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:09.728961+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:10.729156+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.737280846s of 15.252003670s, submitted: 30
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9db4f2400 session 0x55f9da9b5860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474940 data_alloc: 218103808 data_used: 11259904
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:11.729351+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:12.729585+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:13.729818+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420241408 unmapped: 55377920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:14.729955+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:15.730191+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dc6850e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2370000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474940 data_alloc: 218103808 data_used: 11259904
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:16.730321+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dcc510e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:17.730521+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dc685a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:18.730665+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2370000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:19.730829+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc6fe960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2370000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 55369728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:20.730971+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474940 data_alloc: 218103808 data_used: 11259904
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420265984 unmapped: 55353344 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:21.731138+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:22.731268+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:23.731396+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:24.731538+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2370000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d97f9c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:25.731690+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533820 data_alloc: 218103808 data_used: 19533824
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:26.731898+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:27.732150+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.701858521s of 17.707544327s, submitted: 1
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9dcae9800 session 0x55f9da74fa40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:28.732331+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:29.732536+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2371000/0x0/0x1bfc00000, data 0x2c23b28/0x2e4d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:30.732722+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4532764 data_alloc: 218103808 data_used: 19533824
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421527552 unmapped: 54091776 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:31.732876+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:32.733031+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:33.733249+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dcc77a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:34.733472+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:35.733686+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2372000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9deeb4000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 ms_handle_reset con 0x55f9deeb4000 session 0x55f9dcc470e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4531891 data_alloc: 218103808 data_used: 19533824
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2372000/0x0/0x1bfc00000, data 0x2c23ac6/0x2e4c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421535744 unmapped: 54083584 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:36.733856+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421543936 unmapped: 54075392 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:37.733983+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 422 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcae25a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421543936 unmapped: 54075392 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:38.734206+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.901397705s of 10.784977913s, submitted: 26
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:39.734365+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421552128 unmapped: 54067200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a236f000/0x0/0x1bfc00000, data 0x2c25773/0x2e4f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:40.734492+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421552128 unmapped: 54067200 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 422 ms_handle_reset con 0x55f9db4f2400 session 0x55f9d978c3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420203 data_alloc: 218103808 data_used: 11268096
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:41.734664+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:42.734844+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2ad3000/0x0/0x1bfc00000, data 0x2441711/0x266a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:43.735062+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:44.735295+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:45.735564+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421568512 unmapped: 54050816 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 422 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcc77a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2ad3000/0x0/0x1bfc00000, data 0x2441711/0x266a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4420027 data_alloc: 218103808 data_used: 11268096
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 422 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dc6fe960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:46.735707+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421584896 unmapped: 54034432 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:47.735870+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:48.736184+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc5c5000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9dc5c5000 session 0x55f9dc6850e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:49.736397+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:50.736582+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2b4f000/0x0/0x1bfc00000, data 0x24432b2/0x266e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4425983 data_alloc: 218103808 data_used: 11276288
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:51.736731+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.769523621s of 12.610586166s, submitted: 33
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2b4f000/0x0/0x1bfc00000, data 0x24432b2/0x266e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:52.736859+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:53.736979+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421593088 unmapped: 54026240 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd67e00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:54.737142+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9db4f2400 session 0x55f9da029680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:55.737323+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442164 data_alloc: 218103808 data_used: 11276288
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:56.737461+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x25bd250/0x27e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:57.737646+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:58.737810+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:12:59.738140+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:00.738357+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x25bd250/0x27e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442164 data_alloc: 218103808 data_used: 11276288
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:01.738620+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:02.738759+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:03.738918+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x25bd250/0x27e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:04.739117+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:05.739513+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442164 data_alloc: 218103808 data_used: 11276288
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:06.739655+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420020224 unmapped: 55599104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:07.739851+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x25bd250/0x27e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:08.740045+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:09.740980+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:10.744262+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4442164 data_alloc: 218103808 data_used: 11276288
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:11.744819+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcd11a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9db870960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:12.745303+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420028416 unmapped: 55590912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae7800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9dcae7800 session 0x55f9daa22000
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.496446609s of 21.687971115s, submitted: 18
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:13.745716+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 55681024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29d5000/0x0/0x1bfc00000, data 0x25bd283/0x27e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc9c32c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:14.745849+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db4f2400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:15.746653+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449537 data_alloc: 218103808 data_used: 11407360
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:16.746835+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:17.747122+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:18.747628+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:19.747965+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29ab000/0x0/0x1bfc00000, data 0x25e7283/0x2813000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:20.748282+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449697 data_alloc: 218103808 data_used: 11415552
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:21.748608+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:22.748784+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420126720 unmapped: 55492608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:23.748941+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 55484416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a29ab000/0x0/0x1bfc00000, data 0x25e7283/0x2813000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:24.749124+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 55484416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:25.749387+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 55484416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:26.749535+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449697 data_alloc: 218103808 data_used: 11415552
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 420134912 unmapped: 55484416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.650345802s of 14.049152374s, submitted: 6
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:27.749677+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421191680 unmapped: 54427648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:28.749831+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423395328 unmapped: 52224000 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a289f000/0x0/0x1bfc00000, data 0x26eb283/0x2917000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:29.750011+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422731776 unmapped: 52887552 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a23e9000/0x0/0x1bfc00000, data 0x2ba9283/0x2dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:30.750152+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422731776 unmapped: 52887552 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:31.750259+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512563 data_alloc: 218103808 data_used: 11608064
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:32.750411+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2345000/0x0/0x1bfc00000, data 0x2c4c283/0x2e78000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:33.750587+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:34.750770+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:35.750941+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:36.751131+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512723 data_alloc: 218103808 data_used: 11612160
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422993920 unmapped: 52625408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2345000/0x0/0x1bfc00000, data 0x2c4c283/0x2e78000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:37.751297+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:38.751477+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:39.751697+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:40.751877+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:41.752152+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4510851 data_alloc: 218103808 data_used: 11612160
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423002112 unmapped: 52617216 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:42.752469+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:43.752980+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:44.753759+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:45.753957+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:46.754253+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4510851 data_alloc: 218103808 data_used: 11612160
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:47.754492+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a2321000/0x0/0x1bfc00000, data 0x2c71283/0x2e9d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.587087631s of 20.651416779s, submitted: 74
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:48.754970+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:49.755112+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a231e000/0x0/0x1bfc00000, data 0x2c74283/0x2ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423010304 unmapped: 52609024 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:50.755220+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a231e000/0x0/0x1bfc00000, data 0x2c74283/0x2ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:51.755464+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4510543 data_alloc: 218103808 data_used: 11612160
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:52.755626+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:53.755755+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:54.755963+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423018496 unmapped: 52600832 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:55.756231+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a231e000/0x0/0x1bfc00000, data 0x2c74283/0x2ea0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 423 handle_osd_map epochs [424,424], i have 424, src has [1,424]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423034880 unmapped: 52584448 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:56.756547+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4516181 data_alloc: 218103808 data_used: 11620352
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dcc76780
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:57.756703+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:58.756866+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.948966980s of 10.389080048s, submitted: 6
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:13:59.757070+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:00.757296+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2310000/0x0/0x1bfc00000, data 0x2e35bb5/0x2ead000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:01.757514+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4535043 data_alloc: 218103808 data_used: 11620352
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:02.757713+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230e000/0x0/0x1bfc00000, data 0x3060bb5/0x2eb0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:03.757958+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:04.758280+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230e000/0x0/0x1bfc00000, data 0x3060bb5/0x2eb0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:05.758604+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423092224 unmapped: 52527104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:06.758794+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4553405 data_alloc: 218103808 data_used: 11620352
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 52518912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:07.759023+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 52518912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb6f800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84c400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dc84c400 session 0x55f9dc8c1a40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dbb6f800 session 0x55f9d9f1b680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:08.759179+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 52518912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:09.759317+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423100416 unmapped: 52518912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:10.759446+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060c17/0x2eb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423108608 unmapped: 52510720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:11.759638+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4555587 data_alloc: 218103808 data_used: 11624448
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060c17/0x2eb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423116800 unmapped: 52502528 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060c17/0x2eb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:12.759818+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423116800 unmapped: 52502528 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:13.759964+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423116800 unmapped: 52502528 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:14.760157+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9dc74a5a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 52494336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:15.760420+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9dc8c1680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 52494336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dc9c2960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb6f800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.692956924s of 17.884611130s, submitted: 6
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:16.760706+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4557425 data_alloc: 218103808 data_used: 11624448
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 52494336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060c17/0x2eb1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:17.760954+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dbb6f800 session 0x55f9dccf4960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:18.761165+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84c400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcc2bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:19.761362+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:20.761547+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:21.761726+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559652 data_alloc: 218103808 data_used: 11677696
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:22.761928+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:23.762147+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:24.762293+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:25.762458+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:26.762599+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559652 data_alloc: 218103808 data_used: 11677696
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:27.762759+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:28.762885+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:29.762998+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:30.763150+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423133184 unmapped: 52486144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:31.763291+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.835231781s of 15.268264771s, submitted: 5
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4566379 data_alloc: 218103808 data_used: 13340672
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423124992 unmapped: 52494336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'config diff' '{prefix=config diff}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'config show' '{prefix=config show}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:32.763425+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422379520 unmapped: 53239808 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:33.763594+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:34.763749+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'log dump' '{prefix=log dump}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'perf dump' '{prefix=perf dump}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:35.763906+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'perf schema' '{prefix=perf schema}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 53108736 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:36.764445+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4572331 data_alloc: 218103808 data_used: 14598144
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 53108736 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:37.764588+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422510592 unmapped: 53108736 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:38.764717+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230b000/0x0/0x1bfc00000, data 0x3060c49/0x2eb3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422658048 unmapped: 52961280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:39.764877+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422658048 unmapped: 52961280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:40.765016+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422658048 unmapped: 52961280 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:41.765153+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581655 data_alloc: 218103808 data_used: 14598144
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422666240 unmapped: 52953088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:42.765291+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a229e000/0x0/0x1bfc00000, data 0x30cdc49/0x2f20000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422666240 unmapped: 52953088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:43.765457+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422666240 unmapped: 52953088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.120386124s of 12.494596481s, submitted: 13
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:44.765590+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422666240 unmapped: 52953088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:45.765787+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422666240 unmapped: 52953088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:46.765923+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581875 data_alloc: 218103808 data_used: 14598144
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422666240 unmapped: 52953088 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:47.766115+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a229d000/0x0/0x1bfc00000, data 0x30cec49/0x2f21000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422674432 unmapped: 52944896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:48.766241+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422674432 unmapped: 52944896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:49.766393+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422674432 unmapped: 52944896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:50.766549+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422674432 unmapped: 52944896 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a229c000/0x0/0x1bfc00000, data 0x30cfc49/0x2f22000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:51.766666+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4582111 data_alloc: 218103808 data_used: 14598144
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422264832 unmapped: 53354496 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:52.766826+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422264832 unmapped: 53354496 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:53.767066+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422264832 unmapped: 53354496 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:54.767288+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a229c000/0x0/0x1bfc00000, data 0x30cfc49/0x2f22000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:55.767547+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:56.767709+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579903 data_alloc: 218103808 data_used: 14602240
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:57.767837+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:58.767992+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dc84c400 session 0x55f9dcc47860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.377226830s of 15.020299911s, submitted: 2
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a229c000/0x0/0x1bfc00000, data 0x30cfc49/0x2f22000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:14:59.768144+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dcc2bc00 session 0x55f9dcc770e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:00.768314+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422273024 unmapped: 53346304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:01.768440+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a229c000/0x0/0x1bfc00000, data 0x30cfc49/0x2f22000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579727 data_alloc: 218103808 data_used: 14602240
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422281216 unmapped: 53338112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:02.768570+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422289408 unmapped: 53329920 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:03.768702+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9dcbb10e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422297600 unmapped: 53321728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:04.768838+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a229d000/0x0/0x1bfc00000, data 0x30cfc17/0x2f20000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422297600 unmapped: 53321728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:05.769014+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9da74ef00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb6f800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422297600 unmapped: 53321728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:06.769176+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4577723 data_alloc: 218103808 data_used: 14594048
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422297600 unmapped: 53321728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:07.769307+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422297600 unmapped: 53321728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:08.769431+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dbb6f800 session 0x55f9d9e3cb40
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422297600 unmapped: 53321728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:09.769623+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84c400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422297600 unmapped: 53321728 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 ms_handle_reset con 0x55f9dc84c400 session 0x55f9dc9c3860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:10.769849+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060bb5/0x2eb0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.704545975s of 11.373355865s, submitted: 23
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060bb5/0x2eb0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422305792 unmapped: 53313536 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:11.770002+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a230d000/0x0/0x1bfc00000, data 0x3060bb5/0x2eb0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4555955 data_alloc: 218103808 data_used: 14594048
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422363136 unmapped: 53256192 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 426 ms_handle_reset con 0x55f9db4f2400 session 0x55f9dcca4d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:12.770140+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 426 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcd672c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422363136 unmapped: 53256192 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:13.770286+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422363136 unmapped: 53256192 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:14.770419+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 426 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9dc74b0e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a233a000/0x0/0x1bfc00000, data 0x2c547f0/0x2e84000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422371328 unmapped: 53248000 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:15.770639+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422371328 unmapped: 53248000 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:16.770765+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4545780 data_alloc: 218103808 data_used: 14450688
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 422371328 unmapped: 53248000 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 426 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcc774a0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:17.770876+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 54280192 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:18.771010+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 54272000 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:19.771126+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2b47000/0x0/0x1bfc00000, data 0x24487cd/0x2677000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 426 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421355520 unmapped: 54263808 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:20.771280+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2b43000/0x0/0x1bfc00000, data 0x244a30c/0x267a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421355520 unmapped: 54263808 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2b43000/0x0/0x1bfc00000, data 0x244a30c/0x267a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:21.771453+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb6f800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.461960793s of 11.394262314s, submitted: 50
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4456241 data_alloc: 218103808 data_used: 11300864
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421355520 unmapped: 54263808 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:22.771627+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _renew_subs
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 54255616 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:23.771830+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 54255616 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:24.772127+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 54255616 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2b41000/0x0/0x1bfc00000, data 0x244bf9b/0x267c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:25.772293+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 428 ms_handle_reset con 0x55f9dbb6f800 session 0x55f9dc906d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 54247424 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:26.772455+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458335 data_alloc: 218103808 data_used: 11300864
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 54247424 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:27.772629+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2b42000/0x0/0x1bfc00000, data 0x244bf9b/0x267c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 54247424 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:28.772840+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 54247424 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:29.773009+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b42000/0x0/0x1bfc00000, data 0x244bf9b/0x267c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:30.773204+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:31.773392+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b3e000/0x0/0x1bfc00000, data 0x244dada/0x267f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462509 data_alloc: 218103808 data_used: 11309056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:32.773577+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:33.773769+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b3e000/0x0/0x1bfc00000, data 0x244dada/0x267f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:34.774007+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:35.774303+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:36.774492+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462509 data_alloc: 218103808 data_used: 11309056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 54222848 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc84c400
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:37.774706+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b3e000/0x0/0x1bfc00000, data 0x244dada/0x267f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.994053841s of 15.577157974s, submitted: 28
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 ms_handle_reset con 0x55f9dc84c400 session 0x55f9dcc77c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 54206464 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:38.774927+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9dc6fe3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 54206464 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:39.775147+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 54206464 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:40.775252+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 54206464 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:41.775423+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461629 data_alloc: 218103808 data_used: 11309056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b3f000/0x0/0x1bfc00000, data 0x244dada/0x267f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 54206464 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:42.775556+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b3f000/0x0/0x1bfc00000, data 0x244dada/0x267f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421421056 unmapped: 54198272 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:43.775688+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421421056 unmapped: 54198272 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:44.775826+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421421056 unmapped: 54198272 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:45.775987+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421421056 unmapped: 54198272 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:46.776150+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9dcd0f680
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461629 data_alloc: 218103808 data_used: 11309056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421421056 unmapped: 54198272 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:47.778689+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb6f800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.660315514s of 10.348015785s, submitted: 6
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421666816 unmapped: 53952512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 ms_handle_reset con 0x55f9dbb6f800 session 0x55f9d97f9c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:48.778856+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421666816 unmapped: 53952512 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:49.778985+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421675008 unmapped: 53944320 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:50.779130+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421675008 unmapped: 53944320 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:51.779266+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504699 data_alloc: 218103808 data_used: 11309056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421675008 unmapped: 53944320 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:52.779402+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421675008 unmapped: 53944320 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:53.779564+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 53936128 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:54.779723+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 53936128 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:55.779896+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 53936128 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:56.780024+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504699 data_alloc: 218103808 data_used: 11309056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 53936128 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:57.780432+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 53936128 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:58.780675+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 63K writes, 235K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 63K writes, 23K syncs, 2.71 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2203 writes, 6446 keys, 2203 commit groups, 1.0 writes per commit group, ingest: 5.30 MB, 0.01 MB/s
                                           Interval WAL: 2203 writes, 956 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 53927936 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:15:59.780790+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:00.780996+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 53927936 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:01.781149+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 53927936 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504699 data_alloc: 218103808 data_used: 11309056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:02.781279+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 53927936 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:03.781452+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 53927936 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:04.781616+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 53927936 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:05.781804+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 53927936 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:06.781958+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421691392 unmapped: 53927936 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504699 data_alloc: 218103808 data_used: 11309056
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:07.782158+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421699584 unmapped: 53919744 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:08.782308+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421699584 unmapped: 53919744 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:09.782464+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421699584 unmapped: 53919744 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:10.782601+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 421707776 unmapped: 53911552 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:11.782819+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423297024 unmapped: 52322304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540859 data_alloc: 218103808 data_used: 16396288
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:12.782967+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423297024 unmapped: 52322304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:13.783147+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423297024 unmapped: 52322304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:14.783316+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423297024 unmapped: 52322304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:15.783571+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423297024 unmapped: 52322304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:16.783872+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423297024 unmapped: 52322304 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540859 data_alloc: 218103808 data_used: 16396288
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:17.784126+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423305216 unmapped: 52314112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:18.784390+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423305216 unmapped: 52314112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a264d000/0x0/0x1bfc00000, data 0x293fada/0x2b71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:19.784557+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423305216 unmapped: 52314112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:20.784710+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423305216 unmapped: 52314112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:21.784870+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 423305216 unmapped: 52314112 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.344356537s of 33.812114716s, submitted: 10
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4589143 data_alloc: 218103808 data_used: 16396288
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:22.785054+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 50061312 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:23.785284+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1f97000/0x0/0x1bfc00000, data 0x2ff5ada/0x3227000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425836544 unmapped: 49782784 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:24.785483+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 49668096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:25.785699+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427245568 unmapped: 48373760 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:26.785825+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427253760 unmapped: 48365568 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4633615 data_alloc: 218103808 data_used: 18370560
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:27.786022+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:28.786234+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:29.786373+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:30.786609+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:31.786760+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4633615 data_alloc: 218103808 data_used: 18370560
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:32.786926+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:33.787054+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:34.787287+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:35.787497+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.571746826s of 14.364773750s, submitted: 76
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:36.787696+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4630751 data_alloc: 218103808 data_used: 18399232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcd110e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:37.788176+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 48332800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dd69a800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:38.788342+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 48324608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:39.788610+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 48324608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:40.788810+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 48324608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 ms_handle_reset con 0x55f9dd69a800 session 0x55f9d97bb0e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:41.789006+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 48324608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4630751 data_alloc: 218103808 data_used: 18399232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:42.789250+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 48324608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:43.789398+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 48324608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:44.789580+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 48324608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:45.789766+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427294720 unmapped: 48324608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:46.789977+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427311104 unmapped: 48308224 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4630751 data_alloc: 218103808 data_used: 18399232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:47.790213+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427319296 unmapped: 48300032 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:48.790482+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427319296 unmapped: 48300032 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:49.790831+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427319296 unmapped: 48300032 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:50.791274+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427319296 unmapped: 48300032 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:51.791526+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427319296 unmapped: 48300032 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4630751 data_alloc: 218103808 data_used: 18399232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:52.791741+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427319296 unmapped: 48300032 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:53.791929+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427319296 unmapped: 48300032 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:54.792202+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 48291840 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:55.792508+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 48291840 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:56.792677+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 48291840 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4630751 data_alloc: 218103808 data_used: 18399232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:57.792815+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 48291840 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:58.792953+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 48291840 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:16:59.793211+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 48291840 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:00.793432+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 48291840 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:01.793658+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427327488 unmapped: 48291840 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4630751 data_alloc: 218103808 data_used: 18399232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:02.793811+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 48283648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:03.793997+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 48283648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:04.794244+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 48283648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:05.794565+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 48283648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:06.794733+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 48283648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4630751 data_alloc: 218103808 data_used: 18399232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:07.794930+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 48283648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:08.795199+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 48283648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:09.795440+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427335680 unmapped: 48283648 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:10.795694+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 48275456 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.851081848s of 34.988925934s, submitted: 3
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:11.795843+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 48275456 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4632028 data_alloc: 218103808 data_used: 18399232
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:12.796025+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9d978e960
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 48275456 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9ef2800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:13.796190+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 48275456 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dbb6f800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:14.796356+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427343872 unmapped: 48275456 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:15.796605+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:16.796806+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4632828 data_alloc: 218103808 data_used: 18489344
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:17.797022+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:18.797194+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:19.797339+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:20.797527+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:21.797691+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4632828 data_alloc: 218103808 data_used: 18489344
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:22.797848+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:23.797973+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:24.798153+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:25.798337+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.479707718s of 14.567975998s, submitted: 6
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:26.798450+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4643100 data_alloc: 218103808 data_used: 19345408
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:27.798582+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:28.798688+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1ca6000/0x0/0x1bfc00000, data 0x32e6ada/0x3518000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:29.798845+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:30.798971+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:31.799123+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4645986 data_alloc: 218103808 data_used: 19353600
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:32.799288+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1c9a000/0x0/0x1bfc00000, data 0x32f2ada/0x3524000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:33.799445+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427352064 unmapped: 48267264 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:34.799595+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 48259072 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:35.799842+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 48259072 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:36.800001+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 48259072 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4645986 data_alloc: 218103808 data_used: 19353600
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:37.800292+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 48259072 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:38.800443+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a1c9a000/0x0/0x1bfc00000, data 0x32f2ada/0x3524000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 48259072 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:39.800593+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 48259072 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:40.800664+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427360256 unmapped: 48259072 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.682967186s of 15.914974213s, submitted: 8
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:41.800823+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427368448 unmapped: 48250880 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4657040 data_alloc: 218103808 data_used: 20471808
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:42.800976+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427376640 unmapped: 48242688 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:43.801152+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425926656 unmapped: 49692672 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dcae9800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c95000/0x0/0x1bfc00000, data 0x3363733/0x3528000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 ms_handle_reset con 0x55f9dcae9800 session 0x55f9dcae21e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:44.801330+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425926656 unmapped: 49692672 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:45.801527+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425926656 unmapped: 49692672 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:46.801666+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425926656 unmapped: 49692672 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c94000/0x0/0x1bfc00000, data 0x33637a5/0x352a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:47.801804+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4666496 data_alloc: 218103808 data_used: 20471808
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 49684480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c94000/0x0/0x1bfc00000, data 0x33637a5/0x352a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:48.801945+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 49684480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:49.802135+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 49684480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c94000/0x0/0x1bfc00000, data 0x33637a5/0x352a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:50.802279+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 49684480 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c94000/0x0/0x1bfc00000, data 0x33637a5/0x352a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:51.802426+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.528195381s of 10.012949944s, submitted: 7
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 49668096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:52.802537+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4666055 data_alloc: 218103808 data_used: 20471808
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 49668096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c94000/0x0/0x1bfc00000, data 0x33637a5/0x352a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:53.802678+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 49668096 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:54.802804+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425975808 unmapped: 49643520 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:55.802986+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 427048960 unmapped: 48570368 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:56.803145+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426000384 unmapped: 49618944 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:57.803298+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4665968 data_alloc: 218103808 data_used: 20471808
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426016768 unmapped: 49602560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:58.803434+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426016768 unmapped: 49602560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c94000/0x0/0x1bfc00000, data 0x33637a5/0x352a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:17:59.803568+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426016768 unmapped: 49602560 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c94000/0x0/0x1bfc00000, data 0x33637a5/0x352a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,1,0,0,1,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:00.803689+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426033152 unmapped: 49586176 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:01.803808+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.032189369s of 10.038306236s, submitted: 140
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426041344 unmapped: 49577984 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:02.803933+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4666850 data_alloc: 218103808 data_used: 20471808
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426057728 unmapped: 49561600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:03.804065+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x33637a5/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426057728 unmapped: 49561600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc3afc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:04.804216+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426065920 unmapped: 49553408 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:05.804374+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 49520640 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:06.804482+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x33637a5/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 49487872 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:07.804622+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4666762 data_alloc: 218103808 data_used: 20475904
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:08.804759+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:09.804896+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:10.805025+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x33637a5/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:11.805129+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:12.805315+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4666762 data_alloc: 218103808 data_used: 20475904
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x33637a5/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:13.805476+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x33637a5/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:14.805738+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:15.806166+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x33637a5/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 49463296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x33637a5/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:16.806348+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 49463296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.657768250s of 15.584535599s, submitted: 171
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:17.806777+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4682851 data_alloc: 234881024 data_used: 21594112
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 49373184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:18.806913+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 49373184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:19.807091+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1bdd000/0x0/0x1bfc00000, data 0x34197a5/0x35e1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 49373184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:20.807285+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 49373184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:21.807870+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 49373184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:22.808021+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4683811 data_alloc: 234881024 data_used: 21618688
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 49373184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:23.808190+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426942464 unmapped: 48676864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:24.808400+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426942464 unmapped: 48676864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1b0b000/0x0/0x1bfc00000, data 0x34eb7a5/0x36b3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:25.808601+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426942464 unmapped: 48676864 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 ms_handle_reset con 0x55f9dc3afc00 session 0x55f9dcc510e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:26.808793+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db87bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 ms_handle_reset con 0x55f9db87bc00 session 0x55f9dc6fe3c0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:27.809178+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4715873 data_alloc: 234881024 data_used: 21766144
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:28.809309+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:29.809479+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f7800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 ms_handle_reset con 0x55f9dc2f7800 session 0x55f9dcc77c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.091876030s of 12.676451683s, submitted: 24
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 49471488 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9d97f9c20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1b06000/0x0/0x1bfc00000, data 0x37f67a5/0x36b8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:30.809690+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 49463296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9db87bc00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 ms_handle_reset con 0x55f9db87bc00 session 0x55f9dcc60f00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:31.809830+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 49463296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:32.810024+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4703357 data_alloc: 234881024 data_used: 21762048
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 49463296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:33.810188+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 49463296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 431 ms_handle_reset con 0x55f9d9ef2800 session 0x55f9daa3d860
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x32f73e0/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:34.810376+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 431 ms_handle_reset con 0x55f9dbb6f800 session 0x55f9dcd0f0e0
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 49463296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9dc2f7800
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:35.810584+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 49463296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:36.810723+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 49455104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:37.813160+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4685231 data_alloc: 234881024 data_used: 21762048
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 49455104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 431 ms_handle_reset con 0x55f9dc2f7800 session 0x55f9da73bc20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:38.813330+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x32f73e0/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a1c93000/0x0/0x1bfc00000, data 0x32f73e0/0x352b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 49455104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:39.813486+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 49455104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.863423347s of 10.294848442s, submitted: 44
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:40.813644+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 49455104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:41.813856+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 49455104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: handle_auth_request added challenge on 0x55f9d9d87c00
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:42.813998+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4688589 data_alloc: 234881024 data_used: 21770240
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 49455104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:43.814197+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 49455104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a1c8f000/0x0/0x1bfc00000, data 0x32f8f1f/0x352e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [1,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:44.814385+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425115648 unmapped: 50503680 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:45.814582+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425115648 unmapped: 50503680 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:46.814858+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 ms_handle_reset con 0x55f9d9d87c00 session 0x55f9db7c0d20
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425132032 unmapped: 50487296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:47.815061+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425132032 unmapped: 50487296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:48.815298+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425132032 unmapped: 50487296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:49.815452+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425132032 unmapped: 50487296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:50.815606+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425132032 unmapped: 50487296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:51.815768+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425132032 unmapped: 50487296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:52.815943+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425132032 unmapped: 50487296 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:53.816141+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425140224 unmapped: 50479104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:54.816338+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425140224 unmapped: 50479104 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:55.816568+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425148416 unmapped: 50470912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:56.816745+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425148416 unmapped: 50470912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:57.816885+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425148416 unmapped: 50470912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:58.817070+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425148416 unmapped: 50470912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:18:59.817361+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425148416 unmapped: 50470912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:00.817580+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425148416 unmapped: 50470912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:01.817780+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425148416 unmapped: 50470912 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:02.817939+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425156608 unmapped: 50462720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:03.818160+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425156608 unmapped: 50462720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:04.818343+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425156608 unmapped: 50462720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:05.818508+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425156608 unmapped: 50462720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:06.818660+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425156608 unmapped: 50462720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:07.818820+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425156608 unmapped: 50462720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:08.818983+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425156608 unmapped: 50462720 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:09.819173+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425164800 unmapped: 50454528 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:10.819372+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425164800 unmapped: 50454528 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:11.819524+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425172992 unmapped: 50446336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:12.819732+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425172992 unmapped: 50446336 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:13.819946+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 50438144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:14.820175+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 50438144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:15.820389+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 50438144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:16.820592+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 50438144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:17.820750+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 50438144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:18.820936+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 50438144 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:19.821132+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 50429952 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:20.821288+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 50429952 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:21.821445+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 50429952 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:22.821605+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 50429952 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:23.821851+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 50429952 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:24.822045+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 50429952 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:25.822318+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 50429952 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:26.822456+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 50421760 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:27.822607+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 50421760 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:28.822763+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 50421760 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:29.822929+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 50421760 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:30.823160+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 50421760 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:31.823313+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 50421760 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:32.823521+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 50421760 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:33.823678+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425205760 unmapped: 50413568 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:34.823867+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425205760 unmapped: 50413568 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:35.824143+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425205760 unmapped: 50413568 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:36.824388+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425205760 unmapped: 50413568 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:37.824702+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425213952 unmapped: 50405376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:38.824861+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425213952 unmapped: 50405376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:39.824995+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425213952 unmapped: 50405376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:40.825198+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425213952 unmapped: 50405376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:41.825406+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425213952 unmapped: 50405376 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:42.825565+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 50397184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:43.825742+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 50397184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:44.825896+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 50397184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:45.826141+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 50397184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:46.826312+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 50397184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:47.826461+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 50397184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:48.826621+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 50397184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:49.826801+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 50397184 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:50.827208+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 50388992 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:51.827430+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 50388992 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:52.827599+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 50388992 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:53.827745+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 50388992 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:54.827903+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 50388992 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:55.828145+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 50388992 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:56.828275+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 50388992 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:57.828410+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425238528 unmapped: 50380800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:58.828596+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425238528 unmapped: 50380800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:19:59.828731+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425238528 unmapped: 50380800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:00.828877+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425238528 unmapped: 50380800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:01.829021+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425238528 unmapped: 50380800 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:02.829204+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 50372608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:03.829355+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 50372608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:04.829565+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 50372608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:05.829743+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 50372608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:06.829890+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 50372608 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:07.830154+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425254912 unmapped: 50364416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:08.830361+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425254912 unmapped: 50364416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:09.830516+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425254912 unmapped: 50364416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:10.830661+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425254912 unmapped: 50364416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:11.830820+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425254912 unmapped: 50364416 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:12.830997+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 09:20:47 compute-0 ceph-osd[84816]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 09:20:47 compute-0 ceph-osd[84816]: bluestore.MempoolThread(0x55f9d8401b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486529 data_alloc: 218103808 data_used: 11333632
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425263104 unmapped: 50356224 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:13.831174+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425263104 unmapped: 50356224 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:14.831301+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'config diff' '{prefix=config diff}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'config show' '{prefix=config show}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425033728 unmapped: 50585600 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2b36000/0x0/0x1bfc00000, data 0x2452f1f/0x2688000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1aa3f9c7), peers [1,2] op hist [])
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:15.831479+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: prioritycache tune_memory target: 4294967296 mapped: 425025536 unmapped: 50593792 heap: 475619328 old mem: 2845415832 new mem: 2845415832
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: tick
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_tickets
Jan 31 09:20:47 compute-0 ceph-osd[84816]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T09:20:16.831619+0000)
Jan 31 09:20:47 compute-0 ceph-osd[84816]: do_command 'log dump' '{prefix=log dump}'
Jan 31 09:20:47 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 09:20:47 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:47 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:47 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:47.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53339 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48034 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39276 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.47959 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: pgmap v4486: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.53255 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.39231 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.53267 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.39246 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1624731936' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2700614294' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/4221179603' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2047880028' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.53291 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.39252 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.48001 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/730144916' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2480106286' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1707424259' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.53315 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.39267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1045953086' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1469538700' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4487: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48046 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39285 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 nova_compute[247704]: 2026-01-31 09:20:48.561 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53354 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48064 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 09:20:48 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3899040109' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:48 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39306 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53372 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 nova_compute[247704]: 2026-01-31 09:20:49.117 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:49 compute-0 crontab[437918]: (root) LIST (root)
Jan 31 09:20:49 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48076 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 09:20:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3610126339' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39321 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53387 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48091 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:49 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53399 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48100 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.53339 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3621906018' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1791372070' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.48034 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.39276 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/418996094' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: pgmap v4487: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.48046 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.39285 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.53354 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1113723371' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.48064 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3899040109' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2293239702' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1725596042' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39342 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:49 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 09:20:49 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779901335' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:20:49 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:49 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:49 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:49.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 09:20:50 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48112 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4488: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:50 compute-0 nova_compute[247704]: 2026-01-31 09:20:50.438 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:50 compute-0 nova_compute[247704]: 2026-01-31 09:20:50.538 247708 DEBUG oslo_concurrency.processutils [None req-bd2c07e4-843a-4507-9f7a-8063f84b952d 7236ad21d7ab4000b7ab6db9df93bca9 f1803bf3df964a3f90dda65daa6f9a53 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48127 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:50 compute-0 nova_compute[247704]: 2026-01-31 09:20:50.583 247708 DEBUG oslo_concurrency.processutils [None req-bd2c07e4-843a-4507-9f7a-8063f84b952d 7236ad21d7ab4000b7ab6db9df93bca9 f1803bf3df964a3f90dda65daa6f9a53 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53420 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:50 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:50.713+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39369 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:50 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:50.883+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:50 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48139 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 09:20:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2421450604' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.39306 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.53372 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.48076 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3610126339' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.39321 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.53387 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3710960062' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.48091 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3061686296' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.53399 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.48100 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.39342 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2779901335' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1036068566' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3224229522' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2362632188' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 09:20:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824071166' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-0 nova_compute[247704]: 2026-01-31 09:20:51.556 247708 DEBUG oslo_service.periodic_task [None req-7a4c4282-4b52-4900-be7b-2c1fa0adac81 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 09:20:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:51 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48169 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-f70fcd2a-dcb4-5f89-a4ba-79a09959083b-mgr-compute-0-hhuoua[74787]: 2026-01-31T09:20:51.682+0000 7fb604d35640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:51 compute-0 ceph-mgr[74791]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 09:20:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 09:20:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/872526189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:20:51 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 09:20:51 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672146262' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:20:51 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 09:20:51 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:51 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:51 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:51.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:52 compute-0 systemd[1]: Started Hostname Service.
Jan 31 09:20:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 09:20:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258845554' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 09:20:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/105534902' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.48112 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: pgmap v4488: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.48127 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.53420 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.39369 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.48139 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/798322276' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2421450604' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2202396126' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/486569988' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1824071166' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1377901725' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1184234516' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2731074068' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.48169 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/872526189' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/159826231' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3672146262' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/440537423' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4221942119' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1990826684' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4489: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:52 compute-0 sudo[438374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 09:20:52 compute-0 sudo[438374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1134706696' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:20:52 compute-0 sudo[438374]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:52 compute-0 sudo[438401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:52 compute-0 sudo[438401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:52 compute-0 sudo[438401]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 09:20:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216596902' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:20:52 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 09:20:52 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/993795347' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 09:20:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198751604' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:53.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 09:20:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840181540' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 09:20:53 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280397369' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1258845554' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/105534902' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3753261357' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: pgmap v4489: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/423536003' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3247337375' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1134706696' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1182153588' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2404465682' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2216596902' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/377591572' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/725191915' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/993795347' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1212520424' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1381767948' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/198751604' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:20:53 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/329836358' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:20:53 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:53 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:53 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:53.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:54 compute-0 nova_compute[247704]: 2026-01-31 09:20:54.120 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:54 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53585 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 09:20:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268494990' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:20:54 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4490: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 09:20:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1184909786' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:20:54 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53603 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:54 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53609 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:54 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39504 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:54 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 09:20:54 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372215000' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:20:54 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53621 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:54 compute-0 podman[438802]: 2026-01-31 09:20:54.899933068 +0000 UTC m=+0.062039666 container health_status 4f21208ce36b9cf814979a8ea23334150fc70d101a3b284bd29d2af80b4ea3b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '93c1edad6c3ce19ccbf4cad1c823140b960799b036165432d2a9b50972fa7d6a-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-1ebd21696fff3d9ce9d1c627d87eb768e7a7895873c4ad726f2d4c0751d2120c-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 09:20:54 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53615 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:54 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39519 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53630 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48319 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:20:55 compute-0 nova_compute[247704]: 2026-01-31 09:20:55.440 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48328 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:20:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:55.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3369620064' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1903579792' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3884220018' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1767818276' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2512753890' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1555477891' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/840181540' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3280397369' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1587363947' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3092234834' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4253000560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.10:0/4253000560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3472749174' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1268494990' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1184909786' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1290483748' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2293008804' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1372215000' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1046202068' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39540 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39543 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48334 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53660 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39549 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:55 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:55 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:55 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:55.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39555 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48364 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53675 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4491: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39564 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48376 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53696 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.53585 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: pgmap v4490: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.53603 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.53609 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.39504 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.53621 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3857913238' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.53615 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.39519 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.48313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.53630 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.48319 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.48328 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.39540 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.39543 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.48334 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.53660 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.48346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/2130671381' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1562061555' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3066766351' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39579 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48388 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:56 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 09:20:56 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2939106005' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39591 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:57 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48394 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454513562' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:20:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:57.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.39549 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.39555 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.48364 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.53675 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: pgmap v4491: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.39564 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.48376 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.53696 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/1463004898' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.39579 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.48388 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3781054631' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/2939106005' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/834676227' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3772205554' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3454513562' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:57 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/772713638' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:57 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 09:20:57 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1734205460' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:20:57 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:57 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:57 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:57.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:20:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:20:58.079 160028 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=119, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '52:b2:f5', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'a6:2b:58:cd:91:59'}, ipsec=False) old=SB_Global(nb_cfg=118) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 09:20:58 compute-0 ovn_metadata_agent[160021]: 2026-01-31 09:20:58.080 160028 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 09:20:58 compute-0 nova_compute[247704]: 2026-01-31 09:20:58.082 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:58 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4492: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:20:58 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 09:20:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661100414' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.53795 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='client.39591 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='client.48394 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/1734205460' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/661100414' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/472564801' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:20:58 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:20:59 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48511 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:20:59 compute-0 nova_compute[247704]: 2026-01-31 09:20:59.122 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:20:59 compute-0 sudo[439358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:59 compute-0 sudo[439358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:59 compute-0 sudo[439358]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:59 compute-0 sudo[439411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:20:59 compute-0 sudo[439411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:59 compute-0 sudo[439411]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 09:20:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:59.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 09:20:59 compute-0 sudo[439445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:20:59 compute-0 sudo[439445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:59 compute-0 sudo[439445]: pam_unix(sudo:session): session closed for user root
Jan 31 09:20:59 compute-0 sudo[439475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Jan 31 09:20:59 compute-0 sudo[439475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:20:59 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:20:59 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:20:59 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:59.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:21:00 compute-0 ceph-mon[74496]: pgmap v4492: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='client.53795 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/184316149' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/3081234303' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3609422971' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/4252084512' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/608546425' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 09:21:00 compute-0 sudo[439475]: pam_unix(sudo:session): session closed for user root
Jan 31 09:21:00 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.39708 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:21:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 09:21:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 09:21:00 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4493: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:21:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 09:21:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:21:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev ecf0c84f-6880-4297-b7e8-6a93a317aed1 does not exist
Jan 31 09:21:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev a15d6d42-2a01-4f27-b8bc-e914fb5cf16a does not exist
Jan 31 09:21:00 compute-0 ceph-mgr[74791]: [progress WARNING root] complete: ev 655d3dca-7e90-4262-b8a5-a7e117a6e3b9 does not exist
Jan 31 09:21:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 09:21:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 09:21:00 compute-0 ceph-mon[74496]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:21:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 09:21:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:21:00 compute-0 nova_compute[247704]: 2026-01-31 09:21:00.443 247708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 09:21:00 compute-0 sudo[439640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:21:00 compute-0 sudo[439640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:21:00 compute-0 sudo[439640]: pam_unix(sudo:session): session closed for user root
Jan 31 09:21:00 compute-0 sudo[439665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 09:21:00 compute-0 sudo[439665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:21:00 compute-0 sudo[439665]: pam_unix(sudo:session): session closed for user root
Jan 31 09:21:00 compute-0 sudo[439693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 31 09:21:00 compute-0 sudo[439693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:21:00 compute-0 sudo[439693]: pam_unix(sudo:session): session closed for user root
Jan 31 09:21:00 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 31 09:21:00 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396172421' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:21:00 compute-0 sudo[439718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/f70fcd2a-dcb4-5f89-a4ba-79a09959083b/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid f70fcd2a-dcb4-5f89-a4ba-79a09959083b --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 31 09:21:00 compute-0 sudo[439718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 09:21:01 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48556 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:01 compute-0 podman[439821]: 2026-01-31 09:21:01.011418829 +0000 UTC m=+0.021669053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 09:21:01 compute-0 ceph-mgr[74791]: log_channel(audit) log [DBG] : from='client.48562 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 31 09:21:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732117868' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:21:01 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:21:01 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 09:21:01 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:01.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 09:21:01 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 31 09:21:01 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193787100' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:21:02 compute-0 radosgw[94239]: ====== starting new request req=0x7fdbf7a9a6f0 =====
Jan 31 09:21:02 compute-0 radosgw[94239]: ====== req done req=0x7fdbf7a9a6f0 op status=0 http_status=200 latency=0.001000023s ======
Jan 31 09:21:02 compute-0 radosgw[94239]: beast: 0x7fdbf7a9a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='client.48511 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/2918085594' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='client.39708 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/185072006' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: pgmap v4493: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' 
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='mgr.14132 192.168.122.100:0/828660362' entity='mgr.compute-0.hhuoua' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/1992965450' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.100:0/3396172421' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.101:0/201906423' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mon[74496]: from='client.? 192.168.122.102:0/3890139264' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:21:02 compute-0 ceph-mgr[74791]: log_channel(cluster) log [DBG] : pgmap v4494: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 09:21:02 compute-0 podman[439821]: 2026-01-31 09:21:02.543447412 +0000 UTC m=+1.553697606 container create 40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 09:21:02 compute-0 systemd[1]: Started libpod-conmon-40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28.scope.
Jan 31 09:21:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 09:21:02 compute-0 podman[439821]: 2026-01-31 09:21:02.640361778 +0000 UTC m=+1.650611982 container init 40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williams, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:21:02 compute-0 podman[439821]: 2026-01-31 09:21:02.648552445 +0000 UTC m=+1.658802639 container start 40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williams, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 09:21:02 compute-0 systemd[1]: libpod-40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28.scope: Deactivated successfully.
Jan 31 09:21:02 compute-0 ecstatic_williams[440078]: 167 167
Jan 31 09:21:02 compute-0 conmon[440078]: conmon 40dae6451478d3225128 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28.scope/container/memory.events
Jan 31 09:21:02 compute-0 podman[439821]: 2026-01-31 09:21:02.663651259 +0000 UTC m=+1.673901453 container attach 40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williams, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 09:21:02 compute-0 podman[439821]: 2026-01-31 09:21:02.664996971 +0000 UTC m=+1.675247185 container died 40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williams, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 09:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ee9c672279cd56d7691d859570b7700b515958e83510edc077f3fcdc2cce05-merged.mount: Deactivated successfully.
Jan 31 09:21:02 compute-0 podman[439821]: 2026-01-31 09:21:02.822834376 +0000 UTC m=+1.833084570 container remove 40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_williams, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 09:21:02 compute-0 ceph-mon[74496]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 31 09:21:02 compute-0 ceph-mon[74496]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2583984142' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 09:21:02 compute-0 systemd[1]: libpod-conmon-40dae6451478d3225128462effea6682b14d44871efc40b0ed887680c90fae28.scope: Deactivated successfully.
